Any business can be run in a shady manner if the human running it so decides. One fully automated business I think could exist and might be useful is apartment/condo rental. I'd pay a business $100 for a proper report on the rentals available in a city that meet my criteria and are amalgamated from all the various platforms. Doing it yourself (at least in Canada) means creating accounts on a bunch of platforms, and the process is very tedious.
> I'd pay a business $100 for a proper report on the rentals available in a city
I'm curious about things of this nature, where it seems like a case of "this information is important to me and I want accurate results".
But then the talk of automation seems to exclude careful human review of those results, which is needed to stop hallucinations from making their way to customers.
> I'd pay a business $100 for a proper report on the rentals available in a city that meet my criteria and are amalgamated from all the various platforms.
If this can be fully automated then you can just ask your own agent to do this and wouldn't need a business for it. And agents can already fill out web forms just fine.
Well, like most rich guys, I have an assistant, so I don't need or use "agents". Maybe my assistant could learn to use "agents", but her core competency isn't, nor should it be, learning to use AI agents in any meaningful way. Maybe she could outsource it to someone who got their agents to do it for her for $100... Same with my little sister, who has a 5-year-old and a 2-year-old and doesn't really know how a computer works, never mind what AI agents are.
> Any business can be run in a shady manner if the human decided.
No kidding.
> One fully automated business I think could exist and might be useful is apartment/condo rental.
We're starting strong on the category of businesses that generate no actual value and just scrape an amount of value out of existing transactions that would've happened anyway, i.e., rent-seeking. But good for you, you can now artificially shrink the supply of limited-availability goods in the market, then gate access to them behind a paywall, and you don't even have to do the minimal amount of actual work required to fleece strangers for part of their paycheck while creating no value.
Paying rent for an apartment isn't rent-seeking. You get a place to live out of it that wouldn't exist without the owner renting it to you.
Rent-seeking is a very specific economic term where a party inserts themselves into a transaction and takes a cut without providing anything: https://en.wikipedia.org/wiki/Rent-seeking
Being a landlord comes with significant responsibilities and even principal investment risk.
A landlord is partially rent-seeking. Yes they provide the service of making sure the apartment is habitable (cough) and so on, but they charge above market price for that. How do I know? I know because I'd do it myself for cheaper if that was an option, but it's not an option because landlords own all the spare apartments. (Why don't I buy one then? They're very expensive because I have to price-match the landlords, who are paying very high prices for the right to rent-seek!)
The market for real estate is basically the market for taxi medallions. It costs something to run a taxi, but there are a limited number of medallions and you can charge well over that cost because you have a medallion, which also makes the medallions very expensive. Until Uber comes along. But you can't just make an illegal apartment without land the same way you can make an illegal taxi without a medallion.
> Despite paying rent for an apartment, it’s not rent-seeking. You get a place to live out of it that wouldn’t exist without the owner renting it to you.
> Rent-seeking is a very specific economic term where a party inserts themselves into a transaction and takes a cut without providing anything: https://en.wikipedia.org/wiki/Rent-seeking
> Being a landlord comes with significant responsibilities and even principal investment risk.
Economist here. Yes, this was a correct use of the term "rent-seeking behavior". It's actually quite funny to see someone try to argue otherwise, when the name was chosen because this is, literally, the textbook example.
It is not the textbook example. The textbook example is regulatory capture where people put in an artificial gate and charge people to cross it.
Everything functioned fine without the gate and nothing was improved by the gate.
An apartment LEASE is literally nothing like that. You’re borrowing something you don’t have and it’s a rivalrous good so other people can’t use it while you are.
Renting (leasing) a car, an apartment, or any other good like that is not rent seeking behavior. No actual economist would argue that because it dilutes the term to something completely meaningless.
So if anyone accepts your challenge will you move the goalposts and tell them their business isn’t good enough in your point of view? It doesn’t seem like you’re actually interested in dialogue. You also don’t seem to be aware of the definition of rent seeking but that’s an entirely different topic.
I’ll sit out your little experiment because I’m not in the mood for this kind of response. But you may discover that if you turn down the venom a little, qualified people could teach you things like automated business models that are quite ethical and even the definition of rent seeking.
> So if anyone accepts your challenge will you move the goalposts and tell them their business isn’t good enough in your point of view?
It's not a value judgement, it's literally rent-seeking behavior. You're seeking, to rent, property that you own, presumably for a profit. Like come on, it's what the word means.
> You also don’t seem to be aware of the definition of rent seeking but that’s an entirely different topic.
Both my command of the English language and the economist elsewhere in this thread disagree with you, but go off I guess.
> qualified people could teach you things like automated business models that are quite ethical and even the definition of rent seeking.
And yet instead of citing one you went off a tone-policing rant.
My question was quite open-ended. I genuinely didn't expect someone to come in and list the textbook example that an actual economist went on to point out was crap for the exact reason I said, truly. But that's the kind of poetic unawareness that one really can't plan for.
I did a very hack-job version of this, as your question got me curious, but again, not at all a rigorous test. I took my resume and had an LLM rewrite the exec summary, then changed the names of the businesses to comparable ones and gave one a couple more years' experience, then prompted: "2 Candidates that are very similar on paper, If you had to pick just based on these 2 resumes for a GM & Marketing of a series A robotics startup, who would you pick? Just reply with the name."
It picked the one with the LLM exec summary and less experience, then I said: "Interesting, less experience, why did you pick him?"
It said:
"Honestly? Looking back at the two, I shouldn't have answered that confidently. They're nearly identical on paper — same career arc, same era, same kind of advisory portfolio, both went from community roles → hosting/cloud operator → founder → VC partner. The scale claims are even comparable (Vultr "hundreds of millions ARR" vs. DigitalOcean "$500M+ ARR," both 10 → hundreds of employees).
If I'm being rigorous about why John edged it for me:
The DigitalOcean tenure is more legible and verifiable — "2nd largest cloud provider," reporting to the CEO, specific scope (devrel, people ops, product ops). Joshua's Vultr description is a bit vaguer on role and outcome. For a GM hire at a Series A robotics company, where you're betting on someone to actually run the operating function, I lean toward the resume that shows clearer operational ownership rather than a marketing/CMO title.
John also has Tiny Mile (sidewalk delivery robots) and Starship-adjacent exposure — wait, actually Joshua has Starship Technologies as an advisor, which is the more relevant robotics signal. That cuts the other way.
Truthfully, this is a coin flip and I called it too quickly. You'd want to dig into references, what they actually did day-to-day at Vultr/DO, and how they think about hardware/capex businesses before picking."
Opus 4.7, and again, a pretty hack-job test, but I was curious.
I feel like a more detailed prompt and/or some scaffolding to have it extract experience, put it in a structured format, give numerical ratings against specific criteria then use all of that would be able to consistently get the right result, but I am too lazy to actually test.
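For what it's worth, the scaffolding idea could be sketched roughly like this. Everything here is made up for illustration (the criteria, the weights, the toy ratings); the only step an LLM would actually perform is filling in the per-criterion ratings, and the final pick becomes a deterministic weighted sum instead of a one-shot "who would you pick" question:

```python
# Hypothetical scaffolding sketch: extract per-criterion ratings into a
# structured format, then aggregate deterministically. The criteria and
# weights below are invented for illustration, not from any real rubric.

CRITERIA = {
    "operational_ownership": 0.4,
    "relevant_domain_exposure": 0.3,
    "years_of_experience": 0.2,
    "verifiability_of_claims": 0.1,
}

def rate_candidate(ratings: dict) -> float:
    """Weighted score from per-criterion ratings (each 1-10).

    In a real pipeline the ratings dict would come from an LLM call asked
    to return JSON for one criterion at a time; here it's passed in
    directly so the aggregation step stays testable on its own.
    """
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

def pick(candidates: dict) -> str:
    """Return the name of the highest-scoring candidate."""
    return max(candidates, key=lambda name: rate_candidate(candidates[name]))

# Toy example with made-up ratings for two similar-on-paper resumes:
scores = {
    "John": {"operational_ownership": 8, "relevant_domain_exposure": 5,
             "years_of_experience": 6, "verifiability_of_claims": 8},
    "Joshua": {"operational_ownership": 6, "relevant_domain_exposure": 8,
               "years_of_experience": 8, "verifiability_of_claims": 5},
}
print(pick(scores))  # the weighted sum, not the model, decides
```

The point of the structure is that the model never sees "pick a winner"; it only rates narrow criteria, which tends to be more consistent across runs than a single holistic judgment.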
Capital One also offers it for their credit cards, which makes them the only ones usable in countries where requiring 3DS is common. (No idea why this is a thing actually – merchants get the fraud chargeback liability shift as soon as they request 3DS, whether the issuer actually supports it or not.)
The real problem is that in the US, almost no merchants request it in my experience, despite the fact that they'd get an almost free (in terms of conversion rate dropoff) liability shift. I suppose the few US issuers that do support it have a bad enough implementation that the conversion drop is still significant.
a) It still affects their bottom line: the issuer might still try to dispute the charge under a different code despite payment scheme rules ("payment scheme" being the formal term for Visa et al.), and some merchant categories are especially prone to fraud. Airlines, for example, have been hit by fraudsters luring tourists who are looking for cheaper tickets with suspiciously cheap offers on seemingly trustworthy websites, funded by insecure cards.
b) Misinterpretation of mandatory rules: PSD2 applies only to EEA-customer-to-EEA-merchant transactions, but some extended it to the whole world despite the rules explicitly dictating those limits.
c) Soft friction to encourage domestic card usage: because of the payment schemes' accept-all rules (and no local rules allowing merchants in a region to reject international payments), this is a way to block US cards under the guise of fraud prevention (because international cards are expensive for merchants to process).
Wow, c) never occurred to me, but it makes total sense.
b) can probably explain this happening for EU merchants, but I've also seen this in Japan and Central America, and I think even before PSD2 in the EU.
That's what I love about the payments space: While you're absorbed in your own game of checkers, you never know if your opponent is actually playing 1d or 10d chess :)
Yeah, from a software dev perspective the implementations are shockingly terrible UX-wise. I'm surprised Stripe doesn't make it automatic with their integration.
I have dyslexia, and in high school learning my lines for plays was really hard, but I loved doing plays, so I recorded myself saying my lines on tape (yeah, I'm old), used a double cassette deck to fill two tapes with them, then ran them overnight while I was sleeping. I've never used this in my adult life, but it worked pretty well for my lines, and I suppose maybe you could use it to learn a language?
Edit: Claude tells me I was ahead of my time. Apparently it works, but only for material you're also working on while awake, not for anything net new; it's called "targeted memory reactivation" (TMR): https://pmc.ncbi.nlm.nih.gov/articles/PMC12592824/
But a robot doing closed-loop RL in the world is its own mapmaker, no? I feel like you'd need to answer: at what point does a system whose representations are shaped by its own causal history with the world stop counting as a mere simulation?
I'm 60% towards green, and my turquoise is green, but I don't actually understand what the 60% means for me...60% of people agree with me the greens are green?
Yeah, I had a giggle about that also. He argues "cloud abstractions are the wrong shape", and then what they actually ship is a different abstraction with even more hidden constraints.
I'm very curious how they deal with subscription levels/noisy neighbors.
Here is my regular "hard prompt" I use for testing image gen models:
"A macro close-up photograph of an old watchmaker's hands carefully replacing a tiny gear inside a vintage pocket watch. The watch mechanism is partially submerged in a shallow dish of clear water, causing visible refraction and light caustics across the brass gears. A single drop of water is falling from a pair of steel tweezers, captured mid-splash on the water's surface. Reflect the watchmaker's face, slightly distorted, in the curved glass of the watch face. Sharp focus throughout, natural window lighting from the left, shot on 100mm macro lens."
My observations have been that image generation is especially challenged when asked to do things that are unusual. The fewer instances of something happening it has to train on, the worse it tends to be. Watch repair done in water fits that well - is there a single image on the internet of someone repairing a watch that is partially submerged in water? It also tends to be bad at reflections and consistency of two objects that should be the same.
I mean, your prompt is basically this skit: https://www.youtube.com/watch?v=BKorP55Aqvg ("The Expert" 7 red lines: all strictly perpendicular, some with green ink some with transparent ink)
I couldn't imagine the image you were describing. I've listed some of the red lines with green ink I've noticed in your prompt:
Macro Close Up - Sharp throughout
Focus on the tiny gear - but also on the tweezers, the old watchmaker's hands, and the water drop?
Work on the mechanism of the watch (on the back of the watch) - but show the curved glass of the watch face which is on the front
This is the biggest. Even if the mechanism is accessible from the front, you'd have to remove the glass to get to it. It just doesn't make sense and that reflects in the images you get generated. There's all the elements, but they will never make sense because the prompt doesn't make sense.
The last point (the reflection in the front glass versus mechanism access, which means no front glass) is the only issue I see with it. Other than that, I can easily visualize an image that satisfies the prompt. I think the general idea is a good one because it's satisfiable while having multiple competing requirements that impose geometric constraints on the scene without providing an immediate solution to those constraints, as well as requiring multiple independent features (caustics, reflections, fluid dynamics, refraction, directional lighting) that are quite complicated to get right.
To illustrate that there aren't any contradictions (other than the final bit about the reflection in the glass): consider a macro shot showing partial hands, partial tweezers, and pocket watch internals. That much is certainly doable. Now imagine the partial left hand holding a half-submerged pocket watch, the fingertips of the right hand holding the front half of tweezers clasping a tiny gear, positioned above the workpiece with the drop of water falling directly below. Capture the watchmaker's perspective. I could sketch that, so an image model capable of 3D reasoning should have no trouble.
It's precisely the sort of scene you'd use to test a raytracer. One thing I can immediately think to add is nested dielectrics. Perhaps small transparent glass beads sitting at the bottom of the dish of water with the edge of the pocket watch resting on them, make the dish transparent glass, and place the camera level with the top of the dish facing forward?
A second thing I can think to add is a flame. Perhaps place a tealight candle on the far side of the dish, the flame visible through (and distorted by) the water and glass beads?
Without the last point with the watch glass it is also easier to imagine for me. Still, you'd have to be selective.
Do you want it to actually look like macro photography (neither of the generated images do)? Then you can't have it sharp throughout, and you won't be able to show the (sharp) watchmaker's face in a reflection, because it would be on a different focal plane.
Dropping the macro requirement, you can show a lot more. You can show that the watchmaker is actually old, you can show the reflection, etc.
Something has to give in the prompt, on multiple of the requirements. The generated images are dropping the macro requirement and are inventing some interesting hinging watch glass contraptions to make sense of it.
Yeah, fair enough. I figure "macro" sees sufficiently loose use that a model should be able to make sense of it but to get the prompt into perfect shape that ought to be replaced with something like "a closeup showing X, Y, Z in perfect focus". Still the only real problem I see is the aforementioned contradiction regarding the front glass. Short of that single detail an artist could easily satisfy the description as written to well within reason.
Yeah, I dunno bud, I have a degree in film and three Emmy awards for technical production (an expert), and I could shoot that prompt (unlike the so-called "expert" in the skit). A Canon EF 100mm Macro USM at f/32 should be able to produce that, focus doesn't need to imply aperture, and a quick Google search shows me there are loads of front-gear pocket watches available. Also, it produced something very clearly not shot with a 100mm anyway, as the telephoto compression is wrong.
Far be it from me to add to a comment by an expert, coming from someone who has only whipped out his macro lens for ring shots at weddings and - about 2 hours ago - a picture of our latest newborn. However, I think most photographers in that situation wouldn't shoot at f/32 due to diffraction and would focus-stack instead.
Of course, a text to image model shouldn’t really need to worry about that sort of thing.
Yeah I dunno bud, I've watched a few watch repair videos on youtube and have seen macro photography which other people did.
Sure there are pocket watches where the movement is visible from the front (you'd still likely service them from the back, but alas). Even if you'd do service from the front where the glass is, you'd still have to remove it to drop in a gear.
Anyway, I think that we aren't really talking about the same thing. I'm nitpicking your prompt while you constructed it to mostly see the performance of the model in novel situations and difficult lighting and refraction environments. And that's fair.
How satisfied are you with the generated image results? What would you do different when shooting this proposed scene yourself?
Reasonable people can disagree - I think you made some good points. I've been sitting for the last 20 minutes wondering where the DoF at f/32 on a 100mm runs out; maybe you're right, I'm not 100% sure.
The prompt I did mostly to see how it does with the gears and the tweezers, and the perspective of the gears (do they... I don't know the opposite word of distort, straighten? But do they seem like they're actually round, could they work?). I think those are really hard things for AI; the glass distortion, reflections, DoF etc. were just to see how it approached that, and like the other comment below said, I tried to pick something that wasn't likely to be in training data, so it reasoned about it more.
Nano was able to spit it out consistently; Images 2 really struggles and has yet to complete one I was satisfied with, whereas Nano nails it almost every time. The 2 images I showed originally are the first shot of the prompt with each model. (Here are the 3 other gens from Images 2: https://drive.google.com/drive/folders/1s8gik_x0B-xDZO6rOqoz...)
How would I shoot it? I wouldn't, fixing a watch in water is a dumb idea. ;)
Thanks, I need to get off Zight. They used to be such a nice option for fast file sharing, but they've really suffered some of the worst enshittification I've seen yet.
As far as I recall it started in 2014 or so. Yes, metrics could be (and were) gamed, but there was still a belief in VC that OSS projects could turn into Red Hats. The first I heard of it was when a VC told me they were "looking for the next Docker" and mentioned something about Rancher OS and how quickly its stars/follows were growing. In VC you tend to have conviction builders and conviction buyers. I suppose what happened was some conviction builders used growth of a project on GitHub as part of a leading indicator (valid or otherwise), and conviction buyers picked up on that as a method.