The really strange thing is that so much of it doesn't work. I get that the SOTA models perform some tasks quite well and have some real value. But the AI being implemented in every corner creates a lot of really bad results. The Shopify code assistant will completely wreck your site and basically gets nothing correct. It will write 100 lines to change the color of a single DIV. The Amazon product Q&A will give you wrong information more often than not.
In what frame of mind is it logical or necessary to put these extremely poorly functioning products into the wild?
It's a desperate attempt at staying relevant, even if most of those companies don't realize it yet. Because of its general-purpose nature, AI subsumes products. Most software products that try to "implement AI in every corner" would, from the user's POV, be more useful if they became tools for ChatGPT/Claude/Gemini.
People's goals are rarely limited to just one software product, and products are basically a bag of tools glued together with UI: tools that work with each other but don't interoperate much with anything else. That boundary drawn around a bunch of software utilities is given a name and a fancy logo, and is then sold or used to charge people rent. That's what a software product is. But LLMs want to flip that around: they're good at gluing things, so embedding one within a product is just a waste of model capabilities, and it actually makes the product boundary more apparent and annoying.
Or in short: consider Copilot in Microsoft Word vs. a "Generate Word Document" plugin/tool for a general LLM interface (whether the Gemini webapp or Claude Code or something like TypingMind). The former is just an LLM locked in a box, barely able to output some text without refusing or claiming it can't do it. The latter is a general-purpose tool that can search the web for you, scrape some sites and run data analysis on the results (writing its own code for this), talk the results over with you, cross-reference them with other sources, and then generate a pretty Word document with formatting and images.
This is, btw., a real example. I used a Word document generator with TypingMind and GPT-4 via API, and it was more usable over a year ago than Copilot is even now. Partly because Copilot is just broken, but mostly because the LLM can do lots of things other than writing text in Word.
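To make that concrete: the plugin side of this is tiny. Here's a rough Python sketch of what a "generate Word document" tool looks like when exposed to an LLM via the common function-calling convention. The tool name, schema, and helper are made up for illustration (this is not TypingMind's actual plugin format), and it uses python-docx for the file itself:

    # Sketch of a "generate Word document" tool an LLM can call.
    # Names and schema are illustrative, not any specific product's API.
    from docx import Document  # pip install python-docx

    GENERATE_DOCX_TOOL = {
        "type": "function",
        "function": {
            "name": "generate_word_document",
            "description": "Write the given sections to a .docx file and return its path.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "sections": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "heading": {"type": "string"},
                                "body": {"type": "string"},
                            },
                        },
                    },
                },
                "required": ["title", "sections"],
            },
        },
    }

    def generate_word_document(title, sections, path="output.docx"):
        # Serialize whatever the model produced into a formatted .docx file.
        doc = Document()
        doc.add_heading(title, level=0)
        for section in sections:
            doc.add_heading(section["heading"], level=1)
            doc.add_paragraph(section["body"])
        doc.save(path)
        return path

Everything upstream of that call (the searching, scraping, and analysis) is the model's job; the tool only has to serialize the result.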
Point being, AI is eroding the notion of software product as something you sell/rent, which threatens just about the entire software industry :).
That's exactly what automatic bidding does: it only outbids enough to beat the competing bid (up to your max) without paying any more than is needed. https://www.ebay.com/help/buying/bidding/automatic-bidding?i... (Manual bids have bid increments as well, although others have pointed out that advance bidding might cause others to bid more than they would if they thought no one else wanted the item.)
Yes, what I think happens is the following: User A's price ceiling is $10, User B's is $12. When both reveal their max price early, the item goes to $10.50 ($0.50 increment over A's max price). User A then has plenty of time to notice that someone values the item at $10.50. In many cases users then adjust the value they assign to the item and increase their bid. The result: User B ends up paying more than the $10.50 they would have paid had they sniped the item seconds before the auction expired.
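A simplified model of that mechanic in a few lines of Python, assuming a flat $0.50 increment (real eBay increments vary with price, and ties go to the earlier bid, which is ignored here):

    # Simplified model of automatic (proxy) bidding between two bidders.
    def proxy_bid(max_a, max_b, increment=0.50):
        winner = "A" if max_a > max_b else "B"
        winner_max, loser_max = max(max_a, max_b), min(max_a, max_b)
        # The winner pays one increment over the loser's max, capped at their own max.
        price = min(winner_max, loser_max + increment)
        return winner, price

    print(proxy_bid(10.00, 12.00))  # ('B', 10.5) -- the $10.50 from the example above

The early-bid problem is exactly that the $10.50 becomes visible long before the auction ends, giving A time to raise their ceiling.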
Are these websites not serving public content? If there are legal concerns, just create a separate scraping LLC that fakes the user agent and uses residential IPs or a VPN or something. I can't imagine the companies would follow through with some sort of lawsuit against a scraper that's trying to index their site to get them more visitors, if they already allow Googlebot.
Perhaps it wasn't a widespread norm, though. But I don't really see why that matters much. Is the issue that sites with robots.txt today only allow Googlebot and not other search engines? Or is Google somehow benefiting from two-decade-old content that website operators have since blocked via robots.txt because they don't want it indexed?
Agreed. It was not standard in the late 90s or early 00s. Most sites were custom built and relied on the _webmaster_ knowing and understanding how robots.txt worked. I'd heard of plenty of examples where people had inadvertently blocked crawlers from their site because they got the syntax wrong. CMSes probably helped with widespread adoption, e.g. WordPress.
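For anyone curious how unforgiving the format is, you can check what a given file actually does with Python's stdlib parser. The rules below are a made-up example of the classic mistake: blocking the whole site when you only meant to block one directory:

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt: the webmaster meant to block only /tmp/,
    # but "Disallow: /" blocks the entire site for every crawler.
    rules = """
    User-agent: *
    Disallow: /
    """.splitlines()

    rp = RobotFileParser()
    rp.parse(rules)
    print(rp.can_fetch("Googlebot", "https://example.com/"))     # False
    print(rp.can_fetch("SomeOtherBot", "https://example.com/"))  # False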
With 10 years' experience and taking travel jobs to remote locations you MIGHT break $100/hr without OT. With North Slope experience you can get jobs paying ~$70/hr with guaranteed OT, so you will crack $100/hr, but that's working winter in Alaska. Even offshore jobs aren't paying $100/hr. No one is paying $800.
It's not really any different than stopping the sale of counterfeit goods on a platform. That's a challenge, but hardly insurmountable, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knockoffs to a small number of people and get reliably paid within 72 hours. To make the same off of "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay you out unless you are verified, so ban people who post AI content without disclosing it and the well will run dry quickly.
Well then email spam will never have an incentive. That is a relief! I was going to predict that someday people would start sending millions of misleading emails or texts!
Sure, those are the things Jesus advocated for in society, but it's not as if a warrior ethos isn't warranted for Jesus, considering the attitude towards his message at the time. Especially when you gauge it in light of what eventually happened to him. The ultimate message in Christianity is the necessity of perseverance in pursuit of goals. The world in general isn't immediately receptive to a message that prioritizes acceptance over lament.