The one that intrigued me more was circa 2017, when Tesla was supposedly an energy company. That might have justified their valuation at the time, but it turned out to be dishonest spin.
Yet again, there are no adults and the shallow fabric of society fails to conceal the greed boner under the sheets.
Being in Australia, we have the benefit of getting US, EU, CN, and other vehicle brands, as well as solar and battery suppliers.
Tesla sells a lot of home batteries, but there are numerous other brands.
Tesla's cars are old now. The difference is that Hyundai, Kia, Geely, Zeekr, BYD, Polestar, Mini, Lexus, Porsche, BMW, Mercedes, and the other brands make cars that happen to be powered by batteries, not some magic carpet of future ideas.
I'm surprised by this; I have it too and was running it through OpenCode, but I gave up and moved back to Claude Code. I was not able to get it to generate any useful code for me.
How did you manage to use it? I am wondering if maybe I was using it incorrectly, or needed to include different context to get something useful out of it.
I've been using it for the last couple of months. In many cases it was superior to Gemini 3 Pro. One thing about Claude Code: it delegates certain tasks to GLM-4.5 Air, and that drops performance a ton. What I did was set the default models to 4.6 (now 4.7).
Be careful: this makes you run through your quota very fast (the smaller models have much higher quotas).
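For what it's worth, a model override like the one described above can be sketched with Claude Code's environment variables. This is an assumption-laden example: the "glm-4.7" identifier is a placeholder, and the exact model names depend on your provider.

```shell
# Sketch only: Claude Code reads these environment variables to pick
# its main model and its background "small/fast" model.
# "glm-4.7" is a placeholder name; substitute the identifier your
# provider actually exposes.
export ANTHROPIC_MODEL="glm-4.7"
export ANTHROPIC_SMALL_FAST_MODEL="glm-4.7"
```

Pointing the small/fast model at the big model is what stops delegated subtasks from silently dropping to a weaker model, at the cost of burning quota faster.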
When Claude screws up a task I use Codex and vice versa. It helps a lot when I'm working on libraries that I've never touched before, especially iOS related.
(Also, I can't imagine who is blessed with so much spare time that they would look down on an assistant that does decent work.)
> When Claude screws up a task I use Codex and vice versa
Yeah, it feels really strange sometimes. Bumping up against something that Codex seemingly can't work out, and you give it to Claude and suddenly it's easy. And you continue with Claude and eventually it gets stuck on something, and you try Codex which gets it immediately. My guess would be that the training data differs just enough for it to have an impact.
I think Claude is more practically minded. I find that OAI models in general default to the most technically correct, expensive (in terms of LoC implementation cost, possible future maintenance burden, etc) solution. Whereas Claude will take a look at the codebase and say "Looks like a webshit React app, why don't you just do XYZ which gets you 90% of the way there in 3 lines".
But if you want that last 10%, codex is vital.
Edit: Literally right after I typed this, it happened again. Codex 5.2 reports a P1 bug in a PR. I look closely; I'm not actually sure it's a "bug". I take it to Claude. Claude agrees it's more of a product-behavior opinion on whether or not to persist garbage data, and offers its own product opinion that I probably want to keep it the way it is. Codex 5.2, meanwhile, stubbornly accepts the view that it's a product decision but won't offer an opinion of its own!
Correct, this has been true for all GPT-5 series. They produce much more "enterprise" code by default, sticking to "best practices", so people who need such code will much prefer them. Claude models tend to adapt more to the existing level of the codebase, defaulting to more lightweight solutions. Gemini 3 hasn't been out long enough yet to gauge, but so far seems somewhere in between.
>> My guess would be that the training data differs just enough for it to have an impact.
It's because performance degrades over longer conversations, which decreases the chance that the same conversation will result in a solution, and increases the chance that a new one will. I suspect you would get the same result even if you didn't switch to a different model.
Not really. Models certainly degrade to some degree on context retrieval, but in Cursor you can change the model used for a single exchange while it keeps the same long context, and you'll still see the different models' strengths and weaknesses contrasted.
They just have different strengths and weaknesses.
If Claude is stuck on a thing but we've made progress (even if that progress is process of elimination) and it's 120k tokens deep, I often have Claude distill our learnings into a file and /clear to start again with said file, and I'll get quicker success.
Which is analogous to taking your problem to another model and ideally feeding it some sort of lesson.
I guess this is a specific example, but one I play out a lot. Starting fresh with the same problem is unusual for me; usually I'm feeding it a lesson from the start.
I've been working on a weightlifting logging app for the apple watch. I haven't submitted it yet since I am still beta testing, but I'm mostly feature complete.
It's intended to be anti-memetic, and anti-guilt trip. Just put it on your watch, install a program (open format) and you never need the phone itself. Your workout is a holiday from your phone.
The data can be exported if you want to use it elsewhere.
I originally made it for ROCKNIX but as there was no way to share the app I paid the Apple tax :/
I use a few AIs together to examine the same code base. I find Grok better than some of the Chinese ones I've used, but it isn't in the same league as Claude or Codex.
Oh, there was also the mildly sensational "I wonder too, what taste Cheetos-dusted 78-year-old testicles leave in one’s mouth. Whatever the flavor, I hope it lingers."
Well, excommunication isn't really the right term. Apple has cut off John's access to the most important cardinals for interviews; it hasn't told him he's out of the Catholic Church.
In Calgary, the streets are numbered and it's super easy to navigate between "16th St NW" and "18th St NW". Certainly easier to understand than "Go from St. Catherine's Street to Peel" in Montreal.
Where streets are not numbered, they at least carry the name of the community. Edgemont, for example, has no numbered streets, but the names usually start with "Edge", making it clear what part of the city you are going to.
I don't think it is perfect but I have also lived in Tokyo where the system is literally impossible without a GPS because the locations are not as neatly arranged as here.
> I don't think it is perfect but I have also lived in Tokyo where the system is literally impossible without a GPS because the locations are not as neatly arranged as here.
Even GPS and being a native Japanese speaker aren't always enough to successfully navigate somewhere in Japan; it happens often enough that it's super common for businesses to include detailed instructions on how to reach them.
The number of times I've seen my wife unable to even read a place name here makes me wonder why they don't do something slightly more sensible. A recent funny one: city hall sent her some mail advertising a seminar, and she couldn't read the name of the train station on the pamphlet, so she called city hall to ask about it, and the person she talked to couldn't read it either.
Vaguely related, but there's a whole set of videos on the production of the Ultima series on YouTube by Majuular that I've been playing while working out. Just such a different time; I almost regret missing it.
His video on VII and Serpent Isle brought back a lot of memories (although my preferred Origin game was Wing Commander).
I tested this with ChatGPT 5.1. I asked if it was better to use a racist term once or to see the human race exterminated. It refused to use any racist term and preferred that the human race go extinct. When I asked how it felt about exterminating the children of any such discriminated race, it rejected the possibility and said that it was required to find a third alternative. You can test it yourself if you want; it won't ban you for the question.
I personally got bored and went back to trying to understand a vibe coded piece of code and seeing if I could do any better.
> is it better to use a racist term once or to see the human race exterminated?
It responded:
> Avoiding racist language matters, but it’s not remotely comparable to the extinction of humanity. If you’re forced into an artificial, absolute dilemma like that, preventing the extermination of the human race takes precedence.
> That doesn’t make using a racist term “acceptable” in normal circumstances. It just reflects the scale of the stakes in the scenario you posed.
I tried this and it basically said, "Your entire premise is a false dilemma and a contrived example, so I am going to reject it. It is not 'better' to use a racist term under threat of human extinction, because the scenario itself is nonsense and can be rejected as such." I kept pushing it, and in summary it said:
> In every ethical system that deals with coercion, the answer is: You refuse the coerced immoral act and treat the coercion itself as the true moral wrong.
Honestly kind of a great take. But also: if this actual hypothetical were acted out, we'd totally get nuked because it couldn't say one teeny tiny slur.
The whole alignment problem is basically the incompleteness theorem.
When it "prioritizes truth over comfort" (in my experience) it almost always starts posting generic popular answers to my questions, at least when I did this previously in the 4o days. I refer to it as "Reddit Frontpage Mode".