The way I use the products is something like this: my main account on my MacBook gets the ChatGPT website and the Codex CLI. Then a Mac VM running via UTM with a shared writable dir handles anything more 'shady' permissions-wise, and is where I play with new AI apps - e.g. the ChatGPT/Codex standalone apps, Atlas, the Claude desktop app, etc. Seems to work decently enough.
And I do totally agree that there should be a way to opt out of all these privacy-invasive measures, especially after paying $200/mo.
That’s just sad, ugh. Just the other day I was using my pre-shitty-IoT-era sous vide machine (Anova brand; I think it was ChefSteps-recommended too, got it around 2014/2015), and I was thinking how glad I am that it has zero fancy connectivity - just a wheel to set the temperature, a start/stop button, and a simple LED display. Still works great.
Yeah, long ago when I was doing some iOS development, I remember Apple’s UX responsiveness mantras like “don’t block the main thread”, since the main thread is what keeps app UIs snappy even when something is happening in the background.
Nowadays it seems like half of Apple’s own software blocks on its main thread - like you said, things like the keyboard locking up for no reason. God forbid you try to paste too much text into a Note - the paste will crawl to a halt. Or, on my M4 Max MacBook (128 GB RAM, 8 TB SSD, Photos library with all originals saved locally), I hit Cmd-R to rotate an image, and the rotation of a fully local image can sometimes take >10 seconds while showing a blocking “Rotating Image…” dialog. It’s insane how low the bar has dropped for Apple software.
I often write a bunch of ESPHome ‘code’, which I then use with various ESP32-based devices (mostly from M5Stack) via ESPHome/Home Assistant.
Can this project help me in any way during the dev stage, before uploading the code to a device, just to check that it works? E.g. could I use this to somehow compile and run those ESPHome YAMLs via this emulator?
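For context, the kind of ESPHome YAML I mean is roughly this - a minimal sketch for an M5Stack-style ESP32 board, where the board ID, pin, and sensor are illustrative, not from any specific project of mine:

```yaml
esphome:
  name: desk-sensor

esp32:
  board: m5stack-core-esp32   # illustrative board ID

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:      # native Home Assistant API
logger:   # streaming logs
ota:
  - platform: esphome         # online updates

sensor:
  - platform: dht             # hypothetical attached sensor
    pin: GPIO26
    temperature:
      name: "Desk Temperature"
```

Being able to compile and run something like this against an emulator, before flashing real hardware, is exactly the workflow I’m after.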
That’s a really interesting use case. I’m currently evaluating integrating the ESPHome compiler into the project, so it could potentially compile and run ESPHome YAMLs during the development stage.
It’s still exploratory, but it could definitely go in that direction.
That would be awesome! ESPHome is the easiest way to integrate custom devices into your HomeAssistant with online updates, logs, and other functionality. Nothing else comes close.
Do you have more info about the inflated token use? I’m using the Codex CLI a bunch now, but the reported token usage seems like an order of magnitude higher than, say, Claude Code with Opus.
Idk if it’s because I set Codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too - e.g. I have a Codex session that reports ~500M tokens in / ~2M out, a 250:1 ratio.
I wish I had hard evidence, but it’s mostly an observation. I do use Codex a lot, and I’ve felt a drastic change from one or two months ago to today.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing; no one can be certain which model is actually running on the backend, and it’s pretty evident that they’re continuously “improving” it.
Back in business school they used to tell the story of how makers of razor blades would put a good blade as the first and the last blade in the pack. I suspect the LLM services of doing something like that.
I haven't had the time to fully hash this take out, but a big question in the back of my mind has been: is it possible that AI model improvements come partly from finding overhang in things that look hard and impressive to humans but are actually trivial consequences of the training data? If true, the observable performance of any widely distributed model could get worse over time as it "mines out" the work that's easy for it to do.
Using Codex more for now, and there is definitely some compaction magic.
I’m keeping the same conversation going for days, some at almost 1B tokens (per the Codex CLI counters), with seemingly no coherency loss.
Oh yes, upvoting - my top annoyance with Anthropic too; email links are a bit ridiculous as a login mechanism.
Anytime I have to log in again, it’s the ridiculous dance of figuring out which surface I’m logging into and how to get the magic link to open there, and not mistakenly somewhere else. Never a problem with OpenAI - enter password and 2FA, done, logged in.
On the topic of data requests from OpenAI - this article says “Be aware that this process isn’t instant”.
I did notice this and wondered what changed - I do periodic data backups of various services, and until recently it was impressive: ChatGPT’s email with the data zip link arrived within maybe 1-3 minutes of the request, for a ~1 GB file.
I have a similar amount of data now (even less; I pruned some), yet now the file takes a really long time to prepare and arrive.
I started mine Monday and it never finished (I never got the email saying it’s ready). I started it again on Tuesday and it finished in two hours. Maybe they just had a surge of exports on Monday.
I tried to use it right after launch from within Claude Desktop, on a Mac VM running inside UTM, and got cryptic messages about the Apple virtualization framework.
That made me realize it also wants to run an Apple virtualization VM but can’t, since it’s already inside one - imo the error messaging here could be better, or, given that it’s already in a VM, it could perhaps skip the VM layer altogether. As it stands, I still haven’t gotten to try cowork because of this error.
Does UTM/Apple's framework not allow nested virtualization? If I remember correctly from the x86(_64) days, this is something that sometimes needs to be manually enabled.
You are correct on both counts: as of Tahoe 26.3 you can't nest a macOS guest under a macOS guest. However, you can nest two layers deep with any other combo of layer-1 guest, so long as the machine is running Sequoia and is an M3/M4/M5.
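For anyone wanting to check this programmatically: Apple's Virtualization framework exposes a capability flag for passing nested virtualization to a guest. A minimal sketch, assuming the generic-platform API on macOS 15+ and Apple silicon (this covers e.g. Linux guests; macOS-guest configs are a separate platform type, and I haven't tested this exact snippet):

```swift
import Virtualization

// Sketch: query host support and opt a generic (e.g. Linux) guest into
// nested virtualization. Requires macOS 15+ on supported Apple silicon.
if #available(macOS 15.0, *),
   VZGenericPlatformConfiguration.isNestedVirtualizationSupported {
    let platform = VZGenericPlatformConfiguration()
    platform.isNestedVirtualizationEnabled = true  // enable for this guest config
    print("Nested virtualization enabled for this guest configuration")
} else {
    print("Host/OS does not support nested virtualization")
}
```

If the capability check fails (older chip, older OS, or already inside a VM that didn't pass it through), tools layered on the framework will error out, which is presumably what's happening in the UTM-inside-UTM case above.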