You can, but does it work well? I assume CC has all kinds of Claude-specific prompts in it; wouldn't you be better off with a harness designed to be model-agnostic, like pi.dev or OpenCode?
I've been using Kimi K2.6, gpt-5.4, and now Deepseek v4 (though not extensively yet) in Claude Code, and I can say it works much better than you'd expect. It looks like the system prompt and tools are pulling a lot of the weight. Maybe current models are good enough that they don't need to be trained for a specific harness.
Both of these valuations are absolutely absurd. I guess Anthropic looks good in comparison, but I don't want to hold that bag.
The Chinese models are catching up in quality while being a fraction of the price. The market will speak: how many devices that contributed to this thread were made in the USA?
Sure you can argue the Chinese companies are heavily subsidized, but no major LLM lab is remotely close to making a profit this decade.
except they really aren't, because the overwhelming majority of truck owners either never tow anything at all or limit their towing to utility-class trailers that could just as easily be pulled by an Outback with a trailer hitch.

Edit: To expand on this point even further, 1500/150-class trucks (the market niche most full-sized truck owners inhabit and that these vehicles are targeted at) are complete bullshit for towing anyway. Hook up to anything heavier than a compact car on a tow dolly and you're begging to smoke the transmission.
I'm convinced Mazda could have taken over the world with an electric CX-5. What a missed opportunity. Hopefully it's not too late for them, as I think it's one of the nicer and more tasteful vehicles on the road.
The number of international students in the US was under 1M last year [1]. It may be true that they represent a higher percentage in college towns, but that's a rounding error compared with actual population demand. Moreover, there were ~20M university students in the US last year [2], so across all universities that's an average 19-to-1 ratio of US students to international students. I really wonder what your town is and whether this is really the case (or maybe it's a unique case).
Me too, but threading is botched in Python, and not just because of the Global Interpreter Lock. Some Python packages are not thread-safe, and it's not documented which ones. Years ago I discovered that cPickle was not thread-safe, and that wasn't considered a problem.
You can still have thread safety issues with the GIL in place, because globals and other data is shared between threads.
For example, you can put a dictionary at the module level; thread A can set a key in that dictionary like "name", thread B can overwrite it, and then thread A comes back, does dct["name"], and gets an unexpected answer.
This is a relatively easy mistake to make, a lot of python code has module level variables.
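A minimal sketch of that interleaving, with threading.Event used to force the problematic ordering deterministically (in real code the race just happens whenever the scheduler switches threads at the wrong moment):

```python
import threading

shared = {}                      # module-level dict, visible to every thread
wrote = threading.Event()
overwrote = threading.Event()
result = []

def thread_a():
    shared["name"] = "A"             # A writes its value
    wrote.set()                      # let B run
    overwrote.wait()                 # by now B has overwritten the key
    result.append(shared["name"])    # A reads back... and sees B's value

def thread_b():
    wrote.wait()
    shared["name"] = "B"             # B clobbers A's entry
    overwrote.set()

ta = threading.Thread(target=thread_a)
tb = threading.Thread(target=thread_b)
ta.start(); tb.start()
ta.join(); tb.join()

print(result)  # ['B'] -- thread A observed B's write, not its own
```

The GIL makes each individual dict operation atomic, but it never makes the write-then-read *sequence* atomic, which is exactly the gap this shows.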
I really don't get the whole coloured-function thing. How is it not just the function signature? You might as well claim a new function argument makes a new colour. Granted, all of my use of async is in Rust, where the compiler picks this stuff up, so maybe in Python there are other concerns I'm missing.
I think the "new function argument makes a new colour" is accurate on several levels. async/await is a dual for streamlining working with the Promise or Future or Task Monad (however you want to call it). (And async/await syntax in most of the languages that have it can actually be [ab]used for a substandard "do-notation" for nearly any Monad you want to use.)
At face value, yeah, every function in Haskell that accepts an IO monad is now "IO colored", but at the same time, that's a silly way to look at it. It's just extra type information and type bindings flowing as types flow through functions. It's just a bit of a tautology that functions that deal with other functions that need that type need to deal with that type themselves.
Functions that use Maybe/Option/nulls are all "nullable colored". Functions that use or return integers are clearly "integer colored". That's what programming languages do: they try to track how your types flow through functions. "Coloring" is a bad metaphor or at least a useless one, we just call that "types". Admittedly, I think that's why Python and JS users predominantly use the "what color is your function" complaints the most because all of the rest of typing information for them is generally opt-in and easily ignorable/forgotten.
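To make the "it's just types" point concrete, here's a small Python sketch (function names are illustrative): the "colour" of an async function is simply that calling it yields a coroutine object rather than its result, the same way a nullable-returning function yields an Optional rather than a bare value.

```python
import asyncio
from typing import Optional

def plain() -> int:                # "integer coloured"
    return 42

def nullable() -> Optional[int]:   # "nullable coloured": callers must handle None
    return None

async def coloured() -> int:       # "async coloured": the call yields a coroutine
    return 42

coro = coloured()                  # calling it doesn't run it; it builds a coroutine object
print(type(coro).__name__)         # coroutine
print(asyncio.run(coro))           # 42 -- only an event loop (or an awaiting
                                   # coroutine) can extract the wrapped int
print(plain())                     # 42 -- no wrapper to unwrap
```

In statically typed languages the compiler tracks this wrapper for you; in Python the coroutine object silently sits there unawaited unless you remember to deal with it, which is arguably why the "colour" complaint bites Python and JS users hardest.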
Why is this so common? Do people seriously not read a language/library documentation? That's the absolute first thing I do when evaluating a technology.
Because people have deadlines and need to get things working. You read enough to figure out how to do what you need to do and then mostly move on.
This function was added in 3.7 with no note about the importance of saving a reference. In 3.9 a note was added ("Save a reference to the result of this function, to avoid a task disappearing mid execution."), which was then expanded with the explanation of the weak reference in 3.10.
It absolutely is common. People see there is a len function that takes one argument, they call len(some_collection), see that it indeed returns the number of items in the collection like they expect and move on. They don't expect len to return a negative number instead on Thursdays, and of course it doesn't because that would be a pretty big footgun. People also see that there is a create_task function that takes a coroutine, they call create_task(some_coroutine), see that the coroutine indeed runs like they expect, and move on. Sure, you're supposed to await the result, but maybe they don't need the awaited value anymore, only the side effects, and see that it still works.
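For reference, the fix the asyncio docs recommend is to hold a strong reference yourself until the task finishes, since the event loop only keeps a weak one. A minimal sketch (the `side_effect` coroutine and the list it appends to are illustrative):

```python
import asyncio

background_tasks = set()       # strong references keep fire-and-forget tasks alive
completed = []

async def side_effect():
    await asyncio.sleep(0)
    completed.append("done")   # the side effect we care about

async def main():
    task = asyncio.create_task(side_effect())
    # Without these two lines the task could be garbage-collected mid-run.
    background_tasks.add(task)                        # hold a strong reference
    task.add_done_callback(background_tasks.discard)  # drop it once finished
    await asyncio.sleep(0.01)  # yield so the task gets a chance to run

asyncio.run(main())
print(completed)         # ['done']
print(background_tasks)  # set() -- the done-callback removed the finished task
```

None of that is discoverable from calling `create_task` and watching it work, which is exactly the grandparent's point.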