You could use an old-school formant synthesizer that lets you tune the parameters, like eSpeak or DECtalk. eSpeak apparently has a Klatt mode which might sound better than the default, but I haven't tried it.
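A quick sketch of trying it, assuming espeak (or espeak-ng) is installed — the "+klatt" voice variant selects Klatt-style formant synthesis, and variants like klatt2 and klatt3 also exist in most builds:

```shell
# Speak with the Klatt formant synthesizer instead of the default:
espeak -v en+klatt "Hello from the Klatt synthesizer."

# Render to a WAV file instead of playing audio:
espeak -v en+klatt -w klatt.wav "Hello from the Klatt synthesizer."
```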
I found out the other day that you can use modern clang-cl with the MSVC6 headers and it just works. You can download them from https://github.com/itsmattkc/MSVC600 or copy them from an install if you have one handy.
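A hedged sketch of what that invocation might look like — the directory layout here assumes the MSVC600 repo mirrors a VC98 install (INCLUDE/LIB under VC98), so adjust paths to your checkout:

```shell
# Fetch the MSVC6-era headers and libraries:
git clone https://github.com/itsmattkc/MSVC600

# Compile against them with a modern clang-cl.
# /X keeps clang-cl from also pulling in paths from %INCLUDE%.
clang-cl hello.c /X \
  /I MSVC600/VC98/INCLUDE \
  /link /libpath:MSVC600/VC98/LIB
```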
Remember when JLCPCB became popular a few years ago and completely flipped hobby electronics upside down? I don't know how feasible it is, but it would be really cool if the same thing happened with semiconductors in a few years. It's kind of mad that they've dominated our lives since the 1970s, yet you can only make them if you're a large company with millions of dollars (or several years, a big garage, and lots of equipment, as seen here). Or Tiny Tapeout.
It seems to me that if there were as much of a customer base for custom ICs as there is for PCBs, a fabricator like TSMC could easily offer a batch prototyping service on a 28 nm node, where you buy just a small slice of a wafer, provided you keep to some restrictive design and packaging rules.
They already do offer that - it's called a multi-project wafer, or MPW. But it's prohibitively expensive on a per-chip basis, so it's mostly used for prototyping and proof of concept rather than commercial production.
One problem is that you need to create a photolithography mask set at any fabrication volume, and those aren't cheap. But that's far from the _only_ problem with small volumes.
This is an absolutely vital development for our computing freedom. Billion-dollar industrial fabs are single points of failure: they can be regulated, subverted, or enshittified by market forces. We need the ability to make our own hardware at home, just as we can make our own freedom-respecting software at home.
Still relevant today. Many problems people throw at LLMs can be solved more efficiently with text completion than by begging a model 20x the size (and probably more than 20x the cost) to produce the right structured output. https://www.reddit.com/r/LocalLLaMA/comments/1859qry/is_anyo...
I used to work very heavily with local models and swore by text completion, even though many people thought it was insane that I would choose not to use a chat interface.
LLMs are designed for text completion; the chat interface is basically a fine-tuning hack that recasts prompting as a natural form of text completion, giving the average user a more "intuitive" interface. (I don't even want to think about how many AI "enthusiasts" don't really understand this.)
But with open/local models in particular, each instruct/chat interface is slightly different. There are tools that help mitigate this, but the closer you work to the model, the more likely you are to make a stupid mistake because you didn't understand some detail of how the instruct interface was fine-tuned.
Once you accept that LLMs are "autocomplete on steroids", you can get much better results by programming them the way they were naturally designed to work. It also helps a lot with prompt engineering, because you can more easily understand the model's natural tendencies and work with them to generally get better results.
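A minimal sketch of the point above: a "chat" request is just a completion prompt wrapped in whatever template the model was fine-tuned on. The template here is Llama-2-style [INST] markup purely as an illustration — other instruct models use different (and mutually incompatible) markup, which is exactly where the stupid mistakes creep in:

```python
def chat_prompt(system: str, user: str) -> str:
    # What a chat interface builds under the hood before handing the
    # string to plain text completion (Llama-2-style template, as an example).
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def completion_prompt(text: str) -> str:
    # Raw completion: no template at all; the model just continues your text.
    return text

print(completion_prompt("The capital of France is"))
print(chat_prompt("You are a helpful assistant.",
                  "What is the capital of France?"))
```

Both strings end up going through the exact same completion machinery; the only difference is the wrapper text.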
It's funny, because a good chunk of my comments on HN these days are combating AI hype, but man, LLMs really are fascinating to work with if you approach them with a clearer-headed perspective.