Hacker News new | past | comments | ask | show | jobs | submit | unleaded's comments

You could use an old-school formant synthesizer that lets you tune the parameters, like eSpeak or DECtalk. eSpeak apparently has a Klatt mode which might sound better than the default, but I haven't tried it.
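If you want to try it, something like this should work with espeak-ng (the `en+klatt` voice-variant name is from memory of espeak-ng's variant list; check `espeak-ng --voices=variant` on your install):

```shell
# Default formant voice
espeak-ng "hello world"

# Klatt variants (klatt, klatt2, klatt3, ...) appended to a base voice
espeak-ng -v en+klatt "hello world"

# Speed (-s, words per minute) and pitch (-p, 0-99) are tunable too
espeak-ng -v en+klatt2 -s 140 -p 40 "hello world"
```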

Screenshots are at the bottom of the page.

I found out the other day that you can use modern clang-cl with the MSVC6 headers and it just works. You can download them from https://github.com/itsmattkc/MSVC600 or copy them from an install if you have one handy.

Then run (something like) this:

  clang-cl /winsysroot:"" /DWINVER=0x0400 /D_WIN32_WINNT=0x0400 -m32 /GS- -march=i586 -Wno-nonportable-include-path /imsvc"C:\MSVC6\VC98\Include" hello.c -fuse-ld=lld-link /link /SAFESEH:NO /SUBSYSTEM:WINDOWS,4.0 /LIBPATH:"C:\MSVC6\VC98\Lib" user32.lib kernel32.lib msvcrt.lib
I don't know if it's any better or worse than MinGW practically but it is definitely cursed.
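For reference, a hypothetical minimal hello.c to feed that command might look like the following (the `#ifdef _WIN32` guard is just so the file is harmless to compile on other platforms; the actual target is the Win32 build above):

```c
/* hello.c - hypothetical minimal Win32 program for the clang-cl command above */
#ifdef _WIN32
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    /* /SUBSYSTEM:WINDOWS means no console, so pop a message box instead */
    MessageBoxA(NULL, "Hello from clang-cl + MSVC6 headers", "hello", MB_OK);
    return 0;
}
#endif
```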

It's fun and interesting. Most people don't actually daily drive it.

Impressive... let's see the page source.


>unless the purpose is specifically to have a retro effect where you eschew modern fonts for aesthetic purposes

There are better fonts for this too e.g. Fusion Pixel Font for CJK: https://github.com/TakWolf/fusion-pixel-font

(Yes, the readme is in Chinese; use Google Translate or something.)

I think I saw a good pixel font that supported Arabic too once, but of course I can't find it now...


Remember when JLCPCB became popular a few years ago and completely flipped hobby electronics upside down? I don't know how feasible it is, but it would be really cool if the same thing happened with semiconductors in a few years. It's kind of mad that they've dominated our lives since the 1970s, but you can only make them if you're a large company with millions of dollars (or several years, a big garage, and lots of equipment, as seen here). Or Tiny Tapeout.


It's not technologically feasible unless plastic aka flexible ICs take off.


Why?

It seems to me that if there were as much of a customer base for custom ICs as there is for PCBs, a fabricator like TSMC could easily offer a batch prototyping service on a 28 nm node, where you buy just a small slice of a wafer, provided you keep to some restrictive design and packaging rules.


They already do offer that - it’s called a multi-project wafer or MPW. But it’s prohibitively expensive on a per-chip basis. It’s mostly used for prototyping or concept proving and not for commercial use.

One problem is that you need to create a photolithography mask set at any fabrication volume, and those aren't cheap. But that's far from the _only_ problem with small volumes.



They should say on this page that the project has ended. There are some spinoffs that people interested in this can look into:

https://tinytapeout.com/

https://wafer.space/

https://chipfoundry.io/


This is an absolutely vital development for our computing freedom. Billion-dollar industrial fabs are single points of failure: they can be regulated, subverted, and enshittified by market forces. We need the ability to make our own hardware at home, just like we can make our own freedom-respecting software at home.


Still relevant today. Many problems people throw at LLMs can be solved more efficiently with text completion than by begging a model 20x the size (and probably more than 20x the cost) to produce the right structured output. https://www.reddit.com/r/LocalLLaMA/comments/1859qry/is_anyo...


I used to work very heavily with local models and swore by text completion despite many people thinking it was insane that I would choose not to use a chat interface.

LLMs are designed for text completion, and the chat interface is basically a fine-tuning hack that turns prompting into a natural form of text completion, giving the average user a more "intuitive" interface (I don't even want to think about how many AI "enthusiasts" don't really understand this).

But with open/local models in particular, each instruct/chat interface is slightly different. There are tools that help mitigate this, but the closer you work to the model, the more likely you are to make a stupid mistake because you didn't understand some detail of how the instruct interface was fine-tuned.
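To make the "every chat interface is slightly different" point concrete, here's a toy sketch. The template strings below are illustrative stand-ins, not any specific model's real template, but real templates (ChatML-style, [INST]-style, etc.) differ in exactly this kind of detail before the conversation ever reaches the underlying text-completion engine:

```python
def wrap_chatml_style(system: str, user: str) -> str:
    # ChatML-ish: special role markers around each turn
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def wrap_inst_style(system: str, user: str) -> str:
    # [INST]-ish: system prompt folded into the first user turn
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def raw_completion_prompt(user: str) -> str:
    # Plain text completion: no markers, just text for the model to continue
    return f"Q: {user}\nA:"

if __name__ == "__main__":
    sys_p, usr = "You are concise.", "Why is the sky blue?"
    print(repr(wrap_chatml_style(sys_p, usr)))
    print(repr(wrap_inst_style(sys_p, usr)))
    print(repr(raw_completion_prompt(usr)))
```

Get a marker wrong, or use one model family's template with another's weights, and the model still completes the text; it just completes it badly, which is the stupid mistake that bites you.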

Once you accept that LLMs are "auto-complete on steroids", you can get much better results by programming them the way they were naturally designed to work. It also helps a lot with prompt engineering, because you can more easily understand what the model's natural tendency is and work with it to generally get better results.

It's funny because a good chunk of my comments on HN these days are combating AI hype, but man, LLMs really are fascinating to work with if you approach them with a more clear-headed perspective.


Maybe? The loop process of try-fail-try-again-succeed is pretty powerful. Not sure how you get that purely with text completion.


Why would you do that when you could spend months building metadata and failing to tune prompts for a >100B parameter LLM? /s


It never has; see McCarthyism, for instance.


>Never

>Mentions one discrete event

Come on...


