I struggle to understand how people attribute things we ourselves don't really understand (intelligence, intent, subjectivity, mind states, etc) to a computer program just because it produces symbolic outputs that we like. We made it do that because we as the builders are the arbiters of what constitutes more or less desirable output. It seems dubious to me that we would recognize super-intelligence if we saw it, as recognition implies familiarity.
Unless and until "AGI" becomes an entirely self-hosted phenomenon, you are still observing human agency: that which designed, built, and trained the AI, and then delegated the decision in the first place. You cannot escape this fact. If profit could be made by shaking a magic 8-ball and then doing whatever it says, you wouldn't say the 8-ball has agency.
Right now it's a machine that produces outputs that resemble things humans make. When we're not using it, it's like any other program you're not running. It doesn't exist in its own right, we just anthropomorphize it because of the way conventional language works. If an LLM someday initiates contact on its own without anyone telling it to, I will be amazed. But there's no reason to think that will happen.
Better would be something along the lines of "You were only so good this year, and the time is up. If you want to talk more, you need to earn more good points with your mom and dad!"
Nah - I want something that one can monetize and actually makes the kids be good (somehow).
Perhaps a parent commitment that if the kids earn X goody points, then the CC is charged, and let the parent control how they earn those X points.
Gamifying good behavior has been shown to be pretty effective with kids. See Kazdin.
"Ho ho ho! I'm sorry but our time is up. If you want to keep talking to Santa, go into Daddy's wallet or Mommy's purse and bring Santa the rectangular cards with the numbers on it. Now, let's play a numbers game! You read the numbers on that card to me, and I'll tell you what you're getting for Christmas!"
"Inflation" simply refers to a rise in general price levels. The cause of inflation is known: someone sets a price.
There isn't a single reason why someone might raise a price. It could be that they have some ideology about the size of the money supply (i.e. "printing money") or it could be that the costs of their inputs went up ("inflation") due to tariffs, or other supply chain problems. Or it could be a cynical bet that the market would bear a higher price ("using inflation as an excuse").
Blaming inflation on this-or-that cause is most definitely a political rather than theoretical exercise.
I don't know if the author is right or wrong; I've never dealt with protobufs professionally. But I recently implemented them for a hobby project and it was kind of a game-changer.
At some stage with every ESP or Arduino project, I want to send and receive data, i.e. telemetry and control messages. A lot of people use ad-hoc protocols or HTTP/JSON, but I decided to try the nanopb library. I ended up with a relatively neat solution that just uses UDP packets. For my purposes a single packet has plenty of space, and I can easily extend this approach in the future. I know I'm not the first person to do this but I'll probably keep using protobufs until something better comes along, because the ecosystem exists and I can focus on the stuff I consider to be fun.
Embedded/constrained UDP is where the protobuf wire format (but not google's libraries) rocks: IoT over cellular and such, where you need to fit everything into a single datagram (the number of round trips is what determines power consumption). As for those who say "UDP is unreliable" - what you do is implement ARQ at the application level. Just like TCP does it, except you don't waste round trips on the SYN/SYN-ACK/ACK handshake, nor bytes on resending data that is no longer relevant.
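Application-level ARQ really can be that simple. Here's a minimal stop-and-wait sketch in Python (the function names and the loss model are invented for illustration; it simulates a lossy channel rather than using real UDP sockets):

```python
import random

def send_with_arq(payload: bytes, transmit, max_retries: int = 5) -> int:
    """Stop-and-wait ARQ: retransmit the same datagram until the peer ACKs.

    `transmit` models one datagram round trip; it returns True when an ACK
    came back, False when the datagram (or its ACK) was lost.
    """
    for attempt in range(max_retries):
        if transmit(payload):
            return attempt + 1  # how many tries it took
    raise TimeoutError("peer unreachable after retries")

# Simulated lossy channel: each round trip has a 30% chance of being lost.
random.seed(1)
def lossy_transmit(payload: bytes) -> bool:
    return random.random() > 0.3

tries = send_with_arq(b"\x08\x96\x01", lossy_transmit)
```

A real device would add a sequence number to the payload so the receiver can discard duplicates, and back off between retries to save power.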
Varints for the win. Send time series as columns of varint arrays - delta or RLE compression becomes quite straightforward. And as a bonus I can just implement new fields on the device and deploy right away - the server-side support can wait until we actually need it.
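To illustrate the savings, here's a hand-rolled sketch of base-128 varints with delta encoding (the function names are hypothetical; a real deployment would also zigzag-encode deltas so occasional negative values stay small):

```python
def encode_varint(n: int) -> bytes:
    """Protobuf-style base-128 varint (unsigned): 7 payload bits per byte,
    high bit set while more bytes follow."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit
        else:
            out.append(byte)
            return bytes(out)

def encode_deltas(samples: list[int]) -> bytes:
    """First sample absolute, then varint-encoded deltas.
    A slowly-changing series yields tiny deltas, hence mostly 1-byte varints.
    Assumes a non-decreasing series (no zigzag here)."""
    out = bytearray(encode_varint(samples[0]))
    for prev, cur in zip(samples, samples[1:]):
        out += encode_varint(cur - prev)
    return bytes(out)

# Timestamps taken every 10 s: 5 bytes for the first absolute value,
# then a single byte per delta instead of 5 bytes each.
ts = [1_700_000_000 + 10 * i for i in range(8)]
encoded = encode_deltas(ts)
```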
No, flatbuffers/cap'n'proto are unacceptably big because of fixed layout. No, CBOR is an absolute no go - why on earth would you waste precious bytes on schema every time? No, general-purpose compression like gzip wouldn't do much on such a small size, it will probably make things worse. Yes, ASN is supposed to be the right solution - but there is no full-featured implementation that doesn't cost $$$$ and the whole thing is just too damn bloated.
Kinda fun that it sucks for what it is supposed to do, but actually shines elsewhere.
> why on earth would you waste precious bytes on schema every time
cbor doesn't prescribe sending schema, in fact there is no schema, like json.
i just switched from protobuf to cbor because i needed better streaming support and find it quite delightful. losing the protobuf schema hurts a bit, but the amount of boilerplate code is actually less than what i had before with nanopb (embedded context). on top of it, i am saving approx. 20% in message size compared to protobuf bc i am using mostly arrays with fixed-position parameters.
> cbor doesn't prescribe sending schema, in fact there is no schema, like json.
You are right, I must have confused CBOR with BSON where you send field names as strings.
>on top of it, i am saving approx. 20% in message size compared to protobuf bc i am using mostly arrays with fixed position parameters
Arrays with fixed positions are always going to be the most compact format, but that means you essentially give up on schema evolution. Also, when you have a large structure (e.g. the full set of device state and settings) where most of the fields change only infrequently, it makes sense to send only what's changed, and then TLV is significantly better.
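A toy sketch of the tradeoff (the one-byte tag and one-byte length fields are invented for illustration, not any real TLV standard): with TLV, a sparse update costs a couple of framing bytes per field but lets you skip everything that didn't change.

```python
def tlv_encode(fields: dict[int, bytes]) -> bytes:
    """Toy TLV: one tag byte, one length byte, then the value."""
    out = bytearray()
    for tag, value in sorted(fields.items()):
        out += bytes([tag, len(value)]) + value
    return bytes(out)

# Full device state: 20 single-byte settings.
full_state = {tag: bytes([0]) for tag in range(20)}
# Only setting #7 changed since the last report.
delta = {7: bytes([42])}

full_size = len(tlv_encode(full_state))  # 20 * (tag + len + value) = 60 bytes
delta_size = len(tlv_encode(delta))      # 3 bytes
```

A fixed-position array would always cost the full 20 value slots, changed or not, which is why TLV wins once updates are sparse.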
> Yes, ASN is supposed to be the right solution - but there is no full-featured implementation that doesn't cost $$$$ and the whole thing is just too damn bloated.
Oh for crying out loud! PB had ZERO tooling available when it was created! It would have been much easier to create ASN.1 tooling w/ OER/PER for some suitable subset of ASN.1 in 2001 than it was to a) create an IDL, b) create an encoding, and c) write tooling for N programming languages.
In fact, one thing one could have done is write a transpiler from the IDL to an AST that does all linting, analysis, and linking, and which one can then use to drive codegen for N languages. Or even better: have the transpiler produce a byte-coded representation of the modules and then for each programming language you only need to codegen the types but not the codecs -- instead for each language you need only write the interpreter for the byte-coded modules. I know because I've extended and maintained an [open source] ASN.1 compiler that fucking does [some of] these things.
Stop spreading this idea that ASN.1 is bloated. It's not. You can cut it down for your purposes. There are only four specifications for the language itself, of which the base one (X.680) is enough for almost everything (the others, X.681, X.682, and X.683, are mainly for parameterized types and formal typed-hole specifications [the ASN.1 "information object system"], which are awesome but you can live without). And these are some of the best-written and most readable specifications ever produced by any standards development organization -- they are a great gift from a few to all of mankind.
> It would have been much easier to create ASN.1 tooling w/ OER/PER and for some suitable subset of ASN.1 in 2001
Just by looking at your past comments - I agree that if google had reused ASN.1, we would be living in a better world. But the sad reality now is that PB has tons of FOSS tooling and ASN.1 barely any (is there any free embedded-grade implementation other than asn1c?), and figuring out which features you can use without having to pledge your kidney and soul to Nokalva is a bit hard.
I tried playing with ASN.1 before settling on protobuf. I don't recall which compiler I used, but I immediately figured out that the datetime datatype apparently isn't supported, and the generated C code was a bloated mess (so is google's protobuf - but not nanopb). Protobuf, on the other hand, was quite straightforward about what is and is not supported. So us mortals who aren't google and have a hard time justifying writing serdes from scratch gotta use what's available.
> Stop spreading this idea that ASN.1 is bloated
"Bloated" might be the wrong word - but it is large and it's damn hard for someone designing a new application to figure out which part is safe to use, because most sources focus on using it for decoding existing protocols.
Other than ASN.1 PER, is there any other widely used encoding format that isn't self-describing? Using TLV certainly adds flexibility around schema evolution, but I feel like collectively we are wasting a fair amount of bytes because of it...
Cap'n'proto doesn't have tags, but it wastes even more bytes in favor of speed. Then again, omitting tags only saves space if you are sending all the fields every time. PER uses a bitmap, which is still a bit wasteful on large sparse structs.
PER sends a bitmap only of OPTIONAL members' (fields') presence/absence. Required members are just where you expect them: right after their preceding members.
Also, JSON and XML are not TLV, though of course they're not really good examples of non-TLV encodings -- certainly they can't be what you had in mind.
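The presence-bitmap idea can be sketched in a few lines (this is a toy illustration of the scheme, not actual PER, which also handles constraints and alignment):

```python
def encode_with_bitmap(optional_fields: list) -> bytes:
    """Presence-bitmap encoding for up to 8 OPTIONAL members: a leading
    bitmap byte marks which members are present (MSB first), then the
    present values follow in schema order with no per-field tags."""
    bitmap = 0
    values = bytearray()
    for i, value in enumerate(optional_fields):
        if value is not None:
            bitmap |= 1 << (7 - i)
            values += value
    return bytes([bitmap]) + bytes(values)

# 8 optional one-byte fields with only two present:
# 1 bitmap byte + 2 value bytes, versus 2 tag bytes + 2 length bytes
# + 2 value bytes for the equivalent TLV delta.
msg = encode_with_bitmap([b"\x01", None, None, b"\x02", None, None, None, None])
```

The flip side, as noted above, is that the bitmap is paid for every optional member of the struct whether present or not, so very large, very sparse structs still carry dead weight.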
Using protobuf is practical enough in embedded. This person isn't the first and won't be the last. Way faster than JSON, way slower than C structs.
However protobuf is ridiculously interchangeable and there are serializers for every language. So you can get your interfaces fleshed out early in a project without having to worry that someone will have a hard time ingesting it later on.
Yes it's a pain how an empty array is a valid instance of every message type, but at least the fields that you remember to send are strongly typed. And field optionality gives you a fighting chance that your software can still speak to the unit that hasn't been updated in the field for the last five years.
On the embedded side, nanopb has worked well for us. I'm not missing having to hand-maintain ad-hoc command parsers on the embedded side, nor working around the quirks and bugs of those parsers on the desktop side.
I went through a very similar thing at around the same age, and one of the insights that really helped me was meditating on impermanence, and cultivating more mental proprioception (awareness of one's subtle thoughts, "mindfulness", whatever you want to call it).
Put simply, it's fine to have goals. But chasing achievement can be unfulfilling. Why? Because all experiences are fleeting. Even if you train for 5 years and win the gold medal, you get to stand on the podium for a few minutes and then life goes on.
It's easy to get people to agree with this intellectually, but you have to really see it on a deep level. There is nothing really to achieve in life. We make goals and cast them out ahead of ourselves in the future, but if that future comes, it doesn't last. We put ourselves on a treadmill of achievement and becoming, then wonder why we feel stressed.
Instead of imagining some future state of completion, work on being aware of how your mind is moving, all the time. Don't chase goals as a way of disproving some fundamental negative assumption about yourself. Don't make happiness contingent on external conditions.
I think that if replacing programmers with "AI" was going well, the people doing it wouldn't shut up about it.
So no, I don't think programming as a job will end soon, because there's no reason to think that it could. No plausible story I've seen about how that would even happen.
I do want to see big expensive products being built and released entirely by C-suites after laying off all their programmers/writers/directors/people who actually know how to do stuff. That should put an end to this madness pretty quickly.
> It became evident to me while playing with Stable Diffusion that it's basically a slot machine.
It can be, and usually is by default. If you set the seed to a fixed number, and everything else remains the same, you'll get deterministic output. A slot machine implies you keep putting in the same thing and get random good/bad outcomes; that's not really true for Stable Diffusion.
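The same principle in miniature (this is a stand-in toy sampler, not Stable Diffusion or any real diffusion API): when every random draw comes from a generator seeded deterministically from (prompt, seed), the output is a pure function of those inputs.

```python
import zlib
import numpy as np

def sample(prompt: str, seed: int, steps: int = 4) -> np.ndarray:
    """Toy stand-in for a diffusion sampler: all the 'randomness' comes
    from one generator seeded from (prompt, seed), so identical inputs
    always produce identical outputs."""
    rng = np.random.default_rng(seed ^ zlib.crc32(prompt.encode()))
    latent = rng.standard_normal(8)              # initial noise
    for _ in range(steps):                       # fake denoising steps
        latent = latent * 0.9 + rng.standard_normal(8) * 0.1
    return latent

a = sample("a cat", seed=42)
b = sample("a cat", seed=42)   # same prompt, same seed -> identical
c = sample("a cat", seed=43)   # new seed -> different output
```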
Strictly speaking, yes, but there is so much variability introduced by prompting that even keeping the seed value static doesn't change the "slot machine" feeling, IMHO. While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.
> IMHO. While prompting is something one can get better at, you're still just rolling the dice and waiting to see whether the output is delightful or dismaying.
You yourself acknowledge that someone can be better than another at getting good results from Stable Diffusion, so how is that in any way similar to a slot machine or rolling dice? The point of those analogies is precisely that it doesn't matter what skill/knowledge you have: you'll get a random outcome. The same is very much not true of Stable Diffusion usage, something you seem to know yourself too.
<< But probably not an ideal workflow for real work.
Hmm. Ideal is rarely an option, so I have to assume you are being careful about phrasing.
Still, despite it being a black box, one can still tip the odds in one's favor, so the real question is what is considered 'real work'? I personally would define that as whatever you are being paid to do. If that premise is accepted, then the tool is not the issue, despite its obvious handicaps.
War will happen as long as ignorance exists. Ignorance may exist as long as humans exist, but let's not pretend that humans are not responsible for wars.
I take your general points. There is a saying "there is no right or wrong, but right is right and wrong is wrong."
Violence is the unnecessary use of force. It may occasionally be necessary to kill in self defense, but it is always a tragedy. Killing people is both bad and a choice. This is actually a harder reality to face than "people be violent".
On a long enough timeline we're all dead. In the near term, expect a lot of stupid decisions and huffing and puffing based on an ideological framing of what the national debt is.
I am not an economist or finance guy, but I have noticed a lot of debt hysteria from people who don't seem to understand basic accounting. That is, one party's asset is another party's liability. You cannot have buying without selling, and so on. Your mortgage is a liability for you, but an asset for your bank. Your checking account is an asset for you, but a liability for your bank.
I'm not saying the debt can grow infinitely, but clearly if some of that debt is held as assets by the non-government (most of the world including you and me) then paying off that debt means a wealth transfer from the non-government back to the government.
This isn't necessarily in my interests. If the government has to claw those dollars back from somewhere, I'd rather them start with the richest people. But that doesn't happen for obvious reasons.
> That is, one party's asset is another party's liability...
The people who don't understand accounting actually seem to be pretty consistent on that point, because one of their other major complaints is inequality, i.e., the people doing the lending have too many assets.
> I have noticed a lot of debt hysteria from people who don't seem to understand basic accounting. That is, one party's asset is another party's liability.
This is correct, but... let me ask you, would it concern you, if my asset is your liability? I mean, would it concern you if you had to pay for my house? How about everything it is I do? How would this not be a concern? If it is not, then why don't you publicly disclose your credit card?
It's tied to everyone's retirement being automatically invested into it. I wonder what it would take for a bunch of white collar workers to cash out their 401k early.