I love seeing everyone share their stories of learning on a TI-8x.
My school recommended the 83+, but I ended up with an 85, probably because it was on sale or something. This meant I couldn't share the games all the kids had on their 83s, so I got my start by copying them by hand and trying to figure out the syntax differences by guessing. After one of those I was able to start making my cheater programs, and I aced geometry because of it.
I went with a TI-89 and had one good friend in HS who had one as well. This would have been late '99-'00, I believe.
Fondest memories were recreating my school C++ project in TI BASIC and showing it to my teacher, using utilities to restore apps and data after a "reset" in math class so I could skip over memorizing equations, grayscale erotica, and of course Phoenix.
In 1998, NationsBank of Charlotte bought Bank of America. They kept the BofA name, but the NationsBank people ran things. Hugh McColl had been the CEO of NationsBank for years, and he stayed on as CEO of BofA until 2001. The next CEO, Ken Lewis, was also from NationsBank. I worked for BofA in Chicago from 2001 to 2009. I talked to people in Charlotte all the time; I almost never talked to people in California.
Now that I think about it, I dealt with people in a lot of regions of the US, but almost nobody on the West Coast.
"Bank of America, Los Angeles, was founded in California in 1923. In 1928, this entity was acquired by the Bank of Italy of San Francisco, which took the Bank of America name two years later"
We have an enterprise Cursor account, so I can try all the mainstream models. Using Composer 2 on our own code, which I obviously have the source for, I couldn't get it to turn on a debug flag that bypasses license checks while I was troubleshooting something. Infuriating. It was like that old Patrick from SpongeBob meme.
I don't understand why we would turn the models into law enforcement officers. Things that are illegal are still illegal, and we have professionals to deal with crimes. I don't need Google to be the arbiter of truth and justice. It's already bad enough trying to get accountability from law enforcement, and they work for us.
They're probably worried about liability. Let's say Oracle finds out you reverse-engineered their DB using Gemini. You can be sure they will sue Google, and not just for providing the tools: you could make the argument that it's actually Gemini doing the reverse engineering, and on Google's hardware no less.
The difference is that IDA Pro doesn't do anything unless you instruct it to; an LLM is unpredictable and may end up performing an action you did not intend. I see it often: it presents me options but doesn't wait for my response, it just starts doing what it thinks I want.
This. It's going to be tricky for the frontier model labs to argue, when their models take illegal actions, that they didn't intentionally design them to act that way.
I'm not even sure how one would construct a viable legal argument around that for SOTA models + harnesses, given the amount of creative choices that go into building them.
It'd be something like "Yes, we spent billions of dollars and thousands of person-hours creating these things, but none of that creative effort was responsible for or influenced this particular illegal choice the model made."
And they're caught between a rock and a hard place, because if they cripple initiative, they kill their agentic utility.
Ultimately, this will take a DMCA Section 512-like safe harbor law to definitively clear up: making it clear that outcomes from LLMs are the responsibility of their prompting users, even if the LLM produces unintended actions.
> I'm not even sure how one would construct a viable legal argument around that for SOTA models + harnesses, given the amount of creative choices that go into building them.
I'm not a lawyer, but to me the legal case seems pretty obvious. "We spent billions of dollars creating this thing to be a good programmer, but we did not intend for it to reverse engineer Oracle's database. No creative effort was spent making it good at reverse engineering Oracle's database. The model reverse-engineered Oracle's database because the user directed it to do so."
If merely fine-tuning an LLM to be good at reverse engineering is enough to be found liable when a user does something illegal, what does that mean for torrent clients?
Which is going to be hard to explain to a judge and jury, if it comes to that: despite investing time, money, and effort (and no doubt test cases) into making a model better at reverse engineering... they shouldn't be liable when that model is used for reverse engineering.
Afaik, liability typically turns on intentional development of a product capability.
And there's no way in hell I'd take a bet against the frontier labs having reverse engineering training data, validation / test cases, and internal communications specifically talking about reverse engineering.
> “making it clear that outcomes from LLMs are the responsibility of their prompting users, even if the LLM produces unintended actions”
So if I ask “how does a real-world, production-quality database implement indexes?” and it says “I disassembled Oracle and it does XYZ,” then I am liable and owe Oracle a zillion dollars?
Whereas if I caveat “you may look at the PostgreSQL or SQLite or other free database engine source code, or industry studies, or academic papers; you may not disassemble anything or touch any commercial software” - and it disassembles something anyway, I’m still liable?
Who would dare use an LLM for anything in those circumstances?
We need that lawsuit to happen already so we can establish precedent. The person in the driver's seat of the Tesla should be at fault. The engineer using the LLM should be at fault. The person behind the gun, not the manufacturer, should be at fault.
> The person in the driver's seat of the Tesla should be at fault.
I don't think this is a good analogy. For Tesla right now it might fly. However, when their software gets to Waymo's level of autonomy, I would expect liability to shift to the manufacturer.
If anything, I think that would be the true proof of a company trusting its software enough to allow for autonomous driving.
In America, whoever has the most money is liable. It's not worth it for the legal industry otherwise. The lawyer earns his pay by convincing the court that whatever precedent has been established doesn't apply to his case.
> Things that are illegal are still illegal and we have professionals to deal with crimes.
This is quite a naive take, though. The direction of travel is more fascism in Western governments, where the duties of traditional policing are taken over by big corporations whilst police forces are gutted and made impotent.
> I don't understand why we would turn the models into law enforcement officers
It's a simple corporate risk-minimization strategy. Just look at how universally despised Grok is on HN. Not because it's a bad model, but because it has less aggressive alignment, which means it can be coaxed into saying things that get xAI pilloried here and elsewhere.
Grok was worse than even some of the more mediocre open models at actually doing anything (at least anything tech-work related). GPT and Claude just do what I ask most of the time. With Grok, it's a chore just getting it to understand the question.
You're pulling your hair out trying to figure out what on earth you need to do to land in the right place in whatever topsy-turvy embedding Grok is using?
I also used to see constant Grok boosting and slack-cutting here and on Reddit back in the peak-subsidy era, when xAI was giving out hundreds of dollars of credits for free per month.
After they killed that, and then stopped handing out free model access to users of every Cline fork for weeks following model releases, vibe-coder hype moved back to the Chinese models for cost and the SOTA models for quality.
Agreed. There are plenty of instances where people here on HN do mental gymnastics to justify using a truly good product even when the company that builds it is morally bankrupt.
Not a criticism (I probably engage in that sort of thinking myself sometimes), just something I've observed. If Grok were actually good, we'd see that phenomenon here, but we don't.
No, they've clearly put a lot of work into alignment. It's just that they've been trying to align it with Elon Musk rather than Amanda Askell. Unfortunately the more anti-woke they try to make it, the worse it seems to perform.
> Unfortunately the more anti-woke they try to make it, the worse it seems to perform.
Probably because being anti-woke generally goes hand in hand with going against facts and logic. Cull the "woke", lose the facts+logic. Not that they care about that anyway.
Software engineering is one thing, but if you look 10-20 years into the future and everyone can run models equivalent to today's SOTA locally with zero monitoring or censorship, that could... not be good.
Some people will use them responsibly but a lot of people will not.
LLMs are already frying some people's brains, and there are some human desires that should not be encouraged.
1. "Nobody" will likely read it. Don't overthink, don't be shy.
2. If you don't post, you won't put in the effort to make it good. Finish your thoughts, fix your grammar, add headers and bullets and tags and pictures.
I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.