Hacker News | sanex's comments

Those are generally used by someone who is behind. See: everything Meta does.

I love seeing everyone share their stories of learning on a TI-8x.

My school recommended the 83+ but I ended up with an 85, probably because it was on sale or something. This meant I couldn't share the games all the other kids had on their 83s, so I got my start by copying them by hand and trying to figure out the syntax differences by guessing. After one of those I was able to start making my cheater programs, and I aced geometry because of it.


I went with a TI-89 and had one good friend in HS that had one as well. This would have been late 99-00, I believe.

Fondest memories were recreating my school C++ project in TI BASIC and showing it to my teacher, using utilities to restore apps and data after a "reset" in math class so I could skip over memorizing equations, grayscale erotica, and of course Phoenix.

https://www.youtube.com/watch?v=ke6DnczjaK0


BoA having roots in the Bank of Italy makes this even funnier.

Actually, it does not have roots in the Bank of Italy.

In 2000, NationsBank in Charlotte bought Bank of America. They used the BofA name, but the NB people ran things. Hugh McColl had been the CEO of NB for years, and he was CEO of BofA for a year. The next CEO, Ken Lewis, was also from NB. I worked for BofA in Chicago from 2001 to 2009. I talked to people in Charlotte all the time. I almost never talked to people in California.

Now that I think about it, I dealt with people in a lot of regions of the US, but almost nobody on the West Coast.


"Bank of America, Los Angeles, was founded in California in 1923. In 1928, this entity was acquired by the Bank of Italy of San Francisco, which took the Bank of America name two years later"

https://en.wikipedia.org/wiki/Bank_of_America


So why is the headquarters in Charlotte, genius?

A lot of things can happen between 1928 and 1999.

NationsBank also "took the Bank of America name".


i bet the answer rhymes with "shmaxes"

We have an enterprise Cursor account so I can try all the mainstream models. Using Composer 2 on our own code, which I obviously have the source for, I couldn't get it to turn on a debug flag to bypass license checks while I was troubleshooting something. Infuriating. It was like that old Patrick from SpongeBob meme.

I don't understand why we would turn the models into law enforcement officers. Things that are illegal are still illegal and we have professionals to deal with crimes. I don't need Google to be the arbiter of truth and justice. It's already bad enough trying to get accountability from law enforcement and they work for us.


They're probably worried about liability. Let's say that Oracle finds out you reverse engineered their DB using Gemini. You can be sure they will sue Google. Not just for providing the tools, but you could make the argument that it's actually Gemini doing the reverse engineering, and on Google's hardware no less.

Let's say that Oracle finds out you reverse engineered their DB using IDA Pro. Would you expect Oracle to sue Hex Rays?

I don't understand why everything changes as soon as an LLM is involved. An LLM is just software.


The difference is that IDA Pro doesn't do anything unless you instruct it to; an LLM is unpredictable and may end up performing an action you did not intend. I see it often: instead of presenting me options and waiting for my response, it just starts doing what it thinks I want.

This. It's going to be tricky for the frontier model labs to argue they didn't intentionally design their models to act this way when the models take illegal actions.

I'm not even sure how one would construct a viable legal argument around that for SOTA models + harnesses, given the amount of creative choices that go into building them.

It'd be something like "Yes, we spent billions of dollars and thousands of person-hours creating these things, but none of that creative effort was responsible for or influenced this particular illegal choice the model made."

And they're caught between a rock and a hard place, because if they cripple initiative, they kill their agentic utility.

Ultimately, this will take a DMCA Section 512-like safe harbor law to definitively clear up: making it clear that outcomes from LLMs are the responsibility of their prompting users, even if the LLM produces unintended actions.


> I'm not even sure how one would construct a viable legal argument around that for SOTA models + harnesses, given the amount of creative choices that go into building them.

I'm not a lawyer, but to me the legal case seems pretty obvious. "We spent billions of dollars creating this thing to be a good programmer, but we did not intend for it to reverse engineer Oracle's database. No creative effort was spent making it good at reverse engineering Oracle's database. The model reverse-engineered Oracle's database because the user directed it to do so."

If merely fine-tuning an LLM to be good at reverse engineering is enough to be found liable when a user does something illegal, what does that mean for torrent clients?


> No creative effort was spent making it good at reverse engineering Oracle's database.

That's the bit that's going to be nasty in evidence. 'So you didn't have any reverse engineering in your training or testing sets?'


Reverse engineering skill is just a byproduct of programming skill. They go hand in hand.

Yes.

Which is going to be hard to explain to a judge and jury, if it comes to that: how, despite investing time, money, and effort (and no doubt test cases) into making a model better at reverse engineering, they shouldn't be liable when that model is used for reverse engineering.

Afaik, liability typically turns on intentional development of a product capability.

And there's no way in hell I'd take a bet against the frontier labs having reverse engineering training data, validation / test cases, and internal communications specifically talking about reverse engineering.


> "making it clear that outcomes from LLMs are the responsibility of their prompting users, even if the LLM produces unintended actions"

So if I ask "how does a real-world production-quality database implement indexes?" and it says "I disassembled Oracle and it does XYZ", then I am liable and owe Oracle a zillion dollars?

Whereas if I caveat “you may look at the PostgreSQL or SQLite or other free database engine source code, or industry studies, academic papers; you may not disassemble anything or touch any commercial software” - if it does, I’m still liable?

Who would dare use an LLM for anything in those circumstances?


If they thought they would succeed, no doubt Oracle would sue. I expect bad behavior from multinationals, especially Oracle.

They would not even expect it to succeed, just make an example of the company (the lawsuit is the punishment) to discourage others.

We need that lawsuit to happen already so we can establish precedent. The person in the driver's seat of the Tesla should be at fault. The engineer using the llm should be at fault. The person behind the gun not the manufacturer should be at fault.

We shouldn't need a lawsuit. The legislative branch should pass a law clarifying those things, that's their job.

Then you need a lawsuit to determine whether the law is “constitutional”.

> The person in the driver's seat of the Tesla should be at fault.

I don't think this is a good analogy. For Tesla right now it might fly. However, when their software gets to Waymo level of autonomy, I would expect liability to shift to the manufacturer.

If anything, I think that would be the true proof of a company trusting their software to allow for autonomous driving


> However, when their software gets to Waymo level of autonomy

Luckily that won’t happen.


Also especially if they claim they're selling autonomous cars

I believe that Mercedes does offer manufacturer liability.

In America, whoever has the most money is liable. It's not worth it for the legal industry otherwise. The lawyer earns his pay by convincing the court that whatever established precedent exists doesn't apply to his case.

Unfortunately.

Also because Google is the one with a lot more money than whoever was using Gemini.

they're very worried about liability, it used to be a small thing, now it's as important as being on the frontier

sad to see, bc China doesn't give a fuck about liability, this is a structural disadvantage

the labs don't feel very protected by government, meanwhile the chinese government is yet again fostering protectionism

american industry keeps getting fucked by dubious lawmakers


> Things that are illegal are still illegal and we have professionals to deal with crimes.

This is quite a naive take, though. The direction of travel is more fascism in Western governments, where duties of traditional policing are taken over by big corporations while police forces are gutted and made impotent.


My small town police force has an MRAP, definitely not impotent.

Maybe control is also profitable.

> I don't understand why we would turn the models into law enforcement officers

It's a simple corporate risk minimization strategy. Just look at how universally despised Grok is on HN. Not because it's a bad model, but because it has less aggressive alignment, which means it can be coaxed into saying things that get xAI pilloried here and elsewhere.


I just think Grok is a bad model. I haven't had success with it.

This.

I tried them all.

Grok was worse than even some of the more mediocre open models at actually doing anything (at least anything tech-work related). GPT and Claude just do what I ask most of the time. With Grok, it's like a chore just getting it to understand the question.

You're pulling your hair out trying to figure out what on earth you need to do to land in the right place in whatever topsy-turvy embedding Grok is using.


It's mostly just a bad model. Plenty of people would be willing to overlook the baggage if the model was even marginally better than the competition.

I also used to see Grok boosting/slack-cutting on here/Reddit constantly back in Peak Subsidy when xAI was giving out hundreds of dollars of credits for free per month.

After they killed that and then stopped handing out free model access to users of every Cline fork for weeks following model releases, vibe coder hype moved back to Chinese models for cost and the SOTA models for quality.


Agreed. There are plenty of instances where people here on HN do mental gymnastics to justify using a truly good product even when the company that builds it is morally bankrupt.

Not a criticism (I probably engage in that sort of thinking myself sometimes), just something I've observed. If Grok were actually good, we'd see that phenomenon here, but we don't.


I just read a bunch of compelling “Grok is better at this” use cases in a thread yesterday.

I’m not rushing towards it, but, had to mention.


No, they've clearly put a lot of work into alignment. It's just that they've been trying to align it with Elon Musk rather than Amanda Askell. Unfortunately the more anti-woke they try to make it, the worse it seems to perform.

> Unfortunately the more anti-woke they try to make it, the worse it seems to perform.

Probably because being anti-woke generally goes hand in hand with going against facts and logic. Cull the "woke", lose the facts+logic. Not that they care about that anyway.


Grok is despised because it has more aggressive alignment.

What does the "it" in "I couldn't get it to turn on a debug flag" refer to?

Composer

Software engineering is one thing but if you look 10-20 years into the future and everyone can run models equivalent to today's SoTA locally with zero monitoring or censorship, that could... not be good.

Some people will use them responsibly but a lot of people will not.

LLMs are already frying some people's brains, and there are some human desires that should not be encouraged.


That's why there won't be any local models in 10-20 years. The latest Chinese models are already hosted on proprietary clouds.

That's a wild assumption and most certainly wrong. Open models will continue to evolve with or without Chinese labs.

1. "Nobody" will likely read it. Don't overthink it, don't be shy.

2. If you don't post, you won't put in the effort to make it good. Finish your thoughts, fix your grammar, add headers and bullets and tags and pictures.


Overfitting to the benchmarks since 1996


Having the launch website be just scrollable generated images is so slick. I love this.


You can click the images too, to see the prompt that got them gen'ed.


They "only" have about 250 but they're authorized for 3000. They just bought a satellite company this week though that might boost the numbers a bit.


As late as 2010 there were "only" around 1000 satellites in orbit.


Apparently they lose authorization if they don't get to 1500, hence the scare quotes.


I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.


With Cursor it's half off right now.

