Hacker Newsnew | past | comments | ask | show | jobs | submit | measurablefunc's commentslogin

The task is ill-defined.

You make it faster

Fewer instructions doesn't mean it's faster. It can be faster but it's not guaranteed in general. Obvious counterexample is single threaded vs multi-threaded code. Single threaded code will have fewer instructions but won't necessarily be faster.

It does in this case; you can read the assignment to see that it is all single-threaded

I read it, you're mistaken.

I did the assignment my guy

That's great but I didn't ask & that's still not addressing my point.

I didn’t ask you to be rude or wrong either, yet here we are. The assignment is explicitly single core and cycle accurate. Your point is completely irrelevant and shows a disconnect with the content being discussed.

It's neither rude nor wrong to ask for evidence to support claims being made in what appears to be corporate advertising. The claim is their LLM is better than a person, I asked for evidence. None was presented. It's not complicated.

Generate instructions for their simulator to compute some numbers (hashes) in whatever is considered the memory of their "machine"¹. I didn't see any places where they actually disallow cheating b/c it says they only check the final state of the memory² so seems like if you know the final state you could just "load" the final state into memory. The cycle count is supposedly the LLM figuring out the fewest number of instructions to compute the final state but again, it's not clear what they're actually measuring b/c if you know the final state you can cheat & there is no way to tell how they're prompting the LLM to avoid the answers leaking into the prompt.

¹https://github.com/anthropics/original_performance_takehome/...

²https://github.com/anthropics/original_performance_takehome/...


Well, they read your code in the actual hiring loop.

My point still stands. I don't know what the LLM is doing so my guess is it's cheating unless there is evidence to the contrary.

I guess your answer to "Try to run Claude Code on your own 'ill-defined' problem" would be "I'm not interested." Correct? I think we can stop here then.

Well that's certainly a challenge when you use LLMs for this test driven style of programming.

Why do you assume it’s cheating?

Because it's a well know failure mode of neural networks & scalar valued optimization problems in general: https://www.nature.com/articles/s42256-020-00257-z

Again, you can just read the code

You're missing the point. There is no evidence to support their claims which means they are more than likely leaking the memory into the LLM prompt & it is cheating by simply loading constants into memory instead of computing anything. This is why formal specifications are used to constrain optimization. Without proof that the code is equivalent you might as well just load constants into memory & claim victory.

> There is no evidence to support their claims

Do you make a habit of not presuming even basic competence? You believe that Anthropic left the task running for hours, got a score back, and never bothered to examine the solution? Not even out of curiosity?

Also if it was cheating you'd expect the final score to be unbelievably low. Unless you also suppose that the LLM actively attempted to deceive the human reviewers by adding extra code to burn (approximately the correct number of) cycles.


This has nothing to do w/ me & consistently making it a personal problem instead of addressing the claims is a common tactic for people who do not know what it means to present evidence for their claims. Anthropic has not provided the necessary evidence for me to conclude that their LLM is not cheating. I have no opinion on their competence b/c that is not what is at issue. They could be incompetent & not notice that their LLM is cheating at their take home exam but I don't care about that.

You are implying that you believe them to be incompetent since otherwise you would not expect evidence in this instance. They also haven't provided independent verification of their claims - do you suspect them of lying as well?

How do you explain the specific score that was achieved if as you suggest the LLM simply copied the answer directly?


Either they have proof that their LLM is not cheating or they don't. The linked post does not provide evidence that the LLM is not cheating. I don't have to explain anything on my end b/c my claim is very simple & easily refuted w/ the proper evidence.

And? Anthropic is not aware of this 2020 paper? The problem is not solvable?

Why are you asking me? Email & ask Anthropic.

Obviously, because you use this old paper as an argument.


There is no RL for programming languages. Especially ones w/ no significant amount of code.

I guess the op was implying that is something fixable fairly easily?

(Which is true - it's easy to prompt your LLM with the language grammar, have it generate code and then RL on that)

Easy in the sense of "it is only having enough GPUs to RL a coding capable LLM" anyway.


If you can generate code from the grammar then what exactly are you RLing? The point was to generate code in the first place so what does backpropagation get you here?

Post RL you won't need to put the grammar in the prompt anymore.

The grammar of this language is no more than a few hundred tokens (thousands at worst) & current LLMs support context windows in the millions of tokens.

Sure.

The point is that your statement about the ability to do RL is wrong.

Additionally your response to the Deepseek paper in the other subthread shows profound and deliberate ignorance.


Theorycrafting is very easy. Not a single person in this thread has shown any code to do what they're suggesting. You have access to the best models & yet you still haven't managed to prompt it to give you the code to prove your point so spare me any further theoretical responses. Either show the code to do exactly what you're saying is possible or admit you lack the relevant understanding to back up your claims.

> You have access to the best models & yet you still haven't managed to prompt it to give you the code to prove your point so spare me any further theoretical responses. Either show the code to do exactly what you're saying is possible

GPU poor here though...

To quote someone (you...) on the internet:

> More generally, don't ask random people on the internet to do work for you for free.

https://news.ycombinator.com/item?id=46689232


Claims require evidence & if you are unwilling to present it then admit you do not have any evidence to support your claims. It's not complicated. Either RL works & you have evidence or you do not know & can not claim that it works w/o first doing the required due diligence which (shockingly) actually requires work instead of empty theory crafting & hand waving.

Go read the DeepSeek R1 paper

Why would I do that? If you know something then quote the relevant passage & equation that says you can train code generators w/ RL on a novel language w/ little to no code to train on. More generally, don't ask random people on the internet to do work for you for free.

Your other comment sounded like you were interested in learning about how AI labs are applying RL to improve programming capability. If so, the DeepSeek R1 paper is a good introduction to the topic (maybe a bit out of date at this point, but very approachable). RL training works fine for low resource languages as long as you have tooling to verify outputs and enough compute to throw at the problem.

imo generally not worth it to keep going when you encounter this sort of HN archetype

So you should have no problem bringing up the exact passages & equations they use for their policies.

well, that’s one way to react to being provided with interesting reading material.

Bring up passage that supports your claim. I'll wait.

Not exactly sure what you are looking for here.

That GRPO works?

> Group Relative Policy Optimization (GRPO), a variant reinforcement learning (RL) algorithm of Proximal Policy Optimization (PPO) (Schulman et al., 2017). GRPO foregoes the critic model, instead estimating the baseline from group scores, significantly reducing training resources. By solely using a subset of English instruction tuning data, GRPO obtains a substantial improvement over the strong DeepSeekMath-Instruct, including both in-domain (GSM8K: 82.9% → 88.2%, MATH: 46.8% → 51.7%) and out-of-domain mathematical tasks (e.g., CMATH: 84.6% → 88.8%) during the reinforcement learning phase

Page 2 of https://arxiv.org/pdf/2402.03300

That GRPO on code works?

> Similarly, for code competition prompts, a compiler can be utilized to evaluate the model’s responses against a suite of predefined test cases, thereby generating objective feedback on correctness

Page 4 of https://arxiv.org/pdf/2501.12948


None of those are novel domains w/ their own novel syntax & semantic validators, not to mention the dearth of readily available sources of examples for sampling the baselines. So again, where does it say it works for a programming language with nothing but a grammar & a compiler?

To quote you:

> here is no RL for programming languages.

and

> Either RL works & you have evidence

This is just so completely wrong, and here is the evidence.

I think everyone in this thread is just surprised you don't seem to know this.

Haven't you seen the hundreds of job ads for people to write code for LLMs to train on?


You're not going to get less confused by doubling down. None of your claims are valid & this is because you haven't actually tried to do what you're suggesting. Taking a grammar & compiler & RLing will get you nowhere.

not even wrong

Exactly.

Too many mistakes & ill-defined concepts to correct them all but their conception of Godel's incompleteness theorem is in the "not even wrong" category.

Tokens do not encode semantics.

You can choose which token to sample based on language semantics. You simply don't sample invalid ones. So the language should be restrictive on what tokens it allows enough that invalid code is impossible.

> You can choose which token to sample based on language semantics

Can you though?

> the language should be restrictive on what tokens it allows

This is a restriction on the language syntax, not its semantics.


This is the right answer. Unless there is some equivalent of it on the open internet which their search engine can find you should not expect a good outcome.

"good outcome" is pretty subjective, I do get useful productivity gains from some LLM work, but the issues are the same as they always have been.

That's probably b/c you know how to write code & have enough of an understanding about the fundamentals to know when the LLM is bullshitting or when it is actually on the right track.

All of these things have readily available analogues on the web which means they are more than likely just laundering open source code & claiming victory.

There are many open-source toy browser implementations available, so this seems quite likely.

It doesn't compile so no victory

Just the usual corporate marketing & hype.

In 1897, the Indiana General Assembly attempted to legislate a new value for pi, proposing it be defined as 3.2, which was based on a flawed mathematical proof. This bill, known as the Indiana pi bill, never became law due to its incorrect assertions and the prior proof that squaring the circle is impossible: https://en.wikipedia.org/wiki/Indiana_pi_bill

You're forgetting that some equations have π/2 so on balance nothing will change. It will be the same number of symbols.

I don't think it's just the sheer number of symbols. It's also the fact that the symbol τ means "turn". So you can say "quarter-turn" instead of π/2.

I'm not sure why that point gets lost in these discussions. And personally, I think of the set of fundamental mathematical objects as having a unique and objective definition. So, I get weirdly bothered by the offset in the Gamma function.


One of my hobbies is reading a paper until I find a statement that is seems obviously false to me

    > mathematicians can derive new knowledge by reasoning from axioms without external information
But their entire section on paradoxes is full of what appears to be nonsense to me b/c I have actually studied the listed topics. They're sweeping too many assumption under the rug & I am confident the rest of the paper is not going to resolve any of the issues I noticed.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: