What what? Are you surprised it's that low, that high, that they can tell what their revenue is, that they report it on a monthly rather than annual basis, or something totally different?
It's going to be pretty hard to get a good answer to whatever you're having difficulty understanding if you can't be bothered to write more than a word.
I find it bizarre that no one here seems to be commenting on the insane amount of capex redirected toward AI infrastructure buildout as a reason for such decisions. I only hear "bad product" or "COVID over-hiring," but those sound like cope to my cynical mind.
Is that the only factor? Is insider trading objective? (Hint: it's not; read the law.) Is it objective only when we can attach a quantitative measure to it? What's the relative "value" of $1M in insider-trading profit versus a single child's destroyed psyche? How much could that child have contributed to society had the harm not been done? Is there really much subjectivity in the harm done to those kids?
All that to say: I don't think "objectivity" should be the (main) factor in deciding whether an adequate punishment exists.
I strongly disagree. I've gone through a similar education system, and it's soul-crushing not to perform well in those singular events that define your career and identity.
I'm assuming this is for tool calls and orchestration. I didn't know we needed more exploitable parallelism from the hardware; the bottlenecks were in software (you're not running 10,000 agents, or their downstream tool calls, concurrently).
Can someone explain what the Vera CPU does that a traditional CPU doesn't?
But at what stage are we asking for that RAM? If it's the inference stage, doesn't that belong to the GPU<>memory path, which has nothing to do with the CPU?
I did see they have unified CPU/GPU memory, which may reduce the cost of host/device transfers, especially now that we're probably moving more and more memory around with longer-context tasks.
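For what it's worth, the generic CUDA unified-memory model already shows what that buys you. A minimal sketch (plain CUDA, nothing Vera-specific, and `inc` is just a toy kernel I made up):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Toy kernel: increment each element in place on the GPU.
    __global__ void inc(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] += 1.0f;
    }

    int main() {
        const int n = 1 << 20;
        float* x = nullptr;
        // One allocation visible to both CPU and GPU; no explicit cudaMemcpy.
        cudaMallocManaged(&x, n * sizeof(float));
        for (int i = 0; i < n; ++i) x[i] = 1.0f;  // written by the CPU
        inc<<<(n + 255) / 256, 256>>>(x, n);      // touched by the GPU
        cudaDeviceSynchronize();
        std::printf("x[0] = %f\n", x[0]);         // read back on the CPU
        cudaFree(x);
        return 0;
    }

On a discrete GPU the runtime migrates pages behind that single pointer; my understanding is that on coherent CPU<>GPU designs the same code avoids most of the migration cost because both sides share the memory natively.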
I'm in a CS program right now, and I've seen wild shifts from early ChatGPT (GPT-3.5) to the current models:
1) I've seen students score A grades in courses they barely attended all semester
2) Using generative AI to solve assignments and take-home exams felt "too easy," and at first I had ethical qualms about it
3) At this point, so many students have complex side projects that everyone's resume looks the same. It's harder to create a competitive edge.
> 3) At this point, so many students have complex side projects that everyone's resume looks the same. It's harder to create a competitive edge.
This is one of the things that breaks my heart personally.
I have personal projects I'm so proud of, ones that took me years to build or considerable effort reading through papers and implementing by hand.
I used to show these in interviews with such pride, but now they're at best neutral to my application, and more likely a knock against me, because they look so easy to vibe-code.
I guess it would be like if you had spent the last decade writing novels you were really proud of and felt were part of the small contribution you'd made to humanity, and then overnight people decided they were actually awful and of zero value.
Everything I ever wrote: all the SWE blog posts, tutorials, books, GitHub repos. It's all useless now.
You put this well. Now that you mention it, I sometimes find myself trying to defend my earlier work as "pre-ChatGPT," as if that even matters. Relegating future work of that kind to some sort of romanticized "artisanal craftsmanship" feels hollow. That said, I'm more productive than ever and have finally gotten stalled projects going again, and those projects have made my own life easier as a result. More utility from the result than from having walked the journey, perhaps.
It's a variant of the knapsack problem. But neither Claude nor I initially realized that: it became clear only after the solution was found and proven.
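(For anyone unfamiliar: 0/1 knapsack is "pick a subset of items with weights and values to maximize value under a weight cap," and the textbook DP is a few lines. Hypothetical weights and values below, not the actual instance from the post:

    #include <cstdio>
    #include <vector>
    #include <algorithm>

    // Classic 0/1 knapsack DP: dp[c] = best value achievable with
    // capacity c using the items considered so far.
    int knapsack(const std::vector<int>& weight,
                 const std::vector<int>& value, int capacity) {
        std::vector<int> dp(capacity + 1, 0);
        for (size_t i = 0; i < weight.size(); ++i)
            // Walk capacity downward so each item is used at most once.
            for (int c = capacity; c >= weight[i]; --c)
                dp[c] = std::max(dp[c], dp[c - weight[i]] + value[i]);
        return dp[capacity];
    }

    int main() {
        std::vector<int> w = {3, 4, 5}, v = {30, 50, 60};
        std::printf("%d\n", knapsack(w, v, 8));  // prints 90 (items 0 and 2)
        return 0;
    }

The hard part, as the parent says, is recognizing that your problem reduces to this at all.)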
What??