Hacker News | xienze's comments

> If it's harder to work with, it's harder to work with, it's not the end of the world.

Yeah it just takes longer and makes you miserable in the process. No biggie!


We will still work ~8ish hours that day, and time will pass anyways.

> they just move slower so it’s not as attractive of a target.

Well, there are other things. Idiomatic Maven usage pins exact dependency versions rather than declaring “version >= x.y.z”, and Maven doesn’t run arbitrary scripts upon pulling dependencies, for one thing. The Java classpath also doesn’t make it possible to load multiple versions of the same library at the same time. That helps a lot too.

NPM and the way node does dependency management just isn’t great. Never has been.


Well, two things. First, “hi” isn’t a good prompt for these thinking models. They’ll have an identity crisis trying to answer it. Stupid, but it’s how it is. Stick to real questions.

Second, for the best performance on a Mac you want to use an MLX model.


Thanks! I assumed simpler == faster, but my ignorance is showing itself.

I am using the model they recommended in the blog post - which I assumed was using MLX?


> Microsoft is seriously the worst offender in shoving AI down everyone's throats.

The worst, or just ahead of the curve? Because you’re kidding yourself if you think every other AI company, or company integrating AI into its products, won’t be using it as an advertising delivery vehicle.


Exactly: the worst, or the best?

> Not even that! This study doesn't even say contamination is causing overestimation. It says that it's possible.

From the article:

> They found that on average, the gloves imparted about 2,000 false positives per millimeter squared area.

I dunno, that seems like a lot of false positives. Doesn’t that strongly imply that overestimation would be a pretty likely outcome here? Sounds like a completely sterile 1mm^2 area would raise a ton of false positives because of just the gloves.


The way you mitigate this is by using negative samples: basically blank swabs/tubes/whatever that don't contain the substance you're testing for, but that are handled the same way as the real samples.

Then the tested result is Actual Sample Result - Negative Sample Result.

So you'd expect a microplastic sample to read 2,000 plus N per mm^2, where N is the actual result of your test.
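To make the arithmetic concrete, here's a minimal sketch of blank correction (the function name and the numbers besides the article's ~2,000/mm^2 figure are hypothetical):

```python
# Sketch of blank-correction for a contamination-prone assay.
# The blank (negative control) contains none of the analyte but is
# handled identically to real samples, so its count estimates the
# background contamination from gloves, tubes, etc.

def blank_corrected(sample_count: int, blank_count: int) -> int:
    """Subtract the negative-control background; clamp at zero."""
    return max(sample_count - blank_count, 0)

# If gloves alone impart ~2,000 false positives per mm^2, a raw reading
# of 2,350 particles/mm^2 implies only ~350 attributable to the sample.
print(blank_corrected(2350, 2000))  # 350
print(blank_corrected(1500, 2000))  # 0 (reading below background)
```

In practice labs run multiple blanks and subtract their mean, but the principle is the same.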


Yes? Most people don’t live their entire lives in a lab wearing nitrile gloves, so there’s an argument to be made that the concentration of microplastics found in that setting is not reflective of everyday life.

So, not that microplastics don’t exist, but that they don’t exist to the same degree as in a lab environment.


I wouldn't be surprised if, e.g., all these paper-thin synthetic (plastic) disposable parts and fabrics used in labs shed microplastics far more than, say, synthetic fabrics designed to survive a few dozen machine washes, or upholstery meant to withstand tens of thousands of sitting cycles, never mind solid plastics (e.g. reusable food containers, furniture surfaces).

That also has pretty poor memory bandwidth: about 283 GB/s, I think.

Yeah. The main selling point I'd say is the onboard ConnectX-7 hardware.

> This isn't a solvable problem without world models.

I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.


Nothing stops that from happening; the model just needs to be trained on that DSL. Though at that point it returns to its original form as a better autocomplete/IntelliSense :).

That will likely happen in specialized fields. We can already see tools like Figma, Mira, and others generate functional-ish frontend components in full TypeScript with corresponding styles (which are also selectable and configurable in the interface). They're not quite as free-form, since they load their own base framework and components to ensure consistency and sanity/error-checking, but even then they generate usable, modifiable components that you can work with precisely in your normal DSL.

For video, this likely exists, or is being worked on as we speak. All specialized domain tools will go towards this model to allow those domain experts to use the tools with the precision they expect AND the agentic gains we already take for granted.


If only there was some kind of formalised "language" to, as it were, "programme" the automata but alas such a concept is impossible to conceptualise.

To be fair, it’s not really their fault that there are people who want to treat work that would normally be considered a way to pick up a few bucks during free time as a full-time career.

Sure, go ahead and make fast-food delivery a highly regulated line of work that pays $30/hour with benefits. Just don’t be surprised when it’s no longer economically viable for DoorDash to continue operating.


DoorDash not being able to continue operating doesn’t bother me one bit. They aren’t curing cancer, and a society with fewer food delivery options based on extorting poor people to turn their vehicle’s equity into less cash than it’s worth will still be a fine society. But I would be perfectly fine if they continued to exist and simply provided some basic guarantees like subsidized healthcare for their employees.

> This isn't any different than the "person who wrote it already doesn't work here any more".

Yeah but that takes years to play out. Now developers are cranking out thousands of lines of “he doesn’t work here anymore” code every day.


> Yeah but that takes years to play out.

https://www.invene.com/blog/limiting-developer-turnover has some data, that aligns with my own experience putting the average at 2 years.

I have been doing this a long time: my longest-running piece of code lasted 20 years; my current longest is 10. Most of my code is long dead and replaced because businesses evolve, close, or move on. A lot of my code was never meant to be permanent. It solved a problem in the moment and accomplished a task, fit for purpose and disposable (and riddled with cursing, manual loops, and goofy exceptions just to get the job done).

Meanwhile I have seen a LOT of god awful code written by humans. Business running on things that are SO BAD that I still have shell shock that they ever worked.

AI is just a tool. It's going from hammers to nail guns. The people involved are still the ones who are ultimately accountable.


