Hacker News | _the_inflator's comments

I totally agree. He appeared to act against his employer and actively undermined Meta's effort to attract talent through his behavior on X.

And I stopped reading him, since he - in my opinion - trashed on autopilot everything the other 99% did - and these 99% were already beyond two standard deviations of greatness.

It is even more problematic if you have absolutely no results, e.g. products, to back your claims.


As one MS Director put it out of frustration: "We do test, a lot. Our testers are called end users. That's it."

More precisely: he said MSlers get paid by results, i.e. achieved business value. Testers exist and are called "end users". Testing is mandatory and part of the core philosophy - they just do it differently.

The reason: fear of missing out if moving too slow.

I reminisce about the times when you put in a CD without an internet connection. Today's Office is a mess: thousands of half-finished apps, subject to cancellation at any time. Windows XP's UI was dubbed "glossy" - some of Office's app UIs are LSD trips for kids. This is ridiculous. Nothing to work with, and in no way usable for customer presentations.


> We do test, a lot. Our testers are called end users.

Maybe they should read bug reports posted by the end users, and not have half-baked solutions posted by Very Ignorant Persons.


That's a good point. I think it would be bearable if they actually had a good feedback platform and interacted with their users. Feedback Hub is just terrible: slow, featureless, and built on top of their buggiest UI platform.

Unfortunately their audience is probably too big.


There's no business case for reading bug reports.


Crowdsource it. (Microsoft could) start a website called WeHeardYouLetsFixTeams.com where users submit bug reports for Teams out in public, other people vote on how much each bug is a pain point for them, and the Teams, er, team commits to fixing the top 5 each quarter. Do a whole media circus around it. Do a sales push to get people off Slack/Zulip/Discord/Telegram/Meet/etc. Get some industry accolades for listening to your users.


Only when you are a monopoly


> As one MS Director put it out of frustration: "We do test, a lot. Our testers are called end users. That's it."

This has been true since at least Win 95. One usually needed to wait until SP2 to get a semblance of quality from Microsoft.

And since Vista, they have gotten rid of Service Packs. This says a lot about their quality culture.


No one gets fired for temporarily tuning out of their smartphone or doing chores the classic way, I guess. ;)

I use mobile services timeboxed and in conjunction with blockers for certain services. I also went back to using old-school pencil and paper for work whenever possible. It is helpful - and fun.

Blocking mobile internet on smartphones improves sustained attention, mental health, and subjective well-being: https://academic.oup.com/pnasnexus/article/4/2/pgaf017/80160...

Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity: https://www.journals.uchicago.edu/doi/full/10.1086/691462


I think the article is missing two points. First, even if the latest layoffs aren't related to AI, that doesn't mean AI has no impact, or won't have one, on head count.

Second, investment and experiments by definition include the risk of failing. In almost everything lies a survivorship bias, and no one talks about the 100+ car makers that went into gold-rush mode 100+ years ago. This is life. Netflix vs. Blockbuster - already forgotten?

Also, the "fail rate" - which part is failing, and why? And what about the 5%? If we have a look at exponential functions, this might be a really good deal if the 5% can account for the losses - e.g. if only 1 in 20 experiments succeeds but pays back more than 20x its cost, the portfolio still comes out ahead. After all, benefits compound over time.

I witnessed some quota hires first hand in FAANG, and I believe that now that no one gets paid for contrived and artificial business advantages, we are back to a more merit-based evaluation of workers.

But AI should not be written off as something fancy with no impact. That's the wrong take. Whether it will be a springboard to new jobs that compensate for the losses, or pure replacement - I am not yet sure, but I tend toward the former. ML engineers take care of ML - something new taking care of something new.

We will see.


I agree with you mostly.

On the other hand, I think that "show it or it didn't happen" is essential.

Dumping a bit of code into an LLM doesn’t make it a code agent.

And what magic? I think you never hit conceptual and structural problems. Context window? History? Good or bad? Large-scale changes or small refactorings here and there? Sample size of one, or several teams? What app? How many components? Greenfield or not? Which programming language?

I bet you will color Claude and especially GitHub Copilot a bit differently, given that you can kill any self-made code agent quite easily with a bit of steam.

Code agents are incredibly hard to build and use. Vibe coding is dead for a reason. I remember vividly the inflation of todo apps and JS frameworks (Ember, Backbone, Knockout are the survivors) years ago.

The more you know about agents, and especially code agents, the more you understand why engineers won't be replaced so fast - senior engineers who hone their craft, that is.

I enjoy fiddling with experimental agent implementations, but I value certain frameworks. They solved, in an opinionated way, problems you will run into if you dig deeper and others depend on you.


To be clear, no one in this thread said this is replacing all senior engineers. But it is still amazing to see it work, and it’s very clear why the hype is so strong. But you’re right that you can quickly run into problems as it gets bigger.

Caching helps a lot, but yeah, there are some growing pains as the agent gets larger. Anthropic's caching strategy (4 blocks you designate) is a bit annoying compared to OpenAI's cache-everything-recent. And you start running into the need to summarize old turns, or outright toss them, and to decide what's still relevant. Large tool call results can be killer.
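
For anyone rolling their own loop, here is a rough sketch of the designated-breakpoints style, using the @anthropic-ai/sdk JavaScript client. repoContext and conversationSoFar are placeholders, not real variables from anyone's code, and the model name is just an example:

    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    // Anthropic-style caching: you mark up to 4 explicit breakpoints with
    // cache_control; everything up to each marker becomes a cacheable prefix.
    // (OpenAI instead caches recent prompt prefixes automatically.)
    const response = await client.messages.create({
      model: "claude-3-5-sonnet-latest", // placeholder; pick your model
      max_tokens: 1024,
      system: [
        { type: "text", text: "You are a coding agent." },
        {
          type: "text",
          text: repoContext, // assumed: large, stable context - the ideal cache candidate
          cache_control: { type: "ephemeral" }, // breakpoint 1 of max 4
        },
      ],
      messages: conversationSoFar, // assumed: older turns may need summarizing or tossing
    });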

I think at least for educational purposes, it's worth doing, even if people end up going back to Claude Code, or away from agentic coding altogether for their day-to-day.


It didn't help documentation at all. I had to work with Auth0, for example, and their documentation is such bloat that I am already prototyping with better-auth.

No structure, outdated stuff marked as "preview" from 2023/2024, Wikipedia-like in-depth articles about everything, but nothing for simple questions like: how to implement a backend-for-frontend.

You find fragments and pieces of information here and there - but no guidance at all. Settings hidden behind tabs, etc.

A nightmare.

No sane developer would have made such a mess, if not for time constraints and the push for bloat. You see and experience first hand that the few gems are from the trenches, spelling mistakes and all.

Bloat for SEO, the mess for devs.


"hitting refresh"

You made my day. I totally agree with you: state, state management, UX/UI.

I am extremely proud that I recently implemented exactly this: what if... you pass a link, or hit reload - or the back button in the browser?

I have a web app that features a table with a modal preview when you hit a row - boy, am I proud to have invested an hour in this feature.

I like your reasoning: it isn't a technical "because I can dump anything in a URL" - no, it is a means to an end: the user experience.

Convenience, whatever. I now have a pattern for adding more conveniences like this, which should be pretty normal.

The only thing that remains and bothers me is the verbose URL - the utter mess and clutter in the browser's address bar. I feel pain here, and there is a conflict inside me between URL aesthetics and flattering the user with convenience.

I am working on a solution, because this messy URL string hurts my eyes and takes away a little of the magic and beauty of the state transfer. This abstract mess should be taken care of, also with regard to obfuscation. It isn't clean to have full-text strings in the URL, with content that doesn't belong there.

But I am on it. I cannot leave the URL string out of the convenience debate, especially not on mobile. It can also happen that strings get stripped, or copy & paste accidentally cuts off parts. The shorter the better, and as we see, convenience is a brutally hard job to handle. Delicate on so many levels - here, error handling for wrongly formatted strings, a field few people have ever entered.

My killer feature is the initial page load - it appears way faster, since there are no skeletons waiting for their fetch requests to finish. I am extremely impressed by this little feature and its impact on so many levels.
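
For the curious, a minimal sketch of the pattern (names like openPreviewModal are made up for illustration, not from my actual app): the open row goes into the query string via history.replaceState, and the initial render reads it back, so reload, back button, and shared links all restore the modal:

    // Write UI state into the URL without adding history entries.
    function syncStateToUrl(state) {
      const params = new URLSearchParams(location.search);
      if (state.previewRow != null) params.set("row", String(state.previewRow));
      else params.delete("row");
      const qs = params.toString();
      history.replaceState(null, "", qs ? location.pathname + "?" + qs : location.pathname);
    }

    // Read it back; tolerate malformed or truncated strings.
    function stateFromUrl() {
      const row = Number.parseInt(new URLSearchParams(location.search).get("row") ?? "", 10);
      return { previewRow: Number.isNaN(row) ? null : row };
    }

    // On initial load: open the modal right away - no skeleton, no fetch wait.
    const { previewRow } = stateFromUrl();
    if (previewRow != null) openPreviewModal(previewRow); // openPreviewModal: assumed app code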

Cheers!


Sure!

And there is way more to it.

This kind of code was common, and it was also the starting point of every modern language innovation we have today in JavaScript - even TypeScript, and maybe of any modern web development on the server as well.

Tables were the only way to create browser-independent layouts dynamically. Or, put another way: to add interactivity to websites. And simply because hacking is fun and browsers were experimenting with APIs accessible from JavaScript.

CSS was still bleeding from the ACID tests, Netscape was forgotten, and Mozilla built Phoenix out of the ashes of the bursting bubble and called their effort Firefox.

In Germany there was, and still is, the infamous SELFHTML project. I remember vividly reading and testing Stefan Münz's tutorials on this topic. The content is untouched, only the layout changed, so go back in time for more table fun:

https://wiki.selfhtml.org/wiki/Beispiel:JS-Anwendung-Tabelle...

https://wiki.selfhtml.org/wiki/JavaScript/Tutorials/Tabellen...
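
For younger readers, roughly the kind of thing those tutorials covered - this little sketch is mine, not from SELFHTML: building a table purely from script, the pre-CSS way to generate browser-independent, dynamic structure.

    // Build a data table entirely from script.
    var table = document.createElement("table");
    var data = [["Name", "Score"], ["Ada", "10"], ["Linus", "9"]];
    for (var i = 0; i < data.length; i++) {
      var row = table.insertRow(-1);          // append a row
      for (var j = 0; j < data[i].length; j++) {
        var cell = row.insertCell(-1);        // append a cell
        cell.appendChild(document.createTextNode(data[i][j]));
      }
    }
    document.body.appendChild(table);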

It was pretty common to have large one-file websites: PHP and HTML with CSS and JavaScript mixed together.

There was no Git, no Visual Studio Code, no Claude Sonnet - just Notepad, and later Notepad++.

(Even the DOOM guys had no version control system in the early stages.)

For me, John Resig shines out here - the epic genius behind jQuery. The source code was pure magic, and his book "Secrets of the JavaScript Ninja" is, for me, the all-time climax of programming excellence.

If you never utilised the prototype property, you will never understand the most basic structures and inner workings JavaScript has to this day, and why classes are "syntactical sugar" for functions and nothing else.
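
A minimal illustration of what I mean (my own, not from the book):

    // The constructor-function pattern...
    function Dog(name) { this.name = name; }
    Dog.prototype.bark = function () { return this.name + " says woof"; };

    // ...and the class syntax that desugars to essentially the same thing.
    class Cat {
      constructor(name) { this.name = name; }
      meow() { return this.name + " says meow"; }
    }

    // Both are just functions carrying a prototype object:
    console.log(typeof Dog, typeof Cat); // "function" "function"
    console.log(new Dog("Rex").bark());  // "Rex says woof"
    console.log(Object.getPrototypeOf(new Cat("Tom")) === Cat.prototype); // true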

Function.toString in combination with new Function made me enter 10 matrices in parallel at the time. What a revelation. :D
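
(For reference, the trick looks roughly like this - serialize a function to its source, then revive it elsewhere:)

    function add(a, b) { return a + b; }

    // Function.prototype.toString gives you the source code back...
    var src = add.toString(); // "function add(a, b) { return a + b; }"

    // ...and new Function compiles source back into a callable.
    var revived = new Function("return (" + src + ")")();
    console.log(revived(2, 3)); // 5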

Nicholas Zakas comes close with his seminal web development book, in which he covered every browser API available at the time, with examples, across roughly 1000 pages. To this day, exercising most of it and understanding the DOM and the window object was the best investment ever, because 15 years later this knowledge paved the way for the success of a financial SaaS platform. Lost wisdom, not covered by any modern framework like Angular or ReactJS.


Yes. Cloud sellers knew this: happy path for the flagship project, the shiny new object, and some additional services. After the point of no return, what usually happens is that the cloud becomes a replica of bare-metal development.

As a computer science dude and former C64/Amiga coder in senior management of a large international bank, I saw first hand how costs balloon simply because the bank recreates and replicates its bare-metal environment in the cloud.

So: increasing costs while nothing changed. Imagine that: fixed resources, no test environments, because virtualisation was out of the equation in the cloud due to policies and SDLC processes. And it goes on: automated releases? Nope - a request per email, with an attached scan of a paper document as sign-off.

Of course you can buy a Ferrari and use it as a farm tractor. I bet it is possible with a little modification here and there.

Another factor is that lock-in plays a huge role. Once you are in, no matter what you subscribe to, magically everything suddenly slows down - a bit. But since I am a guy who uses a time tracker to test and monitor apps, I could easily draw a line, even without utilizing my math background: enforced throttling.

There is a difference between 100, 300, and 500 ms for SaaS websites - people without prior knowledge of perceptual psychology feel it but cannot put their finger on the wound. But since we are in the cloud, suddenly a cloud manager will offer you a speed upgrade - catered just for your needs! Here, have a free three-month trial and experience the difference for your business!
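
If you want to check this yourself, a rough sketch of the kind of probe I mean (the URL is a placeholder): sample a round trip periodically and log it, so a slow upward drift becomes visible in the data.

    // Minimal latency probe: one timed request per interval.
    async function probe(url) {
      const t0 = performance.now();
      await fetch(url, { cache: "no-store" }); // avoid cached responses
      return performance.now() - t0;
    }

    setInterval(async () => {
      const ms = await probe("https://example-saas.invalid/health"); // placeholder URL
      console.log(new Date().toISOString(), ms.toFixed(0) + " ms");
    }, 60_000); // one sample per minute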

I am a bit opinionated here, and I really suppose that cloud metrics analysed the bank's traffic and service usage in order to willingly slow it down, in a way only professionals could detect. Did they promise to be lightning fast in the first place? No, that's not what the contract says. We fed you with it, but a "normal" speed was agreed upon. It is like getting a Porsche as a free rental car when you take your VW Beetle to the dealer for a checkup. Hooked, of course. A car is a car, after all. How do you boil a frog? Slowly.

Of course there will be more sales - this is the Achilles' heel of every business, and indifferent customers are easy prey.

It is a vicious cycle, almost like taxation. You cannot hide from it, there is no escape, and it is always on the rise.


Ferrari actually makes tractors.


That's Lamborghini, isn't it?

IIRC, he only got into making cars because Enzo Ferrari disrespected him.


Does Lamborghini still do?


I side with you. The more you know, the more you discover what you don’t know.

Every attempt to treat the extremely complex dynamics of human biology as a pure state machine - like a Pascal program, deterministic if you know all the factors - is a simplification and can safely be rejected as a hypothesis.

Hormones, age, sex, weight, food, aging, sun, environment, epigenetic changes, body composition, activity level, infections, medication - all play a role, even the galenic formulation of a drug.


Put it this way: even in Pascal (especially in Pascal) you generally work in source code. You don't try to read the object code, and if you do, you might at least decompile or disassemble it first. What you don't do - unless you're desperate - is try to understand what the program is doing by directly reading the hex dump (let alone actually printing it out in binary!).

Now imagine someone has written a compiler that compiles something much more sophisticated into Pascal (some "fourth-generation language" (4GL)). Now you'd be working in that 4GL, not in Pascal. Looking at the Pascal source code would be less useful here. Best to look at the 4GL code.

Biology is a bit like that. It's technically deterministic all the way down (until we reach quantum effects, at least). But trying to explain why Aunt Betty sneezed by looking at the orbital hybridization state of carbon atoms might be a wee bit unhelpful at times. Better to just hand her a handkerchief.

(And even this rule has exceptions: Abstractions can be leaky!)


You might be interested in this if you've never seen it: https://berthub.eu/articles/posts/reverse-engineering-source...

