Hacker News | new | past | comments | ask | show | jobs | submit | robertclaus's comments

Isn't this the difference between a dictionary and an encyclopedia?

I was afraid no one would bring this up. I'm developing a strange relationship with Wikipedia given the commonplace role it serves as an online resource. But I appreciate how it normalized the practice of looking things up to get a general overview of a topic, with internal and external references to follow. Credit is due to search engines in this case too, I reckon.

I think that's sort of what I got from the article - open the right tools for what you're actually working on, not everything you might need for all the tasks in your backlog.


I actually agree that the code is one of the most important things to get right at a software company. Still, I would argue very few companies win on code merit alone. Strategy, customer communication, and market timing on the business side; design, system architecture, and dev velocity on the technical side. So many factors matter beyond the quality of the code.


Hi Ted! Small world to see you here!


The automotive industry is huge. It seems unlikely that they would lose lobbying efforts to startup tech companies - so it seems far more likely that cars get more expensive due to government-mandated self-driving "safety" features, but only just enough that Americans still buy them.


The automotive industry is being driven into the ground by Chinese manufacturers now. They would probably be OK with an "if we can't sell cars, nobody can" outcome and keep just the (certified) robotaxi factories.


Classic CI bug with a flair of LLM fun! We had something similar creep into our custom merge queue a few weeks back.


What "classic CI bug" makes bots talk with each other forever? I've been doing CI for as long as I've been a professional developer, and not once have I had that issue.

I've made "reply bots" a bunch of times, the first time on IRC, and pretty much the second or third step is "Huh, this probably shouldn't be able to reply to itself, or it'll get stuck in a loop". But that's hardly a "classic CI bug", so I don't think that's what you're referring to here, right?


If you’re making a bot in which there will be many sub-behaviors, it can be tempting to say “each sub-behavior should do whatever checks it needs, including basic checks for self-reply.”

And there be dragons, because whether a tired, junior, or (now) not-even-human engineer is writing a new sub-behavior, it's easy to assume that footguns either don't exist or are prevented a layer up. There's nothing more classic than that.


I kind of understand, I think, but not fully. Regardless of how you structure this bot, there will be one entrypoint for the webhooks/callbacks, right? Even if there are sub-behaviours, the incoming event passes through something - or are we talking about "sub-bots" here that are completely independent and use different GitHub users and so on?

Otherwise I still don't see how you'd end up with your own bot getting stuck in a loop replying to itself, but maybe I'm misunderstanding how others are building these sort of bots.


Sorry, I could have been clearer.

Someone sets up a bot like this: on a trigger, read the message, determine which "skill" to use out of a set of behaviors, then let that skill handle everything, including whether or not to post.

Later, someone (or a vibe-coding system) rolls out a new skill, or a change to a skill, that omits/removes the self-reply guard on the assumption that there are guards at the orchestration level. But the orchestration level was depending on the skill to prevent self-replies. The new code passes linters and unit tests, but the unit tests don't actually mimic a thread re-triggering the whole system on the self-post. The new code gets yolo-pushed into production. Chaos ensues.
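The safer structure is to enforce the guard once, at the dispatch layer, so no individual skill can forget it. A minimal sketch (all names here - BOT_LOGIN, SKILLS, the event shape - are illustrative, not from any real framework):

```python
BOT_LOGIN = "my-ci-bot"  # the bot's own GitHub login, assumed stable


def review_skill(event):
    # A stand-in skill that always wants to post.
    return "Thanks, I'll take a look at this PR."


SKILLS = {"issue_comment": review_skill}  # hypothetical skill registry


def handle_event(event):
    """Single choke point: drop the bot's own comments before any skill
    runs, so skills don't each need to remember the check."""
    author = event.get("comment", {}).get("user", {}).get("login")
    if author == BOT_LOGIN:
        return None  # never react to ourselves
    skill = SKILLS.get(event.get("type"))
    return skill(event) if skill else None
```

With this shape, a forgetful new skill can still be wrong about lots of things, but it can't reintroduce the self-reply loop.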


All I can think of, and have actually seen, is:

1. Bot runs a series of steps A through Z.

2. Step X is calling an external system that runs its own series of steps.

3. If the external system detects certain outcomes (errors, failed tests, whatever), it kicks off an automated process that runs back through the bot/system, where said system makes the same mistake again without any awareness that it's caught in a loop.


  1. Set up a bot that runs on every new comment on a PR
  2. The bot comments something on that PR
It doesn't have to be more advanced than this to get an infinite loop, if you don't build in anything that makes the bot ignore comments from itself (or similar).
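The two steps above can be simulated in a few lines. This is a toy model (the names and the `max_events` brake are mine, purely illustrative): each bot comment re-triggers step 1, so without a self-check the thread grows until something external stops it.

```python
def run_bot(thread, bot_name="ci-bot", guard=True, max_events=10):
    """Process a thread of (author, text) comments; the bot replies to
    every comment it sees. max_events is an artificial brake so the
    unguarded case terminates in this demo."""
    i = 0
    events = 0
    while i < len(thread) and events < max_events:
        author, _ = thread[i]
        i += 1
        if guard and author == bot_name:
            continue  # ignore our own comments -> loop terminates
        thread.append((bot_name, "LGTM?"))  # reply re-triggers step 1
        events += 1
    return len(thread)
```

With the guard, one human comment yields exactly one bot reply; without it, the bot keeps answering itself until the brake kicks in.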


Previously:

> pretty much the second or third step is "Huh, probably this shouldn't be able to reply to itself, then it'll get stuck in a loop". But that's hardly a "classic CI bug",


If I've previously misunderstood your point, copy-pasting it doesn't clear anything up, no..?

I don't see why it's not a "classic CI bug". It's an easy trap to fall into, and I've seen it multiple times. Same with "an action that runs on every commit to main, generates a file, and pushes a new commit if the file changed" - which suddenly gets stuck in a loop because the generated file contains a comment with its creation timestamp.
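The timestamp trap can be sketched concretely: because the generator embeds a creation time, a naive "did the file change?" check is always true, and every push triggers another commit. Stripping volatile lines before comparing breaks the loop. (The header format here is a made-up example.)

```python
import re


def generate(data, now):
    # Hypothetical generator that stamps its output with a creation time.
    return f"# generated at {now}\n{data}\n"


def meaningful(text):
    # Drop the volatile timestamp comment before comparing.
    return re.sub(r"^# generated at .*\n", "", text)


def should_commit(old, new):
    # Commit only when the non-volatile content actually changed.
    return meaningful(old) != meaningful(new)
```

Comparing raw file contents instead of `meaningful(...)` is exactly the bug: two runs over identical data still differ, so the action commits, which triggers the action again.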


Yeah, a bot replying to itself is pretty poor design. Guarding against it is one of the first things you do even with toy bots, and you can even hardcode the bot's own identity, since its ID usually doesn't change. A much more common problem is someone deploying another bot, which leads your bot into an endless back-and-forth with it.


> A much more common problem is if someone deploys another bot, which will lead your bot into having an endless back-and-forth with it.

This I'd understand - it's a bit trickier, since you basically end up with a problem typical of distributed systems.

But one bot? One identity? One GitHub user? It seems really strange to miss something like that; as you say, it's one of the earlier things you tend to try when creating bots for chats and the like.


Being one of the earlier things to catch is what makes it a classic.


Odds this was AI generated?


It's literally just four screenshots paired with this sentence.

> Trying to orient our economy and geopolitical policy around such shoddy technology — particularly on the unproven hopes that it will dramatically improve– is a mistake.

The screenshots are screenshots of real articles. The sentence is shorter than a typical prompt.


Shouldn't this be a discussion?


I liked reading through it from a "is modern Python doing anything obviously wrong?" perspective, but I strongly disagree that anyone should "know" these numbers. There are maybe 5-10 primitives in there that everyone should know rough timings for; the rest should be derived from big-O algorithm and data structure knowledge.
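Those few primitive timings are also easy to derive on the spot with the stdlib's `timeit` rather than memorize. A quick sketch (the choice of primitives and iteration count is mine; absolute numbers vary by machine):

```python
import timeit

# Setup code executed once per measurement, not timed.
setup = (
    "d = {i: i for i in range(1000)}\n"
    "lst = []\n"
    "class C: x = 1\n"
    "obj = C()"
)
ops = {
    "dict lookup": "d[500]",
    "list append": "lst.append(1)",
    "attribute access": "obj.x",
}

results = {}
for name, stmt in ops.items():
    n = 100_000
    # timeit returns total seconds for n executions; divide for per-op cost.
    results[name] = timeit.timeit(stmt, setup=setup, number=n) / n
    print(f"{name}: {results[name] * 1e9:.0f} ns/op")
```

Everything slower than these (sorting, hashing a large string, a network call) is then a matter of multiplying a known primitive cost by a big-O factor.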


At Plotly we did a decent amount of benchmarking to see how much of `uv`'s performance comes from the different defaults it uses. This was necessary so we could advise our enterprise customers on the transition. We found you lose almost all of the speed gains if you configure uv to behave as much like pip as you can. A trivial example is the precompile flag: bytecode precompilation can easily be 50% of pip's install time for a typical data science venv.

https://plotly.com/blog/uv-python-package-manager-quirks/


The precompilation thing was brought up to the uv team several months ago IIRC. It doesn't make as much of a difference for uv as for pip, because when uv is told to pre-compile it can parallelize that process. This is easily done in Python (the standard library even provides rudimentary support, which Python's own Makefile uses); it just isn't in pip yet (I understand it will be soon).
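The rudimentary stdlib support mentioned is `compileall`, which accepts a `workers` argument for multi-process bytecode compilation. A small sketch against a throwaway directory (a stand-in for a real venv's site-packages):

```python
import compileall
import pathlib
import tempfile

# Tiny throwaway tree to compile, standing in for site-packages.
pkg = pathlib.Path(tempfile.mkdtemp())
(pkg / "mod.py").write_text("x = 1\n")

# workers=N compiles with N processes; workers=0 means one per CPU.
# This parallelism is why precompilation costs uv much less than it
# costs a serial compile step.
ok = compileall.compile_dir(str(pkg), quiet=1, workers=2)
```

After the call, the `.pyc` files sit in `__pycache__` directories under the tree, exactly as a startup-time import would have produced them, just paid for once at install time.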

