nothrabannosir's comments

Just re taxes: why would anything need to change on that front in the event of federalization of the EU? There already is a union, it already has money, money already flows from richer countries to poorer countries—what would federalization change?

I thought you were going to say “that comment recommending Kagi is exactly what those ads would look like: native responses making product recommendations as if they’re natural responses in the conversation”

Ding ding ding. Look at all the brands mentioned in just this thread. From a cursory look, I see:

* WSJ

* Bloomberg

* Financial Times

* Cartier

* Kagi

* Protonmail

* Coca-Cola

* HBO

* Windex

* Netflix

* Azure

* AWS

We are all ourselves advertisers, we just don't realize it. It is inevitable that chatbots will be RLHF-trained in our footsteps.


That is a weird definition of advertising. It's not an ad if I mention (or even recommend) a product in a post, without going off-topic and without getting any financial benefit.

The New Oxford American Dictionary defines "advertisement" as "a notice or announcement in a public medium promoting a product, service, or event." By that definition, anything that mentions a product in a neutral light (thereby building brand awareness) or a positive light (explicitly promotional) is an ad. The fact that it may not be paid for is irrelevant.

A chatbot tuned to casually drop product references like in this thread would build a huge amount of brand awareness and be worth an incredible amount. A chatbot tuned to be insidiously promotional in a surgically targeted way would be worth even more.

I took a quick look at your comment history. If OpenAI/Anthropic/etc. were paid by JuliaHub/Dan Simmons' publisher/Humble Bundle to make these comments in their chatbots, we would unambiguously call them ads:

https://news.ycombinator.com/item?id=46279782:

   Precisely; today Julia already solves many of those problems.

   It also removes many of Matlab's footguns like `[1,2,3] + [4;5;6]`, or also `diag(rand(m,n))` doing two different things depending on whether m or n are 1.
(for the sake of argument, pretend Julia is commercial software like Matlab.)

https://news.ycombinator.com/item?id=46067423:

   I wasn't expecting to read a Hyperion reference in this thread, such a great book.
https://news.ycombinator.com/item?id=45921788:

   > Name a game distribution platform that respects its customers
   Humble Bundle.
You seem like a pretty smart, levelheaded person, and I would be much more likely to check out Julia, read Hyperion, or download a Humble Bundle based on your comments than I would be from out-of-context advertisements. The very best advertising is organic word-of-mouth, and chatbots will do their damnedest to emulate it.

> Trying to change consumption habits (like smart grids, dynamic pricing, etc.) works poorly, especially for such vital resource as electricity.

Why? Has the UK started trying recently? When I lived there nobody gave a hoot about fluctuating prices. It would have been hard to even know when electricity was expensive or not. Has it changed?

Meanwhile, more than three decades ago my grandparents in rural France had a big red lamp on the kitchen wall that would light up when energy was expensive. It was a part of their life and they had no problem with it. They chose that plan deliberately because it ended up cheaper.

If you’re saying that even with adaptive behavior it’s all a wash, because the constant cost of peakers is so high that you lose all savings when they kick in, no matter how little you use: ok, I believe you did the math.

But if the claim is “it’s impossible for humans to adapt their energy consumption depending on the current price of electricity”, I have seen first hand that is not true. For sure when I lived in Britain nobody did this at all, but that would be at best a British limitation, not a human one.


I've never seen a red light, but the UK has had multiple electricity rates for households since the 1980s.

https://en.wikipedia.org/wiki/Economy_7

My parents would set timers on the dishwasher, washing machine, etc. to run at night.


> My parents would set timers

I'd suggest first measuring how much a single load uses. In my case it's 1 kWh and 0.4 kWh. A daily load would save perhaps 4-5 GBP per month, or 5% off an average bill.
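
Rough arithmetic behind that figure, as a sketch (the ~10p/kWh night/day spread is an assumption, not a quoted tariff):

    # 1.4 kWh shifted per day x assumed £0.10/kWh spread x 30 days
    echo "1.4 * 0.10 * 30" | bc    # prints 4.20, i.e. roughly the £4-5/month above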


All EV chargers sold in the UK are now smart, and adjust charging schedule according to price.

The vast majority of UK consumers have a pretty simple plan where they're not demand responsive. If it's pitch black and dead calm one winter's night, they pay the exact same price as midday in the summer when it's blazing sunshine and simultaneously blowing a gale across the whole country. Their retailer has done some estimates and figured on average they can sell power for, say, 25p per kWh all day, every day. Some days they're raking it in 'cos they paid a lot less than that, other days they wish the day would end, but if their team did the sums right it comes out profitable at year end.

There are people, especially people with EVs who can do that sort of "turn on a dime" lifestyle where you do laundry when it's cheaper, not because it's Thursday, who pay 0p per kWh some hours and 45p per kWh on that bleak winter's night.

For now that second group are a minority but they do exist.

The enabling technology is a bit more sophisticated than your French red lamp. "Smart" meters relay your usage constantly so you can be charged in 30-minute chunks, the same way the wholesale electricity market works. This also means you can see at a glance what's going on. So that's nice. The usual conspiracy people insist this is a future tool of government control, just like almost everything that has ever been invented: bar codes on groceries, mobile phones, newspapers, parking tickets, everything.


This does not work at scale. Sure, there are plenty of anecdotes about how you can successfully play this game as a consumer living in a rural house with an electric car, a power wall, and rooftop solar, but try telling that to someone living in a high-rise apartment or to a heavy-industry business. Your preaching will fall on deaf ears.

IIRC there are several utilities in the UK which offer dynamically priced electricity, but they are not popular because people do not want to play this game. They want a reliable supply of electricity at reasonable prices. Trying to mold consumption to suit the intermittency of generation is nothing more than shifting the externality, akin to telling people "you must plant trees to offset CO2 emissions!"


The most popular UK electricity retailer is Octopus Energy which is specifically focused on variable prices and flexible consumer demand. By what metric do you mean variable rate retailers are not popular?

3.3 million households in England were on Economy 7 tariffs in 2021 - around 14% of households

Debugging from git history is a separate question from merge vs rebase. Debugging from history can be done with non-rebased merges, with rebased merges, and with squashed commits, without any noticeable difference. Pass `--first-parent` to git-log and git-bisect in the first two cases and it's virtually identical.
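
For example (a minimal sketch; the tag name is made up):

    # walk only the merge commits on the main branch
    git log --first-parent --oneline main

    # bisect at merge granularity (--first-parent for bisect needs Git 2.29+)
    git bisect start --first-parent
    git bisect bad HEAD
    git bisect good v1.2.0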

My preference for rebasing comes from delivering stacked PRs: when you're working on a chain of individually reviewable changes, every commit is a clean, atomic, deliverable patch. git-format-patch works well with this model. GitHub is a pain to use this way but you can do it with some extra scripts and setting a custom "base" branch.

The reason in that scenario to prefer rebasing over "merging in master" is that every merge from master into the head of your stack is a stake in the ground: you can't push changes to parent commits anymore. But the whole point of stacked diffs is that I want to be able to identify different issues while I work, which belong to different changes. I want to clean things up as I go, without bothering reviewers with irrelevant changes. "Oh this README could use a rewrite; let me fix that and push it all the way up the chain into its own little commit," or "Actually now that I'm here, let me update dependencies and ensure we're on latest before I apply my changes". IME, an ideal PR is 90% refactors and "prefactors" which don't change semantics, all the way up to "implemented functionality behind a feature flag", and 10% actual changes which change the semantics. Having an editable history that you can "keep bringing with you" is indispensable.
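
For what it's worth, one way to do that round trip in a single pass (a sketch with made-up branch names; --update-refs needs Git 2.38+):

    # edit/fixup a commit near the bottom of the stack, then replay everything
    # above it; --update-refs drags the intermediate branch refs along with it
    git rebase -i --update-refs master part-3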

Debugging history isn't really related, other than that this workflow lets you create a history of very small, easily testable, easily reviewable, easily revertible commits, which makes debugging easier. But that's a downstream effect.


> Debugging from git history is a separate question from merge vs rebase.

But the main benefit proponents of rebase cite is keeping the history clean, which also makes it easier to pinpoint an offending commit.

Personally, a clean commit history was never something that made my job easier.

> Other than that this workflow allows you to create a history of very small, easily testable, easily reviewable, easily revertible commits, which makes debugging easier. But that's a downstream effect.

I would agree that it is important for commits to go from working state to working state as you are working on a task, but this is an argument for atomic commits, not about commit history.


> Personally, a clean commit history was never something that made my job easier.

How do you define "clean"? I've certainly been aided by commit messages that help me identify likely places to investigate further, and hindered by commit messages that lack utility.


> How do you define "clean"?

In the context of merge vs rebase, I think "clean" means linear, without visible parallel lines. Quality of commit messages is orthogonal. I agree with the poster that this particular flavor of "clean" (linear) has never ever helped me one bit.


Agreed, it just means "linear" for most people.

I think the obsession with a linear master/main is a leftover from the time when everyone used a centralized system like svn. Git wasn't designed like that; the Linux kernel project tells contributors to "embrace merges." Your commit history is supposed to look like a branching river, because that's an accurate representation of the activity within your community.

I think having a major platform like GitHub encourages people to treat git as a centralized version control system, and to care about the aesthetics of their master/main branches more than they should. The fact that GitHub only shows the commit history as a linear timeline doesn't help, either.


We're in the minority, I think. I always find it easier to just debug a problem from first principles instead of assuming that it worked at some point and then someone broke it. Oftentimes that assumption is wrong, and oftentimes the search for the bad commit is more lengthy and less informative than the normal experimental process. I certainly admit that there are cases where the test is easily reproducible and bisect just spits out the answer, but that's a seductive win. I certainly wouldn't start by reading the commit log and rewinding history until I at least had a general idea of the source of the problem, and it wasn't immediately obvious what to try next to get more information.

If you look at it as an investment in understanding the code base, rather than just closing the ticket as soon as possible, then the "let's see what's really going on here" approach makes more sense.


> I certainly wouldn't start by reading the commit log

Me neither, for what it's worth. But even when the idea is "in order to figure out this issue, you have to go to the history", a linear history and a linear log never helped me either. For example, to find where a certain change happened and try to understand the intent, what I need is the commit and its neighbors, and that works just as well with a linear or a branching history, because the neighbors are still going to be nearby up and down, not found via visual search.


If you have not already, try Graphite. You will be delighted as it serves that exact purpose.

I use magit which afaik is still undefeated for this workflow. Particularly with this snippet to “pop down” individual ranges of changes from a commit: https://br0g.0brg.net/notes/2026-01-13T09:49:00-0500.html .

Pretty nice I guess. Cool even. Impressive! And I only say this, just in case, for someone else maybe, ehh—is that it? Because that's totally fine with me, same experience actually, funny that, really impressive tech btw! Very nice. Just, maybe, do the CEOs know that? When people talk of “not having to code anymore”—do they know that this is how it's described by one of its most prominent champions today?

Not that I mind, of course. As you said: amazing!

Maybe someone should just check in with the CEOs who were in the news recently talking about their work force…


> When people talk of “not having to code anymore”

You should reinterpret that as "not having to type the code out by hand any more". You still need a significant depth of coding knowledge and experience to get good results out of these things. You just don't need to type out every variable declaration and for loop yourself any more.


Automate tools, not jobs.

Every single tool or utility you have in the back of your head, you can just make it in a few hours of wall-clock time, minutes of your personal active time.

Like, I wanted a tool that could summarise different sources quickly; it took me ~3 hours to build using llm + fragments + the OpenAI API.

Now I can just go `q <url>` in my terminal and it'll summarise just about anything.

Then I built a similar tool that can download almost anything: `dl <url>` will use yt-dlp, curl, and various other tools, depending on the domain, to download the content.
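
Roughly, the `q` wrapper boils down to something like this (a sketch; the exact `llm` invocation and flags are approximations, adjust for your setup):

    #!/usr/bin/env bash
    # q: summarise whatever is at the given URL
    # assumes the `llm` CLI with fragment support and an OpenAI API key configured
    set -euo pipefail
    url="$1"
    llm -f "$url" "Summarise this concisely, with the key points as bullets."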


Simple is the opposite of complex; the opposite of hard is easy. They are orthogonal. Chess is simple and hard. Go is simpler and harder than chess.

Program optimization problems are less simple than both, but still simpler than free-form CRUD apps with fuzzy, open ended acceptance criteria. It would stand to reason an autonomous agent would do well at mathematically challenging problems with bounded search space and automatically testable and quantifiable output.

(Not GP but I assume that's what they were getting at)


I had a Panda from the early 2010s and that was my exact thought reading this thread: sounds like a Fiat Panda. Surprised to see this downvoted. Did they change so much?

Downvoted because they’re non-existent for North Americans.

Aren't the Fiat 500 and Panda the same car but with only a neo retro design on the former?

Nope

This is the original 500 https://www.autocar.co.uk/sites/autocar.co.uk/files/images/c...

This is the new 500 https://www.actualidadmotor.com/wp-content/uploads/2022/02/F...

I'll ignore the 500X and 500L because, to me, they are completely different cars.

This is the original Panda from the 80s https://www.hagerty.co.uk/wp-content/uploads/2020/06/The-ori...

This is the Panda from the early 2000s (the one I used to practice for my driving license) https://upload.wikimedia.org/wikipedia/commons/0/09/2004_Fia...

This is the more recent model https://www.motornet.it/img/modelli/auto/FIA/PANDA%202021_1....

The Panda is a completely different design.


OK, I am referring to the early 2000s design; I believe Fiat was using the same platform and engine, and I doubt they would do differently now.

Oh, ok, now I see what you mean.

And C15 does exist for them?

For some reason generating a valid wire format seems to be no problem for people when it comes to JSON. Forgot to escape a quote? Whoops, that’s on me, should have used a serializer.

But add a few angled braces in there and lord have-a mercy, ain’t nobody can understand this ampersand mumbo jumbo, I wanna hand write my documents and generate wutever, yous better jus deal with it gosh dangit.

I prefer the current situation too, but I still think it’s funny that somehow we just never bought into serializers for HTML. Maybe the idea was before its time? I’m sure you’d have no such parsing problems in the wild if you introduced JTML now. Clearly people know how to serialize.


> For some reason generating a valid wire format seems to be no problem for people when it comes to json.

The "some reason" is that JSON is most often produced by code, written by programmers.

That's not the case for HTML which is often hand-authored directly by non-programmer users. Most people on Earth literally don't know what the word "syntax" means, much less feel comfortable dealing with syntax errors in text files.


"Most people on Earth" can't write HTML. And if one doesn't know what the word "syntax" means, they are not going to have much success writing HTML - after all, ignoring the errors does not make them go away.

Instead, it seems your choices are either:

(1) errors cause random changes on page, such as text squishing into small box, or whole paragraphs missing, or random words appearing on page, or style being applied to wrong text etc...

(2) errors cause page to fail loading with no text shown at all

Both of those are pretty bad, but I wish the web would go with (2) instead of (1). Because in that case, we'd have a whole ecosystem of tooling appear... Imagine an "auto fixup" feature in web servers which "fixes up" bad HTML into good HTML. For basic users, it looks the same as today. But instead of the fixup and guesswork being done by users' browsers (which the author has no control over), it would be done by the author's web host (which the author can upgrade or not upgrade as needed...)
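
(Tools in that spirit already exist on the authoring side; as a sketch, something like HTML Tidy could play the "fixup" role in a publish step. The file name is made up:)

    # rewrite sloppy hand-written HTML into well-formed markup, in place, before publishing
    tidy -quiet -indent -modify page.html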


You are vastly underestimating how much of the web was and is successfully authored by non-technical people who barely understand the difference between a word processor and a text editor, but are able to cobble together enough HTML tags that it mostly looks like what they want and lets them share their knowledge with the rest of the world.


I found it very apt. There is a certain flavor of arrogance exhibited by European monopolies which are government adjacent that infuriates on a unique wavelength.

Maybe totally imagined but they irk me quite unlike any other.

Just thinking about it now makes me uneasy.


It would have to do a very accurate parallel construction of the GPS signal to lie about the driver's location yet still correctly predict the arrival time, which cannot be faked.

