Hacker News | new | past | comments | ask | show | jobs | submit | travisgriggs's comments

I have this same reaction.

But I also have to honestly ask myself “aren’t humans also prone to make stuff up” when they feel they need to have an answer, but don’t really?

And yet, despite admitting that humans hallucinate and make mistakes too, I remain uncomfortable with ultimate trust in LLMs.

Perhaps, while LLMs simulate authority well, there is an uncanny valley effect in trusting them, because some of the other aspects of interacting with an authoritative person are “off”.


I certainly won't.

I wonder if it's because they don't see other relevant low-cost/overhead solutions. "Follow me on Facebook" certainly isn't going to be a win. "Follow me on Bluesky or Mastodon" is going to be ignored. "You can see my comments on insta" won't be relevant. "My TikTok is where it's at" might get you some young followers.

Other solutions (your own blog, medium, substack, etc), all come with more overhead and setup.


Another language that just “gets” concurrency right (imo) is Erlang/Elixir. I’ve done Elixir for the last 3 years off and on.

Can someone with experience in both Go and Elixir compare the two? I’m sure I can have GPT whip up a comparison and see the syntax differences, but I’m curious what the real experience “in the trench” is like.


I've used both professionally. I think Elixir has some amazing ideas. I love pattern matching. However! Just like in other dynamically typed languages, in Elixir I have to go to the call sites of the function I am editing to understand what is actually available and what I can safely change. I don't know what has been passed into my function. The lack of types is not fixed by having a typespec and Dialyzer. Pattern matching helps; I wish Go had it. But as an organization grows, more and more of the codebase cannot fit in your head, and I find that teams and organizations are indeed faster in Go.

I recall hearing that Jose was making progress on types. Not sure where that landed.


I was curious in particular about the concurrency story between the two, but thanks for the higher level feedback.

I feel like modern design, in so many cases, missed the science for the symptoms.

"It shouldn't look cluttered" --> "Apply ever increasing amounts of padding/margin everywhere"

"keep it simple" --> "monochrome is the happy place"

etc.


It depends.

I like programming. Quite a bit. But the modern bureaucratic morass of web technologies is usually only inspiring in the small. I do not like the fact that I have to balance so many different languages and paradigms to get to my end result.

It would be a bit like a playwriting aficionado saying “I really love telling stories through stage plays”, only to discover that all verbs used in dialogue had to be in Japanese, the nouns a mix of Portuguese and German, and the connecting words English. And that working with everyone else to put your play on had to be done in Faroese and Quechua.


While I generally agree with the narrative of the negative arc that Stack Overflow took, I found (as recently as a few months ago) that I could have enjoyable interactions on the math, UX, written-language, and aviation exchanges. The OS ones fall somewhere in the middle (I always found the split between the Linux and Super User sites confusing).

I misread the title at first and thought it was Hacker News questions [comments] that were being graphed. That’s what I would be interested in seeing.

This. It’s computation. Computation doesn’t really “get” geopolitical borders.

I’m so sick of the ever increasing variances between the different “store” offerings in different regions of the world. Seems like every time I push an update (every month or so), I have to answer updated questions and declarations, often relative to different parts of the world.


This is a poorly thought through argument, as there is nothing that “gets” geopolitical borders.


I think it was Kent Beck who described Java as “all the elegance of C++ with all the speed of Smalltalk”?


> Instead of callbacks, you write code that looks sequential [but isn’t]

(bracketed statement added by me to make the implied explicit)

This sums up my (personal, I guess) beef with coroutines in general. I have dabbled with them since different experiments were tried in C many moons ago.

I find that programming can be hard. Computers are very pedantic about how they get things done. And it pays for me to be explicit and intentional about how computation happens. The illusory nature of async/await coroutines that makes it seem as if code continues procedurally demos well for simple cases, but often grows difficult to reason about (for me).


That is the price you pay. If you refuse to pay it, you are left to express a potentially complex state machine as a flat state-transition table, so you have a huge Python match statement saying on event x do this and on event y do that. That obscures the evident sequentiality, alternatives, and loops of the state chart (the stuff visible in the good old flowcharts) that could otherwise map onto their natural language constructs. But yes, it is not honest flow. It is a tradeoff.


> looks sequential [but isn’t]

This is just wrong. It looks sequential and it is! What the original author means is that it looks synchronous but isn't. But of course that's not really true either, given the use of the await keyword, but that can be explained by the brief learning curve.

Swift concurrency may use coroutines as an implementation detail, but it doesn't expose most of the complexity of that model, or exposes it in different ways.


Sequential doesn’t mean reentrancy-safe, something which has bitten me a few times in Swift concurrency.
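The same hazard exists in any async runtime, so here is a minimal Python sketch of it (the `Counter` class and the `asyncio.sleep(0)` yield are contrived for illustration): the increment reads sequentially, but every `await` is a suspension point where other tasks can interleave, so the read-modify-write is not atomic.

```python
import asyncio


class Counter:
    """Looks sequential, but the await inside increment() is a
    suspension point where other tasks run."""

    def __init__(self):
        self.value = 0

    async def increment(self):
        current = self.value
        await asyncio.sleep(0)      # suspension point: other tasks interleave here
        self.value = current + 1    # may clobber another task's update


async def main():
    c = Counter()
    # Run ten increments concurrently; each reads the counter before
    # any of them has written back, so updates are lost.
    await asyncio.gather(*(c.increment() for _ in range(10)))
    return c.value
```

Running `asyncio.run(main())` yields 1, not 10: all ten tasks read 0 at the suspension point before any writes back. The code between two awaits runs atomically, but the method as a whole does not.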

