
I think you should change the cherries to a battery and call the game Correct Horse Battery Stable.

Or the cherries could be a delicious pastry or PBJ-like treat: _Collect Horse Buttery Stable_...

Use staples instead of walls as barriers.

Or turn the cherries into sugar lumps, and call the game My Lovely Horse

That is just delightful.

Reference [1] for anyone wondering.

[1] https://xkcd.com/936/


> The one big problem: gopls. We need the first line of the script to be without spaces...

Specifically, the problem here is automated reformatting. Gopls typically does this on save as you are editing, but it is good practice for your CI system to enforce the invariant that all merged *.go files are canonically formatted. This ensures that the user who makes a change formats it (and is blamed for that line), instead of the hapless next person to touch some other spot in that file. It also reduces merge conflicts.
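If it helps, here's a rough sketch of what that CI check can look like, using the standard go/format package (the walk-the-whole-repo layout is just an assumption, not anything from your post):

    // gofmtcheck reports any .go files that are not canonically formatted.
    package main

    import (
        "bytes"
        "fmt"
        "go/format"
        "io/fs"
        "os"
        "path/filepath"
    )

    func main() {
        dirty := 0
        err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() || filepath.Ext(path) != ".go" {
                return err
            }
            src, err := os.ReadFile(path)
            if err != nil {
                return err
            }
            formatted, err := format.Source(src)
            if err != nil {
                return nil // syntax error: let the compiler report it instead
            }
            if !bytes.Equal(src, formatted) {
                fmt.Println(path)
                dirty++
            }
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if dirty > 0 {
            os.Exit(1) // unformatted files found: fail the CI step
        }
    }

Most setups just run gofmt -l . and fail on any output; the program above is the same check in a form you can drop into any pipeline.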

But there's a second, bigger problem with this approach: you can't use a go.mod file in a one-off script, so you can't pin the versions of your dependencies, which undermines the appeal to compatibility that motivated your post:

> The primary benefit of go-scripting is [...] and compatibility guarantees. While most languages aim to be backwards compatible, Go has this as a core feature. The "go-scripts" you write will not stop working as long as you use Go version 1.*, which is perfect for a corporate environment.

> In addition to this, the compatibility guarantees make it much easier to share "scripts". As long as the receiving end has the latest version of Go, the script will run on any OS for decades to come.
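To make the trade-off concrete: with a real module, those versions would be pinned in a go.mod along these lines (module path and dependency are made up for illustration):

    module example.com/tools/fetch-report

    go 1.22

    require github.com/some/dependency v1.4.2

A standalone script has nowhere to record that require line, which is the point: the Go 1.x promise covers the language and standard library, not whatever version of a third-party dependency gets resolved later.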


True, but major versions are locked in through the import path and should be compatible.
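For anyone who hasn't hit this, "locked in through the import path" refers to semantic import versioning: from v2 on, the major version is part of the path itself, so a breaking upgrade can't happen silently (module names below are hypothetical):

    import (
        "github.com/example/widget"             // v0/v1: releases are expected to stay backward compatible
        widgetv2 "github.com/example/widget/v2" // v2+: breaking changes require a new import path
    )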


> which undermines the appeal to compatibility that motivated your post

not really? this is about the language / core runtime rather than any dependencies.


My coworkers learn, and an important part of my job is teaching them. LLM-based tools don't.


A circular saw doesn't learn either. It's a tool, just like an LLM.

The LLM isn't replacing your coworkers, it's a tool they can (and IMO should) learn to use, just like an IDE or a debugger.


Predicting is easy. Predicting correctly less so.


When you are making predictions about what you are going to do, "correctly" is spelled "honestly".


This (amazing) hypothesis has been challenged by new evidence; see for example https://pmc.ncbi.nlm.nih.gov/articles/PMC4780611/.


Also, the temperature is not high enough (compared to the steam coming out of a gas/oil/nuclear plant) to obtain much work from the waste heat.


That is 100% the issue. This is really low-quality heat. Making it better would require even more energy input (e.g. a heat pump), because we can’t safely run electronics hot enough to generate high-quality process heat.


> England “gave up” scientific and technological leadership during the 20th century. (That’s a tongue-in-cheek take on it, don’t read too much into it.)

Was forced to give up, due to the economic devastation of WWII, might be more accurate (though of course there were other factors too).


“A lone coder, trained in the direct manipulation of symbols—an elegant weapon from a more civilized age—is now all that stands between humanity and darkness.” etc


“Formulas that update backwards” is the main idea behind neural networks such as LLMs: the computation network produces a value, the error in this value is computed, and then the error quantity is pushed backward through the network; this relies on the differentiability of the function computed at each node in the network.
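For a concrete (if toy) picture of what "pushed backward" means, here is a minimal sketch, not from the article, of one forward and backward pass through a single sigmoid neuron, applying the chain rule by hand; the numbers are arbitrary:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Parameters and a single training example (values are arbitrary).
        w, b := 0.5, 0.1
        x, target := 1.5, 1.0
        lr := 0.1 // learning rate

        // Forward pass: compute a value.
        z := w*x + b
        y := 1 / (1 + math.Exp(-z)) // sigmoid activation
        loss := (y - target) * (y - target)

        // Backward pass: push the error back through each differentiable step.
        dLdy := 2 * (y - target) // d(loss)/dy
        dydz := y * (1 - y)      // derivative of the sigmoid
        dLdz := dLdy * dydz
        dLdw := dLdz * x // chain rule through z = w*x + b
        dLdb := dLdz

        // Gradient step.
        w -= lr * dLdw
        b -= lr * dLdb
        fmt.Printf("loss=%.4f  dL/dw=%.4f  dL/db=%.4f\n", loss, dLdw, dLdb)
    }

A real network just repeats this node by node, layer by layer, from the loss back to the inputs.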


"Formulas that update backwards" isn't really the main idea behind neural networks. It's an efficient way of computing gradients, but there are other ways. For example forward propagation would compute a jacobian-matrix product of input wrt output with an identity matrix. Backpropagation is similar to bidi-calc to the same extent as it is similar to many other algorithms which traverse some graph backward.

I think you should be able to use bidi-calc to train a neural net, although I haven't tried. You'd define a neural net and then change its random output to what you want it to output. As I understand it, though, it won't find a good solution: it might find a least-squares solution for the last layer, but when you then want the previous layer to output something that reduces the last layer's error, bidi-calc will no longer consider the last layer at all.


All those words and you forget to provide people the breadcrumbs to learn more for themselves.

The term of interest is "backpropagation".


Won’t another breadcrumb be Prolog and “declarative programming” [1]?

Wasn’t Prolog invented to formalise these kinds of problems, where you make the inputs match what the desired output should be?

[1] https://en.wikipedia.org/wiki/Declarative_programming


Yes, I'm glad to see a comment on Prolog. I think of it as _the_ foundational programming language for solving such problems. It isn't so much that it's a backpropagation language; it's just that, based on which variables are bound at a given point, it will go forwards deductively or backwards inductively.


Prolog has basically nothing to do with calculus.


Notably this Onion piece was added to the home page of MIT’s architecture department the year my cohort of grad students moved from LCS into the newly opened Stata center.

The building was fun to explore, but it had a number of defects that suggested its designer was a big-picture guy, not a details guy.

