> The one big problem: gopls. We need the first line of the script to be without spaces...
Specifically the problem here is automated reformatting. Gopls typically does this on save as you are editing, but it is good practice for your CI system to enforce the invariant that all merged *.go files are canonically formatted. This ensures that the user who makes a change formats it (and is blamed for that line), instead of the hapless next person to touch some other spot in that file. It also reduces merge conflicts.
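For anyone who hasn't seen it, the single-file trick in question looks something like this (a sketch; the exact incantation varies, and the file must be executable):

```go
//usr/bin/env go run "$0" "$@"; exit "$?"

package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Println("hello from a go \"script\"; args:", os.Args[1:])
}
```

The first line is a shell command and a Go line comment at the same time: with no shebang, the shell falls back to interpreting the file itself, `//usr/bin/env` resolves to `/usr/bin/env`, and the `exit` stops it from reading any further. A formatter that inserts a space after the `//` turns that line back into an ordinary comment, and the shell can no longer execute the file.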
But there's a second, bigger problem with this approach: you can't use a go.mod file with a one-off script, and that means you can't pin the versions of your dependencies, which undermines the appeal to compatibility that motivated your post:
> The primary benefit of go-scripting is [...] and compatibility guarantees. While most languages aim to be backwards compatible, go has this as a core feature. The "go-scripts" you write will not stop working as long as you use go version 1.*, which is perfect for a corporate environment.
> In addition to this, the compatibility guarantees make it much easier to share "scripts". As long as the receiving end has the latest version of go, the script will run on any OS for tens of years into the future.
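To make that concrete: the moment a script imports anything outside the standard library, reproducibility needs a go.mod pinning versions, at which point the "script" is really a directory. A minimal sketch (the module path here is hypothetical):

```
module example.local/myscript

go 1.22

require golang.org/x/text v0.14.0
```

You then run it with `go run .` instead of `go run myscript.go`. The shareable variant is to publish the module and have the other side run `go run example.local/myscript@v1.0.0`, which does pin dependencies, but by then it has stopped being a one-off script.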
That is 100% the issue. This is really low-quality heat. Making it better would require even more energy input (e.g. via a heat pump) because we can’t safely run electronics hot enough to generate high-quality process heat.
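Back-of-envelope, assuming an ideal (Carnot) heat pump lifting heat from $T_c$ to $T_h$, the minimum work per unit of heat delivered is

$$W_{\min} = Q_h\left(1 - \frac{T_c}{T_h}\right),$$

so upgrading, say, 60 °C exhaust ($T_c \approx 333\,\mathrm{K}$) to 180 °C process heat ($T_h \approx 453\,\mathrm{K}$) costs at least $1 - 333/453 \approx 0.26$ J of electricity per joule of heat delivered, and real machines at that temperature lift do considerably worse.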
> England “gave up” scientific and technological leadership during the 20th century. (That’s a tongue-in-cheek take on it, don’t read too much into it.)
Was forced to give up, due to the economic devastation of WWII, might be more accurate (though of course there were other factors too).
“A lone coder, trained in the direct manipulation of symbols—an elegant weapon from a more civilized age—is now all that stands between humanity and darkness.” etc
“Formulas that update backwards” is the main idea behind neural networks such as LLMs: the computation network produces a value, the error in this value is computed, and then the error quantity is pushed backward through the network; this relies on the differentiability of the function computed at each node in the network.
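To see the "pushed backward" part concretely, here is a minimal sketch in Go (one neuron, squared error; the names and numbers are illustrative, not from any particular framework):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// A one-neuron "network": y = w2 * tanh(w1*x).
	x, t := 1.0, 0.5   // input and target
	w1, w2 := 0.3, 0.8 // weights
	lr := 0.1          // learning rate

	for i := 0; i < 100; i++ {
		// Forward pass: the network produces a value.
		h := math.Tanh(w1 * x)
		y := w2 * h
		loss := (y - t) * (y - t)

		// Backward pass: push the error back through each node
		// with the chain rule (this is where differentiability matters).
		dLdy := 2 * (y - t)           // d(loss)/dy
		dLdw2 := dLdy * h             // y = w2*h, so dy/dw2 = h
		dLdh := dLdy * w2             // dy/dh = w2
		dLdw1 := dLdh * (1 - h*h) * x // d tanh(u)/du = 1 - tanh(u)^2, du/dw1 = x

		w1 -= lr * dLdw1
		w2 -= lr * dLdw2
		if i%25 == 0 {
			fmt.Printf("step %d: loss=%.6f\n", i, loss)
		}
	}
}
```

Each `dLd…` line is one application of the chain rule; an autodiff system generates exactly these steps from the forward pass, which is why every node has to be differentiable.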
"Formulas that update backwards" isn't really the main idea behind neural networks. It's an efficient way of computing gradients, but there are other ways. For example forward propagation would compute a jacobian-matrix product of input wrt output with an identity matrix. Backpropagation is similar to bidi-calc to the same extent as it is similar to many other algorithms which traverse some graph backward.
I think you should be able to use bidi-calc to train a neural net, although I haven't tried. You'd define a neural net and then change its random output to what you want it to output. As I understand it, though, it won't find a good solution: it might find a least-squares solution for the last layer, and then you'd want the previous layer to output something that reduces the last layer's error, but by that point bidi-calc will no longer consider the last layer at all.
Yes, I'm glad to see a comment on Prolog. I think of it as _the_ foundational programming language for solving such problems. It isn't so much that it's a backpropagation language; it's just that, based on which variables are bound at a given point, it will go forward deductively or backward inductively.
Notably, this Onion piece was added to the home page of MIT’s architecture department the year my cohort of grad students moved from LCS into the newly opened Stata Center.
The building was fun to explore, but it had a number of defects that suggested its designer was a big-picture guy, not a details guy.