Hacker News | parasense's comments

Somebody in the other thread said it best... Wikipedia should simply block the UK entirely.


I'm not sure what you mean by some of the things you write, but the part about Microsoft being in its "cool guy phase" was hilarious.

I'd say Microsoft buying GitHub was part of a strategy to avoid losing relevance in a world that is slowly moving towards Open Source Software. Or put another way: the world is moving in a direction away from Microsoft, and by capturing GitHub they can manipulate outcomes that would otherwise have been adversarial to Microsoft's interests. It's just like when Microsoft forked Java back in the 1990s and later created .NET. The whole VSCode / Visual Studio thing is just Microsoft Word for software engineers, and the whole point is to create an ecosystem that locks people in.

To think in terms of what Microsoft does, you have to step back and look at a little economic theory. There is an idea in economics about isolated economies versus integrated economies. For example, Europe and North America rely on cheap manufactured goods from China, so China's economy is intrinsically linked (integrated) into the economies of Europe and North America. THAT is the idea behind what Microsoft does. They start by adding value, a soft dependency you might say, and then make moves toward becoming a hard dependency, to put it in terms of a dependency graph. Then they link the dependency graphs together: GitHub into VSCode, OpenAI into VSCode, OneDrive into GitHub, or OneDrive into Hotmail...

I'll say this for sure: at least Microsoft has a strategy, unlike Google, which seems to have a lot of failed projects.


As ridiculous or absurd as this idea might seem, it's probably the most succinct and likely effective response to this kind of situation. The UK is betting the rest of the world doesn't reciprocate.


Not ridiculous, the only way to stop injustice is to fight.


And did amphetamines on trains...


Thank you!


It was not behind a paywall. It worked for me.


Wired has a paywall that allows users to read four articles per month for free before requiring a subscription.


The SLS was a good idea, and it's actually a great rocket. However, you are correct in saying it turned into a huge program for the old-school rocket industrial complex. I think the private sector currently does this better, or it's at least debatable. However, I think it's a mistake to say only the private sector can do this kind of thing optimally. There is some multiverse in the timelines where government contractors create an industrial rocket production line that quickly and cheaply stamps out heavy-lift rockets. Granted, it's easier said than done, but it still doesn't have to be so expensive. Clearly the expensive part should be the R&D, with the industrial production parts being jigged, automated, and fully optimized. The SLS obviously went another route by making rocket production bespoke, with non-optimal manual labor, etc. That kind of production is acceptable for one-off science mission payloads, but not for heavy lift.

Anyhoo, NASA letting so many people resign is good if your opinion is that lowering government expenditure is a good thing, so long as the exit package is comparable to the retirement package these government employees would have gotten otherwise. My guess is the resignation package has great near-term value but low long-term (retirement) value, making it a good option for younger workers able to pivot to new careers.


You're right about the Rust static binary size.

Hello world is really large, and it's unamusing how much of the standard library gets crammed into the resulting binary, no matter how trivial the program...

Do you know the current status of dynamic linking? I guess the lack of ABI stability is the big blocker, right? There's probably no use in formalizing the linking bits if the goalposts keep moving. So it seems like the big problem is that some committee will never complete the task, because it will never be perfect... something like that.


Gankra wrote a good piece about all the work involved in this years ago (specifically, what Swift had to go through to get ABI stability) <https://faultlore.com/blah/swift-abi/>


Thank you for this reference, it's a great read and actually gave me a better understanding of ABI compatibility issues in general.

Probably the wrong place to ask, but where is the claim that static compilation is "hurting battery life" coming from? More efficient use of CPU caches because frequently used shared libs are more likely to be cached? Or fewer allocations in RAM, maybe?


I think it's about more than just ABI stability - the Rust ecosystem is built around applications being able to demand exact micro-versions of every dependency, and that falls apart when you want multiple non-cooperating applications sharing the same crate binary.
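To make that concrete, here is a hypothetical Cargo.toml fragment (the crate name is made up, for illustration only): an exact pin means two applications can each demand a different micro-version, so no single shared build of the crate could satisfy both.

```toml
# App A's manifest (hypothetical crate name)
[dependencies]
somecrate = "=1.2.3"   # exact pin: only 1.2.3 is acceptable

# If App B pins somecrate = "=1.2.4", a system-wide shared build of
# somecrate can satisfy at most one of the two applications.
```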


This might seem like a rant to some of you, or even heretical to certain shell zealots... but it's about time we move past POSIX compliance for shells. Don't get me wrong, it was a fabulous thing back in the 1980s and 1990s with respect to the Unix wars. But in a twist of irony, Linux won the Unix wars, and these days POSIX compliance, with respect to shells, mostly holds back innovation and modernization by pegging the concept of a shell to something from 1988. Namely the Korn shell (the reference POSIX shell implementation back then), or even worse, the Bourne shell. Don't get me wrong, I'm glad we're not on something like the C shell, but I'm pretty sure nobody today actually adheres to pure POSIX compliance for shell scripting. So let's all just agree to drop the pretense and snobbery, and move forward into a brave new world beyond POSIX.


There are two ways to attempt to move beyond POSIX sh:

1. You can implement a superset of POSIX, like Bash and (I think) Zsh. This gives you a graceful upgrade path while maintaining backward compatibility, at the expense of being somewhat "stuck" in places. Oil is another attempt at exploring how best to use this path.

2. You can throw out POSIX totally, like fish and PowerShell. This lets you really improve things, at the expense of breaking backwards compatibility. IMHO, breaking compatibility is painful enough that it's really really hard to justify.

It's also worth pointing out that you can separate the roles of "interactive shell" and "shell for scripts". It is, for example, perfectly reasonable to use fish for interactive sessions while keeping /bin/sh around (perhaps even preferring dash as its implementation), which gives you compatibility with software while making things friendlier for users. I say this as someone who writes a lot of sh scripts, and between that and years of practice my fingers expect something roughly sh-like, but I hear a lot of good things from folks who have switched their interactive shell to, e.g., fish.


That's what I do: interactive Fish, scripted Bash/sh, although I let myself write Fish scripts for my own local, personal scripting that I don't care about sharing with the rest of the world.


I'm just a regular user. I don't care at all about the grand philosophy and ideals of my terminal.

All I know is that zsh works with 100% of the tasks and scripts I need, and fish does not. Therefore, I get pissed at fish, and to me it's a bad shell. Who cares if fish is built on a fresh new philosophy and this week's language du jour if it doesn't work?

I'm using the tool that works the way it's supposed to. I don't care whether it works because it's using standards from 50 or 500 years ago, because that is totally and completely disjoint from being a good tool.


Okay, but is it a matter of fish not working or fish not working the way you're used to because you learned to use sh-like shells first? The people I hear praising fish the most are very often users who didn't have much experience before using it (not always, but often).

Granted, that still is a fair point IMO; backwards compatibility is for users too, not just programs.


For flavor, I maintained the bash-completions script for FreeBSD in the early 2000s, and now I’m a Fish advocate. I love it because I’ve used the others.


I like this idea, and I used fish for years.

But, it also increases the mental workload a bit. For one, you now use two similar-but-not-quite tools, and have to keep them straight to make sure you always use the right syntax.

What really did me in was, most of the snippets, docs, etc on the internet were POSIX-compatible, so I either had to translate to fish (which was less bash-compatible at the time), make a temp script, or drop into bash. All of which were constantly-annoying speed bumps.
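That translation overhead is easy to illustrate. Here is a tiny POSIX-sh snippet with its fish equivalent shown in comments (the fish lines reflect my understanding of fish syntax, so double-check them):

```shell
# POSIX sh: assignment, export, and command substitution
GREETING="hello"
export GREETING
count=$(printf 'a\nb\n' | wc -l)
count=$((count))   # normalize: some wc implementations pad with spaces

# The same steps in fish (as comments, since fish is not sh-compatible):
#   set -x GREETING hello
#   set count (printf 'a\nb\n' | wc -l)

echo "$GREETING $count"
```

Nothing here is exotic, yet every line needs mental translation when you paste it into a fish session.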

One of the things I like about Oils (and why I'm contributing to it), is the bash-compatible part and the future-directions part are the same executable, so toggling the behavior is very fast.


Surprisingly, POSIX has recently adopted the find -print0 | xargs -0 idiom, so the gears do turn, but very slowly.
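For anyone unfamiliar with the idiom, a minimal sketch: NUL-terminated names survive spaces (and even newlines) in filenames, which whitespace-splitting xargs would otherwise mangle.

```shell
# -print0 emits NUL-terminated paths; xargs -0 splits on NUL, not whitespace.
mkdir -p demo
printf 'one' > "demo/a file.txt"
find demo -name '*.txt' -print0 | xargs -0 cat
```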

The latest standards for POSIX.2 utilities are here:

https://pubs.opengroup.org/onlinepubs/9799919799/utilities/

I do agree with you that UNIX userland would be miles ahead of where we are now if the POSIX.2 standard could be cajoled out of the '80s.


I'm with you. I've used Fish for a few years now and I find it so much more ergonomic for having foregone strict POSIX compliance. I still write cross-platform stuff in Bash when it's going to run on machines I don't personally control, but I'll write all my routine local interactive stuff (like adding helper functions, wrappers for other commands, etc.) in Fish because it's a breath of fresh air.

I strongly disagree with the notion of only learning one shell language "because what if I telnet into an ancient Sun box and Fish isn't available?" In exactly the same way, I don't exclusively write my programs in C in case some remote host might not have Python or Rust or Fish some day. I'll cross that bridge when I come to it, but in the meantime I want to use something that makes me happy and productive.


Posix compliance isn't holding back progress. You are welcome to make the most advanced, paradigm-smashing new shell in the universe. If it's good, people will use it. If you want it to replace Posix compliant shells, you might want to consider why people might not want to leave Posix and address that first, rather than ask everyone to abandon them "because we're advanced now"

But please don't ruin the one great thing about shell scripting, which is that it's still possible to write one shell script that runs everywhere. Yes it's old, antiquated and quirky. It's also very convenient not to have to 1) install new tools on every system, 2) adapt a billion old scripts for a new tool, and 3) learn yet-another-new-paradigm.
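That portability is the point: a script like the sketch below, which deliberately avoids bashisms, runs unchanged under dash, bash, ksh, or busybox sh.

```shell
#!/bin/sh
# Deliberately portable: no arrays, no [[ ]], no ${var,,} lowercasing,
# so it behaves the same under dash, bash, ksh, and busybox sh.
name="world"
if [ "$name" = "world" ]; then
    greeting="hello, $name"
fi
printf '%s\n' "$greeting"
```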


Things can be both a rant and true at the same time. I'm glad Fish didn't attempt Posix compliance.


I think everyone agrees with you, and they did back in say 2016 when I started https://oils.pub

They also agreed with you in the early 1990s. There are some quotes from Richard Stallman, David Korn (author of AT&T ksh), and Tom Duff (author of rc shell) lamenting the Bourne shell here:

https://www.oilshell.org/blog/2019/01/18.html#slogans-to-exp...

"A problem with using a Bourne shell compatible language is that field splitting and file name generation are done on every command word"

"nobody really knows what the Bourne shell’s grammar is"
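The field-splitting complaint can be demonstrated in any Bourne-style shell (a minimal sketch):

```shell
# Unquoted expansion is field-split on whitespace; quoting keeps one word.
msg="two  words"
unquoted_argc=$(set -- $msg; echo $#)    # splits into 2 arguments
quoted_argc=$(set -- "$msg"; echo $#)    # stays 1 argument
echo "$unquoted_argc $quoted_argc"      # prints "2 1"
```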

---

But there is a "collective action" problem. Shell was the 6th fastest-growing language on GitHub in 2022: https://octoverse.github.com/2022/top-programming-languages

I imagine that, in 2025, there are MORE new people learning POSIX shell/bash, than say any other shell here: https://github.com/oils-for-unix/oils/wiki/Alternative-Shell...

Because they want to get work done for the cloud, or embedded systems, or whatever

Also, LLMs are pretty good at writing shell/bash!

---

Oils is designed to solve the legacy problem. OSH is the most bash-compatible shell in the world [1]:

https://oils.pub/osh.html

and then you also have an upgrade to YSH, a legacy-free shell, with real data structures: https://oils.pub/ysh.html

YSH solves many legacy problems, including the exact problems from the 1990's pointed out above :-)

So to the extent that you care about moving off of bash for scripting, you should probably prefer OSH and YSH to Brush.

It looks like Brush aims for the OSH part (compatibility), but there is no YSH part (dropping legacy).

(I may run Brush through our spec tests to see how compatible it is, but looking at number of tests / lines of code, I think it has quite some distance to go.)

[1] e.g. early this year, Koichi Murase rewrote bash arrays in OSH to use a new sparse data structure, which I mentioned in the latest blog post. Koichi is the author of the biggest shell program in the world (ble.sh), and also a bash contributor.

https://github.com/oils-for-unix/oils/wiki/The-Biggest-Shell...


Well, the problem is: what should the lingua franca be in a post-POSIX/Bash world?

My preference is PowerShell. It's now open source [1], it has a wide install base, and it's cross-platform. It is a bit heavy and slow to start (it actually takes seconds), but the cleanness of its record-based nature, versus just string parsing, is infinitely refreshing.

[1] https://github.com/PowerShell/PowerShell



You can build from source to strip away telemetrics.


"can" is doing some heavy lifting there, especially based on my experience with VSCodium (which by definition is forced to work within Microsoft's opinion of build systems)

Anyway, uh-huh:

https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github... -> https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github...

https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github... -> https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github...

and, relevant to your comment even they opt out of telemetry https://github.com/PowerShell/PowerShell/blob/v7.5.1/.github...
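For what it's worth, PowerShell documents an environment variable for opting out of telemetry; a minimal sketch (the variable name is from PowerShell's docs as I recall them, so verify before relying on it):

```shell
# Disable telemetry for any pwsh launched from this environment.
export POWERSHELL_TELEMETRY_OPTOUT=1
echo "$POWERSHELL_TELEMETRY_OPTOUT"
```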

---

As a frame of reference, to build bash one only needs /bin/sh not a pre-built copy of bash itself https://git.savannah.gnu.org/cgit/bash.git/tree/configure?h=...


I wonder if the distros that have it in their repositories already do this.


Mac's Homebrew doesn't. I just installed it, ran the "ls" command, and saw Little Snitch block outbound connections to Akamai and Azure.


It is unlikely to be PowerShell.

On my old i386 server, this is my fastest shell:

  $ ll /bin/dash
  -rwxr-xr-x 1 root root 85368 Jan  5  2023 /bin/dash

The set of features in the POSIX.2 shell is designed to minimize resource usage.

This is simply a place that PowerShell cannot go.


i386 isn't even supported by Linux anymore, and it has less power than a Raspberry Pi. It's not an indicator of the future of anything. It might be nostalgic, but it's e-waste, better served by a Pi.


My mistake: it's my legacy RHEL 5 i686 install on a 22nm Xeon.

This does not mean that resource-constrained environments do not exist.


Anything that's slow to start is completely unusable as a general replacement for shell scripting. For specific use-cases where the script itself would take a long time to run, a slow start may be fine, but `sh` scripts are used all over the place in contexts where you want it to do its thing and get out of the way as fast as possible (e.g. tweaking env vars or arguments before `exec`ing a binary).
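The overhead is easy to feel. A rough, unscientific sketch that just times repeated no-op shell startups (absolute numbers vary wildly by machine; this only shows the measurement):

```shell
# Time 20 no-op /bin/sh invocations; each pays the full startup cost.
start=$(date +%s)
i=0
while [ "$i" -lt 20 ]; do
    sh -c ':'
    i=$((i + 1))
done
end=$(date +%s)
echo "20 sh startups in $((end - start))s"
```

Swap `sh` for `pwsh` and the multi-second difference per invocation becomes obvious.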


During boot, my shell took 2.5s to start up. Now it takes 0.9s. I have trouble imagining a scenario where scripting is adequate but a second or two of startup is too much. I'm thinking maybe high-availability migration or something. The correctness benefits from the (optional) type system seem worth it even there.


In a personal setting it's up to you. But in a collaborative setting? Good luck convincing everyone that your shiny shell works better.


I'm sure many of you remember the Monkey selfie from a few years ago... meh!

I'm also sure many of you understand the further-reaching implications of this ruling, especially how it relates to software code written by AI. All that code written by AI cannot be licensed as anything besides public domain. Just think of all the code people have checked into git that they did not write! Next, please consider the implications for the open source community if there is ever a controversy about Linux kernel code that was AI-generated and suddenly cannot be covered by the GPL. I think the neck-beard people over at NetBSD can sometimes be eccentric about many things, but on this topic they were justified when they loudly banned all AI-generated code from their repos.

