No scp, no quoting hell. Obviously there are other ways of doing it. And if you run it with the base64 printed out, you can pass it along and the user can still see what will be run.
I got good enough at escaping to know when to give up on it; the worst cases are just not worth the effort.
ssh has "-n", which I think stops that. It's worked for me most of the time when I end up in that or a similar situation (the once or twice it didn't, IIRC, involved experiments with chaining ssh commands).
No, "-n" has the opposite effect to the one desired. "-n" is for when you have a server-side command that reads standard input and you want to prevent it from reading from the client side. "-n" redirects standard input from /dev/null, so the server-side command reads empty input.
The desired effect is for the server-side script to read input from the client environment, for example so you can give answers to questions, or interact with a terminal program. That's broken by piping $script in as input, and it's also broken by "-n".
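For example (host being whatever you'd normally ssh to):

echo hello | ssh host 'read line; echo "got: $line"'
# prints "got: hello" -- the remote read consumes the client's stdin

echo hello | ssh -n host 'read line; echo "got: $line"'
# prints "got: " -- stdin is /dev/null, so read sees EOF immediately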
I don't see an obvious situation where that would fail. People pipe tar output, rsync, dd, etc. through ssh without issue. Stdin doesn't seem to have quoting issues, which is what you would expect.
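E.g. the classic tar-over-ssh copy, which shoves arbitrary binary data through the same pipe:

tar cf - somedir | ssh host 'tar xf - -C /tmp'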
The base64 in the parent post seems unneeded if you're passing through stdin/stdout.
I came here to say something like this is way easier. I have a script which does exactly this for an interactive SSH across a large number of hosts [0] (with -S, shell mode).
This is the method I ended up using for golem (https://github.com/robsheldon/golem), a tool I wrote for executing server documentation on remotes. Shell quoting was by far the hardest part to get right, and the base64 pipe was the only solution that correctly handled all forms of quoting embedded in the scripts.
I don't believe the base64 is really adding anything here. There aren't quoting or data corruption issues with data coming through stdin/stdout. If there were, all the various scripts that pipe tar, dd, rsync, and so on through ssh pipes would have uncovered them. Just piping the script to bash is enough.
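That is (script.sh being the local file):

ssh host bash < script.sh
# or, same thing with an explicit pipe:
cat script.sh | ssh host bash

The script travels over stdin verbatim; no shell on either side ever expands its contents.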
The value added is that you’re not using stdin and can use it for something else.
Whether that’s actually useful depends on the use case. Personally I have an ssh wrapper script, somewhat similar (though not using base64), that fixes the quoting so that the argv passed to the wrapper directly corresponds to the argv of the command on the remote end. It’s meant for interactive use, so the program I’m trying to run could easily be something that reads from stdin, or even an interactive program like sudo or vim that expects stdin/stdout to be a tty. (To make those work, the -t option has to be passed to ssh.)
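A minimal sketch of that kind of wrapper (sshq is a made-up name; it assumes bash is the remote login shell, since %q can emit bash-specific $'...' quoting):

#!/usr/bin/env bash
# sshq: run a command remotely with argv passed through intact
# usage: sshq host command [args...]
host=$1; shift
# %q re-quotes each local argument so the remote shell splits the
# command line back into exactly the same argv
exec ssh -t "$host" "$(printf '%q ' "$@")"

The -t gets you a remote tty, so interactive programs like sudo or vim work too.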
The equivalent would be catting the script and piping that to the (remote) bash, but as part of the SSH arguments, not its stdin.
So:
ssh user@remotehost "$( cat script ) | bash"
Note that the command substitution

$( cat script )

... runs locally, not on the remote side: cat reads the local file. Its output then becomes part of the remote command line, so any interpretable elements of the script (c|w)ould get a second round of expansion by the remote shell, though I'm not quite sure what effects this might have.
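Concretely, with a hypothetical one-line script:

# suppose script contains:  echo $HOME
ssh user@remotehost "$( cat script )"
# cat runs locally, but the script text becomes the remote command
# line, so the remote shell expands $HOME (and anything else
# interpretable) a second time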
That said, I'm not clear on exactly what base64 adds here, though since "script" is encoded by base64 before any shell sees its contents, that second round of expansion is avoided entirely.
There's a #UselessUseOfCat as well. The method could be simplified to:
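ssh user@remotehost "$(< script) | bash"

(using $(< file), bash's builtin equivalent of $(cat file))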
This isn't solving the same problem as in the article, which is about inline one-liners.
When the commands already exist as a file, the problem has multiple solutions. Quoting via base64 is a clever one, but the result is actually the least convenient compromise of all: not only does the script have to pre-exist as a file, stdin is also unavailable as a transport to the process executing the commands.
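For reference, the shape of the base64 trick (assuming GNU base64, whose -w0 disables line wrapping):

ssh host "echo $(base64 -w0 script) | base64 -d | bash"
# base64 output is [A-Za-z0-9+/=] only, so it survives both shells
# unquoted -- but note the remote bash now reads its stdin from the
# decode pipe, not from the ssh channel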
Other than that, it just makes it kinda obvious that it will not break in any scenario, period. It just feels better, make sense?
As someone wrote here, avoiding the expansion issues is really nice.
It is also very easy to compress with xz, but hey, then you could just use xz to begin with! Well, I don't know how weird binary data will affect stdout in all scenarios, so once again I stay with what's safe. I know that people send tar and whatnot over ssh pipes, but I really don't trust bash that much.
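For what it's worth, both variants are one-liners (host hypothetical; GNU base64's -w0 assumed):

xz -c script | ssh host 'xz -dc | bash'
# raw compressed bytes on the pipe

xz -c script | base64 -w0 | ssh host 'base64 -d | xz -dc | bash'
# same, but kept 7-bit clean by stacking base64 on top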
Another nice way would be
ssh $server bash <<'EOF'
echo hello world
EOF
You could not use the stdin then, sadly, but that is solvable.
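For example, by moving the script into the command argument so stdin stays free (a sketch; GNU base64 assumed):

b64=$(base64 -w0 script)
ssh $server "eval \"\$(echo $b64 | base64 -d)\""
# the remote shell decodes the script inside a command substitution
# and evals it, leaving the ssh channel attached as the script's stdin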
rsync is smarter: it doesn't make you transfer the file again if there are no changes. Also, if you have keepalive set in your ssh config, running an rsync command followed by ssh to the same server should be instantaneous.