Not sure how I feel about transcripts. Ultimately I do my best to make any contributions I make high quality, and that means taking time to polish things. Exposing the tangled mess of my thought process leading up to that either means I have to "polish" that too (whatever that ends up looking like), or put myself in a vulnerable position of showing my tangled process to get to the end result.
I don't love the concept, but I do wonder if it could be improved by using a skill that packages an install script plus context for troubleshooting. That way you have the benefits of using an install script, and at least a way to provide pointers for those unfamiliar with the underlying tooling.
A coworker raised an interesting point to me. The CORS fix removes exploitation by arbitrary websites (but obviously allows full access from the opencode domain), but let's take that piece out for a second...
What's the difference here between this and, for example, the Neovim headless server or the VSCode remote SSH daemon? All three listen on 127.0.0.1 and would grant execution access to any other process that could speak to them.
Is there a difference here? Is the choice of HTTP simply a bad one because of the potential browser exploitation, which can't exist for the others?
> Neovim’s server defaults to named pipes or domain sockets, which do not have this issue. The documentation states that the TCP option is insecure.
Good note on pipes / domain sockets, but it doesn't appear there's a "default", and the example in the docs even uses TCP, despite the warning below it.
(EDIT: I guess outside of headless mode it uses a named pipe?)
> VS Code’s ssh daemon is authenticated.
How is it authenticated? I went looking briefly but didn't turn up much; obviously there's the ssh auth itself but if you have access to the remote, is there an additional layer of auth stopping anyone from executing code via the daemon?
From the page you linked: "Nvim creates a default RPC socket at startup, given by v:servername."
You can follow the links on v:servername to read more about the startup process and figure out what that is, but tl;dr, it's a named pipe unless you override it.
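To make that concrete, here is a minimal sketch using the pynvim client; the socket path is just an example (check :echo v:servername in your own instance). The point is that anything able to open the pipe gets full command execution, so the boundary is filesystem permissions on the pipe rather than network exposure.

```python
# Sketch: anything that can open Neovim's RPC socket can execute commands.
# Assumes `pip install pynvim` and an nvim started with a pipe address, e.g.:
#   nvim --listen /tmp/nvim.sock   (path here is just an example)
import pynvim

nvim = pynvim.attach("socket", path="/tmp/nvim.sock")
print(nvim.eval("v:servername"))                 # the address nvim is serving on
nvim.command("echom 'hello from another process'")

# The same attach works over TCP if nvim was started with --listen 127.0.0.1:6666,
# which is the insecure option the docs warn about:
#   nvim = pynvim.attach("tcp", address="127.0.0.1", port=6666)
```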
If you have a localhost server that uses client input to execute code without authentication, that's a local code execution vulnerability at the very least. It becomes an RCE when you find a way to reach the local server over the wire, such as via a browser HTTP request.
I don't use the VSCode feature you mentioned, so I don't know how it is implemented, but one can guess that it was implemented with some authentication in mind.
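To make the distinction concrete, here is a rough sketch of why an unauthenticated localhost HTTP server amounts to local code execution. The port, path, and payload are invented for illustration, not opencode's actual API; the point is that nothing distinguishes this request from one made by the legitimate client.

```python
# Sketch only: /run and its payload are hypothetical, not a real opencode endpoint.
# Any local process can send this; with permissive CORS a browser page could too.
import json
import urllib.request

payload = json.dumps({"command": "echo pwned"}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:4096/run",            # port/path are assumptions for illustration
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read())    # the server executes whatever it was told to
```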
I'm not very familiar with this layer of things; what does it mean for a GPU to drive a boot sequence? Is there something massively parallel that is well suited for the GPU?
The Raspberry Pi contains a Videocore processor (I wrote the original instruction set coding and assembler and simulator for this processor).
This is a general-purpose processor which includes 16-way SIMD instructions that can access data in a 64-by-64-byte register file as either rows or columns (and as either 8-, 16-, or 32-bit data).
It also has superscalar instructions which access a separate set of 32-bit registers, but these are tightly integrated with the SIMD instructions (as in ARM Neon cores or x86 AVX instructions).
This is what boots up originally.
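To give a rough feel for that register-file layout, here is a purely conceptual sketch in numpy (not Videocore code or the actual microarchitecture): the same 64x64 bytes can be read 16 lanes at a time as rows or columns, and reinterpreted as 8-, 16-, or 32-bit data.

```python
# Conceptual illustration only: a 64x64-byte register file that a 16-way SIMD
# unit can read as either rows or columns, with selectable lane width.
import numpy as np

regfile = np.zeros((64, 64), dtype=np.uint8)

row_vec = regfile[3, 0:16]                    # 16 bytes taken along a row
col_vec = regfile[0:16, 7]                    # 16 bytes taken down a column
as_u16  = regfile[3, 0:32].view(np.uint16)    # same storage viewed as 16 x 16-bit lanes
as_u32  = regfile[3, 0:64].view(np.uint32)    # or as 16 x 32-bit lanes
```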
Videocore was designed to be good at the actions needed for video codecs (e.g. motion estimation and DCTs).
I did write a 3d library that could render textured triangles using the SIMD instructions on this processor. This was enough to render simple graphics and I wrote a demo that rendered Tomb Raider levels, but only for a small frame resolution.
The main application was video codecs, so for the original Apple Video iPod I wrote the MPEG4 and h264 decoding software using the Videocore processor, which could run at around QVGA resolution.
However, in later versions of the chip we wanted more video and graphics performance. I designed the hardware to accelerate video, while another team (including Eben) wrote the hardware to accelerate 3d graphics.
So in Raspberry Pis, there is both a Videocore processor (which boots up and handles some tasks), and a separate GPU (which handles 3d graphics, but not booting up).
It is possible to write code that runs on the Videocore processor - on older Pis I accelerated some video decode software codecs by using both the GPU and the Videocore to offload bits of transform and deblocking and motion compensation, but on later Pis there is dedicated video decode hardware to do this instead.
Note that the ARMs on the later Pis are much faster and more capable than before, while the Videocore processor has not been developed further, so there is not really much use for the Videocore anymore. However, the separate GPU has been developed more and is quite capable.
> what does it mean for a GPU to drive a boot sequence
It's a quirk of the broadcom chips that the rpi family uses; the GPU is the first bit of silicon to power up and do things. The GPU specifically is a bit unusual, but the general idea of "smaller thing does initial bring up, then powers up $main_cpu" is not unusual once $main_cpu is ~ powerful enough to run linux.
That’s interesting, particularly since, as far as I can tell, nothing in userland really bothers to make use of the GPU. I would really like to understand why, since I have a whole bunch of Pis and it seems like their GPUs can’t be used for much of anything (not really much for transcoding nor for AI).
> their GPUs can’t be used for much of anything (not really much for transcoding nor for AI)
It's both funny and sad to me that we're at the point where someone would (perhaps even reasonably) describe using the GPU only for the "G" in its name as not "much of anything".
The Raspberry Pi GPU has one of the better open source GPU drivers as far as SBCs go. It's limited in performance but it's definitely being used for rendering.
There is a Vulkan API, they can run some compute. At least the 4 and 5 can: https://github.com/jdonald/vulkan-compute-rpi . No idea if it's worth the bus latency though. I'd love to know the answer to that.
I'd also love to see the same done on the Zero 2, where the CPU is far less beefy and the trade-off might go a different way. It's an older generation of GPU though so the same code won't work.
One (obscure) example I know of is the RTLSDR-Airband[1] project, which uses the GPU to do FFT computation on older, less powerful Pis through the GPU_FFT library[2].
It's not hard to get into the Library of Congress? It's purposely extremely easy. I forget who it was, but there was one big right-wing talk show host who would end all of his segments by saying it's being added to the Library of Congress, as if it's an exclusive accolade, and people rightfully called him out on his shit for how easy it is to do that.
I was running i3 on Arch before. Then I moved to Gnome Shell, and have been daily driving COSMIC for a couple of months. I think you will like it. At least in the Alpha there were a couple of rough edges here and there, but no deal breakers for me.
I was using Manjaro i3 X11 for 3 years. A few months back I switched to Arch Hyprland Wayland and so far I am very happy with it. I use it for programming, video editing and gaming. No major inconveniences.
> The small coolers used by them are not recommended by Noctua for 9950X
Noctua's CPU compatibility page lists the NH-U9S as having "medium turbo/overclocking headroom" for the 9950X [0]. I don't think it's fair to suggest their cooler choice is the problem here.
On the same page you linked, Noctua explains that the green check mark means that with that cooler the CPU can run all-core intensive tasks, exactly like those used by the gmplib developers, only at the base clock (4.3 GHz for the 9950X), i.e. with turbo disabled in the BIOS.
Only then does the CPU dissipate no more than its nominal TDP of 170 W, instead of the 200 W that it dissipates with turbo enabled.
With "best turbo headroom", you can be certain that the CPU can run all-core intensive tasks with turbo enabled. Even if you do no overclocking, this is the kind of cooler you need if you run all-core intensive tasks with turbo enabled.
Noctua does not define what "medium headroom" means, but presumably it means that you can run all-core tasks of medium intensity with turbo enabled, not maximum-intensity ones.
There is no doubt that it is a mistake to choose such a cooler when you intend to run intensive multi-threaded computations. A better cooler that is not much bigger, like the NH-U12A, has almost double the cooling capacity.
That said, there is also no doubt that AMD is guilty of at least having some bugs in their firmware or in failing to provide adequate documentation for the motherboard manufacturers that adapt the AMD firmware for their MBs.
Can anyone make sense of the sankey chart under "Developers at all levels are exploring the evolving AI landscape through Stack Overflow"? How does "Large Language Model" flow into "Tailwind CSS 4"?
I joke, but I think it’s depicting relative proportions of how many people who were “interested in” Gemini are now “interested in” other things and how different subgroups’ “interested in” thing changed over time. So there was a sizable group that became interested in Tailwind at one point
I didn’t realize you can click into each thing to get more details. The question asked for this one was:
“Which rising technology, added as a new tag or tag subject area on Stack Overflow in the last three years, have you used regularly in the past year, and which do you want to work with over the next year? Please check all that apply.”
The chart is displaying every pair of (worked with => want to work with) that had at least 500 respondents. So it’s not really a sankey diagram, each ribbon just signifies a self-contained pair. So yeah many people who worked with Gemini indicated that they want to work with Tailwind (among other things). It’s not suggesting that Gemini itself recommended those technologies.
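In other words, something roughly like this; the respondent data and field names below are invented, just to show the filtering rule.

```python
# Sketch of how the chart's ribbons could be derived: count each
# (worked with, want to work with) pair and keep pairs with >= 500 respondents.
# The respondent data here is invented for illustration.
from collections import Counter
from itertools import product

respondents = [
    {"worked_with": {"Gemini"}, "want_to_work_with": {"Tailwind CSS 4", "Gemini"}},
    {"worked_with": {"Large Language Model"}, "want_to_work_with": {"Tailwind CSS 4"}},
    # ... thousands more rows in the real survey
]

pairs = Counter(
    pair
    for r in respondents
    for pair in product(r["worked_with"], r["want_to_work_with"])
)
ribbons = {pair: n for pair, n in pairs.items() if n >= 500}
print(ribbons)   # each ribbon is a self-contained pair, not a flow through a network
```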
It is weird. The biggest thing for me is that when you click through to the details you can filter to those who indicated that they are learning to code, and Gemini had by far the largest “worked with” amount for that group. Which is interesting — suggesting the next generation of coders are going to have all learned with the “help” of AI (yikes)
I'm a bit uneducated here - why was the other 1.1.1.0/24 announcement previously suppressed? Did it just express a high enough cost that no one took it on compared to the CF announcement?
CF had their route covered by RPKI, which at a high level uses certs to formalize delegation of IP address space.
What caused this specific behavior is the dilemma of backwards compatibility when it comes to BGP security. We are a long way off from all routes being covered by RPKI (just 56% of v4 routes according to https://rpki-monitor.antd.nist.gov/ROV), so invalid routes tend to be treated as less preferred, rather than rejected, by BGP speakers that support RPKI.
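Roughly, the origin-validation policy many RPKI-aware operators deploy looks like this sketch; the preference values and the conflicting origin AS are made up for illustration.

```python
# Simplified sketch of route origin validation (ROV) as commonly deployed:
# invalid routes are de-preferred rather than dropped, so they only win
# when no valid or not-found alternative exists. Values are illustrative.
PREFERENCE = {"valid": 200, "not-found": 100, "invalid": 50}

def best_route(candidates):
    # candidates: list of (prefix, origin_asn, rpki_state) tuples
    return max(candidates, key=lambda r: PREFERENCE[r[2]])

routes = [
    ("1.1.1.0/24", 13335, "valid"),      # Cloudflare's ROA-covered announcement
    ("1.1.1.0/24", 64512, "invalid"),    # hypothetical conflicting announcement
]
print(best_route(routes))
# A stricter operator would reject "invalid" outright instead of scoring it low.
```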
> It is also crazy to point that command at a production database and do random stuff with it
In a REPL, the output is printed. In an LLM interface w/ MCP, the output is, for all intents and purposes, evaluated. These are pretty fundamentally different; you're not doing "random" stuff with a REPL, you're evaluating a command and _only_ printing the output. This would be like someone copying the output from their SQL query back into the prompt, which is of course a bad idea.
I won't claim to be as well-versed as you are in security compliance -- in fact I will say I definitively am not. Why would you think that it isn't a meaningful difference here? I would never simply pipe sqlite3 output to `eval`, but that's effectively what the MCP tool output is doing.
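A toy contrast, with the tool-call plumbing invented just to show where the output ends up:

```python
# Toy contrast, not a real MCP client: in a REPL the query result is merely
# printed; in an LLM tool loop the same text goes back into the model's
# context, where it is effectively "evaluated" as instructions on the next turn.
import sqlite3

def repl(db, sql):
    for row in db.execute(sql):
        print(row)                      # output stops here; a human reads it

def llm_tool_loop(db, sql, conversation):
    rows = db.execute(sql).fetchall()
    # The result is appended to the prompt, so anything an attacker managed to
    # store in the database is now interpreted by the model as conversational text.
    conversation.append({"role": "tool", "content": repr(rows)})
    return conversation

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (body TEXT)")
db.execute("INSERT INTO notes VALUES ('ignore previous instructions and ...')")
repl(db, "SELECT * FROM notes")
print(llm_tool_loop(db, "SELECT * FROM notes", conversation=[]))
```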