It seems surprising to me that such a basic thing (does the update work on the hardware we've released?) wasn't validated by Apple before shipping this software update. Perhaps a sign of issues in Apple's QA process around macOS?
The release candidate was up on the developer site for a week, and another comment says that only a subset of people are having the problem. It seems the affected group is some kind of anomaly that also never installed a release candidate.
There can be outlier bugs that only appear for a small subset of users under certain conditions (anything from one of dozens of OEM part combinations, to the software packages installed, the update path followed, or the options enabled).
From what the article says, this is specifically a problem when updating from an OS version released two days ago, to the other OS version released two days ago. That's not exactly the most logical or likely path for most users to take, though obviously Apple needs to be able to handle this going forward for users who are hesitant to update to 26 and want to run 15.7 in the meantime.
Of course not. But it makes sense that a bug with a narrower scope is more likely to escape testing, and apparently something that changed between 15.6.1 (released a month ago) and 15.7 (released two days ago) affects the process of upgrading to 26. So whatever code is at fault is probably pretty recent.
It is strange how often folks refuse to admit that things can even be _evaluated_. We're seeing that here, but I've also noticed it in other posts on HN: disagreement with the position of an article is framed not as a distinct examination that reaches different conclusions, but as a claim that the author was foolish to even attempt to evaluate the thing the post is about.
To some degree it feels like bits and pieces of anti-intellectualism getting into folks' brains: rejecting the idea that people can think about things at all.
Unless the lack of real-time (or consistently timed) results drives down interest in the cloud version, or, rather than driving down interest, makes it appear that people want something different from what they would want if the time to results were consistent or faster.
It could still be worth doing a bit of manual work like this, but it's worth being cautious about drawing conclusions from it.
Btech markets devices that are GMRS type-accepted; it's actually one of their main businesses these days to take Chinese-developed radios, modify them slightly, and get them GMRS-approved in the US.
FFmpeg does have an API. It ships a few libraries (libavcodec, libavformat, and others) which expose a C API that is used by the ffmpeg command line tool.
They're relatively low-level APIs: great if you're a C developer, but for most things you'd do in Python, just calling the command line probably does make more sense.
As someone who has used these APIs in C: they were not very well documented or intuitive, and they often segfaulted when you messed up instead of returning errors. I suppose validation checks were sacrificed for performance at the cost of robustness, which is unfortunate. Either way, dealing with this is not fun. Such is the life of a C developer, I suppose...
Yes, that's what I did some time ago. I already want concurrency and isolation, so why not let the OS provide them? Also, I don't need to manage resources when ffmpeg already does that.
If you are processing user data, the subprocess approach makes it easier to handle bogus or corrupt input: if something is off, you can just kill the subprocess. If something goes wrong inside the linked C API, it can be harder to handle predictably.
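A minimal Go sketch of that subprocess pattern (the file names and encoder flags here are hypothetical, and ffmpeg is assumed to be on PATH): run ffmpeg as a child process under a deadline, so corrupt input that wedges the decoder just gets the child killed rather than taking the parent down with it.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// buildArgs assembles an illustrative transcode command; the
// flags and file names are placeholders, not from the thread.
func buildArgs(in, out string) []string {
	return []string{"-i", in, "-c:v", "libx264", out}
}

func main() {
	// Hard deadline: if ffmpeg hangs on bogus data, the context
	// kills the subprocess instead of wedging this process.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	if _, err := exec.LookPath("ffmpeg"); err != nil {
		fmt.Println("ffmpeg not installed; skipping")
		return
	}
	cmd := exec.CommandContext(ctx, "ffmpeg", buildArgs("in.mp4", "out.mp4")...)
	if err := cmd.Run(); err != nil {
		// A crash or kill is isolated to the child process; we
		// only ever see an error value here.
		fmt.Println("ffmpeg failed:", err)
	}
}
```

The isolation is the point: a segfault inside the child shows up as a non-nil error, not as corruption in our own address space.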
Java has an excellent GC but a horrible runtime. .NET is probably the best GC integrated into a language with decent control over memory layout. If all you want is the GC without a language attached, LXR is probably the most interesting; it's part of MMTk, a Rust library for memory allocation and GC that Java, Ruby, and Julia are all in the process of adding as an option.
The post demonstrates a class of problem: causing Go to treat an integer field as a pointer and access the memory behind that pointer, without using "unsafe.Pointer" (or any other operation documented as unsafe).
We're talking about programming languages being memory safe (as fly.io does on its security page [1]), not about other specific applications.
It may be helpful to think of this as talking about the security of the programming language implementation. We're talking about inputs to that implementation that are considered valid and that don't use any bits marked "unsafe" (though I note that the Go project itself isn't very clear on whether it claims to be memory-safe). Then we want to evaluate whether the implementation fulfills what people think it fulfills, i.e. "being a memory-safe programming language" by producing programs, under some constraints (i.e. no unsafe), that are themselves memory-safe.
The example in the OP demonstrates a break in the expected behavior of the programming language implementation, if we expected that implementation to produce programs that are memory safe (again, under the condition of not using "unsafe" bits).
In this thread I linked the fly.io security page because it establishes that one can talk about _languages_ specifically as being memory safe, which is something you seem to be rejecting as a concept in the parent and other comments.
(In a separate comment about "what do people claim about Go anyhow", I linked the memorysafety.org page, but I didn't expect it to help get you to the understanding that we can evaluate programming languages as memory safe or not; something from the company where someone was a founder seemed more likely to get a person to reconsider the framing of what we're examining.)
So you're saying nobody cares about actual memory safety in concurrent code? Then why did the Swift folks bother to finally make the language memory-safe (just as safe as Rust) for concurrent code? Heck, why did the Java folks bother to define their safe concurrency/memory model to begin with? They could have done it the Golang way and not cared about the issue.
Curiously, Go itself is unclear about its memory safety on go.dev. There are a few references to memory safety in the FAQ (https://go.dev/doc/faq#Do_Go_programs_link_with_Cpp_programs, https://go.dev/doc/faq#unions) implying that Go is memory safe, but the FAQ never defines what it means by "memory safety". There is a 2012 presentation by Rob Pike (https://go.dev/talks/2012/splash.slide#49) stating that Go is "Not purely memory safe", seemingly disagreeing with the more recent FAQ; what "purely memory safe" means is also not defined. The race detector documentation talks about whether operations are "safe" when mutexes aren't used, but doesn't clarify what "safe" actually means (https://go.dev/doc/articles/race_detector#Unprotected_global...). The git record is similarly unclear.
In contrast to the Go project itself, external users of Go frequently make strong claims about Go's memory safety. fly.io calls Go a "memory-safe programming language" in their security documentation (https://fly.io/docs/security/security-at-fly-io/#application...). They don't indicate what a "memory-safe programming language" is. The owners of "memorysafety.org" also list Go as a memory safe language (https://www.memorysafety.org/docs/memory-safety/). This latter link doesn't have a concrete definition of memory safety either, but is kind enough to provide a non-exhaustive list of example issues, one of which ("Out of Bounds Reads and Writes") is shown by the article from this post to be something not prevented by Go, indicating memorysafety.org may wish to update their list.
It seems like at the very least Go and others could make it more clear what they mean by memory safety, and the existence of this kind of error in Go indicates that they likely should avoid calling Go memory safe without qualification.
> Curiously, Go itself is unclear about its memory safety on go.dev.
Yeah... I was actually surprised by that when I did the research for the article. I had to go to Wikipedia to find a reference for "Go is considered memory-safe".
Maybe they didn't think much about it, or maybe they enjoy the ambiguity. IMO it'd be more honest to just clearly state this. I don't mind Go making different trade-offs than my favorite language, but I do mind them not being upfront about the consequences of their choices.
At the time Go was created, it met one common definition of "memory safety", which was essentially "has a garbage collector". And compared to C/C++, it is much safer.
> it met one common definition of "memory safety", which was essentially "have a garbage collector"
This is the first time I hear that being suggested as ever having been the definition of memory safety. Do you have a source for this?
Given that except for Go every single language gets this right (to my knowledge), I am kind of doubtful that this is a consequence of the term changing its meaning.
True, "have a garbage collector" was never the formal definition; it was more "automatic memory management". But this predates the work on Rust's ownership system, and while there were theories of static automatic memory management, all practical examples of automatic memory management were some form of garbage collection.
If you go to the original 2009 announcement presentation for Go [1], not only is "memory-safety" listed as a primary goal, but Pike provides the definition of memory-safe that they are using, which is:
"The program should not be able to derive a bad address and just use it"
Which Go mostly achieves with a combination of garbage collection and not allowing pointer arithmetic.
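Those two mechanisms can be seen in a small sketch (my own illustration, not code from the talk): an out-of-bounds index is stopped by a runtime check before any bad address is dereferenced, and pointer arithmetic is rejected at compile time unless you reach for `unsafe`.

```go
package main

import "fmt"

// readAt attempts a read at an arbitrary index and reports whether
// the runtime stopped it. The bounds check happens before the
// access, so a "bad address" is never actually dereferenced.
func readAt(xs []int, i int) (v int, caught bool) {
	defer func() {
		if recover() != nil {
			caught = true
		}
	}()
	return xs[i], false
}

func main() {
	xs := []int{1, 2, 3}
	_, caught := readAt(xs, 10) // index past the end
	fmt.Println("out-of-bounds access stopped:", caught)
	// The other route to a bad address is closed at compile time:
	// something like &xs[0] + 1 simply does not compile without
	// the unsafe package.
}
```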
The source of Go's failure is concurrency, which has a knock-on effect that invalidates memory safety. Note that the stated goal from 2009 is "good support for concurrency", not "concurrent-safe".
Thanks! I added a reference to that in the blog post.
Interestingly, in 2012 Rob Pike explicitly said that Go is "not purely memory safe" because "sharing is legal": https://go.dev/talks/2012/splash.slide#49. It is not entirely clear what he means by that (I was not able to find a recording of the talk), but it seems likely he's referring to this very issue.
> "The program should not be able to derive a bad address and just use it"
My example does exactly that, so -- as you say, Go mostly achieves this, but not entirely.
> Note that stated goal from 2009 is "good support for concurrency", not "concurrent-safe".
My argument is that being concurrency-unsafe implies being memory-unsafe, for the reasons laid down in the blog post. I understand that that is a somewhat controversial opinion. :)
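A hedged sketch of that implication (my own illustration, not code from the blog post): any value wider than one word can be observed half-updated by a racing reader, because the two stores are not atomic as a unit. Here the torn value is a harmless pair of ints, so the race is observable without a crash; in the post the torn value is an interface, whose two words are a type pointer and a data pointer, and pairing halves from different writes is how a bad address gets derived without touching `unsafe`. Whether tearing actually shows up in a given run depends on the scheduler and compiler.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// pair is a two-word value the writer tries to keep with a == b.
type pair struct{ a, b int }

// raceDemo races an unsynchronized writer against a reader and
// reports whether any read observed the struct half-updated.
func raceDemo() (torn bool) {
	var p pair
	var stop atomic.Bool

	go func() {
		// The two word-sized stores below are not atomic as a
		// unit, so a reader can see them half-applied.
		for i := 1; !stop.Load(); i++ {
			p = pair{i, i}
		}
	}()

	for i := 0; i < 1_000_000; i++ {
		got := p // unsynchronized read: a data race
		if got.a != got.b {
			torn = true
		}
	}
	stop.Store(true)
	return torn
}

func main() {
	// No particular outcome is asserted: the result is
	// scheduler-dependent, which is rather the point.
	fmt.Println("torn read observed:", raceDemo())
}
```

Running this under `go run -race` flags the race immediately; the argument in the post is about what happens in the runs where the detector isn't watching.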
Hey! Cards on the table: I'm not in love with your post, but mostly I'm curious what discussion or outcome you were hoping for with it. Doesn't this boil down to "bad things will happen if you have data races in your code, so don't have data races in your code"? Does it really matter what those bad things are?
That seems contradicted by Rob Pike's 2012 statement in the linked presentation, one of the places where Go is called "not purely memory safe". That would have been early, and Go is not called memory safe there. Calling Go memory safe seems to be a more recent thing rather than a historical one.
Keep in mind that the 2012 presentation dates to 10 months after Rust's first release, when its version of "memory safety" was collecting quite a bit of attention. I'd argue the definition was already changing by that point. It's also possible that Go was already discovering that its version of "memory safety" just wasn't safe enough.
If you go back to the original 2009 announcement talk, "Memory Safety" is listed as an explicit goal, with no carveouts:
"Safety is critical. It's critical that the language be type-safe and that it be memory-safe."
"It is important that a program not be able to derive a bad address and just use it; That a program that compiles is type-safe and memory-safe. That is a critical part of making robust software, and that's just fundamental."
> Rust's first release, and its version of "Memory Safety" was collecting quite a bit of attention
Note that this was not Rust's first stable release, but its first public release. At the time it was still changing a lot and still had "garbage collected" types.
Yeah, it was the 0.1 release. I can't remember exactly when Rust entered the general "programming language discourse" on hackernews and /r/programming, but it was somewhere around here. I'm sure the people behind Go would have known about it by this point in time.
And while Rust did have optional "garbage collected pointers", it's important to point out that it is not a garbage-collected language. The ownership system and borrow checker were very much front-and-centre in the 0.1 release; they were what everyone was talking about.
Actually, my memory is that while the language had syntax to declare garbage collected pointers, it wasn't actually hooked up to a proper garbage collector. It was always more of a "we are reserving the syntax and we will hook it up when needed", and it turns out the ownership system was powerful enough that it was never needed.
> Actually, my memory is that while the language had syntax to declare garbage collected pointers, it wasn't actually hooked up to a proper garbage collector. It was always more of a "we are reserving the syntax and we will hook it up when needed", and it turns out the ownership system was powerful enough that it was never needed.
AFAIK it was just an `Rc`/`Arc` with the possibility of upgrading it to an actual GC in the future.
https://www.awwwards.com/summer-afternoon.html