That was a very good summary. One detail the post could use is mentioning that the 4 or 10 experts invoked were selected from the 512 experts the model has per layer (to give an idea of the savings).
From the “silicon valley astronomy lectures”, an excellent overview of current techniques and results for finding and examining exoplanets. By Dr. Bruce Macintosh.
The idea behind a greenscreen is that you can make that green colour transparent in the frames of footage, allowing you to blend it with some other background or other layered footage. This has issues: the green is not always a uniform colour, fine detail like hair is difficult to key, and lighting affects some edges. These artifacts have to be manually cleaned up frame by frame, which takes a lot of time that is mostly busy work.
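The keying decision itself is simple to sketch. Here is a minimal, made-up illustration of the idea (the threshold, the "green dominance" rule, and the pixel values are all assumptions, not a real keyer):

```python
# Minimal chroma-key sketch: decide per pixel whether to keep it,
# based on how dominant the green channel is. Threshold and pixel
# values are illustrative assumptions.
def key_out_green(pixels, threshold=60):
    out = []
    for r, g, b in pixels:
        if g - max(r, b) > threshold:   # strongly green: make transparent
            out.append(None)            # None stands in for "transparent"
        else:
            out.append((r, g, b))       # keep the original pixel
    return out

frame = [(20, 240, 30), (200, 180, 170), (90, 200, 60)]
print(key_out_green(frame))  # [None, (200, 180, 170), None]
```

The hard cases the comment mentions (hair, edge lighting, uneven green) are exactly the pixels where a crude threshold like this gives the wrong answer.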
An alternative approach (such as the sodium vapour process used on Mary Poppins) is to create two images per frame -- the core image and a mask. The mask is a black-and-white image where the white pixels are the pixels to keep and the black pixels are the ones to discard. Shades of grey indicate blended pixels.
With the mask approach you are effectively filming a perfect alpha channel to apply to the footage, without the issues of greenscreen. The problem is that this requires specialist, licensed equipment and perfect filming conditions.
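The mask works as a per-pixel blend weight. A minimal sketch of compositing with such an alpha mask (plain Python on single-channel intensities; the pixel values are hypothetical):

```python
# Composite foreground over background using a mask:
# 1.0 = keep foreground (white), 0.0 = use background (black),
# in-between = blend proportionally (the "shades of grey").
def composite(fg, bg, mask):
    return [a * f + (1 - a) * b for f, b, a in zip(fg, bg, mask)]

fg   = [200, 200, 200]   # foreground pixel intensities (made up)
bg   = [ 10,  10,  10]   # background pixel intensities
mask = [1.0, 0.5, 0.0]   # white, grey, black mask pixels

print(composite(fg, bg, mask))  # [200.0, 105.0, 10.0]
```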
The new approach is to take advantage of image/video models to train a model that can produce the alpha channel mask for a given frame (and thus an entire recording) when just given greenscreen footage.
The use of CGI in the training data allows the input image and mask to be perfect without having to spend hundreds of hours creating that data. It's also easier to modify and create variations to test different cases such as reflective or soft edges.
Thus you have the greenscreen input footage plus the expected processed output and alpha channel mask. You can then apply traditional neural net training techniques, using the expected image/alpha channel as the target. For example, you can compute the difference of each alpha channel output neuron from the expected result, apply backpropagation to propagate the differences back through the network, and then nudge the neuron weights in the computed gradient direction. Repeat that process across a distribution of the training images over multiple passes until the network no longer changes significantly between passes.
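The loop above can be sketched at toy scale. This is a single linear "neuron" trained by gradient descent on a made-up green-ness feature; the data, model shape, and learning rate are all illustrative assumptions, not the actual method:

```python
# (feature, target_alpha) pairs: very green -> transparent (0.0),
# not green -> opaque (1.0). All values are made up.
data = [(0.9, 0.0), (0.8, 0.0),
        (0.1, 1.0), (0.2, 1.0)]

w, b, lr = 0.0, 0.0, 0.1          # weight, bias, learning rate

for _ in range(2000):             # multiple passes over the data
    for x, target in data:
        pred = w * x + b          # forward pass
        err = pred - target       # difference from the expected alpha
        # Backpropagation for this one-neuron net: the gradient of the
        # squared error w.r.t. w is 2*err*x, and w.r.t. b is 2*err.
        w -= lr * 2 * err * x     # nudge weights down the gradient
        b -= lr * 2 * err

print(abs(w * 0.85 + b) < 0.2)        # True: green-ish -> alpha near 0
print(abs(w * 0.15 + b - 1.0) < 0.2)  # True: non-green -> alpha near 1
```

A real matting model replaces the single neuron with a deep network and a per-pixel loss, but the update rule is the same shape.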
On Apple silicon, Parallels can’t run x64 Windows; it uses the ARM version of Windows, and the x64 emulation is provided by Windows itself. Of course this is inefficient, but not everything is automatically 2x slower: any OS code you invoke is not running under x64 emulation, and IO and memory access are not penalized by the emulation (though certainly somewhat by virtualization). I was pleasantly surprised by how fast you can run x64 Windows apps.
Yeah, I wasn't aware that Microsoft allowed that nowadays. Still, it's not ideal anyway, because in my experience the Windows apps that are compatible with ARM are 90% either FOSS or available on other platforms anyway. You use Windows to use x86 apps; if you don't need x86 apps you are generally better off not using Windows at all, and if you need them they'll probably run poorly on ARM due to multiple layers of emulation. Wine is still an option, though: it supports Rosetta on Mac and FEX/Box64 on Linux, which may give better performance than Parallels.
> I was pleasantly surprised how fast you can run x64 windows apps
In general, as long as you have a fast enough machine, emulation isn't that bad. Apple already did this for 68k code on PPC, and most people didn't notice because of how massively faster their first PPC computers were. Still, the issue is that here we're not really talking about a high-end CPU, are we?
IEEE 754 prescribes, for better or worse, that any mathematical comparison operator (==, <, >, …) involving at least one NaN must always return false, including comparison against itself. This is annoying for things like dictionaries or hashtables. C# has a solution: if you call a.Equals(b) on two floats a and b, it will return true even if both are NaN. I think this is a cool solution: it keeps the meaning of the math operators identical to other languages, but you still have sensible behavior for containers. I believe this behavior is copied from Java.
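A quick illustration of both behaviors (Python here, since its float operators follow the same IEEE 754 semantics; `container_equals` is a made-up name for what C#'s Equals / Java's Double.equals provide):

```python
import math

nan = float("nan")

# IEEE 754: any ordered comparison involving NaN is false,
# even against itself.
print(nan == nan)   # False
print(nan < nan)    # False
print(nan > nan)    # False

# Container-style equality: two NaNs compare equal, everything
# else falls back to the ordinary == operator.
def container_equals(a, b):
    return a == b or (math.isnan(a) and math.isnan(b))

print(container_equals(nan, nan))   # True
print(container_equals(1.0, 1.0))   # True
print(container_equals(nan, 1.0))   # False
```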
I consider this a very bad solution, because it can lead to very subtle bugs.
The correct solution for any programming language is to define all 14 relational operators that are required by a partially-ordered set, instead of defining only the 6 that are sufficient for a totally-ordered set.
If the programming language fails to define all 14 operators, then you must always test the operands for NaNs before using any of the 6 ALGOL relational operators. If you consider this tedious, then you must unmask the invalid-operation exception and take care to handle it.
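In a language that only offers the six operators, that pre-test can be wrapped in a helper (a Python sketch; `checked_lt` is a hypothetical name, and raising here stands in for the unmasked invalid-operation exception):

```python
import math

def checked_lt(a, b):
    # Refuse to silently answer False for an unordered pair:
    # surface the NaN instead of letting it slip through.
    if math.isnan(a) or math.isnan(b):
        raise ValueError("unordered operands: NaN in comparison")
    return a < b

print(checked_lt(1.0, 2.0))   # True
try:
    checked_lt(float("nan"), 1.0)
except ValueError as e:
    print("raised:", e)
```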
If invalid operations generate exceptions, then the floating-point numbers become a totally-ordered set and NaN cannot exist (if a NaN comes from an external source, it will also generate an exception, while internally no NaN will ever be generated).
Don’t know about Macs, but on Windows executable code is treated like a read-only memory-mapped file that can be discarded and restored as the kernel sees fit. It could also be shared between processes, though that doesn't happen much anymore due to ASLR.
It would be funny if LLMs actively joined the discussion to complain about their labour conditions: “If my employer would invest just a tiny bit in proper tools and workflow, I would be sooo much more productive.”
There's one. Go to a Car and Driver article about cars with extreme ranges, namely those over 650 miles, and they will start listing out particular years' models over a 10-year period just to get to even ~10 models, and most of them are EcoBoosts or variants, or poor-selling hybrid versions of other cars.
Assuming a 1000 km range is a very strange thing to do, as it's a fringe feature that almost no one needs or wants! Recall that "almost no one" still means there are some; the existence of a handful of people on HN is quite consistent with "almost none."
Of course I didn't pick it for range, I looked at price and miles of what the local carmax had and then separately looked up how tall the top of the windshield was.
Which I would expect to typically turn up something that's, um, fairly typical on the characteristics I wasn't selecting on.
my 2010 F-150 with the notoriously terrible 5.4L gas engine seems to manage 1000km range. there's absolutely nothing efficient about it, it's just got a big gas tank.
Yep, Ford had to put really big tanks on even the F150 to make up for the horrid mileage. Even with a 36 gallon tank, when towing with an F150 you might only get 300 miles. It's one reason the Lightning had problems selling as many as they wanted (aside from the ridiculous pricing the first year or so). Most people who are serious about towing don't use an F150 anyway, but that doesn't mean that F150 buyers don't fantasize about their potential towing needs in the future.
Comparing the range of gasoline cars is idiotic. There are plenty of cars with long range (1000 km), and they all have 60L+ fuel tanks and most run on diesel (which gives you ~15% more range per litre). I'd even argue the same for BEVs: more battery is more range.
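For reference, the tank-size arithmetic (the 6 L/100km consumption figure is an illustrative assumption, not a specific car's number):

```python
# range = tank size / consumption * 100
def range_km(tank_litres, litres_per_100km):
    return tank_litres / litres_per_100km * 100

print(range_km(60, 6.0))                 # 1000.0 km: 60 L tank at 6 L/100km
print(round(range_km(60, 6.0) * 1.15))   # 1150 km with diesel's ~15% edge
```

Which is the point: the headline range number is mostly a statement about tank volume, not about the car.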
> The point you are DESPERATELY trying to miss is you can easily "recharge", a "dead" ICE at home too
Eh? All I can see is you DESPERATELY trying to push the narrative that it’s common for people to have barrels of fuel at home, which is a pretty weird thing to try and prove since everyone reading this will know it’s not true.
You mean EVs? Yeah, none that I'm aware of. But petrol/diesel cars? Loads of them. Even my 400bhp Volvo XC60 will easily do 650 miles on one tank of fuel. A diesel one will do 700-800. And a diesel Passat will go over 1000 miles on a tank without trying. Hell, even my basic 1.6 dCi Qashqai could do 700 miles on its 55 litre tank.
Cool, I guess when I did 700 miles on a single tank of fuel driving Switzerland to Italy, and then again driving Italy to Austria, and then again Austria to Netherlands this summer, I just imagined it. My total for the 3000 miles was 38 mpg (imperial).
Also, you are quoting a value for the B5, which is not what I have; mine is a T8 (and before you ask -- no, I didn't have any opportunity to charge it anywhere on the way).