Hacker News | new | past | comments | ask | show | jobs | submit | mysteria's comments | login

I archived all my MiniDV tapes using a cheap FireWire card and dvgrab on Linux; it can be set to automatically split noncontinuous clips into different files for easy viewing. It's very straightforward to use and the capture can run unattended.

Just thinking back 10 years ago to when I was archiving all my DV tapes on my Dad's old G5... I did it all by hand through Final Cut Express. It would've been sooo much easier had I known about dvgrab back then!

Also ripped all my old MiniDV tapes a decade ago or so. (I don't remember it being tedious.) (I recall about 12GB for each 60min tape, FWIW.)
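That ~12 GB per 60-minute figure falls straight out of the DV stream rate; a quick back-of-the-envelope check (28.8 Mbit/s is the commonly cited full DV rate, not something measured from these tapes):

```python
# DV is a fixed-rate stream, so capture size is easy to predict.
# 28.8 Mbit/s covers video + audio + subcode data.
rate_bytes_per_s = 28.8e6 / 8          # 3.6 MB/s
tape_bytes = rate_bytes_per_s * 60 * 60

gb  = tape_bytes / 1e9                 # decimal gigabytes
gib = tape_bytes / 2**30               # binary gigabytes

print(f"{gb:.1f} GB ({gib:.1f} GiB) per 60-minute tape")
# → 13.0 GB (12.1 GiB) per 60-minute tape
```

So "about 12GB" is spot on if the tool was reporting binary gigabytes.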

I've known for some time now not to trust media formats to remain easy to access as time goes on. Floppy disks, ZIP disks, SCSI…

So nice the home movies are now in the cloud (and on USB drives as additional backup).


I'd heard a few horror stories about people doing it on Windows and Mac, with bad compatibility and annoying software. With dvgrab it's super simple.

DVIO [0] and WinDV [1] would be the closest equivalents to dvgrab on Windows. Both are super easy to use (especially the former).

For HDV tapes (HD video on MiniDV) there's also HDVSplit [2].

[0] https://www.videohelp.com/software/DVIO

[1] https://www.videohelp.com/software/WinDV

[2] https://www.videohelp.com/software/HDVSplit


Firewire support was removed from the Linux kernel, so I had to switch to Linux Mint to accomplish the same thing.

> Firewire support was removed from the Linux kernel

This is very much incorrect. Maybe the subsystem wasn't built into a custom kernel you're using?

edit: google says improvements through 2026, support through 2029


Many distros (including Raspberry Pi OS) don't enable `CONFIG_FIREWIRE_OHCI` in the kernel, so support isn't built in unless you compile your own kernel.
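A quick way to check whether your distro's kernel has it enabled is to grep the kernel config. A sketch, not tested on every distro: config paths vary, and `/proc/config.gz` only exists when the kernel was built with `CONFIG_IKCONFIG_PROC`.

```shell
# Look for CONFIG_FIREWIRE_OHCI in whichever kernel config is readable.
# =y means built in, =m means built as a module (modprobe firewire-ohci).
found=no
for f in "/boot/config-$(uname -r)" /proc/config.gz; do
    [ -r "$f" ] || continue
    # gzip -dcf decompresses .gz files and passes plain files through
    if gzip -dcf "$f" 2>/dev/null | grep -q '^CONFIG_FIREWIRE_OHCI=[ym]'; then
        found=yes
    fi
done
echo "firewire-ohci enabled: $found"
```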

But yes, it will be supported through 2029, and after that it could remain in the kernel longer; there's no mandate to remove it, if I'm reading the maintenance schedule correctly: https://ieee1394.docs.kernel.org/en/latest/#maintenance-sche...

> [After 2029, it] would be possibly removed from Linux operating system any day


Right, that matches my understanding. After 2029, it'll stick around as long as it continues to compile; if it fails to compile it would get dropped rather than fixed, as there's no maintainer.

This isn't really what you're asking for, but is virtualization possible on the client side, either directly on the client PC or via VDI? Basically, IE and Windows with admin rights would run in a restricted VM devoted solely to that app, with the VM blocked from any network access beyond the legacy server and whatever management/etc. requirements exist.

This would incur an added cost in licensing and possibly hardware, but it would also be the cleanest way to do it. On the security side, it's also safer than escalating a legacy ActiveX app on the secure client.

Having multiple instances of IE running remotely on Windows Server and served out via Citrix or something similar should work as well if you don't need full VM isolation between clients; I've seen this used in real companies for legacy apps that can't run on the standard employee machines. Again, though, this has a licensing cost.


I remember a case where a company decided to assign employees random 16-character passwords with symbols and rotate them every 90 days or so. The passwords were unchangeable, and the idea was that everyone would be forced to use a secure password that changed regularly.

You can probably guess what happened: no one could remember their password, so people wrote them down on notepads or sticky notes instead.
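For scale, the policy described takes a couple of lines with Python's stdlib `secrets` module (the exact symbol set the company used is my guess):

```python
import secrets
import string

# A random 16-character password drawn from letters, digits, and symbols,
# like the unchangeable ones the company handed out every 90 days.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)
```

Run it a few times and try to memorize the output; it's no mystery the passwords ended up on sticky notes.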


Writing down a password is a great option. However, you need to keep that paper in a secure location. Put it in your wallet and treat it like a $100 bill; don't tape it to the monitor or stick it under the keyboard.

A password manager is better for most things, but you need to unlock the password manager somehow.


Also "app passwords". Not just change, you can't even append text to it.

Those are just API keys people can type.

I mean the writing's on the wall, they just don't want to do it all at once to avoid backlash. I wouldn't be surprised if they kill sideloading completely several years down the road.

Thanks for this writeup as I haven't had time to review the video yet :)

So, the only way to manipulate it is to actually screw with the internals of the CPU itself by "glitching", meaning tampering with the power supply to the chip at exactly the right moment to corrupt the state of the internal electronics. Glitching a processor has semi-random effects and you don't control what happens exactly, but sometimes you can get lucky and the CPU will skip instructions. By creating a device that reboots the machine over and over again, glitching each time, you can wait until one of those attempts gets lucky and makes a tiny mistake in the execution process.

Considering that the PSP is a small ARM processor that presumably takes up little die space, would it make sense for them to employ TMR (triple modular redundancy) with three units in lockstep to detect these glitches? I really doubt that power supply tampering would cause the exact same effect in all three processors (especially if there are differences in their power circuitry to make this harder), and any discrepancies would be caught by the system.


The Nintendo Switch 2 uses DCLS (dual-core lockstep) in the BPMP and the PSC (the PSC is PSP-like but RISC-V). So yes, it helps; I'm unsure if/where Microsoft uses it in their products.

DCLS actually makes sense for this scenario, as the fault tolerance gained from having three processors isn't needed here. The system can simply halt when there's a mismatch; it doesn't have to take a vote and continue running when 2 of 3 agree.

Also, I just thought of this, but it should be possible to design a chip where the second processor runs a couple of cycles behind the first one, with all the inputs and outputs stashed in FIFOs. That would make any power glitch affect the two CPUs differently, so any discrepancy would be easily detected.
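The skewed-lockstep idea can be sketched in a few lines. This toy model is my own illustration, not how any real PSP/PSC is built: both "cores" see the glitch on the same wall-clock cycle, but because core B trails core A, the glitch lands on different instructions, and the FIFO comparison catches the disagreement.

```python
from collections import deque

DELAY = 3
program = list(range(20))   # stand-in "instructions": each just yields its index

def step(pc, glitched):
    val = program[pc]
    return (val ^ 0xFF) if glitched else val   # a glitch corrupts the result

def run(glitch_cycle=None):
    fifo = deque()          # core A outputs parked until core B catches up
    for cycle in range(len(program) + DELAY):
        glitched = (cycle == glitch_cycle)
        if cycle < len(program):                   # core A executes
            fifo.append(step(cycle, glitched))
        if cycle >= DELAY:                         # core B executes, delayed
            if fifo.popleft() != step(cycle - DELAY, glitched):
                return "halt: lockstep mismatch"
    return "ok"

print(run())                 # → ok
print(run(glitch_cycle=5))   # → halt: lockstep mismatch
```

With a glitch at cycle 5, core A corrupts instruction 5 while core B corrupts instruction 2, so B's output no longer matches the clean instruction-2 result sitting at the head of the FIFO.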


You could glitch both processors?

I think the idea is they both hang off the same voltage rail.

yeah give the man more ideas, smart

Piezo mics are pretty cheap, and wired up to the microphone input of a computer or phone you could probably get even better accuracy using the same signal processing techniques.

Seems some people have done this already with a PC app: https://timeandtidewatches.com/how-to-make-your-own-timegrap...
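The math on the analysis side is simple once ticks are detected. A sketch under stated assumptions: the tick timestamps below are made up (real ones would come from peak detection on the piezo audio, not shown), and the movement is a nominal 28,800 bph one running slightly fast.

```python
# Compare the mean detected beat interval to nominal and convert the
# error to seconds per day, like a timegrapher's rate readout.
NOMINAL_BPH = 28_800
nominal = 3600 / NOMINAL_BPH                 # 0.125 s per beat

ticks = [i * 0.12498 for i in range(200)]    # hypothetical detected tick times

intervals = [b - a for a, b in zip(ticks, ticks[1:])]
mean_interval = sum(intervals) / len(intervals)
rate = (nominal - mean_interval) / nominal * 86_400   # + means running fast
print(f"{rate:+.1f} s/day")                  # → +13.8 s/day
```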


The astounding thing about Goliath wasn’t that it was a huge leap in performance, it was that the damn thing functioned at all. To this day, I still don’t understand why this didn’t raise more eyebrows.

This wasn't something I really dug into in great detail, but I remember my surprise back then at how all those merged models and "expanded" models like Goliath still generated coherent output. IMO those were community models made by small creators for entertainment rather than work, and only really of interest to the local LLM groups on Reddit, 4chan, and Discord. People might briefly discuss one on the board and say "that's cool", but papers aren't being written, so it's less likely for academics or corporate researchers to notice.

That being said I wonder if it's possible to combine the layers of completely different models like say a Llama and a Qwen and still get it to work.

Even with math probes, I hit unexpected problems. LLMs fail arithmetic in weird ways. They don’t get the answer wrong so much as get it almost right but forget to write the last digit, as if it got bored mid-number. Or they transpose two digits in the middle. Or they output the correct number with a trailing character that breaks the parser.

Would using grammar parsing help here, by forcing the LLM to only output the expected tokens (i.e. numbers)? Or maybe on the scoring side you could look at the actual probabilities per token to see how far down the ranking the correct digit is.
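That's essentially what grammar-constrained (structured) decoding does: at each step, disallowed tokens are masked out before selection, so the model can only emit digits, plus an end token once at least one digit exists. A toy greedy version, with a made-up vocabulary and logits:

```python
import math

VOCAB = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", " ", "x", "<eos>"]

def constrained_greedy(logits_per_step):
    """Pick the highest-logit *allowed* token at each step."""
    out = []
    for logits in logits_per_step:
        allowed = set("0123456789")
        if out:                       # may only stop after at least one digit
            allowed.add("<eos>")
        best_tok, best_lp = None, -math.inf
        for tok, lp in zip(VOCAB, logits):
            if tok in allowed and lp > best_lp:
                best_tok, best_lp = tok, lp
        if best_tok == "<eos>":
            break
        out.append(best_tok)
    return "".join(out)

# Step 1: the model "wants" the junk token "x" (logit 3.0), but it's masked,
# so the best digit ("4", logit 2.0) wins. Step 2: "<eos>" wins and we stop.
steps = [
    [0.0] * 4 + [2.0] + [0.0] * 6 + [3.0, -1.0],
    [0.0] * 12 + [5.0],
]
print(constrained_greedy(steps))   # → 4
```

This kills the trailing-garbage-character failure outright; the dropped-last-digit failure would still need something like a minimum-length rule in the grammar.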


I think the main challenge with combining layers of different models would be their differing embedding sizes and potentially different vocabularies.

Even between two models of identical architecture, they may have landed on quite different internal representations if the training data recipe was substantially different.

But it would be fun to experiment with.


Even with the same embedding sizes and vocabularies, there's nothing that forces dimension 1 of model 1 to mean the same thing as dimension 1 of model 2. There are lots of ways to permute the dimensions of a model without changing its output, so whatever dimension 1 means the first time you train a model is just as likely to end up as dimension 2 the second time you train it as it is to be consistent with the first model.
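That symmetry is easy to demonstrate on a tiny two-layer net: relabel the hidden units (permute the rows of the first weight matrix and the matching columns of the second) and the output is unchanged. Pure-Python toy with made-up weights:

```python
# In y = W2 · relu(W1 · x), permuting hidden units leaves y identical,
# so a hidden "dimension" has no fixed meaning across training runs.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def forward(W1, W2, x):
    return matvec(W2, relu(matvec(W1, x)))

W1 = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]    # 2 inputs -> 3 hidden units
W2 = [[1.0, -2.0, 4.0]]                       # 3 hidden units -> 1 output
x  = [0.7, -0.3]

perm = [2, 0, 1]                              # arbitrary relabeling of hidden units
W1p = [W1[i] for i in perm]                   # permute rows of W1
W2p = [[row[i] for i in perm] for row in W2]  # permute matching columns of W2

print(forward(W1, W2, x), forward(W1p, W2p, x))   # same value both times
```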

Nobody here or on Reddit has mentioned this, maybe bc it’s too obvious, but it’s clear to me that the residual connections are an absolutely necessary component to making this merging possible — that’s the only reason dimension 1 of a later layer is encouraged to mean something similar to dimension 1 of an earlier layer.


On a related note: instead of doing a benchmark sweep across the whole NxN set of start-end pairs for which layers to modify, would it be easier to measure cross-correlation between the outputs of all layers? Shouldn't that produce similar results?

It’s a good spot for hobbyists to fill in the gaps. Maybe it’s not interesting enough for academics to study, and for corporate ML they would probably just fine-tune something that exists rather than spending time on surgery. Even Chinese labs that are more resource-constrained don’t care as much about 4090-scale models.


It's still non-trivial, as multi-digit numbers can be constructed from a huge number of combinations of valid tokens.

The code in the blog helps derive useful metrics from partial answers.


I mean, from a privacy perspective alone it's clear that Meta throws its ethics out the door. There's the Cambridge Analytica scandal, the more recent incident with Instagram bypassing Android OS restrictions for more tracking, and many, many other examples.

Their apps also regularly nag you to allow access to stuff like contacts and the photo gallery when you've already said no the first time.

And for a personal anecdote: I was recently helping a senior set up WhatsApp Desktop on her Windows computer. It could chat fine but refused to join calls, displaying an error that said there was no microphone connected. There was a mic connected, and it could record voice notes fine. It turns out the error actually meant there was no webcam connected; a webcam is required to join calls. I think it's the same way in the mobile app, where you need to grant the camera permission to join a video call even with video turned off. Meanwhile Zoom, Teams, Webex, and others let you join any call without a mic or camera.

As she didn't have a webcam I first tried the OBS virtual camera, but WhatsApp refused to recognize it despite every other app working fine with it. Somehow DroidCam with no phone connected worked, displaying a black screen in the virtual camera feed, and that got WhatsApp to join the call successfully. Absolutely ridiculous, and it's clear to me how desperately they want that camera access and that sweet data.


See, this is why I made a comment in that Apple thread (see my post history) about stopping Facebook from doing things like this. I was told "Android can do it too". Yes, but no. Apple may do evil things, but they punished Facebook for their bullshit by revoking their certificate. The landscape of contact info (phone numbers, email addresses, social media accounts; "people just submitted it, they trust me, dumb f-") means you can't have bad-faith actors like Zuckerberg Zucking about. WhatsApp is such a clear case for antitrust, just for starters.

Edit: sorry, that wasn't entirely clear. I mean we need Apple's system of granularity: "deny access to contacts" needs to work even when the asking company (Facebook) tries to trick people.


Personally, I wonder if, even as the LLM hype dies down, we'll get a new boom in AI for robotics and the "digital twin" technology Nvidia has been hyping up to train them. That's going to need GPUs for both the ML component and the 3D visualization. Robots haven't yet had their SD 1.1 or GPT-3 moment; we're still in the early days of Pythia, GPT-J, AI Dungeon, etc., in LLM terms.


Exactly, they will pivot back to AR/VR


That's going to tank the stock price though, as AR/VR is a much smaller market than AI, even if it won't kill the company. That's why I'm talking about something like robotics, which has a lot of opportunity to grow and make use of all those chips and datacenters they're building.

Now there is one thing in AR/VR that might need this kind of infrastructure, though, and that's AI-driven games or Holodeck-like stuff: have the frames be generated rather than modeled and rendered traditionally.


Nvidia's not your average bear, they can walk and chew bubblegum at the same time. CUDA was developed off money made from GeForce products, and now RTX products are being subsidized by the money made on CUDA compute. If an enormous demand for efficient raster compute arises, Nvidia doesn't have to pivot much further than increasing their GPU supply.

Robotics is a bit of a "flying car" application that gets people to think outside the box. Right now, both Russia and Ukraine are using Nvidia hardware in drones and cruise missiles and C2 as well. The United States will join them if a peer conflict breaks out, and if push comes to shove then Europe will too. This is the kind of volatility that crazy people love to go long on.


I feel that the push won't be towards a general computing device, though, but rather towards a curated one, sort of like the iPhone or iPad: general in theory but vendor-restricted inside a walled garden.

With improved cellular and possibly future satellite connectivity I feel that this would also be more of a thin client than a local first device, since companies want that recurring cloud subscription revenue over a single lump sum.


That's the present, not the future.

