
I am actually willing to support DIY camera efforts, but if you're semi-serious about taking pictures, this just wouldn't work. First, the Raspberry Pi (I'm guessing this is a CM4/CM5) is a disaster for a camera board. Nobody wants a 20-second boot every time they want to take a picture; cameras need to be near-instantaneous. And you can't keep it on either, because the RPi can't really sleep. There are boards that can actually sleep, but they come with fewer sensor options.

Now moving on to the sensor (IMX 519 - Arducam?): it's tinier than the tiniest sensor found on phones. If you really want decent image quality, you should look at Will Whang's OneInchEye and Four Thirds Eye (https://www.willwhang.dev/). The Four Thirds Eye uses the IMX294, which is currently the only large sensor with both Linux support (I think he upstreamed the driver) and MIPI. All the other larger sensors use interfaces like SLVS, which are impossible to connect to.

If anyone's going to attempt a serious camera, they need to do two things: use at least a 1-inch sensor, and a board that can actually sleep (which means it can't be the RPi). This would mean a bunch of difficult work, such as writing drivers to get these sensors to work with those boards. The Alice Camera (https://www.alice.camera/) is a better attempt and probably uses the IMX294 as well. The most impressive attempt, however, is Wenting Zhang's Sitina S1 (https://rangefinderforum.com/threads/diy-full-frame-digital-...). He used a full-frame Kodak CCD sensor.

There is a market for a well-made camera like the Fuji X-Half. It doesn't need to have a lot of features; it just needs good ergonomics and to take decent pictures. Stuff like proofs is secondary to what actually matters: first it needs to take good pictures, which the IMX 519 is going to struggle with.



> Nobody wants a 20s boot every time you want to take a picture

But that's less due to the RPi and more due to lots of amateur projects that ship the RPi with a desktop Linux distribution like Raspbian (itself based on a very conservative one - Debian - that loves preserving decades of legacy crap).

You can absolutely get quick boot times on an RPi (or on an x86 machine for that matter, although you are limited by the time the firmware itself takes to boot) if you build your own read-only image with Buildroot/Yocto like any embedded shop would.
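For the curious, the usual embedded recipe looks something like this: start from Buildroot's shipped Raspberry Pi defconfig and trim toward a minimal read-only image. The specific symbols below are illustrative of the approach, not a tested fast-boot config:

```
# Start from Buildroot's shipped defconfig for the board:
#   make raspberrypi4_64_defconfig
# Then trim toward a minimal, read-only image, e.g.:
BR2_INIT_BUSYBOX=y              # busybox init instead of systemd
BR2_TARGET_ROOTFS_SQUASHFS=y    # compressed, read-only root filesystem
```

From there, boot time is mostly a matter of stripping kernel drivers you don't need and starting the camera application directly from init.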

But I agree with the rest of the comment - an RPi is a terrible device for this (and for most purposes besides prototyping in fact). But not because of boot time reasons.


Also, the RPi is the wrong kind of hardware for attestation; at least use something like the USB Armory, which provides a user-programmable ARM TrustZone environment.

Since the USB Armory supports pinning multiple keys for secure boot (and, IIRC, protected storage), you could even deliver it set up with a manufacturer attestation key and allow the user to load and pin their own attestation key (useful for an organization like a news company). You could also allow "dual boot" between the attested firmware signed by the pinned manufacturer key and the user's own firmware. I've wanted that kind of behavior in consumer hardware for a long time: full freedom to use either the locked-down OEM environment or your own, and to switch between them freely.

(I assume the USB Armory might also not be ideal in terms of sleep and boot speed, etc., but if a quicker, smaller controller is the main board, it could wake the one that supplies attestation and make that functionality available once it's done booting.)


Another thread mentioned that this camera was made by crypto enthusiasts from a software/ZKP starting point, and not a photography starting point. If true, it will have a lot of maturation to do, but most likely they will either be incorporated into a "real" camera design, or they will just fold.


From these pics it actually looks like a whole Pi 4 board is used: https://farcaster.xyz/faust


Interesting. I'm curious why they would do that.


1. buy stuff for $50

2. 3d print a couple of cases for $10

3. repurpose high-school summer-break crypto project... free? (excluding time spent)

4. ???

5. profit from selling it for $400 a pop


All the stuff is off the shelf, which makes it way easier to develop. There is no reason to actually use an RPi, compute module or not, as a base camera board (speaking from experience) other than that it is super easy to start with.


I disagree. If the CM5 had the ability to sleep at tiny fractions of a watt, there are really practical and usable cameras you could pull off today, even if it's not the most efficient option. For all the downsides, it would more than make up for them in the ease-of-development department.

I believe if RPi6 adds sleep, you'd see a flurry of portable gadgets built on the platform.


Speed of development is fine for a prototype, but for an actual product it is just sloppy and wasteful. The problem isn't even battery hungriness, but boot time. Users don't want to wait 20-60 seconds for their camera to load an entire Linux kernel and drivers, and then all the software you have cobbled together on top, when you could be up and running almost instantly with a microcontroller instead of a CPU.


You're agreeing with them, not disagreeing! :)

The person you replied to said the only reason to choose it is ease of development, and you've replied saying you disagree because, for all the downsides, that ease makes up for it.


> There is a market for a well made camera like the Fuji X-Half.

That product has, for its specs, a ridiculous price point of €750.


But you don't buy it for the specs, you buy it for the experience. It topped sales charts when it was launched. If I had more time to spend on photography, or if I was younger, or if it was a little cheaper I'd have bought it myself.

I suspect more will follow the X-Half, because it gets orientation right. Most images are viewed today in portrait mode, and half-frame is the right format for that.


The people who buy these cameras would probably be better served by upgrading their phones. Phones are good enough cameras for this use and they are infinitely better at processing.

As a long time hobbyist photographer I can understand buying cameras because they have a certain appeal. But I have to say that I honestly do not understand why someone would spend lots of money and then not want to take advantage of the technology offered.

I think shooting to JPEG and using film profiles is kind of pointless. If you want to shoot film, shoot film. Imagine you have taken a really good picture, but it’ll always look worse than it could because you threw away most of the data and applied some look to it that will date it.

I do understand that a lot of people think these cameras are worth buying. And that they are selling well. But I can’t understand why.


> The people who buy these cameras would probably be better served by upgrading their phones.

I'm sorry if this is too far off topic, but I routinely go to use my phone's camera and the ambient light level is so high I can barely see what I'm intending to photograph, and I certainly can't see the on-screen controls.

I've seen hoods intended to go over your head, into which the phone fits, and this would, I assume, resolve the issue. But by comparison, a point-and-shoot with a 'proper' viewfinder (perhaps with the rubber surround some used to have) would be a very good solution.


There are many motivations for shooting jpeg with film sims, from just not wanting to expend the effort editing photos to my motivation as a colour-blind person who simply cannot see colour well enough to manually adjust photos. For me, it’s incredible being able to choose a film simulation and be happy with the result even if I know that the colours I’m seeing aren’t quite the same that others will see. It’s the entire reason I bought into the FujiFilm system.


If you want to shoot to JPEG and not post-process, you don't really need a camera designed to capture far more data than the target format can represent. And yet people pay for really expensive cameras with the kind of dynamic range that is only useful for post-processing. It is like paying for a sports car with a big engine, and then having someone else drive it no faster than 20mph while you sit in the passenger seat. It is a waste of money. And camera companies are taking advantage of consumers who think they need these expensive cameras to get the kinds of shots they want.

They don't.

Of course I understand that it is more complicated than that. How the camera looks and handles is a huge part of the equation. (I am, after all, the kind of moron who has a Leica in their collection of cameras -- which is a nice camera, but it isn't technically as good as my Nikons :-)). But I still feel that the industry is taking advantage of consumers by selling them capabilities they aren't ever going to use.

Some camera manufacturers do something that is somewhat sensible: they make their film emulation profiles available in post-processing. So you can shoot raw, take advantage of the leeway this provides to get the exposure and tonality right, and then apply the film simulations in post.

As for post-processing, I think the biggest problem is that people think it requires a lot of work and that it is complicated. It is easy to get that impression when you see all of the _atrocious_ editing videos on youtube of people over-editing pictures.

If you do have to spend a lot of time post-processing, the problem is usually that you have no idea how to capture a photo in the first place -- or you have no idea what you want. It pays off to learn how to shoot. And if people aren't interested in learning: mobile phone cameras will usually make more satisfying images with a lot less work. They are _far_ more capable of instant gratification than expensive compact cameras from just 10 years ago.

And I say that as someone who spends a lot of time learning. Even after 30 years. Either you want to up your game, or you don't. If you don't, then there is very little a film preset can do for you.

As for color blindness: you will be no more capable of creating a decent color photo by having the camera slap some color grading on your picture than if you actually edit it in lightroom. Though you can probably learn how to correct images that have obvious color defects without actually being able to see them in post. You can't do that in the camera.

That being said, I do most of my (very rapid) post processing in black and white. The first thing I do is to turn off the colors to adjust exposure, contrast, tonality etc. Once that is in place I turn the colors back on and do any color grading/corrections I want. This is where you'd apply film simulations etc. And as I said in the paragraph above: if you are color blind, it makes no difference if you let the camera do it or some film preset.

I spend perhaps 10-30 seconds per image in post. (Usually I spend more time on the first picture in a series and then apply those edits to all photos of the same scene or with the same settings and lighting with minor variations).

The big advantage of doing this in post is that you have an entire universe of film simulations to choose from; you are not limited to what comes with your camera. And you will have a lot more wiggle room to get the exposure and tonality right.

A lot of photographers (myself included) don't actually shoot so the image looks the way they want it to end up; they shoot with specific processing in mind. Usually that's because you know what the camera sensor is capable of, so you optimize for capturing usable raw data and get the result you want in post. And with practice, post-processing shouldn't be time-consuming.


> if it was a little cheaper I'd have bought it myself.

Same here. Even for the experience it's overpriced.


I know nothing about photography, but I'll just comment on this point:

> (I'm guessing this is a CM4/CM5) is a disaster for a camera board. Nobody wants a 20s boot every time you want to take a picture, cameras need to be near instantaneous.

You can boot an RPi in a couple hundred milliseconds.


I think almost everyone here is missing the point of this camera. In the post truth AI future, this is the camera you want when you photograph the billionaire or President or your spouse doing something awful. Any other photo proof won’t work because it can always be called fake. And yes I’m being serious. You are missing the point if you say the quality isn’t good enough or it’s too slow or bulky. The idea is the provable authenticity, which is going to be very important in the coming decades.


You can just AI generate a photo and snap a picture of that.

There's no such thing as provable authenticity.


I imagine that, if attested cameras like this come into any sort of regular use, you'll see additional layers of metadata mixed into the signature: a depth map, GPS, accelerometer data, operator biometrics, etc. None of these is necessarily infallible, but together they create considerable barriers to faking things.
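Binding those layers together is the key point: the signature covers the pixels and all the sensor metadata as one record, so a forger has to fake every layer consistently. A minimal sketch, with HMAC standing in for the asymmetric signature a real camera's secure element would produce, and all keys and metadata fields hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical per-device key; HMAC stands in for the asymmetric signature
# a real attested camera would produce inside its secure element.
DEVICE_KEY = b"per-device-attestation-key"

def attest_capture(image: bytes, metadata: dict) -> dict:
    """Bind sensor data and capture metadata into one signed record.

    Hashing image plus metadata together means changing any layer
    (depth map, GPS, accelerometer...) invalidates the signature.
    """
    meta_blob = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(image + meta_blob).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": tag, "metadata": metadata}

record = attest_capture(
    b"raw-sensor-bytes",
    {"gps": [52.52, 13.40], "depth_map_hash": "...", "accel": [0.0, 0.0, 9.8]},
)
```

A verifier recomputes the digest from the published image and metadata and checks the signature against the device's known attestation key; the "photograph a screen" attack then has to produce plausible depth, motion, and location data too, not just plausible pixels.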


That's likely to be easily detected.


I think some of the modern iPhone cameras use SLVS, so non-iPhone Apple Silicon might have a way of connecting to that natively too. Good luck using that though.

Without a native connection option, what remains is probably an FPGA converter (to MIPI CSI-2 D-PHY), which is going to be expensive, of course. But still not as expensive as the sensor itself and the associated optics.



