
Recently I've been on a bit of a deep dive regarding human color vision and cameras. This left me with the general impression that RGB Bayer filters are vastly over-utilized (mostly due to market share), and that they are usually not great for tasks other than mimicking human vision! For example, if you have a stationary scene, why not put a whole bunch of filters in front of a mono camera and get much more frequency information?


That's common in high-end astrophotography, and almost exclusively used at professional observatories. However, scientists like filters that are "rectangular", with a flat passband and sharp falloff, very unlike human color vision.


Assuming the bands are narrow, that should allow approximately true-color images, shouldn't it?

Human S-cone channel = sum over bands of (intensity in that band) * (human S-cone sensitivity in that band)

and similarly for the M and L cone channels, which converges to the integral representing true color in the limit of narrow bands.

Are the bands too wide for this to work?
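
For what it's worth, here's a minimal sketch of the sum I mean, in Python with NumPy. The band centers, intensities, and cone-sensitivity samples are all made-up placeholders; real data would substitute measured band signals and the CIE cone fundamentals sampled at the band centers.

    import numpy as np

    # Hypothetical per-band measurements from a mono camera behind narrowband filters.
    band_centers_nm = np.array([450, 500, 550, 600, 650])   # assumed band centers
    band_intensity  = np.array([0.8, 0.6, 1.0, 0.9, 0.4])   # assumed signals per band

    # Placeholder S/M/L cone sensitivities sampled at each band center
    # (rows: S, M, L). Real cone fundamentals would go here.
    cone_sensitivity = np.array([
        [0.90, 0.30, 0.02, 0.00, 0.00],   # S
        [0.10, 0.60, 1.00, 0.50, 0.10],   # M
        [0.05, 0.40, 0.90, 0.80, 0.30],   # L
    ])

    # Riemann-sum approximation of the integral over wavelength:
    #   cone response ~ sum over bands of intensity(band) * sensitivity(band) * delta_lambda
    delta_lambda = 50.0   # nm, assumed uniform band spacing
    s, m, l = cone_sensitivity @ band_intensity * delta_lambda
    print(f"S={s:.1f}  M={m:.1f}  L={l:.1f}")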


> Are the bands too wide for this to work?

For wideband filters used for stars and galaxies, yes. Sometimes the filters are wider than the entire visible spectrum.

For narrowband filters used to isolate emission from a particular element, no. If you have just the Oxygen-III signal isolated from everything else, you can composite it as a perfect turquoise color.
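
As a toy illustration (not anyone's actual pipeline), compositing an isolated narrowband frame really is just tinting a mono image with a chosen color. The turquoise RGB value below is an assumption picked for illustration, and the random array stands in for a calibrated OIII frame:

    import numpy as np

    oiii = np.random.rand(512, 512)          # stand-in for a calibrated OIII frame, scaled 0..1
    turquoise = np.array([0.0, 0.8, 0.7])    # assumed RGB tint for the ~501 nm OIII line

    rgb = np.clip(oiii[..., None] * turquoise, 0.0, 1.0)   # each pixel scaled by the tint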


One big reason for filters in astronomy and astrophotography is to block certain frequency ranges, such as city lights.


The vast majority of consumers want their camera to take pictures of people that “look good” to the human eye; the other uses are niche.

But that said, I’m actually surprised that astrophotographers are so interested in calibrating stars to the human eye. The article shows through a number of examples (IR, hydrogen emission line) that the human eye is a very poor instrument for viewing the “true” color of stars. Most astronomical photographs use false colors (check the captions on the NASA archives) to show more than what the eye can see, to great effect.


I suspect it's because when conditions are right to actually see color in deep-sky objects, it's confounding that it doesn't look the same as the pictures. Especially if seeing the colors with your own eyes feels like a transcendent experience.

I've only experienced dramatic color from deep-sky objects a few times (the blue of the Orion Nebula vastly outshines all the other colors, for instance), and it's always sort of frustrating that the pictures show something so wildly different from what my own eyes see.


There's a good chance the real problem there is limited gamut on the screen, and with the right viewing method the RAW photo could look much much better.


If you get a big enough telescope, it will gather enough light that you'll see things in proper color. I've seen the Orion Nebula with a 10-inch reflector in a good location and the rich pinks, blues, and reds were impossible to miss. These are the actual photons emitted from that object hitting your retina, so it's about as "true color" as you can get.

I think when astrophotographers are trying to render an image it makes sense that they would want the colors to match what your eyes would see looking through a good scope.


I think you want a push broom setup:

https://www.adept.net.au/news/newsletter/202001-jan/pushbroo...

Hyperspectral imaging is a really fun space. You can do a lot with some pretty basic filters and temporal trickery. However, once you're out of hot-mirror territory (the near-IR and IR filtering done on most cameras), things have to get pretty specialized.

But grab a cold mirror (a visible-light-cutting, IR-passing filter) and a night-vision camera for a real party on the cheap.


In case you weren't already aware, that last bit basically describes most optical scientific imaging (e.g. satellite imaging or spectroscopy in general).


The technical term for this is multispectral imaging. Lots of applications across science and industry.

[0] https://en.wikipedia.org/wiki/Multispectral_imaging


And don't forget about polarization! There's more information out there than just frequency.


I guess that’s yet another dimension. Perhaps spin a polarizing filter in front of the camera to grab that?
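
Something along those lines works: the usual trick is four frames with the polarizer at 0, 45, 90, and 135 degrees, from which you can recover the linear Stokes parameters. A rough Python sketch, where the four arrays are hypothetical aligned exposures of a static scene:

    import numpy as np

    # Hypothetical aligned exposures at polarizer angles 0, 45, 90, 135 degrees.
    i0, i45, i90, i135 = (np.random.rand(480, 640) for _ in range(4))

    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component

    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear polarization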


There are for sure things to explore. Craig Bohren once wrote that he wouldn't think of going anywhere without polarizing sunglasses. His books are really nice (Fundamentals of Atmospheric Radiation, Clouds in a Glass of Beer, ...).


> why not put a whole bunch of filters in front of a mono camera and get much more frequency information?

RGB filters alone aren't really going to get you anything better than a Bayer matrix for the same exposure time, and most subjects on Earth are moving too much to do separate exposures for three filters.

The benefit of a mono camera and RGB filters is that you can take advantage of another quirk of our perception: we are more sensitive to intensity than to color. Because of this, it's possible to spend a limited amount of exposure time on the RGB filters and use a fourth "luminance" filter for the majority of the time. During processing you combine your RGB images, convert the result to HSI, and replace the I channel with your luminance image. Because the L filter doesn't block much light, it gathers signal faster, but it's only really a benefit for very dark targets where getting enough signal is the issue.
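
A rough sketch of that luminance-replacement step, using scikit-image's HSV conversion as a stand-in for HSI, and random arrays in place of registered, calibrated exposures:

    import numpy as np
    from skimage.color import rgb2hsv, hsv2rgb   # HSV used here as a stand-in for HSI

    # Hypothetical registered exposures, scaled 0..1.
    r, g, b = (np.random.rand(512, 512) for _ in range(3))   # short color exposures
    lum = np.random.rand(512, 512)                            # deep luminance exposure

    hsv = rgb2hsv(np.dstack([r, g, b]))
    hsv[..., 2] = lum            # swap the value channel for the luminance image
    lrgb = hsv2rgb(hsv)          # recombined LRGB composite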


Yeah, I was surprised to learn that camera technology was calibrated primarily towards making white people look normal on film. Everything else was secondary. This is why cameras often have a hard time with darker skin tones: a century of the technology ignoring them.

Then I felt surprised that I was surprised by that.


Well, what should the industry have done instead at the time? It was mostly developed by white people for a market that was mostly white people.


We do quite a bit; multispectral imaging is a well-worn field used a lot in astronomy, scientific research, and the study of art and other historical artifacts. Some photographers use it too; it just gets harder because the scene is more likely to change slightly, blurring the image when you go to layer the different spectra, and generally photographers are trying to capture more human-adjacent representations of the scene.

https://en.wikipedia.org/wiki/Multispectral_imaging

https://colourlex.com/project/multispectral-imaging/


There are a few specialty cameras with fewer filters.

Canon has made a few astrophotography cameras:

https://en.wikipedia.org/wiki/Canon_EOS_R#Variants

There are also modified cameras available with the filters removed:

https://www.lifepixel.com/


That would trade time and frequency information for spatial information, which is what you want in astronomy, but maybe not for candid family photos.


Yes, of course, but with the obvious disadvantage that you lose resolution for every filter you add. Then you say let's just increase the pixel count, which means a smaller pixel pitch. But then you lose low-light sensitivity, have to decrease your lens f/#, so more expensive lenses, etc. Which is why it isn't done for commercial / mass-market sensors.


I read that as: take a bunch of pictures of a static scene, each with a different filter capturing specific frequency bands individually. Merge afterwards with whatever weights or algorithms you want.
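
i.e. something like the sketch below, where the stack of mono frames and the channel weights are both made-up placeholders:

    import numpy as np

    n_filters = 8
    stack = np.random.rand(n_filters, 512, 512)     # one mono exposure per filter

    # weights[channel, filter]: how much each filter contributes to R, G, B.
    weights = np.random.rand(3, n_filters)
    weights /= weights.sum(axis=1, keepdims=True)   # normalize per channel

    rgb = np.tensordot(weights, stack, axes=([1], [0]))   # shape (3, H, W)
    rgb = np.moveaxis(rgb, 0, -1)                          # shape (H, W, 3)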



