
I totally understand the sentiment, Alvy Ray's paper feels anachronistic and perhaps too pushy, but I think it's still more right and more applicable than you're allowing.

The real point of the paper is that a pixel is a sample in a band-limited signal, not something that covers area. That's still just as true today, no matter what camera or display, no matter what pixel shape you're using. The point behind the paper still stands, even if the shape turns out to be a square, so we shouldn't get too hung up on the title and language railing against a square specifically.

While it's true that display pixels are more square today than when the paper was written, that's only one minor piece of the puzzle. Because we're talking about image resizing, there are multiple separate filters to consider, and for resizing it would be bad to treat pixels as squares even if you could.

If a camera's pixels are little squares, and we want to sample and then resize that image, our choice of resize filter needs to account for the little squares. We couldn't use a Lanczos filter at all; we'd have to use something else entirely.
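To make that concrete, here's a minimal 1-D sketch of the kind of Lanczos resampler the comment has in mind. The helper names (`lanczos`, `resize_1d`) and the edge-clamping policy are my own illustrative choices; the key point is that the kernel is built on the assumption that each input value is a point sample of a band-limited signal, not an area average over a square:

```python
import math

def lanczos(x, a=3):
    # Lanczos-windowed sinc: the classic resize kernel. It assumes
    # the source pixels are point samples of a band-limited signal,
    # not little squares that cover area.
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resize_1d(samples, new_len, a=3):
    # Resample a 1-D signal to new_len with Lanczos interpolation,
    # using the usual half-pixel-center convention.
    scale = len(samples) / new_len
    out = []
    for i in range(new_len):
        # Map the output pixel center back into source coordinates.
        x = (i + 0.5) * scale - 0.5
        lo = math.floor(x) - a + 1
        acc = wsum = 0.0
        for j in range(lo, lo + 2 * a):
            w = lanczos(x - j, a)
            s = samples[min(max(j, 0), len(samples) - 1)]  # clamp edges
            acc += w * s
            wsum += w
        out.append(acc / wsum)  # normalize so weights sum to 1
    return out
```

If the inputs were instead area averages over squares, this kernel would be the wrong reconstruction filter, which is the comment's point.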

The big problem is that we treat sampled signals as perfectly band-limited. We have a body of knowledge about how to use and reason about perfectly band-limited signals; we don't have a comparably strong image resizing theory for sampled data whose samples carry high-frequency content, the way area-averaged square samples do.

If you don't convert to an ideal band-limited signal during initial sampling, then you'd have to keep the kernel shape with the image as some kind of metadata, and use it during every resize. And without a perfectly band-limited signal, the resize filter must always be larger than the ideal one, so resizes with square pixels will take longer than resizes with band-limited point samples.
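The widening described above follows from a basic property of convolution: the support of a convolution of two kernels is the sum of their supports. A toy calculation (assuming, for illustration, a Lanczos-3 reconstruction filter and unit-width square pixels) shows the combined filter is necessarily wider:

```python
# Support (in source pixels) of the resize filter for point
# samples vs. samples that carry a square (box) footprint.
# Convolving kernels adds their supports, so the combined
# filter is always wider than the point-sample filter alone.
lanczos_a = 3
point_support = 2 * lanczos_a        # Lanczos-3 spans 6 source pixels
box_width = 1                        # each "little square" covers 1 pixel
square_support = point_support + box_width  # box * Lanczos-3 spans 7

print(point_support, square_support)
```

More filter taps per output pixel means more work per pixel, which is why the square-pixel resize is strictly slower.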

> The paper's arguments about coordinate systems are also a waste of time for the modern reader.

I'm curious why? These are still issues if you write a ray tracer, or if you mix DOM and WebGL in the same app. The paper was written for the SIGGRAPH-going audience of the time -- professors and Ph.D. students -- who were graphics researchers learning signal processing theory for the first time. Graphics textbooks today still cover Y-up vs Y-down for images and 0.5 offsets for pixel centers.
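As a small illustration of those two conventions colliding, here's a hypothetical helper (the name `pixel_to_ndc` is mine) that maps a y-down, top-left-origin image pixel index to y-up, WebGL-style normalized device coordinates, sampling at the pixel center via the 0.5 offset:

```python
def pixel_to_ndc(px, py, width, height):
    # Image space: integer pixel indices, origin at top-left, y grows down.
    # NDC: [-1, 1] on both axes, origin at center, y grows up.
    # The + 0.5 places the sample at the pixel's center, not its corner.
    x = (px + 0.5) / width * 2.0 - 1.0
    y = 1.0 - (py + 0.5) / height * 2.0
    return x, y
```

For example, the middle pixel of a 3x3 image, `pixel_to_ndc(1, 1, 3, 3)`, lands exactly at the NDC origin `(0.0, 0.0)`; dropping the 0.5 offset would put it off-center, which is exactly the class of bug the paper's coordinate-system discussion is about.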


