Hacker News

Yeah, I've seen this kind of kindergarten explanation before. It makes a bunch of simplifications that aren't helpful.

With a phased array you're beam forming and sweeping an area of space. Your signal returns are from the beam or side lobes. You can passively beam form on Rx as well.

But with SAR you're not beam forming. You're illuminating everything - the whole ground below you. And you get a return from everywhere all at once. Two equidistant reflectors will return signals simultaneously. If your flight path is between these two points, and the distance is always equal, how can you differentiate them?

You're digitally beam forming on the Rx somehow, but I think there is more to it.



> But with SAR you're not beam forming. You're illuminating everything - the whole ground below you. And you get a return from everywhere all at once. Two equidistant reflectors will return signals simultaneously. If your flight path is between these two points, and the distance is always equal, how can you differentiate them?

There are a couple conceptual ways to think about SAR. One is, in fact, as beamforming. Each position of the radar along the synthetic aperture is one element in an enormous array that's the length of the synthetic aperture itself: that's your receive array.

Regarding your question about scatterers that are equidistant along the entire synthetic aperture length: typically, SAR systems don't use isotropic antennas. And they're generally side-looking. So you would see the scatterer to one side of the radar, but not the equidistant scatterer on the other side.

If you had an isotropic antenna that saw to each side of the synthetic aperture, then the resulting image would be a coherent combination of both sides. Relevant search terms would be iso-range and iso-Doppler lines. Scatterers along the same iso-range and iso-Doppler lines over the length of the synthetic aperture are not distinguishable.
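To make the left/right ambiguity concrete, here's a toy numeric check (all geometry numbers are made up for illustration): two scatterers mirrored across the ground track have identical range histories over the whole synthetic aperture, so with an isotropic antenna their returns are indistinguishable - which is exactly why side-looking geometry matters.

```python
import numpy as np

# Hypothetical geometry: platform flies along the x-axis at altitude h.
# Two point scatterers mirrored across the ground track (+y and -y).
h = 5000.0                                  # platform altitude, m (assumed)
x_platform = np.linspace(-500, 500, 101)    # positions along track, m
p_left = np.array([0.0, -300.0, 0.0])       # scatterer left of track
p_right = np.array([0.0, +300.0, 0.0])      # mirrored scatterer

def range_history(target):
    # Range from every platform position to the target.
    pos = np.stack([x_platform,
                    np.zeros_like(x_platform),
                    np.full_like(x_platform, h)], axis=1)
    return np.linalg.norm(pos - target, axis=1)

r_left = range_history(p_left)
r_right = range_history(p_right)
# Identical range (and hence Doppler) histories: same iso-range and
# iso-Doppler lines, so the two scatterers cannot be separated.
print(np.allclose(r_left, r_right))  # True
```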

As to your question earlier in the chain, my preferred SAR book is Carrara et al. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Given the title, it is of course geared toward spotlight (where you steer the beam to a particular point) rather than strip map or swath (where your beam is pointed at a fixed angle and dragged as you move along). It has decent coverage of the more computationally efficient Fourier-based image formation algorithms but does not really treat algorithms like the back projection that Henrik uses (I also think back projection is easier to grasp conceptually, particularly for those without a lot of background in Fourier transforms). But my book preference might just be because that's what I first learned with.
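Since back projection came up, here's a minimal sketch of the time-domain back projection idea, under heavily simplified assumptions (single ideal point scatterer, monostatic geometry, ideal range-compressed data, all parameters invented): for each image pixel, compute the range to every aperture position, remove the expected two-way phase, and sum coherently. Real systems add motion compensation, range interpolation, and windowing.

```python
import numpy as np

c = 3e8
fc = 10e9                    # carrier frequency, Hz (assumed X-band)
wavelength = c / fc
h = 5000.0                   # platform altitude, m (assumed)
aperture = np.linspace(-200, 200, 201)   # platform x positions along track
target = np.array([0.0, 300.0, 0.0])     # one point scatterer

# Simulated phase history: unit-amplitude return with two-way phase.
pos = np.stack([aperture, np.zeros_like(aperture),
                np.full_like(aperture, h)], axis=1)
r_true = np.linalg.norm(pos - target, axis=1)
phase_history = np.exp(-1j * 4 * np.pi * r_true / wavelength)

# Back projection: match the expected phase at each pixel and sum.
xs = np.linspace(-20, 20, 41)
ys = np.linspace(280, 320, 41)
image = np.zeros((len(ys), len(xs)))
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        r = np.linalg.norm(pos - np.array([x, y, 0.0]), axis=1)
        image[i, j] = np.abs(np.sum(
            phase_history * np.exp(1j * 4 * np.pi * r / wavelength)))

# Only at the true target location do all apertures add coherently.
iy, jx = np.unravel_index(np.argmax(image), image.shape)
print(xs[jx], ys[iy])   # peak at the true target location, (0.0, 300.0)
```

This is also why back projection is easy to grasp: it's literally "for every pixel, check whether the returns line up in phase", with the efficient Fourier-based algorithms doing the same combination faster.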


>> Your signal returns are from the beam or side lobes

You're skipping a step -- where does that beam come from? For simplicity let's think about a scene illuminated uniformly (i.e. from a single element) so that we don't get hung up on the transmit beam. I think we agree you could still sweep a receiving phased array beam across that scene. Let's further assume it's digital beamforming, so you're storing a copy of the signal incident _at every element of the array_. Not a 'beam' yet, just a bunch of individual signals.

>> you get a return from everywhere all at once

Yes! Think about each of those elements of the phased array -- they're also receiving signals from everywhere all at once.

It only becomes localized into a beam when you combine all the elements with specific phase weights. That process of combining element returns to form the beam is mathematically identical to what you do in SAR as well -- combine all your individual 'element' (individual snapshot in space) responses with some phase weights to get the return in one direction. Repeat the math in all directions to form one dimension of the image (second dimension is the radar time-of-flight bit, which is unrelated to the beamforming).

Maybe not you specifically, but I think people don't understand the 'synthetic aperture' part. Specifically, that you can ignore the time between snapshots (because the transmitter and receiver are synchronized) and act like all the snapshots the platform took across the line of flight happened simultaneously. What you're left with is the element responses to a big phased array, and you can 'beamform' using those responses.
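The "combine element signals with phase weights" step described above can be sketched in a toy narrowband uniform-linear-array setting (every parameter here is invented for illustration, not from any particular system): each element records the sum of all incident signals, and only the steering weights localize a "beam".

```python
import numpy as np

n_elem = 64
d = 0.5                                # element spacing in wavelengths
angles = np.radians([10.0, 40.0])      # two plane-wave source directions
n = np.arange(n_elem)

# Each element receives both sources superimposed -- "from everywhere
# all at once", exactly like the individual SAR snapshots.
signals = sum(np.exp(1j * 2 * np.pi * d * n * np.sin(a)) for a in angles)

def beamform(theta):
    # Phase weights that co-phase a plane wave arriving from theta.
    w = np.exp(-1j * 2 * np.pi * d * n * np.sin(theta))
    return np.abs(np.dot(w, signals)) / n_elem

print(beamform(np.radians(10.0)))   # ~1: steered at a source
print(beamform(np.radians(25.0)))   # near 0: steered between sources
```

For SAR, replace "element n of the array" with "snapshot n along the flight path" and the combining math is the same; repeating it over all steering angles gives the cross-range dimension of the image.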


You can't differentiate them in that case. You'd have to fly orthogonally across the surface for maximum effect.



