I don't think they meant the guesses were impressive in the sense of succeeding against a constraint of limited supporting data (which would be impressive in its own way), but rather against a baseline expectation of what could reasonably be derived from a picture.
The impressive thing is that there is such massive support infrastructure, in the form of data and algorithmic firepower, powering the guessing capabilities to be as good as they are.
You can use an ESP32 with a GPS module and its PPS signal. The PPS signal from the module often has a precision of roughly 60 ns against the global GPS standard.
With that signal you can PID-control an internal timer of the ESP32, which can then be used to timestamp audio frames. Send that to a central host over WiFi and you can use your standard localization math.
The trick is to use the ESP32's internal 10 MHz hardware, which automatically latches a timestamp into a register when a GPIO edge occurs, rather than high-level C constructs that have to eat their way through who knows how many API layers.
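To make the disciplining step concrete, here's a rough simulation of the control loop (plain Python, every constant made up for illustration); the real firmware would run this logic in C against the hardware capture register, as described above:

    # Toy simulation of disciplining a free-running 10 MHz counter to GPS PPS.
    # All numbers are illustrative; real firmware reads the capture register
    # latched on the PPS edge instead of simulating the counter.
    NOMINAL_HZ = 10_000_000        # nominal tick rate of the local counter
    TRUE_PPM_ERROR = 12.0          # crystal error the loop has to discover

    kp, ki = 0.5, 0.1              # PI gains (illustrative)
    integrator = 0.0
    freq_correction = 0.0          # estimated fractional frequency error
    counter = 0.0                  # local counter, in ticks

    for second in range(1, 31):
        # Counter advances for one true second at the (erroneous) crystal rate,
        # minus the correction currently being applied.
        counter += NOMINAL_HZ * (1 + TRUE_PPM_ERROR * 1e-6 - freq_correction)

        # On the PPS edge the counter "should" read NOMINAL_HZ * second;
        # the residual is the phase error fed to the controller.
        phase_error = (counter - NOMINAL_HZ * second) / NOMINAL_HZ  # seconds

        integrator += phase_error
        freq_correction = kp * phase_error + ki * integrator

        print(f"s={second:2d}  phase_error={phase_error*1e6:+8.2f} us  "
              f"correction={freq_correction*1e6:+7.3f} ppm")

The phase error converges toward zero while the integrator settles on the actual crystal offset, which is all the "PID control" of the internal timer really amounts to here.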
I've been interested in deploying something like this around my property to localize sounds that I hear just for fun.
IMO having an on-device model pre-filter for the signals of suspected drones is potentially a good idea in a wartime environment. Not only does it conserve bandwidth (which might be a limited resource), it also reduces airtime and thus makes the devices harder to spot.
GPS is also unreliable in Ukraine, especially near the front line.
It's unclear which approach would be better from a power budget point of view. One requires substantially more local processing power but much less radio time, while the other requires continuous radio transmission.
> GPS is also unreliable in Ukraine, especially near the front line.
There are GPS antennas that physically reject signals not coming from the sky by a huge number of decibels. Maybe AliExpress has some of them in stock? These used to be heavily ITAR-restricted, but that ban was lifted recently.
Another option: try to sync against the DCF77 signal from Germany, using not only the beep-beep-beep time code but also the integrated phase modulation. Jamming VLF is difficult, and 77.5 kHz is within the range of ordinary ADCs.
Then make a voter: if GPS/GLONASS/Galileo/BeiDou is available, prefer them; if not, fall back to DCF77. If that fails too: free-running.
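A sketch of what that voter might look like (the health-check inputs are made up; real code would query lock/quality flags from the GNSS module and the DCF77 receiver):

    from enum import Enum, auto

    class TimeSource(Enum):
        GNSS = auto()          # GPS / GLONASS / Galileo / BeiDou
        DCF77 = auto()         # 77.5 kHz fallback
        FREE_RUNNING = auto()  # local oscillator only

    def pick_source(gnss_locked: bool, gnss_trusted: bool, dcf77_locked: bool) -> TimeSource:
        """Prefer GNSS; fall back to DCF77; otherwise free-run on the local clock."""
        if gnss_locked and gnss_trusted:
            return TimeSource.GNSS
        if dcf77_locked:
            return TimeSource.DCF77
        return TimeSource.FREE_RUNNING

    # Example: GNSS is jammed/spoofed near the front line but DCF77 still decodes.
    print(pick_source(gnss_locked=True, gnss_trusted=False, dcf77_locked=True))
    # -> TimeSource.DCF77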
Cell phones are kinda nice because they're hard to ITAR. Anyone can buy them, and old, crappy ones are generally still good enough for this kind of micro model. They come with their own batteries to hold over through power loss or overnight, and they come with their own sensors, radios, and compute already integrated. Basically, you can just write software and ignore the hardware side entirely.
Remember, this isn't planned to be a long term solution, or to provide the highest quality available, or to be the cheapest or most efficient solution. It's intended to allow Ukraine to quickly plug sensor gaps in the lines and to scale easily.
I'm wondering whether it would be possible, and perhaps quite elegant, to use an XY-scanner to raster-scan the end of an optical fiber across a prism, disperse the light, and then capture the resulting spectrum with a CCD line sensor.
With that setup, each readout of the line sensor would record the full spectral content of the light at the scanned position in a single acquisition, one wavelength per pixel.
You could probably use just an X-scanner and, instead of a CCD line sensor, a regular 2D image sensor, if you used a "1 pixel wide" slit aperture to crop the image perpendicular to the direction in which the prism disperses the light. So instead of a single pixel being dispersed, you disperse a whole line.
You would reduce the time required by a factor of the square root of the number of pixels you want (assuming a square image).
(This is what we do in momentum-resolved electron energy loss spectroscopy. In that situation we have electromagnetic lenses that focus the electrons that have been dispersed, so we don't have as bad a chromatic aberration problem as the other response mentions).
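To spell out the line-scan variant: each exposure of the 2D sensor gives position along the slit on one axis and wavelength on the other, and stacking exposures along the scan direction yields the full cube. A numpy sketch with made-up dimensions and dummy data:

    import numpy as np

    n_scan, n_line, n_bands = 256, 256, 31   # scan positions, pixels along slit, spectral bins

    def capture_frame(scan_position: int) -> np.ndarray:
        """Stand-in for one exposure of the 2D sensor behind slit + prism:
        rows = position along the slit, columns = wavelength bin."""
        return np.random.rand(n_line, n_bands)        # dummy data

    # Stack exposures along the scan direction: (scan axis, slit axis, wavelength)
    cube = np.stack([capture_frame(x) for x in range(n_scan)], axis=0)
    print(cube.shape)             # (256, 256, 31)

    band_image = cube[:, :, 15]   # a monochromatic image is just a slice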
I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!
> I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!
Here[1] are some 31-band hyperspectral images of butterflies. Numpy/pillow can unpack the .mat files into normal images. Then perhaps vibecode a slider, or just browse the band images?
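A rough sketch of unpacking one of those .mat files into per-band images (the filename and the variable key are guesses; print the keys first to see what the file actually contains):

    # Rough sketch: unpack one of the 31-band .mat files into per-band PNGs.
    # The filename and the variable key are guesses; for MATLAB v7.3 files
    # you'd need h5py instead of scipy.io.loadmat.
    import numpy as np
    from scipy.io import loadmat
    from PIL import Image

    mat = loadmat("butterfly.mat")                       # hypothetical filename
    print([k for k in mat if not k.startswith("__")])    # see what's inside

    cube = np.asarray(mat["ref"], dtype=np.float64)      # hypothetical key, (H, W, 31)?

    for band in range(cube.shape[-1]):
        img = cube[..., band]
        lo, hi = img.min(), img.max()
        img8 = (255 * (img - lo) / (hi - lo + 1e-12)).astype(np.uint8)
        Image.fromarray(img8).save(f"band_{band:02d}.png")

From there a slider is just a matter of flipping between the band images.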
I knew of the site from having explored: "First-tier physical-sciences graduate students are often deeply confused about color. Color is commonly taught, starting in K..., very very poorly. So can we create K-3 interactive content centered around spectra, and give an actionable understanding of color?"
A problem for multispectral imagery (even within visible RGB) is that the wavelengths of light are different, so the lens cannot be in focus for the whole spectrum at once. I have tested this out with a few of my SLR lenses. If you have the blue channel perfectly in focus, red isn't just a little out of focus; it is noticeably far out.
This is called chromatic aberration, for those who are intrigued.
Given that regular phone cameras have sensors that detect RGB, I wonder if one could get noticeably sharper images with three camera lenses side by side (each over a single-color sensor), with a color filter for R, G, and B respectively, so that the camera could focus perfectly for each wavelength.
There are lenses out there designed for apochromatic performance across the UV-Vis-IR band, but they tend to be really pricey.
The Coastal Optical 60mm is a frequently cited one. UV in particular is challenging, because glass that works well in the visible range can transmit UV quite poorly. Quartz is better, but it drives up the cost a lot and comes with other tradeoffs.
I've had this problem as well, but it's just due to the optical properties of the lens and is extremely consistent from image to image, so you can calibrate and correct for it as long as you focus each wavelength and collect the data separately.
I don't think you can properly calibrate for it unless you also move the camera to compensate for focus breathing, and I'm not sure even that would fully account for it. That being said, these things are only really noticeable when pixel peeping.
Focus breathing can be compensated for. The "breathing" only changes the effective focal length, not the location of the camera, so you can map the pixels to match where they should be and bilinear/bicubic interpolate appropriately.
Shoot a checkerboard at both wavelengths each focused properly and then compute the mapping.
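A minimal OpenCV sketch of that mapping step (filenames, board size, and the choice of a homography are assumptions on my part):

    # Compute a per-wavelength mapping from a checkerboard shot at each
    # wavelength, each focused properly, then remap one channel onto the other.
    import cv2
    import numpy as np

    BOARD = (9, 6)   # inner corners of the checkerboard (assumed)

    def corners(path: str) -> np.ndarray:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        ok, pts = cv2.findChessboardCorners(img, BOARD)
        assert ok, f"checkerboard not found in {path}"
        return cv2.cornerSubPix(img, pts, (11, 11), (-1, -1),
                                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

    ref = corners("checkerboard_blue.png")    # reference wavelength
    mov = corners("checkerboard_red.png")     # wavelength to be remapped

    # A homography absorbs the effective focal-length change from breathing.
    H, _ = cv2.findHomography(mov, ref, cv2.RANSAC)

    red = cv2.imread("scene_red.png")
    red_aligned = cv2.warpPerspective(red, H, (red.shape[1], red.shape[0]),
                                      flags=cv2.INTER_CUBIC)
    cv2.imwrite("scene_red_aligned.png", red_aligned)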
If you're shooting macro stuff then maybe you are changing the effective location of the camera slightly depending on the exact mechanics of the lens and whether the aperture slides with the focusing, but the couple of mm shift in camera location won't matter for landscapes.
Alternatively, use cine lenses which are engineered not to breathe, but they are typically more expensive for that reason.
It may even be a good thing, from a PoV of learning resiliency and adaptation to supply chain changes. They probably ended up very hard to disrupt.
I've seen an Orange Pi 5+ (which, coincidentally, I wrote the upstream DTS for) in a drone, a Raspberry Pi, etc.
Despite the Opi5+ having a sophisticated ISP and camera interfaces, they just used some USB/analog camera capture card. Probably because if you're using a generic interface that just works, you can throw in any SBC that has a so-so working Linux build somewhere, so that USB and GPIO/I2C/SPI or any of these generic interfaces work, and you're golden. Your other software can then stay the same, because everything that uses these interfaces from userspace is well abstracted away from platform details by Linux and works identically across all SBCs.
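E.g. grabbing frames from the USB capture dongle looks the same on any of these boards (the device index is just whatever the dongle enumerates as):

    import cv2                      # V4L2 under the hood on Linux

    cap = cv2.VideoCapture(0)       # /dev/video0, wherever the dongle shows up
    ok, frame = cap.read()          # identical call on any SBC with working USB
    if ok:
        cv2.imwrite("frame.png", frame)
    cap.release()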