Or, just don’t play the game. I don’t mean to be flippant, but why waste time on software employing shoddy practices? Wordle and Apple’s crossword minis are sufficiently stimulating and quick.
My tolerance for software like that is very limited. It’s almost an immediate long-press and uninstall.
Respectfully, if I may offer constructive criticism, I’d hope this isn’t how you communicate with software developers, customers, prospects, or fellow entrepreneurs.
To be direct, this reads like a fluff comment written by AI with an emphasis on probability and metrics. P(that) || that.
I’ve written software used by everything from a local real estate company to the Mars Perseverance rover. AI is a phenomenally useful tool. But be wary of preposterous claims.
I'll take you at your word regarding "respectfully." That was an off-the-cuff attempt to explain the real levers that control the viability of agents under particular circumstances. The target market wasn't your average business potato but someone who might care about a hand-waved "order approximate" estimator, kind of like big-O notation, which is equally hand-wavy.
Given that, if you want to revisit your comment in a constructive way rather than doing an empty drive-by, I'll read your words with an open mind.
There are many things wrong with this. I have an iPhone 17 Pro Max and use it to capture 48MP HEIF and ProRAW images for Lightroom. There’s no doubt of the extraordinary capabilities of modern phone cameras. And there are camera applications that give you a sense of the sensor data captured, which only further illustrates the dazzling wizardry that sits between what the sensor captures and the image seen by laypeople.
That said, there is literally no comparison between the iPhone camera and the RAW photos captured on a modern full-frame mirrorless camera like my Nikon Z6III or Z9. I can’t mount a 180-600mm telephoto lens to an iPhone, or a 24-120mm, or use a teleconverter. Nor can I swing an iPhone toward a bird or aircraft flying by at high speed, instantly lock and track focus in 3D, and capture 30 RAW images per second at 45MP (or 120 JPEGs per second), all while controlling aperture, shutter speed, and ISO.
Physics is a thing. The large sensor size and lenses (which can make a Mac Studio seem cheap by comparison) serve a purpose. Try capturing even a remotely similar image on an iPhone in low light, especially in RAW, and you’ll be sitting there waiting seconds or more for a single image. Professional lenses can easily contain 25 individual lens elements that move in conjunction as groups for autofocus, zoom, motion stabilization, etc. They’re state-of-the-art modern marvels that make an iPhone’s subject detection pale in comparison. Examples:
I can lock on immediately to a small bird’s eye 300 feet away with a square tracking the tiny eye precisely, and continue tracking. The same applies to pets, people, vehicles, and more with AI detection.
You can handhold a low-light shot at 1/15s to capture a waterfall with motion blur and continue shooting, with the camera optimizing the stabilization around the focus point—that’s the sensor and lens working in conjunction for real-time stabilization for standard shots, or “sports mode” for rapidly panning horizontally or vertically.
There’s a reason pro-grade cameras exist and people use them. See Simon d’Entremont, Steve Perry, and many others on YouTube for examples.
For most people, it doesn’t matter. They can happily shoot still images and even amazingly high-quality video these days. But dismissing the differences is wildly misleading. These cameras use memory cards that can cost half as much as the latest iPhone, or more, and for good reason [1].
With everything, there are trade-offs. An iPhone fits in my pocket. A Nikon Z8 with an 800mm lens and associated gear is a beast. Different tools, different jobs.
You are totally missing my point and talking past me. I have a Nikon Z8! I know what it is capable of!
The point I'm trying to make is that the RAW images coming out of a modern full-frame camera get very "light" processing in a typical workflow (e.g., Adobe Lightroom), little more than debayering before all further treatment is in ordinary RGB space.
Modern mobile phones have sensors with just as many megapixels, capturing a volume of raw data (measured in 'bits') that is essentially identical to a high-end full-frame sensor!
The difference is that mobile phones capture and digitally merge multiple frames shot in sequence to widen dynamic range (HDR) and reduce noise. They can even merge images taken from slightly different perspectives or with moving objects. They also apply tricks like debayering that is aware of pixel-level sensor characteristics and tuned to the specific make and model, rather than one algorithm shared across all cameras ever made, which is typical of something like Lightroom, Darktable, or whatever.
If I capture a 20 fps burst with a Nikon Z series camera... I can pick one frame. That's about the only operation I can do with those images! Why can't I merge multiple exposures with motion compensation to get an effective ISO 10 instead of 64, but without the blur from camera motion?
None of this has anything to do with lenses, auto-focus, etc...
I'm talking about applying "modern GPU" levels of compute power to the raw bits coming off a Bayer sensor, whether that's in a phone or a camera. The phone can do it! Why can't Lightroom!?
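To make that concrete, here's a minimal sketch of the kind of burst merge I mean, using OpenCV (4.1+): estimate the global camera shake between each frame and a reference with ECC, warp the frames into alignment, and average the stack so noise power drops roughly as 1/N. It assumes frames already demosaiced to 8-bit BGR (e.g., via rawpy); a real pipeline would merge in linear raw space with local, per-tile alignment. File names are illustrative.

    import cv2
    import numpy as np

    def merge_burst(frames):
        """Align a handheld burst to the first frame and average the stack."""
        ref = frames[0]
        ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
        acc = ref.astype(np.float64)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            warp = np.eye(2, 3, dtype=np.float32)
            # Estimate global translation + rotation vs. the reference
            _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                           cv2.MOTION_EUCLIDEAN, criteria, None, 5)
            # Warp the frame onto the reference grid and accumulate
            aligned = cv2.warpAffine(frame, warp, (ref.shape[1], ref.shape[0]),
                                     flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
            acc += aligned
        return np.clip(acc / len(frames), 0, 255).astype(np.uint8)

    # burst = [cv2.imread(f"frame_{i:02d}.png") for i in range(8)]  # hypothetical files
    # cv2.imwrite("merged.png", merge_burst(burst))

Averaging N aligned frames gathers N times the light, which is the "effective ISO 10 instead of 64" arithmetic: merge six or so ISO 64 frames and you're in that territory.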
> I have a Nikon Z8! I know what it is capable of!
It seems to me you underestimate the amount of work your camera is already doing. I feel like you overestimate the raw quality of a mobile camera as well.
> Modern mobile phones have sensors with just as many megapixels, capturing a volume of raw data (measured in 'bits') that is essentially identical to a high-end full-frame sensor!
There may be the same number of bits, but that doesn't mean the same quality of signal was captured. It's like saying that more bits on an ADC correspond to a better-quality signal on the line; it just isn't true. Megapixels are overhyped, and resolution isn't everything for picture quality.
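A toy illustration of that point (all numbers invented for the example): quantize a noisy analog signal at increasing ADC bit depths and watch the measured SNR plateau once the analog noise floor, rather than the quantization grid, dominates.

    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 20 * np.pi, 100_000))  # "true" signal in [-1, 1]
    noisy = signal + rng.normal(0, 0.05, signal.shape)    # analog noise before the ADC

    def snr_db(clean, measured):
        noise = measured - clean
        return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

    for bits in (4, 8, 12, 16):
        step = 3.0 / 2**bits  # ADC step size over a [-1.5, 1.5] input range
        quantized = np.round(noisy / step) * step
        print(f"{bits:2d}-bit ADC: SNR = {snr_db(signal, quantized):.1f} dB")

The SNR climbs from 4 to 8 bits, then flatlines around 23 dB: past that point the extra bits just digitize the noise more precisely. The same logic applies to cramming more megapixels onto a small sensor.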
> The phone can do it! Why can't Lightroom!?
Be the change you want to see: if the features you want are not in Lightroom, write a tool to implement them (or add the features to a tool like ffmpeg). The features you're talking about are just software applied after capture, so it should be possible starting from the camera's RAW files.
Perhaps you would be better off buying a high-quality point-and-shoot camera, or just using your phone, instead of a semi-professional full-frame camera for your purpose. With a dedicated camera you have options for how to process; if that means "light" processing in your "typical workflow", then that's up to you. And if you just want to point, shoot, and Instagram, you indeed don't want to spend time processing in Lightroom, and that's fine.
It feels like you're complaining that your expensive pickup can't fit your family and suitcases on holiday the way the neighbor's SUV can, even though they have the same horsepower and are built on the same chassis. They are obviously built for different purposes.
They’re known as DNs, or digital numbers. Thom Hogan’s eBooks do a phenomenal job of explaining the intricacies of camera sensors, their architecture, processing to JPEGs, and pretty much every aspect of capturing good photos.
The books, while geared toward Nikon cameras, are generally applicable. And packed with high-quality illustrations and an almost obsessive uber-nerd level of detail. He’s very much an engineer and photographer. When he says “complete guide”, he means it.
The section on image sensors, read-outs, and ISO/dual gain/SNR, etc. is particularly interesting, and should be baseline knowledge for anyone who’s seriously interested in photography.
To be clear, they default to JPEG for the image preview on the monitor (LCD screen). Whenever viewing an image on a professional camera, you’re always seeing the resulting JPEG image.
The underlying data is always captured as a RAW file, and only discarded if you’ve configured the camera to only store the JPEG image (discarding the original RAW file after processing).
> Whenever viewing an image on a professional camera
Viewing any preview image on any camera implies a debayered version, but who says it's JPEG-encoded? Why would it need to be? Every time I browse my SD card full of persisted RAWs, is the camera unnecessarily converting to JPEG just to convert it back to bitmap display data?
> The underlying data is always captured as a RAW file, and only discarded if you’ve configured the camera to only store the JPEG image (discarding the original RAW file after processing).
Retaining only JPEG is the default configuration on all current-generation Sony and Canon mirrorless cameras: you have to go out of your way to persist RAW.
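One way to actually check: RAW files from the major makers embed a JPEG preview, which is typically what the camera (and most file browsers) display when you scroll through a card, and rawpy can pull it out. A small sketch; the file path is hypothetical.

    import rawpy

    # Extract the preview embedded in a RAW file, without demosaicing anything
    with rawpy.imread("DSC_0001.NEF") as raw:
        thumb = raw.extract_thumb()
        if thumb.format == rawpy.ThumbFormat.JPEG:
            with open("preview.jpg", "wb") as f:
                f.write(thumb.data)  # bytes are already JPEG-encoded
        elif thumb.format == rawpy.ThumbFormat.BITMAP:
            print("preview stored as an uncompressed bitmap")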
I’ve submitted multiple bug reports over the years using the Feedback app. And, to my surprise, not only did I receive a detailed response within a month or so, the issues were resolved.
Arguably, the opposite is true. Ars Technica and others have written about this extensively [0].
Having summarized results appear immediately with links to the sources is preferable to opening multiple tabs and sifting through low-quality content and clickbait.
Many real-world problems aren't as simple as "type some keywords" and get relevant results. AI excels as a "rubber duck", i.e., a tool to explore possible solutions, troubleshoot issues, discover new approaches, etc.
Yes, LLMs are useful for junior developers. But they're even more valuable for experienced developers.
It's a tool, just like search engines.
Airplanes are also a tool. Would you limit your travel to destinations within walking distance? Or avoid checking the weather because forecasts use Bayesian probability (and some mix of machine learning)? Or avoid power tools because they deny the freedom of doing things the hard way?
One can imagine that when early humans began wearing clothing to keep warm, there were naysayers who preferred to stay cold.
The most creative people I know are using AI to further their creativity. Examples: storytelling, world building, voice models, game development, artwork, assistants that mimic their personality, helping loved ones enjoy a better quality of life as they age, smart home automations to help their grandmother, text-to-speech for the visually impaired or those who have trouble reading, custom voice commands, and so on.
Should I tell my mom to turn off Siri and avoid highlighting text and tapping "Speak" because it uses AI under the hood? I think not.
They embrace it, just like creative people have always done.
Socrates had a skeptical view of written language, preferring oral communication and philosophical inquiry. This perspective is primarily presented through the writings of his student, Plato, particularly in the dialogue Phaedrus.
I confirmed that from my own memory via a Google AI summary, quoted verbatim above. Of course, I would never have learned it in the first place had somebody not written it down.
> Socrates had a skeptical view of written language, preferring oral communication and philosophical inquiry. This perspective is primarily presented through the writings of his student, Plato, particularly in the dialogue Phaedrus.
He did not. You should read the dialogue.
> I confirmed that from my own memory via a Google AI summary, quoted verbatim above.
This is the biggest problem with LLMs in my view. They are great at confirmation bias.
In Phaedrus 257c–279c, Plato portrays Socrates discussing rhetoric and the merits of writing speeches, not writing in general.
"Socrates:
Then that is clear to all, that writing speeches is not in itself a disgrace.
Phaedrus:
How can it be?
Socrates:
But the disgrace, I fancy, consists in speaking or writing not well, but disgracefully and badly.
Phaedrus:
Evidently."
I mean, writing had existed for 3 millennia by the point this dialogue was written.
I used Spacemacs for years and recommend it to others. It was fantastic in the early days, but the stability seems to have diminished. I encountered more bugs over time that I'd have to troubleshoot and fix myself.
I switched to Doom Emacs a couple of years ago. It's well-maintained with regular updates, fantastic language support, and it's lightning fast. The CLI tooling is also nice (e.g., you can run 'doom upgrade' to update everything, or 'doom doctor' if you encounter an issue).
It's the closest equivalent to VS Code in terms of working out of the box. Not to mention the advantages of Emacs with Vim keybindings. There is a learning curve, but the GitHub documentation is excellent.
Adding support for Ruby on Rails development, for example, is as simple as uncommenting the '(ruby +rails +lsp)' line in '~/.config/doom/init.el' (or '~/.doom.d/init.el'), and then running 'doom sync'. There's a long list of supported languages and tooling [0].
Musk was heavily involved in the engineering efforts at SpaceX. NASA struggled to keep up with his continual and extremely detailed stream of questions regarding engineering choices, with an obsessive and relentless focus on blueprints.
The list of engineering achievements and innovations (often unorthodox) is too long to list in a comment, but I highly recommend the book The Space Barons by Christian Davenport [0]. It's a fantastic read.