Thank you for your perspective. As a machine vision engineer in the semiconductor industry, I have seen a lot of hype around deep learning and AI for vision applications. From my experience, deep learning works well for OCR but less so for classification tasks.

I often achieve better results by focusing on good lighting and using classical computer vision techniques.

I agree with your point about the politics of technology adoption. To protect my career, I usually promote hybrid approaches that combine deep learning and traditional computer vision methods. In reality, many deep learning solutions still rely heavily on classical techniques. Your comments on political challenges and decision-making in technology are very relevant to my experience.


Yes, hybrid approaches and ensembles would be a good way to handle this.


Yes, that's correct about OCR. I work as a machine vision engineer in the semiconductor industry, where each wafer usually has both OCR text and machine-readable codes such as barcodes, QR codes, or data matrix codes. The OCR typically uses the SEMI font standard.

To achieve accurate OCR results, I need to preprocess the image by isolating each character, sorting them from left to right, and using regular expressions (regex) to verify the output. However, I prefer machine-readable codes because they are simpler to use, feature built-in error detection, and are much more reliable. While deep-learning OCR solutions often perform well, they cannot guarantee the 100 percent accuracy required in our applications.
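To make that concrete, here is a minimal Python sketch of the sort-and-verify step. The detections and the ID pattern are invented placeholders; the real SEMI format check is more involved than this regex.

```python
import re

# Hypothetical OCR detections: (character, bounding box (x, y, w, h));
# in practice these come from blob analysis on the preprocessed wafer image
detections = [
    ('A', (120, 40, 18, 30)),
    ('3', (60, 42, 18, 30)),
    ('W', (30, 41, 18, 30)),
    ('F', (90, 40, 18, 30)),
]

# Sort the isolated characters left to right by bounding-box x coordinate
ordered = sorted(detections, key=lambda d: d[1][0])
text = ''.join(ch for ch, _ in ordered)

# Verify the assembled string against an assumed ID format (illustrative
# only, not the actual SEMI specification)
pattern = re.compile(r'^[A-Z0-9]{4}$')
assert pattern.fullmatch(text), f'OCR output {text!r} failed format check'
```

The regex acts as a cheap sanity gate: any recognition error that breaks the expected format is rejected before it reaches downstream systems.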

This approach is similar to how e-wallet payments use cameras to scan QR codes instead of OCR text, as QR codes provide greater reliability and accuracy.


Background Context: I am a machine vision engineer working with the Halcon vision library and HDevelop to write Halcon code. Below is an example of a program I wrote using Halcon:

* Generate a tuple from 1 to 1000 and name it 'Sequence'
tuple_gen_sequence (1, 1000, 1, Sequence)

* Replace elements in 'Sequence' divisible by 3 with 'Fizz', storing the result in 'SequenceModThree'
tuple_mod (Sequence, 3, Mod)
tuple_find (Mod, 0, Indices)
tuple_replace (Sequence, Indices, 'Fizz', SequenceModThree)

* Replace elements in 'Sequence' divisible by 5 with 'Buzz', storing the result in 'SequenceModFive'
tuple_mod (Sequence, 5, Mod)
tuple_find (Mod, 0, Indices)
tuple_replace (SequenceModThree, Indices, 'Buzz', SequenceModFive)

* Replace elements in 'Sequence' divisible by 15 with 'FizzBuzz', storing the final result in 'SequenceFinal'
tuple_mod (Sequence, 15, Mod)
tuple_find (Mod, 0, Indices)
tuple_replace (SequenceModFive, Indices, 'FizzBuzz', SequenceFinal)

Alternatively, this process can be written more compactly using inline operators:

tuple_gen_sequence (1, 1000, 1, Sequence)
tempThree := replace(Sequence, find(Sequence % 3, 0), 'Fizz')
tempFive := replace(tempThree, find(Sequence % 5, 0), 'Buzz')
FinalSequence := replace(tempFive, find(Sequence % 15, 0), 'FizzBuzz')

In this program, I applied a vectorization approach, which is an efficient technique for processing large datasets. Instead of iterating through each element individually in a loop (a comparatively slower process), I applied operations directly to the entire data sequence in one step. This method takes advantage of Halcon's optimized, low-level implementations to significantly improve performance and streamline computations.
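For readers without Halcon, the same vectorized idea can be sketched in NumPy. This is an illustrative translation, not HDevelop code:

```python
import numpy as np

# Generate the sequence 1..1000; use an object array so strings can be mixed in
sequence = np.arange(1, 1001)
result = sequence.astype(object)

# Boolean masks computed over the whole array at once, no per-element loop.
# Order matters: multiples of 15 are overwritten last.
result[sequence % 3 == 0] = 'Fizz'
result[sequence % 5 == 0] = 'Buzz'
result[sequence % 15 == 0] = 'FizzBuzz'
```

As in the Halcon version, each replacement is a single whole-array operation backed by an optimized low-level loop, rather than an interpreted per-element iteration.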


Ah, I think you work in the same industry as me, machine vision. I completely agree with you; most applications use grayscale images unless it's a color-based application.

Which vision library are you using? I’m using Halcon by MVTec.


I used to work in industrial automation; I was mostly making the process control equipment that your stuff would plug into, PLCs and whatnot. We had a close relationship with Cognex, but I don't remember the exact details of their software stack.


You're absolutely right, deep learning OCR often delivers better results for complex tasks like handwriting or noisy text. It uses advanced models like CNNs or CRNNs to learn patterns from large datasets, making it highly versatile in challenging scenarios.

However, if I can’t understand the system, how can I debug it if there are any issues? Part of an engineer's job is to understand the system they’re working with, and deep learning models often act as a "black box," which makes this difficult.

Debugging issues in these systems can be a major challenge. It often requires specialized tools like saliency maps or attention visualizations, analyzing training data for problems, and sometimes retraining the entire model. This process is not only time-consuming but also may not guarantee clear answers.


No matter how much you tinker and debug, classical methods can’t match the accuracy of deep learning. They are brittle and require extensive hand-tuning.

What good is being able to understand a system if this understanding doesn’t improve performance anyway?


I agree, deep learning OCR often outperforms traditional methods.

But as engineers, it’s essential to understand and maintain the systems we build. If everything is a black box, how can we control it? Without understanding, we risk becoming dependent on systems we can’t troubleshoot or improve. Don’t you think it’s important for engineers to maintain control and not rely entirely on something they don’t fully understand?

That said, there are scenarios where using a black-box system is justifiable, such as in non-critical applications where performance outweighs the need for complete control. However, for critical applications, black-box systems may not be suitable due to the risks involved. Ultimately, what is "responsible" depends on the potential consequences of a system failure.


This is a classic trade-off and the decision should be made based on the business and technical context that the solution exists within.


It really depends on the application. If the illumination is consistent, such as in many machine vision tasks, traditional thresholding is often the better choice. It’s straightforward, debuggable, and produces consistent, predictable results. On the other hand, in more complex and unpredictable scenes with variable lighting, textures, or object sizes, AI-based thresholding can perform better.

That said, I still prefer traditional thresholding in controlled environments because the algorithm is understandable and transparent.

Debugging issues in AI systems can be challenging due to their "black box" nature. If the AI fails, you might need to analyze the model, adjust training data, or retrain, a process that is neither simple nor guaranteed to succeed. Traditional methods, however, allow for more direct tuning and certainty in their behavior. For consistent, explainable results in controlled settings, they are often the better option.


Not to mention performance. So often, the traditional method is the only thing that can keep up with performance requirements without needing massive hardware upgrades.

Counterintuitively, I've often found that CNNs are worse at thresholding in many circumstances than a simple Otsu or adaptive threshold. My usual technique is to use the least complex algorithm and work my way up the ladder only when needed.
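For reference, Otsu's method itself is only a few lines. Here is a self-contained NumPy sketch; the synthetic test image and the helper name are illustrative, not from any particular library:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                # class-0 probability up to each t
    mu = np.cumsum(prob * np.arange(256))  # cumulative mean up to each t
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold at once
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[np.isnan(sigma_b)] = 0
    return int(np.argmax(sigma_b))

# Synthetic bimodal image: dark background near 50, bright square near 200
rng = np.random.default_rng(0)
img = np.clip(rng.normal(50, 10, (64, 64)), 0, 255).astype(np.uint8)
img[16:48, 16:48] = np.clip(rng.normal(200, 10, (32, 32)), 0, 255).astype(np.uint8)

t = otsu_threshold(img)
mask = img > t
```

The entire method is a histogram pass plus an argmax, which is why it keeps up with throughput requirements that make per-image CNN inference impractical.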


I am usually working with historical documents, where both Otsu and adaptive thresholding are frustratingly almost but not quite good enough. My go-to approach lately is "DeepOtsu" [1]. I like that it combines the best of both the traditional and deep learning worlds: a deep neural net enhances the image such that Otsu thresholding is likely to work well.

[1] https://arxiv.org/abs/1901.06081


Ok, those are impressive results. Nice addition to the toolbox.


Something I've had a lot of success with (in cases where you're automating the same task with the same lighting) is having a human operator manually choose a variety of in-sample and out-of-sample regions, ideally with some of those being near real boundaries. Then train a (very simple -- details matter, but not a ton) local model to operate on small image patches and output probabilities for each pixel.

One fun thing is that with a simple model it's not much slower than techniques like otsu (you're still doing a roughly constant amount of vectorized, fast math for each pixel), but you can grab an alpha channel for free even when working in colored spaces, allowing you to near-perfectly segment the background out from an image.

The UX is also dead-simple. If a human operator doesn't like the results, they just click around the image to refine the segmentation. They can then apply directly to a batch of images, or if each image might need some refinement then there are straightforward solutions for allowing most of the learned information to transfer from one image to the next, requiring much less operator input for the rest of the batch.

As an added plus, it also works well even for gridlines and other stranger backgrounds, still without needing any fancy algorithms.
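A toy version of that idea, using raw pixel color as the feature (a real setup might use a small patch neighborhood and more operator clicks). Everything here, image included, is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic color image: green-ish background, red-ish object in the centre
img = np.zeros((32, 32, 3))
img[..., 1] = 0.8                        # background: green
img[8:24, 8:24] = [0.9, 0.1, 0.1]        # object: red
img += rng.normal(0, 0.05, img.shape)    # camera noise

# "Operator clicks": a few labelled pixels inside and outside the object
fg = [(10, 10), (16, 16), (20, 12)]
bg = [(2, 2), (30, 30), (2, 28)]
X = np.array([img[r, c] for r, c in fg + bg])
y = np.array([1] * len(fg) + [0] * len(bg))

# Very simple local model: logistic regression on per-pixel color,
# trained with plain gradient descent
w = np.zeros(3)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 1.0 * (X.T @ (p - y) / len(y))
    b -= 1.0 * np.mean(p - y)

# Apply to every pixel at once: a per-pixel probability map (usable as alpha)
logits = img.reshape(-1, 3) @ w + b
prob = (1 / (1 + np.exp(-logits))).reshape(32, 32)
mask = prob > 0.5
```

As the comment notes, inference is just a constant amount of vectorized math per pixel, so it runs in the same ballpark as Otsu, and `prob` doubles as a soft alpha channel for segmenting the background out.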


I’ve done that too. In essence it kinda sorta comes down to a small convolution kernel with learned weights.

In some places it works really well.


Thank you for sharing this. The part about Warren Buffett and the contrast with hustle culture is particularly delightful. It highlights the importance of competence and meaningful leadership over performative busyness.


Hopefully, your view is in the minority. If this mindset becomes prevalent in the US, nothing new will ever be invented, and no new regions of space will be explored.

Modern moon exploration isn’t about repeating Apollo but progressing toward resource extraction and establishing humanity’s long-term presence in space. These missions are designed to achieve goals that were previously impossible and lay the foundation for humanity’s future beyond Earth.


> humanity’s future beyond Earth.

Yes but why?

It's cool that we can learn about what's around us, but in practice we're light years away from being interplanetary, we just can't afford it and our energy sources are laughable.

Realistically speaking, how far are we really from "moon travel" that is both remotely affordable and worth the trip?


"Yes, but why?" If humans had never ventured beyond perceived limits, like crossing oceans or building planes, where would we be today?

"We’re light-years away from being interplanetary; it’s too costly and our energy is laughable." If people doubted the Wright brothers or mocked the idea of landing on the Moon, should we have stopped trying?

"How far are we from affordable Moon travel that’s worth it?" Humanity thrives when it takes risks and embraces exploration. Space is where the next wave of innovation and opportunity lies, and waiting for "perfect timing" ensures we stay stagnant while others move ahead. Why choose doubt over progress?


We have no lack of spending opportunities for "progress"; there are dozens of promising research fields, and the resources we can realistically invest are limited.

Most historical progress was driven and motivated by incremental gains; exploration as an end in itself was not even enough to get Columbus funded, and big space projects are much more resource-intensive than that.

> Space is where the next wave of innovation and opportunity lies.

That's just, like, your opinion. I consider this extremely unlikely; to me, the most promising fields short and mid-term are AI and synthetic biology. Space exploration does not even come close-- even if we magically gained the capability to build large scale, self-sufficient cities on Mars and populated them with millions of people (which is extremely unlikely to happen in the next decades)-- what does that do for us? What progress do we gain? If you want to build habitats in unlivable, hostile environments, you can just as well do this in Antarctica, some desert or the deep sea, and I'd consider that likewise mostly an exercise in futility.

edit: To make my position a bit clearer: I think it's fine to invest "reasonably" in space exploration; the current moon project I'd consider mostly a waste, but still somewhat justifiable. But spending twice or more of what NASA currently costs on Moon or Mars base projects would be a non-justifiable waste in my eyes.


1. What is a "reasonable cost," and who decides?

Reasonable cost is subjective, but NASA’s budget provides perspective. At 0.4 percent of the US federal budget, it amounts to just 27 billion dollars in 2023, while the defense budget is 842 billion dollars, or 13 percent of annual spending. Redirecting just 5 percent of defense funding, about 40 billion dollars, would more than double NASA's budget and allow for significant progress on Moon and Mars projects. This minor reallocation would not impact national security, making space exploration both affordable and worthwhile. When we consider the technological, scientific, and economic benefits, investing in space stands out as a smart, future-focused decision.

2. Are there any minerals on the Moon worth exploring?

The Moon holds valuable resources like helium-3 for clean fusion energy, water ice for fuel and life support, and rare earth metals for advanced technologies. Helium-3 could power nuclear fusion reactors and potentially yield trillions of dollars in energy benefits. Water ice can be converted into hydrogen and oxygen, creating rocket fuel that reduces reliance on costly Earth resupplies for space missions. Mining rare earth metals on the Moon could also lessen our dependency on Earth’s finite resources and help minimize ecological damage caused by terrestrial mining. The long-term financial value of these resources far outweighs the costs of extracting them.

3. Will Moon and Mars bases actually double NASA’s existing budget?

This claim is incorrect. The Artemis program, for example, is projected to cost 93 billion dollars over more than ten years, with yearly spending far below doubling NASA’s current 27 billion dollar budget. Additionally, technologies like reusable rockets, such as SpaceX’s Starship, have lowered launch costs by 90 percent, making Moon and Mars exploration increasingly achievable. With international collaborations and private investment, developing these projects is far less expensive than critics often assume, and will not significantly burden taxpayers.

4. What about other technologies, like AI or synthetic biology?

While AI and synthetic biology can offer exciting short-term benefits, they focus on Earth-based solutions and neglect humanity's long-term survival. Space exploration addresses critical long-term challenges, such as resource scarcity, reducing dependence on Earth, and avoiding extinction-level threats. Unlike efforts in Earth’s hostile environments like Antarctica or the deep sea, Moon and Mars exploration unlock completely new resources and pathways for innovation. Delaying investment in space exploration risks stagnating progress, and waiting for the "perfect time" could mean missing transformative opportunities that secure humanity's future.


1) Reasonable cost is what taxpayers/voters are willing to give. If you want a $100bn NASA budget, you are basically asking every American for $200/y. If you made that optional, I'd argue that a lot (most) Americans would not be willing to pay.

2) I see no probable route for fusion reactors to become a competitive source of terrestrial electricity for at least the next 50 years and possibly never; without that, Helium-3 is mostly worthless (even if your fusion bet works out, you rely on an approach winning that actually needs He3 instead of breeding its own Tritium). For everything else, I don't see extraterrestrial mining being able to compete with current prices, and any significant influx would have it crash/undermine its own market (e.g. we only extract hundreds of tons of palladium globally, per year; doubling the supply would have a major effect on price).

3) I'd argue that current Moon/Mars projects are mostly ineffective showmanship/PR. If you actually wanted somewhat self-sustaining settlements/industry within the century, costs would easily eclipse our current defense budget, and without demonstrating the ability to build that on Earth first, the whole thing would not be credible anyway.

Our current approach to manufacturing (post industrialization) is totally incompatible with self-sustaining colonies, too. There is nothing we could realistically achieve on moon or mars even in a century that is anywhere close to self-sustaining, without basically reinventing how we build things.

So from a risk mitigation point of view the whole endeavour is useless, too (this might change within a century-- synthetic biology specifically would be very promising here).


Reasonable Cost

1. You didn’t address my main argument: reallocating 5% of the U.S. defense budget to NASA could double its budget without raising taxes. Instead, you reframed it as additional taxation. My point is about redistributing current resources, not increasing taxpayer obligations.

2. Do you believe reallocating 5% of defense spending would harm national security? Or could it be a reasonable way to reprioritize national spending towards long-term scientific advancement?

Moon Resources

1. You claim extraterrestrial mining could "crash the market," but cheaper, abundant resources typically foster innovation and develop new industries (e.g., space-based solar power or advanced batteries), which could benefit consumers. Can you provide examples where resource surpluses caused economic collapse instead of creating opportunities?

2. You argue helium-3 is "mostly worthless" because fusion is 50+ years away. However, companies like Helion Energy predict commercial fusion by the 2030s, and technologies like aneutronic fusion could make helium-3 a critical resource. What specific evidence supports your lengthy timeline?

Effectiveness and Feasibility of Moon/Mars Projects

1. You claim Moon/Mars projects would exceed the defense budget but provide no data. NASA’s Artemis program, for example, is projected to cost $93B over a decade, far below $842B in annual U.S. defense spending. What data supports your claim of higher costs?

2. Reusable rockets, such as SpaceX’s Starship, have already reduced launch costs by up to 90%, directly countering your cost concerns. Why did you not address this?

3. Advancements in in-situ resource utilization (ISRU), 3D printing, and automated production are already paving the way for sustainable off-world colonies. Why do you dismiss these technologies entirely when critiquing the concept of self-sustainability?

4. While you note "showmanship" is a factor, history shows symbolic exploration fuels technological advancement. Apollo, for example, spurred breakthroughs in computing, communications, and materials science. Moon/Mars exploration could provide similar transformative benefits.

Comparison to AI and Synthetic Biology

1. You claim synthetic biology is more promising than space exploration, but can you provide evidence to support this? Space exploration directly addresses existential risks like resource scarcity and planetary threats.

2. Do you agree that space research fuels advancements in robotics, AI, and materials science, which vastly benefit Earth and humanity’s long-term survival? Why can’t space exploration and other emerging technologies work together to create a stronger foundation for humanity’s future?

3. Delaying space exploration may result in lost opportunities for innovation that could directly impact Earthly and extraterrestrial problems.

Conclusion

You raise important points, but much of your argument lacks supporting evidence and is based on speculation. I encourage further consideration of current research and advancements like reusable rockets, ISRU, and fusion energy, which prove the feasibility and value of space exploration. I appreciate your thoughts and look forward to continuing the discussion.


# Cost

National budget items have to stand on their own merit. I agree that the US overspends on defense, but "waste" in one place is no justification for "waste" elsewhere. You could apply the exact same argument to bloat any number of budget items in the 10 billion range, e.g. US foreign aid, and for a lot of those the humanitarian utility (and possibly even purely financial ROI) is much easier to argue than for a space program, too.

# Feasibility

The problem with any kind of space industry or self-sustaining settlement is that you have to get everything there first. How we currently build things is simply not amenable to remote bootstrapping at all, even disregarding the fact that many critical industrial inputs easily available on Earth are just... not... in space. Contrast this with biological life, which is much better in this respect because it relies on small, self-replicating building blocks for everything.

Self-sufficient colonies are currently completely out of reach. The same applies to space mining, indirectly. For those to be a credible next step, we would need to have some baseline industry already established, that would e.g. be capable of growing tens of tons of food (or refine tens of tons of aluminum per year) as a fundamental input. That is an unskippable step on the path to self-sufficiency, and an incredibly early one, too. But not only do we not have that right now, there are not even fleshed out concepts (much less projects) in the pipeline for this currently.

I confidently claim competitive mining (or independent settlements) are impossible in the next decades even with the full defense budget because there are too many intermediate steps missing that all the money in the world can not conjure up (=> see paragraph above for examples).

Cheaper launch costs or 3D printers change absolutely nothing; the problem with doing anything in space is that it costs you more than its own weight in fuel (in practice: many multiples) to get anything there, and I see no realistic paths to get "overhead costs" (separate from fuel) lower than for, say, air travel.

If you had to build a self-sufficient industry on Earth, that would already be incredibly challenging for any non-trivial industrial output (just think: how large is the total footprint of every industry involved in building your keyboard alone, or your phone? Sure, there is some potential for consolidation, but MUCH less than you would like without changing everything fundamentally).

If you had to pay for every single ton of material/personnel to be flown like 5 times around the full equator just to get there, it would be impossible to achieve self-sufficient industry economically, even on earth, in atmosphere, with human workforce and finetuned processes and a lot of other helpful inputs we won't have in space, and this calculus is unlikely to change anytime soon.

# Effectiveness

Even at best, say you have a massive space mining industry and self-sufficient cities on Mars by the end of the century (again: this is a complete pipe dream): what does that actually change? What does it get us? Basically nothing. We have a ton of problems, but doubling the available iron, aluminium, or electric energy is not gonna solve any of those. If resource allocation is fundamentally broken, multiplying the input side simply won't help at all.

As far as mitigating extinction threats goes: that's nice to have, but almost worthless, and you won't have any real benefit until the space colonies are fully self-sufficient. By worthless I mean: given some very conservative assumptions (an asteroid impact killing every single human every 50M years, $20M per statistical human life), the "extinction insurance" would be worth about $100M per year for the US-- not enough to pay for anything in space, really.

# Sidenote

Did not want to derail this into a fusion energy discussion, but Helion's marketing is obviously going to give the earliest timeline imaginable because they want investment dollars.

Consider critically: how far away are they from a design that can be built industrially/economically? It has to be several generations/iterations of prototype plants away (this is very obvious from what they have right now). If you compare their past timelines with the present, you will find that they were ridiculously overoptimistic and are far from finished, and this is gonna get progressively worse. Technical feasibility is one thing, but the economics are hard if you have to compete with panels of refined sand that harvest kilowatts of direct electrical power in a few square metres and cost less than a window of the same size...


Your latest response still does not provide any concrete data or evidence to support your claims about space exploration being a waste, self-sufficient colonies being impossible, or AI and synthetic biology being more promising alternatives.

I have already asked for specific data to back up your assumptions, but none has been provided.

Without evidence, this discussion remains purely speculative. I recommend looking into the significant advancements in reusable rockets, in-situ resource utilization, and fusion research before dismissing their potential. Unsupported claims about feasibility or value are simply unsubstantiated opinion.


Crossing oceans and building planes wasn't done for the sake of it, those goals were clearly useful.

The biggest problem of the moon mission isn't SLS, it's that the moon is a big dry ball of rock with nothing of any value or use there. There's literally no reason to go.


Crossing oceans and building planes were not always seen as clearly useful by everyone. Skeptics at the time dismissed them as dangerous, impractical, or unnecessary, yet those who pursued these goals unlocked advancements that transformed human history. The same applies to the Moon. It is far more than just a big, dry ball of rock; it contains highly valuable resources with practical potential.

For instance, the Moon has helium-3, a rare isotope that could one day power clean nuclear fusion energy, a trillion-dollar industry waiting to happen. Lunar water ice can be converted into hydrogen and oxygen for rocket fuel and life support, making sustainable space exploration feasible and reducing the need for costly Earth-based resources. The Moon also has rare earth metals that are vital for technology and renewable energy systems, helping us address resource scarcity and reduce the environmental damage caused by terrestrial mining.

We do not explore the Moon for its own sake. The point of space exploration is to create a foundation for future industries and innovation while solving long-term challenges, such as resource depletion and planetary risks. Given the enormous technological, economic, and environmental benefits these resources could provide, the Moon is far more than just a barren rock; it holds the key to securing humanity's future.


I love lookup tables. Thanks for sharing!


That’s a great question. While NNs are revolutionary, they’re just one tool. In industrial Machine Vision, tasks like measurement, counting, code reading, and pattern matching often don’t need NNs.

In fact, illumination and hardware setup are often more important than complex algorithms. Classical techniques remain highly relevant, especially when speed and accuracy are critical.


And, usually you need determinism, within tight bounds. The only way to get that with a NN is to have a more classical algorithm to verify the NN's solution, using boring things like least squares fits and statistics around residuals. Once you have that in place, you can then skip the NN entirely, and you're done. That's my experience.
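A sketch of that verification idea in NumPy, with synthetic edge points and an assumed application-specific tolerance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical edge points along a line y = 0.5 x + 2, with small sensor noise
# (in a real system these would come from the NN's proposed feature location)
x = np.linspace(0, 10, 50)
y = 0.5 * x + 2 + rng.normal(0, 0.02, x.size)

# Classical verification: least-squares line fit plus residual statistics
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - (slope * x + intercept)
rms = np.sqrt(np.mean(residuals ** 2))

# Accept the measurement only if the residuals stay within a tight bound
TOLERANCE = 0.1  # assumed, application-specific
ok = rms < TOLERANCE
```

The point of the comment holds here too: once the least-squares fit and the residual bound exist, they produce the deterministic answer on their own, and the NN stage becomes optional.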

