> Like any experienced controls engineer, I spent a few days flipping the signs of various signals before I got them right.
As somebody with an M.Sc. in controls & signal processing (who ended up doing far more boring things), I always figured that I was doing that because I wasn't experienced enough. Turns out I also had the sign wrong on that one all along!
This reminds me of a time in an interview when I realized I had an off-by-one error. Instead of actually trying to understand the source of the error, I just speed-ran tweaking different values by ±1 until I got the expected result.
In the moment I felt quite clever, but the interviewer apparently did not appreciate that technique.
Love to hear it. Interviewing sucks from all perspectives, so as an interviewer I try to give leeway for the stress and time pressure the interviewee is under. But the key thing is that they must explain the reasoning behind their approach, and "I don't have time to dig into this" will get you a long way, at least with me.
(Obviously there are nuances, but there is not enough space in this comment to elaborate)
If you fail an interview because the interviewer was unreasonable, sometimes you should be thankful and think of it as a bullet dodged.
Yes, an otherwise good company might have a bad interviewer, but if they don't have some feedback loop to catch it, chances are there are other things they also don't catch.
Had an interview once over Zoom. Dude has me do a coding test: normal String manipulation algorithmic problem solving stuff. So I’m screen sharing and try to google something and he’s like “STOP! What are you doing?” I’m like “well I don’t remember the exact name of the function” and he’s like “You can’t do that, this is a Test!!”
So I flunked his “test” and then immediately contacted the recruiter and said “no thanks” before he could muahahaha
I mean, in this case… it is nice to get an interviewer that lets stuff slip, but brute-forcing bugs in an interview problem doesn’t seem very good, right?
Many years ago, before a demo, we discovered that a program failed to run properly the first time but ran successfully the next time. We couldn't figure out why and the clock was ticking. We made a script to run it twice, fail then succeed, and the demo went fine. Then we debugged the issue, which turned out to be something trivial (I can't remember the details), but proper debugging and fixing under time pressure is never trivial.
So, an interviewer might appreciate that balancing of signs if told that it is the fastest way to make it work (and pass your demo), with a proper fix to follow later. Once you've proven that the hacked code gives the right solution, you could offer to send them the corrected code the next day, or keep working on it if they wished.
Long ago in high school, I entered a LEGO robot competition with some friends. Tests were line following, collision detection, etc. One of the tests involved the bot being sent on a collision course with a wall. It had to detect running into it and turn around. This was one of the easiest ones to complete, but shortly before the test started we realized that our pressure sensor was malfunctioning and didn't send any more signals. There was no more time to swap it out, I don't even think we had a spare to be honest.
Not wanting to give up points on an easy test, we gauged the distance the bot had to cover in the test, and quickly uploaded some new software. At the start of the test, our bot moved forward for 4 seconds, stopped, then turned around. Full points on that one!
Sometimes things just need to work, and we can worry about them working _correctly_ later...
Damn you beat me to it, I knew this felt like a Monty Python skit...
>“Nobody expects the Spanish Inquisition! Our chief weapon is surprise... surprise and fear... fear and surprise... Our two weapons are fear and surprise... and ruthless efficiency.... Our three weapons are fear, and surprise, and ruthless efficiency... and an almost fanatical devotion to the Pope... Our four... no... Amongst our weapons... Amongst our weaponry... are such elements as fear, surprise... I'll come in again.”
> Instead of actually trying to understand the source of the error, I just speed-ran tweaking different values by ±1 until I got the expected result.
That's perfectly valid when one knows a specific step or result must be positive or negative.
Not that different from dimensional analysis, which speedruns you to the proper formula (at the cost of skimming over dimensionless constants).
Similarly, an interviewer was not impressed when they cut me short and started walking me through some step, and I pointed out that their result was obviously wrong because it was dimensionally inconsistent, and that if they hadn't cut me off, the formula must have been something like baz*foo/bar^2, leaving us only to figure out the constants.
Honestly it wouldn't surprise me if it had been invented multiple times independently; it's just too obvious to anyone who has had to deal with calculations.
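The dimensional-analysis speedrun can even be mechanized. Here's a toy illustration (my own sketch, not anything from the thread): track each quantity's dimensions as exponent tuples and reject any candidate formula whose sides don't match.

```python
# Toy dimensional-analysis check (illustrative sketch, not a real library):
# each quantity's dimensions are (length, mass, time) exponents.

def dims_mul(a, b):
    """Dimensions of a product: exponents add."""
    return tuple(x + y for x, y in zip(a, b))

def dims_div(a, b):
    """Dimensions of a quotient: exponents subtract."""
    return tuple(x - y for x, y in zip(a, b))

MASS = (0, 1, 0)           # kg
VELOCITY = (1, 0, -1)      # m/s
ACCELERATION = (1, 0, -2)  # m/s^2
FORCE = (1, 1, -2)         # kg*m/s^2 (newton)

# F = m * a is dimensionally consistent...
assert dims_mul(MASS, ACCELERATION) == FORCE
# ...while F = m * v is not, so that candidate is rejected immediately.
assert dims_mul(MASS, VELOCITY) != FORCE
```

Only the dimensionless constants survive such a check, which is exactly the caveat mentioned above.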
(But thanks for sharing what us without a Twitter account cannot see otherwise anymore…)
It was a common saying 30 years ago when we were preparing for IT Olympiads in Eastern Europe: if you make an even number of sign errors, chances are your program will work OK.
We would joke in physics class that you either needed to memorize the right-hand rule or, if you were holding a pencil in your right hand during the test, the negated left-hand rule.
(because students holding their pencil during the exam would often use the wrong hand when working out the curl)
The tricky bit is when you have multiple interacting systems which see different combinations of those sign errors: it makes it a lot more important to know where all the sign flips should actually be. For example, it's quite easy to make a PID controller where the D term is actually the opposite sign to the others.
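As a hedged sketch of that pitfall: in a textbook discrete PID the D term differentiates the error, so a variant that differentiates the measurement instead (a common trick to avoid derivative kick) must flip the D term's sign, because with a constant setpoint d(error)/dt = -d(measurement)/dt. The code below is illustrative only; the names are mine.

```python
# Minimal discrete PID step (illustrative sketch; names are hypothetical).

def pid_step(kp, ki, kd, setpoint, meas, prev_error, integral, dt):
    """One update of a textbook PID that differentiates the *error*."""
    error = setpoint - meas
    integral += error * dt
    derivative = (error - prev_error) / dt  # d(error)/dt
    output = kp * error + ki * integral + kd * derivative
    return output, error, integral

# With a constant setpoint, d(error)/dt == -d(meas)/dt, so a variant that
# differentiates the measurement must negate its D term. Getting that one
# sign wrong quietly turns damping into positive feedback.
```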
This is especially common with a fresh USB port. I learned why recently from the book Open Circuits: The Inner Beauty of Electronic Components. There is a mechanical component that is more stiff before being used, which can lead one to think that they simply had the USB connector the wrong way around.
I highly highly recommend the book by the way, to anyone on HN. Goes all the way from mechanical components to resistors, nixie tubes, CMOS sensors, processors etc. Excellent photography that reveals the internals, along with operation explanations and history for each component.
Fun fact: part of the licensing agreement to use USB is to have the USB symbol on top of the connector. So unless you're using a cheap unlicensed cable, look for the symbol facing up and you'll always be correct.
After all this time I'm not going to trust a decoration.
When I'm connecting a USB cable horizontally I always think "hol up". So I always remember to plug it in with the hole up.
When I'm connecting it vertically I think "hold right". So I know that if I want to hold it right, I have to put the hole right.
Has worked for me so far.
EDIT: Though "left" and "right" are more vague compared to "up" and "down", so connecting vertically assumes a specific "mental point of view". I'm just using one that's intuitive for me, but it might not work for everyone.
People think the USB-A port looks symmetrical. The way I do it is I look inside the connector and note which side the pins are before trying to plug it in 3 times.
As somebody who has practically no post-secondary education and just likes to tinker in the garage, I thought it might be nice to get an education of some sort so I could stop wasting time doing stuff like flipping signs till it works.
I went through an EE associates program at my local community college after I’d graduated with a BA, purely because I wanted to know how to fix/modify/tinker with audio equipment. I went at night, part time, and it was easily one of the best experiences of my life. Like yourself, I had some amateur experience already, and as the gaps were filled I had at least 4-5 truly paradigm shifting moments where multiple concepts finally clicked together and whole swaths of the world suddenly made sense. I found elegant connections in the physics, familiar logic applications, and gained a lot of insight that I hadn’t expected, which helped reinforce my sense of just how little I will ever really know.
Even though the purpose for taking those classes was largely personal, it’s materially contributed to my career in a lot of indirect ways (I’m an IT consultant and Linux sysadmin) by giving me a unique perspective for how things function on a much lower level than my colleagues who just have CS educations. I can troubleshoot Wi-Fi and signal transmission issues using spectrum and vector network analysis which makes everyone look at me as if I were a witch. I am comfortable disassembling and repairing equipment like scanners and commercial printers, with lots of moving parts and mains electricity that other techs won’t touch.
All that said, I highly recommend attending an EE program, and even though you'll feel like a big, wrinkled-brain smart person—you will never stop turning it off and on again, bit-bashing, or sign flipping—all quite valid techniques for when years of diligent study and experience lose out to fat fingers and poor eyesight.
EE is very rewarding. I think it is second only to physics and nuclear engineering in terms of how deep it goes into physics.
You also cover a wide array of topics: hardware and how it works, a lot of systems programming (e.g. real-time operating systems, kernels, device drivers), some computer science theory (mainly automata theory and concurrency models like Petri nets), signal processing (which includes audio), and heavy yet extremely beautiful math topics such as control theory.
You are not alone. I've come to terms with the reality that every controller I've designed and implemented will always need a good amount of unit test coverage to ensure proper behavior (like signs and directions)...
Is there a term for this systematic approach? I do it too, in software, and home in on the right behavior using unit tests, especially to account for idiosyncratic off-by-one errors.
Basically: get the structure right and then re-align the implementation to meet the expected behavior.
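A minimal sketch of that workflow (hypothetical example, mine): write the assertion for the behaviour you want first, then adjust the ±1 in the implementation until it passes.

```python
# The +1 below is exactly the kind of constant you tweak until the
# behaviour-pinning assertions pass (illustrative sketch only).

def inclusive_range(start, stop):
    """Return the integers start..stop, inclusive of both ends."""
    return list(range(start, stop + 1))

# Tests that pin the expected behaviour, including the edge case:
assert inclusive_range(1, 3) == [1, 2, 3]
assert inclusive_range(5, 5) == [5]
```

The structure (a `range` call) is right from the start; the tests re-align the implementation detail (the offset) to the expected behaviour.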
When studying electrical engineering (during all the advanced control theory math and stuff) we were told that this is the official way to do it. If your PID controller doesn't work at first, second or third attempt, you don't run for your math books, you tweak it and try again until it works.
Interesting. Whenever I encounter something with boolean behaviour, I decide upfront that it will be easier and less time-consuming to just test it than to build a mental model. However, I have no problem modelling tree searching or A* stuff. It just seems I never developed neurons with just two outcomes.
I spent many days trying to troubleshoot some HP/GL2 plotter code in the distant past. I eventually concluded that the real problem was with the implementation--I was working with code that was written by others and went a little crazy with coordinate transforms. Oops--worked as expected on one non-HP plotter. Drew the image inverted on a HP plotter. The HP implementation appeared to break if you flipped the world too many times. That was a *long* time ago, my memory is fuzzy by now.
(And in later days I saw a firmware update for a laser printer cause it to spew gibberish when fed embedded HP/GL2 code. This was in the era where there were still DOS programs running under Windows and somebody didn't check that it still worked right.)
Some things certainly can be modelled, but for others it is easier to simply try. For example, will applying a positive current to the motor make it spin in the clockwise or counterclockwise direction? It really depends on the behaviour and configuration of the motor controller, and in this case it was easier for me to just try.
The trick is to do these tests at a sufficiently low level, because that’s usually where these issues are, in my experience.
I think it's possible. But you'd have to be really careful with your equations, then have to be really careful to know which direction is positive in each signal and make sure to make the wiring and/or math match that. I can see why some trial and error would be easier than completely rechecking things when they don't work.