
> Like any experienced controls engineer, I spent a few days flipping the signs of various signals before I got them right.

As somebody with an M.Sc. in controls & signal processing (who ended up doing way more boring things), I always figured that I was doing that because I wasn't experienced enough. Turns out I also had the sign wrong on that one all along!



A lot of engineering is ensuring you're making an even number of sign errors.


This reminds me of how, one time in an interview, I realized I had an off-by-one error. Instead of actually trying to understand the source of the error, I just speed-ran tweaking different values +/- 1 until I got the expected result.

In the moment I felt quite clever, but the interviewer apparently did not appreciate that technique.
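
(For anyone who wants to mechanize the shame, a minimal C sketch; the check() oracle and the loop-bound bug are invented for illustration:)

    #include <stdio.h>
    #include <stdbool.h>

    /* Stand-in oracle: "does the output match the expected result?"
       Pretend the task was to sum the integers 1 through 10 (= 55). */
    static bool check(int start, int end) {
        int sum = 0;
        for (int i = start; i < end; i++) sum += i;
        return sum == 55;
    }

    int main(void) {
        int start = 1, end = 10;  /* buggy first guess: sums to 45 */
        for (int ds = -1; ds <= 1; ds++)
            for (int de = -1; de <= 1; de++)
                if (check(start + ds, end + de))
                    printf("passes with start=%d, end=%d\n",
                           start + ds, end + de);
        /* Note: more than one tweak may "pass", which is exactly why
           the interviewer was unimpressed. */
        return 0;
    }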


>I just speed-ran tweaking different values +/- 1 until I got the expected result

we call this "ML" now, and it pays extra


Really, the marketers seem to call it "AI".


in a job interview, it's ML. in a product, it's AI.


>in a job interview, it's ML. in a product, it's AI.

Oh. This is good. I'm using this!



What about a job interview between marketing folks?


I suspect they just call it the product.


Love to hear it. Interviewing sucks from all perspectives, so as an interviewer I try to give leeway for the stress and time pressure the interviewee is under. But the key thing is they must express their reasoning for their approach, and "I don't have time to dig into this" will get you a long way, at least under me.

(Obviously there are nuances, but there is not enough space in this comment to elaborate)


which makes interviewing all the more annoying because then your success depends on which interviewer you get


If you fail an interview because the interviewer was unreasonable, sometimes you should be thankful and think of it as a bullet dodged.

Yes, an otherwise good company might have a bad interviewer, but if they don't have some feedback loop to catch it, chances are there are other things they also don't catch.


Had an interview once over Zoom. Dude has me do a coding test: normal String manipulation algorithmic problem solving stuff. So I’m screen sharing and try to google something and he’s like “STOP! What are you doing?” I’m like “well I don’t remember the exact name of the function” and he’s like “You can’t do that, this is a Test!!”

So I flunked his “test” and then immediately contacted the recruiter and said “no thanks” before he could muahahaha


Such is every aspect of life, fortunately or unfortunately.


I mean, in this case… it is nice to get an interviewer that lets stuff slip, but brute-forcing bugs in an interview problem doesn’t seem very good, right?


Are you hiring? :)


Many years ago, before a demo, we discovered that a program failed to run properly but ran successfully the next time. We couldn't figure out why, and the clock was ticking. We made a script to run it twice, fail then success, and the demo was good. Then we debugged the issue, which was something trivial (I can't remember the details), but proper debugging and fixing under time pressure is never trivial.

So, an interviewer might appreciate that balancing of signs if told that it is the fastest way to make it work (and pass your demo), only to fix it later. Once you've proven that the hacked code gives the right solution, maybe you could offer to send them the correct code the next day, or to keep working on it if they wished.


Long ago in high school, I entered a LEGO robot competition with some friends. Tests were line following, collision detection, etc. One of the tests involved the bot being sent on a collision course with a wall; it had to detect running into it and turn around. This was one of the easiest ones to complete, but shortly before the test started we realized that our pressure sensor was malfunctioning and no longer sent any signals. There was no time left to swap it out; I don't even think we had a spare, to be honest.

Not wanting to give up points on an easy test, we gauged the distance the bot had to cover in the test, and quickly uploaded some new software. At the start of the test, our bot moved forward for 4 seconds, stopped, then turned around. Full points on that one!

Sometimes things just need to work, and we can worry about them working _correctly_ later...


You... Literally made a test defeat device.

In other words, you pulled a Dieselgate, in LEGO form.

Were I to judge your implementation, not only would you have sacrificed those points, I'd have disqualified you from the competition on ethical grounds.

There is never an excuse for smoke and mirrors. Never.


Well, the three hardest things in computer science are after all naming things and off-by-one errors.


I thought the two hardest things in CS were naming things, off-by-one errors, and cache invalidation, but I must be remembering that incorrectly.


synchronization, too


OK, the two hardest things in CS are naming things, off-by-one errors, cache invalidation, and synchronisation...and consensus...

I'll come in again.


Consensus is a distributed systems problem. And in that space there are only two hard problems:

    2. Exactly-once delivery
    1. Guaranteed order of messages
    2. Exactly-once delivery


Damn you beat me to it, I knew this felt like a Monty Python skit...

>“Nobody expects the Spanish Inquisition! Our chief weapon is surprise... surprise and fear... fear and surprise... Our two weapons are fear and surprise... and ruthless efficiency.... Our three weapons are fear, and surprise, and ruthless efficiency... and an almost fanatical devotion to the Pope... Our four... no... Amongst our weapons... Amongst our weaponry... are such elements as fear, surprise... I'll come in again.”


synchronization. Don’t forget


The whole reason for the off-by-one error in the standard joke is that we forgot to use a mutex.


Don’t forget synchronisation.


Wrap that speed-run up in automation, refer to it as “fuzzing”, and you can sell it for millions these days.


> Instead of actually trying to understand the source of the error, I just speed-ran tweaking different values +/- 1 until I got the expected result.

That's perfectly valid, when one knows a specific step or result must be positive or negative.

Not that much different from dimensional analysis, which speed-runs you to the proper formula (at the cost of skimming over the dimensionless constants).

Similarly, an interviewer was not impressed when they cut me short and started walking me through some step, and I pointed out that their result was obviously wrong as it was dimensionally inconsistent, and that if they hadn't cut me off, the formula must have been something like baz*foo/bar^2, with just the constants left to figure out.
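
(A worked toy example of the trick, mine rather than from that interview: if a pendulum's period T can only depend on its length L, in meters, and gravity g, in m/s^2, then the only combination with units of seconds is sqrt(L/g). So T = C*sqrt(L/g), and the one thing dimensional analysis can never give you is the dimensionless constant C, which happens to be 2*pi.)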


This can often be justified by the fact that you know the correct form is aX+b, so you only need to get it right at two points to make it right everywhere.
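
(As a toy sketch, with an invented mystery() standing in for whatever is being calibrated:)

    #include <stdio.h>

    /* Pretend we don't know these coefficients. */
    static double mystery(double x) { return 3.0 * x + 7.0; }

    int main(void) {
        double x0 = 0.0, x1 = 1.0;
        double y0 = mystery(x0), y1 = mystery(x1);
        double a = (y1 - y0) / (x1 - x0);  /* slope from two samples */
        double b = y0 - a * x0;            /* then the intercept */
        printf("recovered a=%g, b=%g\n", a, b);  /* prints a=3, b=7 */
        return 0;
    }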


I just want you to know how hard I'm going to steal this and pretend I invented it when people think it's clever.


I'm doing the same thing. I didn't[1] invent it either.

1: https://twitter.com/id_aa_carmack/status/419313776463077377


Carmack didn't invent it either; my physics teacher was saying that a lot 17 years ago, so it's clearly a common thing.


a possible source is in the replies to that tweet: https://twitter.com/RobbieBC/status/419324772754132992


Honestly, it wouldn't surprise me if it had been invented multiple times independently; it's just too obvious to anyone who has had to deal with calculation.

(But thanks for sharing what those of us without a Twitter account cannot otherwise see anymore…)


Clearly we must now start referring to it as ‘Carmack’s sign error joke’ ;)


had you not mentioned that, we might have gotten another fast inverse square root origin investigation


fast (multiplicative) inverse square root.


autocorrect or dyslexia. whoever wins, i lose


It was a common saying 30 years ago when we were preparing for IT Olympiads in Eastern Europe: if you make an even number of errors, chances are your program will work OK.


We would joke in physics class that you either needed to memorize the right-hand rule or, if you were right-handed, use the negative left-hand rule during the test.

(because students holding a pencil in their right hand during the exam would often use the wrong hand to work out the curl)


This joke is used at the start of Abrash’s Graphics Programming Black Book, followed by “If you laughed, you’re a graphics nerd.” :D


Reminds me of one of my favorite lines of code:

  i = i - 2 // because I'm bad at this


The tricky bit is when you have multiple interacting systems which see different combinations of those sign errors: it makes it a lot more important to know where all the sign flips should actually be. For example, it's quite easy to make a PID controller where the D term is actually the opposite sign to the others.
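
(A minimal sketch of where those flips hide in a textbook PID step; the structure and names here are mine, for illustration only:)

    #include <stdio.h>

    typedef struct {
        double kp, ki, kd;
        double integral, prev_error;
    } PidState;

    /* One PID update. Each commented sign is a classic place to get it
       backwards. */
    static double pid_step(PidState *s, double setpoint, double measured,
                           double dt) {
        double error = setpoint - measured; /* flip #1: which way is "error"? */
        s->integral += error * dt;
        /* flip #2: d(error)/dt = -d(measured)/dt for a constant setpoint,
           so differentiating the wrong one negates the D term. */
        double derivative = (error - s->prev_error) / dt;
        s->prev_error = error;
        /* flip #3: whether positive output drives the plant toward or away
           from the setpoint depends on the actuator wiring. */
        return s->kp * error + s->ki * s->integral + s->kd * derivative;
    }

    int main(void) {
        PidState s = { 1.0, 0.1, 0.05, 0.0, 0.0 };
        double x = 0.0; /* toy plant: velocity proportional to command */
        for (int i = 0; i < 5; i++) {
            double u = pid_step(&s, 1.0, x, 0.01);
            x += u * 0.01;
            printf("x = %f\n", x);
        }
        return 0;
    }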


The rest of engineering is doing very careful math and then multiplying everything by two just to be sure.


And when estimating the effort required, multiply by pi. https://news.ycombinator.com/item?id=28667174


also known as "a fortuitous cancellation of errors"


"Now let's assume our chicken is a sphere.."


Same in finance!


that is so not not !false


There are satellites in orbit right now that have their reaction wheels harnessed backwards and an x *= -1.0; //DO NOT TOUCH in the codebase.

Everyone does it, though usually you do the test before you duct tape it to the top of a few thousand kg of explosives and push the red button :)


See also the phenomenon of it always taking 3 tries to plug in a USB connector the correct way


This is especially common with a fresh USB port. I learned why recently from the book Open Circuits: The Inner Beauty of Electronic Components. There is a mechanical component that is stiffer before it has been used, which can lead you to think you simply had the USB connector the wrong way around.

I highly highly recommend the book by the way, to anyone on HN. Goes all the way from mechanical components to resistors, nixie tubes, CMOS sensors, processors etc. Excellent photography that reveals the internals, along with operation explanations and history for each component.


That looks amazing! I immediately bought it, thanks for the recommendation!


My pleasure!


Fun fact: part of the licensing agreement to use USB is that the USB symbol must be on top of the connector. So unless you're using a cheap unlicensed cable, look for the symbol facing up and you'll always be correct


After all this time I'm not going to trust a decoration.

When I'm connecting a USB cable horizontally I always think "hol up". So I always remember to plug it with the hole up.

When I'm connecting it vertically I think "hold right". So I know that if I want to hold it right, I have to put the hole right.

Has worked for me so far.

EDIT: Though "left" and "right" are more vague compared to "up" and "down", so connecting vertically assumes a specific "mental point of view". I'm just using one that's intuitive for me, but it might not work for everyone.


…unless, of course, the port is upside down/sideways ;)


Or 4 when it turns out you're trying the HDMI port instead.


Or the fricken e-Sata port back in the day.


I mean, any physics teacher will tell you that all fermions have spin 1/2, so I don't know why people are so confused by USB.


I've never tried to plug a fermion into a USB port.


Just 3? I wonder now if I'm inexperienced.


Or, in the bad days of micro USB, on the third try you'd just force it in the wrong way and destroy the port.


The holes in the contact on the cable should face up when plugging into the laptop.


People think the USB-A port looks symmetrical. The way I do it is to look inside the connector and note which side the pins are on before trying to plug it in 3 times.


As somebody who has practically no post-secondary education and just likes to tinker in the garage, I thought it might be nice to get an education of some sort so I could stop wasting time doing stuff like flipping signs till it works.


I went through an EE associates program at my local community college after I’d graduated with a BA, purely because I wanted to know how to fix/modify/tinker with audio equipment. I went at night, part time, and it was easily one of the best experiences of my life. Like yourself, I had some amateur experience already, and as the gaps were filled I had at least 4-5 truly paradigm shifting moments where multiple concepts finally clicked together and whole swaths of the world suddenly made sense. I found elegant connections in the physics, familiar logic applications, and gained a lot of insight that I hadn’t expected, which helped reinforce my sense of just how little I will ever really know.

Even though the purpose for taking those classes was largely personal, it's materially contributed to my career in a lot of indirect ways (I'm an IT consultant and Linux sysadmin) by giving me a unique perspective on how things function at a much lower level than my colleagues who just have CS educations. I can troubleshoot Wi-Fi and signal transmission issues using spectrum and vector network analysis, which makes everyone look at me as if I were a witch. I am comfortable disassembling and repairing equipment like scanners and commercial printers, with lots of moving parts and mains electricity, that other techs won't touch.

All that said, I highly recommend attending an EE program. And even though you'll feel like a big, wrinkled-brain smart person, you will never stop turning it off and on again, bit-bashing, or sign flipping: all quite valid techniques for when years of diligent study and experience lose out to fat fingers and poor eyesight.


EE is very rewarding. I think it is second only to physics and nuclear engineering in terms of how deep it goes into physics.

You also cover a wide array of topics, from hardware and how it works, to a lot of systems programming (e.g. real-time operating systems, kernels, device drivers), to some computer science theory (mainly automata theory and concurrency, like Petri nets), to signal processing (which includes audio), to heavy yet extremely beautiful math topics such as control theory.


You are not alone. I've come to terms with the reality that every controller I've designed and implemented will always need a good amount of unit test coverage to ensure proper behavior (like signs and directions)...


Quaternions are hard to get right, and impossible to get right the first time.

https://twitter.com/grapefrukt/status/1618517709767016450
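
(One concrete reason, as a toy C illustration of my own, not from the linked tweet: a quaternion and its negation encode the same rotation, so a stray global sign flip can pass every test you write:)

    #include <stdio.h>
    #include <math.h>

    typedef struct { double w, x, y, z; } Quat;

    /* Hamilton product */
    static Quat qmul(Quat a, Quat b) {
        return (Quat){
            a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
            a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
            a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
            a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
        };
    }

    static Quat qconj(Quat q) { return (Quat){ q.w, -q.x, -q.y, -q.z }; }

    int main(void) {
        double h = sqrt(0.5);
        Quat q  = {  h, 0, 0,  h }; /* 90 degrees about the z axis */
        Quat nq = { -h, 0, 0, -h }; /* same rotation, all signs flipped */
        Quat v  = {  0, 1, 0,  0 }; /* the x axis, as a pure quaternion */
        Quat r1 = qmul(qmul(q, v), qconj(q));   /* q v q* */
        Quat r2 = qmul(qmul(nq, v), qconj(nq)); /* (-q) v (-q)* */
        /* Both print (0, 1, 0): the sign error is invisible. */
        printf("q : (%g, %g, %g)\n", r1.x, r1.y, r1.z);
        printf("-q: (%g, %g, %g)\n", r2.x, r2.y, r2.z);
        return 0;
    }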


Is there a term for this systematic approach? I do it too, in software, and home in on the right behavior using unit tests, especially to account for idiosyncratic off-by-one errors.

Basically: get the structure right and then re-align the implementation to meet the expected behavior.


Guess and check :)


HL? (human learning)


I like this one.


Yeah, it's called being a hacker


When I was studying electrical engineering (during all the advanced control theory math and stuff), we were told that this is the official way to do it: if your PID controller doesn't work on the first, second, or third attempt, you don't run for your math books, you tweak it and try again until it works.


Interesting. Whenever I meet something with boolean behaviour, I decide upfront that it will be easier and less time-consuming to just test it than to build a mental model. Yet I have no problem modelling tree searching or A* stuff. It just seems I never developed neurons with just two outcomes.


This is precisely how I passed the lab portion of my Control Systems course in undergrad engineering.


A multimeter and basic input/output testing are your underappreciated friends.


I spent many days trying to troubleshoot some HP-GL/2 plotter code in the distant past. I eventually concluded that the real problem was with the implementation: I was working with code written by others that went a little crazy with coordinate transforms. It worked as expected on one non-HP plotter but drew the image inverted on an HP plotter. The HP implementation appeared to break if you flipped the world too many times. That was a *long* time ago; my memory is fuzzy by now.

(And in later days I saw a firmware update for a laser printer cause it to spew gibberish when fed embedded HP-GL/2 code. This was in the era when there were still DOS programs running under Windows, and somebody didn't check that it still worked right.)


Is there a way to model this theoretically, or is it always trial and error?

I mean, I realize you have to test the thing for "bugs"; I'm just wondering if a theory to perfectly model it is even possible.


Some things certainly can be modelled, but for others it is easier to simply try. For example, will applying a positive current to the motor make it spin in the clockwise or counterclockwise direction? It really depends on the behaviour and configuration of the motor controller, and in this case it was easier for me to just try.

The trick is to do these tests at a sufficiently low level, because that’s usually where these issues are, in my experience.
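
(A sketch of what such a low-level probe can look like; the driver calls here are simulated stubs I made up, not a real motor API:)

    #include <stdio.h>

    /* Simulated stubs so the sketch runs standalone; on real hardware these
       would talk to the motor controller and the encoder. The simulation
       just moves the encoder in proportion to the commanded current. */
    static long sim_counts = 0;
    static void set_motor_current(double amps) { sim_counts += (long)(amps * 1000); }
    static long read_encoder(void) { return sim_counts; }

    int main(void) {
        long before = read_encoder();
        set_motor_current(0.1);   /* small, safe positive nudge */
        long after = read_encoder();
        set_motor_current(0.0);   /* stop */
        int sign = (after > before) ? +1 : -1;
        printf("positive current moves the encoder %s\n",
               sign > 0 ? "forward" : "backward");
        return 0;
    }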


> sufficiently low level

Final try at ground level:

https://www.spacesafetymagazine.com/space-disasters/rocket-f...


I think it's possible, but you'd have to be really careful with your equations, then really careful to know which direction is positive for each signal, and make sure the wiring and/or math match that. I can see why some trial and error would be easier than completely rechecking everything when it doesn't work.


Does anyone know what I have to learn to "know" about this theory? Is it control theory and classical mechanics?


Pretty much yes. I expect the heavy lifting is just math.


Of course. The OP was modeling it theoretically, but making mistakes.



