
The only thing that reliably works is co-working with another human.

This is hard to find and not always possible. The reason it works is that it triggers the "empathy brain," which transfers the importance of the person to the importance of the task. Having an invested person always at your command is impossible, and an AI robot simply doesn't trigger that same empathy. It costs three cents per interaction. It is a robot. It isn't important, no matter how advanced it is.

There is something fascinating yet defeating about how the ADHD brain craves human connection. Just as loneliness can’t be solved by an app, ADHD cannot be "app-ed" out. I have found that these systems can lighten the cognitive load, but that is their limit.

I have a vibed chief-of-staff personal system. It knows everything and it neatly mapped out my state and day. I even know the first simple task I need to do because a prompt organized it for me on another page. Yet, I would still rather write this comment here. You already know this at some level, too.


> The only thing that reliably works is co-working with another human.

I found this didn't really do anything for me.

> ADHD cannot be "app-ed" out.

I have found significant success with Kanban. To the point where at times, I can even go unmedicated and still somewhat progress. With medication I might as well be superhuman.

Something about the WIP limit, the way I structure it so that I can see the timeframes (Next month, next week, today etc) and the moving of tasks from left to right really clicked with my brain.

It's been the most successful intervention in terms of my treatment in the entirety of my existence.


For me it's not about empathy; it's the external accountability and pressure, i.e. it needs to be someone you're not suuuper comfortable with.

But it is by far the best way to motivate myself.


"someone you're not suuuper comfortable with"

It swings both ways for me. Either super comfortable or uncomfortable.

What doesn't work is hiring a virtual assistant to hold me accountable. As soon as I pay them and they report to me, they are the same loop as my own brain. No empathy brain! :D. This is bleak I know.


Yeah, because it's kind of like they're just gonna do what you want no matter what, so it doesn't feel real.

Needs to be something where you can't just stop as you wish.

Body doubling is different, though. For that, basically anyone works, in my experience.



You should be able to identify bad faith as if your life depends on it. Otherwise you will drown in a pit of both-sidesism. Bad way to go.

The next thing we should hear from the counterparty should be in a court filing. Not here. This is well past having a chill chat on Hacker News.


There are two kinds of bugs: the rare, tricky race conditions and the everyday “oh shucks” ones. The rare ones show up maybe 1% of the time—they demand a debugger, careful tracing, and detective work. The “oh shucks” kind, where I'm half sure what the bug is the moment I see the shape of the exception message from across the room, is all the rest of the time. A simple print statement usually does the trick for those.

Leave us be. We know what we’re doing.


I see it the exact other way around:

- everyday bugs, just put a breakpoint

- rare cases: add logging

By definition a rare case probably will rarely show up in my dev environment if it shows up at all, so the only way to find them is to add logging and look at the logs next time someone reports that same bug after the logging was added.

Something tells me your debugger is really hard to use, because otherwise why would you voluntarily choose to add and remove logging instead of just activating the debugger?


So much this. Also, in our embedded environment, debugging is hit and miss: not always possible, for software, memory, or even hardware reasons.


Then you need better hardware-based debugging tools like an ICE.


Rare 1% bugs practically require print debugging because they are only going to appear about 6 times if you run the test 600 times. So you just run the test 600 times all at once, look at the logs of the 6 failed tests, and fix the bug. You don’t want to run the debugger 600 times in sequence.
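That batch-and-triage loop is easy to script. A minimal Python sketch (the test body, seeds, and ~1% failure rate are all invented stand-ins for a real flaky test):

```python
import concurrent.futures
import random

def flaky_test(seed):
    """Stand-in for a real test run; fails roughly 1% of the time.
    Returns (passed, log) so failing runs keep their logs."""
    rng = random.Random(seed)
    log = [f"seed={seed}"]
    value = rng.random()
    log.append(f"value={value:.4f}")
    passed = value >= 0.01  # ~1% simulated failure rate
    return passed, log

# Run the "test" 600 times at once instead of stepping through
# 600 sequential debugger sessions, then dissect only the failures.
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(flaky_test, range(600)))

failed = [(i, log) for i, (passed, log) in enumerate(results) if not passed]
print(f"{len(failed)} of {len(results)} runs failed; inspect those logs only")
```

Real test runners would shell out to the suite instead of calling a function, but the shape is the same: run wide, keep only the failing logs.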


Record-and-replay debuggers like rr and UndoDB are designed for exactly this scenario. In fact it's way better than logging; with logging, in practice, you usually don't have the logs you need the first time, so you have to iterate "add logs, rerun 600 times" several times. With rr and UndoDB you just have to reproduce once and then you'll be able to figure it out.


I'm not going to manually execute the bug in a test once if it is 1% (or .1%, which I often have to deal with also). I'm going to run it 600, 1200, or maybe even 1800 times, and then pick the bug exhibitors to dissect them. I can imagine that these could all be running under a time-travel debugger that just stops and lets me interact when the bug is found, but that sounds way more complicated than just adding log messages and picking through the logs of failures.


rr has the one downside of often being useless for multithreading bugs, since it serializes execution.


Trace points do exist.


conditional breakpoints, watches, …


... will sometimes make the race condition not occur because things are too slow.

Like the bugs "that disappear in a debug build but happen in the production build all the time".


The tricky race conditions are the ones you often don't see in the debugger, because stopping one thread makes the behavior deterministic. But that aside, for webapps I feel it's way easier to just set a breakpoint and stop to see a var's value instead of adding a print statement for it (just to find out that you also need to see the value of another var). So given you just always start in debugging mode, there's no downside if you have a good IDE.


Using a debugger isn't synonymous with single-stepping.


Even just the debugger overhead can be enough to change the behavior of a subtle race condition.


> The rare ones show up maybe 1% of the time

Lucky you lol

What I've found is that as you chew through surface level issues, at one point all that's left is messy and tricky bugs.

Still have a vivid memory of moving a JS frontend to TS and just overnight losing all the "oh shucks" frontend bugs, being left with race conditions and friends.

Not to say you can't do print debugging with that (tracing is fancy print debugging!), but I've found that a project with a lot of easy-to-debug issues tends to be at a certain level of maturity, and as time goes on you start ripping your hair out way more.


Absolutely. My current role involves literally chasing down all these integration point issues - and they keep changing! Not everything has the luxury of being built on a stable, well tested base.

I'm having the most fun I've had in ages. It's like being Sherlock Holmes and a construction worker all at once.

Print statements, debuggers, memory analyzers, power meters, tracers, tcpdump - everything has a place, and the problem space helps dictate what and when.


The easy-to-debug issues are there because I just wrote some new code, haven't even committed it, and am right now writing some unit tests for it. That's extremely common, and print debugging is alright here.


Unit and integration tests make for long-term maintainable code that's easy and quick to prove still works - not print debugging sprinkled through laborious, untouchable, untestable garbage.


I've had far better luck print debugging tricky race conditions than using a debugger.

The only language where I've found a debugger particularly useful for race condition debugging is go, where it's a lot easier to synthetically trigger race conditions in my experience.


Use trace points and feed the telemetry data into the debugger for analysis.


Somehow I've never used trace points before, thanks!


I used to agree with this, but then I realized that you can use trace points (aka non-suspending breakpoints) in a debugger. These cover all the use cases of print statements with a few extra advantages:

- You can add new traces, or modify/disable existing ones at runtime without having to recompile and rerun your program.

- Once you've fixed the bug, you don't have to cleanup all the prints that you left around the codebase.

I know that there is a good reason for debugging with prints: the debugging experience in many languages sucks. In that case I always use prints. But if I'm lucky enough to use a language with good debugging tooling (e.g. Java/Kotlin + IntelliJ IDEA), there is zero chance I'll ever print for debugging.
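Trace points are a debugger feature, but the idea can be approximated in plain Python with sys.settrace. The helper below is an invented illustration, not a real debugger API: it logs locals when a chosen line runs, without editing the function or suspending it.

```python
import sys

def add_tracepoint(func, lineno_offset, fmt):
    """Rough emulation of a non-suspending breakpoint: record a
    formatted view of the locals whenever a given line of `func`
    executes. Names and interface here are illustrative only."""
    target = func.__code__.co_firstlineno + lineno_offset
    records = []

    def tracer(frame, event, arg):
        if event == "call" and frame.f_code is not func.__code__:
            return None  # don't trace unrelated calls
        if event == "line" and frame.f_lineno == target:
            records.append(fmt.format(**frame.f_locals))
        return tracer

    def wrapped(*args, **kwargs):
        old = sys.gettrace()
        sys.settrace(tracer)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(old)

    return wrapped, records

def accumulate(n):
    total = 0
    for i in range(n):
        total += i  # <- tracepoint target (offset 3 from the def line)
    return total

traced, log = add_tracepoint(accumulate, 3, "i={i} total={total}")
traced(3)
print(log)
```

A real IDE does this far more efficiently (and lets you add or disable trace points at runtime), but the mechanics are the same: observe a line, log, continue.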


TIL about tracepoints! I'm a bit embarrassed to admit that I didn't know these exist, although I'm using debuggers on a regular basis facepalm. Visual Studio seems to have excellent support for message formatting, so you can easily print any variable you're interested in. Unfortunately, QtCreator only seems to support plain messages :-(


Even print debugging is easier in a good debugger.

Debugging in frontend JS/TS is literally just writing the statement "debugger;" and saving the file. JS, unlike supposedly better-designed languages, is designed to support hot reloading, so often just saving the file will launch me into the debugger at the line of code in question.

I used to write C++, and setting up print statements, while easier than using LLDB, is still harder than that.

I still use print debugging, but only when the debugger fails me. It's still easier to write a series of console.log()s than to set up logging breakpoints. If only there was an equivalent to "debugger;" that supported log and continue.


> JS (...) is designed to support hot reloading

No it's not, lol. HMR is an outrageous hack of the language. However, the fact that JS can accommodate such shenanigans is really what you mean.

Sorry, I don't mean to be a pedantic ass. I just think it's fascinating how languages that are "poorly" designed can end up being so damn useful in the future. I think that says something about design.


ESM has Hot Module Reloading. When you import a symbol it gives you a handle to that symbol rather than a plain reference, so that if the module changes the symbol will too.


It's not a feature of the language was my point, not that it's not possible.


Fully agree.

If I find myself using a debugger it’s usually one of two things:

- freshly written low-level assembly code that isn’t working

- a basic userspace app crash (in C), where whipping out gdb is faster than adding prints and recompiling

I've never needed a debugger even for complex kernel drivers — just prints.


I guess I struggle to see how it's easier to print debug; if the debugger is right there, I find it way faster.

Perhaps the debugging experience in different languages and IDEs is the elephant in the room, and we are all just talking past each other.


Indeed, depends on deployment and type of application.

If the customer has their own deployment of the app (on their own server or computer), then all you have to go on, when they report a problem, are logs. Of course, you also have to have a way to obtain those logs. In such cases, it's way better for the developers to never use a debugger either, because they are then forced to ensure during development that the logs contain sufficient information to pinpoint a problem.

Using a debugger also already means that you can reproduce the problem yourself, which is already half of the solution :)


One from work: another team is willing to support exactly two build modes in their projects: release mode, or full debug info for everything. Loading the full debug info into a debugger takes 30m+ and will fail if the computer goes to sleep midway through.

I just debug release mode instead, where print debug is usually nicer than a debugger without symbols. I could fix the situation other ways, but a non-reversible debugger doesn't justify the effort for me.


Exactly. At work for example I use the dev tools debugger all the time, but lldb for c++ only when running unit tests (because our server harness is too large and debug builds are too large and slow). I’ve never really used an IDE for python.

When using Xcode the debugger is right there, and so it is in Qt Creator. I’ve tried making it work in vim many times and just gave up at some point.

The environment definitely is the main selector.


On non-x86 embedded platforms, a hardware debugger can be a pain (if even possible) and system emulation using QEMU usually isn’t available.

But yeah, user space debugging with a software debugger (gdb, etc…) is certainly useful for some things.


Well, if you have a race condition, the debugger is likely to change the timing and alter the race, possibly hiding it altogether. Race conditions is where print is often more useful than the debugger.


The same can be said about prints.


Yes, but to a lesser extent.


No, wrong. Totally wrong. You're changing the conditions, which prevents accurate measurement without modification. This is where you use proper tools like an In-Circuit Emulator (ICE) or its equivalent.


I think you have a specific class of race conditions in mind where tight control of the hardware is desirable or even possible.

But what to do if you have a race condition in a database stored procedure? Or in a GUI rendering code? Even web applications can experience race conditions in spite of being "single-threaded", thanks to fetches and other asynchronous operations. I never heard of somebody using ICE in these cases, nor can I imagine how it could be used - please enlighten me if I'm missing something...

> You're changing the conditions that prevent accurate measurement without modification.

Yes, but if the race condition is coarse enough, as it often is in the above cases, adding print/logging may not change the timings enough to hide the race.


Safety systems in aerospace, industrial, and other critical sectors use more advanced methodologies than web developers do, and are typically built by "better" engineers who tend to be familiar with tools and methodologies like debuggers, profilers, tracing, testing, symbolic execution, and (semi-)formal verification. People in low-level engineering like kernel, driver, and/or performance work tend to be more familiar with such tools and approaches, but aren't as likely to employ formal or conservative approaches. Security engineering folks should be lumped in for good measure.


> the debugger is likely to change the timing

And the print will 100% change the timing.


Yes, but often nowhere near as drastically as the debugger. In Android we have huge logs anyway; a few more printf statements aren’t going to hurt.


Log to a memory ring buffer (if you need extreme precision, prefetch everything and write binary fixed-size "log entries"), and flush asynchronously at some point when you don't care about timing anymore. Really helpful in kernel debugging.
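The comment above is about kernel-space logging, but the shape of the technique can be sketched in Python: on the hot path, append cheap fixed-size binary records to a bounded ring; defer all string formatting to a flush that runs when timing no longer matters. (The class name and record layout here are illustrative only.)

```python
from collections import deque
import struct
import time

class RingLog:
    """Fixed-size binary ring buffer: the hot path only packs a
    fixed-width record; formatting is deferred until flush()."""

    def __init__(self, capacity=1024):
        self.buf = deque(maxlen=capacity)  # oldest entries drop automatically

    def log(self, event_id, value):
        # Cheap on the hot path: one struct.pack, no string formatting.
        # Record layout: f64 timestamp, u32 event id, u64 value.
        self.buf.append(struct.pack("<dIQ", time.monotonic(), event_id, value))

    def flush(self):
        # The expensive formatting happens here, off the critical path.
        return [
            f"t={t:.6f} event={eid} value={val}"
            for t, eid, val in (struct.unpack("<dIQ", rec) for rec in self.buf)
        ]

log = RingLog(capacity=4)
for i in range(6):
    log.log(event_id=1, value=i)
print(log.flush())  # only the last 4 entries survive
```

In C you would use a preallocated array and an atomic write index instead of a deque, but the hot-path/flush split is the point.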


Formatting logs still takes considerable compute, especially when working on an embedded system where your CPU runs at only a few hundred MHz.


You don't need to format the log on-device. You can push a binary representation and format it when you need to display it. Look at 'defmt' for an example of this approach. Logging overhead in the path that emits the log messages can be tens of instructions.


Hence the mention of binary stuff... We use ftrace on Linux, and we limit ourselves a lot in what we "print".


> the rare, tricky race conditions [...]. The rare ones show up maybe 1% of the time—they demand a debugger,

Interesting. I usually find those harder to debug with a debugger. Debuggers change the timing when stepping through, making the bug disappear. Do you have a cool trick for that? (Or a mundane trick, I'm not picky.)


It is also much, much easier to fix all kinds of other bugs by stepping through code with the debugger.

I am in the camp where the 1% on the easy side of the curve can be efficiently fixed by print statements.


> Leave us be. We know what we’re doing.

No shade, this was my perspective until recently as well, but I disagree now.

The tipping point for me was the realisation that if I'm adding print statements for debugging, I must be executing that code, and if I'm executing that code anyway, it's faster for me to click a debug point in an IDE than it is to type out a print statement.

Not only that, but the thing that I forgot to include in my log line doesn't require adding it in and re-spinning, I can just look it up when the debug point is hit.

I don't know why it took me so long to change the habit but one day it miraculously happened overnight.


> it's faster for me to click a debug point in an IDE than it is to type out a print statement

Interesting. I always viewed the interface to a debugger as its greatest flaw—who wants to grapple with an interface reimplementing the internals of a language half as well when you can simply type, save, commit, and reproduce?


Depends on your language, runtime, dev tooling.

I'm using IntelliJ for a Java project that takes a very long time to rebuild, re-spin and re-test. For E2E tests a 10-minute turn-around time would be blazingly fast.

But because of the tooling, once I've re-spun I can connect a debugger to the JVM and click a line in IntelliJ to set a breakpoint. Combined, that takes 5 seconds.

If I need to make small changes at that point I can usually try them out exactly in the debugger to see how they execute, all while paused at that spot.


> who wants to grapple with an interface reimplementing the internals of a language half as well when you can simply type, save, commit, and reproduce?

I do, because it's much faster than typing, saving, rebuilding, etc.


The real question is: why do we (as an industry) not use testing frameworks more, to see if we can replicate those rare, obscure bugs? If you can code up the state, you can now reproduce it 100% of the time. The real answer, it seems to me, is that the industry isn't writing any, or enough, unit tests.

If your code can be unit tested, you can twist and turn it in many ways, if it's not an integration issue.


I don't see any evidence that the 1% of bugs can be reduced so easily. A debugger is unsuitable just as often as print debugging is. There is no inherent edge it gives to the sort of reasoning demanded. It is just a flathead rather than a phillips. The only thing that distinguishes this sort of bug from the rest is pain.


Often you can also just use conditional breakpoints, which surprisingly few people know about (to be clear, it's still a breakpoint, but your application just auto-continues if the condition is false). It's usually available via right-click on the area where you'd click to set the breakpoint.


When the print statements cause a change in asynchronous data hazards that leads to the issue disappearing, then what's the plan since you appear to "know it all" already? Perhaps you don't know as much as you profess, professor.


> Leave us be. We know what we’re doing.

No. You’re wrong.

I’ll give you an example: a plain vanilla-ass bug that I dealt with today.

A teammate was trying to use PortAudio with ALSA on one of our cloud Linux machines for CI tests. PortAudio was failing to initialize with an error that it failed to find the host API.

Why did it fail? Where did it look? What actual operation failed? Who the fuck knows! With a debugger this would take approximately 30 seconds to understand exactly why it failed. Without a debugger you need to spend a whole bunch of time figuring out how a random third party library works to figure out where the fuck to even put a printf.

Printf debugging is great if it’s within systems you already know inside and out. If you deal with code that isn’t yours, then a debugger is more than an order of magnitude faster and more efficient.

It’s super weird how proud people are to not use tools that would save them hundreds of hours per year. Really really weird.


The hardest bug I had to track down took over a month, and a debugger wouldn't have helped one bit.

On the development system, the program would only crash, under a heavy load, on the order of hours (like over 12 hours, sometimes over 24 hours). On the production system, on the order of minutes (usually less than an hour). But never immediately. The program itself was a single process, no threads whatsoever. Core dumps were useless as they were inconsistent (the crash was never in the same place twice).

I do think that valgrind (had I known about it at the time) would have found it ... maybe. It might have caught the memory corruption, but not the actual root cause of the memory corruption. The root cause was a signal handler (so my "non-threaded code" was technically, "threaded code") calling non-async-safe functions, such as malloc() (not directly, but in code called by the signal handler). Tough lesson I haven't forgotten.


Ok? A debugger also wouldn’t help the hardest bug I ever fixed!

It is not the only tool in the bag. But literally the first question anyone should ask when dealing with any bug is “would attaching a debugger be helpful?”. Literally everyone who doesn’t use a debugger is less effective at their jobs than if they frequently used a debugger.


A modern debugger would have made that trivial. Just turn on time travel debugging mode and you would have been done after the first time it occurred.

Wait until the memory is corrupted and causes a crash. Set a hardware breakpoint on the corrupted memory location and run backward until the memory location was written in the signal handler. Problem solved.

Memory corruption bugs in single-threaded code are a solved problem.


> It’s super weird how proud people are to not use tools that would save them hundreds of hours per year. Really really weird.


:eyeroll:

I use logs and printf. But printf is a tool of last resort, not first. Debugging consideration #1 is “attach debugger”.

I think the root issue is that most people on HN are Linux bash jockeys and Linux doesn’t have a good debugger. GDB/LLDB CLI are poop. Hopefully RadDebugger is good someday. RadDbg and Superluminal would go a long long way to improving the poor Linux dev environment.


This post exactly.


Why is this news? Also, why is this on Hacker News? Stock markets bounce for hundreds of reasons and crash just as often. Are we going to be reading updates about Bezos's net worth every time it shifts, because it seems like an excellent way to get clicks by playing into the crowd's rubbernecking instincts?


don't ask questions, just be mad.



"Shamelessly stole the title from a hero of mine." The shamelessness is all fine, but at first I thought this was a post from Andrej Karpathy. He has one of the best personal brands out there on the internet; while personal brands can't be enforced, this confused me at first.


TL;DR: If more folks feel this way, please upvote this comment: I'll be happy to take down this post, change the title, and either re-post it or just not - the GitHub repo is out there, and that should be more than enough. Sorry again for the confusion (I just upvoted it).

I am deeply sorry about the confusion. The last thing I intended was to grab any attention away from Andrej, or to be confused with him.

I tried to find a way to edit the post title, but I couldn't find one. Is there just a limited time window to do that? If you know how to do it, I'd be happy to edit it right away in case.

I didn't even think this post would get any attention at all - it is indeed my first post here, and I really did it just because, if anybody could use this project to learn RL, I was happy to share.


Throwing in my vote - I wasn’t confused. I saw your GH link and a “Zero to Hero” course name on RL; it seems clear to me, and “Zero to Hero” is a classic title for a first course. Nice that you gave props to Andrej too! Multiple people can and should make ML guides and reference each other. Thanks for putting in the time to share your learnings and make a fantastic resource out of it!


Thanks a lot. It makes me feel better to hear that the post is not completely confusing or appropriating - I really didn't mean that, or to use it as a trick for attention.


Didn't "Zero to Hero" come from Disney's Hercules movie before Karpathy used it?


Didn't know that, but now I have an excuse to go watch a movie :D


I didn't find it confusing at all. I think it's totally ok to re-use phrasing made famous by someone else - this is how language evolves after all.


Thank you, I appreciate it.


This is a great resource nonetheless. Even if you did use the name to get attention, how does it matter? I still see it as a net positive. Thanks for sharing this.


Thank you!


I read it as "Fathers Are One of Evolution's Cleverest Inventions".

It completely made sense. I heard about some research suggesting that, starting about 500K years ago, humans started to pair-bond as a way to prevent maternal and child mortality during and after childbirth, and indeed fathers are a clever invention. So yeah, feathers are cool - fathers too! (For more info/reading, here is a book suggestion: Eve.)



I found it a lot easier to understand the harmonic and geometric averages when I learned about the "generalized f-mean". Many averages are arithmetic averages of a transformation of the value. "f" refers to the function which transforms your values. https://en.wikipedia.org/wiki/Quasi-arithmetic_mean

- The geometric average is the arithmetic average of the logarithm. It places emphasis on the ratio between numbers, rather than the absolute difference.

- The harmonic average is the arithmetic average of the multiplicative inverse. It averages values by a constant numerator rather than denominator. For example, the average fuel economy of multiple vehicles makes more sense per-distance, so miles/gallon should be rewritten as gallons/mile.

- The (RMS) root-mean-square is the arithmetic average of the square. Electrical power is proportional to the square of the amperage or voltage, so AC current and voltage use the RMS average to make the power calculations correct.
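Those three bullets are easy to verify numerically. A small Python sketch of the quasi-arithmetic mean (the function names are mine, not from the linked article):

```python
import math

def f_mean(values, f, f_inv):
    """Quasi-arithmetic (generalized f-) mean:
    apply f, take the ordinary arithmetic mean, then invert."""
    return f_inv(sum(f(v) for v in values) / len(values))

xs = [2.0, 8.0]

geometric = f_mean(xs, math.log, math.exp)            # 4.0
harmonic = f_mean(xs, lambda v: 1 / v, lambda m: 1 / m)  # 3.2
rms = f_mean(xs, lambda v: v * v, math.sqrt)          # sqrt(34) ~ 5.831

# The fuel-economy example: two cars at 10 and 30 mpg driven equal
# distances average 15 mpg (the harmonic mean), not 20.
mpg_avg = f_mean([10.0, 30.0], lambda v: 1 / v, lambda m: 1 / m)  # 15.0
```

Setting `f` to the identity recovers the plain arithmetic mean, which is what makes this framing so unifying.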


It gives off creepy vibes, but if you stop to think about it, this can stop good drivers from subsidizing the bad drivers. Not that the insurance companies are going to lower the premiums on good drivers - if you have a problem with that, talk to capitalism. But bad drivers getting higher premiums is good for everyone.


> But bad drivers getting higher premiums is good for everyone.

Not necessarily. In many parts of the United States, a car is the only viable mode of transport. If you price the bad drivers out of the insurance market, they will forgo insurance altogether. Then, if they cause a loss, they will be uninsured and the other driver's insurance will have to pay for it (or spend resources on costly suits) anyhow. So then good drivers' premiums will need to go up to compensate for the extra "bad drivers can't afford insurance" risk that good drivers carry. We end up in a similar situation in a roundabout manner, but with the added element that now all our data is stored on everyone's servers.

