I use debuggers heavily - I code like Carmack does, I basically live in the debugger. A REPL at breakpoints has been integrated into my IDEs (all of the different ones) for the last 20 years; I couldn't imagine a debugger without this functionality.
I guess this could be useful if you were CLI-only and didn't use an IDE, but it's not just the REPL that I like. In my IDE, when I hit a breakpoint I can see all local variables and the whole call stack without having to do anything (no typing commands, no clicking buttons). I want to see the entire program state without having to type a bunch of stuff at a REPL; that would slow me down enormously.
For example in the gif when you hit a breakpoint, print the line straight away! Don’t make me type:
print(__line__, __source__) just to see which breakpoint I hit (!)
Also, a preview of the variables would be better. In my IDE I see a list of all the local variables and all of their values (strs, ints and floats are shown directly, numpy arrays show their shape, lists show their first few values and their length) - again, all of this without having to type exhaustive print statements into a REPL.
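Roughly what I mean, as a minimal Python sketch (my own illustration, not anything the tool actually does): override the built-in breakpoint() hook so that hitting a breakpoint immediately prints the file/line and a one-line, type-aware preview of every local before dropping into pdb.

    import pdb
    import sys

    def _preview(value):
        # short, type-aware summary: scalars shown directly, numpy arrays by
        # shape/dtype, lists/tuples by length plus their first few elements
        if type(value).__name__ == "ndarray":
            return f"ndarray(shape={value.shape}, dtype={value.dtype})"
        if isinstance(value, (list, tuple)):
            head = ", ".join(repr(v) for v in value[:3])
            more = ", ..." if len(value) > 3 else ""
            return f"{type(value).__name__}(len={len(value)}): [{head}{more}]"
        if isinstance(value, (str, int, float, bool)) or value is None:
            return repr(value)
        return f"<{type(value).__name__}>"

    def _verbose_breakpoint(*args, **kwargs):
        frame = sys._getframe(1)  # the frame that called breakpoint()
        print(f"breakpoint hit at {frame.f_code.co_filename}:{frame.f_lineno}")
        for name, value in frame.f_locals.items():
            print(f"  {name} = {_preview(value)}")
        pdb.Pdb().set_trace(frame)  # then drop into the usual pdb REPL

    sys.breakpointhook = _verbose_breakpoint

That's the kind of thing the IDE gives me for free at every stop.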
Maybe it's not your focus though; I'm just trying to say what I love about debugger-driven development.
Yes sure, dev containers inside each project; that way the entire environment (debugger, all IDE plugins for linting, etc.) is standard across all devs and the coding environment matches prod exactly.
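For concreteness, a minimal .devcontainer/devcontainer.json sketch - the image, extensions, and post-create command here are placeholders for illustration, not a recommendation for any particular stack:

    {
      "name": "project-dev",
      "image": "mcr.microsoft.com/devcontainers/python:3.12",
      "customizations": {
        "vscode": {
          "extensions": ["ms-python.python", "ms-python.debugpy"]
        }
      },
      "postCreateCommand": "pip install -r requirements.txt"
    }

Check the file into the repo and every dev (and CI) builds the same environment from it.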
This is what has excited me for many years - the idea I call "scientific refactoring".
What happens if we reason upwards but change some universal constants? What happens if we use tau instead of pi everywhere? These kinds of fun questions would otherwise require an enormous intellectual effort, whereas with the mechanisation and automation of thought, we might be able to run them and see!
Not just for math - ALL of science suffers heavily from the problem that leading researchers can realistically read less than 1% of the published work in their field.
Google Scholar was a huge step forward for doing meta-analysis vs a physical library.
But agents scanning the vastness of PDFs to find correlations and insights far beyond human context capacity will, I hope, surface a lot of knowledge that we have technically already collected but remain ignorant of.
This idea is just ridiculous to anyone who's worked in academia. The theory is nice, but academic publishing is currently in the late stages of a huge death spiral.
In any given scientific niche, there is a huge amount of tribal knowledge that never gets written down anywhere; it is just passed on from one grad student to the rest of the group, and from there spreads by percolation through the tiny niche. And papers are never honest about the performance of the results or about what does not work - there is always cherry-picking of benchmarks, comparisons, etc.
There is absolutely no way you can get these kinds of insights beyond human context capacity that you speak of. The information necessary does not exist in any dataset available to the LLM.
No no, in comparison to academia, programmers have been extremely diligent at documenting exactly how stuff works and providing fairly reproducible artifacts since the 1960s.
Imagine trying to teach an AI how to code based only on slide decks from consultants. No access to documentation, no Stack Overflow, no open source code in the training data; just sales pitches and success stories. That's close to how absurd this idea is.
Exactly, and I think not every instance can be claimed to be a hallucination; there will be so much latent knowledge they might have explored.
It is likely we will see some AlphaGo-style new approaches in existing research workflows that AI works out, provided there is some verification logic. Humans could probably never go into that space, or maybe no researcher ever ventured there for various reasons, since progress in general is mostly incremental.
Google Scholar is still ignoring a huge amount of scholarship that is decades old (pre-digital) or even centuries old (and written in now-unused languages that ChatGPT could easily make sense of).
In 1897, the Indiana General Assembly attempted to legislate a new value for pi, proposing it be defined as 3.2, which was based on a flawed mathematical proof. This bill, known as the Indiana pi bill, never became law due to its incorrect assertions and the prior proof that squaring the circle is impossible: https://en.wikipedia.org/wiki/Indiana_pi_bill
I don't think it's just the sheer number of symbols. It's also the fact that the symbol τ means "turn". So you can say "quarter-turn" instead of π/2.
I'm not sure why that point gets lost in these discussions. And personally, I think of the set of fundamental mathematical objects as having a unique and objective definition. So, I get weirdly bothered by the offset in the Gamma function.
I can write a sed command/program that replaces every occurrence of PI with TAU/2 in LaTeX formulas, and it'll take me about 30 minutes.
The "intellectual effort" this requires is about 0.
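Something like this rough Python equivalent of that sed pass (the file names are placeholders, and it assumes pi only ever appears as the standalone LaTeX command \pi):

    import pathlib
    import re

    src = pathlib.Path("paper.tex").read_text()
    # swap the constant itself, not substrings of longer commands like \pitchfork
    out = re.sub(r"\\pi\b", r"\\frac{\\tau}{2}", src)
    pathlib.Path("paper_tau.tex").write_text(out)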
Maybe you meant Euler's number? Since it also relates to PI, it can be used and might actually change the framework in an "interesting way" (making it more awkward in most cases - people picked PI for a reason).
I think they mean it in a more general way - thinking with tau instead of pi might shift the context toward another method or problem-solving approach, or there might be obscure or complex uses of tau or pi that haven't cross-fertilized in the literature: cases where a clever extension is natural to see in one context but not the other, and where those extensions and extrapolations would become apparent to an AI doing the kind of tedious, exhaustive review of existing literature that humans won't.
I think what they were getting at is something like this: The application of existing ideas that simply haven't been applied in certain ways because it's too boring or obvious or abstract for humans to have bothered with, but AI can plow through a year's worth of human drudgery in a day or a month or so, and that sort of "brute force" won't require any amazing new technical capabilities from AI.
I'm using LLMs to rewrite every formula featuring the Gamma function to instead use the factorial. Just let "z!" mean "Gamma(z+1)", substitute everywhere, and simplify. Then have the AI rewrite any prose.
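To make that concrete with one formula (my example): with the convention z! := Gamma(z+1), Euler's reflection formula

    \Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}

becomes

    (z-1)!\,(-z)! = \frac{\pi}{\sin(\pi z)}

since Gamma(z) = (z-1)! and Gamma(1-z) = (-z)!.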
Assuming the browser has feature parity. I was visiting my parents over Xmas and my dad couldn't make a payment because the number of saved payees was capped at 100. There was literally no option to delete a payee on the website; the only way we found was to install the app, authenticate, and do it in there. It's happening already.
I hate that this is happening. I absolutely detest doing any kind of task other than pure content consumption and basic messaging from a smart phone.
Anything remotely more advanced than that, please let me use my computer and an app or website with, you know, an interface designed for more advanced operations.
Trying to do anything on a smartphone/touchscreen only device is nothing but an effort in pure frustration for me.
I don't see any ads on Firefox (Android) with uBlock Origin.
That site seems horrible though. Random words in the body, like "reddit", are hyperlinks to SEO landing pages on the same site. And there must be a better (original) source for the story than this...
You should really use an ad blocker. The Internet is basically unusable these days without one. I block ad domains at the DNS level too, but the ad blocker is still necessary to remove the empty frames left behind, sadly.
Also, it's way too biased towards humans - the fact that they poison us could just be a biochemistry coincidence. The author is operating from a very human-centric POV (like you say in (0)).
Don’t worry about it - you know why? Because if the entire thing crashes, then everyone crashes with it, and there’s a million people (or more) that have a lot more skin in the game, a lot more power, and therefore a lot more incentive to make sure this doesn’t crash.
This is not your battle alone, so don’t worry like it is.
This is the problem with this: in simple cases like "you add N employees", you can vaguely approximate it, like they do in the article.
But for anything that isn't this trivial example, the person who knows the value most accurately is ... the customer! Who is also the person paying the bill, so there's a strong financial incentive for them not to reveal this info to you.
I often go back to the customer support voice AI agent example. Let's say the bot can resolve tickets successfully at a certain rate. That is easily capturable. Why is this difficult? What cases am I missing?
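As a back-of-envelope sketch of that calculation (all numbers below are made up for illustration):

    tickets_per_month = 10_000
    bot_resolution_rate = 0.40      # share of tickets the bot closes without a human
    cost_per_human_ticket = 6.50    # fully loaded cost of a human resolution, in dollars

    monthly_value = tickets_per_month * bot_resolution_rate * cost_per_human_ticket
    print(f"estimated savings: ${monthly_value:,.0f}/month")  # -> $26,000/month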
Yeah, the truth is that you need all of these things to be "good enough".
The idea has to be good enough, the execution has to be good enough, and then the connections will come.
The idea that the system is rigged against someone personally is just them protecting their ego - it's a much easier pill to swallow that your failures are because "the whole system is rigged against you" than to accept that the ideas and execution were simply not good enough.
And of course luck plays a part too. I'm lucky I haven't had cancer yet, for example; there is an indisputable element of luck to life. But luck surface area can be increased by failure resilience and brute-force trying.
I don't think the system is rigged; that belief is just a way for failures to protect their ego. But as soon as they get over that and stop making excuses, they can learn, grow, and adapt, and then success will come to them.