The graph in your Macrotrends link shows the exact same numbers as the AI source, but is harder to read and the page is half ads. It's not an authoritative source -- the data was most likely parsed out of Oracle's earning reports by some janky regexp. I don’t know why you would trust this more than AI.
with an adblocker... there is one ad on the page, just above the graph, about "Unlock Macrotrends Premium", which takes up maybe 1.5-2cm of the page, while the graph underneath it takes up something like 15cm. then there's a bunch of other information on the page, none of which is ads. yes, there's a "you only get 5 free page visits" whole-page pop-up, but there's an easy and well-known way round that for anyone with basic browser know-how.
maybe start using an ad-blocker? pretty much everyone else does these days.
> the data was most likely parsed out of Oracle's earning reports by some janky regexp.
which is probably what the ai would do... or more likely it's just stealing it from the source i linked, since the numbers are exactly the same...
also, probably not -- see (1b) below.
> I don’t know why you would trust this more than AI.
because (1a)
> Fundamental data from Zacks Investment Research, Inc.
> Built on Zacks Investment Research — trusted by institutional investors, academics, and financial professionals for over 45 years. [0]
I'd take people who have been doing this stuff for 45 years over some new-fangled toy that's well known to hallucinate and get things wrong in ways that appear authoritative.
also, on that (1b)
> Zacks employs a rigorous quality control process to make sure all data points are recorded accurately. For each company, a trained analyst enters the data from SEC filings, which is then double checked by a senior analyst. Once the data is entered, a senior analyst signs off on final completion after reviewing all the data. In addition, the data is subjected to a battery of automated checks to verify balancing relationships and correct errors. All data items are reviewed by multiple sets of trained eyes as well as automated computer checks. [1]
and (2) because that site provides other contextual information that is helpful, like the fact that Oracle's stock price has been trending downwards, which is possibly a reason why they felt the need to make cuts now. [2]
ai gives you the answer you want -- not the answers you might actually need.
To be fair, the stats are more trustworthy with a source link. Especially if you admit to using AI to generate the text in your comment (which is now actually against the guidelines, though I suspect most will forgive it if it's not too egregious; in this case at least the formatting fits in with the other comments on this page), it would help to disclose where the actual data came from. I'd just include a link to where you verified the numbers; otherwise the comment is fine. (That's just my opinion, but there you have it.)
Based on the title, I thought this was going to be about a problem with JIRA or something, battling to get his Epic published by the end of the business day ... xD
we've had AlphaFold for a while. it's not novel that we have ML solutions that can find, erm, novel solutions.
however, by and large, most LLMs as typically used by most individuals aren't solving novel problems. and in those scenarios, we often end up with regurgitated/most common/lowest common denominator outputs... it's a probability distribution thing.
i could be wrong, but i'm pretty sure that end-users get upset when a change takes a long time or it ends up breaking something for them.
just because people are finding that agents or whatever are speeding changes up now doesn't necessarily mean they won't encounter a slow-down later when the codebase becomes an un-maintainable mess. technical debt is always a thing, even with machines doing the work (the agent/machine still has to parse a codebase to make changes).
What makes you think that AI couldn’t make the same changes without breaking things, whether you modify the code or not? And you do have automated unit tests, don’t you?
Right now I have a 5000 line monolithic vibe coded internal website that is at most going to be used by 3 people. It mixes Python, inline CSS and JavaScript with the API. I haven’t looked at a line of code. The IAM role for the Lambda runtime has limited permissions (meaning the code can’t do anything the permissions don’t allow). I used AWS Cognito for authorization, validated the security of the endpoints, and validated the permissions of the database user.
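For anyone curious what "limited permissions" looks like in practice, here is a rough sketch of a scoped-down Lambda execution policy. The table ARN, account ID, and choice of DynamoDB are all made up for illustration -- the parent comment doesn't say which database or actions were actually used:

```python
import json

# hypothetical least-privilege policy for a Lambda-backed internal site:
# it can write its own logs and touch exactly one table, nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            # one specific table -- no wildcard resources for data access
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/internal-site-data",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

The point of a policy like this is that even fully vibe-coded Lambda code can't exfiltrate or delete anything the role was never granted.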
Neither Claude nor Codex have any issues adding pages, features and API endpoints without breaking changes.
By definition, coding agents are the worst they will ever be right now.
i have a rule of thumb based on past experience. circa 10k per developer involved, reducing as the codebase size increases.
> 5000 line
so that's currently half a developer according to my rule of thumb.
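the rule of thumb above can be sketched in a few lines. the ~10k base is from my experience as stated; the 10% capacity decay per doubling of the codebase is a made-up shape just to show "reducing as the codebase size increases", not a measured number:

```python
import math

# rough model: ~10k LoC per developer, with effective per-developer
# capacity shrinking as the total codebase grows (decay rate is assumed)
def devs_needed(loc: int, base_capacity: int = 10_000, decay: float = 0.9) -> float:
    doublings = math.log2(loc / base_capacity) if loc > base_capacity else 0.0
    capacity = base_capacity * (decay ** doublings)
    return loc / capacity

print(round(devs_needed(5_000), 2))   # 0.5 -- "half a developer"
print(round(devs_needed(20_000), 2))
print(round(devs_needed(5_000_000)))
```

whatever decay shape you pick, the qualitative point holds: headcount (human or agent effort) grows faster than linearly with LoC.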
what happens when that gets to 20,000 lines...? in my experience that's over the threshold even for the human who wrote it. it takes longer to make changes. changes that are made increasingly go out in a broken state. more and more tests have to be written for each change to try and stop it going out broken. more work needs to be done for a feature of equal complexity compared to when we started, because now the rest of the codebase is what adds complexity to making changes. etc. etc. and that gets worse the more we add.
these agent things have a tendency to add more code, rather than the most maintainable code. it's why people have to review and edit the majority of generated features beyond CRUD webapp functionality (or similar boilerplate). so, given time and more features, 5k --> 10k --> 20k --> ... too much for a single human being if the agent tools are no longer available.
so let's take it to a bit of a hyperbolic conclusion ... what about agents and a 5,000,000 line codebase...? do you think these agents will take the same amount of time to make a change in a codebase of that size versus 5,000 lines? how much more expensive do you think it could get to run the agents at that size? how about increases in error rate when making changes? how many extra tests need to be added for each feature to ensure zero breakage?
do you see my point?
(fyi: the 5 million LoC is a thought experiment to get you to think critically about the problem of technical debt with agents as codebase size increases; i'm not saying your website's code will get that big)
(also, sorry i basically wrote most of this over the 20 minutes or so since i first posted... my adhd is killing me today)
20K lines of code is well within the context window of any modern LLM. But just like no person tries to understand everything and keep the entire context in their brain, neither do modern LLMs.
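A quick back-of-envelope check on that claim. The tokens-per-line ratio is my own rough assumption (it varies a lot by language and line density), not a measured figure:

```python
# rough estimate: does 20K lines of code fit in a typical context window?
# assumption: ~10 tokens per line of code (a guess; dense lines cost more)
def estimated_tokens(lines: int, tokens_per_line: int = 10) -> int:
    return lines * tokens_per_line

print(estimated_tokens(20_000))  # 200000
```

At ~200k tokens that fits the larger windows comfortably but is tight for a 128k one, which is part of why agents grep and read files selectively rather than ingesting the whole repo.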
Also, documentation in the form of MD files becomes important to explain the why and the methodology.
a real study from Microsoft + Carnegie Mellon University with 319 study participants
> while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.
here's a link to an actual source for people who also don't trust ai generated stuff
https://www.macrotrends.net/stocks/charts/ORCL/oracle/number...
edit: this source also includes data/graphs on stock price and a bunch of other metrics, rather than just one number over time.