iainctduncan's comments

As part of my work in technical diligence, I create medium-long form content marketing material on topics germane to PE investment in tech. In the last six months I did a series (not yet published) on the state of security in the age of gen-AI.

Basically, we are entering the ransomware apocalypse. It is insane what a godsend gen-AI has been to the cybercrime sector. When all you need to do is make something good enough to fool some of the people some of the time, genAI is perfect.

Things that used to work reliably - like trusting Google ads or sponsored links not to be malvertising sites - are meaningless now that gangs can trivially spin up networks of thousands of fake interacting sites and linked profiles to sneak by fraud detection. Phishing attacks are ridiculously sophisticated, combining voice, text, and video impersonation. Supply chain attacks are going to make package managers hand grenades. Ransomware gangs are running full-on SaaS services giving script kiddies access to big-gun material. Attacks that were previously only within reach of nation-state-sponsored actors are now available for peanuts. And all of this is going to get worse because everyone and their dog is using genAI to pump out huge amounts of vulnerable code. And then there is the world of prompt engineering for data exfiltration...

If you are young and wanting a promising trade in tech, security would absolutely be a good choice. Shit is going to get CRAZY.


It amuses me that people don't realize genAI is an existential threat to the internet and everything that has been built on it.

1) One can no longer trust things out on the web. 2) One no longer needs things out on the web.

For 1), I hope the defense mechanism kicks in in time to bake security into our computing culture and have it pervade the whole stack.


You were trusting things on the internet before LLMs?

Careful system administration and web browsing were relatively safe; nowadays, even upgrading the local libraries carries risk that must be assessed.

It has always been that way. Literally the only distro that encourages an update process with the requisite effort is Slackware. You should be reading the source code you build. You should be building from source. You should fully understand your toolchains. Binary-only distros have always been the equivalent of wearing a condom to have sex. Usually fine, but technically outsourcing the hard work to someone that, let's be real, 90% of us never get to know well enough to credibly trust to any degree. NPM and proglang-level package management just multiplied the real estate you had to sift through.

Being a responsible programmer/sysadmin has always been read-heavy, as long as I've been alive. Write-only code is antithetical to the basis of running a trustworthy system.


The Internet is quite fine at delivering packages over encrypted channels which I can trust. (Except where interdicted by governments, like in China, India, Russia, Türkiye, ...)

The Web is a rather different beast, but the question is not "can you trust the Internet", but "can you trust a random website", and now even "can you trust a previously trustworthy website".

You of course should not trust any pictures or videos as critical evidence, they should be corroborated by other means. But this has been true for several years now.


To clarify, I meant it from a lay person's perspective. I do realize one can argue whether the average person has developed this awareness by now. The difference this time, I feel, is that the genAI tools are widely available for normal people to experiment with, which will hopefully help develop this visceral feeling.

While there genuinely was fake content and astroturfed material on the web prior to LLMs, the cost to produce this stuff has fallen enormously. A major corporation or a state actor might pay a bunch of money for inorganic content, but it was hard for some rando in Estonia to spin up a network of fake content to monetize on TikTok or whatever. This leads to way more fake content about a much wider range of topics.

I can't see an existential threat in the sense of the internet no longer existing. It's busier than ever, although maybe with more junk.

> 1) One can no longer trust things out on the web.

I assume you mean software, because we haven't been trusting other things on the web already for decades.

As for software, everybody paying attention knew about the inherent insecurity of the modern software supply chain, but the solutions proposed were too expensive. We need an order of magnitude more money lost before organizations start switching from today's security theater to a model with security built in.
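One cheap "built in" measure already exists: hash pinning, i.e. refusing any artifact whose digest doesn't match the one you locked when you first vetted it. A minimal sketch of the idea - the filename and pinned contents here are made up for illustration, not from any real registry:

```python
import hashlib

# Hypothetical pinned digest, as you'd record in a hash-locked
# requirements file when you first vetted the artifact.
PINNED = {
    "example_pkg-1.0.tar.gz": hashlib.sha256(b"trusted contents").hexdigest(),
}

def verify(filename: str, contents: bytes) -> bool:
    """Refuse the artifact unless its digest matches the pinned one."""
    digest = hashlib.sha256(contents).hexdigest()
    return PINNED.get(filename) == digest

print(verify("example_pkg-1.0.tar.gz", b"trusted contents"))   # True
print(verify("example_pkg-1.0.tar.gz", b"tampered contents"))  # False
```

This is the same model pip's `--require-hashes` mode uses: a compromised upstream can't silently swap the bytes under a version number you already trusted.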


In general and for software in particular too :). For general see my response to ellg.

Even though we were aware of the insecurity of the supply chain: 1) In practice we tend to ignore it except for mission-critical cases. We still do. 2) Autonomous vulnerability discovery/exploitation at scale was difficult and reserved for high-value targets.

What you said will be accelerated by 2) now.


I can't tell if this is satire or not

> If you are young and wanting a promising trade in tech, security would absolutely be a good choice. Shit is going to get CRAZY.

I personally would still recommend software engineering. Security in the vast majority of places is still checkbox- and cost-driven. Outrage happens around incidents, but people are rarely willing to invest meaningfully in their people. Security SaaS, on the other hand, is doing great, so anything driving revenue there is good.


> If you are young and wanting a promising trade in tech, security would absolutely be a good choice.

If AI is capable of performing these attacks, what would stop AI from replacing the security engineers?


> If AI is capable of performing these attacks, what would stop AI from replacing the security engineers?

Because the threat model is one-sided - if an AI attack fails, the controller simply moves to the next target. If an AI defense fails, the victim is fucked.

Therefore, there is still value in being the human in Cyber Security (however you are supposed to capitalise that!)

There are still protections and mitigations that targets can do, but those things require humans. The things that attackers can do require no humans in the loop.


>Because the threat model is one-sided - if an AI attack fails, the controller simply moves to the next target. If an AI defense fails, the victim is fucked.

This was always the case? Security is asymmetric and attacker only needs to succeed once.


> Therefore, there is still value in being the human in Cyber Security

Why? Your logic applies equally well to humans. If the AI attacker fails they move onto the next target, if the human defence fails the victim is fucked.

> There are still protections and mitigations that targets can do, but those things require humans.

Which things would you point to here?


> Why? Your logic applies equally well to humans. If the AI attacker fails they move onto the next target, if the human defence fails the victim is fucked.

I didn't claim that the human defence is the only layer. Your analogy is only valid if my claim is that it's AI attackers vs Human defenders. It's not. It's AI attackers vs AI + Human defenders.

> Which things would you point to here?

If you cannot imagine any value that a human can add to an AI defence, then this conversation is effectively over; I am not in the mood to enumerate the value that a human can add to AI defence.


> If you cannot imagine any value that a human can add to an AI defence, then this conversation is effectively over

I honestly find that a bizarre response in the middle of a discussion but you do you.

Maybe someone else could humour me since you're not in the mood to expand on the point that you made? The topic of the thread was that the ability of the AI tooling is outpacing what individuals can handle. Why would a human then be in a position to defend better than an AI when an AI is in a better position to attack than a human?


Red team has to be lucky once, blue team has to be perfect. How many places take red teaming seriously now?

Compare how fast real attackers could iterate vs the defenders.


This is less true than it seems. It is pretty rare to go from vuln to simple exploit for systems that people care about. There are plenty of vulns in Chrome or whatever that were difficult to actually weaponize, because you need just the right kind of gadgets to create a sandbox escape and the vuln only lets you write to memory addresses you can't leverage.

Stealing a bitcoin wallet by cracking its private key also requires red team to be lucky once. Once AI security gets to the point where the probability of causing actual harm to the business is infinitesimal, it will be fine.

Yes, and on an infinite time horizon we are all dead.

It’s the time between then and now that we’re talking about.


Existing concepts like defense in depth make it exponentially harder for an AI to build a full exploit chain. Even with a full exploit chain, one mistake will trigger a detection system which can foil your attack.

The more I use AI and my workplace buys into it, the more I’m doing person to person work in a security context.

exactly

They're not and they won't. I'm from Gen X and have a background in infosec. I don't agree that AI is the cause of this sudden surge in activity, or that this is even a sudden surge. This stuff was always occurring if you were paying attention. It's just making the mainstream news now.

Geopolitics is the cause of the recent uptick in activity. Many of these groups are state sponsored or just fronts for nation-states themselves. genAI just makes it easier for people further down the chain to go after low hanging fruit.

The most significant impact genAI is having on infosec is creating work for people in infosec, through vibe coding and turning untested AI systems loose on internal networks. genAI just lets developers and admins shoot themselves in the foot faster. genAI is an artificial intern.


LLM-based software is just another layer to be hacked.

It seems like the result of this is that people are going to be driven off the internet. It will simply not be safe for the layperson.

Literally the Blackwall from Cyberpunk 2077.

Most people's internet is Instagram + Games from AppStore + TikTok + Netflix + Banking Apps. Everything is within specific walls and guardrails.

Sounds like an ultimately good thing to me. It was an interesting experiment, but the negatives largely outweigh the positives at this point.

(I do realize the irony of writing this on HN, but I digress)


Just in general, the outcome of where technology is going may spur many to reduce their usage in favor of "the real world"; I agree it might be a good thing.

It might not be a bad thing if we have an Internet for humans, and a segmented Internet for AI.

Who's enforcing that rule?

No man's land between walled gardens

> If you are young and wanting a promising trade in tech, security would absolutely be a good choice. Shit is going to get CRAZY.

Yes, but you can't be a CISSP or SOC monkey - that has no future.

You need to be an actual Software Engineer who understands development fundamentals, OS internals, web dev fundamentals, algorithms, etc as well as offensive and defensive concepts.

Too many "cybersecurity" graduates in North America aren't even qualified to do L1 IT Helpdesk, which is a shame because the IT-to-security talent pipeline is critical (along with the SRE, SWE, and ML-to-security pipelines).


As an “actual” software engineer, what do you recommend I read to work in cybersecurity? Assume I have a solid background in OS internals, algos, networking, software engineering. I have never worked in cybersecurity though (I have never reverse engineered anything).

What do you specialize in as a SWE? Can you identify architectural or implementation bugs and think about how an attacker can exploit that to laterally move across your environment?

Cybersecurity is basically a holistic architectural review of software that takes business, engineering, and operational context into account to make a qualified judgment about risk.


i'm one of these developers who found myself doing a lot of security-oriented devops work. how do i get away from compliance? i hate checking boxes, feels like it creates some pointless work sometimes. compliance alone makes me never want to do cybersecurity but i enjoy the architecture stuff and thinking about threats

> i hate checking boxes

> hate checking boxes, feels like it creates some pointless work sometimes

Everyone does. It doesn't actually help reduce tangible risk, but it helps you understand the operational and liability aspect of cybersecurity which is critical as well.

> compliance alone makes me never want to do cybersecurity

Compliance =/= Cybersecurity. If you work in an organization where security actually means compliance, then leave.

---

Honestly, it's region and industry dependent. If you are east coast, transition into a JPMC or GS tier bank (yes, banks are bleeding edge security personas).

If you are west coast, it shouldn't be difficult for a SRE/DevOps/Cloud type to become a SWE or Solutions Engineer at a cybersecurity company.

If you are in Europe, get an H-1B and leave. I literally helped sponsor 2 O-1s today for European cybersecurity founders who wanted to leave because of subpar terms and bureaucracy.


Definitely agree. I guess I should have specified I meant "real programmer who wants a career". ;-)

The crazy part is that none of this is unexpected.

This was exactly the reason why GPT-2 was restricted for general release in 2019.

Check out section 4 - https://cdn.openai.com/GPT_2_August_Report.pdf


Oh, we're back to not being able to trust Google Ads again?

I recall there being Malvertising campaign problems ~12-15 years ago or so, and then they seemed to get on top of it.


typosquatting is shaping up to be a serious problem again.
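Part of why typosquats slip through is that the defense is inherently fuzzy: you can only flag names that are suspiciously *close* to popular packages. A crude sketch of that check - the allowlist and the similarity threshold here are made up for illustration, not from any real registry:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of popular package names; in practice you'd
# pull these from a registry's download-count data.
POPULAR = {"requests", "numpy", "pandas", "django"}

def likely_typosquat(name: str, threshold: float = 0.85) -> bool:
    """Flag names suspiciously close to (but not equal to) a popular name."""
    if name in POPULAR:
        return False
    return any(
        SequenceMatcher(None, name, pop).ratio() >= threshold
        for pop in POPULAR
    )

print(likely_typosquat("requets"))   # True - one edit away from "requests"
print(likely_typosquat("requests"))  # False - exact match, not a squat
```

The hard part isn't the string math; it's the false positives on legitimately similar names, which is why registries still lean on human review.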

Do you have some pointers to start advancing in security world?

How can open source software possibly survive this?

There’s no closed source software anymore, clankers are mighty good at decompiling.

Open source has advantages over closed source: you can demonstrate your SSDLC, whereas with closed source you have to believe the vendor.

on the upside, the current Administration is making most of that grift legit, so investing in homegrown fraud should be on every PE firm's 2026 wishlist

why yes, yes I am. ;-)

I recently searched for information on a potential pet poisoning. The Google overview had the decimal point in the wrong place... confusing a lethal amount with a trivial amount. My pet was fine, but had it actually eaten more, and had I used the Google answer as my yardstick, it might not have been.

Worst piece of enshittification in my daily life.


Listening and transcribing is an excellent thing to do. But it would be terrible advice to say it's the only thing to do.

Also, I would argue that if you really want the benefit of transcribing, don't write it down until you have memorized whatever chunk you are transcribing - the act of memorizing it and learning it solely by ear is where the real value is.

On the other hand, this is not a good way to learn technique or the fretboard, as the easy keys will be vastly overrepresented, and you don't need to know where you are. That's a challenge that's almost unique to guitar and bass, and getting over that hump requires learning material by note name (whether from scores, tabs, or just chord symbols).

(my bonafides: 35 years playing, gig on sax, bass, piano, and percussion, currently doing an interdisciplinary PhD in music and CS, and running a jazz club night where I perform weekly)


"Keys" and "note names" literally only come up on the guitar when playing open strings. When playing fretted notes, the guitar is a completely relative instrument. You should focus on learning diatonic patterns of tones and semitones on the fretboard directly, not individual notes. It's a completely different method than the piano keyboard, which involves working within a fixed diatonic framework, and altering it to make "transposition" work. This meshes well with solfège and even more so with historical solmization, which are also highly relative methods (being intended originally for the voice).

> "Keys" and "note names" literally only come up on the guitar when playing open strings.

That is a false statement and stated boldly.

While it is possible to not know the note names, learning them is such a simple thing, taking very little effort (5 minutes a day for like 3 weeks), and it helps you simplify, navigate, and just plain understand the instrument better. I would advise any player to just do it.
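For what it's worth, the note-name drill really is that simple because it's just modular arithmetic: each fret adds a semitone to the open string's pitch, mod 12. A quick sketch (standard tuning assumed, string 1 = high E - conventions, not anything from this thread):

```python
# The 12 chromatic note names, sharps only for simplicity.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Open-string pitches for standard guitar tuning, low E (6) to high E (1).
OPEN_STRINGS = {6: "E", 5: "A", 4: "D", 3: "G", 2: "B", 1: "E"}

def note_at(string: int, fret: int) -> str:
    """Name of the note at a given string and fret: open note + fret semitones."""
    start = NOTES.index(OPEN_STRINGS[string])
    return NOTES[(start + fret) % 12]

print(note_at(6, 3))  # "G" - 3rd fret on the low E string
print(note_at(2, 1))  # "C" - 1st fret on the B string
```

Internalizing that one table plus the mod-12 step is the whole trick; everything past the 12th fret repeats.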


I play electric and uptight jazz bass. While what you describe is possible for some genres, not knowing keys and the fretboard the same way you know keys on the piano is a non-starter for jazz. All the competent jazz guitarists and bassists I know have this down cold. It's table stakes in my world.

> uptight jazz bass

mah man . . . too much of that loosey goosey stuff.


Lol I have to admit, I do play "uptight bass". The Freudian thumb slip got me, ha

Relaxing on that thing is hard! :)



I adore those albums with Paul Desmond and Jim Hall. So killer.

I too like the albums of which you speak, they are a little tangential to the piece I linked above which was a sneaky Double Take Five misdirect.

apologies.


Hah. Bass player here, too. Although I only play the electric. Haven't had a chance to do the upright (or uptight even!). Needless to say, I only play rock / blues stuff, though I do love listening to Jazz. Brubeck is a perennial fav of mine.

Curious to know more: Where are you doing your interdisciplinary PhD in music + CS?

Thanks


Hi, University of Victoria in Canada.

Fantastic. That's on the island, yes? Do they have a Music + CS program, or is yours self-designed interdisciplinary?

Hi, yup on Vancouver Island. They have a combined music + CS undergrad, a Masters in Music Technology (which is an MMus), and a roll-your-own interdisciplinary Masters and PhD. The INTD is the grad ticket for people who want to do CS heavy stuff, though you can do it in the MTech if you want the MMus. I did MTech and now INTD PhD with CS as the primary. I'm the author of Scheme for Max, which puts an s7 scheme interpreter in Max and PD, and work on that platform is my research area, including creation of algorithmic music works.

UVic has some great profs in the combination. My supervisor George Tzanetakis is internationally respected in music information retrieval, and Sarah Belle Reid just got my retired music supervisor's job too (Andy Schloss). Feel free to email my user name @ gmail.com if you want to know more.


One has only to compare blogs and "thought leadership" posts from now and five years ago to see this is already happening, and big time.

And if you want a fantastic read that is both a gripping mystery and so-on-point satire of startup culture, Ruth Ware's "One By One" is fucking awesome. It's an homage to the Agatha Christie novel, but at a startup's corporate retreat.

It is astonishing to me how many people here are defending "not actually knowing what the hell you are doing", on the basis that LLMs will "keep getting better".

The bit they are missing, IMHO, is that if LLMs keep getting better, the steering-the-LLM version keeps getting easier, and the value of being an LLM-using expert rapidly drops to zero. Like literally fucking nothing. If anyone can do it easily with an LLM, why would anyone pay you anything? Why would they care? You might as well be a teenager at a fast food job.

In this scenario, only actually understanding will be any kind of differentiator at all. And if that scenario doesn't come to pass, it will still be a far better differentiator to have a clue.


And this is better how, exactly? If you're running a business, do you not want to catch employees' mistakes as early as possible? Most ideas are crap. I'd way rather they get eliminated after someone spent an hour making slides than a day vibecoding a prototype.

And then there is the problem that vibecoding is addictive, so the more one has done of it on the prototype, the worse one's judgment of whether it's actually something worth building...


he exited!!!


... "so far."

