
Different levels of capabilities. The summary feature in Google uses a quick and inaccurate AI model. Were it a heavier model, we wouldn’t have this problem.

We would still have this problem. The heavier models make mistakes at too high a rate vs. a physician. Especially on imaging data. Real world data and patient presentations often deviate from the textbooks they are trained on.

-Med student


That's a different class of problem. It will do just fine on text-based queries spanning a few pages. Probably better than the average physician (averaged over all countries).

I do agree that LLMs are not there yet on the imaging side.


Lots of reasons to be skeptical about this article

- I can’t find any of the accounts

- the geolocation feature is hard to fool, so they must have been identified as Iranian when it was released

- these don’t seem to be high profile accounts


Good job being skeptical, it's a good first instinct to have with any media. However, reading the actual article is a good choice as well.

- They link to several of the accounts (now suspended) in the article.

- The geolocation was easily spoofed with a VPN, as stated in the article.

- They never claim these accounts have any significant standing, only that they have "thousands of followers", likely in aggregate.

The news here is that they were previously known to be Iranian-linked. This blackout is further evidence, because they all stopped posting when Iran went offline.


Just recently I heard that typed languages are best for agentic programming

Just recently I heard that they can donate to “typed languages” too; a donation to one language doesn’t preclude other donations, and given their cash injections they have a few $1.5m’s to spare.

For any programming really, but I think Python got big due to

  a) the huge influx of beginners into IT,
  b) lots of intro material available in Python and 
  c) having a simple way to run your script and get feedback (same as PHP)

I say that as someone urging people to look beyond Python when they master the basics of programming.

Python has a terseness that is hard to rival. I think that was a major selling point: its constructs and use of whitespace mean that a valid Python program looks pretty close to the pseudo-code one might write to reason out the problem before writing it in another language.

I doubt that this is the selling point. Imho it is nothing special compared to Haskell, F#, and the like.

It's a huge selling point for me and many I know who know it. Nothing like code that you can read like you're reading a book or article.

Python doesn't require you to understand monads to write useful Python.

To be clear: Haskell is great, but its entire vibe (lazy evaluation, pure functions) is entirely different from what Python's about. Someone who knows C++ or Java has a much bigger gap to jump to pick up Haskell than to pick up Python.


  > Someone who knows C++ or Java has a much bigger gap to jump to pick up Haskell than to pick up Python.
True, they are all imperative C-style languages.

  > Python doesn't require you to understand monads to write useful Python.
If that is the concern, I would recommend anyone interested dabble with F#. Part of its design philosophy is to keep type-system complexity out of the way. It offers a vast vetted library¹, better dependency management², it is truly multi-paradigm (imperative, functional, and OOP), vastly better performance, and it is strongly typed without requiring type annotations³.

I know I am not going to sell it to monogamous devs, but those that are open minded should give it a try.

___

¹ This is something people will start to appreciate once they get serious about the risk of supply chain attacks.

² Python developers feel they are doing fine with pip or uv, at least in my experience, but then I find they haven't dealt with package mgmt in alternative languages.

³ Types in Python are a hack; bolting something on afterwards will not reach what is possible with a language designed with types as a core element.


Python is a typed language. Perhaps you were trying to say something different?

Is it static or dynamic? Whatever Rust is that Python isn’t.

Rust is static. Python is optionally static.

Python type hints are static - at the moment, they are advisory only, but there is an obvious route forward to making Python an (optionally) fully statically typed language by using static type checking on programs before execution.
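
A minimal sketch of what "advisory only" means in practice (the function name is just illustrative):

  # a minimal sketch; names are illustrative
  def double(n: int) -> int:
      return n * 2

  # CPython runs this happily (it prints "hellohello"); the hint is ignored.
  # A static checker run before execution rejects it:
  #   error: Argument 1 to "double" has incompatible type "str"; expected "int"
  print(double("hello"))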

Didn't The Powers That Be™ say that was not going to happen?

I might be missing the point but isn’t this what we use mypy et al for today?

They clearly meant a statically typed language. Yes, Python is strongly typed, but I think we all knew what they meant.

Types are best, period. Whether they are native or hints doesn't really matter for the agent, what matters is the interface contract they provide.
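
To make "interface contract" concrete, a quick Python sketch with typing.Protocol (the names are made up):

  from typing import Protocol

  class Fetcher(Protocol):
      def fetch(self, url: str) -> bytes: ...

  # any object with a matching fetch() satisfies the contract; a checker
  # (or an agent reading the signature) knows exactly what mirror() needs
  def mirror(f: Fetcher, url: str) -> bytes:
      return f.fetch(url)

  class Dummy:
      def fetch(self, url: str) -> bytes:
          return b"stub"

  mirror(Dummy(), "https://example.com")  # checker-approved; no inheritance needed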

I don’t get this argument, because if we put in the effort to get it typed, we don’t get one of the best benefits: performance.

But that's not the argument here. Python type hints allow checking correctness statically, which is what matters for agents.

Yes, then you might as well use some other language that has types but also gets you performance. I agree the ecosystem is missing, but hey, we have LLMs now.

I don't understand why you keep bringing up performance. If you're considering using Python, as many projects are, performance is obviously not a concern.

Python is a good language. Its ecosystem is rich, and I find it very productive. I want to use it, but I also want as much static analysis as possible, so I use ruff and pyright.


Performance isn’t the only important metric. There are other pros to weigh. For many apps a language might be performant enough, and bring other pros that make it more appealing than more performant alternatives.

That’s what makes types easier for me, too, so that makes sense.

> Python type hints allow checking correctness statically

Not really. You can do some basic checking, like ensuring you don't pass a string in where an integer is expected, but the tests you need anyway to make sure you're properly dealing with those integers (Python type hints aren't nearly capable enough to forgo them) would catch that too. The LLM doesn't care if the error comes from a type checker or a test suite.

When you get into real statically typed languages there isn't much consideration for Python. Perhaps you can prompt an LLM to build you an extractor, but otherwise, based on what already exists, your best bet is likely Lean extracted to C, imported as a Python module. Easier would be to cut Python out of the picture, though.

If you are satisfied with the SMT middle-ground, Dafny does support Python as a target. But as the earlier commenter said: Types are best.


I think you're underestimating the current state of the Python type system. With Python 3.12 and pyright or mypy in strict mode, it's very reliable, and it makes those kinds of tests unnecessary. This requires you to fully buy into the idea, though, with 100% of your codebase statically typed and using only typed libraries, unless you're comfortable writing wrappers.

It's not Rust-level, but I'd argue it's better than C or Go's type systems.


The comparison with Rust and not something like Lean, Rocq, or Idris is telling. Rust's type system is not much better than Python's, still requiring tests for everything.

These partial type systems cannot replace any actually useful tests. I'll grant you that testing is the least understood aspect of computer science, leading to a lot of really poorly conceived tests out in the wild. I can buy that those bad, useless tests can be replaced — albeit they weren't actually needed in the first place.
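
A tiny illustration of the gap, nothing beyond the standard library:

  # type-checks cleanly in any checker, yet wrong:
  def add(a: int, b: int) -> int:
      return a - b  # satisfies the signature, violates the contract

  # only a test expresses the actual behaviour:
  assert add(2, 2) == 4  # AssertionError at runtime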


The best benefit depends on your problem domain.

For a lot of the business world, code flexibility is much more important than speed because speed is bottlenecked not on the architecture but on the humans in the process; your database queries going from two seconds to one second matters little if the human with their squishy eyeballs takes eight seconds to digest and understand the output anyway. But when the business's needs change, you want to change the code supporting them now, and types make it much easier to do that with confidence you aren't breaking some other piece of the problem domain's current solution you weren't thinking about right now (especially if your business is supported by a team of dozens to hundreds of engineers and they each have their own mental model of how it all works).

Besides... Regarding performance, there is a tiny hit to performance in Python for including the types (not very much at all, having more to do with space efficiency than runtime). Not only do most typed languages not suffer performance hindrance from typing, the typing actually enables their compilation-time performance optimizations. A language that knows "this variable is an int and only an int and always an int" doesn't need any runtime checks to confirm that nobody's trying to squash a string in there, because the compiler already did that work by verifying every read and write of the variable to ensure the rules are followed. All that type data is tossed out when the final binary gets built.


So add mypy to your pre-commit


Damn… ok, I’ll try it

All this but none of the performance benefits.

I'd say most of us who prefer Python (a pretty significant number given it's the most popular language out there) don't care that much about performance, as today's machines are pretty fast and the main bottlenecks aren't in the language itself anyway. What we care about is usability/friendliness so we ourselves can iterate quickly.

If your code is talking to an LLM, the performance difference between Rust and Python represents < 0.1% of the time you spend waiting for computers to do stuff. It's just not an important difference.

This is clearly not what I'm speaking about - there are only a few applications that talk to an LLM.

The article is about Anthropic's contribution to Python. Pretty much all of their code talks to an LLM.

And just a few comments earlier you said:

> Just recently I heard that typed languages are best for agentic programming

Are we not talking about using python (or some alternative) to constrain the behavior of agents?


I was more thinking of Python used as a general-purpose backend language. We can use LLMs to vibe-code in such languages.

Today…

It's true; mypy won't make your Python faster. To get something like that, you'd want to use Common LISP and SBCL; the SBCL compiler can use type assertions to actually throw away code-paths that would verify type expectations at runtime (introducing undefined behavior if you violate the type assertions).

It's pretty great, because you can run it in debug mode where it will assert-fail if your static type assertions are violated, or in optimized mode where those checks (and the code to support multiple types in a variable) go away and instead the program just blows up like a C program with a bad cast does.


> mypy won't make your Python faster

Mypyc will do. See https://blog.glyph.im/2022/04/you-should-compile-your-python...


The point about mypy was it does type checking (static analysis) for your Python code. Not speeding it up.

Why is this getting downvoted... it is true. It is also true that dynamic languages (like Ruby ;) and Python) are significantly more token-efficient than typed languages like C, C++, and such. But JavaScript and TypeScript use twice the tokens of Ruby, for example, and Clojure is even more efficient, obviously I would add.

It's not incorrect, but in the context of the given Hacker News submission it reads as "why fund Python at all?"

For vibe code, since it's not important whether the output works, JavaScript is even better.

AFAICT Python basically is a [statically-]typed language nowadays. Most people are using MyPy or an alternative typechecker, and the community frowns on those who aren’t.

> Most people are using MyPy or an alternative typechecker, and the community frowns on those who aren’t.

That's not a widespread/by-default/de-facto standard across the ecosystem, by a wide margin. Browse popular/trending Python repositories on GitHub sometime and I guess you can see.

Most of the AI stuff released still basically uses conda or pip for dependencies; more often than not, they don't even share/say what Python version they used. It's basically still the wild west out there.

Never had anyone "frown" towards me for not using MyPy or any typechecker either, although I get plenty of that from TS fans when I refuse to adopt TS.


> Never had anyone "frown" towards me for not using MyPy or any typechecker either

I’ve seen it many times. Here’s one of the more extreme examples, a highly-upvoted comment that describes not using type hints as “catastrophically unprofessional”:

https://www.reddit.com/r/Python/comments/1iqytkf/python_type...


But yeah, that's reddit, people/bots rejoice over anything being cargoculted there, and you really can't take any upvote/downvote numbers on reddit seriously, it's all manipulated today.

Don't read stuff on reddit and use whatever you've "learned" there elsewhere, because it's basically run by moderators who try to profit off their communities these days; hardly any humans are left on the subreddits.

Edit: I really can't stress this enough: don't use upvotes/likes/stars/whatever as an indicator that a person on the internet is right and has a good point, especially not on reddit, but I would advise people not to do so on HN either, or any other place. But again, especially on reddit, the upvotes literally count for nothing. Don't pick up advice based on upvoted comments on reddit!


Generally you only get frowned at if you're not using type hints while contributing to a project whose coding standards say "we use type hints here."

If you're working on a project that doesn't use type hints, there's also plenty of frowning, but that's just because coding without a type checker is kind of painful.


> Generally you only get frowned at if you're not using type hints while contributing to a project whose coding standards say "we use type hints here."

Yeah, that obviously makes sense, not following the code guidelines of a project should be frowned upon.


I think in the case of TS, it's more that JavaScript itself is notoriously trash (I'm not being subjective; see https://www.destroyallsoftware.com/talks/wat), and TypeScript helps paper over like 90% of the holes in JavaScript.

Python typed or untyped feels like a taste / flexibility / prototyping tradeoff; TypeScript vs. JavaScript feels like "Do you want to get work done or do you want to wrap barbed wire around your ankle and pull?" And I say this as someone who will happily grab JS sometimes (for <1,000 LOC projects that I don't plan to maintain indefinitely or share with other people).

Plus, TypeScript isn't a strict superset of JavaScript, so choice at the beginning matters; if you start in JS and decide to use TS later, you're going to have to port your code.


Typed Python vs untyped Python is literally the same as TS vs JS, don't let others fool you into thinking somehow it's different.

> TypeScript helps paper over like 90% of the holes in JavaScript

Always kind of baffles me when people say this: how are you actually programming such that 90% of the errors/bugs you have are related to types and other things TS addresses? I must be doing something very different when writing JS, because while those things happen sometimes (once or twice a year, maybe?), 90% of the issues I have while programming are domain/logic bugs and wouldn't be solved by TS in any way.


I mean, I'm one of the fools who would fool you into thinking it's different, since I use all four languages. ;)

I can just skip the mypy run if I want to do untyped Python. I can't skip adding types if I'm writing TypeScript in most contexts; it's not valid TypeScript syntax. Conversely, I can't add types to JavaScript; it's not valid JavaScript syntax (jsdoc tags and running a static checker over that being a different subject, and more akin to the Python situation).

> how are you actually programming where 90% of the errors/bugs you have are related to types and other things TS addresses

It's the things in the "wat" video. JavaScript, in general, errs on the side of giving you some answer when you try and do something very unusual with types (like add a boolean to a number or a string to an array) over taking a runtime error. TypeScript will fail to typecheck in most of the places where those operations are technically correct but surprising as hell in the wrong way unless you explicitly coerce the types to match up.


> It's the things in the "wat" video.

It's a funny video, still after 15 years of seeing it, I'll give you that. But the number of times I'm bothered by accidentally triggering those scenarios in real-life? Could probably count that on one hand.

I'll also give you that TypeScript helps beginner JavaScript developers a ton, and that's no easy feat by itself, just because of those things you mention. Once you build up intuition about how things work in JavaScript, though, those sorts of bugs should stop happening; otherwise I'd say you aren't really learning the language.


It's a pretty nice best-of-both-worlds arrangement. The type information is there, but the program still runs without it (unless one is doing something really fancy, since it does actually make a runtime construct that can be introspected; some ORMs use the static type data to figure out database-to-object bindings). So you can go without types for prototyping, and then when you're happy with your prototype you can let mypy beat you up until the types are sound. There is a small nonzero cost to using the types at runtime (since they do create metadata that doesn't get dropped like in most languages with a static compilation step, like C++ or TypeScript).
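
A sketch of that runtime metadata (the class is hypothetical):

  import typing

  class User:
      id: int
      name: str

  # unlike TypeScript or C++, the annotations survive into the running
  # program, which is what lets ORMs derive column types from them
  print(typing.get_type_hints(User))
  # {'id': <class 'int'>, 'name': <class 'str'>}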

I can name an absolute handful of languages I've used that have that flexibility. Common LISP comes to mind. But in general you get one or the other option.


> It's a pretty nice best-of-both-worlds arrangement

It’s also a worst-of-both-worlds arrangement, in that you have to do the extra work to satisfy the type checker but don’t get the benefits of a compiled language in terms of performance and ease-of-deployment, and only partial benefits in terms of correctness (because the type system is unsound).

AFAIK the Dart team felt this way about optional typing in Dart 1.x, which is why they changed to sound static typing for Dart 2.


Without dependent typing, it's the worst of all worlds anyway. You have to express types, but they aren't expressive enough to not have to also express the same in tests, leaving this weird place where you have to repeat yourself over and over.

That was an okay tradeoff for humans writing code as it enables things like the squiggly line as you type for basic mistakes, automatic refactoring, etc. But that stuff makes no difference to LLMs.


>AI in its current form has no actual sense of truth or ethics.

This is untrue. It does have a sense of truth and ethics. Although it does get a few things wrong from time to time, you can't reliably get it to say something blatantly incorrect (at least with thinking enabled). I would say it is more truthful than any human on average. Ethically, I don't think you can get it to do or say something unethical.


To the people downvoting me: why do you think AI is untruthful or unethical?

The burden of proof lies on someone making a positive assertion. Why do you think it's possible for "AI" to be either of those things at this point in time (let alone whether it's possible at all)?

If AI lies, please give an example of AI lying blatantly (use ChatGPT 5.2 with thinking).

The lie should be a clearly blatant one, something that a reasonable person would never tell.


"if AI tells the truth, please give an example of AI telling the truth blatantly."

You can't. An LLM chatbot has no concept of truth, or reasonableness, or blatantness. You're willing to accept a distillation of tokens that may or may not form a truthful statement based on probability, with no intent or understanding behind it.


How is it different? I don't get it.

It's the frictionless aspect of it. It requires basically no user effort to do some serious harassment. I would say there's some spectrum of effort that impacts who is liable, along with a cost/benefit analysis of some safeguards. If users were required to give paragraph-long jailbreaks to achieve this and xAI had implemented ML filters, then I think there could be a more reasonable case that xAI wasn't being completely negligent here. Instead, it looks like almost no effort was put into restricting Grok from doing something ridiculous. The cost here is restricting AI image generation, which isn't necessarily that much of a burden on society.

It is difficult to put similar safeguards into Photoshop, and the effort required to do the same thing in Photoshop is much higher anyway.


I think you have a point, but consider this hypothetical situation.

Imagine you are back before the printing press was invented. Surely the printing press also reduced the friction of distributing unethical material like CP.

What is the appropriate thing to do here to ensure justice? Penalise the authors? Penalise the distributors? Penalise the factory? Penalise the technology itself?


Photocopiers are mandated by law to refuse copying currency. Would you say that's a restriction of your free speech or too burdensome on the technology itself?

If curl is used by hackers in illegal activity, culpability falls on the hackers, not the maintainers of curl.

If I ask the maintainers of curl to hack something and they do it, then they are culpable (and possibly me as well).

Using Photoshop to do something doesn’t make Adobe complicit because Adobe isn’t involved in what you’re using Photoshop for. I suppose they could involve themselves, if you’d prefer that.


So why is the culpability on Grok?

Because Grok posts child porn, which is illegal. Section 230 doesn't apply, since the child porn is clearly posted by Grok.

Where did Grok post child porn?

X, formerly Twitter.

You could drive your car erratically and cause accidents, and it would be your fault. The fact that Honda or whoever made your car is irrelevant. Clearly you as the driver are solely responsible for your negligence in this case.

On the other hand, if you bought a car that had a “Mad Max” self driving mode that drives erratically and causes accidents, yes, you are still responsible as the driver for putting your car into “Mad Max” mode. But the manufacturer of the car is also responsible for negligence in creating this dangerous mode that need not exist.

There is a meaningful distinction between a tool that can be used for illegal purposes and a tool that is created specifically to enable or encourage illegal purposes.


You don’t understand the difference between typing “draw a giraffe in a tuxedo in the style of MC Escher” into a text box and getting an image in a few seconds, versus the skill and time necessary to do it in an image manipulation program?

You don’t understand how scale and accessibility matter? That having easy cheap access to something makes it so there is more of it?

You don’t understand that because any talentless hack can generate child and revenge porn on a whim, they will do it instead of having time to cool off and think about their actions?


Yes, but the onus is on the person calling Grok, and not Grok.

Why do you think that?

So, is it that you don’t understand how the two differ (which is what you originally claimed), or that you disagree about who is responsible (which the person you replied to hasn’t specified)?

You asked one specific question, but then responded with something unrelated to the three people (so far) who have replied.


Why does it matter if Grok is advertising or you are advertising? In reality there's no difference. It's just a tool you can invoke.

I wrote something similar earlier:

This is because they have entrenched themselves in a comfortable position that they don’t want to give up.

Most won’t admit this to be the actual reason. Think about it: you are a normal, hands-on, self-taught software developer. You grew up tinkering with Linux and a bit of hardware. You realise there’s good money to be made in a software career. You do it for 20-30 years, mostly the same stuff over and over again. Some Linux, C#, networking. Your life and hobby revolve around these technologies. And most importantly, you have a comfortable and stable income that entrenches your class and status. Anything that can disrupt this state is obviously not desirable. Never mind that disrupting others’ careers is why you have a career in the first place.


> disrupting others careers is why you have a career in the first place.

Not every software project has or did this. In fact, I would argue many new businesses exist that didn't exist before software and computing, and people are doing things they didn't do beforehand. Especially around discovery of information: solving the "I don't know what I don't know" problem also expanded markets and demand to people who now know.

Whereas the current AI wave seems to be more about efficiency/industrialization/democratizing of existing use cases rather than novel things to date. I would be more excited if I saw more "product-oriented" AI use cases beyond destroying jobs. While I'm hoping that the "vibing" of software will mean that SWEs are still needed to productionise it, I'm not confident that AI won't be able to do that soon too, nor take over any other knowledge profession.

I wouldn't be surprised with AI if there's mass unemployment but we still don't cure cancer for example in 20 years.


> Not every software project has or did this. In fact I would argue many new businesses exist that didn't exist before software and computing and people are doing things they didn't beforehand.

That's exactly what I am hoping to see happen with AI.


All I can say to that is "I hope so too", but logic is telling me otherwise at this point. Because the alternative, as evidenced by this thread, isn't all that good. The fear/dread in people since the holidays has been sad to see; it's overwhelmed everything else in tech now.

I agree, but is it bad to have this reaction? Upending people’s lives and destroying their careers is a reasonable thing to fear

It’s OK to be empathetic, but they have lucrative careers because they did the same to other careers that don’t exist now.

agreed

knowledge cutoff date is different for 4o and 5.2

You are exaggerating. LLMs simply don’t hallucinate all that often, especially ChatGPT.

I really hate comments such as yours, because anyone who has used ChatGPT in these contexts would know that it is pretty accurate and safe. People can also generally be trusted to tell good advice from bad. They are smart like that.

We should be encouraging thoughtful ChatGPT use instead of showing fake concern at each opportunity.

Your comment and many others just try to signal pessimism as a virtue and have very little bearing on reality.


All we can do is share anecdotes here, but I have found ChatGPT to be confidently incorrect about important details in nearly every question I ask about a complex topic.

Legal questions, questions about AWS services, products I want to buy, the history of a specific field, so many things.

It gives answers that do a really good job of simulating what a person who knows the topic would say. But details are wrong everywhere, often in ways that completely change the relevant conclusion.


I definitely agree that ChatGPT can be incorrect. I’ve seen that myself. In my experience, though, it’s more often right than wrong.

So when you say “in nearly every question on complex topics", I’m curious what specific examples you’re seeing.

Would you be open to sharing a concrete example?

Specifically: the question you asked, the part of the answer you know is wrong, and what the correct answer should be.

I have a hypothesis (not a claim) that some of these failures you are seeing might be prompt-sensitive, and I’d be curious to try it as a small experiment if you’re willing.


In one example, AWS has two options for automatic deletion of objects in S3 buckets that are versioned.

"Expire current versions" means that the object will be automatically deleted after some period.

"Permanently delete non-current versions" means that old revisions will be permanently removed after some period.

I asked ChatGPT for advice on configuring a bucket. Within a long list of other instructions, it said "Expire noncurrent versions after X days". In this case, such a setting does not exist, and the very similar "expire current versions" is exactly the wrong behavior. "Permanently delete noncurrent versions" is the option needed.
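
For reference, a boto3 sketch of the rule I actually needed; the bucket name is a placeholder:

  import boto3

  s3 = boto3.client("s3")
  s3.put_bucket_lifecycle_configuration(
      Bucket="my-bucket",  # placeholder
      LifecycleConfiguration={
          "Rules": [{
              "ID": "delete-old-revisions",
              "Status": "Enabled",
              "Filter": {"Prefix": ""},
              # deletes *old* revisions after 30 days; the similarly named
              # "Expiration" setting would instead expire current versions
              "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
          }]
      },
  )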

The prompt I used has other information in it that I don't want to share.


I don't think that LLMs do a significantly worse job than the average human professional. People get details wrong all the time, too.

LLMs give false information often. Your ability to catch incorrect facts is limited by your knowledge, and by your ability and desire to do independent research.

"LLMs are accurate about everything you don't know but factually incorrect about things you are an expert in" is a common comment for a reason.


As I used LLMs more and more for fact type queries, my realization is that while they give false information sometimes, individual humans also give false information sometimes, even purported subject matter experts. It just turns out that you don’t actually need perfectly true information most of the time to get through life.

No they don’t give false information often.

They do. To the point where I'm getting absolutely furious at work at the number of times shit's gotten fucked up and when I ask about how it went wrong the response starts with "ChatGPT said"

Do you double-check every fact, or are you relying on being an expert on the topics you ask an LLM about? If you are an expert on a topic, you probably aren't asking an LLM anyhow.

It reminds me of someone who reads a newspaper article about a topic they know and says it's mostly incorrect, but then reads the rest of the paper and accepts those articles as fact.


Gell-Mann Amnesia

"Often" is relative but they do give false information. Perhaps of greater concern is their confirmation bias.

That being said, I do agree with your general point. These tools are useful for exploring topics and answers, we just need to stay realistic about the current accuracy and bias (eager to agree).


I just asked chatGPT.

"do llms give wrong information often?"

"Yes. Large language models produce incorrect information at a non-trivial rate, and the rate is highly task-dependent."

But wait, it could be lying, and they actually don't give false information often! If that were the case, though, the lie itself would verify that they give false information at a non-trivial rate, because I don't ask it that much stuff.


I have them make stuff up constantly for smaller Rust libraries that are newish or don't get a lot of use.

Whether or not hallucination “happens often” depends heavily on the task domain and how you define correctness. In a simple conversational question about general knowledge, an LLM might be right more often than not — but in complex domains like cloud config, compliance, law, or system design, even a single confidently wrong answer can be catastrophic.

The real risk isn’t frequency averaged across all use cases — it’s impact when it does occur. That’s why confidence alone isn’t a good proxy: models inherently generate fluent text whether they know the right answer or not.

A better way to think about it is: Does this output satisfy the contract you intended for your use case? If not, it’s unfit for production regardless of overall accuracy rates.


Can you explain the exact way in which this is possible? It’s not legal to be denied jobs based on health. Nor to be denied insurance.

And how would you know what they base their hiring upon? You would just get a generic automated response.

You would not be privy to their internal processes, and thus not able to prove wrongdoing. You would just have to hope for a new Snowden, and that the found wrongdoings would actually be punished this time.


I don't get it. If you're medically unfit for a job, why would you want the job?

For instance, if your job is to be on your feet all day and you can barely stand, then that job is not for you. I have never met employers so flush with candidates that they just randomly choose to exclude certain people.

And if it's insurance, there's a group rate. The only variables are what the employee chooses out of your selected plans (why make a plan available if you don't want people to pick it?) and family size. It's illegal to discriminate on family size, and that can add up to 10k extra on the employer side. But there are downsides to hiring young single people, so things may balance out.


Usually there are one or two job responsibilities among many that you can do, but not the way everyone else does them. The ADA requires employers to make reasonable accommodations, and some employers don't want to.

So less "the job requires you to stand all day", and more "once a week or so they ask you to make a binder of materials, and the hole puncher they want you to use dislocates your hands" (true story). Or, it's a desk job, but you can't get from your desk to the bathroom in your wheelchair unless they widen the aisles between desks (hypothetical).


Very large employers don't have a group rate. The insurance company administers the plan on behalf of the company according to pre-agreed rules, then the company covers all costs according to the employee health situation.

Read your policy!


I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason. If I may, stepping back for a second: the reason privacy laws exist is to protect people from bad behavior by employers, health insurers, etc.

If we circumvent those privacy laws, through user licenses, or new technology - we are removing the protections of normal citizens. Therefore, the bad behavior which we already decided as a society to ban can now be perpetrated again, with perhaps a fresh new word for it to dodge said old laws.

If I understand your comment, you are essentially wondering why those old laws existed in the first place. I would suggest racism and other systemic issues, and differences in insurance premiums, are more than enough to justify the existence of privacy laws. Take a normal office job as an example, versus a manual-labor-intensive job. There is no reason at all that health conditions should impact the former. The idea of not being hired because I have a young child, or a health condition, that would raise the group rate, with the insurer passing the cost to my employer (which would be in their best interest to do), is a terrible thought. And it happened before, and we banned that practice (or did our best to do so).

All this to say, I believe HIPAA helps people, and if ChatGPT is being used to partially or fully facilitate medical decision making, they should be bound under strict laws preventing the release of that data regardless of their existing user agreements.


> I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason.

It’s not just medical but a broad carve out called “bona fide occupational qualifications”. If there’s a good reason for it, hiring antidiscrimination laws allow exceptions.


> And if it's insurance, there's a group rate.

Insurers derive rates for each employer from each employer's costs where laws allow this. And many employers self fund medical insurance.


Do corporations use my google searches as data to hire me?

Do you have any proof they don't? Do you have any proof the "AI system" that they use to filter out candidates doesn't "accidentally" access that data? Are you willing to bet that Google, OpenAI, Anthropic, and Meta won't sell access to that information?

Also, in some cases: they absolutely do. Try to get hired in Palantir and see how much they know about your browsing history. Anything related to national security or requiring clearances has you investigated.


The last time I went through the Palantir hiring process, the effort on their end was almost exclusively on technical and cultural fit interviews. My references told me they had not been contacted.

Calibrating your threat model against this attack is unlikely to give you any alpha in 2026. Hiring at tech companies and government is much less deliberate than your mental model supposes.

The current extent of background checks is an API call to Checkr. This is simply to control hiring costs.

As a heuristic, speculated information to build a threat model is unlikely to yield a helpful framework.


>the effort on their end was almost exclusively on technical and cultural fit interviews

How could you possibly know if they use other undisclosed methods as part of the recruitment? You are assuming Palantir would behave ethically. Palantir, the company that will never win awards based on ethics.


References were not contacted. Colleagues at the company are unaware of such practices.

It is impossible to prove a negative, but having strongly held beliefs without evidence is an antipattern.


You’re over thinking it. Like all top tech companies, they just want the best engineers.

On the contrary, they hire the trendiest: https://danluu.com/programmer-moneyball/

Yeah this seems accurate, I just mean they aren’t looking at your google searches when deciding if they should hire you.

Ah yes, Palantir is "just" a tech company.

Notwithstanding the fact that tech companies hire dogshit employees all the time and the vast majority of employees of any company of size 1000+ are average at best, Palantir happens to be rating so high on the scale of evil that I'd pop champagne if it got nuked tomorrow.

If any company would do it, it would be Palantir.


That’s the point. If any company would do it, it’s Palantir, and they don’t. In fact it’s quite the opposite. Their negative public image makes hiring more difficult causing them to accept what they can get.

Also, I’m not saying they have the best talent, just that they want the best talent.


As if any company that did that is a company I would want to work for.

For instance back when I was interviewing at startups and other companies where I was going to be a strategic hire, I would casually mention how much I enjoyed spending time on my hobbies and with my family on the weekend so companies wouldn’t even extend an offer if they wanted someone “passionate” who would work 60 hours a week and be on call.


I certainly understand this perspective.

But is it really so hard to imagine a world where your individual choice to "opt-out" or work for companies that don't use that info is a massive detriment to your individual life? It doesn't have to be every single company doing it for you to have no _practical_ choice about it (if you want to make market rate for your services.)


I live my life by the “Ben Kenobi” principle. I’m 51; when things go completely to shit, I’ll just go out and live as a hermit somewhere.

Ah the ol’ “fuck you got mine” approach

Exactly what am I supposed to do? I vote for politicians who talk about universal healthcare, universal child care, public funding of college education and trade schools, etc.

But the country and the people who could most benefit from it are more concerned with whatever fake outrage Fox News comes up with, anti-woke something or other.

So yeah, if this is the country America wants, I’m over it. I’ve done my bit.

While other people talk about leaving the country, we are seriously doing research: we are going to spend a month and a half outside of the US this year, and I’ve already looked at post-retirement residency requirements in a couple of countries, including the one we are going to in a month and a half.


> Exactly what am I suppose to do?

I think GP is suggesting that you're supposed to do something akin to what Ben Kenobi did while aboard the Death Star, not what he did beforehand.

This, in no way, represents my own feelings or opinion on this matter. I'm just trying to aid the conversation.


No, I was being snarky and that was a mistake and I apologize. For some reason I thought the person above was happy or okay with the current state and can just fck off if/when it affects them negatively.

I basically did what they plan on doing. I fcked off because my country was already too far gone. But I always make sure I will never talk positively or be in denial about the state it’s in. America isn’t there (yet). What made me snarky was the hypocrisy I mistakenly perceived.


“Do you have any proof they don't?”

Do you have any proof they don’t have a goose randomly deciding to hire you?

The lack of proof gives no credence to it actually happening


Probably not directly, that would be too vulnerable. But they could hire a background check company, that could pay a data aggregator to check if you searched for some forbidden words, and then feed the results into a threat model...

No they do not.

Anyone who has worked in hiring for any big company knows how much goes into ensuring hiring processes don't accidentally touch anything that could be construed as illegal discrimination. Employees are trained, policies and procedures are documented, and anyone who even accidentally says or does anything that comes too close to possibly running afoul of hiring laws will find themselves involved with HR.

The idea that these same companies also have a group of people buying private search information or ChatGPT conversations for individual applicants from somewhere (which nobody can link to) and then secretly making hiring decisions based on what they find is silly.

The arguments come with the usual array of conspiracy theory defenses, like the "How can you prove it's not happening" or the claims that it's well documented that it's happening but nobody can link to that documentation.


Not yet. But Google itself would ask you for your resume if you happened to search for a lot of things related to programming.

Yes, I remember a friend who interned there a couple of times showed me that. One of them was “list comprehension python”, and the Google website would split in two and give you some really fun coding challenges. I did a few, and if you get 4(?) right you get a guaranteed interview, I think. I intended to come back and spend a lot of time on an additional one, but I never did. Oops

I think I only did three or something and I didn't hear back from them. Honestly my view of Google is that they aren't as cool as they think they are. My current position allows me to slack off as much as I want and it's hard to beat that, even if they offer more money (they won't in the current market).

"Ask you for your resume" is a funny way of saying "Show an advertisement to invite people to apply for a job"

I'm kind of amazed that so many people in this comment section believe their Google searches and ChatGPT conversations are being sold and used.

Under this conspiracy theory they'd have to be available for sale somewhere, right? Yet no journalist has ever picked up the story? Nobody has ever come out and whistleblown that their company was buying Google searches and denying applicants for searching for naughty words?


Google "doesn't sell your data" but RTB leaks that info, and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.

It is well documented in many many places, people just don't care.

Google can claim that it doesn’t sell your data, but if you think data about your searches isn't being sold, here is a small selection of real sources.

https://www.iccl.ie/wp-content/uploads/2022/05/Mass-data-bre...

And it isn't paranoia, consumer surveillance is a very real problem, and one of the few paths to profitability for OpenAI.

https://techpolicy.sanford.duke.edu/data-brokers-and-the-sal...

https://stratcomcoe.org/cuploads/pfiles/data_brokers_and_sec...

https://www.ftc.gov/system/files/ftc_gov/pdf/26AmendedCompla...

https://epic.org/a-health-privacy-check-up-how-unfair-modern...


> and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.

Citation needed for a claim of this magnitude.

> It is well documented in many many places, people just don't care.

Yes, please share documentation of companies buying search data and rejecting candidates for it.

Like most conspiracy theories, there are a lot of statements about this happening and being documented but the documentation never arrives.


Like most cults, you ignore direct links with cites from multiple government agencies, but here is another.

https://www.upturn.org/work/comments-to-the-cfpb-on-data-bro...

> Most employers we examined used an ATS capable of integrating with a range of background screening vendors, including those providing social media screens, criminal background checks, credit checks, drug and health screenings, and I-9 and E-Verify.29 As applicants, however, we had no way of knowing which, if any, background check systems were used to evaluate our applications. Employers provided no meaningful feedback or explanation when an offer of work was not extended. Thus, a job candidate subjected to a background check may have no opportunity to contest the data or conclusions derived therefrom.30

If you are going to ignore a decade of research etc... I can't prove it to you.

> The agency found that data brokers routinely sidestep the FCRA by claiming they aren't subject to its requirements – even while selling the very types of sensitive personal and financial information Congress intended the law to protect.

https://www.consumerfinance.gov/about-us/newsroom/cfpb-propo...

> Data brokers obtain information from a variety of sources, including retailers, websites and apps, newspaper and magazine publishers, and financial service providers, as well as cookies and similar technologies that gather information about consumers’ online activities. Other information is publicly available, such as criminal and civil record information maintained by federal, state, and local courts and governments, and information available on the internet, including information posted by consumers on social media.

> Data brokers analyze and package consumers’ information into reports used by creditors, insurers, landlords, employers, and others to make decisions about consumers

https://files.consumerfinance.gov/f/documents/cfpb_fcra-nprm...

And that CFPB proposal was withdrawn:

https://www.consumerfinancialserviceslawmonitor.com/2025/05/...

Note screen shots of paywalled white papers from large HR orgs:

https://directorylogos.mediabrains.com/clientimages/f82ca2e3...

Image from here:

https://vendordirectory.shrm.org/company/839063/whitepapers/...

But I am betting you come back with another ad hominem, so I will stay in the real world while you ignore it, enjoy having the last word.


You keep straying from the question. The question was: who has access to google searches? RTB isn't google searches. Background screening isn't google searches. Social media isn't google searches. Cookies aren't google searches. etc etc

Every link you provided is for tangential things. They're bad, yes, but they're not google searches. Provide a link where some individual says "Yes, I know what so-and-so searched for last wednesday."


Where in your post are Google searches used?

Can you answer this question without walls of unrelated text, ad hominem attacks (saying I’m in a cult), or link bombing links that don’t answer the question?

It’s a simple question. You keep insisting there’s an answer and trying to ad hominem me for not knowing it, but you consistently cannot show it.


This fails the classic conspiracy theory test: Any company practicing this would have to be large enough to be able to afford to orchestrate a chain of illegal transactions to get the data, develop a process for using it in hiring, and routinely act upon it.

The continued secrecy of the conspiracy would then depend on every person involved in orchestrating this privacy violation and illegal hiring scheme keeping it secret forever. Nobody ever leaking it to the press, no disgruntled employees e-mailing their congress people, no concerned citizens slipping a screenshot to journalists. Both during and after their employment with the company.

To even make this profitable at all, the data would have to be secretly sold to a lot of companies for this use, and also continuously updated to stay relevant. Giant databases of your secret ChatGPT queries being sold continuously in volume, with all employees at the sellers, the buyers, and the users of this information keeping it perfectly quiet, never leaking anything.


It doesn't, though. As an aside, I have been using a competitor to ChatGPT health (Nori) for a while now, and I have been getting an extreme amount of targeted ads about HRV and other metrics that the app consumes. I have been collecting health metrics through wearables for years, so there has been no change in my own search patterns or beliefs about my health. I just thought AI + health data was cool.

> And how would you know what they base their hiring upon?

GDPR Request. Ah wait, regulation bad.


> It’s not legal to be denied jobs based on health.

There is a vast gap between what is not legal and what is actually actionable in a court of law, which is well known to a large power nexus.


> It’s not legal to be denied jobs based on health. Not to deny insurance

The US has been pretty much a free-for-all for surveillance and abusing all sorts of information, even when illegal to do so. On the rare occasions that they get caught, the penalty is almost always a handslap, and they know it.


How are you ever going to prove this?

You just get an automated denial from the ATS that's based on the output from AI inference engine.


The ADA made it illegal to discriminate against job seekers for health conditions and ObamaCare made it illegal to base cover and rates on pre-existing conditions.

What are the chances those bills last long in the current administration and supreme court?


And yet, if you want life insurance you can’t get it with a bunch of pre existing conditions. And you can be discriminated against as a job seeker as long as they don’t make it obvious.
