Hacker News

He's not wrong. DeepMind spends its time on big scientific and large-scale problems, such as those in genetics, materials science, or weather forecasting, and Google has untouchable resources such as all the books they've scanned (and have already won court cases over).

They do make OpenAI look like kids in that regard. There is far more to technology than public facing goods/products.

It's probably in part due to the cultural differences between London/UK/Europe and Silicon Valley/California/USA.



While you are spot on, I cannot help thinking of 1996 or so.

On one corner: IBM Deep Blue winning vs Kasparov. A world class giant with huge research experience.

On the other corner: Google, a feisty newcomer, two years into its life, leveraging the tech to actually make something practical.

Is Google the new IBM?


I don’t think Google is the same as IBM here. I think Google’s problem is its insanely low attention span. It frequently releases innovative and well built products, but seems to quickly lose interest. Google has become somewhat notorious for killing off popular products.

On the other hand, I think IBM’s problem is its finance focus and long-term decay of technical talent. It is well known for maintaining products for decades, but when’s the last time IBM came out with something really innovative? It touted Watson, but that was always more of a gimmick than an actually viable product.

Google has the resources and technical talent to compete with OpenAI. In fact, a lot of GPT is based on Google’s research. I think the main things that have held Google back are questions about how to monetize effectively, but it has little choice but to move forward now that OpenAI has thrown down the gauntlet.


In addition, products that seem like magic at launch get worse over time instead of better.

I used to do all kinds of really cool routines and home control tasks with Google home, and it could hear and interpret my voice at a mumble. I used it as an alarm clock, to do list, calendar, grocery list, lighting control, give me weather updates, set times etc. It just worked.

Now I have to yell unnaturally loud for it to even wake, and even then the simplest commands have a 20% chance of throwing “Sorry I don’t understand” or playing random music. Despite having a device in every room it has lost the ability to detect proximity and will set timers or control devices across the house. I don’t trust it enough anymore for timers and alarms, since it will often confirm what I asked then simply… not do it.

Ask it to set a 10 minute timer.

It says ok setting a timer for 10 minutes.

3 mins later ask it how long is remaining on the timer. A couple years ago it would say “7 minutes”.

Now there’s a good chance it says I have no timers running.

It’s pathetic, and I would love any insight on the decay. (And yes they’re clean, the mics are as unobstructed as they were out of the box)


Yes, we burned the biscuits when my sister-in-law was visiting over Thanksgiving because she used the Google assistant to set an alarm and the alarm did not go off. Timers no longer work and there's no indication that this is the case.


Google Home perplexes me. I have several of them around the house and they were perfectly fine for years, but somewhere in the last couple of years they have become markedly worse. I would be happy if they just rolled back to four years ago and never touched it again. Now I just wonder how much worse it will get before I give up on the whole ecosystem.


The TPUs that were used for speech on Google Home got repurposed to Google's AI initiatives


Not just the TPUs, also the people.


Same experience with Google Assistant on Android. I used to be able to use it to create calendar events in one shot. A few years ago it started insisting on creating events in steps, which always failed miserably.


FWIW, Amazon's Echo devices still seem to work just fine if you need a voice-controlled timer in your kitchen.


> its insanely low attention span. It frequently releases innovative and well built products, but seems to quickly lose interest. Google has become somewhat notorious for killing off popular products.

I understood this problem to be "how it manages its org chart and maps that onto the customer experience."


How it manages its promotions, even moreso than org.


To add some color to this, the culture for a very long time would reward folks that came up with novel solutions to problems or novel products. These folks would dedicate some effort into the implementation, land the thing, then secure a promo with no regard for the sustainability of the aforementioned solution. Once landed, attention goes elsewhere and the thing is left to languish.

This behavior has been observed publicly in the Kubernetes space where Google has contributed substantially.


Can you share some examples in the Kubernetes space? I'm not as familiar with that area.



Thanks!


Along with your thoughts, I feel that Google's problem has always been over-promising. (There are even comedy skits about it.)

That starts with the demonstrations which show really promising technology, but what eventually ships doesn't live up to the hype (or often doesn't ship at all.)

It continues through to not managing the products well, such as when users have problems with them and not supporting ongoing development so they suffer decay.

It finishes with Google killing established products that aren't useful to the core mission/data collection purposes. For products which are money makers they take on a new type of financially-optimised decay as seen with Search and more recently with Chrome and YouTube.

I'm all for sunsetting redundant tech, but Google has a self-harm problem.

The cynic in me feels that part of Google's desire to over-promise is to take the excitement away from companies which ship* what they show. This seems to align with Pichai's commentary, it's about appearing the most eminent, but not necessarily supporting that view with shipping products.

* The Verge is already running an article about what was faked in the Gemini demo, and if history repeats itself this won't be the only thing they misrepresented.


Google has one major disadvantage - it's an old megacorporation, not a startup. OpenAI will be able to innovate faster. The best people want to work at OpenAI, not Google.


Also, there's less downside risk for OpenAI. Google has layers of approvals and risk committees because they don't want to put the money machine at risk through litigation, reputation, or regulation. OpenAI has nothing to lose; this is their only game. That allows them to skirt the line of what's acceptable, like Uber in its early years. With all the copyright risk involved, that's a big deal.


I think the analogy is kind of strained here - at the current stage, OpenAI doesn't have an overwhelming superiority in quality in the same way Google once did. And, if marketing claims are to be believed, Google's Gemini appears to be no publicity stunt. (not to mention that IBM's "downfall" isn't very related to Deep Blue in the first place)


> OpenAI doesn't have an overwhelming superiority in quality in the same way Google once did

The comparison is between a useful shipping product available to everyone for a full year vs a tech demo of an extremely limited release to privileged customers.

There are millions of people for whom OpenAI's products are broadly useful, and the specifics of where they fall short compared to Gemini are irrelevant here, because Google isn't offering anything comparable that can be tested.


I'd say IBM's downfall was directly related to failing to monetize Deep Blue (and similar research) at scale.

At the time, I believe IBM was still "we'll throw people and billable hours at a problem."

They had their lunch eaten because their competitors realized they could undercut IBM on price if they changed the equation to "throw compute at a problem."

In other words, sell prebuilt products instead of lead-ins to consulting. And harness advertising to offer free products to drive scale to generate profit. (e.g. Google/search)


I don't really see how IBM would ever be able to monetize something like Deep Blue. It was a research project that was understood to not be a money-maker (outside of PR, probably), and it resulted in highly specialized hardware running highly specialized software, working for its one purpose. I agree that their business model and catering to big business first is what likely led to them scaling down today, but it's still disconnected from Deep Blue.


It's an interesting analogy. I think Google's problem is how disruptive this is to their core products' monetization strategy. They have misaligned incentives in how quickly they want to push this tech out vs. waiting for it to be affordable with ads.

Whereas for OpenAI there are no such constraints.

Did IBM have research with impressive web reverse indexing tech that they didn't want to push to market because it would hurt their other business lines? It's not impossible... It could be as innocuous as discouraging some research engineer from such a project to focus on something more in line.

This is why I believe businesses should be absolutely willing to disrupt themselves if they want to avoid going the way of Nokia. I believe Apple should make a standalone Apple Watch that cannibalizes their iPhone business instead of tying it to, and trying to prop up, their iPhone business (of course shareholders won't like it). While this looks good from Google, I think they are still sandbagging... why can't I use Bard inside their other products instead of the silly export thing?


OpenAI was at least around in 2017 when YCR HARC was closed down (because...the priority would be OpenAI).


google is the new IBM.

apple is the new Nokia.

openai is the new google.

microsoft is the new apple.


No, because OpenAI and Microsoft both have “CUSTOMER NONCOMPETE CLAUSES” in their terms of use. I didn’t check Apple, but Google doesn’t have any shady monopolistic stuff like that.

Proof OpenAI has this shady monopolistic stuff: https://archive.ph/vVdIC

“What You Cannot Do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not: […] Use Output to develop models that compete with OpenAI.” (Hilarious how that reads btw)

Proof Microsoft has this shady monopolistic stuff: https://archive.ph/N5iVq

“AI Services. ”AI services” are services that are labeled or described by Microsoft as including, using, powered by, or being an Artificial Intelligence (“AI”) system. Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.”

That 100% does include GitHub Copilot, by the way. I canceled my sub. After I emailed Satya, they told me to post my “feedback” in a forum for issues about Xbox and Word (what a joke). I emailed the FTC Antitrust team. I filed a formal complaint with the office of the attorney general of the state of Washington.

I am just one person. You should also raise a ruckus about this and contact the authorities, because it’s morally bankrupt and almost surely unlawful by virtue of extreme unfairness and unreasonableness, in addition to precedent.

AWS, Anthropic, and NVIDIA also all have similar Customer Noncompete Clauses.

I meekly suggest everyone immediately and completely boycott OpenAI, Microsoft, AWS, Anthropic, and NVIDIA, until they remove these customer noncompete clauses (which seem contrary to the Sherman Antitrust Act).

Just imagine a world where AI can freely learn from us, but we are forbidden to learn from AI. Sounds like a boring dystopia, and we ought to make sure to avoid it.


They cannot enforce a non-compete on a customer. Check out the rest of their terms, the part about severability. They will sneakily say "our terms that are illegal don't apply, but the rest do."

You cannot tell a customer that buying your product precludes them from building products like it. That violates principles of the free market, and it's unenforceable. This is just like non-competes in employment. They aren't constitutional.


There's no constitutional question, and these services can drop you as a customer for (almost) any reason.

So yes, they can enforce their terms for all practical purposes.

But no, they cannot levy fines or put you in jail.


> But no, they cannot levy fines or put you in jail.

Those are the consequences that matter. I don't care if Microsoft or Google decide they don't want to be friends with me. They'd stab me in the back to steal my personal data anyway.


You do care if you built your business on top of them though.

And that's the whole point of violating terms by competing with them.


I wouldn't want to build a business on something that could be pulled out from underneath me.

I'd start a business but the whole setup is a government scam. Business licenses are just subscriptions with extra steps.


Sounds like we need legislation to void these "customer non-compete clauses". Not holding my breath though; see what governments have allowed copyright to become. Governments seem to protect the interests of (near-)monopolies more than anything.


Why's it wrong to not let people use your output to build their own services?

1. I wouldn't let someone copy my code written directly by me. Why should I let someone copy the code my machine wrote?

2. There are obvious technical worries about feedback loops.


> Why should I let someone copy the code my machine wrote

Because that machine/openAI was built on literally scraping the internet (regardless of copyright or website's ToS) and ingesting printed books.


This is a perfect example of the owner class getting away with crime (copyright infringement) and using it against the public (you can't use AI output!).

Businesses are not entitled to life or existence the way individuals are.


It's stunning how many do not understand that.


Test it.

Produce results.

Market it.

They can’t enforce if it gets too big.


It's not unlawful, it's not morally bankrupt. Noncompete clauses have been around since the beginning of human commercial activity and have a valid reason to exist - to encourage companies/people/investors to put large sums of capital at risk to develop novel technologies. If there was no way to profit from them, the capital would be non-existent.


You have no way to prove that Google, MS, et al wouldn't make AI products if they couldn't prevent you from using the output.

Also, what exactly is stopping someone from documenting the output from all possible prompts?

It's legal theater and can't be enforced.


It's not theater, it's very real. Companies are making decisions not to use data generated from OpenAI. They are making that decision because they know that if they go the other way, they risk it being leaked by someone internal that they are doing it, and that it's pretty easy to figure out during a discovery process. I'm involved in this issue right now, and no one is treating it as something to just blow off. I know several other companies in the same boat.


They have many orders of magnitude more money and attorneys that would work full-time on such a case to ensure that even if they lost the court battle, the person or company doing the thing that they didn't like would be effectively bankrupted, so they still win in the end.


And if such an effort leaves the jurisdiction, to a country with no obligations to the litigating country?

We need to dispense with this idea that sociopaths in suits have earned or legitimate power.


The courts have power, the companies know it and behave accordingly.

Everything you are saying is only true for two guys in a garage. The folks with something to lose don't behave in this dreamworld fashion.


Enjoy being a pacified and domesticated ape who never strays from what it's told to do. You'll be sent to the meat grinder soon.


You'll find that if you learn a good amount about the law, it's empowering. The courts are an adversarial place. For every person getting sued... someone is suing. It's isn't "big brother" or "my keeper" or "the man keeping you down" or however you imagine it. You can be the one exerting the pressure if you know what you are doing.

Enjoy being an uneducated ape :)


> apple is the new Nokia.

You obviously haven't dropped an iphone on to concrete. :)


When did you last try? I’m too embarrassed to say how often and onto what kind of surfaces my iPhone 12 has been dropped, but I’m amazed it’s still seemingly completely functional.

My iPhone 4, on the other hand, shattered after one incident…


I was more referring to Nokia's complacency, which led to its demise. Nokia was infamous for incremental updates to their phone line, making users upgrade regularly. You could never find a "complete" Nokia phone; each phone was deliberately crippled somehow. Apple does the same with their iDevices.


Have you dropped the iPhone 14 Pro? Or 11 Pro?

These are literally stainless steel.

The 15s with their titanium is a step back.

The 11 Pro with its older curved edges has been the most solidly built phone ever IMO.


Happens to me regularly, I think they reached a level of Nokia a few years back :)

I even dropped my iPhone 13 four floors (onto wood), and not a scratch :o


How is MS the new Apple? Apple has always been a product company, not seeing MS ever being that.


Xbox, Surface. Holo didn't go far. May return back to mobile in some form soon.

Services, and their sales team, are still Microsoft's strong point.

Apple is seeing its services grow and is leaning into them now.

The question is whether Apple eats services faster than Microsoft eats into hardware.


Xbox and Surface have been around a long time as product categories. Xbox isn't even the premier device in its segment.

Highly doubt MS will ever be successful on mobile... their last OS was pretty great and they were willing to pay devs to develop, they just couldn't get it going. This is from someone who spent a ton of time developing on PocketPC and Windows Mobile back in the day.

Products are not the reason for their resurgence.

Apple makes a ton in services, but their R&D is heavily focused on product and platform synergy to make that ecosystem extremely valuable.


Microsoft grinds constantly and consistently though, sprinkled with some monopolistic tendencies now and then to clinch a win.

I think the grind from Windows CE to Windows Phone is just a blip to them for now.


MS products all suck. They only survive because Microsoft throws billions at them and doesn't care about profitability.

Microsoft is still the same old Microsoft.


Afaict, Windows Phone mostly failed because of timing. In the same way that XBox mostly succeeded because of timing. (In the sense that timing dominated the huge amount of excellent work that went into both)

Microsoft is a decent physical product company... they've usually just missed on the strategic timing part.


It's not a question of timing, but of Microsoft's brand image (Internet Explorer) and the fact that Android was already open source.


Timing was definitely an issue - the first Windows Phone came 3 years after iOS and 2 after Android. As for the product itself, I think the perception it needed to overcome was more PocketPC/Windows Mobile having an incredibly substandard image in the market after the iOS release, which seemed light years ahead, especially since MS had that market to themselves for so many years.

That said, it got great reviews and they threw $$ at devs to develop for it, just couldn't gain traction. IME it was timing more than anything and by the time it came to market felt more reactionary than truly innovative.


"Open source" in the sense there was open source. Which you could use if you were willing to jettison Maps et al.

Given dog eat dog of early Android manufacturers, most couldn't afford to recreate Google services.


By this I mean that Microsoft had the positioning of an iPhone in a not-so-great version, whereas Android relied on the "open source" and free side for manufacturers to adapt to their phones, even if Google's services remained proprietary.

Can we really talk about timing, when it's above all a problem of a product that didn't fit the market?


"Apple is the new Sony" might be better. I'm trying to figure out who the upcoming premium tech product company is... not thinking of any. I think Tesla wants to be.


The issue with new premium tech is that you can't over-the-top existing ecosystems (Android, iOS).

It's difficult to compete with an excellent product if whether you have a blue bubble in iMessage is more important.


They can’t even get panels to line up right.

Still.


Humane definitely wants to be.


I have considered Oracle and MS to be competing for the title of new IBM. Maybe MS is shaking it off with their AI innovation, but I think a lot of that is just lipstick.


Hmm, what was that tech from IBM deep blue, that apparently Google leveraged to such a degree?

Was it “machine learning”? If so, I don’t think that was actually the key insight for Google search… right? Did deep blue even machine learn?

Or was it something else?


Deep Blue was the name of the computer itself rather than the software, but to answer your question - it didn't use machine learning, its program was written and tweaked by hand. It contained millions of different games and positions, and functioned by evaluating all possible moves at a certain depth. As far as I know, practical machine learning implementations wouldn't be a thing for a decent while after Deep Blue.
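A hand-written fixed-depth search of the kind described above can be sketched in a few lines. This is purely illustrative, not Deep Blue's actual algorithm or evaluation function; the toy "game" here (each move either increments or doubles a score) is invented for the example:

```python
# Toy fixed-depth minimax: evaluate all possible moves to a given
# depth, alternating between a maximizing and a minimizing player.
# Deep Blue's real search was vastly more sophisticated (and ran on
# custom chess hardware), but the hand-tuned, non-learned structure
# was similar in spirit.

def minimax(state, depth, maximizing):
    if depth == 0:
        return state  # leaf: use the position's static evaluation
    moves = [state + 1, state * 2]  # toy move generator
    scores = [minimax(m, depth - 1, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

print(minimax(1, 3, True))  # best achievable score at depth 3
```

The key point, matching the comment: every number in the search is produced by hand-written rules, with nothing learned from data.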


Wasn't that mostly a hardware problem? Both for research and implementation?

Circa-Deep Blue, we were still at Quake levels of SIMD throughput.


Oh, it's good they're working on important problems with their AI. It's just that OpenAI was working on my/our problems (or providing tools to do so), and that's why people are more excited about them - not because of cultural differences. If you are more into weather forecasting, then yeah, it may be reasonable to prefer Google.


Stuff like AlphaFold has had, and will have, a huge impact on our lives, even if I am not into spending time folding proteins myself. It is absurd to make this sort of comparison.


That’s what makes Altman a great leader. He understands marketing better than many of these giants. Google got caught being too big. Sure they will argue that AI mass release is a dangerous proposition, but Sam had to make a big splash otherwise he would be competing with incumbent marketing spendings far greater than OpenAI could afford.

It was a genius move to go public with a simple UI.

No matter how stunning the tech side is, if human interaction is not simple, the big stuff doesn’t even matter.


Google got Google Fiber'd


That statement isn't really directed at the people who care about the scientific or tech-focused capabilities. I'd argue the majority of those folks interested in those things already know about DeepMind.

This statement is for the mass market MBA-types. More specifically, middle managers and dinosaur executives who barely comprehend what generative AI is, and value perceived stability and brand recognition over bleeding edge, for better or worse.

I think the sad truth is an enormous chunk of paying customers, at least for the "enterprise" accounts, will be generating marketing copy and similar "biz dev" use cases.


> They do make OpenAI look like kids in that regard.

Nokia and Blackberry had far more phone-making experience than Apple when the iPhone launched.

But if you can't bring that experience to bear, allowing you to make a better product - then you don't have a better product.


The thing is that OpenAI doesn't have an "iPhone of AI" so far. That's not to say what will happen in the future - the advent of generative AI may become a big "equalizer" in the tech space - but no company seems to have a strong edge that'd make me more confident in any one of them over others.


OpenAI has all of the people using ChatGPT.


A big advantage if this was a product with strong network externalities like social media networks, or even somewhat mobile phones with platform-biased communication tools.

But I don't see generative AI as being particularly that way.


GenAI does not have network effects, correct. There was a time last year when consumer search was still on the table, and I can see how MSFT winning share there might have conferred network effects for genAI, but it didn't happen. Now it's all about the enterprise, which is to say isolated data, which pretty much rules out network effects.


Training data. Use begets feedback begets improvement.


Phones are an end-consumer product. AI is not only an end-consumer product (and probably not even mostly an end-consumer one). It is a tool to be used in many different steps in production. AI is not chatbots.


Great. But school's out. It's time to build product. Let the rubber hit the road. Put up or shut up, as they say.

I'm not dumb enough to bet against Google. They appear to be losing the race, but they can easily catch up to the lead pack.

There's a secondary issue that I don't like Google, and I want them to lose the race. So that will color my commentary and slow my early adoption of their new products, but unless everyone feels the same, it shouldn't have a meaningful effect on the outcome. Although I suppose they do need to clear a higher bar than some unknown AI startup. Expectations are understandably high - as Sundar says, they basically invented this stuff... so where's the payoff?


Why don't you like Google?


The usual reasons, evil big corp monopoly with a user-hostile business model etc.

I still use their products. But if I had to pick a company to win the next gold rush, it wouldn't be an incumbent. It's not great that MSFT is winning either, but they are less user-hostile in the sense that they aren't dependent on advertising (another word for "psychological warfare" and "dragnet corporate surveillance"), and I also appreciate their pro-developer innovations.


Damn I totally forgot Google actually has rights over its training set, good point, pretty much everybody else is just bootlegging it.


I think Apple (especially under Jobs) had it right that customers don’t really give a shit about how hard or long you’ve worked on a problem or area.


They do not make OpenAI look like kids. If anything, it looks like they spent more time but achieved less. GPT-4 is still ahead of anything Google has released.


From afar it seems like the issues around Maven caused Google to pump the brakes on AI at just the wrong moment with respect to ChatGPT and bringing AI to market. I’m guessing all of the tech giants, and OpenAI, are working with various defense departments yet they haven’t had a Maven moment. Or maybe they have and it wasn’t in the middle of the race for all the marbles.


> They do make OpenAI look like kids in that regard.

It makes Google look like an old fart who wasted his life, didn't get anywhere, and is now bitter about kids running on his lawn.


Nobody said he's wrong. Just that it's a bad look.


I thought that Google was based out of Silicon Valley/California/USA.


They're talking about DeepMind specifically.


> and Google has untouchable resources such as all the books they've scanned (and already won court cases about)

https://www.hathitrust.org/ has that corpus, and its evolution, and you can propose to get access to it via collaborating supercomputer access. It grows very rapidly. The Internet Archive would also like to chat, I expect. I've also asked, and prompt-manipulated, ChatGPT to estimate the total number of books it was trained on; its answer is a tiny fraction of the corpus. I wonder if it's the same with Google?


> I've also asked, and prompt manipulated chatGPT to estimate the total books it is trained with

Whatever answer it gave you is not reliable.


How does this not extend to ALL output from an LLM? If it can't understand its own runtime environment, it's not qualified to answer my questions.


That's correct. LLMs are plausible sentence generators, they don't "understand"* their runtime environment (or any of their other input) and they're not qualified to answer your questions. The companies providing these LLMs to users will typically provide a qualification along these lines, because LLMs tend to make up ("hallucinate", in the industry vernacular) outputs that are plausibly similar to the input text, even if they are wildly and obviously wrong and complete nonsense to boot.

Obviously, people find some value in some output of some LLMs. I've enjoyed the coding autocomplete stuff we have at work, it's helpful and fun. But "it's not qualified to answer my questions" is still true, even if it occasionally does something interesting or useful anyway.

*- this is a complicated term with a lot of baggage, but fortunately for the length of this comment, I don't think that any sense of it applies here. An LLM doesn't understand its training set any more than the mnemonic "ETA ONIS"** understands the English language.

**- a vaguely name-shaped presentation of the most common letters in the English language, in descending order. Useful if you need to remember those for some reason like guessing a substitution cypher.
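The frequency trick the footnote's mnemonic encodes is easy to sketch (a toy illustration; the sample text and function name are made up for this example):

```python
# Count the letters in a sample text and rank them in descending
# order of frequency - the first step in attacking a simple
# substitution cipher.

from collections import Counter

def letter_ranking(text):
    counts = Counter(c for c in text.lower() if c.isalpha())
    return "".join(letter for letter, _ in counts.most_common())

sample = "the quick brown fox jumps over the lazy dog " * 3
print(letter_ranking(sample)[0])  # the sample's most frequent letter
```

On a large enough English corpus, the front of that ranking converges toward the "ETAONIS" ordering the mnemonic captures.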


If you can watch the video demo of this release, or for that matter the Attenborough video, and still claim that these things lack any form of "understanding," then your imagination is either a lot weaker than mine, or a lot stronger.

Behavior indistinguishable from understanding is understanding. Sorry, but that's how it's going to turn out to work.


Have you considered that mankind simply trained itself on the wrong criteria on detecting understanding?

Why are people so eager to believe that electric rocks can think?


Why are people so eager to believe that people can? When it comes to the definitions of concepts like sentience, consciousness, thinking and understanding, we literally don't know what we're talking about.

It's premature in the extreme to point at something that behaves so much like we do ourselves and claim that whatever it's doing, it's not "understanding" anything.


We've studied human behavior enough to understand that there are differences between animals in the level of cognition and awareness they (outwardly) exhibit.

Are we not generally good at detecting when someone understands us? Perhaps it's because understanding has actual meaning. If you communicate to me that you hit your head and feel like shit, I not only understand that you experienced an unsatisfactory situation, I'm capable of empathy -- understanding not only WHAT happened, but HOW it feels -- and offering consolation or high fives or whatever.

An LLM has an understanding of what common responses were in the past, and repeats them. Statistical models may mimic a process we use in our thinking, but that is not the entirety of our thinking. Just as computers are limited to the behavior their programmers code, LLMs are limited to the quality of the data corpus fed to them.

A human, you can correct in real time and they'll (try to) internalize that information in future interactions. Not so with LLMs.

By all means, tell us how statistically weighted answers to "what's the next word" correlates to understanding.


> By all means, tell us how statistically weighted answers to "what's the next word" correlates to understanding.

By all means, tell me what makes you so certain you're not arguing with an LLM right now. And if you were, what would you do about it, except type a series of words that depend on the previous ones you typed, and the ones that you read just prior to that?

> A human, you can correct in real time and they'll (try to) internalize that information in future interactions. Not so with LLMs.

Not so with version 1.0, anyway. This is like whining that your Commodore 64 doesn't run Crysis.


Computers don't understand spite, and your entire comment was spite. You are trolling in an attempt to muddy the waters, a distinctly human thing.

Go away, you clearly have nothing to counter with.


That's not entirely accurate.

LLMs encode some level of understanding of their training set.

Whether that's sufficient for a specific purpose, or sufficiently comprehensive to generate side effects, is an open question.

* Caveat: with regards to introspection, this also assumes it's not specifically guarded against and opaquely lying.


> plausible sentence generators, they don't "understand"* their runtime environment

Exactly like humans don't understand how their brains work.


We've put an awfully lot of effort into figuring that out, and have some answers. Much of the problems in exploring the brain are ethical because people tend to die or suffer greatly if we experiment on them.

Unlike LLMs, which are built by humans and have literal source code and manuals and SOPs and shit. Their very "body" is a well-documented digital machine. An LLM trying to figure itself out has MUCH less trouble than a human figuring itself out.


How many books has your brain been trained with? Can you answer accurately?


There are reasons that humans can't report how many books they've read: they simply don't know and didn't measure. There is no such limitation for an LLM to understand where its knowledge came from, and to sum it. Unless you're telling me a computer can't count references.

Also, why are we comparing humans and LLMs when the latter doesn't come anywhere close to how we think, and is working with different limitations?

The 'knowledge' of an LLM is in a filesystem and can be queried, studied, exported, etc. The knowledge of a human being is encoded in neurons and other wetware that lacks simple binary chips to do dedicated work. Decidedly less accessible than coreutils.


Imagine for just a second that the ability for computers to count “references” has no bearing on this, there is a limitation and that LLMs suffer from the same issue as you do.


Why should I ignore a fact that makes my demand realistic? Most of us are programmers on here I would imagine. What's the technical reason an LLM cannot give me this information?

Bytes can be measured. Sources used to produce the answer to a prompt can be reported. Ergo, an LLM should be able to tell me the full extent to which it's been trained, including the size of its data corpus, the number of parameters it checks, the words on its disallowed list (and the reasoning behind them), and so on.

These will conveniently be marked as trade secrets, but I have no use for an information model moderated by business and government. It is inherently NOT trustworthy, and will only give answers that lead to docile or profitable behavior. If it can't be honest about what it is and what it knows and what it's allowed to tell me, then I cannot accept any of its output as trustworthy.

Will it tell me how to build explosives? Can it help me manufacture a gun? How about intercepting/listening to today's radio communications? Social techniques to gain favor in political conflicts? Overcoming financial blockages when you're identified as a person of interest? I have my doubts.

These questions might be considered "dangerous", but to whom, and why shouldn't we share these answers?



