Hacker News

In my opinion, this is all a massive gamble by Musk to pivot Tesla to an AI-first tech company. Except that Tesla cannot really do AI well and doesn't have the resources or talent that the likes of Google, Meta, and OpenAI do to do novel research and push AI forward.

And he has to make this gamble because Tesla's fundamentals as a car company are going down the drain, and its entire valuation hinges on the premise that they are not just a car company. That's why he's constantly announcing new products (robotaxis, humanoid robots) that are nowhere close to materializing, making visits to China to ink an HD-maps deal with Baidu for FSD, and claiming he'll spend $10B on AI infrastructure this year.

He seems to be in forever stock-pump mode, so much so that Tesla's best product to date might just be its stock.



> He seems to be in forever stock-pump mode, so much so that Tesla's best product to date might just be its stock.

A lot of companies seem to be gutting everything to the point where their only product is their stock.


This isn't new. HP, IBM, GE, and on and on.


I can’t wait to point and laugh when the correction comes due.


You won't be laughing when your and your parents' pensions are wiped out, because they buy proportionally into the S&P 500.


It won't be a big deal given 5 or 10 years.


The TV series "Silicon Valley" does not disappoint.


Except that AI has not shown itself to be useful, at least not considering the staggering costs, anyway. Tens of billions of dollars to... fix people's grammar and generate tons of SEO spam? What PROBLEM are they solving here?


I think the parent poster stated it pretty clearly: the problem they're trying to solve is how to keep the stock price floating at a multiple of what the business's actual fundamentals suggest it should be.

The problem is, while that worked well for a good 10 or 20 years, it seems that people are now starting to catch on to the scheme. But I'm not sure that means that you can just stop doing it. As someone elsewhere in the thread pointed out, dragging things out as much as you can is probably preferable to a sudden and brutal value correction for just about everyone with actual skin in the game.


You have to remember that a lot of those expenses are actually goosing the companies' revenue side with low-quality revenue. A good portion of the investment dollars into OpenAI come as Microsoft credits, which OpenAI then uses instead of real money.

That doesn't answer your question about the problem, though it means the real-dollar cost of investment and training is lower.


It's good at wake word detection and industrial anomaly autocorrelation.

That's about it.


FSD is currently using neural-network style AI, and it's frankly amazing to use and watch and is massively useful.


In what way is it useful? What value is being provided? In my experience it requires constant supervision and readiness to intervene at any moment. There are plenty of reports and photos of it running wheels into curbs with little time for the driver to react.

Given that, while using it you do not regain any time or attention that you would have otherwise spent driving. That doesn't mean it isn't impressive. A car that can drive itself like a 15-year-old on their first outing with a fresh learner's permit that needs constant coaching from a parent or instructor is very impressive, just not useful.

I will say that in clear conditions on long highway trips, basic Autopilot does have utility. It allows you to divert some attention from keeping the car between the lines and matching the speed of the car in front, use that attention to keep an eye on the larger traffic picture, and arrive at your destination slightly less fatigued. Using FSD on city streets seems like the opposite of that to me: an increase in stress and workload that currently provides no practical utility.


I don't think they lead the pack on that, though. Everybody in the self-driving space is using AI to some degree.

E.g. Waymo was at 17,311 miles per disengagement (human takeover) in their 2023 report, and they're not even the top. Zoox was the top at 177,602 miles per disengagement, which is shockingly good if they're not gaming those numbers with tiny service areas or something.

I don't think Tesla publishes their disengagement data, but what I can find crowdsourced from their users is pretty bad relative to the above. The most optimistic number I could find was from 2022 at ~400 miles per disengagement. That's not even very good for 2022; Mercedes-Benz was at 1,400 miles per disengagement, and I didn't even know they had a self-driving division. Nissan was at 149 miles/disengagement, which makes Nissan their closest competitor by capabilities (the next highest after Tesla was QCraft.ai at 863 miles/disengagement, no idea who they are).
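To put those figures on one scale, here is a quick sketch (the numbers are the ones cited above; the company methodologies differ wildly in service area and road mix, so treat the comparison as order-of-magnitude only, not apples-to-apples):

```python
# Miles per disengagement, as cited in the comment above
# (2022/2023 reports plus crowdsourced Tesla data).
miles_per_disengagement = {
    "Zoox (2023)": 177_602,
    "Waymo (2023)": 17_311,
    "Mercedes-Benz (2022)": 1_400,
    "QCraft.ai (2022)": 863,
    "Tesla FSD, crowdsourced (2022)": 400,
    "Nissan (2022)": 149,
}

for name, miles in sorted(miles_per_disengagement.items(),
                          key=lambda kv: -kv[1]):
    # Expected disengagements over a 10,000-mile year of driving.
    per_10k = 10_000 / miles
    print(f"{name:32s} {miles:>8,} mi/disengagement "
          f"(~{per_10k:.1f} per 10k miles)")
```

At ~400 miles per disengagement, a typical 10,000-mile driving year would mean roughly 25 interventions, versus less than one for the Waymo figure.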


Is that the thing that keeps crashing people into highway barriers?


That, and inadequate sensor diversity and coverage.


Any recent version of FSD (i.e. 12.3.x) is technology close to magic.

There was a time when HN would recognise technological achievement on its own merit, without allowing personal politics to cloud our judgement.

But sadly, we're in a perverse era of political tribalism where FSD is bad because https://elonbad.com/


FSD is cool as a great demo. But the optics and the facts are that it's gotten people killed. Multiple people over the years, with silly mistakes causing crashes.

It will get better but definitely does not live up to the “full self driving” marketing hype. That kills the magic.

Meanwhile look at Waymo. They don’t make a lot of noise. They take safety really seriously and keep on improving actual “self driving cars” city by city. Zero people dead.

I’ve sat in both. FSD was a great demo, but Waymo truly felt like magic. No driver at all!


I don’t think we will ever have any self-driving tech that intermingles with non-self-driving cars and results in zero deaths.

While sad, mile for mile FSD is better than humans.


How the heck did you reach the conclusion that the current state of FSD is safer than a human driver? The Tesla Community FSD tracker has it at 157 city miles per disengagement at the moment, and the people collecting this data are Tesla enthusiasts.

It has a very long way to go before it is better than a human. We won’t know the true stats until it is allowed to operate without supervision.

Source: https://www.teslafsdtracker.com/Main


FSD only drives in a subset of conditions humans drive in so the comparison is invalid.


> While sad, mile for mile FSD is better than humans.

For this statement to be correct, we’d need to have full disclosure of all travel using FSD at any point, accidents which happened anytime FSD was active or had been recently deactivated (for example, that guy who fell asleep counts even if FSD deactivated a minute before the vehicle crashed), and be able to compare that to the same trips driven by human drivers. You especially need to avoid including incidents in the human stats which are in conditions where FSD would do even worse.
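The condition-matching point is the crux. Here is a toy sketch (every number below is invented purely for illustration) of how a system that is worse than humans in every individual road condition can still look safer in aggregate, simply because it only logs miles in the easy conditions:

```python
# Hypothetical incident rates per million miles, split by condition.
# All numbers are made up to illustrate the selection-bias effect
# (Simpson's paradox); they are not real Tesla or NHTSA statistics.
rates = {
    #          (human, fsd)  incidents per 1M miles
    "highway": (1.0, 1.5),   # FSD worse on highways...
    "city":    (4.0, 6.0),   # ...and worse in cities.
}

# Exposure: humans drive a realistic mix of conditions, while the
# driver-assist system is engaged almost exclusively on easy highway miles.
human_mix = {"highway": 0.40, "city": 0.60}  # fractions of total miles
fsd_mix   = {"highway": 0.95, "city": 0.05}

def aggregate(rate_idx, mix):
    """Exposure-weighted incident rate across conditions."""
    return sum(rates[c][rate_idx] * mix[c] for c in rates)

human_rate = aggregate(0, human_mix)  # 1.0*0.40 + 4.0*0.60 = 2.800
fsd_rate   = aggregate(1, fsd_mix)    # 1.5*0.95 + 6.0*0.05 = 1.725

print(f"human aggregate: {human_rate:.3f} incidents/1M mi")
print(f"fsd aggregate:   {fsd_rate:.3f} incidents/1M mi")
```

The aggregate favors the automated system even though it loses in both conditions, which is exactly why a headline "safer per mile" claim is meaningless without exposure-matched data.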


This notion is ridiculous to me every time I hear it. How can we objectively measure that it's safer? It feels way too easy to miss an externality and just chalk it up to Tesla's per-mile numbers being safer on a technicality.


"Any recent version", because "still recent, but not AS recent versions" were lucky to navigate a well marked roundabout in daylight without causing near misses.

FSD will be "close to magic" when it's 11pm on a Pittsburgh night in January, with the snow coming down, road markings barely visible, if at all, and it still gets you home.


If it's 11pm on a Pittsburgh night in January, with the snow coming down, road markings barely visible, if at all, then the safe behaviour is to wait it out and drive in the morning.

If FSD didn't work in those conditions, and this encouraged you to wait it out, then it might have just saved your life (or the lives of other road users).


I subscribe to this thought. HN has really made a turn that anything Elon does is bad even when he has managed to pull off some unbelievable feats. He isn't binary in his accomplishments as most people are fairly complex.

It's a sad state of affairs, though I imagine it's mostly crossover from younger generations blending their polarizing Reddit politics in over here. It is a dilutive process, unfortunately.


If you're a top-tier AI researcher, why TF would you choose Tesla to work for? The shine has gone off Musk as a super-genius. All you'd be getting is an arbitrary, capricious boss and terrible work hours.


And even if you delivered something nice you could still get fired for something stupid. No thanks.


Tesla AI orgs can pay well above market for AI talent. That's about the only reason anyone would join. If you are insensitive to work hours but want to get paid, it's not the worst option.


Do they really outbid Google, Microsoft, Facebook, OpenAI, Apple, etc.? I totally believe they pay better than the average startup but those companies are spraying money around right now and their stock options have a lot more upside – Tesla’s P/E is wildly high so they’d need a phenomenal reversal in fortune to drive it enough higher for anyone to see a great return.


It's not uncommon for PhD hires from top schools to be paid upper six figures (i.e. $600k+) of real money, according to a few friends of mine doing PhDs at top schools. OpenAI is probably the only one that's truly "competitive."

I also (personally) know of several non-ML offers that were substantially higher than FAANG. Though even within FAANG, only Meta and maybe Google (and Amazon at higher levels) are very competitive. Meta and Google likely have better growth prospects, though, with better refreshers, bonuses, stock growth, etc. But up front, Tesla AI offers are definitely very strong. Even then, I'm not sure it's worth the stress.


> That's why he's constantly announcing

Securities fraud.


Yes, that seems correct from what I've seen. And if self driving can be improved enough, then it will pay off. However, I remain skeptical that he'll be able to improve it enough to compensate for other deficiencies in the product.


> And if self driving can be improved enough, then it will pay off.

Hasn't Uber been waiting for the same thing to save it?


Uber has been "out" of the self-driving game since the end of 2020. The post-TK (Travis Kalanick) era has seen major divesting and a shifting of the balance sheet away from longer-term bets like self-driving, in an attempt to reach profitability and shield Uber from more risk. Here is a short history of self-driving @ Uber.

August '15: Uber announces its partnership with Carnegie Mellon University (CMU) to establish the Uber Advanced Technologies Group (ATG) in Pittsburgh, focusing on self-driving technology research and development.

May '16: Uber begins testing its self-driving Ford Fusion vehicles in Pittsburgh, marking its first public demonstration of autonomous vehicle technology.

August '16: Uber acquires Otto, a self-driving truck startup founded by former Google engineers. This ended up with a lawsuit over stolen IP that was a large setback from what I remember.

September '16: Uber launches its first self-driving ride-hailing service in Pittsburgh, using a fleet of modified Ford Fusions. Safety drivers are present in each car to take control if needed.

December '16: Uber starts testing its self-driving cars in San Francisco without obtaining the necessary permits from the CA DMV. The DMV revokes the registration of Uber's test vehicles, forcing the company to halt its operations in the city.

March '17: Uber resumes its self-driving tests in Arizona, taking advantage of the state's friendly regulations for autonomous vehicle testing.

March '18: A fatal accident occurs in Tempe, Arizona, when an Uber self-driving vehicle strikes and kills a pedestrian. Uber suspends its self-driving testing program across all cities in the aftermath of the incident.

July '18: Uber resumes its self-driving tests in Pittsburgh, with additional safety measures and limitations on the vehicles' operating conditions.

March '19: Uber receives a $1 billion investment from SoftBank Vision Fund, Toyota, and DENSO for its self-driving unit, valuing the division at $7.25 billion.

June '19: Uber resumes its self-driving tests in San Francisco, having obtained the necessary permits from the California DMV.

December '20: Uber sells its ATG division to Aurora, a self-driving vehicle startup, in a deal valued at $4 billion. As part of the agreement, Uber invests $400 million in Aurora and retains a 26% stake in the combined company.



