Hacker News | zmgsabst's comments

Also the lifecycle of your system, eg, I’ve maintained projects that we no longer actively coded, but we used the tests to ensure that OS security updates, etc didn’t break things.

You can helicopter shells and propellant onto the deck, then take them below for storage — moving shells between the magazines and the guns already happens belowdecks.

VLS requires that you reload cell by cell, from the top, at the place the missiles are fired from — which requires crane access to each VLS cell. You could replace the many non-reloadable tubes with fewer, reloadable tubes connected via loaders to magazines… but then we're starting down the path of re-inventing guns.


Helicopters don't have that much range - certainly way less than a ship does. So either you're close enough to a land base that they can make the trip, or you're operating from another munitions ship - it's all the same problem.

And again, you're paying for all of this in the form of far slower firing guns with less range and precision.


Helicopters can operate off supply ships, 100km back from the conflict area and ferry munitions to your battleship that’s standing only 20km back. You can also use airdrops from cargo planes, delivery by small boats, or dropping back to meet the supply ship directly. None of those methods resupply VLS cells.

We’re also not debating a return to old guns — but to a modern version using autoloaders and shells equipped with guidance and range extension to around 100km. Using barrages of all barrels, it’s closer to firing off waves of ~45 missiles at targets 100km away (9 guns, 5 rounds per minute burst).

The real difference is a battleship carries 1200 rounds instead of 120 VLS cells — and can replenish those rounds at sea. We gain increased storage and endurance for decreased burst capacity, but remain over 45 rounds/min, excluding the VLS cells (which a modern battleship would also have).
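The arithmetic above can be sketched directly — all figures are the comment's assumed numbers, not real ship specifications:

```python
# Assumed figures from the comment: 9 guns firing 5-round-per-minute bursts,
# and a 1200-round magazine versus 120 single-shot VLS cells.
guns, rounds_per_min = 9, 5
magazine_rounds, vls_cells = 1200, 120

burst = guns * rounds_per_min          # rounds per minute in a full barrage
endurance = magazine_rounds // burst   # minutes of sustained fire
print(burst, endurance)                # 45 rounds/min, ~26 minutes
```

So under these assumptions the gun ship trades burst capacity for roughly ten times the ready munitions of the VLS loadout.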


The problem is that 100km back just isn't very far when missiles like the Ukrainian Neptune currently have a range of 200km, with extended-range variants in the works that push that to 1,000km.

That's a non-NATO, "country at war" system. Within NATO inventory you have the Tomahawk that dates to the 80s and has a range of 1,350km conservatively.

So if you needed to fulfill a long-duration shore bombardment mission against a non-peer opponent... sure, there are advantages to being able to loiter and reload.

But it seems abundantly clear that versus any peer or near-peer opponent, the closer to their coastline you get, the further inland they can launch anti-ship missiles from - which they are heavily incentivized to do. The sky is also just getting more and more dangerous: a ship within 100km of a shoreline is starting to be in range of medium-weight drones, or of autonomous surface vessels (which might themselves deploy drones, as the Ukrainians have been doing).

In your example, the issue isn't that the ship doing the shooting is in range: it's that the resupply ship is also in range and a better target.


I’d suspect that an AI CEO would have to complement its weaknesses — just like any CEO. And in this case, rely on subordinates for glad-handing and vision pitches, while itself focusing on coordinating them, staging them to the right meetings, coordinating between departments, etc.

I think an AI could be strong at a few skills, if appropriately chosen:

- being gaslightingly polite while firmly telling others no;

- doing a good job of compressing company wide news into short, layperson summaries for investors and the public;

- making PR statements, shareholder calls, etc; and,

- dealing with the deluge of meetings and emails to keep its subordinates rowing in the same direction.

Would it require that we have staff support some of the traditional soft skills? Absolutely. But there’s nothing fundamentally stopping an AI CEO from running the company.


One massive bullet is missing that is worth more than the other things combined: synthesizing all company data and developing a cohesive strategy that achieves some stated long-term vision.

There is no shortage of data a company has at its disposal these days, and a CEO will bias towards what they feel they are best at. We see that with Steve Jobs versus Tim Cook. Tim loves seeing numbers go up and to the right, so that's where the passion is in the company these days. An AI CEO that could not only balance that out but cancel it out could be a real strength.

The human CEO would still be indispensable in setting the company vision and defining its culture and values: crucial ingredients for execution.


Sure — but people reasonably distinguish between photos and digital art, with “photo” used to denote the intent to accurately convey rather than artistic expression.

We’ve had similar debates about art using miniatures and lens distortions versus photos since photography was invented — and digital editing fell on the lens trick and miniature side of the issue.


Journalistic/event photography is about accuracy to reality, almost all other types of photography are not.

Portrait photography -- no, people don't look like that in real life with skin flaws edited out

Landscape photography -- no, the landscapes don't look like that 99% of the time, the photographer picks the 1% of the time when it looks surreal

Staged photography -- no, it didn't really happen

Street photography -- a lot of it is staged spontaneously

Product photography -- no, they don't look like that in normal lighting


This is a longstanding debate in landscape photography communities - virtually everyone edits, but there’s real debate as to what the line is and what is too much. There does seem to be an idea of being faithful to the original experience, which I subscribe to, but that’s certainly not universal.

Re landscape photography: If it actually looked like that in person 1 percent of the time, I'd argue it's still accurate to reality.

There are a whole lot of landscape photographs out there whose realism I can vouch for, because I do a lot of landscape photography myself and tend to get out at dawn and dusk a lot. There are lots of shots I've gotten where the sky looked a certain way for a grand total of 2 minutes before sunrise, and I can recognize similar lighting in other people's shots as real.

A lot of armchair critics on the internet who only go out to their local park at high noon will say they look fake, but they're not.

There are other elements by which I can spot realism where the armchair critic will call it a "bad photoshop". For example, a moon close to the horizon usually looks jagged and squashed due to atmospheric effects. That's the sign of a real moon. If it looks perfectly round and white at the horizon, I would call it fake.


Nothing can be staged spontaneously.

I’d suspect the other direction:

Police unions get LLMs classified as some kind of cognitive aid, so it becomes discrimination to ban them in school or the workplace.


"Losing access to LLMs hurts minorities the hardest, with job performance suffering compared to their cis white male peers..."

If they use this angle, it's a shoo-in


That is an aspect I had not considered in my assumption that AI/robots will eventually go through the same or similar social justice process as all the other causes — women's suffrage, racial equality, gay rights, etc. Arguably, more than all the prior causes célèbres, this one will serve the ruling class that has risen to dominate through social justice causes far more than anything before it.

It’s going to be interesting to see the state propaganda against the bigots and evil bioists (or whatever the wordsmithing apparatchiks devise) who want to bar the full equality in society of AI/robots — who look just like you and me after all, and who also just want equal rights to love each other. And who are you to oppose others, since we are all just individuals?

Shoot the messenger all you want, but it’s coming.


Cynical and fun to read, but no. Too many parasites have already chewed their way to the empty heart of power of the post-war liberal system, and I think the next time it gets power at the highest levels in the US will be the end of it there. Maybe it will last another generation in Europe, but not long enough to see the scenario you describe play out.

It's not cynical at all. It's quite the opposite, actually: an expression of the suicidal and pathological altruism that has caused the West to self-destruct through the guiding hand of psychopathic, narcissistic charlatan leaders and con artists.

I am unsure how Europe will go, because there is still a glimmer of hope — but frankly, that too is dimming extremely quickly given how systemic things really are, let alone how they are developing, with real outcomes trending more pessimistic than expected.

What you may be missing is that there is a possibility where your presumed resistance or rejection of AI and robotic equality is forced upon you one way or another; either you are forced to "arms race" adoption, or the superior external force foists subjugation to their AI/robotics dominance on you (a kind of 19th century Chinese/Japanese, Industrial Revolution comes knocking at the front door experience).

Unfortunately for us all, some things you are simply foolish to ignore or resist as if they will somehow magically go away or ignore you too. The reality of the matter is that the psychopathic, narcissistic tribe of people who control these obsessive, imposing forces care immensely about dominating and controlling you; even if you want to ignore them, they will not ignore you, let alone leave you be until you are subjugated.


No — providing funding to promote creation and discovery is why those exist; granting a temporary monopoly is the mechanism meant to accomplish that goal.

This sounds pedantic, but it’s important to not mistake the means for an end:

> To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;

https://constitution.congress.gov/browse/article-1/section-8...


I wish I could get defensive protection against the millions and millions of bad software patents out there. But there is no such thing.

Did you know that Facebook owns the patent on autocompletes? Yahoo owned it and Facebook bought it from them as kind of a privately owned nuclear weapon to create a doctrine of mutually assured destruction with other companies who own nuclear-weapons-grade patents.

Of course the penalty for violating a patent is much worse if you know you are doing it, so companies are very much not eager to have the additional liability that comes with their employees being aware that every autocomplete is a violation of patent law.


That’s actually a negative:

My Docker build generating the bytecode saves it to the image, sharing the cost at build time across all image deployments — whereas building at first execution means that each deployed image instance has to generate its own bytecode!

That’s a massive amplification, on the order of 10-100x.

“Well just tell it to generate bytecode!”

Sure — but when is the default supposed to be better?

Because this sounds like a massive footgun for a system where requests >> deploys >> builds. That is, every service I’ve written in Python for the last decade.
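A minimal sketch of the build-time approach, using the stdlib `compileall` module that `python -m compileall` invokes (directory and module names here are illustrative, not from any real project):

```python
import compileall
import pathlib
import tempfile

def precompile(tree: str) -> int:
    """Precompile every .py file under `tree` and return the .pyc count.

    This mirrors a `RUN python -m compileall /app` step in a Dockerfile:
    bytecode is generated once at build time and baked into the image,
    rather than once per deployed container at first import.
    """
    compileall.compile_dir(tree, quiet=1)
    return len(list(pathlib.Path(tree).rglob("*.pyc")))

# Demo: one module in a scratch directory yields one cached .pyc
# under __pycache__, exactly what first-import compilation would produce.
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "service.py").write_text("GREETING = 'hi'\n")
    print(precompile(d))  # 1
```

With the `.pyc` files in the image, every container instance starts from warm bytecode instead of paying the compile cost N times for N deployments.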


There’s a lot of us who think the tension is overblown:

My own results show that you need fairly strong theoretical knowledge and practical experience to get the maximal impact — especially for larger synthesis. Which makes sense: to have this software, not that software, the specification needs to live somewhere.

I am getting a little bored of hearing about how people don’t like LLM content, but meh. SDEs are hardly the worst on that front, either. They’re quite placid compared to the absolute seething by artist friends of mine.


> There are plenty of non-native English tech enthusiasts writing absolute gems in the most broken English you can imagine! Nobody has ever had trouble distinguishing those from low-quality garbage.

Your entire theory about LLMs seems to rely on that… but it’s just not true, eg, plenty of quality writing with low technical merit is making a fortune while genuinely insightful broken English languishes in obscurity.

You’re giving a very passionate speech about how no dignified noble would be dressed in these machine-made fabrics: while some are surely as finely woven as those by any artisan, they bear the unmistakable stain of association with the plebs who wear machine-made fabrics.

I admire the commitment to aesthetics, but I think you’re fighting a losing war against the commoditization and industrialization of certain intellectual work.


Well, no.

Because authors do two things typically when they use an LLM for editing:

- iterate multiple rounds

- approve the final edit as their message

I can’t do either of those things myself — and your post implicitly assumes there’s underlying content prior to the LLM process; but it’s likely to be iterated interactions with an LLM that produces content at all — ie, there never exists a human-written rough draft or single prompt for you to read, either.

So your example is a lose-lose-lose: there never was a non-LLM text for you to read; I have no way to recreate the author’s ideas; and the author has been shamed into not publishing because it doesn’t match your aesthetics.

Your post is a classic example of demanding everyone lose out because something isn’t to your taste.


Thank you for your post, it's more elegant than my explanation and makes good arguments.

Sometimes I question my sanity these days when my (internally) valid thoughts seem to swoosh by externally.

