Hacker News | simon_000666's comments

I really wish they'd remake this game - it was the best. Same story, same turn-based gameplay, same maps, updated graphics; that's all it needs.

Would also be awesome if there was an MMORPG version where you could wander the wasteland and come across real people...


You had me until MMORPG. I hate ESO/FO76, it’s a completely different game, one I have zero interest in playing.


> Would also be awesome if there was an MMORPG version

There's always Fallout Online: https://www.youtube.com/watch?v=hg3mK1yVnWM


Would be nice if they fixed and revived a few broken quests too. I'm worried a bit about how recent remakes have been handled, but maybe after 40k crashes, old-game remakers will start respecting the source again.


some guy in Poland is making Fallout 2 in 3D... it even works in the browser? somewhat

https://jonasz-o.itch.io/fallout2remake3d/devlog/708154/upda...


Wouldn’t strapping it to the bottom of a blimp/airship have been simpler?


Or multiple helicopters flying in formation - also a possibility? Or just a bunch of drones?


Or A Very Large Quadcopter, because they already know a thing or two about making large rotating airfoils with a hub full of neodymium and coils?

My guess would be range is the limiting factor: helicopters struggle to keep themselves in the air for more than a few hours, before taking into account any meaningful payload. Fixed-wing aircraft can achieve much more range.


Well, there are no large blimps/airships in production. Also, they are quite sensitive to windy conditions, and you want to erect the turbines in very windy places... On the other hand, building a small number of rather simple airplanes should be quite doable in this day and age.


In summary: at some point an AI assistant will replace 80% of the functionality/apps offered by iOS & Android - it will finally become the new OS (as we all knew from watching Her).

This is bad for Google, as Google's main feature is search. AI assistants will replace search and kill its main revenue stream, which is ad clicks.

Google’s secret plan is to eventually cannibalize its Pixel phone with an ‘agent-first’ device, to try to beat Apple, MS & OpenAI with a horizontal offering that is significantly better than the fragmented world of ChatGPT & iOS & Azure.

But it’s worth remembering - and I think the article fails to point this out - that agents will still ‘recommend’ things. You ask them to book you a flight; they still have to recommend a couple of options out of many. The agents need to decide, and ultimately that decision is the same as choosing who to place at the top of the search results page. Whoever wins the agent wars will win a significant proportion of Google search revenue as referral fees AND a significant proportion of iOS App Store revenue.

It really is winner takes all.


Does no one remember the 1992 Robin Williams film Toys? This has been coming for a lonnng time. The question is: when there are two armies operating on the same principle, do we reach some kind of Nash equilibrium?


Personally, I think there are too many variables, spread across the political, geographical, and technical, to ever reach a Nash equilibrium in a near-peer/peer conflict scenario, unless it pertains to total distraction - that’s why I think MAD was so successful.

Would love to hear your thoughts


Might want to edit the word distraction into destruction, as it creates a somewhat confusing post at first otherwise. Beyond that, I think there was more to nukes than just mutually assured destruction. I think most world leaders are cowards. Just envision in your mind Joe Leader somehow being captured and being given a choice, which he somehow knew to be 100% honest. He could immediately and permanently end a conflict, and be spared. Or he would be killed on the spot. How many leaders are going to go, "No. This conflict is just and right. Do what you must, but I will not betray my country and conscience."?

They'd all be pissing their pants just to live another day. And so like any coward they'll happily make the "tough decision" to casually sacrifice millions of other people's lives - billions if necessary - but they would not, in a million years, accept a scenario where they, themselves, might be killed. And nukes created just such a scenario. Drones, especially given the existence of extremely effective (but distance-constrained) electronic warfare and other technology, don't intuitively threaten their own mortality. So I expect to see full steam ahead.


It may not be the equilibrium you want. A prisoner's dilemma game is a very likely outcome.
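For anyone rusty on the game theory: in a prisoner's dilemma, defection dominates even though mutual cooperation pays better. A toy sketch in Python (the payoff numbers are the standard textbook ones, chosen purely for illustration):

    # payoffs[(mine, theirs)] = my payoff; standard textbook ordering.
    payoffs = {
        ("cooperate", "cooperate"): 3,   # mutual restraint
        ("cooperate", "defect"):    0,   # sucker's payoff
        ("defect",    "cooperate"): 5,   # temptation to escalate
        ("defect",    "defect"):    1,   # mutual escalation
    }

    def best_response(theirs):
        # Pick whichever of my moves maximizes my payoff against theirs.
        return max(("cooperate", "defect"),
                   key=lambda mine: payoffs[(mine, theirs)])

    # Defecting is the best response whatever the other side does, so
    # (defect, defect) is the only equilibrium - even though both sides
    # would be better off cooperating.
    assert best_response("cooperate") == "defect"
    assert best_response("defect") == "defect"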


I assume the eventual outcome will be more like P. K. Dick's "Second Variety", or maybe the Nier: Automata games.


Love the summary, but I don't agree org entropy is inevitable, and I also don't agree that equity is a silver bullet. Many times the cure is worse than the disease: it creates a whole bunch of behavioral anti-patterns, e.g. people sticking around too long, politics, and grandstanding in order to win more equity. In my experience profit-share provides the same positive incentives without the downsides.


How? Profits are short-term indicators, while equity is longer-term.


“my point is purely about the effective output of an individual. If we were fighting an existential threat, say an asteroid that would hit the earth in a year, would you really tell everyone involved in the project that they should go home after 35 hours a week, because they are harming the project if they work longer?”

— Doesn’t this depend on the outcome of your work to fight an existential threat? If you fail, then going home after 35 hrs was exactly the right thing to do (as you’ve optimized for making the most of your remaining time on earth); if you’re successful, then however many hours you spent were worth it.

Surely this entire argument is pointless unless you know the result of the time you spend?


He tells you literally and exactly what he means: his "point is purely about the effective output of an individual". Emphasis is mine on effective output. His opponents argue that there's a peak in productivity, such that if workers wanted the greatest chance at stopping the asteroid, they should choose to only work 35 hours a week. He argues that this peak either does not exist, or is way more than 35 hours.


It makes more sense when you think of salaried vs. hourly. Because hourly people are so obviously useful during long hours that companies will pay 50% more to have them there.


“The Russians, invaded twice by Germany in the 20th century” - technically, in WW1 Russia mobilized her armies ahead of Germany and was the first to cross state lines, on August 17th, 1914 - so in WW1 it’s more like Russia invaded Germany.


Also, WW2 started with Germany and Russia invading Poland together.

Because Hitler later betrayed Stalin, and Russia did by far the most work in defeating Nazi Germany, we have since learned to see Russia as one of the good guys in WW2, but they didn't start out that way; they were initially one of the aggressors, invading Poland, the Baltic states, and Finland.


> invading Poland

Ah, the convenience of ignoring the Munich Agreement.

>> In 1938, the Soviet Union was allied with France and Czechoslovakia. By September 1939, the Soviets were to all intents and purposes a co-belligerent with Nazi Germany, due to Stalin's fears of a second Munich Agreement with the Soviet Union replacing Czechoslovakia. Thus, the agreement indirectly contributed to the outbreak of war in 1939.[87]


I'm not ignoring that at all, but it's quite a leap from misguided appeasement to arguing that that justified the invasions of Poland, the Baltic States and Finland.

But the whole run-up to WW2 definitely shows the folly of appeasing aggressors by rewarding their aggression. A lesson that's definitely relevant today.


> but it's quite a leap from misguided appeasement

There were quite a lot of actions which guided the Soviets up to 1939 and I can't say these were 'misguided appeasement'. It takes a lot to convince someone to work with people whose anthem contains 'Kam'raden, die Rotfront und Reaktion erschossen / Marschier'n im Geist in unser'n Reihen mit.'


"It takes a lot to convince someone to work with people whose anthem contains 'Kam'raden, die Rotfront und Reaktion erschossen / Marschier'n im Geist in unser'n Reihen mit.' "

Not much, if there is something to gain. Because diplomats did the talking, and they are usually disconnected from the goons on the ground doing that singing. And the Soviets had their songs and goons and gulags and NKVD too, as you well know.

And I am not sure you actually understand that song, because it just says the Nazis think about the spirits of their comrades who were killed by the Rotfront (German communists) and by the Reaktion, the conservative (monarchist) forces opposing the Nazis (not the other way around, as you seem to have understood).

And they were. But of course, by that time way more Rotfront people had been murdered by the Nazis, and they probably also sang about that, so you likely could have chosen a better song. This song rather makes one question why the Nazis could bring themselves to work with the Soviets at all, and the answer is the same: because there was something to gain.

Otherwise this discussion sounds to me like a debate about which is better, plague or cholera.

It was two confronting totalitarian empires, both with total disregard for human life, unless it happened to be that of an important party member. And in the use of terror against anyone opposing them, they were pretty similar.


> because there was something to gain

Or there was something to lose if they remained in the current status quo.

> diplomats did the talking and they are usually disconnected from the goons on the ground doing that singing

Sorry?

>> [...] Goebbels' propaganda created what became one of the Nazi Party's central martyr-figures of their movement. He officially declared Wessel's march, renamed as the "Horst-Wessel-Lied" ("Horst Wessel Song"), to be the Nazi Party anthem [...] The "Horst Wessel Song" was sung by the SA at the funeral, and was thereafter extensively used at party functions, as well as sung by the SA during street parades.

Were the Soviet diplomats deaf?

> because it just says the Nazis think about the spirits of their comrades who were killed by the Rotfront

:rolleyes: Some people entertain the idea that the Soviets were totally clueless about how evil the Nazis were[0], or that the Soviets were totally on board with the Nazis in their evilness[1]. Sometimes both at the same time.

This song is clear evidence that the Nazis and the communists were 'natural enemies', that there is no fucking way the Soviets didn't know that (see [1] again), and that if you find them working together, then for the reasons you should look not at them but at their environment.

[0] rare, but I've seen those folks

[1] this is the default, especially if one's history knowledge ends with a parroted response about Molotov-Ribbentrop


So you were aware that the lyrics you cited were no direct threat to the Soviets?

Apart from that, of course they were natural enemies. Mein Kampf and many other sources spoke about the concept of conquering land in the east.

And the Soviets wanted world revolution. Also documented.

So why pick that example then?

Because with that, you could also "prove" that the Nazis were mortal enemies of the conservative forces in Germany. Were they? As far as I know, not really, and surely not as long as the Nazis were successful.

So I really do suspect that you were not aware, but cannot admit a mistake now, just as you cannot admit any wrongdoing by the Soviets. Sorry, but this is not a basis for serious debate for me.


None of this changes the fact that Poland was invaded by both the Nazis and the Soviets. They may have hated each other ideologically, but politically, they were aligned at that moment. And even if they weren't sufficiently aligned by your standards, the Soviet Union still invaded 5 countries that didn't do anything to threaten the USSR, making the Soviets one of the aggressors of the war.


That’s amazing, some of those articles are priceless bits of HN satire.


Yes! I've also noticed that GPT-3.5 and GPT-4 have somewhat different senses of humor. GPT-3.5 tends to lean more towards absurdism — sometimes it's so bad that it's actually good, but that's not always the case. So, I use both for generating headlines.

However, GPT-3.5 often struggles to generate coherent comment threads, so I use GPT-4 for that, which does the job pretty well.

For images, I simply ask the LLM to include a prompt in the img alt tag, and then I call the DALL-E API to generate JPEGs. But honestly, DALL-E isn't great; it would be much better to use Stable Diffusion or Midjourney, but I'm too lazy to integrate them (does Midjourney even have a public API?)

If you want to see the prompts, here is the code: https://github.com/crackernews/crackernews.github.io
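For the curious, that image loop is roughly the following - a minimal sketch, not the actual code from the repo above; the helper name and model choice are mine, and it assumes the current official openai Python client:

    import re
    import urllib.request
    from openai import OpenAI  # assumes the openai>=1.0 client

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def illustrate(html: str) -> None:
        # The LLM was asked to put an image prompt in each <img> alt
        # attribute; pull those prompts back out and render each one.
        for i, alt in enumerate(re.findall(r'<img[^>]+alt="([^"]+)"', html)):
            rsp = client.images.generate(model="dall-e-2", prompt=alt,
                                         size="256x256", n=1)
            urllib.request.urlretrieve(rsp.data[0].url, f"img_{i}.jpg")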


I think this is interesting in the context of chatgtp: a model trained on language, which is itself a limited model of reality. If the purpose of language is reason - and the purpose of reason is convincing and persuading other people, winning arguments with other people, defending and justifying actions and decisions to other people - then what is chatGTP? Essentially a reason engine? An engine designed to convince people that it knows best, regardless of the truth? Is it actually a tool of control?


ChatGTP/4 is to AGI what Pepper's ghost is to holography.

It’s a parlor trick. Even if you add plugins or the ability to call other Hugging Face ML models, it’s just a parlor trick with fancier bells and whistles. All it is doing is using stochastic gradient descent to predict the next word in a sequence, based on an enormous, sophisticated training set designed to amaze people.

Thinking it has advanced because it can now get calculations correct is a fallacy. It’s still just predicting the next word; it’s just that it now has a post-processing step that converts those next words into code and parrots the output. It may now be able to answer 4567*9876 correctly (using the human-hardcoded Wolfram Alpha engine), but it still does not fundamentally comprehend why 1+1=2 - like my 5-year-old can.

Until it can generate its own internal neural networks to, for example, learn to reason logically about calculations, we are still far from AGI. Also, those calling for more data are misguided - less data and more sophisticated architectures than transformers are the only way to avoid the stochastic parrot trap.
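For what it's worth, the "post-processing step" being described is schematically something like this - a hypothetical sketch with a made-up CALC tag and a toy evaluator, not how the actual Wolfram plugin works:

    import re

    def postprocess(model_output: str) -> str:
        # The model still only *predicted* the tokens "CALC(4567*9876)".
        # The arithmetic happens out here, in ordinary hardcoded code,
        # and the result is spliced back into the text.
        def run(m: re.Match) -> str:
            return str(eval(m.group(1), {"__builtins__": {}}))  # toy evaluator
        return re.sub(r"CALC\(([\d\s+*/.-]+)\)", run, model_output)

    print(postprocess("The answer is CALC(4567*9876)."))
    # -> The answer is 45103692.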


I find this so bizarre. Every time someone demonstrates a new way in which models are capable of a wider array of tasks than expected someone goes "it's just predicting tokens".

It's such a big "just". You are just firing neurons. The stock market is just supply and demand. The internet is just a bunch of computers talking through 50 year old protocols that don't work very well.

Everything is just something else! I wonder if the first tribe to be annihilated by bronze weapons were like "that stuff is just like stone but more malleable, don't see what the big deal is".


Stavros' law of AGI: If we know how it works, it's not true AGI.


Pepper’s ghost is also impressive when you see it for the first time. They’ve enhanced it to do entire concerts now, with dead music stars on stage for huge audiences. Has it helped us get any closer to solving holography? Will I be able to have a Star Trek-style hologram roaming round my house because of Pepper’s ghost?


It's not a big just. Saying it is AGI is an insanely huge claim. Don't flip it around and say the skeptic is the one making a large claim. They aren't!


I asked chatGPT why it kept apologising and told it to not apologise to me.

Guess what, it apologised immediately after, and then again when I asked why it apologised even after I told it not to.


That’s pretty common in Japan, from what I’ve heard. Cultural upbringing is hard to distance yourself from.


Is ChatGPT Japanese?


Guess what, I just saw one of those idiots from the bronzeworking tribe with a BENT sword. Imagine using weapons with blades that can get bent.


Except "this is just" is sprinkled all over NNs, DL and in turn of ChatGPT. Actually they pride themselves on "this is just".

So your argument is probably more accurate for the other camp, or at least as accurate for the other camp as well.


I'm not sure what you're getting at here but I'll try to respond. My argument is that "this is just" is meaningless as a way to assess the impact of a technology.

If AI researchers say, "this is just X, and it can do Y!" then fine, that's just framing for "look: Y". When the stochastic parrot guys say "this is just X, what's impressive about that?" it throws me for a loop, because they are refusing to engage with Y.


I think we disagree about what Y is. My point is that Y is not materially different from what was possible with a Slack bot circa 2015. Essentially chatgtp is a less efficient way to get to the same outcomes that were already possible. The trick is that it appears to be something it’s not - AGI.

I like your bronze sword analogy. From my point of view chatgtp is not a bronze sword; it’s a Stone Age sword that someone has painted bronze. It has value because people realize the advantage that a true bronze sword would have in a battle. However, when you actually put it through its paces you quickly realise it offers no actual value over what came before.


>> It's still just predicting the next word

Predicting the next word is a much deeper problem than people like you realise. To be able to be good at predicting the next word you need to have an internal model of the reality that produced that next word.

GPT-4 might be trained to predict the next word, but in that process it learns a very deep representation of our world. That explains how it has an intuition for colours despite never having seen colours. It explains why it knows how physical objects in the real world interact.

Now, if you disagree with this hypothesis, it's very easy to disprove it by presenting a problem to GPT4 that is very easy for humans to solve but not for GPT4. Like the Yann LeCun gear problem, which GPT4 is also able to solve.


“To be able to be good at predicting the next word you need to have an internal model of the reality that produced that next word.”

Now that’s an interesting claim - one that I would deeply dispute. It learns from text. Text itself is a model of reality. So chatgtp, if anything, proves that in order to be good at predicting the next word, all you need is a good model of a model of reality. GTP knows nothing of actual reality, only the statistics of symbol patterns that occur in text.


You are being given a chance to dispute it. Give an example of a problem that any human would be easily able to solve but GPT4 wouldn't.

>> "good model of a model of reality"

That is just a model of reality. Also, a "model of reality" is what you'd typically call a world model. It's an intuition for how the world works, how people behave, that apples fall from trees and that orange is more similar to red than it is to grey.

Your last line shows that you still have a superficial understanding of what it's learning. Yes, it is statistics, but even our understanding of the world is statistical. The equations we have in our heads of how the world works are not exact; they're probabilistic. Humans know that "Apples fall from the _____" should be filled with 'tree' with a high probability because that's where apples grow. Yes, we have seen them grow there, whereas the AI model has only read about them growing on trees. But that distinction is moot, because both the AI model and humans express their understanding in the same way. The assertion we're making is that to be able to predict the next word well, you need an internal world model. And GPT4 has learnt that world model well, despite not having sensory inputs.
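That fill-in-the-blank claim is directly checkable, at least on open models. A sketch using GPT-2 via the transformers library (a small stand-in here, since GPT-4's weights aren't public):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Apples fall from the", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # scores for the next token
    probs = torch.softmax(logits, dim=-1)

    top = probs.topk(5)                     # the model's five best guesses
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(i)!r}: {p.item():.3f}")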


Can chatgtp ride a bicycle? Can you ride a bicycle? If you'd never ridden a bicycle before, do you think that if you read enough books on bicycle riding, the physics of bicycle riding, the physics of the universe, you would have anywhere near as complete a model of bicycle riding as someone who'd actually ridden one? Sure, you'd be able to talk a great game about riding bicycles - but when it comes to the crunch, you'd fall flat on your face. That's because riding a bicycle involves a large number of incredibly complex emergent control phenomena embedded within the marvel of engineering that is the human body - not just the small part of the brain that handles language. So call me when LLMs can convert their 'world models' learned from statistics on human language use into being able to ride a bicycle first time. Until then I feel comfortable in the knowledge that they know virtually nothing of our objective reality.


Could Stephen Hawking ride a bicycle?


Yes - his MND was diagnosed around the age of 21. And he didn't learn to ride bicycles from reading books.


Your 5yo does not understand 1+1. You yourself do not understand it. Entire careers were spent trying to pin it down. It is basically its own branch of mathematics.

I understand your point, but I am struggling to see why it matters. This seems more and more an argument like “cars are not horses”. I know they are not but does it matter? Cars are superior for our use cases.


And while it may be true that it is far from AGI, I don’t think calling it a parlor trick does it justice. I used it this morning to set up a new workout routine for myself, after having it write a little boilerplate TypeScript code to bootstrap 70% of a microservice I want to set up. My girlfriend, who is studying React, got a lot of value out of it by having compile errors explained to her. My mum uses it to practice English. I am going to integrate GPT-4 into a new product where it provides tangible value for non-technical users. To be useful it does not need to be sentient or able to iterate on its own architecture.


Yeah, I agree that’s fair - a parlor trick is perhaps a little harsh. ChatGTP can provide value - it’s arguable whether doing it with ‘classical’ methods could have been more efficient, or whether the end result is as good (btw, careful with code - in my experience ChatGTP often thinks it knows what is wrong but is way off - something an experienced coder would notice immediately). Do you remember the Tamagotchi? That also provided value to millions of people; many people even thought of it as sentient - was it? No. Was it anywhere near AGI? No. If we can find good uses for the GTP models that were not possible or cost-prohibitive before - then great. I think we just need to be clear: like the Tamagotchi, this is far from AGI, and plugins/Hugging Face are not the penultimate step before Skynet.


Weird behaviour I’ve noticed: a lot of folks on the unimpressed/doomist side of AI consistently say GTP instead of GPT. I wonder why this pattern exists?


> less data, more sophisticated architectures

“The bitter lesson” would like to have a word. http://www.incompleteideas.net/IncIdeas/BitterLesson.html

I appreciate your enthusiasm, but the history of ML shows that your approach is less likely to work. Maybe you’ll be the one to prove everyone else wrong. Architectural breakthroughs are few and far between, and they’re incredibly difficult to reason about. I came up with the Lion optimizer while Google was using random tree search across 300 TPUs to discover the same thing, and it’s just five lines or so.
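For reference, the Lion update really is about five lines. A NumPy sketch of the rule from the paper (the hyperparameter defaults here are just illustrative):

    import numpy as np

    def lion_step(param, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01):
        # One Lion update (Chen et al. 2023): take the sign of an
        # interpolated momentum, apply decoupled weight decay, then
        # let the momentum track the gradient.
        update = np.sign(beta1 * m + (1 - beta1) * grad)
        param = param - lr * (update + wd * param)
        m = beta2 * m + (1 - beta2) * grad
        return param, m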


Is this some kind of copypasta? Too many tropes all at once. "GTP" on top of all this is too on the nose.


Ha, I just commented above about the pattern of people in this camp using “GTP” fairly consistently.

What a curious psychological study - maybe dyslexic people feel more threatened by a large language model so clearly understanding words, and are therefore more likely to attempt to discredit it?


If it wasn't, it is now.


Evolution is just gene selection through natural selection; creating an eye is not possible.

Well, neural networks have unpredicted emergent properties. I don't see how anyone can rule out or know their future behaviour.


> It's still just predicting the next word.

Computer-generated random numbers are not truly random, yet they are practically random in most real-world use cases. You can’t easily cheat the RNG in World of Warcraft to get a critical strike every time.

The output from GPT is generally very intelligent and versatile in terms of text. It may even be capable of handling more multi-modal problems with the use of enough sensors and motors. Perhaps the same idea of "predicting the next move" or "predicting the next idea" can still apply.

Who knows, maybe humans are essentially physical creatures that "generate the next thought and generate the next move"?

One of the biggest issues with GPT is its lack of mid-term memory like humans have. Instead, we need a vector store and search, then bolt the results back on as its short-term memory, instead of letting it handle everything in a more coherent way. Perhaps it could benefit from lightweight fine-tuning technologies like LoRA and hypernetworks, as used for Stable Diffusion. If this issue is resolved, it'll get even more practical. Again, the flaw is not about "predicting the next words".
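That "vector store and search" bolt-on is simple enough to sketch. A minimal version in Python, where embed() stands in for whatever text-embedding model you use (assumed, not a real API):

    import numpy as np

    class BoltedOnMemory:
        def __init__(self, embed):          # embed: text -> 1-D numpy vector
            self.embed, self.texts, self.vecs = embed, [], []

        def remember(self, text):
            v = self.embed(text)
            self.texts.append(text)
            self.vecs.append(v / np.linalg.norm(v))

        def recall(self, query, k=3):
            # Cosine similarity against everything stored so far; the top-k
            # snippets get pasted back into the prompt. The model itself
            # never changes - the "memory" lives entirely outside it.
            q = self.embed(query)
            scores = np.array(self.vecs) @ (q / np.linalg.norm(q))
            return [self.texts[i] for i in np.argsort(scores)[::-1][:k]]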


I don't think whether it's AGI or not actually matters when it starts materially affecting the economy.


+10 Very well-said (and to-the-point).


Setting aside whether you're right or wrong about this... assuming you are right, then are you worried this will set everyone down the wrong path? That we'll spend ten years iterating on transformer models, never getting any closer to AGI? Is there another direction you think we should be moving toward instead (or at least simultaneously)?

