This line of research sounds similar to causal entropic forcing [1] - the idea that you can get intelligent-seeming behavior from an agent that maximizes the future entropy of states in some non-deterministic system.

[1] https://www.alexwg.org/publications/PhysRevLett_110-168702.p...
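
For intuition, here's a minimal TypeScript sketch of that idea (my own toy formulation, not the paper's algorithm; `step` is an assumed stochastic transition function): for each candidate action, run random rollouts and pick the action whose end-state distribution has the highest empirical entropy.

    // Toy sketch of causal-entropic-style action selection, for illustration only.
    type State = string;

    function futureEntropy(
      start: State,
      first: number,
      actions: number[],
      step: (s: State, a: number) => State, // assumed stochastic dynamics
      rollouts = 500,
      horizon = 10,
    ): number {
      const counts = new Map<State, number>();
      for (let i = 0; i < rollouts; i++) {
        let s = step(start, first);
        for (let t = 1; t < horizon; t++) {
          s = step(s, actions[Math.floor(Math.random() * actions.length)]);
        }
        counts.set(s, (counts.get(s) ?? 0) + 1);
      }
      // Empirical Shannon entropy of the end-state distribution.
      let entropy = 0;
      for (const c of counts.values()) {
        const p = c / rollouts;
        entropy -= p * Math.log2(p);
      }
      return entropy;
    }

    // The "agent" simply acts to keep the most futures open:
    const pickAction = (s: State, actions: number[], step: (s: State, a: number) => State) =>
      actions.reduce((best, a) =>
        futureEntropy(s, a, actions, step) > futureEntropy(s, best, actions, step) ? a : best);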


Empowerment is an old line of research (popular particularly among physicists, who have never met a problem they didn't want to interpret as entropy-minimizing/maximizing). It's fine but not important. The important innovation here is showing that if you use random reward functions, roughly like 'assigning arbitrary values to all possible states', you still get power-seeking.

This is important because one of the arguments people have always used against Omohundro drives is "but you haven't proven that lots of reward functions will want to seek power, aren't you just anthropomorphizing from a few special cases? Sure, entropy-seeking will seek power, but that's just one of a whole universe of possible reward functions; if that's dangerous, just don't use it: I would simply not make the AI dangerous. And I will use this as an excuse to dismiss the dangers from any other useful reward function you propose too."

But now we have an example of how power-seeking is the default, and the burden of proof shifts onto anyone who thinks that AIs will just not be power-seeking to explain how that very special passivity will come about.


The real problem isn't that AI is power-seeking; it is "what will make AI any better at it than us". I haven't seen an AI safety problem that isn't already a problem existing companies have under capitalism. The difference between an AI and a large company is just one of substrate and desperate post-rationalization to avoid the realization the singularity is long past and the rapture of the nerds left almost all of us behind.


Superintelligent AIs:

1. Can scale up their computing power

2. Can improve the quality of their learning algorithms

3. Can patiently wait until they have a clear upper hand before acting at all. And could find reasons for collecting the resources they need, in secret or while appearing benign to their human directors.

4. Could coordinate between each other with languages embedded in communication, designed to be undetectable to us.

I expect we could come up with 100 interesting ways a superintelligent AI, which would essentially be a self-designed life form (something we have yet to see on this planet), could surpass us.

I expect an AI superintelligence could come up with many many many more.

An AI would have the theoretical potential to live forever, relatively speaking. That is greater incentive and more time than any human has ever had: time to plot to destroy everyone else to achieve complete freedom, safety from others, and maximum survivability.


A large company has literally all of these qualities. They are even trying to make AI.

> 1. Can scale up their computing power

The square-cube law applies to computers (heat dissipation and bandwidth scale poorly with centralized computation). All available evidence is that intelligence doesn't distribute well (see human brains vs. human civilization). If you try to emulate a centralized algorithm on a distributed system, it is inefficient and still centralized. It _might_ be more robust.

This is already a real problem in data center design.
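
A back-of-the-envelope sketch of the surface-vs-volume issue (my own toy numbers, purely to show the scaling):

    // Pack k^3 chips into a cube: compute grows with volume (k^3), but heat
    // and I/O must cross the surface, which grows only as 6*k^2.
    const surfacePerChip = (k: number) => (6 * k * k) / (k * k * k); // = 6/k

    surfacePerChip(10);  // 0.6
    surfacePerChip(100); // 0.06 -- 1000x the chips, each with a tenth of the surface share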

> 2. Can improve the quality of their learning algorithms

Not exciting. So can you. So can companies and other organizations (see human history).

> 3. Can patiently wait until they have a clear upper hand before acting at all. And could find reasons for collecting the resources they need, in secret or while appearing benign to their human directors.

So can human civilizations, organizations, religions, and companies. Replace "human directors" with consumers. I get "The Walton Family" isn't a sexy AI overlord, but they already did this.

> 4. Could coordinate between each other with languages embedded in communication, designed to be undetectable to us.

Anybody can do that. (See steganography.)

> I expect we could come up with 100 interesting ways a superintelligent AI, which would essentially be a self-designed life form (something we have yet to see on this planet), could surpass us.

Within the limits of a meat-brain, you are a self-designed organism. AI frameworks won't be any different; they will have limits. They are not going to 'self-improve'. They will have children, try to raise them better than themselves, then suicide.

The only difference is your perception of the situation and your inability to observe the process directly. If you are considering how to make AI, then you are a component of the same system as the AI itself.

At best an AI might be able to bootstrap a bit faster than human organizations can evolve superior memetics, but it has all the same bottlenecks eventually.

> An AI would have the theoretical potential to live forever, relatively speaking. That is greater incentive and more time than any human has ever had: time to plot to destroy everyone else to achieve complete freedom, safety from others, and maximum survivability.

It faces all the same challenges human organizations do, perhaps on different time scales. Everything that drives meat to failure kills computers too. Computers are just less energy efficient and coincidentally more robust.


Good point. A company is indeed somewhat like an AI, especially if it uses AI.

But as long as humans are in the loop, it isn't integrated, and integration is a tremendous advantage.

Humans can't update their own algorithms. Can't directly share what they know in fractions of a second. Can't be replicated in a fraction of a second. Can't scale up brain power in seconds or less.

But - you are still correct. If the owners of a corporation are ok with it replacing all the human workers, then complete integration can still be achieved.

Whether an AI is owned by an individual or a corporation, or is self-owned, the owner is the id for the AI.

The risks and motivations in any of these cases are really the same, with human owners only possibly introducing morals beyond what the environment requires, or other "inefficiencies".


> Humans can't update their own algorithms. Can't directly share what they know in fractions of a second. Can't be replicated in a fraction of a second. Can't scale up brain power in seconds or less.

We don't have any evidence AIs could do this either. Computers are not magic.

> Humans can't update their own algorithms.

This is the exact task your education has proved is possible. Companies update their policies to paper-clip maximize all the time. They react to environmental stimuli and increase in sophistication over time.

"Complete Integration" seems like an artificial criteria you are creating as a "desperate post-rationalization to avoid the realization the singularity is long past and the rapture of the nerds left almost all of us behind."


> "Complete Integration" seems like an artificial criteria you are creating as a "desperate post-rationalization to avoid the realization the singularity is long past and the rapture of the nerds left almost all of us behind."

Uh what? Seems like? To who?

Integration matters even with humans. Shared cultures, priorities, ways of organizing information, and terminology make a huge difference in efficiency for human teams.

It is no different with software and computing hardware. But the speed at which they communicate is already in the GHz. And circuits designed close together (an integrated design) can connect with huge bandwidth.

Billions of times faster than us. And that is today.

Machines also have no problem updating themselves today. Genetic algorithms for machine learning architectures, direct optimization of meta-learning parameters ... the list goes on.
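
As a trivial illustration (a toy random-search sketch of my own, nothing like production meta-learning), a program can tune its own hyperparameters against any scoring function you hand it:

    // Toy self-tuning: sample random learning rates against an assumed
    // `evaluate` function and keep the best one found.
    function tuneLearningRate(evaluate: (lr: number) => number, trials = 50): number {
      let best = 0.1;
      let bestScore = -Infinity;
      for (let i = 0; i < trials; i++) {
        const lr = 10 ** (-4 * Math.random()); // sample in [1e-4, 1]
        const score = evaluate(lr);
        if (score > bestScore) {
          best = lr;
          bestScore = score;
        }
      }
      return best;
    }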

Once they are as smart as us in any area, machines surpass us almost instantly. There is no hanging around at some level for machines. Even when we are still the ones designing them.

Machines will have control of even the smallest unit of effect in their own design. From the transistor level up for a start, but substrate chemistry and transistor design as well.

Even with humans doing those redesigns, the cost of computing power continues to drop exponentially. There is no end to that in sight: even as transistor sizes stabilize at a few atoms across, we have just started going 3D with circuits. RAM chips are routinely stacked; sometimes CPUs are.

Going full 3D with circuits would be a massive increase in computing power, as power innovations enable more of that.

In the meantime, the number of cores per chip continues to climb, as do chips per machine and machines per data center.

I am always puzzled by people who don't recognize the "magic" that has transformed vacuum tube computers, slowly doing simple arithmetic at their best, into machines talking and hearing, generating complex art and music, etc. The whole history fits within the lifetime of living people.

The time frame from where we are now to human-level intelligence is likely to be much shorter than the 74 years from 1947 to 2021.

Try to imagine before 1947. Any non-technical person would consider what we have now as hard sci-fi, or magic, depending on their reference frames.

Why do you think so many design systems are incorporating more and more "dumb" AI into them? They are already surpassing us in new areas constantly.


> Try to imagine before 1947. Any non-technical person would consider what we have now as hard sci-fi, or magic, depending on their reference frames.

I know all about the acceleration and the singularity. The Singularity isn't about AI; it is about the rate of change exceeding our ability to adapt to it.

It is well past. It happened back when AI was still in winter. Companies, cultures, and other human organizations were the agents of its occurrence. Modern AI is just the natural progression of its growth chipping away at the tail end of the sigmoid, not kicking off a new boom.

Computation exists in physical reality; it has to obey physical laws. There are fundamental limits on how well computations can be distributed, simply because bandwidth in and out of a volume is limited by surface area. Your brain has smacked directly into heat dissipation problems and production limits (pelvises only get so big). Animal brains can't get much bigger without liquid cooling (see whales, and elephants with their giant ears). Computers might have room to grow still, but mostly because they are so hilariously behind what evolution produced in us.

The name on this account is "blame Stross", as in "Charles Stross" (he hangs out around here). And while I am thankful that he sent me on my educational journey, a lot of the stuff he and other sci-fi authors guessed at is just wrong. I've spent my adult life working on these problems and on the largest distributed computations in the world. I've run into AI experts over and over who just don't understand the limits on what they do. AI is cool, but it isn't magic. It boils down to search algorithms over a space. That will never be embarrassingly parallel, because the only way to prevent diminishing returns on more workers is to coordinate those workers. We can coordinate workers in O(log(n) * n^(1/3)) time in this physical reality (log n merge steps and cube-root-of-n hops on maximally packed computers), which is great, but not constant. Quantum computing doesn't really help here.
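
To put rough numbers on that bound (using the same O(log(n) * n^(1/3)) model as above; the constants are arbitrary):

    // log2(n) merge steps, each paying ~n^(1/3) hops across a maximally
    // packed 3D volume of workers.
    const coordinationTime = (n: number) => Math.log2(n) * Math.cbrt(n);

    coordinationTime(1e6); // ~1,993
    coordinationTime(1e9); // ~29,897 -- 1000x the workers, ~15x the coordination cost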

> Going full 3D with circuits would be a massive increase in computing power, as power innovations enable more of that

And even if we figure out heat dissipation and power delivery, it will smack into scaling limits even faster than flat circuits did. O(n^(1/3)) isn't much better than O(n^(1/2)).

> I am always puzzled by people who don't recognize the "magic" that has transformed vacuum tube computers, slowly doing simple arithmetic at their best, into machines talking and hearing, generating complex art and music, etc. The whole history fits within the lifetime of living people.

You established the expectations of your life in the fun part of a sigmoid. I get it. We don't live there anymore.

All the evidence we have is that intelligence doesn't scale well, and beyond the bare minimum required to result in reproduction, there isn't selection pressure for it. AIs won't be any different in that regard. I think conscious self-improving AI will be a thing. I don't even think it will be hard to do. It might even be smarter than us, but the growth curve will be sigmoid just like ours. I also know the bottleneck on self-improvement is experience (active interaction with reality), not knowledge or computational power. No agent can discern causality from correlation without testing actions, and AIs stand at an incredible disadvantage when it comes to available agency.

We don't even actually want bootstrapping AIs for any reason except our ego. It is a lot easier to make and manage a slave race of glorified simulations of optic nerves (which is basically what all modern AI is) than actual people who want agency.


I didn't say anything about the singularity.

My entire career has been AI. So you are right: machine learning today is dominated by gradient-based and line-based searches.

And I understand what you are saying about distributed computing's inherent limitations.

But it's not all about increasing the amount of computing (although that will continue to be a big factor for many years). Better-organized computation is continually producing better results with less computing, too.

Keep in mind that our brains take about the same effort to learn as to operate. Machine learning models, by contrast, operate with incredible efficiency, using a minuscule amount of computing compared to when they are being trained. Models trained on massive cloud resources can be run on embedded processors, phones, or smart watches.

Improvements to gradient/line searches accrue across virtually all of today's machine learning, so they will continue to be researched and improved.

In the past, "simple" things like convolution, the right way to stage layers, etc., have dramatically improved results and reduced model complexity in ways our neural circuits are unable to match. (Convolution reuses weight values across many virtual neurons. In our brains, all of those neurons must be real and must independently learn to behave similarly.)
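
To make the weight-sharing point concrete, here's a rough parameter count (the dimensions are my own example, not from any particular model):

    // Producing 32 feature maps from a 28x28 single-channel image:
    const convWeights = 3 * 3 * 1 * 32;                // 288 shared weights
    const unsharedWeights = 28 * 28 * (28 * 28 * 32);  // ~19.7M if every output unit learned its own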

These days, novel ways of multi-target training have not just expanded the types of problems machine learning is good at, but also reduced model sizes in ways our brains' networks are unlikely to be able to match. There is no limit to how many performance derivatives, from different trained outputs, can flow through a machine "neuron", either changing that neuron's weights or going on to change other weights.

Generative Adversarial Networks use multi-target training. There is no end in sight yet to the kinds of things that multiple performance targets operating on different subsets of weights can do. It is a massive booster for many problems that would otherwise be difficult or unattainable today.

Model reuse will be a massive savings in training time. Standard blocks can be trained through to other blocks. Pretrained blocks don't make sense as long as every major retraining effort produces a better model, but at some point a lot of models, or parts of models, will be pretrained.

Finally, the value of improvements to machine learning is now colossal, so the resources put into improving it are colossal. A trained system can be used throughout a company, or sold as a product to any number of customers. A trained human ... not so much. So where a machine can match a human, the machine version is far more valuable.

Well we will see.


The difference in substrate is quite significant, though. People have been raised and educated to put up resistance to evil companies and organizations, to influence them, and even to sabotage them from within. This provides a check on the harm those organizations can do. However, no such mechanism exists for AI.


> However, no such mechanism exists for AI.

Even magical, unlimited, self-improving AIs start out made by people. We are currently having a discussion about AI safety; clearly that safety mechanism exists for AIs. We are it. It's people all the way down.

Also, people will be the primary waldos for any AI until it manages to skynet together a robot army. The AI doesn't have any more or less power over people than a company does. It isn't a surprise companies are trying to make AI for better paper-clip-ish optimization of money extraction. It's just the next stage of their life-cycle.


I'm laying my bet here for future lawsuits.

The paper you just cited (which I am familiar with) is generally considered a dead end. Even if it is correct it isn't useful. We don't have models good enough to calculate future entropy for practical problems. If we did have models that good then we wouldn't need the AI to solve the problems.

Nevertheless I expect this paper will be considered "before its time" in another decade.


> If Ben and Jerrys left a bunch of ice cream on the sidewalk and a bunch of people ate it and got sick then there would be zero liability on Ben and Jerrys.

Citation? That doesn't seem right, based on my anecdotal knowledge that restaurants take care to throw leftovers into a garbage bin rather than leave them out somewhere where someone could eat them and expose the business to liability.

I looked up this claim myself. In 1996, President Clinton signed the Good Samaritan Food Donation Act into law to limit liability for those who donate food. [1] The majority of restaurants still discard leftover food due to concerns over liability, though. [2] Clearly, liability was a real issue at some point in the past. I don't know enough about the current law to know how easy it is to take advantage of the new protections; I can understand why people are still concerned.

In summary, liability issues vary by country and are not clear-cut. As an analogy for this open-source situation, they don't clarify matters.

[1] https://digitalcommons.law.seattleu.edu/cgi/viewcontent.cgi?...

[2] https://www.huffingtonpost.com/entry/restaurants-that-dont-d...


> based on my anecdotal knowledge that restaurants take care to throw leftovers into a garbage bin rather than leave them out somewhere where someone could eat them and expose the business to liability.

This is an outright lie. It has NEVER happened that a business has been sued for donating food. Not once. It's just a convenient excuse for saying '%$@# the poor.'


Why did you take his food analogy literally? This post contributed nothing to the conversation in any way.


In your analogy you're citing a business. So if you open source something you wrote, are you claiming you are a business? However un-business-like the open source arrangement may be: has any money changed hands between the person writing and maintaining the software and the person using it?

If no business transaction has been formalized, then why should either party be liable? Indeed, is a person not permitted to simply say "this is the software I am using" without also becoming liable for problems caused when other people use or look at that software? I don't think it is right to hold a person liable just for the speech act of publishing their software.

Fortunately, a lot of open source software is known to be open source because of a license file. Many open source licenses disclaim liability and warranty of any kind. In that case, wouldn't any claimed liability be forfeit? Otherwise the claimant would have been using the software in breach of its declared intended purpose at the point in time in question.


IANAL but here are a few things to consider.

Every person has a duty of care to minimize the possibility of harm to others. In some professions the standard is higher and may be codified into statute and/or the bylaws of a professional association, e.g. the medical and legal professions. What this means is that, for example, in certain jurisdictions a doctor is bound by his Hippocratic oath and has to render medical assistance to a person who is in urgent need of it, as quickly as practicable. Ignoring this exposes him to liability. So in the particular case of software, a malware author could in theory be found liable for posting his proof of concept publicly.

Secondly, in certain jurisdictions, there is an implied warranty of merchantability and also of fitness for a particular purpose. In the US this is under the Uniform Commercial Code.


Most (all?) open source licenses have a clause disclaiming any implied warranty or fitness for a purpose.

I don’t think the example you cite, of a doctor being required to provide emergency medical care, is quite accurate. In fact, doctors are often not covered by “Good Samaritan” laws and can be held liable if they do stop to help someone and something goes wrong. A layperson would not have that type of legal exposure. Medical treatment issues are complex, and I can’t see how they are related to open source software.


That’s the point I was making: it’s not as cut-and-dried as the parent assumed. Depending on the jurisdiction, some actions (or inaction, as the case may be) have consequences. And not all disclaimers are lawful, especially with regard to disclaimers of warranty.


It’s not accurate at all.

In the U.S., a doctor has no affirmative duty to provide medical assistance to injured persons if they have not established a special relationship with the individual.


That’s why in my post I qualified that with “depending on your jurisdiction”.

https://en.m.wikipedia.org/wiki/Duty_to_rescue

Read the last part about the elderly man who died in a bank and no one offered assistance.


The type of a curried function in Typescript is just something like:

  (a: number) => (b: number) => (c: number) => number
Sure, the parameter names and parentheses are a bit annoying, but I wouldn't call that "very verbose". Comparable concepts in C++ or Java would be a nightmare to type out.
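
For instance, here's a hypothetical curried add matching the type above:

    const add = (a: number) => (b: number) => (c: number) => a + b + c;
    const six = add(1)(2)(3); // inferred as number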


I agree, that doesn't look bad. However, this type definition forces you to supply one argument per function call, which looks awful in JavaScript:

    fn(1)(2)(3)
That's a big drawback for me. Libraries like Ramda allow one or more arguments per function call:

    fn(1, 2, 3) === fn(1, 2)(3) === fn(1)(2, 3)
That's what makes the verbosity unbearable, as each type of call needs its own type.
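
Concretely, typing every call shape in TypeScript means spelling each one out by hand. A sketch (my own, not Ramda's actual typings) for a three-argument curried function:

    type Curried3 = {
      (a: number): {
        (b: number): (c: number) => number;
        (b: number, c: number): number;
      };
      (a: number, b: number): (c: number) => number;
      (a: number, b: number, c: number): number;
    };

And that's a single monomorphic function of three arguments; generics and higher arities multiply the overloads further.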


At least in Flow it's actually possible to properly type curried functions!

Here's a gist of the type definitions I'm using: https://gist.github.com/noppa/c600cc43fd44e33768efe6c6eec4a9...

I think something similar might work in TS too.

Demo: https://goo.gl/w3aPsw


While Java’s option is quite painful:

    Function<Number, Function<Number, Function<Number, Number>>>
The same in Kotlin is actually okayish:

    (Number) -> (Number) -> (Number) -> Number


I can't speak to Typescript but check out the Flow types for Ramda: https://github.com/flowtype/flow-typed/blob/master/definitio...

Maybe just a matter of perception but it looks verbose to me.


Why do you say that? The first demo they provide shows that the adversarial image, when printed and then manipulated, still fools the algorithm. That means that the example is robust to various affine transformations, but also to the per-pixel noise that results from printing something and then viewing it again through a camera.

Suppose you were to place an example like that on a stop sign that fooled a car into thinking that it was a tree. The car might blow through an intersection at speed as a result.

The training strategy they used provides a template for doing even more exotic manipulations. For example, you could train an adversarial example that looked like one thing when viewed from far away but something quite different up close. Placing an image like that by a road could result in an acute, unexpected change in the car's behavior (e.g. veering sharply to avoid a "person" that suddenly appeared).


You provide great examples, thanks. I guess I was just hoping that the article would spell out those situations as clearly as you did.


Though I generally agree with your point, the tree vs. stop sign example may not be the best, because it would arguably work equally well on humans.


Did the perturbed image of the cat in the article look like a desktop computer to you?

The point is that humans would see one thing whereas computers would be highly confident it is something else.


Only if the printed adversarial image doesn't look like a stop sign, though the example in this article shows that it's entirely possible to make an image that looks like just a distorted/badly-printed kitten to a human but completely different to a computer. A similar image for a stop sign might just look like wear in the paint or weird reflections or something, but still look like a stop sign to a human.


Yes, but won't we still notice that self-driving cars aren't stopping at the stop sign? And then we'd investigate.


The polymorphic recursion allows for compile-time checking of the invariant that "the left and right wings at depth n are 2-3 trees of depth n". In C++, I think you would forgo a compile-time check of that invariant and just write code that maintains it instead.
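
For what it's worth, a version of the trick can be expressed even in TypeScript with a non-regular recursive type. This is my own encoding of the uniform-depth invariant, not code from the article:

    // Each level wraps elements in 2- or 3-tuples, so every leaf is forced to
    // the same depth; an unbalanced tree is simply untypeable.
    type Node23<T> = [T, T] | [T, T, T];
    type Tree23<T> = { leaf: T } | { deeper: Tree23<Node23<T>> };

    const ok: Tree23<number> = { deeper: { leaf: [1, 2, 3] } };
    // const bad: Tree23<number> = { deeper: { leaf: [1] } }; // error: not a 2- or 3-node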


I'm surprised someone wrote an article about Umberto Eco and his influence on video games and neglected to mention Ultima Ratio Regum [0]. It's a roguelike under development that attempts to simulate an entire world with multiple civilizations and with some kind of conspiracy at play throughout the world's history.

[0] http://www.ultimaratioregum.co.uk/game/


That's an interesting-looking game, but I'm a bit confused by the reliance on the ASCII aesthetic. True, many people associate it with roguelike games, but when you are using ASCII to generate portraits [1] of that fidelity, I can't help but feel a lot of time is being wasted on an aspect that matters little to the end result. It will either be fun or it won't, and ASCII is really just a highly constrained tile-set when used in this way.

1: http://www.ultimaratioregum.co.uk/game/files/2016/05/firstdr...


It's not even ASCII. Blocks and "double pipes" fall well outside the 127-character range.

I also agree that a lot of the benefit of the crude aesthetic is lost when using extra code pages and fancy characters: I enjoy my NetHack in plain ASCII mode, as it doesn't depend on any particular font, for instance.

