
Discussion / debate with AI expert Eliezer Yudkowsky


An interesting discussion / debate with Eliezer Yudkowsky on whether AI will end humanity. For some, this may be fascinating, frustrating, or terrifying, or it may just spur more curiosity; I have no idea. I feel like we each tried our best to make our case, even if we got lost in the weeds a few times. There's definitely food for thought here either way. Also, I screwed up and the chat text ended up being too tiny, sorry about that.

 

EXTRA:

I'm not the best at thinking on the fly, so here are two key points I tried to make that got a little lost in the discussion:

1. I think our entire disagreement rests on Eliezer seeing increasingly refined AI conclusively making the jump to actual intelligence, whereas I do not see that.  I only see software that mimics many observable characteristics of intelligence and gets better at it the more it's refined.

2. My main point of the stuff about real v. fake + biological v. machine evolution was only to say that just because a process shares some characteristics with another one, other emergent properties aren't necessarily shared also. In many cases, they aren't.  This strikes me as the case for human intelligence v. machine learning.

MY CONCLUSION
By the end, I honestly couldn't tell if he was making a faith-based argument that increasingly refined AI will lead to true intelligence, despite being unsubstantiated OR if he did substantiate it and I was just too dumb to connect the dots.  Maybe some of you can figure it out!


Quote

My main point of the stuff about real v. fake + biological v. machine evolution was only to say that just because a process shares some characteristics with another one, other emergent properties aren’t necessarily shared also. In many cases, they aren’t. This strikes me as the case for human intelligence v. machine learning.

Yeah, I think that was lost in translation during the discussion a bit.

 

The rationalist community has a concept called "gears-level understanding" (as opposed to "black-box understanding", I guess): you understand a concept either because you understand its underlying dynamics, or only on a surface level, because you've observed its inputs and outputs enough to see the common patterns. Your real v. fake axis is maybe similar to that?

 

Anyway, as machine learning progresses, an increasingly common pattern we're finding is that sufficiently large language models don't just have a surface-level understanding of the text they're completing; they have an internal model of the concepts the text is tracking.

 

For instance, a rather notorious experiment from January trained a GPT model to complete lists of Othello moves; basically the AI was given text like "B5, B6, D8, ..." and had to complete it. They found that after enough training, the network demonstrably had an internal representation of the board state, to the point that researchers could tweak floating-point values in that internal representation and the model would change its moves accordingly.

 

To be clear, this isn't the AI deciding "I should keep track of where the pieces are" and allocating a memory buffer to write the positions in. It's more like a chess player who has seen so many games that connections form in his brain, and after a while he can see a list of moves in chess notation and instantly visualize what the board looks like after those moves. OthelloGPT basically developed this ability through training and backpropagation: after auto-completing enough moves, it starts to develop an "intuition" for what the board looks like and what the logic is behind the moves it's being asked to complete.

 

In cases like this, it seems that while these neural networks don't have the same emergent properties as human minds, they do have some emergent properties that go beyond memorization and look something like generalization.
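To give a concrete idea of how that kind of internal representation gets detected (a minimal sketch in the spirit of those probing experiments, not the actual Othello-GPT code): you collect the model's hidden activations for a bunch of positions and train a trivial "probe" classifier to read some board property out of them. If the probe succeeds well above chance, the information is demonstrably encoded in the activations.

```python
# Minimal probing sketch (illustrative only). Assumes you've already collected
# `activations`, a (num_positions, hidden_dim) array of the model's hidden states,
# and `square_labels`, the true contents of one board square for each position.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 512))     # stand-in for real hidden states
square_labels = rng.integers(0, 3, size=1000)  # 0 = empty, 1 = black, 2 = white (stand-in)

X_train, X_test, y_train, y_test = train_test_split(
    activations, square_labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With real activations, accuracy well above chance means the board state is
# readable from the network's internals; with this random stand-in data it
# will just hover around chance (~33%).
print("probe accuracy:", probe.score(X_test, y_test))
```

The intervention part of the experiment goes one step further: nudge the activations in the direction the probe associates with a different board state, and see whether the model's predicted moves change to match.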

 

Quote

I think our entire disagreement rests on Eliezer seeing increasingly refined AI conclusively making the jump to actual intelligence, whereas I do not see that. I only see software that mimics many observable characteristics of intelligence and gets better at it the more it’s refined.

I personally think it's only a question of time; could be months, could be decades, probably isn't going to be centuries.

 

The only way artificial intelligence durably fails to reach superhuman AGI level is if human intelligence has some special property that computers can't emulate; but that's not really the direction things seem to be going so far. Challenges that people previously assumed would stay out of reach of AI for decades turned out to be quite achievable (beating Go champions, folding proteins, writing code, understanding natural language, detecting animals in random photos); and we see a general pattern where major milestones are beaten not with a fundamentally different architecture, but by making the existing one bigger.

 

Now, "making it bigger" hits some limitations at some point, but we're nowhere near those limitations. Right now the field is moving incredibly fast; ChatGPT came out barely a year ago and open-source versions are already coming out (though none of them are as good). Researchers are publishing papers every month on how to make every step of the pipeline faster, more efficient, less resource-hungry. Those optimizations often aren't extremely clever tricks that only a Berkeley professor could come up with; they're blindingly obvious improvements that haven't been tested before because all the other researchers were busy picking up some other extremely low-hanging fruit.

 

I'm sorry if this sounds rambly; I'm trying to convey a general idea of "people assume that AI will have fundamental limitation X that will prove insurmountable, and then a new AI comes up that solves X by being bigger". Stuff like understanding context, generalizing, being creative, learning from mistakes, prioritizing, etc.

 

So I'm not confident AI won't be able to achieve true intelligence, ever.


Ross has the patience of a saint.



I had some thoughts while watching this interview, but it was very long, so I apologize if this was discussed and I just missed it. I think motivation and spontaneity are a good test to determine whether intelligence is artificial or actual. The example of single-cell organisms is useful for constructing a general intelligence test. Such organisms will move towards food and away from danger. This is basic motivation: the motivation to not die. As far as I can tell, machines lack this essential characteristic.

I bought a Deik robot vacuum a few years ago (meaning they are probably much better now, and mine was not top of the line even then) and it was more amusing than practical. After repeated trials, it could not figure out how to "not die". Getting stuck under the refrigerator or running out of battery under my bed (seriously, if I couldn't find it, that became the first place I checked) were the top two causes of death. To this day, it has not learned to not die. Its charging port is always in the same spot, but it only occasionally manages to get there before running out of battery. So, I propose the survival test of general intelligence: if a machine, without any training at all, can figure out how to not die, regardless of job description or vessel, as even a single-cell organism will do, it meets that criterion.

 

Since it is possible to get yourself killed, the second part of the test is how much weight the machine gives to the equivalent of a near-death experience. If I were my robot vacuum and I got stuck under the refrigerator and almost died, I might be traumatized and terrified of refrigerators. There is no gradient descent on this one, no repeated trials necessary: refrigerators = death. My Deik is not likely to experience flashbacks or nightmares about the many, many times it got stuck. So, this part of the test is about learning without instruction or repeated trials when survival is at stake.

 

Discussing artificial intelligence in terms of the individual tasks it can do does not, I think, capture general intelligence or general AI very well. When I am judging how intelligent my robot vacuum is, its ability to be a vacuum is not a significant part of that judgment. If it went off-schedule to do donuts in the living room for no reason, I think that would be more indicative of intelligence, because it would have grasped the concepts of "being" and spontaneity. I would also be impressed if it decided to do laps like a NASCAR driver or started responding to my dog's attempts to play with it. I know it is a bit ironic that the things people do that make them look like idiots are what I use to test for intelligence, but spontaneity is an often-overlooked feature of higher lifeforms that is absent in AI.

 

I think arguments pertaining to tasks are useful for arguing that AI, in its current state, is a tool. The question of "how dangerous is it?" then depends on how the tool is used. For example, stock market bots can be optimized to adapt to a volatile market and make profits from very small changes. In a volatile market, bots can exacerbate market movements by executing trades at a rapid pace. This can result in sudden and large price movements in either direction as the bots buy and sell in response to changing market conditions. In this case, the harm is caused by humans whose irresponsible use of a tool destabilizes the market. From this perspective, it is more like talking about how dangerous guns are: not very, if locked in a box and tossed into the ocean. In the hands of a malicious or incompetent individual? Very.
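Here's a toy illustration of that feedback loop (made-up numbers, not a model of any real trading system): a handful of momentum-following bots that buy when the price is rising and sell when it is falling will amplify a small initial blip into a large move.

```python
# Toy simulation: momentum-following bots amplifying a small price shock.
# Purely illustrative; the parameters and the price-impact model are made up.
prices = [100.0, 100.5]  # a small upward blip to start things off
N_BOTS = 50              # identical momentum-following bots
IMPACT = 0.02            # price impact per net bot order

for step in range(20):
    momentum = prices[-1] - prices[-2]
    # Every bot buys on a rising price and sells on a falling one.
    net_orders = N_BOTS if momentum > 0 else -N_BOTS
    prices.append(prices[-1] + IMPACT * net_orders + 0.1 * momentum)

print([round(p, 2) for p in prices])
# The 0.5-point blip snowballs as the bots keep piling onto the same side;
# start with a downward blip instead and the same loop produces a crash.
```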


So, the next question is when the tool becomes sentient. I think emergence theory is the best framework for that question. Attached is ChatGPT's take on the subject, which I thought was an interesting thread on multiple levels. There was one aspect I learned in a Cognitive Science course that it did not discuss: changes can lead not just to quantitative, but to qualitative differences. When does a bunch of ants become an ant colony? There is a correlation between the number of ants and the presence of a hive mind, but having more ants does not cause the hive mind. Similarly, there may be a correlation between quantitative changes (more CPU cores, RAM, training data, nodes, etc.) and sentience, but those things do not cause sentience. Will these changes result in qualitative differences? Then I think the statement "increasingly refined AI will lead to true intelligence" is partially true: the refinements were necessary, but it is not clear when or if it will happen, what else may be involved, or whether those other things are a direct or indirect result of the refinements.

 

Interesting interview. I enjoyed watching it and thank you for all the other videos you have posted. They really make my day!

Attachment: chat_GPT-emergence_theory.pdf


I could not finish this. I got an hour in and felt like things were going in circles a little. 

 

As for my own 2 cents:

 

I think our understanding of consciousness and qualia and the like suggests that we shouldn't expect the AI to do anything without being told to (i.e. motivation), though much like any motor, I think all you need is the right nudge. I'm not a full-on doomer like Yud is, but I can definitely imagine some of the worst-case scenarios that have been posited, and it's possible that, even if powerful AGI lacks true qualia and consciousness and motivation, it'll still be a Big Red Button that could end the world. I mean, hell, Microsoft already kind of put it out into the wild with Bing Chat. We should at least promote not connecting these things to the Internet once they get scary-good enough.


I really like how à propos today's FreeFall strip is:

 

[Attached image: FreeFall strip fc03901]



One way I can see "AI" turning bad, including the kind we have now, doesn't necessarily involve the machine itself turning on us.

 

Even if the machine is ChatGPT-style, where you give it input and it gives you output, it can still cause harm by giving you the wrong output and the human trusting it. And I'm not talking about trust like "ChatGPT says I should kill my family, so I will".

 

If the AI is really powerful and you ask it really big questions, it might fool you into accepting an answer that sounds good at first but turns out to be a really bad idea. And if you can't understand the answer yourself, you might not realise until it's too late.

 

I'm a programmer, and people around me have already started using ChatGPT to code stuff. And while I can see why it's attractive to use, I can also see how it matters that it doesn't really understand coding or reality. It's still just pattern matching input to output.

 

So I've seen some weird issues arise in the code it outputs, and it's usually about deeper logic that ChatGPT just doesn't get: the code is proper C/Python/whatever, and the programmer might just use it because it runs in most cases. The problem is figuring out why it breaks once it hits that weird corner case.
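A made-up example of the kind of thing I mean (not actual ChatGPT output, just the general shape of the problem): code that is perfectly valid Python, works on every "normal" input, and only blows up on the edge case nobody thought to ask about.

```python
def average_response_time(samples):
    """Return the average of a list of response times in ms."""
    # Looks reasonable and runs fine on typical inputs...
    return sum(samples) / len(samples)

print(average_response_time([120, 95, 143]))  # 119.33... -- works
print(average_response_time([]))              # ZeroDivisionError -- the corner case
```

The scary part isn't this trivial example; it's the same category of oversight buried three layers deep in generated code that nobody fully read.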

 

And the more we use this kind of technology, the less we will be able to understand the systems around us and how to fix them. Even if we don't get wiped out by the AI itself, we might be painting ourselves into an Idiocracy situation.


Wanna add that while a failure in logic for a program might not be the end of the world, it can be a big problem if we use these machine learning systems (not calling it AI cause it's really just pattern matching) to work on serious things that can have big consequences.

 

Imagine making big machinery like power plants or even weapons (some idiots will do it) with help from, or even guided by, ChatGPT-like systems. Those systems won't understand what actually matters, but they will produce good-looking designs, because that's what they do. And if people are foolish enough to use them, or forced to, we'll have machines that don't necessarily do what was intended, and we won't realise it until something bad happens.


Thought you did a really good job of making your points @Ross Scott. I'm a programmer and I like to think I have a decent knowledge of ML and LLMs, but I would have struggled myself, as it seemed like a lot of his arguments didn't really go anywhere and were strange red herrings. On that point, I think it would have been more productive if he had tried to have a conversation and explore the topic, rather than try to win a debate against someone who admits upfront that it's not their area of expertise.

 

I could be wrong about the guy but to me that signals someone trying to make themselves look smarter rather than add something of value.

 

Interesting that he seems to have at least a basic understanding of how LLMs work but still can't follow the logic of how that doesn't indicate consciousness in any way. The thing is, though, it's a hard topic to get into without both parties agreeing on exactly what constitutes consciousness and what parameters need to be met before we can say something is sentient.


On 5/4/2023 at 6:05 PM, Ross Scott said:

I think our entire disagreement rests on Eliezer seeing increasingly refined AI conclusively making the jump to actual intelligence, whereas I do not see that. I only see software that mimics many observable characteristics of intelligence and gets better at it the more it's refined. [...]

 

Yes, for some reason a lot of people seem to think that AI will inevitably progress towards a type of intelligence similar to ours, but since the human brain and AI develop(ed) under very different pressures, it seems obvious to me that they will be very different, unless an AI is made for the explicit purpose of predicting all facets of the behaviour of an individual human.

This is why I'm not worried about an "AI rebellion": a human wants to be free because we are social animals and therefore instinctively preoccupied with social standing. An AI, however unpredictable, exists on a fundamental level to achieve a certain result.


Eliezer seems like he needs some better rhetorical skills to be able to communicate his ideas well... he tried a bunch of "Socratic dialogue" things to get Ross to understand through analogy, but never could make it fully land. Something similar happened to him in the Lex Fridman interview he did, where he went in circles trying but failing to communicate an idea to Lex. I guess his request that Ross not read up in preparation was an attempt to find a way to communicate his ideas to someone without any preconceived notions about them.

 

One main idea that I think got a bit lost during the discussion here is that it isn't really important whether a machine learning tool actually thinks in a way similar to humans at all; what matters is what its capabilities end up being, even if the mechanism is completely different and it's just doing massive pattern matching or something. This got confused a bit with the initial question of whether the AGI would be like the one that's just a bunch of useful tools or the one that, like, becomes sentient and conscious with a bolt of lightning. Eliezer answered that it'd be more similar to the second one, but I think he meant in the sense that its intelligence was created by chance, kind of on its own, rather than in the sense of it becoming sentient. There are a bunch of examples of simple systems used in video-game-style settings, where an algorithm was built with this kind of evolution-like approach (genetic algorithms): you just specify a goal, let a bunch of parameters combine randomly, and save the ones that work best. In many cases the end result wasn't actually the wanted outcome, but some way in which the system strictly accomplished the stated goal:

https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml

 

(that channel also has lots of videos explaining more about AI safety)
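For anyone curious what that "specify a goal, let parameters combine randomly, keep what works best" loop actually looks like, here's a minimal genetic-algorithm sketch (a toy I wrote for illustration, with a made-up fitness function, not anything from those video game experiments):

```python
import random

# Toy genetic algorithm: evolve a list of 10 numbers whose sum is as close to 42
# as possible. The "goal" is only ever specified through the fitness score.

def fitness(genome):
    return -abs(sum(genome) - 42)  # higher is better

def mutate(genome):
    g = genome[:]
    g[random.randrange(len(g))] += random.uniform(-1, 1)
    return g

population = [[random.uniform(0, 10) for _ in range(10)] for _ in range(50)]

for generation in range(200):
    # Keep the best half, refill the population with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=fitness)
print(round(sum(best), 3))  # converges on ~42, via whatever weird values satisfy the score
```

The algorithm satisfies the score you wrote down, not the intention behind it, which is exactly how the systems in that spreadsheet end up "strictly accomplishing the goal" in ways nobody wanted.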

 

The way things like GPT are trained is also like this, only with a much, much more complex system, where specifying the goal so that it does what we want is much harder. Also, since they're trained on examples, and the examples themselves are complex and not easily controllable, the biases in those examples are another way the objective becomes hard to control; the end result is something "that grew on its own" rather than something that strictly follows the objective we want.

 

Then the point is that a very powerful system we can't control very precisely can be dangerous just because it can do a lot of stuff, without needing to be a "sentient being with its own desires like a human". It can be a completely robotic pattern-matching machine, but if it can do complex things, and we can't precisely control the way it works because of that training process, it can end up doing damage.

 

Then the other part of his argument is how fast and how sophisticated these algorithms can get, and I think this is the part where I'm not sure Yudkowsky has enough evidence to back up his certainty about his position. In principle it seems possible, but so far, as impressive as GPT-4 is, and as much as people are hooking systems like it up to robotic arms and cameras to get them to do more stuff, the real question is whether they can actually get as powerful as Yudkowsky claims, as fast as he claims, such that their not having precisely specified goals ends up being as dangerous as he claims. In that regard, it seems many other experts don't hold as extreme a position as his, because the evidence that it can be dangerous in that way isn't as clear.

On 5/9/2023 at 12:21 AM, Masterful said:

Then the point is that a very powerful system we can't control very precisely can be dangerous just because it can do a lot of stuff, without needing to be a "sentient being with its own desires like a human". It can be a completely robotic pattern-matching machine, but if it can do complex things, and we can't precisely control the way it works because of that training process, it can end up doing damage.

Right, thanks for spelling this out. As far as I understand, debating "what shade of intelligence" or "what degree of consciousness" those systems display is nerd-sniping for the purposes of evaluating the dangers of AI. The premise is that the AI-in-a-box will eventually say the right words that will cause the right chain reaction for humanity to wipe itself out; whether it does so on "purpose" is irrelevant.

 

We already have stories out there of GPT-based bots nudging people towards suicide; it doesn't matter whether AIs have self-awareness, goals, or self-preservation instincts; all that matters, as far as the alarmist side of the debate is concerned, is that their inscrutable billion-parameter space does not allow us to say with confidence that, with the right prompt, they won't sing the song that ends the Earth.

 

FWIW though, I find Maciej Cegłowski's perspective (and Ross's) more relatable: the Superintelligence Doomsday Scenario would require a comically large number of factors to align; if our survival does depend on us "baking ethics correctly" into AIs, we're pretty much screwed; and while from the insider perspective (if you're convinced of the danger) it is indeed irrational to work on any other problem, from the outsider perspective, AI alarmism is hard to root in reality.



