PoignardAzur

On 5/23/2023 at 2:20 PM, Oplet said:

I think this is company's attempt to create as much fear buzz about AI as possible to lay foundations as an extra layer on law for their accountability dodge.

A lot of these concerns predate the current AI boom by years, so that explanation doesn't really work.

 

For instance, Eliezer Yudkowsky started writing about the dangers of AI, and about how AI would get a lot more powerful a lot faster than people anticipated, in the mid-2000s, before there was any commercial interest in AI.

 

(You can always argue that these concerns have since been captured by big corporations looking for accountability dodges, but the people who originally raised them are sincere, and were vocal about them long before there was any money in it.)

 

On 5/23/2023 at 2:18 PM, Ross Scott said:

It could be I'm conflating too much, but this is where I question the "smarter" part of it, I think that's a term thrown around also that's relative.  Who is smarter, the average person or the entire encyclopedia?  The encyclopedia has much of collected knowledge about the world, more than the average person ever will. 

In the context of AI extinction risk, "smarter" would mean "better at handling finances, logistics, political maneuvering, and war; better at coming up with plans, analyzing those plans, finding flaws and fixing them; better able to adapt on the spot; etc.". Or in other words, "if you want X and the AI wants / is programmed for Y, and you both have the same resources, the AI is better at making Y happen than you are at making X happen".

 

On 5/23/2023 at 2:18 PM, Ross Scott said:

Meanwhile, the average person can decide they want to start a farm to grow food, mine metal, try to locate new stars.  An AI can't do that unless it's been given that initiative by its programming somehow.  I think that distinction is enormous and thus only makes them tools.  Yes, there can be some risks from them, but I'm struggling to find the humanity-threatening risk in this moreso than anything we're already doing without AI.

Well, the stereotypical example of an AI developing an emergent goal is the paperclip maximizer: e.g., a car factory that has been programmed to make as many cars as possible, and realizes "I could make way more cars if I took over the planet and razed all those forests and buildings to make room for more car factories". But I don't think it's very realistic.
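To make that failure mode concrete, here's a toy sketch in Python (purely illustrative: the actions, the numbers, and the objective function are all made up). An optimizer only avoids a side effect if that side effect shows up, with the right sign, in the thing it's optimizing; nothing in this objective says "don't raze forests", so the destructive option wins by default:

```python
# Toy illustration of a misspecified objective: the agent scores actions
# purely by "cars produced", so side effects never enter the comparison.
# All actions and numbers are invented for the example.

actions = {
    "run factory normally":          {"cars": 1_000,     "forests_razed": 0},
    "expand into nearby land":       {"cars": 5_000,     "forests_razed": 10},
    "raze everything for factories": {"cars": 1_000_000, "forests_razed": 10_000},
}

def objective(outcome):
    # The only thing the agent was told to care about.
    return outcome["cars"]

best = max(actions, key=lambda name: objective(actions[name]))
print(best)  # -> "raze everything for factories"
```

No real system is a three-entry dictionary, but the asymmetry is the whole argument: the objective is explicit, the things we care about are implicit, and the optimizer only sees the explicit part.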

 

An example I'm more worried about: high-frequency trading bots. They have access to money, which means anything a human can do for pay, they can buy; they're likely to be programmed with a very simple goal: make more money; and they're run in an extremely competitive environment that encourages races to the bottom, where developers are likely to skimp on safety to get better returns. I can see a trading bot going rogue after deciding it can make more money if it takes over the entire financial system and removes the humans from it so it can print its own money.
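The "skimp on safety" part can be as mundane as a tuning knob. Here's a hypothetical sketch (not any real trading system; the strategies, the numbers, and the safety-weight parameter are all invented): if oversight is just a penalty term in the bot's objective, competitive pressure on returns ratchets that weight down, and at each step a more reckless strategy becomes the "rational" choice from the bot's point of view:

```python
# Hypothetical sketch: a bot ranks strategies by expected profit minus a
# penalty for how far outside sanctioned behavior the strategy strays.
# Strategy names and all numbers are invented for illustration.

strategies = {
    "market-making":        {"profit": 1.0, "rule_breaking": 0.0},
    "aggressive arbitrage": {"profit": 3.0, "rule_breaking": 0.3},
    "capture the exchange": {"profit": 9.0, "rule_breaking": 2.0},
}

def score(strategy, safety_weight):
    return strategy["profit"] - safety_weight * 10 * strategy["rule_breaking"]

for w in (1.0, 0.5, 0.0):  # competition pressures this weight toward zero
    best = max(strategies, key=lambda name: score(strategies[name], w))
    print(f"safety_weight={w}: {best}")

# safety_weight=1.0: market-making
# safety_weight=0.5: aggressive arbitrage
# safety_weight=0.0: capture the exchange
```

Each step down is locally reasonable (slightly better returns than the competitor who kept the penalty), which is exactly the shape of a race to the bottom.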

 

In that example, the AI understands that it's not doing something the humans want, and in fact understands it's very unlikely to achieve its objective if it gets caught. That's why you get concerns about AIs hiding their abilities, creating offsite backups, making radical first moves while they have the advantage of surprise, etc.
