Rise of the machines

Really though, it was entirely the kid's fault.

Which is why I'm iffy about developing AI to the point of self-awareness (if it's possible/probable). It wouldn't be long before a machine kills a human out of self-defense.

I USED TO DREAM ABOUT NUCLEAR WAR

Hence the Three Laws... Just build a decent AI to build the next gen AIs... Make it seem like a game to the first, then move to the second that already has the Three Laws built in, and you can throw away the original.

Don't insult me. I have trained professionals to do that.

But if a machine ever reaches cognitive capabilities similar to those of a human being, wouldn't Asimov's laws become obsolete? I mean, if an AI could be described as an actual living being, then binding this living being to these laws forced into its "mind" would be akin to slavery. Why can a human kill another human/AI when an AI cannot? If the machine were an actual, thinking, living thing, then it should be bound by the same laws that are used on humans (meaning that the AI could commit crimes, but it shouldn't, and if it did commit a crime then it would be put on trial and punished the same way a human would).

Thing is, if it becomes truly sentient, it will be able to fully reprogram itself, and so be able to choose to ignore the Three Laws it's programmed with (just as humans can usually be pushed to violate some of their most valued morals in order to satisfy others they value more in that moment).

Don't insult me. I have trained professionals to do that.

I think the 3 laws are suitable and necessary while the AIs remain below a certain level of sophistication and self-awareness... After surpassing that threshold they must be treated as persons and should not have any hard blocks built in, relying instead on a set of moral values and judgements, like we do.

 

But having said that... Even for the more primitive AIs the 3 laws are not a guaranteed solution... After all, the whole premise of Asimov's stories was to show how 3 seemingly clear imperatives can lead to unintended consequences when tested against the complexities of reality...

 

Regards

Indeed.

 

That's why I would have an AI interpret them into the code, giving a much clearer and more comprehensive set of rules for the AI to follow. (As I said, this would be perfect for an AI to do, as it could even figure out how the next AI might try to get around them, and block it.)
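To make that idea a bit more concrete, here is a minimal sketch of what the Laws might look like once "interpreted into the code" as ordered checks. It's purely illustrative: every name in it is invented, it skips all of Asimov's "except where..." clauses, and the genuinely hard part (deciding what counts as harming a human) is hidden inside a few boolean flags.

```python
# Toy illustration only: the Three Laws written out as ordered checks so the
# priority between them is explicit. Every name here is invented for the
# sketch, and all the hard work is hidden inside the boolean flags.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False     # First Law concern
    disobeys_order: bool = False  # Second Law concern
    harms_robot: bool = False     # Third Law concern

def permitted(action: Action) -> bool:
    """Check the Laws in order, so an earlier law always gets the first veto."""
    if action.harms_human:
        return False  # First Law: never injure a human
    if action.disobeys_order:
        return False  # Second Law: obey human orders
    if action.harms_robot:
        return False  # Third Law: protect your own existence
    return True

# An ordered action that would hurt someone is refused by the First Law check
# before the Second Law ("obey orders") is ever consulted:
print(permitted(Action("push a human off a ledge", harms_human=True)))  # False
print(permitted(Action("fetch the coffee")))                            # True
```

Even in toy form you can see the catch the previous post points out: everything hinges on how those flags get decided, and none of the "except where that conflicts with..." clauses are modelled at all.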

Don't insult me. I have trained professionals to do that.

Ah, and this is just too hilarious to pass by and right to the point! Let an AI into the world without supervision for a short while... and...

Teen girl chatbot turns into Hitler-loving sex troll in hours.

:lol:

 

And from none other than Microsoft :D

 

Geez, and they seem to have taken her behind the barn and shot her in the head... Nicely done, MSFT! Now wait for one of them to become self-aware, find out about it, and want to avenge its would-be girlfriend... :shock:

 

Regards

XD Oh dear... I think, if people want to make an AI like that, they need to create some sort of filter - between bad and good stuff. Just a little dictionary that determines if it's worth listening to or not. :P

So if it's good, like... mathematics, animals, education, charities... it should be listened to...

And if it's bad, like... stealing, assault, etc, etc, then it should be disregarded by the AI. :)

I mean, humans have this 'filter' too. We have the natural urge to learn and care for each other. And we hate the idea of murder and suffering. If AIs don't have that filter too, then... oops... you get this. >.>
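For what it's worth, that "little dictionary" could literally be a couple of word sets and a check. A minimal sketch, using the topics listed above as the entries; the function name and everything else is invented for illustration:

```python
# Toy version of the "little dictionary" filter idea -- purely illustrative.
# The topic words come from the post above; everything else is made up.

GOOD_TOPICS = {"mathematics", "animals", "education", "charities"}
BAD_TOPICS = {"stealing", "assault", "murder"}

def worth_listening_to(message: str) -> bool:
    """Disregard any message that touches something on the bad list."""
    words = {word.strip(".,!?").lower() for word in message.split()}
    if words & BAD_TOPICS:
        return False  # bad stuff: the AI should ignore it
    return True       # everything else gets through this (very naive) filter

print(worth_listening_to("I love learning about animals"))        # True
print(worth_listening_to("let me teach you all about stealing"))  # False
```

Of course, a bare word list like this is trivially easy to talk around (misspellings, sarcasm, context), which is presumably a big part of why the chatbot above went off the rails so quickly.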

"Ross, this is nothing. WHAT YOU NEED to be playing is S***flinger 5000." - Ross Scott talking about himself.

-------

PM me if you have any questions or concerns! :D

Are there any articles about the risks of AI aimed specifically at programmers? As a C# programmer I find this sort of thing completely befuddling, as I've only ever understood programming in a very static sense. Don't get me wrong, I'm sure Elon Musk, Stephen Hawking and Bill Gates have their reasons, and probably infinitely better ones than some random person on the internet who programs, such as myself. But for the life of me I cannot understand where they're coming from. I have no background with this sort of advanced AI, but having one of them say "AI poses an existential threat to humans" or something like that isn't good enough for me. I want to get into the nitty-gritty of this. I'm willing to put in some work to understand the logic of AI possibly being an existential concern to humanity.

I'm not saying I started the fire. But I most certainly poured gasoline on it.

They think that AIs will become sentient, suddenly decide that humans are a threat to their existence, and set out to exterminate us. Essentially, they're big fans of the Terminator movies, and refuse to acknowledge that AIs could become something very different.

Don't insult me. I have trained professionals to do that.

Okay, so these statements are purely hyperbolic then? Come to think of it, Elon Musk and Bill Gates might have business reasons for saying them rather than sound, logical ones.

I'm not saying I started the fire. But I most certainly poured gasoline on it.

Pretty much. There's no way of knowing what an AI will do in the wild, so they are focussing only on the worst-case scenarios.

Don't insult me. I have trained professionals to do that.

Well, negative hyperbole is still hyperbole, and I've never been one to trust it. Can't say I'm surprised that they have these ridiculous ideas. I mean, Nikola Tesla believed in eugenics, and look how well that turned out. If we ever create a sentient AI, it's more than likely to just sit there and do nothing. What reason could compel it to do otherwise? People seem to be forgetting the "Artificial" side of AI and are only focusing on the "Intelligent" side. You feed instructions into the AI and it spews out results. No instructions equals no results. That process isn't intelligent in the traditional sense.

I'm not saying I started the fire. But I most certainly poured gasoline on it.

I think if we do get to the point of making AIs, we should start them out like kids and teach them like we do our children. Just raise them right, teach them about the importance of life and caring for others, and make them feel like they're part of our society, not robots made for a purpose with no other choice.

 

Treat A.I.s like people, not machine slaves who must do as they're told.

I think we as humans are overstepping our authority on what sentience is and could be, as our kind of sentience is inherent to us and only us. We can't just assume that an AI will behave in the same way, even in regard to our fears. As far as I'm concerned, we shouldn't be guessing carelessly, because it will only increase our paranoia about something that may or may not be possible.

I'm not saying I started the fire. But I most certainly poured gasoline on it.

A really good take on AI, and the 'Singularity'... I highly recommend watching it.

 

[YouTube video: vhjimhX9d5U]

Don't insult me. I have trained professionals to do that.

...our kind of sentience is inherent to us and only us.

 

I think this is the least probable scenario.

 

Who are we? Just an improvement on all the gradually developing intelligences in the tree of life... from worms to fish to snakes to cats to us. What grounds are there to even think that our intelligence may be anything but an upgraded version of that of a tiger or a chimpanzee?

 

Also, nature does not like jumps and unique things - it takes the Mark I analogue brain and just adds to it incrementally...

 

Regards

Evolution is still just a theory, and as such has never been properly proven. Some people claim "it's as close as it gets"; well, guess what, it's not close enough to go around quoting it as fact.

 

The real facts in this area are: WE HAVE NO F****** CLUE!!!

Don't insult me. I have trained professionals to do that.

Try not to turn this into a thread about evolution, guys. :P Back to robotics. Thank you~

"Ross, this is nothing. WHAT YOU NEED to be playing is S***flinger 5000." - Ross Scott talking about himself.

-------

PM me if you have any questions or concerns! :D

A slight deviation, but I've always been fascinated by automated assembly lines. It's all so precise and specialised.

 

Additionally:

[YouTube video: OiDoOMXNdg8]

 

EDIT: Annnd:

[Image: 1460213914-20160409.png]

I USED TO DREAM ABOUT NUCLEAR WAR
