
Robot Ethics

Recommended Posts

http://en.wikipedia.org/wiki/Robot_ethics

 

I'll start this out. If someone designs a robot that can feel pain, should they be charged if they purposefully inflict massive amounts of pain on it for no reason? (I asked my mom this same question and she just couldn't get it in her head that something other than a biological being could possibly feel pain. Needless to say, I didn't talk to her for the rest of the day.)

Share this post


Link to post

Just for sadistic pleasure, yeah. Even doing that just to test its pain threshold or whatever would be wrong. You wouldn't run tests like that on a human. And I believe there are far too many robot-enslavement movies out there for us to create a working sentient robot and treat it like trash. A non-sentient one is debatable.

 

But doing it for no reason at all? Then sure, charge them.

http://steamcommunity.com/id/Kaweebo/

 

"There are no good reasons. Only legal ones."

 

VALVE: "Sometimes bugs take more than eighteen years to fix."

Share this post


Link to post

A major problem here is how one defines "consciousness". It overlaps with the main issue of the field of "artificial intelligence". The term AI is mainly associated with unsolved problems in computer science. As soon as a problem is solved and there is an algorithm for it, people refuse to call it intelligent, and the problem is attributed to another field. (E.g. things like compiler optimisation used to be AI problems; pathfinding in computer games is technically not considered an AI problem anymore either.)

 

So, on the one hand, if you were an engineer and designed such a system, you would know how it works down to the bottom of it. You would know that there is an array of sensors delivering data, which is fed through algorithms (perhaps embedded in an FPGA, or implemented as loads of arithmetic, loops, branches, etc. in software) that assign the label "pain" to it once it exceeds a certain threshold.

It's a sequence of calculations. Would you call that intelligent? An emotion? Consciousness?
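To make that concrete, here is a minimal sketch of such a system. Every detail (the 0-to-1 sensor scale, the threshold value, the names) is invented for illustration; the point is just that it is, visibly, a sequence of calculations:

```python
# Hypothetical sketch of "pain" as a threshold on sensor data, as
# described above. Sensor scale and threshold are made up.

PAIN_THRESHOLD = 0.8  # arbitrary cutoff on a normalised sensor reading

def classify(sensor_readings):
    """Label a batch of normalised (0.0..1.0) pressure/heat readings."""
    signal = max(sensor_readings)      # strongest stimulus wins
    if signal > PAIN_THRESHOLD:
        return "pain"                  # the "emotion" is just a string label
    return "ok"

print(classify([0.2, 0.95, 0.4]))  # -> pain
print(classify([0.1, 0.3]))        # -> ok
```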

 

On the other hand, if you analyze the human brain, it basically boils down to the same thing, yet you experience what you are experiencing right now. You start asking yourself how that is even possible, or how other living entities perceive the world around them, or even how those machines would. And then you realize that you cannot and never will know, because "you" yourself are locked into that system of neurons in your head.

 

 

Slightly off topic: and here comes the point that bugs me most. The realization that you can't tell anything for certain about your environment, because it is basically an interpretation of signals from a sensory array connected to your brain. You can't even tell whether any of this actually exists, including that thing called a "brain", or what "existence" actually means. Your only input is those sensors, which prove their presence only via themselves, and you appear to have some form of motor output through which you can interact with that world, whether it actually is that way or not.

 

Share this post


Link to post

(Lonesome Drifter enters philosophy mode.) If the robot has a personality and a conscience, runs its life on moral codes that it has formed itself, and is able to establish its own opinions on things, basically becoming a person in every sense except biological life, then I believe the person should be charged, because the robot is pretty much a person and its creator should have more sympathy for it. But, being a firm believer in democracy and in everyone having a say, I think it should go to a vote within the country where the incident happened. If this were ever to happen, though, I would want him charged.

Waiter, there's a terrorist in my soup!

Share this post


Link to post

I think that there's going to be a huge merger between citizen ethics and robot ethics.

But then again, robots might well become citizens somewhere between the near and the far future.

 

For me, that felt very heavily opinionated. It felt almost uncomfortable to type that.

Share this post


Link to post

Honestly the whole 'robots will become civilians' thing always seemed unrealistic to me. Why, when we create fully functional, maybe human-based machines, would we create them just to live like us? Creating some robot machine race that lives alongside us for no apparent reason other than 'cuz it'd be cool' doesn't seem worth the effort. I'm 99% sure that when scientists begin making fully capable machines, they won't give them the intelligence to think for themselves; they're simply there to do things for us.

 

And even then, you could argue that creating robot servants is a waste of time anyway, since it means we'd be out of a LOT of work. And the chance of a robot rebellion rising up is bad, too. But that last one is pretty far-fetched.

http://steamcommunity.com/id/Kaweebo/

 

"There are no good reasons. Only legal ones."

 

VALVE: "Sometimes bugs take more than eighteen years to fix."

Share this post


Link to post
Honestly the whole 'robots will become civilians' thing always seemed unrealistic to me... they're simply there to do things for us

 

I think AI becoming human-like is just a matter of time. It's inevitable. There are two converging processes here:

 

- One is that, in order to do things for us, robots would need to be able to make decisions without being programmed all the time. But to make sophisticated decisions and to fit within human society, the machine will have to understand humans and act in a compatible manner. The obvious way to achieve that is to give the machine the same social logic and emotions that humans possess. In fact, there is no other way: just as nature developed emotions and intuition as solutions to decision-making in the natural environment, so we will have to do the same for our machines to be able to co-exist usefully with us.

 

- Two is humans modifying and augmenting their bodies with artificial enhancements: implants, replacement body parts, etc. This is already happening. Two-way human/machine synaptic interfaces for prosthetics are being developed, and at some stage it will be too difficult to say whether you are dealing with a machine with biological components or a human with machine parts.

 

If someone designs a robot that can feel pain, should they be charged if they purposefully inflict massive amounts of pain on it for no reason?

 

This question, as stated, is easy to answer - whatever the law of the day says. Or, in other words, you cannot be charged with breaking a law that does not exist.

 

Robots first will have to be recognised as persons in law, then they will acquire rights which the state will have to protect. When that happens, causing robot pain will become an offense.

 

This will, most likely, be a lengthy process, though - perhaps, the evolution of animal rights is an appropriate analogy here.

 

Regards

Share this post


Link to post
This question, as stated, is easy to answer - whatever the law of the day says. Or, in other words, you cannot be charged with breaking a law that does not exist.

The question is SHOULD, not will... Many things SHOULD be legally prosecuted, but they aren't; there are many things I SHOULD do to get rid of my belly fat, but I don't... Get the idea?

 

Robots first will have to be recognised as persons in law, then they will acquire rights which the state will have to protect. When that happens, causing robot pain will become an offense.

There is no requirement that something be a person for it to be illegal to torture it. (And torture is what intentionally inflicting pain on a subject for the express purpose of causing pain is.)

 

This will, most likely, be a lengthy process, though - perhaps, the evolution of animal rights is an appropriate analogy here.

Animals already have better rights than unborn human beings... Even unborn animals. What makes you think that robots will have worse rights than that?

Don't insult me. I have trained professionals to do that.

Share this post


Link to post
Animals already have better rights than unborn human beings... Even unborn animals. What makes you think that robots will have worse rights than that?

My post above (that apparently no one read) basically boils down to the point that machines like the computer you're using right now are not magic. It's circuitry and properly executed algorithms ("mathematical cooking recipes"). Loads of them.

 

If such robots are built, they won't be magic either. They will be designed and made up in exactly the same way, and people will refuse to call them conscious or to say that they have feelings or emotions, constantly thinking that they missed the goal of designing such a robot, because it's just algorithms.

Share this post


Link to post

@BTGNullseye: the original question was "should they be charged". That's why I said "as stated" before posting my opinion.

 

Your interpretation of the question is a moral one and should be restated as "should it be an offence to cause pain to a machine that feels pain?". That is a valid question. In fact, I think that was the intent of the original question, but I did not want to second-guess the OP.

 

Whether animals have rights (and are therefore persons in law) or not is still a grey area; their status in law is still changing and evolving and will continue to do so for decades to come. That is why I used them as an example: I think it illustrates how long and drawn-out the process can be.

 

@St. Goliath - deterministic algorithms (your "cooking recipes") are suitable only for the simplest types of robots. Once you move into fuzzy logic, neural networks, etc., even the people who design them cannot say exactly how they will operate. Biological brains are not magic either. They also started with very simple systems and algorithms, which rapidly became complicated and non-deterministic.

 

People will not refuse to recognise consciousness of a machine just because they think they know how it's made. Quite the opposite, people tend to anthropomorphise and ascribe human characteristics to just about anything, from dead trees in the forest to even simple robotic toys.

 

The problems I see in the recognition of robots' rights will be of similar nature to our present day's racism, chauvinism and xenophobia. I think lots of people will feel that sentient robots are so human that they pose a competitive threat to them.

 

Regards

Share this post


Link to post
deterministic algorithms (your "cooking recipes") are suitable only for the simplest types of robots. Once you move into fuzzy logic, neural networks, etc., even the people who design them cannot say exactly how they will operate.

I don't know what you call fuzzy logic, but when I think of the control systems I was forced to design, it boils down to simple interpolation. Regarding neural networks: a neuron basically does weighted addition, where the weighting factors can be adjusted using some form of feedback. Both things are computable, i.e. if you know the current state, you can determine pretty well what will happen next; you can simulate it with a Turing machine, and there we have our algorithms, our arithmetic operations, branches and loops. =P (Unless you prefer functional programming languages.)
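Written out, such a neuron really is just a few lines. The following is a generic perceptron-style sketch (no particular network, all values made up); note that given the current weights and inputs, the next output is fully determined:

```python
# A single artificial neuron as described above: a weighted sum plus a
# feedback rule that nudges the weights. Generic perceptron-style sketch.

def neuron(inputs, weights, bias):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if total > 0 else 0.0       # simple step activation

def train_step(inputs, weights, bias, target, lr=0.1):
    error = target - neuron(inputs, weights, bias)   # feedback signal
    weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    bias += lr * error
    return weights, bias

# Given the current state (weights, bias), the next output is fully
# determined -- exactly the "computable" point made above.
w, b = [0.0, 0.0], 0.0
for _ in range(10):
    w, b = train_step([1.0, 1.0], w, b, target=1.0)
print(neuron([1.0, 1.0], w, b))  # -> 1.0
```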

 

Biological brains are not magic either

I mentioned that above, which makes the whole discussion even more interesting. If you look at the whole thing as a huge network of neurons and consider how you actually perceive the world around you, the question arises of what consciousness and existence actually mean. Something we won't solve here.

 

People will not refuse to recognise consciousness of a machine just because they think they know how it's made. Quite the opposite, people tend to anthropomorphise and ascribe human characteristics to just about anything, from dead trees in the forest to even simple robotic toys.

 

The problems I see in the recognition of robots' rights will be of similar nature to our present day's racism, chauvinism and xenophobia. I think lots of people will feel that sentient robots are so human that they pose a competitive threat to them.

Yes, people attribute human characteristics to lots of things, especially if they don't know how those things work (that's why I used the term "magic"), but if you know that you are dealing with a machine that implements "consciousness" as a series of calculations written down by somebody, will you say it has feelings or call it conscious?

I think that even if people attribute human characteristics to robots, that thought will come to their minds when dealing with human-like machines (-> uncanny valley), amplified of course by chauvinism and xenophobia.

Share this post


Link to post

I mean when the system starts working with probabilities and estimates, and incorporates learning.

 

Yes, individual neurons are simple, but the complexity increases exponentially with the number of elements. The system will produce different results starting from the same initial state, and you will not be able to predict deterministically what the end result will be, only in terms of probabilities...
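A toy illustration of that point, assuming nothing more than a probabilistic decision rule (the state value and the two actions are made up):

```python
# A system that chooses actions probabilistically gives different
# results from the same initial state; only the distribution of
# outcomes is predictable. Entirely invented example.

import random
from collections import Counter

def decide(state):
    # identical "initial state" on every call, probabilistic choice
    return random.choices(["left", "right"], weights=[state, 1 - state])[0]

outcomes = Counter(decide(0.7) for _ in range(10_000))
print(outcomes)  # roughly 7000 "left" / 3000 "right" -- but any single
                 # call could go either way
```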

 

But here again there is no fundamental difference between electronic and biological machines if you build the former on the same principles as the latter.

 

Would I call a machine conscious, or accept that it may have feelings, if I know that it is a sum of parts governed by calculations and algorithms designed by someone? Of course I would.

 

Conversely, would you stop considering humans human when we find out exactly how the brain works and determine all the algorithms and calculations it performs as it functions?

 

To me, humans are just complex biological machines. Our brains also perform the same calculations and are subject to the same mathematical and logical principles as the machines we make. These principles were not designed by us; they were only discovered, and we could only discover them because they exist and are universal.

 

There is one issue that intrigues me, though: self-awareness. At what point do all these processes combine to create the "first person" perception? Is it in the design of the hardware, the software, or is it, perhaps, a fundamental property of the universe?

 

Regards

Share this post


Link to post

If a toaster could talk, it would be the happiest little toaster in the world, simply because it would know it was fulfilling its purpose whenever the lever went down and toast popped out.

If a robot could be designed to experience pain and then experienced it, I imagine it would in some capacity be informed that it serves some purpose by experiencing pain, and so would still be fulfilling its given duty and job while experiencing it, like a soldier or a crash test dummy.

This is a nice metric server. No imperial dimensions, please.

Share this post


Link to post

The logical extreme of that argument is quite concerning; it's dangerously close to justifying slavery.

 

I would say that if we ever made conscious machines, we would be bound to grant them basic rights. There are too many arbitrary lines that would have to be drawn to exclude a conscious machine from ethical consideration. The hard part would be figuring out what constitutes a conscious entity.

Share this post


Link to post

Blue, you seem to be saying that pain and serving a purpose are somehow equivalent.

 

Pain is simply a warning signal. If you are serving a purpose and it is painful, there is something wrong. You need to seriously consider whether you are the right entity to serve that purpose and whether there may be a better way of serving it, one that does not involve pain.

 

If the pain is caused by the actions of others towards you, that is not good either (unless you like S&M or something and it is consensual, or you are getting your root canals drilled), and in most cases deliberately causing pain to other humans or animals is immoral and often illegal. There is no reason for it to be different with robots if their design is such that they can feel pain.

 

Regards

Share this post


Link to post

But we did not make animals able to feel pain.

This is a nice metric server. No imperial dimensions, please.

Share this post


Link to post

Humans did not create animals in the first place. Our culture views animals as conscious entities, just like humans. (If any of that contrasts with your religious views, please just ignore it.)

 

For the reasons stated above by Vapymid, animals (just like normal humans) view pain as a bad thing (a "worst-case feeling") and try to avoid it.

 

For those reasons, willingly causing pain to animals is viewed as "not ok", just like causing pain to other people. And because the idea of our law is to model the common ethical and moral views of the people, it is prohibited by law.

 

So the idea of the whole thing is that if we build conscious machines that can feel pain (which is a logical thing to do, because pain indicates that your body is being damaged and you should do something to prevent that), then causing them pain should be illegal for the same reasons.
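That functional view of pain is easy to sketch in code. Below is a toy control step; the sensor and actuator functions are invented placeholders, not any real robot API, but the "detect damage, then act to prevent more of it" logic is the whole point:

```python
# Sketch of pain as a protective control signal: a damage reading
# crosses a threshold and the controller reacts to prevent further
# damage. The sensor and actuator are invented stubs.

DAMAGE_THRESHOLD = 0.5

def control_step(read_damage_sensor, withdraw_limb):
    damage = read_damage_sensor()
    if damage > DAMAGE_THRESHOLD:   # the "pain" condition
        withdraw_limb()             # do something to prevent more damage
        return "pain: withdrawing"
    return "ok"

# Demo with stubbed-out hardware:
print(control_step(lambda: 0.9, lambda: None))  # -> pain: withdrawing
print(control_step(lambda: 0.1, lambda: None))  # -> ok
```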

Share this post


Link to post

My argument was flawed by virtue of the fact that I do not think about robotic ethics all that much.

My reasoning largely stems from the idea that robots would not need or want rights, or anything like that, unless we told them (by programming them) that they could ask for rights if they wanted to. At the point where an artificial intelligence could truly discern for itself whether or not it wanted rights, the only possible way it could view us is through the analogy of a relationship between Creator and Created, like Man and God, like it or not. (The analogy is by no means perfect, since man is sinful and God is not. But that's a different matter. The analogous relationship would not be ignorable.)

This is a nice metric server. No imperial dimensions, please.

Share this post


Link to post

Well, that's the sad thing. Pain is very subjective, and it's hard to put a number on it to add to a program. But discounting programming concerns, and assuming that was all accounted for and they had human cognition, I think it's perfectly reasonable that an AI/robot should be treated just as humanely as a living intelligent being. And yes, as such, anyone doing anything inhumane to one should be subject to the law in full.

Share this post


Link to post

