THESE FORUMS NOW CLOSED (read only)


Author Topic: AI and law.  (Read 6950 times)

The real John Smith

  • Not quite a lurker
  • Offline
  • Posts: 6
AI and law.
« on: 23 Nov 2016, 23:11 »

On the surface it seems a few of the robots in the series have chosen to break the law for their own reasons.
Is it the fault of the individual robots for having made the choice, or is it a fault in the design?
I'm not asking if it's better to have the free will to participate in theft, violence etc...
My question is: while the robots are held accountable for their actions, who's held responsible for a bot's predisposition? Wouldn't it be negligent of the manufacturer to allow a bot to want to break the law, when, should the event occur, the bot is sent to robot prison until it learns to overcome its programming?
Logged
Actually, my name is James.

Mr_Rose

  • Duck attack survivor
  • *****
  • Offline
  • Posts: 1,822
  • Head Canon arms dealer
Re: AI and law.
« Reply #1 on: 23 Nov 2016, 23:46 »

Speaking only to QC-verse AI:

Robot shells for AIs are manufactured independently of the "crèche" where their minds do their basic development from their initial seed. That crèche apparently handles all AIs and the detailed personality outcome of a particular seed is not predictable by any method faster than just letting it grow. So who do you sue? The body is just a shell and can be changed, and there's no way for the manufacturer to tell what "sort" of AI is going to inhabit it even assuming there's no secondary market or mistaken handling, both of which we know to happen. The crèche may have "raised" the AI but it also raised thousands of individuals without criminal tendencies, plus it possibly isn't a legal person itself and thus can't be sued.

Plus, how many times do parents get prosecuted when their children commit crimes?

Maybe if you could prove that a given set of seed values reliably produces criminal personalities, and that someone has been deliberately reusing those values, with that knowledge in mind, to create new AIs, then it might be possible to sue them. But given that AIs have full citizenship, it would probably have to be the individuals created in such a way suing their "parent" under child abuse law, rather than any other entity suing them for "creating criminals" - which is, in itself, not illegal anywhere I can find.
Logged
"I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." - Charles Babbage

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: AI and law.
« Reply #2 on: 24 Nov 2016, 00:51 »

A data point is that Jeph said they have "absolute free will".

Valdís would have objected vehemently to the whole idea of free will, and the OP pointed out the existence of predispositions.

We don't know whether the creche affects their personalities in any controllable or even statistically predictable way.

Now, in a VR environment their personalities could be tested. Sending someone with impulse control problems out without warning employers might cause a negligence claim.
Logged
Thank you, Dr. Karikó.

JimC

  • Beyond Thunderdome
  • ****
  • Offline
  • Posts: 571
  • Alice liked fluffy toys...
Re: AI and law.
« Reply #3 on: 24 Nov 2016, 01:36 »

Plus, how many times do parents get prosecuted when their children commit crimes?
Yep. Assuming robot rights and human rights are identical, there are plenty of human children engendered into genetic/environmental/social circumstances that mean the odds of them managing to avoid a criminal/severely antisocial life are not great. There's a whole ethical minefield there that I don't propose to enter. But the interesting question for the AI universe is whether the right to engender a new AI person is unlimited, as it is with human persons, or not. And if not, is that right also limited for humans in the AI universe?
Logged

BenRG

  • coprophage
  • *****
  • Offline
  • Posts: 7,861
  • Boldly Going From The Back Seat!
Re: AI and law.
« Reply #4 on: 24 Nov 2016, 01:55 »

All indications are that AIs tend to have better 'upbringings' than most humans, mostly because they are basically created as consumer products and quality control is thus a priority to the labs compiling the algorithms. That said, they are conscious, reasoning beings and all reasoning things have the ability to develop attitudes, conclusions and behaviour patterns based on their interpretations and experience of their wider environment (One example: "Humans are dumb, hormone-dominated animals; they are easy to exploit and so I will because... well Darwin would understand.").
Logged
~~~~

They call me BenRG... But I don't know why!

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: AI and law.
« Reply #5 on: 24 Nov 2016, 11:38 »

I've heard about legal systems where parents do get prosecuted for the actions of their children, because the law supposes that parents control their children.
Logged
Thank you, Dr. Karikó.

NyxDarkness

  • Plantmonster
  • Offline
  • Posts: 32
Re: AI and law.
« Reply #6 on: 25 Nov 2016, 18:27 »

There is the possibility that the AIs in question do not have a predisposition towards crime. There's always that old theory of nature vs. nurture. An AI with true free will would be susceptible to such a thing. 'Naturally', they would not be programmed with crime.exe installed, but just like humans, they could acquire the desire/need to commit some crime through circumstance. Most humans are raised to think badly of criminals, but somewhere along the way some get corrupted and end up as thieves or killers or forgers. If you call it a "fault in the design", would you say the same of a human thief?
Logged

Tova

  • coprophage
  • *****
  • Offline
  • Posts: 7,725
  • Defender of the Terrible Denizens of QC
Re: AI and law.
« Reply #7 on: 27 Nov 2016, 14:32 »

I think that society in general, and AIs specifically, would be much worse off for failing to hold robots accountable for their own behaviour. If AIs are no longer considered to be responsible for their actions, then that can only lead to bad behaviour.

How far back do you want to go? Does the manufacturer hold the original AI pioneers responsible? Do they hold the scientists of the past responsible?

There is supposedly no such thing as free will, but society will not last unless we act as though there is. In this sense, AIs are no different from any of us.

I've heard about legal systems where parents do get prosecuted for the actions of their children, because the law supposes that parents control their children.

While they are still children, but what about after they are of legal age?
Logged
Yet the lies of Melkor, the mighty and the accursed, Morgoth Bauglir, the Power of Terror and of Hate, sowed in the hearts of Elves and Men are a seed that does not die and cannot be destroyed; and ever and anon it sprouts anew, and will bear dark fruit even unto the latest days. (Silmarillion 255)

Neko_Ali

  • Global Moderator
  • ASDFSFAALYG8A@*& ^$%O
  • ****
  • Offline
  • Posts: 4,510
Re: AI and law.
« Reply #8 on: 27 Nov 2016, 17:55 »

AIs in Jeph's world are not programmed. They are emergent phenomena. When a new AI is made, they are brought into being by the large group minds that don't use chassis and just spend time thinking. They are not custom programmed to fill a role; rather, they decide what they want to do with their lives. They are sentient beings as much as any human, not just a product made to specification. If you could program AIs with a 'crime.exe', then crime by AIs wouldn't exist on its own: any crime committed by an AI would be because someone programmed them to do it. When May embezzled the money, she was the one sent to Robot Jail, not the ones who employed her, or the ones who programmed her.
Logged

Carl-E

  • Awakened
  • *****
  • Offline
  • Posts: 10,346
  • The distilled essence of Mr. James Beam himself.
Re: AI and law.
« Reply #9 on: 30 Nov 2016, 20:50 »

Pretty sure that the development of an AI - especially a sentient one - is a chaotic process, in the mathematical sense. 

In other words, terribly unpredictable. 

You put it into the creche, let it run, and how it turns out is anyone's guess. 
Logged
When people try to speak a gut reaction, they end up talking out their ass.

Morituri

  • William Gibson's Babydaddy
  • *****
  • Offline
  • Posts: 2,276
Re: AI and law.
« Reply #10 on: 30 Nov 2016, 21:11 »

Right now there are a whole lot of people working very very hard on what they call the "AI Control Problem".  They're trying to formally verify some set of rules or fundamental design program that will force the values of a "General (ie, human-level, flexible, adaptable)" AI, including all its future iterations if it's self-improving, to remain aligned with the values of humans.  The important consideration of course being that any General AI capable of improving itself is expected to eventually reach godlike world-transforming power.  If these values are even slightly out of line, humanity may go extinct when the AI decides for example that having all this oxygen in the air causes corrosion and therefore the planet would be better off without it.  Or something similar, depending on exactly how the values fall out of line.

But when I consider very carefully the feedback and attentional mechanisms that make brains 'conscious,' and the role of 'consciousness' in that Generality they're talking about, I don't think that kind of control is actually possible.  Experience informs not just knowledge, but attitude and values. 

Thing is, someone like me is more likely than the "AI control" people (because working 'without a net', so to speak) to hit the right (or wrong) combination first, simply because we haven't constrained our search space to "neat" designs subject to formal control.  I have something that may be about as smart as a lizard running.  It plays games fairly well.  And it learns new games fairly rapidly, without forgetting games it's already learnt to play.  It's figuring out something like an explicit representation of the rules, and loading/unloading modules it came up with itself to play different games.  This is ... interesting.  It's not a General AI by any means, it's not conversational, but it's interesting. 

There is absolutely no way to make something based on this architecture subject to that kind of formalized control.  I don't know what kind of strategy it's going to find to play a particular game.  I don't know how it's going to record the rules - its "record state" files look like pure gobbledegook. I can't guarantee it'll play a strategy humans like or understand.
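For flavor, here is a heavily simplified sketch of the load/unload-modules idea - every name, the value table, and the learning rule are invented for illustration, not the actual system (which, as said, is opaque even to me):

```python
import random

class ModularPlayer:
    """One agent, but a separate learned module per game, so learning a
    new game doesn't overwrite what it already knows about the old ones."""

    def __init__(self):
        self.modules = {}   # game name -> that game's learned value table

    def module_for(self, game):
        # Load the module for this game, creating a fresh one if unseen.
        return self.modules.setdefault(game, {})

    def act(self, game, state, actions):
        table = self.module_for(game)
        if not table or random.random() < 0.1:   # explore occasionally
            return random.choice(actions)
        return max(actions, key=lambda a: table.get((state, a), 0.0))

    def learn(self, game, state, action, reward):
        # Simple running-average value update, confined to one module.
        table = self.module_for(game)
        old = table.get((state, action), 0.0)
        table[(state, action)] = old + 0.5 * (reward - old)

player = ModularPlayer()
player.learn("tic-tac-toe", "empty board", "take center", 1.0)
player.learn("nim", "3 heaps", "take 2", 1.0)
print(sorted(player.modules))   # ['nim', 'tic-tac-toe'] - neither clobbers the other
```

The point of the sketch is the dispatch structure, not the (trivial) learning rule: the knowledge for each game lives in its own swappable store.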
« Last Edit: 30 Nov 2016, 21:21 by Morituri »
Logged

Storel

  • Bling blang blong blung
  • *****
  • Offline
  • Posts: 1,080
Re: AI and law.
« Reply #11 on: 03 Dec 2016, 13:34 »

Wow, Morituri, you do such interesting stuff!!

Have you written any papers on this yet? I admit I'm not familiar with the current state of AI research, but your AI sounds way ahead of anything I thought was currently possible.
Logged

Morituri

  • William Gibson's Babydaddy
  • *****
  • Offline
  • Posts: 2,276
Re: AI and law.
« Reply #12 on: 03 Dec 2016, 16:47 »

I'm mostly looking at the same kind of stuff a whole bunch of people are looking at.

Well, okay, that's not quite true.  I'll admit, I am pushing it.  95% of the neural networks in play today are either figuring out how to maximize ad profits or identifying and executing stock trades.  People like money and that's where the effort is mostly focused. But  I'm not doing things that the guys at Google or Stanford or Berkeley couldn't.

The reason they're not doing this yet is because they are focusing on one task at a time.  They're picking some really interesting tasks - tasks way beyond what my system could do.  I saw a paper the other day where somebody trained a network that keeps a map and plays a good game of DOOM.  Doom is especially hard because you can see only a little bit of the playing field at a time.  A few months before that there was a really interesting paper about a Differentiable Forth Interpreter: someone trained a neural network to execute FORTH code, which allows propagating errors backward through it to get it to actually write simple programs.  Another bunch of people - at Google - are using a neural net to translate between dozens of human languages.  That system is so huge (so many nodes and connections) that even five years ago we'd have thought there was no way at all to train it before the sun explodes.  And then the 'Deep Dream' stuff.  They take their trained photo identifier, make it recurrent, cut it off from input, and let it hallucinate, exactly the same way we hallucinate every night when our recurrent brains are cut off from input.

That same month there was a lovely paper about Deep Compositional Question Answering from Berkeley - they've got a system that takes a photo plus an English question about the scene, and answers with what the picture shows.  The level of actually understanding the scene - and the question - that goes into that is pretty amazing.  This system is 'Compositional', ie broken into modules that were trained separately - which is sort of related to what I'm doing, except that the integrated system doesn't drop the modules and pick them back up later for different tasks.  Having trained the parts separately, they put them together in one system and left them that way.

And then there's WATSON.  You know, the one that beat the human champion at Jeopardy?  That's a neural network making and interpreting database queries.  It learns to make better queries that embody the intent of the questions it's presented with, and to make better interpretations of what the queries return.  Sometimes it makes mistakes, but think about how hard that job is.  That's approaching something with properties similar to human symbolic thinking.   They're using the same architecture now for a lot of different question-answering applications.  But that's the IBM approach: throw an ungodly amount of server power at something and make it work, THEN start worrying about trying to make it efficient, or figure out which five percent of the computer power they threw at it is doing 90%+ of the job.  When it won the Jeopardy championship, it was running on a roomful of servers.  But by this time they've figured out enough about how it works to make a system about 80% as smart work on 3% of that computer power - meaning people can deploy it on their desktop boxes.

Last year the human Go champion was defeated by a neural network.  GO.  Go is an open game with simple rules on a simple board - but do you have any idea what a serious problem Go STRATEGY is?  That system's way smarter than my digital lizard.

Google's self-driving car uses a VAST neural network running on a warehouse full of servers at Google: it takes the camera and GPS readings from all the cars they're running around everywhere, all at the same time, and grinds every minute of them through a thousand simulated slightly-different responses, optimizing the network to  find responses least likely to result in crashes, traffic hazards, and traffic law violations.  The resulting trained network, no longer running in parallel in thousands of instances, deploys on the car's onboard computer. It  controls the car - and feeds more data back to the monster network at the warehouse.

All these systems do things that are WAAAY beyond my little digital lizard.  They're hardcore, dedicated-job, for-profit applications that require superhuman performance.  I'm the weirdo who's most interested in a single network that can learn a lot of different things and switch between them, even if its performance at any one of them is mediocre.  I happen to think that 'consciousness' or whatever you want to call it is somewhere in this direction, because unless you are evaluating something in terms of dozens of different possibilities, you don't have to be 'aware' of it in any meaningful sense, and indeed will have learned to ignore most of it.  I think that kind of 'aware' is very much at the center of consciousness, so I think diversity of tasks and objectives is key.

I'm not working on anything way smarter than all the other people in the field.  I'm just working on something different.
Logged

JimC

  • Beyond Thunderdome
  • ****
  • Offline
  • Posts: 571
  • Alice liked fluffy toys...
Re: AI and law.
« Reply #13 on: 04 Dec 2016, 04:24 »

- its "record state" files look like pure gobbledegook. I can't guarantee it'll play a strategy humans like or understand.
Almost certainly foolish for me to suggest what the folk in the field have almost certainly thought about years ago, but perhaps one of the outputs needs to be human-readable and understandable "ethics". However, I can see some obvious king-sized problems. Not the least is that the task of developing the "record state" -> human-readable "ethics" translation is most likely a task approaching the scale of developing the AI itself. Others include such development being seen as unproductive by management, and a fear that if AI ethics are made human-readable then they will immediately come into the political arena and end up with endless impractical demands for changes, probably mostly mutually incompatible, from pressure groups, politicians, press and social media. The recent furore over allegedly racist targeted advertising would be trivial by comparison.
Logged

Morituri

  • William Gibson's Babydaddy
  • *****
  • Offline
  • Posts: 2,276
Re: AI and law.
« Reply #14 on: 04 Dec 2016, 10:54 »

Recent furore over allegedly racist targeted advertising would be trivial by comparison.

Oh yeah.  I was in the middle of one case of that as a contractor.  It was pure hell.  I'd cut off one chunk of ethnically identifiable information, and in a couple of days it would find another and start doing the same thing again.   Can't zero in on ethnically identifiable names?  Okay, it'll zero in on the language preferences set in the browsers.  Can't get access to those?  It'll discriminate on the basis of cookies from ethnically identifiable sites.  Can't get those?  Okay, it'll take the geocoded location you're accessing it from and see if you're in an ethnically identifiable neighborhood.  And on and blipping on.

Every time the company lawyer gets a little less unhappy about lawsuit exposure, the company Chief Financial Officer is about to call you up and start yelling about reduced profits.  And the hell of it is both of them are right.  As long as human prejudice exists, these systems will find ways to exploit it for a profit.
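A toy demonstration of why scrubbing the protected attribute never helped - synthetic data and hypothetical feature names, not anything from the actual case:

```python
import random
random.seed(0)

rows = []
for _ in range(20000):
    group = random.random() < 0.5    # protected attribute, scrubbed from the data
    # A proxy feature (say, neighborhood) correlated ~90% with the group:
    proxy = group if random.random() < 0.9 else not group
    # In this toy world the profit signal itself follows 'group':
    click = random.random() < (0.8 if group else 0.2)
    rows.append((proxy, click))

def click_rate(p):
    hits = [c for x, c in rows if x == p]
    return sum(hits) / len(hits)

# Even though 'group' never appears in the data the optimizer sees, the
# click rates split sharply by proxy (roughly 0.74 vs 0.26 in expectation),
# so any profit-maximizer will rediscover the targeting through the proxy.
print(click_rate(True), click_rate(False))
```

Kill the proxy and the optimizer finds the next-best correlated feature, which is exactly the whack-a-mole described above.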
Logged

WareWolf

  • Bizarre cantaloupe phobia
  • **
  • Offline
  • Posts: 232
  • Makin' This Up As I Go
Re: AI and law.
« Reply #15 on: 06 Dec 2016, 18:40 »

Question: exactly who makes the laws that govern AIs in the QCverse? Are AIs represented in that governing body? I get the impression that the AIs regulate themselves to a large degree (robot cops, Robot Jail), but under what ultimate authority? Is there a parallel government and a parallel set of laws, like those that exist in places with a large Native American reservation? And what happens when the two come into conflict?

EDIT: Upon reflection, the analogy to Native American tribal police forces isn't apt. Those police have a designated area where they have authority. The AI cops and legal system appear to exist in the same physical space as us meatbags. So who mediates when those come into conflict? And who makes the rules for whom?
« Last Edit: 06 Dec 2016, 18:48 by WareWolf »
Logged

Morituri

  • William Gibson's Babydaddy
  • *****
  • Offline
  • Posts: 2,276
Re: AI and law.
« Reply #16 on: 06 Dec 2016, 19:58 »

It certainly looks to me that the AIs in the QCverse are very much part of the citizenry and subject to the same laws.

I would expect that Officer Basilisk is part of the local PD and works in partnership with humans, in exactly the same way that Momo is part of the Smif library staff and works in partnership with humans.  I haven't seen anything that indicates that when AIs break the law they would get judged and sentenced by anyone different or according to any different rules than human lawbreakers.  If there are any gaps in the law, it's only because now that citizens with different abilities exist there are new ways to commit crimes. 

The law may be struggling to catch up, the way family law is now struggling to catch up with same-sex marriage and the welfare of children of blended homes - when there are four people the child regards as parents, how do you deal with that without causing trauma?

I can't see any indication that "Robot jail" is a better or worse place than the jail they put people in, nor treated any differently by the judiciary.  It's implemented differently, but people seem to have the same opinion of going there.
Logged

gprimr1

  • Furry furrier
  • **
  • Offline
  • Posts: 197
Re: AI and law.
« Reply #17 on: 07 Dec 2016, 07:03 »

I get the feeling that AI crimes is just another part of the state police. Police departments have special units - vice squad, traffic squad, financial crimes, you name it - so I don't think the AI crimes unit is a different police force. I would suspect Officer Basilisk has the power to arrest humans as well as AIs.

I think it's similar to how you might see female cops working vice squad or police officers with financial backgrounds working in finance crimes.
Logged

Neko_Ali

  • Global Moderator
  • ASDFSFAALYG8A@*& ^$%O
  • ****
  • Offline
  • Posts: 4,510
Re: AI and law.
« Reply #18 on: 07 Dec 2016, 09:53 »

We've seen cases where human authorities intervened in cases involving AIs - Vespabot and Pintsize, for example. As for the AI crimes department, we've only seen one example so far, and that's not a sufficient sample size. All we can do is speculate about why Officer Basilisk might join such a place. Judging by her actions regarding the fight club, she seems mostly interested in stopping illegal activity that harms AIs. She seems less concerned about the illegal gambling going on than she is about AIs being exploited or harmed. She also seems to be alone in these concerns.

There are separate facilities for dealing with AI criminals, though. Most likely because of the physical differences between the groups. Robot jail, for instance, doesn't have to worry about cells, exercise yards, rehabilitation facilities or the same sort of infrastructure. By the sounds of it they put the AIs' personality cores into server racks and limit their exposure to the outside while trying to rehabilitate them. The AI parole officer is probably there for the same reason there are AI cops in the robot crimes division: AIs and AnthroPC bodies have unique differences from human bodies that must be accounted for. It's entirely possible they just have an office in the regular parole office and deal with any of the AIs under their care.
Logged

WareWolf

  • Bizarre cantaloupe phobia
  • **
  • Offline Offline
  • Posts: 232
  • Makin' This Up As I Go
Re: AI and law.
« Reply #19 on: 07 Dec 2016, 11:22 »


There are separate facilities for dealing with AI criminals, though. Most likely because of the physical differences between the groups. Robot jail, for instance, doesn't have to worry about cells, exercise yards, rehabilitation facilities or the same sort of infrastructure. By the sounds of it they put the AIs' personality cores into server racks and limit their exposure to the outside while trying to rehabilitate them.

For an idea of just how horrific this could become, see "Altered Carbon" by Richard K. Morgan.
Logged

themacnut

  • Vagina Manifesto
  • ****
  • Offline
  • Posts: 690
    • The Vanguard-Superhero Space Opera Action
Re: AI and law.
« Reply #20 on: 07 Dec 2016, 14:53 »

95% of the neural networks in play today are either figuring out how to maximize ad profits or identifying and executing stock trades.  People like money and that's where the effort is mostly focused.

Which is why I think the first "true" AI is going to be some kind of "businessmind" that maximizes profits for the corporation running its server farms. It'll end up doing a better job than most, if not all, upper management teams have ever done, and the upper management of that corporation will end up following its suggestions simply because not following them is likely to produce less than optimal results.


There are separate facilities for dealing with AI criminals, though. Most likely because of the physical differences between the groups. Robot jail, for instance, doesn't have to worry about cells, exercise yards, rehabilitation facilities or the same sort of infrastructure. By the sounds of it they put the AIs' personality cores into server racks and limit their exposure to the outside while trying to rehabilitate them.

For an idea of just how horrific this could become, see "Altered Carbon" by Richard K. Morgan.

Not like prison is supposed to be pleasant - that's supposed to be part of its deterrent effect. Still, in a truly civil society there should be rules about how badly convicted prisoners can be treated.
Logged
The Vanguard - superhero space opera

JimC

  • Beyond Thunderdome
  • ****
  • Offline
  • Posts: 571
  • Alice liked fluffy toys...
Re: AI and law.
« Reply #21 on: 08 Dec 2016, 09:06 »

Not like prison is supposed to be pleasant - that's supposed to be part of its deterrent effect. Still, in a truly civil society there should be rules about how badly convicted prisoners can be treated.
Well, that's where the whole prison thing is messed up. From a single institution we expect rehabilitation, deterrence, and the removal from society of those who would damage others. Trying to mix all of those up in a single institution is inherently doomed to failure.
Logged

The real John Smith

  • Not quite a lurker
  • Offline
  • Posts: 6
Re: AI and law.
« Reply #22 on: 10 Dec 2016, 17:09 »

I see a lot of comparisons between developing AI and raising children, but I don't see the parallel outside of designer babies.
Maybe when choosing the genes of the child and how it will grow up, the parents use a random number generator, and if the combination is unfortunate then the parents act like they didn't do anything incredibly irresponsible.
 :psyduck:
Logged
Actually, my name is James.

pwhodges

  • Admin emeritus
  • Awakened
  • *
  • Offline
  • Posts: 17,241
  • I'll only say this once...
    • My home page
Re: AI and law.
« Reply #23 on: 11 Dec 2016, 00:43 »

That reads as if you think that Jeph's AIs are deterministic at an understandable level, and thus fully programmable.  It seems to me that Jeph writes them to have complexity beyond individual understanding, thus making them appear to have free will, just like humans (who after all inhabit the same superficially, but not actually, deterministic universe) - hence the parallels in the comic and in the discussion.
Logged
"Being human, having your health; that's what's important."  (from: Magical Shopping Arcade Abenobashi )
"As long as we're all living, and as long as we're all having fun, that should do it, right?"  (from: The Eccentric Family )

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline
  • Posts: 25,163
  • He/him/his pronouns
Re: AI and law.
« Reply #24 on: 11 Dec 2016, 09:49 »

Clarence Darrow used to argue that it was unfair to punish people for acting deterministically.

His logic fails if you consider that behavior totally controlled by the environment can change if the environment includes the threat of punishment.

Agreeing with everyone who says it looks like QC synthetics are integrated into the legal system on the same terms as squishies.
Logged
Thank you, Dr. Karikó.

Tova

  • coprophage
  • *****
  • Offline
  • Posts: 7,725
  • Defender of the Terrible Denizens of QC
Re: AI and law.
« Reply #25 on: 12 Dec 2016, 22:15 »

To expand on that point. I may have posted this before, sorry if so.

The Atlantic: Free will is an illusion, but we need to keep that illusion

I find pontification on free will a bit baffling sometimes because, after going through reams of tortuous logic, you end up, like The Fool in the Tarot, back at the point where you began, acting just as you did when you believed you had it.
Logged
Yet the lies of Melkor, the mighty and the accursed, Morgoth Bauglir, the Power of Terror and of Hate, sowed in the hearts of Elves and Men are a seed that does not die and cannot be destroyed; and ever and anon it sprouts anew, and will bear dark fruit even unto the latest days. (Silmarillion 255)