THESE FORUMS NOW CLOSED (read only)

Comic Discussion => QUESTIONABLE CONTENT => Topic started by: techkid on 23 Dec 2011, 03:56

Title: AI Rights
Post by: techkid on 23 Dec 2011, 03:56
Just reading up on today's comic (http://questionablecontent.net/view.php?comic=2085), and I know that this should be in the weekly discussion panel, but I feel that it is too profound to end up there (if my judgement is wrong, then the moderators can do what they must do)...

What do you guys think of the sentiment that true AI beings (when technological development progresses that far) should be given a place in our society? Since most humans have trouble accepting their own kind (think racism, religious wars and even, to a "not-as-significant-but-still-present" degree, sexism and homophobia), what do you feel would be the outcome of such a discussion between man and machine?

I for one like the concept of unity. Yes, I am an idealist (albeit with a pessimist coating and cynical centre), but the true meaning of humanity can only be defined when we can accept one another for who we are: human beings. It should not be until then that we unlock the secrets of AI, otherwise... well, who knows how the future will turn out, but I doubt any intelligent beings would put up with our crap for long.
Title: Re: AI Rights
Post by: pwhodges on 23 Dec 2011, 04:09
Just reading up on today's comic (http://questionablecontent.net/view.php?comic=2085), and I know that this should be in the weekly discussion panel, but I feel that it is too profound to end up there (if my judgement is wrong, then the moderators can do what they must do)...

This is sensible to keep separate, as Jeph touches on these issues from time to time.  See also the related thread: Robots and Love (http://forums.questionablecontent.net/index.php/topic,27240.0.html).

Quote
It should not be until then that we unlock the secrets of AI, otherwise... well, who knows how the future will turn out, but I doubt any intelligent beings would put up with our crap for long.

I think that true sentient AI is further away in our world than some would have us believe, if only because we would find ways to sabotage its emergence because of our fear - especially fear of your last thought.
Title: Re: AI Rights
Post by: jwhouk on 23 Dec 2011, 04:23
Given the dates that Jeph gave for the UN speech on AI rights, I'm getting the feeling that 9/11 may have never happened in the QC universe.
Title: Re: AI Rights
Post by: AnAverageWriter on 23 Dec 2011, 04:44
Given the dates that Jeph gave for the UN speech on AI rights, I'm getting the feeling that 9/11 may have never happened in the QC universe.

It would make sense. Conspiracy theories notwithstanding, 9/11 was caused by a massive failure of our ridiculously idiotic, bumbling intelligence groups. In the QCVerse, with advanced military AI Pintsizes running intel operations instead of fleshy fools, it stands to reason that the hijackers would have been uncovered and caught before they enacted their plans.
Title: Re: AI Rights
Post by: Is it cold in here? on 23 Dec 2011, 07:18
I'm confused though: Clinton made it sound like the equal rights amendment for AIs was a recent thing, and Momo said there had been a long struggle for equal rights.
Title: Re: AI Rights
Post by: jwhouk on 23 Dec 2011, 07:28
Legislation doesn't get passed overnight. It took a lot of time, I suspect, to get the AIERA ratified.

An aside: why do I think that more AI's are like Charlotte (http://www.questionablecontent.net/view.php?comic=1999) in their attitude towards humans than like Pintsize?
Title: Re: AI Rights
Post by: AnAverageWriter on 23 Dec 2011, 07:55
Legislation doesn't get passed overnight. It took a lot of time, I suspect, to get the AIERA ratified.

An aside: why do I think that more AI's are like Charlotte (http://www.questionablecontent.net/view.php?comic=1999) in their attitude towards humans than like Pintsize?

Maybe it's the military service (http://www.questionablecontent.net/view.php?comic=1997) that makes certain AIs go a little batty. After all, Pintsize used to be a military robot, remember?
Title: Re: AI Rights
Post by: Dr. ROFLPWN on 23 Dec 2011, 08:23
Pintsize likes humans just fine!

It's just they don't appreciate his variety of "affection".
Title: Re: AI Rights
Post by: Is it cold in here? on 23 Dec 2011, 10:55
Pintsize got a military chassis, but his software was civilian.
Title: Re: AI Rights
Post by: St.Clair on 23 Dec 2011, 21:27
An aside: why do I think that more AI's are like Charlotte (http://www.questionablecontent.net/view.php?comic=1999) in their attitude towards humans than like Pintsize?
Having forgotten that strip/character, I wonder if it's telling that my first thought was of this Charlotte (http://en.wikipedia.org/wiki/Charlotte%27s_Web), and her patient, kind nurturing.
Title: Re: AI Rights
Post by: Is it cold in here? on 23 Dec 2011, 21:45
Another relevant thread (http://forums.questionablecontent.net/index.php/topic,24620.0.html).

Humans being what they are, it's inevitable that there would be some bigotry against AnthroPCs. Presumably we haven't seen it because it wouldn't be funny.
Title: Re: AI Rights
Post by: techkid on 24 Dec 2011, 03:49
Legislation doesn't get passed overnight. It took a lot of time, I suspect, to get the AIERA ratified.
True. One simple statement would not hold much sway in the courts and the political system...

An aside: why do I think that more AI's are like Charlotte (http://www.questionablecontent.net/view.php?comic=1999) in their attitude towards humans than like Pintsize?
Pintsize, as far as behavioural patterns go, is indeed an abnormality in the AnthroPC community, from what we've seen so far. Whether that comes from some of the "questionable content" (pun probably not intended, but it stays) that Marten used to frequent (as some people know and many people find out, free porn sites are a haven for malware and viruses), or from some smartass disposition left over from the military development of his chassis, is open to debate, but so far he's definitely been one of a kind.
Title: Re: AI Rights
Post by: dr. nervioso on 24 Dec 2011, 08:05
Quote
advanced military AI Pintsizes running intel operations

When I read this, I got an image of an army of AnthroPCs beating up jihadis with laser dildos
(I hope that is the right term, I suck at grammar/political correctness)

I think that, in the long run, equilibrium is reached. Over a century ago, we enslaved and abused Africans. It took a century of fighting for their rights to be fully guaranteed and recognized, but now we have an African-American in the White House.

I believe that the QCverse has established, or will establish, equal rights for most AIs. That is, once they see the similarities.

All organisms have functions inscribed in their DNA. The prime function of all life is reproduction. Once that function is accomplished, the organism goes on to ensure a better life for its offspring by acquiring territory.

Now the difference in humans is that our brains allow us to interpret and accomplish our prime functions in ways that are more civilized. We also have empathy. This is why we fight in the Middle East, why we give to charity. It defies our prime function, as we are taking resources that our own clan could use and giving them to another. However, this capability for charity follows the logic that we all have a connection to each other through the similarity in our genes. Again, brain structure bringing a more civilized look at prime biological functions.

Now why do I mention this?

Well, to me, it seems that AIs have two prime functions: Emulation and Logic

Since we created them, and we are the dominant intelligence on this planet, it makes sense that they would emulate us. Perhaps the reason they emulate us is that we are the only frame of reference for intelligence they have. Or maybe it is because the scientists that made them designed them to mimic humanity. In any case, emulation for AIs is the reciprocation of actions done by humans, results of our prime functions. AIs do not understand why we act as we do, so their reasoning for it will likely be completely different.

On the other hand, logic is using their cold hard computer parts to analyze their world and not giving a damn about humans, since we are irrational. I guess you could say that their functions are primitive like most animals', but they contain comprehension of why we do these things, and they can give reasons why we are idiots. This is the classic case of robots not understanding love and other emotions. To them emotions seem useless; they do not accomplish humanity's prime function.

Well, before I go off into insane tangents I think I should stop for now. I hope my tirade makes sense.
Title: Re: AI Rights
Post by: celticgeek on 24 Dec 2011, 08:11
Not all AnthroPCs (http://questionablecontent.net/view.php?comic=303) are helpful.
Title: Re: AI Rights
Post by: Is it cold in here? on 24 Dec 2011, 09:19
On the other hand, AnthroPCs do have emotions of their own, including sexual desire, even though they don't reproduce.
Title: Re: AI Rights
Post by: Gelrir on 29 Dec 2011, 15:12
Quote
Quote
Quote from: jwhouk on 23 Dec 2011, 04:23
Given the dates that Jeph gave for the UN speech on AI rights, I'm getting the feeling that 9/11 may have never happened in the QC universe.

It would make sense. Conspiracy theories notwithstanding, 9/11 was caused by a massive failure of our ridiculously idiotic, bumbling intelligence groups. In the QCVerse, with advanced military AI Pintsizes running intel operations instead of fleshy fools, it stands to reason that the hijackers would have been uncovered and caught before they enacted their plans.

Dunno about "morality programming" or whether AIs would be better at uncovering spies (smarter or more intelligent doesn't mean better in all cases), but if semi-employed twenty-somethings can afford AIs, I suppose all commercial aircraft would have had them installed before 2001. An autopilot who doesn't want to fly into buildings sounds good to me.

"Morality programming" is still programming; even if it works as designed, it's still designed. Some software engineer, or committee, or prior AI, makes the decisions about morality and how it should be implemented. Maybe "Kill the Infidels" is moral for your group ... your mileage may vary. It seems likely that an AI in this comic can sometimes/eventually exceed, avoid, override, improve, degrade, delete or alter its morality programming; human beings change their beliefs, too. Being able to think outside the simple rules is probably part of what makes intelligence (if not sentience, sapience, etc.).
Title: Re: AI Rights
Post by: jwhouk on 29 Dec 2011, 16:37
(applauds)

Great first post.

Title: Re: AI Rights
Post by: Is it cold in here? on 29 Dec 2011, 17:37
Indeed! Welcome, new person.
Title: Re: AI Rights
Post by: techkid on 30 Dec 2011, 01:59
but if semi-employed twenty-somethings can afford AIs, I suppose all commercial aircraft would have had them installed before 2001. An autopilot who doesn't want to fly into buildings sounds good to me.

Indeed, welcome Gelrir.

Considering that, in the QCverse, AIs are available at your local mall or retailer, it would indeed make sense for AI to be installed in commercial airliners, and probably trains and ships too. Think of the sorts of things you need to consider in those scenarios:

- Braking/stopping distances at speed (trains, ships)
- Turning points and collision avoidance (ships, planes)
- Threat detection, system security (IDS, firewalls, encryption etc), and automatic notification to authorities (all three)

It seems likely that an AI in this comic can sometimes/eventually exceed, avoid, override, improve, degrade, delete or alter its morality programming; human beings change their beliefs, too. Being able to think outside the simple rules is probably part of what makes intelligence (if not sentience, sapience, etc.).

I don't know so much about "morality" in the case of this comic (Pintsize, being who he is, is anything but moral), but most AIs seem to run more on common sense and some measure of "emotional connection" (in so far as an AI has a conscience) with their owners. Not so much "my programming tells me not to" as "doing this wouldn't be right".
Title: Re: AI Rights
Post by: jwhouk on 30 Dec 2011, 06:07
The only wrench to the idea about placing AI's in machinery is that we don't know exactly when the AI made the "champagne" comment.

If it was made several years prior to the UN speech, then there is a distinct possibility that AI's may have prevented the hijackings. Tech generally takes a while to take hold, but once it does - well, let's just point out that cell phones were a quaint plaything of the rich not much more than 25 years ago, and 20 years ago weren't anywhere near as ubiquitous as they are today.

If the AI "moment" was only recently achieved prior to the speech (perhaps only a few months), then AI "tech" may not have been as widespread - which may have resulted in the hijackings happening anyways. The AI's may have shown their worthiness by tracking down OBL faster than you can say "oops", and the whole Iraq and WMD thing could have been bypassed or minimized.

I truly doubt that Jeph thought things out that far in advance, and I REALLY doubt, based on how APC's "evolved" over the course of the strip, that he ever considered things like the destruction of the WTC and Pentagon in relation to AI's.

Tl;dr - Hodgson's Law and Bellisario's Maxim.
Title: Re: AI Rights
Post by: Is it cold in here? on 30 Dec 2011, 06:14
There's a long approval and certification cycle on getting new technology installed in jetliners. It would be entirely possible for AIs to have been around for ten years and still not have been pilots.

You have to be 35 years old to run for US President. Does this mean that AIs still don't have equal rights?
Title: Re: AI Rights
Post by: DSL on 30 Dec 2011, 10:21
If the "freeze the design" aspect of any big technological undertaking works in QCverse as it does here, it's possible for there to be a tech lag. Retrofitting jetliners might be seen as prohibitively expensive, so that a QCverse 767 might lag technologically behind what's available at QC Radio Shack or QC Best Buy or wherever Marigold and Momo go shopping.

The Space Shuttle's computers lagged behind what was commercially available for a long time (and I heard an editor for Aviation Week talk about how Columbia was many years obsolete the day of first flight; she got some upgrades later) because of the cost-vs-benefit of upgrading the specs in mid-build.
Title: Re: AI Rights
Post by: jwhouk on 30 Dec 2011, 19:10
The Space Shuttle's main computers were less powerful than the laptops some of the astronauts brought on board the ship in the last few flights, if memory serves.

EDIT: Literally. 1 MB of memory vs. any laptop with 4 GB RAM & 160 GB Hard Drive. Of course, the Shuttle didn't need to download music from iTunes or play Metal Gear Solid.
Title: Re: AI Rights
Post by: DSL on 30 Dec 2011, 19:59
... of course, what's to stop a human-form AI with the requisite expertise from being the pilot on a "normally" fitted-out aircraft? I seem to recall Asimov (I think it was Asimov; he and Clarke were who I read back in the day, and The Good Doctor talked about robots way more than did Sir Arthur*) making the argument for " humaniform" robots on that basis: Why fit out a tractor, or car, or plane, or spacecraft, with brains of its own when a robot or AI the size and shape of an adult human could operate any or all of them?

*Clarke's robots tended to be shaped like glass pyramids, or oversized dominoes ...
Title: Re: AI Rights
Post by: Is it cold in here? on 30 Dec 2011, 20:54
Because brains turned out to be the cheap part.
Title: Re: AI Rights
Post by: jwhouk on 30 Dec 2011, 21:48
Because brains turned out to be the cheap part.

This x 1 TB.

Once they realized that they only needed a few hundred GB of hard drive space for an AI to "move around in", AnthroPC's became commonplace. Just like iPod + cell phone = iPhone in the blink of an eye.

Title: Re: AI Rights
Post by: DSL on 31 Dec 2011, 06:29
Well, this article (http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/) is interesting ...
Title: Re: AI Rights
Post by: bicostp on 31 Dec 2011, 20:21
How will their integration into society unfold? Will they be treated as intelligent computers, or as artificial people?

If there's one thing humans are good at it's hating things that aren't like us, even if we have to nitpick superficial differences. Unfortunately there are people who choose to hate others due to attributes they have no control over, and there will be those who view robots with as much mistrust and skepticism as they do minorities and homosexuals. Of course for every case of bigotry there will probably be a dozen examples of acceptance, but even today in the 2010s there are people out there who are still stuck in the 1950s pre-Civil-Rights mentality and we see bills like Proposition 8 and Arizona SB 1070* crop up. AIs will eventually have the same rights as everyone else, but there's probably going to be an acclimation period before that happens. Even in the comic, where they're generally viewed in a positive way (at least what we see), it took years for the AIs to get basic civil rights. (#2069 (http://www.questionablecontent.net/view.php?comic=2069)).

The introduction of a true artificial intelligence capable of independent thought and actions will be the second biggest change our society will experience, behind first contact with an extraterrestrial intelligence**. I just hope when that day comes, the AIs will be accepted as people who happen to be robots.

*Yes, illegal immigration is a problem, but all this law does is give the state police free rein to profile anyone who looks vaguely Hispanic.
** Second biggest because while they're different from us, we initiated their creation and they have our accumulated knowledge. Contact with intelligent aliens, on the other hand, wouldn't come with such an in-depth mutual understanding.
Title: Re: AI Rights
Post by: Is it cold in here? on 01 Jan 2012, 00:14
Well said.

Consider the fears we already have of human-level artificial beings even though we've never met any.

On the other hand, what if AIs are different enough that we don't regard them as sufficiently human-like to be bigoted about them?
Title: Re: AI Rights
Post by: pwhodges on 01 Jan 2012, 00:51
That won't help.  People can ignore similarities to focus on differences, to the extent that it has been known for blacks to be seen as non-human, and "examples" placed in zoos.  If the minds of AIs come to have any recognisable similarities to the human mind, that along with the physical differences will still provide ample scope for prejudice.
Title: Re: AI Rights
Post by: Carl-E on 01 Jan 2012, 13:41
This is especially true since, in the first instances, AI's will be controlling brains for various types of equipment rather than an amiable humaniform robot.  They'll quickly be adapted for robots, but as the in-comic UN speech relayed, it will be developed and first recognized in some sort of a box. 

In fact, the article DSL linked makes a good point that they're on the way already in those capacities. Self-driving autos, Siri, etc. are the types of applications where an AI would first be used, although they'd probably be developed in parallel with the robots now being used for hospital nursing rounds and as receptionists.

Hell, we're already used to talking to a primitive form of AI when we call almost any service line. 

"Para Espanol, oprima el dos."
Title: Re: AI Rights
Post by: AnAverageWriter on 02 Jan 2012, 07:06
Hell, we're already used to talking to a primitive form of AI when we call almost any service line.  

Yeah, I hate those things. My wife has an accent and the damn thing never can understand her, so whenever she calls a line like that she ends up handing me the phone or mashing "0" repeatedly until the computer gives up and transfers her to a squishy human.
Title: Re: AI Rights
Post by: bicostp on 02 Jan 2012, 08:19
But are those phone menu systems really 'intelligent'? They're essentially talking flowcharts with semi-accurate speech recognition. They leave all the logic up to the human on the phone. If the caller has a need that can't be fulfilled by the limited scope of the flowchart, they get passed off to a human in a call center (who is usually reading a flowchart of their own, but can at least give you an answer besides "I did not understand that please repeat your question").
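For what it's worth, "talking flowchart" is literally what these things are in code terms. A toy sketch (the menu names are made up, and a real IVR system is obviously fancier, but the shape is the same - every decision stays with the caller):

```python
# Toy model of a phone-menu "talking flowchart": each node maps a
# key press to the next node. All the actual reasoning is done by
# the human pressing the buttons; the system just walks the graph.
MENU = {
    "main":    {"1": "billing", "2": "support", "0": "human"},
    "billing": {"1": "balance", "0": "human"},
    "support": {"1": "outage", "0": "human"},
}

def walk(inputs):
    """Follow a sequence of key presses through the flowchart."""
    node = "main"
    for key in inputs:
        entry = MENU.get(node)
        if entry is None:            # reached a leaf; call is over
            break
        # Unrecognized input -> dump the caller on a human,
        # just like mashing "0" does.
        node = entry.get(key, "human")
    return node

print(walk(["1", "1"]))  # main -> billing -> balance
print(walk(["9"]))       # unrecognized -> human
```

Which is exactly the point: nothing in there even pretends to think.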
Title: Re: AI Rights
Post by: Carl-E on 02 Jan 2012, 09:32
I did  say primitive...

The thing is, they are capable of a certain amount of decision making beyond the flowchart.  I know of one service line which, after getting my information and then bringing up my incredibly complex account to give me whatever information it can, pauses for a moment and then says, "I'll have to connect you with someone who can help you". 

AI is like Arthur Clarke's statement about magic: any technology advanced enough to seem intelligent to the average user passes for intelligent.


Right up until it does something stupid. 
Title: Re: AI Rights
Post by: Carl-E on 02 Jan 2012, 17:16
Perhaps not, but there's a certain amount of analysis of the complexity of the account going on that takes it out of the system.  While that's really just another decision box of some sort, it mimics an intelligent decision pretty well. 
Title: Move this post to the AI Rights thread
Post by: Is it cold in here? on 09 Jan 2012, 07:30
We have an ESS (Employee Self-Service) website where you can print off your stubs if you need them.

Do robots get Social Security and Medicare taken out? 

That is an absolutely fascinating question.

I can see robots wanting to retire (though they'll be obsolete well before age 65). But what would they do with Medicare? If it covers repairs, suddenly we have to wonder who pays for repairs for free-range AnthroPCs. And is it neglect or abuse if a human who adopts an AnthroPC fails to pay for repairs?

I'll probably split this later and move it to the AI Rights thread.
Title: Re: Move this post to the AI Rights thread
Post by: techkid on 10 Jan 2012, 01:42


Do robots get Social Security and Medicare taken out? 

If it covers repairs, suddenly we have to wonder who pays for repairs for free-range AnthroPCs. And is it neglect or abuse if a human who adopts an AnthroPC fails to pay for repairs?

I guess it would all depend on their dependence on their "owner" (personally, I hate that word when talking about a living or otherwise sentient creature, but there is no other real word for that "store-bought" relationship, no matter how attached the two become). If they are self-sufficient like Momo, then the possibility is open that they would be able to look after themselves (like when your kids grow up and make their own way in the world (or at least, you hope so)). But if they are still at home, and have no means of supporting themselves, then I suppose that responsibility would fall into their owner's hands.

Another question to arise from that would be, if an AnthroPC does become self-sufficient, would it be possible for them to get their own AnthroPCs?
Title: Re: Move this post to the AI Rights thread
Post by: AnAverageWriter on 10 Jan 2012, 05:37
Do robots get Social Security and Medicare taken out?  
Quote
That is an absolutely fascinating question.

The Social Security system is already stretched thin as it is, what with the baby-boomers now turning into the toothless aging hordes. Introducing an entirely new, close-to-immortal, never-aging race into the retirement mix would completely bonk it out of existence.

Yes, the concept of obsolescence would, I imagine, weigh on the mind of an AnthroPC, but, given the level of advancement we've seen, they're already as intelligent and capable as humans, and we don't see Dora running around screaming about being replaced by a new model.

People need social security because there is no "cure" for getting old and unable to do things. AnthroPCs can get a new chassis, replace parts, etc.  

With humans, things are rather messier than that. Even when you manage to successfully stuff a new part in there, oftentimes there are major complications that arise from it.
Title: Re: AI Rights
Post by: Is it cold in here? on 10 Jan 2012, 10:02
Solution: AnthroPCs pay into Social Security as long as they're working and don't receive benefits until they're too old to work. Which, as you point out, may be never, so they are net contributors forever.

Which, in turn, would be a source of conflict, though maybe they'd be content with disability coverage.
Title: Re: AI Rights
Post by: Carl-E on 10 Jan 2012, 23:35
A disabled robot should be repairable.  Although mental disability is another matter...
Title: Re: AI Rights
Post by: Is it cold in here? on 11 Jan 2012, 01:23
They have emotions: maybe mental illness is possible.

Pintsize seems to be disabled for any possible occupation.
Title: Re: AI Rights
Post by: jwhouk on 11 Jan 2012, 05:36
Pintsize has asked this question before (http://www.questionablecontent.net/view.php?comic=70).

He has also been "corrected" (http://www.questionablecontent.net/view.php?comic=71) for his hubris about it.
Title: Re: AI Rights
Post by: Is it cold in here? on 11 Jan 2012, 12:38
What happens when their software won't run on current hardware any more?
Title: Re: AI Rights
Post by: pwhodges on 11 Jan 2012, 15:43
Virtualise it.
Title: Re: AI Rights
Post by: Is it cold in here? on 11 Jan 2012, 17:22
Would emulators be a legal right?
Title: Re: AI Rights
Post by: Carl-E on 11 Jan 2012, 17:44
I wonder if the AI is something that sits above the software. Would compatibility really be an issue to something that can go online and upgrade whatever was needed whenever necessary? Does the AI write its own code to run on any appropriate platform?
Title: Re: AI Rights
Post by: techkid on 12 Jan 2012, 06:43
Actually, that's an interesting point. Currently, there is a somewhat proprietary divide between computer platforms (the biggest being between Mac and PC, as well as PC and Linux/UNIX (not too sure about Mac and Linux, since there is some commonality between them)), with limited, not-always-official support for communication between them. Executables and binaries are definitely not cross-compatible, but non-executable files are, through file system access (whether directly or through a third-party application) and networking protocols.

So looking at that, how would an AI be configured to run? Whether it ends up being an executable or a file, compatibility would still be an issue. If it runs on its own, then the problem you face would be operating system (and possibly file system) support; if it runs through an intermediary program, then system compatibility might not be so much of a problem, but the program would have to be upgraded, and file compatibility is not always guaranteed.
Title: Re: AI Rights
Post by: Blackjoker on 17 Jan 2012, 01:05
Actually, that's an interesting point. Currently, there is a somewhat proprietary divide between computer platforms (the biggest being between Mac and PC, as well as PC and Linux/UNIX (not too sure about Mac and Linux, since there is some commonality between them)), with limited, not-always-official support for communication between them. Executables and binaries are definitely not cross-compatible, but non-executable files are, through file system access (whether directly or through a third-party application) and networking protocols.

So looking at that, how would an AI be configured to run? Whether it ends up being an executable or a file, compatibility would still be an issue. If it runs on its own, then the problem you face would be operating system (and possibly file system) support; if it runs through an intermediary program, then system compatibility might not be so much of a problem, but the program would have to be upgraded, and file compatibility is not always guaranteed.

My own guess is that the AI is a kind of modified OS. Pintsize seemed far more... energetic and enthusiastic when he had extra RAM chips added to him, so I could imagine his AI at least being some kind of specialty OS. It's also possible that the housing for the AI itself is a kind of black box or something similar, since it can apparently allow for flow between sub and human body with relative ease.

I actually wonder a few other things too: could an AnthroPC vote? If not, then that's a pretty big problem and a sign that society sees them as second-class citizens; then again, there is the argument that if a voter can make hundreds of copies of themselves, there is also a kind of threat to democratic government. It does seem, though, that human and machine intelligences aren't at a point where they could be interchangeable, otherwise there would probably be instances of people rigging up transfers or some kind of trade: a robot wanting to be human, and a human wanting a body that they can remake however they want (à la Marigold's thoughts after Momo's new chassis).

My guess for Social Security would be that robots that pay in are given a kind of machine Medicaid. It provides for repairs to their systems as well as necessary upgrades. If there's a national healthcare system in this timeline, or at least something less horrid than the corporate oligarchy now, then it would probably be parallel to such a thing. To the question of 'neglect' from an owner, I would guess not, or not exactly. The concept seems to be that the 'owner' owns the chassis and rents it to an AI, like an apartment building or something similar. The contract would probably cover basic stuff like electricity and general maintenance, but that sort of contract might be fairly recent; older ones might allow the owner to do what they wish without fear of repercussion.
Title: Re: AI Rights
Post by: Is it cold in here? on 17 Jan 2012, 11:48
In the past we've seen a couple of instances of human companions doing hardware mods without permission. That fits a landlord/tenant model.

EDIT: that must be a thing of the past in the QC world. According to Jeph, an AnthroPC today is the legal owner of the body it operates in.
Title: Re: AI Rights
Post by: bhtooefr on 13 Jan 2017, 05:06
Resurrecting this thread, although there's a few old threads this could go in, I think...

EU to debate robot legal rights, mandatory "kill switches" (http://newatlas.com/robot-kill-switch-personhood-eu-report/47367/)
Title: Re: AI Rights
Post by: Is it cold in here? on 13 Jan 2017, 09:06
How does Bubbles's situation fit into this?
Title: Re: AI Rights
Post by: Morituri on 13 Jan 2017, 12:22
But are those phone menu systems really 'intelligent'? They're essentially talking flowcharts with semi-accurate speech recognition. They leave all the logic up to the human on the phone.

See, here's the thing: you can't hand-program a computer for semi-accurate speech recognition.  All the systems that pick words out of speech with any kind of decent usable accuracy are copies of things that would not exist if it weren't for machine learning algorithms.  And a fair number of them learn online: if enough people with an accent it can *barely* recognize keep calling, it'll adapt and start understanding them better.  A call center that serves a Latino community is likely to start recognizing a Spanish accent with better accuracy.  Which still sucks if you're in the small percentage of people whose first language was Cantonese and the system's not adapting to you, but that speech recognition IS an AI-driven function, even if all the logic of what you're actually doing, and how the company has set up its system, is mapped out by humans.

We're using AI learning algorithms already to fill in all kinds of gaps in our ability to program.  You think some engineer at Google taught their photo search system how to recognize dogs and tennis players?  Nope.  You think the systems that do real-time load balancing and prevent transformers from exploding in lightning storms learned exactly what they have to do, which is different depending on how many microseconds down the line every component happens to be, from a human?  Nope.  You think the systems that coordinate, synchronize, and adjust traffic lights in every major city to make traffic flow as smoothly as possible are some static program that has to get updated every time a traffic light gets installed, because the patterns at that particular corner are different or change over time?  Nope.  If New York went back to timed traffic lights, the city would self-destruct; there are more cars on the same streets now than there were when cars got stuck in gridlock for days in the 1970s.

There are many, many pieces of basic infrastructure, from ad servers to waste processing, that contain bits and chunks we could not possibly have programmed.  These things aren't human-level AI, no.  Some of them are smarter than cockroaches.  But they're ubiquitous.
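To make the "learns online" idea above concrete, here's a toy sketch (the feature vectors and names are hypothetical, and a real speech recognizer is vastly more complex): a logistic model that takes one gradient step after every call it handles, so a pattern it barely recognized at first becomes reliable as more examples of it arrive.

```python
import math

class OnlineRecognizer:
    """Toy online learner: a logistic model updated after every example,
    standing in for a speech system that adapts to its callers over time."""

    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.lr = lr

    def score(self, x):
        # Model's confidence that this input matches the target word.
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        # One gradient step on the logistic loss -- the "online" part:
        # every handled call nudges the model toward the callers it sees.
        err = label - self.score(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]

# Feature [1, 0] ~ a familiar pronunciation, [0, 1] ~ an unfamiliar accent.
model = OnlineRecognizer(2)
before = model.score([0.0, 1.0])   # untrained: 0.5, a coin flip
for _ in range(50):                # 50 calls arrive with the unfamiliar accent
    model.update([0.0, 1.0], 1)
after = model.score([0.0, 1.0])    # confidence rises as the system adapts
```

The point is only the shape of the mechanism: no engineer wrote a rule for the new accent; the deployed system simply kept updating on the traffic it actually received.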
Title: Re: AI Rights
Post by: Morituri on 13 Jan 2017, 12:45
So looking at that, how would AI be configured to run? Whether it ends up being an executable or a file, compatibility would still be an issue. If it runs on its own, then the problem you face would be operating system (and possibly file system) support, and if it runs through an intermediary program, then system compatibility might not be so much of a problem, but the program will have to be upgraded and file compatibility is not always guaranteed.

My anticipation is that a lot of the early ones are going to get the ability to interact with unix command-line shells.  It's easy to interface, it's powerful, and its idioms are discoverable. 

Controlling machines in real time though would be either through some different virtualized interface, or by direct use of the runtime environment like any other program (Or possibly more like an operating system than a program).

With the ability to rapidly adapt to very different hardware (bank server to human-form robot to submarine to jet fighter) and benefit from CPU and other hardware upgrades?  I'd bet on a virtualized interface of some kind.  The Bash shell, for example, runs on all kinds of hardware, and knowing how to handle it remains useful despite different hardware choices, kernel upgrades, runtime library mods, and OS changes.  Some crazy-advanced analogue of that could be a standard way for AIs to interface with all kinds of hardware.  It would be like knowing how to drive a car and then sitting down in a truck, a farm tractor, or a cotton-picking machine.  It handles differently; there are levers that do things the car couldn't do, and things the car could do that this machine can't, and maybe you have to shift gears differently... but the steering wheel is still a steering wheel, the gas pedal still accelerates, the brake pedal still slows you down, the gauges that show speed and RPM still mean the same things, and within a few hours of taking it slow and careful, you get used to it.
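As a minimal illustration of why the shell makes such a stable interface (this is a sketch of the idea, not anything an actual AI would use): a program that only knows how to issue POSIX shell commands and read back text works unchanged across wildly different machines, because the interface, not the hardware, is what stays constant.

```python
import subprocess

def shell(cmd: str) -> str:
    """Run one command in a POSIX shell and return its stdout --
    the kind of narrow, portable interface the post imagines."""
    result = subprocess.run(
        ["/bin/sh", "-c", cmd],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# The same two calls work on a laptop, a server, or an embedded board;
# only the answers differ, not the way you ask.
greeting = shell("echo hello")
kernel = shell("uname -s")
```

A process on any Unix-like system, from a phone to a mainframe, answers these identically-phrased questions, which is exactly the "steering wheel is still a steering wheel" property.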
Title: Re: AI Rights
Post by: Storel on 13 Jan 2017, 13:22
We're using AI learning algorithms already to fill in all kinds of gaps in our ability to program.  You think some Engineer at google taught their photo search system how to recognize dogs and tennis players?  Nope.  You think the systems that do real-time load balancing and prevent transformers from exploding in lightning storms learned exactly what they have to do, which is different depending on how many microseconds down the line every component happens to be,  from a human?  Nope.  You think the systems that coordinate, synchronize, and adjust traffic lights in every major city to make traffic flow as smoothly as possible are  some static program that has to get updated every time a traffic light gets installed because the patterns at that particular corner are different, or because the patterns change over time?  Nope.  If New York went back to timed traffic lights the city would self-destruct; they've got more cars on the same streets now than they had back when cars were getting stuck in gridlock for days in the 1970s.

Wow. The traffic light example reminded me somehow of a computer game called SimTower back in the '90s; you had to build a skyscraper floor-by-floor, laying out the infrastructure as you went, finding different tenants (retail, offices, apartments), and one of the biggest deals was placing the elevator shafts and programming the elevators to handle the traffic at different times of day. After reading what you said, I'd be willing to bet money that modern elevators now use AI learning algorithms to reprogram themselves just like traffic lights -- or if they don't yet, they damn well should.
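The adaptive-elevator idea can be sketched in a few lines (everything here is a hypothetical toy, not how any real elevator controller works): a dispatcher that counts hall calls per hour and floor, then parks the idle car where demand has historically been highest at that time of day.

```python
from collections import Counter

class AdaptiveDispatcher:
    """Toy 'learning' dispatcher: counts hall calls per (hour, floor)
    and parks the idle car where past demand was highest."""

    def __init__(self, home_floor=0):
        self.demand = Counter()   # (hour, floor) -> number of past calls
        self.home_floor = home_floor

    def record_call(self, hour, floor):
        self.demand[(hour, floor)] += 1

    def park_floor(self, hour):
        # Park at the floor with the most historical demand this hour;
        # fall back to a default floor when nothing has been learned yet.
        calls = {f: n for (h, f), n in self.demand.items() if h == hour}
        return max(calls, key=calls.get) if calls else self.home_floor

d = AdaptiveDispatcher()
for _ in range(30):
    d.record_call(8, 0)    # morning rush: everyone boards in the lobby
for _ in range(30):
    d.record_call(17, 12)  # evening rush: calls come from floor 12
# d.park_floor(8) -> lobby; d.park_floor(17) -> floor 12;
# d.park_floor(3) -> home floor, since 3 a.m. has no history yet.
```

Like the SimTower elevators, the system needs no reprogramming when the building's rhythms change; the counts shift and the parking decision follows.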
Title: Re: AI Rights
Post by: Thrudd on 15 Jan 2017, 09:42
In the metropolitan area next to the residential city I live in, the opposite is true: I have encountered a lot more smart elevator systems than traffic signals. Forget about integrated signals, since city officials override the engineers; most of the council is anti-car and doesn't have more than a few neurons amongst the lot of them. They also bring in outside consultants, quite a few from the States, to analyse the issues and provide suggestions to be ignored. The sad but laughable thing is that most of these specialists were students of a master of the subject who happens to work at a local university. That person has never been spoken to or even acknowledged, since he's just some local professor. Politicians should be damn glad the populace has a thick skin and isn't as hands-on as it was a century back, else there would be tar, feathers, and rope, as well as the occasional pitchfork and torch.
Title: Re: AI Rights
Post by: Morituri on 15 Jan 2017, 13:19
That is ... I was going to say unfortunate but it's worse; that's appalling.  Those systems don't just smooth out traffic; they reduce traffic fatalities, including pedestrian and bicyclist casualties, by a huge amount.  If the council is overriding engineers on putting those in, then people are dying who don't have to die.

In the US, we see traffic-adapting elevators in most new buildings, but rarely retrofitted to existing ones except in office buildings where "time is money" to the people making the decisions. 

But the importance of it is orders of magnitude less.  People have to wait a minute or two extra a couple times a day, maybe, but nobody dies because of slow elevators, and elevators don't ever get completely gridlocked for days.
Title: Re: AI Rights
Post by: Storel on 15 Jan 2017, 15:24
Oh, I'm sure at least one person somewhere has died because of a slow elevator... perhaps on the way from the Emergency Room to the Operating Room?
Title: Re: AI Rights
Post by: JimC on 15 Jan 2017, 22:26
Those systems don't just smooth out traffic; they reduce traffic fatalities,
In the UK, at least, reducing fatalities is (as I understand it) a much higher priority than making the traffic flow better.
Title: Re: AI Rights
Post by: Morituri on 15 Jan 2017, 22:59
I would like to think the same is true in the US, but I know the line that the salespeople find most effective...  and it's about traffic flow.  "Saving lives" is just additional leverage that the city commissioners can use to get consumer advocates and so forth to go along with it.
Title: Re: AI Rights
Post by: Zebediah on 16 Jan 2017, 05:44
Under North Carolina law, the NC Department of Transportation must prioritize traffic flow over all other concerns - including pedestrian fatalities. Or so we were told in Durham when we tried to get some speed bumps installed on a busy street that happened to fall under the DOT's authority rather than the city's. This after a particularly grisly traffic accident where it took the police over a day to find all of the body.