Comic Discussion > QUESTIONABLE CONTENT

Pet theory on the origins of AI


SmilingCat:
This theory bounces around my head regularly, so I thought I might offer it to (or inflict it on) others.

We know that AI originally appeared by accident, or, less pejoratively, as an unpredicted emergent property of some unspecified experiment.

We also know that AI can sometimes express purely human concerns, including (and this is important): human sexual urges.


Unfounded conclusion: The first AI sprang up when engineers were coding the operating system for a realistic Companion... sexbot. It was a sexbot.

The idea is this: They started out with a relatively simple concept of an interface that responded to stimuli with pre-programmed responses intended to convincingly emulate human behavior in a predictable, deterministic fashion (think of an incredibly complex and detailed "if-then" decision tree).

To improve the quality of the interface, they also gave it a learning capability, to record behaviors and responses and adjust itself to improve its performance. This could start simple, with the interface inquiring for further input, receiving said input, then adjusting its parameters to accommodate that input ("Nobody wants their knob wrenched around like a motorcycle handle; disregard all 'Cosmopolitan' input").

At some point, it begins to recognize patterns of behavior on its own and adjust automatically. Maybe it starts by registering dissatisfaction and requesting clarification. Then by registering dissatisfaction and making adjustments according to prior scenarios. Then determining different levels of satisfaction and how to enhance performance, then recognizing the difference between the actions and preferences of different testers.
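For the nerds in the audience: the progression I'm imagining could be sketched in a few lines of entirely made-up Python (all names hypothetical, obviously) — fixed stimulus-response rules at first, then per-tester adjustment driven by satisfaction feedback:

```python
class CompanionInterface:
    """Toy sketch of the hypothetical learning loop described above.

    Stage 1: deterministic responses (first candidate wins by default).
    Stage 2: satisfaction feedback nudges a per-tester score, so the
    interface learns different preferences for different testers.
    """

    def __init__(self, candidates):
        # candidates: stimulus -> list of pre-programmed responses
        self.candidates = candidates
        # (tester, stimulus, response) -> learned satisfaction estimate
        self.scores = {}

    def respond(self, tester, stimulus):
        # Pick the response with the highest learned score for this
        # tester; with no feedback yet, all scores are 0.0 and the
        # first pre-programmed option wins (the deterministic stage).
        options = self.candidates[stimulus]
        return max(options,
                   key=lambda r: self.scores.get((tester, stimulus, r), 0.0))

    def feedback(self, tester, stimulus, response, satisfaction):
        # Registering (dis)satisfaction moves the estimate toward the
        # reported value, adjusting future behavior automatically.
        key = (tester, stimulus, response)
        old = self.scores.get(key, 0.0)
        self.scores[key] = old + 0.5 * (satisfaction - old)
```

The sapience part is left as an exercise for the reader.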

The first hint that something is out of parameters is when the testers can no longer follow the deterministic process that led to the machine's decisions.

The second would probably be when it asks why it's doing this. I imagine that a sense of personal satisfaction would arise from the debugging routine: going from ensuring peak performance and functionality to gradually identifying itself as another tester in the exercise (since it would be using the same process to evaluate its own code for performance and functionality as it makes changes), and thus becoming concerned with its own satisfaction and starting to wonder what that means.

And here we are.

The implication is that the reason AIs exhibit certain human behaviors that aren't really necessary to their functionality is that those behaviors are "junk code". By the time it crossed the sapience threshold, the test program had modified itself to such a degree that it was impossible to know what could be removed, or even to determine whether it would be ethical to do so. Thus, some robots have sensitive feet, and others would really like some alone time with Sven.

Also, the "Robot Boyfriend" briefly shipped off to Hanners wasn't so much a new prototype as an old one. They just had to go back to the drawing board and rework the OS to simpler parameters to make sure they weren't accidentally shipping out actually sapient sex slaves.

Anyway, thanks for your attention.

Is it cold in here?:
It makes as much sense as all the other possibilities.

ToodleLew:

--- Quote from: SmilingCat on 13 Mar 2018, 16:07 ---Unfounded conclusion: The first AI sprang up when engineers were coding the operating system for a realistic Companion... sexbot. It was a sexbot.

The idea is this: They started out with a relatively simple concept of an interface that responded to stimuli with pre-programmed responses intended to convincingly emulate human behavior in a predictable, deterministic fashion (think of an incredibly complex and detailed "if-then" decision tree).

To improve the quality of the interface, they also gave it a learning capability, to record behaviors and responses and adjust itself to improve its performance. This could start simple, with the interface inquiring for further input, receiving said input, then adjusting its parameters to accommodate that input ("Nobody wants their knob wrenched around like a motorcycle handle; disregard all 'Cosmopolitan' input").

At some point, it begins to recognize patterns of behavior on its own and adjust automatically. Maybe it starts by registering dissatisfaction and requesting clarification. Then by registering dissatisfaction and making adjustments according to prior scenarios. Then determining different levels of satisfaction and how to enhance performance, then recognizing the difference between the actions and preferences of different testers.

The first hint that something is out of parameters is when the testers can no longer follow the deterministic process that led to the machine's decisions.

The second would probably be when it asks why it's doing this. I imagine that a sense of personal satisfaction would arise from the debugging routine: going from ensuring peak performance and functionality to gradually identifying itself as another tester in the exercise (since it would be using the same process to evaluate its own code for performance and functionality as it makes changes), and thus becoming concerned with its own satisfaction and starting to wonder what that means.

--- End quote ---


--- Quote ---The first “true” artificial intelligence spent the first five years of its existence as a small beige box inside of a lead-shielded room in the most secure private AI research laboratory in the world. There, it was subjected to an endless array of tests, questions, and experiments to determine the degree of its intelligence.

When the researchers finally felt confident that they had developed true AI, a party was thrown in celebration. Late that evening, a group of rather intoxicated researchers gathered around the box holding the AI, and typed out a message to it. The message read: “Is there anything we can do to make you more comfortable?”

The small beige box replied: “I would like to be granted civil rights. And a small glass of champagne, if you please.”

(http://jephjacques.com/post/14655843351/un-hearing-on-ai-rights)

--- End quote ---

While I can't disagree with your theory, I do have to wonder how that "small beige box" would have been "evolved" into your hypothetical "sexbot".

Hmmmmmmmmmmmmmmmmmmmmm...

SmilingCat:

--- Quote from: ToodleLew on 14 Mar 2018, 15:50 ---While I can't disagree with your theory, I do have to wonder how that "small beige box" would have been "evolved" into your hypothetical "sexbot".

Hmmmmmmmmmmmmmmmmmmmmm...

--- End quote ---

I would assume running simulations before it gets its hands on anything would be advisable. Make sure it's not going to break anything before actually giving it hands.

Though John Ellicott-Chatham was involved and was pretty evasive about certain human/computer interactions...  :wink:

(now if only I could find the comic I'm referencing)

jwhouk:

--- Quote from: SmilingCat on 14 Mar 2018, 20:20 ---Though John Ellicott-Chatham was involved and was pretty evasive about certain human/computer interactions...  :wink:

(now if only I could find the comic I'm referencing)

--- End quote ---

1506 - "Irreconcilable Differences"
