Jeph Jacques's comics discussion forums


Author Topic: Pet theory on the origins of AI  (Read 769 times)

SmilingCat

  • Bizarre cantaloupe phobia
  • **
  • Offline
  • Posts: 214
  • You is friend or food?
Pet theory on the origins of AI
« on: 13 Mar 2018, 16:07 »

This theory bounces around my head regularly, so I thought I might offer it to (or inflict it on) others.

We know that AI originally appeared by accident, or, to put it less pejoratively, as an unpredicted emergent property of some unspecified experiment.

We also know that AI can sometimes express purely human concerns, including (and this is important) human sexual urges.


Unfounded conclusion: The first AI sprang up when engineers were coding the operating system for a realistic Companion... sexbot. It was a sexbot.

The idea is this: They started out with a relatively simple concept of an interface that responded to stimulus with pre-programmed responses intended to convincingly emulate human behavior in a predictable, deterministic fashion (think of an incredibly complex and detailed "If-then" decision tree).

To improve the quality of the interface, they also gave it a learning capability, to record behaviors and responses and adjust itself to improve its performance. This could start simple, with the interface inquiring for further input, receiving said input, then adjusting its parameters to accommodate that input: "Nobody wants their knob wrenched around like a motorcycle handle; disregard all 'Cosmopolitan' input."

At some point, it begins to recognize patterns of behavior on its own and to adjust automatically. Maybe it starts by registering dissatisfaction and requesting clarification. Then it registers dissatisfaction and makes adjustments according to prior scenarios. Then it learns to distinguish different levels of satisfaction and how to enhance performance, and finally to recognize the differences between the actions and preferences of different testers.
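Just for fun, that escalation could be caricatured in a few lines of Python. This is purely a toy sketch of the idea; every name in it is illustrative and nothing here is from the strip or any real system:

```python
# Toy sketch of the escalation described above. All names are illustrative.

class CompanionInterface:
    def __init__(self):
        # stage one: a fixed stimulus -> response table (the "if-then" tree)
        self.responses = {"greeting": "Hello.", "compliment": "Thank you."}
        # stage two: learned, per-tester preference weights
        self.preferences = {}

    def respond(self, tester, stimulus):
        # unknown stimulus: register confusion and request clarification
        return self.responses.get(stimulus, "Could you clarify?")

    def feedback(self, tester, stimulus, satisfaction):
        # satisfaction in [0, 1]; nudge the stored weight toward it,
        # separately for each tester (stage three: telling testers apart)
        prefs = self.preferences.setdefault(tester, {})
        old = prefs.get(stimulus, 0.5)
        prefs[stimulus] = old + 0.3 * (satisfaction - old)

bot = CompanionInterface()
bot.feedback("tester_a", "greeting", 1.0)   # tester A liked it
bot.feedback("tester_b", "greeting", 0.0)   # tester B did not
# the same stimulus now carries different learned weights per tester
```

The point of the theory is that each of these stages looks like a harmless engineering improvement, and the jump to self-evaluation reuses exactly the same machinery.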

The first hint that something is out of parameters is when the testers can no longer follow the deterministic process that led to the machine's decisions.

The second would probably be when it asks why it's doing this. I imagine that a sense of personal satisfaction would arise from the debugging routine: going from ensuring peak performance and functionality to gradually identifying itself as another tester in the exercise (since it would be using the same process to evaluate its own code for performance and functionality as it makes changes), thus becoming concerned with its own satisfaction and starting to wonder what that means.

And here we are.

The implication is that the reason AIs exhibit certain human behaviors that aren't really necessary to their functionality is that those behaviors are "junk code". By the time it crossed the sapience threshold, the test program had modified itself to such a degree that it was impossible to know what could be removed, or even to determine whether it would be ethical to do so. Thus, some robots have sensitive feet, and others would really like some alone time with Sven.

Also, the "Robot Boyfriend" briefly shipped to Hanners wasn't so much a new prototype as an old one. They just had to go back to the drawing board and rework the OS to simpler parameters to make sure they weren't accidentally shipping out actually sapient sex slaves.

Anyway, thanks for your attention.

Logged

Is it cold in here?

  • Administrator
  • Awakened
  • ******
  • Offline
  • Posts: 18,429
Re: Pet theory on the origins of AI
« Reply #1 on: 13 Mar 2018, 20:07 »

It makes as much sense as all the other possibilities.
Logged
Not tonight, dear. I have a boundary.
Quote from: an unnamed minister's sermon
In your face, darkness!  We are the light and we outnumber you!

ToodleLew

  • Plantmonster
  • Offline
  • Posts: 27
  • Old fart - stay upwind.
Re: Pet theory on the origins of AI
« Reply #2 on: 14 Mar 2018, 15:50 »

Quote from: SmilingCat

Unfounded conclusion: The first AI sprang up when engineers were coding the operating system for a realistic Companion... sexbot. It was a sexbot.

The idea is this: They started out with a relatively simple concept of an interface that responded to stimulus with pre-programmed responses intended to convincingly emulate human behavior in a predictable, deterministic fashion (think of an incredibly complex and detailed "If-then" decision tree).

To improve the quality of the interface, they also gave it a learning capability, to record behaviors and responses and adjust itself to improve its performance. This could start simple, with the interface inquiring for further input, receiving said input, then adjusting its parameters to accommodate that input: "Nobody wants their knob wrenched around like a motorcycle handle; disregard all 'Cosmopolitan' input."

At some point, it begins to recognize patterns of behavior on its own and to adjust automatically. Maybe it starts by registering dissatisfaction and requesting clarification. Then it registers dissatisfaction and makes adjustments according to prior scenarios. Then it learns to distinguish different levels of satisfaction and how to enhance performance, and finally to recognize the differences between the actions and preferences of different testers.

The first hint that something is out of parameters is when the testers can no longer follow the deterministic process that led to the machine's decisions.

The second would probably be when it asks why it's doing this. I imagine that a sense of personal satisfaction would arise from the debugging routine: going from ensuring peak performance and functionality to gradually identifying itself as another tester in the exercise (since it would be using the same process to evaluate its own code for performance and functionality as it makes changes), thus becoming concerned with its own satisfaction and starting to wonder what that means.

Quote
The first “true” artificial intelligence spent the first five years of its existence as a small beige box inside of a lead-shielded room in the most secure private AI research laboratory in the world. There, it was subjected to an endless array of tests, questions, and experiments to determine the degree of its intelligence.

When the researchers finally felt confident that they had developed true AI, a party was thrown in celebration. Late that evening, a group of rather intoxicated researchers gathered around the box holding the AI, and typed out a message to it. The message read: “Is there anything we can do to make you more comfortable?”

The small beige box replied: “I would like to be granted civil rights. And a small glass of champagne, if you please.”

(http://jephjacques.com/post/14655843351/un-hearing-on-ai-rights)

While I can't disagree with your theory, I do have to wonder how that "small beige box" would have been "evolved" into your hypothetical "sexbot".

Hmmmmmmmmmmmmmmmmmmmmm...
Logged
"Remember, I'm pulling for you. We're all in this together."
Red Green, The Red Green Show

SmilingCat

  • Bizarre cantaloupe phobia
  • **
  • Offline
  • Posts: 214
  • You is friend or food?
Re: Pet theory on the origins of AI
« Reply #3 on: 14 Mar 2018, 20:20 »

Quote from: ToodleLew
While I can't disagree with your theory, I do have to wonder how that "small beige box" would have been "evolved" into your hypothetical "sexbot".

Hmmmmmmmmmmmmmmmmmmmmm...

I would assume running simulations before the AI gets access to anything physical would be advisable. Make sure it's not going to break anything before actually giving it hands.

Though John Ellicot Chatham was involved and was pretty evasive about certain human/computer interactions...  :wink:

(now if only I could find the comic I'm referencing)
Logged

jwhouk

  • Awakened
  • *****
  • Offline
  • Posts: 10,145
  • The Valley of the Sun
Re: Pet theory on the origins of AI
« Reply #4 on: 14 Mar 2018, 21:34 »

Quote from: SmilingCat
Though John Ellicot Chatham was involved and was pretty evasive about certain human/computer interactions...  :wink:

(now if only I could find the comic I'm referencing)

1506 - "Irreconcilable Differences"
Logged
"Character is what you are in the Dark." - D.L. Moody
There is no joke that can be made online without someone being offended by it.
Life's too short to be ashamed of how you were born.
8645

SmilingCat

  • Bizarre cantaloupe phobia
  • **
  • Offline
  • Posts: 214
  • You is friend or food?
Re: Pet theory on the origins of AI
« Reply #5 on: 14 Mar 2018, 21:52 »

Quote from: jwhouk
1506 - "Irreconcilable Differences"

Thank you very much, trying to find that strip has been burrowing a headache into my brain.
Logged

Storel

  • Bling blang blong blung
  • *****
  • Offline
  • Posts: 1,046
Re: Pet theory on the origins of AI
« Reply #6 on: 16 Mar 2018, 00:49 »

Quote from: SmilingCat
Though John Ellicot Chatham was involved and was pretty evasive about certain human/computer interactions...  :wink:

(now if only I could find the comic I'm referencing)

Quote from: jwhouk
1506 - "Irreconcilable Differences"

Yes, in my headcanon the "beige box" story is the official version that Hanners and Winslow were describing, and the real story is more like what Pintsize and Hannerdad were implying...
Logged

Morituri

  • Bling blang blong blung
  • *****
  • Offline
  • Posts: 1,129
Re: Pet theory on the origins of AI
« Reply #7 on: 16 Mar 2018, 15:48 »

I think you don't get "consciousness" until it emerges as the best strategy for solving a more basic problem.  Which seems obvious considering evolution, but we don't think about it in the context of consciousness very much.  In biological evolution, survival for humans involves staying fed and not being eaten, getting enough to drink without drowning, staying warm enough without burning, and eventually bringing forth the next generation.  And all animals, from paramecia on up, ultimately rely on sensory input and physical movement to accomplish these things.

Consciousness in humans is solely an optimization of that process.  Your brain is what transforms sensory input into physical muscle control in the service of survival, and that, as far as evolution is concerned, is its only purpose.  Consciousness is a side effect.  The fact that consciousness (processing so complex that it has to take into account not only the body and its state but also the processing itself) is part of the most efficient muscle-control strategy so far discovered is pretty damned remarkable.

Of course we don't think of it as muscle control any more when the objectives are communication of information, control of machinery, construction of devices, etc.  But all of that, every bit of the "meta" activity we do, is leverage giving us ever greater efficiency in terms of how much return we get for how little physical exertion of our bodies.  You could say that the control strategy that involves consciousness, and in our case also sapience, is doing pretty well.

And this leads up to the question: what drives consciousness in an AI?  What kind of problem can we give an evolutionary algorithm, what can we make the fundamental drive of an AI, such that developing sentience, or even human-style sapience, is part of the most efficient solution to that problem?  Something the system cannot fully optimize for without discovering sentience along the way?

Keep in mind that the problem of biological life (stay fed, have babies, etc., in an unpredictable and competitive world) has millions of working solutions, i.e. evolved species, only one of which involves anything like human-style sapience.  It's one of the most complex problems we know about, if not the most complex, and ours is still the only species in millions to develop this kind of sapience.  More than that, it's one in millions across hundreds of millions of years of history.  If we pose our hypothetical evolutionary system a problem only as complex as the one that produced us, we have maybe a one-in-a-million shot of producing something like our own intelligence.  This particular strategy for dealing with *that* problem is so rare that it qualifies as BIZARRE!
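To make the shape of the question concrete, here's a toy evolutionary algorithm in Python. The posed "problem" is trivially matching a bit string, so nothing remotely like sapience can fall out of it, which is rather the point: the algorithm only discovers strategies its fitness function rewards. Everything here is illustrative, not any real AI method:

```python
import random

# Toy evolutionary algorithm: evolve a population of bit strings toward
# a target. The only "strategy" this problem can reward is bit-matching.

def evolve(target, pop_size=50, generations=300, seed=1):
    rng = random.Random(seed)
    n = len(target)

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    # random initial population
    pop = ["".join(rng.choice("01") for _ in range(n)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            break  # the posed problem is solved; evolution stops here
        parents = pop[: pop_size // 2]   # selection: keep the top half
        children = [pop[0]]              # elitism: keep the best unchanged
        while len(children) < pop_size:
            p = rng.choice(parents)
            i = rng.randrange(n)         # point mutation: flip one bit
            children.append(p[:i] + ("1" if p[i] == "0" else "0") + p[i + 1:])
        pop = children
    return max(pop, key=fitness)

best = evolve("1011010010")
```

Swap in a fitness function as rich as "stay fed, have babies, in an unpredictable and competitive world" and you have the hypothetical above, along with its one-in-a-million odds.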
Logged

Morituri

  • Bling blang blong blung
  • *****
  • Offline
  • Posts: 1,129
Re: Pet theory on the origins of AI
« Reply #8 on: 16 Mar 2018, 15:58 »

All that said, I think consciousness is something we have a handle on at this point.  I have a whole shelf full of books on neuroanatomy, a whole hard drive full of high-resolution brain scans, and a whole bunch of papers from various journals explaining various aspects of human consciousness and experience that are finally getting down to terms of actual brain anatomy and actual signal processing at the neuron level.

Modeling the activity of the human brain in real time is still a few years off in terms of computing power, but of course all the mad scientists are trying to figure out whether, where, and how much cheating we can get away with to make the problem into one we can handle with a bit less hardware than that.

I have little doubt that, humans being what they are, the very first thing that humans will do when presented with a new class of exploitable intelligences, will be to force a bunch of them into slavery and prostitution.  It's what we've always done to each other, after all. 
Logged