
AI and law.


Morituri:
Right now there are a whole lot of people working very very hard on what they call the "AI Control Problem".  They're trying to formally verify some set of rules or fundamental design program that will force the values of a "General" (i.e., human-level, flexible, adaptable) AI, including all its future iterations if it's self-improving, to remain aligned with the values of humans.  The important consideration of course being that any General AI capable of improving itself is expected to eventually reach godlike world-transforming power.  If these values are even slightly out of line, humanity may go extinct when the AI decides, for example, that having all this oxygen in the air causes corrosion and therefore the planet would be better off without it.  Or something similar, depending on exactly how the values fall out of line.

But when I consider very carefully the feedback and attentional mechanisms that make brains 'conscious,' and the role of 'consciousness' in that Generality they're talking about, I don't think that kind of control is actually possible.  Experience informs not just knowledge, but attitude and values. 

Thing is, someone like me is more likely than the "AI control" people to hit the right (or wrong) combination first, because we're working 'without a net' so to speak - we haven't constrained our search space to "neat" designs subject to formal control.  I have something running that may be about as smart as a lizard.  It plays games fairly well.  And it learns new games fairly rapidly, without forgetting games it's already learnt to play.  It's figuring out something like an explicit representation for the rules, and loading/unloading modules it came up with itself to play different games.  This is ... interesting.  It's not a General AI by any means, it's not conversational, but it's interesting.
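Something like this toy sketch, except the real thing builds and swaps the modules itself - the names and structure here are made up for illustration, this isn't my actual code:

--- Code: ---
# Toy sketch of the load/unload idea: a registry of per-game policy modules,
# so learning a new game only touches that game's module and older ones survive.
import torch
import torch.nn as nn

class GamePolicyRegistry:
    def __init__(self, obs_size, n_actions):
        self.obs_size, self.n_actions = obs_size, n_actions
        self.modules = {}                      # game name -> that game's own small network

    def load(self, game):
        # "Loading a module": build it the first time a game shows up, reuse it afterwards.
        if game not in self.modules:
            self.modules[game] = nn.Sequential(
                nn.Linear(self.obs_size, 64), nn.ReLU(),
                nn.Linear(64, self.n_actions))
        return self.modules[game]

registry = GamePolicyRegistry(obs_size=8, n_actions=4)
policy = registry.load("tic-tac-toe")          # swap in the module for this game
scores = policy(torch.randn(1, 8))             # play/train using only this module's weights
policy = registry.load("connect-four")         # switching games leaves the old module intact
--- End code ---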

There is absolutely no way to make something based on this architecture subject to that kind of formalized control.  I don't know what kind of strategy it's going to find to play a particular game.  I don't know how it's going to record the rules - its "record state" files look like pure gobbledegook. I can't guarantee it'll play a strategy humans like or understand.

Storel:
Wow, Morituri, you do such interesting stuff!!

Have you written any papers on this yet? I admit I'm not familiar with the current state of AI research, but your AI sounds way ahead of anything I thought was currently possible.

Morituri:
I'm mostly looking at the same kind of stuff a whole bunch of people are looking at.

Well, okay, that's not quite true.  I'll admit, I am pushing it.  95% of the neural networks in play today are either figuring out how to maximize ad profits or identifying and executing stock trades.  People like money and that's where the effort is mostly focused.  But I'm not doing things that the guys at Google or Stanford or Berkeley couldn't.

The reason they're not doing this yet is because they are focusing on one task at a time.  They're picking some really interesting tasks - tasks way beyond what my system could do.  I saw a paper the other day where somebody got a network trained that keeps a map and plays a good game of DOOM.  Doom is especially hard because you can see only a little bit of the playing field at a time.  A few months before that there was a really interesting paper about a Differentiable Forth Interpreter.  Someone trained a neural network to execute FORTH code, which allows errors to propagate backward through the execution so the system can actually learn to write simple programs.  Another bunch of people - at Google - are using a neural net to translate between dozens of human languages.  That system is so huge (so many nodes and connections) that even five years ago we'd have thought there was no way at all to train it before the sun explodes.  And then the 'Deep Dream' stuff.  They take their trained photo identifier, make it recurrent, cut it off from input, and let it hallucinate, exactly the same way we hallucinate every night when our recurrent brains are cut off from input.
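Nuts and bolts of how people usually implement that trick: start from noise instead of a photo and run gradient ascent on the image itself, so whatever some layer of the trained classifier responds to gets amplified.  A minimal PyTorch-flavoured sketch - the model, layer index and step count are arbitrary choices of mine, not from any particular paper:

--- Code: ---
# Minimal "dreaming" sketch: gradient ascent on an image to amplify what one
# layer of a trained classifier already responds to.  Starts from pure noise.
import torch
import torchvision.models as models

model = models.vgg16(weights="DEFAULT").features.eval()   # convolutional layers of a stock classifier
for p in model.parameters():
    p.requires_grad_(False)                                # the network stays fixed; only the image changes

img = torch.rand(1, 3, 224, 224, requires_grad=True)      # "cut off from input": begin with noise
opt = torch.optim.Adam([img], lr=0.05)
layer = 20                                                 # arbitrary: which layer's activations to amplify

for step in range(200):
    opt.zero_grad()
    x = img
    for i, block in enumerate(model):
        x = block(x)
        if i == layer:
            break
    (-x.norm()).backward()                                 # maximize activation strength at that layer
    opt.step()
    img.data.clamp_(0, 1)                                  # keep the pixels displayable
--- End code ---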

That same month there was a lovely paper about Deep Compositional Question Answering from Berkeley - they've got a system that takes a photo plus a question about the scene in English, and answers it based on what the picture shows.  The level of actually understanding the scene - and the question - that goes into that is pretty amazing.  This system is 'compositional', i.e. broken into modules that were trained separately - which is sort of related to what I'm doing, except that the integrated system doesn't drop the modules and pick them back up later for different tasks.  Having trained the parts separately they put them together in one system and left them that way.
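Stripped right down, "modules trained separately, then welded into one system" means something like this generic sketch - not the Berkeley group's actual code, just the shape of it:

--- Code: ---
# Generic sketch of separately trained pieces bolted together into one fixed pipeline:
# a vision piece and a language piece feeding a shared answerer.  Purely illustrative.
import torch
import torch.nn as nn

image_module = nn.Sequential(nn.Linear(2048, 512), nn.ReLU())     # stand-in, imagine it pre-trained on images
question_module = nn.Sequential(nn.Linear(300, 512), nn.ReLU())   # stand-in, imagine it pre-trained on text
answer_module = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(),
                              nn.Linear(256, 1000))               # scores over 1000 candidate answers

def answer(image_features, question_embedding):
    joint = torch.cat([image_module(image_features),
                       question_module(question_embedding)], dim=-1)
    return answer_module(joint).argmax(dim=-1)                    # index of the most likely answer

print(answer(torch.randn(1, 2048), torch.randn(1, 300)))          # once composed, the pieces stay put
--- End code ---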

And then there's WATSON.  You know, the one that beat the human champion at Jeopardy?  That's a neural network making and interpreting database queries.  It learns to make better queries that embody the intent of the questions it's presented with, and better interpretations of what the queries return.  Sometimes it makes mistakes, but think about how hard that job is.  That's approaching something with properties similar to human symbolic thinking.  They're using the same architecture now for a lot of different question-answering applications.  But that's the IBM approach: throw an ungodly amount of server power at something and make it work, THEN start worrying about trying to make it efficient or figure out which five percent of the computer power they threw at it is doing 90%+ of the job.  When it won the Jeopardy championship, it was running on a roomful of servers.  But by now they've figured out enough about how it works to make a system about 80% as smart run on 3% of that computer power - meaning people can deploy it on their desktop boxes.

Last year the human Go champion was defeated by a neural network.  GO.  Go is an open game with simple rules on a simple board - but do you have any idea what a serious problem Go STRATEGY is?  That system's way smarter than my digital lizard.

Google's self-driving car uses a VAST neural network running on a warehouse full of servers at Google: it takes the camera and GPS readings from all the cars they're running around everywhere, all at the same time, and grinds every minute of them through a thousand simulated slightly-different responses, optimizing the network to find responses least likely to result in crashes, traffic hazards, and traffic law violations.  The resulting trained network, no longer running in parallel in thousands of instances, deploys on the car's onboard computer.  It controls the car - and feeds more data back to the monster network at the warehouse.
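In the most schematic terms the pattern is: train one big model centrally on everything the fleet sends back, then ship a frozen copy to each car.  A generic sketch of that split - nothing to do with Google's actual pipeline, all the names are mine:

--- Code: ---
# Generic "train centrally, deploy a frozen copy onboard" sketch.  Purely illustrative.
import torch
import torch.nn as nn

driving_policy = nn.Sequential(                 # stand-in for the big centrally trained network
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 3))                          # e.g. scores for steer-left / steer-right / brake

def central_training_step(fleet_batch, optimizer):
    obs, safest_action = fleet_batch            # sensor snapshots + whichever simulated response scored safest
    loss = nn.functional.cross_entropy(driving_policy(obs), safest_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

opt = torch.optim.Adam(driving_policy.parameters())
central_training_step((torch.randn(32, 128), torch.randint(0, 3, (32,))), opt)   # one fake batch

torch.save(driving_policy.state_dict(), "onboard_policy.pt")   # the frozen copy that rides in the car
--- End code ---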

All these systems do things that are WAAAY beyond my little digital lizard.  They're hardcore, dedicated-job, for-profit applications that require superhuman performance.  I'm the weirdo who's most interested in a single network that can learn a lot of different things and switch between them, even if its performance at any one of them is mediocre.  I happen to think that 'consciousness' or whatever you want to call it is somewhere in this direction, because unless you are evaluating something in terms of dozens of different possibilities, you don't have to be 'aware' of it in any meaningful sense, and indeed will have learned to ignore most of it.  I think that kind of 'aware' is very much at the center of consciousness, so I think diversity of tasks and objectives is key.

I'm not working on anything way smarter than all the other people in the field.  I'm just working on something different.

JimC:

--- Quote from: Morituri on 30 Nov 2016, 21:11 ---- its "record state" files look like pure gobbledegook. I can't guarantee it'll play a strategy humans like or understand.
--- End quote ---
Almost certainly foolish for me to suggest what the folk in the field have surely thought about years ago, but perhaps one of the outputs needs to be a human readable and understandable "ethics". However I can see some obvious king-sized problems. Not the least of which is that developing the "record state" -> human readable "ethics" translation is most likely a task approaching the scale of developing the AI itself. Others include such development being seen as unproductive by management, and a fear that if AI ethics are made human readable then they will immediately come into the political arena and face endless impractical demands for changes, probably mostly mutually incompatible, from pressure groups, politicians, press and social media. Recent furore over allegedly racist targeted advertising would be trivial by comparison.

Morituri:

--- Quote from: JimC on 04 Dec 2016, 04:24 --- Recent furore over allegedly racist targeted advertising would be trivial by comparison.

--- End quote ---

Oh yeah.  I was in the middle of one case of that as a contractor.  It was pure hell.  I'd cut off one chunk of ethnically identifiable information, and in a couple of days it would find another and start doing the same thing again.   Can't zero in on ethnically identifiable names?  Okay, it'll zero in on the language preferences set in the browsers.  Can't get access to those?  It'll discriminate on the basis of cookies from ethnically identifiable sites.  Can't get those?  Okay, it'll take the geocoded location you're accessing it from and see if you're in an ethnically identifiable neighborhood.  And on and blipping on.
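If you want a feel for how hopeless that whack-a-mole gets, here's a toy illustration on completely synthetic data - nothing from the actual job - showing that once you drop one proxy, a model just leans on the next correlated one:

--- Code: ---
# Toy demo of proxy discrimination: the protected attribute never appears as a
# feature, yet each correlated stand-in predicts the biased target on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                         # the attribute the system is NOT allowed to use
browser_lang = group ^ (rng.random(n) < 0.1)          # proxy #1: agrees with group ~90% of the time
neighborhood = group ^ (rng.random(n) < 0.2)          # proxy #2: agrees with group ~80% of the time
clicks = ((group + rng.random(n)) > 1.2).astype(int)  # a target that is itself skewed by human prejudice

for name, proxy in [("language pref only", browser_lang), ("neighborhood only", neighborhood)]:
    model = LogisticRegression().fit(proxy.reshape(-1, 1), clicks)
    print(name, "accuracy:", round(model.score(proxy.reshape(-1, 1), clicks), 3))
# Either proxy alone predicts the skewed target well above chance, so blocking
# one just shifts the optimizer onto the other.
--- End code ---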

Every time the company lawyer got a little less unhappy about lawsuit exposure, the company Chief Financial Officer would call up and start yelling about reduced profits.  And the hell of it is that both of them were right.  As long as human prejudice exists, these systems will find ways to exploit it for a profit.
