
WCDT Strips 3461-3465 (17-21 April 2017)


Morituri:
After reading today's comic I hesitated to even come to the forums because "OH MY GOD THE SHIPPING!"

But I was pleasantly surprised.  Thanks, most of you, for not going there.  It develops as (and if) it develops, and that's fine.

I work on narrow AI systems for actual paychecks, but my private obsession - and the code I write on my own - is aimed (probably badly, but at least that's where I'm aiming) at conscious/general AI.  I have something to say about memory in AI that might be relevant.

On the question of memory, there are about a hundred different things people are trying.  Most of them are of the form "Here's a way to interface thus-and-such memory construction with control by a neural network - let's see whether we can train a neural network to use it in an appropriate way."  (There's a rough sketch of that common pattern right after the example list below.)

Some examples (spoilered because you don't need the specifics to get the point):
Many "minor" or "early" examples in this Wikipedia article
Neural Pushdown Automata
Long Short Term Memory
Neural Turing Machines
Fuzzy Neural Petri Nets
Neural State Machines
Echo State Networks
Reservoir Computing
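
To make that common pattern concrete, here's a toy numpy doodle of the "differentiable external memory plus neural controller" idea. It isn't any of the systems listed above - every name, size and number in it is made up - it just shows the read-by-similarity / write-by-attention trick they share:

--- Code: ---
# Toy sketch of the pattern behind "external memory + neural controller"
# systems: the controller emits a key, the key is compared against every
# memory row, and the read-out is a soft (differentiable) blend of rows.
# All names and sizes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

N, M = 8, 4                       # 8 memory slots, 4 numbers per slot
memory = rng.normal(size=(N, M))  # the "external" memory matrix

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, sharpness=5.0):
    """Read a blend of memory rows, weighted by similarity to `key`."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    weights = softmax(sharpness * sims)   # soft attention over the slots
    return weights @ memory, weights

def soft_write(memory, weights, erase, add):
    """Blend new content into whichever slots were attended to."""
    memory = memory * (1 - np.outer(weights, erase))
    return memory + np.outer(weights, add)

key = rng.normal(size=M)                  # would come from the controller
read_vec, w = content_read(memory, key)
memory = soft_write(memory, w, erase=np.full(M, 0.5), add=key)

print("attention weights:", np.round(w, 2))
print("read vector:      ", np.round(read_vec, 2))
--- End code ---

The whole point of doing the reads and writes "softly" like that is to keep everything differentiable, so the controller network can be trained to use the memory - which is exactly the "let's see whether we can train a network to use it appropriately" part.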

Anyway, so far, what we have when we make a system that needs to keep track of a lot of context and specific information is a system with a lot of tiny little pieces of memory we can read, but no individual piece makes much sense by itself. If something is simple, we can usually work out what the system is using each of the memory subsystems for - but if it's complicated, the relationships between all those pieces get as hard to interpret sensibly as all the topological connections and weights and thresholds.  This is especially true with networks made of LSTM units - each node represents a single remembered number, and how those numbers relate to memories of anything more complicated is determined by the network structure.
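
And to show what I mean by "each node represents a single remembered number": here's one step of a single LSTM unit written out in toy numpy (not any particular library's implementation - the weights are random junk). The unit's entire memory is the one number c; anything more structured than that only exists in how thousands of these units are wired together.

--- Code: ---
# One step of a single (scalar) LSTM unit in plain numpy. The unit's
# whole memory is the single number `c`; richer memories only exist in
# how many such units are wired together. Toy weights, for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_unit_step(x, h, c, W):
    """x: input, h: previous output, c: previous cell state (one number)."""
    v = np.array([x, h, 1.0])          # input, last output, bias term
    f = sigmoid(W["f"] @ v)            # forget gate: keep old memory?
    i = sigmoid(W["i"] @ v)            # input gate: accept new memory?
    o = sigmoid(W["o"] @ v)            # output gate: reveal the memory?
    g = np.tanh(W["g"] @ v)            # candidate value to remember
    c = f * c + i * g                  # the single remembered number
    h = o * np.tanh(c)                 # what the unit passes on
    return h, c

rng = np.random.default_rng(1)
W = {k: rng.normal(size=3) for k in "fiog"}

h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0, 1.0]:         # some toy input sequence
    h, c = lstm_unit_step(x, h, c, W)
    print(f"input {x:.0f} -> cell state c = {c:+.3f}, output h = {h:+.3f}")
--- End code ---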

I imagine that memory stored in files - literal recordings of the sensory inputs - is a handy and useful thing that future AIs will probably have available to them, the same way footage of every minute of human lives is going to be on surveillance video or social media or both.  But I also expect that the memory that actually gets used from moment to moment - the subjectively experienced memories and formative experiences - is likely to be chaotically structured and difficult to make actual sense of from an objective POV.

Eventually we're going to solve strong AI.  And then it's going to solve us.

pwhodges:

--- Quote from: RuffGruff on 20 Apr 2017, 08:02 ---(I've not seen anything like a debate over AI personhood within the comic yet)
--- End quote ---

Are you aware of Freefall?  Note that it's been going for a really long time, so it may take a while to reach where it starts to get interesting.

Case:

--- Quote from: Mehre on 20 Apr 2017, 07:46 ---Well ...

--- End quote ---

Hi new-ish person! Very interesting post, and it leaves me with a few questions. (Also: Do you have a CS background? Would you be willing to expand a bit on the terms you used for us laypeople, like 'neural networks'?)


--- Quote from: Mehre on 20 Apr 2017, 07:46 ---Well. In the QC universe it doesn't seem that AI are based on neural networks, so all we can apply here are general concepts.

--- End quote ---

What makes you think that QC-verse AI don't incorporate building blocks that function like neural networks? We know of at least two realisations of the concept - the ones in our heads, and the ones simulated on our Turing-machine-class computers. While the physical realisations are completely different, the operating principle is the same - so why shouldn't QC-verse AIs use the concept? It does appear to have its uses.

What Jeph said (3376) was (*):
--- Quote ---"The AI mind, as those of organic beings, is self-constructing and self-organizing. It is an emergent system. Just as we do not fully comprehend the organic mind, the AI mind remains mysterious. However, we do have a better understanding of the building blocks. Quantum spin states in foamed nanocrystal lattices can be manipulated, and with a greater degree than possible with organic matter"
--- End quote ---

I see nothing here that would preclude those spin-states from simulating neural networks? Did I miss something? :-\

It bugs me that I have no idea how feasible that idea of Jeph's would be, despite it actually touching on my job ... many-body spin lattice systems are a huge subtopic in both experimental and theoretical solid state physics research - though the more interesting ones (read: "more complex, and therefore potentially more useful") tend to feature many-particle correlations, which could make it hard to edit parts of the collective spin-state without changing entirely unrelated ones. It wouldn't be like editing DNA - more like DNA that sometimes spontaneously decides to re-write entire coding sequences of itself just because you exchanged one base-pair for another. So changing a comma might end up re-writing a sentence.
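
If anyone wants to poke at that "local edits aren't local" point numerically, here's a tiny toy doodle - four Heisenberg-coupled spins, nothing remotely like a real device, just the textbook nearest-neighbour coupling. Flip a single spin of the entangled ground state and the result is smeared over several collective eigenstates, rather than being "the same state with one spin changed":

--- Code: ---
# Toy illustration: in a correlated spin state, "editing" one spin is not
# a local change to the collective state. Four spins, open Heisenberg
# chain, textbook couplings - purely illustrative numbers.
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

N = 4  # a very small open spin chain

def site_op(op, site):
    """Operator `op` acting on one site of the N-spin chain."""
    ops = [I2] * N
    ops[site] = op
    return reduce(np.kron, ops)

# Nearest-neighbour Heisenberg coupling
H = sum(site_op(s, i) @ site_op(s, i + 1)
        for i in range(N - 1) for s in (sx, sy, sz))

energies, states = np.linalg.eigh(H)
ground = states[:, 0]                 # correlated (entangled) ground state

edited = site_op(sx, 0) @ ground      # "edit" only the first spin

weights = np.abs(states.conj().T @ edited) ** 2
top = sorted(zip(energies, weights), key=lambda t: -t[1])[:5]
for E, w in top:
    print(f"E = {E:+7.3f}   weight of edited state = {w:.3f}")
# No single eigenstate carries all the weight: the one-spin edit changed
# the collective state, not just "whatever was stored in spin 0".
--- End code ---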

It's Good 'TechTheTech', in any case.


--- Quote from: Mehre on 20 Apr 2017, 07:46 ---One of them is that it is subsymbolic and distributed, which would point to memories more akin to human ones.

--- End quote ---

Could you elaborate on the meaning of those two terms? And how they relate to memory? From what I've found, the term 'subsymbolic' relates to processing, not necessarily storage (though the latter is a part of the former, to some degree):


--- Quote ---‘Symbolic’ and ‘subsymbolic’ characterize two different approaches to modeling cognition. Traditionally, as I understand it, this dichotomy pitted anything easily understandable as a symbol manipulation system (logic and symbol string rewrite systems and associated abstract computing machines that classify or generate strings of symbols - e.g. Turing machines, finite state machines) as fundamentally different in some meaningful way from basically just neural networks (the biological ones and the biologically-inspired but simplistic artificial models of them) and things like them. Crucially, representations and algorithms in the second approach don’t feature things you can point to that easily look like crisp, discrete, categorical symbols.

This divide coincided with other divides in AI and related philosophy; some of the associated buzzwords (for further exploration of your own) are "neats and scruffies", "embodied cognition" and "the symbol grounding problem" (see "the Chinese room argument" against the possibility of "strong AI"). The divide has become less consequential as time has gone on: for starters, the two flavors of cognition are not really at odds with each other - IIRC, there's a proof of the equivalence of some form of neural network and Turing machines (meaning that for any Turing machine, there exists at least one corresponding neural network that behaves the same and vice versa), and people have (probably always?) implemented subsymbolic models on/in very symbol system-y hardware and programming languages.

("What's the difference between symbolic and subsymbolic processing?")
--- End quote ---

I'm pretty sure my squishy matter can handle both symbolic and subsymbolic processing. The former is what I earn my croissants with; the latter enables me to prefer croissants over some healthy fruit salad (and hands me a bad conscience to go along with the bakery, entirely for free ... :laugh:).  But my copy of Mathematica is much better than I am at handling certain kinds of symbolic manipulation (it's still a crap physicist, though, and even a bad 'symbolic computer' - for starters, it doesn't know what to do with its abilities).

So I'm not sure whether the 'symbolic'/'subsymbolic' distinction is analogous to the 'human/machine' divide - mostly because nearly all humans can do both.
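
A toy contrast, purely my own illustration (croissant-themed, of course) - the same yes/no question answered once symbolically, with explicit human-readable rules, and once subsymbolically, with made-up weights where no single number means "croissant" on its own:

--- Code: ---
# Toy contrast between symbolic and subsymbolic processing. The weights
# and feature names are invented for illustration only.
import numpy as np

# Symbolic: explicit, discrete, human-readable rules over symbols.
def is_pastry_symbolic(item):
    pastries = {"croissant", "pain au chocolat", "brioche"}
    return item in pastries

# Subsymbolic: the "knowledge" is spread across numbers; no individual
# weight stands for "croissant" or any other crisp concept.
weights = np.array([2.3, -1.7, 0.8])   # pretend these were learned
bias = -0.5

def is_pastry_subsymbolic(features):
    # features: e.g. [butteriness, crunchiness, flakiness] as numbers
    return float(weights @ features + bias) > 0.0

print(is_pastry_symbolic("croissant"))                    # True, by rule
print(is_pastry_subsymbolic(np.array([0.9, 0.2, 0.8])))   # True, by arithmetic
--- End code ---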



--- Quote from: Mehre on 20 Apr 2017, 07:46 ---However, we know that AI here have their OS, files and whatnot. So I envision that there are two complementary memory systems, one being ordinary files and the other being memories in the AI itself. Files probably wouldn't be real memories but something more akin to a notebook in your mind, but that would be just speculation.

--- End quote ---

I'm definitely able to memorize a small notebook - just not very reliably, or quickly. And unlike other memories - smells, for example - I'd be able to access those 'data memories' nearly at will, just like opening a file in my mind ... By contrast: right now I cannot call the smell of weed to mind - but I know I'd instantly recognize it if I smelled it. So it seems I also have two different, complementary memory systems that look pretty similar to what you describe. Soooooooh ... would your hypothetical "QC-verse AI Mod 2.1 by Mehre" really be that different from a human mind? Or from a machine?

I can do anything your QC-AI can - maybe not as fast, or as reliably, but I have the ability. But I can also do what a standard computer can do.




--- Quote from: pwhodges on 20 Apr 2017, 00:12 ---Even an optical link goes through a (small) flaw in the shielding, though. 

--- End quote ---

If the EMP mostly sticks to the microwave part of the spectrum, you'd probably be fine with apertures smaller than 1 mm. From the pulse shapes I've seen, it looks like only EMPs from nukes are really delta-distribution-like, and therefore cover large parts of the whole EM spectrum. The other, non-nuclear EMP pulse shapes were significantly broader in time (and therefore more restricted in their 'Fourier-space support', i.e. their bandwidth).
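
Back-of-the-envelope, using the usual "waveguide below cutoff" rule of thumb from shielding practice: an aperture whose largest dimension is d only starts leaking badly above roughly f_c ~ c/(2d). The exact numbers depend on the aperture geometry, so treat these as order-of-magnitude only:

--- Code: ---
# Rough shielding-aperture estimate: treat an opening of largest
# dimension d as a waveguide below cutoff, f_c ~ c / (2*d).
# Order-of-magnitude only; real apertures depend on shape and depth.
C = 3.0e8  # speed of light in m/s

def cutoff_frequency(d_metres):
    """Approximate cutoff frequency (Hz) for an aperture of size d."""
    return C / (2.0 * d_metres)

for d_mm in (10.0, 1.0, 0.1):
    f_c = cutoff_frequency(d_mm * 1e-3)
    print(f"{d_mm:5.1f} mm aperture -> cutoff ~ {f_c / 1e9:7.1f} GHz")

# Roughly:
#  10.0 mm aperture -> cutoff ~    15.0 GHz
#   1.0 mm aperture -> cutoff ~   150.0 GHz
#   0.1 mm aperture -> cutoff ~  1500.0 GHz
--- End code ---

So a sub-millimetre opening sits comfortably below cutoff for most of the microwave band, which is why the "smaller than 1 mm" guess above should hold unless the pulse really does have nuke-like bandwidth.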

Is it cold in here?:

--- Quote from: Bubbles ---You cannot modify individual memories

--- End quote ---

Her memory is known to work differently from today's file systems, then.

JimC:
More shipping than the Panama Canal here at the moment...
