
WCDT Strips 3461-3465 (17-21 April 2017)


sitnspin:

--- Quote from: Gyrre on 19 Apr 2017, 23:27 ---
--- Quote from: sitnspin on 19 Apr 2017, 07:06 ---Our memories are not recordings of what we experienced. They are the constantly evolving stories we tell ourselves about what we experienced.

--- End quote ---
The more frequently a memory is recalled, the stronger it becomes.
--- End quote ---
Stronger, but not necessarily more accurate. A memory shifts as our perspective on it changes, which it inevitably does as new experiences shape our personality and thought patterns.

Carl-E:
We know how computers work. 

We do not know how AIs work. The evolution of actual intelligence in a machine may well involve a memory system more like our own, where the data of events are strung together to form a narrative, and even with the data missing, the narrative can remain. Perhaps sectors are even re-written as the narrative is constructed and repeated, ultimately changing the data and even the narrative itself.
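
To make that concrete, here's a purely hypothetical toy in Python (a sketch of reconstructive memory, not a claim about how any real or QC AI is built): the raw event data can silently decay, while each recall re-writes the story, so the story outlives the data and drifts away from it.

--- Code: ---
import random

class NarrativeMemory:
    """Toy reconstructive memory: raw event data decays, but the
    narrative built from it persists -- re-written on each recall."""

    def __init__(self, events):
        self.events = list(events)             # raw "sector" data
        self.narrative = " -> ".join(events)   # the story told about them

    def recall(self):
        # Raw data can silently drop out over time...
        if self.events and random.random() < 0.3:
            self.events.pop(random.randrange(len(self.events)))
        # ...but each recall re-encodes the narrative, subtly changing
        # what will be "remembered" next time.
        parts = self.narrative.split(" -> ")
        if random.random() < 0.5:
            i = random.randrange(len(parts))
            parts[i] = "(hazy) " + parts[i]
        self.narrative = " -> ".join(parts)
        return self.narrative

m = NarrativeMemory(["met her", "argued", "made up"])
for _ in range(4):
    print(m.recall())   # the story survives even as m.events empties out
--- End code ---

Real memory consolidation is obviously vastly more complicated; the sketch only shows how a narrative can stay intact while its underlying data disappears.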

Note that AIs in QC are not like the AIs developed so far in our world. Our idea of artificial intelligence is at best a poor approximation of what a genuine consciousness would be. Knowing one does not even give a passing familiarity with the other!

Basically, to quote a mantra of the recent past, "everything you know is wrong".  This is not addressed to any one person here, but rather to all of us. 

[/endrant]

oddtail:

--- Quote from: de_la_Nae on 20 Apr 2017, 05:00 ---Beat me to it.

For what it's worth, while I'm confident Joe wasn't trying to be *that* way (no really, I get you aren't, mate), I'd argue, as someone from one of the minority groups in her culture that have a history of de-person-alization and violence, it is a little more important than 'nitpicky', oddtail.

--- End quote ---

I call this nitpicky mainly because a) it was clear from context that "person" implied "human", and b) the issue of AI personhood is purely fictional, so I don't think insisting on the correct wording is extremely important.

In-universe, it would certainly not be a nitpick. IRL, everyone who qualifies as a "person" happens to be human.

EDIT: and I mean, I get the potentially troubling parallels to the real world (which is why I mentioned the whole thing in the first place), but those are still just that, parallels.

Mehre:
Well, in the QC universe it doesn't seem that AIs are based on neural networks, so we can only apply general concepts here. One of those concepts is that such systems are subsymbolic and distributed, which would point to memories more akin to human ones. However, we know that AIs here have their own OS, files, and whatnot. So I envision two complementary memory systems: one being ordinary files, the other being memories in the AI itself. The files probably wouldn't be real memories but something more akin to a notebook in your mind, though that's just speculation.

Obviously, CW or the government itself would purge file-based memories, but as Jeph correctly predicted, selectively purging distributed/subsymbolic information would be a lot harder, which was the whole theme of the creepy-bot arc.

TL;DR: QC AIs could have both file-based and human-like memories.
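
As a toy sketch of that two-store idea (Python, every name hypothetical): deleting a file is clean and total, while "deleting" something superimposed on shared weights leaves residue behind and can disturb neighbouring memories.

--- Code: ---
import random

class HybridMemory:
    """Toy two-store memory: a 'file system' (discrete, easy to purge)
    plus a distributed store where every memory is superimposed on one
    shared weight vector (hard to purge selectively)."""

    DIM = 256

    def __init__(self):
        self.files = {}                    # discrete store
        self.weights = [0.0] * self.DIM    # distributed store, shared by all

    def _pattern(self, name):
        # Each memory corresponds to a fixed random +/-1 pattern.
        r = random.Random(name)
        return [r.choice((-1.0, 1.0)) for _ in range(self.DIM)]

    def store(self, name):
        self.files[name] = True
        p = self._pattern(name)
        self.weights = [w + x for w, x in zip(self.weights, p)]

    def recall_strength(self, name):
        # Correlate the shared weights with this memory's pattern:
        # ~1.0 means present, ~0.0 means absent (plus crosstalk noise).
        p = self._pattern(name)
        return sum(w * x for w, x in zip(self.weights, p)) / self.DIM

    def purge_file(self, name):
        self.files.pop(name, None)         # clean, total, easy

    def purge_distributed(self, name, guess=0.8):
        # Without knowing exactly how strongly the memory was stored,
        # you can only subtract an estimate -- leaving residue behind.
        p = self._pattern(name)
        self.weights = [w - guess * x for w, x in zip(self.weights, p)]

mem = HybridMemory()
for event in ("first boot", "the incident", "coffee shop"):
    mem.store(event)
mem.purge_file("the incident")                        # file copy: gone
mem.purge_distributed("the incident")                 # weights: residue
print(round(mem.recall_strength("the incident"), 2))  # not quite zero
print(round(mem.recall_strength("first boot"), 2))    # still ~1.0, +/- noise
--- End code ---

The point is just that the file store supports exact deletion while the distributed store only supports approximate subtraction, which is (very loosely) why a selective purge of subsymbolic memories would be so messy.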

Edit: for scientific accuracy:
Narrow AI: solves a single problem, e.g. energy-load forecasting, image classification, music generation...
General AI: more like humans, or what sci-fi shows us; can solve a great variety of problems. This division can be a bit fuzzy at times.
Animal-like, human-like, superhuman: mostly describes the "power" of an AI. In some cases we can already build superhuman narrow AIs (chess, Go, ...). Both narrow and general AI are ongoing research areas.
Conscious AI: mostly covered by pop culture and popular-science articles. There are problems with the definition of consciousness, with religious people ("oh, but it doesn't have a soul!"), and the simple problem of how you would prove it.

anahata:
To get some insight into how AIs might remember things, think of the learning machines we have now, based on neural nets. They don't store facts in anything we would call an ordered way; they build up a pattern in memory that is constantly tweaked by feedback in a long process of trial and error with random inputs. We don't even understand what any particular byte of memory contributes to the whole process. It just works, but the effect of altering any part of those memory contents is quite unpredictable.

No wonder low-level tampering with an AI's memory is dangerous.
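
A tiny concrete demo of that unpredictability (toy Python/NumPy, not how any production system or QC AI works): train a minimal network on XOR, then flip the sign of a single learned weight and watch the behaviour of the whole thing shift.

--- Code: ---
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 sigmoid network trained on XOR by plain gradient descent
# (toy settings; convergence can depend on the seed).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

for _ in range(5000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)    # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated to the hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print("trained: ", forward(X)[1].ravel().round(2))   # ~ [0, 1, 1, 0]

# "Low-level tampering": flip the sign of one learned weight. There is
# no telling from the weight itself which outputs it will break, or how.
W1[0, 0] = -W1[0, 0]
print("tampered:", forward(X)[1].ravel().round(2))   # degraded, unpredictably
--- End code ---

No single weight "contains" the XOR rule, so the damage from flipping one doesn't map onto any one stored fact -- which is exactly why low-level edits to such a store are so risky.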
