Why does 'Complexity' alter the fact that AIs are still machines?
And that machines (as we know them, since we have no evidence to the contrary) do not 'forget' data unless programmed (or made) to do so.
I think that the key phrase in your post is "(machines) as we know them."
AIs in QC are not machines as we know them. What's more, they are an emergent species, and thus may well share some characteristics with biological species. But, as you say, we have no real evidence, so we are left to speculate.
Methinks that Tova made the best argument. I'd leave it to the resident experts (Morituri, Mehre) who actually work on that stuff to flesh this out, but I'm pretty sure that Bubbles is not just an upscaled version of the desktop PC I'm typing this post on, but rather something on another level altogether - something that operates in a completely different fashion.
The resident CS'ers can probably give us exhaustive quantitative definitions of measures of 'complexity', but my argument is that this PC (RAM machine?) of mine is fundamentally no different from its earlier incarnations. None of those incarnations will one day say, "Case? About that integral you're trying to compute ... I don't think you'll find a closed-form solution. But maybe you don't need one. Why don't you try a Taylor expansion on that potential, and we integrate each order separately? You're already in a semi-classical approximation, so stopping at a saddle-point approximation probably won't lose you crucial information. Why? Let's say I have a hunch ..."
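(Tangent: the trick my imaginary helpful machine describes is perfectly mundane once a human has thought of it. A minimal sketch in Python/SymPy - the integrand is an arbitrary pick of mine, not anything from the comic or this thread:

```python
import sympy as sp

x = sp.symbols('x')
# A potential without a convenient closed-form antiderivative;
# the function itself is an arbitrary choice for illustration:
V = sp.exp(-x**2) * sp.cos(x)

# Taylor-expand V around x = 0 up to 6th order and drop the O(x**6) tail
V_series = sp.series(V, x, 0, 6).removeO()

# Each term is now a plain power of x, so integrating order by order is trivial
approx = sp.integrate(V_series, (x, -1, 1))

# Sanity check against direct numerical quadrature of the original integrand
reference = sp.Integral(V, (x, -1, 1)).evalf()

print(f"term-by-term : {approx} = {float(approx):.4f}")
print(f"numeric ref. : {float(reference):.4f}")
```

Any of our machines will happily grind through this once told to; none of them will ever suggest it.)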
It will never learn on its own. It will never make new discoveries based on stuff it already understands. Not only because it cannot 'understand' in any meaningful way, but because there's nothing there that would make it try to, just for the lulz. It's just gears and levers; there's nobody home, no light on behind the windows.
Bubbles is something on a different level entirely. Likewise, human memory storage, and the things our minds can do with that info, are on an entirely different level from our current machines.
There are current guesstimates that one of our brains could probably store the entire internet, bit for bit. Yet forgetting stuff appears to be a crucial part of our brains' functioning, rather than being a design flaw.
To use your example above, the only evidence we have to date (Emily in Bubbles' mind) is that AIs are like those savants you discuss above. They have total recall, unless acted upon by an outside influence...
No, what we have is evidence that their memory can be erased by outside influence, not that that's the only way data might vanish from their memory. "Absence of evidence != evidence of absence."
As to the savant: I point again at the part where that guy can't tie his shoelaces ...? His 'abilities' are not an indication of enhanced functionality; they are an indication that a crucial regulation system in his head is malfunctioning. Every one of us has that 'ability', potentially. But being in a state where we could utilize those abilities would also likely mean having crippled ourselves, and having crippled other vital cognitive abilities. (Trade-offs ... !!!)
If I understand you correctly, (part of) your view seems to be that our limits in memory recording, storage and retention are regrettable weaknesses of an imperfect design. My argument is that maybe those so-called 'weaknesses' are the reason why we are highly functional - the keyword being 'trade-offs' - and that things with capabilities similar to ours would have to deal with the same kinds of trade-offs that shaped the way our grey matter works. (*)
But that does not alter the fact that they are not biological entities, and as such the same arguments can't be put forward to compare them against the way we 'keen, neat humans' work.
I'll leave it to the CS experts (e.g. Morituri, Mehre) to flesh out my argument (yes, I'm shameless like that) - but there are constraints on 'information processing thingies' that hold independently of their physical realization (squishy grey stuff, silicon, or "insert tech here"). The physics, mathematics and computer science in the QC verse are still the same as ours.
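One concrete example of such a substrate-independent constraint (my pick, not anything established in the thread): Landauer's principle, which puts a floor on the energy cost of erasing information no matter what the hardware is made of. A quick back-of-the-envelope in Python:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact by SI definition)
T = 310.0            # roughly body temperature, in kelvin

# Landauer's principle: erasing one bit of information dissipates at
# least k_B * T * ln(2) of energy - whether the bit lives in squishy
# grey stuff, silicon, or "insert tech here".
e_min = k_B * T * math.log(2)
print(f"minimum energy to erase one bit at {T:.0f} K: {e_min:.2e} J")
```

Forgetting, in other words, isn't free for anybody.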
So yes, I'd generally agree with your argument - but I think it only goes so far. You can say a lot about the world based solely on the knowledge of some very few, but crucial, constraints.
I'd guess that total recall plus human-grade capabilities of meta-cognition (thinking about our thinking) and association would quickly fill up any storage system, however advanced (and we haven't talked about pron yet ...). When I go read an article on Wiki, I associate stuff with the information I look at. I make connections to related topics - not unlike a hyperlink to another Wiki article. Now imagine something that goes through Wikipedia and gratuitously throws in more hyperlinks between articles every time it pays a visit - but it never erases an article, or a hyperlink. In short order, you'd have more hyperlinks than text.
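To put toy numbers on that (every figure below is a made-up assumption of mine, chosen only to show the growth rates, not to model Wikipedia):

```python
# Toy model: the text never grows, associations only ever get added.
articles = 1_000
words_per_article = 500
total_words = articles * words_per_article   # fixed amount of text

links = articles             # start with roughly one link per article
links_per_visit = 3          # each reading adds a few new associations

visits = 0
while links < total_words:   # links are never erased, so growth is monotone
    links += links_per_visit
    visits += 1

print(f"after {visits:,} visits: {links:,} links vs {total_words:,} words")
```

Steady accumulation with no erasure beats any fixed budget eventually; total recall plus ever-denser association only makes the slope steeper.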
(Edit: maybe relevant article - the gist of it, apparently, is that while there are estimates as to the storage capacity of our brain (LARGE!), there's also a reason why we're not likely to run out of space: when we form memories that are similar to existing ones, the one that is recalled less often becomes weaker over time. Our brains don't even try to retain as much information as they can. Furthermore, it appears that the real bottleneck, and a theory as to why our brain's design includes forgetting stuff, is not storage capacity but writing speed.)
There's a reason why we forget things.
As to the way 'keen, neat humans' work: the fact that we're anything but keen and neat - there are even primate relatives of ours who beat us at certain cognitive tasks (*) - yet still manage to effortlessly be the hottest shit around should make one think, methinks?
(*) Another example: chimps blow us out of the water at certain working-memory tasks. (Disclaimer: I've seen the vid of that experiment, and I can manage three to four numbers. IIRC, a researcher commented that human children sometimes manage five. Chimps do the full ten numbers - again and again and again, without any apparent effort.) I'm neither a biologist nor a neurologist, but I guess when you throw yourself from one tree to another 20 m above ground, it's kinda handy to be able to memorize the position of any useful branches at a glance. We don't have that ability - which does seem kinda limiting, at first glance.
OTOH, we build rocket ships that fly to the Moon, and discuss the nature of the mind. Chimps seem kinda limited in that department. The point is that better memory isn't necessarily a more useful ability to have, in the sense that it increases overall functionality - and since every design has inherent trade-offs, 'more useful to have than ...' tends to be a deal-maker, or deal-breaker.
That's a tendency in nature that is often overlooked in SF writing: progress doesn't mean that stuff just "gets better all the time, in all respects". That thinking is an artifact of mid-20th-century tech optimism.)