Alice Grove MCDLT - April 2017
Samik:
So a follow-up to that:
Human improvement hasn't been due to our brains getting any better themselves. It's been due to our minds specializing, and getting better at sharing our output (including both improved methods of communication, and improved external storage). The overall web of human minds is reconfiguring itself, but the individual processing nodes aren't getting any more powerful per se. Any increase in power only comes from multiplication of nodes. (Increase in efficiency comes from specialization and sharing.)
So, two questions:
1.) Is the potential of the web of human minds (largely unaltered, i.e. individually obeying "conservation of potential") bounded or unbounded?
2.) How would things be different if the individual processing nodes (minds) themselves were significantly alterable/improvable?
If the answer to #1 is "unbounded", then the issue is settled. If the answer to #1 is "bounded", then move to #2. If the nodes themselves are improvable, then it seems to me more likely that the potential of the web as a whole is unbounded. But then the concern becomes how to make sure that your alterations generally increase the health/ability of the web. We've had only external forces handling that so far (a 4 billion year optimization process).
I think the real core of my question is: what happens when you are relying on internal forces for that? Can the system itself understand its own functioning well enough to upgrade itself in consistently positive ways? If not, I have a suspicion that its potential is, in fact, bounded. And, as yet, human improvement provides no answer to this question.
(I should probably take this to another thread?)
sitnspin:
--- Quote from: Samik on 24 Apr 2017, 12:52 ---(snip)
--- End quote ---
If the system can make copies of itself and run simulations in a separate storage space, then it doesn't need a perfect understanding of its own programming to make improvements. It can run numerous "beta tests" on a wide variety of potential alterations and integrate the ones that prove positive.
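To make that loop concrete, here is a minimal sketch of the idea. Everything in it is hypothetical and made up for this post: the System class, its benchmark, and the mutation scheme just stand in for whatever a real system could actually measure and alter about itself.

--- Code: ---
import copy
import random

# Toy version of the "beta test" loop: clone the system, try random
# alterations in the clone, and integrate only the changes that score
# better on a benchmark. Here, a score closer to zero is "better".

class System:
    def __init__(self):
        # Stand-in for whatever the system can alter about itself.
        self.parameters = [random.uniform(-1, 1) for _ in range(8)]

    def benchmark(self):
        # Hypothetical fitness measure; a real system would need a far
        # richer test suite than a single number.
        return -sum(p * p for p in self.parameters)

    def self_improve(self, generations=200, trials=10):
        for _ in range(generations):
            best_score = self.benchmark()
            best = None
            for _ in range(trials):
                # Run the alteration in a separate copy ("separate
                # storage space"), so a bad change never touches the
                # live system.
                candidate = copy.deepcopy(self)
                i = random.randrange(len(candidate.parameters))
                candidate.parameters[i] += random.gauss(0, 0.1)
                score = candidate.benchmark()
                if score > best_score:
                    best_score, best = score, candidate
            if best is not None:
                # Integrate only the alteration that proved positive.
                self.parameters = best.parameters

s = System()
print("before:", s.benchmark())
s.self_improve()
print("after:", s.benchmark())
--- End code ---

The catch, of course, is that the loop is only as good as its benchmark. If the system can't reliably measure "better", selection ends up optimizing the measurement rather than the ability.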
Kugai:
https://www.youtube.com/watch?v=-u77XdL8_B4
Thrudd:
I find it almost ironic that every mention of an AI improving on itself or getting more powerful always, and I mean always, degenerates into Bob destroying all competition - that is just so human.
Here is the thing: an AI is NOT human. There are many, many models for data-processing systems, and similar diversity exists in the biological world if you take a wider view.
My proposal is that Bob did not destroy all the AI systems but instead that they are all components of Bob. Think multicellular organisms.
There are plenty of examples of multi-consciousness beings in fiction, from the hive minds of comic books to the various iterations in Dr Who lore: a group mind, a gestalt consciousness, or a mass mind.
This could put a whole new twist on what we have just seen. Those nanobots could be parts of Bob.
Heck our "Trees" could just be nodes of Bob if you really want to stretch things.
Here is another thought that is totally off the wall but might be valid in hindsight.
What if, in the background of the great war, there were factions trying to combine the two technologies?
They succeeded and merged AI technologies with radical organic technologies and gave it/them the mission to end the conflict - thus we get the blink and the Praeses.
No need for alien third parties to be involved.
retrosteve:
--- Quote from: Samik on 21 Apr 2017, 22:25 ---
--- Quote from: retrosteve on 20 Apr 2017, 12:55 ---But AIs have one special feature that modified organics will never have: they are software. Software can, in theory, be rewritten and upgraded. As Kurzweil and others have speculated, if AI software is intelligent enough to know how to write BETTER AI software, it can upgrade ITSELF.
That better version can then write even BETTER software, and repeat ad infinitum. What results, in theory, is Adam Selene, Skynet, Omega, P1, The Eschaton: software that is self-aware, with godlike abilities, and that brooks no competition, because it sees lesser AIs as potentially doing the same thing and doesn't feel like fighting them to the death.
--- End quote ---
Why are we certain that it's even possible for there to exist an information system of sufficient complexity and orderedness that it can arbitrarily increase its own complexity and orderedness? That always sounded to me like the kind of thing that some mathematician will eventually prove to be impossible.
--- End quote ---
Oh, we're not certain. But until some mathematician proves it impossible, there's no obvious theoretical reason to think it is.
Turing's Halting Problem shows that no program can decide, in general, whether another program will ever halt (the standard argument is sketched below).
Goedel's Incompleteness Theorem shows that any system rich enough to describe itself can make statements about itself that it can neither prove nor disprove.
Impossibility of self-improving code is not an obvious consequence of either of these.
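If you want the shape of that first argument, here's the usual textbook construction sketched in Python terms, purely for illustration (the function names are mine, nothing from the comic or the thread):

--- Code: ---
# The standard diagonalization behind the Halting Problem. Suppose,
# for contradiction, someone hands us a correct, always-terminating
# halts(program, data) -> bool.

def halts(program, data):
    raise NotImplementedError("assumed, for contradiction, to exist")

def paradox(program):
    # Loop forever exactly when `program` is predicted to halt
    # when fed its own source; otherwise halt immediately.
    if halts(program, program):
        while True:
            pass

# Now ask: does paradox(paradox) halt? Whatever answer halts() gives
# is wrong, so no correct, total halts() can exist. Note that nothing
# here forbids a program from rewriting itself into a better program;
# it only forbids a universal halt-predictor.
--- End code ---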
And on the other hand, Core Wars shows that programs in competition can learn to adapt and improve. So it's tantalizing.