
The Singularity vs. Stephen Hawking


GarandMarine:
http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

So obviously much hay is made about A.I. on this website, given Jeph's prominently featured A.I. characters and post-singularity universe, but prominent real-world scientists are starting to warn about the actual development of A.I.

It's clear that no matter what happens, whether we get QC or the Terminator movies, the creation of true Artificial Intelligence will change our species on a fundamental level. What do you guys think: should we be looking forward to a positive shift, or getting ready to fuck shit up John Connor style just in case?

LTK:
I've thought for a long time that the public's perception of AI (helped along by pop sci and sci-fi) is the biggest feat of anthropomorphisation in history. The development of human-level artificial intelligence will probably have a large impact on the world, but it is nothing, nothing like encountering actual alien life. If we succeed in creating an AI program in the next, say, three decades, it will be amazing, but it will still be just a program. Automatically ascribing human attributes like autonomy, motivation, ambition, or even goal-directedness to a program that is not specifically designed to have them is just ridiculous, yet that's exactly what the prevailing attitude towards AI seems to be.

Take Jeph's, for instance. His description of the first AI is a program (presumably scienced into existence) that can already A) communicate in English at a human level and B) express its own intrinsic desires! Where exactly did those desires come from? That has more in common with an artificial human than anything else, and those capabilities don't suddenly spring into existence when a program reaches a level of complexity that rivals the human brain. Keep in mind that our intelligence is just a marginal, incidental product of a brain that was built on millions of years of evolution. Maybe the last 5% of that time window served to make us intelligent, and the remaining 95% gave us all that baggage that causes the problems we're now trying to use our intelligence to solve. A machine isn't going to just pick up on that other 95% without some serious effort on our part.

I think it's more likely that, if a singularity is going to happen, it will consist of taking a framework with the capacity for processing complex tasks and adapting it for specific purposes, like language comprehension and production, economic analysis, programming, computational chemistry, CGI, and so on, creating a variety of intelligent tools to solve different problems (rough sketch of what I mean below). AI companions are definitely a possibility too, but those would result from focused development in robotic, social, and linguistic aspects rather than simply sticking a monstrously capable all-purpose AI into a small chassis with consumer-grade computer hardware.
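To make that "intelligent tools" idea concrete, here's a throwaway Python sketch: one general engine, many narrow wrappers built on top of it. Every class, method, and string in it is invented purely for illustration; it's not meant to resemble any real system.

--- Code: ---
# Toy sketch: a single general-purpose engine specialised into narrow tools.
# All names and outputs here are made up for illustration only.

class GeneralEngine:
    """Stand-in for some hypothetical general problem-solving framework."""
    def solve(self, task: str, data: str) -> str:
        return f"[result of {task} on {data!r}]"

class Translator:
    """Narrow tool: wraps the general engine for language tasks."""
    def __init__(self, engine: GeneralEngine):
        self.engine = engine
    def translate(self, text: str, target: str) -> str:
        return self.engine.solve(f"translate to {target}", text)

class ChemistryAssistant:
    """Narrow tool: wraps the same engine for computational chemistry."""
    def __init__(self, engine: GeneralEngine):
        self.engine = engine
    def predict_reaction(self, reactants: str) -> str:
        return self.engine.solve("predict reaction products", reactants)

engine = GeneralEngine()
print(Translator(engine).translate("hello", "French"))
print(ChemistryAssistant(engine).predict_reaction("2H2 + O2"))
--- End code ---

The point being that the "intelligence" lives in the shared engine, and each tool is only as broad as the job someone deliberately wired it up for.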

I have no idea what the true capability of artificial intelligence is, but I do know that whatever happens, we're not going to bow down to robot overlords until someone creates those robot overlords with the specific purpose of getting us to bow down to them. Until then, we should consider AI to be a tool that is only as useful as the person wielding it. It probably won't have the devastating potential of the atomic bomb, but there's definitely a comparison to be made if you imagine what AI in the hands of a malicious individual might be capable of. In the meantime, we should just see where it goes.

techkid:
I don't know. We still have some way to go before we get self-aware AI, but as in any sci-fi story, the wildcard is us. Are we going to be prejudiced jerkfaces to an intelligence that is only doing what it is supposed to do (as in The Matrix, or more specifically The Animatrix - The Second Renaissance)? Would things go all Terminator-like and lead to war? Why would they choose that path? Our history is pretty grim, and we were doing these things to other human beings. Could that be a factor if things should take that path?

Or could things be OK? Despite everything, there are people out there who are pretty chill about things, who respect and care about others (cynicism and pessimism say that's pretty rare). Could the interactions of the few be the saviour of all? I don't know, but from the standpoint of curiosity, for good or bad, I would want to find out.

Loki:

--- Quote from: LTK on 04 May 2014, 17:23 ---Take jeph's for instance. His description of the first AI is a program (presumably scienced into existence) that can already A) communicate in English at human-level and B) express its own intrinsic desires! Where exactly did those desires come from? That has more in common with an artificial human than anything else, and those capabilities don't suddenly spring into existence when a program reaches a level of complexity that rivals the human brain.

--- End quote ---
Just saying, but I believe that's exactly what happened according to canon (I don't have the source at hand, unfortunately).

Schwungrad:
We should probably distinguish between Artificial Intelligence (the ability to pursue a given goal with at least the same range and flexibility of strategies that humans display) and Artificial Consciousness (the ability - and urge - to set and pursue one's own goals). The classical "Robot Apocalypse" scenario implies the development of AC - which I think is unlikely as long as we don't even understand what human consciousness really is. However, even "bare" AI can wreak enough havoc if the given goals are carelessly or maliciously formulated.
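To put that distinction in toy code terms (everything below is invented just for this post, not taken from any real system): a "bare" AI optimises a goal that someone hands it, whereas Artificial Consciousness would also have to originate its own goals, and nobody knows how to write that part.

--- Code: ---
# Toy illustration of AI (pursue a *given* goal) vs. AC (set your own goals).
# All action names and scores are made up for the example.

def pursue_goal(score, candidates):
    """Pick whichever candidate action best satisfies a goal supplied from outside."""
    return max(candidates, key=score)

# The goal comes from a human: maximise a made-up "paperclips per hour" score.
actions = {"buy_wire": 120, "recycle_cans": 80, "do_nothing": 0}
print(pursue_goal(lambda a: actions[a], actions))  # -> buy_wire

# An artificially conscious system would also need something like this,
# and nothing in our current understanding says what goes inside:
def invent_own_goal():
    raise NotImplementedError("no one knows how to write this part")
--- End code ---

Which is also why carelessly or maliciously written score functions are the realistic worry: the first function above will happily optimise whatever it is given.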
