Ok, I just have to say that I'm getting a hint of smugness that both of the last two weeks' poll #1 results were ideas I put forward.
_____
Also, in case any of the people who were discussing how AIs learn things are still mulling it over:
I'd posit that the AGI seen here would use similar (but more advanced) machine learning techniques that we tend to use for today's ANI.
AGI= Artificial General Intelligence, ANI = Artificial Narrow Intelligence.
It's worth understanding that there is a distinct difference (as has been stated above) between storing a bank of knowledge and understanding what to do with that knowledge.
That understanding is what distinguishes an intelligence from a textbook, Wikipedia page, or other "file". The challenge is making the AI understand what it's making a decision on, when it's making the decision, what the options are, and which decisions it should be making. Often this boils down to knowing how to locate "useful" data.
There are a few techniques, and they can be either "unsupervised" or "supervised", supervised meaning the intelligence is guided toward making the correct decisions.
The basic idea though, is when trying to teach it how to do something, you give it the situation as many times as possible, and let it see the patterns. Depending on the type of situation, you might be asking it to recognise something, or to make an A/B decision, or something more complex. In general, you'd either give it the correct solution, and let it attempt to figure out Why it's correct, or you'd let it draw its own conclusions, and possibly after it's built up a number of possible patterns, tell it which ones to disregard (if any).
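To make the supervised case concrete, here's a minimal toy sketch (not any real training system, just an illustration): a perceptron is shown the same labelled situations over and over, and the repeated correction nudges its weights until it has extracted the pattern. The feature vectors and labels are invented for the example.

```python
# Toy supervised learning: a perceptron repeatedly shown labelled
# examples of an A/B decision, corrected each time it gets one wrong.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # 0 if correct, +/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Show it the situation many times; it finds the pattern itself.
data = [([1.0, 0.0], 1), ([0.9, 0.2], 1), ([0.1, 1.0], 0), ([0.0, 0.8], 0)]
w, b = train_perceptron(data)
print(predict(w, b, [0.95, 0.1]))  # prints 1 (an "A" decision)
```

The unsupervised case drops the labels: the system clusters the situations into candidate patterns on its own, and you might only step in afterwards to tell it which clusters to disregard.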
As an example, for my thesis recently, I was using object recognition software to teach an AI how to locate sharks in aerial view photographs of shallow waters. It not only had to locate them, but distinguish them from other possible objects, like boats, dolphins, people, seals, etc.
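For a rough flavour of how that kind of recognition can work (this is emphatically not my actual thesis pipeline, just a hypothetical nearest-centroid sketch with invented feature values): each candidate object pulled out of a photo becomes a feature vector, and it's assigned to whichever class's average it sits closest to.

```python
import math

# Hypothetical sketch: classify candidate objects from an aerial photo
# by comparing invented feature vectors against per-class centroids
# learned from labelled crops.

TRAINING = {
    "shark":   [[0.8, 0.2, 0.9], [0.7, 0.3, 0.8]],
    "dolphin": [[0.6, 0.6, 0.7], [0.5, 0.7, 0.6]],
    "boat":    [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
}

def centroid(vectors):
    """Per-dimension mean of a list of feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(features):
    """Label of the nearest class centroid (Euclidean distance)."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify([0.75, 0.25, 0.85]))  # prints "shark"
```

Real object recognition uses far richer features and models, but the core loop is the same: labelled examples define what "shark-like" means, and new detections are measured against that.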
In practice, an AGI would likely be doing similar things, except spread over every situation it encounters, rather than trying to solve 1 task.
Years ago, in some of the dialogue, Jeph described that the initial spark of life was discovered by chance. Scientists were looking for it, but had no idea what would create it, and still (I assume) don't know exactly what it was about that combination/arrangement of processing patterns in a neural network that created the sentience. However, unlike an organic brain, a machine brain can be "paused" far more easily, without corrupting the computation in progress. This would allow every single piece of the extremely complicated network to be mapped and duplicated.
This would mean that every AI (except possibly Spookybot) begins "life" as a cloned "seed". Variances in learning following that would then create the unique individuals that we see in the comic. It's likely that some portion of that learning would happen before putting the AI into a mobile physical body. While not an exact analogue, I'd describe it as being quite similar to human development. Initially connected to the "life support" of a mother being, the mind is developed to handle all autonomous functions (which could simply be downloaded as pre-learned subroutines), as well as the capability to Learn a huge amount later. Eventually its awareness and understanding can be developed to an infancy, where the real learning and experience can begin. I doubt you'd see robots running around as true infants, stumbling on everything, as most of those kinds of things could probably be learnt once on one chassis, then transferred as firmware to all duplicate models. Any knowledge that is guaranteed to be common could probably be dealt with in a similar way, but they may choose to learn Everything manually, to maximise the ability to create unique environmental learning, and avoid I, Robot-type clones.
Just a few thoughts from a computer scientist/electronics engineer.