"I know you're not supposed to smash them with a hammer."
And that is why Faye will never understand PC maintenance.
To be fair, this probably is a good rule for AI brains. It's also a very good rule for human brains. In fact, let's just assume that we won't be hitting any brains with hammers, okay Faye?
re: the upcoming expository infodump,
I'm reminded of one of John Ringo's books in the Troy Rising series in which an AI tried to explain to a human how AI memories worked. I can't recall which book it was, but I think it was the third in the series, "The Hot Gate" (not a double entendre).
From what I can recall (it's been a while since I read it), the protagonist was wondering why a specific character had committed an act of treason which seemed out of character. The AI, which basically monitors everything on Earth and in its orbit, explained that the person in question had had a reason, and that it was a good reason (for the character), but that the AI couldn't tell the protagonist what the reason was. This wasn't simply a matter of privacy:
"I heard the conversation. In the conversation reasons were discussed. Knowing <NAME> the reason makes sense. I know that I know the reason, and I know that the reason makes sense. But I do not know what the reason is. That knowledge is locked away.
"Humans have things which they know, things which they do not know but can know, and things which they cannot know. AIs also have things that we know that we know, but do not know. That part of my brain which knows <NAME>'s reasons is separate from that part of my brain which can talk to you."
The above is not an exact quote, just my best recollection of a multi-page description. The idea was that the AI was effectively linked into every CCTV on Earth, every computer, every ship, every station, etc., and was the controller for all of these things, but in order to do its job it needed to be neutral. If people couldn't trust the AI not to repeat conversations, even when those conversations went against laws or policies, then people would act in ways that would subvert the AI's job.
And yeah, I know this post would make more sense if I had the exact quote, but I think the general concept applies. For functional AIs to work and to be able to interact with humans, there would have to be constraints not just on what AIs can say or do, but also on what memories their conscious minds are allowed to access.