I put the whole "robot sexuality" thing (along with the "no mass unemployment in the wake of the singularity" thing) down to artistic license. Jeph makes his world the way it is so he can do what he wants with it.
I see no reason why an actual AI would have a sexual response (unless explicitly programmed to have one), and no reason why AIs wouldn't be set up to work 24/7 (and like it), not to mention be infinitely clonable at low marginal cost, rendering nearly all humans unemployable. But a world where those things were true wouldn't support an interesting comic like QC.
Not according to how we think it works now.
As part of my job, I work with recurrent neural networks. Mostly they do language translation, but some of them are starting to be used in video processing. Still images are handled with feedforward networks (sort of like 'reflexes': they don't require any sequential processing to identify things), but once you're trying to identify processes that take place over time rather than things that merely exist in space, you need recurrent networks, because feedforward networks have no way to represent time and changing inputs.
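To make the distinction concrete, here's a minimal NumPy sketch (the layer sizes, names, and weights are mine, purely illustrative): the feedforward layer maps each input independently, while the recurrent cell carries a hidden state from step to step, which is what lets it represent something that unfolds over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feedforward layer: the output depends only on the current input.
# No memory, so it can identify a thing in a still image but not a
# process unfolding over time.
W_ff = rng.normal(scale=0.1, size=(8, 4))

def feedforward(x):
    return np.tanh(W_ff @ x)

# Vanilla recurrent cell: the hidden state h is fed back in at every
# step, so the result depends on the whole sequence seen so far.
W_in  = rng.normal(scale=0.1, size=(8, 4))
W_rec = rng.normal(scale=0.1, size=(8, 8))

def recurrent(frames):
    h = np.zeros(8)
    for x in frames:                  # frames: a sequence of 4-dim inputs
        h = np.tanh(W_in @ x + W_rec @ h)
    return h                          # summarizes the sequence, not one frame

still = rng.normal(size=4)
clip = [rng.normal(size=4) for _ in range(10)]
print(feedforward(still))   # same input always gives the same output
print(recurrent(clip))      # depends on the order and history of the inputs
```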
There is a thing that happens with neural networks once they're beyond a certain level of complexity: you have to let them stabilize without pattern input for some part of their training time, or they don't learn. If you train them on patterns nonstop, they get stuck in a local maximum and stop learning. If you use them in production without training, of course, they don't learn anyway.
To let a recurrent network stabilize, you run it with no input (or with 'noise' input consisting of very small random values) and train with Hebbian learning or some other undirected learning algorithm plus a regularization strategy. Patterns that mimic typical runtime activity emerge, but they are undirected, and during this time regularization causes the system to "back out" of local maxima that produce inconsistencies or require extreme responses or weights to maintain. The process preferentially preserves whatever is consistent and doesn't require extreme adjustments, so most of the benefit of directed training is preserved. A minimal sketch of what that phase could look like is below.
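This sketch assumes a plain Hebbian outer-product update with weight decay as the regularizer; the network size, the constants, and the specific rule are illustrative assumptions on my part, not a recipe from any particular system.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                    # units in the recurrent net
W = rng.normal(scale=0.1, size=(N, N))    # weights after directed training

ETA = 1e-3     # Hebbian learning rate (illustrative value)
DECAY = 1e-4   # weight decay: the regularization that "backs out" of
               # weights held extreme just to prop up inconsistencies

h = np.zeros(N)
for step in range(10_000):
    noise = rng.normal(scale=0.01, size=N)   # tiny random values, no patterns
    h = np.tanh(W @ h + noise)               # runtime-like activity emerges
    W += ETA * np.outer(h, h)                # Hebbian: co-active units bond
    W -= DECAY * W                           # consistent structure survives;
                                             # extreme weights decay away
    np.fill_diagonal(W, 0.0)                 # no self-connections
```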
The result is better generalization: accuracy on training cases may decline slightly, while accuracy on testing cases usually rises. Once the two are roughly equal, resuming training usually lets you smoothly improve the system until it reaches better overall accuracy than the best you could manage before stabilization.
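Put together, the overall schedule might look something like the skeleton below. This is a hypothetical sketch: train_epoch, stabilize, and accuracy are stand-ins for whatever your actual framework provides, and the thresholds are made up.

```python
# Hypothetical schedule sketch -- the three helpers are stand-ins for
# real framework code, not an actual API.
def train_epoch(net, data):         # one pass of directed training
    pass

def stabilize(net, steps):          # the noise-driven Hebbian phase above
    pass

def accuracy(net, data):            # fraction of cases answered correctly
    return 0.0

def fit(net, train_set, test_set, epochs=50, gap=0.05):
    for _ in range(epochs):
        train_epoch(net, train_set)
        # When the net starts memorizing (train accuracy pulls well ahead
        # of test accuracy), rest it: train accuracy dips slightly, test
        # accuracy usually rises, and once the two are roughly equal,
        # directed training resumes smoothly from a better starting point.
        if accuracy(net, train_set) - accuracy(net, test_set) > gap:
            stabilize(net, steps=10_000)
    return net
```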
Researchers avoid using the word "Dream" - it's too fraught with baggage implying things that aren't (yet) true. Or at least we did until Google showed people some of the outputs produced during this process, calling it "Deep Dream." Those of us working in the field held our collective breath for a few weeks, wondering whether mobs with pitchforks and torches were about to show up, but most people didn't immediately decide we were Mucking About In God's Domain and Producing Abominations, so the apprehension has died down somewhat.
Nevertheless, it's hard to escape the conclusion that if and when we do have human-level AI, and it's implemented as a neural network, it's going to have to spend some fraction of its runtime doing network stabilization - and it's very easy to imagine that, subjectively, the experience could be described in much the same way we describe dreaming.