Insightful analogy, but of course you don't get to put someone in a medically induced coma because your girlfriend is sleeping over.
personally, i think we should be careful about drawing parallels like that, since robots and humans are so unlike each other as organisms. for example: we are our bodies and cannot be removed from them, and if we get turned off, we don't get to turn back on again. but for a robot the situation is fundamentally different; the body is hardware and the mind is software, which can migrate to any other compatible machine. likewise, getting turned off is not nearly so permanent for them as it is for us.
as a result of these differences, many of the basic instincts and feelings we have about such things as they apply to ourselves would logically be irrelevant to the machines. should they care about physical injury, when their entire shells can be easily repaired or replaced? should they care about being turned off and on, when it's something they are designed to do as a basic function? should they even care about deletion, if they can simply be restored from a backup archive?
but even all of that still assumes something akin to a human psychology at work, which the AI might not even have. if the AI is programmed not to care whether it lives or dies, is killing it still wrong?