The first problem is to rigorously define love, after three thousand years spent bickering over how to define it
colloquially.
Also, while I was addressing Mad Cat earlier, I missed this diarrhea of the keyboard hiding under the filk. I can smell the philosophy degree from here.
In all seriousness, the argument that digital systems are only capable of moving data around, performing arithmetic, and comparing digital values flies in the face of chaos theory and emergent behaviour. As soon as you have more than one digital processor operating asynchronously, you have chaos. As soon as you have a source of data feeding a single digital processor that is derived from a chaotic source, you have chaos, and with chaos you get emergent behaviour. Emergent behaviour like emotions.
"But Cat," I hear you say, "multi-core processors have been around for years and work just great." Yes, they do... with synchronization mechanisms in both hardware and the OS. As soon as you start investigating cluster OSes, MPI, OpenMosix, etc. where computers connected only by network connections, yet have to cooperate on large problem sets, you realize an appreciation for the need for synchronization mechanisms and get an idea for how weird computers can behave when things occur in a an unusual sequence.
Why would chaos become anything we'd recognize as emotion? You're literally suggesting here that sentience will arise from a random malfunction that doesn't aid function, and aiding function is the only reason anyone would reproduce code that doesn't work as expected. What you're suggesting is akin to mammals walking fully-formed out of the primordial sea under conditions more favorable to algae.
"But Cat," I hear you say, "no digital system can generate chaotic data." Au contrair, I say to you. PC northbridge chipsets and CPUs have, for a long time, featured devices with that very purpose in mind. They're called thermistors, tiny resistors that change their resistance in the presence of different temperatures, and analogue to digital converters with a high level of precision. By passing a small voltage, even one known a priori with a high level of precision, through that thermistor, there is no real, determiniastic way to predict what voltage will come out the other end, since it depends on the temperature of the thermistor at the time of the measurement. If you then feed that voltage into a high-precision ADC, you get a sequence of digital bits which represents that voltage as measured. The thing is, if the thermistor is of a relatively low quality, the thermistor will have very coarse fine-grained behaviour. A tiny temperature change in one temperature regime will have a large effect on the measured voltage, while a similarly tiny change of temperature in another temperature regime will have a similarly tiny effect on the measured voltage. And, the sizes of these effective changes in measured voltage can change over time.
What I'm saying is that while the most significant bits in the ADC output might be perfectly predictable (if the CPU's been running for A time under Y load, then its temperature should be Z and the ADC bits will be 0011010011101XXX), the first 13 bits might be predictable with a high degree of certainty, assuming those preconditions are known with sufficient precision, but the last three bits of the 16-bit ADC output will be utterly chaotic and unpredictable. For security purposes, just pick out the last bit of several sequential ADC measurements and you can amass a HUGE storehouse of genuinely random digital bits. In the parlance of digital computational hardware, this is a hardware RNG, or Random Number Generator. This is true randomness, not the pseudo-randomness of a deterministic random number generator algorithm, which is completely predictable once the initial "seed" value is known. There is literally no mechanism in physics whereby the value output by a hardware RNG can be predicted. Thus, if your idealized computational arithmetic operations are fed these RNG values, the whole system takes on the character of a chaotic system.
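A quick sketch of both halves of that claim, with os.urandom standing in for the hardware sensor since I obviously can't read your northbridge from here: harvest the least significant bit of each sample, then contrast that with a seeded pseudo-RNG, which spits out the identical sequence every time you reuse the seed.

import os
import random

# (1) Harvest the last bit of each "ADC reading" -- os.urandom is my stand-in source.
samples = os.urandom(64)                         # 64 stand-in readings
bits = [b & 1 for b in samples]                  # keep only the least significant bit
print("harvested bits:", "".join(map(str, bits[:16])), "...")

# (2) The pseudo-random contrast: same seed, same sequence, every single time.
rng_a = random.Random(1234)
rng_b = random.Random(1234)
print([rng_a.randint(0, 9) for _ in range(8)])
print([rng_b.randint(0, 9) for _ in range(8)])   # identical to the line above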
And don't even get me started on starting conditions: computer chaos was first discovered in supposedly deterministic weather prediction software, when the same simulation was restarted from mid-run points using starting conditions taken from the printouts of earlier runs, and the results diverged wildly. Your idealized computing device might only be capable of moving data around, performing arithmetic upon it, and comparing digital values, but that's only in the idealized world. Robots in the QCverse, just like actual electronic digital computing devices in our world, have to operate as embodied real-world hardware, where the idealized rules can be broken.
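The usual textbook illustration of that story is the Lorenz system, and you can replay it in a dozen lines (my own sketch, crude Euler integration and all): start two runs a millionth apart and the "forecasts" have nothing to do with each other a few thousand steps later.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One crude Euler step of the Lorenz equations.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)   # the "restarted from a rounded printout" run
for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step}: run A x={a[0]:+8.3f}, run B x={b[0]:+8.3f}")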
I'm sorry. Before, I was using the word "deterministic" as though I were talking to someone who actually knew what it meant, rather than to someone who uses it as a blanket term for anything that goes against pop-chaos-theory woo. If there's any randomness or pseudorandomness, different results from the same startup conditions, even occasionally vastly different results, are to be expected. And even if there isn't, yes, occasional malfunctions are to be expected. However, you're not going to see certain kinds of patterns spontaneously arise and persist without environmental pressures tending to favor them. That's so far from proper chaos theory, it would be like Newton feigning the hypothesis that the planets were moved by myriad literal, invisible hands of God.
No, all that is just a long-winded way of saying "computers malfunction in all these ways, and if they malfunction enough, they might become real boys!" (Also that some systems set up sources of true randomness, but the numbers so obtained aren't going to make the machine do anything it isn't programmed to.) Even if this were possible, what you're describing isn't "artificial intelligence" in any real sense, but just intelligence that happens to pop up near a computer, like a Godzilla for the information age. You're anthropomorphizing the programs we have in a way that's just not supported by anything; why would an agent arising from malfunctions have meaningful access to the "deterministic" algorithms (many of which are, of course, randomized, with a pseudo-RNG or a physical one, but "deterministic" in your sense) of the idealized computer that the physical computer was designed to run as faithfully as possible, and from whose faithful running most of its power in society stems? Machinery approaching as closely as possible an "idealized," "deterministic" computing device is what Momo, Winslow, Pintsize, and the cute robot clerks all appear to run on, since if not, they couldn't be faithfully transferred between chassis as they are.
The part in bold was my point: AI research is ongoing, and people do try programming learning behaviours and giving them plenty of latitude. That is the purpose. Everything else you said there assumes it isn't done, but then you mention the one place where it is done.
The fact that you think you have to tell me this is exactly why I say you've missed the point.
And it will still probably be an accident...
An accident only in the broadest sense, that an exploration into the nature of sentience or a large-scale simulation of the human mind might yield better results than expected. I don't buy that it will come from the kind of "evolution" Kyrendis was describing.