I think you don't get "consciousness" until it emerges as the best strategy for solving a more basic problem. That seems obvious once you consider evolution, but we don't think about it in the context of consciousness very much. In biological evolution the basic problem is survival - for humans, staying fed and not being eaten, getting enough to drink without drowning, staying warm without burning, and eventually bringing forth the next generation. And all of these, ultimately, are things that every animal, from paramecia on up, relies on sensory input and physical movement to accomplish.
Consciousness in humans is solely an optimization of that process. Your brain is what transforms sensory input into physical muscle control in the service of survival, and that, as far as evolution is concerned, is its only purpose. Consciousness is a side effect. The fact that consciousness - processing so complex that it has to take into account not only the body and its state but also the processing itself - is part of the most efficient muscle-control strategy so far discovered is pretty damned remarkable.
Of course we don't think of it as muscle control any more when the objectives are communicating information, controlling machinery, constructing devices, and so on. But all of that, every bit of the "meta" activity we do, is leverage: ever greater return for ever less physical exertion of our bodies. You could say that the control strategy that involves consciousness, and in our case also sapience, is doing pretty well.
And this leads to the question: what would drive consciousness in an AI? What kind of problem can we give an evolutionary algorithm - what can we make the fundamental necessity, the drive, for an AI - such that developing sentience, even human-style sapience, is part of the most efficient solution to that problem? Something such that sentience would be discovered as a necessary part of optimizing for it?
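To make the question concrete, here's a minimal sketch of the kind of evolutionary loop in question, in Python. Everything in it is illustrative and hypothetical: the genome is just a parameter vector, and `fitness` is a trivial stand-in for whatever "fundamental necessity" we would actually pose. The open question above is precisely what to put in that function so that sentience falls out of optimizing it.

```python
import random

GENOME_SIZE = 8        # dimensionality of each candidate "organism"
POPULATION_SIZE = 50
MUTATION_SCALE = 0.1
GENERATIONS = 200

def fitness(genome):
    """Placeholder objective. The whole question is what belongs here:
    this toy version just rewards genomes near the origin, a problem
    nobody expects to require anything like sentience to solve."""
    return -sum(g * g for g in genome)

def mutate(genome):
    """Return a slightly perturbed copy of a parent genome."""
    return [g + random.gauss(0, MUTATION_SCALE) for g in genome]

def evolve():
    # Random initial population.
    population = [[random.uniform(-1, 1) for _ in range(GENOME_SIZE)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Each survivor produces one mutated offspring...
        offspring = [mutate(parent) for parent in population]
        # ...and the fittest half of (parents + offspring) survives.
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:POPULATION_SIZE]
    return population[0]

if __name__ == "__main__":
    best = evolve()
    print("best genome:", best, "fitness:", fitness(best))
```

The point of the sketch is that the selection loop itself is trivial; all the interesting structure would have to come from the fitness function and the environment it implies.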
Keep in mind that the problem of biological life - stay fed, have babies, etc., in an unpredictable and competitive world - has millions of working solutions (i.e., evolved species), only one of which involves anything like human-style sapience. It's one of the most complex problems we know of, if not the most complex, and in hundreds of millions of years our species is still the only one in millions to develop this kind of sapience. So if we pose our hypothetical evolutionary system a problem only as complex as the one that produced us, we have maybe a one-in-a-million shot at producing something like our own intelligence. This particular strategy for dealing with *that* problem is so rare that it qualifies as BIZARRE!