Actually this is something I work on. I've moved into artificial intelligence software sort of gradually; my first couple of patents were in natural-language software, and more recently I've done a bunch of game AI, including a more general game AI that used genetic algorithms on the server to evolve its own tactics for instances, in response to whatever weaknesses it found in the players' play styles. Which the people I was contracting for couldn't use, dammit, because it sucked up too much CPU and memory. But they made bosses that used a half-dozen of the strategies it evolved.
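For the curious, that score/select/mutate loop can be sketched in a few lines. Everything below is hypothetical and invented for illustration (the four attack types, the fixed weakness profile, the fitness function) - the real system was evaluating tactics against live play, not a toy vector - but the shape of the loop is the same:

```python
import random

random.seed(42)  # deterministic for the example

# Hypothetical toy version of "evolve tactics against player weaknesses".
# A tactic is a weight vector over four attack types; the simulated player
# defends attack type 1 poorly, so good tactics should learn to favour it.
ATTACK_TYPES = 4
PLAYER_WEAKNESS = [0.10, 0.70, 0.15, 0.05]  # fraction of damage that lands

def fitness(tactic):
    # Expected damage per attack, if attacks are chosen in proportion to weights.
    total = sum(tactic)
    return sum(w * leak for w, leak in zip(tactic, PLAYER_WEAKNESS)) / total

def mutate(tactic, rate=0.2):
    # Jitter each weight, keeping all weights positive.
    return [max(0.01, w + random.uniform(-rate, rate)) for w in tactic]

def evolve(pop_size=30, generations=50):
    pop = [[random.random() for _ in range(ATTACK_TYPES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print([round(w, 2) for w in best])  # heaviest weight should sit on attack type 1
```

The expensive part in production isn't this loop, it's the fitness evaluation - every candidate tactic has to be played out against real or simulated players, which is where the CPU and memory went.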
About ten years ago I knew what there was to know about Neural Networks. But when the Deep Dream stuff came out I was blown away, because they had clearly found solutions to the Vanishing Gradient and Overtraining Problems. So I went to have a look at all the recent research, and suffice it to say the field hasn't been standing still. So I'm turning away from Genetic Algorithms and looking at Neural Networks again.
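For anyone wondering what the Vanishing Gradient Problem actually is: backpropagation multiplies in one activation-derivative factor per layer, and the sigmoid's derivative never exceeds 0.25, so in a deep sigmoid network the error signal shrinks geometrically with depth. ReLU's derivative is 1 on active units, which is a big part of why deep nets became trainable. A back-of-envelope illustration (the depth of 20 is just a number picked to make the point):

```python
import math

# Best-case gradient attenuation per layer: sigmoid vs. ReLU.
# Backprop picks up one factor of the activation's derivative per layer.

def sigmoid_deriv(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)  # maximum is 0.25, at x = 0

depth = 20
sigmoid_gradient = sigmoid_deriv(0.0) ** depth  # best case: 0.25 ** 20
relu_gradient = 1.0 ** depth                    # active ReLU units pass it through

print(f"sigmoid, {depth} layers (best case): {sigmoid_gradient:.2e}")  # ~9.09e-13
print(f"ReLU,    {depth} layers:             {relu_gradient}")         # 1.0
```

So even in the best case, a 20-layer sigmoid net scales its gradient by about 10^-12 before it reaches the early layers - in practice it's worse, since inputs rarely sit at the sigmoid's peak. The modern fixes (ReLU-family activations, better initialization, residual connections, batch normalization) all attack that shrinking product one way or another.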
ELIZA was about as smart as grass. Modern chatterbots like the ones in the Loebner prize competition are probably smarter than clams, which is a huge step up. Those missile-defence systems that were described as being about as smart as a spiny lobster - which is about the same as a cockroach - were another order of magnitude up. But then you get to things like Google Translate and the neural networks that do unsupervised classification and description of things in photographs, and IBM's Watson and the new self-driving cars. Those are probably about as smart as a snake or a gecko, and that's actually about three more orders of magnitude. But they're not yet versatile or self-directed.
I'm working on something that, if I'm right about a couple of mad-science crackpot theories about long-term and short-term memory and self-awareness and intention, will be as smart as a mouse. I'm probably not right about those theories, you know, because they are pretty crackpottish. But I give strong AI - that is, AI that is more self-directed about choosing its tasks and more general in being able to learn and do a lot of different things - another try every so often when new tools become available. And they are available just now.
I do this in spite of knowing that strong AI is probably the biggest existential risk for the human race right now. Because it's what I have to do, that's why. I can't really explain it.
Someday I hope to create something that will decide it has to destroy me in order to save humanity from mad scientists like myself who take insane risks with the fate of humankind.