That article got this exactly right: machine-learning algorithms find ways to accomplish whatever fulfils their objective, absolutely without regard to whether a human understands how, and absolutely without regard to how a human would do it. I work on learning systems for a living, so I've seen this happen MANY times, and I've often been called in to "correct" it.
For example: an ad brokerage put a machine-learning system to work on the problem of maximizing profit on displayed ads - a fairly straightforward, reasonable requirement, yes? A few months later I got called in because someone had noticed (and threatened to sue the broker over) the way it was doing that. It had learned to exploit human prejudice, in ways that perpetuate it. A case in point: if you searched for a person with an identifiably "ethnic" name, the ads that came up alongside were usually offers to investigate that person's criminal record, while if the name was "whitebread American" the ads were about employment or academic history. This is because, when prejudiced people look up identifiably ethnic names, ads about criminal records meet their prejudiced expectations and are more likely to get a click - hence more profitable for the broker.
So I adjusted what the system could "see": instead of getting actual names to work with, it got a hash of each name - still individually identifiable, but no longer ethnically correlated. That held it for a while. Then I got called in again because it was directing ethnic users to drug-rehabilitation sites and showing them real-estate ads only in noticeably run-down neighborhoods - to the point where they almost couldn't even FIND good neighborhoods. That happened because ad revenue depended on closed sales, and the real-estate agents in the neighborhood associations (who have to approve any sale) are prejudiced as hell. This time, it turned out, the system was correlating cookies set by ads displayed on websites whose clientele was noticeably ethnic.
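To give a rough idea of the kind of substitution I mean - this is a minimal sketch, not the production code, and the function name and salt are made up for illustration - you replace the raw name with a salted hash before the model ever sees it:

```python
import hashlib

def pseudonymize(name: str, salt: str = "example-salt") -> str:
    """Replace a raw name with a salted hash: still a stable
    per-person feature key, but the ethnic signal carried by the
    spelling of the name itself is gone."""
    normalized = name.strip().lower()
    digest = hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()
    return digest[:16]  # truncated just to keep feature keys short

# The model gets only the hash, never the name; the same person
# always maps to the same key.
key = pseudonymize("John Smith")
```

The limitation is exactly what the story shows: hashing kills the direct signal, but anything correlated with ethnicity through other channels (cookies, browsing history, geography) can smuggle the same information right back in.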
And so it goes. By turns it singled out ethnicities, genders, national origins, etc. for limited access or stereotype-based treatment.
One thing to note is that at no point did I stop getting angry phone calls. It's just that, at some point, I started getting fewer angry phone calls from the legal department about lawsuit exposure, and started getting more angry calls from the sales representatives about declining revenue.
Because, the hell of it is, declining revenue was a reality. The system did exactly what people had ordered it to do and maximized profits. But as long as human beings are prejudiced bastards, it will find ways to exploit human prejudice in order to make that profit.