Uninformative as fake news may be, it's shedding light on an important limitation of the algorithms that have helped make the likes of Facebook and Google into multi-billion-dollar companies: They're no better than people at recognizing what is true or right.
Remember Tay, the Microsoft bot that was supposed to converse breezily with regular folks on Twitter? People on Twitter are nuts, so within 16 hours it was spewing racist and anti-Semitic obscenities and had to be yanked. More recently, Microsoft released an updated version called Zo on the smaller social network Kik, this time explicitly designed to avoid certain topics. Zo's problem is that she doesn't make much sense.
The lesson from these experiments: Algorithms, machine learning, artificial intelligence, or whatever else you'd like to call such things, are not good at general knowledge and understanding. They can avoid a blacklist of topics, or respond in some special way to a whitelist, but that's about it. They have no underlying model of the world that would allow them to make nuanced distinctions between truth and falsehood. Instead, they rely on pattern matching against a large corpus of information that is simply assumed to be consistently true.
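To make that concrete, here is a minimal sketch of what blacklist- and whitelist-style filtering amounts to in practice. The topic lists, canned replies, and function name below are hypothetical illustrations, not Microsoft's actual code:

```python
# A toy version of the safeguards described above: keyword blacklists and
# whitelists bolted onto a chatbot. Everything here is illustrative.

BLACKLIST = {"politics", "religion"}  # topics the bot refuses to discuss
WHITELIST_REPLIES = {"weather": "Lovely day, isn't it?"}  # special-cased topics

def respond(message: str) -> str:
    # Crude tokenization: lowercase words with surrounding punctuation stripped.
    words = {w.strip(".,!?").lower() for w in message.split()}

    # Hard refusal on blacklisted keywords -- string matching, not understanding.
    if words & BLACKLIST:
        return "I'd rather not talk about that."

    # Canned responses for whitelisted keywords.
    for topic, reply in WHITELIST_REPLIES.items():
        if topic in words:
            return reply

    # Everything else falls through to generic chatter with no notion of truth.
    return "Tell me more!"

print(respond("What do you think about politics?"))      # refusal
print(respond("What do you think about the election?"))  # "Tell me more!"
```

A bot built this way can be made to dodge the word "politics," but it has no idea that "the election" is the same subject. That gap between keyword matching and actual understanding is exactly the limitation at issue.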