Jul. 25th, 2020

I rarely agree with Diane Francis' columns but I know a lot of people do. Here's the one that appeared in yesterday's paper:

https://www.seaforthhuronexpositor.com/diane-francis/diane-francis-the-disturbing-world-of-deepfakes/wcm/09a248a0-6145-4cca-a135-49e55615fe88

And here's what I think. Yes, fake news and doctored photos, audio and video are problems. But I don't entirely agree with the solutions she proposes. She states that "A fake filter, or authentication process, along with new laws must be created to protect the public, consumers and voters from such fakes."

I'll deal with the second part of her proposal first: new laws. And yes, laws might be good. I mean, what kind of a society are we if we don't protect our weakest or most vulnerable members? But laws tend to be of limited use out in cyberspace, which doesn't respect national boundaries. International agreements? Even better, but even harder to enforce. And in cases where they do get enforced, the resulting penalties tend to be a drop in the bucket or a slap on the wrist for the rich and powerful, but a Pyrrhic victory for more disadvantaged groups.

Now what about the fake filter? I guess she's subscribing to that old adage that if you're not part of the solution, you're part of the problem. But in this instance, I disagree. The trouble is that in her lexicon, "fake filter" seems to refer exclusively to electronic screening processes, which of course may be malevolent as well as benign. She mentions that "social media giants like Google are staging contests to come up with software antidotes." Hmmm... that sounds to me suspiciously like putting the fox in charge of the hen house and setting the cat amongst the pigeons!

What makes the best bullshit-detector? Most likely a bull, I should think! And who has the most powerful incentive to sift out all the nasty stuff that is injurious to human health and well-being? A human, preferably one with a sufficient level of intelligence and reasoning power to identify the most egregious examples of falsehoods in truth's clothing and refer the ones that look or sound or smell suspect to experts in the appropriate field.

Artificial intelligence may have come a long way over the years, decades and centuries, but we shouldn't confuse a web-crawling search engine with a human mind. Nor should we assume that the geeks designing the filters and authentication processes have no personal or corporate axe to grind. In the hands of a spin-doctor, outrageous statements can all too easily be recast as "key messages" or "expert advice".

So while electronic tools may indeed have a legitimate role to play, the most crucial element by far is the human one. We need better media literacy training, and we need to constantly upgrade our skills and awareness, or at least know where to turn to address the gaps in our knowledge. While avoiding absurd conspiracy theories, we can still cultivate a certain reasoned scepticism and adapt our thinking as new information or hypotheses emerge.

That's not easy during a pandemic like this one, when so much of our information is coming from online sources and even the state of expert knowledge is very much incomplete. We just have to do the best we can. The future of human civilization depends on it!