Twenty years ago, Cass Sunstein, in his book Republic.com, warned that information technology might weaken democracy by allowing people to filter out news that contradicts their views of the world. The situation now is even worse than he imagined. It’s not just that people can find media sources, like cable TV channels, that align with their views. Now we have platforms like Facebook actively pushing content at people. Facebook’s goal is to keep people engaged on its platform. It does this by building profiles of its users and feeding them the stories they are most likely to find engaging – meaning the stories that align with their world views.
Over the past 25 years, Democratic attitudes about the Republican party have become much more unfavorable, and vice versa. To the extent that unfavorable views are based on falsehoods, that’s harmful to our democracy.
What are our personal responsibilities as consumers of information?
First, we need to understand that Facebook constructs profiles of its users and attempts to keep them engaged by feeding them content it thinks they will like. That business model leads to the creation of ideological echo chambers.
Second, we need to understand confirmation bias: our brains are prewired to uncritically accept information that conforms to our views and to filter out information that contradicts them.
Third, we need to be skeptical. Not all information is created equal. Has the author identified himself or herself? What are the author’s qualifications? A website may look neutral, but appearances can be deceiving. Is the website affiliated with a particular cause? What do fact-checking sites like PolitiFact.com, FactCheck.org, or Snopes.com say about the story? Are the images authentic? Is the author making logical arguments?
Before reposting a story, you should:
Deliberate, particularly if the story affects you emotionally. Take the time to be a smart consumer of information.
Reveal the sources of the information.
Ensure your own claims are based on sound, logical arguments.
Hold yourself accountable by revealing your identity and qualifications.
AI Ethics
Guy Nadivi interviewed me for his podcast, Intelligent Automation Radio. In the interview I talk about "AI Ethics for Business," a free online short course presented by Seattle University. Here is a link to the podcast:
Artificial Intelligence: Implications for Ethics and Religion
In January 2020 I participated in a one-day conference in New York City hosted by Union Theological Seminary, the Jewish Theological Seminary, The Riverside Church, and the Greater Good Initiative. I was one of the panelists who discussed the implications of developments in artificial intelligence for the common good, human dignity, transhumanism, and our conception of and relationship to God. Here is a short video recap of the day:
In October 2016 the University of St. Thomas in St. Paul, Minnesota held a conference with the title, A Culture of Ethics: Engineering for Human Dignity and the Common Good. I gave one of the keynote lectures at the conference. Here is a video of my presentation:
How to Design and Lead a Great Computer Ethics Course
I began teaching computer ethics around 1994, and I have learned a lot since then about how to run a successful course. If you are a faculty member looking for practical advice, you'll find it in this 17-minute presentation, part of Pearson Education's Learning Makes Us webinar series: