Michael J. Quinn, Ph.D.
  • Home
  • Public Speaking
  • Computer Ethics
    • Ethics for the Information Age
    • Contemporary Cases and Opinion Pieces
    • Presentations and Interviews
  • Parallel Programming
  • Contact

Presentations and Interviews

Ten Things You Need to Know about Algorithmic Bias

Creating an unbiased AI-driven system is much easier said than done. In this talk, Michael Quinn provides examples of AI-driven systems making biased recommendations, discusses the causes of bias in computerized systems, and demonstrates how a "fair" system may be judged "unfair" according to different reasonable metrics.

Gonzaga University, February 19, 2025

A Conversation about Companion Robots

In this lunchtime conversation with students from Oregon State University's Honors College, we discussed how interactions with companion robots could affect their human users and the proper role of government regulations.

Oregon State University, October 28, 2024

Teaching Computer Ethics

Michael Quinn answers educators' questions about computer ethics: how the field has changed in the past 30 years, what topics to cover in a computer ethics class, how to frame discussions about ethical issues, how to handle students' use of AI tools to complete their coursework, and more.

zyBooks Author Training, September 26, 2024

Teaching Students about Algorithmic Bias and Fairness

Join author Michael Quinn as he introduces the important topic of algorithmic bias. You will learn about the three types of computer bias and see how all of them have occurred in real systems. You will leave the session with compelling examples of how biased computer systems have harmed people of color and people with lower incomes. You will also learn an important reason why "fairness" is such an elusive target for system designers to hit.

Pearson "Digital Learning Now" webinar, February 8, 2024

Self-driving Vehicles: A Cautionary Tale

Elaine Herzberg was the first pedestrian to be killed by a self-driving vehicle. I summarize Uber’s effort to develop an autonomous vehicle, focusing on the engineering and management decisions that contributed to the March 18, 2018, accident. The story of Herzberg’s death illustrates some of the challenges faced by developers of new AI-driven technologies. I conclude by suggesting some practical ways that regulators can help ensure public safety as autonomous vehicles are deployed.

DevFest 2023 - Portland (December 2023)

AI Seminar, College of Engineering, Oregon State University (October 2023)

Why Universities Need AI Ethics Programs Now More Than Ever

Hope Reese interviewed me for pnw.ai, "the voice of AI in the Pacific Northwest." Here is a link to the interview:

Why Universities Need AI Ethics Programs Now More Than Ever (May 2022)

Examining Digital Ethics at Seattle University

EDUCAUSE President and CEO John O'Brien interviewed Jeffery Smith, Nathan Colaner, and me for EDUCAUSE Review. Here is a link to the article:

Examining Digital Ethics at Seattle University (January 2021)

The Economics and Ethics of Misinformation

Here is a portion of the presentation I gave at the Inaugural Symposium of the Grefenstette Center at Duquesne University:

Disinformation, Misinformation and Technology: New Ethical Challenges and Solutions (October 2020)

Twenty years ago, Cass Sunstein, in his book Republic.com, warned that information technology might weaken democracy by allowing people to filter out news that contradicts their views of the world. The situation now is even worse than he imagined. It’s not just that people can find media sources, like cable TV channels, that align with their views. Now we have platforms like Facebook actively pushing content at people. Facebook’s goal is to keep people engaged on its platform. It does this by building profiles of its users and feeding them the stories they are most likely to find engaging – meaning the stories that align with their world views.
 
Over the past 25 years, Democratic attitudes about the Republican party have become much more unfavorable, and vice versa. To the extent that unfavorable views are based on falsehoods, that’s harmful to our democracy.

What are our personal responsibilities as consumers of information?
  • First, we need to understand that Facebook constructs profiles of its users and attempts to keep them engaged by feeding them content it thinks they will like. That business model leads to the creation of ideological echo chambers.
  • Second, we need to understand confirmation bias: our brains are prewired to uncritically accept information that conforms to our views and to filter out information that contradicts them.
  • Third, we need to be skeptical. All information is not created equal. Has the author identified himself or herself? What are the author's qualifications? A website may look neutral, but appearances can be deceiving. Is the website affiliated with a particular cause? What do fact-checking sites like PolitiFact.com, FactCheck.org, or Snopes.com say about the story? Are the images authentic? Is the author making logical arguments?
 
Before reposting a story, you should:
  • Deliberate, particularly if the story affects you emotionally. Make sure you take the time to be a smart consumer of information.
  • Reveal the sources of the information.
  • Ensure your own claims are based on sound, logical arguments.
  • Hold yourself accountable by revealing your identity and qualifications.

AI Ethics

Guy Nadivi interviewed me for his podcast, Intelligent Automation Radio. In the interview I talk about "AI Ethics for Business," a free online short course presented by Seattle University. Here is a link to the podcast:

Intelligent Automation Radio: Dr. Michael Quinn, Dean of the College of Science and Engineering at Seattle University (March 2020)

Artificial Intelligence: Implications for Ethics and Religion

In January 2020 I participated in a one-day conference in New York City hosted by Union Theological Seminary, the Jewish Theological Seminary, The Riverside Church, and the Greater Good Initiative. I was one of the panelists who discussed the implications of developments in artificial intelligence for the common good, human dignity, trans-humanism, and our conception of and relationship to God. Here is a short video recap of the day:

Artificial Intelligence: Implications for Ethics and Religion

Tuning In to Ethics

In October 2016 the University of St. Thomas in St. Paul, Minnesota held a conference with the title, A Culture of Ethics: Engineering for Human Dignity and the Common Good. I gave one of the keynote lectures at the conference. Here is a video of my presentation:

Tuning In to Ethics (October 2016)

How to Design and Lead a Great Computer Ethics Course

I began teaching computer ethics around 1994, and I have learned a lot since then about how to run a successful course. If you are a faculty member looking for practical advice, you'll find it in this 17-minute presentation, part of Pearson Education's Learning Makes Us webinar series:

How to Design and Lead a Great Computer Ethics Course (October 2016)