S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants)

S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants)

Released Tuesday, 5th September 2023

This week, I welcome philosopher, author, & AI ethics expert, Reid Blackman, Ph.D., to discuss Ethical AI. Reid authored the book, "Ethical Machines," and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.

In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest for data to train ML/AI models can lead to privacy violations, particularly at BigTech companies. We touch on many concepts in the AI space, including: automated decision making vs. keeping "humans in the loop;" combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers.

We end by highlighting his HBR article - "Generative AI-xiety" - and discuss the 4 primary areas of ethical concern for LLMs: 

  1. the hallucination problem; 
  2. the deliberation problem; 
  3. the sleazy salesperson problem; & 
  4. the problem of shared responsibility

Topics Covered:

  • What motivated Reid to write his book, "Ethical Machines"
  • The key differences between 'active privacy' & 'passive privacy'
  • Why engineering incentives to collect more data to train AI models, especially in big tech, pose challenges to data minimization
  • The importance of aligning privacy agendas with business priorities
  • Why what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
  • Automated decision making: when it's necessary to have a 'human in the loop'
  • Approaches for mitigating 'AI ethics fatigue'
  • The need to back up a company's stated 'values' with actions, and why there should always be 3-7 guardrails put in place for each stated value
  • The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
  • Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
  • Reid's advice for technical staff building products & services that leverage LLMs

Resources Mentioned:

Guest Info:



Privado.ai
Privacy assurance at the speed of product development. Get instant visibility w/ privacy code scans.

Shifting Privacy Left Media
Where privacy engineers gather, share, & learn

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Copyright © 2022 - 2024 Principled LLC. All rights reserved.
