Trust is the Critical Competitive Factor: Growing Digital Ethics in Practice

Released Thursday, 3rd December 2020

On this podcast, we’re joined by Tim Gordon, founding partner at Best Practice in AI, a research firm that helps organizations use AI to build competitive advantage. While it might seem as though there is little or no guidance around digital ethics, for those familiar with the space, like Tim, the problem is actually that there are hundreds of frameworks to choose from. However, these frameworks are starting to coalesce around six themes: fairness and inclusivity, explainability and transparency, safety, social benefit, responsibility, and accountability. Accountability is one of the key responsibilities of any board, so it’s vital that boards are capable of handling the legal, regulatory, operational, and reputational questions that arise from an increasing dependency on automated decision-making systems.

Most importantly, Tim highlights that trust is the key factor that will enable competitive advantage for AI-driven businesses in the future: highly trusted organizations will find it easy to obtain the data they need to power automated decision-making systems, while low-trust businesses will face growing barriers in the form of reputational costs, regulatory frameworks, or sheer operational cost to obtain the data they need to function.

Best Practice in AI has worked with the World Economic Forum to develop a series of checklists boards can use to ensure they’re prepared to raise the right questions at the right times to challenge their operational teams, with ethics at the heart of the guidebook. Their next step is to build a new framework for C-level leaders to think through the nitty-gritty operational questions they’ll need to face in order to live up to the board’s expectations. While most companies are still at an early stage in their exploration of AI, regulatory frameworks are already starting to emerge (GDPR being a frontrunner example) that will begin to reshape the technology landscape within Europe and beyond. Boards need to get to grips with the regulation on the horizon, not only within their own region but also the international cooperation and competition factors at play, to ensure they’re prepared to continue innovating.

Tim outlines three basic problems with AI: the technology not working as designed, the data used to train algorithms reflecting existing societal biases, and the new business models that AI enables. Tim believes the first two categories are indeed ethically problematic, but they echo ethical risks that industry already faces. It’s really the new business models that give rise to ethical problems unique to the digital age, such as mass surveillance through personalization, the attention economy, and the polarization of echo chambers as personalization forces us down increasingly narrow paths. The more accurate our automated decision-making systems become, the more accurately they reflect our existing societal structures, with all their flaws: this raises new questions not only about the usefulness of automated decision-making systems, but about the systems we use to govern broader society.
