Big Tech

A Technology, Government and News podcast

Best Episodes of Big Tech

In this episode of Big Tech, co-hosts David Skok and Taylor Owen discuss how our understanding of the impacts big tech has on society has shifted over the past year. Among these changes is the public’s greater awareness of the need for regulation in this sector.

In their conversation, David and Taylor reflect upon some of the major events that have contributed to this shift. The COVID-19 pandemic highlighted the need for better mechanisms to stop the spread of misinformation, and it showed that social media platforms are capable of quickly implementing some measures to curb it. However, the Facebook Oversight Board, which their guest Kate Klonick discussed in season 1, is not yet operational, and won’t be until after the US presidential election; even then, its powers will be limited to appeals rather than content oversight.

In July 2020, the big tech CEOs testified in an antitrust hearing before the US Subcommittee on Antitrust, Commercial and Administrative Law. “That moment,” Taylor Owen says, “represented a real turning point in the governance agenda.” The growing big tech antitrust movement shows that lawmakers, now better prepared and with a clearer understanding of the issues, are catching up to big tech. The public is starting to recognize the harms alongside the benefits of these companies’ unfettered growth. In season 2, Matt Stoller spoke with David and Taylor about monopoly power, and how these modern giants are starting to look like the railroad barons of old.

From diverse perspectives, all the podcast’s guests have made the same point: technology can be a net good for society, but its benefits do not excuse its harms. Appreciating the many benefits that platforms and technology bring to our lives does not mean we can give them free rein.
As Taylor explains, “When we found out the petrochemical industry was also polluting our environment, we didn’t just ban the petrochemical industry and ignore all the different potential positives that came out of it. We just said you can’t pollute any more.” With the technology sector embedded in all aspects of our democracies, economies and societies, it’s clear we can no longer ignore the need for regulation.
Biotechnology — the use of biological processes for industrial and other purposes, especially through genetic manipulation of micro-organisms — is a field experiencing massive growth worldwide. For many decades, advances in biology were made in large academic or corporate institutions, with entry to the field restricted by knowledge and financial barriers. Now, through information sharing and new means of accessing lab space and equipment, a whole new community of amateur scientists is entering the molecular biology space. The emergence of this growing do-it-yourself “biohacker” community raises ethical questions about what work should be allowed to proceed.

In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Ellen Jorgensen, a molecular biologist and Chief Scientific Officer at Aanika Biosciences. She is an advocate for democratizing biotechnology by giving more individuals access to lab space and equipment. Jurisdictions are taking different approaches to biotechnology, with some, such as the European Union and Africa, being more restrictive than others, such as China. What makes the fragmentation of governance surrounding genetic modification different from fragmentation in internet or tech governance is that biotechnology’s raw material is a global, interconnected web of life. A biological modification can have unintended, even disastrous, impacts worldwide. For example, says Jorgensen, “What I’m concerned with is things like gene drives, which is a variety of CRISPR gene editing that [is] self-perpetuating. … So, 100 percent of the offspring, where one of the parents has this gene drive, all have the gene drive. So, it can spread through a population, particularly one with a short lifespan, like mosquitoes, within a very short period of time.
And here, for the first time, we have the ability to potentially wipe out a species.” As Jorgensen points out, with such high stakes, we have an “inherent motivation to regulate.” Working together on a global set of standards, and setting aside their own ethical or moral understandings to find a solution that works for everyone, will present a challenge for nations.
Online advertising and social media platforms have had a major impact on economies and societies around the globe. Those impacts are happening in retail, with the shift in spending from brick-and-mortar stores to online; in advertising, where revenues have moved from print and broadcast to online social platforms; and in society more broadly, through algorithmically amplified extremism and hate speech. The big tech companies at the centre of these shifts have little incentive to change the nature of their operations. It now falls to nations around the globe to find ways to regulate big tech in the face of what many view as a market failure. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak to Damian Collins, a British member of Parliament and former chair of the House of Commons Digital, Culture, Media and Sport (DCMS) select committee. As chair of DCMS, Collins led the investigation into Cambridge Analytica’s role in the Brexit referendum. He was also involved in the creation of the International Grand Committee on Disinformation and “Fake News.” Collins doesn’t blame the tech giants for their inaction, but rather sees the problem as governance policy that has lagged behind. Policy needs to catch up and ensure citizens are protected, just as it has in other complex global markets, such as the financial industry. International cooperation and information sharing enable nations to take on the large global tech companies together, without each needing to start from scratch.
Journalism has had a storied history with the internet. Early on, the internet was a niche market, something for traditional publishers to experiment with as another medium for sharing news. As it gained popularity as a news source, newsrooms began to change as well, adapting their business models to the digital age. Newspapers had historically generated revenue through a mix of subscriptions, advertising and classifieds. But internet platforms Craigslist and Kijiji soon took over classifieds. Google Ads presented advertisers with more refined marketing tools than the newspapers could offer. And Facebook and Twitter made it possible for readers to consume news for free without visiting a newspaper’s website. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak to Emily Bell, the director of the Tow Center for Digital Journalism at Columbia University’s Graduate School of Journalism. Newsrooms are now left with few options for making money. Unless you are a large outlet with a sizable online subscriber base, like The New York Times, your capacity for local reporting will be hampered by the economic need to focus on stories that have the broadest reach. Many media conglomerates have cut back their local reporting, creating news deserts across large regions. Not having local reporters on the beat is having negative impacts on democracy, too. As Bell explains, “Where there is no local press … local officials tend to stay in office for longer. They tend to pay themselves more.” Smaller local news outlets that can build a relationship with their readers can succeed if their readers are able to pay subscription fees. But it is often poorer communities, where people can’t afford local news subscriptions, that most need the services of good local journalism. Bell sees an opportunity to rethink the way news is funded: first, by looking to communities to decide what level of reporting they require, and second, by resourcing it accordingly.
Is it possible to access the internet without interacting with the big five American tech companies? Technically, yes, but large swaths of the web would be inaccessible to consumers without the products and platforms created by Apple, Amazon, Facebook, Microsoft and Alphabet, Google’s parent company. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Matt Stoller, the director of research at the American Economic Liberties Project and the author of Goliath: The 100-Year War Between Monopoly Power and Democracy. Stoller looks at how the political landscape has changed from the time of the railroad tycoons to the modern Silicon Valley tech monopolies. Each of these companies has established itself in a market-leading position in different ways. On mobile, Apple’s App Store is the only way for software developers to reach iPhone customers. Google controls search, maps and online advertising. Amazon’s website is the dominant online retail platform. “In some ways, it’s a little bit like saying, well, you know, that railroad that goes through this one narrow valley that you have to take to get to market, well, that’s not a monopoly, because there are other railroads in the country,” Stoller says. “Well, yeah, maybe there are, but it doesn’t matter if you need that particular railroad to get where you’re going…. that’s what Amazon is like in a lot of the sectors that it deals with.” Finally, underpinning much of the internet are Microsoft Azure and Amazon Web Services cloud data centres. Is corporate power a political problem or a market problem? Skok, Owen and Stoller discuss topics ranging from the robber barons of the 1930s and the antitrust reforms that followed, to the current environment, one that evolved over several political generations to become, as Stoller describes it, a crisis of concentration separated from “caretaking,” in which profits can amass through domination rather than through better products or services. 
Social media platforms have assumed the role of news distribution sources, but have largely rejected the affiliated gatekeeper role of fact-checking the content they allow on their sites. This abdication has led to the rise of fake news, disinformation and propaganda. In this episode of Big Tech, co-hosts David Skok and Taylor Owen spoke with journalist and Rappler founder Maria Ressa just days before her conviction in a high-profile cyber-libel case against her, as well as her colleague Reynaldo Santos, Jr. and Rappler Inc. as a whole. On Monday, June 15, the Manila Regional Trial Court Branch 46 ruled that Ressa and Santos, Jr. were liable, but that Rappler as a company was not. The case is widely viewed as an attack on journalistic freedoms protected under the Philippine Constitution. Ressa has repeatedly come under fire from the Duterte government for calling out what she sees as its illiberal leanings and propaganda. Facebook was a key component of President Rodrigo Duterte’s election in 2016. Ressa explained, “On Facebook, a lie told a million times becomes a fact.” The disinformation that spreads on social media platforms is having real-world impacts on how citizens view democratic institutions. “If you debate the facts, you can’t have integrity of markets. You can’t have integrity of elections. … This is democracy’s death by a thousand cuts,” said Ressa.
Efforts to contain the COVID-19 pandemic have essentially shut down the economy. Now, as regions look to reopen, the focus is shifting to minimizing further infection by monitoring the virus’s spread and notifying people who may have been exposed to it. Public health authorities around the globe have deployed or are developing contact-tracing or contact-notification mobile apps. Apple and Google have partnered to develop a Bluetooth-based contact notification application programming interface, now available as an update for their mobile operating systems. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Carly Kind, director of the Ada Lovelace Institute, an organization studying issues around data and artificial intelligence. In April, the institute published a report on digital contact tracing called Exit through the App Store?

There are concerns about this technology’s rapid and expansive rollout. First, around data collection: some jurisdictions are developing their apps to store data centrally rather than at the device level. Next, technical considerations: for example, Bluetooth-based apps only register proximity and don’t account for other factors, such as whether the contact happened outdoors, where infection risk is lower. Further, there are concerns that Apple and Google’s tech-focused solution encroaches on the public health space: “From a power and control perspective, you can’t help but feel somewhat afraid that two companies control almost every device in every hand in the world and are able to wield that power in ways that contradict, right or wrong, the desires of national governments and public health authorities,” Kind cautions. Finally, there are concerns about how health-tracking apps, and our access to them, could affect our freedom of movement: we need to think about the ways these apps could marginalize individuals who don’t have the technology to prove their health status.
The impact of COVID-19 on the global economy was swift and substantial. Unemployment numbers are reaching Depression-era levels, and nations are scrambling to unveil stimulus packages intended to lessen the economic impact. Will these measures restore the economic status quo, or is this an opportunity to rethink our economic structures? In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Joseph Stiglitz, a Nobel laureate economist and a professor at Columbia University, about lessons from past economic crises and how those lessons apply to the economic fallout of COVID-19. This time around, tech companies command much of the market, and many of them are experiencing stability, if not growth; Zoom and Netflix, for example, have seen surges in their share prices. Unfortunately, the tech industry introduces new labour-market issues: digital platforms often require fewer employees to operate than industries like manufacturing or air travel. With comparatively small staff numbers, tax breaks and growing monopolies, technology companies have an outsize effect on society. Stiglitz argues that taxing these companies in “an adequate way” could have a positive social impact. “The fact is: we lowered the corporate taxes rather than trying to capture back for the public some of the enormous profits that are accruing to the tech giants,” he explains. These companies could emerge from the pandemic more dominant than ever.
Modern digital tools have brought about new conveniences, enabling many to work from home during the COVID-19 pandemic. But despite our countless ways to connect with each other, studies increasingly show that people are more isolated, more depressed and less empathetic than before.

In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Douglas Rushkoff about the internet’s evolution from an emerging technology to the monopolistic system we have today. Rushkoff is a professor of media theory and digital economics at Queens College, City University of New York, and the author of Team Human, Throwing Rocks at the Google Bus and more.

Rushkoff reflects on his involvement in the early internet in the 1980s and early ’90s. At the time, there wasn’t any online advertising, and computer coding was seen as a hobby rather than a career. As more people’s attention moved online, so did advertisers. The internet became an extension of American capitalism, seeking to capture and analyze our attention to generate growth. We now see the impact on our society, democracy and overall well-being. But Rushkoff explains that the problem isn’t the technology — it’s the application. “People think, ‘Oh, you used to like digital, and now you hate digital.’ No, digital’s been the same. I used to love the way that we applied digital, and now I hate the way we’re applying digital. There’s a really big difference. It’s like, I like hammers as long as people aren’t hitting each other in the face with them.” For Rushkoff, it’s time to change the course of technology development and put humanity first.
Social media platforms enable a free flow of content — regardless of source. Because of that system, content creators and online influencers (whether they are credible or not) can shift public opinion and spark polarization. Governments and platforms have been working on this issue, but now, in light of the COVID-19 pandemic, the dissemination of factual, credible information — and the removal of misleading information that could cause harm — is urgent.

In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Angie Drobnic Holan, editor-in-chief of PolitiFact, a fact-checking organization focused on reviewing statements made by political figures and rating them for accuracy. PolitiFact is one of the fact-checking sites that are part of the Facebook fact-checking program, providing Facebook with fact-checks that are displayed alongside user posts flagged as misleading. Facebook, however, is just one platform among many under fire for the rampant online extremism, fake news and disinformation facilitated by their products. Often, such platforms cite issues of scale (millions of posts a day) as the cause of the problem and of their inability to solve it. However, as Drobnic Holan explains, the COVID-19 pandemic highlights just how much power social media platforms have when it comes to shaping the flow of information. “I am all in favour of freedom of speech and the First Amendment. But I think false information is extraordinarily pernicious and it needs to be handled in a relatively aggressive manner,” she says. “It can’t just be left to say, ‘Oh, well, we hope people will find the right information eventually.’ No, that’s not a way for a healthy democracy to function — with misinformation swirling all around and people being not sure what’s true or not.”
Companies, international organizations and government agencies are all working to identify and eliminate online hate speech and extremism. It’s a game of cat and mouse: as regulators develop effective tools and new policies, the extremists adapt their approaches to continue their efforts. In this episode of Big Tech, Taylor Owen speaks with Sasha Havlicek, founding CEO of the Institute for Strategic Dialogue, about her organization and how it is helping to eliminate online extremism and hate speech. Several issues make Havlicek’s work difficult. The first challenge is context: regional, cultural and religious traditions play a factor in defining what is and what is not extremist content. Second, there isn’t a global norm about online extremism to reference. Third, jurisdictions present hurdles: who is responsible for deciding on norms and setting rules? And finally, keeping up with evolving technology and tactics is a never-ending battle. As online tools become more effective in identifying and removing online extremism and hate speech, extremist groups find ways to circumvent the systems. These problems are amplified by engagement-driven algorithms. While the internet enables individuals to choose how and where they consume content, platforms exploit users’ preferences to keep them engaged. “The algorithms are designed to find ways to hold your attention … that by feeding you slightly more titillating variants of whatever it is that you’re looking for, you are going to be there longer. And so that drive towards more sensationalist content is I think a real one,” Havlicek says. These algorithms contribute to the creation of echo chambers, which are highly effective tools for converting users to extremists.
Earlier this month, Facebook’s chief executive Mark Zuckerberg travelled to Europe ahead of the European Commission’s announcement of a new data strategy plan. The proposed regulations would seek to give Europe a competitive advantage by establishing data trusts to foster development in artificial intelligence technology. The tension between Silicon Valley and Brussels was on full display during Zuckerberg’s visit: while the tech executive was welcomed in state-visit fashion, his recommendation for European platform regulation was flatly rejected. To understand Zuckerberg’s message for the European Union, David Skok and Taylor Owen speak with Mark Scott, chief technology correspondent at POLITICO, about the proposed data strategy and its potential implications for tech giants like Facebook. The conversation breaks down a series of events that started on February 15, when Zuckerberg spoke at the Munich Security Conference about his company’s investment in building tools to address concerns around privacy, hate speech and democracy. He argued that private organizations shouldn’t be responsible for deciding what is acceptable online — rather, that should be left to governments. Then, on February 17, Zuckerberg travelled to Brussels to meet with officials at the European Commission. His visit coincided with Facebook’s release of a new white paper titled “Charting a Way Forward: Online Content Regulation.” Two days later, European Commission president Ursula von der Leyen announced the “European Data Strategy,” absent Zuckerberg’s recommendations. “This is about trying to make sure that Europe doesn’t miss out on the next wave of tech, which is frankly mostly AI,” explains Scott.
In this episode of Big Tech, Taylor Owen sits down with Ben Scott, director of policy and advocacy at Luminate, to discuss how the internet has evolved throughout Scott’s time in Washington, DC. Scott has worked for Bernie Sanders and Barack Obama, and for Hillary Clinton both in the State Department and on her election campaign. Discussing Scott’s evolving role in digital policy, Owen says: “Wherever the internet has been, you have been.” While working on cable regulations in Washington, DC, in 2003, Scott realized that the internet was the next major form of communication technology, sparking an interest in net neutrality regulations. “And then in a very short period of years before it [the internet] becomes monetized and concentrated power takes it over, it becomes controlled by a handful of commercial interests and then people give up trying to fight against that and that becomes status quo. And we had intervened at that moment, stopped cable and telecom industries from grabbing hold of the internet and kept it decentralized.” Scott joined Barack Obama’s presidential campaign to draft the first-ever internet policy agenda for a presidential candidate. While in the State Department, Scott saw the spread of the internet and its ability to promote democracy and enable societal change — the Arab Spring, for example. During the same period, platform companies continued to grow their influence in Washington, and eventually, Donald Trump’s successful presidential campaign illustrated just how easily the internet could be manipulated. As Scott puts it: “If everybody’s using the same tools of online manipulation and distortion and organized amplification of messages that don’t actually have that much support in the public, if you are willing to go totally over the top with the most outrageous, the most sensational, the most divisive, the most controversial, provocative — that, ultimately those messages spread farther, faster than anything else.”
There’s a false narrative surrounding artificial intelligence (AI): that it cannot be regulated. This idea stems, in part, from a belief that regulations will stifle innovation and hamper economic potential, and that the natural evolution of AI is to grow beyond its original code. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Joanna J. Bryson, professor of ethics and technology at the Hertie School of Governance in Berlin (beginning February 2020). Bryson begins by explaining the difference between intelligence and AI, and how that foundational understanding can help us see how regulation is possible in this space. “We need to be able to go back then and say, ‘Okay, did you follow a good process?’ A car manufacturer, they’re always recording what they did because they do a phenomenally dangerous and possibly hazardous thing … and if one of them goes wrong and the brakes don’t work, we can go back and say, ‘Why did the brakes not work?’ And figure out whose fault [it] is and we can say, ‘Okay, you’ve got to do this recall. You’ve got to pay this liability, whatever.’ It’s the same thing with software,” Bryson explains. It is the responsibility of nations to protect those inside their borders, and that protection must extend to data rights. Bryson points to the EU General Data Protection Regulation — a harmonized set of rules that covers a large area and crosses borders — as an example of the international cooperation that could produce similar standards and regulations for AI development.
There is a growing sense that governments are not able to effectively solve the problems of the world. The narrative is that governments are slow, costly and not informed enough to make the right decisions. This stands in contrast to the private sector, where business leaders are regarded as effective because they generate incredible wealth. The “saviour complex” is particularly strong among the wealthiest tech executives. Their world view is rooted in the idea that technology can solve all the world’s problems. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Anand Giridharadas, author of Winners Take All: The Elite Charade of Changing the World. Giridharadas speaks about the rise of powerful tech executives who are using their wealth and influence to reshape systems of governance — instead of supporting democratic institutions, they are creating their own philanthropic organizations. For Giridharadas, these elites are the new plutocrats who have seized power through wealth, much like the railroad tycoons of old. “There is enormous moral difference between five guys deciding to do something and a city deciding to do something. This is something I think you wouldn’t have had to explain to people 100 years or 200 years ago when we actually had more faith in the idea of democratic action,” Giridharadas says. He goes on to explain that where funds come from matters more than the amount of funding, citing military aid to Ukraine as an example: there is a difference between $400 million provided by the American taxpayer and $400 million provided by a wealthy executive who isn’t elected to represent the best interests of a population. Giridharadas argues that if tech billionaires really want to make the world a better place, they should pay their fair share in taxes and leave governments to solve the world’s problems.
Google and IBM are in a race to achieve quantum supremacy — and both sides claim to be winning. But what does quantum supremacy really mean? Quantum technologies have the ability to solve complex problems: to build better materials, design more effective pharmaceutical treatments or optimize large distribution networks. Nations or corporations that can harness the power of quantum technologies will have an advantage over their competitors. The leap in technology could be so great that it could have devastating impacts on the economies that lack quantum technologies. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with James Der Derian, director of the Centre for International Security Studies at the University of Sydney and principal investigator at Project Q, about the impact quantum technologies could have on peace, security, economics and society. Dr. Shohini Ghose, professor of physics and computer science at Wilfrid Laurier University, provides a foundational understanding of quantum computing at the beginning of the episode.
Truth and facts are not the same thing. To find truth, we must combine the best available information (what we know to be facts) with our own lens. This is especially challenging on social media: posts may be presented as if they are truthful, but truth comes with some subjectivity. This is where fact-checking tools — like the encyclopedic knowledge of Wikipedia — are essential. That is, if they present neutral information, offer unbiased results, have a transparent process and can be edited. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Katherine Maher, the Wikimedia Foundation’s chief executive officer and executive director. Maher joined Wikimedia in 2016, just prior to the election of President Trump, and is acutely aware of Wikipedia’s emerging role as a fact-checking tool. Transparent revision history is a cornerstone of Wikipedia: anyone using the tool can access the edit log and provide their own edits to correct information. Some technologies subvert this process. For example, connected devices such as smart speakers pull Wikipedia’s information to provide answers to users without allowing feedback or sharing a record of edits. Other factors influence Wikipedia’s transparency and neutrality as well. Maher’s team is looking to address biases that exist within Wikipedia’s database; she acknowledges that there is a gender imbalance, both among Wikipedia editors (80 percent of edits are made by men) and in the low percentage of articles about women and minorities. Maher explains: “we know, for example, that an article about a woman is four times more likely to mention her marital status than an article about a man.
If you're doing [AI] training of semantic pairing and you start to associate someone's marital status with their gender, in so far as there's a sort of higher correlation of value to someone being married or divorced, being a woman that propagates that bias out into all of the products that are then going to go ahead and use that algorithm or that dataset in the future.” Wikipedia will need to work to solve these issues if it wishes to remain a trusted source for facts.
Online platforms like Facebook and Google Ads are positioned as superior tools for micro-targeting advertisements. The promise of greater returns on investment and granular control over who will engage with an ad has attracted advertisers. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with David Carroll, an associate professor of media design at Parsons School of Design at The New School. Carroll’s efforts to understand how platforms were monetizing his online activity were featured in the documentary The Great Hack. His research into how online advertising systems work reveals just how little thought was put into oversight and monitoring of the systems that advertisers built. Carroll argues that financial incentives created the current data-hungry advertising environment. “The advertising complex was built to sort us into categories without knowing what our categories are. And then from there the recommendation engines were built to surface vast databases into a user interface. And those were biased towards engagement to move the chart up and then the user interface to reward people right on the surface for engagement, to move the chart up. And it was never about intelligence or goodness, it was just about greed and metrics.” For Carroll, digital advertising platforms are reminiscent of the snake oil salesmen of the 1830s, who used penny press newspapers to spread misleading claims. Eventually, people were protected from such ads when the US Congress created the Food and Drug Administration (FDA). Now, governments must step in to build the same kind of oversight bodies to regulate digital advertising.
Facebook is establishing a 40-person oversight board to rule on whether or not content should remain on its platform. The board aims to represent all regions of the world; its rulings are set to be released in multiple languages, and decisions about content are to be made expeditiously. Only one researcher, Kate Klonick, was invited in to observe the process of establishing the framework for this oversight board. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Kate Klonick, an assistant professor of law at St. John's University and an affiliate fellow at Yale Law School, about what she witnessed in this process. Klonick was embedded in Menlo Park without a non-disclosure agreement, given full access to meetings, and able to record all the conversations and workshops. Throughout the process, she maintained her academic independence, not accepting anything from Facebook, not even a free hotel room. Klonick discusses the struggles faced by the team tasked with building the oversight board. At the beginning, it didn't look like the project could succeed: "My lens is obviously from a legal perspective, and it's a little bit like when you're a hammer, everything's a nail. I look at a lot of the problems that I was seeing as they were creating this oversight board, and it was comparative constitutionalism, it was administrative law, it was democratic legitimacy." Facebook continued to work on the oversight board, and as Klonick admits, it did solve many of the huge constitutional problems presented by content moderation. But she is still skeptical about how this board will scale, whether it will be overrun with appeals, and how the public will perceive its effectiveness.
After discovering several small charges to her credit card totalling upward of US$900, Rana Foroohar figured her card must have been stolen. She quickly realized those micro-transactions were coming from a game her son was playing; unknowingly, he was spending real money inside a free-to-play game. As a journalist, Foroohar was fascinated by the way apps were capturing people's attention and wallets. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Rana Foroohar, global business columnist and associate editor at the Financial Times and global economic analyst at CNN, about her new book, Don't Be Evil: How Tech Betrayed Its Founding Principles — and All of Us. They discuss the cognitive, economic and political impacts that big tech companies are having on our societies and economies. Foroohar outlines the risks that big tech companies' current business models present to the global economy. While these businesses are motivated by growth (rather than profits), investors want to see profits once a company goes public; if those profits are not coming, stocks start to sink. "Uber goes out into every possible market, breaks whatever regulation it can, grabs market share, doesn't worry about making money, is allowed to continue that business model with private investors just pumping it up, pumping it up," Foroohar says. Technology companies make up a large percentage of equity markets, and there is real concern that the economy could be heading toward a tech-led crash.
Taylor Owen and David Skok have known each other for many years; their work in democracy and journalism ensures that their paths often cross. Taylor is an academic studying technology’s transformative impact on democracy. David has spent his journalism career in the thick of an ever-changing media landscape. They both recall one meeting at a restaurant in Toronto about ten years ago, when Taylor brought together a group of academics and journalists to discuss how technology was reshaping society. David was skeptical, and proclaimed to the room that nobody cares about privacy and technology. A lot has changed since that meeting. Technology’s impact on society, the economy and democracy is evident, and today, people care. On the Big Tech podcast, Taylor and David will unpack the nuanced challenges that technology presents.

Podcast Details

Created by: CIGI / The Logic
First episode: Oct 25th, 2019
Latest episode: Aug 27th, 2020
Avg. episode length: 32 minutes
