It’s a reasonable question. But the idea that AI companies should automatically report violent conversations to police is more complicated than it sounds.
To try and unpack it, I spoke with Meredith Whittaker, the President of Signal – an encrypted messaging platform that doesn’t collect your data, serve you ads, or track who you’re talking to. Whittaker runs the most private messaging app on the planet, which also means there is almost certainly illegal activity happening on Signal that no one, including her, knows about.
But this conversation isn’t just about Tumbler Ridge. The instinct to trade privacy for “safety” is reshaping the entire tech landscape: Amazon now lets you scan a whole neighbourhood’s worth of Ring camera footage; Australia requires teenagers to verify their ages before accessing social media. These technologies offer real value – but they all ask you to give something up in return. So I wanted to ask Whittaker why that trade might not be worth making.
Editor's note: A previous version of this article reported an incorrect final tally of the injured during the shooting at Tumbler Ridge. Two were critically injured. The podcast audio also includes an incorrect final tally of the injured.
It’s hard to overstate how quickly this shift happened. Just a few years ago, even Elon Musk was calling for an industry-wide pause on AI development, and the Biden administration was developing an “AI Bill of Rights” – one of the most thoughtful and comprehensive frameworks for AI regulation I’ve ever seen.
The architect of that initiative was Dr. Alondra Nelson. Today, she leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study and is fresh off a stint on Zohran Mamdani’s mayoral transition team in New York. I wanted to have her on to wrestle with an urgent question: how do you make a technology safe when nobody seems particularly interested in regulating it – and what might happen if we don’t?
Mentioned:
Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, by the White House Office of Science and Technology Policy
The mirage of AI deregulation, by Alondra Nelson (Science)
International AI Safety Report 2026, by Yoshua Bengio et al.
It’s all very weird. And, depending on who you ask, potentially terrifying. A bunch of autonomous AIs plotting to overthrow our species sounds like the kind of doomsday scenario we’ve been worrying about for decades.
Not everyone thinks Moltbook is a sign that our AIs have become sentient. But even the skeptics think it’s a pretty profound technological leap. It’s just not clear yet whether that’s an exciting development – or a terrifying one.
Mentioned:
“AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye (Harvard Business Review)
But Gen Z is skeptical, too. They worry about job security, about offloading their thinking to machines, about AI’s staggering energy consumption. Most of all, they worry they won’t get a say in shaping our future.
Ava Smithing, 24, and Sneha Revanur, 22, are trying to change that. Smithing is the advocacy director at the Young People’s Alliance and the host of “Left to Their Own Devices,” a podcast about how technology is rewriting childhood. Revanur is the founder of Encode AI, a youth-led nonprofit focused on AI policy. Politico once called her the “Greta Thunberg of AI.”
Together, they’re two of the most influential young voices in tech. So we brought them on to find out what older generations are getting wrong about AI – and what Gen Z wants from the most powerful technology in history.
Mentioned:
Technopoly: The Surrender of Culture to Technology, by Neil Postman
Gameplan, by Encode AI
That’s the rallying cry of the tech companies and governments racing to develop artificial intelligence as fast as humanly possible. The argument is that whoever reaches AGI first won’t just be dominant technologically or economically – they’ll be the world’s next superpower. But, if I’m being honest, I don’t know if that framing holds up. And part of the reason is that we don’t really understand China.
Enter Keyu Jin. Jin is a Harvard-trained economist who splits her time between London and Beijing, and her book, The New China Playbook, is her attempt to “read China in the original” – to provide a firsthand look at the forces that shaped the country’s unprecedented rise. China’s success is a puzzle. How did one of the poorest nations on the planet become the second richest in less than a century? How did an economy without free markets birth a tech sector that rivals – and in some ways surpasses – Silicon Valley?
The answers to these questions aren’t academic. China became a global power without capitalism and without democracy, which means its success has profound implications for both.
And as Canada sets out to find its footing in a rapidly changing world order, one thing is abundantly clear: we need to start reckoning with the Chinese playbook.
Mentioned:
The New China Playbook, by Keyu Jin
We’re two weeks into the new year, and none of those things have happened. So, full disclosure: I have no idea if we’re going to reach artificial general intelligence or see the rise of humanoid robots this year. If the people at the centre of the industry can’t figure it out, I doubt I can.
But I do have some ideas about how AI could reshape our world over the next 12 months. I think we’re going to see a new political movement pushing back against AI adoption and leaning into our collective humanity. Democratic governments will defy an increasingly protectionist America and start taking digital regulation seriously again. And we’ll start establishing cultural norms about AI use – like whether you really need to respond to that AI-generated e-mail your colleague just sent.
On this episode, I turn the mics around and invite my longtime producer, Mitchell Stuart, to ask me about what’s actually in store for the year ahead.
Mentioned:
Trust, attitudes and use of artificial intelligence (2025), KPMG
Human-centric AI: Perspectives on trust and the future of AI (2025), Telus
Could an Alternative AI Save Us from a Bubble? (Gary Marcus), by Machines Like Us
GPT-5 System Card, OpenAI
Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support, by Mahmud Omar et al. (Nature)
Under his leadership, NVIDIA has become a goliath. Somewhere between 80 and 90 per cent of AI tools run on NVIDIA hardware, making it the world’s most valuable company. But unlike his contemporaries, Huang has been remarkably quiet about the technology – and the world – he’s building.
In his new book, The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, journalist Stephen Witt pulls back the curtain. And what he finds is, at times, shocking: Huang believes there is zero risk in developing superintelligence.
So who is Jensen Huang? And should we worry that the most powerful person in AI is racing forward at breakneck speed, blind to the potential consequences?
Mentioned:
The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, by Stephen Witt
How Jensen Huang’s Nvidia Is Powering the A.I. Revolution, by Stephen Witt (The New Yorker)
The A.I. Prompt That Could End the World, by Stephen Witt (New York Times)
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.
Media sourced from the BBC.
You didn’t need to have a PhD or even use your real name – you just needed an internet connection. Against all odds, it worked. Today, billions of people use Wikipedia every month, and studies show it’s about as accurate as a traditional encyclopedia.
But how? How did Wikipedia not just turn into yet another online cesspool, filled with falsehoods, partisanship and AI slop? Wikipedia founder Jimmy Wales just wrote a book called The Seven Rules of Trust, where he explains how he was able to build that rarest of things: a trustworthy source of information on the internet. In an era when trust in institutions is collapsing, Wales thinks he’s found a blueprint – not just for the web, but for everything else too.
Mentioned:
The Seven Rules of Trust by Jimmy Wales and Dan Gardner
A False Wikipedia ‘Biography’ by John Seigenthaler (USA Today)
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.
Photo Illustration: The Globe and Mail/Brendan McDermid/Reuters
There’s little doubt we’re living through an AI economy. But many economists worry there may be trouble ahead. They see companies like OpenAI – valued at half a trillion dollars while losing billions every month – and fear the AI sector looks a lot like a bubble. Because right now, venture capitalists aren’t investing in sound business plans. They’re betting that one day, one of these companies will build artificial general intelligence.
Gary Marcus is skeptical. He’s a professor emeritus at NYU, a bestselling author, and the founder of two AI companies – one of which was acquired by Uber. For more than two decades, he’s been arguing that large language models (LLMs) – the technology underpinning ChatGPT, Claude, and Gemini – just aren’t that good.
Marcus believes that if we’re going to build artificial general intelligence, we need to ditch LLMs and go back to the drawing board. (He thinks something called “neurosymbolic AI” could be the way forward.)
But if Marcus is right – if AI is a bubble and it’s about to pop – what happens to the economy then?
Mentioned:
The GenAI Divide: State of AI in Business 2025, by Project Nanda (MIT)
MIT study finds AI can already replace 11.7% of U.S. workforce, by MacKenzie Sigalos (CNBC)
The Algebraic Mind, by Gary Marcus
We found what you’re asking ChatGPT about health. A doctor scored its answers, by Geoffrey A. Fowler (The Washington Post)
It’s a vision that, in the age of artificial intelligence, now seems increasingly possible.
But utopia is far from guaranteed. Many experts predict that AI will also lead to mass job loss, the development of new bioweapons and, potentially, the extinction of our species.
So if you’re building a technology that could either save the world or destroy it – is that a moral pursuit?
These kinds of thorny questions are at the heart of Bregman’s latest book, Moral Ambition. In a sweeping conversation that takes us from the invention of the birth control pill to the British Abolitionist movement, Bregman and I discuss what a good life looks like (spoiler: he thinks the death of work might not be such a bad thing) – and whether AI can help get us there.
Mentioned:
Moral Ambition, by Rutger Bregman
Utopia for Realists, by Rutger Bregman
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.
Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.
But this felt different. This wasn’t discreet backdoor lobbying or a furtive effort to curry favour with an incoming administration. These were some of the most influential men in the world quite literally aligning themselves with the world’s most powerful politician – and his increasingly illiberal ideology.
Carole Cadwalladr has been tracking the collision of technology and politics for years. She’s the investigative journalist who broke the Cambridge Analytica story, exposing how Facebook data may have been used to manipulate elections. Now, she’s arguing that what we’re witnessing goes beyond monopoly power or even traditional oligarchy. She calls it techno-authoritarianism – a fusion of Trump’s authoritarian political project with the technological might of Silicon Valley.
So I wanted to have her on to make the case for why she believes Big Tech isn’t just complicit in authoritarianism, but is actively enabling it.
Mentioned:
The First Great Disruption 2016-2024, by Carole Cadwalladr
Trump Taps Palantir to Compile Data on Americans, by Sheera Frenkel and Aaron Krolik (New York Times)
This is What a Digital Coup Looks Like, by Carole Cadwalladr (TED)
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.
Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.
But there are also songs that sound like Drake, cartoons that look like The Simpsons, and stories that read like Game of Thrones. In other words, AI-generated work that’s clearly riffing on – or outright mimicking – other people’s art. Art that, in most of the world, is protected by copyright law. Which raises an obvious question: how is any of this legal?
The AI companies claim they’re allowed to train their models on this work without paying for it, thanks to the “fair use” exception in American copyright law. But Ed Newton-Rex has a different view: he says it’s theft.
Newton-Rex is a classical music composer who spent the better part of a decade building an AI music generator for a company called Stability AI. But when he realized the company – and most of the AI industry – didn’t intend to license the work they were training their models on, he quit. He has been on a mission to get the industry to fairly compensate creators ever since. I invited him on the show to explain why he believes this is theft at an industrial scale – and what it means for the human experience when most of our art isn’t made by humans anymore, but by machines.
Mentioned:
Copyright and Artificial Intelligence: Generative AI Training, by the United States Copyright Office
A.I. Is Coming for Culture, by Joshua Rothman (The New Yorker)
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Host direction by Athena Karkanis. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Media sourced from BBC News.
Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.
While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to be one of OpenAI’s most influential scientific minds.) In 2013, Hinton left the academy and went to work for Google, eventually winning both a Turing Award and a Nobel Prize.
I think it’s fair to say that artificial intelligence as we know it may not exist without Geoffrey Hinton.
But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life’s work – this thing he helped build – might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious.
But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada’s, seem reluctant to get in the way.
So I wanted to ask Hinton: If we keep going down this path, what will become of us?
Mentioned:
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares
Agentic Misalignment: How LLMs could be insider threats, by Anthropic
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.
Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.
Students aren’t just using artificial intelligence to write their essays. They’re using it to generate ideas, conduct research, and summarize their readings. In other words: they’re using it to think for them. Or, as New York Magazine recently put it: “everyone is cheating their way through college.”
University administrators seem paralyzed in the face of this. Some worry that if we ban tools like ChatGPT, we may leave students unprepared for a world where everyone is already using them. But others think that if we go all in on AI, we could end up with a generation capable of producing work – but not necessarily original thought.
I’m honestly not sure which camp I fall into, so I wanted to talk to two people with very different perspectives on this.
Conor Grennan is the Chief AI Architect at NYU’s Stern School of Business, where he’s helping students and educators embrace AI. And Niall Ferguson is a senior fellow at Stanford and Harvard, and the co-founder of the University of Austin. Lately, he’s been making the opposite argument: that if universities are to survive, they largely need to ban AI from the classroom. Whichever path we take, the consequences will be profound. Because this isn’t just about how we teach and how we learn – it’s about the future of how we think.
Mentioned:
AI’s great brain robbery – and how universities can fight back, by Niall Ferguson (The London Times)
Everyone Is Cheating Their Way Through College, by James D. Walsh (New York Magazine)
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, by Nataliya Kos’myna (MIT Media Lab)
The Diamond Age, by Neal Stephenson
How the Enlightenment Ends, by Henry A. Kissinger
Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Host direction by Athena Karkanis. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.
Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.
But, according to Balsillie, none of this can be blamed on Trump. He thinks that over the last thirty years we’ve clung to an outdated economic model and have allowed our politics to be captured by corporate interests.
So, with less than a week to go before the federal election, I thought it was the perfect time to sit down with Jim and ask him how we might build a stronger, more sovereign Canada.
Mentioned:
“Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS),” The World Trade Organization
“Reinforcing Canada’s security and sovereignty in the Arctic,” Prime Minister of Canada
“Ontario Welcomes Siemens’ $150 Million Investment to Establish New Technology Centre in Oakville,” news release from the Government of Ontario
Further Reading:
“We are all economic nationalists now,” by Jim Balsillie (National Post)
In her recent report on foreign interference, Justice Marie-Josée Hogue wrote that “information manipulation poses the single biggest risk to our democracy.” Meanwhile, senior Canadian intelligence officials are predicting that India, China, Pakistan and Russia will all attempt to influence the outcome of this election. To try and get a sense of what we’re up against, I wanted to get two different perspectives on this. My colleague Aengus Bridgman is the Director of the Media Ecosystem Observatory, a project that we run together at McGill University, and Nina Jankowicz is the co-founder and CEO of the American Sunlight Project. Together, they are two of the leading authorities on the problem of information manipulation.
Mentioned:
“Public Inquiry Into Foreign Interference in Federal Electoral Processes and Democratic Institutions,” by the Honourable Marie-Josée Hogue
"A Pro-Russia Content Network Foreshadows the Automated Future of Info Ops,” by the American Sunlight Project
Further Reading:
“Report ties Romanian liberals to TikTok campaign that fueled pro-Russia candidate,” by Victor Goury-Laffont (Politico)
“2025 Federal Election Monitoring and Response,” by the Canadian Digital Media Research Network
“Election threats watchdog detects Beijing effort to influence Chinese Canadians on Carney,” by Steven Chase (The Globe and Mail)
“The revelations and events that led to the foreign-interference inquiry,” by Steven Chase and Robert Fife (The Globe and Mail)
“Foreign interference inquiry finds ‘problematic’ conduct,” by The Decibel
Since 2008, more than 250 local news outlets have closed down in Canada. The U.S. has lost a third of the newspapers it had in 2005. But this is about more than a failing business model. Only 31 per cent of Americans say they trust the media. In Canada, that number is a little better – but only a little.
The problem is not just that people are losing their faith in journalism. It’s that they’re starting to place their trust in other, often more dubious sources of information: TikTok influencers, Elon Musk’s X feed, and The Joe Rogan Experience.
The impact of this shift can be seen almost everywhere you look. Fifteen per cent of Americans believe climate change is a hoax, 30 per cent believe the 2020 election was stolen, and 10 per cent believe the earth is flat.
A lot of this can be blamed on social media, which crippled journalism's business model and led to a flourishing of false information online. But not all of it. People like Jay Rosen have long argued that journalists themselves are at least partly responsible for the post-truth moment we now find ourselves in.
Rosen is a professor of journalism at NYU who’s been studying, critiquing, and really shaping the press for nearly 40 years. He joined me a couple of weeks ago at the Attention conference in Montreal to explain how we got to this place – and where we might go from here.
A note: we recorded this interview before the Canadian election was called, so we don’t touch on it here. But over the course of the next month, the integrity of our information ecosystem will face an inordinate amount of stress, and conversations like this one will be more important than ever.
Mentioned:
"Digital News Report Canada 2024 Data: An Overview," by Colette Brin, Sébastien Charlton, Rémi Palisser, Florence Marquis
"America’s News Influencers," by Galen Stocking, Luxuan Wang, Michael Lipka, Katerina Eva Matsa,Regina Widjaya,Emily Tomasik andJacob Liedke
Further Reading:
"Challenges of Journalist Verification in the Digital Age on Society: A Thematic Review," Melinda Baharom, Akmar Hayati Ahmad Ghazali, Abdul Muati, Zamri Ahmad
"Making Newsworthy News: The Integral Role of Creativity and Verification in the Human Information Behavior that Drives News Story Creation," Marisela Gutierrez Lopez, Stephann Makri, Andrew MacFarlane, Colin Porlezza, Glenda Cooper, Sondess Missaoui
"The Trump Administration and the Media (2020)," by Leonard Downie Jr. for the Committee to Protect Journalists.
But that may be changing. Earlier this year a Chinese startup called DeepSeek launched its own AI chatbot, sending shockwaves across Silicon Valley. According to DeepSeek, their model – DeepSeek-R1 – is just as powerful as ChatGPT but was developed at a fraction of the cost. In other words, this isn’t just a new company – it could be an entirely different approach to building artificial intelligence.
To try and understand what DeepSeek means for the future of AI, and for American innovation, I wanted to speak with Karen Hao. Hao was the first reporter to ever write a profile on OpenAI and has covered AI for MIT Technology Review, The Atlantic and The Wall Street Journal. So she’s better positioned than almost anyone to try and make sense of this seemingly monumental shift in the landscape of artificial intelligence.
Mentioned:
“The messy, secretive reality behind OpenAI’s bid to save the world,” by Karen Hao
Further Reading:
“DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” by DeepSeek-AI and others.
“A Comparison of DeepSeek and Other LLMs,” by Tianchen Gao, Jiashun Jin, Zheng Tracy Ke, Gabriel Moryoussef
“Technical Report: Analyzing DeepSeek-R1’s Impact on AI Development,” by Azizi Othman
We are living in a world of perpetual distraction. There are more things to read, watch and listen to than ever before – but our brains, it turns out, can only absorb so much. Politicians like Donald Trump have figured out how to exploit this dynamic. If you’re constantly saying outrageous things, it becomes almost impossible to focus on the things that really matter. Trump’s former strategist Steve Bannon called this strategy “flooding the zone.”
As the host of the MSNBC show All In, Chris Hayes has had a front-row seat to the war for our attention – and, now, he’s decided to sound the alarm with a new book called The Sirens’ Call: How Attention Became the World’s Most Endangered Resource.
Hayes joined me to explain how our attention became so scarce, and what happens to us when we lose the ability to focus on the things that matter most.
Mentioned:
"Twitter and Tear Gas: The Power and Fragility of Networked Protest," by Zeynep Tufekci
Further Reading:
"Ethics of the Attention Economy: The Problem of Social Media Addiction," by Vikram R. Bhargava and Manuel Velasquez.
"The Attention Economy Labour, Time and Power in Cognitive Capitalism," by Claudio Celis Bueno
“The business of news in the attention economy: Audience labor and MediaNews Group’s efforts to capitalize on news consumption,” Brice Nixon
The most potent version of this tech is Pegasus, a surveillance tool developed by an Israeli company called NSO Group. Once Pegasus infects your phone, it can see your texts, track your movement, and download your passwords – all without you realizing you’ve been hacked.
We know a lot of this because of Ron Deibert. Twenty years ago, he founded Citizen Lab, a research group at the University of Toronto that has helped expose some of the most high-profile cases of cyber espionage around the world.
Ron has a new book out called Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy, and he sat down with me to explain how spyware works, and what it means for our privacy – and our democracy.
Note: We reached out to NSO Group about the claims made in this episode and they did not reply to our request for comment.
Mentioned:
“Chasing Shadows: Cyber Espionage, Subversion, and the Global Fight for Democracy,” by Ron Deibert
“Meta’s WhatsApp says spyware company Paragon targeted users in two dozen countries,” by Raphael Satter, Reuters
Further Reading:
“The Autocrat in Your iPhone,” by Ron Deibert
“A Comprehensive Analysis of Pegasus Spyware and Its Implications for Digital Privacy and Security,” by Karwan Kareem
“Stopping the Press: New York Times Journalist Targeted by Saudi-linked Pegasus Spyware Operator,” by Bill Marczak, Siena Anstis, Masashi Crete-Nishihata, John Scott-Railton, and Ron Deibert
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
But what we haven’t done on this show is try to explain how AI actually works. So this seemed like as good a time as any to ask our listeners if they had any burning questions about AI. And it turns out you did.
Where do our queries go once they’ve been fed into ChatGPT? What are the justifications for using a chatbot that may have been trained on plagiarized material? And why do we even need AI in the first place?
To help answer your questions, we are joined by Derek Ruths, a Professor of Computer Science at McGill University, and the best person I know at helping people (including myself) understand artificial intelligence.
Further Reading:
“Yoshua Bengio Doesn’t Think We’re Ready for Superhuman AI. We’re Building It Anyway,” Machines Like Us podcast
“ChatGPT is blurring the lines between what it means to communicate with a machine and a human,” by Derek Ruths
“A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going,” by Michael Wooldridge
“Artificial Intelligence: A Guide for Thinking Humans,” by Melanie Mitchell
“Anatomy of an AI System,” by Kate Crawford and Vladan Joler
“Two years after the launch of ChatGPT, how has generative AI helped businesses?,” by Joe Castaldo
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
But before we barrel into a year where I think AI will be everywhere, we thought this might be a good moment to step back and ask an important question: what exactly is AI?
On our next episode, we'll be joined by Derek Ruths, a Professor of Computer Science at McGill University.
And he's given me permission to ask him anything and everything about AI.
If you have questions about AI, or how it’s impacting your life, we want to hear them. Send an email or a voice recording to: [email protected]
Thanks – and we’ll see you next Tuesday!
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
As she tried to make sense of what happened, Megan discovered that Sewell had fallen in love with a chatbot on Character.AI – an app where you can talk to chatbots designed to sound like historical figures or fictional characters. Now Megan is suing Character.AI, alleging that Sewell developed a “harmful dependency” on the chatbot that, coupled with a lack of safeguards, ultimately led to her son’s death.
They’ve also named Google in the suit, alleging that the technology that underlies Character.AI was developed while the founders were working at Google.
I sat down with Megan Garcia and her lawyer, Meetali Jain, to talk about what happened to Sewell. And to try to understand the broader implications of a world where chatbots are becoming a part of our lives – and the lives of our children.
We reached out to Character.AI and Google about this story. Google did not respond to our request for comment by publication time.
A spokesperson for Character.AI made the following statement:
“We do not comment on pending litigation.
Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry. As part of this, we have launched a separate model for our teen users – with specific safety features that place more conservative limits on responses from the model.
The Character.AI experience begins with the Large Language Model that powers so many of our user and Character interactions. Conversations with Characters are driven by a proprietary model we continuously update and refine. For users under 18, we serve a version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This initiative – combined with the other techniques described below – combine to produce two distinct user experiences on the Character.AI platform: one for teens and one for adults.
Additional ways we have integrated safety across our platform include:
Model Outputs: A “classifier” is a method of distilling a content policy into a form used to identify potential policy violations. We employ classifiers to help us enforce our content policies and filter out sensitive content from the model’s responses. The under-18 model has additional and more conservative classifiers than the model for our adult users.
User Inputs: While much of our focus is on the model’s output, we also have controls to user inputs that seek to apply our content policies to conversations on Character.AI. This is critical because inappropriate user inputs are often what leads a language model to generate inappropriate outputs. For example, if we detect that a user has submitted content that violates our Terms of Service or Community Guidelines, that content will be blocked from the user’s conversation with the Character. We also have a process in place to suspend teens from accessing Character.AI if they repeatedly try to input prompts into the platform that violate our content policies.
Additionally, under-18 users are now only able to access a narrower set of searchable Characters on the platform. Filters have been applied to this set to remove Characters related to sensitive or mature topics.
We have also added a time spent notification and prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice. As we continue to invest in the platform, we will be rolling out several new features, including parental controls. For more information on these new features, please refer to the Character.AI blog HERE.
There is no ongoing relationship between Google and Character.AI. In August, 2024, Character.AI completed a one-time licensing of its technology and Noam went back to Google.”
If you or someone you know is thinking about suicide, support is available 24-7 by calling or texting 988, Canada’s national suicide prevention helpline.
Mentioned:
Megan Garcia v. Character Technologies, Et Al.
“Google Paid $2.7 Billion to Bring Back an AI Genius Who Quit in Frustration” by Miles Kruppa and Lauren Thomas
“Belgian man dies by suicide following exchanges with chatbot,” by Lauren Walker
“Can AI Companions Cure Loneliness?,” Machines Like Us
“An AI companion suggested he kill his parents. Now his mom is suing,” by Nitasha Tiku
Further Reading:
“Can A.I. Be Blamed for a Teen’s Suicide?” by Kevin Roose
“Margrethe Vestager Fought Big Tech and Won. Her Next Target is AI,” Machines Like Us
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The reporters learned that in 2019 the Canadian Food Inspection Agency introduced a new system that relies on an algorithm to prioritize sites for inspectors to visit. Investigative reporters Grant Robertson and Kathryn Blaze Baum talk about why this new system of tracking was created, and what went wrong.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Because of that seemingly infinite complexity, developing software that could master Go has long been a goal of the AI community.
In 2016, researchers at Google’s DeepMind appeared to meet the challenge. Their Go-playing AI defeated one of the best Go players in the world, Lee Sedol.
After the match, Lee Sedol retired, saying that losing to an AI felt like his entire world was collapsing.
He wasn’t alone. For a lot of people, the game represented a turning point – the moment where humans had been overtaken by machines.
But Frank Lantz saw that game and was invigorated. Lantz is a game designer (his game “Hey Robot” is a recurring feature on The Tonight Show Starring Jimmy Fallon), the director of the NYU Game Center, and the author of The Beauty of Games. He’s spent his career thinking about how technology is changing the nature of games – and what we can learn about ourselves when we sit down to play them.
Mentioned:
“AlphaGo”
“The Beauty of Games” by Frank Lantz
“Adversarial Policies Beat Superhuman Go AIs” by Tony Wang et al.
“Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern
“Heads-up limit hold’em poker is solved” by Michael Bowling et al.
Further Reading:
“How to Play a Game” by Frank Lantz
“The Afterlife of Go” by Frank Lantz
“How A.I. Conquered Poker” by Keith Romer
“In Two Moves, AlphaGo and Lee Sedol Redefined the Future” by Cade Metz
Hey Robot by Frank Lantz
Universal Paperclips by Frank Lantz
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In September, Mark Zuckerberg announced that Meta had developed “the most advanced glasses the world had ever seen.” That same day, OpenAI CEO Sam Altman predicted we could have artificial superintelligence within a couple of years. Elon Musk has said he’ll land rockets on Mars by 2026.
We appear to be living through the kinds of technological leaps we used to only dream about. But whose dreams were those, exactly?
In her latest book, Imagination: A Manifesto, Ruha Benjamin argues that our collective imagination has been monopolized by the Zuckerbergs and Musks of the world. But, she says, it doesn’t need to be that way.
Mentioned:
“Imagination: A Manifesto,” by Ruha Benjamin
Summer of Soul (...Or, When the Revolution Could Not Be Televised), directed by Questlove
“The Black Woman: An Anthology,” by Toni Cade Bambara
“The New Artificial Intelligentsia,” by Ruha Benjamin
“Race After Technology,” by Ruha Benjamin
Breonna's Garden, with Ju'Niyah Palmer
“Viral Justice,” by Ruha Benjamin
The Parable Series, by Octavia Butler
Further Reading:
“AI could make health care fairer—by helping us believe what patients say,” by Karen Hao
“How an Attempt at Correcting Bias in Tech Goes Wrong,” by Sidney Fussell
“Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” by Joy Buolamwini
“The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence,” by Timnit Gebru and Émile P. Torres
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
With her decade-long tenure as one of the world’s most powerful antitrust watchdogs coming to an end, Vestager has turned her attention to AI. She spearheaded the EU’s AI Act, which will be the first and, so far, most ambitious piece of AI legislation in the world.
But the clock is ticking – both on her term and on the global race to govern AI, which Vestager says we have “very little time” to get right.
Mentioned:
The EU Artificial Intelligence Act
“Dutch scandal serves as a warning for Europe over risks of using algorithms,” by Melissa Heikkilä
“Belgian man dies by suicide following exchanges with chatbot” by Lauren Walker
General Data Protection Regulation (GDPR)
“The future of European competitiveness” by Mario Draghi
“Governing AI for Humanity: Final Report” by the United Nations Secretary-General’s High-level Advisory Body
The Artificial Intelligence and Data Act (AIDA)
Further Reading:
“Apple, Google must pay billions in back taxes and fines, E.U. court rules” by Ellen Francis and Cat Zakrzewski
“OpenAI Lobbied the E.U. to Water Down AI Regulation” by Billy Perrigo
“The total eclipse of Margrethe Vestager” by Samuel Stolton
“Digital Empires: The Global Battle to Regulate Technology” by Anu Bradford
“The Brussels Effect: How the European Union Rules the World” by Anu Bradford
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
That creeping feeling that everything online is getting worse has a name: “enshittification,” a term for the slow degradation of our experience on digital platforms. The enshittification cycle is why you now have to wade through slop to find anything useful on Google, and why your charger is different from your BFF’s.
According to Cory Doctorow, the man who coined the memorable moniker, this digital decay isn’t inevitable. It’s a symptom of corporate under-regulation and monopoly – practices being challenged in courts around the world, like the US Department of Justice’s antitrust suit against Google.
Cory Doctorow is a British-Canadian journalist, blogger and author of Chokepoint Capitalism, as well as speculative fiction works like The Lost Cause and the new novella Spill.
Every Friday, Lately takes a deep dive into the big, defining trends in business and tech that are reshaping our every day. It’s hosted by Vass Bednar.
Machines Like Us will be back in two weeks.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Mentioned:
“Silicon Valley, the New Lobbying Monster” by Charles Duhigg
“Big Crypto, Big Spending: Crypto Corporations Spend an Unprecedented $119 Million Influencing Elections” by Rick Claypool via Public Citizen
“I’m Running Out of Ways to Explain How Bad This Is” by Charlie Warzel
“Elon Musk Has Reached a New Low” by Charlie Warzel
“The movement to diversify Silicon Valley is crumbling amid attacks on DEI” by Naomi Nix, Cat Zakrzewski and Nitasha Tiku
“The Techno-Optimist Manifesto” by Marc Andreessen
“Trump Vs. Biden: Tech Policy,” The Ben & Marc Show
“The MAGA Aesthetic Is AI Slop” by Charlie Warzel
Further Reading:
“Biden's FTC took on big tech, big pharma and more. What antitrust legacy will Biden leave behind?” by Paige Sutherland and Meghna Chakrabarti
“Inside the Harris campaign’s blitz to win back Silicon Valley” by Cat Zakrzewski, Nitasha Tiku and Elizabeth Dwoskin
“The Little Tech Agenda” by Marc Andreessen and Ben Horowitz
“Silicon Valley had Harris’s back for decades. Will she return the favor?” by Cristiano Lima-Strong and Cat Zakrzewski
“SEC’s Gensler turns tide against crypto in courts” by Declan Harty
“Trump vs. Harris is dividing Silicon Valley into feuding political camps” by Trisha Thadani, Elizabeth Dwoskin, Nitasha Tiku and Gerrit De Vynck
“Inside the powerful Peter Thiel network that anointed JD Vance” by Elizabeth Dwoskin, Cat Zakrzewski, Nitasha Tiku and Josh Dawsey
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
It’s also a central question of speculative fiction. And one that few people have tried to answer as thoughtfully – and as poetically – as Emily St. John Mandel.
Mandel is one of Canada’s great writers. She’s the author of six award-winning novels, the most recent of which is Sea of Tranquility – a story about a future where we have moon colonies and time-travelling detectives. But Mandel might be best known for Station Eleven, which was made into a big HBO miniseries in 2021. In Station Eleven, Mandel envisioned a very different future. One where a pandemic has wiped out nearly everyone on the planet, and the world has returned to a pre-industrial state. In other words, a world without technology.
I think speculative fiction carries tremendous power. In fact, I think that AI is ultimately an act of speculation. The AI we have chosen to build, and our visions of what AI could become, have been shaped by acts of imagination.
So I wanted to speak to someone who has made a career imagining other worlds, and thinking about how humans will fit into them.
Mentioned:
“Last Night in Montreal” by Emily St. John Mandel
“Station Eleven” by Emily St. John Mandel
The Nobel Prize in Literature 2014 – Lecture by Patrick Modiano
“The Glass Hotel” by Emily St. John Mandel
“Sea of Tranquility” by Emily St. John Mandel
Summary of the 2023 WGA MBA, Writers Guild of America
Her (2013)
“The Handmaid’s Tale” by Margaret Atwood
“Shell Game” by Evan Ratliff
Further Reading:
“Can AI Companions Cure Loneliness?,” Machines Like Us
“Yoshua Bengio Doesn’t Think We’re Ready for Superhuman AI. We’re Building it Anyway.,” Machines Like Us
“The Road” by Cormac McCarthy
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
And then there was Yoshua Bengio.
Bengio is one of AI’s pioneering figures. In 2018, he and two colleagues won the Turing Award – the closest thing computer science has to a Nobel Prize – for their work on deep learning. In 2022, he was the most cited computer scientist in the world. It wouldn’t be hyperbolic to suggest that AI as we know it today might not exist without Yoshua Bengio.
But in the last couple of years, Bengio has had an epiphany of sorts. And he now believes that, left unchecked, AI has the potential to wipe out humanity. So these days, he’s dedicated himself to AI safety. He’s a professor at the University of Montreal and the founder of Mila – the Quebec Artificial Intelligence Institute.
And he was at this big AI conference too, amidst all these Silicon Valley types, pleading with the industry to slow down before it’s too late.
Mentioned:
“Personal and Psychological Dimensions of AI Researchers Confronting AI Catastrophic Risks” by Yoshua Bengio
“Deep Learning” by Yann LeCun, Yoshua Bengio, Geoffrey Hinton
“Computing Machinery and Intelligence” by Alan Turing
“International Scientific Report on the Safety of Advanced AI”
“Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?” by R. Ren et al.
“SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”
Further reading:
“‘Deep Learning’ Guru Reveals the Future of AI” by Cade Metz
“Montréal Declaration for a Responsible Development of Artificial Intelligence”
“This A.I. Subculture’s Motto: Go, Go, Go” By Kevin Roose
“Reasoning through arguments against taking AI safety seriously” by Yoshua Bengio
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Luckily, everything is on the table. Grinding entire mountains into powder and dumping them into oceans. Sucking carbon directly out of the air and burying it underground. Spraying millions of tons of sulphur dioxide directly into the atmosphere.
Gwynne Dyer has spent the past four years interviewing the world’s leading climate scientists about the moonshots that could save the planet. Dyer is a journalist and historian who has written a dozen books over his career, and has become one of Canada’s most trusted commentators on war and geopolitics.
But his latest book, Intervention Earth, is about the battle to save the planet.
Like any reporting on the climate, it’s inevitably a little depressing. But with this book Dyer has also given us a different way of thinking about the climate crisis – and maybe even a road map for how technology could help us avoid our own destruction.
Mentioned:
“Intervention Earth: Life-Saving Ideas from the World’s Climate Engineers” by Gwynne Dyer
“Scientists warn Earth warming faster than expected – due to reduction in ship pollution” by Nicole Mortillaro
“Global warming in the pipeline” by James Hansen, et al.
“Albedo Enhancement by Stratospheric Sulfur Injections: A Contribution to Resolve a Policy Dilemma?” by Paul Crutzen
Further Reading:
Interview with Hans Joachim Schellnhuber and Gwynne Dyer
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In April, the Israeli magazine +972 published a story describing how Israel was using an AI system called Lavender to target potential enemies for air strikes, sometimes with a margin of error as high as 10 per cent.
I remember reading that story back in the spring and being shocked, not that such tools existed, but that they were already being used at this scale on the battlefield. P.W. Singer was less surprised. Singer is one of the world’s foremost experts on the future of warfare. He’s a strategist at the think tank New America, a professor of practice at Arizona State University, and a consultant for everyone from the US military to the FBI.
So if anyone can help us understand the black box of autonomous weaponry and AI warfare, it’s P.W. Singer.
Mentioned:
“‘The Gospel’: how Israel uses AI to select bombing targets in Gaza” by Harry Davies, Bethan McKernan, and Dan Sabbagh
“‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza” by Yuval Abraham
“Ghost Fleet: A Novel of the Next World War” by P. W. Singer and August Cole
Further Reading:
“Burn-In: A Novel of the Real Robotic Revolution” by P. W. Singer and August Cole
“The AI revolution is already here” by P. W. Singer
“Humans must be held responsible for decisions AI weapons make” in The Asahi Shimbun
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
One of the central reasons for this is that the advertising model that has supported journalism for more than a century has collapsed. Simply put, Google and Meta have built a better advertising machine, and they’ve crippled journalism’s business model in the process.
It wasn’t always obvious this was going to happen. Fifteen or twenty years ago, a lot of publishers were actually making deals with social media companies, thinking they were going to lead to bigger audiences and more clicks.
But these turned out to be Faustian bargains. The journalism industry took a nosedive, while Google and Meta became two of the most profitable companies in the world.
And now we might be doing it all over again with a new wave of tech companies like OpenAI. Over the past several years, OpenAI, operating in a kind of legal grey area, has trained its models on news content it hasn’t paid for. While some news outlets, like the New York Times, have chosen to sue OpenAI for copyright infringement, many publishers (including The Atlantic, the Financial Times, and NewsCorp) have elected to sign deals with OpenAI instead.
Julia Angwin has been worried about the thorny relationship between big tech and journalism for years. She’s written a book about MySpace, documented the rise of big tech, and won a Pulitzer for her tech reporting with the Wall Street Journal.
She was also one of the few people warning publishers the first time around that making deals with social media companies maybe wasn’t the best idea.
Now, she’s ringing the alarm again, this time as a New York Times contributing opinion writer and the CEO of a journalism startup called Proof News that is preoccupied with the question of how to get people reliable information in the age of AI.
Mentioned:
“Stealing MySpace: The Battle to Control the Most Popular Website in America,” by Julia Angwin
“What They Know” WSJ series by Julia Angwin
“The Bad News About the News” by Robert G. Kaiser
“The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work” by Michael M. Grynbaum and Ryan Mac
“Seeking Reliable Election Information? Don’t Trust AI” by Julia Angwin, Alondra Nelson, Rina Palta
Further Reading:
“Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance” by Julia Angwin
“A Letter From Our Founder” by Julia Angwin
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Of course, Silicon Valley has always been driven by libertarian sensibilities and an optimistic view of technology. But the radical techno-optimism of people like Andreessen, and billionaire entrepreneurs like Peter Thiel and Elon Musk, has morphed into something more extreme. In their view, technology and government are always at odds with one another.
But if that’s true, then how do you explain someone like Audrey Tang?
Tang, who, until May of this year, was Taiwan’s first Minister of Digital Affairs, is unabashedly optimistic about technology. But she’s also a fervent believer in the power of democratic government.
To many in Silicon Valley, this is an oxymoron. But Tang doesn’t see it that way. To her, technology and government are – and have always been – symbiotic.
So I wanted to ask her what a technologically enabled democracy might look like – and she has plenty of ideas. At times, our conversation got a little bit wonky. But ultimately, this is a conversation about a better, more inclusive form of democracy. And why she thinks technology will get us there.
Just a quick note: we recorded this interview a couple of months ago, while Tang was still the Minister of Digital Affairs.
Mentioned:
“vTaiwan”
“Polis”
“Plurality: The Future of Collaborative Technology and Democracy” by E. Glen Weyl, Audrey Tang and ⿻ Community
“Collective Constitutional AI: Aligning a Language Model with Public Input,” Anthropic
Further Reading:
“The simple but ingenious system Taiwan uses to crowdsource its laws” by Chris Horton
“How Taiwan’s Unlikely Digital Minister Hacked the Pandemic” by Andrew Leonard
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
And it’s not just chip companies. The S&P 500 (the index that tracks the 500 largest companies in the U.S.) is at an all-time high this year, in no small part because of the sheen of AI. And here in Canada, a new report from Microsoft claims that generative AI will add $187 billion to the domestic economy by 2030.
As wild as these numbers are, they may just be the tip of the iceberg. Some researchers argue that AI will completely revolutionize our economy, leading to per capita growth rates of 30%. In case those numbers mean absolutely nothing to you, 25 years of 30% growth means we’d be a thousand times richer than we are now. It’s hard to imagine what that world would look like – or how the average person fits into it.
Luckily, Rana Foroohar has given this some thought. Foroohar is a global business columnist and an associate editor at The Financial Times. I wanted to have her on the show to help me work through what these wild predictions really mean and, most importantly, whether or not she thinks they’ll come to fruition.
Mentioned:
“Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” by Daron Acemoglu and Simon Johnson (2023)
“Manias, Panics, and Crashes: A History of Financial Crises” by Charles P. Kindleberger (1978)
“Irrational Exuberance” by Robert J. Shiller (2016)
“Gen AI: Too much spend, too little benefit?” by Goldman Sachs Research (2024)
“Workers could be the ones to regulate AI” by Rana Foroohar (Financial Times, 2023)
“The Financial Times and OpenAI strike content licensing deal” (Financial Times, 2024)
“Is AI about to kill what’s left of journalism?” by Rana Foroohar (Financial Times, 2024)
“Deaths of Despair and the Future of Capitalism” by Anne Case and Angus Deaton (2020)
“The China Shock: Learning from Labor Market Adjustment to Large Changes in Trade” by David H. Autor, David Dorn & Gordon H. Hanson (2016)
Further Reading:
“Beware AI euphoria” by Rana Foroohar (Financial Times, 2024)
“AlphaGo” by Google DeepMind (2020)
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Rushkoff’s lack of enthusiasm around AI may stem from the fact that he doesn’t see it as the ground-shifting technology that some do. Rather, he sees generative artificial intelligence as just the latest in a long line of communication technologies – more akin to radio or television than fire or electricity.
But while he may not believe that artificial intelligence is going to bring about some kind of techno-utopia, he does think its impact will be significant. So eventually we did talk about AI. And we ended up having an incredibly lively conversation about whether computers can create real art, how the “California ideology” has shaped artificial intelligence, and why it’s not too late to ensure that technology is enabling human flourishing – not eroding it.
Mentioned:
“Cyberia” by Douglas Rushkoff
“The Original WIRED Manifesto” by Louis Rossetto
“The Long Boom: A History of the Future, 1980–2020” by Peter Schwartz and Peter Leyden
“Survival of the Richest: Escape Fantasies of the Tech Billionaires” by Douglas Rushkoff
“Artificial Creativity: How AI teaches us to distinguish between humans, art, and industry” by Douglas Rushkoff
“Empirical Science Began as a Domination Fantasy” by Douglas Rushkoff
“A Declaration of the Independence of Cyberspace” by John Perry Barlow
“The Californian Ideology” by Richard Barbrook and Andy Cameron
“Can AI Bring Humanity Back to Health Care?,” Machines Like Us Episode 5
Further Reading:
“The Medium is the Massage: An Inventory of Effects” by Marshall McLuhan
“Technopoly: The Surrender of Culture to Technology” by Neil Postman
“Amusing Ourselves to Death” by Neil Postman
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
While these two camps are, in many ways, completely at odds with one another, they do share one thing in common: they both buy into the hype of artificial intelligence.
But when you dig deeper into these systems, it becomes apparent that both of these visions – the utopian one and the doomy one – are based on some pretty tenuous assumptions.
Kate Crawford has been trying to understand how AI systems are built for more than a decade. She’s the co-founder of the AI Now Institute, a leading AI researcher at Microsoft, and the author of Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.
Crawford was studying AI long before this most recent hype cycle. So I wanted to have her on the show to explain how AI really works. Because even though it can seem like magic, AI actually requires huge amounts of data, cheap labour and energy in order to function. So even if AI doesn’t lead to utopia, or take over the world, it is transforming the planet – by depleting its natural resources, exploiting workers, and sucking up our personal data. And that’s something we need to be paying attention to.
Mentioned:
“ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine” by Joseph Weizenbaum
“Microsoft, OpenAI plan $100 billion data-center project, media report says,” Reuters
“Meta ‘discussed buying publisher Simon & Schuster to train AI’” by Ella Creamer
“Google pauses Gemini AI image generation of people after racial ‘inaccuracies’” by Kelvin Chan and Matt O’Brien
“OpenAI and Apple announce partnership,” OpenAI
“New Oxford Report Sheds Light on Labour Malpractices in the Remote Work and AI Booms” by Fairwork
“The Work of Copyright Law in the Age of Generative AI” by Kate Crawford, Jason Schultz
“Generative AI’s environmental costs are soaring – and mostly secret” by Kate Crawford
“Artificial intelligence guzzles billions of liters of water” by Manuel G. Pascual
“S.3732 – Artificial Intelligence Environmental Impacts Act of 2024”
“Assessment of lithium criticality in the global energy transition and addressing policy gaps in transportation” by Peter Greim, A. A. Solomon, Christian Breyer
“Calculating Empires” by Kate Crawford and Vladan Joler
Further Reading:
“Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” by Kate Crawford
“Excavating AI” by Kate Crawford and Trevor Paglen
“Understanding the work of dataset creators” from Knowing Machines
“Should We Treat Data as Labor? Moving beyond ‘Free’” by I. Arrieta-Ibarra et al.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Without sufficient time and attention, patients are suffering. There are 12 million significant misdiagnoses in the US every year, and 800,000 of those result in death or disability. (While the same kind of data isn’t available in Canada, similar trends are almost certainly happening here as well).
Eric Topol says medicine has become decidedly inhuman – and the consequences have been disastrous. Topol is a cardiologist and one of the most widely cited medical researchers in the world. In his latest book, Deep Medicine, he argues that the best way to make health care human again is to embrace the inhuman, in the form of artificial intelligence.
Mentioned:
“Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again” by Eric Topol
“The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations” by H. Singh, A. Meyer, E. Thomas
“Burden of serious harms from diagnostic error in the USA” by David Newman-Toker, et al.
“How Expert Clinicians Intuitively Recognize a Medical Diagnosis” by J. Brush Jr, J. Sherbino, G. Norman
“A Randomized Controlled Study of Art Observation Training to Improve Medical Student Ophthalmology Skills” by Jaclyn Gurwin, et al.
“Why Doctors Should Organize” by Eric Topol
“How This Rural Health System Is Outdoing Silicon Valley” by Erika Fry
Further Reading:
"The Importance Of Being" by Abraham Verghese
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The brain-computer interface that Arbaugh uses is part of an emerging field known as neurotechnology that promises to reshape the way we live. A wide range of AI-empowered neurotechnologies may allow disabled people like Arbaugh to regain independence, or give us the ability to erase traumatic memories in patients suffering from PTSD.
But it doesn’t take great leaps to envision how these technologies could be abused as well. Law enforcement agencies in the United Arab Emirates have used neurotechnology to read the minds of criminal suspects, and convict them based on what they’ve found. And corporations are developing ways to advertise to potential customers in their dreams. Remarkably, both of these things appear to be legal, as there are virtually no laws explicitly governing neurotechnology.
All of which makes Nita Farahany’s work incredibly timely. Farahany is a professor of law and philosophy at Duke University and the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.
Farahany isn’t fatalistic about neurotech – in fact, she uses some of it herself. But she is adamant that we need to start developing laws and guardrails as soon as possible, because it may not be long before governments, employers and corporations have access to our brains.
Mentioned:
“PRIME Study Progress Update – User Experience,” Neuralink
“Paralysed man walks using device that reconnects brain with muscles,” The Guardian
Cognitive Warfare – NATO’s ACT
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
That idea would eventually become the basis for Replika, Kuyda’s AI startup. Today, Replika has millions of active users – that’s millions of people who have AI friends, AI siblings and AI partners.
When I first heard about the idea behind Replika, I thought it sounded kind of dystopian. I envisioned a world where we’d rather spend time with our AI friends than our real ones. But that’s not the world Kuyda is trying to build. In fact, she thinks chatbots will actually make people more social, not less, and that the cure for our technologically exacerbated loneliness might just be more technology.
Mentioned:
“ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine” by Joseph Weizenbaum
“elizabot.js”, implemented by Norbert Landsteiner
“Speak, Memory” by Casey Newton (The Verge)
“Creating a safe Replika experience” by Replika
“The Year of Magical Thinking” by Joan Didion
Additional Reading:
The Globe & Mail: “They fell in love with the Replika AI chatbot. A policy update left them heartbroken”
“Loneliness and suicide mitigation for students using GPT3-enabled chatbots” by Maples, Cerit, Vishwanath, & Pea
“Learning from intelligent social agents as social and intellectual mirrors” by Maples, Pea, Markowitz
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Few people understand this trajectory better than Maria Ressa. Ressa is a Filipino journalist and the CEO of a news organization called Rappler. Like many people, she was once a fervent believer in the power of social media. Then she saw how it could be abused. In 2016, she reported on how Rodrigo Duterte, then president of the Philippines, had weaponized Facebook in the election he’d just won. After publishing those stories, Ressa became a target herself, and her inbox was flooded with death threats. In 2021, she won the Nobel Peace Prize.
I wanted this to be our first episode because I think, as novel as AI is, it has undoubtedly been shaped by the technologies, the business models, and the CEOs that came before it. And Ressa thinks we’re about to repeat the mistakes we made with social media all over again.
Mentioned:
“How to Stand Up to a Dictator” by Maria Ressa
“A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism” by Thompson et al.
Rappler’s Matrix Protocol Chat App: Rappler Communities
“Democracy Report 2023: Defiance in the Face of Autocratization” by V-Dem
“The Foundation Model Transparency Index” by Stanford HAI (Human-Centered Artificial Intelligence)
“All the ways Trump’s campaign was aided by Facebook, ranked by importance” by Philip Bump (The Washington Post)
“Our Epidemic of Loneliness and Isolation” by U.S. Surgeon General Dr. Vivek H. Murthy
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.