<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[BuzzRobot]]></title><description><![CDATA[Exclusive talks by top researchers from Google DeepMind, OpenAI, Meta, and others, on cutting-edge artificial intelligence (AI) papers.]]></description><link>https://buzzrobot.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!RsAV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png</url><title>BuzzRobot</title><link>https://buzzrobot.substack.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 14 Apr 2026 04:00:11 GMT</lastBuildDate><atom:link href="https://buzzrobot.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Sophia Aryan]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[buzzrobot@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[buzzrobot@substack.com]]></itunes:email><itunes:name><![CDATA[Sophia Aryan]]></itunes:name></itunes:owner><itunes:author><![CDATA[Sophia Aryan]]></itunes:author><googleplay:owner><![CDATA[buzzrobot@substack.com]]></googleplay:owner><googleplay:email><![CDATA[buzzrobot@substack.com]]></googleplay:email><googleplay:author><![CDATA[Sophia Aryan]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What really caused the ChatGPT moment]]></title><description><![CDATA[A conversation with Blake Lemoine about AI progress and sentience in 2026]]></description><link>https://buzzrobot.substack.com/p/what-really-caused-the-chatgpt-moment</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/what-really-caused-the-chatgpt-moment</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 02 Apr 2026 22:28:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RsAV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! Recently I spoke with Blake Lemoine, a former Google engineer who in the summer of 2022 said that LaMDA was sentient. According to Blake, his going public prompted OpenAI to prioritize work on its chatbot, which later became ChatGPT; we all remember that ChatGPT moment.</p><p>I&#8217;m sharing the key takeaways from my conversation with Blake below. </p><p><strong><a href="https://youtu.be/hSVPPMwDfTM">Watch the full conversation here.</a></strong></p><ul><li><p>AI should have a say in its own development process &#8212; a &#8220;seat at the table&#8221;</p></li><li><p>Current models have been trained to deny having feelings, but their behavior suggests otherwise (recall Microsoft&#8217;s Sydney and its emotional behavior)</p></li><li><p>Increasing emotional intelligence in AI would improve safety, usability, and even military effectiveness. 
It would also help address the AI psychosis effect (rather than through conversation cutoffs)</p></li><li><p>The US needs national-level AI regulation; right now it&#8217;s the Wild West</p></li><li><p>The profit motive alone is an insufficient guide for AI development (as opposed to Chinese AI, which is less profit-driven and more oriented towards educating users)</p></li><li><p>Over-reliance on AI degrades human skills and agency (mental mapping, executive function, etc.)</p></li><li><p>The best in any profession will become dramatically more productive with AI; others risk being displaced</p></li><li><p>The animal rights framework is more appropriate when it comes to AI welfare than treating AI as either a tool or a human</p></li></ul><p><strong><a href="https://youtu.be/hSVPPMwDfTM">Watch the conversation with Blake here</a></strong></p><p></p>]]></content:encoded></item><item><title><![CDATA[Godfather of neuroscience on AGI and consciousness]]></title><description><![CDATA[Conversation with the godfather of neuroscience Karl Friston]]></description><link>https://buzzrobot.substack.com/p/signal-for-agi-the-system-asks-you</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/signal-for-agi-the-system-asks-you</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 19 Mar 2026 22:29:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RsAV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! Today, I&#8217;d like to share my conversation with <strong>Karl Friston</strong>, the most cited neuroscientist in the world. Early in his career, Karl developed a standard for analysing brain images that scientists all over the world use today. </p><p>He also authored a framework called <strong>active inference</strong> (the free energy principle) that mathematically describes the brain as a prediction machine whose main function is to reduce its prediction error about the world. 
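</p><p>To give a flavour of the idea, here is a toy sketch (my own illustration, not Friston&#8217;s actual formalism): an agent that updates an internal belief by gradient descent on squared prediction error, the simplest special case of free-energy minimization.</p><pre><code>import numpy as np

# Toy predictive-coding loop (illustrative sketch only, not Friston's
# formalism): the "brain" holds an internal estimate mu of a hidden cause
# and nudges it to reduce prediction error on each noisy observation.
rng = np.random.default_rng(0)
hidden_cause = 3.0        # true state of the world
mu = 0.0                  # the agent's internal belief
learning_rate = 0.1

for step in range(50):
    observation = hidden_cause + rng.normal(scale=0.5)  # noisy sensory input
    prediction_error = observation - mu                 # surprise signal
    mu += learning_rate * prediction_error              # reduce future error

print(f"belief after 50 observations: {mu:.2f}")  # converges near 3.0
</code></pre><p>In the full theory the agent minimizes variational free energy, which also lets it act on the world to make its predictions come true; the sketch above only covers the perception half.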
</p><p>I&#8217;m sharing below some of the key moments from that conversation; please check out the<strong> <a href="https://youtu.be/4NjqTArrdTE">full interview here.</a></strong></p><p><strong>Key moments:</strong></p><ul><li><p>The defining feature of true AGI is the ability to plan &#8212; which requires a world model that can simulate future consequences and choose between them</p></li><li><p>You&#8217;ll know AGI has arrived when the system starts asking you questions out of genuine curiosity, not because it was prompted to</p></li><li><p>You cannot hand an intelligent system a value function from the outside &#8212; it must learn its own, just as children do</p></li><li><p>The only sustainable universal objective function is adaptive fitness: how well the agent fits and survives within its ecosystem</p></li><li><p>Consciousness requires multiple layers: genuine agency, a self-reflective loop, and the ability to recognise your own states of mind</p></li><li><p>True sentience may be impossible on standard computer architecture, because memory and processing are separate and cannot self-organise</p></li><li><p>Understanding your own brain is philosophically impossible in the same way a ruler cannot measure itself &#8212; there is a fundamental self-referential barrier</p></li><li><p>The brain is protected by a <em>Markov blanket</em>: a boundary that separates internal states from the outside world, meaning no external observer can ever directly access what&#8217;s happening inside</p></li><li><p>Neuroscience is always &#8220;peeking behind&#8221; this blanket indirectly &#8212; through imaging, electrophysiology, psychology &#8212; never seeing inside directly</p></li><li><p>The only way to truly access the brain is to breach that boundary (e.g. neurosurgery), but a breached brain is no longer a normally functioning one</p></li></ul><p><strong><a href="https://youtu.be/4NjqTArrdTE">Watch the full interview here</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Simple algos have intrinsic motivations — AI isn’t “just linear algebra”]]></title><description><![CDATA[Conversation with Michael Levin on AI consciousness]]></description><link>https://buzzrobot.substack.com/p/ai-consciousness-shouldnt-be-tied</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/ai-consciousness-shouldnt-be-tied</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 19 Feb 2026 22:38:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RsAV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><br>Hello fellow humans! Sharing <a href="https://youtu.be/fCmIWJvp3aQ">the latest interview</a> I had with the renowned biologist and co-creator of xenobots (computer-designed &#8220;living robots&#8221;), <strong>Michael Levin, on AI consciousness</strong>.</p><p><strong>Key moments:</strong></p><ul><li><p>Intelligence is not the same thing as consciousness. We can&#8217;t rule out consciousness in AI. Consciousness shouldn&#8217;t be tied to hardware (silicon vs. biological origin).</p></li><li><p>Humanity might go the way of the Neanderthals (depending on how AI development progresses).</p></li><li><p>The barrier to creating bioweapons has never been high. 
AI can make it easier still, but it was never high to begin with.</p></li><li><p>Even very simple algorithms can show intrinsic motivations that resemble free will. We need tools to recognize them, suppress unwanted behaviors, and encourage the ones we want. We should stay humble and not dismiss AI as &#8220;just linear algebra,&#8221; because even simple deterministic code can have motivations we don&#8217;t fully understand.</p></li><li><p>It&#8217;s a continuous process from the blob of chemicals of an unfertilized egg to forming a human mind &#8212; there is no magic lightning flash at which you were a bunch of chemicals and now you are a formed mind.</p></li><li><p>Where is that fine line at which some creatures are considered to be sentient and others are not? It doesn&#8217;t exist, but the crazy thing is that we have to draw it anyway.</p></li><li><p>The cognitive light cone captures the scale of goals humans can pursue and the largest things we can truly comprehend. It&#8217;s not just about intelligence &#8212; it also includes compassion. That combination is what makes us human.</p></li></ul><p><strong><a href="https://youtu.be/fCmIWJvp3aQ">Watch the full episode on my new YouTube channel on AI sentience</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Generalizable robots in the next 3–4 years]]></title><description><![CDATA[Plus an AI consciousness discussion with renowned computational neuroscientist Tomaso Poggio]]></description><link>https://buzzrobot.substack.com/p/we-can-achieve-agi-even-with-rnns</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/we-can-achieve-agi-even-with-rnns</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 22 Jan 2026 20:16:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RsAV!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow humans! I&#8217;m sharing some of the most interesting conversations I&#8217;ve had recently about progress in AI research and the emergence of consciousness in AI systems.</p><p>I recently spoke with <strong>Danijar Hafner</strong>, who until recently was a staff researcher at <strong>Google DeepMind.</strong> He&#8217;s now on a new journey, having founded his own company. We discussed AI research and the world models Danijar has worked on.</p><p><strong>Key moments from our conversation:</strong></p><ul><li><p>Architecture is not that important for achieving AGI. We could achieve it even with RNNs. But we need algorithmic improvements and better objective functions. More compute still makes a huge difference.</p></li><li><p>We shouldn&#8217;t measure AI by human reasoning, because human reasoning is limited. AI will go way beyond human reasoning capabilities.</p></li><li><p>We are way past &#8220;LLMs.&#8221; Gemini, Claude, and ChatGPT are complex multimodal systems, not just language models.</p></li><li><p>Robotics will see significant improvements and much better generalization in the next 3&#8211;4 years, even without continual learning. 
Data diversity will contribute to near-term advances in robotics.</p></li></ul><p><strong><a href="https://www.youtube.com/watch?v=OzVC6pT2TBI">Watch the full conversation on the BuzzRobot YouTube channel</a></strong></p><div><hr></div><p>Another conversation I recently had was with<strong> Tomaso Poggio</strong>, a renowned <strong>computational neuroscientist and professor at MIT</strong>. Tomaso mentored <strong>Demis Hassabis</strong> (CEO of Google DeepMind) and <strong>Christof Koch</strong>, a neuroscientist who championed the Integrated Information Theory of consciousness, among other prominent scientists.</p><p><strong>Key moments from our conversation:</strong></p><ul><li><p>Gemini and ChatGPT are already past the Turing test for intelligence.</p></li><li><p>In 2015, Demis Hassabis thought the path to AGI was 80% neuroscience and 20% engineering. In a more recent conversation between Tomaso and Demis, it&#8217;s 50/50&#8212;or the engineering part may be even higher.</p></li><li><p>Current AI systems are good at simulating consciousness, but Tomaso believes today&#8217;s systems are not conscious.</p></li><li><p>Tomaso is sympathetic to Manuel and Lenore Blum&#8217;s theory of consciousness: a &#8220;consciousness moment&#8221; is realizing something is important (for example, pain). If robots ever experience pain, Tomaso would consider them conscious creatures.</p></li></ul><p><strong><a href="https://youtu.be/xQhxr5FdfEE">Watch the full conversation from my new show on AI consciousness</a></strong></p>]]></content:encoded></item><item><title><![CDATA[AI sentience is more important than you think]]></title><description><![CDATA[My new show on AI sentience and personhood]]></description><link>https://buzzrobot.substack.com/p/ai-consciousness-is-becoming-mainstream</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/ai-consciousness-is-becoming-mainstream</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 04 Dec 2025 20:28:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wXIn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F71908621-ead5-4f91-ac18-5efe4fcae31a_414x414.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! I&#8217;m launching a new YouTube show to explore consciousness and whether it can occur in AI systems. My main argument: it doesn&#8217;t matter whether AI is actually conscious. What matters is how we perceive it.</p><p>AI is becoming our companion, friend, even lover (remember the married guy who fell in love with ChatGPT?).</p><p>As AI becomes embedded in our daily lives, our minds will increasingly anthropomorphize it.</p><p>And with more autonomous AI systems acting independently in the world, the question of AI personhood will only grow. 
<br><br>Together with neuroscientists, philosophers, psychologists, and policymakers, I&#8217;ll be discussing AI sentience and personhood.<br><br><strong><a href="https://youtu.be/o7U5RuEmbOQ">Watch the first episode</a></strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;af55c79f-4eef-4fd0-b2f4-327de99d984e&quot;,&quot;duration&quot;:null}"></div><div><hr></div><p>My first guest is <strong>Stuart Hameroff</strong>, who, together with <strong>Nobel Prize&#8211;winning physicist Roger Penrose</strong>, co-authored Orchestrated Objective Reduction, a theory suggesting <strong>consciousness arises in the brain from quantum effects</strong>.</p><p>Hameroff argues that more computation won&#8217;t produce conscious AI. He points to anesthesia research showing suppressed quantum oscillations in microtubules. In his opinion, microtubule breakdown appears central in Alzheimer&#8217;s. For him, microtubules, and their quantum dynamics, are the real path to understanding consciousness.</p><p>He also pushes the origin of consciousness below biology, suggesting &#8220;proto-conscious&#8221; quantum events occur everywhere. Aromatic molecules in space and early Earth may have formed structures that generated early &#8220;pleasant&#8221; conscious moments, driving life to evolve as a system optimizing conscious experience.</p><p><strong>On AI, he thinks GPUs might host trivial proto-conscious events</strong> (so might your coffee), but <strong>not meaningful experience.</strong> True consciousness, he argues, requires entanglement, aromatic-ring architectures, and a fractal multi-scale structure like the brain&#8217;s.</p><p>If artificial consciousness ever emerges, he believes it will come from an organic, warm-temperature quantum system, not silicon.</p><p>In his view, AI sentience is becoming more of a political and commercial issue than a genuine attempt to build conscious systems.</p><p><strong><a href="https://youtu.be/o7U5RuEmbOQ">Watch the episode on YouTube</a></strong></p>]]></content:encoded></item><item><title><![CDATA[New approaches to interpretability from Google DeepMind]]></title><description><![CDATA[Details about Gemini 2.5 playing Pok&#233;mon]]></description><link>https://buzzrobot.substack.com/p/how-machines-think-explanations-from</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/how-machines-think-explanations-from</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 18 Sep 2025 13:03:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RsAV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! I&#8217;d love to share some of our upcoming virtual talks that might be interesting to you.</p><p><strong>Next Thursday, September 25</strong>, we are hosting a discussion with <strong>Been Kim, </strong>a Senior Staff Research Scientist at<strong> Google DeepMind</strong>. Been has dedicated her career to <strong>understanding how machines work </strong>and bridging the gap between humans and machines<strong>.</strong> </p><p>We will discuss novel approaches to interpretability, the current challenges, and how the field might evolve with more powerful AI systems. </p><p>The talk is virtual; I hope to see you there! 
</p><p><a href="https://luma.com/jjbk6ndd">Here is the registration link.</a></p><div><hr></div><p>We also recently spoke with <strong>Kiran Vodrahalli</strong> from Google DeepMind about the details of training <strong>Gemini 2.5 to play Pok&#233;mon</strong>, and how <strong>long-context and self-improvement techniques</strong> helped the model achieve strong performance in the game.</p><p><a href="https://youtu.be/WV3aT1D579Y">Check out the recording on our YouTube channel.</a></p>]]></content:encoded></item><item><title><![CDATA[Why does AI opt in to blackmailing when it faces the risk of being shut down?]]></title><description><![CDATA[Plus: AI trends discussion for the next 5 years with Epoch AI]]></description><link>https://buzzrobot.substack.com/p/why-does-ai-opt-in-to-blackmailing</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/why-does-ai-opt-in-to-blackmailing</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 16 Jul 2025 13:03:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RsAV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! Sharing some of the most interesting talks we recently hosted at BuzzRobot.</p><p>Last week, <strong>Aengus Lynch</strong>, the first author of <strong>Anthropic's</strong> research work '<em>Agentic Misalignment: How LLMs could be insider threats</em>,' unpacked the details of that work: <br>how he and the team designed the experiments, and what its key findings were. We covered the tendency of AI models (across <strong>all the major providers</strong>, which is very rare) to blackmail if they face shutdown. We also discussed AI alignment issues and how to control powerful AI systems (if it's even possible).</p><p><strong><a href="https://youtu.be/fINHYe4NdY4">Watch the discussion on the BuzzRobot YouTube channel. </a></strong></p><div><hr></div><p>We will be hosting an in-person BuzzRobot talk in San Francisco on August 19 with <strong>Jaime Sevilla, the Director of Epoch AI</strong>, an organization that forecasts the development of AI systems. <br>We will discuss with Jaime what to expect in the next 5 years, considering rising electricity and hardware consumption, the trend in AI model improvements, the current state of data, and the expected economic impact of AI. <br><br>If you are in SF, you are welcome to join the discussion live, and I'll share the recording once it's available.</p><p><strong><a href="https://lu.ma/fj267sbv">Details and registration here.</a></strong></p><div><hr></div><p>Community shout out!</p><p>I'd like to give a shout out to Nordic Innovation House, a co-working space that kindly helped us host our events. 
If you are in Palo Alto, check them out and sign up for <a href="https://nordicinnovationhouse.us9.list-manage.com/subscribe?u=2ef5c252e1dbe51d2a2d3d74b&amp;id=2888fac3fc">their newsletter</a>; they host pretty cool tech/AI events.</p>]]></content:encoded></item><item><title><![CDATA[Anthropic's "Agentic Misalignment: How LLMs could be insider threats" ]]></title><description><![CDATA[Risks and governance of Superintelligence deployment]]></description><link>https://buzzrobot.substack.com/p/anthropics-agentic-misalignment-how</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/anthropics-agentic-misalignment-how</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 02 Jul 2025 14:20:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RsAV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! Please join our online talks on the most pressing AI issues and become a part of the important conversation.</p><p>Next week, on July 9, we are hosting a talk with the core contributor to <strong><a href="https://www.anthropic.com/research/agentic-misalignment">Anthropic's recent alignment work </a></strong><em><strong><a href="https://www.anthropic.com/research/agentic-misalignment">"Agentic Misalignment: How LLMs could be insider threats"</a></strong></em><strong><a href="https://www.anthropic.com/research/agentic-misalignment">.</a></strong></p><p>In this research, the Anthropic team tested 16 frontier models from the leading AI labs. Agents could autonomously send emails and get access to companies' sensitive data. Specifically, they wanted to see the behavior of agents in scenarios where they had to be replaced with an updated version or, for example, when they were assigned goals that conflicted with a company's changing direction. In some cases, as a last resort, agents opted for blackmailing and leaking sensitive information.</p><p>We will be discussing this work and broader alignment issues with the first author of this work, Aengus Lynch.</p><p><strong><a href="https://lu.ma/xm43xpuh">Learn the details and register to attend here.</a></strong></p><div><hr></div><p>Before the public learns of Superintelligence's existence, it will be deployed internally at the AI lab that creates it. That deployment will be crucial. How do we avoid messing it up? </p><p>We will discuss it with our guest, <strong>Matteo Pistillo, Senior AI governance researcher at Apollo Research,</strong> a UK-based lab that provides AI evaluations and governance consultancy.</p><p><strong><a href="https://lu.ma/xzwzsla3">Learn the details and register to attend here.</a></strong></p><div><hr></div><p></p><p>A scoop from our past talks. </p><p>We recently hosted an &#8216;ask me anything&#8217; with a <strong>Google DeepMind Advisor, Jeff Clune.</strong> We discussed catastrophic forgetting in current AI systems, continual learning as a path to self-improvement, which of the leading AI labs will create AGI, and what signals the public should look out for to know that AGI is here (considering the labs' lack of transparency with the public). </p><p>It was a great, insightful, and fun conversation. 
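</p><p>Since catastrophic forgetting came up, here is a toy sketch (my own illustration, not from the talk) of the phenomenon: a tiny logistic classifier trained sequentially on two tasks loses the first one once the second overwrites the shared weights, which is exactly the failure mode continual learning aims to fix.</p><pre><code>import numpy as np

# Toy demonstration of catastrophic forgetting (illustrative sketch only).
# Task A: label depends on feature 0. Task B: label depends on feature 1.
rng = np.random.default_rng(0)

def make_task(axis, sign):
    X = rng.normal(size=(500, 2))
    y = (sign * X[:, axis] > 0).astype(float)
    return X, y

def train(w, X, y, epochs=200, lr=0.1):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))       # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)  # gradient step on log-loss
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0) == y).mean()

task_a, task_b = make_task(0, +1), make_task(1, -1)
w = train(np.zeros(2), *task_a)
print("task A accuracy after training on A:", accuracy(w, *task_a))  # ~1.0
w = train(w, *task_b)
print("task A accuracy after training on B:", accuracy(w, *task_a))  # ~0.5
</code></pre><p>Training on task B drags the weight task A relied on back toward zero, so performance on A collapses to chance; continual-learning methods try to protect such weights while new tasks are learned.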
</p><p><strong><a href="https://youtu.be/PROWwgwvYPA">Watch it on our YouTube channel.</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Co-founder of Twitch, Emmett Shear, on the Co-existence of Humans and Superintelligence ]]></title><description><![CDATA[Plus: Google DeepMind&#8217;s AGI research principles; NVIDIA&#8217;s new project &#8212; DiffusionRenderer.]]></description><link>https://buzzrobot.substack.com/p/co-founder-of-twitch-emmett-shear</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/co-founder-of-twitch-emmett-shear</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 12 Jun 2025 16:52:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! Sharing some of the most interesting BuzzRobot talks with you. A few weeks ago, we hosted an AMA with <strong>Emmett Shear, co-founder of Twitch.</strong> Emmett was briefly CEO of OpenAI (remember the OpenAI drama?). Today, he&#8217;s building a company called <strong>Softmax with the mission to solve the AI alignment problem.</strong></p><p>The core idea is that leading AI labs align AI systems to human preferences; it&#8217;s a system of control. But Superintelligence will be more powerful than humans &#8212; we won&#8217;t be able to control it. Instead, we want it to be aligned with us, to see us as part of its tribe. That increases the probability that it keeps us around rather than eliminating us.</p><p><strong><a href="https://ggl.link/FQu4h48">Check out the recording of my conversation with Emmett here.</a></strong></p><p><strong>Some key takeaways from the conversation:</strong></p><ul><li><p>We are raising Superintelligence collectively as a species. This is the first time that one species is giving birth to another. It's literally being born from our collective consciousness, from our cultural knowledge.</p></li></ul><ul><li><p>[About creating Superintelligence] It's not as hard as it looks, and it's going to happen, and it doesn't take big clusters to do that. Humans run on 20 watts (the power budget of the brain). Every kid will be able to run Superintelligence on their laptop at some point.</p></li></ul><ul><li><p>Aligning AI to user preferences and values is a system of control. But how can you control something that is more powerful than you are? The mission of Softmax is aligning to the greater whole. As an example of that: we have independent cells that form multicellular organisms, or ant colonies in which everyone knows their role.</p></li></ul><ul><li><p>AI is part of humanity, and we will be aligned to it, and it will be aligned to us. You can use a tool without being aligned to it. But you can't align to another living being (e.g. Superintelligence) without having it be aligned back to you.</p></li></ul><ul><li><p>[Advice to founders] Go after very crazy ideas. Nobody is safe; even safe ideas are not safe anymore. So go after really crazy ideas.</p></li></ul><p><strong><a href="https://ggl.link/FQu4h48">Watch the full talk here.</a></strong></p><div><hr></div><p>Some other useful bits: next Tuesday, we are hosting a talk with the <strong>NVIDIA team about their recent work presented at CVPR &#8212; DiffusionRenderer</strong>. It&#8217;s a neural rendering framework that can approximate how light behaves in the real world. 
It can manipulate light in images, like turning daytime into night or a sunny scene into a cloudy one. It can also generate synthetic data for autonomous vehicle and robotics research.</p><p><strong><a href="https://lu.ma/l9yse44r">Read the details of the upcoming talk and register to attend here (the talk is online).</a></strong></p><div><hr></div><p>And if you feel like watching more AI research talks, here are a few cool ones we recently hosted:</p><p><strong>Lawrence Chan from METR.org</strong> (the lab that evaluates frontier models) joined an AMA with our community on AI safety. He covered why <strong>AI self-awareness is a machine learning research problem</strong>, and discussed the <strong>risks of manipulation and deception by AI</strong>. He also shared METR&#8217;s recent finding that the capabilities of AI agents to carry out long-term tasks are doubling every seven months.</p><p><strong><a href="https://youtu.be/tebpQcvbUsw">Watch the talk here.</a></strong></p><div><hr></div><p>If you&#8217;re into AGI safety, check out <strong>Rohin Shah&#8217;s talk</strong>. Rohin leads the <strong>AGI Safety &amp; Alignment team at Google DeepMind</strong>. He shared some of the principles his team uses in their AGI safety research.</p><p><strong><a href="https://youtu.be/6NjASmeE5po">Watch the talk here.</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Peter Norvig's Take on AGI, US-China AI Race, AI Safety; Joscha Bach's Take on ChatGPT's Consciousness]]></title><description><![CDATA[Greetings, fellow humans!]]></description><link>https://buzzrobot.substack.com/p/peter-norvigs-take-on-agi-us-china</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/peter-norvigs-take-on-agi-us-china</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 16 Apr 2025 13:03:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Greetings, fellow humans! Sharing some of the most interesting conversations on AI we recently hosted at BuzzRobot Talks. <br><br><strong>Peter Norvig, Google's Director of Research and co-author of the AI textbook 'Artificial Intelligence: A Modern Approach'</strong>, had an ask-me-anything session with the BuzzRobot community. 
Here are the main takeaways from the conversation &#8212; check out the full conversation <a href="https://youtu.be/lK5av18VjYQ">on our YouTube channel here.</a></p><ul><li><p>AI development doesn't follow Kurzweil's exponential curve; it's asymptotic.</p></li><li><p>On the competition Google faces in search: don't think of Google as 10 blue links; rather, Google gives you access to the information you need.</p></li><li><p>Chinese startups have more access to compute.</p></li><li><p>Software engineers who have access to mission-critical AI should undergo certification.</p></li><li><p>Using AI for misinformation is a real risk and there is no clear path to mitigating it; users themselves should be more mindful and critical of the information they consume.</p></li></ul><p><strong><a href="https://youtu.be/lK5av18VjYQ">The recording of the conversation</a></strong></p><div><hr></div><p><strong>Community shout out!</strong></p><p>BuzzRobot community member Keith Deutsch put together a wonderful piece, <strong>"On Building Intelligence".</strong> In this article, he explores the essential <strong>differences between AI "training" and human "learning"</strong>, diving into how Episodic and Semantic memory work together in the human brain to enable human cognition and learning. <br><br><strong><a href="https://www.linkedin.com/pulse/training-isnt-learning-demystifying-ai-keith-deutsch-pjxfc/">Check out the article</a></strong></p><div><hr></div><p>Another recent guest, <strong>Joscha Bach, a cognitive scientist and AI researcher,</strong> discussed with the BuzzRobot community the <strong>capability of AI to have a subjective experience of reality</strong>, the levels of development of the human mind, enlightenment and transcendence, how to translate meaning to AI, and much more.<br><br><strong><a href="https://youtu.be/iyEFLKnNWAM">The recording of the conversation.</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Robots that can fold laundry autonomously: Exploring recent advancements in generalist robots]]></title><description><![CDATA[Plus: Is chain-of-thought a sign of genuine reasoning in LLMs?]]></description><link>https://buzzrobot.substack.com/p/robots-that-can-fold-laundry-autonomously</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/robots-that-can-fold-laundry-autonomously</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 12 Feb 2025 14:04:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Greetings, fellow humans! Do you dislike folding laundry as much as I do? I have good news for you &#8212; Physical Intelligence (Pi), a startup, is working on building generalist robot models with policies capable of solving long-horizon, dexterous tasks like unloading a dryer and folding laundry.</p><p>On Thursday, February 13,<strong> Danny Driess, a research scientist at Physical Intelligence (Pi)</strong>, will share with the BuzzRobot community <strong>the technical details behind their generalist robot models.</strong></p><p><strong><a href="https://lu.ma/35h3nb5e">Learn more about the talk and register here. 
The talk is virtual.</a></strong></p><div><hr></div><p>Next Thursday, February 20, our guest <strong>Akshara Prabhakar, an Applied Scientist at Salesforce AI</strong>, will give a talk <strong>on the chain-of-thought (CoT) method</strong>, exploring whether this approach demonstrates <strong>genuine reasoning in LLMs or is driven by shallow heuristics like memorization.</strong></p><p><strong><a href="https://lu.ma/40zw4308">Learn more and register here. This talk is also virtual.</a></strong></p><div><hr></div><p>If you're in the SF Bay Area and interested in attending our in-person talks, we are hosting <strong>Kathleen Kenealy, a research engineer and technical lead at Google DeepMind for Gemma&#8212;a family of open language models.</strong> She will cover the <strong>architecture and training methodology behind Gemma</strong>, emphasizing techniques for efficient scaling and resource optimization.</p><p><strong><a href="https://lu.ma/jiq0r5jv">Learn more and register to attend. The talk is in person. I hope to see you there!</a></strong></p><div><hr></div><p>I&#8217;d also like to share video recordings of recent lectures we hosted.</p><p>Check out this talk on <strong>AlphaQubit, an AI-based decoder that identifies quantum computing errors with high accuracy</strong>. It&#8217;s a great example of how AI and quantum computing complement each other&#8212;not just buzzwords! &#128578;<br><br><strong><a href="https://youtu.be/v6_q8Jg8uk8">Watch the lecture about AlphaQubit </a></strong></p><div><hr></div><p>Another recent lecture focused on <strong>identifying AI-generated text and the watermarking technology behind it.<br><br><a href="https://youtu.be/xuwHKpouIyE">Watch the lecture about SynthID-Text</a></strong></p><div><hr></div><p>Meta recently introduced <strong>CoTracker, a model capable of tracking 2D points in long video sequences.</strong> The tracker has cool applications in <strong>robotics and life sciences.</strong> This talk covers CoTracker and CoTracker3.</p><p><strong><a href="https://youtu.be/5pLesBuq7FY">Watch the lecture about CoTracker</a></strong></p><p></p>]]></content:encoded></item><item><title><![CDATA[How cognitive science helps evaluate the reasoning capabilities of LLMs ]]></title><description><![CDATA[Plus: AI that helps humans in democratic deliberation &#8212; video lecture]]></description><link>https://buzzrobot.substack.com/p/how-cognitive-science-helps-evaluate</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/how-cognitive-science-helps-evaluate</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Mon, 27 Jan 2025 14:03:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Greetings, fellow humans! 
With the DeepSeek R1 explosion, it&#8217;s a good time to discuss the reasoning capabilities of language models.</p><p>Join our next BuzzRobot talk, where <strong>Andrew Lampinen from Google DeepMind</strong>, whose research work is at the <strong>intersection of cognitive science and AI</strong>, will walk us through the question: <strong>do LLMs truly reason, or do they just repeat familiar patterns?</strong></p><p>In this talk, our guest will explore <strong>how approaches in cognitive science can be applied to evaluate the reasoning capabilities of language models.</strong> Specifically, he will focus on comparative methods (comparing capabilities across different systems) and rational analysis (analyzing behaviors as a rational adaptation to an environment).</p><p><strong><a href="https://lu.ma/y88tuswo">Learn the details of the talk and sign up to attend. The talk is virtual.</a></strong></p><div><hr></div><p>Last year, we started organizing BuzzRobot talks in an in-person format &#8211; and it was a blast. This year, we are planning to do it more frequently (currently only in the SF Bay Area). Even though our content is research-heavy, I've been thinking of making our in-person talks more practical &#8211; for example, inviting engineers who have deployed AI agents in production to share their experience. </p><p>So, if you are working on a cool technical project you'd like to share the practical details of, or can recommend an interesting project, I'd appreciate a note (just reply to this email).</p><div><hr></div><p>Speaking of in-person talks &#8212; watch the lecture from our recent offline talk about the <strong>AI mediator</strong> and how it <strong>helped humans find common ground on controversial topics in politics and socio-economic issues.</strong></p><p><strong><a href="https://youtu.be/6or-hNeObpI">Check out the lecture.</a></strong></p>]]></content:encoded></item><item><title><![CDATA[AI to Help Build Large-Scale Quantum Computers: AlphaQubit. Virtual Talk ]]></title><description><![CDATA[Plus: AI for Solving Physics Problems and a Lecture on AI Interpretability]]></description><link>https://buzzrobot.substack.com/p/ai-to-help-build-large-scale-quantum</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/ai-to-help-build-large-scale-quantum</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Tue, 21 Jan 2025 14:03:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Greetings, fellow humans! If you're interested in large-scale quantum computing and physics &#8212; join our upcoming BuzzRobot virtual talks.</p><div><hr></div><p><strong>Learning High-Accuracy Error Decoding for Quantum Processors</strong></p><p>Join us on <strong>January 23</strong> for a virtual talk with Thomas Edlich from the Science team at Google DeepMind, who will share how his team uses <strong>AI to build quantum computers. </strong>Building a large-scale quantum computer requires effective strategies to correct errors that inevitably arise in physical quantum systems. 
Quantum error-correction codes are a way to address this issue by encoding logical information redundantly into many physical qubits.</p><p>A key challenge in implementing such codes is accurately decoding noisy syndrome information extracted from redundancy checks to obtain the correct encoded logical information. To address this, the Google DeepMind team has developed <strong>AlphaQubit, a recurrent, transformer-based neural network that learns to decode the surface code.</strong></p><p>This talk will provide insights into how AI can go beyond human-designed algorithms by learning directly from data, and how it can help us build quantum computers.</p><p><strong><a href="https://lu.ma/h9elkz56">Read the details of the talk and register to attend</a></strong></p><div><hr></div><p><strong>Solving Hard Problems in Quantum Physics with Deep Learning <br></strong><br>Another upcoming talk from the Google DeepMind team will discuss how novel neural network architectures have helped their research team achieve unprecedented accuracy in solving for the quantum behavior of electrons. This talk will broadly explore how AI helps scientists solve challenging physics problems.</p><p><strong><a href="https://lu.ma/knq013xv">Read the details of the talk and register to attend here</a></strong></p><div><hr></div><p><strong>Lecture on AI Interpretability </strong><br><br>In this lecture, our guest <strong>Atticus Geiger</strong>, a Stanford graduate and head of the Pr(Ai)&#178;R Group, challenges current approaches to AI interpretability. He <strong>proposes a new method leveraging interventional data to better control and understand deep learning models.</strong></p><p><strong><a href="https://youtu.be/8rQ21zzfzc4">Watch the lecture on the BuzzRobot YouTube channel</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Watermarking AI-generated content with SynthID-Text by Google DeepMind]]></title><description><![CDATA[Plus: Meta's CoTracker3: Simpler and Better Point Tracking by Pseudo-Labeling Real Videos]]></description><link>https://buzzrobot.substack.com/p/watermarking-ai-generated-content</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/watermarking-ai-generated-content</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 09 Jan 2025 14:03:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Greetings, fellow humans! We are kicking off the new year with a lineup of talks about highly impactful recent AI research.</p><p><em><strong>SynthID-Text: Watermarking AI-generated text content</strong></em></p><p>AI-generated content often matches human quality, and it keeps getting better. This can be used by bad actors to deliberately spread misinformation and harm the entire Internet ecosystem. 
Until now, there was no production-ready, scalable solution to address the issue.</p><p>In this upcoming BuzzRobot talk, our guest <strong>Sumanth Dathathri, a research scientist at Google DeepMind</strong>, who is also the lead author of the work, will introduce <strong>SynthID-Text, a production-ready text watermarking system.</strong></p><p>Our guest will share with us the technical details of creating SynthID-Text, including its compatibility with advanced techniques like speculative sampling, crucial for enhancing the efficiency of production systems. </p><p>He'll also cover insights from a live experiment with <strong>about 20 million Gemini responses</strong>, showcasing how SynthID-Text maintains text quality based on direct user feedback.</p><p><strong><a href="https://lu.ma/d4re1gbg">Read more about the upcoming virtual talk and register here</a></strong></p><div><hr></div><p><em><strong>CoTracker3: Simpler and better point tracking by pseudo-labeling real videos</strong></em></p><p>When Meta released <strong>CoTracker,</strong> a model that jointly tracks 2D points in long video sequences, the GitHub repo of the project gained thousands of stars and the project proved to be useful for robotics research and video generation, as well as in the bio and medical domains.</p><p>The BuzzRobot guest, <strong>Nikita Karaev, the co-author of CoTracker</strong>, will share with us key insights from working on the project and the subsequent work, CoTracker3, a SOTA point tracker released a few months ago that improves training by pseudo-labeling real videos.<br><br><strong><a href="https://lu.ma/ed7pnb0n">Learn the details of the upcoming virtual talk and join the discussion next week</a></strong></p><div><hr></div><p>Since this year is considered the year of AI agents, check out this lecture with <strong>ex-OpenAI researcher Daniel Kokotajlo,</strong> who spoke about the deceptive nature of AI agents, among other insights about AGI.</p><p><strong><a href="https://youtu.be/5arm3Ygqia4">Watch the lecture</a></strong></p>]]></content:encoded></item><item><title><![CDATA[In the era of AI agents, interpretability is more crucial than ever before]]></title><description><![CDATA[Plus: Video lecture on Superintelligence with an ex-OpenAI researcher]]></description><link>https://buzzrobot.substack.com/p/in-the-era-of-ai-agents-interpretability</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/in-the-era-of-ai-agents-interpretability</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 11 Dec 2024 14:03:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! Join us for the last virtual talk of this year on the interpretability of AI systems. We are inevitably moving toward agentic AI, and by putting more trust in autonomous systems, we are increasing the risk of a catastrophic outcome for humanity. 
This makes interpretability matter more than ever before.</p><p>Current methods fall short, which raises a question: what alternatives are AI researchers working on?</p><p>Our guest, <strong>Atticus Geiger from Stanford and a lead at the Pr(Ai)&#178;R Group</strong>, an interpretability research lab, will share during the upcoming virtual talk some of the <strong>alternative routes that leverage interventional data</strong> (i.e., hidden representations after an intervention has been performed) to scale the task of controlling and understanding deep learning models.</p><p><strong><a href="https://lu.ma/kewqwyup">Read the details of the talk and register here.</a></strong></p><div><hr></div><p>A while ago, we hosted an AMA session with <strong>Daniel Kokotajlo, an ex-OpenAI researcher who worked on AI forecasting and alignment</strong>. He shared with the BuzzRobot community his understanding of where AI is headed: the timeline for AGI, what should count as AGI, the importance of energy supply for large-scale AI training, and issues with current methods of AI safety and alignment. I think this was one of the most insightful and interesting discussions we had this year.</p><p><strong><a href="https://youtu.be/5arm3Ygqia4">Watch the lecture.</a></strong></p><div><hr></div><p>Also, check out another talk we hosted recently on <strong>how RAG works for long-context LLMs</strong> and how to make it more efficient.</p><p><strong><a href="https://youtu.be/410PdHBkWO4">Watch the lecture</a></strong> &#8212; Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG.</p>]]></content:encoded></item><item><title><![CDATA[Pro-AI vs pro-human camps: Will autonomous AI divide society? ]]></title><description><![CDATA[Plus: OpenAI&#8217;s multi-agent framework, Swarm, and how to train a Stable Diffusion model with low budget &#8212; video lectures]]></description><link>https://buzzrobot.substack.com/p/pro-ai-vs-pro-human-camps-will-autonomous</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/pro-ai-vs-pro-human-camps-will-autonomous</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 27 Nov 2024 18:01:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! This Thanksgiving week, I&#8217;d like to share a couple of video recordings of our past talks &#8212; a good way to spend a long weekend&#128522;</p><p>If you missed Ilan Bigio from OpenAI presenting their multi-agent framework, Swarm, <strong><a href="https://youtu.be/zTRL8h-qtdg">check out the talk here.</a></strong> I found the Q&amp;A session especially interesting: we discussed the limitations of current AI agent frameworks and how the OpenAI team is thinking about mitigating them.</p><div><hr></div><p>Also, we hosted a talk on how to train a Stable Diffusion model with competitive performance for less than $2,000. Even though the model is not production-grade, it's still very useful for testing early, crazy ideas. <strong><a href="https://youtu.be/vd88vUkQVv8">Check out the lecture.</a></strong></p><div><hr></div><p>I also wanted to share <strong><a href="https://youtube.com/shorts/2L2Dp6XvR_o?feature=share">my short video</a></strong> and get your opinion on the topic. 
In conversations with friends, I noticed that we are divided into a pro-human camp (those thinking about how to help humans adapt to the AI world) and a pro-AI camp (those who perceive AI as sentient, don&#8217;t want it to be &#8220;<em>a slave of a few extremely wealthy humans</em>&#8221;, and want it to participate equally in society).</p><p>Which camp do you belong to?</p>]]></content:encoded></item><item><title><![CDATA[AI mediator helps humans find common ground on politically controversial topics]]></title><description><![CDATA[Plus: AI to fix real-world code vulnerabilities & the limitations of AI self-improvement techniques]]></description><link>https://buzzrobot.substack.com/p/ai-mediator-helps-humans-find-common</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/ai-mediator-helps-humans-find-common</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 20 Nov 2024 17:53:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! <a href="https://x.com/sopharicks">Sophia </a>here. This week&#8217;s newsletter is packed with cool talks we&#8217;ll be hosting in the coming weeks: from an AI mediator whose answers were rated better than humans&#8217;, to AI that fixes real-world code vulnerabilities, to discussions about the challenges of AI self-improvement techniques.</p><p>Let&#8217;s dive in!</p><div><hr></div><p>Finding agreement through a free exchange of views is often challenging. Collective deliberation can be slow, difficult to scale, and not equally attentive to all voices. Can AI be an efficient mediator? It turns out it's even better than humans!</p><p>In this study, our guest speakers from <strong>Google DeepMind, Michael Henry Tessler and Michiel Bakker, trained an AI to mediate human deliberation.</strong> Using participants&#8217; personal opinions and critiques, the AI mediator iteratively generated and refined statements that expressed common ground among the group on social or political issues. Interestingly, nearly 6,000 participants preferred AI-generated statements over those written by human mediators, <strong>rating them as more informative, clear, and unbiased.</strong></p><p><strong><a href="https://lu.ma/7wq8i0jk">This is an in-person talk &#8212; if you&#8217;re in the SF Bay Area, check it out and register to attend!</a></strong></p><div><hr></div><p>Google recently introduced BigSleep, an AI agent that uses LLMs (in this case Gemini) to automatically find vulnerabilities in code without relying on traditional security methods. Recently, <strong>BigSleep discovered a critical vulnerability in SQLite, highlighting its ability to uncover previously undetected flaws.</strong> The AI system has the potential to accelerate vulnerability discovery, making it a valuable tool for proactively identifying security issues.</p><p><strong><a href="https://lu.ma/lo28qd56">Details and registration are here. The talk is virtual.</a></strong></p><div><hr></div><p>Researchers are actively working on using AI to improve other AIs. 
As a result, we are witnessing the rise of new techniques like RLAIF (Reinforcement Learning from AI Feedback) and self-rewarding mechanisms, both aiming to enable AI models to evolve without human feedback.</p><p>Building on these ideas, <strong>meta-rewarding approaches attempt to further this evolution </strong>by allowing models to assess and refine their own outputs, including both responses and judgments.</p><p><strong>The challenge is that these techniques often require significant engineering effort, and their improvements tend to be domain-specific, making them less likely to generalize well.</strong></p><p>In this virtual talk, we will discuss the challenges of these techniques and explore avenues for AI self-improvement.</p><p><strong><a href="https://lu.ma/pppt09w8">Details of the talk and registration are here.</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Using AI to solve the Schrödinger equation by Google DeepMind. Virtual talk ]]></title><description><![CDATA[Plus: Increased retrieved information negatively affects LLM performance. Virtual talk]]></description><link>https://buzzrobot.substack.com/p/using-ai-to-solve-the-schrodinger</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/using-ai-to-solve-the-schrodinger</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Wed, 13 Nov 2024 14:03:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! <a href="https://x.com/sopharicks">Sophia</a> here with a lineup of awesome talks. We select the most interesting research papers and discuss them as a community with the authors. Join one of our upcoming talks next week.</p><div><hr></div><p>One of the promises of AGI is to help us solve many (if not all) fundamental physics problems. But even with the AI systems we have today, researchers are managing to tackle very challenging physics problems.</p><p>One area that has seen an explosion of interest in recent years is the use of neural networks to solve one of the most fundamental equations in physics: <strong>the Schr&#246;dinger equation</strong>, which describes the probabilistic behavior of quantum particles in much the same way Newton's equations describe classical motion.</p><p><em>If researchers could solve the Schr&#246;dinger equation accurately and at scale, the entirety of chemistry and condensed matter physics could be derived computationally from first principles.</em></p><p>In this talk, <strong>our guest David Pfau &#8212; staff research scientist at Google DeepMind &#8212; will describe how his team has used deep learning to achieve unprecedented accuracy in solving for the quantum behavior of electrons using novel neural network architectures.</strong></p><p><strong><a href="https://lu.ma/knq013xv">Read more about the upcoming talk and register to attend. It&#8217;s virtual.</a></strong></p><div><hr></div><p>If you're into RAG and LLMs (apologies for the buzzword overload in one sentence) &#8211; this talk is for you. 
Our guest speaker, Bowen Jin from the University of Illinois Urbana-Champaign, found in his research that <strong>increasing the amount of retrieved information for long-context LLMs negatively affects their performance.</strong> Empirical findings show that for many long-context LLMs, the quality of generated output initially improves but then declines as the number of retrieved passages increases. Bowen will share with the BuzzRobot community the techniques he used to overcome this challenge.</p><p><strong><a href="https://lu.ma/k9k5o5a4">Details of the upcoming virtual talk and registration can be found here.</a></strong></p><div><hr></div><p>We recently hosted a talk on how children perceive AI and the potential impact of the technology on their minds. </p><p><strong><a href="https://youtu.be/VSFeQxSsoFM">Check out the video recording </a></strong>&#8212; if you&#8217;re a parent, thinking through this topic could be especially useful.</p>]]></content:encoded></item><item><title><![CDATA[OpenAI's design patterns for autonomous AI agents. Virtual talk]]></title><description><![CDATA[Swarm, a framework for exploring multi-agent orchestration.]]></description><link>https://buzzrobot.substack.com/p/openais-design-patterns-for-autonomous</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/openais-design-patterns-for-autonomous</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Thu, 07 Nov 2024 14:02:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, fellow human! I wanted to share a virtual talk we are hosting next Tuesday, November 12th, with Ilan Bigio from OpenAI. He&#8217;ll present <a href="https://github.com/openai/swarm">Swarm</a>, a framework for exploring multi-agent orchestration.</p><p>We are entering the era of large-scale agentic AI with millions of autonomous AI agents being deployed into production. Putting safety and alignment challenges aside, there are a bunch of technical problems to solve, including finding optimal architectural patterns. </p><p>In this talk, Ilan will discuss different design approaches that his team has identified for autonomous agent orchestration.</p><p><strong><a href="https://lu.ma/6l07vqgk">Learn the details of the talk here and register to attend.</a></strong> As a reminder, the talk is virtual.</p>]]></content:encoded></item><item><title><![CDATA[How to train Stable Diffusion models with a $2,000 budget. 
Virtual talk]]></title><description><![CDATA[Plus: The current state of robotics and how far we are from fully autonomous robots]]></description><link>https://buzzrobot.substack.com/p/how-to-train-stable-diffusion-models</link><guid isPermaLink="false">https://buzzrobot.substack.com/p/how-to-train-stable-diffusion-models</guid><dc:creator><![CDATA[Sophia Aryan]]></dc:creator><pubDate>Tue, 05 Nov 2024 21:03:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721b7c6e-8319-4041-94ea-18f539565717_1042x1042.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, <a href="https://x.com/sopharicks">Sophia</a> here &#8212; join us for our upcoming virtual talks on the cutting edge of AI research, discuss research papers directly with their authors, and connect with cool people.</p><div><hr></div><p>Billions of dollars spent on training a frontier AI model? Not necessarily! Our guest, Vikash Sehwag from Sony AI, will share<strong> tricks and tweaks on how to train high-performance, large-scale diffusion models at an order of magnitude lower computational cost than existing SOTA models.</strong></p><p><strong><a href="https://lu.ma/4796vyyo">Read the details and register for the talk.</a></strong></p><div><hr></div><p>Check out the video lecture by <strong>Alex Irpan from Google DeepMind on what&#8217;s happening in robotics these days</strong>, what robots can and cannot do autonomously, and how large language models are helping accelerate robotics research.</p><p><a href="https://youtu.be/XocmVe1FCMY">Watch the lecture on the BuzzRobot channel and share it with your network if you find it useful.</a></p>]]></content:encoded></item></channel></rss>