<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Teaching Python Newsletter]]></title><description><![CDATA[The latest thoughts on Pedagogy and Programming]]></description><link>https://teachingpythonpodcast.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!-xXl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fteachingpythonpodcast.substack.com%2Fimg%2Fsubstack.png</url><title>Teaching Python Newsletter</title><link>https://teachingpythonpodcast.substack.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 13 Apr 2026 15:03:16 GMT</lastBuildDate><atom:link href="https://teachingpythonpodcast.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Teaching Python LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[teachingpythonpodcast@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[teachingpythonpodcast@substack.com]]></itunes:email><itunes:name><![CDATA[Teaching Python Podcast]]></itunes:name></itunes:owner><itunes:author><![CDATA[Teaching Python Podcast]]></itunes:author><googleplay:owner><![CDATA[teachingpythonpodcast@substack.com]]></googleplay:owner><googleplay:email><![CDATA[teachingpythonpodcast@substack.com]]></googleplay:email><googleplay:author><![CDATA[Teaching Python Podcast]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Loop That Learns Too Much]]></title><description><![CDATA[Unlike a Python loop, an AI loop can spiral unless we add guardrails.]]></description><link>https://teachingpythonpodcast.substack.com/p/the-loop-that-learns-too-much</link><guid 
isPermaLink="false">https://teachingpythonpodcast.substack.com/p/the-loop-that-learns-too-much</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Mon, 16 Jun 2025 20:56:40 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/166104233/19320087be394bda6c6838a49e61a867.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In Python, a loop is simple. You iterate over a list, repeat a process, and stop when you&#8217;re done. It doesn&#8217;t adapt. It doesn&#8217;t change. It just follows instructions.</p><p>But AI loops are different.</p><p>In machine learning, each pass through the data changes the model. That&#8217;s the point. It learns from mistakes, adjusts its parameters, and tries again. These loops are adaptive&#8212;and powerful.</p><p>But here&#8217;s the catch:<br>If the training data is biased, uncleaned, or unvalidated, the loop doesn&#8217;t just repeat the error&#8212;it <em>amplifies</em> it.</p><p>One well-known example? A r&#233;sum&#233;-screening AI trained on historical hiring data learned to downgrade applicants with female-associated names. Why? Because the loop learned from flawed patterns. And no one stopped it.</p><p>That&#8217;s what makes AI loops dangerous:</p><ul><li><p>They <strong>overfit</strong> on noise.</p></li><li><p>They <strong>codify bias</strong> into logic.</p></li><li><p>They <strong>forget</strong> useful old knowledge when over-optimized on the new.</p></li></ul><p>Unlike a Python loop, an AI loop can spiral unless we add guardrails.</p><p>And here&#8217;s the important part for teachers:<br>Most of us aren&#8217;t training our own models. We&#8217;re using AI tools built by others. 
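</p><p>A quick classroom aside: the difference between a loop that just repeats and a loop that learns fits in a few lines of Python. This is a toy sketch of the idea, not a real training loop:</p><pre><code># A plain Python loop: it behaves the same on every pass.
for item in [1, 2, 3]:
    print(item * 2)

# A toy "adaptive" loop: each pass updates a weight, so a small,
# consistent bias in the data compounds instead of averaging out.
weight = 1.0
biased_data = [1.1, 1.1, 1.1]  # every example nudges the same direction
for example in biased_data:
    weight *= example  # the loop learns from, and amplifies, the bias
print(round(weight, 3))  # 1.331
</code></pre><p>Three passes and the drift is already visible; real training loops run millions of passes.</p><p>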
So our guardrails are different.</p><p>We teach students to prompt wisely.<br>To question outputs.<br>To ask: <em>What did this AI learn&#8212;and should it have?</em></p><p>Because at the end of the day, we&#8217;re not just teaching code.<br>We&#8217;re teaching the judgment to break the loop, before it learns too much.</p>]]></content:encoded></item><item><title><![CDATA[AI and Education: Separating Fact from Fiction]]></title><description><![CDATA[Who to Trust When Everyone Sounds Like an Expert]]></description><link>https://teachingpythonpodcast.substack.com/p/ai-and-education-separating-fact</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/ai-and-education-separating-fact</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Sat, 14 Jun 2025 14:20:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/165939967/ad38a33ed34168ed90bb6fabc966230c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>With AI developing rapidly and countless "experts" sharing advice online, it's challenging for educators to distinguish credible information from hype. This video provides three practical criteria to help teachers evaluate AI and education content they encounter on social media and professional platforms.</p><h2>Key Takeaways</h2><h3>1. 
Look for Evidence-Based Claims</h3><p><strong>What to look for:</strong></p><ul><li><p>Specific references to research studies or data</p></li><li><p>Acknowledgment of limitations and uncertainties</p></li><li><p>Clear anecdotes from real classroom experiences</p></li><li><p>Survey data and perspective-based studies</p></li></ul><p><strong>Red flags:</strong></p><ul><li><p>Sweeping generalizations like "AI will transform everything"</p></li><li><p>Claims without supporting evidence</p></li><li><p>Content focused primarily on generating views or engagement</p></li></ul><p><strong>Reality check:</strong> According to recent Pew Research data from 2024, only 6% of K-12 teachers currently believe AI tools do more good than harm in education, with 25% saying they do more harm than good. This suggests the landscape is more complex than many AI enthusiasts present.</p><h3>2. Verify Educational Connections</h3><p><strong>What credible sources have:</strong></p><ul><li><p>Current, active involvement in educational environments (classrooms, online courses, tutoring, professional development)</p></li><li><p>Strong connections to practicing educators</p></li><li><p>Understanding of modern educational challenges and contexts</p></li></ul><p><strong>Be cautious of:</strong></p><ul><li><p>Advice from people whose teaching experience is decades old</p></li><li><p>Generic recommendations that could apply to any field</p></li><li><p>Content that ignores the complexity and nuance of education environments</p></li></ul><p><strong>Why this matters:</strong> Education has evolved significantly even before AI entered the picture. Current, relevant experience is essential for practical AI guidance.</p><h3>3. 
Consider Motivations and Financial Interests</h3><p><strong>Questions to ask:</strong></p><ul><li><p>Is this person selling an AI product or service?</p></li><li><p>What's their primary motivation for creating this content?</p></li><li><p>Do they acknowledge the complexities of classroom implementation?</p></li><li><p>Are they promising unrealistic solutions to complex problems?</p></li></ul><p><strong>Green flags:</strong></p><ul><li><p>Acknowledgment of challenges alongside opportunities</p></li><li><p>Realistic timelines and expectations</p></li><li><p>Focus on improving education rather than promoting products</p></li><li><p>Recognition that no single tool solves all educational problems</p></li></ul><p><strong>Note:</strong> Some AI education products are evidence-based and developed with educator input. The key is distinguishing between those and products created primarily for market capture.</p><h2>Research and Data</h2><h3>Key Study Referenced</h3><ul><li><p><strong>Pew Research Center Survey (May 15, 2024)</strong>: "A Quarter of U.S. 
Teachers Say AI Tools Do More Harm Than Good in K-12 Education"</p><ul><li><p>25% say AI does more harm than good</p></li><li><p>32% see equal benefits and harms</p></li><li><p>6% say AI does more good than harm</p></li><li><p>35% are unsure</p></li><li><p><a href="https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/">Read the full study &#8594;</a></p></li></ul></li></ul><h3>Additional Context</h3><p>The rapid pace of AI development makes traditional longitudinal studies challenging to conduct, making survey data and qualitative research particularly valuable for understanding current educator perspectives.</p><h2>For Further Exploration</h2><h3>Recommended Approach</h3><p>When encountering AI and education content online:</p><ol><li><p><strong>Check the source</strong> - Look up the author's background and current educational involvement</p></li><li><p><strong>Examine the evidence</strong> - Ask what research or data supports their claims</p></li><li><p><strong>Consider the context</strong> - Does the advice account for real classroom constraints and challenges?</p></li><li><p><strong>Follow the money</strong> - Understand what the content creator might gain from their recommendations</p></li></ol><h3>Questions for Critical Evaluation</h3><ul><li><p>Does this person currently work with students or teachers?</p></li><li><p>Are they citing specific research or just making broad claims?</p></li><li><p>Do they acknowledge both benefits and challenges of AI in education?</p></li><li><p>Are they selling something that might bias their perspective?</p></li><li><p>Do their recommendations seem practical for typical classroom environments?</p></li></ul><h2>The Bottom Line</h2><p>AI has genuine potential in education, but the field is still evolving. 
The most credible voices acknowledge both opportunities and challenges, ground their advice in evidence and current practice, and respect the complexity of teaching and learning.</p><p>As educators, we serve our students best by being discerning consumers of AI information - embracing useful innovations while maintaining healthy skepticism about promises that sound too good to be true.</p><div><hr></div><p><em>Have thoughts on evaluating AI education content? Share this post with other educators and continue the conversation about separating fact from fiction in our rapidly evolving field.</em></p><h3>Additional Resources</h3><ul><li><p>For more comprehensive information on AI in education research, check academic databases for peer-reviewed studies</p></li><li><p>Government education departments often provide evidence-based guidance on educational technology adoption</p></li><li><p>Professional education organizations typically offer more measured perspectives than social media influencers</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Why Chunking Matters—in Learning, in Python, and in AI]]></title><description><![CDATA[We all teach students to break big problems into smaller steps. 
But what if I told you that same strategy&#8212;chunking&#8212;is the foundation for everything from memory to machine learning?]]></description><link>https://teachingpythonpodcast.substack.com/p/why-chunking-mattersin-learning-in</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/why-chunking-mattersin-learning-in</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Wed, 11 Jun 2025 22:55:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/165745244/b9641134a44248f75b521aa259bce448.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In this video, I connect three worlds:</p><ul><li><p>How chunking supports student learning</p></li><li><p>How we teach it through Python variables and string formatting</p></li><li><p>And how AI models rely on chunking to function</p></li></ul><p>We talk a lot about AI in the classroom: how to prompt it, how to use it, how it doesn&#8217;t think. But we rarely talk about how it DOES mirror human thinking. This video is about <strong>chunking</strong>: not just as a teaching strategy, but as a bridge between how <em>humans learn</em> and how <em>AI models process information</em>.</p><p>In cognitive science, chunking is how we manage complexity&#8212;breaking things into smaller parts so we don&#8217;t overwhelm our working memory. It&#8217;s how we remember phone numbers, take notes, and scaffold learning for students.</p><p>Now here&#8217;s the interesting part&#8212;AI models do something similar. They process language in tokens and use parameters like <code>stream=False</code> to control how output is delivered. When streaming is off, the AI waits until it&#8217;s &#8220;done&#8221; to give you everything at once. No chunks, no partial thoughts&#8212;just a final, uninterrupted stream.</p><p>Sounds helpful, right? Sometimes. 
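</p><p>You can model the <code>stream</code> difference for students in plain Python, no API key required. This is a classroom sketch of the behavior, not a real model call:</p><pre><code>def respond_all_at_once(words):
    # stream=False style: hold everything, deliver one final answer
    return " ".join(words)

def respond_in_chunks(words):
    # stream=True style: yield each piece as soon as it is ready
    for word in words:
        yield word

answer = ["Chunking", "keeps", "working", "memory", "free"]
print(respond_all_at_once(answer))
for chunk in respond_in_chunks(answer):
    print(chunk)
</code></pre><p>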
But just like students, AI doesn&#8217;t always do its best work when forced to hold too much at once.</p><p>There&#8217;s a connection here worth exploring: humans chunk to learn, and machines chunk to reason. When we teach students to notice that parallel, we are helping them understand not just how to prompt but how to think, structure, and communicate more clearly.</p><p></p>]]></content:encoded></item><item><title><![CDATA[The Next Big Thing in AI: Model Context Protocol (MCP) and Agentic AI]]></title><description><![CDATA[A deep dive into the cutting-edge technology that's unlocking AI agents]]></description><link>https://teachingpythonpodcast.substack.com/p/the-next-big-thing-in-ai-model-context</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/the-next-big-thing-in-ai-model-context</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Wed, 11 Jun 2025 02:52:30 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/165677990/67927ed26b5b528b6d9f18fd2c45ef68.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Hi everyone! Kelly and I are taking some time this summer to relax and reflect on the rapidly evolving world of AI and education. Today, I want to introduce you to a concept that's brand new and incredibly exciting - something we're all still learning about together.</p><p>It's called <strong>Model Context Protocol</strong>, or <strong>MCP</strong> for short.</p><h2>What is Model Context Protocol (MCP)?</h2><p>An MCP is a special kind of API that conforms to a specific format, allowing large language models to use that API in really neat and interesting ways.</p><p>Think about the last time you used an AI system and asked it to look up information about the latest sports scores. Maybe it was able to figure that out by doing a search on the internet and then using that search as a prompt to feed back into the model.</p><p>But MCPs are even cooler than that. 
They provide a structured way for models to go get information from various sources on the internet and have that information be accurate and up-to-date. So instead of the AI guessing at what the sports score is, it's getting the real information from that endpoint in a way that the model can understand it, process it, and use it in a much richer way.</p><p><strong>Learn more about MCP:</strong></p><ul><li><p><a href="https://modelcontextprotocol.io/introduction">Official MCP Documentation</a> - The definitive technical guide</p></li><li><p><a href="https://www.anthropic.com/news/model-context-protocol">Anthropic's MCP Introduction</a> - The original announcement</p></li><li><p><a href="https://simplescraper.io/blog/how-to-mcp">Complete MCP Implementation Guide</a> - Practical tutorial for building MCP servers</p></li></ul><h2>The Power of Agentic AI</h2><p>What's really interesting about this is that MCP also unlocks something called <strong>agentic AI</strong> - where the AI can act as an agent on your behalf. This is possible because MCPs also define actions that the model can perform on your behalf in a structured way.</p><p>Since I'm traveling and I've flown this week, let's talk about another kind of agent: a travel agent.</p><h3>The Travel Agent Example</h3><p>Think about the interaction you have with a travel agent when booking a vacation. You might call the agent and ask them:</p><ul><li><p>Information about what trips are available</p></li><li><p>What the airfare to your favorite destination looks like</p></li><li><p>What are the specials this week</p></li></ul><p>That's an example of an MCP interaction where the model might be trying to retrieve information from that endpoint and use it to give you a better response.</p><p>But when you go to the next level with an MCP, you can also define actions that it can take. 
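</p><p>To make this concrete, here is a toy, MCP-flavored tool definition in Python. The names and shape are illustrative only; see the official MCP documentation for the real specification:</p><pre><code># Hypothetical tool description, loosely modeled on MCP's
# name / description / input-schema pattern (illustrative only).
book_flight_tool = {
    "name": "book_flight",
    "description": "Book a flight and return a confirmation number",
    "inputSchema": {
        "type": "object",
        "properties": {
            "destination": {"type": "string"},
            "date": {"type": "string"},
        },
        "required": ["destination", "date"],
    },
}

def handle_tool_call(name, arguments):
    # Server side: perform the action the model requested.
    if name == "book_flight":
        return {"confirmation": "ABC123", "destination": arguments["destination"]}
    return {"error": "unknown tool"}

print(handle_tool_call("book_flight", {"destination": "Miami", "date": "2025-07-01"}))
</code></pre><p>The key idea: because the tool and its inputs are declared in a structured way, the model can both discover what actions exist and request them safely.</p><p>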
So not only can the MCP provide information about those airfares and flights, it also gives the model the opportunity to <strong>book those flights on your behalf</strong> and give you a confirmation number in return.</p><p><strong>Explore Agentic AI further:</strong></p><ul><li><p><a href="https://www.ibm.com/think/insights/ai-agents-2025-expectations-vs-reality">IBM's Guide to AI Agents in 2025</a> - Professional insights on what's coming</p></li><li><p><a href="https://hbr.org/2024/12/what-is-agentic-ai-and-how-will-it-change-work">Harvard Business Review on Agentic AI</a> - How it will transform work</p></li><li><p><a href="https://www.coursera.org/learn/agentic-ai">Coursera's Agentic AI Course</a> - Structured learning path</p></li></ul><h2>Why This Matters Right Now</h2><p>These MCPs are really powerful, and they're brand new. We're seeing a lot of new places that are bringing them out. This is the latest trend, unlocking this whole power of agentic AI.</p><p>Being able to do these things represents some of the latest cutting-edge stuff in artificial intelligence. Gartner has identified agentic AI as the top strategic technology trend for 2025, and we're seeing rapid adoption across industries.</p><p><strong>Hands-on Learning Resources:</strong></p><ul><li><p><a href="https://huggingface.co/learn/mcp-course/unit0/introduction">Hugging Face MCP Course</a> - Free comprehensive course from beginner to advanced</p></li><li><p><a href="https://www.datacamp.com/tutorial/mcp-model-context-protocol">DataCamp MCP Tutorial</a> - Hands-on tutorial with demo projects</p></li><li><p><a href="https://www.analyticsvidhya.com/blog/2024/10/learning-path-for-ai-agents/">Analytics Vidhya AI Agents Learning Path</a> - Step-by-step guide to expertise</p></li><li><p><a href="https://github.com/modelcontextprotocol">MCP GitHub Repository</a> - Open source examples and implementations</p></li></ul><h2>The Big Picture</h2><p>Keep an eye on this space. 
Look for the <strong>MCP</strong> term. Look for <strong>agentic AI</strong>. Recognize that it's really just a special kind of API that you can even write yourself.</p><p>The exciting part is that we're at the very beginning of this technology. It's so new that we're all learning how it works together. But the potential is enormous - we're talking about AI systems that don't just answer questions, but can actually take meaningful actions in the real world on your behalf.</p><p>Whether you're an educator, developer, or just someone curious about where AI is heading, now is the perfect time to start exploring these concepts. The foundations are being laid right now for what could be a fundamental shift in how we interact with artificial intelligence.</p><p>More on this to come as we continue exploring the intersection of AI and education!</p><div><hr></div><p><em>Want to stay updated on the latest developments in AI and educational technology? Subscribe for more insights as we navigate this exciting frontier together.</em></p>]]></content:encoded></item><item><title><![CDATA[What’s in a Model? ]]></title><description><![CDATA[Teaching Students to See Through the "Illusion of Thinking"]]></description><link>https://teachingpythonpodcast.substack.com/p/whats-in-a-model</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/whats-in-a-model</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Mon, 09 Jun 2025 22:01:57 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/165581845/94b5f668f0fe930312f3030f53967875.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Surprise!!!! Our  advanced AI models&#8212;don&#8217;t actually think! </p><p>Apple released a paper two days ago and everyone is talking about it.</p><p>Models are fluent. They&#8217;re fast. But they don&#8217;t reason. According to Apple&#8217;s 2025 research, these models excel at pattern recognition and surface-level fluency. 
They do great on simple tasks, stumble on anything unfamiliar, and collapse when reasoning gets hard. As complexity increases, the models don&#8217;t step up, they often step back. Even when given the right algorithm, they fail to apply it.</p><p>So, what&#8217;s really in a model?</p><ul><li><p><strong>Data</strong>: Trained on massive text and code corpora</p></li><li><p><strong>Prediction</strong>: Built to guess what comes next</p></li><li><p><strong>Understanding</strong>: They mimic, but don&#8217;t comprehend</p></li></ul><p>This matters in education. Because our students are growing up in a world full of tools that <em>sound</em> smart. And if we&#8217;re not careful, they&#8217;ll mistake fluency for intelligence.</p><p>Here&#8217;s how we build real <strong>AI literacy</strong> in the classroom:</p><ul><li><p><strong>Fluency &#8800; Understanding</strong><br>Just because something sounds right doesn&#8217;t mean it is. Teach students to question the <em>how</em>, not just the <em>what</em>.</p></li><li><p><strong>Prompt for Inquiry</strong><br>Don&#8217;t settle for one answer. Vary the prompt. Compare outputs. Reflect on why the responses change.</p></li><li><p><strong>Predict&#8211;Observe&#8211;Explain</strong><br>Let students guess what AI will say, check the result, then explain where it worked and where it didn&#8217;t. That&#8217;s metacognition.</p></li><li><p><strong>Test Generalization</strong><br>Can the logic from one problem transfer to another? With students, eventually yes. With AI, often not.</p></li><li><p><strong>Foster Critical Thinking</strong><br>Teach students when <em>not</em> to trust the tool. Teach them to reason better than the machine.</p><p></p></li></ul><p>It is kind of like the difference between <strong>Bloom&#8217;s knowledge level</strong> and the <strong>analysis, evaluation, and synthesis students are capable of when they&#8217;re taught to think</strong>.</p><p>So yes, AI is powerful. But it doesn&#8217;t understand. 
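</p><p>One way to make &#8220;prediction, not comprehension&#8221; concrete for students is a toy next-word model. It is absurdly simpler than a real LLM, but the failure mode is the same in kind:</p><pre><code>from collections import Counter, defaultdict

# A toy "language model": it only knows which word followed which
# in its training text. No meaning anywhere, just counted patterns.
training = "the cat sat on the mat the cat ran".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(training, training[1:]):
    follows[current_word][next_word] += 1

def predict(word):
    # Pick the most common follower, however little sense it makes.
    options = follows[word]
    return options.most_common(1)[0][0] if options else "?"

print(predict("the"))  # "cat"
</code></pre><p>Students can extend the training text and watch the &#8220;model&#8221; get more fluent without ever understanding a word.</p><p>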
And that&#8217;s good for teachers, because that is our job! So relax over the summer, but when we return in August, let&#8217;s make sure our students don&#8217;t just use AI; let&#8217;s teach them how to outthink it.</p><p><a href="https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf">Read Apple&#8217;s paper, &#8220;The Illusion of Thinking&#8221;</a></p>]]></content:encoded></item><item><title><![CDATA[Tip #12 Temperature Isn’t Always Constant—Except When It Is]]></title><description><![CDATA[The teachable power of hyperparameters...]]></description><link>https://teachingpythonpodcast.substack.com/p/tip-12-temperature-isnt-always-constantexcept</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/tip-12-temperature-isnt-always-constantexcept</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Thu, 05 Jun 2025 20:16:10 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/165296377/b820e46d6eb79a12aa2c5620cdf95118.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Recently, I&#8217;ve been learning how to run local AI models and tweak them using Python. It&#8217;s been a rewarding challenge for my brain: learning to configure models, control outputs, and understand how a few lines of code can completely reshape the way an AI behaves.</p><p>One of the first things I came across was something called <strong>temperature</strong>. Not the weather kind, but a hyperparameter that controls how random or predictable the model&#8217;s responses are.</p><p>Set it to <code>0.2</code> and you get safe, stable, predictable output. The AI sticks to the most probable words. This temperature is perfect for instructions, summaries, or technical writing. Set it to <code>1.2</code>, and suddenly it starts taking creative leaps. New phrasings. Surprising ideas. Sometimes brilliant. Sometimes unusable. 
But always different.</p><p>And here&#8217;s what clicked for me: this is exactly the kind of thing we should be teaching in CS classes, not just for AI but for helping students understand <strong>constants</strong> and <strong>hyperparameters</strong> in real-world code.</p><p>Temperature is a <strong>constant</strong>. It&#8217;s a value you set before a model runs. It doesn&#8217;t change during execution. And that constant fundamentally shifts how the system behaves. Just like we teach students to define <code>PI = 3.14159</code>, we can teach them to define <code>temperature = 0.7</code> and explain how that choice affects creativity, structure, or precision in output.</p><p>Most students will never get excited about the word <em>hyperparameter</em>. But they <em>will</em> get curious when they change one value and suddenly the AI&#8217;s personality shifts.</p><p>It&#8217;s not just about randomness. It&#8217;s about control. About design. About structure. And that&#8217;s the kind of thinking we want students to internalize: to be not just users of AI, but builders, tuners, and thinkers who understand how systems work under the hood.</p><p>We talk a lot about teaching computational thinking, but this is what it looks like in practice: taking one small constant and using it to open up a conversation about agency in code.</p><p>So the next time you are teaching your students about variables and constants, don&#8217;t just call it syntax. Call it a choice, and use it to explain variability in AI models. 
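</p><p>If you want to show the math behind the knob, a toy softmax sketch works. This illustrates what the temperature constant does to word probabilities; it is not how any particular model implements it:</p><pre><code>import math

TEMPERATURE = 0.7  # a constant: set before the "model" runs, never during

def word_probabilities(logits, temperature):
    # Divide each raw score by temperature, then normalize to probabilities.
    scaled = [score / temperature for score in logits]
    top = max(scaled)
    weights = [math.exp(s - top) for s in scaled]
    total = sum(weights)
    return [round(w / total, 3) for w in weights]

words = ["the", "a", "purple"]
logits = [2.0, 1.0, 0.1]  # made-up raw scores for the next word
print(word_probabilities(logits, 0.2))  # low temperature: "the" dominates
print(word_probabilities(logits, 1.2))  # high temperature: choices flatten out
</code></pre><p>Lower the constant and the safest word wins almost every time; raise it and &#8220;purple&#8221; starts getting real chances.</p><p>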
That&#8217;s where the learning happens.</p>]]></content:encoded></item><item><title><![CDATA[AI Modularity: A New Way to Teach Problem Decomposition]]></title><description><![CDATA[How large language models can transform the way we teach computational thinking]]></description><link>https://teachingpythonpodcast.substack.com/p/ai-modularity-a-new-way-to-teach</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/ai-modularity-a-new-way-to-teach</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Thu, 05 Jun 2025 01:22:20 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/165236245/56facb2e96bb037e9956279b422fd1fa.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>When we talk about modularity in programming, we immediately think of functions, libraries, and decorators&#8212;the building blocks that help us organize and abstract our code. But there's a different kind of modularity emerging in the age of AI, one that has profound implications for how we teach problem-solving and computational thinking.</p><h2>Beyond Code: Modularity in Thought</h2><p>Working with AI models like ChatGPT or Claude isn't just about getting answers&#8212;it's about learning to break down complex problems into digestible pieces. This mirrors what great teachers have always done: taking overwhelming concepts and decomposing them into manageable steps that students can actually follow.</p><p>The breakthrough insight is that AI can serve as both a collaborator in this decomposition process and a tool for teaching students how to do it themselves. When you prompt an AI to help break down a complex problem, you're not just solving that specific problem&#8212;you're modeling a crucial thinking skill.</p><h2>The Teaching Moment</h2><p>Here's where it gets exciting for educators: you can create exercises that are entirely focused on this decomposition process. 
Instead of giving students a problem and expecting them to solve it, give them a complex scenario and ask them to work with an AI to break it down into smaller, more manageable components.</p><p>The learning happens in the iteration. Students quickly discover whether they've broken things down effectively. Did the AI understand their decomposition? Could it work with their proposed steps? Was the breakdown too granular or not detailed enough?</p><h2>Instant Feedback at Scale</h2><p>What makes this approach revolutionary is the feedback loop. In a traditional classroom, a teacher might work with one student at a time to help them think through problem decomposition. But with AI as a teaching assistant, every student can engage in this iterative process simultaneously, getting immediate feedback on their thinking.</p><p>You can even instruct the AI to be deliberately skeptical: "Question my approach," or "Find potential flaws in how I've broken this down," or "Suggest alternative ways to decompose this problem." This turns the AI into a thinking partner that challenges students to refine their approach.</p><h2>Practical Classroom Applications</h2><p>The beauty of this method is its flexibility. You might start with something as simple as the classic "teach a computer to make a peanut butter and jelly sandwich" exercise, but now students can actually have a conversation with the AI about each step, testing whether their instructions are clear and complete.</p><p>Or tackle more complex scenarios: How would you design a recycling program for your school? How would you plan a community event? How would you approach learning a new programming language? Each of these can become exercises in decomposition, with the AI serving as both collaborator and critic.</p><h2>Assessment Through Process</h2><p>The most interesting aspect of this approach is how it shifts assessment. Instead of evaluating the final solution, you're evaluating the thinking process. 
Ask students to reflect on their experience: What kinds of steps did the AI help them identify? Were there too many steps or too few? Did the decomposition actually make the problem easier to tackle?</p><p>This metacognitive element&#8212;thinking about thinking&#8212;is where deep learning happens.</p><h2>The Bigger Picture</h2><p>We're still in the early days of understanding how AI can enhance education, but this application of AI-assisted problem decomposition feels particularly promising. It scales personalized instruction, provides immediate feedback, and teaches a fundamental skill that transfers far beyond programming or even academics.</p><p>The goal isn't to replace human teachers but to augment their capability to help students develop crucial thinking skills. When every student can engage in rapid, iterative problem-solving practice with an AI thinking partner, we open up possibilities for teaching computational thinking that simply weren't feasible before.</p><h2>Try It Yourself</h2><p>If you're an educator, consider experimenting with decomposition exercises in your classroom. Start small&#8212;maybe with a familiar problem that you know has multiple valid approaches. Watch how students interact with the AI, observe where they struggle with the breakdown process, and notice what insights emerge from their reflections.</p><p>The key is remembering that we're training the students, not the AI. The model is the tool; the learning happens in the human mind grappling with how to think systematically about complex problems.</p><p><em>What complex problems could you decompose with your students? 
How might this change the way you approach problem-solving in your classroom?</em></p>]]></content:encoded></item><item><title><![CDATA[Tip # 10 Tokenization]]></title><description><![CDATA[What Cleaning a Boat Taught Me About Teaching Tokenization]]></description><link>https://teachingpythonpodcast.substack.com/p/tip-10-tokenization</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/tip-10-tokenization</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Mon, 02 Jun 2025 21:53:10 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/165048669/eecd87482270ebedef70be49fc86ea76.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>This weekend, I was cleaning out my dad&#8217;s boat. It was full of leaves. As I was pushing everything toward the scupper, I noticed something:<br>Some leaves flowed through easily.<br>Some got stuck and blocked the whole drain.</p><p>That moment reminded me of what I was teaching earlier this week with my exploratory CS students. We were looking at how tokenization works in large language models. (DM us for more info about this amazing lesson!)</p><p>We used the <code>tiktoken</code> Python library to experiment with how GPT-4 tokenizes text. The students were trying to guess how many tokens a word or phrase would generate and getting pretty into it. Spoiler: the token count isn&#8217;t always what you expect.</p><p>Here&#8217;s what we learned:</p><ul><li><p>Tokens aren&#8217;t the same as words. A token can be a full word, part of a word, or punctuation.</p></li><li><p>Models tokenize differently. </p></li><li><p>They use something called Byte Pair Encoding (BPE) to merge common letter combinations and reduce token count.</p></li></ul><p>We also talked about <strong>token limits</strong>: That includes both your prompt <em>and</em> the AI&#8217;s response. 
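</p><p>To recreate the guessing game from class, a minimal version looks like this (assuming the <code>tiktoken</code> package is installed; the phrases are just examples):</p><pre><code>import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding GPT-4 uses

for text in ["scupper", "leaves", "tokenization is fun"]:
    tokens = enc.encode(text)
    print(text, "->", len(tokens), "tokens:", tokens)

# Round trip: decoding the tokens gives back the original string.
print(enc.decode(enc.encode("scupper")))  # scupper
</code></pre><p>Have students guess the counts first; rare words often split into more tokens than common ones.</p><p>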
If you go over, the oldest tokens get dropped&#8212;basically, the model forgets what you told it earlier.</p><p>It&#8217;s like trying to shove too many leaves through a scupper. The system can only handle so much before something gets stuck or falls off the end.</p><p>And just like that scupper, certain language makes things worse:</p><ul><li><p>Ambiguous words like &#8220;pitch&#8221; or &#8220;bank&#8221;</p></li><li><p>Slang or rare words the model hasn&#8217;t seen much</p></li><li><p>Typos, run-ons, idioms&#8212;they all mess with the flow</p></li></ul><p>For CS teachers, this is gold. Tokenization is a great way to connect string handling, memory management, and prompt design. It&#8217;s also a good reminder that input clarity matters&#8212;especially when working with systems that rely on structured text.</p><p>We&#8217;re not just teaching kids to code. We&#8217;re teaching them how machines &#8220;think.&#8221; </p>]]></content:encoded></item><item><title><![CDATA[Learn a little JSON, will ya?]]></title><description><![CDATA[Don't worry, this is still a Python podcast. We're just going to say JavaScript a few times]]></description><link>https://teachingpythonpodcast.substack.com/p/learn-a-little-json-will-ya</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/learn-a-little-json-will-ya</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Sun, 01 Jun 2025 00:04:13 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/164889562/093bde45332bc12906bcb5232ec9e89b.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>One of the most interesting data formats that you&#8217;ll find everywhere across the internet is something called <a href="https://www.json.org/json-en.html">JSON</a>, which stands for JavaScript Object Notation. It&#8217;s a lightweight way of representing data that machines can parse easily and humans can also read. 
</p><blockquote><p>&#8220;When AI talks back, it&#8217;s not just giving you words&#8212;it&#8217;s giving you structure. And that structure is usually JSON.&#8221; </p></blockquote><p>Whether you&#8217;re using ChatGPT, calling an API, or working with any modern AI tool, there&#8217;s a format running under the hood that packages your input and output neatly. That format is called <strong>JSON</strong>&#8212;JavaScript Object Notation. And if you don&#8217;t know how to read it, you&#8217;re missing half the conversation.</p><p>What is JSON, really? JSON is a lightweight, readable way to structure data using <strong>key-value pairs</strong>. Think of it like organized digital note-taking:</p><pre><code>{
    "name": "Kelly",
    "role": "Teacher",
    "subject": "Python + AI",
    "likes_AI": true
}</code></pre><p>That&#8217;s a chunk of structured data&#8212;just like the kind returned by almost every AI API.</p><h2>Why this matters in AI</h2><p>When you use an AI tool (like OpenAI&#8217;s API), you get a JSON response that looks something like this:</p><pre><code>{
    "choices": [
        {
            "text": "Sure! Here's an explanation of JSON...",
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 15,
        "completion_tokens": 42,
        "total_tokens": 57
    }
}</code></pre><p>This isn&#8217;t random&#8212;it&#8217;s <em>how</em> AI communicates. Every part of the AI&#8217;s response is stored in an organized format that a computer&#8212;and you&#8212;can read and act on.</p><h2>Why CS teachers should care</h2><ul><li><p>Teaching JSON helps students understand <em>how AI thinks in structure</em>.</p></li><li><p>It bridges directly into Python&#8217;s dict, list, string, int, and float data types, making it incredibly teachable:</p></li></ul><pre><code># python example

response = {"choices": [{"text": "Hello!", "finish_reason": "stop"}]}

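# In practice, the API delivers this structure as a JSON *string*;
# the standard library's json.loads() turns it back into a Python dict:
import json
response = json.loads('{"choices": [{"text": "Hello!", "finish_reason": "stop"}]}')
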
print(response["choices"][0]["text"])</code></pre><p>Once students can parse a dictionary, they can pull, clean, and remix AI data&#8212;turning outputs into usable code, dashboards, or reports.<br><strong>JSON fluency = AI fluency.</strong></p><h2>Why non-CS teachers should care</h2><p>Even if you never code, understanding JSON means you can:</p><ul><li><p>Spot and fix broken integrations (e.g., Google Forms to Sheets errors)</p></li><li><p>Interpret structured outputs from tools like ChatGPT, Canva AI, or data dashboards</p></li><li><p>Advocate for <em>safe, clean</em> data use in your classroom&#8212;because now you know what&#8217;s being passed around<br>It&#8217;s also a great way to model data literacy: What&#8217;s the value? What&#8217;s the label? What&#8217;s missing?</p></li></ul><h2>In the classroom</h2><p>Ask students:</p><ul><li><p>What&#8217;s something in your life that could be represented as a JSON object?</p></li><li><p>How could you store and retrieve data from it?</p></li></ul><p>For younger students: Try building character profiles or quiz results in JSON.</p><p>For older students: Parse a real API response in Python with json.loads().</p><p>AI isn&#8217;t just generating words. It&#8217;s sending data in clean, labeled boxes. If you can read JSON, you can understand what AI is really saying&#8212;and maybe more importantly, how it&#8217;s saying it. 
And in this world, that&#8217;s power.</p>]]></content:encoded></item><item><title><![CDATA[What AI Isn’t Actually Doing]]></title><description><![CDATA[When I introduce students to Python, one of the first "fun" moments that hooks them into coding is when they write their first if statement and the "magic" happens.]]></description><link>https://teachingpythonpodcast.substack.com/p/what-ai-isnt-actually-doing</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/what-ai-isnt-actually-doing</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Thu, 29 May 2025 18:24:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/164745554/d6ace793066da89fb6f6ce4617dcc0e2.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p></p><p><strong>The False Logic Problem</strong></p><p>In Python:</p><pre><code><code>if score &gt;= 90:
    print("A")
else:
    print("Try again")</code></code></pre><p>This is real logic. The machine checks a condition, selects a branch, and executes an outcome. Same input, same result&#8212;every time. That&#8217;s what it means to evaluate logic explicitly.</p><p>Now try this: ask ChatGPT,<br><strong>"If my test score is 92, what grade did I get?"</strong></p><p>It might say &#8220;A.&#8221; That sounds right. But here&#8217;s what actually happened:<br><strong>No condition was evaluated. No rule was applied.</strong><br>The model simply predicted what text usually follows that kind of question.</p><p>It <strong>looks</strong> like logic.<br>It <strong>sounds</strong> like logic.<br>But it&#8217;s just pattern matching&#8212;an illusion of reasoning.</p><p>Another example:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PNLL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58aedcbf-788a-43d6-bd81-3a05e15eab05_1456x624.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PNLL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58aedcbf-788a-43d6-bd81-3a05e15eab05_1456x624.png 424w, https://substackcdn.com/image/fetch/$s_!PNLL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58aedcbf-788a-43d6-bd81-3a05e15eab05_1456x624.png 848w, https://substackcdn.com/image/fetch/$s_!PNLL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58aedcbf-788a-43d6-bd81-3a05e15eab05_1456x624.png 1272w, 
https://substackcdn.com/image/fetch/$s_!PNLL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58aedcbf-788a-43d6-bd81-3a05e15eab05_1456x624.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PNLL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58aedcbf-788a-43d6-bd81-3a05e15eab05_1456x624.png" width="1456" height="624" loading="lazy" alt=""></picture></div></a></figure></div><div><hr></div><p><strong>Why This Matters in Your Classroom</strong></p><p>Students need to distinguish between systems that 
<strong>execute logic</strong> and systems that <strong>mimic the appearance of logic</strong>.</p><p>Try this:</p><ol><li><p>Show students the Python <code>if/else</code> block above</p></li><li><p>Show them an AI response to a similar logical prompt</p></li><li><p>Ask: "Which one actually followed conditional reasoning? Which one just predicted what should come next?"</p></li></ol><p>This isn't about teaching students to distrust AI. It's about teaching them to recognize fundamentally different types of thinking&#8212;one that follows rules, one that follows patterns.</p><p>In a world where AI increasingly resembles human reasoning, this distinction isn't just useful. It's essential.</p>]]></content:encoded></item><item><title><![CDATA[Tip 6: Three ideas for prompting better learning]]></title><description><![CDATA[From Sean: explain it like I'm _____, the reverse Socratic method, and encourage questions and clarifications.]]></description><link>https://teachingpythonpodcast.substack.com/p/tip-6-three-ideas-for-prompting-better</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/tip-6-three-ideas-for-prompting-better</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Wed, 28 May 2025 01:13:27 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/164607890/1a09cc8ff8e9a0fc689b5bf94342d87e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been working a lot with LLMs in my day job to write better code. But I often find myself asking the LLM to teach me new things and help me understand concepts better. Here are some tips for making that a much richer learning experience.</p>]]></content:encoded></item><item><title><![CDATA[Spaghetti, AI, and the Zen of Teaching Code]]></title><description><![CDATA[&#8220;Now is better than never. Although never is often better than right now. 
If the implementation is hard to explain, it&#8217;s a bad idea.&#8221;]]></description><link>https://teachingpythonpodcast.substack.com/p/spaghetti-ai-and-the-zen-of-teaching</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/spaghetti-ai-and-the-zen-of-teaching</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Mon, 26 May 2025 15:46:55 GMT</pubDate><content:encoded><![CDATA[<p>There&#8217;s a moment in every classroom where the code stops working.</p><p>A loop runs forever. A variable name gets misspelled. The game crashes before the player even gets to move. For a 6th grader (or new coder) learning Python, this isn&#8217;t just frustrating; it is almost soul-crushing. But it is also the best drama, the beautiful hook, and the messy middle where real thinking begins.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://teachingpythonpodcast.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Teaching Python Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Spaghetti code, at first glance, shows a lack of complete understanding and coding ability, but if you look deeper, it also shows a thought process, a human side of the code (whether right or wrong). 
</p><p>It is this thought process and <em>confusion that shows the real learning process</em>, and it should not be something we avoid or trade away for efficiency.</p><p>I always try to replicate this process. I use &#8220;spaghetti thought&#8221; when I work and use AI.</p><p>Even though I know how to prompt, I still iterate. I still go back, reword, tweak, and validate the outputs outside of the AI. I treat AI the way I treat code: as something to debug, to question, to build with <em>and</em> think about.</p><p>The outputs may look perfect, but the humanity and thinking behind them are not. The lesson isn&#8217;t how to get it perfect the first time. It&#8217;s how to stay curious long enough to understand <em>why</em>.</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;52c56655-4f95-45ae-a8f9-c964e0483823&quot;,&quot;duration&quot;:null}"></div><p>So if your students&#8217; Python code is messy, celebrate it. If your prompts don&#8217;t hit the mark right away, use that.</p><p>Because good code, good AI use, and good teaching all have one thing in common:</p><p>They&#8217;re built in the messy middle.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://teachingpythonpodcast.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Teaching Python Newsletter! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Tip#5 Prompt Engineering Starts With Variables ]]></title><description><![CDATA[Variables teach students how to structure input, control output, and think clearly. That&#8217;s true in Python and it&#8217;s true in GenAI.]]></description><link>https://teachingpythonpodcast.substack.com/p/tip5-prompt-engineering-starts-with</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/tip5-prompt-engineering-starts-with</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Sat, 24 May 2025 18:59:16 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/164370772/26fcbddafe567989f0c1f984bfca4a9a.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Want students to get better at prompting AI? Start with variables.</p><p>In code, we assign variables to hold structured information:</p><p> <code>topic = "climate change"</code><br></p><p>In prompts, <code>&#8220;Summarize {topic} for a 6th grader.&#8221;</code><br>Same logic. Different wrapper.</p><p>If your students can manage inputs and tweak outputs in Python, they&#8217;re already halfway to mastering prompt engineering. And if they&#8217;re learning to write better prompts, they&#8217;re also learning to code&#8212;whether they know it or not.</p><p>Prompting <em>is</em> programming. And variables are still the MVP.</p><p><br>#AIinEducation #PromptEngineering #TeachingCoding #Python #EdTech #ComputationalThinking</p>]]></content:encoded></item><item><title><![CDATA[Tip #4 Check the Code. 
Don’t Just Trust It]]></title><description><![CDATA[Critical thinking starts with code review.]]></description><link>https://teachingpythonpodcast.substack.com/p/tip-4-check-the-code-dont-just-trust</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/tip-4-check-the-code-dont-just-trust</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Fri, 23 May 2025 19:18:19 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/164194162/4be615bacb3a908a0bb719f5d72549b6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Whether it&#8217;s predictive modeling or generative AI, don&#8217;t assume the code is right just because it runs. Look closely:<br>&#8211; What does the code <em>actually</em> do?<br>&#8211; Does it meet the goal?<br>&#8211; Is the output valid and useful?</p><p>AI is highly iterative, so don&#8217;t be afraid to question everything. Review, revise, and argue with the output. Copying code is not learning. Understanding what it does, is.</p><p>Critical thinking starts with code review.</p><p>#CSEducation #AIinEducation #TeachingPython #CodeReview #AIliteracy #ComputationalThinking #Debugging #CriticalThinking #CSforTeachers #DailyCSTips</p>]]></content:encoded></item><item><title><![CDATA[Tip#3 Just an API, Not Magic ]]></title><description><![CDATA[What really powers AI chatbots, and why does understanding requests in Python matter?]]></description><link>https://teachingpythonpodcast.substack.com/p/just-an-api-not-magic</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/just-an-api-not-magic</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Thu, 22 May 2025 19:30:22 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/164151624/ff11dcbba0a03f1c2049494b0573e022.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>When you use ChatGPT or Gemini, you&#8217;re sending a POST request to an API endpoint. 
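</p><p>Here&#8217;s a hedged sketch of what that request looks like with Python&#8217;s <code>requests</code> library (the endpoint URL, model name, and key below are placeholders, not a real API):</p><pre><code>import requests

# Build (but don't send) a chat-style POST request.
# The URL, model, and key are illustrative placeholders.
req = requests.Request(
    method="POST",
    url="https://api.example.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"model": "example-model",
          "messages": [{"role": "user", "content": "What is an API?"}]},
).prepare()

print(req.method, req.url)
print(req.body)  # your prompt, serialized as JSON</code></pre><p>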
It passes your input to a large language model, gets a response, and sends it back. That&#8217;s it. Just structured data.</p><p>For non-CS teachers: APIs power your gradebooks, LMS tools, Google Docs integrations; you use them every day.</p><p>For CS teachers: Teach students to use Python&#8217;s requests library. Let them call APIs directly, log JSON responses, and see how ChatGPT works behind the scenes.</p><p>The second you understand that AI is just a structured system, you begin to understand why AI literacy is so important.</p><p>Stay tuned for more from <strong><a href="https://www.linkedin.com/company/86959474/admin/page-posts/published/?share=true#">Sean Tibor</a></strong> and <strong><a href="https://www.linkedin.com/company/86959474/admin/page-posts/published/?share=true#">Kelly Schuster-Paredes</a></strong></p><p><strong>#Python</strong> <strong>#AIliteracy</strong> <strong>#CSeducation</strong> <strong>#APIs</strong> <strong>#edtech</strong> <strong>#ChatGPT</strong> <strong>#TeachersWhoCode</strong> <strong>#ComputationalThinking</strong> <strong>#AIinEducation</strong> <strong>#TeachingAI</strong> <strong>#PromptEngineering</strong></p>]]></content:encoded></item><item><title><![CDATA[Tip #2 Iteration vs. Repetition]]></title><description><![CDATA[What happens when the machine gets better at repeating than the human?]]></description><link>https://teachingpythonpodcast.substack.com/p/tip-2-iteration-vs-repetition</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/tip-2-iteration-vs-repetition</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Wed, 21 May 2025 19:30:27 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/164037717/478b3eacfc5b66fd0e74f72026d74cd7.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>When we teach coding, loops feel like magic. Repeat something without writing it a hundred times? 
Yes, please.</p><p>But in the age of AI, we face a deeper question:<br><em>What happens when the machine gets better at repeating than the human?</em></p><p>Loops in code aren&#8217;t intelligence. They are repetition with structure.<br>Loops in machine learning? Prediction, error, adjustment, repeat, millions of times, quickly.<br>AI is Fast. AI is Efficient. AI is Tireless.<br>But AI is NOT curious. AI is NOT aware.</p><p>Humans loop too, but we reflect.<br>We forget. We feel. We <em>understand.</em></p><p>The loop isn&#8217;t smarter. Just faster.<br>The real power? Knowing how the loop works, so we can teach it, use it, and question it.</p><p>#AILiteracy #TeachingAI #ComputerScienceEducation #MachineLearning #CodingEducation #HumanVsMachine #PythonLoops #CSed #TeachTheThinking #EducationAndAI #TeachingPython #AIInTheClassroom</p>]]></content:encoded></item><item><title><![CDATA[Tip #1 Prediction ≠ understanding. ]]></title><description><![CDATA[Tools like ChatGPT and Gemini don&#8217;t understand your question&#8212;they predict.]]></description><link>https://teachingpythonpodcast.substack.com/p/tip-1-prediction-understanding</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/tip-1-prediction-understanding</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Tue, 20 May 2025 19:30:35 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163996195/af36407f659bd1cb319de12f11fbb32f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Tools like ChatGPT and Gemini don&#8217;t <em>understand</em> your question&#8212;they <em>predict</em> the next most likely word based on patterns, not knowledge.</p><p>Want a great classroom activity?<br>Ask: <strong>"How does a fish smell?"</strong><br>Let students debate the meaning. Then ask the AI.<br>Prediction &#8800; understanding. 
And that&#8217;s where real learning begins.</p><p>We&#8217;re launching a daily series on practical AI + Python concepts for educators. No hype. Just clarity.</p><p>#AILiteracy #AIinEducation #TeachingAI #PythonForTeachers #ComputerScienceEducation #GenerativeAI #EdTech #CSed #AIandCoding #AIforEducators #TechLiteracy #PromptResponsibly #TeachingPython</p>]]></content:encoded></item><item><title><![CDATA[Teaching AI with Code: Beyond the Prompts ]]></title><description><![CDATA[As teachers who code, we are launching a daily series on true AI literacy. Not hype, not gimmicky prompts, but practical Python-based knowledge that explains AI and what it can look like in education]]></description><link>https://teachingpythonpodcast.substack.com/p/teaching-ai-with-code-beyond-the</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/teaching-ai-with-code-beyond-the</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Mon, 19 May 2025 13:23:27 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163919030/85734fbe7c6454a806146dda4b09ce7c.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Each day, Sean or Kelly will be posting tips on how Python/CS are connected with AI. Stay tuned as we try to make this a daily thing!</p><p>Whether you code already or are AI-curious, subscribe for daily bite-sized concepts connecting Python, AI, and classroom reality. 
Let's teach AI responsibly, one concept at a time.</p><p>Subscribe to learn what's really behind the AI tools transforming education in 2025.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Episode 147: The Power of Teaching APIs]]></title><description><![CDATA[Sean Tibor and Kelly Schuster-Paredes take a deep dive into teaching APIs, sharing practical lessons, amusing anecdotes, and insights into integrating APIs into a comprehensive coding curriculum.]]></description><link>https://teachingpythonpodcast.substack.com/p/episode-147-the-power-of-teaching-a30</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/episode-147-the-power-of-teaching-a30</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Mon, 24 Mar 2025 04:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163920178/b4a522baae57330e531beb64545ab51d.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In Episode 147 of Teaching Python, Sean Tibor and Kelly Schuster-Paredes focus on the importance and power of teaching APIs to coding students. They share personal stories and practical strategies for engaging students with APIs, from simple use cases to more complex projects. Join them as they discuss how to make lessons fun and relevant, leveraging LLMs (Large Language Models) for code explanations, and teaching through trial and error. This episode also touches on the broader applications of APIs in today's technological landscape, examining how learning APIs can open up new possibilities for students and equip them with essential skills for the future. 
Whether you're a teacher, student, or coding enthusiast, there's something valuable in this episode for you.</p><p><a href="https://www.patreon.com/teachingpython">Support Teaching Python</a></p>]]></content:encoded></item><item><title><![CDATA[Episode 146: PSF Education Outreach Workgroup and the Education Summit]]></title><description><![CDATA[In this episode, Sean and Kelly are joined by Keith and Cheuk from the Python Education and Outreach Workgroup to discuss their efforts in promoting Python education. They talk about the group's goals, such as seeking feedback on Python education resources]]></description><link>https://teachingpythonpodcast.substack.com/p/episode-146-psf-education-outreach-37d</link><guid isPermaLink="false">https://teachingpythonpodcast.substack.com/p/episode-146-psf-education-outreach-37d</guid><dc:creator><![CDATA[Teaching Python Podcast]]></dc:creator><pubDate>Tue, 21 Jan 2025 01:00:00 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163920179/f7484b0854b606786b8fb2cbbe869ad2.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>In episode 146 of Teaching Python, hosts Sean Tibor and Kelly Schuster-Paredes delve into the newly established Python Education and Outreach Work Group, featuring guests Cheuk Ting Ho and Keith Murray. The group, aimed at enhancing Python education within the community, outlines its mission to gather feedback on educational resources and support initiatives like the Education Summit at PyCon US. 
Keith and Cheuk discuss their backgrounds and roles within the Python Software Foundation, emphasizing the need for fresh educational content and community engagement.</p><p>The episode also explores the work group's goals, which include:</p><ul><li><p>Seeking and receiving feedback on Python educational resources</p></li><li><p>Consolidating and improving existing Python education materials</p></li><li><p>Supporting and expanding the education summit at PyCon US</p></li></ul><p>Additionally, the hosts share personal 'wins of the week,' highlighting the importance of accountability and community in the educational journey. Kelly and Sean discuss their past experiences with the Education Summit and encourage listeners to get involved by submitting talk proposals or joining in interactive sessions. The episode concludes with practical advice on how educators and enthusiasts can engage with the group to further Python education and outreach.</p><p>Special Guests: Cheuk Ting Ho and Keith Murray.</p><p><a href="https://www.patreon.com/teachingpython">Support Teaching Python</a></p><p>Links:</p><ul><li><p><a href="https://www.pyohio.org/2025/" title="PyOhio 2025">PyOhio 2025</a> &#8212; Summer 2025 in Cleveland, OH</p></li><li><p><a href="https://wiki.python.org/psf/PythonEduWGCharter" title="PythonEduWGCharter - PSF Wiki">PythonEduWGCharter - PSF Wiki</a> &#8212; The Education &amp; Outreach Workgroup's (EOW) purpose is to support the Python Software Foundation&#8217;s mission to promote the Python programming language, especially in supporting and enhancing the education of Python. 
The Education &amp; Outreach Workgroup is a workgroup of the Python Software Foundation (PSF).</p></li><li><p><a href="https://us.pycon.org/2024/events/education-summit/" title="- Education Summit - PyCon US 2024">- Education Summit - PyCon US 2024</a> &#8212; In 2024, PyCon US held its 12th annual Python Education Summit in person!</p></li></ul>]]></content:encoded></item></channel></rss>