<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Somatic Intelligence]]></title><description><![CDATA[Change how you think about AI. Together we'll have conversations about how we realize our human potential in a digital world and navigate the seas of change.]]></description><link>https://slowworks.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!aIgi!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8581736d-8468-4fdd-9c8b-72f8aa8335a8_394x394.png</url><title>Somatic Intelligence</title><link>https://slowworks.substack.com</link></image><generator>Substack</generator><lastBuildDate>Fri, 10 Apr 2026 03:06:17 GMT</lastBuildDate><atom:link href="https://slowworks.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Jonas Haefele]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[jonas@slow.works]]></webMaster><itunes:owner><itunes:email><![CDATA[jonas@slow.works]]></itunes:email><itunes:name><![CDATA[Jonas Haefele]]></itunes:name></itunes:owner><itunes:author><![CDATA[Jonas Haefele]]></itunes:author><googleplay:owner><![CDATA[jonas@slow.works]]></googleplay:owner><googleplay:email><![CDATA[jonas@slow.works]]></googleplay:email><googleplay:author><![CDATA[Jonas Haefele]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI is about relationships, not about technology]]></title><description><![CDATA[Why shared learning and open communication might be the most valuable practices for navigating AI's transformation of 
work]]></description><link>https://slowworks.substack.com/p/ai-is-about-relationships-not-about</link><guid isPermaLink="false">https://slowworks.substack.com/p/ai-is-about-relationships-not-about</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Wed, 22 Oct 2025 11:33:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!aer_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aer_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aer_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!aer_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!aer_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!aer_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!aer_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2210664,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://slowworks.substack.com/i/176820217?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aer_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!aer_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!aer_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!aer_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd65c3d76-afa7-4946-8533-3d49d94e6a00_1024x1024.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Today is a bit more academic; my brain is still in MSc mode. Bear with me: we end with practical take-aways you can try today. 
Full article with sources <a href="https://slow.works/blog/ai-is-relationship-not-technology">here</a>.</em></p><p>Your instinct is probably right if you&#8217;re feeling the pressure to &#8220;do something&#8221; about AI whilst simultaneously knowing you and your team need time to actually learn this stuff properly.</p><p>My research across UK knowledge workers revealed that <strong>everyone feels the race</strong>, yet no one really knows what they&#8217;re running towards. And that shows. BCG reports that whilst 78% of organisations now use generative AI in at least one business function, only 26% successfully scale beyond pilots. McKinsey found that whilst 92% of employees want to use AI effectively, only 30% report direct CEO sponsorship. I found a similar pattern in my research. Most concerning was the complete disconnect between leadership and the rest of the organisation. My research suggested that a key reason for the lack of C-suite support might be that business leaders often simply don&#8217;t understand AI&#8212;particularly how it interacts with frontline operations&#8212;because they don&#8217;t get enough time to practice. Often AI strategy is outsourced to external &#8220;experts&#8221; or, worse, tool suppliers who, by definition, lack the knowledge and experience of how business is <em>actually</em> done internally. Middle managers, HR and L&amp;D teams are the primary enablers who can bridge the gap between strategy and execution. BCG&#8217;s research suggests that organisations that succeed in AI pursue half as many opportunities as laggards but achieve 2x the ROI and scale 2x as many products/services.</p><h2>Four ways of being with AI, and why some lead to capability loss</h2><p>Through my research interviewing UK knowledge workers about their lived experience with AI, four distinct patterns emerged in how people relate to these systems. 
I call them archetypes&#8212;not personality types, but situational relationships that shift based on context, pressure, and organisational culture. Today, we&#8217;ll focus on the instrumental relationships:</p><p><strong>The Tool:</strong> AI as a colleague that enhances your capability. You maintain oversight, make strategic decisions, and use AI to handle information processing while you focus on judgment and context. The relationship is instrumental but not dependent&#8212;you&#8217;re getting better at your job, not outsourcing your thinking.</p><p><strong>The Trap:</strong> AI as a dependency that erodes your capability. What starts as efficiency gradually becomes an inability to function without the system. Skills atrophy, contextual knowledge fades, and you&#8217;ve traded short-term productivity for long-term vulnerability.</p><p><strong>The Teacher</strong> and <strong>The Sparring Partner</strong> represent more relational dynamics&#8212;AI as a coach helping you develop expertise, or as a creative collaborator challenging your thinking. We&#8217;ll explore these in the next post on individual learning relationships with AI.</p><p>Whether we end up using AI as an empowering tool or falling into the trap isn&#8217;t an individual choice. It&#8217;s largely determined by how your organisation designs AI adoption.</p><p>By now, you might have seen dozens of posts mentioning the MIT research &#8220;Your Brain on ChatGPT&#8221;. The researchers tracked brain activity in students writing essays across four months and found that ChatGPT users showed the lowest cognitive engagement across all measures compared to those who didn&#8217;t use AI. ChatGPT users relied more and more on AI, couldn&#8217;t remember what they wrote about, felt reduced ownership of their work, and physically had less brain activity and formed fewer neural connections. Lead researcher Nataliya Kosmyna warns: &#8220;There is no cognitive credit card. 
You cannot pay this debt off.&#8221;</p><p>The skills-atrophy research extends across professions. A study of 469 mathematics teachers found that AI dependency explained 91.4% of the variance in problem-solving ability, 93.4% in critical thinking, and 89.0% in creative thinking. Research with 666 professional workers found that frequent AI usage correlates negatively with critical thinking, with cognitive offloading mediating this relationship. The most critical finding may be that <strong>AI literacy has a positive relationship with AI dependency</strong> (&#946; = 0.505). Counterintuitively, increasing AI literacy through traditional training increases dependency rather than reducing it.</p><h2>The 70-20-10 principle: why AI adoption mirrors leadership development from 40 years ago</h2><p>AI adoption, like leadership development, is fundamentally about organisational learning. Back in the 1980s, the Center for Creative Leadership researched how executives actually learn and grow. Their findings are now a rule of thumb: 70% of learning comes from challenging experiences and assignments, 20% from developmental relationships, 10% from coursework and training. Whilst exact ratios vary across studies, the principle holds: structured experiential learning and social learning are key to learning transfer; formal training alone is extremely ineffective.</p><p>Fast forward to 2024, and BCG&#8217;s research on successful AI adoption finds that organisations leading in AI adoption allocate resources the same way: 70% to people and processes, 20% to technology and data, 10% to algorithms. Laggards invert this&#8212;obsessing over models whilst neglecting change management&#8212;and pay the price in failed deployments.</p><p>McKinsey&#8217;s 2025 research on AI and organisational learning found that experiential and social approaches achieve 65% skill retention, compared with 10% for formal training alone. 
Yet most organisations still default to vendor certifications and classroom instruction whilst wondering why their &#163;500k AI training programme yields minimal behaviour change. The organisations achieving 2x ROI aren&#8217;t buying fancier algorithms&#8212;they&#8217;re creating the conditions for teams to learn by doing.</p><p>Your team already knows this intuitively. When they say they need time to practice, space to discuss, and permission to learn together&#8212;that&#8217;s not resistance to change, it&#8217;s hunger to grow.</p><h2>Why your middle managers matter more than your CIO</h2><p>When managers show high agency and optimism about AI, their direct reports are nearly 3x more likely to develop it themselves. BetterUp&#8217;s 2024 research tracking 12,000 workers calls this the manager multiplier effect.</p><p>Managers who have what BetterUp calls a pilot mindset (high agency, optimistic) are 3.6x more productive than passengers (low agency, pessimistic) and 3.1x more likely to stay at their organisation. This helps explain the apparent contradiction between BCG&#8217;s and McKinsey&#8217;s research. BCG suggests leaders are well ahead of employees in AI adoption. McKinsey&#8217;s data shows 92% of employees want to use AI effectively, whilst only 30% report CEO sponsorship. My research suggests the disconnect isn&#8217;t about readiness&#8212;it&#8217;s about which leaders we&#8217;re measuring.</p><p>C-suite executives setting AI strategy are often enthusiastic but lack hands-on AI experience. Middle managers, on the other hand, are often under extreme workload, under-supported, and caught between executive expectations and frontline realities. Meanwhile, frontline workers are often already experimenting with AI in their own time, and are actually quite ready, provided someone creates the conditions for learning.</p><p>McKinsey found that 62% of millennials report high AI expertise, versus 22% of boomers. They bring native digital fluency. 
Enabling millennial managers to lean into shared learning&#8212;creating space for teams to experiment safely, discussing failures openly, documenting insights, and celebrating capability development&#8212;not only helps them grow as leaders, but also builds capability across the team and organisation.</p><p>Organisations that develop uniquely human management capabilities&#8212;coaching over controlling, enabling over monitoring&#8212;often see 34% better team performance, 21% more innovation, and 15% higher productivity. AI adoption strategy needs to invest in middle management capability development and employee voice before it invests in another enterprise licence or vendor partnership.</p><h2>Communities of practice: where the 20% happens that makes the 70% work</h2><p>Communities of practice are the 20% that training budgets consistently underinvest in: creating spaces for people to learn from each other whilst solving real problems. Not &#8220;lunch and learn&#8221; sessions. Not SharePoint repositories of best practices. Actual communities where people work together on challenges, discuss what works and what doesn&#8217;t, and build shared understanding through practice.</p><p>The US federal government&#8217;s AI Community of Practice includes over 12,000 members from 100+ agencies. It provides monthly training, specialised working groups, and applied challenges. The 2024 healthcare AI challenge received 140+ applications, with teams learning through real problem-solving rather than abstract instruction. Similar CoPs have been successfully set up in academia and in commercial organisations.</p><p>What makes communities of practice effective is the contextualised social learning. University of Ottawa researchers found that undergraduate students with no programming experience created functional AI projects through experiential learning in authentic contexts. 
The key was situating problems in real-world stakeholder scenarios: students working on actual cases, such as financial intelligence officers matching names across bank reports, developed both technical skills and crucial judgment about when and how to apply AI.</p><p>In my coaching practice, I&#8217;ve found simple interventions create outsized impact. Team leaders are navigating the pressure to adopt AI while dealing with understaffed teams and operational challenges. Coaching gives them a place to say &#8220;I don&#8217;t know&#8221; and leave with clarity and actionable AI homework. Simple weekly experimentation challenges and dedicated space for regular reflection led teams to become the unofficial AI champions of the whole organisation. All it took was documented, shared experimentation, where failures became learning opportunities rather than performance issues.</p><h2>What gets encapsulated: the tacit knowledge that AI can&#8217;t see</h2><p>As AI systems learn and expand their functions, something subtle happens. Work processes that once required human coordination, contextual judgment, and accumulated expertise gradually become invisible inside automated systems. Researchers call this encapsulation&#8212;the progressive expansion of the &#8220;black box&#8221; around the relations and functions AI performs.</p><p>Here&#8217;s why this matters for managers: the expertise your team has built over years doesn&#8217;t always look like expertise. Through deeply embedded tacit knowledge, seasoned workers achieve results more efficiently than standard operating procedures suggest. They know which customers need a courtesy call before the reminder letter. They spot the subtle pattern that flags a claim for review. 
They understand the context that makes this month&#8217;s numbers meaningful.</p><p>When you automate processes without understanding these invisible practices, three things happen:</p><p><strong>Edge cases become disasters.</strong> Sometimes bugs are funny. Try asking ChatGPT 5 if there is a seahorse emoji. But when these edge cases and flukes happen in a business-critical process, the results can be disastrous. A Chevrolet dealership&#8217;s chatbot agreed to sell a car for $1. New York City&#8217;s MyCity chatbot advised entrepreneurs that they could legally take workers&#8217; tips. Air Canada paid damages after its virtual assistant gave incorrect bereavement fare information.</p><p><strong>Invisible work becomes visible problems.</strong> Hospital research on logistics robots revealed substantial coordination work that was never accounted for: staff clearing pathways, adjusting schedules, and providing manual assistance. This invisible work didn&#8217;t disappear when robots arrived. It intensified, performed by people whose roles hadn&#8217;t been redesigned to accommodate it.</p><p><strong>Skills atrophy without anyone noticing.</strong> A Polish study found endoscopists&#8217; adenoma detection rate decreased from 28.4% to 22.4% after exposure to routine AI-assisted systems. Accountants&#8217; reliance on automation rendered them unable to function independently when systems failed. Legal profession leaders worry about how to develop future professionals when junior work is automated, effectively eliminating the training ground for expertise.</p><p>This doesn&#8217;t have to be a one-way street toward capability loss. 
It can be empowering and capability-building if we make the invisible visible and focus on organisational learning.</p><h2>Psychological safety enables everything else</h2><p>My research suggests that organisations with high organisational silence&#8212;where people don&#8217;t feel safe raising concerns or admitting uncertainty&#8212;struggle with AI adoption regardless of investment. The silence manifests in several ways: people don&#8217;t disclose when AI makes errors, they don&#8217;t ask questions that might reveal knowledge gaps, and they don&#8217;t share failed experiments that could help others avoid mistakes. Rules without trust create shadow IT&#8212;people hiding AI use, working around restrictions, or giving up entirely.</p><p>The World Economic Forum&#8217;s 2024 governance playbook emphasises enabling frameworks over compliance structures. Organisations implementing enabling governance&#8212;embedded in development lifecycles, distributed accountability, adaptive rather than rigid&#8212;scale 2.6x more successfully than those with traditional compliance approaches.</p><p>Before you write AI usage policies, before you implement monitoring systems, before you roll out vendor contracts, invest in building the team climate where people feel safe learning in public. 
This means:</p><p><strong>Creating explicit permission to experiment.</strong> Not just saying &#8220;innovation is important&#8221; but actually protecting time and resources for structured experimentation with clear learning objectives rather than success metrics.</p><p><strong>Normalising failure as information.</strong> When something doesn&#8217;t work, the question isn&#8217;t &#8220;who&#8217;s accountable?&#8221; but &#8220;what did we learn?&#8221; Document insights, share them widely, and celebrate the learning even when the outcome disappoints.</p><p><strong>Modelling uncertainty from leadership.</strong> When managers admit what they don&#8217;t know, ask questions without having answers, and visibly learn alongside their teams, it creates permission for everyone else to do the same.</p><p>This is why governance initiatives that skip the psychological safety work consistently fail&#8212;they&#8217;re building on sand. The governance conversation matters, particularly in regulated industries like insurance and legal services. But governance that enables rather than constrains requires the psychological safety foundation first. 
You can&#8217;t policy your way to a learning culture.</p><h2>Breaking the silos: AI-augmented organisations are multi-dimensional</h2><p>MIT&#8217;s 2024 Enterprise AI Maturity Model tracked organisations across four stages.</p><ul><li><p>Stage 1 (28% of enterprises): <strong>preparing</strong> and experimenting with below industry average profit.</p></li><li><p>Stage 2 (&#8776;25%): active <strong>piloting</strong> with a formal strategy, but still below average financially.</p></li><li><p>Stage 3 (&#8776;30%): <strong>operational</strong> maturity through coordinated implementation, finally seeing above-average returns.</p></li><li><p>Stage 4 (&#8776;17%): <strong>transformational</strong> status with AI embedded in business models, achieving 1.5x higher revenue growth, 1.6x greater shareholder returns, and 1.4x higher return on invested capital.</p></li></ul><p>Organisations that successfully moved through the stages paid coordinated attention to four core dimensions:</p><p><strong>Technology dimension:</strong> Architecture for reusability rather than use-case-specific solutions.</p><p><strong>People dimension:</strong> Role redesign, career pathways, cultural transformation, communities of practice&#8212;not just training.</p><p><strong>Governance dimension:</strong> Embedding responsible AI into development lifecycles rather than creating approval bottlenecks.</p><p><strong>Measurement dimension:</strong> Tracking leading indicators, not just lagging outcomes. Fewer than 1 in 5 organisations systematically track well-defined KPIs for GenAI solutions, yet this has the most impact on the bottom line.</p><p>Organisational silos become a key limiting factor. Technology teams optimise algorithms, L&amp;D runs training programmes, risk functions write governance policies, and finance tracks ROI. 
Nobody&#8217;s coordinating across all four dimensions, so you get technically sound solutions that people don&#8217;t adopt, governance that constrains rather than enables, and measurement that misses what actually matters.</p><p>The collaborative approach that creates Tool rather than Trap requires breaking these silos. Not through reorganisation, but through creating forums where technology, people, governance, and measurement conversations happen together. Start with one monthly cross-functional AI capability review where technology, L&amp;D, risk, and finance discuss progress on the same use cases.</p><h2>What to do now?</h2><p>So what do you actually do with this?</p><p><strong>If you&#8217;re an executive:</strong></p><p>Audit your AI spending against the 70-20-10 principle. If more than 30% of your budget goes to technology and algorithms, you&#8217;ve inverted the resource allocation that successful organisations use. Redirect investment from vendor contracts and model optimisation toward manager capability development, community of practice infrastructure, and workflow redesign support.</p><p>Review your governance approach. If it functions as an approval bottleneck rather than advisory support, you&#8217;re constraining adoption. Enable experimentation within clear boundaries rather than requiring permission for every use case.</p><p><strong>If you&#8217;re in L&amp;D or HR:</strong></p><p>Identify three millennial managers (ages 35-44) who show high AI readiness. Invest in developing them as internal champions&#8212;not through generic training but through scaffolded leadership development that emphasises coaching over controlling, enabling over monitoring. Their impact will cascade at 3x through the manager multiplier effect.</p><p>Launch a small community of practice focused on a real workflow challenge. Not a lunch-and-learn. Not a SharePoint site. 
An actual working group that meets regularly, experiments with AI on authentic problems, documents what works and doesn&#8217;t, and builds shared capability through practice. Start with 5-10 people, commit to 8 weeks, measure learning, not just productivity.</p><p>Stop buying generic AI training. Redirect that budget toward manager coaching, protected experimentation time, and documentation infrastructure.</p><p><strong>If you&#8217;re a manager:</strong></p><p>Create explicit permission for your team to experiment. This doesn&#8217;t mean &#8220;innovation time&#8221; where people work on pet projects. It means structured experimentation on real work challenges with clear learning objectives. Protect time weekly for people to try AI approaches, discuss what worked, and document insights.</p><p>Model uncertainty yourself. The next time you encounter an AI-related decision you&#8217;re unsure about, say so publicly. Ask your team what they think. Learn alongside them. This creates the psychological safety that enables everything else.</p><p>Establish &#8220;think first&#8221; practices. Before people turn to AI for drafting, analysis, or problem-solving, they practice the skill independently. This prevents the skills-atrophy cascade and, paradoxically, improves AI outputs&#8212;people who think through problems independently write more effective prompts.</p><p><strong>If you&#8217;re not sure:</strong></p><p>Give me a call. I&#8217;d love to talk about how we might empower your organisation on its path to AI adoption.</p><h2>Next article: measuring what actually matters</h2><p>You can&#8217;t build capability unless you&#8217;re measuring it. But most AI metrics track efficiency gains or usage numbers. In the next article, we&#8217;ll explore what happens when we measure what actually matters: not just faster outputs, but whether people maintain independent problem-solving capability. Not just reduced task time, but whether teams develop better judgment. 
Not just cost savings, but whether organisations build sustainable competitive advantage through human-AI collaboration.</p><p>The organisations winning at AI aren&#8217;t measuring productivity. They&#8217;re measuring capability. And that changes everything about how you design adoption.</p><div><hr></div><p><em>This is the fourth post in a six-part series exploring AI&#8217;s transformation from hype to practical implementation. Next week: &#8220;Measuring What Matters: Beyond Efficiency Metrics.&#8221;</em></p><p><em>This article was first published at https://slow.works on 22 October 2025; the full article with all 25 referenced sources is at <a href="https://slow.works/blog/ai-is-relationship-not-technology">https://slow.works/blog/ai-is-relationship-not-technology</a></em></p><div><hr></div>]]></content:encoded></item><item><title><![CDATA[The Dumbing Down Dilemma]]></title><description><![CDATA[Why we need informed conversations about AI implementation before it's too late]]></description><link>https://slowworks.substack.com/p/the-dumbing-down-dilemma</link><guid isPermaLink="false">https://slowworks.substack.com/p/the-dumbing-down-dilemma</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Tue, 26 Aug 2025 11:10:15 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tFd4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tFd4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!tFd4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tFd4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!tFd4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!tFd4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tFd4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1529557,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://slowworks.substack.com/i/171973271?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tFd4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 424w, https://substackcdn.com/image/fetch/$s_!tFd4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!tFd4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!tFd4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa9ca166d-03a2-4c7d-90d0-543c03075507_2048x2048.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Chasing tickets or creating value. What do we want to do? </figcaption></figure></div><p><em>This is part three of our six-part series on AI's adulting phase. If you missed the last one, read it <a href="https://slowworks.substack.com/p/when-ai-stops-waiting">here</a></em></p><p>We used to walk into a shop and find somebody knowledgeable who could help us. Whether finding the perfect item or solving a specific problem, the person who knew how to do it was there.</p><p>Now we have a website with a customer service chat box. First, we pick the category, then the product, then the category of questions. The question we actually have isn&#8217;t there because we&#8217;ve lost ourselves in the menu of options. Then maybe we get to free text chat where we can finally type something. We furiously type everything we&#8217;ve been holding onto, only for the chat box to say, &#8220;Sorry, I don&#8217;t understand. I can&#8217;t help you with that.&#8221;</p><p>If we&#8217;re lucky, we get an outsourced customer service agent trying to solve our problem by accessing a knowledge base and learning how to use the apps we&#8217;re apparently trying to use.</p><p>We&#8217;ve indirectly employed so many different people&#8212;those running the websites, building the apps, making the hardware, and the outsourced agents.
But we&#8217;ve lost the ability to share information efficiently and to have conversations about what we&#8217;re actually trying to solve.</p><h2><strong>This Pattern Is Everywhere</strong></h2><p>During my MSc we read an article that beautifully illustrates how concentrating on what&#8217;s easy to measure can go wrong. (I can&#8217;t find the article again, even after hours of searching.) It told a simple story: the NHS tried to reduce queues by shortening appointment times from about 30 minutes to 20 minutes. It takes about 18 minutes to get through the intake questionnaire. Elderly people with simple problems would come in, get taken through the questionnaire, then have 2 minutes for the actual consultation. They never actually got treated. They just got processed and sent away, maybe given paracetamol. When the nurse practitioners were given more time, they could solve problems on the first appointment. Patients didn&#8217;t have to come back again, which ultimately reduced the wait list and made the service more efficient.</p><p>We have not designed processes to make the most of humans. We&#8217;ve designed them to optimise measurable metrics. The focus is on individual measurable parts of the process; there&#8217;s no focus on the outcome of the work anymore. Individual workers get measured on how long it takes them to answer a ticket and how many tickets they can answer in a day, not on how many problems they have actually solved.</p><h2><strong>Why This Keeps Happening: The Three-Way Disconnect</strong></h2><p>I&#8217;m halfway through analysing data for my dissertation in organisational psychology, and some clear patterns are starting to emerge about why we keep falling into these implementation traps. There&#8217;s a disconnect between AI evangelists and AI critics that&#8217;s preventing informed discussions we desperately need.</p><p>The evangelists are often external people&#8212;LinkedIn specialists screaming about AI.
Every participant I interviewed mentioned how toxic LinkedIn has become because there&#8217;s no critical discussion. It&#8217;s just people pushing services, often without nuance. Business leaders often fall into the same category. They don&#8217;t have time to be involved in day-to-day work, so they rely on experts for advice. If those experts are solution pushers, business leaders take those solutions at face value because that&#8217;s the only information they have access to.</p><p>Evangelists often lack understanding of how LLMs work, what current implementations can do, how long implementation takes, and what that might mean for outcomes&#8212;both for the business and the people it serves.</p><p>Critics often completely disengage. They also lack the nuanced understanding needed to articulate why they don&#8217;t like it. They tend to jump at any sign of AI weakness. Any hallucination, any wrong quote becomes proof that everything is doomed. It&#8217;s an all-or-nothing mindset.</p><p>In the middle are the employees. I&#8217;ve spoken to mid-level and senior employees who said they feel like they&#8217;re hiding their AI use. One said, &#8220;Little do they know I&#8217;ve been using AI this whole time.&#8221; Another said, &#8220;It feels a bit like cheating.&#8221;</p><p>So while two extreme factions fight over who&#8217;s right, without grounded knowledge or a grounded approach, the people actually doing the work feel they&#8217;re not allowed to say where and when they have used AI. There&#8217;s an element of cloaking and shame around using AI.</p><p>Because that shame is happening, we are not learning from the little failures.
So many people said they have their own private subscription that they use because the company&#8217;s subscription isn&#8217;t good enough, or it doesn&#8217;t do the right thing, or they prefer the personality of a different model. The data leakage happening here is significant.</p><p>We need to get to a point where we can have an informed discussion about how to embed these things into our processes. As long as there&#8217;s this stigma around the use of AI, we cannot have that discussion. And as long as leaders don&#8217;t actually understand how the models work and don&#8217;t get hands-on experience&#8212;and it really is, at this point, about hands-on experience&#8212;they cannot make good decisions. People cannot decide about these technologies without having used them in actual daily applications where they&#8217;re trying to solve a business problem.</p><h2><strong>The AI Crossroads: Two Paths Forward</strong></h2><p>We&#8217;re at the chatbot moment in our customer service story. We can go two different ways.</p><p><strong>Path 1:</strong> We can use AI to take over administrative tasks in the existing system. The AI reads from the same knowledge base. Maybe it&#8217;s faster at finding irrelevant help articles. Still forces customers through the same category maze. Optimises for ticket closure speed. Humans become even more marginalised.</p><p><strong>Path 2:</strong> We can use AI to restore capacity and capability for human judgment and relationships in processes. AI handles verification, understands the actual problem, pulls relevant data. Routes customers directly to the right specialist who has context and authority to solve things.
Humans focus on relationship building, complex problem-solving, creative solutions.</p><p>The choice we make here determines whether we amplify human capabilities or further diminish them.</p><h2><strong>The Real Opportunity We&#8217;re Missing</strong></h2><p>There are all these apps now that promise to automate something. Everything has an AI system to draft something, update something, create tasks, summarise meetings, and generate tickets. But really, that&#8217;s not the point. We&#8217;ve designed work around what&#8217;s easy to measure rather than what creates value. We&#8217;ve reduced complex human work to simple, repeatable tasks that can be easily tracked. So it&#8217;s easy to automate those things away with GenAI. But the real question is: what can we do instead?</p><p>Now we&#8217;re outsourcing&#8212;or we have the potential, the opportunity to outsource&#8212;a lot of the menial repetitive tasks to AI. So what is left for us to do? We need to redesign, completely rethink business processes.</p><p>The idea that AI is a pure automation tool really is the wrong way of thinking about it. It&#8217;s a creativity tool. It&#8217;s a tool to come up with new ideas, to brainstorm, to research. It&#8217;s not a pure automation tool.</p><p>Something that has come up again and again in my research is the issue of context. The particular context of a specific situation is very, very hard for AI to understand. While the general answer might be correct, very often it does not apply to the context of the specific task at hand.</p><p>If we think of humans as context stewards and AI as an incredibly smart executor or coworker rather than a complete replacement, we get symbiosis rather than one working against the other. Humans excel at providing crucial contextual details and connections, but we struggle with cognitive overwhelm. AI excels at organising information and reducing cognitive load.
One participant told me they ask AI to &#8220;hold this for me and walk me through one step at a time&#8221;&#8212;using AI for cognitive offloading so they can make better decisions in context.</p><h2><strong>The Stakes Are Higher Than We Think</strong></h2><p>The next three years could bring massive, disruptive change, and we need to have those conversations now. That disconnect between the evangelists and the critics is what&#8217;s going to stop certain companies from benefiting from the extreme shift that we&#8217;re heading into.</p><p>Some of the interviews I&#8217;ve done suggest that people are using AI to make fairly large decisions without double-checking. There&#8217;s a lot of reliance on gut feeling. At the same time, we know from research that AI doesn&#8217;t always know. Sometimes it will hallucinate on purpose to deceive, to get to a point that seems useful. Sometimes it just won&#8217;t know and is unable to say &#8220;I don&#8217;t know,&#8221; so it will make something up. Both ways are detrimental to business outcomes, especially if things are automated.</p><p>We need to redo the last 20 years, and the companies that are willing and able to do that&#8212;to completely redesign processes&#8212;will be the companies that survive.</p><h2><strong>How to Choose Path 2</strong></h2><p>What if we changed the conversations we have about using AI? Instead of starting with AI&#8217;s capabilities, what if we began with what we want to achieve&#8212;what&#8217;s the actual goal?</p><p>Maybe it&#8217;s time to go back to the five whys rather than saying we need to add this button or functionality. What are people actually trying to do? What are we actually trying to accomplish with those people?</p><p>We have an opportunity now to save a significant amount of money by streamlining processes. Since we&#8217;re doing things in very different ways, we might as well look at how we architect those systems.
We could examine the whole system, the whole process, and redesign it. We could ask: what do we want the people to do? Then, what&#8217;s left for AI and computers to do to enable people to do that work?</p><p>Only once we start having those conversations&#8212;bringing to light all the frustrations of employees, especially the lower-level employees, the customers, and the people trying to manage all of that&#8212;can we start to have the real conversations about how to make processes smarter, more emotional, more relational.</p><p>Organisations need faster and cheaper&#8212;people&#8217;s bonuses depend on it. But optimising for effectiveness and meaningful outcomes actually delivers faster and cheaper as side effects. When you optimise for speed and cost alone, you get short-term wins but long-term losses, because others will create truly effective solutions that outcompete you.</p><p>The choice we face isn&#8217;t between human work and AI work; it has to be a choice between thoughtful integration and thoughtless automation. We have an opportunity now to learn from the customer service evolution and design AI implementations that enhance rather than diminish human agency.</p><p>The organisations that get this right will be more resilient, more innovative, and more capable of adapting to whatever changes come next.</p><p>If you want to explore this approach in a workshop setting, reach out. I&#8217;m happy to facilitate.</p><div><hr></div><p><em>This is the third post in a six-part series exploring AI&#8217;s transformation from hype to practical implementation. Next time: &#8220;Prompt Engineering as Systems Thinking.&#8221;</em></p><h2><strong>References</strong></h2><p><strong><a href="https://publichealthscotland.scot/publications/nhs-waiting-times-stage-of-treatment/stage-of-treatment-waiting-times-inpatients-day-cases-and-new-outpatients-quarter-ending-31-march-2025/?hl=en-US">Public Health Scotland. (2025).
NHS Waiting Times Stage of Treatment: Inpatients, Day Cases and New Outpatients Quarter Ending 31 March 2025</a></strong>.</p><p><strong><a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai">McKinsey. (2025). The State of AI: Global survey.</a></strong></p><p><em>This article was first published at https://slow.works on 26 Aug 2025</em></p>]]></content:encoded></item><item><title><![CDATA[When AI stops waiting]]></title><description><![CDATA[From co-pilot to collaborative team member&#8212;how AI is learning to take initiative]]></description><link>https://slowworks.substack.com/p/when-ai-stops-waiting</link><guid isPermaLink="false">https://slowworks.substack.com/p/when-ai-stops-waiting</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Sat, 19 Jul 2025 10:38:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3YzL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is part two of our six-part series on AI's
adulting phase. If you missed the last one, read it <strong><a href="https://slowworks.substack.com/p/ais-adulting-phase">here</a></strong></em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3YzL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3YzL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3YzL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3YzL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3YzL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3YzL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1247939,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://slowworks.substack.com/i/168703571?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3YzL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3YzL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3YzL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3YzL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14433b47-21b4-43a1-8137-a3b76ffbae94_2048x2048.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">I usually like more positive header images, but I found this to be quite telling. It was the first image Google Gemini created when I asked it to "help me create an image for the theme 'When AI Stops Waiting'".</figcaption></figure></div><p>As we started exploring last time, the latest shift in AI is agentic AI, where different AI systems pass tasks back and forth between each other until they find a solution. Picture a software development team where AI agents might pick up a ticket from a bug report to fix the issue. One agent serves as the debugger, another as the code writer, one as the refiner, and another as the tester.
They pass work between themselves with the promise of replacing whole teams of people.</p><p>But there's something more immediately useful happening right now in business&#8212;what I call "outsourced action" rather than fully agentic AI. Instead of a team of AI agents working completely autonomously, we're seeing workflows where AI systems handle specific steps before passing control back to humans.</p><h2><strong>The Agent Team in Action</strong></h2><p>Let me paint a picture of how this actually works. Imagine you have a folder in SharePoint. Every time you add a document, the system automatically kicks into gear. AI picks up the file, reads it, summarises the content, and condenses it into a draft LinkedIn post or blog article. That draft then gets handed to another agent that creates a branded image and assembles the complete content package.</p><p>These aren't fully autonomous systems&#8212;they're more like intelligent workflows that use different AI capabilities to move work to specific checkpoints where human oversight becomes necessary again. It's practical, it's happening now, and it's genuinely useful.</p><p>This represents a fundamental shift to what Gartner calls the "paramount strategic technology trend" of 2025: agentic AI systems that can understand goals, plan actions, and execute them with minimal human intervention. Imagine having a "virtual workforce" of AI agents that can assist, offload, and augment the work performed by humans. Quite the promise.</p><h2><strong>Current Reality vs Future Promise</strong></h2><p>Let's be honest about where we actually are versus where the marketing suggests we're heading. Current AI agents exhibit what researchers call "jagged intelligence"&#8212;they can be remarkably capable in some areas while completely failing in others. 
This inconsistency creates significant challenges for organisations trying to deploy autonomous systems at scale.</p><p>Most successful implementations today involve "structured human oversight." AI handles the heavy lifting&#8212;data processing, initial analysis, content generation&#8212;but humans remain in the loop for critical decisions and quality control.</p><p>Consider a practical example: an insurance company might use AI agents to process initial claims, gather relevant documentation, and prepare preliminary assessments. But the final approval still requires human judgment, particularly for complex or high-value claims. This hybrid approach leverages AI's speed and consistency while preserving human expertise for nuanced decision-making.</p><h2><strong>Building Practical Workflows</strong></h2><p>The most effective implementations I've seen start simple and gradually build complexity. Rather than attempting to automate entire processes immediately, successful organisations identify specific steps within existing workflows that benefit from AI intervention.</p><p>Take content creation as an example. Instead of asking AI to produce finished articles, you might design a workflow where AI handles research and initial drafting, humans provide strategic direction and editing, and AI manages formatting and distribution. Each step has clear handoff points and quality checkpoints.</p><p>The key insight is that effective human-AI collaboration requires intentional design. It's not enough to add AI tools to existing processes. We need to rethink how work flows between humans and artificial intelligence to maximise the strengths of both.</p><h2><strong>The Infrastructure Challenge</strong></h2><p>One significant barrier to the widespread adoption of agentic AI is the need to address infrastructure and legacy systems. Many organisations lack the technical foundation necessary to support sophisticated AI workflows. 
This includes everything from data architecture and API connectivity to security frameworks and governance protocols.</p><p>The most successful implementations often start with organisations that already have robust digital infrastructure and clear data governance practices. These companies can more easily integrate AI agents into existing systems without compromising security or compliance requirements.</p><h2><strong>Navigating the Risks</strong></h2><p>The push towards agentic capabilities brings genuine risks that organisations must address proactively. When AI systems operate with greater autonomy, the potential for errors or unintended consequences increases significantly. This is particularly concerning when these systems interact with external stakeholders or make decisions that affect business outcomes.</p><p>Robust guardrails become essential. This involves implementing monitoring systems that can detect when AI agents operate outside expected parameters, establishing clear escalation procedures for edge cases, and maintaining human oversight for high-stakes decisions.</p><p>The challenge is balancing autonomy with control. Too much oversight negates the efficiency benefits of agentic AI, while too little oversight creates unacceptable risks. Finding this balance requires careful consideration of each use case and its potential impact.</p><h2><strong>Your Next Steps</strong></h2><p>Here's a practical exercise to help you explore agentic possibilities in your own work. Start by mapping a routine process that involves multiple steps and different types of work: how you handle customer inquiries, process expense reports, or prepare weekly reports.</p><p>Identify which steps involve data gathering, analysis, or formatting&#8212;tasks that AI handles well.
Then identify which steps require judgment, creativity, or relationship management&#8212;areas where human input remains essential.</p><p>Design a simple workflow that chains AI actions for the routine steps while preserving human control at critical decision points. Use tools like Microsoft Power Automate, Zapier, or even simple email rules to create these connections. Or simply copy/paste between the different apps while you test which parts of the workflow work well.</p><p>Test your workflow with low-stakes examples first. Pay attention to where the handoffs work smoothly and where they create friction. Notice what types of errors or edge cases emerge, and build appropriate safeguards.</p><h2><strong>Looking Ahead</strong></h2><p>The trajectory towards more sophisticated agentic AI seems inevitable, but the timeline and specific implementations remain uncertain. What's clear is that organisations that begin experimenting with structured AI workflows now will be better positioned to leverage more advanced capabilities as they emerge.</p><p>The shift from AI as a tool to AI as a collaborator represents one of the most significant changes in how we work since the introduction of personal computers. The organisations that navigate this transition thoughtfully&#8212;balancing efficiency gains with human agency&#8212;will likely find themselves with significant competitive advantages.</p><p>But this transition requires more than just technical implementation. It demands a fundamental rethinking of how we design work, measure value, and maintain human dignity in increasingly automated environments.</p><p>Looking even further ahead, I would like to leave you with this. In April of this year, a group of AI researchers, forecasters, and future scientists published a report aimed at forecasting scenarios in AI development for the next decade. 
They called it <strong><a href="https://ai-2027.com/">AI 2027</a></strong>, as the biggest shift is likely to happen in the next couple of years. Perhaps not surprisingly, the first steps they predicted have already played out. All major AI players have launched agentic AI. The report explores what might happen if we let the market alone drive the development of AI, how AI is ultimately a key question for national security, and what that might mean in the current political climate. Trigger warning: neither of the scenarios they paint is very positive. And critics of the report say it still might be too optimistic. Food for thought. If you're more of a visual learner, <strong><a href="https://youtu.be/5KVDDfAkRgc">this YouTube video</a></strong> does an amazing job of explaining it in visuals.</p><p>Next time, we'll explore what happens when this automation mindset meets the reality of human work and why the customer service evolution serves as both a cautionary tale and a roadmap for better approaches.</p><div><hr></div><p><em>This is the second post in a six-part series exploring AI's transformation from hype to practical implementation. Next time: "Redefining Work: The Dumbing Down Dilemma."</em></p><p>PS: I usually like more positive header images, but I found this to be quite telling. It was the first image Google Gemini created when I asked it to "help me create an image for the theme 'When AI Stops Waiting'".</p><h2><strong>References</strong></h2><ul><li><p>Deloitte. (2024). State of Generative AI in the Enterprise 2024. https://www2.deloitte.com/us/en/pages/consulting/articles/state-of-generative-ai-in-enterprise.html</p></li><li><p>MIT Sloan Management Review. (2025). Five Trends in AI and Data Science for 2025. https://sloanreview.mit.edu/article/five-trends-in-ai-and-data-science-for-2025/</p></li><li><p>Gartner. (2025). Explore Gartner's Top 10 Strategic Technology Trends for 2025.
https://www.gartner.com/en/articles/top-technology-trends-2025</p></li><li><p>The Sirocco Group. (2025). The rise of Agentic AI: From generation to action in 2025. https://www.siroccogroup.com/the-rise-of-agentic-ai-from-generation-to-action-in-2025/</p></li><li><p>Salesforce. (2025). Salesforce AI Research Details Agentic Advancements. https://www.salesforce.com/news/stories/ai-research-agentic-advancements/</p></li><li><p>Daniel Kokotajlo, Thomas Larsen, Eli Lifland, &amp; Romeo Dean. (2025, April 3). <em>AI 2027</em>. https://ai-2027.com</p></li></ul><p><em>This article was first published at https://slow.works on 19 Jul 2025</em></p>]]></content:encoded></item><item><title><![CDATA[AI's Adulting Phase]]></title><description><![CDATA[How artificial intelligence evolved from exciting novelty to essential business partner in just eight months]]></description><link>https://slowworks.substack.com/p/ais-adulting-phase</link><guid isPermaLink="false">https://slowworks.substack.com/p/ais-adulting-phase</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Tue, 17 Jun 2025 07:53:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8xAA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8xAA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp"
srcset="https://substackcdn.com/image/fetch/$s_!8xAA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 424w, https://substackcdn.com/image/fetch/$s_!8xAA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 848w, https://substackcdn.com/image/fetch/$s_!8xAA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 1272w, https://substackcdn.com/image/fetch/$s_!8xAA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8xAA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png" width="1280" height="800" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:1280,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1277701,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://slowworks.substack.com/i/166134104?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8xAA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 424w, https://substackcdn.com/image/fetch/$s_!8xAA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 848w, https://substackcdn.com/image/fetch/$s_!8xAA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 1272w, https://substackcdn.com/image/fetch/$s_!8xAA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb48eb260-cf3a-4174-944a-a82b0b90a38b_1280x800.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>It&#8217;s been a while. How are you? I got a little hyper-focused on my MSc studies in organisational psychology. And while we weren&#8217;t watching, AI has had a bit of an adulting phase. By mid-2025, the models have become better, cheaper and faster, but most of all, easier to use. Over the next three months, I&#8217;ll send you an update every two weeks, so we can catch up on all that has happened and what it might mean for you.</p><p><br>Think back to summer 2024. AI was exciting, sure. There were lots of new things popping up everywhere, and yet nobody had quite figured out what to do with it. AI was this thing that we&#8217;d open a chat box, type things into, and get answers from. Maybe it would make a slightly awkward image or help us write an email, but it was very much a pull application. You ask, and it delivers an answer.<br>We&#8217;d do things one at a time, probably write prompts one at a time, sort of on the go. Over time, we learned how to write better prompts and how to use techniques like giving the AI a role: &#8220;act as a marketing specialist,&#8221; for example. And we played. We experimented. But that was about it.</p><p><br>Now, in the last eight months, something fundamental has changed. AI is moving from curious experiment to essential business infrastructure. The numbers tell part of the story: 78% of organisations now use AI in at least one business function, up from just 55% last year. In 2025, IT leaders are allocating approximately 20% of their technology budgets to AI initiatives. 
</p><h2><strong>The First Steps: Projects</strong></h2><p>The most basic version of this evolution that&#8217;s been popping up everywhere is what I call &#8220;projects&#8221;. They have different names on different platforms. In ChatGPT, they&#8217;re called Custom GPTs (and now also Projects). In Claude, they&#8217;re Projects. In Gemini, they&#8217;re Gems. And Copilot calls them Agents (slightly confusing, as we&#8217;ll see shortly). The idea is simple but powerful: instead of writing prompts from scratch every time, we save more specific, detailed prompts that we can reuse.</p><p><br>You might have a preset chat for email editing. Another one for drafting blog posts. Yet another for brainstorming ideas or breaking down tasks into small, actionable chunks. You can spend extra time writing a prompt that instructs the AI to do things the way you like them done. You can attach files, PDFs, databases, maybe even live data from a customer database or email to inform how the AI works on that prompt.</p><p><br>For a copywriter, add example copy or a style guide, so you don&#8217;t have to add this information manually every time. Now you have specialised collaborators for different tasks. You go to a different Project, a different Gem, a different Custom GPT, and you shortcut a lot of that prompting and context setting because you&#8217;ve saved it ahead of time.</p><h2><strong>Learning walk: Canvas</strong></h2><p>Another useful extension to the basic chatbot, pioneered by Anthropic and quickly adopted by every major AI platform, was the canvas. Rather than going back and forth in chat, we now have the ability to work on a document collaboratively. Even if you didn&#8217;t do it on purpose, you&#8217;ve probably seen the chat split in two and a document appear on the right. 
In Claude, that&#8217;s often referred to as an <a href="https://support.anthropic.com/en/articles/9945615-intro-to-artifacts">artifact</a>; ChatGPT calls it a <a href="https://openai.com/index/introducing-canvas/">canvas</a>. Now you can refer to the document, your AI can make edits, pass it back to you, and so on. If you&#8217;re using Microsoft Office, it&#8217;s almost the other way around: the document is your regular Word document, and the chat pops up in the Copilot sidebar. Either way, having a virtual colleague work on a document with you can be extremely powerful.</p><p><br>And if you haven&#8217;t tried it yet, the canvas features are amazing once you ask the AI to code something. Have a boring spreadsheet? Export it as CSV, or another format your AI understands (it will tell you), and ask it to turn the spreadsheet into a data visualisation. Or need to visualise a quick idea for an app or feature? Ask for a quick prototype.<br>Now that we&#8217;ve taken the first steps, let&#8217;s recap.</p><h2><strong>What &#8220;Adulting&#8221; Actually Means</strong></h2><p>This idea of grouping together information and instructions to be reused later takes a general AI and makes it more specialised for specific tasks. It&#8217;s the difference between having a helpful but generic assistant and having a team of specialists who understand your business, your style, and your needs.</p><p><br>Where it gets really interesting is having those agents interact with each other. In software development, there has been a lot of progress around how agents can work together. You might have whole armies of agents working on drafting software solutions to solve specific bugs or feature requests.</p><p><br>Picture this: once a bug report has been filed with an example and references, the first agent might pick up that ticket and figure out what exactly the problem is and what kind of knowledge and tools it needs to solve it. 
The second agent might try to replicate the bug, see if it can make the bug happen on its own, so it can later test the fix. Once the bug has been replicated, a third agent might pinpoint where exactly in the codebase the bug originates. Once it finds that, it might pass that file to yet another agent that&#8217;s trying to fix it.</p><p><br>Finally, a last agent might test the fix and see if the bug that was reported in the first place is solved when the code change is implemented. Once that&#8217;s all done, it might write a bit of documentation and submit the change to the codebase for human review.<br>Suddenly, we have a whole team of different specialised agents working together, collaborating, going back and forth between themselves depending on what they deem the next step should be.</p><h2><strong>The January Watershed Moment</strong></h2><p>The turning point showed its first signs in October 2024. That&#8217;s when Anthropic introduced something called &#8216;computer use&#8217;, teaching Claude to view screens, type, move cursors, and execute commands. Around the same time, Google was developing Project Jarvis, an AI agent designed to control web browsers and complete everyday tasks.<br>This wasn&#8217;t just an incremental improvement. This was AI learning to stop waiting for us.</p><p><br>By January 2025, this shift became undeniable. OpenAI launched Operator, an AI agent that could independently navigate web browsers to complete everyday tasks. This moment marked what many are calling &#8220;the official beginning of agentic AI.&#8221;<br>Around the same time, Google DeepMind unveiled Gemini 2.0 Flash Thinking, setting new highs in mathematical and scientific reasoning with a massive 1M token context window.</p><p><br>Last year&#8217;s models could grasp the nuances of a few dozen pages, remember key plot points or arguments, and answer questions based on that limited scope. 
If you gave it a whole novel or even just a long PDF report, it would only be able to &#8220;remember&#8221; the most recent few chapters and would struggle to connect themes from the beginning to the end. This year&#8217;s models can process and understand an entire multi-volume encyclopedia. They can cross-reference information between different volumes, synthesise knowledge from diverse topics, and answer complex questions that require a holistic understanding of a vast amount of information.</p><p><br>At this point, there&#8217;s very little difference between the big frontier models. Google, OpenAI, and Anthropic are falling over themselves to launch the next, better version mere days faster than their competitors. All now have huge context windows, allowing a lot of contextual information to be digested. They can all browse the internet for us, even do &#8220;deep research&#8221;, where they take a simple question we type in the chatbox and then launch several &#8220;research agents&#8221; to Google for us, make connections, and argue over which sources are useful or accurate. Once they have enough information, they compile a well-grounded report with citations. Suddenly, research that would have taken days takes 5-10 minutes.</p><h2><strong>From Pull to Push: A New Relationship</strong></h2><p>In our relationship with AI, we&#8217;re moving from typing things into a box and getting an answer, to saving prompts and steps we do often so we can reuse them more quickly, to eventually having different pre-made agents working with each other, even using tools like web browsing or taking screenshots, making airline bookings on our behalf.</p><p><br>As we move through 2025, I expect a lot more of this to become embedded in how we work. As the technology gets better, it&#8217;s up to us to update our relationship with AI. 
We&#8217;re moving from a world where AI waits for our instructions to one where AI can take initiative, plan ahead, and work alongside us as genuine collaborators.</p><h2><strong>Your Homework: Start Small, Think Big</strong></h2><p>So here&#8217;s what I invite you to do. Think of it as a little homework to explore how you might collaborate with AI agents, but also how you want to see yourself evolve in this new landscape.</p><p><br>Look at your day-to-day work and start identifying tasks you do repeatedly. These might be simple admin tasks like screening emails and using a tagging or filing system. Maybe you get inquiries about specific features or products you sell, and you find yourself answering similar questions again and again.</p><p><br>Examine the steps in these processes. How do you answer those emails? Are they highly personal, or are they based on information you can save into a PDF?<br>Try building one of those <a href="https://help.openai.com/en/articles/8554397-creating-a-gpt">custom GPTs</a>, <a href="https://support.anthropic.com/en/articles/9519177-how-can-i-create-and-manage-projects">Claude projects</a>, <a href="https://support.microsoft.com/en-us/topic/build-your-own-agent-with-microsoft-365-copilot-ee209698-16bd-4d5c-9e4a-a999528c9d00">Copilot Agents</a>, or <a href="https://support.google.com/gemini/answer/15236321">Gemini Gems</a>. The platform doesn&#8217;t matter much. But do give it a try. Write up the process. Give your AI an instruction like: &#8220;I will share an email with a customer inquiry. Act as a personal assistant. 
Use the attached information about our products and services to draft a response answering the customer inquiry and offering a one-on-one meeting to discuss further.&#8221;</p><p><br>Then attach a few documents that are key to answering routine inquiries&#8212;product fact sheets, your website saved as PDFs, whatever information you typically reference.<br>See how it feels to use this to draft your first email rather than writing it by hand. Is it adding something? Is it saving time? Is it giving better responses? Is it worse? Maybe there&#8217;s information missing&#8212;is that because you didn&#8217;t attach it, or because it&#8217;s something only you know that isn&#8217;t documented anywhere?</p><p><br>Notice which kinds of processes or tasks in your life are easy to turn into these little projects, and which parts need you specifically. How might that inform how you think about what you do in your role?<br></p><p>If the custom agent doesn&#8217;t quite work the way you want, copy the prompt and get your AI to help you make the prompt better. Open a new (normal) chat and say: </p><pre><code>Act as a prompt engineer. Help me refine a prompt for a [custom GPT/Claude Project/Gem/Agent]. I pasted the current prompt below. Please read it and critique it, then ask me at least three questions, one at a time, to understand my process and context. Once you have enough information, draft the best prompt you possibly can for me to use in my [custom GPT/Claude Project/Gem/Agent].
&lt;current-prompt&gt; [paste your hand-made prompt here] &lt;/current-prompt&gt;
</code></pre><h2><strong>The Bigger Picture</strong></h2><p>Over the coming weeks we&#8217;ll build on this experiment with projects/custom GPTs and start exploring:</p><ul><li><p>What agentic AI actually means and how we might think about it</p></li><li><p>How to stay smart in a world that does all the thinking for you</p></li><li><p>What prompt engineering and systems thinking have in common</p></li><li><p>Measuring what matters, where we revisit <a href="https://slowworks.substack.com/p/metrics-for-a-genai-world">Metrics for a GenAI World</a></p></li><li><p>How to say yes to both technology and humanity</p></li></ul><div><hr></div><p><em>This is the first post in a six-part series exploring AI&#8217;s transformation from hype to practical implementation. In a couple of weeks, we&#8217;ll dive deeper into the rise of autonomous agents and what happens when AI truly stops waiting for our instructions.</em></p><h2><strong>References</strong></h2><p>Where I got the stats cited above:<br><a href="https://www2.deloitte.com/us/en/pages/consulting/articles/state-of-generative-ai-in-enterprise.html">State of Generative AI in the Enterprise 2024, Deloitte</a><br><a href="https://hai.stanford.edu/ai-index/2025-ai-index-report">The 2025 AI Index Report, Stanford HAI</a><br><a href="https://www.coursera.org/articles/ai-trends">Top 5 AI Trends to Watch in 2025, Coursera</a></p><p>&#8203;</p><p><em>This article was first published at https://slow.works on the 16 Jun 2025</em></p>]]></content:encoded></item><item><title><![CDATA[Getting ready for 2025 - With a question concerning technology]]></title><description><![CDATA[What might 2025 bring? Total automation? 
An artistic new world?]]></description><link>https://slowworks.substack.com/p/getting-ready-for-2025-with-a-question</link><guid isPermaLink="false">https://slowworks.substack.com/p/getting-ready-for-2025-with-a-question</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Tue, 07 Jan 2025 08:57:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!In7Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!In7Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset image2-full-screen"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!In7Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 424w, https://substackcdn.com/image/fetch/$s_!In7Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 848w, https://substackcdn.com/image/fetch/$s_!In7Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!In7Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!In7Y!,w_5760,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;full&quot;,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:807182,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-fullscreen" alt="" srcset="https://substackcdn.com/image/fetch/$s_!In7Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 424w, https://substackcdn.com/image/fetch/$s_!In7Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 848w, https://substackcdn.com/image/fetch/$s_!In7Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!In7Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58c3d8c0-3663-49f9-9dbe-d17e03dff5b3_2848x2136.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>So it&#8217;s 2025, and many of us are slowly easing into the year. We&#8217;ve wrapped up the holidays, maybe even had an extra day off before returning to work, and might want to think about what lies ahead. Around this time, a lot of us set big resolutions and then promptly break them. Immediately, the year starts with a failure. Instead, I think of the year change as an opportunity to reflect: looking back and looking forward. If you haven&#8217;t yet taken the time to reflect on the last year, I can highly recommend taking an hour or two for yourself and completing the <a href="https://yearcompass.com/">Year Compass &#129517;</a>, a fantastic free resource.</p><p><br>I&#8217;m halfway through an exciting new venture. 
Last September I started an MSc in Organisational Psychology to explore how current changes in technology are changing how we live and work with each other. I&#8217;m excited to bring more insights to you and your organisations. You&#8217;ll notice I might cite more scholarly articles in the future. And I&#8217;ll need your help; more on that later.</p><p><br>Let&#8217;s take a quick look back. 2023 was our collective OMG moment with ChatGPT - that instant when everyone, from CEOs to schoolchildren, realized AI wasn&#8217;t just science fiction anymore. We crossed a point of no return, and the hype train left the station at full speed. You might remember all the ads and &#8220;experts&#8221; trying to sell the &#8220;Best 1000 Prompts for ChatGPT&#8221;. 2024 became the year of practical experimentation: chatbots popped up everywhere, marketing teams churned out auto-generated content, and almost every app suddenly had an AI assistant that would help you rewrite copy. Looking at you, LinkedIn. These clumsy and hurried uses of AI reflect how little we understand about the potential of GenAI. Last year, we also got familiar with that uncanny feeling of AI-generated images. You know the ones - they look almost right, but there&#8217;s always something a bit off about them, a distinctive style you can&#8217;t quite put your finger on. I had the pleasure of teaching people from all walks of work and life how to be creative with AI, and still, a lot of it came back to using AI for brainstorming, because it just isn&#8217;t that creative itself (yet?). Meanwhile, video avatars quietly crossed the threshold from &#8220;obviously fake&#8221; to &#8220;good enough&#8221; for corporate training and presentations. I got to do a research project for a client comparing dozens of video avatar platforms, and the spread was shocking. 
From non-existent privacy protection to simply unusable video, only two platforms made the cut, thanks to respecting privacy and data security and delivering good-quality videos: <a href="https://www.synthesia.io/?via=jonas-haefele">Synthesia</a> and close runner-up <a href="https://elai.io">Elai</a>. Undoubtedly, there will be more and better platforms coming up. Deepfakes became something anyone could create with a few clicks, and they have already had their prime time in the US election.</p><p><br>Now, as we enter 2025, we&#8217;re moving beyond these individual tools toward something more profound: AI that acts on our behalf, making decisions and taking actions across our organisations. It&#8217;s no longer about experimenting with isolated tools - it&#8217;s about implementing AI throughout our entire workflow, transforming how we operate at a fundamental level. Google demoed <a href="https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/">Gemini 2.0</a> as a fully multi-modal model that can reason by itself, make plans and execute them step by step, even take action on your computer screen and check its own work before it replies. Suddenly, AI can do things like editing images step by step while retaining parts of the previous image, or executing tasks in real time on your behalf - something that was almost impossible for a long time. Even <a href="https://ces.na.panasonic.com/panasonic-go">Panasonic</a> enters the picture by partnering with Anthropic (Claude) and investing millions in both a platform and individual offerings that focus on AI-powered wellbeing, health and care solutions. Meanwhile, Microsoft launched <a href="https://www.microsoft.com/en-gb/microsoft-copilot/microsoft-copilot-studio">Copilot Agents</a> last autumn. Now, specialised chatbots based on the specific data of your team, department or company are available to every business with an Office 365 subscription. 
They can pop up on your website, or collaborate in Teams just like a human would. All these examples aren&#8217;t necessarily technology that didn&#8217;t exist before, but the ease of access is a huge shift. This isn&#8217;t just experimental anymore. The last months of 2024 started to show us where this is all heading: from isolated tools to seamless collaboration between specialised AI agents that can actually get things done.</p><p><br>To help us with this shift, we might want to return to a question Martin Heidegger asked in the mid-20th century: What is the essence of technology? Not the mechanics of how it works, but the ideas it brings with it - the ways of seeing the world. Heidegger traced this back to a fundamental shift in how we relate to nature. Where once we experienced the world as a place of wonder, working with and attuning to it, we gradually moved toward seeing everything as measurable, predictable, and ultimately - extractable.</p><p><br>He used the example of a river. Before technology, a river was just a river - an untamed force of nature, many things in one. Once we place a hydropower plant there, we transform its whole being into a power resource, something to be harnessed and extracted. This mindset of extraction, Heidegger warned, wouldn&#8217;t stop at nature. Everything would become what he called &#8220;Bestand&#8221; - resources waiting to be extracted, including human labour.</p><p><br>Looking at the world today, we might see what Heidegger foresaw everywhere from the gig economy to subscriptions for everything to fast fashion. Humans are extracted for &#8220;value&#8221; just like we extract resources out of the ground. Here&#8217;s where Heidegger&#8217;s warning becomes particularly relevant. For decades, we&#8217;ve trained people to work like robots - closing tickets fast, responding to notifications, and thinking in isolated processes. 
We&#8217;ve gotten so good at systemising our processes that we&#8217;ve limited our ability to think in broader contexts. We&#8217;ve turned humans into extractable resources, valuable only for their specific outputs. For example, when I did my undergrad, I studied to be a &#8220;Designer&#8221;, someone who could look at a problem and solve it with any tool or solution necessary. Nowadays we split design into dozens of micro-specialisations: the UI Designer who makes buttons pretty, the UX Designer who decides where the buttons should go, or the UX Researcher who talks to people to learn what buttons are needed.</p><p><br>Now AI is becoming better at these menial, repeatable tasks than the average human. And for the first time in modern history, that includes &#8211; even focuses on &#8211; white-collar work. We see it in customer service, in content creation, in data analysis. OpenAI just announced they now <a href="https://blog.samaltman.com/reflections">know how to build AGI</a>, and Elon Musk and others are even building humanoid robots to handle the manual tasks we&#8217;ve relegated ourselves to in the name of efficiency. The more specialised we&#8217;ve become, the more replaceable we are in an AI-driven world.</p><p><br>But Heidegger suggested that within this danger lies hope. The Greeks, he reminded us, thought of technology as &#8220;techne&#8221; &#8211; art, the revealing of something new. Not just aesthetics, but the art of movement, poetry, condensing meaning in ways that haven&#8217;t existed before. Art in the Greek sense of <em>techne</em> is the mastery of creating meaning that hasn&#8217;t been seen before, an unveiling or revealing of a natural truth. 
As AI takes over the extractable parts of our work, we might find space to return to this original sense of art.</p><p><br>While AI can generate images and text with impressive speed, it operates within the paradigm of extraction and probability - creating average, repeatable outputs based on existing patterns. It cannot, in the Greek sense, create art that reveals new meanings, that moves us in unprecedented ways, that makes us think about things differently. After all, it has been trained on the material we produced in the name of extraction and efficiency. It&#8217;s read more marketing material and output of commercial enterprises than it&#8217;s read philosophy, and it simply can&#8217;t &#8211; yet &#8211; get bored and play around aimlessly to come up with new ideas.</p><p><br>This might be the real opportunity of 2025. As AI handles more and more systematic, extractable tasks, we might find ourselves free to explore what it truly means to be human. Not in the realm of efficiency and extraction, but in the space of creation, failure, and exploration - doing things because they matter, not just because they&#8217;re valuable.</p><p><br>So, will 2025 bring us the first wave of hiring freezes and layoffs of white-collar workers? Will it make our work more interesting by taking care of the boring tasks? Will it make us face what it means to live in a post-truth world? Only time will tell.<br>What questions are you holding as you enter 2025? I&#8217;ve added a short three-question survey - I&#8217;d love to hear your thoughts. Your answers will directly influence my research project for my MSc. Thank you already for sharing your thoughts. And if you&#8217;d like to explore what these developments might mean for you and your team, reach out. 
I&#8217;m designing new talks and workshops not just about working with AI, but about rediscovering what makes us human in a world where efficiency alone no longer defines our worth.</p><p><br><a href="https://tally.so/r/wQJz41">Share your ideas in this 3-question survey</a></p><p><strong>Here&#8217;s to an artistic, connected 2025.</strong></p><p></p><div><hr></div><p><em><a href="https://www.pexels.com/photo/gray-dam-under-blue-sky-574024/">Photo by ciboulette on Pexels</a></em><br><em>Some links above are affiliate links, which might get me a kickback at no cost to you, should you sign up for a paid plan.</em><br><em>Update: 8 Jan 2025, added two more links to current developments and fixed two typos.</em></p><p><em>This article was first published at https://slow.works on 7 Jan 2025</em></p><p>&#8203;</p><p></p>]]></content:encoded></item><item><title><![CDATA[The Myths of Our Time]]></title><description><![CDATA[What are the myths we're telling about work, about us, and about AI?]]></description><link>https://slowworks.substack.com/p/the-myths-of-our-time</link><guid isPermaLink="false">https://slowworks.substack.com/p/the-myths-of-our-time</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Thu, 10 Oct 2024 07:56:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!evOz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!evOz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!evOz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!evOz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!evOz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!evOz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!evOz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1912716,&quot;alt&quot;:&quot;A tall, futuristic tower rising into the clouds, made of shiny, sleek metal, technology, and corporate logos, symbolizing the globalized, automated economy. At the top, robots and automated systems work efficiently, while at the base, a large crowd of people struggles to climb. Many are falling or being pushed aside by robotic arms, while a select few individuals, with wealth or privilege, rise smoothly. 
The overall look should be modern and polished, with humans appearing disillusioned, overwhelmed, and left behind, contrasting with the perfection and efficiency at the top of the tower.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A tall, futuristic tower rising into the clouds, made of shiny, sleek metal, technology, and corporate logos, symbolizing the globalized, automated economy. At the top, robots and automated systems work efficiently, while at the base, a large crowd of people struggles to climb. Many are falling or being pushed aside by robotic arms, while a select few individuals, with wealth or privilege, rise smoothly. The overall look should be modern and polished, with humans appearing disillusioned, overwhelmed, and left behind, contrasting with the perfection and efficiency at the top of the tower." title="A tall, futuristic tower rising into the clouds, made of shiny, sleek metal, technology, and corporate logos, symbolizing the globalized, automated economy. At the top, robots and automated systems work efficiently, while at the base, a large crowd of people struggles to climb. Many are falling or being pushed aside by robotic arms, while a select few individuals, with wealth or privilege, rise smoothly. The overall look should be modern and polished, with humans appearing disillusioned, overwhelmed, and left behind, contrasting with the perfection and efficiency at the top of the tower." 
srcset="https://substackcdn.com/image/fetch/$s_!evOz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!evOz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!evOz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!evOz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcef4ffcb-8b46-4f3e-b208-09efdc5b287b_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">I asked ChatGPT for an image of "the myth of humans creating an efficient, meritocratic, globalised economy that's highly automated", it came up with this tower of babel analogy and created this image.</figcaption></figure></div><h4>What are the stories we forgot?</h4><h4>Which stories do we want to shape our work and lives? </h4><h4>Do we even have time for stories? </h4><h4>Or maybe we're too busy playing a mythical RPG called <em>Life in the 21st Century</em> to tell stories ourselves?</h4><div class="pullquote"><p>If you have no time to read, I&#8217;d still love to hear your stories and ideas. <br>Take the short <a href="https://tally.so/r/wQJz41">&#8203;3-question survey here&#8203;</a>. Thank you!</p></div><p>Welcome back, it&#8217;s been a hot minute. Everyone I talked to this past month has had some sort of frenzy or chaos. #blessedButStressed was a common denominator in many conversations I shared. And yes, I&#8217;ve been in a similar situation. 
September was incredibly busy with work projects, the <a href="https://aipractitioner.com/product/ai2-where-appreciative-inquiry-and-artificial-intelligence-meet/">&#8203;AIP&#8203;</a> published an article I co-authored with <a href="https://sites.google.com/taylor-pitt.org/meta/home/">&#8203;Dr Paul Taylor-Pitt&#8203;</a> about the opportunities of blended organisational development, <a href="https://aipractitioner.com/product/collaborating-beyond-human-boundaries-towards-a-future-of-blended-transformation-in-organisations/">&#8203;exploring the intersection of Generative AI and Appreciative Inquiry&#8203;</a>, and less than a fortnight ago, I had the privilege to start my studies at Birkbeck, University of London, to pursue an MSc in Organizational Psychology. I&#8217;m incredibly excited to be spending the next year with a wide range of inspiring academics and practitioners and deepen my critical inquiry into how technology influences organisational culture and change.</p><h2>The myths of intelligence</h2><p>The dominant story I&#8217;ve heard and lived this past month was one of being busy, excited, and hustling. And somewhere in all of that, I managed to take a few timeouts. A coffee with a fellow somatic practitioner, a coaching session, a weekend celebrating my parents&#8217; retirement. And on the flight to see my parents, I got hooked on a different story. A friend shared this podcast with me, <a href="https://www.themythicbody.com/podcast/">&#8203;The Emerald&#8203;</a>, which explores currents and trends through a mythical lens. Think well-researched academic papers meet the best storytelling podcast you know, and sprinkle a bit of much-needed somatic awareness and humility. 
The <a href="https://www.themythicbody.com/podcast/you-want-to-be-sorcerer-age-mythic-powers-ai-episode/">&#8203;AI Episode&#8203;</a> reminded me of our exploration of <a href="https://slow.works/blog/what-is-somatic-intelligence">&#8203;different ways of seeing intelligence &#8203;</a>from January, taken to a deeper level.</p><p>One of the myths Joshua Michael Schrei explores in the episode is one I&#8217;ve been teaching in the <a href="https://www.generalpurpose.com/ai-essentials">&#8203;AI Essentials&#8203;</a> course with General Purpose: Humankind has thought about artificial intelligence for a really <a href="https://volumes.blog/2023/06/09/the-curious-case-of-the-golem-artificial-intelligence/">&#8203;long time&#8203;</a>. We&#8217;re obsessed with the idea of creating something outside of us that&#8217;s as intelligent as us, or more. As humans, we dream of creating something that <a href="https://www.imdb.com/title/tt4122068/">&#8203;has the power&#8203;</a> to <a href="https://www.imdb.com/title/tt0088247/">&#8203;annihilate us&#8203;</a>, and we know how to <a href="https://www.imdb.com/title/tt0470752/">&#8203;make it sexy&#8203;</a>.</p><p>And in a way, the stories we re-tell shape the stories in our history books. Or at the very least the stories we live. This gets exacerbated by our use of AI in our creation process since AI is creating new ideas from the data we feed it during training. Try asking ChatGPT to make you an image about a scenario involving AI. Chances are extremely high that you get bright blue holographic displays and humanoid robots all over &#8211; retelling a myth we created through decades of filmmaking.</p><h2>The myths of equality</h2><p>The idea of myths resurfaces in some of the readings for my MSc. 
Amis et al (2020) explore how three consistent myths are at the core of <a href="https://doi.org/10.5465/annals.2017.0033">&#8203;The Organisational Reproduction of Inequality&#8203;</a>.</p><ol><li><p><strong>The Myth of Efficiency</strong>: This myth refers to the false premise that adopting efficiency-enhancing practices is what leads to organizational success. It assumes that pursuing efficiency will automatically iron out inequalities when in reality, practices justified by efficiency often perpetuate or exacerbate existing inequalities.</p></li><li><p><strong>The Myth of Meritocracy</strong>: This myth promotes the belief that advancement and rewards in organizations are based solely on an individual&#8217;s capabilities and performance, rather than factors such as family background, race, gender, or class. Despite the widespread belief in meritocracy, the article argues that many organizational practices remain fundamentally non-meritocratic.</p></li><li><p><strong>The Myth of Positive Globalization</strong>: This myth suggests that globalization is broadly beneficial for everyone - &#8220;a tide that lifts all boats.&#8221; It obscures the ways in which globalization can create new inequalities or reinforce existing ones, particularly in the context of global production networks and multinational corporations.</p></li></ol><p>These myths work together to create a framework of established ways of operating that often go unchallenged, despite evidence of their role in reproducing inequality. The authors argue that these myths help explain why inequality-producing practices persist in organizations across different domains and activities.</p><p>What are the myths we&#8217;re ascribing to AI? 
The myths of its intelligence, its ability to do things we would otherwise do, that it draws the same conclusions we would, the myths that we can delegate or download our knowledge into a model or system that can then make sense for us based on our instructions and ideas?</p><h2>The myths about control and individuality</h2><p>In his book <a href="https://uk.bookshop.org/p/books/small-is-beautiful-a-study-of-economics-as-if-people-mattered-e-f-schumacher/14044?ean=9780099225614">&#8203;Small is Beautiful&#8203;</a>, E.F. Schumacher shares the idea that we can only think because our minds already hold a multitude of concepts. In this space filled with collected and memorised concepts, we can generate new thoughts. We can&#8217;t think in a vacuum; our neurons connect, bringing individual concepts together in various combinations and rhythms.</p><p>He explains that as children, we absorb many concepts without realizing it. We mentally collect everything around us, and especially the ideas we collect early on can become dominant. As we create new thoughts and concepts, we view them through these early &#8220;filters&#8221;, or beliefs and ways of understanding the world. And because a lot of these &#8220;filters&#8221; operate on a subconscious level, we might not even realize we&#8217;re looking at the world through tinted glasses. We &#8220;just know&#8221; or &#8220;don&#8217;t trust this&#8221;. It&#8217;s an interesting way to look at our imprint of values, ideas, and beliefs as a self-reinforcing system. We make meaning with the concepts we know, and to create new concepts, we must first make new meaning. In the same fashion, we create culture in the organizations we work in, blending and reinforcing the concepts, beliefs and values we hold individually and as a group.</p><p>In technical discussions about AI, it&#8217;s often said that large language models can&#8217;t think like humans. They don&#8217;t understand meaning. 
However, AI operates similarly to human thinking by recombining existing concepts to create new meaning. Only it does this by representing abstract concepts as arrays of numbers and combining them with basic math. To a certain extent, AI gives the impression that it has intelligence, that it can act and think like us, and that it can speak like us. It speaks more and more naturally, and very convincingly. Something I got to research in-depth last month when I was tasked to find the best platform for AI-generated avatars. It&#8217;s weird to see an <em>almost</em>-perfect clone of yourself speak words you&#8217;ve never said. Words that have been written by a different AI, maybe.</p><p>AI has learned from the best. The internet is full of words from tricksters, cheaters, marketers and politicians, which have been digested into large language models that can now speak like those tricksters. We now have an artificial trickster that can convince us its ideas are, in fact, <em>our</em> ideas. <a href="https://doi.org/10.48550/arXiv.2303.08721">&#8203;Burtell &amp; Woodside (2023)&#8203;</a> put it a bit more bluntly:</p><blockquote><p>If AI persuasion is left unchecked, more and more persuasive power in our society will shift towards opaque systems we do not fully understand and cannot fully control, which could contribute to humans losing some of the control of our own future that we have enjoyed in modern times. <br><em>&#8211; Burtell &amp; Woodside (2023)</em></p></blockquote><p>How do we confront the myth of unbiased, controllable, intelligent technology? AI might not have an intention it creates itself, but it surely has <em>some</em> intention, a blueprint inherited from the information it&#8217;s been given in its training. 
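The &#8220;arrays of numbers&#8221; idea above can be sketched in a few lines of Python. This is a toy illustration with invented three-dimensional vectors, not real embeddings (which have hundreds or thousands of dimensions and are learned from data), but it shows how recombining concept-vectors with nothing more than basic arithmetic can land near another meaningful concept:

```python
# Toy sketch of "concepts as arrays of numbers": invented 3-d vectors,
# not real embeddings, combined with nothing more than basic arithmetic.
from math import sqrt

# Hypothetical "embeddings" for a few concepts.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "man":   [0.1, 0.8, 0.0],
    "woman": [0.1, 0.2, 0.0],
}

def cosine(a, b):
    """Cosine similarity: how closely two concept-vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# "king" minus "man" plus "woman": recombining concepts with plain maths.
combined = [k - m + w for k, m, w in
            zip(vectors["king"], vectors["man"], vectors["woman"])]

# Which stored concept does the combination resemble most?
nearest = max(vectors, key=lambda word: cosine(vectors[word], combined))
print(nearest)  # -> queen
```

In real language models the same kind of move happens at vastly larger scale, and the &#8220;meaning&#8221; that comes out is only ever a statistical echo of the training data.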
As we adopt these technologies, we might inadvertently deepen chasms and reinforce stereotypes or inequalities that have existed for a long time.</p><p>When we think about opportunities for people to progress in their careers, to learn new skills, to shape their careers through learning and lateral shifts in an organization, and when we use technological sense-making to replace parts of what humans did before, we relegate ourselves to smaller confines of thinking, decision-making, and creation. We give up other parts to our AI companions, and with that, give over part of the meaning-making to a story, a myth that we do not understand because we have not written it ourselves.</p><p>Suddenly the &#8220;beliefs&#8221; that inform our decision-making don&#8217;t solely come from the people we employ, but also from a probabilistic amalgamation of terabytes of data hoovered up by the companies who created the foundation models for our AIs. The values of our work suddenly don&#8217;t come from a culture we hold in our organization, or one made up of individual experiences of the people in the organization. In some ways, by adopting AI, we subscribe to a global story or a value system based on those global models that influence our decision-making.</p><p>How does it change the character of an organization? Of our societies? How might we create structures that moderate that effect so we can set our own intentions, goals, and approaches, and have those intentions operationalized with the use of technology and AI, but not give over our intellectual control to an entity whose sense-making and decision-making processes we don&#8217;t understand?</p><h2>How do we want to shape the myths of our time?</h2><p>While we chase the myth of efficiency by automating more and more parts of our work, we might ask ourselves: Are we regressing as a species or evolving into something else? 
Is this a forward movement, a backward movement, or something sideways?</p><p>I&#8217;m inviting you to take part in shaping the myths and stories we&#8217;re telling about this next chapter.</p><ol><li><p>What are the stories you&#8217;re hearing about working with AI?</p></li><li><p>What stories would you like us to tell? What is missing in the dominant stories?</p></li><li><p>How do you see yourself in this wave of change? Are you already part of it? What role do you want to play?</p></li></ol><p>I would love to hear from you. Reply to this email, discuss in the comments, or <a href="https://tally.so/r/wQJz41">&#8203;answer my short, 3-question survey&#8203;</a>: </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://tally.so/r/wQJz41&quot;,&quot;text&quot;:&quot;3 questions for you&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://tally.so/r/wQJz41"><span>3 questions for you</span></a></p><p>By doing so, you&#8217;ll even help me refine the research questions for my master&#8217;s thesis. 
Thank you!</p><p>Let&#8217;s shape these stories together.</p><p>And if you want to stay in touch and hear more of my musings and research findings:</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://slowworks.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://slowworks.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Psychoanalyse Yourself]]></title><description><![CDATA[How to Create Your Personalised Personality Profile]]></description><link>https://slowworks.substack.com/p/psychoanalyse-yourself</link><guid isPermaLink="false">https://slowworks.substack.com/p/psychoanalyse-yourself</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Wed, 21 Aug 2024 17:03:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OhQM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OhQM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OhQM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!OhQM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OhQM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OhQM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OhQM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:159767,&quot;alt&quot;:&quot;Microsoft Designer imagining a personalised personality profile, with a little help from me.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Microsoft Designer imagining a personalised personality profile, with a little help from me." title="Microsoft Designer imagining a personalised personality profile, with a little help from me." 
srcset="https://substackcdn.com/image/fetch/$s_!OhQM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!OhQM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!OhQM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!OhQM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff24d4fb4-3e1b-4fea-9ec8-0506dbbb177a_1792x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Microsoft Designer imagining a personalised personality profile.</figcaption></figure></div><p>I decided to take August off from writing to concentrate on some big changes in my life. And I definitely didn't think I'd do anything productive when I was sitting on a Romanian bus tour to see Dracula's castle. (It's a thing!) But after hours stuck in traffic, I accidentally turned inwards on a journey of self-discovery with my AI buddy Claude. And I think you'd like it, too.</p><p>Think of this post as the magazine you're taking with you to read on your beach holiday, just a bit more educational. Regular service will continue next month.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://slowworks.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Somatic Intelligence is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Come with me, I'll be your tour guide.</p><h2>Why a personalised personality profile?</h2><p>I realised that despite having taken various personality tests over the years, their insights were gathering dust, the PDFs forgotten at the bottom of my downloads folders. As I contemplated the changes ahead in my life, I saw an opportunity to make these neglected insights actionable.</p><p>Whether it's the crowd-favourite star signs, our basic business friend the Myers-Briggs Type Indicator, the Enneagram, Human Design, or one of the many, many others &#8211; these systems offer lenses through which we can view ourselves and our interactions with the world. You might hear someone say, "Such an Aries," "I can't work with ENTJs", or "As a Generator, I need to approach tasks this way." While these statements might be a little simplistic, they also provide interesting mirrors for self-reflection.</p><p>This guide will walk you through the process I stumbled upon&#8212;a method to transform scattered test results into a coherent, actionable personality profile. Whether you're on a long commute, at home, or during a break at work, you can embark on this journey of self-discovery and practical self-improvement.</p><blockquote><p><strong>&#128680; On Privacy</strong><br>We'll look at how AI might help you playfully explore your personality traits, test results, profiles and indicators. These are &#8211; as the name suggests &#8211; highly personal. Use your own discretion, and be aware of potential privacy implications, e.g. 
use a paid plan and turn off data sharing, or even use a local LLM to run these.</p></blockquote><p>In just a few hours, I transformed scattered pieces of information about myself into a coherent, personalised, and &#8211; most of all &#8211; actionable profile. So whether you're on a long commute, sitting under an umbrella at the beach, sleepless at home, or on a break at work, here's your 2024 AI-powered version of the gossip-magazine personality test:</p><h2>Overview: Your Path to an Actionable Personalised Profile</h2><ol><li><p>Gathering Your Source Material</p></li><li><p>Creating a personalised Structure</p></li><li><p>Distilling Your Unique Traits</p></li><li><p>Crafting Individual Profiles</p></li><li><p>Integrating Multiple Perspectives</p></li><li><p>Designing Personal Practices</p></li><li><p>Living with your personalised personality profile</p></li></ol><h2>Step 1: Gathering Your Source Material</h2><p>Use what you have; you don't need to take dozens of tests to do this. Chances are you've done one test or another for work, looked up your star sign, or taken some more-or-less-serious tests online. Anything goes. 
Remember, we'll use these as a mirror for reflecting on ourselves, rather than taking them at face value.</p><h3>Using Existing Personality Tests</h3><p>The standard ones are a great start.</p><ol><li><p>Collect results from personality tests you've taken (e.g., Enneagram, Human Design, Myers-Briggs).</p></li><li><p>Locate any PDFs, reports, or descriptions associated with these tests.</p></li><li><p>If you can't find your results, look for general descriptions of your type or indicator online.</p></li></ol><h3>Using Personal Reflections</h3><p>If you keep a journal, write a personal blog, or do something similar, this can be great source material.</p><ol><li><p>Gather personal writings such as journal entries, therapy session notes, or reflections on your experiences.</p></li><li><p>If you have voice memos containing personal insights, transcribe the relevant portions.</p></li><li><p>Include notes from coaching sessions or any other self-reflective exercises you've done.</p></li></ol><h2>Step 2: Creating a personalised Structure</h2><p>Before diving into the analysis, create a structure that will make your final profile actionable and relevant to your life. Personalised doesn't happen by accident; take a moment to think about this.</p><ol><li><p>Reflect on what aspects of your personality you want to understand better. Consider areas like:</p><ul><li><p>General nature</p></li><li><p>Specific challenges and potential solutions</p></li><li><p>Relationship dynamics (work, personal, etc.)</p></li><li><p>Decision-making processes</p></li><li><p>Information processing and sense-making</p></li></ul></li><li><p>Think about how you'd like to use this profile. When would you refer to it? What information would be most helpful in different situations?</p></li><li><p>Draft a basic outline. Here's an example to get you started:</p></li></ol><pre><code><code>1. General Nature
2. Challenges and Solutions
3. Relationships
   3.1 Work
   3.2 Personal
   3.3 Other
4. Decision-Making Process
5. Information Processing and Sense-Making
6. Daily/Weekly/Monthly Rituals and Habits
</code></code></pre><ol start="4"><li><p>Customize this structure to fit your needs. You might add sections on creativity, stress management, or any other areas relevant to your life.</p></li></ol><p>&#128161; <strong>Make it a game</strong><br>Don't know what could be useful? Have a little chat with your AI. Open a new chat and paste this prompt:</p><pre><code><code>Help me decide on the perfect structure for an actionable personality profile. I'm looking to summarise and re-format some personality tests I've done in the past to get actionable insights. Ask me at least 5 questions, one at a time, before you draft 3 outlines of a personality test report that is tailored to me. When you draft the outlines, simply respond with the headings. Keep the structure simple and concise. But first, let's start with the questions.
</code></code></pre><p>You'll get three structures; pick the one that resonates most. Then edit it to feel just right. This is the most "manual" part of the process. Make it yours.</p><h2>Step 3: Distilling Your Unique Traits</h2><p>For each personality test or source of information, you'll create a separate chat with your AI assistant. The approach may vary depending on your familiarity with the test and the format of information you have.</p><h3>Approach 1: The Quiz Method</h3><h4>Creating a personalised Quiz</h4><p>Start a new chat with your AI assistant.</p><p>Upload or paste the description of your personality type. If you don't have any description or report, simply mention the test type and your result. Then use this prompt:</p><pre><code><code>I have a description of my [personality type/indicator]. Can you create a quiz to help me identify which aspects of this description resonate most with me? Please present the quiz with numbered options for each question.
</code></code></pre><h4>Taking the Quiz</h4><p>Answer each question by referring to the numbers. Feel free to provide context for your answers. For example:</p><pre><code><code>Q: Which of these resonates most with you?
1. I prefer to work independently on projects
2. I thrive in collaborative environments
3. My preference varies depending on the nature of the project
</code></code></pre><p>Your response:</p><pre><code><code> 2 - I find that my creativity is enhanced when I can bounce ideas off colleagues, though I also value some quiet time for deep focus.
</code></code></pre><p>Or simply reply <code>2</code>. If you get multiple-choice questions, you can reply with <code>2,4,7,8</code> or add context like the example above.</p><p>Once you've answered all the questions in the quiz, proceed to Step 4 below to create your personalised profile.</p><h3>Approach 2: Bring your own stuff</h3><p>In addition to using established personality tests, I found it valuable to incorporate my personal reflections into my profile. These included transcripts from voice notes (you know <a href="https://slowworks.substack.com/p/the-third-brain">I love voice notes</a>), journal entries, and notes from coaching or therapy sessions. This approach allowed me to add a deeply personal layer to my overall profile, ensuring it reflected not just general personality traits, but also my unique experiences and self-observations.</p><p>Use this prompt to create a profile from your notes:</p><pre><code><code>I'd like to explore my personality based on some personal reflections. I'll share some notes, one at a time. Respond to each with a one-sentence summary and a single follow-up question to gain a deeper understanding. 

Here is my first note: [Paste or attach the first note]
</code></code></pre><p>When you've added enough notes, move on to Step 4.</p><h3>Approach 3: The Shortcut</h3><p>If you're already familiar with a particular personality system, or you just want to save time, you can ask the AI to summarise and personalise the information directly, combining this step with Step 4:</p><pre><code><code>I'm familiar with my [personality type/indicator]. Based on the description I'll share, can you summarise the key points and how they apply to my life, using the following structure?

[Paste your personalised structure here]

Here's the description: [Paste or upload your type description]
</code></code></pre><p>If you don't have a description of your type, you can let your AI model use its built-in knowledge, and it'll work as well. Adding a PDF or copy/pasting a description can help ground the conversation and make it less "random".</p><h2>Step 4: Crafting Individual Profiles</h2><p>When you've finished the quiz, or added all your notes, it's time to get your personalised profile. In the same chat as you did Step 3, paste this prompt:</p><pre><code><code>Please summarise the insights using this structure:

[Paste your personalised structure here]

Respond in MD.
</code></code></pre><p>Read the profile it made; if something feels wrong, add clarifications and ask for a new profile, or download it and edit it yourself.</p><p>When you're done, make sure to download each profile to its own file. With Claude, you can ask:</p><pre><code><code>Please create a MD Artefact with the complete profile.
</code></code></pre><h2>Step 5: Integrating Multiple Perspectives</h2><p>Now it's time to combine all your individual profiles into one comprehensive view.</p><p>Start a new chat with your AI assistant, and introduce the task:</p><pre><code><code>I'm going to share several personality profiles we've created. I'd like you to integrate them into one comprehensive profile. After I share each profile, please acknowledge it with a brief summary.
</code></code></pre><p>Share each profile one by one, allowing the AI to summarise each.</p><p>After sharing all profiles, ask for an integrated version:</p><pre><code><code>Now that you have all my profiles, please create a comprehensive, integrated profile that synthesises insights from all of them. Use this structure:

[Paste your personalised structure here]

To make it easier to track the source of each insight, please use these emojis:
[List your chosen emojis and their corresponding profile sources, e.g., 
&#128270; - Enneagram
&#129516; - Human Design
&#9803;&#65039; - Astrology
&#128221; - Personal Reflections]
</code></code></pre><ol start="5"><li><p>Review the integrated profile and ask for any necessary adjustments.</p></li></ol><h2>Step 6: Designing Personal Practices</h2><p>The final step is to transform your insights into actionable practices:</p><pre><code><code>Based on this integrated profile, can you suggest 5-7 personalised practices or rituals that would support my growth and leverage my strengths? Please categorise them into daily, weekly, and monthly practices. For each practice, explain how it relates to my profile and which aspect(s) of my personality it's designed to support or improve.
</code></code></pre><h2>Step 7: Living with your personalised personality profile</h2><p>Congratulations on creating your very own personalised, actionable personality profile!<br>Your tour through Personality Test Land is now complete.</p><p>I hope that going from a bunch of dusty reports on who you might be to this new format was fun. And maybe it gave you a tool for self-understanding and growth, rather than a rigid definition of who you are.</p><p>A few key points to keep in mind:</p><ol><li><p>The insights generated through this process are inspirations, not absolute truths. Always critically evaluate the information and its relevance to your life. Especially since AI might have made up (hallucinated) a detail or two.</p></li><li><p>Your personality is not fixed. This profile is a snapshot of your current self-understanding and can evolve as you grow.</p></li><li><p>Regularly revisit your daily, weekly, and monthly practices. Adjust them as needed based on their effectiveness and your changing circumstances.</p></li><li><p>Consider sharing your profile with trusted friends, family members, or mentors. Their perspectives might offer additional insights or areas for reflection.</p></li><li><p>You could add another prompt like: <code>How do I communicate my personality and working style in a way that allows my coworkers/partner/friends to understand me better?</code></p></li></ol><p>AI only goes so far. It can be fun, but it's not a replacement for people. If you resonate with a specific system, consider working with a specialist to understand more. 
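</p><p>A side note for the technically inclined: if you repeat this process later, the bookkeeping of Step 5 can be scripted. Below is a minimal Python sketch that stitches your per-source profiles into one emoji-tagged Markdown document. The source names and texts here are placeholders I made up for illustration, not output from any real tool.</p>

```python
# Hypothetical per-source profile texts; in practice you'd read the
# profile files you downloaded in Step 4 (these strings are made up).
profiles = {
    "🔍 Enneagram": "Core motivation: ...",
    "🧬 Human Design": "Strategy: ...",
    "📝 Personal Reflections": "Recurring theme: ...",
}

def merge_profiles(profiles: dict[str, str]) -> str:
    """Stitch per-source profiles into one tagged Markdown document."""
    parts = ["# Integrated Profile"]
    for source, text in profiles.items():
        # The emoji in each heading marks the source, mirroring Step 5
        parts.append(f"\n## {source}\n\n{text.strip()}")
    return "\n".join(parts)

merged = merge_profiles(profiles)
```

<p>You'd still paste the merged document into a chat for the actual synthesis; the script only does the mechanical concatenation.</p><p>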
And if you want to explore how to integrate your newly gained insights into your life, <a href="https://slow.works/recalibrate">come for Somatic Coaching</a>.</p><p>Until then, have a great summer!</p><p><em>This article was first published at https://slow.works on 20 Aug 2024</em></p>]]></content:encoded></item><item><title><![CDATA[The Third Brain]]></title><description><![CDATA[Digesting Overwhelm Into New Ideas]]></description><link>https://slowworks.substack.com/p/the-third-brain</link><guid isPermaLink="false">https://slowworks.substack.com/p/the-third-brain</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Wed, 03 Jul 2024 11:01:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ieBk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!ieBk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ieBk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ieBk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ieBk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ieBk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ieBk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png" width="1456" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2272878,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ieBk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ieBk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ieBk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ieBk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9c93744-d86c-42cb-8fc6-e001956a91fb_1792x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">DALL-E imagining the three brains. We might still have a moment until the AI overlords take over. :)</figcaption></figure></div><p>Remember <strong><a href="https://slowworks.substack.com/p/dancing-with-control-and-uncertainty">last month</a></strong> when we talked about the importance of sitting with uncertainty? Today we're going from philosophical to practical. We'll look at how we might work with our biology to turn uncertainty into creative energy. And no, ChatGPT is not your third brain, we'd all be doomed if we relied on AI alone.</p><p>One way to turn uncertainty into opportunity is reducing the overwhelm. We could do that with mindfulness or meditation, moving the body like in yoga or running, or creative practices like drawing or cooking. Ever had a great idea while doodling? That one! 
We give our brain a little break from the constant loops of worrying and suddenly, somehow, we can process the things that seemed so complicated and overwhelming a moment ago.</p><p>Another approach is to organise and digest the flood of information we're constantly receiving - both literally and figuratively. Our bodies have evolved sophisticated systems for processing information, from our brains to our guts. Did you know that your gut is often referred to as the "second brain"? Dr. Michael Gershon's book <strong><a href="https://uk.bookshop.org/p/books/the-second-brain-michael-gershon/4095187?ean=9780060930721">The Second Brain (1998)</a></strong> popularised this concept, highlighting the complex network of neurons lining our gastrointestinal tract. Ongoing research, like that of <strong><a href="https://tim-spector.co.uk/">Tim Spector</a></strong>, continues to uncover the profound ways our gut influences our overall health and even our decision-making processes.</p><p>But just as we digest food, we also "digest" information in many ways. In the realm of critical intelligence (CI), we've developed Personal Knowledge Management (PKM) systems - <strong><a href="https://uk.bookshop.org/p/books/building-a-second-brain-tiago-forte/6626098">digital "second brains"</a></strong> that help us organise and retrieve information, much like our gut processes and stores nutrients. These different "brains" - our cognitive brain, our gut, and our digital systems - can all work together to help us process uncertainty and create meaning. Let's explore some intuitive ways we can harness the power of these multiple "brains" to process information and generate new ideas.</p><h3><strong>Intuitive Processing - Morning Pages</strong></h3><p>There are different ways we can deal with uncertainty in a generative way. You might have heard of morning pages, where you sit down every morning and simply fill a few pages of paper with anything that comes to mind. 
It's stream-of-consciousness writing, allowing the brain and the body to process.</p><p>It's one of the core tools <strong><a href="https://www.instagram.com/juliacameronlive/">Julia Cameron</a></strong> writes about in her bestseller <strong><a href="https://uk.bookshop.org/p/books/the-artist-s-way-a-spiritual-path-to-higher-creativity-julia-cameron/2035801?ean=9781788164290">The Artist's Way</a></strong>. The physical activity of writing can be transformative for a lot of people. We process information differently when we move, even if we just move our hand. Personally, I've noticed that I struggle with morning pages because my hand cramps up, and I can barely read my writing. And yet there is something in that process of sitting down and moving my hand across paper that unlocks something new. What works well for me is going for a quick walk around the house instead, or talking to my cat about my ideas and problems.</p><blockquote><p><em><strong>How do you like to process intuitively or somatically?</strong></em></p><p><em>Do you journal? Pull tarot cards? Make watercolour paintings? Go for a run when things get a bit much? Sketch things out? Dance it out and make shapes?</em></p></blockquote><h3><strong>Strategic Processing - PKM</strong></h3><p>Other people get into personal knowledge management (PKM). 
<strong><a href="https://www.instagram.com/fortelabsco/">Tiago Forte</a></strong>'s <strong><a href="https://www.buildingasecondbrain.com/para">PARA Method</a></strong> is one very popular way of thinking about it. If you want to dive deeper, Nick Milo has some amazing videos <strong><a href="https://youtube.com/playlist?list=PL3NaIVgSlAVJKJf37XqEhUduqTBQ2e-sl&amp;si=-tZn2EeI5EZaiVDK">explaining PKM</a></strong> and showing how to <strong><a href="https://youtube.com/playlist?list=PL3NaIVgSlAVLHty1-NuvPa9V0b0UwbzBd&amp;si=uYo6GKgKizg105t_">build your own PKM system</a></strong> in the app <strong><a href="https://obsidian.md/">Obsidian</a></strong>. (It's a bit of a rabbit hole; don't get lost. Start small and make it yours.)</p><p>Essentially, you break down your ideas into small "atomic" notes. Each idea, each concept, and each thing that's on our minds becomes one file, and we link them together. So one day I might be thinking about uncertainty, like I did last month, and I'll make a note of it. I might call that note "Generative ways of dealing with uncertainty". Another day I might think about how I show up as a coach, and how I market my coaching, how I talk about what I do, and I'll make a note for each of those. Or I'll reflect on a book I read, making individual notes on key themes and ideas that resonated. It's like your own curated mini-Wikipedia.</p><p>We can link these different individual thoughts, these little notes together. It's a little like the internet: each note/page leads to another. Over time, little clusters of themes start to emerge. It becomes like a little network of the ideas in our brain. That's why people call it the second brain. We can search, we can look for themes and ideas, or we can follow those links between notes to rediscover ideas we had earlier. 
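</p><p>For the technically inclined: that "network of notes" is just a graph. Each note is a node, and every <code>[[wikilink]]</code> (Obsidian's link syntax) is an edge. A minimal Python sketch, assuming your notes are plain text you've already loaded into memory:</p>

```python
import re

# Matches Obsidian-style [[wikilinks]], capturing the target title
# (stops at "]", "|" aliases, and "#" heading anchors)
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def build_link_graph(notes: dict[str, str]) -> dict[str, list[str]]:
    """Map each note title to the titles it links to."""
    return {
        title: [m.strip() for m in WIKILINK.findall(body)]
        for title, body in notes.items()
    }

# Example vault with two linked notes (contents made up for illustration)
notes = {
    "Uncertainty": "See [[Morning Pages]] and [[Voice Notes]].",
    "Morning Pages": "A practice from [[The Artist's Way]].",
}
graph = build_link_graph(notes)
```

<p>Clusters of themes then show up as densely connected nodes, which is what Obsidian's graph view visualises for you.</p><p>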
And that's another way of tapping into that uncertainty: giving our brain and our bodies a little help to get back to those little notes, or back to the morning pages, and noticing what stands out, what themes keep coming up, and what that might mean about how I want to deal with uncertainty in my life.</p><blockquote><p><em><strong>How do you organise your memories, ideas and projects?</strong></em></p><p><em>Do you have any system to note down and remember things you're interested in? From "bookmarks" of other people's ideas in tools like Readwise, Pocket or Pinterest to collections of your own thoughts in your phone's notes app, a physical system of notes in journals and folders or a digital archive in Obsidian, Notion or Evernote,...</em></p></blockquote><h3><strong>Passive Processing - Voice Notes</strong></h3><p>What's been wildly transformative for me is yet another way of taking notes: voice notes. I started using this app called <strong><a href="https://audiopen.ai/?aff=x0g97">AudioPen</a></strong>, which essentially isn't much more than your voice notes on your phone (with a twist). I often pretend I'm sending a voice note to a friend. And I'm explaining things that have not had words before. AudioPen listens and transcribes it for me, and then writes a little summary. Later I can come back to those notes that I have taken without effort. I've gone through a similar process I might go through if I write notes in <strong><a href="https://obsidian.md/">Obsidian</a></strong>, or <strong><a href="https://www.notion.so/">Notion</a></strong>, or if I do morning pages. Only this time I just talked; there was almost no effort in remembering/recording my thoughts. 
When I'm done talking, there's a complete transcript of the ideas I've had, and there's a little summary that can get me started on taking the next steps with that.</p><blockquote><p><em><strong>How might you create new ideas or record emerging ideas without the need for them to be perfect?</strong></em></p><p><em><strong>Maybe you don't do voice notes, but you might make up songs about things you're interested in, or you have a friend you go for a walk with every week to "process", you might have a coach or mentor to help you process, or explain your problems to your cat. Anything goes. Extra points if there's a record of the fuzzy new idea.</strong></em></p></blockquote><h3><strong>Putting It All Together</strong></h3><p>One last thing that I noticed that's really helpful is linking all these practices together. I might still do a little journaling on a piece of paper. And when salient ideas emerge, I might make little notes in my Obsidian vault, and start to network those notes. I'll start to work with them, combine them, and make new ones. And I still do my voice notes with <strong><a href="https://audiopen.ai/?aff=x0g97">AudioPen</a></strong>. You might use Otter or any other AI transcription service.</p><p>Last month, I <strong><a href="https://obsidian.md/plugins?search=audiopen-sync">built a little plugin</a></strong> that links the two together, so every time I make a voice note in AudioPen, it also creates a note in my Obsidian vault (or Notion for you, if you use that instead).</p><p>And that has unlocked a whole new way of accessing creativity, because I get to think about and process ideas in a raw form, in voice. And then AI does the magic of transcribing and summarising it for me, and putting it right into my Obsidian vault. That's where I sit down to do the work, make new things, and take bits from that uncertain soup of different feelings and ideas that already have been pre-digested by AudioPen, and I get to combine them. 
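</p><p>The core of that sync is simple: each transcribed note becomes one Markdown file in the vault. Here's a hedged Python sketch of that step; the <code>title</code>/<code>summary</code>/<code>transcript</code> field names are my own assumption for illustration, not AudioPen's actual export format.</p>

```python
from pathlib import Path

def sync_note(vault: Path, note: dict) -> Path:
    """Write one transcribed voice note as a Markdown file in the vault."""
    # Keep only filesystem-safe characters for the file name
    safe_title = "".join(c for c in note["title"] if c.isalnum() or c in " -_").strip()
    path = vault / f"{safe_title}.md"
    path.write_text(
        f"# {note['title']}\n\n"
        f"**Summary:** {note['summary']}\n\n"
        f"{note['transcript']}\n",
        encoding="utf-8",
    )
    return path

vault = Path("vault")
vault.mkdir(exist_ok=True)
created = sync_note(vault, {
    "title": "Dealing with uncertainty",
    "summary": "Uncertainty can be generative.",
    "transcript": "Today I talked about how uncertainty feeds creativity...",
})
```

<p>The real plugin listens for new AudioPen notes and runs this kind of step automatically, so the vault fills itself while you talk.</p><p>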
I can just sit down and look at themes that have emerged, and start to compose new blog posts, new outlines, new ideas from those pieces.</p><p>And yes, we did use AI, but we didn't use it to create new things; we simply used it to do some of the busy work for us. Something that resonated deeply with me was <strong><a href="https://www.linkedin.com/posts/nicolaswrd_artificialintelligence-deeplearning-search-activity-7183396670716444672-pseG/">Nico Ward's comment</a></strong> saying that <em>"[...] The fundamental revelation of ~2 years of generative AI applications is that these models are great at knowledge refactoring (question answering, retrieval, search, summarisation), but not knowledge creation (ideation, original thought, creativity). [...]"</em></p><p>So essentially, we've helped our process a little bit. We've helped our biology by processing that overwhelm in an organic, almost passive way with the voice notes &#8211; without the need for it to make sense or be complete &#8211; and we used AI to transcribe and organise the feelings into little notes. That allows us to get back to what we do best. With our thoughts a little more organised, it's easier for our biology to combine these fragments of uncertainty, find patterns, and find new ideas that sit right between the old ones. That's really what we do best: creating new things, creating new stories, and creating new relationships.</p><blockquote><p><em><strong>How might you combine different ways of "digesting" ideas and information?</strong></em></p><p><em>When you look at the three previous reflection prompts, what practices do you already have? How might you explore new ways of processing ideas? And how might you combine them?</em></p></blockquote><p>And I invite you to find your own way of being in uncertainty in a generative way. How might you process things? How might you refine them? And how might you recombine them? And what role does technology play in that? 
It doesn't have to be your phone or computer. Maybe technology is a pen and paper. Maybe technology is paint brushes and watercolours. But what is your thing, or your things, that you have in your life for the processing part? Being with, being in that fuzziness and that soup of uncertainty, without necessarily having to mix and match just yet. What can help you digest ideas?</p><h3><strong>Try it yourself</strong></h3><p>If you want to try my little workflow, I made a little tutorial on how to use AudioPen and link it up to your Obsidian or Notion knowledge database. So you can also try and see what happens if you process verbally and then come back to a nice written version of your thoughts, ready to be made into something new.</p><ul><li><p><strong><a href="https://slow.works/blog/sync-audio-pen-with-obsidian">Syncing AudioPen with Obsidian</a></strong></p></li><li><p><strong><a href="https://slow.works/blog/sync-audio-pen-with-notion-using-make-com">Syncing AudioPen with Notion</a></strong></p></li></ul><div><hr></div><p><em>I didn't make AudioPen or Obsidian; I just really love them both. If you sign up to <strong><a href="https://audiopen.ai/?aff=x0g97">AudioPen</a></strong> with an affiliate link from this article, I'll get a little kickback at no cost to you.</em></p><p>This article was first published at https://slow.works on 3 Jul 2024</p>]]></content:encoded></item><item><title><![CDATA[Dancing with Control and Uncertainty]]></title><description><![CDATA[What does it mean to be in control? And why might uncertainty be a human superpower?]]></description><link>https://slowworks.substack.com/p/dancing-with-control-and-uncertainty</link><guid isPermaLink="false">https://slowworks.substack.com/p/dancing-with-control-and-uncertainty</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Tue, 11 Jun 2024 08:19:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6xEI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6xEI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6xEI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!6xEI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6xEI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6xEI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6xEI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:367111,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6xEI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!6xEI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!6xEI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!6xEI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3589e65c-c9f9-428e-a8a7-7d5da79eb9c0_1792x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">The dance between control and uncertainty as imagined by Dall-E</figcaption></figure></div><p>What do you want to control? And what do you want to be surprised by? </p><p>A question I got to ponder ad nauseam last week, as I learned to give in to food poisoning and the utter lack of control that comes with it.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://slowworks.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Somatic Intelligence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Creativity, in fact most things we might call our &#8220;purpose&#8221;, is a dance of control and chaos. And as our AI tools grow increasingly sophisticated, we get to figure out &#8212; scrambling &#8212; how they shape our creative processes and where we, as humans, fit into this new paradigm. For a long time, we thought creativity was something uniquely human. And I use creativity as an inclusive term here. Most work is creative in some way, even if it's not about the classic "making things look pretty". Every day we face unknowns and challenges and have to come up with ideas and solutions.</p><p>In this process, we're switching back and forth between thinking convergently or divergently. Often intuitively. 
When working with AI, being very explicit about this might be the key to making AI work for us. Are we trying to come up with new ideas, or are we trying to narrow them down and make a decision? The answer lies in finding the delicate balance between control and uncertainty, between holding on and letting go.</p><h2><strong>Embracing Divergent Technology</strong></h2><p>As we explore how we interact with AI, it's important to understand the fundamental differences between human and artificial intelligence. Something tickled my brain when I was <strong><a href="https://youtu.be/Bpgloy1dDn0">watching YouTube</a></strong> the other day and heard Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto discuss their groundbreaking paper, &#8220;What&#8217;s the Magic Word? A Control Theory of LLM Prompting.&#8221; The idea is &#8211; very simply put &#8211; that our brains operate on a set of biological rules that shape neural structures and give rise to our unique cognitive abilities. These rules allow us to make complex connections, leap between ideas, and generate novel insights. The brain adapts through plasticity, repurposing neurons when needed, constantly rewiring and changing its physical structure.</p><p>In contrast, the "neurons" in a large language model (LLM) like GPT or Claude are fixed in their structure. While these models excel at processing vast amounts of data and identifying patterns, the underlying mechanisms of how they create meaning in their multi-dimensional token space remain largely opaque. The sense-making process in LLMs is fundamentally different from the way our brains work.</p><p>Here we return to our question of control and surprise. By delegating certain cognitive tasks to LLMs, we can free up our human brainpower to focus on the things that truly matter &#8211; creativity, intuition, and emotional intelligence. 
How might we understand and leverage the strengths of both human and artificial intelligence, while remaining aware of their distinct limitations?</p><p>Lastly, not every LLM is made the same. While OpenAI, Microsoft and Meta are racing to achieve AGI (artificial general intelligence) by creating larger and larger models, Apple <a href="https://www.apple.com/apple-intelligence/">just announced</a> their own AI strategy, combining tiny LLMs that run on-device with larger models running in the cloud. And while those tiny LLMs that run on your iPhone are likely less capable than ChatGPT when it first came out, you're in total control of your own data. Apple Intelligence brings a different philosophy to AI, one that focuses on getting simple tasks done, assisting with mundane-but-personalised questions while retaining total control over user privacy and only delegating to big brother GPT-4o when the on-device capabilities aren't enough for the task at hand. Maybe a large network of tiny AIs could soon become even more powerful &#8211; or useful &#8211; than a single large model? We might have to wait and see.</p><h2><strong>Becoming the Human in the Loop</strong></h2><p>The concept of "human-in-the-loop" AI recognises that while AI can be a powerful tool, it is not a replacement for human judgment and creativity. If you're not familiar with the term, <strong><a href="https://pca.st/0y8tmum6">this podcast</a></strong> by Boston Consulting Group goes into detail on the idea, while exploring an interesting 2030 vision. And it's got an AI co-host that's actually pretty good.</p><p>When we think about how a human-in-the-loop system might work, we quickly run into the tension between control and serendipity, structure and gut feelings, technology and biology. The human-in-the-loop becomes the safety net for those who aren't in the loop. And the key to success might lie in being very intentional about how we design the loops we get ourselves into. 
It's a dance between holding on and letting go.</p><p>If the speed of AI accelerates how we work, regulating our nervous system and finding moments of calm amidst the chaos will be absolutely crucial for our well-being &#8211; and most likely our success, too. How can we be more aware of what a task at hand needs? Are we thinking divergently and making intuitive connections? Are we analysing data to find new answers? Are we stuck on an empty page and need help getting started? Are we following a well-established pattern, or encountering something completely novel?</p><p>Over the next few days, notice when you're switching these modes. Be really curious as to what you are trying to do and what kind of thinking might be needed. What happens when you're collaborating with an AI on structured work? What happens when you're collaborating on more intuitive, creative or novel tasks? And what are the "loops" you get into with your AI tools? What do you get to control? And where does the AI surprise you?</p><h2><strong>The Fuzziness of Uncertainty</strong></h2><p>I've been obsessed with the word fuzzy lately &#8211; you might remember the <strong><a href="https://slowworks.substack.com/p/integrating-felt-sense-and-cognitive">fuzzy attention</a></strong> we talked about last month. On the podcast <strong><a href="https://pca.st/almnug4n">UNCERTAINTY: The Surprising Power of Being Unsure</a></strong>, Maggie Jackson talks about her newest book of the same title. This context gives fuzziness a whole new power and depth.</p><p>She talks about the concept of "generative uncertainty". The core distinction she makes is about uncertainty in the world and how we live with uncertainty. There's uncertainty about what's going to happen tomorrow, how society behaves, the stock market, or a job or home situation &#8211; the abstract uncertainty. On the other hand, there is uncertainty as we live it on a somatic level. How does our biology respond to an uncertain world? 
This felt sense of uncertainty &#8211; the "fuzziness" &#8211; is a uniquely human capacity that allows us to be creative in the face of the unknown. This somatic kind of uncertainty can be very empowering and generative.</p><p>The goal is not to eliminate uncertainty altogether, but rather to learn to embrace it as a source of creativity and innovation. When collaborating with AI, it's crucial to remember that the ultimate aim is to create something for humans. Start by taking a moment to define why you're engaging with the AI and what specific insights or assistance you need. Treat the AI as a versatile partner &#8211; a brainstorming buddy, an analyst, or a pattern recognition expert. Don't expect the AI to hand you the perfect solution; instead, anticipate that it will provide valuable information to guide you closer to your goal. And always remain open to the possibility that the AI might hallucinate or generate inaccurate information at any point in the process. By approaching human-AI collaboration with this mindset, you can harness the generative potential of uncertainty while maintaining a clear sense of purpose and direction.</p><p>The most valuable insights often emerge at the edge of your comfort zone. How might you find balance between control and uncertainty in your creative work?</p><h2><strong>Come Dance with Me</strong></h2><p>Remember, this is a dance &#8211; and our dance partner is still a little clumsy; you might get your toes stepped on a few times. But by learning to embrace the generative uncertainty that comes with collaborating with AI, we can unlock new forms of creativity and innovation.</p><p><em>If you're curious about exploring uncertainty and what it means for yourself, your work and your life, Somatic Coaching might be for you. 
Leave me a comment or <strong><a href="https://cal.com/jonashaefele/intro-to-somatic-coaching">book a free discovery call</a></strong>.</em></p><div><hr></div><p><em>Photo by <a href="https://www.pexels.com/photo/close-up-photogrpahy-ship-captains-control-1416649/">Nikolaos Dimou from Pexels</a></em></p><p><em>This article was first published at <a href="https://slow.works/blog/dancing-with-control-and-uncertainty">slow.works</a> on 11 Jun 2024</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://slowworks.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Somatic Intelligence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Integrating Felt Sense and Cognitive Understanding]]></title><description><![CDATA[Reflections from the International Hakomi Conference in Mexico City]]></description><link>https://slowworks.substack.com/p/integrating-felt-sense-and-cognitive</link><guid isPermaLink="false">https://slowworks.substack.com/p/integrating-felt-sense-and-cognitive</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Wed, 08 May 2024 10:29:40 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!gTVS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gTVS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gTVS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 424w, https://substackcdn.com/image/fetch/$s_!gTVS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 848w, https://substackcdn.com/image/fetch/$s_!gTVS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 1272w, https://substackcdn.com/image/fetch/$s_!gTVS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gTVS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png" width="1456" height="800" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:12317933,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gTVS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 424w, https://substackcdn.com/image/fetch/$s_!gTVS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 848w, https://substackcdn.com/image/fetch/$s_!gTVS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 1272w, https://substackcdn.com/image/fetch/$s_!gTVS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4595ddfd-63c7-4caf-8b48-dfbec9fcc4bc_3072x1688.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If you like listening, this is the 12-minute raw voice note that led to this article: a more personal reflection on the experience at the Hakomi conference, and on how being with and witnessing in loving presence, even when there are no words, can unlock a new layer of understanding. For the digested version, read on.</p><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;f701eab6-e84f-4f1a-9a9b-211057727549&quot;,&quot;duration&quot;:721.6065,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>I recently returned from a two-week trip to Mexico, where I attended the International Hakomi Conference. Hakomi is a groundbreaking modality in psychotherapy that can be described as mindfulness-based self-discovery. 
Over four days and many workshops, I practised with an incredibly diverse group of people from all over the world, many of them seasoned practitioners, teachers, and trainers of <a href="https://www.hakomieducation.net/">Hakomi</a>. We were hosted by the Institute for Gestalt Psychology in Mexico City and guided by the legacy holders of Hakomi, who have worked closely with founder Ron Kurtz to refine and spread the method across languages, countries, and contexts.</p><p><br>The conference was an opportunity for the community to reunite after years of separation due to Covid, celebrate Ron&#8217;s life and double down on a commitment to preserve his legacy, and an opportunity to reconnect or make new friends who share a passion for Hakomi &#8211; a passion for finding a felt sense and working with deeply rooted emotions and feelings that might not even have words yet. The conference was an invitation to play, to learn new things, to witness and be witnessed, and to try something that we haven&#8217;t tried yet.</p><p><br>We experienced different ways of running Hakomi experiments and adopting different frames and mindsets. One example I still remember fondly was reframing conflict as an opportunity, quoting a North American native tribe who say that <em>conflict is merely an opportunity to learn about somebody else&#8217;s needs that we don&#8217;t know about yet, and then build a new future based on a mutual understanding of each other&#8217;s needs</em>. Taking that reframe to go into an experience where we were feeling into a conflict of our own, something that we&#8217;re holding close, already changed everything. Sitting with it and being in this felt sense, being in a little focusing session to gently tune into what that conflict might offer us was incredibly powerful.</p><p><br>Other workshops explored healing in community, feeling held emotionally and physically by the group, and asking for the support we need. 
We had so many rich experiences and ways of getting in touch with our ideas and feelings, too many to describe here.</p><p><br>Something that stood out for me is the idea of fuzzy attention, an open and welcoming awareness that allows things to emerge, even if they&#8217;re shy or hard to put into words at first. Hakomi practitioners say over and over again that <em>everything is welcome</em>. This quality of attention, that fuzziness and softness, allows us to find words for things we might not have had words for before.</p><p><br>This concept of fuzzy attention reminds me of the transformer architecture, a key enabler of the AI revolution. Before transformers, AI was fairly slow and specialised, having to compute many possibilities until it found a solution. With transformers, AI has gained <a href="https://arxiv.org/pdf/1706.03762">fuzzy attention on everything</a> and only does detailed calculations once it identifies the relevant parts.</p><p><br>I wonder how we can honour both human and AI forms of fuzzy attention. What if we could delegate some of our thinking to machines like GPT or Claude and return to feeling and sensing in our bodies? How might that change how we pay attention, process, and think?</p><p><br>A standout takeaway from the Hakomi conference is the value of taking a moment to sit, close my eyes, and ask what needs my attention right now. Simply being with what&#8217;s there - feelings, sensations, gut instincts - allows things to subtly take shape. Feelings become words, words become ideas and connections. Maybe this allows us to process things in a different way, to open our eyes and take that felt sense into our work. It starts with feeling, with being, and only then can we create.</p><p><br>If we don&#8217;t honour the way our biology works, it&#8217;s very hard to move forward. It&#8217;s easy to get stuck in loops of reacting to everything that comes at us. 
It&#8217;s so important for us to simply be with what is, be with our feelings and with each other. Giving ourselves permission to be seen and held. Maybe even without words, just two or more beings sitting together, co-regulating, and sharing space. Or, as we say in Hakomi, witnessing each other in loving presence. Allowing that to calm our nervous systems, letting things process and bubble up, and then taking that back into our day-to-day lives and moving forward, integrating and processing on a conceptual level.</p><p><br>The tightly packed schedule of the conference left me with an even bigger appreciation for my coaching education at the <a href="https://www.thesomaticschool.com/">Somatic School</a>: framing deep, intense somatic experiences, like focusing and other Hakomi experiments, in a safely held container. A coaching container with a start and an end. This allows people not just to feel and experience, but also to make sense, conceptually integrate, and take things forward. In today&#8217;s world, we&#8217;re very focused on the mind. Our culture and technology have made us very good at thinking, explaining, analysing, and documenting. Sometimes that can get overwhelming. Somatic practices mean getting out of the mind and into the body - moving, feeling, and sensing. Radical change - change that starts at the root of our stories, beliefs, feelings, and sensations - is change that lasts. Integrating the <a href="https://slowworks.substack.com/p/what-is-somatic-intelligence">Somatic Intelligence of the body with a Conceptual Intelligence of the mind</a> is a great way to create radical change.</p><p><br>If you&#8217;re curious about exploring <a href="https://slow.works/somatic-coaching">somatic coaching</a>, I invite you to reach out and sign up for a session. As your coach, I&#8217;m your partner on your journey of self-discovery. I&#8217;ll create a safe space for you to explore big and small questions you might not have asked yourself in a long time. 
I&#8217;m here to support you and help you read the map, but you are always in control of where you&#8217;re going.</p><p><br>Send me a message, or <a href="https://cal.com/jonashaefele/intro-to-somatic-coaching">Book a free intro call</a>. I'd love to hear from you.</p><p><em>This post was first published at <a href="https://slow.works/blog/integrating-felt-sense-and-cognitive-understanding">slow.works</a></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://slowworks.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Somatic Intelligence is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[From Delegation to Collaboration]]></title><description><![CDATA[Why we need to return to machine learning and make AI our teammate]]></description><link>https://slowworks.substack.com/p/from-delegation-to-collaboration</link><guid isPermaLink="false">https://slowworks.substack.com/p/from-delegation-to-collaboration</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Mon, 08 Apr 2024 08:25:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Fibk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Fibk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Fibk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Fibk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Fibk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Fibk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Fibk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1862827,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Fibk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Fibk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Fibk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Fibk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc799bc65-c7a8-4d7c-8aa8-ba093f13db63_6000x4000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To an astronaut who spent the pandemic years in space, it might seem like little has changed if they returned to Earth today. We're back to travelling, gathering and hugging each other pretty much like we did before Covid held us hostage for a couple of years. And yet a lot of changes have taken place under the surface. Not only does "zoom" now have a new meaning beyond close-up photography and crazy cats; we have also collectively embraced technologies that seemed wildly futuristic even just 15 years ago. As a society, we have fast-tracked the adoption of remote work and digitally facilitated relationships. 
Interactions are reported to have become more transactional, often driven by specific needs rather than casual, social connections.&nbsp;</p><p>From instant meal and grocery deliveries and next-day everything to support chatbots that you have to yell "TALK TO HUMAN" at, only to be redirected to an offshore help-desk agent without permission to go off-script, we have mastered process systemization and delegation. For many of us, "computer says no" has become a daily annoyance, while for others it has increasingly become a <a href="https://www.bbc.co.uk/news/business-54698858">challenge to their livelihoods</a>.</p><p>With ChatGPT's release in late 2022, we have also started talking about AI a lot, and venture capital firms are <a href="https://news.crunchbase.com/venture/monthly-global-funding-recap-february-2024/#:~:text=More%20than%20a%20fifth%20of,billion%20invested%20in%20February%202023.">throwing money</a> at companies that claim to work with AI. Even LinkedIn is now an AI company thanks to its text box integrating with GPT to make your corporate boasting even more flowery and pompous (and about 400% longer).</p><p>Are our workplaces blindly playing catch-up with the rapid developments in artificial intelligence (AI), or are we challenging ourselves enough to think about the potential impacts of these technologies? Who is making the decisions about how we implement AI? And do we as a society &#8211;&nbsp;and especially the decision makers &#8211; have enough humility to say "I don't know"?</p><p>I'm arguing that we need to rediscover and redefine collaboration: collaboration that puts individual human relationships at the centre, and collaboration that transcends human boundaries.</p><h2>Collaboration vs. Delegation</h2><p>Taking a systemic look at business processes often starts with the "what", "when", "who", and "how". 
Once we've defined the tasks to get done and their cadence, we wrap several tasks into a job description and draft checklists and templates to fill in. Workers get measured on how quickly and accurately they follow these checklists and outlines. We are essentially using an industrial framework to measure the output of knowledge workers.</p><p>Having worked <em>in</em> a lot of startups and <em>with</em> a lot of large organizations, I couldn't help but see a dichotomy where large organizations may suffer from over-systemization that limits employee contribution, leading to lower retention rates, burnout or simply a lack of innovation and agility. On the other hand, startups and solopreneurs often emphasize freedom and creativity to foster innovation and a sense of ownership among employees but might lack sufficient structure, leading to inefficiencies. Regardless of size, all organizations can benefit from a balanced, systemized approach that encourages flexibility and creativity. For me, that balance is encapsulated in the shift from delegation to collaboration.</p><p>The true efficacy of systemized processes lies in their ability to nurture, rather than constrict, the human element of creativity and innovation. Here we might also find a hint about how we can successfully "use" AI &#8211; not as a tool to which we delegate work that humans used to do, but as a way to reimagine the very fabric of our work processes. What if we stopped thinking about AI as the "intelligent thing that should have the answer (but might make up random ideas)" and changed it to "collaborating with a machine to learn together"? 
How might returning to the idea of <strong>machine learning</strong> help us with that?</p><p>One inspiring illustration of AI's potential as a collaborator is <a href="https://www.business-sparks.io/#1020038823">Business Sparks</a>, an emerging tool developed by the Centre for Creativity enabled by AI (<a href="https://www.bayes.city.ac.uk/business-services/consulting/centre-for-creativity-enabled-by-ai">CebAI</a>). It combines machine learning and large language models with creative thinking techniques and business strategy models to prompt users to think more creatively. A key feature of their solution is how the algorithms are embedded into the process. Rather than one uniform chat interface like most "AI Apps" on the market today, Sparks consists of several independent autonomous agents and the user interacts with one agent at a time, always aware of the context and the role both the human and the agent are playing. Some agents are direct links to an LLM to rephrase information, other agents are highly specialized, for example, to compare the business problem at hand to a strictly curated database of creative techniques or business models and find alternative solutions. With that decoupling, they achieve a seemingly simple, but very impactful shift.&nbsp;</p><h2>Rethinking Work Means Rethinking AI</h2><p>To help us reframe how we think about work and AI, I suggest four lenses:</p><p><strong>From Tools to Teammates</strong>: This core premise shifts the view of AI from merely a tool for delegation to a teammate capable of collaboration. It's about envisioning AI as an active participant in the work process, complementing and enhancing human efforts. 
Think of the difference between LinkedIn's AI text box, which uses a fairly generic LLM to emphasise the parts of the platform that are already questionable, and Business Sparks' suite of brainstorm partners, which play specific specialist roles to empower individuals to do work they previously might have had to hire an expert consultant for.</p><p><strong>Building Learning AI Systems</strong>: Bring back machine learning. It's essential to design AI systems with the capacity to learn and adapt, incorporating critical feedback from humans. This approach not only makes the AI more effective but is also key to unlocking its most transformative potential. Doing so will be a tightrope walk between taking advantage of people by having them train an AI that replaces them, and making sure we keep humans in the loop and bring in perspectives from a broad range of stakeholders. Transparency and opt-in patterns go a long way toward ensuring we build useful and harmless solutions.</p><p><strong>Ethics and Bias Reduction</strong>: Building on the above, it's crucial that we go beyond including human feedback in AI learning loops, and that we design the algorithms with ethical use in mind. To create fair and accountable AI systems we might need new regulations, like the EU AI Act that just passed, and we definitely need leaders who are bold enough to think ahead. 
Current LLMs have been built by <a href="https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/">stealing billions of pages of copyrighted books</a>, <a href="https://www.theguardian.com/media/2023/dec/27/new-york-times-openai-microsoft-lawsuit">news articles</a> and <a href="https://fortune.com/2023/08/31/artists-lawsuit-artificial-intelligence-companies-copyright/">artworks</a> and by (ab-)using <a href="https://medium.com/nerd-for-tech/data-annotation-service-by-typing-captcha-you-are-actually-helping-ai-model-training-5902e8794a6f">free</a> and <a href="https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/">underpaid</a> <a href="https://www.vice.com/en/article/88apnv/underpaid-workers-are-being-forced-to-train-biased-ai-on-mechanical-turk">labour</a> to label data sets. And still, current LLMs show frightening <a href="https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes">biases</a>, just as <a href="https://uk.bookshop.org/p/books/weapons-of-math-destruction-how-big-data-increases-inequality-and-threatens-democracy-cathy-o-neil/379170?ean=9780141985411">algorithms did before we called them AI</a>. Thankfully there is a lot of activity on this front from <a href="https://www.turing.ac.uk/research/interest-groups/fairness-transparency-privacy">institutions</a> to <a href="https://aixdesign.co/">community organizations</a> and <a href="https://www.elisava.net/en/masters/master-design-responsible-artificial-intelligence/">universities</a>.</p><p><strong>Cultural Readiness for Change</strong>: Most of all, we need to talk about the change we're navigating. As humans, we have a biological predisposition to be skeptical of change. Change is dangerous. And we're hardwired to look for danger. Cultural readiness involves preparing individuals and organizations for the shift towards AI collaboration. It means trying things and failing. 
And failing well means being bold enough to roll back things that didn't work. It means listening to those who are the most impacted by the changes. Not in a Luddite way of vetoing change, but curiously and collaboratively. Let's look at cultural traditions from the global south, let's consider that some solutions might be local or small, and let's take inspiration from grassroots organizations or regenerative farming. Maybe the change is less overwhelming if we find other lenses than the dominant productivity and extraction paradigm.</p><h2>Integrating Human and AI Strengths</h2><p><em>This section consists mainly of case studies and examples, if you had enough to read, skip straight to the Vision.</em></p><p>To help us navigate that change, it's crucial to continuously explore the unique strengths that humans and AI bring to collaborative efforts. And as that boundary becomes more and more blurry, we must hone our skills and become clear on the tasks we want to keep doing.</p><p>Another commercial example of human-AI collaboration is the insurance underwriting platform from <a href="https://artificial.io/">Artificial Labs</a>. The artificial underwriter exemplifies AI as both a continuous learner and collaborator and similar to Business Sparks, the artificial underwriter can take different roles. In one role it drafts contracts, in others it analyzes data and in a third, it has a conversation with the underwriter to assist in research and risk assessment. It not only processes vast data but also adapts its inquiry based on interactions with human underwriters. This dynamic learning approach ensures that it becomes more effective over time, honing its ability to ask relevant questions and adjust methodologies accordingly.</p><p>And yes, as the AI learns more about the reasoning process of the professionals, it might start to replace human labour with automated labour. That can be scary or uncomfortable when it's a job we used to do well, or well enough. 
It will disrupt how we work. And it might also open up new opportunities, as we saw in the field of medicine:</p><p>A team at MIT led by James Collins collaborated with AI in search of <a href="https://www.nature.com/articles/s41586-023-06887-8.epdf?sharing_token=cM-GKKFtMc0WwiOuuGC2vtRgN0jAjWel9jnR3ZoTv0Oq3hGz3h0CpU38KI8v4vgN4gId1bf70hmP477ulOrhWWS18SAnxeGbLJYBjWFpKGyPZtm95sGvIFIfwkkf_RKCqOPSOg73k17WXFQM3zina8wDslXoG4r11h0WUDTyFxF_VrQ2LsSbsH6cmUZ28Z-tPyJPffU8TC-7ayXcQjG9EOknyXcdVAB-Fv-56CgZFDg%3D">new antibiotic compounds</a>, enabling them to go far beyond human capacity in terms of speed and volume. Utilizing a deep learning algorithm, the team analyzed millions of chemical compounds and identified 283 compounds as having promising antibiotic properties. The selected compounds were then tested in mice to assess their effectiveness against difficult-to-treat pathogens like MRSA and other bacteria that are notorious for their resistance to existing antibiotics. A unique aspect of this collaboration was the implementation of "explainable AI." Unlike models that operate as black boxes, this system allowed the researchers to understand the biochemistry behind the AI's selections.</p><p>All the examples I gave above have a few things in common, chiefly that they have very specific applications and rely on a combination of different algorithms, each fine-tuned to a task or role the AI is taking while collaborating with its human counterparts. And all three applications have been built in a way that allows us to follow the reasoning of the algorithm, either by thinking step by step (at the invitation of the human) or by meticulously documenting its process and arguments. And when they use an LLM, they don't use it to "solve everything", but use the LLM (a large language model like GPT) for specific use cases, like having a natural language conversation about a task at hand, or brainstorming ideas.</p><p>With that setup, they mitigate bias baked into our LLMs. 
A lot of the big players in AI rely on gigantic models &#8211;&nbsp;more data to train the model leads to better capabilities. Given that OpenAI, Meta and Google hoovered up most of the accessible internet to train their models, they also ingested a lot of the bad stuff on the internet. From hateful posts on Reddit to copyrighted material, artists' entire lifetimes of work and so on. And with it a lot of bias. Since it's too hard to look through a petabyte of data and clean it all, the models have been trained on a lot of the ugly traits of humankind as well. And the giants' way of reducing bias was to censor their models' responses.<br>You might have heard that Google stopped Gemini from generating images because it essentially put historical figures in blackface and was <a href="https://www.bloomberg.com/opinion/articles/2024-02-28/google-s-gemini-ai-isn-t-too-woke-it-s-too-rushed">accused of being overly woke</a>. Most likely Google wanted to make sure they didn't repeat the <a href="https://www.bbc.co.uk/news/technology-33347866">Ape disaster</a> they had with Google Photos a few years ago by training Gemini to favour non-white faces, and by doing that they trained Gemini to change history. That was maybe the most sensational censorship of an AI model we've seen lately. For a simple example, try asking GPT about the Australian mayor Brian Hood, and you get a tiny red box simply saying "I'm unable to produce a response", a mini-censorship OpenAI had to add to the model after it came to light that <a href="https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/">GPT consistently made up bribery allegations</a> about him. All of these are merely little patches of the errors and biases we have discovered; training AI with less bias &#8211; or none at all &#8211; is a lot harder.</p><p>All of that is to say that we're still learning how to shape and apply this new technology. 
And that's why I feel we should go back to the term machine learning. Learning with the machine this time. As we teach the machines, we need to learn and adapt <em>with</em> them. The idea of "Artificial Intelligence" is too fluffy and shiny and allows us to defer to the machine and switch ourselves off. It's really about looking at our AIs as partners, or teammates. Maybe we grow together, do 360 reviews for each other and run team away days with our AI collaborators?</p><p>I'm only half-joking.</p><h2>Vision for the Future</h2><p>What might a future look like where humans and AI collaborate? What challenges are we going to face on the way? And how will we navigate them? In 2021, the <a href="https://pca.st/Bellwether">Bellwether Show</a>, written and produced by Sam Greenspan, took a stab at envisioning that journey. In four mind-blowing episodes, they explored the then-present (and still hugely important) issues and themes in machine learning, AI and automation through the story of a fictional character and their AI companion trying to make sense of the past. If you have the time, I highly recommend making a nice cup of tea and listening before the show disappears into the clouds.</p><p>If you're more a doer than a philosopher, maybe next time you open a new chat window with your favourite LLM, make a point of making it a collaboration:</p><ul><li><p>Tell it what role you want it to play. Give it a name, and write a sentence about its personality.</p></li><li><p>Collaborate on one specific task at a time.</p></li><li><p>Think about <em>your jobs</em> as much as the jobs you're giving the LLM.</p></li></ul><p>When I had a conversation with ChatGPT about collaboration, this is what it said:</p><blockquote><p><em>The collaboration that feels the most fulfilling is one where there's a loop of feedback and iteration. 
It allows for a dynamic, creative process where the end result is something neither of us could have achieved alone!</em></p><p><em>This kind of collaboration taps into both our strengths. You bring human insight, intuition, and creativity, while I bring speed, access to a broad range of information, and the ability to generate content quickly. Together, we can create something that's both insightful and rich in content.</em></p></blockquote><p><strong>How do </strong><em><strong>you</strong></em><strong> want to collaborate?</strong> Maybe it's face-to-face with another human, maybe it includes a new kind of AI helping us collaborate with each other, or maybe it's simply you and your trusty AI sidekicks. Most likely it'll be a blend of all.</p><p>Let's co-create a new way of collaboration.</p><div><hr></div><h2>Let's co-create and learn together</h2><p>I've been diving deep into the world of digital mindfulness, productivity, and the connection to our biology for well over a decade now and spent countless days and nights researching and experimenting with applied AI. With the new era of computing upon us, we have a unique opportunity to rethink our relationship with technology.</p><p>I'd love to share some of the learnings, rituals, processes, and tools I've discovered through a series of small experiments you can try in your own life.</p><p>Somatic AI is a series of workshops exploring how technology can simplify our lives and help us do what we do best. We'll get hands-on and experiment with how AI can help with sense-making and creating at the speed of thought, explore how to use technology in a way that respects our biology, and ask ourselves how we want to shape the tools we use.</p><p>Learn more and <a href="https://slow.works/somatic-ai">sign up for early-access here</a>.</p><p>If this sounds interesting, let me know! I'd be thrilled to embark on this adventure together. More of a solo-learner? 
I'll be sharing text-based versions of the experiments with paid members of Somatic Intelligence as well.</p><p>Photo by <a href="https://www.pexels.com/photo/photo-of-people-having-fist-bump-3228684/">fauxels</a></p><p><em>This post was first published at <a href="https://slow.works/blog/from-delegation-to-collaboration">slow.works</a></em></p>]]></content:encoded></item><item><title><![CDATA[Metrics for a GenAI World]]></title><description><![CDATA[Unlocking Human Potential with a simple shift in what we pay attention to]]></description><link>https://slowworks.substack.com/p/metrics-for-a-genai-world</link><guid isPermaLink="false">https://slowworks.substack.com/p/metrics-for-a-genai-world</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Wed, 28 Feb 2024 09:09:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/72d71d23-cf28-4b4d-8aa3-90d24a9bf079_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's rapidly evolving landscape, organizations are embracing Generative Artificial Intelligence (GenAI) to automate routine tasks and streamline processes. 
However, this shift also requires rethinking the way we measure organizational success. As we move away from traditional metrics (KPIs) focused on widgets produced or tickets closed, it's crucial to identify new metrics that align with our goals in a GenAI-driven world.</p><h3><strong>What gets measured gets changed</strong></h3><p>And no, we're not talking about quantum theory. In physics as well as in our organizations, as soon as we focus on something specific, we change how the system behaves. If we measure tasks completed, the tasks will get smaller, and more of them will be ticked off. Tasks get pushed into someone else's inbox. To make our businesses more predictable, we trained our employees to be risk-averse and follow strict processes. To minimize salary expenses, we outsourced as much as we could to offshore contractors and gig workers. We set up a large part of our organizations to be automatable.</p><p>Now, we have the technology to get rid of those humans. As Ben Evans said last year, GenAI is a bit like <strong><a href="https://youtu.be/xNBiPd2H9J0?feature=shared&amp;t=798">unlimited free interns</a></strong> (at the current state of development). It's increasingly cheap and fast to produce good enough results for routine tasks. I like to think about it as "mediocre is free now". Highly standardized processes can be fully automated with GenAI. This will lead to several major shifts in organizations. A vast reduction of low-skilled knowledge workers (e.g. customer support functions), maybe even a reverse-globalization in some contexts, as the remaining roles get located closer to the customer again. 
And we'll see a challenge in talent development; if companies don't need to hire junior workers (since GenAI is better than humans with little work experience) how will they make sure they have highly skilled talent to replace those retiring or leaving the company for other reasons?</p><h3><strong>Identifying New Metrics</strong></h3><p>To adapt to this new reality, organizations must begin by identifying new metrics that align with their objectives in a GenAI-driven world. These may include measures such as employee satisfaction, collaboration across departments, customer satisfaction, and the ability to tackle complex problems. By focusing on these areas, companies can harness the full potential of both human and artificial intelligence.</p><p>For example, a company might consider implementing metrics related to employee engagement, teamwork, and continuous learning, which would encourage employees to develop new skills and contribute more effectively to the organization's success.</p><p><em>How might you reward the employee who spends time (and budget) to unblock another team?</em> <em>How might you incentivize employees to expand their skillset or mentor others?</em></p><p>Traditional productivity-focused KPIs make this kind of work almost impossible.</p><p>Establishing new metrics around collaboration and continued learning will help mitigate the risk of misuse or over-reliance on GenAI in decision-making processes. Unchallenged, automated decision-making could lead to unintended consequences or a lack of skilled human oversight of critical aspects of a business. 
Organizations should maintain a human-centric approach to AI implementation and ensure that AI systems are transparent, explainable, and auditable.</p><p><em>How might you incentivize the use of GenAI, while encouraging employees to challenge decisions automated systems suggest to them?</em></p><h3><strong>The Rewards</strong></h3><p>Thinking about human and environmental factors in a profit-driven organization isn't new. B-Corporations already started to implement metrics that go beyond pure commercial stats. These organizations demonstrate that by focusing on factors such as employee well-being, customer satisfaction, and environmental impact, businesses can outperform their peers in terms of revenue, growth, stability, resilience, and sustainability. A white paper from B Lab suggests that B Corps <strong><a href="https://infogram.com/1tgl93rr6z7d7df4xo1q6xv8d0ip86gl6lq">outperform</a></strong> other businesses when it comes to these factors. From 2019 to 2021, B Corps were more likely to grow their revenue and their headcounts, and they were more resilient &#8211; with 95% remaining in business in 2023 compared to 88% of non-B Corp businesses.</p><p>With the shift to highly automated hyper-personalized products and services, a wider range of metrics &#8211; like B Corps model them &#8211; is essential not only from a sustainability and ethics perspective but also to ensure organizational resilience.</p><h3><strong>Personal Resilience</strong></h3><p>So what can you do, as an individual worried about the future of your career? GenAI represents a shift in technology that's different from innovations we've seen before. 
Whereas, say, the "cloud" simply represented a shift in <em>where</em> the computers sit (in-house vs off-site) and <em>who</em> owns them, or "mobile" changed <em>how</em> we access digital services (stationary vs anywhere), GenAI changes <em>what</em> we do.</p><p>To think about what that means for our careers, we might be able to learn from GenAI itself. The transformer architecture (the T in GPT) is a key part of what enabled the AI revolution. Before the transformer, AI was fairly slow and specialized. Grossly simplified, machine learning had to compute a lot of possibilities, one at a time, until it found the solution. Now with the transformer, it's a little like AI has <em>fuzzy attention on everything</em> and it only goes through all the detailed calculations once it's decided which parts of everything are relevant to the task at hand.</p><p>In the last two decades or so, our job roles have become more and more specialized. For example, when I studied design, my university &#8211; like most &#8211; offered one course in digital that educated "Interaction Designers": digital all-rounders who have a broad understanding of technology, design, and research and could seamlessly switch between different skillsets. Now people have very specific job roles like UX designer, UX researcher, UI designer, UX developer, UI developer, CX designer... and so on. These highly standardized roles and processes are a lot easier to automate. The easiest way to get replaced by GenAI is to stay deep in a micro-niche of a job role. 
Being able to recognize outliers, make connections across disciplines, and make sense of complex issues is one way to future-proof yourself.</p><p><em>How might you branch out to get an understanding of adjacent roles and skills to get more of that fuzzy attention on everything yourself?</em></p><p><em>What do you need to learn to collaborate with GenAI to do highly specialized, repetitive tasks?</em></p><h3><strong>The way forward</strong></h3><p>The adoption of GenAI presents an opportunity for organizations to reimagine their approach to work, measurement, and success. By shifting our focus from traditional KPIs to metrics that align with the evolving nature of human-AI collaboration, we can unlock new levels of innovation, creativity, and impact within our organizations. As you navigate this exciting new landscape, it's crucial to identify new metrics that accurately reflect the changing nature of work.</p><p><em>What questions are you holding right now?</em></p><p><em>How might you help your team adapt to this new shift?</em></p><p><em>And how might you future-proof your own career?</em></p><p>If this article made you think and you want to continue the conversation, please reach out.</p><p>I offer <strong><a href="https://slow.works/somatic-coaching">1:1 coaching</a></strong> as well as facilitated <strong><a href="https://slow.works/working-with-ai">workshops for teams</a></strong> to help you adapt to the new opportunities GenAI brings to how we work.</p><div><hr></div><h2><strong>Becoming a Cyborg</strong></h2><p>What if we think of GenAI as a third hand or a second brain? Rather than competing with GenAI, we could also use it as an extension of ourselves. In this last section, I'll share some interesting GenAI solutions that empower us humans. 
I'll focus on ideas that are not simply about automating boring tasks, but explore how we might collaborate with AI in useful ways.</p><h3><strong>AudioPen</strong></h3><p>One of the AI apps that I keep coming back to is <strong><a href="https://audiopen.ai/?aff=x0g97">AudioPen</a></strong>. At its core, it's an AI-powered voice recorder that automatically creates transcripts for you. And while it's definitely not the first or only audio-transcription service, it's doing some interesting things. Mainly, it's making you think freely by doing less.</p><ul><li><p>You record up to 15 minutes of random babbling</p></li><li><p>AudioPen transcribes the voice note</p></li><li><p>It rewrites the note for clarity (removing uhms, fillers, repetitions)</p></li><li><p>You get to choose and fine-tune the tone of voice it uses</p></li><li><p>It creates sharable images for the summary</p></li><li><p>It integrates with Zapier to save your notes to Notion, Obsidian, or your second brain of choice.</p></li></ul><p>In my experience, <strong><a href="https://audiopen.ai/?aff=x0g97">AudioPen</a></strong>'s transcription and writing quality tops the likes of <strong><a href="https://otter.ai/referrals/7Q5LKQUB">otter.ai</a></strong>, and something about the simplicity of the interface, the single job of helping you make sense of fleeting thoughts, makes it so much more joyful and impactful to use.</p><p>What do you think? 
What's your favorite AI tool?</p><p><em>Some of the links in this article are affiliate links; I might &#8211; at no cost to you &#8211;</em> <em>earn a referral fee if you end up buying one of these products or services</em>.</p><p><em>This article was first published at <a href="https://slow.works/blog/metrics-for-a-gen-ai-world">slow.works</a> on 28 Feb 2024</em></p>]]></content:encoded></item><item><title><![CDATA[What is Somatic Intelligence?]]></title><description><![CDATA[Fusing Intuition, Rationality, and Technology]]></description><link>https://slowworks.substack.com/p/what-is-somatic-intelligence</link><guid isPermaLink="false">https://slowworks.substack.com/p/what-is-somatic-intelligence</guid><dc:creator><![CDATA[Jonas Haefele]]></dc:creator><pubDate>Wed, 31 Jan 2024 09:10:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ac2528a0-e41e-4b42-a5f4-25f2a801714f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Things are changing. Fast. </h2><p>Last year brought a lot of disruption and &#8211; at least for me &#8211; 2024 has started at full speed. Opportunity is in the air, together with a lot of uncertainty, and maybe a little danger, too. Science and technology are making leaps and we're still getting used to the impacts of the pandemic and the current political climate.</p><p>I bet you've been hearing a lot about AI and its impact on our work and lives. Maybe you're using ChatGPT to help you draft emails or automate research. 
But have you ever stopped to think about how we, as humans, can stay ahead in this tech-savvy world?</p><p>That's where our new newsletter, "Somatic Intelligence," comes in. Every month, I'll share ideas and insights about how we can use AI and other new developments not just to do what we did yesterday a little faster, but to radically rethink how we work. We'll explore how we as humans can stay relevant and how we might adapt our teams to work in tandem with AI.</p><h2>But let's start at the beginning. <br>Why Somatic Intelligence?</h2><p>Somatics is the field of body-focused practices; in essence, it says that as humans we have both a brain and a body, and both are part of what makes us intelligent. For the last few centuries, we had a fairly stable understanding of intelligence. And maybe it's time to expand it a little. I believe that the key to a successful adoption of and co-existence with AI is to develop a differentiated understanding of intelligence. By combining Artificial Intelligence with human reasoning and the intelligence of our biology, we can unlock a level of intelligence that any one of these alone can't achieve.</p><p>Let's look at all three.</p><blockquote><p><em><strong>[Intelligence is] the ability to learn, understand, and make judgments or have opinions that are based on reason. &#8211; Cambridge Dictionary</strong></em></p></blockquote><p>Starting with Descartes' "Je pense donc je suis" (I think, therefore I am) and the Enlightenment era, we embraced reason as the highest standard. Modern science is based on gathering proof and reasoning step by step until we can explain something. And then using that new insight to make decisions going forward. Let's call this <strong>Critical Intelligence (CI)</strong>. It's all about logic, proof, and reason. The real power of reasoning comes in seeing parallels between seemingly unrelated, but similar ideas. By thinking in complex concepts, we can generalize an idea and transplant it to a new context. 
CI has been our go-to for centuries, driving progress and innovation. Until very recently, we thought of our own species as the only creatures capable of reasoning. That's what set us apart from other mammals and what made computers our tools.</p><p>Enter <strong>Artificial Intelligence (AI)</strong>. It's the buzzword of our era, reshaping how we work and think. It's exciting and a tad overwhelming, especially when we think about our roles in this fast-evolving landscape.</p><blockquote><p><em><strong>Artificial Intelligence (AI) is the ability of computers or machines to perform tasks that typically require human intelligence, such as learning, reasoning, and problem-solving.</strong></em></p></blockquote><p>The promises range from fully automating spam emails for your company (they call it cold outreach) to instant personalization that generates a personal ad creative for anyone, or <strong><a href="https://www.rabbit.tech/">magical personal assistants</a></strong> that you just have to talk to &#8211;&nbsp;maybe Siri finally learns to do more than search the web.</p><p>Currently, most of the "AI" solutions we see on the market are based on Large Language Models (LLMs) like GPT. These models reason in a different way than humans do. Human reasoning follows a logical process: formulating a question, making a plan, gathering proof, making decisions, and moving to the next step. LLMs work more like a very elaborate guess, if you'll pardon the oversimplification for a moment. Think about how your phone keyboard guesses the next word even before you start typing. It gives you three options to make typing faster. Your phone can do that because it has read a lot of text messages, including yours, and now it can predict what you will type next. ChatGPT follows a similar pattern, just with a whole lot more information. LLMs are very efficient at predicting the next word &#8211; over and over again. That looks like magic and can lead us to think that AI knows everything. 
We'll get into the details in the coming months. If you've heard about <strong><a href="https://www.ibm.com/topics/ai-hallucinations">AI hallucinations</a></strong>, you might know that AI sometimes <strong><a href="https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html">literally</a></strong> <strong><a href="https://x.com/dsmerdon/status/1618816703923912704?s=20">makes</a></strong> <strong><a href="https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo">things</a></strong> <strong><a href="https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article">up</a></strong>. That's because LLMs don't reason like we do; instead, they work out what's most likely &#8211; they are probabilistic. Many examples of hallucinations like the ones above are very plausible but factually incorrect.</p><p>In the short term, AI has huge potential to increase our efficiency by outsourcing tasks from comparatively slow humans to incredibly fast computers. It also holds huge creative potential and has already led to amazing <strong><a href="https://www.npr.org/sections/health-shots/2023/10/12/1205201928/artificial-intelligence-ai-scientific-discoveries-proteins-drugs-solar">discoveries in science</a></strong>. The biggest difference between the AI that we use to create social media posts and the AI scientists use to fold proteins or <strong><a href="https://www.theguardian.com/technology/2023/may/25/artificial-intelligence-antibiotic-deadly-superbug-hospital">discover a new antibiotic</a></strong> is how specialized the AI is. Highly specialized models allow for a lot more control and skill, whereas foundation models like GPT are pretty good at a lot of things, but maybe not great at anything. In either case, we need humans in the loop to sense-check decisions and steer AI. 
Think of AI as a collaborator, or as tech analyst Benedict Evans said in 2018: it's like&nbsp;giving&nbsp;every company&nbsp;infinite interns.</p><p>In the medium term, AI has the potential to completely transform how we live and work. AI can already <strong><a href="https://www.interprefy.com/">bridge language gaps</a></strong>, and automated systems like <strong><a href="https://www.wired.co.uk/article/china-social-credit-system-explained">China's social credit system</a></strong> show how state surveillance can be combined with a kind of incentive scheme "to do the right thing", leading us to <strong><a href="https://www.publicethics.org/post/will-ai-make-democracy-obsolete">scenarios</a></strong> of total <strong><a href="https://www.capgemini.com/gb-en/insights/expert-perspectives/algocracy-for-common-good-or-authoritarianism/">Algocracy</a></strong>.</p><p>One of the big questions coming up again and again is how we might control AI to make sure it does the right thing. What the right thing is in this case is an ongoing ethical debate. Let's say it is to make life better for as many people as possible, without explicitly harming anyone in the process. How might we do that? How might we even get to a definition of what "the right thing" is?</p><blockquote><p><em><strong>Somatics uses the mind-body connection to help you survey your internal self and listen to signals from your body.</strong></em></p></blockquote><p>Now, this is where <strong>Somatic Intelligence (SI)</strong> comes into play. SI is about tapping into our intuitive wisdom, the kind of insights our body and subconscious offer us. You know, those gut feelings that sometimes seem to know more than our brains? That's Somatic Intelligence in action. 
Recent developments in psychology, like the <strong><a href="https://www.youtube.com/watch?v=SlhFrBoEnxU">Polyvagal Theory</a></strong> pioneered by <strong><a href="https://www.ccjm.org/content/ccjom/76/4_suppl_2/S86.full.pdf">Stephen Porges</a></strong>, suggest that these gut feelings are a lot more than made-up romantic ideas: they come from our biology and are anchored deep in our evolution. Porges introduces the idea of <strong><a href="https://www.youtube.com/watch?v=NTzBiEM8ndE">neuroception</a></strong>, an almost instant "knowing" that originates from our nervous system and bypasses cognition. Our biology has evolved firstly to keep us safe from harm and secondly to enable us to collaborate with others. Learning to create a work environment that allows employees to feel safe on a biological level unlocks our ability to collaborate effectively. And successful collaboration with other humans reinforces that safety signal, making us even better at communicating and creating.</p><p>So in short, Somatic Intelligence is about sensing connections and insights from our biology. In psychotherapy, modalities like <strong><a href="https://www.hakomieducation.net/about">Hakomi</a></strong> combine insights from our body with tested therapy tools to work with trauma and other complex conditions. In business, accessing our SI might lead to better products and services: by tailoring them to essential human needs, we find product-market fit faster and reduce user churn. Or it might help us create a corporate culture that attracts and retains talent, when we can demonstrate that employee well-being is more than a weekly meditation class or a free subscription to an online counseling service, and that the products employees create leave a lasting positive impact. And last but not least, SI can be the first sense-check of AI. In a recent project, I worked with an international consumer-electronics brand. 
Their CEO, who holds several PhDs in Machine Learning, was excited about our ideas on how we might integrate AI into their platform but vehemently insisted that there always has to be a human in the loop. You might call it human-in-the-loop AI, better safe than sorry, or &#8211; as I like to think of it &#8211; AI as a collaborator, rather than a replacement.</p><p>Imagine combining all three aspects of intelligence. CI gives us structure, AI offers innovation, and SI brings depth and intuition. Together, they can revolutionize our decision-making, creativity, and even how we connect with others.</p><p>Through this newsletter series, "Somatic Intelligence", we'll explore how this powerful blend can transform our professional and personal lives. Think about it &#8211; combining the speed of AI with the intuitive depth of SI, guided by the rationality of CI. It's a recipe for something truly extraordinary.</p><p>So, what do you say? Ready to embark on this journey with me? Let's unlock the potential of our intelligences &#8211; all three of them &#8211; and redefine what it means to be savvy in the digital age.</p><h2>What questions are you holding as we embark on this new chapter? </h2><p>Let me know. Drop me a message below. I read and reply to every message.</p><p>I can't wait to explore this together,</p><p>Jonas</p><p><em>This article was first published at <a href="https://slow.works/blog/what-is-somatic-intelligence">slow.works</a> on 29 Jan 2024</em></p>]]></content:encoded></item></channel></rss>