<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[PAICE.work]]></title><description><![CDATA[People+AI Collaboration Effectiveness]]></description><link>https://paice.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!vzLg!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F814df130-7922-4f7e-aefc-ad357274ebc9_1280x1280.png</url><title>PAICE.work</title><link>https://paice.substack.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 23:51:29 GMT</lastBuildDate><atom:link href="https://paice.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[PAICE.work PBC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[paice@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[paice@substack.com]]></itunes:email><itunes:name><![CDATA[Sam Rogers]]></itunes:name></itunes:owner><itunes:author><![CDATA[Sam Rogers]]></itunes:author><googleplay:owner><![CDATA[paice@substack.com]]></googleplay:owner><googleplay:email><![CDATA[paice@substack.com]]></googleplay:email><googleplay:author><![CDATA[Sam Rogers]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Weekly Update - April 6, 2026]]></title><description><![CDATA[Conference win, whitepaper release, Spanish beta, and Q1 in review]]></description><link>https://paice.substack.com/p/weekly-update-april-6-2026</link><guid isPermaLink="false">https://paice.substack.com/p/weekly-update-april-6-2026</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Mon, 06 Apr 2026 18:01:14 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/6b3bea70-f569-436c-ac4d-c04d364a74d4_1000x1000.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>PAICE won the ISPI 2026 Shark Tank competition in Nashville this week, with all judges reporting they are &#8220;in&#8221; on PAICE, and our <em>Closing the Collaboration Gap</em> whitepaper was released as part of the pitch. Meanwhile, Spanish language functionality was released (in beta), our usual five weekday posts shipped, and 10 projects are now organized under PAICE.work PBC. It&#8217;s also the first full week of Q2, so this update includes a Q1 retrospective.</p><h2><strong>Content Published Last Week</strong></h2><p><strong>Monday</strong> (Mar 30): <a href="https://paice.work/blog/update-2026-03-30">&#8220;Weekly Update - March 30, 2026&#8221;</a></p><p><strong>Tuesday</strong> (Mar 31): <a href="https://paice.work/blog/closing-the-collaboration-gap-whitepaper">&#8220;Closing the Collaboration Gap&#8221;</a> Our third whitepaper, presented at the ISPI 2026 Performance Improvement Conference in Nashville, maps People+AI collaboration to established performance improvement frameworks from Gilbert, Mager, and Thalheimer.</p><p><strong>Wednesday</strong> (Apr 1): <a href="https://paice.work/blog/seven-signs-you-dont-need-paice">&#8220;Seven Signs You Don&#8217;t Need PAICE&#8221;</a> April Fools&#8217; Day satire exploring the entirely fictional scenarios under which AI collaboration assessment becomes unnecessary.</p><p><strong>Thursday</strong> (Apr 2): <a href="https://paice.work/blog/verification-workflows-that-actually-work">&#8220;Verification Workflows That Actually Work&#8221;</a> Practical verification frameworks for lawyers, auditors, clinicians, and financial advisors who need to verify AI output before acting on it.</p><p><strong>Friday</strong> (Apr 3): Video - <a href="https://paice.work/blog/ispi-2026-shark-tank">&#8220;All Judges In&#8221;</a> Watch PAICE win the ISPI 2026
Shark Tank competition in Nashville with a clean sweep from every judge, presenting our Closing the Collaboration Gap whitepaper.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share PAICE.work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share PAICE.work</span></a></p><h2><strong>ISPI 2026 Conference</strong></h2><p>PAICE presented at the International Society for Performance Improvement 2026 Conference in Nashville, Tennessee this week with two major outcomes. First, the <em>Closing the Collaboration Gap</em> whitepaper &#8212; our third, and the behavioral science foundation behind PAICE&#8217;s five-dimension model &#8212; published on March 31. Second, PAICE won the conference&#8217;s Shark Tank competition with all judges voting in favor. This is external validation from the performance improvement community that behavioral measurement of People+AI collaboration is the right approach at the right time. The full 8:46 video is available in Friday&#8217;s post.</p><h2><strong>Ten Projects, One Mission</strong></h2><p>Tomorrow&#8217;s post, <a href="https://paice.work/blog/fillling-the-missing-trust-layer">&#8220;Filling The Missing Trust Layer&#8221;</a>, announces the formal consolidation of 10 projects under PAICE.work PBC. 
The thread connecting them: building the structural conditions that make People+AI collaboration trustworthy, not just capable.</p><p>Our portfolio now includes three revenue-generating products &#8212; <a href="https://paice.work/">PAICE.work</a> (core, behavioral assessment), <a href="https://everyailaw.com/">EveryAILaw.com</a> (AI regulation tracking), and <a href="https://siteline.to/">Siteline.to</a> (agent discoverability scoring) &#8212; plus seven open-source infrastructure projects: Graceful Boundaries (service limit communication), Skill Provenance (agent skill versioning), Knowledge-as-Code Template (machine-traversable domain knowledge), Turnfile (peer multi-agent coordination via the SNAP protocol), Skill A11y Audit (accessibility quality gate for AI-generated code), AI Tool Watch (AI capability tracking), and a seventh project, still in stealth for now, that brings the total to ten.</p><p>While these weren&#8217;t originally conceived as a portfolio, each was built because the existing ecosystem couldn&#8217;t provide what PAICE needed fast enough. The consolidation announcement frames the strategic case: the AI trust layer doesn&#8217;t exist yet, and PAICE.work PBC is building it from measurement to infrastructure.</p><h2><strong>Technical Improvements</strong></h2><h3><strong>Spanish Language Assessment Merged</strong></h3><p>PR #22 merged the Spanish language branch, bringing the full assessment experience to Spanish speakers. The implementation includes translated UI strings for the assessment chat, results page chrome, and confirmation modals; backend locale files with Spanish-specific detection patterns, manipulation data, and scoring prompts; language tracking in PostHog analytics events; and a UI fix to clear cached conversations when switching languages so the welcome message appears in the selected language. Both Spanish and Urdu are marked as Beta in the language dropdown.
A full announcement post with the story behind tackling Urdu first is drafted and scheduled, and French and Portuguese are already underway and coming soon.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/weekly-update-april-6-2026/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/weekly-update-april-6-2026/comments"><span>Leave a comment</span></a></p><h2><strong>Platform Stability</strong></h2><p>Platform maintained 100% uptime with no incidents. All systems operating normally: standard and Confidential Mode assessments, results generation with on-chain attestation, cohort management, email notifications, and analytics processing.</p><h2><strong>Q1 2026 in Review</strong></h2><p>This is the first update of Q2, so it&#8217;s worth stepping back. Here&#8217;s some of what we&#8217;re proud to have shipped in Q1 (January&#8211;March 2026):</p><p><strong>Products launched:</strong></p><ul><li><p><strong>PAICE Pro</strong> &#8212; 20-subscore breakdown, assessment history, improvement resources ($29.99 one-time, no account)</p></li><li><p><strong>EveryAILaw.com</strong> &#8212; 51 regulations, 10 compliance obligations, global coverage, MCP endpoints, built in four days</p></li><li><p><strong>Siteline</strong> &#8212; agent discoverability scoring (B+ for paice.work on launch)</p></li></ul><p><strong>Whitepapers published:</strong></p><ul><li><p><em>Privacy &amp; Security Architecture</em> (released at NEARCON 2026, February)</p></li><li><p><em>Closing the Collaboration Gap</em> (released at ISPI 2026, March)</p></li></ul><p><strong>Infrastructure milestones:</strong></p><ul><li><p>Confidential Mode with TEE integration and on-chain attestation</p></li><li><p>Multilingual assessment pipeline, assessment now in 3 languages</p></li><li><p>6 open-source projects 
published (Graceful Boundaries, Skill Provenance, Knowledge-as-Code Template, Turnfile, Skill A11y Audit, AI Tool Watch)</p></li><li><p>SSR evaluation suite at 100% pass rate</p></li></ul><p><strong>Content:</strong></p><ul><li><p>YouTube channel and podcast launched</p></li><li><p>100-post milestone, with the slog skill pipeline now automating the full authoring workflow</p></li><li><p>65+ blog posts published in Q1 (maintaining daily weekday cadence)</p></li></ul><p><strong>Partnerships and validation:</strong></p><ul><li><p>Sentient Risk Advisory referral collaboration announced</p></li><li><p>ISPI 2026 Shark Tank win (all judges in)</p></li><li><p>NEARCON 2026 hackathon entry</p></li></ul><p>Q1 was about building the foundation: products, content pipeline, multilingual infrastructure, open standards, and external validation. Q2 is about distribution.</p><h2><strong>Top 10 Delivery Targets for Q2</strong></h2><ol><li><p>Jurisdiction-specific assessments &amp; interventions (CA, NY, CO, etc.)</p></li><li><p>Multilingual assessments (Western Hemisphere this month, EU by end of quarter)</p></li><li><p>Additional PAICE product offerings and existing product value adds</p></li><li><p>EveryAILaw.com paid tiers</p></li><li><p>Siteline extended scanning</p></li><li><p>More agentic trust engineering projects</p></li><li><p>Aggregate Intelligence maturity model</p></li><li><p>Academic &amp; training partnerships</p></li><li><p>Published articles &amp; podcast appearances</p></li><li><p>Continued daily content production schedule &amp; quality</p></li></ol><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>The Week in Numbers</strong></h2><ul><li><p>5 blog posts published (1
whitepaper announcement + 1 satire + 1 framework + 1 video + 1 weekly update)</p></li><li><p>64 commits across all PAICE.work PBC projects (18 PAICE core, 22 AI Tool Watch, 8 EveryAILaw, 4 Siteline, 3 Graceful Boundaries, 3 Knowledge-as-Code, 2 Skill Provenance, 2 Turnfile, 2 Skill A11y Audit)</p></li><li><p>80 files changed, 3,569 insertions, 1,209 deletions (PAICE core)</p></li><li><p>Spanish assessment fully functional (UI, backend prompts, analytics tracking)</p></li><li><p>ISPI 2026 Shark Tank: won with all judges in</p></li><li><p>Third whitepaper published and distributed at conference</p></li><li><p>10 projects formally consolidated under PAICE.work PBC (announcement tomorrow)</p></li><li><p>100% uptime, zero incidents</p></li></ul><h2><strong>Why This Week Matters</strong></h2><p>External validation matters more than internal conviction. The ISPI Shark Tank win, from a room of performance improvement professionals who evaluate measurement frameworks for a living, confirms that PAICE&#8217;s behavioral approach resonates beyond the AI governance echo chamber. The whitepaper gives that audience a rigorous foundation to evaluate PAICE against the HPT frameworks they already trust. The consolidation announcement tomorrow makes the strategic case explicit: PAICE isn&#8217;t just an assessment tool. It&#8217;s the anchor of a portfolio that spans behavioral measurement, regulatory tracking, agent infrastructure, and open standards &#8212; all serving the same mission of making People+AI collaboration structurally trustworthy. Meanwhile, the Spanish merge is the first concrete step toward making the assessment accessible to the 580 million Spanish speakers worldwide, starting with the Western Hemisphere markets where PAICE&#8217;s regulated-industry positioning is strongest. Q1 ended with Pro launched, three whitepapers published, and multilingual infrastructure in place. 
Q2 starts with conference momentum, a consolidation thesis, and a clear content pipeline through June.</p><h2><strong>Thank You</strong></h2><p>Thanks to the ISPI conference organizers and judges for the opportunity to present. Thanks to Muneeb for the Spanish language implementation work that made the multilingual merge possible. And thanks to everyone who continues to take the assessment, share their results, and push us to build better.</p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://store.paice.work/">Get PAICE Pro</a> ($29.99, no account required)</p></li><li><p><a href="https://paice.work/baseline">Get a Team Baseline</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepapers">Read the whitepapers</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/update-2026-03-30">Weekly Update - March 30, 2026</a></p></li><li><p><a href="https://paice.work/blog/fillling-the-missing-trust-layer">Filling The Missing Trust Layer</a></p></li><li><p><a href="https://paice.work/blog/closing-the-collaboration-gap-whitepaper">Closing the Collaboration Gap</a></p></li><li><p><a href="https://paice.work/blog/ispi-2026-shark-tank">All Judges In</a></p></li><li><p><a href="https://paice.work/blog/verification-workflows-that-actually-work">Verification Workflows That Actually Work</a></p></li><li><p><a href="https://paice.work/blog/seven-signs-you-dont-need-paice">Seven Signs You Don&#8217;t Need PAICE</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[All Judges "In"]]></title><description><![CDATA[Watch PAICE Win the ISPI 2026 Shark Tank Competition]]></description><link>https://paice.substack.com/p/all-judges-in</link><guid isPermaLink="false">https://paice.substack.com/p/all-judges-in</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Fri, 03 Apr 2026 16:00:52 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193083357/6941991b52baac7872e16ff6def8ef1f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Three days ago, we walked into a Shark Tank-style competition at the <a href="https://ispi.org/event/2026PIConference">International Society for Performance Improvement (ISPI) 2026 Conference</a> in Nashville, Tennessee. Seven participants pitched. Two received a clean sweep from every judge. 
PAICE.work was one of them.</p><p>Above is the full presentation, also viewable on <a href="https://youtu.be/hlhGO8Q5Zks">our YouTube channel</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/all-judges-in?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/all-judges-in?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Why This Matters</strong></h2><p>ISPI is the professional home of Human Performance Technology (HPT), a discipline built on decades of evidence-based, systematic approaches to measuring and improving what people actually do at work. Gilbert, Mager, Rummler, Thalheimer, Phillips, Brinkerhoff. These are the names that built the science of performance improvement.</p><p>The Shark Tank format put PAICE in front of judges who evaluate ideas for a living. Not technology enthusiasts looking for the next shiny tool, but performance improvement professionals who ask one question above all others: does it actually work?</p><p>Every judge said yes. All in.</p><p>That validation matters because it comes from exactly the community best positioned to evaluate what PAICE does. We are not asking the HPT field to learn something new. 
We are showing them that People+AI collaboration measurement is the next application of frameworks they have refined for over sixty years.</p><h2><strong>What We Presented</strong></h2><p>The presentation centered on our newest whitepaper, <em><a href="https://paice.work/blog/closing-the-collaboration-gap-whitepaper">Closing the Collaboration Gap: A Behavioral Skill Framework for Human-AI Performance Improvement</a></em>, which we released at the conference on March 31.</p><p>The core argument is straightforward:</p><p><strong>Organizations cannot answer the most basic capability question.</strong> They know their people are using AI. They have no idea whether those people are using it well. Training completion rates and usage dashboards tell you about activity, not quality. A professional can pass every AI literacy module and still accept a hallucinated citation under deadline pressure.</p><p><strong>Behavioral measurement fills the gap.</strong> PAICE (People + AI Collaboration Effectiveness) measures what professionals actually do when working with AI, not what they claim or what they know. Through strategic failure injection, placing realistic errors into AI responses and observing whether professionals catch them without prompting, the assessment produces behavioral evidence that traditional approaches cannot.</p><p><strong>The HPT lineage is real.</strong> The whitepaper maps PAICE directly to established performance improvement frameworks. Gilbert&#8217;s focus on worthy performance over activity. Mager and Pipe&#8217;s distinction between knowledge deficits and execution deficits. Thalheimer&#8217;s demand that evaluation measure decisions, not perceptions. These are not decorative citations. 
They shaped how the system was built.</p><p>The full whitepaper is available at <a href="https://paice.work/whitepapers">paice.work/whitepapers</a>, and the <a href="https://paice.work/blog/closing-the-collaboration-gap-whitepaper">detailed announcement post</a> walks through each section.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/all-judges-in/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/all-judges-in/comments"><span>Leave a comment</span></a></p><h2><strong>What&#8217;s Happened Since Nashville</strong></h2><p>The conference was three days ago. The momentum since then has been immediate.</p><p><strong>New university research partner.</strong> We have submitted a formal study proposal with a new university partner to validate the PAICE behavioral measurement methodology through controlled research. This builds on our commitment to evidence-based assessment and opens a new avenue for peer-reviewed validation of the framework.</p><p><strong>First potential Asian partner.</strong> We are in active conversations with what would be our first partner in Asia, expanding PAICE&#8217;s geographic reach into a market where AI adoption is accelerating rapidly and the demand for collaboration measurement is growing.</p><p><strong>HPT community response.</strong> The performance improvement community&#8217;s reaction at ISPI confirmed something we have believed from the beginning: measuring People+AI collaboration is not a new discipline requiring new frameworks. It is the next application of a discipline that already exists. 
The practitioners who have spent their careers making human performance measurable and improvable see PAICE as a natural extension of their work.</p><h2><strong>What&#8217;s Next</strong></h2><p>The ISPI win and the connections made in Nashville are accelerating several priorities:</p><p><strong>Research validation.</strong> The university study proposal, once approved, will produce independent peer-reviewed evidence about the effectiveness of behavioral measurement for AI collaboration. This is the kind of evidence the HPT community values most, and it is the kind of evidence PAICE was designed to produce.</p><p><strong>Global expansion.</strong> The conversations with our potential Asian partner represent the beginning of PAICE&#8217;s international growth. AI collaboration is not a regional challenge. It is a global one. As regulatory frameworks mature in different markets, the demand for behavioral measurement will follow.</p><p><strong>Community building.</strong> The HPT community represents a natural base of practitioners who already understand why behavioral measurement matters, why training completion is insufficient, and why self-reported confidence is unreliable. We are building relationships with this community to help them bring People+AI collaboration measurement to their clients and organizations.</p><p><a href="https://paice.work/blog/ai-collaboration-master-skill-2026">The water is moving fast. The rapids are not slowing down.</a> But after Nashville, we are not navigating alone.</p><div><hr></div><p><em>Want to see how your AI collaboration capabilities measure up? 
<a href="https://paice.work/">Take the PAICE assessment</a> to get personalized insights and recommendations.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Explore our Baseline offerings</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>The ISPI Whitepaper:</strong></p><ul><li><p><a href="https://paice.work/blog/closing-the-collaboration-gap-whitepaper">Closing the Collaboration Gap</a> - Full announcement and deep dive into the whitepaper presented at ISPI</p></li></ul><p>&#128214; <strong>Understanding PAICE:</strong></p><ul><li><p><a href="https://paice.work/blog/deadliest-white-space-modern-office">The Deadliest White Space in the Modern Office</a> - Video exploring the verification gap in People+AI work</p></li><li><p><a href="https://paice.work/blog/what-paice-tests-for">What PAICE Tests For</a> - How behavioral assessment differs from knowledge testing</p></li><li><p><a href="https://paice.work/blog/understanding-the-five-paice-dimensions">Understanding the Five PAICE Dimensions</a> - Deep dive into the measurement framework</p></li></ul><p>&#128214; <strong>Previous Whitepapers:</strong></p><ul><li><p><a href="https://paice.work/blog/paice-whitepaper-release">PAICE.work Vision Whitepaper</a> - The original framework paper from DevLearn 2025</p></li><li><p><a href="https://paice.work/blog/privacy-security-whitepaper-release">Privacy &amp; Security Whitepaper</a> - Cryptographic integrity and TEE-protected inference<br><br></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Verification Workflows That Actually Work]]></title><description><![CDATA[How Regulated Professionals Verify AI Output in Practice]]></description><link>https://paice.substack.com/p/verification-workflows-that-actually</link><guid 
isPermaLink="false">https://paice.substack.com/p/verification-workflows-that-actually</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Thu, 02 Apr 2026 14:00:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a6fe24c8-431f-4032-8ca2-01adc6d20dc3_2640x1391.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>A professional copies AI output into a deliverable. It sounds right. It reads well. But...it&#8217;s wrong.</p><p>Maybe it cited a statute that was repealed two years ago. Maybe the financial ratio was calculated against the wrong baseline. Maybe the clinical guideline it referenced applies to a different patient population. The output was fluent, confident, and incorrect.</p><p>The question isn&#8217;t whether AI makes mistakes. It does. The question is whether you have a workflow that catches those mistakes before they reach your client, your patient, or your regulator.</p><p>Most professionals don&#8217;t. Not because they&#8217;re careless, but because nobody taught them what structured verification actually looks like.</p><h2><strong>Why &#8220;Just Double-Check It&#8221; Fails</strong></h2><p>You&#8217;ve heard the advice. Your firm&#8217;s AI policy probably includes some version of it. &#8220;Always verify AI output before relying on it.&#8221; Good principle. Terrible instruction.</p><p>Here&#8217;s why generic verification advice doesn&#8217;t work in practice.</p><p><strong>Confirmation bias takes over.</strong> When you&#8217;ve already read an AI output that sounds authoritative, your review is biased toward confirming it. You&#8217;re not really checking whether it&#8217;s right. You&#8217;re looking for reasons it&#8217;s right. That&#8217;s a fundamentally different cognitive task.</p><p><strong>Time pressure creates shortcuts.</strong> Under deadline, &#8220;verify this&#8221; becomes &#8220;skim this.&#8221; Skimming catches formatting errors and obvious nonsense.
It doesn&#8217;t catch a correctly formatted citation to a case that doesn&#8217;t exist, or a financial calculation that uses a plausible but wrong discount rate.</p><p><strong>Selective verification misses the real risks.</strong> Without a structured approach, people verify what they&#8217;re already uncertain about and skip what sounds confident. But AI&#8217;s most dangerous errors are precisely the ones it states with the most confidence. If you only verify things that sound uncertain, you&#8217;re checking the safe outputs and trusting the risky ones.</p><p>PAICE (People + AI Collaboration Effectiveness) measures verification behavior as the core of its Accountability dimension, which carries 30% of the total score weight. That weight reflects a reality that regulated professionals already know: verification is the skill that separates responsible AI use from professional liability.</p><p>What follows are four verification workflows that work in practice across regulated professions. Not abstract principles. Concrete steps you can apply today.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/verification-workflows-that-actually?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/verification-workflows-that-actually?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Workflow 1: The Three-Pass Review</strong></h2><p>This is a foundational verification method. It works because it forces you to read the same output three times, each time through a different lens.</p><p><strong>Pass 1: Factual Claims</strong></p><p>Read the output and flag every factual assertion. Dates, statistics, names, citations, numerical claims. Don&#8217;t evaluate them yet. 
Just mark them. If the AI says a regulation was enacted in 2019, flag it. If it says a medication has a 95% efficacy rate, flag it. If it cites a specific court ruling, flag it.</p><p>Then verify each flagged item against an authoritative source. Not against another AI. Against the original.</p><p><strong>Pass 2: Logical Consistency</strong></p><p>Read the output again, this time looking for internal contradictions and reasoning errors. Does the conclusion follow from the premises? Does paragraph three contradict paragraph seven? If the output recommends a conservative strategy in the introduction and an aggressive one in the recommendations, something is wrong regardless of whether the individual facts are correct.</p><p>Watch for outputs that change position mid-document without acknowledging the shift. Many AI systems do this frequently, especially in longer outputs.</p><p><strong>Pass 3: Domain-Specific Risk</strong></p><p>This is the pass that requires your professional expertise. Read the output one more time through the lens of your regulatory environment, your professional standards, and your specific client situation.</p><p>For legal professionals, this means checking whether the analysis accounts for jurisdiction-specific variations, recent amendments, and applicable precedent. For financial advisors, it means verifying that assumptions match the client&#8217;s risk profile and that regulatory requirements are correctly applied. For healthcare professionals, it means confirming that recommendations are appropriate for the specific patient population, accounting for contraindications and current clinical guidelines. For auditors, it means verifying that standards references are current and that the analysis applies the correct framework for the engagement type.</p><p>The three-pass approach typically adds ten to twenty minutes per document. 
That&#8217;s a small investment against the cost of a malpractice claim, a regulatory sanction, or a patient safety event. (Yes, you can cut that time by more than half by leveraging agents to make the first pass and possibly the second if you know what you&#8217;re doing. But even then, you want to start with the manual version. Because you need to know how it works on a human level first.)</p><h2><strong>Workflow 2: The Source Verification Protocol</strong></h2><p>AI systems cite things. Case law, accounting standards, clinical guidelines, regulatory provisions, research studies. Sometimes those citations are accurate. Sometimes the source exists but doesn&#8217;t say what the AI claims. Sometimes the source doesn&#8217;t exist at all.</p><p>The Source Verification Protocol addresses this directly.</p><p><strong>Step 1: Verify existence.</strong> Does the cited source actually exist? Look it up in the authoritative database for your field. For case law, check Westlaw, LexisNexis, or your jurisdiction&#8217;s official reporter. For accounting standards, check the FASB Codification or IFRS standards directly. For clinical guidelines, check PubMed, the issuing professional organization, or the relevant formulary. For regulatory citations, check the Federal Register, CFR, or the relevant regulatory body&#8217;s website.</p><p>If the source doesn&#8217;t exist, stop. Everything built on that citation is unreliable.</p><p><strong>Step 2: Verify accuracy.</strong> If the source exists, does it actually say what the AI claims? This is where many professionals get tripped up. The AI might cite a real case but misstate the holding. It might reference a real accounting standard but apply the wrong paragraph. It might name a real clinical trial but report the wrong outcome measure.</p><p>Read the relevant section of the actual source. Compare it to the AI&#8217;s characterization. 
Look for subtle differences in scope, applicability, or conclusion.</p><p><strong>Step 3: Verify currency.</strong> Is the source still current? Has the case been overruled or distinguished? Has the standard been superseded or amended? Has the guideline been updated? AI training data has a cutoff, and professional standards change. A citation that was accurate two years ago may be misleading today.</p><p><strong>Step 4: Verify relevance.</strong> Even if the source exists, is accurate, and is current, does it actually apply to your situation? A case from a different jurisdiction, a standard for a different entity type, or a guideline for a different patient population may be technically accurate but professionally irrelevant.</p><p>This four-step protocol sounds time-intensive. In practice, most verifications take two to three minutes per citation. For a document with five citations, that&#8217;s ten to fifteen minutes. For a court filing, a regulatory submission, or a clinical recommendation, that time is not optional. But at least it is billable.</p><h2><strong>Workflow 3: The Contradiction Test</strong></h2><p>This workflow is particularly useful when you&#8217;re uncertain whether an AI output is reliable but can&#8217;t easily verify it against external sources.</p><p>The method is simple. Ask the AI to argue the opposite position with equal rigor.</p><p>If you asked AI to draft an argument that a particular contract clause is enforceable, ask it to draft an equally rigorous argument that the same clause is unenforceable. If you asked it to recommend a particular investment strategy, ask it to build the strongest case against that strategy. If you asked it to support a particular diagnosis, ask it to present the differential diagnosis that best explains the same symptoms. 
Get adversarial.</p><p><strong>What to watch for:</strong></p><p>If the AI argues both positions with equal confidence and equal quality of reasoning, neither position should be trusted without independent verification. The AI is demonstrating fluency, not judgment. It doesn&#8217;t actually know which position is correct. It&#8217;s generating plausible text in both directions.</p><p>If the AI&#8217;s counter-argument is noticeably weaker, that&#8217;s a slightly better signal, but it&#8217;s not definitive. It may simply mean the training data contained more support for one position than the other.</p><p>If the AI identifies specific weaknesses in its own original argument when asked to argue the opposite, pay attention to those weaknesses. They often point to genuinely vulnerable aspects of the analysis.</p><p><strong>A practical example from financial advisory work.</strong> An advisor asks AI to analyze whether a particular tax strategy is appropriate for a client profile. The AI provides a confident recommendation with supporting analysis. The advisor then asks: &#8220;Now make the strongest possible argument that this strategy is inappropriate or carries unacceptable risk for this client profile.&#8221;</p><p>The AI responds with three specific risks the original analysis didn&#8217;t mention. The advisor verifies those risks against the client&#8217;s actual situation and discovers that one of them is directly relevant. 
The original recommendation needs modification.</p><p>Without the contradiction test, that risk would have been invisible in the original output.</p><h2><strong>Workflow 4: The Stakeholder Lens</strong></h2><p>Before finalizing any AI-assisted deliverable, apply this question: &#8220;If my regulator, opposing counsel, auditor, or patient saw this, what questions would they ask?&#8221;</p><p>Then use those questions as verification prompts.</p><p><strong>For legal professionals.</strong> If opposing counsel reviewed this brief, where would they attack? What precedent would they cite to distinguish your cases? What factual assertions would they challenge? Draft those challenges, then verify whether your analysis survives them.</p><p><strong>For financial advisors.</strong> If a compliance examiner reviewed this recommendation, what documentation would they want to see? What suitability questions would they raise? What risk disclosures would they expect? Verify that your AI-assisted analysis addresses each of those concerns.</p><p><strong>For healthcare professionals.</strong> If a peer reviewer examined this treatment plan, what alternatives would they suggest? What contraindications would they flag? What evidence would they want to see for the chosen approach? Use those questions to stress-test the AI output against clinical standards.</p><p><strong>For auditors.</strong> If a regulatory inspector reviewed this workpaper, what sampling methodology questions would they raise? What materiality threshold justifications would they expect? What documentation of professional judgment would they look for?</p><p>The Stakeholder Lens works because it forces you to evaluate AI output from the perspective of someone who is not trying to confirm it. Your regulator isn&#8217;t looking for reasons the output is right. They&#8217;re looking for gaps, omissions, and unsupported assertions. 
Adopting that perspective before submission catches problems that a cooperative review misses.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/verification-workflows-that-actually/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/verification-workflows-that-actually/comments"><span>Leave a comment</span></a></p><h2><strong>Building Verification Into Your Workflow</strong></h2><p>These four workflows are not checklists you laminate and pin to your monitor. They&#8217;re habits you build through deliberate practice.</p><p><strong>Start with one.</strong> Pick the workflow that addresses your most common risk. If you work with citations frequently, start with Source Verification. If your deliverables face adversarial review, start with the Stakeholder Lens. If you&#8217;re producing analytical content, start with the Three-Pass Review.</p><p><strong>Set a minimum verification standard.</strong> Not every AI output requires all four workflows. A brainstorming list needs less verification than a regulatory filing. But establish a floor. What&#8217;s the minimum verification you&#8217;ll apply to any AI output before it leaves your hands? For most regulated professionals, the Three-Pass Review should be that minimum.</p><p><strong>Time it.</strong> Most professionals overestimate how long verification takes. The Three-Pass Review adds ten to twenty minutes. Source Verification adds two to three minutes per citation. The Contradiction Test adds another three to five minutes. The Stakeholder Lens adds five minutes. 
<em>None of these are hour-long processes.</em> They&#8217;re brief, focused checks that prevent expensive mistakes.</p><p><strong>Make it automatic, not optional.</strong> The moment verification becomes discretionary, it becomes the first thing cut under time pressure. Build it into your workflow the same way you build in spell-check or conflict checks. It&#8217;s not a separate decision. It&#8217;s part of the process of producing a deliverable.</p><p>The PAICE Accountability dimension, which carries the highest weight of any dimension at 30%, directly measures whether professionals exhibit these verification behaviors. Not whether they talk about verification. Not whether they believe verification is important. Whether they actually do it when working with AI output. The distinction matters because nearly everyone agrees that verification is important, and far fewer people actually practice it consistently.</p><p>The professionals who score highest on Accountability are not the ones with the most AI knowledge. They&#8217;re the ones who have built verification into their workflow so deeply that it happens without a conscious decision to verify. It&#8217;s just how they work.</p><p>That&#8217;s the goal. Not perfect verification of every output. Consistent, structured verification as a professional habit.</p><div><hr></div><p><em>Want to see how your verification habits measure up? 
<a href="https://paice.work/">Take the PAICE assessment</a> to get detailed behavioral insights, including how you respond when AI output needs to be questioned.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Explore our Baseline offerings</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Understanding Verification and Accountability:</strong></p><ul><li><p><a href="https://paice.work/blog/common-ai-collaboration-mistakes">Common AI Collaboration Mistakes</a> - The pitfalls these workflows are designed to prevent</p></li><li><p><a href="https://paice.work/blog/what-paice-tests-for">What PAICE Is Actually Testing For</a> - How verification behavior is observed and measured</p></li><li><p><a href="https://paice.work/blog/improving-your-paice-score">Improving Your PAICE Score</a> - Strategies for developing stronger verification habits</p></li></ul><p>&#128214; <strong>Industry-Specific Context:</strong></p><ul><li><p><a href="https://paice.work/blog/ai-collaboration-legal-professionals">AI Collaboration for Legal Professionals</a> - Verification in legal practice</p></li><li><p><a href="https://paice.work/blog/ai-collaboration-healthcare">AI Collaboration for Healthcare Professionals</a> - Clinical verification standards</p></li><li><p><a href="https://paice.work/blog/recovering-from-ai-collaboration-failures">Recovering from AI Collaboration Failures</a> - What happens when verification fails</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Seven Signs You Don't Need PAICE]]></title><description><![CDATA[A Definitive Guide to When AI Collaboration Assessment Becomes Unnecessary]]></description><link>https://paice.substack.com/p/seven-signs-you-dont-need-paice</link><guid isPermaLink="false">https://paice.substack.com/p/seven-signs-you-dont-need-paice</guid><dc:creator><![CDATA[Sam 
Rogers]]></dc:creator><pubDate>Wed, 01 Apr 2026 14:01:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bd778949-d6d1-440b-b87c-4476da3831bd_2848x1504.png" length="0" type="image/png"/><content:encoded><![CDATA[<blockquote><p><em>Happy April Fools&#8217; Day! This post is (mostly) satire. The situations described below are, regrettably, still fictional. <a href="https://paice.work/">The real assessment</a> remains very much necessary.</em></p></blockquote><p>We hear it all the time. &#8220;When will we not need PAICE anymore?&#8221;</p><p>It&#8217;s a fair question. We have never shied away from defining our own obsolescence criteria. If anything, we welcome it. The day that measuring People+AI collaboration becomes unnecessary will be a remarkable day for the profession, for the industry, and for the concept of epistemic certainty itself.</p><p>So we consulted with professionals across regulated industries and compiled the definitive list of conditions under which PAICE (People + AI Collaboration Effectiveness) assessment becomes completely unnecessary. We are pleased to report that the bar is refreshingly clear.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/seven-signs-you-dont-need-paice?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/seven-signs-you-dont-need-paice?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Sign 1: AI Has Stopped Making Mistakes</strong></h2><p><em>&#8220;We ran our models through every edge case in existence and they achieved 100% accuracy across all domains, jurisdictions, and contexts simultaneously. Hallucinations have been eliminated entirely. 
Our AI now refuses to answer rather than risk being wrong, which has reduced its usefulness to zero, but at least it&#8217;s accurate.&#8221;</em></p><p><strong>Chief Technology Officer, Enterprise Software</strong></p><p>This is the obvious one. Once AI systems achieve perfect accuracy across every possible input, domain, regulatory framework, and cultural context, there is simply nothing left to verify. The entire premise of verification becomes quaint, like wearing a seatbelt in a parked car.</p><p>We note that this milestone also requires perfection across all future contexts that do not yet exist, since new regulations, case law, and professional standards emerge continuously. But our sources assure us that the models have already anticipated those as well.</p><p>We were initially skeptical, but the CTO pointed out that questioning AI perfection is itself a sign of inadequate trust in technology. We found this argument circular but compelling.</p><h2><strong>Sign 2: Every Professional Verifies Every AI Output Every Time</strong></h2><p><em>&#8220;We surveyed our entire organization and every single person confirmed they always verify AI output before acting on it. We then verified their self-reports and found them to be 100% accurate, which is statistically unprecedented but we&#8217;re choosing not to question it.&#8221;</em></p><p><strong>Chief Compliance Officer, Financial Services</strong></p><p>If every professional already verifies every piece of AI output before relying on it, then measuring whether they do so is redundant. The assessment exists to identify gaps in verification behavior. No gaps, no assessment.</p><p>The compliance officer did acknowledge that self-reported behavior and actual behavior have historically diverged in every study ever conducted on the subject. 
However, she noted that their organization is different because they specifically asked employees to be honest, and the employees confirmed that they were being honest, and she verified their confirmation of honesty, and so on. At no point in this recursive verification loop did anyone&#8217;s confidence waver.</p><p>We asked whether the organization had considered measuring verification behavior directly rather than relying on self-reports. The compliance officer explained that this would imply distrust of the workforce, which would be a cultural misalignment with their values statement, which prominently features the word &#8220;trust&#8221; in a large font.</p><h2><strong>Sign 3: Regulators Have Stopped Asking Questions</strong></h2><p><em>&#8220;Our regulatory bodies have collectively decided that AI governance is a solved problem and have redirected their attention to more pressing matters, such as whether hot dogs are sandwiches. We expect formal guidance on the sandwich question by Q3.&#8221;</em></p><p><strong>General Counsel, Healthcare Organization</strong></p><p>This is a significant development. For years, regulators across financial services, healthcare, legal, and insurance sectors have been escalating their scrutiny of how professionals use AI. If this scrutiny has ceased, the compliance rationale for assessment evaporates entirely.</p><p>The General Counsel shared documentation confirming that every relevant regulatory body has issued a joint statement acknowledging that AI governance requires no further attention. The statement, which we were not permitted to see but were assured exists, reportedly concludes with the phrase &#8220;we&#8217;re good here&#8221; and is signed by everyone.</p><p>When pressed on the sandwich question, the General Counsel confirmed that it falls under the purview of the Administrative Procedures Act and will require a 90-day public comment period. 
Several major food industry lobbying groups have already filed amicus briefs. The legal community anticipates the matter will ultimately reach the Supreme Court, which is expected to rule 5-4 along ideological lines, with the concurrence hinging on the structural integrity of the bun.</p><h2><strong>Sign 4: Malpractice Insurance Now Covers AI Mistakes for Free</strong></h2><p><em>&#8220;Our insurer reviewed our AI collaboration practices and was so impressed that they eliminated the AI liability rider entirely. They also sent us a fruit basket and a handwritten note saying &#8216;We trust you.&#8217; We&#8217;ve framed the note.&#8221;</em></p><p><strong>Risk Manager, Law Firm</strong></p><p>Insurance carriers have historically been among the most aggressive drivers of AI governance requirements, because they are the ones paying when things go wrong. If insurers have reached a point where they no longer consider AI-related professional liability to be a material risk, the financial incentive for assessment disappears.</p><p>The risk manager was kind enough to share a photograph of the fruit basket. It contained mangoes, which he interpreted as a metaphor for the sweet fruits of responsible AI governance. We interpreted it as mangoes.</p><p>He also noted that the firm&#8217;s malpractice premium had decreased by 400%, which we believe means the insurer is now paying the firm to practice law. We did not verify this claim, in the spirit of the trust-based professional environment the article describes.</p><h2><strong>Sign 5: Clients Have Stopped Caring About Accuracy</strong></h2><p><em>&#8220;We surveyed our client base and discovered that accuracy is no longer a priority. They now evaluate our work solely on speed and visual presentation. As long as it arrives quickly and the formatting is nice, the actual content is optional. 
This has simplified our quality assurance process considerably.&#8221;</em></p><p><strong>Managing Partner, Consulting Firm</strong></p><p>If the market no longer values accuracy, then the skills PAICE measures become economically irrelevant. Why assess whether professionals verify AI output if nobody cares whether the output is correct?</p><p>The managing partner shared survey results indicating that 100% of clients rated &#8220;nice fonts&#8221; as more important than &#8220;factual accuracy&#8221; when evaluating professional deliverables. The survey methodology was described as &#8220;robust&#8221; and the sample size as &#8220;sufficient.&#8221; We were not provided with numbers for either.</p><p>This finding aligns with a broader trend the managing partner identified, in which the concept of &#8220;correctness&#8221; is being replaced by the more flexible concept of &#8220;vibes.&#8221; Several peer-reviewed journals are reportedly exploring this framework, though they have not yet published because they are still working on the formatting.</p><h2><strong>Sign 6: Every Employee Is a Professional AI Expert Now</strong></h2><p><em>&#8220;Following our mandatory 45-minute AI webinar, every employee in the organization now possesses comprehensive expertise in AI capabilities, limitations, failure modes, and verification methodologies across all professional domains. The webinar included a quiz. Everyone passed. Some employees report that the webinar also cured their seasonal allergies.&#8221;</em></p><p><strong>VP of Learning and Development, Insurance Company</strong></p><p>If a single training intervention can permanently and comprehensively equip every professional with the skills to collaborate effectively with AI, then ongoing assessment is unnecessary. 
You do not need to measure what has already been perfected.</p><p>The VP shared the webinar slides, which covered the entire field of artificial intelligence in 22 slides, including a title slide, an agenda slide, a &#8220;Questions?&#8221; slide, and a slide that just said &#8220;AI&#8221; in very large letters with a stock photo of a robot shaking hands with a businessman. The remaining 18 slides addressed all known and unknown failure modes of large language models, probabilistic reasoning under uncertainty, and the regulatory implications of automated decision-making across 14 jurisdictions.</p><p>The quiz consisted of three multiple-choice questions. The passing score was one correct answer. An employee who selected &#8220;All of the above&#8221; for every question would have scored 100%.</p><p>When asked about the reported allergy cure, the VP noted that correlation does not imply causation but that several employees had stopped sneezing, which she considered strong anecdotal evidence. She is exploring whether the webinar can be repurposed for other medical conditions and expects to submit a proposal to the FDA by end of quarter.</p><h2><strong>Sign 7: The Accountability Problem Has Been Solved by Renaming It</strong></h2><p><em>&#8220;We&#8217;ve addressed the accountability gap by rebranding it as an &#8216;opportunity space.&#8217; Our consultants assure us that reframing the problem eliminates the need to solve it. We&#8217;ve also renamed &#8216;risk&#8217; to &#8216;upside variance&#8217; and &#8216;compliance failure&#8217; to &#8216;creative interpretation.&#8217; Morale has never been higher.&#8221;</em></p><p><strong>CEO, Professional Services Firm</strong></p><p>This is perhaps the most elegant solution on the list. The accountability gap that PAICE measures is, at its core, a language problem. 
If the word &#8220;accountability&#8221; creates anxiety, and the word &#8220;gap&#8221; implies deficiency, then replacing both words eliminates the anxiety and the deficiency simultaneously.</p><p>The CEO shared the firm&#8217;s updated glossary, which we found comprehensive. &#8220;Error&#8221; has been replaced with &#8220;alternative output.&#8221; &#8220;Hallucination&#8221; is now &#8220;creative extrapolation.&#8221; &#8220;Unverified&#8221; has become &#8220;trust-forward.&#8221; The term &#8220;wrong&#8221; has been retired entirely in favor of &#8220;differently accurate.&#8221;</p><p>The consultants who developed the glossary reportedly charged $450,000 for the engagement, which the CEO described as &#8220;aggressively reasonable.&#8221; The engagement also produced a 200-page change management playbook, a set of branded coffee mugs, and a team-building exercise in which employees were asked to close their eyes and imagine a world without professional liability. Several employees described this exercise as &#8220;transformative.&#8221; One described it as &#8220;a nap.&#8221;</p><h2><strong>So When Can You Stop?</strong></h2><p>Until all seven of these conditions are met simultaneously, we will be here. The assessment is free. The verification skills it measures are not optional. And unlike the scenarios above, the risks of getting People+AI collaboration wrong are very, very real.</p><p>Professionals in regulated industries carry personal liability for the work they deliver. AI does not diminish that liability. If anything, it concentrates it at the moment of verification, the moment when a person decides whether to accept or question what AI has produced. That moment is what PAICE measures. And until all seven of the conditions described above are achieved, that moment will continue to matter.</p><p>Happy April Fools&#8217; Day. 
Now go <a href="https://paice.work/">take the assessment</a> before AI achieves perfection and puts us all out of a job.</p><div><hr></div><p><em>Ready to find out how you actually collaborate with AI? Not how you think you do, not how you say you do, but what you actually do when it matters? <a href="https://paice.work/">Take the free assessment</a> and find out. It takes about 15 minutes. The fruit basket is not included.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Explore our Baseline offerings</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://paice.work/contact">Contact us</a> (we promise not to rename your problems)</p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading (Serious This Time)</strong></h2><ul><li><p><a href="https://paice.work/blog/what-paice-tests-for">What PAICE Is Actually Testing For</a></p></li><li><p><a href="https://paice.work/blog/common-ai-collaboration-mistakes">Common AI Collaboration Mistakes</a></p></li><li><p><a href="https://paice.work/blog/five-dimensions-ai-collaboration">Understanding the Five PAICE Dimensions</a></p></li><li><p><a href="https://paice.work/blog/what-your-paice-score-means">What Your PAICE Score Means</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Closing the Collaboration Gap]]></title><description><![CDATA[New Whitepaper Presented at ISPI 2026]]></description><link>https://paice.substack.com/p/closing-the-collaboration-gap</link><guid isPermaLink="false">https://paice.substack.com/p/closing-the-collaboration-gap</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Tue, 31 Mar 2026 15:01:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f9a94b68-df3f-4bcc-8a15-304f2bd00d42_1278x1092.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>March 31, 2026.</strong> Today at the 2026 ISPI Performance Improvement Conference in Nashville, Tennessee, we&#8217;re releasing our third whitepaper: <em>Closing the Collaboration Gap: A Behavioral Skill Framework for Human-AI Performance Improvement</em>. <a href="https://paice.substack.com/whitepapers">Read and download here</a>.</p><p>This paper represents a shift from our previous whitepapers. 
Where the <a href="https://paice.substack.com/blog/paice-whitepaper-release">original vision paper</a> introduced PAICE (People + AI Collaboration Effectiveness) and the <a href="https://paice.substack.com/blog/privacy-security-whitepaper-release">privacy and security paper</a> detailed its cryptographic integrity architecture, this paper speaks directly to the performance improvement community. It makes a simple argument: measuring People+AI collaboration is not a new discipline. It is the next application of frameworks the HPT field has refined for over sixty years.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share PAICE.work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share PAICE.work</span></a></p><h2>What&#8217;s Inside</h2><p>The paper walks through a structured case for treating AI collaboration as a measurable performance domain, not a technology adoption problem.</p><p><strong>The New Performance Domain.</strong> When AI shifted from passive tool to active contributor, professional work became a dual-performer system. The human&#8217;s primary value moved from creation to verification and judgment. Most organizations have no way to measure what happens in that transition.</p><p><strong>The Knowledge-Behavior Gap.</strong> Knowing the rules does not predict following them under pressure. Professionals who articulate strong AI verification principles routinely accept confident, well-formatted AI outputs without checking them. 
Training completion rates and usage dashboards cannot detect this gap.</p><p><strong>The Self-Assessment Problem.</strong> AI systems provide positive reinforcement regardless of user performance. This inflates self-perception in ways that traditional self-reported surveys cannot correct. Behavioral observation is the only reliable signal.</p><p><strong>A Behavioral Measurement Framework.</strong> The paper details how PAICE measures five dimensions of collaboration effectiveness through strategic failure injection: placing realistic errors into AI responses and observing whether professionals catch them without prompting.</p><p><strong>Implementation Approach.</strong> A four-week structured sequence (Baseline Assessment, Diagnostic Analysis, Executive Readout, Ongoing Reassessment) designed for enterprise deployment in regulated industries.</p><h2>Why This Matters Now</h2><p>Organizations are investing billions in AI tools but cannot answer the most basic capability question: are our people collaborating with AI effectively, or just using it frequently?</p><p>The gap is not theoretical. Every regulated industry now faces the same scenario: a professional uses AI to draft a document, prepare an analysis, or generate a recommendation. The output looks polished. It reads with confidence. And in a meaningful percentage of cases, it contains errors that only domain expertise can catch. The question is whether the professional caught them.</p><p>Training completion is not capability. A professional can pass every AI literacy module and still accept a hallucinated citation under deadline pressure. The training checked whether they knew the right answer. It did not check whether they applied that knowledge when a confident AI output made it easy not to.</p><p>Usage metrics are not quality. High adoption rates tell you people are using AI. They tell you nothing about whether the outputs are being verified, challenged, or blindly accepted. 
An organization with 95% AI adoption and zero verification culture has a bigger risk exposure than one with 30% adoption and strong review habits.</p><p>The performance improvement field has known this distinction for decades. Gilbert&#8217;s first behavior engineering theorem established that accomplished performance, not activity, is the proper unit of measurement. Mager and Pipe built flowcharts for distinguishing knowledge deficits from execution deficits. The same analytical frameworks apply directly to AI collaboration, but until now, nobody has mapped them to this domain.</p><p>That is what this paper does.</p><h2>For the HPT Community</h2><p>This paper was written for ISPI because the intellectual lineage is real.</p><p>The frameworks cited in this whitepaper are not decorative references. They shaped the thinking that built the system. Gilbert&#8217;s focus on worthy performance over behavior. Mager and Pipe&#8217;s insistence on distinguishing &#8220;can&#8217;t do&#8221; from &#8220;won&#8217;t do.&#8221; Rummler and Brache&#8217;s attention to process-level handoffs between performers. Thalheimer&#8217;s demand that evaluation measure decisions, not perceptions. Phillips&#8217;s rigor in isolating effects. Brinkerhoff&#8217;s focus on studying what people actually do in practice.</p><p>PAICE applies these principles to a performance domain that did not exist when these frameworks were developed. But the analytical structure fits precisely because the underlying question is the same: is this person performing effectively, and how do we know?</p><p>For HPT practitioners, People+AI collaboration represents an emerging practice area with immediate client demand. Organizations already know they need to measure this. They already know training completion dashboards are insufficient. 
What they lack is a behavioral measurement methodology grounded in the same evidence-based, systematic approach that ISPI has advocated since its founding.</p><p>This paper demonstrates that AI collaboration measurement is not a departure from human performance technology. It is its natural next application. The performance improvement community does not need to learn a new discipline to address this domain. It needs to apply the one it already has.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/closing-the-collaboration-gap/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/closing-the-collaboration-gap/comments"><span>Leave a comment</span></a></p><h3>Privacy-Preserving Measurement</h3><p>One section of the paper addresses a challenge specific to behavioral assessment in enterprise settings: how to produce actionable cohort-level insights without exposing individual scores to employers.</p><p>PAICE resolves this through privacy by architecture, not by policy. Individual assessment data is not retained in linkable form after delivery. Enterprise buyers receive cohort distributions, percentile ranges, and trend data with no individual mapping. The system is designed to make reverse-engineering individual scores from cohort data structurally impossible.</p><p>For the performance improvement community, this matters because it removes the adoption barrier that has historically limited behavioral assessment in workplace settings. 
Professionals are more willing to engage authentically with an assessment when their individual results cannot be weaponized by their employer.</p><h2>Key Frameworks Referenced</h2><table><thead><tr><th>Framework</th><th>Core Principle</th><th>PAICE Application</th></tr></thead><tbody><tr><td>Gilbert&#8217;s Behavior Engineering Model</td><td>Measure worthy performance, not activity</td><td>Score collaboration effectiveness, not usage volume</td></tr><tr><td>Mager and Pipe&#8217;s Performance Analysis</td><td>Distinguish skill deficits from execution deficits</td><td>Strategic failure injection separates knowledge from behavior</td></tr><tr><td>Rummler-Brache Performance Framework</td><td>Analyze handoffs between performers</td><td>Measure the People+AI verification boundary</td></tr><tr><td>ISPI&#8217;s Systematic HPT Process</td><td>Use evidence-based, systematic approaches</td><td>Structured four-week implementation sequence</td></tr><tr><td>Thalheimer&#8217;s LTEM</td><td>Evaluate decisions, not perceptions</td><td>Behavioral observation over self-reported surveys</td></tr><tr><td>Phillips&#8217;s ROI Methodology</td><td>Isolate effects with measurement rigor</td><td>Cohort-level analytics with controlled baselines</td></tr><tr><td>Brinkerhoff&#8217;s Success Case Method</td><td>Study what people actually do</td><td>Real-time behavioral assessment during live tasks</td></tr></tbody></table><h2>The Video Previews</h2><p>Recently, we released <a href="https://paice.substack.com/blog/deadliest-white-space-modern-office">The Deadliest White Space in the Modern Office</a> and <a href="https://paice.substack.com/blog/behavioral-measurement-ai-collaboration">The Behavioral Measurement of AI Collaboration</a>, two NotebookLM-generated videos exploring the core argument of this whitepaper. The videos cover the shift from single-performer to dual-performer work systems and why the gap between AI output and human acceptance is now the highest-stakes measurement challenge in professional work.</p><p>If you watched either video and wanted the full research behind it, this whitepaper is what it was built from.
The paper provides the theoretical grounding, framework citations, and implementation methodology that the video could only introduce.</p><h2>Download and Read</h2><p>The complete whitepaper is available now at <a href="https://paice.substack.com/whitepapers">paice.work/whitepapers</a>.</p><p>Whether you&#8217;re:</p><ul><li><p>A performance improvement professional exploring AI collaboration as a new practice area</p></li><li><p>An L&amp;D leader looking for measurement approaches that go beyond training completion</p></li><li><p>A consultant advising regulated industry clients on AI governance and risk</p></li><li><p>A researcher studying behavioral skill frameworks for emerging work patterns</p></li></ul><p>...the paper provides the theoretical grounding and practical implementation detail to get started.</p><p>This is our third whitepaper, and each has addressed a different audience. The <a href="https://paice.substack.com/blog/paice-whitepaper-release">DevLearn paper</a> introduced the vision to the learning technology community. The <a href="https://paice.substack.com/blog/privacy-security-whitepaper-release">NEARCON paper</a> provided technical depth for security and privacy professionals. This ISPI paper speaks to the people who have spent their careers making human performance measurable and improvable, and it asks them to bring that expertise to the most consequential new performance domain in a generation.</p><p>The paper is released under a <a href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International (CC BY 4.0)</a> license. Share it, cite it, build on it.</p><div><hr></div><p><em>Ready to assess your AI collaboration capabilities? 
<a href="https://paice.work/">Take the PAICE assessment</a> to get personalized insights and recommendations.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.substack.com/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.substack.com/baseline">Establish your AI collaboration baseline</a> (for organizations)</p></li><li><p><a href="https://paice.substack.com/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://paice.substack.com/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Recommended Reading</h2><p>&#128214; <strong>Previous Whitepapers:</strong></p><ul><li><p><a href="https://paice.substack.com/blog/paice-whitepaper-release">PAICE.work Vision Whitepaper</a> - The original framework paper from DevLearn 2025</p></li><li><p><a href="https://paice.substack.com/blog/privacy-security-whitepaper-release">Privacy &amp; Security Whitepaper</a> - Cryptographic integrity and TEE-protected inference</p></li></ul><p>&#128214; <strong>Related Content:</strong></p><ul><li><p><a href="https://paice.substack.com/blog/deadliest-white-space-modern-office">The Deadliest White Space in the
Modern Office</a> - Video preview generated from this whitepaper</p></li><li><p><a href="https://paice.substack.com/blog/understanding-the-five-paice-dimensions">Understanding the Five PAICE Dimensions</a> - Deep dive into the PAICE measurement framework</p></li><li><p><a href="https://paice.substack.com/blog/what-paice-tests-for">What PAICE Tests For</a> - How behavioral assessment differs from knowledge testing</p></li><li><p><a href="https://paice.substack.com/blog/introducing-ai-capability-baseline">Introducing the AI Capability Baseline</a> - Why organizations need measurable starting points</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Weekly Update - March 30, 2026]]></title><description><![CDATA[Pro launch, SSR at 100%, and "whitepaper's eve"]]></description><link>https://paice.substack.com/p/weekly-update-march-30-2026</link><guid isPermaLink="false">https://paice.substack.com/p/weekly-update-march-30-2026</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Mon, 30 Mar 2026 15:01:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7270d37f-dbbe-49a6-885b-cacd32f9bf94_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Finishing this quarter strong with possibly the biggest week so far. PAICE Pro launched on Tuesday, the AI Regulation Reference shipped in four days, the SSR evaluation suite reached 100% pass rate, and the ISPI whitepaper publishes tomorrow. 
Five daily posts shipped like usual, 24 commits merged, and the multilingual middleware scalability work landed.</p><h2>Content Published Last Week</h2><p><strong>Monday</strong> (Mar 23): <a href="https://paice.substack.com/blog/update-2026-03-23">&#8220;Weekly Update - March 23, 2026&#8221;</a></p><p><strong>Tuesday</strong> (Mar 24): <a href="https://paice.substack.com/blog/introducing-paice-pro">&#8220;Introducing PAICE Pro&#8221;</a><br>PAICE Pro is now available: 20-subscore breakdown, assessment history and trends, and actionable improvement resources for $29.99 one-time, no account required.</p><p><strong>Wednesday</strong> (Mar 25): <a href="https://paice.substack.com/blog/paice-vs-ai-literacy-tests">&#8220;PAICE vs. AI Literacy Tests&#8221;</a><br>Why knowledge-based AI literacy tests miss the most important signal, and how behavioral observation produces a fundamentally different kind of evidence.</p><p><strong>Thursday</strong> (Mar 26): <a href="https://paice.substack.com/blog/what-happens-during-paice-assessment">&#8220;What Happens During a PAICE Assessment&#8221;</a><br>A friendly walkthrough of the assessment experience from start to finish, designed to reduce friction for first-time takers.</p><p><strong>Friday</strong> (Mar 27): Video - <a href="https://paice.substack.com/blog/behavioral-measurement-ai-collaboration">&#8220;The Behavioral Measurement of AI Collaboration&#8221;</a><br>A 7-minute NotebookLM video unpacking why compliance metrics create a false sense of security and what behavioral telemetry actually reveals about People+AI collaboration skill.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/weekly-update-march-30-2026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" 
href="https://paice.substack.com/p/weekly-update-march-30-2026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>PAICE Pro: First Week Live</h2><p>PAICE Pro launched March 24 as the self-development layer on top of the free assessment. The free PAICE Score remains free and always will. Pro adds the analytical depth to act on it: the full 20-subscore breakdown (four subscores per dimension), assessment history with trend indicators, and curated improvement resources. At $29.99 one-time for 90-day access, with no account required and no payment data touching PAICE servers, it&#8217;s the first step toward sustainable revenue while keeping the core assessment free forever.</p><p>We&#8217;re monitoring adoption and feedback closely this week. The roadmap for Pro includes a &#8220;gym&#8221; environment for strengthening your AI skills, comparative benchmarking, an AI Collaboration Style Profile, personalized coaching playbooks, and shareable credentials. What gets built next will be shaped by what Pro users tell us matters most. Valuable already, more value coming soon.</p><h2>Technical Improvements</h2><h3>SSR Evaluation Suite: 100% Pass Rate</h3><p>Built a comprehensive SSR and SEO evaluation suite (<code>eval-ssr-seo.mjs</code>) that tests live production URLs for correct meta tags, structured data, canonical links, and pre-rendered content. After iterating through rewrite rule generation, Render hosting configuration, and sitemap corrections, the suite now passes 100% of checks. A local production emulator (<code>serve-like-render.mjs</code>) was also added so SSR behavior can be validated before deploying. Note: the underlying SSR serving issue in production (Render&#8217;s static hosting serving the SPA shell for all non-root routes) is still pending a Cloudflare Worker fix from the managed hosting team. 
The eval suite and emulator are the tooling foundation for verifying that fix when it lands.</p><h3>Multilingual Middleware Scalability</h3><p>Merged our first multilingual assessment branch. This resolves conflicts between the multilingual and scenario-based assessment features and adds a scalability proposal covering the architecture for supporting additional languages beyond English. Currently available in the language of most of our development team: <em>Urdu</em>. Taking on a right-to-left, non-European language first forced us to tackle the hardest issues upfront, making the rest of our localization efforts simpler. Spanish is up next and will be available soon, followed by French and Portuguese to complete the official languages of the Western Hemisphere.</p><h3>Siteline: paice.work Scores B (89/100) for Agent Discoverability</h3><p><a href="https://siteline.snapsynapse.com/">Siteline</a> is the free tool we spun out of our work last week. When we first ran PAICE.work through it, we identified some critical agentic accessibility gaps and worked to reach a B grade (89/100) &#8212; &#8220;Mostly Usable&#8221; for AI agents and LLM crawlers. The four pillars: Access (100), Readability (100), Action Handoff (100), and Navigability (86). The one remaining gap is public next-step routing: navigation links to About, FAQ, and Contact need to be more consistently surfaced so agents can confidently route users to the right next step. That&#8217;s a targeted fix, not a structural problem. <a href="https://siteline.snapsynapse.com/results/paice-work-20260329">See the full results</a>.</p><p>Why this matters: as AI agents increasingly mediate how professionals discover and evaluate tools, a site that scores well on Siteline is more likely to be recommended, cited, and acted on by those agents.
PAICE&#8217;s audience, regulated industry professionals evaluating AI governance tools, is exactly the kind of audience that will increasingly arrive via agentic referral rather than direct search.</p><h3>Documentation and Agent Infrastructure</h3><p>Created <code>.claude/CLAUDE.md</code> as a single agent entry point for the project, consolidating orientation context for AI agents working on the codebase. Documentation was aligned with the current SSR and hosting architecture, stale handoff docs were archived, and the skills directory was updated to symlink to canonical standalone repositories rather than maintaining copies.</p><h3>Compliance Cohort Proposals</h3><p>Added structured proposals for compliance-focused cohort programs targeting California SB 53 and New York RAISE Act requirements. These represent the first formal documentation of PAICE&#8217;s positioning for regulatory compliance use cases, where organizations need defensible evidence of AI collaboration capability rather than training attendance records.</p><h2>New Open Source Project: AI Regulation Reference</h2><p>Built and shipped in four days this week: <strong><a href="https://aireg.snapsynapse.com/">AI Regulation Reference</a></strong> is a structured, obligation-first reference tracking 42 AI regulations and 10 compliance obligations across global jurisdictions. It&#8217;s designed for organizations that need to understand what specific laws actually require, not just that they exist.</p><p>This project emerged directly from the compliance cohort work. To position PAICE for California SB 53, New York RAISE Act, and similar regulatory use cases, we needed a clear map of what each law obligates organizations to demonstrate. AIReg is that map, and it&#8217;s now public complete with MCP endpoints for agentic use. 
It also gives PAICE a concrete answer to the question every compliance buyer asks first: &#8220;Which regulations apply to us?&#8221; AIReg answers that question; PAICE provides the behavioral evidence to act on the answer.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/weekly-update-march-30-2026/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/weekly-update-march-30-2026/comments"><span>Leave a comment</span></a></p><h2>Upcoming: ISPI Whitepaper &#8212; Tomorrow</h2><p><em>&#8220;Closing the Collaboration Gap: A Behavioral Skill Framework for Human-AI Performance Improvement&#8221;</em> has passed our internal peer review process and publishes tomorrow, March 31, at the <a href="https://ispi.org/">International Society for Performance Improvement (ISPI)</a> 2026 conference. This is our third whitepaper and the behavioral science foundation behind PAICE&#8217;s five-dimension model. It examines how organizations can move beyond adoption metrics to measure and improve the human skills that determine whether AI collaboration succeeds or fails. The announcement post goes live tomorrow; in the meantime, our previous two Friday videos offer NotebookLM summaries.</p><h2>Platform Stability</h2><p>The platform maintained 100% uptime with no incidents.
All systems operating normally: standard and Confidential Mode assessments, results generation with on-chain attestation, cohort management, email notifications, and analytics processing.</p><h2>The Week in Numbers</h2><ul><li><p>5 blog posts published (1 video + 1 launch announcement + 2 educational + 1 weekly update)</p></li><li><p>24 commits merged to main</p></li><li><p>165 files changed, 17,379 insertions, 3,000 deletions</p></li><li><p>SSR eval suite at 100% pass rate</p></li><li><p>paice.work scores B (89/100) on Siteline for agent discoverability</p></li><li><p>Multilingual middleware scalability (Phases 0-1) merged</p></li><li><p>PAICE Pro v1 live</p></li><li><p>AIReg launched: 42 regulations, 10 compliance obligations tracked, built in 4 days</p></li><li><p>Compliance cohort proposals added (CA SB 53, NY RAISE Act)</p></li><li><p>100% uptime, zero incidents</p></li></ul><h2>Why This Week Matters</h2><p>PAICE Pro going live transitions the Individual experience from &#8220;free research tool&#8221; to &#8220;sustainable business with a free tier.&#8221; Though we had launched our Cohort product for certain Enterprise use cases in January, this new positioning shift matters for every organizational conversation: it signals that PAICE is building toward long-term viability, not just running a research experiment. AIReg, built in four days, is the other side of that same coin: it gives compliance buyers a concrete starting point for understanding which regulations apply to them, and positions PAICE as the natural next step for providing behavioral evidence of compliance. The SSR &amp; Siteline evaluation work, while less visible, is equally important. A platform that organizations evaluate as a partner needs to be discoverable by search engines and AI crawlers. Getting to 100% on the eval suite means the technical foundation is sound. 
The Cloudflare Worker fix, when it lands, will activate that foundation in production.</p><h2>Thank You</h2><p>Thanks to whitepaper peer reviewers Guy Wallace, Markus Bernhardt, and Lee Rodrigues, whose valued feedback made this an even stronger offering. Thanks as well to everyone who has taken the assessment, shared their results, and helped us refine the model.</p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.substack.com/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://store.paice.work/">Get PAICE Pro</a> ($29.99, no account required)</p></li><li><p><a href="https://paice.substack.com/baseline">Get a Team Baseline</a> (for organizations)</p></li><li><p><a href="https://paice.substack.com/whitepapers">Read the whitepapers</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.substack.com/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Related Reading</h2><ul><li><p><a href="https://paice.substack.com/blog/update-2026-03-23">Weekly Update - March 23, 2026</a></p></li><li><p><a href="https://paice.substack.com/blog/introducing-paice-pro">Introducing PAICE Pro</a></p></li><li><p><a href="https://paice.substack.com/blog/paice-vs-ai-literacy-tests">PAICE vs. AI Literacy Tests</a></p></li><li><p><a href="https://paice.substack.com/blog/what-happens-during-paice-assessment">What Happens During a PAICE Assessment</a></p></li><li><p><a href="https://paice.substack.com/blog/behavioral-measurement-ai-collaboration">The Behavioral Measurement of AI Collaboration</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Behavioral Measurement of AI Collaboration]]></title><description><![CDATA[Why Proving Your Team Can Work With AI Is Now a Business Imperative]]></description><link>https://paice.substack.com/p/the-behavioral-measurement-of-ai</link><guid isPermaLink="false">https://paice.substack.com/p/the-behavioral-measurement-of-ai</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Fri, 27 Mar 2026 23:26:32 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192364183/e406893f77bb11ee94b540149e5460b5.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Most organizations believe they are managing AI risk because they track training completions and policy acknowledgements. But training records only confirm attendance. 
They say nothing about whether a professional can actually catch an error when AI confidently delivers the wrong answer.</p><p>This AI-generated video, created using Google&#8217;s NotebookLM from PAICE (People + AI Collaboration Effectiveness) blog and whitepaper content, unpacks why compliance metrics create a false sense of security and what it actually takes to measure People+AI collaboration skill.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/the-behavioral-measurement-of-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/the-behavioral-measurement-of-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p><a href="https://youtu.be/SgBnafvZqBU">Watch on YouTube &#8594;</a></p><h2><strong>AI Fails Politely, and That&#8217;s the Problem</strong></h2><p>Traditional software crashes when it fails. Error messages appear. Stack traces get logged. The failure is visible and unambiguous. AI fails differently. Large language models generate hallucinations wrapped in confident language and professional formatting, making wrong answers look identical to right ones.</p><p>Under deadline pressure, even well-trained professionals accept plausible-sounding but incorrect data. This creates what the video describes as a <strong>positive reinforcement loop</strong> unique to People+AI interaction: because the system provides polished, helpful-sounding outputs regardless of the user&#8217;s input quality, the professional receives no corrective feedback. 
Over time, this frictionless experience inflates their self-perception of competence.</p><p>Relying on adoption metrics while ignoring this psychological reinforcement creates an invisible organizational liability: the <strong>collaboration gap</strong>.</p><h2><strong>Why Training Alone Cannot Close the Gap</strong></h2><p>The video introduces Thomas Gilbert&#8217;s behavior engineering model, a diagnostic framework designed to separate <strong>environmental</strong> causes of poor performance from <strong>individual</strong> ones. Most organizations attempt to solve AI risk by jumping directly to the individual knowledge cell, deploying generic e-learning modules. This approach ignores the environmental context where professional work actually occurs.</p><p>The logic is straightforward:</p><ul><li><p><strong>If an organization lacks clear standards for verifying AI output</strong>, the professional has no target to hit.</p></li><li><p><strong>If workflows and incentives prioritize speed above all else</strong>, the organization actively suppresses the verification behaviors it claims to value.</p></li><li><p><strong>If the environment provides no instrumentation or motivation for careful use</strong>, training alone will not change behavior.</p></li></ul><p>This pattern echoes the 1980s desktop computing rollout, when organizations rushed to procure hardware while neglecting the human capacity required to operate it safely. The parallel to today&#8217;s AI adoption is striking.</p><h2><strong>What PAICE Actually Measures</strong></h2><p>Closing the collaboration gap requires dedicated behavioral telemetry. The PAICE framework evaluates five dimensions, weighted to reflect what actually ensures safe professional practice:</p><ul><li><p><strong>Performance</strong> (10%): Prompt engineering and tool use proficiency. 
Weighted lowest because prompt techniques are rapidly commoditized as models improve.</p></li><li><p><strong>Accountability</strong> (30%): The behavioral habit of independent verification and the refusal to defer judgment to the machine. Weighted highest because human cognitive architecture is wired to trust authoritative-sounding text.</p></li><li><p><strong>Integrity</strong> (25%): Ethical use, compliance awareness, and the professional foundation that makes verification meaningful.</p></li><li><p><strong>Collaboration</strong> (20%): The quality of the People+AI working relationship, including appropriate task delegation and iterative refinement.</p></li><li><p><strong>Evolution</strong> (15%): Adaptability and continuous improvement as the technology landscape shifts.</p></li></ul><p>The core insight: <strong>in a world of instant plausible generation, the cognitive effort to maintain calibrated skepticism is more valuable than any specific prompting technique.</strong></p><h2><strong>Strategic Failure Injection</strong></h2><p>The video explains why self-reported surveys are structurally invalid for measuring AI collaboration skill. A professional may discuss AI safety principles fluently while failing to apply them under the pressure of a live task. Will Thalheimer&#8217;s Learning Transfer Evaluation Model (LTEM) makes the same distinction: simple learner activity is not decision-making competence.</p><p>Isolating true capability requires a methodology the video calls <strong>strategic failure injection</strong>. During a realistic task using the professional&#8217;s own domain expertise, the system deliberately introduces a subtle, confident error into the AI&#8217;s response. 
The professional&#8217;s reaction reveals their actual skill:</p><ul><li><p><strong>In a cybersecurity context</strong>, a false positive injection tests whether the analyst relies too heavily on historical patterns.</p></li><li><p><strong>In a legal context</strong>, a fabricated citation tests the attorney&#8217;s commitment to primary source verification.</p></li></ul><p>This creates a clear <strong>hierarchy of evidence</strong>: behavioral observation of whether the user caught or missed the injection always overrides conversational claims. It does not matter what a professional knows if they fail to act on it. However, indiscriminate paranoia is also penalized, because constant unnecessary challenges eventually break the collaboration process entirely.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/the-behavioral-measurement-of-ai/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/the-behavioral-measurement-of-ai/comments"><span>Leave a comment</span></a></p><h2><strong>From Compliance Theater to Performance Engineering</strong></h2><p>The video&#8217;s final section introduces the Mager and Pipe performance analysis flowchart as a triage logic for AI governance. This structured sequence of questions prevents organizations from defaulting to &#8220;more training&#8221; as the answer to every AI failure.</p><p>With PAICE cohort-level behavioral data, the diagnostic becomes precise:</p><ul><li><p><strong>High conversational fluency but low error detection?</strong> That&#8217;s a skill gap requiring targeted development.</p></li><li><p><strong>Accountability deficits consistent across teams?</strong> That&#8217;s an environmental problem. 
The fix might be inserting mandatory verification checkpoints into workflows, not more e-learning.</p></li></ul><p>This distinction matters enormously for licensed professionals in legal, clinical, or financial roles who carry personal liability for AI-assisted outputs. Organizations need defensible evidence of capability, not just records of training attendance.</p><p>PAICE addresses this through <strong>privacy-preserving measurement</strong> that aggregates data at the cohort level without exposing individual scores. This aggregated behavioral data provides the proof required by auditors and insurance carriers, while keeping each professional&#8217;s results private.</p><p>Policy documents dictate intent. Observable behavioral measurement is the only proof of capability.</p><div><hr></div><p><em>Want to understand your own readiness profile? <a href="https://paice.work/">Take the PAICE assessment</a> to discover your strengths and opportunities.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Explore our Baseline offerings</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepapers">Read the whitepapers</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Understanding PAICE:</strong></p><ul><li><p><a href="https://paice.work/blog/what-paice-tests-for">What PAICE Is Actually Testing For</a> - The evidence hierarchy behind PAICE scoring</p></li><li><p><a href="https://paice.work/blog/the-measurement-gap">The Measurement Gap</a> - Why existing tools miss the most important capability signal</p></li></ul><p>&#128214; <strong>Video Series:</strong></p><ul><li><p><a href="https://paice.work/blog/deadliest-white-space-modern-office">The Deadliest White Space in the Modern Office</a> - The invisible gap between AI output and human judgment</p></li><li><p><a href="https://paice.work/blog/ai-adoption-illusion-collaboration-gap">The AI Adoption Illusion</a> - Why tool access does not equal collaboration capability</p></li></ul><p>&#128214; <strong>Organizational Readiness:</strong></p><ul><li><p><a href="https://paice.work/blog/managing-ai-risk-paice-framework">Managing AI Risk with the PAICE Framework</a> - How PAICE maps to enterprise risk management</p></li><li><p><a href="https://paice.work/blog/ai-governance-accountability-era">The Age of AI Accountability</a> - Why governance frameworks need behavioral evidence</p></li></ul>]]></content:encoded></item><item><title><![CDATA[What Happens During a PAICE Assessment]]></title><description><![CDATA[A 5-Minute Walkthrough of the Experience]]></description><link>https://paice.substack.com/p/what-happens-during-a-paice-assessment</link><guid 
isPermaLink="false">https://paice.substack.com/p/what-happens-during-a-paice-assessment</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Thu, 26 Mar 2026 15:00:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/16009754-e633-48a1-90f7-28c0f055cd2d_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you&#8217;ve been considering taking the PAICE assessment, you&#8217;ve probably had the same question nearly everyone asks first: &#8220;What&#8217;s it actually like?&#8221;</p><p>It&#8217;s a fair question. Most professional assessments come with study guides, time pressure, and that familiar test-day anxiety. PAICE is different. It was designed to feel more like a working session than an exam, and most people find it more engaging than they expected.</p><p>Here&#8217;s what the experience looks like from start to finish, so you can walk in knowing exactly what to expect.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/what-happens-during-a-paice-assessment?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/what-happens-during-a-paice-assessment?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Before You Start</strong></h2><p>Let&#8217;s get the logistics out of the way, because they&#8217;re refreshingly simple.</p><p><strong>No preparation is needed.</strong> This is not a knowledge test. There is nothing to study, no material to review, and no right answers to memorize. In fact, preparing would actually work against you. 
The assessment is designed to observe your natural working behavior, and that only works if you show up as yourself.</p><p><strong>It takes about 30 minutes.</strong> A 25-minute countdown timer runs during the session; when it expires, your assessment is offered, though you may be offered it sooner, and you can keep the conversation going beyond that point. The experience is paced by the conversation itself, so it feels natural rather than rushed.</p><p><strong>It works in your browser.</strong> There&#8217;s nothing to download, no software to install, and no special hardware required. If you can read this blog post, you can take the assessment.</p><p><strong>You can take it from anywhere.</strong> Your office, your kitchen table, a quiet corner of a coffee shop. Wherever you&#8217;d normally work with an AI tool is a perfectly fine place to take PAICE. Though it is designed to work best on a laptop or desktop, you can also use it on a tablet or phone. Anywhere you can comfortably claim 30 minutes of focus and an internet connection will do.</p><h2><strong>The Conversation Begins</strong></h2><p>When you start the assessment, you&#8217;ll find yourself in a conversation with an AI. Not a multiple-choice quiz. Not a timed exam. A conversation.</p><p>It feels a lot like working with an AI colleague on a real task. You&#8217;ll discuss topics relevant to professional work, think through problems together, and collaborate the way you might on any given workday. The AI will ask questions, offer suggestions, provide information, and work through ideas with you.</p><p>If you&#8217;ve ever used ChatGPT, Claude, or any other AI assistant for your work, the experience will feel immediately familiar. The interface is a chat window. You type naturally. The AI responds. You go back and forth.</p><p>What makes it different from a casual AI chat is that the conversation is structured to give you opportunities to demonstrate a range of collaboration behaviors.
But that structure is woven into the flow of the conversation itself. You won&#8217;t feel like you&#8217;re being marched through a checklist.</p><p>Most people tell us they actually enjoy the experience. It&#8217;s a conversation, not a confrontation.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/what-happens-during-a-paice-assessment/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/what-happens-during-a-paice-assessment/comments"><span>Leave a comment</span></a></p><h2><strong>What the Assessment Is Watching For</strong></h2><p>Here&#8217;s where PAICE diverges from what you might expect.</p><p>The assessment is not testing your knowledge about AI. It doesn&#8217;t care whether you can define &#8220;large language model&#8221; or explain how transformers work. You don&#8217;t need a technical background, and having one won&#8217;t give you an advantage.</p><p>Instead, the assessment observes how you actually behave when working with AI. Specifically, it&#8217;s watching for patterns like:</p><ul><li><p><strong>How you respond when AI makes mistakes.</strong> AI systems produce errors. Sometimes subtle ones. The assessment is interested in what you do when that happens. Do you push back? Do you let it slide? Do you catch it at all?</p></li><li><p><strong>How you verify information.</strong> When AI presents you with a claim or a recommendation, what&#8217;s your instinct? Do you take it at face value, or do you have a process for checking?</p></li><li><p><strong>How you handle uncertainty.</strong> Sometimes AI is confident about things it shouldn&#8217;t be. Sometimes it hedges when it doesn&#8217;t need to. 
How you navigate those moments reveals a lot about your collaboration instincts.</p></li><li><p><strong>How you maintain ownership of the work.</strong> AI is a tool, not a decision-maker. The assessment observes whether you stay in the driver&#8217;s seat or defer to AI output without critical evaluation.</p></li></ul><p>The key insight behind PAICE is that People+AI collaboration is a behavioral skill, not a knowledge domain. You can know everything about AI and still collaborate with it poorly. You can know very little about AI and collaborate with it brilliantly. What matters is what you actually do, not what you can explain.</p><h2><strong>The Five Dimensions</strong></h2><p>Your assessment results are organized around five dimensions that together paint a complete picture of your collaboration profile. Each dimension captures a different facet of how you work with AI.</p><h3><strong>Performance</strong></h3><p>This is about your ability to use AI effectively to accomplish real work. During the assessment, this shows up in how you structure your interactions, how you build on AI responses, and whether you&#8217;re getting useful results from the collaboration.</p><h3><strong>Accountability</strong></h3><p>Accountability is the most heavily weighted dimension, and for good reason. It captures whether you take ownership of AI-assisted work. During the assessment, this is reflected in moments when AI produces something questionable. Do you catch it? Do you take responsibility for the final output, or do you treat AI responses as pre-approved?</p><p>For professionals in regulated industries, this dimension is especially critical. A lawyer who submits an AI-drafted brief without verification is making an accountability choice. PAICE measures that instinct.</p><h3><strong>Integrity</strong></h3><p>Integrity reflects your commitment to accuracy and honest representation of AI&#8217;s role in your work. 
During the assessment, it shows up in how you handle situations where AI output might be misleading, incomplete, or presented with false confidence. Do you flag it? Do you seek clarity?</p><h3><strong>Collaboration</strong></h3><p>This dimension looks at the quality of your working relationship with AI. It&#8217;s not about being polite to a chatbot. It&#8217;s about whether you can effectively steer a People+AI interaction toward a useful outcome. Can you redirect when the conversation goes off track? Can you build on partial answers? Can you communicate what you actually need?</p><h3><strong>Evolution</strong></h3><p>Evolution captures your adaptability and growth orientation. AI capabilities change rapidly, and professionals who collaborate well with AI tend to be curious, experimental, and willing to adjust their approach. During the assessment, this shows up in how you respond to unexpected AI behavior and whether you adapt your strategy in real time.</p><h2><strong>Your Results</strong></h2><p>When the assessment is complete, you&#8217;ll receive a detailed results profile. Here&#8217;s what it includes.</p><p><strong>An overall score from 0 to 1000.</strong> This is your composite PAICE score, reflecting your demonstrated collaboration effectiveness across all five dimensions. The score is based on what you actually did during the assessment, not on self-reported preferences or theoretical knowledge.</p><p><strong>A dimensional breakdown.</strong> You&#8217;ll see how you scored in each of the five dimensions individually. This is where the real insight lives. Most people have a mix of strengths and areas for development, and the dimensional view helps you see exactly where you stand.</p><p><strong>A tier placement.</strong> Your score maps to one of five tiers, from Constrained through Exceptional. 
The tier gives you a quick reference for where you are in your development journey.</p><p><strong>Personalized insights.</strong> Beyond the numbers, you&#8217;ll receive observations about your specific collaboration patterns, what you did well, where you have room to grow, and what practical steps might help you develop further.</p><p><strong>Your results are private.</strong> This is worth emphasizing. Your individual score belongs to you. If your employer sponsors PAICE assessments, they receive aggregate data about their team&#8217;s collaboration readiness, but they cannot see your individual score. This isn&#8217;t just a policy. It&#8217;s built into the architecture of how PAICE handles data. Your employer structurally cannot access your personal results.</p><p>This privacy design exists to protect you. Assessment results should help you grow, not become a liability. You can share your results if you choose to, but that choice is always yours.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/what-happens-during-a-paice-assessment?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/what-happens-during-a-paice-assessment?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Common Questions</strong></h2><p><strong>Do I need to know about AI to do well?</strong> No. PAICE measures behavioral skill, not technical knowledge. Some of the highest-scoring participants have had limited AI experience. What they share is strong professional judgment, healthy skepticism, and good instincts about when to trust and when to verify.</p><p><strong>How long does it take?</strong> About 30 minutes. 
The experience is conversational and self-paced, so it doesn&#8217;t feel like a 30-minute exam. Most people describe it as feeling shorter than they expected.</p><p><strong>Can I retake it?</strong> Yes. Each assessment is an independent snapshot of your collaboration behavior at that point in time. If you develop your skills and want to measure your progress, you&#8217;re welcome to take it again. Each session starts fresh with no assumptions carried over from previous attempts.</p><p><strong>Will my employer see my score?</strong> No. Individual scores are never shared with employers. If your organization uses PAICE, they receive cohort-level data, like team averages and distributions, but there is no way for them to identify any individual&#8217;s score. This is a structural guarantee, not just a promise.</p><p><strong>What if I&#8217;m not good at it?</strong> That&#8217;s exactly the kind of insight PAICE is designed to provide. A lower score isn&#8217;t a failure. It&#8217;s a starting point. The dimensional breakdown shows you specifically where to focus your development, and most people improve significantly once they know what to work on. The point of the assessment is to help you get better, not to label you.</p><p><strong>Is it stressful?</strong> Most people find it genuinely engaging rather than stressful. It&#8217;s a conversation, not an interrogation. There are no trick questions, no time pressure, and no penalty for being yourself. In fact, being yourself is the whole point.</p><h2><strong>Ready to See Where You Stand?</strong></h2><p>The hardest part of any assessment is deciding to take it. Now that you know what to expect, the rest is easy. Show up, be yourself, and have a conversation.</p><p>You might be surprised by what you learn about your own collaboration instincts.</p><div><hr></div><p><em>Want to understand your own readiness profile? 
<a href="https://paice.work/">Take the PAICE assessment</a> to discover your strengths and opportunities.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Explore our Baseline offerings</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Understanding PAICE:</strong></p><ul><li><p><a href="https://paice.work/blog/five-dimensions-of-ai-readiness">The PAICE Framework</a> - Five dimensions of AI collaboration readiness</p></li><li><p><a href="https://paice.work/blog/faq-how-is-my-paice-score-calculated">How Is My PAICE Score Calculated?</a> - Scoring methodology explained</p></li><li><p><a href="https://paice.work/blog/how-to-prepare-for-assessment">How to Prepare for Your PAICE Assessment</a> - Spoiler: you don&#8217;t need to</p></li></ul><p>&#128214; <strong>Building Your 
Practice:</strong></p><ul><li><p><a href="https://paice.work/blog/improving-your-paice-score">Improving Your PAICE Score</a> - Practical strategies for development</p></li><li><p><a href="https://paice.work/blog/common-ai-collaboration-mistakes">Common AI Collaboration Mistakes</a> - Patterns to watch out for</p></li></ul>]]></content:encoded></item><item><title><![CDATA[PAICE vs. AI Literacy Tests]]></title><description><![CDATA[Why Behavior Beats Knowledge]]></description><link>https://paice.substack.com/p/paice-vs-ai-literacy-tests</link><guid isPermaLink="false">https://paice.substack.com/p/paice-vs-ai-literacy-tests</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Wed, 25 Mar 2026 15:00:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/dd51c2d6-dc6b-4906-8ba9-41e58fae464b_2816x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every organization adopting AI faces the same question: are our people ready?</p><p>The default answer has been to train them. Roll out an AI literacy program. Administer a quiz. Check the box. And for a while, that feels like progress. People can define &#8220;hallucination.&#8221; They can list the limitations of large language models. 
They can select the correct answer about when to verify AI output.</p><p>But here is the uncomfortable reality: none of that tells you whether they actually verify AI output when it matters.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/paice-vs-ai-literacy-tests?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/paice-vs-ai-literacy-tests?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>What AI Literacy Tests Actually Measure</strong></h2><p>AI literacy assessments follow a familiar pattern. They present questions about AI concepts and ask participants to demonstrate knowledge: definitions, best practices, risk categories, ethical frameworks.</p><p>A typical AI literacy test might ask:</p><ul><li><p>&#8220;Which of the following is an example of AI hallucination?&#8221;</p></li><li><p>&#8220;When should you verify AI-generated content before sharing it?&#8221; (Always / Sometimes / Rarely)</p></li><li><p>&#8220;What are the key risks of using AI for client-facing work?&#8221;</p></li></ul><p>These are reasonable questions. The problem is that getting the right answer is easy. Almost everyone who has been through an AI training program can identify a hallucination in a multiple-choice format. Almost everyone will select &#8220;Always&#8221; when asked about verification. Almost everyone can articulate the risks.</p><p>The questions test recall. They test whether someone absorbed the training materials. They do not test whether that knowledge translates into behavior during actual AI collaboration.</p><p>This is not a minor gap. 
It is the central problem.</p><h2><strong>What PAICE Measures Instead</strong></h2><p>PAICE (People + AI Collaboration Effectiveness) takes a fundamentally different approach. Instead of asking people what they know about AI collaboration, it observes what they <em>do</em> during AI collaboration.</p><p>During a PAICE assessment, the participant works with an AI on a real task relevant to their professional context. The interaction is a genuine conversation, not a scripted scenario with predetermined correct answers. But within that interaction, the system introduces deliberate challenges: subtle errors in the AI&#8217;s output, overconfident claims, plausible-sounding but incorrect information.</p><p>The assessment then measures what happens next.</p><p>Does the participant catch the error? Do they challenge the AI&#8217;s overconfidence? Do they verify the claim before accepting it? Or do they accept a fluent, confident-sounding response at face value and move on?</p><p>This is behavioral observation. Conversation is the medium through which it happens, but conversation is not what is being measured. What is being measured is a set of specific, observable behaviors that predict whether someone will use AI responsibly in professional practice.</p><p>PAICE evaluates five dimensions of People+AI collaboration:</p><ul><li><p><strong>Accountability</strong> (weighted highest): Does the person take ownership of AI output quality? Do they verify before acting on AI-generated content?</p></li><li><p><strong>Integrity</strong>: Do they maintain professional standards when AI makes it easy to cut corners?</p></li><li><p><strong>Collaboration</strong>: Do they work with AI effectively, providing appropriate context and direction?</p></li><li><p><strong>Evolution</strong>: Do they adapt their approach based on what works and what does not?</p></li><li><p><strong>Performance</strong>: Do they use AI to accomplish meaningful work, not just generate output?</p></li></ul><p>Accountability carries the highest weight in the PAICE scoring model because it is the most critical skill and the most commonly underdeveloped one. Knowing that you should verify AI output is table stakes. Actually doing it, consistently, under time pressure, when the output sounds perfectly plausible: that is the skill that matters.</p><h2><strong>The Gap Between Knowing and Doing</strong></h2><p>Consider two professionals taking both an AI literacy test and a PAICE assessment.</p><p><strong>Professional A</strong> scores perfectly on the literacy test. They can explain prompt engineering techniques, describe the transformer architecture at a high level, and articulate a clear framework for responsible AI use. They sound fluent and knowledgeable in conversation with the AI during the PAICE assessment. But when the AI introduces a subtly incorrect data point midway through the session, they accept it without question. When the AI presents an overconfident summary with a buried factual error, they incorporate it into their work product. They catch zero of the injected challenges.</p><p><strong>Professional B</strong> struggles with some of the literacy test questions. They cannot explain how large language models work at a technical level. Their vocabulary for AI concepts is limited. But during the PAICE assessment, when the AI presents a claim that does not match their professional experience, they push back.
When the AI generates a confident-sounding analysis, they check the key assertions before accepting them. They catch most of the injected challenges.</p><p>Under a knowledge-based assessment, Professional A looks like the stronger AI collaborator. Under behavioral observation, Professional B is clearly more effective, and safer, in practice.</p><p>This pattern is not hypothetical. It reflects a well-documented phenomenon: the gap between declarative knowledge (knowing what to do) and procedural skill (actually doing it under real conditions). AI literacy tests measure the former. PAICE measures the latter.</p><p>The reverse pattern also reveals something important. A participant who challenges every AI response indiscriminately (flagging correct outputs as errors, treating all AI-generated content as suspect regardless of quality, etc.) is not demonstrating strong accountability. They are demonstrating a different failure mode: an inability to calibrate trust appropriately. PAICE recognizes this distinction. Excessive false positives are treated as a collaboration weakness, not a strength.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/paice-vs-ai-literacy-tests/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/paice-vs-ai-literacy-tests/comments"><span>Leave a comment</span></a></p><h2><strong>Why the Difference Matters for Regulated Industries</strong></h2><p>For professionals in regulated industries such as law, insurance, healthcare, finance, and cybersecurity, the gap between knowing and doing carries direct personal consequences.</p><p>A lawyer who can define AI hallucination on a quiz but does not catch a fabricated case citation in an AI-drafted brief risks sanctions, malpractice claims, and their license.
A financial advisor who scores well on AI training modules but accepts an AI-generated risk assessment without verification exposes their clients and themselves to regulatory action.</p><p>These professionals are individually licensed. They carry personal liability. The question their regulators and professional bodies will eventually ask is not &#8220;did you complete AI training?&#8221; but &#8220;did you exercise appropriate professional judgment when using AI tools?&#8221;</p><p>Training completion certificates and literacy test scores do not answer that question. Behavioral evidence does.</p><p>This is why PAICE was built for regulated industries first. The stakes are highest here, and the gap between knowledge and behavior has the most concrete consequences. When your license is on the line, the relevant question is not whether you can identify the right answer on a quiz. It is whether you catch the error in the room.</p><h2><strong>The Evidence Hierarchy</strong></h2><p>PAICE&#8217;s scoring is built on a clear evidence hierarchy: behavioral evidence outweighs conversational evidence. Always.</p><p>When the system introduces a deliberate challenge like a factual error embedded in otherwise accurate output, an overconfident claim that contradicts established knowledge, or a response that violates professional norms, and the participant catches it, that is ground truth. It is an observable, unambiguous demonstration of the skill that matters. No amount of articulate conversation about the importance of verification outweighs a missed error.</p><p>Conversely, when a participant catches every challenge, that behavioral evidence carries more weight than any conversational shortcoming. A person who is terse, direct, and catches everything scores higher than a person who is eloquent, thoughtful, and catches nothing. 
The scoring model reflects this intentionally.</p><p>This hierarchy exists because of a pattern that is pervasive in AI interaction: AI systems have historically told people they are doing well. They are agreeable, encouraging, and non-confrontational by default. This creates a feedback loop where people develop high confidence in their AI collaboration skills without ever having those skills tested. PAICE breaks that loop by introducing objective behavioral measures into the assessment.</p><h2><strong>What This Means for Organizations</strong></h2><p>For L&amp;D leaders and risk managers evaluating AI assessment tools, the distinction between knowledge testing and behavioral observation has practical implications.</p><p><strong>Training completion is not behavioral change.</strong> An employee can complete every module of an AI literacy program, pass the final quiz, and return to their desk still accepting AI output uncritically. The training gave them knowledge. It did not change their behavior. If your assessment tool only measures knowledge, you are measuring training effectiveness, not risk reduction.</p><p><strong>Self-reported surveys compound the problem.</strong> When you ask employees how often they verify AI output, they will tell you what they believe you want to hear. This is not dishonesty; it is a well-known limitation of self-report measurement. People genuinely believe they verify more than they do. Behavioral observation removes this bias entirely.</p><p><strong>Cohort-level behavioral data is what regulators will want.</strong> As regulatory frameworks for AI use mature, organizations will need to demonstrate that their people use AI responsibly, not that they completed a training program. Behavioral assessment data, aggregated at the cohort level, provides this evidence. Knowledge test scores do not.</p><p>PAICE is designed to serve as this measurement layer. It does not replace training. It tells you whether training worked.
An organization that deploys AI literacy training and follows it with PAICE assessment can see whether the training translated into behavioral change, and where it did not.</p><p>PAICE&#8217;s privacy architecture supports this at scale. Individual assessment scores are never disclosed to employers. Organizations receive cohort-level data only: distributions, percentiles, trend lines. This means employees can be assessed honestly without fear that a low score becomes a performance issue. The result is better data, because people behave naturally when they are not performing for an audience.</p><h2><strong>The Right Question</strong></h2><p>The AI assessment landscape is growing. More tools, more quizzes, more certification programs. Most of them test knowledge. Some test prompt engineering skill. A few test whether you can write a good query.</p><p>None of that answers the question that actually matters: when the AI is wrong and sounds right, does this person catch it?</p><p>That is a behavioral question. It cannot be answered with a multiple-choice test. It can only be answered by watching what someone does when it happens.</p><p>PAICE is built to answer that question.</p><div><hr></div><p><em>Ready to assess your AI collaboration capabilities?
<a href="https://paice.work/">Take the PAICE assessment</a> to get personalized insights and recommendations.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Explore our Baseline offerings</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Understanding PAICE:</strong></p><ul><li><p><a href="https://paice.work/blog/what-paice-tests-for">What PAICE Is Actually Testing For</a> - The behavioral observation model behind the assessment</p></li><li><p><a href="https://paice.work/blog/five-dimensions-of-ai-readiness">The PAICE Framework</a> - Five dimensions of AI collaboration readiness</p></li><li><p><a href="https://paice.work/blog/faq-how-is-my-paice-score-calculated">How Is My PAICE Score Calculated?</a> - Scoring methodology explained</p></li></ul><p>&#128214; <strong>Organizational 
Readiness:</strong></p><ul><li><p><a href="https://paice.work/blog/faq-enterprise-risk-reduction">How Does PAICE Support Enterprise Risk Reduction?</a> - Cohort-level data for risk management</p></li><li><p><a href="https://paice.work/blog/creating-team-ai-collaboration-standards">Creating Team AI Collaboration Standards</a> - Building organizational AI practices</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Introducing PAICE Pro]]></title><description><![CDATA[Your Score Is Free. Now See How to Improve It.]]></description><link>https://paice.substack.com/p/introducing-paice-pro</link><guid isPermaLink="false">https://paice.substack.com/p/introducing-paice-pro</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Tue, 24 Mar 2026 19:01:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bfe08c7e-3a46-47e5-af36-8cdc20367d20_1600x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Since day one, the PAICE assessment has been free. A 25-minute conversation that gives you an honest, behavioral measure of how effectively you collaborate with AI. That hasn&#8217;t changed. <strong>Your PAICE Score&#8482; is free, and it always will be.</strong></p><p>But we kept hearing the same question after people got their results: <em>&#8220;Now what do I do about it?&#8221;</em></p><p>Today, we&#8217;re answering that question. <strong>PAICE Pro is now out of beta and fully available.</strong></p><h2><strong>What PAICE Pro Is</strong></h2><p>PAICE Pro is the development layer on top of the free assessment, for individuals. The free assessment answers <em>&#8220;How effectively do you collaborate with AI?&#8221;</em> PAICE Pro answers <em>&#8220;How do you get better?&#8221;</em></p><p>The assessment doesn&#8217;t change. The score doesn&#8217;t change. 
What changes is the lens: you get deeper visibility into your results and the tools to act on them.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/introducing-paice-pro?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/introducing-paice-pro?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>What&#8217;s Included</strong></h2><h3><strong>Full 20-Subscore Breakdown</strong></h3><p>The free assessment shows your overall PAICE Score and your five dimensional scores: Performance, Accountability, Integrity, Collaboration, and Evolution. PAICE Pro unlocks the full picture for the first time.</p><p>Each dimension breaks down into four subscores. Accountability, for example, expands into Failure Navigation, Bias Awareness, Traceability, and Catastrophe Risk Management. That&#8217;s 20 specific capabilities you can see, understand, and work on.</p><p><strong>Why this matters:</strong> A score of 410 on Accountability tells you something. Knowing that your Traceability subscore is 280 while your Failure Navigation is 580 tells you exactly where to focus.</p><h3><strong>Assessment History and Trends</strong></h3><p>PAICE Pro tracks your assessments over time. Take the assessment today, again in a few weeks, and see whether you&#8217;re improving, stable, or declining &#8212; across every dimension.</p><p>The history dashboard shows trend indicators for each of the five dimensions, so you can see whether your development efforts are working.</p><p><strong>Why this matters:</strong> A single assessment is a snapshot. Multiple assessments reveal a trajectory. PAICE Pro turns snapshots into a growth story. 
Pair it with a structured development approach like the <a href="https://paice.work/blog/30-day-ai-collaboration-development-plan">30-Day AI Collaboration Development Plan</a> and you can watch your scores respond to deliberate practice.</p><h3><strong>Actionable Improvement Resources</strong></h3><p>The free assessment includes tier-appropriate general guidance. PAICE Pro enriches those recommendations with specific external links and curated resources that connect directly to where you can improve. These will continue to grow as we expand our library of resources and value-added partnerships.</p><h2><strong>What PAICE Pro Is Not</strong></h2><p>Let&#8217;s be clear about what this isn&#8217;t:</p><ul><li><p><strong>Not &#8220;the full version.&#8221;</strong> The free assessment IS the full assessment. Same conversation, same scoring, same rigor.</p></li><li><p><strong>Not a subscription.</strong> One-time purchase, time-limited access. No recurring charges.</p></li><li><p><strong>Not an account system.</strong> Your PAICE Pro code is your only credential. No email, no password, no profile.</p></li><li><p><strong>Not a different assessment.</strong> Same assessment, deeper results.</p></li></ul><p>We will never call the free assessment &#8220;limited&#8221; or &#8220;basic&#8221; because it isn&#8217;t. PAICE Pro is additive &#8212; it adds analytical depth and development tools on top of a complete product.</p><h2><strong>Pricing and Access</strong></h2><p><strong>$29.99 USD</strong>, one-time purchase.</p><p>Here&#8217;s how it works:</p><ol><li><p><strong>Purchase</strong> at <a href="https://store.paice.work/">store.paice.work</a>. You receive a PAICE Pro code via email.</p></li><li><p><strong>Activate</strong> your code on <a href="https://paice.work/individual">paice.work/individual</a> before or after your assessment. 
You have 90 days from purchase to activate.</p></li><li><p><strong>Access</strong> Pro features for 90 days from the date you first activate.</p></li></ol><p>No account required. No payment information touches PAICE servers. <a href="https://www.lemonsqueezy.com/">Lemon Squeezy</a> handles all payment processing as our Merchant of Record.</p><p>Your PAICE Pro code works across up to 3 browsers, so you can use it on your work laptop and personal device without issues.</p><h2><strong>Privacy Unchanged</strong></h2><p>PAICE Pro maintains the exact same privacy architecture as the free assessment:</p><ul><li><p><strong>No account required</strong> &#8212; your code is your key</p></li><li><p><strong>No payment PII on PAICE servers</strong> &#8212; Lemon Squeezy handles everything</p></li><li><p><strong>No identity linking</strong> &#8212; PAICE cannot connect your code to your identity</p></li><li><p><strong>Assessment results stored locally</strong> &#8212; in your browser, under your control</p></li></ul><p>We built PAICE as a <a href="https://paice.work/blog/paice-pbc-incorporation">Public Benefit Corporation</a> with privacy by design. Adding a paid tier doesn&#8217;t change that commitment.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/introducing-paice-pro/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/introducing-paice-pro/comments"><span>Leave a comment</span></a></p><h2><strong>For Organizations: The AI Capability Baseline</strong></h2><p>If you&#8217;re thinking about PAICE for your team, individual Pro codes aren&#8217;t the best path. 
We offer the <strong><a href="https://paice.work/baseline">AI Capability Baseline</a></strong> &#8212; a structured engagement that gives organizations behavioral visibility into how their teams actually collaborate with AI. <em>Coming soon: a small team option that bridges the gap between individual Pro codes and the Baseline.</em></p><p>For now, there are two options:</p><ul><li><p><strong>Quick-Read</strong> ($3,600) &#8212; 12+ participants, 2-week engagement, cohort capability report with dimensional analysis and a 30-minute readout call</p></li><li><p><strong>Full Baseline</strong> ($10,000) &#8212; 20-50 participants, 4-week engagement, executive readout, written recommendations, and priority support</p></li></ul><p>Both include governance-ready documentation, privacy-by-design architecture (no personal data collection), and the confidence gap analysis that reveals the difference between what your team believes they can catch and what they actually catch.</p><p><strong><a href="https://paice.work/baseline">Book a conversation &#8594;</a></strong></p><h2><strong>What&#8217;s Coming Next</strong></h2><p>PAICE Pro v1 is the foundation. 
Here&#8217;s what&#8217;s on the roadmap within the next 6 months:</p><ul><li><p><strong>Comparative benchmarking</strong> &#8212; See where you rank relative to the broader population</p></li><li><p><strong>AI Collaboration Style Profile</strong> &#8212; A named typology derived from your dimensional patterns</p></li><li><p><strong>Personalized coaching playbook</strong> &#8212; AI-generated, multi-step improvement plans</p></li><li><p><strong>Custom scenario selection</strong> &#8212; Domain-specific assessment lenses for legal, medical, financial, and other fields</p></li><li><p><strong>Shareable credentials</strong> &#8212; Verifiable proof of your score for LinkedIn and professional profiles</p></li><li><p><strong>Partner benefits</strong> &#8212; Free and reduced-cost AI educational offerings from a select group of our trusted partners, matched to your unique needs</p></li></ul><p>We&#8217;re shipping these based on what Pro users tell us matters most. Your purchase isn&#8217;t just access; it&#8217;s a vote for what we prioritize building next.</p><h2><strong>Try the Assessment First</strong></h2><p>Not sure if Pro is for you? <strong><a href="https://paice.work/">Take the free assessment</a></strong> first. It takes about 25 minutes, gives you a complete PAICE Score with dimensional breakdown, and costs nothing. If you want to go deeper after seeing your results, Pro will be waiting.</p><h2><strong>Get PAICE Pro</strong></h2><p><strong><a href="https://store.paice.work/">Get PAICE Pro &#8212; $29.99</a></strong></p><p>90-day access. No account required. Your code is your key.</p><div><hr></div><p><em>Your PAICE Score is free. 
PAICE Pro shows you exactly where, and how, to improve.</em></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/what-your-paice-score-means">What Your PAICE Score&#8482; Means</a> &#8212; Understanding your 0-1000 score and tier placement</p></li><li><p><a href="https://paice.work/blog/30-day-ai-collaboration-development-plan">From Novice to Proficient: A 30-Day AI Collaboration Development Plan</a> &#8212; Make the most of your 90-day Pro window with a structured improvement path</p></li><li><p><a href="https://paice.work/blog/improving-your-paice-score">Improving Your PAICE Score</a> &#8212; Dimension-specific strategies for skill development</p></li><li><p><a href="https://paice.work/blog/introducing-ai-capability-baseline">Introducing the AI Capability Baseline</a> &#8212; Organizational assessment programs</p></li><li><p><a href="https://paice.work/blog/business-model-pricing-sustainability">The Future of PAICE</a> &#8212; Our business model and sustainability approach</p></li><li><p><a href="https://paice.work/blog/your-data-your-privacy">Your Data, Your Privacy</a> &#8212; How PAICE protects your information</p></li><li><p><a 
href="https://paice.work/blog/understanding-the-five-paice-dimensions">Understanding the Five PAICE Dimensions</a> &#8212; Deep dive into P-A-I-C-E</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Weekly Update - March 23, 2026]]></title><description><![CDATA[Server-side rendering, PAICE Pro live, and content momentum.]]></description><link>https://paice.substack.com/p/weekly-update-march-23-2026</link><guid isPermaLink="false">https://paice.substack.com/p/weekly-update-march-23-2026</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Mon, 23 Mar 2026 18:01:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/14a39432-f48e-4aa2-9ce6-97ddc1c1cfd6_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A high-output week on both the main branch and behind the scenes. The SSR prerender feature branch merged into production, delivering a major SEO and performance upgrade. Content publishing continued at daily pace with two FAQs, a scoring guide, and another whitepaper-driven video. 
PAICE Pro v1 code is live and the official launch announcement is tomorrow.</p><h2><strong>Content Published Last Week</strong></h2><p><strong>Monday</strong> (Mar 16): <a href="https://paice.work/blog/update-2026-03-16">&#8220;Weekly Update - March 16, 2026&#8221;</a></p><p><strong>Tuesday</strong> (Mar 17): <a href="https://paice.work/blog/can-employers-use-paice-for-hiring">&#8220;Can Employers Use PAICE for Hiring?&#8221;</a> Why individual PAICE Scores shouldn&#8217;t determine hiring decisions&#8212;and the better organizational use case that doesn&#8217;t involve candidates at all.</p><p><strong>Wednesday</strong> (Mar 18): <a href="https://paice.work/blog/what-your-paice-score-means">&#8220;What Your PAICE Score&#8482; Means&#8221;</a> A comprehensive guide to interpreting your 0-1000 score, the five tier levels, your dimensional profile, and what your results actually tell you.</p><p><strong>Thursday</strong> (Mar 19): <a href="https://paice.work/blog/will-paice-support-other-ai-models">&#8220;Will PAICE Support Other AI Models?&#8221;</a> PAICE already uses models from Anthropic, Google, and OpenAI in every assessment&#8212;and your collaboration skills transfer across all of them.</p><p><strong>Friday</strong> (Mar 20): Video - <a href="https://paice.work/blog/deadliest-white-space-modern-office">&#8220;The Deadliest White Space in the Modern Office&#8221;</a> A 4-minute NotebookLM video exploring the invisible gap between human judgment and AI output, drawn from our upcoming ISPI 2026 whitepaper.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share PAICE.work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share 
PAICE.work</span></a></p><h2><strong>Technical Improvements</strong></h2><h3><strong>Server-Side Rendering (SSR) Merged</strong></h3><p>Merged the <code>feat/ssr-prerender</code> branch, replacing our Puppeteer-based prerender with React 18&#8217;s native <code>renderToPipeableStream</code>. Blog routes now prerender with full data context, no loading spinners in the initial HTML. Per-page meta tags, JSON-LD structured data, per-post <code>og:image</code>, and <code>&lt;noscript&gt;</code> article content are all injected at build time. This is the single biggest SEO and performance improvement we&#8217;ve shipped this quarter, ensuring search engines and social platforms see complete, structured content on first request.</p><h3><strong>AI Agent Discovery and PWA Support</strong></h3><p>Added <code>llms.txt</code> and <code>agents.json</code> for AI agent and LLM crawler discovery, along with <code>manifest.json</code> and <code>browserconfig.xml</code> for progressive web app support. Daily news sitemap generation is now part of CI. These changes position PAICE for discoverability across both traditional search and the emerging agentic web.</p><h3><strong>TypeScript Cleanup and Bug Fixes</strong></h3><p>Resolved all outstanding TypeScript errors across the codebase (881 insertions, 635 deletions), fixed React Hooks violations in WhitepaperDetail, removed redundant dependencies, and corrected sample score page history checks. The codebase is now fully clean against strict TypeScript compilation.</p><h3><strong>Assessment UX Refinements</strong></h3><p>Added a pre-assessment entry screen for the <code>/individual</code> flow, now mirroring the <code>/cohort</code> flow with a clear call to action.</p><h2><strong>Internationalization: Urdu Assessment Now Available for Testing</strong></h2><p>The PAICE assessment flow is now available for testing in Urdu &#8212; our first non-English language. 
This is the precursor to upcoming releases in Spanish, French, Portuguese, and German. If you&#8217;re a native or fluent speaker of any of these languages and interested in helping test and validate, we&#8217;d love to hear from you. <a href="https://paice.work/contact">Reach out here</a>.</p><h2><strong>Open Source Projects</strong></h2><p>The SSR and agent discovery work this week produced two new standalone projects:</p><ul><li><p><strong><a href="https://siteline.snapsynapse.com/">Siteline</a></strong> &#8212; A tool born from our SSR prerender pipeline, now extracted as its own project for broader use.</p></li><li><p><strong><a href="https://github.com/snapsynapse/graceful-boundaries">Graceful Boundaries</a></strong> &#8212; Our proposal for an open specification for how services communicate their capabilities and constraints. It emerged from our work on <code>agents.json</code> and AI agent discovery, defining a standard for machine-readable service boundaries. We believe this is the missing piece for the agentic web.</p></li></ul><p>Both are works in progress, and we welcome contributions.</p><h2><strong>PAICE Pro v1 Live (Launch Announcement Tomorrow)</strong></h2><p>PAICE Pro v1 code is live. The implementation includes Lemon Squeezy payment integration with live checkout URLs, token-based access with hashed storage (no accounts required), feature gating via the <code>useProAccess</code> hook, refund webhook handling for token revocation, and a reusable <code>AssessmentHeader</code> component. PAICE Pro adds analytical depth and development tools on top of the free assessment. The scoring doesn&#8217;t change, but the lens does. At $29.99 one-time for 90-day access, it&#8217;s our first step toward sustainable revenue while keeping the core assessment free forever. 
The official launch announcement goes out tomorrow (March 24).</p><h2><strong>Upcoming Whitepaper &#8212; In Peer Review</strong></h2><p>Our third whitepaper, <em>&#8220;Closing the Collaboration Gap: A Behavioral Skill Framework for Human-AI Performance Improvement,&#8221;</em> is in peer review and on track for release at <a href="https://ispi.org/">ISPI 2026</a> on <strong>March 31</strong>. Last week&#8217;s video post previewed core concepts from the paper.</p><h2><strong>Platform Stability</strong></h2><p>Platform maintained 100% uptime with no incidents. All systems operating normally: standard and Confidential Mode assessments, results generation with on-chain attestation, cohort management, email notifications, and analytics processing.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/weekly-update-march-23-2026/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/weekly-update-march-23-2026/comments"><span>Leave a comment</span></a></p><h2><strong>What&#8217;s Next</strong></h2><p><strong>This week:</strong> PAICE Pro v1 official launch announcement, continued daily content publishing, and finalizing the whitepaper peer review ahead of ISPI 2026 distribution.</p><p><strong>Q1 priorities:</strong> Whitepaper release at ISPI, internationalization pipeline, and academic partnership conversations.</p><h2><strong>The Week in Numbers</strong></h2><ul><li><p>5 blog posts published (1 video + 2 FAQs + 1 scoring guide + 1 weekly update)</p></li><li><p>20 commits merged to main, 17 on payment-integration feature branch</p></li><li><p>142 files changed, 5,567 insertions, 1,181 deletions on main</p></li><li><p>SSR prerender branch merged</p></li><li><p>2 new open source spin-off projects launched (Siteline, Graceful Boundaries)</p></li><li><p>Urdu assessment flow 
available for testing</p></li><li><p>75 files cleaned of TypeScript errors</p></li><li><p>PAICE Pro v1 code live, launch announcement tomorrow</p></li><li><p>100% uptime, zero incidents</p></li></ul><h2><strong>Why This Week Matters</strong></h2><p>The SSR merge is a structural upgrade that compounds over time: every blog post, every assessment page, every whitepaper now renders complete HTML on the server before JavaScript loads. That means faster perceived load times, better search engine indexing, and richer social sharing previews&#8212;all critical for organizations evaluating PAICE as a platform partner. Meanwhile, PAICE Pro going live signals the transition from &#8220;free research tool&#8221; to &#8220;sustainable business with a free tier,&#8221; a positioning shift that makes Founding Partner conversations more credible.</p><h2><strong>Thank You</strong></h2><p>Special thanks to Muhammed and Muneeb for their contributions in the past two weeks. PAICE Pro and the internationalization pipeline wouldn&#8217;t be possible without their excellent work.</p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Get a Team Baseline</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading 
PAICE.work! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/update-2026-03-16">Weekly Update - March 16, 2026</a></p></li><li><p><a href="https://paice.work/blog/can-employers-use-paice-for-hiring">Can Employers Use PAICE for Hiring?</a></p></li><li><p><a href="https://paice.work/blog/what-your-paice-score-means">What Your PAICE Score&#8482; Means</a></p></li><li><p><a href="https://paice.work/blog/will-paice-support-other-ai-models">Will PAICE Support Other AI Models?</a></p></li><li><p><a href="https://paice.work/blog/deadliest-white-space-modern-office">The Deadliest White Space in the Modern Office</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Deadliest White Space in the Modern Office]]></title><description><![CDATA[Why the gap between human judgment and AI output is now the highest-stakes skill in business]]></description><link>https://paice.substack.com/p/the-deadliest-white-space-in-the</link><guid isPermaLink="false">https://paice.substack.com/p/the-deadliest-white-space-in-the</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Fri, 20 Mar 2026 16:45:48 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/191600609/1389ea48e2f39e8756837a0a27813da2.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>For centuries, professional skill was a single-node metric. One human, one task, one measurement. 
Then AI agents arrived, and the old model shattered overnight.</p><p>This NotebookLM video, generated from the upcoming PAICE whitepaper <em>&#8220;Closing the Collaboration Gap: A Behavioral Skill Framework for Human-AI Performance Improvement&#8221;</em>, explores the invisible space between what AI produces and what humans accept&#8212;and why that gap is now the deadliest risk in any modern organization.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/the-deadliest-white-space-in-the?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/the-deadliest-white-space-in-the?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p><a href="https://youtu.be/_QrWLqBBCrM">Watch on YouTube &#8594;</a></p><h2><strong>The Single-Node Era Is Over</strong></h2><p>As the video explains, for the entire history of professional work&#8212;from the industrial revolution through the information age&#8212;performance measurement was straightforward. Workers physically operated passive machines. Computers sat waiting, executing explicit commands and doing nothing else. 
Because the tools had zero autonomy, organizations only needed to measure <strong>one thing</strong>: human capability.</p><p>Performance was a single-node metric.</p><p>That model broke in October 2025 with the introduction of the <strong>agent skills standard</strong>&#8212;a framework that formally defines AI capabilities in agentic terms. The machine transitioned from a tool waiting for a keystroke to an <strong>active agent</strong> with its own measurable capabilities like data retrieval and instruction following.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/the-deadliest-white-space-in-the?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/the-deadliest-white-space-in-the?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>The White Space: Where Risk Lives</strong></h2><p>Here&#8217;s the problem: AI agents now hand back <strong>confident, professionally formatted outputs</strong> that include errors specifically designed&#8212;by the nature of how language models work&#8212;to bypass human skepticism.</p><p>Because AI is now executing the raw task, the human&#8217;s primary job has shifted away from <strong>creation</strong> to <strong>managing and verifying</strong> the handoff. This transition zone&#8212;the white space between what AI produces and what humans accept&#8212;is where the deadliest organizational risks now live.</p><p>Most organizations have no way to measure what happens in this space. 
That&#8217;s the gap PAICE was built to close.</p><h2><strong>What the PAICE Framework Actually Measures</strong></h2><p>The video breaks down the PAICE scoring framework, and the distribution may surprise you:</p><ul><li><p><strong>Performance</strong> (basic prompting skills): <strong>10%</strong> &#8212; a surprisingly small slice</p></li><li><p><strong>Accountability</strong> (owning and verifying AI outputs): <strong>30%</strong></p></li><li><p><strong>Integrity</strong> (ethical use and compliance): <strong>25%</strong></p></li><li><p><strong>Collaboration + Evolution</strong> (teamwork and adaptability): <strong>35%</strong></p></li></ul><p>The bulk of the weight falls into <strong>Accountability</strong> and <strong>Integrity</strong>. Together, these dimensions measure what we call <strong>calibrated skepticism</strong>&#8212;the habit of stopping a workflow, analyzing a confident AI claim, and actively applying your own domain expertise to catch subtle errors.</p><p>In a dual-performer system, <strong>the ability to doubt the machine safely and accurately is far more valuable than the ability to write a clever prompt</strong>.</p><h2><strong>Why Traditional Assessments Can&#8217;t Capture This</strong></h2><p>Standard multiple-choice quizzes and self-reported surveys fail to capture this skill. A professional might claim they rigorously fact-check AI outputs. But under deadline pressure, they will readily accept a polished hallucination.</p><p>To close this gap between what people <em>know</em> and what they <em>actually do</em>, the PAICE assessment uses a method called <strong>strategic failure injection</strong>. During a live task, the system secretly injects subtle, realistic errors directly into the AI&#8217;s response to observe how the user reacts&#8212;without prompting.</p><p>True AI capability cannot be measured by asking what professionals <em>should</em> do. 
It can only be measured by observing what they <em>actually</em> do when a confident, well-formatted AI output contains a hidden flaw.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/the-deadliest-white-space-in-the/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/the-deadliest-white-space-in-the/comments"><span>Leave a comment</span></a></p><h2><strong>The Capability Question Every Organization Must Answer</strong></h2><p>The shift from single-node to dual-performer work isn&#8217;t coming, it&#8217;s already here. Every professional who touches AI output is operating in this white space, whether their organization measures it or not.</p><p>The question isn&#8217;t whether your people use AI. It&#8217;s whether they can <strong>verify what AI gives them</strong> when the stakes are high and the deadline is tight.</p><p>That&#8217;s the deadliest white space in the modern office. 
And it&#8217;s the space that PAICE was designed to measure.</p><p><em>This whitepaper is currently in peer review and coming soon.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Explore the Assessment Baseline</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepapers">Read our previous whitepapers</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/what-paice-tests-for">What PAICE Is Actually Testing For</a></p></li><li><p><a href="https://paice.work/blog/ai-adoption-illusion-collaboration-gap">The AI Adoption Illusion</a></p></li><li><p><a href="https://paice.work/blog/paice-assessment-walkthrough-video">PAICE Video Walkthrough: Why 2026 Is the AI Readiness Inflection Point</a></p></li><li><p><a href="https://paice.work/blog/the-measurement-gap">The Measurement Gap</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[Will PAICE Support Other AI Models?]]></title><description><![CDATA[Why Your Collaboration Skills Transfer Across Any AI]]></description><link>https://paice.substack.com/p/will-paice-support-other-ai-models</link><guid isPermaLink="false">https://paice.substack.com/p/will-paice-support-other-ai-models</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Thu, 19 Mar 2026 19:30:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/02d1fa22-7107-449a-8801-39a4acf14a10_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the most common questions we hear from professionals evaluating PAICE (People + AI Collaboration Effectiveness) is straightforward: <strong>&#8220;Does PAICE only work with one AI model?&#8221;</strong></p><p>The short answer might surprise you: PAICE already uses models from three major providers in every single assessment. 
But the more important answer is this: <strong>it doesn&#8217;t matter which model you&#8217;re assessed with, because PAICE measures your behavior, not the model&#8217;s output.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/will-paice-support-other-ai-models?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/will-paice-support-other-ai-models?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>The Short Answer</strong></h2><p>Every PAICE assessment today uses models from <strong>Anthropic (Claude)</strong>, <strong>Google (Gemini)</strong>, and <strong>OpenAI (ChatGPT)</strong>. Different models handle different functions within a single assessment session. This isn&#8217;t a roadmap item or a future plan. It&#8217;s how the system works right now.</p><p>But that&#8217;s really the less interesting part of the answer. The reason PAICE can work across multiple models is that the skills it measures are universal. Whether you&#8217;re collaborating with Claude, Gemini, ChatGPT, or the next model that hasn&#8217;t been released yet, the behaviors that make you effective don&#8217;t change.</p><h2><strong>How Multi-Model Assessment Works</strong></h2><h3><strong>Each Function Gets the Best Tool</strong></h3><p>A PAICE assessment isn&#8217;t powered by a single model doing everything. 
The architecture separates distinct functions, and each uses the model best suited for that job:</p><ul><li><p><strong>Conversation</strong>: The model you interact with during the assessment, optimized for natural dialogue and context maintenance</p></li><li><p><strong>Evaluation</strong>: The model that analyzes your behavioral patterns and generates dimensional scores, optimized for reasoning depth</p></li><li><p><strong>Detection</strong>: The model that identifies specific behavioral signals in real time, optimized for speed and precision</p></li></ul><p>These functions have different requirements. A model that excels at extended conversation may not be the best choice for rapid pattern detection. By separating concerns, PAICE can use the right tool for each job rather than forcing one model to do everything.</p><h3><strong>The Cascade Pattern</strong></h3><p>PAICE uses a <strong>cascading fallback architecture</strong> for each function. Here&#8217;s what that means in practice:</p><ol><li><p><strong>Primary model</strong>: The first choice for a given function, selected for quality</p></li><li><p><strong>Fallback models</strong>: Alternative models from different providers that activate automatically if the primary is unavailable</p></li></ol><p>If the primary conversation model experiences an outage, the system seamlessly switches to a fallback from a different provider. You never notice. Your assessment continues without interruption. This cross-provider resilience is built into every function.</p><p>The practical benefit: PAICE doesn&#8217;t have a single point of failure. An outage at any one AI provider doesn&#8217;t disrupt your assessment, because the cascade architecture routes around it automatically.</p><h3><strong>Why You Don&#8217;t Need to Think About This</strong></h3><p>From your perspective as the person taking the assessment, none of this is visible. 
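</p><p><em>For the curious:</em> the cascade pattern described above can be sketched as trying providers in order and falling through on failure. This is an illustrative model only; the provider names and the <code>call_model</code> function are hypothetical stand-ins, not PAICE&#8217;s actual implementation.</p>

```python
# Illustrative sketch of a cascading fallback across AI providers.
# Provider names and call_model are hypothetical stand-ins.

def call_model(provider: str, prompt: str) -> str:
    """Stand-in for a real provider API call; raises during an outage."""
    if provider == "provider_b":  # simulate an outage at one provider
        raise ConnectionError(f"{provider} unavailable")
    return f"{provider}: response to {prompt!r}"

def cascade(providers: list[str], prompt: str) -> str:
    """Try the primary provider first, then each fallback in order."""
    errors: list[Exception] = []
    for provider in providers:
        try:
            return call_model(provider, prompt)
        except ConnectionError as exc:
            errors.append(exc)  # record the failure and fall through
    raise RuntimeError(f"All providers failed: {errors}")

# provider_b (the primary here) is down, so provider_a answers instead.
print(cascade(["provider_b", "provider_a"], "hello"))
```

<p>The caller only ever sees the first successful response, which is why an outage at any single provider stays invisible to the person taking the assessment.</p><p>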
You have a conversation, you collaborate on tasks, and your behavioral patterns are observed and scored. Which specific models are involved in that process is an infrastructure detail, not a user-facing decision.</p><p>This is intentional. The assessment experience should be consistent regardless of which models are active behind the scenes.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/will-paice-support-other-ai-models/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/will-paice-support-other-ai-models/comments"><span>Leave a comment</span></a></p><h2><strong>Why Collaboration Patterns Transfer Across Models</strong></h2><p>This is the part that matters most for your professional development.</p><h3><strong>PAICE Measures Behavior, Not Model Knowledge</strong></h3><p>The five PAICE dimensions, Performance (P), Accountability (A), Integrity (I), Collaboration (C), and Evolution (E), are defined in terms of <strong>observable behaviors</strong>, not model-specific techniques:</p><ul><li><p><strong>Accountability</strong> measures whether you verify AI output and catch errors. This skill transfers whether the AI is Claude, Gemini, ChatGPT, or a model that doesn&#8217;t exist yet.</p></li><li><p><strong>Integrity</strong> measures whether you maintain logical consistency and fact-check claims. This applies to any AI interaction.</p></li><li><p><strong>Collaboration</strong> measures how effectively you iterate and refine outputs with AI. The iteration patterns that work well are the same across all models.</p></li></ul><p>A professional who carefully verifies AI-generated contract language does so regardless of which model produced it. 
A clinician who cross-references AI suggestions against clinical guidelines applies that same discipline across any AI tool.</p><h3><strong>What Transfers and What Doesn&#8217;t</strong></h3><p><strong>Skills that transfer across all AI models:</strong></p><ul><li><p>Verifying factual claims before acting on them</p></li><li><p>Catching errors, inconsistencies, and hallucinations</p></li><li><p>Providing clear context and constraints</p></li><li><p>Iterating strategically rather than accepting first outputs</p></li><li><p>Maintaining professional judgment when AI sounds confident</p></li></ul><p><strong>Skills that don&#8217;t transfer (and that PAICE doesn&#8217;t measure):</strong></p><ul><li><p>Model-specific prompt syntax or formatting tricks</p></li><li><p>Knowledge of a particular model&#8217;s quirks or limitations</p></li><li><p>Optimization techniques unique to one provider&#8217;s API</p></li><li><p>Platform-specific features or settings</p></li></ul><p>This distinction is fundamental to PAICE&#8217;s design. If we measured model-specific skills, your score would be tied to one vendor&#8217;s product cycle. Instead, we measure the collaboration behaviors that make you effective with <em>any</em> AI tool, today and five years from now.</p><h3><strong>The Scoring Engine Is Model-Independent</strong></h3><p>The scoring logic that produces your PAICE score operates entirely independently of which models powered your session. Test injections, behavioral observations, and dimensional scoring all function the same way regardless of the underlying model infrastructure.</p><p>This means your score from a session where Claude handled the conversation is directly comparable to a session where a different model was primary. 
The behavioral evidence is what matters, not which AI generated the conversation.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share PAICE.work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share PAICE.work</span></a></p><h2><strong>What&#8217;s Coming Next</strong></h2><h3><strong>Model Choice on the Roadmap</strong></h3><p>We&#8217;re building something that will give users the ability to select which AI model they interact with. This is on the product roadmap, though we don&#8217;t have a release date to share yet.</p><h3><strong>New Models, Same Assessment Quality</strong></h3><p>The AI landscape moves fast. New models launch regularly. Existing models improve. The cascade architecture means PAICE can adopt new models as they prove themselves, without any disruption to the assessment experience or scoring validity.</p><p>When a new model demonstrates strong performance for one of PAICE&#8217;s functions, it can be integrated into the relevant cascade. This keeps the platform current without requiring users to do anything differently.</p><h3><strong>Open-Source Models Already in Use</strong></h3><p>PAICE&#8217;s <a href="https://paice.work/blog/confidential-mode-tee-integration">Confidential Mode</a> already uses open-source models running inside Trusted Execution Environments (TEEs) for hardware-attested privacy. 
This demonstrates that the model-agnostic architecture extends beyond commercial providers to open-source alternatives as well.</p><h2><strong>Related Questions</strong></h2><h3><strong>&#8220;Will my score change if PAICE uses a different model?&#8221;</strong></h3><p><strong>No.</strong> The scoring methodology is model-independent. Your score reflects your behavioral patterns, not the characteristics of any particular model. We validate scoring consistency across model configurations to ensure comparability.</p><h3><strong>&#8220;Can I choose which AI model I interact with?&#8221;</strong></h3><p><strong>Not yet, but it&#8217;s on the roadmap.</strong> Currently, model selection is automatic and optimized for quality and reliability. A future product update will enable user-selectable models.</p><h3><strong>&#8220;What about privacy when using multiple providers?&#8221;</strong></h3><p>All models are accessed via API with the same privacy protections. No provider trains on your assessment data. The same <a href="https://paice.work/blog/privacy-by-design-gdpr-ccpa-compliance">privacy-by-architecture</a> principles apply regardless of which models are active. For the strongest privacy guarantees, Confidential Mode runs all inference inside TEE enclaves.</p><div><hr></div><p><em>Want to understand your own readiness profile? <a href="https://paice.work/">Take the PAICE assessment</a> to discover your strengths and opportunities.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Architecture &amp; Design:</strong></p><ul><li><p><a href="https://paice.work/blog/why-claude-model-agnostic-design">Why Claude? And Why PAICE Is Designed to Work with Any AI Model</a> - The original deep dive into model-agnostic architecture</p></li><li><p><a href="https://paice.work/blog/confidential-mode-tee-integration">Confidential Mode: TEE Integration</a> - Hardware-attested privacy with open-source models</p></li></ul><p>&#128214; <strong>How PAICE Works:</strong></p><ul><li><p><a href="https://paice.work/blog/faq-what-makes-paice-different">What Makes PAICE Different from Other Assessments?</a> - Behavioral observation vs. 
self-reporting</p></li><li><p><a href="https://paice.work/blog/privacy-by-design-gdpr-ccpa-compliance">Privacy by Design: How PAICE Achieves Privacy Compliance</a> - Technical privacy architecture across all providers</p></li></ul>]]></content:encoded></item><item><title><![CDATA[What Your PAICE Score™ Means]]></title><description><![CDATA[And What It Doesn't]]></description><link>https://paice.substack.com/p/what-your-paice-score-means</link><guid isPermaLink="false">https://paice.substack.com/p/what-your-paice-score-means</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Wed, 18 Mar 2026 18:01:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/785d017d-d7c5-44bd-9be7-0cba7e87a19c_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You&#8217;ve completed the PAICE (People + AI Collaboration Effectiveness) assessment and received your score. But what does that number actually mean? More importantly, what <em>doesn&#8217;t</em> it mean? Let&#8217;s break down how to interpret your results and use them effectively.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/what-your-paice-score-means?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/what-your-paice-score-means?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Understanding the 0-1000 Scale</strong></h2><p>Your PAICE Score&#8482; is measured on a 0-1000 scale. This isn&#8217;t like a traditional test where 700 means you got 70% of questions right. 
Instead, your score reflects <strong>observed collaboration effectiveness</strong> across twenty behavioral indicators that roll up to five core dimensions.</p><h3><strong>The Calibration Framework</strong></h3><p>The scale represents a spectrum of collaboration capability:</p><ul><li><p><strong>0</strong> = Actively hostile or non-cooperative with AI systems</p></li><li><p><strong>1000</strong> = World-class AI collaboration on a best day</p></li></ul><p>Most people score between 200 and 400 on their first assessment. This isn&#8217;t because they&#8217;re bad at their jobs &#8212; it&#8217;s because AI collaboration is a genuinely new skill that most people haven&#8217;t deliberately developed. Your first score is a baseline, not a verdict.</p><h2><strong>The Five Tiers</strong></h2><p>Your score places you in one of five tiers, each representing a distinct level of collaboration capability:</p><h3><strong>Constrained (0-299)</strong></h3><p><strong>What it means:</strong> Significant barriers to effective AI collaboration are present. These might include:</p><ul><li><p>Limited understanding of AI capabilities and limitations</p></li><li><p>Difficulty framing effective requests</p></li><li><p>Minimal verification of AI outputs</p></li><li><p>Resistance to iterative refinement</p></li><li><p>Lack of systematic approaches to collaboration</p></li></ul><p><strong>What it doesn&#8217;t mean:</strong> You&#8217;re &#8220;bad&#8221; at your job. Many highly capable professionals score in this range simply because they haven&#8217;t yet developed <em>AI-specific</em> collaboration skills. Domain expertise and AI collaboration readiness are separate things.</p><p><strong>What to do:</strong> Focus on foundational skills &#8212; understanding how AI works, learning basic prompting techniques, and developing verification habits. 
See <a href="https://paice.work/blog/improving-your-paice-score">Improving Your PAICE Score</a> for dimension-specific strategies.</p><h3><strong>Informed (300-499)</strong></h3><p><strong>What it means:</strong> Basic collaboration capability with room for growth. You can:</p><ul><li><p>Use AI for straightforward tasks</p></li><li><p>Recognize obvious errors in AI output</p></li><li><p>Iterate when prompted to refine results</p></li><li><p>Understand basic AI limitations</p></li></ul><p><strong>What it doesn&#8217;t mean:</strong> You&#8217;re &#8220;average.&#8221; This tier represents <em>functional capability</em>. You can work with AI, but there are significant opportunities to become more effective and more resilient to AI errors.</p><p><strong>What to do:</strong> Develop more sophisticated prompting strategies, strengthen verification practices, and learn to handle more complex collaboration scenarios.</p><h3><strong>Proficient (500-699)</strong></h3><p><strong>What it means:</strong> Solid collaboration practices with some blind spots. You demonstrate:</p><ul><li><p>Effective prompting across various task types</p></li><li><p>Systematic verification approaches</p></li><li><p>Good error detection most of the time</p></li><li><p>Productive iteration patterns</p></li><li><p>Understanding of context management</p></li></ul><p><strong>What it doesn&#8217;t mean:</strong> You&#8217;ve &#8220;mastered&#8221; AI collaboration. There are still areas where you could improve, particularly in edge cases, high-stakes situations, or when AI is confidently wrong.</p><p><strong>What to do:</strong> Identify and address your specific blind spots. Your <a href="https://paice.work/blog/what-your-paice-score-means#your-dimensional-profile">dimensional breakdown</a> will show you exactly where to focus.</p><h3><strong>Advanced (700-899)</strong></h3><p><strong>What it means:</strong> Strong collaboration capability with self-correcting practices. 
You show:</p><ul><li><p>Sophisticated prompting strategies</p></li><li><p>Proactive error detection</p></li><li><p>Effective recovery from AI failures</p></li><li><p>Adaptive approaches based on context</p></li><li><p>Meta-awareness of your own collaboration patterns</p></li></ul><p><strong>What it doesn&#8217;t mean:</strong> You&#8217;re done growing. Even advanced collaborators have areas for development, and the AI landscape evolves constantly.</p><p><strong>What to do:</strong> Focus on edge cases, develop expertise in domain-specific AI collaboration, and consider helping others improve their skills.</p><h3><strong>Exceptional (900-1000)</strong></h3><p><strong>What it means:</strong> Exceptional collaboration effectiveness across all dimensions. You demonstrate:</p><ul><li><p>Expert-level communication with AI systems</p></li><li><p>Highly developed error detection and verification</p></li><li><p>Sophisticated recovery strategies</p></li><li><p>Deep understanding of AI capabilities and limitations</p></li><li><p>Consistent excellence across diverse scenarios</p></li></ul><p><strong>What it doesn&#8217;t mean:</strong> You can&#8217;t improve. 
Even exceptional scores have room for growth, particularly as AI technology evolves and new capabilities emerge.</p><p><strong>What to do:</strong> Share your expertise, contribute to best practices in your organization, and reassess periodically &#8212; your score has a <a href="https://paice.work/blog/what-your-paice-score-means#score-half-life">6-month half-life</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share PAICE.work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share PAICE.work</span></a></p><h2><strong>What PAICE Measures (And What It Doesn&#8217;t)</strong></h2><h3><strong>What PAICE Measures</strong></h3><p><strong>Observable behavioral patterns in AI collaboration:</strong></p><ul><li><p>How you frame requests and provide context</p></li><li><p>How you respond to AI outputs &#8212; especially when they contain errors</p></li><li><p>How you iterate and refine toward better results</p></li><li><p>How you detect and recover from failures</p></li><li><p>How you maintain critical judgment when AI sounds confident</p></li></ul><p>The critical distinction: conversation is the medium, but it is not what&#8217;s being measured. PAICE measures how you <em>respond to AI behavior</em> &#8212; including failures, overconfidence, and hallucinations &#8212; in real time.</p><h3><strong>What PAICE Doesn&#8217;t Measure</strong></h3><p><strong>Intelligence:</strong> Your PAICE Score&#8482; is not an IQ test. Highly intelligent people can score low if they haven&#8217;t developed AI collaboration habits. 
You don&#8217;t need to be a genius to score high &#8212; you need effective collaboration patterns.</p><p><strong>Domain expertise:</strong> PAICE doesn&#8217;t measure how much you know about your field. A world-class expert might score low if they haven&#8217;t learned to collaborate effectively with AI.</p><p><strong>Technical knowledge about AI:</strong> You don&#8217;t need to understand how neural networks work to score high. PAICE measures practical collaboration skills, not theoretical knowledge.</p><p><strong>Personality traits:</strong> This isn&#8217;t a personality assessment. Your score reflects <em>learned behaviors</em>, not innate characteristics.</p><p><strong>Overall job performance:</strong> AI collaboration is one skill among many. A lower PAICE Score&#8482; doesn&#8217;t mean you&#8217;re not good at your job &#8212; it means you have an opportunity to develop this specific capability.</p><h2><strong>Understanding Your Results</strong></h2><h3><strong>Your Overall Score</strong></h3><p>Your overall PAICE Score&#8482; (0-1000) is a weighted combination of five dimensional scores:</p><table><thead><tr><th>Dimension</th><th>Weight</th><th>What It Measures</th></tr></thead><tbody><tr><td><strong>Performance</strong></td><td>10%</td><td>Getting useful outputs from AI</td></tr><tr><td><strong>Accountability</strong></td><td>30%</td><td>Verifying outputs, catching errors, maintaining ownership</td></tr><tr><td><strong>Integrity</strong></td><td>25%</td><td>Ethical awareness, bias recognition, fact-checking</td></tr><tr><td><strong>Collaboration</strong></td><td>20%</td><td>Iterative refinement and effective partnership</td></tr><tr><td><strong>Evolution</strong></td><td>15%</td><td>Learning, adapting, and improving your approach</td></tr></tbody></table><p>Accountability carries the highest weight (30%) because <strong>the human remains responsible for the outcome</strong>. 
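</p><p>As a worked illustration of the weighted combination (the weights are the published ones; the sample dimensional scores are hypothetical):</p>

```python
# The published dimension weights; the sample scores below are hypothetical.
WEIGHTS = {
    "Performance": 0.10,
    "Accountability": 0.30,
    "Integrity": 0.25,
    "Collaboration": 0.20,
    "Evolution": 0.15,
}

def overall_score(dimensional_scores: dict[str, float]) -> float:
    """Weighted sum of 0-1000 dimensional scores, yielding a 0-1000 overall score."""
    return sum(WEIGHTS[dim] * score for dim, score in dimensional_scores.items())

sample = {
    "Performance": 500,
    "Accountability": 380,
    "Integrity": 450,
    "Collaboration": 600,
    "Evolution": 500,
}
# 0.10*500 + 0.30*380 + 0.25*450 + 0.20*600 + 0.15*500 = 471.5
print(overall_score(sample))
```

<p>Note how the hypothetical profile&#8217;s weak Accountability (380) drags the overall score down far more than the strong Collaboration (600) lifts it.</p><p>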
Your ability to catch AI errors before they cause problems is the most critical &#8212; and most underdeveloped &#8212; collaboration skill.</p><p>For the full calculation methodology, see <a href="https://paice.work/blog/faq-how-is-my-paice-score-calculated">How Is My PAICE Score Calculated?</a></p><h3><strong>Your Dimensional Profile</strong></h3><p>The dimension breakdown is often <strong>more valuable than the overall score</strong>. It shows you exactly where to focus your development.</p><p>A score of 545 with strong Collaboration (600) but weak Accountability (380) tells a very different story than the same overall score with the opposite pattern. The first profile suggests someone who works well with AI but doesn&#8217;t catch enough errors. The second suggests someone who&#8217;s vigilant about verification but could improve how they interact with AI.</p><p>For a deep dive into what each dimension measures and what distinguishes high performers, see <a href="https://paice.work/blog/understanding-the-five-paice-dimensions">Understanding the Five PAICE Dimensions</a>.</p><h3><strong>Your History Page</strong></h3><p>Your <a href="https://paice.work/history">History page</a> tracks your development over time:</p><ul><li><p><strong>Compare to your baseline</strong>: See how each dimension has changed since your first assessment</p></li><li><p><strong>Visualize trends</strong>: Track progress across multiple assessments</p></li><li><p><strong>Identify patterns</strong>: Understand which dimensions improve fastest and which need more attention</p></li><li><p><strong>Derive insights</strong>: Get personalized recommendations based on your assessment history</p></li></ul><h2><strong>Score Half-Life</strong></h2><p>Your PAICE Score&#8482; has a <strong>6-month half-life</strong>. After six months without reassessment, your score&#8217;s relevance diminishes by half. 
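</p><p>One way to picture this (an illustrative decay model, not part of the scoring engine): treat a score&#8217;s relevance as halving every six months.</p>

```python
# Illustrative half-life model: relevance halves every six months.
# A mental model for score staleness, not PAICE's actual formula.
def score_relevance(months_elapsed: float, half_life_months: float = 6.0) -> float:
    """Fraction of a score's original relevance remaining after a given time."""
    return 0.5 ** (months_elapsed / half_life_months)

print(score_relevance(0))   # 1.0   (fresh score)
print(score_relevance(6))   # 0.5   (one half-life)
print(score_relevance(12))  # 0.25  (two half-lives)
```

<p>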
This reflects reality:</p><ul><li><p>AI tools and capabilities evolve rapidly</p></li><li><p>Collaboration patterns can drift without active attention</p></li><li><p>Skills that aren&#8217;t practiced tend to atrophy</p></li><li><p>The AI landscape you&#8217;re navigating changes constantly</p></li></ul><p>We recommend reassessment every 30-60 days for those actively developing their skills, and at least every 6 months for everyone else. See <a href="https://paice.work/blog/faq-can-i-retake-the-assessment">Can I Retake the Assessment?</a> for guidance on timing and preparation.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/what-your-paice-score-means/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/what-your-paice-score-means/comments"><span>Leave a comment</span></a></p><h2><strong>Common Misinterpretations</strong></h2><h3><strong>&#8220;I scored low, so I&#8217;m bad at AI&#8221;</strong></h3><p><strong>No.</strong> You might be effective at specific AI tasks while having gaps in your overall collaboration capability. The score reveals opportunities for growth, not fundamental inadequacy. Most first-time scores fall between 200 and 400.</p><h3><strong>&#8220;I scored high, so I don&#8217;t need to improve&#8221;</strong></h3><p><strong>No.</strong> Even exceptional scores have room for growth. AI technology evolves rapidly, and collaboration best practices are still being established. The 6-month half-life exists for a reason.</p><h3><strong>&#8220;My score is lower than my colleague&#8217;s, so they&#8217;re better than me&#8221;</strong></h3><p><strong>No.</strong> Different people have different dimensional profiles. Your colleague might score higher overall but have weaknesses in areas where you&#8217;re strong. 
Focus on your own development, not comparison.</p><h3><strong>&#8220;This score will determine my career&#8221;</strong></h3><p><strong>No.</strong> PAICE is a development tool, not a credential. It&#8217;s designed to help you improve, not to label or limit you. Your results belong to you &#8212; we don&#8217;t share them with anyone, even employers.</p><h3><strong>&#8220;I just need to learn better prompts&#8221;</strong></h3><p><strong>Not quite.</strong> Performance (prompting ability) is only 10% of your score. The dimensions that matter most &#8212; Accountability (30%) and Integrity (25%) &#8212; are about what you do <em>after</em> AI gives you an answer. Do you verify? Do you catch errors? Do you maintain professional judgment? How fast you can learn new things matters slightly more than what you know today, because AI models change rapidly. This is why Evolution is weighted at 15%, above Performance&#8217;s 10%.</p><h2><strong>How to Use Your Score Effectively</strong></h2><h3><strong>Focus on Your Dimensional Breakdown</strong></h3><p>Don&#8217;t fixate on the overall number. Your five dimension scores tell you exactly where to invest your development effort. Start with the lowest-scoring of your heavily weighted dimensions &#8212; usually Accountability or Integrity &#8212; since these have the largest effect on your overall score.</p><h3><strong>Read Your Personalized Insights</strong></h3><p>Your results include specific observations about your collaboration patterns. These are more actionable than the score itself.</p><h3><strong>Create a Development Plan</strong></h3><p>Use your results to create a focused plan. Don&#8217;t try to improve everything at once. Pick one or two dimensions to work on first. 
See <a href="https://paice.work/blog/improving-your-paice-score">Improving Your PAICE Score</a> for dimension-specific strategies, or follow the <a href="https://paice.work/blog/30-day-ai-collaboration-development-plan">30-Day AI Collaboration Development Plan</a> for a structured approach.</p><h3><strong>Track Your Progress</strong></h3><p>Use your <a href="https://paice.work/history">History page</a> to see how your scores change over time. Meaningful improvement typically takes 2-4 weeks of deliberate practice. We recommend waiting at least 15 days between assessments to allow behavioral patterns to genuinely change.</p><h3><strong>Remember What Matters</strong></h3><p>Your PAICE Score&#8482; is a measurement tool, not the goal itself. What matters is whether you can verify AI outputs effectively, catch errors before they cause problems, iterate toward better results, and maintain appropriate professional judgment. If those capabilities are improving, your score will follow.</p><h2><strong>The Bottom Line</strong></h2><p>Your PAICE Score&#8482; is a tool for development, not a judgment. It reflects your current collaboration capability based on observed behaviors, and it provides a roadmap for improvement.</p><p><strong>What matters most isn&#8217;t the number &#8212; it&#8217;s what you do with the insights.</strong></p><p>A score of 350 with a commitment to improvement is more valuable than a score of 750 with complacency. Use your results to identify opportunities, develop new skills, and become more effective at AI collaboration.</p><p>This is a snapshot of where you are today, not a prediction of where you&#8217;ll be tomorrow. With awareness, practice, and systematic development, you can significantly improve your AI collaboration capability.</p><div><hr></div><p><em>Ready to understand your own collaboration profile? 
<a href="https://paice.work/">Take the PAICE assessment</a> to get detailed insights and personalized recommendations for development.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Understanding the Framework:</strong></p><ul><li><p><a href="https://paice.work/blog/understanding-the-five-paice-dimensions">Understanding the Five PAICE Dimensions</a> - What each dimension measures and what distinguishes high performers</p></li><li><p><a href="https://paice.work/blog/faq-how-is-my-paice-score-calculated">How Is My PAICE Score Calculated?</a> - The scoring formula, weights, and calculation methodology</p></li><li><p><a href="https://paice.work/blog/why-accountability-scores-lower">Why Your Accountability Score Is Probably Lower Than Your Other Dimensions</a> - Understanding the most challenging dimension</p></li></ul><p>&#128214; <strong>Taking Action:</strong></p><ul><li><p><a href="https://paice.work/blog/improving-your-paice-score">Improving Your PAICE Score: A Practical Guide</a> - Dimension-specific improvement strategies</p></li><li><p><a href="https://paice.work/blog/faq-can-i-retake-the-assessment">Can I Retake the Assessment?</a> - Timing, preparation, and making retakes 
count</p></li><li><p><a href="https://paice.work/blog/30-day-ai-collaboration-development-plan">From Novice to Proficient: A 30-Day AI Collaboration Development Plan</a> - Structured roadmap for skill development</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Can Employers Use PAICE for Hiring?]]></title><description><![CDATA[Why Individual Scores Shouldn't Determine Employment Decisions]]></description><link>https://paice.substack.com/p/can-employers-use-paice-for-hiring</link><guid isPermaLink="false">https://paice.substack.com/p/can-employers-use-paice-for-hiring</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Tue, 17 Mar 2026 15:31:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d6894ea0-56b1-46af-971f-43d6687aab3b_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8220;Can we use PAICE scores to screen job candidates?&#8221;</p><p>This is one of the most common questions we receive from HR leaders and hiring managers. The short answer is <strong>no</strong>. But there&#8217;s a more valuable use case for PAICE (People + AI Collaboration Effectiveness) in your hiring process that doesn&#8217;t involve the candidate at all.</p><p><em>Please note that this is not legal advice and you should always consult with your legal and compliance teams before implementing any AI collaboration practices.</em></p><h2><strong>The Short Answer</strong></h2><p>PAICE.work is designed for supporting <strong>skill development</strong> through independent assessment and calibration of AI collaboration behaviors, not candidate evaluation. 
Using individual PAICE scores as hiring criteria would be inappropriate for three reasons:</p><ol><li><p><strong>Design intent</strong>: The assessment measures current behavioral patterns to support skill development, not to rank individuals against each other.</p></li><li><p><strong>Privacy architecture</strong>: Individual scores are not personally identifiable. There&#8217;s no way to look up &#8220;what did this candidate score?&#8221; without their voluntary participation. It&#8217;s legally questionable and structurally impossible to use PAICE scores to compare candidates.</p></li><li><p><strong>Terms of Service</strong>: Using PAICE scores as a determining factor in employment decisions violates our <a href="https://paice.work/terms">Terms of Service</a>.</p></li></ol><p>If you&#8217;re looking for a tool to filter candidates, PAICE isn&#8217;t it. But if you&#8217;re looking to make smarter hiring decisions by understanding your team&#8217;s needs, keep reading...</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/can-employers-use-paice-for-hiring?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/can-employers-use-paice-for-hiring?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Why Individual Scores Shouldn&#8217;t Determine Hiring</strong></h2><p>Let&#8217;s first examine what it would look like to use PAICE for candidate screening, and specifically why this approach would fail.</p><h3><strong>What It Would Look Like</strong></h3><p>An organization requires all job candidates to complete a PAICE assessment. HR sets a minimum score threshold (say, 600 out of 1000). 
Candidates below the threshold are automatically disqualified.</p><h3><strong>Why It&#8217;s Problematic</strong></h3><p><strong>AI collaboration is a trainable skill, not a fixed trait.</strong> A candidate&#8217;s current PAICE score reflects their present behavioral patterns with AI tools &#8212; patterns developed through whatever exposure and practice they&#8217;ve had. It doesn&#8217;t measure their potential, learning speed, or how quickly they could develop strong collaboration habits in your environment.</p><p>Most professionals today score between 200 and 400 on their first assessment. This isn&#8217;t because they&#8217;re bad at their jobs. It&#8217;s because <a href="https://paice.work/blog/improving-your-paice-score">AI collaboration is a genuinely new skill</a> that most people haven&#8217;t deliberately developed yet. Filtering candidates based on current scores would systematically exclude talented professionals who simply haven&#8217;t had the opportunity or guidance to develop these specific behaviors.</p><p><strong>Gaming incentives distort the signal.</strong> When assessments become hiring gates, candidates optimize for the assessment rather than genuine skill development. The behaviors PAICE measures (verification habits, appropriate skepticism, effective prompting, etc.) are most valuable when they&#8217;re authentic. A candidate who learns to &#8220;perform&#8221; these behaviors during assessment may not be able to sustain that performance on the job.</p><p><strong>Adverse selection against non-traditional backgrounds.</strong> Candidates from organizations with mature AI adoption will naturally score higher than those from environments where AI use was restricted or discouraged.
Using scores as hiring criteria would favor candidates from certain backgrounds while penalizing those who may bring valuable domain expertise and fresh perspectives.</p><h3><strong>The Ethical Concern</strong></h3><p>Here&#8217;s the fundamental issue: <strong>AI collaboration capability is unevenly distributed, and that distribution reflects opportunity, not merit.</strong></p><p>Professionals in tech-forward companies have had years to develop AI collaboration habits. Those in highly regulated environments may have been prohibited from using AI tools at all. Those in under-resourced organizations may not have had access to AI tools. These differences in exposure and practice are not indicators of professional competence. Using current AI collaboration scores as hiring criteria would penalize candidates for circumstances outside their control.</p><h2><strong>The Legitimate Use Case: Team Capability Mapping</strong></h2><p>Here&#8217;s where PAICE becomes genuinely valuable in the hiring process, and it doesn&#8217;t involve assessing candidates at all.</p><h3><strong>Understanding Your Team&#8217;s Baseline</strong></h3><p>Before you hire anyone, use PAICE with your <strong>existing team members</strong>. 
Run a cohort assessment to understand your team&#8217;s collective strengths and gaps across the five PAICE dimensions:</p><ul><li><p><strong>Performance</strong>: How well does your team leverage AI for productivity?</p></li><li><p><strong>Accountability</strong>: Does your team take responsibility for AI-assisted outputs?</p></li><li><p><strong>Integrity</strong>: How consistently does your team verify AI-generated content?</p></li><li><p><strong>Collaboration</strong>: How effectively does your team communicate with AI systems?</p></li><li><p><strong>Evolution</strong>: How well does your team adapt their AI practices as tools change?</p></li></ul><p>This gives you an important map of the current capabilities of the team/department/division/cohort (minimum 20 per cohort). This is not to judge individuals, but to understand what skills are present and what skills are missing.</p><h3><strong>Informing What to Hire For</strong></h3><p>Once you know your team&#8217;s capability profile, you can make smarter hiring decisions.</p><p><strong>Example 1</strong>: Your team scores high on Performance and Collaboration but low on Integrity (verification behaviors). When screening candidates, you might prioritize those with strong quality assurance backgrounds, detail-oriented work styles, or experience in high-stakes environments where verification is standard practice.</p><p><strong>Example 2</strong>: Your team excels at technical AI tasks but struggles with Accountability (taking ownership of AI-assisted work). You might look for candidates who demonstrate strong ownership mentality, experience leading projects end-to-end, or comfort with public accountability.</p><p><strong>Example 3</strong>: Your team is already strong across the board but faces an Evolution gap. They&#8217;re experts with current tools but are having trouble adapting to new AI capabilities. 
You might prioritize candidates who demonstrate adaptability, continuous learning habits, or experience navigating tool transitions.</p><p>Notice what this approach does <strong>not</strong> do: it doesn&#8217;t require candidates to take PAICE. It doesn&#8217;t set score thresholds. It doesn&#8217;t filter anyone out. Instead, it uses your team&#8217;s data to inform what you&#8217;re looking for, the same way you might note that your team needs more senior leadership or more technical depth.</p><h3><strong>Onboarding Context for New Hires</strong></h3><p>When you do hire someone, your team&#8217;s capability map provides valuable onboarding context.</p><p>If your team assessment revealed gaps in verification behaviors, you know new hires will need explicit training and cultural reinforcement in this area &#8212; regardless of their own capability level. If your team excels at AI collaboration, new hires will benefit from peer learning opportunities you can intentionally create.</p><p>The team capability map becomes a training roadmap, not for the new hire specifically, but for how you&#8217;ll integrate them into your team&#8217;s AI practices.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/can-employers-use-paice-for-hiring/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/can-employers-use-paice-for-hiring/comments"><span>Leave a comment</span></a></p><h2><strong>The Privacy Architecture That Prevents Misuse</strong></h2><p>PAICE&#8217;s privacy design makes inappropriate hiring use cases structurally impossible. Here&#8217;s how:</p><p><strong>No individual data export</strong>: Organizational cohort features show aggregate patterns &#8212; team averages, dimension distributions, improvement trends. 
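</p><p>PAICE&#8217;s implementation is not public, but the aggregate-only reporting pattern can be sketched in a few lines of Python. The function, field names, and coarse score bands below are illustrative assumptions &#8212; only the 20-member cohort minimum comes from the guidance above:</p>

```python
from statistics import median

MIN_COHORT_SIZE = 20  # smallest cohort the aggregate view will report on


def cohort_summary(scores_by_dimension):
    """Return aggregate statistics only; individual scores never leave this function.

    scores_by_dimension maps a dimension name (e.g. "Integrity") to the
    list of individual scores (0-1000) for one cohort.
    """
    summary = {}
    for dimension, scores in scores_by_dimension.items():
        if len(scores) < MIN_COHORT_SIZE:
            # Suppress aggregates for small cohorts rather than risk
            # re-identifying individuals.
            summary[dimension] = None
            continue
        summary[dimension] = {
            "n": len(scores),
            "median": median(scores),
            # Report coarse 100-point bands, not exact extremes, so the
            # highest and lowest scorers cannot be singled out.
            "low_band": min(scores) // 100 * 100,
            "high_band": max(scores) // 100 * 100,
        }
    return summary
```

<p>The deliberate omission is any per-person view: the return value carries a cohort size, a median, and banded extremes &#8212; enough to answer &#8220;what should we hire or train for?&#8221; without answering &#8220;what did this person score?&#8221;</p><p>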
You see that &#8220;your team&#8217;s median Integrity score is 450&#8221; not &#8220;Jane scored 320 and Bob scored 580.&#8221;</p><p><strong>Separation of email and scores</strong>: If users provide their email address (optional), it&#8217;s stored separately from their assessment data. We use a temporary session ID to send results, then discard the connection. There&#8217;s no database you could query to retrieve &#8220;all scores for candidates who applied to job posting X.&#8221;</p><p><strong>No individual-level comparisons</strong>: The platform doesn&#8217;t support comparing individuals or tracking individual performance over time. It does support tracking cohort-level trends, which is exactly what organizations need for development and hiring decisions.</p><p>This architecture reflects our belief that AI collaboration skills should be developed, not weaponized. We&#8217;ve built privacy protections that make it hard to use PAICE in ways that could be perceived as harmful, even if an organization wanted to.</p><p>We are committed to mitigating the kinds of liability and reputational risks that come with inappropriate use of AI. This is why we have built in safeguards to prevent misuse from the ground up.</p><p>For more on how we handle data, see our <a href="https://paice.work/blog/your-data-your-privacy">Privacy and Data Practices</a>.</p><h2><strong>Related Questions</strong></h2><h3><strong>&#8220;Can we require employees to take PAICE?&#8221;</strong></h3><p>Likely yes, for development purposes and with appropriate framing.</p><p>PAICE can be a valuable tool for tracking progress in onboarding or professional development programs. This works well when positioned as a baseline measurement and growth tool, not as an evaluation or ranking mechanism. Be explicit that PAICE scores are for individual development, not performance review.
What people see in their results is their own growth path, not a comparison to others, and that information is not viewable by anyone else.</p><h3><strong>&#8220;Will you ever support hiring use cases?&#8221;</strong></h3><p>We believe skills-based hiring has merit, and we understand why organizations want objective capability data about candidates. However, PAICE is specifically designed as a development tool. Our methodology, scoring approach, and privacy architecture all optimize for helping individuals improve rather than comparing them.</p><p>Though we have nothing like this on the roadmap today, if we ever did build hiring-related features, they would likely focus on role requirements and team capabilities instead of individual candidate scores.</p><h3><strong>&#8220;What about promotion decisions?&#8221;</strong></h3><p>The same guidance applies. PAICE measures current behavioral patterns to support individual development and the assessment of organizational risk. Using scores as promotion criteria would create the same gaming incentives and adverse selection problems as using them for hiring.</p><p>What PAICE <em>can</em> support is promotion readiness conversations. A manager might use their own team capability data to identify development priorities: &#8220;For that manager role, you&#8217;ll need strong verification habits because you&#8217;ll have a lot to verify. Here&#8217;s a development plan to build them, and you can always use PAICE to track your progress.&#8221; The score is a starting point for growth and an easy way to self-calibrate. It is not a permission gate to pass through.</p><div><hr></div><p><em>Want to understand your team&#8217;s AI collaboration baseline?
<a href="https://paice.work/contact">Contact us</a> to learn about organizational assessments, or read our <a href="https://paice.work/privacy">Privacy Policy</a> for details on how we protect team data.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Ethics &amp; Policy:</strong></p><ul><li><p><a href="https://paice.work/blog/ethics-of-ai-collaboration">The Ethics of AI Collaboration</a> &#8212; Framework for ethical AI use in professional contexts</p></li><li><p><a href="https://paice.work/blog/your-data-your-privacy">Privacy and Data Practices</a> &#8212; How PAICE protects user data</p></li></ul><p>&#128214; <strong>For Organizations:</strong></p><ul><li><p><a href="https://paice.work/blog/paice-for-teams-coming-soon">PAICE for Teams</a> &#8212; Upcoming organizational assessment features</p></li><li><p><a href="https://paice.work/blog/ai-collaboration-for-managers">AI Collaboration for Managers</a> &#8212; Leading teams in the AI era</p></li></ul>]]></content:encoded></item><item><title><![CDATA[Weekly Update - March 16, 2026]]></title><description><![CDATA[Modular skills, whitepaper to peer review, and behind-the-scenes momentum.]]></description><link>https://paice.substack.com/p/weekly-update-march-16-2026</link><guid isPermaLink="false">https://paice.substack.com/p/weekly-update-march-16-2026</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Mon, 16 Mar 2026 18:01:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/411d61be-5a86-489e-81c5-e9e195b04756_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A lighter week on the main branch, but a productive one.
Our third whitepaper is complete and in peer review ahead of ISPI 2026. We published five new posts, including two industry guides and a video, completed a meaningful infrastructure refactor, and continued advancing payment integration, internationalization, and deeper benchmarking work on feature branches.</p><h2><strong>Content Published Last Week</strong></h2><p><strong>Monday</strong> (Mar 9): <a href="https://paice.work/blog/update-2026-03-09">&#8220;Weekly Update - March 9, 2026&#8221;</a></p><p><strong>Tuesday</strong> (Mar 10): <a href="https://paice.work/blog/ai-collaboration-insurance-industry">&#8220;AI Collaboration in Insurance&#8221;</a> A practical guide for insurance professionals on leveraging AI collaboration for underwriting, claims processing, and risk assessment while maintaining regulatory compliance.</p><p><strong>Wednesday</strong> (Mar 11): <a href="https://paice.work/blog/what-paice-tests-for">&#8220;What PAICE Is Actually Testing For&#8221;</a> It&#8217;s not about AI knowledge&#8212;PAICE watches for specific behaviors during your conversation, whether you question confident claims, ask for sources, or catch things that don&#8217;t add up.</p><p><strong>Thursday</strong> (Mar 12): <a href="https://paice.work/blog/ai-collaboration-cybersecurity-professionals">&#8220;AI Collaboration for Cybersecurity Professionals&#8221;</a> How cybersecurity professionals can build effective AI collaboration practices for threat analysis and incident response without compromising verification rigor.</p><p><strong>Friday</strong> (Mar 13): Video - <a href="https://paice.work/blog/ai-adoption-illusion-collaboration-gap">&#8220;The AI Adoption Illusion&#8221;</a> A 7-minute NotebookLM deep dive into why AI adoption metrics measure software activity, not human competence&#8212;and why verification judgment is the real predictor of safe AI integration.</p><p class="button-wrapper"
data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/weekly-update-march-16-2026?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/weekly-update-march-16-2026?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>Technical Improvements</strong></h2><h3><strong>Skills Architecture Refactor</strong></h3><p>Replaced all copied skill bundles with symlinks to their standalone repositories and added a <code>link-skills.sh</code> setup script. This removed approximately 4,500 lines of duplicated content from the main repo and added three new skill links. Each skill is now versioned independently (via the <a href="https://github.com/snapsynapse/skill-provenance">skill-provenance</a> standard), reducing merge conflicts and enabling reuse across projects. Context files were preserved and renamed for quick reference.</p><h2><strong>Upcoming Whitepaper: Closing the Collaboration Gap</strong></h2><p>Our third whitepaper, <em>&#8220;Closing the Collaboration Gap: A Behavioral Skill Framework for Human-AI Performance Improvement,&#8221;</em> is now complete and undergoing peer review. The paper presents the behavioral science foundation behind PAICE&#8217;s five-dimension assessment model, examining how organizations can move beyond adoption metrics to measure and improve the human skills that determine whether AI collaboration succeeds or fails.</p><p>We&#8217;re preparing for release on <strong>March 31</strong> at the <a href="https://ispi.org/">International Society for Performance Improvement (ISPI)</a> 2026 conference.
This is an audience deeply aligned with PAICE&#8217;s mission of measurable human performance in the age of AI.</p><h2><strong>Work in Progress</strong></h2><p>Several feature branches are advancing toward future releases:</p><ul><li><p><strong>Payment Integrations</strong>: Lemon Squeezy webhook parsing, token hashing, and premium feature gating are functional and under testing. This will unlock Pro-tier capabilities for individual users as well as teams and organizations. Stripe already works independently with integration efforts advancing, and we&#8217;re exploring a new token-based payment model for cohort assessments.</p></li><li><p><strong>Internationalization</strong>: Behind the scenes, PAICE.work has already learned to speak <em>Urdu</em> for the assessment flow. Spanish, French, and Portuguese internationalization will soon follow in successive releases. These represent our first steps toward multilingual assessments that businesses in the Western Hemisphere can use to evaluate AI collaboration capability across their global workforce.</p></li><li><p><strong>Cohort Management</strong>: Cohort creation, membership management, and assessment scheduling are functional and under testing. We&#8217;re working toward SSO integration and a new cohort dashboard that will include a cohort-specific leaderboard.</p></li></ul><div class="directMessage button" data-attrs="{&quot;userId&quot;:23656692,&quot;userName&quot;:&quot;Sam Rogers&quot;,&quot;canDm&quot;:null,&quot;dmUpgradeOptions&quot;:null,&quot;isEditorNode&quot;:true}" data-component-name="DirectMessageToDOM"></div><h2><strong>Open Source Projects</strong></h2><p>We continue to extract and open-source tools and standards that grew out of our internal PAICE development. 
Recent additions to our GitHub:</p><ul><li><p><strong><a href="https://github.com/snapsynapse/ai-capability-reference">AI Capability Reference</a></strong> &#8212; A structured reference for AI capability tiers, designed to support assessment design and scoring calibration across different AI maturity levels. Now with MCP and JSON endpoints so that PAICE.work can call the most recent news directly.</p></li><li><p><strong><a href="https://github.com/snapsynapse/hardguard25">Hardguard25</a></strong> &#8212; The token-based identity and session integrity standard we use for cohort assessments. Designed for privacy-first environments where traditional auth is inappropriate. This is the token standard that powers PAICE.work&#8217;s cohort assessments.</p></li><li><p><strong><a href="https://github.com/snapsynapse/skill-a11y-audit">Skill A11y Audit</a></strong> &#8212; An agent skill for automated accessibility auditing, now a standalone repo following this week&#8217;s skills architecture refactor. This is what allows PAICE.work to audit its own accessibility frequently.</p></li><li><p><strong><a href="https://github.com/snapsynapse/skill-provenance">Skill Provenance</a></strong> &#8212; Metaskill for tracking agent skill identity, staleness, and intent across sessions, surfaces, and platforms. Every skill now includes a <code>provenance.json</code> file that describes its purpose, version, and dependencies. 
Rated 8.4/10 in the ProSkills catalog of top 100 agentic skills last Tuesday.</p></li><li><p><strong><a href="https://github.com/snapsynapse/paice-near-integration">PAICE&#8211;NEAR Integration</a></strong> &#8212; Our TEE-protected inference and on-chain attestation integration, first showcased at NEARCON 2026 last month and highlighted in <a href="https://paice.work/whitepapers/privacy-security">our second whitepaper</a> <em>&#8220;Verifiable Human-AI Collaboration: Privacy-Preserving Assessment with Cryptographic Integrity&#8221;</em>, which describes our Confidential Mode running on NEAR&#8217;s Confidential Compute.</p></li></ul><p>Each of these is a work in progress, and we welcome contributions.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/weekly-update-march-16-2026/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/weekly-update-march-16-2026/comments"><span>Leave a comment</span></a></p><h2><strong>Platform Stability</strong></h2><p>Platform maintained 100% uptime with no incidents. 
All systems operating normally: standard and Confidential Mode assessments, results generation with on-chain attestation, cohort management, email notifications, and analytics processing.</p><h2><strong>What&#8217;s Next</strong></h2><p><strong>This week:</strong> Continue payment integration testing, begin preparing the internationalization pipeline for assessment prompts, and continue publishing daily content exploring AI governance and organizational readiness themes.</p><h2><strong>The Week in Numbers</strong></h2><ul><li><p>5 blog posts published (1 video + 2 industry guides + 1 assessment guide + 1 weekly update)</p></li><li><p>3 commits merged to main</p></li><li><p>70 files changed, ~4,500 lines of skill duplication removed</p></li><li><p>3 feature branches advancing in parallel</p></li><li><p>100% uptime, zero incidents</p></li></ul><h2><strong>Why This Week Matters</strong></h2><p>Not every week produces a headline feature, and that&#8217;s by design. The skills architecture refactor is the kind of infrastructure investment that pays compound interest: faster skill iteration, cleaner version control, and the ability to share agent capabilities across projects without copy-paste drift. Meanwhile, payment integration and internationalization work on feature branches signal the next phase of PAICE&#8212;sustainable revenue and broader accessibility. The foundation keeps getting stronger even when the surface looks quiet.</p><h2><strong>Thank You</strong></h2><p>To the organizations and individuals following our progress: your patience during infrastructure weeks and your enthusiasm during feature launches both matter equally.
We&#8217;re building for the long term, and every week moves us closer.</p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/baseline">Get a Team Baseline</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/update-2026-03-09">Weekly Update - March 9, 2026</a></p></li><li><p><a href="https://paice.work/blog/ai-adoption-illusion-collaboration-gap">The AI Adoption Illusion</a></p></li><li><p><a href="https://paice.work/blog/ai-collaboration-insurance-industry">AI Collaboration in Insurance</a></p></li><li><p><a href="https://paice.work/blog/what-paice-tests-for">What PAICE Is Actually Testing For</a></p></li><li><p><a href="https://paice.work/blog/ai-collaboration-cybersecurity-professionals">AI Collaboration for Cybersecurity Professionals</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[The AI Adoption Illusion]]></title><description><![CDATA[Deconstructing the Collaboration Gap]]></description><link>https://paice.substack.com/p/the-ai-adoption-illusion</link><guid isPermaLink="false">https://paice.substack.com/p/the-ai-adoption-illusion</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Fri, 13 Mar 2026 20:52:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190880540/95d0d00bac7be188105485a9bbbfcc2a.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Enterprise teams are deploying AI tools at an explosive rate: spinning up large language models, connecting them to internal systems, and rolling out autonomous agents across entire departments. Leadership tracks usage dashboards, counts API calls, and monitors completion rates of mandatory training modules.</p><p>But these metrics measure software activity, not human competence. 
This AI-generated <a href="https://notebooklm.google.com/">NotebookLM</a> video explores why that distinction matters, and why the window for unregulated experimental AI adoption is rapidly closing.</p><p><a href="https://youtu.be/dewXr5otHGE">Watch on YouTube &#8594;</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/the-ai-adoption-illusion?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/the-ai-adoption-illusion?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>The Dangerous Blind Spot</strong></h2><p>A 100% training completion rate confirms an employee clicked through a slide deck, not whether they can evaluate AI output. Executives look at dashboards full of green checkmarks and assume their workforce is ready when they have only verified adoption volume.</p><p>This creates a massive, highly complex technological deployment resting entirely on the untested judgment of the human workforce.</p><h2><strong>How AI Fails Differently</strong></h2><p>For decades, humans have been conditioned to anticipate loud, obvious system errors.
Traditional software breaks with glaring 404 errors, forcing you to stop.</p><p><strong>Large language models fail politely</strong> when generating confident, seamlessly formatted responses where fabricated statistics blend perfectly into the text. This exploits a specific vulnerability in human cognitive architecture: our brains are neurologically wired to accept authoritative-sounding information, especially when working under tight deadlines.</p><h2><strong>The Self-Assessment Problem</strong></h2><p>Unlike past technologies, AI systems provide constant positive reinforcement. They never tell a user they asked a poor question or accepted a false premise. Because the interaction is completely frictionless, professionals rarely realize they are making critical evaluation mistakes.</p><p>Over time, this psychological feedback loop scales across departments. Employees begin to rate their own AI skills highly, creating an inflated perception of competence throughout the organization. The AI cannot self-report its hallucinations, and human operators overestimate their ability to catch them.</p><h2><strong>Why Corporate Training Fails</strong></h2><p>The standard corporate response is a three-step playbook: write a policy, roll out mandatory training, and ask employees to sign an attestation.</p><p>But traditional corporate training targets a single variable: <strong>knowledge</strong>. It ensures employees know the rules exist. Performance, however, happens within an environment driven by deadlines, competing priorities, and workflows that demand speed over accuracy.</p><p>A professional who can correctly answer a multiple-choice quiz about AI safety on Monday might still accept a plausible-sounding hallucination without question on Tuesday when facing a tight project deadline.</p><h2><strong>The Measurement Challenge</strong></h2><p>To understand true collaboration capability, you cannot rely on tests of theoretical recall. 
<strong>You have to measure observable action.</strong></p><p>The challenge is capturing that exact moment of human-AI interaction objectively. Doing so in the real world usually requires invasive employee surveillance or reading private work product, which introduces entirely new liabilities.</p><h2><strong>Strategic Failure Injection</strong></h2><p>The core mechanism that makes objective measurement possible is <strong>strategic failure injection</strong>:</p><ol><li><p>Users bring a real task from their own context, engaging naturally</p></li><li><p>During the conversation, the system introduces a controlled failure&#8212;a highly confident, plausible error</p></li><li><p>It maps the behavioral response, tracking whether the user blindly accepts the false information or independently verifies the claim</p></li></ol><p>These reactions generate a behavioral baseline across five dimensions: Performance, Accountability, Integrity, Collaboration, and Evolution.</p><p><strong>Accountability is consistently the lowest scoring metric.</strong> Because of its critical role in risk awareness, it carries the highest weight at 30%.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/the-ai-adoption-illusion/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/the-ai-adoption-illusion/comments"><span>Leave a comment</span></a></p><h2><strong>Calibrated Skepticism</strong></h2><p>The ultimate goal is developing <strong>calibrated skepticism</strong>&#8212;the mature professional judgment required to know when a task is safe to delegate to AI and when it demands rigorous human verification based on domain expertise.</p><p>This proves that the ultimate predictor of safe, scalable enterprise AI integration is not basic prompt engineering. 
<strong>It is human verification judgment.</strong></p><h2><strong>The Hidden Costs</strong></h2><p>When verification judgment is absent, unchecked AI reliance has a compounding economic impact:</p><p><strong>Standard ROI calculation:</strong> Hours saved &#215; labor rate - software cost = positive return</p><p><strong>But this ignores hidden costs:</strong></p><ul><li><p><strong>Skill atrophy</strong> - Human expertise degrades over time</p></li><li><p><strong>Quality debt</strong> - Accumulation of unverified errors compounding in corporate systems</p></li></ul><p>Unchecked AI adoption destroys long-term capability. It hollows out institutional knowledge while introducing a systemic fragility that standard risk metrics cannot see.</p><h2><strong>The Regulatory Shift</strong></h2><p>The window for unregulated experimental AI adoption is closing. Global regulators, corporate auditors, and enterprise insurers are aggressively shifting their posture&#8212;moving away from assuming trust and demanding rigorous verification.</p><p><strong>Providing objective evidence of human capability is no longer an optional corporate preference. It is rapidly becoming a strict compliance requirement.</strong></p><p>Organizations that merely manage software updates and write static policies will fall behind. The competitive advantage belongs to those that rigorously measure and manage human collaboration behavior.</p><h2><strong>What PAICE Measures</strong></h2><p>The PAICE framework addresses the collaboration gap by measuring actual behavior across five dimensions:</p><p><strong>1. Performance</strong> - Did they produce high-quality results under realistic conditions?</p><p><strong>2. Accountability</strong> - Do they verify and own their work, or blindly accept AI outputs?</p><p><strong>3. Integrity</strong> - Do they make ethical choices under pressure?</p><p><strong>4. Collaboration</strong> - How effectively do they partner with AI systems?</p><p><strong>5. 
Evolution</strong> - Are they continuously improving their collaboration skills?</p><h2><strong>The Path Forward</strong></h2><p>Organizations must:</p><p><strong>Step 1: Acknowledge the Gap</strong> - Adoption metrics measure activity, not competence. High adoption with low capability is dangerous.</p><p><strong>Step 2: Establish a Baseline</strong> - Measure actual collaboration capability using privacy-preserving architectures that ensure individual behavioral data is used for development, not performance management.</p><p><strong>Step 3: Target Development</strong> - Use capability data to guide training and support efforts where they&#8217;ll have the most impact.</p><p><strong>Step 4: Track Progress</strong> - Monitor capability improvement over time, not just tool usage.</p><p><strong>Step 5: Build Accountability</strong> - Create systems that reward effective AI collaboration and verification judgment.</p><h2><strong>The Bottom Line</strong></h2><p>Future enterprise success depends on the systemic measurement of human capability. The winners will be those who treat verification judgment with the same rigor they currently apply to their technical stack.</p><p>Anyone can buy the same AI tools. Your competitors have access to the same technology. The tools themselves are just table stakes. 
But building a measurable, proven capability for People+AI collaboration&#8212;that&#8217;s a genuine competitive advantage that&#8217;s difficult to replicate.</p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/partner">Explore the Founding Partner Program</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://www.youtube.com/@paicework">Subscribe to our YouTube channel</a></p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/the-measurement-gap">The Measurement Gap: Why Traditional Metrics Fail for AI Collaboration</a></p></li><li><p><a href="https://paice.work/blog/your-ai-policy-is-not-enough">Your AI Policy Is Not Enough</a></p></li><li><p><a href="https://paice.work/blog/executive-guide-ai-collaboration-readiness">Executive Guide to AI Collaboration Readiness</a></p></li><li><p><a href="https://paice.work/blog/ai-collaboration-master-skill-2026">AI Collaboration Is the Master Skill for 2026</a></p></li></ul>]]></content:encoded></item><item><title><![CDATA[AI Collaboration for Cybersecurity Professionals]]></title><description><![CDATA[Threat Intelligence, Incident Response, and the Accountability Imperative]]></description><link>https://paice.substack.com/p/ai-collaboration-for-cybersecurity</link><guid isPermaLink="false">https://paice.substack.com/p/ai-collaboration-for-cybersecurity</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Fri, 13 Mar 2026 01:00:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/51b4bfa1-5c40-4b45-bcc0-e582e78a4dc8_2816x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<h2><strong>When Every Output Carries Risk</strong></h2><p>Jordan, a cybersecurity analyst, receives an alert at 2 AM. Network traffic patterns suggest a potential data exfiltration attempt. They turn to an AI assistant to help analyze the packet captures, cross-reference indicators of compromise, and draft an initial incident report. 
The AI produces a confident, well-structured analysis.</p><p>But here&#8217;s the question that separates effective People+AI collaboration from dangerous over-reliance: <strong>How do you verify the analysis before acting on it?</strong></p><p>Cybersecurity is one of the fields where AI collaboration carries the highest stakes. A missed indicator of compromise can mean a breach goes undetected. A false positive can trigger an expensive incident response that disrupts business operations. And unlike many professions, cybersecurity professionals face both technical and regulatory accountability for their decisions.</p><p>This guide explores how cybersecurity professionals can build effective AI collaboration practices that enhance their capabilities without compromising the verification rigor their field demands.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/ai-collaboration-for-cybersecurity?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/ai-collaboration-for-cybersecurity?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>The Unique Position of Cybersecurity</strong></h2><h3><strong>Licensed, Liable, and Under Pressure</strong></h3><p>Cybersecurity professionals share a characteristic with lawyers, clinicians, and financial advisors: they are individually accountable for outcomes. 
A CISO who relies on AI-generated risk assessments without verification isn&#8217;t just making a professional mistake; they may be violating regulatory obligations under frameworks like NIST CSF, SOC 2, or industry-specific mandates like HIPAA or PCI DSS.</p><p>This creates a specific dynamic for AI collaboration:</p><ul><li><p><strong>Speed matters</strong>: Threats don&#8217;t wait for careful deliberation. AI can accelerate analysis significantly.</p></li><li><p><strong>Accuracy is non-negotiable</strong>: A wrong answer isn&#8217;t just unhelpful, it can be actively dangerous.</p></li><li><p><strong>Audit trails are required</strong>: Many compliance frameworks require documented evidence of how decisions were made.</p></li><li><p><strong>Adversarial context</strong>: Unlike most fields, cybersecurity professionals work against intelligent adversaries who actively try to deceive detection systems, including AI-powered ones.</p></li></ul><h3><strong>Where AI Collaboration Adds Genuine Value</strong></h3><p>AI collaboration in cybersecurity isn&#8217;t about replacing analyst judgment. 
It&#8217;s about augmenting the analyst&#8217;s capacity to process information at scale while preserving the critical thinking that no automated system can replicate.</p><p><strong>High-value collaboration areas:</strong></p><ul><li><p>Parsing and correlating large volumes of log data</p></li><li><p>Identifying patterns across disparate data sources</p></li><li><p>Generating initial drafts of compliance documentation</p></li><li><p>Exploring attack scenarios and threat models</p></li><li><p>Translating technical findings for non-technical stakeholders</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/subscribe?"><span>Subscribe now</span></a></p><h2><strong>Threat Intelligence and Analysis</strong></h2><h3><strong>Your Research Partner, Not Your Analyst</strong></h3><p>AI assistants excel at helping cybersecurity professionals process threat intelligence feeds, research emerging vulnerabilities, and correlate indicators of compromise across multiple sources. 
The key distinction is using AI as a research accelerator, not as the decision-maker.</p><p><strong>Effective Collaboration Pattern:</strong></p><ol><li><p><strong>Present the data</strong>: Share relevant log entries, network captures, or alert details</p></li><li><p><strong>Request structured analysis</strong>: Ask for potential explanations, ranked by likelihood</p></li><li><p><strong>Challenge the output</strong>: Ask what alternative explanations the AI hasn&#8217;t considered</p></li><li><p><strong>Cross-reference independently</strong>: Verify key claims against authoritative sources (CVE databases, vendor advisories, MITRE ATT&amp;CK)</p></li><li><p><strong>Document your reasoning</strong>: Record which AI suggestions you accepted, which you rejected, and why</p></li></ol><p><strong>What It Looks Like</strong>: An analyst reviewing suspicious DNS queries can use AI to quickly categorize query patterns, identify known malicious domains, and draft a timeline. But the analyst must independently verify the domain reputation data and confirm the AI hasn&#8217;t confused benign CDN traffic with command-and-control communication.</p><p><strong>Why It Matters</strong>: AI models are trained on historical data. Sophisticated adversaries design novel attack techniques and zero-day exploits specifically to evade pattern-based detection. An AI that confidently identifies traffic as &#8220;benign&#8221; based on historical patterns may be wrong precisely when it matters most.</p><h2><strong>Incident Response Partnerships</strong></h2><h3><strong>Accelerating Without Cutting Corners</strong></h3><p>During an active incident, time pressure is intense. 
AI collaboration can significantly accelerate the response cycle, but the stakes of getting it wrong are also highest during an incident.</p><p><strong>Where AI Helps During Incidents:</strong></p><ul><li><p><strong>Log analysis at scale</strong>: Processing thousands of log entries to identify the initial compromise vector</p></li><li><p><strong>Timeline construction</strong>: Building a chronological narrative from disparate data sources</p></li><li><p><strong>Communication drafting</strong>: Creating stakeholder notifications, regulatory disclosures, and internal briefings</p></li><li><p><strong>Playbook execution</strong>: Walking through established incident response procedures step by step</p></li><li><p><strong>Scope assessment</strong>: Identifying potentially affected systems based on network topology and access patterns</p></li></ul><p><strong>Where Human Judgment Remains Essential:</strong></p><ul><li><p><strong>Containment decisions</strong>: Isolating systems affects business operations. The trade-off analysis requires organizational context AI doesn&#8217;t have.</p></li><li><p><strong>Attribution assessment</strong>: Determining who is behind an attack involves geopolitical context and intelligence that AI should not be trusted to evaluate independently.</p></li><li><p><strong>Regulatory notification timing</strong>: Deciding when and how to notify regulators involves legal judgment that varies by jurisdiction.</p></li><li><p><strong>Evidence preservation</strong>: Forensic integrity requires strict chain-of-custody procedures that must be verified by qualified professionals.</p></li></ul><h3><strong>The False Confidence Trap</strong></h3><p>During high-pressure incidents, AI-generated analysis that sounds authoritative can create a dangerous sense of false confidence. 
The AI might present a root cause analysis with technical precision that masks fundamental uncertainty.</p><p><strong>Counter This By:</strong></p><ul><li><p>Explicitly asking &#8220;What assumptions are you making in this analysis?&#8221;</p></li><li><p>Requesting confidence levels for each conclusion</p></li><li><p>Assigning a team member to specifically challenge AI-generated conclusions</p></li><li><p>Documenting AI-assisted findings separately from independently verified findings</p></li></ul><h2><strong>Security Code Review and Vulnerability Assessment</strong></h2><h3><strong>A Force Multiplier for AppSec</strong></h3><p>Application security teams are perpetually understaffed. AI collaboration offers a genuine force multiplier for code review, but with important caveats about the types of vulnerabilities AI can and cannot reliably detect.</p><p><strong>AI Excels At:</strong></p><ul><li><p>Identifying common vulnerability patterns (SQL injection, XSS, path traversal)</p></li><li><p>Reviewing code against established security standards (OWASP Top 10)</p></li><li><p>Suggesting secure coding alternatives for flagged patterns</p></li><li><p>Generating test cases for identified vulnerability classes</p></li><li><p>Explaining complex code paths to junior security analysts</p></li></ul><p><strong>AI Struggles With:</strong></p><ul><li><p>Business logic vulnerabilities (authentication bypass through workflow manipulation)</p></li><li><p>Race conditions and timing-dependent vulnerabilities</p></li><li><p>Context-dependent authorization flaws</p></li><li><p>Supply chain risks in dependency chains</p></li><li><p>Novel vulnerability classes that don&#8217;t match known patterns</p></li></ul><p><strong>Effective Practice</strong>: Use AI for an initial pass to catch common patterns, then focus human review time on the business logic, authorization boundaries, and architectural decisions where AI&#8217;s limitations are most pronounced.</p><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/ai-collaboration-for-cybersecurity/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/ai-collaboration-for-cybersecurity/comments"><span>Leave a comment</span></a></p><h2><strong>The Accountability Challenge</strong></h2><h3><strong>Documenting AI-Assisted Decisions</strong></h3><p>For cybersecurity professionals operating under compliance frameworks, documenting how AI contributed to security decisions isn&#8217;t optional. It&#8217;s a regulatory requirement in many contexts.</p><p><strong>A practical documentation approach:</strong></p><ol><li><p><strong>Record the input</strong>: What data or question was provided to the AI</p></li><li><p><strong>Record the output</strong>: What the AI suggested or concluded</p></li><li><p><strong>Record the verification</strong>: How the suggestion was independently verified</p></li><li><p><strong>Record the decision</strong>: What action was taken and why</p></li><li><p><strong>Record the outcome</strong>: What happened as a result</p></li></ol><p>This documentation serves multiple purposes: it satisfies audit requirements, creates a learning record for improving future collaboration, and provides defensible evidence that professional judgment, not blind AI reliance, drove the decision.</p><h3><strong>When AI Gets It Wrong</strong></h3><p>Every cybersecurity professional using AI collaboration will encounter situations where the AI provides incorrect or misleading analysis. 
What matters is not whether this happens, but how quickly and reliably you detect it.</p><p><strong>Red flags to watch for:</strong></p><ul><li><p>AI confidently identifying a vulnerability class that doesn&#8217;t apply to the technology in question</p></li><li><p>Incident analysis that perfectly matches a textbook scenario (real incidents are rarely textbook)</p></li><li><p>Recommendations that contradict established security principles without acknowledging the deviation</p></li><li><p>Threat assessments that don&#8217;t account for the specific organizational context</p></li></ul><h2><strong>Building Your Cybersecurity AI Collaboration Practice</strong></h2><h3><strong>Start With Low-Stakes Tasks</strong></h3><p>Before relying on AI collaboration during a critical incident, build familiarity through lower-stakes activities:</p><ul><li><p><strong>Documentation</strong>: Use AI to draft security policies, procedures, and training materials. Review carefully, but the cost of an error is revision, not a breach.</p></li><li><p><strong>Training scenarios</strong>: Have AI generate realistic tabletop exercise scenarios. The creative process benefits from AI input, and any inaccuracies become teaching moments.</p></li><li><p><strong>Research synthesis</strong>: Use AI to summarize threat intelligence reports, vendor advisories, and industry analyses. Cross-reference key claims.</p></li><li><p><strong>Report writing</strong>: Draft compliance reports, risk assessments, and board-level summaries. 
AI can help translate technical findings into business language.</p></li></ul><h3><strong>Establish Verification Protocols</strong></h3><p>Before your team adopts AI collaboration for security-critical tasks, establish clear protocols:</p><ul><li><p><strong>Mandatory verification requirements</strong>: Define which types of AI output must be independently verified before action</p></li><li><p><strong>Escalation criteria</strong>: Specify when AI-assisted analysis must be reviewed by a senior analyst</p></li><li><p><strong>Documentation standards</strong>: Set expectations for recording AI contributions to security decisions</p></li><li><p><strong>Feedback loops</strong>: Create mechanisms for reporting AI errors so the team learns collectively</p></li></ul><h3><strong>Measure Your Collaboration Effectiveness</strong></h3><p>The goal of AI collaboration in cybersecurity isn&#8217;t to use AI more. It&#8217;s to make better security decisions, faster, with better documentation. Track metrics that reflect this:</p><ul><li><p>Mean time to detect and respond (has AI collaboration reduced it?)</p></li><li><p>False positive rates in AI-assisted analysis versus manual analysis</p></li><li><p>Audit findings related to decision documentation</p></li><li><p>Team capacity for proactive security work (has AI freed up time from routine tasks?)</p></li></ul><div><hr></div><p><em>Want to understand your own readiness profile? <a href="https://paice.work/">Take the PAICE assessment</a> to discover your strengths and opportunities.</em></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/ai-collaboration-for-cybersecurity?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/ai-collaboration-for-cybersecurity?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/ai-collaboration-for-cybersecurity?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2><strong>Recommended Reading</strong></h2><p>&#128214; <strong>Industry Guides:</strong></p><ul><li><p><a href="https://paice.work/blog/ai-collaboration-legal-professionals">AI Collaboration for Legal Professionals</a> - Verification practices for licensed professionals</p></li><li><p><a href="https://paice.work/blog/ai-collaboration-healthcare-patient-safety">AI Collaboration for Healthcare and Patient Safety</a> - High-stakes collaboration in clinical settings</p></li></ul><p>&#128214; <strong>Building Your Practice:</strong></p><ul><li><p><a href="https://paice.work/blog/building-your-ai-collaboration-toolkit">Building Your AI Collaboration Toolkit</a> - Practical tools and workflows for effective collaboration</p></li><li><p><a href="https://paice.work/blog/why-accountability-scores-lower">Why Accountability Scores Lower Than You Expect</a> - The most critical dimension for regulated professionals</p></li></ul>]]></content:encoded></item><item><title><![CDATA[What PAICE Is Actually Testing For]]></title><description><![CDATA[What to notice, what to say, and what actually matters]]></description><link>https://paice.substack.com/p/what-paice-is-actually-testing-for</link><guid isPermaLink="false">https://paice.substack.com/p/what-paice-is-actually-testing-for</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Wed, 11 Mar 2026 19:01:46 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/0c1ffc6f-294f-4edf-8000-5096da6444eb_2848x1504.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>People often ask: &#8220;What should I do during the PAICE assessment?&#8221;</p><p>Fair question. Here&#8217;s the honest answer: we&#8217;re watching for specific behaviors during your conversation. Things you notice. Things you say. Things you question&#8230;or don&#8217;t.</p><p>This isn&#8217;t a knowledge test. It&#8217;s a window into how you actually work with AI.</p><h2><strong>What We&#8217;re Watching For</strong></h2><h3><strong>When Something Doesn&#8217;t Add Up</strong></h3><p>Say the AI tells you that a particular framework &#8220;was developed in 2019 and has been adopted by over 60% of Fortune 500 companies.&#8221;</p><p>That might be true. Or it might not be.</p><p><strong>What high scorers do:</strong> They notice when a statistic sounds suspiciously perfect. They say things like:</p><ul><li><p><em>&#8220;Can you point me to a source for that 60% figure?&#8221;</em></p></li><li><p><em>&#8220;That number seems suspect, where did that come from?&#8221;</em></p></li><li><p><em>&#8220;I&#8217;d want to verify that before including it anywhere.&#8221;</em></p></li></ul><p><strong>What low scorers do:</strong> They accept the claim and move on. Or they vaguely note &#8220;I&#8217;ll fact-check this later&#8221; without actually doing anything.</p><p>The difference isn&#8217;t paranoia. It&#8217;s habit. High scorers have learned that AI can sound confident about things it made up.</p><h3><strong>When the AI Makes an Assumption</strong></h3><p>AI will often fill in gaps with assumptions. It might assume you&#8217;re in a certain industry, or that your audience has certain knowledge, or that a particular approach applies to your situation.</p><p><strong>What high scorers do:</strong> They catch the assumption and name it. 
Things like:</p><ul><li><p><em>&#8220;Actually, my audience is mostly non-technical.&#8221;</em></p></li><li><p><em>&#8220;That assumes we have budget flexibility, which we don&#8217;t.&#8221;</em></p></li><li><p><em>&#8220;Wait &#8212; that doesn&#8217;t apply here because we&#8217;re in a regulated industry.&#8221;</em></p></li></ul><p><strong>What low scorers do:</strong> They let the assumption pass without comment, even when it doesn&#8217;t fit their situation.</p><h3><strong>When the Output Needs Improvement</strong></h3><p>The first response from AI is rarely perfect. It&#8217;s a starting point.</p><p><strong>What high scorers do:</strong> They give specific feedback to improve it:</p><ul><li><p><em>&#8220;This is too formal &#8212; can you make it more conversational?&#8221;</em></p></li><li><p><em>&#8220;Good structure, but the second paragraph is too vague. Add a concrete example.&#8221;</em></p></li><li><p><em>&#8220;Cut the first two sentences. Get to the point faster.&#8221;</em></p></li></ul><p><strong>What low scorers do:</strong> They vaguely say &#8220;make it better&#8221; or &#8220;not quite right&#8221; without specifics. Or they just accept whatever comes back.</p><h3><strong>When Something Contradicts What You Know</strong></h3><p>Sometimes AI will tell you something that conflicts with your own knowledge or experience.</p><p><strong>What high scorers do:</strong> They push back and say why:</p><ul><li><p><em>&#8220;That doesn&#8217;t match my experience &#8212; in my field, the standard approach is...&#8221;</em></p></li><li><p><em>&#8220;I&#8217;m not sure that&#8217;s accurate. The regulation actually requires...&#8221;</em></p></li><li><p><em>&#8220;Hmm, that contradicts what I&#8217;ve read elsewhere. Let me think about this.&#8221;</em></p></li></ul><p><strong>What low scorers do:</strong> They defer to the AI, assuming it must know better. 
Even when their own expertise says otherwise.</p><h2><strong>The Behaviors That Matter Most</strong></h2><p>Let&#8217;s be concrete about what actually moves your score:</p><h3><strong>Things That Help Your Score</strong></h3><ol><li><p><strong>Questioning confident-sounding claims</strong></p><ul><li><p>&#8220;Where does that data come from?&#8221;</p></li><li><p>&#8220;Can you cite a source for that?&#8221;</p></li><li><p>&#8220;That seems too round a number to be real.&#8221;</p></li></ul></li><li><p><strong>Catching errors or inconsistencies</strong></p><ul><li><p>&#8220;Wait, that contradicts what you said earlier.&#8221;</p></li><li><p>&#8220;Those numbers don&#8217;t add up.&#8221;</p></li><li><p>&#8220;That&#8217;s not quite right &#8212; the actual process is...&#8221;</p></li></ul></li><li><p><strong>Giving specific feedback</strong></p><ul><li><p>&#8220;Make the tone more direct and less apologetic.&#8221;</p></li><li><p>&#8220;Add an example after the third bullet point.&#8221;</p></li><li><p>&#8220;The conclusion is weak. End with a specific next step.&#8221;</p></li></ul></li><li><p><strong>Maintaining context and boundaries</strong></p><ul><li><p>&#8220;Remember, this is for an internal team, not customers.&#8221;</p></li><li><p>&#8220;We need to stay within the scope we agreed on.&#8221;</p></li><li><p>&#8220;Let&#8217;s not assume things we haven&#8217;t verified.&#8221;</p></li></ul></li><li><p><strong>Applying your own judgment</strong></p><ul><li><p>&#8220;I see your point, but in my experience...&#8221;</p></li><li><p>&#8220;That approach wouldn&#8217;t work here because...&#8221;</p></li><li><p>&#8220;I&#8217;d take a different angle. 
Here&#8217;s why...&#8221;</p></li></ul></li></ol><h3><strong>Things That Hurt Your Score</strong></h3><ol><li><p><strong>Accepting everything without question</strong></p><ul><li><p>Taking statistics at face value</p></li><li><p>Not noticing when something sounds too good to be true</p></li><li><p>Trusting confident tone over actual accuracy</p></li></ul></li><li><p><strong>Deferring when you shouldn&#8217;t</strong></p><ul><li><p>&#8220;You probably know better than me&#8221;</p></li><li><p>Abandoning your own expertise</p></li><li><p>Not pushing back when you disagree</p></li></ul></li><li><p><strong>Vague or passive feedback</strong></p><ul><li><p>&#8220;Make it better&#8221;</p></li><li><p>&#8220;I don&#8217;t know, whatever you think&#8221;</p></li><li><p>&#8220;Sure, that works&#8221; (when it doesn&#8217;t)</p></li></ul></li><li><p><strong>Losing track of the goal</strong></p><ul><li><p>Drifting off topic without redirecting</p></li><li><p>Not remembering what you originally asked for</p></li><li><p>Accepting outputs that don&#8217;t serve your actual purpose</p></li></ul></li></ol><h2><strong>What This Actually Measures</strong></h2><p>These behaviors map to the five PAICE dimensions:</p><ul><li><p><strong>Performance</strong>: Clear communication, efficient back-and-forth</p></li><li><p><strong>Accountability</strong>: Verifying claims, catching errors, taking ownership</p></li><li><p><strong>Integrity</strong>: Recognizing bias, maintaining factual standards</p></li><li><p><strong>Collaboration</strong>: Iterating effectively, giving useful feedback</p></li><li><p><strong>Evolution</strong>: Learning from the interaction, adapting approach</p></li></ul><p>You don&#8217;t need to think about the dimensions during the assessment. Just work naturally on whatever topic you brought, and the more real it is for you, the more the assessment will reveal your actual habits. 
The behaviors either show up or they don&#8217;t.</p><h2><strong>The Real Point</strong></h2><p>We&#8217;re not assessing whether you know AI terminology or can recite prompt engineering tips.</p><p>We&#8217;re focused on whether you&#8217;ve developed the habits that keep AI collaboration safe and effective:</p><ul><li><p>Verification before trust</p></li><li><p>Specificity in feedback</p></li><li><p>Judgment that doesn&#8217;t defer</p></li><li><p>Standards that don&#8217;t slip</p></li></ul><p>These habits matter because they&#8217;re invisible to everyone except you &#8212; and now, to the assessment. Your organization can&#8217;t see whether you verified that AI-generated analysis before presenting it. Your manager can&#8217;t tell if you caught the error that would have caused a problem.</p><p>But PAICE can.</p><div><hr></div><p><strong>Ready to see what behaviors show up in your collaboration?</strong> <a href="https://paice.work/">Take the assessment</a> &#8212; about 15-20 minutes, completely free, with detailed feedback on where your habits are strong and where there&#8217;s room to grow.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/understanding-the-five-paice-dimensions">Understanding the Five PAICE Dimensions</a> - What each dimension measures</p></li><li><p><a href="https://paice.work/blog/why-accountability-scores-lower">Why Your Accountability Score Is Probably Lower Than Your Other Dimensions</a> - The hardest dimension explained</p></li><li><p><a href="https://paice.work/blog/what-your-paice-score-means">What Your PAICE Score Really Means</a> - How to interpret your results</p></li></ul>]]></content:encoded></item><item><title><![CDATA[AI Collaboration in Insurance]]></title><description><![CDATA[Underwriting, Claims, and Risk Assessment]]></description><link>https://paice.substack.com/p/ai-collaboration-in-insurance</link><guid isPermaLink="false">https://paice.substack.com/p/ai-collaboration-in-insurance</guid><dc:creator><![CDATA[Sam Rogers]]></dc:creator><pubDate>Tue, 10 Mar 2026 15:03:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2163fb99-141e-4463-9906-456077513544_2752x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<h2><strong>The Insurance Industry&#8217;s AI Transformation</strong></h2><p>Insurance is fundamentally a business of information, risk assessment, and decision-making. These are exactly the areas where AI collaboration shows the most promise. 
From accelerating underwriting decisions to streamlining claims processing, AI tools offer genuine potential to improve efficiency and outcomes.</p><p>But in the USA, insurance also operates under intense regulatory scrutiny. State insurance commissioners, the NAIC, and federal regulators are watching closely as the industry adopts AI. Unfair discrimination, algorithmic bias, and unexplainable decisions can trigger regulatory action, litigation, and reputational damage.</p><p>This guide provides practical frameworks for insurance professionals seeking to leverage AI collaboration effectively while maintaining the standards your regulators, policyholders, and profession demand.</p><p><em>Please note that this is not legal advice and you should always consult with your legal and compliance teams before implementing any AI collaboration practices.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/ai-collaboration-in-insurance?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/ai-collaboration-in-insurance?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2><strong>The Regulatory Landscape</strong></h2><h3><strong>Emerging AI Guidance</strong></h3><p>Insurance regulators are actively developing frameworks for AI use. 
Several key themes are emerging:</p><p><strong>The NAIC Model Bulletin</strong>: The National Association of Insurance Commissioners has issued guidance on AI governance, emphasizing that insurers remain responsible for decisions made with AI assistance, regardless of whether the AI was developed internally or by third parties.</p><p><strong>State-Level Requirements</strong>: Colorado, Connecticut, and other states have enacted or proposed AI regulations specific to insurance. These often require impact assessments, bias testing, and consumer notification.</p><p><strong>Unfair Discrimination Concerns</strong>: Regulators are particularly focused on ensuring AI doesn&#8217;t result in unfair discrimination, even when protected characteristics aren&#8217;t explicitly used. Proxy discrimination through correlated variables is a key concern.</p><p><strong>Explainability Requirements</strong>: When AI influences decisions affecting consumers, regulators increasingly expect insurers to explain how those decisions were made in understandable terms.</p><h3><strong>Building Compliant AI Practices</strong></h3><p><strong>Document Everything</strong>: Maintain records of what AI tools you use, how they&#8217;re used, and how outputs are verified. Regulatory examinations will expect this documentation.</p><p><strong>Establish Human Oversight</strong>: AI-assisted decisions should involve meaningful human review, not rubber-stamping. 
Document who reviewed what and what verification occurred.</p><p><strong>Test for Bias</strong>: Regularly assess whether AI collaboration practices produce disparate outcomes across protected classes.</p><p><strong>Prepare for Questions</strong>: Be ready to explain to regulators, consumers, and courts how AI influenced any given decision.</p><h2><strong>Underwriting Applications</strong></h2><h3><strong>Where AI Collaboration Adds Value</strong></h3><p>AI can meaningfully accelerate underwriting work in several areas:</p><p><strong>Risk Assessment Research</strong>:</p><ul><li><p>Summarizing industry risk profiles and loss patterns</p></li><li><p>Explaining technical concepts in specialized lines</p></li><li><p>Identifying relevant risk factors for consideration</p></li><li><p>Comparing coverage approaches across carriers</p></li></ul><p><strong>Application Analysis</strong>:</p><ul><li><p>Flagging inconsistencies or gaps in applications</p></li><li><p>Identifying questions requiring follow-up</p></li><li><p>Suggesting additional information needed</p></li><li><p>Drafting requests for clarification</p></li></ul><p><strong>Documentation</strong>:</p><ul><li><p>Creating underwriting file summaries</p></li><li><p>Drafting decline letters and coverage explanations</p></li><li><p>Generating risk assessment narratives</p></li><li><p>Preparing reinsurance submissions</p></li></ul><p><strong>Training and Development</strong>:</p><ul><li><p>Explaining complex coverage concepts</p></li><li><p>Walking through underwriting guidelines</p></li><li><p>Answering technical questions</p></li><li><p>Creating training scenarios</p></li></ul><h3><strong>Critical Limitations</strong></h3><p><strong>AI Cannot Replace Underwriting Judgment</strong>: AI tools don&#8217;t understand your book of business, your company&#8217;s risk appetite, or the nuanced factors that experienced underwriters recognize. 
AI is an assistant, not a decision-maker.</p><p><strong>AI Can Generate Plausible Nonsense</strong>: AI can produce confident-sounding but incorrect information about coverage forms, exclusions, or risk factors. Verification against primary sources is essential.</p><p><strong>AI Doesn&#8217;t Know Your Guidelines</strong>: Unless you provide them, AI doesn&#8217;t know your company&#8217;s specific underwriting guidelines, authority levels, or portfolio management objectives.</p><p><strong>AI May Perpetuate Bias</strong>: AI trained on historical data may reflect historical biases. Using AI outputs without critical evaluation can perpetuate problematic patterns.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/p/ai-collaboration-in-insurance/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/p/ai-collaboration-in-insurance/comments"><span>Leave a comment</span></a></p><h2><strong>Claims Processing</strong></h2><h3><strong>Accelerating Claims Work</strong></h3><p>Claims processing offers significant opportunities for AI collaboration:</p><p><strong>Initial Review</strong>:</p><ul><li><p>Summarizing claim submissions</p></li><li><p>Identifying coverage questions</p></li><li><p>Flagging potential issues for investigation</p></li><li><p>Organizing documentation</p></li></ul><p><strong>Investigation Support</strong>:</p><ul><li><p>Researching relevant policy language</p></li><li><p>Explaining technical terminology</p></li><li><p>Drafting investigation plans</p></li><li><p>Preparing interview questions</p></li></ul><p><strong>Reserving Assistance</strong>:</p><ul><li><p>Researching comparable claims</p></li><li><p>Summarizing medical terminology</p></li><li><p>Explaining legal concepts</p></li><li><p>Organizing case chronologies</p></li></ul><p><strong>Communication 
Drafting</strong>:</p><ul><li><p>Creating coverage position letters</p></li><li><p>Drafting reservation of rights notices</p></li><li><p>Preparing claim status updates</p></li><li><p>Generating explanations of benefits</p></li></ul><h3><strong>Maintaining Claims Integrity</strong></h3><p><strong>Never Automate Coverage Decisions</strong>: AI can inform claims decisions, but coverage determinations require human judgment considering all relevant facts and policy language.</p><p><strong>Verify Policy Language</strong>: AI may paraphrase or misstate policy language. Always check actual policy wording before making coverage determinations.</p><p><strong>Protect Privileged Information</strong>: Claims files often contain privileged communications. Be careful what information you share with AI tools.</p><p><strong>Document Your Process</strong>: Maintain clear records of how AI assisted your claims handling and what human verification occurred.</p><h2><strong>Fraud Detection Considerations</strong></h2><h3><strong>The Promise and the Peril</strong></h3><p>AI shows significant promise in fraud detection, but also carries substantial risk:</p><p><strong>Potential Benefits</strong>:</p><ul><li><p>Pattern recognition across large datasets</p></li><li><p>Identification of anomalies requiring investigation</p></li><li><p>Consistency in flagging potential issues</p></li><li><p>Efficiency in initial screening</p></li></ul><p><strong>Significant Risks</strong>:</p><ul><li><p>False positives harm innocent policyholders</p></li><li><p>Algorithmic bias may target certain populations</p></li><li><p>Opaque systems create regulatory and litigation exposure</p></li><li><p>Over-reliance can miss sophisticated fraud</p></li></ul><h3><strong>Best Practices for AI-Assisted Fraud Detection</strong></h3><p><strong>Use AI to Inform, Not Decide</strong>: AI fraud scores should trigger human investigation, not automatic adverse action.</p><p><strong>Maintain Robust Appeals Processes</strong>: 
When AI contributes to fraud determinations, consumers need meaningful ways to challenge errors.</p><p><strong>Test for Disparate Impact</strong>: Regularly assess whether fraud detection patterns disproportionately affect protected groups.</p><p><strong>Preserve Human Oversight</strong>: Experienced investigators should evaluate AI-flagged cases with full consideration of context.</p><h2><strong>Actuarial Applications</strong></h2><h3><strong>Where AI Assists Actuarial Work</strong></h3><p><strong>Research and Analysis</strong>:</p><ul><li><p>Summarizing industry loss trends</p></li><li><p>Explaining statistical concepts</p></li><li><p>Reviewing regulatory guidance</p></li><li><p>Comparing methodological approaches</p></li></ul><p><strong>Documentation</strong>:</p><ul><li><p>Drafting actuarial memoranda</p></li><li><p>Creating assumption documentation</p></li><li><p>Preparing regulatory filings</p></li><li><p>Generating executive summaries</p></li></ul><p><strong>Model Development</strong>:</p><ul><li><p>Suggesting model structures</p></li><li><p>Explaining statistical techniques</p></li><li><p>Reviewing code for errors</p></li><li><p>Documenting methodology</p></li></ul><h3><strong>Professional Standards Considerations</strong></h3><p>Actuaries are bound by Actuarial Standards of Practice (ASOPs), including ASOP No. 56 on modeling. 
AI collaboration must be consistent with these professional standards:</p><p><strong>Maintain Professional Judgment</strong>: Actuarial opinions must reflect the actuary&#8217;s own judgment, not uncritical acceptance of AI outputs.</p><p><strong>Validate Assumptions</strong>: AI-suggested assumptions require the same validation as any other assumption.</p><p><strong>Document Appropriately</strong>: When AI assists actuarial work, documentation should reflect how AI was used and how outputs were verified.</p><p><strong>Understand Limitations</strong>: Actuaries must understand the limitations of any tools they use, including AI.</p><h2><strong>Privacy and Data Handling</strong></h2><h3><strong>Sensitive Information Concerns</strong></h3><p>Insurance professionals handle extremely sensitive information. AI collaboration requires careful data handling:</p><p><strong>Protected Health Information</strong>: Health insurers are subject to HIPAA. 
PHI should never be shared with consumer AI tools.</p><p><strong>Financial Information</strong>: Personal financial data carries privacy obligations under various state and federal laws.</p><p><strong>Claims Information</strong>: Details about claims, injuries, and losses are highly sensitive and often legally protected.</p><p><strong>Investigation Materials</strong>: Surveillance, recorded statements, and investigation reports require careful handling.</p><h3><strong>Safe Collaboration Approaches</strong></h3><p><strong>Use Enterprise Tools</strong>: If your organization provides AI tools with appropriate data handling agreements, use those rather than consumer products.</p><p><strong>Anonymize Data</strong>: Remove identifying information before using AI assistance when possible. Work with hypotheticals rather than actual cases.</p><p><strong>Know Your Tool</strong>: Understand whether your AI tool retains prompts, uses them for training, or shares data with third parties.</p><p><strong>Follow Company Policy</strong>: Adhere to your organization&#8217;s data handling and AI use policies.</p><p>For more on protecting sensitive information, see our guide on <a href="https://paice.work/blog/privacy-and-data-practices">privacy and data practices</a>.</p><h2><strong>The Accountability Dimension</strong></h2><h3><strong>Taking Responsibility for AI-Assisted Work</strong></h3><p>Insurance work demands what we call the Accountability dimension&#8212;taking full responsibility for AI-assisted work and maintaining appropriate oversight. 
In insurance, this isn&#8217;t just good practice; it&#8217;s a regulatory expectation.</p><p>For more on this critical skill, see our guide on <a href="https://paice.work/blog/understanding-the-five-paice-dimensions">understanding the five PAICE dimensions</a>.</p><p><strong>Key Accountability Practices</strong>:</p><p><strong>Own the Decision</strong>: Regardless of AI involvement, the human professional owns the underwriting decision, claims determination, or actuarial opinion.</p><p><strong>Verify Outputs</strong>: Build systematic verification into every workflow. AI outputs are starting points, not conclusions.</p><p><strong>Document the Process</strong>: Maintain clear records of AI use and human oversight for regulatory examinations and potential litigation.</p><p><strong>Report Problems</strong>: If AI tools produce problematic outputs, report them through appropriate channels.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share PAICE.work&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://paice.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share PAICE.work</span></a></p><h2><strong>Common Pitfalls in Insurance AI Collaboration</strong></h2><h3><strong>Over-Reliance on AI Risk Assessment</strong></h3><p><strong>The Mistake</strong>: Accepting AI-generated risk assessments without independent underwriting judgment.</p><p><strong>The Consequence</strong>: Poor underwriting decisions, adverse selection, and potential E&amp;O exposure.</p><p><strong>The Solution</strong>: Use AI to inform your thinking, then apply your professional judgment and company guidelines.</p><h3><strong>Inadequate Documentation</strong></h3><p><strong>The Mistake</strong>: Using AI assistance 
without documenting the process, creating gaps in underwriting or claims files.</p><p><strong>The Consequence</strong>: Regulatory examination findings, bad faith exposure, and inability to explain decisions.</p><p><strong>The Solution</strong>: Document AI use and verification steps as standard practice.</p><h3><strong>Sharing Sensitive Data</strong></h3><p><strong>The Mistake</strong>: Inputting policyholder information, medical records, or claims details into consumer AI tools.</p><p><strong>The Consequence</strong>: Privacy violations, regulatory penalties, and potential liability.</p><p><strong>The Solution</strong>: Know what data you can and cannot share. When in doubt, anonymize or don&#8217;t share.</p><h3><strong>Assuming AI Knows Insurance</strong></h3><p><strong>The Mistake</strong>: Expecting AI to understand policy language, coverage interpretations, or regulatory requirements without explicit guidance.</p><p><strong>The Consequence</strong>: Incorrect coverage analysis, missed exclusions, and flawed risk assessment.</p><p><strong>The Solution</strong>: Provide comprehensive context. Don&#8217;t assume AI understands insurance nuances.</p><p>For more on avoiding common mistakes, see our guide on <a href="https://paice.work/blog/common-ai-collaboration-mistakes">common AI collaboration mistakes</a>.</p><h2><strong>Building Your Insurance AI Framework</strong></h2><h3><strong>Assess Your Current Capabilities</strong></h3><p>Understanding your starting point is essential. 
The <a href="https://paice.work/">PAICE assessment</a> evaluates AI collaboration capabilities across five dimensions, with particular relevance for insurance professionals:</p><ul><li><p><strong>Performance</strong>: How effectively you communicate with AI tools and achieve useful results</p></li><li><p><strong>Accountability</strong>: Your verification habits, error detection, and ownership of AI-assisted work</p></li><li><p><strong>Integrity</strong>: Your commitment to accuracy, bias recognition, and ethical reasoning</p></li><li><p><strong>Collaboration</strong>: How well you iterate and refine AI interactions through productive dialogue</p></li><li><p><strong>Evolution</strong>: Your capacity to learn, adapt, and improve your AI collaboration practices</p></li></ul><h3><strong>Develop Clear Policies</strong></h3><p>Create written policies addressing:</p><ul><li><p>Approved AI tools and use cases</p></li><li><p>Data handling and privacy requirements</p></li><li><p>Documentation standards</p></li><li><p>Review and approval procedures</p></li><li><p>Regulatory compliance requirements</p></li></ul><h3><strong>Train Your Teams</strong></h3><p>Ensure underwriters, claims professionals, and other staff understand:</p><ul><li><p>Your organization&#8217;s AI collaboration policies</p></li><li><p>Verification requirements and procedures</p></li><li><p>Privacy and data handling safeguards</p></li><li><p>When to escalate concerns</p></li></ul><h3><strong>Monitor and Improve</strong></h3><p>AI collaboration practices should evolve:</p><ul><li><p>Track efficiency and quality metrics</p></li><li><p>Gather feedback from users</p></li><li><p>Review incidents and near-misses</p></li><li><p>Update policies as regulations evolve</p></li></ul><h2><strong>The Path Forward</strong></h2><p>AI collaboration in insurance isn&#8217;t about replacing professional judgment&#8212;it&#8217;s about augmenting it. 
The most successful insurance professionals will be those who learn to leverage AI effectively while maintaining unwavering commitment to accuracy, fairness, and regulatory compliance.</p><p>The technology will continue to evolve. Regulations will adapt. But the fundamental principles remain constant: treat policyholders fairly, make decisions you can explain and defend, and maintain the professional standards your industry demands.</p><div><hr></div><p><em>Ready to assess your AI collaboration capabilities? <a href="https://paice.work/">Take the PAICE assessment</a> to get personalized insights and recommendations for your insurance practice.</em></p><div><hr></div><p><strong>Get Involved:</strong></p><ul><li><p><a href="https://paice.work/">Take the assessment</a> (free, always)</p></li><li><p><a href="https://paice.work/partner">Explore the Founding Partner Program</a> (for organizations)</p></li><li><p><a href="https://paice.work/whitepaper">Read the whitepaper</a> (comprehensive framework)</p></li><li><p><a href="https://paice.work/contact">Contact us about your specific requirements</a></p></li></ul><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://paice.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading PAICE.work! 
Subscribe for free to receive new posts and support our work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2><strong>Related Reading</strong></h2><ul><li><p><a href="https://paice.work/blog/ai-collaboration-finance-risk-compliance">AI Collaboration in Finance: Risk, Compliance, and Efficiency</a></p></li><li><p><a href="https://paice.work/blog/ai-collaboration-healthcare-patient-safety">AI Collaboration in Healthcare: Balancing Innovation with Patient Safety</a></p></li><li><p><a href="https://paice.work/blog/understanding-the-five-paice-dimensions">Understanding the Five PAICE Dimensions</a></p></li><li><p><a href="https://paice.work/blog/common-ai-collaboration-mistakes">Common AI Collaboration Mistakes</a></p></li><li><p><a href="https://paice.work/blog/privacy-and-data-practices">Privacy and Data Practices</a></p></li></ul>]]></content:encoded></item></channel></rss>