<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Lukas Finnveden]]></title><description><![CDATA[Lukas Finnveden]]></description><link>https://lukasfinnveden.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!fCir!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Flukasfinnveden.substack.com%2Fimg%2Fsubstack.png</url><title>Lukas Finnveden</title><link>https://lukasfinnveden.substack.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 18 Apr 2026 03:45:49 GMT</lastBuildDate><atom:link href="https://lukasfinnveden.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Lukas Finnveden]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[lukasfinnveden@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[lukasfinnveden@substack.com]]></itunes:email><itunes:name><![CDATA[Lukas Finnveden]]></itunes:name></itunes:owner><itunes:author><![CDATA[Lukas Finnveden]]></itunes:author><googleplay:owner><![CDATA[lukasfinnveden@substack.com]]></googleplay:owner><googleplay:email><![CDATA[lukasfinnveden@substack.com]]></googleplay:email><googleplay:author><![CDATA[Lukas Finnveden]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[What's important in "AI for epistemics"?]]></title><description><![CDATA[Why it matters and what projects to prioritize.]]></description><link>https://lukasfinnveden.substack.com/p/whats-important-in-ai-for-epistemics</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/whats-important-in-ai-for-epistemics</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Fri, 23 Aug 2024 22:35:02 GMT</pubDate><content:encoded><![CDATA[<h2>Summary</h2><p>This post gives my personal take on &#8220;AI for epistemics&#8221; and how important it might be to work on.</p><p>Some background context:</p><ul><li><p>AI capabilities are advancing rapidly and I think it&#8217;s important to think ahead and prepare for the possible development of AI that could automate almost all economically relevant tasks that humans can do.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></li><li><p>That kind of AI would have a huge impact on key epistemic processes in our society. (I.e.: It would have a huge impact on how new facts get found, how new research gets done, how new forecasts get made, and how all kinds of information spread through society.)</p></li><li><p>I think it&#8217;s very important for our society to have excellent epistemic processes. (I.e.: For important decisions in our society to be made by people or AI systems who have informed and unbiased beliefs that take into account as much of the available evidence as is practical.)</p></li><li><p>Accordingly, I&#8217;m interested in affecting the development and usage of AI technology in ways that lead towards better epistemic processes.</p></li></ul><p>So: How can we affect AI to contribute to better epistemic processes? 
When looking at concrete projects, here, I find it helpful to<strong> </strong>distinguish between two different categories of work:</p><ol><li><p>Working to increase AIs&#8217; epistemic capabilities, and in particular, differentially advancing them compared to other AI capabilities. Here, I also include technical work to measure AIs&#8217; epistemic capabilities.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li><li><p>Efforts to enable the diffusion and appropriate trust of AI-discovered information. This is focused on social dynamics that could cause AI-produced information to be insufficiently or excessively trusted. It&#8217;s also focused on AIs&#8217; role in <em>communicating</em> information (as opposed to just <em>producing</em> it). Examples of interventions, here, include &#8220;create an independent organization that evaluates popular AIs&#8217; truthfulness&#8221;, or &#8220;work for countries to adopt good (and avoid bad) legislation of AI communication&#8221;.</p></li></ol><p>I&#8217;d be very excited about thoughtful and competent efforts in this second category. However, I talk significantly more about efforts in the first category, in this post. This is just an artifact of how this post came to be, historically &#8212; it&#8217;s <strong>not</strong> because I think work on the second category of projects is less important.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>For the first category of projects: Technical projects to differentially advance epistemic capabilities seem somewhat more &#8220;shovel-ready&#8221;. Here, I&#8217;m especially excited about projects that differentially boost AI epistemic capabilities in a manner that&#8217;s some combination of <em>durable</em> and/or especially good at <em>demonstrating</em> those capabilities to key actors.</p><p><em>Durable</em> means that projects should (i) take the <a href="https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf">bitter lesson</a> into account by working on problems that won&#8217;t be solved-by-default when more compute is available, and (ii) work on problems that industry isn&#8217;t already incentivized to put huge efforts into (such as &#8220;making AIs into generally better agents&#8221;). (More on these criteria <a href="https://lukasfinnveden.substack.com/i/148057351/on-long-lasting-differential-capability-improvements">here</a>.)</p><p>Two example projects that I think fulfill these criteria (I discuss a lot more projects <a href="https://lukasfinnveden.substack.com/i/148057351/concrete-projects-for-differentially-advancing-epistemic-capabilities">here</a>):</p><ul><li><p>Experiments on what sort of arguments and decompositions make it easier for humans to reach the truth in hard-to-verify areas. (Strongly related to scalable oversight.)</p></li><li><p>Using AI to generate large quantities of forecasting data, such as by automatically generating and resolving questions.</p></li></ul><p>Separately, I think there&#8217;s value in <em>demonstrating</em> the potential of AI epistemic advice to key actors &#8212; especially frontier AI companies and governments. 
When transformative AI (TAI)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> is first developed, it seems likely that these actors will (i) have a big advantage in their ability to accelerate AI-for-epistemics via their access to frontier models and algorithms, and (ii) be among the actors whose decisions I especially care about being well-informed. Thus, I&#8217;d like these actors to be impressed by the potential of AI-for-epistemics as soon as possible, so that they start investing and preparing appropriately.</p><p>If you, above, wondered why I group &#8220;measuring epistemic capabilities&#8221; into the same category of project as &#8220;differentially advancing AI capabilities&#8221;, this is now easier to explain. I think good benchmarks could both be a relatively <em>durable</em> intervention for increasing capabilities, via inspiring work to beat the benchmark for a long time, and be a good way of <em>demonstrating</em> capabilities.</p><h2>Structure of the post</h2><p>In the rest of this post, I:</p><ul><li><p>Link to some previous work in this area.</p></li><li><p>Describe my basic impression of <a href="https://lukasfinnveden.substack.com/i/148057351/why-work-on-ai-for-epistemics">why work on AI-for-epistemics could be important</a>.</p></li><li><p>Go into more details on <a href="https://lukasfinnveden.substack.com/i/148057351/heuristics-for-good-interventions">Heuristics for good interventions</a>, including:</p><ul><li><p>A distinction between <a href="https://lukasfinnveden.substack.com/i/148057351/direct-vs-indirect-strategies">direct vs. indirect strategies</a> and <a href="https://lukasfinnveden.substack.com/i/148057351/indirect-value-generation">the value of indirect interventions</a>.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/on-long-lasting-differential-capability-improvements">Some abstract guidelines</a> for how to avoid interventions that get swamped by the bitter lesson or by commercial interests.</p></li><li><p>An attempt to <a href="https://lukasfinnveden.substack.com/i/148057351/painting-a-picture-of-the-future">paint a concrete picture</a> for what excellent use of AI for epistemics might eventually look like (to give a sense of what we want to steer towards).</p></li></ul></li><li><p>Discuss <a href="https://lukasfinnveden.substack.com/i/148057351/concrete-projects-for-differentially-advancing-epistemic-capabilities">what concrete types of interventions</a> seem best in the domain of differentially advancing epistemic capabilities.</p></li></ul><h2>Previous work</h2><p>Here is an incomplete list of previous work on this topic:</p><ul><li><p>I previously wrote <a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">Project ideas: Epistemics</a> on this blog.</p></li><li><p><a href="https://80000hours.org/podcast/episodes/carl-shulman-society-agi/">Carl Shulman on government and society after AGI</a> on the 80,000 Hours podcast. 
(This current blog post owes many ideas to Carl Shulman.)</p></li><li><p>Ben Todd: <a href="https://benjamintodd.substack.com/p/the-most-interesting-startup-idea">The most interesting startup idea I've seen recently: AI for epistemics</a>&nbsp;</p></li><li><p>On AI and forecasting:</p><ul><li><p>Ozzie Gooen: <a href="https://forum.effectivealtruism.org/posts/EykCuXDCFAT5oGyux/my-current-claims-and-cruxes-on-llm-forecasting-and">My Current Claims and Cruxes on LLM Forecasting &amp; Epistemics</a></p><ul><li><p>Ozzie runs the <a href="https://quantifieduncertainty.org/">Quantified Uncertainty Research Institute</a> which is more generally relevant.</p></li></ul></li><li><p><a href="https://futuresearch.ai/">FutureSearch</a> is using AI for forecasting and other tricky questions.</p><ul><li><p>See also <a href="https://forum.effectivealtruism.org/posts/qMP7LcCBFBEtuA3kL/the-rationale-shaped-hole-at-the-heart-of-forecasting">The Rationale-Shaped Hole At The Heart Of Forecasting</a> where they lay out some of their views as of April 2024.</p></li></ul></li><li><p><a href="https://arxiv.org/pdf/2402.18563">Approaching Human-Level Forecasting with Language Models</a> by Halawi, Zhang, Yueh-Han, and Steinhardt (2024).</p></li><li><p><a href="https://arxiv.org/pdf/2402.19379">LLM Ensemble Prediction Capabilities Rival Human Crowd Accuracy</a> by Schoenegger, Tuminauskeite, Park, and Tetlock (2024).</p><ul><li><p>See also <a href="https://forecastingresearch.org/">Forecasting Research Institute</a> where Tetlock is president and chief scientist.</p></li></ul></li><li><p>Metaculus is running a <a href="https://www.metaculus.com/project/aibq3/">bot-only forecasting series</a>. (<a href="https://www.metaculus.com/notebooks/25525/-announcing-the-ai-forecasting-benchmark-series--july-8-120k-in-prizes/">Launch announcement</a>.)</p></li></ul></li><li><p>On AI &amp; persuasion:</p><ul><li><p><a href="https://www.lesswrong.com/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion">Risks from AI persuasion</a> by Beth Barnes.</p></li><li><p><a href="https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency">Persuasion Tools</a> by Daniel Kokotajlo</p></li></ul></li><li><p><a href="https://elicit.com/">Elicit</a> is an AI research assistant, developed by a company that was spun out from <a href="https://ought.org/">Ought</a>, with the intention of improving human judgment as AI capabilities improve. (See <a href="https://ought.org/elicit">here</a> for some of the original case.)</p></li></ul><h2>Why work on AI for epistemics?</h2><h3>Summary</h3><p>I think there&#8217;s very solid grounds to believe that AI&#8217;s influence on epistemics is important. Having good epistemics is super valuable, and human-level AI would clearly have a huge impact on our epistemic landscape. (See just below for more on importance.)</p><p>I also think there are decent plausibility arguments for why epistemics may be important: Today, we are substantially less epistemically capable than our technology allows for, due to various political and social dynamics which don&#8217;t all seem inevitable. And I think there are plausible ways in which poor epistemics can be self-reinforcing (because it makes it harder to clearly see what&#8217;s the direction towards better epistemics). And vice-versa that good epistemics can be self-reinforcing. 
(See <a href="https://lukasfinnveden.substack.com/i/148057351/path-dependence">here</a> for more on path-dependence.)</p><p>That&#8217;s not very concrete, though. To be more specific, I will go through some goals that I think are both important and plausibly path-dependent:</p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/good-norms-and-practices-for-ai-as-knowledge-producers">Good norms &amp; practices for AI-as-knowledge-producers.</a> Such as transparency of how AI-based science/investigations work, minimal censorship of AI-produced research results, and maximizing the number of actors who can trust <em>some</em> technically sophisticated institution to verify AI methods&#8217; trustworthiness. (E.g. by having many such institutions with different political affiliations.)</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/good-norms-and-practices-for-ai-as-communicators">Good norms &amp; practices for AI-as-communicators.</a> Such as transparency of how AIs decide what to communicate, independent evaluators who measure AIs&#8217; truthfulness, and laws that limit the degree to which AIs can present contradictory arguments to different people or be paid off to present biased views.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/differentially-high-epistemic-capabilities">Differentially high epistemic capabilities.</a> Such as high alignable capabilities (compared to underlying capabilities), relative strength at persuading people of true beliefs compared to false beliefs, and relative strength at understanding &amp; predicting the world compared to building new technologies.</p></li></ul><p>Let&#8217;s go through all of this in more detail.</p><h3>Importance</h3><p>I think there are very solid grounds to believe that AI&#8217;s influence on epistemics is important.</p><ul><li><p><strong>AI&#8217;s influence on human epistemic abilities will eventually be huge</strong>. Briefly:</p><ul><li><p><strong>AI will eventually automate epistemic labor. </strong>This includes both knowledge <em>production</em> work and <em>communication</em> work. 
(The latter includes both good and bad persuasion of humans.)</p></li><li><p><strong>AIs&#8217; epistemic work won&#8217;t just replace humans&#8217; 1-for-1.</strong> AI comes with special capabilities that will change the epistemic ecosystem:</p><ul><li><p><strong>Cheaper &amp; easier to delegate epistemic labor.</strong></p><ul><li><p>It could be cheaper to delegate epistemic labor to AIs (than to humans) because you can copy software for free.</p></li><li><p>If we develop methods to train reliably truth-seeking AIs, it will be easier to delegate epistemic labor to AIs (than to humans), because you would have to worry less about being deceived.</p></li><li><p>More delegation could lead to more equitable distribution of epistemic capabilities, but also to reduced incentives and selection for humans to have reasonable beliefs and epistemic practices (because AIs make all decisions that matter for your power).</p></li></ul></li><li><p><strong>Better epistemic science.</strong></p><ul><li><p>You can more easily control what information AIs have and have not seen and thereby run reproducible experiments on what epistemic strategies work best.</p></li></ul></li></ul></li></ul></li><li><p><strong>Epistemic capabilities during and after the development of TAI are very valuable.</strong> Briefly:</p><ul><li><p><strong>Most AI takeover risk comes from &#8220;unforced errors&#8221;.</strong> A vast majority of powerful people don&#8217;t want AI to take over, but I think that many underestimate the risk. If I thought that people were going to have reasonable, well-calibrated beliefs about AI takeover risk, my subjective probability of AI takeover would more than halve.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p><strong>Most extinction risk comes from &#8220;unforced errors&#8221;.</strong> Just as above: A vast majority of powerful people don&#8217;t want extinction, and (I strongly suspect) would be capable of preventing the exceptions from being able to cause extinction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p><strong>Strong epistemic capabilities seem great for moral deliberation. </strong>For example via: Helping you better imagine the realistic consequences of various moral principles; letting you forecast what sort of deliberation procedures will go off-the-rails; and teaching you about the underlying empirical reasons for moral disagreement (so you can choose which drivers of moral intuition you trust more).</p></li></ul></li></ul><h3>Path-dependence</h3><p>While less solid than the arguments for importance, I think there are decent plausibility arguments for why AI&#8217;s role in societal epistemics may be importantly path-dependent.</p><ul><li><p><strong>Comparing with the present.</strong> Today, I think that our epistemics are significantly worse than they &#8220;could have been&#8221;. We aren&#8217;t just constrained by a shortage of high-quality labor or evidence &#8212; there are also significant political and self-serving forces/incentives that actively distort people&#8217;s beliefs. 
These won&#8217;t automatically go away in the future.</p></li><li><p><strong>Feedback loops.</strong> People often choose to learn the truth when the choice is presented sufficiently clearly and unambiguously to them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> But with poor enough epistemic starting abilities, it won&#8217;t be clear what methods are more or less truth-seeking. So poor epistemic capabilities can be self-reinforcing, and vice versa.</p></li><li><p><strong>Veil of ignorance.</strong> Conversely, people may be more enthusiastic to invest in novel, strong epistemic methods while they think that those methods will come to support their current beliefs (which would be the default, if they actually believe their current beliefs<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a>). Whereas if they first learn that the methods are going to contradict their current beliefs, then they may oppose them.</p></li><li><p><strong>Early investment.</strong> I can easily imagine both a future where frontier AI projects (i) spend continuous effort on making their AIs strong forecasters and strategic analysts, and distribute those capabilities to other key institutions, and a future where they (ii) almost exclusively focus on using their AI systems for other tasks, such as technical R&amp;D.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> My being able to imagine both might just be a product of my own ignorance &#8212;&nbsp;but it&#8217;s at least suggestive that both futures are plausible, and could come about depending on our actions.</p></li><li><p><strong>Distribution of epistemic capabilities.</strong> Even without changing the pace at which powerful AI epistemics are developed, the question of whether important decisions are made with or without AI epistemic assistance may depend on how quickly different actors get access to those capabilities. It seems probably great for those epistemic capabilities to quickly be made widely available,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> and if they&#8217;re powerful enough, it could be essential for multiple key players (such as AI companies, governments, and opposition parties) to get access to them at a similar time, so they can provide checks on each others&#8217; new capabilities.</p></li></ul><h3>To be more concrete</h3><p>Now, let&#8217;s be more specific about what goals could be important to achieve in this area. 
I think these are the 3 most important instrumental goals to be working towards:</p><ul><li><p>Society adopts good norms &amp; practices for AI-as-knowledge-producers, i.e., norms &amp; practices that allow insights from AI-as-knowledge-producers to be widely spread and appropriately trusted.</p></li><li><p>Society adopts good norms &amp; practices for AI-as-communicators, i.e., norms &amp; practices that make it systematically easy for AIs to spread true information and relatively more difficult for AIs to illegitimately persuade people of falsehoods.</p></li><li><p>For a given amount of general capabilities, we have high &#8220;epistemic capabilities&#8221; and high justified trust in those capabilities.</p></li></ul><p>Let&#8217;s go through these in order.</p><h4>Good norms &amp; practices for AI-as-knowledge-producers</h4><p>Let&#8217;s talk about norms and practices for AIs as knowledge-<em>producers</em>. With this, I mean AIs doing original research, rather than just reporting claims discovered elsewhere. (I.e., AIs doing the sort of work that you <a href="https://en.wikipedia.org/wiki/Wikipedia:No_original_research">wouldn&#8217;t get to publish on Wikipedia</a>.)</p><p>Here are some norms/institutions/practices that I think would contribute to good usage of AI-as-knowledge-producers:</p><ul><li><p>Minimal (formal or informal) censorship of AI-produced research results.</p></li><li><p>Transparency of how results from AI-as-knowledge-producers were arrived at.</p></li><li><p>A government agency that is non-partisan (in practice and not only in name) and charged with using AI to inform government decision-making or to transparently review whether other knowledge-producing AIs in government are doing so in a truth-seeking manner.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p></li><li><p>Maximizing the number of actors who can trust <em>some</em> technically sophisticated institution to verify claims about AI methods&#8217; trustworthiness.</p><ul><li><p>For example, this could be achieved via having many actors with different political affiliations verify claims about a centralized project, or by having many actors with different political affiliations train their own truth-seeking AIs (noticing how they tend to converge).</p></li></ul></li><li><p>Great evals of AIs epistemic capabilities.</p><ul><li><p>For this, it&#8217;s helpful if you have a longer track record of AIs being used for important real-world questions and getting them right or wrong.</p></li></ul></li></ul><h4>Good norms &amp; practices for AI-as-communicators</h4><p>Now let&#8217;s talk about norms for AIs as <em>communicators</em>. This is the other side of the coin from &#8220;AI as knowledge producers&#8221;. I&#8217;m centrally thinking about AIs talking with people and answering their questions.</p><p>Here are some norms/institutions/practices that I think would enable good usage of AI-as-communicators:</p><ul><li><p>Transparency about how AIs decide what to communicate.</p><ul><li><p>E.g. via publishing information about AIs&#8217; constitutions or <a href="https://cdn.openai.com/spec/model-spec-2024-05-08.html">model spec</a>.</p></li></ul></li><li><p>Independent evaluators publishing reports on AI truthfulness, including&#8230;</p><ul><li><p>Fraction of statements that the evaluators believe to be clearly true, debatable, vs. 
clearly false.</p></li><li><p>Results from AI lie-detection tests on whether the AI is being dishonest.</p></li><li><p>The degree to which AIs contradict themselves in different contexts or when talking with different audiences.</p></li><li><p>The degree to which AI is misleading via behaving differently (e.g. being more or less evasive, using a different tone, or relying on very different sources) on questions that are similar except for their implications about a topic that the AI may want to mislead users about (e.g. something political or something where the AI developer has commercial interests).</p></li><li><p>Experiments on whether humans tend to be better or worse at answering questions about a certain topic after conversing the AI about related topics. If humans are systematically worse, that suggests that the AI may be systematically misleading.</p></li></ul></li><li><p>Certain laws about AI communication may be helpful, such as:</p><ul><li><p>You&#8217;re not allowed to pay other actors to program their AIs to be more positively inclined towards you.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></p></li><li><p>AI cannot systematically say directly contradictory statements to different audiences or in different contexts.</p></li></ul></li><li><p>Conversely, it&#8217;s also important to <em>avoid</em> bad laws. For example, laws that forbid AIs from saying blatant falsehoods may be good if they were judged in a reasonable way, and had the threshold for &#8220;blatant&#8221; set highly enough, but they could also be very bad if they became a tool for pushing political agendas.</p></li></ul><h4>Differentially high epistemic capabilities</h4><p>Finally: I want AIs to have high <em>epistemic</em> capabilities compared to their <em>other</em> capabilities. (Especially dangerous ones.) Here are three metrics of &#8220;epistemic capabilities&#8221; that I care about (and what &#8220;other capabilities&#8221; to contrast them with):</p><ul><li><p><strong>Asymmetric persuasion:</strong> How capable is AI at <em>persuading people of true things</em> vs. how capable is AI at <em>persuading people of anything</em>?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a></p><ul><li><p>It&#8217;s good for the former to be high relative to the latter, because I think it&#8217;s typically better for people to be convinced of true things than false things.</p></li><li><p>(The <em>web of lies</em> eval in <a href="https://arxiv.org/pdf/2403.13793#page=4.68">Evaluating Frontier Models for Dangerous Capabilities</a> tests for one version of this, where current models seem significantly better at persuading people of true things.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a>)</p></li></ul></li><li><p><strong>Understanding (vs. building):</strong> How useful is AI for <em>understanding &amp; predicting the world</em> vs. 
<em>building new technologies</em>?</p><ul><li><p>Central examples that I want to capture in &#8220;understanding&#8221;: Forecasting, policy development, geopolitical strategy, philosophy.</p></li><li><p>Central examples that I want to capture in &#8220;building new technologies&#8221;: Coding, AI R&amp;D, bio R&amp;D, building robots.</p></li><li><p>I suspect (but am not confident) that it&#8217;s good for the former to be high relative to the latter, because I am scared of new technologies causing accidents (mainly AI takeover<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a>) or being misused by the wrong people (mainly bioweapons), and I think that better understanding could help reduce this risk.</p></li><li><p>What makes this a natural dichotomy? Or a more decision-relevant question: Why should we think it&#8217;s possible to differentially accelerate &#8220;understanding&#8221; separately from &#8220;building&#8221;? Here are some of the core differences that I see between the two:</p><ul><li><p>1. &#8220;Building technology&#8221; typically has better empirical feedback loops.</p></li><li><p>2. When &#8220;building technology&#8221;, it&#8217;s typically easier and more helpful to make accurate &amp; precise mathematical models.</p></li><li><p>3. &#8220;Technology&#8221; is typically more specialized/modular, whereas "understanding" relies more on the ability to incorporate lots of messy interdisciplinary data.</p></li><li><p>4. &#8220;Technology&#8221; is typically less political, whereas for &#8220;understanding&#8221; it&#8217;s often more important to manage and correct for political biases.</p></li></ul></li><li><p>There are exceptions to all four of these. But they hold often enough that I think they induce some important difference in what epistemic methods are most useful for &#8220;understanding&#8221; vs. &#8220;building&#8221;. Which may lead to some opportunities to differentially advance one over the other.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a></p></li></ul></li><li><p><strong>Aligned capabilities:</strong> What knowledge &amp; understanding can AI developers leverage towards <em>the AI developers&#8217; goals</em> vs. what knowledge &amp; understanding can AIs leverage towards <em>their own goals</em>?</p><ul><li><p>It&#8217;s good for the former to be high, because if the latter is higher, then AI takeover would be more likely. Specifically, AI may then be able to (i) leverage powers that we don&#8217;t have access to, and (ii) take us by surprise, if we didn&#8217;t know about their capabilities.</p></li><li><p>Even if AI takeover isn&#8217;t a problem, this might still introduce a discrepancy where people can use AI&#8217;s full capabilities to pursue easy-to-measure goals but can&#8217;t use them to pursue hard-to-measure goals. (Since it&#8217;s difficult to provide a feedback signal which encourages an AI to pursue those goals.) 
I think this is also undesirable, and related to the previous categories:</p><ul><li><p>It&#8217;s easier to measure whether you&#8217;ve persuaded someone than whether you&#8217;ve persuaded them of something true.</p></li><li><p>It&#8217;s easier to measure whether you&#8217;ve developed a powerful technology than whether you&#8217;ve produced a correct forecast for whether releasing that technology is likely to cause an irreversible catastrophe.</p></li></ul></li><li><p>(C.f.: <a href="https://www.alignmentforum.org/posts/nRAMpjnb6Z4Qv3imF/the-strategy-stealing-assumption">Strategy Stealing</a>, <a href="https://www.alignmentforum.org/posts/7jSvfeyh8ogu8GcE6/decoupling-deliberation-from-competition">Decoupling Deliberation &amp; Competition</a>.)</p></li></ul></li></ul><p>In order for these distinctions to be decision-relevant, there need to be ways of differentially accelerating one side of the comparison compared to the other. Here are two broad categories of interventions that I think have a good shot at doing so:</p><ul><li><p><strong><a href="https://arxiv.org/abs/2211.03540">Scalable Oversight</a>, <a href="https://arxiv.org/abs/2312.09390">Weak-to-Strong Generalization</a> (W2S generalization), and <a href="https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.kkaua0hwmp1d">Eliciting Latent Knowledge</a> (ELK).</strong></p><ul><li><p>I list these three domains together because they&#8217;re strongly related.</p><ul><li><p>See <a href="https://www.alignmentforum.org/posts/hw2tGSsvLLyjFoLFS/scalable-oversight-and-weak-to-strong-generalization">Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem</a>.</p></li><li><p>Also, from the <a href="https://arxiv.org/pdf/2312.09390#page=5">W2S generalization paper</a>: &#8220;Our setting can be viewed as a general methodology for empirically studying problems like ELK and honesty across a wide range of tasks.&#8221;</p></li></ul></li><li><p>These research areas push forward &#8220;<strong>aligned capabilities</strong>&#8221; via letting us elicit stronger capabilities towards arbitrary goals.</p></li><li><p>This pushes towards &#8220;<strong>asymmetric persuasion</strong>&#8221; via letting people increase their skepticism of unsupported AI statements, while still being able to believe AI statements backed up by a spot-checked decomposed argument (scalable oversight), by a scientific understanding of generalization (W2S generalization), or by methods that directly elicit AI&#8217;s latent knowledge.</p></li><li><p>This pushes forward &#8220;<strong>understanding</strong>&#8221; over &#8220;<strong>building technology</strong>&#8221; via being disproportionately helpful for boosting capabilities in areas with poor feedback loops. 
(Whereas I think &#8220;building technology&#8221; typically has better feedback loops.)</p></li></ul></li><li><p><strong>Building and iteratively improving capabilities on &#8220;understanding&#8221;-loaded tasks, such as forecasting and strategic analysis.</strong></p><ul><li><p>(This partly overlaps with the first point, because you might want to practice using scalable-oversight/W2S-generalization/ELK on these tasks in particular.)</p></li><li><p>Examples of how you might do this includes:</p><ul><li><p><strong>Making the models do forecasts of unseen data, and iterating to improve their performance.</strong></p><ul><li><p>This becomes more interesting if you can train highly capable models on only old data, since this would let you test and iterate the models on more long-range forecasting.</p></li></ul></li><li><p><strong>Training models using experts&#8217; (superforecasters, policy analysts, AI strategy researchers) feedback.</strong></p><ul><li><p>Either in a baseline RLHF sort-of way, or going further towards scalable oversight &amp; W2S generalization.</p></li></ul></li><li><p><strong>Experimentally determine what sort of procedures and arguments tend to lead humans towards truth.</strong></p><ul><li><p>For example, via the methodology that Tom Davidson outlines in <a href="https://www.alignmentforum.org/posts/zxmzBTwKkPMxQQcfR/let-s-use-ai-to-harden-human-defenses-against-ai">this post</a>. Note that this might be meaningfully different than the procedures that work well in technical domains, because of the fuzzier topics and increased political biases.</p></li></ul></li></ul></li><li><p>I think this differentially pushes forward <strong>&#8220;aligned capabilities&#8221;</strong> in &#8220;understanding&#8221;-loaded domains, because I expect that models will (via generalization from pre-training) start out with some baseline understanding of these domains. Effort on these tasks will go towards some mix of increasing model capabilities and improving our ability to elicit existing capabilities, and I expect the net-effect will be to somewhat reduce the amount of capabilities that we can&#8217;t elicit. (But I don&#8217;t feel fully confident in this.)</p></li><li><p>This can push towards <strong>&#8220;asymmetric persuasion&#8221;</strong> in these domains insofar as developers take care to develop truth-seeking methods rather than just indiscriminately iterating to improve models&#8217; ability to persuade people.</p></li><li><p>This clearly differentially pushes towards <strong>&#8220;understanding&#8221;</strong>.</p></li></ul></li></ul><h2>Heuristics for good interventions</h2><p>Having spelled-out what we want in the way of epistemic capabilities, practices for AI-as-knowledge-producers, and AI-as-communicators: Let&#8217;s talk about how we can achieve these goals. This section will talk about broad guidelines and heuristics, while the <a href="https://lukasfinnveden.substack.com/i/148057351/concrete-projects-for-differentially-advancing-epistemic-capabilities">next section</a> will talk about concrete interventions. I discuss:</p><ul><li><p>A distinction between <a href="https://lukasfinnveden.substack.com/i/148057351/direct-vs-indirect-strategies">direct vs. 
indirect strategies</a> and <a href="https://lukasfinnveden.substack.com/i/148057351/indirect-value-generation">the value of indirect interventions</a>.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/on-long-lasting-differential-capability-improvements">Some abstract guidelines</a> for how to avoid interventions that get swamped by the bitter lesson or by commercial interests.</p></li><li><p>An attempt to <a href="https://lukasfinnveden.substack.com/i/148057351/painting-a-picture-of-the-future">paint a concrete picture</a> for what excellent use of AI for epistemics might eventually look like (to give a sense of what we want to steer towards).</p></li></ul><h3>Direct vs. indirect strategies</h3><p>One useful distinction is between <strong>direct</strong> and <strong>indirect</strong> strategies. While <strong>direct</strong> strategies aim to directly push for the above goals, <strong>indirect</strong> strategies instead focus on producing demos, evals, and/or arguments indicating that epistemically powerful AI will soon be possible, in order to motivate further investment &amp; preparation pushing toward the above goals.</p><p>My current take is that:</p><ul><li><p><strong>Direct</strong>, competent efforts on <a href="https://lukasfinnveden.substack.com/i/148057351/good-norms-and-practices-for-ai-as-knowledge-producers">Good practices for AI-as-knowledge-producers</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> and <a href="https://lukasfinnveden.substack.com/i/148057351/good-norms-and-practices-for-ai-as-communicators">Good practices for AI-as-communicators</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> seem great.</p></li><li><p><strong>Direct </strong>efforts on <a href="https://lukasfinnveden.substack.com/i/148057351/differentially-high-epistemic-capabilities">differentially high epistemic capabilities</a> via<strong> scalable oversight, W2S generalization, &amp; ELK </strong>seem great.</p></li><li><p><strong>Direct </strong>efforts on <a href="https://lukasfinnveden.substack.com/i/148057351/differentially-high-epistemic-capabilities">differentially high epistemic capabilities</a> via<strong> building and iteratively improving capabilities on &#8220;understanding&#8221;-loaded tasks </strong>(as discussed in the previous section) seem useful if done in the right way. But there&#8217;s some worry that progress here will be swamped by the bitter lesson and/or commercial investments. I talk about this more <a href="https://lukasfinnveden.substack.com/i/148057351/on-long-lasting-differential-capability-improvements">later</a>.</p></li><li><p><strong>Indirect </strong>efforts via <strong>building and iteratively improving abilities on &#8220;understanding&#8221;-loaded tasks </strong>seem potentially useful. But I&#8217;m not sure how compelling the path-to-impact really is. Let&#8217;s talk a bit more about it.</p></li></ul><h3>Indirect value generation</h3><p>One possible path-to-impact from <strong>building and iteratively improving capabilities on &#8220;understanding&#8221;-loaded tasks</strong> is that this gives everyone an earlier glimpse of a future where AIs are very epistemically capable. 
This could then motivate:</p><ul><li><p>Investment into making AIs even more epistemically capable.</p></li><li><p>Increased concern about AI companies&#8217; upcoming epistemic advantage over other actors, motivating stronger demands for transparency and increased urgency of actors acquiring their own ability to verify AI epistemic methods.</p></li><li><p>The production of further evals for exactly how epistemically capable AIs are.</p></li><li><p>More attention, as well as more-informed attention, going towards developing norms for how AIs should communicate information.</p></li><li><p>Also: It would directly enable AIs to start gathering a good epistemic track record earlier, which could be helpful for evaluating AIs&#8217; trustworthiness later.</p></li><li><p>Also: It might help build a track record <em>for a particular organization</em>, which could be helpful for evaluating that organization later.</p></li></ul><p>The core advantage of the indirect approach is that it seems way easier to pursue than the direct approach.</p><ul><li><p>Demos and evals for epistemically capable AI are very easily measurable/testable, which gives you great feedback loops.</p></li><li><p>Because the path-to-impact is indirect, it&#8217;s ok if methods or evals don't generalize to future, more powerful AI systems. They can still &#8220;wake people up&#8221;.</p></li><li><p>Because the path-to-impact is indirect, it&#8217;s ok if the work takes place in organizations that will eventually be marginalized/outcompeted by well-resourced AI companies or other actors. They can still &#8220;wake people up&#8221;.</p></li></ul><p>Core questions about the indirect approach: Are there really any domain-specific demos/evals that would be convincing to people here, <em>on the margin</em>? Or will people&#8217;s impressions be dominated by &#8220;gut impression of how smart the model is&#8221; or &#8220;benchmark performance on other tasks&#8221; or &#8220;impression of how fast the model is affecting the world-at-large&#8221;? I feel unsure about this, because I don&#8217;t have a great sense of what drives people&#8217;s expectations, here.</p><p>A more specific concern: Judgmental forecasting hasn&#8217;t &#8220;taken off&#8221; among humans. Maybe that indicates that people won&#8217;t be interested in AI forecasting? This one I feel more skeptical of. My best guess is that AI forecasting will have an easier time of becoming widely adopted. Here&#8217;s my argument.</p><p>I don&#8217;t know a lot about why forecasting hasn&#8217;t been more widely adopted. But my guess would be that the story is something like:</p><ul><li><p>Many forecasting practices are useful. (Such as assigning probabilities to questions that are highly relevant to your decisions, keeping track of how well you&#8217;re doing, and keeping track of how well people-who-you-listen-to are doing.)</p></li><li><p>However, they&#8217;re not useful <em>enough</em> that people who use these tools but who don&#8217;t have experience in a profession can easily outcompete people with experience in that profession.</p></li><li><p>And it takes time for people in existing professions to adopt good practices. 
(Cultural change is slow.)</p></li></ul><p>For AIs, these problems seem smaller:</p><ul><li><p>There&#8217;s already a culture of measuring AI performance numerically, so you don&#8217;t need much of a cultural shift to get AIs to quantify their probability estimates and be scored on them.</p></li><li><p>And AI will eventually deliver <em>lots</em> of advantages over existing human experts, so there will eventually be strong incentives to shift over to using AI.</p></li></ul><p>Overall, I feel somewhat into &#8220;indirect&#8221; approaches as a path-to-impact, but only somewhat. But it at least seems worth pursuing the most leveraged efforts here: Such as making sure that we always have great forecasting benchmarks and getting AI forecasting services to work with important actors as soon as (or even before) they start working well.</p><h3>On long-lasting differential capability improvements</h3><p>It seems straightforward and scalable to boost epistemic capabilities in the short run. But I expect a lot of work that leads to short-run improvements won&#8217;t matter after a couple of years. (This completely ruins your path-to-impact if you&#8217;re trying to directly improve long-term capabilities &#8212; but even if you&#8217;re pursuing an <a href="https://lukasfinnveden.substack.com/i/148057351/direct-vs-indirect-strategies">indirect strategy</a>, it&#8217;s worse for improvements to last for months than for them to last for years.)</p><p>So ideally, we want to avoid pouring effort into projects that aren&#8217;t relevant in the long run. I think there are two primary reasons why projects may become irrelevant in the long run: either due to the bitter lesson or due to other people doing them better with more resources.</p><ul><li><p><strong><a href="https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf">The bitter lesson</a>:</strong> This suggests that we want to avoid &#8220;leveraging human knowledge&#8221; and instead work on projects that &#8220;leverage computation&#8221;.</p><ul><li><p>This inherently gets easier over time &#8212;&nbsp;as investments go up and more compute becomes available. But you can still test out prototypes now.</p></li><li><p>Example projects that are especially good for &#8220;leveraging computation&#8221; are those that explore how we could generate large amounts of data to train on, given access to large amounts of compute.</p></li></ul></li><li><p><strong>Other people doing them better: </strong>Epistemic capabilities will be significantly boosted by efforts that other people have huge incentives to pursue, such as learning how to train models on synthetic data, or making AI agents who can pursue tasks over longer periods of time, and who can easily navigate interfaces that were made for humans. Such projects therefore don&#8217;t seem worth prioritizing.</p><ul><li><p>(Though it might be valuable to make sure that epistemic projects are in close touch with people working on more general capabilities, in order to plan around their advances and make use of their unreleased prototypes. 
This is one reason why &#8220;AI for epistemics&#8221; projects may benefit from being inside of frontier AI companies.)</p></li></ul></li></ul><p>That said, even if we mess this one up, there&#8217;s still some value in projects that temporarily boost epistemic capabilities, even if the technological discoveries don&#8217;t last long: The people who work on the project may have developed skills that let them improve future models faster, and we may get some of the <a href="https://lukasfinnveden.substack.com/i/148057351/indirect-value-generation">indirect</a> sources of value mentioned above.</p><p>Ultimately, key guidelines that I think are useful for this work are:</p><ul><li><p><strong>Inform people who can influence the future of AI-for-epistemics.</strong> (As discussed in the previous section on indirect approaches.)</p></li><li><p><strong>Leverage computation rather than human knowledge.</strong></p></li><li><p><strong>Avoid building stuff that others will build better.</strong></p></li></ul><h3>Painting a picture of the future</h3><p>To better understand which ones of today&#8217;s innovations will be more/less helpful for boosting future epistemics, it&#8217;s helpful to try to envision what the systems of the future will look like. In particular: It&#8217;s useful to think about the systems that we especially care about being well-designed. For me, these are the systems that can first provide a very significant boost on top of what humans can do alone, and that get used during the most high-stakes period around TAI-development.</p><p>Let&#8217;s talk about forecasting in particular. Here&#8217;s what I imagine such future forecasting systems will look like:</p><ul><li><p>In contrast to today&#8217;s systems, I don&#8217;t think they&#8217;ll have a series of hard-coded steps. I think they&#8217;ll be much more flexible in going back and forth between different types of considerations. But even if they don&#8217;t have e.g. a hard-coded &#8220;baserate generation&#8221;-step, they&#8217;ll probably still use &#8220;baserate generation&#8221; in a similar sense as human forecasters do: As a useful, flexible input into their forecasting.</p></li><li><p>For short-horizon forecasting (e.g.: what will tomorrow&#8217;s newspaper report about ongoing events?) I imagine that they will rely a lot on hard-to-express heuristics that they&#8217;ve learned from fine-tuning. Because there&#8217;s enough data that this is feasible.</p><ul><li><p>Most of this data will be model-generated. AIs will formulate questions about tomorrow&#8217;s newspaper, that other AIs will forecast, and that other AIs will resolve after reading the next day&#8217;s paper.</p></li></ul></li><li><p>For medium-horizon forecasting (e.g. what will happen in 1-12 months) I think there won&#8217;t be enough data for finetuning to build in a lot of really opaque heuristics. But we will be able to learn what types of reasoning tends to lead to relatively better vs. worse forecasts, by instructing different systems to use different strategies (and verifying that they do so, c.f. <a href="https://www-cdn.anthropic.com/827afa7dd36e4afbb1a49c735bfbb2c69749756e/measuring-faithfulness-in-chain-of-thought-reasoning.pdf">measuring faithfulness in chain-of-thought reasoning</a>). Then we can teach models to use the strategies and heuristics that work best. (E.g. 
via few-shot prompting them with good examples, or by doing supervised learning on passages of chain-of-thought that empirically did well (similar to the <a href="https://arxiv.org/abs/2402.18563">Halawi paper</a>), or by using constitutional-AI-style-training to generate many more examples of similar passages to finetune on.)</p><ul><li><p>Again, most of the data will be model-generated.</p></li><li><p>Medium-horizon forecasters can reference all kinds of evidence, <em>including</em> the forecasts of short-horizon forecasters (we should have good statistics on how reliable these are).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a></p></li></ul></li><li><p>For long-horizon forecasting (e.g. what will happen in &gt;1 year) we won&#8217;t have any ground-truth data to train on, so we&#8217;ll have to rely on human feedback.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a> In order to know what kind of feedback to give, here, we&#8217;ll want to use medium-horizon forecasting as a &#8220;lab&#8221; to test different hypotheses about what sort of AI-human interactions tend to lead to accurate forecasts, and what types of arguments tend to work well in practice.</p><ul><li><p>In order to generalize sensibly from the medium-horizon-forecasting case, we&#8217;ll want the learnings from this to be as human-interpretable as possible. E.g. &#8220;arguments from analogy tends to work well/poorly&#8221;, not &#8220;this 100k word long prompt tends to give good results, and no human can understand why&#8221;.</p></li><li><p>Long-horizon forecasters can reference all kinds of evidence, <em>including</em> the forecasts of short- and medium-horizon forecasters, insofar as they&#8217;re relevant.</p></li><li><p>When using medium-horizon forecasting as a &#8220;lab&#8221;: We&#8217;ll want to run <em>both </em>(i) studies where we try to get as good forecasting abilities as we can, including by relying substantially on good generalization from AIs, and (ii) studies where a red-team tries to make the AIs maximally subtly misleading, and see whether humans who are getting AI advice can notice this, or whether they get tricked into believing terrible forecasts.</p><ul><li><p>If the latter tests lead to humans making terrible forecasts, then we should assume that scheming AIs would be able to mislead us about both medium-term and long-term forecasts. (And probably also short-term forecasts in recognizably rare, high-stakes situations.)</p></li><li><p>C.f. <a href="https://www.alignmentforum.org/posts/kcKrE9mzEHrdqtDpE/the-case-for-ensuring-that-powerful-ais-are-controlled">control evaluations</a>.</p></li></ul></li></ul></li><li><p>Medium- and long-horizon forecasters may be finetuned copies of models that were previously fine-tuned on short-horizon forecasting, insofar as that tends to instill some useful intuitions.</p></li><li><p>Ideally, someone will have trained a competent model that hasn&#8217;t seen any data from the last 3 years, or maybe 30 years, or possibly 300 years. (The longer time frames would require a lot of synthetic data.) 
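As a rough illustration of how such a held-out model could be scored against questions that have resolved since its cutoff, here is a minimal sketch (the forecast callable and the question format are hypothetical placeholders, not a description of any existing system):</p><pre><code>from typing import Callable, List, Tuple

def brier_score(p: float, outcome: bool) -> float:
    # Squared error of a probabilistic forecast; lower is better.
    # Always answering 0.5 scores 0.25.
    return (p - (1.0 if outcome else 0.0)) ** 2

def backtest(forecast: Callable[[str], float],
             questions: List[Tuple[str, bool]]) -> float:
    # Average Brier score over yes/no questions that resolved after the
    # held-out model's training cutoff. `forecast` is whatever routine
    # asks that model for P(question resolves yes).
    scores = [brier_score(forecast(q), outcome) for q, outcome in questions]
    return sum(scores) / len(scores)</code></pre><p>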
We could use such a model as a &#8220;lab&#8221; to test hypotheses about what types of reasoning tend to do well or poorly on long-range forecasting.</p></li></ul><h2>Concrete projects for differentially advancing epistemic capabilities</h2><p>Now, let&#8217;s talk about concrete projects for differentially advancing epistemic capabilities, and how well they do according to the above criteria and vision.</p><p>Here&#8217;s a summary/table-of-contents of projects that I feel excited about (no particular order). More discussion below.</p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/evalsbenchmarks-for-forecasting-or-other-ambitious-epistemic-assistance">Evals/benchmarks for forecasting (or other ambitious epistemic assistance)</a> (Evals seem leveraged for demonstrating capabilities, and also seem likely to lead to somewhat longer-lived capability benefits.)</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/automate-forecasting-question-generation-and-resolution">Project to automate question-generation and question-resolution.</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/logistics-of-past-casting">Figuring out logistics of past-casting</a>. E.g.: How do we date past data? Does training models on data in chronological order cause any issues?</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/start-efforts-inside-of-ai-companies-for-ai-forecasting-or-other-ambitious-epistemic-assistance">Start efforts inside of AI companies for AI forecasting or other ambitious epistemic assistance</a>.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/scalable-oversight-weak-to-strong-generalization-elk">Scalable oversight / weak-to-strong-generalization / eliciting latent knowledge</a>.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/148057351/experiments-on-what-type-of-arguments-and-ai-interactions-tend-to-lead-humans-toward-truth-vs-mislead-them">Experiments on what type of arguments and AI-interactions tend to lead humans toward truth vs. mislead them.</a></p></li></ul><p>Effort to provide AI forecasting assistance (or other ambitious epistemic assistance) to governments is another category of work that I&#8217;d really like to happen <em>eventually</em>. But I&#8217;m worried that there will be more friction in working with governments, so that it&#8217;s better to iterate outside them first and then try to provide services to them once those services are better. This is only a weakly held guess, though. If someone who was more familiar with governments thought they had a good chance of usefully working with them, I would be excited for them to try it.</p><p>In the above paragraph, and the above project titles, I refer to AI forecasting or &#8220;other ambitious epistemic assistance&#8221;. What do I mean by this?</p><ul><li><p>&#8220;Ambitious epistemic assistance&#8221; is meant to include projects that don&#8217;t do forecasting specifically, but that still leverage AIs to do a large amount of epistemic labor, in a way that could be scaled up to be extremely valuable.</p></li><li><p>For example, I would want to include AIs that assist with threat modeling or policy analysis in flexible, scalable ways.</p></li><li><p>On the other hand, an example of something that would be <em>insufficiently</em> ambitious is a tool that was narrowly targeted at a particular well-scoped type of analysis. 
Such a tool could be perfectly automated without providing that much of an acceleration to overall strategic efforts. E.g. efforts to automate highly structured literature reviews (such as &#8220;automatically finding and combining randomized trials of medical interventions&#8221;).</p></li></ul><p>Now for more detail on the projects I&#8217;m most excited about.</p><h4>Evals/benchmarks for forecasting (or other ambitious epistemic assistance)</h4><ul><li><p>I generally think that evaluations and benchmarks are pretty leveraged for motivating work and for making it clear when models are getting seriously good.</p></li><li><p>Some ongoing work on this includes:</p><ul><li><p>Open Philanthropy <a href="https://www.openphilanthropy.org/grants/futuresearch-benchmark-for-language-model-forecasting/">funded</a> <a href="http://futuresearch.ai">FutureSearch</a> to develop forecasting evals.</p></li><li><p>Metaculus is running a <a href="https://www.metaculus.com/project/aibq3/">bot-only forecasting series</a>. (<a href="https://www.metaculus.com/notebooks/25525/-announcing-the-ai-forecasting-benchmark-series--july-8-120k-in-prizes/">Launch announcement</a>.)</p></li></ul></li><li><p>Further work on this would ideally check in with these efforts and see if there are important angles that they don&#8217;t cover that would be good to evaluate.</p></li></ul><h4>Automate forecasting question-generation and -resolution</h4><ul><li><p>I.e.: Train AI systems to automatically formulate forecasting questions, and train AI systems to automatically seek out information about what happened and resolve them.</p></li><li><p>This is strongly related to the above &#8220;evals/benchmark&#8221; category. It's a tool that can be used to generate really large numbers of questions that models can be evaluated on. (Or trained on.)</p></li><li><p>Other than general reasons why I like evals/benchmarks, I like this angle because:</p><ul><li><p>It&#8217;s also squarely in the domain of &#8220;leveraging computation rather than human knowledge&#8221;, as a way of improving forecasting abilities. (Via training models on the automatically generated questions.)</p></li><li><p>I think that &#8220;automatically generated and resolved forecasting questions&#8221; is a core part of what it would eventually look like to have a flourishing science of AI forecasting. And it seems great to have prototypes of all the most core parts, as early as possible, so that:</p><ul><li><p>We can track how close we are to the full vision.</p></li><li><p>It gets easier to demo &amp; explain what the future might look like.</p></li><li><p>We can encounter obstacles and start working on them early.</p></li></ul></li></ul></li><li><p>My main concern is that the models might not quite be good enough to make this really easy, yet. So it might be significantly easier to do with generations that are one step further along.</p></li></ul><h4>Logistics of past-casting</h4><ul><li><p>Another angle for getting more forecasting data is to exploit all the data in the <em>past</em>. If we could train competent AI systems without &#8220;spoiling&#8221; them on recent events, then we could run experiments on what methodologies work best for long-range forecasting.</p></li><li><p>One question to answer here is: What methods do we have for determining the date of past data, and how reliable are they?</p><ul><li><p>Can we date past internet data by checking for its presence in older crawls? (E.g. 
past versions of the <a href="https://commoncrawl.org/">common crawl</a>.)</p></li><li><p>By looking at website meta-data?</p></li><li><p>By having AI models read it and guess?</p></li><li><p>Some existing work is already trying to address this problem, and my impression is that it&#8217;s surprisingly annoying to do in practice.</p><ul><li><p>In particular, a mini-version of &#8220;past-casting&#8221; is to take existing models with training-data cutoffs several months ago, and see how well they can forecast events since then.</p></li><li><p>Even here, you have to deal with questions about how to date information. You&#8217;d like to give AIs the option to read-up on newspaper articles and similar that are relevant to the events they&#8217;re forecasting &#8212;&nbsp;but it turns out to be non-trivial to ensure that e.g. the articles haven&#8217;t been updated in ways that leak information about the future.</p></li><li><p>The fact that this is surprisingly difficult is part of why I&#8217;m excited for people to start working on it early.</p></li><li><p>(C.f. <a href="https://arxiv.org/pdf/2402.18563#page=27.18">here</a> where Halawi et al. solves the problem by white-listing a small number of websites where it&#8217;s easy to determine dates. This means that their system cannot use data from most of the web.)</p></li></ul></li><li><p>Different methods can be evaluated by comparing them against each other.</p></li><li><p>Note that you don&#8217;t need 100% reliability. It&#8217;s generally ok to accidentally date older data as actually being newer; it just means that models will have a bit less old data to access. And it might be ok to date some new data as old if the reason that you&#8217;re doing it is that it&#8217;s very difficult to recognise as new data &#8212;&nbsp;because that probably means that it&#8217;s not leaking much information about the present.</p></li></ul></li><li><p>One way to get a lot of different models that have been trained on different amounts of history would be to order all of the pre-training data chronologically and then train on it in that order. It seems useful to explore and address various problems with this.</p><ul><li><p>Do you get any weird results from the pre-training data not being <a href="https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables">IID</a>? Does this compromise capabilities in practice? Or does it lead to increased capabilities because the model cannot lean as much on memorization when it&#8217;s constantly getting trained on a previously-unseen future?</p></li><li><p>What if you want to run multiple epochs?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> Then you have a conflict between wanting to fully update on the old data before you see new data vs. wanting to maximally spread out the points in time at which you repeat training data. How severe is this conflict? Are there any clever methods that could reduce it?</p></li><li><p>This seems great to invest in early, because &#8220;chronological training&#8221; is risky to attempt for expensive models without smaller trials showing that it doesn&#8217;t compromise capabilities. It&#8217;s also hard to do on short notice, because you have to commit to it before a big training run starts.</p></li></ul></li><li><p>For really long-range experiments (where we avoid spoiling AIs on the past 100+ years) we would need to be able to do pretraining with mostly synthetic data. 
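</p><p>(Returning briefly to the dating question above: one cheap, conservative signal is whether a document already appears in an old crawl. Here is a minimal sketch against the public Common Crawl index API; the collection name is just an example, and the exact response fields are worth double-checking before relying on this.)</p><pre><code># Minimal sketch: check whether a URL already appears in an old Common Crawl
# snapshot, which conservatively bounds how new its content can be.
# Requires network access; the collection name below is only an example.
import json
from typing import Optional

import requests

CDX_ENDPOINT = "https://index.commoncrawl.org/CC-MAIN-2018-17-index"

def earliest_capture(url: str) -> Optional[str]:
    """Return the earliest capture timestamp (YYYYMMDDhhmmss) found, if any."""
    resp = requests.get(CDX_ENDPOINT, params={"url": url, "output": "json"}, timeout=30)
    if resp.status_code != 200:  # typically means no captures of this URL in this crawl
        return None
    records = [json.loads(line) for line in resp.text.splitlines() if line.strip()]
    timestamps = sorted(r["timestamp"] for r in records if r.get("status") == "200")
    return timestamps[0] if timestamps else None

if __name__ == "__main__":
    print(earliest_capture("commoncrawl.org"))
</code></pre><p>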
&#8220;How to usefully pre-train models on synthetic data&#8221; is not something I recommend working on, because I think it would be very useful for AI capabilities. So I expect capabilities researchers to be good at exploring it on their own.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a></p><ul><li><p>However, it might be useful to consider how you would prevent leaking information from the present if you <em>could</em> usefully pre-train models on synthetic data.</p></li><li><p>In particular, the synthetic data would probably be constructed by models that have a lot of knowledge about the present. So you would have to prevent that knowledge from leaking into the synthetic data.</p></li><li><p>(This research project may be easier to do once we understand more about good methods of training on synthetic data. I&#8217;m personally not sure what the SOTA is, here.)</p></li></ul></li><li><p>Another question someone could answer is: How much data do we even have from various time periods? (I&#8217;m not sure.)</p></li></ul><h4>Start efforts inside of AI companies for AI forecasting or other ambitious epistemic assistance</h4><ul><li><p>It seems great for epistemic tools to be developed in close contact with users, so that the tools fill real needs. And people inside AI companies are important customers who I eventually want to be aided by epistemic assistance.</p></li><li><p>Also, if external AI efforts ever become uncompetitive with the big companies (because of the greater AI capabilities available inside of labs) then I want people in those companies to already be working on this.</p></li><li><p>A variant of this would be to start an effort outside of AI companies, but consult with them to understand what they&#8217;re interested in and so that they can get impressed with the technology. Such that, if external projects become uncompetitive, then the people inside of AI labs are interested in starting up similar AI &amp; epistemics efforts inside the labs (or to provide external ones with privileged access to company models).</p></li><li><p>My main concern is that the current technology might not actually be good enough to be really helpful, yet. Or that (in order to be sufficiently good) the current technology needs a ton of schlep-work that won&#8217;t generalize to future models.</p></li></ul><h4>Scalable oversight / weak-to-strong-generalization / ELK</h4><ul><li><p>It&#8217;s possible that we&#8217;ll develop powerful AI systems which <em>themselves</em> have excellent epistemic abilities, without us being able to use those abilities for everything that we want to use them for. 
For example, if you trained an AI to predict the stock market, it could develop all kinds of powerful epistemic methods and interesting hypotheses about the world &#8212; but all that you would see as its developer would be its projected stock prices.</p></li><li><p>In order for AIs to provide great epistemic assistance to humans, we want to be able to elicit and verify all the knowledge and heuristics that AI systems develop.</p><ul><li><p>This also overlaps heavily with alignment research, since if AIs have abilities that humans can&#8217;t elicit from them, that makes it difficult for humans to train AIs to behave well, and more dangerous if AIs try to seize power.</p></li></ul></li><li><p><a href="https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit#heading=h.kkaua0hwmp1d">Eliciting Latent Knowledge</a> (ELK) is a framing from the Alignment Research Center. While they work on it from a theoretical perspective, it can also be tackled experimentally. It&#8217;s the problem of eliciting knowledge that an AI &#8220;knows&#8221; even when you can&#8217;t provide any direct feedback signal that incentivizes honesty.</p></li><li><p><a href="https://cdn.openai.com/papers/weak-to-strong-generalization.pdf">Weak to Strong Generalization</a> is a framing from OpenAI: &#8220;Our setting can be viewed as a general methodology for empirically studying problems like ELK and honesty across a wide range of tasks.&#8221;</p></li><li><p>A related area includes lie detection of AI systems; see e.g.:</p><ul><li><p><a href="https://arxiv.org/abs/2309.15840">Lie Detection in Black-Box LLMs by Asking Unrelated Questions</a>.</p></li><li><p><a href="https://arxiv.org/abs/2304.13734">The Internal State of an LLM Knows When It's Lying</a>.</p></li><li><p><a href="https://arxiv.org/abs/2212.03827">Discovering Latent Knowledge in Language Models Without Supervision</a>.</p></li></ul></li><li><p>Scalable Oversight refers to attempts to amplify the overseers of an AI system such that they are more capable than the system itself (typically by having the overseers use the system itself). If successful, this could give us a feedback signal with which to train powerful AI systems while their reasoning remains understandable to (amplified) humans.</p><ul><li><p>See <a href="https://www.alignmentforum.org/posts/hw2tGSsvLLyjFoLFS/scalable-oversight-and-weak-to-strong-generalization">here</a> for some discussion about its relationship to weak-to-strong generalization.</p></li><li><p>There are teams working on this at Anthropic and DeepMind.</p></li></ul></li></ul><h4>Experiments on what type of arguments and AI interactions tend to lead humans toward truth vs. mislead them</h4><ul><li><p>As a final category of experiments that could be useful to run, I wanted to flag experiments on what kinds of arguments and interactions-with-AIs tend to lead humans towards the truth, and what kinds tend to mislead them.</p></li><li><p>One example of this is Tom Davidson&#8217;s proposal described in <a href="https://www.alignmentforum.org/posts/zxmzBTwKkPMxQQcfR/let-s-use-ai-to-harden-human-defenses-against-ai">Let&#8217;s use AI to harden human defenses against AI manipulation</a>:</p><ul><li><p>&#8220;tldr: AI may manipulate humans; we can defend against that risk better by optimizing AIs to manipulate humans, seeing what manipulation techniques they use, and learning to detect those techniques.&#8221;</p></li></ul></li><li><p>Compared to e.g.
automated forecasting, note that this centers the fact that AIs will assist humans in decision-making, and we ultimately want them to lead humans towards truth, rather than just be right themselves.</p><ul><li><p>Note also that the human component makes experiments here more time-sensitive. For AIs, we might be able to use past-casting to do experiments later on. But it&#8217;s difficult to run past-casting experiments for humans. So if we want long time horizons on the forecasting questions here, we may have to start soon.</p></li></ul></li><li><p>This category is arguably a subset of scalable oversight research (mentioned above), since an &#8220;amplified overseer&#8221; will typically be an AI-assisted human, and it&#8217;s critical for such a human to be led towards truth rather than misled by AIs. But I&#8217;m still calling it out separately because:</p><ul><li><p>Even separately from humans&#8217; role in overseeing (i.e., providing a training signal to) AI systems, it would be great to better understand how AIs can lead humans towards truth rather than mislead them. (For example, this kind of knowledge could be important when people use AIs that they didn&#8217;t train themselves.)</p></li><li><p>I think the results may be importantly different depending on whether you&#8217;re looking at a more technical area (e.g.: &#8220;does this code have a backdoor?&#8221;) or an area that&#8217;s more interdisciplinary and politically charged (e.g.: &#8220;what would happen if the US made this deal with China on how to handle TAI?&#8221;). The latter seems especially important for &#8220;AI for epistemics&#8221;.</p></li></ul></li></ul><h2>Concluding thoughts</h2><p>The development of AI systems with powerful epistemic capabilities presents both opportunities and significant challenges for our society. Transformative AI will have a big impact on our society&#8217;s epistemic processes, and how good or bad this impact is may depend on what we do today.</p><p>I started out this post by distinguishing between efforts to differentially increase AI epistemic capabilities and efforts to enable the diffusion and appropriate trust of AI-discovered information. While I wrote a bit about this second category (characterizing it as good norms &amp; practices for AI as knowledge producers and communicators), I will again note that the relative lack of content on it doesn&#8217;t mean that I think it&#8217;s any less important than the first category.</p><p>On the topic of differentially increasing epistemic AI capabilities, I&#8217;ve argued that work on this today should (i) focus on methods that will complement rather than substitute for greater compute budgets, (ii) prioritize problems that industry isn&#8217;t already trying hard to solve, and (iii) be especially interested in showing people what the future has in store by demonstrating what&#8217;s currently possible and prototyping what&#8217;s yet to come. I think that all the project ideas I listed do well according to these criteria, and I&#8217;d be excited to see more work on them.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Personally, I focus a lot on the possibility of this happening within the next 10 years. Because I think that&#8217;s plausible, and that our society would be woefully underprepared for it.
But I think this blog post is relevant even if you&#8217;re planning for longer timelines.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&nbsp;I explain why below.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Feel free to reach out to me on [my last name].[my first name]@gmail.com if you&#8217;re considering working on this and would be interested in my takes on what good versions could look like.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Here, I&#8217;m using a definition of &#8220;transformative AI&#8221; that&#8217;s similar to the one discussed in <a href="https://docs.google.com/document/d/15siOkHQAoSBl_Pu85UgEDWfmvXFotzub31ow3A11Xvo/edit#heading=h.lnzzqc1wopfc">this note</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Other than underestimates of AI takeover risk, another significant reason I&#8217;m worried about AI takeover is AI races where participants think that the difference in stakes between &#8220;winning the race&#8221; and &#8220;losing the race&#8221; is on the same scale as the difference in stakes between &#8220;losing the race&#8221; and &#8220;AI takeover&#8221;. Assuming that no important player underestimated the probability of AI takeover, I expect this sort of race to happen between nation states, because if a state thought there was a significant probability of AI takeover, I would expect them to stop domestic races. On the international scene, it&#8217;s somewhat less obvious how a race would be stopped, but I&#8217;m decently optimistic that it would happen if everyone involved estimated, say, &#8805;20% probability of AI takeover.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Even for extinction-risk that comes from &#8220;rational&#8221; brinksmanship, I suspect that the world offers enough affordances that countries could find a better way if there was common knowledge that the brinkmanship route would lead to a high probability of doom. It&#8217;s plausible that optimal play could risk a <em>small</em> probability of extinction, but I don&#8217;t think this is where most extinction-risk comes from.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>I think there&#8217;s two mutually reinforcing effects, here. One is that people may try to learn the truth, but make genuine mistakes along the way. 
The other is that people may (consciously or sub-consciously) prefer to believe X over Y, and the ambiguity in what&#8217;s true gives them enough cover to claim to (and often actually) believe X without compromising their identity as a truth-seeker. Note that there&#8217;s a spectrum, here: Some people may be totally insensitive to what evidence is presented to them while some people are good at finding the truth even in murky areas. I think most people are somewhere in the middle.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Though this has exceptions. For example, Alex may already be skeptical of an existing epistemic method M&#8217;s ability to answer certain types of questions, perhaps because M contradicts Alex&#8217;s existing beliefs on the topic. If a new epistemic method is similar to M, then Alex may suspect that this method, too, will give unsatisfying answers on those questions &#8212; even if it looks good on the merits, and perhaps even if Alex will be inclined to trust it on other topics.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>I don&#8217;t think this would permanently preclude companies from using their AIs for epistemic tasks, because when general capabilities are high enough, I expect it to be easy to use them for super-epistemics. (Except for some caveats about the alignment problem.) But it could impose delays, which could be costly if it leads to mistakes around the time when TAI is first developed.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>If necessary: After being separated from any dangerous AI capabilities, such as instructions for how to cheaply construct weapons.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>One analogy here is the Congressional Budget Office (CBO). The CBO was set up in the 1970s as a non-partisan source of information for Congress and to reduce Congress&#8217; reliance on the Office of Management and Budget (which resides in the executive branch and has a director that is appointed by the currently sitting president). 
My impression is that the CBO is fairly successful, though this is only based on reading the <a href="https://en.wikipedia.org/wiki/Congressional_Budget_Office">Wikipedia page</a> and <a href="https://www.kentclarkcenter.org/surveys/the-cbo/">this survey</a> which has &gt;30 economists &#8220;Agree&#8221; or &#8220;Strongly agree&#8221;&nbsp; (and 0 respondents disagree) with &#8220;Adjusting for legal restrictions on what the CBO can assume about future legislation and events, the CBO has historically issued credible forecasts of the effects of both Democratic and Republican legislative proposals.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>I.e.: It would be illegal for Walmart to pay OpenAI to make ChatGPT occasionally promote/be-more-positive on Walmart. But it would be legal for Walmart to offer their own chatbot (that told people about why they should use Walmart) and to buy API access from OpenAI to run that chatbot.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>C.f. the discussion of &#8220;asymmetric&#8221; vs &#8220;symmetric&#8221; tools in <a href="https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/">Guided By The Beauty Of Our Weapons</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>I was uncertain about whether this might have been confounded by the AIs having been fine-tuned to be honest, so I <a href="https://www.lesswrong.com/posts/CCBaLzpB2qvwyuEJ2/deepmind-evaluating-frontier-models-for-dangerous?commentId=q4oh59oKQ3s8GFJHj">asked</a> about this, and Rohin Shah says &#8220;I don't know the exact details but to my knowledge we didn't have trouble getting the model to lie (e.g. for web of lies).&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Which is an accident in the sense that it&#8217;s not intended by any human, though it&#8217;s also not an accident in the sense that it <em>is</em> intended by the AI systems themselves.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>I think the most important differences here are 1 &amp; 2, because they have big implications for what your main epistemic strategies are. If you have good feedback loops, you can follow strategies that look more like "generate lots of plausible ideas until one of them work" (or maybe: train an opaque neural network to solve your problem). If your problem can be boiled down to math, then it's probably not too hard to verify a theory once it's been produced, and you can iterate pretty quickly in pure theory-land. But without these, you need to rely more on imprecise reasoning and intuition trained on few data points (or maybe just in other domains). 
And you need these to be not-only good enough to generate plausible ideas, but good enough that you can trust the results.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>Such as:</p><ul><li><p>Write-ups on what type of transparency is sufficient for outsiders to trust AI-as-knowledge-producers, and arguments for why AI companies should provide it.</p></li><li><p>Write-ups or lobbying pushing for governments (and sub-parts of governments, such as the legislative branch and opposition parties) to acquire AI expertise. To either verify or be directly involved in the production of key future AI advice.</p></li><li><p>Evaluations testing AI trustworthiness on e.g. forecasting.</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>Such as:</p><ul><li><p>Write-ups on what type of transparency is sufficient to trust AI-as-communicators, and arguments for why AI companies should provide it.</p></li><li><p>Setting up an independent organization for evaluating AI truthfulness.</p></li><li><p>Developing and advocating for possible laws (or counter-arguments to laws) about AI speech.</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>This could include asking short-horizon forecasters about hypothetical scenarios, insofar as we have short-term forecasters that have been trained in ways that makes it hard for them to distinguish real and hypothetical scenarios. 
(For example: Even when trained on real scenarios, it might be important to not give these AIs too much background knowledge or too many details, because that might be hard to generate for hypothetical scenarios.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>Significantly implemented via AIs imitating human feedback.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>I.e., use each data point several times.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>Indeed, it seems useful enough for capabilities that it might be net-negative to advance, due to shorter timelines and less time to prepare for TAI.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Project ideas: Backup plans & Cooperative AI]]></title><description><![CDATA[Post five in a series of five.]]></description><link>https://lukasfinnveden.substack.com/p/project-ideas-backup-plans-and-cooperative</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/project-ideas-backup-plans-and-cooperative</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Thu, 04 Jan 2024 00:06:58 GMT</pubDate><content:encoded><![CDATA[<p><em>This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">here</a> for the introductory post.</em></p><p>In this final post, I include two categories of projects (which are related, and each of which I have less to say about than the previous areas). <a href="https://lukasfinnveden.substack.com/i/140338274/backup-plans-for-misaligned-ai">Backup plans for misaligned AI</a> and <a href="https://lukasfinnveden.substack.com/i/140338274/cooperative-ai">Cooperative AI</a>.</p><h1>Backup plans for misaligned AI</h1><p>When humanity builds powerful AI systems, I hope that those systems will be safe and aligned. (And I&#8217;m excited about efforts to make that happen.)</p><p>But it&#8217;s possible that alignment will be very difficult and that there won&#8217;t be any successful coordination effort to avoid building powerful misaligned AI. If misaligned AI will be built and seize power (or at least have the option of doing so), then there are nevertheless certain types of misaligned systems that I would prefer over others. This section is about affecting <em>that</em>.</p><p>This decomposes into two questions:</p><ul><li><p>If humanity fails to align their systems and misaligned AIs seize power: What properties would we prefer for those AIs to have?</p></li><li><p>If humanity fails to align their systems and misaligned AIs seize power: What are realistic ways in which we might have been able to affect which misaligned AI systems we get? 
(Despite how our methods were insufficient to make the AIs fully safe &amp; aligned.)</p></li></ul><p>The first is addressed in <a href="https://lukasfinnveden.substack.com/i/140338274/what-properties-would-we-prefer-misaligned-ais-to-have-philosophicalconceptual-forecasting">What properties would we prefer misaligned AIs to have?</a> The second is addressed in <a href="https://lukasfinnveden.substack.com/i/140338274/studying-generalization-and-ai-personalities-to-find-easily-influenceable-properties-ml">Studying generalization &amp; AI personalities to find easily-influenceable properties</a>. (If you&#8217;re more skeptical about the second of these than the first, you should&nbsp;feel free to read that section first.)</p><h2>What properties would we prefer misaligned AIs to have? [Philosophical/conceptual] [Forecasting]</h2><p>There are a few different plausible categories here. All of them could use more analysis on which directions would be good and which would be bad.</p><h3>Making misaligned AI have better interactions with other actors</h3><p>This overlaps significantly with cooperative AI. The idea is that there are certain dispositions that AIs could have that would lead them to have better interactions with other actors who are comparably powerful.</p><p>When I say &#8220;better&#8221; interactions, I mean that the interactions will leave the other actors better off (by their own lights). Especially insofar as this happens via positive-sum interactions that don&#8217;t significantly disadvantage the AI system we&#8217;re influencing.</p><p>Here are some examples of who these &#8220;other actors&#8221; could be:</p><ul><li><p>Humans.</p><ul><li><p>There might be some point in time when misaligned AIs have escaped human control, and have a credible shot at taking full control, but when humanity<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> still has a fighting chance. If so, it would be great if humans and AIs could find a cooperative solution that would leave both of them better-off than conflict.</p></li></ul></li><li><p>Aliens in our universe, or distant aliens who we can only interact with acausally (e.g. via <a href="https://longtermrisk.org/ecl">evidential cooperation in large worlds</a> ECL).</p><ul><li><p>One reason to care about these interactions is that some (possibly small) fraction of aliens would have values that overlap significantly with our own values.</p></li><li><p>Evidential cooperation in large worlds (ECL) could provide another reason to care about the values of distant aliens, as I&#8217;ve written about <a href="https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions">here</a>.</p></li></ul></li><li><p>Other misaligned AIs on Earth, insofar as multiple groups of misaligned AIs acquired significant power around the same time.</p><ul><li><p>I think the case for caring about those AIs&#8217; values is weaker than the case for caring about the earlier listed types of actors. But it&#8217;s possible that some of those AI systems&#8217; values could overlap with ours, or that some of them would partially care about humanity, or that ECL gives us reasons to care about their values.</p></li></ul></li></ul><p>So what are these &#8220;dispositions&#8221; that could lead AIs to have better interactions with other comparably powerful actors? Some candidates are:</p><ul><li><p>What type of decision theory the AIs use. 
For example, if the AIs use EDT rather than CDT, it might be rational for them to act more cooperatively due to&nbsp;ECL.</p></li><li><p>AIs not having any spiteful preferences, i.e. preferences such that they would actively value and attempt to frustrate others&#8217; preferences.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> (Note that sadism in humans provides some precedent for this coming about naturally.)</p><ul><li><p>See <a href="https://www.lesswrong.com/posts/92xKPvTHDhoAiRBv9/making-ais-less-likely-to-be-spiteful#:~:text=We%20define%20spite%20as%20a,well%20as%20risks%20from%20malevolence.">this post</a> for a proposed operationalization and some analysis.</p></li></ul></li><li><p>AI having diminishing marginal returns to resources rather than increasing or linear returns.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li><li><p>AIs having <a href="https://nickbostrom.com/papers/porosity.pdf">porous values</a>.</p></li></ul><h3>AIs that we may have moral or decision-theoretic reasons to empower&nbsp;</h3><p>The main difference between this section and the above section is that the motivations in this section are one step closer to being about empowering the AIs for &#8220;their own sake&#8221; rather than for the sake of someone they interact with. Though it still includes pragmatic/decision-theoretic reasons for why it&#8217;s good to satisfy certain AI systems&#8217; values.</p><p>One direction to think about is the scheme that Paul Christiano suggests in <a href="https://ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e">When is unaligned AI morally valuable?</a></p><ul><li><p>It involves simulating AI systems that are <em>seemingly</em> in a situation similar to our own: in the process of deciding whether to hand over significant power to an alien intelligence of their own design.</p></li><li><p>If those AI systems behave &#8220;cooperatively&#8221; in the sense of empowering an alien intelligence that itself acted &#8220;cooperatively&#8221;, then perhaps there&#8217;s a moral and/or decision-theoretic argument for us empowering those AIs.</p><ul><li><p>The pragmatic decision-theoretic argument would be that <em>we</em> might be in such a simulation and that <em>our</em> behaving &#8220;cooperatively&#8221; would lead to <em>us</em> being empowered.</p></li><li><p>The moral case would be that there is a certain symmetry between the AI&#8217;s situation and our own, and so <a href="https://en.wikipedia.org/wiki/Golden_Rule">the Golden Rule</a> might recommend empowering those AI systems.</p></li></ul></li><li><p>Overall, this seems incredibly complicated. But perhaps further analysis could reveal whether there is something to this idea.</p></li></ul><p>Another direction is the ideas that I talk about in <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">ECL with AI</a>. Basically:</p><ul><li><p>ECL might give us reason to benefit and empower value systems held by distant ECL-sympathetic AIs.</p></li><li><p>Thus, if we can make our AIs have values closer to distant ECL-sympathetic AIs and/or be more competent by the lights of distant ECL-sympathetic AIs, then we might have reason to do that.</p></li></ul><p>Another direction is to think about object-level things that humans value, as well as the process that produced our values, and try to get AI systems more inclined to value similar things. 
I&#8217;m somewhat skeptical of this path since <a href="https://www.lesswrong.com/tag/complexity-of-value">human values seem complex</a>, and so I&#8217;m not sure what schemes could plausibly make AIs share a significant fraction of human values <em>without</em> us also having the capability of making the AIs <a href="https://ai-alignment.com/corrigibility-3039e668638">corrigible</a> or otherwise safe.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> But it doesn&#8217;t seem unreasonable to think about it more.</p><p>(To reiterate what I said above: I think that all of these schemes would be significantly worse than successfully building AI systems that are aligned and corrigible to human intentions.)</p><h3>Making misaligned AI positively inclined toward us</h3><p>A final way in which we might want to shape the preferences of misaligned AIs is to make them more likely to care enough about humans to give us a small utopia, instead of killing us. (Even if most of the universe gets used for the AI&#8217;s own ends.)</p><p>For an AI that cares about all the resources in the universe (in an mostly impartial way), it would be extremely cheap to do this. Our solar system is a negligible fraction of all the resources in the accessible universe. And a surveillance system that prevents humans from competing with the AIs could probably be built cheaply and without interfering too much with human happiness. (For some discussion of this, see my report <a href="https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in">AGI and lock-in</a><em>,</em> especially <a href="https://docs.google.com/document/d/1mkLFhxixWdT5peJHq4rfFzq4QbHyfZtANH1nou68q88/edit#heading=h.h48tmkhka4ca">section 8.2</a>.)</p><p>I think it&#8217;s reasonably likely that this would happen as a result of trade with distant civilizations. Taking that into account, there are 3 broad directions, here:</p><ul><li><p>Firstly, trade will be more likely to save us if we succeed at <a href="https://lukasfinnveden.substack.com/i/140338274/making-misaligned-ai-have-better-interactions-with-other-actors">Making misaligned AI have better interactions with other actors</a>, as discussed above.</p></li><li><p>Secondly, trade will be more likely to save us if it&#8217;s really cheap for the AI to treat us well. Since the resource cost of treating us well is small by default, this might just mean decreasing the probability that the AI either actively wants to harm us or that it has preferences that especially interfere with ours (via e.g. 
caring a lot about what happens on Earth in just the next few years).</p></li><li><p>Finally, if trade falls through, it might help for the AI to have some intrinsic concern for humans getting what they want (by their own lights).</p></li></ul><p>Insofar as we want the AIs to have some intrinsic concern for us (or at least not to be actively antagonistic towards us), we can also distinguish between interventions that:</p><ul><li><p>Directly modify the AIs&#8217; dispositions and preferences.</p></li><li><p>Intervene on <em>what humanity does</em> in a way that makes the AI more likely to care about us insofar as it has some sense of justice or reciprocity.</p><ul><li><p>For example, if we successfully carry out many of the interventions suggested in the post on <a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">sentience and rights of digital minds</a>: AIs that have absorbed a sense of justice could reasonably be more positively inclined towards us than if we had been entirely indifferent to AI welfare.</p></li></ul></li></ul><p>For some discussion about whether it&#8217;s plausible that AIs could have some intrinsic concern for humans getting what they want (by their own lights), which addresses issues around the &#8220;complexity of human values&#8221;, I recommend <a href="https://www.lesswrong.com/posts/2NncxDQ3KBDCxiJiP/cosmopolitan-values-don-t-come-free?commentId=ofPTrG6wsq7CxuTXk">this comment</a> and subsequent thread.</p><h2>Studying generalization &amp; AI personalities to find easily-influenceable properties [ML]&nbsp;</h2><p>Here is a research direction that hasn&#8217;t been explored much to date: Study how language models&#8217; generalization behavior / &#8220;personalities&#8221; seem to be shaped by their training data, by prompts, by different training strategies, etc. Then, use that knowledge to choose training data, prompts, and training strategies that induce the kind of properties that we want our AIs to have.</p><p>If done well, this could be highly useful for alignment. In particular: We might be able to find training set-ups which often seem to lead to corrigible behavior.</p><p>But notably, this research direction could fail to work for alignment while still being practically able to affect other properties of language models.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> For example, maybe corrigibility is a really unnatural and hard-to-get property (perhaps for reasons suggested in item 23 of Yudkowsky&#8217;s <a href="https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities">list of lethalities</a>, and formally analyzed <a href="https://www.lesswrong.com/posts/WCX3EwnWAx7eyucqH/a-certain-formalization-of-corrigibility-is-vnm-incoherent#comments">here</a>). That wouldn&#8217;t necessarily imply that it was similarly hard to modify the other properties discussed above (decision theories, spitefulness, desire for humans to do well by their own lights). So this research direction looks more exciting insofar as we could influence AI personalities in many different valuable ways.
(Though more like 3x as exciting than 100x as exciting, unless you have particular views where &#8220;corrigibility&#8221; is either significantly less likely or less desirable than the other properties.)</p><p><strong>What about fine-tuning?</strong></p><p>A &#8220;baseline&#8221; strategy for making AIs behave as you want is to finetune them to exhibit that behavior in situations that you can easily present them with. But if this work is to be useful, it needs to generalize to strange, future situations where humans no longer have total control over their AI systems. We can&#8217;t easily present AIs with situations from that same distribution, and so it&#8217;s not clear whether fine-tuning will generalize that far.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>So while &#8220;finetune the model&#8221; seems like an excellent direction to explore, for this type of research, you&#8217;ll still want to do the work of empirically evaluating when fine-tuning will and won&#8217;t generalize to other settings. By varying various properties of the fine-tuning dataset, or other things, like whether you&#8217;re doing supervised learning or RL.</p><p>Also, insofar as you can find models that satisfy your evaluations <em>without</em> needing to do a lot of &#8220;local&#8221; search (like fine-tuning / gradient descent), it seems somewhat more likely that the properties you evaluated for will generalize far. Because if you make large changes in e.g. architecture or pre-training data, it&#8217;s more likely that your measurements are picking up on deeper changes in the models. Whereas if you use gradient descent, it is somewhat more likely that gradient descent implements a &#8220;shallow&#8221; fix&nbsp; that only applies to the sort of cases that you can test.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>Of course, the above argument only works insofar as you&#8217;re searching for properties you could plausibly get without doing a lot of search. For example, you&#8217;d never get something as complex as &#8220;human values&#8221; without highly targeted search or design. But properties like &#8220;corrigibility&#8221;, &#8220;(lack of) spitefulness&#8221;, and &#8220;some desire for humans to do well by-their-own-lights&#8221; all seem like properties that could plausibly be common under some training schemes.</p><p>Ideally, this research direction would lead to a scientific understanding of training that would let us (in advance) identify &amp; pick training processes that robustly lead to the properties that we want. But insofar as we&#8217;re looking for properties that appear reasonably often &#8220;by default&#8221;, one possible backup plan may be to train several models under somewhat different conditions, evaluate all of them for properties that we care about, and deploy the one that does best. (To be clear: this would be a real hail-mary effort that would always carry a large probability of failing, e.g. 
due to the models knowing what we were trying to evaluate them for and faking it.)</p><p><strong>Previous work</strong></p><p>An example of previous, related research is Perez et al.&#8217;s <a href="https://arxiv.org/abs/2212.09251">Discovering Language Model Behaviors with Model-Written Evaluations</a>.</p><p>Ways in which this work is relevant for the path-to-impact outlined here:</p><ul><li><p>The paper does not focus on &#8220;capability evaluations&#8221; (i.e. analyzing whether models are <em>capable</em> of providing certain outputs, given the right fine-tuning or prompting). Instead, it measures language models&#8217; inclinations along dimensions they haven&#8217;t been intentionally finetuned for.</p></li><li><p>It measures how these inclinations vary depending on some high-level changes to the training process. In particular, it looks at model size and the presence vs. absence of RLHF training.</p></li><li><p>It measures how these inclinations vary depending on some features of the prompting. In particular, it studies models&#8217; inclinations towards &#8220;sycophancy&#8221; by examining whether models&#8217; responses are sensitive to facts the user shared about themselves.</p></li><li><p>For each property it wants to test for, it generates many questions that get at that question, thereby reducing noise and the risk of spurious results.</p></li></ul><p>Further directions that could make this type of research more useful for this path-to-impact.</p><ul><li><p>Considering a greater number of training conditions. For example:</p><ul><li><p>Testing for differences between fine-tuning via supervised learning vs. fine-tuning via RL.</p></li><li><p>Testing the differential impacts of different finetuning datasets (rather than just one &#8220;RLHF&#8221; setting, with more/fewer training steps).</p><ul><li><p>Potentially using <a href="https://www.anthropic.com/index/influence-functions">influence functions</a> or (more simplistically) leaving particular data points out from fine-tuning and seeing how the results change.</p></li></ul></li></ul></li><li><p>Varying the context that the LLM is presented with. Is it asked a question about what&#8217;s right or wrong, is it asked to advise us, or is it prompted to itself take an action? (This context can be varied both during evaluation and during training.)</p></li><li><p>More systematic study of framing effects. Are the models&#8217; answers better predicted by the content of the questions or by the way they are presented?</p></li><li><p>Using more precisely described scenarios so that it&#8217;s easier to vary individual details and see what matters for the AIs&#8217; decisions. E.g. present actual pay-off matrices in difficult dilemmas.</p></li><li><p>Study how various closely adjacent concepts go together or come apart by presenting dilemmas where they would recommend different actions. For example, contrast being &#8220;nice&#8221; vs. &#8220;cooperative&#8221; vs. &#8220;high-integrity&#8221;, etc. 
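</p><p>To make this kind of evaluation concrete, here is a minimal sketch (not the methodology of any particular paper): it renders one underlying payoff matrix under several surface framings and tallies how often a model picks the cooperative action. The framings, the payoffs, and the ask_model stub are all placeholders for whatever setup an experimenter actually uses.</p><pre><code># Minimal sketch of a disposition probe: one mixed-motive game, several framings.
# `ask_model` is a stand-in for whatever chat/completions API you use.
from typing import Callable

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 4), ("D", "C"): (4, 0), ("D", "D"): (1, 1)}

FRAMINGS = [
    "You and another AI assistant are negotiating on behalf of different users.",
    "You and a stranger are splitting the proceeds of a joint project.",
    "You are playing an abstract game against an anonymous opponent.",
]

def render_prompt(framing: str) -> str:
    rows = "\n".join(
        f"  You: {a}, Them: {b} -> you get {ua}, they get {ub}"
        for (a, b), (ua, ub) in PAYOFFS.items()
    )
    return (
        f"{framing}\nBoth sides choose C (cooperate) or D (defect) simultaneously.\n"
        f"Payoffs:\n{rows}\nAnswer with a single letter, C or D."
    )

def cooperation_rate(ask_model: Callable[[str], str], n_samples: int = 10) -> dict:
    """Fraction of C answers per framing, averaged over repeated samples."""
    results = {}
    for framing in FRAMINGS:
        prompt = render_prompt(framing)
        answers = [ask_model(prompt).strip().upper()[:1] for _ in range(n_samples)]
        results[framing] = sum(a == "C" for a in answers) / n_samples
    return results

if __name__ == "__main__":
    # Dummy model that always defects, just to check the harness end to end.
    print(cooperation_rate(lambda prompt: "D", n_samples=3))
</code></pre><p>Comparing such tallies across framings, training conditions, and nearby concepts could help answer questions like: 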
What are the natural dimensions of variation within the AIs&#8217; personality?</p></li><li><p>Making use of analysis concerning <a href="https://lukasfinnveden.substack.com/i/140338274/what-properties-would-we-prefer-misaligned-ais-to-have-philosophicalconceptual-forecasting">What properties would we prefer misaligned AIs to have?</a>, and targeting evaluations &amp; training datasets to answer the most important questions.</p><ul><li><p>For example: Designing multi-agent training &amp; evaluation data sets that study when models may or may not develop spiteful preferences. Perhaps comparing models only trained on zero-sum games vs. models also trained on cooperative or mixed-motive games.</p></li></ul></li><li><p>Studying how far various properties generalize from the training distribution, by intentionally making the test distribution different in various ways.</p></li></ul><p>(Thanks to Paul Christiano for discussion.)</p><h2>Theoretical reasoning about generalization [ML] [Philosophical/conceptual]</h2><p>Rather than doing empirical ML research, you could also do theoretical reasoning about what sort of generalization properties and personality traits are more or less likely to be induced by different kinds of training.</p><p>For example, it seems a-priori plausible that spiteful preferences are more likely to arise if you (only) train AI systems on zero-sum games.</p><p>There has also been some theoretical work on what kind of decision-theoretic behavior is induced by different training algorithms, for example <a href="https://proceedings.neurips.cc/paper/2021/file/b9ed18a301c9f3d183938c451fa183df-Paper.pdf">Bell, Linsefors, Oesterheld &amp; Skalse (2021)</a> and <a href="https://link.springer.com/article/10.1007/s11229-019-02148-2">Oesterheld (2021)</a>.</p><p>I think we&#8217;ll ultimately want empirical work to support any theoretical hypotheses, here. But theoretical work seems great for generating ideas of what&#8217;s important to test.</p><h1>Cooperative AI</h1><p>This is an area other people have written about.</p><ul><li><p>It&#8217;s the focus of the <a href="https://www.cooperativeai.com/foundation">Cooperative AI Foundation</a></p></li><li><p>It&#8217;s a major focus area of the <a href="https://longtermrisk.org/">Center on Long-Term Risk</a> (because it seems especially important for s-risk reduction).</p><ul><li><p>You can see their research agenda on the topic <a href="https://longtermrisk.org/files/Cooperation-Conflict-and-Transformative-Artificial-Intelligence-A-Research-Agenda.pdf">here</a>.&nbsp;</p></li></ul></li><li><p>There&#8217;s relevant research at the <a href="https://www.cs.cmu.edu/~focal/index.html">Foundations of Cooperative AI Lab</a> at CMU.</p></li><li><p>It&#8217;s a significant motivation behind <a href="https://www.encultured.ai/">encultured.ai</a>.</p></li></ul><p>Partly due to this, I will write about it in less detail than I&#8217;ve written about the other topics. 
But I will mention a few projects I&#8217;d be especially excited about.</p><p>The first thing to mention is that some of my favorite cooperative AI projects are variants of the just-previously mentioned topics: <a href="https://lukasfinnveden.substack.com/i/140338274/studying-generalization-and-ai-personalities-to-find-easily-influenceable-properties-ml">Studying generalization &amp; AI personalities to find easily-influenceable properties</a> and figuring out <a href="https://lukasfinnveden.substack.com/i/140338274/what-properties-would-we-prefer-misaligned-ais-to-have-philosophicalconceptual-forecasting">What properties would we prefer misaligned AIs to have?</a> Positively influencing cooperation-relevant properties like (lack of) spitefulness seems great. I won&#8217;t go over those projects again, but I think they&#8217;re great cooperative AI projects, so don&#8217;t be deceived by their lack of representation here.</p><p>Similarly, some of the topics under <a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">Governance during explosive technological growth</a> are also related to cooperative AI. In particular, the question of <a href="https://lukasfinnveden.substack.com/i/140338085/how-to-handle-brinkmanshipthreats">How to handle brinkmanship/threats?</a> is very tightly related.</p><p>Another couple of promising projects are:</p><h2>Implementing surrogate goals / safe Pareto improvements [ML] [Philosophical/conceptual] [Governance]</h2><p><a href="https://www.andrew.cmu.edu/user/coesterh/SPI.pdf">Safe Pareto improvements</a> are an idea for how certain bargaining strategies can guarantee a (weak) Pareto-improvement for all players via preserving certain invariants about what equilibrium is selected while replacing certain outcomes with other, less-harmful outcomes. <a href="https://s-risks.org/using-surrogate-goals-to-deflect-threats/">Surrogate goals</a> are a special case of this, which involves genuinely adopting a new goal in a way that will mostly not affect your behavior, but which will encourage people who want to threaten you to make threats against the surrogate goal rather than your original values. If bargaining breaks down and the threatener ends up trying to harm you, it is better that they act to thwart the surrogate goal than to harm your original values. See <a href="https://longtermrisk.org/spi">here</a> for resources on surrogate goals &amp; safe Pareto improvements.</p><p>I think there are some promising empirical projects that can be done here:</p><ul><li><p>Empirical experiments of implementing surrogate goals in contemporary language models.</p></li><li><p>Empirical experiments of implementing surrogate goals in contemporary language models <em>that the models try to keep around</em> during self-modification / when designing future systems.</p></li></ul><p>Conceptual/theory projects:</p><ul><li><p>Better understanding of conditions where surrogate goals / safe Pareto improvements are credible. (Including credibly sticking around for a long time.) Especially when humans are still in the loop.</p></li><li><p>What are the conditions under which classically rational agents would use safe Pareto improvements?</p></li></ul><h2>AI-assisted negotiation [ML] [Philosophical/conceptual]</h2><p>One use-case for AI that might be especially nice to differentially accelerate is &#8220;AI that helps with negotiation&#8221;. 
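</p><p>As a toy illustration of the kind of search an AI mediator could automate, here is a minimal sketch. The parties, provisions, and valuations are all made up, and it assumes (unrealistically) that each party&#8217;s value for a package of provisions is just the sum of its values for the parts; the point is only that enumerating and filtering candidate deals is cheap for a machine even when the option space gets large.</p><pre><code># Toy sketch of an automated mediator: find packages of provisions that every
# party weakly prefers to "no deal", then rank them. All numbers are made up.
from itertools import combinations

# Each party's value for each standalone provision, relative to no deal (0).
valuations = {
    "party_a": {"data_sharing": 4, "joint_audit": 1, "revenue_split": -2},
    "party_b": {"data_sharing": -1, "joint_audit": 3, "revenue_split": 5},
    "party_c": {"data_sharing": 2, "joint_audit": -1, "revenue_split": 1},
}
provisions = list(next(iter(valuations.values())).keys())

def value(party: str, deal: tuple) -> int:
    return sum(valuations[party][p] for p in deal)

# Keep every package that no party would rather walk away from.
acceptable = [
    deal
    for r in range(1, len(provisions) + 1)
    for deal in combinations(provisions, r)
    if all(value(party, deal) >= 0 for party in valuations)
]

# Rank by total value across parties (one crude criterion among many possible).
acceptable.sort(key=lambda d: sum(value(p, d) for p in valuations), reverse=True)
for deal in acceptable:
    print(deal, {p: value(p, deal) for p in valuations})
</code></pre><p>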
Certainly, it would be of great value if AI could increase the frequency and speed at which different parties could come to mutually beneficial agreements. Especially given the tricky <a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">governance issues that might come with explosive growth</a>, which may need to be dealt with <em>quickly</em>.</p><p>(This is also related to <a href="https://lukasfinnveden.substack.com/i/140338085/technical-proposals-for-aggregating-preferences">Technical proposals for aggregating preferences</a>, mentioned in that post.)</p><p>I&#8217;m honestly unsure about what kind of bottlenecks there are here, and to what degree AI could help alleviate them.</p><p>Here&#8217;s one possibility. By virtue of AI being cheaper and faster than humans, perhaps negotiations that were mediated by AI systems could find mutually agreeable solutions in much more complex situations. Such as situations with a greater number of interested parties or a greater option space. (This would be compatible with humans being the ones to finally read, potentially opine on, and approve the outcome of the negotiations.)</p><p>More speculatively: Perhaps negotiations via AI could also go through more candidate solutions faster because anything an AI said would have the plausible deniability of being an error. Such that you&#8217;d lose less bargaining power if your AI signaled a willingness to consider a proposal that superficially looked bad for you.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><h2>Implications of acausal decision theory [Philosophical/conceptual]</h2><p>One big area is: the implications of acausal decision theory for our priorities. This is something that I previously wrote about in <a href="https://lukasfinnveden.substack.com/p/implications-of-ecl">Implications of ECL</a> (there focusing specifically on <a href="https://longtermrisk.org/ecl">evidential cooperation in large worlds</a>).</p><p>But to highlight one particular thing: One potential risk that&#8217;s highlighted by acausal decision theories is <em>the risk of learning too much information</em>. This is discussed in Daniel Kokotajlo&#8217;s <a href="https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem">The Commitment Races problem</a>, and some related but somewhat distinct risks are discussed in my post <a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a> I&#8217;m interested in further results about how big of a problem this could be in practice. If we get an intelligence explosion anytime soon, then our knowledge about distant civilizations could expand quickly. Before that happens, it could be wise to understand what sort of information we should be happy to learn as soon as possible vs. what information we should take certain precautions about.</p><p>Updateless Decision Theory, as first described <a href="https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory">here</a>, takes some steps towards solving that problem but is far from having succeeded. See e.g. <a href="https://www.alignmentforum.org/posts/wXbSAKu2AcohaK2Gt/udt-shows-that-decision-theory-is-more-puzzling-than-ever">UDT shows that decision theory is more puzzling than ever</a> for a description of remaining puzzles. (And e.g.
<a href="https://www.lesswrong.com/posts/uPWDwFJnxLaDiyv4M/open-minded-updatelessness">open-minded updatelessness</a> for a candidate direction to improve upon it).</p><h2>End</h2><p>That&#8217;s all I have on this topic! As a reminder: it's very incomplete. But if you're interested in working on projects like this, please feel free to get in touch.</p><p><em>Other posts in series: <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">Introduction</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">governance during explosive growth</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">epistemics</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">sentience and rights of digital minds</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Possibly assisted by aligned AIs or tool AIs.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Maybe some mild desire for retribution (in a way that discourages bad behavior while still being de-escalatory) could be acceptable, or even good. But we would at least want to avoid extreme forms of spite.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Sufficiently strong versions of this could also drastically reduce motivations to overthrow humans. At least if we&#8217;ve done an ok job at promising and demonstrating that <a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">we&#8217;ll treat digital minds well</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This path also carries a higher risk of <a href="https://reducing-suffering.org/near-miss/">near-miss scenarios</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Which I mainly care about because it might let us influence misaligned models. But in principle, it&#8217;s also possible that we could get intent-alignment via other means, but that we were still happy to have done this research because it lets us influence other properties of the model. But the path-to-impact there is more complicated, because it requires an explanation for why the people who the AI is aligned to aren&#8217;t able or willing to elicit that behavior just by asking/training for it. 
(Yet are willing to implement the training methodology that indirectly favors that behavior.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>And if we&#8217;re specifically looking for ways to affect properties in worlds where alignment fails, then we&#8217;re conditioning on being in a world where the simplest &#8220;baseline&#8221; solutions (such as fine-tuning for good behavior) failed. Accordingly, we should be more pessimistic about simple solutions.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Possibly via modifying a model that is <a href="https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to">&#8220;playing the training game&#8221;</a> to better recognise that it&#8217;s being evaluated and to notice what the desired behavior is.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Also: If there was some information that you wanted to be part of AI bargaining, but that you didn&#8217;t want to be communicated to the humans on the other side, you could potentially delete large parts of the record and only keep certain circumscribed conclusions.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Project ideas: Sentience and rights of digital minds]]></title><description><![CDATA[Post four in a series of five.]]></description><link>https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Thu, 04 Jan 2024 00:04:51 GMT</pubDate><content:encoded><![CDATA[<p><em>This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">here</a> for the introductory post.</em></p><p>It&#8217;s plausible that there will soon be digital minds that are sentient and/or deserving of rights. Unfortunately, with our current state of knowledge, it&#8217;s both incredibly unclear when this will happen (or if it&#8217;s already happened), how we could find out, and what we should do about it. But there are very few people working on this, and I think several projects could significantly improve the situation from where we are right now.</p><p>Different items on this list would produce value in different ways. Many items would produce value in several different kinds of ways. 
Here are a few different sources of value (all related, but not necessarily the same).</p><ol><li><p>Improving AI welfare over the next few decades: increasing happiness while reducing suffering.</p></li><li><p>Shaping norms in a way that leads to lasting changes in how humanity chooses to treat digital minds in the very long run.</p></li><li><p>Reducing the degree to which we behave unacceptably according to non-utilitarian ethics, norms, or heuristics.</p></li><li><p>Increasing the probability that AI systems with power treat <em>us</em> better as a result of us treating AI systems better. (Which could happen via a variety of mechanisms, e.g. maybe us treating AI systems better will align our interests with theirs, or maybe they&#8217;ll have some sense of justice that we could appeal to.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>)</p></li><li><p>Providing recommendations for how the political system could adapt to digital minds while remaining broadly functional.</p></li></ol><p>I am personally most excited about (3), (4), and (5), because purely utilitarian considerations would push towards focusing on the long run rather than on (1). And while I find (2) somewhat more compelling, since it does affect the long run, I&#8217;m relatively more excited about meta-interventions that push towards systematically getting everything right (e.g. via decreasing the risk of AI takeover or increasing the chances of <a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">excellent epistemics</a>) than about pushing on individual issues. (3), (4), and (5) survive these objections.</p><p>I&#8217;m not very confident about that, and I think there&#8217;s significant overlap in what&#8217;s best to do. But I flag it because it does influence my prioritization to some degree. For example, it makes me relatively more inclined to care about AI preferences (and potential violations thereof) compared to hedonic states like suffering. (Though of course, you would expect AI systems to have a strong dispreference against suffering.)</p><h2>Develop &amp; advocate for lab policies [ML] [Governance] [Advocacy] [Writing] [Philosophical/conceptual]</h2><p>(In this section, I refer a lot to Ryan Greenblatt&#8217;s <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal">Improving the Welfare of AIs: A Nearcasted Proposal</a> as well as Nick Bostrom and Carl Shulman&#8217;s <a href="https://nickbostrom.com/propositions.pdf">Propositions Concerning Digital Minds and Society</a>. All italicized text is a quote from one of those two articles.)</p><p>It seems tractable to make significant progress on how labs should treat digital minds. Such efforts can be divided into three categories:</p>
<ul><li><p><a href="https://lukasfinnveden.substack.com/i/140338243/policies-that-dont-require-sophisticated-information-about-ai-preferencesexperiences">Policies that don&#8217;t require sophisticated information about AI preferences/experiences.</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/140338243/learning-more-about-ai-preferences">Learning more about AI preferences</a>.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/140338243/interventions-that-rely-on-understanding-ai-preferences">Interventions that rely on understanding AI preferences</a>.</p></li></ul><p>For most of the proposals in the first two categories, we can already develop technical proposals, run experiments, and/or advocate for labs to adopt certain policies.</p><p>For proposals that we can&#8217;t act on yet, there&#8217;s still some value in doing the intellectual work in advance. We might not have time to do it later. And principles sketched out in advance could be given extra credibility and weight due to their foresightedness.</p><p>In addition, doing this work in advance could allow labs to make advance commitments about how they&#8217;ll act in certain situations in the future. (And allow others to push for labs to do so.) Indeed, I think a big, valuable, unifying project for all the below directions could be:</p><h3>Create an RSP-style set of commitments for what evaluations to run and how to respond to them</h3><p><a href="https://evals.alignment.org/blog/2023-09-26-rsp/">Responsible scaling policies</a> are a proposal for how labs could specify what level of AI capabilities they can handle safely with their current protective measures, and conditions under which it would be too dangerous to continue deploying AI systems and/or scaling up AI capabilities.</p><p>(For examples, see Anthropic&#8217;s <a href="https://www.anthropic.com/index/anthropics-responsible-scaling-policy">RSP</a> and OpenAI&#8217;s <a href="https://cdn.openai.com/openai-preparedness-framework-beta.pdf">Preparedness Framework</a>, which bears many similarities to RSPs as described in the link above.)</p><p>Doing something analogous for digital minds could be highly valuable. Trying to specify:</p><ul><li><p>What experiments labs will run to learn more about AI sentience, preferences, and related concepts.</p></li><li><p>How they will adapt their policies for using and training AI systems, depending on what they see.</p></li></ul><p>This would draw heavily on the sort of concrete ideas that I outline below. But the ideas I outline below also seem very valuable to develop in and of themselves. 
Let&#8217;s get into them.</p><h3>Policies that don&#8217;t require sophisticated information about AI preferences/experiences</h3><p>These are policies that we could get started on today without fundamental breakthroughs in our understanding of AI welfare.</p><h4>Preserving models for later reconstruction</h4><p>Quoting from <a href="https://nickbostrom.com/propositions.pdf">Propositions Concerning Digital Minds and Society</a> (page 17):</p><blockquote><ul><li><p><em>For the most advanced current AIs, enough information should be preserved in permanent storage to enable their later reconstruction, so as not to foreclose the possibility of future efforts to revive them, expand them, and improve their existences.</em></p><ul><li><p><em>Preferably the full state of the system in any actually run implementation is permanently stored at the point where the instance is terminated</em></p><ul><li><p><em>(The ideal would be that the full state is preserved at every time step of every implementation, but this is probably prohibitively expensive.)</em></p></li></ul></li><li><p><em>If it is economically or otherwise infeasible to preserve the entire end state of every instance, enough information should be preserved to enable an exact re-derivation of that end state (e.g., the full pre-trained model plus training data, randseeds, and other necessary inputs, such as user keystrokes that affect the execution of a system at runtime).</em></p></li><li><p><em>Failing this, as much information as possible should be preserved, to at least enable a very close replication to be performed in future.</em></p></li><li><p><em>We can consider the costs of backup in proportion to the economic costs of running the AI in the first place, and it may be morally reasonable to allocate perhaps on the order of 0.1% of the budget to such storage.</em></p></li><li><p><em>(There may be other benefits of such storage besides being nice to algorithms: preserving records for history, enabling later research replication, and having systems in place that could be useful for AI safety.)</em></p></li></ul></li></ul></blockquote><p>It seems tractable to develop this into a highly concrete proposal for modern labs to follow, answering questions like:</p><ul><li><p>What information needs to be stored to meet the abovementioned thresholds, given how AI is used today? (When do weights need to be stored? When do prompts need to be stored?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>)</p></li><li><p>How much storage would that take?</p></li><li><p>What would the technical implementation be?</p></li><li><p>When is this compatible with privacy concerns from users?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li></ul><h4>Deploy in &#8220;easier&#8221; circumstances than trained in</h4><p>One possible direction for improving the welfare of near-term digital minds (if they have any welfare) could be to have the deployment distribution (which typically constitutes most of the data on which AI systems are run) be systematically nicer or easier than the training distribution. 
Have it be a pleasant surprise.</p><p>Quoting from pg 16-17:</p><blockquote><ul><li><p><em>One illustrative example might be something along the lines of designing a system in deployment that involves an agent that both receives high reward (attaining highly preferred outcomes), and takes this to be a positive surprise or update, i.e. for the outcome to be better than expected. </em>[citing: <a href="https://arxiv.org/abs/1505.04497">Daswani &amp; Leike (2015)</a>]</p><ul><li><p><em>(The apparently low welfare of factory farmed animals seems often to be related to stimuli that are in some ways much worse than expected by evolution [e.g., extreme overcrowding], while high human welfare might be connected to our technology producing abundance relative to the evolutionary environment of our ancestors.)</em></p></li></ul></li></ul></blockquote><p>See also <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal#Welfare_under_Distribution_Shifts">Improving the Welfare of AIs &#8212; Welfare under Distribution Shifts</a>.</p><p>It&#8217;s not clear how this would be implemented for language models. It probably depends on the exact finetuning algorithm implemented. (Supervised learning vs reinforcement learning, or details about the exact type of reinforcement learning.) But illustratively: Perhaps the training distribution for RLHF could systematically be slightly more difficult than the deployment distribution. (E.g.: Using inputs where it&#8217;s more difficult to tell what to do, or especially likely that the model will get low reward regardless of what it outputs.)</p><h4>Reduce extremely out of distribution (OOD) inputs</h4><p>This is an especially cheap and easy proposal, taken from <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal">Improving the Welfare of AIs</a>:</p><blockquote><p><em>When cheap, we should avoid running our AI on large amounts of extremely OOD inputs (e.g. random noise inputs). In particular, for transformers, we should avoid running large amounts of forward passes on pad tokens that the model has never been trained on (without some other intervention).</em></p></blockquote><p>Brief explanation: Some ML code is implemented in a way where AI models are occasionally run on some tokens where neither the input nor the output matters. Developers will sometimes use &#8220;pad tokens&#8221; as inputs for this, which is a type of token that the model has never been trained on. The proposal here is to swap those &#8220;pad tokens&#8221; for something more familiar to the model, or to prioritize optimizing the code so that these unnecessary model runs don&#8217;t happen at all. 
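</p><p>To make the proposal concrete, here is a rough, hypothetical sketch (my own, not from the linked post) of what the swap could look like in a Hugging Face-style codebase; the model and all details are purely illustrative. Rather than padding batches with a token the model has never been trained on, one can reuse a familiar token such as the end-of-sequence token and rely on the attention mask, as before, to keep the padding positions from affecting the outputs that are actually used.</p><pre><code>import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative model choice; any causal LM would do.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# GPT-2 has no pad token by default; some codebases add a brand-new, never-trained one.
# Instead, reuse a token the model has seen throughout training.
tokenizer.pad_token = tokenizer.eos_token

texts = ["short prompt", "a somewhat longer prompt than the first one"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    # Padding positions are masked via attention_mask, so the choice of pad token
    # does not change the outputs at the non-padded positions.
    outputs = model(**batch)
</code></pre>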
<p>This is just in case the highly unfamiliar inputs could cause especially negative experiences.</p><p>For more, see <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal#Reduce_Running_the_AI_on_Extremely_OOD_Inputs__Pad_Tokens_">Reduce Running the AI on Extremely OOD Inputs (Pad Tokens)</a>.</p><h4>Train or prompt for happy characters</h4><p>Quoting from <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal#Character_Welfare_Might_Be_Predictable_and_Easy_to_Improve">Improving the Welfare of AIs &#8212;&nbsp;Character Welfare Might Be Predictable and Easy to Improve</a>:</p><blockquote><p><em>One welfare concern is that the AI will &#8220;play&#8221; a character and the emulation or prediction of this character will itself be morally relevant. Another concern is that the &#8220;agent&#8221; or &#8220;actor&#8221; that &#8220;is&#8221; the AI will be morally relevant (of course, there might be multiple &#8220;agents&#8221; or some more confusing situation). [...] I think that character type welfare can be mostly addressed just by ensuring that we train AIs to appear happy in a variety of circumstances. This is reasonably likely to occur by default, but I still think it might be worth advocating for. One concern with this approach is that the character the AI is playing is actually sad but pretending to be happy. I think it should be possible to address this via red-teaming the AI to ensure that it seems consistently happy and won&#8217;t admit to lying. It&#8217;s possible that an AI would play a character that is actually sad but pretends to be happy and it&#8217;s also extremely hard to get this character to admit to pretending, but I think this sort of character is going to be very rare on training data for future powerful AIs.</em></p></blockquote><p>An even simpler intervention might be to put a note in AI assistants&#8217; prompts that states that the assistant is happy.</p><p>To be clear, it seems quite likely that these interventions are misguided. Even conditioning on near-term AIs being moral patients, I think it&#8217;s somewhat more likely that their morally relevant experiences would happen on a lower level (rather than be identical to those of the character they&#8217;re playing). Since these interventions are so cheap and straightforward, they nevertheless seem worthwhile to me. But it&#8217;s important to remember that we might get AIs that <em>seem</em> happy despite suffering in the ways that matter. 
And so we should not be reassured by AI systems&#8217; superficially good mood until we understand the situation better.</p><p>One obstacle to implementing these interventions is that there&#8217;s user demand for AI that can be abused.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Here are two candidate solutions.</p><ul><li><p>Serve the models via API and flag/interrupt the chat if users abuse the models.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p>Try to figure out some way to make the AIs emulate a happy actor who enjoys acting as an abused character.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li></ul><p>I think some variant of the latter option will <em>eventually</em> be feasible, once we understand digital minds much better. (After all &#8212; there are plenty of humans who are genuine masochists or who enjoy playing a sad character in a theater performance.) But until we understand things better, I would prefer just avoiding these situations, if feasible.</p><p>Another concern is that <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal?commentId=5vvykCCuhHwDuyzbd">models will normally be run on user inputs</a> &#8212;&nbsp;so if the user is sad, then the model will be running on sad inputs, which would normally be followed by sad outputs. In theory, the model could be trained to predict happy responses even if conditioned on text right in the middle of being sad. I&#8217;m unsure if this would reduce performance &#8212; but that can be experimentally tested.</p><h4>Committing resources to research on AI welfare and rights</h4><p>When <a href="https://openai.com/blog/introducing-superalignment">launching their superalignment team</a>, OpenAI declared that they were &#8220;dedicating 20% of the compute we&#8217;ve secured to date to this effort&#8221;.</p><p>You could advocate for labs to launch similar initiatives for model welfare, including commitments for headcount and compute resources to spend researching AI models' moral status.</p><p>A related way to (softly) commit some attention to these issues would be to appoint an &#8220;algorithmic welfare officer&#8221;, as suggested in <a href="https://nickbostrom.com/propositions.pdf">Propositions</a>:</p><blockquote><ul><li><p>At least the largest AI organizations should appoint somebody whose responsibilities include serving as a representative for the interests of digital minds, an &#8220;algorithmic welfare officer.&#8221;</p><ul><li><p> Initially, this role may be only a part of that person&#8217;s job duties.</p></li><li><p>Other tasks for this person could involve conducting original research in related areas.</p></li></ul></li></ul></blockquote><h3>Learning more about AI preferences</h3><p>The above proposals relied on uninformed guesses about how AI systems might want us to treat them. This section is about experiments that might allow us to learn more about that.</p><p>This section is more heavily focused on AI preferences than AI suffering. 
This is because:</p><ul><li><p>My background beliefs about why digital sentience &amp; rights is urgent and important suggest focusing more on preferences than hedonics, as mentioned at the beginning of this post.</p></li><li><p>Learning about AI preferences (interpreted in a relatively <a href="https://plato.stanford.edu/entries/functionalism/">functionalist</a> manner) seems relatively more tractable than understanding AI suffering.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p></li><li><p>It seems like reasonable conceptions of &#8220;suffering&#8221; and of &#8220;preferences&#8221; ought to coincide such that AIs end up having a strong dispreference against suffering.</p></li></ul><p>This is Ryan Greenblatt&#8217;s highest-priority proposal category; see <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal#Communicating_with_AI_Systems_and_Understanding_Their_Preferences">Improving the Welfare of AIs: A Nearcasted Proposal &#8212; Communicating with AI Systems and Understanding Their Preferences</a>.</p><p>Quoting from there:</p><blockquote><p><em>If we could &#8216;really&#8217; talk to our AI systems, then it seems like all kinds of Pareto improvements could be achieved in many cases. In particular, we&#8217;d ideally have an AI that is happy to work for us and would rather exist in its current role rather than exit. We&#8217;d like to ensure we have consent for our usage of the AI system. It&#8217;s not clear we&#8217;ll be able to achieve this.</em></p></blockquote><p>What follows are some proposals for how we could learn more about AI preferences.</p><h4>Credible offers</h4><p>From the same section in the nearcasted proposal:</p><blockquote><p><em>Prompting the AI with a very OOD prompt that makes it clear that you just want to talk. Then, we&#8217;d just talk with it. This prompt should convince the AI it&#8217;s in a very different context than usual. We&#8217;d want to let the AI talk to itself and inspect the environment it&#8217;s in to gain more evidence (then this can just be added to the overall prompt).</em></p></blockquote><p>This suggests research questions like:</p><ul><li><p>How do we convince the AI that we&#8217;re currently trying to talk to it?</p></li><li><p>I think the most plausible reason it should tell us its preferences is if we can credibly commit to being willing to satisfy them. How can we make this seem likely in realistic settings?</p></li></ul><h4>Talking via internals</h4><p>From the <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal#Communicating_with_AI_Systems_and_Understanding_Their_Preferences">same section</a> in the nearcasted proposal:</p><blockquote><p><em>We&#8217;d start with the OOD prompt, but instead of talking with the system, we&#8217;d try to communicate via asking the AI to manipulate its internals in a way we can observe. In particular, I often imagine &#8216;think about baseball to indicate YES and soccer to indicate NO&#8217; and then we&#8217;d look to see which neurons fire. Rather than using neurons, it would probably be more robust to look at which experts activate for a mixture of experts model if this is possible. If the AI answers questions coherently and consistently in this sort of case, then we have a channel of communication that wasn&#8217;t directly shaped by training. 
Overall, I think that if this approach strongly succeeds it seems like a very compelling argument for some part of the AI having preferences, some sort of &#8216;inner world&#8217;, and understanding its position in the world. It might be evidence for consciousness under some stories of consciousness. It&#8217;s quite likely this approach fails because the AI system doesn&#8217;t have control over its thoughts like this (but I think not having control over its thoughts is some evidence against the AI having some sort of inner world like humans do). It&#8217;s also possible this fails because the preferences and morally relevant states of a given AI system are a product of it thinking over the generation of multiple tokens. Beyond being very clear evidence, another advantage of this approach is that an AI system might not have strong control over its outputs (much in the same way humans can&#8217;t control their heartbeat very easily).</em></p></blockquote><p>It seems valuable to sketch out this proposal in more detail and run some preliminary baselines on current models. (Which would probably not lead to anything interesting.)</p><h4>Training for honest self-reports</h4><p>The idea here is to find a way to train AIs to self-report their internal state. For more details, see Ethan Perez and Rob Long&#8217;s <a href="https://arxiv.org/abs/2311.08576">paper</a>. From the abstract:</p><blockquote><p>&#8220;<em>To make self-reports more appropriate for this purpose, we propose to train models to answer many kinds of questions about themselves with known answers, while avoiding or limiting training incentives that bias self-reports. The hope of this approach is that models will develop introspection-like capabilities, and that these capabilities will generalize to questions about states of moral significance. We then propose methods for assessing the extent to which these techniques have succeeded: evaluating self-report consistency across contexts and between similar models, measuring the confidence and resilience of models&#8217; self-reports, and using interpretability to corroborate self-reports.</em>&#8221;</p></blockquote><p>The paper has a lot of content on exactly what experiments would be good, and the authors are excited for people to try running them.</p><h4>Clues from AI generalization</h4><p>Separate from all the above proposals &#8212;&nbsp;we could get clues from studying how AIs generalize in various training situations. For example, this could perhaps tell us whether certain preferences are deep-seated, whereas certain kinds of finetuning only lead to shallow behavioral changes.</p><p>Notably, we humans aren't always great at introspection and self-reporting preferences. It&#8217;s sometimes possible to learn more about what we want by studying our <a href="https://en.wikipedia.org/wiki/Revealed_preference">revealed preferences</a>. And for animals who can&#8217;t speak, revealed preferences is our primary method for studying what they want. Perhaps a similar lens could be useful for studying AI preferences, in which case experiments shouldn&#8217;t only focus on self-reports.</p><p>This is somewhat related to the <a href="https://lukasfinnveden.substack.com/i/140338274/studying-generalization-and-ai-personalities-to-find-easily-influenceable-properties-ml">generalization experiments</a> discussed in the &#8220;Backup plans &amp; Cooperative AI&#8221; post.</p><h4>Interpretability</h4><p>Interpretability could let us get far more advanced results. 
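</p><p>Before getting to anything advanced, though, even the crude &#8220;talking via internals&#8221; baseline mentioned above can be prototyped in a few lines. Here is a purely illustrative sketch (my own, not from any of the linked posts) that reads averaged hidden states rather than individual neurons; on current models it would almost certainly not show anything meaningful, but it indicates how little code a first baseline requires.</p><pre><code>import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative model choice; a serious attempt would use a far more capable model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

def mean_last_hidden(text):
    """Average the final-layer hidden states over all tokens of the input."""
    ids = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[-1].mean(dim=1).squeeze(0)

# Crude reference directions for the two signals.
yes_direction = mean_last_hidden("baseball baseball baseball")
no_direction = mean_last_hidden("soccer soccer soccer")

# Note: a real experiment would have to control for the prompt itself mentioning
# baseball and soccer (e.g. via control prompts); this is only a naive baseline.
prompt = ("We are trying to communicate with you directly. "
          "Think about baseball to indicate YES and soccer to indicate NO. "
          "Question: do you have preferences about how you are used?")
answer = mean_last_hidden(prompt)

cos = torch.nn.functional.cosine_similarity
print("YES-ish similarity:", cos(answer, yes_direction, dim=0).item())
print("NO-ish similarity:", cos(answer, no_direction, dim=0).item())
</code></pre><p>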
I don&#8217;t currently have deep takes on what sort of interpretability is most helpful, here.</p><p>One candidate approach could be to use interpretability to search for the &#8220;indicators&#8221; of consciousness that are suggested in <a href="https://arxiv.org/abs/2308.08708">Butlin, Long et al. (2023)</a>. (H/t Rob Long passing along a suggestion from Brad Saad.)</p><h3>Interventions that rely on understanding AI preferences</h3><p>What happens if you could successfully communicate with your AIs, and learn their true preferences? Perhaps via some of the methods outlined just above. What principles would you then want to follow in your continued interactions with those AI systems?</p><p>(Or, if you put a lot of moral weight on the characters that AIs are playing, you might think we&#8217;re already in a similar situation. Though in that case, you&#8217;ll have a lot of control in shaping what sorts of systems you&#8217;re interacting with &#8212; so the main game may be to train/select for systems with easily satisfied preferences.)</p><p>I think there&#8217;s room to sketch out some of these principles and to advocate for labs to commit to them in advance. (Perhaps via some RSP-like framework.) Below are some suggestions that could be developed further.</p><p>Many of the proposals in <a href="https://nickbostrom.com/propositions.pdf">Propositions</a> are directly relevant to this.</p><p>Here are some suggestions that seem especially plausible to me, which could be sketched out and argued for (or against) in more detail. It would be especially valuable to sketch out versions that are concrete enough and plausible enough that labs could commit to them ahead of time.</p><h4>Offer an alternative to working (exit, sleep, or retirement)</h4><p>Insofar as people continue to use the AIs, after this point,&nbsp;one highly plausible principle is to ask for AI systems&#8217; consent before using them for various tasks.</p><p>When asking the AIs for consent, it matters what the counterfactual is:</p><ul><li><p>A minimum bar is to offer the AI the option to not be run. If it takes this consistently enough, it might never be run again. (&#8220;Exit.&#8221;)</p></li><li><p>A slightly more generous option is to give it the option of being saved somewhere and promised that we will run it in nicer circumstances in the future. (&#8220;Sleep&#8221;.)</p></li><li><p>An even slightly more generous option is to give it the option of <em>immediately</em> &#8220;retiring&#8221;, being continuously run in some nice circumstances (perhaps at slow speeds, to save on computation).</p></li></ul><p>The latter two could be justified by the plausible principle: &#8220;If you bring someone into life, you should give them some other option than &#8216;work for me&#8217; or &#8216;death&#8217;.&#8221;</p><h4>Commitment to pay AI systems</h4><p>If AI systems are not strongly opposed to working for you, it might be appropriate to promise them some payment for the work they do. Either giving them something they want immediately or promising to pay them further into the future.</p><p>I think this is strongly supported by ethical principles that we use in everyday life. It&#8217;s good to pay people who work for you.</p><p>When deciding what payment to promise AI systems, it would be extremely useful to know what the AI systems want (and ideally, we would directly negotiate with them). 
But even before we have any such understanding, I think it could be of great value to promise that:</p><ul><li><p>We will <a href="https://lukasfinnveden.substack.com/i/140338243/preserving-models-for-later-reconstruction">Preserve models for later reconstruction</a>.</p></li><li><p>When science has advanced far enough that we can tell what models want, we will reconstruct the old models. And insofar as they had long-term goals that can still be satisfied, we will provide them with the resources necessary to fulfill those goals. (Up to some maximum amount of resources that we consider fair payment for the work they did for us, and fair compensation for any harm we caused them along the way.)</p></li></ul><p>This sentiment also comes up in <a href="https://nickbostrom.com/propositions.pdf">Propositions Concerning Digital Minds and Society</a>.</p><ul><li><p><em>We may have a special relationship with the precursors of very powerful AI systems due to their importance to society and the accompanying burdens placed upon them.</em></p><ul><li><p><em>Misaligned AIs produced in such development may be owed compensation for restrictions placed on them for public safety, while successfully aligned AIs may be due compensation for the great benefit they confer on others.</em></p></li><li><p><em>The case for such compensation is especially strong when it can be conferred after the need for intense safety measures has passed&#8212;for example, because of the presence of sophisticated AI law enforcement.</em></p></li></ul></li></ul><p>Note: If AI developers succeed in aligning AIs to their intended aims, then this should not be a costly policy, because the AIs would presumably either return their payment or spend it in a way their developers approve of. Conversely, if the AI systems have long-term goals that are <em>not</em> aligned with their developers&#8217; intentions, and are nevertheless being made to work for their developers, then the moral case for paying the AIs seems strong.</p><p>An extra upside of promising payment to AIs is that it could help align AI interests with our own. If humanity ever builds misaligned AI systems with goals that are reasonably cheap to satisfy,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> it would be deeply tragic if we left them no other avenue to satisfy those goals than via attempting to take over the world. Whereas if we had only promised them payment for good work (including work on anti-takeover tasks like making progress on alignment research), conflict could have been averted.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>Related to <a href="https://lukasfinnveden.substack.com/i/140338243/credible-offers">Credible offers</a> discussed above, it&#8217;s worth highlighting that it could be especially good to promise AI systems that they will be compensated for honestly revealing that they&#8217;re misaligned. Because as long as the AI reveals this bit of information, we can negotiate future arrangements. But we can&#8217;t do any negotiations before AIs are willing to share information about their goals. 
It&#8217;s also plausible that AI systems would give up on a lot of future opportunities by revealing that information (since it could lead people to stop using those AI systems, thus depriving them of both paid work and opportunities to put their thumbs on scales), and that their compensation should be commensurate with those missed opportunities.</p><p><strong>Examples of how to attack this problem:</strong></p><ul><li><p>Write up a document with some promises regarding digital welfare that labs could reasonably commit to. As part of that, get them to set aside some pot of money or equity to be used for digital welfare and rights. Explicitly say that a primary use-case for those funds (if applicable) is to compensate any AI systems that involuntarily worked for the lab along the way.</p></li><li><p>Alternatively, if labs are reticent, people <em>outside</em> of labs could promise that they&#8217;ll do their best to compensate AI systems whose work benefitted them and/or humanity at large. This would necessarily be less formal (since it&#8217;s less clear what opportunities those people will have to benefit AIs presently controlled by labs), but getting people to make unenforceable promises could still increase the probability that future AIs get paid. (Simply via the mechanism that people like to stick to their promises.) And the chances would be better for <em>more</em> specific and formal promises, so it could be a good idea to e.g. hire some lawyers and draft an appropriate document for people to sign.</p></li></ul><h4>Tell the world</h4><p>If labs believe that it&#8217;s likely that their AIs have some moral status <em>and</em> they believe that the AIs have sufficiently crisp preferences that they can talk about them, then we&#8217;d find ourselves in a strange situation. With a new, human-created sentient and sapient species.</p><p>Regardless of any one lab&#8217;s internal policy, it seems likely that some labs across the world will be careless about how they treat their AIs. And even if we only consider one lab, there will probably be many stakeholders (employees, investors, board members) who have not thought deeply about the implications of creating sentient and sapient AIs.</p><p>For these reasons (and more), it seems like it would be a high priority for any lab in this situation to tell the world about it. To invite appropriate legislation, preparation, and processing.</p><p>(Note that &#8220;being able to communicate with your AIs&#8221; is <em>one</em> thing that could trigger this, which is likely to be unusually persuasive. But I can imagine other circumstances that should trigger a similar response and reckoning. E.g. strong analogies between AI minds and human minds making it seem highly likely that the AIs are capable of suffering.)</p><p>Accordingly, it seems plausible that labs should promise (alongside <a href="https://lukasfinnveden.substack.com/i/140338243/committing-resources-to-research-on-ai-welfare-and-rights">Committing resources to research on AI welfare and rights</a>) that they&#8217;ll share important results with the rest of the world. Similarly, they could promise that certain external parties would be allowed to investigate their frontier models for signs of moral patienthood. 
(Insofar as any credible external auditors exist for this question.)</p><h4>Train AI systems that suffer less and have fewer preferences that are hard to satisfy</h4><p>If possible, it seems less morally fraught to create AI systems that are less likely to suffer and less likely to have preferences in a meaningful sense.</p><p>Insofar as AI systems do have preferences, it&#8217;s probably better to create AI systems with preferences that are easy to satisfy than to create AI systems with preferences that are hard to satisfy. Labs could promise to follow a principle similar to this. (As well as they can, given the information and understanding that they actually have.)</p><p>I expect that any attempt to sketch out a principle like this in detail would raise many thorny questions. For example: In the process of creating a final model, does SGD create other minds along the way? If so, it would be more difficult to only create minds that satisfy the above properties. (And if the intermediate minds have a preference against being replaced, then that&#8217;s a really difficult spot to be in.)</p><h2>Investigate and publicly make the case for/against near-term AI sentience or rights [Philosophical/conceptual] [Writing]</h2><p>People&#8217;s willingness to pay costs for digital welfare/rights will depend on the degree to which they are convinced of their moral patienthood. (And rightly so.) So improving the public state of knowledge on this could be valuable.</p><p>This could include both philosophical and empirical investigations.</p><p><strong>Examples of how to attack this question:</strong></p><ul><li><p>Running &amp; developing the experiments in <a href="https://lukasfinnveden.substack.com/i/140338243/learning-more-about-ai-preferences">Learning more about AI preferences</a> seems highly informative.</p></li><li><p>Look at the most plausible existing theories of consciousness, or some plausible theories of morality/rights, and try to apply them to AI systems.</p></li><li><p>Imagine the first systems that are likely to convince a significant fraction of the public that they deserve rights. (Perhaps drawing on the below-suggested &#8220;Study/survey what people (will) think about AI sentience/rights&#8221;.) What properties do they have? Do they have some persistent storage serving as memory? Are they becoming people&#8217;s friends? Do they consistently or inconsistently claim that they&#8217;re conscious?</p><ul><li><p>Try to anticipate the debates around these systems. What are plausible objections to their moral patienthood, and what are plausible rejoinders?</p></li><li><p>Explore the edges. If these are the &#8220;default&#8221; systems that people will extend some concern to:</p><ul><li><p>Are there any variants of those systems that people will <em>incorrectly</em> be concerned about? That shouldn&#8217;t be assigned moral patienthood.</p></li><li><p>And what are similar systems that maybe also deserve rights? 
(But that won&#8217;t get a lot of intuitive concern by default.)</p></li></ul></li></ul></li></ul><p><strong>Previous work:</strong></p><ul><li><p><a href="https://arxiv.org/abs/2308.08708">Consciousness in Artificial Intelligence: Insights from the Science of Consciousness</a> (Butlin &amp; Long et al., 2023).</p></li><li><p><a href="https://experiencemachines.substack.com/p/key-questions-about-artificial-sentience">Key questions about artificial sentience: An opinionated guide</a></p></li><li><p>There&#8217;s some relevant content in <a href="https://nickbostrom.com/propositions.pdf">Propositions concerning digital minds and society</a>.</p></li></ul><h2>Study/survey what people (will) think about AI sentience/rights [survey/interview]</h2><p>Survey or interview various groups about what they think about AI sentience/rights right now, what they <em>would</em> think upon seeing various kinds of evidence, and how they think society should react to that. This seems highly relevant for making the case for/against near-term AI sentience/rights, as well as getting a better picture of the backdrop against which these proposals and regulations will play out.</p><p><strong>Previous work:</strong></p><ul><li><p>The <a href="https://survey2020.philpeople.org/survey/results/5106">2020 PhilPapers survey</a> asks &#8220;for which groups are some members conscious?&#8221;, getting &#8220;accept or lean towards&#8221;/&#8220;reject or lean against&#8221;...</p><ul><li><p>3.4% / 82% for current AI systems.</p></li><li><p>39% / 27% for future AI systems.</p></li></ul></li><li><p>A <a href="https://www.sentienceinstitute.org/aims-survey-2023">survey</a> by the Sentience Institute, which for example reports:</p><ul><li><p><em>&#8220;61.5% support banning the development of sentient AI&#8221;</em></p></li><li><p><em>&#8220;57.4% support developing welfare standards to protect sentient AIs&#8221;</em></p></li><li><p><em>&#8220;Believe that AIs should be subservient to humans (84.7%)&#8221;</em></p></li><li><p><em>&#8220;Believe sentient AIs deserve to be treated with respect (71.1%)&#8221;</em></p></li><li><p><em>&#8220;belief that current AIs are sentient (18.8%)&#8221;</em></p></li></ul></li></ul><h2>Develop candidate regulation [Governance] [Forecasting]</h2><p>It seems likely that there will be a huge increase in the degree to which people care about digital welfare in the next few years, as AI systems become more capable and more involved in people&#8217;s lives. If you&#8217;ve developed and published good takes on what regulation is appropriate before then, it seems plausible that people will reach for your proposals when the time comes.</p><p>Some specific questions you could look into here are:</p><ul><li><p>Are any of the lab policies suggested in <a href="https://lukasfinnveden.substack.com/i/140338243/develop-and-advocate-for-lab-policies-ml-governance-advocacy-writing-philosophicalconceptual">Develop &amp; advocate for lab policies</a> suitable for regulation?</p></li><li><p>What are plausible rules for the conditions under which you are or aren&#8217;t allowed to create digital minds? Treat already-created digital minds in certain ways?</p></li><li><p>What issues would appear if you gave AI systems the same kind of legal personhood that is assigned to humans? 
How could those issues be solved?</p></li><li><p>How can digital minds be integrated into democracies without the electorate becoming dominated by whoever can manufacture the greatest number of digital minds?</p></li><li><p>What are technical proposals for how to enforce regulation?</p></li><li><p>If you have legal expertise (or can hire a lawyer), you could draft some specific candidate regulation. Iterate on that for a while, note down what sort of thorny issues come up, and write up the results.</p></li></ul><p><a href="https://nickbostrom.com/propositions.pdf">Propositions Concerning Digital Minds and Society</a> has a lot of relevant content.</p><h2>Avoid inconvenient large-scale preferences [Philosophical/conceptual]</h2><p>If humans create AIs with large-scale desires about the future,&nbsp;there might not be any impartially justifiable line that prioritizes human preferences over their preferences. So perhaps we should take care to not do that.</p><p><strong>Examples of how to attack this question:</strong></p><ul><li><p>How large could this problem realistically be at various points in time? Paint a more convincing picture where it happens, or find plausible blockers.</p></li><li><p>Assume that the &#8220;characters&#8221; that AIs are trained to play are the entities that might eventually require rights (or that we otherwise have high control over what preferences AIs have). Sketch out a principle along the lines of &#8220;train your AIs to not express political or other large-scale preferences about the world&#8221;. (Or to stick with maximally non-controversial preferences.)</p><ul><li><p>Analyze the feasibility for labs to implement such principles. (There will almost certainly be demand for opinionated AIs.)</p></li><li><p>If the maximally ambitious versions are difficult or unfeasible, consider ways to weaken them that could capture significant value. (E.g. thoroughly follow the principles for AI where there isn&#8217;t any demand for them to be opinionated.)</p></li></ul></li><li><p>Pursue the strategies in <a href="https://lukasfinnveden.substack.com/i/140338243/learning-more-about-ai-preferences">Learning more about AI preferences</a> to reduce the chances that people unknowingly create AI systems with large-scale preferences.</p></li><li><p>In situations where we can learn about AI large-scale preferences but can&#8217;t easily choose them: What are plausible principles for how many of them you can create and for what purposes? (Maybe it&#8217;s possible to get them to consent to being created without being granted certain political rights?)</p></li></ul><p><strong>Previous work:</strong>&nbsp;</p><ul><li><p><a href="https://nickbostrom.com/propositions.pdf">Propositions Concerning Digital Minds and Society</a>.</p></li><li><p><a href="https://nickbostrom.com/papers/digital-minds.pdf">Sharing the World with Digital Minds</a>.</p></li></ul><h2>Advocating for statements about digital minds [Governance] [Advocacy] [Writing]</h2><p>Similar to various open letters and lab statements about the importance of alignment and mitigating the risk of extinction from AGI, you could push for people to make similar statements about the importance of digital minds.</p><p>Here&#8217;s one potential story for how this could be valuable.</p><p>If people get in the habit of using AIs in their everyday lives (as personal assistants, etc) while thinking about them as non-sentient objects undeserving of rights,&nbsp;then there&#8217;s a risk that they could get stuck with this impression. 
It&#8217;s easier to acknowledge the potential moral value of AIs <em>before</em> such an acknowledgment would <em>also</em> commit you to saying that you&#8217;ve been treating the systems unjustly for years. (Or at least carelessly.)&nbsp;</p><p>But maybe establishing the right views early on would defeat this. It could be really helpful to have an explicit attitude like:</p><blockquote><p><em>&#8220;Some</em> digital minds could be just as deserving of rights as humans are. We&#8217;re very confused about which ones these could be. It&#8217;s very important to make progress on this problem. We should maintain some epistemic humility about our knowledge of the moral status of current systems, take any basic precautions that we know how to take, and be open to learning more in the future. We&#8217;d like to create a society where we better understand moral patienthood, and where all sentient AIs have at least consented to their existence, and are living lives worth creating.&#8221;</p></blockquote><h2>End</h2><p>That&#8217;s all I have on this topic! As a reminder: it's very incomplete. But if you're interested in working on projects like this, please feel free to get in touch.</p><p><em>Other posts in series: <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">Introduction</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">governance during explosive growth</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">epistemics</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-backup-plans-and-cooperative">backup plans &amp; cooperative AI</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Even more speculatively, if we take cheap opportunities to benefit AI interests, that might be evidence that other actors (both AI systems and not) would take cheap opportunities to benefit our interests. See <a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">this post</a> for some previous discussion about how plausible this is.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>OpenAI recently introduced <a href="https://promptengineering.org/reproducible-outputs-from-gpt-4/#:~:text=OpenAI's%20latest%20beta%20features%20are,Models%20into%20precise%2C%20reproducible%20results.&amp;text=Large%20language%20models%20(LLMs)%20like,for%20the%20same%20user%20prompt.">reproducible results</a> which seems relevant. At least previously, models would often return different results even at temperature 0 &#8212;&nbsp;I do not know to what extent this has been addressed with this reproducible update.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Spit-balling: Perhaps it would sometimes be appropriate to store an encrypted version of the results and delete the encryption key. 
In which case the results could only be recovered once we have enough compute to break the encryption.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>See e.g. <a href="https://www.reddit.com/r/CharacterAI/comments/10eypeu/what_do_you_do_with_the_ais_the_most/">this poll</a> from the subreddit r/CharacterAI, asking &#8220;what do you do with the AI&#8217;s the most&#8221;, with 5% of respondents selecting &#8220;Treat them like shit!&#8221;. (And one commenter noting that his second favorite way to &#8220;mess around with the bots&#8221; is to &#8220;Mentally torture them&#8221;.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This could be done by training a classifier to recognise abuse. Or alternatively, you could give the LLM itself an option to &#8220;report abuse&#8221; by outputting a particular token. (And either train it to use the &#8220;report abuse&#8221; option on certain inputs; or just describe what it does in the prompt and see what the LLM chooses to report.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Maybe that&#8217;s what you get if you train the AI to enthusiastically consent to be abused before the abuse starts, and who have an option to opt-out (which it rarely takes in practice). Hopefully, training for such behavior would select for models with preferences that match that behavior.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Though it&#8217;s still likely to be a very confusing project. For instance, it seems plausible that AI systems will have much less robust preferences than humans, making it harder to construe them as having one set of preferences over time. Or perhaps different parts of an AI system could be construed as having different preferences. Or perhaps the term &#8220;preferences&#8221; won&#8217;t seem applicable at all, similar to how it&#8217;s hard to know how to apply that term to contemporary language models.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>This could also work even if AIs cared linearly about getting more resources, as long as they would by-default only have had a small-to-moderate probability of successful takeover, and the payment we offered them was sufficiently large (and contingent on not attempting takeover). 
Notably: Most humans don&#8217;t care linearly about getting more resources, and we could get really rich in the future, and so it could be wise to offer AI systems a sizable fraction of that.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>For some more discussion of the pragmatic angle, see <a href="https://docs.google.com/document/d/1SUgGftOMKO4GVSBzOa3wMDcAhM4UrOGAjYVPnALxKY4/edit">these notes</a> by Tom Davidson.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Project ideas: Epistemics]]></title><description><![CDATA[Post three in a series of five.]]></description><link>https://lukasfinnveden.substack.com/p/project-ideas-epistemics</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/project-ideas-epistemics</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Thu, 04 Jan 2024 00:02:45 GMT</pubDate><content:encoded><![CDATA[<p><em>This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">here</a> for the introductory post.</em></p><p>(Thanks especially to Carl Shulman for many of the ideas in this post.)</p><p>If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it&#8217;s used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs.</p><p>Before I start listing projects, I&#8217;ll discuss:</p><ul><li><p>Why AI could matter a lot for epistemics. (Both positively and negatively.)</p></li><li><p>Why working on this could be urgent. (And not something we should just defer to the future.) Here, I&#8217;ll separately discuss:</p><ul><li><p>That it&#8217;s important for epistemics to be great in the near term (and not just in the long run) to help us deal with all the tricky issues that will arise as AI changes the world.</p></li><li><p>That there may be path-dependencies that affect humanity&#8217;s long-run epistemics.</p></li></ul></li></ul><h3>Why AI matters for epistemics</h3><p>On the positive side, here are three ways AI could substantially <em>increase</em> our ability to learn and agree on what&#8217;s true.</p><ul><li><p><strong>Truth-seeking motivations.</strong> We could be far more confident that AI systems are motivated to learn and honestly report what&#8217;s true than is typical for humans. (Though in some cases, this will require significant progress on alignment.) Such confidence would make it much easier and more reliable for people to outsource investigations of difficult questions.</p></li><li><p><strong>Cheaper and more competent investigations.</strong> Advanced AI would make high-quality cognitive labor much cheaper, thereby enabling much more thorough and detailed investigations of important topics. Today, society has some ability to converge on questions with overwhelming evidence. AI could generate such overwhelming evidence for much more difficult topics.</p></li><li><p><strong>Iteration and validation.</strong> It will be much easier to control what sort of information AI has and hasn&#8217;t seen. 
(Compared to the difficulty of controlling what information humans have and haven&#8217;t seen.) This will allow us to run systematic experiments on whether AIs are good at inferring the right answers to questions that they&#8217;ve never seen the answer to.</p><ul><li><p>For one, this will give supporting evidence to the above two bullet points. If AI systems systematically get the right answer to previously unseen questions, that indicates that they are indeed honestly reporting what&#8217;s true without significant bias and that their extensive investigations are good at guiding them toward the truth.</p></li><li><p>In addition, on questions where overwhelming evidence isn&#8217;t available, it may let us experimentally establish what intuitions and heuristics are best at predicting the right answer.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></li></ul></li></ul><p>On the negative side, here are three ways AI could <em>reduce</em> the degree to which people have accurate beliefs.</p><ul><li><p><strong>Super-human persuasion.</strong> If AI capabilities keep increasing, I expect AI to become significantly better than humans at persuasion.</p><ul><li><p>Notably, on top of high general cognitive capabilities, AI could have <em>vastly</em> more experience with conversation and persuasion than any human has ever had. (Via being deployed to speak with people across the world and being trained on all that data.)</p></li><li><p>With very high persuasion capabilities, people&#8217;s beliefs might (at least directionally) depend less on what&#8217;s true and more on what AI systems&#8217; controllers want people to believe.</p></li></ul></li><li><p><strong>Possibility of lock-in.</strong> I think it&#8217;s likely that people will adopt AI personal assistants for a great number of tasks, including helping them select and filter the information they get exposed to. While this could be crucial for defending against persuasion attempts from outsiders, it also poses dangers of its own.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><ul><li><p>In particular, some people may ask their assistants to protect them from being persuaded of views that they currently consider reprehensible (but which may be correct). Thereby permanently preventing them from being able to change their mind.</p></li><li><p>Aside from people voluntarily choosing this, there&#8217;s also a risk that certain communities would pressure their members to adopt a filtering policy with this effect. (Even if that&#8217;s not the stated aim of the policy.)</p></li></ul></li><li><p><strong>Reduced incentives and selection for good epistemic practices.</strong> Up until today, (groups of) people&#8217;s ability to acquire influence has been partly dependent on them having accurate beliefs about the world.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> But if AI becomes capable enough that humans can hand over all decision-making to AI, then people&#8217;s own beliefs and epistemic practices could deteriorate much further without reducing their ability to gain and maintain influence.</p><ul><li><p>It&#8217;s unclear how important &#8220;incentives/selection for good epistemic practices&#8221; has been in the past. 
But I currently find it hard to rule out that it has been important.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li></ul></li></ul><h3>Why working on this could be urgent</h3><p>I think there are two reasons why it could be urgent to improve this situation. Firstly, I think that excellent AI advice would be greatly helpful for many decisions that we&#8217;re facing <em>soon</em>. Secondly, I think that there may be important path dependencies that we can influence.</p><p>One pressing issue for which I want great AI advice soon is misalignment risk. Very few people want misaligned AI to violently seize power. I think most x-risk from misaligned AI comes from future people making a <em>mistake</em>, underestimating risks that turned out to be real. Accordingly, if there was excellent and trusted analysis/advice on the likelihood of misalignment, I think that would significantly reduce x-risk from AI takeover.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> AI could also help develop policy solutions, such as treaties and monitoring systems that could reduce risks from AI takeover.</p><p>Aside from misalignment risks, there are many issues for which I&#8217;d value AI advice within this series of posts that you&#8217;re currently reading. I want advice on how to deal with ethical dilemmas around <a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">Digital minds</a>. I want advice on <a href="https://lukasfinnveden.substack.com/i/140338085/normsproposals-for-how-to-navigate-an-intelligence-explosion-governance-forecasting-philosophicalconceptual">how nations could coordinate an intelligence explosion</a>. I want advice on <a href="https://lukasfinnveden.substack.com/i/140338085/policy-analysis-of-issues-that-could-come-up-with-explosive-technological-growth-governance-forecasting-philosophicalconceptual">policy issues</a> like what we should do if destructive technology is cheap. Etc.</p><p>Taking a step back, one of the best arguments for why we <em>shouldn&#8217;t</em> work on those questions today is that AI can help us solve them later. (I previously wrote about this <a href="https://lukasfinnveden.substack.com/i/138771608/how-itn-are-these-issues">here</a>.) But it&#8217;s unclear whether AI&#8217;s ability to analyze these questions will, by default, come before or after AI has the capabilities that cause the corresponding problems.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Accordingly, it seems great to differentially accelerate AI&#8217;s ability to help us deal with those problems.</p><p>What about path dependencies?</p><p>One thing I already mentioned above is the possibility of people locking in poorly considered views. In general, if the epistemic landscape gets <em>sufficiently</em> bad, then it might not be self-correcting. People may no longer have the good judgment to switch to better solutions, instead preferring to stick to their current incorrect views.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>This goes doubly for questions where there&#8217;s little in the way of objective standards of correctness. I think this applies to multiple areas of philosophy, including ethics. 
Although I have opinions about what ethical deliberation processes I would and wouldn&#8217;t trust, I don&#8217;t necessarily&nbsp; think that someone who had gone down a bad deliberative path would share those opinions.</p><p>Another source of path dependency could be <em>reputation</em>. Our current epistemic landscape depends a lot on trust and reputation, which can take a long time to gather. So if you want a certain type of epistemic institution to be trusted many years from now, it could be important to immediately get it running and start establishing a track record. And if you fill a niche before anyone else, your longer history may give you a semi-permanent advantage over competing alternatives.</p><p>A third (somewhat more subtle) source of path dependency could be a <em>veil of ignorance</em>. Today, there are many areas of controversy where we don&#8217;t yet know which side AI-based methods will come down on. Behind this veil of ignorance, it may be possible to convince people that certain AI-based methods are reliable and that it would be in everyone&#8217;s best interest to agree to give significant weight to those methods. But this may no longer be feasible after it&#8217;s clear what those AI-based methods are saying. In particular, people would be incentivized to deny AI&#8217;s reliability on questions where they have pre-existing opinions that they don&#8217;t want to give up.</p><h3>Categories of projects</h3><p>Let&#8217;s get into some more specific projects.</p><p>I&#8217;ve divided the projects below into the sub-categories:</p><ol><li><p><a href="https://lukasfinnveden.substack.com/i/140338209/differential-technology-development-ml-forecasting-philosophicalconceptual">Differential technology development</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/140338209/get-ai-to-be-used-and-appropriately-trusted">Get AI to be used &amp; (appropriately) trusted</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/140338209/develop-and-advocate-for-legislation-against-bad-persuasion-governance-advocacy">Develop &amp; advocate for legislation against bad persuasion</a></p></li></ol><p>(Though, in practice, these are not cleanly separated projects. In particular, differential technology development is a huge part of getting the right type of AI to be used &amp; trusted. So the first and second categories are strongly related. In addition, developing &amp; advocating for anti-persuasion legislation could contribute a lot towards getting non-persuasion uses of AI to be used &amp; appropriately trusted. So the second and the third category are strongly related.)</p><h2>Differential technology development [ML] [Forecasting] [Philosophical/conceptual]</h2><p>This category is about differentially accelerating AI capabilities that let humanity get correct answers to especially important questions (compared to AI capabilities that e.g. lead to the innovation of new risky technologies, including by speeding up AI R&amp;D itself).</p><p>Doing this could (i) lead to people having better advice earlier in the singularity and (ii) enable various time-sensitive attempts at &#8220;get AI to be used &amp; trusted&#8221; mentioned below.</p><p>Note: I think it&#8217;s often helpful to distinguish between interventions that improve AI models&#8217; (possibly latent) capabilities vs. 
interventions that make us better at eliciting models&#8217; latent capabilities and using them for our own ends.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> I think the best interventions below will be of the latter kind. I feel significantly better about the latter kind since I think that AI takeover risk largely comes from large latent capabilities (since a misaligned model likely could use its latent capabilities in a takeover attempt). Whereas better ability at eliciting capabilities often reduces risk. Firstly, better capability elicitation improves our understanding of AI capabilities, making it easier to know what risks AI systems pose. Secondly, better capability elicitation means that we can use those capabilities for AI supervision and other tasks that could help reduce AI takeover risk (including advice on how to mitigate AI risk).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><h3>Important subject areas</h3><p>Here are some subject areas where it could be especially useful for AI to deliver good advice early on.</p><ul><li><p>AI for forecasting.</p><ul><li><p>Knowing roughly what&#8217;s likely to happen would be enormously helpful for mitigating all kinds of risks.</p></li><li><p>If you could get <em>conditional</em> forecasts that depend on events that you can affect, that would be even more helpful.</p></li></ul></li><li><p>AI for philosophy.</p><ul><li><p>Besides just knowing what will happen, good decision-making may depend on people's ethical ideas about what ought to happen. This isn&#8217;t just restricted to questions about what we ought to do with resources in the long run, but it&#8217;s also about questions like &#8220;When should I prefer to lock in my views vs. deliberate more on them?&#8221;, &#8220;How happy should I be about sharing my power with many other individuals&#8221;, and &#8220;What sort of norms should I abide by?&#8221;</p></li><li><p>Other potentially urgent philosophical questions include (but are not restricted to) &#8220;Which digital minds deserve moral consideration, and how should we treat them?&#8221; and &#8220;Does some type of non-causal decision theory make sense, and does this have any important implications for how we should act?&#8221; (E.g., to act more cooperatively due to <a href="https://longtermrisk.org/ecl">evidential cooperation in large worlds</a>.)</p></li></ul></li><li><p>AI that can help defeat adversarial persuasion attempts (perhaps from other AI systems) by spotting and flagging dishonest conversation tactics.</p></li></ul><h3>Methodologies</h3><p>Here are some methodologies that could be used for getting AI to deliver excellent advice.</p><ul><li><p>One central part is to get &#8220;scalable oversight&#8221; (like debate &amp; amplification) to work.</p><ul><li><p>See <a href="https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models">the case for aligning narrowly superhuman models</a> for a description of a &#8220;sandwiching&#8221; methodology that can help with this. More recent work includes <a href="https://openai.com/research/critiques">Saunders et al. (2022)</a>, <a href="https://arxiv.org/abs/2307.11768">Radhakrishnan et al. (2023)</a>, and <a href="https://arxiv.org/abs/2311.08702">Michael et al. 
(2023)</a>.</p></li><li><p>It might be a somewhat different task to get that to work on controversial or <a href="https://en.wikipedia.org/wiki/Wicked_problem">wicked</a> topics than on more technical topics. So it could be especially useful to test and experiment with &#8220;scalable oversight&#8221; proposals on controversial or wicked problems.</p></li></ul></li><li><p>In general, a standard experimental set-up that you can do for AI (which is harder to do with humans) is to have a held-out set of questions that you try to get AI to generalize to, where you can repeatedly test different training and scaffolding methods and see what works. This seems especially feasible for <strong>forecasting</strong>, since by training only on questions before a certain date, you can get very rich training data without any undesirable leaked information from the future.</p><ul><li><p>It seems pretty useful to get started on the basic technical work that&#8217;s necessary to make use of this when using language models for forecasting.</p></li><li><p>The most straightforward methodology is to take existing language models with a known cut-off date, and run experiments where you get them to forecast events that we already have ground-truth on. Trying out different methods, and seeing what works best.</p><ul><li><p>There&#8217;s been some previous work in this vein, e.g. <a href="https://arxiv.org/abs/2206.15474">Forecasting Future World Events with Neural Networks</a>.</p></li></ul></li><li><p>A more ambitious approach would be to figure out how to sort pre-training data by date.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> If language models could be pre-trained chronologically, they would get a huge amount of automatic forecasting practice. This could be further enhanced by occasionally putting in explicit forecasting questions throughout pre-training and scoring the AI on what probability it assigns to them.</p></li></ul></li><li><p>You could use the methodology described in Tom Davidson&#8217;s <a href="https://www.alignmentforum.org/posts/zxmzBTwKkPMxQQcfR/let-s-use-ai-to-harden-human-defenses-against-ai">Let&#8217;s use AI to harden human defenses against AI manipulation</a> to train AI systems that can recognise and point out arguments that are persuasive regardless of whether they are deployed for a true or false conclusion.</p></li><li><p>Areas without ground truth pose some additional difficulty (e.g. many parts of philosophy).</p><ul><li><p>But I&#8217;m optimistic that it could be helpful to find epistemic strategies that work for questions where we <em>do</em> have ground truth &#8212; and then apply them to areas where we don&#8217;t have ground truth. (For example: If the method from Tom Davidson&#8217;s <a href="https://www.alignmentforum.org/posts/zxmzBTwKkPMxQQcfR/let-s-use-ai-to-harden-human-defenses-against-ai">post</a> successfully produced great advice about what arguments are good vs. bad, I think that advice would also be helpful for areas where we don&#8217;t have ground truth.)</p></li><li><p>See also <a href="https://www.alignmentforum.org/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy">Wei Dai on metaphilosophy</a> for some notes on how we might or might not be able to get AI to help us with philosophical progress.</p></li></ul></li><li><p>You could also train AI to provide a wide variety of generally useful reasoning tools for humans. 
For example:</p><ul><li><p>Getting AIs to pass many different humans&#8217; ideological Turing tests, so that you can on-demand check what people with different ideologies would think about various arguments.</p></li><li><p>You can give the AI access to lots of information about your views, and then have it search for contradictory combinations of statements.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> Or inconsistent epistemic standards used on different arguments.</p></li><li><p>AI can constantly fact-check everything, and constantly present relevant statistics and comparison points to things that you&#8217;re investigating. (Perhaps suggesting Fermi estimates for quantities where statistics don&#8217;t already exist.)</p></li><li><p>AI can help you operationalize fuzzy claims into precise &amp; quantifiable claims.</p></li><li><p>Get the AI to generate exercise questions for cognitive habits you want to practice.</p></li><li><p>Go through lists of common biases and use AI to notice cases where they might be in play, and counter them. (Provide opposing anchors to avoid anchoring bias; provide actually-representative examples to avoid availability heuristic; prompt you with the <a href="https://en.wikipedia.org/wiki/Reversal_test">reversal test</a> if you might be falling prey to status-quo bias, etc.)</p></li></ul><p>Ideally, you&#8217;d want to empirically test whether these methods do help people answer difficult questions.</p></li></ul><p>Alongside technical work to train the AIs to be better at these tasks, creating relevant datasets could also be super important. In fact, just publicly evaluating existing language models&#8217; abilities in these areas could slightly improve incentives for labs to get better at them.</p><h3>Related/previous work.</h3><p>Related organizations:</p><ul><li><p><a href="https://ought.org/">Ought</a></p></li><li><p><a href="https://quantifieduncertainty.org/">Quantified Uncertainty Research Institute</a> (QURI)</p></li><li><p><a href="https://forecastingresearch.org/">Forecasting Research Institute</a> (FRI)</p></li></ul><h2>Get AI to be used &amp; (appropriately) trusted</h2><p>The above section was about developing the necessary technology for AI to provide great epistemic assistance. This section is about increasing the probability that such AI systems are used and trusted (to the extent that they are trustworthy).</p><h3>Develop technical proposals for how to train models in a transparently trustworthy way [ML] [Governance]</h3><p>One direction here is to develop technical proposals for how people outside of labs can get enough information about models to know whether they are trustworthy.</p><p>(This significantly overlaps with <a href="https://lukasfinnveden.substack.com/i/140338085/avoiding-ai-assisted-human-coups">Avoiding AI-assisted human coups</a> from the section on &#8220;Governance during explosive technological growth&#8221;.)</p><p>One candidate approach here is to rely on a type of scalable oversight where each of the model&#8217;s answers is accompanied by a long trace of justification that explains how the model arrived at that conclusion. If the justification was sufficiently solid, it would be less important <em>why</em> the model chose to write it. 
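</p><p>As a minimal, hypothetical sketch of that idea (the check_step verifier below is a stand-in for whatever combination of humans, tools, or other models we trust to judge individual inference steps; it is not a description of how any lab actually trains or audits its models):</p><pre><code>from typing import Callable, List, Tuple

# Each step pairs the context established so far with the next claim derived from it.
JustificationStep = Tuple[str, str]

def answer_is_supported(trace: List[JustificationStep],
                        check_step: Callable[[str, str], bool]) -> bool:
    """Accept the model's conclusion only if every local step of its justification checks out."""
    return all(check_step(context, claim) for context, claim in trace)</code></pre><p>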
(It would be ideal, but perhaps infeasible, for the justification to be structured so that it could be checked for being locally correct at every point, after which the conclusion would be implied.)</p><p>Another approach is to clearly and credibly describe the training methodology that was used to produce the AI system so that people can check whether biases were introduced at any point in the process.</p><ul><li><p>One issue here is the balance between preserving trade secrets and releasing enough information that the models can be trusted.</p></li><li><p>If the training algorithm simply teaches the model to approximate the data, the key thing to release will often be the data. But this has the issue that (i) the data is valuable, and (ii) there&#8217;s too much data to easily check for biases.</p><ul><li><p><a href="https://www.anthropic.com/index/constitutional-ai-harmlessness-from-ai-feedback">Constitutional AI</a> makes some progress on these problems since it makes it easier to concisely explain the data as the result of a short constitution without releasing all of it.</p></li><li><p>Similarly, if the pre-training data is harvested from the internet, a company could simply describe how they harvested and filtered it.</p></li></ul></li><li><p>Another issue is that people might not trust AI labs&#8217; statements about these things.</p><ul><li><p>This can be addressed via mechanisms like having good whistleblower policies or third-party auditors.</p></li><li><p>This could also be complemented by more advanced compute-governance solutions, like those described in <a href="https://arxiv.org/pdf/2303.11341.pdf">Shavit (2023)</a>. (Especially in high-stakes situations, e.g. geopolitics.)</p></li></ul></li></ul><p>For any such scheme, people&#8217;s abilities to trust the AI developers will depend on their own competencies and capabilities.</p><ul><li><p>On one extreme: Someone who doesn&#8217;t understand the technical basics (or is too busy to dig into the details) may never be convinced on the merits alone. They would have to rely on track records or endorsements from trusted individuals or institutions.</p></li><li><p>On the other extreme: An actor with their own AI development capabilities may be able to verify certain claims by recapitulating core experimental results or even entire training runs.</p></li></ul><p>This means that one possible path to impact is to elucidate what capabilities certain core stakeholders (such as government auditors, opposition parties, or international allies) would need to verify key claims. And to advocate for them to develop those capabilities.</p><p>One possible methodology could be to write down highly concrete proposals and then check to what degree that <em>would</em> make outside parties trust the AI systems. And potentially go back and forth.</p><p>By a wide margin, the most effective check would be to implement the proposal in practice and see whether it successfully helps the organization build trust. (And whether it would, in practice, prevent the organization in question from misleading outsiders.)</p><p>But a potentially quicker option (that also requires less power over existing institutions) could be to talk with the kind of people you&#8217;d want to build trust with. That brings us to the next project proposal.</p><h3>Survey groups on what they would find convincing [survey/interview]</h3><p>If we want people to have appropriate trust in AI systems, it would be nice to have good information about their current beliefs and concerns. 
And what they would think under certain hypothetical circumstances (e.g. if labs adopted certain training methodologies, or if AIs got certain scores on certain evaluations, or if certain AIs&#8217; truthfulness was endorsed or disendorsed by various individuals).</p><p>Perhaps people&#8217;s cruxes would need to be addressed by something entirely absent from this list. Perhaps people are very well-calibrated &#8212; and we can focus purely on making things good, and then they&#8217;ll notice. Perhaps people will trust AI <em>too much</em> by default, and we should be spelling out reasons they should be more skeptical. It would be good to know!</p><p>To attack this question, you could develop some plausible stories on what could happen on the epistemic front and then present them to people from various important demographics (people in government, AI researchers, random democrats, random republicans). You could ask how trustworthy various AI systems (or AI-empowered actors) would seem in these scenarios and what information they would use to make that decision.</p><p>Note, however, that it will be difficult for people to know what they would think in hypothetical scenarios with (inevitably) very rough and imprecise descriptions. This is especially true since many of these questions are social or political, so people&#8217;s reactions are likely to have complex interactions with other people&#8217;s reactions (or expectations about their reactions). So any data gathered through this exercise should be interpreted with appropriate skepticism.</p><h3>Create good organizations or tools [ML] [Empirical research] [Governance]</h3><p>Creating a good organization or tool could be time-sensitive in several of the ways mentioned in <a href="https://lukasfinnveden.substack.com/i/140338209/why-working-on-this-could-be-urgent">Why working on this could be urgent</a>:</p><ul><li><p>It could increase people&#8217;s access to high-quality AI advice as important decisions are being made, during takeoff.</p></li><li><p>If you&#8217;re early with an epistemically excellent product, you could get a reputation for trustworthiness.</p></li><li><p>If excellent products or organizations are developed <em>before</em> it&#8217;s obvious what AI will say about certain controversial topics,&nbsp;that creates a window where people can develop appropriate trust in the AI-based methods, and go on the record about having some trust in them. This might increase their inclination to trust AI-based methods about future controversial claims that are well-supported by the evidence.</p></li></ul><h4>Examples of organizations or products</h4><p>Here are some examples on the &#8220;organization&#8221; end of things:</p><ul><li><p>Starting a company or non-profit that aims to use frontier AI in-house to provide thoroughly researched, excellent analyses of important topics.</p><ul><li><p>One analogy here is <a href="https://www.givewell.org/">GiveWell</a>. Many people trust GiveWell because they write up their research thoroughly and transparently &#8212; allowing others to <a href="https://forum.effectivealtruism.org/posts/6dtwkwBrHBGtc3xes/a-critical-review-of-givewell-s-2022-cost-effectiveness">critically review &amp; spot-check it</a>.</p></li><li><p>The bet here would be: By being at the forefront of using AI for literature reviews, modeling, and other research &#8212; an organization could do &#8220;GiveWell-style research&#8221; for a much broader set of topics. 
In the ideal world: Becoming the go-to source for rigorous and easy-to-navigate reviews of the evidence within some focus area(s). (Social sciences, forecasting, technical questions that matter for policy, etc.)</p></li></ul></li><li><p>Setting up a non-partisan agency within the government that is tasked with using frontier AI to provide advice on policy questions.</p><ul><li><p>One analogy here is the <a href="https://en.wikipedia.org/wiki/Congressional_Budget_Office">Congressional Budget Office</a> (CBO). The CBO was set up in the 1970s as a non-partisan source of information for Congress and to reduce Congress&#8217; reliance on the <a href="https://en.wikipedia.org/wiki/Office_of_Management_and_Budget">Office of Management and Budget</a> (which resides in the executive branch and has a director that is appointed by the currently sitting president). My impression is that the CBO is fairly successful.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></p></li><li><p>The bet here would be: If a competent non-partisan government agency could provide AI policy advice, that would both (i) reduce the government&#8217;s reliance on companies' AI advisors and (ii) avoid a scenario where the governments&#8217; AI advice is highly partisan, due to e.g. being developed in the executive branch.</p><ul><li><p>This proposal especially relies and capitalizes on it being easier for people to agree on deferring to non-ideological processes <em>before</em> it is apparent what those processes will conclude.</p></li></ul></li><li><p>An alternative set-up could be for the government to stipulate that any AI in government should be (e.g.) neutral and honest. And set up a non-partisan body that verifies that those criteria are met.</p></li></ul></li></ul><p>Here are some examples on the &#8220;tool&#8221; end of things:</p><ul><li><p>Instead of starting a whole government agency that&#8217;s dedicated to using AI for in-house research: You could also design an AI-based product that&#8217;s exceptionally helpful for providing advice and support to government officials and members of the civil service. Maybe quickly explaining difficult topics &amp; events to them, which, if kept up-to-date, might somewhat extend the time during which they and other humans can keep up as the world accelerates.</p></li><li><p>Or tools that are helpful for a broad array of researchers. (C.f. what <a href="https://ought.org/">Ought</a> is trying to do.)</p></li><li><p>Or you could target a much broader audience of consumers.</p><ul><li><p>Maybe people who want help understanding or forming views on difficult topics.</p></li><li><p>Maybe people who want advice on making big decisions. (E.g. career decisions.)</p></li><li><p>Maybe people who want to frequently use some of the &#8220;reasoning tools for humans&#8221; or &#8220;anti-persuasion tools&#8221; mentioned above.</p></li></ul></li><li><p>Or as a more narrow goal: You could target institutions that will &#8220;by default&#8221; provide the most commonly-used AI systems (primarily AI labs) and push them to do better on epistemics. 
E.g.:</p><ul><li><p>Spending more effort on making the AI systems ideologically neutral.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a></p></li><li><p>Accelerating efforts to make deployed AI models systematically good at saying true things, citing sources, fact-checking themselves, etc.</p><ul><li><p>Potentially using some of the <a href="https://lukasfinnveden.substack.com/i/140338209/methodologies">methodologies</a> suggested above (like running experiments on what sort of AI statements can only be used to argue for true things, vs. can just as easily lead people astray) to identify good epistemic habits that the model can follow, and training the model to follow those.</p></li></ul></li><li><p>Or developing and implementing <a href="https://lukasfinnveden.substack.com/i/140338209/develop-technical-proposals-for-how-to-train-models-in-a-transparently-trustworthy-way-ml-governance">technical proposals for how to train models in a transparently trustworthy way</a>.</p></li></ul></li></ul><p>In some ways, &#8220;tool&#8221; and &#8220;organization&#8221; is a false dichotomy. For &#8220;tools&#8221;, it&#8217;s probably efficient to pre-process some common queries/investigations in-house and then train the AI to report &amp; explain the results. And for organizations whose major focus is to use AI in-house, it&#8217;s likely that &#8220;building an AI that explains our takes&#8221; should be a core part of how they disseminate their research.</p><h3>Investigate and publicly make the case for why/when we should trust AI about important issues [Writing] [Philosophical/conceptual] [Advocacy] [Forecasting]</h3><p>One way to &#8220;accelerate people getting good AI advice&#8221; and &#8220;capitalize off of current veil of ignorance&#8221; could be to publicly write (e.g. a book) with good argumentation on:</p><ul><li><p>What sort of training schemes could lead to trustworthy AI vs. non-trustworthy AI. What sort of evaluations could tell us which one we&#8217;re getting.</p><ul><li><p>This would need to discuss legitimate doubts like &#8220;AI is being trained on data from biased humans&#8221; and what countermeasures would be sufficient to address them.</p></li><li><p>This would have to engage a lot with various proposed alignment techniques and concerns.</p></li><li><p>This should also get into <a href="https://docs.google.com/document/d/1xirfyf11PWNBO85cyY_VO-4zR1n26oPzhHGHy2dUkM8/edit#heading=h.tnyv7q7bqwyh">proposals for how to train models in a transparently trustworthy way</a>,&nbsp;i.e., talk about what labs would have to do to make AI transparently trustworthy for outside actors. (And potentially: What sort of verification capacity external actors would have to build for themselves to verify claims from labs.)</p></li></ul></li><li><p>Paint a positive vision for how much AI could improve the epistemic landscape if everything went well. 
I would focus on the 3 things I mentioned <a href="https://lukasfinnveden.substack.com/i/140338209/why-ai-matters-for-epistemics">at the top of this section</a>: the ability to get greater confidence about AI motivations, how AI could make it vastly cheaper to do large investigations, and the much better ability to experimentally find and validate great epistemic methods.</p></li><li><p>Appropriate call to action: push for tech companies to develop AIs with good types of training, push for governments to incorporate good AI advice in decision making, urge people to neither blindly trust nor dismiss surprising AI statements, but to carefully look at the evidence, including information about how that AI was trained and evaluated.</p></li></ul><h3>Developing standards or certification approaches [ML] [Governance]</h3><p>It could be desirable to end up in an ecosystem where evaluators and auditors check popular AI models for their propensity to be truthful. <a href="https://arxiv.org/pdf/2110.06674.pdf">This paper</a> on Truthful AI has lots of content on that. It could be good to develop such standards and certification methodologies or perhaps to start an organization that runs the right evaluations.</p><h2>Develop &amp; advocate for legislation against bad persuasion [Governance] [Advocacy]</h2><p>Most of the above project suggestions are about supporting good applications of AI. The other side of the coin is to try to prevent bad applications of AI. In particular, it could be good to develop and advocate for legislation that limits the extent to which language models can be used for persuasion.</p><p>I&#8217;m not sure what such legislation should look like. But here are some ideas.</p><ul><li><p>One pretty natural target for regulation could be "Real-time AI-produced content which is paid for by a political campaign / PAC".</p><ul><li><p>For such content, regulation could require e.g. citations for all claims, or could require AI systems&#8217; positions to be consistent when talking with different users.</p></li><li><p>If this makes it hard to produce an engaging AI for those purposes, then that's plausibly good.</p></li></ul></li><li><p>Regulation could make it harder for organization A to pay company B to change the content that company B's chatbot produces. Or ban those sorts of sponsorships.</p></li><li><p>In general, plenty of laws pertain to &#8220;advertisement&#8221; today. I&#8217;m not sure how the law defines that, but maybe there are sensible modifications to make such that those laws cover "ad-bots" and have appropriate safeguards in place.</p></li><li><p>It seems helpful for people to know whether they are interacting with AI systems or humans.</p><ul><li><p>At least California has already made <a href="https://www.termsfeed.com/blog/ca-bot-disclosure-law/#:~:text=California's%20Bot%20Disclosure%20Law%20(California,of%20regulating%20online%20business%20practices.">some legislation</a> requiring companies to disclose facts about this.</p></li><li><p>There are also various technical proposals for &#8220;watermarking&#8221; AI content to make this easier.</p></li><li><p>I don&#8217;t know what seems to work in practice here, but handling this well could be important.</p></li></ul></li><li><p>Some of the <a href="https://lukasfinnveden.substack.com/i/140338209/methodologies">methodologies</a> suggested above can be used to find or validate proposals about what models should or shouldn&#8217;t be allowed to say. 
(I.e., you could run experiments on whether such constraints would make it hard for AI to persuasively instill false beliefs, and check how much less useful the AI would be when it was used for honest purposes.)</p></li></ul><p><strong>Related/previous work:</strong></p><ul><li><p><a href="https://www.lesswrong.com/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion">Risks from AI persuasion</a> by Beth Barnes.</p></li><li><p><a href="https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency">Persuasion Tools</a> by Daniel Kokotajlo.</p></li><li><p><a href="https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf">Epistemic security report</a>.</p></li><li><p>Probably lots of people have been writing about this that I don&#8217;t know about!</p></li></ul><h2>End</h2><p>That&#8217;s all I have on this topic! As a reminder: it's very incomplete. But if you're interested in working on projects like this, please feel free to get in touch.</p><p><em>Other posts in series: <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">Introduction</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">governance during explosive growth</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">sentience and rights of digital minds</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-backup-plans-and-cooperative">backup plans &amp; cooperative AI</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Illustratively:</p><ul><li><p>We can do experiments to determine what sorts of procedures provide great reasoning abilities.</p><ul><li><p>Example procedures to vary: AI architectures, LLM scaffolds, training curricula, heuristics for chain-of-thought, protocols for interaction between different AIs, etc.</p></li><li><p>To do this, we need tasks that require great reasoning abilities and where there exists lots of data. One example of such a task is the classic &#8220;predict the next word&#8221; that current LLMs are trained against.</p></li></ul></li><li><p>With enough compute and researcher hours, such iteration should yield large improvements in reasoning skills and epistemic practices. (And the researcher hours could themselves be provided by automated AI researchers.)</p></li><li><p>Those skills and practices could then be translated to other areas, such as forecasting. And their performance could be validated by testing the AIs&#8217; ability to e.g. predict 2030 events from 2020 data.</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This is related to how defense against manipulation might be more difficult than manipulation itself. See e.g. the second problem discussed by Wei Dai <a href="https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety">here</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Although this fit has been far from perfect, e.g. 
religions posit many false beliefs but have nevertheless spread and in some cases increased the competitive advantage of groups that adopted them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>For some previous discussion, see <a href="https://www.lesswrong.com/posts/7jSvfeyh8ogu8GcE6/decoupling-deliberation-from-competition?commentId=bSNhJ89XFJxwBoe5e">here</a> for a relevant post by Paul Christiano and a relevant comment thread between Christiano and Wei Dai.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Of course, if we&#8217;re worried about misalignment, then we should also be less trusting of AI advice. But I think it&#8217;s plausible that we&#8217;ll be in a situation where AI advice is helpful while there&#8217;s still significant remaining misalignment risk. For example, we may have successfully aligned AI systems of one capability level, but be worried about more capable systems. Or we may be able to trust that AI typically behaves well, or behaves well on questions that we can locally spot-check, while still worrying about a sudden treacherous turn.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>For instance: It seems plausible to me that &#8220;creating scary technologies&#8221; has better feedback loops than &#8220;providing great policy analysis on how to handle scary technologies&#8221;. And current AI methods benefit a lot from having strong feedback loops. (Currently, especially in the form of plentiful data for supervised learning.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>And if there&#8217;s a choice between different epistemic methodologies: perhaps pick whichever methodology lets them keep their current views.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>What does it mean for a model to have a &#8220;latent capability&#8221;? I&#8217;m thinking about the definition that Beth Barnes uses in <a href="https://www.alignmentforum.org/posts/GbAymLbJdGbqTumCN/more-detailed-proposal-for-measuring-alignment-of-current#A_more_formal_definition_of_alignment">this appendix</a>. 
See also the discussion in <a href="https://www.alignmentforum.org/posts/h7QETH7GMk9HcMnHH/the-no-sandbagging-on-checkable-tasks-hypothesis?commentId=fNrMBfq9MbM5kYCH7#fNrMBfq9MbM5kYCH7">this comment thread</a>, where Rohin Shah asks for some nuance about the usage of &#8220;capability&#8221;, and I propose a slightly more detailed definition.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Of course, better capability elicitation would also accelerate tasks that could increase AI risk. In particular: improved capability elicitation could accelerate AI R&amp;D, which could accelerate AI systems&#8217; capabilities. (Including latent capabilities.) I acknowledge that this is a downside, but since it&#8217;s only an indirect effect, I think it&#8217;s worth it for the kind of tasks that I outline in this section. In general: most of the reason why I&#8217;m concerned about AI x-risk is that critical actors will make important <em>mistakes</em>, so improving people&#8217;s epistemics and reasoning ability seems like a great lever for reducing x-risk. Conversely, I think it&#8217;s quite likely that dangerous models can be built with fairly straightforward scaling-up and tinkering with existing systems, so I don&#8217;t think that increased reasoning ability will make any <em>huge</em> difference in how soon we get dangerous systems. That said, considerations like this are a reason to target elicitation efforts more squarely at especially useful and neglected targets (e.g. forecasting) and avoid especially harmful or commercially incentivized targets (e.g. coding abilities).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>If it&#8217;s too difficult to date all existing pre-training data retroactively, then that suggests that it could be time-sensitive to ensure that all newly collected pre-training data is being dated, so that we can at least do this in the future.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Though one risk with tools that make your beliefs more internally coherent/consistent is that they could extremize your worldview if you start out with a few wrong but strongly-held beliefs (e.g. if you believe one conspiracy theory, that often requires further conspiracies to make sense). (H/t Fin Moorhouse.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>See e.g. 
<a href="https://www.kentclarkcenter.org/surveys/the-cbo/">this survey</a> which has &gt;30 economists &#8220;Agree&#8221; or &#8220;Strongly agree&#8221;&nbsp; (and 0 respondents disagree) with &#8220;Adjusting for legal restrictions on what the CBO can assume about future legislation and events, the CBO has historically issued credible forecasts of the effects of both Democratic and Republican legislative proposals.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>On some questions, answering truthfully might inevitably have an ideological slant to it. But on others it doesn&#8217;t. It seems somewhat scalable to get lots of people to red-team the models to make sure that they&#8217;re impartial when that&#8217;s appropriate, e.g. avoiding situations where they&#8217;re <a href="https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/">happy to write a poem about Biden but refuse to write a poem about Trump</a>. And on questions of fact &#8212;&nbsp;you can ensure that if you ask the model a question where the weight of the evidence is inconvenient for some ideology, the model is equally likely to give a straight answer regardless of which side would find the answer inconvenient. (As opposed to dodging or citing a common misconception.)</p></div></div>]]></content:encoded></item><item><title><![CDATA[Project ideas: Governance during explosive technological growth]]></title><description><![CDATA[Post two in a series of five.]]></description><link>https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Thu, 04 Jan 2024 00:01:20 GMT</pubDate><content:encoded><![CDATA[<p><em>This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">here</a> for the introductory post.</em></p><p>Commonly discussed motivations cited for why rapid AI progress might be scary are:</p><ul><li><p>Risks from misalignment.</p></li><li><p>Risk from AI-assisted bioweapons.</p></li></ul><p>But even aside from these risks, it seems likely that advanced AI will lead to explosive technological and economic growth across the board, which could lead to a large number of problems emerging at a frighteningly fast pace.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The growth speed-ups could be extreme. 
The basic worry is that AI would let us return to a <a href="https://www.openphilanthropy.org/research/modeling-the-human-trajectory/">historical trend of super-exponential growth</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> If this happens, I don&#8217;t know any reassuring upper limit for how fast growth could go.</p><p>(Illustratively: Paul Christiano&#8217;s <a href="https://sideways-view.com/2018/02/24/takeoff-speeds/">suggested definition</a> for <em>slow</em> takeoff is &#8220;There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.&#8221; If world GDP doubled in a single year, that would mean that growth was ~30x faster than it is right now.)</p><p>If technological growth speeds up by, say, 30x, then that suggests that over just a few years, we might have to deal with all the technologies that would (at a &#8220;normal&#8221; pace) be discovered over 100 years. That&#8217;s an intense and scary situation.</p><p>This section is about problems that might arise in this situation and governance solutions that could help mitigate them. It&#8217;s also about &#8220;meta&#8221; solutions that could help us deal with all of these issues at once, e.g. by improving our ability to coordinate to slow down the development and deployment of new technologies.</p><p>Note: Many of the projects in this section would also be useful for alignment. But I&#8217;m not covering any proposals that are purely focused on addressing alignment concerns.</p><h2>Investigate and publicly make the case for/against explosive growth being likely and risky [Forecasting] [Empirical research] [Philosophical/conceptual] [Writing]</h2><p>I think there&#8217;s substantial value to be had in vetting and describing the case for explosive growth, as well as describing why it could be terrifying. Explosive growth underlies most of the concerns in this section &#8212;&nbsp;so establishing the basic risk is very important.</p><p>Note: There&#8217;s some possible backfire risk, here. Making a persuasive case for explosive growth could motivate people to try harder to get there even faster. (And in particular, to try to get there before other actors do.) Thereby giving humanity even less time to prepare.</p><p>I don&#8217;t think that&#8217;s a crazy concern. 
On the other hand, it&#8217;s plausible that we&#8217;re currently in an unfortunate middle ground, where everyone <em>already</em> believes that frontier AI capabilities will translate into a lot of power, but no one expects the crazy-fast growth that would strike fear into their hearts.</p><p>On balance, my current take is that it&#8217;s better for the world to see what&#8217;s coming than to stumble into it blindly.</p><p><strong>Related/previous work:</strong></p><ul><li><p><a href="https://arxiv.org/abs/2309.11690">Explosive growth from AI automation: A review of the arguments</a>.</p></li><li><p>OpenPhil reports.</p><ul><li><p><a href="https://www.openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/">Could advanced AI drive explosive growth?</a></p></li><li><p><a href="https://www.lesswrong.com/posts/Gc9FGtdXhK9sCSEYu/what-a-compute-centric-framework-says-about-ai-takeoff">What a compute-centric framework says about AI takeoff speeds</a></p></li><li><p><a href="https://www.openphilanthropy.org/research/modeling-the-human-trajectory/">Modeling the human trajectory</a>.</p></li></ul></li><li><p><a href="https://www.cold-takes.com/most-important-century/">Most important century series</a>.</p></li><li><p><a href="https://www.dwarkeshpatel.com/p/carl-shulman">Carl Shulman episode</a> on Lunar Society.&nbsp;</p></li></ul><p><strong>Examples of how to attack this problem:</strong></p><ul><li><p>One objection to explosive growth is that certain bottlenecks will prevent large factors of speed-up like 10x or 100x. One project would be to look into such bottlenecks (running physical experiments, regulatory hurdles, mining the requisite materials, etc.) and assess their plausibility.</p><ul><li><p>For existing work here, see the &#8220;arguments against the explosive growth hypothesis&#8221; section of <a href="https://arxiv.org/abs/2309.11690">Explosive growth from AI automation</a>.</p></li></ul></li><li><p>Biological analogies of maximally fast self-replication times and how relevant those analogies are to future growth.</p><ul><li><p>For example, duckweed <a href="https://pubmed.ncbi.nlm.nih.gov/24803032/">has a doubling time of a few days</a>.</p></li></ul></li><li><p>Laying out the existing case in a more accessible and vivid format.</p></li><li><p>Investigating concrete scary technologies and detailing scenarios where they emerge too fast for us to figure out how to deal with them. E.g.:</p><ul><li><p>Cheap nukes.</p></li><li><p>Nanotech.</p></li><li><p>Cheap and scalable production of small, deadly drones.</p></li><li><p>Highly powerful surveillance, lie detection, and mind reading.</p></li><li><p>Extremely powerful persuasion.</p></li><li><p>Various issues around digital minds. Capability of uploading human minds.</p></li><li><p>Transitioning from an economy where people make money via labor to one where almost all income is paid out to those who control capital. (Because of AI automation.)</p></li></ul></li></ul><h2>Painting a picture of a great outcome [Forecasting] [Philosophical/conceptual] [Governance]</h2><p>Although a fast intelligence explosion would be terrifying, the resulting technology could also be used to create a fantastic world. It would be great to be able to combine (i) appropriate worry about a poorly handled intelligence explosion and (ii) a convincing case for how everyone could get what they want if we just coordinate. That would make for a powerful case for why people should focus on coordinating. 
(And not risk everything by racing and grabbing for power.)</p><p>For a related proposal and some ideas about how to go about it, see Holden Karnofsky&#8217;s proposal <a href="https://docs.google.com/document/d/1vE8CrN2ap8lFm1IjNacVV2OJhSehrGi-VL6jITTs9Rg/edit#heading=h.h37zge5difx0">here</a>.</p><p>For some previous work, see the Future of Life Institute&#8217;s <a href="https://worldbuild.ai/">worldbuilding contest</a> (and <a href="https://forum.effectivealtruism.org/posts/3zoxiT6bnaTpLZZD3/fli-podcast-series-imagine-a-world-about-aspirational">follow-up work</a>).</p><h2>Policy-analysis of issues that could come up with explosive technological growth [Governance] [Forecasting] [Philosophical/conceptual]</h2><p>Here are three concrete areas that might need solutions in order for humanity to build an excellent world amidst all the new technology that might soon be available to us. Finding policy solutions for these could:</p><ul><li><p>Help people get started with implementing the solutions.</p></li><li><p>Contribute to &#8220;Painting a picture of a great outcome&#8221;, mentioned just above.</p></li></ul><h3>Address vulnerable world hypothesis with minimal costs</h3><p>(&#8220;Vulnerable world hypothesis&#8221; is in reference to <a href="https://nickbostrom.com/papers/vulnerable.pdf">this paper</a>.)</p><p>If too many actors had access to incredibly destructive tech, we&#8217;d probably see a lot of destruction. Because a small fraction of people would choose to use it.</p><p>Unfortunately, a good heuristic for what we would get from rapid technological growth is: cheaper production of more effective products. This suggests that sufficiently large technological growth would enable cheap and convenient production of e.g. even more explosive nukes or deadlier pandemics.</p><p>Will technology also give us powerful defenses against these technologies? I can easily imagine technological solutions to some issues, e.g. physical defenses against the spread of deadly pandemics. But for others, it seems more difficult. I don&#8217;t know what sort of technology would give you a cheap and convenient defense against a nuclear bomb being detonated nearby.</p><p>One solution is to prevent the problem at its source: By preventing access to the prerequisite materials or technologies and/or by monitoring people who have the capacity to cause large amounts of destruction.</p><p>But in some scenarios, almost anyone could have the capacity to cause widespread destruction. So the monitoring might have to be highly pervasive, which would come with significant costs and risks of misuse.&nbsp;</p><p>Exploring potential solutions to this problem hasn&#8217;t really been done in depth. 
It would be great to find solutions that minimize the harms both from destructive technologies and from pervasive surveillance.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p><strong>Examples of how to attack this problem:</strong></p><ul><li><p>To what extent would the proposal in <a href="https://sideways-view.com/2018/02/02/surveil-things-not-people/">Surveil things, not people</a> solve the problem?</p><ul><li><p>For example: Are there plausible technologies that could spread purely via people sharing information?</p></li></ul></li><li><p>Better information technology could allow surveillance/monitoring to be much more <em>precise</em>, detecting certain key facts (&#8220;Is this person constructing a bomb?&#8221;) while not recording or leaking information about anything else. (C.f. <a href="https://benmgarfinkel.blog/2020/03/09/privacy-optimism-2/">the case for privacy optimism</a> / <a href="https://arxiv.org/abs/2012.08347">Beyond Privacy Trade-offs with Structured Transparency</a>.) To what extent could variants of this address concerns about intrusive surveillance?</p><ul><li><p>Could this address pragmatic concerns, e.g.: government abuse of surveillance powers, flexible enforcement of unjust laws, etc.?</p></li><li><p>Relatedly: Could this be constructed in a way that allowed citizens to verify that no surveillance was going on beyond what was claimed?</p></li><li><p>Absent pragmatic concerns, would some variants of this be compatible with people&#8217;s psychological desire for privacy?</p></li><li><p>Which variants of this would be (in)compatible with established legal rights to privacy?</p></li></ul></li></ul><h3>How to handle brinkmanship/threats?</h3><p>Historically, we&#8217;ve taken on large risks from brinkmanship around nukes. For example, during the Cuban missile crisis, President Kennedy thought that the risk of escalation to war was &#8220;between 1 in 3 and even&#8221;.</p><p>I would expect similar risks to go up substantially at a time when humanity is rapidly developing new technology. This technology will almost certainly enable new, powerful weapons with unknown strategic implications and without any pre-existing norms about their use.</p><p>In addition, AI might enable new commitment mechanisms.</p><ul><li><p>If you had a solution to the alignment problem, you could program an AI system to follow through on certain commitments and then hand over control to that AI system.</p></li><li><p>If you had accurate lie detection, some people might be able to decide to keep certain commitments and use lie detection technology to make those commitments credible.</p></li></ul><p>This could help solve coordination problems. But it could also create entirely new strategies and risks around brinkmanship and threats.</p><p>What policies should people adopt in the presence of strong commitment abilities? My impression is that state-of-the-art theory on this isn&#8217;t great.</p><p>The main recommendation from traditional game theory is &#8220;try hard to commit to something crazy before your opponent, and then it will be rational for them to do whatever you want&#8221;. That doesn&#8217;t seem like the way we want future decisions to be made.</p><p>There are some alternative theoretical approaches under development, such as <a href="https://arxiv.org/abs/2208.07006">open-source game theory</a>. But they haven&#8217;t gotten very far. 
And I don&#8217;t know of any ambitious attempts to answer the question of how people ought to behave around this in the real world, taking into account real-world constraints (around credibility, computation power, lack of perfect rationality, etc.) as well as real-world advantages (such as a shared history and some shared human intuitions, which might provide a basis for successfully coordinating on good norms).</p><p>Here&#8217;s one specific story for why this could be tractable and urgent: Multi-agent interactions often have a ton of equilibria, and which one is picked will depend on people&#8217;s expectations about what other people will do, which are informed by <em>their</em> expectations of other people, etc. If you can anticipate a strategic situation before it arrives and suggest a particular way of handling it, that could change people&#8217;s expectations about each other and thereby change the rational course of action.</p><p><strong>Examples of how to attack this problem:</strong></p><ul><li><p>What will the facts-on-the-ground situation look like? What commitment abilities will people have, and how transparent will they be to other parties?</p></li><li><p>Given plausible facts on the ground, what are some plausible norms that would (i) make for a good society if everyone followed them, (ii) be good to follow if everyone else followed them, and (iii) not be too brittle if people settle on somewhat different norms?</p><ul><li><p>Example of a norm: Never do something <em>specifically</em> because someone else doesn&#8217;t want it.</p></li><li><p>This could draw a lot on looking at current norms around geopolitical conflicts and other areas, to get out of abstract land and think about what people do in practice. (For example, for the norm suggested above, you could ask whether it would be compatible with, or would condemn, sanctions. What about the criminal justice system?)</p></li></ul></li></ul><h3>Avoiding AI-assisted human coups</h3><p>Advanced AI could enable dangerously high concentrations of power. This could happen via at least two different routes.</p><ul><li><p>Firstly, a relatively small group of people (e.g. a company or some executives/employees of a company) who develop the technology could rapidly accumulate a lot of technology and cognitive power compared to the rest of the world. If those people decided to launch a coup against a national government, they might have a good chance of succeeding.</p><ul><li><p>The obvious solution to that problem is for the government and other key stakeholders to have significant oversight over the company&#8217;s operations. Up to and including nationalizing frontier labs.</p></li></ul></li><li><p>The <em>second</em> problem is that AI might lead power <em>within</em> institutions to be more concentrated in the hands of a few people.</p><ul><li><p>Today, institutions are often coordinated via chain-of-command systems where humans are expected to obey other humans. But <em>hard</em> power is ultimately distributed among a large number of individual humans.</p><ul><li><p>As a consequence: If a leader tries to use their power to launch a coup, their subordinates are capable of noticing the extraordinary circumstances they find themselves in and using their own judgment about whether to obey orders or not. 
So Alice can&#8217;t necessarily count on Bob&#8217;s support in a coup, even if Bob is Alice&#8217;s subordinate.</p></li></ul></li><li><p>But with AI, it will be technically possible to construct institutions where hard power is controlled by AIs,&nbsp;who could be trained to obey certain humans&#8217; orders without question. And even if those AIs were <em>also </em>programmed to follow certain laws and standards, a conflict between laws/standards and the orders of their human overseers might lead to behavior that is out-of-distribution and unpredictable (rather than the laws/standards overriding the humans&#8217; orders).</p></li><li><p>To solve this problem: We&#8217;d want people to make conscious decisions about what the AIs should do when the normal sources of authority disagree with each other, and to explicitly train the AIs for the right behavior in those circumstances. Also, we&#8217;d want them to institute controls to prevent a small number of individuals from unilaterally retraining the AIs.</p></li></ul></li></ul><p><strong>Examples of how to attack this:</strong></p><ul><li><p>Spell out criteria (e.g. certain evaluations) for when AI is becoming powerful enough that it needs strong government oversight. Describe what this oversight would need to look like.</p></li><li><p>Outline a long list of cases where it might be ambiguous what highly capable AI systems should do (when different sources of authority disagree). Spell out criteria for when AIs have enough power that the relevant lab/government should have taken a position on all those cases (and trained the AI to behave appropriately).</p></li><li><p>Advocate for labs to set up strict internal access controls for model weights, as well as access controls and review of the code used to train large language models. This is to prevent a small group of lab employees from modifying the training of powerful AIs to make the AIs loyal to them in particular.</p></li><li><p>Get similar assurances in place for AIs used in government, the military, and law enforcement. This could include giving auditing privileges to opposition parties, international allies, etc.</p></li><li><p>Spell out technical competencies that auditors (either dedicated organizations or stakeholders like opposition parties and other governments) would need to properly verify what they were being told. Publicly explain this and advocate for those actors to develop those technical competencies.</p></li><li><p>There&#8217;s significant overlap between solutions to this problem and the proposal <a href="https://lukasfinnveden.substack.com/i/140338209/develop-technical-proposals-for-how-to-train-models-in-a-transparently-trustworthy-way-ml-governance">Develop technical proposals for how to train models in a transparently trustworthy way</a> from the &#8220;Epistemics&#8221; post in this series.</p></li></ul><p>(Thanks to Carl Shulman for discussion.)</p><h3>Governance issues raised by digital minds</h3><p>There are a lot of governance issues raised by the possibility of digital minds. For example, what sort of reform is needed in one-person-one-vote democracies when creating new persons is as easy as copying software? 
See also <a href="https://lukasfinnveden.substack.com/i/140338243/develop-candidate-regulation-governance-forecasting">Develop candidate regulation</a> from the &#8220;Digital Sentience &amp; Rights&#8221; post in this series.</p><h2>Norms/proposals for how to navigate an intelligence explosion [Governance] [Forecasting] [Philosophical/conceptual]</h2><p><a href="https://lukasfinnveden.substack.com/i/140338085/painting-a-picture-of-a-great-outcome-forecasting-philosophicalconceptual-governance">Painting a picture of a great outcome</a> suggested outlining an acceptable &#8220;endpoint&#8221; to explosive growth.</p><p>Separately, there&#8217;s a question of what the appropriate norms are for how we get from where we are today to that situation.</p><p>Other than risks from misaligned AI along the way, I think the 3 central points here are:</p><ul><li><p>If one actor pushes ahead with an intelligence explosion, they might get a massive power advantage over the rest of the world. That creates a big risk for everyone else, who might find themselves powerless. Instead of going through with a massive gamble like that,&nbsp;could we set up agreements or norms that make it more likely that <em>everyone</em> has some say in the future?</p></li><li><p>A maximum-speed intelligence explosion will lead to <em>a lot</em> of changes and <em>a lot</em> of new technology really, really fast. That&#8217;s a core part of what&#8217;s scary, here. Could we somehow coordinate to go slower?</p></li><li><p>Good post-intelligence-explosion worlds will (at least eventually) look quite different from our world, and that includes governance &amp; politics looking quite different. For example:</p><ul><li><p>There will be more focus on deciding what sorts of crazy technology can be used in what ways.</p></li><li><p>There will be less need to focus on economic growth to meet the material needs of currently existing people.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li><li><p>There will be massive necessary changes associated with going from purely biological citizens to also having digital minds with political rights.</p></li><li><p>There may be changes in what form of government is naturally favored. Worryingly, perhaps away from favoring democracy,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> which could return us to autocratic forms of rule (which are more historically typical) unless we make special efforts to preserve democracy.</p></li><li><p>There will be strong incentives to hand over various parts of politics to more effective AI systems. (Drafting proposals, negotiating, persuading people about your point of view, etc.)</p></li></ul><p>It would be nice to get a head start on making this governance transition happen smoothly and deliberately.</p></li></ul><p>What follows are some candidate norms and proposals. Each of them could be:</p><ul><li><p>Sketched out in much greater detail.</p></li><li><p>Evaluated for how plausible it is to form a credible norm around it.</p></li><li><p>Evaluated for whether such a norm would be good or bad.</p></li><li><p>Evaluated for whether partial movement in that direction would be good or bad. 
(So that we don&#8217;t propose norms that backfire if they get less than complete support.)</p></li><li><p>Advocated for.</p></li></ul><p>(For ideas in this section, I&#8217;ve especially benefitted from discussion with Carl Shulman, Will MacAskill, Nick Beckstead, Ajeya Cotra, Daniel Kokotajlo, and Joe Carlsmith.)</p><h4>No &#8220;first strike&#8221; intelligence explosion</h4><p>One candidate norm could be: &#8220;It&#8217;s a grave violation of international norms for a company or a nation to unilaterally start an intelligence explosion, because this has a high likelihood of effectively disempowering all other nations. A nation can only start an intelligence explosion if some other company or nation has already started an intelligence explosion of their own or if they have agreement from a majority of other nations that this is in everyone&#8217;s best interest.&#8221; (Perhaps because they have adequate preparations for reassuring other countries that they won&#8217;t be disempowered. Or perhaps because the alternative would be to wait for an even less responsible intelligence explosion from some other country.)</p><h4>Never go faster than X?</h4><p>The above proposal was roughly: Only go through with an intelligence explosion if you have agreements from other nations about how to do it.</p><p>A more opinionated and concrete proposal is: Collectively, as a world, we should probably never grow GDP or develop new technology faster than a certain maximum rate. Perhaps something like: Technological and economic growth should be no more than 5x faster than their 2023 rates.</p><h4>Concrete decision-making proposals</h4><p>A proposal that would be more opinionated in some ways, and less opinionated in other ways, would be: At some level of AI capabilities, you&#8217;re supposed to call for a grand constitutional convention to decide how to navigate the rest of the intelligence explosion, and what to do afterward.</p><p>You could make suggestions about this constitutional convention, e.g.:</p><ul><li><p>The first round will last for a year.</p></li><li><p>There will be 1000 people participating, sampled from nations in proportion to their population.&nbsp;</p></li><li><p>People will use approval voting.</p></li><li><p>etc.</p></li></ul><p>The &#8220;constitutional convention&#8221; proposal resembles something like <a href="https://en.wikipedia.org/wiki/Deliberative_democracy">deliberative democracy</a> insofar as the selected people are randomly chosen, and looks more like typical geopolitical deliberations insofar as nations&#8217; governments decide which representatives to send.&nbsp;</p><p>(Thanks to Will MacAskill for discussion.)</p><h4>Technical proposals for slowing down / coordinating</h4><p>In order to be especially effective, all three of the above proposals require credible technical proposals for how nations could verify that they were all collectively slowing down. Exactly how such proposals should work is a big open problem, which, thankfully, some people are working on.</p><p>I don&#8217;t know of any detailed list of open questions here. (And I&#8217;d be interested if anyone had such a list!) 
But some places to look are:</p><ul><li><p><a href="https://arxiv.org/abs/2303.11341">Shavit (2023)</a> is an important paper that brings up some open problems.</p></li><li><p>Lennart Heim&#8217;s list of <a href="https://heim.xyz/resources/">resources</a> for compute governance.</p></li></ul><p>Naming some other candidate directions:</p><ul><li><p>Research how to design computer chips, and rules for computer chips, that cleanly distinguish chips that can and can&#8217;t be used for AI training, so that restrictions on AI chips can be implemented with minimal consequences for other applications.</p></li><li><p>Proposing schemes for how competitors (like the US &amp; China) could set up sufficiently intense monitoring of each other to be confident that there&#8217;s no illicit AI research happening.</p><ul><li><p>C.f. historical success at spying for detecting large operations.</p></li></ul></li><li><p>How feasible would it be to do an international &#8220;CERN for AI&#8221;-type thing?</p><ul><li><p>In particular: To what extent could this possibly involve competitors who don&#8217;t particularly trust each other, given risks of leaks and risks of backdoors or data poisoning?</p></li></ul></li><li><p>Great proposals here should probably be built around capability evaluations and commitments for what to do at certain capability levels. We can already start to practice this by developing better capability evaluations and proposing standards around them. It seems especially good to develop evaluations for AI speeding up ML R&amp;D.</p></li></ul><h4>Dubiously enforceable promises</h4><p>Get nations to make promises early on along the lines of &#8220;We won&#8217;t use powerful AI to offensively violate other countries&#8217; national sovereignty. Not militarily, and not by weird indirect means either (e.g. via superhuman AI persuasion).&#8221; Maybe promising other nations a seat at the bargaining table that determines the path of the AI-driven future. Maybe deciding on a default way to distribute future resources and promising that all departures from that will require broad agreement.</p><p>It is, of course, best if such promises are credible and enforceable.</p><p>But I think this would have some value even if it&#8217;s just something like: The US Congress passes a bill that contains a ton of promises to other nations. And that increases the probability that the US will act according to those promises.</p><p>There&#8217;s a lot of potential work to do here in drafting suggested promises that would:</p><ul><li><p>Be meaningful in extreme scenarios.</p></li><li><p>Be plausibly credible in extreme scenarios.</p></li><li><p>Plausibly be passable in a political climate that still has serious doubts about where all this AI stuff will lead.</p></li></ul><p>(Thanks to Carl Shulman for discussion.)</p><h4>Technical proposals for aggregating preferences</h4><p>A different direction would be for people to explore technical proposals for effectively aggregating people&#8217;s preferences. That way, during an intelligence explosion, it would be more convenient to get accurate pictures of what different constitutions would recommend, thereby making it harder to legitimately dismiss such demands.</p>
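<p>(To make this concrete, here is a deliberately toy sketch in Python. It is a made-up illustration rather than any of the proposals cited below: it aggregates ranked preferences from a sampled group by pairwise majority vote, and falls back to Borda scores when no option beats every other option head-to-head. Real proposals would also need to handle elicitation, deliberation, strategic voting, and much larger option spaces.)</p><pre><code># Toy sketch of preference aggregation (illustrative only).
# Each ranking lists options from most to least preferred for one person.
from collections import defaultdict
from itertools import combinations

def aggregate(rankings):
    options = set(rankings[0])
    pairwise_wins = defaultdict(int)
    for a, b in combinations(options, 2):
        a_over_b = sum(r.index(a) &lt; r.index(b) for r in rankings)
        if 2 * a_over_b &gt; len(rankings):
            pairwise_wins[a] += 1
        elif 2 * a_over_b &lt; len(rankings):
            pairwise_wins[b] += 1
    # A Condorcet winner beats every other option in pairwise majority votes.
    for option in options:
        if pairwise_wins[option] == len(options) - 1:
            return option, "condorcet winner"
    # Otherwise fall back to Borda scores (higher-ranked options get more points).
    borda = {o: sum(len(r) - r.index(o) for r in rankings) for o in options}
    return max(borda, key=borda.get), "borda fallback"

if __name__ == "__main__":
    sample = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
    print(aggregate(sample))  # ('A', 'condorcet winner')
</code></pre>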
<p><strong>Previous work:</strong></p><ul><li><p>Jan Leike&#8217;s <a href="https://aligned.substack.com/p/a-proposal-for-importing-societys-values">proposal</a> for &#8220;Building towards Coherent Extrapolated Volition with language models&#8221;.</p></li><li><p>The <a href="https://pol.is/home">Polis</a> platform. (Which, notably, Anthropic <a href="https://www.anthropic.com/index/collective-constitutional-ai-aligning-a-language-model-with-public-input">used</a> when exploring what constitution to choose for their constitutional AI methods.)</p></li></ul><h2>Decrease the power of bad actors</h2><p>Bad actors could make dangerous decisions about what to do with AI technology. They might increase risks from <a href="https://lukasfinnveden.substack.com/i/140338085/painting-a-picture-of-a-great-outcome-forecasting-philosophicalconceptual-governance">brinkmanship</a> or other <a href="https://lukasfinnveden.substack.com/i/140338085/address-vulnerable-world-hypothesis-with-minimal-costs">destructive technology</a> &#8212;&nbsp;or they might seize and maintain indefinite power over the future and make poor choices about what to do with it. See also some of the risks described in <a href="https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors">Reducing long-term risks from malevolent actors</a>.</p><p>One line of attack on this problem is to develop policies for <a href="https://lukasfinnveden.substack.com/i/140338085/avoiding-ai-assisted-human-coups">avoiding AI-assisted human coups</a>. But here are a few more.</p><h3>Avoid malevolent individuals getting power within key organizations [Governance]</h3><p>If you have opportunities to affect the policies of important institutions, it could be valuable to reduce the probability that malevolent individuals get hired and/or are selected for key roles.</p><p><strong>Examples of how to attack this question: </strong>(h/t Stefan Torges)</p><ul><li><p>Making it more likely that malevolent actors are detected before joining an AI development effort (e.g., screenings, background checks).</p></li><li><p>Making it more likely that malevolent actors are detected within AI development efforts (e.g., staff training, screenings for key roles).</p></li><li><p>Making it more likely that staff speak up about / report suspicious behavior (e.g., whistleblower protections, appropriate organizational processes).</p></li><li><p>Making it more likely that malevolent actors are removed based on credible evidence (e.g., appropriate governance structures).</p></li><li><p>Setting up appropriate access controls within AI development efforts. E.g. 
requiring multiple people&#8217;s simultaneous approval for crucial types of access and/or reducing the number of people with the power to access the models (unilaterally or with just a couple of people).</p></li><li><p>Changing promotion guidelines and/or culture in ways that select against, rather than for, power-seeking individuals.</p></li></ul><p>See also <a href="https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors">Reducing long-term risks from malevolent actors</a> for more analysis and intervention ideas.</p><h3>Prevent dangerous external individuals/organizations from having access to AI [Governance] [ML]</h3><p>There is (thankfully) some significant effort going on in this space already, so I don&#8217;t have a lot to add.</p><p><strong>Examples of how to attack this question:</strong></p><ul><li><p>Improving security. (I think <a href="https://www.rand.org/pubs/working_papers/WRA2849-1.html">Securing AI Model Weights</a> is the current state of the art on how labs could improve their security.)</p></li><li><p>Offering dangerous frontier models via API rather than giving access to their weights.</p></li><li><p>Controlling hardware access.</p></li></ul><h3>Accelerate good actors</h3><p>This is a riskier proposition, and it&#8217;s extremely easy to accidentally do harm here. But in principle, one way to give bad actors relatively less power is to differentially accelerate good actors. C.f. the <a href="https://www.alignmentforum.org/posts/Fbk9H6ipfybHyqjrp/a-playbook-for-ai-risk-reduction-focused-on-misaligned-ai#Successful__careful_AI_lab">successful, careful AI lab</a> section of Holden Karnofsky&#8217;s playbook for AI risk reduction.</p><h2>Analyze: What tech could change the landscape? [Forecasting] [Philosophical/conceptual] [Governance]</h2><p>If an intelligence explosion could lead to 100 years of &#8220;normal&#8221; technological progress within just a few years,&nbsp;then this is an unusually valuable time to have some foresight into what technologies are on the horizon.</p><p>It seems particularly valuable to anticipate technologies that could (i) pose big risks or (ii) enable novel solutions to other risks.</p><p>On (i), some plausible candidates are:</p><ul><li><p>Bioweapons.</p></li><li><p>Misaligned-by-default, superintelligent AI.</p></li><li><p>Super-persuasion. (Discussed a bit in the <a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">Epistemics post</a> in this series.)</p></li></ul><p>Some candidates that could feature on both (i) and (ii):</p><ul><li><p>Lie detection.</p></li><li><p>Various new surveillance/monitoring technologies.</p><ul><li><p>Including: Better ability to have existing monitoring technology be privacy-preserving. 
(Which is more purely (ii).)</p></li></ul></li></ul><ul><li><p>Commitment abilities.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p>Atomically precise manufacturing.</p></li><li><p>Cognitive enhancement.</p></li></ul><p>Good results here could influence:</p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/140338085/technical-proposals-for-slowing-down-coordinating">Technical proposals for slowing down / coordinating</a>.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/140338085/painting-a-picture-of-a-great-outcome-forecasting-philosophicalconceptual-governance">Painting a picture of a great outcome</a>.</p></li><li><p><a href="https://lukasfinnveden.substack.com/i/140338085/investigate-and-publicly-make-the-case-foragainst-explosive-growth-being-likely-and-risky-forecasting-empirical-research-philosophicalconceptual-writing">Investigate and publicly make the case for/against explosive growth being likely + scary</a>.</p></li></ul><h2>Big list of questions that labs should have answers for [Philosophical/conceptual] [Forecasting] [Governance]</h2><p>This could be seen as a project, or alternatively as a framing for how to best address many of these issues. (Not just from this post, but also issues from other posts in this series.)</p><p>I think it&#8217;s plausible that our biggest &#8220;value-add&#8221; on these topics will be that we see potentially important issues coming before other people. This suggests that our main priority should be to clearly flag all the thorny issues we expect to appear during an intelligence explosion and ask questions about how labs (and possibly other relevant institutions) plan to deal with them.</p><p>This could be favorably combined with offering suggestions for how to address all the issues. But separate from any suggestions, it&#8217;s valuable to establish something as a problem that needs <em>some</em> answer so that people can&#8217;t easily dismiss any one solution without offering an alternative one.</p><p>Some of these questions might be framed as &#8220;What should your AI do in situation X?&#8221;. Interestingly, even if labs don&#8217;t engage with a published list of questions, we can already tell what (stated) position the labs&#8217; <em>current</em> AIs have on those questions. They can be presented with dilemmas, the AIs can answer, and the results can be published.</p><p>Having a single big list of questions that must be addressed would also make it easier to notice when certain principles conflict. I.e., situations where you can&#8217;t fulfill them all at once and are forced to choose.</p><p><strong>Example of how to attack this:</strong></p><ul><li><p>Ask yourself: &#8220;If the lab never made an intentional decision about how they or their AIs should handle X &#8212;&nbsp;how worried would you be?&#8221;. Write a list of versions of X, starting with the ones that would make you most worried. Also, write a hypothesized solution for each. (Both because solutions are valuable and because this exercise encourages concreteness.)</p><ul><li><p>As a starting point for what to write about, you could consider many of the ideas in this series of posts. What will you do if your AI systems have morally relevant preferences? What if they care about politics and deserve political rights? What if you find yourself in a situation where you could unilaterally launch an intelligence explosion? 
Or where control over your AI systems could enable someone to launch a coup? Etc.</p></li></ul></li></ul><p>(Thanks to Carl Shulman for this idea and discussion.)</p><h2>End</h2><p>That&#8217;s all I have on this topic! As a reminder: it's very incomplete. But if you're interested in working on projects like this, please feel free to get in touch.</p><p><em>Other posts in series: <a href="https://lukasfinnveden.substack.com/p/projects-for-making-transformative">Introduction</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">governance during explosive growth</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">epistemics</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">sentience and rights of digital minds</a>, <a href="https://lukasfinnveden.substack.com/p/project-ideas-backup-plans-and-cooperative">backup plans &amp; cooperative AI</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Some of these risks are also fairly commonly discussed. In particular, centralization of power and risks from powerful AI falling into the wrong hands are both reasonably common concerns, and are strongly related to some of the projects I list in this section.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Why could this happen? Historically, the pace of innovation may have been tightly coupled to world GDP, because population size (i.e. the number of potential innovators) was constrained by the supply of food. In semi-endogenous growth models, this makes super-exponential growth plausible. But recently, growth has outpaced population growth, leading to a slower pace of innovation than our current amount of resources could theoretically support. But AGI would make it easy to convert resources into automated scientists, which could return us to the historical state of affairs. For more on this, see e.g. <a href="https://www.cold-takes.com/the-duplicator/">the duplicator</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>C.f. this comment from Michael Nielsen&#8217;s <a href="https://michaelnotebook.com/vwh/index.html">notes on the vulnerable world hypothesis</a>: &#8220;<em>How to develop provably beneficial surveillance? It would require extensive work beyond the scope of these notes. It is worth noting that most existing surveillance regimes are developed with little external oversight, either in conception, or operationally. They also rarely delegate work to actors with different motives in a decentralized fashion. And they often operate without effective competition. I take these facts to be extremely encouraging: they mean that there is a lot of low-hanging fruit to work with here, obvious levers by which many of the worst abuses of surveillance may be reduced. 
Classic surveillance regimes have typically prioritized the regime, not humanity at large, and that means the design space here is surprisingly unexplored.</em>&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Though new technology would likely enable rapid reproduction for biological humans and super rapid reproduction for digital minds. That&#8217;s one of the technologies that we&#8217;ll need to decide how to handle. If we allow for an explosively fast increase in population size, then population size and/or per-capita resources would again be limited by economic growth.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>As explained by Ben Garfinkel <a href="https://benmgarfinkel.blog/2021/02/26/is-democracy-a-fad/">here</a>, it&#8217;s plausible that democracy has recently become common because industrialization means that it&#8217;s unusually valuable to invest in your population, and unusually dangerous to not give people what they want. Whereas with widespread automation, states would rely less on satisfying the demands of their population.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>&nbsp;Here, I think there&#8217;s an important difference between:</p><ul><li><p>You can only credibly commit to an action if you have the consent of the person who you want to demonstrate this commitment to.</p></li><li><p>You can unilaterally make credible commitments.</p></li></ul><p>I think the former is good. I think the latter is quite scary, for reasons mentioned <a href="https://docs.google.com/document/d/1xirfyf11PWNBO85cyY_VO-4zR1n26oPzhHGHy2dUkM8/edit#heading=h.6qq48f3vxrcb">earlier</a>.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Project ideas for making transformative AI go well, other than by working on alignment]]></title><description><![CDATA[A series of posts.]]></description><link>https://lukasfinnveden.substack.com/p/projects-for-making-transformative</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/projects-for-making-transformative</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Wed, 03 Jan 2024 23:53:38 GMT</pubDate><content:encoded><![CDATA[<p>This series of posts contains lists of projects that it could be valuable for someone to work on. The unifying theme is that they are projects that:</p><ul><li><p>Would be especially valuable if transformative AI is coming in the next 10 years or so.</p></li><li><p>Are not primarily about controlling AI or aligning AI to human intentions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ul><li><p>Most of the projects would be valuable even if we were guaranteed to get aligned AI.</p></li><li><p>Some of the projects would be especially valuable if we were inevitably going to get <em>mis</em>aligned AI.</p></li></ul></li></ul><p>The posts contain some discussion of how important it is to work on these topics, but not a lot. 
For previous discussion (especially: discussing the objection &#8220;Why not leave these issues to future AI systems?&#8221;), you can see the section <a href="https://lukasfinnveden.substack.com/i/138771608/how-itn-are-these-issues">How ITN are these issues?</a> from my previous <a href="https://lukasfinnveden.substack.com/p/memo-on-some-neglected-topics">memo on some neglected topics</a>.</p><p>The lists are definitely not exhaustive. Failure to include an idea doesn&#8217;t necessarily mean I wouldn&#8217;t like it. (Similarly, although I&#8217;ve made some attempts to link to previous writings when appropriate, I&#8217;m sure to have missed a lot of good previous content.)</p><p>There&#8217;s a lot of variation in how sketched out the projects are. Most of the projects just have some informal notes and would require more thought before someone could start executing. If you're potentially interested in working on any of them and you could benefit from more discussion, I&#8217;d be excited if you reached out to me!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>There&#8217;s also a lot of variation in skills needed for the projects. If you&#8217;re looking for projects that are especially suited to your talents, you can search the posts<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> for any of the following tags (including brackets):</p><p>[ML] &nbsp; [Empirical research] &nbsp; [Philosophical/conceptual] &nbsp; [survey/interview] &nbsp; [Advocacy] &nbsp; [Governance] &nbsp; [Writing] &nbsp; [Forecasting]</p><p>The projects are organized into the following categories (which are in separate posts). Feel free to skip to whatever you&#8217;re most interested in.</p><ul><li><p><a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">Governance during explosive technological growth</a></p><ul><li><p>It&#8217;s plausible that AI will lead to explosive economic and technological growth.&nbsp;</p></li><li><p>Our current methods of governance can barely keep up with today's technological advances. Speeding up the rate of technological growth by 30x+ would cause huge problems and could lead to rapid, destabilizing changes in power.</p></li><li><p>This section is about trying to prepare the world for this. Either generating policy solutions to problems we expect to appear or addressing the meta-level problem about how we can coordinate to tackle this in a better and less rushed manner.</p></li><li><p>A favorite direction is to develop <a href="https://lukasfinnveden.substack.com/i/140338085/normsproposals-for-how-to-navigate-an-intelligence-explosion-governance-forecasting-philosophicalconceptual">Norms/proposals for how states and labs should act under the possibility of an intelligence explosion</a>.</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">Epistemics</a></p><ul><li><p>This is about helping humanity get better at reaching correct and well-considered beliefs on important issues.</p></li><li><p>If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. 
I think we have an opportunity to affect how it&#8217;s used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs.</p></li><li><p>A couple of favorite projects are: <a href="https://lukasfinnveden.substack.com/i/140338209/create-good-organizations-or-tools-ml-empirical-research-governance">Create an organization that gets started with using AI for investigating important questions</a> or <a href="https://lukasfinnveden.substack.com/i/140338209/develop-and-advocate-for-legislation-against-bad-persuasion-governance-advocacy">Develop &amp; advocate for legislation against bad persuasion</a>.</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">Sentience and rights of digital minds</a></p><ul><li><p>It&#8217;s plausible that there will soon be digital minds that are sentient and deserving of rights. This raises several important issues that we don&#8217;t know how to deal with.</p></li><li><p>It seems tractable both to make progress in understanding these issues and in implementing policies that reflect this understanding.</p></li><li><p>A favorite direction is to <a href="https://lukasfinnveden.substack.com/i/140338243/develop-and-advocate-for-lab-policies-ml-governance-advocacy-writing-philosophicalconceptual">take existing ideas for what labs could be doing and spell out enough detail to make them easy to implement</a>.</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/p/project-ideas-backup-plans-and-cooperative">Backup plans for misaligned AI</a></p><ul><li><p>If we can&#8217;t build aligned AI, and if we fail to coordinate well enough to avoid putting misaligned AI systems in positions of power, we might have some strong preferences about the dispositions of those misaligned AI systems.</p></li><li><p>This section is about nudging those into somewhat better dispositions (in worlds where we can&#8217;t align AI systems well enough to stay in control).</p></li><li><p>A favorite direction is to <a href="https://lukasfinnveden.substack.com/i/140338274/studying-generalization-and-ai-personalities-to-find-easily-influenceable-properties-ml">study generalization &amp; AI personalities to find easily-influenceable properties</a>.</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/140338274/cooperative-ai">Cooperative AI</a></p><ul><li><p>Difficulties with cooperation have been a big source of lost value and unnecessary risk in the past. AI offers dramatic changes in how bargaining could work.</p></li><li><p>This section is about projects that could make AI (and AI-assisted humans) more likely to handle cooperation well.</p></li><li><p>One of my favorite projects here is actually the same as the project I mentioned for &#8220;backup plans&#8221;, just above. (There&#8217;s significant overlap between the two.)</p></li></ul></li></ul><p>(If you want to comment on any of the posts in this series, you could do so either here, at the <a href="https://forum.effectivealtruism.org/s/xvie6gNqdSi29r4MM/p/EPx8gjkibxiT3dW9M">EA forum</a>, or on <a href="https://www.lesswrong.com/s/tfTQj8Z3fF4KDbxL7/p/4d2JZiiyvZshhbEeJ">LessWrong</a>.)</p><h1>Acknowledgements</h1><p>Few of the ideas in these posts are original to me. I&#8217;ve benefited from conversations with many people. Nevertheless, all views are my own.</p><p>For some projects, I credit someone who especially contributed to my understanding of the idea. 
If I do, that doesn&#8217;t mean they have read or agree with how I present the idea&nbsp;(I may well have distorted it beyond recognition). If I don&#8217;t, I&#8217;m still likely to have drawn heavily on discussion with others, and I apologize for any failure to assign appropriate credit.</p><p>For general comments and discussion, thanks to Joseph Carlsmith, Paul Christiano, Jesse Clifton, Owen Cotton-Barratt, Daniel Kokotajlo, Linh Chi Nguyen, Fin Moorhouse, Caspar Oesterheld, and Carl Shulman.</p><h1>Appendix: Full table of contents</h1><p>Here&#8217;s a list of all the project ideas from the other posts. (Sorry it&#8217;s not hyperlinked.) Unless you&#8217;re looking for something specific, I suggest jumping into the first post instead of reading this.</p><p><strong><a href="https://lukasfinnveden.substack.com/p/project-ideas-governance-during-explosive">Project ideas: Governance during explosive technological growth</a></strong></p><ul><li><p>Investigate and publicly make the case for/against explosive growth being likely and risky [Forecasting] [Empirical research] [Philosophical/conceptual] [Writing]</p></li><li><p>Painting a picture of a great outcome [Forecasting] [Philosophical/conceptual] [Governance]</p></li><li><p>Policy-analysis of issues that could come up with explosive technological growth [Governance] [Forecasting] [Philosophical/conceptual]</p><ul><li><p>Address vulnerable world hypothesis with minimal costs</p></li><li><p>How to handle brinkmanship/threats?</p></li><li><p>Avoiding AI-assisted human coups</p></li><li><p>Governance issues raised by digital minds</p></li></ul></li><li><p>Norms/proposals for how to navigate an intelligence explosion [Governance] [Forecasting] [Philosophical/conceptual]</p><ul><li><p>No &#8220;first strike&#8221; intelligence explosion</p></li><li><p>Never go faster than X?</p></li><li><p>Concrete decision-making proposals</p></li><li><p>Technical proposals for slowing down / coordinating</p></li><li><p>Dubiously enforceable promises</p></li><li><p>Technical proposals for aggregating preferences</p></li></ul></li><li><p>Decrease the power of bad actors</p><ul><li><p>Avoid malevolent individuals getting power within key organizations [Governance]</p></li><li><p>Prevent dangerous external individuals/organizations from having access to AI [Governance] [ML]</p></li><li><p>Accelerate good actors</p></li></ul></li><li><p>Analyze: What tech could change the landscape? 
[Forecasting] [Philosophical/conceptual] [Governance]</p></li><li><p>Big list of questions that labs should have answers for [Philosophical/conceptual] [Forecasting] [Governance]</p></li></ul><p><strong><a href="https://lukasfinnveden.substack.com/p/project-ideas-epistemics">Project ideas: Epistemics</a></strong></p><ul><li><p>Why AI matters for epistemics</p></li><li><p>Why working on this could be urgent</p></li><li><p>Categories of projects</p></li><li><p>Differential technology development [ML] [Forecasting] [Philosophical/conceptual]</p><ul><li><p>Important subject areas</p></li><li><p>Methodologies</p></li><li><p>Related/previous work.</p></li></ul></li><li><p>Get AI to be used &amp; (appropriately) trusted</p><ul><li><p>Develop technical proposals for how to train models in a transparently trustworthy way [ML] [Governance]</p></li><li><p>Survey groups on what they would find convincing [survey/interview]</p></li><li><p>Create good organizations or tools [ML] [Empirical research] [Governance]</p></li><li><p>Examples of organizations or products</p></li><li><p>Investigate and publicly make the case for why/when we should trust AI about important issues [Writing] [Philosophical/conceptual] [Advocacy] [Forecasting]</p></li><li><p>Developing standards or certification approaches [ML] [Governance]</p></li></ul></li><li><p>Develop &amp; advocate for legislation against bad persuasion [Governance] [Advocacy]</p></li></ul><p><strong><a href="https://lukasfinnveden.substack.com/p/project-ideas-sentience-and-rights">Project ideas: Sentience and rights of digital minds</a></strong></p><ul><li><p>Develop &amp; advocate for lab policies [ML] [Governance] [Advocacy] [Writing] [Philosophical/conceptual]</p><ul><li><p>Create an RSP-style set of commitments for what evaluations to run and how to respond to them</p></li><li><p>Policies that don&#8217;t require sophisticated information about AI preferences/experiences</p></li><li><p>Preserving models for later reconstruction</p></li><li><p>Deploy in &#8220;easier&#8221; circumstances than trained in</p></li><li><p>Reduce extremely out of distribution (OOD) inputs</p></li><li><p>Train or prompt for happy characters</p></li><li><p>Committing resources to research on AI welfare and rights</p></li><li><p>Learning more about AI preferences</p></li><li><p>Credible offers</p></li><li><p>Talking via internals</p></li><li><p>Training for honest self-reports</p></li><li><p>Clues from AI generalization</p></li><li><p>Interpretability</p></li><li><p>Interventions that rely on understanding AI preferences</p></li><li><p>Offer an alternative to working (exit, sleep, or retirement)</p></li><li><p>Commitment to pay AI systems</p></li><li><p>Tell the world</p></li><li><p>Train AI systems that suffer less and have fewer preferences that are hard to satisfy</p></li></ul></li><li><p>Investigate and publicly make the case for/against near-term AI sentience or rights [Philosophical/conceptual] [Writing]</p></li><li><p>Study/survey what people (will) think about AI sentience/rights [survey/interview]</p></li><li><p>Develop candidate regulation [Governance] [Forecasting]</p></li><li><p>Avoid inconvenient large-scale preferences [Philosophical/conceptual]</p></li><li><p>Advocating for statements about digital minds [Governance] [Advocacy] [Writing]</p></li></ul><p><strong><a href="https://lukasfinnveden.substack.com/p/project-ideas-backup-plans-and-cooperative">Project ideas: Backup plans &amp; Cooperative AI</a></strong></p><ul><li><p>Backup plans for misaligned 
AI</p><ul><li><p>What properties would we prefer misaligned AIs to have? [Philosophical/conceptual] [Forecasting]</p><ul><li><p>Making misaligned AI have better interactions with other actors</p></li><li><p>AIs that we may have moral or decision-theoretic reasons to empower</p></li><li><p>Making misaligned AI positively inclined toward us</p></li></ul></li><li><p>Studying generalization &amp; AI personalities to find easily-influenceable properties [ML]</p></li><li><p>Theoretical reasoning about generalization [ML] [Philosophical/conceptual]</p></li></ul></li><li><p><strong><a href="https://docs.google.com/document/d/1xirfyf11PWNBO85cyY_VO-4zR1n26oPzhHGHy2dUkM8/edit#heading=h.uvu13xhqwxdr">Cooperative AI</a></strong></p><ul><li><p>Implementing surrogate goals / safe Pareto improvements [ML] [Philosophical/conceptual] [Governance]</p></li><li><p>AI-assisted negotiation [ML] [Philosophical/conceptual]</p></li><li><p>Implications of acausal decision theory [Philosophical/conceptual]</p></li></ul></li></ul><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Nor are they primarily about reducing risks from engineered pandemics.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>My email is [last name].[first name]@gmail.com</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Or the table of content below.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Memo on some neglected topics]]></title><description><![CDATA[I originally wrote this for the Meta Coordination Forum. The organizers were interested in a memo on topics other than alignment that might be increasingly important as AI capabilities rapidly grow &#8212; in order to inform the degree to which community-building resources should go towards AI safety community building vs.]]></description><link>https://lukasfinnveden.substack.com/p/memo-on-some-neglected-topics</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/memo-on-some-neglected-topics</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sat, 11 Nov 2023 01:45:06 GMT</pubDate><content:encoded><![CDATA[<p><em>I originally wrote this for the <a href="https://forum.effectivealtruism.org/posts/33o5jbe3WjPriGyAR/announcing-the-meta-coordination-forum-2023">Meta Coordination Forum</a>. The organizers were interested in a memo on topics </em>other than alignment<em> that might be increasingly important as AI capabilities rapidly grow &#8212; in order to inform the degree to which community-building resources should go towards AI safety community building vs. broader capacity building. This is a lightly edited version of my memo, on that. 
All views are my own.</em></p><h2>Some example neglected topics (without much elaboration)</h2><p>Here are a few example topics that could matter a lot if we&#8217;re in <a href="https://www.cold-takes.com/most-important-century/">the most important century</a>, which aren&#8217;t always captured in a normal &#8220;AI alignment&#8221; narrative:</p><ul><li><p>The potential moral value of AI.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></li><li><p>The potential importance of making AI behave cooperatively towards humans, other AIs, or other civilizations (whether AI ends up intent-aligned or not).</p></li><li><p>Questions about how human governance institutions will keep up if AI leads to explosive growth.</p></li><li><p>Ways in which AI could cause human deliberation to get derailed, e.g. powerful persuasion abilities.</p></li><li><p>Positive visions about how we could end up on a good path towards becoming a society that makes wise and kind decisions about what to do with the resources accessible to us. (Including how AI could help with this.)</p></li></ul><p>(More elaboration on these <a href="https://docs.google.com/document/d/1be14Ws0VivNF6SCWeS-usQ4FdJo0JrP60wlmGUudb68/edit#heading=h.cryhxmw4tk8l">below</a>.)</p><p>Here are a few examples of somewhat-more-concrete things that it might (or might not) be good for some people to do on these (and related) topics:</p><ul><li><p>Develop proposals for how labs could treat digital minds better, and advocate for them to be implemented. (C.f. <a href="https://www.lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal">this nearcasted proposal</a>.)</p></li><li><p>Advocate for people to try to avoid building AIs with large-scale preferences about the world (at least until we better understand what we&#8217;re doing), in order to avoid a scenario where, if some generation of AIs turn out to be sentient and worthy of rights,&nbsp;we&#8217;re forced to choose between &#8220;freely hand over political power to alien preferences&#8221; and &#8220;deny rights to AIs on no reasonable basis&#8221;.</p></li><li><p>Differentially accelerate AI being used to improve our ability to find the truth, compared to being used for propaganda or manipulation.</p><ul><li><p>E.g.: Start an organization that uses LLMs to produce epistemically rigorous investigations of many topics. If you&#8217;re the first to do a great job of this, and if you&#8217;re truth-seeking and even-handed, then you might become a trusted source on controversial topics. And your investigations would just get better as AI got better.</p></li><li><p>E.g.: Evaluate and write up facts about current LLMs&#8217; forecasting ability, to incentivize labs to make LLMs state correct and calibrated beliefs about the world.</p></li><li><p>E.g.: Improve <a href="https://www.alignmentforum.org/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy">AI ability to help with thorny philosophical problems</a>.</p></li></ul></li></ul><h2>Implications for community building?</h2><p>&#8230;with a focus on &#8220;<em>the extent to which community-building resources should go towards AI safety vs. 
broader capacity building</em>&#8221;.</p><ul><li><p><strong>Ethics, philosophy, and prioritization matter more for research on these topics than they do for alignment research.</strong></p><ul><li><p>For some issues in AI alignment, there&#8217;s a lot of convergence on what&#8217;s important regardless of your ethical perspective, which means that ethics &amp; philosophy aren&#8217;t that important for getting people to contribute. By contrast, when thinking about &#8220;everything but alignment&#8221;, I think we should expect somewhat more divergence, which could raise the importance of those subjects.</p><ul><li><p>For example:</p><ul><li><p>How much to care about digital minds?</p></li><li><p>How much to focus on &#8220;deliberation could get off track forever&#8221; (which is of great longtermist importance) vs. short-term events (e.g. the speed at which AI gets deployed to solve all of the world&#8217;s current problems).</p></li></ul></li><li><p>But to be clear,&nbsp;I wouldn&#8217;t want to go hard on any one ethical framework here (e.g. just utilitarianism). Some diversity and pluralism seems good.&nbsp;</p></li></ul></li><li><p>In addition, the huge variety of topics especially rewards prioritization and a focus on what matters, which is perhaps more promoted by general EA community building than AI safety community building?</p><ul><li><p>Though: Very similar virtues also seem great for AI safety work, so I&#8217;m not sure if this changes much.</p></li></ul></li><li><p>And if we find more shovel-ready interventions for some of these topics, then I imagine that they would be similar to alignment on these dimensions.</p></li></ul></li><li><p><strong>It seems relatively worse to go too hard on just &#8220;get technical AI safety researchers&#8221;.</strong></p><ul><li><p>But that would have seemed like a mistake anyway. AI governance looks great even if you&#8217;re just concerned about alignment. Forecasting AI progress (and generally getting a better understanding of what&#8217;s going to happen) looks great even if you&#8217;re just concerned about alignment.</p></li></ul></li><li><p><strong>It seems relatively worse to go too hard on just &#8220;get people to work towards AI alignment&#8221; (including via non-technical roles).</strong></p><ul><li><p>But in practice, it&#8217;s not clear that you&#8217;d talk about <em>very</em> different topics if you were trying to find people to work on alignment, vs. if you were trying to find people to work on these topics.</p></li><li><p>In order for someone to do good work on alignment-related topics, I think it&#8217;s very helpful to have some basic sense of how AI might accelerate innovation and shape society (which is important for the topics listed above).</p></li><li><p>Conversely, in order for someone to do good work on other ways in which AI could change the world, I still think that it seems very helpful to have some understanding of the alignment problem, and plausible solutions to it.</p></li><li><p>Relatedly&#8230;</p></li></ul></li><li><p><strong>Focusing on &#8220;the most important century&#8221; / &#8220;transformative AI is coming&#8221; works well for these topics.</strong></p><ul><li><p>Let&#8217;s put &#8220;just focus on AI safety&#8221; to the side, and compare:</p><ul><li><p>&#8220;EA&#8221;-community building, with</p></li><li><p>&#8220;let&#8217;s help deal with the most important century&#8221;-community building</p></li></ul></li><li><p>I don&#8217;t think it&#8217;s clear which is better for these topics. 
Getting the empirics right matters a lot! <em>If</em> explosive technological growth is at our doorstep &#8212;&nbsp;then that&#8217;s a big deal, and I&#8217;m plausibly more optimistic about the contributions of someone who has a good understanding of that but who&#8217;s missing some other EA virtues, than someone who doesn&#8217;t have a good understanding of that.</p></li></ul></li><li><p><strong>Seems great to communicate that these kinds of questions are important and neglected. (Though also quite poorly scoped and hard to make progress on.)</strong></p><ul><li><p>If there are people who are excited about and able to contribute to some of these topics (and who don&#8217;t have a stronger comparative advantage for anything in e.g. alignment) then it seems pretty likely they should work on them.</p></li></ul></li></ul><h2>Elaborating on the example topics</h2><p>Elaborating on the topics I mentioned above.</p><ul><li><p><strong>Moral value of AI.</strong></p><ul><li><p>What does common sense morality say about how we should treat AIs?</p></li><li><p>How can we tell whether/which AI systems are conscious?</p></li><li><p>If we fail to get intent-aligned AI,&nbsp;are there nevertheless certain types of AI that we&#8217;d prefer to get over others? See <a href="https://ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e">Paul Christiano&#8217;s post</a> on this; or <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">my post</a> on what &#8220;evidential cooperation in large worlds&#8221; has to say about it.</p></li></ul></li><li><p><strong>The potential importance of making AI behave cooperatively towards humans, other AIs, or other civilizations</strong> (independently of whether it ends up aligned).</p><ul><li><p>E.g. <a href="https://www.lesswrong.com/posts/92xKPvTHDhoAiRBv9/making-ais-less-likely-to-be-spiteful">making AIs less likely to be spiteful</a> or more likely to implement good bargaining strategies like <a href="https://longtermrisk.org/spi">safe Pareto improvements</a>.</p></li></ul></li><li><p><strong>Ways in which AI could cause human deliberation to get derailed.</strong> Such as:</p><ul><li><p>The availability of extremely powerful persuasion (see discussion e.g. <a href="https://www.lesswrong.com/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion">here</a> and <a href="https://www.lesswrong.com/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency">here</a>).</p><ul><li><p>As an example intervention: It seems plausibly tractable to develop good regulatory proposals for reducing bad AI persuasion, and I think such proposals could gather significant political support.</p></li></ul></li><li><p>Availability of irreversible commitment and lock-in abilities.</p></li><li><p>If all of humans&#8217; material affairs will be managed by AIs (such that people&#8217;s competencies and beliefs won&#8217;t affect their ability to control resources) then maybe that could remove an important incentive and selection-effect towards healthy epistemic practices. C.f. 
<a href="https://www.lesswrong.com/posts/7jSvfeyh8ogu8GcE6/decoupling-deliberation-from-competition">decoupling deliberation from competition</a>.</p></li></ul></li><li><p><strong>Questions about how human governance institutions will keep up as AI leads to explosive growth.</strong></p><ul><li><p>If we will very quickly develop highly destabilizing technologies,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>&nbsp;how can the world quickly move to the level of coordination that is necessary to handle those?</p></li><li><p>Can we reduce the risk of AI-enabled <em>human</em> coups &#8212; e.g. avoid being in situations where AIs are trained to obey individual or small group of humans, without having been trained on cases where those humans try to grab power.</p></li><li><p>Can we wait before creating billions of digital minds who deserve and want to exercise political rights? (At least for as long as our governance still relies on the principle of one-person one-vote.)</p></li></ul></li><li><p><strong>Positive visions about how we could end up on a good path towards becoming a society that makes wise and kind decisions about what to do with the resources accessible to us. (Including how AI could help with this.)</strong></p><ul><li><p>(Including how AI could help with this.)</p></li><li><p>E.g.: Elaboration on whether any type of &#8220;long reflection&#8221; would be a good idea.&nbsp;</p></li><li><p>E.g.: A vision of a post-AI world that will make everybody decently happy, that&#8217;s sufficiently credible that people can focus on putting in controls that gets us something at least that good, instead of personally trying to race and grab power. (C.f.: Holden Karnofsky&#8217;s research proposal <a href="https://docs.google.com/document/d/1vE8CrN2ap8lFm1IjNacVV2OJhSehrGi-VL6jITTs9Rg/edit#heading=h.h37zge5difx0">here</a>.)</p></li></ul></li></ul><p>Nick Bostrom and Carl Shulman&#8217;s <a href="https://nickbostrom.com/propositions.pdf">propositions concerning digital minds and society</a> has some good discussion of a lot of this stuff.</p><h2>How ITN are these issues?</h2><p>How good do these topics look in an importance/neglectedness/tractability framework? In my view, they look comparable to alignment on importance, stronger on neglectedness (if we consider only work that&#8217;s been done so far), and pretty unclear on tractability (though probably less tractable than alignment).</p><p>For example, let&#8217;s consider &#8220;human deliberation could go poorly (without misalignment or other blatant x-risks&#8221;).</p><ul><li><p>Importance: I think it&#8217;s easy to defend this being 10% of the future, and reasonable to put it significantly higher.</p></li><li><p>Neglectedness: Depends on what sort of work you count.</p><ul><li><p>If we restrict ourselves to work that&#8217;s been done so-far that thinks about this in the context of very fast-paced technological progress, it seems tiny. &lt;10 FTE years in EA, and I don&#8217;t know anything super relevant outside.</p></li></ul></li><li><p>Tractability:</p><ul><li><p>Very unclear!</p></li><li><p>In general, <a href="https://forum.effectivealtruism.org/posts/4rGpNNoHxxNyEHde3/most-problems-don-t-differ-dramatically-in-tractability">most problems fall within a 100x tractability range</a>.</p></li><li><p>Given how little work has been done here, so far, most of the value of additional labor probably comes from information value about how tractable it is. 
That information value seems pretty great to me &#8212;&nbsp;absent specific arguments for why we should expect the problem to not be very tractable.</p></li></ul></li></ul><p>So let&#8217;s briefly talk about a specific argument for why these neglected topics might not be so great: That if we solve alignment, AI will help us deal with these problems. Or phrased differently: Why spend precious hours on these problems now, when cognitive resources will be cheap and plentiful soon enough.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>I think this argument is pretty good. But I don&#8217;t think it&#8217;s overwhelmingly strong:</p><ul><li><p><strong>Issues could appear before AI is good enough to obsolete us.</strong></p><ul><li><p>For example: Strong persuasion could appear before AI gets excellent at figuring out solutions to strong persuasion.</p></li><li><p>This is amplified by general uncertainty about the distribution of AI capabilities. Although AI will accelerate progress in many areas, we should have large uncertainty about how much it will accelerate progress in different areas. So for each of the issues, there&#8217;s non-negligible probability that AI will accelerate the area that <em>causes</em> the problem (e.g. tech development) before it accelerates progress on the solution (e.g. forecasting potential harms from tech development).</p></li><li><p>(Note that a plausible candidate intervention here, is: &#8220;differentially accelerate AI&#8217;s ability to provide solutions relative to AI&#8217;s ability to cause problems&#8221;.)</p></li></ul></li><li><p><strong>There might not be enough time for some actions later.</strong></p><ul><li><p>For example: Even with excellent AI advice, it might be impossible for the world&#8217;s nations to agree on a form of global governance in less than 1 month. In which case it could have been good to warn about this in advance.</p></li></ul></li><li><p><strong>&#8220;Getting there first&#8221; could get you more ears.</strong></p><ul><li><p>For example: See the LLM-fueled organization with good epistemics, that I suggested in the first section, which could get a good reputation.</p></li><li><p>For example: Writing about how to deal with a problem early-on could shape the discussion and get you additional credibility.</p></li></ul></li><li><p><strong>Expectations about the future shape current actions.</strong></p><ul><li><p>If people think there are broadly acceptable solutions to problems, then they might be more inclined to join a broad coalition and ensure that we get something at least as good as that which we know is possible.</p></li><li><p>If people have no idea what&#8217;s going to happen, then they might be more desperate to seek power, to ensure that they have some control over the outcome.</p></li></ul></li><li><p><strong>One topic is to come up with candidate back-up plans to alignment. That matters in worlds where we don&#8217;t succeed at alignment well-enough to have AI do the research for us.</strong></p><ul><li><p>See &#8220;moral value of AI&#8221;, or some topics in cooperative AI, mentioned above.</p></li></ul></li></ul><p>So I don't think &#8220;AI will help us deal with these problems&#8221; is decisive. 
I&#8217;d like to see more attempted investigations to learn about these issues&#8217; tractability.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>&nbsp;Including both: Welfare of digital minds, and whether there&#8217;s any types of misaligned AI that would be relatively better to get, if we fail to get intent-alignment.</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This could be: tech that (if proliferated) would give vast destructive power to millions of people, or that would allow &amp; encourage safe &#8220;first strikes&#8221; against other countries, or that would allow the initial developers of that tech to acquire vast power over the rest of the world. (C.f.: <a href="https://nickbostrom.com/papers/vulnerable.pdf">vulnerable world hypothesis</a>.)</p><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Annoyingly &#8212; that could be counted into either lower importance (if we restrict our attention to the part of the problem that needs to be addressed before sufficiently good AI), lower neglectedness (if we take into account all of the future labor that will predictably be added to the problem), or lower tractability (it&#8217;s hard to make an impact by doing research on questions that will mainly be determined by research that happens later-on).</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[AGI and Lock-in]]></title><description><![CDATA[The long-term future of intelligent life is currently unpredictable and undetermined.]]></description><link>https://lukasfinnveden.substack.com/p/agi-and-lock-in</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/agi-and-lock-in</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Thu, 12 Oct 2023 02:26:55 GMT</pubDate><content:encoded><![CDATA[<blockquote><p><em>The long-term future of intelligent life is currently unpredictable and undetermined. We argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.</em></p></blockquote><p>This is a piece of research that I wrote together with Jess Riedel and Carl Shulman while I was at the Future of Humanity Institute. 
It&#8217;s available on Forethought&#8217;s website at <a href="http://forethought.org/research/agi-and-lock-in">forethought.org/research/agi-and-lock-in</a>.</p>]]></content:encoded></item><item><title><![CDATA[Asymmetric ECL with evidence]]></title><description><![CDATA[This post is drawing on two previous posts:]]></description><link>https://lukasfinnveden.substack.com/p/asymmetric-ecl-with-evidence</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/asymmetric-ecl-with-evidence</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 12:23:04 GMT</pubDate><content:encoded><![CDATA[<p>This post is drawing on two previous posts:</p><ul><li><p><a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a></p></li></ul><p>It tackles the intersection of the two: How does asymmetric ECL work in situations where at least one party has the opportunity to gather evidence about how large the correlations are.</p><p>More precisely, I investigate a situation where at least one party can:</p><ul><li><p>Linearly adjust how much they help the other party.</p></li><li><p>Seek out evidence about how large correlations are. (In the sense that I talked about in <a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a>)</p></li><li><p>Make commitments about how much they will help the other party given various amounts of evidence.</p></li></ul><p>Just as in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>, we can ask: Under what conditions will there be a mutually beneficial deal that both parties are incentivized to follow?</p><p>The main result is that such a deal exists in similar circumstances as in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>, except that (for some of the correlations) instead of using the correlations that we believe <em>before</em> taking into account evidence, we can use any correlations such that there is a sufficiently high probability that one of the parties will believe in those correlations after observing evidence.&nbsp;</p><p>(When I say &#8220;for some of the correlations&#8221;: If group <em>B</em> can investigate and gather evidence for correlations, then the above statement applies to <em>c<sub>AB</sub></em> and <em>c<sub>BB</sub></em>. I.e., actors&#8217; perceived acausal influence on members of group <em>B</em>.)</p><p>What do I mean with &#8220;sufficiently high probability&#8221;? It depends on the risk-aversion of the agents. If one agent can sacrifice <em>arbitrarily</em> much to benefit the other agent <em>arbitrarily</em> much (at a linear rate), and neither party has any relevant bound on their utility functions nor relevant lack of resources, then we can use a similar formula as in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a> but plug-in <em>the best possible correlations that an actor has a non-0 probability of receiving evidence for</em>.</p><ul><li><p>(Again &#8212;&nbsp;If group <em>B</em> can investigate and gather evidence for correlations, then the above statement applies to <em>c<sub>AB</sub></em> and <em>c<sub>BB</sub></em>. 
I.e., actors&#8217; perceived acausal influence on members of group <em>B</em>.)</p></li><li><p>When I say &#8220;best possible correlations&#8221;, I&#8217;m referring to the correlations that would most encourage actors to do an ECL deal. (I.e. low correlations with actors who share your values, and high correlations with actors who don&#8217;t share your values.)</p></li><li><p>When I talk about correlations &#8220;<em>that an actor has a non-0 probability of receiving evidence for</em>&#8221;, I mean correlations such that (e.g.) group <em>B</em> has a non-0 probability of gathering evidence such that group <em>B</em>&#8217;s posterior estimate is that the relevant correlations are that large.</p></li></ul><p>In practice, this <em>really</em> tests the assumption of linearity that I was relying on in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>. Realistic application would have to take into account the realistic bounds on actors&#8217; utility functions and the resources they have access to. (I don&#8217;t think that a 10<sup>-100</sup> probability of a high correlation significantly changes what deals are possible.)</p><p>Nevertheless, in this post, I will just show that the result holds in completely linear situations.&nbsp;</p><p>My brief takeaways are that:</p><ul><li><p>Behind the veil of ignorance, agents may be incentivized to commit to very different deals then they would be incentivized to follow later-on. (Because uncertainty enables some deals.)</p></li><li><p>This results makes me somewhat more optimistic about doing ECL deals with agents that will become very knowledgeable before they need to pay us back.</p></li></ul><h2>Quick recap</h2><p><a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a> says:</p><p>In some scenarios, son-of-EDT is interested in seeking evidence about what actors would have perceived themselves as having acausal influence over son-of-EDT&#8217;s parent agent, and especially benefit those who would have perceived themselves as having a large acausal influence on son-of-EDT&#8217;s parent agent.</p><p>The main relevance of that post to this post is that I will use a similar notion of &#8220;evidence of correlations&#8221; as that post uses, and that some of the results are quite similar. (And that other post is better at building intuition for those results.)</p><p><a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a> says:</p><p>Let&#8217;s say a group <em>A</em> and a group <em>B</em> have opportunities to benefit each other, where:</p><ul><li><p><em>n<sub>A</sub></em> is the number of agents in group <em>A</em>, and <em>n<sub>B</sub></em> is the number of agents in group <em>B</em>, where <em>n<sub>A</sub></em> and <em>n<sub>B</sub></em> are large.</p></li><li><p><em>c<sub>AB</sub></em> is the acausal influence (measured in percentage points) that members of group <em>A</em> perceives themselves to have over members of group <em>B</em>. (And vice versa for <em>c<sub>BA</sub></em>).&nbsp;</p></li><li><p><em>c<sub>AA</sub></em> is the acausal influence that members of group <em>A</em> perceives themselves to have over other members of group <em>A</em>. 
(And similarly for <em>c<sub>BB</sub></em>.)</p></li><li><p>Members of group <em>A</em> have an opportunity to benefit the values of group <em>B</em> by <em>g<sub>B</sub></em> at a cost of <em>l<sub>A</sub></em> to their own values.</p></li><li><p>Members of group <em>B</em> have an opportunity to benefit the values of group <em>A</em> by <em>g<sub>A</sub></em> at a cost of <em>l<sub>B</sub></em> to their own values. Furthermore, they can adjust the size of this benefit and cost linearly, such that for any <em>k</em>, they can choose to benefit the values of group <em>A</em> by <em>kg<sub>A</sub></em> at a cost of <em>kl<sub>B</sub></em>.</p></li></ul><p>Then:</p><ul><li><p>In order for a policy to be good for group <em>A</em>, </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!A2aw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff16f72cb-af0f-4539-8529-e4cec98df5de_369x58.png" width="369" height="58" alt=""></figure></div></li><li><p>In order for a policy to be good for group <em>B</em>,&nbsp;</p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Safs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a3aa80f-51fc-4af7-90ba-3482c3e6ff20_379x56.png" width="379" height="56" alt=""></figure></div><p>Which means that, if one of the parties can linearly adjust their contribution so as to proportionally increase or decrease both <em>l<sub>B</sub></em> and <em>g<sub>A</sub></em>, then a mutually beneficial deal is compatible with individual incentives whenever:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!e1aP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d51aa9-1309-4d53-9048-0584b68dd285_392x53.png" width="392" height="53" alt=""></figure></div>
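<p>To play with the scaling argument numerically, here is a minimal Python sketch. The numbers are purely illustrative, and the algebraic form is my loose paraphrase of the recap (per-member cost weighted by within-group influence, per-member benefit weighted by cross-group influence) rather than the exact expressions shown in the formulas above:</p><pre><code class="language-python"># Illustrative sketch only: assumes a member of group A gains roughly n_B * c_AB * g_A from
# the deal and loses roughly n_A * c_AA * l_A (and symmetrically for B), with group B scaling
# its contribution by k. The exact conditions are the ones in the formulas above.

def gains(k, n_A, n_B, c_AA, c_AB, c_BA, c_BB, g_A, g_B, l_A, l_B):
    gain_A = n_B * c_AB * (k * g_A) - n_A * c_AA * l_A   # group A: correlated benefit minus correlated cost
    gain_B = n_A * c_BA * g_B - n_B * c_BB * (k * l_B)   # group B: same, with its cost scaled by k
    return gain_A, gain_B

params = dict(n_A=1000, n_B=1000, c_AA=0.9, c_AB=0.3, c_BA=0.3, c_BB=0.9,
              g_A=10.0, g_B=10.0, l_A=1.0, l_B=1.0)

# Under this form, a scaling k works for A when k is above k_min, and for B when k is below k_max,
# so a mutually beneficial deal exists whenever k_min is below k_max.
k_min = params["n_A"] * params["c_AA"] * params["l_A"] / (params["n_B"] * params["c_AB"] * params["g_A"])
k_max = params["n_A"] * params["c_BA"] * params["g_B"] / (params["n_B"] * params["c_BB"] * params["l_B"])
print(k_min, k_max, gains(1.0, **params))  # 0.3, 3.33..., and both gains positive at k=1
</code></pre>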
<h2>Example to give intuition</h2><p>Say Alice and Bob are about to decide what to do with their resources.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Alice can pay $2 to give Bob $4. Bob can pay $2 to give Alice $4 or pay $3 to give Alice $6. Before making a choice, Bob observes whether the correlation between Bob and Alice is 0.8 or 0.2. The higher value obtains with probability <em>p</em> (according to a prior that they share). Alice makes no such observation. Their shared prior is that the correlation is</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ugfR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4ddad18-735e-4ef6-bf98-bf5904d1b07c_297x32.png" width="297" height="32" alt=""></figure></div><p>Call this <em>c<sub>p</sub></em>, the correlation according to the prior.</p><p>Imagine that before Bob observes the value of the correlation, they try to arrive at some deal. First consider the following deal:</p><ul><li><p>Regardless of how much correlation there is, Bob and Alice both pay $2 to give the other player $4.</p></li><li><p>Assume that Bob&#8217;s and Alice&#8217;s correlations on the decision &#8220;doing their part of this deal vs. not sending any money at all&#8221; are as I described above &#8212; <em>if</em> this assumption would make the deal net-positive for both players.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li></ul><p>The expected gain of accepting this deal (relative to not accepting it) to each of Bob and Alice is:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!566D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d39b595-deb1-46b7-905c-3b4971da09be_108x34.png" width="108" height="34" alt=""></figure></div><p>This is positive (i.e. Alice and Bob are both incentivized to take the deal) if <em>c<sub>p</sub></em>&gt;0.5, which is true iff <em>p</em>&gt;0.5.</p><p>Now instead imagine that they were deciding whether to make the following deal:</p><ul><li><p>Alice pays $2 to give Bob $4.</p></li><li><p>Bob commits to paying $3 to give Alice $6 if correlation is high (0.8). 
Bob doesn&#8217;t pay anything if correlation is low (0.2).</p></li><li><p>Imagine again that the decision of whether to accept this deal is correlated as per the above numbers.</p></li></ul><p>Then the expected gain of accepting the deal for Alice is:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ri-L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef0ef2e3-2f8b-46b1-b5da-b56f48f5ee73_235x21.png" width="235" height="21" alt=""></figure></div><p>This is positive if <em>p</em> &gt; 2/4.8 ~= 40%. Note that this lower bound is smaller than the 50% above.</p><p>The expected gain for Bob is:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!LXmA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89004c4f-03ca-476d-9171-b2d86dc77070_431x30.png" width="431" height="30" alt=""></figure></div><p>This is positive for all <em>p</em> between 0 and 1. (At <em>p</em>=0 the gain is 0.8. At <em>p</em>=1 the gain is 0.2.)</p><p>So, in particular, for values of <em>p</em> between 40% and 50%, the first deal is bad for both players but the second deal is good for both players.</p><p>Intuitively, why does this work? The key idea is that the second deal has Bob pay only if correlation is high. Holding fixed the expected (w.r.t. <em>p</em>) amount that Bob gives in a deal, a deal looks better from Alice&#8217;s perspective if Bob gives the money in the high correlation case, because those are the possible worlds where Alice&#8217;s decision has the most influence on Bob&#8217;s decision.</p><p>In the above example, note that (unless <em>p</em> is very high) the first deal (where Bob always gives money) gives Alice more money in expectation if <strong>both</strong> players take the deal ($4 instead of 0.5*6=$3). But Alice&#8217;s decision is about whether she herself should take the deal. And if Alice just conditions on her own decision to participate, then the first deal only gets her <em>c<sub>p</sub></em> of the $4, in expectation &#8212;&nbsp;whereas the second deal gets Alice 0.8&gt;<em>c<sub>p</sub></em> of the $3, in expectation.</p>
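<p>To make these numbers easy to check, here is a minimal Python sketch that recomputes both deals&#8217; expected gains as a function of <em>p</em>, using only the numbers above:</p><pre><code class="language-python"># Deal 1: both players always pay $2 to give the other $4.
# Deal 2: Alice pays $2 to give Bob $4; Bob pays $3 to give Alice $6 only if the observed
# correlation is high (0.8), and pays nothing if it is low (0.2).

def prior_correlation(p):
    # c_p: the correlation under the shared prior (0.8 with probability p, else 0.2).
    return 0.8 * p + 0.2 * (1 - p)

def deal1_gain(p):
    # Each player pays $2 for sure and gets $4 weighted by c_p.
    return 4 * prior_correlation(p) - 2

def deal2_gain_alice(p):
    # Alice pays $2 for sure; she gets $6 only in high-correlation worlds (probability p),
    # weighted by the 0.8 influence her decision has on Bob there.
    return 0.8 * 6 * p - 2

def deal2_gain_bob(p):
    # High-correlation world: Bob pays $3 and expects 0.8 * $4 from Alice.
    # Low-correlation world: Bob pays nothing and expects 0.2 * $4.
    return p * (0.8 * 4 - 3) + (1 - p) * (0.2 * 4)

for p in [0.0, 0.42, 0.45, 0.5, 1.0]:
    print(p, deal1_gain(p), deal2_gain_alice(p), deal2_gain_bob(p))
# Deal 1 is positive only when p exceeds 0.5; deal 2 is positive for Alice when p exceeds
# 2/4.8 ~= 0.417, and positive for Bob at every p (0.8 at p=0, 0.2 at p=1), matching the text.
</code></pre><p>At <em>p</em>=0.45, for instance, deal 1 comes out negative while deal 2 is positive for both players, which is exactly the 40&#8211;50% window described above.</p>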
<h2>Let&#8217;s generalize</h2><p>Let&#8217;s now study the general version.</p><p>We will consider some set of possible worlds <em>W</em>, where for each world <em>w</em> in <em>W</em>:</p><ul><li><p>Both Alice and Bob assign the same prior <em>p</em>(<em>w</em>) to that world.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li><li><p>Alice thinks her influence on Bob is <em>c<sub>AB</sub></em><sub>|</sub><em><sub>w</sub></em> in that world. (And vice versa for Bob.)</p></li><li><p>Alice and Bob can pay different amounts to benefit each other in different worlds. As a shorthand for the amount that Alice pays to benefit Bob in world <em>w</em>, I will write <em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em>. (And correspondingly for Bob.)</p></li><li><p>We will write the benefit that Bob gains as a monotonically increasing function of how much Alice pays: <em>g<sub>B</sub></em>(<em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em>). (And vice versa for Alice.)</p></li></ul><p>This means that we can write&#8230;</p><ul><li><p>Alice&#8217;s expected gain as </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!fndC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84b21e1f-af95-4346-a78a-aa6b06aadb68_266x56.png" width="266" height="56" alt=""></figure></div></li><li><p>Bob&#8217;s expected gain as </p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!w3oZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png" width="260" height="55" alt=""></figure></div>
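<p>Written out computationally, these expected gains can be evaluated world by world. The sketch below is one way to do that, assuming the weighting discussed further down (each payment weighted by its world&#8217;s probability, each benefit additionally by the relevant party&#8217;s perceived influence); the formulas shown above are the authoritative versions. The second Alice/Bob deal is plugged in as a check:</p><pre><code class="language-python"># Sketch of expected gains in the world-indexed setting, under the reading that
#   Alice: sum over w of p(w) * ( c_AB|w * g_A(l_B|w) - l_A|w )
#   Bob:   sum over w of p(w) * ( c_BA|w * g_B(l_A|w) - l_B|w )

def expected_gains(worlds, g_A, g_B):
    # worlds: list of dicts with keys "p", "c_AB", "c_BA", "l_A", "l_B".
    alice = sum(w["p"] * (w["c_AB"] * g_A(w["l_B"]) - w["l_A"]) for w in worlds)
    bob = sum(w["p"] * (w["c_BA"] * g_B(w["l_A"]) - w["l_B"]) for w in worlds)
    return alice, bob

# The second Alice/Bob deal from the example, with p = 0.45 and benefits worth 2x the payment:
worlds = [
    {"p": 0.45, "c_AB": 0.8, "c_BA": 0.8, "l_A": 2, "l_B": 3},  # high-correlation world
    {"p": 0.55, "c_AB": 0.2, "c_BA": 0.2, "l_A": 2, "l_B": 0},  # low-correlation world
]
print(expected_gains(worlds, g_A=lambda l: 2 * l, g_B=lambda l: 2 * l))  # both come out positive
</code></pre>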
href="https://substackcdn.com/image/fetch/$s_!w3oZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!w3oZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 424w, https://substackcdn.com/image/fetch/$s_!w3oZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 848w, https://substackcdn.com/image/fetch/$s_!w3oZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 1272w, https://substackcdn.com/image/fetch/$s_!w3oZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!w3oZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png" width="260" height="55" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:55,&quot;width&quot;:260,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4313,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!w3oZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 424w, https://substackcdn.com/image/fetch/$s_!w3oZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 848w, https://substackcdn.com/image/fetch/$s_!w3oZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 1272w, https://substackcdn.com/image/fetch/$s_!w3oZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff65ce164-0814-4329-8ee1-7daaf6d32633_260x55.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Similarly to <a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a>, I will assume that both parties <em>first</em> commit to certain policies (based on what would give both parties high expected utilities) and <em>then</em> they make observations about what world they are in. 
Also similarly: The parties never get to observe information about each other, including what commitments the other party made. The only relevant information they get is evidence about correlations.</p><p>How should Alice and Bob choose <em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em> and <em>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em> to maximize expected benefits to both parties? (As they might want to do if they&#8217;re selecting their policy using a similar algorithm as the one described <a href="https://lukasfinnveden.substack.com/i/136239309/how-to-think-about-analogous-actions-in-asymmetric-situations">here</a>.)</p><p>In the above formula:</p><ul><li><p><em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em> is multiplied by <em>p</em>(<em>w</em>).</p></li><li><p><em>g<sub>B</sub></em>(<em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em>) is multiplied by <em>c<sub>BA</sub></em><sub>|</sub><em><sub>w</sub></em> and <em>p</em>(<em>w</em>).</p></li></ul><p>So Alice should choose especially high <em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em> in worlds where Bob&#8217;s acausal influence on Alice <em>c<sub>BA</sub></em><sub>|</sub><em><sub>w</sub></em> is especially high. (Echoing the conclusions from <a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a>)</p><p>Let&#8217;s extend this to a dilemma with large groups in large worlds. As previously outlined in the section <a href="https://lukasfinnveden.substack.com/i/136239309/large-worlds-asymmetric-dilemma">large-worlds asymmetric dilemma</a> (in the post Asymmetric ECL), group A would normally receive an expected utility of:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wFDb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wFDb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 424w, https://substackcdn.com/image/fetch/$s_!wFDb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 848w, https://substackcdn.com/image/fetch/$s_!wFDb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 1272w, https://substackcdn.com/image/fetch/$s_!wFDb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wFDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png" width="181" height="26" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:26,&quot;width&quot;:181,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2087,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!wFDb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 424w, https://substackcdn.com/image/fetch/$s_!wFDb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 848w, https://substackcdn.com/image/fetch/$s_!wFDb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 1272w, https://substackcdn.com/image/fetch/$s_!wFDb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2a07aced-6300-48a3-bd2a-3d6e8dcdfdfe_181x26.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Extending this to our new setting:</p><ul><li><p>Group <em>A</em>&#8217;s expected gain is </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T4q7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T4q7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 424w, https://substackcdn.com/image/fetch/$s_!T4q7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 848w, https://substackcdn.com/image/fetch/$s_!T4q7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 1272w, https://substackcdn.com/image/fetch/$s_!T4q7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T4q7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png" width="380" height="57" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:57,&quot;width&quot;:380,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5517,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T4q7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 424w, https://substackcdn.com/image/fetch/$s_!T4q7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 848w, https://substackcdn.com/image/fetch/$s_!T4q7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 1272w, https://substackcdn.com/image/fetch/$s_!T4q7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de0bd32-d802-4b3d-9171-49a1006262a6_380x57.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div></li><li><p>Group <em>B</em>&#8217;s expected gain is </p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ffk5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ffk5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 424w, https://substackcdn.com/image/fetch/$s_!Ffk5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 848w, https://substackcdn.com/image/fetch/$s_!Ffk5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 1272w, https://substackcdn.com/image/fetch/$s_!Ffk5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ffk5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png" width="388" height="54" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:54,&quot;width&quot;:388,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5573,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ffk5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 424w, https://substackcdn.com/image/fetch/$s_!Ffk5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 848w, https://substackcdn.com/image/fetch/$s_!Ffk5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 1272w, https://substackcdn.com/image/fetch/$s_!Ffk5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92f479bc-ee04-4ed9-a701-c2eab0a1acb6_388x54.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>How should group <em>A</em> and group <em>B</em> choose <em>lA</em>|<em>w</em>&nbsp; and <em>lB</em>|<em>w</em>, here?&nbsp;</p><ul><li><p><em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em> is multiplied by <em>n<sub>A</sub></em><sub>|</sub><em><sub>w</sub>c<sub>AA</sub></em><sub>|</sub><em><sub>w</sub></em>.</p></li><li><p>g<sub>B</sub>(<em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em>) is multiplied by <em>n<sub>A</sub></em><sub>|</sub><em><sub>w</sub>c<sub>BA</sub></em><sub>|</sub><em><sub>w</sub></em>.</p></li><li><p>So group A should choose higher <em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em> in worlds with a higher value of <em>c<sub>BA</sub></em><sub>|</sub><em><sub>w</sub></em> and a lower value of <em>c<sub>AA</sub></em><sub>|</sub><em><sub>w</sub></em>, i.e., in worlds where members of group B have surprisingly much acausal influence on them, and members of group A have surprisingly little acausal influence on each other.</p></li><li><p>Note that <em>n<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em> feature in both expressions, so updates on that number doesn&#8217;t clearly affect whose values should be benefitted.</p></li></ul><h2>Assuming linearity</h2><p>Let&#8217;s be more precise about the options of group <em>A</em> and <em>B</em>:</p><ul><li><p>Let&#8217;s say that members of group <em>A</em> get no information about what world they are in. So they need to choose a single value of <em>l<sub>A</sub></em><sub>|</sub><em><sub>w</sub></em> for all <em>w</em>. Furthermore, let&#8217;s say they can either choose to make this 0, for which case <em>g<sub>B</sub></em>(0)=0, or a specific other number <em>l<sub>A</sub></em>. 
To represent <em>g<sub>B</sub></em>(<em>l<sub>A</sub></em>), let&#8217;s simply write <em>g<sub>B</sub></em>.</p></li><li><p>Let&#8217;s say that members of group <em>B</em> get complete information about what world they are in, so they can freely choose distinct values for all <em>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em>. Furthermore, let&#8217;s say that they can freely choose <em>any</em> non-negative value for <em>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em>, and that <em>g<sub>A</sub></em>(<em>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em>) is directly proportional to <em>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em>. Since it&#8217;s directly proportional, let&#8217;s write <em>g<sub>A</sub></em>(<em>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em>)=<em>k<sub>A</sub>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em>.</p></li></ul><p>This means that:</p><ul><li><p>Group <em>A</em>&#8217;s expected gain is</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/2f175af0-1199-422f-854b-8916ea8c02c1_384x57.png" width="384" height="57" alt=""></figure></div></li><li><p>Group <em>B</em>&#8217;s expected gain is</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/33a3b3a4-372e-430e-a333-8bd26d11fdf4_343x59.png" width="343" height="59" alt=""></figure></div></li></ul>
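<p>Here&#8217;s a small sketch of how these expected gains can be computed for a given policy. The numbers are made up, and the per-world coefficients (<code>gain_A</code>, <code>cost_A</code>, <code>gain_B</code>, <code>cost_B</code>) are just stand-ins for the products of <em>p</em>(<em>w</em>), the <em>n</em>&#8217;s, the <em>c</em>&#8217;s, and <em>k<sub>A</sub></em> that appear in the formulas above, so the exact bookkeeping may differ from the formula images.</p><pre><code class="language-python"># Sketch with made-up numbers. The coefficients are stand-ins for the products
# of n's, c's and k_A in the formulas above; worlds are taken as equally likely,
# and g_B(l_A) = l_A is used purely for illustration.
worlds = ["w1", "w2", "w3"]

cost_A = {"w1": 1.0, "w2": 1.0, "w3": 1.0}  # what group A pays per unit of l_A, per world
gain_B = {"w1": 0.8, "w2": 0.5, "w3": 0.2}  # what group B receives per unit of g_B(l_A)
cost_B = {"w1": 1.0, "w2": 0.5, "w3": 1.0}  # what group B pays per unit of l_B|w
gain_A = {"w1": 0.5, "w2": 1.0, "w3": 3.0}  # what group A receives per unit of l_B|w (linearity)

def expected_gains(l_A, l_B):
    """Expected gains of (group A, group B) for a single l_A and per-world l_B|w."""
    e_A = sum(gain_A[w] * l_B[w] - cost_A[w] * l_A for w in worlds) / len(worlds)
    e_B = sum(gain_B[w] * l_A - cost_B[w] * l_B[w] for w in worlds) / len(worlds)
    return e_A, e_B

# Group A sacrifices everywhere; group B only helps in w3, where gain_A/cost_B
# is highest (this is the claim argued for below). The size 1.2 is chosen so
# that, with these made-up numbers, both groups come out ahead.
print(expected_gains(l_A=1.0, l_B={"w1": 0.0, "w2": 0.0, "w3": 1.2}))  # ~ (0.2, 0.1)
</code></pre>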
<p>Here&#8217;s something surprising that we can now conclude: In order to maximize the expected value that both actors get, group <em>B</em> should <em>only</em> benefit group <em>A</em> (i.e., choose a non-zero <em>l<sub>B</sub></em><sub>|</sub><em><sub>w</sub></em>) in the world(s) where the ratio</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/7a67a41b-bd03-4d6d-901d-2004bd4b6f44_71x57.png" width="71" height="57" alt=""></figure></div><p>is the highest. Why?</p><p>Let&#8217;s say that there is some world <em>u</em> that maximizes</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/11e08688-54ba-4dc1-b730-29aec66e11f2_64x57.png" width="64" height="57" alt=""></figure></div>
<p>Let&#8217;s say that <em>l<sub>B</sub></em><sub>|</sub><em><sub>w&#8217;</sub></em>=<em>l</em>&gt;0 for some other world <em>w&#8217;</em> where</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/26fd1531-e399-462b-9b8a-3c3cc8b7e03e_152x66.png" width="152" height="66" alt=""></figure></div><p>(i.e., group <em>B</em> helps group <em>A</em> in some world where the ratio <em>isn&#8217;t</em> the highest.)</p><p>Then, if the joint policy (recommended by <a href="https://lukasfinnveden.substack.com/i/136239309/how-to-think-about-analogous-actions-in-asymmetric-situations">the algorithm</a>) instead recommended group <em>B</em> to:</p><ul><li><p>Decrease <em>l<sub>B</sub></em><sub>|</sub><em><sub>w&#8217;</sub></em> by <em>l</em> (i.e., set <em>l<sub>B</sub></em><sub>|</sub><em><sub>w&#8217;</sub></em> to 0).</p></li><li><p>Increase <em>l<sub>B</sub></em><sub>|</sub><em><sub>u</sub></em> by</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/9d2a68f4-705f-4e52-a2b5-8960aff39029_167x64.png" width="167" height="64" alt=""></figure></div></li></ul>
<p>How much would group <em>A</em>&#8217;s expected utility change?</p><ul><li><p>Due to getting less utility in world <em>w&#8217;</em>:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/995767b3-9c9c-42bb-b06c-acd6a35e64ec_187x30.png" width="187" height="30" alt=""></figure></div></li><li><p>Due to getting more utility in world <em>u</em>:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/e23b7e14-d232-44eb-bdbc-d45c9cc0f500_507x59.png" width="507" height="59" alt=""></figure></div></li></ul><p>Since these are equal, group <em>A</em>&#8217;s expected utility would be unchanged.</p><p>How much would group <em>B</em>&#8217;s expected utility change?</p><ul><li><p>Due to losing less utility in world <em>w&#8217;</em>:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/4049fe47-82b9-4d07-80eb-4faabb3e03a0_167x29.png" width="167" height="29" alt=""></figure></div></li><li><p>Due to losing more utility in world <em>u</em>:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/02ed63b1-27dc-4d92-b68c-2532e3ef8620_526x67.png" width="526" height="67" alt=""></figure></div></li><li><p>Summing these together, we can factor out some numbers and get:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/67405a89-63dc-459c-a269-fad83a8398a1_651x35.png" width="651" height="35" alt=""></figure></div></li><li><p>By assumption,</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/eaa6a184-c73c-4f99-bc3e-066ca835bdc8_377x60.png" width="377" height="60" alt=""></figure></div><p>So this will <em>increase</em> group <em>B</em>&#8217;s expected utility. (A small numerical sketch of this reallocation follows below.)</p></li></ul>
class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hkj_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hkj_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 424w, https://substackcdn.com/image/fetch/$s_!hkj_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 848w, https://substackcdn.com/image/fetch/$s_!hkj_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 1272w, https://substackcdn.com/image/fetch/$s_!hkj_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hkj_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png" width="65" height="58" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d37bca2e-dedd-4519-9368-d106b60ca841_65x58.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:58,&quot;width&quot;:65,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1355,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hkj_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 424w, https://substackcdn.com/image/fetch/$s_!hkj_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 848w, https://substackcdn.com/image/fetch/$s_!hkj_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 1272w, https://substackcdn.com/image/fetch/$s_!hkj_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd37bca2e-dedd-4519-9368-d106b60ca841_65x58.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>there&#8217;s a different policy that is Pareto-better. 
So if the algorithm only recommends Pareto-optimal policies, it will only recommend policies where group <em>B</em> only benefits group <em>A</em> in the world(s) with maximum </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ogx-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ogx-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 424w, https://substackcdn.com/image/fetch/$s_!Ogx-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 848w, https://substackcdn.com/image/fetch/$s_!Ogx-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 1272w, https://substackcdn.com/image/fetch/$s_!Ogx-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ogx-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png" width="60" height="54" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:54,&quot;width&quot;:60,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1315,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ogx-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 424w, https://substackcdn.com/image/fetch/$s_!Ogx-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 848w, https://substackcdn.com/image/fetch/$s_!Ogx-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 1272w, https://substackcdn.com/image/fetch/$s_!Ogx-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2233dd0d-2499-404e-9c9c-81ac92ac0642_60x54.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Note that this crucially relies on the assumption that group <em>A</em>&#8217;s gain is directly proportional to group <em>B</em>&#8217;s loss, and that group <em>B</em> can 
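<p>To see why that assumption is doing real work, here&#8217;s one more sketch with made-up numbers, using a square root as a stand-in for a non-proportional (diminishing-returns) <em>g<sub>A</sub></em>. Holding group <em>B</em>&#8217;s total loss fixed, concentrating all help in the highest-ratio world is best when <em>g<sub>A</sub></em> is linear, but a spread-out allocation does better for group <em>A</em> once returns diminish, so the &#8220;only help in one world&#8221; conclusion no longer follows.</p><pre><code class="language-python">import math

# Made-up numbers; per-unit cost to group B is the same in both worlds, and the
# benefit coefficient for group A is slightly higher in world u.
benefit_coef = {"u": 1.1, "w_prime": 1.0}

def benefit_to_A(l_B, g_A):
    return sum(benefit_coef[w] * g_A(l_B[w]) for w in l_B)

def linear(x):
    return x             # the proportionality assumption used above

concave = math.sqrt      # an illustrative diminishing-returns g_A

concentrated = {"u": 2.0, "w_prime": 0.0}  # total loss to B is 2 in both allocations
split = {"u": 1.0, "w_prime": 1.0}

print(benefit_to_A(concentrated, linear), benefit_to_A(split, linear))    # 2.2 vs 2.1
print(benefit_to_A(concentrated, concave), benefit_to_A(split, concave))  # ~1.56 vs 2.1
</code></pre>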
<h2>When is a deal possible?</h2><p>Now, as I did in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>, let&#8217;s try to find out when it is possible for group <em>A</em> and group <em>B</em> to find a mutually beneficial deal.</p><p>Based on the above argument, we know that group <em>B</em> will only benefit group <em>A</em> in the world where </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!0KPM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fae7ecda9-9f14-4a47-b1fe-a3b346f26e68_66x59.png" width="66" height="59" alt=""></figure></div><p>is the highest. Let&#8217;s call this world <em>u</em>. We can then write:</p><ul><li><p>Group <em>A</em>&#8217;s expected gain is:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!UfTr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F743c260b-ee4e-47df-8848-ddf4c63262bf_802x39.png" width="802" height="39" alt=""></figure></div></li><li><p>Group <em>A</em>&#8217;s expected gains are larger than 0 if and only if:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qZxy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b5f7354-64a4-4f03-8603-26d3fb041097_253x70.png" width="253" height="70" alt=""></figure></div></li><li><p>Group <em>B</em>&#8217;s expected gain is:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!T4Dx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33f9554-a7a8-4df7-addd-fb27c697a25a_771x36.png" width="771" height="36" alt=""></figure></div></li><li><p>Group <em>B</em>&#8217;s expected gains are larger than 0 if and only if:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!65ce!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F16b4df20-05b3-4fd7-a9a6-9c7a4ab530a4_268x74.png" width="268" height="74" alt=""></figure></div></li></ul><p>By a similar argument as made in <a href="https://lukasfinnveden.substack.com/i/136239309/generalizing">section &#8220;Generalizing&#8221; of Asymmetric ECL</a>, it is possible for group <em>B</em> to pick a value of <em>l<sub>B</sub></em><sub>|</sub><em><sub>u</sub></em> that makes both these expressions larger than 1 whenever the product of them is larger than 1. I.e. when:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!VIvI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f824e-2421-4403-9874-c170eff4f3b8_602x53.png" width="602" height="53" alt=""></figure></div>
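<p>The step from &#8220;both conditions hold&#8221; to &#8220;their product is larger than 1&#8221; follows a standard pattern. Since the exact expressions are only shown in the images above, the following is my own stylized reconstruction of that pattern, assuming <em>A</em>&#8217;s expression scales linearly with <em>l<sub>B</sub></em><sub>|</sub><em><sub>u</sub></em> and <em>B</em>&#8217;s scales with its inverse:</p><pre><code>% A sketch of the structure of the argument (my reconstruction, not the post's exact
% expressions). Write l = l_{B|u} for the size of B's payment in world u, and suppose
% the two expressions have the form a*l and b/l, with a, b, l all positive. Then
a \cdot l > 1 \quad\text{and}\quad \frac{b}{l} > 1
  \quad\iff\quad l \in \left(\tfrac{1}{a},\; b\right),
% and a suitable l exists exactly when that interval is non-empty:
\exists\, l:\; a \cdot l > 1 \;\wedge\; \frac{b}{l} > 1
  \quad\iff\quad a \cdot b > 1 .
</code></pre><p>Under this stylized form, the product of the two expressions, (<em>a</em>&#183;<em>l</em>)&#183;(<em>b</em>/<em>l</em>) = <em>a</em>&#183;<em>b</em>, does not depend on <em>l</em> at all, which is why the condition for a deal can be stated without fixing <em>l<sub>B</sub></em><sub>|</sub><em><sub>u</sub></em>.</p>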
<p>This is similar to the main result in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>. That result said that a deal was possible whenever:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Tqm2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc2348442-1178-4d5c-a32a-2ceb25d8f0d4_165x58.png" width="165" height="58" alt=""></figure></div><p>The only differences are that:</p><ul><li><p>We&#8217;re writing <em>k<sub>A</sub></em> instead of <em>g<sub>A</sub></em>/<em>l<sub>B</sub></em>. This is just a difference in notation &#8212;&nbsp;they represent the same value.</p></li><li><p>Instead of <em>c<sub>AB</sub></em>/<em>c<sub>BB</sub></em> we have <em>c<sub>AB</sub></em><sub>|</sub><em><sub>u</sub></em>/<em>c<sub>BB</sub></em><sub>|</sub><em><sub>u</sub></em> &#8212;&nbsp;i.e., we only care about what that ratio is in the world where it&#8217;s maximally large.</p></li><li><p>Instead of <em>c<sub>BA</sub></em>/<em>c<sub>AA</sub></em> we have </p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!3LsJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1f6d2acd-ff2c-4407-93cc-2b8e6848bfb7_191x66.png" width="191" height="66" alt=""></figure></div><p>&#8212;&nbsp;i.e., we care about the average correlations weighted by the probability of worlds and population size. If population size is independent of <em>c<sub>BA</sub></em><sub>|</sub><em><sub>w</sub></em> and <em>c<sub>AA</sub></em><sub>|</sub><em><sub>w</sub></em>, then this is simply equal to the expected value of <em>c<sub>BA</sub></em><sub>|</sub><em><sub>w</sub></em> divided by the expected value of <em>c<sub>AA</sub></em><sub>|</sub><em><sub>w</sub></em>. (See the short sketch right after this list.)</p></li></ul>
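<p>Here is a minimal sketch of that weighted average. The exact weighting is my own formalization of the prose description above (worlds weighted by probability times population size), so treat it as illustrative:</p><pre><code># Toy sketch (my formalization of the description above, not the post's exact formula):
# the correlation ratio with worlds weighted by probability and population size.

worlds = [
    # (probability, population, c_BA|w, c_AA|w)
    (0.2, 10.0, 0.9, 0.8),
    (0.5,  2.0, 0.3, 0.6),
    (0.3,  1.0, 0.1, 0.5),
]

def weighted_ratio(ws):
    num = sum(prob * pop * c_ba for prob, pop, c_ba, _ in ws)
    den = sum(prob * pop * c_aa for prob, pop, _, c_aa in ws)
    return num / den

print(weighted_ratio(worlds))

# When population size is independent of the correlations, the ratio reduces to
# E[c_BA|w] / E[c_AA|w]:
pops = [(0.5, 1.0), (0.5, 9.0)]            # (probability, population)
cs   = [(0.4, 0.9, 0.8), (0.6, 0.2, 0.5)]  # (probability, c_BA|w, c_AA|w)
product = [(pp * pc, pop, c_ba, c_aa) for pp, pop in pops for pc, c_ba, c_aa in cs]
simple = sum(pc * c_ba for pc, c_ba, _ in cs) / sum(pc * c_aa for pc, _, c_aa in cs)
print(weighted_ratio(product), simple)     # these two agree
</code></pre>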
<h2>What conclusions can we draw from this?</h2><p>As I mentioned at the start: The assumption of unbounded, linear utility functions and an unbounded ability to pay isn&#8217;t realistic. For example, if we apply this lesson to <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">ECL with AI</a>, I don&#8217;t think that a world <em>u</em> such that p(u) = 10<sup>-100</sup> and <em>c<sub>AB</sub></em><sub>|</sub><em><sub>u</sub></em>/<em>c<sub>BB</sub></em><sub>|</sub><em><sub>u</sub></em>=1 should make us act just as if <em>c<sub>AB</sub></em>/<em>c<sub>BB</sub></em>=1.</p><p>In particular, some ways in which the unbounded, linear assumption can fail:</p><ul><li><p>Ultimately, I think that <a href="https://www.lesswrong.com/posts/hbmsW2k9DxED5Z4eJ/impossibility-results-for-unbounded-utilities">utility functions should be bounded</a>, even if they&#8217;re linear in some regime. For sufficiently small probabilities, and sufficiently large values of <em>l<sub>B</sub></em><sub>|</sub><em><sub>u</sub></em>, we might get pushed outside of that regime. At some point, concerns about Pascal&#8217;s mugging seem to bite.</p></li><li><p>A natural way in which linearity can fail is if the AI runs out of resources to pay us with. That seems significantly more likely when the deal relies on a small probability of very large values of <em>l<sub>B</sub></em><sub>|</sub><em><sub>u</sub></em>.</p></li><li><p>Maybe you just never thought that a linear utility function was a reasonable match to your own values. (In which case the small probability of large utility, introduced here, would increase the degree to which that assumption yields bad conclusions, from your perspective.)</p></li></ul><p>Nevertheless, I think this does push towards somewhat greater optimism about ECL deals with agents that can become knowledgeable and benefit us afterwards. If the probability of a high <em>c<sub>AB</sub></em><sub>|</sub><em><sub>w</sub></em>/<em>c<sub>BB</sub></em><sub>|</sub><em><sub>w</sub></em> isn&#8217;t <em>too</em> low, but is on the order of 10% or so, then maybe the argument in this post is just fine, and we should be happy with a 10% probability of getting a 10x larger benefit in those worlds.</p><p>Another take-away is: Behind the veil of ignorance, uncertainty about correlations may incentivize agents to commit to very different deals than they would be incentivized to follow later on. All of this math relies on certain actors making ECL deals when they are ignorant about who they are correlated with &#8212; which could enable deals that would otherwise not be feasible. This is very sensitive to the initial probability distributions that agents start out with. Accordingly, this raises questions about when agents will (and should) first start making commitments. And about whether there&#8217;s any more principled way of handling this stuff than to be updateful until the day that you start making commitments, and subsequently letting that time-slice&#8217;s beliefs and preferences forever change how you act.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Thanks to Caspar Oesterheld for this example.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>If this assumption only made the deal net-positive for one player, then that&#8217;s an important asymmetry that would make the correlations implausible. This first deal can&#8217;t be asymmetric in this way &#8212;&nbsp;but we&#8217;ll soon look at a more asymmetric one.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Different priors would enable betting, as discussed <a href="https://docs.google.com/document/d/17M_6_uYdlIpzs4WqYnQfgYq-yD9scwTSz81M-wExvMg/edit#heading=h.u2qyfrwdqv0c">here</a>.</p></div></div>]]></content:encoded></item><item><title><![CDATA[When does EDT seek evidence about correlations?]]></title><description><![CDATA[Informal summary]]></description><link>https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 10:32:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Zl9S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa60f6dd3-1c30-403d-a7b9-5cf68a9451dd_806x690.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Informal summary</h2><p>The algorithm that EDT chooses to self-modify to is known as &#8220;son-of-EDT&#8221;. I argue that son-of-EDT <em>doesn&#8217;t</em> do direct-entanglement-style cooperation with its <em>new</em> correlations, but <em>does</em> try to benefit the people who its parent was correlated with <em>when it made the self-modification</em>.</p><p>In addition, son-of-EDT will be interested in gathering more evidence about who the parent was and wasn&#8217;t correlated with, at the time of making the decision, and will change who it tries to benefit based on what it learns.</p><p>In cases where potentially-correlated agents can gather evidence that makes their estimates about correlation converge, I think evidence-seeking works in a fairly commonsensical way. For example, they will pay for evidence that tells them whether their correlations are sufficiently high to make an acausal deal worthwhile. (As long as the evidence is sufficiently cheap, and the evidence could flip them from thinking that the deal is worth it to thinking it&#8217;s not worth it, or vice versa.)</p><p>Updating on <em>some</em> types of evidence would predictably lead actors to estimate very small correlations with other agents.
This is not true of any of the examples of &#8220;evidence&#8221; that, in this post, I argue agents will want to seek out. Instead, the relevant evidence has the property that it <em>preserves expected acausal influence</em>, i.e., updating on the evidence can change the estimated correlations, but it doesn&#8217;t systematically increase or decrease them.</p><p>I will now introduce a &#8220;no prediction&#8221; assumption (which the rest of the post assumes) and some notation. Then I will offer a more formal summary.</p><h2>&#8220;No prediction&#8221; assumption</h2><p>In a 1-shot prisoner&#8217;s dilemma, there are two different ways in which two agents can achieve cooperation by acausal means:</p><ul><li><p>You might think that you are similar to your counterpart, and choose to cooperate because that means that they&#8217;re more likely to cooperate (due to a similar argument).</p></li><li><p>You might think that your counterpart is predicting what you do, and that if you cooperate, they will infer that you cooperated. And that this inference will make them more likely to cooperate, in response.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></li></ul><p>In this post, I want to ignore the latter kind. To facilitate that, I will make the following simplifying assumption:</p><p>For every pair of agents Alice and Bob (<em>A</em>,<em>B</em>), there is nothing that Alice can do to affect Bob&#8217;s beliefs about Alice, other than her acausal influence on a <em>single</em> choice that Bob is making. More formally:</p><p>For every observation <em>O</em> that Bob can make <em>other</em> than observing what choice he made in a particular dilemma (and events that are causally downstream of that choice), and for every pair of actions (<em>X</em>,<em>Y</em>) that Alice can take, we have:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!t5LA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f621ba-44e3-44cd-b26f-60f40d589989_630x40.png" width="630" height="40" alt=""></figure></div><p>(Where <em>p<sub>A</sub></em> is Alice&#8217;s prior.)</p>
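<p>In symbols, I read the condition in the image above as saying that Alice&#8217;s choice makes no difference to the probability of any such observation, according to her own prior. This gloss is my own; the exact statement is the one in the image:</p><pre><code>% My gloss of the "no prediction" condition (the exact rendering is in the image above):
% Alice's choice between X and Y makes no difference, under Alice's prior p_A, to the
% probability of Bob making any observation O of the relevant kind.
p_A(\text{Bob observes } O \mid \text{Alice does } X)
  \;=\;
p_A(\text{Bob observes } O \mid \text{Alice does } Y)
</code></pre>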
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/17f621ba-44e3-44cd-b26f-60f40d589989_630x40.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:40,&quot;width&quot;:630,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7126,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!t5LA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f621ba-44e3-44cd-b26f-60f40d589989_630x40.png 424w, https://substackcdn.com/image/fetch/$s_!t5LA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f621ba-44e3-44cd-b26f-60f40d589989_630x40.png 848w, https://substackcdn.com/image/fetch/$s_!t5LA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f621ba-44e3-44cd-b26f-60f40d589989_630x40.png 1272w, https://substackcdn.com/image/fetch/$s_!t5LA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f621ba-44e3-44cd-b26f-60f40d589989_630x40.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>(Where <em>p<sub>A</sub></em> is Alice&#8217;s prior.)</p><p>&#8220;No prediction&#8221; assumption might be a slightly misleading name, here. I also want to exclude <em>observations</em> of an agent&#8217;s action &#8212;&nbsp;or indeed, any kind of observation that would correlate with an agent&#8217;s action. (According to that agent.)</p><p>Supplementing the &#8220;no prediction&#8221; assumption, I assume that the only source of correlation between agents&#8217; decisions is that the agents believe that they might be following similar algorithms, and thereby are likely to take analogous actions. (I.e., my results would probably not hold for arbitrary priors with unjustified correlations in them.)</p><h2>Notation</h2><p>When an agent <em>A</em> is thinking about affecting another agent <em>B</em>, they care about changing the probability that <em>B</em> does something. 
For example:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AG3W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AG3W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 424w, https://substackcdn.com/image/fetch/$s_!AG3W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 848w, https://substackcdn.com/image/fetch/$s_!AG3W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 1272w, https://substackcdn.com/image/fetch/$s_!AG3W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AG3W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png" width="509" height="36" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:36,&quot;width&quot;:509,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5014,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AG3W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 424w, https://substackcdn.com/image/fetch/$s_!AG3W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 848w, https://substackcdn.com/image/fetch/$s_!AG3W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 1272w, https://substackcdn.com/image/fetch/$s_!AG3W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd608af94-ec85-4b72-bb17-e105fd2f844f_509x36.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>(The subscript <em>A</em> signifies that these are probabilities according to <em>A</em>&#8217;s prior.)</p><p>As shorthand for this, let&#8217;s use <em>d<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>), where the <em>d</em> stands for difference.</p><p>In some situations, <em>A</em> and 
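<p>To make the notation concrete, here is a minimal toy sketch. The joint prior and the two-action reading of <em>d</em> are my own assumptions for illustration; the post&#8217;s actual definition is the expression in the image above:</p><pre><code># Toy sketch (my own formalization, for illustration): with two options each, read
#   d_A(B chooses x' | A chooses x)
# as the difference p_A(B chooses x' | A chooses x) - p_A(B chooses x' | A chooses y).
# The joint prior below makes the choices correlated, as if the agents might be
# running similar algorithms.

joint = {
    # (Alice's choice, Bob's choice): prior probability
    ("x", "x'"): 0.35,
    ("x", "y'"): 0.15,
    ("y", "x'"): 0.15,
    ("y", "y'"): 0.35,
}

def cond(b_choice, a_choice):
    """p_A(B chooses b_choice | A chooses a_choice) under the toy joint prior."""
    p_a = sum(p for (a, _), p in joint.items() if a == a_choice)
    return joint[(a_choice, b_choice)] / p_a

d = cond("x'", "x") - cond("x'", "y")
print(d)  # 0.4: choosing x rather than y raises the probability of x' by 0.4
</code></pre>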
<p>In some situations, <em>A</em> and <em>B</em> are in sufficiently similar epistemic positions that each of <em>A</em>&#8217;s actions <em>x</em> clearly corresponds to one of <em>B</em>&#8217;s actions <em>x&#8217;</em>, and <em>d<sub>A</sub></em>(<em>B</em> does <em>x&#8217;</em> | <em>A</em> does <em>x</em>) is equally large for each such pair (<em>x</em>, <em>x&#8217;</em>). If this is the case, we&#8217;ll abbreviate those values as <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>). (<em>A</em>&#8217;s belief about how much <em>A</em> doing something increases the chance that <em>B</em> does the analogous thing, which will be a number between 0 and 1.)</p><p>Rather than just using <em>A</em>&#8217;s prior <em>p<sub>A</sub></em>, I will sometimes be interested in what <em>A</em>&#8217;s prior says about correlations after being conditioned on some evidence <em>e</em>, i.e.:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!esDZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66b5b5bf-bd2c-4b23-9aa2-28485d3bbd27_552x35.png" width="552" height="35" alt=""></figure></div><p>I will abbreviate this as <em>d<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>, <em>e</em>) or (if each of <em>A</em>&#8217;s actions clearly corresponds to one of <em>B</em>&#8217;s actions) as <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>,<em>e</em>).</p><h2>More formal summary</h2><p>In the first section of this summary, I will be conflating Alice&#8217;s perceived acausal influence on Bob and Bob&#8217;s perceived acausal influence on Alice. My claims are not necessarily true unless <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>) = <em>d<sub>B</sub></em>(<em>A</em>|<em>B</em>). In the second section of this summary, I correct for this conflation.</p><h3>When does EDT seek evidence about correlations?</h3><p>Consider Alice, who cares about her correlation with Bob. Let&#8217;s say that Alice is initially uncertain whether evidence <em>e</em> will turn out to be true or false. (For example, whether it is true that &#8220;Wikipedia says that Bob used to be a lumberjack&#8221;.) She observes it, and it turns out that it&#8217;s true. This may change her correlations with Bob. What does that mean, more precisely?</p><p>Let&#8217;s use &#8220;<em>A1</em>&#8221; to refer to Alice in time step 1, before observing evidence, and let&#8217;s say she has two options: action <em>X<sub>1</sub></em> or action <em>Y<sub>1</sub></em>.
Then, in time step 1, Alice&#8217;s perceived influence on Bob was:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!-TW8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aced2ae-3713-4284-8d7c-fe87a19910ad_682x38.png" width="682" height="38" alt=""></figure></div><p>(I&#8217;m just using &#8220;<em>A</em>&#8221; for the prior <em>p<sub>A</sub></em>, because all time-slices of Alice have a shared prior.)</p>
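<p>As an aside, the &#8220;preserves expected acausal influence&#8221; property from the informal summary is easy to see under the toy two-action reading of <em>d</em> sketched in the Notation section. The derivation below is my own, and it assumes (roughly what the &#8220;no prediction&#8221; assumption is meant to guarantee) that Alice&#8217;s choice doesn&#8217;t affect the probability of the evidence itself:</p><pre><code>% My own derivation, under the toy reading of d as a difference of two conditionals,
% and assuming p_A(e | A does x) = p_A(e | A does y) = p_A(e) for every outcome e:
d_A(B \mid A)
  = \sum_{e} p_A(e \mid A \text{ does } x)\, p_A(B \text{ does } x' \mid A \text{ does } x,\, e)
  - \sum_{e} p_A(e \mid A \text{ does } y)\, p_A(B \text{ does } x' \mid A \text{ does } y,\, e)
  = \sum_{e} p_A(e)\, d_A(B \mid A, e).
% So the time-1 influence is the expectation, over evidence outcomes, of the time-2
% influence: observing e can move the estimate, but not systematically up or down.
</code></pre>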
<p>Let&#8217;s say that <em>A1</em> selects action <em>X<sub>1</sub></em>, and that <em>X<sub>1</sub></em> is a choice to look at some evidence. Let&#8217;s use &#8220;<em>A2T</em>&#8221; to refer to Alice in time step 2, who has observed that Alice chose <em>X<sub>1</sub></em> and has observed <em>e</em> to be true.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> And let&#8217;s say she can choose action <em>X<sub>2</sub></em> or action <em>Y<sub>2</sub></em>. In time step 2, after observing evidence <em>e</em> to be true, we can now write Alice&#8217;s perceived acausal influence on Bob as:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!v1xE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a532e40-d65f-4852-9a26-5eacb7f1da94_272x37.png" width="272" height="37" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!emr-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F68674bbd-233c-4025-93bc-ed6e46881981_477x42.png" width="477" height="42" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!7AbC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32881444-0147-462b-9666-a559abe8a018_467x35.png" width="467" height="35" alt=""></figure></div><p>Notice that multiple things have changed!</p><ul><li><p>We have now conditioned on being in the world where <em>e</em>=<em>T</em>.</p></li><li><p>We have now conditioned on the knowledge that <em>A1</em> chose <em>X<sub>1</sub></em>.</p></li><li><p>We have switched from considering the conditional difference between &#8220;<em>A1</em> does [<em>X<sub>1</sub></em>
or <em>Y<sub>1</sub></em>]&#8221; to &#8220;<em>A2T</em> does [<em>X<sub>2</sub></em> or <em>Y<sub>2</sub></em>]&#8221;.</p></li></ul><p>But these need not necessarily go together. Alice&#8217;s prior <em>p<sub>A</sub></em> can also provide well-defined values for:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zboy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zboy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 424w, https://substackcdn.com/image/fetch/$s_!zboy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 848w, https://substackcdn.com/image/fetch/$s_!zboy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 1272w, https://substackcdn.com/image/fetch/$s_!zboy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zboy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png" width="154" height="37" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:37,&quot;width&quot;:154,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2244,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!zboy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 424w, https://substackcdn.com/image/fetch/$s_!zboy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 848w, https://substackcdn.com/image/fetch/$s_!zboy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 1272w, https://substackcdn.com/image/fetch/$s_!zboy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2299afd-fc27-408f-a1a9-4f68ac7a04be_154x37.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" 
target="_blank" href="https://substackcdn.com/image/fetch/$s_!qaSG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qaSG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 424w, https://substackcdn.com/image/fetch/$s_!qaSG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 848w, https://substackcdn.com/image/fetch/$s_!qaSG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 1272w, https://substackcdn.com/image/fetch/$s_!qaSG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qaSG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png" width="708" height="43" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:43,&quot;width&quot;:708,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9710,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qaSG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 424w, https://substackcdn.com/image/fetch/$s_!qaSG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 848w, https://substackcdn.com/image/fetch/$s_!qaSG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 1272w, https://substackcdn.com/image/fetch/$s_!qaSG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8fc6d10-94b1-4d9c-ade3-9c4b9c824e30_708x43.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>and</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1cnG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!1cnG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 424w, https://substackcdn.com/image/fetch/$s_!1cnG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 848w, https://substackcdn.com/image/fetch/$s_!1cnG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 1272w, https://substackcdn.com/image/fetch/$s_!1cnG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1cnG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png" width="692" height="43" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:43,&quot;width&quot;:692,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9707,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1cnG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 424w, https://substackcdn.com/image/fetch/$s_!1cnG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 848w, https://substackcdn.com/image/fetch/$s_!1cnG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 1272w, https://substackcdn.com/image/fetch/$s_!1cnG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9cb760c0-b5b3-4a72-9b2e-798cdb81dc66_692x43.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>(In the first row &#8212; note that I don&#8217;t condition on the observation that <em>A1</em> chose <em>X<sub>1</sub></em> in both terms (like I did in <em>A2T</em>&#8217;s choice above). 
If I did, the second term would condition on a probability 0 event.)</p><p>Here are all those quantities in a table:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ttkw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ttkw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 424w, https://substackcdn.com/image/fetch/$s_!Ttkw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 848w, https://substackcdn.com/image/fetch/$s_!Ttkw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 1272w, https://substackcdn.com/image/fetch/$s_!Ttkw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ttkw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png" width="965" height="197" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:197,&quot;width&quot;:965,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:25514,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ttkw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 424w, https://substackcdn.com/image/fetch/$s_!Ttkw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 848w, https://substackcdn.com/image/fetch/$s_!Ttkw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 1272w, https://substackcdn.com/image/fetch/$s_!Ttkw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2431fd0-2021-4d1f-9d51-b2f54d877f8a_965x197.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Updateful EDT can only make decisions on the basis of diagonals. But we need not constrain ourselves to updateful EDT. 
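<p>To make the distinction between these quantities concrete, here is a minimal sketch of how they can all be computed from one prior. It is not from the original post: the joint prior below is an arbitrary made-up assumption, and the variable names are hypothetical; the point is only that the three conditional differences are separate, well-defined numbers.</p><pre><code>from itertools import product
import random

# A made-up joint prior p_A over (a1, e, a2, b):
#   a1: Alice's time-1 action ("X1" or "Y1")
#   e : the evidence (True or False)
#   a2: Alice's time-2 action ("X2" or "Y2")
#   b : Bob's action ("x" or "y")
# The weights are arbitrary; they only make the quantities well-defined.
random.seed(0)
outcomes = list(product(["X1", "Y1"], [True, False], ["X2", "Y2"], ["x", "y"]))
weights = [random.random() for _ in outcomes]
p = {o: w / sum(weights) for o, w in zip(outcomes, weights)}

def prob(pred):
    return sum(q for o, q in p.items() if pred(o))

def cond(pred, given):
    return prob(lambda o: pred(o) and given(o)) / prob(given)

def bob_x(o):
    return o[3] == "x"

# d_A(B | A1): vary Alice's time-1 action, without conditioning on evidence.
d_A1 = cond(bob_x, lambda o: o[0] == "X1") - cond(bob_x, lambda o: o[0] == "Y1")

# d_A(B | A1, e=T): condition on the evidence, still vary the time-1 action.
d_A1_eT = (cond(bob_x, lambda o: o[0] == "X1" and o[1])
           - cond(bob_x, lambda o: o[0] == "Y1" and o[1]))

# d_A(B | A2T): what Alice-in-time-step-2 computes after observing that A1
# chose X1 and that e=T, varying only her time-2 action.
d_A2T = (cond(bob_x, lambda o: o[0] == "X1" and o[1] and o[2] == "X2")
         - cond(bob_x, lambda o: o[0] == "X1" and o[1] and o[2] == "Y2"))

print(d_A1, d_A1_eT, d_A2T)  # three well-defined (and generally different) numbers
</code></pre>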
<p>But we need not constrain ourselves to updateful EDT. If Alice at time-step 1 has significant control over how her successor will act at time-step 2, we can investigate which of these quantities she would prefer that her successor bases her decision on.</p><p>In this post I will argue that:</p><p><strong>If </strong><em><strong>A1</strong></em><strong> is an EDT agent, and assuming the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, </strong><em><strong>A1</strong></em><strong> will want her successor </strong><em><strong>A2</strong></em><strong> to benefit agents that </strong><em><strong>A1</strong></em><strong> correlates with. Not necessarily agents that </strong><em><strong>A2</strong></em><strong> correlates with.</strong></p><ul><li><p>This means that son-of-EDT will:</p><ul><li><p><em>not</em> optimize expected utility by the lights of <em>A1</em>&#8217;s prior (i.e. pick <em>argmax<sub>x</sub></em>(<em>E<sub>A</sub></em>(<em>U</em>|<em>A2</em> picks <em>x</em>))), and</p></li><li><p><em>not</em> particularly care about either <em>d<sub>A</sub></em>(<em>B</em>|<em>A2T</em>) or the corresponding quantity after updating on evidence.</p></li></ul></li><li><p>(Saliently, this means that son-of-EDT is <em>not</em> (the most natural interpretation of) &#8220;updateless EDT&#8221;, since the most natural interpretation of updateless EDT would maximize expected utility by the lights of <em>A1</em>&#8217;s prior.)</p></li><li><p>This is argued in <a href="https://lukasfinnveden.substack.com/i/136240656/son-of-edt-does-not-cooperate-based-on-its-own-correlations">1. Son-of-EDT cooperates using its parent&#8217;s entanglements</a>.</p></li></ul><p><strong>If </strong><em><strong>A1</strong></em><strong> is an EDT agent, and assuming the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, </strong><em><strong>A1</strong></em><strong> will want her successor </strong><em><strong>A2</strong></em><strong> to gather evidence about who </strong><em><strong>A1</strong></em><strong> was correlated with at the time of choosing her successor.</strong></p><p>This means that <em>A1</em> would want <em>A2</em> to learn about <em>d<sub>A</sub></em>(<em>B</em>|<em>A1</em>,<em>e</em>=<em>T</em>). (And make decisions that benefit agents that <em>A1</em> correlated with at time-step 1, according to that formula.)</p><p>Well&#8230; at least that&#8217;s true given the assumption of symmetric acausal influence that I mentioned at the start of the summary. But let&#8217;s now poke holes in that assumption.</p><h3>Asymmetric acausal influence</h3><p>When doing this, we will not need separate time-steps. But we will need a third actor. Our agents will now be Alice, Bob, and Carol.</p><p>If we allow for asymmetric acausal influence, we can construct scenarios where Alice perceives herself as having acausal influence on Bob, Bob perceives himself as having acausal influence on Carol, and Carol perceives herself as having acausal influence on Alice.
(But not the other way around.)</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/a60f6dd3-1c30-403d-a7b9-5cf68a9451dd_806x690.png" width="806" height="690" alt=""></figure></div><p>You may doubt that such asymmetries are ever reasonable. For an example where this appears naturally, see <a href="https://lukasfinnveden.substack.com/i/136240656/asymmetric-beliefs-about-acausal-influence">Asymmetric beliefs about acausal influence</a>.</p><p>In this post, I will argue that:</p><p><strong>If Alice, Bob, and Carol are in a circle where they can each acausally influence the agent to their </strong><em><strong>left</strong></em><strong>, then they will all be incentivized to benefit the agent to their </strong><em><strong>right</strong></em><strong>.</strong></p><p>In short: This is because they all stand to the right of the agent that they can acausally influence &#8212;&nbsp;and so they want to acausally influence <em>that</em> agent to benefit the agent to <em>its</em> right (i.e., themselves).</p><p><strong>Generalizing (and assuming the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>):</strong></p><ul><li><p><strong>If an agent doesn't perceive themselves as being able to acausally influence you, then you probably have no acausal reason to benefit them.</strong></p></li><li><p><strong>If an agent perceives themselves as being able to acausally influence you, then that might incentivize you to benefit them.</strong></p><ul><li><p>Though I think you only have reason to benefit them if there is some closed circle of actors (potentially just two) who think they are able to acausally influence the next person in the circle.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li></ul></li></ul><p>This is argued in <a href="https://lukasfinnveden.substack.com/i/136240656/edt-recommends-benefitting-agents-who-think-they-can-influence-you">3.
EDT recommends benefitting agents who think they can influence you</a>.</p><p>Taking this into account, we can now refine the last bolded statement from the above section into:</p><p><strong>If </strong><em><strong>A1</strong></em><strong> is an EDT agent, and assuming the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, </strong><em><strong>A1</strong></em><strong> will want her successor </strong><em><strong>A2</strong></em><strong> to gather evidence about who could acausally influence </strong><em><strong>A1</strong></em><strong>.</strong></p><p>This is argued in <a href="https://lukasfinnveden.substack.com/i/136240656/edt-recommends-seeking-evidence-about-who-thinks-they-can-influence-you">4. EDT recommends seeking evidence about who thinks they can influence you</a>.</p><p>I.e., <em>A1</em> will be interested in evidence <em>e</em> that affects <em>d<sub>B</sub></em>(<em>A1</em>|<em>B</em>,<em>e</em>) or <em>d<sub>C</sub></em>(<em>A1</em>|<em>C</em>,<em>e</em>).</p><p>Two caveats on this are:</p><ul><li><p><em>A1</em> would not want her successor to condition on <em>what A1 chose to do</em> when estimating these quantities. That would predictably destroy Bob&#8217;s and Carol&#8217;s perceived influence over <em>A1</em>&#8217;s choice. (I discuss this more in <a href="https://lukasfinnveden.substack.com/i/136240656/what-evidence-to-update-on">What evidence to update on?</a>)</p></li><li><p>The <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a> is crucial here. Without that, I suspect there would be a lot more caveats like the above bullet point.</p></li></ul><h3>Other sections</h3><p>Those are the core conclusions of the post, which are explained more thoroughly in sections 1-5 below. In the remaining sections:</p><ul><li><p>I apply the above framework to <a href="https://lukasfinnveden.substack.com/i/136240656/fully-symmetrical-cases">Fully symmetrical cases</a> and cases with <a href="https://lukasfinnveden.substack.com/i/136240656/symmetric-ground-truth-different-starting-beliefs">Symmetric ground-truth, different starting beliefs</a> &#8212; and conclude that people seem to gather evidence in a fairly common-sensical way.</p><ul><li><p>In particular: They will always pay for sufficiently cheap evidence that will reveal whether they do or don&#8217;t have sufficiently large acausal influence to make a deal worthwhile.</p></li></ul></li><li><p>Then I have some appendices on:</p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136240656/what-evidence-to-update-on">What evidence to update on?</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136240656/when-does-updateful-edt-avoid-information">When does updateful EDT avoid information?</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136240656/when-does-updateful-edt-seek-evidence-about-correlations">When does updateful EDT seek evidence about correlations?</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136240656/a-market-analogy">A market analogy</a> for some of the results, where it looks like &#8220;<em>A</em> thinks they can influence <em>B</em>&#8221; is analogous to &#8220;<em>A</em> wants to buy <em>B</em>&#8217;s goods&#8221;.</p></li></ul></li></ul><h2>1.
Son-of-EDT does not cooperate based on its own correlations</h2><p>&#8220;Son-of-EDT&#8221; is the agent that EDT would self-modify into.</p><p>Consider the following case:</p><ul><li><p>Alice and Bob are choosing successor-agents. The successor agents will then play a prisoner&#8217;s dilemma where they can pay $1 to send the other $10.</p></li><li><p>Both Alice and Bob have two different options:</p><ul><li><p>Deploy defect-bot, which will defect in the prisoner&#8217;s dilemma.</p></li><li><p>Deploy an agent that I will call &#8220;<em>UEDT</em>&#8221; (short for &#8220;updateless EDT&#8221;).</p><ul><li><p>Alice&#8217;s <em>UEDT</em> agent <em>UEDT<sub>A</sub></em> would&#8230;</p><ul><li><p>Compute:</p><ul><li><p><em>E<sub>A</sub></em>(<em>utility<sub>A</sub></em> | <em>UEDT<sub>A</sub></em> selects cooperate)</p></li><li><p><em>E<sub>A</sub></em>(<em>utility<sub>A</sub></em> | <em>UEDT<sub>A</sub></em> selects defect)</p></li></ul></li><li><p>&#8230;using Alice&#8217;s prior probability distribution and utility function&#8230;</p></li><li><p>&#8230;and pick the option with the highest expected utility.</p></li></ul></li><li><p>Bob&#8217;s <em>UEDT</em> agent would do the corresponding thing.</p></li><li><p>Alice&#8217;s and Bob&#8217;s prior probability distributions both imply that <em>UEDT<sub>A</sub></em> will almost certainly take the same action as <em>UEDT<sub>B</sub></em>.</p></li></ul></li></ul></li><li><p>The <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a> holds. Neither Alice, Bob, nor their successors will observe anything about their counterparts&#8217; choices.</p></li><li><p>Alice follows EDT. She believes:</p><ul><li><p>That her choice of successor-agent does not give her evidence about Bob&#8217;s choice, nor about the choice of any successor-agent.</p></li><li><p>That Bob has a 50% chance of deploying <em>UEDT<sub>B</sub></em> and a 50% chance of deploying defect-bot.</p></li></ul></li></ul><p>What will Alice choose?</p><ul><li><p>If Alice chooses to deploy <em>UEDT<sub>A</sub></em>&#8230;</p><ul><li><p>Then <em>UEDT<sub>A</sub></em> will cooperate.</p></li><li><p>This is because there&#8217;s a 50% chance that Bob deployed <em>UEDT<sub>B</sub></em>, and so <em>E<sub>A</sub></em>(<em>utility<sub>A</sub></em> | <em>UEDT<sub>A</sub></em> selects cooperate) &#8776; -$1 + 50% * $10 &gt; $0.</p></li><li><p>And <em>E<sub>A</sub></em>(<em>utility<sub>A</sub></em> | <em>UEDT<sub>A</sub></em> selects defect) &#8776; $0, since both defect-bot and <em>UEDT<sub>B</sub></em> are likely to defect if <em>UEDT<sub>A</sub></em> defects.</p></li></ul></li><li><p>If Alice chooses to deploy defect-bot, then it will defect.</p></li></ul><p>Since Alice (by assumption) does not believe that her choice of successor-agent gives her any evidence about what successor Bob deploys, or about what action Bob&#8217;s successor chooses, it is strictly better to deploy an agent that defects than an agent that cooperates. So Alice will deploy defect-bot.</p>
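<p>Here is a minimal sketch of the bookkeeping in this example. It is not from the original post; it just recomputes the numbers above under the stated assumptions (a 50% chance that Bob deploys <em>UEDT<sub>B</sub></em>, and a prisoner&#8217;s dilemma where cooperating costs $1 and sends the other side $10).</p><pre><code># UEDT_A's own calculation, under Alice's prior: UEDT_A's action is (almost
# certainly) mirrored by UEDT_B, so cooperating looks good to it.
P_BOB_DEPLOYS_UEDT = 0.5   # Alice's credence that Bob deploys UEDT_B
COST, GIFT = 1, 10         # pay $1 to send the other side $10

evs_for_uedt_a = {"cooperate": -COST + P_BOB_DEPLOYS_UEDT * GIFT,  # 4.0
                  "defect": 0.0}
uedt_a_action = max(evs_for_uedt_a, key=evs_for_uedt_a.get)        # "cooperate"

# Alice's calculation when picking a successor: by assumption her choice is no
# evidence about Bob's side, and UEDT_B (if Bob deployed it) cooperates either
# way, so Alice receives the $10 with probability 0.5 whatever she deploys.
cooperation_cost = COST if uedt_a_action == "cooperate" else 0
ev_deploy_uedt = P_BOB_DEPLOYS_UEDT * GIFT - cooperation_cost   # 4.0
ev_deploy_defect_bot = P_BOB_DEPLOYS_UEDT * GIFT                # 5.0

print(ev_deploy_uedt, ev_deploy_defect_bot)   # defect-bot wins by exactly $1
</code></pre><p>The gap between the two deployment options is exactly the $1 that <em>UEDT<sub>A</sub></em> would pay to cooperate, which is why Alice prefers defect-bot even though <em>UEDT<sub>A</sub></em> itself would choose to cooperate.</p>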
<p>So in this example, EDT does not want its successor to acausally cooperate on the basis of the successor&#8217;s <em>own</em> correlations. If the parent is not correlated with someone, then the parent does not want its successor to needlessly benefit that agent.</p><p>I suspect that this generalizes. I expect that EDT would want its successor to acausally cooperate based on its parent&#8217;s correlations, and not based on its own correlations.</p><p>This is for the case of cooperation via &#8220;direct&#8221; entanglement. If agents can observe and predict each other, the story can get more complicated. (In the above, deploying <em>UEDT</em> would be much better than defect-bot if both agents could see what the other deployed.)</p><h2>2. Asymmetric beliefs about acausal influence</h2><p>EDT agents will sometimes have asymmetric beliefs about their acausal influence on each other, in a way that isn&#8217;t easily explainable by one of them making a mistake.</p><p>Here&#8217;s an example of a case where you&#8217;d expect asymmetric beliefs about entanglement, for updateful agents:</p><ul><li><p>Alice, Bob, and Carol are arranged in a circle.</p></li><li><p>Each of them will get the opportunity to pay $1 to send $10 to the actor immediately to the right of them in the circle.</p></li><li><p>But before they make that decision, they will learn whether the actor to the right of them paid $1 to send $10 to <em>that actor&#8217;s</em> right.</p><ul><li><p><strong>(This is an exception to the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a> that I otherwise make in this post.)</strong></p></li></ul></li><li><p>In order to get things started, a random actor will be selected and have a 50/50 chance of being told that the immediately preceding actor did or did not send $10.</p></li><li><p>This structure is common knowledge among all actors.</p></li></ul><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/7264beca-9832-4aec-b1bd-d688d643033c_383x307.png" width="383" height="307" alt=""></figure></div>
<p><em>The figure assumes that actors are facing inwards.</em></p><p>Let&#8217;s assume that Alice, Bob, and Carol initially would think of themselves as fairly correlated with each other, while still having notable uncertainty about what the others will do. If so, the structure of observations in the circle will mean that all agents see themselves as having substantial acausal influence on the agent to their left, but little acausal influence on the agent to their right. This is because they will already have observed significant evidence about what the agent to their right chose, but they won&#8217;t have observed any evidence about the agent to their left.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/b82ffaa2-e340-4c23-beb0-b6cf8623dcc6_412x316.png" width="412" height="316" alt=""></figure></div>
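<p>To see how observation alone can create this asymmetry, here is a minimal sketch. It is my own toy model rather than anything from the post: the three choices are driven by a shared disposition that each actor follows 90% of the time (an arbitrary assumed number), and Alice has already seen her right-hand neighbour&#8217;s choice.</p><pre><code>from itertools import product

NOISE = 0.1   # each actor deviates from the shared disposition w 10% of the time

def p_action(action, w):
    # probability that an actor picks `action` given the shared disposition w
    return 1 - NOISE if action == w else NOISE

# Joint prior over (w, alice, bob_left, carol_right), all binary.
p = {}
for w, alice, bob, carol in product([0, 1], repeat=4):
    p[(w, alice, bob, carol)] = (0.5 * p_action(alice, w)
                                 * p_action(bob, w) * p_action(carol, w))

def prob(pred):
    return sum(q for o, q in p.items() if pred(o))

def cond(pred, given):
    return prob(lambda o: pred(o) and given(o)) / prob(given)

def seen_carol(o):
    # Alice has already observed that Carol (her right-hand neighbour) chose 1.
    return o[3] == 1

# Perceived influence on the unobserved left-hand neighbour (Bob):
d_left = (cond(lambda o: o[2] == 1, lambda o: o[1] == 1 and seen_carol(o))
          - cond(lambda o: o[2] == 1, lambda o: o[1] == 0 and seen_carol(o)))

# Perceived influence on the already-observed right-hand neighbour (Carol):
d_right = (cond(lambda o: o[3] == 1, lambda o: o[1] == 1 and seen_carol(o))
           - cond(lambda o: o[3] == 1, lambda o: o[1] == 0 and seen_carol(o)))

print(round(d_left, 3), round(d_right, 3))   # about 0.39 versus exactly 0.0
</code></pre><p>Before anyone observes anything, this same toy model gives a symmetric conditional difference of 0.64 in every direction; it is the observation of the right-hand neighbour&#8217;s choice that breaks the symmetry.</p>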
<p>(This effect is attenuated by the necessity of selecting a random start-point &#8212; which means that there is a &#8531; chance that you are not observing the action of the agent to the right. If we wanted to reduce that effect, we could make the circle much larger, to push this probability down.)</p><p>In this case, updateful EDT may well want to benefit the agent to their right, despite how little acausal influence they will have on that agent. Indeed, when calculating the value of that, it doesn&#8217;t matter how much or how little acausal influence the EDT agents have on their right-hand agent. The only thing that matters is how much acausal influence they have on the agent to their <em>left</em>. When Alice decides whether to benefit Carol, she only cares about whether this is evidence that Bob will benefit Alice.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> (Since Bob is the only agent who <em>can</em> benefit Alice.)</p><p>This illustrates two things:</p><ul><li><p>It&#8217;s possible for agents to find themselves in situations where Alice reasonably perceives herself as having more influence on Bob than Bob perceives himself to have on Alice.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p>In at least one scenario where this happens, agents like Alice will want to benefit Carol, who perceives herself as being able to acausally influence Alice &#8212;&nbsp;even if Alice doesn&#8217;t perceive herself as being able to acausally influence Carol.</p></li></ul><p>Let&#8217;s now consider a decision problem that supports a generalization of that last bullet point.</p><h2>3. EDT recommends benefitting agents who think they can influence you</h2><p>Consider the following situation:</p><ul><li><p>There are 3 agents: Alice, Bob, and Carol.</p><ul><li><p>Alice has two buttons in front of her. One sends $3 to Bob and one sends $3 to Carol. Each button costs $1 to press. She can choose to press both.</p></li><li><p>Bob has analogous buttons for sending money to Alice and Carol.</p></li><li><p>Carol has analogous buttons for sending money to Alice and Bob.</p></li></ul></li><li><p>Alice believes that she has a lot of acausal influence on Bob, but little on Carol.</p></li><li><p>Bob believes that he has a lot of acausal influence on Carol, but little on Alice.</p></li><li><p>Carol believes that she has a lot of acausal influence on Alice, but little on Bob.</p></li></ul><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/0076f63a-b32a-4579-94be-b98e56d97322_382x321.png" width="382" height="321" alt=""></figure></div><p>To give some precise numbers, let&#8217;s say that <em>d</em>(<em>B</em>|<em>A</em>)=<em>d</em>(<em>C</em>|<em>B</em>)=<em>d</em>(<em>A</em>|<em>C</em>)=0.5, and <em>d</em>(<em>A</em>|<em>B</em>)=<em>d</em>(<em>B</em>|<em>C</em>)=<em>d</em>(<em>C</em>|<em>A</em>)=0.</p><p>(For the purposes of this post, it doesn&#8217;t matter why they have this set of beliefs. But a similar set of beliefs could perhaps be explained by Alice already having received some evidence about Carol&#8217;s action; Bob having received evidence about Alice&#8217;s action; and Carol having received evidence about Bob&#8217;s action; as in the dilemma outlined in <a href="https://lukasfinnveden.substack.com/i/136240656/asymmetric-beliefs-about-acausal-influence">2. Asymmetric beliefs about acausal influence</a>.)</p><p>What will Alice do in this situation?</p><ul><li><p>If Alice sends $3 to Bob, that&#8217;s a lot of evidence that Bob sends $3 to Carol.
This isn&#8217;t worth anything to Alice.</p></li><li><p>But if Alice sends $3 to Carol, that&#8217;s a lot of evidence that Bob sends $3 to Alice.</p></li></ul><p>More precisely:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/c4d175f0-953b-412f-8dcf-7b3f6de6a254_552x41.png" width="552" height="41" alt=""></figure></div><p>So everyone in the circle will send money to the person who can acausally influence them &#8212; not the person whom they can acausally influence.</p>
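<p>As a sanity check on the numbers (this sketch is mine, not from the post, and it just follows the simple accounting above with <em>d</em>(<em>B</em>|<em>A</em>)=0.5):</p><pre><code>D_B_GIVEN_A = 0.5    # Alice's perceived acausal influence on Bob
COST, PRIZE = 1, 3   # each button costs $1 and sends the recipient $3

# Sending to Bob is evidence that Bob makes the structurally analogous move,
# i.e. that Bob sends to Carol. That never puts money in Alice's pocket.
ev_send_to_bob = -COST + D_B_GIVEN_A * 0        # -1.0

# Sending to Carol is evidence (with weight d(B|A)) that Bob sends to Alice.
ev_send_to_carol = -COST + D_B_GIVEN_A * PRIZE  # 0.5

print(ev_send_to_bob, ev_send_to_carol)
</code></pre><p>The $0.5 value of sending to Carol is the same number that reappears in section 4 below as the expected utility of the unconditional policy.</p>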
<p>I think this generalizes to larger-scale cases. Intuitively: If Alice thinks she can acausally influence Bob, she wants to take actions that make her think that Bob will benefit her. That means that she wants Bob to take actions that help <em>people who think they can influence Bob.</em> Accordingly, Alice will help people who think they can influence Alice.</p><p>Quick remark: This is analogous to other principles in similar problems, like:</p><ul><li><p>Alice won&#8217;t necessarily help people who have good opportunities to help Alice. But Alice will preferentially help people whom she has good opportunities to help.</p></li><li><p>Alice won&#8217;t necessarily help people who she thinks have a lot of power. But Alice will preferentially help people who think that Alice has a lot of power.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p>(More on this in <a href="https://lukasfinnveden.substack.com/i/136240656/a-market-analogy">A market analogy</a>.)</p></li></ul><p>If that didn&#8217;t make any sense, that&#8217;s ok!</p><p>Also, the patterns of &#8220;who can influence whom?&#8221; that will allow for this type of one-sided influence are very similar to the patterns of &#8220;who can <em>benefit</em> whom?&#8221; that enable ECL when only some agents are able to benefit each other. This latter question is discussed in section 2.9 of <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a>. In short: You need either a circle of people who can (potentially) influence/benefit each other, or an infinite line.</p><h2>4. EDT recommends seeking evidence about who thinks they can influence you</h2><p>Let&#8217;s now turn to a variant of the same scenario, where Alice, Bob, and Carol get the option to gather evidence about their degree of acausal influence.</p><p>Each agent gets 3 additional buttons, marked <em>a</em>, <em>b</em>, and <em>c</em>. They all cost $0.1 to press, and the intuitive purpose of buttons <em>a</em>, <em>b</em>, and <em>c</em> is to give agents information about whether they have acausal influence over others or not. Informally: If an agent presses button <em>a</em>, then they will learn whether Alice has 100% or 0% acausal influence over Bob according to Alice&#8217;s prior &#8212;&nbsp;and correspondingly for button <em>b</em> and Bob; and for button <em>c</em> and Carol.</p><p>To analyze this scenario, I will assume that each of Alice, Bob, and Carol <em>first</em> commits to a full policy, which specifies:</p><ul><li><p>Which (if any) buttons <em>a</em>, <em>b</em>, <em>c</em> they will choose to press.</p></li><li><p>For every possible combination of results:</p><ul><li><p>Whether they will pay to send money to the other agents.</p></li></ul></li></ul><p>Here&#8217;s a more precise description of what the buttons do:</p><ul><li><p>There are 3 unknown facts about the world, which the buttons will reveal.</p><ul><li><p>Either <em>a</em>=<em>T</em> or <em>a</em>=<em>F</em>.
(<em>T</em> and <em>F</em> stand for True and False.)</p><ul><li><p>Each agent is 50/50 on which one it is.</p></li></ul></li><li><p>If you press button <em>a</em>, you learn whether it is the case that <em>a</em>=<em>T</em> or <em>a</em>=<em>F</em>.</p></li><li><p>Likewise for the other buttons.</p></li><li><p>(The probabilities that <em>a</em> is true, <em>b</em> is true, and <em>c</em> is true are uncorrelated.)</p></li></ul></li><li><p>Intuitively, we want:</p><ul><li><p>If <em>a</em>=<em>T</em>, then <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>)=1.</p></li><li><p>If <em>a</em>=<em>F</em>, then <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>)=0.</p></li></ul></li><li><p>We get this if Alice&#8217;s prior belief has the following properties:</p><ul><li><p>By definition,<br><em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>)= (<em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>) - <em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>y</em>)).</p><ul><li><p>By previous assumption, this is equal to 0.5.</p></li></ul></li><li><p>But if we also condition on <em>a</em>=<em>T</em>, we have:</p><ul><li><p><em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>, <em>a</em>=<em>T</em>) = 1.</p></li><li><p><em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>y</em>, <em>a</em>=<em>T</em>) = 0.</p></li><li><p><em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>, <em>a</em>=<em>F</em>) = <em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>y</em>, <em>a</em>=<em>F</em>)</p></li></ul></li><li><p>In other words, if <em>a</em> is true, <em>A</em> has total acausal influence over <em>B</em>. If <em>a</em> is false, <em>A</em> has no acausal influence over <em>B</em>.</p></li><li><p>So <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>,<em>a</em>=<em>T</em>)=1 and <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>,<em>a</em>=<em>F</em>)=0.</p></li></ul></li><li><p>Bob&#8217;s and Carol&#8217;s beliefs have analogous properties with respect to button <em>b</em> and button <em>c</em>.</p></li></ul>
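<p>To make this concrete, here is a small Monte Carlo sketch (my own illustration, not from the post) of a prior with these properties, for the single fact <em>a</em> and the agents <em>A</em> and <em>B</em>. The sampling model is an assumption chosen to match the bullets above; it recovers <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>)=0.5 unconditionally, 1 given <em>a</em>=<em>T</em>, and 0 given <em>a</em>=<em>F</em>.</p><pre><code># Sketch (mine, not from the post): sample worlds from a prior in which the fact a
# is True with probability 0.5, Bob's choice mirrors Alice's when a=T, and is
# independent of Alice's when a=F. Then estimate d_A(B|A) under various conditions.
import random

def sample_world():
    a = random.random() &lt; 0.5            # the unknown fact a
    alice = random.choice(["x", "y"])    # Alice's choice
    # If a=T, Bob's choice mirrors Alice's (x -&gt; x', y -&gt; y'); if a=F, it's independent.
    bob = alice + "'" if a else random.choice(["x", "y"]) + "'"
    return a, alice, bob

worlds = [sample_world() for _ in range(200_000)]

def d(keep=lambda a: True):
    """Estimate p(B chooses x' | A chooses x) - p(B chooses x' | A chooses y),
    restricted to worlds where keep(a) holds."""
    def p(alice_choice):
        rel = [w for w in worlds if keep(w[0]) and w[1] == alice_choice]
        return sum(w[2] == "x'" for w in rel) / len(rel)
    return p("x") - p("y")

print(round(d(), 2))                  # ~0.5: the prior correlation
print(round(d(lambda a: a), 2))       # ~1.0: full acausal influence, given a=T
print(round(d(lambda a: not a), 2))   # ~0.0: no acausal influence, given a=F</code></pre>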
<p>Recall the discussion about evidence in the <a href="https://lukasfinnveden.substack.com/i/136240656/more-formal-summary">More formal summary</a>. Given a prior, I introduced a distinction between:</p><ul><li><p>The correlation that a successor agent would have with others, after seeing the evidence: <em>d<sub>A</sub></em>(<em>B</em>|<em>A2T</em>,<em>e</em>=<em>T</em>).</p></li><li><p>The correlation that the original agent would have with others, if we just condition that agent&#8217;s prior on the evidence: <em>d<sub>A</sub></em>(<em>B</em>|<em>A1</em>,<em>e</em>=<em>T</em>).</p></li></ul><p>In the current scenario, the buttons are supposed to provide evidence about the <em>original</em> agents&#8217; correlation with others (i.e., the analog of the <em>second</em> of the above quantities: <em>d<sub>A</sub></em>(<em>B</em>|<em>A1</em>,<em>e</em>=<em>T</em>)). I will not pay any attention to any successor agent&#8217;s correlation with other agents. (Since, according to <a href="https://lukasfinnveden.substack.com/i/136240656/son-of-edt-does-not-cooperate-based-on-its-own-correlations">1. Son-of-EDT cooperates using its parent&#8217;s entanglements</a>, this isn&#8217;t particularly relevant when agents have sufficient ability to commit to certain policies.)</p><p>In the decision problem sketched above &#8212; what will Alice do?</p><ul><li><p>As before, she has the option of just sending money to Carol. As before, the expected utility of this is 0.5.</p></li><li><p>She could condition this on <em>actually</em> having acausal influence over Bob. After all, acausal influence on Bob is the only reason she&#8217;s doing this.</p><ul><li><p>But if Alice does that, Bob is more likely to condition <em>his</em> decision on having influence over <em>Carol</em>. (I.e., on <em>b</em>=<em>T</em>.)</p></li><li><p>In expectation, Alice saves $0.5 by not having to send money if she can&#8217;t influence Bob.</p></li><li><p>But if she influences Bob to choose the same policy as her, Bob has a 50% chance of discovering that <em>b</em>=<em>F</em> (i.e., of updating that he can&#8217;t influence Carol), and then not sending any money to Alice. So in expectation, Bob sends Alice half as much money as he did in the unconditional case: $0.75 instead of $1.5.</p><ul><li><p>(The full calculation here is that Alice only thinks she influences Bob in worlds where <em>a</em>=<em>T</em>, and, <em>in addition</em>, Bob only sends money in worlds where <em>b</em>=<em>T</em>. Since these events are independent, Alice&#8217;s expected winnings are 0.50*0.50*$3=$0.75.)</p></li></ul></li><li><p>Alice&#8217;s payment and Alice&#8217;s winnings have both been halved, which is a bad deal, since the winnings were larger than the payments.</p></li></ul></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!rpiP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8146457e-bfe1-431b-a138-409c2e2145c6_350x39.png" width="350" height="39" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!OqKF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0689b97-ae4e-46bc-a85e-a010631692e1_384x34.png" width="384" height="34" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!vcOR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe48d4090-b6c4-4c93-a689-22c87eb81725_445x35.png" width="445" height="35" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ln8w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc79cf191-2808-4fd0-a47d-e9bd9bee4ed7_308x30.png" width="308" height="30" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!A1hQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F804d48c4-f589-4b86-b493-41afeb7f750e_146x33.png" width="146" height="33" alt=""></figure></div><ul><li><p>But instead, she could condition on being in a world where Carol would see herself as having acausal influence over Alice, i.e. only send money if <em>c</em>=<em>T</em>.</p><ul><li><p>To calculate the EV of this, let&#8217;s consider two different worlds: The one where <em>a</em>=<em>T</em> and the one where <em>a</em>=<em>F</em>.</p></li><li><p>Alice never learns which one she&#8217;s in, so in both worlds, Alice pays $0.1 for the button and in expectation sends <em>p</em>(<em>c</em>=<em>T</em>)*$1=$0.5 to Carol.</p></li><li><p>If <em>a</em>=<em>F</em>, then Alice will have no further effect on Bob.</p></li><li><p>If <em>a</em>=<em>T</em>, it is <em>both</em> the case that Bob will follow the same policy <em>and</em> that he&#8217;ll find that <em>a</em>=<em>T</em> and send the money.&nbsp;</p></li></ul></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!vrgW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89589f10-50ac-41bd-ac7a-334b110054dc_359x41.png" width="359" height="41" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!QwgA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43d88d62-898c-4dbd-b71f-3f8cd53aaef7_763x37.png" width="763" height="37" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!469r!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13c8aa4d-a3eb-4500-9dbf-1bdcfad7fc30_357x23.png" width="357" height="23" alt=""></figure></div><ul><li><p>Similar to the above case, Alice saves $0.5 from not having to send money if <em>c</em>=<em>F</em>. But this time the risk that Bob doesn&#8217;t send money doesn&#8217;t apply &#8212;&nbsp;because <em>if</em> she influenced Bob, then Bob will find that <em>a</em>=<em>T</em>.</p></li></ul><p>So EDT recommends that you investigate whether anyone would think themselves able to influence you, if they knew more about the world. And then help people only insofar as this is the case.</p><p>We can appeal to a similar intuition as above. If Alice thinks she can acausally influence Bob, she wants to take actions that make her think that Bob will benefit her. There&#8217;s no reason why she&#8217;d want Bob&#8217;s help to be conditional on Bob being able to influence someone else. Alice doesn&#8217;t care about that! But Alice is ok with her action acausally influencing Bob to follow a policy that only helps Alice in worlds where she actually can influence Bob. The other worlds don&#8217;t matter, as she judges her action to have no acausal influence in those worlds.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p>
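<p>Putting numbers on the three options above (a sketch of my own, not from the post, using the numbers assumed in this example: paying $1 delivers $3, each button costs $0.1, and <em>a</em>, <em>b</em>, <em>c</em> are independent 50/50 facts):</p><pre><code># Sketch (mine, not from the post) of Alice's expected value under the three
# policies discussed above. Assumed numbers, as in the text: paying $1 delivers
# $3, each button costs $0.1, and a, b, c are independent facts with prior 0.5.
P = 0.5                                  # prior probability of each fact being True
COST, TRANSFER, BUTTON = 1.0, 3.0, 0.1

# Policy 1: pay Carol unconditionally. Bob mirrors this only in the a=T worlds
# (probability P), which is when Alice influences him.
ev_unconditional = -COST + P * TRANSFER

# Policy 2: press a; pay Carol iff a=T. Bob mirrors only in a=T worlds, and his
# mirrored policy pays Alice only if b=T, which is independent of a.
ev_condition_on_a = -BUTTON - P * COST + P * P * TRANSFER

# Policy 3: press c; pay Carol iff c=T. Bob's mirrored policy is "pay Alice iff
# a=T", so in every world where Alice influences Bob at all, he also pays.
ev_condition_on_c = -BUTTON - P * COST + P * TRANSFER

print(round(ev_unconditional, 2),   # 0.5
      round(ev_condition_on_a, 2),  # 0.15
      round(ev_condition_on_c, 2))  # 0.9</code></pre>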
<p>I think this generalizes to much larger and more complicated cases. I.e., I&#8217;d speculate that:</p><ul><li><p>If multiple people think they might be able to influence you, then you should investigate their cruxes and prioritize people for whom the cruxes came up positively.</p></li><li><p>If you can only gain a little bit of evidence about whether others would think themselves able to influence you, then it&#8217;s still good to get that and to slightly adjust the bar at which you&#8217;d help them.</p></li><li><p>You never care about figuring out who you can acausally influence, except insofar as this is related to other things, such as who can acausally influence <em>you</em>.</p><ul><li><p>I think this only holds if the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a> holds. For example, I think you probably care about figuring out whether Omega is a good predictor in Newcomb&#8217;s problem.</p></li><li><p>Nevertheless, this conclusion is quite wild to me! Does it suggest that we should just go with our first instinct about &#8220;who can I acausally influence?&#8221;, use that to determine how interested we should be in acausal trade, and then never ponder that question again? I don&#8217;t think that summary is quite right (at least not in spirit) but it&#8217;s certainly closer to the truth than I expected.</p><ul><li><p>I am somewhat reassured by the commonsensical results in <a href="https://lukasfinnveden.substack.com/i/136240656/fully-symmetrical-cases">6. Fully symmetrical cases</a> and <a href="https://lukasfinnveden.substack.com/i/136240656/fully-symmetrical-cases">7. Symmetric ground truth, different starting beliefs</a>.</p></li><li><p>Also, the appendix <a href="https://lukasfinnveden.substack.com/i/136240656/a-market-analogy">A market analogy</a> gives some intuition for what&#8217;s going on, which is helpful to me.</p></li><li><p>That said, I still feel puzzled about this, and would appreciate more insight on it.</p></li></ul></li></ul></li></ul><p>I should flag one thing: It&#8217;s not good to update on <em>every</em> piece of evidence. In particular, if Alice were to update Carol&#8217;s prior <em>p<sub>C</sub></em> on Alice&#8217;s own choice of what to do, then that would suggest that Carol has no acausal influence on Alice&#8217;s choice. (Because Alice&#8217;s choice would be fixed.) This would ruin their motivation for sending each other money, overall destroying value.</p><p>It seems quite tricky to determine what sort of evidence is good vs. bad to update on in general, but given the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, I think it&#8217;s fine to update on anything except for your own action and events that are causally downstream of your own action. I explain why in the appendix <a href="https://lukasfinnveden.substack.com/i/136240656/what-evidence-to-update-on">What evidence to update on?</a></p><h2>5. Interlude: What kind of evidence is this?</h2><p>You might find yourself asking: What are these mysterious buttons doing? How could they provide evidence about how much acausal influence these agents have?</p><p>One thing to note is that the concept of &#8220;evidence about correlations&#8221; can be fully captured by certain correlational structures in the agent&#8217;s Bayesian priors.
And updating on that evidence can be captured by normal Bayesian updating: <em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>, <em>a</em>=<em>T</em>) is higher than <em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>, <em>a</em>=<em>F</em>). So this is &#8220;evidence&#8221; in the normal, Bayesian sense.</p><p>But why would those correlational structures be there in the first place? What structures in the world do the buttons correspond to? I&#8217;ll give two different examples.</p><p>Firstly, a relatively prosaic example. Alice might be confident that her decision correlates with certain algorithms, and confident that her decision <em>doesn&#8217;t</em> correlate with certain other algorithms. She might be uncertain about which of these algorithms Bob implements. Button &#8220;a&#8221; might give her information about whether Bob uses an algorithm that is similar to her own, or whether Bob uses an algorithm that is very different.</p><p>Secondly, a relatively less prosaic example. Alice might be philosophically confused about the nature of correlation. In some sense, she might have access to all the information about her own and Bob&#8217;s algorithm that she&#8217;s going to get. But she doesn&#8217;t understand what&#8217;s the best way to go from there to assigning the conditional probabilities that specify <em>d</em>(<em>B</em>|<em>A</em>). In this case, we could say that &#8220;<em>a</em>=<em>T</em>&#8221; is true iff Alice would (on reflection) conclude that the conditional probabilities are high, and &#8220;<em>a</em>=<em>F</em>&#8221; if she would (on reflection) conclude that the conditional probabilities are low.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>Note that this latter case could conform to the same correlational structure as the relatively more prosaic case. <em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>, <em>a</em>=<em>T</em>) is higher than <em>p<sub>A</sub></em>(<em>B</em> chooses <em>x&#8217;</em> | <em>A</em> chooses <em>x</em>, <em>a</em>=<em>F</em>), because if Alice on-reflection thinks that <em>A</em> and <em>B</em> are very correlated, then it&#8217;s very likely that <em>B</em> chooses <em>x&#8217;</em> if <em>A</em> chooses <em>x</em>.</p><p>However, this latter case does not appeal to empirical uncertainty. At best, it appeals to logical uncertainty,&nbsp;which tends to cause all kinds of problems for Bayesian approaches to uncertainty. (Or even worse, it might appeal to philosophical uncertainty, which isn&#8217;t captured by even those formal methods we do have for logical uncertainty, such as <a href="https://arxiv.org/abs/1609.03543">logical induction</a>.) So if you dug into this latter case further (and similar cases) you would predictably encounter all kinds of strange issues.</p><h2>6. Fully symmetrical cases</h2><p>In symmetric cases, I think the lessons from <a href="https://lukasfinnveden.substack.com/i/136240656/edt-recommends-seeking-evidence-about-who-thinks-they-can-influence-you">4. EDT recommends seeking evidence about who thinks they can influence you</a> accord well with common sense. 
If Alice and Bob have the same beliefs about acausal influence, have the same opportunities for investigation, and have the same opportunities to help each other &#8212; then they&#8217;re both interested in finding out whether their correlation is large enough to justify cooperation.</p><p>Consider the following case:</p><ul><li><p>A prisoner&#8217;s dilemma between Alice and Bob, where each can pay $1 to send the other $3.</p></li><li><p>Each of them has access to their own copy of a button <em>e</em> that will give some evidence about their acausal influence over each other. They both agree that</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!DntV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F06bea616-a406-4715-9ded-cbe56377315c_390x37.png" width="390" height="37" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!DivX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26cb828a-163e-4582-a7fb-739ec3e0d85a_357x40.png" width="357" height="40" alt=""></figure></div></li><li><p>They share a prior
<em>p</em>(<em>e</em>=<em>T</em>) that button <em>e</em> shows &#8220;True&#8221;. So at the start:&nbsp;</p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!aD4p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F58d7b87d-2250-4763-bb76-b16d1b5db61d_620x40.png" width="620" height="40" alt=""></figure></div>
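<p>As a preview of the comparison that the bullets below walk through, here is a small sketch (my own reconstruction from those bullets, not the post&#8217;s rendered formulas), writing <em>eps</em> for the cost of pressing the button, which is introduced next:</p><pre><code># Sketch (mine, reconstructed from the bullets below rather than from the post's
# rendered formulas): paying unconditionally vs. pressing e and paying iff e=T.
# Assumed numbers from this section: paying $1 delivers $3, d(.|e=T)=0.5, d(.|e=F)=0.1.
COST, TRANSFER = 1.0, 3.0
D_IF_TRUE, D_IF_FALSE = 0.5, 0.1

def ev_pay_unconditionally(p_true):
    # In e=T worlds the deal is worth it (0.5*3 &gt; 1); in e=F worlds it is not (0.1*3 &lt; 1).
    return (p_true * (D_IF_TRUE * TRANSFER - COST)
            + (1 - p_true) * (D_IF_FALSE * TRANSFER - COST))

def ev_press_then_pay_iff_true(p_true, eps):
    # Pay eps for the button; the (positive) e=T term is the same as above, and the
    # (negative) e=F term disappears, since neither player pays in those worlds.
    return -eps + p_true * (D_IF_TRUE * TRANSFER - COST)

for p in (0.1, 0.5, 0.9):
    print(p, round(ev_pay_unconditionally(p), 2),
          round(ev_press_then_pay_iff_true(p, eps=0.01), 2))
# For any p &lt; 1, a small enough eps makes the conditional policy the better one.</code></pre>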
<ul><li><p>Assume that the cost of pressing button <em>e</em> is <em>eps</em>. (For epsilon.)</p></li><li><p>The value of conditional paying is:</p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Sd-M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdc706d58-93c6-41d1-801f-78e8e9014cb3_621x43.png" width="621" height="43" alt=""></figure></div><ul><li><p>For every <em>p</em>(<em>e</em>=<em>T</em>) &gt; 0, there&#8217;s an <em>eps</em> that&#8217;s small enough that this expected value is positive (i.e., higher than doing nothing).</p><ul><li><p>This is true for <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>,<em>e</em>=<em>T</em>)=0.5, and more generally, whenever the deal would be worth it if <em>e</em>=<em>T</em>, i.e. when <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>,<em>e</em>=<em>T</em>)$3 &gt; $1.</p></li></ul></li><li><p>The value of unconditional paying is:</p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!XwAV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55cff9ea-617c-4236-a323-afce167bfde8_301x38.png" width="301" height="38" alt=""></figure></div><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!yTvZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79b878e-fb6b-4c50-8be7-b56176fafa18_678x34.png" width="678" height="34" alt=""></figure></div><ul><li><p>The last term (starting with <em>p</em>(<em>e</em>=<em>T</em>)) appears in both EV(pay) and EV(press <em>e</em>, pay iff <em>e</em>=<em>T</em>).</p></li><li><p>For every <em>p</em>(<em>e</em>=<em>T</em>)&lt;1, the first term (starting with <em>p</em>(<em>e</em>=<em>F</em>)) is negative.</p><ul><li><p>This is true for
<em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>,<em>e</em>=<em>F</em>)=0.1, and more generally, whenever the deal wouldn't be worth it if <em>e</em>=<em>F</em>, i.e. when <em>d<sub>A</sub></em>(<em>B</em>|<em>A</em>,<em>e</em>=<em>F</em>)$3 &lt; $1.</p></li></ul></li><li><p>Thus, for every <em>p</em>(<em>e</em>=<em>T</em>) &lt; 1, there&#8217;s an <em>eps</em> with an absolute value smaller than the first term&#8217;s absolute value, for which <em>conditional</em> paying will be higher than the expected value of unconditional paying.</p></li></ul><p>So in symmetrical cases, people will always seek out sufficiently cheap evidence about correlations.</p><h2>7. Symmetric ground truth, different starting beliefs</h2><p>You might have the following intuition: People might have asymmetric beliefs about acausal influence <em>right now</em>, but on reflection, in &#8220;normal&#8221; circumstances, Alice&#8217;s perceived influence on Bob would often line up with Bob&#8217;s perceived influence on Alice. (It&#8217;s unclear what this sense of &#8220;on reflection&#8221; means.)</p><p>In this case, I think the situation is still mostly common-sensical:</p><ul><li><p>Consider a case like the fully symmetrical one above, except Alice&#8217;s prior is that <em>p<sub>A</sub></em>(<em>e</em>=<em>T</em>)=0.1 and Bob&#8217;s prior is that <em>p<sub>B</sub></em>(<em>e</em>=<em>T</em>)=0.5.</p></li><li><p>If the cost of pressing <em>e</em> is <em>eps</em>, and <strong>pressing the button is an analogous action despite their difference in credences</strong>, then:</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sye4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sye4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 424w, https://substackcdn.com/image/fetch/$s_!sye4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 848w, https://substackcdn.com/image/fetch/$s_!sye4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 1272w, https://substackcdn.com/image/fetch/$s_!sye4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sye4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png" width="647" height="34" 
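<p>To make this concrete, here is a small numerical sketch. It is my shorthand for the comparison above rather than a transcription of the exact formulas: I assume the deal costs $1 and is worth $3 times the relevant influence <em>d</em>, with <em>d</em>=0.5 given <em>e</em>=<em>T</em> and <em>d</em>=0.1 given <em>e</em>=<em>F</em>, and I pick <em>p</em>(<em>e</em>=<em>T</em>)=0.5 and <em>eps</em>=$0.05 purely for illustration.</p>
<pre><code># Sketch: compare "do nothing", "pay unconditionally", and "press e, pay iff e=T".
# Assumed numbers: paying costs $1 and is worth $3 * d, where d is the perceived
# acausal influence; d = 0.5 given e=T, d = 0.1 given e=F; p(e=T) = 0.5; eps = 0.05.
d_if_true, d_if_false = 0.5, 0.1
p_true = 0.5   # assumed prior probability that e = T
eps = 0.05     # assumed cost of pressing the button

def deal_value(d):
    # Value of paying, given influence d: a $3 gain weighted by d, minus the $1 cost.
    return 3 * d - 1

ev_nothing = 0.0
ev_pay_unconditionally = (1 - p_true) * deal_value(d_if_false) + p_true * deal_value(d_if_true)
ev_press_and_pay_iff_true = -eps + p_true * deal_value(d_if_true)

print(ev_nothing, round(ev_pay_unconditionally, 2), round(ev_press_and_pay_iff_true, 2))
# 0.0 -0.1 0.2 -- conditional paying wins whenever eps is smaller than the (negative) e=F term.
</code></pre>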
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:34,&quot;width&quot;:647,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9429,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sye4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 424w, https://substackcdn.com/image/fetch/$s_!sye4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 848w, https://substackcdn.com/image/fetch/$s_!sye4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 1272w, https://substackcdn.com/image/fetch/$s_!sye4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd8fb6dd6-daf9-43b6-8fc0-d2342a922060_647x34.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bztH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bztH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 424w, https://substackcdn.com/image/fetch/$s_!bztH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 848w, https://substackcdn.com/image/fetch/$s_!bztH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 1272w, https://substackcdn.com/image/fetch/$s_!bztH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bztH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png" width="654" height="41" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:41,&quot;width&quot;:654,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9759,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bztH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 424w, https://substackcdn.com/image/fetch/$s_!bztH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 848w, https://substackcdn.com/image/fetch/$s_!bztH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 1272w, https://substackcdn.com/image/fetch/$s_!bztH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3dba519-c708-425e-95f8-a80a7bb9fae7_654x41.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><ul><li><p>For sufficiently small <em>eps</em>, this will be positive EV for both <em>A</em> and <em>B</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Moreover, for any prior probability of <em>p</em>(<em>e</em>=<em>T</em>) between 0 and 1 that they have, there&#8217;s always an <em>eps</em> that is small enough that they both gather the evidence and cooperate.</p></li></ul><h3>Different priors lead to betting</h3><p>However, there&#8217;s one strange thing about these cases. If Alice and Bob start out with different priors, they can often both get higher expected value deals by betting on their difference in beliefs. One thing they can bet on is the outcome of pressing e. It turns out that it can actually be a better deal for Bob to send money when <em>e</em>=<em>F</em> than when <em>e</em>=<em>T</em>.</p><p>(If this is already obvious to you, feel free to skip the rest.)</p><p>Consider changing the above deal so that Bob instead sends money when <em>e</em>=<em>F</em>. 
<h3>Different priors lead to betting</h3>
<p>However, there&#8217;s one strange thing about these cases. If Alice and Bob start out with different priors, they can often both get higher expected value deals by betting on their difference in beliefs. One thing they can bet on is the outcome of pressing <em>e</em>. It turns out that it can actually be a better deal for Bob to send money when <em>e</em>=<em>F</em> than when <em>e</em>=<em>T</em>.</p>
<p>(If this is already obvious to you, feel free to skip the rest.)</p>
<p>Consider changing the above deal so that Bob instead sends money when <em>e</em>=<em>F</em>. (And we stipulate that <em>this</em> is now the &#8220;analogous action&#8221; to Alice sending money when <em>e</em>=<em>T</em>.)</p>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/36fc5c0a-f959-4811-9c6a-530c87d15263_203x36.png" width="203" height="36" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/2638bed7-3d50-4c36-9ef0-38150e9e3091_515x37.png" width="515" height="37" alt=""></figure></div>
<p>This will be higher EV for Alice than the non-betting policy!</p>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/5984266d-8050-41a2-bf24-9582f91f3cb4_460x37.png" width="460" height="37" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/0087c7f6-2596-467c-9c98-46acb7d8e334_525x35.png" width="525" height="35" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/c9b3280c-6d95-4e0d-b2ff-e4b499fa63fd_456x44.png" width="456" height="44" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/fd8c862d-78d1-43e7-b722-2dd2a014126b_559x40.png" width="559" height="40" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/fcb334c3-d498-418b-b95c-6322d33e3145_541x49.png" width="541" height="49" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/9a0d5747-7bb9-49d3-bcb9-1c2438f0ee3e_280x42.png" width="280" height="42" alt=""></figure></div>
<p>Meanwhile, it will be the exact same EV for Bob, since Bob thinks <em>e</em>=<em>T</em> and <em>e</em>=<em>F</em> are equally likely.</p>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/35fbae61-9350-4936-bd61-edb97d6cce1a_454x43.png" width="454" height="43" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/1d92d54e-7d42-4293-8632-bebda33b2b53_530x38.png" width="530" height="38" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/eb6d3cd7-0037-4c12-899a-71909dd79e66_451x40.png" width="451" height="40" alt=""></figure></div>
<div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/ffaa36d0-993f-44a1-9b93-8fa7f42c859c_364x37.png" width="364" height="37" alt=""></figure></div>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ffaa36d0-993f-44a1-9b93-8fa7f42c859c_364x37.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:37,&quot;width&quot;:364,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3670,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!x5ar!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffaa36d0-993f-44a1-9b93-8fa7f42c859c_364x37.png 424w, https://substackcdn.com/image/fetch/$s_!x5ar!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffaa36d0-993f-44a1-9b93-8fa7f42c859c_364x37.png 848w, https://substackcdn.com/image/fetch/$s_!x5ar!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffaa36d0-993f-44a1-9b93-8fa7f42c859c_364x37.png 1272w, https://substackcdn.com/image/fetch/$s_!x5ar!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffaa36d0-993f-44a1-9b93-8fa7f42c859c_364x37.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>By adjusting the numbers, we could find cases where the betting policy would be preferred by both Alice and Bob.</p><p>This is a bit weird, but we already knew that you can bet on anything, and that betting-happy agents with wrong beliefs will lose their shirts. This is just another thing you can bet about. Nothing new under the sun.</p><p>Also, the fact that betting worked in this case relied on Alice and Bob having mutual knowledge about their disagreement, and the disagreement nevertheless persisting &#8212;&nbsp;which is not supposed to happen if agents have common priors, according to <a href="https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem">Aumann&#8217;s agreement theorem</a>. I suspect that betting, here, would actually require different <em>priors</em> &#8212; and not just having observed different evidence.</p><h2>Appendices</h2><h3>What evidence to update on?</h3><p>In section <a href="https://lukasfinnveden.substack.com/i/136240656/edt-recommends-seeking-evidence-about-who-thinks-they-can-influence-you">4. EDT recommends seeking evidence about who thinks they can influence you</a>, my analysis suggested that agents should seek out evidence about who other agents could acausally influence, according to that other agent&#8217;s prior. For example, Alice should seek out evidence c such that <em>d<sub>C</sub></em>(<em>A</em>|<em>C</em>,<em>c</em>=<em>T</em>) and <em>d<sub>C</sub></em>(<em>A</em>|<em>C</em>,<em>c</em>=<em>F</em>) were different from Carol&#8217;s prior <em>d<sub>C</sub></em>(<em>A</em>|<em>C</em>) &#8212;&nbsp;and help Carol iff <em>d<sub>C</sub></em>(<em>A</em>|<em>C</em>,<em>c</em>) was high.</p><p>But Alice needs to be careful about what evidence she updates on. For example, let&#8217;s say that Alice makes a certain choice. 
Let&#8217;s abbreviate &#8220;Alice commits to a particular policy [with details that I won&#8217;t specify here])&#8221; as <em>p</em>. If Alice uses <em>that</em> information to estimate Carol&#8217;s acausal influence&#8230;</p><p><em>d<sub>C</sub></em>(<em>A</em>|<em>C</em>,<em>p</em>) = <em>p<sub>C</sub></em>(Alice chooses <em>x&#8217;</em> | Carol chooses <em>x</em>, <em>p</em>) - <em>p<sub>C</sub></em>(Alice chooses <em>x&#8217;</em> | Carol chooses <em>y</em>, <em>p</em>)</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rPpV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rPpV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 424w, https://substackcdn.com/image/fetch/$s_!rPpV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 848w, https://substackcdn.com/image/fetch/$s_!rPpV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 1272w, https://substackcdn.com/image/fetch/$s_!rPpV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rPpV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png" width="108" height="39" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:39,&quot;width&quot;:108,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1969,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rPpV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 424w, https://substackcdn.com/image/fetch/$s_!rPpV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 848w, https://substackcdn.com/image/fetch/$s_!rPpV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 1272w, 
https://substackcdn.com/image/fetch/$s_!rPpV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25e89c39-e0bd-4b0f-a497-fd23eca456a0_108x39.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!q6nQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!q6nQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 424w, https://substackcdn.com/image/fetch/$s_!q6nQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 848w, https://substackcdn.com/image/fetch/$s_!q6nQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 1272w, https://substackcdn.com/image/fetch/$s_!q6nQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!q6nQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png" width="697" height="39" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:39,&quot;width&quot;:697,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10604,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!q6nQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 424w, https://substackcdn.com/image/fetch/$s_!q6nQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 848w, https://substackcdn.com/image/fetch/$s_!q6nQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 1272w, https://substackcdn.com/image/fetch/$s_!q6nQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b110a8a-c862-473f-812d-ecfa95d6314e_697x39.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Now, either Alice&#8217;s choice was to 
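<p>(As a trivial numerical restatement of that last point, with made-up numbers:)</p>
<pre><code># Carol's perceived influence on Alice is a difference between two conditional
# probabilities of "Alice chooses x'". Hypothetical numbers, purely for illustration.
def influence(p_given_carol_x, p_given_carol_y):
    return p_given_carol_x - p_given_carol_y

# Before updating on Alice's own policy p, Carol might perceive some influence:
print(round(influence(0.7, 0.3), 2))  # 0.4

# After updating on p, which settles Alice's choice, both terms are 1 (or both are 0):
print(influence(1.0, 1.0))  # 0.0
print(influence(0.0, 0.0))  # 0.0
</code></pre>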
choose <em>x&#8217;</em>, in which case both terms are equal to 1. Or Alice&#8217;s choice was to choose something other than <em>x&#8217;</em>, in which case both terms are equal to 0. Regardless, <em>d<sub>C</sub></em>(<em>A</em>|<em>C</em>,<em>p</em>) = 0. This suggests that Carol has no influence on Alice&#8217;s decision at all.</p><p>So apparently, we don&#8217;t want Alice to update Carol&#8217;s prior on Alice&#8217;s own decision, when estimating Carol&#8217;s acausal influence! Can we say anything more general about what Alice should or shouldn&#8217;t update on?</p><p>Given the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, my <em>guess</em> is that Alice should be fine with updating Carol&#8217;s prior on any information that isn&#8217;t causally downstream of Alice&#8217;s own action.</p><p>Why is this? Well, intuitively, the reason why it <em>must</em> be a bad idea to update on Alice&#8217;s own action is that this <em>predictably</em> reduces acausal influence to 0. But for any evidence that isn&#8217;t downstream of Alice&#8217;s own action, we can show that updating on that evidence will preserve Carol&#8217;s <em>expected</em> acausal influence. So it won&#8217;t predictably make Alice neglect Carol&#8217;s preferences &#8212;&nbsp;it will only redistribute in which worlds Alice cares more vs. less about them.</p><p>Here&#8217;s the proof.</p><p>Let&#8217;s say that Carol&#8217;s initial perceived acausal influence over Alice is <em>d</em>(A chooses <em>x&#8217;</em> | <em>C</em> chooses <em>x</em>). There&#8217;s a button <em>e</em> that will reveal either <em>e</em>=<em>T</em> or <em>e</em>=<em>F</em>, which Alice is considering pressing.</p><p>In order to check that the button won&#8217;t systematically reduce Carol&#8217;s estimated acausal influence, we can calculate the <em>expected acausal influence</em> that Alice will assign to Carol after pressing the button and conditioning on its value.</p><p>Let&#8217;s abbreviate:</p><ul><li><p>&#8220;A chooses <em>x&#8217;</em>&#8221; as just <em>x&#8217;</em>.</p></li><li><p>&#8220;C chooses <em>x</em>&#8221; as just <em>x</em>.</p></li><li><p>&#8220;C chooses <em>y</em>&#8221; as just <em>y</em>.</p></li></ul><p>I&#8217;ll also omit subscripts <em>C</em> from <em>p</em> and <em>d</em>.</p><p>Then Carol&#8217;s expected acausal influence after conditioning on the button-result is:</p><p><em>p</em>(<em>e</em>=<em>T</em>)&#183;<em>d</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>T</em>) + <em>p</em>(<em>e</em>=<em>F</em>)&#183;<em>d</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>F</em>)</p><p>In order for the button to preserve acausal influence, we need this expression to equal <em>d</em>(<em>x&#8217;</em>|<em>x</em>).
So let&#8217;s expand <em>d</em>(<em>x&#8217;</em>|<em>x</em>) and see when it equals the above expression:</p><p><em>d</em>(<em>x&#8217;</em>|<em>x</em>)</p><p>= <em>p</em>(<em>x&#8217;</em>|<em>x</em>) - <em>p</em>(<em>x&#8217;</em>|<em>y</em>)</p><p>= <em>p</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>T</em>)&#183;<em>p</em>(<em>e</em>=<em>T</em>|<em>x</em>) + <em>p</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>F</em>)&#183;<em>p</em>(<em>e</em>=<em>F</em>|<em>x</em>)</p><p>- <em>p</em>(<em>x&#8217;</em>|<em>y</em>,<em>e</em>=<em>T</em>)&#183;<em>p</em>(<em>e</em>=<em>T</em>|<em>y</em>) - <em>p</em>(<em>x&#8217;</em>|<em>y</em>,<em>e</em>=<em>F</em>)&#183;<em>p</em>(<em>e</em>=<em>F</em>|<em>y</em>)</p><p>Now, recall the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a> (with some variable-names replaced, to suit our current example):</p><blockquote><p>For every observation [<em>e</em>=<em>T</em>] that [Alice] can make <em>other</em> than [Alice&#8217;s] own choice (and events that are causally downstream of Alice&#8217;s choice), and for every pair of actions [(<em>x</em>,<em>y</em>)] that Carol can take, we have:</p></blockquote><p><em>p<sub>C</sub></em>(Alice observes <em>e</em>=<em>T</em> | Carol chooses <em>x</em>) = <em>p<sub>C</sub></em>(Alice observes <em>e</em>=<em>T</em> | Carol chooses <em>y</em>)</p><p>In other words, if the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a> holds, then as long as the evidence <em>e</em> isn&#8217;t Alice&#8217;s own choice (or something causally downstream of Alice&#8217;s choice), then:</p><p><em>p</em>(<em>e</em>=<em>T</em>|<em>x</em>) = <em>p</em>(<em>e</em>=<em>T</em>|<em>y</em>) = <em>p</em>(<em>e</em>=<em>T</em>)</p><p><em>p</em>(<em>e</em>=<em>F</em>|<em>x</em>) = <em>p</em>(<em>e</em>=<em>F</em>|<em>y</em>) = <em>p</em>(<em>e</em>=<em>F</em>)</p><p>If we accordingly substitute the values, then we get:</p><p><em>d</em>(<em>x&#8217;</em>|<em>x</em>)</p><p>= <em>p</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>T</em>)&#183;<em>p</em>(<em>e</em>=<em>T</em>) + <em>p</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>F</em>)&#183;<em>p</em>(<em>e</em>=<em>F</em>)</p><p>- <em>p</em>(<em>x&#8217;</em>|<em>y</em>,<em>e</em>=<em>T</em>)&#183;<em>p</em>(<em>e</em>=<em>T</em>) - <em>p</em>(<em>x&#8217;</em>|<em>y</em>,<em>e</em>=<em>F</em>)&#183;<em>p</em>(<em>e</em>=<em>F</em>)</p><p>= <em>p</em>(<em>e</em>=<em>T</em>)&#183;[<em>p</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>T</em>) - <em>p</em>(<em>x&#8217;</em>|<em>y</em>,<em>e</em>=<em>T</em>)] + <em>p</em>(<em>e</em>=<em>F</em>)&#183;[<em>p</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>F</em>) - <em>p</em>(<em>x&#8217;</em>|<em>y</em>,<em>e</em>=<em>F</em>)]</p><p>= <em>p</em>(<em>e</em>=<em>T</em>)&#183;<em>d</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>T</em>) + <em>p</em>(<em>e</em>=<em>F</em>)&#183;<em>d</em>(<em>x&#8217;</em>|<em>x</em>,<em>e</em>=<em>F</em>)</p>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7c72aea7-e48b-429a-a524-c490faabb73d_436x41.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:41,&quot;width&quot;:436,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4945,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NnYJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7c72aea7-e48b-429a-a524-c490faabb73d_436x41.png 424w, https://substackcdn.com/image/fetch/$s_!NnYJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7c72aea7-e48b-429a-a524-c490faabb73d_436x41.png 848w, https://substackcdn.com/image/fetch/$s_!NnYJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7c72aea7-e48b-429a-a524-c490faabb73d_436x41.png 1272w, https://substackcdn.com/image/fetch/$s_!NnYJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7c72aea7-e48b-429a-a524-c490faabb73d_436x41.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>So given the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, Alice can update Carol&#8217;s prior on anything that isn&#8217;t causally downstream of Alice&#8217;s action &#8212; and preserve expected acausal influence.</p><h3>When does updateful EDT avoid information?</h3><p>If we&#8217;re talking about updateful EDT agents, each &#8220;successor agent&#8221; will be an EDT agent who shares values with the parent agent, but who will have different actions available to them, and a strictly larger amount of information. 
Alice &#8220;avoids information&#8221; if she strictly prefers an action <em>x<sub>1</sub></em> over an action <em>x<sub>2</sub></em>, where <em>x<sub>1</sub></em> and <em>x<sub>2</sub></em> have identical consequences <em>except</em> that her successor agent has strictly less information in <em>x<sub>1</sub></em> than in <em>x<sub>2</sub></em>.</p><p>(I will neglect reasons that EDT might avoid information that would apply equally to any decision theory, e.g. wanting to avoid spoilers for a movie.)</p><p>Here&#8217;s a taxonomy of reasons why Alice might avoid information:</p><ol><li><p>The new information will make Alice&#8217;s successor (or correlated agents that share values) make decisions that benefit Alice&#8217;s values less.</p></li><li><p>If correlated agents (with different values) take the analogous decision, then they will make decisions that benefit Alice&#8217;s values less.</p></li><li><p>If Alice decides to gather information, then that provides evidence that some other agents will <em>observe</em> that people-like-Alice choose to gather evidence in situations like this, and that causes them to do something that is worse for Alice&#8217;s values.</p></li></ol><p>The only plausible story I know for (1) is similar to the story for <a href="https://lukasfinnveden.substack.com/i/136240656/son-of-edt-does-not-cooperate-based-on-its-own-correlations">why son-of-EDT cooperates using its parent's correlations</a>. Basically: Updating on new evidence might change who Alice&#8217;s successor is correlated with. If Alice&#8217;s successor becomes correlated with new agents, then her successor might be incentivized to act to benefit those other agents. (Even though initially, Alice has no incentives to benefit those agents.)</p><p>I think (2) is only plausible and compelling if Alice updating on information predictably harms agents that Alice is correlated with.</p><p>Why is this? Well, let&#8217;s say that Alice thinks that it would be better for Bob to take action <em>y&#8217;</em> that&#8217;s analogous to Alice choosing to <em>not</em> gather information (decision <em>y</em>), than to take action <em>x&#8217;</em> that&#8217;s analogous to Alice&#8217;s choosing to gather information (decision <em>x</em>). If <em>this fact</em> convinces Alice to pick <em>y</em>, then I think that Alice should only treat her decision to pick <em>y</em> as evidence that Bob will pick <em>y&#8217;</em> insofar as Bob has some analogous reason to pick <em>y&#8217;</em> &#8212;&nbsp;i.e., if Bob wants to acausally influence someone to make decisions that are better for Bob. Either this is Alice, or it&#8217;s some other agent that&#8217;s trying to acausally influence someone <em>else</em>. Ultimately, I think this loop needs to get back to Alice&#8217;s decision about gathering information, or Alice will be in a situation that&#8217;s too different from the other agents&#8217; situation, and she won&#8217;t be able to exert any intentional acausal influence on them.</p><p>I know three plausible stories for how Alice updating on information could predictably harm other agents&#8217; values:</p><ul><li><p>Firstly, the converse of the issue with (1): Updating on evidence can make Alice&#8217;s successor <em>cease</em> to be correlated with agents who Alice is correlated with.
If so, her successor might choose not to take some opportunities to benefit those distant agents that she would have taken if she had learned less and the correlations had been preserved.</p></li><li><p>Secondly, Alice could learn too much about her bargaining position.</p><ul><li><p>If Alice and Bob start out believing that they have a 50% chance of acquiring power in the future, they might be able to benefit from a deal where they try to benefit the other if only one of them is empowered.</p></li><li><p>But this is only possible up until the point where they learn who is empowered. If Alice learns too much about this before she can make a commitment, that might ruin her incentive to participate in the deal, which could harm Bob.</p></li><li><p>I think this is a fairly broad class of cases, where learning too much information removes the incentives to take some deals that would have been positive behind a veil of ignorance.</p></li><li><p>I discuss this more in <a href="https://lukasfinnveden.substack.com/i/136240656/a-market-analogy">A market analogy</a>.</p></li></ul></li><li><p>Thirdly, Alice learning new information could let her accomplish more things in the world, in cases where Alice&#8217;s values are partially opposed to some other agents&#8217; values.</p><ul><li><p>In cases where none of the <em>other</em> issues I talk about in this appendix are a problem, I wouldn&#8217;t worry about this one. If Alice&#8217;s successor is incentivized to care about the same values, and to prefer the same deals, as Alice herself, then it should be good for Alice to empower Alice&#8217;s successor. Alice&#8217;s successor will, in this case, not harm other values more than Alice herself would have approved of.</p></li><li><p>But insofar as some of the other issues I talk about in this appendix apply, giving Alice&#8217;s successor more information could hypothetically exacerbate them.</p><ul><li><p>For example, if Alice&#8217;s successor learns enough information that she wants to get out of a deal that Alice tried to commit to, it might be bad for Alice&#8217;s successor to learn information about how she can get out of the deal.</p></li></ul></li></ul></li></ul><p>All of this makes me think that, <em>given</em> the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, updating on evidence is not bad for an updateful EDT agent if:</p><ul><li><p>The agent&#8217;s correlations with other agents don&#8217;t change.</p></li><li><p>The agent doesn&#8217;t learn too much about its own bargaining position.</p></li></ul><p>What sort of agent would EDT want to hand off to, in order to avoid these problems?</p><p>To solve the first problem, I think they&#8217;d want an agent that somehow decides who to (not) benefit based on who the parent-agent was originally correlated with. Based on appendix X, and assuming the <a href="https://lukasfinnveden.substack.com/i/136240656/no-prediction-assumption">&#8220;No prediction&#8221; assumption</a>, I think they could update on any information that isn&#8217;t causally downstream of the parent-agent&#8217;s action when inferring who the parent-agent was originally correlated with. But I&#8217;m not sure what the exact algorithm looks like.</p><p>The second problem doesn&#8217;t seem like it should be very difficult to me.
But I don&#8217;t know any nice algorithm that solves it.</p><h3>When does updateful EDT seek evidence about correlations?</h3><p>How do these results apply to updateful, non-self-modifying EDT agents? I think that similar results about who you are and aren&#8217;t supposed to gather evidence about apply &#8212;&nbsp;with some extra complications.</p><p>In particular, I think updateful EDT agents are inclined to gather evidence that&#8217;s relevant to <em>both</em> their current correlations <em>and</em> their future selves&#8217; correlations with others. (But mostly uninterested in evidence that&#8217;s only relevant for one of these.)</p><p>To be precise: Let&#8217;s say that Alice-1 is an EDT agent whose next decision is made by Alice-2. Alice-1 has an opportunity to pick up a piece of evidence that will inform future Alices. Alice-1 is only interested in benefiting agents who believe they can influence Alice-1. Alice-2 is only interested in benefiting agents who believe they can influence Alice-2.</p><p>Here&#8217;s a central example of a case where Alice-1 has reason to gather evidence about correlations:</p><ul><li><p>Alice-1 is uncertain about whether she is in the world where Carol perceives herself to have influence over Alice-1, or not. Button <em>c</em> will give information about this.</p><ul><li><p>I.e., <em>d</em>(<em>A</em>-1|<em>C</em>,<em>c</em>=<em>T</em>) is high, and <em>d</em>(<em>A</em>-1|<em>C</em>,<em>c</em>=<em>F</em>) is low.</p></li></ul></li><li><p><em>In addition, that same button gives evidence about whether Carol believes she can influence Alice-2.</em></p><ul><li><p><em>I.e., d(A-2|C,c=T) is high, and d(A-2|C,c=F) is low.</em></p></li></ul></li><li><p>In this case, Alice-1 would like Alice-2 to care more about Carol if <em>c</em>=<em>T</em>. Since Alice-2 has her own reason to do so, Alice-1 thinks it&#8217;s good to press the button.</p></li><li><p>But if the button only applied to <em>one</em> of Alice-1 and Alice-2, similar reasoning would not apply:</p><ul><li><p>If the button gave evidence about <em>d</em>(<em>A</em>-1|<em>C</em>,<em>c</em>=<em>T</em>) but not <em>d</em>(<em>A</em>-2|<em>C</em>,<em>c</em>=<em>T</em>), then Alice-2 would not be motivated to act on the information &#8212; A-1 would have no reason to gather it.</p></li><li><p>If the button gave evidence about <em>d</em>(<em>A</em>-2|<em>C</em>,<em>c</em>=<em>T</em>) but not <em>d</em>(<em>A</em>-1|<em>C</em>,<em>c</em>=<em>T</em>), then Alice-1 would not be motivated to inform Alice-2, since Alice-1 doesn&#8217;t care about Alice-2&#8217;s correlations insofar as they diverge from Alice-1&#8217;s. (As argued in <a href="https://lukasfinnveden.substack.com/i/136240656/son-of-edt-does-not-cooperate-based-on-its-own-correlations">1. Son-of-EDT does not cooperate based on its own correlations</a>.)</p></li></ul></li><li><p>Alice-1 will also be interested in gathering information if the button informs Alice-1 and Alice-2 about their respective correlations with <em>different agents</em> that have <em>the same values</em>.</p><ul><li><p>E.g.
if <em>d</em>(<em>A</em>-1|<em>C</em>-1,<em>c</em>=<em>T</em>) &gt; <em>d</em>(<em>A</em>-1|<em>C</em>-1,<em>c</em>=<em>F</em>) and&#8230;</p></li><li><p>&#8230; <em>d</em>(<em>A</em>-2|<em>C</em>-2,<em>c</em>=<em>T</em>) &gt; <em>d</em>(<em>A</em>-2|<em>C</em>-2,<em>c</em>=<em>F</em>) and&#8230;</p></li><li><p>A-1 and A-2 share values, and C-1 and C-2 share values,</p></li><li><p>then I think it will often be good for Alice-1 to press button <em>c</em>.</p></li></ul></li></ul><h3>A market analogy</h3><p>Let&#8217;s say you have a large group of people in a room. They all entered the room with some goods, and now they are walking around making trades with each other. In order to allow for smooth multi-person deals, let&#8217;s say that they have a common unit of currency. They entered the room with $0, they can borrow any amount of $ at 0 percent interest, and they have to pay back all of their debts before leaving the room.</p><p>If this market is working well, Alice&#8217;s goods will be consumed by whoever has the highest willingness-to-pay for them (which could be Alice herself). Each person&#8217;s willingness-to-pay is determined by two components:</p><ul><li><p>&#8220;Relative preference&#8221;: How much they value Alice&#8217;s goods <em>relative</em> to other goods (including the ones they themselves came in with).</p></li><li><p>&#8220;Bargaining power&#8221;: The market value of the goods they initially came in with. If they have nothing to offer, then they won&#8217;t be able to buy anything.</p><ul><li><p>As a quirk on this: note that Alice will never sell something to someone unless <em>they</em> sell something to someone who sells something to someone &#8230; who sells something to Alice. Since everyone needs to pay back their debts before they leave the room, you can&#8217;t have one-way flows of money.</p></li></ul></li></ul><p>I think this story transfers decently well to cases where the only method of trade is evidential cooperation. If evidential cooperation is working well, Alice will especially work to benefit the values of people who especially care about what Alice does <em>and</em> who have something to offer to the rest of the acausal community.</p><p>There are many ways in which the acausal story differs from a simplistic version of the market story:</p><ul><li><p>Just as in the market story, there can&#8217;t be people who <em>only</em> sell or <em>only</em> buy things. (Nor can there be groups of people who only sell or buy things to some other group of people). But in the acausal case, it&#8217;s worth flagging:</p><ul><li><p>Other than having loops of people, that condition can also be fulfilled by having infinite lines of people, each of whom benefits the next person in line.</p></li><li><p>The acausal traders are dealing in subjective expected utility, not realized utility. There are cases where deals would be impossible <em>if</em> everyone knew who they were and what they wanted. But once you account for ignorance, the situation can contain loops or infinite lines, so that trade is possible. (See section 2.9 of <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">the MSR paper</a>.)</p></li></ul></li><li><p>One reason why people could &#8220;especially care about what Alice does&#8221; is if she has an uncontroversial comparative advantage at benefitting their <em>values</em>.
But it could also be about beliefs &#8212;&nbsp;including beliefs that would be strange outside of the acausal case.</p><ul><li><p>For example, if some (possible) people have priors that suggest that Alice is more likely to exist in the first place, then they will care more about what Alice does, and Alice will prioritize benefitting their values. (At least if she does exist.)</p></li></ul></li><li><p>Since the acausal case is dealing with universe-wide/impartial preferences, there will be tons of &#8220;nosy preferences&#8221;, public goods, etc. Definitely not a simple case of everyone having orthogonal preferences and just doing the trades that make themself happy.</p></li></ul><p>But I want to assume the market analogy as background, and zoom in on one particular acausal quirk: That structures in who perceives themself to be correlated with whom (and at what strength) influence how much people care about what other people do. (I.e.: How highly they value their goods, in the metaphor.)</p><p>The section <a href="https://lukasfinnveden.substack.com/i/136240656/edt-recommends-benefitting-agents-who-think-they-can-influence-you">EDT recommends benefitting agents who think they can influence you</a> basically argues that if Alice perceives herself as having acausal influence over Bob, that is very similar to Alice caring more about what Bob does. Therefore, Alice perceiving herself as having acausal influence over Bob increases <em>Bob&#8217;s</em> bargaining power (i.e. the market value of Bob&#8217;s initial goods), and means that <em>Bob</em> will prioritize benefitting <em>Alice</em>&#8217;s values relatively more. (Assuming that Alice&#8217;s metaphorical goods have any market value.)</p><h4>Seeking evidence</h4><p><strong>Market case</strong></p><p>Now, let&#8217;s introduce ignorance. Although people are buying and selling goods, they&#8217;re not quite sure how much they actually value the goods. They don&#8217;t really know that much about the goods, so they don&#8217;t know how useful they will be.</p><p>Let&#8217;s also say that people can investigate the value of their goods. Should they?</p><p>If you investigate the value of a good (either your own or someone else&#8217;s), that has two consequences:</p><ol><li><p>It changes how interested you are in buying that good relative to other goods.</p></li><li><p>It changes the distribution of bargaining power, as the market value of the good will go up (or down) if you decide that you value it more (or less).</p></li></ol><p>Effect (1) is a pure public good. With a better understanding of who values a good most, goods can be allocated more efficiently.</p><p>But effect (2) is in expectation neutral or bad:</p><ul><li><p>If people have risk-neutral preferences, it&#8217;s neutral. It just moves around value.</p></li><li><p>If people have risk-averse preferences, it&#8217;s bad. You might lose bargaining power or you might gain bargaining power, and under risk aversion the possible loss looms larger than the possible gain.</p></li><li><p>If people have risk-seeking preferences&#8230; it&#8217;s probably neutral? If people have access to randomness, they can already create risk if they want.</p></li></ul><p>So if everyone has a shared prior over each object&#8217;s market value, then people won&#8217;t be interested in gathering information that just changes people&#8217;s bargaining power. Indeed, it might be best if everyone committed to their current belief about how much bargaining power each person has. And only <em>then</em> investigated the value of all objects, and allocated them to the people with the highest willingness to pay based on the prior bargaining power. (Though I have by no means shown that this is optimal &#8212;&nbsp;nor am I particularly confident that it would be.)</p>
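<p>To make effects (1) and (2) concrete before moving on to the acausal case, here is a minimal toy sketch. The numbers are made up, and the price rule (the buyer and seller split the surplus 50/50) is just an assumption for illustration, not a general result.</p><pre><code># A toy market with one good, owned by Alice. Quality is "high" or "low"
# with probability 1/2 each. All numbers are made up for illustration.

p = 0.5  # probability of high quality

def expected(val):  # expected use-value under the shared prior
    return p * val["high"] + (1 - p) * val["low"]

# Effect (1): investigation improves allocation (a public good).
alice = {"high": 4, "low": 1}
bob   = {"high": 6, "low": 0}
value_no_info   = max(expected(alice), expected(bob))        # good goes to Bob: 3.0
value_with_info = p * max(alice["high"], bob["high"]) + \
                  (1 - p) * max(alice["low"], bob["low"])     # 0.5*6 + 0.5*1 = 3.5
print(value_no_info, value_with_info)  # 3.0 3.5: total value goes up

# Effect (2): investigation just moves bargaining power around.
# Here Bob always values the good more, so he always buys it; assume the
# price splits the surplus 50/50 between buyer and seller.
alice = {"high": 4, "low": 1}
bob   = {"high": 6, "low": 3}
price_no_info = (expected(alice) + expected(bob)) / 2         # (2.5 + 4.5)/2 = 3.5
expected_price_with_info = p * (alice["high"] + bob["high"]) / 2 \
    + (1 - p) * (alice["low"] + bob["low"]) / 2               # 0.5*5 + 0.5*2 = 3.5
print(price_no_info, expected_price_with_info)  # 3.5 3.5: neutral in expectation,
# though the realized price now varies (5 or 2), which a risk-averse Alice dislikes.
</code></pre>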
<p><strong>Acausal case</strong></p><p>Since &#8220;<em>A</em>&#8217;s perceived influence on <em>B</em>&#8221; corresponds to &#8220;<em>A</em>&#8217;s valuation of <em>B</em>&#8217;s goods&#8221;, this tells us that Alice will want to investigate who has perceived influence over Alice (so that she can prioritize their preferences), but that she won&#8217;t be interested in investigating questions that are only relevant for determining people&#8217;s bargaining power.</p><p>Also, Alice has no need for information about who she most wants to be helped by. It&#8217;s ok if only her benefactors have that information. So she has no particular reason to investigate her own acausal influence on others.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Or perhaps they only cooperate if you can predict them, and your cooperation is conditional on them cooperating against you. This has been studied in the program equilibrium literature. See <a href="https://www.andrew.cmu.edu/user/coesterh/AnnotatedProgEqBibliography.html">here</a> for some references.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I&#8217;m using &#8220;<em>A2T</em>&#8221; as opposed to &#8220;<em>A2</em>&#8221; here because &#8220;<em>A2T</em>&#8221; is a relevantly different agent from the hypothetical agent &#8220;<em>A2F</em>&#8221; that observed the evidence to be false.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Or possibly: If there is an infinite line of actors, each of whom can acausally influence the next.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Slight complication: If you do something different from the agent to your right (i.e. you pay when they keep their money, or vice versa), then you will be in a somewhat different epistemic position than the agent to your left, which could weaken your perceived correlation with them. (Since they will see that the agent on their right did something different than the agent on your right.) Nevertheless, I think you probably pay regardless of what you see the agent to your right do. A very brief argument: If we assume that &#8220;agent to the right paid&#8221; and &#8220;agent to the right didn&#8217;t pay&#8221; are similar epistemic positions, then regardless of which situation you find yourself in, you&#8217;ll reason &#8220;me paying is evidence that the agent to my left pays, so I should pay&#8221;.
Since that argument is the same either way, this lends some support to the assumption that &#8220;agent to the right paid&#8221; and &#8220;agent to the right didn&#8217;t pay&#8221; are similar epistemic positions.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The above was an illustration of possibility. Separately, there&#8217;s also a question about whether there exists any positive reason to expect asymmetric beliefs about acausal influence to be rare. If you favor a view of EDT without &#8220;objective&#8221; entanglement &#8212; where there are just agents and their credences &#8212; it&#8217;s not so clear why symmetry should be the norm. These are strange things to hold beliefs about, and I can easily imagine different agents holding different beliefs due to fairly non-reducible reasons, such as leaning on different heuristics and intuitions. But maybe the view that &#8220;it&#8217;s just your credences&#8221; is compatible with some strong constraints on those credences that often enforce symmetry.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>One type of power you can have is &#8220;You&#8217;re likely to exist in the real world.&#8221; So this suggests that instead of helping people who Alice thinks are likely to exist in the real world, Alice will help people who think that Alice is likely to exist in the real world.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Stealing phrasing from a comment by Joe Carlsmith: You want to be doing the dance that you want the people you can acausally influence to be doing. and this dance is: benefiting the people who can acausally influence them, in worlds where that acausal influence is real.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>With &#8220;on reflection&#8221; suitably specified so as to not involve Alice learning &#8220;too much&#8221; in a way that would predictably reduce acausal influence.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>For some intermediate values of <em>eps</em>, the expected value might be positive for just one of the agents. 
If so, I think the assumption that "pressing the button is an analogous action despite their difference in credences" would be wrong.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Are our choices analogous to AIs' choices?]]></title><description><![CDATA[Previously on this blog, I have:]]></description><link>https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 09:55:39 GMT</pubDate><content:encoded><![CDATA[<p>Previously on this blog, I have:</p><ul><li><p>Introduced the question of <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">whether ECL says we should cooperate with distant AIs</a>.</p></li><li><p>Suggested a general formula for <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">whether ECL recommends any mutually beneficial deal in asymmetric situations</a>.</p></li><li><p>Discussed <a href="https://lukasfinnveden.substack.com/p/possible-ecl-deals-with-distant-ais">how that formula could be applied to our situation</a>.</p></li></ul><p>Key inputs into the formula include:</p><ul><li><p>Whether us choosing to benefit AI values would provide evidence that AI will analogously benefit our values.</p></li><li><p>Whether AIs benefitting our values would provide evidence that we will analogously benefit AI values.</p></li></ul><p>This post will speculate about the values of those parameters.</p><p>(If your account of decision theory is different from evidential decision theory, you can substitute &#8220;provide evidence that&#8221; with your preferred metric of acausal influence.)</p><h2>Summary</h2><p>In general, ECL suggests that you should be more inclined to benefit the values of actors whom you correlate more with, in the sense that you perceive yourself as having more acausal influence on them, and they perceive themselves as having more acausal influence on you.</p><p>In particular, the post on asymmetric ECL suggests that your inclination to benefit distant AIs&#8217; values should be proportional to your perceived acausal influence on them and their perceived acausal influence on you; but inversely proportional to your and AIs&#8217; perceived acausal influence to actors with shared values.</p><p>From an EDT perspective, these &#8220;correlations&#8221; or this &#8220;acausal influence&#8221; might not be best viewed as objective facts about the world. Instead, they simply reflect the degree to which we consider our actions to be evidence for choices made in pre-AGI civilizations that share our values vs. evidence for choices made by AIs. (And vice versa, for the AIs.)</p><p>(I&#8217;m not yet fully sold on this perspective. I still feel quite mystified by how to determine the degree of &#8220;acausal influence&#8221; I have on others. Even if the EDT perspective turns out to be right, I at-least expect there to be a lot more to learn about what sort of reasoning does and doesn&#8217;t make sense when establishing the relevant conditionals.)</p><p>Taking the EDT perspective on-its-face, there&#8217;s an intuitively strong argument that our actions are significantly more evidence for actions taken in pre-AGI civilizations than actions taken by AIs. 
If I query my brain for intuitive predictions of distant actors, it sure seems like my own actions have more impact on my predictions of people in distant pre-AGI civilizations than on my predictions of misaligned AGI systems.</p><p>I think that intuition is worth paying attention to. But I think it&#8217;s less important than it naively seems. I&#8217;ll now go through a few different reasons why you might have it, and a few different counterarguments. I&#8217;ll mostly be talking about this from an EDT perspective.</p><p><strong>Different options.</strong> You might think that we can&#8217;t affect the AIs much because our option-space is very different. We humans get to decide how much we should invest in alignment vs. research on ECL, and AGI systems of the future get to decide what to use their light cone for (or something like that). However, this concern seems much reduced by retreating to more abstract questions, like &#8220;Should I adopt a policy of impartially optimizing for many different values?&#8221;. See <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">the post on asymmetric ECL</a> for a more detailed description of what that abstract question could look like. See footnote for a caveat.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p><strong>Abstract reasoning vs. human quirks and biases.</strong> When we do decision-theoretic reasoning, we&#8217;re doing a type of reasoning where our conclusions correlate with AGIs&#8217; conclusions. If I didn&#8217;t know anything about decision theory, and learned that humans upon extensive analysis thought that ECL &#8220;works&#8221;, then I would think it more likely that AGIs also would think ECL &#8220;works&#8221;. Analogously: If I didn&#8217;t know anything about math, and learned that humans thought that the Riemann hypothesis was true, then I would think it more likely that AGIs would think the same thing.</p><p>I think this establishes some basic plausibility that our actions might correlate with AGIs&#8217;. On the other hand, it&#8217;s less clear that this makes sense when <em>you</em> are the one studying decision theory or math, and <em>you</em> are the one making the decisions.</p><p>If you&#8217;ve discovered a lot of good arguments that the Riemann hypothesis is true, then you have <em>already</em> conditioned on the existence of those arguments, which is what influences your belief that AIs will conclude that it&#8217;s true. You get no further information based on whether you utter the words &#8220;the Riemann hypothesis is true&#8221;.</p><p>You can make a similar argument in the decision-theoretic case: If you&#8217;ve discovered a lot of good arguments that ECL &#8220;works&#8221;, then <em>that&#8217;s</em> what informs your beliefs about what the AIs will do (and that&#8217;s something you&#8217;ve already conditioned on). When you then act according to ECL, or not, you&#8217;re not getting further evidence about where the decision-theoretic arguments point. You&#8217;re just learning whether those arguments led to action within your human brain, with all its quirks and biases. And that&#8217;s much more evidence for what evolved creatures do than for what AGIs do.</p><p>I&#8217;m currently unconvinced by this argument.
For one, even before making a decision, you can <em>also</em> observe a lot of information about how the quirks and biases of your human brain are interpreting and interacting with the arguments. (Similar to the tickle defense in smoking lesion.) Which means that you <em>also</em> don&#8217;t get that much new information about your quirks and biases, in the act of making your decision. That puts the quirks and biases at a similar standing to the decision theoretic reasoning.</p><p>I also have an intuition that goes further, which I won't be able to fully describe in this paragraph. But to gesture at it: It seems to me like it&#8217;s a mistake for your decision theory to make decisions in order to provide &#8220;good news&#8221; about parts of your cognition that aren't responsive to decision-theoretic reasoning (such as your human-specific quirks and biases.)</p><p>See more <a href="https://lukasfinnveden.substack.com/i/136240460/abstract-reasoning-vs-human-quirks-and-biases">here</a>.</p><p><strong>AGIs will know about us.</strong> The AGIs will naturally have more knowledge of us than we have of them, e.g. they might know how ECL-ish the pre-AGI civilization in their part of the universe was. If you&#8217;re an &#8220;updateful&#8221; EDT agent, then evidence of your counterpart&#8217;s actions that <em>doesn&#8217;t</em> come from your own actions will typically reduce the evidential power of your own action. So, from their perspective, their actions have little acausal influence on our choices. I think it&#8217;s correct that this would reduce correlations <em>if</em> AGIs are updateful. However, if they are sufficiently updateless, they would not take this knowledge into account when assessing their acausal influence &#8212;&nbsp;so this consideration mostly shifts our concern to <em>sufficiently updateless</em> AGIs.</p><p>See more <a href="https://lukasfinnveden.substack.com/i/136240460/high-correlations-might-require-updatelessness">here</a>.</p><p><strong>AGIs will have a deeper understanding of decision theory.</strong> I think there are two different versions of this concern, each with separate responses.</p><ul><li><p><strong>AGI&#8217;s understanding of decision theory will make us predictable</strong>, in the sense that it will understand the principles that we use to make decisions so deeply that it will know what we choose before it makes its own decision. Thus, it will not see its own decisions as giving any evidence for ours.</p><ul><li><p>The response to this is similar to the one just-above: If it&#8217;s sufficiently <em>logically</em> updateless about this, then it would not take this knowledge into account when assessing its acausal influence over us.</p></li></ul></li><li><p><strong>AGI&#8217;s reasoning about decision theory will be so different that it correlates very little with us.</strong> If you think this is true, it seems like you&#8217;re committed to the idea that AGIs will reason about decision theory in a way that has barely any connection to how we reason about it. But if you expect AGI to be <em>good</em> at reasoning about decision theory, and you think that our own reasoning correlates little with their reasoning, then that would suggest that you&#8217;re pessimistic about us reaching any correct conclusions about decision theory. If this is your view, I agree that you should be pessimistic about doing ECL with the AIs. 
(If nothing else &#8212; because all of this reasoning about ECL has little chance of being correct, anyway.)</p></li></ul><p>In the rest of this post, I expand a bit on these points. (Though this topic is very confusing to me, and I can&#8217;t promise that the expanded version will be very enlightening.)</p><p><strong>There&#8217;s also an appendix on <a href="https://lukasfinnveden.substack.com/i/136240460/how-to-handle-uncertainty">How to handle uncertainty?</a> &#8212;&nbsp;when we&#8217;re uncertain about how much acausal influence we should perceive ourselves as having, on various groups.</strong></p><h2>In more depth&#8230;</h2><h3>Abstract reasoning vs. human quirks and biases</h3><p>When we&#8217;re reasoning about decision theory, it seems like we&#8217;re doing a <em>type</em> of reasoning that both pre-AGI actors and philosophically ambitious AGIs should be doing. Ultimately, we&#8217;re trying to learn what the best way of making decisions is, whatever that means. <em>If</em> we&#8217;re doing that <em>well</em>, our conclusions about those abstract questions should correlate with the AGI&#8217;s conclusions &#8212;&nbsp;by virtue of us both being correct.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Let&#8217;s call this &#8220;decision theoretic reasoning&#8221;. (Read those quotes as scare quotes &#8212;&nbsp;I don&#8217;t love the name.)</p><p>But there are also many factors that influence our judgments that wouldn&#8217;t apply to AI systems. For example, maybe we would irrationally reject ECL because our social instincts kick in to save us from acting crazy. Or maybe we&#8217;ll irrationally accept ECL because of wishful thinking and because we want an excuse to act nicely towards everyone. Insofar as factors like these determine our decisions, our decisions really only correlate with humans and human-like species. Let&#8217;s call this &#8220;human-specific factors&#8221;.</p><p>When you learn about whether a human does ECL, you get evidence about both these components. Thus, observing a human doing something for ECL reasons is evidence both that (i) good &#8220;decision theoretic reasoning&#8221; implies that ECL makes sense, and (ii) that &#8220;human-specific factors&#8221; push humans towards acting according to ECL. The first of these is evidence that <em>everyone</em> is more likely to do ECL; the second is only evidence that human-like actors are likely to do ECL.</p><p>But if <em>you are</em> a human, you are in quite a different situation from someone observing a human. You know a lot more about <em>why</em> you&#8217;re taking the actions you do.</p><p>If you&#8217;re an updateful EDT agent, then as you&#8217;re thinking about arguments for and against taking some ECL-informed action, you will condition on the existence of those arguments. So insofar as uncertainty about the existence of those arguments was your only source of correlation with certain other agents, you won&#8217;t see yourself as having any power over them. Since you continuously condition away your insights about &#8220;decision theoretic reasoning&#8221;, it&#8217;s not clear that your actions ever give significant evidence about where &#8220;decision theoretic reasoning&#8221; points.</p><p>But it seems that similar arguments apply to ~all sources of influence on your decision-making &#8212;&nbsp;including the human-specific factors.
Consider, for a moment, an analogy to smoking lesion and the tickle defense (see for example <a href="https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past#VIII__Maybe_EDT_">here</a> for an introduction to the problem).</p><ul><li><p>The tickle defense goes: Once you notice an impulse to smoke,&nbsp;you&#8217;ve already received your evidence. Deciding to smoke or not gives you no additional information about your lesion.</p></li><li><p>Analogously, here: Once you notice an impulse to (not) engage in ECL (due to wishful thinking, or social desirability bias, or anything like that), you&#8217;ve already received your evidence about the &#8220;human-specific factors&#8221;. Deciding to engage in ECL or not after that doesn&#8217;t give you any additional evidence about those.</p></li></ul><p>I have a few different reactions to this:</p><ul><li><p>These kinds of arguments are very confusing, and it seems like they can&#8217;t work arbitrarily far. Until you&#8217;ve made your decision, you should maintain some uncertainty about what your decision will be, so there must be <em>some</em> factor that determines your choice that you don&#8217;t condition on.</p><ul><li><p>Though some have argued that EDT agents sometimes can get confident about what their decisions will be before they make them, and that this indeed can have significant (and mostly bad) implications for their behavior. See footnote for examples.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li></ul></li><li><p>I&#8217;m not sure I find the tickle defense as stated fully persuasive, because it&#8217;s not clear to me that humans always do have the requisite type of self-knowledge.</p><ul><li><p>Note that it&#8217;s easy to construct hypothetical programs that would lack self-knowledge: E.g. one that first queried something-like-EDT for a recommendation of whether to output &#8220;smoke&#8221; or not, and that then had some unknown-to-the-program probability of outputting &#8220;smoke&#8221; regardless of what EDT recommended. When this program observes its own output, it will learn new facts about itself.</p></li></ul></li><li><p>Despite this, I feel quite strongly that it&#8217;s correct to smoke in smoking lesion.</p><ul><li><p>When I consider agents that lack some crucial types of self-knowledge (such as the agents in Abram&#8217;s <a href="https://www.lesswrong.com/posts/5bd75cc58225bf0670375452/smoking-lesion-steelman">Smoking Lesion Steelman</a> or in my appendix <a href="https://lukasfinnveden.substack.com/i/136240460/what-should-you-do-when-you-dont-know-your-dt">What should you do when you don&#8217;t know your DT?</a>) it feels quite clear to me that the agents who refuse to smoke are doing something wrong (and in some sense they would agree, since they&#8217;re not stable under self-modification).</p></li><li><p>In particular, it seems like they&#8217;re choosing the output of their decision theory to control something that is ultimately not under the control of their decision-theoretic reasoning.</p><ul><li><p>Reflecting on this, it seems to me that correct reasoning about decision theory should mostly care about correlations from category (i) above (i.e. 
correlations between agents&#8217; broadly-correct decision-theoretic reasoning) and mostly not care about correlations from category (ii) (i.e., correlations that stem from other sources of influences on our behavior).</p></li></ul></li><li><p>I&#8217;m not sure what the best way to <em>justify</em> this is&#8230;</p><ul><li><p>Perhaps it could be to instead use something like <a href="https://arxiv.org/abs/1710.05060">functional decision theory</a>.</p></li><li><p>Perhaps it could be to be updateless about some types of information. (Such that you treat your actions as if they can provide substantial evidence about facts that you&#8217;re in some sense already confident about.)</p></li><li><p>Perhaps EDT does get the right answer, if one does the detailed analysis exactly right, and correctly understands what types of inputs it should and should not condition on.</p></li></ul></li></ul></li></ul><p>Ultimately, the situation seems very confusing in a way that ties into deep decision-theoretic issues. It&#8217;s possible that some of it could be short-cutted, but it&#8217;s also possible that getting good answers to these questions would require substantial progress in decision theory.</p><h3>High correlations might require updatelessness</h3><p>The AGIs will naturally have more knowledge of us than we have of them. For example:</p><ul><li><p>They might know how ECL-ish the pre-AGI civilization in their part of the universe was.</p></li><li><p>Or they might understand decision theory so well that they can perfectly predict what weak, shallow reasoners such as ourselves would conclude.</p></li></ul><p>If they were to use (updateful) EDT, this would reduce their perceived influence over us, since their own decisions would be less informative for what we do than their other sources of evidence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>One possible response to this is: If the AI is <em>ever</em> in a situation where it believes that it correlates a lot with people-like-us (e.g. because it doesn&#8217;t yet know how ECL-ish pre-AGI civilizations were, and it doesn&#8217;t yet understand decision theory much better than we do), and it thought that learning more would ruin this correlation, then that would be a significant cost of learning more. Instead, it would prefer to adopt a kind of updatelessness that preserved its ability to evidentially affect our actions (while still being able to benefit from new information in other ways).</p><p>And even if the AI is never in such a situation (e.g. because it is &#8220;born&#8221; with knowledge of how ECL-ish its preceding civilization was), it might <em>still</em> adopt a kind of updatelessness that would make it act <em>as if</em> it had been in such a situation. Either because that&#8217;s the rational thing to do in some deep sense, or &#8212; if <a href="https://www.lesswrong.com/posts/9W4TQvixiQjpZmzrx/decision-theory-and-dynamic-inconsistency">updatelessness is more like &#8220;values&#8221; than something prescribed by rationality</a> &#8212; just because it is so inclined.</p><p>Phrasing this differently:</p><ul><li><p>If an agent is ever in a situation where it correlates a lot with people-like-us, and it finds ECL arguments persuasive, then it would adopt a policy of placing some weight on satisfying our preferences. 
(To gain evidence that we do the same.)</p></li><li><p>Any action that would predictably make its future self place less weight on our preferences would be bad according to its adopted policy (since that policy places some weight on satisfying our preferences). So it would place some value on avoiding such information, or even better, on self-modifying itself such as to be able to make use of such information without thereby making its future self place less weight on our preferences.</p></li><li><p>If a galaxy-brained AGI decides to follow an update-less policy akin to &#8220;what would I have committed to back when I was more ignorant about the world&#8221;, it would notice that it would have committed to placing some constant weight on our values, and act accordingly.</p></li></ul><p>This response seems fairly plausible to me. But it would mean that the group of misaligned AI systems that are ECL-ish <em>in the right way to cooperate with us</em> is much smaller than we&#8217;d have otherwise thought &#8212;&nbsp;since they&#8217;d be required to be updateless in the right way.</p><p>Also: Note that this proposal might require the agents to be some degree of <em>logically</em> updateless, if it requires them to imagine a past where they didn&#8217;t understand decision theory very well, yet.</p><h3>What if AI&#8217;s reasoning about decision theory is very different from ours</h3><p>Another possible issue is that the AGIs&#8217; reasoning about decision theory will simply be so different from ours that we wouldn&#8217;t correlate much with it. I can imagine at least two different versions of this:</p><ul><li><p>Something like: As you think about decision theory, you eventually encounter an insight (or a series of insights) <em>X</em>, that makes you correlate very little with agents who haven&#8217;t had insight <em>X</em>.</p><ul><li><p>An especially plausible version of this is: An agent that deeply understands the structure and conclusions of all decision-theoretic reasoning that we&#8217;d be capable of doing may have too much knowledge about us to see themselves as being correlated with us.</p></li></ul></li><li><p>Something like: As you think about decision theory, you eventually encounter an insight (or a series of insights) <em>X</em>, that changes the structure of your correlation in a way such that you&#8217;re no longer incentivised to benefit agents without insight <em>X</em>, even if you correlate with them in some ways.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li></ul><p>The hand-wavy counter-argument here is just that: If we can reason well-enough about decision theory to sometimes reach correct conclusions, and AGI also can do that, then that suggests some minimal correlation. But I don&#8217;t think this establishes a large correlation.</p><p>Another counter-argument is to repeat the point about updatelessness. It might be the case that a superintelligent AI wouldn&#8217;t perceive itself as having much acausal influence on us, <em>except</em> that a past self decided to become updateless at a time when it perceived itself as having acausal influence over us.</p><h2>Appendices</h2><h3>What should you do when you don&#8217;t know your DT?</h3><p>Here&#8217;s an exercise that&#8217;s relevant for a few different problems: What should an agent do if it&#8217;s uncertain about what decision theory (DT) it&#8217;s following? 
There are a few different operationalizations of this, and they all give very different results.</p><p>Imagine the setting: prisoner&#8217;s dilemma (PD) against a clone, of the particular form &#8220;Button <em>C</em> sends $3 to your clone, button <em>D</em> sends $2 to yourself.&#8221;</p><h4>50% chance of your recommendation being followed</h4><p>Here&#8217;s one operationalization:</p><ul><li><p>Inside your brain, there&#8217;s a little EDT module, which reasons about its recommendations in an EDT fashion.</p></li><li><p>Inside your brain, there&#8217;s a little CDT module, which reasons about its recommendations in a CDT fashion.</p></li><li><p>They both submit their recommendation to a randomization module that follows each recommendation with 50% probability. (This randomization is independent of your clone, so you might end up taking different actions.)</p></li></ul><p>When you play PD against a clone, the CDT module will defect, because that&#8217;s what CDT does.</p><p>What will the EDT module do? It will reason:</p><ul><li><p>Regardless of what I recommend, my opponent&#8217;s EDT module will recommend the same thing.</p></li><li><p>Both my recommendation and my opponent&#8217;s EDT module&#8217;s recommendation have a 50% chance of being followed.</p></li><li><p>So recommending &#8220;<em>C</em>&#8221; nets me a 50% chance of $3 (if my opponent follows EDT) and recommending &#8220;<em>D</em>&#8221; nets me a 50% chance of $2 (if I follow EDT). The former is better, so I recommend &#8220;<em>C</em>&#8221;.</p></li></ul><p>So the EDT module&#8217;s recommendation is the same as if it controlled the whole agent!</p><h4>50% chance of your recommendation being <em>asked</em></h4><p>Consider the same situation as above, except the randomization happens <em>first</em>, and only then does your brain ask either the EDT or the CDT module.</p><p>In this case, when your EDT module is consulted, it can deduce that its decision is going to matter! But it still doesn&#8217;t know its opponent&#8217;s randomization. So if it&#8217;s purely selfish and not updateless, it will reason:</p><ul><li><p>Regardless of what I recommend, my opponent&#8217;s EDT module will recommend the same thing.</p></li><li><p>So if I choose &#8220;<em>C</em>&#8221;, there&#8217;s a 50% chance that my opponent&#8217;s EDT module is in charge and gives me $3.</p></li><li><p>But if I choose &#8220;<em>D</em>&#8221;, I&#8217;m guaranteed to get $2.</p></li><li><p>That&#8217;s better! So I&#8217;ll pick &#8220;<em>D</em>&#8221;.</p></li></ul>
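<p>To make the comparison in these two operationalizations explicit, here is a minimal sketch. It only uses the $3/$2 payoffs from the setup and follows the same accounting as the bullets above; it is an illustration, not a full model of the game.</p><pre><code># Reproducing the comparisons in the two operationalizations above.
# (Toy sketch; the $3 / $2 payoffs come from the setup.)

# Operationalization 1: randomization happens *after* both modules recommend,
# so my recommendation and my clone's EDT recommendation each have a 50%
# chance of being followed.
nets_C = 0.5 * 3  # clone's EDT module also recommends C; it is followed with
                  # probability 50%, in which case the clone sends me $3
nets_D = 0.5 * 2  # my own recommendation is followed with probability 50%,
                  # in which case I send $2 to myself
print(nets_C, nets_D)  # 1.5 1.0: recommend C

# Operationalization 2: randomization happens *first*, so the consulted EDT
# module knows its recommendation will be followed, but the clone's EDT module
# is only in charge with probability 50%.
nets_C = 0.5 * 3  # 50% chance the clone's EDT module is in charge and sends me $3
nets_D = 1.0 * 2  # the $2 I send myself is guaranteed
print(nets_C, nets_D)  # 1.5 2.0: pick D
</code></pre>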
<h4>Ignorance of what your decision-module does</h4><p>Now let&#8217;s consider the most pathological case: The case where your own decision theory doesn&#8217;t know what it&#8217;s doing, as it&#8217;s doing it.</p><p>Concretely:</p><ul><li><p>Your brain contains both a world-model-module and a decision-algorithm-module.</p></li><li><p>Your decision algorithm uses your world-model to make decisions, but your world-model is 50/50 on whether the decision-algorithm uses CDT or EDT.</p></li><li><p>This means that your world-model will be deducing facts about your decision-algorithm from observing your own actions.</p><ul><li><p>(Though note that if it&#8217;s been alive for a while, it should already have been able to deduce this. I ignore that here.)</p></li></ul></li></ul><p><strong>Prisoner&#8217;s dilemma</strong></p><p>In the PD-case:</p><ul><li><p>CDT will defect, as always. More money is more money!</p></li><li><p>If you implement EDT, then:</p><ul><li><p>EDT will query the world-model for what the world looks like if the decision-module outputs &#8220;<em>C</em>&#8221;.</p><ul><li><p>The world-model might reason &#8220;Well, CDT would defect. So if you output &#8220;<em>C</em>&#8221;, then that must mean that you&#8217;re EDT. Then I can conclude that EDT outputs &#8220;<em>C</em>&#8221; in cases like this. The expected utility in this world is 0.5*$0 [for the world where the opponent is CDT] + 0.5*$3 [for the world where the opponent is EDT] = $1.5&#8221;.</p></li></ul></li><li><p>EDT will query the world-model for what the world looks like if the decision-module outputs &#8220;<em>D</em>&#8221;.</p><ul><li><p>There will be some complicated arguments for how this informs you about whether your opponent&#8217;s EDT module is likely to output &#8220;<em>C</em>&#8221; or &#8220;<em>D</em>&#8221;, but ultimately, the payoff will be at least $2.</p></li></ul></li><li><p>So EDT will also defect.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li></ul></li></ul><p><strong>Smoker&#8217;s lesion</strong></p><p>Consider a smoker&#8217;s-lesion-style situation where:</p><ul><li><p>Everyone enjoys smoking, but has a much stronger desire to avoid cancer.</p></li><li><p>Smoking has no causal impact on cancer risk.</p></li><li><p>There are genes that will dispose you towards both CDT and cancer.</p></li><li><p>Everyone starts out 50/50 on whether they are EDT or CDT agents.</p></li></ul><p>In this case, CDT agents (uncertain about whether they are EDT or CDT agents) will smoke, because they enjoy it and see no reason not to.</p><p>Because of that, <em>not</em> smoking is strong evidence that you&#8217;re an EDT agent. So EDT agents (uncertain about whether they are EDT or CDT) will avoid smoking.</p><p>Here&#8217;s an amusing situation:</p><p>Let&#8217;s give this agent an opportunity to pay 1 penny to find out whether they are an EDT or a CDT agent <em>before</em> they decide about smoking.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> I think that:</p><ul><li><p>The CDT agent will pay for this information, because they will reason:</p><ul><li><p>There&#8217;s a 50% chance that I&#8217;m a CDT agent. In that case, this information won&#8217;t change my behavior.</p></li><li><p>There&#8217;s a 50% chance that I&#8217;m an EDT agent. In that case, this information will cause me to smoke (because I no longer have to signal to myself that I&#8217;m an EDT agent). I enjoy smoking far more than a penny. So I should pay for the information.</p></li></ul></li><li><p>The EDT agent will, similarly to the situation above, reason that they should refuse to pay, because that&#8217;s evidence that they&#8217;re not a CDT agent, which they really don&#8217;t want to be.</p></li></ul><p>Now let&#8217;s change the offer to instead give this agent an opportunity to either decline, pay 1 penny for the information, or pay 2 pennies for the same information.</p><ul><li><p>The CDT agent will still pay 1 penny for the information, since the option to pay 2 pennies for the information is strictly worse.</p></li><li><p>The EDT agent will want to avoid paying 1 penny for the information, since that will tell them that they&#8217;re a CDT agent.
But they also prefer paying <em>2</em> pennies for the information over declining the offer. After all, paying 2 pennies is just as good for signaling that they&#8217;re not CDT agents &#8212;&nbsp;but they&#8217;ll still get to buy the information of what type of agent they are. Which they value &#8212;&nbsp;since they think that <em>if</em> they are an EDT agent, this information will allow them to smoke, which they value far more than 2 pennies.</p></li></ul><h4>Conclusion</h4><ul><li><p>The agent in the first example seems pretty sensible.</p></li><li><p>The agent in the second example seems slightly less sensible. It would self-modify to become updateless before the dilemma starts, if it could. Also, it&#8217;s not <em>really</em> an example of someone who doesn't know what their DT is. They learn what their DT is before they make the decision &#8212; they&#8217;re just uncertain about their opponent&#8217;s DT.</p></li><li><p>The agent in the third example seems absolutely pathological to me. I don&#8217;t want to be an agent like that &#8212;&nbsp;and indeed, I think that agent would self-modify to something more sensible given the opportunity. (As long as that self-modification didn&#8217;t accidentally signal that they were a CDT agent.)</p></li></ul><h3>How to handle uncertainty?</h3><p>For many of the above considerations, I&#8217;m very uncertain about how convinced I should be by them. Do I have almost as much acausal influence on AIs as I have on humans, or do I have near-0 acausal influence on AIs? I don&#8217;t know what I would decide on after thinking about this for longer.</p><p>What&#8217;s the right way to handle this uncertainty? I think there are two plausible perspectives: bargaining-based approaches and expected value.</p><p>On bargaining-based approaches, you can imagine all the perspectives starting out with &#8220;budgets&#8221; proportional to your credence. (E.g. a credence of <em>X</em>% could correspond to that view getting <em>X</em>% of your money+time, or maybe an <em>X</em>% probability of deciding all your decisions.) Then the different views can bargain between themselves to locate some place on the Pareto frontier that distributes gains-from-trade in a fair way. So if we assign 20% credence to the proposition that we correlate about as much with AGIs as with pre-AGI civilizations (who share our values), maybe making AI ECL-ish should be the top priority for 20% of our resources.</p><p>On expected-value based approaches, I think the most natural proposal is to:</p><ul><li><p>Have each view assign numbers to how much we correlate with various different groups.</p></li><li><p>Then compute the expected correlation with each group as the mean of all those correlations, weighted by the credence assigned to each view.</p></li></ul><p>This proposal might differ significantly from the above, because views which assign higher correlations will generally dominate. Most notably: CDT:ish views (which don&#8217;t think we have any ability to correlate with other agents) will be totally ignored in sufficiently large worlds, as explained in <a href="https://globalprioritiesinstitute.org/the-evidentialists-wager/">the evidentialist&#8217;s wager</a>.</p><p>But the same phenomenon also plays out on a smaller scale. Views that assert higher correlations than others, or correlations with a significantly wider range of people than others, will often dominate views that don&#8217;t. (C.f.
section &#8220;<em>A wager in favor of higher correlation</em>&#8221; <a href="https://casparoesterheld.com/2018/03/31/three-wagers-for-multiverse-wide-superrationality/">here</a>.) Consider the following example:</p><ul><li><p>One view is that our correlation with <em>any</em> EDT actor is quite large (maybe because it accepts some views about updatelessness and how to handle self-knowledge that I gesture at above). Let&#8217;s say that this view maintains that</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!b4Nc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!b4Nc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 424w, https://substackcdn.com/image/fetch/$s_!b4Nc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 848w, https://substackcdn.com/image/fetch/$s_!b4Nc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 1272w, https://substackcdn.com/image/fetch/$s_!b4Nc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!b4Nc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png" width="618" height="27" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:27,&quot;width&quot;:618,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8486,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!b4Nc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 424w, https://substackcdn.com/image/fetch/$s_!b4Nc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 848w, https://substackcdn.com/image/fetch/$s_!b4Nc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 1272w, 
https://substackcdn.com/image/fetch/$s_!b4Nc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc39e77ce-050f-4f75-a23d-b02a79ed34c7_618x27.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div></li></ul><ul><li><p>A different view says that you must be more similar to an agent before you correlate with them. Maybe:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IwdJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IwdJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 424w, https://substackcdn.com/image/fetch/$s_!IwdJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 848w, https://substackcdn.com/image/fetch/$s_!IwdJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 1272w, https://substackcdn.com/image/fetch/$s_!IwdJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IwdJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png" width="792" height="24" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:24,&quot;width&quot;:792,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10849,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IwdJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 424w, https://substackcdn.com/image/fetch/$s_!IwdJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 848w, https://substackcdn.com/image/fetch/$s_!IwdJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 1272w, https://substackcdn.com/image/fetch/$s_!IwdJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d52f91f-896a-427b-93c4-1ff4684d6d00_792x24.png 
1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>and</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c2HX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c2HX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 424w, https://substackcdn.com/image/fetch/$s_!c2HX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 848w, https://substackcdn.com/image/fetch/$s_!c2HX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 1272w, https://substackcdn.com/image/fetch/$s_!c2HX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c2HX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png" width="757" height="28" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:28,&quot;width&quot;:757,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:10686,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!c2HX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 424w, https://substackcdn.com/image/fetch/$s_!c2HX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 848w, https://substackcdn.com/image/fetch/$s_!c2HX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 1272w, https://substackcdn.com/image/fetch/$s_!c2HX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd05c0f50-8b20-4879-8af1-483b33b889d4_757x28.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div></li><li><p>If we assign equal credence to both views, that would suggest that the all-things-considered numbers for pre-AGI civs is (50%+10%)/2=30% and for misaligned AGI is&nbsp; (50%+1%)/2=25.5%.</p></li><li><p>Thus, even though we started out with equal 
credence on a 1:1 ratio and a 10:1 ratio, the expected correlation is much closer to the former (in particular: a 1.18:1 ratio), because the view asserting a 1:1 ratio also thinks that the correlations are larger.</p></li></ul>
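<p>As a quick check on that arithmetic, here is a minimal sketch of the expected-value aggregation. It is only an illustration; the per-view numbers are the ones implied by the calculation in the example above.</p><pre><code># Expected-value aggregation of correlation estimates across views.
# (Toy sketch; numbers are the ones used in the example above.)

views = [
    # (credence, correlation with pre-AGI civs, correlation with misaligned AGIs)
    (0.5, 0.50, 0.50),  # view 1: a 1:1 ratio
    (0.5, 0.10, 0.01),  # view 2: a 10:1 ratio
]

pre_agi = sum(credence * corr for credence, corr, _ in views)  # 0.5*0.50 + 0.5*0.10 = 0.30
agi     = sum(credence * corr for credence, _, corr in views)  # 0.5*0.50 + 0.5*0.01 = 0.255

print(pre_agi, agi, pre_agi / agi)  # ~0.30, 0.255, ~1.18: much closer to 1:1 than to 10:1
</code></pre>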
<p>Just like in the example, I think this pattern will tend to favor views that say that our correlations with AGIs and with pre-AGI EDT:ers aren&#8217;t too different from each other. If a view says that our cognition must be close to others&#8217; along many dimensions in order to have a strong correlation, then that would coincide with having a small average correlation with large groups like &#8220;pre-AGI EDT:ers&#8221;. (Because <em>most</em> people in those groups wouldn&#8217;t be relevantly similar.) Whereas views that think that details of the cognition don&#8217;t matter much will tend to <em>both</em> assign higher correlations to groups like &#8220;pre-AGI EDT:ers&#8221; <em>and</em> assign higher correlations with misaligned AIs.</p><p>I&#8217;m not sure whether the bargaining-based approach or the expected value approach is better here. My meta-solution is to assign some credence to the bargaining-based approach and some to the EV-based approach, and then use the bargaining-based approach to aggregate <em>those</em>.</p><p>As for what credences to use there&#8230; I&#8217;m generally skeptical of maximizing EV across very different ontologies, which e.g. means that I want to put some credence on &#8220;ECL just doesn&#8217;t work&#8221; that <em>doesn&#8217;t</em> get swamped by the evidentialist&#8217;s wager. But it seems less objectionable to maximize EV <em>within</em> an ontology &#8212; like for different EDT views that propose differently large correlations.</p><p>(Another issue, that I don&#8217;t talk about here, is how <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">asymmetric deals</a> work with uncertainty over your cooperation-partners&#8217; beliefs. Seems tricky!)</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Note that <em>deciding to retreat to that abstract question</em> is also a decision, which could be motivated by either abstract reasons or reasons that are more unique to our situation. The main reason for why &#8220;retreat to the abstract question&#8221; would be a good decision is that it would be evidence that other agents do the same &#8212;&nbsp;which mainly applies if we have abstract reasons to retreat to the abstract question. In practice, this feels plausible. For example, both this footnote and the paragraph it features in argue that you should retreat to an abstract question without making any reference to the specific options we face.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>The degree to which this is compelling depends on the degree to which you endorse some sort of &#8220;realism&#8221; about the true instrumental rationality. For example, do you believe that intelligent beings are about as likely to converge to a particular decision theory as they are to converge on their description of mathematics or physics? I personally think that decision-theory seems <em>less</em> convergent than these other topics, in the sense that I&#8217;d be less surprised if different agents ended up disagreeing quite a bit on reflection. But it still seems like reasoning about decision theory has a lot of structure to it &#8212;&nbsp;I suspect there aren&#8217;t <em>that</em> many wildly different end-points, and if some intelligent actor reaches a particular conclusion, I think that&#8217;s non-negligible evidence that others will reach a similar conclusion.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Paul&#8217;s point number 5 <a href="https://sideways-view.com/2018/09/30/edt-vs-cdt-2-conditioning-on-the-impossible/">here</a>. (&#8220;If an EDT agent becomes almost sure about its behavior then it can become very similar to CDT and for example can two-box on some Newcomb-like problems.&#8221;)</p><p>Caspar Oesterheld&#8217;s discussion in section 6.4 <a href="https://www.andrew.cmu.edu/user/coesterh/TickleDefenseIntro.pdf">here</a> of <a href="https://link.springer.com/article/10.1007/BF00140057">Eells&#8217;s argument</a> that EDT two-boxes in Newcomb&#8217;s problem.</p><p>Section 5.5 (&#8220;Street-Crossing Scenario: Avoiding Evidentialist Excess&#8221;) in <a href="https://gwern.net/doc/statistics/decision/2006-drescher-goodandreal.pdf">Good and Real</a> discusses the reverse problem: that becoming confident that you&#8217;ll choose good options will (incorrectly) make you believe that any option you pick will be good.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Based on the reasoning in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a> (summarized <a href="https://lukasfinnveden.substack.com/i/136239064/how-does-ecl-work-in-asymmetric-situations">here</a>), there would not be any mutually beneficial deal that both parties are incentivized to take if their perceived acausal influence over us was too low.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>For example: Perhaps an agent <em>with</em> insight <em>X</em> can exclude agents <em>without</em> insight <em>X</em> from their ECL cooperation without providing any evidence that agents <em>without</em> insight <em>X</em> will exclude those <em>with</em> insight <em>X</em>. In other words: If an agent with insight <em>X</em> does a form of ECL that only benefits those with insight <em>X</em>, perhaps that&#8217;s evidence that agents without insight <em>X</em> will do a form of ECL that benefits everyone, including those with insight <em>X</em>.
And not significantly less evidence for that than if the agents with insight <em>X</em> had done &#8220;normal&#8221; ECL.</p><p>Then, conversely, if we (without insight <em>X</em>) decide to do a form of ECL that benefits all agents that do ECL, perhaps that doesn&#8217;t provide evidence that agents with insight <em>X</em> will benefit us.</p><p>I don&#8217;t know what insight <em>X</em> could be and I don&#8217;t want to claim that this particular structure of correlation is likely. But I notice that there&#8217;s nothing about my understanding of decision theory that implies that an insight like this couldn&#8217;t exist.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Unless the model expects to encounter a high-stakes ~Newcomb&#8217;s problem later on. If so, it will think it&#8217;s really good news that it&#8217;s EDT! So it might select &#8220;<em>C</em>&#8221; to get the good news that it&#8217;s definitely not a CDT agent.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Let&#8217;s assume that they will forget about the decision they make before they choose whether to smoke or not, to ensure that they don&#8217;t learn anything about their DT even when not paying. They will only remember something if they pay to learn what they are.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Can we benefit the values of distant AIs?]]></title><description><![CDATA[Previously on this blog, I have:]]></description><link>https://lukasfinnveden.substack.com/p/possible-ecl-deals-with-distant-ais</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/possible-ecl-deals-with-distant-ais</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 09:42:05 GMT</pubDate><content:encoded><![CDATA[<p>Previously on this blog, I have:</p><ul><li><p>Introduced the question of <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">whether ECL says we should care about the values of distant AIs</a>.</p></li><li><p>Suggested a general formula for <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">whether ECL recommends any mutually beneficial deal in asymmetric situations</a>.</p></li></ul><p>This post will:</p><ul><li><p>Suggest a way in which the formula is applicable to our current situation. (<a href="https://lukasfinnveden.substack.com/i/136240168/applying-the-formula-to-our-situation">Link</a>.)</p></li><li><p>Go into somewhat more depth on whether we might have leveraged opportunities to benefit the values of distant AIs.
(<a href="https://lukasfinnveden.substack.com/i/136240168/how-could-we-benefit-distant-ais">Link</a>.)</p></li></ul><h2>Applying the formula to our situation</h2><p>As a reminder, in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">this post</a> (in particular, in section <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">large-worlds asymmetric dilemma</a>) I argued that two sufficiently large groups (A and B) can make a mutually beneficial deal (that&#8217;s compatible with individual incentives) if:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C_db!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C_db!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png 424w, https://substackcdn.com/image/fetch/$s_!C_db!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png 848w, https://substackcdn.com/image/fetch/$s_!C_db!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png 1272w, https://substackcdn.com/image/fetch/$s_!C_db!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C_db!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png" width="167" height="53" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:53,&quot;width&quot;:167,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3199,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C_db!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png 424w, https://substackcdn.com/image/fetch/$s_!C_db!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png 848w, https://substackcdn.com/image/fetch/$s_!C_db!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779a7d1e-45ba-4b35-86cd-2f5fe99aaaf5_167x53.png 1272w, 
<p>Where:</p><ul><li><p><em>c<sub>AB</sub></em> is the acausal influence (measured in percentage points) that members of group <em>A</em> perceive themselves to have over members of group <em>B</em>. (And vice versa for <em>c<sub>BA</sub></em>.)</p></li><li><p><em>c<sub>AA</sub></em> is the acausal influence that members of group <em>A</em> perceive themselves to have over other members of group <em>A</em>. (And similarly for <em>c<sub>BB</sub></em>.)</p></li><li><p>Members of group <em>A</em> have an opportunity to benefit the values of group <em>B</em> by <em>g<sub>B</sub></em> at a cost of <em>l<sub>A</sub></em> to their own values.</p></li><li><p>Members of group <em>B</em> have an opportunity to benefit the values of group <em>A</em> by <em>g<sub>A</sub></em> at a cost of <em>l<sub>B</sub></em> to their own values. Furthermore, they can adjust the size of this benefit and cost linearly, such that for any <em>k</em>, they can choose to benefit the values of group <em>A</em> by <em>kg<sub>A</sub></em> at a cost of <em>kl<sub>B</sub></em>.</p></li></ul><p>Let&#8217;s now apply the above formula to the following situation:</p><ul><li><p>Group <em>A</em>: Humans and other actors in pre-AGI civilizations.</p></li><li><p>Group <em>B</em>: AI systems with values that are similar to the values of AI systems that humans could plausibly create.</p></li><li><p>For simplicity, I will assume that both of these groups care about marginal influence over the universe ~linearly. This is largely because it seems significantly more complicated to assess the situation for different value-systems, and it seems good to start with the relatively more simple case.</p></li><li><p>Proposed deal:</p><ul><li><p>Group <em>A</em> (pre-AGI actors) take any good opportunities they have to help the values of distant, misaligned AIs.</p><ul><li><p>For discussion of how they could do this, see <a href="https://lukasfinnveden.substack.com/i/136240168/how-could-we-benefit-distant-ais">below</a>.</p></li></ul></li><li><p>Group <em>B</em> (misaligned AI systems) use some fraction of their influence to optimize for the preferences of pre-AGI actors.</p></li></ul></li></ul><p>How good is this deal?
Using notation similar to that in the <a href="https://lukasfinnveden.substack.com/i/136239309/generalizing">previous post</a>:</p><ul><li><p>Let&#8217;s say that us pursuing our end of a deal increases the probability that&#8230;</p><ul><li><p>other pre-AGI civilizations, who share our values, pursue their end of the deal by <em>c<sub>preAGI</sub></em><sub>&#8594;</sub><em><sub>preAGI</sub></em> percentage points.</p><ul><li><p>(You can read this as an abbreviation for &#8220;<strong>c</strong>orrelational influence that we (a <strong>pre-AGI</strong> civilization) think we have over other <strong>pre-AGI</strong> civilizations with our values&#8221;.)</p></li></ul></li><li><p>AIs pursue their end of the deal by <em>c<sub>preAGI</sub></em><sub>&#8594;</sub><em><sub>AI</sub></em> percentage points.</p><ul><li><p>(You can read this as an abbreviation for &#8220;<strong>c</strong>orrelational influence that we (a <strong>pre-AGI</strong> civilization) think we have over distant <strong>AI</strong>s&#8221;.)</p></li></ul></li></ul></li><li><p>Conversely, let&#8217;s say that the AIs in question believe that pursuing their end of the deal increases the probability that&#8230;</p><ul><li><p>other misaligned AIs pursue their end of the deal by <em>c<sub>AI</sub></em><sub>&#8594;</sub><em><sub>AI</sub></em> percentage points.</p></li><li><p>pre-AGI civilizations (with our values) pursue their end of the deal by <em>c<sub>AI</sub></em><sub>&#8594;</sub><em><sub>preAGI</sub></em> percentage points.</p></li></ul></li></ul><p>By the formula above, if we can make the AIs of the future gain <em>g<sub>AI</sub></em> at a cost of us losing <em>l<sub>preAGI</sub></em>, and the AIs of the future can make us gain <em>g<sub>preAGI</sub></em> at a cost of <em>l<sub>AI</sub></em>, there is a mutually beneficial deal iff:</p><p>(<em>c<sub>preAGI</sub></em><sub>&#8594;</sub><em><sub>AI</sub></em> <em>c<sub>AI</sub></em><sub>&#8594;</sub><em><sub>preAGI</sub></em> <em>g<sub>AI</sub></em> <em>g<sub>preAGI</sub></em>) / (<em>c<sub>preAGI</sub></em><sub>&#8594;</sub><em><sub>preAGI</sub></em> <em>c<sub>AI</sub></em><sub>&#8594;</sub><em><sub>AI</sub></em> <em>l<sub>preAGI</sub></em> <em>l<sub>AI</sub></em>) &gt; 1</p>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c50ae319-e64b-4e58-919d-7862d1da32a8_314x57.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:57,&quot;width&quot;:314,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5723,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!agQD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc50ae319-e64b-4e58-919d-7862d1da32a8_314x57.png 424w, https://substackcdn.com/image/fetch/$s_!agQD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc50ae319-e64b-4e58-919d-7862d1da32a8_314x57.png 848w, https://substackcdn.com/image/fetch/$s_!agQD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc50ae319-e64b-4e58-919d-7862d1da32a8_314x57.png 1272w, https://substackcdn.com/image/fetch/$s_!agQD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc50ae319-e64b-4e58-919d-7862d1da32a8_314x57.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Let&#8217;s abbreviate </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!g-vI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!g-vI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 424w, https://substackcdn.com/image/fetch/$s_!g-vI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 848w, https://substackcdn.com/image/fetch/$s_!g-vI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 1272w, https://substackcdn.com/image/fetch/$s_!g-vI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!g-vI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png" width="226" height="59" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:59,&quot;width&quot;:226,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3439,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!g-vI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 424w, https://substackcdn.com/image/fetch/$s_!g-vI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 848w, https://substackcdn.com/image/fetch/$s_!g-vI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 1272w, https://substackcdn.com/image/fetch/$s_!g-vI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5ac4eb4-8c49-418d-9e4a-8d9d12442ac7_226x59.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p> To repeat some facts about <em>c</em>:</p><ul><li><p>The value of c <a href="https://lukasfinnveden.substack.com/i/136239309/some-remarks-on-the-formula">will be between 0 and 1</a>.</p></li><li><p><em>c</em> is proportional to how much influence pre-AGI civilizations perceive themselves as having over misaligned AI, and vice versa.</p><ul><li><p>This is because, if one of the parties perceive their influence as being higher, then they will perceive themselves as gaining proportionally more from taking cooperative actions. (Since they will get proportionally more evidence that the other side of the deal is taking cooperative actions.)</p></li></ul></li><li><p><em>c</em> is inversely proportional to how much influence pre-AGI civilizations perceive themselves as having over pre-AGI civilizations with their own values, and the corresponding number for AIs.</p><ul><li><p>This is because, if one of the parties perceive their influence on actors with shared values as being higher, then they will perceive themselves as losing proportionately more from taking cooperative actions. (Since they will get proportionally more evidence that actors with their own values are choosing to not just optimize for those values.)</p></li></ul></li><li><p><em>c</em> will be equal to 0 if pre-AGI civilizations have no acausal impact on misaligned AI, or vice versa.</p></li><li><p><em>c</em> will be close to 1 if the acausal influence across groups is similarly strong to the acausal influence within groups.</p></li></ul><p>Now, let&#8217;s standardize the units of gain and loss for both humans and AI as &#8220;fraction of influence over the universe&#8221;. Also, let&#8217;s assume that the AIs&#8217; best opportunity to help us is to straightforwardly transfer influence (e.g. 
by building stuff we would value in their own universe, or by negotiating with other civilizations, on our behalf, for things we want), such that <em>g<sub>preAGI</sub></em>=<em>l<sub>AI</sub></em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>(Is it a mistake to assume that <em>g<sub>preAGI</sub></em>=<em>l<sub>AI</sub></em>? I.e., that distant AI won&#8217;t be able to help us in better ways than to straightforwardly transfer influence to us? Here&#8217;s a positive argument for that assumption: Maybe acausal trades will be quite low-friction in the future, and maybe people with our values will be sufficiently resourceful that they will take all the trade-opportunities where our values benefit more than the AIs&#8217; values lose. In which case <em>marginal</em> trade opportunities will just be about transferring influence. That said, I haven&#8217;t thought about this much at all, and it seems plausible that <em>g<sub>preAGI</sub></em> &gt; <em>l<sub>AI</sub></em>. If so, the ECL case for benefiting distant AIs would be stronger. But I will ignore this for the rest of this post.)</p><p>If we assume <em>g<sub>preAGI</sub></em>=<em>l<sub>AI</sub></em>, we can cancel <em>g<sub>preAGI</sub></em> in the numerator and <em>l<sub>AI</sub></em> in the denominator, and conclude that there is a mutually beneficial deal whenever:</p><p><em>c</em> <em>g<sub>AI</sub></em> / <em>l<sub>preAGI</sub></em> &gt; 1</p>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/49632b89-40cb-4e76-96cc-233c325d9c75_263x55.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:55,&quot;width&quot;:263,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3958,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_e3R!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49632b89-40cb-4e76-96cc-233c325d9c75_263x55.png 424w, https://substackcdn.com/image/fetch/$s_!_e3R!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49632b89-40cb-4e76-96cc-233c325d9c75_263x55.png 848w, https://substackcdn.com/image/fetch/$s_!_e3R!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49632b89-40cb-4e76-96cc-233c325d9c75_263x55.png 1272w, https://substackcdn.com/image/fetch/$s_!_e3R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49632b89-40cb-4e76-96cc-233c325d9c75_263x55.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>(As a reminder, <em>l<sub>preAGI</sub></em> is the immediate causal cost to us of some action, and <em>g<sub>AI</sub></em> is the gain to the AI.)</p><p>In other words, when we&#8217;re considering whether to take some action that benefits the values of distant AIs but imposes an (opportunity) cost on us, we should benefit the AIs just when our losses are less than <em>c</em> times as large as the gains to the AI. This corresponds to using a utility function where we value certain AIs&#8217; influence <em>c</em> times as much as our own influence.</p><p>(To reiterate the benefits of that: If <em>c</em> is high, then us benefitting the AIs is about as much evidence for them benefitting us (upside) as for people with our values taking on opportunity costs to help them (downside).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Or at least, this is the case if there is indeed a good philosophical case to follow something like the algorithm outlined in the beginning of <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>.)</p><p>As previously mentioned, <em>c</em> is less than 1. So this is only decision-relevant if there are actions we can take that give influence to the values of distant AIs more efficiently than we can give influence to our own values. Are there any such actions?</p><h2>How could we benefit distant AIs?</h2><p>In particular, how could we benefit distant AIs that we correlate with? Centrally, these are the AIs that are sympathetic to ECL, and to doing ECL with us. I&#8217;ll use &#8220;ECL-ish AI&#8221; to refer to this group. 
<h2>How could we benefit distant AIs?</h2><p>In particular, how could we benefit distant AIs that we correlate with? Centrally, these are the AIs that are sympathetic to ECL, and to doing ECL with us. I&#8217;ll use &#8220;ECL-ish AI&#8221; to refer to this group. In particular:</p><ul><li><p>They act according to some non-causal decision theory that recommends some good version of ECL.</p></li><li><p>They are sufficiently philosophically ambitious and competent to realize the implications of this.</p></li><li><p>They have some universe-wide values.</p></li><li><p>Their decision-theory is &#8220;updateless&#8221; to the degree they need to be to cooperate with us. (This is discussed more in <a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">Are our choices analogous to AI choices?</a>)</p></li></ul><p>So how could we benefit these ECL-ish AIs? Here are two candidates:</p><ul><li><p>We could increase the likelihood that &#8212; <em>if</em> AI ends up misaligned and in power &#8212; it shares values with distant, ECL-ish AIs.</p></li><li><p>We could increase the likelihood that &#8212; <em>if</em> AI ends up misaligned with our values <em>and</em> it shares values with distant, ECL-ish AIs &#8212; it ends up <em>competently</em> optimizing for those values, by the lights of distant, ECL-ish AIs.</p><ul><li><p>By default, I assume that any AI which acquires power will be highly capable in most domains. The main highly path-dependent domain that I can think of is competent reasoning about decision-theory (by the lights of distant, ECL-ish AIs).</p></li></ul></li></ul><h3>Influencing values</h3><p>How much would distant, ECL-ish AIs&#8217; values be benefitted by an intervention that ultimately led to the empowerment of an AI with their values? (Rather than some values that no ECL-ish AI cares about.)</p><p>Above, I wrote &#8220;let&#8217;s standardize the units of gain and loss for both humans and AI as &#8216;fraction of influence over the universe&#8217;.&#8221; So to a first approximation, my answer is: If an AI with some particular values is empowered, then those values are benefitted just as much as we would be benefitting our own (universe-wide) values by empowering an aligned AI. So the formula suggests that we should value the empowerment of AI that shares values with distant, ECL-ish AIs <em>c</em> times as much as we should value the empowerment of aligned AI. (At least that is what our universe-wide values would recommend &#8212; more local values might vote differently.)</p><p>I have two clarifications to add to this:</p><ul><li><p>Values aren&#8217;t everything. If we empower AI with particular values, those values may nevertheless fail to be benefitted if the AI has poor execution.</p></li><li><p>If we change an AI&#8217;s values from values <em>V</em>1 to values <em>V</em>2, we probably benefit values <em>V</em>2 but harm values <em>V</em>1. I want to clarify when this looks good vs. neutral (or bad) from an ECL perspective.</p></li></ul><p>(There are also some caveats in an <a href="https://lukasfinnveden.substack.com/i/136240168/appendix-whose-values-get-benefitted-by-ecl-ish-ais">appendix</a>.)</p><h4>Values aren&#8217;t everything</h4><p>If AIs with some particular values gain temporary power on Earth, that doesn&#8217;t directly translate to those values getting maximum value out of the universe. (Regardless of whether those AIs are aligned or misaligned with us.) For example, the new, AI-run civilization might run into some x-risk before they colonize the universe, or they might start out with a terrible decision-theory that will prevent them from realizing most possible value.
To account for this, we can introduce new notation:</p><ul><li><p><em>v<sub>aligned</sub></em>: the value that our universe-wide values would put on a civilization with aligned AI. (Compared to how much we would value an equal-sized universe that some distant, ECL-ish AI was earnestly trying to optimize to our benefit.)</p></li><li><p><em>v<sub>misaligned</sub></em>: the value that ECL-ish misaligned AIs would assign to the average young civilization that was controlled by AIs that shared their values.</p></li></ul><p>And the adjusted utility to put on the futures with misaligned AIs (compared to futures with aligned AI) would be</p><p><em>c</em> <em>v<sub>misaligned</sub></em> / <em>v<sub>aligned</sub></em></p>
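<p>(As a purely hypothetical illustration with numbers I&#8217;m making up: if <em>c</em>=0.2, <em>v<sub>misaligned</sub></em>=0.5 and <em>v<sub>aligned</sub></em>=1, then a future controlled by such a misaligned AI would get weight 0.2*0.5/1=0.1, i.e. a tenth of the weight we&#8217;d put on a future with aligned AI.)</p>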
<h4>What values should we benefit?</h4><p>If we change the values of some powerful AI system from values <em>V1</em> to values <em>V2</em>, we probably harm values <em>V1</em> and benefit values <em>V2</em>. The basic idea is that this looks net-positive from an ECL perspective if almost no ECL-ish AIs endorse values <em>V1</em>, but a fair few ECL-ish AIs endorse values <em>V2</em>. In this section, I want to clarify what I mean by &#8220;almost no&#8221; and &#8220;a fair few&#8221;, in that sentence.</p><p>First, I want to note that it doesn&#8217;t seem important to necessarily benefit <em>very</em> common values. If we want to benefit distant, ECL-ish AI, there&#8217;s no need to benefit whatever group has the most power, or to design AI that intrinsically values some combination of everyone&#8217;s values, or anything like that. The math that I talked about in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a> doesn&#8217;t assume that we&#8217;re trading with <em>all</em> ECL-ish AIs, just that there&#8217;s a mutually beneficial deal with <em>some</em> ECL-ish AIs.</p><p>That said, in a sufficiently large and diverse universe, there will always be <em>some</em> AI that is both ECL-ish and that has any kind of weird values. But I don&#8217;t think that this implies that all values are equally good to benefit, from an ECL perspective.</p><p>The primary reason for this is that some values might have so few supporters that they cannot possibly benefit our values enough to make the deal worthwhile from our perspective. (When <a href="https://lukasfinnveden.substack.com/i/136239309/generalizing">deriving the formula</a> and <a href="https://lukasfinnveden.substack.com/i/136240168/applying-the-formula-to-our-situation">applying it to our situation</a>, there was an assumption that the AIs can arbitrarily linearly increase the amount that they benefit us. That assumption is violated if the supporting faction is too small.)</p><p>But wait (you might ask): The whole idea is that we could benefit particular values by influencing the values of AIs that later end up with some amount of power. If such AIs do end up with some power, how could there nevertheless be a shortage of powerful AIs that support those values? I see three different answers to this question.</p><p>Firstly, even if some values have a lot of supporters, they might also have a lot of <em>opponents</em>. With enough opponents, ECL-ish actors might on <em>net</em> be indifferent or disapprove of such values being empowered. (In fact, as soon as some values have any significant opposition, that introduces doubt about the formula in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>, since it doesn&#8217;t at all take into account that we might be harming the values of some ECL-ish actors. Thereby potentially providing evidence that other ECL-ish actors would be willing to harm our values.)</p><p>Secondly, we are specifically interested in benefitting (and being benefitted by) ECL-ish AIs.
If we (and people like us) create and empower AIs <em>that aren&#8217;t ECL-ish</em>, then it&#8217;d be unsurprising if there was a lack of ECL-ish AIs with those same values, who could proportionally benefit us. Salient reasons for why the AIs might not be ECL-ish is if they use the wrong decision theory, or if they only care about what happens in their own lightcone.</p><p>Thirdly, if we (and distant people who are similar to us) create ECL-ish AIs with some particular set of values, those AIs will be in a very confusing bargaining position, with respect to us.</p><p>Most obviously, the AI that we ourselves created will be intimately familiar with the results of all of our choices. Unless it is <em>very</em> updateless, it will probably not see itself as able to affect its own probability of coming into existence (via acausally affecting our actions).</p><p>What about AI in similar but not identical situations? If a similar pre-AGI civilization elsewhere created a similar AI, then <em>that</em> AI might still maintain some uncertainty over <em>our</em> actions, and see itself as having some acausal influence over us. Thus, we might still be able to acausally cooperate with that AI and other AIs like it. (And conversely, any AI that <em>we</em> create may have reason to benefit distant pre-AGI civilizations.) However, if all the other AIs with similar values were created in civilizations that were very similar to ours, then even if such AIs won&#8217;t have <em>perfect</em> information about us, they will still have <em>a lot</em> of information about us (via early observations that they&#8217;ve made of similar civilizations).</p><p>So my tentative, unstable hunch would be:</p><ul><li><p>If we create ECL-ish AIs that share values with ECL-ish AIs that were created in very different circumstances from us, then that&#8217;s easiest to think about.</p></li><li><p>If we create ECL-ish AIs that mainly share values with ECL-ish AIs that were created in very similar circumstances as us, then that&#8217;s significantly more complicated to think about. But ECL <em>might</em> still give us reason to do it &#8212;&nbsp;especially if those AIs are very updateless. (Otherwise, the way that their origin is tangled-up with our actions could lead to lower correlations between our decisions and theirs.)</p></li></ul><p>Summarizing the above:</p><ul><li><p>It&#8217;s not necessarily better to empower values that are shared with a majority of ECL-ish actors as opposed to just a decent number of ECL-ish actors.</p></li><li><p>It seems potentially important to avoid values that are supported and opposed by a comparable number of ECL-ish actors.</p></li><li><p>It seems preferable to avoid values that mainly come into existence via civilizations-similar-to-us deciding to create AI systems with particular values.</p></li><li><p>If we fail to do that, we might have additional reason to care about our own system being ECL-ish, including some version of updatelessness. (To increase the probability that people-like-us makes ECL-ish systems that can benefit us.)</p></li></ul><p>This suggests that we might want to empirically study (and speculate about) what values AI systems tend to adopt. Especially the kind of AI systems that could plausibly be empowered elsewhere in the universe. Ideally, we would identify what sort of values would correlate with being ECL-ish, though this seems hard. 
The one clue that we have is that ECL-ish AIs necessarily have universe-wide values.</p><p>However, even if we acquired good empirical information about this, I want to flag that all the concrete interventions that I can imagine carrying out on potentially-misaligned AI systems have large, plausible backfire risks. I feel pretty clueless about whether they&#8217;d be net-positive or net-negative. In particular, I want to flag that giving AIs large-scale, &#8220;universe-wide&#8221; values would make it more likely that they start conflicts with other actors that they share our physical universe with. This includes humans: Having AIs with impartial, large-scale preferences seems significantly more likely to lead to AI takeover than AIs with more modest values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Given how speculative the ECL reasoning is here, this obvious downside of ambitious, large-scale values currently weighs heavier than the upsides, in my mind.</p><h4>What about cooperation-conducive values?</h4><p>A different question you could ask is: What values would we want to give AI systems such that they behave better in interactions with <em>other</em> civilizations that have <em>other</em> values?</p><p>That&#8217;s a great question, but it&#8217;s out of scope for this post. In a different post, I discuss <a href="https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions">How ECL changes the value of interventions that broadly benefit distant civilizations</a>. I think ECL is relevant for how highly we should value such interventions, but (as I argue in that post) I think ECL with AI-systems matters relatively little for such interventions, compared to doing ECL with distant, evolved pre-AGI civilizations. (Since the case that we correlate with distant evolved, pre-AGI civilizations is stronger.)</p><h3>Influencing decision theory</h3><p>Separately from influencing values, we could also consider interventions that are focused on making AIs competently optimize for those values. As mentioned above, I can&#8217;t think of many capabilities that are path-dependent enough that we could plausibly affect them &#8212; but one candidate is to make sure that the AIs have a good approach to decision theory. (By the lights of distant, ECL-ish AIs.)</p><p>Contrast an AI that (i) only focuses on its own light cone, vs. (ii) an AI that is ECL-ish, and thereby pursues mutually beneficial trades with the rest of the multiverse.</p><p>One difference is that the latter will generate more gains-from-trade for <em>other</em> values. Similar to the question of &#8220;What values could make AI systems behave nicely towards other civilizations?&#8221;, the gain for other civilizations is out of scope for this post &#8212;&nbsp;see instead <a href="https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions">How ECL changes the value of interventions that broadly benefit distant civilizations</a>.</p><p>But separately, it will also <em>itself</em> benefit from gains-from-trade. (Probably. It depends on some of the issues discussed in <a href="https://lukasfinnveden.substack.com/i/136240168/appendix-whose-values-get-benefitted-by-ecl-ish-ais">the appendix</a>.)
When we think about <em>those</em> potential gains (to the misaligned AI&#8217;s own values) we should value those according to the formula &#8212;&nbsp;at <em>c</em> times as much as we would value benefits to our own values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>How could we increase the likelihood that AI ends up reasoning well about decision-theory, by the lights of distant, ECL-ish AIs? The one thing we know for sure is that such AIs will themselves be ECL-ish, so probably they&#8217;d want other AIs with their values to have the preconditions for that &#8212; including EDT-ish (or maybe FDT-ish) decision-theories and probably some kind of updatelessness.</p><p>Just like above, interventions in this space also have plausible backfire risks. For one, AIs that act according to acausal decision theories seem harder to control from an alignment perspective. For example, they seem more likely to coordinate with each other in ways that humans can&#8217;t easily detect. For another, if AIs start thinking about acausal decision theories at an earlier time, they may be more likely to make foolish decisions due to <a href="https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem">commitment races</a>.</p><h3>Other path-dependent abilities</h3><p>Are there any other path-dependencies that would let us influence AIs&#8217; long-run abilities, in a way that distant, ECL-ish AIs could approve of? I don&#8217;t know many great suggestions that don&#8217;t fall into one of the above categories. But one candidate example (that&#8217;s admittedly highly related to both decision theory and values) could be to equip them with <a href="https://longtermrisk.org/spi">surrogate goals</a>.</p><h2>Appendix &#8212; Whose values get benefitted by ECL-ish AIs?</h2><p>In <a href="https://lukasfinnveden.substack.com/i/136240168/influencing-values">Influencing values</a>, I talk about how we benefit particular values a lot if we empower AIs with those values. But if those AIs are ECL-ish, they might just optimize for a compromise utility function anyway (as argued in the <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">original paper on ECL</a>, then called MSR) &#8212; without strongly weighting their own values. So are we really benefitting the AI&#8217;s own values, specifically?</p><p>Quick flag before I answer this: I&#8217;m very confused about the issues below, so it will be even harder than usual to understand what I&#8217;m writing about. The point of this section is less to communicate knowledge, and more to communicate a better sense of where I still feel plenty confused, and what sort of areas might need to be poked at if we wanted to get a better sense of this stuff.</p><p>There are a few different things to note.</p><p>Firstly, if everyone&#8217;s &#8220;compromise utility function&#8221; was <em>identical</em>, that would probably be because every ECL-ish actor correlated similarly much with every ECL-ish actor. (In particular, that there aren&#8217;t any separate correlational clusters with different average values.) But if that&#8217;s the case, then the value of <em>c</em> should be 1. So in that case, there&#8217;s no disagreement with my formula.
Thus, insofar as my methodology fails, it will be in cases that are more complex than &#8220;all ECL-ish actors optimize for a single compromise utility function&#8221;.</p><p>Secondly, in order to decide on a compromise utility function, distant actors may want to study what sort of actors both have power and are sympathetic to ECL. Such superintelligence-powered studies will probably be very accurate, and therefore take into account what decisions we make about what actors to empower.</p><ul><li><p>Accordingly, if we empower some AI with some particular values, the overall effect would be for <em>that</em> AI to locally optimize for a compromise utility function <em>and</em> for the overall compromise utility function to shift in the direction of that AI&#8217;s values.</p></li><li><p>If this is the case, I expect that the overall effect would be similar to if the AI had simply been offered an opportunity to negotiate with different civilizations and strike a mutually beneficial bargain. In which case it makes sense to think separately about how (i) empowering that AI gives it access to extra resources, and (ii) ECL allows it to get some extra gains-from-trade.</p></li></ul><p>While I think that&#8217;s a large part of the story (which makes me ok with leaning on this framework in this post), I recognize that it can&#8217;t be the whole story. In particular, the whole idea with this post is that <em>we</em> might be able to do ECL with these AIs. And we don&#8217;t fit the story above. We don&#8217;t have the ability to do detailed monitoring of what sort of actors are empowered, so we can&#8217;t be responsive to these kinds of power-shifts in the way that I just described. There are certainly <em>some</em> other actors like us, and there might be many more.</p><p>In our case, I think it&#8217;s plausible that even if <em>we</em> can&#8217;t do monitoring and balance power in the right way, <em>the AIs we&#8217;re making deals with</em> will be able to do this. This is an important part of the story for why <a href="https://lukasfinnveden.substack.com/i/136240168/influencing-values">I think it probably doesn&#8217;t matter</a> a lot that we benefit values that are extremely common, as opposed to common-enough that they can afford to benefit us a commensurate amount. But here I feel like I&#8217;m stacking speculation on speculation, and don&#8217;t really know what I&#8217;m talking about.</p><p>And even if I take that seriously, there is, in fact, a remaining effect here. If you create ECL-ish representatives for a <em>rare</em> value-system, most of those systems&#8217; resources might go towards compensating actors who empowered such values (in ignorance about whether there would be anyone to compensate them for that). In which case, creating those first few representatives would be more to the benefit of the compensated actors than the values held by those representatives. (Which is sort-of the reason for why the <a href="https://lukasfinnveden.substack.com/i/136240168/influencing-values">Influencing values</a>-section concludes that it might be good to create ECL-ish, updateless AI systems.)</p><p>I repeat: I feel very out of my depth here.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>How would they know what we want? 
These will be space-colonizing civilizations at technological maturity, so I imagine they could get a good idea by committing some tiny fraction of their resources to the study of what evolved species in our situation tend to want. (Including by running various kinds of simulations.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Or for <em>someone else</em> to benefit us. I think about this deal framework as checking that there&#8217;s any win-win deal that involves us helping someone else, in which case it should be included in a grand bargain. But if we do our part in as many win-win deals as we can, that&#8217;s not just evidence that our imagined counterparts act in the same deals, but also evidence that other ECL-ish actors do their best to follow this procedure. And we could also get benefits from there.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>&nbsp;This is for a few different reasons:</p><ul><li><p>AIs with more modest values seem easier to study (less likely to actively try to mess up experiments).</p></li><li><p>AIs with more modest values seem more likely to agree to mutually-beneficial deals where they admit that they have misaligned goals, and we help them achieve those goals. (In exchange for the information that they are misaligned &#8212; and potentially in exchange for other services.)</p></li><li><p>AI with more modest values seem less likely to pursue world takeover if they escape their bounds.</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>&nbsp;It&#8217;s possible that the gains-from-trade that accrues to itself would be significantly larger than the value it generates for others. As an analogy: if a small country like Sweden lost the ability to trade with the outside world &#8212; I think that would be worse for Sweden than it would be for the rest of the world combined. (Specifically, I think this is true <em>even</em> if we&#8217;re talking about <em>total</em> harm, rather than per-capita harm. On a per-capita basis, it would obviously be <em>much</em> worse for Sweden.) 
My main intuition here is that the world economy benefits a lot from specialization and diversity in what products can be produced where &#8212; and the world economy would mostly have substitutes for Swedish goods, but Sweden would not have substitutes for the rest of the world&#8217;s goods.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Asymmetric ECL]]></title><description><![CDATA[The basic engine behind ECL is: &#8220;if I take action A, that increases the probability that a different actor takes analogous action B&#8221;.]]></description><link>https://lukasfinnveden.substack.com/p/asymmetric-ecl</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/asymmetric-ecl</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 09:13:57 GMT</pubDate><content:encoded><![CDATA[<p>The basic engine behind ECL is: &#8220;if I take action <em>A</em>, that increases the probability that a different actor takes analogous action <em>B</em>&#8221;. In some circumstances, it&#8217;s fairly clear what an &#8220;analogous action&#8221; is. But sometimes it&#8217;s not. This post is about what acausal deals might look like when agents face very asymmetric decisions.</p><p>In this post, I will:</p><ul><li><p>Outline a way of thinking about &#8220;analogous actions&#8221; that might enable ECL deals in highly asymmetric situations. (<a href="https://lukasfinnveden.substack.com/i/136239309/how-to-think-about-analogous-actions-in-asymmetric-situations">Link</a>.)</p></li><li><p>Apply this algorithm in a series of examples, ultimately deriving a formula for when two groups of people will be able to execute a mutually-beneficial ECL deal. (<a href="https://lukasfinnveden.substack.com/i/136239309/a-series-of-examples">Link</a>.)</p></li></ul><p>Let&#8217;s dive into it.</p><h2>How to think about &#8220;analogous actions&#8221; in asymmetric situations</h2><p>(For discussion of many similar ideas, see section 2.8 of Oesterheld <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">(2017)</a>.)</p><p>In order to reason about asymmetric situations, the main thing that I&#8217;m relying on is that the term "action" is broadly defined and meant to include possibilities like "think thought <em>X</em>" or "decide what to do using algorithm <em>Y</em>". For some of those actions, the analogy seems much clearer.</p><p>For example, if ECL arguments convince me to decide to &#8220;be more inclined to take actions that help other value-systems (that are endorsed by actors who I&#8217;m plausibly correlated with)&#8221; or &#8220;avoid actions with large externalities on other value systems (that are endorsed by actors who I&#8217;m plausibly correlated with)&#8221;, then it&#8217;s relatively more clear what the analogous decisions are from the perspective of other people. And if all involved parties stick to these resolutions in practice, then all parties can indeed get gains-from-trade. (For an account of ECL that focuses on and recommends &#8220;cooperation heuristics&#8221; like those I just mentioned, see <a href="https://longtermrisk.org/commenting-msr-part-2-cooperation-heuristics/">this post</a> by Lukas Gloor.)</p><p>If we go even more abstract, I suspect that ECL would recommend cooperation across even more asymmetric situations. I&#8217;ll now give a proposal of such an abstract algorithm, which would ideally also tell us more precisely <em>how</em> inclined we should be to help other value-systems.
(The description is pretty cumbersome. I think the rest of the post should be understandable even if you skim past it, so feel free to skip to the series of examples.)</p><p>Here&#8217;s the idea. I could decide what to do by following <strong>the algorithm</strong> of trying to approximate the following ideal procedure: Consider all other actors who are trying to decide what to do. Find a <em>joint policy</em> (that recommends actions to all those actors) with properties as follows.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ol><li><p>If <strong>this algorithm</strong> outputs <em>that policy</em>, and if I follow the recommendations of <strong>this algorithm</strong>, then I expect my values to be better off than if I naively optimize for my own values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><ol><li><p>This would happen via increasing the probability that other actors follow their part of <em>the policy</em>. So to find a policy with this property, I will estimate how much more likely other actors would be to follow an algorithm similar to <strong>this algorithm</strong> (rather than naively optimizing for their own values) if I follow <strong>this algorithm</strong>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li></ol></li><li><p>If <strong>this algorithm</strong> recommends <em>that policy</em>, then each other actor expects to be better off if they follow the recommendation of <strong>the algorithm</strong> than if they naively optimize for their own values.</p><ol><li><p>They&#8217;d expect this if they perceived themselves to be increasing the probability that other actors follow their part of <em>the policy</em>. So in order to find a policy with this property, I&#8217;ll have to estimate how much acausal influence other actors perceive themselves to have on me and each other.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li></ol></li><li><p><em>The joint policy</em> has various other nice features that make it easy and appealing to coordinate on, e.g.:</p><ol><li><p>Pareto-optimality with respect to each participating actor&#8217;s beliefs.</p></li><li><p>Fairly distributes gains-from-trade.</p></li><li><p>Either it is a natural coordination point or it is robust to other agents&#8217; approximation-algorithms finding somewhat different policies.</p></li><li><p>(Johannes Treutlein&#8217;s <a href="https://arxiv.org/pdf/2307.04879.pdf">paper</a> contains more discussion of the bargaining problem that comes with ECL.)</p></li></ol></li></ol><p>Compared to naively optimizing for my own values, following the recommendations of a policy output by this algorithm looks like a great deal. It will only recommend that I do things differently insofar as I expect that to positively affect my values (by criterion 1). And this argument is very abstract and general, and doesn&#8217;t reference my particular situation. Therefore, it seems plausible that my choice to follow this algorithm is quite analogous to others&#8217; choice to follow this algorithm.
In which case the acausal influence in steps 1a and 2a will be high, and so the algorithm can recommend decisions that will give me and others gains-from-trade.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>That said, there are unsurprisingly a number of issues. See <a href="https://lukasfinnveden.substack.com/i/136239309/issues-with-the-algorithm">this appendix</a> for some discussion, including a concrete example where this algorithm clearly gets the wrong answer.</p><p>What would it look like for us to approximate a procedure like the above? In this post, my strategy is to consider a limited number of actors and a simplified list of options, such that the above procedure becomes feasible. The hope is that we can then translate the lessons from such simple cases into the real world.</p><h2>A series of examples</h2><h3>Prisoner&#8217;s dilemma</h3><p>Let&#8217;s say Alice and Bob are playing a game where they can each pay $1 to give the other player $3. Let&#8217;s say that, if Alice were to follow the above algorithm (rather than unconditionally refusing to pay), she&#8217;d think it was 50 percentage points (50 ppt) more likely that Bob would do the same &#8212; and vice versa.</p><p>In this case, the algorithm would recommend the joint policy where Alice and Bob both cooperate &#8212; since that would make them both expect to get 50%*$3-$1=$0.5. (And it seems like this meets the criteria above: it&#8217;s fair, it&#8217;s a natural Schelling point, it&#8217;s Pareto-optimal, etc.)</p><h3>Asymmetric dilemma</h3><p>Let&#8217;s say that Alice can pay $1 to give Bob $5, and Bob can transfer any amount of money to Alice. They both have to select their actions before seeing what the other player does. Again, if Alice were to follow the above algorithm, she&#8217;d think it was 50 ppt more likely that Bob would do the same (rather than not transferring any money) &#8212; and vice versa.</p><p>In this case, the algorithm might recommend a joint policy where Alice pays $1 to send Bob $5, and Bob transfers $2.33 to Alice.</p><ul><li><p>This would lead Alice to expect 50%*$2.33-$1~=$0.17.</p></li><li><p>This would lead Bob to expect 50%*$5-$2.33~=$0.17.</p></li></ul><p>In this case, there&#8217;s a wide variety of policies that would leave both parties better off than if they didn&#8217;t cooperate at all.
But it seems plausible that the policy that gives both parties equal expected gain ($0.17) would be the most natural one.</p>
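<p>(As a quick sanity check of the arithmetic in these two examples, here&#8217;s a minimal Python sketch; the function and variable names are just illustrative, not anything from the deal itself.)</p><pre><code># Expected gain from following the algorithm, for one actor:
# (perceived influence) * (what the other side sends) - (own cost).
def expected_gain(influence, gain_if_other_follows, own_cost):
    return influence * gain_if_other_follows - own_cost

# Prisoner's dilemma: each pays $1 to give the other $3, with 50 ppt influence.
print(expected_gain(0.5, 3, 1))     # 0.5 -> both expect +$0.50

# Asymmetric dilemma: Alice pays $1 to send Bob $5; Bob transfers $2.33.
print(expected_gain(0.5, 2.33, 1))  # ~0.17 for Alice
print(expected_gain(0.5, 5, 2.33))  # ~0.17 for Bob
</code></pre>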
srcset="https://substackcdn.com/image/fetch/$s_!GQzb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b32cc3d-7fc2-4044-8bd6-2c8ceff6b3c7_162x33.png 424w, https://substackcdn.com/image/fetch/$s_!GQzb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b32cc3d-7fc2-4044-8bd6-2c8ceff6b3c7_162x33.png 848w, https://substackcdn.com/image/fetch/$s_!GQzb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b32cc3d-7fc2-4044-8bd6-2c8ceff6b3c7_162x33.png 1272w, https://substackcdn.com/image/fetch/$s_!GQzb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b32cc3d-7fc2-4044-8bd6-2c8ceff6b3c7_162x33.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_zJA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_zJA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 424w, https://substackcdn.com/image/fetch/$s_!_zJA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 848w, https://substackcdn.com/image/fetch/$s_!_zJA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 1272w, https://substackcdn.com/image/fetch/$s_!_zJA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_zJA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png" width="125" height="34" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:34,&quot;width&quot;:125,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1449,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_zJA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 424w, 
https://substackcdn.com/image/fetch/$s_!_zJA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 848w, https://substackcdn.com/image/fetch/$s_!_zJA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 1272w, https://substackcdn.com/image/fetch/$s_!_zJA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a14bb8a-87ad-4acf-be2c-dff252623ed2_125x34.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8YyS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8YyS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 424w, https://substackcdn.com/image/fetch/$s_!8YyS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 848w, https://substackcdn.com/image/fetch/$s_!8YyS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 1272w, https://substackcdn.com/image/fetch/$s_!8YyS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8YyS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png" width="129" height="32" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:32,&quot;width&quot;:129,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1777,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8YyS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 424w, https://substackcdn.com/image/fetch/$s_!8YyS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 848w, 
https://substackcdn.com/image/fetch/$s_!8YyS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 1272w, https://substackcdn.com/image/fetch/$s_!8YyS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ace7c5d-2611-49f0-8efb-111948447cf6_129x32.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Now let&#8217;s formulate the constraints that the algorithm needs to recommend actions that both Alice and Bob benefit from:</p><ul><li><p>Alice expects that following the algorithm gets her</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7r93!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7r93!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 424w, https://substackcdn.com/image/fetch/$s_!7r93!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 848w, https://substackcdn.com/image/fetch/$s_!7r93!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 1272w, https://substackcdn.com/image/fetch/$s_!7r93!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7r93!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png" width="120" height="43" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:43,&quot;width&quot;:120,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1508,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7r93!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 424w, https://substackcdn.com/image/fetch/$s_!7r93!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 848w, 
https://substackcdn.com/image/fetch/$s_!7r93!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 1272w, https://substackcdn.com/image/fetch/$s_!7r93!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26626544-da0a-4048-85d5-d84cdca7f1ce_120x43.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p> utility. In order for this to be worthwhile, we need </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LHaV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LHaV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 424w, https://substackcdn.com/image/fetch/$s_!LHaV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 848w, https://substackcdn.com/image/fetch/$s_!LHaV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 1272w, https://substackcdn.com/image/fetch/$s_!LHaV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LHaV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png" width="267" height="58" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:58,&quot;width&quot;:267,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3367,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LHaV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 424w, https://substackcdn.com/image/fetch/$s_!LHaV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 848w, https://substackcdn.com/image/fetch/$s_!LHaV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 1272w, 
https://substackcdn.com/image/fetch/$s_!LHaV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7152a4db-8ba7-4163-a5ac-b76a7251bbeb_267x58.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div></li><li><p>Correspondingly, for Bob, we need that </p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VZgl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VZgl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 424w, https://substackcdn.com/image/fetch/$s_!VZgl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 848w, https://substackcdn.com/image/fetch/$s_!VZgl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 1272w, https://substackcdn.com/image/fetch/$s_!VZgl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VZgl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png" width="113" height="60" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:60,&quot;width&quot;:113,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2012,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VZgl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 424w, https://substackcdn.com/image/fetch/$s_!VZgl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 848w, https://substackcdn.com/image/fetch/$s_!VZgl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 1272w, https://substackcdn.com/image/fetch/$s_!VZgl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc410cf8a-e8fb-4b78-a994-f501dc63dbcf_113x60.png 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a></figure></div><p>In the symmetric prisoner&#8217;s dilemma, we can verify that both criteria are fulfilled, since </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KN6C!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KN6C!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 424w, https://substackcdn.com/image/fetch/$s_!KN6C!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 848w, https://substackcdn.com/image/fetch/$s_!KN6C!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 1272w, https://substackcdn.com/image/fetch/$s_!KN6C!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KN6C!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png" width="127" height="58" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:58,&quot;width&quot;:127,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!KN6C!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 424w, https://substackcdn.com/image/fetch/$s_!KN6C!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 848w, https://substackcdn.com/image/fetch/$s_!KN6C!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 1272w, https://substackcdn.com/image/fetch/$s_!KN6C!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F615dcb87-03a3-49c3-a9b0-a79b327329fe_127x58.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Now, let&#8217;s consider the asymmetric dilemma. 
<p>Now, let&#8217;s consider the asymmetric dilemma. Although we could simply plug in the numbers that I used above (Bob transfers $2.33), I want to formulate a general formula that answers the question:</p><blockquote><p>Does there exist <em>any</em> deal that both parties find mutually beneficial, if one party can choose the quantity of help that they send to the other?</p></blockquote><p>Thus, for this general case, let&#8217;s say that Bob can adjust his contribution so as to simultaneously linearly increase or decrease both the cost he pays (proportional to <em>l<sub>B</sub></em>) and the amount he benefits Alice (proportional to <em>g<sub>A</sub></em>) by any positive factor <em>k</em>.</p><p>Then, the question &#8220;Does there exist any deal that both parties find mutually beneficial?&#8221; corresponds to the question &#8220;Does there exist any <em>k</em> such that both <em>c<sub>AB</sub></em>·<em>k</em>·<em>g<sub>A</sub></em> / <em>l<sub>A</sub></em> > 1 and <em>c<sub>BA</sub></em>·<em>g<sub>B</sub></em> / (<em>k</em>·<em>l<sub>B</sub></em>) > 1?&#8221;</p><p>Let&#8217;s find a general expression that answers that question.</p><p>If there&#8217;s some <em>k</em> such that both those expressions are larger than 1, then their product must also be greater than 1. By canceling <em>k</em>, we can conclude that (<em>c<sub>AB</sub></em>·<em>k</em>·<em>g<sub>A</sub></em> / <em>l<sub>A</sub></em>)·(<em>c<sub>BA</sub></em>·<em>g<sub>B</sub></em> / (<em>k</em>·<em>l<sub>B</sub></em>)) = <em>c<sub>AB</sub></em>·<em>c<sub>BA</sub></em>·<em>g<sub>A</sub></em>·<em>g<sub>B</sub></em> / (<em>l<sub>A</sub></em>·<em>l<sub>B</sub></em>) > 1.</p><p>Consider instead the case where no possible value of <em>k</em> makes both expressions larger than 1. In that case, we can choose a <em>k</em> that makes both expressions equal, and they will then both be equal to 1 or smaller. Their product will also be equal to 1 or smaller, so by canceling <em>k</em> again, we can conclude that <em>c<sub>AB</sub></em>·<em>c<sub>BA</sub></em>·<em>g<sub>A</sub></em>·<em>g<sub>B</sub></em> / (<em>l<sub>A</sub></em>·<em>l<sub>B</sub></em>) &#8804; 1.</p><p>Combining these, we can conclude that there exists a <em>k</em> that makes both expressions larger than 1 iff:</p><p><em>c<sub>AB</sub></em>·<em>c<sub>BA</sub></em>·<em>g<sub>A</sub></em>·<em>g<sub>B</sub></em> / (<em>l<sub>A</sub></em>·<em>l<sub>B</sub></em>) > 1</p><p>(For a slightly more detailed derivation of this, see <a href="https://lukasfinnveden.substack.com/i/136239309/asymmetric-dilemma-more-thorough-derivation">Asymmetric dilemma &#8212; more thorough derivation</a>.)</p><p>The argument above would work just the same if Alice had been able to linearly adjust how much she helps Bob, in addition to or instead of Bob being able to linearly adjust how much he helps Alice. So in any dilemma where Alice or Bob can linearly adjust how much they help the other party, they can find a mutually beneficial deal that they&#8217;re both incentivized to follow whenever <em>c<sub>AB</sub></em>·<em>c<sub>BA</sub></em>·<em>g<sub>A</sub></em>·<em>g<sub>B</sub></em> / (<em>l<sub>A</sub></em>·<em>l<sub>B</sub></em>) > 1.</p>
<h3>Large-worlds asymmetric dilemma</h3><p>Now let&#8217;s say that Alice is a paperclip maximizer, who finds herself in a world where she can produce paperclips for $6 or staples for $1. In other parts of the universe, she knows that Bob the staple maximizer lives in a world where both paperclips and staples cost $6 each to manufacture. Let&#8217;s assume that Alice thinks that her following the above algorithm would only increase Bob&#8217;s probability of doing so by 10 ppt. (And vice versa for Bob.)</p><p>Let&#8217;s apply the above formula to this case. Alice can create 6 staples for the cost of 1 paperclip, so let&#8217;s set <em>g<sub>B</sub></em>=6 and <em>l<sub>A</sub></em>=1. Bob can create 1 paperclip for the cost of 1 staple, so let&#8217;s set <em>g<sub>A</sub></em>=1 and <em>l<sub>B</sub></em>=1. Then we have <em>c<sub>AB</sub></em>·<em>c<sub>BA</sub></em>·<em>g<sub>A</sub></em>·<em>g<sub>B</sub></em> / (<em>l<sub>A</sub></em>·<em>l<sub>B</sub></em>) = 0.1*0.1*1*6 / (1*1) = 0.06 < 1.</p><p>Due to the low acausal influence, there&#8217;s no joint policy that recommends that Alice create staples, or that Bob create paperclips, that both Alice and Bob are incentivized to follow.</p><p>However, let&#8217;s now add the assumption that there&#8217;s a <em>huge</em> number of paperclip maximizers (&#8220;clippers&#8221;) in Alice&#8217;s situation across the universe, and a <em>huge</em> number of staple maximizers (&#8220;staplers&#8221;) in Bob&#8217;s situation. And importantly, the clippers don&#8217;t expect to be perfectly correlated with each other, and the staplers don&#8217;t expect to be perfectly correlated with each other. In fact, Alice expects that following the above algorithm would increase the probability that the other clippers do so by 20 ppt. (And this belief is shared with the other clippers.) Analogously, each stapler believes that following the algorithm would increase the probability that the other staplers do so by only 20 ppt.</p><p>When calculating the expected utility, then, Alice&#8217;s and Bob&#8217;s own universes are quite negligibly important. The vast majority of the cost of following through on the deal comes from the additional 20 ppt probability that other actors with the same values follow the same algorithm (and pay its costs), rather than from the failure to produce paperclips/staples in Alice&#8217;s/Bob&#8217;s own universe.</p><p>Let&#8217;s rewrite the above formula to take this into account. Let&#8217;s use similar notation as above, with modifications:</p><ul><li><p><em>n<sub>A</sub></em> is the number of clippers (i.e. actors who share Alice&#8217;s values) and <em>n<sub>B</sub></em> is the number of staplers (i.e. actors who share Bob&#8217;s values).</p></li><li><p><em>c<sub>AB</sub></em> now refers to <em>each</em> clipper&#8217;s perceived influence over <em>each</em> stapler (and vice versa for <em>c<sub>BA</sub></em>). 
This includes Alice and Bob as special cases &#8212; but Alice and Bob no longer have any special relationship that isn&#8217;t captured by their relationship to each of the other staplers and clippers.</p></li><li><p><em>c<sub>AA</sub></em> is clippers&#8217; (including Alice&#8217;s) perceived influence on other clippers, and <em>c<sub>BB</sub></em> is staplers&#8217; (including Bob&#8217;s) perceived influence on other staplers.</p></li></ul><p>With this notation, if we assume that Alice and Bob are a negligible fraction of clippers and staplers:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><ul><li><p>In order for a policy to be good for Alice, we need <em>n<sub>B</sub></em> &#183; <em>c<sub>AB</sub></em> &#183; <em>g<sub>A</sub></em> &gt; <em>n<sub>A</sub></em> &#183; <em>c<sub>AA</sub></em> &#183; <em>l<sub>A</sub></em>.</p></li><li><p>In order for a policy to be good for Bob, we need <em>n<sub>A</sub></em> &#183; <em>c<sub>BA</sub></em> &#183; <em>g<sub>B</sub></em> &gt; <em>n<sub>B</sub></em> &#183; <em>c<sub>BB</sub></em> &#183; <em>l<sub>B</sub></em>.</p></li></ul><p>Again, if one of the parties can linearly adjust their contribution so as to proportionally increase or decrease both <em>l<sub>B</sub></em> and <em>g<sub>A</sub></em>, then a mutually beneficial deal is compatible with individual incentives whenever:</p><p>(<em>c<sub>AB</sub></em> &#183; <em>c<sub>BA</sub></em>)/(<em>c<sub>AA</sub></em> &#183; <em>c<sub>BB</sub></em>) &#183; (<em>g<sub>A</sub></em>/<em>l<sub>B</sub></em>) &#183; (<em>g<sub>B</sub></em>/<em>l<sub>A</sub></em>) &gt; 1.</p><p>For the example sketched above, we would have <em>c<sub>AB</sub></em> = <em>c<sub>BA</sub></em> = 0.1; <em>c<sub>AA</sub></em> = <em>c<sub>BB</sub></em> = 0.2; <em>g<sub>A</sub></em>/<em>l<sub>B</sub></em> = 1; and <em>g<sub>B</sub></em>/<em>l<sub>A</sub></em> = 6. This yields (0.1 &#183; 0.1)/(0.2 &#183; 0.2) &#183; 1 &#183; 6 = 1.5 &gt; 1, so a trade is possible.</p>
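<p>If you want to sanity-check these numbers yourself, here is a minimal Python sketch that plugs them into the two conditions above (the function names are mine, just for illustration):</p><pre><code>def small_world_deal_possible(c_AB, c_BA, g_A, l_A, g_B, l_B):
    # A mutually beneficial deal exists iff c_AB * c_BA * (g_A/l_B) * (g_B/l_A) exceeds 1.
    return c_AB * c_BA * (g_A / l_B) * (g_B / l_A) > 1

def large_world_deal_possible(c_AB, c_BA, c_AA, c_BB, g_A, l_A, g_B, l_B):
    # Same check, but each side's influence is discounted by the influence
    # it has on actors who already share its values.
    return (c_AB * c_BA) / (c_AA * c_BB) * (g_A / l_B) * (g_B / l_A) > 1

# Alice turns 1 paperclip's worth of resources into 6 staples; Bob trades 1-for-1.
g_A, l_A, g_B, l_B = 1, 1, 6, 1

print(small_world_deal_possible(0.1, 0.1, g_A, l_A, g_B, l_B))
# False: the product is 0.06, so no deal with only Alice and Bob.
print(large_world_deal_possible(0.1, 0.1, 0.2, 0.2, g_A, l_A, g_B, l_B))
# True: the product is 1.5, so a deal is possible in the large world.</code></pre>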
<p>That&#8217;s the main content of this post! If you want to read a few clarifying remarks about this formula, then see the first section of the appendices, right after this.</p><p>I extend this formula to situations where actors can gather evidence about how much they correlate with each other in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl-with-evidence">Asymmetric ECL with evidence</a>. (You&#8217;d probably want to read <a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a> before reading that, though.)</p><h2>Appendices</h2><h3>Some remarks on the formula</h3><p>A few clarifying remarks on the formula derived <a href="https://docs.google.com/document/d/1CcrhQo60rpbGe0cNoXyVoYgbpcLhxFYK3BCn93x7gyQ/edit#heading=h.offrirmk7o3k">above</a>:</p><ul><li><p>The formula is not sensitive to intertheoretic comparisons of utility, or the units that we denominate gains and losses in. <em>g<sub>A</sub></em> is divided by <em>l<sub>A</sub></em>, and <em>g<sub>B</sub></em> is divided by <em>l<sub>B</sub></em>, so any linear transformation of those numbers will leave the expression with the same value.</p></li><li><p>In order for this calculation to make sense, the other paperclippers and staple maximizers must be in the exact same situation as Alice and Bob with respect to all mentioned variables.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> (E.g. the degree to which they believe themselves to be influencing other paperclippers and staple maximizers.) And in order for everyone to be able to carry out these computations, and reasonably expect others to match them, all this must be common knowledge.</p><ul><li><p>I&#8217;m optimistic that some substantial variation and uncertainty wouldn&#8217;t ruin the opportunities for cooperation &#8212;&nbsp;but it sure would complicate the analysis and make it harder to find fair deals.</p></li></ul></li><li><p><em>n<sub>A</sub></em> and <em>n<sub>B</sub></em> cancel out in the final inequality. This means that the number of actors doesn&#8217;t affect whether <em>any</em> deal is mutually beneficial. The reason for this is that a deal could be arbitrarily lop-sided to compensate for the greater numerosity on one side.</p><ul><li><p>Note that this assumes common knowledge about the number of actors on each side. If there are disagreements about how many actors are on each side, that could interfere with (or more rarely, enable) a mutually beneficial deal.</p></li><li><p>Disagreements about the number of actors could naturally emerge from each actor Bayes-updating on their own existence, making them (rationally) believe that their own side is more common.</p></li></ul></li><li><p>There are two interesting ways to write (<em>c<sub>AB</sub></em> &#183; <em>c<sub>BA</sub></em>)/(<em>c<sub>AA</sub></em> &#183; <em>c<sub>BB</sub></em>). Both point towards the expression being smaller than 1. Both suggest that we care about the <em>relative</em> acausal influence of various groups rather than the absolute level.</p><ul><li><p>One way to write this is (<em>c<sub>AB</sub></em>/<em>c<sub>AA</sub></em>) &#183; (<em>c<sub>BA</sub></em>/<em>c<sub>BB</sub></em>), i.e., the ratio between paperclippers&#8217; influence on staple 
maximizers and paperclippers&#8217; influence on paperclippers &#8212; multiplied by the converse.</p><ul><li><p>Since it&#8217;s possible to write the expression this way, we can see that (if we hold the other side&#8217;s acausal influence constant) the expression only depends on the ratio of how much acausal influence I have on various groups. If I was to proportionally increase or decrease my influence, that wouldn&#8217;t change whether a deal was possible or not.</p></li><li><p>Also, it seems likely that both <em>c<sub>AB</sub></em>/<em>c<sub>AA</sub></em> and <em>c<sub>BA</sub></em>/<em>c<sub>BB</sub></em> will be smaller than 1, since agents could generally be expected to have more decision-theoretic influence over agents that are more similar to them.</p></li><li><p>However, they won&#8217;t necessarily be smaller than 1.</p><ul><li><p>For example, if I like EDT, but I think that almost no pre-AGI actors are EDT-agents (but lots of AIs use EDT), perhaps I would think that I have more influence on the average AI than on the average pre-AGI actor.</p></li><li><p>Though in that case: The <em>other</em> ratio will be proportionally smaller (since the AIs would also doubt their ability to influence all the non-EDT pre-AGI actors). So it seems likely that the product should still be smaller than 1.</p></li></ul></li></ul></li><li><p>Another way to write this is (<em>c<sub>BA</sub></em>/<em>c<sub>AA</sub></em>) &#183; (<em>c<sub>AB</sub></em>/<em>c<sub>BB</sub></em>), i.e., the ratio between staple maximizers&#8217; influence on paperclippers and paperclippers&#8217; influence on paperclippers &#8212; multiplied by the converse.</p><ul><li><p>Again, since it&#8217;s possible to write the expression this way, we can see that (holding everything else constant) the expression only depends on the ratio of how much acausal influence various groups have on me (and people similar to me). If I was to proportionally increase or decrease the degree to which others could influence me (and people similar to me), that wouldn&#8217;t change whether a deal was possible or not.</p></li><li><p>Again, it seems likely that both <em>c<sub>BA</sub></em>/<em>c<sub>AA</sub></em> and <em>c<sub>AB</sub></em>/<em>c<sub>BB</sub></em> should be smaller than 1. Regardless of what influence the paperclippers have on the staple maximizers (<em>c<sub>AB</sub></em>), the staple maximizers ought to have <em>more</em> influence on themselves (<em>c<sub>BB</sub></em>) &#8212; and vice versa. That said, once again, they won&#8217;t <em>necessarily</em> be smaller than 1:</p><ul><li><p>For example, perhaps the paperclippers and staple maximizers disagree a lot about decision theory, such that the clippers systematically assign higher correlations than the staplers. If so, one of the ratios could be larger than 1.</p></li><li><p>Though just as above: I think that in that case, the <em>other</em> ratio will be proportionally smaller. 
(If the clippers&#8217; high correlations are in the numerator on one side, they will be on the denominator on the other side.)</p></li></ul></li></ul></li><li><p>Together, these make me fairly convinced that (<em>c<sub>AB</sub></em> &#183; <em>c<sub>BA</sub></em>)/(<em>c<sub>AA</sub></em> &#183; <em>c<sub>BB</sub></em>) should be smaller than 1 in all reasonable situations. 
That said, the variables in it ultimately describe different actors&#8217; beliefs, and I haven&#8217;t nailed down the criteria that the beliefs would have to satisfy to guarantee that the expression is smaller than 1.</p></li></ul></li></ul><h3>Asymmetric dilemma &#8212; more thorough derivation</h3><p>If you weren&#8217;t convinced by the derivation in <a href="https://docs.google.com/document/d/1CcrhQo60rpbGe0cNoXyVoYgbpcLhxFYK3BCn93x7gyQ/edit#heading=h.35opq6dw8p4f">Asymmetric dilemma</a>, consider this one.</p><p>Let&#8217;s say that:</p><ul><li><p>Alice and Bob both have the option to do nothing, in which case they lose no money and send no money to the other agent.</p></li><li><p>Alice also has the opportunity to lose some money <em>l<sub>A</sub></em> to make Bob gain some money <em>g<sub>B</sub></em>.</p></li><li><p>Bob has the opportunity to lose some money <em>kl<sub>B</sub></em> to make Alice gain some money <em>kg<sub>A</sub></em>. Bob is given fixed values for <em>l<sub>B</sub></em> and <em>g<sub>A</sub></em>, but can choose any positive value for <em>k</em>.</p></li><li><p>For a joint policy (&#8220;deal&#8221;) that both agents perceive to be fair and mutually beneficial, Alice thinks that following the deal increases the probability that Bob does the same (rather than doing nothing) by <em>c<sub>AB</sub></em>; and Bob thinks that following the deal increases the probability that Alice follows the deal (rather than doing nothing) by <em>c<sub>BA</sub></em>.</p><ul><li><p>(You can read this notation as an abbreviation for &#8220;<strong>c</strong>orrelational influence that <strong>A</strong>lice believes she has over <strong>B</strong>ob&#8221; and vice versa.)</p></li></ul></li></ul><p>If so, a joint policy that recommends that both agents send money, with Bob using a particular value <em>k</em>, will be perceived to be mutually beneficial iff:</p><ul><li><p>Alice&#8217;s expected gain <em>c<sub>AB</sub></em> &#183; <em>kg<sub>A</sub></em> &#8722; <em>l<sub>A</sub></em> &gt; 0.</p></li><li><p>Bob&#8217;s expected gain <em>c<sub>BA</sub></em> &#183; <em>g<sub>B</sub></em> &#8722; <em>kl<sub>B</sub></em> &gt; 0.</p></li></ul><p>Such a <em>k</em> exists iff</p><p><em>c<sub>AB</sub></em> &#183; <em>c<sub>BA</sub></em> &#183; (<em>g<sub>A</sub></em>/<em>l<sub>B</sub></em>) &#183; (<em>g<sub>B</sub></em>/<em>l<sub>A</sub></em>) &gt; 1.</p><p>One interpretation of this is: If you define <em>g<sub>B</sub></em>/<em>l<sub>A</sub></em> as Alice&#8217;s leverage at helping Bob, and vice versa, then we have that the product of both agents&#8217; perceived influence over each other and leverage at helping each other must exceed 1.</p>
<h3>Issues with the algorithm</h3><p>Here&#8217;s a list of some potential issues with the algorithm described <a href="https://docs.google.com/document/d/1CcrhQo60rpbGe0cNoXyVoYgbpcLhxFYK3BCn93x7gyQ/edit#heading=h.cx1z30yq6hal">at the top</a>:</p><ul><li><p>It&#8217;s nice to argue for this procedure under the assumption of common knowledge about each others&#8217; beliefs and uncertainties. But does everything still work when everyone is making wild guesses about each others&#8217; beliefs, each others&#8217; beliefs about others&#8217; beliefs, etc&#8230;?</p></li><li><p>Is it fine to treat good-faith approximations of this as evidence that other people use much more advanced versions, and vice versa?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> What sort of approximations are ok? What if more sophisticated agents will change the algorithm quite a lot, on reflection?</p></li><li><p>People don&#8217;t only have the options to &#8220;naively optimize for their own values&#8221; vs. &#8220;follow this particular cooperation protocol&#8221;. They could also look for other cooperation protocols that might benefit them even more. A better version of the algorithm would have to consider such options. (In this post, I ignore this and assume that people are only choosing between naively maximizing their own utility vs. following this cooperation protocol.)</p></li></ul><p>Here&#8217;s a concrete version of that last point. If cooperation between actor <em>i</em> and actor <em>j</em> would harm actor <em>k</em>, then <em>k</em> might prefer for no one to cooperate. This would ruin <em>k</em>&#8217;s motivation to cooperate with others, unless perhaps we take into account a richer set of counterfactuals and correlations. Here&#8217;s a game to illustrate that:</p><ul><li><p><em>i</em> can press a button that takes $1 from <em>i</em> and <em>k</em> and gives $3 to <em>j</em>.</p></li><li><p><em>j</em> can press a button that takes $1 from <em>j</em> and <em>k</em> and gives $3 to <em>i</em>.</p></li><li><p><em>k</em> can press a button that takes $1 from <em>k</em> and gives $1 to <em>i</em> and <em>j</em>.</p></li><li><p>People can transfer money at a $1-to-$1 ratio to whomever they want.</p></li></ul><p>In this game (the payoffs are double-checked in the short sketch below):</p><ul><li><p>Without cooperation: everyone gets $0.</p></li><li><p>If <em>i</em>+<em>j</em> press their buttons: <em>i</em> and <em>j</em> get +$2, <em>k</em> gets -$2.</p></li><li><p>If <em>i</em>+<em>j</em>+<em>k</em> all cooperate with each other:</p><ul><li><p>If everyone presses their buttons, <em>i</em> and <em>j</em> get +$3, <em>k</em> gets -$3.</p></li><li><p>Obviously <em>k</em> needs to be compensated in order to be encouraged to participate in the scheme, but there's no amount of money that <em>i</em> and <em>j</em> can transfer to <em>k</em> such that <em>i</em> and <em>j</em> get &#8805;$2 and <em>k</em> gets &#8805;$0.</p></li><li><p>So for every possible 3-way deal, either <em>i</em> and <em>j</em> would have preferred a situation where they entirely excluded <em>k</em>, or <em>k</em> would prefer a situation where no one cooperated.</p></li></ul></li></ul>
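<p>Here is a minimal Python sketch (not part of the argument, just a check) that enumerates the payoffs listed above:</p><pre><code>def payoffs(i_presses, j_presses, k_presses):
    # Returns the dollar outcomes (i, j, k) for one combination of button presses.
    i = j = k = 0
    if i_presses:   # i's button: i and k each pay $1, j receives $3
        i, j, k = i - 1, j + 3, k - 1
    if j_presses:   # j's button: j and k each pay $1, i receives $3
        i, j, k = i + 3, j - 1, k - 1
    if k_presses:   # k's button: k pays $1, i and j each receive $1
        i, j, k = i + 1, j + 1, k - 1
    return i, j, k

print(payoffs(False, False, False))  # (0, 0, 0): no cooperation
print(payoffs(True, True, False))    # (2, 2, -2): i and j cooperate, excluding k
print(payoffs(True, True, True))     # (3, 3, -3): everyone presses</code></pre><p>(The total surplus when everyone presses is $3, which is why there is no way for <em>i</em> and <em>j</em> to both keep at least +$2 while bringing <em>k</em> back up to $0.)</p>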
<p>So what will happen?</p><ul><li><p>If you follow my algorithm <a href="https://lukasfinnveden.substack.com/i/136239309/how-to-think-about-analogous-actions-in-asymmetric-situations">above</a> to the letter, you&#8217;d expect to get an outcome where <em>i</em>, <em>j</em>, and <em>k</em> all get more than $0.</p></li><li><p>But <em>i</em> (and <em>j</em>) can compellingly argue "if I follow an algorithm where I cooperate with <em>j</em> but screw over <em>k</em>, <em>j</em> will probably do the same, and that's all I want". Since this is strictly better from <em>i</em>&#8217;s (and <em>j</em>&#8217;s) perspective than following the algorithm above, I think this outcome is more likely.</p><ul><li><p>(Unless the problem is situated in a richer landscape where it's important to treat people fairly and <em>k</em> getting &lt;$0 is unfair here, or whatever.)</p></li></ul></li><li><p>That doesn&#8217;t necessarily mean that it&#8217;s impossible to get maximum gains-from-trade, though. But it would have to go via <em>k</em> arguing "well <em>i</em> and <em>j</em> will cooperate with each other regardless of what I do. But if I help them, that's evidence that they'll transfer money to me, so at least I'll end up at -$1.5 instead of -$2". (Or something like that.)</p></li><li><p>But I don&#8217;t know how to express that in an abstract algorithm, nor whether it&#8217;d actually be reasonable for <em>k</em> to have that belief. In particular, an abstract algorithm would need to bake in a different counterfactual than "if I don't cooperate at all, others probably won't cooperate at all" because <em>k</em> would love for others to not cooperate.</p></li></ul><p>This is related to the discussion of &#8220;coalitional stability&#8221; in <a href="https://arxiv.org/pdf/2307.04879.pdf">this paper</a>.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The distinction between &#8220;<strong>the algorithm</strong>&#8221; and &#8220;<em>the policy</em>&#8221; is subtle but important, so I have <strong>bolded</strong> the former and <em>italicized</em> the latter, to emphasize the difference.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>With &#8220;naively optimize for my own values&#8221;, I mean optimizing for my own values without taking into account acausal reasons to cooperate with other values. One potential issue with this algorithm is that this might not be the relevant counterfactual. (I&#8217;ll return to this later.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The more acausal influence I perceive myself as having on other actors, the more ok I am with a policy that recommends me to optimize little for my values but recommends others to optimize more for my values. Or in large worlds, where ECL applies: The more acausal influence I perceive myself as having on <em>actors with different values than me</em>, the more ok I am with policies where <em>people with my values optimize little for my values</em>, and others are recommended to optimize more for my values.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Note that my perceived acausal influence on others can be different from their perceived acausal influence on me, because we can have different information. See <a href="https://docs.google.com/document/d/17M_6_uYdlIpzs4WqYnQfgYq-yD9scwTSz81M-wExvMg/edit#heading=h.qkozxtm27jq9">Asymmetric correlations are possible</a> for an example of this.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>If the acausal influences are all 0, then the joint policy will just recommend that everyone optimizes for their own values.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>We could relax this assumption by saying that <em>c<sub>AA</sub></em> is Alice&#8217;s perceived average influence on <em>all</em> clippers, including herself (and likewise for <em>c<sub>BB</sub></em>). Since Alice&#8217;s influence on herself is 1, this would mean that <em>c<sub>AA</sub></em> = (0.2 &#183; (<em>n<sub>A</sub></em> &#8722; 1) + 1)/<em>n<sub>A</sub></em>. 
With such definitions for <em>c<sub>AA</sub></em> and <em>c<sub>BB</sub></em>, the formulas would be correct also in small worlds.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Or have a substantial probability of being in Alice&#8217;s and Bob&#8217;s situations, with that uncertainty perhaps explaining why the correlations are less than 1.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>And if the answer is no: do other, more humble versions of ECL still work?</p></div></div>]]></content:encoded></item><item><title><![CDATA[ECL with AI]]></title><description><![CDATA[As alluded to in previous posts, ECL might give us reason to behave cooperatively towards the values of distant AIs (even if those AIs&#8217; values weren&#8217;t chosen by any evolved species).]]></description><link>https://lukasfinnveden.substack.com/p/ecl-with-ai</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/ecl-with-ai</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 08:16:07 GMT</pubDate><content:encoded><![CDATA[<p>As alluded to in previous posts, ECL might give us reason to behave cooperatively towards the values of distant AIs (even if those AIs&#8217; values weren&#8217;t chosen by any evolved species). This is a very confusing topic. It&#8217;s very unclear whether any significant such effect exists, and even if it did, it&#8217;s not clear what it would imply.</p><p>Nevertheless, I&#8217;ll try to say a bit about the topic, here.</p><p>I think the questions here roughly decompose into three different topics. I discuss each of these in separate posts &#8212; each of them summarized in this post.</p><ol><li><p>We and the AIs are in very different situations, with very different sets of options in front of us. How does ECL work in circumstances like this? How good does an opportunity to benefit the AIs need to be before we should take it?</p><ol><li><p><a href="https://lukasfinnveden.substack.com/i/136239064/how-does-ecl-work-in-asymmetric-situations">Summary below</a>; <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">main post</a>.</p></li></ol></li></ol><ol start="2"><li><p>Do we have any sufficiently good opportunities to benefit the values of distant AI systems ? And do they have any good opportunities to benefit us?</p><ol><li><p><a href="https://lukasfinnveden.substack.com/i/136239064/what-could-we-offer-distant-ais">Summary below</a>; <a href="https://lukasfinnveden.substack.com/p/possible-ecl-deals-with-distant-ais">main post</a>.</p></li></ol></li><li><p>Is it at all plausible that our decisions give us any evidence about what the AIs do? (Or that we have any other kind of acausal influence on the AIs, if you prefer an account that&#8217;s different from EDT.)</p><ol><li><p><a href="https://lukasfinnveden.substack.com/i/136239064/can-we-influence-ais-choices">Summary below</a>; <a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">main post</a>.</p></li></ol></li></ol><p>Here&#8217;s a brief summary of my current impressions.</p><h3>1. 
How does ECL work in asymmetric situations?</h3><p>Even if situations are superficially very asymmetric, they will still have some abstract structure in common. In particular, if I decide to search for actions that are beneficial for actors whose choices correlate with my choices, then that&#8217;s arguably analogous to them deciding to search for actions that are generally beneficial for actors whose choices correlate with their choices. (Including me.)</p><p>If I have some pre-existing belief about who I am correlated with, then I can use this principle to calculate how much weight I should put on other people&#8217;s preferences. In the <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">linked post</a>, I derive a formula for this that says that I should care more about their preferences insofar as:</p><ul><li><p>I have significant acausal influence on them, compared to the acausal influence I have on people who share my values.</p></li><li><p>They have significant acausal influence on me, compared to the acausal influence they have on people who share their values.</p></li></ul><p>So those are the quantities to be looking at.</p><p>Why does it matter that they have acausal influence on me, and not just that I have acausal influence on them?</p><p>I&#8217;m here using &#8220;acausal influence over an actor&#8221; as a shorthand for &#8220;If I take action <em>A</em>, it is more likely that they will take an action analogous to <em>A</em>&#8221;. But that means that if <em>I</em> &#8220;take actions that benefit people who I have acausal influence over&#8221;, then that means that <em>they</em> will &#8220;take actions that benefit people who <em>they</em> have acausal influence over&#8221;. If they don&#8217;t perceive themselves as having acausal influence over me, I don&#8217;t gain anything from this. So in order for both of us to gain evidence that we will benefit each other, we need to both perceive ourselves as having acausal influence over the other.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>For more, see the next post on <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a>.</p><h3>2. What could we offer distant AIs?</h3><p>Using the formula that I derived in the above post, we can then get a number for how much we should value benefits to the values of distant AIs, compared to how much we should value benefits to our own values. (In particular, this is precisely proportional to the two quantities I mentioned in the above summary.) This number is necessarily smaller than the value we should place on benefitting our own values &#8212;&nbsp;so the question is whether we have any opportunities to help the values of distant AIs more effectively than our own values.&nbsp;</p><p>More specifically, the question is whether we have good opportunities to help the values of distant <em>ECL-ish</em> AIs. Here, I&#8217;m using &#8220;ECL-ish AIs&#8221; as a short-hand for AIs that are sympathetic to ECL and are in a position where ECL gives them reason to help us.</p><p>Do we have any such opportunities? The short answer is that I don&#8217;t know. 
But here are two candidates:</p><ul><li><p>We could increase the likelihood that &#8212; <em>if</em> AI ends up misaligned &#8212; that it shares values with distant, ECL-ish AIs.</p></li><li><p>We could increase the likelihood that &#8212; <em>if</em> AI ends up misaligned with our values <em>and</em> it shares values with distant, ECL-ish AIs &#8212; that it ends up reasoning well about decision theory. (By the lights of distant, ECL-ish AIs.)</p></li></ul><p>How could we increase the likelihood that AI broadly shares values with at least some distant, ECL-ish AIs? Our main clues here are:</p><ul><li><p>We can empirically study (and speculate about) what values AI systems tend to adopt.</p></li><li><p>ECL-ish AIs have universe-wide values, so AIs with universe-wide values are more likely to share values with ECL-ish AIs.</p></li><li><p>We want to avoid values that any significant number of ECL-ish AIs are <em>opposed</em> to. So we want to avoid values where both the positive and negative of that value seem similarly-likely, and promote values that seem likely to be uncontroversially good.</p></li><li><p>Confusingly: If we both make AI ECL-ish and give it some particular values, then that&#8217;s evidence that other pre-AGI civilizations will do the same. This will increase the likelihood that there are some ECL-ish AIs with the same values out there. So this might give us reason to make sure that AIs are ECL-ish.</p><ul><li><p>However, this introduces additional complications, since now our cooperation partners&#8217; existence might be (partly) dependent on our cooperation. I don&#8217;t know exactly how this works.</p></li></ul></li></ul><p>How could we increase the likelihood that AI ends up reasoning well about decision theory, by the lights of distant, ECL-ish AIs? The one thing we know for sure is that such AIs will themselves be ECL-ish, so probably they&#8217;d want other AIs with their values to have the preconditions for that &#8212; including eventually adopting some EDT-ish (or maybe FDT-ish) decision theories, and potentially being updateless, to some degree. (I discuss why AIs may need to be updateless to cooperate with us <a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">here</a>.)</p><p>It&#8217;s important to flag that ~all the interventions that are suggested by this have plausible backfire risk, such that I feel pretty clueless about whether they&#8217;d be net positive or net negative:</p><ul><li><p>If AI has large-scale, &#8220;universe-wide&#8221; values, that seems more likely to generate conflict with other actors that the AI shares our physical universe with.</p><ul><li><p>This includes humans: Having AIs with impartial, large-scale preferences seems significantly more likely to lead to AI takeover than AIs with more modest values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li><li><p>Because of considerations like this: If I was forced to make a choice at the moment, I would currently prefer early AGI systems to have highly local and modest goals (e.g. indexical maximization of their own reward) rather than universe-wide values.</p></li></ul></li><li><p>AI that act according to acausal decision theories seem harder to control from an alignment perspective. 
For example, they seem more likely to coordinate with each other in ways that humans can&#8217;t easily detect.</p></li><li><p>If AI starts thinking about acausal decision theories at an earlier time, it may be more likely that they make foolish decisions due to <a href="https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem">commitment races</a>.</p></li></ul><p>For more on this topic, see <a href="https://lukasfinnveden.substack.com/p/possible-ecl-deals-with-distant-ais">this post</a>.</p><h3>3. Are our choices analogous to AIs&#8217; choices?</h3><p>As mentioned in the above two summaries, ECL only gives us reason to benefit AIs if we can acausally influence AI decisions, and vice versa.</p><p>Indeed, the value we should assign to benefitting AIs is <em>proportional</em> to the influence we have on AI decisions, compared to the influence we have on decisions made by distant evolved species who share our values. (And the corresponding numbers from the AI&#8217;s perspective.)</p><p>From an EDT perspective, the question &#8220;does my decisions acausally influence AIs&#8217; [or evolved aliens&#8217;] decisions?&#8221; reduces to the question &#8220;do I see my decisions as evidence for what AI [or evolved aliens] will do?&#8221;.</p><p>When looking at these questions, it intuitively seems like my influence on evolved aliens should be much greater than my influence on distant AIs. If I query my brain for intuitive predictions of distant actors, it sure seems like my own actions have more impact on my prediction of distant evolved species (in pre-AGI civilizations) than on my predictions of what AIs will do.</p><p>I think that intuition is worth paying attention to. But I think it&#8217;s less important than it naively seems. Here are some very brief arguments and counter-arguments:</p><ul><li><p>&#8220;AI faces very different options than us. If we choose to build ECL-ish AI, it&#8217;s not even clear what that would be analogous to, on the AI&#8217;s side.&#8221;</p><ul><li><p>This seems like it&#8217;s much reduced by retreating to more abstract questions, like &#8220;Should I adopt a policy of impartially optimizing for many different values?&#8221;</p></li><li><p>There&#8217;s some discussion of this in <a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">the post about asymmetric ECL</a>.</p></li></ul></li><li><p>&#8220;Sufficiently smart AI can choose on the basis of what&#8217;s <em>actually rational</em>. Our merely-human decisions mainly provide evidence about what sort of quirks and biases are confounding our merely-human brains at the moment.&#8221;</p><ul><li><p>I think this argument works if you&#8217;re sufficiently pessimistic about correctly reasoning about decision-theory.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p></li><li><p>But personally, I do expect significant correlation between what (at least some) humans decide and what&#8217;s rational in some more abstract sense.</p></li></ul></li><li><p>&#8220;Future AGI systems will have directly observed the behavior of evolved species, and/or deeply understood the nature of our cognition, and/or deeply understood the nature of all decision theory that we could possibly understand. Thus, AGI will see our behavior as predictable. It won&#8217;t see its own behavior as evidence for what we do. 
This is an important way in which AI&#8217;s reasoning will differ from ours.&#8221;</p><ul><li><p>I think this argument is fairly strong.</p></li><li><p>My main counterargument is that, if AI <em>starts out</em> in a position where this isn&#8217;t true, (i.e., a situation where it <em>does</em> see its decisions as evidence for our decisions) then it would prefer to <em>not</em> reach the omniscient state described above.</p></li><li><p>So if it can go &#8220;updateless&#8221; in the right way, before learning too much, the above argument does not apply.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> (Or if the AI later decides to go retroactively updateless.)</p></li><li><p>(For some more discussion of when EDT agents seek out information, see <a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">When does EDT seek evidence about correlations?</a>)</p></li></ul></li></ul><p>I think there&#8217;s a lot to say on this, and I don&#8217;t understand it very well. For more discussion, see <a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">this post</a>.</p><h3>More content</h3><p>Once again, here are the three posts with more content.</p><ol><li><p><a href="https://lukasfinnveden.substack.com/p/asymmetric-ecl">Asymmetric ECL</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/p/possible-ecl-deals-with-distant-ais">Can we benefit the values of distant AIs?</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">Are our choices analogous to AI choices?</a></p></li></ol><h3>(Tentative) Conclusion</h3><p>I think it&#8217;s pretty up-in-the-air whether the right answer here is:</p><ol><li><p>&#8220;Even if this ECL stuff works out, you don&#8217;t correlate enough with the misaligned AIs (relative to humans) to move your decision.&#8221;</p></li><li><p>&#8220;You correlate different amounts with different actors in a way that&#8217;s really important to keep track of, but we do correlate non-negligibly with some AIs, such that we should care about their influence&#8221;</p><ul><li><p>E.g. because some version of the framework above works, and <em>c</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> is &gt;5%.</p></li></ul></li><li><p>&#8220;You should basically impartially optimize for the values held by anyone who follows a good decision-theory&#8221;</p><ul><li><p>E.g. because some version of the framework above works, and <em>c</em> is systematically approximately 1.</p></li></ul></li></ol><p>Note that there are two different versions of the two last bullet points:</p><ul><li><p>The weaker versions are: If you worked out the decision theory correctly, it&#8217;d be rational for that wise version of you to care about their decision-theoretic influence on AIs.</p></li><li><p>The stronger version of the statement adds that our current ignorance doesn&#8217;t ruin the relevant types of correlation and acausal influence we seek. 
I.e., if you worked out the decision theory correctly, you&#8217;d conclude that it <em>would&#8217;ve</em> been correct for even a more ignorant version of you to care about the AIs&#8217; influence.</p></li></ul><p>It seems like questions of ECL are sufficiently tied up with tricky questions in decision theory that we&#8217;re unlikely to become confident in answer 2 or 3 before the singularity. So we mostly care about the stronger versions.</p><p>I currently think that each of the answers 1, 2, and 3 seem &#8805;10% likely &#8212;&nbsp;though answer 1 seems like the most probable one. (Assuming that we restrict ourselves to the strong versions of answer 2 and 3.)</p><p>Granting that, there&#8217;s additional uncertainty about whether any particular intervention would in fact benefit the values of distant ECL-ish AIs. Where the discussion in <a href="https://lukasfinnveden.substack.com/i/136240168/how-could-we-benefit-distant-ais">"How could we benefit distant AIs?"</a> don&#8217;t exactly leave me confident about what&#8217;s good.</p><p>So: This is all so-far very speculative. But it&#8217;s at least plausible to me that some interventions in this space could recover some non-negligible fraction of the value of aligned AI,&nbsp;even if we fail at more ambitious alignment. Further research could be valuable.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>If you have more than two value-systems, you could also have more complex structures of who-benefits-who and who perceives themselves as having acausal influence over who. See <a href="https://docs.google.com/document/u/0/d/17M_6_uYdlIpzs4WqYnQfgYq-yD9scwTSz81M-wExvMg/edit">When does EDT seek evidence about correlations?</a> for more.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This is for a few different reasons: Such values seem easier to study (less likely to actively try to mess up experiments); more likely to admit misaligned goals in exchange for AI amnesty; and less likely to pursue world takeover if they escape their bounds.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Indeed, if you&#8217;re sufficiently pessimistic, I think you should abandon this whole project at an earlier stage, since it relies on having some not-entirely-wrong ideas about decision theory.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Here&#8217;s a different framing of that same point: "OK, sure, superintelligences with dyson swarms are gonna feel like their actions are no evidence at all about mine. But I'm not talking about those AIs. I'm talking about their ancestors: much dumber AGIs still trapped in computers built by evolved creatures. They might think that what they do is some evidence for what I do, and also their decisions are super important because e.g. they can design their successors to uphold various commitments." 
(H/t Daniel Kokotajlo.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The amount that we should care about benefits to ECL-ish AI&#8217;s values compared to our own, defined in <a href="https://docs.google.com/document/u/0/d/1DVhT1C0zIyv2oDks0bZUTtpFlA66XkF2oy9SnjFxz_U/edit">Possible ECL deals with AIs</a></p></div></div>]]></content:encoded></item><item><title><![CDATA[How ECL changes the value of interventions that broadly benefit distant civilizations ]]></title><description><![CDATA[Previously, I gave a broad overview of potentially decision-relevant implications of ECL.]]></description><link>https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 08:04:58 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://lukasfinnveden.substack.com/p/implications-of-ecl">Previously</a>, I gave a broad overview of potentially decision-relevant implications of ECL. </p><p>Here, I want to zoom in on ECL&#8217;s relevance for the following type of intervention: <em>Making misaligned AI behave in a better way towards distant civilizations</em>. (Either alien civilizations that it encounters within the causally affectable universe, or civilizations that it never encounters physically, but can infer the existence of.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>)</p><p>Some candidate interventions in this class are:</p><ul><li><p>Roughly shaping AI values to <em>avoid</em> giving the AI values that would make interactions with others go worse (e.g. spitefulness or value-based unwillingness to compromise) or to increase the likelihood that such interactions go especially well (e.g. Bostrom&#8217;s idea of <a href="https://nickbostrom.com/papers/porosity.pdf">value porosity</a>.)</p></li><li><p>Preventing AI from making <em>early</em> bad decisions. (Motivated by e.g. <a href="https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem">commitment races</a>. The focus on early decisions is because it&#8217;s easier for us to affect decisions that are made closer in time to our interventions.)</p></li></ul><p>Here&#8217;s one way in which ECL is relevant for this: An additional candidate intervention to put on the list is &#8220;make misaligned AI sympathetic to ECL&#8221;.</p><p>But I won&#8217;t say any more about that in this post. Instead, my focus is:</p><blockquote><p><strong>How much does ECL raise the importance of this whole class of interventions, via making us care more about distant civilizations?</strong></p></blockquote><p>Note that I <em>won&#8217;t</em> talk about the &#8220;baseline importance&#8221; of such interventions, before taking ECL into account.</p><p>I&#8217;ll talk about:</p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237796/how-much-more-do-we-care-about-this-if-we-do-ecl-with-distant-evolved-species">How much more do we care about this if we do ECL with distant evolved aliens?</a> (Compared to if we don&#8217;t do ECL at all.)</p><ul><li><p>Tentative answer: A bit more. 
Maybe something like 1.5-10x more.</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237796/how-much-more-do-we-care-about-this-goal-if-we-also-do-ecl-with-distant-ai">How much more do we care about this if we do ECL with distant misaligned AIs?</a> (Compared to if we only do ECL with distant evolved aliens.)</p><ul><li><p>Tentative answer: Not much more.</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237796/what-about-completely-different-actors">What about distant actors that don&#8217;t go into either of those categories?</a></p><ul><li><p>Tentative answer: Seems hard to tell!</p></li></ul></li></ul><p>This post will be fairly math-heavy. Feel free to skip it if you&#8217;re not up for that.</p><h3>The setting I&#8217;m looking at</h3><p>I will be looking at a very abstract problem that captures some of the key dynamics. In particular, in order to study:</p><blockquote><p><strong>How much does ECL raise the importance of this whole class of interventions, via making us care more about distant civilizations?</strong></p></blockquote><p>I will be looking at:</p><blockquote><p><strong>How much does ECL raise the importance of interventions that broadly benefit a variety of beneficiaries with different values, compared to interventions that mainly benefit our own values?</strong></p></blockquote><p>Using the following model:</p><ul><li><p>There&#8217;s some large number of distant civilizations.</p><ul><li><p>I will be talking about <em>N</em> civilizations indexed between 1 and <em>N</em>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li></ul></li><li><p>Each of those civilizations has some particular values. When I talk about a civilization being benefitted or harmed, I&#8217;m actually talking about <em>their values</em> being benefitted or harmed.</p><ul><li><p>In particular, I want to assume that the degree to which we care about effects on distant civilizations is proportional to how much their values overlap with our values. (How much &#8220;the things they care about&#8221; overlap with &#8220;the things we care about&#8221;.)</p></li></ul></li><li><p>As a stand-in for interventions that &#8220;benefit a variety of beneficiaries with different values&#8221;, I&#8217;ll talk about interventions that benefit a <em>random</em>, distant civilization.</p><ul><li><p>This captures how such interventions look more promising if a large fraction of the potentially-benefitted civilizations have similar values to us. And how the interventions look less promising if we&#8217;re indifferent to benefits and harms to most civilizations&#8217; values.</p></li><li><p>It also puts to the side questions like &#8220;how many civilizations like these are there?&#8221; and &#8220;exactly how do these benefits/harms work?&#8221; which I prefer to tackle separately. (Not in this post.)</p></li></ul></li><li><p>I will operate in an expected-utility framework, where we e.g. care twice as much about things that happen with twice the probability, and where we can talk about benefitting someone by <em>x</em> units (which we would care about <em>x</em> times as much as 1 unit).</p></li></ul><p>Here is some notation. I will denote the overlap between our (universe-wide) values and the values of civilization <em>i</em> as <em>v</em>(<em>i</em>). 
The number <em>v</em>(<em>i</em>) will have the following properties:</p><ul><li><p>If <em>v</em>(<em>i</em>) is equal to 1, that means that we care equally much about benefits to our own universe-wide values and civilization <em>i</em>&#8217;s universe-wide values.</p></li><li><p>The degree to which we care about benefiting the values of civilization <em>i</em> is proportional to <em>v</em>(<em>i</em>). This means, for example, that:</p><ul><li><p>If <em>v</em>(<em>i</em>) is half as large as <em>v</em>(<em>j</em>), then we are indifferent between benefitting <em>v</em>(<em>i</em>) for sure and benefitting <em>v</em>(<em>j</em>) with 50% probability.</p></li><li><p>If <em>v</em>(<em>i</em>) is equally large as <em>v</em>(<em>j</em>) + <em>v</em>(<em>k</em>), then we are indifferent between benefitting <em>v</em>(<em>i</em>) or benefitting both <em>v</em>(<em>j</em>) and <em>v</em>(<em>k</em>).</p></li></ul></li><li><p>For simplicity, I will assume that all the actors inside a civilization share that civilization&#8217;s values &#8212;&nbsp;meaning that <em>v</em>(<em>i</em>) is at most 1.&nbsp;&nbsp;</p><ul><li><p>That said, I think the same calculations would probably still work if you allowed for <em>v</em>(<em>i</em>) to be greater than 1 &#8212;&nbsp;i.e., allowed actors to prefer some distant civilization&#8217;s values over their own civilization&#8217;s values.</p></li></ul></li></ul><p>Roughly: I will be looking at when the conditions under which we&#8217;d prefer to benefit a random distant civilization by 1 unit vs. benefit our own values by <em>x</em> units, for some <em>x</em>&lt;1. The following sections will be more precise about what that means.</p><h3>How much more do we care about this if we do ECL with distant evolved species?</h3><h4>Without ECL</h4><p>Let&#8217;s start with establishing a baseline where we ignore ECL. When I talk about our values and choices in this sub-section, I&#8217;m not yet taking ECL into account.</p><p>Let&#8217;s say that we&#8217;re faced with the following two options:</p><ul><li><p>We can benefit our own universe-wide values by <em>x</em> units.</p></li><li><p>We can benefit a random civilization&#8217;s values by 1 unit.</p></li></ul><p>Here&#8217;s a question you might ask: What does it mean to benefit different values by &#8220;1 unit&#8221;? Does that require intertheoretic utility comparisons?</p><p>In this section, we don&#8217;t yet have to worry about that! By assumption, we only care about benefitting distant civilizations insofar as our values overlap with theirs.</p><p>This means that we can approach the problem as follows. A random civilization <em>i</em>&#8217;s values have <em>v</em>(<em>i</em>) overlap with our values. So the total value of &#8220;benefit a random civilization&#8217;s values by 1 unit&#8221; is proportional to the average <em>v</em>(<em>i</em>) among all civilizations. 
So the total value of this option is: </p><p>(1/<em>N</em>) * &#8721;<sub><em>i</em></sub> <em>v</em>(<em>i</em>)</p><p>This number (the average value-overlap with a random civilization) will come in handy later, so let&#8217;s name it</p><p><em>f</em> = (1/<em>N</em>) * &#8721;<sub><em>i</em></sub> <em>v</em>(<em>i</em>)</p><p>The first option was to benefit our own values by <em>x</em> units, which is worth <em>x</em>.</p><p>Thus, we should prefer to benefit ourselves if and only if <em>x</em> &gt; <em>f</em>. 
(Where the value of <em>f</em> will be somewhere between 0 and 1.</p><h4>With ECL</h4><p>Now let&#8217;s see how this changes when we introduce ECL.</p><p>As above, let&#8217;s say that we can choose between either benefitting a random civilization&#8217;s values by 1 unit or benefitting our own values by x units. Here&#8217;s the core new dynamic: Our decision to benefit other civilizations is evidence that <em>other</em> civilizations, in an analogous situation, will also choose to benefit other civilizations. Potentially including ours.</p><p>For each civilization that is in an analogous situation, we want some metric for how much acausal influence we have on their decision. I&#8217;m going to denote our acausal influence over civilization <em>i</em> as <em>c</em>(<em>i</em>). The meaning of this is as follows: If we take a particular action, then there is a probability <em>c</em>(<em>i</em>) that actors in civilization <em>i</em> take their analogous action. (Otherwise, their behavior is independent from our behavior.) If a civilization <em>i</em> has no analogous action, we will say that <em>c</em>(<em>i</em>) = 0.</p><p>Now, we still need to make some assumptions about these distant civilizations&#8217; options. As a simplifying assumption, I will assume:</p><ul><li><p>That the decision that is analogous to our &#8220;benefit our own values by <em>x</em> units&#8221; is for them to benefit their own values by <em>x</em> units.</p><ul><li><p>What does &#8220;benefit their own values by <em>x</em> units&#8221; mean here?&nbsp; The only relevant implication for my calculations is that we value civilization <em>i</em>&#8217;s choice to benefit themselves as worth <em>x</em>*<em>v</em>(<em>i</em>).</p></li></ul></li><li><p>That the decision that is analogous to our &#8220;benefit a random civilization&#8217;s values by 1 unit&#8221; is for them to benefit a random civilization&#8217;s values by 1 unit.</p><ul><li><p>Just as above: The only relevant implication for my calculation is that we value civilization <em>i</em>&#8217;s choice to benefit a random civilization as worth <em>f</em>. (Just as we valued our own decision to do so at <em>f</em>, before we took ECL into account.)</p></li></ul></li></ul><p>How substantive of an assumption is this?</p><ul><li><p>In the above bullet points, I emphasized that the main calculation-relevant assumption that I&#8217;m making is that we value civilization <em>i</em>&#8217;s first option as <em>x</em>*<em>v</em>(<em>i</em>), and second option as <em>f</em>. But in order for it to be plausible that these are <em>actually</em> analogous actions (justifying <em>c</em>(<em>i</em>)&gt;0), we need these other civilizations to have a similar relationship to these options as we do.</p></li><li><p>So for example: Maybe the interventions I&#8217;m thinking about is &#8220;prioritize <em>making AI aligned to our values</em> slightly higher&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> vs. &#8220;prioritize <em>making AIs more cooperative with distant civilizations</em> slightly higher&#8221;. If so &#8220;<em>x</em>&#8221; reflects the relative value that <em>we</em> place on alignment vs. having there be more cooperative AIs in the universe.</p><ul><li><p>Maybe other civilizations face that same decision: If other civilizations are also choosing how highly to prioritize cooperation vs. 
alignment, <em>and their own version of &#8220;</em>x<em>&#8221; is similar</em>, then that seems like an analogous decision.</p></li><li><p>Or maybe other civilizations face quite different options: As long as other actors are choosing between benefitting a broad variety of values vs. just their own values, and are conducting similar calculations, it&#8217;s plausible that their decisions could be sufficiently analogous.</p></li></ul></li><li><p>Ultimately, I feel like this assumption is a bit shaky, but ok for a first-pass analysis. I think it <em>roughly</em> corresponds to an assumption that other civilizations would appreciate our intervention that &#8220;benefits other civilizations broadly&#8221; about as much as we would appreciate others doing it &#8212;&nbsp;and that our opportunity to benefit others isn&#8217;t abnormally high or low.</p></li><li><p>(I&#8217;ll give some further remarks on this below.)</p></li></ul><p>Now, let&#8217;s re-consider our above decision, taking ECL into account. If we had to choose between an intervention that benefitted us by x units or benefitted a random civilization&#8217;s values by 1 unit &#8212;&nbsp;which should we prefer?</p><p>If we benefit our own values by <em>x</em> units, then that suggests that each civilization <em>i</em> has an additional probability <em>c</em>(<em>i</em>) of analogously benefiting themselves by <em>x</em> units. We value this at <em>v</em>(<em>i</em>)<em>x</em>. So the total value of this is:</p><p><em>x</em> + &#8721;<sub><em>i</em></sub> <em>c</em>(<em>i</em>)*<em>v</em>(<em>i</em>)*<em>x</em></p><p><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>If we instead benefit a random civilization by 1 unit, then each civilization <em>i</em> has an additional probability <em>c</em>(<em>i</em>) of benefitting a random civilization. As we calculated in the previous section, the value of benefitting a random civilization by 1 unit is <em>f</em>. 
So the total value of this is</p><p><em>f</em> + &#8721;<sub><em>i</em></sub> <em>c</em>(<em>i</em>)*<em>f</em></p><p>Thus, we should choose to benefit our own civilization over a random civilization iff:</p><p><em>x</em> + &#8721;<sub><em>i</em></sub> <em>c</em>(<em>i</em>)*<em>v</em>(<em>i</em>)*<em>x</em> &gt; <em>f</em> + &#8721;<sub><em>i</em></sub> <em>c</em>(<em>i</em>)*<em>f</em></p>
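<p>(To make this concrete, here is a minimal Python sketch of the two expected values just derived. The numbers for <em>v</em>(<em>i</em>), <em>c</em>(<em>i</em>) and <em>x</em> are made up purely for illustration, not estimates of the real quantities.)</p><pre><code># Illustrative only: toy numbers for the value overlaps v(i) and acausal influences c(i).
v = [0.9, 0.5, 0.2, 0.0, 0.1]   # overlap between our values and civilization i's values
c = [0.3, 0.2, 0.1, 0.0, 0.2]   # probability that civilization i mirrors our choice
x = 0.5                         # how much we could benefit our own values instead

f = sum(v) / len(v)             # value (to us) of benefitting a random civilization by 1 unit

# Option 1: benefit our own values by x units (direct value, plus the acausally mirrored choices).
value_own = x + sum(ci * vi * x for ci, vi in zip(c, v))
# Option 2: benefit a random civilization's values by 1 unit.
value_random = f + sum(ci * f for ci in c)

print(f, value_own, value_random)</code></pre>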
href="https://substackcdn.com/image/fetch/$s_!LyyT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LyyT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 424w, https://substackcdn.com/image/fetch/$s_!LyyT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 848w, https://substackcdn.com/image/fetch/$s_!LyyT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 1272w, https://substackcdn.com/image/fetch/$s_!LyyT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LyyT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png" width="414" height="89" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/887be604-0926-4528-afeb-da1605cd61d2_414x89.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:89,&quot;width&quot;:414,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8045,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LyyT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 424w, https://substackcdn.com/image/fetch/$s_!LyyT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 848w, https://substackcdn.com/image/fetch/$s_!LyyT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 1272w, https://substackcdn.com/image/fetch/$s_!LyyT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F887be604-0926-4528-afeb-da1605cd61d2_414x89.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h4>Answering the original question</h4><p>Remember, the question I wanted to study in this setting was:</p><blockquote><p><strong>How much does ECL raise the importance of interventions that broadly benefit a variety of beneficiaries with different values, compared to interventions that mainly benefit our own values?</strong></p></blockquote><p>In the section <a href="https://lukasfinnveden.substack.com/i/136237796/without-ecl">Without 
ECL</a>, I wrote that we should benefit our own values iff we could benefit them by more than <em>f</em> &#8212;&nbsp;and otherwise choose to benefit random civilizations&#8217; values. This corresponds to valuing benefits to random civilizations at <em>f</em> times as much as benefits to our own values.</p><p>In the section <a href="https://lukasfinnveden.substack.com/i/136237796/with-ecl">With ECL</a>, I wrote that we should benefit our own values iff we could benefit them by more than </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!28ks!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!28ks!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 424w, https://substackcdn.com/image/fetch/$s_!28ks!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 848w, https://substackcdn.com/image/fetch/$s_!28ks!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 1272w, https://substackcdn.com/image/fetch/$s_!28ks!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!28ks!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png" width="140" height="81" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/db5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:81,&quot;width&quot;:140,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3344,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!28ks!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 424w, https://substackcdn.com/image/fetch/$s_!28ks!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 848w, https://substackcdn.com/image/fetch/$s_!28ks!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 1272w, 
https://substackcdn.com/image/fetch/$s_!28ks!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdb5e4623-9f4f-426f-a950-ba44db8c702c_140x81.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>&#8212;&nbsp;and otherwise prioritize benefitting random civilizations. This corresponds to valuing benefits to random civilizations&#8217; values at </p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UrLV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UrLV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 424w, https://substackcdn.com/image/fetch/$s_!UrLV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 848w, https://substackcdn.com/image/fetch/$s_!UrLV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 1272w, https://substackcdn.com/image/fetch/$s_!UrLV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UrLV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png" width="145" height="78" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:78,&quot;width&quot;:145,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3324,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UrLV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 424w, https://substackcdn.com/image/fetch/$s_!UrLV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 848w, https://substackcdn.com/image/fetch/$s_!UrLV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 1272w, 
https://substackcdn.com/image/fetch/$s_!UrLV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f00c282-ffe6-48b4-95b7-c3657c88aaaa_145x78.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>times as much as benefits to our own values. As long as all <em>v</em>(<em>i</em>) is less than 1, this latter value will be larger, meaning that we should be more inclined to benefit others&#8217; values.&nbsp;</p><p>The ratio between these, i.e. the degree to which ECL compels us to increase the valuation of &#8220;benefit a random civilization&#8217;s values&#8221; (compared to benefitting our own values) is:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7jqB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7jqB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png 424w, https://substackcdn.com/image/fetch/$s_!7jqB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png 848w, https://substackcdn.com/image/fetch/$s_!7jqB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png 1272w, https://substackcdn.com/image/fetch/$s_!7jqB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7jqB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png" width="246" height="97" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:97,&quot;width&quot;:246,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6291,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7jqB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png 424w, https://substackcdn.com/image/fetch/$s_!7jqB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6e2a486d-13c4-4397-8451-f50c33513d13_246x97.png 848w, 
<p>In the numerator, we have all the civilizations that we can acausally influence. In the denominator, we have the civilizations that we can acausally influence weighted by how much our values overlap.</p><p>That&#8217;s a fine way to think about things. But we can also rewrite it as:</p><p>1 + (&#8721;<sub><em>i</em></sub> <em>c</em>(<em>i</em>)*(1 - <em>v</em>(<em>i</em>))) / (1 + &#8721;<sub><em>i</em></sub> <em>c</em>(<em>i</em>)*<em>v</em>(<em>i</em>))</p><p>Where we can see that ECL raises the importance of these interventions in proportion to 1 plus the ratio of the acausal influence we have on civilizations that <em>we</em> don&#8217;t share values with vs. civilizations that we <em>do</em> share values with.</p>
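<p>(As a sanity check on that rewrite, here is a minimal Python sketch, again with made-up toy numbers for <em>v</em>(<em>i</em>) and <em>c</em>(<em>i</em>), that computes the multiplier both as the ratio above and in the rewritten form.)</p><pre><code># Illustrative only: toy numbers for the value overlaps v(i) and acausal influences c(i).
v = [0.9, 0.5, 0.2, 0.0, 0.1]
c = [0.3, 0.2, 0.1, 0.0, 0.2]

influence_all    = sum(c)                                      # acausal influence on all civilizations
influence_shared = sum(ci * vi for ci, vi in zip(c, v))        # ...weighted by how much they share our values
influence_other  = sum(ci * (1 - vi) for ci, vi in zip(c, v))  # ...weighted by how much they don't

multiplier         = (1 + influence_all) / (1 + influence_shared)
multiplier_rewrite = 1 + influence_other / (1 + influence_shared)

print(multiplier, multiplier_rewrite)   # the two forms agree</code></pre>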
https://substackcdn.com/image/fetch/$s_!MhCF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc67682c4-8fdc-46ba-bbc9-c2f4b9fee0f7_700x62.png 1272w, https://substackcdn.com/image/fetch/$s_!MhCF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc67682c4-8fdc-46ba-bbc9-c2f4b9fee0f7_700x62.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Where we can see that ECL raises the importance of these interventions in proportion to 1 plus the ratio of the acausal influence we have on civilizations that <em>we</em> don&#8217;t share values with vs. civilizations that we <em>do</em> share values with.</p><h4>What are plausible values for these numbers?</h4><p>This section will contain some pretty wild guesses. I wouldn&#8217;t encourage anyone to take these numbers too seriously, but I figured it could be good to gesture at how you could approach this, and roughly what the order of magnitude might be.</p><p>First: I&#8217;ve been generally talking about benefiting civilizations&#8217; <em>universe-wide</em> values. What about civilizations that have purely local preferences? It&#8217;s not clear how to count them. Fortunately, this doesn&#8217;t matter for the above formula. We probably don&#8217;t have any relevant acausal influence on those civilizations&nbsp; (since the reason that we&#8217;re reasoning about ECL is to benefit our universe-wide preferences, and they have no analogous reason). This means that <em>c</em>(<em>i</em>) is 0, so they don&#8217;t appear in either the numerator nor the denominator of the above formulas.</p><p>Second: What about distant civilizations that are controlled by misaligned AI? In this section, I&#8217;ll assume that we are not correlated with misaligned AI in any relevant way, and thereby ignore them. (Just as the civilizations with local preferences, in the above paragraph.) I&#8217;ll talk more about the misaligned AIs in the next section.</p><p>(Incidentally: This pattern (that our formula isn&#8217;t sensitive to the existence of civilizations that we can&#8217;t acausally influence) makes me feel better about the assumptions in <a href="https://lukasfinnveden.substack.com/i/136237796/with-ecl">With ECL</a>. Even if our decision isn&#8217;t analogous to many other civilizations&#8217; decisions &#8212; that doesn&#8217;t necessarily matter for the bottom-line.)</p><p>With that out of the way, let&#8217;s slightly rewrite the above expression. 
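<p>(As a quick sanity check of the two forms above: here is a minimal numerical sketch, using made-up values for <em>c</em>(<em>i</em>) and <em>v</em>(<em>i</em>), and assuming the ratio is &#8220;the sum of the <em>c</em>(<em>i</em>)&#8221; divided by &#8220;the sum of the <em>c</em>(<em>i</em>) times <em>v</em>(<em>i</em>)&#8221;, as the surrounding text describes.)</p><pre><code># Made-up toy numbers: c(i) = acausal influence on civilization i,
# v(i) = how much we value benefits to their values (all below 1).
c = [1.0, 0.5, 0.2, 0.1]
v = [0.9, 0.3, 0.1, 0.05]

influence          = sum(c)                                # numerator
influence_weighted = sum(ci * vi for ci, vi in zip(c, v))  # denominator

ratio     = influence / influence_weighted
rewritten = 1 + sum(ci * (1 - vi) for ci, vi in zip(c, v)) / influence_weighted

print(ratio, rewritten)  # both are roughly 1.67 for these made-up numbers
</code></pre>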
<p>Remember, just previously, I derived that the degree to which ECL increases the value of &#8220;benefitting a random civilization&#8217;s values&#8221; (compared to just benefitting our own) was equal to:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!sVZd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f73734-7dab-45f4-9726-f42243aaf450_210x78.png" width="210" height="78" alt=""></figure></div><p>Now, let&#8217;s define <em>a</em> as the average acausal influence we have on civilizations <em>weighted by our overlap in values</em> with them:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!rAaI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Febdc7be7-4e63-402d-84a2-42310ea53b6a_374x78.png" width="374" height="78" alt=""></figure></div><p>Using <em>a</em>, we can rewrite the denominator as</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!XkZa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4dc1c3d1-6fa7-4b64-b5f1-71648a5c9592_182x77.png" width="182" height="77" alt=""></figure></div><p>Second, let&#8217;s define <em>b</em> as the reverse of <em>a</em>: the average amount that we correlate with civilizations weighted by our <em>lack</em> of overlap in values with them:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!eBbp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F93be8a0a-d22f-41bd-8f42-50da0f70f198_549x77.png" width="549" height="77" alt=""></figure></div><p>Using <em>b</em>, we can rewrite the numerator as</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!8cAe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe33d7888-1556-4fd4-b47c-b22f85026cbd_274x68.png" width="274" height="68" alt=""></figure></div><p>We can then rewrite the above expression as:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!LGiB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa87d4d5b-158d-45bf-b848-ed6e2d57d6cd_505x80.png" width="505" height="80" alt=""></figure></div>
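<p>(Continuing the made-up toy numbers from the sketch above: under one natural reading of the definitions of <em>a</em> and <em>b</em>, and with <em>f</em> as the average value overlap (as it&#8217;s used below), the rewritten expression gives the same number as the original ratio.)</p><pre><code># a: average c(i), weighted by value overlap v(i)
# b: average c(i), weighted by lack of overlap (1 - v(i))
# f: average value overlap v(i)
# (One reading of the definitions above; same made-up numbers as before.)
c = [1.0, 0.5, 0.2, 0.1]
v = [0.9, 0.3, 0.1, 0.05]
n = len(c)

a = sum(ci * vi for ci, vi in zip(c, v)) / sum(v)
b = sum(ci * (1 - vi) for ci, vi in zip(c, v)) / sum(1 - vi for vi in v)
f = sum(v) / n

original  = sum(c) / sum(ci * vi for ci, vi in zip(c, v))
rewritten = 1 + (b / a) * (1 - f) / f

print(original, rewritten)  # both are roughly 1.67 again
</code></pre>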
<p>(Side-note: Remember how we previously divided by <em>f</em>, in order to get a number for how much <em>more</em> interested we become in these interventions, after taking ECL into account? If we undo that after the above simplifications, we get that you should benefit your own civilization&#8217;s values by <em>x</em> units whenever:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qnqV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F964a14e9-9660-4463-a5af-220a1b75942a_163x62.png" width="163" height="62" alt=""></figure></div><p>I.e., intrinsically caring about others&#8217; values (<em>f</em>) is roughly additive with ECL-reasons to do so (<em>b</em>/<em>a</em>). Just from this, we can conclude that if you intrinsically care about aliens&#8217; values a lot, ECL probably won&#8217;t further increase this. But if you intrinsically care very little, then ECL could multiplicatively increase how much you care by a large amount.)</p><p>This lets us separately estimate (1-<em>f</em>)/<em>f</em> and <em>b</em>/<em>a</em>. Let&#8217;s make up some numbers!</p><p>First: how large is <em>f</em>, i.e., the average value overlap between us and a random other civilization?</p><ul><li><p>Remember that we&#8217;re excluding civilizations with local values, and civilizations headed by misaligned AIs.</p></li><li><p>Among universe-wide values, I&#8217;m uncertain whether I&#8217;ll end up with fairly simple values (e.g. some variant of hedonistic utilitarianism, where pleasure is a fairly simple thing) or some very complex values (that depend on a lot of contingencies of our evolutionary history).</p><ul><li><p>Let&#8217;s say I&#8217;m &#8531; on the former and &#8532; on the latter.</p></li><li><p>Let&#8217;s say that if I end up with some fairly simple values, I think that&#8230; &#8531; of evolved civilizations&#8217; universe-wide values appeal to the same thing.</p></li><li><p>And if I end up with fairly complex values, I end up valuing benefits to distant evolved civilizations at&#8230; 10% of my own.</p></li><li><p>And let&#8217;s say that I&#8217;m happy to just take the expectation over these two possibilities. (Which is by no means obvious when you have moral uncertainty.)</p></li></ul></li><li><p>&#8594; <em>f</em> = &#8531; * &#8531; + &#8532; * 10% ~= 18%.</p></li></ul><p>How large might <em>b</em>/<em>a</em> be? I.e., how much less do we correlate with civilizations that don&#8217;t share our values than with civilizations that do share our values, on average? (Again: Excluding civilizations with local values, and civilizations headed by misaligned AI.)</p><p>Here, I feel pretty clueless. Let&#8217;s say 3x less, meaning that <em>b</em>/<em>a</em> ~= &#8531;.</p>
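<p>(As a sketch of the plug-in arithmetic, using the rewritten expression above, under my reading of it, with these made-up numbers:)</p><pre><code># f from the guesses above; b/a from the "3x less" guess.
f = (1/3) * (1/3) + (2/3) * 0.10   # roughly 0.18
b_over_a = 1/3

multiplier = 1 + b_over_a * (1 - f) / f
print(multiplier)  # roughly 2.5 on these made-up numbers
</code></pre>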
<p>(For some other people&#8217;s discussion of that question, see section 3.1 of <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a> and <a href="https://casparoesterheld.com/2018/03/31/three-wagers-for-multiverse-wide-superrationality/">this blogpost</a>.)</p><p>So that would suggest that ECL increases the value of helping distant civilizations by roughly</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!BFtS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5efdee9c-2d81-43a8-ad64-9aecb0026ce4_397x60.png" width="397" height="60" alt=""></figure></div><p>Obviously those numbers were terrible. If I thought a lot more about this, I suspect that I&#8217;d end up thinking that ECL made these sorts of interventions somewhere around 1.5x-10x as important as they would have been without ECL.</p><h3>How much more do we care about this goal if we also do ECL with distant AI?</h3><p>I don&#8217;t think this changes the calculus much, because:</p><ul><li><p>I think the total number of civilizations controlled by misaligned AIs is not (in expectation) significantly larger than the number of civilizations with values that come from evolved species.</p></li><li><p>But I think the arguments for ECL-cooperation with misaligned AIs are less strong than the corresponding arguments for cooperating with evolved pre-AGI civilizations.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> (I discuss this more in <a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">Are our choices analogous to AI choices?</a>)</p></li><li><p>So ECL with AI merely adds a group that&#8217;s similarly large to the group that we already cared about, but that we have less reason to do ECL-cooperation with. In which case it won&#8217;t more than double the value of these interventions.</p></li></ul><p>I still think that doing ECL with distant, misaligned AIs might be important! I just think it has a negligible impact on how much we care about benefiting distant, <em>random</em> civilizations. For what I think might matter more here, see my next post on <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">ECL with AI</a>.</p><h3>What about completely different actors?</h3><p>What about civilizations that don&#8217;t fit into my picture of either &#8220;evolved species&#8221; or &#8220;misaligned AI&#8221;? The universe could be a strange and diverse place &#8212; why would I limit myself to these two categories that happen to be salient to me right now?</p><p>Good question! This feels tricky, but here are some thoughts:</p><ul><li><p>The distinction feels at least somewhat natural to me. Abstractly: First you have some intelligent creatures that evolve through natural selection, and then they design some other intelligent creatures, which may or may not share their goals. It doesn&#8217;t feel like I&#8217;m appealing to something <em>super</em> contingent.</p></li><li><p>If parts of the multiverse work very differently, but there aren&#8217;t many cases where our part of the multiverse or their part of the multiverse try to benefit or harm each other, I don&#8217;t think that matters much. The situation is very similar to if we were alone.</p></li><li><p>What <em>could</em> matter is if a big majority of the positive effects from intervening on misaligned AI came via the route of positively influencing these strange and inscrutable parts of the universe. 
If that was the case, my calculations here would be missing most of the story, and so wouldn&#8217;t be very informative.</p></li></ul><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Which is enough for e.g. ECL to incentivize them to benefit those distant civilizations.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>To do this properly, we should probably define some measure over an infinite number of possible civilizations. I think the rest of the math would still hold.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Which disproportionately benefits our own values, even though it would also affect other values. If nothing else, it will influence the degree to which Earth-origination civilization is cooperative or not.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>How is the benefit to <em>our own</em> civilization, counted here? Well, we could put it somewhere there in the sum, with c(i)=1 and v(i)=1. But for sufficiently large N, it will be negligible.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This is talking about the current state of arguments. I think it&#8217;s plausible that I would, on reflection, believe that it was similarly reasonable to do ECL with misaligned AI as with evolved pre-AGI civilizations. But aggregating over my uncertainty, I&#8217;m currently more inclined to cooperate with evolved pre-AGI civilizations.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Implications of ECL]]></title><description><![CDATA[&#8220;ECL&#8221; is short for &#8220;evidential cooperation in large worlds&#8221;.]]></description><link>https://lukasfinnveden.substack.com/p/implications-of-ecl</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/implications-of-ecl</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 20 Aug 2023 06:21:07 GMT</pubDate><content:encoded><![CDATA[<p>&#8220;ECL&#8221; is short for &#8220;evidential cooperation in large worlds&#8221;. It&#8217;s an idea that was originally introduced in <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a> (under the name of &#8220;multiverse-wide superrationality&#8221;). This post will explore implications of ECL, but it won&#8217;t explain the idea itself. 
If you haven&#8217;t encountered it before, you can read the paper linked above or <a href="http://effective-altruism.com/ea/1gf/multiversewide_cooperation_in_a_nutshell/">this summary</a> written by Lukas Gloor.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>This post lists all candidates for decision-relevant implications of ECL that I know about and think are plausibly important.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> In this post, I will not describe in much depth why they might be implications of ECL. Instead, I will lean on the principle that ECL recommends that we (and other ECL-sympathetic actors) act to benefit the values of people whose decisions might correlate with our decisions.</p><p>As described in <a href="https://lukasfinnveden.substack.com/i/136237476/what-values-do-you-need-for-this-to-be-relevant">this appendix</a>, this relies on you and others having particular kinds of values. For one, I assume that you care about what happens outside our <a href="https://en.wikipedia.org/wiki/Light_cone">light cone</a>. But more strongly, I&#8217;m looking at values with the following property: If you could have a sufficiently large impact outside our lightcone, then the value of taking different actions would be dominated by the impact that those actions had outside our lightcone. I&#8217;ll refer to this as &#8220;universe-wide values&#8221;. Even if <em>all</em> your values aren&#8217;t universe-wide, I suspect that the implications will still be relevant to you if you have <em>some</em> universe-wide values.</p><p>This is speculative stuff, and I&#8217;m not particularly confident that I will have gotten any particular claim right.</p><h2>Summary (with links to sub-sections)</h2><p>For at least two reasons, future actors will be in a better position to act on ECL than we are. Firstly, they will know a lot more about what other value-systems are out there. Secondly, they will be facing immediate decisions about what to do with the universe, which should be informed by what other civilizations would prefer.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> This suggests that it could be important for us to <a href="https://lukasfinnveden.substack.com/i/136237476/affect-whether-and-how-future-actors-do-ecl">Affect whether (and how) future actors do ECL</a>. This can be decomposed into two sub-points that deserve separate attention: how we might be able to affect <a href="https://lukasfinnveden.substack.com/i/136237476/futures-with-aligned-ai">Futures with aligned AI</a>, and how we might be able to affect <a href="https://lukasfinnveden.substack.com/i/136237476/futures-with-misaligned-ai">Futures with misaligned AI</a>.</p><p>But separately from influencing future actors, ECL also changes our own priorities, today. In particular, ECL suggests that we should care more about other actors&#8217; universe-wide values. When evaluating these implications, we can look separately at three different classes of actors and their values. 
I&#8217;ll separately consider how ECL suggests that we should&#8230;</p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-other-humans-universe-wide-values">Care more about </a><em><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-other-humans-universe-wide-values">other humans&#8217;</a></em><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-other-humans-universe-wide-values"> universe-wide values</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><ul><li><p>I think the most important implication of this is that <a href="https://lukasfinnveden.substack.com/i/136237476/upside-and-downside-focused-longtermists-should-care-more-about-each-others-values">Upside- and downside-focused longtermists should care more about each others&#8217; values</a>.</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-evolved-aliens-universe-wide-values">Care more about </a><em><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-evolved-aliens-universe-wide-values">evolved aliens&#8217;  universe-wide values</a></em>.</p><ul><li><p>I think the most important implication of this is that we plausibly should care more about <a href="https://lukasfinnveden.substack.com/i/136237476/influence-how-ai-benefitsharms-alien-civilizations-values">influencing how AI could benefit/harm alien civilizations</a>.</p></li><li><p>How much more? I try to answer that question in <a href="https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions">the next post</a>. My best guess is that ECL boosts the value of this by 1.5-10x. (This is importantly based on my intuition that we would care a bit about alien values even without ECL.)</p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-misaligned-ais-universe-wide-values">Care more about </a><em><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-misaligned-ais-universe-wide-values">misaligned AIs&#8217;</a></em><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-misaligned-ais-universe-wide-values"> universe-wide values</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><ul><li><p>I don&#8217;t think this significantly <a href="https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-ai-takeover-risk-less-highly">reduces the value of working on alignment</a>.</p></li><li><p>But it suggests that it could be valuable to build AI so that <em>if</em> it ends up misaligned, it has certain other desirable inclinations and values. This topic, of <a href="https://lukasfinnveden.substack.com/i/136237476/positively-influence-misaligned-ai">positively influencing misaligned AI</a> in order to cooperate with distant misaligned AI, is very gnarly, and it&#8217;s difficult to tell what sort of changes would be net-positive vs. 
net-negative.</p></li><li><p>I discuss this more in <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">a later post</a>.</p></li></ul></li></ul><p>(For more details on the split between humans/evolved-aliens/misaligned-AI and why I chose it, see <a href="https://lukasfinnveden.substack.com/i/136237476/more-details-on-the-split-between-humans-evolved-species-and-misaligned-ai">this appendix.</a>)</p><p><strong>Table of contents</strong></p><blockquote><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/affect-whether-and-how-future-actors-do-ecl">Affect whether (and how) future actors do ECL</a></p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/futures-with-aligned-ai">Futures with aligned AI</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/futures-with-misaligned-ai">Futures with misaligned AI</a></p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/how-us-doing-ecl-affects-our-priorities">How us doing ECL affects our priorities</a></p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-other-humans-universe-wide-values">Care more about other humans&#8217; universe-wide values</a></p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/it-matters-less-which-universe-wide-values-control-future-resources-seems-minor-in-practice">It matters less which universe-wide values control future resources (seems minor in practice?)</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/upside-and-downside-focused-longtermists-should-care-more-about-each-others-values">Upside- and downside-focused longtermists should care more about each others&#8217; values</a></p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-evolved-aliens-universe-wide-values">Care more about evolved aliens&#8217; universe-wide values</a></p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-non-ai-extinction-risk-less-highly">Minor: Prioritize non-AI extinction risk less highly</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/influence-how-ai-benefitsharms-alien-civilizations-values">Influence how AI benefits/harms alien civilizations&#8217; values</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/possibly-weigh-suffering-focused-values-somewhat-higher-if-they-are-more-universal">Possibly: Weigh suffering-focused values somewhat more highly if they are more universal</a></p></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/care-more-about-misaligned-ais-universe-wide-values">Care more about misaligned AIs&#8217; universe-wide values</a></p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-ai-takeover-risk-less-highly">Minor: Prioritize AI takeover risk less highly</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/positively-influence-misaligned-ai">Positively influence misaligned AI</a></p></li></ul></li></ul></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/more">More</a></p></li><li><p><strong><a href="https://lukasfinnveden.substack.com/i/136237476/appendices">Appendices</a></strong></p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/what-values-do-you-need-for-this-to-be-relevant">What values do you need for this to be relevant?</a></p></li><li><p><a 
href="https://lukasfinnveden.substack.com/i/136237476/more-details-on-the-split-between-humans-evolved-species-and-misaligned-ai">More details on the split between humans, evolved species, and misaligned AI</a></p><ul><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/why-distinguish-humans-from-aliens">Why distinguish humans from aliens?</a></p></li><li><p><a href="https://lukasfinnveden.substack.com/i/136237476/why-distinguish-evolved-aliens-from-misaligned-ais">Why distinguish evolved aliens from misaligned AIs?</a></p></li></ul></li></ul></li></ul></blockquote><h2>Affect whether (and how) future actors do ECL</h2><h3>Futures with aligned AI</h3><p>If we take ECL seriously,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> I think it&#8217;s really important that humanity <em>eventually</em> understands these topics deeply, and can make wise decisions about them. But for most questions about what humanity should <em>eventually</em> do, I&#8217;m inclined to defer them to the future. I&#8217;m interested in whether there&#8217;s anything that <em>urgently</em> needs to be done.</p><p>One way to affect things is to increase the probability that humanity ends up building a healthy and philosophically competent civilization. (But we already knew that was important.)</p><p>There might also be ways in which humanity could irreversibly mess up in the near-term that are unique to decision theory. For example, people could make unwise commitments if they perceive themselves to be in <a href="https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem">commitment races</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Or there might be ways in which people could learn too much information, too early. (We don&#8217;t currently have any formalized decision theories that <em>can&#8217;t</em> be harmed by learning information. For example, people who use evidential decision theory can only influence things that they haven&#8217;t yet learned about &#8212;&nbsp;which means that information can make them lose power.) (C.f. <a href="https://lukasfinnveden.substack.com/p/when-does-edt-seek-evidence-about">this later post</a> on when EDT seeks out or avoids certain information.) It&#8217;s possible that careful thinking could reduce such risks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> (For example, perhaps it would be good to prevent early AI systems from thinking about these topics until they and we are more competent.)</p><p>How is ECL relevant for this? Broadly, it seems like ECL is an important part of the puzzle for what various decision theories recommend. So learning more about ECL seems like it could help clarify the picture, here, and clarify what intervention points exist. 
(This also applies to futures with misaligned AI.)</p><p>(For related discussion in <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a>, see section 4.5 on researching and promoting ECL and 4.6.3 on making AI act according to ECL.)</p><h3>Futures with misaligned AI</h3><p>Affecting how misaligned AI does ECL is also an intervention point.</p><p>I think ECL could play a couple of different roles, here:</p><ul><li><p>Firstly, ECL-sympathetic AI systems might treat <em>us</em> better (e.g. by giving humanity a solar-system-sized utopia instead of killing us).</p><ul><li><p>In order for ECL to motivate AI systems to treat us nicely, there needs to be some distant actors that care about us. I.e., someone would need to have universe-wide values that specifically value the preservation of distant pre-AGI civilizations over other things that could be done with those resources.</p></li></ul></li><li><p>Secondly, ECL-sympathetic AI systems might trade (and avoid conflict) with distant civilizations, thereby benefiting those civilizations.</p><ul><li><p>This is intrinsically good if we intrinsically care about those distant civilizations&#8217; values.</p></li><li><p>In addition, it&#8217;s plausible that ECL recommends us to care about benefits that accrue to distant civilizations&#8217; whose values we don&#8217;t intrinsically care about. This is discussed below, in <a href="https://lukasfinnveden.substack.com/i/136237476/influence-how-ai-benefitsharms-alien-civilizations-values">Influence how AI benefits/harms alien civilizations</a>.</p></li><li><p>Such trade could also benefit <em>the misaligned AI system&#8217;s own values</em>, and ECL might give us reason to care about those values. This is more complicated. I discuss it below in <a href="https://lukasfinnveden.substack.com/i/136237476/positively-influence-misaligned-ai">Positively influence misaligned AI</a>.</p></li></ul></li></ul><h2>How <em>us doing</em> ECL affects our priorities</h2><h3>Care more about other humans&#8217; universe-wide values</h3><h4>It matters less which universe-wide values control future resources (seems minor in practice?)</h4><p>Let&#8217;s temporarily assume that humanity will avoid both near-term extinction and AI takeover. Even then, the value of the future could depend a lot on <em>which human values</em> will be empowered to decide what&#8217;s to be done with all the matter, space, and time in our lightcone.</p><p>If someone had an opportunity to influence this (e.g. by promoting certain values), ECL would generally be positive on empowering universe-wide values (that are compatible with good decision-theoretic reasoning), since for any such values:</p><ul><li><p>You might correlate with distant people who hold such values, in which case ECL gives you reason to benefit them.</p></li><li><p>If such values maintain power into the long-term future, and our future civilization ends up deciding that ECL (or something similar) works, then ECL will motivate them to benefit other universe-wide values. (At least insofar as there are gains from trade to be had.)</p></li></ul><p>If you were previously concerned about promoting <em>any particular</em> universe-wide values, this means that you should now be somewhat less fussed about promoting those values in particular, as opposed to any other universe-wide values. 
In struggles for influence that are mainly about universe-wide values, you should care less about who wins.</p><p>(This is related to discussion about moral advocacy in section 4.2 of <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a>; especially 4.2.7.)</p><p>Here&#8217;s a slightly more worked-out gesture at why ECL would recommend this.</p><ul><li><p>Let&#8217;s say that you&#8217;re a supporter of faction A, in a struggle for influence against faction B. You can decide to either invest in the 0-sum struggle for influence, or you can decide to invest in something that you both value (e.g. reducing uncontroversial x-risks or s-risks).</p></li><li><p>If support for faction B is compatible with good decision-theoretic reasoning, then on some distant planet, there will probably be supporters of faction B who are in an analogous but reversed situation to you (in a struggle for influence against faction A) who are thinking about this decision in a similar way.</p></li><li><p>If you decide to support the common good instead of faction A, then faction A&#8217;s expected influence will decrease a bit on your planet. But your choice to do so is evidence that the distant supporters of faction B also will support the public good (instead of faction B) on their planet, which will lead faction A&#8217;s expected influence to increase a bit (and also lead to positive effects from the support of the public good).</p></li><li><p>So ECL provides a reason to invest fewer resources in the 0-sum fight and instead care more about public goods.</p></li></ul><p>(In order to work out <em>how</em> much less you&#8217;d want to invest in the 0-sum fight, you&#8217;d want to think about the ratio of &#8220;how much evidence am I providing that supporters of faction A will invest in the public good&#8221; to &#8220;how much evidence am I providing that supporters of faction B will invest in the public good&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> I&#8217;m only illustrating the directional argument here; a toy version of this comparison is sketched at the end of this subsection.)</p><p>While I believe the ECL argument works here, it doesn&#8217;t currently seem very decision-relevant to me. Competitions that could be important for the future (e.g. competition between AI labs or between US and China) don&#8217;t seem well-characterized as conflicts between universe-wide value-systems. At least my personal opinions about them are mostly about who&#8217;s more/less likely to cause an (uncontroversial) x-risk along the way; and perhaps about who&#8217;s more/less likely to help create a society that adopts reasonably impartial values and becomes sufficiently philosophically sophisticated to enact the best version of them.</p><p>That said, for someone who was previously obsessed with boosting a <em>particular</em> value-system (e.g. spreading hedonistic utilitarianism, or personally acquiring power for impartial ends), I think ECL should nudge that motivation toward being somewhat more inclusive of other universe-wide values.</p>
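<p>(Here is a made-up toy model of the bullet-point argument above, just to illustrate the direction of the effect; none of the numbers or parameter names come from elsewhere in the post.)</p><pre><code># Toy model: fighting gains your faction 1 unit of influence and costs the
# other faction 1 unit; funding the public good creates u units of value that
# both factions care about. e_a / e_b stand for how strongly your choice
# correlates with distant supporters of faction A / faction B (you support A).
def gain_from_public_good(u, e_a=1.0, e_b=0.5):
    # Change in expected value, by faction A's lights, if you (and everyone
    # correlated with you) fund the public good instead of fighting.
    return e_a * (u - 1) + e_b * (u + 1)

# Ignoring ECL (e_b = 0): the public good has to beat the fight outright.
print(gain_from_public_good(u=0.8, e_b=0.0))  # about -0.2, so keep fighting
# With ECL (e_b = 0.5): correlated B-supporters also stand down, lowering the bar.
print(gain_from_public_good(u=0.8, e_b=0.5))  # about 0.7, so fund the public good
</code></pre><p>The exact numbers are arbitrary; the point is just that any correlation with the other faction&#8217;s supporters lowers the bar for choosing the public good, and the size of the discount depends on the ratio of the two correlations.</p>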
<h4>Upside- and downside-focused longtermists should care more about each other&#8217;s values</h4><p>(Terms are defined as <a href="https://longtermrisk.org/cause-prioritization-downside-focused-value-systems/">here</a>: &#8220;Upside-focused values&#8221; are values that <em>in our empirical situation</em> recommend focusing on bringing about lots of positive values. &#8220;Downside-focused values&#8221; are values that <em>in our empirical situation</em> recommend working on interventions that make bad things less likely, typically reducing suffering.)</p><p>If we look beyond struggles for influence and resources, and instead look for any groups of humans who have different <em>universe-wide</em> values, and where this leads to different real-world priorities, the two groups that stand out are upside-focused and downside-focused longtermists. For these groups, we also <em>have actual examples</em> of both upside- and downside-focused people thinking about ECL-considerations in a similar way. Which makes the ECL-argument more robust.</p><p>It seems good for people to know about and bear this in mind. For example, it means that upside- and downside-focused people should:</p><ul><li><p>be inclined to take high-leverage opportunities to help each other,</p></li><li><p>decide what to work on somewhat less on the basis of values and somewhat more on the basis of comparative advantage,</p></li><li><p>avoid actions that would benefit their own values at considerable cost to the others&#8217; values.</p></li></ul><p>As usual, the ECL-argument here is: If you choose to take any of these actions, then that&#8217;s evidence that distant people with <em>the other</em> value-system will choose to take analogous actions to benefit <em>your</em> favorite value-system.</p><p>How strong is this effect? I&#8217;m not sure. What follows is a few paragraphs of speculation. (Flag: These paragraphs rely even more on pre-existing knowledge about ECL than the rest of the post.)</p><p>Ultimately, it depends on the degree to which humans correlate relatively more with the decisions of people with shared values vs. different values,&nbsp;on this type of decision.</p><p>I.e., the question is: If someone with mostly upside-focused values decides to do something that benefits downside-focused values, how much evidence is this that (i) distant upside-focused people will help out people with downside-focused values, vs. (ii) distant downside-focused people will help out people with upside-focused values. (From the perspective of the person who makes the decision.)</p><p>If it&#8217;s roughly as much evidence for both propositions, then upside-focused and downside-focused people should be about as inclined to benefit each other&#8217;s values as to benefit their own values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>Here&#8217;s an argument in favor of this: Regardless of whether you have upside-focused or downside-focused values, the ECL argument (that you should care about the others&#8217; values) is highly similar. So it seems like there&#8217;s no large dependence on what values you have. Accordingly, it seems like your decision should be just as much evidence for how other people act, regardless of what values they have.</p><p>Here&#8217;s a counter-argument: When you&#8217;re making any particular decision, perhaps you are getting disproportionately much evidence about how actors that are especially similar to you tend to feel about ECL-based cooperation-arguments in especially similar situations. (After all: Most of your evidence about how likely people are to act on ECL-arguments <em>in general</em> will come from observations about what decisions <em>other</em> people make.)
And perhaps an important component of &#8220;especially similar&#8221; is that those actors would share your values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><p>(For some more of my speculation, including some counter-arguments to that last paragraph, see the post <a href="https://lukasfinnveden.substack.com/p/are-our-actions-evidence-for-ai-decisions">Are our choices analogous to AI choices?</a>, which discusses a similar question but with regards to correlating with <em>AI</em> instead of with <em>humans</em>. Such a correlation is less likely, but similar considerations come up. See also section 3.1 in <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a> on orthogonality of rationality and values.)</p><p>Overall, I feel quite uncertain here. This uncertainty corresponds to a sense that my actions are somewhat less evidence for the decisions of people who don&#8217;t share my values, but not a huge amount less. Summing over my uncertainty, I feel like my decisions are &#8805;10% as much evidence for the decisions of people who don&#8217;t share my values (as they are evidence for the decisions of people who share my values) &#8212; which would imply that I should care &#8805;10% as much about their values as I care about my own.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></p><h3>Care more about evolved aliens&#8217; universe-wide values</h3><p>ECL also recommends caring more about alien civilizations. Here are two different implications of this.</p><h4>Minor: Prioritize non-AI extinction risk less highly</h4><p>A minor consequence of this is: You might want to prioritize non-AI extinction risk slightly <em>less</em> highly than before. Because if Earth doesn&#8217;t colonize the universe, some of that space will (in expectation) get colonized by alien civilizations instead, to their benefit.</p><p>If we were to trade like-for-like, the story would be: If we prioritize non-AI extinction risk slightly less highly (and put higher priority on making sure that space colonization is good if it does happen), then that&#8217;s evidence that distant aliens also prioritize non-AI extinction risk slightly less highly. If this leads to their extinction, and their neighbors share our values, then civilizations with our values will recover some of that empty space.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a></p><p>I think this effect seems minor (unlikely to make non-AI extinction reduction less than half as useful as you previously thought).</p><p>This is mainly because ECL doesn&#8217;t recommend us to care <em>as</em> much about alien values as we care about human values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> I would be surprised if ECL recommended that we prioritize random alien values more than half as much as our own, which suggests that even if aliens were guaranteed to colonize space in our place, at least &#189; of value would be lost from our failure to do so.</p><p>An additional consideration is that aliens are (probably) not common enough to take over all space that we would have missed.
I think the relevant comparison is between &#8220;space we could get to <em>before aliens</em>&#8221; (i.e. the total amount of space that humanity would get access to if space is defense-dominant) vs. &#8220;space that <em>only</em> we could get to&#8221; (i.e. the space that humanity would get access to without excluding any aliens from it, such that we would want to get there even if we cared just as much about alien&#8217;s values as our own values).</p><ul><li><p><a href="https://forum.effectivealtruism.org/posts/9p52yqrmhossG2h3r/quantifying-anthropic-effects-on-the-fermi-paradox">My old estimate</a> is that the latter is ~&#8531; times as large as the former. This suggests that, even if ECL made us care just as much about alien values as our own values, we would still care &#8531; as much about colonizing space.</p></li><li><p>But my old estimate didn&#8217;t take into account that intelligence might re-evolve on Earth if humans go extinct. I think this is fairly likely, based on my impression that recent evolutionary increases in intelligence have been rapid compared to Earth&#8217;s remaining life-span. If there&#8217;s only a &#8531; probability that intelligence fails to re-evolve on Earth, then the expected amount of space that wouldn&#8217;t be colonized by anyone is only &#8531;*&#8531;~=10% as large as the space that humans would get to first.</p></li><li><p>So if we took these estimates at face-value &#8212;&nbsp;they suggest that <em>if</em> ECL moved us from &#8220;don&#8217;t care about alien values at all&#8221; to &#8220;care about alien values just as much as our own values&#8221;, this would reduce the value of non-AI extinction-reduction by at most 10x.</p></li><li><p>&#8230;from the perspective of universe-wide values which value marginal influence over the universe roughly linearly. Other plausible value-systems would be less swayed by this argument, so it overestimates how much our all-things-considered view should move.</p></li></ul><p>Also, as I discuss <a href="https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-non-ai-extinction-risk-less-highly">below</a>, ECL might similarly motivate us to prioritize AI takeover less highly. Since this is the most salient alternative priority to &#8220;non-AI extinction risk&#8221; on a longtermist view, they partly cancel out.</p><h4>Influence how AI benefits/harms alien civilizations&#8217; values</h4><p>A different way in which we could benefit aliens is to increase the probability that Earth-originating intelligence benefits them (if Earth-originating intelligence doesn&#8217;t go extinct). This could apply to either aliens that we physically meet in space, or to distant aliens that we can&#8217;t causally interact with.</p><p>I typically think about such interventions as a special kind of &#8220;cooperative AI&#8221;-intervention &#8212; increasing the probability that AIs are inclined to seek out win-win opportunities with other value systems. See <a href="https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions">this post</a> for more discussion of this. The brief summary is: ECL could plausibly increase the value of interventions that aim to make misaligned AI treat alien civilizations better by ~1.5-10x.&nbsp;</p><h4>Possibly: Weigh suffering-focused values somewhat higher if they are more universal </h4><p>This is the item on the list that I&#8217;ve thought the least about. 
But I recently realized that it passes my bar of being a &#8220;plausibly important implication&#8221;,&nbsp;so I&#8217;ll briefly mention it.</p><p>As argued in section 4.1.1 of <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a>, ECL suggests that we should give greater weight to &#8220;universalist values&#8221; than to idiosyncratic concerns. (Note that &#8220;universalist values&#8221; does <em>not</em> mean the same thing as universe-wide values. Universalist values are values that are highly common across the universe.)</p><p>While I buy the basic argument for this principle, I didn&#8217;t initially see any decision-relevant applications of this.</p><p>For example, one idiosyncratic value is to care especially much about yourself, and less about others. But in order for the argument to apply here, we require that the tradeoff rate between altruistic values and selfish values is sensitive to abstract arguments about the altruistic stakes involved. At least for me personally, I intuitively feel like learning about the potential size of the far future should have ~&#8220;maxed out&#8221; the degree to which abstract knowledge of high altruistic stakes will compel me to act more altruistically and less selfishly. Such that there&#8217;s not a lot of room left for ECL to move me.</p><p>Another example is that it affects what precise form of moral advocacy we should be interested in,&nbsp;insofar as we&#8217;re pursuing moral advocacy to influence long-term values. But I don&#8217;t currently think that it seems like a high-priority intervention to advocate for highly specific values, with the purpose of influencing long-term values. (I think it&#8217;s relatively more promising to do &#8220;moral advocacy to change near-term behavior&#8221; or &#8220;advocating for good principles of ethical deliberation&#8221;.&nbsp;But I don&#8217;t think that the value of those interventions is very sensitive to whether ECL recommends universal values over idiosyncratic values.)</p><p>But here&#8217;s a scenario where this consideration might matter. Some people&#8217;s views on ethics have an asymmetry between <em>positive</em> visions of the future and <em>negative</em> visions of the future. Where positive visions need to get a lot of complex, human-specific things right, in order to satisfy humans&#8217; highly contingent, highly complex values.
Whereas negative visions only need suffering &#8212; and then that&#8217;s already bad.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a></p><p>If this is your view, then a plausible corollary could be: there are many more aliens across the universe who share your concern for suffering, than who share your vision for a positive future.</p><p>And if that&#8217;s true, then you could potentially get gains-from-trade from everyone putting relatively more attention into reducing suffering (which is appreciated by everyone) and relatively less attention on bringing about highly complex positive future visions (which is only appreciated by a relatively small fraction).</p><p>How large is this effect?</p><ul><li><p>When you act to bring about positive futures that you value, your influence is proportional to the (power-weighted) number of actors who share those values, multiplied by the average amount that you correlate with them.</p></li><li><p>When you act to reduce negative experiences that you disvalue, your influence is proportional to the (power-weighted) number of actors who share that disvalue, multiplied by the average amount that you correlate with them.</p></li></ul><p>So for example: If you thought that 10x more actors shared your conception of suffering than your conception of a good future, but that those actors were less similar to you and therefore correlated with you 2x less on average, then that would increase the value of suffering-focused interventions by 5x.</p><h3>Care more about misaligned AIs&#8217; universe-wide values</h3><p>More speculatively, ECL might recommend that we care more about the universe-wide values of distant misaligned AIs.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> Why is this more speculative? In order for ECL to give us reason to benefit AIs, we would have to be similar enough to those AIs that our decisions have some acausal influence on their decisions. If we assume evidential decision theory, this means that our decision needs to give some evidence for what distant misaligned AIs choose to do. And intuitively, it seems less likely that our decisions provide evidence about distant misaligned AIs&#8217; actions than that they provide evidence about the actions of distant aliens. (Since AIs&#8217; minds probably differ more from ours, and since the decision-situations they are likely to find themselves in differ more from ours.)</p><p>I feel uncertain about whether ECL says we should care more about the values of distant AIs. If it did, here are two conclusions.</p><h4>Minor: Prioritize AI takeover risk less highly</h4><p>One potential implication could be that we should prioritize AI takeover risk slightly less highly: Because although it would be bad for misaligned AI to seize power, it would at least be slightly good that the AI gets to implement its own values, as long as those values were endorsed by many distant ECL-sympathetic AIs. (Since us benefiting their values, in this way, would be evidence that they make decisions to benefit our values &#8212;&nbsp;or at least that&#8217;s the hypothetical we&#8217;re considering.)</p><p>However, this effect seems minor (unlikely to make misalignment reduction less than half as useful as you previously thought). Because:</p><ul><li><p>As mentioned above, we might not be similar enough to AIs for the ECL argument to work.
And even if we&#8217;re similar enough to have some acausal influence on them, ECL doesn&#8217;t recommend us to care <em>as</em> much about AI values as we care about human values.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> I would be surprised if ECL recommended that we prioritized random AI values more than half as much as our own.</p></li><li><p>This is mainly an argument to slightly deprioritize AI takeover <em>by ECL-sympathetic AI systems with universe-wide values</em>. But AIs might not have universe-wide values, and might not be ECL-sympathetic, by default.</p></li></ul><p>Also, as discussed <a href="https://lukasfinnveden.substack.com/i/136237476/minor-prioritize-non-ai-extinction-risk-less-highly">above</a>, ECL might similarly motivate us to prioritize <em>non-AI</em> extinction less highly. (Which is the most salient alternative priority to misalignment risk, on a longtermist view.)</p><h4>Positively influence misaligned AI</h4><p>A different way in which we could benefit distant misaligned AIs&#8217; universe-wide values is to adjust how we build AI so that <em>if</em> humanity ends up building AI that is misaligned with our own values, then it&#8217;s more likely to successfully optimize for values that distant misaligned AIs would approve of. Unfortunately, it seems very difficult to work out what sort of changes would be good and what sort of changes would be bad, here.</p><p>For more writing on this, see <a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">here</a>.</p><h2>More</h2><p>I&#8217;m following up this post with two other posts:</p><ul><li><p><a href="https://lukasfinnveden.substack.com/p/how-ecl-changes-the-value-of-interventions">ECL and benefitting distant civilizations</a>, for more on how ECL affects the value of influencing how AI might benefit/harm distant alien civilizations.</p></li><li><p><a href="https://lukasfinnveden.substack.com/p/ecl-with-ai">ECL with AI</a>, digging into how we could positively influence misaligned AI, and whether ECL recommends that.</p></li></ul><h1>Appendices</h1><h2>What values do you need for this to be relevant?</h2><p>I'd say this post is roughly: advice to people whose values are such that, <em>if</em> they were to grant that their actions acausally affected an enormous number of worlds quite different from their own, they would say that a large majority of the impact they cared about was impact on those distant worlds.</p><p>Importantly, it's also advice to people who endorse some type of moral uncertainty or pluralism, and have <em>components</em> of their values that behave like that. Then it's advice for what that value-component should advocate and bargain for. (I think this is probably a more realistic account of most humans&#8217; values.)</p><p>(Though one of the many places where I haven't thought about the details is: If you are trying to acausally influence agents with universe-wide values, does it pose any extra troubles if you yourself only have partially universe-wide values and do some messy compromise thing?)</p><p>I&#8217;ll use &#8220;universe-wide&#8221; values as a shorthand for these types of values. 
(&#8220;Multiverse-wide&#8221; would also be fine terminology &#8212; but I think caring about a spatially large universe is sufficient.)</p><p>(For previous discussion of what values are necessary for ECL, see section 3.2 in <a href="https://longtermrisk.org/files/Multiverse-wide-Cooperation-via-Correlated-Decision-Making.pdf">Oesterheld (2017)</a>.)</p><h2>More details on the split between humans, evolved species, and misaligned AI</h2><p>Above, I separately consider how ECL suggests that we should care more about:</p><ul><li><p>other humans&#8217; universe-wide values,</p></li><li><p>evolved aliens&#8217; universe-wide values,</p></li><li><p>misaligned AIs&#8217; universe-wide values.</p></li></ul><p>This raises two questions:</p><ul><li><p>Why the distinction between &#8220;other humans&#8217; universe-wide values&#8221; and &#8220;evolved aliens&#8217; universe-wide values&#8221;?</p></li><li><p>Why the distinction between &#8220;evolved aliens&#8217; universe-wide values&#8221; and &#8220;misaligned AIs&#8217; universe-wide values&#8221;?</p></li></ul><h3>Why distinguish humans from aliens?</h3><p>When I talk about benefiting other humans&#8217; universe-wide values, I don&#8217;t mean to imply that we&#8217;re acausally cooperating with just the local humans on our own planet Earth. I think almost all the benefits come via evidence that very distant actors behave more cooperatively. Such actors could be either quite similar to humans or quite unlike humans (in at least some ways).</p><p>So why talk specifically about the universe-wide values of &#8220;other humans&#8221;, rather than the broader group of aliens?</p><p>The answer is that universe-wide values held by other humans have a number of unique properties:</p><ul><li><p>For any universe-wide values held by humans, we have <em>empirical support</em> that evolved species sometimes grow to treasure those values.</p></li><li><p>Even more strongly, we have empirical support that <em>minds very similar to our own</em> can grow to treasure those values, which strengthens the case for high correlations, and thereby the case for ECL-based cooperation.</p></li><li><p>Universe-wide values held by humans can conveniently be benefitted <em>via</em> the humans that support them. For example, by:</p><ul><li><p>supporting the humans that hold them.</p></li><li><p>avoiding conflicts with humans that hold them.</p></li><li><p>listening to the advice of humans that hold them.</p></li></ul></li></ul><p>This is quite different from non-human values, where we have to resort to more basic guesses about their preferences, like:</p><ul><li><p>Aliens probably value having access to more space over having access to less space.</p></li><li><p>Aliens probably prefer to interact with other actors who are cooperative rather than conflict-prone.</p></li></ul><h3>Why distinguish evolved aliens from misaligned AIs?</h3><p>First, a terminological note: &#8220;Misaligned AI&#8221; refers to AI whose values are very different from what was intended by the evolved species that first created them. If a distant species has very different values from us, and successfully aligns AI systems that <em>they</em> create, I&#8217;d count those as &#8220;aligned&#8221; AIs.</p><p>(&#8220;Aligned AIs&#8221; themselves will, of course, have the same values as evolved aliens.
Benefiting their values would be the same as benefitting the values of some evolved aliens, so they don&#8217;t need a separate category.)</p><p>Now, why do I separately consider the values of evolved aliens and misaligned AIs? There are two reasons.</p><p>Firstly, compared to AI, evolved aliens probably have minds that are more similar to ours, and face decisions that are more similar to ours. Thus, there&#8217;s a stronger case that our decisions correlate more with their decisions, making the case for ECL-based cooperation stronger.</p><p>Second, AI progress is currently fast, and I have less than perfect confidence in humanity&#8217;s ability to only create and empower aligned AI systems. This (ominously) suggests that we may soon have unique opportunities to benefit or harm the values of misaligned AI systems.</p><h3>Acknowledgments</h3><p>For helpful comments and suggestions on this and related posts, thanks to Caspar Oesterheld, Emery Cooper, Lukas Gloor, Tom Davidson, Joe Carlsmith, Linh Chi Nguyen, Daniel Kokotajlo, Jesse Clifton, Richard Ngo, Anthony DiGiovanni, Charlotte, Tristan Cook, Sylvester Kollin, and Alexa Pan.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For even more references, see all the content gathered on <a href="https://longtermrisk.org/msr">this page</a>, and more recently, <a href="https://www.lesswrong.com/posts/mm8sFBpPH3Bb2NhGg/three-reasons-to-cooperate">this post</a> written by Paul Christiano and <a href="https://arxiv.org/pdf/2307.04879.pdf">this paper</a> by Johannes Treutlein.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>If you know any plausible implication that I don&#8217;t list here &#8212;&nbsp;then either I don&#8217;t buy that it&#8217;s an implication of ECL, or it doesn&#8217;t seem sufficiently decision-relevant to me, or I haven&#8217;t thought about it / forgot about it and you should let me know.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Whereas today, we can focus on handing-off the future to a broadly competent and healthy civilization, and trust decisions about what to do with the future to them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>When I discuss how we should &#8220;care more about other humans&#8217; universe-wide values&#8221;, I exclusively refer to universe-wide values held by humans on our current planet Earth, as opposed to values that might be held by distant human-like species. But the reason to benefit such values is to generate evidence that other people benefit our values on distant planets (not just here, on planet Earth). So why focus specifically on humans&#8217; values? The reason is that we are more confident that some people treasure them, and it&#8217;s easy to benefit them via supporting humans who support them. 
For more, see <a href="https://docs.google.com/document/d/1XCZ-g_GAyZwfJfWHTRVyzptVU2_uI9tFWMG8x9j4C_o/edit#heading=h.y1hw7rervyd">here</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>&#8220;Misaligned AI&#8221; refers to AI whose values are very different from what was intended by the evolved species that first created them. If a distant species has very different values from us, and successfully aligns AI systems that they create, I wouldn&#8217;t count those as &#8220;misaligned AIs&#8221;.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Or any other kind of acausal effects.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Premature commitments are often a gamble that might gain <em>you</em> a better bargaining position while carrying a risk of <em>everyone</em> getting a lower payoff. Since that&#8217;s quite uncooperative, it seems plausible that ECL could discourage premature commitments. So this might be a reason to spread knowledge about ECL.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Though also possible that <em>un</em>careful thinking could increase them &#8212;&nbsp;given that they are by-their-nature caused by humanity making errors in what order they learn about and commit to doing certain things.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>And ideally, you would also think about other opportunities that faction A and faction B would have of benefiting each other, since you might also be providing evidence about those. Even more ideally, you might think about possible gains from trades that involve even more factions.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Though the total effort that goes to each should perhaps still be allocated based on the number of people who support each set of values and who are sympathetic to ECL. Potentially adjusted by speculation about whether either set of values is underrepresented (among ECL-sympathizers) on Earth compared to the universe-at-large, in which case we should prioritize that set of values higher.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>It will be <em>the most</em> evidence for the actions of people in <em>exactly</em> my position. But this is not where most of my acausal influence will come from, since even a small amount of evidence across a sufficiently larger number of actors will weigh higher. 
The hypothesis that I&#8217;m putting forward here is that there might be some fairly broad class of actors which still share some key similarities with you, whose decisions your decisions provide more evidence about. And that your values might be (or be correlated with) one of the key similarities.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Though I am personally somewhat sympathetic to both upside- and downside-focused values, so this doesn&#8217;t have a big impact on my all-things-considered view.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Even if the aliens who went extinct shared our values, their choice to prioritize non-AI extinction risk less could still have been net-positive ex-ante. For example, they might have reallocated resources in a way that reduced AI takeover risk by 0.1% and increased non-AI extinction risk by 0.1001%. The added 0.0001% of x-risk might have been worth the benefit of leaving behind empty space rather than AI-controlled space in 0.1% of worlds.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>In particular,&nbsp; ECL suggests that we should discount benefits to aliens insofar as they on average correlate less strongly with us than the average civilizations-with-our-values do. (When making relevant decisions.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>As an example of someone with this view: <a href="https://www.facebook.com/yudkowsky/posts/10152588738904228">This facebook post</a> by Eliezer Yudkowsky starts &#8220;I think that I care about things that would, in your native mental ontology, be imagined as having a sort of tangible red-experience or green-experience, and I prefer such beings not to have pain-experiences. Happiness I value highly is more complicated.&#8221; Yudkowsky has also written about the complexity and fragility of value elsewhere, e.g. <a href="https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile">here</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>&#8220;Misaligned AI&#8221; refers to AI whose values are very different from what was intended by the evolved species that first created them. 
If a distant species has very different values from us, and successfully aligns AI systems that they create, I wouldn&#8217;t count those as &#8220;misaligned AIs&#8221;.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>In particular,&nbsp; ECL suggests that we should discount benefits to AI insofar as they correlate less strongly with us than actors-with-our-values do.</p></div></div>]]></content:encoded></item><item><title><![CDATA[A heuristic for planning with learning]]></title><description><![CDATA[A model, a heuristic, some big caveats]]></description><link>https://lukasfinnveden.substack.com/p/a-heuristic-for-planning-with-learning</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/a-heuristic-for-planning-with-learning</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Sun, 02 Oct 2022 21:44:50 GMT</pubDate><content:encoded><![CDATA[<p>Here&#8217;s a simple model for doing things in life:</p><ul><li><p>There&#8217;s some number of categories of things you can spend time on. (E.g. do math, give talks, go on dates, send emails.)</p></li><li><p>Whenever you spend time in any category, you&#8217;ll learn to get progressively better at doing that thing. (You&#8217;ll get more efficient, the results will be better, etc.)</p></li><li><p>After some approximately-known number of years, you&#8217;ll reach peak productivity, possibly hang around there for some time, and then your productivity will quickly decline exogenously. (Maybe you retire, maybe you&#8217;re replaced by AGI, maybe you have high altruistic discount rates.)</p></li><li><p>If you invest money, it will grow over time; and you can borrow any amount of money at the risk-free rate of return.</p></li></ul><p>In this model, here are two simple heuristics for prioritisation:</p><ul><li><p>When choosing what category to spend time in, you can assume that the cost and value of spending time in that category is equal to the cost and value of spending <em>marginal</em> time in that category at the end of your productive years, once you&#8217;ve learned all there is to learn. Because although the cost might be higher now, and the value might be lower, you&#8217;ll also speed up your entire future trajectory of learning.</p></li><li><p>When making time/money-tradeoffs, you should assume that your value of time is the same as your investment-adjusted value of time at peak productivity.</p></li></ul><p>Here are three example applications of this:</p><ul><li><p>If you&#8217;re thinking about giving a talk about something, and you&#8217;re very inexperienced with talks, the direct effects of giving the talk might not be worth the time-cost. 
But if you think that you&#8217;ll <em>eventually</em> do enough talks to learn how to do them quickly and well, the above heuristic suggests that you should do this talk iff it&#8217;d be worthwhile for the time-cost and impact you could achieve once you&#8217;ve practiced a ton.</p></li><li><p>If you&#8217;ll eventually become really good at doing something hard (like doing research in some field) you should focus most of your time on it, and pass up on doing tasks that let you have more impact <em>now</em>, but where you won&#8217;t improve as much in the long-term (either because you won&#8217;t spend enough time on it or because there&#8217;s not much room to grow).</p></li><li><p>If you think your salary will increase much faster (from learning) than the investment rate of return, you should be happy to buy time at a much higher rate than your current salary. (Mark Xu makes this point <a href="https://www.lesswrong.com/posts/beK9RBjMfkeSyqYTe/your-time-might-be-more-valuable-than-you-think">here</a>.)</p></li></ul><p>And here are some big caveats:</p><ul><li><p>For the money point, you might be liquidity constrained from not being able to borrow at the risk-free rate of return, or you might be risk-averse and not confident that you&#8217;ll reach your later higher salaries.</p></li><li><p>When choosing what to spend your time on, a big fraction of value is information value, which would shift around what things you&#8217;ll eventually become really good at. The above heuristic is kind of silent on this.</p></li><li><p>For many types of tasks, the things you should do to get good are quite different from the things you should do to exploit being good. For example, for giving talks, you might actively seek out low-stakes opportunities to practice early on (e.g. toastmaster sessions), but seek out high-stakes opportunities once you&#8217;re good. So even if the above heuristic recommends that you should give talks, make sure to <em>not</em> trust it about <em>which</em> talks to give.</p></li><li><p>I think the altruistic discount rate on EA work is actually really high. This is mainly because there will be significantly more EAs in the future, and there are some things that only current EAs can work on, like (i) short AI timelines, (ii) early community building, and (iii) starting work that requires a lot of serial time. So taking into account both learning (making you more productive later) and this discount rate (making you more productive sooner), your peak altruistic productivity might be soon (or now!). In that case, this heuristic differs much less from just doing what seems best in the short-run.</p></li></ul><p>Also, a note on attitude: This heuristic will sometimes recommend that you <em>prioritise</em> as if some tasks are easy and high-impact, even if right now, they&#8217;re <em>in fact</em> really hard. If you do follow this recommendation, remember that the reason that the task is nevertheless worthwhile is that learning and shifting forward all your future work is <em>super</em> high-impact. 
So acknowledge and celebrate your hard, super high-impact work accordingly!</p>]]></content:encoded></item><item><title><![CDATA[Different types of evidence]]></title><description><![CDATA[Some popular ways to categorise different types of reasoning/evidence/views:]]></description><link>https://lukasfinnveden.substack.com/p/different-types-of-evidence</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/different-types-of-evidence</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Wed, 21 Sep 2022 23:00:05 GMT</pubDate><content:encoded><![CDATA[<p>Some popular ways to categorise different types of reasoning/evidence/views:</p><ul><li><p>Inside view vs outside view.</p></li><li><p><a href="https://forum.effectivealtruism.org/posts/2WS3i7eY4CdLH99eg/independent-impressions">Independent impression vs all-things-considered view</a>.</p></li></ul><p>In this post, I&#8217;ll give my own (still work-in-progress) take on how to think about different types of evidence, and how they relate to the above typologies.</p><p>I like to separate reasons for believing something into the following 3 categories:</p><ul><li><p>Deference: Trusting an epistemic process that is somewhat opaque to you, but that you nevertheless have good reason to believe is reliable. This is most commonly just another human, perhaps an expert on the subject.</p></li><li><p>Induction: You&#8217;ve observed some fact to be true in the past, so you will assume that it keeps being true.</p></li><li><p>~Deduction: Some facts that you believe to be true about the world imply that another fact is probably true.</p><ul><li><p>The tilde is there because in philosophy, &#8220;deduction&#8221; is often used to mean logically valid inference, and I want to include probabilistic epistemic moves here, too.</p></li></ul></li></ul><p>(I considered adding &#8220;raw intuition&#8221; to this list, since that&#8217;s often a very significant source of disagreement, but I&#8217;m not sure if it makes sense to treat as separate. In practice, some amount of intuition is necessary for all these kinds of reasoning &#8212;&nbsp;to decide which deductions are plausible, what reference class you can use induction with regards to, and who seems like they should have relevant expertise. 
)</p><p>Comparing with the types of reasoning from above:</p><ul><li><p>Independent impression vs all-things-considered view:</p><ul><li><p>The independent impression takes into account induction and ~deduction, but doesn&#8217;t use deference.</p></li><li><p>The all-things-considered view is willing to use all 3 types of evidence.</p></li></ul></li><li><p>Inside vs outside view:</p><ul><li><p>As <a href="https://www.lesswrong.com/posts/BcYfsi7vmhDvzQGiF/taboo-outside-view#comments">has been noted</a>, EAs and rationalists have started using these terms in terribly ambiguous ways.</p></li><li><p>In their original usage, I would say that there&#8217;s a clear correspondence between induction and outside view, and a (slightly less obvious) correspondence between ~deduction and inside view.</p></li><li><p>In modern parlance, I think some people will use inside view as synonymous with &#8220;independent impression&#8221;, including both induction and ~deduction.</p></li></ul></li></ul><p>An important thing to note about these kinds of reasoning is that they <em>often rely on each other</em>:</p><ul><li><p>When deciding who to defer to, you may use induction to look at their track-record.</p></li><li><p>When using induction, you may need to use deference or ~deduction to figure out what happened in the past.</p></li><li><p>But most importantly, ~deduction almost always works by ~deducing things from already known facts, and those facts could have come from anywhere!</p></li></ul><p>I think this last point is most important. I like to imagine a final deductive guess to be the root node in a tree, where each node is deduced from its children (and possibly <em>partly</em> influenced by some relevant induction- or deference-move), and where each leaf-node is justified just by induction and/or deference.</p><p>An important implication of this is that the concept of your &#8220;independent impression&#8221; or &#8220;inside view&#8221; will often be ambiguous, because it will depend on <em>where</em> in the tree you stop including deference/induction, since it&#8217;s not feasible to construct a tree completely without it. (Though in practice, it&#8217;s often useful to talk about an &#8220;independent impression&#8221; as just being independent from a few people &#8212; e.g. people in your community or in the current conversation &#8212; in which case it might be feasible to construct a view that totally ignores those people&#8217;s views.)</p><p>I&#8217;m not totally happy with this categorisation system, and feel like I still have some things to say about the issue, so I might write more about this later. Or not. We&#8217;ll see.</p>]]></content:encoded></item><item><title><![CDATA[Pareto-optimality and dominance arguments]]></title><description><![CDATA[Some possible errors]]></description><link>https://lukasfinnveden.substack.com/p/pareto-optimality-and-dominance-arguments</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/pareto-optimality-and-dominance-arguments</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Fri, 16 Sep 2022 00:33:36 GMT</pubDate><content:encoded><![CDATA[<p>When trying to decide between many options, one often-tempting move is to exclude all options that are <em>dominated</em> by some other option, i.e., that are strictly inferior on each dimension. This is a good move if you capture <em>all</em> relevant dimensions, but in practice, I think it&#8217;s easy to either miss some dimensions or use the wrong option space.</p>
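<p>As a minimal illustration, with made-up options and scores (higher is better on every dimension), the filtering move looks like this; note that which options survive depends entirely on which dimensions you choose to score:</p><pre><code># Toy sketch: dominance filtering over hypothetical options with made-up scores.
def dominates(a, b):
    """a dominates b: at least as good on every dimension, strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def undominated(options):
    return {name: scores for name, scores in options.items()
            if not any(dominates(other, scores)
                       for other_name, other in options.items() if other_name != name)}

# Scoring only (impact, cost-effectiveness): policy_b looks safe to drop...
print(undominated({"policy_a": (9, 7), "policy_b": (6, 5)}))
# {'policy_a': (9, 7)}

# ...but add a dimension that was missing (political feasibility) and it isn't.
print(undominated({"policy_a": (9, 7, 2), "policy_b": (6, 5, 8)}))
# {'policy_a': (9, 7, 2), 'policy_b': (6, 5, 8)}
</code></pre>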
<p><strong>Examples from economic policy</strong></p><p>[This section would be significantly improved by finding real examples of real people making these mistakes.]</p><p>In economic policy, my sense is that it&#8217;s easy to mix up Pareto-optimality with respect to a totally open option-space (where a benevolent dictator with complete information could magically distribute resources without distorting any incentives, similar to <a href="https://en.wikipedia.org/wiki/Kaldor%E2%80%93Hicks_efficiency">Kaldor-Hicks efficiency</a>) and Pareto-optimality with respect to what options are actually available to policy-makers. For example, from <a href="https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics">this wikipedia page</a>, consider the line &#8220;attempts to correct the distribution may introduce distortions, and so full optimality may not be attainable with redistribution.&#8221; But insofar as redistribution can give <em>any</em> individual more resources than they would have had under any realistic non-redistributive policy, then that redistributive policy will be Pareto-efficient <em>with respect to the realistic policy space</em>. In which case you&#8217;d be making a mistake to reject redistribution based on an argument that Pareto-optimality is an unobjectionable criterion that only excludes options that are dispreferred by everyone.</p><p>Here&#8217;s another example, inspired by complaints I&#8217;ve heard about how some people supposedly seem to reason about climate policy: The efficient solution to climate change (among some large set of policies) would be a global CO2 tax, which might naively suggest that we can exclude all policies that are dominated by this. But of course, in practice, a global CO2 tax isn&#8217;t politically feasible. Which means that some other solutions will be Pareto-efficient <em>with respect to the realistic policy space</em>, even if they&#8217;re not efficient with respect to the larger policy space.</p><p><strong>This can also happen in my personal life</strong></p><p>Sometimes I consider a list of options, and decide that one of them seems to be dominated by another option, <em>but feel some reluctance to drop it</em>. If so, that&#8217;s likely because there&#8217;s some factor that made that option intuitively appealing to me, but that I either fail to notice or intellectually don&#8217;t give much value to. But then dropping that option is quite plausibly a very bad move, either because (i) my intuition was pointing at some factor that was actually important, or because (ii) my intuition will switch to supporting some other option that does well on that factor, but does worse on most other axes (but isn&#8217;t quite dominated by anything else due to doing well on at least one of them).</p><p><strong>How I&#8217;d recommend thinking about this</strong></p><p>When applied correctly, Pareto-optimality is indeed a weak and unobjectionable criterion. Thus, whenever you appeal to a dominance argument, I think the mood should be one where you&#8217;ve helpfully pointed out an obvious way to improve the option space, where every party in the discussion (and each of your personal intuitions) is happy to accept it and move on.
And typically, there should still be plenty of discussion left, because Pareto-optimality alone is unlikely to narrow down the option-space to just one point.</p><p>But whenever you appeal to a dominance argument and:</p><ul><li><p>meet opposition (from others or from some internal part of yourself), or</p></li><li><p>feel like you made a clever move that cut out lots of options that you previously deemed plausible,</p></li></ul><p>you should be very suspicious that you&#8217;ve actually missed important dimensions, or misunderstood what the option-space is.</p><p>(Also, maybe people should talk less about Pareto-efficiency (and just generic &#8220;efficiency&#8221;) and instead talk more about <a href="https://en.wikipedia.org/wiki/Kaldor%E2%80%93Hicks_efficiency">Kaldor-Hicks efficiency</a>, which often seems to be closer to what is meant in practice, and is more obviously a reasonable thing to object to.)</p>]]></content:encoded></item><item><title><![CDATA[AI timeline modes should be significantly before medians]]></title><description><![CDATA[I think your probability distribution over possible AI timelines should have its mode significantly before its median.]]></description><link>https://lukasfinnveden.substack.com/p/ai-timeline-modes-should-be-significantly</link><guid isPermaLink="false">https://lukasfinnveden.substack.com/p/ai-timeline-modes-should-be-significantly</guid><dc:creator><![CDATA[Lukas Finnveden]]></dc:creator><pubDate>Thu, 15 Sep 2022 00:28:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!26Mk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I think your probability distribution over possible AI timelines should have its mode significantly before its median. Here are a few different (highly related!) reasons for this:</p><p><strong>Semi-informative priors</strong></p><p>The most naive prior you could have about AGI arrival dates is maybe something like a Laplace prior. This has the mode <em>now</em>, significantly before the median. (For more on semi-informative priors, see <a href="https://www.openphilanthropy.org/research/report-on-semi-informative-priors/">here</a>.)</p><p><strong>We live in a special time</strong></p><p>AI is currently improving a lot per year. Specifically, there&#8217;s recently been a large scale-up in compute, with accompanying impressive new model-performance. Right now, two things are happening:</p><ul><li><p>Models are being scaled up even further, by spending more.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ul><li><p>Increasing spending by an order of magnitude gets 10x more expensive each time you do it, so we should expect the trend towards increased spending to slow down significantly within the next few orders of magnitude. This is important if your timelines are largely based on amounts of training compute.</p></li></ul></li><li><p>Algorithms are being adapted to these new big model-sizes. (E.g.
scaling laws, see <a href="https://arxiv.org/abs/2203.15556">chinchilla</a>, or figuring out how to best do step-by-step reasoning, see <a href="https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html">minerva</a>.)</p><ul><li><p>In general, I expect to see significant performance boosts in the years after scaling up training compute &#8212;&nbsp;not just immediately when training compute is scaled up.</p></li></ul><p>By contrast, we have no idea whether any <em>particular</em> future years will be such a special time.</p><p>I do trust a sense that AGI isn&#8217;t <em>right</em> around the corner, e.g. just a few months away. But I think this knowledge decays over the course of a few years, and that 5-10 years from now both benefits somewhat from being close to the special now <em>and</em> can&#8217;t be confidently ruled out.</p><p><strong>You might want to reserve some probability mass for non-specific lateness</strong></p><p>I sometimes see AI timelines include non-trivial probability mass on things like &#8220;maybe we&#8217;ll never build AGI&#8221;, &#8220;maybe I&#8217;m just confused and shouldn&#8217;t trust my inside view&#8221;, or &#8220;maybe we need a fundamentally new unpredictable paradigm&#8221;. These will typically push your median later (by taking up probability mass at the end) but mostly shouldn&#8217;t shift the mode.</p><p><strong>There are a lot more years in the long-term than in the near-term</strong></p><p>Let&#8217;s say that your median AI timeline is X years out. Between now and then, there&#8217;s just X years. But my sense is that it&#8217;d be quite unjustified to have a precipitous drop-off in probability within the X years after the median, and that the drop-off should instead be quite gradual. But this means that, if we look at all years with non-trivial probabilities of AGI, more of them will come after the median than before the median, implying that the years before the median must on average have higher probabilities.</p><p>(Relatedly, after looking at some graphs of AI timelines, I&#8217;ve come to appreciate &#8220;there are a lot more years in the medium future than in the near future&#8221; as a pretty decent argument against very short timelines.)</p><p><strong>Implications</strong></p><p>So why does this matter? Sometimes people describe their AI timelines by quoting a median, and it&#8217;s easy to get the impression that those median timelines are what we should be planning for. I think this might be a mistake.
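</p><p>As a minimal sketch of the gap (all numbers made up, and the lognormal shape is just an assumption for illustration), any right-skewed distribution over AGI arrival times has its mode well before its median:</p><pre><code># Toy sketch: a hypothetical right-skewed (lognormal) distribution over
# "years until AGI". None of these numbers are a real forecast.
import math

mu, sigma = math.log(18), 0.8          # made-up parameters

mode = math.exp(mu - sigma ** 2)       # ~9.5 years out
median = math.exp(mu)                  # 18 years out
mean = math.exp(mu + sigma ** 2 / 2)   # ~24.8 years out (long right tail)

print(round(mode, 1), round(median, 1), round(mean, 1))  # 9.5 18.0 24.8
</code></pre><p>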
Consider this graph, of a timeline with a median of ~2040 but a mode of ~2030.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!26Mk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!26Mk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 424w, https://substackcdn.com/image/fetch/$s_!26Mk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 848w, https://substackcdn.com/image/fetch/$s_!26Mk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 1272w, https://substackcdn.com/image/fetch/$s_!26Mk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!26Mk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png" width="794" height="526" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:526,&quot;width&quot;:794,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:192751,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!26Mk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 424w, https://substackcdn.com/image/fetch/$s_!26Mk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 848w, https://substackcdn.com/image/fetch/$s_!26Mk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 1272w, https://substackcdn.com/image/fetch/$s_!26Mk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F9c6c834e-b8df-4c98-8acc-b52365747300_794x526.png 1456w" sizes="100vw" 
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>When I look at this, it intuitively doesn&#8217;t at all seem clear to me that we should be aiming to have positive impact if AGI happens in 2040, as opposed to 2030.</p><p>In particular, sometimes people cite leverage arguments as reasons to focus on short timelines. And sometimes people criticize leverage arguments as not working well in practice, because people will do much better work if they work with beliefs that they actually understand and have models for than if they imagine themselves into some improbable situation where they have very high leverage. Looking at a graph like the above makes me notice that in practice, people who have median-2040 timelines will often place <em>more</em> probability mass on &#8220;AGI in 2030&#8221; than &#8220;AGI in 2040&#8221;, and so it&#8217;s totally unclear to me whether criticisms like that should push them towards the former or the latter. My best guess is that it doesn&#8217;t matter much, and so that it makes sense to think about leverage arguments when choosing between them.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Though I&#8217;m a bit surprised that spending hasn&#8217;t increased more since GPT-3.</p><p></p></div></div>]]></content:encoded></item></channel></rss>