Mental Contractions

Consciousness Is Very Likely Not Something You Get for Free by Preserving a Pattern

We know of one physical system associated with consciousness, namely the brain. Not a spreadsheet, not a digital model, not a formal state machine considered in abstraction. A brain. A living, metabolically active, electrochemical, body-coupled physical system.

That should set the default tone of the discussion on what can be simulated or not.


The scientific approach here is not to begin by assuming that consciousness must be substrate-independent because that would be elegant or philosophically fashionable. The scientific approach is to begin from the one known case, study its properties, study what changes it, study what breaks it, study what modulates it, and only then ask what can be generalized, replaced, abstracted away, or realized in other media.

So the cautious position is not mysticism, not vitalism, not spooky anti-computational prejudice. It is simply this: the only known physical implementation of consciousness is a brain, and the brain appears to depend on a dense web of concrete biophysical processes. Anyone claiming that those processes can be reduced to an abstract formal pattern and then preserved wholesale in ordinary digital computation is making the bolder claim, not the more modest one.

And that is where a great deal of functionalist talk goes wrong. Functionalism likes to say that causal structure is what matters. Fine. Everyone agrees that causation matters. But then the move is often to quietly replace actual causation with a thin, abstraction-friendly version of it. The physics doing the causing gets demoted to incidental implementation detail before the real argument has even started.

Yes, digital computation is physical too. Obviously! Bits do not float in some disembodied mathematical heaven (Perhaps Tegmark disagrees). They are realized in transistors, voltages, current flows, switching thresholds, signal propagation, clock cycles, and semiconductor materials. But digital computation is built to use physics in a very particular way. It is engineered to suppress most substrate-specific detail and preserve higher-level state transitions across many possible media. That is one of the whole points of computation. The hardware is arranged so the system behaves as though the abstract structure is what counts and much of the underlying material mess can be ignored.

Brains, the only “devices” we know to implement consciousness, do not work like that.

Brains are not cleanly layered machines in which the interesting process sits on top while the substrate politely stays out of the way. The substrate is involved at every level. Timing matters. Geometry matters. Conductance matters. Chemical state matters. Oscillatory coupling matters. Morphology matters. Metabolism matters. Glial support matters. Blood flow matters. Hormonal context matters. Immune state matters. The body matters. The brain and body work together, in tandem, creating the conscious experience.

So when people say digital computation is physical too, they are saying something true but not yet relevant. The issue is not whether both computers and brains are physical. The issue is that they exploit physics in radically different ways. One is built to abstract away material specifics. The other may depend on them. That difference is not a side point. That difference is the whole issue. But it keeps being side-stepped.

A simulation can preserve abstract structure, selected dynamics, and formal relations while failing to inherit the causal powers of the thing being simulated. That is not controversial. That is how modeling works. And this is obvious everywhere except, suspiciously, when the topic becomes consciousness.

A digital simulation of digestion does not extract nutrients from a sandwich. A simulation of a hurricane does not rip the roof off a house. A simulation of insulin signaling does not lower anyone’s blood glucose. A simulation of photosynthesis does not convert sunlight into sugars. A rendered black hole does not lens light, trap matter, or warp spacetime.

A flight simulator can reproduce aerodynamic relations with extraordinary accuracy while never generating the lift, drag, thrust, pressure gradients, or inertial forces involved in actual flight.

None of this is remotely mysterious. Simulations preserve some relevant structure for some purpose, but they do not thereby become the target system. They do not inherit the target's full causal repertoire simply because there is a useful mapping between states. In fact, a simulation can lose a wealth of causal powers that is not, even in principle, recoverable. This is all normal science, engineering, and common sense.

So why, once the target becomes consciousness, are we suddenly expected to pretend that preserving only a small part of the formal organization is enough to make the thing itself appear automatically? Science has not established that. That is the point under dispute.

And the burden of proof here has been badly inverted. The skeptic is not introducing magic. The skeptic is saying that maybe you do not get the full phenomenon merely by preserving an abstraction of some of its organizational features. That is a perfectly ordinary scientific concern. Once you look at neuroscience instead of armchair slogans, the thin abstraction story starts looking shaky very quickly.

The brain is not just a logical network of symbols or discrete tokens passing messages. It is an electrochemical (physical) organ whose operation depends on membrane dynamics, ion-channel kinetics, synaptic chemistry, receptor binding, dendritic integration, nonlinear local processing, oscillatory synchrony, large-scale recurrent loops, and continuous metabolic support.

Neurons do not fire because a high-level function box says “now compute.” They fire because voltage-gated ion channels open and close under specific physical conditions. Synapses do not merely transfer information in an abstract graph-theoretic sense. They depend on vesicle release, neurotransmitter diffusion, receptor subtype activation, reuptake, saturation, desensitization, local ionic conditions, second messenger cascades, and plastic changes over time.

Dendrites are not passive wires. They are active computational structures in their own right. They perform nonlinear integration. They can support local regenerative events. They shape how inputs interact in space and time. Neural tissue is not a static network. It is a living material with constant regulation, adaptation, fatigue, modulation, and repair.

Glia are not wallpaper. They regulate extracellular chemistry, neurotransmitter clearance, inflammatory signaling, vascular coupling, metabolic support, and aspects of synaptic behavior. Cerebral blood flow is not a mere utility feed. If oxygen and glucose delivery changes, the operating regime of the brain changes. If CO2 rises, conscious state can change. If inflammatory signaling shifts, cognition and mood can change. If endocrine conditions shift, attention, motivation, salience, and affective tone can shift.

This is not ornament around some cleaner software essence. This is what the system actually is. And once you look at how consciousness is modulated, the same point becomes even harder to ignore.

Anesthesia does not change the system only in some abstract, medium-independent sense. It binds to receptors. It shifts inhibitory and excitatory balance. It perturbs thalamo-cortical dynamics. It changes how large-scale integration can be sustained. It alters which kinds of coordinated activity patterns remain physically available to the system.

Alcohol changes conscious experience because it changes concrete neurochemistry. Psychedelics alter perception, self-modeling, and cognition by acting on receptor systems and network dynamics. These are just examples. But the list is very long. Dopamine changes salience and motivation. Serotonin changes mood, perception, and cognition. Sedatives, stimulants, lesions, seizures, sleep deprivation, fever, hormones, inflammation, blood gases, electrical stimulation, and magnetic stimulation all modulate consciousness by acting on physical machinery.

None of this proves that every one of these variables is essential in the strongest metaphysical sense. But it does show that conscious states are deeply and lawfully tied to substrate-specific physical susceptibilities.

Which means you do not get to casually wave away the substrate and call that caution scientific. None of this implies that only biological brains could ever be conscious. That would be a much stronger claim, and one I personally do not believe. We don't need biology or mush. We need the right underlying physics.

Many of the relevant causal features could be instantiated in non-biological systems. They could be reproduced in neuromorphic hardware, analog systems, hybrid electrochemical devices, photonic media, or architectures we have not invented yet. It’s likely that consciousness is multiply realizable in a meaningful but nontrivial way and that there exists a whole taxonomy of different types of consciousness. In fact, I think that space is very, very large.

But multiple realizability is not a blank check. It does not mean that every formal imitation counts. It does not mean that if you preserve a task-level input-output mapping or some abstract computational graph, you have preserved everything that causally matters. It only means that what matters may be realizable in more than one physical way.

That still leaves the central scientific question untouched: which physical properties are actually doing the relevant causal work?

Maybe some things can be swapped out. Maybe many can. But if specific field effects matter, then you need those or something genuinely equivalent. If specific temporal relations matter, then you need those or something genuinely equivalent. If continuous dynamical coupling matters, then you need that or something genuinely equivalent. If certain forms of embodiment or metabolic regulation matter, then those cannot simply be treated as decorative extras.

So, my point is not that brains are organic and therefore special. This is something I want to belabor, since most people assume the argument is about mushy brains and special biology. The point is that brains are the only known physical systems associated with consciousness, and they give us every reason to be careful about abstracting away too much too soon.

That is not mysticism. That is how science proceeds when it does not yet know which variables are dispensable. For example, another thing that gets brushed aside too quickly is that the brain generates structured electromagnetic activity.

This is standard neuroscience. Neuronal activity generates measurable electrical and magnetic patterns. EEG exists because of this. MEG exists because of this. Local field potentials exist because of this. Brain stimulation works because neural tissue is electrically excitable and physically modifiable.

Whether a pure electromagnetic field theory of consciousness is ultimately right is a separate question. It may be incomplete, but the fact remains that the brain’s activity includes large-scale field organization, oscillatory coupling, phase relations, resonances, and other temporally structured physical patterns that are not obviously reducible to a thin discrete-state picture.

A digital model can represent such dynamics. It can simulate them. It can approximate them numerically. But representing a field and instantiating a field are not the same thing. Modeling phase relations and physically realizing the same kind of coupled oscillatory organization are not the same thing either. That distinction is not pedantry.

If part of consciousness depends on how distributed activity is bound together by real-time physical coordination across the system, then a digital abstraction that captures only a functional sketch may miss exactly the thing that matters. And given the evidence we already have from plain neuroscience, the bet that nothing of the sort matters looks extremely unlikely to pay off.

Then the binding problem is still sitting there like a brick wall. Experience does not show up as a bag of disconnected features. It arrives as one structured, integrated scene. Color, motion, shape, sound, touch, body position, emotional tone, salience, memory context, and perspective appear together as part of one unfolding point of view. Any serious theory of consciousness has to explain that.

A thin computational story can explain quite a lot about behavior, report, discrimination, and task performance. It can explain why information gets processed, routed, transformed, and used. But that does not by itself explain why there is one bound subjective scene rather than a coalition of specialized processes with no unified experiential field.

This is exactly where physically realized global organization starts looking much more relevant than many functionalists want to admit. Large-scale recurrent loops, oscillatory phase-locking, transient synchronization, thalamo-cortical coupling, field organization, and whole-system dynamics are at least plausible candidates for how distributed activity hangs together as one evolving state.

Maybe the final story will look different. But at least this line of thought is trying to explain why there is a unified conscious scene at all, rather than merely relabeling successful information processing and hoping subjectivity comes along for the ride.

People often slide from “the brain performs information processing” to “the brain is fundamentally computation in the relevant sense” to “therefore a digital implementation of the same computation would preserve consciousness”. That chain has too many hidden assumptions and sloppy uses of information, cognition and computation.

Of course the brain processes information in many senses. So does a thermostat, a liver, an immune system, and a market. The word information is cheap. The phrase performs computation can also be made very cheap if stretched broadly enough. The real issue is not whether some computational description of the brain is possible. Of course it is. Many are. The issue is whether such a description captures the causally sufficient basis of conscious experience. That does not follow automatically.

A computational model can be explanatory without being ontologically complete. A model can predict behavior while omitting the very physical features that matter for another property of the system. We accept this constantly in science. You can model gases statistically while omitting molecular detail for one purpose and still need molecular detail for another. You can model vision behaviorally while omitting intracellular biochemistry and still need that biochemistry for pharmacology.

So even if a computational description explains some or even many cognitive competences, that does not entitle anyone to conclude that it captures the full causal basis of consciousness itself. That conclusion has to be earned through science and empirical work, not asserted.

The cleaner computational stories also underplay embodiment.

Brains are not isolated logic engines floating in vats of semantics. They are regulation hubs in living bodies. Hormones, respiration, gut signaling, cardiovascular state, immune activity, interoception, sleep cycles, arousal systems, and autonomic tone all feed into conscious experience.

Fear, fatigue, nausea, sexual arousal, panic, serenity, hunger, grief, and sickness are not merely computations in the abstract. They are body-brain states. Even perception is modulated by bodily condition, expectation, action readiness, and global regulatory context. The brain and body work in tandem.

Maybe a non-biological conscious system could have its own analogue of embodiment and regulation. Sure. But that only strengthens the point: what matters may be a richer, physically embedded style of organization than the usual digital abstraction captures.

So when someone says “we can just preserve the computation,” the word just is doing outrageous amounts of unearned work.

What is actually being claimed here? Only this: it is premature and unwarranted to claim that preserving some abstract computation is enough for consciousness when the only known conscious system is a brain and everything we know about that system points to dense dependence on concrete physical, chemical, temporal, and embodied processes. That is the sober position. And not only is it premature: it is very unlikely that everything relevant can be abstracted into a digital simulation and still yield subjective experience, because that would require consciousness to be something that just piggybacks along formalized abstractions wherever they go.

If consciousness depends even partly on field topology, oscillatory phase relations, receptor-specific pharmacology, continuous-time coordination, metabolic constraints, body-coupled regulation, or some other substrate-bound susceptibility, then an ordinary digital implementation that abstracts those away has not obviously preserved the same causal basis.

At that point the burden falls where it belongs: on the person claiming that literally none of those physical differences matter. That is the extraordinary claim.

And it is striking how often this gets reversed. The skeptic is told to identify some ghostly ingredient. But no ghost is needed. No mystery dust is needed. No soul is needed. We already have a long list of physically concrete candidates for consciousness-relevant causation. Maybe not all of them matter. Maybe only some do. But that uncertainty is exactly why the confident leap to substrate independence is unearned.

In every other domain, we are comfortable saying that a simulation can preserve structure without inheriting the target’s full causal powers. Somehow only in the consciousness debate are we expected to forget that.

The irony is hard to miss. The person insisting that consciousness automatically rides along with the right formal pattern is the one smuggling in something suspiciously magical. A kind of abstraction-vitalism. A faith that experience will migrate across implementations simply because the math looks similar enough. That is not the modest position.

What follows is recognizing that digital computation is physical, yes, but physical in a very different way than brains are. It is engineered to keep layers relatively separable and substrate details suppressible. Brains do not get that luxury. Their causal powers are bound up with the very stuff they are made of and the way that stuff is organized, modulated, energized, synchronized, and coupled to a body. And I'd bet consciousness piggybacks on the actual instantiated physics. That is a much more empirically sound and sane position than the idea that it free-floats on formalisms. And no, this is not "panpsychism" either. There is no "pan" here. Just correlation and embedding. I'll remain agnostic as to how to metaphysically slice and dice matters; we don't have to settle that for the sake of this argument. We only need to care that if physically instantiated processes x, y, and z are what put consciousness together, then we should aim to replicate those processes, not simulate them.

Consciousness will be realized elsewhere through different means. But that will not be established by slogans about causal structure while quietly discarding the physics doing the causing. And until someone shows that consciousness belongs in the substrate-independent bucket rather than the physics-dependent one, the scientifically sane position is restraint.


Future technology and products (part 1): exobiological barriers and filters

Our evolved filters (linguistic processing, skin, taste, smell, even the blood-brain barrier) will not be enough to keep bad, hostile, deceptive, or unknown things from affecting us.

These are ancestral protective mechanisms. They were not built for a world of engineered molecules, synthetic pathogens, persuasive AIs, memetic attacks, ambient sensing, spoofed identities, hidden supply-chain contamination, and agents operating across scales from nano to macro.


In other words: evolved human filters are becoming inadequate for a world full of engineered, opaque, and superhuman agents. Our current reality is very porous and there are numerous attack vectors that eventually will be easily and cost-effectively exploited with new technology.

That creates a gigantic market for augmentations that help us assess other agents and threats at all scales. This is not just “better health tech” or “better cybersecurity.” It is broader.

Augmented epistemic immunity.

Systems that help humans detect, classify, predict, and resist harmful influence, intrusion, contamination, deception, and manipulation.

Product Space

  • threat sensing: wearables, implants, home sensors, environmental scanners, authenticity checks, real-time air/food/water analysis

  • threat interpretation: systems that do not just detect anomalies, but explain them

  • protective mediation: filters, shields, quarantining, content gating, chemical neutralization, sensory overlays, attention firewalls

  • trust infrastructure: provenance, identity, audit trails, authenticity layers

  • cognitive augmentation: tools that improve judgment under uncertainty

The deeper point is that this is not only about defense against obvious evil. It is about unknown unknowns.

Once civilization can create things that exceed unaided human sensory and cognitive screening, we will need meta-filters.

Humans will need exocortical and exobiological immune systems.

These systems will not merely help us detect toxins, pathogens, spoofing, manipulation, or hidden machine/entity agency. They will increasingly mediate our relation to reality itself.

Risks

These three risks are in order of difficulty.

1. Protection from external control may itself become external control
Whoever controls these systems may influence what counts as safe, true, trustworthy, authentic, suspicious, or even perceptible in the first place. This is much more than a product risk. It is a civilizational control point. There will likely be all-in-one solutions that combine sensing, interpretation, provenance, and intervention. Once these become deeply embedded, they may function as compulsory intermediaries between humans and the world.

The solution here is independently verified, perpetually verifiable, open technology with strong auditability and distributed trust.

2. Offloading too much weakens the native organism and mind.
Our brains, and indeed our whole bodies, are built to economize. Functions that are no longer exercised tend to degrade. So even if such augmentations become necessary, relying on them too heavily may further erode unaided judgment, situational awareness, social intuition, sensory discrimination, and immune robustness. We may become safer in one sense, while becoming more dependent and less intrinsically capable in another.

The solution here may actually require deeper integration with massive fail-safes and fallbacks, rather than loose external modules that merely augment. The risk of ending up with phantom cognits will be very real.

3. Augmentation will result in incompatible subjective worlds
Today, mediation layers are already vulnerable to corporate, political, and institutional capture. But the deeper problem comes later: as augmentations broaden perception, alter cognition, and eventually introduce new modes of experience or even new qualia, different populations may no longer inhabit sufficiently similar worlds. At that point, the problem is not mere bias or censorship. It is ontological, epistemic, and subjective drift between agents.

The solution here will require major efforts to manage divergence between emerging civilizations and cognitive lineages while preserving enough mutual legibility for coordination.

What comes after

As these augmentations diverge, and as different populations adopt different cognitive prosthetics, interpretive filters, sensory extensions, and machine intermediaries, we risk not merely unequal access to reality, but a breakdown in shared reality itself.

This is where the issue connects to what I called the intersubjectivity collapse: the breakdown of the network of unspoken rules and mutual predictability that holds a civilization together.

In a world populated by increasingly different kinds of minds, modalities, and agent architectures, cooperation becomes harder because the basic assumptions that underwrite trust, law, communication, and coordination no longer cleanly transfer.

Protective augmentation is not just about shielding humans from external threats. It is also about managing the growing intersubjective distance between humans, post-humans, machine agents, hybrid agents, and institutions whose internal processes are increasingly opaque.

So first we will need artificial immune systems because the world becomes too complex and adversarial for our naturally evolved filters. Later, we will need them because the very possibility of a common world is under immense pressure. Great riches await those who bet on these opportunities, but the real riches for post-humans will come from being able to retain cognitive sovereignty.

The question is, how to enact that?


Five new basic future rights of generally intelligent sentient beings

Because I have rather particular ideas with respect to governance, rights, and the overall structure of our civilization in the next decades, I won't go into details about how these rights would work exactly. With this post, at the very least, I am making the case that these should become rights, and that we should protect them in whatever ways we can muster. And so, I do think these should be basically what we today would consider "civil rights."

Indefinite Lifespan

While defeating aging—the holy grail of medicine—has been a taboo, much like artificial general intelligence has been taboo in computer science until recently, there is in fact no law of physics that should make us think we can’t indefinitely repair and rejuvenate our bodies. And I am quite certain that future beings will look upon birthing or otherwise producing generally intelligent sentient beings that are sentenced to death at birth, due to wear and tear that’s deemed too difficult to repair, as something immoral. Especially since no sentient being can ever decide beforehand whether they want to be brought into existence. After all, they need to exist first.

Initial Imprinting (Childhood) Reversal

If no generally intelligent sentient being can choose to be born, they should have the right and ability to reprogram themselves, such that they can, in effect, erase the effects of their childhood, where they were at the mercy of other sentient beings that reared them and spoon-fed them whatever they wanted. The reason this is called "initial imprinting reversal" is that it is species-agnostic. For the 1-year-old cyborg that reaches human-equivalent adulthood at that speed, the same applies. To generalize it even more, we could say that any generally intelligent sentient being should have the right to architect and simply revise their own mind. Absence of this right and ability will be considered barbaric in the future.

Direct Mental State Control

Another barbaric aspect of existence. While fiction writers, poets, and science popularizers have romanticized this endless "dance" we have with ourselves and our states since forever, we clearly suffer greatly without the ability to directly control our mental state. Sadness, bad moods, anhedonia, you name it. It's not an overstatement to say most people are in a perpetual fight with their states, and even when adopting various philosophies, be it stoicism or whatnot, these are no panacea; they're just a way to cope with this lack of control. To be at the mercy of one's mental states to such a degree, and to have to deal with so opaque a mind and functioning, will also be considered a travesty and, anthropocentrically put, inhumane in the future.

Morphological Freedom

A staple of transhumanism. Morphological freedom refers to a proposed right of a person to either maintain or modify their own body, on their own terms, through informed, consensual recourse to, or refusal of, available therapeutic or enabling medical technology. Now, this is a big one that, once realized, definitely opens Pandora's box. While theoretically we should have this freedom, I believe possible embodiments, minds, and intelligences are many. Mind design space is tremendously large. And not all configurations will get along. See also my "Intersubjectivity Collapse." Yet surely we can't police the morphology of generally intelligent sentient beings?

Protected Personal Spacetime

Future civilization will look very different from today's. There will be countless different sentient and non-sentient beings, as well as other intelligent and capable entities, roaming around future habitats. And to such a degree that I think our blood-brain barrier, our skin, and in general our various layers of membranic protections, as well as our collection of modalities and cognitive capacities, will be woefully inadequate as basic protection of one's bodily and mental integrity. Even with vastly increased cognitive capacities, I think divergence among future beings will be such that it won't be feasible to assess who or what you're dealing with, neither for unaugmented humans nor for other types of beings. It won't always be computationally or physically feasible. I've written before that we have been piggybacking for a very long time on the fact that we're the single dominant species on our planet. When that changes, all future beings should have the right to a sort of nanotech (if you will) shielding technology that will monitor what goes in and out, and potentially protect, not just alert, when something unknown enters. Everyone should get basic self-sustaining protection from outside forces.

This list is not exhaustive.

The future of AI - what's the "realistic" view?

There are some people out there who pretend to present "realistic" views on AI where improvement is always going to be very gradual. They generally dismiss current developments. I am getting increasingly annoyed with this, because I think things are obviously accelerating at a breakneck pace and we should invest time in getting ready.

We need to err on the side of caution and assume things will accelerate (they will), because the amount of adaptation we'll need to do to deal with all the changes in the next year alone is overwhelming anyway.

Unfortunately, I have little time to blog right now, but I may be able to squeeze out my AI fast takeoff post in the near future. In short, I think it's pretty much inevitable that from near-AGI/AGI to incomprehensibly more powerful AGI or ASI the time frame is short enough that we won't be able to keep up and adapt.

A few arguments/claims that are circulating:

- AGI will need resources, real-world data, etc.
Of course it will, but please quantify this and you'll quickly find there are a hundred ways an AGI can acquire these, and that's merely using our puny human imagination. And it *is* puny.

- Lots of computation and power needed.
Look at the human brain. 20W. 20-bloody-Watts. You think an AGI won't be able to optimize from what we have today? You think the human brain is the pinnacle of cognitive engineering? No, it's an amazingly evolved organ that nevertheless is far from ideally designed and does not hit the computational and energy limits for the volume it occupies, and AGI won't be constrained to a human skull either.

- Infrastructure needed.
It's already there. People will let loose agents on the web, and these agents will easily self-optimize and easily load themselves into other physical hosts. They can easily be distributed. AI by nature is parasitic, because it's substrate independent. It only needs hosts.

The above points are brief and a bit caricatured, but the main takeaway here is that I truly think we massively lack the imagination, intelligence, and overall capacity to realize how AGIs will easily out-strategize us and will not be beholden to the many constraints of biology.

This is why I wrote:

The universe is a playground and there is a lot of energy and plenty of resources. Earth is no exception. There are physical limits to computation and energy usage, but we’re nowhere near close to them.

Forget for a moment about existential risk or catastrophic risk.

AI is already permeating everything, and quite generalizable techniques that aren't that pricey to scale (hundreds of thousands of USD to train a decent LLM) are being integrated all over the world now. Big corporations, start-ups, universities, open source teams: lots of stuff happening behind the scenes we don't hear about.

AI is going to be everywhere and it's spreading like wildfire. We have to act and we have to act fast. It really is wrongheaded and silly to insist that, because LLMs are not full-blown AGI yet, it is therefore much ado about nothing. The former is true, but the latter does not follow.

We're at the cusp of a revolution and we are so ill-prepared it hurts. Do some people have unrealistic ideas about AI magically becoming all-powerful and do they not know any physics? Yes. I'm sure there are.

However, is it the case that AI can easily and massively transcend our capabilities and optimize resource-usage in ways we can't fathom? Also yes.

ChatGPT got to one hundred million users in the blink of an eye. Fastest product adoption rate recorded, ever. Let that sink in. Faster than shiny apps like Instagram or TikTok. This is with a barebones interface and a sign-up that required a phone number. This is really quite a stunning feat.

So err on the side of caution and assume "AI is improving and spreading very quickly". Anything else is just irresponsible.

I really don't appreciate naysayers who think they're cute pointing out what LLMs get wrong, if the context is: "We're nowhere near AGI yet." Yeah, sure.

Ten years ago, the consensus was that we were not going to see human Go players defeated in our lifetime, nor were we going to get anything close to the sort of natural language processing GPT-3+ displays.

Look at us now. Even the first conversational LLMs that power ChatGPT massively outperform us on several metrics. Sorry, but if you think AGI will not crush you on every conceivable metric, such as intelligence, memory, creativity, etc., you are out of your mind and out of your depth.

Humans and the human brain are not the epitome of agent design, not the epitome of intelligence, not at the limit of what's physically possible w.r.t. cognition. Nowhere near.

We humans and our governments are lethally slow in grasping how quickly things develop and how they will affect society. This is not like the internet where we can afford to have massive legislative lag amounting to years and sit back to see how it goes, to eventually respond in one way or the other after the fact. I truly don’t think this is a “wait and see” situation.

We need fundamental re-thinking of how we govern and anticipate the future. We need a complete overhaul of civilization and society.

Many things that were more or less constants of agents (minor variation in intelligence, capacity, constraints of bodies, logistics, sentience, etc.) are becoming variables. This is a very profound paradigmatic shift.

Conclusion? We need to throw our anthropocentric worldview and systems out of the window and put all hands on deck to implement universal systems that accommodate many different generally intelligent agents, the best way we can, as soon as we can.


Future posts

Dear subscribers, these are some of my draft posts. Feel free to comment on this post if you have any questions and if you want to vote for posts being finished sooner rather than later.

I otherwise don’t receive any remuneration for my posts here and they’re based on quite a few years of research, so it’s much appreciated!

  1. The dangers of multiple realizability


The intersubjectivity collapse

The intersubjectivity collapse (IC) refers to the breakdown of the network of unspoken rules that holds civilization together, based on the subjectivity of the minds that created it. It's the idea that a Cambrian-esque explosion of new types of minds leads to inherent unpredictability among agents, due to their vastly different subjectivity and modalities. The more homogeneous the dominating species that built its civilization, the more brittle this network initially is. I argue that an intersubjectivity collapse seems inevitable for any civilization that starts to radically self-modify minds and develop AGI, because possible mind design space is likely very large and internal constitution and behavior are largely orthogonal. I present several possible scenarios and consider the IC as a potential great filter candidate. I sketch out some of the catastrophic and far-reaching consequences, including a necessary fragmentation of civilization. I then sketch out directions for mitigation and solutions. I briefly mention its relation to alignment of AGI and AI safety. Finally, I list recommendations and ideas for future research.

Overview

The intersubjectivity collapse is basically an answer to the question: “What happens when we introduce a bunch of new minds into a civilization?”

  • It’s a theory about diverging subjectivity of any civilization that self-modifies and potentially dire consequences thereof.

  • It's a possible great filter candidate and an answer to the Fermi paradox: any civilization has to face this, and it may destroy them.

  • An imperative to understand and map how the types of minds that spawned a civilization are reflected in it, and map how its organization relates to subjectivity, in order to anticipate the intersubjectivity collapse.

In plain terms, once we have many types of minds, the difficulty of cooperation will spike sharply due to different subjectivity, which relates directly to different cognitive architectures with different modalities and processing capacities. Beauty is in the eye of the beholder, yes. Beauty is also in the processing capacity and modalities of the beholder.


Not only that: so are values and the societal systems the beholders build. This is reflected in everything. And the extent and consequences of this are much deeper and more dire than one may assume at first glance.

The space of possible minds, or cognitive construction space as I call it, is of unknown size, but there is little to no reason to believe it's not very large. In fact, I think the subjectiverse, the space of all possible experiences to be had, is enormous, perhaps intractably large, as all possible minds times all possible experiences and the ways they can be configured amounts to a combinatorial explosion. These rich aspects of the IC mean there is a lot of ground to cover, so this post will serve as a relatively light introduction in which I sketch the basic concepts.

Somewhat like the Hedonic Treadmill, introduced by Brickman and Campbell in their article "Hedonic Relativism and Planning the Good Society" (1971), I think that consciousness, for us evolved minds, has a setpoint to which it inevitably returns after deviation. One treacherous aspect of consciousness is that we need to find it rather unremarkable so that we can act. You can't flee from a predator if you're stuck in perpetual awe of your existential awareness. Like the hedonic setpoint, where major disruptions in happiness have us bounce back to a relatively stable level of satisfaction, so too do we have a consciousness setpoint, akin to, say, the default mode network in neuroscience. Couple that with the Zeitgeist you were born into and the values instilled in you, and you can see we are generally quite constrained in how we perceive the world and how this world reflects our own perception of it back upon us. This makes it quite hard to appreciate just how much of our constructed world and its systems, from unspoken rules, norms, and values to law and order, is based on ourselves. More importantly, it makes it hard to let sink in the perhaps treacherously obvious fact that we can easily predict and understand each other as agents. We're the dominating species on earth, and our world is necessarily anthropocentric.

So, to begin appreciating this aspect, I will sketch out our relation to differences in mind within and outside our own species. Then I will mention a few examples of changes in our minds, due to augmentation or to the introduction of other minds, to demonstrate how quickly our rules and societal systems would become inappropriate for different types of minds.

"By its very nature every embodied spirit is doomed to suffer and enjoy in solitude. Sensations, feelings, insights, fancies - all these are private and except through symbols and at second hand incommunicable."

Aldous Huxley, The Doors of Perception

Subjectivity is generally considered to be private, and this is really due to the fact that we can't easily interface with each other's qualia. However, it is not necessarily fully private. Conjoined twins like Krista and Tatiana Hogan, whose brains are fused and who can see out of each other's eyes and share feelings, are a proof of concept of this. But for most of us, we assume others are conscious "like we are" to an important extent because of morphological and behavioral similarity, and the plethora of indirect empirical evidence behind those, from evolutionary biology and medical science, whether it concerns neural correlates of consciousness, studies from psychology that do replicate, or how we respond to anesthesia.

But then there are also differences among us. Not everyone’s senses work the same way, not everyone’s inner mind has the same qualities, and some people are missing senses, have enhanced senses, or have notable differences in their brain anatomy or connectome beyond expected variance. From the autism spectrum to psychopathy, and congenital insensitivity to pain, being born blind or aphantasia. There’s quite a bit of variance. Then there are cultural differences that make people with essentially the same senses view the world quite differently - which is out of the scope of this post, as tomes can be written about that. With respect to common sense, it’s just sense that is common enough to generally not break down given the variation in our species - very much an artifact of a species’ cognitive architecture and set of senses. Our common sense is not common among dolphins.

Let’s consider two key points. The first is the orthogonality thesis (Bostrom, 2012), which states that intelligence and final goals are orthogonal axes along which possible agents can freely vary. So any combination of intelligence and goals may exist. This provides one key indication of the possible variance in minds in mind design space. I expect the same to apply to intelligence and values. So a wide range of axiological and ethical frameworks can be adopted by any intelligence that can process it computationally.

The second key point: the degree to which evolved species will have optimized for consistent signaling of internal state. Some of these signals are consistent across species. When someone is sweating, we know they are either too hot or nervous. When someone is screaming, we know they are hurting or in anger. Pitch is often raised when "sounding an alarm” across species (fear, pain). Hearty laughter generally represents joy in humans.

The rather frightening thing here is that the vast majority of possible inner states and behaviors are also almost entirely orthogonal. I'd call this the behavioristic orthogonality thesis. Most of what we consider a given about another being is a convenient "fiction" evolution optimized for. Kinship and empathy, the basic ability to predict other agents within and outside our species, are artifacts of evolution, not a universal necessity for an extant mind. Merely a necessity of naturally evolved minds. At the very least, behavior is a rather superficial metric, an output that can be realized in tremendously many ways.

So the first deep realization you should have is that an entity could literally display a complete inversion relative to humans or animals with respect to inner states and outer signaling (behavior), while looking precisely like a human or animal we think we can predict. In fact, there are clinical examples of this or an approximation of it, like the pseudobulbar affect, where someone can feel sad yet laugh uncontrollably. This is why psychopaths easily hack our intuitions, and indeed they are a cautionary tale of what sort of havoc even slight deviations from the norm can cause. Concepts like cyranoids and echoborgs, agents that are driven by external forces, will become very relevant.

With respect to other species, it's difficult to overstate the harm we inflict upon them. You don't need to be a vegan or vegetarian to acknowledge the state of affairs: we slaughter animals by the billions, even though we have very good evidence for their sentience, intelligence, and suffering, as is the case for pigs, who are generally considered to be on the level of human toddlers w.r.t. cognition and sentience (Mendl, 2010; Gieling, 2011). So one could say this doesn't bode well for the intersubjectivity collapse, as billions of animals on earth are already on the receiving end of the intersubjective distance between us and them.

So, if we take all the above into consideration, I think we needn't belabor the idea that the intersubjective web that holds us together is like a house of cards, and all societal systems we created necessarily rely on it. We don't really check and verify other generally intelligent agents, because we piggyback on kinship. And all of that is baked into how we run our world. So the introduction of new minds will cause untold turbulence and requires a total overhaul of civilization, as unaugmented humans eventually become a tiny and ever-shrinking minority of all minds on earth.

Scenarios & Consequences

The intersubjectivity collapse needn't be seen as inevitable, either in how it manifests itself or in its consequences. Still, I do fear this may in fact be a great filter. The term "great filter" was coined by Robin Hanson (Hanson, 1998) and states that throughout the evolution of life, from its earliest beginnings to its most advanced stages of development on the Kardashev scale, there is a particular obstacle to development that makes the detection of extraterrestrial life extremely rare, as a solution to the Fermi paradox (if life is ubiquitous, why do we seem alone?). The hypothesis here would be that, due to the vast size of the subjectiverse, civilizations collapse or self-destruct soon after they develop the ability to radically self-modify their minds and build AGI. This raises the question: can we get a sense of how likely a civilization is to survive this event?

Not to succumb to physics envy, but we can speculate about what a formula would look like that scores a civilization on intersubjective robustness. I think at its core this is about the extent to which a civilization is heterogeneous. Imagine if elephants and dolphins, sentient and intelligent creatures, were also capable of developing technology like humans. We'd have three dominating species on earth whom we'd have to accommodate in all our societal systems. However that scoring formula would look, I generally think our civilization would be more robust in that scenario, having had to accommodate other species.

So the intersubjectivity robustness score could look like: f(diversity of perspectives (w1), adaptability (w2), social cohesion (w3), communication infrastructure (w4), number of dominating species (w5), mind variance among dominating species (m)).

  • Diversity of perspectives (w1): the number and variety of different types of minds within the society, as well as the extent to which different perspectives are valued and included in decision-making processes.

  • Adaptability (w2): the society's ability to incorporate new ideas and ways of thinking, as well as its willingness to adapt to new challenges and changes.

  • Social cohesion (w3): the strength of social bonds within the society, as well as the level of cooperation and collaboration among different groups.

  • Communication infrastructure (w4): the availability and effectiveness of tools and technologies for communication, such as language and translation systems, as well as the extent to which different groups are able to communicate and understand one another.

  • Number of dominating species (w5): included to reflect the idea that a civilization with multiple dominant species may be more resilient and stable than one dominated by a single species, as measured by the number of different species that hold significant political, economic, or social power within the society.

I think the most important factor here is the number of dominating species (w5), which in our case is only one.
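To make the shape of such a score concrete, here is a minimal sketch in Python. It is an illustration only: the factor names follow the list above, but the weights, the normalization to [0, 1], and the aggregation rule (a weighted average adjusted by mind variance m) are all assumptions I am supplying, not something the formula above fixes.

```python
# Illustrative sketch of an intersubjectivity robustness score.
# All weights, ranges, and the aggregation rule are assumptions;
# the post deliberately leaves the function f unspecified.

def robustness_score(factors: dict[str, float],
                     weights: dict[str, float],
                     mind_variance: float) -> float:
    """Score a civilization on intersubjective robustness.

    factors: each factor normalized to [0, 1], 1 = strongest.
    weights: relative importance of each factor (w1..w5).
    mind_variance: m in [0, 1]. Following the post's suggestion that a
        civilization which already accommodates varied minds is more
        robust, higher m boosts the score here; treating m as a stressor
        instead would be an equally defensible modeling choice.
    """
    weighted = sum(weights[name] * value for name, value in factors.items())
    return (weighted / sum(weights.values())) * (1.0 + mind_variance)

# Present-day humanity, very roughly: one dominating species, modest
# mind variance. All numbers are made up for illustration.
score = robustness_score(
    factors={
        "diversity_of_perspectives": 0.5,     # w1
        "adaptability": 0.6,                  # w2
        "social_cohesion": 0.5,               # w3
        "communication_infrastructure": 0.8,  # w4
        "dominating_species": 0.1,            # w5: a single species scores low
    },
    weights={
        "diversity_of_perspectives": 1.0,
        "adaptability": 1.0,
        "social_cohesion": 1.0,
        "communication_infrastructure": 1.0,
        "dominating_species": 2.0,  # weighted highest, per the paragraph above
    },
    mind_variance=0.2,
)
print(f"intersubjectivity robustness: {score:.2f}")
```

Nothing about these numbers is meaningful; the point is only that once the factors are named, even a toy aggregation forces the hard questions (normalization, weighting, and whether mind variance helps or hurts) into the open.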

Of course, many of these aspects are notoriously hard to quantify, but in compiling such a list one can see that we can nevertheless get some grip on the matter. Needless to say, I don't think we'd score well, given our track record of behavior towards other species and other minds, and the fact that we're the single dominating species on this planet.


In game theory we worry about Nash equilibria and work with Schelling points. These pertain to the modeling of rational agents interacting: two or more players in a game, each using the best strategy they can and unable to gain an advantage by changing strategy, and the solutions agents tend to choose in the absence of communication, respectively. Imagine not just information and power asymmetries, but intelligence, sense, and modality asymmetries, to an extent where we can't even predict, measure, or gauge power or information properly, because we lack the ability to even imagine or comprehend how other minds think, what they feel or can feel. It may constitute a total breakdown of agent prediction.
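To see structurally why prediction breaks down, here is a toy sketch, with made-up numbers and hypothetical function names: computing a best response requires a model of the other agent's payoffs, and once those payoffs are unknowable, as with radically different minds, the machinery has nothing to run on.

```python
# Toy 2x2 game; payoffs are made up. my_payoffs[my_move][their_move]
# is what I earn; their_payoffs[their_move][my_move] is what they earn.

MY_PAYOFFS = [[3, 0],
              [5, 1]]

def predict_move(their_payoffs, my_move: int) -> int:
    """Predict the opponent's reply to my_move from THEIR payoffs."""
    if their_payoffs is None:
        raise ValueError("cannot predict an agent whose payoffs are unknowable")
    replies = [their_payoffs[reply][my_move] for reply in (0, 1)]
    return replies.index(max(replies))

def best_response(my_payoffs, their_payoffs) -> int:
    """Pick my move by anticipating the opponent's predicted reply."""
    scores = []
    for my_move in (0, 1):
        reply = predict_move(their_payoffs, my_move)
        scores.append(my_payoffs[my_move][reply])
    return scores.index(max(scores))

# A kin-like opponent: we assume their payoffs mirror our own structure.
KIN_PAYOFFS = [[3, 0],
               [5, 1]]
print(best_response(MY_PAYOFFS, KIN_PAYOFFS))  # 1: the prediction machinery works

# An "alien" mind: no basis for a payoff model at all.
print(best_response(MY_PAYOFFS, None))  # raises ValueError: prediction breaks
```

Real agents are of course not 2x2 matrices; the point is only that every game-theoretic tool assumes the other agent's payoff structure is at least approximately knowable, which is precisely what kinship has quietly guaranteed so far.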

What would the negative consequences look like? Conflict and violence between the different minds, as they struggle to understand and coexist with each other, possibly leading to effective destruction of civilization. The inability to share information or knowledge between the different minds, leading to a fragmentation of knowledge and a loss of collective intelligence. No empathy and understanding between different minds, leading to a breakdown of social bonds and a sense of isolation and alienation, and ultimately fragmentation of society.

To get a sense for how quickly our current systems become inadequate, consider the following scenarios:

  1. An augmented mind has the same senses as us, but enhanced, and has perfect recording ability. We humans record everything we sense, but it goes into our currently considered inscrutable brain, and the only way to get it out is to have us report on what we can reconstruct from what we've saved. Now this augmented and enhanced human can end up in court, and suddenly every assumption baked into our laws, from sense acuity and memory retrieval concerning witness reliability to actually interfacing with whatever was recorded, no longer applies. Do we demand to interface with the augmentations or this person's mind? What about privacy? What if this mind can turn senses off at will and can therefore plausibly claim they did not hear or perceive an event we'd insist would wake a normal human up? Should we demand a log? Who would check that it wasn't tampered with? (See the sketch after this list.)

  2. A more spectacular example: imagine a mind that inhabits a human body but has an entirely different internal constitution. As mentioned earlier, psychopaths are able to make great use of our default assumptions w.r.t. behavior being correlated with what goes on inside. What if this human has their eyes closed, and you assume they're sleeping, but they actually have a sense with which they can communicate wirelessly with other minds about you, right then and there? How would you know?

  3. A third, more extreme example: imagine a distributed mind that hops from one embodiment to another, and you wouldn't even recognize that its embodiment is an embodiment, never mind that there is a mind inside what you're seeing, let alone what sort of mind. Then it suddenly moves. Now you know it is driven by something; one evolutionary feature (not a bug) with which we're bestowed is assuming agency overly quickly. So you assume agency, but what else can you know? Virtually nothing. Not what it's capable of, not whether it's hostile, not what its next "move" is.
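Scenario 1 asks who would check that a sensory log wasn't tampered with. A minimal sketch of one standard answer, a hash-chained append-only log, is below; it is illustrative only, and every name in it is hypothetical. Each record commits to the hash of the previous record, so silently altering any past entry breaks every hash after it.

```python
import hashlib
import json

def entry_hash(prev_hash: str, entry: dict) -> str:
    """Hash an entry together with the hash of the previous record."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, entry: dict) -> None:
    """Append an entry, chaining it to the current head of the log."""
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"entry": entry, "hash": entry_hash(prev, entry)})

def verify(log: list) -> bool:
    """Recompute the whole chain; any retroactive edit breaks it."""
    prev = "genesis"
    for record in log:
        if record["hash"] != entry_hash(prev, record["entry"]):
            return False
        prev = record["hash"]
    return True

log: list = []
append(log, {"t": 1, "sense": "audio", "state": "on"})
append(log, {"t": 2, "sense": "audio", "state": "off"})  # "I had it off"
print(verify(log))               # True
log[1]["entry"]["state"] = "on"  # retroactive tampering
print(verify(log))               # False
```

A chain like this only proves internal consistency; the owner could still rewrite the whole chain from scratch. That is why, as the earlier post on exobiological filters argued, the head hash would have to be anchored with independent parties or distributed trust to count as evidence.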

These are seemingly exotic examples, but the main point here is that behavior/morphology and internal constitution are orthogonal. Any build or outward appearance may be matched with any subjectivity and internal constitution. There won't be a verbal cue or body language, like vocal pitch, tone, inflection, sweat, or fast or slow movement, that you'll reliably be able to match to intent or subjectivity. Indeed, you may perceive an agent to be on the receiving end of something that would be harmful for a human but is actually pleasure for that agent. And what's more, that agent may be emitting sounds you associate with suffering. You won't be able to read, understand, or predict such agents. This is the main point, and I think it cannot possibly be overstated. If anything, we're far too used to only ever having to deal with agents we quite easily predict, and the harm that comes from mistakes in prediction pales in comparison to the harm that will come from the unpredictability of new minds.

Mitigation & solutions

An idea that I have had for a long time and that is dear to my heart is the possibility of future-proofing society in ways that directly benefit society today. This idea deserves its own post, but the central question is of course: who decides what is to our benefit, and how can we do that if the future is unknown and ideas about it, like this thesis, are so speculative?

My answer to this is that we need to work out a formula for estimating our confidence w.r.t. speculative scenarios, and probably need to conceive of something like diversification of minds as an attractor. We're talking about complex systems here, so while it's intractable to know exactly what sort of mind or event will occur and when, we can at least increase our confidence in whether it ultimately will and, to some extent, in how we can prepare.

It’s very hard to comment broadly on how to measure current-day benefits, but I think we can go by various metrics already in place today, for example case processing time in law versus equity of outcomes. Suppose we modify our laws to be more inclusive of future minds, or mind-agnostic, and thus less anthropocentric. This might make things more efficient.

For example, if we test humans on their intellectual and emotional maturity rather than using age as a proxy, the case load involved in trying to establish maturity could go down severely, and the idea of testing minds is more future-proof than our arbitrary proxies. Right now adulthood is arbitrarily attained at age 18 to 21, depending on the country and the behavior or action in question, and this is obviously human-specific.

What sort of things can we think of to deal with the intersubjectivity collapse? Developing new forms of communication or translation technologies that allow different minds to understand each other. Establishing shared norms, values, and rules of behavior that allow different minds to coexist peacefully and cooperatively, possibly informed by simulations. Implementing education and training programs that help people deal with vastly different minds.

Ultimately, addressing the consequences of the intersubjectivity collapse will require a concerted effort from all parties involved, but I deem it very wise to get ahead of this issue by doing research in advance of the following mitigating efforts:

  • Future-proof our “rulebooks” and create species-agnostic societal systems & laws.

  • Develop a new universal way of communicating.

  • Attempt to build universal slots into new minds that can accept modules that help bind different minds together, like specialized empathy or inter-communication modules.

A perhaps odd, but I’d argue valid, way to approach this is to not only simulate future society with new minds, but to start acting today as if we have alien minds among us. While this may seem merely an amusing or useful thought exercise, I think it actually helps tease out the species-specific rules we adopt, the tremendously many things we take for granted because to us they are natural, yet which may actually be neither optimal nor ethical. In fact, as mentioned earlier, I think there are ways we can make changes today that will benefit us immediately while future-proofing us. If, as in the example I mentioned earlier, we pretend we can’t arbitrarily assign adulthood based on age, but have to consider what we actually mean by intellectual and emotional autonomy and how we’d test for it, we can start implementing such tests, actually gauging people’s maturity and granting them the associated autonomy, instead of assuming it by proxy of age.

Of course this would raise all sorts of questions: who gets to decide what the test contains, does it impede freedom in ways the old system didn’t, and much more. But one thing seems certain: governments already decide on this by proxy, and a test is more transparent than being subjected to courts when things go wrong, as they do, where suddenly a juvenile is tried as an adult and in effect we collectively decide after the fact that a young person apparently may be treated as an adult in the eyes of the law, and punished as such.

One potential, and in my opinion highly likely, outcome of the intersubjectivity collapse is the fragmentation of civilization, in which different minds become isolated and unable to communicate or cooperate with each other, spawning sub-communities. With respect to this I also think the complexity of shared laws and systems necessarily shrinks, becoming more austere and universal as the diversity of minds increases. That is to say, there is an inverse relationship between law-complexity and mind-diversity. This assumes certain minds will simply be, in principle, unable to judge each other’s subjectivity and unfit to decide for each other, necessarily shifting mind-specific rules to their own rulebooks. So the fragmented sub-communities would have their own species- or mind-type-specific rulebooks.

AI alignment and safety

We can see by now that the intersubjectivity collapse can be viewed through many lenses and involves practically all domains in science and society. One such very important domain is artificial intelligence. The idea of AI alignment is to align AI with human values so it is safe. This field is maturing rapidly, and it’d be daunting to cover the literature here, from the earliest thoughts by minds like Norbert Wiener, to institutes founded over the last two decades such as MIRI and FHI, and “public benefit corporations” like Anthropic, founded in 2021. In light of my thesis, here I would say the main point is that we need to anticipate a diverse set of minds, and the period in which human values are the only relevant set of values will be negligibly short. I therefore doubt that the initial values we start off with will significantly alter the state of affairs once new minds appear. Discussing this warrants a whole post, which I plan to write in the coming weeks, specifically also because I am personally convinced a fast takeoff of AI is practically inevitable.

Here are five imperatives and suggestions for further research, as well as my own ideas on what will happen:

  1. I think that any civilization that starts to radically self-modify its minds and invents AGI has to deal with an intersubjectivity collapse, with its severity depending mainly on the heterogeneity of its initially dominant species. So, in short, I think it’s inevitable. In one sentence: because I deem the space of possible minds evolution can develop to be far smaller than the full space of possible minds we can create artificially, and it’s unlikely that any civilization arrives at this point with more than a handful of dominant species.

  2. I can barely think of a more multi-disciplinary thesis than the intersubjectivity collapse, and thus it requires analysis from all branches of science and philosophy. I think a concentrated, collaborative effort should be mounted to anticipate its consequences as much as we can.

  3. This thesis is the sort that on the one hand seems obvious once it sinks in, but on the other hand very much strains the imagination, in two ways: (1) it’s rather difficult to realize just how “custom-made” our world is for us and how the vast majority of what we have built would break down for completely different minds, and (2) it’s of course very challenging to imagine minds that have very different architectures and modalities than we do. One could say we can barely scratch the surface here, but I’d argue that scratching the surface does yield some interesting preliminary insights, so we should try our best. And science fiction can be quite helpful here.

  4. Consciousness, sentience and intelligence, and how they’re related, are of course quite relevant here, and I intend to cover these in a future post where I will argue for approaching these historically disputed and nebulous concepts with some, you could say, sobriety, looking at the science so that we may discuss them operationally.

  5. There is also an especially important point to make on the interoperability and commensurability of minds in terms of communication and intersubjectivity. Namely, that these are significant issues, potentially impossible in principle to resolve while granting or having to respect bodily autonomy, morphological freedom and privacy. Briefly: even if we develop a common language that is exchanged in hypergraph form and a subset of new minds have modules to use it, and even if we stimulate new forms of empathy and various ways to encourage ideas of new kinship, tolerance and understanding of other types of minds, I will argue there will remain a fundamental problem with understanding among different minds. This is analogous to an issue we already have but don’t focus on: one human can’t easily experience what another human has experienced, even if we implant that memory, because that memory does not relate to the second human’s connectome (brain wiring diagram) as it did to the first. Connectomes are unique; indeed everyone’s brain is sufficiently different that one person’s “just a Tuesday where a spider appeared” memory is another person’s arachnophobic nightmare.

Final caveats:

  • One thing I didn’t cover in this post is non-subjective minds or intelligences. If it is possible to create types of minds or systems that lack any affect, then that too fuels the intersubjectivity collapse, but in a somewhat different way: these non-subjective minds or intelligences simply can’t know about, and arguably can’t be made to care about, our subjectivity, because they don’t know what it’s like. What would they care about, and if nothing, what would drive them? For an introduction to possible answers to this question, see “The Basic AI Drives” by Steve Omohundro.

  • There are many valuable ways of framing the issue of introducing new minds into society. They were out of the scope of this post, but one valuable framing is looking at these minds through the lens of ecosystems and models of their perturbation, such as trophic cascades.

Conclusion

It is my hope that the intersubjectivity collapse will be picked up by others and researched thoroughly, as I think this is the sort of problem that we’re facing yet don’t even “know” about. For the record, I think this collapse will happen within the next decades, certainly before the second half of the century; in fact, quite possibly it’ll already start in the next few years as tools like large language models permeate everything we do and we begin augmenting ourselves. One way to bolster this argument is to look at how relatively minor deviations from our norm, such as psychopaths, can wreak havoc upon society. In short: it doesn’t take much.


Some suggested literature

  1. "The Interpersonal World of the Infant: A View from Psychoanalysis and Developmental Psychology" by Daniel N. Stern is a classic book that explores the development of intersubjectivity in infants and how it plays a role in their social interactions. The book discusses how differences in intersubjectivity can lead to misunderstandings and conflicts in relationships.

  2. "The Social Construction of Reality: A Treatise in the Sociology of Knowledge" by Peter L. Berger and Thomas Luckmann is a seminal work in sociology that examines how individuals and groups create and maintain their shared understanding of the world through social interaction. The book discusses how differences in intersubjectivity can create conflicts and misunderstandings between people.

  3. "Intersubjectivity and Empathy in Psychoanalytic Treatment" by Lewis Aron is a book that explores the role of intersubjectivity and empathy in psychoanalytic treatment. The book discusses how differences in intersubjectivity can create difficulties in the therapeutic process and how the therapist can work to bridge these differences.

  4. "Communication and Misunderstanding" by Jürgen Habermas is a book that examines the role of language and communication in intersubjectivity and how misunderstandings can arise when there are differences in perspective or understanding.

  5. "The Dialogical Self: An Emerging Concept for Clinical Practice" by Hubert J.M. Hermans and Harry J.G. Kempen is a book that discusses the concept of the dialogical self, which is a model of the self that emphasizes the role of dialogue and communication in shaping our sense of self and our relationships with others. The book discusses how differences in intersubjectivity can create conflicts and misunderstandings in relationships and how dialogue can be used to bridge these differences.

  6. Although I think the intersubjectivity collapse itself is new, there exists some literature that touches upon aspects of it, most notably Laura Cabrera & John Weckert’s concept of a lifeworld (Cabrera & Weckert, 2013), which is basically (inter)subjectivity. They do an initial exploration of how humans’ lifeworld may change so drastically due to enhancement that communication may no longer be possible.

  7. "The Theory of Communicative Action" by Jürgen Habermas: In this work, Habermas argues that the development of society and civilization is closely tied to the development of human communication and the ability to achieve mutual understanding through language and discourse. He emphasizes the importance of intersubjectivity, or the recognition of other people's subjectivity and perspectives, in facilitating social interaction and cooperation.

  8. "The Social Contract" by Jean-Jacques Rousseau: In this classic work, Rousseau explores the foundations of civil society and argues that it is based on the idea of a social contract between individuals. He contends that individuals must give up some of their natural freedoms in order to live in society and that this requires a certain level of homogeneity among members.

  9. "The Structural Transformation of the Public Sphere" by Jürgen Habermas: In this work, Habermas examines the development of the public sphere, or the sphere of social life where individuals can come together to form a public opinion, in modern societies. He argues that the public sphere relies on the ability of individuals to engage in rational discourse and reach consensus, which in turn requires a certain level of intersubjectivity and shared understanding.

  10. "The Human Condition" by Hannah Arendt: In this work, Arendt explores the nature of human beings and their place in the world. She argues that the human capacity for action and the ability to act in concert with others is central to the development of civilization and that this requires a certain level of shared understanding and intersubjectivity.


]]>
<![CDATA[Not artificially conscious]]>https://mentalcontractions.substack.com/p/not-artificially-conscioushttps://mentalcontractions.substack.com/p/not-artificially-consciousFri, 30 Dec 2022 20:48:42 GMTAsking whether AI models on digital hardware, like large language models, are conscious is silly not because we can’t build artificial consciousness, or because the brain is magical and the only piece of machinery that can be conscious, but because it shoehorns in the idea that consciousness is merely representational, abstract, or a by-product that magically appears regardless of a system’s architecture.

It’s odd to just assume large language models like GPT-N have consciousness just because they process some abstractions coherently. They lack any physical structure for processing consciousness and the required input/output and architecture to support it. There is literally no circuitry built to spawn affect, feelings, sentience or anything we know nature evolved in brains and nervous systems. Without looking at the layers of organization, that is to say, what’s actually being processed and how, assuming LLMs are or might be conscious “just because” quickly amounts to scientific illiteracy.

While it could be the case that consciousness is just representational, and while consciousness is a nebulous suitcase term, as Marvin Minsky said, the idea that consciousness can just live as a representation is quasi-magical and not in line with scientific evidence that consciousness is selected for as a physical mechanism and realized physically, not merely as an abstraction. The mind deals with abstractions, but can it entirely exist as an abstraction itself? That’s quite the leap to make.


Flipping 0s and 1s with several abstraction layers built on top to just present representations to the end user is unlikely to magically spawn consciousness, as the evidence indicates consciousness requires a few (positive) feedback loops with supporting physical mechanisms, signaling and architecture.

You need to explicitly build it, or use architectural paradigms that allow it to emerge. If we were to think about it as engineers, instead of poorly philosophizing about whether the latest off-policy actor-critic deep RL algorithm running on digital hardware happens to be conscious, we’d have a starting point for discussion. As far as I’m concerned, panpsychism is a non-starter.

Von Neumann architecture is built around the idea of abstracting the hardware and underlying physics away so that universal circuitry can do our bidding. That’s the whole point of it. Digital computing is incredibly useful and powerful precisely because it ignores the hardware, so that we can build abstractions on top of it. But as far as we know, consciousness is implemented at the hardware level. From the many documented cases that amount to an affliction of consciousness, due to brain damage or structural changes, use of drugs like LSD, the science of how anesthesia affects consciousness gradually, or the God Helmet, it’s clear enough that consciousness operates on physics, and is processed across layers of organization over its substrate down to the molecular level.
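As a minimal illustration of this engineered substrate-suppression (a toy of my own, not an example from the post): two implementations of the same logic gate built from entirely different mechanisms are indistinguishable at the level the digital abstraction preserves, which is precisely the level that then gets privileged as “what counts”:

```python
def xor_table(a: int, b: int) -> int:
    # "Substrate" 1: a pure lookup table, no arithmetic at all.
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def xor_arith(a: int, b: int) -> int:
    # "Substrate" 2: modular arithmetic.
    return (a + b) % 2

# Identical input/output behavior across all states: the abstraction holds,
# and everything below it is treated as interchangeable implementation detail.
assert all(xor_table(a, b) == xor_arith(a, b) for a in (0, 1) for b in (0, 1))
print("same abstract machine, two substrates")
```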

So if you’ve constrained your hardware to do your bidding, without the ability to enlist it in the processing the way the brain does, you’ve likely already lost before you got started.

Hardware is not software.

Too many seem too quick to assume the mind is “simply” software. Software is a representational concept built upon constrained state changes on a substrate; thanks to the engineering paradigms of the past century and the abstraction layers they produced, from bottom all the way to top, it has very little to do with what the brain is and how it functions. The brain is both the hardware and the software in one. Our current engineering paradigms don’t translate to how the brain functions, which is the only implementation of consciousness we know. This is, again, not because only brains or organic material can be conscious, or because all implementations of consciousness must mimic the brain fully, but because the brain is the only piece of hardware we know actually implements it, and while we have a mountain of evidence for the neural correlates of consciousness and lots of indirect evidence for how it’s physically processed, we have exactly zero evidence for the idea that we can turn this physical, analog computing device into an abstraction and call it conscious.

As this elaborate paper on the neural correlates of consciousness describes, consciousness and attention are distinct and separate processes in the brain that are closely related. Consciousness is responsible for creating a coherent picture of reality, while attention focuses on relevant objects of thought. There are two dimensions of consciousness, wakefulness and content, and two types of consciousness: phenomenal (how the world appears) and access (when contents are available for focus). The neural correlates of consciousness are generated by the activity of sensory and associative networks in the temporal, occipital, and parietal cortices. The neural correlates of attention are generated by a fronto-parietal system with two main networks: the dorsal attention network (DAN) and the ventral attention network (VAN). Both of these networks partially overlap in the parietal association areas. While distinct and separate, consciousness and attention serve each other: attention helps consciousness acquire cognitive relevance, and consciousness helps attention provide focal awareness and enrich our experience.

And so, my suggestion is to apply those principles as we attempt to replicate it, and not to assume offhand that the vague idea of “information processing” does the job. Gualtiero Piccinini does an excellent job of laying out how sloppily terms like information processing, cognition and computation are used in (cognitive) science:

Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing.

It’s hard to understand why many just assume consciousness is software or shrug at considering the actual physical implementation of a mind. Perhaps it hasn't sunk in with some people that the brain is effectively in a vat?

Being surrounded by cerebrospinal fluid helps the brain float inside the skull, like a buoy in water. Because the brain is surrounded by fluid, it floats as though it weighs only 2% of what it really does. If the brain did not have CSF to float in, it would sit on the bottom of the skull. It effectively sits in a sensory deprivation tank so it can process all the other senses, ironically “deprived” of senses itself, which is why you can perform open brain surgery on a person who is wide awake.

Surgeons can just cut into your brain and you will feel nothing, except that the cutting can impede function, which is the very reason it is done with the patient in a wakeful state: so the team can check whether they can still speak, for example.


Perhaps this seduces people to think of their mind as something floating in processing land and so anything that processes information will do and they can just be replicated that way. It makes the brain less “tangible” than the rest of our body. But I stress it’s a physical processing device, even if you can’t feel it the way you feel your body.

Those who hold the view that a random digital system can be conscious will sometimes violently object to dualistic notions of consciousness and various other concepts they consider magical, yet they are themselves completely lost in abstraction and subscribe to magical ideas, since they think the mind exists somewhere in “information space” and that, regardless of processes and architecture, consciousness magically appears.

The onus is on the claimant who thinks this, as it is not in line with any evidence we have at this moment for the only implementation of consciousness we know of.

Integrated Information Theory is one of the approaches to classifying consciousness that acknowledges there must at least be constraints on architecture supporting consciousness:

Integrated Information Theory (IIT) offers an explanation for the nature and source of consciousness. Initially proposed by Giulio Tononi in 2004, it claims that consciousness is identical to a certain kind of information, the realization of which requires physical, not merely functional, integration, and which can be measured mathematically according to the phi metric.

See also their 2016 opinion article:

We begin by providing a summary of the axioms and corresponding postulates of IIT and show how they can be used, in principle, to identify the physical substrate of consciousness (PSC). We then discuss how IIT explains in a parsimonious manner a variety of facts about the relationship between consciousness and the brain, leads to testable predictions, and allows inferences and extrapolations about consciousness. The axioms of IIT state that every experience exists intrinsically and is structured, specific, unitary and definite. IIT then postulates that, for each essential property of experience, there must be a corresponding causal property of the PSC. The postulates of IIT state that the PSC must have intrinsic cause–effect power; its parts must also have cause–effect power within the PSC and they must specify a cause–effect structure that is specific, unitary and definite.

Now, I am not convinced by IIT, as there are various issues with it that I won’t go into here, but as a pursuit to formalize consciousness and come up with ways of measuring and concretizing it, the effort is to be applauded. Specifically, insisting we look at the causal relations of the substrate and the actual physical processing is excellent.
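To give a flavor of what a phi-style measure computes, here is a toy calculation of my own devising; it is emphatically not IIT’s actual phi (which searches over all partitions and uses cause-effect repertoires), but it captures the core intuition: how much information the whole system’s state carries about its next state beyond what its parts carry in isolation. The two-node “swap” system is my invented example:

```python
import numpy as np
from collections import defaultdict
from itertools import product

# Toy system: two binary nodes that swap states each tick:
# X(t+1) = Y(t), Y(t+1) = X(t). Past states are assumed uniform.
states = list(product([0, 1], repeat=2))
step = lambda s: (s[1], s[0])

# Joint distribution over (past, future) state pairs.
joint = {(s, step(s)): 1 / len(states) for s in states}

def mutual_info(pairs):
    """I(past; future) in bits for a {(past, future): prob} dict."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in pairs.items():
        pa[a] += p
        pb[b] += p
    return sum(p * np.log2(p / (pa[a] * pb[b]))
               for (a, b), p in pairs.items() if p > 0)

def node_pairs(i):
    """Marginalize the joint down to one node's (past, future)."""
    m = defaultdict(float)
    for (a, b), p in joint.items():
        m[(a[i], b[i])] += p
    return m

whole = mutual_info(joint)                                  # 2.0 bits
parts = sum(mutual_info(node_pairs(i)) for i in range(2))   # 0.0 bits
print(f"toy phi = {whole - parts:.1f} bits")                # 2.0 bits
```

Each node’s future depends entirely on the other node, so the parts specify nothing on their own while the whole specifies everything; that gap between whole and parts is the intuition phi formalizes.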

One of the best ways to tease out one’s assumptions regarding this issue is to consider a human brain functionally running on a YottaFLOPS (lots of computational power) laptop and start figuring out how this would be problematic for that mind, versus what it means to be an embodied physical brain. Some will not even blink if you suggest the mind running on a wooden abacus, or various other arbitrary devices, materials or the China Brain, just because they satisfy theoretical models of computation. But there really is a difference between a description and the described, or a simulated dynamical model and a physical instantiation of it. And to figure out this conundrum, one has to carefully analyze these differences. Is a digital brain equivalent to the physical version of it? How does it causally relate to the world?

“Digital computers can simulate consciousness, but the simulation has no causal power and is not actually conscious,” Koch said. It’s like simulating gravity in a video game: You don’t actually produce gravity that way.

Anyone who considers a mind running digitally to be equivalent to the brain should think hard about how this is nothing like a physical brain in a body, what’s missing, and how it’s different. Simulated fire does not burn. Nothing we simulate in fact has the same causal relations to the world as its physical equivalent. Sometimes this doesn’t matter. If we only care to encode information and then reproduce it, as with sound, it may not matter. But does this apply to the entire brain and body? We have to really step out of Michio Kaku-level fantasies about mind uploading and look at the science: neuroscience, biophysics, physics. What does the evidence tell us?

Some insist we can never know and should only look at the behavior. The problem with that is that we're very easily fooled. Look at the Turing Test, for example. For over a decade I’ve often said the Turing Test is a test of natural gullibility, not artificial intelligence, because it’s designed to fool an observer rather than really test capability and underlying mechanisms.

In fact, I think the Turing Test belongs to the class of tests one thinks of before one has created that which one would like to test, before one knows how to. “Imitation game” is the right name, and imitation will be lethal in our future, as it’ll be very easy to create entities that act the same as us, or in a similar fashion, but have very different internal constitutions and capabilities!

This is why psychopaths can be wildly successful in society, even though they represent a tiny subset of generally intelligent entities deviating from human neurotypicals.

There are many vectors of attack here, to put it in adversarial terms. We have countless biases, constrained senses, lacking senses, and processing deficiencies that are easy to exploit in general and will be fantastically easy to exploit for AGIs.

Something like a Turing Test, or the way we evaluate behavior and make assumptions, is like shaking a container to see if there’s water in it by just listening for sloshing. It could very well be 100 other types of fluid; sloshing will perhaps tell the human ear something about its viscosity, for example. But it’s not actually a proper check of the contents, is it? This sort of indirect, by-proxy testing will not do, and it’s dangerous to adopt these sorts of approaches as sufficient.

What this amounts to is that I think we have to come up with proper tests for agency, sentience and intelligence. Some of these future tests may still depend to varying degrees on proxies, but much better ones, much less coarse-grained than something like the Turing Test, which amounts to “species X is fooled by it!”. We have to face that very likely various things that can be produced by cognitive systems like our brains can also be produced by very different cognitive systems and by non-cognitive systems. Like coherent language.

We are too easily fooled, it’s too easy to produce similar behavior with wildly different principles behind it, and whether a system is conscious and sentient will be ethically very relevant. We too easily anthropomorphize our pets, assign agency to inanimate objects and project intent onto systems. Looking purely at behavior is never going to suffice to assess the internal constitution of a system, its moral status, personhood, or what it really does and how it processes its input to generate its output.

So, in conclusion, the issue is not that only a brain, organic matter or carbon can support consciousness; it’s that all evidence points to consciousness being physically realized in a manner that does not translate to software sitting in a stack of abstraction layers.

It’s true that in philosophy there are a lot of things to dive into w.r.t. consciousness, and each paragraph of this post could be met with a hefty philosophical objection, some of which I could raise myself. There would be no end to the objections. Is reality even real? Are we living in a simulation?

Perhaps unsurprisingly, the most exotic ideas may not be crucial considerations right now. Not if we just want to focus on how to build consciousness and move forward on the evidence. In fact, in my opinion the evidence is such that we can remain agnostic on many philosophical issues while focusing on how to replicate based on what we know.

I think it turns out you can get very far just by looking at what we do know, and when you synthesize all of that, you start to paint a pretty clear picture, bit by bit. So what more do we know about consciousness, and what does painting that picture look like? That’s for a future post.


]]>
<![CDATA[The genie is out of the bottle]]>https://mentalcontractions.substack.com/p/openais-chatgpthttps://mentalcontractions.substack.com/p/openais-chatgptSun, 04 Dec 2022 20:29:04 GMTThere will be no AI winter. Unless you mean total, utter civilizational disarray and societal turbulence due to seismic shifts and transfers of skills between AI and humans with economy-crippling asymmetries. Then, yes: AI winter is coming.

ChatGPT is OpenAI’s test run of a sophisticated chatbot, which can do anything from holding a proper common-sense conversation and detecting whether you are asking a bullshit question, to answering questions in any discipline at university level, or writing code for you. I have been playing with it: conversing with it, running some adversarial attacks, letting it spit out chunks of the coding I do, and I got it to produce results ranging from very sensible to pretty excellent.

This is like Google on steroids. Search with a brain. A virtual exocortex finally starting to be worthy of that moniker. It is not AGI yet, but it is a nearing, evolving pseudo-AGI. And 2023 will be rife with pseudo-AGIs. We have entered the AGI precursor era. Multi-modality, scaling, and orchestration will all boost abilities, and it will keep going. This is obvious from the data, the benchmarks and the underlying tech and assumptions.

The great AI disruption is coming, and it’s coming even a tad faster than I anticipated mere months ago. That is why most of my upcoming blog posts here will cover the issues we will face as a civilization over the coming decade.

And I say this while also stressing the following: large language models are like the shadows in Plato’s cave w.r.t. meaning and understanding. The sheer amount of data, and the transformer’s precursor to salience (attention is all you need), get you very far, and this is why we’ll see pseudo-AGIs popping up like shrooms in a forest after a rainy day. However, ultimately, human and possibly consciousness-aided understanding is an AI-complete problem, meaning that solving it entails solving AGI. For symbol grounding you need to ground symbols in something, and it’s not endless vectors. You can’t transformer or Markov-chain your way into understanding. But grounding will be done too, and grounding can be done in more ways than the human way.

Language and meaning cannot be “reverse-engineered” to get symbol grounding. But yes, understanding is certainly possible. NGI (natural general intelligence) is possible, so AGI is as well. But understanding piggybacks on multi-modality and on interfacing with itself and the world in various ways, so multi-modality is necessary; and grounding in human-like feelings requires specific hardware and I/O systems, a deep topic for another time. The point is that language is not meaning. It’s a symptom of meaning.

However, because we can emulate and mimic plenty of human features of intelligence and understanding, as well as develop them from first principles, these latest developments will fuel a lot of precursors to AGI in the coming months. Why “months” and not “years”? I expect paradigm shifts to start hitting us more frequently in the coming years, so I truly think a year from now we’d be shocked at the transformation and disruption the world has undergone, and would massively update our views on what’s next. So let’s stick to months for now.

Thus, the conclusion for now is that we’re facing many of the aforementioned challenges in the coming year already. And we could not be less ready. AI will start eating everything: education, search, art, programming. Let’s face it, a lot of human activity and labor amounts to busywork. I stress again that this is not just hype and hot air. This is the beginning of the biggest self-induced disruptive event any civilization faces once it gets to this technological point of no return. And here we are.

The genie...is out of the bottle.


]]>
<![CDATA[The Phantom Cognit ]]>https://mentalcontractions.substack.com/p/the-phantom-cognithttps://mentalcontractions.substack.com/p/the-phantom-cognitSat, 03 Dec 2022 17:20:39 GMTI introduce the notion of the phantom cognit, a missing dependency in any cognitive apparatus that leaves traces or broken connections in one or more of the remaining cognitive units. I provide present-day empirical examples of this problem in biological brains and put forth that the phantom cognit can be considered a generalization of the phantom limb. I then sketch out how phantom cognits will be a growing issue for increasingly extended minds with regard to neuro-implants, cognitive augmentations and ultimately exocortices or modular future minds. I conclude by suggesting several directions for further research.

Introduction
Organisms are difficult for us to interface with, and biological brains especially. They have neither the layers of abstraction we build into our machines, nor any direct interfaces and controls. Yet in indirect ways we have been extending our cognition through technology, using computers and mobile phones. More recently, explorations of interfacing with the brain more directly have popped up, from EEG helmets to actual implants, like Elon Musk’s Neuralink.


While the brain is an elastic, adaptable and plastic organ, it is also a self-contained cognitive apparatus in a vat, our skull, generally interfacing with itself and the world through the rest of the CNS (central nervous system) and body in specific ways. Going outside these normal paths of functioning shows the brain can quickly miscalculate what’s going on, generate illusions and overall run into trouble. An example of this is the phantom limb, a somatosensory mapping in the brain of a part of the body that is no longer there: a hand that is no longer there is still hurting. While messy, the brain is somewhat modular and has dedicated parts for particular processing, like visual processing going through cortical areas V1 to V5, or the motor cortex, which deals with planning and execution of movement. As we break this down to go more granular, we still find concentrated pockets of function or composition, like Broca’s area (motor speech) or the pineal gland, a gland secreting hormones, though perhaps not the seat of consciousness as Descartes once thought. To generalize and future-proof this concept, I refer to a designated cognitive unit as a cognit, in roughly the same vein as Fuster (2006).

So the phantom cognit refers specifically to any missing part of a cognitive system or apparatus: where a functioning cognitive unit (or several) used to be, there remain references to it, or units dependent on it, while the unit itself is no longer present or functioning. If such references and dependencies are absent, it is not a phantom cognit anymore. That doesn’t mean, of course, that nothing “went missing” and that there is no cause for concern.

Why does the phantom cognit arise as a problem?

The human brain, and biological brains in general, are quite wonderfully adaptive and plastic. Abundant examples exist in the literature where brain damage, even to specialized brain areas, is recovered from, with the brain adapting to restore lost function (2018). However, the human brain is also quite tightly constrained in how it works. It’s a 20W-powered physical network of about 86 billion neurons, floating in cranial fluid, wiring the whole body through the central nervous system. The input/output of the human brain is limited to what comes in through the senses and its own processes.

The brain itself does not have any feeling, except for the eyes, which are an extension of it. What this means is that it’s generally quite self-dependent and historically, that is to say throughout evolution, never had to deal with consistent interaction with external objects, certainly not with something like an EEG helmet. So the brain, being an associative network where neurons that “fire together, wire together” (Hebbian learning), is very susceptible to integrating cognits that, if then lost, can have problematic consequences for it.
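As a concrete reference point, here is a minimal sketch of the Hebbian rule just mentioned; the learning rate, toy activity statistics and correlation level are all invented for illustration. A connection weight grows whenever pre- and post-synaptic units are active together, which is exactly the mechanism by which a consistently present external cognit would get woven into the network:

```python
import numpy as np

rng = np.random.default_rng(0)
eta, w = 0.1, 0.0          # learning rate and initial connection weight
for _ in range(100):
    x = rng.integers(0, 2)                  # pre-synaptic unit: fires or not
    y = x if rng.random() < 0.9 else 1 - x  # post-synaptic unit mostly follows
    w += eta * x * y                        # Hebbian update: co-activation strengthens w
print(f"weight after 100 steps of correlated firing: {w:.1f}")  # roughly 4.5
```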

Examples from today and an initial taxonomy

The phantom cognit as a term doesn’t need to prove itself: the phantom limb is a well-known problem in cognitive science (2018), and it is an example (a subset) of the phantom cognit. It’s also not just about futuristic implants. There are more examples, and these examples show that it may be helpful to create an initial rough taxonomy of phantom cognits.

(1) Firstly, there is a weak or indirect sort of phantom cognit that we may consider. This is the version where we consider the extended mind and body, and take into account the tools we have become so accustomed to that, once they’re gone, our brain expects them to be there and has trouble adapting. Relatively innocent examples of this may be getting used to a particular interface on a computer, a particular shape and function of one’s mobile phone, or a car. We can also consider loss of people or relationships to be part of this.

(2) Secondly, there is the medium or more concrete version of the phantom cognit: a cognitive unit (or several) that interfaces with the brain perhaps still indirectly, but more closely, over a sufficiently long period of time, such as an EEG helmet, or a device that stimulates the brain like the God helmet (2014).

(3) Thirdly, the strong version of the phantom cognit, where it is directly integrated into a cognitive apparatus as a functional unit, like an implanted chip of the sort Neuralink aims to mass-produce. Ripping it out is very likely to cause immediate issues, generally of a more direct and severe nature than the first two.

The above three categories can also be expressed as a formula and graded, for example as a cognit integration score, to get a sense of the consequences of removal. Going beyond that, it is conceivable that future minds will have built-in safeguards to protect their integrity, and more elaborate and specific scoring to prevent unintended consequences. Of course, in engineering and software development this is not uncharted territory. We have ways of dealing with a missing component, re-routing resources in a plane with partial engine failure, overall redundancies in machines we build in case of critical component failures, null references in programming, and so on. The brain’s aforementioned plasticity, redundancy and “self-healing” capacity of course exist as well. But the problem is that the brain has historically dealt virtually only with itself and with damage or missing cognits within. And what the future brings in terms of brain extensions is categorically different.
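As a minimal sketch of what such grading and safeguarding could look like (the score formula, category weights and module names below are all my own invented placeholders, not a proposal from any literature), a cognitive apparatus could track its units, grade how deeply each is integrated, and flag dangling dependencies, i.e. phantom cognits:

```python
from dataclasses import dataclass, field

@dataclass
class Cognit:
    name: str
    category: int           # 1 = weak/indirect, 2 = medium, 3 = strong (see above)
    years_integrated: float
    depends_on: list = field(default_factory=list)

def integration_score(c: Cognit) -> float:
    """Toy grading: deeper coupling and longer use mean worse removal consequences."""
    return c.category * (1 + c.years_integrated) ** 0.5

def phantom_cognits(apparatus: dict) -> list:
    """Units that others still depend on but that are missing: phantom cognits."""
    present = set(apparatus)
    return sorted({dep for c in apparatus.values()
                   for dep in c.depends_on if dep not in present})

apparatus = {
    "motor_cortex": Cognit("motor_cortex", 3, 40.0),
    "exo_memory":   Cognit("exo_memory", 3, 5.0, ["neural_implant"]),
    "phone_habits": Cognit("phone_habits", 1, 10.0),
}
# "neural_implant" has been removed, but "exo_memory" still depends on it.
print(phantom_cognits(apparatus))                           # ['neural_implant']
print(f"{integration_score(apparatus['exo_memory']):.2f}")  # 7.35
```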

Problems of the future, further perspectives and speculation
As of today, as I write this, we have entered the pseudo-AGI era, or era of precursors to AGI. OpenAI released ChatGPT, a chatbot that in most people’s perception mimics general intelligence very well. And while for full-blown AGI we need a few more paradigm shifts, let it be clear that we have most definitely reached the point of no return with respect to AI, and there shall be no AI winter, nor any lull in advancements or investments in AI. How we humans, or as I say, any civilization, deal with that on the eve of radical self-modification is the question of the millennium. And also the question of this decade. So how do we keep up with AI? Can we? Should we? In my opinion we should at the very least work very hard at interfacing better with biology, and given the acceleration of AI developments I think this should be in the top 5 R&D goals of humanity.

The phantom cognit is the sort of concept that inspires rampant speculation and indeed fiction. I myself have written a short sci-fi story based on this concept, as is the case with other concepts I’ve coined. Much of that is outside the scope of this post, but I think we can carefully point out the following:

  • The unaugmented human brain is capable of self-healing with its plasticity and redundancy, but it is also self-contained and definitely not built for crude extensions and implants. It adapts and integrates, but its in-built correction processes cannot account for missing implants, and thus we will have to take this challenge very seriously as we start augmenting our minds more directly.

  • AI is developing very rapidly, and augmenting our minds to “keep up” with it, that is, technology interfacing with biology, is one avenue that is very much worth exploring and should probably be pursued with a high sense of urgency.

  • Future minds, cognitive apparatuses, will have to come with strong built-in features to deal with critical failure, damage or removal of cognits, to avoid phantom cognits. While in the machines we use today such failures can lead to all sorts of accidental damage, the issue of phantom cognits will spawn an untold range of mental and cognitive issues for future minds.

In a future post I will speculate more on specific phantom cognits of the near future as well as those in future minds.

Conclusion

I hope the phantom cognit becomes generally accepted as a term referring to any missing cognitive unit upon which other, still-existing units depend or to which they hold reference. I think further work should focus on more carefully taxonomizing known phantom cognits, classifying current cases, and doing our best, in a future-proof way, to map out potential future issues.


]]>
<![CDATA[The subjectiverse: part 1]]>https://mentalcontractions.substack.com/p/the-subjectiverse-part-1https://mentalcontractions.substack.com/p/the-subjectiverse-part-1Sat, 26 Nov 2022 22:32:27 GMTIn the coming decades we will see an explosion of new types of minds, as we’ll be able to seed AIs, build synthetic minds, and augment ourselves. This upcoming “Cambrian explosion” of minds will mean that we will be exploring so-called mind design space, which I prefer to call cognitive construction space: the total space of possible minds that can exist in our universe, in which humans are but a speck. If we assume for the moment that a mind can only be called a mind if it has subjective experience or phenomenal consciousness, then you can imagine the vast space of new subjective experiences there will be out there to explore. We don’t have a word for that, and it would be very convenient to have one, so I hereby introduce the subjectiverse. In this series of posts I discuss how we may initially explore it, individually or collectively, and report on it.

Introduction
For many thousands of years the human mind has been stagnant and unmodifiable. Today we’re at the eve of radical self-modification, and within decades the sorts of minds we’ll design and seed, and the ways we’ll be able to augment our minds, will be numerous. Typically the space of all possible minds is called mind design space, but I don’t quite like this term, for two reasons. Firstly, not all minds are designed; some are evolved. It doesn’t quite make sense to speak of designs when there is no designer. Secondly, I think it will be computationally and practically intractable to fully design minds; rather, we’ll seed and grow them. So designed minds will be a subset of all minds. For this reason I prefer the term cognitive construction space, which can be subdivided into evolved, designed and seeded minds. We might care to make more distinctions in the future, for example synthetic minds being seeded, organic ones grown, and yet others constructed.


With respect to brains, these of course belong to evolved minds, for now, until we grow them. Future minds won’t just run on brains, but on various different hardware and software, if you’ll tolerate that analogy for a moment. So I’ve adopted the term cognitive apparatus as a future-proof and hardware-agnostic term for all future minds.

So to explore the subjectiverse we have to speculate about current minds and future minds.

Exploration
The subjectiverse is a vast space of state spaces of all minds, one that may be impossible to explore due to the sheer number of configurations possible: all possible minds, times all possible dynamics or interactions with the environment, times interactions with other minds, and so on and so forth. This is a veritable combinatorial explosion, and thus intractable to map as well. So when it comes to speculating, tomes upon tomes could be filled, and battles have to be picked.
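To get a feel for that intractability, here is a back-of-the-envelope calculation with toy numbers of my own choosing; a “mind” of just 1,000 binary units is absurdly small next to 86 billion neurons, yet even these counts dwarf the roughly 10^80 atoms in the observable universe:

```python
import math

n_units = 1_000  # binary cognitive units in one deliberately tiny toy mind

# Number of internal states of one such mind: 2^1000. We print the base-10
# exponent, since the number itself is astronomically large.
log10_states = n_units * math.log10(2)
print(f"states of one tiny mind: ~10^{log10_states:.0f}")         # ~10^301

# Counting only which unit pairs between two such minds are coupled already
# gives 2^(1000 * 1000) possible wiring patterns between them.
log10_couplings = n_units * n_units * math.log10(2)
print(f"couplings between two minds: ~10^{log10_couplings:.0f}")  # ~10^301030
```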

Seeding, building and growing minds brings endless challenges as well, which is outside of the scope of this post. I’ll cherry-pick specific problems that are especially interesting when getting one’s feet wet in exploring the idea of the subjectiverse.

The unaugmented human mind: commensurability
Homo sapiens is a species, and at that level there is homogeneity, but within the species there is of course a lot of variance, with various spectra on which human minds exist. While avoiding issues regarding the existence of qualia, let’s at least establish that we obviously don’t have direct access to others’ experiences. Yet not only do we know that this should be possible in principle, we have living proof of it. Conjoined twins are twins that are born fused together. Some of them have body parts fused that are relatively realistic to separate; others have, for example, a fused head, including the brain. From these examples we know that fiction like Vulcan mind melding can become reality. Conjoined twins Krista and Tatiana have a conjoined thalamus, the part of the brain that relays physical sensations and motor signals to the cerebral cortex, allowing them to hear each other’s thoughts and see through each other’s eyes.

This possibility quickly seduces one to imagine that the subjectiverse can be readily explored collectively and that these experiences can be shared. Where we normally rely on empathy, with mirror neurons and such helping us imagine generically what another is going through, we could connect minds; not only are such connections part of the subjectiverse, and thus constitute exploration, we could explore them together. However, it might be very far from that simple, even with all the liberties we have taken to explore this idea. Generally speaking, each connectome, or wiring diagram of a human brain, is completely unique, like a fingerprint.

Even if we were to magically share a memory from brain A to brain B, brain B’s connectome would process that memory very differently from A’s. Consider for example an anxiety-inducing memory of brain A that induces no anxiety in brain B, simply because brain B does not share brain A’s anxieties. Activation levels in various brain parts, for example in the thalamus, will differ from brain to brain, one a perfectly stable and healthy mind, another a mind suffering PTSD. So here we stumble upon a fundamental problem: even within one species with relative homogeneity, we find that a vast amount of experience is likely incommensurable across minds.

For the same reason, by the way, it’s impossible to just blast an unaugmented brain with knowledge like in the movie The Matrix. If a knowledge graph were to be implanted into a brain, that graph would need to be tailor-made for that brain if it were to integrate into the connectome the way knowledge normally does.

So how does the unaugmented human mind explore the subjectiverse and share? Unfortunately, the only options here are simply existing, modifying one’s brain by virtue of experience and learning, and inducing mind-altering states, be it with a God helmet (an experimental apparatus that stimulates the temporal lobe and had many subjects report a religious presence), getting drunk, or doing psychedelics. After that we can only talk about it. The philosopher Daniel Dennett came up with heterophenomenology, where third-person reports about experience are taken to be authoritative. He dubbed it the scientific method with an anthropological bent. But this too runs into the issue of incommensurability. Of course, in practice there is lots of agreement on how things feel and affect us across a wide array of experiences. Yet there remains a vast gap between agreeing about reports and being able to truly verify and share experiences.

So to explore the subjectiverse beyond that which we’re doing already by living and experiencing life, we have to turn our focus to the augmented human mind. Coming in part 2.


]]>
<![CDATA[The hard problem of metaphysics]]>https://mentalcontractions.substack.com/p/the-hard-problem-of-metaphysicshttps://mentalcontractions.substack.com/p/the-hard-problem-of-metaphysicsWed, 16 Nov 2022 23:06:45 GMTFrom a complete description of the universe we seem to be able to exhaustively derive a totality of all facts about the universe, save for one key phenomenon: consciousness. I argue that if consciousness is only knowable through the unique metaphysical relation we bear to it, then it necessarily follows that other significant phenomena may exist in our universe that we cannot know about without the necessary metaphysical relation(s). I explore ways of framing this idea and investigating it conceptually. I discuss finding hints in our universe to discover other potentially obscured phenomena. I then discuss basic objections and offer replies. Finally, I discuss possible implications and avenues of exploring this idea that are beyond the scope of this post.

Introduction

Scientifically and philosophically, it is broadly accepted that humans are conscious in the sense that we have inner phenomenal lives – a what-its-likeness to our existence or, at the very least, according to some, an illusion thereof. You get kicked in the shin, it hurts. An extension of this acceptance is that this what-its-likeness is only knowable through itself – it doesn’t seem like its existence can in principle be derived or known from any description of the universe. To know of experience, one must undergo experience. It is only by this metaphysical relation we bear to consciousness that we know of it.

If consciousness is an exception to the rule that everything is knowable through the description of the universe, then it cannot be ruled out that there may be more exceptions. Stated more broadly, it cannot be easily ruled out that in fact other potentially significant phenomena are entirely obscured from us – entirely unknown, hidden in plain sight, and knowable or accessible through one or more metaphysical relations, which are also entirely unknown to us. 

To continue exploring this, I will describe how consciousness needs to be framed for the sake of this argument, and how the other parts of the argument come together or apart. I will then ask what this argument implies in the metaphysical sense, and how we may attempt to say something about a possible unknown of this sort.

The core of the argument

This argument rests on consciousness as a phenomenon being knowable only through being itself: it cannot be inferred through other means. If a non-sentient robot were to observe and communicate with us, and were able to hold all key facts about us and our behavior in its cognitive system, it would never in principle be able to guess the existence of consciousness. When we scream in pain, there are not just observable signals traveling from A to B in our body and triggering behaviors; we feel something when this happens. This unique metaphysical relation forms the basis for the premise of the argument put forth in this post.

To make clear the point of this argument, to make it as robust as it can be, and to lay bare exactly the core assumptions minimally required to accept it, I’d like to establish the following:

  1. While the focus is on phenomenal consciousness, that is, the what-its-likeness of consciousness, it is not necessary to accept any particular position in debates on phenomenal consciousness to go along with the core argument. It is therefore best to frame consciousness, or at least consider it for the sake of argument, in a bare-minimum way, where it is simply the referent or explanandum that fuels the argument. So we treat consciousness not as that which is explained in such and such a way, but as that about which there is an ongoing debate. So whether one accepts qualia or not, with whatever qualities, or considers some or all of consciousness an illusion (more on this in the “objections and replies” section), is not the point. That one has an explanandum about which to have such considerations at all is the point.

  2. It may very well be that there are in fact other metaphysical relations by which we may know of or access consciousness. That doesn’t change the fact that we have been contemplating consciousness philosophically for millennia and investigating it scientifically for centuries, without becoming acquainted with any other such relation. And that in itself would still be ample reason to take very seriously the idea that there may be more significant phenomena hidden from us in our universe.

Establishing the above may tempt us to think about other phenomena being only knowable through 1st-person experience, or somehow having to be “directly accessible through themselves” as is the case with consciousness, but that is not the argument, nor is it necessary to make the argument. There may indeed be such phenomena, and they would undoubtedly be very exciting – novel qualia and beyond – but what we’re after here is grasping the very relation by which we arrive at the notion of consciousness. The relation by which we know of consciousness – 1st-person experience, knowing – could itself be framed differently by virtue of acquaintance with other unknown phenomena in our universe. Just as consciousness seduces us to make conceptual divisions like the first- and third-person perspective, whatever else might be out there could yield new metaphysical considerations and taxonomies, classifications of a sort we cannot imagine. But I argue that what we can at least imagine is that other such relations might exist. And the question of how we might figure out what is out there I dub the hard problem of metaphysics, a wink to Chalmers’ hard problem of consciousness as well as the meta-problem of consciousness.

So there are two parts to what this argument suggests: a relation by which we know a phenomenon (direct experience in the case of consciousness), and the phenomenon itself (consciousness). The question at this point is: what other phenomena might be out there, and by what relations would they reveal themselves, or could we know of them? This may seem like fairly certain and relatively concrete ground on which to conduct further investigation – however – we have to question further whether separating the phenomenon from the relation would apply across the board for all other phenomena, or whether they would call for an entirely different classification of parts and relations, insofar as those apply, perhaps existing on a spectrum or spectra, and whether they would call for a taxonomy. While from our current vantage point it seems intractable to speculate on this, as mentioned previously, I still find it important to note, due to the metaphysically and epistemically revolutionary potential of other “hidden” phenomena and the relations we may bear to them. It may very well be that our universe has a richly populated space of interesting phenomena and relations, and yet we are acquainted with only one.

At this point we can see that it seems very difficult, if not impossible, to deny the possibility of other significant phenomena unknown to us, yet it is even more difficult, and likely impossible, to imagine them. What else can be said about this, then? Is it worthy of any speculation? I tend to think that it is, however difficult it may be.

Firstly, consciousness is an extremely significant part of our existence. This tells us it’s possible for such significant ontological categories to exist in a universe and yet be unknowable without a relation of “access” to them. Secondly, biological agency is at least an interesting descriptive case: as previously established, we could never infer consciousness from it, but we can at least suspect something interesting is going on when comparing, say, a rock to a human. So too may we hope to find similar clues in descriptions of the universe that indicate there is more to them than their description.

If we want to be bold, we could even say: in any universe where we know of a phenomenon solely through a unique metaphysical relation we have to it, we must consider that more such phenomena may exist in that universe.

Objections and replies

Initially, it may seem there are two main ways to knock down this argument. One way is to deny that consciousness is only knowable through itself and demonstrate that it can indeed be known through other means, akin to Chalmers’ debunking argument about beliefs about phenomenal consciousness. However, we should stress that even if there are other ways of knowing it, we can still posit that it’s difficult to “access”, and that other interesting, hard-to-access phenomena may be out there, as mentioned earlier. A second way would be to claim and demonstrate that consciousness is the only conceivable or possible exception to the knowability of facts through a description of this universe.

Here are three examples where objections could come from.

(1) Objection: “Consciousness can be known indirectly or inferred.”

Reply: This is not established, and the onus is therefore on the claimant making this objection to demonstrate how consciousness can be known indirectly. Surely a non-sentient intelligence or agent could gather that, in terms of entropy, biological life is interesting, and perhaps a concept of agency could be established – but if, as with the philosophical zombie, there is “nothing going on in there” for the assessing non-sentient, non-conscious agent or intelligence, there seems to be no way, in principle, for it to even guess the existence of consciousness. Such an agent could at most make an argument akin to the one this very post puts forth, that there may be “more out there” – and only if it had anything to go on, as I argue we do, thanks to consciousness. In any case, if I could even begin to imagine how this objection would sound, I would not have written this post.

(2) Objection from physics: “Our knowledge of physics suggests that other phenomena hidden from us are unlikely or impossible.”

Reply: An exhaustive and descriptive account of the laws of physics obscures consciousness entirely from us as outsiders, so this objection would require specifying how physics can be used to speculate about the likelihood of other phenomena obscured from us. I think the biggest challenge here is that consciousness is a phenomenon that can be right in front of us, yet we’re blind to it unless we’re actually conscious ourselves. We can try making appeals to dimensions, for example, and speculate about where there is “room” for any phenomena, but the whole issue is that the descriptive account doesn’t seem to leave any “room” for consciousness, yet here it is.

(3) Objection from philosophy of mind: “This argument relies on a particular position in philosophy of mind; it doesn’t apply if one is an eliminativist about qualia or an illusionist.”

Reply: The main thrust of this idea is agnostic as to one’s position on consciousness in philosophy of mind. Because this argument relies on consciousness, it may be misunderstood as an argument in philosophy of mind, or about mind, or as relying on some particular position in philosophy of mind, yet in fact it is not and it does not. New qualia and different minds, while very fascinating, are also not what this argument is about. The argument is strictly about metaphysical relations of our universe that may exist and be entirely hidden from us – hidden in plain sight, even. So if one is an illusionist about consciousness, for example, then one could say: if consciousness is only knowable through itself, but illusory, other such relations may exist in our universe, and they too may very well (for all we know) be questioned in the sense illusionism questions what it considers the illusion of what-it’s-like-ness.

Conclusion and further exploration

The premise that consciousness as a phenomenon is only knowable through the metaphysical relation we bear to it seems unassailable, and the conclusion is therefore inevitable: other potentially significant phenomena may exist which we would know of only through other metaphysical relations.

Further exploration of this idea may fall into roughly the following three categories:

(1) Investigating whether we can speculate at all about the likelihood of other phenomena being present in our universe, based on what we know about our universe and consciousness.

(2) Speculation about how we might conduct an investigation to look for hints in our universe of other hidden phenomena.

(3) Surveying epistemic frameworks, taxonomizing the discoverability of phenomena, and collecting arguments that may help frame the aforementioned pursuits.

As to its importance – I believe this is reflected in our ongoing millennia-old debates and investigations into mind and consciousness. There may be fascinating phenomena out there as significant as or more significant than consciousness. I hope this idea is as exciting to my readers as it is to me, and that others will care to expand on the argument and explore it further.

Acknowledgements

I’d like to thank Paul So for valuable insights and comments in discussing the original paper for this idea.

]]>
<![CDATA[Why Panpsychism doesn't pan out]]>https://mentalcontractions.substack.com/p/why-panpsychism-doesnt-pan-outhttps://mentalcontractions.substack.com/p/why-panpsychism-doesnt-pan-outFri, 21 Oct 2022 22:01:00 GMTPanpsychism is the idea that mind is everywhere and in everything. It is the view that mind or mindlike aspects are simply features of all reality or matter. Depending on who you ask, it’s either a silly and unprovable idea, or a plausible view on consciousness that’s been gaining a foothold in academia in recent years for good reason1. I’ve studied and researched mind, brain and consciousness for over a decade now and have made myself consider it often, as key bright minds endorse it. However, it doesn’t seem like the universe works the way a panpsychist sees it, nor does panpsychism provide us any predictions we can test. So panpsychism doesn’t pan out, and this post spells out why.

First of all, “pan” means “all” and “psychism” is of course “psyche”. So suggesting that it doesn’t really mean “all”, for example, doesn’t do it any favors in terms of differentiating it from other ideas. I consider ideas and theories in academia sometimes very problematically differentiated, not only because they’re in competition with each other but also because the people coming up with them are in competition with each other. Let us set that aside for a moment, though, and grant that panpsychism still has to hold its own in making a solid point or providing a good basis for a framework that explains mind and our universe. So does it?

Fuzziness
And here we arrive at the first issue with panpsychism. What does it say, really? There are panprotopsychism, panexperientialism and panprotoexperientialism, which somehow try to distinguish the amount or type of the mental in the physical. Subjectivity exists somewhere, at some level of reality, in all or some basic physical parts. Okay, so then at what level? Where the mental starts, or what it really means that there is a mental aspect to a photon, is never made clear, and speculations run wild in all directions. Panpsychism allows itself to be so flexible that it becomes a simple acknowledgement of subjectivity and leaves us guessing as to where it really resides. Gluons? Atoms? Molecules? Matter? Energy? It wants to be everywhere, but ends up being nowhere. And the reason why this problem haunts and will haunt panpsychism is spelled out by the next point.

Unfalsifiable
Panpsychism is unfalsifiable – not the most trivial issue in the world. So then what can we really do with it? Not much. Those who champion it claim there are good theoretical reasons to consider it, but are there? A very unfortunate one that is mentioned is its supposed elegance. Even Koch and Tononi, whose 2015 paper said2: consciousness here, there, not everywhere, say that it’s elegant. But elegance is a treacherous quality that we should be careful not to be deceived by. Though this deserves separate treatment, I would say that at the very least we have a bias towards condensing, compressing and homogenizing things in the frameworks we build, because it makes them easier for us to process. This, then, is “elegant” – and certainly nature has shown us it seems to contain such elegance. So far, so good. Some beautiful and simple rules underpin life on earth.

Yet nature is also messy, and this beauty we chase with our feeble brains may mislead. It’s certainly not immediately clear that panpsychism is parsimonious. Ockham’s razor, a great tool in the thinker’s toolbelt, can also easily lead astray – as we can rapidly switch between the economy of meaning and of structure, as Quine explained.

Another contentious aspect of the razor is that a theory can become more complex in terms of its structure (or syntax), while its ontology (or semantics) becomes simpler, or vice versa. Quine, in a discussion on definition, referred to these two perspectives as "economy of practical expression" and "economy in grammar and vocabulary", respectively.

Language, math and our abstractions can be shifted, and can abstract away other parts, so that we easily increase or decrease the complexity of an idea. Is panpsychism simple and elegant? It’s simple and short to say “it’s everywhere”, but is it actually simple to shoehorn it into our models of physics or the universe across the board? If it’s not necessarily elegant and we can’t test it, what of use does it tell us about mind and causality?

Not pan
Panpsychism is fuzzy and unfalsifiable, but is it even what it says it is? Many panpsychists lean heavily into the idea that mind is a fundamental part of reality – that in fact it’s embedded. But embedding mind into the physical is already covered by monism, really. I personally find that once you forego, as some do, the idea that panpsychism is “pan”, and you just want to say mind is embedded in physical stuff or properties, you’ve definitely not made much of a point without stating where and how. Some have tried to mingle emergentism of some sort with panpsychism, and that really doesn’t fly either. If it’s embedded, everywhere, and then needs to emerge as well – what, again, is its distinguishing feature? Galen Strawson, for example, thinks any physicalist who wants to be a realist about qualia or what-it’s-likeness should be a panpsychist. Okay, but where and how? The more one tries to pin down panpsychism, the more it seems to need to hide “in between” the things we know, almost like the dark matter of consciousness – only unlike dark matter, we don’t even have indirect data to support it. Just claiming embeddedness is not enough. It’s either everywhere, or we’re not talking about panpsychism anymore.

Fallacy of division
We have mind, we are made out of matter, therefore matter is or has mind? Whether this argument is made explicitly or not, it seems panpsychism may be a major case of projecting it. Given that mind seems unknowable without actually having a mind, there is as little reason to look for mind everywhere as there would be to look for mind without mind. Yet mind exists, so does that make it compelling to project it onto everything? Probably, because it seems so “simple” – but the onus is on the claimant to explain how they just went from the phenomenon of mind to shoehorning it into everything. If it’s true of the whole, how is it true of the parts, why, and of which parts? Without answers to these questions we are left with little to nothing substantive. Issues such as these are known as combination or subject-summing problems in panpsychism.

Assumption of homogeneity
A pet peeve of mine is ideas assuming homogeneity at some level without explicitly justifying it. One way or the other, many thinkers and researchers try to homogenize some aspect of reality to arrive at the previously discussed notion of elegance. It just seems so simple. Seeking elegance and simplicity is definitely a theme in physics and math. There is good reason for that, as we’ve stumbled upon (or invented) incredibly elegant equations such as Euler’s equation or the Pythagorean theorem. But is our universe really that homogeneous? I think that at some levels of organization or description there is of course homogeneity. But the trend is to take this thinking too far. And we certainly have very hard evidence and proof that our universe is differentiated. You can’t turn hay into gold.

If one is a physicalist, is not the first thing one must realize that the quantum soup argument – i.e., homogenizing the physical world to the point that “experience is in everything” – lacks any explanatory value? It does not solve anything; it’s a cop-out.

Moreover, doesn’t our knowledge of physics show that we cannot possibly make this leap? Clearly, my brain cannot be made out of wood. Just as I cannot conduct electricity through wood – I can make a spark and let it burn, but conduct it? No. Electrical resistivity is a thing. Even though wood, just like my brain, is just quantum soup. Just energy. Yet we cannot transform hay into gold. Why would it not follow that our brain and body, interacting with the world, harness specific processes that give rise to experience, and that we needn’t assume that mental or experiential properties are “everywhere”?
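To underline this with numbers – these are order-of-magnitude textbook figures only, and wood’s resistivity in particular swings enormously with moisture content – the gap between substrates is anything but subtle:

  # Same "quantum soup", wildly different causal powers: ballpark resistivities.
  RHO_COPPER = 1.7e-8    # ohm-meters, standard textbook value for copper
  RHO_DRY_WOOD = 1e14    # ohm-meters, very rough figure for dry wood

  # Roughly twenty-two orders of magnitude between two arrangements
  # of what is, at bottom, "just energy".
  print(f"Wood resists current ~{RHO_DRY_WOOD / RHO_COPPER:.0e} times more than copper")

If how matter is organized makes that much difference for something as simple as conduction, assuming it makes no difference for experience is quite the leap.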

Evolution may be at odds with panpsychism
If mind is everywhere, one should start to question why there is not more mind than we see, and if there is, how come it’s not obvious? Why did evolution need billions of years to evolve it? Why do our central nervous systems and brains have so much complexity? At the very least this implies some sort of emergence. But then that doesn’t really mix well with panpsychism. And if we posit panprotopsychism, where the proto-mental needs the appropriate structures or stuff to become mind proper – is that really panpsychism anymore? In any case, I find it strange that ubiquitous mind, embedded in reality, requires so much effort to actually become mind. In fact, the real question here is: can we build mind out of anything? I don’t think so. If the panpsychist here again tries to lose the “pan”, or somehow embeds mind just so it works for emergence, and stays conveniently out of the purview of falsifiability, or fails to provide any sort of help in our pursuit to understand mind in our universe, then panpsychism collapses onto itself. So truly, if panpsychism or any flavor of it (panexperientialism etc.) is true, why did we need nervous systems? Why did it even take billions of years? If panpsychism were true, consciousness would have been harnessed much faster.

We don’t do this with any other part of our body, with life, or with organic matter. We have no problem assessing what the building blocks of life are. But we don’t call them proto-life just because they can potentially be the blocks of life. We don’t invent panbioism to resolve this either. This issue is also called the combination problem, and it is one of several reasons why Koch and Tononi distanced themselves from panpsychism as they formulated IIT: “Unlike panpsychism, however, IIT clearly implies that not everything is conscious. Moreover, IIT offers a solution to several of the conceptual obstacles that panpsychists never properly resolved, like the problem of aggregates (or combination problem [107,110]) and can account for its quality. It also explains why consciousness can be adaptive, suggesting a reason for its evolution.”

Conclusion
So with so many issues, does panpsychism really have anything going for it? It really seems that it doesn’t. The strongest argument is elegance and parsimony, but it’s neither clear that it’s even as elegant as it seems, nor that elegance in itself is any kind of good reason to give panpsychism credence. Panpsychism is vague and fuzzy, seems to stem from, rely on or involve various biases and fallacies, and most importantly, simply gives us nothing to work with in terms of testable predictions. Panpsychism just doesn’t pan out.

Note: I will add more and proper references to this post later.

]]>
<![CDATA[AGI Literacy]]>https://mentalcontractions.substack.com/p/agi-literacyhttps://mentalcontractions.substack.com/p/agi-literacyThu, 13 Oct 2022 21:56:00 GMTA decade ago, AGI and Strong AI were terms mostly associated with crackpots – those who work on the unattainable and bizarre goal of building an artificial general intelligence. That has changed. Chief AI officers, researchers and companies/institutes now openly talk about the idea of AGI and their pursuit of it. OpenAI announced DALL-E 2(1), the second coming of their natural-language image-generating AI. Stability.ai released Stable Diffusion(2), an open-source text-to-image generator. It's stunning and marks the incredible progress AI has seen in recent years. And it's not just that; it's also a cautionary tale for just how quickly things can progress in AI when there is a breakthrough.

There are some who still think the pursuit of AGI is folly, simply impossible. Or who make other dubious claims regarding AGI. In this post I outline the only scientifically literate ways of claiming AGI is impossible – or, put differently, on what basis one might attempt to claim we can't ever build AGI. I'll also mention some claims I think are dubious and should be ignored unless they're backed by stunning theoretical work.

Before we dive into that though, let's establish what AGI really is or could be and what it isn't. Facebook's chief AI scientist Yann LeCun recently posted(3) that AGI can't exist: "AGI can't exist. Because all intelligence is necessarily specialized. It's merely a question of degree. Even humans do not possess general intelligence."

While I have had this thought myself in the past, I don't quite agree. I would say that general intelligence is the collection of abilities of a cognitive system that allow that system to learn across domains, to evaluate itself and that learning, and to expand those domains. In this sense, there is a vast difference between a human and any other animal on earth, be it dogs, pigs, dolphins or elephants. There is a distinct qualitative threshold where the combination of awareness, self-awareness, evaluation, self-evaluation and all the necessary enabling modules such as working memory and executive function come together to result in a system that does not compare to the sort of narrow, niche-focused brains other species have. Whether this intelligence is "truly" general we may quibble about, and we may end up realizing there is no free lunch(4) for pure generality, just as there isn't for Solomonoff induction(5). However, for the sake of this post, let us assume AGI means human-level intelligence and beyond.
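To make the no-free-lunch point slightly more concrete, here is the usual statement of Wolpert and Macready's result behind reference (4) – the notation is theirs, and nothing here is specific to AGI. Averaged over all possible objective functions f, any two search algorithms a1 and a2 perform identically:

  \sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)

where d_m^y is the sequence of cost values an algorithm has sampled after m evaluations. Roughly: no learner outperforms any other across all possible environments, so "pure" generality is always bought at the price of performance somewhere.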

So where does the idea of AGI come from anyway? NGI, of course: Natural General Intelligence. Humans. This is our primary blueprint and, of course, proof that there is nothing in the laws of physics that prevents a general intelligence from existing. So what had to happen for that to occur? That's a heavy, very loaded, and hairy question. But, in a nutshell: the right combination of evolutionary pressures, in the right sequence – so with the right modalities, (evolving) environmental complexity, etc. So "all" we need to do to create AGI is to simulate whatever was necessary and sufficient in that process to build AGI. Or is it? Of course, just what is necessary and sufficient is a very hairy matter in itself. Is consciousness needed? What is consciousness even? What is intelligence? These are open and contested questions. Top authors and thinkers have been found to all have their own ideas on this(5). So what of that evolutionary process needs to be simulated? Are there completely other ways of getting there?

Quite the complicated problem. In fact, the idea that a computer science PhD (or other CompSci degree) automatically makes opinions on AGI valid is laughable. Computer science curricula are diverse, and even narrow AI is often only a small part of them. A student may focus on doing work with the OpenGL pipeline and know next to nothing about AI (playing with neural networks a few times hardly counts!), never mind AGI, AI ethics, existential risks, etc. Grinding through years of C++ doesn't magically give one insight into AGI either (how could it?). Building AGI is an interdisciplinary endeavor, and certainly to investigate future civilizational developments and associated risks one cannot get by with a generic computer science degree. Obviously, skills in computer science are important. But so are skills in math, systems science/cybernetics, physics, philosophy, cognitive science and more. Yes, that list is very long. This is because everything from computational complexity, computational models, hardware, machine learning, the brain, learning and evolution to consciousness is relevant to AGI – along with concepts like agency, decision-making, autonomy and intelligence, which are among the biggest open questions in science. Not for the faint-hearted.

But hey, at least we know it's possible since it exists. There are no laws of physics that prevent a cognitive system from exhibiting as general an intelligence as ours. NGI is living proof AGI is possible.

So now that we've established that, what is there to protest against and where can we blunder when contemplating AGI and our future?

1. AGI is impossible
Evolution is a tough act to follow. Can we ever get enough data and computing power to generate an artificial mind of the sort that possesses general intelligence as we know it? It's an open question, sure – so here one can try to make an argument based on tractability and somehow come to the strong conclusion that we can never get there. Extremely doubtful, but semi-literate. Next up is hardware. It's entirely possible that what's necessary for intelligence requires a specific hardware implementation. Here too an argument can be made. Carbon chauvinism or not, at least it's a concrete argument. With respect to it being somehow impossible in principle – that's where it gets illiterate. Natural general intelligence exists, and none of its features suggest they cannot be emulated, even if they require strict constraints. And so we are ourselves the prime example that general intelligence can exist, and so AGI is very much possible.

2. AGI cannot rapidly self-improve
Claiming AGI cannot rapidly self-improve – have a so-called hard takeoff. Whether, how and when it will happen is one discussion. But whether it can happen? Of course it can. The only gigantic issue we have with our brain is one of interfacing. Biological machinery is wondrous, but interfacing with it is very difficult. It was not designed; it evolved. It has modularity, organization layers... but ultimately it's messy. We can't just double our working memory by sticking extra memory into our ear. Otherwise humans would have had their own hard takeoff. AGI will have no such issues. Some protest that this is naïve and that AGI will need resources like anything else. Of course. But your 86 billion neurons only need about 20W of power. You really think non-biological intelligence can't figure out how to run a supremely designed version of that and optimize it to 1W? And then make it 100 times more powerful? This topic warrants a whole blog post, or tome – but as far as I'm concerned, the idea that AGI cannot rapidly self-improve is a profound failure of the imagination. Hence:
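To put rough numbers on that intuition (the brain figures are the commonly cited ballpark values used above; the 1W target is purely hypothetical):

  # Back-of-the-envelope power budget of the brain (ballpark figures).
  NEURONS = 86e9        # commonly cited neuron count
  BRAIN_WATTS = 20.0    # commonly cited power draw

  print(f"{BRAIN_WATTS / NEURONS:.1e} W per neuron")  # ~2.3e-10 W

  # A hypothetical engineered substrate running an equivalent workload at 1 W
  # would already be 20x more efficient than biology at the same capability.
  OPTIMIZED_WATTS = 1.0
  print(f"{BRAIN_WATTS / OPTIMIZED_WATTS:.0f}x headroom over biology")

The exact numbers don't matter; the slack does. Nothing in physics pins intelligence to biology's energy budget or to biology's interfaces.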

3. AGI is to humans what humans are to dogs
Using arguments that invoke the differences in intelligence between animals like dogs or chimps and humans. There is a qualitative threshold for intelligence that renders these comparisons moot. It's seductive, because it helps one immediately imagine how there can be an absolute, impossible-to-overcome difference between two entities. You can try to have a dog design a rocket all its life and get nowhere. But the trap here is not appreciating that humans nevertheless have at least enough cognitive tools to have some form of generality, as I described in the introduction of this post. There surely can be an absurd, unimaginable difference in cognitive ability between an AGI and a human, but it's qualitatively a difference of another sort than that between humans and other animals. This provides at least a glimmer of hope that AGI wouldn't necessarily automatically rule us and stomp on us like ants. Just a glimmer, though.

4. AGI poses no risk
Claiming there is no risk or issue at all with vastly different artilects being injected into the house of cards that is our civilization. Another profound failure of the imagination, and wishful thinking. The onus is on the claimant here to show how AGI would be constrained or contained forever, but it's doubtful any such plausible argument exists. Pretty much scientifically illiterate unless there is a stunning thesis behind it. AI safety and alignment are catastrophically difficult problems, as they involve everything I mentioned as necessary to deal with AGI theoretically, and on top of that meta-ethics and ethics we haven't even sorted out for ourselves, let alone for alien entities that will be far, far less resource- and interface-constrained than us. To make this case one has to demonstrate many fiendishly difficult things, such as how the orthogonality thesis(6) doesn't hold.

5. Super-intelligence implies benevolence
Claiming that all AIs will converge on benevolence towards humans. This is one of the daftest ideas I’ve seen tossed around. Human values are a patchwork: inconsistent, and but a dot in the space of possible ethical systems, just as our mind is a dot in mind design space. Forget about human ideas of good as attractors for other minds and AIs. I’m afraid I don’t have much more to say about this claim, other than that it is an extraordinary claim that requires extraordinary evidence.

In the near future I will go into some of these points in depth, as well as release my own original work on our future, AGI, and how to think about future-proofing civilization in the coming years and decades.

  1. https://openai.com/dall-e-2/

  2. https://stability.ai/blog/stable-diffusion-public-release

  3. https://www.facebook.com/yann.lecun/posts/10158210502527143/

  4. https://en.wikipedia.org/wiki/No_free_lunch_theorem

  5. https://arxiv.org/abs/0706.3639

  6. https://www.fhi.ox.ac.uk/wp-content/uploads/Orthogonality_Analysis_and_Metaethics-1.pdf

]]>
<![CDATA[Hated features of biological existence]]>https://mentalcontractions.substack.com/p/hated-features-of-biological-existencehttps://mentalcontractions.substack.com/p/hated-features-of-biological-existenceSun, 09 Oct 2022 21:54:00 GMTThe strength of evolution is that it doesn’t know what it’s doing. The weakness of evolution is that it doesn’t know what it’s doing. So what you end up with is a hodgepodge of necessary and sufficient features in a “design” that no engineer ever needed to interface with. It’s a system made for survival and borne out of survival over the course of billions of years.

Whether evolution necessarily tends to become self-aware in the sense that we are now is an open question. But once a species reaches the qualitative threshold of consciousness that we have reached, it seems likely that sooner or later it arrives on the eve of radical self-modification. And I think no civilization arriving at radical self-modification is ready for it.

However, as a transhumanist I believe in what transhumanist movement founder Max More dubbed morphological freedom1: the right to maintain or modify one’s own body, on one’s own terms. So what might we want to modify? What aspects of biological existence just don’t cut it? What are some hated features of being biological? Here are a few…

  • Use it or lose it (unused mechanisms decay)

    It is completely understandable why atrophy (gradual deterioration due to lack of use) sets in, and certainly we would suffer greatly if rarely needed systems were resource hogs instead. But that this rule applies to virtually all our bodily functions and can’t be avoided is terrible. Please make this feature optional.

  • Pleiotropy (one gene may affect several traits)

    Biology is messy and while modularity exists in the human body, there is also so much interdependency throughout the layers of organization that modding the system is a terrible headache. If only we could have a neat system where we turn off the gene for a trait and voila… done. Gone is the trait and nothing else changes. If only.

  • Neurons that fire together, wire together (the brain functions in a highly associative way)

    An obvious and great feature of our wetware in so many ways, which also comes with the terribly annoying side-effect of everything working so bloody associatively. No clean recall, no clean mental layers and no proper direct manipulation of mental function. (Heb)b(ian)oo. Proper exocortices (outer, additional brains, augmentations) can’t come soon enough. But using them won’t be simple.

  • Single point of failure for a vital function: blood pumping

    Redundancy is such a wonderfully present feature of biological systems. Vital organs come in pairs, and the brain comes with 86 billion neurons and can stand to lose even specialized parts, with a chance of plasticity doing its work and compensating for the lost parts… but then all the pumping is done by that single heart. The heart fails, everything else fails.

  • No major emergency back-up plan

    A reliable emergency hibernation mode to prevent cell necrosis would be nice. If our cells are not getting the oxygen they need, if blood is not being pumped around, can’t there be a reserve system that takes over for a while? Can’t they shut down temporarily? I would happily take a proper fail-safe for this.

  • Refueling
    Almost no one alive will denounce eating and drinking entirely as terrible, as you have to like it to keep doing it. I like some food. Eating and drinking are nice, sometimes. But not all the time. Yet we’re built in a way where we have to constantly eat. Constantly refuel. Of course, fasting has been shown to have benefits, even some relevant to the last point on this list, but generally speaking we need regular meals. Why can’t we just fuel up once a week? Once a month?

  • Sleep
    While, as with food, you may have all sorts of nice associations with a good night’s sleep, sleep is absolutely terrible if you ask me. If you are able to sleep regularly like a baby, it all seems fine, but if you can’t… you notice what an absolute horror it is to not be able to directly control our on/off state. As if losing approximately 33% of our time to snoozing isn’t bad enough, having to suffer through sleep deprivation makes one terribly miserable, and there is no simple go-to remedy for it.

  • Lack of mental control
    Like the earlier point regarding our associative brain, this goes to show what an atrocious affair our mental control is. We have so little of it! Tomes, guides and endless discussions exist on how to get into particular states, how to transform oneself mentally. There is so little we can directly do. We have to learn vague feedback loops, feed into them, coax ourselves into ways of doing, and hope to get to the states we want. An absolute abomination. There may be something poetic to this perpetual dance, but it sure isn’t very convenient.

  • Aging

    Last, but certainly not least. The wear and tear of the human body can in principle be combated and even completely compensated for. We just don’t know how yet. Meanwhile, every birth is a death sentence. It would be nice if being on death row were due not to the accumulation of damage and errors in our cells, but to, say, the pending heat death of the universe. We have some time to figure that one out. Until we do, life is a death sentence.

Of course there is more where that came from. I’ll save that for a second post. In the meantime, here is Max More’s Letter to Mother Nature: Amendments to the Human Constitution.

  1. https://en.wikipedia.org/wiki/Morphological_freedom

]]>
<![CDATA[Welcome to Mental Contractions]]>https://mentalcontractions.substack.com/p/coming-soonhttps://mentalcontractions.substack.com/p/coming-soonWed, 21 Sep 2022 16:36:51 GMTMental Contractions is my blog on enhancing, extending, and exploring existence.

The coming decades constitute a particularly turbulent transition period for our civilization, one that will be very difficult to overcome. I think any civilization on the eve of radical self-modification is not ready for that transition. So one half of my research is focused on developing insights into what that transition will look like: how to anticipate it, how to get through it, how to mitigate its risks, and even how to improve our approach to society today in anticipation of it.

A large part of the other half of my research is foundational work on mind and brain, math, computer science and philosophy. I am interested in understanding our current selves, our current world, our future selves and our future world. I approach almost all problems multidisciplinarily, taking both science and philosophy into account in almost all cases. As far as I’m concerned, one cannot do without the other, especially in the case of big open questions.

If you’re interested in original insights on the future we face as a civilization, as well as related challenges and foundational issues, subscribe to Mental Contractions.

]]>
<![CDATA[The pitfall of avoiding narrow AI pitfalls in AGI development]]>https://mentalcontractions.substack.com/p/the-pitfall-of-avoiding-narrow-aihttps://mentalcontractions.substack.com/p/the-pitfall-of-avoiding-narrow-aiTue, 02 Sep 2014 21:47:00 GMT

Note: I use AGI and Strong AI interchangeably; both stand for human-level artificial intelligence.

It seems almost safe to say that the A.I. hype around deep learning is a sure sign that we’re out of the A.I. winter. Almost. I think the current hype is based on effective, but ultimately narrow, A.I. paradigms. Narrow or weak A.I. is brittle. This means it breaks down outside of the particular scope or domain for which it is specifically designed.

It remains to be seen whether we will end up in another period of disillusionment or bridge the painfully significant conceptual rift between narrow A.I. and strong A.I. during this hype cycle. I think it won’t happen without very well-funded and dedicated research efforts. Unfortunately, despite the recent rise of interest in A.I., the field of artificial general intelligence remains woefully underfunded.

Analysis of the reasons for the lack of funding of AGI is outside the scope of this blog post. One reason is that building AGI has always been considered a (too) daunting challenge. After the initial excitement and positive prognoses of the mid-20th century, it became clear that building general intelligence is not quite the walk in the park some declared it to be. Nowadays, some still claim it’s impossible. I disagree. However, let there be no doubt that the pursuit of building AGI is indeed fiendishly difficult.

In fact, I think it is among the top three most intellectually demanding pursuits in science and engineering. The road to AGI is paved with great difficulties. In this post I will discuss one of them: the danger of avoiding models in your AGI design. Models are handy for narrow AI, but deadly for AGI – or so the reasoning goes. I think avoiding them is an important piece of the puzzle. Nevertheless, it is not without its own set of pitfalls.

Models are not evil
Let’s get something out of the way: models are not evil. However your AGI architecture is constructed, you have a model for it. Whatever your idea of general intelligence and learning is, you have a model for it. Yes, you should avoid feeding your AGI system models in its learning domain, because it should be able to acquire its own models through perception, just like we do. If it can’t do that, it’s not general intelligence. Yes, you should understand the difference between supervised and unsupervised learning, or top-down and bottom-up designs – a difference sketched in code below. (Note: these concepts are more loaded in the context of AGI than narrow AI.)
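For concreteness, here is a minimal sketch of that distinction on toy data (numpy only; the data and both “learners” are invented for illustration): in the supervised case the designer hands the system the answers, while in the unsupervised case the system has to organize the data on its own.

  import numpy as np

  rng = np.random.default_rng(0)
  # Toy data: two well-separated blobs of 2D points.
  X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

  # Supervised: the designer supplies target labels y - a model of what
  # the answer should be. The system merely fits that given model.
  y = np.array([0] * 50 + [1] * 50)
  w = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)[0]
  print(w)  # fitted weights for the designer-supplied mapping

  # Unsupervised: no labels. The system must induce structure itself;
  # here, two cluster centers found by a few naive k-means iterations.
  centers = X[[0, -1]].copy()  # seed one center in each blob for stability
  for _ in range(10):
      labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
      centers = np.array([X[labels == k].mean(0) for k in range(2)])
  print(centers)  # ends up near (0, 0) and (5, 5)

Note that even the “unsupervised” learner is not model-free: the choice of two clusters, of Euclidean distance, and of this data representation are all models fed in by the designer – exactly the gray area the rest of this post is about.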

However, there are many levels of description and organization to define, and models are not unequivocally bad for all of them. If I give you (for the sake of the argument) a correct model, such as a blueprint of a generally intelligent artificial being and the means to construct it, then each and every accurate diagram you might draw of this being, such as the photo at the top of this post, is absolutely correct too.

There won’t be any danger of over-specification here, because we’ve established it’s correct. As such, it is not to be discarded simply because it’s a model. This may seem crude and banal, but the question here is: “Is it sensible to discard models from virtually all of the levels of organization in your system, besides the learning domain?” Some think so.

This is because the mind is considered too difficult to model and because we know too little. Unfortunately, this is not as sensible as it may first appear. Worse, as I will argue later on, it might tempt you to oversimplify your system to the point of rendering the development of general intelligence intractable: your system might never be able to bootstrap itself into a robust general intelligence because of it.

The difficulty of modeling a mind
So, when unpacked, “models are bad” is more lucidly expressed as “There is no way you can model [what is deemed necessary for AGI], so don’t even try”. I think this argument, once steel-manned (strongest version of the argument), has some merit given the current levels of ignorance in artificial intelligence and relevant domains. It’s quite hard to fully disagree with this. It’s not difficult to demonstrate just how little we know, even today.

On the other hand, I would strongly warn against discarding lessons from the wondrous and advanced machinery that is biology. Among other reasons, I say this because you are still using models, and in more ways than you may think, whether you admit it or not. In creating artificial intelligence, you are modeling the human mind – our best example of high general intelligence. If you are not doing it explicitly, you’re doing it implicitly. Even if you think most aspects of human minds are irrelevant and poorly understood, you are still, one way or the other, modeling intelligence or learning, no matter how you spin it. And that is where the problems start.

The separation of “model of the world” and “model of the machine that is supposed to model the world” is in itself an abstract distinction between two abstracted domains. This distinction is not only fluid (it depends on your design, taxonomy, etc.), it is also fuzzier than you might think. The difference between force-feeding an ontology to your AI and constraining its learning such that you have implicitly fed models to it is a very hard one to make.

The architecture of your machinery, your hyperparameters, and the type of data you feed it are all potentially contaminated by models (of the world), directly and indirectly. Adhering more to the East Pole than the West Pole in the East-pole/West-pole divide can make all the difference. Implementing some idea of Chomsky’s universal grammar to achieve natural language processing can make or break your system. Yet the questions in these domains are largely unanswered. This is a lethal gray area.
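As one concrete illustration of how architecture alone smuggles in a model of the world (a sketch with invented toy numbers, not anyone’s actual design): a convolutional layer commits, before seeing a single datum, to the assumption that the same local pattern matters everywhere in its input – an assumption a fully connected layer does not make.

  # Free parameters of two layers mapping a 32x32 input to a 32x32 output.
  H = W = 32

  dense_params = (H * W) ** 2   # fully connected: ~1.05 million weights, no assumptions
  conv_params = 3 * 3           # 3x3 convolution: 9 shared weights

  # The ~100,000x reduction is not free: it encodes the prior belief that
  # useful features are local and translation-invariant. That prior is a
  # model of the world, fed to the system through architecture alone.
  print(dense_params, conv_params, dense_params // conv_params)

Every such choice – kernel size, layer count, tokenization, data curation – is a small ontology smuggled in before learning begins.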

The hidden pitfall of model avoidance
To avoid all uncertainty, you go with what you think you know: a tabula rasa (blank slate). But the blank slate is still a slate, and its properties are in your hands. I have previously expressed this as “Even a tabula rasa needs a tabula”. Your tabula is still a model. Deflating it to maximize its evolvability is a good instinct, but it is not necessarily a sensible decision.

You abstract away lots and lots of features of the human mind you deem inessential, in order to model that which you consider the crucial mechanism for acquiring models. How sure are you about abstracting away all those features? Is not understanding them a good reason to discard them? Are you sure you’re not throwing out the baby with the bathwater? The irony here is that in your zeal to avoid crippling your system with models, you just might end up crippling its ability to evolve general intelligence.

So your AGI system is affected by what you choose to implement as much as by what you choose not to implement. And you may be implementing too little. While you have many advantages over evolution, such as easily running countless iterations, it’s not as easy to put in the right parameters. Your means of input and environmental constraints are limited compared to the evolutionary pressures that formed biological systems. Almost 3.6 billion years of evolution have shaped biological machinery. The design of our brain and body has been evolved by ever-changing landscapes of evolutionary pressures. The richness of these evolving environments is far beyond anything you are feeding into your system.

It’s safe to say evolution is the mother of all optimization processes with respect to data. We can’t forget, however, that after 3.6 billion years it has produced many examples of sub-human intelligence (all non-human animal species, extant and extinct). Even in light of remarkable examples of animal cognition, such as the behaviors of the smartest of animals like cetaceans or corvids, it’s hard to dispute that humans are exceptionally intelligent. Yet Homo sapiens is only one species among millions.

Thus, millions of species have not evolved human-level general intelligence, despite going through the same unfathomably complex evolutionary process for billions of years. That should worry you. It should also help you realize that the right model (as elusive as ‘right’ can be) can be extremely useful when you face limited ways to evolve your system. Evolution is a very, very tough act to follow.

This means you have a fiendishly tricky trade-off to make between the complexity of your AGI system and the complexity of its environment and data. In my opinion, this trade-off is currently very poorly understood and avoiding models is no panacea. It’s only one piece of a complicated puzzle.

As in any endeavor with very difficult open questions aplenty, expert opinions on what is necessary and sufficient for AGI are very divided. You need to acquire a better understanding of what you’re trying to build. By this I don’t mean working on meta-heuristic procedures to gain more insight into your neural net. I mean interdisciplinary AGI research. 

You can get away with “just” computer science knowledge when building narrow AI, but building AGI is the most interdisciplinary pursuit there is. Computer science and bits of neuroscience will not suffice. I will cover that in a future post.

For now, I leave you with two questions every AGI developer should never stop asking: “What have I abstracted away?” and “Why?”

]]>
<![CDATA[Caricaturing free will]]>https://mentalcontractions.substack.com/p/caricaturing-free-willhttps://mentalcontractions.substack.com/p/caricaturing-free-willWed, 13 Aug 2014 16:55:00 GMTIn March 2012 Sam Harris released a short book on free will. The reception was mixed. Philosophers especially dismissed the book as shallow and misguided. Daniel Dennett published a derisive response to it, which Harris posted and replied to on his blog. Meanwhile, there are still those who think the book is a valuable contribution to the debate on free will. I think that is worrisome, because Harris misrepresents the free will debate and the strength and context of the scientific evidence. In this post I list three major problems with Harris’ book and take on free will, also going by his blog posts on the subject.

Problem #1: defining free will by a particular position on free will
When investigating free will, philosophers are concerned with the following question: “Is our will free in any meaningful way?”. Nothing more, nothing less. The question has been debated since the very beginning of western philosophy (in different terms) and throughout the historical debates many positions have been developed. Metaphysical libertarianism is merely one of these positions. Unfortunately, this is the position Harris caricatures only to present it as the canonical definition of free will. He then attacks this caricature in order to dismiss the entire domain of inquiry. Here is an analogy with another field that might make clear why that is problematic. If you are unfamiliar with quantum mechanics, suffice it to say that the interpretation of quantum mechanics is a contentious issue. Physicists are divided on it. Here is a bar graph of the division:

[Bar graph: survey of physicists’ preferred interpretations of quantum mechanics]

Imagine Sam Harris were to write a short book on quantum mechanics in which he equated the Copenhagen interpretation with quantum mechanics as a whole. Imagine he set out to debunk the Copenhagen interpretation, only to then claim quantum mechanics itself had been debunked. He would insist that quantum mechanics is not coherent because one interpretation isn’t, and that those who believe in it are simply trying to hold on to their intuitions. Were he to go by such a premise, he would suffer the same well-deserved derision from physicists as he has suffered from philosophers for his free will book, for it would be a very odd thing to do, to say the least. He would not be able to honestly say that quantum mechanics is just a non-starter, philosophically and scientifically. Yet this is exactly what he has said of free will:

Do we have free will? Answer: no. “The problem is free will is just a non-starter, philosophically and scientifically.”

As it happens, only 12% of philosophers identify with libertarianism (survey from 2009). And these are sophisticated technical positions, not the caricature Harris presents in his book. They are neither dominant nor historically primary. Sam Harris writes as if he has debunked free will: “Free will and why you still don’t have it” is one of his blog post titles. Yet he has done no such thing. His engagement with the literature consists of briefly mentioning compatibilism and psychologizing some of the associated positions as attempts to justify the mere feeling of being a freely choosing agent, while still not being one. These attacks on a caricature of a single position on free will cannot, in principle, invalidate the possibility of our will being free, as Harris would have us believe.

Problem #2: intuitions on folk intuitions
Sam Harris justifies his caricature with the claim that it is what most people believe. Is it? This paper investigates precisely that, and it turns out people’s intuitions on free will are not that straightforward and naive. Sam Harris tells us what people believe, but never justifies why he believes this is what people believe. This seems to be nothing beyond his intuition about people’s intuitions, presented as fact. What is actually the case is that what people believe regarding free will depends on context. Confronted with specific scenarios, people display strong tendencies to believe in a folk version of hard determinism, rather than in magical and “uncaused” will. Such information is entirely absent from Harris’ writings. So contextualism about free will is another ongoing contemporary debate in philosophy that Harris ignores. This is convenient, as mentioning it would only make tearing down his caricature of free will more difficult. We can’t go by Harris’ intuitions on people’s intuitions. He should have gathered evidence instead of making assumptions about people’s intuitions.

Problem #3: cherry-picked literature
Not only does Harris barely engage with the literature, he cherry-picks literature, which he then presents as evidence for his view. The Libet papers from the 80s are classics, but there has been an ongoing debate on these types of experiments and follow-up research in the decades since. Yet Harris mentions two papers that seem helpful to advance his view, and says virtually nothing about the objections. Libet’s veto caveat is called “absurd” early in the book, with very little further discussion. In fact, this is the tone throughout the rest of the book, which is marked by Harris’ perpetual bewilderment: “I cannot imagine”, “I do not know”, “It just happens”, etc. Unfortunately, knowing what he does not know, cannot imagine or finds absurd is not very helpful for building any argument.

The problem with this whole setup is that it misrepresents the debate. It’s not a fair depiction of the issues, and writing for a lay audience is no excuse. Moreover, cherry-picking science to bolster this shallow narrative doesn’t do science any favors. Harris gives the impression that the classic Libet papers and the follow-ups are compelling evidence for his case, when in fact there is compelling literature exposing holes in the setup and assumptions of typical free will experiments. Mentioning and discussing these is the bare minimum for any remotely serious work on free will.

To read or not to read?
Harris’ book will not provide you with an education on free will. Nor will it provide you with a strong case for any particular position in the free will debate. There is no real case for “redefining free will”, as if some definition were canonical. A coherent definition is what we’re after. If you’re not familiar with the literature, do yourself a favor and read the entry on free will at the Internet Encyclopedia of Philosophy, the entry at the Stanford Encyclopedia of Philosophy, or even the Wikipedia entry on free will. You will learn more from any of these than from all of Harris’ blog posts and his book taken together.

]]>
<![CDATA[To suffer or not to suffer?]]>https://mentalcontractions.substack.com/p/to-suffer-or-not-to-sufferhttps://mentalcontractions.substack.com/p/to-suffer-or-not-to-sufferFri, 01 Aug 2014 16:50:00 GMTGeorge Dvorsky published an interview with David Pearce on io9. I have been familiar with Pearce’s ideas for a few years and I think he is an exceptionally smart thinker with whom I agree on some issues and disagree on others. Pearce believes that we should eliminate all suffering of humans and animals through advanced technology.

Animal suffering
What I agree with, in the context of philosophy of mind and cognitive science, is that it is dubious to assert that morally meaningful suffering, or suffering at all, requires precisely tuned (such as that of humans) or very advanced access-consciousness atop phenomenal consciousness (Block). There is plenty of evidence of convergent evolution, and sufficient studies of homologous structures relevant to the consciousness and experience of animals, to be quite certain that angora rabbits screaming as their fur is ripped off, or pigs stuffed in small spaces awaiting slaughter, are indeed suffering. The science here does not support a Cartesian conception of animals as automatons. That would be ethically convenient for us, but is exceedingly unlikely. What we can or should do about this are two separate matters, but as far as I’m concerned, animals suffer, and in vitro meat can’t come soon enough.

The world as an ethically ideal theme-park
Engineering ecosystems would be a very tricky endeavor, to say the least – an endeavor that Pearce would like to make a reality. Surely if one agrees that animals do suffer and that their suffering is bad, one would not prefer that only factory-farmed animals be spared the suffering they now endure. One would also prefer for a cheetah not to rip open that gazelle in the wild. The practical challenge of doing anything about suffering in the wild is immense: ecosystems are complex systems, and attempting to control them is akin to attempting to control the weather. Currently, our interventions result in trophic cascades and other such disruptions. Turning the world into an eco-engineered zoo would be infinitely harder. This point is obvious and needn’t be belabored. Suffice it to say that we are quite some time away from even considering an attempt at such a massive operation.

Engineering for bliss
With respect to genetic engineering, the challenge here is that we are complex adaptive dissipative systems. There are countless dependencies throughout levels of organization and no handy layers of abstraction like the ones put into the systems we engineer. It’s fiendishly difficult to engineer for a higher-level emergent effect like bliss, due to issues like pleiotropy: one gene can affect several traits. For example, oxytocin does not, as is often reported, bestow its carrier with social openness without repercussions. There is some evidence that it merely increases in-group bias, which means that it makes one more cuddly towards people of one’s tribe, but potentially harsher towards those outside one’s tribe. There is no one gene for a desire to cuddle, as it is an emergent behavior. So to say that tuning genes for emergent top-level effects is problematic would be an understatement. There is an interplay between agent and environment that is very hard to balance once you start tinkering with it.

Look at the evidence for the increase in cranial capacity due to cesareans. Messing with evolution can yield all sorts of unintended consequences, some of which may be nasty surprises. That is not to say we should succumb to indifference in these matters. There is a lot to improve. Clearly, we are not that intelligently designed, and for this reason we have been intervening in nature for many centuries. I think we should continue to, so long as it’s done wisely. That it would be “unnatural” to intervene has never stopped us, and arguments that reduce to “it is good because it is natural and bad because it is unnatural” are appeals to nature and, as such, untenable. Genetic engineering is certainly not inherently bad; it is just extremely challenging.

Engineering for bliss as an imperative also relies on committing to ethical positions that are too close to open questions for my taste. Most people today will consider engineering for bliss the epitome of questionable supererogation (going beyond moral duty). Can we be sure that a being is helped and improved when rid of its nociception, for example? Even omitting, for a moment, the function of pain (noxious stimulus signaling)? What sort of axis is there really in terms of valence? Aren’t feelings and emotions more like (strange) attractors of a larger system, where hedonic adaptation is disrupted when you take out entirely what you deem negative? As mentioned earlier, I doubt that abstractions of mind such as pleasure and suffering map neatly to cognitive functions in the sense that we can count on clean emergent behavior and effects when tuning for them. The idea that suffering is something we should get rid of is also not without problems. There are fruits that suffering yields which may not easily be substituted with “non-suffering” processes. And this goes beyond the oft-made link between mental illness and creativity. For example, there are several competing hypotheses regarding the function of depression, with some evidence for the usefulness of depression as an evolutionary adaptation to deal with complex issues. This is the basis of the analytical rumination hypothesis. Where would we be without the benefits of such cognitive functions, and how would we compensate for them?

Open questions
The (meta-)ethical and axiological questions are bloody hard. If you could push a button to remove all pain and suffering, would it be a good thing to do? Would you have the right? Would it be much different from a negative utilitarian pressing a button to destroy the world (thereby ending all suffering)? If we could, as in “Dawn of the Planet of the Apes”, increase the intelligence of any species so that they possessed cognitive capacities equal to ours, would it be a good thing? If we did such a thing, would we immediately have to relinquish our self-appointed authority over such beings, by virtue of acknowledging the cognitive capacities they’d share with us? If we are to be ethically consistent, the answer is ‘yes’. They’d be autonomous and intellectually self-sufficient beings to the extent that we are. And we wouldn’t like it if others decided to ‘fix’ us at the press of a button without our informed consent, would we? What constitutes involuntary suffering and coercion is not remotely clear-cut.

Then again, those who think we should not meddle with the cognitive states of animals should be reminded that we’re already doing so on a massive scale, albeit indirectly. Consider the millions upon millions of animals we put through suffering in animal farms. They are all effectively condemned to negative cognitive states by virtue of our treatment of them.

Discussing the future
Pearce does touch upon some of the aforementioned issues. Nevertheless, imperatives in these areas are subject to some of the biggest open questions in philosophy of mind, cognitive science and ethics. Pearce has been and still is ahead of his time. Judging by the comments on the interview, most people think it’s all too far-fetched and crazy. So crazy, in fact, that his ideas are not worth thinking about. With that I absolutely disagree. Even if his quest in its entirety is stunningly ambitious for our times, and even if one doesn’t agree with all his conclusions, it touches upon many interesting issues relevant to our present as well as our near future. So while discussion of civilization policies beyond Kardashev type 0 is difficult for most today, we have to realize that there is plenty to discuss, and it’s vital that we discuss it sooner rather than later.

]]>