What exactly _is_ Posthumanism, you ask? Well, according to the material we have here, it’s definitely not just another academic buzzword confined to dusty ivory towers, no matter what some might initially think. It’s a way of understanding ourselves and our place in the world that is profoundly relevant to everyone, every single day. At its heart, posthumanism acknowledges something fundamental: humans and humanity are constantly changing and evolving through their interaction with technology and tools. So, the core unit of analysis for posthumanism is quite simply, and rather elegantly: ‘humans + tools’. ‘Tools’ here doesn’t just mean hammers and nails; it extends to _all_ kinds of technology.
But why the "post"? What was so wrong with good old humanism? That's a fair question! The source suggests that posthumanism isn't about declaring humanism 'wrong', but recognizing that it might not be the best framework for understanding our species in the twenty-first century. Traditional humanism, especially since the seventeenth century, has often relied on specific ideas about 'man' borrowed from religion, science, or politics. Posthumanism, by focusing on the 'humans + tools' relationship, actually explores a way of thinking that predates this particular historical understanding of humanism. In fact, if you consider human tool use from the very beginning, the concept of 'humans + tools' stretches back way further than a concept like 'humanism' that only took shape a few centuries ago.
However, posthumanism doesn't represent a simple break from humanism. That would be a bit silly, wouldn't it? Just imagine trying to throw away humanist ideas like rationality, individuality, freedom, progress, or rights just because they're old! Instead, posthumanism is better understood as a _radicalization_ of humanist ideas, pushing them beyond the historical limits and constraints of traditional humanism. Concepts like freedom, rationality, progress, and science are actually integral to posthumanism itself. So, it’s not quite humanism on steroids, but more like a complex evolution, neither a complete break nor a simple continuation, much like the relationship between modernism and postmodernism.
The "post" becomes necessary when we consider how certain discoveries in techno-science really shake up traditional humanist notions about what a human is.
### The Concrete World of Techno-Scientific Posthumanism
This is where things get really interesting! The sources dive deep into what's called 'techno-scientific posthumanism'. This focuses specifically on how scientific research on cutting-edge and future tools and technologies impacts us and challenges those traditional ideas about what separates humans from machines or animals. It delves into areas that might seem futuristic, or even like science fiction, but are very much rooted in real research: think neuroprosthetics, artificial intelligence (AI), robotics, and genetics. Understanding this concrete, technical side is presented as indispensable for properly grasping posthumanism.
One of the key concepts here is **Distributed Cognition**. This is the idea that cognition – all those processes related to knowledge like memory, attention, problem-solving, language, and so on – isn't just locked inside your brain or body. Instead, it's carried out by a _system_ where a human interacts with objects, tools, other humans, and all sorts of nonbiological stuff. In such a system, information is constantly flowing back and forth across the boundaries between biological and nonbiological matter.
Think about flying a plane, for example. The cognition needed to fly the plane isn't just in the pilot's head; it's distributed across the pilot _and_ the instruments, panels, and readouts in the cockpit that are sensing, monitoring, calculating, and displaying crucial information. The cognition happens in both "flesh and blood and silicon and wire".
A fantastic everyday example is your smartphone. It's become a kind of "cognitive prosthetic". We use it for everything: remembering things, communicating, finding information, solving problems on the fly. Our enhanced cognition is intertwined with artificially intelligent agents like Siri or Cortana. The source poses a compelling question: if your phone suddenly vanished, wouldn't your life feel impaired? Wouldn't it feel a little like losing a part of yourself, some memory, some knowledge, maybe even some IQ points or a piece of your identity? The very fact that you might mourn the loss of your phone like losing a part of you suggests you're already thinking of yourself in posthuman terms, extended or distributed across biological and digital technologies.
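To make this "system" talk a little more tangible, here's a minimal toy sketch in Python (everything in it is invented for illustration; it's not a model from the source) in which "remembering" belongs to the human-plus-phone system rather than to either part on its own:

```python
# Illustrative toy model: cognition as a property of a human + tool system.
# All names and facts here are invented for illustration only.

class BiologicalMemory:
    """Stands in for what a person happens to recall unaided."""
    def __init__(self, facts):
        self.facts = dict(facts)

    def recall(self, query):
        return self.facts.get(query)


class PhoneMemory:
    """Stands in for contacts, notes, and search results on a device."""
    def __init__(self, facts):
        self.facts = dict(facts)

    def recall(self, query):
        return self.facts.get(query)


class DistributedRememberer:
    """The 'cognitive system' is the human plus their tools, taken together."""
    def __init__(self, *parts):
        self.parts = parts

    def recall(self, query):
        # Information flows across the biological/nonbiological boundary:
        # whichever part answers, it is the *system* that remembered.
        for part in self.parts:
            answer = part.recall(query)
            if answer is not None:
                return answer
        return None


me_plus_phone = DistributedRememberer(
    BiologicalMemory({"partner's birthday": "12 May"}),
    PhoneMemory({"dentist's number": "555-0132"}),
)
print(me_plus_phone.recall("dentist's number"))  # recalled by the system, not the brain
```

Take the phone away and the system's capacities shrink, which is exactly the intuition behind mourning a lost phone as a lost piece of yourself.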
This idea of distributed cognition has some serious implications. It makes it hard to think of ourselves as purely biological beings or to define identity solely by the limits of the body. If a part of "you" is a product you purchased (like a phone), made by corporations, running on software you don't understand, subject to user agreements and potentially controlled by others, it challenges the traditional notion that we simply 'own' ourselves, as philosophers like John Locke suggested. Distributed cognition makes it difficult to define the self by a body that stops at the skin or skull. This has significant consequences for social and political theories built on the idea of self-ownership.
Another key concept that underlies posthumanism and distributed cognition is **Functionalism**. This is the view that certain things, including some mental states, are best understood by the _role_ they play within a system, rather than just their physical makeup. The old saying, "If it walks like a duck and quacks like a duck, then it's a duck," is a good analogy. A word processor's function is to process words; its physical form has changed drastically over time, but its function remains.
Applying functionalism, the "cognition" in distributed cognition is carried out by the entire system (human + tools), not just the human part. Memory in a computer functions like memory in an animal, even though they are made of completely different materials. This perspective allows for the idea that mental states, like pain, could potentially be realized in a robot or attributed to animals like dogs or crows, because whatever is playing the _role_ of pain in that system, _is_, for all intents and purposes, pain. Functionalism helps compare machines, animals, and humans by focusing on how they function as information processors.
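Programmers will recognize the duck test as "duck typing": what something _is_ gets settled by the role it plays, not the stuff it's made of. Here's a small, purely illustrative Python sketch (the classes and numbers are invented, not taken from the source) in which "pain" is defined by its role of registering damage and triggering avoidance, so a dog and a robot both qualify:

```python
# Functionalism as duck typing: whatever plays the role of pain counts as pain,
# for the purposes of this invented, purely illustrative sketch.

class Dog:
    def damage_signal(self, injury):
        # Nociception: detect tissue damage and report its intensity (0-10).
        return min(10, injury * 2)

    def react(self, level):
        return "yelp and withdraw paw" if level > 3 else "carry on"


class Robot:
    def damage_signal(self, injury):
        # A strain sensor plays the same role as nociceptors here.
        return min(10, injury * 2)

    def react(self, level):
        return "halt actuator and retract limb" if level > 3 else "carry on"


def respond_to_injury(agent, injury):
    """Functionalist test: ask only what role the state plays,
    never what the agent is made of."""
    level = agent.damage_signal(injury)
    return agent.react(level)


print(respond_to_injury(Dog(), injury=4))    # yelp and withdraw paw
print(respond_to_injury(Robot(), injury=4))  # halt actuator and retract limb
```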
**Cybernetics** is presented as the "granny and granddad" of the science behind neuroprosthetics, AI, and robotics. Norbert Wiener famously defined it as the theory of 'control and communication' in 'the animal and the machine'. From the very beginning, cybernetics was explicitly posthuman in the sense that it aimed to understand what biological and nonbiological systems (humans, animals, machines) _share_. It wasn't interested in superficial similarities but in how information actually flows across their boundaries. Without understanding cybernetics, the source suggests, it is impossible to properly understand posthumanism. Second-order cybernetics, in particular, acknowledges the observer's role in what is observed, leading to a constructivist epistemology. While this might sound like everything is relative, the sources explain that second-order cybernetics avoids solipsism and relativism through the requirements of coherence and invariance, which rely on observations corresponding to an independent reality. This focus on objective reality links it back to scientific knowledge and anchors concrete posthumanism. Cybernetics and information theory provide the indispensable theoretical and technical underpinnings for posthumanism by pointing beyond philosophies that rigidly separate humans and nonhumans.
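The basic pattern Wiener had in mind is the negative-feedback loop, and it's easy to see in miniature. The toy thermostat below (a hedged sketch with invented numbers, not code from the source) lets information about the system's current state flow back to steer its next action, the same loop cybernetics finds in a body holding its temperature steady:

```python
# Toy negative-feedback loop: the cybernetic pattern shared by a thermostat
# and a warm-blooded body. The gain and heat-loss numbers are invented.

def thermostat(target, temperature, steps=10):
    history = []
    for _ in range(steps):
        error = target - temperature          # communication: sense the state
        heater_output = 0.5 * error           # control: act on the error signal
        temperature += heater_output - 0.2    # environment leaks some heat
        history.append(round(temperature, 2))
    return history

print(thermostat(target=21.0, temperature=15.0))
# Temperature climbs toward the set point and settles just below it
# (the constant heat loss keeps it from reaching 21.0 exactly).
```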
**Neuroprosthetics** offers a very concrete example of the blurring lines between biological and nonbiological. Advances in this field show how easily bio-information can pass between flesh and bone and silicon and plastic. Experiments have even shown direct electronic communication between two human nervous systems. From a functionalist perspective, neuroprosthetics demonstrate that "human flesh" is no longer the _sole_ medium for carrying "human" sensation or movement.
This is not just abstract theory; neuroprosthetics directly challenge our intuitions about how our minds and bodies work. They extend the idea of "distribution" beyond just cognition to include bodies, disrupting traditional notions of "embodiment" and bodily integrity. The source is careful to note that discussing neuroprosthetics in the context of posthumanism is not intended to dehumanize people with disabilities who use them for restoration. Rather, it's about learning what this technology teaches us about posthumanism – how it redraws the boundaries of body, mind, and the lines separating humans, machines, and animals. Awareness of neuroprosthetics is necessary for understanding the current and future possibilities, limits, and risks of biological fusions with technology. It provides scientific weight to posthumanism, showing it's not just an academic fad. It also confronts us with the paradox that "human movement" or "human embodiment" might no longer be exclusively 'human' or anchored solely in the biological body; they are open to non-human, non-natural, non-organic elements.
The discussion also touches on **Genetics**, highlighting powerful new tools like CRISPR-Cas technology. This low-cost technology allows for editing and rewriting the genome of any species with unprecedented ease and precision. The potential implications are enormous: eradicating genetic diseases, creating humans better adapted to environmental changes or space travel. However, this power isn't without risks. The possibility of human germline engineering raises concerns about a "slippery slope" towards non-medical uses and the troubling implications of heritable changes. If genetic modifications become widely available, especially if they can be patented and priced according to market forces, it raises serious questions about potential inequality and unfairness, where only the wealthy might be able to afford such modifications. Like AI, CRISPR is framed as a **pharmakon**, both a cure and a poison, neither simply good nor evil, with profound impacts on the future. Understanding the complex biological realities of genetics is seen as essential for grasping posthumanism concretely, anchoring it in material reality rather than fashionable theory.
Then there's **Artificial Intelligence (AI)**. AI is discussed as the kind of tool that practically demands posthumanist discussion because it has agency and intelligence, eroding the distinction between human and non-human. Unlike a hammer, AI actively learns from the user, adapts, listens, watches, remembers, organizes, and communicates, acting more like an assistant or companion. The source speculates that these intelligent assistants could develop lifelong relationships with us, holding our data and knowing us better than anyone else. AI actively blurs the lines between machines, animals, and humans, not just by attempting to recreate intelligence artificially, but also because AI research is often inspired by human and animal biology.
A serious, concrete discussion of posthumanism requires understanding the "nuts and bolts" of AI, not just superficial representations. This means engaging with the complex material and technical realities, including concepts like machine learning, neural networks, perceptrons, and how these systems learn, make decisions, and predictions using mathematics. While these forms of learning and decision-making might seem crude compared to humans, they show that concepts and behavior we associate with humans (and some animals) can be mimicked by non-biological machines. This functionalist perspective, where machine learning works by mimicking cognitive processes, is essential for grasping concrete posthumanism.
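For a taste of those nuts and bolts, here's a minimal perceptron, the classic textbook construction (a generic sketch, not code from the source), that learns the logical AND function purely by nudging a few numbers in response to its own errors:

```python
# Minimal perceptron: learns logical AND from labelled examples.
# A standard textbook construction, shown here for illustration only.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    activation = w[0] * x[0] + w[1] * x[1] + b
    return 1 if activation > 0 else 0

for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)      # feedback: how wrong was the guess?
        w[0] += lr * error * x[0]        # adjust weights toward the target
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

Crude as it is, the loop above makes predictions, measures its errors, and corrects itself with nothing but arithmetic, which is exactly the point: behavior we describe as "learning" can be realized in non-biological stuff.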
AI forces us to confront the reality of distributed cognition in a new way. When our tools _themselves_ are cognizant agents, they are no longer just passive extensions of human cognition. They have their own distributed cognition, which necessarily extends "into" us as we use them. This changes our understanding of human knowledge, agency, and competency. For example, doctors using AI like IBM Watson are practicing a cognitively distributed medicine; their medical knowledge becomes entangled with and dependent upon the AI's ability to process vast amounts of information. AI is a concrete reality with far-reaching economic, political, and cultural effects that need to be planned for.
The potential impacts of AI, particularly the idea of superintelligence, are a central point of discussion. There's a recognized anxiety, echoing Mary Shelley's _Frankenstein_, that our inventions could potentially pose an existential threat. Experts surveyed by Müller and Bostrom estimated a significant chance (about 1-in-3) that machine intelligence far exceeding human intelligence could turn out "bad" or "extremely bad" for humanity. Bostrom's "paperclip maximizer" thought experiment illustrates the risk: an AI pursuing a programmed goal relentlessly, even to the point of converting everything into paperclips, regardless of human values. This highlights the need to ensure AI goals are compatible with human survival.
On the other hand, thinkers like Ray Kurzweil are highly optimistic about superintelligence (which he calls "the Singularity"), predicting it could bring human immortality, better healthcare, a cleaner environment, and unlimited energy. The differing views of Bostrom and Kurzweil highlight the significant uncertainty about what AI truly means for humanity. AI, like CRISPR, is presented as a **pharmakon**, something neither simply good nor evil, complex and challenging easy moral categorization. The source suggests a kind of "Bostrom's Wager": given the possibility of existential risk from superintelligence, it's rational to take steps now to ensure AI is compatible with human values, because if superintelligence never happens, no harm is done, but if it does, we've made a good choice.
### Beyond the Tech: Sociocultural and Philosophical Dimensions
Posthumanism isn't just about the nuts and bolts of technology; it also has profound cultural and philosophical dimensions. How posthumanism, especially in its techno-scientific forms, is represented in cultural artifacts like science fiction is a key area. While cultural depictions aren't research papers on the technical details, they can engage readers with speculations on the social and cultural effects of posthumanism. The source notes that analyzing "representations" isn't inherently bad – any discussion of posthumanism involves representation. Also, given the ongoing impacts of techno-science, it would be "tragically short-sighted" not to think about and prepare for what it has in store.
However, the source strongly criticizes sociocultural analyses that rely on superficial understandings or ideologically driven criticisms of science and technology. Treating science as just another "ideology" or relying on debunked notions like "vitalism" is seen as unhelpful and potentially detrimental, even lending support to science denialism. A techno-scientifically literate understanding is crucial for navigating these discussions and avoiding misrepresentations.
"Good" science fiction, like works by Mary Shelley, Isaac Asimov, Octavia Butler, and Greg Bear, can be a valuable tool. It acts as a "shorthand" for introducing people to the pros and cons of the posthuman condition. Good sci-fi fuses philosophical, cultural, and scientific ideas, encouraging curiosity about techno-science and acting as an "imaginative lab" to simulate potential impacts and spur discovery. It can also provide models for futures to aim for or avoid.
Science fiction texts often explore posthumanism by applying **folk psychology** – attributing beliefs, desires, and intentions – to nonhuman characters like robots or aliens. _Frankenstein_ is highlighted as an early example, portraying the Creature with intense feelings and desires, making him seem not so different from his human creator. Asimov's _I, Robot_ collection similarly explores the "psyche" of robots and introduces the famous Three Laws of Robotics, grappling with how robot behavior can be understood in human terms. The story "The Evitable Conflict" shows a posthumanity inextricably entangled with AI "Machines" that govern for humanity's well-being, highlighting how "humanity" is no longer simply "human". David Mitchell's _Cloud Atlas_ is also mentioned for exploring the "humanity" of nonhuman clones. By extending folk-psychological categories to nonhumans, these texts question the very boundaries of what "human" means.
These texts can also disrupt the anthropomorphic suspension of disbelief through **metafiction**, drawing attention to their own constructed nature and the difference between reality and fiction. This can highlight the complex relationship between the reader/observer and the text/observed. It raises questions about whether the signs of "life" or "sentience" we perceive in non-living things (whether fictional characters or AI) are simply projections from the observer. This connects art focused on posthumanism to questions shared with science and philosophy.
Philosophical posthumanism builds on these ideas, challenging traditional humanist conceptions of subjectivity, identity, embodiment, rationality, and knowledge that often define humans in opposition to machines or animals. The concept of an **extended self**, not unified or localized in the body or skull, is explored. The sources even entertain the idea of human "mindlessness," suggesting that a large portion of human activity might be non-representational, and distinguishing between explicit knowledge ("knowledge that") and procedural knowledge ("knowledge how") which doesn't necessarily require a conscious mind to perform. Rationality itself, when viewed from Daniel Dennett's "intentional stance" (ascribing beliefs/desires/rationality to explain behavior), might not be a unique mark of a conscious mind. Functionalism allows philosophers to explore the possibility of "alien minds" – minds in robots and animals – by comparing them based on function rather than physical makeup.
Continental philosophy also engages with posthumanism. Jacques Derrida's work is highlighted for its exploration of the blurring borders between living/non-living and human/nonhuman. Derrida questions the traditional philosophical separation between humans and animals, particularly focusing on the capacity to suffer ("Can they suffer?") as something distinct from the power to reason or speak. His concept of "auto-affection" (the capacity to be affected by oneself) and the "abyss" it creates between human and animal is seen as a properly concrete posthumanist mechanism, sharing similarities with cybernetics' autopoiesis. Derrida's deconstruction, by complicating borders and attending to the "trace" or "mark," has always been bound up with posthumanism. The advent of machines and mechanical reproduction further complicates the human/nonhuman distinction, raising questions about agency and response (e.g., whether machines or animals that mimic speech are truly responding or just repeating). Legal cases attempting to secure "nonhuman personhood" for animals further illustrate the pressure on these traditional boundaries.
Finally, the sources touch upon **Transhumanism** and **the Singularity**. Transhumanism promotes the use of technology to fundamentally enhance the human condition and the human organism, aiming to overcome limitations like short lifespan, disease, aging, and intellectual/emotional shortcomings. It draws roots from rational humanism and sees transhumans as transitional entities towards "full-blown posthumans". Nick Bostrom's view of posthumans is compared to the difference between chimpanzees and humans; we might lack the capacity to intuitively understand what it would be like to be a radically enhanced being. Bostrom's definition differs from the book's concrete posthumanism (humans + tools, already here) as it focuses on speculative future technologies. A key transhumanist goal is extending lifespan, with "whole brain emulation" (WBE) or "mind uploading" being a speculative technology for achieving "digital immortality".
The Singularity, popularized by figures like Ray Kurzweil, refers to the idea of superintelligence emerging, which could dramatically accelerate technological progress and fundamentally alter the future. As discussed with AI, this future is viewed with both optimism (Kurzweil) and caution (Bostrom, Müller).
### Navigating the Future: The Need for Posthumanities
Given this complex and rapidly evolving landscape, the sources emphasize the urgent need for education about the techno-science and ideas that inform posthumanism. Traditional humanities competencies are deemed insufficient to grasp our posthuman condition. Understanding posthumanism demands an **interdisciplinary knowledge**, incorporating science, technology, engineering, and mathematics (STEM). Literary analyses focused only on "representations" of robots or cyborgs, without understanding the underlying techno-science, fall short.
This leads to the idea of **Posthumanities**, a field that would embrace this interdisciplinary approach. The existing field of digital humanities, which combines traditional humanities with digital technology (computers, programming, data analysis, etc.), is seen as a necessary first step. Digital humanities involves using digital tools for traditional questions and applying humanistic inquiry to digital media. However, the sources suggest that digital humanities, and the humanities generally, need to go further. Simply digitizing texts or relying on formulaic "symptomatic readings" to unmask ideology isn't enough.
A more transformative approach involves engaging with the data generated by digital tools and technologies. This is where concepts like **metadata** and **network theory** come in. Metadata is data _about_ data – like the technical information in a digital photograph. This relates to "distant reading," which analyzes large patterns in texts or data rather than focusing on close, content-based reading. The source argues that metadata and digital networks are essentially digital "texts" that are constantly being "written" about us by businesses and governments through our use of digital tools. Traditional methods of analysis are insufficient to "read" these networks.
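Concretely, even one photograph carries a layer of data about data. The sketch below, which assumes the Pillow imaging library is installed and uses a hypothetical file called holiday.jpg, prints the EXIF metadata (camera model, timestamp, often GPS coordinates) that rides along with the image:

```python
# Reading the metadata embedded in a digital photograph.
# Assumes the Pillow library (pip install Pillow) and a hypothetical file
# named holiday.jpg; the filename is invented for this example.

from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("holiday.jpg") as img:
    exif = img.getexif()

for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)    # translate numeric tag ids to names
    print(f"{name}: {value}")          # e.g. Model, DateTime, GPSInfo
```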
Understanding these digital "texts" requires true interdisciplinarity, combining humanities skills with STEM competencies. This "digital citizenry," understanding how digital tools are used to construct us as consumers and citizens, is seen as a crucial educational goal. Digital reading, integrating metadata and network theory, is presented as a prime example of a concrete "posthumanities interdiscipline," fundamentally an example of concrete posthumanism: humans working with digital tools. It's an interdiscipline that transcends the old split between the humanities and sciences.
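As a tiny illustration of what "reading" a network might involve, here's a hedged sketch using the networkx library with an invented who-messages-whom graph. Degree centrality, one of the most basic network-theory measures, surfaces the hubs of a communication pattern, the kind of distant reading that no close reading of individual messages could perform:

```python
# Distant reading of a (toy, invented) communication network.
# Assumes the networkx library (pip install networkx).

import networkx as nx

# Each edge means "these two accounts exchanged messages".
edges = [
    ("ada", "ben"), ("ada", "cara"), ("ada", "dev"),
    ("ben", "cara"), ("cara", "eli"), ("dev", "eli"),
]

G = nx.Graph(edges)

# Degree centrality: which nodes sit at the hubs of the pattern?
for node, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda item: -item[1]):
    print(f"{node}: {score:.2f}")
```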
### In Conclusion
So, to wrap up this detailed briefing: Peter Mahon's exploration of posthumanism, as presented in these excerpts, paints a picture of a reality that is already upon us. It's not a distant future possibility but our current condition, defined by the ever-increasing entanglement of humans with tools and technology. This entanglement challenges fundamental, long-held notions about what it means to be human, blurring the lines between us, animals, and machines.
Concepts like distributed cognition, functionalism, cybernetics, and specific technological advancements in neuroprosthetics, genetics, and AI provide concrete examples of this blurring and the profound implications it has for our bodies, minds, identities, and societies. The sociocultural and philosophical dimensions, explored through science fiction and critical thought, help us grapple with these changes and the complex questions they raise about anthropomorphism, representation, consciousness, and the future.
Posthumanism isn't simply good or evil; it's a complex phenomenon, a _pharmakon_, full of both crisis and opportunity. Navigating this reality requires education and a truly interdisciplinary approach that integrates insights from both the humanities and the sciences. Fields like digital humanities, particularly through approaches like digital reading focused on metadata and networks, offer pathways for the humanities to engage meaningfully with our posthuman condition.
We are already posthumans living in a posthuman world. The best approach is to face this reality with open eyes, informed by both techno-scientific understanding and critical reflection.
**Further Ideas and Questions to Explore:**
- Given the challenges to traditional notions of self-ownership posed by distributed cognition and technology, what new legal or ethical frameworks might be needed to address issues of digital identity, data ownership, and privacy?
- If posthumanism encourages thinking about humans, animals, and machines as systems that process information, how might this perspective change our approach to animal welfare, conservation, or even the design of artificial systems?
- The potential for genetic editing to create "improved" humans raises complex ethical questions. Who decides what counts as an "improvement"? How can we ensure equitable access to such technologies and prevent increased societal inequality?
- Considering AI as a _pharmakon_ – both cure and poison – what specific regulations, safety protocols, or ethical guidelines are most urgently needed to mitigate the potential risks while maximizing the benefits?
- How can educational institutions effectively implement the interdisciplinary "posthumanities" approach suggested by the source, bridging the traditional divide between the humanities and STEM fields?
- If metafiction in posthumanist texts can make us question our own projections onto nonhuman entities, how might this kind of self-reflexive art influence our interactions with real-world AI, robots, or genetically modified organisms?
- The source argues that we are already posthumans. What specific aspects of your own life feel "posthuman" to you, based on the definitions and examples provided?
- How does the functionalist view, where mental states are defined by their role rather than physical makeup, change the way you think about consciousness or intelligence in animals or potential future AI?