Underpinning the enhancement conversation is, naturally enough, that thing which makes us desire enhancement in the first place: the enhancement impulse. That is, our most ancient and ingrained desire to improve what we are. Among the defining characteristics of the human psyche is its yearning for more, for better, for different. From the earliest of ages, we crave better looks, smarter brains, and, should we happen to be so wholesome, more open hearts. To the extent possible — to the extent the various facts of our physical organism and psychological will allow — we shape our lives in these directions. That is, we shape them towards enhancement. Similarly, when it comes to our children, we do all that our circumstances afford to ensure the best possible lives for them, to prepare them as fully as possible for the world that awaits. For instance, we seek the best education for them, attempt to indoctrinate them in the most powerful and positive ideas, connect them to the most valuable social networks, train them in the physical and intellectual arts, feed them good foods, and so on. In other words, we enhance them to the fullest extent we're capable of doing so — just as we do ourselves. When we trace the arc of human history, the trajectory of the human story, what we witness is the power of the enhancement impulse at work. And as microcosms of the broader picture, when we view the arc of our own lives, what we witness is some version of the same, whether more or less potent. As the ultimate source of the enhancement conversation, any discussion of the ethics of enhancement would be negligently incomplete without at least examining the impulse that gives rise to it all — even if such an examination fails to suggest how we might better manage it.

The enhancement impulse, the desire to improve ourselves and those we care for, is among the most rationally defensible desires we could possibly embody. Through the evolutionary prism, this impulse is nothing but the logic of the “selfish gene” at play, our thirst for continued existence made manifest. But while it’s an entirely rational desire on the surface, a motive grounded in evolutionary good sense, there’s also something about it that, on reflection, strikes one as rather pathological. Although our desire to enhance ourselves and our kin might be partly motivated by a perfectly healthy concern for our/their best interests — love, if you will — a little introspection reveals that this impulse is at least equally driven by the underside of our minds — that place where fear, insecurity, and dissatisfaction live. In other words, we enhance ourselves and seek to do the same for others, not purely because we desire — as a matter of logical deduction — the things that improvement would entail, but because we are fundamentally dissatisfied with what we are in the present, and invariably project that inferiority complex onto those lives with which we share intimate boundaries.

Freud, who is today greatly misunderstood, would’ve chalked this phenomenon up to the tension that characterises the human mind generally: the balance of power between Eros, on the one hand, and Thanatos, on the other. Both forces, Freud believed, govern human behaviour, and though they’re very different in flavour/texture, they can be wielded towards the same ends — in this case, enhancement. A slightly different psychological cut across the question would have us conclude that our enhancement impulse represents our underlying sense of incompleteness, which results from our being separated from the womb at birth, and thus our yearning to realise, once again, that same state of wholeness we experienced during gestation. A less poetic/more naturalistic view of the situation might suggest that our enhancement impulse is, just like every other human impulse, simply a reflection of how our minds happen to be constituted. Our desire to improve ourselves is imbued with all the myriad qualities of our minds, the wholesome and the not-so. A more wholesome mind will, naturally and invariably, harbour more wholesome desires, and by extension, crave more wholesome enhancements. Similarly, a mind riddled with greed and lust and anger will inevitably crave things — and indeed enhancements — of a very different, less wholesome sort. As a very rough rule, desires towards enhancement that spring from the positive/wholesome/good states of mind result in the kinds of enhancements we would generally consider positive/wholesome/good. Conversely, enhancements made on the basis of negative/unwholesome/bad states of mind will most often (if not always) be, similarly, negative/unwholesome/bad. Think Karma without the metaphysical baggage — like begets like.

The context in which we’re having this discussion is that of a hyper-competitive society that incessantly promotes our striving, our craving, our desire. Though desire and craving and the enhancement impulse run to the very core of our Being, we’re also culturally conditioned — to an astonishing degree — to believe that our happiness is contingent upon our achieving or acquiring certain things in the world. Striving is so central to our current cultural complex, so deeply embedded in the values of the collective mind, that we rarely stop to consider whether it’s actually serving us, or what exactly is motivating it. We’re all moving towards some vague notion of better, with very little sense of why “better” is in fact better. Too many of us, it’s apparent, are simply on autopilot — running the default operating system of society on our own biological wetware. A dangerous game, to be sure.

The crux of all this is simple: we build the world — and our lives — with our minds. They’re the proximate source of this whole thing we call civilisation, and the only means by which we experience it. Trite, perhaps, but true all the same. So if we’re concerned with the future of civilisation, with the future of human enhancements, on some level — setting aside certain existential risks beyond our control — it’s only because we are concerned with the quality of our minds. In other words, we don’t trust ourselves. And, given the history, rightly so. We’re not, as a species, exactly exemplars of some ultimate end-state of moral wisdom. Not yet, at least. And so we should be nervous. However, we should at the same time also be clear as to what exactly we’re nervous about. One should not be concerned with enhancements in principle. Again, remember, an enhancement is by definition an enhancement — we should be all about them in principle! Instead, we ought to be nervous about bringing our fallible minds — their biases, prejudices, and general ignorance — onto the subject. In other words, we ought to be worried we’ll fuck things up because we’re a little — and perhaps more than a little — fucked up ourselves.

Make no mistake, the central project of our peoples is getting our minds right. As the designers/architects/builders of the future, we must do all that we can to ensure that the source of our visions and the means by which we seek to materialise them are grounded in all the Good states of mind. That is, we want our future to embody, at some level of abstraction, as much hope/love/wisdom/intelligence/compassion/creativity and as little of their antitheses as our nature permits. This is the Work.

It’s interesting to consider the parallels between the situation here, as in the one with our minds, and the current conversation around AI. Since we find it plausible that we might eventually build something amounting to artificial “general intelligence/superintelligence”, there is a concern that, unless we were to fully bake our best interests into such intelligence, our paths might eventually diverge. Even if the divergence were small to begin with, it seems likely that, given the disparity in intelligence, it would very quickly grow to a point where our interests are fundamentally and irreconcilably misaligned — to the point, in the worst case, of our extinction.

Since we obviously don’t want this to happen, people are trying to figure out how to make sure we build AI that serves, rather than destroys, us. We want, that is, to build wise AI — not stupid-smart AI (AI that turns the world to paper-clips, for instance). We call this problem “AI safety”, “AI alignment”, or, most prosaically, “the AI control problem”. And while it’s a genuinely important problem to be working on, one that, even if we don’t eventually build such intelligent things, will bear interesting philosophical fruit, it’s worth noting just how similar this predicament is to the relation in which we stand to our own minds. That is, there is an intelligence on this planet — specifically, human intelligence — and yet it’s far from entirely benign. In fact, it’s nothing short of the greatest threat to our continued existence. Our minds, above all, represent the single greatest danger to ourselves. Thus before we consider ourselves prepared to align a foreign intelligence with our best interests, it would seem sensible that we should first — or at least simultaneously — work to align our own minds with our own goals and objectives, whatever they happen to be.

All of this is to say that the central target of our engineering efforts should be — and really must be — the quality of our minds — that is, the quality of our humanity. Although it’s an extraordinarily utopian-sounding concept, it’s worth bearing in mind that we are forever and always engineering our minds — whether intentionally or otherwise, effectively or not. With every change we make to our external environment, every moment we make contact with experience, every act in the world, we are quite literally shaping our minds. It’s clear, by this point, just how incredibly malleable the human mind is, to what extent it can be improved or degraded by experience. Although it mightn’t be the blank slate the empiricists thought it to be, the fact and scope of our malleability remains astounding. Where the East has understood this point for millennia, having developed a rich and sophisticated discipline of inner-engineering, the West is yet to fully appreciate it.

Indeed, one of the central premises of Western civilisation is that human nature is, by and large, fixed — and not only fixed but fixed in a less than entirely pure direction. Whether or not we’re entirely shit creatures, it’s inarguable that we have such tendencies — for we are animals, after all. Left to our own devices, and provided enough time, we’ll inevitably begin to rip each other’s throats out, the founding thinkers of our political system surmised. That is, unless we impose some kind of framework onto our nature — subvert it, control it. That’s what the “State” is for — Hobbes’ Leviathan. While we mightn’t be capable of changing our nature per se, by implementing some set of controls, we could manage it. Through laws and customs and institutions mediated by a central power, a government, we could control/sublimate our more base instincts and realise, as Lincoln put it, the “better angels of our nature”.

To be sure, this project of constructing a socio-political-legal framework for directing human behaviour has been a tremendous success, and it’s an important piece of the civilisational puzzle. For it’s a fact, albeit somewhat inconvenient, that alongside the better angels of our nature, there exist worse demons, too. To be naive to this reality, à la Rousseau, and let our nature entirely out of the bag, without any constraints/controls, would be grossly negligent. And yet, it can’t be the whole project. At least equally important, and in the long run far more so, is the project of positively modulating our nature directly — that is, engineering the very source code of our minds. Of course, the distinction between improving our nature directly and simply subverting it is neither clear nor all that necessary. Changes we make to our environment, including the Leviathan and its corollaries, do produce in us material changes to our humanity, as do more direct and targeted interventions — e.g. meditation or psychedelics. For practical purposes, however, there is a difference between the two. And while we have paid much attention to the external project, to the internal project we are yet to pay much mind (irony intended). What we require here in the West, in order to best shore up our collective future, is the establishment of a contemplative tradition, a bona fide science and engineering of mind, akin to what the East has going — with a bit of our own flavouring, of course. The limits to what we can do to our minds are as yet almost entirely unexplored. They are the final frontier indeed. And by combining ancient wisdom with modern techniques and technologies, we might at last colonise them.
