Transhumanism - the practice of enhancing people through technology - sounds like science fiction. But when it arrives (and it will), it will create unique problems for CIOs.
Thinking ahead | Late last year, the editors of a leading public policy magazine, Foreign Policy, asked eight prominent intellectuals to identify the single idea they felt was currently posing the greatest threat to humanity. Most of the suggestions were merely old demons: various economic myths, the idea that you can fight "a war on evil", Americaphobia and so on. Only Francis Fukuyama, a member of the President's Council on Bioethics, came up with a new candidate: transhumanism.
Transhumanism might be described as the technology of advanced individual enhancement. While it includes physical modifications (diamondoid teeth, self-styling hair, autocleaning ears, nanotube bones, lipid metabolizers, polymer muscles), most of the interest in the technology focuses on the integration of brains and computers - especially brains and networks. Sample transhumanist apps could include mobile phone implants (which would allow virtual telepathy), memory backups and augmenters, thought recorders, reflex accelerators, collaborative consciousness (whiteboarding in the brain), and a very long list of thought-controlled actuators. Ultimately, the technology could extend to the uploading and downloading of entire minds in and out of host bodies, providing a self-consciousness with, theoretically, neither a definite nor a necessary end. That is, immortality, of a sort.
While some of these abilities are clearly quite far off, others are already attracting researchers (see "Brain Gain", page 94), and none are known (at the moment at least) to be impossible to achieve. Fukuyama obviously felt the technology was close enough at hand to write a book about it, Our Posthuman Future: Consequences of the Biotechnology Revolution, the thrust of which is that society should give the whole idea a miss. His main concern was that transhumanism would place an impossible burden on the idea of equal rights, since it would multiply the number of ways of being human well past our powers of tolerance. (If we have all this trouble with something simple like skin colour, just wait until some people have wings, augmented memory and reflex accelerators.)
Ignorance Is No Option
Still, it's not clear that boycotting neurotech will be a realistic option. When the people around you - competitors, colleagues, partners - can run Google searches in their brains during conversations; or read documents upside down on a desk 10 metres away; or remember exactly who said what, when and where; or coordinate meeting tactics telepathically; or work forever without sleep; or control every device on a production line with thought alone, your only probable alternative is to join them or retire. No corporation could ignore the competitive potential of a neurotech-enhanced workforce for long.
Right now, the only people thinking about transhumanism are futurists, ethicists (such as Fukuyama) and researchers. However, if and when we do advance into this technology, several management issues will also need attention.
Consider, for instance, the case of upgrade management.
From a purely capitalist point of view, one virtue of transhumanism is that it incorporates both body and mind into the continuous upgrade cycle that characterizes contemporary consumption patterns. Once a given modification - such as a cortical display - is successfully invented, newer and better ones will crop up on the market every year, boasting lower power requirements, higher resolution, hyperspectral sensitivity, longer mean time between failures, richer recording, sharing and backup features, and so on. Multiply by all the devices embraced by the transhumanist agenda, and it's clear that every year even the most financially secure users will be forced to winnow a small number of choices from an enormous range of possibilities.
Another concern could be digital rights management.
When brains can interact with hard disks, remembering will become the equivalent of copying. Presumably, intellectual property producers will react with the usual mix of policies, some generous, some not. Some producers will want you to pay every time you remember something; others will allow you to keep content in consciousness for as long as you like but levy an extra charge for moving it into long-term memory; still others will want to erase their content entirely as rights expire, essentially inducing a contractually limited form of amnesia. While any one of these illustrations might be wrong in detail, there will almost certainly be a whole range of intellectual property issues and complications that will need to be managed.
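The licensing schemes imagined above can be made concrete with a small sketch. This is purely illustrative - every name here (`MemoryPolicy`, `LicensedMemory`, the policy terms themselves) is hypothetical, standing in for whatever terms real intellectual property producers might one day attach to remembered content:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional
import time

class MemoryPolicy(Enum):
    """Hypothetical licensing terms a producer might attach to a memory."""
    PAY_PER_RECALL = auto()       # charge every time the memory is accessed
    SURCHARGE_LONG_TERM = auto()  # free in consciousness, a fee to consolidate
    EXPIRING = auto()             # erased when rights lapse: contractual amnesia

@dataclass
class LicensedMemory:
    content: str
    policy: MemoryPolicy
    expires_at: Optional[float] = None  # epoch seconds, for EXPIRING content

    def recall(self, now: float) -> Optional[str]:
        """Return the content only if the licence still permits recall."""
        if self.policy is MemoryPolicy.EXPIRING and self.expires_at is not None:
            if now >= self.expires_at:
                return None  # rights expired: induced amnesia
        return self.content

# A memory whose rights lapsed a day ago can no longer be recalled:
song = LicensedMemory("that catchy chorus", MemoryPolicy.EXPIRING,
                      expires_at=time.time() - 86400)
assert song.recall(time.time()) is None
```

The point of the sketch is only that "remembering becomes copying" turns recall into a licence check - the policy enum is where the generous-to-ungenerous spectrum would live.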
In other words, it looks as though the transhumanist era is going to be a Golden Age for CIOs and their skill sets. Even in the case of problems for which CIOs do not have immediate solutions, they will probably be the right people to think about the answers. Take, for example, the extremely vexing problem of neurosecurity.
A brain running on a network will obviously be an extremely attractive target for everyone from outright criminals to bored hackers to spammers. Why worry about actually earning a promotion when you can just write a worm that will configure your superior's brain so that the very thought of you triggers his or her pleasure centres? Why bother with phishing when you can direct your victims to transfer their assets straight to your bank account? Why tolerate the presence of infidels when they can be converted to the one true faith with the push of a button?
Who Do You Trust? Not You
Peter Cassidy, secretary-general of The Anti-Phishing Working Group, is one of the few analysts thinking about neurosecurity. He says that a key problem is that the brain appears to consider itself a trusted environment. When brain region A gets a file request from region B, it typically hands over the data automatically, without asking for ID or imposing more than the most minimal plausibility check. It is true that with age and experience our brains do gradually build up a short blacklist of forbidden instructions, often involving particular commands originating from the hypothalamus or adrenal glands (for example, "bet the house on red", or "pick a fight with that bunch of sailors"), but in general, learning is slow and the results patchy. Such laxity will be inadequate in an age when brainjacking has become a perfectly plausible form of sabotage.
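Cassidy's trusted-environment problem can be caricatured in a few lines. All names here are invented for illustration - the model is simply that inter-region requests succeed by default, and only a small, slowly learned blacklist is ever consulted:

```python
# A toy model of the brain as a "trusted environment": requests between
# regions are granted automatically, with no identity check -- only a
# minimal, patchily learned blacklist of forbidden instructions.

FORBIDDEN = {
    "bet the house on red",
    "pick a fight with that bunch of sailors",
}

def handle_request(origin: str, instruction: str) -> bool:
    """Region-to-region handler: no ID, no authentication, just a
    minimal plausibility check against the learned blacklist."""
    if instruction in FORBIDDEN:
        return False            # blacklisted command refused
    return True                 # everything else is trusted by default

# The learned defences work, narrowly:
assert handle_request("hypothalamus", "bet the house on red") is False
# But an unknown origin issuing a novel instruction sails straight through:
assert handle_request("unknown-worm", "transfer assets") is True
```

A default-allow policy with a hand-maintained blacklist is exactly the posture security teams spent the last two decades abandoning - which is why brainjacking looks so plausible.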
Cassidy points out that one of the core problems in neurosecurity is defining trusted agents. All security depends on the interplay of two trusted parties (a trusted identity and a trusted computer) and a trust applicant. The neurosecurity conundrum is that it mixes all these identities in the same brain. It forces you to face the questions of when, whether and how to trust yourself. Still, CIOs (and CSOs) are familiar with the essence of even this issue, which is much like analyzing the problem of defending an enterprise against an employee who has gone bad.
One possible approach to neurosecurity might be to implant a public-key infrastructure in our brains so that every neural region can sign and authenticate requests and replies from any other region. A second might be maintaining a master list of approved mental activities and blocking any mental operations not on that list. (Concerns about whether the list itself was corrupted might be addressed by refreshing the list constantly from implanted and presumably unhackable ROM chips.) It might also be necessary to outsource significant portions of our neural processing to highly secure computing sites. In theory, such measures might improve on the neurosecurity system imposed on us by evolution, making us less vulnerable to catchy tunes and empty political slogans.
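The first approach - every neural region signing and authenticating requests from every other region - can be sketched with standard-library primitives. This is a minimal sketch, not a proposal: an HMAC stands in for the public-key signatures a real PKI would use, and the region names and keys are hypothetical:

```python
# Sketch of signed inter-region requests. Stdlib HMAC is used as a
# stand-in for public-key signatures; region names and keys are invented.
import hmac
import hashlib

REGION_KEYS = {                  # per-region keys, provisioned at "implant time"
    "hippocampus": b"secret-key-a",
    "cortex": b"secret-key-b",
}

def sign(region: str, message: bytes) -> bytes:
    """Produce a tag proving `message` originated from `region`."""
    return hmac.new(REGION_KEYS[region], message, hashlib.sha256).digest()

def verify(region: str, message: bytes, tag: bytes) -> bool:
    """Check a request's tag before handing over any data."""
    return hmac.compare_digest(sign(region, message), tag)

# The cortex authenticates a file request claiming to come from the hippocampus:
msg = b"recall: first day of school"
assert verify("hippocampus", msg, sign("hippocampus", msg))
# A forged request signed with the wrong key is rejected:
assert not verify("hippocampus", msg, sign("cortex", msg))
```

Swapping the brain's default-allow posture for default-verify is the whole of the idea; the whitelist-from-ROM approach in the same paragraph is the complementary move of constraining what verified regions may request at all.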