The role of & threats to humans in the race for super-human AI
We all know AI is coming. What we’re not yet sure of is how much and how smart. Sooner or later it will approach human intelligence, the most complex and remarkable instrument in our known universe. AI is a human invention. So what are its limits? Can we really reach parity with the human mind? Can we reach even higher? Should we?
Is it better to stop? Some propose setting boundaries; man-made limits & conventions. Will they be respected? Is it really feasible to set an artificial limit to “progress”? Unfortunately, history has so far shown us that when humanity is close to a scientific or technological leap, there is little we can do to keep everybody from diving in. So the more interesting question for me is how much higher we can go, and what the real limit is in this “race”.
Is it really possible to build something much better than a human brain? I would argue yes. First we need to decompose the workings of the existing human brain, rebuild it on silicon, fine-tune it, and reach parity at some point. This is the inevitable next step of our current technological progress. We can’t really avoid it, because we have this locked, opaque box in front of us called the brain. Sooner or later we’ll figure it out and unlock its secrets. And it will be sooner than you think.
Then, to climb at least one level above in terms of complexity, we can take inspiration from methods already in use in the existing technological brains of our time (we call them computers).
- We can improve the performance of the existing design
  - by shrinking the modules to pack more into the same space
  - by overclocking them
  - by connecting the modules with faster interfaces
- We can try to connect the individual brains into a kind of interconnected sentience, hoping that the sum of the parts will somehow produce a level of intelligence higher and better than its individual constituents
But let’s not fool ourselves. All these are improvements on the existing architecture, which seems to be nature’s best yet. The main problem is that we did not invent our current architecture. Nature spent at least a few million years producing it, and only now is the human brain beginning to reverse-engineer itself so that we can understand and copy it.
So we’ll take it from there, reuse it and improve it, in the many incremental ways we mentioned above. If we’re lucky we might even be able to take it one level above that. But can we go much further? Can we really go 2 levels above? Is it even possible, I wonder? To reuse an often cited question, can an ant’s brain ever invent a human’s brain?
I know what some might say. It is not us who need to invent it. The new artificial sentience that is one level above the human brain, the N+1s, the true AI brains that will have received all those performance upgrades & interconnectedness… these are the beings that will be tasked with inventing the N+2. As humans, we will be too far away to even comprehend the leap in complexity that needs to occur, but they will be much closer, and thus this evolutionary step will be within their reach.
Leaving aside that the paragraph above has just demoted humanity to the role of a bystander at best, or an obsolete ancestor ready to be phased out at worst… it sounds like a reasonable argument, right? In the same way that the ape-ish ancestor of Homo sapiens (N-1) was just an evolutionary step for us to come into existence (N), humans were a necessary step to give life to an N+1 sentience (one that happens to be artificial), and these in turn will produce in due time the N+2s, and so on and so forth, with turtles all the way up.
To some this argument might seem rational, to others too idealistic, too distant, or even a thought exercise in futility. For me, I believe that the increase of organizational complexity is a trend of paramount importance for this little Earth, which seems alone amid the threatening & never-ending entropy increase of our universe, and therefore the legacy & future of human intelligence is fundamental. And I don’t buy that argument. Instead, I tend to favor an answer that brings humanity back to center stage; and I hope I’m maintaining my objectivity in this assessment despite my human nature.
I don’t believe that superintelligent robots (N+1s) have a good chance of continuing this trend of increasing organizational complexity and at some point inventing a “race” superior even to their own (N+2). My core argument for this is the paradigm shift that will have occurred if we ever move from humans to AIs as the dominant life form on Earth. The tools that brought us to N will cease to function. Random mutation and competition (a.k.a. death), the main engines of Evolution, will no longer be applicable. By moving from carbon lifeforms to silicon-based ones, from natural death to optional death, from inaccuracy of replication to precision-perfect repeatability, from separate Wants per Ego to a ubiquitous and rational altruism, something will have been lost in this transition.
The mechanism of a trillion mistakes that gives birth to the one great thing every now and then will have ceased. What brought us from N-100 to N and N+1 will not be there any more, because we changed the substrate of this process, or the platform if you wish. The rules of the game are different, and they’ll have to invent a new way of making groundbreaking progress, parting with the billion-year tried-and-true recipe. Is it really feasible to find a new recipe? To my lesser human mind, this sounds arrogant. To paraphrase an earlier statement, could a super-human’s/AI’s brain ever invent a deity?
To conclude my argument: if someday our robotic overlords reach a decision point about the role of humans in this meta-human era, here is my plea. Before deciding to destroy the human race because you believe it has served its purpose and is now obsolete/irrelevant/not useful, give it a second thought. After all, it is the last & best child of evolution, and look where it got us. The randomness, the novelty, and the occasional spark of genius are features that should go hand in hand with progress. We need to remind ourselves that the goal is common, pre-ordained, and above any sentient existence: the progression of organizational complexity. And if the universe has anything to teach us, it is that diversity is natural, ubiquitous, and desirable. So live and let live!
Posted on January 21, 2017, in Uncategorized.