Image © Mark Tomlinson
Concerned about viruses, porn,
and spam choking up the internet? Just wait until the AI rats get to work.
As bioethicist James Hughes has
pointed out, we should not assume that malign forms of artificial intelligence
that may emerge would be of the UFAI (unfriendly artificial intelligence)
variety. ‘Unfriendliness’ implies ill will, which in turn implies human-level
or greater intelligence. In contrast, AI rats would be neither friendly nor
unfriendly, but they could do a great deal of damage with their relentless
digital scavenging.
Don’t such rats already plague
us? Aren’t viruses gnawing at and infecting our IT systems now? Yes, but
computer viruses are not intelligent. Rat-level AI is still some way off. The
problem-solving capabilities of even the tiniest of rodents are a source of
both inspiration and frustration to neuroscientists and AI researchers; rats
have drives, and like us, they employ sophisticated behaviours to satisfy those
drives. In contrast, computer viruses are mere algorithms lacking any form of
desire or intent.
How would AI rats emerge? ‘Weak AI’
systems already abound: air-traffic control systems, vehicle engine-management
systems, big-data language translation systems, ‘expert systems’, and so on. It
is possible that weak AIs, augmented to perform ever more specialised roles,
could be elevated, accidentally (or maliciously), to a semi-sentient or
sentient (but not sapient) level. Without sapient cognition, these entities would
have neither the reason nor the ability to communicate with us. Initially, they
would perform the roles for which they were originally designed; multiplying
geometrically, however, they might soon run out of target tasks and become
‘hungry’ for more. An engine-management system in frenzied competition with itself and
with other such systems may not make for a pleasant driving experience. Rat
infestation in an air-traffic control system could spell disaster.
Fortunately, others have
different ideas about AI rodent scenarios. The roboticist and AI developer
Steve Grand opts for the rather cuter analogy of squirrels. In his book Creation:
Life and How to Make It, he suggests that even squirrel-level intelligence
could be extremely useful to us:
But imagine putting squirrel brains into, let us say, a set of traffic lights. … If the mind of a rodent was placed into each signal, and the signals were rewarded for how well they managed to smooth the flow of traffic in their local area, then it seems plausible that it could work.
Note that reward is vital
to this kind of scenario. These ‘squirrels’ would have to be intelligent enough
to seek reward, and able to learn that the reward lies in the harmonious and
efficient performance of their appointed tasks. It may be hard for us to imagine
smooth traffic flow as the ‘nuts’ of such a setup, but consider how a sapient
or transapient strong AI might view our motivations.
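Grand’s scenario maps quite neatly onto present-day reinforcement learning. Below is a minimal sketch of one such reward-seeking ‘squirrel’: a traffic signal that learns, via tabular Q-learning, to keep its local queues short. Everything in it (the state encoding, the arrival rates, the reward function, and names such as SquirrelSignal) is a hypothetical illustration, not anything Grand specifies:

```python
import random
from collections import defaultdict

# A toy 'squirrel' signal: tabular Q-learning over discretised queue lengths.
# All names, parameters, and traffic dynamics here are hypothetical.

ACTIONS = ("NS_GREEN", "EW_GREEN")  # which approach currently has the green

class SquirrelSignal:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _state(self, ns_queue, ew_queue):
        # Bucket the queue lengths so the lookup table stays small.
        return (min(ns_queue // 3, 4), min(ew_queue // 3, 4))

    def choose(self, ns_queue, ew_queue):
        state = self._state(ns_queue, ew_queue)
        if random.random() < self.epsilon:      # occasional exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, prev_queues, action, reward, new_queues):
        # Standard Q-learning update: the 'nut' is the reward for short queues.
        s, s2 = self._state(*prev_queues), self._state(*new_queues)
        best_next = max(self.q[(s2, a)] for a in ACTIONS)
        self.q[(s, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, action)]
        )

# Crude simulation: cars arrive at random; a green light clears up to two.
signal, ns, ew = SquirrelSignal(), 0, 0
for step in range(10_000):
    action = signal.choose(ns, ew)
    prev = (ns, ew)
    ns += 1 if random.random() < 0.4 else 0  # random arrivals
    ew += 1 if random.random() < 0.3 else 0
    if action == "NS_GREEN":
        ns = max(0, ns - 2)
    else:
        ew = max(0, ew - 2)
    reward = -(ns + ew)                      # smoother traffic -> bigger 'nut'
    signal.learn(prev, action, reward, (ns, ew))
```

The point of the sketch is the reward line: the agent never ‘understands’ traffic, it simply discovers that shorter queues yield bigger nuts.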
I find implausible the idea of a
Singularity in which we leap suddenly from where we are now directly to strong AI.
More likely, and here I agree with Hughes, is a scenario in which intermediate-level
AIs spring up and multiply as scientists press on towards
ever-stronger artificial intelligence. Refining our AI ethics to cover the
kinds of capabilities with which we might accidentally or purposely endow weak
AIs may allow us to benefit from harmonious (and quite cuddly) traffic systems
while avoiding the need for drastic pest control.