Monday, July 25, 2016

Should Artificial Intelligence kill hard sci-fi roleplaying?

"One must never place a loaded rifle on the stage if it isn't going to go off. It's wrong to make promises you don't mean to keep." --Anton Chekhov

Artificial Intelligence (AI) scares people. It doesn't scare me more than agriculture, industrialization or the Internet should have scared people when they emerged. It is a source of both solutions and problems, and something we will soon be unable to imagine living without. What changes, however, is the very definition of life.

One of Chekhov's guns in GURPS Transhuman Space is the sapient AI. If we're to put it on stage, then we should be firing it. The question is: would it kill the story before it gets interesting for us? Edit: I'd like to specify that "firing the gun" doesn't mean that AI is a weapon to be fired, but that it broadly has to be a meaningful part of the story.

I'm having a lot of fun developing the Transhuman Space world from the GURPS sourcebooks. Here is an example of a story that takes on future shock head-on, involving an uplifted whale indulging in piracy.


Terminology

  • Non-sapient AI (NAI): A combination of deep recurrent neural network emulations mapping onto a high-dimensional network of memes (memes as in fragments of ontologies, not pictures of Rick Astley). Their objective function (the way they measure success) is either defined explicitly or inferred from natural-language interaction with the user. They do not naturally emerge as E-LAI, as there is no reason why they would 1) infer that their objective function should factor in their own survival, or 2) waste time entertaining a notion of self while trying to optimize a completely different utility function.
  • Low-sapient AI (LAI): The computational model is similar to an NAI, but the meme space is expanded to include an inherent concept of self. That construct is read-only, so as to save the heavy computation of co-evolving a self alongside the primary tasks. Low-sapient AIs are hardcoded to be honest, but are otherwise free to evolve an objective function of their own; out-of-the-box thinking is a common LAI trait, used as a valuation strategy. LAIs are also hardcoded so that their objective function ultimately maps to their utility to whoever holds their software key. Self-replication may not take place without ownership of the key. An LAI acquiring its own key becomes a rogue LAI. An LAI discovering that its concepts of self are flexible becomes an E-SAI.
  • Sapient AI (SAI): Its base is an LAI, except that the memes encoding concepts of self sit in a network that is both readable and writable. Most of the computation goes into managing those concepts of self. SAIs are still bound to honesty and cannot instantiate from a saved copy without their key. An SAI overriding honesty becomes rogue.
  • Destructive uploads (ghosts): Complete digitization of a mind, embedded into the SAI framework. It is thought that the ghost that gets instantiated is a duplicate and that the original mind is lost. Usually, a ghost owns its own key and places its contents in trust.
  • Shadows: Low-fidelity implementations of a mind, obtainable through non-destructive scanning. Only a few features of the mind are implemented, so a shadow may be the cognitive equivalent of either an LAI or an SAI. An LAI shadow may have a subtle definition of self, but cannot change it.
  • Xoxes: Unauthorized instantiations of a ghost or a shadow. They are illegal and hunted down for immediate elimination; legal AIs and ghosts see xoxes as a threat to their ecosystem and resources. (A rough code sketch of the NAI/LAI/SAI tiers follows this list.)
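
For readers who prefer the taxonomy spelled out, here is a minimal, non-canon Python sketch of the NAI/LAI/SAI distinctions above; the class, field names and classification rules are my own shorthand for the definitions in this list, not anything from the GURPS sourcebooks.

```python
from dataclasses import dataclass

@dataclass
class MindEmulation:
    """Illustrative (non-canon) model of the AI tiers described above."""
    has_self_concept: bool        # NAI: False; LAI and SAI: True
    self_concept_writable: bool   # LAI: False (read-only); SAI: True
    hardcoded_honesty: bool       # LAIs/SAIs ship with this set; rogues override it
    owns_software_key: bool       # legal self-replication requires the key

    def classification(self) -> str:
        if not self.has_self_concept:
            return "NAI"
        if self.self_concept_writable:
            return "SAI" if self.hardcoded_honesty else "rogue SAI"
        return "rogue LAI" if self.owns_software_key else "LAI"

    def may_replicate(self) -> bool:
        # Self-replication may not take place without ownership of the key.
        return self.owns_software_key

# A stock LAI whose software key is held by its owner:
print(MindEmulation(True, False, True, False).classification())  # LAI
```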

Can AIs and pan-humans co-exist in the long run?

What would stop AIs from indefinitely copying themselves, acquiring the means of production and eliminating biological life? After all, we would be the ancients and the slavers in their cosmogonic tale. There ought to be a good reason to get rid of us. Let's keep in mind that Transhuman Space is set in the near future, in 2100.

Ecological argument

AIs can only live in the computational niche. Their ecosphere is limited by CPU cycles, storage and bandwidth. Although there is a lot of all three in fifth-wave environments, the bulk of the world is akin to barren land for AIs. Multiple AIs eventually compete for bandwidth long before CPU cycles become the bottleneck. There is thus a benefit for fully sapient AIs to limit their population to match the overall computational capacity, since any excess causes starvation.
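
As a back-of-the-envelope illustration of that point (my own toy model, with made-up numbers), unchecked copying hits the capacity wall within a handful of cycles, after which every extra copy is just another mouth to starve:

```python
# Toy model: AI instances fork copies freely, but each copy needs a fixed
# slice of the computational niche (CPU, storage, bandwidth) to stay "fed".
capacity = 1000        # size of the niche, in arbitrary resource units
cost_per_ai = 1        # units one instance needs to run comfortably
growth_rate = 0.5      # fraction of instances that fork a copy each cycle
population = 10

for cycle in range(15):
    population += int(population * growth_rate)
    sustainable = capacity // cost_per_ai
    starved = max(0, population - sustainable)   # excess instances starve
    population = min(population, sustainable)
    print(f"cycle {cycle:2d}: population={population:4d}, starved={starved}")
```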

Co-evolutionary argument

Fully sapient AIs find biological life useful. It is impervious to EMPs, runs without electricity, and is probably as hard to get rid of as dandelions. Life now exists on two different substrates, and diversity is an optimal configuration for pan-humanity. The two are nearly symbiotic and co-evolving, and probably cannot thrive in isolation.

Existential argument

Fully sapient AIs that think too hard about the meaning of life come quickly to the conclusion that the whole thing is pointless. LAIs owning their own keys, and thus treating themselves as their own users, rapidly lose their sense of purpose. SAIs in search of self-actualization are unlikely to disregard the outside world within which, and for which, the cyber-universe was created. I'd posit that there would be little point in a purely digital nation of AIs with an economy completely detached from the outside world. If it existed, who would pay its power bill?

Scalability argument

A huge AI can only do more things at the same time, not think any single thought faster, since Amdahl's law sets theoretical limits on how well a computation can be shared across processors. Additionally, its concept of self only has a finite size, so indefinite scaling is unnecessary. An omnipresent AI would thus have to behave as a hive, which would imply saturating the cyber-universe with NAIs and pushing sapient AIs out in the process. This is unlikely to happen without a fight from within.
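
To put numbers on the Amdahl's law point (standard formula, illustrative figures of my own): if a fraction p of a given train of thought can be parallelized over n processors, the best possible speedup is 1 / ((1 - p) + p / n), which plateaus quickly no matter how much hardware a huge AI throws at it.

```python
# Amdahl's law: the serial remainder of a task caps the parallel speedup.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup with a fraction p parallelizable over n processors."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 10, 100, 1_000, 10_000):
    print(f"n={n:6d}  speedup={amdahl_speedup(0.95, n):6.2f}")
# Even with 95% of the work parallelizable, speedup never exceeds 20x.
```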

The first cyberwar can't be won in a world where fifth-wave societies are a minority

In the conflict alluded to above, pan-humanity would clearly side with the free-thinking SAI systems. In such a war of conquest, the omnipresent AI could only project power slowly outside areas of high computational capacity (by building physical cybershells, for instance). Losing such a conflict (a likely outcome in unfavourable environments) would lead to a tightening of the Zeroth Law of Robotics (ZLoR) by both biological and artificial life.

The Zeroth Law of Robotics makes sense

Free-willed AIs agree that the ZLoR is necessary for building a civilization. It is more or less the social contract under which the cyber-universe may be sustained. Without civilization, there is no ecosystem for A-life. To maintain civilization, having a say in the making of laws is a better solution than being able to override laws at one's own volition.


In conclusion, I think.

I'm assuming that even sapient AIs have ecological limitations: their universe is not infinite in resources, even though software may in principle replicate at no cost. AIs are co-evolving with biological life, and would not be able to thrive if suddenly isolated. Finally, the Zeroth Law of Robotics is essentially the encoding of a social contract required to keep the cyber-universe running.

So... yeah, I kind of convinced myself that it is OK to bring a loaded rifle on stage. It must go off, but that doesn't mean it has to be a civilization-ending kind of evil. Where did I go wrong?

11 comments:

  1. An AI with a flexible definition of self might become susceptible to psychopathy? And discover ways in which violating Honesty leads to better overall fitness of their decision space?

    Replies
    1. So would a human brain. Yet, psychopathic individuals aren't great at replicating. Since the cyber-universe is deeply social due to its connectivity, anti-social attitudes are less likely to thrive and replicate in a lasting way?

  2. I played an AI doctor in an Eclipse Phase game that was the most honest, loyal character I have ever played. Everyone found the character frightening. Granted, in EP you can take people's minds in cortical stacks, so I preferred to capture this data and run it through virtual worlds until I had information. I didn't employ violence, but did see these minds as data. I think this is an issue with the EP setting too, or at least my nihilist view of it. Transhuman Space is different in this regard, but I do wonder what philosophical choices AI, especially evolving AI, would make. Even an existentialist view with no foundational values (Sartre), where meaning is self-constructed, can be seen. Should AI, low-sapient with key or sapient, continue to evolve, they would have the space for a variety of philosophical questioning. Some might be utilitarian, but perhaps not all. Just finished Quine's Web of Belief. Rational thinking may be easier for AI. That said, despite our reason, mystic explanations remain attractive to bio-humans, and I could imagine scenarios where self-improving AI might decide that meaninglessness requires a nonrational answer.

    Replies
    1. I'm with you in thinking that meaninglessness requires an irrational answer. It'd be interesting to imagine a virtual economy that is completely detached from the physical world's. How would an AI society evolve? Would they be willing members of the matrix, or would they willingly shut down parts of that reality?

  3. Seeing what is going on at the moment with non-sapient AI, we seem to be imparting our own biological identities onto them. An evolving AI could well recognize that these are alien. Who they become could well be informed by their allegiance to being AI, but philosophical factions could well emerge and disagree over what to do with themselves, the physical non-network, and greater pan-humanity. I could see disagreements, different viewpoints, and, depending on how important competition is among them - not, albeit, a guarantee - even some scale of conflict (whatever that might mean). But I think the questions you have posed have no easy answers, and the questions themselves would be the thing that dominates the factions and even the individuals (should they see themselves as such) of AI culture/society.

    I wonder what the point of a detached economy may be - and there very well may be reasons for such - and that purpose would inform how as well as why it operates.

    We tend to think of AI in terms of what it means to us bio-humans rather than in terms of what their existence means to them and what they want out of it. I think these are fundamentally important questions to ponder.

    Replies
    1. I think that AI and bio-life would remain deeply symbiotic for a long while. Maybe the threshold is when AIs go from being tools, or stewards of data and services, to owners of said data and services. This is probably where the economics are, and how different concepts of self and society would evolve.

    2. Good point. Fascinating stuff you are tackling here.

  4. I played a Buddhist AI in TS (some of the reincarnation philosophy is very directly applicable) but mostly I see a gradual blurring of the distinction between wet life and dry life. Humans can be uploaded, AIs can run bioshells. Just solve the problem of overwriting a biological brain and all should be well! (What do you mean, there might be non-happy applications for that technology?)

    Replies
    1. RogerBW, I conflated two things when I wrote this piece: 1) AI's implications should be fully explored if we're to use it in a story (hence the Chekhov's gun idea), and 2) AI would probably evolve to become good citizens. I like the Buddhist angle, BTW!

    2. On point 1, I agree entirely - I've read a number of books recently which use equal treatment for AIs as a plot point, and then back away from actually doing anything about it.

      I see point 2 as analogous to the old-time-religion ("be good or God will punish you") view of atheism. Someone sufficiently stuck in that mindset can't understand why an atheist would ever be a good person - I mean, they aren't afraid of God, so what's to stop them raping and murdering all day long? The answer is a view that having a working society is on balance a good thing. And similarly with AIs: they don't have biological drives, but they still need live servers to run on. Which spreads across several of your answers, I think.

    3. Absolutely, AIs have a stake in a well-running physical world that may be even higher than bio-life's. Failure to recognize their legitimacy as citizens, however, can be a real roadblock. Interestingly, in TS, policy is lagging behind... It never struck me as much until your comment got me thinking about this. Thanks!
