Rules, rules, rules.

Please keep discussions civil. Drive-bys, angry politicos, hate chat, and other unhelpful comments will receive a tap with the banhammer. Comments on any given post will be closed after two weeks.

Terraforming in Games

Welcome to the first of a monthly series on science fiction in video games. The full version of this article can be found over at Amazing Stories, and the abbreviated version gets posted here a month later.

Terraformed World

Making the uninhabitable a nice place to be since 1942.

The goal is not to deliver a “how to video game ur sci-fi” series of posts.  I want to take a look at how closely science fiction in games is entwined with the science fiction expressed in books and other media.  Sometimes it’s licensing, sometimes it’s homage, and sometimes it is something new and unique.

So let’s start off this column by looking at the worldbuilding of a recent entry, “Anthem”. Anthem is a new type of product referred to as a “split narrative MMO”. It’s best described as a single player story cleverly couched in a massively multiplayer online world. The game is from BioWare, a studio known for building deep storytelling experiences within their games.  They handle both science fiction and fantasy narratives with equal grace and engagement.

Underpinning all the bright colors and big alien sky, the world of Anthem contains a classic “man vs nature” backstory. Some time long ago, the planet was terraformed by an object called the “Anthem of Creation.” Along the way, someone failed to turn it off, resulting in a planet with an ecosystem that is in a state of constant, dangerous flux.  The formerly enslaved human population has overthrown their alien masters and begun to thrive despite this ever-changing and sometimes openly hostile environment.

The idea of terraforming first showed up in 1942, in a short story written by Jack Williamson (under the pen name Will Stewart). At the time he relied on a hand-wavy “far-flung future” science to make it happen. Much like Williamson’s original work, and the work of the many, many authors to follow, Anthem is less worried about the “how” of terraforming and instead focuses on the end results (and the challenges they bring).

In action-heavy games the lens of time is always dedicated to the immediate, human-scale view, which makes terraforming in hard-science terms difficult to work with.  In video-game terms, if we want the environment to be a potential hazard or ally, that timescale is simply a non-starter. Instead, Anthem embraces the more catastrophic, short-form terraforming you see in places like the Genesis Device from Star Trek II: The Wrath of Khan, or the Arkfalls from Defiance. Not only does it make for a much more visually stunning environment, it also supplies a great many incidental hazards for the player to overcome, letting the design team build toward a more emergent style of play that fills in the gaps between the must-do missions pushing the story forward.

This aggressive terraforming idea serves as the core foundation on which the game mechanics and story are built.  In order to first overthrow their enslavers, then later deal with a constant onslaught of threats driven by the Anthem running off the chain, the human population develops the “Javelin”, a powered exo-suit via which the player can survive encounters that would turn even a top-form human physique into a sticky paste.

The Javelin provides the perfect vehicle (no pun intended) by which the player can customize their experience.  Different Javelins support different styles of play.  Over time there are modifications and upgrades that players can pick and choose from, earn, or outright purchase, thereby feeding the beast of in-game transactions (and ensuring the ongoing creation of new game content).  Upgrading the Javelin is a personal and immediate action; the suit becomes the tool by which we give the players agency.
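To make that “pick and choose” idea concrete, here is a minimal sketch of the design in miniature. It is a purely hypothetical illustration, not Anthem’s actual code or data model; every name in it is invented for the example.

```python
# Hypothetical sketch of a "pick and choose" loadout model.
# Not Anthem's actual implementation; names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Modification:
    name: str
    effect: str       # e.g. "+15% shield", "reduced flight heat"
    source: str       # "mission reward", "crafted", or "purchased"


@dataclass
class Javelin:
    chassis: str                                   # e.g. "Ranger", "Colossus"
    playstyle: str                                 # the style this chassis supports
    mods: list[Modification] = field(default_factory=list)

    def equip(self, mod: Modification) -> None:
        # Upgrading is personal and immediate: slot the mod, change the suit.
        self.mods.append(mod)


ranger = Javelin("Ranger", "balanced all-rounder")
ranger.equip(Modification("Thruster tuning", "reduced flight heat", "mission reward"))
```

The point of the sketch is simply that the suit, not the pilot, is the unit of progression, which is what makes the upgrade loop feel so immediate and personal.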

The exo-suit has been a very popular piece of kit in the more action-driven science fiction games for over a decade. From the vehicle-scale, human-controlled machines in games like Titanfall or novels like John Steakley’s Armor, on down to the entirely robotic frames of Warframe or the more lightweight frames of Elysium, they are a solid “science fictional” way to rationalize one person’s ability to punch through an army of killer robots.

You can put off the danger for another day, maybe; you can wrap up a mission or close out a chapter, but this does not a long-form narrative make.  Anthem, like so many stories before it, tackles this need for conclusion by introducing a villain and, of course, by taking advantage of the biggest, shiniest piece of science fiction on the planet: the terraforming engine itself.  So now we have not only the immense, uncaring power of the Anthem, but also a near and viable threat: a bad guy looking to take that power and put it to deliberate use. That demands immediate (for human-timescale values of immediate) action, which is exactly the sort of thing game players find supremely satisfying to deal with.

As we all know, once you create a world that clicks, the fans of that world, be it Anthem or Gotham City, are going to consume as much content as they can lay hands on.  They will be perpetually hungry for new stories, new characters, and new toys. If you’re lucky, you’ll get a bunch of players who take your world and run with it, giving you a vibrant and active community. By going with an active terraforming scenario, the team at BioWare has given itself (and us game players) an open door for everything to change in the future, thereby ensuring the vitality of the game for years.

The Infinite Avengers

Okay, bear with me on this.  I went to see Captain Marvel this weekend, and something occurred to me.

Every Infinity Stone now has an Avatar.  Not a Bearer, per se, because that implies someone who simply uses the stone as a tool, and I’m not sure that’s what they are getting at.  Instead, think of them as heroes who have been created or influenced by contact with a Stone, each one a conduit through which that Stone can consciously wield its power. Count them down with me.

Vision is the Mind Stone’s Avatar

The Mind Stone (Yellow) == Vision, who was created by hijacking Ultron’s perfect body, mashing up the remains of the JARVIS AI, and adding the Mind Stone to the mix.

Dr. Strange is the Time Stone’s Avatar

The Time Stone (Green) == Dr. Strange, who uses the Eye of Agamotto very effectively; it’s suggested he’s the only one to have done so in a very long time.

Captain Marvel is the Space Stone’s Avatar

The Space Stone (Blue), a.k.a. the Tesseract == Captain Marvel, whose powers are the result of <<<spoilers>>>

Star Lord is the Power Stone’s Avatar

The Power Stone (Purple) == Star Lord (and the rest of the Guardians, but I think Star Lord is the primary “avatar” and the rest are just his support).

Jane Foster is the Reality Stone’s Avatar

The Reality Stone (Red) == This one is trickier; I think we may discover that Jane Foster has been hidden away by SHIELD because she developed some superpowers.

Young Gamora is the Soul Stone’s Avatar

And finally, the Soul Stone (Orange) == Gamora.  More specifically, the child-Gamora that speaks to and guides Thanos.

There have been a lot of theories floating around about just who is going to hand Thanos his *ss in the upcoming film. I am less convinced it is going to be *a* person, and somewhat more convinced that it will be the Stones themselves, acting through the heroes that they have each created.  This gives us a team of heroes who have all received *something* from the Stones, whether it be raw power, intelligence, family, or the ability to adjust time… all of which suggests that, when the time comes for the big throwdown, they are going to be the ones doing the heavy lifting.

So what does this mean? Are these folks going to form the core of the “new” Avengers?

The Ethics of your Smart Things

An argument is currently being made that AIssistants and smart objects should be programmed with a cloud-based “moral awareness”.  This programmed-in sense of right and wrong would enable them to report the illegal activities of their owners.

Now, the types of “illegal activities” likely being targeted by this idea are things like domestic abuse, home invasions, and the like.  This is a NOBLE idea: your AIssistant being able to call the cops for you if someone kicks in the door or an argument escalates to harm. But we have a “dumb” version of this technology already.  It’s called an alarm system. It can and will call your alarm company if triggered, and a live human makes the call as to whether or not the police need to be involved. The key here is that a *live human* makes this call.
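To make the contrast concrete, here is a minimal sketch of the two models, assuming entirely hypothetical function and device names (this is not any real vendor’s API): the existing alarm-system flow keeps a live human at the decision point, while the proposed “moral awareness” flow lets the device pass judgement on its own.

```python
# Minimal sketch of the two reporting models. All names are hypothetical;
# this is not any real alarm company's or assistant's API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Alert:
    """A raw trigger from a sensor or microphone, with no interpretation attached."""
    source: str        # e.g. "front door sensor", "living-room microphone"
    details: str


def human_operator_review(alert: Alert) -> None:
    # Stand-in for the alarm-company model: a live human looks at the trigger
    # and decides whether the police need to be involved.
    print(f"Operator reviewing: {alert.source} -- {alert.details}")


def dispatch_police(alert: Alert, urgency: str) -> None:
    # Stand-in for an automated emergency call.
    print(f"Police dispatched ({urgency}): {alert.source}")


def dumb_alarm(alert: Alert) -> None:
    """Existing model: the device only forwards the trigger; judgement stays human."""
    human_operator_review(alert)


def moral_aissistant(alert: Alert, is_illegal: Callable[[Alert], bool]) -> None:
    """Proposed model: the device classifies the event itself and calls the
    police directly, with no human judgement in the loop."""
    if is_illegal(alert):  # an error-prone audio classifier stands in for "moral awareness"
        dispatch_police(alert, urgency="high")


# Same trigger, very different chain of consequences.
alert = Alert("living-room microphone", "raised voices, then a crash")
dumb_alarm(alert)
moral_aissistant(alert, is_illegal=lambda a: "crash" in a.details)
```

The only difference between the two is who sits at the decision point, and that single line is what the rest of this argument turns on.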

Allowing your AIssistant to make a decision about your in-home activities rapidly becomes the kind of surveillance state that only ends in tears. Consumer-grade voice assistants barely take enough dictation to run Google searches when it’s quiet out and you are alone in your home.  Just try talking to Siri or Hey Google with a room full of chatty 10-year-olds or in the middle of a family harangue. They do not have (and may never have) the fidelity to analyze a person’s activity based only on audio information, and certainly not to the level required to make a judgement call.

The thing to remember, always, is that smart devices and related objects are supposed to make our lives simpler. They’re supposed to let us operate at a greater-than-average level of efficiency, to remind us when we are out of milk, or to find us instructions on just how to tie a Hunsaker knot.  Judgment should not enter into this. We don’t expect them to judge our grocery-shopping choices, or to remind us that we’ve been running the heater in our homes for four hours a day this week, both of which are tasks well within the capabilities of these AIssistants.

Source: https://www.dailymail.co.uk/sciencetech/article-6733417/Digital-assistants-discuss-moral-AI-report-illegal-immoral-activity.html

But there is a case to be made for extenuating circumstances. If your Amazon Alexa can tell that you are beating your children with the kitchen ladle, then perhaps a call to the police might be in order.  Is it any worse than having your next-door neighbor call the cops because they can hear you screaming through the paper-thin walls of your apartment?  But, you may say, the police are live human beings and could certainly make a clear determination once they arrive on the scene.  Your AIssistant is just triggering the call; it’s not *actually* making a judgement.

But when a computer delivers information to a live human, it is taken more seriously. There is an ingrained response in many humans to trust the machine because the machine is not susceptible to emotional responses. The machine cannot color its decision with racial prejudice or poor observation skills.  The machine (as far as most people are concerned) is innocent, logical, factual.

Those of us in tech know this to be a lie, but you’re not dealing with people in tech. You’re dealing with police officers and people who, by and large, have their impression of artificial intelligence shaped by film and television. They are consumers, and as such have a consumer-level understanding of just how infallible machines should be.

So a team of police officers is dispatched, with their level of urgency dictated by the machine. If the computer judged the event to be an emergency worthy of a call to the police, then the police are going to arrive with the presumption that the computer is *right*.  They will not bring the added care and caution that might accompany a response to a phone call from a well-meaning but flawed human neighbor.

Part of the human condition is the art of the judgment call. Every rule, with a very limited number of exceptions, can be bent (oftentimes it is bent for the wrong people, or only bent for some people and not others, but that is a discussion for another day). This is why we draw a distinction between the “letter of the law” and the “spirit of the law”. These exceptions are almost always made based on lived experience. This is why we judge people with a jury of their peers: people who have to pay rent and buy groceries and put up with bad bosses, people who understand all of the micro-stressors that can drive a person to choose option A over option B.

If we offload this decision-making, if we hand it to a non-fuzzy machine, one that does not share the points of commonality that come with living a day-to-day life, then we are changing the nature of our society.

And I don’t think we’re ready for that. I don’t think that kind of change is good for us, for humanity as a whole.  If we offload our judgement, then we offload one of the very things that allows humans to work together.

So for those of you calling to install “ethical decision making” in our home devices, I say knock it the h*ll off. As much as I embrace the future, a future where machine intelligence is designed to improve our state of being, I feel we are a long way off from developing a machine that has enough in common with us to understand us. And if you can’t understand us, how can you judge us?