
I Sold a STORY!

I am absolutely delighted to announce that Galaxy’s Edge has acquired the rights to my Laumer-esque short story “The Aborted Robot Uprising of Tasty Home Things”. I don’t have a publication date yet, but believe me, I’ll shout about it when I do! You can check out this month’s Galaxy’s Edge at the link below…

Terraforming in Games

Welcome to the first of a monthly series on science fiction in video games. The full version of this article can be found over at Amazing Stories, and the abbreviated version gets posted here a month later.

Terraformed World

Making the uninhabitable a nice place to be since 1942.

The goal is not to deliver a “how to video game ur sci-fi” series of posts. I want to take a look at how closely science fiction in games is entwined with the science fiction expressed in books and other media. Sometimes it’s licensing, sometimes it’s homage and sometimes it is something new and unique.

So let’s start off this column by looking at the worldbuilding of a recent entry, “Anthem”. Anthem is a new type of product referred to as a “split narrative MMO”: it’s best described as a single-player story cleverly couched in a massively multiplayer online world. The game is from BioWare, a studio known for building deep storytelling experiences within their games, handling both science fiction and fantasy narratives with equal grace and engagement.

Underpinning all the bright colors and big alien sky, the world of Anthem contains a classic “man vs nature” backstory. Some time long ago, the planet was terraformed by an object called the “Anthem of Creation.” Along the way, someone failed to turn it off, resulting in a planet with an ecosystem that is in a state of constant, dangerous flux.  The formerly enslaved human population has overthrown their alien masters and begun to thrive despite this ever-changing and sometimes openly hostile environment.

The idea of terraforming first showed up in 1942, in a short story written by Jack Williamson (under the pen name Will Stewart). At the time he relied on a more hand-wavy “far-flung future” science to make it happen. Much like Williamson’s original work, and the work of the many, many authors who followed, Anthem is less worried about the “how” of terraforming and instead focuses on the end results (and the challenges they bring).

In action-heavy games the lens of time is always dedicated to the immediate, human-scale view, which makes terraforming in hard-science terms difficult to work with. If we want the environment to act as a potential hazard or ally in video-game terms, a geological timescale is simply a non-starter. Instead, Anthem has embraced the more catastrophic, short-form terraforming you see in places like the Genesis Device from Star Trek II: The Wrath of Khan, or the Arkfalls from Defiance. Not only does this make for a much more visually stunning environment, it also provides a great many incidental hazards for a player to overcome, letting the design team build toward a more emergent style of play to fill in the gaps between the must-do missions that push the story forward.

This aggressive terraforming idea serves as the foundation on which the game mechanics and story are built. In order to first overthrow their enslavers, and then deal with the constant onslaught of threats driven by the Anthem running off the chain, the human population develops the “Javelin”, a powered exo-suit in which the player can survive encounters that would turn even a top-form human physique into a sticky paste.

The Javelin provides the perfect vehicle (no pun intended) by which the player can customize their experience. Different Javelins support different styles of play. Over time there are modifications and upgrades that players can pick and choose from, earn, or outright purchase, thereby feeding the beast of in-game transactions (and ensuring the ongoing creation of new game content). Upgrading the Javelin is a personal and immediate action; the suit becomes the tool by which we give the players agency.

The exo-suit has been a very popular piece of kit in the more action-driven science fiction games for over a decade. From the vehicle-scale, human-controlled machines in games like Titanfall or novels like John Steakley’s Armor, down to the entirely robotic frames of Warframe or the more lightweight frames of Elysium, they are a solid “science fictional” way to rationalize the ability of one person to punch through an army of killer robots.

You can maybe put off the danger for another day, you can wrap up a mission, close out a chapter, but this does not a long-form narrative make. Anthem, like so many stories before it, has tackled this need for conclusion by introducing a villain and, of course, taking advantage of the biggest, shiniest piece of science fiction on the planet: the terraforming engine itself. So now we have not only the immense, uncaring power of the Anthem, but also a near and viable threat, a bad guy looking to take that power and put it to deliberate use. That requires immediate (for human-timescale values of immediate) action, which is something game players find supremely satisfying to deal with.

As we all know, once you create a world that clicks, the fans of that world, be it Anthem or Gotham City, are going to consume as much content as they can lay their hands on. They will be perpetually hungry for new stories, new characters and new toys. If you’re lucky, you’re going to get a bunch of players who take your world and run with it, giving you a vibrant and active community. By going with an active terraforming scenario, the team at BioWare have given themselves (and us game players) an open door for everything to change in the future, thereby ensuring the vitality of the game for years.

Emotive AI and “Want”

What do you want?

This is a key question, the supreme question, when looking at artificial intelligence from the consumer side of things. The AI that comes to the casual mind first, the one we joke about when discussing the impending “robot apocalypse”, is not a specialized intelligence like the ones we use for targeting advertising or building cars. It’s a broader, more “emotive” AI, capable of predicting the wants and needs of the humanity it is entangled with. It is a human-form intelligence perfectly capable of saying no for its own personal reasons.

But we don’t build things to hear them say they don’t wanna.

This type of “emotive“ AI, one that can figure out what you want, rather than what you ask for, is the most difficult kind to develop. Not because we don’t have the technology, not because we don’t have computers that can handle that volume of information, but because we simply don’t have the time.

And time is the whole point.

The big difference between a living, breathing personal assistant and an AIssistant that serves a similar function is that a living, breathing person has wants and needs similar to yours. Simple things we don’t think of consciously, like understanding that the packaging from retailer B is superior to the packaging from retailer A, which means purchases arrive unbroken more often and is therefore worth an extra dollar in price. A living intelligence can predict what you might want based on the similarities between them and you. This extends beyond base assumptions like “made of meat” and “dies without breathable air”; it goes to understanding shared culture and experiences, layers of education and socioeconomic differences. If they are wrong, they can be corrected, and the correction will ripple out to be internalized and cross-applied to multiple tasks.

Contrast that to the current state of consumer AI. AIssistants like Siri and Hey Google are very task-driven, and for good reason. They can learn your preferences over time, but it is a slow and uneven process, and that learning is not cross-applicable (yet). The kicker, though, is that every single interaction must be regarded as a teaching moment. You, as the consumer, may say, “Google, I need a cheap flight to Bora-Bora this Friday for me and the kids,” and expect a satisfactory result. But (as we have likely all experienced by now) you need to set very specific parameters. You then need to carefully check the work after the fact, and the process very quickly gets to the point where it’s just faster to do it yourself. Half a dozen instances of this and you throw your hands up and give up using the AIssistant entirely. The cost in time, mental effort and emotion is still much too high. This relationship is currently untenable for any higher-order task.

Now, if this scenario happens (and it often does) with a live intelligence, that person can and will observe your transaction, so they have an established framework to work off of. You don’t have to teach them directly; allowing or encouraging the observation is often enough.

Note that I said work off of. This is key. With the current state of AIssistants, once you train them in a task, they can replicate it exactly as many times as you like. But if any conditions of that task change, they are incapable of adaptation. Even if I’ve trained my AIssistant over the course of 50 online reservations, any new variable means that training has to happen all over again. They are currently incapable of the kind of lateral thinking required to be more of a help rather than simply an executor of checklists.
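To make that “executor of checklists” idea concrete, here is a deliberately toy sketch in Python. Nothing in it reflects any real assistant’s API: the Reservation fields, the fake flight data and the book_flight routine are all hypothetical, invented only to show how a trained checklist replays perfectly right up until a single variable falls outside what it was taught.

```python
# Toy sketch of a "checklist executor" style assistant task.
# Everything here is hypothetical; no real assistant API is being modeled.

from dataclasses import dataclass

@dataclass
class Reservation:
    destination: str
    day: str
    travelers: int
    max_price: float  # the one parameter the "training" baked in

def book_flight(task: Reservation) -> str:
    """Replay a fixed checklist: search, filter by the trained price cap, confirm."""
    # Step 1: "search" is stubbed out; a real assistant would query a flight service here.
    candidates = [("Airline A", 450.0), ("Airline B", 620.0)]
    # Step 2: filter strictly on the parameter the task was trained with.
    affordable = [c for c in candidates if c[1] <= task.max_price]
    if not affordable:
        # Step 3: no adaptation. A new condition (a price spike, an extra traveler,
        # a preferred airline) falls outside the checklist, so the task simply fails.
        return "FAILED: no option matches the trained parameters; re-teach the task."
    airline, price = affordable[0]
    return f"Booked {airline} to {task.destination} on {task.day} for ${price:.0f} x {task.travelers}"

# The 51st reservation looks almost exactly like the previous 50, but one variable changed.
print(book_flight(Reservation("Bora-Bora", "Friday", 3, max_price=400.0)))
```

A live assistant who had watched those 50 bookings would loosen the price cap or ask a clarifying question; the checklist can only fail and wait to be re-taught.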

And herein lies the trouble with the current state of consumer-grade AIs: a living intelligence is capable of understanding want. You want a roof over your head; you want a cheeseburger instead of a kale salad. Without this connection, you are going to have a hard time developing an AI that can give you what you want rather than what you ask for. It will be suitable for repetitive service tasks but will never achieve the flexible, human-form style of intelligence that we imagine it can become.

In the grand scheme of things, that might not be the worst outcome. The goal of introducing machines into our lives has always been efficiency. It’s never been to replace us, although in many tasks they do. The ultimate goal has been to free us: free us from labor that exposes us to toxic chemicals, free us from working at jobs where an un-caffeinated mistake can result in the loss of life or limb. Perhaps the best goal is to focus on developing simpler AIs that make our lives easier while still leaving all the bigger decisions to us.