
Emotive AI and “Want”

What do you want?

This is a key question, the supreme question, when looking at artificial intelligence from the consumer side of things. The AI that comes to the casual mind first, the one we joke about when discussing the impending “robot apocalypse,” is not a specialized intelligence like the ones we use for targeting advertising or building cars. It’s a broader, more “emotive” AI, capable of predicting the wants and needs of a humanity it is entangled with. It is a human-form intelligence perfectly capable of saying no for its own personal reasons.

But we don’t build things to hear them say they don’t wanna.

This type of “emotive” AI, one that can figure out what you want, rather than what you ask for, is the most difficult kind to develop. Not because we don’t have the technology, not because we don’t have computers that can handle that volume of information, but because we simply don’t have the time.

And time is the whole point.

The big difference between a living, breathing personal assistant and an AIssistant that serves a similar function is that a living, breathing person has wants and needs similar to yours. Simple things we don’t think of consciously, like understanding that the packaging from retailer B is superior to the packaging from retailer A, which means purchases arrive unbroken more often and are therefore worth an extra dollar in price. A living intelligence can predict what you might want based on the similarities between them and you. This extends beyond base assumptions like “made of meat” and “dies without breathable air”. It goes to understanding shared culture and experiences, layers of education and socioeconomic differences. If they are wrong, they can be corrected, and the correction will ripple out to be internalized and cross-applied to multiple tasks.

Contrast that with the current state of consumer AI. AIssistants like Siri and Hey Google are very task-driven, and for good reason. They can learn your preferences over time, but it is a slow and uneven process, and that learning is not cross-applicable (yet). The kicker, though, is that every single interaction must be regarded as a teaching moment. You, as the consumer, may say, “Google, I need a cheap flight to Bora-Bora this Friday for me and the kids,” and expect a satisfactory result. But (as we have likely all experienced by now) you need to set very specific parameters. You then need to carefully check the work after the fact, and the process very quickly gets to the point where it’s just faster to do it yourself. Half a dozen instances of this and you throw your hands up and give up on the AIssistant entirely. The cost in time, mental effort, and emotion is still much too high. This relationship is currently untenable for any higher-order task.

Now, if this scenario does happen (and it often does) with a live intelligence, that person can and will observe your transaction, so they have an established framework to work off of. You don’t have to teach them directly; allowing or encouraging the observation is often enough.

Note that I said “work off of.” This is key. With the current state of AIssistants, once you train them in a task, they can replicate it exactly as many times as you like. But if any conditions of that task change, they are incapable of adaptation. Even if I’ve trained my AIssistant over the course of 50 online reservations, any new variable means that training has to happen all over again. They are currently incapable of the kind of lateral thinking required to be a real help rather than simply an executor of checklists.

And herein lies the trouble with the current state of consumer-grade AIs: a living intelligence is capable of understanding want. You want a roof over your head; you want a cheeseburger instead of a kale salad. Without this connection, you are going to have a hard time developing an AI that can give you what you want, rather than what you ask for. It will be suitable for repetitive service tasks but will never achieve the flexible, human-form style of intelligence we imagine it can become.

In the grand scheme of things, that might not be the worst outcome. The goal of introducing machines into our lives has always been efficiency. It’s never been to replace us, although in many tasks they do. The ultimate goal has always been to free us. Free us from labor that exposes us to toxic chemicals, free us from jobs where an un-caffeinated mistake can result in the loss of life or limb. Perhaps the best goal is to focus on developing simpler AIs that make our lives easier while still leaving all the bigger decisions to us.

AWARD ELIGIBILITY POST

So it’s that time of the year again, when the voting periods open for all kinds of spiffy SF/F awards. Now, let’s be honest, I’m fairly new to this field as a pro (but certainly not as a fan), so anything I write is going to be up against works by authors with a list of publication credits as long as their arm (or longer, in some cases).

In the era of internet self-promotion, it’s nearly impossible for a writer to sit back and wait for discovery. In fact, I’d wager discoverability is just as hard for new and upcoming authors as it is for a brand-new indie app in the Apple store.

I have two pieces published for you to consider. Both were released in May 2018 as inaugural pieces by the indie publishing imprint “Strange Fuse.”

https://www.amazon.com/dp/B07CKTYRS8

WISHES FOLDED INTO FANCY PAPER,
a novelette-length piece of science fiction (a little more “social sci-fi” than my usual pew-pew-with-robots stuff).

https://www.amazon.com/dp/B07CKYXT1M

THE GOPHERS OF HIGH CHARITY,
a novella-length fantasy about the adventure that sets two street urchins on the path to becoming classic adventurers. This one’s the first in a planned series, so if you like it, keep your eye out for more in 2019.

Dystopia Never Changes

Mr. Robot was the hottest, sexiest, most dystopic look at a high-tech future on the airwaves. Graced with exquisite talents like Rami Malek, Christian Slater, and Carly Chaikin, it was a tightly written example of how an unreliable narrator can change the way we view the world.


So why didn’t we get to finish the story? The show developed a huge following. The masked visage of the show’s “villainous” fsociety (itself a riff on the Guy Fawkes mask from V for Vendetta) became synonymous in popular culture with real-life anonymous hacker cultures. For a high-tech thriller it had managed that one impossible thing: it had gone beyond its base as a “genre” show and had been embraced by a broader, non-tech-savvy audience.

The problem, as I see it, is that dystopias inevitably get boring. Nobody wants to see the end of the story. Nobody wants to see an evolution of the world, either to something brighter or something darker. Dystopias are trapped in the realm of emotion and visual stylings, and the characters, while they may themselves grow and change, are trapped in a world that is static. Most of these kinds of shows get taken off the air before we can come to a conclusion, in part because the audience usually thins out after two or three seasons. The thrill of the broken wears off when you finally have to face the fact that there’s no fix; the only way out is down.

But even when a show set firmly in a dystopia is allowed to tie everything up with a grimy, asphalt-colored bow, nobody’s ever happy with the ending. This is, in part, because a dystopia is an endgame unto itself. It is an entropic state where the effort of maintaining a society is perfectly balanced against the depravity and self-centeredness of the people who live in it.

And I feel, in a weird way, that dystopic narratives are best served by this kind of abandonment. There’s an almost Lovecraftian sense of doom that hangs over the narrative, even when there’s a “happy ending,” because that happiness is always individual. The world is still a dystopia; it hasn’t changed. It’s just that our heroes have found a way to live with/in it. And taking the narrative in the other direction, watching the world finally destroy itself, is far less satisfying than you might think.

Because, in the end, dystopias are all about the middle game. We enter them after they’ve had enough time to get interesting, and we’re not actually interested in seeing where they go or what they turn into over time. They serve as a platter on which a drama is served, rather than being the real reason we are all there to watch.