Game UX Summit ’18 | Keynote – A Plausible Utopian Future of Video Games

I started out in AOL message boards. I don’t know if anybody remembers those, but I was running role playing games there. Play-by-message-board role playing games, which I guess is actually still a thriving thing. It just moves from technology to technology.

And I got pulled into Dragon Realms. I was an assistant game designer on a text-based game called Dragon Realms for many years, and then from there moved into handheld games, virtual world start-ups, games for kids. Gosh, online gambling, mobile games, social games. I’ve been kind of all over the place. There’s no clear through line to my career. I’m also an author of fantasy novels.

Is that funny? Oh no. So what I’m doing now. I can’t tell you what I’m doing now. I’m working for Google.

I have a small design group that does gameplay research and development, and our mission is to explore innovation in technology to discover transformative powerful gameplay experiences. So it’s very specific. It’s fun. I’m going to go pretty fast. People tell me this constantly.

I go fast. I think of myself as an information age presenter, so you can always Google it later. Try not to worry if it seems to be flying by, and I’m always more than happy to discuss all of the stuff that is in this presentation later. I spend a lot of time thinking about the future and talking about the future, so that’s what I want to talk about here, and I’ll also try to share some of the fun that I’m having in my most recent position. There will be three parts to the talk. The first will be machine learning, which I’m sure you’re very excited about; then accessibility and its role in design, and I know there have been several talks already that address accessibility; and then how all this wraps back together into what I think our job is as designers.

So before I get into the machine learning, I’m going to say up front that I don’t know how to do ML. I probably will never know how to do ML, and that’s fine. I own that, and I think that our role as designers is to understand just enough of a thing to figure out what to do with it. So what I’m going to try to give you is the design perspective on machine learning, because I do actually believe that the next big leap machine learning takes will happen because it is driven by design instead of being driven by technology alone. So what is machine learning? It’s the science of trying to get machines to act like people, a definition we’ve had for a long time.

This definition doesn’t actually tell you very much, and so in trying to understand what machine learning is going to do, it’s kind of helpful to ask why is machine learning. So why is it a big deal now? What’s happening?

And basically what happened is Moore’s Law. So you see the little column of numbers at the top has gotten really, really big. 20 billion.

And this is kind of a paradigm change in compute power. It means that kind of everything about how computers work has to change on a software level, to some extent, to accommodate that much power coming in, and that’s what machine learning really is. It’s a new architecture. It’s a new way of organizing computation that is more self-organizing. And it’s more mysterious, and it’s pretty weird.

So, again, you can think about that as the structure of the brain. What ML is, is math and then crazy computers, and one of the things that I’ve found the most interesting is that the software engineers at Google by and large don’t understand machine learning, so it’s a complete paradigm change. And the ML people are their own people, which is fun to me because it’s like, oh, this is a whole new discipline, a whole new thing. If you were around in the 1990s, you might remember that neural nets were becoming a thing. I was in college then, and I remember that I had a couple of programmer friends who were obsessed with neural nets. It’s like a little self-organizing thing, and we just give it data, and eventually it gives us something, and we don’t know how! It was very exciting.

But people kind of treated them as curiosities, and said “This will never go anywhere” because there wasn’t enough power. So now there’s enough power. If you want to understand ML from a high level, I highly recommend you look up Jeff Dean’s talks. He has a really entertaining way of presenting this stuff, and talking about machine learning, and what it does.

He leads research at Google, and he’s also just a lot of fun to listen to. One of the things he has pointed out that really clicked for me was this massive change that has happened in machine vision. Machines, as recently as 2011, could only identify what was in an image about 80 or so percent of the time. They had a really high error rate, but in just five years, that error rate dropped to the point that it’s actually better than people’s, and that’s kind of astonishing. It means that there’s a whole new frontier of things that are possible now that were not possible that long ago.

I don’t know if this is actually going to animate. Oh, it will. Cool.

So you might’ve heard about deep learning. Deep learning is kind of a stack of ML. It’s math doing more math.

So it’s an organizing principle by which you can take that self-organizing thing down at the bottom, feed it with data, and then it gets filtered up into an output layer. That’s all it means. It’s just a stack. In the ML space, there’s a lot of terminology that is more complex than it needs to be. It’s just because it’s so new that we have all this complicated language for it. So it’s math doing math doing math.
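To make the “stack” idea concrete, here’s a toy sketch in Python of a feed-forward pass: each layer is literally math applied to the output of the math below it. The layer sizes and random weights are made up for illustration; nothing is trained here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple nonlinearity applied between layers.
    return np.maximum(0.0, x)

# The "stack": each layer is just a weight matrix (math doing math).
layer_sizes = [8, 16, 16, 4]  # input -> two hidden layers -> output
weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    # Data is fed in at the bottom and filtered up into the output layer.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]

out = forward(rng.normal(size=(1, 8)))  # one input example
print(out.shape)  # (1, 4)
```

Training would adjust those weight matrices against data; the stack itself is nothing more exotic than this.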

The other thing that is interesting to realize is that there’s a lot of crazy math involved in this stuff, and the people that get into it tend to get into it through math, but most of what they spend their time doing is cleaning data. Because you’re going to feed this thing with data in order to train it, it’s only as good as the data that you put into it. So you spend most of your time cleaning that data, because it turns out most data is really contaminated and has to be organized into a consistent form. The field also tends to focus on large banks of data, and this was another big, mind-blowing thing for me. Natural language processing looks at Wikipedia, because Wikipedia has this astonishing quantity of words. It has 27 billion words, and that is ML-scale data.
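To make “cleaning data” concrete, here’s a hypothetical sketch in Python. The records, field names, and cleanup rules are all invented for illustration; the point is that raw data arrives inconsistent and has to be normalized into one regular form before a model can learn from it.

```python
# Hypothetical raw records: inconsistent case, stray whitespace, missing fields.
raw_records = [
    {"word_count": " 1200 ", "title": "  Apollo 11 "},
    {"word_count": "350", "title": "apollo 11"},
    {"word_count": None, "title": "Zebra"},  # missing value: drop it
    {"word_count": "80", "title": ""},       # empty title: drop it
]

def clean(records):
    cleaned = []
    for rec in records:
        count, title = rec["word_count"], rec["title"]
        if not count or not title or not title.strip():
            continue  # discard contaminated rows rather than guess at them
        cleaned.append({
            "word_count": int(count.strip()),
            "title": title.strip().title(),  # normalize to one canonical form
        })
    return cleaned

cleaned = clean(raw_records)
print(cleaned)
# The two "Apollo 11" rows now agree; that regularity is what lets a
# model find the pattern.
```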

So ML also needs a ton of data. And Wikipedia is kind of perfect because it not only is a ton of words, but they’re organized in a fairly regular manner, and so it lends itself to the machine being able to recognize patterns in what’s happening. So what can we do with it? Another really great resource is How to Become a Centaur by Nicky Case, who is one of my favorite designers. It talks about what it means when we transition into the centaur metaphor: the horse is the machine and the person is the person.

There are actually some really hilarious business graphics you can find about machines being stapled onto people, if you want to Google those. This is a much prettier version of it, though. So one of the things that we’re doing with ML is reducing energy costs. I say this, again, kind of as an illustration of what is possible. What ML can do is solve complex systems, so if you have something like a cooling system that has a ton of inputs, you can feed the data into it, and then optimize the settings on all of that to get a desired result. It has to be given a very specific result, which is: reduce this cost.

But if you give it that very specific goal and feed it a whole bunch of complicated data, way more mathematically complex than a person could solve by themselves, you can get a pretty significant effect, which becomes pretty profound for what it does for people. We’re also learning to detect cancer with it. This is another one of those visual things: a pattern that is very fixed, is cancer or is not cancer, but is very visually complex. Recognizing it is something we do with people, but now we can create this microscope that runs the pattern recognition, works with the pathologist, and says, “This is a pattern that looks like it might be cancer,” because you fed it with a lot of that “is” and “is not.” It can give them an alert and say, “We’ve detected an abnormality. You might want to look at this particular part.”

It’s also predicting floods. So weather is another complex input system. The history of flood patterns in a place is another complex set of data. You put those two together, and the machine can find and make predictions based on what it’s seen historically, and then send an alert to a community and say, “You might want to get out because it kind of looks like there’s going to be a flood here,” which is something that’s being worked on. So I apologize for the Google bias on this. This is just like what is proximal to me.

It’s not actually an advertisement. It’s more like this is what I have access to, and what the people around me understand. Watson and a lot of the others, it’s a very rich space, and it’s moving so fast that it’s really hard to keep up with. But the point is that it’s not coming soon anymore. It’s here, and the game industry is actually a little bit late to the party on ML, which is partly why I want to get around to talking to people about it because it’s extremely powerful.

It’s actually quite accessible, and it’s already making changes in the world. So what we need to figure out is what we do with it for games. And there have been some people working on this. You can mine the GDC Vault and find some really interesting talks. And we’re beginning to get some of those ML people in the game industry, but there are very, very few.

And I think we need to solve this as a design problem. So this is kind of a map of machine learning I found. I actually spent an hour trying to find the correct citation for who made this thing, but it’s actually sourced to Corsair, and also these other guys, and I couldn’t sort it out.

So I apologize to whoever created this if they ever see this. Ignore the middle. Don’t look at the big nodes. The terminology doesn’t matter, but look around the outside. So things like meaningful compression, things like image classification.

Those are very specific usages that come out of these branches of ML, and it is kind of helpful to understand that there are these major branches that lead to very specific applications. And so you can kind of drill in to each one of these and think, “Well, they’re using it this way. How could we use it for games?” Image classification is a fun one to me, and it’s this sort of like … Obviously, because it came from the internet, it’s trained on cats.

Is it a cat? Is it not a cat? So this is kind of a good case for machines, because it’s got all those pixels, and it can find the pattern in the pixels. This is a cat; this is not a cat. You can see on Silicon Valley, they did “is it a hot dog, is it not a hot dog.” But image classification, I think, has a lot of potential, because you can do it in real time on a game.

I use Katamari Damacy because it’s one of my favorite games, but also because when you pick up an object in Katamari Damacy, it tells you what it is. And all of this was laboriously hand coded by people. But with ML, we could do it in real time based on what the player is seeing. And if we were doing that, it could auto-tag things. It could be an entirely new user interface layer on what the player is seeing. It also could understand when something is obscured by something else, which is actually kind of complex to do graphically.

But because we’re processing what the player is seeing, we could kind of respond to exactly what we know is in front of them, and the objects that are around them. I might be pointing this too far away. There we go.
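As a sketch of what that real-time tagging layer might look like structurally, here’s a toy version in Python. The classify function is a stand-in for a real trained image classifier, and every name here is hypothetical.

```python
def classify(frame_id):
    # Stand-in for a real trained classifier that would look at pixels
    # and return (label, confidence); faked here so the plumbing runs.
    fake_model = {"frame_001": ("cat", 0.97), "frame_002": ("umbrella", 0.91)}
    return fake_model.get(frame_id, ("unknown", 0.0))

def auto_tag(frame_id, threshold=0.8):
    # The "new UI layer" idea: surface a label for whatever the player is
    # seeing, generated in real time rather than hand coded per object.
    label, confidence = classify(frame_id)
    if confidence >= threshold:
        return {"frame": frame_id, "tag": label}
    return None  # not confident enough to show the player anything

print(auto_tag("frame_001"))  # {'frame': 'frame_001', 'tag': 'cat'}
print(auto_tag("frame_999"))  # None
```

The interesting design work lives in the threshold and in what the game does with the tags, not in the model call itself.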

Clustering is something that interests me because, having an MMO background, I’m very interested in complex social interactions. I think that one of the areas of the game industry that may actually understand ML more is the people already working in analytics, because analytics grows into ML naturally. But I think that we’re just scratching the surface of what we could do with clustering. Clustering is just grouping things together based on criteria that it’s fed. So there are entirely new forms of recommendation which, if we looked at them as a design problem, we could push further. Not just who to play with, or what might you enjoy, but what equipment do you think you might want to use?

What character should you be playing? And all of this is extremely customized, because we’re processing the behavior of the players themselves and giving them extremely customized recommendations based on patterns we’re seeing in the data. So it’s another thing that is similar to what we already do, but it’s an amplification that makes it much more accessible as a design tool, and there are things we can do with these systems to change the experience of the player. So this is not style transfer.
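Before moving on, the clustering idea itself is simple enough to sketch. Here’s a plain k-means implementation in Python grouping hypothetical players by playstyle; the feature names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical player data: each row is (hours_played_per_week, pvp_ratio).
players = np.vstack([
    rng.normal([5, 0.1], 0.5, size=(20, 2)),   # casual, PvE-leaning players
    rng.normal([40, 0.9], 0.5, size=(20, 2)),  # hardcore, PvP-leaning players
])

def kmeans(data, k, steps=10):
    # Plain k-means: repeatedly assign each point to its nearest center,
    # then move each center to the mean of the points assigned to it.
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(steps):
        dists = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([data[labels == i].mean(axis=0) for i in range(k)])
    return labels, centers

labels, centers = kmeans(players, k=2)
# Players who land in the same cluster can now share recommendations:
# gear, characters, people to play with.
```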

This is weird. This is called Deep Dream, and it’s doing a form of image processing where it’s finding a pattern and then repeating that pattern over and over and over inside the same image. And it actually kind of frustrates me that if you try to look up ML and art, this is a lot of what you get. I think it’s because we as people are at a place where we believe that ML is weird, and so we believe that a weird image should be the result of a thing that is weird. It’s interesting, but it’s not at all useful. But the same technology, used a slightly different way, is style transfer, which finds a pattern in one image and applies it to another, and that, I think, has wide, massive artistic purpose. Because what you’re doing is basically taking concept art and applying it to a thing, and we do that quite a lot.
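For the curious, the “finding a pattern in one image” step of style transfer is commonly measured with a Gram matrix of feature activations. Here’s a toy sketch in Python of just that statistic, with random arrays standing in for a real network’s features; nothing here is a full style transfer implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def gram_matrix(features):
    # features: (channels, height * width) activations from one layer.
    # The Gram matrix records which channels fire together (texture/style)
    # while throwing away where they fire (content).
    return features @ features.T / features.shape[1]

# Random arrays standing in for network activations of two images.
style_feats = rng.normal(size=(16, 64))
generated_feats = rng.normal(size=(16, 64))

# Style loss: distance between the two images' style statistics.
# Optimizing the generated image pushes this toward zero.
style_loss = np.mean((gram_matrix(generated_feats) - gram_matrix(style_feats)) ** 2)
print(style_loss > 0)  # True
```

Matching this statistic while also matching the content of a second image is what produces the “concept art applied to a thing” effect.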

So this … I don’t know if this is actually going to play. We’ll see. Yeah, there we go. So this is interesting. It’s doing style transfer frame by frame and turning that horse into a zebra, because it’s seen a lot of pictures of zebras, and then it’s taking the patterns that it recognizes from all those pictures of zebras and projecting them.

So when you think about style transfer being done in animated form, instead of just on 2D images, all of a sudden, man, that gets kind of crazy. So this … Hopefully it won’t make any noise. Good. So this Two Minute Papers video is kind of interesting.

It is going to play. I meant to get it to the 38 second point, so I’m kind of going to tap dance here until we get to where I want to show you. The Two Minute Papers videos take these really dense ML papers and compress them into two-minute videos, which is kind of cool. They’re a little quirky, but some of them are pretty fun.

So he’s talking about style transfer in its impressionist form: taking a style and applying it. In this case, taking The Starry Night and applying it to this city, which I think is maybe in Amsterdam. And then you can do that kind of agnostically with many different types of art to create environments.

What’s interesting about this is that he thinks this part is terribly uninteresting, so he says this is the part that’s boring. What we really want is to be able to do it on photos, so he shows some of the work that’s been done applying the same technique to photos. They say that the real holy grail here is doing it to photographs, because this impressionistic style, to us, is a little grainy. It’s not super great, but it’s really, really interesting, and it’s producing these images that are very clearly artistically styled, even though they originate from photographs. But then he gets to this next part.

Yeah. So at this point, he says, “Now, this stuff is completely insane,” and it’s very interesting that an ML person would say “This is completely insane.” So this is a style transfer that’s being done on photographs, where you see the photos on the left and the photos on the right are the two source images, and they’re coming together in this hybrid thing in the middle. So the ability to actually interweave those two means that you can take this style transfer and get a really, really precise result out of it now. And that’s what they’re pretty excited about, because this used to be very, very difficult. But just within the last couple of years, maybe even the last year, it’s suddenly gotten way, way more useful.

So you might’ve heard about reinforcement learning. When I talk to game developers and ask them about AI and ML, they usually are thinking about what is called reinforcement learning: AI that’s genuinely unsupervised, that just does its thing, and we don’t know why, and it’s super weird. There’s a documentary on AlphaGo that came out recently. I think it’s on Netflix. It’s super, super interesting.

But DeepMind has created an AI that can play Go that has beaten the top master in Go, which is kind of astonishing. I want to pause so that we can watch RL actually solve Space Invaders. This is DeepMind playing Space Invaders. So one of the things that you see is that there’s a lot of jitter down there, and a person wouldn’t do that. This is some of the weird stuff that we see.

We get these emergent strategies that we didn’t know were actually optimal. Maybe they’re not. It could just be that this was the particular solution that the machine reached. And one of the questions that I’ve asked them is, hey, if you run the same ML twice on the same data, do you get the same thing or a different thing?

And they went like “Pow! I don’t know!” So we’ve got to try that.

But you can tell that there’s kind of an alienness to this, and yet it’s doing things that are super optimal, like nailing one of those guys on the end before the other ones have even come down, and it’s crazy. Because nobody told this thing what to do. They just gave it the inputs and said “Win,” and it played itself over and over and over again until it won.
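That loop, give it the inputs, say “win,” and let it play itself, is essentially what tabular Q-learning does. Here’s a minimal sketch in Python on a made-up five-state corridor rather than Space Invaders; the agent is never told that moving right is good, it discovers that policy by playing over and over.

```python
import random

random.seed(0)

# A made-up five-state corridor: start at state 0, reward only at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [1, -1]  # move right, move left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)  # explore: try something random
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update toward reward + discounted future value.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy in every non-goal state is "move right": discovered,
# never programmed.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

DeepMind’s Atari work replaces this lookup table with a deep network reading raw pixels, but the reward-driven loop is the same shape.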

So when you talk about RL and this kind of stuff, this is what people tend to think about. It’s disturbing, right? It’s HAL. Relatives are afraid that this is where AI and ML are going because that’s what they’ve been given from Hollywood to expect from these things, and because there is just enough genuine weirdness in there. It’s very mysterious. But what I would say about RL in particular is that we are in the super early phases of it.

It has a long way to go, and it turns out that video games are just very solvable, and have very fixed rules, even though they have a very broad expressive space. So they’re kind of in a sweet spot for machine learning right now, but it’s a long time out. So applications of RL for games are pretty far out, though don’t quote me on that one.

But it’s very, very promising because when you look at the expressive potential of an AI that just has its own personality and can do its own thing, I think it just opens up all sorts of things if we can create characters based on the data that we put into them. Give them rules, and give them objectives, and just let them go. And they could surprise us.

It’s kind of pretty amazing. It’s also useful to talk about what ML can’t do. What it’s not so good at yet, and again, I’m not an expert, is communicating. That’s in part because, as I’ve gotten to know some of these technologies, I’ve developed this really profound appreciation for how complicated our use of language is, and how directly connected it is to our intelligence itself.

Because words aren’t just words; they’re ideas, and they’re concepts, and so you have to actually build all of those concepts and words to get a machine that understands language. It can solve things that are very short form, which gives it some neat tricks, and that’s what the Assistant does. But what all that means is that the interactions the Assistant can mimic are actually just very rules-based, including things like making hair appointments. It’s also not very good at qualitative judgments. Being able to compare two things that it hasn’t been given a right and wrong answer for is very difficult for it currently. It might be able to do it based on a ton of input from what other people think, so it could aggregate the opinions of people, but it can’t actually come up with its own.

And then, finally, of course, social connection. It could be that empathy is more rules based than we think it is, and we might be able to find interesting ways in which machines can be empathic, but I suspect that even if we did, we would kind of know … If we knew we were talking to a machine, we wouldn’t feel the same connection. So there’s something very intrinsic about social connection with another human being that the machine is just not going to replace. I think that that’s useful because when you think about this concept of combining humanity with the machine, what the machine needs is creativity. It needs empathy.

It needs our humanity. And when we combine those two things together, the result can be incredibly powerful. And so not to spoil the ending of AlphaGo, but this is kind of a quotation from near the end of it where this master, who initially … Well, I don’t want to spoil it too much, but his play was actually profoundly improved through his play with the machine, and I think that’s a very clear example of what we’re talking about with this idea of the centaur, that his ability to project his own skill through this thing, and get this feedback that was so super complex and super mysterious and surprising pushed him to a higher level of achievement.

And that is, I think, ultimately what ML will do. It will take away a lot of the things we don’t want to do in our work, and it will push us to be better, and stronger, and ultimately more empowered. So I do think that machine learning is the next great leap. I think that there’s ample evidence that that is the case, and the thing for us in the game industry is that we have a tendency to ignore the next great leap until it’s already somebody else’s next great leap. So some other industry will come in, or something that we’ve perceived to be outside of the industry. Now all of these are inside the industry, but they were made by outsiders.

I don’t think that it has to be that way. I think that we can just pay attention, and learn, and make the time, and stay ahead of that next great leap instead of waiting for somebody else to eat our lunch. I want to change direction a little bit and talk about this concept of the next billion. And there’s kind of a line down there if you can see. So Google has this idea of the next billion users. How many people are familiar with that?

Have you heard that phrase before? Oh, okay. So some.

I thought maybe in the UX community you might’ve. When I kind of sat with that … So I only started at Google last December. I really couldn’t get it out of my head. Just everything about it. What do you mean, a billion?

That’s a lot of people. And we don’t really think about billions in the game industry. We think about the next billion dollars, but not about the next billion users. We don’t really think about the next billion players. So it’s gotten into my head, and I’ve been thinking about what if we thought at that scale?

What if we took a step back and said, who are the next billion gamers? I think that it becomes really interesting. When you look at the global population, I know that many people in this room are probably outside of console games, but console games, I think, arguably are the core of the video game industry, and yet the best-selling console has sold about 155 million units, which is such a tiny sliver of the global population. There is so much room to grow. And if we think about that as a design problem and try to start knocking down the barriers that prevent people from coming to video games, I think that that’s how we can get to billions.

I keep turning away from the thing. Sorry. Another way to look at this is even if you include mobile, you can look at internet access. Internet access is actually getting there. It’s getting there super fast.

I think 250 million people got on the internet in 2017, which is pretty radical. But still, the combined mobile market doesn’t scratch half of the global internet. The global internet does not scratch half of the global population.

So there are so many billions. There is more market out there than we have even begun to think to address, and that’s kind of exciting, because I think it will make games better when we have that kind of radical reach. When I said accessibility, this might have been what you thought of.

And I do mean this as well. I think that access to hardware is one part of the problem that limits people to coming to games, but I think physical accessibility is a major part, too, and I am so excited about this device. I think that it’s a stroke of genius to kind of wrap accessibility into a hub, and be able to accommodate many different affordances of control through one thing that makes games accessible to so many more people. But we don’t even really know who is kept out of games.

Maybe some people know, because they’re tracking who’s limited from the affordances of the controllers that we have. But I’ve been thinking a lot about the history of games, and the history of controllers, and the history of how we play, and how complex it’s gotten. Even mobile games, part of the reason why mobile has so much more reach is because it’s a multipurpose device, but part of it is the simplicity of that touch interface.

But I think that we can go even simpler and even more accessible, and back into the history of game controllers to think about what is the kind of controller that could reach way more people. Especially now that we have many new ways of both reaching players with controllers, and of allowing people to make new ones. So I hope that you’re paying attention to the Alt-Ctrl community, because they are just so much darn fun. Now that people can create their own controllers, and they have access themselves to the technology to build these interfaces, this has become part of the design space.

It’s reachable for us as designers now to craft the controller itself, to produce it, and to get it to a player. To radically change how people even approach games in the first place. If you look at this in comparison to the global workforce … I tried to make this graph work. You can’t see the lines. So even if I take that global population column off, and just look at the global workforce, it’s just 3.4 billion people.

You still can’t see the line for the size of the workforce working in the game industry. I think it’s amazing that there are one million monthly active users on Unity. When we think about where we came from in the game industry, that’s a ton of people.

It seems like a lot. I think they have something like 4.5 million installs. That’s a lot, but it doesn’t even register when you compare it to the global workforce. We are such a small community, and it makes you think, if we want to get to the next billion gamers, how are we going to do that if we don’t have them at the table with us? So then I think about the next billion game developers, because I think that the biggest gate between us and those billions is the accessibility of game development itself.

So we all know about these demographics, or I think probably most of us do. This is from an IGDA survey earlier this year, and we can see very easily … We see these year after year. The demographics do not reflect even the demographics of North America, much less the demographics of the world. I think that the UX community does better at this than the broader industry, and that’s kind of interesting.

There’s something more accessible about design than some of our other fields, and it means that we have a responsibility and, I think, a greater awareness to try and broaden that range to think about billions. It’s why we have more of these conversations. But when you compare especially those demographics to the perception of how people think minorities are represented in the games, you start to see a little bit of the picture of the people that we’re not reaching. Because if gamers don’t even really know if minorities are being fairly represented, and if the minorities themselves don’t feel that they are as represented as the rest of the community, how can we possibly reach them?

This is a piece of that next billion. It’s a piece of who’s not at the table. So who are the next billion game developers, and why don’t we have them? I think it does come back to design, and so that’s kind of what I want to close with.

I think that all of these things actually bridge together. We need better tools to make games more accessible, to reach more people. The game development pipeline itself is incredibly complex, and its complexity is such that it tends to have all of its knowledge stored in these little silos, and you have to have access to a silo in order to learn from the people there and have the culture passed along to you. The complexity makes it kind of viral in how it spreads, which also means that if the initial community is not diverse, it has a hard time becoming diverse, because of its inaccessibility. It’s even hard to know where to start.

There are all these different technologies, and each one is super complex, and there’s all this diversity of opinion about which one is good and which one is not. If you’re a novice on the outside, it can feel extremely intimidating. I think we can lower the bar.

I think that if you take a technology like style transfer and combine it with a massive open store of 3D objects, you have literally a drag-and-drop 3D creation tool that can make something the way that you want it very, very quickly. I’m very excited about the potential of combining these technologies for games specifically because I think that they can be so transformative in how we make games. I would also say that making games is about more than just making games. I think that video games can be the dioramas of the 21st century.

This is a game … I hesitate to even show it to you, because it’s kind of broken and not finished. It’s Time Society. When I was working at GlassLab on education, I got really excited about the potential of interactive fiction for learning. And so every teacher that I would talk to, I would ask, “Would you use a text-based game in your classroom?”

And eventually I met one that said, “Yeah, let’s do that!” And so we made this history game called the Time Society Chronicles. The thing about it is that I learned more through the process of making that game about the American Revolution than I ever learned in school.

And it’s because … Had I made a diorama about the American Revolution, I still wouldn’t have learned as much. A video game itself is an amazing artifact. You all know this.

You have to design all of the inputs. You have to design all the rules in the middle. You have to design all the outputs. That means you know the thing inside and out. So if you’re making it about a particular subject, it becomes this artifact that is kind of like the encapsulation of your understanding of the rules of that particular learned thing, which also means that if a teacher plays the game, they know what the kid understands.

So if we had accessible games that could be made very simply by learners, it would be an amazing tool for education. Way beyond even our ability to make games, share them with each other, and make money off them. So I want to share with you my utopian view for the future of video games, and it starts with a little girl in a rural village in India who says to her phone, “I want to make a game.” And the game says, “Okay, what kind of game?” She says, “Well, how about a puzzle,” and it says, “Okay.” It starts to throw all these puzzle pieces on the screen, and she says, “Well, I want it to have some rainbows, and some bubbles, and a castle, and maybe a turtle.”

And so it says, “Okay,” and it pulls in all these things. And then she says, “Here’s this picture.” She uploads this drawing that she’s done with her phone by taking a picture of it, and says, “Make it look like this.” And it says, “Okay,” and it makes it look like that. And it creates this 3D thing, which is her vision, on this phone. And then when she thinks that it’s done, she says, “Okay, publish it.”

And first it alerts just the people that are in her local district. They say, “Oh, here’s this new game,” and they start to play it, and they start to make comments on it. And then maybe she changes it and makes it better, and releases it back again. And the more comments and the more shares it gets, the wider it goes.

That’s how easy I want it to be to make games. If it is that easy, and I think we could make it that easy, these could be the next billion game developers. And I want to know the games that they would make.

I want to know the kind of games that they would play. That is the pipeline that I think gets us to the next billion, and the next billion after that, and the next billion after that. So I think this is an important thing, too, because the path to getting there is through these tools that represent the future of technology and the future of games. I think that it’s part of our responsibility as designers to think about what it means to use these tools, what the technology means. I think that’s what games have always been.

They have been the channel by which technology reaches people, and there’s a responsibility in that. There’s this quote that I kind of carry around Google from a book, The Fourth Industrial Revolution. It’s very dense. By the fourth industrial revolution, he’s talking about machine learning, and the transformation that it will have on all of society. And he talks about income inequality, which is something that we do talk about in ML pretty frequently.

He sees it as a problem we have to solve as a society, and something that we have to solve as a people in our communities at home. But this second paragraph to me is very interesting. He says, “The world lacks a consistent, positive, and common narrative that outlines the opportunities and challenges of the fourth industrial revolution. A narrative that is essential if we are to empower a diverse set of individuals and communities, and avoid a popular backlash against the fundamental changes underway.” This stuff is here. It’s happening.

And I think that this is our responsibility. I do not think that the technologists will be the bearers of the narrative. I think that’s what design is for. And so I kind of want to be able to step back and share this bigger picture of where I think games can go, because we are the bearers of technology to culture, and because design itself is the translation. The technology and the technologists are about the whether and the how, and design is about the what and the why. So that’s part of the message of machine learning.

It means that we can’t afford to look away from the future. It means that we’ll be part of the answer of whether we have this narrative or not. That’s it.

I actually don’t know what time it is. So, what do we have up here? Yeah, yeah.

Speaker 2: Hello? Erin Hoffman J.: Hi. Speaker 2: First of all, I am speechless. It was amazing. I think that’s why nobody is standing up, because it was really, really good. And I was like, how is [inaudible 00:32:29]? We are in a world where, at the end of the day, if you work in a company, it’s all about the money and the revenue and everything.

What are the things that you would advise for those of us who are working every day? A lot of people have good hearts and want to change the world as well, but how can we maybe take steps to get to this point where games are included more in daily life? What is your perspective, and what can we do now to change the future?

Erin Hoffman J.: Sure. Thank you. Speaker 2: Thank you. Erin Hoffman J.: It’s a really hard question. I think when it comes to revenue and doing the right thing, we have this assumption that they exist in opposition, because they’re two separately hard problems.

But I think that it is part of the challenge of design to solve them both at the same time, and so I think the thing that’s really hard is that when you really start to internalize the potential of the problems we could solve in the world, you can get very caught up in the problems, and you can feel despair. And so looking into that darkness is difficult, and then also looking into the prosperity and the money focus and all that, it gets you in a bad head space. So I think kind of acknowledging that and still committing to doing the work of seeing both sides and bringing them together, and finding that solution that is both, that’s not just one or the other, that that’s how you move things forward. And it’s also, I think, about committing to your own learning, and just continuing to look for that solution that has to be out there somewhere that we just haven’t found yet. Kevin: Hi.

Your talk was super interesting and really intersects with something that we’re doing. My name’s Kevin, and we’re from a company called Taunt, which was born out of machine learning watching esports. We’re taking a look at what kind of interesting experiences we can create when we have machines learning what League of Legends looks like. But something I was wondering if you could talk about is, how does Google think about interesting signals, or what is an interesting experience born out of these machine learning experiments? Because one of the things that we’ve found challenging is, how do you find the fun in machine learning?

Erin Hoffman J.: Right. That’s right. Yeah. It’s another great question.

Thank you. You’re going to get me in trouble, too. This is a question where I could say all kinds of offensive things. For the most part, Google is so committed to its technology and doing the right thing in the world that it doesn’t directly think about using games in that way.

Now, DeepMind would completely disagree with me, I think. They would say they’re doing all this amazing work with games, but it is still in the pursuit of solving intelligence, which is just this grander thing. I think that they don’t yet realize the potential of games as an ambassador to people, and so I think that’s sort of where it comes from. It’s like you have to solve their desire to do good in the world … So it’s almost like the backwards side of the question.

Where we find ourselves having to articulate … Like if Google’s curing cancer, why are we making video games? But I think we have to honestly have an answer to that, which is that there is a purpose in inspiring people, and delight is its own end. And I think it wants to understand that, too. I don’t know if I’m quite answering your question, but I do think that it has to be led by design, and specifically by game design. It must be fun first. We had this problem in education as well that the subject matter experts get so excited about their discipline that they want the discipline and they want to just kind of impose it on people.

And that’s not how design works. So you have to get them on the same page: like, no, fun is super hard. There are people that specialize in fun. It is really hard. It’s an expertise.

We need senior people, we need senior creatives here guiding this process, and that’s the bar and it cannot drop. That’s sort of where the hard part of that lives. Kevin: Cool. Thank you. Erin Hoffman J.: Mm-hmm (affirmative).

Speaker 4: Thank you very much for your presentation. My question is kind of weird. I’d like to share a statement I have, and I’m curious what you think about it. Erin Hoffman J.: Okay. Speaker 4: The statement is actually trying to answer a question, which is, okay, how can we not be replaced by machines eventually, right? Because we can see there is a trend. They’ve already developed tools just to do the models by themselves, or some crazy examples like the first-time user experience, where they can gather data from users, and the machine can actually customize it for players and really benefit our KPIs.

So that got me thinking, okay, how do we not get replaced by machines? And I have two conversations, or two stories, that inspired me, because my job is trying to start a UX department in my company. The first story actually comes from a conversation I had with a data scientist in our company who is trying to create that kind of machine learning. By having a conversation with him, I came to understand, okay, the data is actually trying to help us understand how people behave, by tracking their behavior. So to put it in a more UX-familiar mode: we have motivations, in a way, and then we have behaviors, right? But about the motivations, I asked him, okay, it sounds like a chicken and egg, because when we design a game, the rules somehow determine players’ motivations. But right now, when we look at their behaviors, or whatever we are trained to change, should we just 100% listen to their behaviors, or do we also need to think about the fundamental thing of how we change the rules?

So that’s the first story. The second is that when I try to introduce user experience thinking, I constantly have conflicts with artists, with designers, with whoever considers themselves to be doing creative work. They say things like, okay, I don’t want to listen to other people’s opinions. I know how to do the work. I know how to do creative work. Or another voice I constantly hear is, can you just tell me what to do from studying people, from learning this data or something?

So the statement I’m trying to share with you, and I’m curious what you think about it, is this: after those two stories, I realized that user research, and also machine learning, are very bad at telling us what is the right thing to do, but really good at narrowing down the scope by telling us what is wrong. Don’t do this.

We tried it; this was a mistake. It really comes down to giving us a smaller box: this is the limitation. We’ve tried other areas.

It doesn’t work, and that is where the creativity comes in. You can be [inaudible 00:39:51] about creative insight within that box, because with market trends, it’s very hard to see that a game, like Fortnite, is going to be a hit until it’s actually in the market. So I’m curious what you think about it. Is that actually a right statement? Because I have very limited experience.

Erin Hoffman J.: I think the thread through a lot of what you’re saying, you have to tell me if this is right or not, it has to do with earning the trust of the development team. Is that fair to say? Yeah. And that is a hard thing to do.

Especially designers, I think, live in this place all the time, because all you do is just say stuff and hope people listen to you. And so I think that the empathy process of listening to them and understanding what their constraints are is really important, because you can take the market constraint into account, but if you’re not taking into account their constraints, you’re giving them solutions or prescriptions for things that don’t meet their criteria. So the process of getting everybody’s criteria on the table takes time. I also think that, especially when you’re building a new organization, you have to be patient.

It’s really hard, because you want to get results, especially when you feel like you know these are the facts. But you have to be patient with that process of growing the trust organically, and growing the relationship with the team, which is another thing that the machines can’t do for us. So part of the answer is that the process of building relationships is crucial in design, and it’s super hard. Speaker 4: Thank you. Erin Hoffman J.: Yeah.

Speaker 5: Hi. Thanks for the great talk. So, one question. I work here for [inaudible 00:41:31] Vancouver in a group that does a bit of machine learning applied to animation.

Erin Hoffman J.: Awesome. Speaker 5: And some of the challenges we face when we try to bring machine learning into a production environment involve predictability. So you train this very complex system with a bunch of data, and then the system all of a sudden does something you weren’t expecting.

And stuff that sometimes you don’t want, or that you need to predict in some way, and it can be a problem especially for games that might have age ratings, for example. So the AI could say or do something that is not appropriate for the type of game, or the animation could be inappropriate for the situation, or the style might not be good for the art director, and so on. So those are the types of problems we’re facing, and we’re not quite sure exactly how to deal with them. And I was wondering what you guys are thinking about those [inaudible 00:42:18] and how you deal with that? Erin Hoffman J.: That’s really exciting that that’s what you’re working on. I feel like I have so many questions for you.

But I think a lot of what you say reminds me of the challenge of making a simulation game generally speaking, because it’s another one of those things where you’re creating all of these super complex inputs, and the player is going to do something, and you don’t exactly know what it’s going to do. So I think that all I know from that discipline, and people I’ve talked to that have worked for Maxis, which is kind of to me the king of that space, or I think Sid Meier is a little bit more narrative based in some of his games, but if you ask simulation people in games how you solve that problem, all they say is it just takes time. It takes forever. You have to just keep massaging, and keep addressing those edge cases.

I even hear this from the ML community as well: you have to create the criteria within which the ML is operating, and often you don’t do that right the first time. There’s this iteration process of kind of boxing it in so that it doesn’t do something terrible. Obviously, the more complex the inputs, the more edge cases you have to deal with, and it gets harder and harder, but I do think that this is another space that we may have better design practices for in the future. I know that the game process is just to sort of brute-force it, and take ten years to make a game, which is probably not a satisfying answer. Speaker 5: Yeah. Thank you.

Erin Hoffman J.: Uh-huh. I think that’s it. Maybe. Cool.

Thank you very much.