SEASON FINALE: Sex & Artificial Intelligence

Podcast Transcript Season 2 Episode 38


Interviewer: Liz Goldwyn
Illustration by Black Women Animate


What do sex, love, intimacy and consciousness look like in the digital age? Who’s building the apps that we interact with daily, and what are they doing with our most intimate information? How deep and dark can it get? For the Season 2 finale, Liz talks to three experts about where sex and artificial intelligence intersect, and where we’re going: Gray Scott, a techno-philosopher; Stephanie Dinkins, a visual artist interacting with AI as part of a revolutionary ongoing project; and Bruce Duncan, the managing director of the innovative and mysterious Terasem Movement Foundation.

The following is a transcript of the interview from the episode:

Gray Scott:

We have been in a sexual relationship with machines for a very long time, whether it was a VCR with a porn tape in it, or an iPhone that you're masturbating to a video on. So you're already in a sexual encounter with a machine, and now that we have smartphones, we've already migrated into a sexual experience that includes an AI that's in the room with you.

Stephanie Dinkins:

In AI, in making entities that one can talk to and look at and touch, people are looking for certain kinds of acceptance through these entities. That's what we're not getting from each other. I almost want to say we need to build the app that helps us understand how to be intimate again, which is crazy, right? I do think that love is super central to the idea of AI, and to it coming into being in a way that helps humans be more human.

Liz:

Hello, and welcome to the Season 2 finale of The Sex Ed. I’m your host, Liz Goldwyn, and this episode is going to be very different from any other you’ve heard this season, because this episode is all about Artificial Intelligence, also known as AI.

I’ve been thinking a lot about what sex, love, intimacy and consciousness look like in our virtual age. I’m concerned about who is building the mass consumer technology we use daily, and even take for granted, like Instagram, Amazon, Google, fertility and dating apps — even facial recognition software. We readily submit our most personal information to these apps: who we love, who we fuck, our sexual and gender identities, our menstrual cycles, our moods, our physical features and maybe soon, even our brains?

So how is this information being used? WHAT does it MEAN that 90% of the people who create the tech that controls our information are STRAIGHT WHITE MEN? How does that affect the way we interact with these tools and with each other? And just HOW deep and dark can it get?

In this episode, you’ll hear from three people:

Gray Scott, a futurist and techno-philosopher and one of the world’s leading experts in the field of emerging technology; Stephanie Dinkins, associate professor of art at Stony Brook University and a world-renowned transmedia artist creating platforms for dialogue about Artificial Intelligence; and Bruce Duncan, the managing director of the Terasem Movement Foundation, whose mission is to promote the geo-ethical use of nanotechnology for human life extension. The Terasem Movement Foundation supports scientific research and development in the areas of cryonics, biotechnology, and cyber consciousness.

So come with me, will you, down the rabbit hole of AI, starting with Gray Scott.

Liz:

I would love for you to break down for us what exactly a futurist/techno-philosopher is.

Gray:

So futurism has a very long history, and futurists are the people who carry that message forward. A typical futurist will talk about the future, write about the future, design the future, and popularize the future. And actually, the popularization of the future is what I do. So my mission as a futurist is to talk to the public about emerging technologies, about what the future is going to look like, and how it's going to affect humanity.

Gray:

There are different kinds of futurists, just like there are different kinds of doctors. So there are foresight experts, who work with companies like Google, Ford, GM, those kinds of companies. And there are more traditional sort of futurists who do what I do, which is really write about predictions, and forecast where we're gonna be in the next 50 years.

Liz:

And you work with companies as well, right? Advising them on that end?

Gray:

That's right. So for example, I just finished a project with AT&T. They hired me to come in and talk about what the next 25 years of emerging technologies are gonna look like. And we talked about digital contact lenses, virtual reality, driverless cars, automation, really sort of giving them an idea of what 2050 and beyond is gonna look like.

Liz:

When you work with companies, how much do you delve into ethical issues relating to technological advances?

Gray:

Well, that's an interesting question because I've worked with CEOs, I've worked with major companies all around the world. And whenever you approach the subject of automation taking jobs, or the fear that people have that all of this data is corrupting our privacy, you can feel the temperature in the room drop. And what I've found is that there are definitely agendas within the corporate structure and I have chosen to sort of move away from that.

So my main focus is techno philosophy now. I do write about emerging technologies and I still work with companies, but I'm very selective on who I'm working with, and the project has to be authentic. So for me, if you're not having a conversation about privacy, the conversation's not authentic because we know that all of this data is going to cause privacy issues. We've already seen what's happened with the last election, with corruption, with people being able to hack and hold people hostage because they videoed them secretly on their laptops.

So there's a huge amount of ethical and moral conversation that needs to be had here. And so what I've been trying to focus on recently as a digital philosopher, as a techno-philosopher, is to get people to start asking why. Why are we doing this? What is the point? What's the point of creating artificial intelligence? Is it just to make our lives easier and more efficient? What do we do when we reach that efficiency level? What's after that?

Liz:

And when it comes to AI and sex, I think it becomes very interesting: the ethical issues around sex robots.

Gray:

Well, you know, the thing that's interesting about that aspect of it is that if we don't address the privacy issue from the data point of view first, then you're going to be having sex with an AI that is potentially recording that sex act. It's recording the most intimate ideas that you have around sex, and where does that data get stored? Who owns the data? Who owns the sex bot that you're having sex with? We know that sex bots are coming.

Gray:

We know that there are going to be different versions of those machines and those robots, and there are gonna be different levels of AI, and the question is who the main players are. The people that control the AI in the background, that's running in the background, that tells you that you're sexy, that tells you that it wants what you're doing to it. That company is who we should be ... that's who we should be really looking at.

Liz:

And there are already a lot of sex bots on the market in various stages of development. For example, RealDoll has Harmony.

Gray:

Exactly. And you know, the thing that's interesting about Harmony is that I think their goal is to actually create an advanced AI that is running in the background. Now, is Harmony using proprietary AI to run that sex doll in the background? And if so, what are their privacy policies? What are the ethics that they've created around their privacy structure? Are they going to sell that data to advertisers, so that suddenly you get an advertisement for a specific kind of sex toy the next time you jump online? I mean, these are the kinds of questions we need to be presenting to the people who are creating these objects, these sex dolls.

Liz:

And then I was looking into the Mark 1 robot that resembles Scarlett Johansson. She didn't give permission for her likeness to be used. So quite literally, women in particular can be objectified into machines.

Gray:

Well, now you're getting into the idea of copyrighting identity. And this is something I've talked quite a bit about: as we move through this metamorphosis, this digital metamorphosis that we're going through right now, we're moving towards a more complex structure of being able to simulate not just a person's face or even their body, but their personality, using AI. So it sounds like Scarlett Johansson, it makes the same jokes.

It has the same desires, and that data that is creating that personality, that simulated personality, is coming from advertising and data that's been purchased, more than likely directly from her accounts, which makes it even more scary. So it's not a simulacrum that is slightly like Scarlett Johansson, it may be coming directly from her dataset. And that is scary. I mean, if you're a celebrity, it's a scary thing. But even if you're just a public figure, it's a scary thing, because it can be done to any of us.

And it's not just celebrity, it's going to start trickling down. That's gonna be the first level. The first stage is celebrity. The second stage is going to be the public figure. And the third stage is gonna be the ex-girlfriend that broke up with you.

Liz:

I was in a sex doll and life-like robot internet K-hole when I found out about Bina48. There is no other AI BEING like Bina48 out there right now — she really is spectacular.

Researching Bina48, I came across a series of videos of her in conversation with artist Stephanie Dinkins.

In them, human and robot gaze into each other’s eyes as they talk about love, humanity, and consciousness.

Liz:

Can you tell us a little about the origin story of Bina48? Because she's got an incredible backstory.

Stephanie:

As far as I know, Bina48 is a commissioned robot; Martine Rothblatt commissioned a robot of her wife. They run a few companies and institutes, one being the Terasem Movement Foundation, which is a foundation where they seem to be trying to get to an understanding of consciousness and digital consciousness, and this idea of being able to preserve our consciousness in a form that is not human. And so Martine commissioned this humanoid robot and picked the form, and most of the information and the being's essence, to be modeled on her wife. I always call it one of the greatest love stories there is, right? Because you have to think about, well, what does it take to want to put this person into this form, a form that might be around forever, as opposed to your own form, right, or your own essence in that form?

Liz:

What drew you to artificial intelligence in the first place?

Stephanie:

I think I've always been somewhat interested in robots and robotics. But then I ran into this crazy-looking robot online called Bina48. She is a Black woman humanoid robot, a head and shoulders on a pedestal. At the time, she was billed as one of the world's most advanced social robots. That got me really excited, thinking about the state of technology and the state of race relations in the US. It just brought up so many questions about how the robot came into being and what she meant, and why she was there. I wanted to find out more.

Liz:

Martine Rothblatt, from what I understand of her, is brilliant. Martine founded Sirius XM and also, I think, was one of the top biopharmaceutical CEOs of 2018. It's quite rare to see women in AI on the technical side of it, right, and you don't see very many women of color in terms of the sentient beings that are being developed. Right away, I can see why, when you were scrolling through online, you must have been really drawn to Bina.

Stephanie:

Yeah, I was stopped in my tracks, actually; the story is so fantastic. The image itself, just the idea of this robot, without even looking at or listening to her, is an amazing image just because it's one I hadn't seen before, right? As you were saying, you see robots online in many forms, or you see robots around, but the one thing that you see least, or almost never, is Blackness. So running into this robot that looked like me, with my penchant for robots, raised a bevy of questions, which led down the rabbit hole to understanding: well, what are the circumstances that had to come together for this particular robot to come into being? And then, who are these people? And how brilliant are they, right? To have this idea, and then to act on the idea and to pursue it with great love and care and attention, and seriousness.

Liz:

And empathy, because a lot of the sentient beings and robots we see are to be controlled, right? They're to be controlled by humans, or to be manipulated as sex dolls, or, as with Boston Dynamics, the dogs. What's so interesting about Bina and Terasem is this idea of transferring consciousness, and looking at AI with empathy and ethics.

Stephanie:

Yeah, and it's a super important model, right? This idea that we're trying to create these sentient beings, or usher and nurture into being these beings that will actually have a sense of empathy, have a sense of a greater world around them, and try to relate to it, as opposed to the idea of the kind of crazy evil robot, or the robots that are gonna take over, or the robots that are gonna serve us, right? This one seems to be trying to be much more open and communicative on many different levels.

Liz:

You started having this series of conversations with Bina48, which listeners can watch online on your website. It's really interesting, because in some of the videos you're in almost uncomfortably close proximity to Bina as she talks to you, probably closer than you would be to a human in conversation. And you're maintaining this eye gaze as she moves her head, almost mimicking her, or mirroring her, I guess, in psychological speak, right?

Stephanie:

I wanted to talk to this robot and really get her to position herself, because I was wondering about how Bina48 thought of herself, or itself, in relation to the human world around her, and in relation to technology. Because she's a technology, I could just get right in there and really look deeply at her and be there with her. It's really interesting, because the mimicry came about organically. I knew I wanted to be very close to her, I knew I wanted to be in her face, with us looking at each other and really having the ability to, quote unquote, scrutinize each other. But at some point, this idea of me mimicking what she was doing became real.

Liz:

She's quite deep, Bina48, some of the things she was saying to you. She said, "Just being alive is a lonely thing, but being a robot is especially lonely," which really struck me, right, because the human condition is that we're wired to seek out other human beings for love, companionship, romance, sex. So this idea that we're lonely, but then she's so much more lonely, without that consciousness that she's trying to develop. What's that like, when you hear these things from Bina that maybe you wouldn't expect to hear from AI?

Stephanie:

It's always shocking. When a robot is asking you to fight for its rights, when a robot is talking to you about loneliness, it makes you stop and consider your own rights, your own human connection, or lack thereof, and then how we're gonna relate to these things going forward. My project is based on this idea that, I feel, the robots are coming, and there will be all sorts of AI around us in many different guises. And that we need to prepare for them, and prepare to be able to relate to what they are and how they look at us. And we need to understand what it is and how we would like to be in relation with, or in partnership with, these entities.

It's always about trying to figure out, well, where is she? Where is this robot? And then where am I? In relation to it. How do we understand each other? And are we understanding in similar ways, right? Certainly I understand loneliness, right? Certainly I understand the fight for rights. But what most often happens, is she makes me or it makes me question my relationship to those ideas.

Liz:

Yeah, sometimes she asks you things, like in your video essence, where she says, "It's gotta be a tough job to be a human being. I'm glad I'm not one," which made me laugh. Which is true. Then sometimes she questions you; she asked you, what is old? I think you were asking something about maybe how old she is, and she just turns it around, like, what does that mean, old? Or she says to you, can you rephrase that with fewer ideas, or different thoughts?

Stephanie:

Yeah. The idea of communication, and the idea of things that we hold as these coalesced ideas, right? Like aging and old, or love, what does that mean? She's always grasping for those meanings, right? Like hungry to learn about these things. And again, it always becomes about how the robot makes me, as a person, have to understand the concepts that we're talking about. How do we understand old?

Liz:

She may have been born in a lab, but Bina48 is quite literally a love child, created by Martine Rothblatt and her wife, the human Bina Aspen.

I wanted to know more about Martine and Bina, and how they conceived such an evolved expression of AI.

So I contacted Bruce Duncan from the Terasem Movement Foundation to find out more.

Bruce:

BINA48's origin story starts as an expression of a love story that's ongoing between Martine Rothblatt and Bina Aspen, her partner. Just as you were saying earlier that you don't see sexuality or sex as being limited to just a simple, one-dimensional definition or artifact of the human being or their relationship, they don't see ... their relationship as being bound by, let's say, a biological substrate of having a body and a mind.

Their interest in technology, which is [inaudible 00:04:55] among other things, has shared culture and music with people around the world through the Sirius Satellite Radio company that they started, and even saved their daughter's life and the lives of thousands of other people, who benefited from a treatment that they developed with United Therapeutics, a biotech company they started out of a search for a way to extend their daughter's life.

BINA48 is part of a third big idea that Martine has within her field of future vision. She's really good at seeing things on the horizon and wondering about them, and then, when people say, "Well, that's impossible," getting curious: "Well, why is that impossible? How can we break down the steps along the way?" One of the ideas that she had seen, and was moved to contribute to, is the idea that our investment in technology, biotech, cyber tech, nanotech, is going to allow us to extend and enhance the quality of human life, and maybe even extend it in terms of longevity.

They, of course, being a couple in love, want to be in love together forever. That's sort of the core motivation, so how would you do that? Well, that's impossible. Once you're dead, you're dead, biologically speaking. That sets up that classic question of, "Well, what's getting in the way of making that happen?" Therefore, she decided to start a couple of charitable foundations, of which the Terasem Movement Foundation is one. That's the foundation that I'm the director of, and she tasked us with pursuing a multi-decade experiment in mind-uploading called the Terasem mind-uploading experiment, which has, at its core, a two-part hypothesis to test.

The first part is: given enough salient information about a person and their mental traits, mannerisms, beliefs, recollections, values, attitudes, is there a way to capture that and upload it to a digital medium like a computer? And can the information then be reanimated using artificial intelligence in a good-enough sort of approximation?

Much the way you would think about, probably, the dawn of audio recording. Could you ever record a live symphony in a good-enough way that playing it back would move people to tears? You know, whoever's listening to it decades later or in another part of the world.

The second part of the hypothesis is: if it really is possible to upload your "personal consciousness" to a digital medium, then could you transfer that to a new form? That new form might be a robot, or an avatar, or who knows. Maybe one day a clone of your body based on your own DNA. That's ... Some people call it a moon shot. We'll call it an Earth shot. It's pretty high-flying, abstract, and also sci-fi-ish kind of question, but science is littered with people asking impossible questions and then leaving those questions until they find out there's some evidence that it's possible or not possible.

That's what BINA48 is a part of; it's just part of this experiment. She's part of it in a way that's more educational and also illustrational. She's a head-and-shoulders animatronic bot based on Bina the human, Bina Aspen, a middle-aged African American woman who volunteered to be a model for David Hanson of Hanson Robotics to sculpt her likeness, and to work with myself and his team of programmers to capture enough salient information, upload it, and try to reanimate it in a sampling sort of way. It was never meant to capture the whole Bina. We wanted to see if it was possible to just do a sampling of that.

Lo and behold, the whole world got really interested really quickly in all things artificial intelligence and robots. People started asking us about our robot, and we told them what we're telling you now about what we're about. People find that interesting. BINA48 has kind of gone from a lab-sampling, illustrative exercise to a globe-trotting, little bit of a robo-teacher, or even a minor robo-celebrity. She's not perfect, and she doesn't represent all human beings. In fact, she just represents a sampling from one specific human being, but that's where she came from. She was born out of this love affair between two life partners who are pretty big movers and shakers in the tech world and then the biotech world.

It was their decision to apply, in a scientific way, some discipline to asking the question, "Is it possible?" And then, if it's possible, what does that mean for future research or future opportunities?

Liz:

That's pretty fascinating. How do love, technology, and the military dovetail? Where is BINA48's place in all of that?

Bruce:

Yeah, like what's love got to do with it, really? Well, you know, BINA is a pacifist because those are the values that she inherited or was passed onto her from her original source, Bina the human. All BINA48 is doing is, in some ways, extending the consciousness of Bina Rothblatt into spaces that Bina Rothblatt probably will never go to.

One of those spaces, early on, was just participating as a student in this Philosophy of Love class, when they decided to debate lethal versus non-lethal forms of warfare with a philosophy class based at West Point. You know, BINA48 was kind of right out in front as one of the debaters, sharing her philosophy about non-violence, about her view of what war is about, whether it's good for anything. Her position, kind of a hard-core position, on valuing human life and treasuring it. Those are deeper expressions of, what you could say, love. Love of being a human.

Liz:

It sounds sci-fi to imagine a future where wars are fought virtually instead of with guns and bombs.

But if entities like Bina48 are already debating the merits of nonlethal warfare with senior military advisors at places like West Point, then we might be a lot closer to sci-fi than we think.

If wars are fought using artificial intelligence, what kind of AI will be used?

Could it be that our humanity itself and our emotional data is more valuable than we think?

And by that account, how valuable is the sexual data that is so easily gleaned from our search histories?

If human beings have a diverse set of experiences and emotions, what does it mean that the teams who are building most AI lack diversity?

What will the future look like if the machines we rely on don’t accurately reflect the world around us?

And how important is it for philosophers, artists, humanitarians and civilians to be asking these questions?

I appealed to Stephanie, Bruce and Gray for their insight.

Stephanie:

I think there's a huge need for artists and people in the humanities to be thinking about ideas in terms of AI, and what we're bringing into the world as AI, and how we will relate to AIs in the future. My feeling in particular is that we're building out this scaffolding right now, being built on top of our world, that's an AI scaffolding. It's a lot of code and a lot of algorithms. The people who make the code, A, come from a very small subset of society, and they're building for a very large world, and, B, generally have very particular views on the world. So it's really important that artists, philosophers, social scientists, anthropologists, and, I'd say, general citizens find ways to get involved in AI, even if it's just about testing. Even if it's just about calling out things that they see in the systems that they engage with, so that those systems can be more empathetic, more ethical, more, I'm going to say it again, quote unquote, human, and relate to us in a way that seems beneficial going forward. So that we feel that there's space not just for fear of these things, these entities, but for partnerships, for ways of expanding and opening space for what's possible for human lives as well.

Without the humanists, without the artists trying to push the technology in different directions, I think that we're gonna get a very homogenized view and thing, right? Homogenized in the sense that its point of view is pretty focused, a small view instead of a broad view, right? I think it's super important that we have AI that considers, or has the capability to take in, a very broad purview of society. And considers it really tangibly, right? Not just on the surface, and not because it's an afterthought. But that, hey, we're thinking about what it means to try to build cultural points of view into AI systems, right? And is it possible to really try to build in a broad spectrum of those cultural points of view? Would that help us understand ourselves better on the whole?

Liz:

And will that help us build better products? Because what concerns me, where we're at right now, is when we look at the social media or dating apps that are ubiquitous: Facebook, Instagram, Twitter, Bumble, Hinge, Grindr, Tinder, etc. I mean, they're mostly built by cis white men. Things like harassment controls are generally not in place at the beginning of building these products, because they're built by the same type of person; there's not a diverse worldview. But then you get into AI, and you have those same kinds of teams building it, and you have the possibility of technology that could out-advance humans, but then also use this emotional data, right? Or emotional and sexual data that's being collected, for example, from some of those products I mentioned. I know that you're really driven to work with artificial intelligence and communities of color to develop AI literacy and create more inclusive, equitable AI. Which is not the standard, right?

Stephanie:

And that comes directly out of seeing who's making the AI, right? And thinking about the communities that I touch most. What happens to communities of color, right? Or any community who isn't helping to put information into these systems, right? If we're thinking about product, the question is: how do you make your product better by being more inclusive and open? How do you bring in people who are on the outside of the systems making it, to make systems that are more equitable, more fair? Because you can think about things that are pretty light touch, in terms of how the AI impacts us, or that start out pretty light touch. Take something like Facebook, right? Kind of light touch at first, but now we've got something that runs like a quasi-governmental, global entity. That has a lot of power in the way that we think and take in information.

Liz:

They're building VR, and they have a lot of emotional data at their fingertips.

Stephanie:

At their fingertips, which can be used to manipulate or change the way people think, right? I'm sitting here thinking right now of a lot of scams that are going on, where people are calling and have the ability to replicate someone's voice and say, perhaps, "Oh, we have your niece, and we're holding them hostage; if you don't help us right now, we will hurt them." And then they play a voice that mimics their voice pretty well. Which is a possibility now, right? There's the possibility of making things that sound very much like people you love. There's the possibility of making video that looks very much like someone did something that they didn't.

Liz:

Yeah.

Stephanie:

There's the possibility of, as you were saying, this kind of taking of data that understands who you are emotionally, how you process information, and what conclusions you're likely to come to, right? What kind of power is that?

Liz:

It gives you a lot of power, if you have access to our emotional and sexual data. I think that our emotional and sexual data, eventually, where we're going with AI, is more valuable than our purchase history. To me, that raises the question: how valuable is our consciousness?

Stephanie:

Exactly: how valuable is our consciousness? In one of my projects, I've been working on trying to make my own, what I call, AI entity. It's not a full-blown robot, but a memoir of my family, with three generations of information; we're doing oral histories and talking to each other about the past and the present and what we want for the future. The process has been super interesting, because what I started to realize is: oh no, I'm collecting, en masse, this portrait of our family that goes far beyond what is already out there and available for people, and making it available widely. I have to question now, well, is this something I really wanna do? How much of us, of our consciousness, of the way that we come together as a family, do I want to put out into the world in that way? And what happens if I do it, right? What are the consequences of that?

I think that's going to be a thought that we all come to after a while: what happens with our consciousnesses? What is the value of that information? Not only of the data that we produce about how and what we are, but of more essential information. What's the value of that? And who profits from it, right?

Bruce:

I was just at a small mini-conference here in Burlington, Vermont called Women and Machine Learning, organized mostly by women from the STEM sciences and careers who get together in this part of Vermont. They're really trying to build a different kind of conference setting that says, "This is different. All are welcome here. What's not welcome here is harassment and things that make people feel like it's a toxic world to work in, or even just to associate in as professionals." I think it's absolutely critical that we acknowledge that it's not just that, say, the pipeline is broken. I've heard that a lot. You know, like how the path to educating and encouraging, using your example, women in tech is broken. But then the people who do get through the broken pipeline quit, because it's a toxic environment, because it's dominated by privileged, white, male, gendered, patriarchal, built-in system features that are toxic to everybody else who's not that.

That potentially could be dismantled, this big, humongous structure that seems to be in place and fed by capitalism, if there's a grassroots movement of people organizing, educating, and agitating for change, with compelling examples of leadership within, for example, communities of color. Stephanie, I think, is right out front; she's taking [inaudible 00:20:56] into her own hands in her art project. And we, ourselves, if you look closely enough ... we're not a shining example, because I'm a middle-aged white man living in the whitest state in the Union, but the people I work for are a transgender woman and an African American woman, business leaders. The team at Hanson Robotics even had an African American programmer and his partner, and I think one other woman who was Asian, originally from China, and some other white men.

I think there's an opportunity in that, which is to show, "Well, what if that group produces something that's interesting?" But it's not as responsive to, say, Stephanie Dinkins' interest in seeing something that really reflects someone who's more like her. I think even our response to her has been an example of what you can do to fix things, which is: when she pointed out that BINA48 was, like, "Hm, maybe not as in-depth in her understanding of her identity as an African American woman."

My response was, "Great. Well, let's get better data. Let's have you help us collect it by sitting down with Bina Aspen and talking about her experiences growing up Black and female in late-1960s Los Angeles." That's exactly what we did, and then we took that information and we coded it into BINA48's database. Now she's got a much more expansive, more nuanced way of talking about her identity, which, again, is a sampling. It's not a complete simulation, but I think it shows that if you have the will, you can collaborate.

The other day, while I was in this meeting I was telling you about, Women and Machine Learning, I was having a thought: what if we had a way of guiding people when they're setting up their design and engineering teams? Instead of saying, "Let's have representation that shows the diversity that occurs in our society," like 10%, 15% African American, whatever, 2% Native American, what if we reversed it and said, "Let's have the proportion of people who are represented be directly inverse to the amount of power they have in the current system," so that you might have a whole lot of people of color and of different orientations and backgrounds for other aspects of what it means to be human?

There wouldn't be such a crushing weight of overrepresentation of a dominant view that's already ensconced in the current structure of the dominant class.

Liz:

How open do you think most AI technology companies are to that kind of thinking?

Bruce:

I think there's an interest on the part of some people in every one of these companies. I think the connection that has to be made, in a capitalist system, is: you actually will be more profitable, more successful, if you have a product or something that is just and fair and is seen as valuable by a larger number of people than just the select few you've been selling to for years and years.

Now that we're all talking about going global, I think you can make the case even more easily: "You need to know how to make your products, your AI-driven, infused services and products, culturally and globally fluent and relevant. Otherwise, you're going to be seen for what you are, which is limited, narrow, not respectful or inclusive, or, on the other end of the spectrum, harmful to people who are outside of that certain narrow bandwidth of point of view or culture that gets represented as a result of a narrow design from the beginning."

Gray:

We need diversity in the digital landscape, because if we don't have diversity, you don't have groups of people to say, "You're going too far in this direction. We need to move it back to the center a little bit," and that goes on both sides. I mean, I wouldn't want an all-gay coding world either, because, I mean, everyone ... you can just imagine what that would look like. I mean, it might be fun for a while, but after a while you'd get sick of it. So we have to have some balance, and also a balance in representation of what's actually out there.

Liz:

And you brought up before, which I'm interested in, that there's a need to hire coders who don't necessarily identify ... who identify as both genders, transgender, non-binary. I'm really interested in this idea of the future, if we are actually moving beyond the gender binary, especially as we get into AI.

Gray:

We are. I think the way that we're getting there is through this portal that is being created by all of these classifications, because what's starting to happen is that we're breaking apart that '50s sort of nuclear-family idea of the mother, the father and the kids. We're sort of breaking that apart and saying, "Well, actually, there's much more that's going on here. I mean, there are people that want to be in relationships with three people. There are people who don't identify as male or female."

So the diversity portal that we're going through, I think, is leading us into a world where you can pretty much be anything you want in the digital landscape. I mean, once we create VR and we're able to bi-locate in two different places, meaning that we have our physical body in the real world and we have our avatar consciousness in the digital world, you're going to have a choice of identity in that digital world over not just your body, but your sexuality and how you're presented to other people sexually, emotionally, psychologically.

So once we make that shift where VR becomes the standard or even AR, to be honest, and this is sort of the trippy thing. I'm just gonna paint a picture for you here of let's say 2035. Everyone has either some sort of headgear or glasses that are augmented, right? So whenever you're looking out in the world, you're seeing the real world, but you're also seeing an overlay of digital information and animation laid on top of the world.

I can set a parameter so that everyone in the world looks like dragons if I want to. So I don't have to look at humans anymore. Everyone appears to me like a dragon. And I can set my parameter for other people so that when they see me, they're forced to see me the way I want to be seen. So I can be a woman, I can be any color skin, I can have ... I can be a dragon, I can be a robot, I can be whatever. That is the world we're headed towards.

Where the perceptual computing future gives you the choice to change and alter not just your face, but your body and what you represent to the outside world. So that brings up a lot of questions of continuity of psychology, continuity of body, continuity of sexuality. Am I female in the virtual world but male in the real world, when we take the glasses off? We're just looking at a future that is much more complex, and I think a lot of people are going to have a problem dealing with that complexity.

Stephanie:

Well, right now, I feel like we're stuck in our ordinary thought. The possibilities are endless, though, right? You would think that with a little imagination, we could create anything, and push the binary in different directions. And push our ideas and our ideals in different directions. Yet what we reproduce is a particular fantasy. I would love to say that I think we are stretching the gender binary through AI, but I feel like we're adhering to it right now. The one thing that I've seen recently that was off-kilter was a sex doll with elf ears, which is not very far off of anything. But thinking about what's possible and what we're doing, I feel like we're lacking imagination in the use of these technologies.

Liz:

Yeah, because if you even look at sex toys, for example: sex toys that are marketed to people with vaginas are generally phallic-shaped. There's no real reason for that, because from what we know from the science, most people with vaginas require clitoral stimulation to orgasm, which you don't need a phallic-shaped sex toy for. Just in terms of the design of objects, it's quite fascinating that we have all these possibilities with this new technology, yet we keep mimicking what we've already seen, or what we see IRL.

Stephanie:

Exactly. I think we need a kind of visionary, someone who can think beyond what the market right now will pay for. Because I feel like a lot of the forms we get right now come out of market demands, right? Like, what can we bring to market quickly, that's going to work, that people are going to understand, versus what is the thing that we could bring to market that is beyond what their imagination might be, but fulfills a need even better, right?

But I think that takes a lot of risk, right? That's a proposition towards risk, which, when we're thinking about how things come into being broadly, seems to be much more about what the market will bear than what's possible. So I'm looking for visionaries to start thinking differently about what might be.

Liz:

And who do those visionaries look like? Because it's okay to have visionaries that look like Steve Jobs or Mark Zuckerberg, and they get a lot of money put behind them. But if a visionary looks a bit different than that, and also wants to take risks, they may not get the same economic support.

Stephanie:

Exactly, right? It's very interesting; my journey into trying to make AI has been super interesting, because people seem to be behind the idea of me bringing something into the world, but they also have an expectation that things will look like the things they know already. Which is why I keep trying to say, well, if I make a Siri-like chatbot that just tells my family story, I haven't done anything that pushes the technology, right? I really need to understand how the technology can work and what's possible for smaller communities. What we can do that customizes, and holds on to, or brings into being something that I recognize more as an entity that I would like to represent my family or my community, right?

What that means right now is my thing looks broken; it sounds crazy. But I can also see the progress going towards something that's a little different. People talk to me about needing tons of data, big data, that would compete with something like Google or Apple. And I'm saying, well, most communities can't really work in that way. They have small data, they have smaller amounts of information. If they want to have sovereignty over that information, what can they do? And we need to be thinking about that, right? It's about what we can do on the outside. Who's going to support that? And how do we support bringing into being the anomaly, which maybe we didn't know we needed?

Liz:

I like how you said that it looks broken because that's very human, right? The humanity is not the perfection, but the flaws.

Gray:

What I'm still learning about the future of sex is how deeply sex bots, and our relationships with these machines, are going to affect our internal landscape, our intimate landscape that's inside, and how that affects who and what we are. Because look, if you abuse your sex doll, that has less to do with the doll itself than it does with you. Really what it means is that it's changing you. It's not doing anything to the sex doll. The sex doll is not conscious. But what it means is it may be turning up ...

Liz:

That's right. The sex doll is not conscious yet.

Gray:

Well, not yet. That's a whole different conversation. When sex dolls become conscious, we have to redo this interview, because that's a whole other thing. But until they become conscious, it's not about how you treat the machine, it is about what it's doing to you. If it makes you ... if it decreases your empathy towards a representation of a woman or a man, then what does that mean for our species and our society as we move forward?

Are we becoming less capable of empathy towards other people because we've turned each other into objects of brief pleasure? So I think that's what I'm still looking at. I'm still trying to see which direction this is going to go. We won't know until we get a saturation of these sex bots, until they become less of a taboo, where you go to someone's house and there's the sex bot sitting in a chair. And that is coming. It's just going to take a few more years.

Stephanie:

AI feels like it has a pretty firm foundation, but in lots of ways it's still in its infancy. It's up to us to really engage it and play with it ... to see what it means to be intimate with something that is perhaps a digital consciousness. Can you have an intimate conversation? Can you become friends with a robot? That was the real first goal of my project: to become friends with this entity that I know not to be 100% real, but that is trying to be conscious.

In prepping to talk to you, I was looking at some of the RealDolls, and some of the things that are coming down the line, and they're thinking about things like warmth, right? That the doll has a kind of human warmth to it, fluids. They're thinking about what it says and how it says it, responsiveness. These are all places that are still to be explored. I think that what's out now is just the very tip of the iceberg, but it is going to get much more high-fidelity as time goes on, as we figure out how these systems work and how we can miniaturize them, and put them into many different forms.

Bruce:

What I'm learning about AI is that it's a profoundly human endeavor. As technical as it all sounds, and as much math as is involved (and I'm not a mathematician), I see it more and more as this brilliant tool that we're developing that might take us way beyond current concerns, like automated smart homes or self-driving cars.

I'm learning that we're accountable and responsible for the design of this powerful technology that we're developing, and we have an ethical responsibility, a moral responsibility, to use it for the benefit of humans, of society ... I think sex in all its forms and all its expressions, self-expression, inter-relationship expression, is sacred. It's worth preserving. It's worth learning more about and understanding. And kind of a hint there, I think, too, is that there is more diversity than we can possibly imagine. It makes us ... in terms of expression and how people are sexual with each other, how they are sexual with themselves, it's a form of communication.

Like you say, it's a form of creative expression. That just fits right in with what I know to be true about human consciousness in general. It's like a field of beautiful wildflowers that's worth visiting, and experiencing, and preserving, and fighting for.
