
Audio Cone Transcripts: AI

CAN AI SHAPE THE TRUTH?

Halsey Burgund

Sound Artist and Creative Technologist

Co-Director, In Event of Moon Disaster

“Can AI shape the truth?”

We'd been thinking a lot about artificial intelligence and what that means in the world of media creation. In our case, we wanted to create a completely synthesized voice that sounded like President Nixon, and that's not simply a matter of manipulating audio parameters. It requires an AI that knows which voice it's listening for and which voice it's going to convert it into.
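As an illustration of what that kind of voice conversion involves, here is a minimal Python sketch. It is an editor's sketch, not the pipeline used for the film: "conversion_model" is a hypothetical stand-in for a trained source-to-target model, while the feature extraction and the rough Griffin-Lim resynthesis use real librosa calls.

# A minimal sketch of speaker-to-speaker voice conversion, assuming a
# trained source-to-target model ("conversion_model" is hypothetical).
import librosa
import numpy as np

def convert_voice(source_wav: str, conversion_model) -> np.ndarray:
    # Load the source actor's recording and extract mel-spectrogram features.
    # Simple parameter tweaks (pitch, speed) would not capture a new timbre.
    audio, sr = librosa.load(source_wav, sr=16000)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=80)

    # The trained model maps source-speaker features to target-speaker
    # features, preserving the words and their timing.
    target_mel = conversion_model.predict(mel)

    # Resynthesize a waveform from the converted features. Griffin-Lim is a
    # crude classical vocoder; production systems use a learned one.
    return librosa.feature.inverse.mel_to_audio(target_mel, sr=sr)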

My work prior to this project has largely been focused on creating these aesthetic experiences that really focus on the voice. And the reason I love the voice is because, just the sound and the way people speak, it's kind of a window into who they are.

And then, I get to this piece, and I'm like, hold it. What am I doing? Am I like going back on all these years of artistic practice that I've had? And I think the answer is no, I'm certainly not. This project really did get me to think a lot about what authenticity is and what it will be in the future, as more and more of this technology gets out there.

My name is Halsey Burgund. I am a sound artist and creative technologist, and I am the co-director of the short film In Event of Moon Disaster.

Matt Groh

Research Scientist

MIT Media Lab, Affective Computing group

“Can AI shape the truth?”

I work on how AI models could help us improve the way we detect misinformation, but also how they could potentially, like, mess us up.

It's a tricky question. How do we move forward to know what's real, when you can manipulate video to make it look just like real life? And I think one of the things that I like to take this back to is thinking about how you tell if someone is lying. Like, let's say they claim they saw an elephant flying across the Mass Ave bridge. And so maybe you're going to want some video. Well then, you want to ask yourself, okay, are the shadows correct? And maybe we get to the point where the visual effects studio is just perfect at making this happen. Well then, it's like, ah, well, the weight of the animal and the size of its wings don't really make sense to me.

And so you have to use all these other conceptual kinds of things, and reasoning about the world. And that's what we're going to have to do with a lot of the deepfakes in the future: not just treat video as the gold standard for evidence, but think more critically, and take video as just another form of someone having told us a story, rather than shown us some deep, true, obvious evidence.

My name's Matt Groh. I'm a PhD candidate in the Affective Computing group at the MIT Media Lab.

Matt Groh

Research Scientist

MIT Media Lab, Affective Computing group

“Can AI shape the truth?”

I work on human-AI collaborative decision making. And so basically what that means is, how do we build AI systems to help people make better decisions?

A couple years ago, at the start of my PhD program, we were talking about coming up with research projects. And one of the things we talked about was, well, the internet seems to be this place where people are always adding stuff to it. What if we could figure out a way to delete things from the internet? Well, what if we combined two different kinds of computer vision algorithms? And so, one was this object recognition kind of thing, and another was this thing called in-painting.

And basically what it is: you take away some pixels, and then you try to fill the pixels in with what makes sense for the scene. So we're like, oh, well, if we combine those two kinds of things, we can actually create something that can erase objects from images.
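To make that two-step combination concrete, here is a minimal sketch in Python with OpenCV. The object-recognition half is only stubbed out (a real system would get the mask from a segmentation model; a hand-drawn rectangle stands in for it here), and OpenCV's classical inpainting stands in for the learned in-painting described above.

import cv2
import numpy as np

def erase_object(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # mask is 255 on the object's pixels and 0 elsewhere. Dilate it slightly
    # so the fill also covers the object's soft edges.
    mask = cv2.dilate(mask, np.ones((7, 7), np.uint8))
    # Fill the removed pixels with values that make sense for the scene.
    return cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)

image = cv2.imread("street.jpg")
# In the combined system, this mask would come from the object-recognition
# model; a hand-drawn rectangle stands in for that step here.
mask = np.zeros(image.shape[:2], np.uint8)
mask[100:220, 150:320] = 255
cv2.imwrite("street_erased.jpg", erase_object(image, mask))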

The reason we want to share this with people is, if people know that this can be done, then people can also readily say, oh, that's that trick. I'm aware of that trick. And I love magic. But I also love explaining the details, and then seeing what's the frontier, what are the things that we can't explain, and then going to work on those.

My name's Matt Groh. I'm a PhD candidate in the Affective Computing group at the MIT Media Lab.

CAN A MACHINE BE BIASED?

Kevin Smith

Research Scientist

MIT Brain and Cognitive Sciences Department

“Can a machine be biased?”

The question of, what is the most dangerous area in which algorithmic blind spots will affect society? The answer, to me, is everywhere. Knowing the data and the algorithms used by big companies like Google, like Facebook, is necessary, but not sufficient, for understanding how these systems work. But if we had that information, at least that gives us the capability of testing for blind spots and saying, hey, I've looked at this data, it looks like there are more white people in this vision data set; there might be a problem with that.

Any of the systems we use are going to be a function of the architecture that's used to design the system, the data that's used to train the system, and how it's used in a societal context. And until we really understand how all three of those pieces fit together, there's potential for, well, bad societal outcomes.

Most of AI is a black box and impenetrable. We can study the effects of that, if we know what goes into that black box, even if we don't necessarily know exactly how it works.

I'm Kevin Smith. I'm a research scientist in the Brain and Cognitive Sciences Department, and I work in the Computational Cognitive Sciences Lab.

Matt Groh

Research Scientist

MIT Media Lab, Affective Computing group

“Can a machine be biased?”

We’re already living with AI. We get predictions of what kind of movies we should watch on Netflix. And actually, a lot of doctors are already using clinical support tools. So, it's constantly around us. And the question is, when should we rely on our training and when should we rely on a model? And the thing is, you've got to think for yourself. Models can actually be quite effective, and much, much better than random, but that doesn't mean they're perfect, because they can't take in every single dimension of context. A good example is emotion recognition. If you see a photo of someone smiling, you might think, oh, this person's happy, but maybe there's a social context where they're smiling because they're really feeling anxious.

And so, because almost all models are really fine-tuned and really narrowly focused, we've kind of run into these bias situations. And that's where people have to recognize, okay, we got to transcend what this model has been doing and think for ourselves and say, okay, what's the big picture?

My name's Matt Groh. I'm a PhD candidate in the Affective Computing group at the MIT Media Lab.

Valdemar Danry

Research Assistant

MIT Media Lab, Fluid Interfaces lab

“Can a machine be biased?”

My research is really centered around how we can use artificial intelligence to help us with different cognitive tasks, such as learning and asking questions and dissecting information.

I think if we are to use AI as a tool, people should be able to question and always get the information about why the AI or the machine learning model is making the recommendations that it's doing, or why it's acting upon you in a certain way.

For instance, in the Wearable Reasoner project that we did, we really looked into just giving people a simple result on a piece of argumentation, versus giving them an explanation of why something was classified a certain way. Then you can start to have a discussion. And having an explanation, and being able to question the output of this machine learning model, is something people found to be more helpful. They felt like they understood more of what was going on in the system.
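As a toy illustration of those two conditions (this is not the actual Wearable Reasoner code), consider a classifier that can either return a bare verdict on a statement, or return the verdict together with its reason, which is what opens the output up to questioning:

def assess_argument(statement: str, with_explanation: bool) -> str:
    # Toy heuristic: call a statement "supported" if it cites any source.
    evidence_markers = ["according to", "studies show", "data from"]
    hits = [m for m in evidence_markers if m in statement.lower()]
    verdict = "supported by evidence" if hits else "no evidence given"

    if not with_explanation:
        return verdict  # condition 1: the bare result
    # Condition 2: the result plus a reason the wearer can push back on.
    reason = f"it contains {hits!r}" if hits else "it cites no source"
    return f"{verdict}, because {reason}"

print(assess_argument("Studies show coffee improves focus.", True))
print(assess_argument("Coffee improves focus.", True))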

And so, you can really use questioning to explore a piece of argumentation and really understand if the machine made the right prediction or classification. And ultimately that will help you make better decisions.

My name is Valdemar Danry, and I'm a research assistant at Fluid Interfaces lab at MIT Media Lab.

CAN YOU LIVE WITH AI?

Pat Pataranutaporn

PhD student

MIT Media Lab, Fluid Interfaces group

“Can you live with AI?”

We work on wearable technology and on-body technologies that can enhance and augment human capabilities: making us more creative, making us more attentive, enhancing our cognitive capabilities across all domains. So, the goal for our group is to create new types of devices and digital technology that can empower people rather than, you know, impose on people.

I think in the beginning of the field, people were focusing on recreating that kind of intelligence by mimicking the human brain. But right now, I think we are at the point where we can start thinking about how artificial intelligence will work with human intelligence. How can each of them supplement the other?

Many of the devices that we have today usually distract us by showing us too much information, showing us information without understanding the context. The devices that we try to create don't just give us information; they give us new capabilities. So, you know, turning a human into what we call a superhuman, or turning a human into a cyborg, which is an interesting term that, you know, describes a human-machine symbiosis.

So, not just a device that gives us superficial information, but rather gives us a new power or new capabilities instead.

My name is Pat. I'm a PhD student at the Fluid Interfaces group at MIT Media Lab.

Rosalind Picard

Professor

MIT Media Lab Affective Computing group

“Can you live with AI?”

The area of research I focus on is an area I named affective computing many years ago, affective with an “a,” although hopefully nicely confused with effective with an “e.” The original idea was to create artificial intelligence that was emotionally intelligent, so that it would know the difference between interacting with a person who was pleased and having a great experience, and a person who was really frustrated, for whom things were getting annoying.

The word understand is one I need to be a little picky about, because the machines do not have a mind. They're not conscious. They're simply running algorithms and simulations based on us giving them examples of inputs and desired outputs and context where a particular set of behaviors or choices would hopefully be good ones to make. At the same time, when we craft an interaction, we may want a person to feel understood by the machine, and feeling understood is a bit different than being understood. But the feeling of being understood can be created.

My name is Rosalind Picard. I'm a professor in the MIT Media Lab.

Valdemar Danry

Research Assistant

MIT Media Lab, Fluid Interfaces lab

“Can you live with AI?”

I've always been interested in technology. When I was five years old, I took my dad's computer apart, and I tried to make a robot out of it. And unfortunately, that didn't work out too well. It was a little bit tougher than I thought. Ever since then, I've been kind of occupied with the mind and why people think the way that they think, how emotions and feelings feed into our cognitive skills. And reading philosophy was really a way for me to get in on that.

So, I think in terms of how we as humans use AI for our own cognitive enhancement, the stage that AI is right now, it's not really that smart. It can be really good at pattern recognition, but it is terrible at explaining why something is the case.

AI, as it is right now at least, seems to be very dependent on the data that it's trained on, and the sort of intentions of the people who build the algorithms. And I personally, I really think, or I really hope, that we will be able to use AI as a tool, but I hope that we do not need it eventually.

My name is Valdemar Danry, and I'm a research assistant at Fluid Interfaces lab at MIT Media Lab.

CAN A MACHINE PERCEIVE ITS WORLD?

Julie Shah

Professor

MIT Department of Aeronautics and Astronautics

Interactive Robotics group, Computer Science and Artificial Intelligence Laboratory

“Can a machine perceive its world?”

From a very young age, basically as early as I can remember, I loved airplanes. I loved rocket ships, the space shuttle, and I wanted to be an astronaut. As an undergrad, I really enjoyed control systems. I really enjoyed studying like the autopilot of aircraft and especially how the autopilot had to be designed considering our human capabilities, the capabilities of the pilot.

And then for my PhD, I entirely switched gears and pursued a PhD in artificial intelligence. How do you design this more autonomous capability so that it enhances, and fits like a puzzle piece with, human capability? And what can be achieved by harnessing the relative strengths of humans and machines? That's what inspired me to pursue research in developing AI models that are capable of modeling people, to facilitate more effective collaboration between humans and machines.

I'm Julie Shah. I'm a professor of aeronautics and astronautics at MIT, and I lead the Interactive Robotics group as a part of the Computer Science and Artificial Intelligence Laboratory. I also have a role as Associate Dean of social and ethical responsibilities of computing within the Schwarzman College of Computing.

Kevin Smith

Research Scientist

MIT Brain and Cognitive Sciences Department

“Can a machine perceive its world?”

My path to where I am today started with a class I took in high school, called “philosophy of mind.” And it was there that I started really engaging with these questions of, you know, what does it mean for a machine to be intelligent? You don't just need a good understanding of the world. You need a good understanding of what you can do in the world. And that's actually a lot harder.

I think the statement that we're writing algorithms that we can't read is very, very correct. And in some ways, we can never get to the point where we're going to understand every single process and every single decision that's made by machines. And nor should we. If we look at people: if I asked you why you had, you know, steak for dinner last night, for instance, you wouldn't be able to tell me every single step along your decision process. But rather: I was hungry, I hadn't had steak in a while, and it sounded good. That's high level, but it still kind of explains what you were thinking.

We need something like that for our current algorithms in order to get to the point where we can really understand what they're doing, maybe not in an artificial-neuron-by-artificial-neuron way, but at least in a way that lets us communicate with them and trust them.

I'm Kevin Smith. I'm a research scientist in the Brain and Cognitive Sciences Department, and I work in the Computational Cognitive Sciences Lab.

Rosalind Picard

Professor

MIT Media Lab Affective Computing group

“Can a machine perceive its world?”

Early on, I liked learning. And I liked when I was freed up to be curious, and to ask questions and get answers about how things that seemed magical in the world worked. Later, while trying to figure out how to build a cooler computer, I started studying the human brain. And that was even more amazing than anything I could ever imagine.

So, I was part of a group trying to build better computer vision systems, and we were studying how the brain worked and that involved the visual cortex and parts of the cortex near your scalp. And then I ran into this work on synesthesia, where people were tasting soup and feeling shapes in their hands, or seeing letters and seeing corresponding colors.

Well, it turned out the key part of the brain involved in this was these deeper regions involved in emotion and memory. And the more I learned about these areas of the brain, the more I learned that emotion was critical to intelligent thinking. Everything we were trying to do in AI, it turned out, needed intelligent emotional processing.

So, it turned out that emotion was informing intelligent decision-making and rational thought and rational combination of information and perception.

My name is Rosalind Picard. I'm a professor in the MIT Media Lab.

CAN YOU REASON WITH A ROBOT?

Julie Shah

Professor

MIT Department of Aeronautics and Astronautics

Interactive Robotics group, Computer Science and Artificial Intelligence Laboratory

“Can you reason with a robot?”

If we think about what it means for a robot to be an effective teammate, what I like to do is look at what makes for an effective human teammate. If we look at pilots and co-pilots, or just, you know, you cooking with your family in a kitchen, there are three aspects to effective human teammates. One is the ability of a person to know what their partner is thinking. The second is the ability to anticipate what they'll do next. And the third is to be able to use that information and make fast changes as circumstances unfold.

So rather than supplanting human work, what if you envision the role as enhancing it, or augmenting it, and what can be achieved by that? A key responsibility for us as technologists is, at the beginning, to put in the work to envision the possible futures, and to envision the ways in which technology might help us achieve one future versus another.

I'm Julie Shah. I'm a professor of aeronautics and astronautics at MIT, and I lead the Interactive Robotics group as a part of the Computer Science and Artificial Intelligence Laboratory. I also have a role as Associate Dean of social and ethical responsibilities of computing within the Schwarzman College of Computing.

Kevin Smith

Research Scientist

MIT Brain and Cognitive Sciences Department

“Can you reason with a robot?”

When we talk about judgments, we tend to think of them more as the sort of human level of being able to assess the world around us, to understand what's going on, and to give an answer to a question or to a problem, based on that understanding.

These days, we don't really have computers that have these sorts of capabilities, and we're very far from it. But to say we could never have a machine that looks up at the skies and says what's out there? Well, I think it's very egocentric of people to think that there's something unique about us that couldn't exist anywhere else.

Trying to build general artificial intelligence is going to require these capabilities: not just crunching a lot of data, but interpreting it, building models of the world, and answering questions based on those models instead of just the statistical regularities in the training data it has seen.

And the question is, do I think that it's going to be possible in the future? Well, I don't see why not.

I'm Kevin Smith. I'm a research scientist in the Brain and Cognitive Sciences Department, and I work in the Computational Cognitive Sciences Lab.

Nadia Figueroa

Postdoctoral Associate

MIT Computer Science and Artificial Intelligence Laboratory

MIT Department of Aeronautics and Astronautics

“Can you reason with a robot?”

I do have a feeling that the current state of the art, let's say in robotics and in machine learning applied to robotics, is focusing a lot on how to leverage the human inside the technology.

I want a robot that is capable of understanding what the human wants, and of adapting if the human changes, because we change all the time. We change our preferences. We change our intentions. When you're shaking hands with someone, they might have, like, a very stiff handshake, and then you kind of soften to make the handshake harmonious. And all of that is without even saying a word. That is what we need to make a robot intelligent. So, a robot should be able to adapt to those types of changes. And for that we need the AI; we need the smart algorithms.
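As a toy sketch of that handshake idea (the gains and the update rule here are illustrative, not from any specific controller in the lab), the robot can sense the human's stiffness and nudge its own impedance gain toward it:

def adapt_stiffness(robot_k: float, sensed_human_k: float,
                    rate: float = 0.2) -> float:
    # Move the robot's stiffness gain a step toward what it senses from
    # the human, so the handshake converges toward harmony.
    return robot_k + rate * (sensed_human_k - robot_k)

robot_k = 400.0  # robot starts moderately stiff (N/m)
for sensed_k in [900.0, 850.0, 300.0, 250.0]:  # human stiffens, then softens
    robot_k = adapt_stiffness(robot_k, sensed_k)
    print(f"human ~{sensed_k:.0f} N/m -> robot now at {robot_k:.0f} N/m")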

That's what really excites me about what I do, because at the end of the day, by doing robotics, in some way, we're trying to understand how we work as humans.

My name is Nadia Figueroa. I am a postdoctoral associate at MIT, affiliated with CSAIL and with the Aero Astro department. And I work with Julie Shah and her group, the Interactive Robotics group.