Upper House Commons Events

AI x Humanity

Upper House Season 4 Episode 4


How should artificial intelligence shape our understanding of what it means to be human?

As AI advances rapidly, questions about its ethical, cultural, and social implications are more urgent than ever.

This public panel features UW-Madison experts in philosophy, history, communications, and ethics exploring how AI intersects with meaning, knowledge, and human values. Panelists from the University of Wisconsin–Madison's Center for Humanistic Inquiry into AI and Uncertainty engage in rich dialogue not just on what AI can do, but on what it should do and how communities can shape its influence with insight, care, and awareness.

Our moderator was Jeremy Morris, Professor of Media and Cultural Studies and Director of Graduate Studies at the University of Wisconsin–Madison, where he also serves as Faculty Director of the Center for Humanistic Inquiry into AI and Uncertainty.

Featured Speakers:
• Catalina Toma is a Professor of Communication Science at the University of Wisconsin–Madison and Associate Editor of Computers in Human Behavior. Her research examines how people understand and relate to one another through communication technologies, focusing on the social and psychological dynamics of digital interaction.

• John J. Curtin is a Professor of Psychology at the University of Wisconsin–Madison. His research focuses on substance use disorders and other mental health conditions, advancing innovative, technology-based approaches to prevention and treatment.

• James Goodrich is an Assistant Professor of Philosophy at the University of Wisconsin–Madison. His research centers on normative ethics, with particular attention to the intersection of political philosophy and economics, and to the moral questions that arise in public policy and markets.

• Courtney Bell is a Professor of Learning Sciences at the University of Wisconsin–Madison and Director of the Wisconsin Center for Education Research. She studies teaching domestically and internationally, with a focus on measures of teaching quality. Her work also helps instructors learn how to support all students’ growth and development. 


Upper House Commons gathers the university community for spiritual, intellectual, and vocational formation.

We explore big ideas and engage in conversations that matter within arts and humanities, justice and society, leadership and vocation, science and technology, spiritual formation, and theology. Whether you are a student or faculty member at UW–Madison or beyond, working in the marketplace, or serving in the church, we see you as part of our university community. Gather with us for one of our programs, our "commons," each a pasture for shared spiritual, intellectual, and vocational formation.

Head over to our events page to see what's coming soon, or mark your calendar for upcoming programs.

Find out more at slbf.org/upperhousecommons

SPEAKER_02

Welcome, welcome. We are so glad you're here. Welcome to our session on AI and humanity. My name is Joy Fia. I am the director of the Upper House Commons here, and the Upper House Commons is an initiative of the Stephen & Laurel Brown Foundation. If you have not been in this space before, we are a Christian study center that hopes to lead conversation in both formation and intellectual development as it relates to Christian faith. But we are very interested in a wide diversity of ideas, and in that way we want to host a great number of diverse voices and events in this space, which is what we're doing today by welcoming some of our University of Wisconsin faculty, whom we are so excited to hear from. We are partnering with the Center for Humanistic Inquiry into AI and Uncertainty, which is a very new initiative of the University of Wisconsin, and these people up here have all been participating in that conversation.

The way this is going to work today: we are going to hear from each of the faculty members up here for about five minutes on the area they're doing work in right now. Then our moderator is going to have questions for them for about fifteen minutes. During that time we will also open up Poll Everywhere for you to write in questions, and at that point the questions will come up for the panelists to answer.

So I'm going to introduce each of the panelists to you, so that you can know a little bit more about them. If you would just wave when I say your name, that would be great; I'll stand over here to the side. The first person I want to introduce is our moderator, but he is also the center's director: Dr. Jeremy Morris. He is a professor of media and cultural studies and the director of graduate studies, and he also serves as the faculty director for the Center for Humanistic Inquiry into AI and Uncertainty. He is going to share a little of his research into AI and the music and entertainment industries. Second, Dr. Catalina Toma. She is a professor of communication science and an associate editor of Computers in Human Behavior. She studies the social and psychological dynamics of digital interactions, including chatbots and identity formation among adolescents. Did you know that? Oh, good. I didn't say anything that wasn't true; just checking. Dr. John Curtin is a professor of psychology. His research focuses on substance use disorders and mental health conditions, and he works on developing innovative technology-based approaches to prevention and treatment. Dr. Courtney Bell is a professor of learning sciences and director of the Wisconsin Center for Education Research. Her research explores teaching both domestically and internationally, including how AI is used in classroom management. And last, Dr. Jimmy Goodrich, an assistant professor of philosophy. His work centers on the ethical implications of how AI is reshaping the economy. We are thankful for each of you. So we're going to start with Dr. Morris, and he's going to share a little bit about his research first.

SPEAKER_04

Do I need to turn it on? No. Oh, there it is. Hi. Thank you. It's really great to be here. I was just at a conference this weekend where there were maybe seven people in the room, so it's wonderful to have a nice audience. Academic conferences, you know how it is. But thank you for having us. I'm very happy to be here with this wonderful panel of academics to talk about UW's holistic approach to studying AI and how we're moving forward in understanding this technology. That involves studying, I think, not just the technologies of AI systems, but wrestling with the deep social and cultural effects AI will have on how we communicate, how we express ourselves, and how we understand our identities and our relationships with others. I've argued elsewhere that I think AI's biggest impacts won't be technical, they'll be cultural, and I think understanding that is key to thinking about how we shape and guide this technology, or this set of technologies, going forward.

So, as Joy mentioned, I'm currently the faculty director of this brand new center. It just started in January, so it's really just getting up and running. It's a new center on campus called the Center for Humanistic Inquiry into Uncertainty and AI. We call it different things: AI and Uncertainty, Uncertainty and AI. I wanted to call it Uncertainty and AI so I could say "un-AI," but campus doesn't want to portray a negative version of it, and neither do I. There is a fundamental uncertainty here, which I think is at the heart of why we're studying it. The center is the result of a very large NEH grant that we received last year, as well as matching funds from campus, so they've made a pretty big commitment to this. The center is founded on the belief that a technology as complex as AI requires cross-campus collaborations, and it will serve as a collaborative hub for exploring the many uncertainties that AI introduces to key social values such as equity, fairness, trustworthiness, privacy, and civic responsibility. You can see here the huge team of folks involved with this, and there are probably even more. Jimmy and I are on the steering committee, and I'm currently faculty director. There's also an advisory board of folks at the university, and a group of fellows who are now working with us for the rest of the year, each with their own individual projects; you're going to hear from Catalina about hers. And then there are other folks around campus who aren't technically affiliated with the center yet but are doing amazing work on AI, and Courtney and John are here as examples of all the work that's going on. So, a really big team, and a team that I'm very much looking forward to working with.

My own research, which I think I still have some time to talk about, looks at the increasing use of generative AI in the music and podcasting industries. I've been interested in the way new technologies shape how we experience and understand things like music and podcasts that mean something so deeply to us. You may have heard recently that songs created with AI-generated music and vocals have been topping various Billboard charts. I was familiar with some of the rock and hip-hop acts, but in doing research for today I found there was a song by Solomon Ray last November. Did anybody hear about this? Solomon Ray, who is, quote, "a Mississippi-made soul singer carrying a southern soul revival into the present." They received over 325,000 monthly listeners and became the top artist on the iTunes Top 100 Christian and Gospel albums chart in November. Almost every genre of music now has a story like this, of an AI-generated artist that made it to the top. So I'm studying the impact on the many creative workers who depend on these industries. How are they engaging with generative AI? How are things like artificially generated artists affecting their royalty rates or their ways of making a living on Spotify? Because of the popularity and importance of media, I think many people's first interactions with generative AI, or at least some of the interactions they care about most deeply and meaningfully, are going to be through things like music, film, and podcasts. So that's been my area of focus: trying to understand how the industries are grappling with these new technologies, and how the musicians or podcasters themselves are working through the decisions they need to make about whether or not to employ this technology. That's a quick bit about me and my research. I'm now going to pass it on to the rest of the crew here. We've got four folks coming up. The first two, I would say, are using AI quite thoroughly in their projects; they've integrated it and depend on the technology for the work they're doing. The second group of two speakers will talk about philosophical approaches to AI, or ways of thinking about AI critically as we come to understand how these technologies work. So thank you so much. I'm going to pass it to Courtney now.

SPEAKER_03

Thank you very much, Jeremy. Oh, that works. Okay, it's really nice to be here today. Thank you, Jeremy, for hosting us, and to Upper House also. I'm going to give you a little bit of background about a new lab on campus that I founded. We just hired a new director who started in August, so that's exciting. We are the SimLab, we just call it simulation, and we do mixed reality simulation and we also do virtual simulation. We're housed in the Ed School and the Wisconsin Center for Education Research, which is the nation's oldest and most productive externally funded education research center, so we started with education. But our colleague who leads the lab, Rob Hubble, came to us from UNC; he's a computer scientist by training and a cognitive scientist also. We're working with the nursing school, we're working with the business school, a whole bunch of different partners, I'll call them. But the common thread across them is the having of challenging conversations. You might ask, well, what's a challenging conversation? Each of you, I am sure, has had such conversations, and they occur in all parts of our lives. I'm a former high school biology, chemistry, and physics teacher. One example is a conversation I had with a parent whose child wanted to move away from rural North Carolina, where I taught, to go to college at Boston College. That was a hard conversation that I did not have any preparation for. That person's daughter happened to be really smart, and I happened to hold the view that it was a reasonable desire on the daughter's part to apply to and attend Boston College. That was not the parent's view. So that's an example of a challenging conversation. We have them all the time with students in classrooms, where we're trying to figure out what a child understands about double-digit subtraction. Why did they just say that thing? Okay, say more about that. Right now we're also partnering with the med school around bedside end-of-life conversations. I'm sure many of the humans in this room have been present for those conversations. It's not the best to have your physician practicing on your family in those moments; it would be really nice for them to have done a little run-through with somebody besides your family, because it's a very high-stakes conversation for everybody involved. So the SimLab is focused on challenging conversations, whether they're socially challenging, technically challenging, where there are hard things to explain, or conversationally challenging, like that one between the teacher and the student, where it's hard to figure out what this person is saying to me and what my next follow-up question should be. I can't just say "could you say it differently?" to a 10-year-old; they don't have that many different ways of saying it. Okay, so that's what we focus on. What I wanted to show you, which is why I have a slide, is an example of the simulation setup we're using for training some pre-service teachers who are certified in both secondary science and bilingual education. They are supposed to be able to identify the language needs of the young people in the room and connect them to the subject matter, which in the US is often, though not always, taught only in English. The teacher hops into that situation, puts on the headset, and can use all the tools in front of them.

What's there is actually a mixed reality simulation. All those little ones, and I'll show you the second slide because there are big ones on the left-hand side, are avatars, but they are voiced by an actor who is not an expert in math and not an expert in pretending to be a patient at the end of their life; they're an actor, and they inhabit the role. Through the simulation design, we help them understand what is regular and typical to say in this situation. So that's one of our teachers learning how to do this, and she's practicing asking questions. This setup can be done around the world, because the interactor who is voicing the avatars is on the computer at the same time as the person who's rehearsing or practicing; it's called mixed reality because the human is in the loop. We are also developing situations, though, where it's all virtual, where we use generative AI, or gen AI. There the avatar actually runs through a closed LLM that we tell what to draw on, because we don't want it to just say general things it can find out there on the internet, and it gives responses to the person who's practicing. So it's both of those kinds, mixed reality and virtual reality, that we're working on. One of the nice things about it, and I'm a learning scientist, is that it creates an environment where it's safe for the learner to learn. There are many mis-educative experiences that occur in the world where someone tries something challenging and it doesn't go well. That's super common and super normal. Learners make mistakes; we expect you to make mistakes when you're learning something. Experts make mistakes. So we want that learning to be productive and not mis-educative, as John Dewey would say. I will stop there and hand it on.
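To make the closed-LLM avatar concrete, here is a minimal sketch of what a scenario-constrained avatar could look like. This is an editor's illustration, not the SimLab's actual system: the OpenAI Python client is just one possible backend, and the model name, persona, and scenario brief are all invented for the example. The key idea is that a system prompt restricts the avatar to the scenario's facts rather than letting it draw on everything on the internet.

```python
# Minimal sketch of a scenario-constrained LLM avatar (hypothetical, not SimLab code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "closed" part: the avatar may only draw on this brief, not general web knowledge.
SCENARIO_BRIEF = """
You are 'Maya', a 10-year-old student in a bilingual secondary-science simulation.
Stay in character. Answer ONLY from the facts below; if asked something outside
them, respond the way a confused 10-year-old would ('I don't know', restate, etc.).
Facts: you think 52 - 17 = 45 because you subtracted 2 from 7 instead of borrowing.
"""

def avatar_reply(history: list[dict], teacher_utterance: str) -> str:
    """Return the avatar's next line given the conversation so far."""
    history.append({"role": "user", "content": teacher_utterance})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": SCENARIO_BRIEF}, *history],
        temperature=0.7,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(avatar_reply(history, "Can you walk me through how you solved 52 minus 17?"))
```

In a design like this, the practicing teacher's utterances go in as user turns and the scenario brief keeps the avatar "in character," which is one plausible way to implement the constrained behavior the speaker describes.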

SPEAKER_05

Right, so good afternoon, everybody. By way of introduction, I wanted to give you a sense of how I got here. I started my career as a psychophysiologist conducting laboratory experiments studying the effects of drugs on the brain. But about a decade ago, I found that my heart was increasingly not in the work. I'd become a clinical psychologist to help people who were struggling with substance use disorders. My paternal grandmother died of complications secondary to alcoholism. My dad struggled with alcohol use his entire adult life, and during the periods when he lost control, it affected our whole family. My cousin Stephen has a severe substance use disorder; he's completed several treatment programs and had periods of stability, but they've always ended in another relapse. And my Aunt Kathy reached out to me on numerous occasions to ask what could be done to help Stephen. It was really those conversations that got me thinking about how I could redirect my research program to help people like Stephen and my father and my grandmother. Stephen's experiences in particular capture several key challenges that make it so difficult to help people like him. First, we have effective treatments to help patients initially reduce or stop their harmful use for periods of time, but substance use disorders are chronic relapsing conditions that require lifelong monitoring and support to prevent relapse, and that's where our current treatment infrastructure really falls short. Our treatment system has very little capacity to provide this essential long-term continuing care. It's also true that the causes and risk factors of relapse are myriad. Patients differ widely in their lapse risks, and critically, both the risks and the optimal supports needed, even for one specific person like Stephen, change month to month, day to day, and even moment to moment. So about a decade ago, my lab brought fresh eyes to these barriers to providing personalized long-term continuing care. We believed we could harness two technologies that were emerging at that time, personal sensing and artificial intelligence algorithms, to develop a highly scalable, smart recovery monitoring and support system that could both predict lapses before they occurred and provide personalized support and recommendations to patients about how to prevent those lapses. Fast forward to today, and I can describe the system we've been developing, make clear how it works with an eye to the components relevant to our focus today on ethics and social impacts, and foreshadow our next steps. To start, the system relies on three sources of information. Users provide brief daily reports on key lapse risks that we collect using their smartphones. We also passively monitor their moment-by-moment location, and we've been experimenting with collecting their cellular communications data as well. These are all clearly private and highly sensitive sources of information, and I'd be happy to talk later about our experiences gathering and using these inputs. We've had two large grants funded by the NIH to use these inputs to train machine learning models to predict future lapses. As a result of this work, we now know that we can do this both with exceptionally high accuracy and with a very high degree of temporal resolution, even down to the specific hour of the lapse.

But critically, beyond just making lapse probability predictions, we can also use these same machine learning models to understand which lapse risk factors are most influential for any specific individual at a specific moment in time. This allows us to understand not only when a lapse might occur, but also, critically, why, and potentially how best to intervene to prevent it. We've also been thinking carefully about a variety of ethical issues. For example, we routinely look for algorithmic bias by comparing model performance in subgroups that experience health disparities. As a result, we learned that the initial models we developed performed more poorly when we used them with patients of color. We've now corrected those biases, and we see comparable performance across a variety of subgroups. We also routinely meet with diverse community advisory boards to get their feedback, both on the needs that we're seeking to meet with these systems and on their explicit design, including their inputs, how they work, and their outputs. And finally, we're now in a very exciting moment where we've received new funding from the NIH to implement the system for the first time in real time. This is going to allow us to tune it, to optimize long-term patient engagement, and to explicitly evaluate the impact of using the system on the clinical outcomes that we care about. I'm also happy to report that my heart is now back in my work, and I'm eager to discuss with you some of the complex ethical issues that arise with developing and implementing a system like this. So I look forward to our discussions. Thanks.
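As an illustration of the subgroup audit described above, here is a minimal sketch, with entirely synthetic data, of comparing a lapse-prediction model's discrimination (ROC AUC) across subgroups. This is an editor's example, not the lab's pipeline; the data, subgroup labels, and metric choice are assumptions.

```python
# Sketch of a subgroup fairness audit: compare a lapse-prediction model's
# discrimination (ROC AUC) across demographic subgroups to flag algorithmic bias.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical held-out data: true lapse labels, model probabilities, subgroup tags.
y_true = rng.integers(0, 2, size=1000)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)
subgroup = rng.choice(["group_a", "group_b", "group_c"], size=1000)

for group in np.unique(subgroup):
    mask = subgroup == group
    auc = roc_auc_score(y_true[mask], y_prob[mask])
    print(f"{group}: AUC = {auc:.3f} (n = {mask.sum()})")
# A materially lower AUC in one subgroup is the kind of signal that would
# prompt the model corrections the panelist mentions.
```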

SPEAKER_00

Thank you, John, and hello everybody. My name is Catalina Toma. I am a professor of communication science in the Department of Communication Arts; it's all a little bit confusing. I like to describe myself as a media psychologist, because my research examines what happens when people use communication media to interact with others. I'm interested in how these media affect people's sense of self, that is, how we think and feel about ourselves, in really micro settings like those with our romantic partners, friendships, and family members, and also in individuals' psychological well-being: how we feel about our relationships and about ourselves in the world. Following John's excellent lead, I'll tell you a little story about how I came to this topic, and this will date me very clearly. I was an international student coming to the United States for college way back in the day, and I was really worried about phone cards and calling my family and how to stay in touch with them. Back then, instant messaging and email were becoming a big thing. I discovered them and absolutely fell in love with them. The opportunities for connection, so easily, at the touch of a button, and the sense of presence I felt with people dear to me who were actually far away, were very intriguing to me. So I came to these technologies as a bit of a groupie and with a sense of optimism. Then I got to graduate school and realized that the effects are complicated. Yes, there are positive effects on connection, but there's also a dark side. I started to study deception in these online spaces, because we get to construct ourselves, and different versions of ourselves, online in ways that are a little bit different than face to face. My advisor and I did the first big study on how much deception happens in online dating, so I have answers about that if you have questions. And I continue to study these spaces: mobile devices, because texting is really big for interpersonal connection, and online dating. Recently, of course, because technology evolves constantly, it's a moving target, I've become interested in the newest iterations of what we're seeing now, which are generative AI, conversational AI, and algorithms. So in the past couple of years I became really intrigued by the possibility that these technologies exercise a bit of an insidious effect on us that we may not be aware of or think about, affecting how we view ourselves, mirroring ourselves back to ourselves, and giving us an opportunity to learn about who we are and what we like and so on. And I started a project, which I'll perhaps tell you more about later, on online dating algorithms. Actually, the online dating industry was a real pioneer in developing recommendation algorithms, because they're trying to match people with potential mates who they think would be good for them. Nowadays, a lot of these online dating platforms use algorithms to give you top matches who they think are good for you.

So my graduate students and collaborators and I became intrigued by whether people learn about themselves, or form impressions of themselves, based on the feedback given to them by the algorithm. If the algorithm gives me more attractive matches, does that mean I'm more attractive? Do I view myself as a more desirable person? And the answer is yes. It seems like it is working: people get social feedback from algorithms. And recently, through this wonderful new Center for Uncertainty and AI, I am starting to develop a project on adolescents and how they learn about themselves through conversations with what I call conversational AI: ChatGPT, Gemini, all of these platforms that many of us are currently using. Adolescence is a period of turbulence. We're all trying to figure out who we are at that age, and a lot of it is a process of exploration, of trying on and discarding different possible versions of ourselves. From a few surveys, and anecdotally from our students, we're learning that many of these conversations happen with chatbots. Yes, people still go to friends and family and romantic partners, but they also go to chatbots to decode social situations, to ask for advice, to get recommendations. So this year I'll be spending time talking to adolescents, hopefully soon, trying to understand how they use these technologies, what they learn about themselves in the process, how they critically discern accurate and helpful information from inaccurate or unhelpful information, and how they integrate feedback from these automated systems with feedback from real individuals in their lives. I don't have answers yet, but I'm very excited to hopefully get some soon. Thank you.

SPEAKER_01

Hello? Great. Thank you so much, everyone, for being here. It's really nice to see you all. So I'm a philosopher, and I'm a philosopher who works on the economy, so I can predict the jokes about how I managed to get a job. But really, the stuff I work on has to do with the intersection of two seeming facts right now. One is that we've seen rapid technological development in the space of AI, algorithms in general, and data collection, which has now been going on for longer than AI, and with that, really rapid deployment of these technologies by companies. Many of you who have email probably had, overnight at some point, an AI chatbot just inserted into your email apps. It's all over the place. On the other hand, our governments are very, very slow; they work like molasses. So the rapid adoption of these new technologies, and the fact that they are constantly improving very quickly, is possibly changing the basic structure of our economy, and I'll give you three examples in just a moment. And yet our political and social institutions, which might be able to temper certain bad features of this rapid deployment, are awfully slow and struggle to keep up. Now, this worries me as a philosopher. You might ask, well, why aren't you an economist then? Because there's a kind of standard story about what justifies the use of markets and property rights and things like that in society, and it's just this: these are social technologies themselves that help us cooperate with each other to produce more good stuff, stuff that in the long run has improved our lives. The empirical data on this, I think, is pretty clear over a long time span, basically the history of humanity. If you look at average life expectancy, education, and all kinds of other markers, it looks pretty unsavory for quite a long time. If you think about even what it was like to be a king in the year 1700, it's really not that great. I would not trade places with a king in the 1700s compared to now, and I suspect if you read a biography of one of these kings, neither would you. Around 1850, we get the Industrial Revolution, we get a change in the basic structure of the economy, and things take off. Now, that's not to take a super uncritical look at what's happened, but it would be a shame to undo the progress that we have made. Okay, so there are three basic things I said I would raise, which are questions I'm interested in. One is maybe the one that's already occurred to you, which is automation. It seems like these AI tools have already had an impact on the labor market, and it's already seemingly creating problems with job creation: too many people seem to be losing jobs to AI compared with how many new jobs are created. It's not the ratio we would like. Another one is innovation. The concern here comes back to data collecting practices in general. There are basically a few large companies, Alphabet, Meta, and so on, that have quite a preponderance of the super valuable data out there that allows us to make great innovations, which might help us continue to live prosperously. Because they have, to some extent, I hesitate to use the word monopoly, but something approaching a monopoly on this data, they're able to gatekeep innovation in a variety of ways.

Because they can gatekeep that innovation, they make themselves very valuable, and they can pick winners and losers in the market. I'm sort of just asserting these things; hopefully it's a provocation. That's not good if you liked the standard story about markets. You're supposed to have competition, not a few companies picking winners and losers, and not just a few companies able to partner with the government on public-good initiatives like health and education; you might get a little worried if governments are forced to deal with just one or two. All right, and finally, though we haven't seen a ton of this yet, it's starting to happen, and that's algorithmic pricing. There's a version of this that is worrying people right now called personalized pricing. If you go online, you'll get ads for things. They're tracking a lot of your behavior across a lot of websites, and they create these targeted ads, I'm sure you're familiar with this, on the basis of your behavior online. But increasingly, they're able to change the price of the goods you might buy on the basis of their predictions of how willing you are to buy them. Now, there may be concerns here about privacy, about fairness, and in general about whether or not this really looks like a system of cooperation between everyone, if they're able to essentially take home more of the gains in this exchange than you are. Okay. That all sounds a bit doomy; like I said, it's a provocation. But those are the kinds of questions I've been interested in. Thank you.
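To make the personalized-pricing mechanism concrete, here is a toy sketch of the logic being described: an estimate of a shopper's willingness to pay, inferred from tracked behavior, shifts the price each shopper is quoted. This is an editor's illustration with invented signals and weights, not a description of any real pricing system.

```python
# Toy illustration of personalized pricing: same good, different quotes,
# depending on behavioral signals that proxy for willingness to pay.
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    visits_to_product_page: int        # repeated interest suggests eagerness
    bought_premium_before: bool        # premium history suggests low price sensitivity
    price_comparison_sites_seen: int   # comparison shopping suggests high sensitivity

def quote_price(base_price: float, p: ShopperProfile) -> float:
    """Nudge the price up for shoppers predicted to be less price-sensitive."""
    willingness = 1.0
    willingness += 0.03 * min(p.visits_to_product_page, 5)
    willingness += 0.10 if p.bought_premium_before else 0.0
    willingness -= 0.05 * min(p.price_comparison_sites_seen, 3)
    return round(base_price * willingness, 2)

eager = ShopperProfile(visits_to_product_page=4, bought_premium_before=True,
                       price_comparison_sites_seen=0)
wary = ShopperProfile(visits_to_product_page=1, bought_premium_before=False,
                      price_comparison_sites_seen=3)
print(quote_price(100.0, eager))  # 122.0: quoted above the base price
print(quote_price(100.0, wary))   # 88.0: quoted below the base price
```

The fairness worry in the talk is visible even in this toy: two shoppers buy the identical good at different prices, with the gap set by the seller's prediction rather than by anything the shoppers agreed to.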

SPEAKER_04

All right. Yeah, I guess thank you for all of those great synopses, and thank you even more for staying to time. It looks like we have 18 minutes, which is three minutes more than I was expecting for the moderated portion here. So I have a couple of questions that we had settled on to talk about first, which we hope will be provocations, or food for thought, that help open up the Q&A after that. And Joy, of course, will let me know if I'm wrong on my time there, and we'll play some music from the Grammys to walk me off the stage or whatever. Okay, so I wanted to start with this question, and I'll move one of the mics over here too, depending on who wants to answer. On the one hand, generative AI in particular has been criticized for the harms it causes environmentally, socially, and ethically, and in terms of the problematic politics of the data sets on which it is built. I know not everybody here is working with generative AI, but I use that as one example; there are all these kinds of bigger issues around AI. So I wanted to know how each of you, in your work, reckons with both the positives and the negatives that AI technologies introduce. What does this mean for you, studying or researching with these technologies, given the pluses and minuses they bring? I don't know who wants to jump in on that, but that's the first question. And again, just keep an eye on the time here; we've got about three or four questions, so we'll try to get through all of them.

SPEAKER_03

Yeah, yeah.

SPEAKER_05

I could start. In terms of the opportunity in my field, I think it's obvious: we've all been touched by substance use disorders. Everyone's aware of the opioid epidemic, deaths of despair, and their links to substance use disorders. I didn't say it earlier, but I can report that only one out of ten people with an active substance use disorder in any year gets any support or treatment for it. When you see those harms, the opportunity to reduce those costs is large. There are many ways that what we're trying to do can go awry, but several areas are high on my radar. The first, and I tried to work this in initially, is the idea of disparities: making sure that everyone benefits equally from these technologies. There are things we can do that you might immediately think about, like making sure the algorithms work as well for everybody, for these subgroups, and that's not a given. As I said, in our preliminary work, that was not true. So it's clearly important that we all get in the habit of checking that right at the start. But there are other ways these can become inequitable as well. Do you have access to the system? Is it acceptable to you? You heard the types of data I'm collecting, and you could imagine that certain groups that have been marginalized or stigmatized may not want to provide or trust us with sensitive data. So although the system might work well for them, they won't engage in its use because of that. That's why we work really carefully with the communities we're trying to serve: to identify these concerns, address them, and figure out how we can reduce these sorts of barriers. The second big issue is who owns the data. I think that's obvious. In our own work, we're really of the opinion that the data stay local and that the feedback from the algorithms goes to you. You would probably have a very different level of comfort with this if it was your doctor, or maybe your doctor as part of an HMO that controlled your insurance, who was getting the feedback on how well your recovery was going. So we really try to empower people to use their own data to help themselves directly, rather than indirectly through these other structures. Those are two; I could talk about others, but I want to make sure there's time for other people.

SPEAKER_00

I'm happy to continue, and then we'll move on to Courtney. Remember that I study micro settings: people's interpersonal relationships, their visions of themselves, and how they view and feel about themselves. Before answering the question of the positive or negative effects of conversational AI, maybe let's think a little bit about how it functions. What we're seeing with conversational AI, which I'm sure you'll relate to if you've used it, is a lot of anthropomorphic responding, which means that people engage with these systems as if they were other people; they anthropomorphize the AI. Why? Because it has a series of features that activate our automatic and kind of mindless beliefs and reactions, so that it's almost as though it's a person. For instance, it gives immediate responses; it engages in a back and forth with you. And these responses are what we call contingent, meaning they take into account what you have already said. It has memory; it remembers things you've told it and brings them back at appropriate times in conversation. In addition, it's really easy to access, and it doesn't feel intrusive in the way it might feel intrusive to reach out to another person with your problems or whatever you want to vent about or discuss. So, on the one hand, we're seeing that people tend to think about conversational AI in this particular way. On the other hand, we can also look at what kinds of responses these conversational AI give to people, and a few systematic biases in the way these chatbots operate have been documented. One that's been really well documented, and that perhaps you've felt yourself, is sycophancy: they respond in an excessively validating and flattering way, possibly excessive sometimes, but very kind generally. If you want them to be mean, you have to tell them. I use ChatGPT, and sometimes when I want feedback on a paper, I'll tell it to please be critical: don't tell me it's a really good paper. We're also documenting a lot of mirroring. Because they're so contingent, they give you back what you give them, so there's a lot of mirroring of the information the user inputs. What the user inputs is actually really important. So the positives or negatives that we're starting to see in my lab from this research depend on the operations I've just described. For instance, we conducted a study where we identified lonely college students and asked them to talk to a chatbot for a week about their most important event of the day. We noticed a small but significant reduction in loneliness at the end of the week. So maybe in this context, that anthropomorphizing, treating this almost like a person, and the illusion of empathy you're getting from the sycophancy, had some benefits, if you want to think about it this way. But the more interesting story for me is that it really mattered how the students shaped those conversations. We saw very clear evidence of mirroring. Students who talked in more positive ways had their positivity mirrored, and that made them feel better.

And by the same token, students who talked in negative ways had their negativity mirrored by the chatbot, and that made them feel worse. That was a really powerful effect that we need to consider and think about as a potential harm. I've already told you about my work with online dating algorithms, where we're seeing that people take cues about how to evaluate themselves from the feedback they're getting from algorithms. This could be good, I suppose, if the matches the algorithm gives you are attractive, and there is evidence to suggest that they are: there is a bias toward the algorithm giving you attractive folks so that you get hooked on the platform and want to stick around longer. I was actually just reading a fascinating paper the other day showing that individuals on dating platforms reach out, on average, to people who are 25% more attractive than themselves. We all have this bias of reaching out to more attractive people. I can tell you how that's measured, but it'll take a while. So, in short, we have to take a micro look at what happens in these interactions: who's interacting, in what way, and with what kind of AI. The positives and the harms are very circumstantial. There's a lot of potential, but there's also potential for harm.

SPEAKER_04

I don't know if Courtney or Jimmy had much to add. Okay, so I'll jump to the second question here. I assume there might be folks in the room who are interested in students and teaching and what's going on at the university in terms of AI. So maybe I could get a quick minute and a half or two minutes from everybody on how you've incorporated AI into your teaching routines, or maybe even resisted incorporating it. If you also want to talk a little about your research, you can, but I feel like we got some of that in your opening setups. So: how have you incorporated AI, or resisted incorporating AI, into your teaching routines?

SPEAKER_01

Sure. I only incorporate AI into my teaching in one way: I feed it exam questions I might ask and try to figure out how to word my questions so that generative AI is bad at answering them. So that's pretty much the only use I have for it in my work life. I'm a philosopher, you know; I get to stare at the clouds and come up with pretty ideas, so maybe I'm privileged in this way. But yeah, pretty much the only thing I use it for is to make sure that my students can't use it well.

SPEAKER_03

Okay, I'm going to go to the other end. In our School of Education, it is deliberately up to individual faculty members to decide how AI gets used in teaching. And in Ed Psych, I think in general, faculty are being very explicit with students about how to use it. We have a whole course about teaching them how to use it, actually, in ways that are productive, helping them identify what AI is good and helpful for and what it is less helpful for. I have not yet done this, but I have colleagues who are taking essays, if they're still allowing students to write essays outside of sitting down and literally creating them in the session, and putting them through various kinds of software to check for AI generation. On the one hand, you could think about that as surveillance. The real issue, at least for us in the learning sciences, is that there are many outcomes of an education that we hope for, and one of them is the ability to actually think, full stop. So we actually need to practice that. I think sometimes students feel like we're surveilling them. No, no, no. And it is against the code of conduct for them to hand in AI work, so let's just be clear: that's not confusing for students. They know they're not supposed to hand in things that someone else, including AI, wrote, and they are violating the code if they do. So it's not the surveillance so much. The other thing is that if you can use generative AI in wise ways, you actually can learn how to think better. You can say, okay, I'm really searching for this word; I've identified this way of saying it; it's confusing; can you help me re-say it, AI? So you grab that sentence, put it in, and AI will give you three different ways to say it. Oh man, okay, I like this one. On the one hand, you could view that as cheating. On the other hand, in that particular case, I don't view it as cheating: they're critically analyzing what to select, what carries their meaning, which words convey their meaning better than other words do. So those are the kinds of things we're doing. We're trying to take it as a tool and treat it as a tool.

SPEAKER_05

Much of my teaching is at the graduate level, teaching data science, statistics, and applied machine learning, and we use AI very regularly there. Coding assistance is obviously one of the first uses that emerged for generative AI, and it's quite useful. But we introduce it in a very staged way, because it has problems, and you need to be able to identify those problems. At the beginning, in the first half of the semester, we don't have students using it at all, because they need to learn what the code looks like. Then, once they understand what good code looks like, gen AI can write it faster for you; you can think of it almost as a typing assistant: this is what I want to do, and the code comes out. In the second half, they're learning to evaluate what comes out, to link it to what they already know how to do, and to vet it. Anything they don't understand, they need to probe further. So that becomes a critical piece. It speeds up the workflow, if nothing else than the typing, and it will also give you ideas you might not have had; then you check it, and, oh, that's a new way to do this, and that's great. So clearly that's a huge benefit to those of us writing code to do analyses and data science. On the other hand, we're training data scientists who need to understand what they're doing; they need to understand concepts, and there are two ways we reinforce that. When it comes to evaluation, I let them use AI for code generation, but when it comes to evaluating concepts, we do that on the fly, in the moment, on exams, without AI. I tell them: although you could get answers on concepts with AI, you couldn't engage in the work we do if there weren't a solid set of domain expertise in your head already. You can't constantly be going out to some agent to help you think about things. So we try to get people to understand that what they bring to this is their domain expertise, and that needs to be at their fingertips. That's what we evaluate in the moment, without allowing them to use AI. But I have them use AI to develop that domain expertise as well, and I use it myself this way. Every semester, for ideas I'm working on teaching, I go regularly into Claude, I'm a Claude fan, and I have conversations with it: I understand this; tell me more about the bias-variance trade-off and how it plays out in the kNN algorithm as k goes down. You get answers back, and then we train the students: when you get answers back, you try to connect them to what you already know, to vet whether you trust them, and then you come back and challenge those answers to see what comes back when you challenge them. We find that's a way to really deepen their own knowledge. It's kind of like having a TA in the room; we have TAs available to them too, but they don't come into our offices as much as we would like. Many of our students now take advantage of engaging in these dialogues with these various agents, and we find that really does help them develop that conceptual domain knowledge, and then have it at their own fingertips, in their own heads. I'm just looking at the clock there.
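As a concrete version of the kNN question the panelist poses to Claude, here is a minimal sketch, an editor's example rather than course material, showing the bias-variance trade-off as k changes in a k-nearest-neighbors regressor: small k fits training noise (high variance), large k over-smooths (high bias).

```python
# Editor's sketch of the bias-variance behavior mentioned above, on synthetic data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, size=500)  # noisy sine wave

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 5, 25, 100):
    model = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    # k=1 memorizes the training data (train MSE near 0, test MSE inflated by
    # variance); very large k under-fits (both errors rise from bias).
    print(f"k={k:>3}: train MSE={train_mse:.3f}, test MSE={test_mse:.3f}")
```

Running something like this, then asking a chatbot to explain the pattern and challenging its answer, is the vet-and-challenge workflow the panelist describes.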

SPEAKER_04

I want to make sure I hear from everyone on this, so this might be a bit ambitious of me, but I did want to know: given your own experiences with AI and your research, what do you think is missing or misunderstood in dominant public discussions about AI? What is a key takeaway you wish the general public knew that they don't already? I don't know if each person can do that in about a line or two. Oh yeah? Okay, then you've got 75 minutes for that; really just think your way through it.

SPEAKER_00

Yeah, I'm happy to start briefly. I'll just say that, while conversational technology itself is not new, in terms of public adoption and perception gen AI is fairly new. So we're still in a period of collective sense-making; we're still trying to figure this out. And it's interesting to see what has happened historically to other technologies when they were at this stage. We see some patterns if we go back in history. I've seen social media, online dating, and texting just in my lifetime, but we've had television and radio and all sorts of technologies. What we tend to see as a pattern is that individuals swing between dystopian and utopian narratives about these technologies. On the one hand, there are the nerds, and I was one of them for lots of these technologies, who just get very excited and think it's going to make everything better. On the other hand, there are the skeptics, the people who are really worried that this is going to end the world as we know it and have catastrophic effects. Over time, as we study more and learn more and become more sophisticated and nuanced, we see that there are always effects; these technologies do something, but the effects are usually nowhere near as large as either the skeptics fear or the optimists hope. So my advice is to be patient and see where this takes us. We're all studying it and thinking about it critically and trying to figure it out, and we'll see where we go.

SPEAKER_01

Yeah, I think this follows up on that nicely. I'll say two things. The first follows up on the pessimism and optimism point. I sort of think that if anyone seems very, very confident about where we're going with all of this, say in the next 10 years, and they're trying to convince you of either a very optimistic story or a very pessimistic story, they're also trying to sell you something. I think the honest answer is that even those at the top of the companies producing this technology don't really know. The second thing, and this seems to be common ground between the optimist and the pessimist, is the assumption that whatever the outcomes are going to be, they're inevitable: there's nothing we can do about it now; we're already headed down this road. I don't think we should assume that. I think, if we get worried about things, there's more we can do to proceed with caution on all of this, and to pick and choose our interventions and our deployments of these technologies, than many people would like you to believe.

SPEAKER_03

Three things. First, AI is not one thing, and the best parable to use is the one about the elephant: a bunch of blind people in a room with an elephant, each with a hand on a different part, telling us all different things. That is true about AI. Second, and this follows up on both of those points, I would encourage us to turn the lights on and talk to one another about what's happening, and engage in this conversation. And finally, the title, I'm staring at the slide, talks about humans. I would really caution us to also think about the earth in this story. There are really significant impacts of AI on the environment, and if we keep leaving that out of the conversation, it becomes only about us Homo sapiens, and it's not only about us. Some of the technologies you were rattling off had some of those implications, but not nearly like this one does.

SPEAKER_05

One of the key things I try to keep at the forefront is thinking about issues of AI and engagement. You heard me mention engagement in my own work: we need to design a system that people want to use over time, because if they're not using the system, they're not going to get the clinical benefits we want from it. But when I said it, I said optimize engagement, because I also don't want to develop a system that the users are in all the time. I want them out living their lives. I might want them checking in, getting an idea about a support activity that could help them that day, and then going on with their lives. So the goal is the optimal amount of engagement for the outcomes I want, which are good mental health and health in general. Where I worry is that many of the developers of AI also care a lot about engagement, but they want you in there all the time, to their benefit. There's a business model that developed first around social media with ad sales, and you see it contaminating AI systems as well: systems that just want to keep you in there, the sycophancy, the other sorts of things designed to keep you in the system. That's where the harms are as well. So I think we as a society have to think about how we can move to other models, where the benefits to the groups developing these systems don't come just from capturing and maintaining our attention.

SPEAKER_04

I guess I'd just conclude by saying that, for me, one of the big things is to not over-mystify this thing. These products sometimes feel very magical when you're interacting with them, and it's on us, I think, to realize: no, this is a set of calculations; this is a set of data it's pulling on; this is how it works. Technically, I may not be able to describe all of that because I'm not a computer scientist, but I can know enough about this thing to know that this isn't just pure magic happening right here. Keeping that always in perspective, I think, is key. And with that, I'll stop here.