Evan Meyer

Evan Meyer welcomes Ramayya Krishnan. Ramayya is an expert on digital transformation and has worked extensively with firms and policymakers on using technology. He is the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at Heinz College and the Department of Engineering and Public Policy at Carnegie Mellon University. A faculty member at CMU since 1988, Krishnan was appointed Dean of the Heinz College of Information Systems and Public Policy in 2009.

Meyerside Chats seeks to eliminate the “us and them” narrative and toxic polarization by striving to create virtuous community leadership and authentic conversation. The intent is to showcase the humanity in those who take on the often thankless jobs of public service through civil discourse and by honoring differing points of view.

Ramayya Krishnan

Krishnan was educated at the Indian Institute of Technology and the University of Texas at Austin. He has a bachelor’s degree in mechanical engineering, a master’s degree in industrial engineering and operations research, and a PhD in management science and information systems. He is an expert on digital transformation and has worked extensively with firms and policymakers on using technology and analytics to achieve policy goals. He is well known for his work in e-commerce and information risk management where he has made seminal contributions to technology management and policy. His current research interests are in the responsible use of AI and in data-driven approaches to support workforce development.

About Evan Meyer

Evan is the Founder of BeautifyEarth.com, a tech platform and marketplace that fast-tracks the urban beautification process through art, as well as the original 501(c)(3) sister organization and public charity that beautifies schools in the communities that need it most. Beautify has now facilitated thousands of murals around the planet, working with hundreds of communities, community organizations, cities, and national brands.

He is also the Founder of RideAmigos.com, a tech platform that optimizes commuter travel and behavior through intelligent programs and analytics for governments, large enterprises, and universities, serving many regions across the US.

As a civic leader in the City of Santa Monica, he is the past Chairman of his neighborhood (Ocean Park), giving residents a voice in the public process, as well as helping the City of Santa Monica with innovative, actionable ways of civic engagement.

Podcast Highlights

[00:01:41] Exploring the use of cloud technology in society and policy-making.

[00:02:11] Discussing decision-making models in voting for the public and policymakers.

[00:09:01] The importance of having an informed populace in a democracy.

[00:12:16] The influence of technology on shared experiences and bridging differences.

[00:14:01] Building relationships and bridges between different political opinions.

[00:16:00] The role of being a good listener and demonstrating respect in conversations.

[00:17:43] Overlap between AI and politics in policy making.

[00:18:00] AI for policy making and policy for AI.

[00:23:00] Policy catching up with technological change and the role of AI.

[00:24:47] Contrasting the lack of understanding of social media by policymakers in the past.

[00:25:25] The potential of AI to educate and provide faster access to information for policymakers and the public.

[00:27:00] AI’s role in helping understand complex public interest issues.

[00:29:48] Mitigating AI’s negative effects through measures like provenance and watermarking.

[00:32:14] Using reputation and crowdsourced approaches to evaluate information sources.

[00:33:00] The absence of a perfect solution to the information problem.

[00:33:33] Cautions about AI’s predictive capabilities and data representation.

[00:35:37] The potential benefits of AI in supporting decision-making and speeding up processes.

[00:36:13] Importance of representative data for predictive models and the risk of bias.

[00:37:00] Overcoming bias in data to ensure representation of society as a whole.

[00:39:48] Responsible AI implementation, understanding the system as a whole, and differential impact on individuals.

[00:40:23] Augmentation for democracy and the need for careful consideration of the pipeline.

[00:41:00] Assisting policymakers with AI tools to catch up with policy-making demands.

[00:43:35] Vision for a more prosperous, peaceful, and just society using technology for good.

SUMMARY KEYWORDS

Ramayya Krishnan, Digital Transformation, Technology and Analytics, Information Risk Management, Responsible use of AI, Consumer Behavior, Misinformation, Disinformation, Civic Humility, Technology and Information Accuracy

Transcript

[00:00:00] Evan Meyer: Okay. Hi everyone, and thanks for joining another Meyerside Chats. Today we have Ramayya Krishnan. He is the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at Heinz College at Carnegie Mellon, and in the Department of Engineering and Public Policy as well. He has been a faculty member at CMU since 1988, and was appointed Dean of the Heinz College of Information Systems and Public Policy in 2009. He has a bachelor’s in mechanical engineering, a master’s degree in industrial engineering and operations research, and a PhD in management science and information systems. He’s an expert on digital transformation and has worked extensively with firms and policymakers on using technology and analytics to achieve policy goals.

[00:00:48] Evan Meyer: He is well known for his work in e-commerce and information risk management, where he has made seminal contributions to technology management and policy. His current research interests are in the responsible use of AI and in data [00:01:00] driven approaches to support workforce development. He’s won numerous awards and serves on more boards and commissions than this introduction can handle, including for Governor Tom Wolf of Pennsylvania and for our current president.

[00:01:13] Evan Meyer: And I am thrilled to have him here today. Thank you, sir, for being here. How are you?

[00:01:21] Ramayya Krishnan: I’m doing great. Thank you again for inviting me to be part of this.

[00:01:25] Evan Meyer: Yeah. You have a ton of wonderful knowledge in some really cool areas. As a fellow technologist with a somewhat similar background, at least from the early educational standpoint, I can really appreciate how cloud technology lends itself

[00:01:41] Evan Meyer: not just to society, but to policy. And I was thrilled to ask you a few questions as they relate to a variety of topics, starting with the fun topic of voting, since you’ve studied decision and choice models for e-commerce systems.

[00:02:00] And I was wondering from your experience how this logic can be used to support

[00:02:04] Evan Meyer: decision making around voting, for both the public and for policymakers. Have you thought about that, and where can it apply?

[00:02:11] Ramayya Krishnan: To be honest, I’m not a deep expert in how people vote or how people make those choices. But to your question about how choices could be made, one has to make a couple of assumptions here.

[00:02:24] Ramayya Krishnan: If you think about decision making as a rational process, you could imagine people gathering information about either candidates or about whatever is on the ballot, then coming to a determination of what serves not only their interest but perhaps the interest of society as a whole, and then making the choice based on that.

[00:02:51] Ramayya Krishnan: But we all recognize that it’s not an entirely rational process, right? There are psychological factors that go into this. There are factors related to how charismatic a leader is. So it’s not purely about the substance of this analytic approach to decision making.

[00:03:12] Ramayya Krishnan: There are these other psychological factors. So to your question, while I’ve not studied voting in and of itself, I’ve studied other kinds of consumer behavior and behavior in other settings, and it’s some combination of rational, data-informed decision making combined with psychological factors that then determines how people make decisions.
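In the choice-modeling literature Krishnan works in, that combination of "rational" attribute evaluation plus psychological factors is often captured with a multinomial logit model. Here is a minimal, hypothetical sketch; the attributes, weights, and candidates are all invented for illustration, not drawn from any real study:

```python
import math

def choice_probabilities(options):
    """Multinomial logit: softmax over each option's total utility.

    Utility mixes a 'rational' attribute score (policy_fit) with a
    psychological term (charisma). Both weights are hypothetical.
    """
    utilities = [0.6 * o["policy_fit"] + 0.4 * o["charisma"] for o in options]
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

candidates = [
    {"name": "A", "policy_fit": 0.9, "charisma": 0.2},  # strong on substance
    {"name": "B", "policy_fit": 0.4, "charisma": 0.9},  # strong on charisma
]
probs = choice_probabilities(candidates)
```

Raising the charisma weight shifts probability toward candidate B, which is one way such models make the "it's not purely substance" point quantitative.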

[00:03:37] Evan Meyer: Yeah, I guess one of the things that was interesting to me was thinking about that maybe not even just for the president or who you’re electing, but when you’re voting on local or regional state issues and measures, those decision choice models seem like they can lend themselves to applying probably even more relevantly.

[00:03:59] Evan Meyer: [00:04:00] You’re right, charisma is a tough one, right? Who’s taller is a tough one too. Maybe that’s something that could be included in the model. I don’t know.

[00:04:07] Ramayya Krishnan: But implicit in this is the fact that you have really good information upon which you’re making this decision. So people have to educate themselves

[00:04:20] Ramayya Krishnan: about a given issue. Let’s take not candidates but, say, an issue, which could be a bond issue, or one related to approving funding for some public interest project. One has to acquire the information before you can apply this more rational approach that we’re talking about.

[00:04:39] Ramayya Krishnan: Oftentimes there might be noise in the information that you’re getting, if you will. There could be misinformation, there could be disinformation, there could be all of that. The other piece is, even if there is good information out there, the willingness of individuals to invest the time and effort to actually [00:05:00] acquire the base of information upon which to make this decision.

[00:05:04] Ramayya Krishnan: That’s why we rely on intermediaries to try and help us acquire the information, because people have lots of things to do. They have families to take care of, they have kids to take care of, and they’re balancing a bunch of different things. So in reality, not everybody is this perfectly informed, rational individual making the decision.

[00:05:25] Ramayya Krishnan: That is the point I was trying to make: that one inevitably works with constraints on time and constraints on how complete your knowledge is of the issue that you’re voting on. But in total, I think that’s the system that we have, and it works out quite well. But the idea of having an informed populace is important.

[00:05:45] Ramayya Krishnan: The idea of the people willing to invest the time and effort to acquire the information, to make these decisions is important. That’s one of the foundations of democracy, right? That we have an informed populace that’s willing to make these decisions.

[00:05:59] Evan Meyer: How do we [00:06:00] make sure that the information that people do get is,

[00:06:04] Evan Meyer: I wanna say the word factual? Yeah, it’s a tough word these days, right? Because everyone’s definition of fact, I don’t even think, is the same. It probably should be, if we’re gonna get on the same page from a toxicity and polarization standpoint.

[00:06:20] Ramayya Krishnan: But what do we do?

[00:06:22] Ramayya Krishnan: I think there are two different issues, right?

[00:06:26] Ramayya Krishnan: One is the information is out there, but people don’t invest the time to actually go out and acquire it, and therefore form incorrect perceptions of what it is they’re, quote unquote, making decisions about. This could be true of products, and it could be true of the kind of voting contexts you’re talking about.

[00:06:48] Ramayya Krishnan: So here it’s not that the information has been affected. It’s there, but you haven’t invested, or I haven’t invested, the time it took to [00:07:00] acquire a good enough understanding, and therefore I’m voting on incorrect perceptions of what it is I’m supposed to be making a decision about. The second category, which has become hugely complicated, has to do with the fact that the signal-to-noise ratio has changed considerably in an era where the costs of producing information keep going down. Compare that to a time when everybody listened to Walter Cronkite, way back when.

[00:07:32] Ramayya Krishnan: There were very few information sources. You could obtain it from the three or four different television newscasts. Today you have personalized information, you have social media, you have so much information and algorithms curating what you actually get to see, and therefore, in that world.

[00:07:55] Ramayya Krishnan: And now it’s going to be even further complicated by the use of generative AI, which has further [00:08:00] reduced the cost of producing information. So how one makes sure that one gets good information is actually a huge issue. I think you’re putting a finger on it: whether you call it misinformation or disinformation, even if you think of it purely as a signal-to-noise problem, there’s so much noise that extracting signal from it has become harder, and people just tend to limit themselves to what are called filter bubbles, that small group of people that you tend to interact and connect with. That further increases the likelihood of not having a broad shared base of information on which we collectively decide. Unless we have some shared ground truth that we all agree on, it becomes harder for societies to make decisions that are broadly beneficial. So I think that’s a complication, which is different from [00:09:00] the first kind of issue that I raised.

[00:09:01] Evan Meyer: You used a good term there. I missed the first word, but it was a telecommunications term for echo chambers. Something

[00:09:07] Ramayya Krishnan: bubbles. Yeah. Filter bubbles.

[00:09:09] Ramayya Krishnan: Filter bubbles. There you go. It’s like echo chambers.

[00:09:12] Evan Meyer: It’s like an echo chamber. Okay. Yeah, I got the main idea there, and I think that’s a good way of putting it, actually. The way to relate it, maybe, for people: if you’re looking for a product and you start filtering, you’re gonna get your product. The internet is a perfect way to find exactly what you look for. There’s a comedian, Bill Burr, who does a funny bit about going to I’m-right.com. It’s really easy to find if you just search.

[00:09:42] Evan Meyer: What you think you’re gonna get, you’ll get. And that adds another level of separation from, we’ll call it, truth or fact. And getting back to center is tricky. One of the things that has frustrated me is that I think we need some [00:10:00] level of humility about the information that we have and how much time we’re willing to put into it.

[00:10:06] Evan Meyer: A lot of what I see is that there is this lack of civic humility, we’ll call it, in the information, in what we think about politics, in holding our opinions and beliefs so strongly and being unwilling to shift or accept a different perspective. And I believe that one of the good things about comedy is that it takes all of the uniquenesses of people, puts them on the table, and makes jokes about them.

[00:10:35] Evan Meyer: It says we all have funny things. Yeah. And we’re all interesting. If I’m Jewish, I can make Jewish jokes. If you’re Black, you can make Black jokes more easily. Some people can cross those lines, and they figure out how to do it. I do think that’s part of the answer.

[00:10:53] Evan Meyer: Being able to, all of us, come together and say, we all have our things. What do we need to do [00:11:00] to get to a place where we can accept some level of humility around the information we have a little bit better? And is there a way that technology can support that?

[00:11:13] Ramayya Krishnan: By the way, I’m smiling because when I was in school, this is way back when, right?

[00:11:18] Ramayya Krishnan: Thirty years ago, there used to be reruns of a show called All in the Family and

[00:11:23] Ramayya Krishnan: The Jeffersons. I don’t know if you remember these shows. You’re too young, perhaps. So, I’m familiar. I think they actually used comedy to shine a light on society and how all these transformations were happening, be it about race relations or about the working class versus the non-working class and the conflicts that arose, et cetera, but did it in such a fun way that people could laugh about things, like you just said.

[00:11:53] Ramayya Krishnan: I think where the technology can be both a benefit but also a [00:12:00] constraint is, take today’s world, right? We have moved to a model, even with television, where we tend to watch stuff on our own time. You time-shift stuff; you tend to have shows that

[00:12:16] Ramayya Krishnan: you binge watch. These are all things that didn’t exist 30 years ago. With Netflix you binge watch a whole bunch of episodes at one go. Now there are shows that are watched by a large number of people, and we have a shared experience. You can have a water cooler conversation about Game of Thrones or something like that.

[00:12:35] Evan Meyer: I don’t binge watch, but I know a lot of people do. I know that’s a big deal. It’s not part of my thing, but no, I know.

[00:12:42] Ramayya Krishnan: But my point is, for you to have a shared experience, a large number of people should have experienced it, and that experience allows you to laugh and think about the differences that we have

[00:12:58] Ramayya Krishnan: but allows us to bridge those [00:13:00] differences. And I think I was mentioning to you, at Carnegie Mellon we want on the one hand, in fact we have an initiative around freedom of expression, that allows for people to speak their mind and be able to communicate their points of view. But hand in hand with that

[00:13:22] Ramayya Krishnan: should be the capacity to be good active listeners, so that you’re willing to listen to people who hold opinions other than your own, be able to have a laugh and enjoy the company of others who are not exactly like you, and be able to deal with and defuse conflict, if you will.

[00:13:40] Ramayya Krishnan: It’s like conflict resolution skills and negotiation skills. These are, I think, a package of skills that we acquire as we grow up over time. But I think it becomes even more important in today’s world, where you want to have this capacity, like you were saying, to have shared experiences and enjoy each [00:14:00] other’s company even though you might

[00:14:01] Ramayya Krishnan: have a different opinion than mine about some particular issue; we should still be able to work together and talk to one another. I was with former President Bill Clinton at an event, and he was mentioning how he was bringing together younger Republican and Democratic leaders around food, around lunches and dinners.

[00:14:24] Ramayya Krishnan: With the idea that they would build relationships, human relationships, because human relationships are the key to building those kinds of bridges that I think you’re referring to. Yeah.

[00:14:35] Evan Meyer: One of the things I always said is, if people would just have coffee with one another, if everyone who disagreed with each other would just have some coffee and talk about things other than politics,

[00:14:49] Evan Meyer: I think you’d eliminate like 50% of the toxicity. And a while back, Newt Gingrich introduced a policy that allowed folks in the [00:15:00] Senate and Congress to live in their localities and not have to live in DC. And there were arguments for why that was good, which make a lot of sense.

[00:15:11] Evan Meyer: You wanna be home with your people, representing your people, and you need to be out there with them, representing where you’re from, not DC. However, when they were in DC, they were having a lot more coffee together. They were getting to know each other, going out for dinner.

[00:15:27] Evan Meyer: And even at the local level, you realize, wow, you make so many assumptions about people before you even meet them. You’re filled with all these predispositions and prejudices about people’s beliefs or who they are, and you start categorizing them and labeling them, and you don’t realize how many things you have in common.

[00:15:47] Evan Meyer: Maybe your kids go to the same school, maybe you both love the same sports team, or maybe you both have the same childhood promise. There’s just so much that people have in common and I think a [00:16:00] coffee could be the answer

[00:16:01] Ramayya Krishnan: for a lot of it. No, I think human relationships, and learning these skills that I’m referring to, are great ways of building the bridges.

[00:16:11] Ramayya Krishnan: Really important. I agree. Yeah. Yeah.

[00:16:13] Evan Meyer: In reference to what you’re saying, how much do you need to prove your point before you get into, like, a battle with somebody? For example, do you have to prove your point in a conversation you’re having with someone if it’s gonna escalate?

[00:16:29] Evan Meyer: Or do you have the self-control to back down? That’s part of the negotiation, conflict resolution kind of stuff, right? And I think there’s a psychological element to it. That’s been my experience, at least.

[00:16:49] Evan Meyer: When people are having these conversations around politics, before they get into that war zone, what do you think needs to shift in that moment, from a psychological standpoint? Maybe, how do you do it?

[00:16:59] Ramayya Krishnan: [00:17:00] This is far afield from my field of expertise as an academic, by the way.

[00:17:03] Ramayya Krishnan: I think the key to this, I’d say, beyond the conflict resolution and the negotiation piece, is that it begins with being a good listener. If we can be good listeners, that shows respect. It shows your willingness to hear the other person’s opinion.

[00:17:22] Ramayya Krishnan: Doesn’t mean you have to agree with them, but at least you demonstrate the capacity to be able to listen. And I think that’s the beginning of the

[00:17:30] Evan Meyer: process. I completely agree. You’ve done a lot of work in AI, you advise a lot in AI. Where have you seen the overlap between AI and politics and policy making in your roles?

[00:17:44] Ramayya Krishnan: Not so much in politics, but in policy making, you could imagine two aspects. One is AI for policy making; the other is policy for AI. It [00:18:00] goes both ways. So for instance, think about ways in which you could allocate scarce resources. Let me give you a concrete example.

[00:18:10] Ramayya Krishnan: Imagine, in fact, this is a problem that some of my faculty members and colleagues have actually worked on. It’s called the eligible-but-not-enrolled problem. That is, people are eligible for certain benefits. Say, let’s say you are a single woman.

[00:18:31] Ramayya Krishnan: This is like the SNAP program, nutritional benefits for single moms with kids. You’re eligible for this, but you’re not enrolled in the program. And the Department of Human Services, a government agency, oftentimes at the county or state level, is trying to understand this.

[00:18:52] Ramayya Krishnan: Why is somebody who’s eligible for this program not enrolled? First of all, [00:19:00] you’d like to know who these families are that are potentially eligible but not enrolled, and then try to identify the root cause of why they’re eligible but not enrolled. A common reason could be that maybe they don’t have access to a computer and don’t have facility with getting on the internet, getting on a website, and enrolling themselves.

[00:19:26] Ramayya Krishnan: And that’s one of the reasons why they’re eligible but not enrolled. If that’s the root cause, then you could say, you know what, I could send a case worker out. The case worker here is a scarce resource, but I could send that case worker out to that family to sit down next to that individual. Let’s say I’m the person who needs this assistance, and the case worker comes to my home.

[00:19:50] Ramayya Krishnan: I have to schedule that case worker to come to my home when I’m at home, not when I’m at work. So there is a scheduling problem, which is a problem that AI can help resolve. [00:20:00] And they come and sit by my side and help me get enrolled. Now, that’s an example of an AI-based intervention. One, I had to predict which families were likely eligible but not enrolled.

[00:20:14] Ramayya Krishnan: I had to identify the root cause of why they were not enrolled. Then, having figured out that the root cause was that they needed assistance getting on the internet and enrolling, I had to schedule a scarce resource, in this case a case worker, to go to their home at the right time, sit by their side, and help them get enrolled.

[00:20:34] Ramayya Krishnan: That’s an AI-based intervention, and if I trial that intervention, I can determine whether it actually helps the individuals get enrolled. So that’s an example of AI in support of policy making, or policy implementation, because there’s a policy that says we’d like to support these individuals with a particular government program.[00:21:00]

[00:21:00] Ramayya Krishnan: But we find that people who are eligible are not enrolled, and here’s a solution. So this is a very concrete example of using AI-based approaches, or data-driven approaches, or data analytics, in support of policy implementation. The other side, going the other way, is policy making for AI. For instance, certifying AI systems: the National Institute of Standards and Technology has created something called the AI Risk Management Framework.

[00:21:26] Ramayya Krishnan: When AI-based solutions are to be deployed for high-risk applications, let’s say for resume processing, for people getting hired into jobs. Today when people apply for jobs, so many resumes come in. There’s an AI that might process a resume and decide whether to include it for further processing or not.

[00:21:51] Ramayya Krishnan: Or when you go to a doctor, you could imagine AI being used to read X-ray radiological images, [00:22:00] for instance. These are all applications where the risk of not getting it right is so high that you’d like to ensure the AIs deployed in those contexts are certified, so you know they’ve been responsibly developed, and when they’re actually deployed,

[00:22:21] Ramayya Krishnan: we understand what the potential harms might be. So that’s an example; the EU is now creating an AI Act where they’re saying, based on this risk tiering of AI systems, we’d like to have different regulations governing the deployment of AI systems.

[00:22:41] Ramayya Krishnan: That’s policy making for AI; it’s about AI. So these are two ways in which you can think about AI and policy making interacting with one another: one is AI being used in support of policy implementation, and the other is public policy being created to regulate [00:23:00] or govern AI, if you will.
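The eligible-but-not-enrolled pipeline described above, score families by predicted need, attach a root cause, then assign scarce case-worker hours, can be sketched end to end. This is a hypothetical illustration, not the actual system Krishnan's colleagues built: every field name, score, and root cause below is invented.

```python
# Step 1 (need_score) would come from a trained classifier in practice;
# step 2 (root_cause) from a diagnostic model or survey; step 3 is a
# simple greedy scheduler over scarce case-worker hours.

def schedule_caseworkers(families, caseworkers):
    """Assign free case-worker hours to the neediest matching families."""
    # Visit families in order of predicted need, highest first.
    pending = sorted(families, key=lambda f: f["need_score"], reverse=True)
    free_hours = {w["name"]: set(w["hours_free"]) for w in caseworkers}
    assignments = []
    for fam in pending:
        if fam["root_cause"] != "needs_in_person_help":
            continue  # other root causes call for other interventions
        for worker, hours in free_hours.items():
            overlap = hours & set(fam["hours_home"])
            if overlap:  # book the earliest hour both are free
                hour = min(overlap)
                hours.remove(hour)
                assignments.append((fam["id"], worker, hour))
                break
    return assignments

families = [
    {"id": 1, "need_score": 0.92, "root_cause": "needs_in_person_help", "hours_home": [18, 19]},
    {"id": 2, "need_score": 0.75, "root_cause": "not_aware_of_program", "hours_home": [9]},
    {"id": 3, "need_score": 0.81, "root_cause": "needs_in_person_help", "hours_home": [18]},
]
caseworkers = [{"name": "cw1", "hours_free": [18, 19]}]
plan = schedule_caseworkers(families, caseworkers)
```

The trial Krishnan mentions would then compare enrollment rates for families who received a visit against those who did not.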

[00:23:01] Ramayya Krishnan: Does that help? These are two concrete examples.

[00:23:04] Evan Meyer: Yeah, for sure. Do you think there’s that kind of age-old problem of policy just never being able to catch up to technology? How are we doing with that? Can AI help, or is it gonna make that distance even farther?

[00:23:17] Ramayya Krishnan: I think it’s true that the pace of technological change outstrips policy.

[00:23:21] Ramayya Krishnan: Let me give you an example, right? Everybody’s been using, or is at least aware of, ChatGPT since it came out in November of ’22. So it’s been, what, seven, eight months? With the pace and rapidity with which it’s been adopted and deployed, the public policy around generative AI is just getting caught up.

[00:23:44] Ramayya Krishnan: So it’s absolutely true that in this case, and in many cases, technology leads public policy, and I think it’s really important for our public policy makers to be better educated about the technology. It’s not just that [00:24:00] technology leads public policy; our public policy makers need to get better educated about technology.

[00:24:07] Ramayya Krishnan: And I must say, I dunno if you saw the Senate hearing on AI where Sam Altman, Gary Marcus, and Christina Montgomery were testifying. I thought the senators asked very good questions, and I thought the witnesses gave great responses too.

[00:24:26] Ramayya Krishnan: I thought that was a really good example of our public policymakers getting caught up. This is in contrast to, I dunno if you remember, Mark Zuckerberg testifying in Congress on social media pre-pandemic. There, it seemed like our public policymakers weren’t quite aware of how exactly social media worked.

[00:24:47] Ramayya Krishnan: We talked about filter bubbles and recommendation algorithms; they didn’t see how that model worked or what the business model was that drove social media. But I think we’ve come a long way [00:25:00] and for the better. So in that sense, to your question, yes, technology leads public policy, but on the other hand, I’m hopeful that our policy makers are investing the time and effort.

[00:25:11] Evan Meyer: Hope is good. I like to hear hope. Do you think that we can start to use AI in a way that can both educate us on

[00:25:25] Evan Meyer: issues and maybe get us the information around those issues faster, whether it’s for the public or for the policymakers? How can we use it so that they can have better access to the information that’s important and make more informed votes?

[00:25:45] Evan Meyer: Is that possible?

[00:25:45] Ramayya Krishnan: If you take a step back and think about what AI can do, think about getting access to information more broadly, even as education. I dunno if you’ve seen this Khan Academy [00:26:00] video of Sal Khan demonstrating how he’s integrated ChatGPT, driven by GPT-4, into

[00:26:11] Ramayya Krishnan: Khan Academy. For those of your viewers who may not be familiar with it, Khan Academy is a great learning platform, a great education platform, for kids in particular, but even for older kids or adults to acquire knowledge of different kinds of content.

[00:26:29] Ramayya Krishnan: It doesn’t have to be just math and algebra and computer programming. It can be history, it can be English, it can be different topics. And what Sal’s demonstrating here is that with the AI, it becomes like a patient teacher: it allows you to make mistakes, gives you hints, corrects your errors, corrects your concepts so that you achieve better learning outcomes.

[00:26:53] Ramayya Krishnan: So take that as an example of AI for good, AI [00:27:00] in action for good.

Evan Meyer: What do you mean, corrects? Can you explain that?

Ramayya Krishnan: For instance, let's say that you're learning how to do a simple algebra problem, and you make a mistake in how you're translating a word problem into an algebraic representation.

[00:27:16] Ramayya Krishnan: The AI can give you hints about where you went wrong. It gives you alternative formulations and alternative examples. In other words, it's not giving you the answer; it's giving you the support, teaching you the concepts so that you can correctly translate the word problem into an algebraic formulation.

[00:27:34] Ramayya Krishnan: These are things that we probably learned in middle school and later. But in the absence of a patient teacher or instructor, maybe you don't get the concepts down correctly, in which case that gap in your understanding prevents you from learning other, more complicated concepts.

[00:27:55] Ramayya Krishnan: So think of that as a metaphor. Now say I'm trying to understand an issue that's in the public interest, and I'm trying to gather information about it. You could imagine AI for good (it has to be trusted, it has to be responsibly developed) playing the same role as in that Khan Academy setting, actually helping you acquire

[00:28:22] Ramayya Krishnan: Information to educate you about a fairly complex issue, which could be, for instance, about bond funds for improving roads or parks. It can tell you all about the budgets and how they're to be used, and why better parks and better public playgrounds are going to be helpful for the community, et cetera.

[00:28:45] Ramayya Krishnan: But you can also have AI, the same technology, being used in ways that add a whole lot more noise, that are manipulative. You've heard about deepfakes, for instance, I'm sure, [00:29:00] which is a video that sounds just like you've been speaking, but puts words in your mouth.

[00:29:06] Evan Meyer: I've seen Arnold Schwarzenegger as the face of just about every character now.

[00:29:10] Ramayya Krishnan: So you could imagine that the same AI that could be used for positive, societal good could also be used in a malicious way. But that's probably true for a lot of technologies. So, to your point about whether AI can be used in a positive, beneficial way:

[00:29:28] Ramayya Krishnan: Yes, absolutely. Can it also be used in a malicious way? Yes. So the question is, how do you ensure that you get societally beneficial uses of AI, especially upon deployment? There are a number of approaches that people are thinking about. Let me give you one example.

[00:29:48] Ramayya Krishnan: Take the whole issue of provenance, which is this idea that when something is written, you know whether it was created by a human or by an AI bot. Creating a watermark that shows [00:30:00] whether it was produced by an AI bot or a human is an example of something that's being proposed, especially with elections coming up in 2024.

[00:30:10] Ramayya Krishnan: The question is, to what extent can we require, either through policy or by law, that when you have election-based advertising, you clearly say whether it was produced by an AI. If it was produced by an AI, you need to come out and say so, and here is the provenance, and here is the watermark that shows that this was indeed the case.

[00:30:33] Ramayya Krishnan: So there are some approaches to try to mitigate the noise problem that I talked about earlier. Anyway, that's a long answer, but I hope it helps.
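
The watermark-plus-provenance idea described here can be pictured with a minimal sketch: a generator attaches a tamper-evident origin label that a key holder can later check. The key, messages, and scheme below are purely illustrative; real proposals (public-key signatures, C2PA-style manifests, statistical watermarks embedded in the text itself) are far more involved.

```python
import hashlib
import hmac

# Hypothetical key held by the AI provider. Real provenance systems use
# public-key signatures so anyone can verify without a shared secret.
PROVIDER_KEY = b"illustrative-secret-key"

def stamp(content: str) -> dict:
    """Attach a provenance record declaring the content AI-generated."""
    tag = hmac.new(PROVIDER_KEY, content.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "origin": "ai", "tag": tag}

def verify(record: dict) -> bool:
    """Check that the content still matches its provenance tag."""
    expected = hmac.new(PROVIDER_KEY, record["content"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

ad = stamp("Measure B funds parks and road repair.")
print(verify(ad))       # True: the origin label is intact
ad["content"] += " Paid for by nobody."
print(verify(ad))       # False: the content was altered after stamping
```

The point is not this particular scheme but the property it illustrates: a label that travels with the content and breaks if the content is quietly changed.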

[00:30:42] Evan Meyer: Yeah, I think that helps. There's also, I suppose, the question of reputable sources. A lot of it comes down to where you're getting your information from.

[00:30:51] Evan Meyer: Exactly. Where are you reading things from? Are you checking your sources? Are you making sure that this is from someone reputable? I tend to get a lot of my information [00:31:00] from people that I respect: people in political science, historians, people in academia, a lot more than I'd get it from, say, mainstream media.

[00:31:12] Evan Meyer: I think that filter helps me say, wow, I trust the values and morals of this individual. I think they come from a good moral standing, and that to me is a good source of information. I think most people should do that with wherever they're getting their information from.

[00:31:27] Evan Meyer: But you would think it's going to be the responsibility of those people, or the networks, to recognize that what they put out there is more important than ever. And you're starting to trust, so you're like, okay, this person has given this in their lecture; they must have done their research to figure out that this is not fake information.

[00:31:52] Evan Meyer: You would hope unless they got fooled too.

[00:31:55] Ramayya Krishnan: Yeah, so I think this is actually a really challenging problem, by the way. [00:32:00] But one approach might be something like what happens when you buy a bottle of wine, just to give you an example: whether it's from California, or Oregon, or Washington, or Europe, from France or Italy.

[00:32:14] Ramayya Krishnan: That's provenance; that's the origin. That's like the example I gave you of whether this was created by a human or by an AI, et cetera. But then we also have reviewers of wine that you trust, who might say that bottle of cabernet is a 93 and this one is an 89. And you might like some wine reviewers more than others.

[00:32:37] Ramayya Krishnan: Maybe this person's 93, you think, eh, I don't really think that's that great, but this person's 88 might actually be good. You develop your own personal preferences based on trying things out, right? So I could imagine a crowdsourced approach, like a Yelp, if you will,

[00:32:56] Ramayya Krishnan: where you have five stars and a large number of people have [00:33:00] voted on how trustworthy this source of information is. But inevitably, none of these things is perfect. In other words, you mentioned that you have your preferred individuals; I might have my own, and we might arrive at very different points of view about the same problem.

[00:33:19] Ramayya Krishnan: So there is, unfortunately, no easy answer to this problem. But one could try to mitigate it; there are multiple approaches being discussed and offered as potential solutions.
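
The wine-reviewer analogy, where the same score means different things depending on whom you trust, can be sketched as a trust-weighted average. The reviewers, scores, and trust weights below are invented for illustration.

```python
# Raw scores from two hypothetical reviewers for the same bottle.
reviews = {"reviewer_a": 93, "reviewer_b": 88}

# Your personal trust in each reviewer, learned from past experience.
trust = {"reviewer_a": 0.2, "reviewer_b": 0.8}

# Trust-weighted average: the result leans toward the reviewer you trust.
personalized = (sum(trust[r] * score for r, score in reviews.items())
                / sum(trust.values()))

print(round(personalized, 2))  # 89.0, much closer to reviewer_b's 88
```

A crowdsourced five-star system is the same idea with many raters and uniform (or reputation-based) weights.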

[00:33:33] Evan Meyer: There's a gentleman by the name of Cesar Hidalgo. He did a TED talk, or a TEDx talk, on augmented democracy.

[00:33:44] Evan Meyer: And it blew my mind. It was this idea that we're training algorithms daily: Spotify, YouTube, Google, Facebook, TikTok. They're learning a lot about you. Now, [00:34:00] to some degree, you can determine how a person would vote just from that collection of data.

[00:34:05] Evan Meyer: You could make really good assumptions about how someone would vote on a particular issue based on the inputs and their behaviors, right? We're tracking everything, and advertisers know a lot more about this than we think, right? In bulk.

[00:34:25] Evan Meyer: So I'm not too scared, but when you have all that information, you can start matching it up and saying, wow, there's a 98% chance that this person would vote this way on this issue. This talk really got me thinking about how reasonable it is that AI can support us.

[00:34:43] Evan Meyer: Just to bring it back to the beginning a little bit: in voting, whether it's a bond issue or a judge. How many people do you know who do the research around a judge to [00:35:00] determine whether that's the right judge, or even how bonds work?

[00:35:04] Evan Meyer: So, with what you're talking about, you could use those kinds of tools to give people the right information: the type of information that can help you learn more, or maybe correct some of your assumptions. And it gets a little scary, but it's pretty cool, because an algorithm can start to help you know the way you think, probably even better than you can. And that can help politicians vote on a large number of bills way faster, right?

[00:35:37] Evan Meyer: I think that's where this got me very excited. I'm curious to hear your thoughts.

[00:35:40] Ramayya Krishnan: I haven't seen this TED talk, by the way; I look forward to listening to it and understanding it better. But if I heard what you're saying correctly, there are two or three comments that I'd like to make.

[00:35:54] Ramayya Krishnan: Not so much about augmented democracy, but about this idea of data, and [00:36:00] the ways in which it can be predictive of behavior. In your particular example, you're talking about voting behavior, but you could imagine purchase behavior, for instance, right? There are two things to keep in mind.

[00:36:13] Ramayya Krishnan: One is the first-order effect of using these predictive types of models. There, it's really important to make sure that the data you're basing these predictions on is representative of what it is you're seeking to make predictions about. Suppose we only had data about people who are, let's say, Asian and who live in this

[00:36:38] Ramayya Krishnan: particular neighborhood, and we tried to build predictive models using that data. We'd be wrong about other segments that might have different behaviors than that group. So this idea of biased data is something to really keep in mind and be very cognizant of.

[00:36:55] Evan Meyer: Restricted samples, essentially. I think that's what you're saying.

[00:36:57] Ramayya Krishnan: Yeah. And in [00:37:00] some cases, just by virtue of the fact that our society is structured the way it is, I might not have data about people who look like you, or who have my kind of background, et cetera. So the question is, when I have biased samples, how do I ensure that I get something more representative of society as a whole?
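
The biased-sample point can be made concrete with a toy post-stratification example: if one group is over-represented in the sample, reweighting each group by its true population share recovers a more representative estimate. All numbers below are invented.

```python
# group -> (respondents in the sample, observed support rate in that group)
sample = {
    "group_a": (900, 0.70),   # heavily over-sampled
    "group_b": (100, 0.30),   # under-sampled
}

# group -> actual share of the whole population
population_share = {"group_a": 0.50, "group_b": 0.50}

# Naive estimate: pool everyone, so the over-sampled group dominates.
total = sum(n for n, _ in sample.values())
naive = sum(n * rate for n, rate in sample.values()) / total

# Reweighted estimate: weight each group's rate by its population share.
reweighted = sum(population_share[g] * rate
                 for g, (_, rate) in sample.items())

print(round(naive, 2))       # 0.66, skewed toward group_a
print(round(reweighted, 2))  # 0.5, matching the true population mix
```

Reweighting only helps when every group appears in the data at all; if a group is missing entirely, no amount of reweighting recovers it, which is the harder problem raised here.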

[00:37:19] Ramayya Krishnan: So the data question is an important one. The second piece, I think, is this capacity to then build predictive models. I don't know how useful you think recommendation systems are; they're one kind of predictive model. When I'm on Amazon or Netflix, there are recommendations being made: hey, watch this movie because you watched that movie, and this is why you're seeing this recommendation.

[00:37:46] Ramayya Krishnan: In other words, this is called an item-based type of recommendation: people who watched this also watched this.

[00:37:54] Evan Meyer: That's the simplest form of it.

[00:37:55] Ramayya Krishnan: Yeah, it's almost like a correlation between [00:38:00] people who watch the same things. And you can generalize this by saying: take the features of this movie.

[00:38:05] Ramayya Krishnan: Say these are movies that are action, involve multiple international locales, involve car chases, and are less than 90 minutes long. Those are the features, abstracted from maybe three or four different movies that you watched and gave a thumbs up to.

[00:38:23] Ramayya Krishnan: Now I can take those features and ask: how many other movies fit that bill? And you can add to them: actors, types of directors, et cetera. That's where that recommendation approach comes from. But now, this is not a life-or-death decision.
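
The feature-based recommendation described here (abstract the features shared by thumbs-upped movies, then score unseen titles by fit) can be sketched as follows; the titles and feature tags are made up for illustration.

```python
# Feature sets for three movies the viewer gave a thumbs up to.
liked = [
    {"action", "car_chase", "international", "under_90_min"},
    {"action", "car_chase", "international"},
    {"action", "international", "under_90_min"},
]

# Abstract a taste profile: the features shared by every liked movie.
profile = set.intersection(*liked)  # {"action", "international"}

# Candidate catalog of unseen movies and their feature tags.
catalog = {
    "Movie X": {"action", "international", "car_chase"},
    "Movie Y": {"romance", "under_90_min"},
    "Movie Z": {"action", "international", "heist"},
}

# Score each candidate by the fraction of the profile it covers.
scores = {title: len(profile & feats) / len(profile)
          for title, feats in catalog.items()}

print(scores)  # Movie X and Movie Z fit the bill; Movie Y does not
```

The item-item form mentioned just before ("people who watched this also watched this") replaces these hand-abstracted features with co-viewing correlations learned from other users.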

[00:38:41] Ramayya Krishnan: If Netflix screwed up a recommendation of a movie to watch, I'm not particularly bothered. But if it was about a therapy to pursue, or an educational decision, or a decision about whom to screen in or screen out for a job, those things become [00:39:00] much more consequential.

[00:39:01] Ramayya Krishnan: So when the decision making is consequential, you really need to pay attention to the algorithms and their downstream impact and outcomes. Not just the output, but the outcomes, and the impact of those outcomes. So you need to think not just about the algorithm, but about the system as a whole.

[00:39:24] Ramayya Krishnan: What goes into the system? What comes out? What is the impact? Who's affected? Is there differential impact on some people more than others? All of that. So I think this applies before we think about augmentation in any form, be it for democracy, be it for anything more consequential than a Netflix recommendation, right?

[00:39:48] Ramayya Krishnan: I think we need to think about this pipeline very carefully; that's where the responsible AI part comes in. And as long as you have done that [00:40:00] due diligence, then I think we can ask the question: what are all the benefits that accrue from having this kind of capability to make predictions, from being able to really assist

[00:40:13] Ramayya Krishnan: in ways that actually might benefit the individual and benefit society? So that would be my take on it, without fully knowing what exactly augmented democracy is.

[00:40:23] Evan Meyer: Yeah. And look, one thing I know is that policymakers have teams that do a lot of this research,

[00:40:31] Evan Meyer: read the bills, give them the summaries, and help them make decisions. To me, a lot of that is very ChatGPT-able; we could probably do a lot of that, as it gets better, to assist. I don't want to say replace; for the sake of being careful, I don't want to say that. I guess what I want to say is: to catch up with the amount of policy that needs to be made,

[00:40:56] Evan Meyer: and how far behind we are, and how to use it to catch up. That's where I see it being a tool. If the algorithm were exposed, you'd say, oh, okay, this is what it's basing its decision on. The algorithm says this is our judge, and now we can start feeding it feedback: yes or no, was it right or wrong?

[00:41:12] Evan Meyer: Okay, great. We can feed more into the algorithm, but it's an assist. I see it as assisting, at least for the short term.

[00:41:18] Ramayya Krishnan: And I'm saying, as long as it's responsibly done in the ways that I was outlining. I called it a first-order effect for a reason: you can see the benefit that accrues from being able to build something like this in first-order terms, in terms of improving efficiency, effectiveness, equity, et cetera.

[00:41:37] Ramayya Krishnan: But the second-order impact is equally important. It's not just about improving efficiency and effectiveness; you also want quality and creativity to be enhanced. This is one of the amazing things about humans. Many of these AI tools, if not [00:42:00] thoughtfully deployed, will tend to replicate the more commonly occurring prior decisions rather than exploring new things.

[00:42:09] Ramayya Krishnan: In statistical terms, this is called looking at the head of the distribution versus the tail of the distribution. So the second-order thing to be mindful of, in this kind of application of AI in the manner that you're describing, is that

[00:42:24] Ramayya Krishnan: we don't want everybody to converge to some standard mean; rather, you want the capacity for creative new things to emerge as well. So you want to do this in thoughtful ways: pursue the first-order impact of improving efficiency, equity, and effectiveness, but do so in a way where you also don't lose out on the new things you want to see. That's the amazing thing about people: they're able to create new things.
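
The head-versus-tail point can be illustrated with a toy sketch: a system that always replicates the most common prior decision never surfaces rare options, while sampling in proportion to frequency at least keeps the tail alive. The option names and frequencies are invented.

```python
import random

# Hypothetical distribution of prior decisions.
prior = {"common_option": 0.90, "rare_option_1": 0.06, "rare_option_2": 0.04}

# "Replicate the most common prior decision": always pick the head.
head_only = max(prior, key=prior.get)

# Sampling in proportion to frequency still visits the tail sometimes.
random.seed(0)
draws = random.choices(list(prior), weights=list(prior.values()), k=1000)
tail_share = sum(d != head_only for d in draws) / len(draws)

print(head_only)         # always "common_option", every single time
print(tail_share > 0.0)  # True: rare options still appear under sampling
```

In a deployed system, that residual exposure to the tail is what leaves room for new or unusual choices instead of collapsing everything to the mean.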

[00:42:56] Ramayya Krishnan: Yeah.

[00:42:58] Evan Meyer: And I know you have a love [00:43:00] for art and music as well. That's one of the things that gets tricky around art and music and the stuff that's being created, right? It's interesting to hear and see this stuff, but it's starting to touch everything. A lot to think about.

[00:43:14] Evan Meyer: A lot of work to do. So, last question, and then I know we've got to jump. I do want to hear: what is your vision? You've done a lot of good work. You've put in years and years of hard work to build and create and inspire. What is your vision for the world?

[00:43:35] Ramayya Krishnan: That’s a big question.

[00:43:36] Ramayya Krishnan: Evan, first of all, I want to thank you for giving me this opportunity to speak with you. As somebody who is an educator and a parent, the vision is, without a doubt, that the next generation inherits an even more prosperous, peaceful, and just society. Without a doubt, right?

[00:43:58] Ramayya Krishnan: And everything that we are working [00:44:00] on, that we've been talking about, has in fact been about how we use technology for good in ways that actually achieve that outcome. So that would be my very short and sweet response.

[00:44:12] Evan Meyer: Awesome. Love it. It was great to see you and thank you, sir.

[00:44:17] Evan Meyer: Thanks for connecting with me today and I will see you soon.

[00:44:21] Ramayya Krishnan: Appreciate it. Have a great rest of the day. Bye-bye.
