AI Unpacked: Trust, Threats, and Trends with Dr. Srijan Kumar

Hosted by
H. Brown


Join Hanh Brown, host of Boomer Living Broadcast, as she dives deep into the world of web safety for older adults, AI reliability, and digital advancements with esteemed guest Dr. Srijan Kumar. This episode takes you through the maze of misinformation, online manipulation, and the challenges and promise that AI holds. From understanding the nuances of network modeling to exploring the latest in multimodal learning, Mrs. Brown and Dr. Kumar shed light on how the digital realm is evolving and what it means for our future. They also discuss the implications of these advancements for mental health, the role of peer correction, and the importance of trustworthiness in AI systems. Whether you’re a tech enthusiast or just curious about the digital world, this episode is packed with insights you won’t want to miss.

Delve deep into the dynamic world of web safety and AI robustness in our latest coverage:

– Spotting misinformation and understanding its impact on mental health.
– Unraveling techniques to detect online manipulation, hate speech, and fake reviews.
– Ensuring the trustworthiness and reliability of AI systems while safeguarding them from adversaries.
– Exploring cutting-edge advancements in network/graph modeling and the intricacies of large-scale network predictions.
– Discovering the latest in multimodal learning, focusing on robust vision-and-language integration.
– Glimpsing the horizon with insights into future research directions and the evolving landscape of web safety.

Join us as we navigate these crucial realms, paving the way for a safer, more reliable digital future.

You can find Srijan on LinkedIn: https://www.linkedin.com/in/srijankr/
👉 See our Website: https://podcast.boomerliving.tv/
🎙 Boomer Living Podcast: https://hanhdbrown.com/
👉 LinkedIn: https://bit.ly/2TFIbbd

Transcript:

Hanh:
Welcome to Boomer Living, a podcast dedicated to AI, the digital evolution, and their impact on the aging journey. I’m your host, Hanh Brown. Today we’re venturing into the world of artificial intelligence and its implications for senior well-being. As technology advances at breakneck speed, it’s both a boon and a challenge, especially for our aging population. Whether it’s the promise of telemedicine or the pitfalls of online misinformation, AI is reshaping the landscape of our digital age. Joining us is AI and machine learning innovator Srijan Kumar. With over 16 years of specialized experience, particularly in natural language processing and generative AI, he’s an assistant professor at Georgia Tech’s College of Computing, and his groundbreaking methods are at the heart of platforms like Flipkart and Twitter’s Birdwatch. With a stellar academic journey that includes roles at Google AI and Stanford University, and honors such as the NSF CAREER Award and a spot in Forbes 30 Under 30, I am thrilled to have him on Boomer Living Broadcast. In this episode we will delve into the transformative role of AI: its potential in areas like early detection of cognitive decline, safeguarding seniors online, and the broader societal implications of misinformation. So whether you’re a caregiver, a tech enthusiast, or just eager to understand AI’s role in the lives of the aging community, you’re in for an enlightening session. So, let’s dive in. All right. Welcome, Srijan.

Srijan:
Hi. It’s a pleasure to be here. Thanks for inviting me.

Hanh:
Yes. Thank you. And welcome. So to get started, please share with us a little bit about yourself professionally and personally. And then shed light on your research and how you approach the challenges of web safety and integrity.

Srijan:
Sure. So everyone these days uses the internet. I grew up using the internet and had accounts on some of the earlier social media platforms. I was fascinated by how I could connect with friends who I hadn’t chatted with in a very long time, and I used to spend a lot of time on these websites and platforms. As I realized the great potential they have, such as crowdsourced knowledge-sharing platforms like Wikipedia and social network platforms, I also saw a lot of downsides, because I saw people harassing each other, people adding false information to Wikipedia articles, and people just not being, quote unquote, nice. That was around the time when I got super fascinated with AI, machine learning, and data science, and I started my PhD at the University of Maryland after doing my undergrad at IIT in India. That’s where I started my research journey around creating state-of-the-art AI, machine learning, and data science algorithms that could detect, predict, and mitigate the harmful effects of bad actors and harmful content online. Over the last decade or so, I’ve created some of the foremost algorithms that can identify false information and online hate speech targeting Asian or Black populations. I’ve created algorithms that look into how one can identify fake reviews on e-commerce platforms, false information inserted into Wikipedia articles, and the bad actors who carry out these harmful activities. It has been a super interesting and fun journey, because it’s always a cat-and-mouse game: we create the best algorithms to identify them, and then the bad actors try to adapt and change their behaviors so that they don’t get caught. Over this period, as you mentioned, Hanh, we have created algorithms that have been implemented at Flipkart, which is India’s largest e-commerce platform, and have influenced Twitter’s Birdwatch system and a few others as well, which billions of people use every day. So it’s been incredible. Everyone from a teenager to a senior citizen uses social media and internet platforms, so essentially the work that we are trying to do helps keep everyone safe.

Hanh:
Wow, congratulations. I love everything about what you said, what you do, and how you serve a wide range of ages, particularly seniors, who are perhaps more susceptible to online scams or misinformation. So how can your research in web safety and integrity be applied to protect the older generations online?

Srijan:
You’re absolutely right that older citizens are particularly vulnerable. This has been shown for misinformation as well: older citizens are more susceptible to believing false information even when it’s not verified. There’s a ton of research, both qualitative and quantitative surveys and larger-scale data analysis, showing that this is in fact the case. And in today’s world of generative AI, where false information, fake videos, fake phone calls, and fake audio can be easily generated, everyone is susceptible, but the older population especially. In fact, I was on a flight a few months ago, I believe in early summer, and I was scrolling on Google and saw a news article about an elderly couple, grandparents living somewhere in Canada, who got a phone call that sounded like their grandson. The voice was saying they’re in trouble and really need some money urgently because, you know, if they don’t get it, they’ll get into more trouble. The elderly couple essentially panicked, went to their bank, and transferred around $21,000 to what turned out to be a scammer. So the elderly population is susceptible, as is everyone else, but slightly more so. I believe the types of techniques, the types of solutions that need to be created to help them are twofold. One is education: essentially telling them that the world has changed, this is possible now, and you need some sort of defense mechanism to verify that the person calling you is indeed who they claim to be. The second is a suite of AI-based algorithmic solutions that can help safeguard users of these platforms to begin with. One very practical solution that I’ve created with my family, with my parents, is a safe word, a verbal password that my family members and I know: my parents, my wife, everyone uses the same verbal password. If something feels fishy or suspicious about a phone call, they can just ask for that password, right? And since we haven’t noted it down anywhere, it’s very unlikely that a scammer would be able to get it. So even these small hacks, these practical solutions, are ways in which one can protect your loved ones from being scammed or harassed online.
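
The family version of this works precisely because the secret is never written down. Purely as a software analogy, here is a minimal sketch of checking a shared passphrase using only Python’s standard library; the salt, passphrase, and function name are hypothetical, not part of any system Dr. Kumar describes:

```python
import hmac
import hashlib

# Hypothetical setup: the passphrase is agreed out of band, and only a
# salted hash of it is ever stored on the verifying device.
SALT = b"family-salt-2023"
STORED_HASH = hashlib.pbkdf2_hmac("sha256", b"blue heron at dawn", SALT, 100_000)

def verify_safe_word(candidate: str) -> bool:
    """Return True if the spoken/typed candidate matches the stored safe word."""
    candidate_hash = hashlib.pbkdf2_hmac("sha256", candidate.encode(), SALT, 100_000)
    # compare_digest avoids the timing side channel a naive == comparison would leak.
    return hmac.compare_digest(candidate_hash, STORED_HASH)

print(verify_safe_word("blue heron at dawn"))   # True
print(verify_safe_word("totally wrong guess"))  # False
```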

Hanh:
Mm hmm. And you know what, I echo the example you cited about people calling in and pretending to be a familiar voice. As you say, you can readily clone a voice nowadays, but this happened a few years ago, before that, when someone claimed to be my son calling his grandparents. Grandma didn’t know, so she asked, “Is this so-and-so?” and in doing so gave away my son’s name, which this person then used in a later call: “Hey, I am so-and-so.” So you’ve got to be careful about who you give your name, or your children’s names, to.

Srijan:
Absolutely, yeah.

Hanh:
It’s just ugly, you know? How awful is it to capitalize on seniors’ vulnerability and naivety. And here’s the other thing I’m seeing more and more: you get an email saying you won X amount, a lottery or something, click here, type in such and such. It’s awful.

Srijan:
Those email scams have been going around for a very long time, pretty much since email became popular. As I was mentioning earlier, it’s a cat-and-mouse game. There are algorithmic filters that have been created to identify such scam emails and prevent them from reaching your inbox. But again, bad actors are creative. They try many different variations of the content they’re trying to pass off as authentic, and they use whatever beats those filters, whatever fools them, to land in your inbox. The example that you gave, about someone saying their grandson’s name, shows how easy this is now: everyone puts all this information on social media, on Facebook and so on, so all of that information is out there. And, not to scare anyone, but nowadays AI technology for voice has improved so much that even a one-second clip of your voice can be used to essentially clone it. That capability has evolved rapidly, and it’s very easy to do. So, solutions-wise: education, like doing a podcast like this that tells the elderly population, hey, something like this is possible, be aware and be proactive in taking certain steps, is part of the solution. Creating algorithms and systems like the ones my students and I build at Georgia Tech is another set of solutions; sometimes we work with the tech industry to help them improve their systems by creating algorithms that are better than the current state of the art. I think this two-pronged approach is important. Simultaneously, other solutions need to be created specifically around online mis- and disinformation. I am working with collaborators, social scientists and communication experts at various universities, to create systems that help make professional fact-checkers and journalists more efficient in their jobs. There’s so much false information out there, and people do fall prey to it, and the only reasonable line of defense against it is good journalism. Right now the state of journalism and fact-checking is such that it’s super difficult for fact-checkers to check everything that’s out there, because there’s just too much of it and too few fact-checkers. So what we are doing is helping them by creating algorithmic AI systems that can detect and prioritize content for them to fact-check, in a timely, proactive manner, rather than reacting after things have already reached and influenced people. We are working with various organizations to help them be more efficient, and we are hoping that our AI systems will be able to reduce the reach that online misinformation has, today and in the future.
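
As a rough illustration of that “detect and prioritize” idea, here is a minimal sketch that ranks unverified claims for human fact-checkers by combining a classifier score with estimated reach. Every field name, score, and claim below is invented for illustration; the real systems are far more sophisticated:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    misinfo_score: float  # e.g., output of a trained classifier, in [0, 1]
    est_reach: int        # e.g., follower count or early share velocity

def priority(claim: Claim) -> float:
    # Simple heuristic: harm potential ~ likelihood of being false x audience size.
    return claim.misinfo_score * claim.est_reach

queue = [
    Claim("Miracle cure reverses memory loss overnight", 0.92, 50_000),
    Claim("City council moves meeting to Thursday", 0.10, 2_000),
    Claim("Bank will call you to 'verify' your PIN", 0.85, 120_000),
]

# Fact-checkers work the queue from the top down.
for c in sorted(queue, key=priority, reverse=True):
    print(f"{priority(c):>9.0f}  {c.text}")
```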

Hanh:
Wow. What a wonderful undertaking, and best of luck in all these projects. And I concur with you: misinformation is out there, and there are folks online with, like you said, not-so-great intentions. This is very concerning when we think about seniors, and it’s not just about the false info itself but also about the broader effects it can have, like on mental health. So could you explain the impact of misinformation from bad actors on mental health, as well as your research contributions in this area?

Srijan:
Yeah, thanks for the question. My colleagues at Georgia Tech and I hypothesized early during the pandemic that there was a lot of confusion around COVID: misinformation and false information about COVID was being spread, and that was causing a lot of panic and confusion. At the same time, everyone was already stressed about this new virus; everyone and their loved ones were being impacted, and, for those of you who may remember, there was just too much chaos, too many things going on at the time. What we hypothesized was that misinformation not only misleads and misinforms people, but may also raise people’s anxiety and stress levels, with certain types of misinformation being more alarming and anxiety-inducing than others. Since then we have seen other types of misinformation as well, specifically targeting the Asian population around COVID-19 origins and so on, which actually led to physical harassment and violence against Asians, both young and old. In fact, there are a few organizations in California that were curating a list of incidents of violence, harassment, and threats, both online and in the physical world, and overwhelmingly you would see that older populations were being impacted as well. So we were seeing a lot of impact of misinformation, online as well as in the physical world, and we set out to study and quantify how much impact there might be. Through a very extensive research process, we collected a lot of data, specifically from Twitter, one of the most popular social media platforms, and we tested the hypothesis: does consuming misinformation increase your anxiety? What we found was that, in fact, it does. Broadly, it roughly doubles your anxiety compared to not consuming misinformation. And that held across the board for various populations: across race, across gender, and across education levels. More educated and less educated alike, everyone was being impacted by this increase in anxiety due to misinformation. What was worse, it’s already been shown that people who are more anxious are more vulnerable to misinformation. So this creates a vicious cycle: you consume misinformation, you become more anxious, and because you are more anxious, you are more vulnerable to misinformation. The cycle continues, and you can spiral down. Our hope in conducting this research was, first of all, to be able to say that this negative impact exists, and second, that platforms, government agencies, mental health organizations, and senior organizations all need to proactively take steps not only to mitigate misinformation and remove it from platforms, but also to counter the additional harms that occur because of the spread and consumption of online mis- and disinformation. So this was timely, as well as important for the older population, because they not only consume more misinformation but may also be more vulnerable to these mental health, anxiety, and stress impacts.
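
The core comparison behind that “roughly doubles your anxiety” finding can be sketched in a few lines. The real study used large-scale Twitter data and careful controls across demographics; the numbers below are synthetic and exist purely to show the shape of the analysis:

```python
import random
from statistics import mean

random.seed(42)

# Synthetic anxiety scores on a 0-10 scale; purely illustrative stand-ins
# for the behavioral signals the actual research measured.
exposed     = [random.gauss(6.0, 1.5) for _ in range(500)]  # consumed misinformation
not_exposed = [random.gauss(3.0, 1.5) for _ in range(500)]

ratio = mean(exposed) / mean(not_exposed)
print(f"mean anxiety, exposed:     {mean(exposed):.2f}")
print(f"mean anxiety, not exposed: {mean(not_exposed):.2f}")
print(f"ratio: {ratio:.2f}x")  # roughly the doubling effect described above
```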

Hanh:
I echo that. You want to encourage them to use digital media to stay connected, especially in the scenario you described, during COVID. We still want a means to engage, whether it’s post-COVID or during, like what we’re doing right now in our conversation; I’ve learned so much about your work. Not everybody has the means to go to different events, right? Especially older adults, many of whom are on a fixed income. So connecting with people on social media is a great outlet, but again, proceed with caution. It’s so important. Okay. We’ve all browsed reviews while shopping online and wondered: is this review genuine, or just someone trying to manipulate the system? And beyond shopping, the web can sometimes be a tough space, especially with hate speech targeting certain communities, and it’s important to recognize and address these challenges. Let’s talk about online manipulation, from fake reviews on shopping sites to the darker corners where hate speech lurks. Could you discuss your research on detecting fake reviews, and how it has been implemented in production at Flipkart?

Srijan:
Sure. We started this very interesting project around seven or eight years ago, when online e-commerce platforms were already very popular, so popular that sellers were purchasing fake reviews so that their products would be ranked highly by the search and ranking algorithms these platforms use. Those algorithms rely heavily on the reviews that products have received. So what these sellers were doing was purchasing fake reviews to increase their own rankings and ratings while reducing the rankings and ratings of their competitors. Prior to our work, research had shown that a one-star increase in rating could increase a seller’s revenue by double-digit percentage points, which is huge if you think about the scale at which many of these sellers operate. So there’s a huge, mostly financial, incentive for these sellers to increase their ratings by adding fake reviews. Because of that, platforms like Amazon and others are struggling with ways to identify and remove fake reviews. If you think about it, there’s no absolute truth about the quality of a product: you may like a product, but I may not, and both are valid opinions about the same product, right? So it gets super difficult to determine whether a particular review someone has given a particular product is genuine or fake. It’s a very challenging problem. We therefore came up with an AI algorithm that uses not just the piece of content that’s written, but also the other reviews a user has written and the other reviews a product has received. We create what is called a graph, or a network, representing the different entities on the platform, in this case the users, the products, and the sellers, and it tries to weed out the fake reviewers from the genuine ones. So we built a system, very sophisticated at the time, that was able to identify and flag them. We ran it on various large-scale datasets from different platforms, including Amazon, Epinions, and a few others, and we were able to show that our detection algorithm could detect these fake reviewers at much higher precision than existing techniques. The reason it was so powerful was that it used the graph, the network connectivity, as well as behavior patterns extracted from both the users and the products, and combined all of those to create insights about behavior. We also ran our system on Flipkart, India’s largest e-commerce platform, with whom we were collaborating. We said, here are 150 accounts that our algorithm thinks are potentially problematic; can you check and tell us whether that’s indeed the case? They went through these accounts, and they had access to a lot more information and data than we did, and they were able to validate that 127 of those were in fact fraudulent accounts giving fake reviews. So eventually our system was used in production at Flipkart, and we have also released our algorithms and AI systems so that others can benefit from them. The world has changed so much since then, because now it’s so much more trivial to write fake reviews on any of your favorite e-commerce platforms, or for a restaurant; there are fake reviews everywhere. I don’t buy anything without looking at reviews, even though I know many of them might not actually be true. The scale at which all of this is happening is super large, and more systems like the ones I just described are needed to separate the wheat from the chaff and to create an online ecosystem that is trustworthy, reliable, and safe for everyone to use.
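
The intuition behind that user-product graph can be shown in a toy form. The sketch below, plain Python with invented data, iterates two coupled scores, a reviewer’s “fairness” and a product’s estimated quality, in the spirit of that family of graph algorithms. It is a minimal sketch, not the production system:

```python
# Ratings are (user, product, rating in [-1, 1]) edges of a bipartite graph.
ratings = [
    ("alice", "bottle", 1.0), ("alice", "kettle", 0.8),
    ("bob",   "bottle", 0.9), ("bob",   "kettle", 0.7),
    ("shill", "bottle", -1.0), ("shill", "kettle", -1.0), ("shill", "lamp", 1.0),
]

users    = {u for u, _, _ in ratings}
products = {p for _, p, _ in ratings}
fairness = {u: 1.0 for u in users}     # how trustworthy a reviewer appears
goodness = {p: 0.0 for p in products}  # the product's estimated "true" quality

for _ in range(20):  # iterate until the coupled scores stabilize
    # A product's quality is the fairness-weighted average of its ratings.
    for p in products:
        edges = [(u, r) for u, q, r in ratings if q == p]
        goodness[p] = sum(fairness[u] * r for u, r in edges) / sum(fairness[u] for u, _ in edges)
    # A reviewer's fairness drops with how far their ratings sit from product quality.
    for u in users:
        edges = [(p, r) for v, p, r in ratings if v == u]
        fairness[u] = 1.0 - sum(abs(r - goodness[p]) for p, r in edges) / (2 * len(edges))

for u in sorted(users, key=fairness.get):
    print(f"{u:>6}: fairness {fairness[u]:.2f}")  # low fairness -> candidate fake reviewer
```

Running it, the account whose ratings consistently contradict everyone else (“shill”) ends up with the lowest fairness, which is exactly the signal a platform would then investigate with its richer internal data.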

Hanh:
You know, another thing about getting positive reviews: people are more inclined to leave negative reviews than positive ones, and perhaps that’s what’s driving these fake positive ones. But regardless, you might be able to get away with it on some products; if you’re looking at businesses with thousands and thousands of reviews, I’m not sure how effective these scammers are going to be, right? But again…

Srijan:
So the objective of the scammers is often not to manipulate the super popular products. For any product category, there’s what we call a long tail of products. A long tail means there will be some products that are super popular; say you’re looking for, I don’t know, a water bottle, there will be a few products that are super popular, but then hundreds of products that are not as popular.

Hanh:
Right? Long tail keywords, as opposed to shorter, more searchable, right? The long tail ones are, I guess, a more specific. But still there’s a lot of them.

Srijan:
Yeah, there are a lot of them, and the ranking between them is somewhat ambiguous, right? These sellers know they’re not going to be in the top two or three, because those are the super popular items, but any search platform has about ten slots whenever you search for something. So they’re competing for the remaining seven, and they want to be ranked as high as possible. Fake reviews exist on the super popular products as well, but they might not impact those as much; the majority of products are not super popular. One percent of the products are popular, and the remaining ninety-nine percent are all competing to become popular. That’s where the economics of fake reviews starts getting very, very interesting.

Hanh:
Yeah. All right, so let’s talk about the trustworthiness and robustness of AI. AI holds so much promise, but how can we truly trust it? How do we make sure it’s reliable, especially when there are those trying to game the system? So in terms of AI, what are the challenges in ensuring the trustworthiness, reliability, and robustness of machine learning and deep learning models against manipulation?

Srijan:
Yeah. That’s a big and very active topic of research in the entire AI community; there are even entire conferences dedicated to studying and understanding the fairness, accountability, and trustworthiness of AI systems. A few threads have emerged around the trustworthiness of AI. One thread is how to make AI systems more explainable. Essentially, that means knowing why an AI system is making certain decisions: can the AI system explain itself and tell us why it’s making the decisions it’s making? If we can get the AI system to explain itself, say in a textual format, we’ll be able to see whether that explanation is logical, and if it’s illogical, or if there are errors or biases in the logic, then certain actions can be taken to improve the AI system. So explainability is a means to an end: you want to use it to create AI systems that are better. That’s one aspect. The second is creating systems that are unbiased and safe, meaning you want the AI system to make similar, equivalent, fair decisions for people based not on their demographic attributes but on the other relevant aspects. Let’s say there’s an AI system ranking résumés for a job. You don’t want that system to rank women lower than men just because of their gender, or Black people lower than white people just because of their race, right? AI systems are more than just ChatGPT; there are tons of AI systems being used for important applications such as recruiting, HR, healthcare, finance, loan applications, and others. Many times there are government guidelines and regulations that require AI systems to be unbiased and fair, and often AI systems need to be fair and unbiased to prevent pushback or negative publicity. So a lot of research has gone into auditing systems to determine whether they’re biased or unfair and, if so, how to fix them. So the first thread was how to create explainable AI; the second is how to create unbiased and fair AI. The third big stream, where a lot of my research also lies, is around understanding the robustness and reliability of AI systems. This entire thread is about how AI systems can be created so that they stay reliable and robust over time. Let me break down what that means. The way AI systems are created is that they are trained on certain datasets, then deployed in production, where they run for a period of time. Often, when you run a system for a long time, the distribution of the data, and of the labels, can change so that it shifts away from the distribution on which the system was trained. This is called drift, and it can lead the system to make wrong predictions more often; you need to be able to identify that something like this has occurred and then fix it. A lot of the time these drifts occur naturally, because of how these systems are created and how things evolve. Other times drifts occur when you’re building user-facing applications: you trained your model on certain types of user inputs, then you release it to the public, and users give inputs that are very different from the inputs on which the system was trained. When that happens, the models start to fail; they make things up or just give wrong outputs and predictions. All of this is quite bad, because it leads to model failures in unexpected places. Those are scenarios where no one is trying to manipulate the system, but often there are bad actors who purposefully aim to manipulate AI systems. Think of someone who wants to conduct fraudulent activity on a banking system accessed through an AI chatbot: they will specifically try to manipulate the chatbot to take money out of someone else’s account, and so on. Other examples are bad actors who try to get, let’s say, ChatGPT to spit out racist, sexist, or unethical content. So a lot of bad actors are motivated to break AI systems in ways that would harm either the end users or the companies themselves, and a big thread of my research is identifying how this might be possible and then how to fix it.
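
To make drift concrete: one standard way to flag it is to compare the distribution of live inputs against the training distribution, feature by feature. Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test; the data is synthetic and the alert thresholds are illustrative, not anyone’s production policy:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values the model saw at training time vs. values arriving in production.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature  = rng.normal(loc=0.6, scale=1.2, size=5_000)  # distribution has shifted

stat, p_value = ks_2samp(train_feature, live_feature)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.3g}")

# Illustrative policy: alert when the shift is both statistically clear and
# large enough to plausibly matter for predictions.
if p_value < 0.01 and stat > 0.1:
    print("Drift detected: consider retraining or recalibrating the model.")
```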

Hanh:
Mm hmm. And this underscores the need for data ownership and foundation models, and for having your own digital system and ecosystem, like we talked about prior to the recording. It’s so important, and many of the concerns we’re discussing right now could be remedied by having your own foundation model. That’s why I’m an advocate for being an AI value creator as opposed to just a user, because I think you open yourself to more exposure and risk as a user than as a value creator. I echo what you’re saying. Now, how can AI and machine learning tools be used to detect early signs of cognitive decline or other brain health issues in the aging population? What’s your take?

Srijan:
Before we move on to that question, can I just add something about the foundation model aspect that you mentioned?

Hanh:
Yes.

Srijan:
It’s great that you bring that up, because there’s a huge trend in industry to create AI systems internally for your own applications. Many people, many of my friends’ companies, are building their own internal large language models and foundation models, deploying open-source versions and fine-tuning them on their own data, which is great; it means you’re not sending your data to OpenAI’s servers or other third-party servers. That is an essential step, but it’s complementary to the security these AI systems need to have. Creating your own foundation model for a certain application does not, by itself, ensure that nobody can manipulate it. Manipulation by bad actors, manipulation due to drift, and other things like prompt injection and hallucination are all issues inherent to AI systems, to all foundation models. Even if you create your own, these issues remain. The keywords I just mentioned are all realistic; these things are all happening, and many of the companies I’ve talked to, many of the sponsors of my research, are struggling to move their products from proof of concept to production because of these issues. So I think the question of safety and security of AI models goes beyond just protecting the data, which is important; it’s also a lot about access control, and things like preventing hallucinations, detecting hallucinations when they happen, prompt injections, personally identifiable information leakage, and so on.

Hanh:
Mm hmm. What’s your take on Watson and Azure AI, for instance?

Srijan:
Those are great systems. Microsoft has already integrated a lot of foundation models; they now have a partnership with Meta, and of course they already have a partnership with OpenAI, to provide enterprise-scale AI systems and private foundation models to enterprises via their Azure AI ecosystem. Those, again, are a good first step to prevent data from being sent to external servers, but the need for creating secure, reliable AI systems, foundation models, and products built on those foundation models still remains.

Hanh:
So how do you see AI and machine learning tools being used to detect early signs of cognitive decline or other health issues in the aging population? What’s your take?

Srijan:
Yeah, thanks for the question. AI, of course, is impacting everyone’s life, and it is being integrated into various health applications. There’s a huge effort, much of it funded by NIH and various other health organizations, spending a lot of energy and resources to understand how AI can best be used to improve healthcare broadly, with a big focus on elderly healthcare as well. The types of techniques being created there draw heavily on computer vision, natural language processing, and audio analysis. The way these work is by using signals from multiple modalities of information, looking at the audio, the visuals, the content, and the semantics of all of these together, often monitoring individuals over an extended period of time to track their health and any declines. It’s a very active area of research: understanding healthcare needs, anticipating them, detecting bad outcomes before they happen, and essentially identifying early signs of many healthcare needs and issues. As with any healthcare system, it requires a lot of rigorous testing, analysis, and validation before it actually gets out there for everyone to use. I think a lot of these systems are in that phase, where the proof of concept has been created in research and is now being validated. And I’m hoping, just like everyone else, that these AI systems will broadly improve everyone’s health. What I do know is that a lot of physicians, clinicians, and healthcare workers are already being assisted by AI systems so that their workload becomes manageable and they can provide better care and services to their patients. So I’m quite bullish, quite positive, on the use of AI in many different aspects, not just direct diagnosis, but across the detection and mitigation of various healthcare needs.

Hanh:
Mm hmm. That’s great. So let’s talk about advancements in multimodal learning. We’re diving into the blend of vision and language and how they come together in the tech world; it’s like teaching machines to see and understand like we do. As technology keeps evolving, especially with things like telemedicine, how is AI ensuring our seniors get the timely care they need, even from a distance? What’s your take?

Srijan:
Yeah, I think that’s very important. Telemedicine is getting more and more popular and making care more accessible. In AI, multimodality essentially means looking at multiple dimensions, multiple axes of information, and fusing them together: audio, visuals, long-term information stored in electronic health records, content from people. Essentially, a large variety of signals being used to make judgments and inferences in the way that’s most useful. A lot of AI systems created over the last decade or so look at individual aspects of these modalities: some just look at images, some just at videos, some just at the audio, and so on. Now, as the field homogenizes, the technology behind each of these is becoming more and more similar, and as that happens it becomes easier and easier to integrate them. That has led to this entire idea of multimodality, where you can extract information, signals, and insights from all of them together, which is greater than the sum of its parts. It has been shown to be very useful in various use cases, including healthcare. If you think about it, clinicians, doctors, and healthcare providers look at information in a multimodal manner, and we are trying to create systems that can do the same, not to replace doctors or healthcare providers, but to augment them and help them be more efficient. A lot of work is going on around that, and I’m hoping that in the next few years we’ll start to see some of those applications being used and revolutionizing how healthcare is provided across the board.
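
A common, simple form of the fusion idea is “late fusion”: encode each modality separately, then combine the embeddings for a joint prediction. Here is a schematic sketch in PyTorch; the dimensions and linear encoders are placeholders, not any particular clinical system:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Toy late-fusion model: one encoder per modality, concatenated into a joint head."""
    def __init__(self, audio_dim=40, vision_dim=512, text_dim=768, hidden=128, n_classes=2):
        super().__init__()
        self.audio_enc  = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.vision_enc = nn.Sequential(nn.Linear(vision_dim, hidden), nn.ReLU())
        self.text_enc   = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, n_classes)  # fused representation -> prediction

    def forward(self, audio, vision, text):
        fused = torch.cat(
            [self.audio_enc(audio), self.vision_enc(vision), self.text_enc(text)], dim=-1
        )
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 40), torch.randn(4, 512), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```

In practice the linear encoders would be replaced with pretrained audio, vision, and language models, which is exactly the homogenization of underlying technology described above.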

Hanh:
Awesome. You know, it’s a great time to learn the possibilities of AI, right? I always tell people to approach it with enthusiasm but with caution; with great power comes responsibility. But in the meantime, you can’t consume yourself with fear; you have to move forward, and move with caution, like I mentioned. As we head into the final stretch of our conversation, let’s look a bit at what’s on the horizon. Where is all this AI and tech research heading, from personalized text generation attacks to dynamic trajectories in interaction networks? There’s a lot to unpack, and more importantly, how can AI shape the future of personalized care for our seniors? Given all the advancements, where do our experts see the future of web safety, especially for the older generation and those who care for them? So, can you explain your work on personalized text generation attacks on deep sequence-embedding-based classification models, and its implications? How is that going to impact the things I mentioned?

Srijan:
Yeah. Personalized healthcare, personalized chatbots, and AI assistants are going to get very, very popular, in all aspects of life, from education to various types of services; think about personal lawyers and healthcare providers. We will essentially have these in lots of different areas, and we will continue to see them being built, many of them already, to assist in various aspects. For example, there are healthcare companies and healthcare foundation models being built to embody the values that a doctor has, so that once these healthcare foundation models exist, they can be integrated into services like the personalized healthcare assistants we discussed. Think about the possibilities where you essentially have your healthcare provider on your phone: you can chat with them and discuss anything related to your health. It’s a possible world where productivity has increased a lot and everyone has access to much more than they do today. So the future holds a lot for the entire world, and what we are going to see is a boom of different applications of AI, with everyone’s life being impacted by it. The work you mentioned, on personalized text generation attacks, looks at the safety of these AI systems and how bad actors can manipulate them in ways that prevent the bad actors from getting caught. This particular project came from an award that Facebook, or Meta, gave to me and one of my colleagues. Through the research, we showed that bad actors are able to change their behavior ever so slightly, in a way that’s imperceptible, but that fools the AI system and keeps them from getting caught. As you can imagine, all these social media platforms and online systems have AI systems in place to identify bad actors, weed them out, and remove them. However, as I mentioned earlier, it’s a cat-and-mouse game where bad actors are also changing their behavior, very often in fact, trying to achieve their objectives without getting caught. That’s where this research comes into play: it’s one of the first systems showing how bad actors can manipulate these AI systems, and how successful they would be, by doing personalized text generation. These bad actors can write the way they typically would, conveying what they want, in a way that would not be detected by these AI systems. That’s the project’s objective, and what we were able to show was that even Meta’s bad actor detection system can be fooled very easily by bad actors making minor changes in the way they write, allowing them to evade detection in ways that could cause a lot of harm. These detection systems are ones already being used in production, and they would fail one out of four times, which is a lot. So this shows how the same technology that empowers personalized assistants can also be flipped and used by bad actors to conduct these types of manipulation, in a way that requires us to be a bit more proactive and careful about how we operate in this world.
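
The underlying fragility is easy to demonstrate on a deliberately naive detector. The sketch below uses a toy keyword “scam detector,” not the learned sequence-embedding models the research actually attacked, to show how a small rewrite with the same intent can slip past a brittle classifier, which is why this kind of robustness auditing matters:

```python
# Toy robustness audit: does a naive "scam detector" survive small rewrites?
# (Illustrative only; real detectors and real attacks are far more subtle.)
SCAM_MARKERS = {"urgent", "wire", "transfer", "immediately", "account"}

def scam_score(text: str) -> float:
    """Fraction of tokens that hit the marker list."""
    tokens = text.lower().split()
    return sum(t.strip(".,!:") in SCAM_MARKERS for t in tokens) / max(len(tokens), 1)

original  = "Urgent: wire the transfer immediately to my account!"
rewritten = "Time-sensitive: send the funds right away to my acct!"  # same intent, new words

for msg in (original, rewritten):
    flagged = scam_score(msg) > 0.2
    print(f"score={scam_score(msg):.2f} flagged={flagged}  <- {msg}")
```

The first message is flagged, the rewrite sails through; the research result is the learned-model analogue of this, measured at scale against a production system.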

Hanh:
You know, I often think, with everything that you and I discussed today, that obviously there are a lot of precautions one needs to take, but at the same time we want to be a part of shaping AI, as opposed to “gee, AI just happened to me, it just happened to my business, what do I do?” So despite the fears, and everything we’re talking about with bad actors is so real, I think it’s even more important that we raise awareness, education, and inspiration: yes, it’s real, but let’s be a part of shaping AI. That’s so important.

Srijan:
Absolutely, I one hundred percent agree with that. And since everyone will be impacted by AI, it’s so important to empower people with AI, and to empower AI itself to be the best it can be.

Hanh:
Wow. What a journey into the world of AI today, especially its implications for our seniors and the aging community. From the nuances of misinformation to the innovations of AI for senior care, our conversation has covered a wide range of topics. Do you have anything else you would like to share as we wrap up?

Srijan:
Yeah, I’m super excited about AI, about the same technologies we discussed a lot today. I believe the world is already changing because of AI, and we can change it in a way that helps everyone. Doing it responsibly and reliably, creating AI systems that are robust, reliable, and trustworthy, is super crucial for the long-term implications and the long-term use that AI will have in broader society. So I’m super excited about what the next decade is going to bring us, specifically as it’s powered and inspired by AI. There’s a lot that has been done, and there’s a lot more to come.

Hanh:
Mm hmm. I echo that. And I also want to interject, for the listeners, perhaps baby boomers: part of adapting to AI is adopting a lifelong learning attitude, because it can be very intimidating, and you’re not always going to get the output you expect from the input you provide. It’s a learning process, because, how should I say this, we often think that in our later years we’ve mastered things, and perhaps we’re distrustful, rightfully so. But I think part of adapting to anything is taking an open-minded, lifelong-learning approach and staying with it, keeping at it, because I think it will only help you flourish in your life and your business life as well. I also have some exciting news to share with my listeners. My startup has recently partnered with Microsoft for Startups, and we’re thrilled to introduce AI50. It’s a B2B foundation model designed to be a cornerstone for streamlining business operations, bringing diverse functions under one seamless umbrella. What’s even more reassuring is its unwavering commitment to data privacy, ensuring that your business operations are not just efficient but also secure. We’ll be diving into this much further in the upcoming LinkedIn Live event, “Making AI Understand You and Your Data: The Foundation Model,” happening September 19th, 11 AM to 12 PM Eastern time. We’ll be delving into this model, how it can revolutionize your business operations, and answering any questions you have. Thank you so much to our guest today, Srijan, for shedding light on such intricate facets of AI. I appreciate your time and your valuable insights. Take care, everyone.

Srijan:
Thanks for the conversation.

Hanh:
Thank you. Thank you so much.
