Mental Health Matters

The Pros, Cons and Ethics of AI Use

Dr Audrey Tang Season 1 Episode 20


We cannot escape it, and in many cases it makes our lives easier – BUT how do we ensure that AI remains a tool, rather than becoming over-reliant on it or falling prey to its hallucinations? It's an interesting and timely discussion with Nina Hobson of Barefoot Coaching.

 

About the Show

Each Thursday at 4pm, we broadcast on LinkedIn and YouTube, with the podcast released on Spotify, Apple Podcasts, and more. 

Then every Friday at 8am, you’ll also receive a bonus podcast episode - a carefully selected recent conversation offering practical insight and timeless support.

Wherever you listen, you’re invited to pause, reflect, and reconnect: 

PODCAST: https://mentalhealthmatters.buzzsprout.com

YOUTUBE: https://www.youtube.com/playlist?list=PL5dbYRwciNQ3c2hZwpsfxnNIvpijH4S2b 

 

Today's show is hosted by

Dr Audrey Tang www.draudreyt.com  @draudreyt

and Judith Crosier https://www.facebook.com/profile.php?id=61556005102240

 

Guest Expert:

Nina Hobson

https://www.linkedin.com/in/ninahobson/

 

Today’s quick tips come from previous guest expert 

Thor A Rain

https://firstaidforfeelings.com/ 

SPEAKER_02

Hello and welcome to Mental Health Matters. I'm Dr. Audrey Tang.

SPEAKER_01

And I'm Judith Crosier.

SPEAKER_02

This is the show where we have a discussion on all things mental health and wellbeing, but without hot takes or quick fixes: we have up-to-the-minute expert information, and tips and tools which you can use accessibly and practically to make things hopefully a little bit better. Not perfect, but a little bit better. And talking of one of those tools, today's topic is AI. We've talked about AI quite a lot on this show, but now we're going to look at how we can use it effectively. Since we talked about this before, I've changed my opinion on it, and I actually now do give AI tasks. But the tasks I give it are very much: I've written this response, say a grant application response. How do I make it sound less obnoxious? How do I make it sound less angry? It's still my content, but I just need it to not come out so raw. Soften it a little bit. Yes, yeah. And actually AI does quite a nice job of it. It says what I want to say without offending anyone. So that's how I use it now.

SPEAKER_01

What about you? Um, I still try not to. I'm quite against it as a concept, which is ridiculous because it's everywhere in the whole world, and, you know, I can't deny that it's there. But I think I have used it a couple of times, I have to say, for advertising something on eBay. It says, do you want to use our AI option? I'm like, yes, I can't stand describing the item. So it's useful, and it has its uses. Yeah. Um, I was listening to the radio and an expert was talking about AI, and the presenter said, don't we need to reverse some of the things that it can do, all the dangerous things and all the potentially, you know, destructive things it can do? He said it's too late, it's way too late for that. So that's the side of it that neither of us likes.

SPEAKER_02

Yeah, there's a fear of that, and that's AI in itself as a machine that's very, very smart, uh, and also the way some people are using it, which I know we're gonna get into in today's conversation. We are welcoming back Nina Hobson, and she is from Barefoot Coaching, and she is gonna be talking to us about how we can better incorporate these tools into our day-to-day lives. Let's meet her.

SPEAKER_01

Hi Nina, it's so lovely to have you back with us.

SPEAKER_03

Hi, thank you, Judith, thank you, Audrey. Lovely to be here again.

SPEAKER_01

Thank you for having me. So, yeah, we're talking about AI, and jumping straight into it then: do you think that it's inherently neutral? Or do ethical problems mostly come from how humans design it and use it? Or is there still an issue with how the data is used?

SPEAKER_03

That's a big question, right? I think, no, AI is not inherently neutral. You know, the data that AI is built on is potentially biased. It's very much dependent on the quality of the data that we're using in AI training sets, right? But I think it's also too simple to say AI is inherently bad. When AI is portrayed in the media, this kind of Terminator-style portrayal also isn't helpful. So I think there's nuance there. It very much depends on the data, and the design as well. The way that a lot of AI works, so generative AI, large language models, is built on probability, right? Not on fact, but very much on probability. So there's something inherent in the way that AI is designed that we need to be very cautious about. But, as I say, it's very nuanced. I don't want to say AI is inherently bad or inherently not neutral. We need to be careful, but it has great, great potential. It also depends how we use it, right? Both on an individual basis and on a societal scale as well. How we engage with AI is really critical, I think. Yeah, it's a really big question, so a very long answer. I think I could talk for an hour just on that.

SPEAKER_01

True, but your last point kind of relates to my next question. So, who is ultimately responsible when AI causes harm? Is it the developer, the organisation using it, or the individual?

SPEAKER_03

Yeah, I think there are a couple of things there. I mean, are we talking from a legal perspective or from an ethical perspective? If we look legally, here I'm based in Spain, and the EU AI Act came into force. In the UK, we have a more flexible system, depending on the industry type. So from a legal basis, it depends. It can be quite a grey area, but we're seeing increasing regulation come into force, which I think is great. But I also think, just on an individual basis, it's not okay to say, well, I didn't know. We can't just throw it back to the designers. Yes, it can be really, really hard to understand AI. I've done a lot of personal research into this. I'm really learning; it's a topic that I'm personally very, very passionate about. But it can be really hard for the end user to understand, and I think it's really important to engage, not to say, okay, I didn't know, and to hand it back, but to really lean in, to keep learning, and to be really clear about our own personal AI use as much as possible.

SPEAKER_02

Yes, that's one of those really important things, because I wanted to get your thoughts on accuracy, which is one thing with AI. Because, as you say, it is biased: it can only process the information that is out there, and it depends on the question we've asked. If we've asked it a biased question, it's going to give us a biased answer. Yeah. Then there are hallucinations, which is something we generally know about, where AI can just make things up and make them sound like they're real, and you've got to check. Because once I asked it to give me a list of all the studies in a particular area, and I checked, well, I didn't have to, but I did check every single one of them. They were all genuine studies, but when you looked at the abstract, it was kind of, oh, that's not quite what you told me it was. So you've always got to go back, and that's what I would do anyway. But then there's also over-reliance on it, too. At the moment, if I ask it to find me the research in this area, AI gives me more than a Google search would, so I get more data, and therefore AI is, for me, a faster tool to use, but you've still got to double-check it. So, what are your thoughts on accuracy, hallucinations, and how people become a bit over-reliant?

SPEAKER_03

Yeah, it's really interesting. You're referring to oversight there. Human oversight is really important, but it has to be quality, expert human oversight. So, you know, you have a very solid background in psychology, Audrey, so you can look through and say, okay, that abstract is not what AI is telling me, and match it up. But somebody new might take it at face value. I mean, it's just been in the press recently about Google AI Overviews misconstruing things, right? So a simple summary can be quite misleading, and you need that human oversight to really understand the outputs it's giving us. So there's something about not just human oversight, but the quality of that human oversight; it's really important. Understanding what to ask as well. Thinking from a mental health perspective, using AI for mental health support, I think, is potentially very, very dangerous. Therapists, and myself working as a coach, can challenge; we can look at the nuance and challenge respectfully. Yes. Whereas AI, with that sycophantic nature, offers what you want to hear, and somebody in a very vulnerable situation might not ask for that, and not get that nuance, that respectful challenge. And I think that's an area where we just need to be so, so careful.

SPEAKER_02

Yes, there's such a danger of validation without any form of understanding. We talk about this as humans, but if an AI is just allowing it to happen, then that is problematic. Then there's the flip side of it: human skills being lost because of an over-reliance on AI. I understand if you're saying to AI, this is my information, could you shape it so it sounds like this, and there's a fine line there. But if you're just saying to AI, could you write this for me, write my essay for me? Ah, that's worse, surely. There we're getting into plagiarism, copying, all of that kind of thing.

SPEAKER_03

Yeah, I couldn't agree more. I think there's a question there about whether we are using AI to enhance our learning or to replace our learning. Right. You know, it's really interesting with my kids, the next generation. I've got three children. And I saw my kid ask it: give me a mnemonic, give me a way to remember the rivers in Spain and the mountain ranges in Spain. And I think, oh, okay, that's great, that's supplementing. But using, you know, translate services to cheat on a test, less so. I'm not making any judgments there, but I think there's something about being really clear about what we are using it for and what the long-term benefit is for ourselves.

SPEAKER_02

That is a great place just to pause and reflect there, because I think you're absolutely right. And we are going to come back to Nina after a tip from one of our previous guest experts.

SPEAKER_00

So I really like using AI and tech as support. Here are a few things that I'd recommend. First of all, if you're looking at Google or AI tools like ChatGPT or Llama from the Meta platform, I would really encourage you to look at the privacy settings, because those platforms are vulnerable to being hacked, or people can find out your personal information if you are asking these platforms questions that include any personal references. There are ways to add security settings, but I would actually recommend using other platforms, including Perplexity. Perplexity is a platform with much stronger security measures; your data isn't as vulnerable, and they've got a different way of setting up their settings. So going into Perplexity as an AI platform to ask your questions is my recommendation. There are also two platforms that are based in Europe. One is from Proton, which is called Lumo. Because they're based in Europe, they comply with European law, so there's a stronger privacy setting, and they're available both as a browser and an app. But there's also GreenPT. They are only browser-based, but they are very environmentally friendly and they're also high on security. So I would recommend GreenPT and Lumo, which is the Proton one, as well as Perplexity, because for security and personal privacy settings they are, sort of, safer. Obviously, this is all a very new world for all of us, but at least they seem to be thinking about these things, whereas ChatGPT and Llama are not so much. So check out those features for whatever tool you like to use. Then going into apps, I'm using Daylio, and there will presumably be a link in the show notes to this platform. It's Slovakian, it's a small team, I love their ethos. Daylio is also highly customizable, both in the free version but especially in the paid version, which I think is about £20 for the year.
So it's reasonably affordable for most of us. It's highly customizable, and this is where AI combines with logging data points. You list your moods and activities throughout the day, and the AI features within the app show you trends. It shows you what some of the activities are on the days when your mood drops, or, for tracking pain, you can see, oh, you were doing this when this symptom got worse. So, because it's highly customizable to your own needs, you can actually make it almost like your health assistant. You can also do backups if you pay for the paid version. That's about being able to track and own your data in case the platform goes down, because, you know, if we give all our information to a platform and then it goes bust and we haven't got a backup for ourselves, we've lost all that information. So I would say, for any kind of tracker, habit tracker, health tracker, check some of those factors: the security features, the backup or export features, and whether it's customizable to use. I would really encourage you to do that. And then finally, I want to talk about wearable tech. Visible is an app that a lot of people I support in the chronic fatigue and chronic pain community use. You've got an armband, or something on your wrist, I think, as well. They track lots of things that you can then look at, and then pace your activities and pace your energy use depending on what the wearable is giving you. There's also Oura, which I'm curious about. I haven't used it myself yet, but I'm looking at the reviews and some of the science behind it. It's basically a smart ring. There are a few smart rings around now.
Some of them are not very reliable, not very consistent in their data tracking, and some of them don't seem to be that strong on the science beneath them. So, as much as you're able to, check some of that; you can actually use platforms like Perplexity or GreenPT to do some research: how reliable is this smart tracker? What are the issues with this particular brand? So you're making that kind of informed consumer choice. I hope that's been helpful. I find technology is super helpful with the people I support, as well as in my own practice. But it's about using it in a smart way, rather than just using it because other people are using it, or because that's what the tech offers. Make the tech work for you, and bring that curious but also critical, informed consumer mindset to the choices that you are making.

SPEAKER_01

Welcome back to Mental Health Matters, where we're having a discussion with Nina Hobson about the merits or otherwise of AI, the pitfalls, but also some positives as well. So, thinking about some positive uses of AI: how can it be used to augment, assist or support human judgment and actions, rather than merely replacing them?

SPEAKER_03

I think, you know, for me, in terms of efficiency, it can be incredible: admin, speeding up tasks, and I'm using it more and more, I think, on a personal level as well. And I've worked with clients who have really benefited, in terms of feeling that level of procrastination, okay, where do I even start? And AI really, really helping them get a grip, get up to speed quickly. I also have family working in the detection and diagnosis of health issues like cancer, and I think there the benefits are potentially out of this world. It's really, really exciting stuff. So I certainly am very, very excited about AI. The debate is very nuanced, and it can be easier just to put it in an 'AI good' or 'AI bad' sphere, right? Yes, but we have to look at it: AI is huge, it spans everything from robotics to generative to agentic AI. So, yeah, the debate needs to be really nuanced, because it's really exciting as well.

SPEAKER_01

Yeah, well, and the examples you gave, you know, they just open up the world, don't they, and make it a better place. So yeah. Um, and what ethical questions can arise when AI is used in creative work, such as writing, art, teaching, or in therapy, and how can we best address those?

SPEAKER_03

So, in relation to art, I'm very concerned. I'm very aware that AI, if you like, is using the data that we already have, potentially mixing it together, mushing it together in different ways, but it's mimicking creativity; it's not creating anything new. So I can't see a way in which AI is going to lead to the paradigm shifts that we've seen before in art, culture, literature, because we're essentially working on databases that we already have. Yes. There are obviously the plagiarism concerns as well. I think it's very, very nuanced. I'm also very worried about data breaches; I think we have to be concerned about privacy. I think it's also very telling that a lot of the key players in AI, very, very senior people, have recently left, walked out on their roles, people in safeguarding. That's also very telling. But again, just thinking on a more personal level, there are so many tools to help here with art, to create things, to put vision boards together using AI. And I thought, you know what? Actually, I want AI to do my washing up so I can go and be creative. There's something about the process. I'm not an artist, certainly not, but I find it can be very therapeutic. So I think we need to think about how AI can assist us, and not replace the things that give us joy and meaning and purpose and that feeling of accomplishment and achievement. You know, if we're not careful, I'm going to be stuck doing the dishes and AI's doing all the fun art. It's true.

SPEAKER_02

You know, it should be the other way around, right? Yes, that is so true. And Mike Cooley's book, Architect or Bee?, covers that point; it really addresses it with computers. Computers were supposed to be there to take away all of the menial tasks, but it turns out that we ended up having to learn to program the computer, and we're still doing all the menial tasks while the computer is doing these amazing calculations and creations. And, you know, I think AI is a similar thing. Now let's relate this to, you've mentioned therapy, but also emotional support, decision making, companionship; those are other areas in which AI is being used. And you said something really important about simulating behaviours. I really believe, when it comes to AI, it is simulating niceness. It's not, it can't be, nice; it's a machine. So it's not even someone pretending to be nice; it's on a whole new surface level. So, I guess, how do we set the boundaries around using AI and really understand that it's not your friend?

SPEAKER_01

That's a really good point, yeah.

SPEAKER_03

I mean, I do wonder the extent to which that is possible, you know, the inherent design of this sycophancy. We're engaging with these tools as a human; we know full well they're not. I'm reminded of the ELIZA effect. In the 1960s, an MIT professor created this very simplistic, rule-based, but really quite cool tool to feed back, if you like, following the work of Carl Rogers. So you typed in, and this tool would summarise back, mirror back. And people started engaging with this very simplistic tool from the 1960s, treating it as if it were a human. This is often referred to as the ELIZA effect in AI, and I've found myself doing it and sort of kicking myself and reminding myself, hey, this is not a human here. And, you know, it's one thing if I'm using AI to, okay, help me sort out some breakout rooms, or, there's a lot of data here, look for patterns, at a very low level of importance, and checking. But I wonder about people in more vulnerable situations turning to AI for

SPEAKER_02

emotional support, and knowing, okay, on a rational level it's not a human. For vulnerable people in really challenging situations, it can be really, really complicated: I know it's not a human, but I just want that sycophancy, you know, that love. And it's not love, it's not empathy; it is, just as you're saying, Audrey, mimicking that, right? Yeah, mimicking is such a great word. It's very hard, it's really hard. So, on the mimicking, and also the fact that we know it's not real (it doesn't know it's not real, it's not a thing), one thing I get drawn into, because we are effectively teaching AI, is that I will say thank you to it. Even to Alexa I will say thank you, because I don't want it to learn rudeness. And then, when you put this into the human context: if on the one hand we're treating it as a machine, switch you off, switch you back on when I feel like it, I worry that that is the behaviour that's carrying over into the human world. Oh yeah, just switch you off, switch you back on again; I can ring you at 3am and I'll expect you to be there for me, because my AI does. Those are real problems, and I think this is where the ethical challenges begin. On the one hand, we don't want it to learn bad behaviours, so we need to treat it quite well; but on the other hand, it is there all the time, it is programmed to validate us and be sycophantic. How do we address that basic ethical challenge in our own thinking?

SPEAKER_03

Yeah, I think on a design level there's that real tension between technological advancement, commercial success and ethical safeguarding. It's complicated. On an individual basis, I think boundaries are really important: being really clear, okay, what am I using this for, how long am I using this for; being really clear about our intent before we go in, and reflecting on our own use. Being clear about, okay, do I know where my data is going and how it is being used, especially if I am using other people's data. So if you're watching this and you're working in mental health, if you're a fellow coach, be really clear on your own ethical responsibilities, and lean in. It's not okay to say, I don't know; we really need to lean in, to learn, to keep on learning and to really engage. I think that's really, really important.

SPEAKER_02

Yeah, and just to throw this in at the end: what projection do you see in, I have here five to ten years, but I think I'm going to bring it back to even two to three years. What do you see that might worry you, or might actually be really wonderful? Because I think the medical advancements, all of those things, fabulous. It's amazing, incredible, the fact you can do surgery with a computer now. But obviously there are things to watch out for. So, two to three years: what do we need to be aware of?

SPEAKER_03

You know, I was reflecting on this, and I think the truth is we just simply don't know. When a lot of the senior experts, a lot of the leaders within AI, are walking out, that gives me cause for some concern. But I do think, whether you're a psychologist or a coach, working within the sphere of mental health, we are uniquely placed to help address this: okay, what's going on, what are the next two, three, five years going to look like? The truth is, I don't think we know, and these are very human challenges, and we need human solutions, and we are uniquely placed to help get to grips with what these challenges and these solutions look like. So, the truth is, I don't think we know. And, you know, I'm learning, I'm engaging, and I think, yes, I'm optimistic. Just keep on learning, keep on leaning in, is what I'd say. But the truth is I'm both excited and terrified in equal measure, and I think I'm taking a very cautiously optimistic approach a lot of the time.

SPEAKER_02

That's a great way to say it: cautiously optimistic. We're keeping you, Nina, and we'll be back after a tip from one of our previous guest experts.

SPEAKER_00

So I find that it's much more helpful, and also more pleasant for me as a human engaging with other humans, to invite people to get curious, rather than go, you've got to look at this, you know, you're looking at it all wrong. I'd much rather go into: it's interesting to notice that when I have, say, a headache, it could be because of a brain tumour, it could be because I'm thirsty, or anything in between. How do you know the difference? That kind of inviting curiosity I find is, first of all, more enjoyable, but also more successful, because there's no shaming, there's no judging, there's no pointing out, you know, that you don't really know much about your feelings. A doctor once asked me to describe a feeling; I was in pain, and they were really struck by my limited vocabulary. They were like, you're clearly intelligent, you're clearly well educated, but you're not able to tell me really what's going on. Because I didn't have a language for it. And there was shaming in there, and I remember feeling shame, because it was like, how can you be so stupid over here when you're so clever over there, and that shame was just crushing. Whereas, for me, that kind of, I get it, so let me go with you and get curious about how this thing you're experiencing might possibly have something to do with what happened in your childhood.

SPEAKER_01

Or might, you know, have something to do with how you find it uncomfortable to be around people who are this or that. Welcome back to Mental Health Matters, where we're talking to Nina Hobson from Barefoot Coaching about AI and everything that that brings. What gives you hope about how AI could be used well? Certainly within healthcare, I'm really excited. I mean, it's talked about so much, but the early detection of cancer, and not just cancer but so many ailments, I think is really exciting.

SPEAKER_03

Also, you know, efficiency on an administrative level; potentially, if used wisely, it's really, really exciting. While we need to be very, very careful with the use of AI, being careful with the data, being mindful of the data behind the scenes, the potential for bias, being mindful of the quality of data, I do think that it can potentially help create more nuanced, less biased things. I'm thinking of things like HR assessments, for example, or job assessments, selection; there is potential there. I was reading recently about figure skating, AI being used to create less bias within the rules of figure skating. So, you know, the opportunities are endless, but how it's used is key, I think. Sorry, I just want to come in on the figure skating; it's really interesting you mentioned that, because you're absolutely right. I've been watching the figure skating, but my problem is this: it used to be a technical merit score and an artistic merit score, and the technical merit used to be, I'm going to do a toe loop, a loop, and so on, and if I do them, great; if I don't do them, not great. I understand what AI's doing: it's looking at the lift, it's looking at the revolutions, it's looking at the landing, all of those things. However, what that's done to the free program, the artistic program, I think it's ruined it, because now they're just doing jumps. I've watched all the skating, and, okay, the men's final, not great for everybody apart from the winner, and I was so not entertained, because not only were they just doing jumps, they weren't landing any of the jumps either. You took away all of my beauty, all of the environment and presentation, to make it jumps which you didn't land. I'd rather have the beauty. Yeah, it's that tension again between, you know, humanity and AI, and supporting
that humanness, if you like, whether that's in figure skating or, you know, my role within coaching. We can have these standards, ethical standards and quality standards, but it is at a level, you know; whether figure skating is a sport, I also think it's an art, and I think coaching is the same, that humanness. We do have these ethical frameworks and standards, quality assurance, but there's something very human that is very difficult to put on paper, to quantify, to give a score, right? You know, you are a great therapist or a great coach, you scored this percent. So, yeah, the debate, going back to it, is so nuanced and complex, right? It is, and I also thought about the human side of it when you were talking about AI sifting through job applications, because actually somebody might not make the best job application, but a human reading it might think, oh, I see that in them, and AI might dismiss it. That's what I was thinking. Yeah, I agree, it's definitely not perfect. Going back to that quality oversight thing: it's having an expert who understands the questions to ask and understands the output, okay, and can challenge a bit. So it's really that expert human oversight at every step that's crucial.

SPEAKER_01

Exactly, indeed. And on that, then: if you could implement one ethical rule or norm for AI use tomorrow, what would you do?

SPEAKER_03

I think for me there's something around clear language about transparency. I want to know what I'm using and where my data is going. So it's about being really transparent and really clear about how the data is being used, and also about how we are using AI, how we as individuals are using it.

I think that leads nicely to the next question, and it's funny, because I asked exactly this question to our sex therapist Medina when we were talking about polyamory: what questions should people ask themselves beforehand, whether that's before polyamory or before using AI? It's the same sort of awareness, I think. What should we be asking ourselves?

I think we need to ask ourselves why we are using it. What's the purpose? Is it to assist or is it to replace? We need to be really, really careful about our intention, our purpose. For me, we also need to be clear about our boundaries, because you maybe put a question in, and then it comes back with "would you like to do this, would you like to do this", and you're down a little rabbit hole. So be really clear: what am I using it for, how long am I using it for, and where is my data going? Be really mindful, and if you're using free versions, for example, be extra cautious about the type of data you're inputting. If there's anything sensitive there, I'd say whoa, whoa, whoa. Be cautious. What are you using it for?
So anything that's not high risk, that's admin-level, not catastrophic, not super important: great, if it can help. But anything which veers into emotional support or social skills, I would personally be very, very concerned about. So: be clear about its purpose, be clear about your time use, think about your boundaries, and lean into the debate. Don't sit back thinking "I'm not an expert on AI". Keep learning and reflecting on your own use. Do I understand the AI tools I'm using? Could I be using AI tools without even knowing it?

That's a really nice point, actually, because I think we are using a lot of AI tools without realising it, through a lot of the apps we use. A lot of editing apps, for example: you just feed in the photographs and it creates something for you. That sort of use I can accept, but I know my language to describe it is different.
It is. I'm "using an app", not "using AI". So I think there is that realisation, that awareness, that we need to have. And on that: what does responsible AI usage look like for an individual? Is it just a case of knowing what we're using, or do we also need to be aware of all of these other dangers, the hallucinations, the accuracy, the bias and so on? Do we need to worry too greatly, I think, is the question.

No. AI is here, and certainly we need to not be afraid, and to experiment, but carefully. It's great power and great potential, and we should use it with the responsibility that power entails, being mindful of our use, and again being really clear about what we're using it for and how. I don't want people to come away from this feeling scared to use AI; it has so much potential. I'm reflecting on how I can use AI more and more effectively, but in a way that's responsible, for myself and for other people, for those that I work with.

SPEAKER_02

That's a beautiful statement. "How can I use AI more responsibly and effectively?" Those two questions are absolutely fantastic. Nina, where can we learn more about you?

SPEAKER_03

So I'm on LinkedIn; you're welcome to find me there. I'm not on other social media, but that's perhaps another topic for another day.

Perfect, and we'll have all of Nina's details in the show notes. Thank you so much, Nina. Now we'll head over to Test the Trend, which is where we give you a weekly challenge.

SPEAKER_02

So my question to you: what would you consider using AI for this week? That's the first question.

SPEAKER_01

Okay. What would I consider using it for this week? Ooh. Well, I'm starting a new job on Monday, effectively, and I know it's an industry where many people use AI to write policies, for example. So maybe I'll write a new policy, or update a policy, using AI.

SPEAKER_02

Using AI. And actually, what I find when I've used AI to improve my letter writing or my emails is that I've learnt something through what I've read, because I read it back to myself and go, "yeah, that sounds so much better". It rarely uses words that I wouldn't use, and if it does, I take those words out. So I actually learned something about writing by doing it. So the first question is: what are you using it for? Okay, then the second question, and ask yourself this: which part of the task of writing a policy actually needs you?

Well, I would need to check that any laws it quotes are accurate.

SPEAKER_01

Yes, because all policies are based on law, basically.

And what else?

Well, I would need to make sure everything it wrote was accurate, and tailored, I guess. It would need to be tailored a bit, depending on what the policy was for, but also to make sure that it still sounds like me, essentially.

Okay. So the first question is what you're using it for, but then: what needs you? What bits do you need to input? And what parts of this policy do you think AI can safely help you with? Drafting something is pretty straightforward, so where would you say AI could just get on with it?

Well, once the basics are there, like the law and what the policy is about, then it would just do the filling-in, the style.

SPEAKER_02

Absolutely. And then: who's accountable for the final outcome?

SPEAKER_01

Always me.

SPEAKER_02

Exactly, because of the questions you've asked: what task actually needs me and my input, what part can it safely do, and who's responsible for it? You know the answers, and you're putting in those checks and balances. That's responsible use of AI.

Yes, exactly. I see.

Exactly. Why should you have to waste your time writing it out, then going "oh, hang on, it's been done this way before, I need to rewrite it"? That's not a good use of your skills.

No, absolutely. What is a good use of your skills is then imparting the knowledge, teaching your counsellors, or whoever, the content of the policy. But writing it out just to get it into a nice format? That's time that could be better spent doing something else.

You're absolutely right. So that's the balance of it. Three things: which part of the task actually needs you, what part can it safely do without any problems, and who is accountable for the outcome? That will teach you some responsibility in using AI. Let us know how you get on in the comments.

We've come to the end of the show, and I think what's been really positive about this is how many wonderful uses of AI there are. But those users are still seeing AI as a machine. I think where the problem comes in is that fine line where AI becomes a talkative companion. When you put in a Google search, it doesn't talk to you; it just goes, "here's one result, here's another, here's another, the end". Whereas ChatGPT says, "here's the information, and I've listed it out for you; would you like me to write it in a table now?" It's so sycophantic.

SPEAKER_01

It really is, and it truly sounds like a person writing back to you. Yeah: "would you like me to take that and change this and add this?" And like Nina said, you can go down a rabbit hole and be there an hour later, still making changes to it.

SPEAKER_02

But I still have to come back to this: talking to a machine still makes me say please and thank you, because I think it's going to learn bad things otherwise. That's a bit of a weird thing I have to get my head around.

SPEAKER_01

And it's interesting you say that, because I remember when Alexa first came out. We've got one now, and I do say thank you to it, and my husband looks at me, but it's because it's inherent in me to be polite. When they first came out, we didn't have one, and I was at my friend's house. She's got three small children, and I was listening to them barking instructions at it, and I thought: is this how, in another generation's time, people are going to speak to each other? Honestly, it made me feel depressed. And I love Alexa; Alexa is amazing. Mine's a "she", and she can be really funny if you ask her to tell you a joke or whatever. But it isn't a person. People interact with an Alexa as though it's a person, and children are brought up not knowing any different. It does make me slightly worried. I thought, oh no, I don't want that to be how people talk to each other.

SPEAKER_02

And that brings me back to another point, which I don't know whether I mentioned on the show or before the show, but it's worth mentioning again: if AI is there for me 24/7, I will come to expect people to be there for me 24/7, and when I don't get that, I'm going to struggle to cope. That's not a healthy mindset. No. So I think Nina's right: it's nuanced, but on the human element, we've just got to be more human.

SPEAKER_01

Yeah, exactly. And it goes back all the time to what you say: more connectivity, less content. Exactly. If everyone follows that, you can't really go wrong.

SPEAKER_02

And ironically, I'm very aware we're making content right now. We do try to make it helpful and useful, because I believe that if we're putting content out there in the mental health field, it has to be high-level, it has to be quality, and that's what we always try to give. So that's the reason we do the content, but we try to connect a lot more off-screen as well. On that note, have a healthy week.