By Joseph Peters | Jul 26, 2023
Welcome to The Voice of Counseling, presented by the American Counseling Association. This program is hosted by Dr. S. Kent Butler. This week's episode is Artificial Intelligence in Counseling, Part One, and features Dr. Russell Fulmer.
Welcome to The Voice of Counseling, from the American Counseling Association. I'm Dr. S. Kent Butler, and joining me today is Dr. Russell Fulmer. Dr. Fulmer is a senior associate professor in the Department of Educational Studies at the Institute of Leadership and Education Advanced Development, ILEAD, at Xi'an Jiaotong-Liverpool University. He is the director of the MSc in Digital Education program, is from the US, and holds a PhD in counselor education from Kansas State University. Before joining XJTLU, Dr. Fulmer taught counseling at Northwestern University. Prior to that, he spent time in medical education in the West Indies. He has a book, Counseling and Psychotherapy: Theory and Beyond, to be published by Cognella Press next spring. Dr. Fulmer's research interests include artificial intelligence as applicable to mental health and education, and psychodynamics. Ni Hao. How are you, Doctor?
I'm doing well. How are you doing today?
I'm doing well, thank you. I'm doing pretty well. It's exciting to have you. I want people to know that, as soon as I hit the ground running as president-elect, I received an email from you, and you said, "You know, there's something I think maybe you might want to put on your radar in terms of artificial intelligence." Can you talk about what that was for you, and reaching out to me, and how that whole thing kind of got started?
Absolutely, yes. As you say, even before you started your tenure as president, I was already, well, reaching out or pestering you about AI-
I don't think it was pestering.
... and about the possibility of the ACA doing something with it, maybe through a taskforce, and that came from, let's say, two major reasons. It starts with it being one of my areas of interest on my research agenda, so through that personal interest, I've done many a literature review, lots of due diligence, inquiring into the counseling and AI interface. That then leads to the second reason, because of what I found, which is not much, frankly. There's just a paucity of research, scholarship, dialog, conference presentations, you name it, about most things AI and counseling. To me, that is, as we say, a sizable gap in the literature. Well, not just in the literature, but from an associational standpoint, just most things counseling and AI, there's not much there.
So there's a big gap there, and I thought it's rare that we have something in, well, life I guess, with the potential of artificial intelligence. AI is burgeoning in just about every way, shape, and form, and in every aspect of civilization. Therefore, we can conclude, directly or indirectly, that it's touching, already is touching, all of us, and hence our clients as well. Something so massive, with the potential to really shape our lives even more, is probably something that counseling wants to start exploring, at least talking about in some way. So through my personal interests and just my observations of life and my lit reviews, I thought, you know, I wonder if the ACA would be interested, maybe in doing a taskforce, and Dr. Butler is going to be the new ACA president, so here come the emails.
Here come the emails, so I got the email, and I'm sitting there, and I'm looking at it, and I'm like... Artificial intelligence. I knew what it was, but I was just like, hmm, is that really my interest area? Is that something I could really put on a platform? I looked at it, and I talked about it with our CEO, Richard Yep, and I was like, you know, I think this is really something important for us to tackle and to look at. So I reached out to you, and therefore the taskforce has kind of blossomed. But before we go into the taskforce a little bit, I want to ask you, what is AI? What is artificial intelligence, and why, for the most part, should counselors care?
That's an excellent question, and a deceptively complex question, and one that, frankly, you'll find some overlap in the definitions that researchers, or computer scientists, or whoever the theorist or writer is, has provided, but I don't know if there is one definition of AI that everyone turns to, and I'm pretty confident there's not one definition of AI that all behavioral scientists or counselors turn to. Now, with those caveats in mind and all that disclaimer, I'm going to give you one. I'm going to give you one [crosstalk 00:06:17]
Right, right, right. So, I think it's really funny. I'm listening to you go there, and I think in my mind, when I first thought about AI, I went automatically to Hollywood, right? And all these movies that have all this technology, that is doing things. I don't know if this is true or not, and you can tell me if it is. I think the movie, The Terminator, that Arnold Schwarzenegger was in, has some type of AI stuff in it, and then like the... Everybody always thinks that, oh, it's going to go horribly bad, right? So you get this technology going, and then it has a mind of its own, and it takes over the world. So anyway, hope I didn't [crosstalk 00:06:58]
Yeah, yeah. I think that's common, and that is a common belief anyway, in that we're kind of educated through Hollywood. I tell people, and I don't know about you, but I don't remember the last time I went to an academic conference, tried to deliver a presentation through PowerPoint, and didn't have some type of a problem doing so, so we're probably a ways away from robots taking over the world, when I can't even deliver a PowerPoint presentation, that type of technology [crosstalk 00:07:31]
But on the other hand, you know, sometimes I wonder what is going on behind the scenes at the Googles of the world, Apple, the big tech, Facebook, and them. I know they publish some [crosstalk 00:07:48]
Meta now. I think Facebook just changed its name. I don't know [crosstalk 00:07:51]
I don't know if it's going to stick, but anyway. So yeah, there's a lot of things going on behind the scenes. I cut you off [crosstalk 00:07:59]
I cut you off with the definition, so what do you say is AI?
All right, so I've created a definition that I borrowed, modified from Max Tegmark, who's a physicist and talks about this. He said it's really the ability to accomplish complex goals, or of computers to accomplish complex goals. Mine would be the ability of nonhuman entities to accomplish goals. I would take out the "complex," maybe, because I don't know if this is categorical as much as quantitative. You know, there are smaller, maybe rudimentary goals, and there are more complex and big ones, but to the extent to which a nonhuman, or a synthetic, a machine, a computer can accomplish a goal, perhaps it is artificially intelligent.
I think the difficult term there is intelligence. Tegmark provides another... I'll paraphrase him: a conference of intelligence researchers could not agree on the definition of intelligence. You and I could probably spend all day on what is intelligence. You know, are there multiple intelligences? There's the traditional way, but that has... There's a lot of, I think, errors in that way of looking at it.
Right, right. Because one would tend to think that that leans towards book smarts or something along those lines, but in a very real sense, it could be a whole host of different things that really bring about intelligence.
I think so, I think so. So it's easy to understand. I like it. It's not a perfect definition. There would be some things wrong with it, but we could go with that. That, therefore, is casting a pretty wide net if you think about it, because we would have to then differentiate AI from other more vanguard technologies, like virtual reality and augmented reality. Are they accomplishing complex goals, or goals, and if so, then are they one and the same, or is VR kind of under that greater AI umbrella? I think it's really important though, because so much starts with what is something? You know, what is the definition? If we don't have that, then we're going to be making assumptions, and we're going to get on different wavelengths, and that can just create problems down the road. So I think the definition is important.
It is important, because I think also, getting out in front of the messaging is also important, because if someone else comes in and co-opts your message about what AI is, especially with regards to counseling, then it can go terribly left, right? Because I think about, even with tele-mental health, there have been people who went kicking and screaming into what it meant to be a counselor that could do distance counseling, or however it was that we kind of talked about it, right? So now it was tele-mental health, and what the ethics are behind it, and all those other things, so if somebody comes in and co-opts that message about AI, then it could very well kind of disrupt a good trajectory in terms of how we could get it incorporated into what we do as counselors.
Yeah, that's a great point. I agree, I agree, and perhaps an idea for the taskforce is for us to develop our own counseling and AI definition, or when they do interface, when they collide, what does that mean? I delved into this a little bit. I wrote a paper about this, and I tried to bring them together, speaking of definitions, with what the definition of counseling is. That was even an open question, until the ACA put out... I think they did have a taskforce, and-
Right, a consensus definition, yes.
Yeah, the consensus definition that I now use, and I share with students, but prior, I say unless you read it, or unless you just Google it while I'm asking you the question, my guess is we can go around the room, and I say, "What is counseling?" I'm going to get, if there's 10 people in the room, probably 10 different definitions. There'll be some overlap, but they're not going to be exactly the same.
I know that early on, students would say something like, "You know, it's where I can go give advice to..." Nope, nope, stop. Stop right there. We don't give advice. We don't do that.
Well, I like what the ACA put out. I use it often, and what I've done is I took... There are really three parts, as I see it, to the definition. It's a professional relationship, and then the second would be that it empowers towards goals, so I think... I should have this memorized, Dr. Butler, as much as I [crosstalk 00:12:55]
Well, you know what? That's really the three core things that [crosstalk 00:12:59]
The three big parts of it.
And I think that's really important, right? So that's what you... You don't have to remember the whole elevator speech in terms of what that definition is, but if you can remember those three core things, then you have the ability to kind of move the narrative.
Yes. Absolutely, so I look at it as the operative word would be empowerment, but you need that professional relationship, which puts you under a different set of ethics, and legalities, and such, and to what? To accomplish goals, so three levels there. Then I took AI and divided it into time periods: historical, modern, and then some speculation into the future, and then, do counselors... Can AI be a counselor? Can it help do counseling? Well, the answer to that would then depend on the definition, of course, but if we use that definition of counseling, what I came up with is, historically, does AI... Do we have a professional relationship? No. Did it empower? Likely not. Did it help accomplish goals? Likely not. Historically, with some of the more primitive AIs.
Currently, there's a little bit more of a research backing with contemporary AI. So do we have a professional relationship today when we apply AI? No, not a professional relationship. Does it empower? To me, that's unknown. Does it help accomplish goals? I would say likely yes. Now, not in all cases, but likely yes. In some cases, contemporary AI, even if it's just a bot, can help people accomplish their goal. Maybe their goal is to experience a little bit less anxiety. I think it can help with that.
So how do we simplify it? Because I know, it's really funny, this is a sidebar here: I used to say A and I. I kept saying A and I, and I was like, well, it's not artificial and intelligence. It's artificial intelligence. That's just a sidebar there. But how can we simplify it, just very simply? In simple terms, like for a fifth grader or a kindergartner. What would we say AI is, in really simple fashion?
Okay, so you got my definition about accomplishing goals. To a fifth grader, I'm going to have to talk this out a little bit.
It might be the ability of a computer, or even a robot, to do some independent communication with you, to talk to you without a human being kind of pulling the strings a little bit. So the next thing you know, you're just, quote, "talking," I should say, interacting with this machine, or this computer, or this robot, and it's either through text, or maybe even verbally communicating with you a little bit. So that's kind of an AI 101 application, but I might start with that, if I'm talking to-
So how is that different than trying to help somebody understand... Like, there's these games that people play now, and I'm not one... I'm not a gamer, so I'm not going to have the terminology right, but you know-
... back in the day, it was Atari, and now it's like Nintendo Switch and all these other things.
I grew up with classic Nintendo, NES.
There you go. So, those things would often give you opportunities to take pathways, right? So if you choose this option, you'll go this way. If you choose that option, you'll go this way. It's almost like the Madden football, right? If you're playing it, it interacts with you as you kind of go through and make these different, I guess opportunities to move forward, to score, right? Because you're playing against something. I don't know what it is, to get down the field, and something along those lines. Is that even close to what AI might look like?
I think so, and to just continue with the gaming thread here, Watson, from Jeopardy, was an example of an AI [crosstalk 00:17:37]
... playing, in this case, a more kind of a tough game, in Jeopardy. Another example would be AlphaGo, so the board game, Go, that's played widely in Asia. It's not played, in most circles, at least here in the States, as much, but it's thought to be an intuitive board game, and I think with more possible options than there are... I don't want to say atoms in the universe, but let's just say atoms in the universe. Anyway, they devised AlphaGo, and it beat the world's best Go player, and it was a milestone because of... They thought it was more intuitive. It wasn't just a rote, "If this, then that," so the algorithm had to be more, shall we say, complex, and it certainly solved a complex goal there.
I might add, just to continue one more along this scheme, of where AI has its, well, virtual hand in. Even in the arts, that's another area that many thought it was either never going to happen or we were a long, long way from it, and there are now AIs that can help... Now, we can argue if it's good or not, and many people would say it's not. I won't dispute that, but paintings, composing classical music. They'll play, for example, Bach, and then here is the AI-composed Bach, created Bach-like, Bach-esque, and can you tell which one is which? Of course, the aficionados wouldn't have much of a problem with it, but to my ear, not being a classical music aficionado, I hear them both and think, I don't know. They sound pretty similar to me.
I'm going to tell you right now, when it was records versus the new CDs, remember when that whole thing went out?
I remember saying, "I don't hear a difference." People were like, "Oh, you don't hear the difference?" I was like, "No, I don't hear the difference."
I mean, music sounds good, and if I'm playing it on my phonograph or if I'm playing it in the CD player, I'm not detecting this change. Then, people started telling me what that change was, and then I'm like, "Oh, okay. I get it." So-
May I just add something really quick, Dr. Butler?
It's a great illustration. We were talking about music, and maybe the aficionados know, but you and I, maybe we wouldn't be able to tell. Well, in a-
... in a way, and this is kind of an apples to oranges, but I'm going to go there, even though some people may not like it-
It can be kind of that way with providing mental health support.
An AI providing mental health support, when a trained counselor sees it, they're very skeptical: "I don't know about that, and it can't do this, and I don't like that." Okay, they've got a point. However, to the people who don't live in our counseling world, and are just after a little support, it can work for them. We, in that sense, would be the classical music aficionados, right? With the finely-tuned ears. But other people aren't in that world, and they look at it, well, maybe it's helping-
You bring up a really good point, right? In a good use of it, it could take bias out of the equation, right?
That's another big one to talk about.
So maybe we could talk about this again, but it takes bias out of the equation, but if the actual programs that went into creating it were biased, then you have a biased AI?
And it can be disastrous.
Yeah, and sometimes it's bias that was inserted into the algorithms initially. Sometimes it happened unknowingly, or it was through machine learning: the AI then learns from its users, and the users teach it to be biased, or discriminatory, or what have you. There's a pretty notorious case of this. It's called Tay, T-A-Y, Tay. I want to say that was Microsoft, or I don't... Anyway, it was put out, I think in the Twitterverse or somewhere, and because it learned from the language and the dialog of the users, you get a bunch of, in some ways, nasty and vile stuff, and then it learns from them, and the next thing you know, it's spitting out these biased, to say the least, just horrible things. So it can be either way depending on the type of AI. It can be from the creation, or from how it's learned, machine learning as they say, you know? Machine learning. So, a host of ethical questions, and it just goes to show that we've got to be very careful.
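To make the machine-learning point above concrete, here is a minimal, hypothetical Python sketch (not code from Tay or any real product) of how a system that keeps learning word associations from user messages ends up reproducing whatever bias those messages carry. The class name, labels, and group names are invented for illustration only.

```python
from collections import Counter

class ToyChatLearner:
    """Toy model that learns word-level associations from labeled user messages."""

    def __init__(self):
        self.pos = Counter()  # words seen in messages labeled "positive"
        self.neg = Counter()  # words seen in messages labeled "negative"

    def learn(self, message, label):
        words = message.lower().split()
        (self.pos if label == "positive" else self.neg).update(words)

    def score(self, word):
        # +1.0 = purely favorable association learned, -1.0 = purely unfavorable,
        # 0.0 = no data for this word yet.
        p, n = self.pos[word.lower()], self.neg[word.lower()]
        return (p - n) / (p + n) if (p + n) else 0.0


bot = ToyChatLearner()

# If the stream of user "training" messages is skewed against one group,
# the learned associations are skewed the same way: garbage in, garbage out.
user_messages = [
    ("group_a folks are great", "positive"),
    ("group_a folks are friendly", "positive"),
    ("group_b folks are terrible", "negative"),
    ("group_b folks are awful", "negative"),
]
for msg, label in user_messages:
    bot.learn(msg, label)

print(bot.score("group_a"))  #  1.0 -> favorable association learned
print(bot.score("group_b"))  # -1.0 -> the users' bias now lives in the model
```

The mitigations the speakers go on to discuss, transparency, more diverse programming teams, and careful curation of what a system learns from, amount to controlling what flows into a loop like this one.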
We've got to be very careful. That makes perfect sense in regards to that. But how do we ensure that, right? How do we ensure that bias gets out of AI?
Well, okay, so ethics and AI. I wrote another paper on this, one that just came out, because as we're trying to plug that big gap in the literature, I figure so much has to start with ethics, because if we don't have good ethics, then we're not going to have much, and we don't want to go there. I think there are a number of ways that we can help with this. For example, transparency from the AI companies who create this. I think they should be more transparent about the algorithms that they use, who created the AI, the AI's limitations. The AI companies could do much better with cultural diversity in the training of their programmers, because, to your original point, that is the genesis of a fair amount of bias. [crosstalk 00:24:09] all of it.
So when you look at the AI community, and you think about the number of individuals who are doing this work, a lot of the things that they've been doing, especially the things that might add to the bias, is that they were norming it on themselves, and it's like, who's in that field, right?
Right? Do you have a wealth of individuals who come from different intersectionalities in the field, that are doing this work, or are you norming it on a white, cisgender male?
Yeah. There's a lot of people like me in those fields, and not only at the top, say in the CEO roles or the administrative roles where it gets started, but then there are folks similar to me at the nitty-gritty programming level, and I don't claim to be... I'm no computer programmer. Don't get me wrong, but I do know enough to know that yeah, to your point, they are, at least unintentionally, norming the algorithm to identify and associate this image in accordance with their worldview. You start with that, and then it's just going to kind of snowball, and the next thing you know, you get some really terrible things happening through the AI, which we don't want to anthropomorphize. It's not really the AI being independent. It's just a computer, essentially. It's a program that was programmed. It was taught to do [inaudible 00:25:45]
Right. Yes, so you know, there was this case where I heard that it wasn't recognizing certain ethnicities, and if you don't have people in the room... Again, that's about inclusion, right? If you don't have people in the room to kind of speak to the fact that we need to really make sure that we're pulling in all individuals, and all types of situations, then we run the risk of doing harm.
We do, and that then is nonmaleficence, and that's in our ethical code, "Do no harm." Part of a counselor's advocacy efforts, even if they don't use an AI clinically, "Here's the AI I'm using," a case can be made that we are ethically obliged to at least advocate on some level, because AI, again, is... I don't want to say it's omnipresent, but it's omnipresent, and so it's-
Well, listen, we have about three minutes before we go to break, and I want to ask you, maybe to start off this, and then we'll kind of come back to it after the break, but what is the history of AI in counseling?
Traditionally, not much. If you do a thorough review of the counseling profession and AI, you're not going to see a lot of interaction there. I think the AI and counseling interface, they've found each other a little more recently. So again, using my model: historical AI, back when they had what's called the Dartmouth Conference in the '50s, and then we went through this AI winter, as they say, in which research, and funding, and advancements kind of plateaued or dried up a little bit. And then now, I don't know the exact date that I would say, you know, "From this date precisely," but here in more contemporary times, AI and mental health are coming together, so how can we not... Why would we not, as counselors? Because, with all due respect, the psychologists are there, you know? There are psychiatrists and the MDs. Medicine is there, and we have our profession, and we are mental health supporters, promoters, providers as well, so it seems to me that now is the time for that AI and counseling interface to become a little more prominent.
Yeah. Well, we'll take a little bit of a break shortly, but one of the things I wanted to ask you is, we talk about AI maybe in the counseling room, between counselor-client type situations, but there have got to be other ways that AI can be very impactful in the way that counselors do their work. I'm thinking about a private practice person who is doing this work where they're maybe keeping notes or doing billing and all these other types of things. I'm wondering if AI has the opportunity to kind of connect in that manner as well. Maybe something to think about. After the break, we can kind of talk a little bit about that as well. Then also, I want to hear from you with regards to this big decision to end up in China.
Sounds good. Sounds good.
Sounds good? All right, so why don't we just take a quick break. This is The Voice of Counseling. I'm Dr. S. Kent Butler, and we'll be back in a moment.
Counselors help positively impact lives by providing support, wellness, and treatment. We're working to change lives. We are creating a world where every person has access to the quality professional counseling and mental health services needed to thrive.
Hotep. Welcome back to The Voice of Counseling with Dr. S. Kent Butler. This is an American Counseling Association inspired program, so we have Dr. Russell Fulmer with us right now. We've been talking about AI in counseling spaces. I want to bring you back in, Dr. Fulmer, and I want to just kind of have a sense of where we left off with talking about what counseling means in the eyes of an AI expert. Can you talk a little bit more about what it is that we're doing, and how counseling intersects with that?
Yeah. I think the possibilities of bringing AI into clinical counseling are immense. Then on a more macro level, I think there's something to be said about the existential impacts, really, of AI on clients. It seems to infiltrate... We can go in so many different directions with it, but just for example, it's well known that AI and automation are often paired together, and then that has impacts on jobs, and many clients are seeking employment, seeking employment amidst a global pandemic, and burgeoning technology, and everything else in this day and age of just really fast change. Clients have a lot on their plate, and clinical counselors have a lot to sort out when they talk to them.
More clinically, there's some pretty interesting research going on with bringing in some neuroscience, I know, to AI, that has implications for, say, diagnostics, for diagnosis. I mean, look at something like depression and the traditional ways that MDD, clinical depression of any stripe, is diagnosed. There might be a psychometric inventory and assessment that can help, true, but sometimes it's subjective. It's clinical judgment and such on the part of the clinical counselor. Well, nowadays, AI is starting to help. I'm not suggesting that just anyone's going to go out and get this program, but it's starting to help with that by way of, really, I guess, data analysis. Big data would be one way.
AI can find patterns that the individual naked eye will probably miss. Then they link a bunch of either written or, in some ways, even verbal communication with certain depression symptoms. There's other research, not to go off on this too much, I think it's really interesting, but it can listen to a therapy session, and it will show a little graph at the bottom: you'll see this is where chitchat was going on, this is where cognitive restructuring was going on.
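As a hedged illustration of the session-tagging idea described above, here is a toy Python sketch: a keyword-matching labeler that marks each utterance as chitchat or cognitive restructuring. Real research systems use trained language models on large datasets rather than hand-picked keywords; the labels, cue words, and function name below are assumptions made up purely for this example.

```python
# Invented cue words per label; a real system would learn these from data.
SEGMENT_CUES = {
    "cognitive_restructuring": {"thought", "evidence", "reframe", "belief", "assumption"},
    "chitchat": {"weather", "weekend", "traffic", "game"},
}

def tag_utterance(utterance):
    """Label one utterance by whichever cue set it overlaps with most."""
    words = set(utterance.lower().replace("?", " ").replace(".", " ").split())
    scores = {label: len(words & cues) for label, cues in SEGMENT_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

# A tiny made-up session excerpt; printing one label per utterance is the
# simplest version of the per-segment graph described above.
session = [
    "How was the traffic on the way in? Nice weather this weekend.",
    "What evidence do you have for that thought?",
    "Could we reframe that belief a little?",
]
for line in session:
    print(f"{tag_utterance(line):25} | {line}")
```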
I can certainly see the use in that as it becomes a little more prominent. You could say that will help us be more data driven, perhaps, in counseling, through-
More specific, if anything else, right? I mean, you can really-
... use it as a tool to stay on top of your game. Here's a question for you. You made me think of this as you were talking just now, and I don't know how farfetched it is or whatever have you, but I'm going to try it anyway. We have a situation where, in counseling, it's unethical to kind of formulate a relationship with your clients in any kind of a way beyond the professional one, right? You meet once a week, and you do your counseling session, and you go on about your way, right? You come back the next week. AI offers up a different opportunity, right? Is it possible for there to be an improper relationship between an AI type of a situation and a client? And also, where we meet once a week, could there be opportunities for AI to come in and be more instrumental in the life of a client, where they can kind of meet more often? Then does that cause a problem as well?
Intriguing questions here. I see a research article or a book about this one.
Yeah, yeah. Well, we might want to work this out.
Yeah, yeah. The first one... Improper relationships. Maybe we could go back, and I don't mean to keep beating this dead horse, to the definition of counseling. You've got to have a professional relationship established in order to do counseling, right? And to be involved with clinical counseling. Typically, today, I know some cases of counselors and counseling centers, and there are some colleges that do this. They might use a... We have these AI bots, these mental health support agents. Woebot is a commonly noted example of that; another is called Tess. I've done some work with that one. They're essentially little avatars on your smartphone, and they will provide psychoeducation, maybe CBT and such. Part of me thinks, how could a person develop any type of feelings or improper relationship with them? On the other hand, I know that-
Never say never, right? You never say never, and there's a case in, well, in China. I think it was Microsoft again, who introduced a bot, Xiaoice is the name, that wasn't even supposed to be counseling, I think, but still, a bot to interact with, and they kept track of some of the data, and I think over a million people said, "I love you" to Xiaoice. And this is just text-
You know, there's, again, not to talk about Hollywood, but there was a movie about that, where the person got really enmeshed with the AI type of body or whatever it was. But anyway, I don't want to go there, but go ahead.
Yeah, so it's an ethical predicament that will probably arise sooner or later, by way of a counselor or counselors at least augmenting their in-person, real counseling via an AI, and then it will happen, and then we will, "Let's talk about it. What do we do? And is this even a thing? Is it possible, or how do we handle it?"
Right. Your co-chair, Dr. Williams, is kind of tinkering in that area, with regards to that type of avatar type situation, so we'll have an opportunity to [crosstalk 00:37:08]
Yeah, Dr. Williams is doing some, I think some great research in this...
So, when you think about all the stuff that's been going on, is there anything, before we get off today, that you want to bring in to this conversation, that might kind of also support the need for AI in what we are doing in counseling?
Well, we've traditionally used a... many have, a biopsychosocial model in case conceptualization, even to help with treatment and such. So if we look at the biopsychosocial model, and then we think counselors seek to help people improve their mental health, so from a biological angle, some clients are on, say medication. That would be an example of the bio. Now, I know most counselors don't prescribe that, but they would help with treatment, with management. Then, from a psycho perspective, a counselor might explore intrapsychic processes, you know, the individuality [inaudible 00:38:11] From a social standpoint, counselors might advocate to change systems that disempower their clients.
I would suggest another addition to the biopsychosocial, maybe a biopsychosocial-techno, BPST. I'm not sure if, especially avant garde, this high technology falls neatly under any one of them, so if we look at how we can facilitate mental health, biologically, psychologically, and sociologically, maybe AI would fall under that T, techno part, and that can be another avenue, another domain or sphere if you will, that we examine, and ultimately use to help clients.
And just to point out the obvious, you know, tech is everywhere.
I hardly know anyone who doesn't spend, I don't know, probably hours looking at a screen each day, their computer, or phone or-
You know what? I have an iPhone, and every week, it gives me my weekly update on how much time I spent, screen time, and sometimes I'm embarrassed by the number of hours that I've been on that screen, so [crosstalk 00:39:27]
Well, yeah. I think you're not alone either, you know? If we just acknowledge reality, there are very few indications that this is going to go down in the future. We're probably going to have a more-
... technological world. You know, so I'm not saying it's all good, and I get that part, but just acknowledging the reality of it, and then doing our part to maybe just shape it here, use it for our clients' advantage there, could be something that we put on our radar.
Nice, nice. I want to shift gears a little bit. We've been talking about AI and things along those lines. I want to find out what, in your brain matter, in your brain power, has you moving towards this as a way of research and a way of looking at counseling. What was it about you and how you see counseling that got you there? And the other side of that question is, how do we... You know, you're saying that there's a dearth of research in this area. How do we encourage new counselors, those who are coming into research, down this pathway to kind of help bring forth a much stronger, more robust research effort when it comes to AI?
That's a good question. First of all, that second question: it seems to me that just education and knowledge on the part of counselor educators might help. If counselor educators have at least a rudimentary knowledge and understanding of AI, then they're going to, in turn, maybe pass along some of that knowledge to students. It will stimulate conversations, and then who knows what will happen from that?
So that could be a start-
You know, it's a good marriage, because most young people, if you're looking at it that way, and I'm not trying to... This is not ageism. I'm really looking at it in terms of people who are coming into the profession. They're coming from that mindset anyway, so it does make sense for counselor educators to kick it up a notch, and be really helpful, or impactful, or influential anyway, in terms of how people are using AI [crosstalk 00:41:41]
Yeah. I think so. I mean, many of them, maybe because of... Or not maybe, I think almost certainly because of COVID, you know? I know it's facilitated this tele-mental health, and then we have tele-mental health technology, and that could be a way to... And then some of these other things like AI, VR, and AR, that could be a segue into... I could see all those going together. But I agree. The younger generations, they're brought up with a... seems like with a screen in their hand, and some of us weren't, but that's how it is these days [crosstalk 00:42:14]
Yeah. They're doing all kinds of things, even [crosstalk 00:42:16]
They do. And then with the first question, personally, I'll say two things about my interest in AI. One, I keep a keen eye on mega trends in life and society. I'm not a details person. In fact, I'm pretty bad at detailed types of work, but I'm pretty good at the big picture. I kind of gravitate towards that, and mega trends, and I got to... That then would lead to number two, which is, frankly, intuition. My intuition just kind of led me there, and I developed an interest, maybe because of my identification of AI as a societal mega trend, and then I just have kind of a futurist bent to me. I find it intriguing. I started looking into it, and you know, not a lot of folks are doing this, in our world anyway, and I wondered what it would be like when we bring the two together. What does that look like, and how can we explore here and there? There's just a plethora of research questions for the researcher, waiting to be explored.
You know, that's what we talk about when we talk about thinking outside the box, and exploring things that really help to move the needle in what counseling has become. We are light years away from where we started over 100 years ago, when counseling really first came to the forefront. And now, we're looking at technology as a way to kind of be our future, so that's really a neat process.
I think so. Again, it starts with acknowledging what is. Many people, I think jump to what should be. "I don't like this. You know, I have a reaction to it." And I'm not saying that AI is perfect, or it should... I'm not saying it should take over, and take our jobs, and it can do what human counselors can. I'm not saying that. I'm just acknowledging that it's here. It shows every sign of burgeoning. If we acknowledge that... We ignore it and deny it to our own peril.
You know, that's really funny, because that's when the message gets co-opted, right? Because then, somebody's going to be like, "Oh, we're going to lose our jobs. They're going to do this. They're going to... All this stuff is going to come in." No, there's always going to be space for human beings. There's never not a space for human beings, and our profession is about us connecting with people, so it makes a lot of sense. But to have something in your back pocket that helps support what you do is not a bad thing.
It's not a bad thing, and it's also respecting the diversity of human experience and personal preference. There's some people, if you could use an AI, if we had a wonderful, advanced AI here that can help, some clients may not want to have anything to do with it, but others might, you know? Just like most therapeutic modalities, and theoretical orientations, and techniques, one size doesn't fit all, and I think the more versatile a clinical counselor or even the profession can be, probably the better, you know? You have more tools in your proverbial toolbox to draw from, and maybe this can be one of them.
Yeah, that's pretty neat. So, as we start to look at winding down this counseling hour that we have together, can you tell me a little bit about what your hopes are for the taskforce?
Well, in general, when I approached you, I thought we should find a way to put AI on the proverbial radar screen of the counseling profession. We've had talks about everything from maybe videos to get it through, or, and I know this is boring and no one likes this, the academic papers: this is what it is, and then you read this article in this paper. I can see room for both, but the taskforce has come up with the idea of maybe some of us, through video, explaining, and after all, people might have the time and the inclination to watch a five-minute video, and that could be a little more interesting. Then you at least have an understanding, versus, you know, "Here's a 15-page paper; read it, and see what you think." We do that sometimes, but-
I agree with you. I agree with you 100% there, because I think that... I'm of the mindset that we do all this research that goes and sits on the shelf, as opposed to it being practical and something that people can use, right? Counselors that are out in the field every day need to be able to see these things, touch these things, feel these things, and be able to incorporate that into the work that they do. A journal article, with numbers, and statistics, and all those other things doesn't always get there, right? So you've got this person who's reading this thing, maybe not even understanding it, because you know, just like when you send a bad text to someone and they misinterpret what it means, there are misinterpretations in even research that people are reading. Then, you're telling them to take that and then kind of reimagine it and duplicate it in their environment, without much help, right? AI could be really important in helping people see how they can take something and actually incorporate it into the work that they're actually doing in their own environment.
Yeah, well put. Well put. That is true, and it's also in alignment with... We're talking about technology, and you know, video technology is kind of par for the course here, versus the old-fashioned journal article, even though I kind of like to do me some writing every now and then, to-
Nothing against writing. I have nothing against writing-
... but there are people who are doing this writing, and they're not doing anything that's helping to translate it into what people can do, with all practicality, in their own work environment.
Shouldn't that be what research is?
Yeah, it should be translational in some way. Otherwise, it's as you say, it's sitting on the shelf, collecting... On the proverbial shelf. I guess today, more the electronic shelf-
... maybe a citation or two, right? That's great. You got a citation or two, you got somebody maybe tenure and promoted, but what did it actually do in the community, for the people who are actually needing their support and help from a counseling perspective?
Yep. Hence the importance of translational research, practicalities. Absolutely.
Well definitely great. Life is changing for you. You're about to embark on a... Or you have already embarked on this, but you're about to embark on a life in China. I believe you're moving there. Can you talk a little bit about what that experience is going to be like for you, and what you're looking forward to?
I don't know what it's going to be like. I'm looking forward to it. It will be an adventure. I'll be at Xi'an Jiaotong-Liverpool University, and I hope to continue both my AI work and research, and of course collaboration. I see the opportunity for some pretty interesting potential collaborations, maybe between the department or some schools in China and some of the people here as well, so we'll see what happens.
Excellent. Well, I wish you nothing but success, and wish you well. I appreciate that you reached out to me, and that we created this relationship now that will keep us moving forward and pushing the AI agenda forward, so thank you Dr. Fulmer, for being a guest today. It's been a quick hour, it seems. That's what happens when you have great conversations, and thank you for being a part of The Voice of Counseling, from the American Counseling Association. We are going to close out for today. Our hashtag is TapSomeoneIn. You know, shake it up a bit, so it's really nice to see you today, and I'm looking forward to us moving this needle forward with the taskforce. Any last-minute words?
Just thank you so much, Dr. Butler. Thank you for having me. It's been a pleasure to converse with you during this hour. It's been fun. Thank you.
No worries, no worries. Thank you for joining us, The Voice of Counseling. We're out for today. Have a great day.
ACA provides these podcasts solely for informational and educational purposes. Opinions expressed in these podcasts do not necessarily reflect the view of ACA. ACA is not responsible for the consequences of any decisions or actions taken in reliance upon or as a result of the information and resources provided in this program. This program is copyright 2021 by the American Counseling Association. All rights reserved.