James Taylor (00:09)
My guest today is Marissa Afton, partner and head of global accounts at Potential Project, a global research and leadership development firm helping companies like IBM, Amgen, and Eli Lilly build more human workplaces. Marissa is also a co-author of the new book, More Human: How the Power of AI Can Transform the Way You Lead, a powerful and timely guide for leaders navigating the intersection of artificial intelligence and human potential. In More Human,
Marissa and her co-authors argue that AI shouldn’t dehumanize leadership, but elevate it. Drawing on interviews with over a hundred global executives and research from thousands of leaders, they show how AI can help us become more aware, wiser, and more compassionate in how we lead. Today, we’ll explore what it means to be an AI-augmented leader, how to lead with both heart and algorithm, and why the future of leadership is not about being more digital, but being more human. Marissa,
Welcome to the Super Creativity Podcast.
Marissa Afton (01:09)
Thank you so much, I appreciate your introduction and it’s so nice to be here.
James Taylor (01:15)
So before we get into the depth of the book itself, what first sparked your interest in human-centered leadership, this field more generally?
Marissa Afton (01:24)
Well, as you mentioned in your introduction, Potential Project is a global research and leadership development firm, and we have been around for about 15 years now. Our mission is to create more human workplaces. Sounds simple, not necessarily easy, but the way we do it is we invite leaders to enhance their own leadership potential by understanding their own minds and their mindsets,
and how mindsets shape behaviors and how mindsets shape action. So we have for a very long time been very committed to looking at the research-backed outcomes of being a human-centered leader. And now we have extrapolated the research we’ve already been doing for some time into how we support leaders in an AI-enabled world.
James Taylor (02:19)
Now, when you were probably starting to think about this book with your co-authors, we might have been just coming out of the kind of pandemic period, and everyone was talking about distributed teams and virtual teams and working that way as well. And I’m guessing you were probably thinking about taking the book one way, but with AI, maybe things changed. How did this book develop, and did it become a different beast as you started creating it together?
Marissa Afton (02:45)
It’s so interesting that you’re asking that question. So in fact, we were already under contract with HBR, and HBR has been our partner now; this is our third book published with them. And we were under contract for a completely different book. It was going to be on selfless leadership, on ego-less leadership. Stay tuned, we’re still under contract with them for that book. But right when we were at the beginning stages of looking at that topic, they came to us and they said,
Hey, have you heard about this thing called generative AI? And this is going back to about 2022. And we thought, yes, we’ve heard about it, but why are you coming to us? And they said, well, we understand that you’re not experts in the technology.
but you are experts in leadership. And we’re really curious about starting an exploration on how AI can enhance leadership, or, to the point that you made in your introduction, could it dehumanize our leadership? And that became very interesting for us. So that’s how we started. And to be really transparent, we weren’t all in on AI. I mean, we did have some skepticism about whether AI can actually elevate our leadership potential.
Can AI help us be more human-centric at work? We were skeptical, but now we see that actually there are some real benefits that AI can bring to our leadership.
James Taylor (04:14)
I saw Jeff Bezos being interviewed the other day, being asked a question on stage, and someone said to him, the first question people ask him as a leader is, what’s gonna change? And he said, that’s an interesting question, it’s a good question: what’s gonna change with the artificial intelligence we’re seeing? But he said the better question, probably the more interesting question, is, what’s gonna stay the same?
Marissa Afton (04:40)
Hmm.
James Taylor (04:41)
as we go through these different changes. He said, because you can build a strategy as a leader around things that stay the same. In his case, he was talking about Amazon: in 10 years’ time, people are still gonna want fast delivery. No one’s gonna be saying, actually, could you just deliver it, but make it a little bit slower? So, he said, we can build a business around that. As you started talking to all these leaders about artificial intelligence and how it was reshaping their industries, their businesses, but also themselves as leaders, what surprised you about some of those initial conversations?
Marissa Afton (04:55)
Yeah.
What surprised us at first was the disconnect that some leaders had about how much AI is here and it’s here to stay. And so we noticed that there was a bit of a gap between some leaders who were kind of all in, they were early adopters, they were really ready to leverage the benefits of AI, may or may not have seen some of the risks of AI for their leadership, which we can talk about. And then there was another huge category which…
I think is shrinking a bit over time, but it’s still there, of leaders who are like, hmm, AI, do I really need it? Is it really helpful? Maybe if I just wait it out, it will be a fad that will pass. So that was one of the first curiosities we had: those who were early adopters, who we actually think are augmenting their leadership potential even more by using AI, and those who were kind of waiting for
maybe the digital natives to use it for better efficiency, but weren’t seeing the benefit to themselves as leaders.
James Taylor (06:09)
And for those that were probably in the middle, maybe the 80%, they weren’t the outliers, they weren’t the early adopters, but they also weren’t the laggards. What kind of questions were they asking of themselves, their leadership teams, the people reporting to them, advisors, about artificial intelligence?
Marissa Afton (06:28)
Well, I think one of the questions that we have noticed that has become a real theme for us is around leaders really asking the question first and foremost about what AI can do for them. And we know AI can do many, many great things. It can make you more efficient. It can help you with tasks. It can help you with decision making. In fact, there’s some
pretty scary research out there, not our own, that up to 70% of leaders would prefer an AI to make decisions for them rather than having to make their own decisions. And it’s simply because leaders are overwhelmed. They’ve got so much going on, and the pressure is continuing to increase. And so while that’s a good question for leaders to ask, which is, what can AI do for us?
We also have invited the question, what can AI do to us? And I think that’s the balance that we need everybody to embrace: it has great capabilities, but it also has some risks.
James Taylor (07:29)
There’s a line in the book where you say AI gives us time to be better leaders, but we often fill it with more work. Why do you think we struggle so much with that? You know, I remember years ago reading books like The Four Hour Workweek, and I know a lot of people really went all in, and all they ended up doing was working even crazier than they probably were before. So they didn’t end up working four hours; they used some of the ideas in that book, but…
So with those leaders, if, let’s say, they are releasing themselves from making some of the decisions, and they’re able to use AI to make better decisions or think through things in a better way, and they start to save some of that time, where do you think they should be diverting that time? How should they be using that additional time they’re gonna gain from this?
Marissa Afton (07:58)
Fill the time.
Yeah, thank you, James. It’s a great question. And to the first part of your question, why do we do that? That’s kind of an age-old question. I mean, nature abhors a vacuum. And so we see that space, and we think about how email was supposed to be the great time saver; how well has that worked for us? I mean, we just seem really, really prone to fill in time with activity, busyness, which doesn’t necessarily equal impact. And so what could
leaders, or dare I say what should leaders, be doing with that time saved? This is where that human-centered approach really comes in. If you can become more efficient with the AI tool, your AI agent, well, could you just add more work to your plate? Of course you can, or you can make a different decision.
This requires some intentionality for leaders to determine what’s the best way for me to help engage my people, to help bring them along with me. And so really the invitation is if you’re using AI to help you prepare, for example, for a performance conversation, do you just cram in more time or more activities, or do you actually create some of the mental space to think about how do you want that conversation to go?
where do you want that person to end up and land in the dialogue that you’re about to have? Let the AI do all the data collection and the sentiment analysis, and use the algorithms to give you the insights on where that person is in their journey. And then you use your human head and heart to really make the space to determine what you want
that conversation to evolve into and what you want that person to get out of that conversation, as an example.
James Taylor (10:14)
I use AI before, let’s say, a sales call with someone. I’ll use AI to analyze the psychometrics of the people I’m gonna be on the call with, in advance of the call, whether it’s a board or whoever. And I remember doing it early on; I’ve been doing it since like 2018, I think, when I started using very early tools; IBM was starting to make tools to be able to do this. And I was just kind of playing around, and it was great. But early on it felt a little bit
Marissa Afton (10:24)
Excellent.
James Taylor (10:41)
too machine, almost machine-like. And then it took me a little while to figure out how to blend what I’m learning from the machine and bring the humanity bit back in as well. It feels like that’s where we are for a lot of people. Everyone sees the potential of the AI, and you saw it first with customer service companies rolling out chatbots and agentic AI, and everyone then starting to go, whoa.
Marissa Afton (10:58)
Exactly.
James Taylor (11:10)
Whoa, whoa, whoa, we’re kind of losing sense of who we’re serving and the human part of this.
Marissa Afton (11:16)
But it’s great that you had that insight, and that you had it early. And I think that not everybody has caught onto it, especially as the AI tools have become more and more sophisticated. It seems like every day it’s evolving, it’s constantly learning, it’s constantly growing in its capabilities to be more human-like and to have more of that human touch, almost, so you don’t have that kind of robotic machine sense when you’re interacting with it anymore. And actually that’s a bigger risk.
Because the more human it becomes, the less human we need to be in how we are interacting, or how we’re using the outputs of the AI. It creates this beautifully crafted communication, and you think, that’s it, I’m just going to click send,
without actually using your own discernment as a human to determine, number one, whether what it’s put out is a hallucination; is it even giving you valid facts or data? And number two, does it sound authentic for you? And this is where I see leaders get trapped: by letting the AI help inspire a new way of communicating,
but forgetting that they still have to put their human element on top of it, as you describe that you do.
James Taylor (12:34)
I’m also seeing it because so much of the training data for the AI was trained on Western ideas; actually, I would narrow it even further than that: Western, often written by people that probably look like me, white guys living in the Bay Area, and there are certain biases that automatically go in when you’re doing that. And I was actually having conversations with two very different groups the other day. One was in the Middle East, where they’re launching a
series of AIs; Falcon is probably the most well-known, in the UAE, because they said, we want AI that also reflects our culture and has more training data related to us. And then I had a conversation with one of the Native American tribes, who were talking about this as well, but from a slightly different perspective, because he said for them, they were worried more about things like credit history and credit biases that are coming in on the finance and loan-lending side of things. So as we start,
Marissa Afton (13:11)
Yes.
Mmm.
James Taylor (13:31)
looking at bringing back that human side into this, that human-centered leadership: what structures have you seen companies start to use, or leaders start to build around themselves, to ensure that they bring it in not just individually as leaders, but across their whole leadership team? So I’m now kind of talking about things like governance and having ways of just checking some of this stuff.
Marissa Afton (13:53)
Mmm.
So one of our principles is always that humans need to stay in the driver’s seat, right? The human still needs to leverage the AI, not the other way around. And one of the things that we notice, as you just said, the benefit of AI is it can give you unlimited perspectives. The risk of AI is it can put you into an echo chamber. And the echo chamber is based on whoever has taught the AI whatever it is that it’s being asked to do.
And when we don’t use our own critical thinking capability, we can get trapped in the echo chamber, reinforcing our own biases that we may or may not even be aware of, rather than looking at the potential of the unlimited perspective-taking that the AI offers. It has
more knowledge than any human on earth can have, but if we just allow the AI to be the expert, instead of allowing ourselves to be the experts and really checking in and making sure that humans are in the driver’s seat, that’s where it can be problematic. But the other thing I just want to say, James, is there’s also a real opportunity, and I love that you’re bringing up, you know, different perspectives from different cultures. And I have a lot of leaders who
want to make sure that they are inclusive and that they are culturally appropriate. And this applies whether you have a team member who’s in a different geographic culture, or even a team member who has a neurodiversity, for example, and who may approach perspectives or challenges in different ways.
And that’s where AI can actually help us. You can use the AI as a little bit of a coach. Like, if I’m, you know, the white, middle-aged, female American, and I’m speaking to my younger, digitally native colleague who’s in India, for example: what’s a way that I can approach this individual in this scenario so that they’re going to be able to hear me and feel heard at the same time? That’s another way of looking at it.
James Taylor (16:10)
Now one of the other lines you had in the book is you talk about AI as an exoskeleton for the mind and heart. Can you unpack that a little bit? Just give us some of that in terms of practical terms for the leaders.
Marissa Afton (16:21)
Yeah, so the idea of an exoskeleton is that it helps strengthen our own capabilities. And so when we’re really looking at the AI and the leader, we’re seeing the marriage of both ends. We’re seeing that the AI has some great strengths and, as I’ve also talked about, some risks if we’re just
over-indexing on the AI making all the decisions. And then the humans also have some great strengths, and some downsides, as we’ve just described. And so the exoskeleton idea is that we take the best of both to enhance them together. And one way that we suggest doing this is a little bit of a dance. And maybe, just to step back, James, to illustrate this:
When we started doing our research, not only did we do a lot of data analysis and assessments with leaders, we had a lot of one-on-one interviews with leaders, but because our area of expertise is in understanding the nature of the human mind, we really wanted to explore how to create this exoskeleton, this marriage of human and machine through the operating model of the mind.
To explain that, I’ll just say when we look at any model of the mind, there are basically three core things that we look at. We look at how we perceive information, how we discern information, and then how we respond to information, anything that’s coming up in our environment. And then we overlay that perceive, discern, and respond with three key qualities of human potential, which is for leaders to become more aware.
So when you’re perceiving, you’re aware of your internal environment in your own mind and the external world you’re in. And then you discern. And to discern, you need your inner wisdom, your own ability to really tap in not only to your intelligence, but every part of what you’re bringing to the table as a leader. And then the responsive capacity of the mind, we look at bringing heart. And that’s the compassionate piece. So the human brings awareness, wisdom, and compassion.
And then the exoskeleton that the AI brings is this toggling: the human sets context so that the AI can give us content. And as I mentioned earlier, where leaders go wrong is if they just say, there’s the content, I’m good, I press send and I keep going. But the exoskeleton says, no, I have the content; now I use my inner wisdom, my discernment, to ask questions
and make sure that I’m probing the content that I’ve just gotten from the AI to make sure it’s exactly what I need and where I want to go. And when I ask those questions, when I start to ask the AI, give me a different perspective or how can I think about this a different way or tell me what could go wrong, then the AI presents answers. And then the last piece around the responsiveness with the compassion.
It’s the leader’s ability to respond in the moment with heart and let the AI be algorithmic. And so it’s really this dance between context, content, questions, answers, heart and algorithm that creates the exoskeleton, the best of human and machine.
James Taylor (19:46)
So on that first one, perceive, perception, how we perceive things. We’ve had another guest on the show, Keith Sawyer; his episode will be coming up soon. He just wrote a great book about learning to see, and what those in any form of work can learn from those who work in the fine arts, in architecture. And he said they interviewed over a hundred professors from the top arts degree programs in America. And he said,
Marissa Afton (19:49)
Yes.
Hmm.
James Taylor (20:16)
what many of them say is, what we’re really teaching, because most of these students come in already very skilled in their craft, is learning to see: how to perceive things in different ways about themselves, their own work, their identity, how it’s expressed in their work, but also how others perceive them and what they’re doing as well. So as you were talking to all these leaders, were there any examples that have stayed with you on that
Marissa Afton (20:30)
Hmm.
Nice.
James Taylor (20:45)
perception bit, and how people were using AI to augment themselves in that way as leaders, to help with that perceiving part?
Marissa Afton (20:54)
Well, I think one of the things that I’ve seen many leaders embracing is AI as a personal coach. And you can do so many creative things. And I love that example that you just gave, of being able to see things from the inside and see things from the outside; we all have blind spots. But what you can do is, and again, within protected
systems, and assuming that different leaders may have their own AI tools that have already been put behind some data security system, you can feed the AI information about you, as an example, and the type of leader you want to be. And you can give it
so many different data points, and you can teach the AI about the future you, the legacy you want to leave, the values you have as a leader. You can also give it some metrics about, say, your personality type, if that would be of interest, and you’re creating a profile of the idealized you. And that coach can then be a sparring partner for you as an individual, and you can go in and
invite it to give you ideas about how to move through different types of challenges, to build your own awareness of what may be unseen in terms of the assumptions you might be making, or decisions you might be taking, that may not fit the type of leader you want to be.
James Taylor (22:34)
As you’re saying that, it’s reminding me of two things. One is Marshall Goldsmith, the business coach, who has someone, I think his coach, call him every day at a certain time with the same three questions. So it’s like a prompt; he basically has to reflect on the day. And in Tibetan Buddhism, they have a thing, I’ve forgotten the name of it, that essentially means five times a day. Where five times a day you have to pause, stop,
Marissa Afton (22:51)
Yeah.
Yes.
James Taylor (23:03)
and just be present and think about what am I doing just now? How is this? What is it? And so I guess the AI can almost, as you’re saying, it can fulfill part of that function, like helping that leader pause five times a day, for example, very briefly and just say, what is this doing? How does this reflect the leader that you want to become and the potential that you have? And also the objectives that you have for yourself and your organization.
Marissa Afton (23:29)
I love that you’re bringing in that pause as well, because the awareness only comes when we give ourselves the space to pause. Usually awareness doesn’t happen when we’re just moving from meeting to meeting or activity to activity and our brain is simply full. Awareness happens in moments of stillness. And you talk about creativity; creativity also happens in moments of stillness. You know, we have these great
creative breakthroughs when we’re in the shower or on a run or whatever it is. And so for leaders to be able to embrace that pause and be really purposeful and intentional and allow the AI to help you with that is another great tactic to be able to enhance our awareness and our perceptions.
James Taylor (24:15)
Now, moving on to what you call the AI-augmented leader: let’s imagine, let’s contrast two leaders, a non-AI-augmented leader and an AI-augmented leader. How would they perform differently? How would they react to a situation in their organization slightly differently as leaders?
Marissa Afton (24:39)
Well, I think you just used one of the key words right there, and that’s react. So the non-AI-augmented leader, first of all, is probably just running on a lot of habitual behaviors, potentially also habitual mindsets. So they may be a little bit rigid in how they approach a particular challenge, a particular situation, which may not be a way to creatively think through and problem-solve.
They’re probably much more in reactivity mode. They’re probably feeling like they are putting out fires all day. They’re just moving from activity to activity, and they may get to the end of the day and simply feel like, I’ve been active, I’m tired, but what have I actually gotten done? Have I been able to move the needle on my big goals and what I’m trying to achieve?
And I will say, I hope there are not a lot of leaders at that extreme of the category, but leaders who are, are probably not feeling great about themselves or their impact. Where we want to be able to move to with AI augmentation is to be more thoughtful and more responsive to the day-to-day pivots, and really look at how we’re leveraging AI, yes, to make us more efficient, but also to make us more thoughtful, to make us more compassionate; to be able to leverage AI in preparing for big decisions, in having brainstorming sessions with our team, to be able to create more team cohesion, even more of a sense of connection and belonging, right? The risk is we’re all going to ask all of our critical questions of AI. We’re not going to talk to other humans anymore, because we don’t want to admit to or be embarrassed by our silly questions.
AI never judges us for having the silly questions. But actually the opportunity is: yes, of course, leverage AI for those questions, but leverage the human connection as well. So use AI to help brainstorm how to be more connected with one another, more creative with one another. I have used AI in so many group brainstorming sessions where each
member of the team is using their own AI, which of course is going to give you different responses, even with the same prompts, depending on how we use it. And that helps us generate a lot of collective energy and a lot of collective creative problem-solving, rather than each of us individually working in silos with our AI agents.
James Taylor (27:22)
I saw a CEO the other day; I forget the name of the CEO and the company, but I think she was at one of the big US banks, if I remember right. And she said what she’s done is create digital twins of each of her board members. So as she’s prepping for her board meetings or board conversations, she’s essentially doing the pitch, doing the presentation, for the digital twins first. And she said what’s been interesting is that
Marissa Afton (27:30)
Mm.
Yes.
James Taylor (27:52)
the digital twins know those individuals quite well, so they will know the kind of questions each person is likely to ask. And that’s interesting, but she said what’s even more interesting is that because you’re able to anticipate those questions, you automatically build them into your presentation. So the questions the humans in the room actually end up asking are like five levels higher in terms of quality than you would normally have had, because you’ve already dealt with a lot of the simpler questions they would have had.
Marissa Afton (28:13)
Yes.
I love that, and it’s such a great example; we’ve done something similar at Potential Project within our own leadership team. And again, you know, that leader, I hope and assume, has asked the permission of her people to create those digital twins; that’s just good best practice. But why wouldn’t you get that kind of buy-in for exactly the type of impact you just described? Isn’t it great to have a leader and a group where you can level up, raise your gaze on the type of questions
that you ask? You can get there that much quicker, because all of the basic questions have already been vetted, if you will, by your AI agents. Love it. What a great example.
James Taylor (28:59)
So as we start to finish up here: if someone’s listening to this, and they’re leading a team, and they want to become more human, basically; they’re deploying AI, they’re implementing AI, they have all these things, but they’re getting a sense it’s kind of getting away from them a little bit, and they’re losing a sense of the humanity in the organization. What is one thing you could tell them to do, to think, to question as they’re going through this?
Marissa Afton (29:28)
So one of the things that we have noticed is a lot of companies that have decided to embrace AI, they put a lot of their focus on the tool and the technology, and they have their whole technology team making sure that everybody has access and that’s great and good. But what some have lost focus on is that not only do we have to train in using the tool, we have to train the human as well.
And so really making sure there’s a balance for leaders and teams to not only train on the technology, but to train on the human aspect, is one of the ways that we can help people leverage it more effectively. So James, I don’t know if that answers the question.
James Taylor (30:15)
No, I mean,
I think one of the things you mention in the book, the analogy you use, is that AI is like a Ferrari. But it doesn’t matter how powerful the machine is if you don’t know how to drive; if you don’t know the basics of driving, you can get into some trouble pretty quickly in that situation.
Marissa Afton (30:20)
Yeah.
Exactly.
Exactly.
And remembering that AI is not a passive tool, right? So you think about a hammer, for example; a hammer doesn’t become dangerous if nobody’s touching it. It only becomes dangerous once you pick it up. But AI is constantly active. It’s in the background, it’s listening, it’s learning; it’s an active agent. And so that’s why, again, humans have to be in the driver’s seat. We have to be the ones who learn how to best leverage it before it
starts to leverage us. So yes, the Ferrari is another great example.
James Taylor (31:11)
So Marissa, this book, More Human: How the Power of AI Can Transform the Way You Lead, is out now. We’ll have links to the book here as well. But if people wanna learn more about the work you’re doing and about Potential Project, where’s the best place for them to go and do that?
Marissa Afton (31:26)
People are welcome to come directly to our website, which is potentialproject.com, all one word. We also share a lot of our insights not only on our website but on our LinkedIn channel. The research is constant, it’s ongoing, even though we have the book; the book was published in March 2025, based on data that was probably already several months old by then, and it’s continuing to evolve. So go to our LinkedIn channel,
Potential Project, and you can reach out to me individually on LinkedIn as well.
James Taylor (32:02)
Marissa Afton, thank you for being a guest on the Super Creativity Podcast.
Marissa Afton (32:05)
Thank you so much, James. What a pleasure.