Thanks to HRBench for powering this episode. To find out more about the company building the future of people intelligence, reach out to book a demo at hrbench.com/directionallycorrect!
Check out this episode of the #1 people analytics podcast with special guest, Suzanne Bell, Lead Scientist, NASA’s Behavioral Health & Performance Lab!
In this wide-ranging and deeply fascinating conversation, Cole Napper sits down with Suzanne Bell to explore one of the most unique and high-stakes applications of industrial-organizational psychology in the world: preparing human beings to live, work, and thrive in space. As a lead scientist at NASA, Suzanne shares how her team supports astronaut selection, behavioral health, team dynamics, cognitive readiness, sleep science, and mission performance as humanity prepares for a sustained return to deep space through the Artemis missions and eventually journeys to Mars.
The conversation dives into the enormous psychological and operational challenges associated with long-duration spaceflight. Suzanne explains how life aboard spacecraft like Orion differs dramatically from even the International Space Station, where astronauts already operate under isolation and confinement. Living in an extremely small shared environment with little privacy introduces new complexities around teamwork, adaptability, emotional regulation, and interpersonal dynamics. She discusses how NASA studies these conditions through both real missions and Earth-based analog environments, allowing researchers to better understand what makes teams resilient under prolonged stress.
Cole and Suzanne also unpack the science behind astronaut selection and what constitutes “fit to mission.” Suzanne explains that while technical expertise matters, behavioral competencies such as adaptability, teamwork, emotional stability, and the ability to both lead and follow become increasingly critical as missions grow longer and more isolated. She emphasizes that NASA applies rigorous scientist-practitioner principles, including competency modeling, multimethod assessment, and evidence-based selection systems, to identify individuals capable of succeeding in some of the harshest environments humans have ever encountered.
One of the most compelling sections of the discussion focuses on how humans adapt under stress. Suzanne shares insights from NASA’s growing database of individuals who have lived in isolated and confined environments, highlighting research showing that humans are remarkably adaptable but that transitions themselves often create the greatest challenges. Whether adjusting to microgravity, returning to Earth, or preparing for life on another planet, the process of adaptation places enormous demands on cognition, emotion, and physical functioning. She also reveals emerging findings showing that declines in positive affect during long-duration isolation can reduce task speed even when accuracy remains high, reinforcing the importance of emotional well-being for mission success.
The conversation also explores Bayesian statistics, small-sample research, and how NASA approaches evidence generation in situations where only a handful of astronauts may ever participate in a mission. Suzanne explains how her lab transformed its data infrastructure to aggregate findings across missions and simulations, enabling faster learning cycles and more effective decision-making for future Artemis missions. The discussion becomes a masterclass in applied research design, demonstrating how rigorous analytics can still thrive in environments with limited data but extraordinarily high consequences.
Cole and Suzanne also spend significant time discussing AI, ethics, and the future of scientist-practitioner work. Suzanne shares how NASA is thinking about AI-assisted monitoring and Earth-independent operations for future Mars missions where communication delays make real-time support from Earth impossible. Together they explore the ethical responsibilities researchers have to engage with emerging technologies proactively, ensuring science helps shape responsible adoption rather than reacting after the fact.
Beyond the science, the episode offers a deeply human look into Suzanne herself, including her routines, leadership philosophy, curiosity about the world, and perspective on balancing an incredibly demanding career with family life. The result is an inspiring conversation about psychology, leadership, teamwork, innovation, resilience, and the future of human exploration.
If you like this episode, you’d also love exploring prior episodes—visit colenapper.com for the full archive and show links.
Powered by the WRKdefined Podcast Network.
[00:00:02] Well, Suzanne, one of the reasons I reached out to you amongst many is you're actually an IO psychologist by training as well. And can I say you might be the best IO psychologist? I don't know. Well, it's generous. There's lots of people who have lots of contributions. Thank you. Because you're getting to do some of the most high impact, high stakes type of IO psychology work in the applied space.
[00:00:26] And so one of the things that we said prepping for this is that you're really at this juncture between, you know, people analytics, applied research, and you're imbibing in the scientist practitioner model that we've talked about for years. And one of the things you touched on a second ago is, you know, thinking about what constitutes a good astronaut and what does it mean to be fit to mission? And those are some of the things. Can you expand upon that? And like, what does that look like from your perspective?
[00:00:56] So, just like in other areas of IO psych, people have done competency analyses across different mission sets. So the way it works in the astronaut selection and training process is you're first selected to become an astronaut candidate. The short name for astronaut candidate is ASCAN. And so you go through ASCAN training for a couple of years, then you graduate from that.
[00:01:25] And that's when someone becomes an astronaut and then you're waiting for mission assignment and then they are assigned to a mission. So whether the International Space Station or this Artemis crew. And so through your research, what do you know about human beings under stress that perhaps no one else knows from the type of research that you've done? Oof. Probably a lot because we have that huge database and we're working on getting publications out.
[00:01:53] We're working so fast to express to the community everything we know. Welcome to Directionally Correct, a people analytics podcast with your host Cole Napper and today's guest,
[00:02:20] Suzanne Bell, lead scientist in NASA's behavioral health and performance lab. Hey, Directionally Correct fans. This podcast is dedicated to you to help democratize people intelligence for the world of work. If you're looking to support the podcast, please make sure to listen weekly. Subscribe to the Directionally Correct Substack newsletter. Sign up for the Data Driven HR Academy at datadrivenhracademy.com.
[00:02:46] Purchase Cole's book, People Analytics, or check out everything else at colenapper.com. Before we get into it, a quick word about HR Bench, the company powering this podcast. You know, when we all started in people analytics, we wanted to do strategic work, building predictive models, workforce planning, advising the C-suite, and most of all, quantifying the impact for the business. Instead, we spend months building dashboards and reports that should already exist.
[00:03:17] HR Bench eliminates that entire phase. Your HRIS connects, your metrics calculate, your benchmarks populate. This is not novel. This is day one, not quarter two. That means skipping straight to prescriptive analysis, storytelling, and taking action for the business. Want to learn more? Book a demo at hrbench.com/directionallycorrect. Find out more about the company powering this podcast and building the future of people intelligence.
[00:03:46] As always, all opinions are our own, and thanks for being a listener. Well, Suzanne, I have to say, I have to say I love all of the guests I bring on equally, but some I love a little bit more equally than others. I have been looking forward to this episode for months, and it was the episode I've been looking forward to my entire life. Oh!
[00:04:13] So thank you so much for joining me today on Directionally Correct because I am in awe of what you do and what NASA does and all of the things that are positive. So thank you, thank you, thank you for being here. Oh, absolutely my pleasure. No, thank you for having me. And I love being able to share what we do out to the broader community. So appreciate your podcast.
[00:04:37] Well, I sent you, I don't know if you saw a link to it, but I sent this little short video that shows the scale of how far away the earth is to the moon. Did you get a chance to watch that really quick? I did. I did. It is. And so I don't want to play it on here because the chances of technical difficulties or like us getting pulled off YouTube or something are very high for sharing that.
[00:05:00] But the thing I wanted to say for the audience's sake, because we're recording in the middle of the Artemis 2 mission: in terms of the scale of humanity and the things that matter to society, this is one of the few things going on at the moment that they'll actually probably write about in history books. And so it's so cool to get a chance to talk to you. But in this video, for context, they show a little beach ball, which is the size of the earth.
[00:05:27] And then they wrap a rope around it. I believe it's 14 times. And that's to indicate how far the moon is from the earth. And they put another little tinier ball, a little white ball, to signify the moon. And they show how far away the moon actually is from earth. And it's something like 30 yards in terms of this little beach ball on this rope. And so it just shows the magnitude of the distance. We all look in the night sky and we can see the moon. It's like, oh, it's not that far.
[00:05:57] It's really far. And so I'll put a link in the show notes for anybody who wants to go watch the video to see it in terms of scale. But I say all that to say as a lead into, you know, what's going on right now with Artemis, and what role did you, your team, and the research play in getting us to this point?
[00:06:21] So what you're hitting on is this concept of how far away the moon is, and that's lost on some people. Other people are quite aware of it, but we've been hanging out now for a while in what's called low Earth orbit in the International Space Station. We've had a human presence there for a couple of decades now. But it's been a while since we've been back to deep space. And so the moon is deep space. And as you're pointing out, it's a lot further.
[00:06:49] And so this mission is historic for a number of reasons. But one of the things I'm excited about is having that human presence back in deep space. Deep space is completely different on the body, on the way teams can interact, on the way individuals are in terms of their health and performance. And so this is really important data for us, not only so we can have that sustained presence on the moon, but also someday go on to Mars. And we can talk about that a little bit, but that's even further away.
[00:07:18] So a lot of what we're doing now is starting to push the limits so that someday we can go to Mars. So what did my team do? We're involved in Artemis in a number of ways, but one is I'm the principal investigator for a project called Archer. This is Artemis research for crew health and readiness. And so there's four main pillars in it. One is human systems interaction.
[00:07:43] So we partner with our human factors colleagues, and together we're looking at how the human interacts with Orion, which is the space vehicle. So they've done lots of tests on Orion. They've done lots of tests when they were first circling the earth before they went on to deep space. We've done lots of testing before we went. There was Artemis 1, but ultimately, you know, the human is the final integrator of all of this stuff. And so we look at how the human interacts with Orion systems.
[00:08:12] So we can think about whether or not we need new processes or procedures, whether or not we need to do things differently or organize things differently, so that they can relate to the system in a really positive way for performance and mission success. The second thing we look at is how the team interacts. So there have been some beautiful pictures of the team coming together, and living in long-term isolation and confinement in such a small space is tricky.
[00:08:39] And so we think about not only how they work together to maximize performance, but also how they live together as a group. And so we help inform training in those areas as well. The third big thing we look at is behavioral health.
[00:09:00] So looking at your emotions, your motivation, your ability to perform as an individual, your cognitive readiness, your cognitive performance. We look at those both in terms of this mission. But what we're going to learn from this mission will set us up for doing things like repeated spacewalks, landing on the moon, just increasingly higher cognitive and physical workload. Then the last big part we look at is sleep.
[00:09:29] Early in the mission, they were circling the earth, and they had so much testing that they had to do while they were still in low Earth orbit. So we know people need more than four hours of sleep, but really, to test all those systems, that's all that could be scheduled in. And so one of my colleagues is a collaborator on this project and runs the sleep and fatigue lab at NASA. Her name is Dr. Erin Flynn-Evans.
[00:09:55] She and her team actually tested that protocol to make sure that it was viable for the complicated things we're going to have them do. So those are the four main areas of our research to get to this point. A lot went into that in terms of preparation. Earlier work that we've been doing for years has fed into informing training, but also, in my role and my previous role,
[00:10:18] I've advised on the selection process, and especially in terms of teamwork, since 2017, which is when two of the crew were selected. And so even just those early assessments to pick people who are good team players, and then going on and on. So years of work, but really exciting right now to also lead the Artemis Archer project, which is just thrilling for our lab. So is that a flavor of what I do?
[00:10:45] No, that is so... I mean, so you're saying some of these folks got selected in the 2017, 2018 time frame and they've been preparing this whole time? No. So the way it works in the astronaut selection and training process is you're first selected to become an astronaut candidate. The short name for astronaut candidate is ASCAN. And so you go through ASCAN training for a couple of years, then you graduate from that.
[00:11:12] And that's when someone becomes an astronaut, and then you're waiting for mission assignment, and then they are assigned to a mission, whether the International Space Station or this Artemis crew. And so there's a long process and there's training along the way. So no, we didn't select in 2017. Yeah, yeah. But we did.
[00:11:33] We work in collaboration with Behavioral Health and Performance Operations, who leads the behavioral health and performance selection process. And several of my lab members contribute to that. We do think toward longer missions in that process. So who are crews that can go to the moon, who can go to Mars, and start to look at that. So.
[00:11:59] Well, Suzanne, one of the reasons I reached out to you amongst many is you're actually an IO psychologist by training as well. And can I say you might be the best IO psychologist? I don't know. Well, it's generous. There's lots of people who have lots of contributions. Thank you. You're getting to do some of the most high impact, high stakes type of IO psychology work in the applied space.
[00:12:23] And so one of the things that we said prepping for this is that you're really at this juncture between, you know, people analytics, applied research, and you're imbibing in the scientist-practitioner model that we've talked about for years. And one of the things you touched on a second ago is, you know, thinking about what constitutes a good astronaut and what does it mean to be fit to mission? And those are some of the things. Can you expand upon that? And like, what does that look like from your perspective?
[00:12:52] So, just like in other areas of IO psych, people have done competency analyses across different mission sets. So what does it look like and what competencies do we need when we're in low Earth orbit? What do we need for sustained presence on the moon? What do we need for going to Mars someday? Interestingly, in terms of the behavioral health and performance competencies, they don't really change. It's just some become more important than others.
[00:13:21] So like your need for adaptability, your need for teamwork and group living: as we have longer missions and you're living in a small space for an extended period of time, that group living component really increases. So picture something like the International Space Station. It's the size of a six-bedroom house. You have privacy there when you need it. You can go and find your privacy and your ways to unwind.
[00:13:50] If you've seen the crew up in Orion right now, it is tiny, you know, and so there's a lack of privacy. Well, I would say there is no privacy. Right. And so that's one of the things we look at: you know, how doable is that for longer missions? And so that concept of, like, how do you live in a space where you're exercising there? You're also making your food.
[00:14:18] You're also doing complicated teamwork tasks that are high stakes and, you know, bumping into each other and getting around each other. What can we do to position them for success? What processes and procedures can they put in? What norms can they have? You know, how can we select people who are able to do that better? So we factor all these points of leverage in to get to that moment where you can all see them hugging, you know, in one of the most recent pictures.
[00:14:44] So when I imagine, you know, you actually have a really interesting vantage point on doing this type of research, because, you know, they sometimes call it the small N but high K problem, which is you have small sample sizes that you're dealing with. These are real humans, but they're, again, peak human beings that are going on and doing these kind of miraculous things. But you have lots and lots of data points on that small sample.
[00:15:14] How do you navigate that, and the fact that you have a limited amount of time to do this type of assessment as well? Yeah, actually, I can answer that question in two ways that might be interesting for people that are listening. So when I first took over the lab, the lab was small and was done as a contract. But then they decided, you know, we really need to build this capability for NASA as we move into the Artemis missions, Mars missions.
[00:15:43] And so I'm actually the first civil servant to have my role as a full-time role. And when I joined, I thought, we have to be able to have this data at our fingertips. And we are always going to deal with small sample issues. So what do we do here? And so there was a lot of data that wasn't processed.
[00:16:09] And I did a kind of digital transformation in the way that we set up the lab so that we can take all these small sample studies. We actually have a lot of our processes semi-automated in the way we process data. We collect similar data in most circumstances. We can answer the questions that are specific to that circumstance.
[00:16:30] But then what we've also done is created a database that contains, across all these studies, what our data is. And as you pointed out, it's small sample. But in this database now, we have 110 people who have lived in isolation, either in the International Space Station or in Earth-based analogs, in 45 teams.
[00:16:53] So we've really created a database that gives us a lot more power to look across these studies and understand: what are these sticking points? What are these large trends that we see? Why that's important is that database is now at our fingertips. So when we proposed the Artemis research and the lab was asked to lead it, it was very natural to be able to go in with a Bayesian approach.
[00:17:21] So when they tapped us to put in a proposal for how we would do this research, you know, it's four people. Well, how are you going to do anything besides descriptively with four people? And so, you know, right there, we also needed to build out the framework across Artemis 2, Artemis 3, Artemis 4.
[00:17:44] And so we just threw out that complete notion of frequentist statistics, and we use a Bayesian framework. So we use our best guesses from our existing data from low Earth orbit to predict what we think would happen. And then we update those beliefs with that crew of four, and then the next crew, and then the next crew.
[00:18:04] And so what we'll be able to do is very quick turnarounds of what we learned from Artemis 2 and how that can inform the way we mission plan for Artemis 3. And then we'll take Artemis 3 data and inform how we plan for Artemis 4. And so it's just an iterative process. This allows us to say, OK, what did we learn from Artemis 2?
[00:18:24] But it also puts us in a good position so that as we accumulate these Artemis missions and start to do really complicated things to, you know, have a sustained presence on the moon, we can look across the situations and say, well, here's where there's, you know, room to push. Here's areas where we're kind of maxing out. We're seeing performance decrements.
[00:18:46] And so, going back now even a few years, that digital transformation, and the way we treat data, and the way we want it at our fingertips, and maximizing small samples by using a systematic approach across studies so we can aggregate it, really got us to where we are right now, in which my lab will turn around the Artemis 2 data in under a month to inform Artemis 3. And that's because most of our processes are at least semi-automated and then take a scientist's review.
[00:19:15] So it's really like, you know, you start these foundations years before the ask is even made to be able to do something like that. So we're really excited to get this data in our hands and we'll actually be collecting some of the data right when they land today. So very excited about that.
[00:19:35] Well, I mean, I just feel like I almost want to do a PSA really quick for all the grad students out there listening, which is: you can use frameworks like Bayesian methods to augment small samples, because there's no higher-stakes setting than, you know, what you're doing with astronauts. And if it applies to what Suzanne is working on, it probably applies to your silly study on graduate students as well.
[00:19:57] So, you know, ask good questions about problems, and it's not a silly study. Find out what the problem is and provide a solution. But yeah, one of the things I like about a Bayesian approach, too, is we can quantify and express our uncertainty. So the other thing to know is sometimes you don't know, right? We always want to race to conclusions and be so proud of, like, the big effect we found in something.
[00:20:27] But it could be that we have what we think will happen in this data. And then we update that with the Artemis 2 data, and we just say, you know what, with what we're seeing, we actually don't know yet. But that's important for decision makers too. I like the Bayesian framework sometimes because you're expressing your uncertainty, which in a high-stakes environment like spaceflight is absolutely critical.
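As an aside for readers following along, the prior-to-posterior loop Suzanne describes (a prior built from existing low-Earth-orbit data, updated with each four-person crew, with uncertainty reported explicitly) can be sketched with a simple conjugate normal-normal model. All the numbers, the metric, and the function names below are illustrative assumptions for the sketch, not NASA's actual model:

```python
import math

def update_normal(prior_mean, prior_var, data, noise_var):
    # Conjugate normal-normal update with known observation noise:
    # posterior precision is the sum of prior and data precisions.
    n = len(data)
    sample_mean = sum(data) / n
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / noise_var)
    return post_mean, post_var

# Hypothetical prior for some performance metric, standing in for
# years of accumulated low-Earth-orbit data.
mean, var = 100.0, 25.0

# Each mission contributes only four observations; beliefs update
# sequentially, crew by crew.
for crew_scores in [[96.0, 103.0, 99.0, 101.0], [98.0, 97.0, 102.0, 100.0]]:
    mean, var = update_normal(mean, var, crew_scores, noise_var=16.0)

# A 95% credible interval makes the remaining uncertainty explicit,
# rather than collapsing the result to a binary significance call.
low, high = mean - 1.96 * math.sqrt(var), mean + 1.96 * math.sqrt(var)
print(f"posterior mean {mean:.2f}, 95% interval [{low:.2f}, {high:.2f}]")
```

The point of the sketch is the shape of the loop: each tiny crew shrinks the posterior variance a little, and what reaches decision makers is an interval, not a p-value.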
[00:20:52] I can't go in and make a judgment call just because p is less than 0.05, you know, and change the history of spaceflight. What I need to do is express what I found and then express my uncertainty around it. It's absolutely critical in an environment like this. Well, through your research, what do you know about human beings under stress that perhaps no one else knows from the type of research that you've done? Oof.
[00:21:22] Probably a lot, because we have that huge database and we're working on getting publications out. We're working so fast to express to the community everything we know. That's one of the pillars we've been working on in the lab: information dissemination. We have so many people working on spaceflight issues right now that the faster we can get what we know out, the more that helps everybody else do better research and provide solutions for the sticking points. Um, a couple things.
[00:21:50] I don't know that no one else would know this. I mean, I like to think that at least everybody in my lab would know. But, um, I think maybe more of a summary statement would be: I think we forget how incredibly adaptable humans are. Hmm. But in that adaptability, it's tricky, because the transition can often be what matters. So let me give you an example.
[00:22:17] When people go up into space, they have to get used to working in microgravity. So doing something like getting a wrench is infinitely more complicated, because it's strapped in, you have to find it, it has to be in the right place. It might be inventoried in such a way and stored where it's, like, behind four things, because it's just such a different space. And then you don't want it to float away, so it's like, um, you know, you attach the things.
[00:22:44] So already you're doing something difficult. Um, sometimes when astronauts get to space, they take a little time to adapt. They have, like, a spaceflight sickness. They kind of get oriented, but then, interestingly enough, pretty much everyone adapts and they get used to working in that environment.
[00:23:04] But what's interesting is when they come back to earth, uh, they can go anywhere from being able to walk just fine to stumbling around, you know, and really having a lot of sensorimotor disruption. And I think sometimes we look at that and say, oh, that person, you know, like, oh, they didn't recover as well or something like that. But it's actually kind of the opposite, in that they have adapted to living in microgravity.
[00:23:31] And now they have to adapt to living in Earth gravity. And so what I think is interesting to think about is, um, what can we do about that transition, both in space and in life? So in space, why that's important is because when we land on the moon, that is something where you have to adapt, or when you go on a spacewalk.
[00:23:58] You have to adapt again to that suit pressure and whatever you're doing. And it's this constant adaptation process, um, where we take in our environment and then act on something. And so I guess my point is, like, sometimes we can get so comfortable in a situation that then it's the transfer, the adaptation, that's hard and needs to be problem-solved. And we see that in space. And I think that, you know, that's something maybe for people on earth too.
[00:24:25] Um, other things that I've learned... uh, I think a really interesting thing is just some recent findings that I actually presented this week at a conference, where we see that positive affect declines in long-term isolation. So people... we don't have any instances of depression. So people tend not to report clinical levels of depression, which I think people think: oh, you're in this small space for a long time, isolated, you know, you're going to be depressed.
[00:24:56] We don't see that in our well-selected, uh, participants. And what we see is, over time, there tends to be some people who have a decline in positive affect. What's really interesting with that finding is that we've actually tied it to performance too, in that people don't work as quickly when that affect tends to decline.
[00:25:18] They still work as accurately, but when they have that positive feeling, they get the accurate work done but can also do it faster. So when you see pictures like the crew smiling and so happy and being cohesive, that is just wonderful, not only for wellbeing but for the success of the mission. They're also going to work together better as a team and then be able to do their individual tasks better when they get those bump-ups in positive affect.
[00:25:43] So it's really, um, maybe a finding people don't know yet, but we hope to publish soon. And I did present this week, so you're the second to hear that. Yeah. Super fascinating. Um, I'm wondering... so that was like one of my things: Artemis is top of mind, but also a lot of stuff with, like, isolation. And Mars, and just being in confined spaces for long periods of time, but also isolated from the rest of humanity for long periods of time.
[00:26:13] There's, like, different kinds of psychic stress that, you know, the astronauts face that maybe most other humans may never face in their entire lifetime. And so how do they cope with that stress? How do you find the right individuals to be able to deal with that kind of stress? You mentioned the adaptability, but also, I'm assuming, you might even be selecting for hyper-adaptive people already, or something along those lines. How do you think about that from, like, what NASA is trying to really optimize for?
[00:26:43] And is there, like, something you are trying to optimize for? So it's all the methods that a good IO psychologist would be trained on. And for the selection part of it, I can only really talk about what's publicly available, because that is closely guarded; it's a high-stakes selection context. So we don't, you know, reveal our methods, or we'd have to remake them while everybody fakes everything, you know?
[00:27:08] But it's just the systems thinking that we would use in typical talent management, in that we know what opportunities we have to train. We do have training done with the astronauts, but there are certain things that we actually don't have time in the training sequence to do. And so, just like in an organization, if you're not going to train on it, you've got to select on it.
[00:27:34] And so those competencies that are not going to be in the training sequence do get prioritized in terms of being selected upon. We don't always do this in selection, because we don't always have the luxury, but our selection decisions have to be right. And so we use best practices; we use multimodal methods... not multimodal, multimethods.
[00:27:59] I'm in multimodal AI mode because I was working on that earlier. But we use multimethods. So we have a couple of looks at the same thing across a couple of days, and some other ways that we look at it. And then we... We'll make a PSA here for the graduate students: multitrait-multimethod actually matters in the real world. Sometimes. Absolutely matters. It absolutely matters. Yeah.
[00:28:24] Cause you, you, especially in a high stakes selection circumstance, all those best practice things that you learn, you know, as a graduate student that. You know, we might have something like 16,000 people apply and only select four astronauts. Well, we have to be aware of impression management. And so we have to create tools that allow us to have those actual good looks at, you know, what is someone like, not just what they're self presenting.
[00:28:55] Because they've read somewhere on the internet that we want people who are both good leaders and good followers, and so they fake that. Right. So how do you get around that? So yes, multitrait-multimethod is critical. Then, thinking in terms of systems, there could be other reasons you aren't training for something. Maybe you're in HR at a small facility
[00:29:24] in a small town, and you've got to select whoever's available, so you put your money and resources behind training, because you have to select who's there. We have the opposite problem at NASA, where we get to be very picky. But that's an interesting thing in itself, because we're working to commercialize space and make things like low Earth orbit accessible to a broader group of people.
[00:29:49] So that becomes a different selection and training challenge: instead of being so picky, now we want to make sure lots of people can go to space. What are the minimum qualifications needed to be safe in space, and what can we train on to make it more accessible to everybody? So we actually have both challenges in the way we think about things. I just love what you work on. I do too. I love my job. But thank you.
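The multitrait-multimethod logic Suzanne describes ("a couple looks at the same thing" across methods) can be made concrete with a small simulation. This is a minimal sketch for illustration only: the competency names, the two "methods," and the error structure are invented for the example, not drawn from NASA's actual selection instruments. The core check is that same-trait, cross-method correlations (convergent validity) should exceed cross-trait correlations (discriminant validity).

```python
import numpy as np

# Illustrative simulation of a multitrait-multimethod (MTMM) check.
# Trait names, "methods," and error structure are invented for this
# example; they are not NASA's actual selection instruments.
rng = np.random.default_rng(0)
n = 500

teamwork = rng.normal(size=n)      # latent standing on trait 1
adaptability = rng.normal(size=n)  # latent standing on trait 2

# Two methods measure each trait, each with independent method error.
scores = {
    "teamwork_interview":     teamwork + rng.normal(scale=0.5, size=n),
    "teamwork_exercise":      teamwork + rng.normal(scale=0.5, size=n),
    "adaptability_interview": adaptability + rng.normal(scale=0.5, size=n),
    "adaptability_exercise":  adaptability + rng.normal(scale=0.5, size=n),
}

names = list(scores)
R = np.corrcoef(np.array([scores[k] for k in names]))

def r(a: str, b: str) -> float:
    """Correlation between two measures, read from the MTMM matrix."""
    return R[names.index(a), names.index(b)]

# Convergent validity: same trait, different methods (should be high).
convergent = r("teamwork_interview", "teamwork_exercise")
# Discriminant validity: different traits (should be low).
discriminant = r("teamwork_interview", "adaptability_exercise")

print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
assert convergent > discriminant
```

A real MTMM analysis would inspect the full matrix (including same-method, different-trait cells, which reveal method bias), but the convergent-versus-discriminant comparison is the heart of why multiple methods beat a single self-report that an applicant can fake.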
[00:30:18] It's just so cool. Well, I've been focusing a lot on the individual side of things, but we also know that astronauts work in a team, and that team might be shot deep into space, to the moon or Mars or wherever in the future. There's this thought in most of corporate America: what do you do with the brilliant jerks? How do you deal with disagreeable people?
[00:30:47] Do you index more toward the innovators, even if they're difficult to work with, because it's so important that we innovate? Or, in a team-focused setting, do you prioritize the team's feelings over the brilliant jerk, something along those lines?
[00:31:06] How do you think about that from the team-dynamics side? Again, we've emphasized how stressful it is, and how different from normal everyday life. How do you deal with the team dynamics as well? Yeah. I can give you a little insight into my thinking in general and the way I approach this kind of challenge. One is that I always try to be a strategic thinker.
[00:31:34] Whether it's NASA, thinking about what we're trying to do in terms of meeting mission objectives, or industry, where you can think about your sustained competitive advantage as a company and how what your team is doing feeds into that. What that allows you to do is figure out which proximal team outcomes you're maximizing.
[00:32:02] That's the first thing to target before thinking about how to design a team for success. Is innovation most important? Is efficiency? It could be both; there could be room to plan for both. So figure out what you're going to prioritize. Then, what can you do on the design side? Do I have the latitude to select who I want, or am I stuck with the team I'm given?
[00:32:31] As you know, I'm a huge team-composition person. I'm like, come on, Suzanne, I'm leading you to water, you've got to drink. Tell me your team stuff. I know. So with team composition, a lot of people think about it only in the selection aspect. But it also tells us how to work better with the people on our team.
[00:32:58] What I mean by that is there are lots of things we can do related to compensatory mechanisms. Like you're pointing out: if I can select out a disagreeable person and instead pick someone with great skills who's higher on agreeableness, absolutely, yes, please do. But sometimes you already have that person, or maybe they have a really unique expertise and have to be brought onto the team.
[00:33:28] Knowing that ahead of time allows you to put systems in place. Do I have other relational people? Do I have ways of putting boundaries around that person, so that if they're toxic, it doesn't pollute the rest of the team? You see this a lot in professional sports: they bring in a star, and sometimes that star can be very toxic, to the detriment of the people around them.
[00:33:57] And sometimes, late in the star's career or once that star leaves, those programs have nothing left. So that's where you have to start: what is the point here? Are you trying to maximize this year, or are you trying to build a brand for five years from now? In which case, how are you taking care of the non-stars who are putting up with that toxic person? Are you still building their skills?
[00:34:21] Are you still giving them the praise and support they need, and acknowledging what they sometimes have to deal with? Thinking about what you're trying to do in terms of maximizing long-term success, and what strategies you can put in play, is really critical. There's lots of great work here. I love a paper, by Courtright I believe, showing that teams lower in conscientiousness
[00:34:49] really benefit from team charters, because they don't naturally do that kind of thing, whereas your high-conscientiousness teams kind of do it already anyway. Do you need a team charter? Sure, it's probably best practice, but you really need it for your low-conscientiousness team. So understanding the composition and then putting the right compensatory mechanisms in place is really the way to go. Well, so what makes a team high-performing, then?
[00:35:17] It sounds like you're saying there's no singular model; it depends on the makeup of the team. Or is there anything you would add to that? Oh yeah. I'm a fan of a lot of work, and I love all the team science we've done, but when I approach team situations, I'm a fan of Hackman's work, which describes six enabling conditions you need in place.
[00:35:42] Once those are in place, the team will find a way to succeed, with coaching. I think that's the best way to think about it. But what I like to do is bring in all the things we've learned about the way teams function. For example, going back to composition: Hackman would say we need the right people.
[00:36:06] Well, we now know a lot about what the right people are, from meta-analyses and other research. And just like the team charter I mentioned for low-conscientiousness teams: not only the right people, but having norms that support good performance is another of Hackman's conditions, and a charter is a great example of codifying norms that support your team.
[00:36:31] So those six conditions, things like having a compelling direction, having the right people in place, having supportive norms and a supportive organizational context, are all critical. Even just inventorying where you're strong and where you're not on those, with some self-awareness, can help you compensate in other parts of the system, so you can end up with something successful.
[00:36:58] Well, I feel like I'm quizzing you for a comprehensive exam right now, and I apologize for that. Maybe we take a slightly different direction for a minute. The astronauts you've been selecting are quite impressive people, and you're an impressive person yourself: long-distance swimmer, rock climber, boulderer, hockey mom, all of the things. Tell me a little bit about you as a human being. How do you manage it all?
[00:37:27] How do you do all of these things? Well, you'd have to do a 360 on me to see if I'm actually managing it effectively. Maybe I am; if you asked my kids and husband, they might have a different opinion. But that's a good question. I've seen some stuff online recently about the 5 a.m. club, and I chuckled to myself, because I've gotten up at 5 a.m. for years.
[00:37:55] I have a routine. I get up at 5 a.m., even without an alarm. That's my time, from 5 a.m. to about 7:15 a.m., when the kids wake up and get ready for school: two hours and 15 minutes of uninterrupted thinking and working time. For me that's invaluable, because I tend to be back to back in meetings throughout the day.
[00:38:17] But to be creative, to be innovative, you need a little bit of time to think things through, to think through strategy. I've gotten so much work done in those two hours and 15 minutes, aggregated across five days a week. I even do it on weekends, just because I wake up then anyway, so really seven days a week. For years, that time has been so valuable.
[00:38:41] Not everybody is a morning person, but where is that time when you get your flow and can really knock things out? It will look different for everybody: Saturdays for some people, 2 a.m. for others. Not for me, but mornings work. That's one way I try to manage it. The other, and I don't know if this is good or bad, but it's worked for me, is that I'm a compartmentalizer. When I'm working, I'm thinking about work.
[00:39:07] I just don't think about all the other, non-work things. And I'm the opposite when I'm not at work: I try to be present and mindful. This is my time to watch my kid play volleyball or hockey. I naturally compartmentalize, and I think that can be helpful, because when we don't, we can go through 24 hours having done neither well.
[00:39:36] Your kids are mad that you looked at your phone the whole time at their game, and you didn't really get work done either, because you had all this personal stuff going on, checking this and that. You've done both ineffectively. Whereas if you just say, okay, I'm going to do this now and be done with it, and then I'm going to do that, for me, it's effective. I hope so, anyway. We'll see.
[00:39:59] I'm there with you on the 5 a.m. thing, by the way. The thing I really struggle with, and I'm curious if you do, is daylight saving time. When it changes, it completely messes up my mornings, and I'm still reeling from it happening a few weeks ago. Do you struggle with that at all? It's not too bad for me.
[00:40:23] For my role, I went to Germany last month, and that was another time-zone thing where I thought, oh my goodness, I'm going to come back and be up at 3 a.m. And I did get up at about 3 a.m. because of the time-zone change. My way of falling back asleep is audiobooks that are semi-boring. There's a sweet spot: I can't do a book that's really engaging, where I'm wondering what happens next,
[00:40:52] listening with my headphones in bed and never falling back asleep. But if it's too boring, then I'm thinking about all the things I want to do. So I have this cadre of books in between; please don't ask me the titles, because I'd be partially embarrassed. They're engaging enough for me to care moderately what happens next, but not so engaging that I can't fall back asleep. That's how I manage jet lag and sleep: I listen to a quasi-boring book that's somewhat interesting.
[00:41:22] Then I doze back off and get back on schedule. There's also a lot of research, and we dabble enough with the sleep folks for me to tell you this: to get back on track, flood yourself with light. That can help reset our brains. So I try to be mindful about that, whether it's jet lag or daylight saving. Even if I'm working all day inside and might not normally take a walk for a break, I will purposely go outside and flood myself with light
[00:41:50] during that transition period, to help facilitate the reset. That's one of the things you're supposed to do, so I follow directions from the research. Look at you, practicing what you preach. Nothing less. Well, do you want to join me in Cole's Corner, Suzanne? Oh, okay. Yes. Welcome to Cole's Corner. Let's start out with some rapid fire.
[00:42:20] Okay. If you weren't the best I-O psychologist in the world... Oh, stop saying that. So many people are good, but thank you for saying it. Let me editorialize; let me have my things. I only have small pleasures in life. So, what would you have done with your career if you weren't working at NASA? Okay.
[00:42:40] When I was in college, I was a business minor, and everybody kept saying, oh, accounting is the worst, accounting is the worst. So I put it off and put it off until the spring of my senior year, when I was already admitted to PhD programs. And I took that class and loved it.
[00:43:05] I'm a numbers person, so I loved it. I told my professor, had I taken this class as a freshman, I would be an accountant. And the world would be lesser for it. I'm glad I didn't, because I do like to innovate, and maybe that's bad in accounting. Or, when I was a kid, and don't worry, I don't use this as a security question on my logins, so you won't be able to do anything with it,
[00:43:32] but I used to want to be an archeologist. I was also a history major, so I love lots of things. Then I found out how boring that job might actually be, because you spend so much time on detail to get to the next discovery. I thought, I don't know that that's for me. So who knows?
[00:43:57] What's the place you've never been to that you'd most like to go, and why?
[00:44:25] Ooh. I'm actually quite a traveler, so I've been to a lot of places. I just went to India about two months ago for a beautiful wedding, which was just fabulous, and that would have been my answer before then, because I'd always wanted to go there and it was really enriching.
[00:44:42] Boy, if you gave me an openness-to-experience measure, I'm at the 99.9th percentile. So my answer is: wherever I haven't been. I actually want to go to every place I haven't been, and I will continue to do that.
[00:45:08] Where's my next stop? It's actually Prague, because my kid has a hockey tournament there. After that, I don't know; it's just wherever I haven't been. I'm curious to a fault, and I like to learn about other cultures and people. If you were a character in any book, TV show, or movie, who would you be and why? Okay.
[00:45:33] Oof. I wouldn't say a specific character, but I am absolutely that FBI agent, the lead female sleuth who's heading up a team. I just love shows like that, and I'm like, I want to be you. So maybe I have five more careers I want too. I just love that problem-solving, thinking-out-of-the-box thriller. That's for me.
[00:46:01] I've got one big question for you. If you had to create the ultimate astronaut leader, a leader of the other astronauts, what are some of the things you would look for? It could be anything; it doesn't even have to be psychological. So, the thing about astronauts is that by the time they become astronauts, they are so accomplished, right?
[00:46:29] They have made discoveries as scientists or have been test pilots; they've had whole careers even before they're selected. So a great leader of astronauts is also someone who can follow, because they are all high achievers, really at the top of the food chain in a lot of ways.
[00:46:53] But when you have that combination, someone who can be humble and bring out the best in others rather than just themselves: think about it. If you have an army of amazing people, how powerful are you if you can equip them to be their best?
[00:47:17] So I wouldn't look for a traditional command-down type of leader. Quite the opposite: hey, you're in charge of all these amazing people, so how do you help them shine? That is such a good answer. It's almost like you were made to do this. Not even an accountant; the world would have been robbed of your talents. Well, I do my own taxes to fulfill that accounting enjoyment, I guess.
[00:47:45] I just like it when all the numbers line up, and even in teamwork, I like when the numbers line up. So I use it that way now. That's that high openness to experience. All right, let's do some "What am I reading?" Okay. The first one I've got for you is a recent article in what I believe is the journal Practice Innovations.
[00:48:06] It's called "Ethical Use of Artificial Intelligence in Industrial Psychology Research and Practice," by Richard Landers and Sarah Nakamoto. They go through and give a sort of compendium of how we should think about using AI in I-O psychology research, and they give a few recommendations. I-Os should remember key lessons from the past,
[00:48:31] from all the other ways we've used things like machine learning and other advanced methods in the history of I-O psychology. I-Os have an ethical responsibility to remain engaged and share their expertise where relevant as AI conversations and decisions happen around them. I-Os should resist looking for easy, simple solutions, and instead build an AI-ethics foundation on first principles, such as the APA's Ethical Principles of Psychologists.
[00:48:59] I-Os should proactively identify and vet AI systems before they're used. And I-Os should find a healthy balance between fruitless skeptical resistance and ill-considered rushing into the future; in other words, find a balance among those things. I know your team actually does some pretty advanced work in this space as well, Suzanne.
[00:49:24] So what did you think about this article, and how does it affect how NASA is thinking about introducing AI into the research? Well, I apologize: I haven't read that particular article yet. I saw it come out and just haven't gotten to it. But you're right, we do use AI, and I've thought about the ethics for quite some time. In fact, some people in philosophy have written about this, the philosophy of ethics in engineering, and they have some good ideas here too.
[00:49:55] All those points sound great. And your particular question was what, again? Sorry, I was thinking. Just: what do you think about the ethical use of AI? And if you want to draw on examples from your own work, feel free.
[00:50:13] So, ethical use of AI. First, I'd say as a blanket statement that we often don't explore ethics enough across areas, not just AI. AI is simply the latest application. Take even decision-making in teams: only a couple of people research it. Think of when you go do team building or team coaching or team training.
[00:50:37] How often do we point out to the leader that they should bring the conversation back to ethics when it's needed? Ethical use of AI is very important for I-O psychologists in particular, because so much of an individual's life is their work day. You're very generous about the way you talk about me affecting the world, but I'm a psychologist, right?
[00:51:07] Our selection decisions have meaningful impacts on individuals, such as when you don't select someone. We need to do our job, but we need to do it responsibly, because the decisions we make around employment have meaningful impact on people. Making sure we do that ethically is absolutely important. As for AI, I actually love AI work.
[00:51:35] I even attend our working group where we're developing the standards and guidelines for AI on human-rated space vehicles. It's a really important area that I enjoy spending time on. We use AI and machine learning in the lab, and we also partner with computer scientists like Theodora Chaspari; you'll see me publish with her and other folks. But I think it's really important to think of AI as a tool.
[00:52:04] Just like when you approach any situation: sometimes AI is the right choice, and sometimes it's not. People can be really bad at this. I've learned something well, so now I'm going to force this hammer onto every single thing, right? So when I think about AI, I think about what AI does well: routine applications, things like that.
[00:52:34] Where it starts to do less well is when there isn't an appropriate data set it can be trained on. This is a problem space I deal with right now: as we move on to Mars, we think about Earth-independent operations. You pointed out how far away the moon is. Okay.
[00:52:56] Mars is so far that there will be anywhere from a four- to 22-minute one-way communication delay, depending on where it is relative to Earth. So the real-time support from mission control that you see even with the Artemis missions just isn't going to be available in the same way. A lot of what we're designing for Mars right now is very Earth-independent: keep Earth in the loop for situational awareness, but get the team to self-correct, get individuals to self-correct.
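The delay range Suzanne cites falls straight out of light-travel time, so it can be sanity-checked in a few lines. A minimal sketch, assuming approximate published Earth-Mars orbital extremes rather than NASA mission figures (at the rare closest approach the one-way delay is nearer three minutes, and a round trip doubles these numbers):

```python
# Hypothetical helper: one-way light-time for an Earth-to-Mars signal.
# The distances below are approximate orbital extremes from public
# astronomy references, not NASA mission-planning figures.
C_M_PER_S = 299_792_458  # speed of light in vacuum, meters per second

def one_way_delay_min(distance_km: float) -> float:
    """One-way signal travel time in minutes over the given distance."""
    return distance_km * 1_000 / C_M_PER_S / 60

closest_km = 54.6e6   # roughly the closest recorded Earth-Mars approach
farthest_km = 401e6   # roughly the maximum separation near conjunction

print(f"closest:  {one_way_delay_min(closest_km):.1f} min")   # about 3 min
print(f"farthest: {one_way_delay_min(farthest_km):.1f} min")  # about 22 min
```

At the far end, a question and its answer are some 45 minutes apart, which is why the crew, not mission control, has to be the real-time decision loop.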
[00:53:26] AI monitoring tools are one of the things we're working on. My problem is that the first time we will ever have gone to Mars is when we're on Mars. Yeah. How do I responsibly apply AI, training it on things, then getting data for the first time and ensuring it can learn properly from that new data to keep people safe,
[00:53:53] knowing that this will be the first time we actually have data from that circumstance? That's an example of the gray areas of AI that exist right now and that we have to think through: not only the ethics of it, but the application, or how we apply it responsibly. So, okay. Can I build on that really quick? Yeah.
[00:54:18] This gets to the last point of the article, about balancing skepticism with being an early adopter. I actually think there's an obligation on the research community to be early adopters. The reason is that when new things like AI crop up and people start using them in applications around the workforce, the true scientist-practitioners out there say, well, where's the science?
[00:54:46] I need something to help me navigate this new situation. And if researchers say, we're not going to bother, that's fly-by-night, it's just a bubble, nothing to see here, and they don't provide the research foundations, there's nothing for the practitioners to rely upon. So the onus really is on the scientific community to be the early adopter, so that when new technologies come along, people who want to be guided by the science actually have science to rely upon.
[00:55:14] I think there's an ethical responsibility there, and it's related to what you're saying: the first time we land on Mars is the first time we land on Mars. There's no scientific foundation for what it's like to walk around on Mars as a human being, because there's no Bayesian prior for us to work with. It's such a fascinating thing to think about as the world undergoes so much digital transformation at the moment.
[00:55:41] How do we navigate that as scientists, and how does science keep pace with the amount of change we're undergoing? Yeah. And we do have a science base for what we'll do for Mars: we collect data in what we call analog environments, where we mimic what we expect from Mars. That way we have some information.
[00:56:01] We won't be guessing when we go. But I am a hundred percent an early adopter of things. In my personal life. You? Never. Yes, in my personal life and in my work life. I've been accused of being too far ahead of my time sometimes. Someone was in my office the other day and said, it must be interesting to be you, because we just did something
[00:56:29] and I remember you saying this three years ago. I said, it's fine; I'm just used to it. It's what happens over time. But yes, we have to be at the forefront so that we can guide responsible adoption of things and, importantly, so we can figure out where the guardrails need to be.
[00:56:56] Because if we aren't the ones playing with it, researching it, applying it, other people will create those guardrails without our input. Exactly. I see this all the time. You were talking about the scientist-practitioner model: we have to have research that is practical and able to be implemented, particularly in my work.
[00:57:23] Because if I don't provide solutions that are operationally relevant, there will be another path to get that information. It's like the paths people cut across a campus lawn: the sidewalk isn't there yet, but it's the fastest way to get where you're going, so people cut the path anyway. That's exactly AI.
[00:57:50] If I stick my head in the sand, people are going to create those non-sidewalk paths, with or without I-O psychologists. So I'm a fan of getting ahead of it, understanding it, then putting guardrails on it and understanding its proper application, so that we can use it where it performs well and warn people when something isn't really the best application, when we have other tools that better answer the question.
[00:58:23] Suzanne, I'm going to compliment you one last time and make you uncomfortable: you have been such a fantastic guest today, and I've been looking forward to this for so long. If people want to learn more about the research you and your team are doing, how can they best do that? And if they want to reach out, I'm not giving them a blanket invitation, but how can they get in touch? Yeah. You're welcome to reach out to me on LinkedIn if it's not NASA-related.
[00:58:52] If it's NASA-related and you have an actual NASA question, please feel free to email my NASA address; it's available on the internet, and you can find it. I am pretty busy, so I'm not always the best at responding to cold emails, since I get quite a few. But I'm a lover of people who want to learn, be better, and contribute to society in their particular area, and where I can help, I'll try to.
[00:59:21] But where can people find the research you're doing and the things your team is working on? You can go to Google Scholar, search my name, and reverse-citation-search it, and you'll find all my recent publications, in collaboration with a lot of wonderful people. You can see the AI work we've been publishing lately. We often present, too, and those presentations go through something called STRIVES, which is our public release process.
[00:59:51] So you'll actually see those as well, and you can reach out if you want to learn more. There's also a great episode of the Speaking of Psychology podcast I was on, where we go into a lot of detail on the psychology of space. It's a little different from the emphasis of your podcast, which is great, so if people want to learn more about space psychology, they can. One of the beauties of being a public servant is that you actually get to present your research in journals and at conferences,
[01:00:21] which organizations often don't or won't do. So please go look into some of the things Suzanne has researched, because she is the consummate scientist-practitioner. I think I've checked off the whole bingo card of the right words you've used today. Suzanne, thank you so much for joining me. You've been listening to Directionally Correct, a people analytics podcast, with your host, Cole Napper, and today's guest, Suzanne Bell. Thanks for joining me. Thank you, Cole. My pleasure. Thank you so much for joining us today. Thank you.


