Bob Pulver engages with Ceci Dones, an educator, lecturer, academic researcher, and the founder of 3 Standard Deviations. She focuses on data literacy, technology adoption, and trust and authenticity in tech-mediated communications. They explore the complexities of trust in AI, the importance of data literacy, and the need for curiosity, creativity, and critical thinking in navigating the AI landscape. Ceci shares her journey from marketing to academia, emphasizing the significance of understanding data quality and governance. The conversation also touches on the role of AI in education, the ethical implications of AI, and the future of human-AI collaboration.
Keywords
AI, data literacy, ethics, education, critical thinking, curiosity, creativity, technology, trust, authenticity
Takeaways
- Ceci Dones is a hybrid professional in AI and data literacy.
- Understanding data quality is crucial for effective AI implementation.
- Curiosity, creativity, and critical thinking are essential skills in the AI era.
- Data literacy does not mean everyone must be a data scientist.
- AI should be used as a coach, not just a calculator.
- Young people have a clear understanding of fairness in technology.
- We must not lose our inherent sense of fairness as we mature.
- The future of AI is exciting yet uncertain.
- We are still in the early stages of understanding AI's impact.
- The conversation around AI ethics is becoming increasingly important.
Sound Bites
- "What does AI mean for organizations?"
- "Garbage in, garbage out."
- "Curiosity, creativity, critical thinking."
- "You don't have to be a data scientist."
- "We are still figuring this out."
- "We come pre-programmed with ideas of fairness."
- "How do we not lose the humanness?"
- "AI should be a coach, not a calculator."
- "This is all so exciting and terrifying."
- "It's only chapter one."
Chapters
00:00 Introduction to AI and Trust
02:47 The Journey of a Hybrid Professional
06:02 Data Literacy and AI Implementation
09:11 Curiosity, Creativity, and Critical Thinking
11:57 Navigating Data Literacy Without Overwhelm
18:10 The Role of AI in Education
22:04 AI Ethics and Responsibility
36:59 The Future of AI and Human Interaction
Ceci Dones: https://www.linkedin.com/in/ceciliadones
3 Standard Deviations: https://www.3standarddeviations.com/
What Is a CMO to Do With AI?: https://www.linkedin.com/newsletters/7182036727794909184/
For advisory work and marketing inquiries:
Bob Pulver: https://linkedin.com/in/bobpulver
Elevate Your AIQ: https://elevateyouraiq.com
Thanks to Warden AI (https://warden-ai.com) for their sponsorship and support of the show! Warden is an AI assurance platform for HR technology to demonstrate AI-powered solutions are fair, compliant and trustworthy.
Powered by the WRKdefined Podcast Network.
[00:00:00] Welcome to Elevate Your AIQ, the podcast focused on the AI-powered yet human-centric future of work. Are you and your organization prepared? If not, let's get there together. The show is open to sponsorships from forward-thinking brands who are fellow advocates for responsible AI literacy and AI skills development to help ensure no individuals or organizations are left behind. I also facilitate expert panels, interviews, and offer advisory services to help shape your responsible AI journey. Go to ElevateYourAIQ.com to find out more.
[00:00:39] Welcome back to Elevate Your AIQ. This is Bob Pulver, and today we're diving into the intersection of AI, trust, ethics, and human potential. Ceci is a practitioner and academic. She's the founder of Three Standard Deviations and a researcher focused on trust and authenticity in tech-mediated communication. We are going to explore the importance of critical thinking in an AI-powered world, the evolving role of data literacy, and how organizations can ensure ethical and responsible
[00:01:05] AI adoption. Ceci also shares her three C's framework, curiosity, creativity, and critical thinking as a foundation for thriving in the age of AI. This is a thought-provoking conversation that I really enjoyed, and I think you'll be glad you listened. Let's jump in. Hello, everyone. Welcome back to another episode of Elevate Your AIQ. I am your host, Bob Pulver. With me today, I have the pleasure
[00:01:29] of speaking with Ceci Dones. How are you doing this morning, Ceci? Good, good. Thank you. Thank you. This is a very exciting conversation. I'm very curious where we take it. Yeah, there's a lot to talk about. So why don't we start with just you introducing yourself and some of the work you've been doing and how you've wound up doing it. Yeah, I would say I am a practitioner academic, and what that means is I'm a hybrid. On the practitioner side, I am the founder of Three
[00:01:58] Standard Deviations. So I help organizations big and small make sense of what AI means for their organization and for their consumers. On the academic side, I study trust and authenticity signals in technology-mediated communications. So I am endlessly fascinated by nonverbal communications when
[00:02:19] technology is helping and hindering that connection. Wow, that sounds like a big and complex endeavor. So much nonsense going on, misinformation, disinformation. So let's get into it. Should we start with the academic side or should we start with the practitioner side? We can do the convoluted
[00:02:43] circular journey. That is, I guess, me trying to be a grown-up. I've always been driven by my curiosity. And so I am a qual-quant researcher, and I've been very much focused on trying to study people, why they do what they do, and telling those stories about people using data. So I started first in marketing land, advertising land, really trying to understand consumer psychology
[00:03:07] and all of those things. I wanted to do more fun stuff with the data, and then the data was in a silo. The data was sequestered away. I couldn't find the data. And so in the second part of my career, I really focused on building those data skills. And so I was brought into two large organizations trying to do the digital data transformation. And so I would build those capabilities for them. Data and analytics, people, process, platform, the whole end-to-end. And then I would say the third part of my career
[00:03:36] is that my pandemic gift to myself was to attempt to finish my dissertation and focus on trying to understand how we operate in this AI world. So I've been teaching for a bit. I do lecture on AI ethics. I've been writing for a good amount of time at this point, and I'm still following my curiosity. So it's a nonlinear path. And the reason why I'm here is just that I kept on asking questions and doing my
[00:04:06] best to find the answers. So I love nonlinear paths because that's what my career has been all about. Good luck with your dissertation, certainly. The concept of, I guess, data literacy, I think you're way ahead of most on that front. And it seems like that gives you an excellent vantage point from which to
[00:05:02] judge a lot of what's going on, because I think a lot of people have sort of skipped over that. And what I mean by that is everyone's just sort of jumping on AI: oh, we've got to do AI. Forget about the business case. All our competitors are using it. Our people are using it already. We've got to get moving. But one of the things I've always contended is that without some level of data
[00:05:02] and analytics maturity, you're kind of flying blind and just grabbing at shiny objects in a way. Or maybe you're putting too much trust in external data and models that have been trained with data that you don't know where that data came from. It certainly wasn't yours. But as organizations attempt to
[00:05:28] sort of customize solutions, maybe augmenting solutions that have been trained externally with their own data, you still have to trust the data that you're injecting. It doesn't matter that you own it. But if you don't understand what data you have and the quality of that data, the provenance of that data, et cetera, you're still going to be, I guess, playing this with one hand behind your back.
[00:05:58] A little bit. Yes. Agree, agree. So I'll give a little bit more of a technical answer and then I'll do the more business side answer that I like to do as well as an educator. The more technical answer, nothing's changed since the whole thing around big data. So it's still garbage in, garbage out. And so my recommendation to all data leaders and all stakeholders
[00:06:22] is if you don't understand the provenance (you said it spot on), if you don't understand the lineage, if you don't understand the data-generating process on which you're building your entire AI house, you may run into problems. You may find, you know, you have a family of rats underneath your house. You may find there's mold in the walls, all sorts of things that maybe you don't want as an AI house
[00:06:45] owner. But it's very important to know. And so I always, always start back with, do you understand your data quality? Do you understand your data governance? Those things are all fundamental. And then we start talking about, okay, something fancy like AI, is that really applicable for you? So that's a little bit more of a technical answer. The answer that I tend to use when I do run
[00:07:09] workshops. So I do run workshops for organizations trying to figure out, should we AI this? Why are we using AI as a verb? And just generally trying to figure it all out. So I do my alliteration. It's three C's. The first one's curiosity because, well, first of all, I'm Ceci and I'm just very curious. So that's always important. That's very critical for data literacy, critical for AI culture.
[00:07:36] And then I would say creativity. And the reason why I say this is that when we're thinking about outputs, the low-hanging fruit usually for AI is automation. So instead of me doing the thing I repeat over and over again, let the machine do it. That's great. That's definitely going to be a measurable business outcome. Yes, your finance counterparts will be very happy with you for a
[00:08:00] moment. But then the value generation, the longevity of the utilization of the tool, you need to figure out a new way. How do we create something new, a new revenue stream, a new idea, a new offer, new value for your consumer? So that's creativity. And so I always recommend, yes, be curious, but you got to do something with it. So create. So creativity is important. And the third component,
[00:08:26] I would say, goes back all the way to grammar school: critical thinking. Yes, you can read the words in the book. Yes, you can read the output from the prompt. But where's that intuition? Where's that "does this make sense?" Can I triangulate this information? Or gosh, you know, last time I checked, gravity tends to pull things towards the ground. Seems a bit odd that it would be the opposite way,
[00:08:52] but I'm open to new information. Let's explore this. Let's really think about this critically. And so that criticality, I think, is so important as well as you're trying to evaluate. And that's a higher order managerial skill. And I think the opportunity with the AI space is that it allows all of us to actually expand our skill sets and refine our skill sets in those areas,
[00:09:16] not just outsourcing it to the machine, but really developing it in yourself. And that, I think, will be the differentiator: what I also call in these workshops durable skills, skills that will last through whatever changes in technology. We are freaking out around AI today. There will be a new label tomorrow and none of us will know what it is. And then suddenly we have to worry about that.
[00:09:41] But the durable skills around curiosity, creativity, critical thinking, communication, those things will outlast the technology. Absolutely. The critical thinking piece in particular, I mean, I love the durable skills angle as well. I like that word. But the critical thinking, I just think, is something that everyone who's starting to explore, I think about students,
[00:10:11] anyone who already has a smartphone, everyone who's on a computer, where even a basic internet search now gives you AI responses. They need to think more deeply about whether that's really accurate. Don't take the output at face value. And some of that, as we were talking
[00:10:32] earlier, is related to just having some grounding in what should be expected behavior, what makes sense. And common sense is not always common. But some of this is just basic. It's not a calculator. And this isn't just give it one prompt necessarily and get the answer. And that is the end of the
[00:10:58] interaction. Sometimes you have to add context. Sometimes you have to challenge some of these tools, just like you would challenge some know-it-all friend of yours who seems to have an answer for everything. And it's unlikely that they actually know what they're talking about in all cases. So just think more deeply about what you're doing. I think that I'm hoping that some of the newer
[00:11:25] large language models where it's sort of showing you its thought process in a way, its reasoning, I'm hoping that some of that will be extremely helpful for those who are not necessarily critically thinking yet when they're using these tools. Because you can actually look at it and say, oh, I see it completely misunderstood or I didn't explain that clearly enough. I need to add
[00:11:53] this additional context and now I'll get a better or potentially completely different answer. But I do wonder, just going back to the sort of data literacy piece, I get concerned when I hear people getting overwhelmed with just too much AI stuff. I'll never learn all this or whatever.
[00:12:17] And so I guess one of the questions I have for you is, as you think about people's sort of data literacy, does everyone need to have that data literacy to be able to apply that critical thinking? Yeah. So that's a really good question. I have all sorts of feelings when it comes to the word
[00:12:40] literacy because most of us have had the opportunity to go through formal education. So we learned how to read and that's part of literacy. Most of us also, through that formal education, also learned how to work with numbers. So numeracy. And so where I get a little bit opinionated about data literacy is that I don't know where we came about with this weird expectation that everyone has to be a data
[00:13:09] scientist. If we think about languages, right, what are we actually trying to do? We're trying to communicate with a computer. We're trying to communicate with a machine such that through those interactions, and it could be repeated interactions over time, like with prompting, we get an output we want. So it's still learning another language, another communication skill. And so for those of us who may speak multiple languages, or maybe those of us who don't, but maybe have traveled to other markets.
[00:13:38] Okay, you travel to a market where, I'm making it up, you don't speak French. What do you do? You pick up the travel guide, maybe, or Duolingo or whatever it is, to pick up enough because you're going on holiday for two weeks and you want to taste all the wonderful wines. Fantastic. That's a brilliant objective. You're learning enough French such that you can operate in that environment. That's excellent.
[00:14:04] When I think about data literacy, and when I think about individuals that are feeling overwhelmed, I feel overwhelmed every day with the amount of AI and data news. But something that helps to keep me grounded is understanding, oh, I don't have to be able to write law in French. I'm only going there for
[00:14:27] two weeks. And so what is my purpose for learning to do this? And then calibrating the skills and tools I need around data literacy to make sure that I have enough to be able to operate effectively. And more importantly, know when I need more help. So for example, could we utilize LLMs today to
[00:14:50] produce our own apps if you've never been a software engineer? Oh yes, I've had numerous friends that on a weekend exercise, never been a software engineer, but had a really cool idea. They worked with an LLM. Is it the worst UI ever? Yes. Would a UX designer cringe? Probably. But does it work? Is it operational? They were able to do that working with an LLM, but they had a very specific goal. So if your specific
[00:15:18] goal is I really want to be good at utilizing ChatGPT, fantastic. That's your LLM of choice. Okay, let's understand what this prompt engineering thing is. Oh, it's really about communication with context. Oh, it's really about active listening, because the machine is giving you feedback when you don't like the output. Okay, great. Or maybe you just want to utilize traditional AI. All of us
[00:15:42] have been utilizing traditional AI if you've ever done any kind of work on a computer. It's the example I use all the time in conferences and talks and in other locations. I say, have you been using email? Most people will say yes. Okay, when was the last time you clicked on a spam email?
[00:16:41] All right. Thank you. No, why would I do that? Yeah. Spam. Guess what? Machine learning. Oh, traditional AI. Oh, okay. You've already been utilizing it. So it's really a mindset shift, I would argue, when it comes to data literacy. One, being super clear. Are you going on holiday or are you actually starting to build an app?
[00:17:06] Or you're studying law in a foreign language? Okay. Those are different objectives and therefore, the amount of knowledge you need to have around data and AI, different. That's good. We want that. We want that diversity. And the second part is, I always tell this to other practitioners and other technologists in the space. I consider myself one of them. I think one of the most beautiful things
[00:17:31] we've done is create this almost magical vocabulary full of acronyms that makes it precise, which is great. Jargon is helpful, especially when you're talking to other technical practitioners, but is horrific when you're trying to relate to other people, real everyday people. And so by utilizing all these fancy words, we've abstracted away from the connection that we're actually trying to create
[00:18:01] with others. And so shifting the mindset to, you're exactly where you need to be on your AI journey and your data journey. Let's just find the objective and then ask for help to get the resources to get you enough skills so you can do what you want to do with data and AI. I have had people who are not in this space at all give me some feedback on the podcast and they're
[00:18:25] not even sure what we were talking about, because I didn't curb my own, you know, buzzword bingo from being in the space for too long. I mean, I imagine you've encountered this all the time as you shift between, you know, a business leader audience and, you know, an academic or student audience. Even if we only speak English, which is perfectly fine, being monolingual is perfectly fine.
[00:18:52] We speak dialects all the time. You speak very differently to your family as opposed to your co-workers, as opposed to your friends, as opposed to someone who's from a different generation or someone who English may not be their first or second or third language. You speak in different ways in different contexts. Those are all dialects. The machine dialect that we're all learning now,
[00:19:16] yeah, it's an interesting one. And in some ways, this AI machine dialect we're all learning to communicate in is also helping us to reflect upon ourselves, actually. How do we communicate? How do we listen? And so in that way, it's kind of self-revealing, which is great for some people who want that self-awareness. Not so great if you're just trying to book the flight or book the
[00:19:44] hotel and the machine will not do what you want it to do. I also think that, you know, these durable, you know, human skills, like you pointed out before, you can use AI for automation, or you could use things that came before AI for automation as well. But once you get past that and you get to the sort of higher-value, you know, tasks and,
[00:20:09] you know, activities that you need to actually do the rest of the work, you know, those durable skills are going to become even more important, right? Like you've gotten rid of, if you get rid of a lot of the grunt work, the repetitive, you know, sort of rote tasks, this is where we need to figure out what does the human and AI, you know, sort of collaboration look like and how do we determine
[00:20:37] which additional sort of maybe reasoning tasks do we now offload to a more competent, more advanced AI solution versus what do we hang on to in order to maintain, you know, relationships and things like that. So I think that's where it just emphasizes that as we move forward, these skills, these human skills
[00:21:00] become more important. And, you know, the World Economic Forum has been saying that quite clearly for years now in terms of the future of jobs, future in-demand skills and things like that. Not everyone obviously pays attention to the World Economic Forum, but I do think it's important. The 2025 report came out recently, and it's good to see that that has not changed even
[00:21:29] with all these advancements of AI, but I do think people need to pay more attention to that. And I guess one of the questions I have from an academic perspective is, do you see students learning how to sort of adjust and adapt to this as opposed to just going through more traditional curriculum and like, are they prepared, I guess, for being part of the future workforce?
[00:21:58] So I'll share a small anecdote. I will not say which university I was sitting in the cafeteria of, but I was sitting in a cafeteria for graduate school and it was around finals time. And I guess I was blending in very well because the students were continuing to have their conversations. And I was overhearing a conversation amongst a group of students and they were discussing,
[00:22:26] okay, what parts of the final do we give to ChatGPT as opposed to, I want to write it because I'm going to present it or I'm going to turn it in or whatever it is. And at the time, the initial thought is, oh my goodness, that's cheating, bad, bad, bad. The administration, you know, every university has their own policy around it. And, you know, we had to adhere to that. But then I got very curious about it
[00:22:53] because those students, unprompted, were already figuring out, hey, I need to manage this machine. Hey, I have a full workload that is, I suspect, overwhelming. It's finals. And I need to do a division of labor. So let me put on my critical thinking and figure out, okay, what is most appropriate and rote that maybe the machine can do well? And I study the work, but where are the
[00:23:22] components that allow me to shine, to demonstrate that I actually address the learning objectives of the course and demonstrate that I can actually articulate whatever I need to articulate for the final? And so I thought, huh, that's a really good skill to have. That would be something wonderful to encourage and foster within individuals who are entering the workforce. I would also argue maybe
[00:23:51] it's good for all of us to figure it out as more and more of these tools kind of turn online for us. I'm figuring out what is appropriate for a machine versus myself. And I think where it gets difficult, and this philosophy of mine has changed so much, somewhat as I've gotten older, I used to very much believe, okay, everything must be at scale. How do we define growth, capitalism, all the wonderful
[00:24:18] things? How do we get the numbers big, big, big, which is great, very helpful. And it is really great, especially if you're trying to get funding for the next round to develop more features for whatever startup you're a part of. But I'm beginning to think that that scaling is in juxtaposition with, or in tension with, personalization, meaning my skill profile will fundamentally always look different
[00:24:46] than someone else's skill profile. My strengths will be a different constellation than somebody else's strengths. My weaknesses will look very different. And my weaknesses today were not the same weaknesses 10 years ago, will not be the same weaknesses 10 years from now. Strength as well. And so I think when we are thinking about how to augment ourselves and what the human AI collaboration looks like in work
[00:25:15] context, school context, any other context, it's going to look quite bespoke to the individual. Because maybe I only want the AI to amplify what I'm already good at. Okay, that's a choice. Or maybe I only want the AI to supplement what I'm actually weaker at and, frankly, don't have motivation to improve upon. That's a choice. Excellent choice. But you have to make that choice. And I
[00:25:42] think that's where scaling up and thinking about big can be a way to, what's the saying? I think paying too much attention to the forest and not enough on the individual trees. Oh, am I saying it backwards? I'm saying it backwards. No, I get it backwards all the time too. Not seeing the forest through the trees? I probably have it backwards too.
[00:26:10] Point being, I'm starting to think what's more important is my individual lived experience, the people I interact with on a day-to-day basis. Can I help somebody improve their lived experience utilizing technology in some way? If yes, fantastic. If no, and it's only just a cup of tea and a chat, also good. And I think that's something we have to start to be very cognizant of.
[00:26:38] To me, that points to one of the things that I know you study and talk to people about in addition to trust, which is authenticity. Are you being your authentic self as you utilize AI in the appropriate ways that you have determined you want to use it or your organization expects you to use it?
[00:27:00] But I think you raise a really compelling point about that sort of augmented intelligence. And as people try to figure out how do I sort of navigate my own skills journey and think about the trajectory of my career, which maybe it's linear, good for you, but it's certainly not a strike
[00:27:27] against you if you take a non-linear or what I call a lattice kind of approach to your career, or what some people call a portfolio career. But it's a really sort of provocative thing to think about. Do I play to my strengths and have AI sort of help me do an even better job of playing to my
[00:27:50] strengths and let me sort of round out what I contribute with others who are stronger where I'm weaker. Or do I make myself a more complete employee or leader by acknowledging that I'm great at these things and AI can mitigate some of the weaknesses that I have? Is there a happy medium where I just
[00:28:17] shrink some of those weaknesses while also expanding and improving some of the strengths? I mean, I think to one of your points, that is sort of an individual, it needs to be an individual sort of decision and journey depending on the context and the circumstances. And I like how you put your particular skills and
[00:28:45] profile and experience or whatever is this sort of unique sort of constellation of attributes and capabilities and skills. And obviously, personality plays a part of that. And, you know, the contexts with which you've been exposed across, you know, industries, or in your case, you know, academia versus business and things like that. So, you know, there's certainly no silver bullet to solve all these
[00:29:13] things. But it's also why, you know, what works for someone else when you're interacting with some of these generative AI tools doesn't mean you're getting the same benefit or the same results. I think at this point in time, given the nascency of the ecosystem, if we were to follow the news cycles
[00:29:37] and all the hype cycles, this may feel like it's old news. But in reality, no, no, the consumer-facing versions of these technologies are very much nascent. I think the most prudent strategy to take at this point in time is, of course, with some level of caution, right, we don't want to cause any unintended harms. But with some caution, keeping an open mind and keeping
[00:30:05] a mindset of playfulness and curiosity could be more useful at this point, as opposed to what is the formula? Or what is the best practice? There are no best practices. We are still literally figuring this out. We are as curious about the machines and what they're producing as the machines are,
[00:30:31] one could argue, perplexed at our level of frustration with them. So something that this conversation is making me think about is, it wasn't too long ago, actually, when we all had to say AI hallucinations, when it makes an output that doesn't make sense. Oh, it's a hallucination. And hallucinations, bad, bad, bad, bad, bad. We have to do away with them. And there is research to
[00:30:57] mitigate the hallucinations because we, you know, having machines that are a bit more predictable could be useful for us to mitigate risk. However, there is a new line of research that's beginning to explore, huh, these hallucinations, this is data. What is it trying to tell us? Is there a pattern here? Is there something to learn from this aberration? Is there something to learn from this
[00:31:24] deviance? I think that's very interesting. And because of those things, I think we still have to keep an open mind about a lot of this AI stuff. Also recognizing, you know, some level of caution. So running with scissors, maybe not so good, but using scissors to cut out really cool things, arts and crafts. Yeah, try it. Why not? And I think that's where I know we talk a lot about ones
[00:31:53] and zeros in data land and AI land and technology land. So it sounds very abstracted. Why am I in this business? Why do I love this topic? I find it to be so creative. If we allow it to be, our medium is technology, and the amount of things you can create from it, I can't even imagine. And that excites me so much. Yeah, I think there's a lot, as you pointed out, I think we're still very early
[00:32:21] days with all of this. And it is exciting to see, you know, what, you know, the directions that some of these tools can go and how is it really starting to mirror, you know, human brain kind of reasoning or how do you explain to your point that even the hallucinations, you're right. I mean, how many of us are curious enough to ask, like, why? Why is it doing this? And maybe some of that
[00:32:47] exposure, the transparency of its reasoning logic to the user, will help with a little bit of this. But there's a futurist and author, Gary Bolles, who works at Singularity University, who's been on the show before, and he just posted the other day talking about hallucinations. And it was like,
[00:33:08] wait a minute. So it's really smart. And it goes and it realizes it doesn't have the answer for you. And yet it still gives you one, knowing that that's not the answer. Why does it do this? Right. So I'm not expecting everyone to have that level of curiosity, just like I don't expect everyone who gets a driver's license to also be a mechanic and know what every sound and every
[00:33:38] gauge and all these things mean. But some of us are going to be curious. And those are the folks that are probably going to keep, you know, drilling in and probing and learning. And, you know, for other people, that's just the rabbit hole that they don't want to go down. But I think it's important to understand how it's thinking, because as these things evolve or how we come to rely on them more and more, you've got to think more deeply about, you know, what this all means.
[00:34:07] First of all, I think it's a very beautiful way to communicate that idea. Something it makes me think about is, so words have valence and we attach these valences. So positive, negative. So hallucination, generally we attach the negative valence. So, ooh, it's not good. But I get very curious. So when we go to sleep, and hopefully some of us do, and hopefully some of us are dreaming,
[00:34:37] the things we dream about are typically not realistic. They typically have some kernels of something that looks familiar, but maybe also not familiar at all. We call those dreams; we don't call them hallucinations, we don't call them sleeping hallucinations. The words we use to describe the things
[00:35:01] we experience, because there is valence to them, positive or negative feelings about them, shape how we perceive those things. And so I get very curious: if we were to remove those positive and negative feelings, can we learn something from it? So spot on. I'm very excited to see the outputs of that work, especially if it gives us more insight into how the machines think.
[00:35:28] I wanted to switch gears and talk a little bit about AI ethics. So I'm curious how your audience, you know, thinks about that concept. When I speak to other business leaders around AI ethics, it tends to be more around regulatory requirements. What's the anticipation of those regulatory requirements? What does this mean
[00:35:52] in terms of data governance and AI ops and all those technical words that we use to say, how do we manage the machines? And so it tends to be more around policy and regulation.
[00:37:07] When I speak to builders of machines, so builders of AI, builders of technology, it tends to look a little bit different. So they have a beautiful idea of some solution they want to build. And typically, by the time I'm introduced into the conversation, I ask them, so who is going
[00:37:32] to use the solution? What problems are you helping to solve? It's an elegant, beautiful solution that I could never think of, never actually create myself. But I'm curious, who is it serving? Is it serving another machine? Or is it serving people? And if it's serving people, which people? Let's take a step back. Because sometimes I find, and this is not true of all, but sometimes in the fervor and excitement
[00:38:01] of building things that are really cool, we get too focused on solutions. So I tend to be that voice in those conversations. For graduate students, it's interesting because they have all sorts of ambitions of what they want to build, what they want to create, how they're going to change the world. And it's all wonderful. And it's really for them, I'm trying to instill in them that
[00:38:29] you're going to be the engine of energy in the next wave of innovation. And as a result of that, you also have to keep in mind, okay, how do I make sure that I'm being as good as I can be, whatever good means, to the people around me, to the people that I want to create value for. And so they may be builders or they may be on the business side, it varies. I've also taught AI
[00:38:58] ethics to middle school students. That was very interesting. I was terrified because I do not have a background in K through 12 pedagogy. So I was not sure if I would be cool enough to hang with them. Gratefully, it all worked out. I've been invited back. But for them, it's so fascinating to me. And
[00:39:22] it's also what gives me so much hope in the space. I was really intimidated, but then I realized very quickly in speaking to them, there's a radical clarity that young people, sub-18, middle school, even kids have around what does it mean to be fair. They have very clear understanding of what it means
[00:39:48] to be fair. And they're very clear in communicating how sometimes the grownups in their lives are unfair. And so I typically bring the example for that cohort. I typically ask them to think about, when was the last time you were fighting over a video game or a toy or your favorite meal with your sibling? What did your parents do? What did your mother, father, your caretakers do? Was that fair?
[00:40:16] And then it brings up the whole conversation about fairness and how we define fairness, always in their terms. But it's so clear to them. So it's beautiful that we come pre-programmed with these ideas already in place. And really, as we grow up and mature and understand the complexity of life, how do we not lose that? I think that's a challenge that we
[00:40:44] have today. So through our layers of abstraction, which we also call maturity, how do we not lose the humanness that we were already born with? That's the hard part. Yeah. It's really interesting. I've been sort of dabbling in what's going on on the education side. I mean, I care about what's going on inside workplaces, partly to your original point around,
[00:41:10] look, there's legislation that exists. There's more legislation coming, a lot more longstanding legislation that was around since before you and I were on this planet. And so how people sort of absorb that into some of their existing models is important, not just to
[00:41:36] contort themselves towards some of the pre-existing teams around governance, risk, compliance, legal, et cetera, but that there's additional layers of complexity and also the sort of responsibility to sort of oversee all of this is not necessarily relegated to those teams specifically. It's also not
[00:42:03] IT's job, just because it's AI doesn't mean IT owns all these aspects because when we talk about ethics and human centricity and things like that, there's a lot more to think about. But on the education side, like before people even get to be part of the workforce, my daughter's in high school and I've talked to the school administrators and I just think about a couple of things. One is preparedness
[00:42:29] for both higher education and the workforce because it won't be very long before she's looking for internships and applying to things in addition to, of course, the college essay application process. But I think about the fact that, like we talked about before, AI is on their devices. They have
[00:42:52] laptops, they have smartphones, they've been using digital technology since basically birth. And as you pointed out with your middle school example, and for those not in the US, that's grades six through eight, sometimes five through eight. So basically like 10, 11 years old through like 13, like they can start to understand and appreciate some of these concepts, like you said, around,
[00:43:20] you know, fairness and equity. And it might be a little bit of a stretch to get too deep into, you know, human centricity and things like that. But you can start to lay the groundwork, you know, that early, as well as, you know, using your graduate school example, like where people really need to
[00:43:43] think about, like, I've got a test. The teacher's or the professor's expectation is that you are using your human brain to show that you understand the material, and that when you put words to paper or answer these questions, it's your brain. There are specific
[00:44:10] circumstances where it's okay to sort of augment your thinking and your brain, but you need to understand what some of those expectations are so that you don't cross that sort of ethical line. And whether that's a college essay or just trying to get your homework done, recognize that if you're going to use AI, it should be in the same capacity that you
[00:44:35] would ask a parent or a teacher or a tutor for help. I know my daughter understands this, but not everyone necessarily does. Everyone's always looking for a shortcut, an easy way out, to just get this done and move on. But if you acknowledge that the objective is to learn and not just get the resulting
[00:45:00] good grade, then you should use AI as sort of that coach and mentor and tutor, and not just as a calculator or something that's going to give you the answer and do the work for you and essentially earn you credit that isn't justified. Yeah, I agree. And the challenge in the higher ed space is, okay, what do we mean by learning now? When we say learning
[00:45:27] objectives, those are very open questions. The challenge when we're talking about under-18s is that physically, emotionally, socially, they are going through so much significant change. The prioritization of social-emotional development, in the context of utilizing technologies to mediate communication, to mediate interactions, is something to consider as well.
[00:45:57] So it's very, very complicated. What I'm so hopeful about is that these ideas aren't happening in a silo, or behind doors that cannot open. AI and technology are part of the public discourse. So it's beautiful in the sense that, okay, yes, it's
[00:46:23] complicated because we have a collective action problem, but it's also great because everyone is thinking about this at some level, not at the same level, and that's good too. At least it's salient. At least we can talk about it. I think one of the challenges today, one that technology has helped to exacerbate as opposed to alleviate, is that there are topics that are
[00:46:50] not able to be discussed openly. And that's difficult because it doesn't progress the conversation at all. We are still very fortunate that we can have these very open conversations about what AI and these technologies mean for us. And so I'm still quite hopeful about all of it. Do I know the end of the story? No. If I did, this would be a very different conversation.
[00:47:17] But I think that's part of the fun. That's why this is all so exciting and terrifying. It is, it is. But I personally am cautiously optimistic that things will work out and AI will be a welcome addition to how we learn and how we develop ourselves for our careers, in both our personal and professional lives. But we've got
[00:47:45] to be ethical about it. We've got to be human-centric about it. And I think there are enough of us trying to spread that message that hopefully it clicks for everyone. Yeah. So more to come. We don't know the ending. It's only chapter one. Ceci, I've loved having you. This has been a great conversation. Thank you so much for spending time with me, on behalf of my audience.
[00:48:15] Thank you. Thank you. In the show notes, I'll put links to your profile and your website. I'll also include your LinkedIn newsletter, which we didn't get a chance to talk about, but I'll put a link to that in the show notes as well. Well, thank you again, Ceci. And thank you, everyone, for listening. We'll see you next time.


