The Copilot Connection
Welcome to Copilot Connection, the podcast that explores the world of Microsoft Copilots! Join your hosts, Zoe Wilson and Kevin McDonnell, as they take you on a journey through the different Copilots available and how they can help you in your day-to-day life. From the newly announced Microsoft 365 Copilot to the Dynamics 365, GitHub, Windows, Custom and more Copilots, we'll cover it all. Our upbeat and engaging conversations with experts in the field will keep you entertained and informed. Tune in to Copilot Connection and discover how these AI-powered assistants can transform the way you work!
Want to be interviewed for the show? Sign up at The Copilot Connection: Call for Speakers @ Sessionize.com
The Copilot Connection
Ep 27 - Being responsible with Chris Huntingford
How responsible are you with AI? Zoe Wilson and Kevin McDonnell are joined this week by Chris Huntingford (Director of AI at ANS among many other things!), focusing on responsible AI practices, data management, and the socio-economic impacts of technology. We covered the importance of transparency, accountability, and ethical practices in AI implementation. The discussion also touches on the cultural shifts required for successful AI adoption, the challenges of governance in AI systems, and the future of AI agents.
We also had an amazing announcement for the Copilot Fireside Chat that you can hear at the end! Or get a sneak peek on the Fireside Chat site.
Key takeaways:
- The importance of responsible AI practices is paramount.
- Data management and compliance are critical in the AI landscape.
- Socioeconomic impacts of AI technology must be considered.
- Education and critical thinking are essential for future generations.
- Creativity should not be overshadowed by AI advancements.
- AI-generated content can lead to a decline in quality and creativity.
- Legal accountability for AI deployment is necessary.
- Individuals must advocate for ethical AI practices.
- The EU AI Act sets important standards for AI accountability.
- Community engagement is vital for responsible AI development.
- Transparency is crucial in AI deployment.
- Shared responsibility involves vendors, deployers, and users.
- Accountability must be established at all levels.
- Data quality directly impacts AI outcomes.
- Cultural shifts are necessary for effective AI adoption.
- Governance in AI is more important than ever.
- Organizations should focus on training rather than cutting jobs.
- AI implementation requires careful planning and testing.
- Responsible AI practices must be prioritized.
- The future of AI agents needs robust governance frameworks.
Kevin McDonnell (00:11)
Welcome to the Copilot Connection.
Zoe Wilson (00:14)
We're here to share with you all the news, insights, and capabilities of the Microsoft Copilot ecosystem from across the entire Microsoft stack. I'm Zoe Wilson and I lead the Copilot Business Transformation Practice at Accenture and Avanade covering all copilots and agents. I'm an MVP for Copilot in Teams and I'm also a Microsoft Regional Director.
Kevin McDonnell (00:35)
Kevin McDonnell, I'm an MVP for Copilot, strategy and modern workplace AI lead at Avanade, and also someone who doesn't have their prompter working today. So I keep looking off to my side and losing track of my notes. But we will be releasing episodes as podcasts and on YouTube with insights from across the community and Microsoft on the different areas of Copilot, moving into agents these days, and the extensibility:
thinking about what you need to do to get the best value in your organization, what you need to do to prepare for them, and even start implementing and extending them further as well.
Zoe Wilson (01:13)
So we've got quite a special episode today where it's not just me and Kevin. We've actually got a bit of a special guest joining us. But before we get onto that, we also had a very special guest that we spent some time with on the Copilot Fireside Chat. What was that? A week ago, Kevin? A little over a week?
Kevin McDonnell (01:32)
Yes, it was only a week ago. I had to think about that for a while, but it really is. Yeah, we had the fantastic Abram Jackson, who I love listening to. I feel I fanboy far too much about him on this show, and worry his ego is going to go a bit too far on that. But it was a really good chat. And I think we both said, we had what, about 40, 45 minutes, and we could have spoken for hours.
Zoe Wilson (01:35)
Yeah, time's just vanishing. Yeah.
Kevin McDonnell (02:00)
things like that with questions and just listening to him. I think it's about, you know, what's happening in the agentic space, what's happening in generative AI, where things are going, what you need to do to think about within there. And it was just fascinating.
Zoe Wilson (02:14)
Yeah, I mean the 45 minutes just flew by in the blink of an eye,
and I felt like we could have kept going for hours really, because he's not just informed and up to speed on what Microsoft are doing. He's actually incredibly well researched and up to speed on the entire industry and what all of the competitors are doing. Things like DeepSeek and, you know, some of the other more challenger
Kevin McDonnell (02:26)
Which isn't like us.
Zoe Wilson (02:41)
AI LLMs or platforms or solutions that are coming through as well. So yeah, just a really, really good discussion.
Kevin McDonnell (02:43)
Absolutely.
Yeah, so if you get a chance to see Abram, do. I think we will try and get him on the show. I can't remember who he's speaking to about it, but we're saying we should get him back on the Fireside Chat. I think actually what we'll try and do is get him on the podcast at some point as well. Yes, well, yeah, absolutely. In fact, we may even get him for a little bit at the MVP Summit in a couple of weeks, which is our next big event coming up.
Zoe Wilson (03:00)
As well, why not both? Yeah.
Yeah. And then I guess talking of guests on the show, we've got an interview with the one and only Chris Huntingford. Now, Kevin and I were both at the Microsoft AI Tour in London this week. Chris was also there, and it was really good to see him in person. I just love talking to Chris because he's one of these people who's got such a strong technical background, but is also really passionate about things like governance and people as well.
Kevin McDonnell (03:41)
Absolutely. And I think passion is the word. This is someone who loves their community, who loves the technology, who loves making magic happen and is not afraid to share it, which is really good. And he's not afraid to say what he thinks as well. So I'm going to confess, we have already recorded this and I missed the first half. One of the things I miss about being a host on this show is I feel a bit weird listening back to it too often. So it's quite nice not being there, so that I can
not feel weird listening back to that bit.
Zoe Wilson (04:12)
Yeah, so we'll get into the interview with Chris. Like Kevin said, he wasn't there for the first bit. He does join halfway through, so we finish off with all three of us. And we'll be back at the end to wrap up and tell you about some of the awesome things that we've got coming ahead through the rest of March.
Kevin McDonnell (04:29)
And we have a huge announcement for what we're having on the next Copilot Fireside Chat. So stick around to the end to hear more about that. Otherwise, should we hand over to you and Chris?
Zoe Wilson (04:41)
Sounds like a plan.
Zoe Wilson (04:42)
You may notice that I'm joined by someone today who doesn't look an awful lot like Kevin. Chris Huntingford, thank you so much for joining us. Do you want to just do a quick intro for yourself for the benefit of all our listeners?
Chris Huntingford (04:56)
Yeah, thank you for having me Zoe. So what's up, my name is Chris. I'm the director of AI at a small partner in the UK called ANS. And I also work with another company called Cloud Lighthouse and I get to be a partner there and work with organizations in Europe and America. So yeah, it's been wild.
Zoe Wilson (05:14)
Yeah, it's certainly been an interesting couple of years in the industry, hasn't it? And for those of you listening, I've had the pleasure of knowing Chris for years, but it really feels like the parallel tracks across biz apps and modern work have really collided in the last two years with all of the Copilot and AI stuff that's going on. I know that you've been getting yourself up to speed on a lot of the things that normally would have sat in my world, like Purview and all of the M365 Copilot stuff.
Chris Huntingford (05:39)
yeah.
Zoe Wilson (05:42)
And as well as that, I know you're really passionate about responsible AI as well. So do you want to just talk a little bit, first of all, around all the data stuff you've been doing with Purview and why that's so important for people?
Chris Huntingford (05:55)
Yeah, so, okay. This comes from a place of concern. I'm going to give you a little bit of a story here because it's really interesting. Right. So many moons ago when I was working at Microsoft, I got to work with partners, right? And this is probably like 2018, I think. And one of the partners I was working for was building a solution that would look at the transcripts of people from like mental hospitals and things.
And I'm talking like big hospitals like Broadmoor and things like that. So I got to read a lot of these extremely disturbing transcripts. And what they wanted to do was review the transcript, pull out keywords, and then potentially look at other people and see if they were potentially going to be problematic. Okay. So this is like as ethical AI and responsible AI was kind of being spoken about. Microsoft kind of had a framework, but they didn't really have a framework. Okay. It existed, but it wasn't public. So.
I had this weird feeling and I went to my mentor, Phil Harvey, and I was like, dude, I'm doing this thing, but this feels a bit strange. He calls it the pink cupcake analogy: when you see the strawberry cupcake on the table and you feel like taking it is wrong, it probably is. And that's kind of how I felt about this. And he's like, yeah, man, you actually shouldn't be doing that, because you're not allowed to profile people with artificial intelligence. It's not an ethical thing to do. It's a bit Minority Report-esque. And I was like, okay, this does feel strange, right? But the crazy thing is, it worked.
Right. So you could do it, and this was not even using any ridiculous type of AI. This was AI in Power Platform. You could actually use the keyword finder and the phrase finder and go and do this stuff and then build this case from there. And that got me thinking. I was like, holy smokes, this is something that I really, really need to take care of and really start doing research on. And so I did. And then probably around last year, I had the privilege of meeting Pamela Cortez,
who was, I don't know her role now, but at the time she was working in the responsible AI team for OCTO at Microsoft. And you know, I got chatting to her and I just learned tons and tons, and my interest was piqued. And then I started doing research into the law, like the EU AI Act and the California acts and the stuff that's happening in Europe. And I was like, wow, okay, this is really interesting, but it boils down to data. And that is kind of where your world comes in, where I am historically a biz apps person, like I worked in Dynamics, on structured data.
And I started learning about unstructured data and SharePoint and how M365 works and Compliance Center and obviously Azure Purview. And I figured that if data is a digital representation of you, your company and your customers, we should be respecting it way more deeply than we are now. So I started learning tools like Purview. And let me tell you, probably the best thing I ever did in my career was getting into this. Not just because of career growth, but more
from a socio-economic perspective: it's really important for society that we understand this stuff and we respect it. So yeah, the Purview piece has been incredible, but it's also eye opening. Learning about what people are actually doing in tenants and doing with data has been both disturbing and really interesting. So yeah, that's kind of how I got into it.
Zoe Wilson (09:04)
I
think it's primarily what they're not doing, isn't it? In terms of not managing it, not even knowing what people have.
Chris Huntingford (09:10)
Yeah,
security by obscurity doesn't work anymore. You can't have your tax 2008 folder buried deep in your computer and not have that available to, you know, potential threats, right? AI is a loudhailer for terrible data. So you will find that stuff, and you will find it quickly, and you will be able to share it even more quickly. So yeah, it's wild.
Zoe Wilson (09:15)
Yeah.
Yeah, yeah,
I mean, it's super important. But actually, one of the other things that you mentioned there, I want to pick up on a little bit, because this is one of the big things that feels like it's keeping me awake at night at the moment, which is the socio-economic impact of all of this. Because you've kind of got the stuff which is in our world, which is the corporate data and systems and how people manage that. But one of the things that I'm
definitely observing is that it feels like technology is evolving so quickly, and the things that you can do with it are evolving so quickly. And one of my big fears at the moment is that we are implementing things with technology without stopping to think about the human impact. And, you know, the conversation, like when I started talking to people about Copilot two years ago, it was very much: this is going to
Chris Huntingford (10:15)
Yes.
Zoe Wilson (10:24)
augment people. You need to, you know, when you're rolling this out to people, help them understand this isn't about replacing jobs. But you only have to look at the Judson keynote from Wednesday's AI Tour in London, where he talks about how he's committed to the board that they're going to have a hiring freeze at Microsoft for three years; they'll still achieve double digit revenue growth, but with the same number of FTEs. And
I mean, the way that's framed, I think, is quite positive. But I'm aware that people are looking at how they could take out entire teams, entire departments, and society is not evolving at a pace to keep up with this. We're going to eradicate loads of jobs and not actually be able to put our arms around people.
Chris Huntingford (11:07)
Yeah. Okay. So at the Power Platform conference last year, Trevor Noah was on stage, right? And he said something that was extremely profound. He said we should start protecting people and stop protecting jobs. And he's a South African, right? So obviously I'm like, yes, bro, that's true. But I liked that. And the reason I liked it is because I think that we have to shift. We're going to be moving,
shifting left at some point, into a different career mode. It's the whole Charlie and the Chocolate Factory analogy, right? Charlie Bucket's dad was screwing toothpaste caps onto the toothpaste tubes in the factory, and then he got fired because there was a robot that did the job for him. Then the robot broke and he went back to fix the robot, right? And it's kind of juvenile, I guess, but at the same time we have to think of it that way, where we need to just get educated and we need to start thinking about how we can go and move up
into different types of roles. Like, look at red teaming. Red teaming kind of already existed, and now there are thousands of cab drivers all over the world, especially in places like New York, who are actually red teaming for OpenAI, because they come from different countries, from educated backgrounds, and they couldn't get jobs in New York. So what did they do? They joined the OpenAI red teaming network and started testing the AI, right? I find it extremely interesting, but I think it is our responsibility. If you are listening to this podcast, this is you, by the way.
It is our responsibility to be educating the people around us. If we are not doing that, we're doing the wrong thing, in my opinion. And it is our socio-economic responsibility to make sure that any customers, your family members, your friends, understand why this is here, what it's going to do, the impact of it, and how we can help educate people. That's my personal perspective.
Zoe Wilson (12:55)
Yeah, I couldn't agree more. I know you've got kids, Chris. How are you guiding or advising your kids from a career perspective?
Chris Huntingford (13:03)
So they're still very little, okay? But I've had two issues with career stuff with these kids, and I'm going to speak very openly here. The first one is that one day my daughter came home to me and we were reading a book. I love space and rocks and stars and things like that, like I studied it, and the book was on astronauts. And she's like, daddy, why are all astronauts men? And I'm like, hold up, let me help you. So I started showing her a lot of the female astronauts that have gone into space and telling her the stories and
that type of thing. And she's like, oh, cool. So she can do that. And I was like, yes, you can absolutely do that. You are not limited by anything around you at all, in any way, shape or form. Okay, so that's the first thing that I've been doing from within, because I think there's a lot of garbage out there that people get fed. And on that note, AI is part of that. I feel like a lot of people are getting fed absolute junk through the media and all these things. And actually, I think we've just got to bring it back to solid core facts. Because, you know what,
they will be using AI in their jobs. My children will never write an email from scratch. They will never write a Word document from scratch. They will never write code from scratch. So I have had to change my perspective on that and say to them, you know what, what will happen in your workplace is going to be different to mine, but it's my job to educate them and show them the tools. And I do, right? I do talk to them about it. They're very little still; they don't understand the AI. But what they do get is that they will be using technology in some way or another. So those are the two things.
Zoe Wilson (14:17)
Yeah.
Yeah, and they'll be using technology
in a completely different way as well. I remember my half sister is 13 years younger than me, and I feel like we grew up in a time where there wasn't tech: there weren't mobile phones or anything like that when we were kids. By the time I was at university, there were mobile phones, we had pagers, we had internet at home, home computers; it was a completely different world. And then my half sister, well, she was born when I was 13.
Chris Huntingford (14:33)
Yeah.
Zoe Wilson (14:56)
And I remember kind of watching her grow up thinking that she was going to be so much more technologically advanced than we were. But it's actually the opposite because all of the battles that we had with, you know, building our own PC or repairing like really old computers, the issues that we had with dial-up internet, the kids today, they don't know those struggles. They haven't had to learn how to problem solve. You know, they've just got tech that works. And I feel like with those,
Chris Huntingford (15:05)
Yeah.
Zoe Wilson (15:24)
kids that are going to start coming through into work in the next 10 to 20 years, it's going to be that but on steroids.
Chris Huntingford (15:33)
It's the lack of critical thinking ability. Yeah, so we were lucky, right? I'm 41, so I'm an 80s kid, and I grew up in the movement of technology, the change in the way technology worked, very fundamentally as well. And what I found is that, okay, I'm going to use my schooling as an example. I had to write essays about things like astrobiology, and one of the essays I had to write was:
Zoe Wilson (15:36)
And that's a problem.
Chris Huntingford (16:01)
you are flying to a moon of one of the other planets. It's an ice moon. How do you gather chemical information? Could we live there? Could there be life on this moon? Right. Now, I could quite easily stick that into a GPT, not use my creative brain, and build that essay. I could very easily do that. And that's a problem. That's a big problem, because I'm not critically thinking. I'm not asking,
what are the things I need to understand? And when it comes down to the crux of it, I don't have the answers, right? I'm not going to be able to answer any questions on this stuff. And it's the same with tech. In fact, it's going to be the same with anything. Will Dohrington actually posted something on LinkedIn quite recently about how it removes the cognitive load. But I don't think that's good. I don't think that's a good thing.
Zoe Wilson (16:53)
I think for
people who are already experts in a field, it can help remove the cognitive load. But for those who are not yet experts, I think it could potentially limit the depth and breadth of the expertise that they could actually achieve in their field.
Chris Huntingford (16:58)
Yeah.
And it sucks, right? Because I think it takes the joy away from learning.
Zoe Wilson (17:14)
And
the joy away from being creative as well. I mean, in the essay example, yeah, you could probably get something that was fairly middle of the road, or good if you put some time in just to give it your tone of voice and things like that. And it might give you a few points that you'd not thought of. But actually, I think being able to flex your creative muscle is super important as well.
Chris Huntingford (17:32)
But you want, yeah,
you want to do that. So one of my friends calls it a brain shower: using your creative muscle to go and do something awesome. I paint in my spare time. I paint, I play drums and guitar, I go to the gym, and that's important, right? But I don't want AI to take that from me. Absolutely not. I don't want AI to take away me playing drums. So why have we let it take away the things that bring us the most joy? That's the part that I struggle with. We invented this cool thing,
Zoe Wilson (17:49)
Mm.
Chris Huntingford (18:01)
but then we were like, okay, be an image designer. And I'm like, no, don't be an image designer. I want to do that. Why don't you be the thing that captures my horrible expenses? Yeah. And I think that does fall under responsible AI and ethical AI; I think it is part of it. The thing that they talk about is the transparency and accountability and the RAI standards.
Zoe Wilson (18:05)
Yeah.
Yeah, the thing that sorts out the groceries that I need in the fridge this week.
Chris Huntingford (18:27)
And that accountability piece is quite, it's kind of important, right? So that's something I personally struggle with. I hate it and I love it. Like I love the fact that this amazing new technology is here. I hate the fact that it's taken away the fun things. Yeah.
Zoe Wilson (18:31)
Mm.
Yeah,
I'll tell you one thing that has really been winding me up the past few months. First of all, it was all of the long form LinkedIn posts where you can just tell it's AI generated. Especially if you know the person and you can't see any of their personality; you've just got some meaningless selfie, long form post or AI generated image. And that's all I see on LinkedIn now, which is
Chris Huntingford (18:50)
Mm-mm.
I hate
it.
Zoe Wilson (19:09)
which is depressing, but the one that got me the other day, was browsing Facebook and I'm in a barbecue group and someone had posted a video of them cooking, I don't know, like a brisket or ribs or something like that. And they'd made an AI generated song that talked about what they were doing. And yeah, it was a little bit creative, but it just sounded crap. Like absolute average middle of the road music. And I feel like this is the next wave of crap we're gonna have to deal with.
Chris Huntingford (19:15)
Yeah. Amazing.
Yeah.
Yeah, this is gonna be, okay, it's kind of like data exfiltration at its worst: the creation of terrible junk, right? It really is.
Zoe Wilson (19:46)
And then just the
dumbing down and people accepting that this terrible junk is kind of, you know, what's out there.
Chris Huntingford (19:53)
Yeah.
Have you ever seen the movie Idiocracy? I love that movie, but at some point they just can't even speak properly. They forget how to use creative words and how to fill sentences with things that are meaningful, right? This is extremely cynical, and I promise you I'm not cynical, but I feel like this could be the beginning of something like that. And I'm watching it with short form text, with people messaging me.
Zoe Wilson (19:57)
Yeah.
Yeah.
Chris Huntingford (20:23)
Like, a lot of the people I went to school with text me and it's all very short form. And I'm like, we're forgetting about the creativity, about filling the sentence with something that's meaningful. And let me tell you, if somebody tags me in one of those AI LinkedIn posts, I promise I will remove myself and I will unfollow you. Because I follow people on social media, let me just put that out there very clearly, because it is you who I want to hear from. It is not some, and there are very many swear words in this place right now,
Zoe Wilson (20:33)
Yeah.
Chris Huntingford (20:53)
dumb text that you put on LinkedIn, because I won't read it. I will not read what you put on there. I won't read an AI generated book. I won't read a LinkedIn post that is AI generated. It's not going to happen. I want you to make spelling mistakes. I want you to put weird stuff in there that is quirky. I want to see your idiosyncrasies. I don't want AI junk. And if I see the word fostered one more time... Sorry, that's my rant done.
Zoe Wilson (20:56)
The end.
Yeah. Technology is a rapidly evolving
landscape. Yeah.
Chris Huntingford (21:20)
Oh my gosh. I'll tell you
what, Victor and I wrote a book recently. We did use elements of AI to generate some of the content, like to begin with, but the things that we wanted to write about don't exist on the internet. So something like ecosystem enablement is not something you can AI generate, because it doesn't know what it is. Exactly. And this is the thing that I found extremely interesting when going through this process.
Zoe Wilson (21:43)
doesn't have the source material.
Chris Huntingford (21:50)
I'd write a piece of it and then I'm like, okay, I need to write down, you know, the top 10 roles for ecosystem enablement and how an ecosystem architect will go and do X, Y and Z. And the role does not exist. There is no website, other than the Mendix website maybe, where you'll find any content on it, right? But it was challenging, because then I started thinking about source material for AI: the more junk you generate, the more junk you generate, kind of thing. It's a fractal. Yeah.
Zoe Wilson (22:12)
Yeah. And I
know a lot of platforms now ask you to tag, or if they can tell, they'll apply a tag that says something's AI generated. And I'm hopeful that that is so it can be excluded from future harvesting of source material. But there will always be things that slip through. So yeah, it just leads to this dumbing down. If you think about all of the people that you know and where the midpoint is in terms of
quality and intelligence and all of those things, and then you look at everything that those people produce, and that's the stuff that's going into the AI, it means that the midpoint of that is pretty mediocre.
Chris Huntingford (22:45)
Go.
Yeah, it's like the voxel of information, right? Like it is the best at being extremely average. So yeah, I struggle with it a lot.
Kevin McDonnell (22:57)
So I heard you say extremely
average and I thought it was a perfect time to join.
Chris Huntingford (23:01)
There's nothing average about you, my friend. At all. And we have frozen, Kevin.
Zoe Wilson (23:02)
Hahaha!
Zoe Wilson (23:08)
apart from maybe his wifi connection.
Kevin McDonnell (23:11)
No, it's my...
camera. I might drop off slightly and come back in, because I can't change my damn camera.
Chris Huntingford (23:19)
There's no judgment. Yeah, it's starting to get really interesting, right? I'm seeing it in companies. So here's a good one for you. Imagine you're a legal firm, and you are building meta documentation based on meta documentation that exists, right? So contractual information and things like that. You haven't had a human in the loop. You haven't had anyone check that content, and the grad who is supposed to be checking the content doesn't understand the content. That goes out.
People sign it, because they don't read these things. Then what? Where are we? We are now in a state where not only has AI generated something that's probably incorrect, but you've signed something that's probably incorrect without understanding what you signed. And now you're legally bound, because somebody didn't review it, or didn't take the effort to go and actually make sure the information was right. And that's the part that I'm struggling with.
Zoe Wilson (24:01)
Mm.
Yeah,
yeah, so that's one part. The other thing that worries me slightly is when I look at some of the geopolitical things that are happening. And I know we talked earlier, Chris, about the need for society to put their arms around people a little. One of the things that worries me is that the bar for responsible AI from a regulatory perspective in certain parts of the world will become lower.
Chris Huntingford (24:24)
ho ho ho ho ho.
Zoe Wilson (24:41)
So I had a conversation with someone the other day at the AI Tour where they asked me if I'd ever had conversations about co-pilot in weapons systems.
Chris Huntingford (24:52)
Oh gosh. Okay, so humans suck, because we have turned walking sticks into weapons. That's what we do, and that ain't gonna go away. So I hate to say it, but we have done a very good job of weaponizing everything we touch. We're like, look at this amazing thing, how do we hurt people with it? And I'm like, man, why do people suck so much? Honestly, it just makes me sad. So yeah, the only answer I'll...
Zoe Wilson (25:12)
Yes. Yeah.
Yeah.
Kevin McDonnell (25:20)
But how do you control
it as well? That's part of the challenge. How do you stop those things? You know, I'm sure we've talked a bit about this, but education, getting people to think about the why. But you can't just stop it, because people will find ways around it. That's the worrying thing.
Chris Huntingford (25:36)
Yeah. So
I did a keynote in Tallinn recently. Yannick and I
had a very, very in-depth conversation in Norway, and he was like, can you go and tell everyone this? And it's about responsibility: taking responsibility for the things you do, the people around you and the things you create. But this is not an AI thing. This is a human thing. And remember, humans are horrible. Scott Hanselman did the funniest thing at a keynote once. I think it was the Scottish Summit.
Zoe Wilson (25:55)
and humans are horrible.
Kevin McDonnell (26:01)
No he's not.
Chris Huntingford (26:06)
He's great. I remember he was talking about the sock puppet analogy and he's like, oh, the AI! Like, what I'm doing is I'm jailbreaking the AI. But what we forget is we are the ones doing it. You remove the sock from your hand and you're like, oh crap, it's me. It's me that's doing it. It's like this huge surprise. No, it's you that is the one, not you specifically, but people in general. It is you that is the irresponsible one, right?
Zoe Wilson (26:28)
Mm.
Chris Huntingford (26:31)
And this is why I feel so strongly about taking responsibility for it. And my theory here is that if enough people get behind it and actually take responsibility and say, no, this is not okay, we are not going to stand for this, then we have much more of a chance. That's what I think. And unfortunately, we're not going to stop people like the Elon Musks of the world doing dumb things, right? But my theory here, and I'll quickly give you one, is just an analogy, right?
Zoe Wilson (26:31)
Yeah.
Yeah.
Chris Huntingford (26:59)
Elon Musk, well, these people, they could be the richest people in the world, but if people don't sell things to them, they have nowhere to spend their money. And it's the same thing with AI. If enough people get behind it and enough people back it and say, no, we refuse to stand for this, then I think we have much more of a fighting chance.
Kevin McDonnell (27:17)
I think it's about getting people thinking about it, isn't it? It's getting it into people's loop, getting people considering why, and why people are doing it. Why are people pushing a lot of these tools out there? Why are they pushing these things? And if you can think about that, you can work out whether you think it is right for you as well.
Chris Huntingford (27:39)
Yeah, I'm seeing it now on social media though. I'm seeing it now where people are actively posting about this stuff and actively saying, this is not okay. I mean, when the whole freedom of speech thing with Grok came out, no one realized that on Twitter there's a box you have to uncheck, and they were using our data to train the models, right? People were up in arms, man. They weren't happy with it. And I'm seeing it more and more, even at AI Tour on Tuesday.
Zoe Wilson (28:00)
Mm.
Chris Huntingford (28:05)
Do you know, I stood in the same spot for three and a half hours. I'm not joking when I say every conversation I had in that spot was about responsible AI. And people saying to me, what do you think of this? And look, I'm not an AI expert. I'm learning just as everyone else is, right? But what I do know is that there are barriers and guardrails being put in place, like the EU AI Act, where people are going to be held accountable for the things they do, build and sell. And
This is not GDPR. This is not, oh, we have a data residency issue. This is, if you make dumb stuff, you are going to be held accountable. They've got a thing called the AI council. It's like 500 companies, and that council is allowed to haul you up at any point if you have put an AI in the public eye. They're allowed to review what you've done. I've joined the calls. I've been on the actual calls. Why do you think there are no more public facing copilots and bots? Go and find me one.
Legit, go and find me a public facing copilot on a website that works and that has been built correctly. The Air Canada one got hauled up. The Kia one got hauled up. So yeah, things are happening, right? And I do think that with legal accountability from governments, people are being held accountable, and I like it. I think it's good.
Zoe Wilson (29:22)
Yeah, I do
feel as well that it will take things like the EU AI Act to actually force some of the standards that we won't be able to comply with. Because again, if you look at the things that are happening over the pond, they've got this degradation in standards and expectations and things that they could do, and a freeing up of regulatory guidance or legalities even. So yeah, I think the EU AI Act and
Chris Huntingford (29:35)
yeah.
Yeah.
Zoe Wilson (29:51)
other things like this will be absolutely key. And I know you said you're not an AI expert, Chris, but I think it's really important that we've got people like you who are passionate about this and who have that strong sense of what is right and what is acceptable. And again, a shout out to everybody listening. We need more people to actually make sure that they understand the risks and what all of this means and are able to advocate.
And you might think that you don't have enough influence, but if you're in a room with a customer, or if you're in a room with your organization and you're designing solutions, it's really important that you are able to stand up and say, no, that's not good. That's not okay.
Chris Huntingford (30:34)
I couldn't agree more. I think we do need to stand up and we do need to say, actually, I don't agree. And I'm going to tell you something interesting. I was lucky enough to be at the AI partner day on Tuesday, and I went to the responsible AI track, and Microsoft had a couple of lawyers in there. And I've actually partnered with a legal firm myself called Burges Salmon, and I've learned from them as well. But here's something interesting. The EU AI Act is extraterritorial. Okay. What that means is that if you are under the umbrella
of the act in any way, shape or form, and you are not in Europe, the law still applies to you. So say you are riding your bicycle in Manchester and you crash your bike into a pothole, okay, and Manchester council are using AI to do that pothole identification and they don't identify that one. Who is liable?
It's the people who put the AI in, right? And that's the part that's crazy. So here's another one. If you create a public facing bot, yeah.
Kevin McDonnell (31:32)
Sorry, so if a partner's working
with them and they've put the AI in, even if they've kind of handed it over and finished that work, they're still liable for that.
Chris Huntingford (31:45)
You have to have a transparency note. You have to have proof of education. So here's the thing. Microsoft are very good. So they are the vendor. You can break it into three: the vendor slash provider, the deployer and the user. Okay. It's called shared responsibility. So all three are liable in some way, shape or form when AI comes into play. Here's the thing. Microsoft have provided transparency notes for every layer of the AI that they provide as a platform. So they've done the right thing. Microsoft have given us a lot of information.
Kevin McDonnell (31:47)
Interesting.
Very interesting.
Chris Huntingford (32:14)
They've been extremely accountable. They've been very transparent. They red team every single thing. And I know this because I've seen what they do from a digital safety perspective. I know how the deployment safety board works, right? So for any AI you use from Microsoft, they've been extremely good at being accountable and transparent. They've provided safety. They've provided security. But say you are a partner and you are training a model. You are now using their tools a bit outside of that, and they can only be responsible for the platform. You are now the partner. You are responsible
for providing the transparency of what you are using the tools for. Correct. So say you use Copilot Studio in a public facing bot, you need to tell people this is AI, this is what it's doing. Microsoft are still rapid with responsible AI from a tech perspective, but if you provide terrible data, it is on you. It's not on Microsoft, right? And this is the whole thing. So if you are a customer using AI, you should ask your partner who's implemented it with you.
Zoe Wilson (32:47)
The thing you're building essentially.
Chris Huntingford (33:13)
Please be transparent. Show me your red teaming. Show me how AI operations work. Show me the qualifications of the people deploying the solution in my organization. And let me tell you, I would love to see partners do that because I myself am working on that right now. Right? I would love to see it. And that's being responsible and that's being held accountable for actually deploying. So remember, if you go all the way back and you wheel it all the way back, this is a shared responsibility model.
It's not only on Microsoft to do it. It's not only on the user. It is on the partner as well, and actually primarily on the deployer.
Kevin McDonnell (33:45)
I think where Microsoft has to take a stand, and I've always loved that Scott Guthrie talks about falling into the right bucket, is they should make it easy for people creating these solutions to do the right thing and to keep that challenge. So I absolutely agree with you, Chris, it's not only Microsoft's responsibility, but they need to be there at every point, continuing to push that, to make sure that people are considering it as well.
Chris Huntingford (33:45)
These are my rants.
Kevin McDonnell (34:14)
and not to stop. I think it's fair to say.
Chris Huntingford (34:15)
And I think they do.
Yeah, I think they do. I think this is one of the first times I've seen a vendor actually take responsibility for the tech that they're selling. The fact that they had lawyers in the room on Tuesday giving advice and saying, this is what we need to be doing here, is awesome. But again, it's on us, right? As partners who are deploying these solutions, it's on us to make sure the data is correct. So Zoe, back to our data discussion: making sure the data is correct. You're not going to go and train an LLM on a bunch of biased data
and then be surprised when it comes up with junk answers. That's not the LLM. That is you. And this is exactly why I keep on saying to people, if you are going to have anyone deploy an AI solution in your business, there are three things you can do to make sure that you're going to be safer. Okay. Number one, get them to evidence the education of the people who are doing these roles. Okay. Make sure that the folks that are doing this actually understand what they're doing. Okay. Number two,
Zoe Wilson (34:55)
Garbage in, garbage out.
Chris Huntingford (35:18)
evidence of AI operations. Evidence of how you treat non-deterministic behavior from a GPT. It is non-deterministic. You cannot test the same way. You cannot do application lifecycle management the same way. You can't test for something you don't know the absolute answer for. So how are you red teaming? Okay. And that's really key. And number three, evidence of ethical and responsible AI frameworks. Show me your transparency note framework. Show me how you red team.
Show me how you plan on being held accountable. Explain to me shared responsibility from an AI perspective. That's what I would love for people to do. Just get those three things right, right? And that will hold people accountable. It'll hold us accountable. The customers are educated. I'll give you a quick example. When I deployed my first Copilot Studio live bot, and it is deployed live, I can't tell you the customer name, but it's a global organization. Okay. It took us five minutes to build that bot.
It took us a month and a half to productionize it. Do you know why? Man, the meta prompts, because they've got geopolitical data inside that copilot. So we had to evidence how we red teamed. We had to evidence the types of red teaming that we did. We had to show the cases in the transparency note. We had a Power BI report that actually showed the answers and the differences between the different bots that we were using. Then on the website, they actually have training data. They say, this is a Copilot Studio solution.
and they actually tell the user how to prompt. They're like, if you put a terrible prompt in, you can expect a bad answer. So they've taken responsibility. Yeah, it's amazing. I love it. I think it's such a cool story. Like I said, this is not a five minute solution where you put a bot on and hope for the best. This is a hard thing to do, right? But I can tell you that we did the right thing.
Zoe Wilson (36:47)
Hahaha
Yeah, and it's really interesting because I see with a lot of organizations, there's this real mismatch in expectations in terms of the speed of being able to build something and actually understanding all of the things that you need to do to be able to do this properly.
Chris Huntingford (37:15)
Yeah, just because you can make it quickly doesn't mean you should productionize it quickly. And I feel like if we just stick to it, yeah. Yeah.
Kevin McDonnell (37:19)
Yeah. But you should make it quickly. You should make it quickly
Zoe Wilson (37:19)
Yeah.
Kevin McDonnell (37:24)
and learn quickly. Don't just tidy everything and then try and build stuff, because you won't understand from that as well. I 100% agree. Yes, you've got to go out there and build the extra things. But I think getting in there, getting hands on with this stuff with a small group of people, locking it down to small bits of data and growing so you learn
Chris Huntingford (37:33)
Yeah.
Kevin McDonnell (37:47)
before you go live with it as well. If you bury your head in the sand, if you try and stop doing any of this, you're going to hit the same problem, because suddenly you're going to be rushing it later as well.
Chris Huntingford (37:50)
Yeah.
Yeah, just start properly and end properly. Hey, and look, I'll be honest with you. When I started doing this, lessons were learned, man. It wasn't easy. It was actually quite horrible, because there were so many things I hadn't taken into account. Productionizing that solution was a really big lesson, but now I know how to do it. Now I can tell people, I can say, hey, this is the framework that I use. This is what I did. Here's how we did ALM. Here's how we tested. And it's really good, right? Because I think you need those lashings on the back.
You have to do it with AI. And find organizations who want to do that with you. I'll give you an example: Burges Salmon, the legal firm I've been working with. They're so innovative. They're like, okay, we're happy to try and make mistakes. But you know what they've done? They've gone and productionized Copilot across their whole org. They're using it and they're using it well. And it's interesting because
Kevin McDonnell (38:28)
Yeah. And continue doing it. You know, we're all learning as we go along.
Zoe Wilson (38:32)
Yeah.
Chris Huntingford (38:54)
The program of work was quite difficult, but actually it was such a great set of lessons to learn. And they will tell you, if you have a chat to them, they're very happy to share what happened with them. And it turned out, and Zoe, this goes back to the tech piece, it moved away from a technical program to a human-centric program. And I feel like a lot of AI is not actually tech, it's cultural.
Zoe Wilson (39:15)
Yeah, it has to be business owned, business led, people centric. One of the other things that really frustrates me at the moment, particularly with agents, is people wanting to just apply AI and agents to everything, applying AI or agents to things that are just standard automation. And then when we think about all that, yeah, yeah.
Chris Huntingford (39:31)
Hahaha
Yeah.
Kevin McDonnell (39:39)
Sparkling automation.
Chris Huntingford (39:41)
Sparkling automation.
Zoe Wilson (39:42)
Yeah. When we think about all the effort that's needed to take something from
being built to being production ready, it is quite clear that people who are doing this are not doing all of that work, because otherwise they'd realize there's actually no point doing this for something where bog standard automation would do the trick. But everybody just wants AI in everything.
Chris Huntingford (40:06)
Yeah, so what I'm gonna say next is gonna be hard, right? So the first thing is, yeah, you're right. I think that a lot of people are confusing agentification for sparkly automation. So to quote Donna, it's just AI infused automation, right? The part that I am excessively worried about is, okay, do you know what a fractal flow is? Okay, so a fractal flow is
an automation that calls an automation that calls an automation, and eventually you get this never ending loop. All right. So that's a fractal flow. Now what happens when you have agents? And, just before you got on, Zoe and I were talking about the generation of terrible data, right? So what happens when you have an agent, and I'm going to use the word agent very loosely, it could be a sparkling automation, that generates data and it becomes fractal? Agents behaving badly, right? Running around causing nonsense. Data exfiltration is going to be proliferated
Zoe Wilson (40:37)
Yeah, yeah, yeah.
Chris Huntingford (41:02)
way more now than ever before. And also think about bad actors, like what happens then. Okay. So what I would love for people to start thinking about is: yes, make an agent. Yes, it's a good idea. But think about what happens with agent Armageddon, right? Agentic Armageddon. What happens when you have these badly designed agents that are proliferating bad data across your business? Okay. So where is the governance? Observability does not count as governance in this scenario. I want hardcore governance now.
You'll have people say, we'll have a governance agent. I'm like, awesome. Who governs the governance agents? Yeah. Yeah. So.
Zoe Wilson (41:36)
Yeah, yeah. And this,
I think, you know, talking about society and organizations not being able to move and keep up fast enough, I think that's another really interesting area as well, because what we almost need is job descriptions for the agents. Like an intern: what are the guardrails? When do you know that that agent is going off piste and doing something that it shouldn't? And if this is something where there isn't a human in the loop, how do you tell?
Kevin McDonnell (41:51)
Yeah. Yeah.
Chris Huntingford (41:52)
Yeah.
Zoe Wilson (42:02)
And at what point do you need to intervene and stop that thing, or how do you get it back on track? And again, you know, people are just building stuff without thinking about all of these things.
Kevin McDonnell (42:12)
Yep.
Absolutely.
Chris Huntingford (42:12)
Yeah, to
me it's scary, right? I like the job role thing, but then you get people that are bad at their jobs as well, right? Like people who just don't want to be there. What happens when the agent doesn't want to be there?
Kevin McDonnell (42:25)
That's the thing with all of this. All of this AI is
Zoe Wilson (42:25)
Hahaha!
Kevin McDonnell (42:28)
just surfacing these problems that have been around for a long, long time and making them more prominent. It's a governance thing. I have heard, Chris, you might have spoken about governance before, even before this AI thing. We've talked about it in the SharePoint world: cleaning up your data, having these processes. Nothing has changed. These aren't things that you shouldn't have been doing before. It's just now far more obvious and more dangerous
Chris Huntingford (42:42)
Baby. Yeah.
Kevin McDonnell (42:55)
that you haven't. So there's so much we can use from what we've learned, and you don't need an agent to do that. And just very quickly, we need the three of us to do a conference session called Agentageddon. I'm loving that. That is sorted.
Chris Huntingford (43:00)
Yeah.
Yes! Agentageddon.
Zoe Wilson (43:10)
Ha
Chris Huntingford (43:12)
I actually drew, back in May, after chatting to Pamela, what Agentageddon would look like, right? And it starts out with these little things, then they get little devil horns. And it's quite funny, because I drew this ages ago. Because I know how fractal flows work, I had the feeling that this was going to be a thing. And it is a thing now.
And we're just at the beginning, right? We're just at the precipice of dialing over into this agentic phase, because we are still using sparkling automation. We are not doing true agentification. It's not something we're doing now. We're not there. Okay. But as we dial over into it, somebody has got to invent, I don't know, some cool thing that helps with agent governance. Somebody's got to do it. If you want to be a billion dollar organization right now, go and invent that thing. Right.
That's what I would recommend. Somebody that's much smarter than me needs to do it.
Kevin McDonnell (44:05)
And I think it is happening. I think there are things happening there. Do I think it's happening fast enough? No. Do I think there's 20 million different types of agents that are all coming together and having something that will govern all the different ones on there? That's going to be the fun and games. You've got your Salesforce agents. You've got your SAP one. Well, no, it's one. Well, yeah, I suppose it is. It is an agent that will look at the other ones. But, yeah, to be able to connect all those and understand it. Yeah.
Zoe Wilson (44:22)
One agent to rule them all.
Chris Huntingford (44:24)
Yeah.
Zoe Wilson (44:29)
Or is it one platform to govern them all maybe?
Chris Huntingford (44:33)
who governs the platform.
Zoe Wilson (44:35)
Hmm.
Kevin McDonnell (44:36)
An agent.
Zoe Wilson (44:37)
Yeah.
Chris Huntingford (44:38)
This is why I think somebody's got to do it, right? But I do think, okay, if I kind of backtrack, the things that I would love to see right now are, number one, for people to come together and start really talking about responsible AI, like the two of you decided to do here. Really start talking about it and really driving that responsible behavior. And
not just responsible behavior, but people taking responsibility for things. Not just, we made a thing. You made a thing. Awesome. What have you done to educate the people around you? What have you done to actually prove that you've tested this thing out? Show us. I would love for people to actively show me, because I struggle in the tech world. People will say, we've done this thing, and I ask for evidence because I'm very curious. Like, I want to see it. I'm not calling you out. I want to see it, right? Because I want to learn,
and when they can't show me, I get angry, I get mad. And I actually said to Donna the other day, I was like, why do I get so mad when people don't do this stuff? And she's like, it's because you feel responsible for it. So I would love for people just to take responsibility for getting the people around them educated. To me, that would be just a good start, right?
Kevin McDonnell (45:41)
Yeah.
Zoe Wilson (45:48)
Yeah.
And I think it's really interesting because when you look at the announcements that come out of all the big tech conferences, well, not even announcements, just everything that comes out of the big tech conferences, so much of it is just marketing and it doesn't even exist yet. And then in my world, clients are asking, what does this mean? How do we use it? What does it look like? And the thing doesn't exist yet, which means that you've got this friction or this tension between the marketing and actually being able to guide people responsibly.
Kevin McDonnell (46:18)
And it means people are rushing to do more and more and more because their expectation of what can be done is ahead of what is there. So the people who are delivering things are pushing ahead faster and faster without putting the right things in place. And we need to stop.
Chris Huntingford (46:19)
Yeah.
Yeah, it's a problem.
Zoe Wilson (46:33)
Yeah, just this huge sense of
missing out. Like if you can't keep up, you know, if you're not on the hamster wheel and you can't keep up, you're just going to miss out. You're going to be left behind.
Chris Huntingford (46:42)
Yeah, I think the other piece of advice I would give to companies right now is, I don't think anyone truly knows what... Yeah, well, stop cutting jobs. Yeah, because you are going to need... Yeah, I agree. Protect people, not jobs, right? You're going to need them.
Kevin McDonnell (46:48)
Stop cutting jobs. Sorry, yeah. Train, don't cut.
Zoe Wilson (46:58)
So I saw,
I can't remember which company it is, but I saw an article earlier this week about a company that proudly stated maybe a year ago that they were firing their entire customer service department to use AI. The announcement this week was, we're going back to a human contact center. It's like, okay, that experiment didn't go very well, did it? Yeah.
Chris Huntingford (47:07)
Dumbasses.
No, behold my lack of surprise. I
just, yeah, I agree with you. Stop cutting jobs. Start thinking about how you can get those people shifting left to doing other things, being observers. You're going to need them. This is the whole thing of, we have low code, we don't need developers. One year later, they're like, man, we need all the developers. It's the same thing.
Zoe Wilson (47:32)
Yep.
Yeah, we need more
developers, more people who understand this. And I remember when we first started talking about all of the Viva stuff, Kevin, and employee experience, one of the things there was a huge focus on was tacit knowledge. How do you get the tacit knowledge out of people's heads? That knowledge that they have from spending 20 years institutionalized in the organization, understanding exactly how it works and how to get things done.
Chris Huntingford (47:43)
Yeah.
Zoe Wilson (48:07)
and then just sweepingly get rid of all of this. And even if you get to a point where you have to bring the people back, you've still lost all of that tacit organizational knowledge, which is priceless.
Kevin McDonnell (48:17)
Absolutely.
Yeah.
Chris Huntingford (48:18)
It
is absolutely priceless. I put it in my slides for Tallinn. I'm like, it's not with great power comes great responsibility. Now it's with great knowledge comes great responsibility. And I do think that, again, we are responsible for making sure that we pass on that information as clearly as we possibly can. This is why I don't hold onto slides. I don't hold onto knowledge. I don't care if people take my stuff. And the reason is because I want them to, to be honest.
Kevin McDonnell (48:43)
But I think it's
how to use knowledge as well. You know, we're in a world where there's so much knowledge out there, it's how you get to the right thing. We don't need to know all the dates and exact details. Getting to that as quickly as possible is the key, isn't it?
Chris Huntingford (48:59)
Yeah.
Yeah. I totally agree with you, man. I do think that, you know, dialing it back again to the whole responsibility thing, I just would love for people to openly learn, openly take responsibility, openly share. And I feel like the more people that do that, the more people that do it openly without feeling threatened and without feeling that their jobs are threatened, it would just really help the whole AI voice in general. That's just my perspective, and I'm trying, but
I don't know, I wish I was smarter, honestly.
Kevin McDonnell (49:33)
I was about to say that's a lovely thing to wrap up on, but I meant the bit before that, not about you being smarter. I think on that side, that was great.
Chris Huntingford (49:39)
Yeah. Well, yeah, there's that as well.
Zoe Wilson (49:40)
Yeah, I don't know.
I feel like the world might be in trouble if you were any smarter, Chris.
Chris Huntingford (49:49)
I don't know. Thank you. I'll take it. But yeah, guys, thank you so much for having me, both of you. I just want to tell you that you are both doing an amazing job of this. And thank you for sharing your knowledge with everyone. Legit, I talk to people who watch your podcast, right? And they love it. So please just keep doing it, and please do more, because we need it.
Zoe Wilson (49:50)
So you're
Kevin McDonnell (49:50)
I like that.
Zoe Wilson (49:50)
pretty damn smart as it is.
Kevin McDonnell (50:04)
shut.
Zoe Wilson (50:14)
Thank you. And thank you for joining us. This is, yeah.
Kevin McDonnell (50:14)
So this is why we shouldn't
have pink tops because I'm blushing now. It's not a good clash.
Chris Huntingford (50:19)
Alright
Zoe Wilson (50:22)
You're always so on brand, Kevin.
Chris Huntingford (50:23)
you two. You are.
Kevin McDonnell (50:24)
Hahaha
Zoe Wilson (50:27)
Yeah, a huge thank you Chris. This has been a really enjoyable session for me and something I've been nagging Kevin for a little while to talk about. I think you felt it was important we had someone else to bounce off this one rather than it just ending up being us ranting at the audience.
Kevin McDonnell (50:39)
Absolutely.
Plus
any excuse to bring Chris along. Yeah.
Chris Huntingford (50:45)
I love a rant.
Thanks guys. I love a rant. Cool.
Zoe Wilson (50:47)
We out.
Kevin McDonnell (50:50)
So, Zoe,
have we done a nice promotion of all the things we have coming up or should we talk a little bit about that?
Zoe Wilson (50:55)
We haven't, no.
I think let's say bye to Chris and then we can kind of wrap up with what's coming over the next few weeks.
Kevin McDonnell (51:02)
Sounds like a great idea.
Chris Huntingford (51:02)
Well, thank
you. I really appreciate you both. Cool.
Kevin McDonnell (51:05)
Cool. Thank you very much and see
Zoe Wilson (51:06)
Thanks for
joining us and we'll see you in a couple of weeks, Chris.
Kevin McDonnell (51:07)
you in a couple of weeks.
Chris Huntingford (51:10)
Yay, yes we will. Cool. I'll see you guys later.
Zoe Wilson (51:16)
So there we have it, a wonderful discussion with Chris Huntingford on all things responsible AI, governance, why we need to focus on people and not jobs. And yeah, I always just have so much fun talking to Chris.
Kevin McDonnell (51:32)
I'm still getting used to Chris being the responsible one. It still feels weird to me, having seen him out a lot, you know. No, exactly. But it is great. It is why I love him as well. It is that balance that you see from him that really comes across. He's got that zaniness, that craziness, but such underlying knowledge with it.
Zoe Wilson (51:34)
Yeah, yeah, it's like, it doesn't compute, does it? Yeah, so...
Yeah, yeah, I could not agree more. So that was a huge discussion. We've got more big things planned for future episodes, including the big announcement for the Copilot Fireside Chat that Kevin teased at the start of this episode. So Kevin, do you want to tell people more?
Kevin McDonnell (52:11)
I'm so nervous. I put a LinkedIn post up about this that kind of says past me would be looking at current me going, you're doing what? But yes, we are over at the MVP Summit and we are going to be hosting Mr. Jeff Teper, the father of SharePoint, for a fireside chat. And for all those MVPs listening, no, you can't come and watch. I know Zoe, you keep going, maybe we can find a way. No, no, we're going to keep it small. You have to join via Teams, and all of you are very welcome to come and join that way.
Zoe Wilson (52:38)
You
Kevin McDonnell (52:41)
It won't be recorded; there might be a few selfies if we can sneak quite a few in on there. We won't be doing any summaries. You have to be there to hear what he says. We would love to have your questions, whether you can be there or not. So if you look at my LinkedIn, and I'll put a link to that in the show notes, we're trying to grab questions. We want to hear from you what you want to ask him. We'll be asking some questions on the day as well, hopefully. I just realized we need to talk about that.
Hopefully we'll be asking on the day as well, but get your questions in early. we're asking people, if you want to put your name, you can. We are taking anonymous ones. We will be there with him in person. So, you know, nothing that's going to get anything thrown at us or anything like that would be great.
Zoe Wilson (53:09)
Hahaha
Yeah, I'm not risking being kicked out of the MVP program because you've got a controversial question that you want to ask.
Kevin McDonnell (53:28)
No. Absolutely.
Sorry, was that for the audience or me, Zoe? OK, OK.
Zoe Wilson (53:34)
Both, both! Yeah, it has made me laugh though how many of our friends who will be at MVP Summit have been asking if they can come and fangirl or fanboy in person. I did laugh when Matt Baird commented saying he was available to help as well, because I kind of expect it from people who are the Modern Work MVPs, but it made me laugh that one of the BizApps ones was getting involved as well.
Kevin McDonnell (53:49)
Yeah
It is very funny. No, but it's great. And I thank Jeff and his team for helping make this happen, so I really look forward to it. Every time I've seen Jeff speak, he has so much history and so much knowledge and so much understanding of it all, but he also knows when to say, I don't know, let me pass that to other people. So we'll take all the awkward questions. I would say, please do. Yeah, we'll talk about renames.
But let's not make every question about that, or about governance. Let's see a few different ones in there as well.
Zoe Wilson (54:32)
Yeah. So like Kevin said, go find the form, fill it in, send us your questions. Please sign up to watch the fireside chat and hopefully ask more questions on the day as well. But we're super excited for this and we really look forward to having you on our next podcast episode as well.
Kevin McDonnell (54:53)
Yeah, absolutely. And to any MVPs listening, we are going to be taking the GoPros. I've even got the microphone here ready, getting charged, getting used to it. So we'll be getting some interviews and more from people as well. Come and find us at the Summit.
Zoe Wilson (55:09)
Awesome. So, yeah, thank you again for joining us. Absolutely great episode. Thanks as always, Kevin, for a fantastic chat. I'm looking forward to seeing you in a couple of weeks, because that means we will be at Summit, as well as all of our friends. And thanks, everyone, for listening.
Kevin McDonnell (55:29)
Thanks very much. Don't forget to subscribe and see you in a couple of weeks. Bye bye.
Zoe Wilson (55:33)
Thank you, bye bye.