The Copilot Connection

CCDev 1 - Navigating Copilot extensibility


Welcome to a brand new way of hearing about the latest Copilot extensibility news! In this episode of the Copilot Connection, Kevin swaps out Zoe for Garry Trinder in the first of a spin-off series of episodes where we dig deeper into the extensibility side of Copilot.

In this first episode, we give an overview of the options you have for extending Copilot. We cover why you would need to extend Copilot, the different ways you can do that (whether as an end user, maker or specialist) and touch a little on how you can actually get started.

Is there a specific topic that you'd like to see us cover? Let us know!


Takeaways

  • Extensibility in Copilot allows developers to create tailored solutions.
  • Understanding AI maturity is crucial for effective implementation.
  • Personalization of AI tools enhances user experience and productivity.
  • Scoping and context are essential for effective use of Copilot.
  • Integrating external knowledge sources can enhance AI capabilities.
  • Balancing deterministic and non-deterministic approaches is key in AI applications.
  • Declarative agents simplify the process of building AI solutions.
  • Responsible AI practices are necessary when developing custom solutions.
  • Continuous learning and adaptation are vital in the evolving AI landscape.

Kevin McDonnell (00:11)
Welcome to the Copilot Connection. We're here to bring you all the latest news, the latest thinking and the latest ideas around Copilot and the wider ecosystem around it. And those eagle-eyed viewers watching on the video may notice that we have a slightly different co-host with me here today. And the reason is we've decided to start doing not quite spin-offs of the podcast, but more focused areas that we can do on a regular basis. So...

Today, we're going to be digging into some of the more extensibility and development sides of Copilot. I'm Kevin McDonnell. I'm the Copilot strategy and modern workplace AI leader at Avanade, and I have a new co-host with me today. Garry, would you like to introduce yourself?

Garry Trinder (00:54)
Thanks, Kev. Yeah, hi everyone. My name is Garry Trinder. I'm a developer advocate at Microsoft, focused on extensibility, specifically Microsoft 365 Copilot extensibility. But you can't go far without also working with things in Azure, Copilot Studio, all those kinds of things as well. So yeah, excited to be here. Looking forward to chatting tech things with you, Kev.

Kevin McDonnell (01:19)
Absolutely. So this is the reason: usually at this point Zoe would kind of go quiet and glaze over slightly as I dig into this stuff. These will be some of the shows where we look to go into those things in a bit more depth. And we're going to tag these so you know very clearly. And if dev isn't your thing, no worries, skip this. The good news is Zoe will also have a guest that we'll introduce soon, a fellow co-host who is going to dig a bit more into the responsible AI and

governance side of things. I'm sure some of you are guessing, going, I wonder who that could be. I'm working that out. So we're going to have these, and we're going to tag them, if we can work out a way to have different feeds. So if you just want these, you can get them; if you want everything, you can look through from that. But we thought we'd keep them in the main feeds and go on from there. And why are we doing this? Well, you kind of alluded a little bit to it there, Garry. You said, well, there's this...

There's Foundry, there's Copilot Studio, declarative agents, SharePoint agents, MCP, A2A, all these phrases that we throw at people here and there. And I think it gets people a bit lost. So what we're going to do today is a bit of an intro: not covering all the different areas in depth, but sort of talking about them. And we're going to frame it as a kind of why, what and how, aren't we?

Garry Trinder (02:43)
Yeah, like exactly what you said. You can't go a week without some new thing being released, or some new protocol, and then the rush to add that protocol into different services. And then we have some services that support this, some services that don't, you know. It's trying to navigate that kind of landscape, something that's always changing. But...

Kevin McDonnell (02:55)
Yeah.

Garry Trinder (03:09)
they're just different options. They're just tools in the tool bag, right? And they allow you to achieve a certain outcome, but there are still different ways of doing that. So yeah, hopefully... I mean, I'm always learning as well. There are always things that I don't know, because we've released something in Azure land and it's not quite been on my radar, but it's quite important. You know, we're all going through the same thing. So I'm pretty sure there are people listening going, what about this and what about that? So yeah.

Kevin McDonnell (03:12)
And I think we've had a regular call, haven't we, for a while, where we've dug into these things and chatted about, you know, what clients are saying, what Microsoft are doing and things like that. So I was kind of like, we should just record this. As long as we don't mention some things in there. And that's why we're not going live, to make sure we don't tread over too many boundaries. Then we can record it and make it available for other people as well.

But I think most importantly, as you touched on there, Garry, there are new protocols, there are new things coming out. I think it's really important to understand why. Why is it that you should look at these things? What are the benefits? What are the important bits? Because I think you and I are both people who go, ooh, something new, let's try it out. Let's play with this. Let's see what it is. Which is great, but not everyone should be doing that. There have to be reasons why you're doing it. You're not just bringing new technology in for the sake of it.

Garry Trinder (04:29)
Yeah.

And I think on that, it's like different levels of maturity as well. You know, we've got people who are devs, looking at agents from a kind of process perspective, where that can fit into maybe improving a business workflow that would go across the company. You've got other people who are using AI in their day to day, who might be using prompting, but then it's like,

where's the next shift for them? Okay, I'm doing my prompting, but now I need an agent. And understanding, okay, why do I need an agent? And once you understand that, it's like, okay, what options do I have and what can I do? So how can you mature on that scale as well? Because I personally think AI is becoming very much a personal tool. There's no one size fits all; generally you want it to

be adapted and tailored to your personal circumstance and the way that you interact with different people, like if you're generating content. But then that can scale to your organization as well. Your organization has certain rules and cultures that you might want to include in these agents before you push them out. So it's about understanding that and then thinking, okay, how do we build these experiences that

are relevant to individuals? And exactly. A perfect example is the kind of "write me an email", and the email that comes back is something that's really formal, and you're like, I would never write that email. So you want to add a bit of personalization. The technology is there, but you just want to tweak it and tailor it for what you want it to do. And there are loads of different options that might be out of the box, but it's like, I need something custom:

Kevin McDonnell (05:58)
to you. Yeah.

Garry Trinder (06:24)
what, again, can I use for my scenario?

Kevin McDonnell (06:28)
So

I agree with that, but I'd say it's not just individual. I'd say that kind of evolution into the team phase is something we'll probably cover: the ability to bring agents into meetings where you're working together on things, the joy of Pages where you can take that prompt and then work together to enhance it on there as well. I think if I'm honest, most people aren't ready for that.

And I don't think the technology is quite in the place to be there, but I think there are almost two streams. There's "me" and "us" that need to go through there. And I know Ami Diamond had a thing of kind of "Copilot is for us, agents are for me", which I didn't entirely agree with. But I think there is a section where, exactly as you say, it's me: I want to be able to do stuff, I want things in my style. I love that email example. A big chunk of my session at Commsverse was actually exactly that.

How do you put you into it? And with Copilot memory coming along, there are even more ways to be able to do this. But how does it all fit? Anyway, we're drifting already, which is unsurprising. So I'm going to kick off a bit with that why you've led into. When I first started talking about extensibility, there were two things I got people thinking about, because I think the assumption is that Copilot, and I mean that in the generic sense,

can do everything, because of this focus on "Copilot is the UI for AI". People think they can just go there; they know what their context is, they know what they want to do, and they assume Copilot does as well. So if you say, give me a status report, does that mean your family at home and how they're doing? Does that mean the project you're working on? Does that mean your

status report on how you're doing with your career and things like that? In your head, you're probably thinking about the project, seeing Teams messages going on, go away, no, stop it. But thinking about those kinds of updates, you have that context. Copilot doesn't. Copilot for the web has got everything. M365 Copilot has your emails, your Teams; it's got everything in that sense. So one of the things to look at is scoping in:

being able to define a bit more of that context within there. Now, when I first started talking about this, we were talking about agents within that. Sorry, damn it, wrong way around. We talked about plugins. Now, I think it's even better that you can scope it with agents. And you're starting to look at specific scenarios, focusing on specific tasks, where you can say, when I @-mention this agent, or when it identifies that I should be using this agent,

it will scope down to that limited set of knowledge, that limited set of functionality, to get to what you want. Copilot Notebooks is a great example, where you can put in a set of different files from different places and say, I want to ask questions of this. So I think that's one of the big areas: to say, I have a specific task. And if we're looking at the automation of business processes, I have a business process,

Garry Trinder (09:25)
Yeah.

Kevin McDonnell (09:42)
I want you to focus on this. That's one example there.

Garry Trinder (09:46)
Yeah, specialists, right? It's like designing something that's really good at doing a particular job. And the context, right? You're adding the context that you're maybe thinking about into that agent, so you don't have to think about it. So it's like the status report, right? You might have a project agent, an agent specific to a project.

And then you might say, well, I just want a status report. Well, the agent already knows what project you're talking about, so it can understand that. And it's got all the instructions in the background of how you want it to be formatted and all that kind of stuff. And it just takes that load away. Yeah.

Kevin McDonnell (10:20)
Yeah. And just quickly on that one, you were talking

about the personal one. I genuinely have a status report agent that I've built, and it doesn't have the sources, because I can't connect up to that DevOps environment, sadly, but it has the format. So I tend to splurge into it and just say, we did this, we did that, and it rejigs it into the right format, something to edit. So it's a very basic end user agent, really. But it does exactly what I need it to. It scopes it to that format.

Garry Trinder (10:28)
Yep.

Yeah. And this is the thing, going back to the technology bit. We've talked about so many different things, from end user and really simple to really advanced stuff, and it's trying to figure out, okay, which bit's relevant, when to use the relevant technology. And back to your point, Kevin, about that status report: I have exactly the same thing. I've got markdown files in my OneDrive, which I point a declarative agent at, and then I get it to

basically go through my weekly markdown files, one for every day, like a bit of a journal, and then give me a summary. What did I work on? Cross-reference it: are there things that I should look at? But it's incredibly simple to build. I don't need custom models or anything like that. It's RAG and it's summarization, and it works, and it's a good, easy step into doing that. And as things progress as well, like...

Kevin McDonnell (11:23)
That's nice.

Garry Trinder (11:44)
We've seen things, to give you another option: we've got the standard agents, and we've had the new kind of agents come in, like Researcher, which is then reasoning, right, deep reasoning. And this is the thing: then picking, okay, do I need reasoning here, or do I just need something to generate? Because they're very different things and they're going to give you different outcomes. You know, actually you might want a reasoning model for planning, right?

Kevin McDonnell (11:55)
Yeah.

Garry Trinder (12:14)
It's really good at that. It's going to look at lots of different options. It's going to iterate on things rather than just responding directly to your prompt. It's going to do that extra work. So those kinds of things as well; another thing to think about is whether that's something you need for your scenario.

Kevin McDonnell (12:36)
Yeah, absolutely. So that's the scoping in on the scenario. I think the other side is scoping out. So M365 Copilot especially has M365 data, but you can actually extend that with connectors to bring more knowledge in for it to use. You know, bringing things like Salesforce in there, your CRM systems, your IT tickets. I'm working with a

pharma company at the moment, and their research materials, their research notes and all their knowledge there, their ERP systems, how they track all the different things, even their bespoke databases: they're all bits of data and knowledge that you might want to bring in as well. So extending beyond that becomes a different way in with the knowledge. Plus you've got your APIs, your "tools" as they're now being named, because we love a good renaming,

that you're connecting to, to do things. So you were talking about planning. Yep, planning. You want to get all your different knowledge to come through, but then you want to make a task for people as well. So somehow you connect to that. Yes, that could be Planner, but it could be Jira, could be Azure DevOps, could be all sorts of different things. I was talking to someone yesterday about Zoho and things like that. All these different ways that you could be logging those tasks, and they've got different APIs, and they're scoping outside that

boundary you talk about as well. And then the other one, which is slightly newer, I'll just finish this off and then let you go from there, that I think is really hard to put specific criteria around, is "enhance". With Copilot, you get what Copilot does; especially with the orchestration, you've almost got: do something, fire off, get a result for the user. But you might want to do something, connect to something else, process that in a certain way,

and control a slightly more complex flow within there; you need to take that a little bit further. So you may want to enhance it and build some more logic. I'm working on a project at the moment where, in certain scenarios, we don't want the LLM to kick in. We want to go through a very fixed process. It must do this, it must do that, it must do that as well. You want to make sure, especially if someone's talking about something sensitive,

you don't want the LLM to go, yeah, that's fine, go and do that and chop your leg off, that's lovely, that's exactly what we want you to do. No, no, no. And yes, I know you've got the responsible AI and things like that, but there are certain scenarios where you very much want to keep it on those rails as well. So that's another area where I'd say you see that extensibility too.

Garry Trinder (15:16)
Yeah. And I think that's a key thing: using AI where it's relevant and using other solutions where they're relevant. One thing which I keep hearing about at the moment is OCR. People are like, oh, OCR, just throw it in a model. And it's like, okay, yes, it can do that. But actually, is that what you want to use it for? Because maybe you want consistency. You've got layouts. It's good at certain things. If it's a dynamic thing, then AI is good at that, right? If it's...

Kevin McDonnell (15:35)
Damn it, I'm doing exactly that at the moment.

Garry Trinder (15:46)
consistent documents, right? Then your traditional OCR tools will most likely be better, because they're deterministic. Given this input, here is the output, and it will be the same. Whereas with AI, you're not always going to get that. So again, it can do it, but is it relevant to your scenario? Which bit of the building blocks in your solution does AI need to turn up in, as opposed to trying to do all of it?

And then, you know, you're maybe chasing yourself, because again, not everything's deterministic. You can get a good way towards a guarantee, by changing prompts, even models, but getting a hundred percent every time? That's likely not going to happen. So...
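The trade-off Garry is describing here, a deterministic extractor for fixed layouts and a model call only for the free-form cases, can be sketched as a small router. This is a hedged illustration: the layout names, the "Key: value" format and the `ask_llm` stub are all invented, not any real OCR or Copilot API.

```python
# Hypothetical router: deterministic extraction for known, consistent
# layouts; an LLM call (stubbed here) only for free-form documents.

KNOWN_LAYOUTS = {"invoice_v2", "timesheet"}  # layouts with fixed "Key: value" lines

def extract_fields(text: str) -> dict:
    # Deterministic: the same input always yields the same output.
    pairs = dict(line.split(": ", 1) for line in text.splitlines() if ": " in line)
    return {"date": pairs.get("Date"), "total": pairs.get("Total")}

def ask_llm(prompt: str) -> str:
    # Stand-in for a real model call, which would be non-deterministic.
    return f"[LLM extraction over {len(prompt)} chars]"

def process_document(layout: str, text: str):
    if layout in KNOWN_LAYOUTS:
        return extract_fields(text)  # repeatable, easy to test
    return ask_llm(f"Extract the key fields from:\n{text}")

result = process_document("invoice_v2", "Date: 2025-07-01\nTotal: 120.00")
```

The point of the split is testability: the deterministic branch can be asserted exactly, while the model branch can only ever be evaluated.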

Kevin McDonnell (16:31)
Yeah,

that's it. It's interesting. I know our regular catch-up is "what have you been up to?", and we haven't really had that bit yet. So I'm speaking on the first of July, is that next week or the week after? I think it's the week after, down at the South Coast User Group in the UK. And I'm sure, Garry, you remember our EarthBot, the artist formerly known as the BingBot. I've just remade that as multi-agent with Azure Foundry.

Garry Trinder (16:51)
Yeah, yeah.

Kevin McDonnell (16:57)
More as a kind of POC, but it was good. I always like to try some new technology with a little bit of a purpose. I thought, well, I've been meaning to re-update this, I've started a few different times, this is a great chance to see how easy it is. And it was pretty easy, but it's exactly that: it's not deterministic. And there are times I want my flow to follow exactly that thing and it doesn't. And it's really irritating. It's like I need bits to be non-deterministic. I need it to...

be able to have a chat and a conversation. And there are times I need to go, no, you need to follow these exact steps and not go outside of that. And it doesn't. I think we see a lot of people exploring these ideas and seeing what works and what doesn't. And there will be mistakes, because people want to play with the latest toys. And then people will realize, well, no, it works in this scenario, not in this one. And it's going to be a combination. It's not "you must go deterministic" or "you must go non-deterministic".

Garry Trinder (17:41)
Yeah.

Yeah, you want the best of both worlds. And I think there's an appreciation of that. And there are different solutions. As soon as you said that, what sprang to mind was the recent updates for Logic Apps: Logic Apps being a deterministic workflow, but having actions in there to provide that AI element, those non-deterministic actions, in your workflows. It's those kinds of things again: hooking into the bits

Kevin McDonnell (18:05)
Hmm.

Garry Trinder (18:22)
where AI is needed, and not just trying to get it to do everything. Because if you want a linear path, workflows are still a good option, right? So it's where you might want to go, okay, actually we want to do some reasoning over something, to let the LLM make the decision. That's where you're letting the LLM do its thing,

Kevin McDonnell (18:31)
Absolutely, absolutely.

Garry Trinder (18:49)
to then... but that might then be choosing another deterministic workflow.
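That hand-off Garry describes, one LLM decision inside an otherwise deterministic workflow, can be sketched roughly like this. The `classify_request` stub stands in for the model call; every name and step below is hypothetical, not any Logic Apps API.

```python
# One non-deterministic hop (the classifier) choosing between
# deterministic workflows, rather than the LLM driving every step.

def classify_request(text: str) -> str:
    # Stand-in for an LLM step: decide which workflow applies.
    return "refund" if "refund" in text.lower() else "general"

def refund_workflow() -> list[str]:
    # Fixed rails: no model involved once we're on this path.
    return ["validate order", "check refund policy", "raise refund ticket"]

def general_workflow() -> list[str]:
    return ["log enquiry", "route to support queue"]

def handle(text: str) -> list[str]:
    branch = classify_request(text)  # the only non-deterministic decision
    return refund_workflow() if branch == "refund" else general_workflow()

steps = handle("I'd like a refund for order 1234")
```

Keeping the branches themselves deterministic is what makes them auditable: given the classifier's answer, the steps that follow are always the same.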

Kevin McDonnell (18:55)
I was chatting with one of our automation team, and we decided that when it comes to intelligent automation, the deterministic stuff is when you speak to a client and they tell you what process they want; the non-deterministic stuff is when you realize what they actually do with it. Which just made me chuckle slightly and has helped my framing of it.

So we talked a bit about the why there. Garry, what do we have that can actually do this?

Garry Trinder (19:24)
I mean, where do we start? Because there's a whole ton of different ways. To frame it in the way that I think about it, being a developer advocate in Microsoft 365, so focused more on Copilot: okay, what options do I have in terms of where I want to use Copilot? So Copilot can now be used by everyone, right?

Kevin McDonnell (19:25)
Yeah.

Garry Trinder (19:53)
Everyone with an M365 license can go and use Copilot Chat, and they can use agents in a limited fashion, depending on how they're built. So from an agent-building perspective in M365, you've got two camps. There's the "we've already got the model, we've already got the orchestrator, and I just want to extend that", so I don't need to deploy anything, I can just configure it.

And that would be declarative agents. And on the other side, you've got... well, actually, sorry,

Kevin McDonnell (20:27)
So do

we... did we say what declarative agents are? I wonder if we should touch on that a little bit in case people don't know.

Garry Trinder (20:31)
So yeah, declarative agents

are, well, as it sounds, a declarative way of defining an agent's behaviors: the data sources that it's connected to, its instructions, maybe actions that it can perform, like making a request to an API, those kinds of things that extend what you already have in Copilot. So you're reusing the whole stack, and your little agent bit sits on the top,

right? So from a user perspective, you go to Copilot, you see the specialized agent on the rail, you click on it. That is then your agent experience, and however you've defined it in that declarative manifest, the package, that's how it will behave. You know, back to your point about why, Kev, the first thing is,

are you building something for a specific scenario? Is it going to achieve a particular task? Do you want to preload it with a load of context, so that when you ask for your status report in the agent, it already knows: if it's tied to a project, okay, yes, I know "status report" means this project, I can go and get that relevant information to help end users? Context is key, and we're not always great at

providing that context. It's like, hey, I've got a problem with my computer. Okay, what does that mean? It's that kind of thing, right?
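The declarative manifest Garry mentions is just configuration: instructions, knowledge sources and capabilities layered on top of Copilot's existing model and orchestrator. A rough sketch, serialized from Python, is below. The field names follow the public Microsoft 365 declarative agent manifest schema as commonly documented, but exact names vary by schema version, and the agent name, instructions and URL here are invented.

```python
import json

# Hypothetical declarative-agent-style manifest: nothing to deploy,
# just configuration describing behaviour and scoped knowledge.
manifest = {
    "version": "v1.0",
    "name": "Project Status Agent",
    "description": "Drafts weekly status reports for one specific project.",
    "instructions": (
        "You write status reports for the Contoso Rollout project. "
        "Always use the sections Progress, Risks and Next steps, "
        "and keep the tone informal and concise."
    ),
    "capabilities": [
        {
            # Scope knowledge to one site, so "status report" needs no extra context.
            "name": "OneDriveAndSharePoint",
            "items_by_url": [
                {"url": "https://contoso.sharepoint.com/sites/rollout"}
            ],
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Everything the hosts describe, scoped knowledge, baked-in format, preloaded context, lives in those few fields; the model and orchestrator underneath stay Copilot's.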

Kevin McDonnell (22:04)
But

I think the important thing on the what is: start simple. Don't go into a full-blown solution. Yes, architects, some of you, I'm talking to you. Start with the simplest thing. And that's why I love the SharePoint agents and the agent builder. We were talking about that status report earlier; you can go in with a few lines of a prompt, no code whatsoever, and build a really powerful thing there. And all you need to know is a bit of prompting.

Garry Trinder (22:10)
Yeah, exactly.

Kevin McDonnell (22:33)
So that's available. And then you start to build up and say, right, I need more knowledge, I need to connect to things, I need it to be a little bit more complex from there.

Garry Trinder (22:34)
Yeah, it's...

Yeah, and this is the thing: sometimes you might not need to have it connected to data at all. That's always an option, right? If you're starting from scratch and, let's say, you're using Copilot to help you write news articles for the intranet, as an example, right? After a few of those kind of...

Kevin McDonnell (22:47)
Yeah. Yeah.

Garry Trinder (23:06)
uses of Copilot Chat to generate a news article, you might end up coming up with a bit of a formula. Like, okay, I'm asking the same things. Actually, now I want to create an agent so that I don't have to keep putting those prompts in, or have a huge prompt that I have to keep putting in all the time. It's like, okay, let's just take that prompt and standardize it in an agent. And then it becomes easier, because you might just say, hey, I'm writing an article about this topic, here's some

content for extra context; apply the instructions, the rules around how you want to create this article. And that could just be instructions. It doesn't have to be connected to anything, because you pass the context straight in. So yeah, it's a really easy way of doing that. Yeah. I mean, from a getting-used-to-it perspective,

Kevin McDonnell (23:49)
Yeah, absolutely. I mean, in the status report example, saying "I want it to look like this" would be some of those instructions.

Garry Trinder (24:03)
like we just said, just using instructions is great, because anyone can do that. Anyone can go into the agent builder and build an instructions-based agent, so to speak.

Kevin McDonnell (24:06)
Yeah.

Yeah,

and a top tip for people: use Copilot to help you build those instructions. Have that chat with Copilot, say, I want to do this and this, how would you recommend it? And it gives you a lovely format for it as well.

Garry Trinder (24:19)
Yeah.

Yeah. And I did exactly the same thing, except I wasn't using the UI, I was doing it in VS Code, building a declarative agent. I was thinking, I've got GitHub Copilot in VS Code, so just generate me a good starting point for instructions for an agent that can do this. And it would go along and build it out, and I'd go, okay, great, I'm at a good starting point, and then I can iterate on it as well. So yeah, use AI to build your AI agents.

Kevin McDonnell (24:55)
So what's the next step beyond that then? You've kind of gone, well, this agent's nice, but I want it to do blah. What's next?

Garry Trinder (25:01)


Yeah. I guess this is the bit where, going back to trying things: try it, because for your scenario, what you get out of the box might be good enough, right? And that's fine. Maybe you're generating content, you're summarizing things, doing what I feel are the standard tasks of what an LLM can do. It doesn't matter which one you choose; they're all generally good at doing those kinds of tasks.

Kevin McDonnell (25:32)
As

long as it's Azure OpenAI, right, Garry?

Garry Trinder (25:34)
Yeah. Models vary, but yeah. But that's kind of the thing: you're using a generalist model, or maybe you've decided that you want to pick and choose your model, pick and choose your orchestrator, and you're building outside of Copilot. Maybe you're building things in Azure, maybe on other clouds as well, using different services.

Kevin McDonnell (25:37)
can exist.

Garry Trinder (26:04)
And the question then is: okay, from a Copilot perspective, the UI, how can I extend Copilot with these custom agents, these things that are not running on Copilot's infrastructure? How do we bring those in? So from a user perspective, all they need to know is: I go to Copilot and I see the list of agents. It doesn't matter whether it's declarative, using the infrastructure from Copilot, or custom, using whatever you've decided.

You've got that option, with different SDKs, like the Microsoft 365 Agents SDK, to bring those custom agents into the Copilot user interface. So again, from a user perspective, they get the same experience; just the answers, the responses, are generated in a different way. And it's a really difficult thing to put a finger on, because it's like, what do I use when, or which model is best? And it's like...

Kevin McDonnell (26:56)
What do I use when?

Garry Trinder (27:03)
The one thing which I've found, doing this for what feels like quite a few years now, is that everything you do is so specific to your scenario. Don't just go with "this model has these benchmarks", because those benchmarks might be tests that are not even relevant to what you're trying to achieve. You really have to go through this process of evaluating the responses from different models.

Kevin McDonnell (27:19)
Yeah. And it can be a very tiny thing, a tiny requirement that you add, that can drastically change which model, which orchestrator, which level of complexity, which can sound really odd. I always loved the example of having a whole load of policy documents. We can very easily throw in a load of policy documents. We could put a SharePoint agent on that; that could solve some of it. We could do it with Copilot Studio and add a bit more instructions, a bit more detail, have some

Garry Trinder (27:32)
And different models act in different ways.

Huge.

Kevin McDonnell (28:00)
You know, you could request changes within that and have actions. But how do you get it to know that you are you, you're in this location, you're at this level, these are the things you've done before? Suddenly it's jumped in complexity, and just putting it into a RAG-based system is not going to work, because you need that kind of pulling out of your information and orchestrating towards the right policy. You need to put a load more

things in. It feels like a very small step, but it's actually a big thing behind the scenes. So it's being ready for that.
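Kevin's policy example, where who the user is changes which documents should even be in scope, often comes down to a deterministic pre-filter before any retrieval step. A hedged sketch, with entirely invented policies and user attributes:

```python
# Illustrative only: filter the policy corpus by user attributes
# (region, seniority level) before any RAG-style retrieval runs.

POLICIES = [
    {"id": "leave-uk", "region": "UK", "min_level": 1, "text": "UK leave policy..."},
    {"id": "leave-us", "region": "US", "min_level": 1, "text": "US leave policy..."},
    {"id": "exec-travel", "region": "UK", "min_level": 5, "text": "Exec travel policy..."},
]

def eligible_policies(user: dict) -> list[dict]:
    # Deterministic scoping: retrieval and the LLM only ever see these.
    return [
        p for p in POLICIES
        if p["region"] == user["region"] and user["level"] >= p["min_level"]
    ]

docs = eligible_policies({"region": "UK", "level": 2})
```

The jump in complexity Kevin describes is exactly this layer: the ranking over the eligible documents can stay a RAG problem, but deciding what is eligible has to be rules, not the model.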

Garry Trinder (28:34)
Yeah.

It's kind of like the iceberg problem, right? You look at Copilot and go, okay, I see this bit at the top, I see the chat, I see the responses, and it's like, I'm going to redo all this myself, I'm going to rebuild it. Don't underestimate how much is put into the Copilot infrastructure. One thing which we haven't touched on: responsible AI, right?

Kevin McDonnell (28:38)
Yeah. Yeah.

Yes, good point, good point.

Garry Trinder (28:59)
Responsible AI is something

that Copilot is doing behind the scenes for you. It's doing your compliance. It's doing your security across all your documents. It's handling that for you. When you go down the custom route, those are now things that you need to think about. So if you're hooking up to another data source, you need to think about how the permissions propagate. How do you know that the person who is asking for this information should be able to see it as well?

Those kinds of things: content controls, checking the inputs, checking the outputs. All of those you really need to be thinking about. That's not a reason not to do it. It's just that I see a lot of people rush down there thinking, well, I just need to use another model, and then forget that there's all this other stuff that you really should be thinking about as well. So yeah.

Kevin McDonnell (29:47)
Yeah.

And I think

while that was a facetious comment about "as long as you're using Azure OpenAI with it", I actually think it's quite important, because you get that responsible AI. You can bring your content safety APIs within that. You've got the logging and monitoring. So, one thing you said there about it not mattering which agent you use as an end user. Well, it could do, because they could cost very different things, and you might be using one a lot

Garry Trinder (29:58)
Okay.

true.

Kevin McDonnell (30:23)
when it's not the right agent, and that costs something there. So there are considerations that come into it. The responsibility, the cost, the value it's bringing, annoyingly, all need to be factored into these decisions, even around what model you're going to use. If you're using the latest model, even if there's a good reason for it, typically it's going to cost more. Is the value of what it brings going to be worth that extra cost?

Garry Trinder (30:50)
Yeah. And this is where we almost see there's a middle ground, right? On one side, I've got Copilot the infrastructure and I can build declarative agents on there, but for whatever reason it's not quite there. On the other, the alternative is to go and rebuild the whole stack. Those are the two extremes, and there's a bit in the middle, right? Where... yeah. So.

Kevin McDonnell (30:56)
Hmm.

Yeah.

I can see the money people in the middle going, what? What? What?

Garry Trinder (31:20)
A great example: I speak to customers all the time who have said, okay, we've already gone and built our indexing. We've gone and used Azure AI Search, for example. We've got things in there. We don't want to have to rebuild everything, you know, deploy all the models and do all the orchestration, because if Copilot already has that, how can we leverage it? And I found this from ISVs really interesting, because

that's a thing they don't have to cost in. They can do the indexing of the content and then provide that to a declarative agent, maybe through an API. And that's maybe where you get this balance. The model that Copilot is using is a generic model, it'll be the GPT-class models, right? Good at summarizing, general models. But it might be that actually you need custom indexing.

Maybe you've got documents that are, yeah, exactly, like the Markdown one. I'm chunking it myself, almost, by going, okay, each file is a day. So it's not huge, right? But if you've got big documents, let's say big legal documents, things like that, and it needs to be one big thing, you've maybe got a decision of, well,

Kevin McDonnell (32:19)
your markdown example, you know.

Garry Trinder (32:42)
do we chunk that up? Do we break that document up in M365, or do we apply our own custom indexing, where we can index the content in the way that we want, because again, it's our content, but we reuse the infrastructure from Copilot? We don't have to go and deploy another model. And a way that you can do that is to create a declarative agent with an API plugin that could just call the Azure

AI Search API, issue a search query and return those results back to Copilot to let it reason over. And that's kind of a halfway house, if you like. I think that gives a lot of scope and a lot of control back to organizations that want something a little bit different, or are hitting limits with maybe

Kevin McDonnell (33:33)
and are happy to pay

for it as well.

Garry Trinder (33:35)
Yeah, exactly. But even then, it depends on what you want. You could just have lexical search; you don't have to have vector search. Again, it depends on what your needs are. You don't have to go full, like, yeah.
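As a sketch of that halfway house: the API plugin's backend can be little more than a wrapper that forwards the agent's query to Azure AI Search's search endpoint and trims the results before handing them back for Copilot to reason over. The service name, index name, field names and API version below are placeholders to check against the current Azure AI Search REST documentation.

```python
# Sketch of the request an API plugin could send to Azure AI Search, and how
# it might trim the response before returning it to the agent to reason over.
# Service/index names, field names and the API version are placeholders.

SEARCH_URL = (
    "https://{service}.search.windows.net/indexes/{index}/docs/search"
    "?api-version=2024-07-01"
)

def build_search_request(query: str, top: int = 5) -> dict:
    """Body for a simple lexical (keyword) search; no vectors required."""
    return {"search": query, "top": top, "queryType": "simple"}

def trim_results(response: dict) -> list[dict]:
    """Keep only the fields the agent needs to ground its answer."""
    return [
        {"title": doc.get("title"), "content": doc.get("content")}
        for doc in response.get("value", [])
    ]
```

Because `queryType` is `simple` here, this is lexical-only: no embedding model or vector configuration is needed on the index, which is exactly the trade-off discussed next.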

Kevin McDonnell (33:44)
Absolutely. Absolutely.

Can I pick up on that? I think it's a really important point there. So, lexical search is almost the traditional search as we've seen it before: it indexes the content based on keywords. Vector search builds up relationships between your tokens, between your different chunks. I'm not going to say between different words; it's almost chunks of words. It will give you a much better view of connecting different bits of documents, rather than going,

Garry Trinder (33:56)
Keywords.

Kevin McDonnell (34:18)
you want this keyword, it's this document here, have this. With vector search, it will say, you want this chunk of a document, you want this chunk within there, so you can get more detailed results. And in a lot of people's heads, vector is better. That's not true. It's better for certain scenarios; in other scenarios, lexical will be better. Maybe this is a deep dive, we could do a show around those two things.
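A toy illustration of the difference. The three-dimensional "embeddings" are invented for this example (real models produce vectors with hundreds or thousands of dimensions), but they show how vector similarity can connect "car" and "automobile" even though lexical keyword matching scores them as completely unrelated:

```python
import math

def lexical_score(query: str, doc: str) -> int:
    """Keyword overlap: counts query terms that literally appear in the doc."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors, regardless of shared words."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical embeddings: "car" and "automobile" land close together even
# though they share no keywords, while "banana" lands far away.
emb = {
    "car":        [0.9, 0.1, 0.0],
    "automobile": [0.88, 0.12, 0.05],
    "banana":     [0.0, 0.2, 0.9],
}
```

Here `lexical_score("car insurance", "automobile cover policy")` is 0, while `cosine(emb["car"], emb["automobile"])` is near 1, which is the scenario where vector wins; for exact part numbers or product codes, the keyword match is often the better tool.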

Garry Trinder (34:41)
Yeah. So again, from a cost perspective, definitely, I think so, but

it kind of comes back to that. You're always balancing costs, right? And again, yes, you can go and do vector, but there are a lot more things you need to think about. You need a model in there to generate embeddings. There is generally more cost. But if you can get away with not doing vectors and you can use a more traditional search approach, that's going to be cheaper, and it's going to give you a solution that

hopefully will still satisfy the requirements, the outcomes that you need. Which is an interesting thing, because we've talked about indexing, and you've got Copilot connectors, right? Which is what we mentioned: you've got data on the outside. Yeah, I didn't want to say that. I almost said Graph connectors, but I'm glad you said it. You'd adore the number of times I said that at Build; I was like, no, no. So yeah.

Kevin McDonnell (35:25)
the artist formerly known as Graph connectors, for anyone who's been around a while. I was quite impressed.

Hahaha!

Garry Trinder (35:40)
Copilot connectors, formerly known as Graph connectors, is a way of bringing that external content into M365 and into the semantic index in Copilot. So you get that indexing for free. The trade-off... okay, come in.

Kevin McDonnell (35:53)
Can I call that out, because I get this question a lot? Yes, if you do that external Graph connector, it goes into the semantic index, which is the vector-based one, not the lexical, which is really, really important.

Garry Trinder (36:06)
Yeah. So you're bringing it in, and this is like any other content in M365, right? It's the same way that a document would be indexed in SharePoint, that kind of thing. It's the default; it's what we've chosen as the platform, the way that we index it, but it's a free way of getting an index. The trade-off is you don't know how that thing's being indexed, because you're not in control of it. And again, it's one of those things of, if you're bringing in the content, then you know,

Kevin McDonnell (36:12)
Yeah, absolutely.

Sorry to keep jumping in. Can I counter that slightly? You don't know how it's being chunked, but you can manage the index. So you've obviously got these out-of-the-box indexes. There are things like Salesforce, ServiceNow, I think there's a SQL one, et cetera, and you can basically say, here's my Salesforce instance, go and index this, and it will go and put it through. But you could build your own with a Graph connector

Garry Trinder (36:36)
That's true, yeah.

Yeah.

Kevin McDonnell (37:03)
and tell it what you want to index. You can put permissions on, you can pull the things from there. How it chunks that up and uses it, though, you haven't got control over that bit.
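For illustration, this is roughly the shape of the item a custom connector pushes through the Microsoft Graph external items API; the group ID and values are hypothetical, and in practice you would PUT this payload to the connection's items endpoint:

```python
# Sketch of an externalItem payload for a custom Copilot (Graph) connector.
# The group ID and values are hypothetical; the shape follows the Microsoft
# Graph external items API. You control what content and permissions go in;
# how the platform chunks and indexes it is not under your control.

def build_external_item(title: str, body: str, allowed_group: str) -> dict:
    return {
        "acl": [
            {
                # Only members of this group will see the item in results.
                "type": "group",
                "value": allowed_group,
                "accessType": "grant",
            }
        ],
        "properties": {"title": title},
        "content": {
            "type": "text",
            "value": body,
        },
    }
```

The `acl` entries are how source permissions travel with the content, which is the "put permissions on" point above.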

Garry Trinder (37:15)
Yes, that's true, to give that caveat. But I think it's a good thing of, okay, I've got external data, I want to bring that in, and I don't always have to go to a specific search service like Azure AI Search. And again, try it; there's no cost to it. We basically removed the limits at Build.

Kevin McDonnell (37:31)
Absolutely.

Ooh, have you?

Garry Trinder (37:37)
Yeah, so now you can have as many connections as you like. The content limit has increased from 4 meg to 30. There's a lot more. Yeah, that's gone. Yeah.

Kevin McDonnell (37:41)
Really? I missed that.

Because it used to be 50 million indexed items as a limit. Has that gone then as well?

I'll be back.

Garry Trinder (37:55)
So yeah, a lot of the limits have been removed, so they're a lot more accessible than they have been in the past. The first thing I did in a session at Build was ask, okay, who knows about Graph connectors? And a lot of people's hands went up, which was like, okay, because Graph connectors have been around for a while. I remember us talking about them from a search perspective. Yeah.

Kevin McDonnell (38:14)
That makes me feel happy. I think when we were at CPS together, I was talking about Graph connectors. Finally, they're getting their moment.

Garry Trinder (38:20)
But this is the thing, and this is just a general comment: give yourself time to refresh and go over a technology maybe every six months, just to see where it's progressed. Because Graph connectors, Copilot connectors, are one thing where there might have been limits, like, okay, we hit limits on items, on the number of connections that we can create in a tenant.

Kevin McDonnell (38:31)
Yeah.

Garry Trinder (38:47)
And now they're a lot more viable, but it's the same thing; it's just changed slightly. The service has changed, and it's the same with declarative agents. I see a lot of people who maybe tried a declarative agent when Copilot was first out, and the experience wasn't as good as it is today. Today they are miles, miles better, a lot more consistent, but

Kevin McDonnell (39:13)
easier to manage. You know, you've got the, I've got to get this right, M365 Agents Toolkit to help deploy them, which I think really helps, and things like TypeSpec to help define some of the APIs, which is really good.

Garry Trinder (39:23)
Yeah.

You've got a lot more tooling. You've had over a year's worth of metrics and refinement to improve just the general behaviors of these agents, so you benefit off the back of that, but then there have been improvements to the capabilities that agents have as well. So.

You know, we added things like code interpreter, the image generator, extra data sources. So we had SharePoint, OneDrive, web searches, scoped web searches. We've now got email, people, Teams chats, and this is being updated on a regular basis. So, on the pro-code side, I'm looking at the manifest, and the manifest version tells me what capabilities this agent has. And we've had a few releases in the last few months,

Kevin McDonnell (39:57)
web sources. Yeah.

Mm.

Garry Trinder (40:17)
and new capabilities coming in there all the time. So always keep an eye on those improvements that are happening, because you might have discounted a declarative agent because it just didn't support a particular data source, whereas now that might be there. So keep those in mind. Yeah, definitely.
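For reference, those capabilities surface in the declarative agent manifest roughly like this, shown as a Python dict for readability. The version string and capability names are a best-effort approximation of the schema and change between releases, so check them against the current documentation:

```python
# Approximate shape of a declarative agent manifest, with capabilities that
# gate which data sources and tools the agent can use. Names reflect a
# recollection of the schema; verify against the latest documentation.

agent_manifest = {
    "version": "v1.2",  # newer schema versions unlock newer capabilities
    "name": "Policy Helper",
    "description": "Answers questions about internal policies.",
    "instructions": "You help employees find and understand policy documents.",
    "capabilities": [
        {"name": "WebSearch"},
        {"name": "OneDriveAndSharePoint"},
        {"name": "GraphConnectors"},
        {"name": "CodeInterpreter"},
    ],
}

def has_capability(manifest: dict, capability: str) -> bool:
    """Check whether the agent declares a given capability."""
    return any(c["name"] == capability for c in manifest["capabilities"])
```

This is exactly why "check the manifest version" matters: a capability missing from one schema version may well exist in a newer one.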

Kevin McDonnell (40:36)
I think there's another show idea there, to deep dive into those, isn't there?

Now, we've kind of left the last section, with not much time, for the how. You know, we've thrown out a load of technologies, a load of things for people to think about. How on earth do they know where to go? And there's almost a simple answer: Microsoft Learn. Let's go and check out the things on there.

Garry Trinder (40:59)
That

is definitely, yeah. So, one of the things that we've been building in our team, or two of the things actually: we've got one on Learn and one outside of Learn. If you're looking for content on Learn, something that we built was an Extend Microsoft 365 Copilot curriculum, which is more on the pro-code side. It's all Visual Studio Code based, and we go into declarative agents, API plugins,

authentication with API plugins, all that kind of stuff. It's the standard Learn format that you've probably known and used, so it's all in there, with even some challenges as well. We can put that link in the show notes. We've also got Copilot Developer Camp as well, which is more lab-focused. But in there,

Kevin McDonnell (41:46)
That's a thing.

Garry Trinder (41:50)
we've literally just been releasing new labs. We've got a custom engine agents lab that uses the Agents SDK, Azure AI Foundry and Semantic Kernel. That was from one of the sessions we did at Build, and that's been published in there as well. But you've got declarative agents, Copilot Studio, SharePoint agents. It's more generally across the board in M365. So there's content in all different places

that you can get at, and that's all coming from our team as well. And the community calls are always a great place to go, too.

Kevin McDonnell (42:26)
I was about to say, listening to what other people are doing, listening to their challenges, how they started with something simple and why they moved up from there, which is important. Looking at some of the examples that people have got. And again, not looking at the most exciting examples and going, let's replicate that, and building our own 16 or 18 month project to get this declarative agent while Garry sits there and uses the Agents Toolkit. But, by the way, have we got an acronym for the Agents Toolkit now? Is it ATK?

Garry Trinder (42:54)
ATK.

Yeah, it's ATK, Agents Toolkit. We just dropped the 365, but yeah.

Kevin McDonnell (42:56)
I did like TTK, but...

Right, got that bit on there. So, yeah, I think get stuck into those and get hands-on with these as well. Maybe another show idea: I know you've been doing some things with Dev Proxy around being able to mock some of the interfaces, using things like Phi to try things out locally. So even if you don't have that Copilot license, finding ways that you can build some of these

Garry Trinder (43:09)
Yep.

Kevin McDonnell (43:29)
things as well, it's really worth looking into. I think that is one thing I've been really happy about: Microsoft's getting better, not perfect, but getting better, at that developer model and actually working with that.

Garry Trinder (43:42)
Yeah, I think one thing to add, another place to learn, is definitely samples. So we've got a pro-code developer samples repo. I'm currently doing a lot of work in there, updating things as well, and there's going to be a lot more samples coming. So I think it's a good place to just go and have a look, see what people are doing. Even looking at things like the prompt library, which is just a series of prompts. They might be

Kevin McDonnell (43:48)
Yeah.

Garry Trinder (44:10)
the basis of instructions for an agent, those kinds of things. And I think it's about flexing that muscle as well. If you're at the stage of, I'm not really sure why I would need an agent, well, maybe you're repeating things. Try the agent model and standardize, you know, your work.

Kevin McDonnell (44:11)
completely.

Definitely. Well, I unfortunately have to wrap this up. I have to stop recording, but hopefully everyone enjoyed that as a kind of intro. My headphones just cut out, so I don't know exactly what you said there, so apologies if you said this already. We'd love to hear from people. What would you like to hear about? Are there areas you would like to deep dive on? Are there some cool things you've done that you'd like to share with us as well? Let us know. Reach out to either of us individually or to the Copilot Connection,

send us a message from there and we'll definitely try and get you on the show or get some of those things to dig into. So we'd love to hear from you.

But otherwise, we'll wrap up there. Thank you very much. Don't forget to send it to your friends, subscribe, let your family know. I'll admit most of your family probably aren't quite so into some of the dev aspects in here, but certainly the other side of the show. And watch out, listen out: we'll have Zoe on with her new co-host as well, coming to talk about some of this responsible AI. And Zoe and I will still be coming back with our own shows as well. So look forward to seeing you all soon. Thanks very much, Garry.

Garry Trinder (45:13)
Cheers.

Thanks very much.

Kevin McDonnell (45:40)
Cheers, bye bye.