Inspiration
A lot of times I'll be talking to an LLM trying to get help, but it gives me that help differently from how my teacher would. Then I end up confused during the exam, because the teacher phrases things a certain way. I needed a way for my LLMs to talk to me as if they had all of the course material in front of them, taught it the way my teacher teaches it, and knew what questions I'll actually see.
What it does
It is an MCP server that, because MCP is a standard protocol, lets any LLM pull all of your files from Canvas and put them into its context.
How we built it
The problem is that Canvas doesn't actually give you an API key to use the API with, because USF doesn't like you tinkering with it, so authentication was a mess and took five hours. To solve that, I had to make a Chrome extension that runs a script on an open Canvas tab and grabs the cookies, authentication, and session ID from it, so those can be used to make the Canvas API calls. But I need to trigger all of that from an MCP.
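The idea of riding on the browser's session instead of an API key can be sketched like this. This is a minimal illustration, not the actual extension code: the cookie and CSRF values are hypothetical placeholders for whatever the extension scrapes from the open Canvas tab, and the GraphQL endpoint path is Canvas's standard `/api/graphql`.

```python
import json
import urllib.request

# Hypothetical values: in the real flow the extension scrapes these
# from an open, already-logged-in Canvas tab.
session_cookie = "canvas_session=abc123"
csrf_token = "token456"

def build_canvas_request(base_url: str, query: str) -> urllib.request.Request:
    """Build an authenticated Canvas GraphQL request using the scraped
    session cookie and CSRF token in place of a real API key."""
    body = json.dumps({"query": query}).encode()
    req = urllib.request.Request(f"{base_url}/api/graphql",
                                 data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Cookie", session_cookie)       # borrowed browser session
    req.add_header("X-CSRF-Token", csrf_token)     # Canvas checks this on POSTs
    return req

# Build (but don't send) a query for the user's courses.
req = build_canvas_request("https://usfca.instructure.com",
                           "{ allCourses { _id name } }")
```

The point is that, to Canvas, this request is indistinguishable from the logged-in browser making it, which is why no API key is needed.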
And the MCP can't talk directly to the extension, because Chrome doesn't like extensions talking to other things on your computer. So I had to make a script that deliberately launches Chrome with remote debugging and open ports, which lets me run a server (locally, on my computer) that the extension's CORS rules already allow, so my server can talk to the extension at all times. The chain goes: the MCP tells my Python server, "Hey, go make this query." The Python server tells the extension, "Hey, go make this query." The extension makes the query against Canvas, forwards the result back to my server, which forwards it back to the MCP, which finally hands the files to the LLM.
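That forwarding chain is really just a query queue sitting between the MCP and the extension. Here is a minimal in-memory sketch of it; the class and method names are hypothetical, and a real version would wrap this state in a local HTTP server that the extension is allowed (via CORS) to poll.

```python
import queue
import uuid

class QueryRelay:
    """Sketch of the relay: the MCP submits a GraphQL query, the
    extension polls for it, runs it inside the Canvas tab, and posts
    the result back for the MCP to pick up."""

    def __init__(self):
        self.pending = queue.Queue()   # queries waiting for the extension
        self.results = {}              # query id -> response from Canvas

    def submit(self, graphql_query: str) -> str:
        """MCP side: enqueue a query and get a ticket to wait on."""
        qid = str(uuid.uuid4())
        self.pending.put((qid, graphql_query))
        return qid

    def poll(self):
        """Extension side: grab the next query to run in the tab."""
        return self.pending.get_nowait()

    def complete(self, qid: str, result: dict):
        """Extension side: hand Canvas's response back to the relay."""
        self.results[qid] = result

# One round trip through the chain:
relay = QueryRelay()
qid = relay.submit("{ allCourses { name } }")                # MCP -> server
polled_id, q = relay.poll()                                  # server -> extension
relay.complete(polled_id, {"data": {"allCourses": [{"name": "CS 101"}]}})
answer = relay.results[qid]                                  # server -> MCP
```

The double hop looks silly, but it exists purely because only the extension is allowed to speak to Canvas with the browser's session, and only the local server is allowed to speak to both sides.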
Challenges we ran into
I accidentally covered most of this in the previous section, but it was essentially five hours of figuring out how to forward and open ports and authenticate so I could actually make those API calls.
Accomplishments that we're proud of
I had never fully built an MCP server before. I had also never worked with GraphQL, which is the API format Canvas uses. And I had never made a Chrome extension, and barely do any JavaScript programming, which is what Chrome extensions are written in. So I'm really proud of getting all of that down, and of working around the fact that, even though USF doesn't want you making API calls against Canvas, I was still able to find a little network hack around it.
What we learned
I learned a lot about how extensions work and what they can and can't do, about CORS, about networks and port forwarding, and about how MCPs really work.
And in general, pretty much everything I did was completely new; I had never done any of this before. I wish I had gotten to the part where I could plug in a vector database to retrieve accurate files, and I would have learned more there, but five hours of debugging authorization and cookies doesn't leave you time for that.
What's next for TalkToCanvas
What's next is for it to actually work end to end. The MCP works right now, and the queries work right now, but the MCP doesn't yet connect and make the queries itself to download all the files. I need about one more hour to get that connected, because, again, five hours were spent just getting a single query running on my local computer thanks to all the Canvas authentication issues.
But once that works, the next plan was for it to become A2A: agent-to-agent is really the ideal solution. An MCP can fetch all the files, but "all the files" isn't good for the LLM you're talking to. The LLM wants only the files that are relevant, and you could do that with a vector database. The problem is that a vector database won't take into account which lectures relate to which assignments in a chronological sense: it might retrieve something topically accurate but from a future lecture you haven't had yet, so it isn't relevant to your assignment. Not because it's about a different topic, but because it's a lecture you haven't covered yet and so won't be considered in your assignment.
There are lots of cases like this that a simple RAG solution can't handle. You need an agent that walks through the files and looks at what they talk about, in conjunction with RAG and a vector database, to properly pick out the files (and the parts of files) that are actually relevant, and then return them to the LLM you're talking to.
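The chronology problem above can be made concrete with a small sketch. Assuming each retrieved chunk carries its lecture date and a similarity score (both hypothetical fields here), a first-pass filter would drop anything the student hasn't reached yet before ranking by similarity:

```python
from datetime import date

# Hypothetical retrieved chunks: (lecture_date, similarity_score, text).
retrieved = [
    (date(2025, 9, 3),  0.91, "Lecture 3: hash tables"),
    (date(2025, 11, 20), 0.95, "Lecture 12: B-trees"),  # most similar, but not covered yet
    (date(2025, 9, 10), 0.88, "Lecture 4: collision resolution"),
]

def relevant_chunks(chunks, assignment_date, k=2):
    """Drop lectures dated after the assignment (the student hasn't
    seen them), then keep the top-k most similar of what remains."""
    covered = [c for c in chunks if c[0] <= assignment_date]
    return sorted(covered, key=lambda c: c[1], reverse=True)[:k]

hits = relevant_chunks(retrieved, assignment_date=date(2025, 9, 15))
```

Note that pure vector similarity would have ranked the B-trees lecture first; the date filter is exactly the kind of judgment a plain RAG pipeline misses and an agent (or extra metadata like this) has to supply.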
The reason I'd love for it to be set up this way, rather than standing up my own LLM, is that a lot of people already have their LLMs set up the way they enjoy. When I talk to my LLM, my ChatGPT has memory: it explains things using analogies and projects I've already done, it knows how I like things done, and it knows I want depth, the whys and the hows. If I built my own LLM instead of this MCP or A2A, users would have to give all that up or set it back up from scratch, which sometimes isn't possible, and even when it is, it takes a long time and isn't good for the user.
Built With
- chromeextension
- docker(attempted)
- fastmcp
- fastmcpcloud
- javascript
- python