AI Pulse is UC Berkeley D-Lab's bi-weekly online workshop series on AI tools for research and academia. Each 50-minute session features a live demo (~30 min) followed by open discussion (~20 min).
No prior experience with AI tools required! Check out D-Lab's Workshop Catalog to browse all workshops.
We wrap up this semester's AI Pulse with a tour of 2026's hot new topics:
- Agentic AI: how AI can now operate your computer for tasks beyond coding and chatting, and the risks that come with it.
- Plugins, skills, and MCP: how to build and use public workflows for specific use cases.
- Autonomous AI scientists: what Sakana's AI Scientist, Karpathy's AutoResearch, and HKU's AI-Researcher actually achieved, and what it means for research.
Whether you've attended every session or this is your first, this workshop is designed to leave you with a clear picture of where AI tools stand today and what's worth paying attention to next.
This session covered the full lifecycle of running your own AI: downloading a model to your laptop, fine-tuning it on a server, and deploying it in the cloud through Hugging Face. We covered what a large language model actually is and how it works under the hood, installing and running a model locally with Ollama (no internet, subscription, or API key required), context windows and quantization, and fine-tuning with LoRA on the free National Research Platform GPU cluster. As a running example, we built a personalized writing partner that knows a physics subfield's notation, style, and structure.
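The core idea behind LoRA fine-tuning can be sketched in a few lines: rather than updating a full weight matrix W, you train two small low-rank matrices A and B and apply W + (alpha/r)·BA. A minimal NumPy sketch, not the session's actual code; the dimensions and variable names are illustrative:

```python
import numpy as np

def lora_update(W, A, B, alpha):
    """Apply a LoRA update: W_eff = W + (alpha / r) * B @ A."""
    r = A.shape[0]  # rank of the adaptation
    return W + (alpha / r) * (B @ A)

d_out, d_in, r = 512, 512, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, rank r
B = np.zeros((d_out, r))               # trainable, initialized to zero

# At initialization B = 0, so the adapted weight equals the original.
W_eff = lora_update(W, A, B, alpha=16)
assert np.allclose(W_eff, W)

# Trainable parameters: 2 * d * r instead of d * d.
full_params = d_out * d_in
lora_params = A.size + B.size
print(f"full fine-tune: {full_params:,} params; LoRA r={r}: {lora_params:,} params")
```

The payoff is the last line: at rank 8, the adapter trains about 3% as many parameters as a full fine-tune of the same matrix, which is why LoRA fits on a single free GPU.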
This session walked through six real case studies of researchers who successfully used AI in their scientific work, across astronomy, biology, social science, theoretical physics, chemistry, and mathematics. For each case we covered the complete workflow, how they customized the AI interaction, and where things went wrong.
Tools: NotebookLM | Materials
How can AI help you learn, teach, and work with others? This session explored NotebookLM, Google's source-grounded AI that generates podcasts, flashcards, study guides, and more from your own documents. Answers come from your sources, not the internet. We walked through 9 live demos covering exam prep, literature synthesis, course material creation, policy navigation, and collaborative research workflows.
We also discussed the rise of AI homework agents, AI humanizers, and what they mean for how we think about assessments.
Tools: Gemini, NotebookLM | Materials
AI tools have transformed quantitative research workflows — but qualitative researchers have been largely left out of the conversation. This session explored how LLMs fit into qualitative work through five live demos: grounded document analysis with NotebookLM, dialogical qualitative coding, multimodal analysis of photos and video, structured text extraction from open-ended responses, and piloting research designs with simulated participants.
We also discussed the unique risks LLMs pose for interpretive work and where human judgment remains essential in the analysis loop.
Tools: NotebookLM, Khanmigo, Microsoft Study, SciSpace | Materials
Tools: Perplexity, Consensus, Elicit, Kosmos | Materials
General-purpose AI has a citation problem: studies show ChatGPT fabricates roughly 1 in 5 academic references. This session walked through specialized research tools designed to solve this: Perplexity for quick context with verified sources, Consensus for evidence synthesis across peer-reviewed literature, and Elicit for systematic reviews and data extraction.
We also took a first look at Kosmos, an autonomous research agent that reads ~1,500 papers and writes ~42,000 lines of code over 12 hours to produce a research report. We discussed when to trust (and not trust) any of these tools.
Tools: Claude Code, Gemini CLI | Materials
Our inaugural session introduced AI-powered coding assistants that work directly in the terminal. We demoed Claude Code and Gemini CLI on real research tasks: generating and documenting code, navigating unfamiliar codebases, consolidating messy datasets, and running linear regressions, all through natural language conversation.
The session showed how these tools can save researchers hours on routine programming tasks, even if you're not a software developer.
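To give a flavor of the "running linear regressions" task above, this is roughly the kind of short script a coding agent might produce when asked in plain language; the data and variable names here are invented for illustration:

```python
import numpy as np

# Hypothetical data: predict exam score from hours studied.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
score = np.array([52.0, 55.0, 61.0, 64.0, 68.0])

# Ordinary least squares with an intercept column in the design matrix.
X = np.column_stack([np.ones_like(hours), hours])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
intercept, slope = coef

print(f"score ~ {intercept:.2f} + {slope:.2f} * hours")
```

The point of the demo was not that this code is hard to write, but that the agent can also locate the right columns in a messy dataset, run the script, and explain the output, all from a single conversational request.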
Future Sessions: AI for Data Analysis, Productivity & Workflow, Customizing Your AI (Tentative)
D-Lab works with Berkeley faculty, research staff, and students to advance data-intensive social science and humanities research. Our goal is to provide practical training, staff support, resources, and space so you can apply AI tools to your own research.
Visit the D-Lab homepage to learn more about us.
- Bruno Cittolin Smaniotto
- Tom van Nuenen