Motivation
GenAI agents are playing an ever-larger role in digital workflows. However, web interfaces are built largely for human cognition, not AI comprehension, creating a gap in how agents interact with them: they click the wrong buttons, misunderstand layouts, and get lost in frontend code. In response, we built Aartisan (AI Agent Toolkit for React), a framework that builds on React to enrich frontend code with AI-readable semantics. In this way, we empower developers and website managers to leverage GenAI tools to automate the web development process.
So what is Aartisan⁉️
Aartisan is a web framework built on top of React and Vite that enriches components with semantic metadata to make web applications more comprehensible to AI agents.
Key Features
🚀 CLI for AI-Optimized React Applications: Quickly scaffold new projects or enhance existing ones with AI optimization built-in
🧩 Component Enhancement System: Multiple approaches to add semantic metadata to your components without disrupting functionality
🔌 Vite Plugin Integration: Build-time optimization and code transformation for seamless integration
🤖 LLM Integration: AI-powered component analysis and enhancement using Gemini and Cohere models
🧠 Semantic Context: Components provide rich context about their structure and purpose to AI assistants
How we built it
We developed Aartisan using a combination of modern JavaScript tools and AI technologies:
Core React Integration: Built a lightweight enhancement system using React hooks, HOCs, and context providers to add semantic metadata without disrupting existing component functionality.
Build System Plugins: Developed a Vite plugin that automatically transforms React components during the build process, adding semantic metadata where appropriate.
CLI Framework: Created a comprehensive command-line interface using Commander.js, with custom commands for creating, porting, analyzing, and annotating React applications.
AI Integration: Implemented connectors for both Gemini and Cohere LLMs to analyze components and suggest semantic enhancements based on code structure and purpose.
Experimental Framework: Developed a standardized web agent testing framework in Python to quantitatively measure the impact of our semantic enhancements on AI agent performance.
Babel Transformation: Used Babel's parser and traversal APIs to analyze and modify React component ASTs, ensuring precise code transformations during the porting process.
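To make the enhancement idea concrete, here is a minimal sketch of how semantic metadata can ride along as `data-*` attributes that spread onto any element without disrupting its behavior. The function name and attribute scheme are illustrative assumptions, not Aartisan's actual API:

```javascript
// Hypothetical sketch: turn semantic metadata into data-* attributes that can
// be spread onto any React element without changing its behavior.
// (`semanticProps` and the `data-aartisan-*` naming are illustrative.)
function semanticProps(metadata) {
  const props = {};
  for (const [key, value] of Object.entries(metadata)) {
    // e.g. data-aartisan-purpose="add-to-cart"
    props[`data-aartisan-${key}`] = String(value);
  }
  return props;
}

// Usage: <button {...semanticProps({ purpose: 'add-to-cart' })}>Buy</button>
const props = semanticProps({ purpose: 'add-to-cart', interaction: 'click' });
console.log(props);
// { 'data-aartisan-purpose': 'add-to-cart', 'data-aartisan-interaction': 'click' }
```

Because the metadata lives in attributes rather than component logic, an AI agent inspecting the DOM can read intent directly while human users see no difference.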
Command Line Tool Functionalities
create Command
The create command allows developers to scaffold new AI-optimized React applications from scratch. When invoked, it:
Generates Project Structure: Creates a complete React project with all necessary dependencies, configurations, and boilerplate code.
Integrates AI-Ready Components: Sets up the project with pre-configured AI-readable components, including the AartisanProvider at the application root.
Configures Build Tools: Automatically sets up Vite with the Aartisan plugin, enabling runtime optimization of components.
Template Selection: Offers multiple templates (minimal, e-commerce) for different types of applications, each with AI-optimized components for common use cases.
The templates are designed based on insights from both Gemini and Cohere models' interaction patterns, ensuring the components are structured in ways that make sense to AI agents from the start.
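A project scaffolded by `create` might end up with a Vite config along these lines. This is a hypothetical sketch: the plugin's import path and options here are illustrative, not the package's confirmed API:

```javascript
// vite.config.js — illustrative only; actual plugin name and options may differ.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import aartisan from 'aartisan/vite-plugin'; // hypothetical import path

export default defineConfig({
  plugins: [
    react(),
    // Hypothetical option mirroring the enhancement levels described below.
    aartisan({ level: 'standard' }),
  ],
});
```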
port Command
The port command is our most sophisticated feature, transforming existing React applications into AI-optimized versions. This command:
Codebase Analysis: Scans all React components in a project, identifying component types, props, state, and rendering patterns using advanced AST parsing.
Semantic Enhancement: Adds AI-readable semantic metadata to components based on their purpose, structure, and behavior.
Provider Integration: Automatically integrates the AartisanProvider into the application root to establish semantic context.
Build System Updates: Modifies Vite/Webpack configurations to include Aartisan plugins.
AI-Powered Analysis: When provided with an API key, uses either Gemini or Cohere to analyze components and generate more accurate semantic metadata.
The Gemini integration excels at understanding component hierarchies and interaction patterns, helping identify logical groupings and parent-child relationships. Meanwhile, the Cohere integration provides deeper semantic understanding of component purposes, generating high-quality metadata that precisely describes what each component does and how it should be used.
A key innovation in our port command is the use of Cohere's Rerank API for intelligent context gathering. When analyzing a component, we:
- Collect all potentially relevant files from the codebase
- Generate a query describing the component's purpose
- Use Cohere's rerank-english-v3.0 model to identify the most contextually relevant files
- Include this context when generating semantic metadata with either Cohere or Gemini
This approach dramatically improves the quality of generated metadata by giving the LLM a more complete understanding of how components are used throughout the application.
The port command offers three enhancement levels (basic, standard, advanced) to accommodate different degrees of optimization, from simple attribute tagging to complete component reconstruction with the defineComponent approach.
annotate Command
The annotate command enables AI-powered enhancement of specific components using a directive-based system:
Targeted Enhancement: Developers add `// @aartisan:analyze` comments to the components they want to enhance.
LLM-Powered Analysis: Uses either Gemini or Cohere to analyze the component's code, purpose, and context.
Context-Aware Enhancement: Examines related files to understand the component's role in the broader application.
Code Transformation: Applies the most appropriate enhancement strategy based on the LLM's analysis.
Intelligent Metadata Generation: Creates rich semantic metadata describing the component's purpose, interactions, and accessibility characteristics.
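The directive scan at the start of this flow can be sketched in a few lines; the helper name and regex are illustrative assumptions about how marked files might be located before being sent to the LLM:

```javascript
// Simplified sketch of the directive scan: find files marked with an
// `// @aartisan:analyze` comment so only those are sent to the LLM.
function findAnnotatedFiles(sources) {
  const directive = /\/\/\s*@aartisan:analyze/;
  return Object.entries(sources)
    .filter(([, code]) => directive.test(code))
    .map(([path]) => path);
}

const sources = {
  'src/ProductCard.jsx': '// @aartisan:analyze\nexport function ProductCard() {}',
  'src/App.jsx': 'export default function App() {}',
};
console.log(findAnnotatedFiles(sources)); // ['src/ProductCard.jsx']
```

Gating enhancement on an explicit comment keeps the developer in control: nothing is rewritten unless it was deliberately opted in.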
Similar to the port command, our annotate command leverages Cohere's Rerank API to gather relevant context from the codebase. This intelligent context gathering process works by:
- Creating a semantic query about the component being analyzed
- Using this query to search through the entire codebase
- Applying Cohere's reranking to identify the most semantically similar files
- Building a comprehensive context that includes these related files
This context-rich approach allows the LLM (whether Gemini or Cohere) to generate more accurate, nuanced semantic metadata that truly captures the component's role and functionality within the application.
The Gemini API provides exceptional capability for understanding component implementation details and technical patterns, while Cohere's semantic understanding excels at generating human-readable descriptions and identifying user intent patterns. Our system dynamically selects between these APIs based on the enhancement needs of each component.
For both port and annotate commands, we've implemented a smart integration that can use Cohere for reranking when gathering codebase context, even when using Gemini for the main analysis. This hybrid approach leverages the strengths of both models to produce superior results.
Challenges we overcame
We encountered (and overcame) challenges in the following areas to build Aartisan:
- CLI Development & Debugging: Ensuring a seamless developer experience required extensive testing and debugging. From dependency conflicts to unexpected runtime errors, we spent hours refining the CLI to be both robust and intuitive.
- Aartisan npm Package: Packaging Aartisan as an npm module introduced issues with module resolution, build processes, and compatibility across environments. We tested installation on multiple devices and ran every CLI command end-to-end to ensure smooth installation and usage.
- Prompt Engineering for Experiments: AI agent behavior can be unpredictable, so designing effective test cases required carefully structured prompts. We iterated extensively to ensure AI interactions were both meaningful and measurable.
- Interpreting Experiment Results: Initially unsure which performance metrics to evaluate our AI agents on, we refined our evaluation criteria and ran multiple experiments to isolate measures indicating meaningful improvements in execution time, API efficiency, and error reduction.
- Exploring Creative AI Use Cases: AI agents offer new ways to interact with the web and optimize code, but we went through extensive brainstorming to identify practical, high-impact applications of Gemini and Cohere models (described in the "How we built it" section above).
Accomplishments that we're proud of
Using our port tool, we automated the conversion of two real-world web apps—a concert ticketing site and a blog—into AI-friendly website templates. This demonstrates our tool's ability to migrate existing React codebases with minimal manual effort, saving developers time.
To quantify Aartisan's impact, we ran experiments evaluating the performance of Gemini-2.0-Flash and Cohere command-a-03-2025 models on Aartisan-optimized sites. We observed massive efficiency gains:
Performance Breakthroughs with Aartisan
- 32.3% faster task completion
- Steps per task reduced from 19.6 → 6.0
- API calls dropped from 20.4 → 7.0
- Token usage slashed from 1.7M → 51.8K
- Error rate more than halved (10.0 → 4.4 errors per task)
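The relative reductions implied by these before/after numbers can be checked directly with a little arithmetic:

```javascript
// Sanity-check the relative improvements implied by the reported numbers.
const pctDrop = (before, after) => ((before - after) / before) * 100;

console.log(pctDrop(19.6, 6.0).toFixed(1));       // steps per task: ~69.4% fewer
console.log(pctDrop(20.4, 7.0).toFixed(1));       // API calls: ~65.7% fewer
console.log(pctDrop(1_700_000, 51_800).toFixed(1)); // tokens: ~97.0% fewer
console.log(pctDrop(10.0, 4.4).toFixed(1));       // errors: ~56.0% fewer
```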
The ASCII graphic that shows up when Aartisan is run is also super dope.
What we learned
Building Aartisan revealed several key insights about both the AI and web development landscapes:
AI Perception Gap: We discovered that AI agents struggle with the same web interfaces humans find intuitive. What seems obvious to us (navigation menus, buttons, forms) lacks semantic clarity for AI systems without proper metadata.
Metadata Impact: We were surprised by just how dramatically a small amount of semantic metadata could improve AI performance. Simply adding purpose and interaction descriptions reduced error rates by 50%.
Framework Integration: We learned that enhancing existing frameworks (React) is more effective than building entirely new ones. Developers can adopt AI-friendly practices without abandoning their existing knowledge and codebases.
LLM Behavior: Different LLMs exhibited distinct "personalities" when interacting with web interfaces. Gemini models tended to be more exploratory, while Cohere models were more methodical, suggesting that semantic enhancements should be tailored to specific AI models.
Token Efficiency: The most dramatic improvement we saw was in token usage, which has significant implications for cost-efficiency when deploying AI agents at scale.
What's next for Aartisan
Our roadmap for Aartisan includes several exciting directions:
Framework Expansion: Extend Aartisan beyond React to support other popular frameworks like Vue, Angular, and Svelte, creating a unified approach to AI-friendly web development.
AI Agent Marketplace: Develop pre-configured AI agents optimized for different types of Aartisan-enhanced applications, allowing developers to easily integrate purpose-built agents into their workflows.
Developer Tools: Create VS Code and WebStorm extensions that provide real-time feedback on component semantic clarity and suggest enhancements during development.
Enterprise Solutions: Build enterprise-grade tools for retrofitting large-scale web applications with Aartisan enhancements, focusing on e-commerce, SaaS, and content management systems.
Community Standards: Work with the broader web development community to establish open standards for AI-readable semantic metadata, ensuring interoperability across frameworks and tools.
Performance Optimization: Continue refining our semantic enhancement approaches to further reduce token usage and API calls, making AI-powered workflows even more cost-effective.
By bridging the gap between human-centered and AI-friendly design, Aartisan has the potential to transform how we think about web development in the age of AI agents.
Built With
- cohere
- gemini
- javascript
- npm
- react
- typescript
- vite
