A systematic, four-layer architecture for transforming prompt engineering from art into science
Features • Quick Start • Architecture • Documentation • Contributing
Open-prompt turns prompt engineering from intuition-based art into a systematic, repeatable process. By decomposing user intent and applying proven cognitive structure frameworks, it generates production-ready prompts optimized for different LLM providers.
Try the live demo at https://open-prompt-sand.vercel.app
| Traditional Approach | open-prompt |
|---|---|
| ❌ Trial and error | ✅ Systematic decomposition |
| ❌ One-size-fits-all | ✅ Multi-variant generation |
| ❌ Unstructured prompts | ✅ PromptIR™ intermediate format |
| ❌ Provider-specific | ✅ Multi-provider optimization |
| ❌ Manual refinement | ✅ Automated framework selection |
- Layer 1 - Intent Classifier: Multi-intent detection with confidence scoring
- Layer 2 - Structure Framework Selector: Maps intents to optimal cognitive frameworks (CoT, MECE, SCQA, etc.)
- Layer 3 - PromptIR Generator: Creates structured intermediate representations
- Layer 4 - Final Prompt Constructor: Provider-optimized output (GPT, Claude, General)
- General Style: Provider-agnostic, balanced approach (default)
- GPT Style: Optimized for OpenAI GPT (imperative, numbered lists)
- Claude Style: Optimized for Anthropic Claude (collaborative, natural flow)
- Format Support: Single string or message array for API compatibility
- Multi-Language: Auto-detects Chinese, Japanese, Korean, English
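The multi-language auto-detection can be approximated with a Unicode-range heuristic. The sketch below is an illustration of the idea only, not open-prompt's actual detector:

```typescript
// Illustrative heuristic only -- open-prompt's real detector may differ.
type Lang = "zh" | "ja" | "ko" | "en";

function detectLanguage(text: string): Lang {
  // Japanese kana must be checked before generic CJK ideographs,
  // since Japanese text usually mixes kana with kanji.
  if (/[\u3040-\u30ff]/.test(text)) return "ja";
  if (/[\uac00-\ud7af]/.test(text)) return "ko";
  if (/[\u4e00-\u9fff]/.test(text)) return "zh";
  return "en";
}

detectLanguage("请生成一个SEO评估提示词"); // "zh"
detectLanguage("Generate an SEO evaluation prompt"); // "en"
```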
Every prompt is represented as a structured object with:
- Role, Goal, Context, Constraints, Process, Output
- Structure framework with template
- Optional tools and termination conditions
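The IR can be pictured as a plain TypeScript object. The field names below mirror the list above, but the exact type exported by @open-prompt/core may differ:

```typescript
// Sketch of the PromptIR shape -- field names follow the documented
// sections (Role, Goal, Context, ...), not necessarily the shipped type.
interface PromptIR {
  role: string;
  goal: string;
  context: string;
  constraints: string[];
  process: string[];
  output: string;
  structureFramework: { name: string; template: string };
  tools?: string[];
  terminationConditions?: string[];
}

const example: PromptIR = {
  role: "Senior SEO content auditor",
  goal: "Evaluate an article against Google's EEAT criteria",
  context: "The article targets a competitive health keyword",
  constraints: ["Cite specific passages", "Score each dimension 1-10"],
  process: ["Assess Experience", "Assess Expertise", "Assess Trustworthiness"],
  output: "A scored EEAT report with improvement suggestions",
  structureFramework: {
    name: "MECE",
    template: "Evaluate each dimension independently, then synthesize.",
  },
};
```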
- Unified interface supports OpenAI, Anthropic, Qwen, and more
- Easy provider integration
- Built-in monitoring and performance tracking
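A unified provider layer usually reduces to a small interface that each backend implements. The names below are assumptions for illustration, not the actual @open-prompt/core API:

```typescript
// Hypothetical provider interface -- the real @open-prompt/core
// types and method names may differ.
interface LLMProvider {
  complete(prompt: string): Promise<string>;
}

// A stub provider makes the pipeline runnable without network calls
// and shows how a new backend would plug in.
class EchoProvider implements LLMProvider {
  async complete(prompt: string): Promise<string> {
    return `[stub completion for: ${prompt.slice(0, 40)}]`;
  }
}
```

With this shape, swapping OpenAI for Anthropic or Qwen is just constructing a different class behind the same interface.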
- Visual Workflow: Step-by-step visualization of the prompt generation process.
- Stage Control: Detailed views for Intent Classification, Framework Selection, and PromptIR.

- Real-time Preview: See how changes in configuration affect the output.
- One-Click Copy: Easily copy the generated prompt to clipboard.
- Content Strategy: SEO evaluation, EEAT analysis, content optimization
- Technical Documentation: API docs, technical guides, tutorials
- Business Communication: Reports, presentations, strategic plans
- Problem Solving: Debugging, system design, architecture decisions
- Data Analysis: Structured reasoning through complex datasets
┌─────────────────────────────────────────────────────────────────────┐
│ USER INPUT │
│ "Generate a SEO article EEAT evaluation prompt" │
└──────────────────────────┬──────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ LAYER 1: INTENT CLASSIFIER │
│ • Multi-intent detection with confidence scoring │
│ • Each intent: reasoning + suggested direction │
│ Output: [SEO Content Evaluation (95%), Quality Assessment (88%)] │
└──────────────────────────┬──────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ LAYER 2: STRUCTURE FRAMEWORK SELECTOR │
│ • Maps intents to optimal thinking frameworks │
│ • Selects from CoT, MECE, SCQA, ToT, ReAct, or creates custom │
│ Output: [CoT (0.92), MECE (0.87)] │
└──────────────────────────┬──────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ LAYER 3: PROMPT IR GENERATOR │
│ • Creates Cartesian product: Intent × Framework │
│ • Generates structured PromptIR for each combination │
│ Output: 2-6 PromptIR variations per user input │
└──────────────────────────┬──────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ LAYER 4: FINAL PROMPT CONSTRUCTOR │
│ • Transforms PromptIR into polished, ready-to-use prompts │
│ • Style modes: General | GPT | Claude │
│ Output: Production prompts ready for deployment │
└──────────────────────────┬──────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ FINAL PROMPT OUTPUT │
│ Optimized for target provider with proper structure and language │
└─────────────────────────────────────────────────────────────────────┘
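Layer 3's Cartesian-product step (Intent × Framework) amounts to a simple cross join over the outputs of Layers 1 and 2. A minimal sketch, with made-up data matching the diagram above:

```typescript
interface Intent { name: string; confidence: number }
interface Framework { name: string; score: number }

// Layer 3 pairs every detected intent with every selected framework,
// yielding one PromptIR candidate per combination (2-6 for typical input).
function crossCombine(intents: Intent[], frameworks: Framework[]) {
  return intents.flatMap((intent) =>
    frameworks.map((framework) => ({ intent, framework }))
  );
}

const combos = crossCombine(
  [
    { name: "SEO Content Evaluation", confidence: 0.95 },
    { name: "Quality Assessment", confidence: 0.88 },
  ],
  [
    { name: "CoT", score: 0.92 },
    { name: "MECE", score: 0.87 },
  ]
);
// combos.length === 4: one PromptIR candidate per (intent, framework) pair
```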
- Node.js >= 18.0.0
- pnpm (recommended) or npm
# Clone the repository
git clone https://github.com/ai-zen-future/open-prompt.git
cd open-prompt
# Install dependencies
pnpm install
# Build all packages
pnpm build
# Start the Website
pnpm dev
import { OpenAIProvider } from "@open-prompt/core";
import {
IntentClassifier,
StructureFrameworkSelector,
PromptIRGenerator,
FinalPromptConstructor,
} from "@open-prompt/core";
// Initialize LLM client
const llmClient = new OpenAIProvider({
apiKey: process.env.OPENAI_API_KEY,
model: "gpt-4o-mini",
});
// Initialize all four layers
const intentClassifier = new IntentClassifier(llmClient, "gpt-4o-mini");
const frameworkSelector = new StructureFrameworkSelector(
llmClient,
"gpt-4o-mini",
);
const promptIRGenerator = new PromptIRGenerator(llmClient, "gpt-4o-mini");
const finalConstructor = new FinalPromptConstructor(llmClient, "gpt-4o-mini");
// Generate optimized prompts
const userInput = "Generate a SEO article EEAT evaluation prompt";
// Layer 1: Classify intents
const intentResult = await intentClassifier.classify({ userInput });
// Layer 2: Select structure frameworks
const frameworks = await frameworkSelector.select({
userInput,
intents: intentResult.categories.flatMap((c) => c.intents),
});
// Layer 3: Generate PromptIR
const promptIRResult = await promptIRGenerator.generate({
userInput,
intentResult,
frameworkResult: frameworks,
});
// Layer 4: Construct final prompts
const finalPrompt = await finalConstructor.construct({
promptIR: promptIRResult.combinations[0].promptIR,
styleMode: "gpt",
outputFormat: "string",
originalUserInput: userInput,
});
console.log(finalPrompt.finalPrompt.content);
- Core Architecture - Detailed architecture documentation
- API Reference - Complete API documentation
- Structure Frameworks - Framework selection guide [todo]
- Contributing - Contribution guidelines [todo]
- Changelog - Version history and changes [todo]
We welcome contributions!
Development Workflow:
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Startup: Define the overall architecture and core capabilities; initially invocable only programmatically.
- User Interface: Build UI for prompt generation and experimentation (website).
- Coding-Style Prompt Engineering: Add prompt versioning, coding-level diff/comparison, and A/B testing workflows.
- Small Prompt Model: Fine-tune a compact model dedicated to prompt generation, optimized for speed and accuracy.
- Prompt Library/Marketplace Integration: An all-in-one discovery platform that aggregates external prompt sources rather than hosting them.
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built with ❤️ by the open-prompt team