🚀 Hybrid Prompt Generator


Official implementation of the research paper "A Hybrid Framework for Adaptive Prompt Generation Using Templates, LLMs, and Learned Rankers".

📖 Published in ICT: Applications and Social Interfaces, Proceedings of ICTCS 2025, Volume 3 (Springer LNNS)

🔗 Repository: https://github.com/parth-shinge/Hybrid-Prompt-Generator


🧠 Overview

Hybrid Prompt Generator is a research-driven framework that combines template-based prompt generation, LLM augmentation, and machine learning ranking models to generate high-quality prompts for creative tools such as Canva, Gamma, and other AI design platforms.

The system integrates Human-in-the-Loop learning, enabling the model to continuously improve prompt quality based on user selections.


πŸ— System Architecture

User Input
   β”‚
   β”œβ”€β”€ Template Generator
   β”‚
   β”œβ”€β”€ Gemini Generator
   β”‚
   └── Hybrid Mode
         β”‚
         β–Ό
   🧠 Ensemble Prompt Synthesis
      (Slot Coverage + Fluency)
         β”‚
         β–Ό
   πŸ”Ž Neural Ranker
   (384 β†’ 128 β†’ 64 β†’ 1)
         β”‚
         β–Ό
   πŸ‘€ User Choice Logging
         β”‚
         β–Ό
   πŸ“Š Dataset Creation
         β”‚
         β–Ό
   πŸ“ˆ Evaluation + Statistical Testing
         β”‚
         β–Ό
   πŸ” SHAP Interpretability

✨ Key Features

🧩 Template Prompt Generator

Deterministic prompt construction using 7 structured design parameters.
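As a concrete illustration, deterministic slot filling might look like the following sketch. The seven slot names here are hypothetical stand-ins; the repository's template generator defines the real parameters.

```python
# Hypothetical 7-slot template; the actual slot names live in the repository.
TEMPLATE = ("Create a {design_type} for {audience} with a {tone} tone, "
            "a {color_palette} palette, a {layout} layout, {font_style} "
            "typography, and a focus on {key_message}.")

def template_prompt(**slots: str) -> str:
    """Deterministically fill every slot of the template."""
    return TEMPLATE.format(**slots)

prompt = template_prompt(design_type="pitch deck", audience="investors",
                         tone="confident", color_palette="navy and gold",
                         layout="grid", font_style="serif",
                         key_message="sustainable growth")
```

Because the mapping from slots to text is fixed, the same inputs always yield the same prompt, which is what makes this branch a reproducible baseline next to the LLM generator.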

🤖 Gemini Integration

LLM-powered prompt generation using Google Gemini API.

⚡ Hybrid Generation Mode

Generates prompts from both systems and selects the best automatically.

🧠 Ensemble Prompt Synthesis

Prompt quality scoring:

Final Score = α × SlotCoverage + β × Fluency
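The scoring rule can be sketched in a few lines of Python. The weights (α = 0.6, β = 0.4) and the keyword-matching coverage heuristic are illustrative assumptions; the repository's config and synthesis module hold the real definitions.

```python
def ensemble_score(slot_coverage: float, fluency: float,
                   alpha: float = 0.6, beta: float = 0.4) -> float:
    """Final Score = alpha * SlotCoverage + beta * Fluency.
    Both inputs are assumed normalized to [0, 1]."""
    return alpha * slot_coverage + beta * fluency

def slot_coverage(prompt: str, slots: list[str]) -> float:
    """Illustrative coverage heuristic: fraction of required design
    slots that are mentioned somewhere in the prompt text."""
    filled = sum(1 for s in slots if s.lower() in prompt.lower())
    return filled / len(slots)

cov = slot_coverage("A minimalist poster in a blue palette", ["poster", "blue"])
score = ensemble_score(cov, fluency=0.9)
```

The hybrid mode would then simply keep the candidate with the highest `score` across the template and Gemini outputs.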

📊 Neural Ranker

Binary classifier trained on user choice data.

Architecture:

Embedding (384)
 → Linear(128)
 → ReLU
 → Dropout(0.2)
 → Linear(64)
 → ReLU
 → Linear(1)
 → Sigmoid

Embedding model: all-MiniLM-L6-v2
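The layer stack maps directly to a forward pass. Below is a minimal NumPy sketch with random placeholder weights, purely to make the shapes concrete; the actual model is trained on user-choice data (presumably in a deep learning framework), not initialized like this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights following the stated sizes: 384 -> 128 -> 64 -> 1.
W1, b1 = rng.normal(size=(384, 128)) * 0.05, np.zeros(128)
W2, b2 = rng.normal(size=(128, 64)) * 0.05, np.zeros(64)
W3, b3 = rng.normal(size=64) * 0.05, 0.0

def rank_score(embedding: np.ndarray, train: bool = False) -> float:
    """Forward pass of the ranker: returns P(prompt is preferred)."""
    h = np.maximum(embedding @ W1 + b1, 0.0)     # Linear(128) + ReLU
    if train:                                    # Dropout(0.2), training only
        h *= (rng.random(h.shape) > 0.2) / 0.8   # inverted-dropout scaling
    h = np.maximum(h @ W2 + b2, 0.0)             # Linear(64) + ReLU
    logit = h @ W3 + b3                          # Linear(1)
    return float(1.0 / (1.0 + np.exp(-logit)))   # Sigmoid
```

The input embedding is the 384-dimensional sentence vector produced by all-MiniLM-L6-v2.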


👀 Human-in-the-Loop Learning

User selections are logged into a SQLite database, which is converted into a dataset for training the neural ranker.

The system continuously improves as more user feedback is collected.
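A minimal sketch of the feedback logging, assuming a hypothetical `choices` table (the real schema lives in database.py):

```python
import sqlite3

# In-memory database for illustration; the project persists to a file.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS choices (
        id     INTEGER PRIMARY KEY AUTOINCREMENT,
        prompt TEXT    NOT NULL,
        chosen INTEGER NOT NULL,   -- 1 = user picked it, 0 = rejected
        ts     TEXT    DEFAULT CURRENT_TIMESTAMP
    )
""")

def log_choice(prompt: str, chosen: bool) -> None:
    """Record which candidate prompt the user selected."""
    conn.execute("INSERT INTO choices (prompt, chosen) VALUES (?, ?)",
                 (prompt, int(chosen)))
    conn.commit()

log_choice("Minimalist pitch deck, pastel palette", True)
log_choice("Bold neon startup deck", False)
rows = conn.execute("SELECT prompt, chosen FROM choices ORDER BY id").fetchall()
```

Each (prompt, chosen) row is exactly the labeled pair the neural ranker later trains on.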


📊 Evaluation Protocol

The repository includes a full ML evaluation pipeline.

Models Compared

• Random Baseline
• Popularity Baseline
• TF-IDF + Logistic Regression
• Embedding + Logistic Regression
• Neural Ranker

Metrics

• Accuracy
• Precision
• Recall
• F1 Score
• ROC-AUC

Evaluation uses:

• 5-Fold Stratified Cross-Validation
• Held-out Test Set
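The cross-validation setup can be sketched with scikit-learn on toy data standing in for prompt embeddings and user-choice labels (the real pipeline lives in eval_protocol.py):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Toy features/labels: 8-d vectors in place of real prompt embeddings,
# with a label correlated to the first feature so there is signal to learn.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)

# Stratified folds keep the chosen/rejected ratio constant per fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="accuracy")
mean_acc = scores.mean()
```

The same split object would be reused across all five models so that per-fold scores are paired, which is what the signed-rank test below requires.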


📈 Statistical Significance Testing

To validate experimental results, the following statistical tests are implemented:

🧪 McNemar Test
🧪 Wilcoxon Signed-Rank Test
🧪 Bootstrap Confidence Intervals

Results saved to:

results/statistical_tests.json
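Two of the three tests can be sketched on toy paired scores (illustrative numbers, not the repository's results): the Wilcoxon signed-rank test via SciPy, and a percentile bootstrap confidence interval computed by hand.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Paired per-fold (or per-example) scores for two models; model_a is
# constructed to be ~3 points better on toy data.
model_a = rng.normal(0.80, 0.05, size=30)
model_b = model_a - 0.03 + rng.normal(0.0, 0.01, size=30)

# Wilcoxon signed-rank test on the paired differences.
stat, p_value = wilcoxon(model_a, model_b)

# Percentile bootstrap 95% CI for the mean paired difference.
diffs = model_a - model_b
boot = [rng.choice(diffs, size=diffs.size, replace=True).mean()
        for _ in range(2000)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

A CI that excludes zero (as here) agrees with a small Wilcoxon p-value that the difference is unlikely to be noise; the McNemar test plays the analogous role for paired correct/incorrect classification outcomes.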

πŸ” SHAP Interpretability

To improve transparency, the neural ranker supports explainability using SHAP.

Global Explanations

Feature importance across the dataset.

Local Explanations

Explains why the model preferred one prompt over another.

Accessible via the Admin Dashboard.


🖥 Admin Dashboard

The admin panel provides:

📊 System analytics
🧠 Ranker retraining
🔍 SHAP visualization
🗃 Dataset inspection


βš™οΈ Installation

git clone https://github.com/parth-shinge/Hybrid-Prompt-Generator

cd Hybrid-Prompt-Generator

python -m venv .venv

source .venv/bin/activate
# Windows: .venv\Scripts\activate

pip install -r requirements.txt

▶️ Running the Application

streamlit run prompt_generator.py

🧠 Training the Neural Ranker

from neural_ranker import train_ranker
from database import get_choice_dataset

# Each pair is (prompt_text, label), where the label marks whether the
# user selected that prompt.
pairs = get_choice_dataset()

texts = [text for text, _ in pairs]
labels = [label for _, label in pairs]

train_ranker(texts, labels)

📂 Project Structure

Hybrid-Prompt-Generator/
│
├── prompt_generator.py
├── ensemble_synthesis.py
├── neural_ranker.py
├── eval_protocol.py
├── statistical_tests.py
├── shap_explain.py
├── database.py
├── config.yaml
│
├── tests/
├── results/
├── models/
└── utils/

πŸ” Reproducibility

The project ensures reproducible experiments through:

• Deterministic seeding
• Dataset hashing
• Experiment tracking
• Git commit logging
• Config-based hyperparameters

Each experiment logs:

dataset hash
git commit
random seed
config snapshot
timestamp
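Assembling such a record takes only the standard library; the sketch below is an assumption about the record's shape, not the repository's exact format, and `git_commit` would normally come from `git rev-parse HEAD`.

```python
import hashlib
import random
import time

def make_experiment_record(dataset_rows: list[str], seed: int,
                           config: dict, git_commit: str) -> dict:
    """Build the reproducibility metadata logged for one experiment."""
    random.seed(seed)  # deterministic seeding for the run itself
    dataset_hash = hashlib.sha256(
        "\n".join(dataset_rows).encode("utf-8")).hexdigest()
    return {
        "dataset_hash": dataset_hash,      # detects silent data drift
        "git_commit": git_commit,          # code version of the run
        "random_seed": seed,
        "config": config,                  # hyperparameter snapshot
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }

record = make_experiment_record(
    ["prompt,label", "minimalist poster,1"], seed=42,
    config={"alpha": 0.6, "beta": 0.4}, git_commit="<commit-sha>")
```

Hashing the serialized dataset means any change to the training pairs, however small, produces a different record and flags the experiment as non-comparable.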

📜 Citation

If you use this work in your research, please cite the following:

BibTeX

@inproceedings{shinge2026hybridprompt,
  title     = {A Hybrid Framework for Adaptive Prompt Generation Using Templates, LLMs, and Learned Rankers},
  author    = {Parth Shinge},
  booktitle = {ICT: Applications and Social Interfaces},
  series    = {Lecture Notes in Networks and Systems},
  publisher = {Springer Nature Switzerland AG},
  year      = {2026},
  note      = {Proceedings of the 10th International Conference on Information and Communication Technology for Competitive Strategies (ICTCS-2025)}
}

Author

Parth Shinge
Vishwakarma Institute of Technology, Pune, India

ORCID: https://orcid.org/0009-0007-3790-2373


⭐ Acknowledgement

This work was presented at:

10th International Conference on Information and Communication Technology for Competitive Strategies (ICTCS-2025)

and published in Springer Lecture Notes in Networks and Systems (LNNS).


📜 License

This repository is released for academic and research purposes.
