Published in: Proceedings of the 3rd International Workshop on AI for Quantum and Quantum for AI (AIQxQIA 2025), co-located with the 28th European Conference on Artificial Intelligence (ECAI 2025), Bologna, Italy, 25 October 2025. CEUR Workshop Proceedings, Vol. 4153. Edited by Marco Baioletti (University of Perugia), Miguel Angel Gonzalez (University of Oviedo), Corrado Loglisci (University of Bari), Angelo Oddi (CNR), Riccardo Rasconi (CNR), and Ramiro Varela (University of Oviedo).
🚀 Key Discovery: Vision Transformer (ViT) embeddings unlock quantum machine learning advantage. This is the first systematic evidence that the choice of embeddings determines quantum kernel success, showing a fundamental synergy between transformer attention and quantum feature spaces.
- Project Page: Embedding Aware Quantum
- GitHub Repository: QuantumVE
- Peer Reviewed Paper (CEUR-WS): paper21.pdf, Vol. 4153
- Preprint (arXiv): Embedding Aware Quantum Classical SVMs
- Dataset on HuggingFace: QuantumEmbeddings
- Interactive Demo: Colab Notebook
- Fashion MNIST: +8.02% accuracy vs classical SVM
- MNIST: +4.42% accuracy boost
- Embedding Insights: ViT embeddings enable quantum advantage; CNN features degrade performance
- Scalability: 16-qubit tensor-network simulation via cuTensorNet
- Efficiency: Class-balanced k-means distillation for quantum data preprocessing
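The class-balanced distillation step can be sketched in plain NumPy: run k-means independently inside each class and keep the resulting centroids as the distilled training set. This is an illustrative sketch only (function name, `per_class` parameter, and the plain Lloyd iterations are assumptions; see `data_processing/` for the actual procedure):

```python
import numpy as np

def class_balanced_distill(X, y, per_class, n_iter=20, seed=0):
    """Distill (X, y) by running k-means within each class and keeping
    `per_class` centroids per class, so every class stays equally represented."""
    rng = np.random.default_rng(seed)
    Xd, yd = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        # Initialise centroids from random samples of this class
        centers = Xc[rng.choice(len(Xc), per_class, replace=False)]
        for _ in range(n_iter):
            # Assign each sample to its nearest centroid
            dists = np.linalg.norm(Xc[:, None] - centers[None], axis=-1)
            assign = dists.argmin(axis=1)
            # Recompute centroids; keep the old one if a cluster empties
            for k in range(per_class):
                if np.any(assign == k):
                    centers[k] = Xc[assign == k].mean(axis=0)
        Xd.append(centers)
        yd.append(np.full(per_class, c))
    return np.concatenate(Xd), np.concatenate(yd)
```

The distilled set is much smaller than the original, which keeps the quantum kernel's Gram-matrix cost manageable while preserving per-class structure.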
QuantumVE/
├── data_processing/   # Class-balanced k-means distillation procedures
├── embeddings/        # Vision Transformer & CNN embedding extraction
├── qve/               # Core quantum-classical modules and utilities
└── scripts/           # Experimental pipelines with cross-validation
    ├── classical_baseline.py          # Traditional SVM benchmarks
    ├── cross_validation_baseline.py   # Cross-validation framework
    └── qsvm_cuda_embeddings.py        # Our embedding-aware quantum method
# Create conda environment
conda create -n QuantumVE python=3.11 -y
conda activate QuantumVE
# Clone and install
git clone https://github.com/sebasmos/QuantumVE.git
cd QuantumVE
pip install -e .
# For Ryzen devices, install MPI
conda install -c conda-forge mpi4py openmpi

MNIST Embeddings:
mkdir -p data && \
wget https://huggingface.co/datasets/sebasmos/QuantumEmbeddings/resolve/main/mnist_embeddings.zip && \
unzip mnist_embeddings.zip -d data && \
rm mnist_embeddings.zip

Fashion MNIST Embeddings:
mkdir -p data && \
wget https://huggingface.co/datasets/sebasmos/QuantumEmbeddings/resolve/main/fashionmnist_embeddings.zip && \
unzip fashionmnist_embeddings.zip -d data && \
rm fashionmnist_embeddings.zip

Single Node:
# Classical baseline SVM benchmarks
python scripts/classical_baseline.py
# Cross-validation framework
python scripts/cross_validation_baseline.py
# Our embedding aware quantum method
python scripts/qsvm_cuda_embeddings.py

Multi-Node with MPI:
# Run with 2 processes
mpirun -np 2 python scripts/qsvm_cuda_embeddings.py
mpirun -np 2 python scripts/cross_validation_baseline.py

Our key insight: embedding choice is critical for quantum advantage. While CNN features degrade performance in quantum kernels, Vision Transformer embeddings show a strong synergy with quantum feature spaces, enabling measurable performance gains through:
- Class-balanced distillation reduces quantum simulation overhead while preserving critical patterns
- ViT attention mechanisms align naturally with quantum superposition states
- Tensor-network simulation scales to practical problem sizes (16+ qubits)
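To make the quantum-kernel idea concrete, here is a minimal NumPy sketch of a fidelity kernel over a product angle-encoding feature map. The encoding (one RY-rotated qubit per feature) is an assumption for illustration, not the circuit used in the paper, and the function names are hypothetical:

```python
import numpy as np

def angle_encode(x):
    """Statevector for a product angle-encoding map: each feature x_i
    prepares one qubit as RY(x_i)|0> = cos(x_i/2)|0> + sin(x_i/2)|1>."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])
        state = np.kron(state, qubit)  # tensor product across qubits
    return state

def fidelity_kernel(X, Z=None):
    """Gram matrix K[i, j] = |<phi(x_i)|phi(z_j)>|^2 between encoded states."""
    Z = X if Z is None else Z
    Phi_X = np.stack([angle_encode(x) for x in X])
    Phi_Z = np.stack([angle_encode(z) for z in Z])
    return np.abs(Phi_X @ Phi_Z.T) ** 2
```

The resulting Gram matrix can be fed directly to a classical solver such as `sklearn.svm.SVC(kernel='precomputed')`; the repository's pipeline replaces this toy statevector simulation with cuTensorNet-backed tensor-network contraction to reach 16+ qubits.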
We welcome contributions! Help us advance quantum machine learning:
- Fork the QuantumVE repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Submit a pull request with a detailed description
Areas for contribution:
- New embedding architectures (BERT, CLIP, etc.)
- Additional quantum backends
- Performance optimizations
- Documentation improvements
This work was supported by the Google Cloud Research Credits program under award number GCP19980904.
@inproceedings{ordonez2025embedding,
title = {Embedding Aware Quantum Classical SVMs for Scalable Quantum Machine Learning},
author = {Ord{\'o}{\~n}ez, Sebasti{\'a}n Andr{\'e}s Cajas and Torres, Luis Fernando Torres and Bifulco, Mario and Duran, Carlos Andres and Bosch, Cristian and Carbajo, Ricardo Simon},
booktitle = {Proceedings of the 3rd International Workshop on AI for Quantum and Quantum for AI (AIQxQIA 2025), co-located with the 28th European Conference on Artificial Intelligence (ECAI 2025)},
editor = {Baioletti, Marco and Gonzalez, Miguel Angel and Loglisci, Corrado and Oddi, Angelo and Rasconi, Riccardo and Varela, Ramiro},
series = {CEUR Workshop Proceedings},
volume = {4153},
year = {2025},
month = {October},
address = {Bologna, Italy},
publisher = {CEUR-WS.org},
url = {https://ceur-ws.org/Vol-4153/paper21.pdf}
}

@misc{ordonez2025embeddingarxiv,
title = {Embedding Aware Quantum Classical SVMs for Scalable Quantum Machine Learning},
author = {Ord{\'o}{\~n}ez, Sebasti{\'a}n Andr{\'e}s Cajas and Torres, Luis Fernando Torres and Bifulco, Mario and Duran, Carlos Andres and Bosch, Cristian and Carbajo, Ricardo Simon},
year = {2025},
eprint = {2508.00024},
archivePrefix = {arXiv},
url = {https://arxiv.org/abs/2508.00024}
}

🌟 Star us on GitHub if this helps your research! 🌟