vllm-project projects
31 open and 0 closed projects found.
- #31: PRs and issues related to NVIDIA hardware (updated Apr 21, 2026)
- #39: Tracking AMD ROCm CI failures and fixes for vLLM (updated Apr 21, 2026)
- #20: Tracking failures occurring in CI (updated Apr 21, 2026)
- #50: Optimizations and bug fixes for the Qwen3.5 model series (updated Apr 21, 2026)
- #44: Maintainers' tracking board for metrics- and tracing-related PRs and issues (updated Apr 21, 2026)
- #12: torch.compile integration (updated Apr 21, 2026)
- #35: Backlog of CI feature requests (updated Apr 21, 2026)
- #5: 2025-02-25: DeepSeek V3/R1 is supported with optimized block FP8 kernels, MLA, MTP spec decode, multi-node PP, EP, and W4A16 quantization (updated Apr 21, 2026)
- #42: Tracks CPU-related issues and tasks (updated Apr 20, 2026)
- #8: Main tasks for the multi-modality workstream (#4194) (updated Apr 19, 2026)
- #10: Community requests for multi-modal models (updated Apr 19, 2026)
- #7: Tracks Ray issues and pull requests in vLLM (updated Apr 4, 2026)
- #14: Tracker of known issues and bugs for serving Llama on vLLM (updated Feb 6, 2026)
- #6: A list of onboarding tasks to help first-time contributors get started with vLLM (updated Dec 10, 2025)
- #13: Enhancements to the Llama herd of models; see also https://github.com/vllm-project/vllm/issues/16114 (updated Nov 20, 2025)
- #1: [Testing] Optimize V1 PP efficiency (updated Oct 6, 2025)