vllm
0 reviews
A high-throughput and memory-efficient inference and serving engine for LLMs
Security: 65
Quality: 22
Maintenance: 40
Overall: 46
Version: v0.15.1 (PyPI, Python)
Released: Feb 5, 2026
Publisher: vLLM Team
GitHub Stars: 70,640
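For context, the package's documented offline-generation API can be exercised in a few lines of Python. A minimal sketch follows; the model name is an illustrative assumption, not something taken from this page:

    from vllm import LLM, SamplingParams

    # Load a model and generate completions locally (no server needed).
    llm = LLM(model="facebook/opt-125m")  # illustrative model name
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    outputs = llm.generate(["Hello, my name is"], sampling_params)
    for output in outputs:
        print(output.outputs[0].text)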
Community Reviews
No reviews yet
Dependencies
regex
cachetools
psutil
sentencepiece
numpy
requests
tqdm
blake3
py-cpuinfo
transformers
tokenizers
protobuf
fastapi
aiohttp
openai
pydantic
prometheus_client
pillow
prometheus-fastapi-instrumentator
tiktoken
and 40 more
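Several of the dependencies above (fastapi, openai) reflect the serving side: vLLM can expose an OpenAI-compatible HTTP server via the vllm serve command. A minimal client sketch, assuming a server was started locally on the default port 8000 with an illustrative model:

    from openai import OpenAI

    # Point the standard openai client at a local vLLM server
    # (assumes `vllm serve facebook/opt-125m` is listening on localhost:8000).
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    resp = client.completions.create(
        model="facebook/opt-125m",  # must match the model the server was started with
        prompt="Hello, my name is",
        max_tokens=32,
    )
    print(resp.choices[0].text)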
Used By