Release note#
v0.9.1rc1 - 2025.06.22#
This is the 1st release candidate of v0.9.1 for vLLM Ascend. Please follow the official doc to get started.
Highlights#
Atlas 300I series is experimentally supported in this release. #1333 After careful consideration, this feature will NOT be included in the v0.9.1-dev branch, considering the quality requirements of the v0.9.1 release and the rapid iteration still needed to improve performance on the Atlas 300I series. We will continue improving it from 0.9.2rc1 and later.
Support EAGLE-3 for speculative decoding. #1032
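The snippet below is a hypothetical sketch of enabling EAGLE-3 speculative decoding through vLLM's speculative_config; the target and draft model names are placeholders and the exact config keys may differ slightly between vLLM versions:

```python
from vllm import LLM, SamplingParams

# Hypothetical example: EAGLE-3 speculative decoding via speculative_config.
# Replace the draft model path with an EAGLE-3 draft trained for your target model.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "eagle3",
        "model": "/path/to/eagle3-draft-model",
        "num_speculative_tokens": 2,
    },
)
outputs = llm.generate(["The capital of France is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```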
Core#
Ascend PyTorch adapter (torch_npu) has been upgraded to 2.5.1.post1.dev20250528. Don't forget to update it in your environment. #1235
Support Atlas 300I series container image. You can get it from quay.io.
Fix token-wise padding mechanism to make multi-card graph mode work. #1300
Upgrade vLLM to 0.9.1. #1165
Other Improvements#
Initial support for chunked prefill with MLA. #1172
An example of best practices to run DeepSeek with ETP has been added. #1101
Performance improvements for DeepSeek using the TorchAir graph. #1098, #1131
Supports the speculative decoding feature with AscendScheduler. #943
Improve VocabParallelEmbedding custom op performance. It will be enabled in the next release. #796
Fixed a device discovery and setup bug when running vLLM Ascend on Ray. #884
DeepSeek with MC2 (Merged Compute and Communication) now works properly. #1268
Fixed log2phy NoneType bug with static EPLB feature. #1186
Improved performance for DeepSeek with DBO enabled. #997, #1135
Refactored AscendFusedMoE. #1229
Added an initial user stories page (covering LLaMA-Factory/TRL/verl/MindIE Turbo/GPUStack). #1224
Added a unit test framework. #1201
Known Issues#
In some cases, the vLLM process may crash with a GatherV3 error when aclgraph is enabled. We are working on this issue and will fix it in the next release. #1038
The prefix cache feature does not work when the Ascend Scheduler is enabled without chunked prefill. This will be fixed in the next release. #1350
Full Changelog#
https://github.com/vllm-project/vllm-ascend/compare/v0.9.0rc2...v0.9.1rc1
v0.9.0rc2 - 2025.06.10#
This release contains some quick fixes for v0.9.0rc1. Please use this release instead of v0.9.0rc1.
Highlights#
Fix the import error when vllm-ascend is installed in non-editable mode. #1152
v0.9.0rc1 - 2025.06.09#
This is the 1st release candidate of v0.9.0 for vllm-ascend. Please follow the official doc to start the journey. From this release, the V1 Engine is recommended. The code of the V0 Engine is frozen and will no longer be maintained. Please set the environment variable VLLM_USE_V1=1 to enable the V1 Engine.
Highlights#
DeepSeek works with graph mode now. Follow the official doc to give it a try. #789
Qwen series models work with graph mode now. It works by default with the V1 Engine. Please note that in this release, only Qwen series models are well tested with graph mode. We'll make it stable and more general in the next release. If you hit any issues, please feel free to open an issue on GitHub, and fall back to eager mode temporarily by setting enforce_eager=True when initializing the model.
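A minimal sketch of the eager-mode fallback described above, assuming offline inference with the LLM API; the Qwen model name is only an example:

```python
import os

# The V1 Engine is opt-in in this release; set this before importing vLLM.
os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM, SamplingParams

# If graph mode misbehaves, fall back to eager mode by passing enforce_eager=True
# when initializing the model.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enforce_eager=True)
outputs = llm.generate(["What is the capital of France?"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```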
Core#
The performance of the multi-step scheduler has been improved. Thanks for the contribution from China Merchants Bank. #814
LoRA, Multi-LoRA, and dynamic serving are supported for the V1 Engine now. Thanks for the contribution from China Merchants Bank. #893
Prefix cache and chunked prefill features work now. #782 #844
Spec decode and MTP features work with V1 Engine now. #874 #890
DP feature works with DeepSeek now. #1012
Input embedding feature works with V0 Engine now. #916
Sleep mode feature works with V1 Engine now. #1084
Model#
Other#
Online serving with Ascend quantization works now. #877
A batch of bugs for graph mode and MoE models have been fixed. #773 #771 #774 #816 #817 #819 #912 #897 #961 #958 #913 #905
A batch of performance improvement PRs have been merged. #784 #803 #966 #839 #970 #947 #987 #1085
From this release, binary wheel package will be released as well. #775
The contributor doc site has been added.
Known Issue#
In some cases, the vLLM process may crash when aclgraph is enabled. We're working on this issue and it'll be fixed in the next release.
Multi-node data parallel doesn't work with this release. This is a known issue in vLLM and has been fixed on the main branch. #18981
v0.7.3.post1 - 2025.05.29#
This is the first post release of 0.7.3. Please follow the official doc to start the journey. It includes the following changes:
Highlights#
Qwen3 and Qwen3MOE are supported now. The performance and accuracy of Qwen3 are well tested. You can try it now. MindIE Turbo is recommended to improve the performance of Qwen3. #903 #915
Added a new performance guide. The guide aims to help users improve vllm-ascend performance at the system level. It covers OS configuration, library optimization, deployment guidance, and so on. #878 Doc Link
Bug Fix#
Qwen2.5-VL works for RLHF scenarios now. #928
Users can launch models from online weights now, e.g., directly from Hugging Face or ModelScope. #858 #918
The meaningless log info UserWorkspaceSize0 has been cleaned up. #911
The log level for Failed to import vllm_ascend_C has been changed to warning instead of error. #956
DeepSeek MLA now works with chunked prefill in the V1 Engine. Please note that the V1 engine in 0.7.3 is only experimental and intended for test usage. #849 #936
Docs#
v0.7.3 - 2025.05.08#
🎉 Hello, World!
We are excited to announce the release of 0.7.3 for vllm-ascend. This is the first official release. The functionality, performance, and stability of this release are fully tested and verified. We encourage you to try it out and provide feedback. We'll post bug fix versions in the future if needed. Please follow the official doc to start the journey.
Highlights#
This release includes all features landed in the previous release candidates (v0.7.1rc1, v0.7.3rc1, v0.7.3rc2), and all the features are fully tested and verified. Visit the official doc to get the detailed feature and model support matrix.
Upgrade CANN to 8.1.RC1 to enable the chunked prefill and automatic prefix caching features. You can enable them now.
Upgrade PyTorch to 2.5.1. vLLM Ascend no longer relies on the dev version of torch-npu. Users no longer need to install torch-npu by hand; the 2.5.1 version of torch-npu will be installed automatically. #662
Integrate MindIE Turbo into vLLM Ascend to improve DeepSeek V3/R1, Qwen 2 series performance. #708
Core#
LoRA, Multi-LoRA, and dynamic serving are supported now. The performance will be improved in the next release. Please follow the official doc for more usage information. Thanks for the contribution from China Merchants Bank. #700
Model#
Other#
v0.8.5rc1 - 2025.05.06#
This is the 1st release candidate of v0.8.5 for vllm-ascend. Please follow the official doc to start the journey. Now you can enable the V1 engine by setting the environment variable VLLM_USE_V1=1; see the feature support status of vLLM Ascend here.
Highlights#
Upgrade CANN version to 8.1.RC1 to support chunked prefill and automatic prefix caching (--enable_prefix_caching) when V1 is enabled. #747
Optimize Qwen2 VL and Qwen 2.5 VL. #701
Improve DeepSeek V3 eager mode and graph mode performance; now you can use additional_config={'enable_graph_mode': True} to enable graph mode. #598 #719
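As a reference, here is a minimal sketch of passing additional_config when creating an offline LLM instance; the model name and parallelism setting are illustrative only:

```python
from vllm import LLM

# Enable the Ascend graph mode through additional_config, as described above.
# tensor_parallel_size is illustrative and depends on your deployment.
llm = LLM(
    model="deepseek-ai/DeepSeek-V3",
    tensor_parallel_size=16,
    additional_config={"enable_graph_mode": True},
)
```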
Core#
Upgrade vLLM to 0.8.5.post1 #715
Fix early return in CustomDeepseekV2MoE.forward during profile_run #682
Adapt to new quantized models generated by modelslim. #719
Initial support for P2P Disaggregated Prefill based on llm_datadist. #694
Use /vllm-workspace as the code path and include .git in the container image to fix an issue when starting vLLM under /workspace. #726
Optimize NPU memory usage to make DeepSeek R1 W8A8 with a 32K model length work. #728
Fix PYTHON_INCLUDE_PATH typo in setup.py. #762
Other#
v0.8.4rc2 - 2025.04.29#
This is the second release candidate of v0.8.4 for vllm-ascend. Please follow the official doc to start the journey. Some experimental features are included in this version, such as W8A8 quantization and EP/DP support. We'll make them stable enough in the next release.
Highlights#
Qwen3 and Qwen3MOE are supported now. Please follow the official doc to run the quick demo. #709
The Ascend W8A8 quantization method is supported now. Please refer to the official doc for an example, and see the sketch after this list. Any feedback is welcome. #580
DeepSeek V3/R1 works with DP, TP and MTP now. Please note that it's still in experimental status. Let us know if you hit any problem. #429 #585 #626 #636 #671
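The following is a hypothetical sketch of loading a W8A8 checkpoint; the checkpoint path and the "ascend" quantization name are assumptions, so please follow the official quantization doc for the exact usage:

```python
from vllm import LLM

# Hypothetical: load a W8A8 checkpoint produced by modelslim and select the
# Ascend quantization method. Path and quantization name are assumptions.
llm = LLM(
    model="/path/to/DeepSeek-V3-w8a8",
    quantization="ascend",
)
```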
Core#
The ACLGraph feature is supported with the V1 engine now. It's disabled by default because this feature relies on the CANN 8.1 release. We'll make it available by default in the next release. #426
Upgrade PyTorch to 2.5.1. vLLM Ascend no longer relies on the dev version of torch-npu. Users no longer need to install torch-npu by hand; the 2.5.1 version of torch-npu will be installed automatically. #661
Other#
MiniCPM model works now. #645
The openEuler container image is supported with the v0.8.4-openeuler tag, and custom Ops build is enabled by default for openEuler OS. #689
Fix ModuleNotFoundError bug to make LoRA work. #600
Add the "Using EvalScope evaluation" doc. #611
Add a VLLM_VERSION environment variable to make the vLLM version configurable, helping developers set the correct vLLM version if the vLLM code is changed by hand locally. #651
v0.8.4rc1 - 2025.04.18#
This is the first release candidate of v0.8.4 for vllm-ascend. Please follow the official doc to start the journey. From this version, vllm-ascend will follow the newest version of vllm and release every two weeks. For example, if vllm releases v0.8.5 in the next two weeks, vllm-ascend will release v0.8.5rc1 instead of v0.8.4rc2. Please find the details in the official documentation.
Highlights#
vLLM V1 engine experimental support is included in this version. You can visit the official guide to get more detail. By default, vLLM will fall back to V0 if V1 doesn't work; please set the VLLM_USE_V1=1 environment variable if you want to force the use of V1.
LoRA, Multi-LoRA, and dynamic serving are supported now. The performance will be improved in the next release. Please follow the official doc for more usage information. Thanks for the contribution from China Merchants Bank. #521
The Sleep Mode feature is supported. Currently it only works on the V0 engine. V1 engine support will come soon. #513
Core#
The Ascend scheduler is added for the V1 engine. This scheduler has better affinity with Ascend hardware. More scheduling policies will be added in the future. #543
The Disaggregated Prefill feature is supported. Currently only 1P1D works. NPND is under design by the vLLM team; vllm-ascend will support it once it's ready in vLLM. Follow the official guide to use it. #432
The spec decode feature works now. Currently it only works on the V0 engine. V1 engine support will come soon. #500
The structured output feature works now on the V1 Engine. Currently it only supports the xgrammar backend; using the guidance backend may produce errors. #555
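A minimal sketch of structured output with the xgrammar backend on the V1 Engine, assuming vLLM's GuidedDecodingParams API; the model name and JSON schema are illustrative:

```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

# Constrain generation to valid JSON using the xgrammar backend.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", guided_decoding_backend="xgrammar")
params = SamplingParams(
    max_tokens=128,
    guided_decoding=GuidedDecodingParams(json={"type": "object"}),
)
outputs = llm.generate(["Return a JSON object describing a cat."], params)
print(outputs[0].outputs[0].text)
```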
Other#
A new communicator pyhccl is added. It is used to call the CANN HCCL library directly instead of using torch.distributed. More usage of it will be added in the next release. #503
The custom ops build is enabled by default. You should install packages like gcc and cmake first to build vllm-ascend from source. Set the COMPILE_CUSTOM_KERNELS=0 environment variable to disable the compilation if you don't need it. #466
The custom op rotary embedding is enabled by default now to improve performance. #555
v0.7.3rc2 - 2025.03.29#
This is the 2nd release candidate of v0.7.3 for vllm-ascend. Please follow the official doc to start the journey.
Quickstart with container: https://vllm-ascend.readthedocs.io/en/v0.7.3-dev/quick_start.html
Installation: https://vllm-ascend.readthedocs.io/en/v0.7.3-dev/installation.html
Highlights#
Add the Ascend Custom Ops framework. Developers can now write custom ops using AscendC. An example op, rotary_embedding, is added. More tutorials will come soon. Custom Ops compilation is disabled by default when installing vllm-ascend; set COMPILE_CUSTOM_KERNELS=1 to enable it. #371
The V1 engine is basically supported in this release. Full support will be done in the 0.8.X releases. If you hit any issue or have any requirement for the V1 engine, please tell us here. #376
The prefix cache feature works now. You can set enable_prefix_caching=True to enable it. #282
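A minimal sketch of the prefix cache usage mentioned above; the model name is only an example, and prompts sharing a long common prefix benefit most:

```python
from vllm import LLM, SamplingParams

# Prompts that share a long common prefix can reuse cached KV blocks
# when enable_prefix_caching=True.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enable_prefix_caching=True)

shared_prefix = "You are a helpful assistant. Answer briefly.\n\n"
prompts = [shared_prefix + "What is an NPU?", shared_prefix + "What is HCCL?"]
outputs = llm.generate(prompts, SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```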
Core#
Bump torch_npu version to dev20250320.3 to improve accuracy and fix the !!! output problem. #406
Model#
The performance of Qwen2-vl is improved by optimizing patch embedding (Conv3D). #398
Other#
v0.7.3rc1 - 2025.03.14#
🎉 Hello, World! This is the first release candidate of v0.7.3 for vllm-ascend. Please follow the official doc to start the journey.
Quickstart with container: https://vllm-ascend.readthedocs.io/en/v0.7.3-dev/quick_start.html
Installation: https://vllm-ascend.readthedocs.io/en/v0.7.3-dev/installation.html
Highlights#
DeepSeek V3/R1 works well now. Read the official guide to start! #242
Speculative decoding feature is supported. #252
Multi step scheduler feature is supported. #300
Core#
Bump torch_npu version to dev20250308.3 to improve _exponential accuracy.
Added initial support for pooling models. BERT-based models, such as BAAI/bge-base-en-v1.5 and BAAI/bge-reranker-v2-m3, work now. #229
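A short sketch of running one of the BERT-based pooling models above; task="embed" and LLM.embed() are assumptions about the pooling API in this vLLM version, so check the official doc if they differ:

```python
from vllm import LLM

# Embed a sentence with a BERT-based pooling model (API names assumed).
llm = LLM(model="BAAI/bge-base-en-v1.5", task="embed")
outputs = llm.embed(["vLLM Ascend supports pooling models."])
print(len(outputs[0].outputs.embedding))  # embedding dimension
```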
Model#
Other#
Support MTP (Multi-Token Prediction) for DeepSeek V3/R1. #236
[Docs] Added more model tutorials, including DeepSeek, QwQ, Qwen, and Qwen 2.5 VL. See the official doc for details.
Pin modelscope<1.23.0 on vLLM v0.7.3 to resolve: https://github.com/vllm-project/vllm/pull/13807
Known issues#
In some cases, especially when the input/output is very long, the output may be inaccurate. We are working on it; it'll be fixed in the next release.
Garbled output from models has been reduced. If you still hit the issue, try changing generation config values, such as temperature, and try again. There is also a known issue shown below. Any feedback is welcome. #277
v0.7.1rc1 - 2025.02.19#
🎉 Hello, World!
We are excited to announce the first release candidate of v0.7.1 for vllm-ascend.
vLLM Ascend Plugin (vllm-ascend) is a community maintained hardware plugin for running vLLM on the Ascend NPU. With this release, users can now enjoy the latest features and improvements of vLLM on the Ascend NPU.
Please follow the official doc to start the journey. Note that this is a release candidate, and there may be some bugs or issues. We appreciate your feedback and suggestions here.
Highlights#
Core#
Other#
Known issues#
This release relies on an unreleased torch_npu version. It has already been installed within the official container image. Please install it manually if you are using a non-container environment.
There are logs like No platform detected, vLLM is running on UnspecifiedPlatform or Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'") shown when running vllm-ascend. They don't actually affect any functionality or performance; you can just ignore them. This has been fixed in this PR, which will be included in v0.7.3 soon.
There are logs like # CPU blocks: 35064, # CPU blocks: 2730 shown when running vllm-ascend, which should read # NPU blocks:. They don't actually affect any functionality or performance; you can just ignore them. This has been fixed in this PR, which will be included in v0.7.3 soon.