Docker image: cr.metax-tech.com/public-ai-release/maca/vllm-metax:0.14.0-maca.ai3.5.3.102-torch2.8-py312-ubuntu22.04-amd64
GPUs: two MetaX N260 cards
mx-smi version: 2.2.9
=================== MetaX System Management Interface Log ===================
Timestamp : Wed Mar 18 09:39:39 2026
Attached GPUs : 2
+---------------------------------------------------------------------------------+
| MX-SMI 2.2.9 Kernel Mode Driver Version: 3.4.4 |
| MACA Version: 3.3.0.15 BIOS Version: 1.29.1.0 |
|------------------+-----------------+---------------------+----------------------|
| Board Name | GPU Persist-M | Bus-id | GPU-Util sGPU-M |
| Pwr:Usage/Cap | Temp Perf | Memory-Usage | GPU-State |
|==================+=================+=====================+======================|
| 0 MetaX N260 | 0 Off | 0000:41:00.0 | 0% Enabled |
| 52W / 225W | 44C P9 | 6645/65536 MiB | Available |
+------------------+-----------------+---------------------+----------------------+
| 1 MetaX N260 | 1 Off | 0000:c1:00.0 | 0% Enabled |
| 47W / 225W | 41C P9 | 6619/65536 MiB | Available |
+------------------+-----------------+---------------------+----------------------+
+---------------------------------------------------------------------------------+
| Sliced GPU |
|------------------------------------+---------------------+----------------------|
| Minor GPU sGPU-Id Compute | Vram Quota | sGPU-Util |
|====================================+=====================+======================|
| 000 0 0 5% | 0/55296 MiB | 0% |
+------------------------------------+---------------------+----------------------+
| 001 1 0 5% | 0/55296 MiB | 0% |
+------------------------------------+---------------------+----------------------+
+---------------------------------------------------------------------------------+
| Process: |
| GPU PID Process Name GPU Memory |
| Usage(MiB) |
|=================================================================================|
| no process found |
+---------------------------------------------------------------------------------+
Command run:
root@dzdwd-server:/workspace# CUDA_VISIBLE_DEVICES=0,1 \
nohup vllm serve /models/Qwen3.5-35B-A3B-FP8 -tp 2 \
--port 8889 \
--trust-remote-code \
--dtype auto \
--max-model-len 204800 \
--max-num-batched-tokens 204800 \
--max-num-seqs 7 \
--swap-space 32 \
--gpu-memory-utilization 0.92 \
--enable-prefix-caching \
--served-model-name Qwen3.5-35B-A3B-FP8 \
--enable-auto-tool-choice \
--api-key Dzdwd@85416 \
--tool-call-parser hermes > vllm_serve.log 2>&1 &
[1] 56618
Error output:
nohup: ignoring input
INFO 03-18 09:45:45 [__init__.py:43] Available plugins for group vllm.platform_plugins:
INFO 03-18 09:45:45 [__init__.py:45] - metax -> vllm_metax:register
INFO 03-18 09:45:45 [__init__.py:48] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 03-18 09:45:45 [__init__.py:217] Platform plugin metax is activated
INFO 03-18 09:45:45 [envs.py:89] Note!: set VLLM_USE_FLASHINFER_SAMPLER to False. Reason: flashinfer sampler are not supported on maca
INFO 03-18 09:45:45 [envs.py:89] Note!: set VLLM_USE_TRTLLM_ATTENTION to False. Reason: trtllm interfaces are not supported
INFO 03-18 09:45:45 [envs.py:89] Note!: set VLLM_DISABLE_FLASHINFER_PREFILL to True. Reason: disable flashinfer prefill(use flash_attn prefill) on maca
INFO 03-18 09:45:45 [envs.py:89] Note!: set VLLM_USE_CUDNN_PREFILL to False. Reason: cudnn prefill interfaces are not supported
INFO 03-18 09:45:45 [envs.py:89] Note!: set VLLM_USE_TRTLLM_RAGGED_DEEPSEEK_PREFILL to False. Reason: trtllm interfaces are not supported
INFO 03-18 09:45:45 [envs.py:89] Note!: set VLLM_DISABLE_SHARED_EXPERTS_STREAM to True. Reason: shared expert stream may cause hang
/opt/conda/lib/python3.10/site-packages/torchvision/datapoints/__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
warnings.warn(_BETA_TRANSFORMS_WARNING)
/opt/conda/lib/python3.10/site-packages/torchvision/transforms/v2/__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
warnings.warn(_BETA_TRANSFORMS_WARNING)
INFO Print the version information of mcoplib during compilation.
Version info:Mcoplib_Version = '0.3.1'
Build_Maca_Version = '3.3.0.15'
GIT_BRANCH = 'HEAD'
GIT_COMMIT = '836541d'
Vllm Op Version = 0.13.0
SGlang Op Version = 0.5.7
INFO Staring Check the current MACA version of the operating environment.
INFO: Release major.minor matching, successful:3.3.
INFO 03-18 09:45:55 [fa_utils.py:15] Using Maca version of flash attention, which only supports version 2.
WARNING 03-18 09:46:05 [registry.py:774] Model architecture DeepSeekMTPModel is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_mtp:DeepSeekMTP.
WARNING 03-18 09:46:05 [registry.py:774] Model architecture DeepseekV2ForCausalLM is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_v2:DeepseekV2ForCausalLM.
WARNING 03-18 09:46:05 [registry.py:774] Model architecture DeepseekV3ForCausalLM is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_v2:DeepseekV3ForCausalLM.
WARNING 03-18 09:46:05 [registry.py:774] Model architecture DeepseekV32ForCausalLM is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_v2:DeepseekV3ForCausalLM.
WARNING 03-18 09:46:05 [__init__.py:78] The quantization method 'awq' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.awq.MacaAWQConfig'>.
WARNING 03-18 09:46:05 [__init__.py:78] The quantization method 'awq_marlin' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.awq_marlin.MacaAWQMarlinConfig'>.
WARNING 03-18 09:46:05 [__init__.py:78] The quantization method 'gptq' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.gptq.MacaGPTQConfig'>.
WARNING 03-18 09:46:05 [__init__.py:78] The quantization method 'gptq_marlin' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.gptq_marlin.MacaGPTQMarlinConfig'>.
WARNING 03-18 09:46:05 [__init__.py:78] The quantization method 'moe_wna16' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.moe_wna16.MacaMoeWNA16Config'>.
WARNING 03-18 09:46:05 [__init__.py:78] The quantization method 'compressed-tensors' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.compressed_tensors.MacaCompressedTensorsConfig'>.
WARNING 03-18 09:46:06 [attention.py:82] Using VLLM_USE_CUDNN_PREFILL environment variable is deprecated and will be removed in v0.14.0 or v1.0.0, whichever is soonest. Please use --attention-config.use_cudnn_prefill command line argument or AttentionConfig(use_cudnn_prefill=...) config field instead.
WARNING 03-18 09:46:06 [attention.py:82] Using VLLM_USE_TRTLLM_RAGGED_DEEPSEEK_PREFILL environment variable is deprecated and will be removed in v0.14.0 or v1.0.0, whichever is soonest. Please use --attention-config.use_trtllm_ragged_deepseek_prefill command line argument or AttentionConfig(use_trtllm_ragged_deepseek_prefill=...) config field instead.
WARNING 03-18 09:46:06 [attention.py:82] Using VLLM_USE_TRTLLM_ATTENTION environment variable is deprecated and will be removed in v0.14.0 or v1.0.0, whichever is soonest. Please use --attention-config.use_trtllm_attention command line argument or AttentionConfig(use_trtllm_attention=...) config field instead.
WARNING 03-18 09:46:06 [attention.py:82] Using VLLM_DISABLE_FLASHINFER_PREFILL environment variable is deprecated and will be removed in v0.14.0 or v1.0.0, whichever is soonest. Please use --attention-config.disable_flashinfer_prefill command line argument or AttentionConfig(disable_flashinfer_prefill=...) config field instead.
(APIServer pid=56618) INFO 03-18 09:46:06 [api_server.py:1351] vLLM API server version 0.13.0
(APIServer pid=56618) INFO 03-18 09:46:06 [utils.py:253] non-default args: {'model_tag': '/models/Qwen3.5-35B-A3B-FP8', 'port': 8889, 'api_key': ['Dzdwd@85416'], 'enable_auto_tool_choice': True, 'tool_call_parser': 'hermes', 'model': '/models/Qwen3.5-35B-A3B-FP8', 'trust_remote_code': True, 'max_model_len': 204800, 'served_model_name': ['Qwen3.5-35B-A3B-FP8'], 'tensor_parallel_size': 2, 'gpu_memory_utilization': 0.92, 'swap_space': 32.0, 'enable_prefix_caching': True, 'max_num_batched_tokens': 204800, 'max_num_seqs': 7}
(APIServer pid=56618) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=56618) Traceback (most recent call last):
(APIServer pid=56618) File "/opt/conda/bin/vllm", line 8, in <module>
(APIServer pid=56618) sys.exit(main())
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/cli/main.py", line 73, in main
(APIServer pid=56618) args.dispatch_function(args)
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/cli/serve.py", line 60, in cmd
(APIServer pid=56618) uvloop.run(run_server(args))
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/uvloop/__init__.py", line 69, in run
(APIServer pid=56618) return loop.run_until_complete(wrapper())
(APIServer pid=56618) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=56618) return await main
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 1398, in run_server
(APIServer pid=56618) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 1417, in run_server_worker
(APIServer pid=56618) async with build_async_engine_client(
(APIServer pid=56618) File "/opt/conda/lib/python3.10/contextlib.py", line 199, in __aenter__
(APIServer pid=56618) return await anext(self.gen)
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 172, in build_async_engine_client
(APIServer pid=56618) async with build_async_engine_client_from_engine_args(
(APIServer pid=56618) File "/opt/conda/lib/python3.10/contextlib.py", line 199, in __aenter__
(APIServer pid=56618) return await anext(self.gen)
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 198, in build_async_engine_client_from_engine_args
(APIServer pid=56618) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1332, in create_engine_config
(APIServer pid=56618) model_config = self.create_model_config()
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1189, in create_model_config
(APIServer pid=56618) return ModelConfig(
(APIServer pid=56618) File "/opt/conda/lib/python3.10/site-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
(APIServer pid=56618) s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=56618) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
(APIServer pid=56618) Value error, The checkpoint you are trying to load has model type `qwen3_5_moe` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
(APIServer pid=56618)
(APIServer pid=56618) You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git` [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
(APIServer pid=56618) For further information visit https://errors.pydantic.dev/2.12/v/value_error
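The ValidationError above is raised because vLLM reads `model_type` from the checkpoint's `config.json` and the installed Transformers has no architecture registered under `qwen3_5_moe`. A minimal sketch of that check, assuming the failure mechanism described in the error message (the `known_types` set and the inline config snippet are hypothetical stand-ins for Transformers' internal architecture registry):

```python
import json

def check_model_type(config_json: str, known_types: set[str]) -> str:
    """Read model_type from a checkpoint config and report whether the
    (hypothetical) registry of supported architectures recognizes it."""
    model_type = json.loads(config_json)["model_type"]
    status = "recognized" if model_type in known_types else "UNRECOGNIZED"
    return f"{model_type}: {status}"

# The model_type from the error above vs. a hypothetical registry that
# predates the Qwen3.5 MoE architecture.
print(check_model_type('{"model_type": "qwen3_5_moe"}',
                       {"qwen2_moe", "qwen3_moe"}))
# qwen3_5_moe: UNRECOGNIZED
```

As the error message itself suggests, the fix is to upgrade the Transformers package inside the container (`pip install --upgrade transformers`), or, if no released version supports the checkpoint yet, install from source with `pip install git+https://github.com/huggingface/transformers.git`.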