• Members 7 posts
    April 8, 2026 16:44

    With the latest image cr.metax-tech.com/public-ai-release/maca/vllm-metax:0.15.0-maca.ai3.5.3.203-torch2.8-py312-ubuntu22.04-amd64, qwen3.5-27b and the other Qwen3.5 MoE models still fail to deploy correctly.

    Which version will support them correctly? Or could you share a sample launch command?

    Version info: Mcoplib_Version = '0.4.0'
    Build_Maca_Version = '3.5.3.18'
    GIT_BRANCH = 'HEAD'
    GIT_COMMIT = 'fe3a7e2'
    Vllm Op Version = 0.15.0
    SGlang Op Version = 0.5.7 && 0.5.8

    (APIServer pid=1) INFO 04-08 16:27:32 [utils.py:325] [vLLM startup banner]
    (APIServer pid=1) INFO 04-08 16:27:32 [utils.py:325] version 0.15.0
    (APIServer pid=1) INFO 04-08 16:27:32 [utils.py:325] model /models/qwen35-27b
    (APIServer pid=1) INFO 04-08 16:27:32 [utils.py:261] non-default args: {'model_tag': '/models/qwen35-27b', 'api_server_count': 1, 'host': '0.0.0.0', 'port': 9001, 'enable_auto_tool_choice': True, 'tool_call_parser': 'qwen3_coder', 'model': '/models/qwen35-27b', 'dtype': 'bfloat16', 'max_model_len': 32768, 'reasoning_parser': 'qwen3', 'tensor_parallel_size': 2, 'gpu_memory_utilization': 0.95, 'enable_prefix_caching': True, 'max_num_batched_tokens': 16384, 'max_num_seqs': 8}
    (APIServer pid=1) Traceback (most recent call last):
    (APIServer pid=1) File "/opt/conda/bin/vllm", line 8, in <module>
    (APIServer pid=1) sys.exit(main())
    (APIServer pid=1) ^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 73, in main
    (APIServer pid=1) args.dispatch_function(args)
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 111, in cmd
    (APIServer pid=1) uvloop.run(run_server(args))
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/uvloop/init.py", line 96, in run
    (APIServer pid=1) return asyncio.run(
    (APIServer pid=1) ^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/asyncio/runners.py", line 195, in run
    (APIServer pid=1) return runner.run(main)
    (APIServer pid=1) ^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/asyncio/runners.py", line 118, in run
    (APIServer pid=1) return self._loop.run_until_complete(task)
    (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/uvloop/__init
    .py", line 48, in wrapper
    (APIServer pid=1) return await main
    (APIServer pid=1) ^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 919, in run_server
    (APIServer pid=1) await run_server_worker(listen_address, sock, args, uvicorn_kwargs)
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 938, in run_server_worker
    (APIServer pid=1) async with build_async_engine_client(
    (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/contextlib.py", line 210, in aenter
    (APIServer pid=1) return await anext(self.gen)
    (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 147, in build_async_engine_client
    (APIServer pid=1) async with build_async_engine_client_from_engine_args(
    (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/contextlib.py", line 210, in aenter
    (APIServer pid=1) return await anext(self.gen)
    (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 173, in build_async_engine_client_from_engine_args
    (APIServer pid=1) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
    (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1374, in create_engine_config
    (APIServer pid=1) model_config = self.create_model_config()
    (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1228, in create_model_config
    (APIServer pid=1) return ModelConfig(
    (APIServer pid=1) ^^^^^^^^^^^^
    (APIServer pid=1) File "/opt/conda/lib/python3.12/site-packages/pydantic/_internal/_dataclasses.py", line 121, in init
    (APIServer pid=1) s.pydantic_validator.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
    (APIServer pid=1) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
    (APIServer pid=1) Value error,
    The checkpoint you are trying to load has model type qwen3_5 but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
    (APIServer pid=1)
    (APIServer pid=1) You can update Transformers with the command pip install --upgrade transformers. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command pip install git+https://github.com/huggingface/transformers.git [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
    (APIServer pid=1) For further information visit errors.pydantic.dev/2.12/v/value_error
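
    For reference, the launch command that produced this log, reconstructed here from the non-default args printed above (flag spellings follow the standard vLLM CLI; adjust paths to your setup):

    vllm serve /models/qwen35-27b \
        --host 0.0.0.0 --port 9001 \
        --dtype bfloat16 \
        --max-model-len 32768 \
        --tensor-parallel-size 2 \
        --gpu-memory-utilization 0.95 \
        --enable-prefix-caching \
        --max-num-batched-tokens 16384 \
        --max-num-seqs 8 \
        --enable-auto-tool-choice \
        --tool-call-parser qwen3_coder \
        --reasoning-parser qwen3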

  • Members 314 posts
    April 8, 2026 16:47

    Dear developer, please upgrade the transformers version to 5.2.0.

  • Thread has been moved from 公共 (Public).

  • Members 7 posts
    April 8, 2026 16:49

    This is the official image. Does that mean I need to build an updated image on top of this official image?

  • Members 7 posts
    April 8, 2026 16:57

    Also, when I try to update transformers, there is a dependency conflict:

    vllm-metax 0.15.0+g24fb31.d20260310.maca3.5.3.20.torch2.8 requires transformers<5,>=4.56.0, but you have transformers 5.5.0 which is incompatible.

  • Members 314 posts
    April 8, 2026 16:59

    Dear developer, yes: update the transformers version inside the container and commit the image, or build a new image from a Dockerfile. The transformers version in the vllm-metax image matches the community vLLM release, which reports the same error.
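
    A minimal sketch of the Dockerfile route (the base image is the tag from the first post; the local tag name below is illustrative, and pip's resolver warning about vllm-metax's transformers<5 pin is expected):

    # Dockerfile
    FROM cr.metax-tech.com/public-ai-release/maca/vllm-metax:0.15.0-maca.ai3.5.3.203-torch2.8-py312-ubuntu22.04-amd64
    RUN pip install transformers==5.2.0

    # build and then start the container from the new tag
    docker build -t vllm-metax-qwen35:local .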

  • Members 314 posts
    April 8, 2026 17:00

    Dear developer, please upgrade the transformers version to 5.2.0.

  • Members 7 posts
    April 8, 2026 17:08

    Please take another look at the error.

    vllm-metax depends on an old transformers release, but qwen3.5 needs a newer transformers; even when updating, vllm-metax still pins a 4.x transformers.

    How should this be resolved?

  • Members 314 posts
    April 8, 2026 17:11

    Dear developer, please run pip install transformers==5.2.0 inside the container.
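
    The resolver message quoted above is a warning rather than a hard failure: pip had already installed the new version when it printed it ("but you have transformers 5.5.0"), and the same warning will appear when pinning 5.2.0. To install and verify inside the container:

    pip install transformers==5.2.0
    python -c "import transformers; print(transformers.__version__)"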

  • Members 7 posts
    April 8, 2026 19:05
  • Members 314 posts
    April 8, 2026 19:11

    Dear developer, please run pip install transformers==5.2.0 inside the container; do not omit the .0.

  • Members 7 posts
    April 8, 2026 19:12

    The command (pip install transformers==5.2.0) did not omit it; the .0 was only omitted in my reply.

  • Members 314 posts
    April 8, 2026 19:13

    Dear developer, please provide the detailed error log.

  • Members 7 posts
    April 8, 2026 19:14