MetaX-Tech Developer Forum — Forum home
  • MetaX Developers

trdyun

  • Members
  • Joined April 6, 2026

trdyun has posted 1 message.

  • trdyun
    Members
    Error when deploying the MiniMax M2.5 model on MXC500 · Being resolved · April 6, 2026, 19:27

    As the title says, the error output is as follows:
    root@4n8mh6oeu78t9-0:/mnt/model# ./start.sh
    INFO 04-06 18:49:28 [__init__.py:43] Available plugins for group vllm.platform_plugins:
    INFO 04-06 18:49:28 [__init__.py:45] - metax -> vllm_metax:register
    INFO 04-06 18:49:28 [__init__.py:48] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load.
    INFO 04-06 18:49:28 [__init__.py:217] Platform plugin metax is activated
    INFO 04-06 18:49:28 [envs.py:83] Plugin sets VLLM_USE_FLASHINFER_SAMPLER to False. Reason: flashinfer sampler are not supported on maca
    INFO 04-06 18:49:28 [envs.py:83] Plugin sets VLLM_ENGINE_READY_TIMEOUT_S to 3600. Reason: set timeout to 3600s for model loading
    INFO Print the version information of mcoplib during compilation.

    Version info:Mcoplib_Version = '0.4.0'
    Build_Maca_Version = '3.5.3.18'
    GIT_BRANCH = 'HEAD'
    GIT_COMMIT = 'fe3a7e2'
    Vllm Op Version = 0.15.0
    SGlang Op Version = 0.5.7 && 0.5.8

    INFO Staring Check the current MACA version of the operating environment.

    INFO: Release major.minor matching, successful:3.5.

    WARNING 04-06 18:49:32 [__init__.py:86] The quantization method 'awq' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.awq.MacaAWQConfig'>.
    WARNING 04-06 18:49:32 [__init__.py:86] The quantization method 'awq_marlin' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.awq_marlin.MacaAWQMarlinConfig'>.
    WARNING 04-06 18:49:32 [__init__.py:86] The quantization method 'compressed-tensors' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.compressed_tensors.MacaCompressedTensorsConfig'>.
    WARNING 04-06 18:49:32 [__init__.py:86] The quantization method 'gptq' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.gptq.MacaGPTQConfig'>.
    WARNING 04-06 18:49:32 [__init__.py:86] The quantization method 'gptq_marlin' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.gptq_marlin.MacaGPTQMarlinConfig'>.
    WARNING 04-06 18:49:32 [__init__.py:86] The quantization method 'moe_wna16' already exists and will be overwritten by the quantization config <class 'vllm_metax.quant_config.moe_wna16.MacaMoeWNA16Config'>.
    WARNING 04-06 18:49:36 [registry.py:812] Model architecture DeepSeekMTPModel is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_mtp:DeepSeekMTP.
    WARNING 04-06 18:49:36 [registry.py:812] Model architecture DeepseekV2ForCausalLM is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_v2:DeepseekV2ForCausalLM.
    WARNING 04-06 18:49:36 [registry.py:812] Model architecture DeepseekV3ForCausalLM is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_v2:DeepseekV3ForCausalLM.
    WARNING 04-06 18:49:36 [registry.py:812] Model architecture DeepseekV32ForCausalLM is already registered, and will be overwritten by the new model class vllm_metax.models.deepseek_v2:DeepseekV3ForCausalLM.
    WARNING 04-06 18:49:36 [registry.py:812] Model architecture KimiK25ForConditionalGeneration is already registered, and will be overwritten by the new model class vllm_metax.models.kimi_k25:KimiK25ForConditionalGeneration.
    (APIServer pid=317) INFO 04-06 18:49:36 [utils.py:325]
    (APIServer pid=317) INFO 04-06 18:49:36 [utils.py:325] █ █ █▄ ▄█
    (APIServer pid=317) INFO 04-06 18:49:36 [utils.py:325] ▄▄ ▄█ █ █ █ ▀▄▀ █ version 0.15.0
    (APIServer pid=317) INFO 04-06 18:49:36 [utils.py:325] █▄█▀ █ █ █ █ model /mnt/model
    (APIServer pid=317) INFO 04-06 18:49:36 [utils.py:325] ▀▀ ▀▀▀▀▀ ▀▀▀▀▀ ▀ ▀
    (APIServer pid=317) INFO 04-06 18:49:36 [utils.py:325]
    (APIServer pid=317) INFO 04-06 18:49:36 [utils.py:261] non-default args: {'model_tag': '/mnt/model', 'api_server_count': 1, 'host': '0.0.0.0', 'port': 7878, 'enable_auto_tool_choice': True, 'tool_call_parser': 'minimax_m2', 'model': '/mnt/model', 'trust_remote_code': True, 'served_model_name': ['MiniMax-M2.5-196k'], 'reasoning_parser': 'minimax_m2_append_think', 'tensor_parallel_size': 8, 'enable_expert_parallel': True, 'max_num_seqs': 8, 'compilation_config': {'level': None, 'mode': None, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': [], 'splitting_ops': None, 'compile_mm_encoder': False, 'compile_sizes': None, 'compile_ranges_split_points': None, 'inductor_compile_config': {'enable_auto_functionalized_v2': False}, 'inductor_passes': {}, 'cudagraph_mode': <CUDAGraphMode.PIECEWISE: 1>, 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': None, 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': None, 'pass_config': {}, 'max_cudagraph_capture_size': None, 'dynamic_shapes_config': {'type': <DynamicShapesType.BACKED: 'backed'>, 'evaluate_guards': False, 'assume_32_bit_indexing': True}, 'local_cache_dir': None}}
    (APIServer pid=317) The argument trust_remote_code is to be used with Auto classes. It has no effect here and is ignored.
    (APIServer pid=317) The argument trust_remote_code is to be used with Auto classes. It has no effect here and is ignored.
    (APIServer pid=317) INFO 04-06 18:49:36 [model.py:541] Resolved architecture: MiniMaxM2ForCausalLM
    (APIServer pid=317) ERROR 04-06 18:49:36 [repo_utils.py:47] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/mnt/model'. Use repo_type argument if needed., retrying 1 of 2
    (APIServer pid=317) ERROR 04-06 18:49:38 [repo_utils.py:45] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/mnt/model'. Use repo_type argument if needed.
    (APIServer pid=317) INFO 04-06 18:49:38 [model.py:1882] Downcasting torch.float32 to torch.bfloat16.
    (APIServer pid=317) INFO 04-06 18:49:38 [model.py:1561] Using max model len 196608
    (APIServer pid=317) Traceback (most recent call last):
    (APIServer pid=317) File "/opt/conda/bin/vllm", line 8, in <module>
    (APIServer pid=317) sys.exit(main())
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/cli/main.py", line 73, in main
    (APIServer pid=317) args.dispatch_function(args)
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/cli/serve.py", line 111, in cmd
    (APIServer pid=317) uvloop.run(run_server(args))
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/uvloop/__init__.py", line 69, in run
    (APIServer pid=317) return loop.run_until_complete(wrapper())
    (APIServer pid=317) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/uvloop/__init__.py", line 48, in wrapper
    (APIServer pid=317) return await main
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 919, in run_server
    (APIServer pid=317) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 938, in run_server_worker
    (APIServer pid=317) async with build_async_engine_client(
    (APIServer pid=317) File "/opt/conda/lib/python3.10/contextlib.py", line 199, in __aenter__
    (APIServer pid=317) return await anext(self.gen)
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 147, in build_async_engine_client
    (APIServer pid=317) async with build_async_engine_client_from_engine_args(
    (APIServer pid=317) File "/opt/conda/lib/python3.10/contextlib.py", line 199, in __aenter__
    (APIServer pid=317) return await anext(self.gen)
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 173, in build_async_engine_client_from_engine_args
    (APIServer pid=317) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1374, in create_engine_config
    (APIServer pid=317) model_config = self.create_model_config()
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1228, in create_model_config
    (APIServer pid=317) return ModelConfig(
    (APIServer pid=317) File "/opt/conda/lib/python3.10/site-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
    (APIServer pid=317) s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
    (APIServer pid=317) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
    (APIServer pid=317) Value error, fp8 quantization is currently not supported in maca. [type=value_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]
    (APIServer pid=317) For further information visit errors.pydantic.dev/2.12/v/value_error
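    The ValidationError at the end ("fp8 quantization is currently not supported in maca") is raised while building `ModelConfig`, which suggests the checkpoint under /mnt/model declares fp8 in its config. A quick sanity check, as a sketch (the helper name and the sample dict are illustrative, not taken from the actual model files):

    ```python
    # Hypothetical helper: report which quantization method a Hugging
    # Face-style config dict declares. If it returns "fp8", that matches
    # the "fp8 quantization is currently not supported in maca" error.
    def quant_method(config: dict) -> str | None:
        """Return config["quantization_config"]["quant_method"], if present."""
        return (config.get("quantization_config") or {}).get("quant_method")

    # Illustrative config fragment (not copied from the real /mnt/model;
    # in practice, load /mnt/model/config.json with json.load first):
    sample = {"model_type": "minimax", "quantization_config": {"quant_method": "fp8"}}
    print(quant_method(sample))  # fp8
    ```

    If the real config.json declares fp8, the checkpoint would need to be swapped for a non-fp8 (e.g. bf16) variant, or the quantization handled by a method the MACA plugin supports.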
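    For readers reproducing this, the "non-default args" line above implies a launch command roughly like the following (reconstructed from the log; the actual start.sh contents are not shown in the post):

    ```shell
    # Reconstructed vllm serve invocation based on the logged non-default args
    vllm serve /mnt/model \
      --host 0.0.0.0 \
      --port 7878 \
      --served-model-name MiniMax-M2.5-196k \
      --trust-remote-code \
      --tensor-parallel-size 8 \
      --enable-expert-parallel \
      --max-num-seqs 8 \
      --enable-auto-tool-choice \
      --tool-call-parser minimax_m2 \
      --reasoning-parser minimax_m2_append_think
    ```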

  • MetaX Developer Forum
powered by Misago