When configuring Ray, if the machines have IB (InfiniBand) cards, is it mandatory to map the compute network interface?
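For context, a minimal sketch of what "mapping the compute NIC" typically looks like when bringing up Ray over IB; the interface name ib0 and the address 10.0.0.1 are placeholders, not values from this environment:
# head node (ib0 / 10.0.0.1 are assumed placeholders)
export NCCL_SOCKET_IFNAME=ib0
export GLOO_SOCKET_IFNAME=ib0
ray start --head --node-ip-address=10.0.0.1 --port=6379
# each worker node
export NCCL_SOCKET_IFNAME=ib0
export GLOO_SOCKET_IFNAME=ib0
ray start --address=10.0.0.1:6379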
For example, which of the following models are supported and which are not:
Qwen3-235B
Qwen3-VL
Qwen3-Embedding
Qwen3-rerank
Z-Image
GLM4.6/7
Deepseek-V3.2
Deepseek-V3
Deepseek-R1
Install info: python -m pip install paddle-metax-gpu==3.3.0 -i https://www.paddlepaddle.org.cn/packages/stable/maca/
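A quick sanity check of the install (a minimal sketch, assuming the same Python environment the package was installed into):
python -c "import paddle; print(paddle.__version__); paddle.utils.run_check()"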
Model run script: see the uploaded file.
Models used:
PP-OCRv5_server_det
PP-LCNet_x1_0_doc_ori
PP-LCNet_x1_0_textline_ori
PP-OCRv5_server_rec
UVDoc
Image: cr.metax-tech.com/public-ai-release/maca/vllm-metax:0.11.0-maca.ai3.3.0.11-torch2.6-py310-ubuntu22.04-amd64
Container creation command: docker run -it --restart always --device=/dev/dri --device=/dev/mxcd --group-add 44 --name GLM-4.1V-9B-Thinking --device=/dev/mem --network=host --security-opt seccomp=unconfined --security-opt apparmor=unconfined --shm-size '100gb' --ulimit memlock=-1 -v /mnt/data/models/GLM-4.1V-9B-Thinking:/mnt/data/models/GLM-4.1V-9B-Thinking cr.metax-tech.com/public-ai-release/maca/vllm-metax:0.11.0-maca.ai3.3.0.11-torch2.6-py310-ubuntu22.04-amd64
Service start command: vllm serve /mnt/data/models/GLM-4.1V-9B-Thinking/ --trust-remote-code --dtype auto --max-model-len 4096 --gpu-memory-utilization 0.9 --served-model-name GLM-4.1V-9B-Thinking
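For reference, once the service starts it exposes vLLM's OpenAI-compatible API; a request would look roughly like this (port 8000 is vLLM's default, the prompt is a placeholder):
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "GLM-4.1V-9B-Thinking", "messages": [{"role": "user", "content": "Hello"}]}'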
Error log:
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] EngineCore failed to start.
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] Traceback (most recent call last):
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1057, in call_hf_processor
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] output = hf_processor(**data,
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/transformers/models/glm4v/processing_glm4v.py", line 150, in __call__
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] videos_inputs = self.video_processor(videos=videos, **output_kwargs["videos_kwargs"])
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/transformers/video_processing_utils.py", line 206, in __call__
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] return self.preprocess(videos, **kwargs)
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/transformers/video_processing_utils.py", line 387, in preprocess
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] preprocessed_videos = self._preprocess(videos=videos, **kwargs)
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/transformers/models/glm4v/video_processing_glm4v.py", line 177, in _preprocess
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] resized_height, resized_width = smart_resize(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/transformers/models/glm4v/image_processing_glm4v.py", line 59, in smart_resize
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] raise ValueError(f"t:{num_frames} must be larger than temporal_factor:{temporal_factor}")
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] ValueError: t:1 must be larger than temporal_factor:2
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708]
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] The above exception was the direct cause of the following exception:
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708]
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] Traceback (most recent call last):
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 83, in __init__
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 54, in __init__
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] self._init_executor()
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 54, in _init_executor
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] self.collective_rpc("init_device")
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 83, in collective_rpc
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] return [run_method(self.driver_worker, method, args, kwargs)]
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/utils/__init__.py", line 3122, in run_method
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] return func(*args, **kwargs)
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 259, in init_device
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] self.worker.init_device() # type: ignore
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 201, in init_device
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] self.model_runner: GPUModelRunner = GPUModelRunner(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/worker/gpu_model_runner.py", line 421, in __init__
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] self.mm_budget = MultiModalBudget(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/worker/utils.py", line 47, in __init__
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] max_tokens_by_modality = mm_registry \
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 167, in get_max_tokens_per_item_by_nonzero_modality
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] max_tokens_per_item = self.get_max_tokens_per_item_by_modality(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/registry.py", line 143, in get_max_tokens_per_item_by_modality
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] return profiler.get_mm_max_contiguous_tokens(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/profiling.py", line 282, in get_mm_max_contiguous_tokens
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] return self._get_mm_max_tokens(seq_len,
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/profiling.py", line 262, in _get_mm_max_tokens
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] mm_inputs = self._get_dummy_mm_inputs(seq_len, mm_counts)
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/profiling.py", line 173, in _get_dummy_mm_inputs
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] return self.processor.apply(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 2036, in apply
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] ) = self._cached_apply_hf_processor(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1826, in _cached_apply_hf_processor
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] ) = self._apply_hf_processor_main(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1572, in _apply_hf_processor_main
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] mm_processed_data = self._apply_hf_processor_mm_only(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1529, in _apply_hf_processor_mm_only
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] _, mm_processed_data, _ = self._apply_hf_processor_text_mm(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1456, in _apply_hf_processor_text_mm
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] processed_data = self._call_hf_processor(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/glm4_1v.py", line 1207, in _call_hf_processor
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] video_outputs = super()._call_hf_processor(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1417, in _call_hf_processor
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] return self.info.ctx.call_hf_processor(
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] File "/opt/conda/lib/python3.10/site-packages/vllm/multimodal/processing.py", line 1080, in call_hf_processor
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] raise ValueError(msg) from exc
(EngineCore_DP0 pid=137) ERROR 12-19 09:43:42 [core.py:708] ValueError: Failed to apply Glm4vProcessor on data={'text': '<|begin_of_video|><|video|><|end_of_video|>', 'videos': [[array([[[[255, 255, 255],