• Members 17 posts
    March 23, 2026 16:32

    Bug report: MCCL cross-node RoCE memory registration failure (ibv_reg_mr Invalid argument)
    [Environment]

    Hardware: 2 nodes with 16 MetaX C500 GPUs in total (8 per node); the head nodes are two H3C UniServer R5330 G7 servers (C500×16)
    Network: RoCE NICs reported by ibstat as 40 Gbps (device names rocep6s0, rocep95s0); the cards themselves are 400G Mellanox ConnectX-7
    Software: MACA 3.5.3 / MCCL 2.16.5

    [Symptom]
    When running the cross-node all_reduce_perf test via mpirun with IB/RoCE hardware acceleration enabled, MCCL crashes during initialization with:
    MCCL WARN Call to ibv_reg_mr failed with error Invalid argument
    [Troubleshooting done so far]
    Network layer OK: cross-node ping over the RoCE subnet (192.168.100.x) shows latency < 0.1 ms; firewalld and SELinux are disabled on both ends.
    System limits OK: ulimit -l verified as unlimited on both ends via MPI, ruling out a memlock limit.
    iova2 fallback test: adding -x MCCL_IB_PCI_RELAXED_ORDERING=0 changes the error from ibv_reg_mr_iova2 failed to ibv_reg_mr failed, still returning Invalid argument (retcode 2).
    TCP control group (key evidence): adding -x MCCL_IB_DISABLE=1 to force plain TCP/socket communication makes the test pass cleanly (#wrong 0).
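Before digging further into MCCL itself, the two most common causes of an `ibv_reg_mr` "Invalid argument" can be probed per node outside MCCL. A minimal read-only sketch (device names taken from the ibstat output above):

```shell
#!/bin/bash
# Per-node probe for the two usual ibv_reg_mr "Invalid argument" causes
# outside MCCL: the locked-memory limit and the verbs device registration.
memlock="$(ulimit -l)"
echo "memlock limit: ${memlock}"          # expect 'unlimited' on both nodes
if [ -d /sys/class/infiniband ]; then
    ls /sys/class/infiniband              # expect rocep6s0 and rocep95s0
else
    echo "no verbs devices registered with the kernel"
fi
```

Run it under the same MPI launcher used for the benchmark so the limits reflect the actual job environment, not an interactive login shell.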


    Contents of the MCCL test script:
    [root@localhost /opt/maca/samples/mccl_tests/perf]# cat cluster.sh

    #!/bin/bash

    MACA_PATH="${MACA_PATH:-/opt/maca}"

    HOST_IP=192.168.1.204:8,192.168.1.205:8
    GPU_NUM=16

    TEST_DIR=$MACA_PATH/samples/mccl_tests/perf/mccl_perf

    BENCH_NAMES="all_reduce_perf all_gather_perf reduce_scatter_perf sendrecv_perf alltoall_perf"

    BENCH_NAMES="all_reduce_perf"

    if [[ -z "$1" || -z "$2" || -z "$3" ]]; then
        echo "Use the default ip addr. Run with parameters for custom ip addr, for example: bash cluster.sh ip_1:proc_count,ip_2:proc_count gpu_num test_name"
    else
        HOST_IP=$1
        GPU_NUM=$2

        if [ "$3" = "all" ]; then
            BENCH_NAMES="all_reduce_perf all_gather_perf reduce_scatter_perf sendrecv_perf alltoall_perf"
        else
            if [ -e "$TEST_DIR/$3" ]; then
                BENCH_NAMES=$3
            else
                echo "$TEST_DIR/$3 does not exist!"
                exit 1
            fi
        fi
    fi

    IP_MASK="$(echo "$HOST_IP" | cut -d. -f1-3).0/24"

    # Overridden below: MPI control traffic is pinned to the RoCE subnet
    IP_MASK="192.168.100.0/24"
    IB_PORT=rocep6s0,rocep95s0

    PERF_ENV="-x FORCE_ACTIVE_WAIT=2"
    LIB_PATH_ENV="-x MACA_PATH=${MACA_PATH} -x LD_LIBRARY_PATH=${MACA_PATH}/lib:${MACA_PATH}/ompi/lib:${MACA_PATH}/ucx/lib"

    ENV_VAR="-x MCCL_IB_HCA=${IB_PORT} -x MCCL_CROSS_NIC=1 ${PERF_ENV} ${LIB_PATH_ENV}"

    ENV_VAR="-x MCCL_IB_HCA=rocep6s0,rocep95s0 -x MCCL_SOCKET_IFNAME=p50p1,p51p1 -x MCCL_CROSS_NIC=1 ${PERF_ENV} ${LIB_PATH_ENV} -x MCCL_IB_DISABLE=0"
    MPI_PROCESS_NUM=${GPU_NUM}
    MPI_RUN_OPT="--allow-run-as-root -mca btl_tcp_if_include ${IP_MASK} -mca oob_tcp_if_include ${IP_MASK} -mca pml ^ucx -mca osc ^ucx -mca btl ^openib"

    for BENCH in ${BENCH_NAMES}; do
        echo -n "The test is ${BENCH}, the maca version is " && realpath ${MACA_PATH}
        ${MACA_PATH}/ompi/bin/mpirun -np ${MPI_PROCESS_NUM} ${MPI_RUN_OPT} -host ${HOST_IP} ${ENV_VAR} ${TEST_DIR}/${BENCH} -b 1K -e 1G -d float -f 2 -g 1 -n 10
    done
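The script takes its hosts as `ip:slots,ip:slots`. A small standalone sketch for validating that argument shape before launching a 16-rank job (the default value mirrors the script's own):

```shell
#!/bin/bash
# Validate a HOST_IP argument of the form ip:slots,ip:slots before launching.
HOST_IP="${1:-192.168.1.204:8,192.168.1.205:8}"
valid=1
IFS=',' read -ra entries <<< "$HOST_IP"
for e in "${entries[@]}"; do
    # each entry must be dotted-quad:count
    [[ "$e" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+:[0-9]+$ ]] || valid=0
done
echo "HOST_IP format valid: $valid"
```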

    The error log is attached.

    Attachment: mccl-error.txt (text, 183.9 KB), uploaded by lishuai on March 23, 2026.

  • Members 314 posts
    March 23, 2026 16:42

    Dear developer, have you run an ib_write traffic test?

  • Thread has been moved from 产品&运维.

  • Members 314 posts
    March 23, 2026 16:48

    Dear developer, please test by following the Mellanox ib_write documentation.
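For reference, a hedged sketch of the ib_write_bw point-to-point test being requested. The flags are from the perftest suite (`-d` selects the verbs device, `-x` the GID index, `--report_gbits` prints Gb/s); the GID index and the server's RoCE address below are placeholders to adapt to this cluster:

```shell
#!/bin/bash
# Sketch: perftest ib_write_bw between the two nodes over the RoCE NICs.
# Run the server line on node A first, then the client line on node B.
SERVER_CMD="ib_write_bw -d rocep6s0 -x 3 --report_gbits"
# 192.168.100.204 is a placeholder for the server's RoCE-subnet address
CLIENT_CMD="ib_write_bw -d rocep6s0 -x 3 --report_gbits 192.168.100.204"
echo "server: ${SERVER_CMD}"
echo "client: ${CLIENT_CMD}"
```

Repeat with `-d rocep95s0` to cover the second NIC; a healthy 400G link should report well above the 40 Gbps that ibstat is currently showing.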

  • Members 17 posts
    March 24, 2026 13:12

    The two-node model deployment is now stuck in this state. Looking at the MCCL logs, it appears the environment variables declare that MCCL should use the HCA, yet MCCL still goes over the business network. The ibdev2netdev,
    python, mccl, and IP info from both machines are attached.

    Attachment: ip_info.txt (text, 16.5 KB), uploaded by lishuai on March 24, 2026.

    Attachment: mccl_info.txt (text, 20.8 KB), uploaded by lishuai on March 24, 2026.

    Attachment: python_info.txt (text, 10.7 KB), uploaded by lishuai on March 24, 2026.
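To confirm which kernel interface each verbs device maps to, and to make MCCL log the transport it actually selects, a sketch. Note the debug variables are an assumption: MCCL_DEBUG and MCCL_DEBUG_SUBSYS are presumed to mirror NCCL's knobs, so verify them against the MACA documentation:

```shell
#!/bin/bash
# Map verbs devices to netdevs, then enable MCCL's transport logging.
if command -v ibdev2netdev >/dev/null; then
    ibdev2netdev            # expect lines like "rocep6s0 port 1 ==> p50p1 (Up)"
else
    echo "ibdev2netdev not found (ships with the Mellanox OFED tools)"
fi
export MCCL_DEBUG=INFO            # assumed NCCL-style knob; verify with MACA docs
export MCCL_DEBUG_SUBSYS=INIT,NET
echo "MCCL_DEBUG=${MCCL_DEBUG}"   # rerun the job; look for NET/IB vs NET/Socket
```

If the init log shows `NET/Socket` despite `MCCL_IB_HCA` being set, the HCA names in the variable do not match what the library enumerates, which would explain traffic going over the business network.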

  • Members 314 posts
    March 24, 2026 13:36

    Dear developer, did the ib_write client/server traffic test pass normally?

  • Members 314 posts
    March 24, 2026 13:40

    Dear developer, your MCCL test results are very low; please check the relevant network configuration.

  • Members 314 posts
    March 24, 2026 13:43

    Dear developer, please check the switch configuration and the server-side NIC configuration.

  • Members 17 posts
    March 24, 2026 13:55

    The ib_write single-stream test results are shown in the image.

    Attachment: image.png (PNG, 112.3 KB), uploaded by lishuai on March 24, 2026.

  • Members 314 posts
    March 24, 2026 13:58

    Dear developer, the ib_write single-stream result is low; please check the switch configuration.

  • Members 17 posts
    March 25, 2026 16:59

    How do these numbers look?

    Attachment: image.png (PNG, 109.3 KB), uploaded by lishuai on March 25, 2026.

    Attachment: image.png (PNG, 107.8 KB), uploaded by lishuai on March 25, 2026.

  • Members 17 posts
    March 25, 2026 17:10

    After the ib_write test finishes, running the cluster test causes the machines to reboot.

    Script contents:

    #!/bin/bash

    MACA_PATH="${MACA_PATH:-/opt/maca}"

    HOST_IP=192.168.1.204:8,192.168.1.205:8
    GPU_NUM=16

    TEST_DIR=$MACA_PATH/samples/mccl_tests/perf/mccl_perf

    BENCH_NAMES="all_reduce_perf all_gather_perf reduce_scatter_perf sendrecv_perf alltoall_perf"

    BENCH_NAMES="all_reduce_perf"

    if [[ -z "$1" || -z "$2" || -z "$3" ]]; then
        echo "Use the default ip addr. Run with parameters for custom ip addr, for example: bash cluster.sh ip_1:proc_count,ip_2:proc_count gpu_num test_name"
    else
        HOST_IP=$1
        GPU_NUM=$2

        if [ "$3" = "all" ]; then
            BENCH_NAMES="all_reduce_perf all_gather_perf reduce_scatter_perf sendrecv_perf alltoall_perf"
        else
            if [ -e "$TEST_DIR/$3" ]; then
                BENCH_NAMES=$3
            else
                echo "$TEST_DIR/$3 does not exist!"
                exit 1
            fi
        fi
    fi

    IP_MASK="$(echo "$HOST_IP" | cut -d. -f1-3).0/24"

    # Overridden below: MPI control traffic is pinned to the RoCE subnet
    IP_MASK="192.168.100.0/24"
    IB_PORT=mlx5_0,mlx5_1

    PERF_ENV="-x FORCE_ACTIVE_WAIT=2"
    LIB_PATH_ENV="-x MACA_PATH=${MACA_PATH} -x LD_LIBRARY_PATH=${MACA_PATH}/lib:${MACA_PATH}/ompi/lib:${MACA_PATH}/ucx/lib"
    ENV_VAR="-x MCCL_IB_HCA=${IB_PORT} -x MCCL_CROSS_NIC=1 -x MCCL_IB_TRAFFIC_CLASS=160 -x MCCL_IB_GID_INDEX=3 -x MCCL_IB_RETRY_CNT=15 -x MCCL_NET_GDR_LEVEL=2 ${PERF_ENV} ${LIB_PATH_ENV}"
    MPI_PROCESS_NUM=${GPU_NUM}
    MPI_RUN_OPT="--allow-run-as-root -mca btl_tcp_if_include ${IP_MASK} -mca oob_tcp_if_include ${IP_MASK} -mca pml ^ucx -mca osc ^ucx -mca btl ^openib"

    for BENCH in ${BENCH_NAMES}; do
        echo -n "The test is ${BENCH}, the maca version is " && realpath ${MACA_PATH}
        ${MACA_PATH}/ompi/bin/mpirun -np ${MPI_PROCESS_NUM} ${MPI_RUN_OPT} -host ${HOST_IP} ${ENV_VAR} ${TEST_DIR}/${BENCH} -b 1K -e 1G -d float -f 2 -g 1 -n 10
    done

  • Members 17 posts
    March 25, 2026 17:22

    The current MCCL problem: with GDR disabled, falling back to system memory (SysMem), the test runs to completion; with GDR enabled, the P2P path instantly triggers a PCIe bus error.
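PCIe bus errors on the GDR/P2P path are classically caused by PCIe ACS on intermediate switches or by the IOMMU remapping peer-to-peer DMA between GPU and NIC. A read-only probe sketch to run on both nodes:

```shell
#!/bin/bash
# Inspect IOMMU boot flags and ACS control state on PCIe bridges; either can
# break GPU<->NIC peer-to-peer DMA and surface as bus errors or resets.
cmdline="$(cat /proc/cmdline 2>/dev/null || true)"
echo "iommu flags: $(echo "$cmdline" | grep -o 'iommu[^ ]*' | tr '\n' ' ')"
if command -v lspci >/dev/null; then
    # SrcValid+ in ACSCtl on a bridge in the P2P path redirects traffic upstream
    lspci -vvv 2>/dev/null | grep 'ACSCtl' || echo "no ACSCtl lines readable (run as root)"
else
    echo "lspci not found (install pciutils)"
fi
probe_done=1
```

If ACS is enabled on the bridges between the C500s and the ConnectX-7s, disabling it there (or adjusting the IOMMU passthrough setting) is the usual first step before retrying with MCCL_NET_GDR_LEVEL=2.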

  • Thread has been moved from 解决中.