Getting Started with DeepSpeed


pip install deepspeed
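
After installing, DeepSpeed ships a ds_report utility that prints the detected environment and which ops/extensions are available:

ds_report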

Supports transformers: the --deepspeed flag plus a config file;
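
A minimal sketch of the Transformers side, assuming a model and train_dataset already exist; TrainingArguments accepts a deepspeed field pointing at the JSON config:

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_config.json",  # hands optimizer/fp16/ZeRO settings to DeepSpeed
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()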

import deepspeed

# returns a DeepSpeedEngine wrapping the model, plus the optimizer/scheduler built from the config
model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
                                                     model=model,
                                                     model_parameters=params)

Distributed setup, mixed precision, etc. are all handled inside deepspeed.initialize and the returned model_engine;
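
Where cmd_args typically comes from: an argparse parser extended with DeepSpeed's own flags via deepspeed.add_config_arguments (a sketch):

import argparse
import deepspeed

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)  # filled in by the deepspeed launcher
parser = deepspeed.add_config_arguments(parser)  # adds --deepspeed, --deepspeed_config, etc.
cmd_args = parser.parse_args()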

Delete: torch.distributed.init_process_group(...); DeepSpeed initializes the process group itself;

for step, batch in enumerate(data_loader):
    # forward pass (returns the loss)
    loss = model_engine(batch)

    # backward pass (handles loss scaling and gradient averaging)
    model_engine.backward(loss)

    # weight update (also steps the lr scheduler if one is configured)
    model_engine.step()

Gradient averaging: handled automatically inside model_engine.backward;

Loss scaling: handled automatically;

Learning rate scheduler: stepped automatically inside model_engine.step;
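
A scheduler can also be declared directly in the JSON config; a sketch using DeepSpeed's WarmupLR (parameter values are illustrative):

"scheduler": {
  "type": "WarmupLR",
  "params": {
    "warmup_min_lr": 0,
    "warmup_max_lr": 0.00015,
    "warmup_num_steps": 1000
  }
}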

save & load: model, optimizer, and lr scheduler state are all saved; client_sd is arbitrary user-defined state (e.g. the current step):

# load restores model/optimizer/scheduler state and returns the user-defined dict
_, client_sd = model_engine.load_checkpoint(args.load_dir, args.ckpt_id)
step = client_sd['step']
...
if step % args.save_interval == 0:
    client_sd['step'] = step
    ckpt_id = loss.item()
    model_engine.save_checkpoint(args.save_dir, ckpt_id, client_sd=client_sd)
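
Note: save_checkpoint should be called by all processes, not just rank 0, since with ZeRO each rank holds only its partition of the optimizer state.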

Config file (e.g. named ds_config.json):

{
  "train_batch_size": 8,
  "gradient_accumulation_steps": 1,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.00015
    }
  },
  "fp16": {
    "enabled": true
  },
  "zero_optimization": true
}
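
Here train_batch_size is the global batch size; it must equal the per-GPU micro batch size × gradient_accumulation_steps × number of GPUs, so on 8 GPUs with accumulation 1 this config implies a micro batch of 1 per GPU.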

hostfile (compatible with OpenMPI and Horovod): each line is a hostname followed by its GPU slot count:

worker-1 slots=4
worker-2 slots=4

Launch command (your_program.py below stands for your training script):

deepspeed --hostfile=myhostfile your_program.py <client args> \
  --deepspeed --deepspeed_config ds_config.json

--num_nodes: how many machines to run on;

--num_gpus: how many GPUs to run on;

--include: whitelist of nodes/GPU indices; e.g. --include="worker-2:0,1"

--exclude: blacklist of nodes/GPU indices; e.g. --exclude="worker-2:0@worker-3:0,1"
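
For example, to run only on GPUs 0 and 1 of worker-2 (script name illustrative):

deepspeed --hostfile=myhostfile --include="worker-2:0,1" \
  your_program.py --deepspeed --deepspeed_config ds_config.json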

Environment variables:

These are propagated to all nodes when the job launches;

List them in a ".deepspeed_env" file, placed in the working directory or in ~/; e.g.:

NCCL_IB_DISABLE=1
NCCL_SOCKET_IFNAME=eth0

Running the deepspeed command on one machine launches processes on all nodes;

Launching via mpirun is also supported, but the communication backend is still NCCL, not MPI;
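
A rough sketch of what an mpirun launch can look like (flags and script name are illustrative; mpi4py needs to be installed so DeepSpeed can read rank/world size from the MPI environment):

mpirun -np 8 -hostfile myhostfile \
  python your_program.py --deepspeed --deepspeed_config ds_config.json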

Note:

CUDA_VISIBLE_DEVICES is not supported; GPUs can only be selected like this:

deepspeed --include localhost:1 ...