Table of Contents

  • The nohup command
  • Use case
  • Usage
  • nohup ... &
  • nohup ... > train.log 2>&1 &
  • Killing the process
  • References


The nohup command

Use case

  • For example, I want to run the following command on the server to train my deep learning model:
python train.py


  • The problem is that if you run it this way, the moment you close your laptop or change work locations and the network disconnects, the remote session ends and the training script is terminated along with it (when the SSH session closes, the shell sends the hangup signal SIGHUP to its child processes). Everything trained so far is lost, which wastes a lot of compute and time.

Usage

nohup ... &

nohup python train.py &
  • If you run this command, the output displayed on your screen should be:

[Screenshot: terminal output after running the command]
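For reference, the terminal output in the screenshot looks roughly like the following (a sketch assuming an interactive bash shell; the job number, PID, and exact wording vary by system):

[1] 1807028
nohup: ignoring input and appending output to 'nohup.out'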

  • 1807028 is the unique identifier (PID) of the program. Because we launched it with nohup and sent it to the background with &, it keeps running even if the network connection drops, so we can no longer stop it with the usual Ctrl + C; we have to terminate it by this PID instead.
  • The message nohup: ignoring input and appending output to 'nohup.out' means that all output will be redirected to the file nohup.out.
  • However, nohup.out will not save the messages you print in train.py. So what if I want that information as well? The answer is to use a log file that records everything produced while the program runs.
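Before moving on, a minimal sketch of how to check on the detached job (assuming the PID 1807028 from the example above):

ps -p 1807028        # shows the process only if it is still running
tail -f nohup.out    # follow whatever nohup has appended so far

One reason print output lags behind is that Python buffers stdout when it is redirected to a file, so lines may show up only after a delay; running the script as python -u train.py is a common way to make the output unbuffered.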

nohup ... > train.log 2>&1 &

nohup python train.py > train.log 2>&1 &

[Screenshot: terminal output after running the command, showing the new PID]

  • The number printed is again the PID
  • This command redirects all of the program's output to train.log

Explanation of 2>&1:
Standard error (2) is redirected to standard output (&1); since standard output has already been redirected to train.log, both streams end up in the train.log file.
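A quick sketch of why the order of the two redirections matters (shown with the same command; only the redirections differ):

python train.py > train.log 2>&1    # stdout goes to train.log, then stderr follows stdout: both land in train.log
python train.py 2>&1 > train.log    # stderr copies where stdout points right now (the terminal), then stdout goes to train.log

In other words, 2>&1 means "point descriptor 2 at wherever descriptor 1 currently points", which is why it has to come after > train.log.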

  • 0 - stdin (standard input)
  • 1 - stdout (standard output)
  • 2 - stderr (standard error)
  • After running the command above, you can see that all of the output has been recorded in train.log:
args:
Namespace(batch_size=8, bpe_token=False, cls_index_path='data/cls_index.json', device='0,1,2,3', encoder_json='tokenizations/encoder.json', epochs=8, fp16=False, fp16_opt_level='O1', gradient_accumulation=1, log_step=10, lr=0.00015, max_grad_norm=1.0, min_length=128, model_config='config/model_config_small.json', num_pieces=3, output_dir='model/', pretrained_model='pretrained/GPT2-base-Chinese', raw=False, raw_data_path='data/jieshuo.txt', samples_path='data/samples.json', segment=False, stride=768, tokenized_data_path='data/jieshuo_tokens_full.txt', tokenizer_path='pretrained/GPT2-base-Chinese/vocab.txt', vocab_bpe='tokenizations/vocab.bpe', warmup_steps=2000, writer_dir='tensorboard_summary/')
config:
{
  "attn_pdrop": 0.1,
  "embd_pdrop": 0.1,
  "finetuning_task": null,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_layer": 10,
  "n_positions": 1024,
  "num_labels": 1,
  "output_attentions": false,
  "output_hidden_states": false,
  "output_past": true,
  "pruned_heads": {},
  "resid_pdrop": 0.1,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torchscript": false,
  "use_bfloat16": false,
  "vocab_size": 13317
}

using device: cuda
number of parameters: 102068736
calculating total steps
total steps = 80319
Let's use 4 GPUs!
starting training
loading samples
epoch 1
time: 2022-12-09 16:12:14.115555
shuffling samples ...
converting samples
checking samples ....

  0%|          | 0/9968 [00:00<?, ?it/s]/home/qinpn/anaconda3/envs/gpt/lib/python3.6/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
/home/qinpn/anaconda3/envs/gpt/lib/python3.6/site-packages/transformers/optimization.py:166: UserWarning: This overload of add_ is deprecated:
	add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
	add_(Tensor other, *, Number alpha) (Triggered internally at  ../torch/csrc/utils/python_arg_parser.cpp:1050.)
  exp_avg.mul_(beta1).add_(1.0 - beta1, grad)

  0%|          | 1/9968 [00:22<61:32:05, 22.23s/it]
  0%|          | 2/9968 [00:22<26:33:54,  9.60s/it]
  0%|          | 3/9968 [00:23<15:21:24,  5.55s/it]
  0%|          | 4/9968 [00:35<21:48:04,  7.88s/it]
  0%|          | 5/9968 [00:35<14:37:39,  5.29s/it]
  0%|          | 6/9968 [00:36<10:18:57,  3.73s/it]
  0%|          | 7/9968 [00:37<7:37:16,  2.75s/it] 
  0%|          | 8/9968 [00:38<5:48:42,  2.10s/it]
  0%|          | 9/9968 [00:38<4:39:49,  1.69s/it]
  0%|          | 10/9968 [00:39<3:53:26,  1.41s/it]
  0%|          | 11/9968 [00:40<3:2
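Because the process is detached from the terminal, a convenient way to watch training progress is to follow the log file (a minimal sketch using the file name from this article):

tail -f train.log     # print new lines as they are appended; Ctrl + C here only stops tail, not the training
tail -n 50 train.log  # or just inspect the last 50 lines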

Killing the process

kill -9 <PID>
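If the PID from the original output is no longer at hand, a sketch of how to look it up (assuming the script is still running as train.py):

ps aux | grep "[t]rain.py"    # the second column of the output is the PID (the [t] keeps grep from matching itself)
pgrep -f train.py             # or print matching PIDs directly

A plain kill <PID> (SIGTERM) gives the program a chance to exit cleanly; -9 (SIGKILL) is the forceful fallback when it refuses to stop.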