1. tf.Graph().as_default()


First, look at the explanation on the official site:


Then see the more accessible explanation by blogger Joanna-In-Hdu&Hust:

tf.Graph() instantiates a new computation graph object for TensorFlow.

tf.Graph().as_default() makes this instance, i.e. the newly created graph, the default graph of the whole TensorFlow runtime. With only a single main thread it can be omitted: TensorFlow always keeps a default graph, which can be retrieved (and displayed) with tf.get_default_graph(). With multiple threads, you can create multiple tf.Graph() instances.

When there is only one thread it makes no difference whether you write it or not, but writing it is a good habit and keeps the code tidy, as the following passage explains.

Since a default graph is always registered, every op and variable is placed into the default graph. The statement, however, creates a new graph and places everything (declared inside its scope) into this graph. If the graph is the only graph, it's useless. But it's a good practice because if you start to work with many graphs it's easier to understand where ops and vars are placed. Since this statement costs you nothing, it's better to write it anyway.

Just to be sure that if you refactor the code in the future, the operations defined belong to the graph you choose initially.
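A minimal sketch of the idea (the graph variables and op names here are made up for illustration): ops created inside a graph's as_default() scope belong to that graph.

```python
import tensorflow as tf

# Each Graph owns the ops created while it is the default graph.
g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(1, name='a')  # created inside g1's scope

g2 = tf.Graph()
with g2.as_default():
    b = tf.constant(2, name='b')  # created inside g2's scope

print(a.graph is g1, b.graph is g2)  # each op remembers which graph it lives in
```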

2.  tf.device()


In TensorFlow, a model can run on the local CPU or on a GPU, and the user can use tf.device() to control which device each operation is placed on.

Generally, if the GPU version of TensorFlow is installed, operations are placed on the GPU by default.

To switch to CPU computation, call tf.device(device_name), where device_name has the form /cpu:0. The 0 is the device index; TensorFlow does not distinguish between CPU devices, so 0 is usually fine, but it does distinguish between GPU devices, e.g. /gpu:0 and /gpu:1.

Sometimes, even when running a model on the GPU, we keep some tensors in host memory, because a tensor may be too large to fit in GPU memory and has to go into the larger main memory. This is often done by explicitly placing the relevant ops on the CPU:



with tf.device('/cpu:0'):
    build_CNN()  # the CNN's tensors are stored in host memory rather than GPU memory
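The placement can be observed directly, since every tensor records its device string; a small sketch (the constant here is just a stand-in for real model ops):

```python
import tensorflow as tf

with tf.device('/cpu:0'):
    x = tf.constant([1.0, 2.0])  # pinned to the first CPU device

print(x.device)  # the device string recorded for this tensor
```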



3.  sys.path.append('../')


When we import a module, by default the Python interpreter searches the current directory, the installed built-in modules, and third-party modules. The search paths are stored in the path attribute of the sys module; sys.path is a list.

To add our own search directory, we can append it to the end of that list with sys.path.append('../'), or insert it at a given position with sys.path.insert(position, '../').



>>> import sys
>>> sys.path
['', '/usr/local/lib/python2.7/dist-packages/apex-0.1-py2.7-linux-x86_64.egg', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
>>> sys.path.append('../')
>>> sys.path
['', '/usr/local/lib/python2.7/dist-packages/apex-0.1-py2.7-linux-x86_64.egg', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '../']



This modifies the search path only temporarily at run time; the change is gone once the script finishes.
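The append() variant is shown in the session above; a minimal sketch of the insert() variant, which controls where the new entry lands (entries earlier in the list are searched first):

```python
import sys

# Put '../' at the front so it is searched before the standard locations.
sys.path.insert(0, '../')
print(sys.path[0])  # prints: ../
```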

If you want the extra path to stay effective permanently, it has to be added to the system environment. There are several ways:

1. Put the finished .py file into a directory that is already on the search path.

2. Use the PYTHONPATH environment variable to extend the default search path: export PYTHONPATH=$PYTHONPATH:/home/liu/shell/config

3. Create a new .pth file under /usr/lib/python2.7/site-packages (any file name works as long as the extension is .pth), listing one directory per line, for example:


/home/cv/mvsnet/tools
/home/cv/mvsnet/cnn_warpper


4.  tf.app.flags.DEFINE_string()


tf.app.flags.DEFINE_xxx() defines a command-line flag of type xxx, and tf.app.flags.FLAGS is used to read the values of those flags.



tf.app.flags.DEFINE_string('model_dir', './checkpoint', """Path to save the model.""")
tf.app.flags.DEFINE_boolean('use_pretrain', False,  """Whether to train.""")
tf.app.flags.DEFINE_integer('ckpt_step', 0, """ckpt step.""")



The code above creates several command-line flags; the third argument of each DEFINE is the help text describing the flag when no value is supplied.

When the program is run via tf.app.run(), the flag values can be read through tf.app.flags.FLAGS.

5.  GPU running out of memory


 



2019-05-23 16:26:59.515839: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0x5a7a07e0

2019-05-23 16:27:00.676787: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 791.62MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.

2019-05-23 16:27:00.682241: W tensorflow/core/common_runtime/bfc_allocator.cc:217] Allocator (GPU_0_bfc) ran out of memory trying to allocate 776.50MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
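The warnings above come from TensorFlow's BFC allocator and, as the message itself says, are not a failure. If allocation problems do become fatal, a common mitigation in TensorFlow 1.x is to stop the process from grabbing all GPU memory up front (on 2.x the same ConfigProto lives under tf.compat.v1); a sketch:

```python
import tensorflow as tf

# tf.ConfigProto is the 1.x name; fall back to the compat module on 2.x
ConfigProto = getattr(tf, 'ConfigProto', None) or tf.compat.v1.ConfigProto

config = ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand
# Alternatively, cap the fraction of GPU memory the process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.8

# The config is passed when creating the session:
# sess = tf.Session(config=config)
print(config.gpu_options.allow_growth)  # True
```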