The difference between tf.Variable() and tf.get_variable()
tf.Variable():
When a name collision is detected, the system resolves it automatically; informally, it appends an index suffix such as "_1" to the new variable's name.
import tensorflow as tf
w_1 = tf.Variable(0, name="w_1")
w_2 = tf.Variable(1, name="w_1")
print(w_1.name)
print(w_2.name)
#output
#w_1:0
#w_1_1:0
tf.get_variable():
When a name collision is detected, the system raises an error. This mechanism protects the naming system (it should be used together with tf.variable_scope): within the same variable_scope, a repeated name is treated as referring to the same variable in memory.
import tensorflow as tf
w_1 = tf.get_variable(name="w_1",initializer=0)
w_2 = tf.get_variable(name="w_1",initializer=1)
#error information
#ValueError: Variable w_1 already exists, disallowed. Did
#you mean to set reuse=True in VarScope?
import tensorflow as tf
with tf.variable_scope("scope1"):
    w1 = tf.get_variable("w1", shape=[])
    w2 = tf.Variable(0.0, name="w2")
with tf.variable_scope("scope1", reuse=True):
    w1_p = tf.get_variable("w1", shape=[])
    w2_p = tf.Variable(1.0, name="w2")
print(w1 is w1_p, w2 is w2_p)
#output
#True False
An example of tf.get_variable() combined with reuse
import tensorflow as tf
# The first scope must NOT set reuse=True: the variables do not exist yet.
with tf.variable_scope('v_scope') as scope1:
    Weights1 = tf.get_variable('Weights', shape=[2, 3])
    bias1 = tf.get_variable('bias', shape=[3])

# Now share the variables defined above.
# note: a variable must already be defined before a scope can set reuse=True,
# otherwise an error is raised.
with tf.variable_scope('v_scope', reuse=True) as scope2:
    Weights2 = tf.get_variable('Weights')

with tf.variable_scope('v_scope', reuse=True) as scope3:
    Weights3 = tf.get_variable('Weights')

print(Weights1.name)
print(Weights2.name)
print(Weights3.name)
#### output:
# v_scope/Weights:0
# v_scope/Weights:0
# v_scope/Weights:0
As the output shows, all three handles point to the same variable.
Note 1:
The variable_scope must use the very same name 'v_scope'; otherwise the variables are not shared, and you get ValueError: Variable v_scope1/Weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
Note 2:
A variable must already be defined, and must have been created with get_variable(), before reuse=True can be set; otherwise you get the error Variable v_scope/bias does not exist, or was not created with tf.get_variable()
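Note 2 can be reproduced with a short sketch: a variable created with tf.Variable() is invisible to tf.get_variable(), even inside a reuse scope. The scope name "v_scope2" is arbitrary, and the TF2 fallback to tf.compat.v1 is an assumption:

```python
import tensorflow as tf

# Assumption: fall back to the v1 compatibility API on TensorFlow 2.
if not hasattr(tf, "variable_scope"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

with tf.variable_scope("v_scope2"):
    # Created with tf.Variable(), NOT tf.get_variable():
    bias = tf.Variable([0.1, 0.1, 0.1], name="bias")

with tf.variable_scope("v_scope2", reuse=True):
    try:
        tf.get_variable("bias", shape=[3])
    except ValueError as e:
        # The lookup fails because the variable was not registered
        # with tf.get_variable().
        print(e)
```

tf.Variable() does not register its variable in the store that tf.get_variable() consults, which is why only get_variable-created variables can be shared this way.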