1.torch.set_default_tensor_type(t)
- This method sets PyTorch's default floating-point tensor type; note that it only sets the default type for floating-point numbers, not for integers. You can call torch.get_default_dtype() to query the current default floating-point type.
- On CPU, t defaults to torch.FloatTensor and may also be torch.DoubleTensor.
- On GPU, t defaults to torch.cuda.FloatTensor and may also be torch.cuda.DoubleTensor or torch.cuda.HalfTensor.
import torch
a = torch.rand(4,3)
print(a.dtype, a.device)
print(torch.get_default_dtype())
# torch.float32 cpu
# torch.float32
torch.set_default_tensor_type(torch.cuda.FloatTensor)
b = torch.rand(2,3)
print(b.dtype, b.device)
print(torch.get_default_dtype())
# torch.float32 cuda:0
# torch.float32
torch.set_default_tensor_type(torch.FloatTensor)
c = torch.rand(3,3)
print(c.dtype, c.device)
print(torch.get_default_dtype())
# torch.float32 cpu
# torch.float32
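If you only need a different default floating-point precision (and not a different default device), torch.set_default_dtype() is a narrower alternative; a minimal sketch:
import torch
torch.set_default_dtype(torch.float64)  # only the default float dtype changes
d = torch.rand(2, 2)
print(d.dtype, d.device)                # torch.float64 cpu
torch.set_default_dtype(torch.float32)  # restore the default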
2.torch.is_nonzero(input)
- input must contain exactly one element; in effect the call tests whether input is anything other than torch.tensor([0.]), torch.tensor([0]), or torch.tensor([False]): it returns False for those three and True for a single nonzero element.
import torch
a = torch.tensor([0])
b = torch.tensor([0.])
c = torch.tensor([False])
d = torch.tensor([True])
print(torch.is_nonzero(a)) # False
print(torch.is_nonzero(b)) # False
print(torch.is_nonzero(c)) # False
print(torch.is_nonzero(d)) # True
# The following raises an error, because the tensor has more than one element
e = torch.tensor([1,2])
print(torch.is_nonzero(e))
# RuntimeError: Boolean value of Tensor with more than one value is ambiguous
3.torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False)
torch.tensor() always copies data. If you already have a tensor a and want to avoid the copy, use a.detach().
If data is a NumPy ndarray and you want to avoid the copy, use torch.as_tensor().
import torch
a = torch.tensor([2,3,4])
b = torch.Tensor.detach(a)  # equivalent to a.detach()
c = a.detach()
b[0] = 110
print(a, b, c)
# tensor([110, 3, 4]) tensor([110, 3, 4]) tensor([110, 3, 4])
c[0] = 120
print(a, b, c)
# tensor([120, 3, 4]) tensor([120, 3, 4]) tensor([120, 3, 4])
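For contrast, a small sketch (with a throwaway array arr) showing that torch.tensor() really does copy, so writes to the new tensor do not reach the source:
import numpy as np
import torch
arr = np.array([1, 2, 3])
t = torch.tensor(arr)  # copies the data
t[0] = 99
print(arr, t)          # [1 2 3] tensor([99,  2,  3]) -- the source is untouched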
4.torch.as_tensor(data, dtype=None, device=None) → Tensor
Converts data of type list, tuple, NumPy ndarray, scalar, tensor, etc. into a tensor.
If data is an ndarray that already has the requested dtype and the device is the CPU (a NumPy ndarray can only live on the CPU), no copy is performed: the result is a tensor, but it uses the same memory as the array.
import torch
import numpy as np
a = np.array([1, 2, 3, 4])
t = torch.as_tensor(a)
print(t) # tensor([1, 2, 3, 4])
t[0] = 110
print(a, t) # [110 2 3 4] tensor([110, 2, 3, 4])
print(t.requires_grad) # False
If the dtype is changed, the data is copied:
import torch
import numpy as np
a = np.array([1, 2, 3, 4])
t = torch.as_tensor(a, dtype=torch.float32)
print(t) # tensor([1., 2., 3., 4.])
t[0] = 110
print(a, t) # [1 2 3 4] tensor([110., 2., 3., 4.])
print(t.requires_grad) # False
5.torch.from_numpy(ndarray)
Converts a numpy.ndarray of type numpy.float64, numpy.float32, numpy.float16, numpy.complex64, numpy.complex128, numpy.int64, numpy.int32, numpy.int16, numpy.int8, numpy.uint8, or numpy.bool
into a tensor that shares memory with the array.
import torch
import numpy as np
a = np.array([1, 11, 111])
t = torch.from_numpy(a)
print(t) # tensor([ 1, 11, 111])
t[0] = 120
print(a, t) # [120 11 111] tensor([120, 11, 111])
6.torch.quantize_per_tensor(input, scale, zero_point, dtype) → Tensor和torch.quantize_per_channel(input, scales, zero_points, axis, dtype) → Tensor
Quantizes a tensor. There are two variants:
- torch.quantize_per_tensor() converts per tensor: every element of the tensor gets the same transformation.
- torch.quantize_per_channel() applies a different transformation to each channel.
import torch
print(torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8))
# tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
# The stored integers are the offsets from the zero point: q = round(x / scale) + zero_point, clamped to the dtype's range.
print(torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8).int_repr())
# tensor([ 0, 10, 20, 30], dtype=torch.uint8)
x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])
print(torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8))
# tensor([[-1., 0.], [ 1., 2.]], size=(2, 2), dtype=torch.quint8, quantization_scheme=torch.per_channel_affine,
# scale=tensor([0.1000, 0.0100], dtype=torch.float64),zero_point=tensor([10, 0]), axis=0)
print(torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8).int_repr())
# tensor([[ 0, 10], [100, 200]], dtype=torch.uint8)
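As a sanity check on that affine formula, the round trip below (reusing scale=0.1 and zero_point=10) recovers the original values via x = (q - zero_point) * scale:
import torch
x = torch.tensor([-1.0, 0.0, 1.0, 2.0])
q = torch.quantize_per_tensor(x, 0.1, 10, torch.quint8)
print(q.int_repr())                       # tensor([ 0, 10, 20, 30], dtype=torch.uint8)
print((q.int_repr().float() - 10) * 0.1)  # tensor([-1., 0., 1., 2.]) up to float rounding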
7.torch.dequantize(tensor) → Tensor
Dequantizes a quantized tensor, returning an fp32 tensor.
import torch
a = torch.tensor([10., 40., 20.])  # representable range is [-1, 24.5]; anything outside is clamped (quint8 stores 0~255)
b = torch.quantize_per_tensor(a, 0.1, 10, dtype=torch.quint8)
c = torch.dequantize(b)
print(b, c)
# tensor([10.0000, 24.5000, 20.0000], size=(3,), dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
# tensor([10.0000, 24.5000, 20.0000])
# The 40. in a becomes 24.5 in b because an 8-bit unsigned integer spans 0~255: the largest value in b.int_repr() is 255,
# so the largest offset from the zero point (10) is 245, which multiplied by the scale 0.1 gives 24.5. Likewise the minimum
# of an 8-bit unsigned integer is 0, so the smallest offset from the zero point (10) is -10, which times 0.1 gives -1.
a = torch.tensor([1., 4., 2.])
b = torch.quantize_per_tensor(a, 0.1, 10, dtype=torch.quint8)
c = torch.dequantize(b)
print(b, c)
# tensor([1., 4., 2.], size=(3,), dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
# tensor([1., 4., 2.])
8.torch.polar(abs, angle, *, out=None) → Tensor
Constructs a complex tensor from polar coordinates: each element is abs·cos(angle) + abs·sin(angle)·j.
import numpy as np
import torch
abs = torch.tensor([1, 2], dtype=torch.float64)
angle = torch.tensor([np.pi / 2, 5 * np.pi / 4], dtype=torch.float64)
z = torch.polar(abs, angle)
print(z)
# tensor([ 6.1232e-17+1.0000j, -1.4142e+00-1.4142j], dtype=torch.complex128)
# abs and angle must have the same dtype, float or double. If the inputs are torch.float32, out must be torch.complex64; if the inputs are torch.float64, it must be torch.complex128.
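Continuing the example, torch.abs() and torch.angle() recover the polar coordinates; note that angles are reported in (-π, π], so 5π/4 comes back as -3π/4:
print(torch.abs(z))    # tensor([1., 2.], dtype=torch.float64)
print(torch.angle(z))  # tensor([ 1.5708, -2.3562], dtype=torch.float64)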
9.torch.cat(inputs, dimension=0) → Tensor
Concatenates the input tensors along the given dimension dimension.
Parameters:
- inputs (sequence of Tensors): any Python sequence of tensors of the same type
- dimension (int, optional): the dimension along which to concatenate the tensors
import torch
x = torch.randn(2,3)
print(x.shape) # torch.Size([2, 3])
print(torch.cat((x,x,x), 0).shape) # torch.Size([6, 3])
print(torch.cat((x,x,x), 1).shape) # torch.Size([2, 9])
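The shapes only need to match on the dimensions not being concatenated; a small sketch:
import torch
x = torch.randn(2, 3)
y = torch.randn(4, 3)
print(torch.cat((x, y), 0).shape)  # torch.Size([6, 3]) -- sizes may differ along the cat dim only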
10.torch.chunk(tensor, chunks, dim=0)
Splits the input tensor into chunks along the given dimension (axis) dim. If the size along dim is not divisible by chunks, the last chunk is smaller and fewer than chunks chunks may be returned, as the first example below shows.
Parameters:
- tensor (Tensor) – the input tensor to split
- chunks (int) – the number of chunks
- dim (int) – the dimension along which to split
import torch
x = torch.randn(4,3)
print(x.shape) # torch.Size([4, 3])
print(torch.chunk(x, 3, 0))
#(tensor([[-0.4134, 0.0175, 0.3921],[ 0.4035, -0.6176, 1.3026]]),
# tensor([[-0.1641, 0.1590, -1.4298],[-2.4546, -0.4464, -0.1099]]))
print(torch.chunk(x, 3, 1))
# (tensor([[-0.4134],[ 0.4035],[-0.1641],[-2.4546]]),
# tensor([[ 0.0175],[-0.6176],[ 0.1590],[-0.4464]]),
# tensor([[ 0.3921],[ 1.3026],[-1.4298],[-0.1099]]))
import torch
x = torch.randn(6,9)
print(x.shape) # torch.Size([6, 9])
print(torch.chunk(x, 3, 0))
# (tensor([[-0.8840, 0.0236, -1.4453, 0.0862, 0.6388, -0.0901, 0.8369, -0.0078,0.4427],
# [-0.4893, -0.1796, -0.8844, -0.5101, -0.1262, 0.8419, -0.5799, 0.6986,0.2171]]),
# tensor([[-1.1454, 1.1796, -0.4989, 1.6639, -0.6961, -0.7190, -0.2515, -0.4151,0.5898],
# [ 0.1325, 2.0381, -1.3824, -0.3890, -1.5002, 0.9571, 2.1166, 1.4195,0.5397]]),
# tensor([[ 0.0143, -0.2321, -0.0826, 0.2963, -0.4779, 0.0304, -1.4891, 1.7376,0.0992],
# [ 0.0651, 0.6782, -1.1981, -0.3734, 0.2600, 0.2868, 1.2318, 2.2550,1.8552]]))
print(torch.chunk(x, 3, 1))
# (tensor([[-0.8840, 0.0236, -1.4453],
# [-0.4893, -0.1796, -0.8844],
# [-1.1454, 1.1796, -0.4989],
# [ 0.1325, 2.0381, -1.3824],
# [ 0.0143, -0.2321, -0.0826],
# [ 0.0651, 0.6782, -1.1981]]),
# tensor([[ 0.0862, 0.6388, -0.0901],
# [-0.5101, -0.1262, 0.8419],
# [ 1.6639, -0.6961, -0.7190],
# [-0.3890, -1.5002, 0.9571],
# [ 0.2963, -0.4779, 0.0304],
# [-0.3734, 0.2600, 0.2868]]),
# tensor([[ 0.8369, -0.0078, 0.4427],
# [-0.5799, 0.6986, 0.2171],
# [-0.2515, -0.4151, 0.5898],
# [ 2.1166, 1.4195, 0.5397],
# [-1.4891, 1.7376, 0.0992],
# [ 1.2318, 2.2550, 1.8552]]))
11.torch.gather(input, dim, index, out=None) → Tensor
Gathers values along the axis dim at the positions specified by the index tensor index.
- input (Tensor) – the source tensor
- dim (int) – the axis to index along
- index (LongTensor) – the indices of the elements to gather
- out (Tensor, optional) – the destination tensor
# For a 2-D tensor, the output is defined as:
out[i][j] = tensor[index[i][j]][j] # dim=0
out[i][j] = tensor[i][index[i][j]] # dim=1
# For a 3-D tensor, the output is defined as:
out[i][j][k] = tensor[index[i][j][k]][j][k] # dim=0
out[i][j][k] = tensor[i][index[i][j][k]][k] # dim=1
out[i][j][k] = tensor[i][j][index[i][j][k]] # dim=2
import torch
# 1-D case
a = torch.rand(6)
index = torch.tensor([0])
b = torch.gather(a, 0, index)
print(a)
print(b)
# tensor([0.1230, 0.4418, 0.9687, 0.5235, 0.6526, 0.5118])
# tensor([0.1230])
# 2-D case
a = torch.rand(3, 3)
index = torch.tensor([[0, 1, 2], [1, 2, 0], [2, 0, 1]])
b = torch.gather(a, 0, index)
print(a)
print(b)
# tensor([[0.7992, 0.3199, 0.1959],
# [0.2398, 0.2135, 0.9711],
# [0.6006, 0.9658, 0.4815]])
# tensor([[0.7992, 0.2135, 0.4815],
# [0.2398, 0.9658, 0.1959],
# [0.6006, 0.3199, 0.9711]])
# 3-D case
a = torch.rand(3,3,3)
index=torch.tensor([[[0,1,2],[1,2,0], [2,0,1]], [[0,1,2],[1,2,0], [2,0,1]], [[0,1,2],[1,2,0], [2,0,1]]])
b = torch.gather(a, 1, index)
print(a)
print(b)
# tensor([[[0.9200, 0.6297, 0.8914],
# [0.1114, 0.3913, 0.6592],
# [0.9143, 0.5122, 0.9108]],
# [[0.3028, 0.1813, 0.5715],
# [0.1008, 0.1466, 0.1975],
# [0.8455, 0.8054, 0.3646]],
# [[0.7624, 0.2610, 0.0521],
# [0.8029, 0.9804, 0.1773],
# [0.3598, 0.2220, 0.4475]]])
# tensor([[[0.9200, 0.3913, 0.9108],
# [0.1114, 0.5122, 0.8914],
# [0.9143, 0.6297, 0.6592]],
# [[0.3028, 0.1466, 0.3646],
# [0.1008, 0.8054, 0.5715],
# [0.8455, 0.1813, 0.1975]],
# [[0.7624, 0.9804, 0.4475],
# [0.8029, 0.2220, 0.0521],
# [0.3598, 0.2610, 0.1773]]])
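A common use of gather is picking one entry per row, e.g. selecting each sample's score for its label (a sketch with made-up logits and labels):
import torch
logits = torch.tensor([[0.1, 0.9], [0.8, 0.2]])
labels = torch.tensor([[1], [0]])       # one column index per row
print(torch.gather(logits, 1, labels))  # tensor([[0.9000], [0.8000]])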
12.torch.index_select(input, dim, index, out=None) → Tensor
Slices the input tensor along the dimension dim, keeping the entries listed in index (a LongTensor), and returns them
in a new tensor that has the same number of dimensions as the original (along the specified axis it has as many entries as index).
Note: the returned tensor does NOT share memory with the original tensor.
Parameters:
- input (Tensor) – the input tensor
- dim (int) – the axis to index along
- index (LongTensor) – a 1-D tensor containing the indices to keep
- out (Tensor, optional) – the destination tensor
import torch
x = torch.randn(3, 4)
print(x)
# tensor([[ 0.2900, -0.9910, -0.4476, 0.5361],
# [ 1.4227, -0.1876, -2.3000, 0.2488],
# [-0.3566, -0.4786, 0.1726, 0.4721]])
indices = torch.LongTensor([0, 2])
print(torch.index_select(x, 0, indices))
# tensor([[ 0.2900, -0.9910, -0.4476, 0.5361],
# [-0.3566, -0.4786, 0.1726, 0.4721]])
print(torch.index_select(x, 1, indices))
# tensor([[ 0.2900, -0.4476],
# [ 1.4227, -2.3000],
# [-0.3566, 0.1726]])
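Along dim 0 the same rows can also be picked with plain advanced indexing, which likewise returns a copy; a quick check:
import torch
x = torch.randn(3, 4)
indices = torch.LongTensor([0, 2])
print(torch.equal(x[indices], torch.index_select(x, 0, indices)))  # True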
13.torch.masked_select(input, mask, out=None) → Tensor
Selects the elements of the input tensor at the positions where the mask tensor is True and returns
them as a new 1-D tensor. mask is a BoolTensor (the older ByteTensor form is deprecated); it does not need to have the same shape as input, but the two must be broadcastable. Note: the returned tensor does NOT share memory with the original tensor.
Parameters:
- input (Tensor) – the input tensor
- mask (BoolTensor) – the mask tensor containing the boolean selection values
- out (Tensor, optional) – the destination tensor
Line input up with mask and collect the values at the positions where mask is True into a 1-D tensor.
import torch
x = torch.randn(3, 4)
print(x)
# tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],
# [-1.2035, 1.2252, 0.5002, 0.6248],
# [ 0.1307, -2.0608, 0.1244, 2.0139]])
mask = x.ge(0.5)
print(mask)
# tensor([[False, False, False, False],
# [False, True, True, True],
# [False, False, False, True]])
print(torch.masked_select(x, mask))
# tensor([ 1.2252, 0.5002, 0.6248, 2.0139])
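The same selection can be written with boolean indexing, which also flattens the result to 1-D; a quick check:
import torch
x = torch.randn(3, 4)
mask = x.ge(0.5)
print(torch.equal(x[mask], torch.masked_select(x, mask)))  # True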
14.torch.dstack(tensors, *, out=None) → Tensor
Stacks tensors in sequence along the third dimension (depth-wise).
Parameters:
- tensors: the tensors to stack; there can be several, but the first two dimensions of each must match.
- out: the resulting tensor.
If a tensor to be stacked has fewer than three dimensions, it is first promoted to 3-D with torch.atleast_3d() and then stacked (see the 1-D sketch after the examples below).
import torch
# 2-D
a = torch.rand(3, 4)
b = torch.rand(3, 4)
c = torch.dstack((a, b))
print(a.shape)
print(b.shape)
print(c.shape)
# torch.Size([3, 4])
# torch.Size([3, 4])
# torch.Size([3, 4, 2])
# 3-D
a = torch.rand(3, 4, 2)
b = torch.rand(3, 4, 3)
c = torch.dstack((a, b))
print(a.shape)
print(b.shape)
print(c.shape)
# torch.Size([3, 4, 2])
# torch.Size([3, 4, 3])
# torch.Size([3, 4, 5])
# 4-D
a = torch.rand(3, 4, 2, 2)
b = torch.rand(3, 4, 3, 2)
c = torch.dstack((a, b))
print(a.shape)
print(b.shape)
print(c.shape)
# torch.Size([3, 4, 2, 2])
# torch.Size([3, 4, 3, 2])
# torch.Size([3, 4, 5, 2])
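For 1-D inputs, torch.atleast_3d() first reshapes each (N,) tensor to (1, N, 1), so stacking two length-3 vectors depth-wise yields shape (1, 3, 2):
import torch
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
c = torch.dstack((a, b))
print(c.shape)  # torch.Size([1, 3, 2])
print(c)        # tensor([[[1, 4], [2, 5], [3, 6]]])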
15.torch.hstack(tensors, *, out=None) → Tensor
Parameters:
- tensors: the sequence of tensors to concatenate.
- out: the output tensor.
This method concatenates 1-D tensors along the first dimension, and tensors of any other dimensionality along the second dimension.
import torch
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
c = torch.hstack((a,b))
print(c.shape)
# torch.Size([6])
a = torch.tensor([[1],[2],[3]])
b = torch.tensor([[4],[5],[6]])
c = torch.hstack((a,b))
print(a.shape, b.shape, c.shape)
# torch.Size([3, 1]) torch.Size([3, 1]) torch.Size([3, 2])
16.torch.stack(tensors, dim=0, *, out=None) → Tensor
Concatenates a sequence of tensors along a new dimension; the result has one more dimension than the inputs.
Parameters:
- tensors: the sequence of tensors to connect (they must all have the same shape).
- dim: the dimension along which to connect.
- out: the new tensor produced by the connection.
import torch
# dim=0
a = torch.randn([2, 3])
b = torch.randn([2, 3])
c = torch.stack((a,b), dim=0)
print(a.shape, b.shape, c.shape)
# torch.Size([2, 3]) torch.Size([2, 3]) torch.Size([2, 2, 3])
# dim=1
a = torch.randn([3, 4])
b = torch.randn([3, 4])
c = torch.stack((a,b), dim=1)
print(a.shape, b.shape, c.shape)
# torch.Size([3, 4]) torch.Size([3, 4]) torch.Size([3, 2, 4])
# dim=2
a = torch.randn([3, 4])
b = torch.randn([3, 4])
c = torch.stack((a,b), dim=2)
print(a.shape, b.shape, c.shape)
# torch.Size([3, 4]) torch.Size([3, 4]) torch.Size([3, 4, 2])
17.torch.vstack(tensors, *, out=None) → Tensor
Stacks tensors in sequence vertically (row-wise); 1-D tensors are first reshaped to rows by torch.atleast_2d().
import torch
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print(torch.vstack((a,b))) # shape=[2,3]
# tensor([[1, 2, 3],[4, 5, 6]])
a = torch.tensor([[1],[2],[3]]) # shape=[3, 1]
b = torch.tensor([[4],[5],[6]]) # shape=[3,1]
print(torch.vstack((a,b))) # shape=[6, 1]
# tensor([[1],[2],[3],[4],[5],[6]])
18.torch.nonzero(input, *, out=None, as_tuple=False) → LongTensor or tuple of LongTensors
Returns the indices of the nonzero elements of input; with a condition tensor it can also return the positions of elements satisfying that condition.
Parameters:
- input: the tensor whose nonzero indices are returned.
- out: the output tensor.
- as_tuple (bool): takes True or False, default False.
1) When False, returns a 2-D tensor in which each row is the index of one nonzero value.
2) When True, returns a tuple of 1-D index tensors that allows advanced indexing, so x[x.nonzero(as_tuple=True)] gives all the nonzero values of x. Each index tensor in the tuple holds the nonzero indices for one particular dimension; in effect each tensor of the tuple is one column of the as_tuple=False result.
import torch
a = torch.randint(2, (3,4))
print(a)
# tensor([[0, 0, 1, 0],
# [0, 0, 1, 1],
# [1, 0, 1, 1]])
print(torch.nonzero(a))
# tensor([[0, 2],
# [1, 2],
# [1, 3],
# [2, 0],
# [2, 2],
# [2, 3]])
print(torch.nonzero(a, as_tuple=True))
# (tensor([0, 1, 1, 2, 2, 2]),
# tensor([2, 2, 3, 0, 2, 3]))
print(a[a.nonzero(as_tuple=True)])
# tensor([1, 1, 1, 1, 1, 1])
# positions of the elements of a that are greater than 0
print(torch.nonzero(a>0, as_tuple=True))
# (tensor([0, 1, 1, 2, 2, 2]),
# tensor([2, 2, 3, 0, 2, 3]))
19.torch.reshape(input, shape) → Tensor
Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor is a view of input; otherwise it is a copy.
A single dimension may be -1, in which case it is inferred from the remaining dimensions and the number of elements in input.
import torch
a = torch.arange(4.)
print(torch.reshape(a, (2, 2)))
# tensor([[ 0., 1.], [ 2., 3.]])
b = torch.tensor([[0, 1], [2, 3]])
print(torch.reshape(b, (-1,)))
# tensor([ 0, 1, 2, 3])
20.torch.squeeze(input, dim=None, *, out=None) → Tensor
Returns a tensor with all dimensions of input of size 1 removed. For example, an input of shape A×1×B×C×1×D yields an output of shape A×B×C×D. When dim is given, only that dimension is squeezed, and only if its size is 1.
import torch
x = torch.zeros(2, 1, 2, 1, 2)
print(x.size())
# torch.Size([2, 1, 2, 1, 2])
y = torch.squeeze(x)
print(y.size())
# torch.Size([2, 2, 2])
y = torch.squeeze(x, 0)
print(y.size())
# torch.Size([2, 1, 2, 1, 2])
y = torch.squeeze(x, 1)
print(y.size())
# torch.Size([2, 2, 1, 2])
21.torch.unsqueeze(input, dim) → Tensor
Returns a tensor with a dimension of size one inserted at the specified position. The result shares memory with the input.
import torch
x = torch.tensor([1, 2, 3, 4])
print(torch.unsqueeze(x, 0))
# tensor([[ 1, 2, 3, 4]])
print(torch.unsqueeze(x, 1))
# tensor([[ 1],[ 2],[ 3],[ 4]])
22.torch.transpose(input, dim0, dim1, out=None) → Tensor
Returns the transpose of the input tensor input, with dimensions dim0 and dim1 swapped. The output tensor shares memory with the input,
so changing one also changes the other. torch.t(input, out=None) → Tensor
takes a matrix (2-D tensor) and transposes dimensions 0 and 1; it can be viewed as shorthand
for transpose(input, 0, 1).
import torch
x = torch.randn(2, 3)
print(x)
# tensor([[ 1.0028, -0.9893, 0.5809], [-0.1669, 0.7299, 0.4942]])
print(torch.transpose(x, 0, 1))
# tensor([[ 1.0028, -0.1669], [-0.9893, 0.7299], [ 0.5809, 0.4942]])
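Continuing the example, the shorthand gives the same result:
print(torch.t(x))  # same as torch.transpose(x, 0, 1)
print(torch.equal(torch.t(x), torch.transpose(x, 0, 1)))  # True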
23.torch.unbind(input, dim=0) → seq
Removes a tensor dimension, returning a tuple of all slices along the given dimension (each slice with that dimension removed).
import torch
# dim=0
print(torch.unbind(torch.tensor([[1, 2, 3],[4, 5, 6],[7, 8, 9]])))
# (tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))
# dim=1
a = torch.rand(3, 4)
print(a)
# tensor([[0.4433, 0.5060, 0.8613, 0.1414],
# [0.4245, 0.5876, 0.4906, 0.4352],
# [0.1293, 0.4648, 0.1066, 0.7602]])
b = torch.unbind(a, 1)
print(b)
# (tensor([0.4433, 0.4245, 0.1293]),
# tensor([0.5060, 0.5876, 0.4648]),
# tensor([0.8613, 0.4906, 0.1066]),
# tensor([0.1414, 0.4352, 0.7602]))
# dim=2
a = torch.rand(3, 4, 2)
print(a)
# tensor([[[0.5595, 0.1910],[0.0918, 0.0681],[0.7964, 0.6436],[0.0025, 0.6071]],
# [[0.1794, 0.6847],[0.4248, 0.2443],[0.6551, 0.3341],[0.0331, 0.5331]],
# [[0.7538, 0.3053],[0.3053, 0.7342],[0.1947, 0.2462],[0.9642, 0.5596]]])
b = torch.unbind(a, 2)
print(b)
# (tensor([[0.5595, 0.0918, 0.7964, 0.0025],
# [0.1794, 0.4248, 0.6551, 0.0331],
# [0.7538, 0.3053, 0.1947, 0.9642]]),
# tensor([[0.1910, 0.0681, 0.6436, 0.6071],
# [0.6847, 0.2443, 0.3341, 0.5331],
# [0.3053, 0.7342, 0.2462, 0.5596]]))
24.torch.split(tensor, split_size, dim=0)
Splits the tensor into equally shaped chunks (when divisible). If the size of the tensor along the given
dimension is not divisible by split_size, the last chunk will be smaller than the others.
Parameters:
- tensor (Tensor) – the tensor to split
- split_size (int) – the size of a single chunk
- dim (int) – the dimension along which to split
import torch
x = torch.randn(4,3)
print(x.shape) # torch.Size([4, 3])
print(torch.split(x, 3, 0))
#(tensor([[ 1.6679, 0.4135, 0.4897],
# [-1.5495, 0.8439, 1.3431],
# [-1.7111, -1.4038, 1.5968]]),
# tensor([[-0.1251, -2.5938, -0.1291]]))
print(torch.split(x, 3, 1))
# (tensor([[ 1.6679, 0.4135, 0.4897],
# [-1.5495, 0.8439, 1.3431],
# [-1.7111, -1.4038, 1.5968],
# [-0.1251, -2.5938, -0.1291]]),)
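In recent PyTorch versions the second argument is named split_size_or_sections and may also be a list giving each chunk's size explicitly; a sketch:
import torch
x = torch.randn(4, 3)
parts = torch.split(x, [1, 3], 0)
print([p.shape for p in parts])  # [torch.Size([1, 3]), torch.Size([3, 3])]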
25. torch.where(condition, x, y) → Tensor
Returns a tensor of elements selected from x where condition is True and from y elsewhere.
import torch
x = torch.randn(3, 2)
y = torch.ones(3, 2)
print(x)
# tensor([[-0.4620, 0.3139],
# [ 0.3898, -0.7197],
# [ 0.0478, -0.1657]])
print(torch.where(x > 0, x, y))
# tensor([[ 1.0000, 0.3139],
# [ 0.3898, 1.0000],
# [ 0.0478, 1.0000]])
x = torch.randn(2, 2, dtype=torch.double)
print(x)
# tensor([[ 1.0779, 0.0383],
# [-0.8785, -1.1089]], dtype=torch.float64)
print(torch.where(x > 0, x, 0.))
# tensor([[1.0779, 0.0383],
# [0.0000, 0.0000]], dtype=torch.float64)
torch.where(condition) is equivalent to torch.nonzero(condition, as_tuple=True).
import torch
a = torch.tensor([[1, 0, -1, 2],[2, 4, 0, 3], [-1, 0, 2, 0]])
b = torch.where(a>0)
c = torch.nonzero(a>0, as_tuple=True)
print(b)
print(c)
# (tensor([0, 0, 1, 1, 1, 2]), tensor([0, 3, 0, 1, 3, 2]))
# (tensor([0, 0, 1, 1, 1, 2]), tensor([0, 3, 0, 1, 3, 2]))
26. torch.randperm(n, out=None) → LongTensor
Given an integer n, returns a random permutation of the integers from 0 to n - 1.
import torch
print(torch.randperm(4))
# tensor([0, 2, 3, 1])
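A typical use is shuffling the rows of a tensor (a sketch with a made-up data tensor):
import torch
data = torch.arange(12).reshape(4, 3)
perm = torch.randperm(data.size(0))
print(data[perm])  # the same 4 rows in random order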
27.Random sampling
torch.manual_seed(seed): sets the seed used to generate random numbers.
torch.initial_seed(): returns the initial seed for generating random numbers (a python long).
torch.bernoulli(input, out=None) → Tensor: draws binary random numbers (0 or 1) from a Bernoulli distribution.
- input (Tensor) – a tensor of probability values for the Bernoulli distribution
- out (Tensor, optional) – the output tensor (optional)
import torch
a = torch.Tensor(3, 3).uniform_(0, 1)
print(a)
# tensor([[0.1333, 0.3842, 0.1807],
# [0.0344, 0.8019, 0.3724],
# [0.4385, 0.8115, 0.5870]])
print(torch.bernoulli(a))
# tensor([[0., 1., 1.],
# [0., 1., 0.],
# [0., 1., 0.]])
torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor: returns a tensor where each row contains num_samples indices sampled from the multinomial distribution defined by the corresponding row of input.
- input (Tensor) – a tensor containing probability weights
- num_samples (int) – the number of samples to draw
- replacement (bool, optional) – whether samples may be drawn with replacement
- out (Tensor, optional) – the result tensor
import torch
weights = torch.tensor([0, 10, 3, 0], dtype=torch.float)
print(torch.multinomial(weights, 4))
# tensor([2, 1, 0, 3])
# Note: with replacement=False, recent PyTorch versions require num_samples to be at most the number
# of nonzero weights, so this call may raise a RuntimeError there; older versions drew the zero-weight
# categories last, as shown.
print(torch.multinomial(weights, 2))
# tensor([2, 1])
torch.normal(means, std, out=None): returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given element-wise by means and std.
import torch
a = torch.normal(mean=torch.arange(1, 11, dtype=torch.float32), std=torch.arange(1, 0, -0.1))
print(a)
# tensor([-0.0578, 3.6174, 2.8741, 3.5766, 6.3145, 5.9814, 7.5412, 8.1185, 8.9289, 10.0889])
b= torch.normal(mean=1., std=torch.arange(1, 0, -0.1))
print(b)
# tensor([1.0448, 1.5626, 1.3038, 1.0531, 1.8195, 0.4030, 1.0526, 1.0245, 1.1231, 1.1754])
c= torch.normal(mean=0., std=1., size=(2, 3))
print(c)
# tensor([[ 0.9723, 0.8284, -1.7362], [ 1.2385, 1.1260, 3.1365]])