The previous post analyzed the Body Fat Estimation example of the Deep Learning Toolbox from the help documentation, but did not dig into fitnet itself. Why does fitnet produce such a good fit? This post studies the fitnet data structure.

The fitnet data structure

First, create a default fitnet object:

a = fitnet

which displays the following:

a =
     Neural Network
               name: 'Function Fitting Neural Network'
           userdata: (your custom info)
     dimensions:
          numInputs: 1
          numLayers: 2
         numOutputs: 1
     numInputDelays: 0
     numLayerDelays: 0
  numFeedbackDelays: 0
  numWeightElements: 10
         sampleTime: 1
     connections:
        biasConnect: [1; 1]
       inputConnect: [1; 0]
       layerConnect: [0 0; 1 0]
      outputConnect: [0 1]
     subobjects:
              input: Equivalent to inputs{1}
             output: Equivalent to outputs{2}
             inputs: {1x1 cell array of 1 input}
             layers: {2x1 cell array of 2 layers}
            outputs: {1x2 cell array of 1 output}
             biases: {2x1 cell array of 2 biases}
       inputWeights: {2x1 cell array of 1 weight}
       layerWeights: {2x2 cell array of 1 weight}
     functions:
           adaptFcn: 'adaptwb'
         adaptParam: (none)
           derivFcn: 'defaultderiv'
          divideFcn: 'dividerand'
        divideParam: .trainRatio, .valRatio, .testRatio
         divideMode: 'sample'
            initFcn: 'initlay'
         performFcn: 'mse'
       performParam: .regularization, .normalization
           plotFcns: {'plotperform', 'plottrainstate', 'ploterrhist',
                     'plotregression', 'plotfit'}
         plotParams: {1x5 cell array of 5 params}
           trainFcn: 'trainlm'
         trainParam: .showWindow, .showCommandLine, .show, .epochs,
                     .time, .goal, .min_grad, .max_fail, .mu, .mu_dec,
                     .mu_inc, .mu_max
     weight and bias values:
                 IW: {2x1 cell} containing 1 input weight matrix
                 LW: {2x2 cell} containing 1 layer weight matrix
                  b: {2x1 cell} containing 2 bias vectors
     methods:
              adapt: Learn while in continuous use
          configure: Configure inputs & outputs
             gensim: Generate Simulink model
               init: Initialize weights & biases
            perform: Calculate performance
                sim: Evaluate network outputs given inputs
              train: Train network with examples
               view: View diagram
        unconfigure: Unconfigure inputs & outputs
     evaluate:       outputs = a(inputs)

name

The network's display name; it has no effect on the computation.

dimensions (network dimensions)

The dimensions fields are easiest to read alongside the network diagram:

view(fitnet)

(Figure: network diagram produced by view, showing an input feeding a Hidden layer of 10 neurons, followed by an Output layer)

numInputs: 1, i.e. the network takes one input;

numLayers: 2, i.e. the network has two layers, a Hidden layer and an Output layer;

numOutputs: 1, i.e. the network produces one output;

numInputDelays: input delay, 0 by default;

numLayerDelays: layer delay, 0 by default;

numFeedbackDelays: feedback delay, 0 by default;

numWeightElements: the total number of elements in IW, LW, and b combined. For the freshly created network this is 10, all of which comes from the hidden layer's 10×1 bias vector: the input and output are not yet configured (size 0), so IW and LW are still empty, as the diagram above suggests.

sampleTime: sample time, 1 by default.
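The two-layer computation these dimensions describe can be sketched in a few lines. The following is an illustrative Python sketch, not toolbox code; it assumes the network has been configured with 3 input elements, and uses the fact that tansig(n) is mathematically tanh(n) while purelin is the identity.

```python
import numpy as np

def tansig(n):
    # MATLAB's tansig(n) = 2/(1+exp(-2n)) - 1, which equals tanh(n)
    return np.tanh(n)

def fitnet_forward(x, IW, b1, LW, b2):
    # Hidden layer: net input (netsum of weighted input plus bias), then tansig
    hidden = tansig(IW @ x + b1)
    # Output layer: netsum of layer weight plus bias, then purelin (identity)
    return LW @ hidden + b2

# Shapes once the default network is configured with a 3-element input:
# numInputs = 1 (one input vector), numLayers = 2, numOutputs = 1
IW = np.zeros((10, 3))   # hidden layer: 10 neurons, 3 input elements
b1 = np.zeros(10)
LW = np.zeros((1, 10))   # output layer: 1 neuron fed by 10 hidden neurons
b2 = np.zeros(1)

y = fitnet_forward(np.ones(3), IW, b1, LW, b2)  # all-zero weights, so y = b2
```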

connections (network topology)

biasConnect specifies which layers have a bias. It can be any Nl×1 Boolean matrix, where Nl is numLayers; biasConnect(i) = 1 means layer i has a bias. [1; 1] means both the hidden and the output layer have one;

inputConnect specifies which layers receive weights from the network inputs. It can be any Nl×Ni Boolean matrix, where Nl is numLayers and Ni is numInputs; inputConnect(i,j) = 1 means layer i has a weight coming from input j. [1; 0] means only the hidden layer is connected to the input;

layerConnect specifies which layers receive weights from other layers. It can be any Nl×Nl Boolean matrix, where Nl is numLayers; layerConnect(i,j) = 1 means there is a weight from layer j to layer i. [0 0; 1 0] means the output layer receives a weight from the hidden layer;

outputConnect specifies which layers produce network outputs. It can be any 1×Nl Boolean matrix, where Nl is numLayers; outputConnect(i) = 1 means layer i produces an output. [0 1] means the output layer is the one that produces the network's output.
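These four Boolean matrices fully determine which weight and bias cells exist. A small Python sketch (0-based indices here, so layer 0 is Hidden and layer 1 is Output) derives the populated cells from the values above:

```python
import numpy as np

biasConnect   = np.array([1, 1], dtype=bool)            # [1; 1]
inputConnect  = np.array([[1], [0]], dtype=bool)        # [1; 0]
layerConnect  = np.array([[0, 0], [1, 0]], dtype=bool)  # [0 0; 1 0]
outputConnect = np.array([0, 1], dtype=bool)            # [0 1]

# Which cells of inputWeights / layerWeights / biases will be populated:
input_weight_cells = [tuple(ij) for ij in np.argwhere(inputConnect).tolist()]
layer_weight_cells = [tuple(ij) for ij in np.argwhere(layerConnect).tolist()]
bias_cells         = np.flatnonzero(biasConnect).tolist()
output_layers      = np.flatnonzero(outputConnect).tolist()
```

Only one input weight cell and one layer weight cell exist (inputWeights{1,1} and layerWeights{2,1} in MATLAB's 1-based indexing), which matches the "cell array of 1 weight" entries in the listing above.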

subobjects

input is shorthand for inputs{1};

output is shorthand for outputs{2};

inputs is an Ni×1 cell, where Ni is numInputs. Its default contents:

>> a.inputs{1}
 ans = 
     Neural Network Input
               name: 'Input'
     feedbackOutput: []
        processFcns: {'removeconstantrows', 'mapminmax'}
      processParams: {1x2 cell array of 2 params}
    processSettings: {1x2 cell array of 2 settings}
     processedRange: []
      processedSize: 0
              range: []
               size: 0
           userdata: (your custom info)

outputs is a 1×Nl cell, where Nl is numLayers. Its default contents:

>> a.outputs{2}
 ans = 
     Neural Network Output
               name: 'Output'
      feedbackInput: []
      feedbackDelay: 0
       feedbackMode: 'none'
        processFcns: {'removeconstantrows', 'mapminmax'}
      processParams: {1x2 cell array of 2 params}
    processSettings: {1x2 cell array of 2 settings}
     processedRange: []
      processedSize: 0
              range: []
               size: 0
           userdata: (your custom info)
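Both the input and the output list 'removeconstantrows' and 'mapminmax' as processing functions. The second rescales each row of the data linearly to [-1, 1]. A Python sketch of that transform (an illustration of the standard formula, not the toolbox implementation):

```python
import numpy as np

def mapminmax_apply(x, ymin=-1.0, ymax=1.0):
    # Rescale each row linearly from [row min, row max] to [ymin, ymax]
    xmin = x.min(axis=1, keepdims=True)
    xmax = x.max(axis=1, keepdims=True)
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin

# One made-up feature row spanning [22, 81] (cf. the age range later on)
x = np.array([[22.0, 51.5, 81.0]])
xn = mapminmax_apply(x)   # -> [[-1, 0, 1]]
```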
layers is an Nl×1 cell, where Nl is numLayers. The hidden layer's defaults:

>> a.layers{1}
 ans = 
     Neural Network Layer
               name: 'Hidden'
         dimensions: 10
        distanceFcn: (none)
      distanceParam: (none)
          distances: []
            initFcn: 'initnw'
        netInputFcn: 'netsum'
      netInputParam: (none)
          positions: []
              range: [10x2 double]
               size: 10
        topologyFcn: (none)
        transferFcn: 'tansig'
      transferParam: (none)
           userdata: (your custom info)

The output layer's defaults:
 >> a.layers{2}
 ans = 
     Neural Network Layer
               name: 'Output'
         dimensions: 0
        distanceFcn: (none)
      distanceParam: (none)
          distances: []
            initFcn: 'initnw'
        netInputFcn: 'netsum'
      netInputParam: (none)
          positions: []
              range: []
               size: 0
        topologyFcn: (none)
        transferFcn: 'purelin'
      transferParam: (none)
           userdata: (your custom info)

biases is an Nl×1 cell, where Nl is numLayers. The hidden layer's bias defaults:
>> a.biases{1}
 ans = 
     Neural Network Bias
            initFcn: (none)
              learn: true
           learnFcn: 'learngdm'
         learnParam: .lr, .mc
               size: 10
           userdata: (your custom info)

The output layer's bias defaults:
 >> a.biases{2}
 ans = 
     Neural Network Bias
            initFcn: (none)
              learn: true
           learnFcn: 'learngdm'
         learnParam: .lr, .mc
               size: 0
           userdata: (your custom info)

inputWeights is an Nl×Ni cell, where Nl is numLayers and Ni is numInputs. The input weight's defaults:
>> a.inputWeights{1}
 ans = 
     Neural Network Weight
             delays: 0
            initFcn: (none)
       initSettings: .range
              learn: true
           learnFcn: 'learngdm'
         learnParam: .lr, .mc
               size: [10 0]
          weightFcn: 'dotprod'
        weightParam: (none)
           userdata: (your custom info)

layerWeights is an Nl×Nl cell, where Nl is numLayers. Its default contents:
>> a.layerWeights{1,1}
 ans =
      []
 >> a.layerWeights{1,2}
 ans =
      []
 >> a.layerWeights{2,1}
 ans = 
     Neural Network Weight
             delays: 0
            initFcn: (none)
       initSettings: .range
              learn: true
           learnFcn: 'learngdm'
         learnParam: .lr, .mc
               size: [0 10]
          weightFcn: 'dotprod'
        weightParam: (none)
           userdata: (your custom info)
 >> a.layerWeights{2,2}
 ans =
      []

weight and bias values

IW holds the weights coming from the inputs into each layer, an Nl×Ni cell where Nl is numLayers and Ni is numInputs. The defaults below are empty matrices, since the input is not yet configured:
>> a.IW{1,1}
 ans =
   10×0 empty double matrix
 >> a.IW{2,1}
 ans =
      []

LW holds the layer-to-layer weights, an Nl×Nl cell where Nl is numLayers. The defaults:
>> a.LW{1,1}
 ans =
      []
 >> a.LW{1,2}
 ans =
      []
 >> a.LW{2,2}
 ans =
      []
 >> a.LW{2,1}
 ans =
   0×10 empty double matrix

b holds the biases, an Nl×1 cell where Nl is numLayers. The defaults are all zeros or empty:
>> a.b{1}
 ans =
      0
      0
      0
      0
      0
      0
      0
      0
      0
      0
 >> a.b{2}
 ans =
   0×1 empty double column vector
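The weights and biases above all list learnFcn 'learngdm', gradient descent with momentum, with learnParam fields .lr (learning rate) and .mc (momentum constant). Per the learngdm documentation the update rule is dW = mc*dWprev + (1-mc)*lr*gW; a one-line Python sketch, using the usual default-style parameters lr = 0.01 and mc = 0.9:

```python
def learngdm_step(dW_prev, gW, lr=0.01, mc=0.9):
    # dW = mc*dWprev + (1-mc)*lr*gW  (lr = learning rate, mc = momentum)
    return mc * dW_prev + (1 - mc) * lr * gW

# With no previous step, the update reduces to (1-mc)*lr*gW
dW = learngdm_step(0.0, 1.0)   # -> 0.001 up to float rounding
```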

The data structure after running the Body Fat Estimation example

The parameters that change are the interesting ones. numWeightElements goes from the default 10 to 226, because 15×13 (IW{1,1}) + 1×15 (LW{2,1}) + 15 + 1 (the two bias vectors) = 226. The count is tied directly to the network topology: 13 inputs, 15 hidden neurons, one output.
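The arithmetic can be checked directly from the trained shapes (a small Python sketch; shapes as reported by the network, with 13 inputs, 15 hidden neurons, and 1 output):

```python
# Elements contributed by each non-empty cell after training:
iw = 15 * 13   # net.IW{1,1}: hidden-layer weights, 15 neurons x 13 inputs
lw = 1 * 15    # net.LW{2,1}: output weights, 1 neuron x 15 hidden outputs
b1 = 15        # net.b{1}: hidden-layer biases
b2 = 1         # net.b{2}: output bias

numWeightElements = iw + lw + b1 + b2   # -> 226
```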

After training, the parameters that actually take part in the computation have changed; these values carry the information the network has really learned.

IW{1,1} becomes:

(Figure: screenshot of net.IW{1,1}, now a 15×13 matrix of trained weights)

LW{2,1} becomes:

>> net.LW{2,1}
 ans =
   Columns 1 through 13
    -0.2171   -0.0488   -0.4684   -0.7564    0.2761    0.4006    0.0158    0.1042    0.2304    0.3622   -0.2882    0.1682   -0.7007
   Columns 14 through 15
     0.0786    0.5335

b becomes:
>> net.b{1}
 ans =
     2.1510
     1.9738
     1.5234
    -1.9311
     0.5286
     2.5662
    -0.4979
     0.1441
     0.1731
    -0.2444
    -1.6653
     1.2786
    -1.7759
    -2.2227
    -2.0938
 >> net.b{2}
 ans =
    -0.7651

inputs{1}.range:
>> net.inputs{1}.range
 ans =
    22.0000   81.0000
   118.5000  363.1500
    29.5000   77.7500
    31.1000   51.2000
    79.3000  136.2000
    69.4000  148.1000
    85.0000  147.7000
    47.2000   87.3000
    33.0000   49.1000
    19.1000   33.9000
    24.8000   45.0000
    21.0000   34.9000
     15.8000   21.4000

outputs{2}.range:
>> net.outputs{2}.range
 ans =
          0   47.5000
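inputs{1}.range and outputs{2}.range store each row's [min, max] over the training data, which is exactly what mapminmax needs for its rescaling. A Python sketch of the relationship on made-up data (the real body-fat matrix is not reproduced here):

```python
import numpy as np

# Two made-up feature rows (features x samples)
X = np.array([[22.0, 35.0, 81.0],
              [118.5, 200.0, 363.15]])

# range is a features-x-2 matrix of per-row [min, max]
range_ = np.column_stack([X.min(axis=1), X.max(axis=1)])
# -> [[ 22.  ,  81.  ],
#     [118.5 , 363.15]]
```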