While modifying someone else's code recently, I noticed many differences between Keras 1.x and 2.x, so I looked them up on the official site. Below is the original text of the official release notes.

Keras 2 release notes

This document details changes, in particular API changes, occurring from Keras 1 to Keras 2.

Training

  • The `nb_epoch` argument has been renamed `epochs` everywhere.
  • The methods `fit_generator`, `evaluate_generator` and `predict_generator` now work by drawing a number of batches from a generator (number of training steps), rather than a number of samples.
  • `samples_per_epoch` was renamed `steps_per_epoch` in `fit_generator` (see the sketch after this list).
  • `nb_val_samples` was renamed `validation_steps` in `fit_generator`.
  • `val_samples` was renamed `steps` in `evaluate_generator` and `predict_generator`.
  • It is now possible to manually add a loss to a model by calling `model.add_loss(loss_tensor)`. The loss is added to the other losses of the model and minimized during training.
  • It is also possible to not apply any loss to a specific model output. If you pass `None` as the `loss` argument for an output (e.g. in compile, `loss={'output_1': None, 'output_2': 'mse'}`), the model will expect no Numpy arrays to be fed for this output when using `fit`, `train_on_batch`, or `fit_generator`. The output values are still returned as usual when using `predict`.
  • In TensorFlow, models can now be trained using `fit` if some of their inputs (or even all) are TensorFlow queues or variables, rather than placeholders. See this test for specific examples.
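
A minimal sketch of the renamed training arguments and the `add_loss` hook; the toy model, all shapes and sizes, and the generator are hypothetical:

```python
import numpy as np
from keras import backend as K
from keras.layers import Dense, Input
from keras.models import Model

# A toy model; shapes and sizes are hypothetical.
x_in = Input(shape=(8,))
y_out = Dense(1)(x_in)
model = Model(inputs=x_in, outputs=y_out)

# An extra loss tensor is minimized alongside the compiled loss.
model.add_loss(0.01 * K.sum(K.square(y_out)))
model.compile(optimizer='sgd', loss='mse')

x = np.random.random((32, 8))
y = np.random.random((32, 1))

# nb_epoch -> epochs
model.fit(x, y, epochs=2, batch_size=16)

# fit_generator now counts batches rather than samples:
# samples_per_epoch -> steps_per_epoch, nb_val_samples -> validation_steps
def batch_gen():
    while True:
        yield x, y

model.fit_generator(batch_gen(), steps_per_epoch=2, epochs=2,
                    validation_data=batch_gen(), validation_steps=1)
```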

Losses & metrics

  • The `objectives` module has been renamed `losses`.
  • Several legacy metric functions have been removed, namely `matthews_correlation`, `precision`, `recall`, `fbeta_score`, `fmeasure`.
  • Custom metric functions can no longer return a dict; they must return a single tensor (see the sketch after this list).
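
For instance, a valid Keras 2 custom metric returns one tensor; the metric below is a hypothetical example:

```python
from keras import backend as K

# Must return a single tensor, not a dict of values.
def mean_abs_error(y_true, y_pred):
    return K.mean(K.abs(y_true - y_pred))

# Used like any built-in metric:
# model.compile(optimizer='sgd', loss='mse', metrics=[mean_abs_error])
```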

Models

  • Constructor arguments for `Model` have been renamed (see the sketch after this list):
      • `input` -> `inputs`
      • `output` -> `outputs`
  • The `Sequential` model no longer supports the `set_input` method.
  • For any model saved with Keras 2.0 or higher, weights trained with backend X will be converted to work with backend Y without any manual conversion step.
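
A quick illustration of the renamed constructor arguments (layer sizes hypothetical):

```python
from keras.layers import Dense, Input
from keras.models import Model

x_in = Input(shape=(16,))
y_out = Dense(4)(x_in)

# Keras 1: Model(input=x_in, output=y_out)
# Keras 2:
model = Model(inputs=x_in, outputs=y_out)
```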

Layers

Removals

Deprecated layers `MaxoutDense`, `Highway` and `TimeDistributedDense` have been removed.

Call method

  • All layers that use the learning phase now support a `training` argument in `call` (Python boolean or symbolic tensor), allowing you to specify the learning phase on a layer-by-layer basis. E.g. by calling a `Dropout` instance as `dropout(inputs, training=True)` you obtain a layer that will always apply dropout, regardless of the current global learning phase. The `training` argument defaults to the global Keras learning phase everywhere.
  • The `call` method of layers can now take arbitrary keyword arguments, e.g. you can define a custom layer with a call signature like `call(inputs, alpha=0.5)`, and then pass an `alpha` keyword argument when calling the layer (only with the functional API, naturally). See the sketch after this list.
  • `__call__` now makes use of TensorFlow `name_scope`, so that your TensorFlow graphs will look pretty and well-structured in TensorBoard.
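
A minimal sketch of both features; `ScaleLayer` and its `alpha` keyword are hypothetical:

```python
from keras.engine.topology import Layer
from keras.layers import Dropout, Input
from keras.models import Model

class ScaleLayer(Layer):
    """Hypothetical layer whose call() takes an extra keyword argument."""
    def call(self, inputs, alpha=0.5):
        return inputs * alpha

    def compute_output_shape(self, input_shape):
        return input_shape

x_in = Input(shape=(8,))
# Extra call kwargs are passed through with the functional API:
scaled = ScaleLayer()(x_in, alpha=0.3)
# A Dropout that is always active, regardless of the global learning phase:
dropped = Dropout(0.5)(scaled, training=True)
model = Model(inputs=x_in, outputs=dropped)
```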

All layers taking a legacy `dim_ordering` argument

`dim_ordering` has been renamed `data_format`. It now takes two values: `"channels_first"` (formerly `"th"`) and `"channels_last"` (formerly `"tf"`).

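For example (filter count and kernel size hypothetical):

```python
from keras.layers import Conv2D

# Keras 1: Convolution2D(32, 3, 3, dim_ordering='tf')
# Keras 2:
conv = Conv2D(32, (3, 3), data_format='channels_last')
```
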
Dense layer

Changed interface:

  • `output_dim` -> `units` (see the sketch after this list)
  • `init` -> `kernel_initializer`
  • added `bias_initializer` argument
  • `W_regularizer` -> `kernel_regularizer`
  • `b_regularizer` -> `bias_regularizer`
  • `W_constraint` -> `kernel_constraint`
  • `b_constraint` -> `bias_constraint`
  • `bias` -> `use_bias`
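
The same `Dense` layer written against both APIs (sizes and settings hypothetical):

```python
from keras import regularizers
from keras.layers import Dense

# Keras 1: Dense(output_dim=64, init='glorot_uniform',
#                W_regularizer=l2(0.01), bias=True)
# Keras 2:
layer = Dense(units=64,
              kernel_initializer='glorot_uniform',
              bias_initializer='zeros',
              kernel_regularizer=regularizers.l2(0.01),
              use_bias=True)
```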

Dropout, SpatialDropout*D, GaussianDropout

Changed interface:

  • `p` -> `rate`

Embedding

Convolutional layers

  • The `AtrousConvolution1D` and `AtrousConvolution2D` layers have been deprecated. Their functionality is instead supported via the `dilation_rate` argument in `Convolution1D` and `Convolution2D` layers.
  • `Convolution*` layers are renamed `Conv*`.
  • The `Deconvolution2D` layer is renamed `Conv2DTranspose`.
  • The `Conv2DTranspose` layer no longer requires an `output_shape` argument, making its use much easier.

Interface changes common to all convolutional layers:

  • `nb_filter` -> `filters` (see the sketch after this list)
  • The kernel dimension arguments become a single tuple argument, `kernel_size`. E.g. a legacy call `Conv2D(10, 3, 3)` becomes `Conv2D(10, (3, 3))`.
  • `kernel_size` can be set to an integer instead of a tuple, e.g. `Conv2D(10, 3)` is equivalent to `Conv2D(10, (3, 3))`.
  • `subsample` -> `strides`. Can also be set to an integer.
  • `border_mode` -> `padding`
  • `init` -> `kernel_initializer`
  • added `bias_initializer` argument
  • `W_regularizer` -> `kernel_regularizer`
  • `b_regularizer` -> `bias_regularizer`
  • `b_constraint` -> `bias_constraint`
  • `bias` -> `use_bias`
  • `dim_ordering` -> `data_format`
  • In the `SeparableConv2D` layers, `init` is split into `depthwise_initializer` and `pointwise_initializer`.
  • Added `dilation_rate` argument in `Conv2D` and `Conv1D`.
  • 1D convolution kernels are now saved as a 3D tensor (instead of 4D as before).
  • 2D and 3D convolution kernels are now saved in format `spatial_dims + (input_depth, depth)`, even with `data_format="channels_first"`.
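
A sketch contrasting the two call styles (all numbers hypothetical):

```python
from keras.layers import Conv2D, SeparableConv2D

# Keras 1: Convolution2D(10, 3, 3, subsample=(2, 2), border_mode='same')
# Keras 2: kernel size is a tuple, strides and padding are renamed:
conv = Conv2D(filters=10, kernel_size=(3, 3), strides=(2, 2), padding='same')

# kernel_size and strides accept plain integers too:
conv_int = Conv2D(10, 3, strides=2, padding='same')

# Dilated convolution replaces the AtrousConvolution* layers:
dilated = Conv2D(10, (3, 3), dilation_rate=2)

# SeparableConv2D splits init into two initializers:
sep = SeparableConv2D(10, (3, 3),
                      depthwise_initializer='glorot_uniform',
                      pointwise_initializer='glorot_uniform')
```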

Pooling1D

  • `pool_length` -> `pool_size`
  • `stride` -> `strides`
  • `border_mode` -> `padding`

Pooling2D, 3D

  • `border_mode` -> `padding` (see the sketch after this list)
  • `dim_ordering` -> `data_format`
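
For example (pool sizes hypothetical):

```python
from keras.layers import MaxPooling1D, MaxPooling2D

# Keras 1: MaxPooling1D(pool_length=2, stride=2, border_mode='valid')
# Keras 2:
pool1d = MaxPooling1D(pool_size=2, strides=2, padding='valid')
pool2d = MaxPooling2D(pool_size=(2, 2), padding='same',
                      data_format='channels_last')
```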

ZeroPadding layers

The `padding` argument of the `ZeroPadding2D` and `ZeroPadding3D` layers must be a tuple of length 2 and 3 respectively. Each entry `i` contains by how much to pad spatial dimension `i`. If it's an integer, symmetric padding is applied. If it's a tuple of integers, asymmetric padding is applied.
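
A sketch of the two forms (padding amounts hypothetical):

```python
from keras.layers import ZeroPadding2D

# Symmetric: pad rows by 1 on each side, columns by 2 on each side.
sym = ZeroPadding2D(padding=(1, 2))

# Asymmetric: each entry is a (before, after) tuple per spatial dimension.
asym = ZeroPadding2D(padding=((1, 2), (3, 4)))
```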

Upsampling1D

  • `length` -> `size`

BatchNormalization

The `mode` argument of `BatchNormalization` has been removed; BatchNorm now only supports mode 0 (use batch metrics for feature-wise normalization during training, and use moving metrics for feature-wise normalization during testing).

  • `beta_init` -> `beta_initializer` (see the sketch after this list)
  • `gamma_init` -> `gamma_initializer`
  • added arguments `center`, `scale` (booleans, whether to use a `beta` and a `gamma` respectively)
  • added arguments `moving_mean_initializer`, `moving_variance_initializer`
  • added arguments `beta_regularizer`, `gamma_regularizer`
  • added arguments `beta_constraint`, `gamma_constraint`
  • attribute `running_mean` is renamed `moving_mean`
  • attribute `running_std` is renamed `moving_variance` (it is in fact a variance with the current implementation).
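
A sketch with the renamed and added constructor arguments:

```python
from keras.layers import BatchNormalization

# Keras 1: BatchNormalization(mode=0, beta_init='zero', gamma_init='one')
# Keras 2 (mode is gone; mode-0 behavior is the only one):
bn = BatchNormalization(beta_initializer='zeros',
                        gamma_initializer='ones',
                        center=True, scale=True,
                        moving_mean_initializer='zeros',
                        moving_variance_initializer='ones')
```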

ConvLSTM2D

Same changes as for convolutional layers and recurrent layers apply.

PReLU

  • `init` -> `alpha_initializer`

GaussianNoise

  • `sigma` -> `stddev`

Recurrent layers

  • `output_dim` -> `units` (see the sketch after this list)
  • `init` -> `kernel_initializer`
  • `inner_init` -> `recurrent_initializer`
  • added argument `bias_initializer`
  • `W_regularizer` -> `kernel_regularizer`
  • `b_regularizer` -> `bias_regularizer`
  • added arguments `kernel_constraint`, `recurrent_constraint`, `bias_constraint`
  • `dropout_W` -> `dropout`
  • `dropout_U` -> `recurrent_dropout`
  • `consume_less` -> `implementation`. String values have been replaced with integers: implementation 0 (default), 1 or 2.
  • LSTM only: the argument `forget_bias_init` has been removed. Instead there is a boolean argument `unit_forget_bias`, defaulting to `True`.
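
A sketch of an LSTM against the renamed arguments (sizes and rates hypothetical):

```python
from keras.layers import LSTM

# Keras 1: LSTM(output_dim=32, init='glorot_uniform', inner_init='orthogonal',
#               dropout_W=0.2, dropout_U=0.2, consume_less='gpu')
# Keras 2:
lstm = LSTM(units=32,
            kernel_initializer='glorot_uniform',
            recurrent_initializer='orthogonal',
            dropout=0.2, recurrent_dropout=0.2,
            implementation=2,       # was consume_less; now an integer
            unit_forget_bias=True)  # replaces forget_bias_init
```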

Lambda

The `Lambda` layer now supports a `mask` argument.

Utilities

Utilities should now be imported from `keras.utils` rather than from specific submodules (e.g. no more `keras.utils.np_utils...`).
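
For instance:

```python
# Keras 1: from keras.utils.np_utils import to_categorical
# Keras 2:
from keras.utils import to_categorical

labels = to_categorical([0, 2, 1], num_classes=3)
```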

Backend

random_normal and truncated_normal

  • `std` -> `stddev`
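
For example:

```python
from keras import backend as K

# Keras 1: K.random_normal(shape=(2, 3), mean=0.0, std=1.0)
# Keras 2:
noise = K.random_normal(shape=(2, 3), mean=0.0, stddev=1.0)
```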

Misc

  • In the backend, `set_image_ordering` and `image_ordering` are now `set_data_format` and `data_format`.
  • Any arguments (other than `nb_epoch`) prefixed with `nb_` have been renamed to be prefixed with `num_` instead. This affects two datasets and one preprocessing utility (see the sketch after this list).
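
A sketch of how these renames surface in practice; note that released Keras 2 versions expose the backend accessors as `image_data_format`/`set_image_data_format`, and the dataset loader and `Tokenizer` below are likely the utilities this note refers to:

```python
from keras import backend as K
from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer

# nb_words -> num_words in the dataset loaders and in Tokenizer:
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)
tokenizer = Tokenizer(num_words=10000)

# Backend data-format accessors as exposed in released Keras 2:
K.set_image_data_format('channels_last')
print(K.image_data_format())  # 'channels_last'
```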