A set of slim functions in TensorFlow

Contents

slim.get_model_variables()

slim.get_trainable_variables()

slim.learning.train()

slim.fully_connected()

slim.softmax()

slim.get_or_create_global_step()

slim.arg_scope()

slim.variance_scaling_initializer()

slim.l2_regularizer()

slim.flatten()

slim.max_pool2d()

slim.get_model_variables()

def get_model_variables(scope=None, suffix=None):

  return get_variables(scope, suffix, ops.GraphKeys.MODEL_VARIABLES)

Gets the list of model variables, filtered by scope and/or suffix.

Parameters:

  • scope: An optional scope used to filter the variables to return.
  • suffix: An optional suffix used to filter the variables to return.

Return value:

  • A list of variables in the model variables collection that match the given scope and suffix.
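
A minimal usage sketch, assuming TensorFlow 1.x with slim = tf.contrib.slim (the layer and scope names are illustrative):

import tensorflow as tf
slim = tf.contrib.slim

# Build a small model so that model variables exist in the graph.
inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])
net = slim.conv2d(inputs, 32, [3, 3], scope='conv1')

# All model variables (weights, biases, moving averages, ...).
all_model_vars = slim.get_model_variables()

# Only the variables created under the 'conv1' scope.
conv1_vars = slim.get_model_variables('conv1')

# Only 'conv1' variables whose name ends with the given suffix.
conv1_weights = slim.get_model_variables('conv1', suffix='weights')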

slim.get_trainable_variables()

def get_trainable_variables(scope=None, suffix=None):

  return get_variables(scope, suffix, ops.GraphKeys.TRAINABLE_VARIABLES)

Gets the list of trainable variables, filtered by scope and/or suffix.

Parameters:

  • scope: An optional scope used to filter the variables to return.
  • suffix: An optional suffix used to filter the variables to return.

Return value:

  • A list of variables in the trainable variables collection that match the given scope and suffix.
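
For contrast with get_model_variables, a small sketch (again assuming TF 1.x; names are illustrative): batch-norm moving averages show up among the model variables but not among the trainable variables.

import tensorflow as tf
slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
net = slim.conv2d(inputs, 16, [3, 3],
                  normalizer_fn=slim.batch_norm, scope='conv1')

model_vars = slim.get_model_variables('conv1')         # includes moving_mean / moving_variance
trainable_vars = slim.get_trainable_variables('conv1')  # excludes the moving averages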

slim.learning.train()

slim.learning.train(train_op, logdir, train_step_fn=train_step,
                    train_step_kwargs=_USE_DEFAULT,
                    log_every_n_steps=1, graph=None, master='',
                    is_chief=True, global_step=None,
                    number_of_steps=None, init_op=_USE_DEFAULT,
                    init_feed_dict=None, local_init_op=_USE_DEFAULT,
                    init_fn=None, ready_op=_USE_DEFAULT,
                    summary_op=_USE_DEFAULT,
                    save_summaries_secs=600,
                    summary_writer=_USE_DEFAULT,
                    startup_delay_steps=0, saver=None,
                    save_interval_secs=600, sync_optimizer=None,
                    session_config=None, session_wrapper=None,
                    trace_every_n_steps=None,
                    ignore_live_threads=False)

There are many parameters, among which the important ones are:

  • train_op: the training operation to run, typically created with slim.learning.create_train_op (it applies the gradients and returns the loss)
  • logdir: the directory where checkpoints and event files (the training data) are saved
  • save_summaries_secs: how often, in seconds, summaries are written to the event file (this controls how often TensorBoard data refreshes)
  • save_interval_secs: how often, in seconds, the model checkpoint is saved
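
A hedged end-to-end sketch of the typical pattern (the model, loss and logdir below are placeholders, not taken from the original text): build a train_op with slim.learning.create_train_op and hand it to slim.learning.train.

import tensorflow as tf
slim = tf.contrib.slim

# Placeholder data; in practice this would come from an input pipeline.
images = tf.random_normal([32, 28, 28, 1])
labels = tf.random_uniform([32], maxval=10, dtype=tf.int64)

# A small model and its loss.
net = slim.flatten(images)
logits = slim.fully_connected(net, 10, activation_fn=None)
tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
total_loss = tf.losses.get_total_loss()

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = slim.learning.create_train_op(total_loss, optimizer)

# '/tmp/slim_demo' is just an example logdir.
slim.learning.train(train_op,
                    logdir='/tmp/slim_demo',
                    number_of_steps=100,
                    save_summaries_secs=600,
                    save_interval_secs=600)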

slim.fully_connected()

def fully_connected(inputs,
                    num_outputs,
                    activation_fn=nn.relu,
                    normalizer_fn=None,
                    normalizer_params=None,
                    weights_initializer=initializers.xavier_initializer(),
                    weights_regularizer=None,
                    biases_initializer=init_ops.zeros_initializer(),
                    biases_regularizer=None,
                    reuse=None,
                    variables_collections=None,
                    outputs_collections=None,
                    trainable=True,
                    scope=None)
Adds a fully connected layer. fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided, a biases variable is created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well. Note: if the rank of inputs is greater than 2, inputs is flattened prior to the initial matrix multiplication with weights.

Parameters:

  • inputs: A tensor of at least rank 2 whose last dimension is static, i.e. [batch_size, depth] or [None, None, None, channels].
  • num_outputs: Integer or long, the number of output units in the layer.
  • activation_fn: Activation function. The default is ReLU. Set it explicitly to None to skip it and keep a linear activation.
  • normalizer_fn: A normalization function to use instead of the biases. If normalizer_fn is provided, then biases_initializer and biases_regularizer are ignored and biases is neither created nor added. Default is None, i.e. no normalizer function.
  • normalizer_params: Parameters for the normalization function.
  • weights_initializer: Initializer for the weights.
  • weights_regularizer: Optional regularizer for the weights.
  • biases_initializer: Initializer for the biases. If None, the biases are skipped.
  • biases_regularizer: Optional regularizer for the biases.
  • reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, its scope must be given.
  • variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
  • outputs_collections: Collections to which the outputs are added.
  • trainable: If True, also adds the variables to the GraphKeys.TRAINABLE_VARIABLES collection (see tf.Variable).
  • scope: Optional scope for variable_scope.

Return value:

  • A tensor variable that represents the result of a series of operations.

Possible exceptions:

  • ValueError: If x has rank less than 2 or if its last dimension is not set.
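
A minimal usage sketch (TF 1.x assumed; the sizes and scope names are illustrative):

import tensorflow as tf
slim = tf.contrib.slim

x = tf.placeholder(tf.float32, [None, 784])

# Hidden layer with the default ReLU activation.
hidden = slim.fully_connected(x, 256, scope='fc1')

# Linear output layer: activation_fn=None skips the activation.
logits = slim.fully_connected(hidden, 10, activation_fn=None, scope='fc2')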

slim.softmax()

softmax(logits, scope=None)

Performs softmax on the Nth (last) dimension of an N-dimensional logits tensor. For two-dimensional logits this reduces to tf.nn.softmax. The Nth dimension needs to have a known number of elements (the number of classes).

Parameters:

  • logits: An N-dimensional tensor, where N > 1.
  • scope: Optional scope for variable_scope.

Return value:

  • A Tensor of the same shape and type as logits.
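
A small sketch: applying slim.softmax to a rank-3 logits tensor, where the softmax is taken over the last dimension (shapes are illustrative).

import tensorflow as tf
slim = tf.contrib.slim

# Logits for a batch of 4 sequences of length 5 with 10 classes each.
logits = tf.random_normal([4, 5, 10])
probs = slim.softmax(logits)  # same shape [4, 5, 10], softmax over the last axis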

slim.get_or_create_global_step()

get_or_create_global_step(graph=None)

Returns and creates (if necessary) a global step tensor.

Parameters:

  • graph: The graph in which to create the global step tensor. If missing, the default graph is used.

Return value:

  • The global step tensor.
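
A brief sketch (TF 1.x assumed): the global step is created once and returned unchanged on later calls.

import tensorflow as tf
slim = tf.contrib.slim

global_step = slim.get_or_create_global_step()  # creates the tensor if it does not exist yet
same_step = slim.get_or_create_global_step()    # subsequent calls return the existing tensor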

slim.arg_scope()

@tf_contextlib.contextmanager
def arg_scope(list_ops_or_scope, **kwargs):
  if isinstance(list_ops_or_scope, dict):
    # Assumes that list_ops_or_scope is a scope that is being reused.
    if kwargs:
      raise ValueError('When attempting to re-use a scope by suppling a'
                       'dictionary, kwargs must be empty.')
    current_scope = list_ops_or_scope.copy()
    try:
      _get_arg_stack().append(current_scope)
      yield current_scope
    finally:
      _get_arg_stack().pop()
  else:
    # Assumes that list_ops_or_scope is a list/tuple of ops with kwargs.
    if not isinstance(list_ops_or_scope, (list, tuple)):
      raise TypeError('list_ops_or_scope must either be a list/tuple or reused '
                      'scope (i.e. dict)')
    try:
      current_scope = current_arg_scope().copy()
      for op in list_ops_or_scope:
        key = arg_scope_func_key(op)
        if not has_arg_scope(op):
          raise ValueError('%s is not decorated with @add_arg_scope',
                           _name_op(op))
        if key in current_scope:
          current_kwargs = current_scope[key].copy()
          current_kwargs.update(kwargs)
          current_scope[key] = current_kwargs
        else:
          current_scope[key] = kwargs.copy()
      _get_arg_stack().append(current_scope)
      yield current_scope
    finally:
      _get_arg_stack().pop()

Stores the default parameters for the given list_ops collection.

Parameters:

  • list_ops_or_scope: A list or tuple of operations for which to set the argument scope, or a dictionary containing the current scope. When list_ops_or_scope is a dict, kwargs must be empty. When list_ops_or_scope is a list or tuple, each op in it needs to be decorated with @add_arg_scope to work.
  • **kwargs: keyword=value pairs defining the default value of each argument for every op in list_ops. All the ops need to accept the given set of arguments.

Return value:

  • yield: current_scope, a dictionary of {op: {arg: value}}.

Possible exceptions:

  • TypeError: if list_ops is not a list or a tuple.
  • ValueError: if any op in list_ops has not been decorated with @add_arg_scope.
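
A typical usage sketch (layer sizes and scope names are illustrative): set defaults for slim.conv2d and slim.fully_connected once, and override them per call as needed.

import tensorflow as tf
slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_regularizer=slim.l2_regularizer(0.0005)):
  net = slim.conv2d(inputs, 64, [3, 3], scope='conv1')
  net = slim.conv2d(net, 64, [3, 3], scope='conv2')
  net = slim.flatten(net)
  # Per-call arguments still override the scope defaults.
  net = slim.fully_connected(net, 10, activation_fn=None, scope='logits')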

slim.variance_scaling_initializer()

def variance_scaling_initializer(factor=2.0, mode='FAN_IN', uniform=False,
                                 seed=None, dtype=dtypes.float32):

  if not dtype.is_floating:
    raise TypeError('Cannot create initializer for non-floating point type.')
  if mode not in ['FAN_IN', 'FAN_OUT', 'FAN_AVG']:
    raise TypeError('Unknown mode %s [FAN_IN, FAN_OUT, FAN_AVG]', mode)

Returns a variance-scaling initializer for weights: it is designed to keep the scale of the gradients roughly the same in all layers. The "Xavier" initialization of Xavier Glorot and Yoshua Bengio (2010), "Understanding the difficulty of training deep feedforward neural networks" (http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf), corresponds to factor=1.0, mode='FAN_AVG', uniform=True; in that case the uniform distribution has range x = sqrt(6. / (in + out)), and the normal distribution has standard deviation sqrt(2. / (in + out)).

Parameters:

  • factor: Float. A multiplicative factor.
  • mode: String. 'FAN_IN', 'FAN_OUT' or 'FAN_AVG'.
  • uniform: Whether to use a uniform or a normal distribution for the random initialization.
  • seed: A Python integer. Used to create random seeds. See tf.set_random_seed for behavior.
  • dtype: The data type. Only floating point types are supported.

Return value:

  • An initializer that generates tensors with unit variance.

Possible exceptions:

  • ValueError: if `dtype` is not a floating point type.
  • TypeError: if `mode` is not in ['FAN_IN', 'FAN_OUT', 'FAN_AVG'].
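
A short sketch showing the initializer passed as weights_initializer, together with the settings that reproduce Xavier initialization (layer sizes are illustrative):

import tensorflow as tf
slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 128])

# He-style initialization (the defaults: factor=2.0, mode='FAN_IN', uniform=False).
he_init = slim.variance_scaling_initializer()

# Xavier/Glorot initialization expressed through the same initializer.
xavier_like = slim.variance_scaling_initializer(factor=1.0, mode='FAN_AVG', uniform=True)

net = slim.fully_connected(inputs, 64, weights_initializer=he_init, scope='fc1')
net = slim.fully_connected(net, 64, weights_initializer=xavier_like, scope='fc2')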

slim.l2_regularizer()

def l2_regularizer(scale, scope=None):
  if isinstance(scale, numbers.Integral):
    raise ValueError('scale cannot be an integer: %s' % (scale,))
  if isinstance(scale, numbers.Real):
    if scale < 0.:
      raise ValueError('Setting a scale less than 0 on a regularizer: %g.' %
                       scale)
    if scale == 0.:
      logging.info('Scale of 0 disables regularizer.')
      return lambda _: None

Returns a function that can be used to apply L2 regularization to weights. A small L2 value helps prevent overfitting of the training data.

Parameters:

  • scale: A scalar multiplier Tensor. 0.0 disables the regularizer.
  • scope: Optional scope name.

Return value:

  • A function with signature l2(weights) that applies L2 regularization.

Possible exceptions:

  • ValueError: If scale is negative or if scale is not a float.
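
A small sketch: the returned function is normally passed as weights_regularizer, so the penalty ends up in the regularization-losses collection (the scale 0.0005 is just an example).

import tensorflow as tf
slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 64])

net = slim.fully_connected(inputs, 32,
                           weights_regularizer=slim.l2_regularizer(0.0005),
                           scope='fc1')

# The penalty terms are collected automatically and can be added to the total loss.
reg_losses = tf.losses.get_regularization_losses()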

slim.flatten()

Flattens the input while maintaining batch_size. Assumes the first dimension represents the batch.

def flatten(inputs, outputs_collections=None, scope=None):
  with ops.name_scope(scope, 'Flatten', [inputs]) as sc:
    inputs = ops.convert_to_tensor(inputs)
    outputs = core_layers.flatten(inputs)
    return utils.collect_named_outputs(outputs_collections, sc, outputs)

Parameters:

  • inputs: A tensor of size [batch_size, ...].
  • outputs_collections: Collection to which the outputs are added.
  • scope: Optional scope for name_scope.

Return value:

  • A flat tensor with a shape [batch_size, k].

Possible exceptions:

  • ValueError: If inputs rank is unknown or less than 2.
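
A minimal sketch (TF 1.x assumed):

import tensorflow as tf
slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 28, 28, 1])
flat = slim.flatten(images)  # shape becomes [batch_size, 28 * 28 * 1] = [None, 784]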

slim.max_pool2d()

def max_pool2d(inputs,
               kernel_size,
               stride=2,
               padding='VALID',
               data_format=DATA_FORMAT_NHWC,
               outputs_collections=None,
               scope=None):
  if data_format not in (DATA_FORMAT_NCHW, DATA_FORMAT_NHWC):
    raise ValueError('data_format has to be either NCHW or NHWC.')
  with ops.name_scope(scope, 'MaxPool2D', [inputs]) as sc:
    inputs = ops.convert_to_tensor(inputs)
    df = ('channels_first'
          if data_format and data_format.startswith('NC') else 'channels_last')
    layer = pooling_layers.MaxPooling2D(
        pool_size=kernel_size,
        strides=stride,
        padding=padding,
        data_format=df,
        _scope=sc)
    outputs = layer.apply(inputs)
    return utils.collect_named_outputs(outputs_collections, sc, outputs)

Adds a 2D max pooling op. It is assumed that pooling is done per image, but not over the batch or channel dimensions.

Parameters:

  • inputs: A 4-D tensor of shape [batch_size, height, width, channels] if data_format is 'NHWC', or [batch_size, channels, height, width] if data_format is 'NCHW'.
  • kernel_size: A list of length 2: [kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if both values are the same.
  • stride: A list of length 2: [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
  • padding: The padding method, either 'VALID' or 'SAME'.
  • data_format: A string. 'NHWC' (the default) and 'NCHW' are supported.
  • outputs_collections: The collections to which the outputs are added.
  • scope: Optional scope for name_scope.

Return value:

  • Tensor representing the result of a pool operation

Possible exceptions:

  • ValueError: If `data_format` is neither `NHWC` nor `NCHW`.
  • ValueError: If 'kernel_size' is not a 2-D list
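
A minimal sketch (NHWC input assumed; sizes are illustrative):

import tensorflow as tf
slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 224, 224, 64])

# 2x2 max pooling with stride 2: [None, 224, 224, 64] -> [None, 112, 112, 64]
pooled = slim.max_pool2d(images, [2, 2], stride=2, scope='pool1')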
