SHOGUN  3.2.1
CNeuralConvolutionalLayer Class Reference

Detailed Description

Main component in convolutional neural networks

This layer type consists of multiple feature maps. Each feature map computes its activations by convolving its filter with the inputs, adding a bias, and then applying a non-linearity. The activations of each feature map can then be max-pooled, that is, the map is divided into regions of a certain size and the maximum activation is taken from each region.

All layers that are connected to this layer as input must have the same size.

During convolution, the inputs are implicitly padded with zeros on the sides.

The layer assumes that its input images are in column-major format.

Definition at line 59 of file NeuralConvolutionalLayer.h.
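
The sketch below (plain standalone C++, not CNeuralConvolutionalLayer's actual implementation) illustrates the computation described above for a single feature map: a zero-padded convolution, a bias, a non-linearity, and region-wise max-pooling. Only the column-major indexing and the filter-size/pooling conventions come from this page; the tanh non-linearity and the concrete sizes are illustrative assumptions.

    // Illustrative sketch only -- not CNeuralConvolutionalLayer's implementation.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <limits>
    #include <vector>

    // Column-major pixel access: column x, row y, image of size width x height.
    inline double at(const std::vector<double>& img, int height, int x, int y)
    {
        return img[x*height + y];
    }

    int main()
    {
        const int width = 6, height = 6;        // input image size (illustrative)
        const int radius_x = 1, radius_y = 1;   // filter size = (2*radius+1) per axis
        const int pool_w = 2, pool_h = 2;       // pooling region size

        std::vector<double> image(width*height, 1.0);                   // dummy input
        std::vector<double> filter((2*radius_x+1)*(2*radius_y+1), 0.1); // dummy weights
        double bias = 0.5;

        // 1) Zero-padded convolution + bias + non-linearity (tanh used for illustration).
        std::vector<double> map(width*height, 0.0);
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
            {
                double sum = bias;
                for (int dx = -radius_x; dx <= radius_x; dx++)
                    for (int dy = -radius_y; dy <= radius_y; dy++)
                    {
                        int ix = x + dx, iy = y + dy;
                        if (ix < 0 || ix >= width || iy < 0 || iy >= height)
                            continue;           // implicit zero padding on the sides
                        double w = filter[(dx+radius_x)*(2*radius_y+1) + (dy+radius_y)];
                        sum += w * at(image, height, ix, iy);
                    }
                map[x*height + y] = std::tanh(sum);
            }

        // 2) Max-pooling: take the maximum activation from each pool_w x pool_h region.
        for (int px = 0; px < width/pool_w; px++)
            for (int py = 0; py < height/pool_h; py++)
            {
                double m = std::numeric_limits<double>::lowest();
                for (int x = px*pool_w; x < (px+1)*pool_w; x++)
                    for (int y = py*pool_h; y < (py+1)*pool_h; y++)
                        m = std::max(m, map[x*height + y]);
                std::printf("pooled(%d,%d) = %f\n", px, py, m);
            }
        return 0;
    }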

Inheritance diagram for CNeuralConvolutionalLayer:

Public Member Functions

 CNeuralConvolutionalLayer ()
 CNeuralConvolutionalLayer (EConvMapActivationFunction function, int32_t num_maps, int32_t radius_x, int32_t radius_y, int32_t pooling_width=1, int32_t pooling_height=1, int32_t stride_x=1, int32_t stride_y=1)
virtual ~CNeuralConvolutionalLayer ()
virtual void set_batch_size (int32_t batch_size)
virtual void initialize (CDynamicObjectArray *layers, SGVector< int32_t > input_indices)
virtual void initialize_parameters (SGVector< float64_t > parameters, SGVector< bool > parameter_regularizable, float64_t sigma)
virtual void compute_activations (SGVector< float64_t > parameters, CDynamicObjectArray *layers)
virtual void compute_gradients (SGVector< float64_t > parameters, SGMatrix< float64_t > targets, CDynamicObjectArray *layers, SGVector< float64_t > parameter_gradients)
virtual float64_t compute_error (SGMatrix< float64_t > targets)
virtual void enforce_max_norm (SGVector< float64_t > parameters, float64_t max_norm)
virtual const char * get_name () const
virtual bool is_input ()
virtual void compute_activations (SGMatrix< float64_t > inputs)
virtual void dropout_activations ()
virtual float64_t compute_contraction_term (SGVector< float64_t > parameters)
virtual int32_t get_num_neurons ()
virtual int32_t get_width ()
virtual int32_t get_height ()
virtual int32_t get_num_parameters ()
virtual SGMatrix< float64_t > get_activations ()
virtual SGMatrix< float64_t > get_activation_gradients ()
virtual SGMatrix< float64_t > get_local_gradients ()
virtual CSGObject * shallow_copy () const
virtual CSGObject * deep_copy () const
virtual bool is_generic (EPrimitiveType *generic) const
template<class T >
void set_generic ()
void unset_generic ()
virtual void print_serializable (const char *prefix="")
virtual bool save_serializable (CSerializableFile *file, const char *prefix="", int32_t param_version=Version::get_version_parameter())
virtual bool load_serializable (CSerializableFile *file, const char *prefix="", int32_t param_version=Version::get_version_parameter())
DynArray< TParameter * > * load_file_parameters (const SGParamInfo *param_info, int32_t file_version, CSerializableFile *file, const char *prefix="")
DynArray< TParameter * > * load_all_file_parameters (int32_t file_version, int32_t current_version, CSerializableFile *file, const char *prefix="")
void map_parameters (DynArray< TParameter * > *param_base, int32_t &base_version, DynArray< const SGParamInfo * > *target_param_infos)
void set_global_io (SGIO *io)
SGIO * get_global_io ()
void set_global_parallel (Parallel *parallel)
Parallel * get_global_parallel ()
void set_global_version (Version *version)
Version * get_global_version ()
SGStringList< char > get_modelsel_names ()
void print_modsel_params ()
char * get_modsel_param_descr (const char *param_name)
index_t get_modsel_param_index (const char *param_name)
void build_gradient_parameter_dictionary (CMap< TParameter *, CSGObject * > *dict)
virtual void update_parameter_hash ()
virtual bool parameter_hash_changed ()
virtual bool equals (CSGObject *other, float64_t accuracy=0.0, bool tolerant=false)
virtual CSGObject * clone ()

Public Attributes

bool is_training
float64_t dropout_prop
float64_t contraction_coefficient
ENLAutoencoderPosition autoencoder_position
SGIO * io
Parallel * parallel
Version * version
Parameter * m_parameters
Parameter * m_model_selection_parameters
Parameter * m_gradient_parameters
ParameterMap * m_parameter_map
uint32_t m_hash

Protected Member Functions

virtual TParameter * migrate (DynArray< TParameter * > *param_base, const SGParamInfo *target)
virtual void one_to_one_migration_prepare (DynArray< TParameter * > *param_base, const SGParamInfo *target, TParameter *&replacement, TParameter *&to_migrate, char *old_name=NULL)
virtual void load_serializable_pre () throw (ShogunException)
virtual void load_serializable_post () throw (ShogunException)
virtual void save_serializable_pre () throw (ShogunException)
virtual void save_serializable_post () throw (ShogunException)

Protected Attributes

int32_t m_num_maps
int32_t m_input_width
int32_t m_input_height
int32_t m_input_num_channels
int32_t m_radius_x
int32_t m_radius_y
int32_t m_pooling_width
int32_t m_pooling_height
int32_t m_stride_x
int32_t m_stride_y
EConvMapActivationFunction m_activation_function
SGMatrix< float64_t > m_convolution_output
SGMatrix< float64_t > m_convolution_output_gradients
SGMatrix< float64_t > m_max_indices
int32_t m_num_neurons
int32_t m_width
int32_t m_height
int32_t m_num_parameters
SGVector< int32_t > m_input_indices
SGVector< int32_t > m_input_sizes
int32_t m_batch_size
SGMatrix< float64_t > m_activations
SGMatrix< float64_t > m_activation_gradients
SGMatrix< float64_t > m_local_gradients
SGMatrix< bool > m_dropout_mask

Constructor & Destructor Documentation

default constructor

Definition at line 40 of file NeuralConvolutionalLayer.cpp.

CNeuralConvolutionalLayer ( EConvMapActivationFunction  function,
int32_t  num_maps,
int32_t  radius_x,
int32_t  radius_y,
int32_t  pooling_width = 1,
int32_t  pooling_height = 1,
int32_t  stride_x = 1,
int32_t  stride_y = 1 
)

Constructor

Parameters
function: Activation function
num_maps: Number of feature maps
radius_x: Radius of the convolution filter on the x (width) axis. The filter size on the x-axis equals (2*radius_x+1)
radius_y: Radius of the convolution filter on the y (height) axis. The filter size on the y-axis equals (2*radius_y+1)
pooling_width: Width of the pooling region
pooling_height: Height of the pooling region
stride_x: Stride in the x direction for convolution
stride_y: Stride in the y direction for convolution

Definition at line 45 of file NeuralConvolutionalLayer.cpp.
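
Shown here is a minimal construction sketch based on the constructor documented above. The activation value CMAF_RECTIFIED_LINEAR and the include path are assumptions; substitute whatever EConvMapActivationFunction actually provides in this release.

    // Minimal construction sketch. CMAF_RECTIFIED_LINEAR is assumed to be one of the
    // EConvMapActivationFunction values; the include path is the usual neuralnets one.
    #include <shogun/neuralnets/NeuralConvolutionalLayer.h>

    using namespace shogun;

    CNeuralConvolutionalLayer* layer = new CNeuralConvolutionalLayer(
        CMAF_RECTIFIED_LINEAR, // activation function applied to each feature map
        16,                    // num_maps: 16 feature maps
        2, 2,                  // radius_x, radius_y: 5x5 filters, since size = 2*radius+1
        2, 2,                  // pooling_width, pooling_height: 2x2 max-pooling regions
        1, 1);                 // stride_x, stride_y: convolution strides
    SG_REF(layer);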

virtual ~CNeuralConvolutionalLayer ( )
virtual

Definition at line 91 of file NeuralConvolutionalLayer.h.

Member Function Documentation

void build_gradient_parameter_dictionary ( CMap< TParameter *, CSGObject * > *  dict)
inherited

Builds a dictionary of all parameters in SGObject as well of those of SGObjects that are parameters of this object. Dictionary maps parameters to the objects that own them.

Parameters
dict: dictionary of parameters to be built.

Definition at line 1189 of file SGObject.cpp.

CSGObject * clone ( )
virtualinherited

Creates a clone of the current object. This is done via recursively traversing all parameters, which corresponds to a deep copy. Calling equals on the cloned object always returns true although none of the memory of both objects overlaps.

Returns
an identical copy of the given object, which is disjoint in memory. NULL if the clone fails. Note that the returned object is SG_REF'ed

Definition at line 1306 of file SGObject.cpp.
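
A small hedged sketch of the documented clone()/equals() contract, assuming layer is a CNeuralConvolutionalLayer* created as in the constructor example above: the clone is a deep, memory-disjoint copy, equals() on it returns true, and the returned object is already SG_REF'ed, so it must be SG_UNREF'ed when no longer needed.

    // Sketch of the documented clone()/equals() contract; 'layer' is assumed to be
    // an existing, fully constructed CNeuralConvolutionalLayer*.
    CSGObject* copy = layer->clone();   // deep copy, disjoint memory, already SG_REF'ed
    if (copy)
    {
        ASSERT(layer->equals(copy));    // documented to hold for a fresh clone
        SG_UNREF(copy);                 // release the clone's reference when done
    }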

void compute_activations ( SGVector< float64_t > parameters,
CDynamicObjectArray * layers 
)
virtual

Computes the activations of the neurons in this layer, results should be stored in m_activations. To be used only with non-input layers

Parameters
parameters: Vector of size get_num_parameters(), contains the parameters of the layer
layers: Array of layers that form the network that this layer is being used with

Reimplemented from CNeuralLayer.

Definition at line 146 of file NeuralConvolutionalLayer.cpp.

virtual void compute_activations ( SGMatrix< float64_t > inputs)
virtualinherited

Computes the activations of the neurons in this layer, results should be stored in m_activations. To be used only with input layers

Parameters
inputs: activations of the neurons in the previous layer, matrix of size previous_layer_num_neurons * batch_size

Reimplemented in CNeuralInputLayer.

Definition at line 153 of file NeuralLayer.h.

virtual float64_t compute_contraction_term ( SGVector< float64_t > parameters)
virtualinherited

Computes

\[ \frac{\lambda}{N} \sum_{k=0}^{N-1} \left \| J(x_k) \right \|^2_F \]

where \( \left \| J(x_k) \right \|^2_F \) is the Frobenius norm of the Jacobian of the activations of the hidden layer with respect to its inputs, \( N \) is the batch size, and \( \lambda \) is the contraction coefficient.

Should be implemented by layers that support being used as a hidden layer in a contractive autoencoder.

Parameters
parameters: Vector of size get_num_parameters(), contains the parameters of the layer

Reimplemented in CNeuralLinearLayer, CNeuralLogisticLayer, and CNeuralRectifiedLinearLayer.

Definition at line 242 of file NeuralLayer.h.

float64_t compute_error ( SGMatrix< float64_t > targets)
virtual

Computes the error between the layer's current activations and the given target activations. Should only be used with output layers

Parameters
targets: desired values for the layer's activations, matrix of size num_neurons*batch_size

Reimplemented from CNeuralLayer.

Definition at line 224 of file NeuralConvolutionalLayer.cpp.

void compute_gradients ( SGVector< float64_t > parameters,
SGMatrix< float64_t > targets,
CDynamicObjectArray * layers,
SGVector< float64_t > parameter_gradients 
)
virtual

Computes the gradients that are relevant to this layer:

  • The gradients of the error with respect to the layer's parameters
  • The gradients of the error with respect to the layer's inputs

Input gradients for a layer i that connects into this layer as input are added to m_layers.element(i).get_activation_gradients()

Deriving classes should make sure to account for dropout [Hinton, 2012] during gradient computations

Parameters
parameters: Vector of size get_num_parameters(), contains the parameters of the layer
targets: a matrix of size num_neurons*batch_size. If the layer is being used as an output layer, targets holds the desired values for the layer's activations; otherwise it is an empty matrix
layers: Array of layers that form the network that this layer is being used with
parameter_gradients: Vector of size get_num_parameters(). To be filled with the gradients of the error with respect to each parameter of the layer

Reimplemented from CNeuralLayer.

Definition at line 171 of file NeuralConvolutionalLayer.cpp.

CSGObject * deep_copy ( ) const
virtualinherited

A deep copy. All the instance variables will also be copied.

Definition at line 146 of file SGObject.cpp.

void dropout_activations ( )
virtualinherited

Applies dropout [Hinton, 2012] to the activations of the layer

If is_training is true, fills m_dropout_mask with random values (according to dropout_prop) and multiplies it into the activations, otherwise, multiplies the activations by (1-dropout_prop) to compensate for using dropout during training

Definition at line 90 of file NeuralLayer.cpp.
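
The training/testing behaviour described above can be summarised with the following standalone sketch (illustrative only, not the NeuralLayer.cpp implementation); dropout_prop and is_training play the roles of the members of the same name.

    // Illustrative dropout sketch (not NeuralLayer.cpp itself).
    #include <random>
    #include <vector>

    void dropout(std::vector<double>& activations, double dropout_prop,
                 bool is_training, std::mt19937& rng)
    {
        if (is_training)
        {
            // Training: drop each neuron with probability dropout_prop.
            // The kept/dropped decisions correspond to the dropout mask.
            std::bernoulli_distribution keep(1.0 - dropout_prop);
            for (double& a : activations)
                a *= keep(rng) ? 1.0 : 0.0;
        }
        else
        {
            // Testing: scale activations by (1 - dropout_prop) to compensate
            // for having used dropout during training.
            for (double& a : activations)
                a *= (1.0 - dropout_prop);
        }
    }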

void enforce_max_norm ( SGVector< float64_t > parameters,
float64_t  max_norm 
)
virtual

Constrains the weights of each neuron in the layer to have an L2 norm of at most max_norm

Parameters
parameters: pointer to the layer's parameters, array of size get_num_parameters()
max_norm: maximum allowable norm for a neuron's weights

Reimplemented from CNeuralLayer.

Definition at line 235 of file NeuralConvolutionalLayer.cpp.
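
A standalone sketch of the constraint described above: if a weight vector's L2 norm exceeds max_norm, it is rescaled to have norm exactly max_norm. How the layer's parameter vector is split into per-neuron weight vectors is not shown and is left as an assumption.

    // Illustrative max-norm sketch for a single neuron's weight vector
    // (not the layer's actual parameter layout).
    #include <cmath>
    #include <vector>

    void enforce_max_norm_sketch(std::vector<double>& weights, double max_norm)
    {
        double norm = 0.0;
        for (double w : weights)
            norm += w*w;
        norm = std::sqrt(norm);

        if (norm > max_norm)
        {
            double scale = max_norm / norm;   // shrink onto the L2 ball of radius max_norm
            for (double& w : weights)
                w *= scale;
        }
    }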

bool equals ( CSGObject * other,
float64_t  accuracy = 0.0,
bool  tolerant = false 
)
virtualinherited

Recursively compares the current SGObject to another one. Compares all registered numerical parameters, recursion upon complex (SGObject) parameters. Does not compare pointers!

May be overridden, but please do so with care! Should not be necessary in most cases.

Parameters
other: object to compare with
accuracy: accuracy to use for comparison (optional)
tolerant: allows a lenient check on float equality (within accuracy)
Returns
true if all parameters were equal, false if not

Definition at line 1210 of file SGObject.cpp.

virtual SGMatrix<float64_t> get_activation_gradients ( )
virtualinherited

Gets the layer's activation gradients, a matrix of size num_neurons * batch_size

Returns
layer's activation gradients

Definition at line 284 of file NeuralLayer.h.

virtual SGMatrix<float64_t> get_activations ( )
virtualinherited

Gets the layer's activations, a matrix of size num_neurons * batch_size

Returns
layer's activations

Definition at line 277 of file NeuralLayer.h.

SGIO * get_global_io ( )
inherited

get the io object

Returns
io object

Definition at line 183 of file SGObject.cpp.

Parallel * get_global_parallel ( )
inherited

get the parallel object

Returns
parallel object

Definition at line 224 of file SGObject.cpp.

Version * get_global_version ( )
inherited

get the version object

Returns
version object

Definition at line 237 of file SGObject.cpp.

virtual int32_t get_height ( )
virtualinherited

Returns the height assuming that the layer's activations are interpreted as images (i.e. for convolutional nets)

Returns
Height

Definition at line 265 of file NeuralLayer.h.

virtual SGMatrix<float64_t> get_local_gradients ( )
virtualinherited

Gets the layer's local gradients, a matrix of size num_neurons * batch_size

Returns
layer's local gradients

Definition at line 294 of file NeuralLayer.h.

SGStringList< char > get_modelsel_names ( )
inherited
Returns
vector of names of all parameters which are registered for model selection

Definition at line 1081 of file SGObject.cpp.

char * get_modsel_param_descr ( const char *  param_name)
inherited

Returns description of a given parameter string, if it exists. SG_ERROR otherwise

Parameters
param_name: name of the parameter
Returns
description of the parameter

Definition at line 1105 of file SGObject.cpp.

index_t get_modsel_param_index ( const char *  param_name)
inherited

Returns the index of the model selection parameter with the provided name

Parameters
param_name: name of the model selection parameter
Returns
index of the model selection parameter with the provided name, -1 if there is no such parameter

Definition at line 1118 of file SGObject.cpp.

virtual const char* get_name ( ) const
virtual

Returns the name of the SGSerializable instance. It MUST BE the CLASS NAME without the prefixed `C'.

Returns
name of the SGSerializable

Reimplemented from CNeuralLayer.

Definition at line 193 of file NeuralConvolutionalLayer.h.

virtual int32_t get_num_neurons ( )
virtualinherited

Gets the number of neurons in the layer

Returns
number of neurons in the layer

Definition at line 251 of file NeuralLayer.h.

virtual int32_t get_num_parameters ( )
virtualinherited

Gets the number of parameters used in this layer

Returns
number of parameters used in this layer

Definition at line 271 of file NeuralLayer.h.

virtual int32_t get_width ( )
virtualinherited

Returns the width assuming that the layer's activations are interpreted as images (i.e. for convolutional nets)

Returns
Width

Definition at line 258 of file NeuralLayer.h.

void initialize ( CDynamicObjectArray * layers,
SGVector< int32_t >  input_indices 
)
virtual

Initializes the layer, computes the number of parameters needed for the layer

Parameters
layers: Array of layers that form the network that this layer is being used with
input_indices: Indices of the layers that are connected to this layer as input

Reimplemented from CNeuralLayer.

Definition at line 80 of file NeuralConvolutionalLayer.cpp.

void initialize_parameters ( SGVector< float64_t > parameters,
SGVector< bool >  parameter_regularizable,
float64_t  sigma 
)
virtual

Initializes the layer's parameters. The layer should fill the given arrays with the initial value for its parameters

Parameters
parameters: Vector of size get_num_parameters()
parameter_regularizable: Vector of size get_num_parameters(). This controls which of the layer's parameters are subject to regularization; i.e. to turn off regularization for parameter i, set parameter_regularizable[i] = false. This is usually used to turn off regularization for bias parameters.
sigma: standard deviation of the Gaussian used to randomly initialize the parameters

Reimplemented from CNeuralLayer.

Definition at line 123 of file NeuralConvolutionalLayer.cpp.
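
Putting together the methods documented on this page, the following hedged sketch shows the typical order in which a non-input layer is driven. In practice CNeuralNetwork performs these steps internally; all_layers, input_indices and batch_size are assumed to have been set up elsewhere, and the layers this one reads from must already have computed their activations.

    // Sketch of the typical call order for a non-input layer; normally CNeuralNetwork
    // drives this internally. 'all_layers' (CDynamicObjectArray*), 'input_indices'
    // (SGVector<int32_t>) and 'batch_size' are assumed to exist already.
    layer->initialize(all_layers, input_indices);   // computes the number of parameters
    layer->set_batch_size(batch_size);              // allocates activation buffers

    SGVector<float64_t> params(layer->get_num_parameters());
    SGVector<bool> regularizable(layer->get_num_parameters());
    layer->initialize_parameters(params, regularizable, 0.01);  // sigma = 0.01 (illustrative)

    layer->compute_activations(params, all_layers); // forward pass for this layer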

bool is_generic ( EPrimitiveType *  generic) const
virtualinherited

If the SGSerializable is a class template then TRUE will be returned and GENERIC is set to the type of the generic.

Parameters
generic: set to the type of the generic if returning TRUE
Returns
TRUE if a class template.

Definition at line 243 of file SGObject.cpp.

virtual bool is_input ( )
virtualinherited

returns true if the layer is an input layer. Input layers are the root layers of a network, that is, they don't receive signals from other layers; they receive signals from the input features of the network.

Local and activation gradients are not computed for input layers

Reimplemented in CNeuralInputLayer.

Definition at line 127 of file NeuralLayer.h.

DynArray< TParameter * > * load_all_file_parameters ( int32_t  file_version,
int32_t  current_version,
CSerializableFile * file,
const char *  prefix = "" 
)
inherited

maps all parameters of this instance to the provided file version and loads all parameter data from the file into an array, which is sorted (basically calls load_file_parameter(...) for all parameters and puts all results into a sorted array)

Parameters
file_version: parameter version of the file
current_version: version from which mapping begins (you want to use Version::get_version_parameter() for this in most cases)
file: file to load from
prefix: prefix for members
Returns
(sorted) array of created TParameter instances with file data

Definition at line 650 of file SGObject.cpp.

DynArray< TParameter * > * load_file_parameters ( const SGParamInfo * param_info,
int32_t  file_version,
CSerializableFile * file,
const char *  prefix = "" 
)
inherited

Loads some specified parameters from a file with a specified version. The provided parameter info has a version which is recursively mapped until the file parameter version is reached. Note that there may possibly be multiple parameters in the mapping; therefore, a set of TParameter instances is returned

Parameters
param_info: information of the parameter
file_version: parameter version of the file, must be <= the provided parameter version
file: file to load from
prefix: prefix for members
Returns
new array with TParameter instances with the attached data

Definition at line 491 of file SGObject.cpp.

bool load_serializable ( CSerializableFile * file,
const char *  prefix = "",
int32_t  param_version = Version::get_version_parameter() 
)
virtualinherited

Load this object from file. If it fails (returning FALSE), then this object will contain inconsistent data and should not be used!

Parameters
file: where to load from
prefix: prefix for members
param_version: (optional) a parameter version different from the current one (this is mainly for testing, better not to use)
Returns
TRUE if done, otherwise FALSE

Definition at line 320 of file SGObject.cpp.

void load_serializable_post ( ) throw (ShogunException)
protectedvirtualinherited

Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that the overridden method BASE_CLASS::LOAD_SERIALIZABLE_POST is called first.

Exceptions
ShogunException: will be thrown if an error occurs.

Reimplemented in CKernel, CWeightedDegreePositionStringKernel, CList, CAlphabet, CLinearHMM, CGaussianKernel, CInverseMultiQuadricKernel, CCircularKernel, and CExponentialKernel.

Definition at line 1008 of file SGObject.cpp.

void load_serializable_pre ( ) throw (ShogunException)
protectedvirtualinherited

Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that the overridden method BASE_CLASS::LOAD_SERIALIZABLE_PRE is called first.

Exceptions
ShogunException: will be thrown if an error occurs.

Reimplemented in CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.

Definition at line 1003 of file SGObject.cpp.

void map_parameters ( DynArray< TParameter * > *  param_base,
int32_t &  base_version,
DynArray< const SGParamInfo * > *  target_param_infos 
)
inherited

Takes a set of TParameter instances (base) with a certain version and a set of target parameter infos and recursively maps the base level-wise to the current version using CSGObject::migrate(...). The base is replaced. After this call, the base version containing parameters should be of the same version/type as the initial target parameter infos. Note that for this to work, the migrate methods and all the internal parameter mappings have to match

Parameters
param_base: set of TParameter instances that are mapped to the provided target parameter infos
base_version: version of the parameter base
target_param_infos: set of SGParamInfo instances that specify the target parameter base

Definition at line 688 of file SGObject.cpp.

TParameter * migrate ( DynArray< TParameter * > *  param_base,
const SGParamInfo * target 
)
protectedvirtualinherited

Creates a new TParameter instance which contains migrated data from the version that is provided. The provided parameter data base is used for migration; this base is a collection of all parameter data of the previous version. Migration is done FROM the data in param_base TO the provided param info. Migration is always one version step. The method has to be implemented in subclasses; if no match is found, the base method has to be called.

If there is an element in the param_base which equals the target, a copy of the element is returned. This represents the case when nothing has changed and therefore, the migrate method is not overloaded in a subclass

Parameters
param_base: set of TParameter instances to use for migration
target: parameter info for the resulting TParameter
Returns
a new TParameter instance with migrated data from the base of the type which is specified by the target parameter

Definition at line 895 of file SGObject.cpp.

void one_to_one_migration_prepare ( DynArray< TParameter * > *  param_base,
const SGParamInfo * target,
TParameter *&  replacement,
TParameter *&  to_migrate,
char *  old_name = NULL 
)
protectedvirtualinherited

This method prepares everything for a one-to-one parameter migration. One to one here means that only ONE element of the parameter base is needed for the migration (the one with the same name as the target). Data is allocated for the target (in the type as provided in the target SGParamInfo), and a corresponding new TParameter instance is written to replacement. The to_migrate pointer points to the single needed TParameter instance needed for migration. If a name change happened, the old name may be specified by old_name. In addition, the m_delete_data flag of to_migrate is set to true. So if you want to migrate data, the only thing to do after this call is converting the data in the m_parameter fields. If unsure how to use - have a look into an example for this. (base_migration_type_conversion.cpp for example)

Parameters
param_base: set of TParameter instances to use for migration
target: parameter info for the resulting TParameter
replacement: (used as output) the TParameter instance returned by migration is created into this
to_migrate: the only source that is used for migration
old_name: with this parameter, a name change may be specified

Definition at line 835 of file SGObject.cpp.

bool parameter_hash_changed ( )
virtualinherited
Returns
whether parameter combination has changed since last update

Definition at line 209 of file SGObject.cpp.

void print_modsel_params ( )
inherited

prints all parameters registered for model selection and their types

Definition at line 1057 of file SGObject.cpp.

void print_serializable ( const char *  prefix = "")
virtualinherited

prints registered parameters out

Parameters
prefix: prefix for members

Definition at line 255 of file SGObject.cpp.

bool save_serializable ( CSerializableFile * file,
const char *  prefix = "",
int32_t  param_version = Version::get_version_parameter() 
)
virtualinherited

Save this object to file.

Parameters
file: where to save the object; will be closed during returning if PREFIX is an empty string.
prefix: prefix for members
param_version: (optional) a parameter version different from the current one (this is mainly for testing, better not to use)
Returns
TRUE if done, otherwise FALSE

Definition at line 261 of file SGObject.cpp.
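
A hedged sketch of saving and loading the layer through the serialization methods documented above. CSerializableAsciiFile and its (filename, mode) constructor are assumptions about the wider Shogun API; any CSerializableFile implementation should work the same way, and 'layer' is assumed to exist as in the earlier examples.

    // Hedged sketch: persisting the layer via the documented serialization API.
    // CSerializableAsciiFile(filename, mode) is assumed to be available.
    CSerializableAsciiFile* out = new CSerializableAsciiFile("conv_layer.txt", 'w');
    layer->save_serializable(out);              // write all registered parameters
    SG_UNREF(out);

    CSerializableAsciiFile* in = new CSerializableAsciiFile("conv_layer.txt", 'r');
    CNeuralConvolutionalLayer* restored = new CNeuralConvolutionalLayer();
    restored->load_serializable(in);            // on failure the object must not be used
    SG_UNREF(in);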

void save_serializable_post ( ) throw (ShogunException)
protectedvirtualinherited

Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that the overridden method BASE_CLASS::SAVE_SERIALIZABLE_POST is called first.

Exceptions
ShogunException: will be thrown if an error occurs.

Reimplemented in CKernel.

Definition at line 1018 of file SGObject.cpp.

void save_serializable_pre ( ) throw (ShogunException)
protectedvirtualinherited

Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that the overridden method BASE_CLASS::SAVE_SERIALIZABLE_PRE is called first.

Exceptions
ShogunException: will be thrown if an error occurs.

Reimplemented in CKernel, CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.

Definition at line 1013 of file SGObject.cpp.

void set_batch_size ( int32_t  batch_size)
virtual

Sets the batch_size and allocates memory for m_activations and m_input_gradients accordingly. Must be called before forward or backward propagation is performed

Parameters
batch_size: number of training/test cases the network is currently working with

Reimplemented from CNeuralLayer.

Definition at line 62 of file NeuralConvolutionalLayer.cpp.

void set_generic< complex128_t > ( )
inherited

set generic type to T

Definition at line 38 of file SGObject.cpp.

void set_global_io ( SGIO io)
inherited

set the io object

Parameters
io: io object to use

Definition at line 176 of file SGObject.cpp.

void set_global_parallel ( Parallel parallel)
inherited

set the parallel object

Parameters
parallel: parallel object to use

Definition at line 189 of file SGObject.cpp.

void set_global_version ( Version version)
inherited

set the version object

Parameters
version: version object to use

Definition at line 230 of file SGObject.cpp.

CSGObject * shallow_copy ( ) const
virtualinherited

A shallow copy. All the SGObject instance variables will be simply assigned and SG_REF-ed.

Reimplemented in CGaussianKernel.

Definition at line 140 of file SGObject.cpp.

void unset_generic ( )
inherited

unset generic type

this has to be called in classes specializing a template class

Definition at line 250 of file SGObject.cpp.

void update_parameter_hash ( )
virtualinherited

Updates the hash of current parameter combination

Definition at line 196 of file SGObject.cpp.

Member Data Documentation

ENLAutoencoderPosition autoencoder_position
inherited

For autoencoders, specifies the position of the layer in the autoencoder, i.e. an encoding layer or a decoding layer. Default value is NLAP_NONE

Definition at line 327 of file NeuralLayer.h.

float64_t contraction_coefficient
inherited

For hidden layers in contractive autoencoders [Rifai, 2011], the term:

\[ \frac{\lambda}{N} \sum_{k=0}^{N-1} \left \| J(x_k) \right \|^2_F \]

is added to the error, where \( \left \| J(x_k) \right \|^2_F \) is the Frobenius norm of the Jacobian of the activations of the hidden layer with respect to its inputs, \( N \) is the batch size, and \( \lambda \) is the contraction coefficient.

Default value is 0.0.

Definition at line 322 of file NeuralLayer.h.

float64_t dropout_prop
inherited

probability of dropping out a neuron in the layer

Definition at line 311 of file NeuralLayer.h.

SGIO* io
inherited

io

Definition at line 457 of file SGObject.h.

bool is_training
inherited

Should be true if the layer is currently being used during training. The initial value is false

Definition at line 308 of file NeuralLayer.h.

EConvMapActivationFunction m_activation_function
protected

The map's activation function

Definition at line 230 of file NeuralConvolutionalLayer.h.

SGMatrix<float64_t> m_activation_gradients
protectedinherited

gradients of the error with respect to the layer's inputs, size previous_layer_num_neurons * batch_size

Definition at line 365 of file NeuralLayer.h.

SGMatrix<float64_t> m_activations
protectedinherited

activations of the neurons in this layer, size num_neurons * batch_size

Definition at line 360 of file NeuralLayer.h.

int32_t m_batch_size
protectedinherited

number of training/test cases the network is currently working with

Definition at line 355 of file NeuralLayer.h.

SGMatrix<float64_t> m_convolution_output
protected

Holds the output of convolution

Definition at line 233 of file NeuralConvolutionalLayer.h.

SGMatrix<float64_t> m_convolution_output_gradients
protected

Gradients of the error with respect to the convolution's output

Definition at line 236 of file NeuralConvolutionalLayer.h.

SGMatrix<bool> m_dropout_mask
protectedinherited

binary mask that determines whether a neuron will be kept or dropped out during the current iteration of training, size num_neurons * batch_size

Definition at line 377 of file NeuralLayer.h.

Parameter* m_gradient_parameters
inherited

parameters wrt which we can compute gradients

Definition at line 472 of file SGObject.h.

uint32_t m_hash
inherited

Hash of parameter values

Definition at line 478 of file SGObject.h.

int32_t m_height
protectedinherited

Height of the image (if the layer's activations are to be interpreted as images). Default value is 1

Definition at line 341 of file NeuralLayer.h.

int32_t m_input_height
protected

Height of the input

Definition at line 206 of file NeuralConvolutionalLayer.h.

SGVector<int32_t> m_input_indices
protectedinherited

Indices of the layers that are connected to this layer as input

Definition at line 347 of file NeuralLayer.h.

int32_t m_input_num_channels
protected

Total number of channels in the inputs

Definition at line 209 of file NeuralConvolutionalLayer.h.

SGVector<int32_t> m_input_sizes
protectedinherited

Number of neurons in the layers that are connected to this layer as input

Definition at line 352 of file NeuralLayer.h.

int32_t m_input_width
protected

Width of the input

Definition at line 203 of file NeuralConvolutionalLayer.h.

SGMatrix<float64_t> m_local_gradients
protectedinherited

gradients of the error with respect to the layer's pre-activations; this is usually used as a buffer when computing the input gradients, size num_neurons * batch_size

Definition at line 371 of file NeuralLayer.h.

SGMatrix<float64_t> m_max_indices
protected

Row indices of the max elements for each pooling region

Definition at line 239 of file NeuralConvolutionalLayer.h.

Parameter* m_model_selection_parameters
inherited

model selection parameters

Definition at line 469 of file SGObject.h.

int32_t m_num_maps
protected

Number of feature maps

Definition at line 200 of file NeuralConvolutionalLayer.h.

int32_t m_num_neurons
protectedinherited

Number of neurons in this layer

Definition at line 331 of file NeuralLayer.h.

int32_t m_num_parameters
protectedinherited

Number of parameters used in this layer

Definition at line 344 of file NeuralLayer.h.

ParameterMap* m_parameter_map
inherited

map for different parameter versions

Definition at line 475 of file SGObject.h.

Parameter* m_parameters
inherited

parameters

Definition at line 466 of file SGObject.h.

int32_t m_pooling_height
protected

Height of the pooling region

Definition at line 221 of file NeuralConvolutionalLayer.h.

int32_t m_pooling_width
protected

Width of the pooling region

Definition at line 218 of file NeuralConvolutionalLayer.h.

int32_t m_radius_x
protected

Radius of the convolution filter on the x (width) axis

Definition at line 212 of file NeuralConvolutionalLayer.h.

int32_t m_radius_y
protected

Radius of the convolution filter on the y (height) axis

Definition at line 215 of file NeuralConvolutionalLayer.h.

int32_t m_stride_x
protected

Stride in the x direction

Definition at line 224 of file NeuralConvolutionalLayer.h.

int32_t m_stride_y
protected

Stride in the y direction

Definition at line 227 of file NeuralConvolutionalLayer.h.

int32_t m_width
protectedinherited

Width of the image (if the layer's activations are to be interpreted as images). Default value is m_num_neurons

Definition at line 336 of file NeuralLayer.h.

Parallel* parallel
inherited

parallel

Definition at line 460 of file SGObject.h.

Version* version
inherited

version

Definition at line 463 of file SGObject.h.


The documentation for this class was generated from the following files:

NeuralConvolutionalLayer.h
NeuralConvolutionalLayer.cpp