SHOGUN  6.1.3
CNeuralNetwork Class Reference

Detailed Description

A generic multi-layer neural network.

A [Neural network](http://en.wikipedia.org/wiki/Artificial_neural_network) is constructed using an array of CNeuralLayer objects. The CNeuralLayer class defines the interface necessary for forward and [backpropagation](http://en.wikipedia.org/wiki/Backpropagation).

The network can be constructed as an arbitrary directed acyclic graph.

How to use the network:

- Create a CDynamicObjectArray of CNeuralLayer-based objects specifying the layers of the network. The array must contain at least one input layer, and the last layer in the array is treated as the output layer.
- Pass the array to the constructor or to set_layers().
- Specify how the layers are connected together, using connect() or quick_connect().
- Call initialize_neural_network().
- Specify the training parameters if needed, then call set_labels() and train().
- Apply the trained network using apply(), apply_binary(), apply_regression(), or apply_multiclass() (a minimal sketch is given below).

The network can also be initialized from a JSON file using CNeuralNetworkFileReader.
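A minimal C++ sketch of this workflow for a multiclass problem. The concrete layer classes (CNeuralInputLayer, CNeuralLogisticLayer, CNeuralSoftmaxLayer), the layer sizes, and the prepared features/labels are illustrative assumptions, not part of this class's interface:

```cpp
#include <shogun/features/DenseFeatures.h>
#include <shogun/labels/MulticlassLabels.h>
#include <shogun/lib/DynamicObjectArray.h>
#include <shogun/neuralnets/NeuralNetwork.h>
#include <shogun/neuralnets/NeuralInputLayer.h>
#include <shogun/neuralnets/NeuralLogisticLayer.h>
#include <shogun/neuralnets/NeuralSoftmaxLayer.h>

using namespace shogun;

// train_feats and train_labels are assumed to be prepared elsewhere,
// after init_shogun_with_defaults() has been called.
void train_example(CDenseFeatures<float64_t>* train_feats, CMulticlassLabels* train_labels)
{
	// 1. Describe the layers: the array must start with an input layer,
	//    and the last layer is treated as the output layer.
	CDynamicObjectArray* layers = new CDynamicObjectArray();
	layers->append_element(new CNeuralInputLayer(train_feats->get_num_features()));
	layers->append_element(new CNeuralLogisticLayer(16)); // hidden layer, 16 neurons (assumed)
	layers->append_element(new CNeuralSoftmaxLayer(3));   // output layer, 3 classes (assumed)

	// 2. Build the network and connect each layer to the one after it.
	CNeuralNetwork* net = new CNeuralNetwork(layers);
	net->quick_connect();

	// 3. Randomly initialize the parameters (Gaussian, sigma = 0.01 by default).
	net->initialize_neural_network();

	// 4. Train, then apply (here, to the training features themselves).
	net->set_labels(train_labels);
	net->train(train_feats);
	CMulticlassLabels* predictions = net->apply_multiclass(train_feats);

	SG_UNREF(predictions);
	SG_UNREF(net);
}
```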

Supported feature types: CDenseFeatures<float64_t>
Supported label types: CBinaryLabels, CMulticlassLabels, CRegressionLabels

The neural network can be trained using [L-BFGS](http://en.wikipedia.org/wiki/Limited-memory_BFGS) (the default) or [mini-batch gradient descent](http://en.wikipedia.org/wiki/Stochastic_gradient_descent).

NOTE: L-BFGS does not work properly with dropout or max-norm regularization due to their stochastic nature; use gradient descent instead.

During training, the error at each iteration is logged at the MSG_INFO level (to turn on info messages, call sg_io->set_loglevel(MSG_INFO)).
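For example, a sketch of enabling these info messages and switching to mini-batch gradient descent (required once dropout or max-norm regularization is used); the enum value NNOM_GRADIENT_DESCENT is assumed from ENNOptimizationMethod, and net is an already constructed CNeuralNetwork*:

```cpp
sg_io->set_loglevel(MSG_INFO);                        // log the error at each iteration

net->set_optimization_method(NNOM_GRADIENT_DESCENT); // assumed enum value; the default is NNOM_LBFGS
net->set_dropout_hidden(0.5);                         // dropout/max-norm require gradient descent
net->set_dropout_input(0.2);
net->set_gd_mini_batch_size(64);
net->set_gd_learning_rate(0.1);
net->set_max_num_epochs(100);
```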

The network stores the parameters of all its layers in a single array. This makes it easy to train a network combining arbitrary layer types using any optimization method (gradient descent, L-BFGS, ...).

All the matrices that the network (and related classes) deal with are in column-major format.

When implementing new layer types, the function check_gradients() can be used to make sure the gradient computations are correct.

Definition at line 110 of file NeuralNetwork.h.

Inheritance diagram for CNeuralNetwork.

Public Types

typedef rxcpp::subjects::subject< ObservedValue > SGSubject
 
typedef rxcpp::observable< ObservedValue, rxcpp::dynamic_observable< ObservedValue > > SGObservable
 
typedef rxcpp::subscriber< ObservedValue, rxcpp::observer< ObservedValue, void, void, void, void > > SGSubscriber
 

Public Member Functions

 CNeuralNetwork ()
 
 CNeuralNetwork (CDynamicObjectArray *layers)
 
virtual void set_layers (CDynamicObjectArray *layers)
 
virtual void connect (int32_t i, int32_t j)
 
virtual void quick_connect ()
 
virtual void disconnect (int32_t i, int32_t j)
 
virtual void disconnect_all ()
 
virtual void initialize_neural_network (float64_t sigma=0.01f)
 
virtual ~CNeuralNetwork ()
 
virtual CBinaryLabels * apply_binary (CFeatures *data)
 
virtual CRegressionLabels * apply_regression (CFeatures *data)
 
virtual CMulticlassLabels * apply_multiclass (CFeatures *data)
 
virtual CDenseFeatures< float64_t > * transform (CDenseFeatures< float64_t > *data)
 
virtual void set_labels (CLabels *lab)
 
virtual EMachineType get_classifier_type ()
 
virtual EProblemType get_machine_problem_type () const
 
virtual float64_t check_gradients (float64_t approx_epsilon=1.0e-3, float64_t s=1.0e-9)
 
SGVector< float64_t > * get_layer_parameters (int32_t i)
 
int32_t get_num_parameters ()
 
SGVector< float64_t > get_parameters ()
 
int32_t get_num_inputs ()
 
int32_t get_num_outputs ()
 
CDynamicObjectArray * get_layers ()
 
virtual const char * get_name () const
 
void set_optimization_method (ENNOptimizationMethod optimization_method)
 
ENNOptimizationMethod get_optimization_method () const
 
void set_l2_coefficient (float64_t l2_coefficient)
 
float64_t get_l2_coefficient () const
 
void set_l1_coefficient (float64_t l1_coefficient)
 
float64_t get_l1_coefficient () const
 
void set_dropout_hidden (float64_t dropout_hidden)
 
float64_t get_dropout_hidden () const
 
void set_dropout_input (float64_t dropout_input)
 
float64_t get_dropout_input () const
 
void set_max_norm (float64_t max_norm)
 
float64_t get_max_norm () const
 
void set_epsilon (float64_t epsilon)
 
float64_t get_epsilon () const
 
void set_max_num_epochs (int32_t max_num_epochs)
 
int32_t get_max_num_epochs () const
 
void set_gd_mini_batch_size (int32_t gd_mini_batch_size)
 
int32_t get_gd_mini_batch_size () const
 
void set_gd_learning_rate (float64_t gd_learning_rate)
 
float64_t get_gd_learning_rate () const
 
void set_gd_learning_rate_decay (float64_t gd_learning_rate_decay)
 
float64_t get_gd_learning_rate_decay () const
 
void set_gd_momentum (float64_t gd_momentum)
 
float64_t get_gd_momentum () const
 
void set_gd_error_damping_coeff (float64_t gd_error_damping_coeff)
 
float64_t get_gd_error_damping_coeff () const
 
virtual bool train (CFeatures *data=NULL)
 
virtual CLabels * apply (CFeatures *data=NULL)
 
virtual CStructuredLabels * apply_structured (CFeatures *data=NULL)
 
virtual CLatentLabels * apply_latent (CFeatures *data=NULL)
 
virtual CLabels * get_labels ()
 
void set_max_train_time (float64_t t)
 
float64_t get_max_train_time ()
 
void set_solver_type (ESolverType st)
 
ESolverType get_solver_type ()
 
virtual void set_store_model_features (bool store_model)
 
virtual bool train_locked (SGVector< index_t > indices)
 
virtual float64_t apply_one (int32_t i)
 
virtual CLabels * apply_locked (SGVector< index_t > indices)
 
virtual CBinaryLabels * apply_locked_binary (SGVector< index_t > indices)
 
virtual CRegressionLabels * apply_locked_regression (SGVector< index_t > indices)
 
virtual CMulticlassLabels * apply_locked_multiclass (SGVector< index_t > indices)
 
virtual CStructuredLabels * apply_locked_structured (SGVector< index_t > indices)
 
virtual CLatentLabels * apply_locked_latent (SGVector< index_t > indices)
 
virtual void data_lock (CLabels *labs, CFeatures *features)
 
virtual void post_lock (CLabels *labs, CFeatures *features)
 
virtual void data_unlock ()
 
virtual bool supports_locking () const
 
bool is_data_locked () const
 
SG_FORCED_INLINE bool cancel_computation () const
 
SG_FORCED_INLINE void pause_computation ()
 
SG_FORCED_INLINE void resume_computation ()
 
int32_t ref ()
 
int32_t ref_count ()
 
int32_t unref ()
 
virtual CSGObject * shallow_copy () const
 
virtual CSGObject * deep_copy () const
 
virtual bool is_generic (EPrimitiveType *generic) const
 
template<class T >
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
template<>
void set_generic ()
 
void unset_generic ()
 
virtual void print_serializable (const char *prefix="")
 
virtual bool save_serializable (CSerializableFile *file, const char *prefix="")
 
virtual bool load_serializable (CSerializableFile *file, const char *prefix="")
 
void set_global_io (SGIO *io)
 
SGIO * get_global_io ()
 
void set_global_parallel (Parallel *parallel)
 
Parallel * get_global_parallel ()
 
void set_global_version (Version *version)
 
Version * get_global_version ()
 
SGStringList< char > get_modelsel_names ()
 
void print_modsel_params ()
 
char * get_modsel_param_descr (const char *param_name)
 
index_t get_modsel_param_index (const char *param_name)
 
void build_gradient_parameter_dictionary (CMap< TParameter *, CSGObject *> *dict)
 
bool has (const std::string &name) const
 
template<typename T >
bool has (const Tag< T > &tag) const
 
template<typename T , typename U = void>
bool has (const std::string &name) const
 
template<typename T >
void set (const Tag< T > &_tag, const T &value)
 
template<typename T , typename U = void>
void set (const std::string &name, const T &value)
 
template<typename T >
T get (const Tag< T > &_tag) const
 
template<typename T , typename U = void>
T get (const std::string &name) const
 
SGObservable * get_parameters_observable ()
 
void subscribe_to_parameters (ParameterObserverInterface *obs)
 
void list_observable_parameters ()
 
virtual void update_parameter_hash ()
 
virtual bool parameter_hash_changed ()
 
virtual bool equals (CSGObject *other, float64_t accuracy=0.0, bool tolerant=false)
 
virtual CSGObject * clone ()
 

Public Attributes

SGIO * io
 
Parallel * parallel
 
Version * version
 
Parameter * m_parameters
 
Parameter * m_model_selection_parameters
 
Parameter * m_gradient_parameters
 
uint32_t m_hash
 

Protected Member Functions

virtual bool train_machine (CFeatures *data=NULL)
 
virtual bool train_gradient_descent (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets)
 
virtual bool train_lbfgs (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets)
 
virtual SGMatrix< float64_t > forward_propagate (CFeatures *data, int32_t j=-1)
 
virtual SGMatrix< float64_t > forward_propagate (SGMatrix< float64_t > inputs, int32_t j=-1)
 
virtual void set_batch_size (int32_t batch_size)
 
virtual float64_t compute_gradients (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets, SGVector< float64_t > gradients)
 
virtual float64_t compute_error (SGMatrix< float64_t > inputs, SGMatrix< float64_t > targets)
 
virtual float64_t compute_error (SGMatrix< float64_t > targets)
 
virtual bool is_label_valid (CLabels *lab) const
 
CNeuralLayer * get_layer (int32_t i)
 
SGMatrix< float64_t > features_to_matrix (CFeatures *features)
 
SGMatrix< float64_t > labels_to_matrix (CLabels *labs)
 
virtual void store_model_features ()
 
virtual bool train_require_labels () const
 
rxcpp::subscription connect_to_signal_handler ()
 
void reset_computation_variables ()
 
virtual void on_next ()
 
virtual void on_pause ()
 
virtual void on_complete ()
 
virtual void load_serializable_pre () throw (ShogunException)
 
virtual void load_serializable_post () throw (ShogunException)
 
virtual void save_serializable_pre () throw (ShogunException)
 
virtual void save_serializable_post () throw (ShogunException)
 
template<typename T >
void register_param (Tag< T > &_tag, const T &value)
 
template<typename T >
void register_param (const std::string &name, const T &value)
 
bool clone_parameters (CSGObject *other)
 
void observe (const ObservedValue value)
 
void register_observable_param (const std::string &name, const SG_OBS_VALUE_TYPE type, const std::string &description)
 

Protected Attributes

int32_t m_num_inputs
 
int32_t m_num_layers
 
CDynamicObjectArray * m_layers
 
SGMatrix< bool > m_adj_matrix
 
int32_t m_total_num_parameters
 
SGVector< float64_t > m_params
 
SGVector< bool > m_param_regularizable
 
SGVector< int32_t > m_index_offsets
 
int32_t m_batch_size
 
bool m_is_training
 
ENNOptimizationMethod m_optimization_method
 
float64_t m_l2_coefficient
 
float64_t m_l1_coefficient
 
float64_t m_dropout_hidden
 
float64_t m_dropout_input
 
float64_t m_max_norm
 
float64_t m_epsilon
 
int32_t m_max_num_epochs
 
int32_t m_gd_mini_batch_size
 
float64_t m_gd_learning_rate
 
float64_t m_gd_learning_rate_decay
 
float64_t m_gd_momentum
 
float64_t m_gd_error_damping_coeff
 
float64_t m_max_train_time
 
CLabels * m_labels
 
ESolverType m_solver_type
 
bool m_store_model_features
 
bool m_data_locked
 
std::atomic< bool > m_cancel_computation
 
std::atomic< bool > m_pause_computation_flag
 
std::condition_variable m_pause_computation
 
std::mutex m_mutex
 

Friends

class CDeepBeliefNetwork
 

Member Typedef Documentation

◆ SGObservable

Definition at line 130 of file SGObject.h.

◆ SGSubject

Definition at line 127 of file SGObject.h.

◆ SGSubscriber

typedef rxcpp::subscriber< ObservedValue, rxcpp::observer<ObservedValue, void, void, void, void> > SGSubscriber
inherited

Definition at line 133 of file SGObject.h.

Constructor & Destructor Documentation

◆ CNeuralNetwork() [1/2]

Default constructor.

Definition at line 43 of file NeuralNetwork.cpp.

◆ CNeuralNetwork() [2/2]

Sets the layers of the network

Parameters
layers: An array of CNeuralLayer objects specifying the layers of the network. Must contain at least one input layer. The last layer in the array is treated as the output layer

Definition at line 49 of file NeuralNetwork.cpp.

◆ ~CNeuralNetwork()

~CNeuralNetwork ( )
virtual

Definition at line 153 of file NeuralNetwork.cpp.

Member Function Documentation

◆ apply()

CLabels * apply (CFeatures * data = NULL)
virtual inherited
 
Apply machine to data. If data is not specified, apply to the current features.
 
Parameters
data: (test) data to be classified
Returns
classified labels

Definition at line 159 of file Machine.cpp.

◆ apply_binary()

CBinaryLabels * apply_binary (CFeatures * data)
virtual
 
Apply machine to data as a binary classification problem.

Reimplemented from CMachine.

Definition at line 158 of file NeuralNetwork.cpp.

◆ apply_latent()

CLatentLabels * apply_latent ( CFeatures data = NULL)
virtualinherited

apply machine to data in means of latent problem

Reimplemented in CLinearLatentMachine.

Definition at line 239 of file Machine.cpp.

◆ apply_locked()

CLabels * apply_locked ( SGVector< index_t indices)
virtualinherited

Applies a locked machine on a set of indices. Error if machine is not locked

Parameters
indicesindex vector (of locked features) that is predicted

Definition at line 194 of file Machine.cpp.

◆ apply_locked_binary()

CBinaryLabels * apply_locked_binary ( SGVector< index_t indices)
virtualinherited

applies a locked machine on a set of indices for binary problems

Reimplemented in CKernelMachine.

Definition at line 245 of file Machine.cpp.

◆ apply_locked_latent()

CLatentLabels * apply_locked_latent ( SGVector< index_t indices)
virtualinherited

applies a locked machine on a set of indices for latent problems

Definition at line 273 of file Machine.cpp.

◆ apply_locked_multiclass()

CMulticlassLabels * apply_locked_multiclass ( SGVector< index_t indices)
virtualinherited

applies a locked machine on a set of indices for multiclass problems

Definition at line 259 of file Machine.cpp.

◆ apply_locked_regression()

CRegressionLabels * apply_locked_regression ( SGVector< index_t indices)
virtualinherited

applies a locked machine on a set of indices for regression problems

Reimplemented in CKernelMachine.

Definition at line 252 of file Machine.cpp.

◆ apply_locked_structured()

CStructuredLabels * apply_locked_structured ( SGVector< index_t indices)
virtualinherited

applies a locked machine on a set of indices for structured problems

Definition at line 266 of file Machine.cpp.

◆ apply_multiclass()

CMulticlassLabels * apply_multiclass (CFeatures * data)
virtual
 
Apply machine to data as a multiclass classification problem.

Reimplemented from CMachine.

Definition at line 199 of file NeuralNetwork.cpp.

◆ apply_one()

virtual float64_t apply_one ( int32_t  i)
virtualinherited

◆ apply_regression()

CRegressionLabels * apply_regression (CFeatures * data)
virtual
 
Apply machine to data as a regression problem.

Reimplemented from CMachine.

Definition at line 187 of file NeuralNetwork.cpp.

◆ apply_structured()

CStructuredLabels * apply_structured ( CFeatures data = NULL)
virtualinherited

apply machine to data in means of SO classification problem

Reimplemented in CLinearStructuredOutputMachine.

Definition at line 233 of file Machine.cpp.

◆ build_gradient_parameter_dictionary()

void build_gradient_parameter_dictionary ( CMap< TParameter *, CSGObject *> *  dict)
inherited

Builds a dictionary of all parameters in SGObject as well of those of SGObjects that are parameters of this object. Dictionary maps parameters to the objects that own them.

Parameters
dictdictionary of parameters to be built.

Definition at line 635 of file SGObject.cpp.

◆ cancel_computation()

SG_FORCED_INLINE bool cancel_computation ( ) const
inherited
Returns
whether the algorithm needs to be stopped

Definition at line 319 of file Machine.h.

◆ check_gradients()

float64_t check_gradients ( float64_t  approx_epsilon = 1.0e-3,
float64_t  s = 1.0e-9 
)
virtual

Checks if the gradients computed using backpropagation are correct by comparing them with gradients computed using numerical approximation. Used for testing purposes only.

Gradients are numerically approximated according to:

\[ c = max(\epsilon x, s) \]

\[ f'(x) = \frac{f(x + c)-f(x - c)}{2c} \]

Parameters
approx_epsilon: Constant used during gradient approximation
s: Some small value, used to prevent division by zero
Returns
Average difference between numerical gradients and backpropagation gradients

Definition at line 557 of file NeuralNetwork.cpp.
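A sketch of how a unit test for a new layer type might call this; CMyCustomLayer is the hypothetical layer under test, and CNeuralInputLayer, CNeuralLinearLayer, and the tolerance are illustrative assumptions:

```cpp
// Build a small network around the new layer type.
CDynamicObjectArray* layers = new CDynamicObjectArray();
layers->append_element(new CNeuralInputLayer(5));
layers->append_element(new CMyCustomLayer(7));      // hypothetical layer being tested
layers->append_element(new CNeuralLinearLayer(3));

CNeuralNetwork* net = new CNeuralNetwork(layers);
net->quick_connect();
net->initialize_neural_network();

// Average difference between numerical and backpropagation gradients;
// it should be close to zero if the layer's gradient computations are correct.
float64_t diff = net->check_gradients();
ASSERT(diff < 1e-6); // illustrative tolerance

SG_UNREF(net);
```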

◆ clone()

CSGObject * clone ( )
virtualinherited

Creates a clone of the current object. This is done via recursively traversing all parameters, which corresponds to a deep copy. Calling equals on the cloned object always returns true although none of the memory of both objects overlaps.

Returns
an identical copy of the given object, which is disjoint in memory. NULL if the clone fails. Note that the returned object is SG_REF'ed

Reimplemented in CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, CDynamicObjectArray, CAlphabet, and CMKL.

Definition at line 734 of file SGObject.cpp.

◆ clone_parameters()

bool clone_parameters ( CSGObject other)
protectedinherited

Definition at line 759 of file SGObject.cpp.

◆ compute_error() [1/2]

float64_t compute_error (SGMatrix< float64_t > inputs,
SGMatrix< float64_t > targets)
protected virtual
 
Forward propagates the inputs and computes the error between the output layer's activations and the given target activations.
 
Parameters
inputs: inputs to the network, a matrix of size m_num_inputs*m_batch_size
targets: desired values for the network's output, a matrix of size num_neurons_output_layer*batch_size

Definition at line 549 of file NeuralNetwork.cpp.

◆ compute_error() [2/2]

float64_t compute_error (SGMatrix< float64_t > targets)
protected virtual
 
Computes the error between the output layer's activations and the given target activations.
 
Parameters
targets: desired values for the network's output, a matrix of size num_neurons_output_layer*batch_size

Reimplemented in CAutoencoder, and CDeepAutoencoder.

Definition at line 522 of file NeuralNetwork.cpp.

◆ compute_gradients()

float64_t compute_gradients (SGMatrix< float64_t > inputs,
SGMatrix< float64_t > targets,
SGVector< float64_t > gradients)
protected virtual
 
Applies backpropagation to compute the gradients of the error with respect to every parameter in the network.
 
Parameters
inputs: inputs to the network, a matrix of size m_num_inputs*m_batch_size
targets: desired values for the output layer's activations, a matrix of size m_layers[m_num_layers-1].get_num_neurons()*m_batch_size
gradients: array to be filled with gradient values
Returns
error between the targets and the activations of the last layer

Definition at line 467 of file NeuralNetwork.cpp.

◆ connect()

void connect ( int32_t  i,
int32_t  j 
)
virtual

Connects layer i as input to layer j. In order for forward and backpropagation to work correctly, i must be less than j.

Definition at line 75 of file NeuralNetwork.cpp.
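For illustration, a sketch that builds a small directed acyclic graph with a skip connection; layer indices refer to positions in the array passed to the constructor or set_layers(), and the concrete layer classes and sizes are assumptions:

```cpp
CDynamicObjectArray* layers = new CDynamicObjectArray();
layers->append_element(new CNeuralInputLayer(10));   // layer 0
layers->append_element(new CNeuralLogisticLayer(8)); // layer 1
layers->append_element(new CNeuralLinearLayer(2));   // layer 2 (output)

CNeuralNetwork* net = new CNeuralNetwork(layers);
net->connect(0, 1); // layer 0 feeds layer 1
net->connect(1, 2); // layer 1 feeds layer 2
net->connect(0, 2); // skip connection: layer 0 also feeds layer 2 (0 < 2, as required)
net->initialize_neural_network();
```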

◆ connect_to_signal_handler()

rxcpp::subscription connect_to_signal_handler ( )
protectedinherited

connect the machine instance to the signal handler

Definition at line 280 of file Machine.cpp.

◆ data_lock()

void data_lock ( CLabels labs,
CFeatures features 
)
virtualinherited

Locks the machine on given labels and data. After this call, only train_locked and apply_locked may be called

Only possible if supports_locking() returns true

Parameters
labslabels used for locking
featuresfeatures used for locking

Reimplemented in CKernelMachine.

Definition at line 119 of file Machine.cpp.

◆ data_unlock()

void data_unlock ( )
virtualinherited

Unlocks a locked machine and restores previous state

Reimplemented in CKernelMachine.

Definition at line 150 of file Machine.cpp.

◆ deep_copy()

CSGObject * deep_copy ( ) const
virtualinherited

A deep copy. All the instance variables will also be copied.

Definition at line 232 of file SGObject.cpp.

◆ disconnect()

void disconnect ( int32_t  i,
int32_t  j 
)
virtual

Disconnects layer i from layer j

Definition at line 88 of file NeuralNetwork.cpp.

◆ disconnect_all()

void disconnect_all ( )
virtual

Removes all connections in the network

Definition at line 93 of file NeuralNetwork.cpp.

◆ equals()

bool equals ( CSGObject other,
float64_t  accuracy = 0.0,
bool  tolerant = false 
)
virtualinherited

Recursively compares the current SGObject to another one. Compares all registered numerical parameters, recursion upon complex (SGObject) parameters. Does not compare pointers!

May be overwritten but please do with care! Should not be necessary in most cases.

Parameters
other: object to compare with
accuracy: accuracy to use for comparison (optional)
tolerant: allows lenient check on float equality (within accuracy)
Returns
true if all parameters were equal, false if not

Definition at line 656 of file SGObject.cpp.

◆ features_to_matrix()

SGMatrix< float64_t > features_to_matrix (CFeatures * features)
protected

Ensures the given features are suitable for use with the network and returns their feature matrix

Definition at line 617 of file NeuralNetwork.cpp.

◆ forward_propagate() [1/2]

SGMatrix< float64_t > forward_propagate (CFeatures * data,
int32_t  j = -1 
)
protectedvirtual

Applies forward propagation, computes the activations of each layer up to layer j

Parameters
data: input features
j: layer index at which the propagation should stop. If -1, the propagation continues up to the last layer
Returns
activations of the last layer

Definition at line 439 of file NeuralNetwork.cpp.

◆ forward_propagate() [2/2]

SGMatrix< float64_t > forward_propagate (SGMatrix< float64_t > inputs,
int32_t  j = -1 
)
protectedvirtual

Applies forward propagation, computes the activations of each layer up to layer j

Parameters
inputs: inputs to the network, a matrix of size m_num_inputs*m_batch_size
j: layer index at which the propagation should stop. If -1, the propagation continues up to the last layer
Returns
activations of the last layer

Definition at line 446 of file NeuralNetwork.cpp.

◆ get() [1/2]

T get ( const Tag< T > &  _tag) const
inherited

Getter for a class parameter, identified by a Tag. Throws an exception if the class does not have such a parameter.

Parameters
_tagname and type information of parameter
Returns
value of the parameter identified by the input tag

Definition at line 381 of file SGObject.h.

◆ get() [2/2]

T get ( const std::string &  name) const
inherited

Getter for a class parameter, identified by a name. Throws an exception if the class does not have such a parameter.

Parameters
namename of the parameter
Returns
value of the parameter corresponding to the input name and type

Definition at line 404 of file SGObject.h.

◆ get_classifier_type()

virtual EMachineType get_classifier_type ( )
virtual

get classifier type

Returns
classifier type CT_NEURALNETWORK

Reimplemented from CMachine.

Definition at line 188 of file NeuralNetwork.h.

◆ get_dropout_hidden()

float64_t get_dropout_hidden ( ) const

Returns dropout probability for hidden layers

Definition at line 292 of file NeuralNetwork.h.

◆ get_dropout_input()

float64_t get_dropout_input ( ) const

Returns dropout probability for input layers

Definition at line 312 of file NeuralNetwork.h.

◆ get_epsilon()

float64_t get_epsilon ( ) const

Returns epsilon

Definition at line 346 of file NeuralNetwork.h.

◆ get_gd_error_damping_coeff()

float64_t get_gd_error_damping_coeff ( ) const

Definition at line 454 of file NeuralNetwork.h.

◆ get_gd_learning_rate()

float64_t get_gd_learning_rate ( ) const

Returns gradient descent learning rate

Definition at line 393 of file NeuralNetwork.h.

◆ get_gd_learning_rate_decay()

float64_t get_gd_learning_rate_decay ( ) const

Returns gradient descent learning rate decay

Definition at line 410 of file NeuralNetwork.h.

◆ get_gd_mini_batch_size()

int32_t get_gd_mini_batch_size ( ) const

Returns mini batch size

Definition at line 378 of file NeuralNetwork.h.

◆ get_gd_momentum()

float64_t get_gd_momentum ( ) const

Returns gradient descent momentum multiplier

Definition at line 431 of file NeuralNetwork.h.

◆ get_global_io()

SGIO * get_global_io ( )
inherited

get the io object

Returns
io object

Definition at line 269 of file SGObject.cpp.

◆ get_global_parallel()

Parallel * get_global_parallel ( )
inherited

get the parallel object

Returns
parallel object

Definition at line 311 of file SGObject.cpp.

◆ get_global_version()

Version * get_global_version ( )
inherited

get the version object

Returns
version object

Definition at line 324 of file SGObject.cpp.

◆ get_l1_coefficient()

float64_t get_l1_coefficient ( ) const

Returns L1 coefficient

Definition at line 272 of file NeuralNetwork.h.

◆ get_l2_coefficient()

float64_t get_l2_coefficient ( ) const

Returns L2 coefficient

Definition at line 258 of file NeuralNetwork.h.

◆ get_labels()

CLabels * get_labels ( )
virtualinherited

get labels

Returns
labels

Definition at line 83 of file Machine.cpp.

◆ get_layer()

CNeuralLayer * get_layer ( int32_t  i)
protected

returns a pointer to layer i in the network

Definition at line 726 of file NeuralNetwork.cpp.

◆ get_layer_parameters()

SGVector< float64_t > * get_layer_parameters ( int32_t  i)

returns a copy of a layer's parameters array

Parameters
i: index of the layer

Definition at line 715 of file NeuralNetwork.cpp.

◆ get_layers()

CDynamicObjectArray * get_layers ( )

Returns an array holding the network's layers

Definition at line 747 of file NeuralNetwork.cpp.

◆ get_machine_problem_type()

EProblemType get_machine_problem_type ( ) const
virtual

returns type of problem machine solves

Reimplemented from CMachine.

Definition at line 678 of file NeuralNetwork.cpp.

◆ get_max_norm()

float64_t get_max_norm ( ) const

Returns maximum allowable L2 norm

Definition at line 328 of file NeuralNetwork.h.

◆ get_max_num_epochs()

int32_t get_max_num_epochs ( ) const

Returns maximum number of epochs

Definition at line 362 of file NeuralNetwork.h.

◆ get_max_train_time()

float64_t get_max_train_time ( )
inherited

get maximum training time

Returns
maximum training time

Definition at line 94 of file Machine.cpp.

◆ get_modelsel_names()

SGStringList< char > get_modelsel_names ( )
inherited
Returns
vector of names of all parameters which are registered for model selection

Definition at line 536 of file SGObject.cpp.

◆ get_modsel_param_descr()

char * get_modsel_param_descr ( const char *  param_name)
inherited

Returns description of a given parameter string, if it exists. SG_ERROR otherwise

Parameters
param_namename of the parameter
Returns
description of the parameter

Definition at line 560 of file SGObject.cpp.

◆ get_modsel_param_index()

index_t get_modsel_param_index ( const char *  param_name)
inherited

Returns index of model selection parameter with provided name

Parameters
param_namename of model selection parameter
Returns
index of model selection parameter with provided name, -1 if there is no such

Definition at line 573 of file SGObject.cpp.

◆ get_name()

virtual const char* get_name ( ) const
virtual

Returns the name of the SGSerializable instance. It MUST BE the CLASS NAME without the prefixed `C'.

Returns
name of the SGSerializable

Reimplemented from CMachine.

Reimplemented in CDeepAutoencoder, and CAutoencoder.

Definition at line 232 of file NeuralNetwork.h.

◆ get_num_inputs()

int32_t get_num_inputs ( )

returns the number of inputs the network takes

Definition at line 224 of file NeuralNetwork.h.

◆ get_num_outputs()

int32_t get_num_outputs ( )

returns the number of neurons in the output layer

Definition at line 742 of file NeuralNetwork.cpp.

◆ get_num_parameters()

int32_t get_num_parameters ( )

returns the total number of parameters in the network

Definition at line 218 of file NeuralNetwork.h.

◆ get_optimization_method()

ENNOptimizationMethod get_optimization_method ( ) const

Returns optimization method

Definition at line 244 of file NeuralNetwork.h.

◆ get_parameters()

SGVector<float64_t> get_parameters ( )

return the network's parameter array

Definition at line 221 of file NeuralNetwork.h.

◆ get_parameters_observable()

SGObservable* get_parameters_observable ( )
inherited

Get parameters observable

Returns
RxCpp observable

Definition at line 415 of file SGObject.h.

◆ get_solver_type()

ESolverType get_solver_type ( )
inherited

get solver type

Returns
solver

Definition at line 109 of file Machine.cpp.

◆ has() [1/3]

bool has ( const std::string &  name) const
inherited

Checks if object has a class parameter identified by a name.

Parameters
namename of the parameter
Returns
true if the parameter exists with the input name

Definition at line 304 of file SGObject.h.

◆ has() [2/3]

bool has ( const Tag< T > &  tag) const
inherited

Checks if object has a class parameter identified by a Tag.

Parameters
tagtag of the parameter containing name and type information
Returns
true if the parameter exists with the input tag

Definition at line 315 of file SGObject.h.

◆ has() [3/3]

bool has ( const std::string &  name) const
inherited

Checks if a type exists for a class parameter identified by a name.

Parameters
namename of the parameter
Returns
true if the parameter exists with the input name and type

Definition at line 326 of file SGObject.h.

◆ initialize_neural_network()

void initialize_neural_network ( float64_t  sigma = 0.01f)
virtual

Initializes the network

Parameters
sigma: standard deviation of the Gaussian used to randomly initialize the parameters

Definition at line 98 of file NeuralNetwork.cpp.

◆ is_data_locked()

bool is_data_locked ( ) const
inherited
Returns
whether this machine is locked

Definition at line 308 of file Machine.h.

◆ is_generic()

bool is_generic ( EPrimitiveType *  generic) const
virtualinherited

If the SGSerializable is a class template then TRUE will be returned and GENERIC is set to the type of the generic.

Parameters
genericset to the type of the generic if returning TRUE
Returns
TRUE if a class template.

Definition at line 330 of file SGObject.cpp.

◆ is_label_valid()

bool is_label_valid (CLabels * lab) const
protected virtual
 
Checks whether the labels are valid.
 
Subclasses can override this to implement their check of label types.
 
Parameters
lab: the labels being checked, guaranteed to be non-NULL

Reimplemented from CMachine.

Definition at line 692 of file NeuralNetwork.cpp.

◆ labels_to_matrix()

SGMatrix< float64_t > labels_to_matrix (CLabels * labs)
protected
 
converts the given labels into a matrix suitable for use with the network

Returns
matrix of size get_num_outputs()*num_labels

Definition at line 633 of file NeuralNetwork.cpp.

◆ list_observable_parameters()

void list_observable_parameters ( )
inherited

Print to stdout a list of observable parameters

Definition at line 878 of file SGObject.cpp.

◆ load_serializable()

bool load_serializable ( CSerializableFile file,
const char *  prefix = "" 
)
virtualinherited

Load this object from file. If it will fail (returning FALSE) then this object will contain inconsistent data and should not be used!

Parameters
filewhere to load from
prefixprefix for members
Returns
TRUE if done, otherwise FALSE

Definition at line 403 of file SGObject.cpp.

◆ load_serializable_post()

void load_serializable_post ( )
throw (ShogunException
)
protectedvirtualinherited

Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_POST is called.

Exceptions
ShogunExceptionwill be thrown if an error occurs.

Reimplemented in CKernel, CWeightedDegreePositionStringKernel, CList, CAlphabet, CLinearHMM, CGaussianKernel, CInverseMultiQuadricKernel, CCircularKernel, and CExponentialKernel.

Definition at line 460 of file SGObject.cpp.

◆ load_serializable_pre()

void load_serializable_pre ( )
throw (ShogunException
)
protectedvirtualinherited

Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_PRE is called.

Exceptions
ShogunExceptionwill be thrown if an error occurs.

Reimplemented in CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.

Definition at line 455 of file SGObject.cpp.

◆ observe()

void observe ( const ObservedValue  value)
protectedinherited

Observe a parameter value and emit them to observer.

Parameters
valueObserved parameter's value

Definition at line 828 of file SGObject.cpp.

◆ on_complete()

virtual void on_complete ( )
protectedvirtualinherited

The action which will be done when the user decides to return to prompt and terminate the program execution

Definition at line 427 of file Machine.h.

◆ on_next()

virtual void on_next ( )
protectedvirtualinherited

The action which will be done when the user decides to premature stop the CMachine execution

Definition at line 411 of file Machine.h.

◆ on_pause()

virtual void on_pause ( )
protectedvirtualinherited

The action which will be done when the user decides to pause the CMachine execution

Definition at line 418 of file Machine.h.

◆ parameter_hash_changed()

bool parameter_hash_changed ( )
virtualinherited
Returns
whether parameter combination has changed since last update

Definition at line 296 of file SGObject.cpp.

◆ pause_computation()

SG_FORCED_INLINE void pause_computation ( )
inherited

Pause the algorithm if the flag is set

Definition at line 327 of file Machine.h.

◆ post_lock()

virtual void post_lock ( CLabels labs,
CFeatures features 
)
virtualinherited

post lock

Definition at line 299 of file Machine.h.

◆ print_modsel_params()

void print_modsel_params ( )
inherited

prints all parameter registered for model selection and their type

Definition at line 512 of file SGObject.cpp.

◆ print_serializable()

void print_serializable ( const char *  prefix = "")
virtualinherited

prints registered parameters out

Parameters
prefixprefix for members

Definition at line 342 of file SGObject.cpp.

◆ quick_connect()

void quick_connect ( )
virtual

Connects each layer to the layer after it. That is, connects layer i as input to layer i+1 for all i.

Definition at line 81 of file NeuralNetwork.cpp.

◆ ref()

int32_t ref ( )
inherited

increase reference counter

Returns
reference count

Definition at line 186 of file SGObject.cpp.

◆ ref_count()

int32_t ref_count ( )
inherited

display reference counter

Returns
reference count

Definition at line 193 of file SGObject.cpp.

◆ register_observable_param()

void register_observable_param ( const std::string &  name,
const SG_OBS_VALUE_TYPE  type,
const std::string &  description 
)
protectedinherited

Register which params this object can emit.

Parameters
namethe param name
typethe param type
descriptiona user oriented description

Definition at line 871 of file SGObject.cpp.

◆ register_param() [1/2]

void register_param ( Tag< T > &  _tag,
const T &  value 
)
protectedinherited

Registers a class parameter which is identified by a tag. This enables the parameter to be modified by set() and retrieved by get(). Parameters can be registered in the constructor of the class.

Parameters
_tagname and type information of parameter
valuevalue of the parameter

Definition at line 472 of file SGObject.h.

◆ register_param() [2/2]

void register_param ( const std::string &  name,
const T &  value 
)
protectedinherited

Registers a class parameter which is identified by a name. This enables the parameter to be modified by set() and retrieved by get(). Parameters can be registered in the constructor of the class.

Parameters
namename of the parameter
valuevalue of the parameter along with type information

Definition at line 485 of file SGObject.h.

◆ reset_computation_variables()

void reset_computation_variables ( )
protectedinherited

reset the computation variables

Definition at line 403 of file Machine.h.

◆ resume_computation()

SG_FORCED_INLINE void resume_computation ( )
inherited

Resume current computation (sets the flag)

Definition at line 340 of file Machine.h.

◆ save_serializable()

bool save_serializable ( CSerializableFile file,
const char *  prefix = "" 
)
virtualinherited

Save this object to file.

Parameters
filewhere to save the object; will be closed during returning if PREFIX is an empty string.
prefixprefix for members
Returns
TRUE if done, otherwise FALSE

Definition at line 348 of file SGObject.cpp.

◆ save_serializable_post()

void save_serializable_post ( )
throw (ShogunException
)
protectedvirtualinherited

Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_POST is called.

Exceptions
ShogunExceptionwill be thrown if an error occurs.

Reimplemented in CKernel.

Definition at line 470 of file SGObject.cpp.

◆ save_serializable_pre()

void save_serializable_pre ( )
throw (ShogunException
)
protectedvirtualinherited

Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_PRE is called.

Exceptions
ShogunExceptionwill be thrown if an error occurs.

Reimplemented in CKernel, CDynamicArray< T >, CDynamicArray< float64_t >, CDynamicArray< float32_t >, CDynamicArray< int32_t >, CDynamicArray< char >, CDynamicArray< bool >, and CDynamicObjectArray.

Definition at line 465 of file SGObject.cpp.

◆ set() [1/2]

void set ( const Tag< T > &  _tag,
const T &  value 
)
inherited

Setter for a class parameter, identified by a Tag. Throws an exception if the class does not have such a parameter.

Parameters
_tagname and type information of parameter
valuevalue of the parameter

Definition at line 342 of file SGObject.h.

◆ set() [2/2]

void set ( const std::string &  name,
const T &  value 
)
inherited

Setter for a class parameter, identified by a name. Throws an exception if the class does not have such a parameter.

Parameters
namename of the parameter
valuevalue of the parameter along with type information

Definition at line 368 of file SGObject.h.

◆ set_batch_size()

void set_batch_size ( int32_t  batch_size)
protectedvirtual

Sets the batch size (the number of train/test cases) the network is expected to deal with. Allocates memory for the activations, local gradients, and input gradients if necessary (i.e. if the batch size is different from its previous value).
 
Parameters
batch_size: number of train/test cases the network is expected to deal with

Definition at line 607 of file NeuralNetwork.cpp.

◆ set_dropout_hidden()

void set_dropout_hidden ( float64_t  dropout_hidden)

Sets the probability that a hidden layer neuron will be dropped out. When using this, the recommended value is 0.5. Default value is 0.0 (no dropout).
 
For more details on dropout, see this [paper](http://arxiv.org/abs/1207.0580) [Hinton, 2012].
 
Parameters
dropout_hidden: dropout probability

Definition at line 286 of file NeuralNetwork.h.

◆ set_dropout_input()

void set_dropout_input ( float64_t  dropout_input)

Sets the probability that an input layer neuron will be dropped out. When using this, a good value might be 0.2. Default value is 0.0 (no dropout).
 
For more details on dropout, see this [paper](http://arxiv.org/abs/1207.0580) [Hinton, 2012].
 
Parameters
dropout_input: dropout probability

Definition at line 306 of file NeuralNetwork.h.

◆ set_epsilon()

void set_epsilon ( float64_t  epsilon)

Sets the convergence criterion. Training stops when (E' - E)/E < epsilon, where E is the error at the current iteration and E' is the error at the previous iteration. Default value is 1.0e-5.
 
Parameters
epsilon: convergence criterion

Definition at line 340 of file NeuralNetwork.h.

◆ set_gd_error_damping_coeff()

void set_gd_error_damping_coeff ( float64_t  gd_error_damping_coeff)

Sets the gradient descent error damping coefficient, used to damp the error fluctuations when stochastic gradient descent is used. Damping is done according to: error_damped(i) = c*error(i) + (1-c)*error_damped(i-1), where c is the damping coefficient.
 
If -1, the damping coefficient is automatically computed according to: c = 0.99*gd_mini_batch_size/training_set_size + 1e-2.
 
Default value is -1.
 
Parameters
gd_error_damping_coeff: error damping coefficient

Definition at line 449 of file NeuralNetwork.h.
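Restating the damping rule above, with E_i the error at iteration i, Ē_i the damped error, and c the damping coefficient:

\[ \bar{E}_i = c\, E_i + (1-c)\, \bar{E}_{i-1} \]

and, when the coefficient is set to -1,

\[ c = 0.99\,\frac{\mathrm{gd\_mini\_batch\_size}}{\mathrm{training\_set\_size}} + 10^{-2} \]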

◆ set_gd_learning_rate()

void set_gd_learning_rate ( float64_t  gd_learning_rate)

Sets the gradient descent learning rate. Default value is 0.1.
 
Parameters
gd_learning_rate: gradient descent learning rate

Definition at line 387 of file NeuralNetwork.h.

◆ set_gd_learning_rate_decay()

void set_gd_learning_rate_decay ( float64_t  gd_learning_rate_decay)

Sets the gradient descent learning rate decay. The learning rate is updated at each iteration i according to: alpha(i) = decay*alpha(i-1). Default value is 1.0 (no decay).
 
Parameters
gd_learning_rate_decay: gradient descent learning rate decay

Definition at line 404 of file NeuralNetwork.h.

◆ set_gd_mini_batch_size()

void set_gd_mini_batch_size ( int32_t  gd_mini_batch_size)

Sets the size of the mini-batch used during gradient descent training; if 0, full-batch training is performed. Default value is 0.
 
Parameters
gd_mini_batch_size: mini-batch size

Definition at line 372 of file NeuralNetwork.h.

◆ set_gd_momentum()

void set_gd_momentum ( float64_t  gd_momentum)

Sets gradient descent momentum multiplier

default value is 0.9

For more details on momentum, see this [paper](http://jmlr.org/proceedings/papers/v28/sutskever13.html) [Sutskever, 2013]

Parameters
gd_momentum: gradient descent momentum multiplier

Definition at line 425 of file NeuralNetwork.h.

◆ set_generic() [1/16]

void set_generic ( )
inherited

Definition at line 73 of file SGObject.cpp.

◆ set_generic() [2/16]

void set_generic ( )
inherited

Definition at line 78 of file SGObject.cpp.

◆ set_generic() [3/16]

void set_generic ( )
inherited

Definition at line 83 of file SGObject.cpp.

◆ set_generic() [4/16]

void set_generic ( )
inherited

Definition at line 88 of file SGObject.cpp.

◆ set_generic() [5/16]

void set_generic ( )
inherited

Definition at line 93 of file SGObject.cpp.

◆ set_generic() [6/16]

void set_generic ( )
inherited

Definition at line 98 of file SGObject.cpp.

◆ set_generic() [7/16]

void set_generic ( )
inherited

Definition at line 103 of file SGObject.cpp.

◆ set_generic() [8/16]

void set_generic ( )
inherited

Definition at line 108 of file SGObject.cpp.

◆ set_generic() [9/16]

void set_generic ( )
inherited

Definition at line 113 of file SGObject.cpp.

◆ set_generic() [10/16]

void set_generic ( )
inherited

Definition at line 118 of file SGObject.cpp.

◆ set_generic() [11/16]

void set_generic ( )
inherited

Definition at line 123 of file SGObject.cpp.

◆ set_generic() [12/16]

void set_generic ( )
inherited

Definition at line 128 of file SGObject.cpp.

◆ set_generic() [13/16]

void set_generic ( )
inherited

Definition at line 133 of file SGObject.cpp.

◆ set_generic() [14/16]

void set_generic ( )
inherited

Definition at line 138 of file SGObject.cpp.

◆ set_generic() [15/16]

void set_generic ( )
inherited

Definition at line 143 of file SGObject.cpp.

◆ set_generic() [16/16]

void set_generic ( )
inherited

set generic type to T

◆ set_global_io()

void set_global_io ( SGIO io)
inherited

set the io object

Parameters
ioio object to use

Definition at line 262 of file SGObject.cpp.

◆ set_global_parallel()

void set_global_parallel ( Parallel parallel)
inherited

set the parallel object

Parameters
parallelparallel object to use

Definition at line 275 of file SGObject.cpp.

◆ set_global_version()

void set_global_version ( Version version)
inherited

set the version object

Parameters
versionversion object to use

Definition at line 317 of file SGObject.cpp.

◆ set_l1_coefficient()

void set_l1_coefficient ( float64_t  l1_coefficient)

Sets the L1 regularization coefficient. Default value is 0.0.
 
Parameters
l1_coefficient: L1 regularization coefficient

Definition at line 266 of file NeuralNetwork.h.

◆ set_l2_coefficient()

void set_l2_coefficient ( float64_t  l2_coefficient)

Sets the L2 regularization coefficient. Default value is 0.0.
 
Parameters
l2_coefficient: L2 regularization coefficient

Definition at line 252 of file NeuralNetwork.h.

◆ set_labels()

void set_labels (CLabels * lab)
virtual
 
set labels
 
Parameters
lab: labels

Reimplemented from CMachine.

Definition at line 699 of file NeuralNetwork.cpp.

◆ set_layers()

void set_layers (CDynamicObjectArray * layers)
virtual
 
Sets the layers of the network
 
Parameters
layers: An array of CNeuralLayer objects specifying the layers of the network. Must contain at least one input layer. The last layer in the array is treated as the output layer

Definition at line 55 of file NeuralNetwork.cpp.

◆ set_max_norm()

void set_max_norm ( float64_t  max_norm)

Sets the maximum allowable L2 norm for a neuron's weights. When using this, a good value might be 15. Default value is -1 (max-norm regularization disabled).
 
Parameters
max_norm: maximum allowable L2 norm

Definition at line 322 of file NeuralNetwork.h.

◆ set_max_num_epochs()

void set_max_num_epochs ( int32_t  max_num_epochs)

Sets the maximum number of iterations over the training set. If 0, training will continue until convergence. Default value is 0.
 
Parameters
max_num_epochs: maximum number of iterations over the training set

Definition at line 356 of file NeuralNetwork.h.

◆ set_max_train_time()

void set_max_train_time ( float64_t  t)
inherited

set maximum training time

Parameters
t: maximum training time

Definition at line 89 of file Machine.cpp.

◆ set_optimization_method()

void set_optimization_method ( ENNOptimizationMethod  optimization_method)

Sets the optimization method. Default is NNOM_LBFGS.
 
Parameters
optimization_method: optimization method

Definition at line 238 of file NeuralNetwork.h.

◆ set_solver_type()

void set_solver_type ( ESolverType  st)
inherited

set solver type

Parameters
stsolver type

Definition at line 104 of file Machine.cpp.

◆ set_store_model_features()

void set_store_model_features ( bool  store_model)
virtualinherited

Setter for store-model-features-after-training flag

Parameters
store_modelwhether model should be stored after training

Definition at line 114 of file Machine.cpp.

◆ shallow_copy()

CSGObject * shallow_copy ( ) const
virtualinherited

A shallow copy. All the SGObject instance variables will be simply assigned and SG_REF-ed.

Reimplemented in CGaussianKernel.

Definition at line 226 of file SGObject.cpp.

◆ store_model_features()

virtual void store_model_features ( )
protectedvirtualinherited

Stores feature data of underlying model. After this method has been called, it is possible to change the machine's feature data and call apply(), which is then performed on the training feature data that is part of the machine's model.

Base method, has to be implemented in order to allow cross-validation and model selection.

NOT IMPLEMENTED! Has to be done in subclasses

Reimplemented in CKernelMachine, CKNN, CLinearMachine, CLinearMulticlassMachine, CKMeansBase, CTreeMachine< T >, CTreeMachine< ConditionalProbabilityTreeNodeData >, CTreeMachine< RelaxedTreeNodeData >, CTreeMachine< id3TreeNodeData >, CTreeMachine< VwConditionalProbabilityTreeNodeData >, CTreeMachine< CARTreeNodeData >, CTreeMachine< C45TreeNodeData >, CTreeMachine< CHAIDTreeNodeData >, CTreeMachine< NbodyTreeNodeData >, CGaussianProcessMachine, CHierarchical, CDistanceMachine, CKernelMulticlassMachine, and CLinearStructuredOutputMachine.

Definition at line 378 of file Machine.h.

◆ subscribe_to_parameters()

void subscribe_to_parameters ( ParameterObserverInterface obs)
inherited

Subscribe a parameter observer to watch over params

Definition at line 811 of file SGObject.cpp.

◆ supports_locking()

virtual bool supports_locking ( ) const
virtualinherited
Returns
whether this machine supports locking

Reimplemented in CKernelMachine.

Definition at line 305 of file Machine.h.

◆ train()

bool train (CFeatures * data = NULL)
virtualinherited

train machine

Parameters
data: training data (parameter can be avoided if distance or kernel-based classifiers are used and distance/kernels are initialized with train data). If flag is set, model features will be stored after training.
Returns
whether training was successful

Reimplemented in CRelaxedTree, CAutoencoder, CLinearMachine, CSGDQN, and COnlineSVMSGD.

Definition at line 43 of file Machine.cpp.

◆ train_gradient_descent()

bool train_gradient_descent (SGMatrix< float64_t > inputs,
SGMatrix< float64_t > targets)
protectedvirtual

trains the network using gradient descent

Definition at line 261 of file NeuralNetwork.cpp.

◆ train_lbfgs()

bool train_lbfgs (SGMatrix< float64_t > inputs,
SGMatrix< float64_t > targets)
protectedvirtual

trains the network using L-BFGS

Definition at line 357 of file NeuralNetwork.cpp.

◆ train_locked()

virtual bool train_locked (SGVector< index_t > indices)
virtualinherited

Trains a locked machine on a set of indices. Error if machine is not locked

NOT IMPLEMENTED

Parameters
indicesindex vector (of locked features) that is used for training
Returns
whether training was successful

Reimplemented in CKernelMachine.

Definition at line 248 of file Machine.h.

◆ train_machine()

bool train_machine (CFeatures * data = NULL)
protectedvirtual

trains the network

Reimplemented from CMachine.

Definition at line 229 of file NeuralNetwork.cpp.

◆ train_require_labels()

virtual bool train_require_labels ( ) const
protectedvirtualinherited

returns whether machine require labels for training

Reimplemented in COnlineLinearMachine, CKMeansBase, CHierarchical, CLinearLatentMachine, CVwConditionalProbabilityTree, CConditionalProbabilityTree, and CLibSVMOneClass.

Definition at line 397 of file Machine.h.

◆ transform()

CDenseFeatures< float64_t > * transform ( CDenseFeatures< float64_t > *  data)
virtual

Applies the network as a feature transformation

Forward-propagates the data through the network and returns the activations of the last layer

Parameters
data: Input features
Returns
Transformed features

Reimplemented in CAutoencoder, and CDeepAutoencoder.

Definition at line 222 of file NeuralNetwork.cpp.
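A brief usage sketch; net is assumed to be a trained CNeuralNetwork* and feats a prepared CDenseFeatures<float64_t>*:

```cpp
// Use the trained network as a fixed feature extractor: the returned features
// are the activations of the network's last layer.
CDenseFeatures<float64_t>* transformed = net->transform(feats);

// The transformed features can then be passed to any other machine.
SG_UNREF(transformed);
```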

◆ unref()

int32_t unref ( )
inherited

decrement reference counter and deallocate object if refcount is zero before or after decrementing it

Returns
reference count

Definition at line 200 of file SGObject.cpp.

◆ unset_generic()

void unset_generic ( )
inherited

unset generic type

this has to be called in classes specializing a template class

Definition at line 337 of file SGObject.cpp.

◆ update_parameter_hash()

void update_parameter_hash ( )
virtualinherited

Updates the hash of current parameter combination

Definition at line 282 of file SGObject.cpp.

Friends And Related Function Documentation

◆ CDeepBeliefNetwork

friend class CDeepBeliefNetwork
friend

Definition at line 112 of file NeuralNetwork.h.

Member Data Documentation

◆ io

SGIO* io
inherited

io

Definition at line 600 of file SGObject.h.

◆ m_adj_matrix

SGMatrix<bool> m_adj_matrix
protected

Describes the connections in the network: if there's a connection from layer i to layer j then m_adj_matrix(i,j) = 1.

Definition at line 596 of file NeuralNetwork.h.

◆ m_batch_size

int32_t m_batch_size
protected

number of train/test cases the network is expected to deal with. Default value is 1

Definition at line 618 of file NeuralNetwork.h.

◆ m_cancel_computation

std::atomic<bool> m_cancel_computation
protectedinherited

Cancel computation

Definition at line 448 of file Machine.h.

◆ m_data_locked

bool m_data_locked
protectedinherited

whether data is locked

Definition at line 445 of file Machine.h.

◆ m_dropout_hidden

float64_t m_dropout_hidden
protected

Probability that a hidden layer neuron will be dropped out. When using this, the recommended value is 0.5

default value 0.0 (no dropout)

For more details on dropout, see [paper](http://arxiv.org/abs/1207.0580) [Hinton, 2012]

Definition at line 642 of file NeuralNetwork.h.

◆ m_dropout_input

float64_t m_dropout_input
protected

Probability that an input layer neuron will be dropped out. When using this, a good value might be 0.2

default value 0.0 (no dropout)

For more details on dropout, see this [paper](http://arxiv.org/abs/1207.0580) [Hinton, 2012]

Definition at line 652 of file NeuralNetwork.h.

◆ m_epsilon

float64_t m_epsilon
protected

Convergence criterion: training stops when (E' - E)/E < epsilon, where E is the error at the current iteration and E' is the error at the previous iteration. Default value is 1.0e-5

Definition at line 667 of file NeuralNetwork.h.

◆ m_gd_error_damping_coeff

float64_t m_gd_error_damping_coeff
protected

Used to damp the error fluctuations when stochastic gradient descent is used. Damping is done according to: error_damped(i) = c*error(i) + (1-c)*error_damped(i-1), where c is the damping coefficient

If -1, the damping coefficient is automatically computed according to: c = 0.99*gd_mini_batch_size/training_set_size + 1e-2;

default value is -1

Definition at line 711 of file NeuralNetwork.h.

◆ m_gd_learning_rate

float64_t m_gd_learning_rate
protected

gradient descent learning rate, default value 0.1

Definition at line 682 of file NeuralNetwork.h.

◆ m_gd_learning_rate_decay

float64_t m_gd_learning_rate_decay
protected

Gradient descent learning rate decay. The learning rate is updated at each iteration i according to: alpha(i) = decay*alpha(i-1). Default value is 1.0 (no decay)

Definition at line 689 of file NeuralNetwork.h.

◆ m_gd_mini_batch_size

int32_t m_gd_mini_batch_size
protected

size of the mini-batch used during gradient descent training; if 0, full-batch training is performed. Default value is 0

Definition at line 679 of file NeuralNetwork.h.

◆ m_gd_momentum

float64_t m_gd_momentum
protected

gradient descent momentum multiplier

default value is 0.9

For more details on momentum, see this [paper](http://jmlr.org/proceedings/papers/v28/sutskever13.html) [Sutskever, 2013]

Definition at line 699 of file NeuralNetwork.h.

◆ m_gradient_parameters

Parameter* m_gradient_parameters
inherited

parameters wrt which we can compute gradients

Definition at line 615 of file SGObject.h.

◆ m_hash

uint32_t m_hash
inherited

Hash of parameter values

Definition at line 618 of file SGObject.h.

◆ m_index_offsets

SGVector<int32_t> m_index_offsets
protected

offsets specifying where each layer's parameters and parameter gradients are stored, i.e. layer i's parameters are stored at m_params + m_index_offsets[i]

Definition at line 613 of file NeuralNetwork.h.

◆ m_is_training

bool m_is_training
protected

True if the network is currently being trained. Initial value is false

Definition at line 623 of file NeuralNetwork.h.

◆ m_l1_coefficient

float64_t m_l1_coefficient
protected

L1 Regularization coeff, default value is 0.0

Definition at line 632 of file NeuralNetwork.h.

◆ m_l2_coefficient

float64_t m_l2_coefficient
protected

L2 Regularization coeff, default value is 0.0

Definition at line 629 of file NeuralNetwork.h.

◆ m_labels

CLabels* m_labels
protectedinherited

labels

Definition at line 436 of file Machine.h.

◆ m_layers

CDynamicObjectArray* m_layers
protected

network's layers

Definition at line 591 of file NeuralNetwork.h.

◆ m_max_norm

float64_t m_max_norm
protected

Maximum allowable L2 norm for a neuron's weights. When using this, a good value might be 15

default value -1 (max-norm regularization disabled)

Definition at line 659 of file NeuralNetwork.h.

◆ m_max_num_epochs

int32_t m_max_num_epochs
protected

maximum number of iterations over the training set. If 0, training will continue until convergence. Default value is 0

Definition at line 673 of file NeuralNetwork.h.

◆ m_max_train_time

float64_t m_max_train_time
protectedinherited

maximum training time

Definition at line 433 of file Machine.h.

◆ m_model_selection_parameters

Parameter* m_model_selection_parameters
inherited

model selection parameters

Definition at line 612 of file SGObject.h.

◆ m_mutex

std::mutex m_mutex
protectedinherited

Mutex used to pause threads

Definition at line 457 of file Machine.h.

◆ m_num_inputs

int32_t m_num_inputs
protected

number of neurons in the input layer

Definition at line 585 of file NeuralNetwork.h.

◆ m_num_layers

int32_t m_num_layers
protected

number of layers

Definition at line 588 of file NeuralNetwork.h.

◆ m_optimization_method

ENNOptimizationMethod m_optimization_method
protected

Optimization method, default is NNOM_LBFGS

Definition at line 626 of file NeuralNetwork.h.

◆ m_param_regularizable

SGVector<bool> m_param_regularizable
protected

Array that specifies which parameters are to be regularized. This is used to turn off regularization for bias parameters

Definition at line 607 of file NeuralNetwork.h.

◆ m_parameters

Parameter* m_parameters
inherited

parameters

Definition at line 609 of file SGObject.h.

◆ m_params

SGVector<float64_t> m_params
protected

array where all the parameters of the network are stored

Definition at line 602 of file NeuralNetwork.h.

◆ m_pause_computation

std::condition_variable m_pause_computation
protectedinherited

Conditional variable to make threads wait

Definition at line 454 of file Machine.h.

◆ m_pause_computation_flag

std::atomic<bool> m_pause_computation_flag
protectedinherited

Pause computation flag

Definition at line 451 of file Machine.h.

◆ m_solver_type

ESolverType m_solver_type
protectedinherited

solver type

Definition at line 439 of file Machine.h.

◆ m_store_model_features

bool m_store_model_features
protectedinherited

whether model features should be stored after training

Definition at line 442 of file Machine.h.

◆ m_total_num_parameters

int32_t m_total_num_parameters
protected

total number of parameters in the network

Definition at line 599 of file NeuralNetwork.h.

◆ parallel

Parallel* parallel
inherited

parallel

Definition at line 603 of file SGObject.h.

◆ version

Version* version
inherited

version

Definition at line 606 of file SGObject.h.


The documentation for this class was generated from the following files:
 
NeuralNetwork.h
NeuralNetwork.cpp
SHOGUN Machine Learning Toolbox - Documentation