GestureRecognitionToolkit  Version: 0.1.0
The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source, C++ machine learning library for real-time gesture recognition.
File List
Here is a list of all documented files with brief descriptions:
 AdaBoost.cpp
 AdaBoost.h - This class contains the AdaBoost classifier. AdaBoost (Adaptive Boosting) is a powerful classifier that works well on both basic and more complex recognition problems
 AdaBoostClassModel.h - This file implements a container for an AdaBoost class model
 ANBC.cpp
 ANBC.h - This class implements the Adaptive Naive Bayes Classifier algorithm. The Adaptive Naive Bayes Classifier (ANBC) is a naive but powerful classifier that works very well on both basic and more complex recognition problems
 ANBC_Model.cpp
 ANBC_Model.h - This class implements a container for an ANBC model
 BAG.cpp
 BAG.h - This class implements the bootstrap aggregator classifier. Bootstrap aggregating (bagging) is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of other machine learning algorithms. Bagging also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, the BAG class can be used with any type of GRT classifier. Bagging is a special case of the model averaging approach
 BernoulliRBM.cpp
 BernoulliRBM.h - This class implements a Bernoulli Restricted Boltzmann machine
 Cholesky.cpp
 Cholesky.h - This code is based on the LU Decomposition code from Numerical Recipes (3rd Edition)
 CircularBuffer.h - The CircularBuffer class provides a data structure for creating a dynamic circular buffer (also known as a cyclic buffer or a ring buffer). The main advantage of a circular buffer is that it does not need to have its elements shuffled around each time a new element is added. The circular buffer therefore works well for FIFO (first in first out) buffers
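For example, a fixed-size buffer of the most recent samples could be used as follows (a minimal sketch; the Float type and the resize(), push_back(), getSize(), and operator[] calls are assumptions about the CircularBuffer interface):

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        // A ring buffer that holds the 5 most recent samples
        CircularBuffer< Float > buffer;
        buffer.resize( 5 );

        // Push 10 samples; once the buffer is full the oldest value is simply
        // overwritten, no elements need to be shuffled around
        for(UINT i=0; i<10; i++){
            buffer.push_back( Float(i) );
        }

        // Read back the buffered samples
        for(UINT i=0; i<buffer.getSize(); i++){
            std::cout << buffer[i] << " ";
        }
        std::cout << std::endl;
        return 0;
    }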
 ClassificationData.cpp
 ClassificationData.h - The ClassificationData is the main data structure for recording, labeling, managing, saving, and loading training data for supervised learning problems
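A minimal sketch of recording labelled samples with this structure; the dimension count, values, and file name are placeholders, and the save() call is an assumption about the API:

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        // Record labelled training samples with 3 input dimensions
        ClassificationData trainingData;
        trainingData.setNumDimensions( 3 );

        // Add one sample for class label 1 (the values are placeholders)
        VectorFloat sample( 3 );
        sample[0] = 1.0; sample[1] = 2.0; sample[2] = 3.0;
        trainingData.addSample( 1, sample );

        // Save the dataset so it can be reloaded later (save() is assumed here;
        // some versions expose this as saveDatasetToFile())
        trainingData.save( "TrainingData.grt" );
        return 0;
    }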
 ClassificationDataStream.cpp
 ClassificationDataStream.h - The ClassificationDataStream is the main data structure for recording, labeling, managing, saving, and loading datasets that can be used to test the continuous classification abilities of the GRT supervised learning algorithms
 ClassificationResult.h - The ClassificationResult class provides a data structure for storing the results of a classification test
 ClassificationSample.cpp
 ClassificationSample.h - This class stores the class label and raw data for a single labelled classification sample
 Classifier.cpp
 Classifier.h - This is the main base class that all GRT Classification algorithms should inherit from
 ClassLabelChangeFilter.cpp
 ClassLabelChangeFilter.h - The Class Label Change Filter signals when the predicted output of a classifier changes. For instance, if the output stream of a classifier was {1,1,1,1,2,2,2,2,3,3}, then the output of the filter would be {1,0,0,0,2,0,0,0,3,0}. This module is useful if you want to debounce a gesture and only care about when the gesture label changes
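The mapping described above can be reproduced with a small stand-alone sketch; this mirrors the filter's behaviour and is not the GRT class itself:

    #include <iostream>
    #include <vector>

    // Emit a label only when it differs from the previous prediction, otherwise emit 0
    int main(){
        std::vector<unsigned int> stream = {1,1,1,1,2,2,2,2,3,3};
        unsigned int lastLabel = 0;
        for(unsigned int label : stream){
            unsigned int output = (label != lastLabel) ? label : 0;
            lastLabel = label;
            std::cout << output << " ";   // prints: 1 0 0 0 2 0 0 0 3 0
        }
        std::cout << std::endl;
        return 0;
    }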
 ClassLabelFilter.cpp
 ClassLabelFilter.h - The Class Label Filter is a useful post-processing module which can remove erroneous or sporadic prediction spikes that may be made by a classifier on a continuous input stream of data
 ClassLabelTimeoutFilter.cpp
 ClassLabelTimeoutFilter.h - The Class Label Timeout Filter is a useful post-processing module which debounces a gesture (i.e. it stops a single gesture from being recognized multiple times over a short time frame). For instance, when a user performs a gesture, such as a swipe, the recognition system may recognize this single gesture several times because the user's movements are being sensed at a high sample rate (i.e. 100Hz). The Class Label Timeout Filter can be used to ensure that a gesture, such as the previous swipe example, is only recognized once within any given timespan
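A simplified stand-alone sketch of the debouncing idea (it counts updates rather than milliseconds and mirrors the described behaviour, not the GRT class itself):

    #include <iostream>
    #include <vector>

    int main(){
        std::vector<unsigned int> stream = {1,1,1,1,0,0,1,1,0,0};
        const unsigned int timeoutTicks = 5;
        unsigned int ticksSinceEmit = timeoutTicks;   // let the first gesture through

        for(unsigned int label : stream){
            unsigned int output = 0;
            if( label != 0 && ticksSinceEmit >= timeoutTicks ){
                output = label;          // report the gesture once
                ticksSinceEmit = 0;      // then suppress it until the timeout expires
            }
            ticksSinceEmit++;
            std::cout << output << " ";  // prints: 1 0 0 0 0 0 1 0 0 0
        }
        std::cout << std::endl;
        return 0;
    }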
 ClassTracker.h
 Clusterer.cpp
 Clusterer.h - This is the main base class that all GRT Clustering algorithms should inherit from
 ClusterTree.cpp
 ClusterTree.h - This class implements a Cluster Tree. This can be used to automatically build a cluster model (where each leaf node in the tree is given a unique cluster label) and then predict the best cluster label for a new input sample
 ClusterTreeNode.h - This file implements a ClusterTreeNode, which is a specific type of node used for a ClusterTree
 CommandLineParser.h
 Context.cpp
 Context.h - This is the main base class that all GRT Context algorithms should inherit from
 ContinuousHiddenMarkovModel.cpp
 ContinuousHiddenMarkovModel.h - This class implements a continuous Hidden Markov Model
 DataType.h
 DeadZone.cpp
 DeadZone.h - The DeadZone class sets any values in the input signal that fall within the dead-zone region to zero. Any values outside of the dead-zone region will be offset by the dead zone's lower limit and upper limit
 DebugLog.cpp
 DebugLog.h
 DecisionStump.cpp
 DecisionStump.h - This class implements a DecisionStump, which is a single node of a DecisionTree
 DecisionTree.cpp
 DecisionTree.h - This class implements a basic Decision Tree classifier. Decision Trees are conceptually simple classifiers that work well on even complex classification tasks. Decision Trees partition the feature space into a set of rectangular regions, classifying a new datum by finding which region it belongs to
 DecisionTreeClusterNode.cpp
 DecisionTreeClusterNode.h - This file implements a DecisionTreeClusterNode, which is a specific type of node used for a DecisionTree
 DecisionTreeNode.cpp
 DecisionTreeNode.h - This file implements a DecisionTreeNode, which is a specific base node used for a DecisionTree
 DecisionTreeThresholdNode.cpp
 DecisionTreeThresholdNode.h - This file implements a DecisionTreeThresholdNode, which is a specific type of node used for a DecisionTree
 DecisionTreeTripleFeatureNode.cpp
 DecisionTreeTripleFeatureNode.h - This file implements a DecisionTreeTripleFeatureNode, which is a specific type of node used for a DecisionTree
 Derivative.cpp
 Derivative.h - The Derivative class computes either the first or second order derivative of the input signal
 DiscreteHiddenMarkovModel.cpp
 DiscreteHiddenMarkovModel.h - This class implements a discrete Hidden Markov Model
 DoubleMovingAverageFilter.cpp
 DoubleMovingAverageFilter.h - This class implements a double moving average filter
 DTW.cpp
 DTW.h - This class implements Dynamic Time Warping. Dynamic Time Warping (DTW) is a powerful classifier that works very well for recognizing temporal gestures. Temporal gestures can be defined as a cohesive sequence of movements that occur over a variable time period. The DTW algorithm is a supervised learning algorithm that can be used to classify any type of N-dimensional, temporal signal. The DTW algorithm works by creating a template time series for each gesture that needs to be recognized, and then warping the realtime signals to each of the templates to find the best match. The DTW algorithm also computes rejection thresholds that enable the algorithm to automatically reject sensor values that are not the K gestures the algorithm has been trained to recognize (without being explicitly told during the prediction phase if a gesture is, or is not, being performed). You can find out more about the DTW algorithm in Gillian, N. (2011) Recognition of multivariate temporal musical gestures using n-dimensional dynamic time warping
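A minimal usage sketch of the DTW classifier, assuming the common GRT train/predict interface; the dataset file name is hypothetical and the load() and getData() calls are assumptions about the data-structure API:

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        // Load a labelled time-series dataset (hypothetical file; load() may be
        // named loadDatasetFromFile() in some versions)
        TimeSeriesClassificationData trainingData;
        if( !trainingData.load( "GestureTrainingData.grt" ) ) return 1;

        // Train one DTW template per gesture class, with null rejection enabled so
        // movements that do not match any trained gesture can be rejected
        DTW dtw;
        dtw.enableNullRejection( true );
        if( !dtw.train( trainingData ) ) return 1;

        // Predict the class of a new time series (here we simply reuse a training sample)
        MatrixFloat timeseries = trainingData[0].getData();
        if( dtw.predict( timeseries ) ){
            std::cout << "Predicted gesture: " << dtw.getPredictedClassLabel() << std::endl;
        }
        return 0;
    }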
 EigenvalueDecomposition.cpp
 EigenvalueDecomposition.h
 ErrorLog.cpp
 ErrorLog.h
 EvolutionaryAlgorithm.h - This class implements a template based EvolutionaryAlgorithm
 FastFourierTransform.cpp
 FastFourierTransform.h
 FeatureExtraction.cpp
 FeatureExtraction.h - This is the main base class that all GRT Feature Extraction algorithms should inherit from
 FFT.cpp
 FFT.h - The FFT class computes the Fourier transform of an N dimensional signal using a Fast Fourier Transform algorithm
 FFTFeatures.cpp
 FFTFeatures.h - This class implements the FFTFeatures feature extraction module
 FileParser.h
 FiniteStateMachine.cpp
 FiniteStateMachine.h
 FIRFilter.cpp
 FIRFilter.h - This class implements a Finite Impulse Response (FIR) Filter
 FSMParticle.h
 FSMParticleFilter.h
 Gate.cpp
 Gate.h
 GaussianMixtureModels.cpp
 GaussianMixtureModels.h - This class implements a Gaussian Mixture Model clustering algorithm. The code is based on the GMM code from Numerical Recipes (3rd Edition)
 GestureRecognitionPipeline.cpp
 GestureRecognitionPipeline.h - This file contains the GestureRecognitionPipeline class
 GMM.cpp
 GMM.h - This class implements the Gaussian Mixture Model Classifier algorithm. The Gaussian Mixture Model Classifier (GMM) is a basic but useful classification algorithm that can be used to classify an N-dimensional signal
 GridSearch.h
 GRT.h - This is the main GRT header. You should include this to access all the GRT classes in your project
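For example, a minimal program only needs this single include (the exact include path depends on how the library is installed):

    // main.cpp - a minimal GRT program
    #include <GRT/GRT.h>   // assumes the headers are installed under a GRT/ directory
    #include <iostream>
    using namespace GRT;

    int main(){
        // Every GRT class is now available: pipelines, classifiers, data structures, etc.
        GestureRecognitionPipeline pipeline;
        pipeline.setClassifier( ANBC() );
        std::cout << "Pipeline ready" << std::endl;
        return 0;
    }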
 GRTBase.cpp
 GRTBase.h - This file contains the GRTBase class. This is the core base class for all the GRT modules
 GRTCommon.h
 GRTException.h
 GRTTypedefs.h
 GRTVersionInfo.h
 HierarchicalClustering.cpp
 HierarchicalClustering.h - This class implements a basic Hierarchical Clustering algorithm
 HighPassFilter.cpp
 HighPassFilter.h - This class implements a High Pass Filter
 HMM.cpp
 HMM.h - This class acts as the main interface for using a Hidden Markov Model
 HMMEnums.h - This file defines the enumerations used by the HMM classes
 IndexedDouble.h
 Individual.h
 InfoLog.cpp
 InfoLog.h
 KMeans.cpp
 KMeans.h - This class implements the KMeans clustering algorithm
 KMeansFeatures.cpp
 KMeansFeatures.h
 KMeansQuantizer.cpp
 KMeansQuantizer.h - The KMeansQuantizer module quantizes the N-dimensional input vector to a 1-dimensional discrete value. This value will be between [0 K-1], where K is the number of clusters used to create the quantization model. Before you use the KMeansQuantizer, you need to train a quantization model. To do this, you select the number of clusters you want your quantizer to have and then train it with example data in one of the supported GRT data formats
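A sketch of that workflow, assuming a constructor that takes the number of clusters and train()/quantize() methods; the file name and K value are placeholders:

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        // Load some data to build the quantization model (the file name is hypothetical)
        ClassificationData trainingData;
        if( !trainingData.load( "QuantizerTrainingData.grt" ) ) return 1;

        // Create a quantizer with K = 8 clusters and train the quantization model
        const UINT K = 8;
        KMeansQuantizer quantizer( K );
        if( !quantizer.train( trainingData ) ) return 1;

        // Quantize a new N-dimensional sample to a single discrete value in [0 K-1]
        VectorFloat sample = trainingData[0].getSample();
        UINT quantizedValue = quantizer.quantize( sample );
        std::cout << "Quantized value: " << quantizedValue << std::endl;
        return 0;
    }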
 KNN.cpp
 KNN.h - This class implements the K-Nearest Neighbor classification algorithm (http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm). KNN is a simple but powerful classifier, based on finding the closest K training examples in the feature space for the new input vector. The KNN algorithm is amongst the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common amongst its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of its nearest neighbor
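A minimal usage sketch, assuming the common GRT classifier interface; the K value, dataset file name, and input vector are placeholders:

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        ClassificationData trainingData;
        if( !trainingData.load( "TrainingData.grt" ) ) return 1;   // hypothetical file

        // Create a KNN classifier with K = 5 neighbours and train it
        KNN knn( 5 );
        if( !knn.train( trainingData ) ) return 1;

        // Classify a new sample by a majority vote of its 5 nearest neighbours
        // (here we simply reuse a training sample as the input vector)
        VectorFloat inputVector = trainingData[0].getSample();
        if( knn.predict( inputVector ) ){
            std::cout << "Predicted class: " << knn.getPredictedClassLabel() << std::endl;
        }
        return 0;
    }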
 LDA.cpp
 LDA.h - This class implements the Linear Discriminant Analysis Classification algorithm
 LeakyIntegrator.cpp
 LeakyIntegrator.h - The LeakyIntegrator class computes the following signal: y = y*z + x, where x is the input, y is the output, and z is the leak rate
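The recurrence is easy to see in a small stand-alone sketch (the leak rate and input values are arbitrary; this mirrors the formula, not the GRT class):

    #include <iostream>

    int main(){
        const double z = 0.9;   // leak rate (arbitrary value for illustration)
        double y = 0.0;         // integrator output

        // A short burst of input followed by silence: the output decays by a factor z each step
        const double x[10] = { 1, 1, 1, 0, 0, 0, 0, 0, 0, 0 };
        for(int i=0; i<10; i++){
            y = y * z + x[i];   // y = y*z + x
            std::cout << y << " ";
        }
        std::cout << std::endl;
        return 0;
    }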
 libsvm.cpp
 libsvm.h
 LinearLeastSquares.h - This class implements a basic Linear Least Squares algorithm
 LinearRegression.cpp
 LinearRegression.h - This class implements the Linear Regression algorithm. Linear Regression is a simple but effective regression algorithm that can map an N-dimensional signal to a 1-dimensional signal
 Log.h
 LogisticRegression.cpp
 LogisticRegression.h - This class implements the Logistic Regression algorithm. Logistic Regression is a simple but effective regression algorithm that can map an N-dimensional signal to a 1-dimensional signal
 LowPassFilter.cpp
 LowPassFilter.h - This class implements a low pass filter based on an exponential moving average filter: https://en.wikipedia.org/wiki/Exponential_smoothing
 LUDecomposition.cpp
 LUDecomposition.h
 Matrix.h - The Matrix class is a basic class for storing any type of data. This class is a template and can therefore be used with any generic data type
 MatrixFloat.cpp
 MatrixFloat.h
 MeanShift.h - This class implements the MeanShift clustering algorithm
 MedianFilter.cpp
 MedianFilter.h - The MedianFilter implements a simple median filter
 MinDist.cpp
 MinDist.h - This class implements the MinDist classifier algorithm
 MinDistModel.cpp
 MinDistModel.h - This class implements a container for a MinDist class model
 MinMax.h
 MixtureModel.h - This class implements a MixtureModel, which is a container for holding a class model for the GRT::GMM class
 MLBase.cpp
 MLBase.h - This is the main base class that all GRT machine learning algorithms should inherit from
 MLP.cpp
 MLP.h - This class implements a Multilayer Perceptron Artificial Neural Network
 MovementDetector.cpp
 MovementDetector.h - This class implements a simple movement detection algorithm. This can be used to detect periods of 'low movement' and 'high movement' to act as additional context for other GRT algorithms
 MovementIndex.cpp
 MovementIndex.h - This class implements the MovementIndex feature module. The MovementIndex module computes the amount of movement or variation within an N-dimensional signal over a given time window. The MovementIndex class is good for extracting features that describe how much change is occurring in an N-dimensional signal over time. An example application might be to use the MovementIndex in combination with one of the GRT classification algorithms to determine if an object is being moved or held still
 MovementTrajectoryFeatures.cpp
 MovementTrajectoryFeatures.h - This class implements the MovementTrajectory feature extraction module
 MovingAverageFilter.cpp
 MovingAverageFilter.h - The MovingAverageFilter implements a low pass moving average filter
 MultidimensionalRegression.cpp
 MultidimensionalRegression.h - This class implements the Multidimensional Regression meta algorithm. Multidimensional Regression acts as a meta-algorithm for regression that allows several one-dimensional regression algorithms (such as Linear Regression) to be combined together, so an M-dimensional signal can be mapped to an N-dimensional signal. This works by training N separate regression algorithms (one for each dimension), each with an M-dimensional input
 Neuron.cpp
 Neuron.h - This class implements a Neuron that is used by the Multilayer Perceptron
 Node.cpp
 Node.h - This class contains the main Node base class
 Observer.h
 ObserverManager.h
 Particle.h
 ParticleClassifier.cpp
 ParticleClassifier.h
 ParticleClassifierParticleFilter.h
 ParticleFilter.h - This class implements a template based ParticleFilter. The user is required to implement the predict and update functions for their specific task
 ParticleSwarmOptimization.h - This class implements a template based ParticleSwarmOptimization algorithm
 PeakDetection.cpp
 PeakDetection.h
 PostProcessing.cpp
 PostProcessing.h - This is the main base class that all GRT PostProcessing algorithms should inherit from
 PreProcessing.cpp
 PreProcessing.h - This is the main base class that all GRT PreProcessing algorithms should inherit from
 PrincipalComponentAnalysis.cpp
 PrincipalComponentAnalysis.h - This class runs the Principal Component Analysis (PCA) algorithm, a dimensionality reduction algorithm that projects an [M N] matrix (where M==samples and N==dimensions) onto a new K dimensional subspace, where K is normally much less than N
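A usage sketch; the computeFeatureVector() and project() method names and the maximum-variance argument are assumptions about the PCA interface, and the data values are placeholders:

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        // Build an [M N] data matrix (M = 100 samples, N = 10 dimensions) with placeholder values
        MatrixFloat data( 100, 10 );
        for(UINT i=0; i<data.getNumRows(); i++)
            for(UINT j=0; j<data.getNumCols(); j++)
                data[i][j] = Float( i + j );

        // Compute the principal components that retain ~95% of the variance,
        // then project the data onto the resulting K-dimensional subspace
        PrincipalComponentAnalysis pca;
        if( !pca.computeFeatureVector( data, 0.95 ) ) return 1;

        MatrixFloat projectedData;
        if( !pca.project( data, projectedData ) ) return 1;

        std::cout << "Projected to " << projectedData.getNumCols() << " dimensions" << std::endl;
        return 0;
    }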
 PSOParticle.h
 RadialBasisFunction.cpp
 RadialBasisFunction.h - This class implements a Radial Basis Function Weak Classifier. The Radial Basis Function (RBF) class fits an RBF to the weighted training data so as to maximize the number of positive training samples that are inside a specific region of the RBF (this region is set by the GRT::RadialBasisFunction::positiveClassificationThreshold parameter). After the RBF has been trained, it will output 1 if the input data is inside the RBF positive classification region, otherwise it will output 0
 Random.h - This file contains the Random class, a useful wrapper for generating cross platform random functions. This includes functions for uniform distributions (both integer and Float) and Gaussian distributions
 RandomForests.cpp
 RandomForests.h
 RangeTracker.cpp
 RangeTracker.h - The RangeTracker can be used to keep track of the expected ranges that might occur in a dataset. These ranges can then be used to set the external ranges of a dataset for several of the GRT DataStructures
 RBMQuantizer.cpp
 RBMQuantizer.h - The RBMQuantizer module quantizes the N-dimensional input vector to a 1-dimensional discrete value. This value will be between [0 K-1], where K is the number of clusters used to create the quantization model. Before you use the RBMQuantizer, you need to train a quantization model. To do this, you select the number of clusters you want your quantizer to have and then train it with example data in one of the supported GRT data formats
 Regressifier.cpp
 Regressifier.h - This is the main base class that all GRT Regression algorithms should inherit from
 RegressionData.cpp
 RegressionData.h - The RegressionData is the main data structure for recording, labeling, managing, saving, and loading datasets that can be used to train and test the GRT supervised regression algorithms
 RegressionSample.cpp
 RegressionSample.h - This class stores the input vector and target vector for a single labelled regression instance
 RegressionTree.cpp
 RegressionTree.h - This class implements a basic Regression Tree
 RegressionTreeNode.h - This file implements a RegressionTreeNode, which is a specific type of node used for a RegressionTree
 SavitzkyGolayFilter.cpp
 SavitzkyGolayFilter.h - This class implements a Savitzky-Golay filter. This code is based on the Savitzky-Golay filter code from Numerical Recipes (3rd Edition)
 SelfOrganizingMap.cpp
 SelfOrganizingMap.h - This class implements the Self Organizing Map clustering algorithm
 Softmax.cpp
 Softmax.h - The Softmax Classifier is a simple but effective classifier (based on logistic regression) that works well on problems that are linearly separable
 SoftmaxModel.h - This file implements a container for a Softmax model
 SOMQuantizer.cpp
 SOMQuantizer.h - The SOMQuantizer module quantizes the N-dimensional input vector to a 1-dimensional discrete value. This value will be between [0 K-1], where K is the number of clusters used to create the quantization model. Before you use the SOMQuantizer, you need to train a quantization model. To do this, you select the number of clusters you want your quantizer to have and then train it with example data in one of the supported GRT data formats
 SVD.cpp
 SVD.h
 SVM.cpp
 SVM.h - This class acts as a front end for the LIBSVM library (http://www.csie.ntu.edu.tw/~cjlin/libsvm/). It implements a Support Vector Machine (SVM) classifier, a powerful classifier that works well on a wide range of classification problems, particularly on more complex problems that other classifiers (such as the KNN, GMM or ANBC algorithms) might not be able to solve
 SwipeDetector.cpp
 SwipeDetector.h - This class implements a basic swipe detection classification algorithm
 TestingLog.cpp
 TestingLog.h
 TestInstanceResult.h - The TestInstanceResult class provides a data structure for storing the results of a classification or regression test instance
 TestResult.h - The TestResult class provides a data structure for storing the results of a classification or regression test
 ThreadPool.cpp
 ThreadPool.h - The ThreadPool class implements a flexible interface for performing a large number of batch tasks. You need to build the GRT with GRT_CXX11_ENABLED, otherwise the ThreadPool class will be empty (as it requires C++11 support)
 ThresholdCrossingDetector.cpp
 ThresholdCrossingDetector.h - This class implements a threshold crossing detector
 TimeDomainFeatures.cpp
 TimeDomainFeatures.h - This class implements the TimeDomainFeatures feature extraction module
 Timer.h
 TimeseriesBuffer.cpp
 TimeseriesBuffer.h - This class implements the TimeseriesBuffer feature extraction module
 TimeSeriesClassificationData.cpp
 TimeSeriesClassificationData.h - The TimeSeriesClassificationData is the main data structure for recording, labeling, managing, saving, and loading training data for supervised temporal learning problems. Unlike the ClassificationData, in which each sample consists of a single N-dimensional datum, a TimeSeriesClassificationData sample consists of an N-dimensional time series of length M. The length of each time series sample (i.e. M) can be different for each datum in the dataset
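A sketch of how variable-length time-series samples might be added, assuming setNumDimensions() and addSample() follow the same pattern as the other GRT data structures (the matrix values are left unset in this sketch):

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        TimeSeriesClassificationData data;
        data.setNumDimensions( 3 );   // N = 3 dimensions per sample vector

        // First sample: class 1, a time series of length M = 20
        MatrixFloat shortGesture( 20, 3 );
        data.addSample( 1, shortGesture );

        // Second sample: class 2, a longer time series (M = 50); lengths can differ per sample
        MatrixFloat longGesture( 50, 3 );
        data.addSample( 2, longGesture );

        std::cout << "Samples: " << data.getNumSamples() << std::endl;
        return 0;
    }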
 TimeSeriesClassificationSample.cpp
 TimeSeriesClassificationSample.h - This class stores the timeseries data for a single labelled timeseries classification sample
 TimeSeriesClassificationSampleTrimmer.cpp
 TimeSeriesClassificationSampleTrimmer.h - This class provides a useful tool to automatically trim timeseries data
 TimeSeriesPositionTracker.h - This class can be used to track the class label, start index, and end index for labelled data
 TimeStamp.h
 TrainingDataRecordingTimer.cpp
 TrainingDataRecordingTimer.h - The TrainingDataRecordingTimer is a tool to help record your training data
 TrainingLog.cpp
 TrainingLog.h
 TrainingResult.h - The TrainingResult class provides a data structure for storing the results of a classification or regression training iteration
 Tree.cpp
 Tree.h - This class implements the base class Tree used for the DecisionTree, RegressionTree and ClusterTree
 UnlabelledData.cpp
 UnlabelledData.h - The UnlabelledData class is the main data container for supporting unsupervised learning
 Util.cpp
 Util.h - This file contains the Util class, a wrapper for a number of generic functions that are used throughout the GRT. This includes functions for scaling data, finding the minimum or maximum values in a double or UINT vector, etc. Many of these functions are static, which enables you to use them without having to create a new Util instance; for example, you can directly call Util::sleep( 1000 ); to use the sleep function
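For example (Util::sleep() is taken from the description above; the Util::scale() call and its argument order are assumptions about the static helpers):

    #include <GRT/GRT.h>   // include path is an assumption; adjust for your install
    #include <iostream>
    using namespace GRT;

    int main(){
        // Pause the current thread for one second, without creating a Util instance
        Util::sleep( 1000 );

        // Map a value from the range [0 1023] to [0 1] (function name and argument
        // order are assumptions about the Util API)
        Float scaled = Util::scale( 512.0, 0.0, 1023.0, 0.0, 1.0 );
        std::cout << "Scaled value: " << scaled << std::endl;
        return 0;
    }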
 Vector.h - The Vector class is a basic class for storing any type of data. The default Vector is an interface for std::vector, but the idea is that this can easily be changed when needed (e.g., when running the GRT on an embedded device with limited STL support). This class is a template and can therefore be used with any generic data type
 VectorFloat.cpp
 VectorFloat.h
 WarningLog.cpp
 WarningLog.h
 WeakClassifier.cpp
 WeakClassifier.h - This is the main base class for all GRT WeakClassifiers
 WeightedAverageFilter.cpp
 WeightedAverageFilter.h - The WeightedAverageFilter implements a weighted average filter that gives a larger weight to more recent samples, and a smaller weight to older samples
 ZeroCrossingCounter.cpp
 ZeroCrossingCounter.h