GestureRecognitionToolkit
Version: 0.1.0
The Gesture Recognition Toolkit (GRT) is a cross-platform, open-source C++ machine learning library for real-time gesture recognition.
File | Description |
AdaBoost.cpp | |
AdaBoost.h | This class contains the AdaBoost classifier. AdaBoost (Adaptive Boosting) is a powerful classifier that works well on both basic and more complex recognition problems |
AdaBoostClassModel.h | This file implements a container for an AdaBoost class model |
ANBC.cpp | |
ANBC.h | This class implements the Adaptive Naive Bayes Classifier algorithm. The Adaptive Naive Bayes Classifier (ANBC) is a naive but powerful classifier that works very well on both basic and more complex recognition problems |
ANBC_Model.cpp | |
ANBC_Model.h | This class implements a container for an ANBC model |
BAG.cpp | |
BAG.h | This class implements the bootstrap aggregator classifier. Bootstrap aggregating (bagging) is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of other machine learning algorithms. Bagging also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, the BAG class can be used with any type of GRT classifier. Bagging is a special case of model averaging |
BernoulliRBM.cpp | |
BernoulliRBM.h | This class implements a Bernoulli Restricted Boltzmann machine |
Cholesky.cpp | |
Cholesky.h | This code is based on the LU Decomposition code from Numerical Recipes (3rd Edition) |
CircularBuffer.h | The CircularBuffer class provides a data structure for creating a dynamic circular buffer (also known as a cyclic buffer or a ring buffer). The main advantage of a circular buffer is that it does not need to have its elements shuffled around each time a new element is added. The circular buffer therefore works well for FIFO (first in, first out) buffers (a minimal ring-buffer sketch appears after this file list) |
ClassificationData.cpp | |
ClassificationData.h | The ClassificationData is the main data structure for recording, labeling, managing, saving, and loading training data for supervised learning problems |
ClassificationDataStream.cpp | |
ClassificationDataStream.h | The ClassificationDataStream is the main data structure for recording, labeling, managing, saving, and loading datasets that can be used to test the continuous classification abilities of the GRT supervised learning algorithms |
ClassificationResult.h | The ClassificationResult class provides a data structure for storing the results of a classification test |
ClassificationSample.cpp | |
ClassificationSample.h | This class stores the class label and raw data for a single labelled classification sample |
Classifier.cpp | |
Classifier.h | This is the main base class that all GRT Classification algorithms should inherit from |
ClassLabelChangeFilter.cpp | |
ClassLabelChangeFilter.h | The Class Label Change Filter signals when the predicted output of a classifier changes. For instance, if the output stream of a classifier was {1,1,1,1,2,2,2,2,3,3}, then the output of the filter would be {1,0,0,0,2,0,0,0,3,0}. This module is useful if you want to debounce a gesture and only care about when the gesture label changes (a standalone sketch of this filtering step appears after this file list) |
ClassLabelFilter.cpp | |
ClassLabelFilter.h | The Class Label Filter is a useful post-processing module which can remove erroneous or sporadic prediction spikes that may be made by a classifier on a continuous input stream of data |
ClassLabelTimeoutFilter.cpp | |
ClassLabelTimeoutFilter.h | The Class Label Timeout Filter is a useful post-processing module that debounces a gesture (i.e. it stops a single gesture from being recognized multiple times over a short time frame). For instance, when a user performs a gesture, such as a swipe, the recognition system may recognize that single gesture several times because the user's movements are being sensed at a high sample rate (e.g. 100Hz). The Class Label Timeout Filter can be used to ensure that a gesture, such as the swipe in the previous example, is only recognized once within any given timespan |
ClassTracker.h | |
Clusterer.cpp | |
Clusterer.h | This is the main base class that all GRT Clustering algorithms should inherit from |
ClusterTree.cpp | |
ClusterTree.h | This class implements a Cluster Tree. This can be used to automatically build a cluster model (where each leaf node in the tree is given a unique cluster label) and then predict the best cluster label for a new input sample |
ClusterTreeNode.h | This file implements a ClusterTreeNode, which is a specific type of node used for a ClusterTree |
CommandLineParser.h | |
Context.cpp | |
Context.h | This is the main base class that all GRT Context algorithms should inherit from |
ContinuousHiddenMarkovModel.cpp | |
ContinuousHiddenMarkovModel.h | This class implements a continuous Hidden Markov Model |
DataType.h | |
DeadZone.cpp | |
DeadZone.h | The DeadZone class sets any values in the input signal that fall within the dead-zone region to zero. Any values outside of the dead-zone region will be offset by the dead zone's lower limit or upper limit, depending on which side of the region they fall |
DebugLog.cpp | |
DebugLog.h | |
DecisionStump.cpp | |
DecisionStump.h | This class implements a DecisionStump, which is a single node of a DecisionTree |
DecisionTree.cpp | |
DecisionTree.h | This class implements a basic Decision Tree classifier. Decision Trees are conceptually simple classifiers that work well on even complex classification tasks. Decision Trees partition the feature space into a set of rectangular regions, classifying a new datum by finding which region it belongs to |
DecisionTreeClusterNode.cpp | |
DecisionTreeClusterNode.h | This file implements a DecisionTreeClusterNode, which is a specific type of node used for a DecisionTree |
DecisionTreeNode.cpp | |
DecisionTreeNode.h | This file implements a DecisionTreeNode, which is a specific base node used for a DecisionTree |
DecisionTreeThresholdNode.cpp | |
DecisionTreeThresholdNode.h | This file implements a DecisionTreeThresholdNode, which is a specific type of node used for a DecisionTree |
DecisionTreeTripleFeatureNode.cpp | |
DecisionTreeTripleFeatureNode.h | This file implements a DecisionTreeTripleFeatureNode, which is a specific type of node used for a DecisionTree |
Derivative.cpp | |
Derivative.h | The Derivative class computes either the first or second order derivative of the input signal |
DiscreteHiddenMarkovModel.cpp | |
DiscreteHiddenMarkovModel.h | This class implements a discrete Hidden Markov Model |
DoubleMovingAverageFilter.cpp | |
DoubleMovingAverageFilter.h | The DoubleMovingAverageFilter implements a double moving average filter, i.e. two moving average filters applied in series |
DTW.cpp | |
DTW.h | This class implements Dynamic Time Warping. Dynamic Time Warping (DTW) is a powerful classifier that works very well for recognizing temporal gestures. Temporal gestures can be defined as a cohesive sequence of movements that occur over a variable time period. The DTW algorithm is a supervised learning algorithm that can be used to classify any type of N-dimensional, temporal signal. The DTW algorithm works by creating a template time series for each gesture that needs to be recognized, and then warping the real-time signals to each of the templates to find the best match. The DTW algorithm also computes rejection thresholds that enable the algorithm to automatically reject sensor values that do not match any of the K gestures the algorithm has been trained to recognize (without being explicitly told during the prediction phase if a gesture is, or is not, being performed). A usage sketch appears after this file list. You can find out more about the DTW algorithm in Gillian, N. (2011) Recognition of multivariate temporal musical gestures using n-dimensional dynamic time warping |
EigenvalueDecomposition.cpp | |
EigenvalueDecomposition.h | |
ErrorLog.cpp | |
ErrorLog.h | |
EvolutionaryAlgorithm.h | This class implements a template based EvolutionaryAlgorithm |
FastFourierTransform.cpp | |
FastFourierTransform.h | |
FeatureExtraction.cpp | |
FeatureExtraction.h | This is the main base class that all GRT Feature Extraction algorithms should inherit from |
FFT.cpp | |
FFT.h | The FFT class computes the Fourier transform of an N dimensional signal using a Fast Fourier Transform algorithm |
FFTFeatures.cpp | |
FFTFeatures.h | This class implements the FFTFeatures feature extraction module |
FileParser.h | |
FiniteStateMachine.cpp | |
FiniteStateMachine.h | |
FIRFilter.cpp | |
FIRFilter.h | This class implements a Finite Impulse Response (FIR) Filter |
FSMParticle.h | |
FSMParticleFilter.h | |
Gate.cpp | |
Gate.h | |
GaussianMixtureModels.cpp | |
GaussianMixtureModels.h | This class implements a Gaussian Mixture Model clustering algorithm. The code is based on the GMM code from Numerical Recipes (3rd Edition) |
GestureRecognitionPipeline.cpp | |
GestureRecognitionPipeline.h | This file contains the GestureRecognitionPipeline class |
GMM.cpp | |
GMM.h | This class implements the Gaussian Mixture Model Classifier algorithm. The Gaussian Mixture Model Classifier (GMM) is a basic but useful classification algorithm that can be used to classify an N-dimensional signal |
GridSearch.h | |
GRT.h | This is the main GRT header. You should include this to access all the GRT classes in your project (a minimal pipeline sketch appears after this file list) |
GRTBase.cpp | |
GRTBase.h | This file contains the GRTBase class. This is the core base class for all the GRT modules |
GRTCommon.h | |
GRTException.h | |
GRTTypedefs.h | |
GRTVersionInfo.h | |
HierarchicalClustering.cpp | |
HierarchicalClustering.h | This class implements a basic Hierarchical Clustering algorithm |
HighPassFilter.cpp | |
HighPassFilter.h | This class implements a High Pass Filter |
HMM.cpp | |
HMM.h | This class acts as the main interface for using a Hidden Markov Model |
HMMEnums.h | This file contains the enumerations used by the HMM classes |
IndexedDouble.h | |
Individual.h | |
InfoLog.cpp | |
InfoLog.h | |
KMeans.cpp | |
KMeans.h | This class implements the KMeans clustering algorithm |
KMeansFeatures.cpp | |
KMeansFeatures.h | |
KMeansQuantizer.cpp | |
KMeansQuantizer.h | The KMeansQuantizer module quantizes the N-dimensional input vector to a 1-dimensional discrete value. This value will be between [0 K-1], where K is the number of clusters used to create the quantization model. Before you use the KMeansQuantizer, you need to train a quantization model. To do this, you select the number of clusters you want your quantizer to have and then train the quantization model using your training data |
KNN.cpp | |
KNN.h | This class implements the K-Nearest Neighbor classification algorithm (http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm). KNN is a simple but powerful classifier, based on finding the closest K training examples in the feature space for the new input vector. The KNN algorithm is amongst the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common amongst its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of its nearest neighbor. A standalone sketch of the distance-and-vote step appears after this file list |
LDA.cpp | |
LDA.h | This class implements the Linear Discriminant Analysis Classification algorithm |
LeakyIntegrator.cpp | |
LeakyIntegrator.h | The LeakyIntegrator class computes the following signal: y = y*z + x, where x is the input, y is the output and z is the leak rate (a worked sketch appears after this file list) |
libsvm.cpp | |
libsvm.h | |
LinearLeastSquares.h | This class implements a basic Linear Least Squares algorithm |
LinearRegression.cpp | |
LinearRegression.h | This class implements the Linear Regression algorithm. Linear Regression is a simple but effective regression algorithm that can map an N-dimensional signal to a 1-dimensional signal |
Log.h | |
LogisticRegression.cpp | |
LogisticRegression.h | This class implements the Logistic Regression algorithm. Logistic Regression is a simple but effective regression algorithm that can map an N-dimensional signal to a 1-dimensional signal |
LowPassFilter.cpp | |
LowPassFilter.h | The LowPassFilter class implements a low pass filter based on an exponential moving average filter: https://en.wikipedia.org/wiki/Exponential_smoothing |
LUDecomposition.cpp | |
LUDecomposition.h | |
Matrix.h | The Matrix class is a basic class for storing any type of data. This class is a template and can therefore be used with any generic data type |
MatrixFloat.cpp | |
MatrixFloat.h | |
MeanShift.h | This class implements the MeanShift clustering algorithm |
MedianFilter.cpp | |
MedianFilter.h | The MedianFilter implements a simple median filter |
MinDist.cpp | |
MinDist.h | This class implements the MinDist classifier algorithm |
MinDistModel.cpp | |
MinDistModel.h | This class implements a container for a single class model used by the MinDist classifier |
MinMax.h | |
MixtureModel.h | This class implements a MixtureModel, which is a container for holding a class model for the GRT::GMM class |
MLBase.cpp | |
MLBase.h | This is the main base class that all GRT machine learning algorithms should inherit from |
MLP.cpp | |
MLP.h | This class implements a Multilayer Perceptron Artificial Neural Network |
MovementDetector.cpp | |
MovementDetector.h | This class implements a simple movement detection algorithm. This can be used to detect periods of 'low movement' and 'high movement' to act as additional context for other GRT algorithms |
MovementIndex.cpp | |
MovementIndex.h | This class implements the MovementIndex feature module. The MovementIndex module computes the amount of movement or variation within an N-dimensional signal over a given time window. The MovementIndex class is good for extracting features that describe how much change is occurring in an N-dimensional signal over time. An example application might be to use the MovementIndex in combination with one of the GRT classification algorithms to determine if an object is being moved or held still |
MovementTrajectoryFeatures.cpp | |
MovementTrajectoryFeatures.h | This class implements the MovementTrajectory feature extraction module |
MovingAverageFilter.cpp | |
MovingAverageFilter.h | The MovingAverageFilter implements a low pass moving average filter |
MultidimensionalRegression.cpp | |
MultidimensionalRegression.h | This class implements the Multidimensional Regression meta algorithm. Multidimensional Regression acts as a meta-algorithm for regression that allows several one-dimensional regression algorithms (such as Linear Regression) to be combined together, allowing an M-dimensional signal to be mapped to an N-dimensional signal. This works by training N separate regression algorithms (one for each dimension), each with an M-dimensional input |
Neuron.cpp | |
Neuron.h | This class implements a Neuron that is used by the Multilayer Perceptron |
Node.cpp | |
Node.h | This class contains the main Node base class |
Observer.h | |
ObserverManager.h | |
Particle.h | |
ParticleClassifier.cpp | |
ParticleClassifier.h | |
ParticleClassifierParticleFilter.h | |
ParticleFilter.h | This class implements a template based ParticleFilter. The user is required to implement the predict and update functions for their specific task |
ParticleSwarmOptimization.h | This class implements a template based ParticleSwarmOptimization algorithm |
PeakDetection.cpp | |
PeakDetection.h | |
PostProcessing.cpp | |
PostProcessing.h | This is the main base class that all GRT PostProcessing algorithms should inherit from |
PreProcessing.cpp | |
PreProcessing.h | This is the main base class that all GRT PreProcessing algorithms should inherit from |
PrincipalComponentAnalysis.cpp | |
PrincipalComponentAnalysis.h | This class runs the Principal Component Analysis (PCA) algorithm, a dimensionality reduction algorithm that projects an [M N] matrix (where M==samples and N==dimensions) onto a new K dimensional subspace, where K is normally much less than N |
PSOParticle.h | |
RadialBasisFunction.cpp | |
RadialBasisFunction.h | This class implements a Radial Basis Function Weak Classifier. The Radial Basis Function (RBF) class fits an RBF to the weighted training data so as to maximize the number of positive training samples that are inside a specific region of the RBF (this region is set by the GRT::RadialBasisFunction::positiveClassificationThreshold parameter). After the RBF has been trained, it will output 1 if the input data is inside the RBF positive classification region, otherwise it will output 0 |
Random.h | This file contains the Random class, a useful wrapper for generating cross platform random functions. This includes functions for uniform distributions (both integer and Float) and Gaussian distributions |
RandomForests.cpp | |
RandomForests.h | |
RangeTracker.cpp | |
RangeTracker.h | The RangeTracker can be used to keep track of the expected ranges that might occur in a dataset. These ranges can then be used to set the external ranges of a dataset for several of the GRT DataStructures |
RBMQuantizer.cpp | |
RBMQuantizer.h | The RBMQuantizer module quantizes the N-dimensional input vector to a 1-dimensional discrete value. This value will be between [0 K-1], where K is the number of clusters used to create the quantization model. Before you use the RBMQuantizer, you need to train a quantization model. To do this, you select the number of clusters you want your quantizer to have and then train the quantization model using your training data |
Regressifier.cpp | |
Regressifier.h | This is the main base class that all GRT Regression algorithms should inherit from |
RegressionData.cpp | |
RegressionData.h | The RegressionData is the main data structure for recording, labeling, managing, saving, and loading datasets that can be used to train and test the GRT supervised regression algorithms |
RegressionSample.cpp | |
RegressionSample.h | This class stores the input vector and target vector for a single labelled regression instance |
RegressionTree.cpp | |
RegressionTree.h | This class implements a basic Regression Tree |
RegressionTreeNode.h | This file implements a RegressionTreeNode, which is a specific type of node used for a RegressionTree |
SavitzkyGolayFilter.cpp | |
SavitzkyGolayFilter.h | This class implements a Savitzky-Golay filter. The code is based on the Savitzky-Golay filter code from Numerical Recipes (3rd Edition) |
SelfOrganizingMap.cpp | |
SelfOrganizingMap.h | This class implements the Self-Organizing Map clustering algorithm |
Softmax.cpp | |
Softmax.h | The Softmax Classifier is a simple but effective classifier (based on logistic regression) that works well on problems that are linearly separable |
SoftmaxModel.h | This file implements a container for a Softmax model |
SOMQuantizer.cpp | |
SOMQuantizer.h | The SOMQuantizer module quantizes the N-dimensional input vector to a 1-dimensional discrete value. This value will be between [0 K-1], where K is the number of clusters used to create the quantization model. Before you use the SOMQuantizer, you need to train a quantization model. To do this, you select the number of clusters you want your quantizer to have and then train the quantization model using your training data |
SVD.cpp | |
SVD.h | |
SVM.cpp | |
SVM.h | This class acts as a front end for the LIBSVM library (http://www.csie.ntu.edu.tw/~cjlin/libsvm/). It implements a Support Vector Machine (SVM) classifier, a powerful classifier that works well on a wide range of classification problems, particularly on more complex problems that other classifiers (such as the KNN, GMM or ANBC algorithms) might not be able to solve |
SwipeDetector.cpp | |
SwipeDetector.h | This class implements a basic swipe detection classification algorithm |
TestingLog.cpp | |
TestingLog.h | |
TestInstanceResult.h | The TestInstanceResult class provides a data structure for storing the results of a classification or regression test instance |
TestResult.h | The TestResult class provides a data structure for storing the results of a classification or regression test |
ThreadPool.cpp | |
ThreadPool.h | The ThreadPool class implements a flexible interface for performing a large number of batch tasks. You need to build the GRT with GRT_CXX11_ENABLED, otherwise the ThreadPool class will be empty (as it requires C++11 support) |
ThresholdCrossingDetector.cpp | |
ThresholdCrossingDetector.h | This class implements a threshold crossing detector |
TimeDomainFeatures.cpp | |
TimeDomainFeatures.h | This class implements the TimeDomainFeatures feature extraction module |
Timer.h | |
TimeseriesBuffer.cpp | |
TimeseriesBuffer.h | This class implements the TimeseriesBuffer feature extraction module |
TimeSeriesClassificationData.cpp | |
TimeSeriesClassificationData.h | The TimeSeriesClassificationData is the main data structure for recording, labeling, managing, saving, and loading training data for supervised temporal learning problems. Unlike the ClassificationData, in which each sample consists of one N-dimensional datum, a TimeSeriesClassificationData sample consists of an N-dimensional time series of length M. The length of each time series sample (i.e. M) can be different for each datum in the dataset |
TimeSeriesClassificationSample.cpp | |
TimeSeriesClassificationSample.h | This class stores the timeseries data for a single labelled timeseries classification sample |
TimeSeriesClassificationSampleTrimmer.cpp | |
TimeSeriesClassificationSampleTrimmer.h | This class provides a useful tool to automatically trim timeseries data |
TimeSeriesPositionTracker.h | This class can be used to track the class label, start index, and end index for labelled data |
TimeStamp.h | |
TrainingDataRecordingTimer.cpp | |
TrainingDataRecordingTimer.h | The TrainingDataRecordingTimer is a tool to help record your training data |
TrainingLog.cpp | |
TrainingLog.h | |
TrainingResult.h | The TrainingResult class provides a data structure for storing the results of a classification or regression training iteration |
Tree.cpp | |
Tree.h | This class implements the base class Tree used for the DecisionTree, RegressionTree and ClusterTree |
UnlabelledData.cpp | |
UnlabelledData.h | The UnlabelledData class is the main data container for supporting unsupervised learning |
Util.cpp | |
Util.h | This file contains the Util class, a wrapper for a number of generic functions that are used throughout the GRT. This includes functions for scaling data, finding the minimum or maximum values in a double or UINT vector, etc. Many of these functions are static functions, which enables you to use them without having to create a new Util instance, for instance, you can directly call: Util::sleep( 1000 ); to use the sleep function |
Vector.h | The Vector class is a basic class for storing any type of data. The default Vector is an interface for std::vector, but the idea is this can easily be changed when needed (e.g., when running the GRT on an embedded device with limited STL support). This class is a template and can therefore be used with any generic data type |
VectorFloat.cpp | |
VectorFloat.h | |
WarningLog.cpp | |
WarningLog.h | |
WeakClassifier.cpp | |
WeakClassifier.h | This is the main base class for all GRT WeakClassifiers |
WeightedAverageFilter.cpp | |
WeightedAverageFilter.h | The WeightedAverageFilter implements a weighted average filter that gives a larger weight to more recent samples, and a smaller weight to older samples |
ZeroCrossingCounter.cpp | |
ZeroCrossingCounter.h | |
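
The CircularBuffer entry above describes a ring buffer. The following is a minimal, self-contained sketch of that idea, written for illustration only; it is not the GRT CircularBuffer interface, just the core mechanism of writing into a fixed-size array with a wrapping index so that nothing has to be shuffled when a new element arrives.

```cpp
// Illustrative only: a minimal fixed-size ring buffer (not the GRT API).
#include <vector>
#include <cstddef>

template <typename T>
class RingBufferSketch {
public:
    explicit RingBufferSketch( std::size_t capacity )
        : data( capacity ), head( 0 ), count( 0 ) {}

    // Overwrites the oldest element once the buffer is full (FIFO behaviour)
    void push( const T &value ) {
        data[ head ] = value;
        head = ( head + 1 ) % data.size();   // advance the write index and wrap around
        if( count < data.size() ) ++count;
    }

    // Index 0 is the oldest stored element, index size()-1 the newest
    T get( std::size_t i ) const {
        const std::size_t start = ( head + data.size() - count ) % data.size();
        return data[ ( start + i ) % data.size() ];
    }

    std::size_t size() const { return count; }

private:
    std::vector<T> data;
    std::size_t head;   // next write position
    std::size_t count;  // number of valid elements
};
```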
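
The ClassLabelChangeFilter entry above gives the example {1,1,1,1,2,2,2,2,3,3} mapping to {1,0,0,0,2,0,0,0,3,0}. The sketch below reproduces that behaviour on a plain vector of labels; the function name and the choice of 0 as the null label are illustrative assumptions, not the GRT API.

```cpp
// Illustrative only: emit a label when it changes, otherwise emit the null label 0.
#include <vector>
#include <iostream>

std::vector<unsigned int> filterLabelChanges( const std::vector<unsigned int> &labels ) {
    std::vector<unsigned int> filtered;
    filtered.reserve( labels.size() );
    unsigned int lastLabel = 0;
    bool first = true;
    for( unsigned int label : labels ) {
        if( first || label != lastLabel ) {
            filtered.push_back( label );  // label changed: pass it through
        } else {
            filtered.push_back( 0 );      // no change: emit the null label
        }
        lastLabel = label;
        first = false;
    }
    return filtered;
}

int main() {
    std::vector<unsigned int> stream = { 1,1,1,1,2,2,2,2,3,3 };
    for( unsigned int v : filterLabelChanges( stream ) ) std::cout << v << " ";
    // prints: 1 0 0 0 2 0 0 0 3 0
    return 0;
}
```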
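
The DTW entry above outlines the train-templates-then-warp workflow. The sketch below shows roughly how that workflow could look with the classes listed in this file; it is a hedged sketch, not a definitive example: the file name dtw_training_data.grt is a placeholder, and the load/train/predict calls assume the usual GRT interface of TimeSeriesClassificationData, MatrixFloat, and the Classifier base class.

```cpp
#include <GRT/GRT.h>
#include <iostream>
#include <cstdlib>
using namespace GRT;

int main() {
    // Load labelled temporal gestures; the file name is only a placeholder.
    TimeSeriesClassificationData trainingData;
    if( !trainingData.load( "dtw_training_data.grt" ) ){
        std::cout << "Failed to load training data!" << std::endl;
        return EXIT_FAILURE;
    }

    DTW dtw;
    dtw.enableNullRejection( true );   // reject inputs that match none of the trained templates

    if( !dtw.train( trainingData ) ){
        std::cout << "Failed to train the DTW classifier!" << std::endl;
        return EXIT_FAILURE;
    }

    // At prediction time each candidate gesture is an [M x N] time series,
    // filled from your real-time sensor stream.
    MatrixFloat inputTimeSeries;
    if( dtw.predict( inputTimeSeries ) ){
        UINT predictedClassLabel = dtw.getPredictedClassLabel();   // 0 indicates a rejected (null) gesture
        std::cout << "Predicted gesture: " << predictedClassLabel << std::endl;
    }

    return EXIT_SUCCESS;
}
```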
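
The GRT.h entry above notes that the single header exposes all GRT classes, and the GestureRecognitionPipeline entry names the class that chains pre-processing, classification, and post-processing modules together. The sketch below combines a few of the modules listed in this file (MovingAverageFilter, ANBC, ClassLabelFilter) into one pipeline; treat it as a sketch under assumptions: the training file name is a placeholder, 1-dimensional input data is assumed, and the module constructor arguments are illustrative.

```cpp
#include <GRT/GRT.h>
#include <iostream>
#include <cstdlib>
using namespace GRT;

int main() {
    // Chain a smoothing pre-processor, a classifier, and a post-processing
    // filter that removes sporadic prediction spikes.
    GestureRecognitionPipeline pipeline;
    pipeline.addPreProcessingModule( MovingAverageFilter( 5, 1 ) );   // window of 5 samples, 1-dimensional input
    pipeline.setClassifier( ANBC() );
    pipeline.addPostProcessingModule( ClassLabelFilter( 3, 5 ) );     // a label must fill 3 of the last 5 predictions

    // The file name is a placeholder for your own recorded training data.
    ClassificationData trainingData;
    if( !trainingData.load( "training_data.grt" ) ) return EXIT_FAILURE;
    if( !pipeline.train( trainingData ) ) return EXIT_FAILURE;

    // In your main loop, fill this vector from your sensor each frame.
    VectorFloat inputVector( 1 );
    if( pipeline.predict( inputVector ) ){
        UINT predictedClassLabel = pipeline.getPredictedClassLabel();
        std::cout << "Predicted class: " << predictedClassLabel << std::endl;
    }

    return EXIT_SUCCESS;
}
```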
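
The KNN entry above describes classification by a majority vote amongst the K closest training examples. The function below is a standalone illustration of just that distance-and-vote step on a toy in-memory dataset; it is not the GRT KNN class, which adds training-data management and the other Classifier features listed in this toolkit.

```cpp
// Illustrative only: Euclidean distance + majority vote over the k nearest samples.
#include <vector>
#include <map>
#include <algorithm>
#include <cmath>
#include <cstddef>

struct LabelledSample {
    unsigned int classLabel;
    std::vector<double> features;
};

unsigned int knnClassify( const std::vector<LabelledSample> &trainingSet,
                          const std::vector<double> &query,
                          unsigned int k ) {
    // Compute the distance from the query to every training sample
    std::vector< std::pair<double, unsigned int> > distances;
    for( const auto &sample : trainingSet ) {
        double sum = 0;
        for( std::size_t i = 0; i < query.size(); ++i ) {
            const double d = sample.features[i] - query[i];
            sum += d * d;
        }
        distances.push_back( { std::sqrt( sum ), sample.classLabel } );
    }

    // Keep the k closest samples
    const std::size_t numNeighbors = std::min<std::size_t>( k, distances.size() );
    std::partial_sort( distances.begin(), distances.begin() + numNeighbors, distances.end() );

    // Majority vote amongst the k nearest neighbours
    std::map<unsigned int, unsigned int> votes;
    for( std::size_t i = 0; i < numNeighbors; ++i ) votes[ distances[i].second ]++;

    unsigned int bestLabel = 0, bestCount = 0;
    for( const auto &v : votes ) {
        if( v.second > bestCount ) { bestLabel = v.first; bestCount = v.second; }
    }
    return bestLabel;
}
```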
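
The LeakyIntegrator entry above gives the recurrence y = y*z + x. The sketch below applies that recurrence to a short impulse signal so the decay behaviour is visible; it is purely illustrative and independent of the GRT class.

```cpp
// Illustrative only: the leaky-integrator recurrence y[n] = y[n-1]*z + x[n].
// z (the leak rate) is assumed to lie in [0,1]; values closer to 1 retain more history.
#include <vector>
#include <iostream>

std::vector<double> leakyIntegrate( const std::vector<double> &x, double z ) {
    std::vector<double> y;
    y.reserve( x.size() );
    double state = 0.0;
    for( double xi : x ) {
        state = state * z + xi;   // apply the recurrence
        y.push_back( state );
    }
    return y;
}

int main() {
    // A unit impulse decays geometrically at the leak rate
    std::vector<double> impulse = { 1, 0, 0, 0, 0 };
    for( double v : leakyIntegrate( impulse, 0.9 ) ) std::cout << v << " ";
    // prints: 1 0.9 0.81 0.729 0.6561
    return 0;
}
```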