Algorithms

This page documents library components that are all basically just implementations of mathematical functions or algorithms without any significant data structures associated with them. This includes things such as checksums, cryptographic hashes, machine learning algorithms, and sorting.
bigint_kernel_1:
This implementation is done using an array of unsigned shorts. It is also reference counted. For further details see the above link. Note that kernel_2 should be faster in almost every case, so you should generally prefer that version of the bigint object.
kernel_1a is a typedef for bigint_kernel_1
kernel_1a_c is a typedef for kernel_1a that checks its preconditions.
bigint_kernel_2:
This implementation is basically the same as kernel_1 except it uses the Fast Fourier Transform to perform multiplications much faster.
kernel_2a is a typedef for bigint_kernel_2
kernel_2a_c is a typedef for kernel_2a that checks its preconditions.
crc32_kernel_1:
This implementation uses the polynomial 0xedb88320.
kernel_1a is a typedef for crc32_kernel_1
This object allows you to compute the distance between the center of mass of a set of training points and any test point. You can therefore use it to predict how similar a test point is to the data this object has been trained on (larger distances from the centroid indicate dissimilarity/anomalous points).
In short, this is an online kernel-based regression algorithm: you give it samples (x,y) and it learns a function f such that f(x) == y. For a detailed description of the algorithm, read the above paper.
This is an implementation of an online algorithm for recursively finding a set of linearly independent vectors in a kernel-induced feature space. To use it, you decide how large you would like the set to be and then feed it sample points.
Each time you present it with a new sample point it either keeps the current set of independent points unchanged, or if the new point is "more linearly independent" than one of the points it already has, it replaces the weakly linearly independent point with the new one.
This object uses the Approximately Linearly Dependent metric described in the paper The Kernel Recursive Least Squares Algorithm by Yaakov Engel to decide which points are more linearly independent than others.
mlp_kernel_1:
This is implemented in the obvious way.
kernel_1a is a typedef for mlp_kernel_1
kernel_1a_c is a typedef for kernel_1a that checks its preconditions.
rand_kernel_1:
This implementation is done using the Mersenne Twister algorithm.
kernel_1a is a typedef for rand_kernel_1
rand_float_1:
The implementation is obvious. Click on the link if you want to see.
float_1a is a typedef for rand_kernel_1a extended by rand_float_1
This is a trainer object that is meant to wrap other trainer objects that create decision_function objects. It performs post processing on the output decision_function objects with the intent of representing the decision_function with fewer support vectors.
It begins by performing the same post processing as the reduced_decision_function_trainer object but it also performs a global gradient based optimization to further improve the results.
Trains a relevance vector machine for solving regression problems. Outputs a decision_function that represents the learned regression function.
The implementation of the RVM training algorithm used by this library is based on the following paper: Tipping, M. E. and A. C. Faul (2003). Fast marginal likelihood maximisation for sparse Bayesian models. In C. M. Bishop and B. J. Frey (Eds.), Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, Jan 3-6.
Trains a relevance vector machine for solving binary classification problems. Outputs a decision_function that represents the learned classifier.
The implementation of the RVM training algorithm used by this library is based on the following paper: Tipping, M. E. and A. C. Faul (2003). Fast marginal likelihood maximisation for sparse Bayesian models. In C. M. Bishop and B. J. Frey (Eds.), Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, Jan 3-6.
Trains a nu support vector classifier and outputs a decision_function.
The implementation of the nu-svm training algorithm used by this library is based on the following excellent papers:

Trains a probabilistic_decision_function using some sort of trainer object such as the svm_nu_trainer or rbf_network_trainer.
The probability model is created by using the technique described in the paper: Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods by John C. Platt, March 26, 1999.