Machine Learning Prediction for Motion Sensors In Mathematica


Let's assume we have a motion sensor that provides X and Y data for a target, where the measurement error is normally distributed with parameters:

μ = 0.0

σ = 5.2

First we need to train our prediction model with some training data. We add a squared term to give the target a parabolic path.

snip1
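The original snippet isn't reproduced here, but a minimal sketch of the training-data generation might look like the following. The variable names and the 0.1 coefficient on the squared term are my assumptions, not necessarily the author's; only the noise parameters (μ = 0.0, σ = 5.2) and the 200-example count come from the post.

```wolfram
(* Sketch: 200 training pairs x -> y following a parabolic path *)
(* plus Gaussian measurement noise; the 0.1 coefficient is illustrative. *)
mu = 0.0; sigma = 5.2;
trainingData = Table[
   x -> 0.1 x^2 + RandomVariate[NormalDistribution[mu, sigma]],
   {x, 1., 200., 1.}];
```

`Predict` accepts training data as a list of `input -> output` rules, which is why the table is built that way.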


Now build the five different supported predictor functions:

snip2
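A hedged sketch of building the five predictors, assuming training data in the shape generated earlier (the `trainingData` name is my placeholder). The method strings are the documented `Method` values for `Predict`:

```wolfram
(* Sketch: train one predictor per method; trainingData is assumed *)
(* to be a list of x -> y rules as generated above. *)
mu = 0.0; sigma = 5.2;
trainingData = Table[
   x -> 0.1 x^2 + RandomVariate[NormalDistribution[mu, sigma]],
   {x, 1., 200., 1.}];
methods = {"LinearRegression", "GaussianProcess", "NearestNeighbors",
   "NeuralNetwork", "RandomForest"};
predictors = AssociationMap[
   Predict[trainingData, Method -> #] &, methods];
Information /@ predictors
```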

With minimal parameter input, here is the Information reported for each predictor as built with its default parameters:

Method: Linear regression
Number of features: 1
Number of training examples: 200
L1 regularization coefficient: 0
L2 regularization coefficient: 1.

Method: Gaussian Process
Number of features: 1
Number of training examples: 200
AssumeDeterministic: False
Numerical covariance type: SquaredExponential
Nominal covariance type: HammingDistance
EstimationMethod: MaximumPosterior
OptimizationMethod: FindMinimum

Method: K-nearest neighbors
Number of features: 1
Number of training examples: 200
Number of neighbors: 10
Distance function: EuclideanDistance

Method: Neural network
Number of features: 1
Number of training examples: 200
L1 regularization coefficient: 0
L2 regularization coefficient: 0.1
Number of hidden layers: 2
Hidden nodes: 3, 3
Hidden layer activation functions: Tanh, Tanh

Method: Random forest
Number of features: 1
Number of training examples: 200
Number of trees: 50

Now we build a new random sensor track and test it against our trained prediction functions.

snip3
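A sketch of what generating the test track might look like, assuming the same parabolic model and noise parameters as the training data (the 0.1 coefficient is again my placeholder; `trackData1` is the name the post uses for this track):

```wolfram
(* Sketch: a fresh noisy track as {x, y} pairs, suitable for both *)
(* plotting and querying the trained predictors at each x. *)
trackData1 = Table[
   {x, 0.1 x^2 + RandomVariate[NormalDistribution[0., 5.2]]},
   {x, 1., 200., 1.}];
```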

Here is our "training track" plotted against the trackData1 we created.

plot1

We create a plot function that lets us pass in track data and show the prediction, confidence interval, and raw data; this first plot uses the linear regression predictor:

plot2
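A hedged sketch of such a plot helper (the `trackPlot` name is mine). The `"StandardDeviation"` property is a documented `PredictorFunction` property, which makes a ±1σ band straightforward:

```wolfram
(* Sketch: overlay the prediction curve, a +/- 1 sigma band, and *)
(* the raw track points for any trained predictor p. *)
trackPlot[p_PredictorFunction, data_List] :=
  Show[
    Plot[{p[x] - p[x, "StandardDeviation"], p[x],
        p[x] + p[x, "StandardDeviation"]},
      {x, Min[data[[All, 1]]], Max[data[[All, 1]]]},
      Filling -> {1 -> {2}, 3 -> {2}}],
    ListPlot[data, PlotStyle -> Red]];
```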

We decide we'd rather parameterize the prediction method, so we steal a cool technique found on the Wolfram site.

snip4
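The borrowed snippet isn't shown, but the idea of parameterizing the method can be sketched as below; `methodPlot` and its arguments are hypothetical names, not the Wolfram site's originals:

```wolfram
(* Sketch: build and plot a predictor for any Method string, so *)
(* the same helper covers all five methods. Track data is assumed *)
(* to be {x, y} pairs, converted to rules for Predict. *)
methodPlot[method_String, train_List, test_List] :=
  Module[{p = Predict[Rule @@@ train, Method -> method]},
    Show[
      Plot[p[x], {x, 1, 200}, PlotLabel -> method],
      ListPlot[test, PlotStyle -> Red]]];
```

Mapping such a helper over the list of method names is then a one-liner, which is what makes the combined chart easy.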

Finally, let's plot them all in one combined chart.

snip6

Voilà, we get this chart:

plot3

The Gaussian Process provides the best fit and the tightest confidence interval for our track predictor.

You can download the CDF and/or the Notebook for this work. Enjoy, and let me know if you find something wrong or improve it.

Thanks,

Phil Neumiller