Mathematica Shortest Path Min / Max Flow on Random Graphs

We present here a simple program to generate a variety of random graphs with a variable number of vertices and edges and random edge weights, either directed or undirected.  You can put in an insane number of vertices and it will run... but it will take a while...

Compute Shortest Path Max / Min Flow on Random Graph

If you want the random graph to be directed, you can change the DirectedEdges -> option in the RandomGraph[] function.  Also, if you want a different number of vertices or a different edge density, you can adjust the numVertices and numEdges parameters as suits you.  Here is an example output.  Edge weights are distributed uniformly at random over [0, 1].
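In case the screenshots are hard to read, here is roughly what the core computation looks like.  This is a minimal sketch rather than the exact notebook code: the numVertices and numEdges values, and the choice of source (vertex 1) and sink (vertex numVertices), are illustrative assumptions.

(* Sketch: weighted random graph, then shortest path, maximum flow,
   and minimum-cost flow between an assumed source and sink. *)
numVertices = 50;
numEdges = 150;
g = RandomGraph[{numVertices, numEdges}, DirectedEdges -> False];
wg = Graph[g, EdgeWeight -> RandomReal[{0, 1}, EdgeCount[g]]];  (* uniform weights on [0, 1] *)

path = FindShortestPath[wg, 1, numVertices];
maxFlow = FindMaximumFlow[wg, 1, numVertices];
minCostFlow = FindMinimumCostFlow[wg, 1, numVertices];

HighlightGraph[wg, PathGraph[path]]  (* show the shortest path on the graph *)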

Shortest Path through Random Graph
Maximum Flow
Min Flow Cost

Here is a shortest path plot with 1500 nodes.  It works, but it gets difficult to visualize at this scale...  You can sort of see the red vertices in there, right? 🙂

1500 Vertices Shortest Path

You can download the Mathematica notebook for this post here.

Screen Scraping Plots / Charts to Acquire Raw Data

Sometimes you just don't have the raw data and all you have is a plot.  This tool (link below) does a reasonable job on a variety of plots.

WebPlotDigitizer

I tried it on this chart exported from Mathematica as a JPEG.
chartscraped

You have to calibrate your axes and then pick the trace color of the plot, which it offers in a menu of dominant colors derived from the chart.  It's pretty cool that the points I got back had a correlation of nearly 1 with the original data in Mathematica.
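For the correlation check, something along these lines works in Mathematica.  It's a sketch under assumptions: "scraped.csv" stands in for whatever file you export from WebPlotDigitizer, and originalData is the {x, y} list the chart was made from.

scraped = Import["scraped.csv"];                       (* digitized {x, y} pairs *)
f = Interpolation[scraped, InterpolationOrder -> 1];   (* resample onto the original x values *)
Correlation[f /@ originalData[[All, 1]], originalData[[All, 2]]]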

The tool also has a WebCam option and can import and export JSON, which is nice as well.  Overall I give this a rating of 9.5/10!  I still haven't found my holy grail screen scraper yet, but this one works well for manual scraping of charts.

Also see this article: http://connectedresearchers.com/extracting-data-from-plots-images-and-maps-with-webplotdigitizer/ for more information about this tool.

Kudos to:

Author: Ankit Rohatgi

Title: WebPlotDigitizer

Website: http://arohatgi.info/WebPlotDigitizer

Version: 3.10

Date: May, 2016

E-Mail: ankitrohatgi@hotmail.com

Location: Austin, Texas, USA

Machine Learning Prediction for Motion Sensors In Mathematica

 

Let's assume we have a motion sensor that provides us X and Y data for a target where the measurement error is normally distributed with parameters:

μ = 0.0

σ = 5.2

First we will need to train our prediction model with some training data.  We add a square term to give the target a parabolic path.

snip1
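For readers who can't make out the snippet, the training step looks roughly like this.  The coefficients, the x range, and the variable names (trainX, trainData) are assumptions for illustration; only the noise parameters come from above.

n = 200;
trainX = RandomReal[{0, 10}, n];
noise = RandomVariate[NormalDistribution[0.0, 5.2], n];  (* sensor error model from above *)
trainY = 3 trainX + 0.5 trainX^2 + noise;                (* square term gives the parabolic path *)
trainData = Thread[trainX -> trainY];                    (* input -> output pairs for Predict *)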

 

 

Now build the five different supported predictor functions:

snip2
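Roughly, building the five predictors comes down to calling Predict once per Method; the predictors association name is an assumption reused in the later sketches.

methods = {"LinearRegression", "GaussianProcess", "NearestNeighbors",
   "NeuralNetwork", "RandomForest"};
predictors = AssociationMap[Predict[trainData, Method -> #] &, methods];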

With minimal parameter input, here is the Information output for the predictors as built with default parameters:

Linear regression: 1 feature, 200 training examples, L1 regularization coefficient 0, L2 regularization coefficient 1.

Gaussian process: 1 feature, 200 training examples, AssumeDeterministic False, numerical covariance type SquaredExponential, nominal covariance type HammingDistance, estimation method MaximumPosterior, optimization method FindMinimum.

K-nearest neighbors: 1 feature, 200 training examples, 10 neighbors, distance function EuclideanDistance.

Neural network: 1 feature, 200 training examples, L1 regularization coefficient 0, L2 regularization coefficient 0.1, 2 hidden layers with 3 and 3 nodes, Tanh activation on both hidden layers.

Random forest: 1 feature, 200 training examples, 50 trees.

Now we build a new random sensor track and test it against our trained prediction functions.

snip3
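The new test track can be generated the same way as the training data.  Here is a sketch, with trackData1 matching the name used below and everything else assumed.

testX = Sort[RandomReal[{0, 10}, 50]];
trackData1 = Transpose[{testX,
    3 testX + 0.5 testX^2 + RandomVariate[NormalDistribution[0.0, 5.2], 50]}];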

Here is our "training track" plotted against the trackData1 we created.

plot1

We create a plot function that lets us pass in track data and show the prediction, confidence interval, and raw data using a linear regression predictor:

plot2
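A plot function along these lines produces that kind of figure.  It is a sketch, not the notebook code: predictionPlot is a hypothetical name, and the confidence band is drawn as the predictive mean plus or minus one standard deviation of the predictor's "Distribution" property.

predictionPlot[data_, pf_] := Module[{mean, sd, xmin, xmax},
  mean[x_?NumericQ] := pf[x];
  sd[x_?NumericQ] := StandardDeviation[pf[x, "Distribution"]];
  {xmin, xmax} = MinMax[data[[All, 1]]];
  Show[
   Plot[{mean[x], mean[x] + sd[x], mean[x] - sd[x]}, {x, xmin, xmax},
    PlotStyle -> {Thick, Dashed, Dashed}],
   ListPlot[data, PlotStyle -> Red]]]

predictionPlot[trackData1, predictors["LinearRegression"]]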

We decide we'd rather parameterize the prediction method, so we steal a cool technique found on the Wolfram site.

snip4

Finally let's plot them all in one combined chart.

snip6
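The combined chart can be assembled by mapping the plot function over the predictor association, something like the following sketch (the two-column layout is an assumption):

plots = Table[predictionPlot[trackData1, predictors[m]], {m, Keys[predictors]}];
Grid[Partition[plots, UpTo[2]]]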

Voilà, we get this chart:

plot3

The Gaussian Process provides the best fit and tightest confidence interval for our track predictor.

You can download the CDF and/or the Notebook for this work.  Enjoy and let me know if you find something wrong or improve it.

Thanks,

Phil Neumiller

Raspberry Pi Has Wolfram Mathematica: Let's Build Something

For builders everywhere who love the Raspberry Pi, I recently thought of an interesting application of Mathematica on the Pi.

How about a game camera that recognizes the animals you want to capture in memory on your game/animal (for you vegans) cam?  Let's say you just want to see deer and not hunters, hikers, or all those blasted raccoons.

It turns out to be pretty simple with Mathematica, provided your Pi has access to the Internet and Wolfram's curated data.

Here is an image of a mature whitetail buck.

mature-deer

Here is the one line of Mathematica code to "recognize" the critter and the one line of code to display the results as a word cloud.

mathematica_command_and_result
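The two lines aren't legible in the screenshot at every size, so here is their general shape.  Treat the filename, the request for the top 10 matches with probabilities, and the name conversion as assumptions; ImageIdentify and WordCloud are the built-ins doing the work.

img = Import["mature-deer.jpg"];                       (* hypothetical filename *)
result = ImageIdentify[img, All, 10, "Probability"];   (* top matches with probabilities *)
WordCloud[KeyMap[CommonName, result]]                  (* weight each label by its probability *)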

Since the tail of the deer cannot clearly be seen in the photo, the most prominent feature of the image is "it's a deer!".

Let's try a picture where the tail of the deer can be seen.

whitetail2

Here are the results:

whitetailresults

Pretty cool, huh?  Now let's try to trick it.

racoon

Well, it has raccoon as one of the critters but didn't list deer.

racoon_results

There will be false positives...  But you can use machine learning on your captured images to train the system to be better all the time!
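If you want to go down that road, a bare-bones retraining step with Classify might look like this sketch.  The folder layout and file names are hypothetical; the point is just that Classify accepts labeled example images directly.

deerImgs = Import /@ FileNames["*.jpg", "captures/deer"];     (* your captured deer shots *)
otherImgs = Import /@ FileNames["*.jpg", "captures/other"];   (* raccoons, hikers, etc. *)
cf = Classify[<|"deer" -> deerImgs, "other" -> otherImgs|>];
cf[Import["captures/new/IMG_0001.jpg"]]                       (* classify a fresh capture *)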

You can try this yourself at Wolfram's image identify project here.

Thanks for reading, and enjoy Mathematica!

Phil

Disclaimer:

Make sure to check the Wolfram licensing -- this is strictly for non-commercial use.

Generalized Stochastic Petri Net Analysis with PIPE2

Recently I downloaded PIPE2, which is the Platform Independent Petri Net editor written in Java.  It's an impressive open source and FREE tool that allows you to model Generalized Stochastic Petri Nets (GSPNs).

So I fired it up and decided to build a simple Producer / Consumer model, aka the "bounded buffer problem".  Here's the GSPN I drew with the PIPE2 editor.

Notice that the firing rate for the producer is 100 times that of the consumer (on both timed transitions).  This means we would expect the input buffers to be highly utilized most of the time, and the PIPE2 analysis shows this.

gspn

Here is PIPE's classification of the above GSPN.

Petri net classification results

State Machine false
Marked Graph true
Free Choice Net true
Extended Free Choice Net true
Simple Net true
Extended Simple Net true

Here is PIPE2's GSPN analysis of the Petri Net.

GSPN Steady State Analysis Results

Set of Tangible States
Busy Buffers Consume Free Buffers Produce Receive Send
M0 0 1 2 1 0 0
M1 0 1 2 0 0 1
M2 0 0 2 1 1 0
M3 1 1 1 1 0 0
M4 0 0 2 0 1 1
M5 1 1 1 0 0 1
M6 1 0 1 1 1 0
M7 2 1 0 1 0 0
M8 1 0 1 0 1 1
M9 2 1 0 0 0 1
M10 2 0 0 1 1 0
M11 2 0 0 0 1 1

Steady State Distribution of Tangible States
Marking Value
M0 0
M1 0
M2 0
M3 0
M4 0
M5 0.00495
M6 0
M7 0.0049
M8 0.00005
M9 0.49015
M10 0.0001
M11 0.49985

Average Number of Tokens on a Place
Place Number of Tokens
Busy Buffers 1.995
Consume 0.5
Free Buffers 0.005
Produce 0.005
Receive 0.5
Send 0.995

Token Probability Density
µ=0 µ=1 µ=2
Busy Buffers 0 0.005 0.995
Consume 0.5 0.5 0
Free Buffers 0.995 0.005 0
Produce 0.995 0.005 0
Receive 0.5 0.5 0
Send 0.005 0.995 0

Throughput of Timed Transitions
Transition Throughput
Done Consuming 0.5
fill 0.5
Production Complete 0.5
remove 0.5

Sojourn times for tangible states
Marking Value
M0 0.0099
M1 0.0099
M2 0.01
M3 0.0099
M4 0.01
M5 0.0099
M6 0.0099
M7 0.0099
M8 0.0099
M9 1
M10 0.0099
M11 1

State space exploration took 0.051s
Solving the steady state distribution took 0.015s
Total time was 0.747s

Finally, here is the reachability graph produced by PIPE2.

cpreachability

The saved GSPNs are stored in an XML dialect.  As you can see, PIPE2 is more than just a graphical GSPN editor; it also does some heavy lifting with the analysis.

I have been able to import the above GSPN into Mathematica but I haven't done much with it in there yet.  That's for tomorrow!
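For the curious, getting the PIPE2 file into Mathematica is just an XML import.  Here is a sketch, with "producer-consumer.xml" standing in for whatever name you saved the net under:

pnml = Import["producer-consumer.xml", "XML"];
places = Cases[pnml, XMLElement["place", attrs_, _] :> ("id" /. attrs), Infinity];
transitions = Cases[pnml, XMLElement["transition", attrs_, _] :> ("id" /. attrs), Infinity];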

-Phil

Our First Couple of Wordclouds

Mathematica now has a nifty feature that allows you to create "word clouds" from various online data sources such as Wikipedia.  So here is a tiny bit of code to do this.

words = DeleteStopwords[WikipediaData["Analytics"]];
WordCloud[words]

 

analytics_wc

OK, that was easy.  Now let's try "White Pine":
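Presumably the same two lines with a different article title, e.g.:

words = DeleteStopwords[WikipediaData["White Pine"]];
WordCloud[words]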

wc_whitepine

Of course you can change the orientation and even make it random.  Another thing you can do is make the outline of the word cloud conform to another image like this:

WordCloud[words, logo]

logoconform

Maybe not the best outline shape...