Add "Wolfy" Functions to Mathematica!

If you are using Linux or Apple, your mileage may vary with this post.  If you figure out an equivalent approach for Apple or Linux, it would be cool if you posted your method as a comment to this blog!

I am going to cover creating a Windows 10 keyboard macro for Mathematica and show how it can save you time with Wolfram built-in functions.  I will use some set theory functions as an example of how this can spare your carpal tunnels, namely Union[], Intersection[], and Complement[].

In Windows 10 you should have the Microsoft Mouse and Keyboard Center app, which you can download here if you don't have it.  Open the app and it will show your devices.  Mine opens to my Microsoft mouse, so I scroll over to the keyboard I want to create hot keys for, and it looks like this.

Phil's Keyboard

First we will add a macro that types the keys needed to get the Wolfram wolf icon, which are [Esc Key]wolf[Esc Key].

Click app-specific settings, scroll to Mathematica (don't pick the Kernel!), and pick that as your app.  My screen looks like this now:

Now we are ready to add a macro.  Scroll to New and click it, then create a new macro.  I named mine "Wolfy", of course.  Use the special keys menu in the macro editor to select the Escape key, type wolf in lower case, and add the Escape key again.  Your screen should look something like this.

We are not done; we need to assign our macro to a key or a key combination, like Alt-W for example.  Just hit the back arrow (the macro is saved automagically) and keep going back to the base menu.  Scroll until you see Assign a macro (Wolfy).  My screen looks like this.

Click the CBS-eye-looking icon at the top right of the keyboard image to toggle the keys available for programming.  I picked the Task View key.  Now scroll to the macro and pick Wolfy, or whatever you called yours.  If you dig around a bit in the tool you can find out how to assign macros to key combinations like Windows-W, but I'm lazy and that would make this post even longer!

Go into Mathematica and press your hot key (or hot key combination, if you figured that out) and you will see Wolfy magically appear.

OK.  Why is this useful?

Hopefully I can motivate you with what motivated me.  I have been doing a bunch of set theory work lately.  I found the Wolfram built-in functions useful, but I wanted to add some additional logic and protection to my functions.  I also wanted hot keys for these functions, with the bonus of an icon in the function name, i.e. Wolfram's wolf.  Let's say I want to use the built-in set functions Union, Intersection, and Complement.  We can use the Wolfy icon in our new function names, which makes them short, fast to type, and cute.  Here are the function definitions with some exception handling added to each function.
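The actual definitions live in the notebook linked at the end; as a minimal sketch (the wolf icon is rendered here as \[WolframLanguageLogo], and the message names and simple list checks standing in for the exception handling are illustrative, not the notebook's exact code), they could look like this:

\[WolframLanguageLogo]Union::nolist = "Both arguments must be lists.";
\[WolframLanguageLogo]Union[a_List, b_List] := Union[a, b]
\[WolframLanguageLogo]Union[___] := (Message[\[WolframLanguageLogo]Union::nolist]; $Failed)

\[WolframLanguageLogo]Intersection::nolist = "Both arguments must be lists.";
\[WolframLanguageLogo]Intersection[a_List, b_List] := Intersection[a, b]
\[WolframLanguageLogo]Intersection[___] := (Message[\[WolframLanguageLogo]Intersection::nolist]; $Failed)

\[WolframLanguageLogo]Complement::nolist = "Both arguments must be lists.";
\[WolframLanguageLogo]Complement[a_List, b_List] := Complement[a, b]
\[WolframLanguageLogo]Complement[___] := (Message[\[WolframLanguageLogo]Complement::nolist]; $Failed)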

Let's test our new keyboard-macro Wolfy functions.
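A quick check against the sketch definitions above:

\[WolframLanguageLogo]Union[{1, 2, 3}, {3, 4}]           (* {1, 2, 3, 4} *)
\[WolframLanguageLogo]Intersection[{1, 2, 3}, {3, 4}]    (* {3} *)
\[WolframLanguageLogo]Complement[{1, 2, 3, 4}, {3, 4}]   (* {1, 2} *)
\[WolframLanguageLogo]Union[{1, 2}, 7]                   (* issues the message and returns $Failed *)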

If you found this post useful you can download the Wolfram Notebook from here.  If you have any ideas on how to improve this post, or figure out how to add "Wolfy" functions to Linux or Mac, leave a response.  By the way, the "Spikey" icon is just to the left of the Wolfy icon, so adding Spikey functions would work the same way.

Have fun with Mathematica!

Phil

Mathematica NMinimize[] vs Julia with the CPLEX Solver

I set up a simple convex optimization problem and pointed both Mathematica and Julia at it.  I am using the commercial CPLEX solver from IBM with Julia.  The problem is documented in this book.


Here is the Julia code:

using JuMP, CPLEX

m = Model(solver = CplexSolver())   # JuMP model (old, pre-0.19 syntax) backed by CPLEX

@variable(m, x[1:2])                # two decision variables
@objective(m, Min, (x[1] - 3)^2 + (x[2] - 4)^2)
@constraint(m, (x[1] - 1)^2 + (x[2] + 1)^2 <= 1)

println("Problem As Interpreted by Model")
print(m)
status = solve(m)

println("*** Objective value: ", getobjectivevalue(m))
println("*** Optimal solution: ", getvalue(x))

Here is the output of the Julia code above:

Problem As Interpreted by Model
Min x[1]² + x[2]² - 6 x[1] - 8 x[2] + 25
Subject to
 x[1]² + x[2]² - 2 x[1] + 2 x[2] + 1 <= 0
 x[i] free for all i in {1,2}
Tried aggregator 1 time.
Aggregator did 1 substitutions.
Reduced QCP has 6 rows, 8 columns, and 12 nonzeros.
Reduced QCP has 2 quadratic constraints.
Presolve time = 0.00 sec. (0.00 ticks)
Parallel mode: using up to 8 threads for barrier.
Number of nonzeros in lower triangle of A*A' = 11
Using Approximate Minimum Degree ordering
Total time for automatic ordering = 0.00 sec. (0.00 ticks)
Summary statistics for Cholesky factor:
  Threads                   = 8
  Rows in Factor            = 6
  Integer space required    = 6
  Total non-zeros in factor = 21
  Total FP ops to factor    = 91
 Itn      Primal Obj        Dual Obj  Prim Inf Upper Inf  Dual Inf Inf Ratio
   0  1.8284271e+000 -1.0000000e+000 1.97e+000 0.00e+000 1.70e+001 1.00e+000
   1 -7.6831919e+000 -5.9358407e+000 1.97e+000 0.00e+000 1.70e+001 2.46e-001
   2 -4.5996777e+000 -4.2164027e+000 1.19e+000 0.00e+000 1.03e+001 5.97e-001
   3 -6.1584389e+000 -6.1095871e+000 6.63e-001 0.00e+000 5.73e+000 4.29e+000
   4 -5.8155562e+000 -5.8268221e+000 9.46e-002 0.00e+000 8.19e-001 2.00e+001
   5 -5.8335646e+000 -5.8363252e+000 3.15e-002 0.00e+000 2.72e-001 5.24e+001
   6 -5.8029020e+000 -5.8018579e+000 1.12e-002 0.00e+000 9.70e-002 4.91e+001
   7 -5.7931959e+000 -5.7920153e+000 9.92e-003 0.00e+000 8.58e-002 1.07e+002
   8 -5.7725419e+000 -5.7725795e+000 4.17e-003 0.00e+000 3.61e-002 8.95e+002
   9 -5.7710135e+000 -5.7709267e+000 5.93e-004 0.00e+000 5.13e-003 2.69e+003
  10 -5.7703538e+000 -5.7703509e+000 1.46e-004 0.00e+000 1.26e-003 8.58e+004
  11 -5.7703297e+000 -5.7703297e+000 4.47e-006 0.00e+000 3.87e-005 1.32e+007
  12 -5.7703296e+000 -5.7703296e+000 2.89e-008 0.00e+000 2.50e-007 8.44e+007
*** Objective value: 19.229670381903375
*** Optimal solution: [1.37144,-0.0715412]

Mathematica solves the problem very quietly with NMinimize[], but returns a different answer.

Here is the Mathematica code and the output:

(* Convex optimization problem *)
objectiveFunc = (x - 3)^2 + (y - 4)^2;
constraintFunc = (x - 1)^2 + (y - 4)^2 <= 1;  (* note: the Julia model above constrains (x[2] + 1)^2, i.e. (y + 1)^2 *)
v = NMinimize[{objectiveFunc, constraintFunc}, {x, y}]
{1., {x -> 2., y -> 4.}}

That does not match the CPLEX result above.  Note, though, that the constraint as typed here uses (y - 4)^2 where the Julia model used (y + 1)^2, so the two solvers were not handed exactly the same problem.  I have also heard that NMinimize[] does not do well with convex optimization problems.  I'm trying the Gurobi solver next if I can get it to cooperate with Julia.
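As a sanity check (a sketch, not from the original notebook), the Mathematica formulation with the same constraint as the Julia model would look like this:

(* same objective, but with the (y + 1)^2 constraint used in the JuMP model *)
NMinimize[{(x - 3)^2 + (y - 4)^2, (x - 1)^2 + (y + 1)^2 <= 1}, {x, y}]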

A Brief Foray into Electrostatic Vector Fields with Mathematica

We examine electrostatic vector field functions in this post.  This is roughly patterned after Chapter 1 of Schey's classic book Div, Grad, Curl, and All That...

2D Position Vector

Now a tangential vector field:

Tangential 2D vector field

Now let's look at the force between two charged particles in free space, in MKS units.  This law of physics was discovered by the French physicist Charles-Augustin de Coulomb (14 June 1736 - 23 August 1806).  He is best known for developing Coulomb's law, the definition of the electrostatic force of attraction and repulsion, but he also did important work on friction.  The SI unit of electric charge, the coulomb, was named after him.

Coulomb's Law
Electrostatic Force Interaction

Try the above electrostatics App in the Wolfram Cloud here.
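The plots above are screenshots; a minimal sketch of this kind of field plot in Mathematica (the constants and plot ranges are illustrative, not the notebook's) could look like this:

(* Coulomb force magnitude between two point charges in MKS units; the numeric constant is the vacuum permittivity *)
coulombForce[q1_, q2_, r_] := q1 q2/(4 Pi 8.8541878128*^-12 r^2)

(* Direction and 1/r^2 falloff of the field of a positive point charge at the origin, as a 2D vector field *)
VectorPlot[{x, y}/(x^2 + y^2)^(3/2), {x, -2, 2}, {y, -2, 2},
 RegionFunction -> Function[{x, y}, x^2 + y^2 > 0.05]]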


Exercise 1-1 (a)

Exercise 1-1 (b)

 

Exercise 1-1 (c)


You get the idea.  More worked out electrostatic vector fields can be found in the Wolfram notebook here.

Mathematica Shortest Path Min / Max Flow on Random Graphs

We present here a simple program to generate a variety of random graphs with a variable number of vertices and edges and random edge weights, either directed or undirected.  You can put an insane number of vertices in and it will run... but it will take a while.

Compute Shortest Path Max / Min Flow on Random Graph

If you want the random graph to be directed, change the DirectedEdges-> parameter in the RandomGraph[] function.  Also, if you want a different number of vertices or a different edge density, adjust the numVertices and numEdges parameters as it suits you.  Here is an example output.  Edge weights are distributed uniformly at random over [0, 1].

Shortest Path through Random Graph
Maximum Flow
Min Flow Cost
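The actual code is shown above only as a screenshot, so here is a minimal sketch of this kind of setup (numVertices and numEdges follow the post; the rest is illustrative, and we assume the random graph comes out connected):

numVertices = 50; numEdges = 150;
g = RandomGraph[{numVertices, numEdges}, DirectedEdges -> False];
weights = RandomReal[{0, 1}, EdgeCount[g]];                    (* uniform random edge weights on [0, 1] *)
g = Graph[VertexList[g], EdgeList[g], EdgeWeight -> weights, EdgeCapacity -> weights];
path = FindShortestPath[g, 1, numVertices];                    (* weighted shortest path from vertex 1 to vertex numVertices *)
flow = FindMaximumFlow[g, 1, numVertices];                     (* maximum flow, using the same weights as capacities *)
HighlightGraph[g, PathGraph[path]]                             (* highlight the shortest path *)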

Here is a shortest path plot with 1500 vertices.  It works, but it gets difficult to visualize at this scale...  You can sort of see the red vertices in there, right? 🙂

1500 Vertices Shortest Path

You can download the Mathematica notebook for this post here.

Comparing Julia with Mathematica LinearProgramming

I like Mathematica, but Wolfram's LinearProgramming (LP) built-in function's syntax is horrifically complicated and, dare I say, rather unintuitive.

I suppose we could put these values into variables to make what we are doing a bit clearer.

Improved Readability, but Still a Bit Obtuse
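The screenshots above are images, so here is a hedged toy example (not the post's numbers) of the named-variable style.  LinearProgramming[c, m, b] minimizes c.x subject to m.x >= b and x >= 0, which is why the <= constraints get negated:

(* maximize x + 2y subject to x + y <= 4 and x + 3y <= 6, with x, y >= 0 *)
c = {-1, -2};               (* minimize -x - 2y *)
m = {{-1, -1}, {-1, -3}};   (* -x -  y >= -4  and  -x - 3y >= -6 *)
b = {-4, -6};
LinearProgramming[c, m, b]  (* {3, 1} for this toy problem *)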

By way of contrast, the JuMP package syntax for LP is very beautiful and matches the typical LP problem setup, with descriptive variable names.

Julia Linear Programming Syntax

The performance on this small problem is not worth comparing.  I will be comparing performance on some bigger problems between Mathematica and Julia.

References

1. Linear Programming
2. Wolfram Linear Programming Function
3. Julia Programming for Operations Research
4. Julia GitHub Source for this post
5. Wolfram GitHub Notebook for this post

Screen Scraping Plots / Charts to Acquire Raw Data

Sometimes you just don't have the raw data and all you have is a plot.  This tool (link below) does a reasonable job on a variety of plots.

WebPlotDigitizer

I tried it on this chart exported from Mathematica as a JPEG.
chartscraped

You have to calibrate your axes and then pick the trace color of the plot, which it offers in a menu of dominant colors derived from the chart.  It's pretty cool that the points I got back had a correlation of nearly 1 with the original data in Mathematica.
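For example, assuming the digitized points are exported as a CSV (the file and variable names here are hypothetical), the correlation check in Mathematica is just a couple of lines:

scraped = Import["scraped_points.csv"];                      (* {x, y} pairs exported from WebPlotDigitizer *)
origF = Interpolation[originalData];                         (* originalData: the {x, y} list behind the original chart *)
Correlation[scraped[[All, 2]], origF /@ scraped[[All, 1]]]   (* should come out close to 1 *)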

The tool also has a WebCam option and can import and export JSON, which is nice as well.  I give it an overall rating of 9.5/10!  I still haven't found my holy-grail screen scraper, but this one works well for manual scraping of charts.

Also see this article: http://connectedresearchers.com/extracting-data-from-plots-images-and-maps-with-webplotdigitizer/ for more information about this tool.

Kudos to:

Author: Ankit Rohatgi

Title: WebPlotDigitizer

Website: http://arohatgi.info/WebPlotDigitizer

Version: 3.10

Date: May, 2016

E-Mail: ankitrohatgi@hotmail.com

Location: Austin, Texas, USA

Machine Learning Prediction for Motion Sensors In Mathematica

 

Let's assume we have a motion sensor that provides us X and Y data for a target where the measurement error is normally distributed with parameters:

μ = 0.0

σ = 5.2

First we will need to train our prediction model with some training data.  We add a square term to give the target a parabolic path.

snip1
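The snippet above is a screenshot; a minimal sketch of that kind of training data (the parabola coefficients are illustrative, though trackData1 follows the post) might be:

trackData1 = Table[{t, 0.02 t^2 + RandomVariate[NormalDistribution[0.0, 5.2]]}, {t, 1., 200.}];  (* 200 noisy samples on a parabolic path *)
trainingData = Rule @@@ trackData1;                                                              (* {x -> y, ...} pairs for Predict *)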

 

 

Now build the five different supported predictor functions:

snip2
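A sketch of building the five predictors with Predict (these are the standard Predict method names; the notebook may differ):

methods = {"LinearRegression", "GaussianProcess", "NearestNeighbors", "NeuralNetwork", "RandomForest"};
predictors = AssociationMap[Predict[trainingData, Method -> #] &, methods];
Information /@ predictors   (* produces grids like the ones summarized below *)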

With minimal parameter input, here is the Information for the predictors as built with default parameters:

Linear regression: Number of features 1, Number of training examples 200, L1 regularization coefficient 0, L2 regularization coefficient 1.

Gaussian process: Number of features 1, Number of training examples 200, AssumeDeterministic False, Numerical covariance type SquaredExponential, Nominal covariance type HammingDistance, EstimationMethod MaximumPosterior, OptimizationMethod FindMinimum.

K-nearest neighbors: Number of features 1, Number of training examples 200, Number of neighbors 10, Distance function EuclideanDistance.

Neural network: Number of features 1, Number of training examples 200, L1 regularization coefficient 0, L2 regularization coefficient 0.1, Number of hidden layers 2, Hidden nodes 3, 3, Hidden layer activation functions Tanh, Tanh.

Random forest: Number of features 1, Number of training examples 200, Number of trees 50.

Now we build a new random sensor track and test it against our trained prediction functions.

snip3
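Again a sketch, assuming the test track uses the same parabolic model with fresh noise:

trackData2 = Table[{t, 0.02 t^2 + RandomVariate[NormalDistribution[0.0, 5.2]]}, {t, 1., 200.}];  (* a new noisy track *)
predictors["GaussianProcess"][100.]                                                              (* predicted y at x = 100 *)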

Here is our "training track" plotted against the trackData1 we created.

plot1

We create a plot function that lets us pass track data and show the prediction, confidence interval, and raw data using a linear regression predictor:

plot2
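One way such a plot function could look, sketched here with a two-standard-deviation band from the predictor's "StandardDeviation" property (the notebook's version may differ):

plotPrediction[pf_, data_] := Show[
  Plot[{pf[x] - 2 pf[x, "StandardDeviation"], pf[x], pf[x] + 2 pf[x, "StandardDeviation"]},
   {x, Min[data[[All, 1]]], Max[data[[All, 1]]]}, Filling -> {1 -> {3}}],   (* prediction curve with a shaded band *)
  ListPlot[data, PlotStyle -> Red]]                                         (* raw track data on top *)
plotPrediction[predictors["LinearRegression"], trackData2]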

We decide we'd rather parameterize the prediction method, so we steal a cool method found on the Wolfram site.

snip4

Finally let's plot them all in one combined chart.

snip6

Voilà, we get this chart:

plot3

The Gaussian Process provides the best fit and tightest confidence interval for our track predictor.

You can download the CDF and/or the Notebook for this work.  Enjoy and let me know if you find something wrong or improve it.

Thanks,

Phil Neumiller

Raspberry Pi has Wolfram Mathematica: Let's Build Something

For builders out there who love the Raspberry Pi, I recently thought of an interesting application of Mathematica on the Pi.

How about a game camera that recognizes only the animals you want to capture in memory on your game/animal (for you vegans) cam?  Let's say you just want to see deer, and not hunters, hikers, or all those blasted raccoons.

It turns out to be pretty simple with Mathematica, provided your Pi has access to the Internet and Wolfram's curated data.

Here is an image of a mature whitetail buck.

mature-deer

Here is the one-line Mathematica code to "recognize" the critter, and the one line of code to display the results as a word cloud.

mathematica_command_and_result
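The command is shown above as a screenshot; a sketch of those two one-liners (the file name is a placeholder) could be:

id = ImageIdentify[Import["mature-deer.jpg"], All, 10, "Probability"];  (* top ten identifications with their probabilities *)
WordCloud[KeyMap[CommonName, id]]                                       (* weight each concept's common name by its probability *)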

Since the deer's tail cannot be clearly seen in the photo, the most prominent feature of the image is simply "it's a deer!"

Let's try a picture where the tail can be seen on the deer.

whitetail2

Here are the results:

whitetailresults

Pretty cool huh?  Now let's try and trick it.

racoon

Well it has raccoon as one of the critters but didn't list deer.

racoon_results

There will be false positives...  But you can use machine learning on your captured images to train the system to be better all the time!

You can try this yourself at Wolfram's Image Identify project here.

Thanks for reading, and enjoy Mathematica!

Phil

Disclaimer:

Make sure to check the Wolfram licensing -- this is strictly for non-commercial use.

Generalized Stochastic Petri Net Analysis with PIPE2

Recently I downloaded PIPE2, the Platform Independent Petri net Editor written in Java.  It's an impressive, free, open-source tool that allows you to model Generalized Stochastic Petri Nets (GSPNs).

So I fired it up and decided to build a simple Producer / Consumer model, aka the "bounded buffer problem".  Here's the GSPN I drew with the PIPE2 editor.

Notice that the firing rate for the producer is 100 times that of the consumer (on both timed transitions).  This means we would expect the input buffers to be highly utilized most of the time, and the PIPE2 analysis shows this.

gspn

Here is PIPE2's classification of the above GSPN.

Petri net classification results

State Machine false
Marked Graph true
Free Choice Net true
Extended Free Choice Net true
Simple Net true
Extended Simple Net true

Here is PIPE2's GSPN analysis of the Petri Net.

GSPN Steady State Analysis Results

Set of Tangible States
Busy Buffers Consume Free Buffers Produce Receive Send
M0 0 1 2 1 0 0
M1 0 1 2 0 0 1
M2 0 0 2 1 1 0
M3 1 1 1 1 0 0
M4 0 0 2 0 1 1
M5 1 1 1 0 0 1
M6 1 0 1 1 1 0
M7 2 1 0 1 0 0
M8 1 0 1 0 1 1
M9 2 1 0 0 0 1
M10 2 0 0 1 1 0
M11 2 0 0 0 1 1

Steady State Distribution of Tangible States
Marking Value
M0 0
M1 0
M2 0
M3 0
M4 0
M5 0.00495
M6 0
M7 0.0049
M8 0.00005
M9 0.49015
M10 0.0001
M11 0.49985

Average Number of Tokens on a Place
Place Number of Tokens
Busy Buffers 1.995
Consume 0.5
Free Buffers 0.005
Produce 0.005
Receive 0.5
Send 0.995

Token Probability Density
µ=0 µ=1 µ=2
Busy Buffers 0 0.005 0.995
Consume 0.5 0.5 0
Free Buffers 0.995 0.005 0
Produce 0.995 0.005 0
Receive 0.5 0.5 0
Send 0.005 0.995 0

Throughput of Timed Transitions
Transition Throughput
Done Consuming 0.5
fill 0.5
Production Complete 0.5
remove 0.5

Sojourn times for tangible states
Marking Value
M0 0.0099
M1 0.0099
M2 0.01
M3 0.0099
M4 0.01
M5 0.0099
M6 0.0099
M7 0.0099
M8 0.0099
M9 1
M10 0.0099
M11 1

State space exploration took 0.051s
Solving the steady state distribution took 0.015s
Total time was 0.747s

Finally, here is the reachability graph produced by PIPE2.

cpreachability

The file format of the saved GSPNs is an XML dialect.  As you can see, PIPE2 is more than just a graphical GSPN editor; it also does some heavy lifting with the analysis.

I have been able to import the above GSPN into Mathematica, but I haven't done much with it there yet.  That's for tomorrow!
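Since the saved net is just an XML dialect, the first step in Mathematica is a plain XML import.  A sketch (the file name is a placeholder, and the "place" element name assumes the usual PNML-style layout):

pnml = Import["producer_consumer.xml", "XML"];
placeIds = Cases[pnml, XMLElement["place", attrs_, _] :> Lookup[attrs, "id"], Infinity]   (* pull out the place identifiers *)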

-Phil

Our First Couple of Wordclouds

Mathematica now has a nifty feature that allows you to create "word clouds" from various online data sources such as Wikipedia.  So here is a tiny bit of code to do this.

words = DeleteStopwords[WikipediaData["Analytics"]];  (* pull the article text and drop the stop words *)
WordCloud[words]

 

analytics_wc

OK, that was easy.  Now let's try "White Pine":
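Presumably the same pattern with a different article (the exact article title Wikipedia resolves may differ):

WordCloud[DeleteStopwords[WikipediaData["White Pine"]]]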

wc_whitepine

Of course you can change the orientation and even make it random.  Another thing you can do is make the outline of the word cloud conform to another image like this:

WordCloud[words, logo]  (* logo is an image loaded earlier, e.g. logo = Import["logo.png"] *)

logoconform

Maybe not the best outline shape...