Mathematica NMinimize[] vs Julia with the CPLEX Solver

I set up a simple convex optimization problem and pointed both Mathematica and Julia at it.  With Julia I am using the commercial CPLEX solver from IBM.  The problem is documented in this book.


Here is the Julia code:

using JuMP, CPLEX   # legacy JuMP (pre-0.19) syntax with the old solver interface

m = Model(solver = CplexSolver())

@variable(m, x[1:2])                           # two free decision variables
@objective(m, Min, (x[1]-3)^2 + (x[2]-4)^2)    # squared distance to the point (3, 4)

@constraint(m, (x[1]-1)^2 + (x[2]+1)^2 <= 1)   # unit disk centered at (1, -1)

println("Problem As Interpreted by Model")
print(m)                                       # show JuMP's expanded form of the model
status = solve(m)

println("*** Objective value: ", getobjectivevalue(m))
println("*** Optimal solution: ", getvalue(x))

Here is the output of the Julia code above:

Problem As Interpreted by Model
Min x[1]² + x[2]² - 6 x[1] - 8 x[2] + 25
Subject to
 x[1]² + x[2]² - 2 x[1] + 2 x[2] + 1 <= 0
 x[i] free for all i in {1,2}
Tried aggregator 1 time.
Aggregator did 1 substitutions.
Reduced QCP has 6 rows, 8 columns, and 12 nonzeros.
Reduced QCP has 2 quadratic constraints.
Presolve time = 0.00 sec. (0.00 ticks)
Parallel mode: using up to 8 threads for barrier.
Number of nonzeros in lower triangle of A*A' = 11
Using Approximate Minimum Degree ordering
Total time for automatic ordering = 0.00 sec. (0.00 ticks)
Summary statistics for Cholesky factor:
  Threads                   = 8
  Rows in Factor            = 6
  Integer space required    = 6
  Total non-zeros in factor = 21
  Total FP ops to factor    = 91
 Itn      Primal Obj        Dual Obj  Prim Inf Upper Inf  Dual Inf Inf Ratio
   0  1.8284271e+000 -1.0000000e+000 1.97e+000 0.00e+000 1.70e+001 1.00e+000
   1 -7.6831919e+000 -5.9358407e+000 1.97e+000 0.00e+000 1.70e+001 2.46e-001
   2 -4.5996777e+000 -4.2164027e+000 1.19e+000 0.00e+000 1.03e+001 5.97e-001
   3 -6.1584389e+000 -6.1095871e+000 6.63e-001 0.00e+000 5.73e+000 4.29e+000
   4 -5.8155562e+000 -5.8268221e+000 9.46e-002 0.00e+000 8.19e-001 2.00e+001
   5 -5.8335646e+000 -5.8363252e+000 3.15e-002 0.00e+000 2.72e-001 5.24e+001
   6 -5.8029020e+000 -5.8018579e+000 1.12e-002 0.00e+000 9.70e-002 4.91e+001
   7 -5.7931959e+000 -5.7920153e+000 9.92e-003 0.00e+000 8.58e-002 1.07e+002
   8 -5.7725419e+000 -5.7725795e+000 4.17e-003 0.00e+000 3.61e-002 8.95e+002
   9 -5.7710135e+000 -5.7709267e+000 5.93e-004 0.00e+000 5.13e-003 2.69e+003
  10 -5.7703538e+000 -5.7703509e+000 1.46e-004 0.00e+000 1.26e-003 8.58e+004
  11 -5.7703297e+000 -5.7703297e+000 4.47e-006 0.00e+000 3.87e-005 1.32e+007
  12 -5.7703296e+000 -5.7703296e+000 2.89e-008 0.00e+000 2.50e-007 8.44e+007
*** Objective value: 19.229670381903375
*** Optimal solution: [1.37144,-0.0715412]
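As a sanity check on the CPLEX result, this small problem has a closed-form answer: the minimizer is the point of the constraint disk closest to (3, 4).  A quick pure-Julia check, no solver needed (this is my own verification sketch, not part of the original run):

```julia
# Objective center p, constraint disk center c and radius r, taken from the model above.
p = [3.0, 4.0]
c = [1.0, -1.0]
r = 1.0

# Since p lies outside the disk, the minimizer is c + r*(p - c)/||p - c||,
# and the optimal objective value is (||p - c|| - r)^2.
d = p .- c
nd = sqrt(sum(abs2, d))          # ||p - c|| = sqrt(29)
xstar = c .+ r .* d ./ nd        # projection onto the disk boundary
obj = (nd - r)^2                 # optimal objective value

println(xstar)   # ≈ [1.3714, -0.0715], matching CPLEX
println(obj)     # ≈ 19.2297, matching CPLEX
```

The analytic value is 30 - 2√29 ≈ 19.22967, which agrees with the CPLEX objective above to solver tolerance.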

Mathematica solves the problem very quietly with NMinimize[], but returns a very different answer.

Here is the Mathematica code and the output:

(* Convex Optimization Problem *)
objectiveFunc = (x - 3)^2 + (y - 4)^2;
constraintFunc = (x - 1)^2 + (y - 4)^2 <= 1;
v = NMinimize[{objectiveFunc, constraintFunc}, {x, y}]
{1., {x -> 2., y -> 4.}}

That is not close to the Julia result.  Part of the difference is my own doing: the Mathematica constraint above is (x - 1)^2 + (y - 4)^2 <= 1, while the Julia model used (x[1] - 1)^2 + (x[2] + 1)^2 <= 1, so the two runs are not solving the same problem.  For the constraint as written, a minimum of 1 at x = 2, y = 4 is in fact correct: (2, 4) is the point of the disk centered at (1, 4) closest to (3, 4).  That said, I have heard that NMinimize[] does not always do well with convex optimization problems.  I'm trying the Gurobi solver next, if I can get it to cooperate with Julia.
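For an apples-to-apples comparison, the Mathematica constraint would need its circle centered at (1, -1) to match the Julia model.  A quick sketch of that run (I have not tuned any NMinimize[] options, so no output is shown here):

```mathematica
(* Same objective, but with the constraint circle centered at (1, -1),
   matching the Julia model. *)
objectiveFunc = (x - 3)^2 + (y - 4)^2;
constraintFunc = (x - 1)^2 + (y + 1)^2 <= 1;
NMinimize[{objectiveFunc, constraintFunc}, {x, y}]
```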

Comparing Julia with Mathematica LinearProgramming

I like Mathematica, but the syntax of Wolfram's built-in LinearProgramming (LP) function is horrifically complicated and, dare I say, rather unintuitive.
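Since the original screenshot is not reproduced here, a hedged sketch of the kind of call I mean (the LP itself is made up for illustration).  LinearProgramming[c, m, b] minimizes c.x with x >= 0 by default, where each entry of b is a {rhs, relation} pair and relation -1 means "<=":

```mathematica
(* Maximize x1 + 2 x2  subject to  x1 + x2 <= 10, x1 <= 5, x >= 0.
   Everything is packed positionally: cost vector, constraint matrix,
   then {rhs, relation} pairs. *)
LinearProgramming[{-1, -2}, {{1, 1}, {1, 0}}, {{10, -1}, {5, -1}}]
```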

I suppose we could put these values into variables to make what we are doing a bit clearer.

Improved Readability, but Still a Bit Obtuse
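Something like the following is what I have in mind; the LP and the variable names here are mine, made up for illustration:

```mathematica
(* A made-up LP: maximize x1 + 2 x2 subject to x1 + x2 <= 10, x1 <= 5, x >= 0.
   Naming the positional arguments makes the call somewhat easier to read. *)
costVector = {-1, -2};                  (* minimize -x1 - 2 x2 *)
constraintMatrix = {{1, 1}, {1, 0}};
rhsAndRelations = {{10, -1}, {5, -1}};  (* -1 means "<=" *)
LinearProgramming[costVector, constraintMatrix, rhsAndRelations]
```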

By way of contrast, the JuMP package's syntax for LP is very clean: it matches the typical mathematical-programming problem setup and allows descriptive variable names.

Julia Linear Programming Syntax
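To make the contrast concrete, here is a hedged sketch of a small made-up LP (maximize x1 + 2 x2 subject to x1 + x2 <= 10 and x1 <= 5, with nonnegative variables) in JuMP's modeling syntax, using the same legacy JuMP/CplexSolver API as the convex problem above:

```julia
using JuMP, CPLEX

lp = Model(solver = CplexSolver())
@variable(lp, x1 >= 0)
@variable(lp, x2 >= 0)
@objective(lp, Max, x1 + 2x2)     # the objective is stated directly...
@constraint(lp, x1 + x2 <= 10)    # ...and each constraint reads as written
@constraint(lp, x1 <= 5)
solve(lp)

println(getvalue(x1), " ", getvalue(x2))
```

Every piece of the model is written the way you would write it on paper, instead of being flattened into positional vectors and matrices.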

The performance difference on a problem this small is not worth comparing.  I will be comparing Mathematica and Julia on some bigger problems.


  1. Linear Programming
  2. Wolfram Linear Programming Function
  3. Julia Programming for Operations Research
  4. Julia GitHub Source for this post
  5. Wolfram GitHub Notebook for this post