Drake: QP solver gives a wrong answer?

Created on 23 Nov 2019 · 11 comments · Source: RobotLocomotion/drake

The quadratically constrained program below gives an obviously wrong answer; does anyone know why? Thanks.

from pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve

mp = MathematicalProgram()
x = mp.NewContinuousVariables(2, "x")
mp.AddConstraint((x**2).sum() == 1.)  # unit-circle equality constraint
mp.AddLinearCost(x.sum())
result = Solve(mp)
print(result.is_success())
print(result.GetSolution(x))

The solver returns:
False
[ 0. 0.]

BTW, the example code is from http://underactuated.csail.mit.edu/Spring2019/set_3.html and https://github.com/RussTedrake/underactuated/issues/238.
--- Update ---
Since @jwnimmer-tri said he could not reproduce the problem (good news for this code), the way to reproduce it is to follow the steps at http://underactuated.csail.mit.edu/Spring2019/set_3.html and use the Drake version installed there, in a Jupyter notebook. The Drake there still runs under Python 2 and is an older release than the one used below.

Labels: manipulation, user assistance

All 11 comments

I cannot reproduce this problem.

Using the drake-20191108-bionic.tar.gz from https://github.com/RobotLocomotion/drake/releases/tag/v0.12.0, I get:

$ PYTHONPATH=~/Downloads/v0.12.0/drake/lib/python3.6/site-packages python3
Python 3.6.8 (default, Oct  7 2019, 12:59:55) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from pydrake.solvers.mathematicalprogram import (MathematicalProgram, Solve)
>>> mp = MathematicalProgram()
>>> x = mp.NewContinuousVariables(2, "x")
>>> mp.AddConstraint((x**2).sum() == 1.)
<pydrake.solvers.mathematicalprogram.Binding_Constraint object at 0x7f0e9394f0f0>
>>> mp.AddLinearCost(x.sum())
<pydrake.solvers.mathematicalprogram.Binding_LinearCost object at 0x7f0e9394f070>
>>> result = Solve(mp)
>>> print(result.is_success())
True
>>> print(result.GetSolution())
[-0.70710678 -0.70710678]

Please provide more details to reproduce your problem.

Thanks, I updated the main post with the way to reproduce it. Note that the installation takes some time, usually about half an hour...

BTW, when you add the constraint

mp.AddConstraint((x**2).sum() == 1.)

this says x[0]² + x[1]² = 1. That is a quadratic equality constraint: it is not linear, and it is not even convex. So this problem is not a QP.
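
To make the distinction concrete, here is a minimal sketch (using the same pydrake API as in the transcript above) of what a genuine QP looks like, compared with the program in this issue; which solver Drake dispatches each program to may vary by version:

from pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve

# A QP: convex quadratic cost plus *linear* constraints.
qp = MathematicalProgram()
y = qp.NewContinuousVariables(2, "y")
qp.AddQuadraticCost((y**2).sum())            # quadratic cost
qp.AddLinearConstraint(y[0] + y[1] == 1.)    # linear equality constraint
print(Solve(qp).GetSolution(y))              # a QP solver handles this directly

# The program in this issue instead constrains x[0]**2 + x[1]**2 == 1.
# Its feasible set is the unit circle, which is non-convex, so Drake must
# treat it as a general nonlinear program rather than a QP.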

True, no

True. Thanks for finding that. I fixed the formulation. However, Drake advertises that it can solve quadratic constraints, and this piece of code is given as an example. The result should not look like this in any case; jwnimmer-tri's run above also shows that. There must be something wrong with the version I'm using.

Drake advertises that it can solve quadratic constraints, and this piece of code is given as an example

Drake can solve this problem, but not as a QP; instead, Drake solves it as a general nonlinear optimization problem by calling a nonlinear solver. Since this is a non-convex problem, the nonlinear solver is not guaranteed to find a solution even if one exists; it can get trapped in a locally infeasible region. In your case, it is possible that the Drake you are using is not built with the same solver Jeremy uses. Different versions of Drake can ship different solvers (SNOPT vs. IPOPT vs. NLopt, or different versions of SNOPT, etc.). Hence in Jeremy's case the problem is solved by some nonlinear solver, but not in yours.
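
If you want to see which solver your Drake build actually dispatched to, one rough check (assuming your pydrake version binds get_solver_id() on the result, as recent releases do) is:

result = Solve(mp)
print(result.get_solver_id().name())   # name of the selected solver, e.g. SNOPT or IPOPT
print(result.is_success())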

Yes, but in any case, getting trapped in a local minimum should not return a "False" answer; it should still have given a local answer. So I think there's something wrong with the solver itself.

Yes, but in any case, getting trapped in a local minimum should not return a "False" answer; it should still have given a local answer

Since you didn't provide an initial guess in result = Solve(mp), Drake uses [0, 0] as the initial guess for the nonlinear solver. The nonlinear solver cannot find a solution locally around this initial guess (the gradient of the constraint is zero there), so it can only report "I cannot solve this problem"; hence result.is_success() is False.

In nonlinear optimization, finding a feasible solution can be as hard as finding the optimal solution. This problem is not trapped in a local minimum; it is trapped in a locally infeasible region. The usual fix is to provide a different initial guess explicitly, for example result = Solve(mp, np.array([1, 1])).
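
A minimal sketch of that fix, reusing mp and x as defined at the top of this issue:

import numpy as np
from pydrake.solvers.mathematicalprogram import Solve

# The default initial guess [0, 0] is exactly where the gradient of
# x[0]**2 + x[1]**2 vanishes, so the solver cannot make progress from it.
# Starting from any point away from the origin works much better:
result = Solve(mp, np.array([1., 1.]))
print(result.is_success())
print(result.GetSolution(x))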

Thank you so much, you are right. I tried several different initial guesses, and most of them gave the right answer. Solve(mp, np.array([1, 1])) gives [0.70710678 0.70710678]... Compared to jwnimmer-tri's run above, I can't believe the Python 2 version of the Drake solver is so weak?

Right, [1/sqrt(2), 1/sqrt(2)] is a local maximum.

I think we probably shipped different nonlinear solvers in the Python 2 version vs. the Python 3 version. Anyway, solving a non-convex nonlinear problem is a bit like alchemy: it is not guaranteed to work, and we can only improve its performance with some human intuition plus trial and error. So if you have a nonlinear, non-convex problem that you think should be feasible, but the solver reports infeasibility, you could try multiple initial guesses.
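
A small sketch of that trial-and-error strategy, reusing mp and x from above (the number of restarts and the sampling distribution here are arbitrary choices, for illustration only):

import numpy as np
from pydrake.solvers.mathematicalprogram import Solve

best = None
for _ in range(10):
    result = Solve(mp, np.random.randn(2))   # random initial guess
    if result.is_success():
        if best is None or result.get_optimal_cost() < best.get_optimal_cost():
            best = result                    # keep the lowest-cost feasible answer

if best is not None:
    print(best.GetSolution(x), best.get_optimal_cost())

# On the unit circle the cost x.sum() is +sqrt(2) at [1/sqrt(2), 1/sqrt(2)]
# (the local maximum above) and -sqrt(2) at [-1/sqrt(2), -1/sqrt(2)], which is
# the true minimum.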

Speaking of the difference between the Python 2 and Python 3 versions, maybe the only difference here is that the default initial guess changed from 0 to something random, I guess.

I think this issue is resolved? Please reopen this issue if you have further questions. Thanks!
