 # Infeasible initial guess

#1

Hi Joe, I was wondering if the initial guess must always be feasible. When I run the “simple_inequality.m” example, if I modify the start point to be slightly infeasible (e.g. line 72, x = [.32;.32]), then the system does not converge (at least for the json files I have tried: newton_cg.json, bfgs.json, newton_cg_backtracking.json). I didn’t find much help in the documentation for this. Thanks!

Nate

#2

Hi Nate! Sort of. The requirement is that the inequality constraint needs to be strictly feasible, but the equality constraint does not. In this case, the inequality constraint is:

``````
h(x,y) = [ x + 2y - 1 ] >= 0
         [ 2x + y - 1 ]
``````

Therefore, changing the initial starting solution to `[0.32;0.32]` means that

``````
h(0.32,0.32) = [ 0.32 + 2(0.32) - 1 ] = [ -0.04 ]
               [ 2(0.32) + 0.32 - 1 ]   [ -0.04 ]
``````
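As a quick sanity check, here’s a small plain-Python sketch (independent of the solver) that evaluates the constraint at that starting point and confirms it isn’t strictly feasible:

```python
def h(x, y):
    # The inequality constraint from simple_inequality.m: h(x, y) >= 0
    return [x + 2*y - 1, 2*x + y - 1]

values = h(0.32, 0.32)
# Both components come out to roughly -0.04, so the point is infeasible.
print(all(v > 0 for v in values))  # False: not strictly feasible
```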

which violates this constraint. If we don’t know a strictly feasible solution, we can modify the problem to find one by adding a slack variable. Basically, we transform the problem from

``````
min f(x) st h(x) >= 0
``````

to

``````
min f(x) st h(x) - s = 0, s >= 0
``````

and then set the initial guess for `s` to something like the identity element (the vector of all ones). Computationally, there is a cost to doing this: there are more variables, and we now need to deal with equality constraints, which means a preconditioner. However, that’s the price we pay. Technically speaking, we could do a phase 1/phase 2 method where we first solve for feasibility and then for optimality, but, frankly, just add a slack and it’ll probably work better.
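To make the reformulation concrete, here’s a hedged Python sketch of the transformed constraints. The function names are illustrative, not Optizelle’s API; the point is that the original guess can stay infeasible in `x` while the slack `s` starts strictly positive:

```python
def h(x, y):
    # Original inequality constraint: h(x, y) >= 0
    return [x + 2*y - 1, 2*x + y - 1]

def equality_residual(x, y, s):
    # New equality constraint: g(x, y, s) = h(x, y) - s = 0,
    # which the solver drives to zero during optimization.
    hx = h(x, y)
    return [hx[i] - s[i] for i in range(len(hx))]

# The previously infeasible guess is now fine for x...
x0, y0 = 0.32, 0.32
# ...because the inequality has moved onto s, started at the identity element.
s0 = [1.0, 1.0]
print(all(si > 0 for si in s0))  # True: s0 strictly satisfies s >= 0
```

The equality residual at the initial guess is nonzero (roughly -1.04 in each component), but that’s fine: as noted above, only the inequality needs to hold strictly at the start.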

As to the reason why, it has to do with the interior point method. Essentially, the algorithm relies on having a safe point to search from in order to find the boundary. Also, the barrier term in the merit function isn’t defined for anything nonpositive. There are probably a number of other things that’ll break as well, so the basic idea is that the interior point method does require a strictly feasible solution. Many codes/algorithms just implicitly add a slack to deal with this. I tend not to like this because it changes how the system needs to be preconditioned and that should be explicit, but the point is debatable.
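The barrier issue is easy to see with a standard log-barrier term (a textbook interior-point construction, not necessarily Optizelle’s exact merit function):

```python
import math

def h(x, y):
    # Inequality constraint: h(x, y) >= 0
    return [x + 2*y - 1, 2*x + y - 1]

def barrier(x, y, mu=0.1):
    # phi(x, y) = -mu * sum(log(h_i(x, y))): finite only when every
    # component of h is strictly positive.
    return -mu * sum(math.log(hi) for hi in h(x, y))

# At a strictly feasible point, h(0.4, 0.4) = [0.2, 0.2], so this is fine:
print(barrier(0.4, 0.4))

# At the infeasible point, h components are -0.04, so log() blows up:
try:
    barrier(0.32, 0.32)
except ValueError:
    print("barrier undefined at an infeasible point")
```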

Lastly, I thought I documented this, but I just pulled the manual and I can’t find it. Maybe it’s there, but if I can’t find it there’s a problem, so I need to update the manual. Thanks for letting me know that this is confusing!

Anyway, does that all make sense? Long story short, the initial guess must be strictly feasible with respect to the inequality. If this is hard to determine, add a slack variable and make sure the initial guess for the slack is strictly positive.

#3

Slack variable, of course, thanks!

#4

No problem! Technically, there’s the example `simple_infeasible_inequality` that works as well. Candidly, adding a straight slack variable should work much better. That example demonstrates a trick for when we really, really don’t want to add a preconditioner for the equality constraints that arise out of adding the slack. Basically, it only adds a single new equality constraint as opposed to one the size of the slack variable. That said, just use a slack variable and you’re good to go!