
Documentation: inconsistency in whether constraints are zero #460

Closed
timholy opened this issue Apr 20, 2024 · 11 comments

Comments

@timholy
Contributor

timholy commented Apr 20, 2024

Consider HS62, for which the constraint is

x[1] + x[2] + x[3] - 1 == 0

and the initial value is x0 = [0.7, 0.2, 0.1] which sums to 1. Thus you expect the initial constraint value to be zero. However:

julia> using NLPModels, NLPModelsJuMP, OptimizationProblems.PureJuMP

julia> nlp = MathOptNLPModel(hs62());

julia> cons(nlp, nlp.meta.x0)
1-element Vector{Float64}:
 0.9999999999999999

My suspicion is that the bug is in this package, though I'm not sure. It seems likely that it neglects to incorporate the constant offset from the JuMP model?

@amontoison
Member

amontoison commented Apr 20, 2024

Hi @timholy!
I should document it, but we store linear and quadratic equality constraints as Ax = b and x'Qx + d'x = c in NLPModelsJuMP.jl.
So all constant terms in these constraints are moved into the right-hand side.
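To make the convention concrete, here is a minimal sketch using the HS62 constraint `x[1] + x[2] + x[3] - 1 == 0` as an example. All names below are illustrative, not NLPModelsJuMP internals:

```julia
using LinearAlgebra

# Under the Ax = b convention, the linear part stays on the left-hand
# side and the constant term is moved to the right-hand side.
A = [1.0 1.0 1.0]           # linear part of x[1] + x[2] + x[3] - 1 == 0
b = [1.0]                   # constant moved to the right-hand side
x0 = [0.7, 0.2, 0.1]        # HS62 starting point, which sums to 1

cons_stored = A * x0        # what cons returns under this convention
residual = cons_stored - b  # the "mathematical" constraint value

println(cons_stored[1])     # ≈ 1.0, not 0.0
println(residual[1])        # ≈ 0.0
```

This is exactly the discrepancy reported above: `cons` returns `A * x0 ≈ 1`, while the constraint written in the JuMP model evaluates to `A * x0 - 1 ≈ 0`.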

@timholy
Contributor Author

timholy commented Apr 20, 2024

But doesn't the fundamental problem statement for NLPModels say that it solves c(x) = 0? And that cons is the way you evaluate the constraints? https://jso.dev/NLPModels.jl/stable/

Is there the equivalent of lcons (left constraint) and rcons (right constraint)? In which case one could say that the fundamental problem to be solved is lcons(nlp, x) == rcons(nlp, x).

@amontoison
Member

It solves c(x) = 0 but internally we can store it as Ax = b when the constraints are linear.

[Screenshot, 2024-04-20: excerpt from the NLPModels documentation]

cc @tmigot

@amontoison
Member

amontoison commented Apr 20, 2024

Is there the equivalent of lcons (left constraint) and rcons (right constraint)? In which case one could say that the fundamental problem to be solved is lcons(nlp, x) == rcons(nlp, x).

Yes, you have this information in nlp.meta.
https://jso.dev/NLPModels.jl/dev/reference/#NLPModels.NLPModelMeta
You can use this code:

cons(nlp, x)[nlp.meta.jfix] ≈ nlp.meta.lcon[nlp.meta.jfix]

or

cons(nlp, x)[nlp.meta.jfix] ≈ nlp.meta.ucon[nlp.meta.jfix]

nlp.meta.lcon and nlp.meta.ucon are the lower and upper bounds of the constraints.
nlp.meta.jfix holds the indices of the equality constraints.
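The indexing pattern above can be sketched with a mock stand-in for `nlp.meta` (a plain named tuple here; the real object is an `NLPModelMeta`, but the field names `lcon`, `ucon`, and `jfix` are as described above):

```julia
# Hypothetical stand-in for nlp.meta, mirroring the fields used above.
# This is a mock, not the real NLPModels API.
meta = (lcon = [1.0, -Inf], ucon = [1.0, 5.0], jfix = [1])

# Pretend cons(nlp, x) returned this vector at the starting point:
cx = [0.9999999999999999, 2.3]

# An equality constraint is (approximately) satisfied when the stored
# value matches the common lower/upper bound:
ok = all(isapprox.(cx[meta.jfix], meta.lcon[meta.jfix]))
println(ok)  # true
```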

@timholy
Contributor Author

timholy commented Apr 20, 2024

Sounds fine, I can understand why internally you might do it this way. I'm pointing out, though, that this change appears to introduce a major inconsistency in your API. Here's the docstring for cons:

help?> cons

  c = cons(nlp, x)

  Evaluate c(x), the constraints at x.

and then the docs for NLPModels have this:
[Screenshot: the problem statement from the NLPModels docs]

What I'm pointing out is that cons doesn't compute what it claims to compute. It's fine if you represent things differently internally, but cons should return the actual constraint value (and it should be zero at the solution, which isn't true currently).

It would be a little weird if the docstring were "cons computes the constraint value, except if there is a constant offset, in which case it's everything except the constant." Either cons should just be deleted or it needs to return the actual constraint value.

@amontoison
Member

I agree with you.
I think we only have the issue in the package because of how I parse the JuMP model here.
We can update cons to return the actual constraint value.

@dpo
Member

dpo commented Apr 20, 2024

There seems to be an inconsistency in the docs. Below the equation you quote, we state that the constraints are $c_L \leq c(x) \leq c_U$, and that the $i$-th components of $c_L$ and $c_U$ are equal for equality constraints. The constraint is $c_i(x)$. The common value of $c_L$ and $c_U$ is the “right”-hand side; it is not part of the constraint.

@timholy timholy changed the title Constant offset is neglected in constraint equation Documentation: inconsistency in whether constraints are zero Apr 26, 2024
@timholy
Contributor Author

timholy commented Apr 26, 2024

Sorry, just circling back to this. @dpo I guess your point is that given the desirability of (notationally) unifying equalities and inequalities, the better approach is to drop the guarantee of $c_i(x) = 0$ for all equality constraints. That seems reasonable. I'd certainly be fine with using the suggestion in #460.

I've edited the title of this issue accordingly. Perhaps someone with appropriate privileges should label it a documentation issue and transfer it to NLPModels instead?

@tmigot tmigot transferred this issue from JuliaSmoothOptimizers/NLPModelsJuMP.jl Apr 26, 2024
@dpo
Member

dpo commented Apr 26, 2024

@timholy If you prefer the right-hand side of your equality constraints to be zero, it’s enough to build it into the constraint and set the right-hand side to zero explicitly when you define the model. In other words, the safe thing to do is to always evaluate $c(x) - c_L$ when testing whether an equality constraint is nearly satisfied or not.
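The safe check described above, evaluating $c(x) - c_L$ on the equality constraints rather than expecting `cons` to return zero, can be sketched as follows. The helper name and the standalone vectors are illustrative; in practice `cx`, `lcon`, and `jfix` would come from `cons(nlp, x)` and `nlp.meta`:

```julia
# Hedged sketch: test equality-constraint feasibility via c(x) - c_L.
# eq_residual is a hypothetical helper, not an NLPModels function.
eq_residual(cx, lcon, jfix) = cx[jfix] .- lcon[jfix]

cx   = [0.9999999999999999]  # cons(nlp, nlp.meta.x0) for HS62
lcon = [1.0]                 # nlp.meta.lcon
jfix = [1]                   # nlp.meta.jfix

r = eq_residual(cx, lcon, jfix)
println(all(abs.(r) .<= 1e-8))  # true: the starting point is feasible
```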

@timholy
Contributor Author

timholy commented Apr 28, 2024

Right. To be clear, the way this came up was using OptimizationProblems.jl as a repository of test problems for an external optimizer, and then being surprised that the solution wasn't as stated in the documentation. It's really just a documentation issue, but one that's a significant "gotcha" for external consumers of your interface.

@dpo
Member

dpo commented Apr 29, 2024

Thanks for bringing this to our attention!

timholy added a commit to timholy/NLPModels.jl that referenced this issue Apr 30, 2024
This eliminates the inconsistency in `c_i = 0` vs `c_L <= c_i <= c_U` notes in JuliaSmoothOptimizers#460.
It also:
- adds Manifest.toml to `.gitignore`
- standardizes on `y` rather than `λ` for Lagrange multipliers
- clarifies or polishes wording in various places
- adds `NLPModels` to `docs/Project.toml` (this is standard and might
  allow simplification of your `workflow`s)
@tmigot tmigot closed this as completed in 584bdcb Jul 24, 2024