
Inappropriate StackOverflowError with unexpected array type #198

Closed
veita opened this issue May 15, 2016 · 6 comments · Fixed by #354

Comments

@veita

veita commented May 15, 2016

When an Array{Float64,2} instead of an Array{Float64,1} is passed to the optimize function, a StackOverflowError is thrown. A more helpful exception hinting at the true cause of the error would be nice.

using Optim

function f(x::Vector)
    return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end

optimize(f, [0.0 0.0])
LoadError: StackOverflowError:
  in call at C:\Users\alex\.julia\v0.4\Optim\src\types.jl:193
  in optimize at C:\Users\alex\.julia\v0.4\Optim\src\optimize.jl:146
  in optimize at C:\Users\alex\.julia\v0.4\Optim\src\optimize.jl:150
 while loading D:\prj\julia\optimizebug.jl, in expression starting on line 7

Julia version 0.4.5 (2016-03-18 00:58 UTC)
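One way to surface the real problem, sketched below as a hedged illustration (this is not Optim.jl's actual code; `checked_optimize` is a hypothetical name): validate the shape of the initial point at the entry point and throw an `ArgumentError` instead of letting dispatch recurse.

```julia
# Hypothetical sketch, not Optim.jl's API: reject non-Vector inputs up front
# with a descriptive error rather than a StackOverflowError.
function checked_optimize(f, initial_x::AbstractArray)
    isa(initial_x, AbstractVector) ||
        throw(ArgumentError("initial_x must be a Vector, got $(typeof(initial_x))"))
    # ... dispatch to the real optimizer here ...
    return :ok  # placeholder return for this sketch
end
```

With this check, `checked_optimize(f, [0.0 0.0])` (a 1×2 matrix) fails immediately with a message naming the offending type.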

@johnmyleswhite
Contributor

At one point we were making an effort to allow passing in arrays of arbitrary order. It would be good to see if we can restore some of that, if it was lost in the API redesign.

@veita
Author

veita commented May 15, 2016

Additional note:

optimize(f, [0.0 0.0]')

also fails with a StackOverflowError.

@Evizero
Contributor

Evizero commented May 15, 2016

I suppose it is because the fallback function is called over and over, since no specialized method applies. Lines like this are probably the culprit: https://github.com/JuliaOpt/Optim.jl/blob/master/src/nelder_mead.jl#L53. I do think I had a reason for narrowing it down to vectors here, especially because I didn't do so consistently (see https://github.com/JuliaOpt/Optim.jl/blob/master/src/gradient_descent.jl#L36), but I can't think of a fundamental one. That decision should probably be revisited.

Regardless, a StackOverflowError is not a graceful way to deal with an unsupported parameter setting.
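The mechanism can be illustrated with a minimal sketch (assumed mechanism, not Optim.jl code; `optimize_demo` is a made-up name): a generic fallback that re-calls the same function recurses forever when no more specific method ever matches.

```julia
# Only this method terminates, and it matches Vector arguments only.
optimize_demo(f, x::Vector) = "specialized path"

# Generic fallback. collect on a Matrix returns a Matrix, not a Vector,
# so for a 1x2 input this method keeps calling itself until the stack
# overflows -- the same failure mode as in the issue.
optimize_demo(f, x) = optimize_demo(f, collect(x))
```

`optimize_demo(f, [0.0, 0.0])` returns normally, while `optimize_demo(f, [0.0 0.0])` raises a StackOverflowError.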

@pkofod
Member

pkofod commented May 15, 2016

I don't see why this shouldn't be possible, and it should certainly have tests, as I was sure we allowed for this. Unless we can find a good reason, I think we (I, op, anyone) should restore the functionality. I've never had an obvious need for it myself, though I can imagine a few uses.

@pkofod
Member

pkofod commented May 16, 2016

I really wish it would be handled by something like #163, though. If the OP doesn't do it, I'll try to make a PR ASAP.

@pkofod
Member

pkofod commented May 20, 2016

Okay, so I looked a bit into this.

NelderMead() wasn't part of the "loosening", as can be seen from 47a39d2, and it wasn't part of c3a7097 either (although the type in the optimize call was changed, the algorithm wasn't). Neither were some other algorithms, such as AcceleratedGradientDescent. I might be unaware of some other commits that are relevant.

I think this happened as a result of insufficient testing. For example, AcceleratedGradientDescent uses Array(T, n) rather than similar(x_initial) to initialize gr etc.; this could have been caught if arrays.jl tested all algorithms.

Additionally, it never works if you don't supply a gradient, since finite_difference! only supports Vectors. I think this may be why @Evizero changed it in the first place. Could that be?

I will try to clean all of this up, but I'm not sure we can allow for the "no gradient case" and freely chosen Array{T, N} unless Calculus.jl is changed to accommodate arbitrary dimensional arrays. ( @johnmyleswhite )
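The initialization difference above can be shown in a few lines (a hedged sketch; variable names are illustrative, and `Array{Float64}(undef, n)` is the modern spelling of the `Array(Float64, n)` used on Julia 0.4): `similar` preserves the shape of the caller's array, while `Array(T, n)` always produces a Vector and so breaks for `Array{T,2}` inputs.

```julia
x0 = [0.0 0.0]  # 1x2 Matrix{Float64}, as in the reproduction above

gr_bad  = Array{Float64}(undef, length(x0))  # always a flat Vector
gr_good = similar(x0)                        # same shape and eltype as x0

size(gr_bad)   # (2,)
size(gr_good)  # (1, 2)
```

This is why switching the buffer initialization to `similar(x_initial)` is a prerequisite for supporting arbitrary-dimensional inputs.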
