Tentative roadmap #145 (Closed)

johnmyleswhite opened this issue Oct 13, 2015 · 9 comments

@johnmyleswhite (Contributor) commented Oct 13, 2015

  • API cleanup
    • Replace method symbols with abstract types to exploit type dispatch and simplify exposing custom keyword arguments for different methods
  • Documentation
    • Document a formal API
    • Provide examples of basic problems one might want to solve
  • Benchmarks
    • For every problem we test against, we should log the number of iterations, the number of function calls, and the number of gradient calls. Shrinking any of these numbers is the best way to improve performance (see the sketch just below this list).
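A minimal sketch of what collecting those counters could look like. The accessors `Optim.iterations`, `Optim.f_calls`, and `Optim.g_calls` are the ones today's result type exposes (an assumption relative to the API at the time of this discussion), and `benchmark_log.csv` is just a hypothetical output file:

```julia
using Optim, DelimitedFiles

# Toy benchmark problem; a real suite would loop over many problems.
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

rows = Any[["problem" "method" "iterations" "f_calls" "g_calls"]]
for method in (NelderMead(), BFGS())
    res = optimize(rosenbrock, [-1.2, 1.0], method)
    # Counter accessors assumed from the current Optim result API.
    push!(rows, ["rosenbrock" string(nameof(typeof(method))) Optim.iterations(res) Optim.f_calls(res) Optim.g_calls(res)])
end

# One CSV per run makes it easy to spot a regression in any counter.
writedlm("benchmark_log.csv", reduce(vcat, rows), ',')
```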
@pkofod (Member) commented Oct 13, 2015

Should API discussions happen in #87?

@johnmyleswhite (Contributor, Author)

Probably, but I'm not worried if the discussion is split a little bit.

@pkofod (Member) commented Oct 14, 2015

Constrained optimization should probably be added as well, even if @timholy (or someone else) might not have the time right now.

@Evizero (Contributor) commented Oct 14, 2015

I'll probably have some time in about a week or two to work on some aspects of this, if no one else is already doing it.

> Replace method symbols with abstract types to exploit type dispatch and simplify exposing custom keyword arguments for different methods

I am assuming you mean adding a positional argument to `optimize` that defines the algorithm to use, right? (e.g. something like `BFGS()`) A lot of the verbose code in optimize.jl could be avoided that way, and it would be easier to extend the functionality.
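Something like this pattern, sketched here with illustrative type and field names rather than Optim's actual internals:

```julia
# Sketch of the proposal: methods as types rather than symbols, so
# `optimize` can dispatch on them and each method carries its own options.
abstract type Optimizer end

struct NelderMead <: Optimizer end

struct BFGS <: Optimizer
    linesearch::Symbol  # a method-specific option, no shared keyword soup
end
BFGS(; linesearch = :hagerzhang) = BFGS(linesearch)

# Each method gets its own specialization instead of a symbol-keyed
# branch inside one monolithic function (stub bodies for illustration).
optimize(f, x0, ::NelderMead) = "dispatching to the simplex code path"
optimize(f, x0, m::BFGS) = "dispatching to quasi-Newton, linesearch = $(m.linesearch)"

# Usage: optimize(x -> sum(abs2, x), [1.0, 2.0], BFGS(linesearch = :backtracking))
```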

> For every problem we test against, we should log the number of iterations, the number of function calls and the number of gradient calls

I do something along those lines for KSVM.jl to compare the results of my implementation against scikit-learn, stored in CSV files. Does this go in the direction you're thinking of?


One reason I would like to work on this is that I would kind of like to sneak early stopping into the callback functions. Does this fit into your design goals at all?
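For reference, early stopping through a callback might look like the following. The `Optim.Options(callback = ...)` entry point and the convention that a callback returning `true` halts the run are assumed from today's Optim, not necessarily what existed at the time:

```julia
using Optim

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Halt as soon as the objective is good enough, instead of waiting for
# the usual convergence checks; returning true stops the run
# (assumes today's Optim callback convention).
early_stop(state) = state.value < 1e-3

res = optimize(rosenbrock, [-1.2, 1.0], NelderMead(),
               Optim.Options(callback = early_stop, iterations = 10_000))
```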

@tkelman commented Oct 16, 2015

@davidanthoff

Not sure this is within scope for anything near-term, but long-term it would be really neat if the algorithms in Optim were exposed as MathProgBase solvers, and if the user-facing API in Optim were just a user-friendly layer over MathProgBase.

@mlubin (Contributor) commented Oct 21, 2015

I wouldn't want to impose the MPB interface; it's not necessarily the most natural fit for unconstrained or box-constrained problems (that should improve with @timholy's proposal at JuliaOpt/MathProgBase.jl#87). In any case, writing an MPB wrapper for an Optim algorithm shouldn't take more than 20 lines of code.
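For the unconstrained case, such a wrapper might look roughly like this. The method names follow MathProgBase's nonlinear solver interface, but the body is an untested sketch: the hard-coded `BFGS` choice is just for illustration, and constraints and the optimization sense are ignored:

```julia
import MathProgBase
import Optim
const MPB = MathProgBase.SolverInterface

struct OptimSolver <: MPB.AbstractMathProgSolver end

mutable struct OptimModel <: MPB.AbstractNonlinearModel
    d::Any                 # the AbstractNLPEvaluator supplied by the caller
    x0::Vector{Float64}
    result::Any
end

MPB.NonlinearModel(::OptimSolver) = OptimModel(nothing, Float64[], nothing)

function MPB.loadproblem!(m::OptimModel, nvar, ncon, xl, xu, gl, gu, sense, d)
    # Sketch only: bounds, constraints, and sense are ignored here.
    MPB.initialize(d, [:Grad])
    m.d = d
end

MPB.setwarmstart!(m::OptimModel, x) = (m.x0 = copy(x))

function MPB.optimize!(m::OptimModel)
    f(x) = MPB.eval_f(m.d, x)
    g!(G, x) = MPB.eval_grad_f(m.d, G, x)
    m.result = Optim.optimize(f, g!, m.x0, Optim.BFGS())  # illustrative choice
end

MPB.status(m::OptimModel) = Optim.converged(m.result) ? :Optimal : :UserLimit
MPB.getsolution(m::OptimModel) = Optim.minimizer(m.result)
MPB.getobjval(m::OptimModel) = Optim.minimum(m.result)
```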

@pkofod (Member) commented Feb 4, 2016

Does anyone have good examples of maintaining a robust, automated benchmark history? I think that right now we're kind of in the dark about how the performance measures mentioned above have evolved over the commit history.
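One low-tech possibility, sketched here purely as a suggestion, is to append the counters from each benchmark run to a log keyed by commit SHA (the `perf_history.csv` file name is hypothetical), so the history can be reconstructed by replaying commits:

```julia
using Optim

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Key each row by the commit being benchmarked.
sha = readchomp(`git rev-parse --short HEAD`)
res = optimize(rosenbrock, [-1.2, 1.0], BFGS())

# Appending keeps one growing history file; comparing rows for two
# SHAs then shows how the counters evolved.
open("perf_history.csv", "a") do io
    println(io, join((sha, "rosenbrock", Optim.iterations(res),
                      Optim.f_calls(res), Optim.g_calls(res)), ','))
end
```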

@pkofod mentioned this issue Aug 12, 2016
@pkofod (Member) commented Jan 10, 2017

This discussion has been superseded by #326 and/or resolved.

@pkofod closed this as completed Jan 10, 2017