create performance suite 2.0 #1073

Closed

JeffBezanson opened this issue Jul 21, 2012 · 11 comments

Labels: help wanted, performance, test

@JeffBezanson
Member

Although we can still improve a bit on the existing benchmarks in test/perf/, by and large they perform well and have stopped improving. I'd like a new performance suite that works exactly like the current one but covers the areas needing the most improvement. This is not for cross-language comparison; it only needs to be Julia code. I've found it very helpful to have some quick performance tests to help guide compiler development. There should be about 10 tests, and each one should take at most a couple of seconds.

A good place to start is probably the current performance issues that mention specific code.
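
For concreteness, here is a rough sketch of the kind of harness being described: a handful of small kernels, each timed for a few repetitions and kept to a couple of seconds. The names `perftest` and `scalar_sum` are made up for illustration, the code uses present-day Julia syntax rather than the 2012-era language, and it is not the existing test/perf code.

```julia
# Hypothetical harness sketch: run a zero-argument kernel a few times and
# report the best wall-clock time, keeping each test to a couple of seconds.
function perftest(name::AbstractString, f; nreps::Int = 5)
    best = Inf
    for _ in 1:nreps
        t0 = time_ns()
        f()
        best = min(best, (time_ns() - t0) / 1e9)
    end
    println(rpad(name, 28), lpad(string(round(best, digits = 6)), 12), " s")
    return best
end

# Example kernels in the spirit of the discussion: scalar array indexing
# in a loop versus a vectorized expression.
function scalar_sum(A)
    s = 0.0
    for i in eachindex(A)
        s += A[i]
    end
    return s
end

A = rand(1000, 1000)
perftest("scalar indexing sum", () -> scalar_sum(A))
perftest("vectorized A .* A",   () -> A .* A)
```

Reporting the best of a few repetitions keeps noise down while keeping each test quick enough to run during compiler development.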

@timholy
Member

timholy commented Jul 21, 2012

I pushed one candidate; is this the kind of thing you had in mind?

@ViralBShah
Member

I propose adding a perf test for vectorized code. stockcorr.jl and gk.jl are potential candidates from examples, and even the recent Laplace transform snippet could be used.

For scalar performance of array indexing in loops, we should perhaps just benchmark our dense and sparse matrix library code. This will have multiple benefits.

ziggurat.jl, for which we have a C counterpart and which is also available in randn, makes for a good benchmark.

We should potentially also include some or all of the candidates in shootout, since implementations in various other languages are available.

GC will remain an important area for performance, and we should have a good test for it (a rough sketch follows below).

It would be nice to include a couple of parallel and I/O performance tests as well.
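
As an illustration of the GC point above, here is a rough sketch of a GC-pressure micro-benchmark. The `Node` type, the `gc_stress` function, and the sizes are all hypothetical and written in present-day Julia syntax; it only shows the shape such a test could take.

```julia
# Hypothetical GC-pressure micro-benchmark: allocate many short-lived heap
# objects plus a few long-lived ones so collection cost dominates the timing.
mutable struct Node
    value::Int
    next::Union{Node, Nothing}
end

function gc_stress(n::Int)
    keep = Node[]                  # a little long-lived data to retain
    head = nothing
    for i in 1:n
        head = Node(i, head)       # short-lived cons cells
        if i % 10_000 == 0
            push!(keep, head)      # occasionally promote to long-lived
            head = nothing
        end
    end
    return length(keep)
end

gc_stress(1_000)                   # warm up / compile first
GC.gc()
t0 = time_ns()
gc_stress(2_000_000)
println("gc stress: ", (time_ns() - t0) / 1e9, " s")
```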

@johnmyleswhite
Member

Are we tracking the performance of Julia on these benchmarks over time? It would be really nice to have time series graphs showing where Julia is getting better.

@StefanKarpinski
Member

Once we get a CI server set up, we could start running benchmarks and recording results on it. Until then it's a bit too haphazard to make a time series.

@Keno
Member

Keno commented Jul 21, 2012

If we're working on a new performance suite anyway, do you think it would be possible to get it into a JUnit-compatible output format? That would make it trivial to use it with most CI systems and other existing tools.

@StefanKarpinski
Member

This should probably go along with integrating @HarlanH's test suite work — we can have different output formats, including JUnit.

@Keno
Member

Keno commented Jul 21, 2012

Yes, I agree. Just wanted to mention it now so that we can take it into account when looking at the testing/performance suite refactor.

@HarlanH
Contributor

HarlanH commented Jul 21, 2012

Good idea. The extras/test.jl test suite uses a producer/consumer model, with the default consumer just printing results to stdout with println. It should be easy to emit xUnit XML to a file instead.

@StefanKarpinski
Member

This is a place where coroutines really shine — generating a properly formatted XML file should be trivial. Doing it with events or callbacks would kind of suck.
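
A minimal sketch of that producer/consumer idea, with a coroutine feeding results to a consumer that writes JUnit/xUnit-style XML. The `TestResult`, `run_suite`, and `write_junit` names are hypothetical and this is not the extras/test.jl API; the code uses present-day Julia Tasks and Channels, and real output would also need XML escaping.

```julia
struct TestResult
    name::String
    elapsed::Float64    # seconds
    passed::Bool
    message::String
end

# Producer: a Task runs each (name, thunk) pair and puts a result on a
# channel as it finishes.
function run_suite(tests)
    ch = Channel{TestResult}(32)
    @async begin
        for (name, thunk) in tests
            t0 = time_ns()
            ok, msg = try
                thunk()
                (true, "")
            catch err
                (false, sprint(showerror, err))
            end
            put!(ch, TestResult(name, (time_ns() - t0) / 1e9, ok, msg))
        end
        close(ch)
    end
    return ch
end

# Consumer: drain the channel and emit a JUnit-compatible XML file.
function write_junit(io::IO, results)
    println(io, "<?xml version=\"1.0\" encoding=\"UTF-8\"?>")
    println(io, "<testsuite name=\"perf\">")
    for r in results
        print(io, "  <testcase name=\"", r.name, "\" time=\"", r.elapsed, "\"")
        if r.passed
            println(io, "/>")
        else
            println(io, "><failure message=\"", r.message, "\"/></testcase>")
        end
    end
    println(io, "</testsuite>")
end

tests = [("sort 10^5 floats", () -> sort(rand(10^5))),
         ("sum 10^6 ints",    () -> sum(1:10^6))]
open(io -> write_junit(io, run_suite(tests)), "perf.xml", "w")
```

Because the consumer just iterates the channel, swapping the stdout printer for an XML writer (or any other format) is a local change, which is the point being made above.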

@ViralBShah
Member

Now, that may motivate the creation of a time series type at a quicker pace!

-viral

@johnmyleswhite
Member

It seemed like Jeffrey Sarnoff had made a lot of progress already. Am I right in thinking that?

-- John
