
WIP: Code coverage reports for base #8781

Closed
wants to merge 1 commit into from

Conversation

@ghost commented Oct 23, 2014

A long time ago I noticed a comment by simonster about code coverage in base and finally took a stab at it. Little did I know that I would end up with almost exactly what timholy hacked together several moons ago.

I put the coverage run as a separate entry in the build matrix on Travis and explicitly allow it to fail so that it does not slow down the Travis reports. As a consequence, we will only run coverage on Linux, but I suspect that this does not matter.
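For the record, the setup described above looks roughly like the following .travis.yml fragment. This is a hypothetical sketch, not the actual commit in this PR; the COVERAGE variable name is illustrative:

```yaml
matrix:
  include:
    # extra matrix entry that runs the test suite with coverage enabled
    - os: linux
      env: COVERAGE=yes
  allow_failures:
    # let the coverage job fail without holding up the Travis status
    - env: COVERAGE=yes
```

With allow_failures, the slow coverage job can time out or break without turning the whole build red.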

However, there are several issues to consider before merging something like this (thus the WIP).

  1. Do we really want Coveralls notifications for all pull requests?
  2. Is the current quality of the coverage reports good enough?

For 1., if it is not possible to disable them on the Coveralls dashboard, I can tuck the coverage code and Coveralls submission away as optional make targets that can be run upon request rather than on Travis. As for 2., I hope that someone else has more insight into exactly what the current shortcomings of the coverage code are.

Another note on the WIP status: I have yet to fire off the script on Travis and may push to this branch shortly if I messed something up.

@tkelman (Contributor) commented Oct 23, 2014

IMO this would be better done within @staticfloat's army of buildbots, with results posted to status.julialang.org and/or Coveralls on a nightly or few-times-a-day basis. Please, no Coveralls spam after every commit to every PR (I already find it annoying in the few packages that have it turned on; the results have value, but I don't think they need to be tracked at that frequency). It is interesting to look at and work on improving coverage if it's remotely accurate (hence question 2; I think Tim knows the most about that).

@ghost (Author) commented Oct 23, 2014

@tkelman: Very good points. I take it from your comments that the notifications cannot be turned off (annoying...). The reason I took the time to look into this was pretty much along the lines of what you say: without some sort of tracking and visualisation of the current coverage, it is really hard to find the holes in it. I would love to help set this up as a cronjob or similar if I can be trusted with access to a machine.

@tkelman (Contributor) commented Oct 23, 2014

I take it from your comments that the notifications can not be turned off

I have no idea there; I haven't mucked around with Coveralls settings at all. The notifications are enabled in several packages I've submitted PRs to, but that may be because the package authors like the notifications better than I do, and that's up to them.

Elliot has almost all of his code up at either https://github.com/staticfloat/julia-buildbot or other repositories. I'm not sure exactly which file corresponds to which builder, but if you can find where the tests get run it might not be that hard to fit in a set with coverage enabled too.

@timholy (Member) commented Oct 23, 2014

FWIW, the big part of getting it to work for base was #7464, and in that issue you'll see more comments expressing concern about the amount of noise generated.

Also, my view is currently that code-coverage is quite useful if a human looks at the results, but that the overall percentages are almost entirely useless. I was hoping that turning on code-coverage for base would inspire legions 😄 of julia-users folks to start finding the holes in our test coverage, but so far that hasn't happened. Organizing that kind of effort would be a major contribution.

@IainNZ (Member) commented Oct 23, 2014

I think it'd be great to have Coveralls for Base, and I don't think it's necessarily bad to have it on PRs (although sometimes it'd be best to turn it off). If someone adds new functionality and the coverage number goes down, that should be recognized as a bad thing. Conversely, I think @timholy is right that if there is a coverage number then people will submit little fixes, because they get feedback quickly and it "game-ifies" the process (works for me; see stackoverflow...).
Unfortunately, I'm not sure how worthwhile it is without fixing the major bug/oversight in --code-coverage where it skips functions that are never run, and I feel like no one except @JeffBezanson knows how that code all interacts.
Best case is something where I can write juliabot -coverage in a GitHub PR and it'll tell me the change. Next best is just to get coverage on the nightly build (I can actually do this myself right now, as I also build Julia automatically every night).

@timholy (Member) commented Oct 23, 2014

It's not so much a bug or oversight as a very tricky design question. I sketched what might possibly be a way forward in #7541, but I don't know enough about parsing/lowering to be interested in figuring this out on my own.

But I still don't think we want this for every PR.

@ghost (Author) commented Oct 23, 2014

Well, I would not consider myself to be a legion, but I do want to plug some holes. I think I understand the problems outlined in #7541 and I will see what I can make out of the .cov results in the morning.
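For anyone else who wants to poke at the same output, here is a minimal sketch (in Python, purely illustrative, not part of this PR) of pulling never-executed lines out of a .cov file, assuming the count-column format --code-coverage wrote at the time: each source line prefixed with a right-justified execution count, or "-" when no executions were recorded:

```python
def uncovered_lines(cov_text):
    """Return 1-based line numbers whose count column is '-'.

    Assumes a 9-character count column, as in the sample below.
    """
    misses = []
    for lineno, line in enumerate(cov_text.splitlines(), start=1):
        count = line[:9].strip()  # the count column precedes the source text
        if count == "-":
            misses.append(lineno)
    return misses

# A fabricated .cov-style snippet: line 2 was never executed.
sample = (
    "        2 function f(x)\n"
    "        -     error_path()\n"
    "        2 end\n"
)
print(uncovered_lines(sample))  # [2]
```

Finding files whose miss count is large relative to their length would be one crude way to spot the largely-untested modules mentioned below.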

I am somewhat split on whether pursuing 100% coverage is really all that relevant, but I strongly suspect that we have a bunch of modules that are largely untested. What I personally want to do is find them, read them (reading the Julia codebase is always a blast), and write some tests for them.

@timholy (Member) commented Oct 23, 2014

See, you are a legion, all by yourself 😄.

@tkelman (Contributor) commented Oct 23, 2014

What I personally want to do is find them, read them (reading the Julia codebase is always a blast) and write some tests for them.

That's a fantastic idea, and it would be good if more people felt that urge when contributing. I can tell you, for example, that the sparse linear algebra code is far less tested than its dense counterparts, but digging into coverage reports to find holes and devise new tests is not something I've found myself wanting to do. The status quo has been identifying bugs when they come up in real-world use cases and adding tests as part of fixing the bugs. It would take more sustained effort to do much differently.

If someone adds new functionality and the coverage number goes down, that should be recognized as a bad thing.

New functionality is a good thing - if it's untested that's a problem, but doesn't completely invalidate the usefulness of the functionality. It's pretty easy when reviewing a feature addition to say, "needs tests." We don't need constant notifications and bot comments saying "+1%" or "-1%" to figure that out. Feature additions of this type also seem to be a fairly small portion of the pull requests that get opened. Most of what I tend to work on is build fixes or platform differences, for which a coverage test that only runs on one platform is pretty useless.

Best case is something where I can do, in a Github PR, juliabot -coverage and it'll tell me the change.

I like that idea, having something opt-in to only run when it makes sense would be good to start with, and we could start getting a sense of what portion of PR's we end up wanting to look at this information.

@IainNZ (Member) commented Oct 23, 2014

You are right about the nature of PRs to base; they aren't often of the "needs tests" type.

@timholy (Member) commented Oct 23, 2014

FWIW, code-coverage was extremely helpful to me in catching problems with the recent Images revamp before unleashing it on the masses. The only real problems came from missing methods, not broken methods.

@tkelman (Contributor) commented Dec 8, 2014

I will retract my noise concerns if we can get Coveralls reports integrated into the new official-from-Github multi-status indicators (https://github.com/blog/1935-see-results-from-all-pull-request-status-checks), instead of replying as comments.

@ghost (Author) commented Dec 9, 2014

I will retract my noise concerns if we can get Coveralls reports integrated into the new official-from-Github multi-status indicators...

I think this is an excellent idea. However, I am currently pushing two papers for a deadline and probably won't be able to work on this before Christmas.

@tkelman (Contributor) commented Dec 9, 2014

I don't think this is something we can do on our side, I think Coveralls would have to implement it. I don't know whether Coveralls' service is as extensible by community contributors as Travis is.

lemurheavy/coveralls-public#25

@hayd (Member) commented Feb 4, 2015

Is this closed by #9493 ?

https://coveralls.io/r/timholy/julia

@IainNZ (Member) commented Feb 4, 2015

Wooah team:
https://twitter.com/CoverallsApp/status/562351762049208320
lemurheavy/coveralls-public#25 (comment)
So this means we can have annoyance-free coverage reports on Base! (Apart from it 'failing' if coverage goes down, but I think that's OK?)

@staticfloat (Member) commented via email

:D

@timholy (Member) commented Feb 4, 2015

I do think this is largely done, but I'm sure it could be set up to be a bit more convenient. Currently I have to manually push master to my fork of julia on a daily basis to keep Coveralls in sync with this repo.

However, keep in mind that you basically can't use Travis to generate accurate coverage reports. You hit the 50-minute timeout due to these requirements:

  • turn off inlining
  • run without the precompiled image file
  • run all tests in a single process

So we can't do this as a byproduct of testing PRs, it has to be a separate pass. I think @staticfloat has been taking what I did in CoverageBase.jl and setting it up on his buildbots?
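Concretely, the separate pass described above would amount to something like the following. This is a hypothetical sketch, not the actual buildbot script; the --code-coverage=all and --inline=no flags existed in Julia of this era, but the test-runner path is illustrative:

```sh
# Run the full test suite in one process with coverage instrumentation
# and inlining disabled, per the three constraints listed above.
./julia --code-coverage=all --inline=no test/runtests.jl all
# "Run without the precompiled image" additionally means invoking julia
# without the usual sys.so/sys.ji system image, which is exactly what
# pushes the run past Travis's 50-minute limit.
```

Each tracked source file then gets a matching *.cov file alongside it, which tools like CoverageBase.jl collect and submit to Coveralls.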

@hayd (Member) commented Feb 4, 2015

Having the badge on the README would be nice (I can never get these to work):

[Coverage Status badge]

(but perhaps you want to wait til it's "more respectable" :p)

@staticfloat (Member) commented Feb 4, 2015 via email

@tkelman (Contributor) commented Jul 6, 2015

We have buildbots for this now, closing. We aren't getting coverage reports for PRs since nothing in #8781 (comment) has changed, but with a bit of webhook scripting we could possibly do so on demand if there ends up being a need for that.

@tkelman closed this Jul 6, 2015
@kshyatt (Contributor) commented Jul 6, 2015

The last part of #8781 (comment) is about to change once I can test this on Linux, I swear!!!

6 participants