
partial credit for autograde test cells #1090

Merged: 11 commits into jupyter:master on Aug 21, 2019

Conversation

@kcranston (Contributor) commented May 24, 2019

Closes #974. When grading, look for output from an autograde test cell that is a single value castable to a floating-point value greater than 0. Return min(output, max_points), i.e. if the cell returns more than max_points, simply return max_points.

Thinking of this as version 1.0. In subsequent versions, I plan to allow an autograde cell to return formatted JSON output (which can include scores, messages for students, tracebacks from errors, etc.).

I have not added docs yet; I'm looking for feedback on the implementation first, and then I can modify the docs either in this PR or a subsequent one.
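As a reading aid, here is a minimal sketch of the grading rule described above. The name get_partial_grade comes up later in this thread, but the signature and body below are illustrative assumptions, not the merged code:

```python
def get_partial_grade(output, max_points):
    # Illustrative sketch only. Interpret a cell's output as a partial
    # score: a single value castable to a float greater than 0.
    try:
        score = float(output)
    except (TypeError, ValueError):
        return 0.0  # not a usable score; treat as no credit
    if score <= 0:
        return 0.0
    # Cap the score at max_points if the cell returns more than that.
    return min(score, max_points)
```

The "version 2" JSON output mentioned above is not defined in this PR; a purely hypothetical shape might look like:

```python
# Hypothetical only; this PR does not specify such a format.
example_output = {
    "score": 7.5,
    "max_points": 10,
    "messages": ["Part (a) correct", "Part (b): off-by-one error"],
    "traceback": None,
}
```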

@kcranston (Contributor, Author)

Looks like I've got a failing test. Let me investigate...

@kcranston (Contributor, Author)

Still one test failing (test_nbgrader_feedback.py::TestNbGraderFeedback::test_force_f), but only in the AppVeyor Python 3.5 environment, not the 2.7 environment, and not locally for me. I will investigate, but any feedback is most welcome.

@jhamrick (Member) left a review:

This looks really great, thanks! My comments mainly have to do with when to raise errors versus not; otherwise I am very happy with this.

Resolved review threads (now outdated): nbgrader/utils.py (three threads), nbgrader/validator.py, package.json
@jhamrick (Member)

btw, I think the test failure is probably spurious; I have restarted the tests to see.

@jhamrick added this to the 0.6.0 milestone on May 28, 2019
@kcranston (Contributor, Author)

Thanks for the comments! Nothing objectionable from my perspective; I'll make the modifications in the next day or two.

@kcranston (Contributor, Author)

@jhamrick, can you point me to an example of how you would have code in utils.py return a warning to the calling app? I don't quite understand how the logging works. Do I need to create a LoggingConfigurable class in utils (similar to validator.py)?

@jhamrick (Member) commented Jun 5, 2019

@kcranston I think the easiest way to do it would be to have the get_partial_grade and determine_grade functions in utils.py accept a log argument (default None, in which case nothing is logged), and then pass in self.log from the calling app.
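A minimal sketch of the pattern described here; the function names come from the thread, but the exact signature and body are assumptions rather than the PR's actual code:

```python
def get_partial_grade(output, max_points, log=None):
    # Accept an optional logger; log only when one is provided.
    score = float(output)
    if score > max_points and log is not None:
        log.warning(
            "Cell output %s exceeds max_points %s; capping at max_points",
            score, max_points,
        )
    return min(score, max_points)
```

A calling app that subclasses LoggingConfigurable (as validator.py does) would then pass its own logger, e.g. grade = get_partial_grade(output, max_points, log=self.log).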


```diff
  # it's a markdown cell, so we can't do anything
  if score is None:
      pass
- elif score >= max_score:
+ elif score == max_score:
```
@jhamrick (Member)

Probably better to keep this as it was; if the score is greater than the max score, that also shouldn't count as failed:

Suggested change:

```diff
- elif score == max_score:
+ elif score >= max_score:
```
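For concreteness, a toy illustration of why >= is the safer comparison here (purely illustrative, not the PR's code):

```python
def passes(score, max_score):
    # A score at or above max_score counts as full credit; with `==`,
    # an overshooting score (e.g. 12 out of 10) would wrongly fail.
    return score is not None and score >= max_score

assert passes(10, 10)        # exact full credit
assert passes(12, 10)        # overshoot still passes with `>=`
assert not passes(7, 10)     # partial credit is not a full pass
assert not passes(None, 10)  # markdown cell: nothing to grade
```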

@kcranston (Contributor, Author)

I can change that, but then I also need to update get_partial_grade, which currently raises a ValueError when score > max_score. Should I revert that back to a warning again?

@jhamrick (Member) left a review:

Thanks! I just have the one comment about the comparison between the score and the max score; otherwise I think this is good to merge.

@jhamrick (Member) commented Jun 19, 2019 via email

@kcranston (Contributor, Author)

Thanks, @jhamrick! I'll work on updating the docs, along with an example notebook (separate PR).

@lwasser commented Jul 22, 2019

hey @kcranston @jhamrick, curious what the timeline for getting this merged is?
thank you!!

@kcranston (Contributor, Author)

Pinging again. @lwasser and I are writing notebooks that use this functionality and are planning to use them in a class starting next week. Is it possible to have this merged this week?

@lwasser commented Aug 20, 2019

hey @jhamrick, i know you are super busy. we do need to use this functionality in our class, and it starts next week. what do you need from us to get this merged at this point? we are installing from conda-forge, so it would be ideal to get things merged and to bump a patch release (if you are comfortable with that) this week to support our environment setup. please just let us know what you need.

@jhamrick merged commit ee1e18d into jupyter:master on Aug 21, 2019
Linked issue closed by this PR: Allow the autograder to assign partial credit (#974)
Participants: @kcranston, @jhamrick, @lwasser