hardening the cancelation functionality #675
base: master
Conversation
openeo/extra/job_management.py
Outdated
try:
    running_start_time_str = row.get("running_start_time")
    if not running_start_time_str or pd.isna(running_start_time_str):
        _log.warning(f"Job {job.job_id} does not have a valid running start time. Cancellation skipped.")
This warning might be a bit too alarming. It will be shown every minute for each job that has no recorded start time, so this could be quite spammy.
"Cancellation skipped" might also give the wrong impression that the job manager still thinks the job should be cancelled for some reason, but won't actually do it.
some possible improvements:
- only show this once per job, or once for the whole job tracking run (see the sketch below for one way to do that)
- if the running start time is missing, fill it in with the timestamp of the first observation that it is missing, to have a fallback value, so that the auto-cancel feature can still work
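For the first suggestion, a minimal sketch of "only warn once per job" (the helper and the _warned_jobs set are hypothetical, not part of this PR; only the standard logging module is assumed):

import logging

_log = logging.getLogger(__name__)

# Hypothetical helper: remember which jobs were already warned about, so the
# tracking loop (which runs every minute) logs the missing start time only once per job.
_warned_jobs: set = set()


def warn_missing_start_time_once(job_id: str) -> None:
    if job_id not in _warned_jobs:
        _warned_jobs.add(job_id)
        _log.warning(f"Job {job_id} has no recorded 'running_start_time'.")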
Is the underlying issue then in:
if previous_status in {"created", "queued"} and new_status == "running":
    stats["job started running"] += 1
    active.loc[i, "running_start_time"] = rfc3339.utcnow()
if new_status == "canceled":
    stats["job canceled"] += 1
    self.on_job_cancel(the_job, active.loc[i])
if self._cancel_running_job_after and new_status == "running":
    self._cancel_prolonged_job(the_job, active.loc[i])
The problem would be removed if I also only run the prolonged-job cancellation when the previous state was "created" or "queued". Then we know for sure that a start time has been set?
that won't work in practice: you want cancelling to happen long after the state changed to "running", so both the previous and the current state will be "running" at the point where you typically want to cancel.
what you could do is change how "running_start_time" is set, to something like (pseudo-code):
if running_start_time is not set and new_status == "running":
    active.loc[i, "running_start_time"] = rfc3339.utcnow()
then "running_start_time" further degrades to a best-effort guess of the actual start time, but at least you have something to work with
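A concrete version of that pseudo-code could look like this (a sketch only; it assumes the job-tracking DataFrame active, a row index i, and the rfc3339 helper from openeo.util that the surrounding code already uses):

import pandas as pd
from openeo.util import rfc3339


def ensure_running_start_time(active: pd.DataFrame, i, new_status: str) -> None:
    # Fallback: if a job is observed as "running" but has no recorded start time,
    # record "now" as a best-effort approximation so the auto-cancel feature can still work.
    current = active.loc[i, "running_start_time"]
    if new_status == "running" and (not current or pd.isna(current)):
        active.loc[i, "running_start_time"] = rfc3339.utcnow()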
openeo/extra/job_management.py
Outdated
""" | ||
Ensures the running start time is valid. If missing, approximates with the current time. | ||
Returns the parsed running start time as a datetime object. | ||
""" |
This extra method makes the whole construction quite complex. E.g. it drags in the requirement to have the whole dataframe (df) available at this point, including the assumption that mutations on it will properly be persisted.
Isn't it easier to just modify this existing if in _track_statuses:
openeo-python-client/openeo/extra/job_management.py
Lines 731 to 733 in 3fd041a
if previous_status in {"created", "queued"} and new_status == "running":
    stats["job started running"] += 1
    active.loc[i, "running_start_time"] = rfc3339.utcnow()
e.g. something like
if new_status == "running" and (not active.loc[i, "running_start_time"] or pd.isna(active.loc[i, "running_start_time"])):
    if previous_status not in {"created", "queued"}:
        _log.warning(
            f"Unknown 'running_start_time' for running job {job_id}. Using current time as an approximation."
        )
    stats["job started running"] += 1
    active.loc[i, "running_start_time"] = rfc3339.utcnow()
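Side note on why both the falsy check and pd.isna are needed: when the job tracking table is round-tripped through a CSV file, an empty cell comes back as NaN rather than an empty string. A small illustration (pandas only, standalone example):

import io
import pandas as pd

df = pd.read_csv(io.StringIO("id,running_start_time\njob-1,\n"))
print(df.loc[0, "running_start_time"])           # nan (a float), not ""
print(pd.isna(df.loc[0, "running_start_time"]))  # True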
sounds good.
The reason for the additional function was to harden the prolonged-job cancellation in itself. It makes sense to use this from within _track_statuses. It would however not resolve the issue in the cancellation function itself.
@soxofaan any other changes required?
just a minor thing, but apart from that ok to merge I think
elapsed = current_time - job_running_start_time

if elapsed > self._cancel_running_job_after:
    try:
I think this nested try-except is a bit overkill now and doesn't add any value. I'd remove it to keep this _cancel_prolonged_job more to the point.
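For reference, a rough sketch of the flattened shape this suggests. The function name, the parsing via strptime, and the timestamp format are assumptions for illustration, not the actual implementation; job.stop() stands in for whatever cancel call the batch job object exposes:

import datetime
import logging

_log = logging.getLogger(__name__)


def cancel_if_prolonged(job, running_start_time: str, cancel_after: datetime.timedelta) -> None:
    # Parse the recorded start time (assumed to be in "YYYY-MM-DDTHH:MM:SSZ" form),
    # compare the elapsed time against the configured limit, and cancel if exceeded.
    # No nested try-except around the cancel call; failures are handled by the caller.
    start = datetime.datetime.strptime(running_start_time, "%Y-%m-%dT%H:%M:%SZ")
    elapsed = datetime.datetime.utcnow() - start
    if elapsed > cancel_after:
        _log.info(f"Cancelling job {job.job_id}: running for {elapsed} (limit {cancel_after})")
        job.stop()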