Two things we need to check, and then retry if they fail:
(1) the response is valid JSON (currently a try/except in `parse_response_message`)
(2) the response JSON matches the Pydantic model schema (`base_request_processor`, in `create_dataset_files`)
Right now these checks happen after the LLM responds (i.e. successfully sends back a string), so they aren't inside the retry logic. For batch, we would need to accumulate all the failed responses and submit them as a new batch.
For online, we can add these checks before writing to the `responses.jsonl` file. However, the logic might be awkward, since this parsing is currently abstracted away from the requests --> responses logic in the online path. We might need to rethink the pathways a little.
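For the online path, one possible shape is a retry loop that runs both checks before anything is written out, so only validated responses ever reach `responses.jsonl`. This is a hedged sketch, not the library's implementation: `send_request` is a hypothetical callable wrapping the online LLM call, and `ResponseModel` stands in for the user's `response_format` model.

```python
import json
from pydantic import BaseModel, ValidationError

class ResponseModel(BaseModel):
    # Stand-in for the user-supplied response_format model
    answer: str

def request_with_validation(send_request, prompt, model=ResponseModel, max_retries=3):
    """Retry the online request until the response passes both checks.

    Only a response that survives check (1) (valid JSON) and check (2)
    (matches the Pydantic schema) is returned; anything else counts as a
    failed attempt and triggers another request.
    """
    for _ in range(max_retries):
        raw = send_request(prompt)
        try:
            data = json.loads(raw)               # check (1): valid JSON
            return model.model_validate(data)    # check (2): matches schema
        except (json.JSONDecodeError, ValidationError):
            continue  # treat as a failed request and retry
    raise ValueError(f"response failed validation after {max_retries} attempts")
```

For example, a call where the first response is malformed and the second is valid would succeed on the second attempt:

```python
replies = iter(["oops, not json", '{"answer": "ok"}'])
result = request_with_validation(lambda p: next(replies), "some prompt")
# result.answer == "ok"
```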
Lessons learned from #85