feat: run dbt in batches #158
base: main
Conversation
@witash what do you think of this approach as a way of handling large initial syncs? Running small incremental batches seems to be working well in my local testing so far. I will clean up the PR and add some tests if you agree this is a good approach.
Ok, yea, we can try it. It will be interesting to see how well it does with large databases.
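For context, a minimal sketch of the batching approach under discussion, assuming a Python wrapper like `dbt/dbt-run.py` that shells out to dbt and reads a high-water mark from the `document_metadata` base model (the helper names, the `DBT_BATCH_SIZE` knob, and the `batch_size` var are illustrative, not the PR's actual code):

```python
import os
import subprocess

# Illustrative knob; the real PR may size batches differently.
BATCH_SIZE = int(os.getenv("DBT_BATCH_SIZE", "100000"))


def get_max_saved_timestamp(conn):
    # conn: an open psycopg2 connection.
    # Read the high-water mark from the base models' metadata table.
    with conn.cursor() as cur:
        cur.execute(f"""
            SELECT MAX(saved_timestamp)
            FROM {os.getenv('POSTGRES_SCHEMA')}.document_metadata
        """)
        return cur.fetchone()[0]


def run_dbt_in_batches(conn):
    # Run small incremental passes until a pass no longer advances the
    # high-water mark, i.e. the models have caught up with the source.
    previous = None
    while True:
        subprocess.run(
            ["dbt", "run", "--vars", f"{{batch_size: {BATCH_SIZE}}}"],
            check=True,
        )
        current = get_max_saved_timestamp(conn)
        if current == previous:
            break
        previous = current
```

Each pass only materializes a bounded slice of rows, which is what keeps the temporary tables small during a large initial sync.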
I think this is a good approach! I left a few comments inline.
@dianabarsan yes, that was my concern. I have not tested that yet, but I will, and add a specific test case for it.
@dianabarsan @witash please review. I have added a comment here summarizing the tests I ran for this approach.
Cool stuff! I left some questions inline.
dbt/dbt-run.py
Outdated
```python
with conn.cursor() as cur:
    cur.execute(f"""
        SELECT MAX(saved_timestamp)
        FROM {os.getenv('POSTGRES_SCHEMA')}.document_metadata
```
It's a little strange to reference `document_metadata` here, since this is part of the pipeline schema. Is there some way we can make this independent of the pipeline schema? What if someone wants to use cht-sync with a completely different set of models?
The idea is that anyone building their own models would still have to make use of our base models and build any additional models on top of them. There would be an issue if we updated the base models and renamed this table, since this code would have to be updated as well.
Exactly. I've been reading around and there's nothing DBT can return by default to support batching. So disappointing.
Can we add something that checks whether this table exists before we start batching, and logs a friendly message about why batching won't work? Or even throws an error saying that running in batches is not possible.
To clarify: you are suggesting we throw an error if the batching flag is enabled but the table does not exist?
yes.
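A minimal sketch of such an existence check, assuming psycopg2 and Postgres's `to_regclass` (the function name and wiring here are an illustration, not the PR's code; as noted further down, the check was later removed):

```python
import os


def metadata_table_exists(conn, table_name="document_metadata"):
    # to_regclass returns NULL when the relation does not exist, so this
    # is a cheap existence probe that avoids querying the table itself.
    schema = os.getenv("POSTGRES_SCHEMA")
    with conn.cursor() as cur:
        cur.execute("SELECT to_regclass(%s)", (f"{schema}.{table_name}",))
        return cur.fetchone()[0] is not None
```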
@dianabarsan I addressed your comments. Please have another look.
dbt/dbt-run.py
Outdated
```python
        raise psycopg2.errors.UndefinedTable(f"The table {METADATA_TABLE_NAME} does not exist in the database.")
    run_dbt_in_batches()
else:
    run_dbt()
```
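For readers skimming the excerpt, it plausibly sits inside a flag check along these lines (the env-var handling and the `metadata_table_exists` helper are assumptions; only the `RUN_DBT_IN_BATCHES` flag name and the raised error come from the discussion and diff):

```python
if os.getenv("RUN_DBT_IN_BATCHES", "false").lower() == "true":
    if not metadata_table_exists(conn):
        raise psycopg2.errors.UndefinedTable(
            f"The table {METADATA_TABLE_NAME} does not exist in the database."
        )
    run_dbt_in_batches()
else:
    run_dbt()
```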
This is great! I have another question though: what happens if a cht-sync instance is started with `RUN_DBT_IN_BATCHES=false`, and then, later, when there's a large influx of docs, the cht-sync is restarted with `RUN_DBT_IN_BATCHES=true`? Will this make the dbt sync process everything again because we don't have a batch status stored?
I have removed the `document_metadata` table check because there was a bug where dbt would not run for new deployments where the tables and views are not yet created. The `dbt ls` command should be able to help identify whether the model is defined, but I didn't manage to get it to work, so we'll catch that error elsewhere in the code.

> what happens if a cht-sync instance is started with `RUN_DBT_IN_BATCHES=false`, and then, later, when there's a large influx of docs, the cht-sync is restarted with `RUN_DBT_IN_BATCHES=true`? Will this make the dbt sync process everything again because we don't have a batch status stored?

I have added a check on the `document_metadata` table for the latest timestamp that handles this scenario.
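A hedged sketch of what that timestamp check might look like (the helper name and exact logic are assumptions, not the PR's code):

```python
import os


def initial_batch_start(conn):
    # Sketch: resume batching from the latest processed timestamp if one
    # exists, so switching RUN_DBT_IN_BATCHES on after a full run does
    # not reprocess documents that the incremental models already handled.
    with conn.cursor() as cur:
        cur.execute(f"""
            SELECT MAX(saved_timestamp)
            FROM {os.getenv('POSTGRES_SCHEMA')}.document_metadata
        """)
        latest = cur.fetchone()[0]
    return latest  # None means no prior progress: start from the beginning
```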
I'm still unclear what would happen if we run in batches and then change the config, relaunch dbt and run in full. What would happen then? Will dbt know which docs it has already indexed?
What if we switch from running in full to running in batches? Will everything start from 0?
…n, and remove document_matadata table check
I realized just now I didn't click submit on this 🤦🏻
```python
from urllib.parse import urlparse

METADATA_TABLE_NAME = "document_metadata1"
```
is the `1` suffix intentional here?
Looks great! I think my only remaining thing is `document_metadata1`.
Approving to unblock!
Description
Add the ability to run dbt in batches to avoid scenarios where large table updates result in very large temporary tables that crash Postgres.
This PR depends on the corresponding PR in the CHT Pipeline repository: #156
Code review checklist
License
The software is provided under AGPL-3.0. Contributions to this project are accepted under the same license.