
feat: run dbt in batches #158

Open · wants to merge 12 commits into main

Conversation

@njuguna-n (Contributor) commented Sep 27, 2024

Description

Add the ability to run dbt in batches to avoid scenarios where large table updates result in very large temporary tables that crash Postgres.

This PR depends on this corresponding PR in the CHT Pipeline repository.

#156
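For background, here is a minimal sketch of the batching idea (the names, flag handling, and batch size below are illustrative assumptions, not the PR's exact code): instead of one dbt invocation over the whole dataset, dbt is invoked repeatedly over a bounded window of documents keyed on saved_timestamp, so incremental models never materialize one enormous temporary table.

import subprocess

BATCH_SIZE = 10000  # assumed window size; the real value would be configurable

def run_dbt_batch(start_timestamp):
    # Sketch: pass the window to dbt as vars; models are assumed to
    # filter on saved_timestamp >= start_timestamp, limited to BATCH_SIZE.
    subprocess.run(
        [
            "dbt", "run",
            "--vars",
            f"{{start_timestamp: '{start_timestamp}', batch_size: {BATCH_SIZE}}}",
        ],
        check=True,
    )

The loop driving this would repeat until no documents newer than the last processed timestamp remain.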

Code review checklist

  • Readable: Concise, well named, follows the style guide, documented if necessary.
  • Documented: Configuration and user documentation on cht-docs.
  • Tested: Unit and/or e2e tests where appropriate.
  • Backwards compatible: Works with existing data and configuration or includes a migration. Any breaking changes documented in the release notes.

License

The software is provided under AGPL-3.0. Contributions to this project are accepted under the same license.

@njuguna-n (Contributor, Author)

@witash what do you think of this approach as a way of handling large initial syncs? Running small incremental batches seems to be working well from my local testing so far. I will clean up the PR and add some tests if you agree this is a good approach.

@witash (Contributor) commented Sep 30, 2024

> @witash what do you think of this approach as a way of handling large initial syncs? Running small incremental batches seems to be working well from my local testing so far. I will clean up the PR and add some tests if you agree this is a good approach.

OK, yeah, we can try it. It will be interesting to see how well it does with large databases.

@dianabarsan (Member) left a comment

I think this is a good approach! I left a few comments inline.

@njuguna-n (Contributor, Author)

@dianabarsan yes, that was my concern. I have not tested that yet, but I will, and I'll add a specific test case for it.

@njuguna-n changed the title from "156 run dbt in batches" to "feat: run dbt in batches" on Oct 4, 2024
@njuguna-n (Contributor, Author)

@dianabarsan @witash please review. I have added a comment here summarizing the tests I ran with this approach.

@njuguna-n marked this pull request as ready for review on October 7, 2024 08:48
@dianabarsan (Member) left a comment

Cool stuff! I left some questions inline.

dbt/dbt-run.py (outdated):

with conn.cursor() as cur:
    cur.execute(f"""
        SELECT MAX(saved_timestamp)
        FROM {os.getenv('POSTGRES_SCHEMA')}.document_metadata
@dianabarsan (Member) commented:

It's a little strange to reference document_metadata here, since this is part of the pipeline schema.
Is there some way we can make this independent of the pipeline schema?
What if someone wants to use cht-sync with a completely different set of models?

@njuguna-n (Contributor, Author) replied:

The idea is that anyone building their models would still have to use our base models and build any additional models on top of them. There would be an issue if we updated the base models and renamed this table, in which case this reference would need to be updated as well.

@dianabarsan (Member) replied:

Exactly. I've been reading, and there's nothing dbt can return by default to batch on. So disappointing.
Can we add something that tests whether this table exists before we start batching, and log a friendly message explaining why batching won't work? Or even throw an error that running in batches is not possible.

@njuguna-n (Contributor, Author) commented Oct 8, 2024:

To clarify: you are suggesting we throw an error if the batching flag is enabled but the table does not exist?

@dianabarsan (Member) replied:

Yes.
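A sketch of what such a guard could look like (an assumed helper built on Postgres's to_regclass; the PR's actual check may differ):

import os
import psycopg2

def metadata_table_exists(conn, table_name="document_metadata"):
    # to_regclass() returns NULL when the relation does not exist, which
    # avoids string-matching against information_schema.
    schema = os.getenv("POSTGRES_SCHEMA")
    with conn.cursor() as cur:
        cur.execute("SELECT to_regclass(%s)", (f"{schema}.{table_name}",))
        return cur.fetchone()[0] is not None

# Fail fast when batching is requested but the base models are missing
# (run_in_batches is a hypothetical flag standing in for the env var check):
# if run_in_batches and not metadata_table_exists(conn):
#     raise psycopg2.errors.UndefinedTable("document_metadata does not exist.")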


@njuguna-n (Contributor, Author)

@dianabarsan I addressed your comments. Please have another look.

dbt/dbt-run.py (outdated):

        raise psycopg2.errors.UndefinedTable(f"The table {METADATA_TABLE_NAME} does not exist in the database.")
    run_dbt_in_batches()
else:
    run_dbt()
@dianabarsan (Member) commented:

This is great! I have another question, though:

What happens if a cht-sync instance is started with RUN_DBT_IN_BATCHES=false, and then, later, when there's a large influx of docs, cht-sync is restarted with RUN_DBT_IN_BATCHES=true? Will this make the dbt sync process everything again because we don't have a batch status stored?

@njuguna-n (Contributor, Author) replied:

I have removed the document_metadata table check because there was a bug where dbt would not run for new deployments where the tables and views are not yet created. The dbt ls command should be able to help identify whether the model is defined, but I didn't manage to get it to work, so we'll catch that error elsewhere in the code.

> What happens if a cht-sync instance is started with RUN_DBT_IN_BATCHES=false, and then, later, when there's a large influx of docs, cht-sync is restarted with RUN_DBT_IN_BATCHES=true? Will this make the dbt sync process everything again because we don't have a batch status stored?

I have added a check on the document_metadata table for the latest timestamp that handles this scenario.
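For reference, a sketch of that kind of check (hypothetical helper; the real logic lives in dbt/dbt-run.py): read the newest saved_timestamp already present in document_metadata and resume batching from there, so switching from a full run to batched runs does not reprocess everything from zero.

import os

def get_resume_timestamp(conn):
    # A previous run (full or batched) will already have populated the
    # models, so MAX(saved_timestamp) marks where to pick up. None means
    # there is no prior data and batching starts from the beginning.
    schema = os.getenv("POSTGRES_SCHEMA")
    with conn.cursor() as cur:
        cur.execute(f"SELECT MAX(saved_timestamp) FROM {schema}.document_metadata")
        return cur.fetchone()[0]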

@dianabarsan (Member) replied:

I'm still unclear what would happen if we run in batches and then change the config, relaunch dbt and run in full. What would happen then? Will dbt know which docs it has already indexed?
What if we switch from running in full to running in batches? Will everything start from 0?

@njuguna-n requested a review from dianabarsan on October 9, 2024 19:13
@dianabarsan (Member) left a comment

I realized just now I didn't click submit on this 🤦🏻

dbt/dbt-run.py:

from urllib.parse import urlparse


METADATA_TABLE_NAME = "document_metadata1"
@dianabarsan (Member) commented:

Is the 1 suffix intentional here?


@dianabarsan (Member) left a comment

Looks great! I think my only thing is document_metadata1.
Approving to unblock!
