Docling bundles PDF document conversion to JSON and Markdown in an easy, self-contained package.
- ⚡ Converts any PDF document to JSON or Markdown format, stable and lightning fast
- 📑 Understands detailed page layout, reading order and recovers table structures
- 📝 Extracts metadata from the document, such as title, authors, references and language
- 🔍 Includes OCR support for scanned PDFs
- 🤖 Integrates easily with LLM app / RAG frameworks like 🦙 LlamaIndex and 🦜🔗 LangChain
- 💻 Provides a simple and convenient CLI
To use Docling, simply install docling from your package manager, e.g. pip:
pip install docling
Note
Works on macOS and Linux environments. Windows platforms are currently not tested.
Alternative PyTorch distributions
The Docling models depend on the PyTorch library. Depending on your architecture, you might want to use a different distribution of torch, for example with support for a different accelerator, or a CPU-only version. All the ways of installing torch are listed on the PyTorch website: https://pytorch.org/.
One common situation is installing on Linux systems with CPU-only support. In this case, we suggest installing Docling with the following options:
# Example for installing the CPU-only version on Linux
pip install docling --extra-index-url https://download.pytorch.org/whl/cpu
Docling development setup
To develop for Docling (features, bugfixes etc.), install as follows from your local clone's root dir:
poetry install --all-extras
To convert individual PDF documents, use convert_single(), for example:
from docling.document_converter import DocumentConverter
source = "https://arxiv.org/pdf/2408.09869" # PDF path or URL
converter = DocumentConverter()
result = converter.convert_single(source)
print(result.render_as_markdown()) # output: "## Docling Technical Report[...]"
print(result.render_as_doctags()) # output: "<document><title><page_1><loc_20>..."
For an example of batch-converting documents, see batch_convert.py.
From a local repo clone, you can run it with:
python examples/batch_convert.py
The output of the above command will be written to ./scratch.
You can also use Docling directly from your command line to convert individual files, whether local or by URL, or whole directories.
A simple example would look like this:
docling https://arxiv.org/pdf/2206.01062
To see all available options (export formats etc.), run docling --help.
CLI reference
Here are the available options as of this writing (for an up-to-date listing, run docling --help):
$ docling --help
Usage: docling [OPTIONS] source
╭─ Arguments ────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ * input_sources  source  PDF files to convert. Can be local file / directory paths or URL. [default: None] [required]  │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ──────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --json     --no-json             If enabled the document is exported as JSON. [default: no-json]                        │
│ --md       --no-md               If enabled the document is exported as Markdown. [default: md]                         │
│ --txt      --no-txt              If enabled the document is exported as Text. [default: no-txt]                         │
│ --doctags  --no-doctags          If enabled the document is exported as Doc Tags. [default: no-doctags]                 │
│ --ocr      --no-ocr              If enabled, the bitmap content will be processed using OCR. [default: ocr]             │
│ --backend  [pypdfium2|docling]   The PDF backend to use. [default: docling]                                             │
│ --output   PATH                  Output directory where results are saved. [default: .]                                 │
│ --version                        Show version information.                                                              │
│ --help                           Show this message and exit.                                                            │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Check out the following examples showcasing RAG using Docling with standard LLM application frameworks:
The example file custom_convert.py contains multiple ways one can adjust the conversion pipeline and features.
You can control whether table structure recognition or OCR should be performed via arguments passed to DocumentConverter:
doc_converter = DocumentConverter(
artifacts_path=artifacts_path,
pipeline_options=PipelineOptions(
do_table_structure=False, # controls if table structure is recovered
do_ocr=True, # controls if OCR is applied (ignores programmatic content)
),
)
You can control if table structure recognition should map the recognized structure back to PDF cells (default) or use text cells from the structure prediction itself. This can improve output quality if you find that multiple columns in extracted tables are erroneously merged into one.
from docling.datamodel.pipeline_options import PipelineOptions
pipeline_options = PipelineOptions(do_table_structure=True)
pipeline_options.table_structure_options.do_cell_matching = False # uses text cells predicted from table structure model
doc_converter = DocumentConverter(
artifacts_path=artifacts_path,
pipeline_options=pipeline_options,
)
Since docling 1.16.0, you can control which TableFormer mode to use. Choose between TableFormerMode.FAST (default) and TableFormerMode.ACCURATE (slower, but higher quality) for better results on difficult table structures.
from docling.datamodel.pipeline_options import PipelineOptions, TableFormerMode
pipeline_options = PipelineOptions(do_table_structure=True)
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE # use more accurate TableFormer model
doc_converter = DocumentConverter(
artifacts_path=artifacts_path,
pipeline_options=pipeline_options,
)
You can limit the file size and the number of pages to be processed per document:
conv_input = DocumentConversionInput.from_paths(
    paths=[Path("./test/data/2206.01062.pdf")],
    limits=DocumentLimits(max_num_pages=100, max_file_size=20971520),  # max_file_size is in bytes (20 MiB here)
)
You can convert PDFs from a binary stream instead of from the filesystem as follows:
from io import BytesIO

buf = BytesIO(your_binary_stream)
docs = [DocumentStream(filename="my_doc.pdf", stream=buf)]
conv_input = DocumentConversionInput.from_streams(docs)
results = doc_converter.convert(conv_input)
You can limit the number of CPU threads used by Docling by setting the environment variable OMP_NUM_THREADS accordingly. The default is 4 CPU threads.
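As a minimal sketch, the variable can also be set from Python, as long as it happens before Docling (and its PyTorch dependency) is imported; the thread count of 2 here is just an illustrative choice:

```python
import os

# Set the thread limit before importing docling / torch,
# so the underlying OpenMP thread pool picks it up.
os.environ["OMP_NUM_THREADS"] = "2"

# from docling.document_converter import DocumentConverter  # import only after setting the variable
print(os.environ["OMP_NUM_THREADS"])  # prints "2"
```

Alternatively, set it in the shell before launching your process, e.g. `OMP_NUM_THREADS=2 python my_script.py`.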
You can perform a hierarchy-aware chunking of a Docling document as follows:
from docling.document_converter import DocumentConverter
from docling_core.transforms.chunker import HierarchicalChunker
doc = DocumentConverter().convert_single("https://arxiv.org/pdf/2206.01062").output
chunks = list(HierarchicalChunker().chunk(doc))
# > [
# > ChunkWithMetadata(
# > path='$.main-text[0]',
# > text='DocLayNet: A Large Human-Annotated Dataset [...]',
# > page=1,
# > bbox=[107.30, 672.38, 505.19, 709.08]
# > ),
# > [...]
# > ]
For more details on Docling's inner workings, check out the Docling Technical Report.
Please read Contributing to Docling for details.
If you use Docling in your projects, please consider citing the following:
@techreport{Docling,
author = {Deep Search Team},
month = {8},
title = {Docling Technical Report},
url = {https://arxiv.org/abs/2408.09869},
eprint = {2408.09869},
doi = {10.48550/arXiv.2408.09869},
version = {1.0.0},
year = {2024}
}
The Docling codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.