This guide shows you how to get started developing the XLA project.
Before you begin, complete the following prerequisites:
- Go to CONTRIBUTING.md and review the contribution process.
- If you haven't already done so, sign the Contributor License Agreement.
- Install or configure the following dependencies: Bazel and Docker, both of which are used in the steps below.
Then follow the steps below to get the source code, set up an environment, build the repository, and create a pull request.
- Create a fork of the XLA repository.
- Clone your fork of the repo, replacing <USER> with your GitHub username:
  git clone https://github.com/<USER>/xla.git
- Change into the xla directory:
  cd xla
- Configure the remote upstream repo:
  git remote add upstream https://github.com/openxla/xla.git
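The clone and remote setup above leaves you with two remotes: origin (your fork, where you push) and upstream (the canonical repo, where you fetch updates from). The sketch below demonstrates that layout in a throwaway directory so it is safe to run anywhere; in practice you only run the `git remote add upstream` command inside your real clone, and `<USER>` stays a placeholder for your GitHub username:

```shell
# Demo of the expected remote layout in a throwaway repo (in a real setup,
# `git clone` creates `origin` for you and <USER> is your GitHub username).
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" remote add origin "https://github.com/<USER>/xla.git"
git -C "$tmp" remote add upstream https://github.com/openxla/xla.git
# origin   = your fork (push your branches here)
# upstream = openxla/xla (fetch updates from here)
git -C "$tmp" remote -v
```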
- Install Bazel.
  To build XLA, you must have Bazel installed. The recommended way to install Bazel is using Bazelisk, which automatically downloads the correct Bazel version for XLA. If Bazelisk is unavailable, you can install Bazel manually.
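One common way to install Bazelisk on Linux x86-64 is to download the release binary and place it on your PATH under the name bazel; the asset name and install destination below are assumptions for a typical setup, so pick the binary matching your platform:

```shell
# Sketch: install Bazelisk as `bazel` (asset name and destination are
# assumptions; choose the release binary for your OS/architecture).
curl -fsSL -o bazelisk \
  https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
chmod +x bazelisk
sudo mv bazelisk /usr/local/bin/bazel
# Bazelisk transparently downloads and runs the Bazel version XLA pins.
bazel version
```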
- Create and run a TensorFlow Docker container.
To get the TensorFlow Docker image for CPU, run the following command:
docker run --name xla -w /xla -it -d --rm -v $PWD:/xla tensorflow/build:latest-python3.9 bash
Alternatively, to get the TensorFlow Docker image for GPU, run the following command:
docker run --name xla_gpu -w /xla -it -d --rm -v $PWD:/xla tensorflow/tensorflow:devel-gpu bash
Build for CPU:
docker exec xla ./configure
docker exec xla bazel build --test_output=all --spawn_strategy=sandboxed //xla/...
Build for GPU:
docker exec -e TF_NEED_CUDA=1 xla_gpu ./configure
docker exec xla_gpu bazel build --test_output=all --spawn_strategy=sandboxed //xla/...
Your first build will take quite a while because it has to build the entire stack, including XLA, MLIR, and StableHLO.
To learn more about building XLA, see Build from source.
When you're ready to send changes for review, create a pull request.
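The typical flow for preparing a pull request is to commit your work on a topic branch, push it to your fork, and open the PR on GitHub. The sketch below runs in a throwaway repo so it is copy-paste safe; the branch name and commit message are examples, and in practice you run these commands in your xla clone and finish with the push:

```shell
# Demo of a topic-branch workflow in a throwaway repo (in practice, run
# these inside your xla clone; branch name and message are examples).
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config user.email "you@example.com"
git -C "$repo" config user.name "Your Name"
git -C "$repo" checkout -q -b my-feature      # work on a branch, not main
echo "change" > "$repo/file.txt"
git -C "$repo" add file.txt
git -C "$repo" commit -q -m "Describe the change"
# git push origin my-feature   # then open the PR from your fork on GitHub
git -C "$repo" log --oneline
```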
To learn about the XLA code review philosophy, see Code reviews.