Before you start contributing, review these basic guidelines on finding a project, determining its complexity, and learning what to expect in your collaboration with the ZNBase Labs team.
If you really want to dig deep into our processes and mindset, you may also want to peruse our extensive first PR guide, which is part of our on-boarding for new engineers.
- Install the following prerequisites, as necessary:
  - Either a working Docker install able to run GNU/Linux binaries (e.g. Docker on Linux, macOS, Windows), so you can reuse our pre-populated Docker image with all necessary development dependencies; or
  - The tools needed to build ZNBaseDB from scratch:
    - A C++ compiler that supports C++11. Note that GCC prior to 6.0 doesn't work due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=48891
    - The standard C/C++ development headers on your system.
    - On GNU/Linux, the terminfo development libraries, which may be part of an ncurses development package (e.g. `libncurses-dev` on Debian/Ubuntu, but `ncurses-devel` on CentOS).
    - A Go 1.11.4 environment with a recent 64-bit version of the toolchain. Note that the Makefile enforces the specific version required, as it is updated frequently.
    - Git 1.9+
    - Bash (4+ is preferred)
    - GNU Make (3.81+ is known to work)
    - CMake 3.1+
    - Autoconf 2.68+
    - NodeJS 6.x and Yarn 1.7+
    - Yacc or Bison
Note that at least 4GB of RAM is required to build from source and run tests.
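If you are building from scratch on a Debian/Ubuntu machine, something along these lines usually covers the system packages above; the package names are assumptions and differ per distribution, so adjust as needed:

```shell
# Hypothetical package set for Debian/Ubuntu; names differ on other distributions.
sudo apt-get install build-essential git bash cmake autoconf bison libncurses-dev
# Go, NodeJS 6.x, and Yarn are best installed from their upstream releases so the
# versions match what the Makefile expects.
```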
- Get the ZNBaseDB code:
```shell
mkdir -p $(go env GOPATH)/src/github.com/znbasedb
cd $(go env GOPATH)/src/github.com/znbasedb
git clone https://github.com/znbasedb/znbase
cd znbase
```
Note: it is important to ensure the ZNBaseDB sources are positioned correctly relative to `$GOPATH`. Otherwise, the builds will fail.
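A quick sanity check that the sources ended up in the right place relative to `$GOPATH` (the path simply mirrors the clone commands above):

```shell
# This should print the Makefile path; if it doesn't, the clone is in the wrong place.
ls "$(go env GOPATH)/src/github.com/znbasedb/znbase/Makefile"
```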
- Run `make build`, `make test`, or anything else our Makefile offers. If you wish to reuse our builder image instead of installing all the dependencies manually, prefix the `make` command with `build/builder.sh`; for example `build/builder.sh make build`. Note that the first time you run `make`, it can take some time to download and install various dependencies. After running `make build`, the `znbase` executable will be in your current directory and can be run as shown in the README.
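For example, to run a package's tests inside the builder image rather than on your host (the package path here is only an illustration):

```shell
# Any make invocation can be wrapped in the builder image the same way.
build/builder.sh make test PKG=./pkg/storage
```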
- The default binary contains core open-source functionality covered by the Apache License 2 (APL2) and enterprise functionality covered by the ZNBaseDB Community License (ICL). To build a pure open-source (APL2) version excluding enterprise functionality, use `make buildoss`. See this blog post for more details.
- If you plan on working on the UI, check out the UI README.
- To add or update a Go dependency:
  - See `build/README.md` for details on adding or updating dependencies.
  - Run `make generate` to update generated files.
  - Create a PR with all the changes.
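As a rough sketch of how those three steps fit together (the dependency tooling itself is described in `build/README.md`; the commit message below is hypothetical):

```shell
# 1. Add or bump the dependency using the tooling described in build/README.md.
# 2. Refresh generated files so they reflect the change:
make generate
# 3. Commit everything, including generated files, and open a PR.
git add .
git commit -m "deps: bump <dependency>"
```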
See our separate style guide document.
When you're ready to commit, be sure to write a Good Commit Message™.
Our commit message guidelines are detailed here: https://github.com/znbasedb/znbase/wiki/Git-Commit-Messages
In summary (the wiki page details the rationales and provides further suggestions):
- Keep in mind who will read it: think of the reviewer, think of the release notes
- Separate subject from body with a blank line
- Use the body to explain what and why vs. how
- Prefix the subject line with the affected package/area
- Include a release note annotation, in the right position
- Use the imperative mood in the subject line
- Keep the commit title concise but information-rich
- Wrap the body at some consistent width under 100 characters
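For illustration, a message following these guidelines might look like the following; the package prefix, description, and release note text are invented for the example:

```
storage: fix replica GC racing with splits

Previously, a replica could be garbage collected while a split was still
in flight, which ... (explain what was wrong and why this fix is the right
approach, rather than how the code does it).

Release note (bug fix): Fixed a rare crash that could occur when a range
was removed during a split.
```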
- All contributors need to sign the Contributor License Agreement.
- Create a local feature branch to do work on, ideally on one thing at a time. If you are working on your own fork, see this tip on forking in Go, which ensures that Go import paths will be correct.

```shell
git checkout -b update-readme
```
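If you do work from a fork, one common arrangement that keeps Go import paths correct is to clone the main repository into `$GOPATH` (as above) and add your fork as an extra remote; `<yourfork>` is just a placeholder name, matching the push examples further down:

```shell
# Keep the upstream clone in its canonical GOPATH location and push branches to your fork.
git remote add <yourfork> git@github.com:<your-github-username>/znbase.git
git push -u <yourfork> update-readme
```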
- Hack away and commit your changes locally using `git add` and `git commit`. Remember to write tests! The following are helpful for running specific subsets of tests:

```shell
make test
# Run all tests in ./pkg/storage
make test PKG=./pkg/storage
# Run all kv tests matching '^TestFoo' with a timeout of 10s
make test PKG=./pkg/kv TESTS='^TestFoo' TESTTIMEOUT=10s
# Run the sql logic tests
make test PKG=./pkg/sql TESTS='TestLogic$$'
# or, using a shortcut,
make testlogic
# Run a specific sql logic subtest
make test PKG=./pkg/sql TESTS='TestLogic$$/select$$'
# or, using a shortcut,
make testlogic FILES=select
```
Logs are disabled during tests by default. To enable them, include `TESTFLAGS="-v -show-logs"` as an argument to the test command:

```shell
make test ... TESTFLAGS="-v -show-logs"
```
- Run the linters, code generators, and unit test suites locally:

```shell
make pre-push
```

This will take several minutes.
- When you're ready for review, groom your work: each commit should pass tests and contain a substantial (but not overwhelming) unit of work. You may also want to `git fetch origin` and run `git rebase -i --exec "make lint test" origin/master` to make sure you're submitting your changes on top of the newest version of our code. Next, push to your fork:

```shell
git push -u <yourfork> update-readme
```
- Then create a pull request using GitHub's UI. If you know of another GitHub user particularly suited to reviewing your pull request, be sure to mention them in the pull request body. If you possess the necessary GitHub privileges, please also assign them to the pull request using GitHub's UI. This will help focus and expedite the code review process.
- Address test failures and feedback by amending your commits. If your change contains multiple commits, address each piece of feedback by amending the commit at which the particular feedback is aimed. Wait (or ask) for new feedback on those commits if they are not straightforward. An `LGTM` ("looks good to me") by someone qualified is usually posted when you're free to go ahead and merge. Most new contributors aren't allowed to merge themselves; in that case, we'll do it for you.
- Direct merges using GitHub's "big green button" are avoided. Instead, we use bors-ng to manage our merges to prevent "merge skew". When you're ready to merge, add a comment to your PR of the form `bors r+`. Craig (our Bors bot) will run CI on your changes and, if it passes, merge them. For more information, see the wiki.
Peeking into a running cluster can be done in several ways:
- the net/trace endpoint at `/debug/requests`. It has a breakdown of the recent traced requests, in particular slow ones. Two families are traced: `node` and `coord`, the former (and likely more interesting one) containing what happens inside of `Node`/`Store`/`Replica` and the other inside of the coordinator (`TxnCoordSender`).
- pprof gives us (among other things) heap and cpu profiles; this wiki page gives an overview and walks you through using it to profile ZNBase. This golang blog post explains it extremely well and this one by Dmitry Vyukov goes into even more detail.
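As a hedged example, you can point the Go pprof tool directly at a running node's debug endpoints; `localhost:8080` is an assumption here, so substitute whatever HTTP address your node serves its `/debug` pages on:

```shell
# Fetch a CPU profile from the node's pprof endpoint (address is an assumption).
go tool pprof http://localhost:8080/debug/pprof/profile
# The net/trace breakdown described above is served at:
#   http://localhost:8080/debug/requests
```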
An easy way to locally run a workload against a cluster is the acceptance tests. For example,

```shell
make acceptance TESTS='TestPut$$' TESTFLAGS='-v -d 1200s -l .' TESTTIMEOUT=1210s
```

runs the `Put` acceptance test for 20 minutes with logging (useful to look at the stack trace in case of a node dying). When it starts, all the relevant commands for `pprof`, `trace`, and logs are logged to allow for convenient inspection of the cluster.