This project uses JMH to benchmark the throughput of a variety of Java JSON libraries. It covers the following libraries:
- avaje-jsonb
- boon
- dsl-json
- fastjson
- flexjson
- genson
- gson
- jackson
- jakarta-json (from Oracle)
- jodd
- johnzon
- json-io
- json-simple
- json-smart
- logansquare
- minimal-json
- mjson
- moshi
- nanojson
- org.json
- purejson
- qson
- tapestry
- underscore-java
When available, both databind and 'stream' (hand-written packing and unpacking) implementations are tested. Two kinds of models are evaluated, with payloads of 1, 10, 100 and 1000 KB:
- Users: primitive types, String, List and simple POJOs
- Clients: adds arrays, enum, UUID and LocalDate
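To illustrate the difference between the databind and stream API styles, here is a minimal, hypothetical sketch using Jackson (one of the benchmarked libraries). The class, model and field names are invented and much simpler than the benchmark's real Users model:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ApiStylesSketch {

    // Hypothetical, tiny model; the real Users model is much larger.
    public static class User {
        public String name;
        public int age;
    }

    // Databind: the library maps JSON to the POJO for you.
    static User readDatabind(String json) throws Exception {
        return new ObjectMapper().readValue(json, User.class);
    }

    // Stream: hand-written unpacking with the low-level token API.
    static User readStream(String json) throws Exception {
        User user = new User();
        try (JsonParser parser = new JsonFactory().createParser(json)) {
            while (parser.nextToken() != JsonToken.END_OBJECT) {
                if (parser.currentToken() == JsonToken.FIELD_NAME) {
                    String field = parser.getCurrentName();
                    parser.nextToken(); // advance to the value
                    if ("name".equals(field)) {
                        user.name = parser.getValueAsString();
                    } else if ("age".equals(field)) {
                        user.age = parser.getIntValue();
                    }
                }
            }
        }
        return user;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"name\":\"alice\",\"age\":30}";
        System.out.println(readDatabind(json).name); // alice
        System.out.println(readStream(json).age);    // 30
    }
}
```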
This benchmark is written to:
- randomly generate payloads when the JVM loads the benchmark; the same seed is shared across runs
- read data from RAM
- write data to reusable output streams when possible, which reduces allocation pressure
- consume all output streams, to avoid dead-code elimination
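For illustration, here is a minimal, hypothetical JMH serialization benchmark in the same spirit, using Jackson. The class name and payload are invented and do not match the repository's actual benchmark classes:

```java
import java.io.ByteArrayOutputStream;
import java.util.List;
import java.util.concurrent.TimeUnit;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class SerializationSketch {

    private final ObjectMapper mapper = new ObjectMapper();
    // Payload built once and kept in RAM (the real benchmark generates it
    // randomly at class-loading time from a fixed seed).
    private final List<String> payload = List.of("a", "b", "c");
    // Reusable output stream to reduce allocation pressure.
    private final ByteArrayOutputStream out = new ByteArrayOutputStream(1024);

    @Benchmark
    public int jacksonSer() throws Exception {
        out.reset();                      // reuse the buffer across invocations
        mapper.writeValue(out, payload);  // serialize the in-memory payload
        return out.size();                // returned value is consumed by JMH
    }
}
```

Returning a value from a `@Benchmark` method has the same effect as passing it to a JMH `Blackhole`: it keeps the JIT from eliminating the serialization work as dead code.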
Not evaluated: RAM utilization, compression, and payloads larger than 1 MB.
The benchmarks are written with JMH and target Java 17.
The results below were computed on April 30, 2023 with the following library versions:
Library | Version |
---|---|
avaje-jsonb | 1.4 |
boon | 0.34 |
dsl-json | 1.10.0 |
fastjson | 2.0.27 |
flexjson | 3.3 |
genson | 1.6 |
gson | 2.10.1 |
jackson | 2.14.2 |
jodd json | 6.0.3 |
johnzon | 1.2.19 |
jakarta | 2.1.1 |
json-io | 4.14.0 |
simplejson | 1.1.1 |
json-smart | 2.4.10 |
logansquare | 1.3.7 |
minimal-json | 0.9.5 |
mjson | 1.4.1 |
moshi | 1.14.0 |
nanojson | 1.8 |
org.json | 20230227 |
purejson | 1.0.1 |
qson | 1.1.1 |
tapestry | 5.8.2 |
underscore | 1.88 |
yasson | 3.0.2 |
All graphs and sheets are available in this Google Doc.
Raw JMH results are available here.
Users model: primitive types, String, List and simple POJOs.
Clients model: primitive types, String, List and simple POJOs, plus arrays, enum, UUID and LocalDate.
Note: fewer libraries are tested with the Clients model because some do not support these additional types.
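As a rough illustration of the extra types in the Clients model (the class and field names below are hypothetical; the real classes in the repository differ):

```java
import java.time.LocalDate;
import java.util.UUID;

// Hypothetical, simplified shape; the real Clients model is larger.
public class ClientSketch {
    public enum Type { INDIVIDUAL, COMPANY }

    public UUID id;
    public Type type;
    public LocalDate registeredOn;
    public long[] accountNumbers;
}
```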
Tests were run on an Amazon EC2 c5.xlarge instance (4 vCPU, 8 GiB RAM).
JMH info:
# JMH version: 1.35
# VM version: JDK 17.0.6, OpenJDK 64-Bit Server VM, 17.0.6+10-LTS
# VM invoker: /usr/lib/jvm/java-17-amazon-corretto.x86_64/bin/java
# VM options: -Xms2g -Xmx2g --add-opens=java.base/java.time=ALL-UNNAMED
# Blackhole mode: compiler (auto-detected, use -Djmh.blackhole.autoDetect=false to disable)
# Warmup: 5 iterations, 10 s each
# Measurement: 10 iterations, 3 s each
# Timeout: 10 min per iteration
# Threads: 16 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
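For reference, this run configuration roughly corresponds to the following JMH annotations. This is a hedged sketch with an invented class name; the project may instead pass these settings on the JMH command line:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@Warmup(iterations = 5, time = 10)                // 5 iterations, 10 s each
@Measurement(iterations = 10, time = 3)           // 10 iterations, 3 s each
@Timeout(time = 10, timeUnit = TimeUnit.MINUTES)  // 10 min per iteration
@Threads(16)
@BenchmarkMode(Mode.Throughput)
@Fork(jvmArgs = {"-Xms2g", "-Xmx2g", "--add-opens=java.base/java.time=ALL-UNNAMED"})
public class JmhConfigSketch {
    @Benchmark
    public long baseline() {
        return System.nanoTime(); // placeholder benchmark body
    }
}
```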
Prerequisites:
- JDK 17, with `JAVA_HOME` set
- `make`
By default, running `./run ser` (respectively `./run deser`) will run all stream and databind serialization (respectively deserialization) benchmarks with 1 KB payloads of Users.

You can also specify which libraries, APIs, payload sizes, number of iterations (and more) you want to run. For example:

- `./run deser --apis stream --libs genson,jackson`
- `./run ser --apis databind,stream --libs jackson`
- `./run deser --apis stream --libs dsljson,jackson --size 10 --datatype users`

Type `./run help ser` or `./run help deser` to print help for those commands.
If you wish to run all the benchmarks used to generate the reports above, run `./run-everything`. This will take several hours to complete, so be patient.
Prerequisites:
Then, simply run `make packer`.
Any help to improve the existing benchmarks or write ones for other libraries is welcome.
Adding a JSON library to the benchmark requires little work, and you can find numerous examples in the commit history. For instance:
- Addition of moshi: https://github.com/fabienrenaud/java-json-benchmark/commit/6af2c0a7091b12a9dc768e49499682b97ea57ff6
- Addition of jodd: https://github.com/fabienrenaud/java-json-benchmark/commit/288a4e61496588ed4c0a80e1f107f34f9a2c985c
- Addition of json-simple: https://github.com/fabienrenaud/java-json-benchmark/commit/1e1e559c39a6eddc3dd7d7cea777fc7861415469
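As a purely hypothetical sketch of the shape of such a change (the real interfaces to implement live in the repository; see the commits above for the exact classes), wiring in a library usually boils down to exposing its serialize and deserialize calls, for example with Gson:

```java
import java.nio.charset.StandardCharsets;

import com.google.gson.Gson;

// Hypothetical adapter shape; class and method names are invented and do not
// match the project's actual abstractions.
public class GsonDatabindAdapterSketch {

    private final Gson gson = new Gson();

    public byte[] serialize(Object model) {
        return gson.toJson(model).getBytes(StandardCharsets.UTF_8);
    }

    public <T> T deserialize(byte[] json, Class<T> type) {
        return gson.fromJson(new String(json, StandardCharsets.UTF_8), type);
    }
}
```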
Pull requests are welcome.