
LightSeq QAT #307

Merged: 132 commits (into master from ls-qat), Aug 31, 2022

Changes shown below are from 1 commit: ee03974 (polish training gemm algo code)

Commits (132)
13ac633
ls embedding support qat
Apr 7, 2022
ace049a
[WIP]ls transformer qat
May 5, 2022
23041c2
fix fairseq transformer cli shape bug of output projection
godweiyang May 6, 2022
5c6ad24
ln_bw_i8 test passed!
godweiyang May 7, 2022
576de44
test with_mean of ln_i8
godweiyang May 7, 2022
178b774
update to the latest version of master, fix conflict
godweiyang May 7, 2022
2c6c30b
ls encoder attn add qat
May 9, 2022
580775c
dropout_relu_bias_i8 passed!
godweiyang May 10, 2022
dd1229b
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
godweiyang May 10, 2022
75eb4f4
dropout_gelu_bias unit test passed!
godweiyang May 16, 2022
e4b8e9c
dropout_relu_bias_bwd_i8 passed!
godweiyang May 16, 2022
0b0c890
Merge branch 'ls-qat-wy' into ls-qat
godweiyang May 16, 2022
6fab849
dropout_gelu_bias_bwd_i8 unit test passed!
godweiyang May 17, 2022
b5ab256
format
godweiyang May 17, 2022
f2f8401
dropout_gelu_bias_bwd_i8 unit test passed!
godweiyang May 17, 2022
59e5ebf
format
godweiyang May 17, 2022
d39f15c
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
godweiyang May 18, 2022
ffda943
polish unit test
godweiyang May 18, 2022
5370059
[WIP] ls encoder qat test
May 23, 2022
e66708c
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
May 23, 2022
20231b6
quant_bias_add_transform_20314, quant_transform4d_0213 unit test passed!
godweiyang May 23, 2022
991ca74
fix unit test bug
godweiyang May 26, 2022
a8af545
[WIP] ls encoder qat unit test
May 30, 2022
e03da8a
fix bug
godweiyang May 30, 2022
238f717
set default module to disable quant, fix bugs in examples
godweiyang May 30, 2022
1e035a8
fix encoder bug
Jun 1, 2022
b6ec156
encoder qat test pass
Jun 6, 2022
f934ca0
decoder qat forward test pass
Jun 8, 2022
3d15555
fix bug in encoder bw
godweiyang Jun 9, 2022
a98e6b5
fix conflict
godweiyang Jun 9, 2022
7d06ffc
fix bug of cmax grad
godweiyang Jun 10, 2022
0a8fc46
fix bug of act mask
godweiyang Jun 10, 2022
89208b1
fix bug in tensor quantizer
godweiyang Jun 10, 2022
09a3d10
fix cmax grad bug
godweiyang Jun 13, 2022
07e376d
[WIP] decoder support qat
Jun 14, 2022
c57b398
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
Jun 14, 2022
a380b72
ls decoder qat pass
Jun 20, 2022
09f6bf5
ls encoder qat pass
Jun 22, 2022
4e2b257
add unit test for quant bert encoder
godweiyang Jul 5, 2022
956f681
fix memory bug
godweiyang Jul 11, 2022
1c9ddb4
fix cmax grad bug in huggingface
godweiyang Jul 12, 2022
db9d160
quant bert enc fw&bw test passed!
godweiyang Jul 14, 2022
e6338f7
fix hf cmax export bug
godweiyang Jul 14, 2022
d508f62
fix fairseq out_proj bug
godweiyang Jul 14, 2022
c641567
fix fairseq shell bug
godweiyang Jul 14, 2022
60a368e
fix decoder mem bug
godweiyang Jul 15, 2022
b18995f
modify initial lr of fairseq quant training
godweiyang Jul 15, 2022
04d1291
decoupled qat code
Jul 18, 2022
972a54d
modify huggingface training scripts
godweiyang Jul 18, 2022
7ea40bd
decoupled qat code
godweiyang Jul 18, 2022
1b117f0
add cmax grad
Jul 18, 2022
1859341
delete enc_kv output quant
godweiyang Jul 18, 2022
7d4a8cd
modify ffn2gemm quant like inference
godweiyang Jul 20, 2022
b569eb7
fuse dequantize
Jul 20, 2022
94ffe31
fix post ln mem bug
godweiyang Jul 20, 2022
d596b29
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
godweiyang Jul 20, 2022
eb6c59a
add decoder self attn qkv cache quant
godweiyang Jul 21, 2022
9cd64f5
export quant model (stage 1)
godweiyang Jul 21, 2022
b5ab18c
export quant model (stage 2)
godweiyang Jul 22, 2022
42607a1
export quant model (stage 3)
godweiyang Jul 22, 2022
8358e48
support vit quant train
godweiyang Jul 26, 2022
5d40f93
add gradient clip
Jul 26, 2022
725f112
fix hf export bug
godweiyang Jul 26, 2022
28b2b76
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
godweiyang Jul 26, 2022
075af1a
fix quant gpt bug
godweiyang Jul 26, 2022
b622b0d
support quant gpt training
godweiyang Jul 26, 2022
d3d3566
modify huggingface training scripts
godweiyang Jul 27, 2022
baba9d7
support ls bert, gpt export
godweiyang Jul 27, 2022
3bbfac3
support custom quant transformer export
godweiyang Jul 28, 2022
483c252
optimize ffn fake quant and dcmax
Jul 28, 2022
0983fe0
support quant gpt export
godweiyang Jul 28, 2022
bfa3b76
support quant vit export
godweiyang Jul 29, 2022
0c21296
add quant linear layer
Jul 29, 2022
89a92e7
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
Jul 29, 2022
e683775
fix quant linear layer bug
Jul 29, 2022
c180d2c
support quant vit infer
godweiyang Aug 1, 2022
e8ad835
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
godweiyang Aug 1, 2022
931c0b0
speedup cublas igemm on A100 (by huxingwu)
godweiyang Aug 2, 2022
0fe0862
optimize ls_quant_dropout_act_bias_bwd_kernel
Aug 2, 2022
ee03974
polish training gemm algo code
godweiyang Aug 3, 2022
56211cb
support gemm best algo search on different GPUs and shapes
godweiyang Aug 5, 2022
2e99f37
search in the range (min_bsz, 512, 1) and (512, max_bsz, 32)
godweiyang Aug 5, 2022
e2439e1
add configs_sm75/h512_i2048_b1-10016.json
godweiyang Aug 5, 2022
156bb6e
support col32 igemm
Aug 5, 2022
477d3e7
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
Aug 5, 2022
01dccf8
add configs_sm75/h768_i3072_b1-10016.json
godweiyang Aug 5, 2022
6f13034
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
godweiyang Aug 5, 2022
4fbbb0f
add configs_sm80/h512_i2048_b1-10016.json
godweiyang Aug 5, 2022
8305cd3
add configs_sm75/h1024_i4096_b1-10016.json
godweiyang Aug 5, 2022
3f02450
add configs_sm80/h768_i3072_b1-10016.json
godweiyang Aug 5, 2022
c0c9d81
fix syntax error
godweiyang Aug 5, 2022
1b23508
add configs_sm80/h1024_i4096_b1-10016.json
godweiyang Aug 5, 2022
d521929
modify gemm test config format
godweiyang Aug 8, 2022
3868f59
merge all the configs to one
godweiyang Aug 8, 2022
33702dd
support search all shapes which are not in the config
godweiyang Aug 8, 2022
a9104d0
polish the merged config
godweiyang Aug 8, 2022
430e4e4
add cublas_algo_map cpp code
godweiyang Aug 9, 2022
aa7d4a1
move get_sm func to lightseq kernels
godweiyang Aug 9, 2022
cf21734
move gemm_test to lightseq ops
godweiyang Aug 9, 2022
e582959
modify default config dir, fix algo_map bug
godweiyang Aug 9, 2022
d91024a
fix col32 bug
Aug 10, 2022
690801e
col major igemm becomes default
Aug 11, 2022
2877db5
fix dcmax kernel bug
Aug 11, 2022
649a037
loosen cuda 11.6 requirement
Aug 12, 2022
fc6a035
add vit cpp example
godweiyang Aug 12, 2022
0a76d2e
Merge branch 'ls-qat' of https://github.com/bytedance/lightseq into l…
godweiyang Aug 12, 2022
cd9c85c
fix bug from col32 gemm and a100 tuned col gemm
Aug 15, 2022
7eac0ed
support training encoder qkv_linear auto-tune gemm (in comment)
godweiyang Aug 16, 2022
7a1bd0c
add required header file
godweiyang Aug 16, 2022
1904c74
dynamic use col32 or col4 in different GPUs
godweiyang Aug 16, 2022
220c2e8
fix multidefinition bug
godweiyang Aug 16, 2022
964ff11
fix weight transform col32 bug
godweiyang Aug 17, 2022
50f6512
add best algo for inference gemm (in comments)
godweiyang Aug 18, 2022
78857bb
support easy benchmark for gpt and transformer
Aug 19, 2022
36b6607
support benchmark for huggingface
Aug 22, 2022
efa5c71
fix embedding clip_max bug
Aug 23, 2022
3e63d41
ls quant linear support more shape
Aug 23, 2022
12d58ed
fix quant linear bug
Aug 24, 2022
c8e5b89
fix quant linear bug
Aug 24, 2022
41e5cec
update pad function for older torch
Aug 24, 2022
33cc7b0
fix quant linear bug
Aug 24, 2022
123874a
remove redundant code
Aug 24, 2022
8dac833
Merge branch 'master' into ls-qat
Aug 26, 2022
332e22e
fix export bug
Aug 26, 2022
a8df5d6
Merge remote-tracking branch 'origin/master' into ls-qat
Aug 26, 2022
11af3e7
fix format
Aug 26, 2022
389106d
fix conflicts
godweiyang Aug 30, 2022
a62cad9
fix custom train&infer bug
godweiyang Aug 30, 2022
541ec0f
fix quant infer size overflow
godweiyang Aug 30, 2022
91d7c1a
fix ls gpt export bug (extra_decode_length)
godweiyang Aug 30, 2022
a42a98c
fix hf bart cmax init and state
Aug 30, 2022
2e8fe5a
fix max-batch-tokens bug of bart predict
godweiyang Aug 30, 2022
polish training gemm algo code
godweiyang committed Aug 3, 2022
commit ee03974bfa9579feadc7db997dab553fb52341c3
2 changes: 2 additions & 0 deletions build_test.sh

@@ -0,0 +1,2 @@
+nvcc -o gemm gemm.cpp -lcublasLt -lcublas
+./gemm 512 1024 1024
1,514 changes: 1,514 additions & 0 deletions gemm.cpp

Large diffs are not rendered by default.
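Since the 1,514-line gemm.cpp is not rendered here, the following is a minimal sketch of what a standalone cuBLASLt INT8 GEMM benchmark of this flavor can look like, assuming the three arguments in `./gemm 512 1024 1024` are m, n, k. Everything in it (file name, argument order, the TN column-major layout, the fixed repeat count) is an assumption for illustration; the real gemm.cpp presumably also sweeps algorithm ids and configs (cf. the later "gemm best algo search" commits), which this sketch leaves out.

```cpp
// Hypothetical sketch, NOT the actual gemm.cpp from this PR.
// Build analogously to build_test.sh: nvcc -o gemm gemm_sketch.cpp -lcublasLt -lcublas
#include <cstdio>
#include <cstdlib>
#include <cublasLt.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
  if (argc != 4) {
    printf("usage: %s m n k\n", argv[0]);
    return 1;
  }
  int m = atoi(argv[1]), n = atoi(argv[2]), k = atoi(argv[3]);

  // Device buffers; zero-filled since only timing matters here.
  int8_t *A, *B;
  int32_t *C;
  cudaMalloc((void **)&A, sizeof(int8_t) * m * k);
  cudaMalloc((void **)&B, sizeof(int8_t) * k * n);
  cudaMalloc((void **)&C, sizeof(int32_t) * m * n);
  cudaMemset(A, 0, sizeof(int8_t) * m * k);
  cudaMemset(B, 0, sizeof(int8_t) * k * n);

  cublasLtHandle_t handle;
  cublasLtCreate(&handle);

  // INT8 inputs, INT32 accumulation. A is stored k x m and transposed
  // ("TN" form, which column-major INT8 IMMA kernels expect); B is k x n.
  cublasLtMatmulDesc_t op_desc;
  cublasLtMatmulDescCreate(&op_desc, CUBLAS_COMPUTE_32I, CUDA_R_32I);
  cublasOperation_t op_t = CUBLAS_OP_T;
  cublasLtMatmulDescSetAttribute(op_desc, CUBLASLT_MATMUL_DESC_TRANSA, &op_t,
                                 sizeof(op_t));
  cublasLtMatrixLayout_t a_desc, b_desc, c_desc;
  cublasLtMatrixLayoutCreate(&a_desc, CUDA_R_8I, k, m, k);
  cublasLtMatrixLayoutCreate(&b_desc, CUDA_R_8I, k, n, k);
  cublasLtMatrixLayoutCreate(&c_desc, CUDA_R_32I, m, n, m);

  // Time a handful of launches with CUDA events; NULL algo lets
  // cuBLASLt pick a default instead of a tuned one.
  int32_t alpha = 1, beta = 0;
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  const int repeat = 10;
  cudaEventRecord(start);
  for (int i = 0; i < repeat; ++i) {
    cublasLtMatmul(handle, op_desc, &alpha, A, a_desc, B, b_desc, &beta, C,
                   c_desc, C, c_desc, /*algo=*/NULL, /*workspace=*/NULL, 0,
                   /*stream=*/0);
  }
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);
  float ms = 0.f;
  cudaEventElapsedTime(&ms, start, stop);
  printf("m=%d n=%d k=%d: %.3f ms per igemm\n", m, n, k, ms / repeat);

  cublasLtMatrixLayoutDestroy(a_desc);
  cublasLtMatrixLayoutDestroy(b_desc);
  cublasLtMatrixLayoutDestroy(c_desc);
  cublasLtMatmulDescDestroy(op_desc);
  cublasLtDestroy(handle);
  cudaFree(A);
  cudaFree(B);
  cudaFree(C);
  return 0;
}
```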

38 changes: 14 additions & 24 deletions lightseq/training/csrc/kernels/cublas_wrappers.cpp

@@ -168,52 +168,42 @@ void cublaslt_igemm(const int8_t *input_a, const int8_t *input_b,
   // make best algo with algoId: 21, customOption: 0, tile: 20, splitK_val: 0,
   // swizzle: 0, reductionScheme: 0, workspaceSize: 0, stages: 17
   cublasLtMatmulAlgo_info algo_info = {21, 0, 20, 0, 0, 0, 0, 17};
 
-  // algoId: 21, customOption: 0, tile: 15, splitK_val: 0, swizzle: 0,
-  // reductionScheme: 0, workspaceSize: 0, stages: 24
   if (m < 500) {
     algo_info.tile = 15;
     algo_info.stages = 24;
   }
 
-  cublasComputeType_t computeType = CUBLAS_COMPUTE_32I;
-  cudaDataType_t scaleType = CUDA_R_32F;
-  cudaDataType_t Atype = CUDA_R_8I;
-  cudaDataType_t Btype = CUDA_R_8I;
-  cudaDataType_t Ctype = CUDA_R_8I;
-
   // get algo
-  cublasLtMatmulAlgo_t algo2;
-  char *workSpace2 = NULL;
-  int workspaceSize = 0;
-  workspaceSize = algo_info.workspaceSize;
-  cublasLtMatmulAlgoInit(cublasLt_handle, computeType, CUDA_R_32F, CUDA_R_8I,
+  cublasLtMatmulAlgo_t algo;
+  char *workSpace = NULL;
+  int workspaceSize = algo_info.workspaceSize;
+  cublasLtMatmulAlgoInit(cublasLt_handle, compute_type, CUDA_R_32F, CUDA_R_8I,
                          CUDA_R_8I, CUDA_R_8I, CUDA_R_8I, algo_info.algoId,
-                         &algo2);
+                         &algo);
   cublasLtMatmulAlgoConfigSetAttribute(
-      &algo2, CUBLASLT_ALGO_CONFIG_CUSTOM_OPTION, &(algo_info.customOption),
+      &algo, CUBLASLT_ALGO_CONFIG_CUSTOM_OPTION, &(algo_info.customOption),
       sizeof(algo_info.customOption));
-  cublasLtMatmulAlgoConfigSetAttribute(&algo2, CUBLASLT_ALGO_CONFIG_TILE_ID,
+  cublasLtMatmulAlgoConfigSetAttribute(&algo, CUBLASLT_ALGO_CONFIG_TILE_ID,
                                        &(algo_info.tile),
                                        sizeof(algo_info.tile));
-  cublasLtMatmulAlgoConfigSetAttribute(&algo2, CUBLASLT_ALGO_CONFIG_SPLITK_NUM,
+  cublasLtMatmulAlgoConfigSetAttribute(&algo, CUBLASLT_ALGO_CONFIG_SPLITK_NUM,
                                        &(algo_info.splitK_val),
                                        sizeof(algo_info.splitK_val));
   cublasLtMatmulAlgoConfigSetAttribute(
-      &algo2, CUBLASLT_ALGO_CONFIG_CTA_SWIZZLING, &(algo_info.swizzle),
+      &algo, CUBLASLT_ALGO_CONFIG_CTA_SWIZZLING, &(algo_info.swizzle),
       sizeof(algo_info.swizzle));
   cublasLtMatmulAlgoConfigSetAttribute(
-      &algo2, CUBLASLT_ALGO_CONFIG_REDUCTION_SCHEME,
-      &(algo_info.reductionScheme), sizeof(int));
-  cublasLtMatmulAlgoConfigSetAttribute(&algo2, CUBLASLT_ALGO_CONFIG_STAGES_ID,
+      &algo, CUBLASLT_ALGO_CONFIG_REDUCTION_SCHEME,
+      &(algo_info.reductionScheme), sizeof(algo_info.reductionScheme));
+  cublasLtMatmulAlgoConfigSetAttribute(&algo, CUBLASLT_ALGO_CONFIG_STAGES_ID,
                                        &(algo_info.stages),
                                        sizeof(algo_info.stages));
 
   CHECK_GPU_ERROR(cublasLtMatmul(
       cublasLt_handle, matmul_desc, alpha, input_a, desc_a, input_b, desc_b,
-      // beta, output_c, desc_c, output_c, desc_c, NULL, NULL, 0, stream));
-      beta, output_c, desc_c, output_c, desc_c, &algo2, workSpace2,
-      workspaceSize, stream));
+      beta, output_c, desc_c, output_c, desc_c, &algo, workSpace, workspaceSize,
+      stream));
 
   CHECK_GPU_ERROR(cublasLtMatmulDescDestroy(matmul_desc));
   CHECK_GPU_ERROR(cublasLtMatrixLayoutDestroy(desc_a));
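The initializer {21, 0, 20, 0, 0, 0, 0, 17} above is easier to read next to the struct it fills. Below is a hedged sketch of cublasLtMatmulAlgo_info as implied by the field accesses in this diff; the definition the repo actually ships may differ in field types or order, so check the LightSeq sources before relying on it.

```cpp
// Sketch inferred from the cublasLtMatmulAlgoConfigSetAttribute calls above;
// hypothetical, not copied from the repo.
typedef struct {
  int algoId;           // algorithm id passed to cublasLtMatmulAlgoInit
  int customOption;     // CUBLASLT_ALGO_CONFIG_CUSTOM_OPTION
  int tile;             // CUBLASLT_ALGO_CONFIG_TILE_ID
  int splitK_val;       // CUBLASLT_ALGO_CONFIG_SPLITK_NUM
  int swizzle;          // CUBLASLT_ALGO_CONFIG_CTA_SWIZZLING
  int reductionScheme;  // CUBLASLT_ALGO_CONFIG_REDUCTION_SCHEME
  int workspaceSize;    // workspace bytes handed to cublasLtMatmul
  int stages;           // CUBLASLT_ALGO_CONFIG_STAGES_ID
} cublasLtMatmulAlgo_info;
```

Read through this struct, the hard-coded default is algo id 21 with tile 20 and 17 stages, and the m < 500 branch swaps in a smaller tile (15) with more stages (24) for short batches. The sizeof(int) to sizeof(algo_info.reductionScheme) change also makes the attribute size track the field's actual type rather than a hard-coded assumption.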