Greetings to all,
When I run the following command, I encounter an issue. Has anyone else experienced this? I've tested my cluster with other MPI tools (for example, computing pi) and it works well.
./mpirun -hostfile hosts -n 4 /nfs-storage/llama.cpp/main --model "/nfs-storage/models/mistral-7b-v0.1.Q4_K_M.gguf" --threads 2 -p "Raspberry Pi computers are" -n 128
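For context, `hosts` is a standard MPICH/Hydra hostfile: one machine per line, optionally followed by `:<n>` to cap how many ranks run on that host. A minimal sketch (the hostnames and counts below are illustrative placeholders, not my exact file):

```
node1:1
node2:1
node3:1
node4:1
```

With `-n 4`, Hydra should then place one rank on each of the four nodes.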
Here is the output (all four ranks print the same startup log interleaved; duplicated lines collapsed):
```
Log start
main: build = 1775 (eec22a1)
main: built with gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for aarch64-linux-gnu
main: seed = 1704525139
llama_model_loader: loaded meta data with 20 key-value pairs and 291 tensors from /nfs-storage/models/mistral-7b-v0.1.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = mistralai_mistral-7b-v0.1
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0,000010
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000,000000
llama_model_loader: - kv 11: general.file_type u32 = 15
llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0,000000, 0,000000, 0,000000, 0,0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0,0e+00
llm_load_print_meta: f_norm_rms_eps = 1,0e-05
llm_load_print_meta: f_clamp_kqv = 0,0e+00
llm_load_print_meta: f_max_alibi_bias = 0,0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000,0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 7,24 B
llm_load_print_meta: model size = 4,07 GiB (4,83 BPW)
llm_load_print_meta: general.name = mistralai_mistral-7b-v0.1
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0,11 MiB
llm_load_tensors: system memory used = 4165,48 MiB
................................................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000,0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: KV self size = 64,00 MiB, K (f16): 32,00 MiB, V (f16): 32,00 MiB
llama_build_graph: non-view tensors processed: 676/676
llama_new_context_with_model: compute buffer total size = 76,19 MiB
GGML_ASSERT: llama.cpp:9881: false && "not implemented"
GGML_ASSERT: llama.cpp:6631: false && "not implemented"
GGML_ASSERT: llama.cpp:9881: false && "not implemented"
GGML_ASSERT: llama.cpp:9881: false && "not implemented"
[New LWP 108263]
[New LWP 108268]
[New LWP 108261]
[New LWP 108266]
[New LWP 108262]
[New LWP 108265]
[New LWP 108264]
[New LWP 108267]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
0x0000007f83533c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0 0x0000007f83533c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x000000557c325280 in ggml_print_backtrace ()
#2 0x000000557c36443c in llama_decode ()
#3 0x000000557c3ac610 in llama_init_from_gpt_params(gpt_params&) ()
#4 0x000000557c318d88 in main ()
0x0000007f96c68c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0 0x0000007f96c68c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x000000555c44f280 in ggml_print_backtrace ()
#2 0x000000555c4969d8 in llama_new_context_with_model ()
#3 0x000000555c4d63fc in llama_init_from_gpt_params(gpt_params&) ()
#4 0x000000555c442d88 in main ()
0x0000007f7e219c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0 0x0000007f7e219c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x0000005569124280 in ggml_print_backtrace ()
#2 0x000000556916b9d8 in llama_new_context_with_model ()
#3 0x00000055691ab3fc in llama_init_from_gpt_params(gpt_params&) ()
#4 0x0000005569117d88 in main ()
[Inferior 1 (process 108257) detached]
[Inferior 1 (process 108259) detached]
[Inferior 1 (process 108258) detached]
0x0000007f83fc9c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0 0x0000007f83fc9c0c in __GI___wait4 (pid=<optimized out>, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x000000555c4b6280 in ggml_print_backtrace ()
#2 0x000000555c4fd9d8 in llama_new_context_with_model ()
#3 0x000000555c53d3fc in llama_init_from_gpt_params(gpt_params&) ()
#4 0x000000555c4a9d88 in main ()
[Inferior 1 (process 108260) detached]
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 108257 RUNNING AT jetson@node1
= EXIT CODE: 134
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
```