From 661e0fe746cb5e7f52853951cdcf1f5b9429f5f7 Mon Sep 17 00:00:00 2001
From: Augustin Chan
Date: Mon, 27 Nov 2023 01:55:52 +0800
Subject: [PATCH] cleanup

---
 _posts/2023-11-26-Its Hard to find an Uncensored Model.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2023-11-26-Its Hard to find an Uncensored Model.md b/_posts/2023-11-26-Its Hard to find an Uncensored Model.md
index 70dbb1b..56369d4 100644
--- a/_posts/2023-11-26-Its Hard to find an Uncensored Model.md
+++ b/_posts/2023-11-26-Its Hard to find an Uncensored Model.md
@@ -125,7 +125,7 @@ filtered dataset further to see why.
 
 ## Spectacular Reddit Post on Huggingface Transformers / Llama.cpp / GGUF /GGML
 
-https://www.reddit.com/r/LocalLLaMA/comments/178el7j/transformers_llamacpp_gguf_ggml_gptq_other_animals/
+[Awesome Breakdown Post](https://www.reddit.com/r/LocalLLaMA/comments/178el7j/transformers_llamacpp_gguf_ggml_gptq_other_animals/)
 
 Some clarifications/findings from the above: GGML isn't necessarily quantized. I ran the GGML converter on my GPT-J float 16 model. It remained float 16,