Merge pull request ggerganov#1 from slaren/cmrp-fixes
slaren: Cmrp fixes
RefractAI authored Apr 6, 2024
2 parents 26e8f23 + 78819c0 commit 8b6577b
Showing 7 changed files with 277 additions and 295 deletions.
2 changes: 1 addition & 1 deletion convert-hf-to-gguf.py
@@ -160,7 +160,7 @@ def write_tensors(self):
             data = data.astype(np.float32)

         # TODO: Why can't we use these float16 as-is? There should be no reason to store float16 as float32
-        if self.ftype == 1 and data_dtype == np.float16 and n_dims == 1:
+        if self.ftype == 1 and data_dtype == np.float16 and (n_dims == 1 or new_name.endswith("_norm.weight")):
             data = data.astype(np.float32)

         # if f16 desired, convert any float32 2-dim weight tensors to float16
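The changed condition above widens the set of tensors kept in float32: previously only 1-D float16 tensors were upcast; after this commit, any tensor whose (renamed) name ends in `_norm.weight` is upcast as well. A minimal, self-contained sketch of that rule (the helper name `maybe_upcast` and its signature are illustrative, not part of the actual script):

```python
import numpy as np

def maybe_upcast(data: np.ndarray, new_name: str, ftype: int) -> np.ndarray:
    """Sketch of the dtype rule from the diff above.

    When the target file type is f16 (ftype == 1), 1-D float16 tensors
    and normalization weights (*_norm.weight) are stored as float32.
    """
    n_dims = data.ndim
    if ftype == 1 and data.dtype == np.float16 and (
        n_dims == 1 or new_name.endswith("_norm.weight")
    ):
        return data.astype(np.float32)
    return data
```

For example, a 2-D `attn_norm.weight` tensor now comes back as float32, while an ordinary 2-D float16 weight is left untouched.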
