
Package request: pytorch on darwin with GPU (MPS) support #243868

Closed · n8henrie opened this issue Jul 16, 2023 · 30 comments · Fixed by #351778
Labels: 0.kind: packaging request (Request for a new package to be added)

Comments

@n8henrie (Contributor) commented Jul 16, 2023

Project description

Not sure if this is a "packaging request", but it didn't seem like a bug report; I just wanted a tracking issue. I might try to hack on this eventually.

PyTorch on aarch64-darwin now supports GPU acceleration (via the MPS backend):

https://pytorch.org/docs/stable/notes/mps.html

The currently packaged pytorch runs but doesn't use my Mac's GPU:

$ nix-shell -p 'python3.withPackages (ps: with ps; [ pytorch ])' --command "python -c 'import torch; print(torch.backends.mps.is_available())'"
False

In contrast, using the pytorch-provided wheel:

$ python3 -m venv .venv && source ./.venv/bin/activate
$ pip install https://files.pythonhosted.org/packages/85/68/f901437d3e3ef6fe97adb1f372479626d994185b8fa06803f5bdf3bb90fd/torch-2.0.1-cp311-none-macosx_11_0_arm64.whl
$ python -c 'import torch; print(torch.backends.mps.is_available())'
True
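
For reference, here's a minimal, hypothetical sketch of packaging that same wheel with buildPythonPackage (the hash is a placeholder and the wheel's runtime dependencies are omitted for brevity; this is a sketch of the workaround, not a proposed nixpkgs change):

{ pkgs ? import <nixpkgs> { system = "aarch64-darwin"; } }:
let
  # Hypothetical wheel-based torch, mirroring the pip command above.
  torch-wheel = pkgs.python311Packages.buildPythonPackage {
    pname = "torch";
    version = "2.0.1";
    format = "wheel";
    src = pkgs.fetchurl {
      url = "https://files.pythonhosted.org/packages/85/68/f901437d3e3ef6fe97adb1f372479626d994185b8fa06803f5bdf3bb90fd/torch-2.0.1-cp311-none-macosx_11_0_arm64.whl";
      # Placeholder: substitute the hash Nix reports on the first build.
      hash = pkgs.lib.fakeHash;
    };
    # The real package also needs the wheel's runtime dependencies
    # (filelock, typing-extensions, sympy, networkx, jinja2, ...).
    doCheck = false;
  };
in
pkgs.mkShell {
  packages = [ (pkgs.python311.withPackages (_: [ torch-wheel ])) ];
}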

Metadata

cc @NixOS/darwin-maintainers

n8henrie added the 0.kind: packaging request label on Jul 16, 2023

@uri-canva (Contributor)

Very cool! cc maintainers @tscholak @teh @thoughtpolice. Also cc @samuela.

@samuela (Member) commented Jul 17, 2023

curious if python3Packages.torch-bin works?

@n8henrie (Contributor, Author)

@samuela torch-bin gives me unsupported system (cuda_nvtx-11.7.50)

@samuela (Member) commented Jul 17, 2023

hmmm that's odd. aarch64-darwin is a supported platform

https://github.com/NixOS/nixpkgs/blob/04f7903bb9cd886a0e2885c728ff735f9248de89/pkgs/development/python-modules/torch/bin.nix#L79C85-L79C85

and there are srcs available here:

aarch64-darwin-38 = {
  name = "torch-1.13.1-cp38-none-macosx_11_0_arm64.whl";
  url = "https://download.pytorch.org/whl/cpu/torch-1.13.1-cp38-none-macosx_11_0_arm64.whl";
  hash = "sha256-7usgTTD9QK9qLYCHm0an77489Dzb64g43U89EmzJCys=";
};
aarch64-darwin-39 = {
  name = "torch-1.13.1-cp39-none-macosx_11_0_arm64.whl";
  url = "https://download.pytorch.org/whl/cpu/torch-1.13.1-cp39-none-macosx_11_0_arm64.whl";
  hash = "sha256-4N+QKnx91seVaYUy7llwzomGcmJWNdiF6t6ZduWgSUk=";
};
aarch64-darwin-310 = {
  name = "torch-1.13.1-cp310-none-macosx_11_0_arm64.whl";
  url = "https://download.pytorch.org/whl/cpu/torch-1.13.1-cp310-none-macosx_11_0_arm64.whl";
  hash = "sha256-ASKAaxEblJ0h+hpfl2TR/S/MSkfLf4/5FCBP1Px1LtU=";
};

what version of python are you using?
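
(Schematically, bin.nix selects the wheel by system plus Python version; the simplified sketch below is not the exact nixpkgs code, but it shows why a Python release without a matching srcs entry fails as unsupported.)

{ stdenv, fetchurl, python }:
let
  srcs = {
    # One entry per supported system/Python pair.
    aarch64-darwin-310 = fetchurl {
      url = "https://download.pytorch.org/whl/cpu/torch-1.13.1-cp310-none-macosx_11_0_arm64.whl";
      hash = "sha256-ASKAaxEblJ0h+hpfl2TR/S/MSkfLf4/5FCBP1Px1LtU=";
    };
  };
  # "3.10" -> "310"
  pyVer = builtins.replaceStrings [ "." ] [ "" ] python.pythonVersion;
in
srcs."${stdenv.system}-${pyVer}" or (throw "Unsupported system: ${stdenv.system}-${pyVer}")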

@n8henrie (Contributor, Author)

I made nix and nix-gpu branches at https://github.com/n8henrie/whisper; nix-gpu uses the wheel (and I've confirmed it enables the GPU to run whisper).

However, I'm not seeing any speed difference with a small test file (full of particularly witty content):

$ nix shell github:n8henrie/whisper/nix -c time whisper 20220922\ 084923.m4a
/nix/store/0fyl0qi21ljiswg05qz5p2bpnl016k0l-python3.10-whisper/lib/python3.10/site-packages/whisper/transcribe.py:114: UserWarning: FP16 is not supported on CPU; using FP32 instead
  warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
[00:00.000 --> 00:04.640]  This is an example of voice recording. Something might actually say if I
[00:04.640 --> 00:09.440]  recording something on my watch. I think this is how it would turn out.
19.93user 11.08system 0:14.23elapsed 217%CPU (0avgtext+0avgdata 2062800maxresident)k
0inputs+0outputs (3246major+212332minor)pagefaults 0swaps
$
$ nix shell github:n8henrie/whisper/nix-gpu -c time whisper 20220922\ 084923.m4a
/nix/store/fvpkcahvngmsssm3yfchvgw11hdic6sx-python3.10-whisper/lib/python3.10/site-packages/whisper/transcribe.py:114: UserWarning: FP16 is not supported on CPU; using FP32 instead
  warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
[00:00.000 --> 00:04.640]  This is an example of voice recording. Something might actually say if I
[00:04.640 --> 00:09.440]  recording something on my watch. I think this is how it would turn out.
19.81user 11.16system 0:14.02elapsed 220%CPU (0avgtext+0avgdata 2177600maxresident)k
0inputs+0outputs (3955major+212007minor)pagefaults 0swaps

Not sure if that's a training-vs-inference issue, or (more likely) whisper isn't configured to use the GPU; it has a --device flag but seems to only look for CUDA devices by default.

@n8henrie (Contributor, Author)

Huh, looks like I just need to pass --device mps (openai/whisper#382), but no luck:

$ nix shell github:n8henrie/whisper/nix-gpu -c time whisper --device mps 20220922\ 084923.m4a
Traceback (most recent call last):
  File "/nix/store/fvpkcahvngmsssm3yfchvgw11hdic6sx-python3.10-whisper/bin/.whisper-wrapped", line 9, in <module>
    sys.exit(cli())
  File "/nix/store/fvpkcahvngmsssm3yfchvgw11hdic6sx-python3.10-whisper/lib/python3.10/site-packages/whisper/transcribe.py", line 444, in cli
    model = load_model(model_name, device=device, download_root=model_dir)
  File "/nix/store/fvpkcahvngmsssm3yfchvgw11hdic6sx-python3.10-whisper/lib/python3.10/site-packages/whisper/__init__.py", line 154, in load_model
    return model.to(device)
  File "/nix/store/0y2cidaighjvzwlw31k6fkcragba3jki-python3.10-torch-2.0.1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/nix/store/0y2cidaighjvzwlw31k6fkcragba3jki-python3.10-torch-2.0.1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 844, in _apply
    self._buffers[key] = fn(buf)
  File "/nix/store/0y2cidaighjvzwlw31k6fkcragba3jki-python3.10-torch-2.0.1/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'SparseMPS' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, MPS, Meta, QuantizedCPU, QuantizedMeta, MkldnnCPU, SparseCPU, SparseMeta, SparseCsrCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
MPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterMPS.cpp:22748 [kernel]
Meta: registered at /dev/null:241 [kernel]
QuantizedCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterQuantizedCPU.cpp:929 [kernel]
QuantizedMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterQuantizedMeta.cpp:105 [kernel]
MkldnnCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterMkldnnCPU.cpp:507 [kernel]
SparseCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterSparseCPU.cpp:1379 [kernel]
SparseMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterSparseMeta.cpp:249 [kernel]
SparseCsrCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterSparseCsrCPU.cpp:1128 [kernel]
BackendSelect: registered at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/RegisterBackendSelect.cpp:726 [kernel]
Python: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:21 [kernel]
Negative: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:23 [kernel]
ZeroTensor: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:90 [kernel]
ADInplaceOrView: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradCPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradCUDA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradHIP: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradXLA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradMPS: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradIPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradXPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradHPU: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradVE: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradLazy: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradMeta: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradMTIA: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradPrivateUse1: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradPrivateUse2: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradPrivateUse3: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
AutogradNestedTensor: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:17484 [autograd kernel]
Tracer: registered at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/generated/TraceType_2.cpp:16726 [kernel]
AutocastCPU: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]

Command exited with non-zero status 1
3.87user 1.23system 0:03.39elapsed 150%CPU (0avgtext+0avgdata 1210224maxresident)k
0inputs+0outputs (19major+112713minor)pagefaults 0swaps

@samuela (Member) commented Jul 17, 2023

timing-based benchmarks are tricky to debug... does import torch; print(torch.backends.mps.is_available()) work for you in this flake? Also, out of curiosity, what's the equivalent of nvtop for the MPS backend?

@samuela (Member) commented Jul 17, 2023

> NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'SparseMPS' backend. [...]

Yeah, this error looks like it's an issue with whisper as opposed to Nix's packaging of torch/torch-bin.

We can leave this open as a tracking issue for getting MPS support into our source build of torch, however.

@n8henrie (Contributor, Author)

> what version of python are you using?

Tried Python 3.9 through 3.11. I'm pinned to 23.05; I wonder if it's different in unstable?

Yup, that was it!

$ nix-shell -I nixpkgs=channel:nixpkgs-unstable -p 'python39.withPackages (ps: with ps; [ pytorch-bin ])' --command "python -c 'import torch; print(torch.backends.mps.is_available())'"
True

> import torch; print(torch.backends.mps.is_available())

@samuela yes, I just patched this into the whisper code (in the cli() function) and confirmed my nix branch shows False and my nix-gpu branch shows the device as expected.
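
For future reference, a hypothetical sketch of applying that debug print without forking, via a postPatch override (the openai-whisper attribute and the "def cli():" anchor in whisper/transcribe.py are assumptions):

{ pkgs ? import <nixpkgs> { } }:
pkgs.python3Packages.openai-whisper.overridePythonAttrs (old: {
  postPatch = (old.postPatch or "") + ''
    # GNU sed: \n in the replacement inserts a newline after the def line.
    sed -i 's/^def cli():/def cli():\n    import torch; print("MPS available:", torch.backends.mps.is_available())/' \
      whisper/transcribe.py
  '';
})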

@samuela (Member) commented Jul 17, 2023

Yup, that makes for two TODOs:

  • Add MPS support to the source build python3Packages.torch
  • Add python 3.11, 3.12 support to torch-bin

@samuela (Member) commented Jul 21, 2023

I found that torchvision-bin support on aarch64-darwin effectively prevented MPS from being usable. Submitted #244716 to fix.

@samuela (Member) commented Jul 27, 2023

I can confirm that MPS support works with torch-bin and torchvision-bin now.

@n8henrie (Contributor, Author)

I'm still trying to get it working. I tried adding the USE_MPS flag, but the result still lacks MPS support; I think the build script uses xcrun to sort out the SDK path, fails to find the MPSGraph framework, and disables MPS:

bash: line 1: xcrun: command not found
-- MPS: unable to get MacOS sdk version
-- MPSGraph framework not found

https://github.com/pytorch/pytorch/blob/1da41157028ee8224e456f6fab18bc22fa2637fe/CMakeLists.txt#L99

@n8henrie (Contributor, Author) commented Aug 4, 2023

Can any of the aarch64-darwin folks help me build this locally via nix develop?

It currently builds fine with nix build --offline (so I don't think it's a substituter issue) and with nix shell.

Steps I'm taking / have tried:

Attempt 1

  1. nix develop .#python310Packages.pytorch
  2. source $stdenv/setup
  3. genericBuild

It looks like /usr/bin/xcrun bleeds into the environment, USE_MPS gets set to true, and the build fails to find the relevant SDK libraries (fixing this would hopefully help me close this issue, but I'd like to at least get it to build first).

Attempt 2

  1. nix develop -i .#python310Packages.pytorch --command bash --norc
  2. source $stdenv/setup
  3. genericBuild

USE_MPS is set to false, but the build fails looking for #include <google/protobuf/implicit_weak_message.h>

After reading this issue, it looks like some of these phases are functions and some are variables, so I might need to e.g. eval "${buildPhase}" instead of just buildPhase.

Attempt 3

  1. nix develop -i .#python310Packages.pytorch --command bash --norc
  2. source $stdenv/setup
  3. unpackPhase
  4. eval $patchPhase
  5. eval $preConfigurePhases
  6. eval $configurePhase
  7. cd source
  8. eval $buildPhase

Here it fails with the same errors about protobuf.

I think these are relevant convos (for my future reference):

Can anyone point out what I'm getting wrong here? nix develop should be able to build it if nix build can, right? TIA for any help for a noob.

@nixos-discourse

This issue has been mentioned on NixOS Discourse. There might be relevant details there:

https://discourse.nixos.org/t/nix-develop-fails-with-command-bash-norc/31896/1

@n8henrie (Contributor, Author)

According to the PyTorch CMakeLists, MPS support requires SDK >= 12.3; currently we only have:

$ nix eval --json --apply 'builtins.attrNames' github:nixos/nixpkgs/master#darwin | jq -r '.[] | select(contains("sdk"))'                                                                          
apple_sdk
apple_sdk_10_12
apple_sdk_11_0

I've tried anyway, adding the MetalPerformanceShaders and MetalPerformanceShadersGraph frameworks to the build inputs and substituting xcbuild.xcrun for xcrun, with no luck. I suspect we might just be blocked on a newer SDK?
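
For reference, what I tried looks roughly like the sketch below (pre-refactor framework attributes; whether xcbuild's xcrun satisfies CMake's SDK-version probe is exactly what's in question):

{ pkgs ? import <nixpkgs> { } }:
pkgs.python3Packages.torch.overrideAttrs (old: {
  # xcbuild's xcrun shim, so the CMake SDK probe has something to call.
  nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ pkgs.xcbuild.xcrun ];
  # Frameworks from the 11.0 SDK set; presumably too old (< 12.3) for MPS.
  buildInputs =
    (old.buildInputs or [ ])
    ++ (with pkgs.darwin.apple_sdk_11_0.frameworks; [
      MetalPerformanceShaders
      MetalPerformanceShadersGraph
    ]);
  USE_MPS = 1;
})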

@samuela (Member) commented Aug 21, 2023

hmmm interesting... what's the process for adding a new apple_sdk version? cc @NixOS/darwin-maintainers

@viraptor (Contributor)

@samuela It's a long task with side-quests. See #242666 for some discussion; MoltenVK also depends on SDK 12. Work is ongoing on various aspects, and the SDKs themselves have been proposed in #229210.

@david-r-cox (Member)

People looking for a workaround might find this flake helpful. It's based on @n8henrie's wheel approach.

nix run "github:david-r-cox/pytorch-darwin-env#verificationScript"

@reckenrode (Contributor)

After #346043 lands, this should be doable. Just add apple-sdk_13 as a build input.
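
A hypothetical overlay sketch of that suggestion (assumes a nixpkgs containing #346043; whether USE_MPS must also be set explicitly is untested):

final: prev: {
  pythonPackagesExtensions = prev.pythonPackagesExtensions ++ [
    (pyFinal: pyPrev: {
      torch = pyPrev.torch.overrideAttrs (old: {
        # The new-style SDK is an ordinary build input.
        buildInputs = (old.buildInputs or [ ]) ++ [ final.apple-sdk_13 ];
        USE_MPS = 1;
      });
    })
  ];
}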

@mikatammi (Contributor)

I tried to enable this with:

diff --git a/pkgs/development/python-modules/torch/default.nix b/pkgs/development/python-modules/torch/default.nix
index cd43c04d67d4..e7bbf4eb1b31 100644
--- a/pkgs/development/python-modules/torch/default.nix
+++ b/pkgs/development/python-modules/torch/default.nix
@@ -364,10 +364,14 @@ buildPythonPackage rec {
   # NB technical debt: building without NNPACK as workaround for missing `six`
   USE_NNPACK = 0;

+  # USE_PYTORCH_METAL = stdenv.hostPlatform.isDarwin;
+  USE_MPS = stdenv.hostPlatform.isDarwin;
+
   cmakeFlags =
     [
       # (lib.cmakeBool "CMAKE_FIND_DEBUG_MODE" true)
       (lib.cmakeFeature "CUDAToolkit_VERSION" cudaPackages.cudaVersion)
+      (lib.cmakeBool "USE_MPS" stdenv.hostPlatform.isDarwin)
     ]
     ++ lib.optionals cudaSupport [
       # Unbreaks version discovery in enable_language(CUDA) when wrapping nvcc with ccache
@@ -519,8 +523,12 @@ buildPythonPackage rec {
     ++ lib.optionals (cudaSupport || rocmSupport) [ effectiveMagma ]
     ++ lib.optionals stdenv.hostPlatform.isLinux [ numactl ]
     ++ lib.optionals stdenv.hostPlatform.isDarwin [
-      darwin.apple_sdk.frameworks.Accelerate
-      darwin.apple_sdk.frameworks.CoreServices
+      darwin.apple_sdk_12_3.frameworks.Accelerate
+      darwin.apple_sdk_12_3.frameworks.CoreServices
+      darwin.apple_sdk_12_3.frameworks.Metal
+      darwin.apple_sdk_12_3.frameworks.MetalKit
+      darwin.apple_sdk_12_3.frameworks.MetalPerformanceShaders
+      darwin.apple_sdk_12_3.frameworks.MetalPerformanceShadersGraph
       darwin.libobjc
     ]
     ++ lib.optionals tritonSupport [ _tritonEffective ]

I'm still getting the xcrun error:

python3.12-torch> bash: line 1: xcrun: command not found
python3.12-torch> -- MPS: unable to get MacOS sdk version
python3.12-torch> -- MPSGraph framework not found

@n8henrie (Contributor, Author)

I was waiting on @reckenrode's recent massive SDK refactor to land (at least in master; still in staging as of this morning) and will retry then.

@mikatammi (Contributor)

My mistake, I didn't realize #346043 was not yet in master.

@n8henrie (Contributor, Author) commented Oct 27, 2024

A couple of sites are helpful for tracking; the best known is https://nixpk.gs/pr-tracker.html?pr=346043

(I really wish we could get an RSS feed for when user-specified PRs land in a specific branch.)

@emilazy (Member) commented Oct 27, 2024

It’s in staging-next; you can just target a PR at that. Many things have already been built.

@mikatammi (Contributor)

I rebased my hack on top of staging-next; now I get different errors:

-- CLANG_VERSION_STRING:         clang version 16.0.6
Target: arm64-apple-darwin
Thread model: posix
InstalledDir: /nix/store/r0w268903vjlq1vrnajb3lvll4hs5z8g-clang-16.0.6/bin

warning: unhandled Target key DefaultVariant
warning: unhandled Target key SupportedTargets
warning: unhandled Target key VersionMap
warning: unhandled Target key Variants
warning: unhandled Target key DebuggerOptions
warning: unhandled Product key iOSSupportVersion
[the six warnings above repeat several more times]
-- sdk version: 11.3, mps supported: OFF
-- MPSGraph framework not found
-- Could not find ccache. Consider installing ccache to speed up compilation.

It now says the SDK version is 11.3; maybe I need to change something else in that package as well, since I was trying to specify version 12.3.

@reckenrode (Contributor) commented Oct 27, 2024

This was one of my test cases. Add apple-sdk_13 as a buildInput. Delete the old frameworks from buildInputs. That should be it.

It has to be the 13.3 SDK because MPS isn’t available in older SDKs, and the version of pytorch in nixpkgs does not build with the 14.4 (or presumably 15.0) SDK.

Edit: Apple’s article on MPS in PyTorch suggests the 12.3 SDK should work, so I could be wrong about the minimum supported SDK, but the requirements may have changed since it was posted. You can try the 12.3 SDK by adding apple-sdk_12 instead to buildInputs.

> It now says the SDK version is 11.3; maybe I need to change something else in that package as well, since I was trying to specify version 12.3.

After the update, the old SDK frameworks are stubs that do nothing on their own. If you used overrideSDK stdenv "12.3", you would get the 12.3 SDK, but the old pattern is deprecated. After a tree-wide cleanup following 24.11, it will be moved to darwin-aliases.nix and will warn on use.
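
A minimal sketch of the two spellings, for comparison (assuming python3Packages.torch accepts a stdenv override; the overrideSDK form is the deprecated one):

{ pkgs ? import <nixpkgs> { } }:
{
  # New pattern (post-#346043): the SDK is an ordinary build input.
  withNewSdk = pkgs.python3Packages.torch.overrideAttrs (old: {
    buildInputs = (old.buildInputs or [ ]) ++ [ pkgs.apple-sdk_13 ];
  });

  # Deprecated pattern: rebuild against a stdenv carrying the 12.3 SDK.
  withOldPattern = pkgs.python3Packages.torch.override {
    stdenv = pkgs.overrideSDK pkgs.stdenv "12.3";
  };
}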

@mikatammi (Contributor)

Looks like success!

~/nixpkgs.git/ > nix-shell -p 'python3.withPackages (ps: with ps; [ torch ])' -I nixpkgs=.

[nix-shell:~/nixpkgs.git]$ python
Python 3.12.7 (main, Oct  1 2024, 02:05:46) [Clang 16.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(f"PyTorch version: {torch.__version__}")
PyTorch version: 2.5.0
>>> print(f"MPS available: {torch.backends.mps.is_available()}")
MPS available: True

@mikatammi (Contributor)

Just for testing, I also tried apple-sdk_12, but I got these errors:

python3.12-torch>   return MTLLanguageVersion3_0;
python3.12-torch>          ^~~~~~~~~~~~~~~~~~~~~
python3.12-torch>          MTLLanguageVersion2_0
python3.12-torch> /nix/store/2vs624s5vsdnhralhwlin5gnid2dl1qy-apple-sdk-12.3/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Metal.framework/Headers/MTLLibrary.h:185:5: note: 'MTLLanguageVersion2_0' declared here
python3.12-torch>     MTLLanguageVersion2_0 API_AVAILABLE(macos(10.13), ios(11.0)) = (2 << 16),
python3.12-torch>     ^
python3.12-torch> 1 error generated.
python3.12-torch> [2278/5844] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
python3.12-torch> ninja: build stopped: subcommand failed.
python3.12-torch> ERROR Backend subprocess exited when trying to invoke build_wheel
error: builder for '/nix/store/n41ysm7qfk9azq8w09vs5l8iwvmwqvr3-python3.12-torch-2.5.0.drv' failed with exit code 1;
       last 10 log lines:
       >          ^~~~~~~~~~~~~~~~~~~~~
       >          MTLLanguageVersion2_0
       > /nix/store/2vs624s5vsdnhralhwlin5gnid2dl1qy-apple-sdk-12.3/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/Metal.framework/Headers/MTLLibrary.h:185:5: note: 'MTLLanguageVersion2_0' declared here
       >     MTLLanguageVersion2_0 API_AVAILABLE(macos(10.13), ios(11.0)) = (2 << 16),
       >     ^
       > 1 error generated.
       > [2278/5844] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/api/src/serialize/output-archive.cpp.o
       > ninja: build stopped: subcommand failed.
       >
       > ERROR Backend subprocess exited when trying to invoke build_wheel
       For full logs, run 'nix log /nix/store/n41ysm7qfk9azq8w09vs5l8iwvmwqvr3-python3.12-torch-2.5.0.drv'.

So I guess 13 is the way to go.

@mikatammi (Contributor)

I've made a PR to staging-next now: #351778
