MPI on HPC #870
Comments
This shouldn't be an issue. If I recall correctly, the goal with the GUI was to expose as little of the parallel backend API as possible while still running the tutorials in a timely manner. Since almost all of the tutorials run single trials, we can still convert to …
@rythorpe Are you suggesting we also remove the …? For what it's worth, I personally don't find it too technical to expose the number of cores. It can be nice to see what you have access to on your machine, as long as the max is set to the number of cores available on the instance to prevent user input error.
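A hypothetical sketch of that idea, assuming ipywidgets (the widget name and the detected maximum are made up for illustration, not taken from the hnn-core GUI):

```python
# Hypothetical illustration (not the actual hnn-core GUI code): an
# ipywidgets field for the core count whose maximum is capped at the
# instance's allotment, so out-of-range input is rejected by the widget.
from ipywidgets import BoundedIntText

max_cores = 8  # assume this was detected from the instance's allotment

n_cores_widget = BoundedIntText(
    value=max_cores,
    min=1,
    max=max_cores,
    description='Cores:',
)
```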
See the conversation on the PR for more details. I guess I see the GUI primarily as an educational tool, so while I agree that adding the number of cores as a simulation parameter in the GUI isn't, in itself, a deal breaker, I think there's something to be said for a GUI simulation that "just runs" without having to sift through a myriad of parameters that don't directly relate to the scientific discourse of our workshops.
I've been testing out the GUI on Brown's HPC system (OSCAR). Running simulations with the MPI backend is not working because it requests more processors than the instance allows. The node that my instance is running on has 48 cores, but my instance is not allotted access to all of them.
hnn-core/hnn_core/gui/gui.py, lines 1916 to 1918 at commit 18830b5
The GUI initializes the backend at the lines above using multiprocessing.cpu_count, which returns the node's total core count rather than my instance's allotment. The joblib backend allows you to specify the number of cores in the GUI. Is there a reason why this option is not exposed for MPI?
[Screenshot: joblib options]
[Screenshot: MPI options]
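For context, a minimal sketch of the underlying Python API, which already accepts an explicit count for either backend (the value 4 is illustrative only, and this is not what the GUI currently does):

```python
# A minimal sketch (plain Python, outside the GUI) showing that both
# hnn-core backends accept an explicit core/process count, so the value
# does not have to come from multiprocessing.cpu_count().
from hnn_core import jones_2009_model, simulate_dipole, JoblibBackend, MPIBackend

net = jones_2009_model()

# Joblib: the number of jobs is already user-configurable in the GUI.
with JoblibBackend(n_jobs=4):
    dpls = simulate_dipole(net, tstop=170., n_trials=1)

# MPI: n_procs can likewise be passed explicitly instead of defaulting
# to the node's total core count.
with MPIBackend(n_procs=4):
    dpls = simulate_dipole(net, tstop=170., n_trials=1)
```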
This Stack Overflow answer also describes a way to get the number of available cores instead of the total.
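A minimal sketch of that approach, assuming a Linux node (e.g. under Slurm), using the scheduler affinity mask rather than the raw core count:

```python
# Minimal sketch: on a Linux HPC node, the scheduler affinity mask reflects
# the cores this process may actually use (e.g. a Slurm allotment), whereas
# multiprocessing.cpu_count() reports the node's total.
import multiprocessing
import os

total_cores = multiprocessing.cpu_count()       # e.g. 48 on the node
available_cores = len(os.sched_getaffinity(0))  # Linux-only; the allotment

print(f"total on node: {total_cores}, available to this instance: {available_cores}")
```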