High Performance Python

Princeton mini-course

By Henry Schreiner, with Jim Pivarski

Installation

Binder

During the minicourse, if you haven't prepared beforehand, please use the Binder link in this README to run the notebooks online: Binder

Codespaces

GitHub provides 120 core-hours of Codespaces usage every month (60 real-time hours on the smallest, 2-core machine). You can run this course in a codespace: Open in GitHub Codespaces

Note that you currently need to start jupyter lab manually from the VS Code terminal once the codespace is built (3-5 minutes after starting it for the first time).

Local install:

If you are reading this at least 10 minutes before the course starts, or you already have Anaconda or Miniconda installed, you will probably be best off installing locally with Miniconda. This way you will keep your local edits and have an environment to play with afterwards.

Get the repository:

git clone https://github.com/henryiii/python-performance-minicourse.git
cd python-performance-minicourse

Download and install Miniconda. On macOS with Homebrew, just run brew install --cask miniconda (see my recommendations).

Run:

conda env create

from this directory. This will create an environment named performance-minicourse. To use it:

conda activate performance-minicourse
./check.py # Check to see if you've installed this correctly
jupyter lab

And, to deactivate the environment:

conda deactivate

or restart your terminal.

If you want to add a package, modify environment.yml then run:

conda env update

Lessons

  • 00 Intro: The introduction
  • 01 Fractal accelerate: A look at a fractal computation and ways to accelerate it with NumPy changes, NumExpr, and Numba (a minimal Numba sketch follows this list).
  • 02 Temperatures: A look at reading files and array manipulation in NumPy and Pandas.
  • 03 MCMC: A Markov Chain Monte Carlo generator (and Metropolis generator) in Python and Numba, with a focus on profiling.
  • 04 Runge-Kutta: Implementing a popular integration algorithm in NumPy and Numba.
  • 05 Distributed: An exploration of ways to break up code (the fractal) into chunks for multithreading, multiprocessing, and Dask distribution.
  • 06 Tensorflow: A look at implementing a negative log-likelihood function (used for unbinned fitting) in NumPy and Google's TensorFlow (a small NumPy sketch follows this list).
  • 07 Callables: A look at SciPy's LowLevelCallable, and how to implement one with Numba (a sketch follows this list).
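
For a flavor of lesson 01, here is a minimal sketch of an escape-time fractal kernel compiled with Numba's @njit decorator; the function and variable names are illustrative, not the notebook's:

import numpy as np
from numba import njit

@njit
def escape_time(c, maxiter=50):
    # Count iterations of z -> z**2 + c before |z| exceeds 2
    z = 0j
    for n in range(maxiter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return maxiter

@njit
def fractal(xs, ys, maxiter=50):
    # Fill an image by evaluating the kernel at every grid point
    out = np.empty((ys.size, xs.size), dtype=np.int64)
    for i in range(ys.size):
        for j in range(xs.size):
            out[i, j] = escape_time(complex(xs[j], ys[i]), maxiter)
    return out

xs = np.linspace(-2, 1, 600)
ys = np.linspace(-1.5, 1.5, 400)
image = fractal(xs, ys)  # the first call compiles; later calls run at compiled speed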
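
For lesson 06, a negative log-likelihood for unbinned fitting can be written directly in NumPy; this is a small sketch assuming a Gaussian model (the names and data are illustrative). The notebook implements the same idea in both NumPy and TensorFlow:

import numpy as np

def gaussian_nll(params, data):
    # Unbinned negative log-likelihood: sum of -log(pdf) over every data point
    mu, sigma = params
    return 0.5 * np.sum(((data - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=2.0, size=10_000)
print(gaussian_nll((1.0, 2.0), data))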
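
For lesson 07, SciPy's LowLevelCallable lets routines such as scipy.integrate.quad call a compiled function without per-call Python overhead; here is a minimal sketch that builds one with numba.cfunc (the integrand is illustrative):

import scipy.integrate as si
from numba import cfunc, types
from scipy import LowLevelCallable

@cfunc(types.float64(types.float64))
def integrand(x):
    # Compiled to a C function pointer with the signature double (double)
    return x * x

fast_integrand = LowLevelCallable(integrand.ctypes)
result, error = si.quad(fast_integrand, 0.0, 1.0)
print(result)  # approximately 1/3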

Class participants: please complete the survey that will be posted.