A JavaScript/TypeScript audio engine for the Web and Server capable of multitrack time stretching, pitch shifting, declarative effects, faster-than-realtime processing, and more!
Developed at Spotify 2017-2018; discontinued and handed over to new maintainers in January 2023.
- Install the player:

  ```sh
  npm install --save nf-player
  ```

- Describe your audio experience using NFGrapher:

  ```sh
  npm install --save nf-grapher
  ```

  or use a prebuilt JSON Score.

- Listen to it!
```js
// In a browser, probably webpack-powered...
import { SmartPlayer, TimeInstant } from 'nf-player';
import * as ScoreJSON from './your-score.json';

(async function() {
  // Create the player. By default it uses Web Audio to insert audio data
  // into the current audio device.
  const p = new SmartPlayer();

  // Load the score and referenced audio files.
  await p.enqueueScore(ScoreJSON);

  // Play! `renderTime` will now proceed to increase.
  p.playing = true;

  // You can instantly seek if you want.
  p.renderTime = TimeInstant.fromSeconds(30);
})();
```
For more demos, check out the Playground, which contains a live-coding environment, sample code, and sample JSON Scores, all in your browser!
There are also in-progress API Docs.
A CLI also ships with this package!
```sh
$ npx nf-player save --duration 120 --input-file ./fixtures/roxanne-30s-preview-shifted-infinite.json --output-file ./roxanne.wav
```
The above command will load the Score JSON, download the referenced audio files, and render 120 seconds of audio to a file called roxanne.wav. This should take about 5 seconds of realtime, depending on your computer and internet connection.
The CLI depends on ffmpeg being present in $PATH for file decoding. On macOS this can be accomplished with Homebrew: `brew install ffmpeg`.
Given a way to describe arbitrary audio experiences over time, we needed a player to play them. This repo serves as a reference implementation of the patterns required to process a Grapher Score, as well as a testbed for new features and API design.
Due to the nature of Grapher Scores, this player is capable of playback experiences that are generally outside the realm of a standard "track-based" audio player, such as Winamp, Spotify, or VLC, and closer to an audio game engine. These include:
- infinite playback (never ending loops, infinite-jukebox-style)
- evolving or generative playback (subtle changes over time, procedurally generated changes)
- adaptive playback (user input, such as location)
By providing a unifying, declarative format to describe these experiences (and with a matching player implementation), they can be shared by multiple platforms, such as apps on your phone, websites, or physical devices. This is notably different from device- or platform-specific APIs like the Web Audio API, Core Audio, or Windows Audio Graphs, which often require imperative coding in programming languages specific to each. Composable time stretching, for example, is not possible without abandoning most (if not all) of the ergonomics of the Web Audio API.
A Grapher Score describes, basically, an audio processing graph, where each node of the graph is either a producer of audio, a consumer, or both, in the case of transformation. The edges of the graph control where the audio "flows".
This is basically the same as the Web Audio API, except instead of the programmer imperatively building up the graph piece by piece, a Grapher Score is a complete declarative description of the graph, including all the nodes, edges, and changes in values over time (Audio Parameter Commands). It can be de/serialized from/to JSON, and can be thought of as a JSON description of what to load, play, and change over time. It is given to the Player whole, and processed as a whole. Changes are described as Mutations to the Score, which are currently limited to adding and removing scheduled audio parameter changes. Multiple Scores can be loaded, processed, and played by the Player, which allows for seamless switching between Scores, and therefore seamless audio playback.
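For a concrete sense of what such a description looks like, here is a hand-written sketch of a tiny Score: a file node feeding a gain node whose volume is scheduled over time. The field names and `kind` strings below are illustrative assumptions, not an authoritative schema; NFGrapher defines (and generates) the real one.

```ts
// A hedged, hand-written sketch of a minimal Score, expressed as a plain
// object. Field names and `kind` strings are illustrative assumptions;
// consult the NFGrapher documentation for the actual schema.
const score = {
  version: '1.2.0',
  graph: {
    id: 'demo-graph',
    nodes: [
      {
        id: 'file-1',
        kind: 'com.nativeformat.plugin.file.file',
        config: { file: 'https://example.com/track.mp3' },
      },
      {
        id: 'gain-1',
        kind: 'com.nativeformat.plugin.waa.gain',
        // An Audio Parameter Command: schedule a volume change over time.
        params: {
          gain: [{ name: 'setValueAtTime', args: { value: 0.5, startTime: 0 } }],
        },
      },
    ],
    // Audio "flows" from the file node into the gain node.
    edges: [{ id: 'edge-1', source: 'file-1', target: 'gain-1' }],
  },
};
```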
Once given to the Player, a Grapher Score is traversed, recursively, each render quantum (default is 8192 samples, ~0.185 seconds). Each node/plugin requests samples for the current quantum from its ancestors in the graph, passing its own notion of the current render time, which could be dilated due to time stretching or shifted due to looping. Each node/plugin applies whatever processing it wants by reading the config and audio params of the node for the current time, then forwards the resulting samples. All the buffers are then mixed down and written using the Web Audio API (or in the case of a CLI, a chunk of a WAV file).
Passing and modifying the current notion of time, recursively through ancestors, is what sets this player apart from other players built on top of APIs like Web Audio: a Stretch node, for example, can affect time for an entire subgraph, requiring that subgraph to be rendered much faster than realtime. This is extremely powerful (an entire graph could be audibly "sped up" like fast forwarding a tape cassette) but requires this processing model.
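To make that model concrete, here is a minimal sketch of the pull-based rendering described above, written for exposition only; the interfaces and names below are assumptions, not the player's actual internals.

```ts
// Illustrative sketch of the pull-based model described above. Names and
// shapes are assumptions for exposition, not the player's internal API.
type Quantum = Float32Array; // one channel of samples for one render quantum

interface RenderNode {
  // Each node pulls samples from its ancestors, passing along its own
  // notion of the current render time (seconds here, for simplicity).
  pull(renderTime: number, sampleCount: number): Quantum;
}

// A stretch node dilates time for its entire subgraph: to emit
// `sampleCount` samples at a 2x rate, it must pull twice as many samples
// from upstream, effectively rendering that subgraph faster than realtime.
class StretchSketch implements RenderNode {
  constructor(private upstream: RenderNode, private rate: number) {}

  pull(renderTime: number, sampleCount: number): Quantum {
    const dilatedTime = renderTime * this.rate;
    const upstreamCount = Math.round(sampleCount * this.rate);
    const input = this.upstream.pull(dilatedTime, upstreamCount);
    return resample(input, sampleCount);
  }
}

// Naive linear resampler, standing in for real time stretching (the actual
// player uses a SoundTouch port, which stretches without changing pitch).
function resample(input: Quantum, outLength: number): Quantum {
  const out = new Float32Array(outLength);
  if (input.length === 0) return out;
  for (let i = 0; i < outLength; i++) {
    const pos = (i * (input.length - 1)) / Math.max(1, outLength - 1);
    const lo = Math.floor(pos);
    const hi = Math.min(lo + 1, input.length - 1);
    const frac = pos - lo;
    out[i] = input[lo] * (1 - frac) + input[hi] * frac;
  }
  return out;
}
```

A Loop node works the same way, except it shifts `renderTime` (modulo the loop length) before passing it upstream, which is how a finite file can play forever.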
It should also be noted that all "leaves" of the Grapher Score are implicitly connected to the (internal) output node. Therefore, many Scores look like discontiguous graphs.
- Supported Grapher Score Nodes:
- Stretch (pitch shifting and time stretching, provided by a TS port of SoundTouch)
- Loop
- Gain (volume)
- File (`fetch`-able URLs only)
- Seeking
- Processing audio faster than realtime with reasonable performance
- Realtime mutations of the current Score
- Loading / Processing / Playing multiple Scores
- Loads all audio files into memory all at once using `fetch` / `decodeAudioData`
- Only supports whatever audio formats the platform supports (due to using `decodeAudioData`). The CLI shells out to ffmpeg to avoid this limitation.
- No DRM / EME support (the EME API does not provide faster-than-realtime access to the audio samples)
- Does all processing on the main thread (but with fairly low CPU usage). This could be mitigated by running the entire player in a Web Worker or, eventually, Audio Worklet.
See the Playground!
Clone this repo, then:
```sh
$ npm install
```
To build the library and start the development environment:
```sh
$ npm start
```
To build and publish the demo to gh-pages manually, run:
```sh
$ npm run deploy:demo:manual
```
Otherwise, the demo is built and deployed automatically on each push to master.
There is also a more cut-down debug environment useful for debugging / developing single Scores or scripts:
```sh
$ npm run debug-harness
```
- Run `npm version [major|minor|patch]`, then PR.
- TravisCI will attempt to publish any tagged commit.
- The demo is deployed automatically on master builds using the TravisCI gh-pages provider.
This project adheres to the Open Code of Conduct. By participating, you are expected to honor this code.
NFPlayerJS depends on SoundTouch-TS, which is licensed under the GNU Lesser General Public License, version 2.1.
To comply with section 6 of the LGPL, SoundTouch can be swapped out at runtime by any user by placing an API-compatible replacement at `window.SoundTouch` or `global.SoundTouch`.
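For example, a hypothetical swap might look like the following; `MySoundTouch` and its module path are stand-ins for your own API-compatible implementation.

```ts
// Hypothetical example: install an API-compatible SoundTouch replacement
// before the player is loaded. `MySoundTouch` is a stand-in name.
import { MySoundTouch } from './my-soundtouch';

// `globalThis` covers both `window` (browser) and `global` (Node).
(globalThis as any).SoundTouch = MySoundTouch;
```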
music player by SBTS from the Noun Project.
"Reasonable" is ambiguous, of course. In this case, it means "expected CPU usage if audio is the primary purpose of the experience". For example, an authoring tool or enhanced listening experience. It's still unclear to if CPU usage is low enough for a background task like listening to a playlist.