[docs] Other modalities #4205

Merged · 1 commit · Jul 24, 2023
10 changes: 2 additions & 8 deletions docs/source/en/_toctree.yml
@@ -88,10 +88,6 @@
title: Custom Diffusion
title: Training
- sections:
- local: using-diffusers/rl
title: Reinforcement Learning
- local: using-diffusers/audio
title: Audio
- local: using-diffusers/other-modalities
title: Other Modalities
title: Taking Diffusers Beyond Images
@@ -276,6 +272,8 @@
title: Unconditional Latent Diffusion
- local: api/pipelines/unidiffuser
title: UniDiffuser
- local: api/pipelines/value_guided_sampling
title: Value-guided sampling
- local: api/pipelines/versatile_diffusion
title: Versatile Diffusion
- local: api/pipelines/vq_diffusion
@@ -331,8 +329,4 @@
- local: api/schedulers/vq_diffusion
title: VQDiffusionScheduler
title: Schedulers
- sections:
- local: api/experimental/rl
title: RL Planning
title: Experimental Features
title: API
15 changes: 0 additions & 15 deletions docs/source/en/api/experimental/rl.mdx

This file was deleted.

32 changes: 32 additions & 0 deletions docs/source/en/api/pipelines/value_guided_sampling.mdx
@@ -0,0 +1,32 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Value-guided planning

<Tip warning={true}>

🧪 This is an experimental pipeline for reinforcement learning!

</Tip>

This pipeline is based on the [Planning with Diffusion for Flexible Behavior Synthesis](https://huggingface.co/papers/2205.09991) paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine.

The abstract from the paper is:

*Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility*.

You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb).

The script to run the model is available [here](https://github.com/huggingface/diffusers/tree/main/examples/reinforcement_learning).

## ValueGuidedRLPipeline
[[autodoc]] diffusers.experimental.ValueGuidedRLPipeline
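
If it helps to see the pipeline in context, here is a minimal rollout sketch adapted from the example script linked above (the checkpoint name, environment id, and hyperparameter values are taken from that example and should be treated as assumptions rather than a definitive recipe):

```python
import d4rl  # noqa: F401 — registers the D4RL environments with gym
import gym

from diffusers.experimental import ValueGuidedRLPipeline

# Offline-RL Hopper environment with a matching pretrained planner checkpoint.
env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",
    env=env,
)

obs = env.reset()
total_reward = 0.0
for _ in range(100):  # shortened rollout; the example script runs 1000 steps
    # Denoise candidate trajectories, rank them with the value function,
    # and execute the first (denormalized) action of the best plan.
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break

print(f"total reward: {total_reward}")
```
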
16 changes: 0 additions & 16 deletions docs/source/en/using-diffusers/audio.mdx

This file was deleted.

25 changes: 0 additions & 25 deletions docs/source/en/using-diffusers/rl.mdx

This file was deleted.

16 changes: 9 additions & 7 deletions src/diffusers/experimental/rl/value_guided_sampling.py
@@ -24,19 +24,21 @@

 class ValueGuidedRLPipeline(DiffusionPipeline):
     r"""
-    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
-    library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-    Pipeline for sampling actions from a diffusion model trained to predict sequences of states.
+    Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states.
 
-    Original implementation inspired by this repository: https://github.com/jannerm/diffuser.
+    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
+    implemented for all pipelines (downloading, saving, running on a particular device, etc.).
 
     Parameters:
-        value_function ([`UNet1DModel`]): A specialized UNet for fine-tuning trajectories base on reward.
-        unet ([`UNet1DModel`]): U-Net architecture to denoise the encoded trajectories.
+        value_function ([`UNet1DModel`]):
+            A specialized UNet for fine-tuning trajectories base on reward.
+        unet ([`UNet1DModel`]):
+            UNet architecture to denoise the encoded trajectories.
         scheduler ([`SchedulerMixin`]):
             A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this
             application is [`DDPMScheduler`].
-        env: An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.
+        env ():
+            An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.
     """
 
     def __init__(
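
As a hypothetical illustration of how the documented constructor parameters fit together (the Hub repository layout and subfolder names here are assumptions, not confirmed by this PR), the pipeline could also be assembled from its components explicitly:

```python
import d4rl  # noqa: F401 — registers the D4RL environments with gym
import gym

from diffusers import DDPMScheduler, UNet1DModel
from diffusers.experimental import ValueGuidedRLPipeline

repo_id = "bglick13/hopper-medium-v2-value-function-hor32"  # assumed layout

# The planner UNet, the value function (also a UNet1DModel), and the scheduler
# correspond one-to-one to the parameters documented in the docstring above.
unet = UNet1DModel.from_pretrained(repo_id, subfolder="unet")
value_function = UNet1DModel.from_pretrained(repo_id, subfolder="value_function")
scheduler = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler")
env = gym.make("hopper-medium-v2")

pipeline = ValueGuidedRLPipeline(
    value_function=value_function, unet=unet, scheduler=scheduler, env=env
)
```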