[docs] Other modalities (huggingface#4205)
remove coming soon, rl pipeline
stevhliu authored and orpatashnik committed Aug 1, 2023
1 parent 4c64655 commit bcdb1aa
Showing 6 changed files with 43 additions and 71 deletions.
10 changes: 2 additions & 8 deletions docs/source/en/_toctree.yml
@@ -88,10 +88,6 @@
       title: Custom Diffusion
   title: Training
 - sections:
-  - local: using-diffusers/rl
-    title: Reinforcement Learning
-  - local: using-diffusers/audio
-    title: Audio
   - local: using-diffusers/other-modalities
     title: Other Modalities
   title: Taking Diffusers Beyond Images
@@ -278,6 +274,8 @@
     title: Unconditional Latent Diffusion
   - local: api/pipelines/unidiffuser
     title: UniDiffuser
+  - local: api/pipelines/value_guided_sampling
+    title: Value-guided sampling
   - local: api/pipelines/versatile_diffusion
     title: Versatile Diffusion
   - local: api/pipelines/vq_diffusion
@@ -333,8 +331,4 @@
   - local: api/schedulers/vq_diffusion
     title: VQDiffusionScheduler
   title: Schedulers
-- sections:
-  - local: api/experimental/rl
-    title: RL Planning
-  title: Experimental Features
 title: API
15 changes: 0 additions & 15 deletions docs/source/en/api/experimental/rl.mdx

This file was deleted.

32 changes: 32 additions & 0 deletions docs/source/en/api/pipelines/value_guided_sampling.mdx
@@ -0,0 +1,32 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Value-guided planning

<Tip warning={true}>

🧪 This is an experimental pipeline for reinforcement learning!

</Tip>

This pipeline is based on the [Planning with Diffusion for Flexible Behavior Synthesis](https://huggingface.co/papers/2205.09991) paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, and Sergey Levine.

The abstract from the paper is:

*Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility*.

You can find additional information about the model on the [project page](https://diffusion-planning.github.io/), the [original codebase](https://github.com/jannerm/diffuser), or try it out in a demo [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb).

The script to run the model is available [here](https://github.com/huggingface/diffusers/tree/main/examples/reinforcement_learning).

## ValueGuidedRLPipeline
[[autodoc]] diffusers.experimental.ValueGuidedRLPipeline
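
For orientation, here is a minimal rollout sketch with this pipeline, adapted from the linked example script (the `bglick13/hopper-medium-v2-value-function-hor32` checkpoint and the D4RL Hopper environment come from that script; treat the exact loop and arguments as illustrative):

```python
# Minimal value-guided planning loop; requires `gym` and `d4rl`.
import d4rl  # noqa: F401  (importing registers the D4RL environments with gym)
import gym

from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")

# Load the pretrained value function, UNet, and scheduler for Hopper.
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",
    env=env,
)

obs = env.reset()
total_reward = 0.0
for _ in range(100):
    # Denoise a batch of candidate trajectories guided by the value function,
    # then take the first action of the highest-value plan.
    action = pipeline(obs, planning_horizon=32)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break
print(f"total reward: {total_reward}")
```

Note that `planning_horizon` should match the horizon the checkpoint was trained with (32 for the `hor32` checkpoint above).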
16 changes: 0 additions & 16 deletions docs/source/en/using-diffusers/audio.mdx

This file was deleted.

25 changes: 0 additions & 25 deletions docs/source/en/using-diffusers/rl.mdx

This file was deleted.

16 changes: 9 additions & 7 deletions src/diffusers/experimental/rl/value_guided_sampling.py
@@ -24,19 +24,21 @@

 class ValueGuidedRLPipeline(DiffusionPipeline):
     r"""
-    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
-    library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
-    Pipeline for sampling actions from a diffusion model trained to predict sequences of states.
+    Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states.

     Original implementation inspired by this repository: https://github.com/jannerm/diffuser.

+    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
+    implemented for all pipelines (downloading, saving, running on a particular device, etc.).
+
     Parameters:
-        value_function ([`UNet1DModel`]): A specialized UNet for fine-tuning trajectories base on reward.
-        unet ([`UNet1DModel`]): U-Net architecture to denoise the encoded trajectories.
+        value_function ([`UNet1DModel`]):
+            A specialized UNet for fine-tuning trajectories based on reward.
+        unet ([`UNet1DModel`]):
+            UNet architecture to denoise the encoded trajectories.
         scheduler ([`SchedulerMixin`]):
             A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this
             application is [`DDPMScheduler`].
-        env: An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.
+        env ():
+            An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.
     """

     def __init__(
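
To make the `Parameters` documented above concrete, here is a hypothetical sketch of assembling the pipeline from its components; the subfolder names are an assumption mirroring the attribute names, and loading the whole pipeline with `ValueGuidedRLPipeline.from_pretrained` (as in the docs above) is the simpler path:

```python
# Hypothetical component-wise construction; check the checkpoint repo for the
# actual subfolder layout before relying on these names.
import gym

from diffusers import DDPMScheduler, UNet1DModel
from diffusers.experimental import ValueGuidedRLPipeline

repo = "bglick13/hopper-medium-v2-value-function-hor32"
env = gym.make("hopper-medium-v2")  # requires d4rl to be imported first

value_function = UNet1DModel.from_pretrained(repo, subfolder="value_function")
unet = UNet1DModel.from_pretrained(repo, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(repo, subfolder="scheduler")

pipeline = ValueGuidedRLPipeline(
    value_function=value_function,
    unet=unet,
    scheduler=scheduler,
    env=env,
)
```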
