

[V1] Support Pixtral-HF on V1 #11409

Draft
ywang96 wants to merge 1 commit into main

Conversation

ywang96 (Member) commented Dec 22, 2024

Support Transformers-compatible Pixtral checkpoints on V1

Signed-off-by: Roger Wang <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

ywang96 (Member, Author) commented Dec 22, 2024

cc @mgoin

Currently I'm facing some difficulty figuring out how to patch the resulting image embedding tensors with the [IMG_BREAK] and [IMG_END] tokens so that the shape matches the placeholder ranges for fine-grained scheduling. I've verified this is indeed the missing piece we need.

PlaceholderRange: [{'offset': 10, 'length': 2795}]
Image embedding shape: torch.Size([2752, 5120])
# of image tokens:  2752
# of image break tokens:  42
# of image end tokens:  1

and 2752 + 42 + 1 = 2795
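
For reference, here is a minimal sketch (not vLLM code) of where these counts come from, assuming Pixtral's layout where each row of patch tokens is followed by [IMG_BREAK] and the final break is replaced by [IMG_END]; the 43x64 grid below is inferred from the counts above, not taken from the PR.

    def expected_placeholder_length(nrows: int, ncols: int) -> int:
        image_tokens = nrows * ncols  # raw patch embeddings from the vision encoder
        break_tokens = nrows - 1      # one [IMG_BREAK] per row, except the last
        end_tokens = 1                # a single [IMG_END] closes the image
        return image_tokens + break_tokens + end_tokens

    # A 43-row x 64-col grid is consistent with the numbers above:
    assert expected_placeholder_length(43, 64) == 2795  # 2752 + 42 + 1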

For mistral-format Pixtral, this wasn't an issue because MultiModalKwargs contains the actual complete image token ids (with break and end tokens included), so it's easy to patch. See:

    def get_multimodal_embeddings(self, **kwargs) -> Optional[NestedTensors]:
        image_input, image_tokens = self._parse_and_validate_image_input(
            **kwargs)
        if image_input is None:
            return None
        vision_embeddings = self._process_image_input(image_input)
        # NOTE: We patch the outputs of the vision encoder with embeddings
        # from `[IMG_BREAK]` and `[IMG_END]` tokens.
        image_embeds = self.language_model.get_input_embeddings(image_tokens)
        image_token_mask = image_tokens == self.vision_args.image_token_id
        image_embeds[image_token_mask] = vision_embeddings
        # NOTE: Image embeddings are split into separate tensors for each image
        # by the indices of `[IMG_END]` token.
        image_end_condition = (image_tokens == PIXTRAL_12B_IMAGE_END_ID) | (
            image_tokens == PIXTRAL_LARGE_IMAGE_END_ID)
        split_indices = torch.where(image_end_condition)[0] + 1
        if len(split_indices) <= 1:
            # Do not split, return as tensor of shape [1, fs, hs]
            return image_embeds.unsqueeze(0)
        # If the last split index is the last index in image_tokens, we
        # ignore it to avoid empty split tensor
        if split_indices[-1] == len(image_tokens):
            split_indices = split_indices[:-1]
        image_embeds = image_embeds.tensor_split(split_indices.cpu())
        return image_embeds
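
For the HF-format checkpoints, one possible direction (just a sketch of the idea, not part of this PR) would be to reconstruct the per-image token layout from the patch grid and then reuse the same scatter-and-patch trick as above; build_image_token_ids, patch_image_embeds, and the token-id constants below are hypothetical stand-ins, with the real ids coming from the tokenizer/processor config.

    import torch

    # Hypothetical token ids; the real values come from the tokenizer config.
    IMAGE_TOKEN_ID = 10
    IMAGE_BREAK_ID = 12
    IMAGE_END_ID = 13

    def build_image_token_ids(nrows: int, ncols: int) -> torch.Tensor:
        # Each row of ncols [IMG] tokens is followed by [IMG_BREAK];
        # the final [IMG_BREAK] is replaced by [IMG_END].
        tokens = ([IMAGE_TOKEN_ID] * ncols + [IMAGE_BREAK_ID]) * nrows
        tokens[-1] = IMAGE_END_ID
        return torch.tensor(tokens)

    def patch_image_embeds(vision_embeds, token_ids, embed_tokens):
        # Embed the reconstructed sequence with the language model's
        # embedding layer, then scatter the vision encoder outputs into
        # the [IMG] positions, mirroring the mistral-format path above.
        image_embeds = embed_tokens(token_ids)
        image_embeds[token_ids == IMAGE_TOKEN_ID] = vision_embeds
        return image_embeds

With a 43x64 grid, build_image_token_ids would produce exactly 2795 token ids (2752 [IMG] + 42 [IMG_BREAK] + 1 [IMG_END]), matching the PlaceholderRange length above.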
