[Inference] Some effective methods to reduce the loading time of pir models #70219

Open · wants to merge 4 commits into develop
Conversation

aooxin (Contributor) commented Dec 13, 2024

PR Category

Inference

PR Types

New features

Description

- jit.save supports a new separate_parameters option
- PIR model loading supports multiple threads
- params_sync_among_devices_pass supports multiple threads and multiple streams

Using this feature requires changing how the model is saved and loaded:

- The model must be re-exported, passing the new separate_parameters=True argument to the paddle.jit.save API.
- config = paddle.inference.Config(model_path) — pass the directory containing the model files directly.
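To make the workflow concrete, here is a minimal sketch. The Paddle calls appear only as comments (this PR's separate_parameters flag and the directory-based Config are taken from the description above); the runnable part uses the standard library only to illustrate why separate parameter files help: they can be read concurrently instead of as one serial blob. The helper names, file suffix, and thread count below are illustrative assumptions, not part of the PR.

```python
# Intended Paddle workflow per this PR (shown as comments, not executed here):
#
#   paddle.jit.save(layer, path, separate_parameters=True)  # export one file per parameter
#   config = paddle.inference.Config(model_dir)             # pass the model directory
#
# Stdlib-only illustration of loading separate parameter files with multiple threads.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def save_separate_parameters(params, model_dir):
    """Write each parameter to its own file (stands in for separate_parameters=True)."""
    for name, blob in params.items():
        with open(os.path.join(model_dir, name + ".pdiparams"), "wb") as f:
            f.write(blob)

def load_parameters_multithreaded(model_dir, num_threads=4):
    """Read every parameter file concurrently (stands in for the multi-threaded PIR load)."""
    files = [f for f in os.listdir(model_dir) if f.endswith(".pdiparams")]

    def read_one(fname):
        with open(os.path.join(model_dir, fname), "rb") as f:
            return fname[: -len(".pdiparams")], f.read()

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return dict(pool.map(read_one, files))

if __name__ == "__main__":
    params = {"fc.w": b"\x01" * 16, "fc.b": b"\x02" * 4}
    with tempfile.TemporaryDirectory() as d:
        save_separate_parameters(params, d)
        loaded = load_parameters_multithreaded(d)
        assert loaded == params
```

Because file reads release the GIL, a thread pool is enough to overlap I/O here; the actual pass additionally overlaps host-to-device copies on multiple CUDA streams.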


paddle-bot bot commented Dec 13, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

paddle-bot added the "contributor (External developers)" label on Dec 13, 2024