-
Can PGL do single-machine multi-GPU training the way torch does with `torch.multiprocessing.spawn`? I am planning to implement a Paddle backend for torch-quiver (https://github.com/quiver-team/torch-quiver). With the torch backend, a custom class can be passed through `spawn`'s `args`, as in https://github.com/quiver-team/torch-quiver/blob/main/examples/multi_gpu/pyg/reddit/dist_sampling_ogb_reddit_quiver.py#L146. How should this be handled with a Paddle backend? Looking at Paddle's source, only `paddle.distributed` supports `spawn`, and Paddle's `multiprocessing` module, unlike torch's, contains only `reductions`. How can I implement the distributed training from that example in Paddle?
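  For reference, a minimal sketch of what the worker-spawning side could look like with `paddle.distributed.spawn`. The `NeighborConfig` class is a hypothetical stand-in for a quiver-style sampler object; note that unlike `torch.multiprocessing.spawn`, Paddle does not prepend the process index to `args`, so the rank is read inside the worker:

  ```python
  import paddle
  import paddle.distributed as dist


  class NeighborConfig:
      """Hypothetical stand-in for a custom (quiver-style) sampler.

      Anything picklable can be forwarded through spawn's ``args`` tuple,
      just like in the linked torch-quiver example.
      """

      def __init__(self, sizes):
          self.sizes = sizes


  def train(config):
      # Unlike torch.multiprocessing.spawn, paddle.distributed.spawn does
      # NOT prepend the process index to args; query the rank after init.
      dist.init_parallel_env()
      rank = dist.get_rank()

      model = paddle.nn.Linear(100, 10)
      if dist.get_world_size() > 1:
          # DataParallel all-reduces gradients across the spawned GPUs.
          model = paddle.DataParallel(model)
      opt = paddle.optimizer.Adam(parameters=model.parameters())

      x = paddle.randn([32, 100])
      loss = model(x).mean()
      loss.backward()
      opt.step()
      opt.clear_grad()
      print(f"rank {rank}: fanout={config.sizes}, loss={float(loss):.4f}")


  if __name__ == "__main__":
      # One worker process per GPU; args is pickled into each worker.
      dist.spawn(train, args=(NeighborConfig([25, 10]),), nprocs=2)
  ```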
-
We have already implemented a torch-quiver-like backend inside Paddle; see the relevant code under https://github.com/PaddlePaddle/Paddle/tree/develop/python/paddle/geometric. PGL can now support single-machine multi-GPU training the way torch-quiver does, and we will integrate related examples (such as GraphSAGE) into PGL shortly, so stay tuned.
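  As a pointer into that directory, here is a hedged sketch of the CSC-based neighbor sampling op, `paddle.geometric.sample_neighbors`; the signature follows Paddle's develop-branch docs, and the toy graph is made up:

  ```python
  import paddle

  # Toy graph in CSC format: `row` holds neighbor (source) indices and
  # `colptr` the per-node offsets into `row`, as sample_neighbors expects.
  row = paddle.to_tensor([3, 7, 0, 9, 1, 4, 2, 9, 3, 9, 1, 9, 7], dtype="int64")
  colptr = paddle.to_tensor([0, 2, 4, 5, 6, 7, 9, 11, 11, 13, 13], dtype="int64")
  seeds = paddle.to_tensor([0, 8, 1, 2], dtype="int64")

  # Sample up to 2 neighbors per seed node, the basic building block of
  # a GraphSAGE-style mini-batch pipeline.
  neighbors, counts = paddle.geometric.sample_neighbors(
      row, colptr, seeds, sample_size=2
  )
  print(neighbors, counts)
  ```

  Each sampled hop produced this way can then be batched per GPU inside a `spawn`-launched worker like the one sketched above.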