```python
import dgl

g = ...  # load the DGLGraph object with nebula-dgl
dgl.distributed.partition_graph(g, 'mygraph', 2, 'data_root_dir')
```
It'll output the partitioned graph as:
```
data_root_dir/
|-- mygraph.json             # metadata JSON; file name is the given graph name
|-- part0/                   # data for partition 0
|   |-- node_feats.dgl       # node features stored in binary format
|   |-- edge_feats.dgl       # edge features stored in binary format
|   |-- graph.dgl            # graph structure of this partition stored in binary format
|
|-- part1/                   # data for partition 1
    |-- node_feats.dgl
    |-- edge_feats.dgl
    |-- graph.dgl
```
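The layout above can be sketched programmatically. The helper below is a hypothetical, stdlib-only illustration (not part of DGL's API) that enumerates the files `partition_graph` is expected to produce for a given graph name and partition count:

```python
from pathlib import Path

def partition_layout(root: str, graph_name: str, num_parts: int) -> list[Path]:
    """List the file paths the partitioned-graph layout above implies.
    Hypothetical helper for illustration; it does not read or write anything."""
    base = Path(root)
    # metadata JSON at the top level, named after the graph
    paths = [base / f"{graph_name}.json"]
    # one directory per partition, each with node/edge features and structure
    for i in range(num_parts):
        part = base / f"part{i}"
        paths += [part / "node_feats.dgl",
                  part / "edge_feats.dgl",
                  part / "graph.dgl"]
    return paths

for p in partition_layout("data_root_dir", "mygraph", 2):
    print(p)
```

With two partitions this yields seven paths: the metadata JSON plus three files per partition, matching the tree shown above.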
How to do distributed training:
Load the data and prepare the graph partitions
See more in the reference docs:
ref:
Prepare the distributed training environment
ref:
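As a rough sketch of this step, a DGL cluster is typically described by an `ip_config.txt` file (one machine IP per line) and started with DGL's `tools/launch.py` script. The paths, counts, and script name below are placeholder assumptions for illustration, not values from this issue:

```shell
# ip_config.txt — one line per machine in the cluster (example addresses)
#   172.31.0.1
#   172.31.0.2

# Launch one trainer and one server per machine; part_config points at the
# metadata JSON produced by partition_graph above. train_dist.py is a
# placeholder name for your training script.
python3 tools/launch.py \
  --workspace /path/to/workspace \
  --num_trainers 1 \
  --num_samplers 0 \
  --num_servers 1 \
  --part_config data_root_dir/mygraph.json \
  --ip_config ip_config.txt \
  "python3 train_dist.py"
```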