What is the current behavior?
STEP reading consumes a lot of memory on the platform.
We noticed that the memory consumption while reading a STEP file is very high: currently there is a ~60x factor between the STEP file size and the RAM used.
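For reference, here is a minimal sketch of how that ratio can be measured with the standard library. `load_step` is a hypothetical placeholder for the actual STEP-reading entry point (not the library's real API), and note that `tracemalloc` only counts Python-level allocations:

```python
import os
import tracemalloc

def load_step(path):
    # Hypothetical placeholder: substitute the real STEP reader that
    # builds the volume model from the file at `path`.
    raise NotImplementedError

def ram_to_file_size_ratio(path: str) -> float:
    """Peak Python-level memory used while parsing, divided by file size."""
    file_size = os.path.getsize(path)
    tracemalloc.start()
    try:
        load_step(path)
    finally:
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
    return peak / file_size

# With the current reader this comes out around 60; the target is <= 10.
```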
What is the expected behavior?
Given that a STEP file is itself a fairly inefficient data representation format, the ultimate goal would be a similar footprint, i.e. a 1x factor.
However, I think a 10x factor is acceptable, as it would allow handling most potential use cases on a standard server setup.
What is the motivation / use case for changing the behavior?
With the current footprint, parsing a 500 MB STEP file would be practically impossible, as it (combined with the workflow run and infrastructure) would consume all the available memory!
Possible fixes
a more compact representation of volume_model would help (lazy "compute" for some properties? eliminate redundant info stored in different attributes? ...) (see the first sketch below this list)
more reliance on numpy arrays for primitives and primitive collections, as I suspect we are hitting some hard limits with Python data structures. This will require some serious refactoring (see the second sketch below this list)
?
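To illustrate the first idea, here is a sketch of lazy computation using `functools.cached_property`. The class and its attributes are hypothetical stand-ins, not the actual volume_model API:

```python
from functools import cached_property

class ClosedShell:
    """Hypothetical primitive: stores only its defining data eagerly."""

    def __init__(self, faces):
        # `faces` is a list of faces, each a list of (x, y, z) tuples.
        self.faces = faces

    @cached_property
    def bounding_box(self):
        # Computed once, on first access, instead of eagerly at
        # construction time and duplicated into a separate attribute.
        xs, ys, zs = zip(*(p for face in self.faces for p in face))
        return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))
```

Objects built from a STEP file would then only pay for a bounding box (or similar derived data) if something actually asks for it.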
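And a sketch of the second idea: one contiguous numpy array instead of one Python object per point. The names are illustrative, and the numbers in the comments are rough CPython estimates, not measurements from the library:

```python
import sys
import numpy as np

class Point3D:
    # Object-per-point layout: one heap object per point, each holding
    # three boxed Python floats.
    __slots__ = ("x", "y", "z")

    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

n = 100_000
as_objects = [Point3D(float(i), 0.0, 0.0) for i in range(n)]
as_array = np.zeros((n, 3), dtype=np.float64)

print(as_array.nbytes)               # 2_400_000: exactly 3 * 8 bytes per point
print(sys.getsizeof(as_objects[0]))  # tens of bytes per object, before counting
                                     # the boxed floats and the list slots
```

The array layout would also open the door to vectorised operations (bounding boxes, transforms) over whole primitive collections at once.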
Please tell us about your environment:
branch: all
commit:
python version: