GCSFuse performance on Vertex AI custom training job #1830
Comments
Hi @miguelalba96, thanks for asking the question and for providing the full context of the problem.
File caching in GCSFuse is a new feature that was added in v2.0.0, about 3 weeks ago. Unfortunately, Vertex AI pipelines still use an older version of GCSFuse, which doesn't support the file-caching feature, so as of now there is no way to enable or configure file caching in GCSFuse via Vertex AI pipelines.
As I said above, this is not possible through the Vertex AI job-creation interface right now.
If you can take control of which GCSFuse version is installed in your container and how it is used to mount buckets, then this might be possible. In that case, you would need to install GCSFuse v2.0.0 (instructions) in your container and mount your buckets using a config file that sets the desired file-cache parameters (doc).
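As a rough sketch, a GCSFuse v2 config file enabling the file cache might look like the following. The paths and sizes are placeholders, not recommendations; check the linked documentation for the current option names and defaults:

```yaml
# Hypothetical config.yaml for a gcsfuse v2 mount with file caching enabled.
cache-dir: /tmp/gcsfuse-cache      # local directory backing the file cache
file-cache:
  max-size-mb: 10240               # cap the cache (in MiB); -1 means unlimited
  cache-file-for-range-read: true  # also cache files read via random/range reads
```

You would then mount with something like `gcsfuse --config-file=config.yaml my-bucket /mnt/my-bucket` (bucket name and mount point are examples).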
This is outside my area of expertise, but I can try to guess. If you install gcsfuse in your training container and mount your bucket at …
Hi @miguelalba96, Vertex AI currently has GCSFuse v1.x, which has stat and type caching enabled by default. The new file-cache feature is only available in GCSFuse v2, which has not been rolled out by Vertex yet. We can let you know once it's available, but I'd also suggest opening a ticket/feature request with the Vertex AI team directly so they can track this.
Lowering priority to P2, as the upgrade of Vertex AI to GCSFuse v2 is already planned and is outside the scope of the GCSFuse team.
Wanted to update here that the Vertex AI training upgrade to GCSFuse v2.0.0 is now complete. The Vertex AI training team is now working on enabling the GCSFuse file-cache feature in training jobs. Once that completes, the original problem that @miguelalba96 faced might get fixed. Though, AFAIK, Vertex AI won't be providing users controls to configure the file-cache parameters as Miguel asked.
I'm keen to be able to use the file cache on the …
@tiagovrtr the Vertex team is working on the integration, but they don't have a timeline yet. Will report back.
keep us posted! This feature will also be helpful to me. I've been using gcsfuse, file caching, and webdataset in a Vertex AI Workbench instance where I have more control over the environment, but I prefer to use Vertex AI Training once I'm out of the prototyping phase. |
keen to know where we are in terms of enabling GCSFuse file-cache feature in Vertex AI custom training jobs. |
After months of experimentation on distributed training in GCP. I think the most cost efficient solutions (specially when dealing with TB of data) for custom training job workflows or workbench instances are:
and use GCSFuse only for model artifacts, metadata and small dataframes. |
@miguelalba96 I'm glad that you have found a workaround that works for you.
I am curious: higher compared to which alternative, the local SSD or reads through GCSFuse? Also, could you share what kind of read speeds you are getting with the Filestore NFS mount? I am asking to find out whether these will be beaten by GCSFuse with file caching enabled, once Vertex AI has rolled out support for it.
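To make such comparisons concrete, a minimal throughput probe like the one below can be run against a file under each mount (this is a generic sketch I'm adding for illustration, not part of the original discussion; the path is a placeholder):

```python
import time


def read_throughput_mb_s(path: str, chunk_bytes: int = 8 * 1024 * 1024) -> float:
    """Sequentially read `path` and return the observed throughput in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_bytes):
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total / 1e6) / max(elapsed, 1e-9)
```

Running it once cold and once warm against the same file (e.g. something under `/gcs/my-bucket/`) would show the effect of any caching layer.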
@JasperW01 the last time I spoke to the Vertex AI training team, they told me that they will be rolling out the change within Q3. I'll update here as soon as it is fully rolled out and you can expect it to make a difference.
Thanks a lot, @gargnitingoogle. Is there any update on the rollout? My customers are keen to leverage it in their production workloads. Thanks a lot in advance.
@JasperW01 AFAIK, the plan is to roll it out for a limited set of machine types at first, and initially for Autopilot GKE clusters only. I'll share more concrete information as soon as I have it.
Description
I am encountering data loading throughput issues while training a large model on Google Cloud Platform (GCP). Here's some context:
I am using Vertex AI pipelines for my training process. According to GCP documentation, Vertex AI custom training jobs automatically mount GCS (Google Cloud Storage) buckets using GCSFuse. While debugging my training setup, I've identified that the data-loading bottleneck appears to be GCSFuse, leading to data starvation and subsequent drops in GPU utilization.
I've come across performance tips that discuss caching as a potential solution. However, since Vertex AI configures GCSFuse automatically, it's unclear how to enable caching.
Should I configure caching at runtime when running the training job?
When building the Docker image that contains my code to run as a custom job, should I mount the bucket manually and specify a cache-dir? Won't that be reconfigured by Vertex AI when submitting the job?
Additional context
I am running distributed training on a 4-node setup within Vertex AI pipelines. Each worker node is an n1-highmem-16 machine equipped with 2 GPUs.
I am using google_cloud_pipeline_components.v1.custom_job.create_custom_training_job_from_component to create the custom training job.
In my code, I'm simply replacing gs:// with /gcs/ as per the GCP documentation for Vertex AI.
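As an illustration of that substitution (the bucket and object names here are hypothetical):

```python
def gcs_uri_to_fuse_path(uri: str) -> str:
    """Map a gs:// URI to the /gcs/ path that Vertex AI mounts via GCSFuse."""
    prefix = "gs://"
    if not uri.startswith(prefix):
        raise ValueError(f"not a GCS URI: {uri}")
    return "/gcs/" + uri[len(prefix):]


print(gcs_uri_to_fuse_path("gs://my-bucket/data/shard-0000.tar"))
# -> /gcs/my-bucket/data/shard-0000.tar
```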
Type of issue
Information - request