
Fix dask_cudf.read_csv #17612

Merged
merged 2 commits into branch-25.02 from fix-dask-read-csv on Dec 17, 2024
Conversation

rjzamora (Member)

Description

Recent changes in dask and dask-expr have broken dask_cudf.read_csv (dask/dask-expr#1178, dask/dask#11603). Fortunately, the breaking changes help us avoid legacy CSV code in the long run.

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@rjzamora rjzamora added bug Something isn't working 2 - In Progress Currently a work in progress dask Dask issue non-breaking Non-breaking change labels Dec 17, 2024
@rjzamora rjzamora self-assigned this Dec 17, 2024
@rjzamora rjzamora requested a review from a team as a code owner December 17, 2024 16:52
@github-actions github-actions bot added the Python Affects Python cuDF API. label Dec 17, 2024
rjzamora (Member Author) commented:

NOTE: This code was translated from the "legacy" code in dask_cudf/_legacy/io/csv.py. After this PR is merged, that file can be removed in #17558

Comment on lines -188 to -191
# Test chunksize deprecation
with pytest.warns(FutureWarning, match="deprecated"):
    df3 = dask_cudf.read_csv(path, chunksize=None, dtype=typ)
dd.assert_eq(df, df3)
rjzamora (Member Author) commented:

This was deprecated a long time ago; the new code path no longer tries to catch it.

Comment on lines -279 to -281
with pytest.warns(match="dask_cudf.io.csv.read_csv is now deprecated"):
    df2 = dask_cudf.io.csv.read_csv(csv_path)
dd.assert_eq(df, df2, check_divisions=False)
rjzamora (Member Author) commented:

The new csv code now lives at this "expected" path.

pentschev (Member) left a comment:

I think this looks fine, I left some suggestions.

import dask_expr as dx
from fsspec.utils import stringify_path

try:
    # TODO: Remove when cudf is pinned to dask>2024.12.0
Member commented:

Does it make sense for us to do this? rapids-dask-dependency should always pick the latest Dask, no?

rjzamora (Member Author) replied:

By "when cudf is pinned", I mean "when rapids-dask-dependency is pinned". Is that your question?

rapids-dask-dependency currently pins to >=2024.11.2. This means another package with a dask<2024.12.0 requirement can still give us dask-2024.11.2 in practice, no?
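To illustrate that point with a sketch (not the actual resolver logic): a floor pin of `>=2024.11.2` combined with a sibling cap of `<2024.12.0` still admits `2024.11.2`.

```python
# Sketch: compare calendar versions as integer tuples to show that
# dask 2024.11.2 satisfies both >=2024.11.2 and <2024.12.0, so a
# resolver may legitimately pick it. Pins named here mirror the comment.
def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

floor = parse("2024.11.2")    # rapids-dask-dependency: dask>=2024.11.2
cap = parse("2024.12.0")      # hypothetical sibling pin: dask<2024.12.0
candidate = parse("2024.11.2")

allowed = floor <= candidate < cap
print(allowed)
```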

>>> import dask_cudf
>>> df = dask_cudf.read_csv("myfiles.*.csv")

In some cases it can break up large files:
Member commented:

What cases are those? Or is it always dependent upon the file size and the value of blocksize? If the latter, maybe we should just rephrase to "It can also break up large files by specifying the size of each block via blocksize".
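As a rough mental model of the behavior being discussed (an assumption for illustration, not dask internals): with a fixed blocksize, the partition count scales with file size, so whether a file gets "broken up" depends on both values.

```python
import math

# Rough model (assumption, not dask source): a file yields about
# ceil(file_size / blocksize) partitions, with a minimum of one.
def approx_partitions(file_size_bytes: int, blocksize_bytes: int) -> int:
    return max(1, math.ceil(file_size_bytes / blocksize_bytes))

MIB = 1024**2
print(approx_partitions(1024 * MIB, 256 * MIB))  # large file gets split
print(approx_partitions(64 * MIB, 256 * MIB))    # small file: one block
```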

rjzamora (Member Author) replied:

Sorry, I didn't spend any time reviewing these docstrings, because they were directly copied from dask_cudf/_legacy/io/csv.py. Does it make sense to address these suggestions/questions in a follow-up (just to make sure CI is unblocked)?


>>> df = dask_cudf.read_csv("largefile.csv", blocksize="256 MiB")

It can read CSV files from external resources (e.g. S3, HTTP, FTP)
Member commented:

Suggested change
It can read CSV files from external resources (e.g. S3, HTTP, FTP)
It can read CSV files from external resources (e.g. S3, HTTP, FTP):

----------
path : str, path object, or file-like object
Either a path to a file (a str, :py:class:`pathlib.Path`, or
py._path.local.LocalPath), URL (including http, ftp, and S3
Member commented:

Suggested change
py._path.local.LocalPath), URL (including http, ftp, and S3
``py._path.local.LocalPath``), URL (including HTTP, FTP, and S3

Member commented:

Maybe it could also just be :py:class:`py._path.local.LocalPath`?

path : str, path object, or file-like object
Either a path to a file (a str, :py:class:`pathlib.Path`, or
py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as
Member commented:

Suggested change
locations), or any object with a read() method (such as
locations), or any object with a ``read()`` method (such as

The target task partition size. If ``None``, a single block
is used for each file.
**kwargs : dict
Passthrough key-word arguments that are sent to
Member commented:

Suggested change
Passthrough key-word arguments that are sent to
Passthrough keyword arguments that are sent to

wence- (Contributor) left a comment:

Thanks Rick

@rjzamora rjzamora added 5 - Ready to Merge Testing and reviews complete, ready to merge and removed 2 - In Progress Currently a work in progress labels Dec 17, 2024
rjzamora (Member Author) commented:

/merge

@rapids-bot rapids-bot bot merged commit 0058b52 into rapidsai:branch-25.02 Dec 17, 2024
111 checks passed
@rjzamora rjzamora deleted the fix-dask-read-csv branch December 17, 2024 18:26
Labels
5 - Ready to Merge Testing and reviews complete, ready to merge bug Something isn't working dask Dask issue non-breaking Non-breaking change Python Affects Python cuDF API.
Projects
Status: Done
3 participants