SNOW-1463687: Driver uses too much memory #1152

Closed
prochac opened this issue Jun 5, 2024 · 4 comments
Assignees: sfc-gh-dszmolka
Labels: status-information_needed (Additional information is required from the reporter), status-triage (Issue is under initial triage)

Comments


prochac commented Jun 5, 2024

We noticed a high memory usage peak with the Snowflake driver.

[Attached screenshots: Screenshot_20240605_131645, Screenshot_20240605_131718 - memory usage profiles referenced in the comments below]

The code isn't complex at all - it just iterates over the SQL rows returned from a cursor.
The code is shared with other database drivers, but only Snowflake causes these high memory peaks.

Please answer these questions before submitting your issue.
In order to accurately debug the issue, this information is required. Thanks!

  1. What version of the Go driver are you using?

v1.10.0

  2. What operating system and processor architecture are you using?

x86_64 GNU/Linux

  3. What version of Go are you using?

1.22.3

  4. Server version? E.g. 1.90.1

Not sure - we're an ETL platform, and I'm not sure yet which pipeline caused it. IMO irrelevant.

  5. What did you do?

A simple (*sql.DB).QueryContext with an iterator over *sql.Rows, as sketched below.

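A minimal sketch of that access pattern, assuming a placeholder DSN, query, and column types (this is not the actual production code):

```go
// Hypothetical reproduction sketch: stream rows from Snowflake via database/sql.
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/snowflakedb/gosnowflake" // registers the "snowflake" driver
)

func main() {
	// DSN and query are placeholders, not the real production values.
	db, err := sql.Open("snowflake", "user:pass@account/db/schema?warehouse=wh")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.QueryContext(context.Background(), "SELECT id, name FROM some_table")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// Process each row as it is scanned instead of accumulating the result set.
	for rows.Next() {
		var id int64
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		// ... handle one row at a time ...
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```
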
  6. What did you expect to see?

No memory peak, like with the other DB drivers we share this code with.

  7. Can you set logging to DEBUG and collect the logs?

Not very motivated to do it in production.

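For reference only (not pushing for production debugging): to my understanding the driver's log level can be raised either via the tracing=debug connection parameter or programmatically. The exact call below follows the gosnowflake README and should be treated as an assumption, not a verified API for every driver version:

```go
package example

import (
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

// enableDebugLogging raises the gosnowflake log level to "debug".
// Assumption: GetLogger().SetLogLevel is the API documented in the driver README;
// double-check it against the installed driver version before relying on it.
func enableDebugLogging() {
	if err := sf.GetLogger().SetLogLevel("debug"); err != nil {
		log.Printf("could not set gosnowflake log level: %v", err)
	}
}
```
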
@prochac prochac added the bug Erroneous or unexpected behaviour label Jun 5, 2024
@github-actions github-actions bot changed the title Driver uses too much memory SNOW-1463687: Driver uses too much memory Jun 5, 2024
@sfc-gh-dszmolka sfc-gh-dszmolka self-assigned this Jun 5, 2024
@sfc-gh-dszmolka sfc-gh-dszmolka added status-triage Issue is under initial triage and removed bug Erroneous or unexpected behaviour labels Jun 5, 2024

sfc-gh-dszmolka (Contributor) commented Jun 5, 2024

Hi - thanks for submitting this issue. Can you please share the actual code that produces the high memory usage for you in gosnowflake?
If it's not shareable, then a minimal reproduction application that, when run, leads to the same issue?

Asking because other drivers show the same problem when someone reads the entire result set into memory before working on it, which is why it would be great to see your approach. Thank you so much in advance!

Edit: also, regarding the difference between the first and the second screenshots - both seem to show a gosnowflake-related stack, but the memory usage is very different. Is one of the gosnowflake versions working well for you? If so, which version has the low memory usage (bottom screenshot) and which has the high one (upper screenshot)? Thank you!

@sfc-gh-dszmolka sfc-gh-dszmolka added the status-information_needed Additional information is required from the reporter label Jun 5, 2024

prochac commented Jun 5, 2024

> Hi - thanks for submitting this issue. Can you please share the actual code that produces the high memory usage for you in gosnowflake? If it's not shareable, then a minimal reproduction application that, when run, leads to the same issue?
>
> Asking because other drivers show the same problem when someone reads the entire result set into memory before working on it, which is why it would be great to see your approach. Thank you so much in advance!
>
> Edit: also, regarding the difference between the first and the second screenshots - both seem to show a gosnowflake-related stack, but the memory usage is very different. Is one of the gosnowflake versions working well for you? If so, which version has the low memory usage (bottom screenshot) and which has the high one (upper screenshot)? Thank you!

I spent some time trying to reproduce the same memory pattern in our test environment, unsuccessfully.

But now, after letting it go for a moment, I realised that we may have one legacy method that could iterate over all results to render an HTTP response (see the sketch below). That would also explain the irregularity of the memory pattern, because it doesn't happen continuously; I only noticed it in our monitoring because it's uncommon.

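For illustration, a hedged sketch of the two patterns in question - buffering the whole result set before writing the HTTP response versus streaming rows as they are scanned. The Record type, column layout, and JSON output are invented for the example; this is not the actual legacy code:

```go
package example

import (
	"database/sql"
	"encoding/json"
	"net/http"
)

// Record is an invented row type for the example.
type Record struct {
	ID   int64  `json:"id"`
	Name string `json:"name"`
}

// renderBuffered collects the whole result set in memory before encoding it,
// so peak memory grows with the number of rows returned.
func renderBuffered(w http.ResponseWriter, rows *sql.Rows) error {
	defer rows.Close()
	var all []Record
	for rows.Next() {
		var r Record
		if err := rows.Scan(&r.ID, &r.Name); err != nil {
			return err
		}
		all = append(all, r) // entire result set held in memory at once
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return json.NewEncoder(w).Encode(all)
}

// renderStreamed encodes each row as soon as it is scanned (JSON Lines),
// so memory stays roughly flat regardless of result-set size.
func renderStreamed(w http.ResponseWriter, rows *sql.Rows) error {
	defer rows.Close()
	enc := json.NewEncoder(w)
	for rows.Next() {
		var r Record
		if err := rows.Scan(&r.ID, &r.Name); err != nil {
			return err
		}
		if err := enc.Encode(&r); err != nil {
			return err
		}
	}
	return rows.Err()
}
```
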
I will check tomorrow, and hopefully we can blame our legacy code 😁

sfc-gh-dszmolka (Contributor) commented

Appreciate the reproduction efforts a lot 👍 The recent finding you mentioned indeed sounds promising. We'll be standing by on this issue; please let us know how it went once you've had a bit more time.


prochac commented Jun 13, 2024

Sorry for taking so long... priorities.

Yes, it was the legacy endpoint.

@prochac prochac closed this as completed Jun 13, 2024