Currently, when running tests with coverage, where the data produced by the coverage report is large, each reporter is most probably running a row-by-row query from the client to the DB.
We should prevent this, as it significantly impacts the overall execution time of a run when done via utPLSQL-cli.
Ideally, data from reporters that do not report to screen but do their reporting in the background should be read in bulk fetches of 100, 1000, or even 10000 rows.
If you have a look at our test runs on Travis,
~50 seconds is the time for the utPLSQL unit test run, while the overall end-to-end run takes ~80 seconds.
I can imagine that on a really large schema, most of the time might be spent on reading the coverage report by the cli.
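The bulk-read idea can be sketched in plain Java. This is a minimal, hypothetical illustration (the `BatchReader` class and `readInBatches` method are not part of utPLSQL-cli): an `Iterator` stands in for a DB cursor, and rows are drained in chunks so that 25 rows cost 3 "round trips" instead of 25. In a real JDBC client the same effect is usually achieved by setting `Statement.setFetchSize(n)` (or Oracle's row-prefetch setting) so the driver transfers `n` rows per network round trip.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchReader {

    // Drain the row source in chunks of batchSize instead of one row at a time.
    // Each inner list models one client/DB round trip.
    static <T> List<List<T>> readInBatches(Iterator<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        List<T> current = new ArrayList<>(batchSize);
        while (rows.hasNext()) {
            current.add(rows.next());
            if (current.size() == batchSize) {
                batches.add(current);
                current = new ArrayList<>(batchSize);
            }
        }
        if (!current.isEmpty()) {
            batches.add(current); // last, possibly partial, batch
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 25; i++) {
            rows.add(i);
        }
        // 25 rows fetched in batches of 10 -> 3 round trips instead of 25.
        List<List<Integer>> batches = readInBatches(rows.iterator(), 10);
        System.out.println(batches.size()); // prints 3
    }
}
```

With a batch size of 1000 or 10000, the per-row round-trip overhead that dominates large coverage reports should mostly disappear; the trade-off is client-side memory per batch.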