Currently, the data source and repository are discarding paging data.
It is OK that neither of them automatically resumes, but the use case should check and fire follow-up requests, since we are supposed to plot the graph using a complete set of data.
This is linked to #96, where we will support a customised date range, which might result in more than 100 records. Otherwise, even the half-hourly view at present only returns 48 records per day.
The top of the response reports the number of records returned (in this case 103). If the number of records exceeds the page_size (usually 100), you will have to explicitly ask for the next page; when one exists, its URL is given in the "next" field of the response.
This might be better handled by the repository if we later implement DB/caching.
The API includes a top-level count, which gives the total number of records,
and next (a link that we are not currently parsing), which points to the next batch of data when more is available.
So the repository should keep looping until it has collected count records or next becomes null.
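The loop described above could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the `count` and `next` field names come from the API response described in this issue, while the `results` field name and the `fetch_page` callable are assumptions for the sake of a self-contained example.

```python
def fetch_all(fetch_page, first_url):
    """Collect every record by following `next` links.

    `fetch_page` is a hypothetical function that takes a URL and
    returns the parsed JSON body as a dict. We stop when either the
    accumulated records reach the top-level `count`, or `next` is null.
    """
    records = []
    url = first_url
    while url is not None:
        page = fetch_page(url)
        # "results" is an assumed name for the per-page record list.
        records.extend(page.get("results", []))
        # Stop early once we hold every record the API promised.
        if len(records) >= page.get("count", 0):
            break
        # `next` is null (None once parsed) on the last page.
        url = page.get("next")
    return records
```

A use case (or the repository, if caching moves there per the comment above) would call this once per query and hand the complete list to the plotting layer, so the graph never renders from a truncated first page.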