
Account screen: Provide option to cache/clear ALL meter readings and rates in DB #217

Open
ryanw-mobile opened this issue Jun 17, 2024 · 1 comment
Labels
feature A new feature for the user, not a new feature for a build script.

Comments

@ryanw-mobile (Owner)

Dependencies:
#23 - We need to have the RoomDB ready.
#79 - Repository needs to be able to resume lazy load

If we do not have all the half-hourly meter readings and their corresponding rates stored locally, it is currently impossible to calculate an estimated cost in any presentation mode other than daily/half-hourly.

If we have enough cached consumption data, on the Agile screen we could aggregate the average consumption per half-hour slot, showing how much we generally use against the upcoming Agile rate. Assuming our habits stay unchanged, we can then look ahead and estimate how much we would save or pay extra.
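The per-slot aggregation could look like this minimal Python sketch. The slot index (0–47, i.e. which half-hour of the day a reading falls in) and the reading/rate shapes are assumptions for illustration, not the app's actual models:

```python
from collections import defaultdict

def average_per_slot(readings: list[tuple[int, float]]) -> dict[int, float]:
    """Average kWh per half-hour slot. Each reading is (slot_index, kwh),
    where slot_index is 0..47 (hypothetical shape)."""
    sums: dict[int, float] = defaultdict(float)
    counts: dict[int, int] = defaultdict(int)
    for slot, kwh in readings:
        sums[slot] += kwh
        counts[slot] += 1
    return {slot: sums[slot] / counts[slot] for slot in sums}

def projected_cost(avg_kwh: dict[int, float],
                   upcoming_rates: dict[int, float]) -> float:
    """Sum of (average consumption in slot) x (upcoming Agile rate for slot),
    over the slots for which we have both a habit and a published rate."""
    return sum(avg_kwh[s] * upcoming_rates[s]
               for s in avg_kwh if s in upcoming_rates)
```

With the averages in hand, the "look ahead" figure is just the projected cost under the published upcoming rates versus the same consumption priced under the current tariff.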

When returning to demo mode, we force-clear the DB.

@ryanw-mobile (Owner, Author)

Some simple calculations:

Assume a user has joined the Agile tariff for one year.

Number of meter readings a day = 24 * 2 = 48
Number of meter readings a year = 48 * 365.25 = 17,532

Each API call can return up to 100 records per page.
Therefore, we need to repeat the API call 176 times.

That's the same for Agile unit rates.

So for one year's data we have to fire 352 API calls, and that is before considering users with more than one year's history.
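As a quick sanity check on the arithmetic above (the 100-records-per-page figure is from the API; everything else follows):

```python
import math

READINGS_PER_DAY = 24 * 2                            # half-hourly readings: 48
readings_per_year = int(READINGS_PER_DAY * 365.25)   # 17,532
PAGE_SIZE = 100                                      # records per API page
pages_per_dataset = math.ceil(readings_per_year / PAGE_SIZE)  # 176 calls
total_calls = pages_per_dataset * 2                  # meter readings + Agile unit rates
print(readings_per_year, pages_per_dataset, total_calls)      # 17532 176 352
```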

We are concerned that Octopus might block us if we fire this many calls consecutively without delay.
On the other hand, this operation is not going to finish in a few seconds, so the download process needs to be visible to the user. That means:

  • We need a new use case that publishes the progress
  • We need the repository to be able to return the total number of records (not yet available), and let the use case manually request each single page (available) so we can control the rate of data flow.
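The page-by-page flow those two points describe could be sketched as follows. `fetch_page`, the progress callback, and the throttle delay are all hypothetical names for illustration, not the repository's real API:

```python
import time
from typing import Callable

def download_all(
    fetch_page: Callable[[int], list[dict]],  # hypothetical: returns one page, empty when exhausted
    total_records: int,                       # would come from the repository (not yet available)
    page_size: int = 100,
    on_progress: Callable[[float], None] = lambda fraction: None,
    delay_seconds: float = 0.0,               # pause between calls to stay under rate limits
) -> list[dict]:
    records: list[dict] = []
    page = 1
    while len(records) < total_records:
        batch = fetch_page(page)
        if not batch:
            break  # server returned fewer records than expected; stop cleanly
        records.extend(batch)
        on_progress(min(len(records) / total_records, 1.0))
        page += 1
        if delay_seconds:
            time.sleep(delay_seconds)
    return records
```

Because the use case drives each page request itself, it can both publish progress after every page and insert whatever delay we decide is safe between calls.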

Optionally, if we want to make things a bit more complex, we can inspect what data we already have in our database, so that we do not ask for what we already hold.

If we don't split API requests, we need 176 API calls. If we instead split them into requests covering one or two days at a time, we can run a SQL query to check whether we already have a complete set of data for that range, then keep checking and requesting only what we don't have.

In this way, the number of API calls will be higher, BUT it is more fault tolerant: if the transmission is interrupted, we can skip the portions already downloaded and do not have to reload everything again.

This option seems to make more sense, as the repository then doesn't need to return the total record count for progress tracking; the use case can instead use the number of days processed as the progress.
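The range-checking idea could be sketched like this. `complete_days` stands in for the SQL completeness query (e.g. counting readings grouped by day), which does not exist yet:

```python
from datetime import date, timedelta

def missing_days(start: date, end: date, complete_days: set[date]) -> list[date]:
    """Days in [start, end] for which the local DB does not yet hold a full
    48-reading set. `complete_days` is a stand-in for the (hypothetical)
    SQL query that reports which days are already complete."""
    gaps: list[date] = []
    d = start
    while d <= end:
        if d not in complete_days:
            gaps.append(d)
        d += timedelta(days=1)
    return gaps

def day_progress(days_processed: int, total_days: int) -> float:
    """Progress as a fraction of days handled, per the idea above."""
    return days_processed / total_days
```

The use case would iterate over `missing_days`, fetch each gap, and report `day_progress` after every day, so an interrupted download resumes from the remaining gaps rather than from scratch.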
