Belgian prices are not fetched because of different resolutions in API response #195
So since the API changes of 10/04, the response for the Belgian zone contains PT15M (15-minute) resolution data mixed with PT60M (60-minute) data. The current parsing logic only parses PT60M data, because that was all that was needed until now.
It is currently unclear to me what the solution here should be. We could change the parsing logic to also parse PT15M data; the issue with that is that the German zone also returns PT15M data that we explicitly don't want to parse. We could program in some logic to only parse PT15M when Belgian data is requested, but that feels like a 'hack' (a sketch of what that could look like follows below).
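For illustration only, a minimal sketch of that area-conditional logic. The ACCEPTED_RESOLUTIONS mapping and resolution_accepted helper are hypothetical names; nothing like this exists in the integration today:

ACCEPTED_RESOLUTIONS = {
    "BE": {"PT60M", "PT15M"},  # Belgium now mixes both resolutions in one response
    "DE_LU": {"PT60M"},        # German PT15M series should still be skipped
}

def resolution_accepted(area: str, resolution: str) -> bool:
    # Return True if a Period with this resolution should be parsed for the area;
    # default to hourly-only for any area not listed above.
    return resolution in ACCEPTED_RESOLUTIONS.get(area, {"PT60M"})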
Comments
As a starter, we could ignore any 2nd, 3rd, and 4th price entries (see the sketch after this comment). Changing the complete integration to support 15-minute intervals is quite a large change. |
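A minimal sketch of that "keep only the first quarter-hour price" idea, assuming points is a list of (position, price) string tuples already parsed from a PT15M Period; first_price_per_hour, points, and period_start are illustrative names, not the integration's actual code:

from datetime import datetime, timedelta

def first_price_per_hour(points, period_start: datetime) -> dict:
    # Drop the 2nd, 3rd, and 4th 15-minute entries of every hour,
    # keeping exactly one price per whole hour.
    hourly_prices = {}
    for position, price in points:
        offset = int(position) - 1
        if offset % 4 == 0:  # positions 1, 5, 9, ... open a new hour
            hourly_prices[period_start + timedelta(hours=offset // 4)] = float(price)
    return hourly_prices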
This is the debug output; it did find 24 hours: 2024-10-07 15:58:40.865 DEBUG (MainThread) [custom_components.entsoe.coordinator] ENTSO-e DataUpdateCoordinator data update |
Dear all, entsoe-py maintainer dropping in. Sorry to hear you had to build your own implementation due to the pandas issue. For the future, it is important to note that the SDAC will go to 15-minute resolution somewhere in Q1 or Q2 2025. This means that you will need the 15-minute parser eventually, since ALL countries will swap their day-ahead prices to that resolution at the same time. |
Thank you for your explanation; that is really useful. |
This might fix this for now (not ideal, as I don't really care about real 15-minute resolutions). |
Proposing a temporary workaround with the change below to api_client.py for the mixed resolutions in the API response:
# Assumes api_client.py's existing imports: datetime/timedelta, pytz,
# xml.etree.ElementTree as ET, typing.Union, the Area enum, and _LOGGER.
def query_day_ahead_prices(
    self, country_code: Union[Area, str], start: datetime, end: datetime
) -> dict | None:
    """
    Parameters
    ----------
    country_code : Area|str
    start : datetime
    end : datetime

    Returns
    -------
    dict | None
        Mapping of interval start to price, or None on a failed request.
    """
    area = Area[country_code.upper()]
    params = {
        "documentType": "A44",
        "in_Domain": area.code,
        "out_Domain": area.code,
    }
    response = self._base_request(params=params, start=start, end=end)
    if response.status_code == 200:
        try:
            root = self._remove_namespace(ET.fromstring(response.content))
            _LOGGER.debug(f"content: {root}")
            series = {}
            # Extract the prices from every TimeSeries/Period in the response.
            for timeseries in root.findall(".//TimeSeries"):
                for period in timeseries.findall(".//Period"):
                    resolution = period.find(".//resolution").text
                    _LOGGER.debug(f"Resolution {resolution}")
                    if resolution not in ("PT60M", "PT30M", "PT15M"):
                        continue
                    response_start = period.find(".//timeInterval/start").text
                    start_time = (
                        datetime.strptime(response_start, "%Y-%m-%dT%H:%MZ")
                        .replace(tzinfo=pytz.UTC)
                        .astimezone()
                    )
                    response_end = period.find(".//timeInterval/end").text
                    end_time = (
                        datetime.strptime(response_end, "%Y-%m-%dT%H:%MZ")
                        .replace(tzinfo=pytz.UTC)
                        .astimezone()
                    )
                    _LOGGER.debug(f"Period found is from {start_time} till {end_time}")
                    for point in period.findall(".//Point"):
                        position = point.find(".//position").text
                        price = point.find(".//price.amount").text
                        # Convert the 1-based position to an hour offset;
                        # sub-hourly resolutions yield fractional offsets.
                        if resolution == "PT60M":
                            hour = int(position) - 1
                        elif resolution == "PT30M":
                            hour = (int(position) - 1) / 2
                        elif resolution == "PT15M":
                            hour = (int(position) - 1) / 4
                        else:
                            continue
                        try:
                            series[start_time + timedelta(hours=hour)] = float(price)
                        except (TypeError, ValueError):
                            continue
            # Now fill in any missing whole hours with the last known price.
            # Note: start_time/end_time here come from the last Period parsed.
            current_time = start_time
            last_price = series[current_time]
            while current_time < end_time:  # up to, but excluding, the end time
                if current_time in series:
                    last_price = series[current_time]  # update to the current price
                else:
                    _LOGGER.debug(
                        f"Extending the price {last_price} of the previous hour to {current_time}"
                    )
                    series[current_time] = last_price  # fill with the last known price
                current_time += timedelta(hours=1)
            return dict(sorted(series.items()))
        except Exception as exc:
            _LOGGER.debug(f"Failed to parse response content: {response.content}")
            raise exc
    else:
        print(f"Failed to retrieve data: {response.status_code}")
        return None
|
If it helps, I was thinking about making a small library that only pulls in prices, with no pandas dependency, for projects such as these. Would that help?
|
I was wondering if you know how the SDAC and EXAA will be distinguishable from each other when the SDAC changes to 15-minute data.
I think that a lightweight, low-dependency API library could be useful (a rough sketch of the idea follows below). It would definitely be easier to wait for a package update when ENTSO-E breaks the API again than to fix and maintain it in this project. In a perfect world, that library could then be the core of a library that provides pandas objects; that would eliminate maintaining redundant functionality. I just don't know how feasible that all is. Belgian data for today is in the correct format, so maybe it is fixed. If the prices of tomorrow break it again, I will look into merging the patch @wonko |
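As a rough illustration of what such a pandas-free interface could look like; every name here is hypothetical and sketches the idea rather than any existing package:

from datetime import datetime
from typing import Dict

class DayAheadClient:
    # Hypothetical minimal client: standard library only, no pandas.
    def __init__(self, api_key: str):
        self.api_key = api_key

    def prices(self, area: str, start: datetime, end: datetime) -> Dict[datetime, float]:
        # Would return {interval_start: price in EUR/MWh} for the requested window.
        raise NotImplementedError("sketch only")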
Keep in mind that the response sent was valid according to the specs as set out in the XML Schema guide. The patch makes it possible to accept these valid responses and doesn't break anything (it even adds tests on the XML, which can easily be extended with other cases in the future; a sketch of one follows below). Until there's a uniform solution through some external (or internal) package which deals with everything, it only adds robustness. But it's your call in the end. |
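For a sense of shape only, a minimal sketch of what such an XML test could look like, assuming a hypothetical parse_day_ahead_prices(xml_bytes) helper that returns a {start: price} dict; the PR's actual tests and fixtures may differ:

MIXED_RESOLUTION_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<Publication_MarketDocument>
  <TimeSeries>
    <Period>
      <timeInterval><start>2024-10-06T22:00Z</start><end>2024-10-06T23:00Z</end></timeInterval>
      <resolution>PT15M</resolution>
      <Point><position>1</position><price.amount>90.10</price.amount></Point>
      <Point><position>2</position><price.amount>91.20</price.amount></Point>
    </Period>
  </TimeSeries>
</Publication_MarketDocument>"""

def test_mixed_resolution_is_accepted():
    # A valid PT15M Period must be parsed rather than silently dropped.
    prices = parse_day_ahead_prices(MIXED_RESOLUTION_XML)
    assert prices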
Yes, I agree, and I like the tests. I do have the intention of merging the PR, but this just takes the pressure off releasing a rushed version. |
PR #202 is now tested and should resolve this issue |