Since this version of Process stores the Scrapyd job ID, it's easy to use scrapy-log-analyzer to parse the log file itself. This avoids errors introduced by Process, the network, etc.
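As a rough illustration of the idea (not the scrapy-log-analyzer API): given the stored Scrapyd job ID, the log file can be located on disk and scanned directly for ERROR-level lines, so errors are read from the source of truth rather than from data relayed through Process. The directory layout and helper names below are assumptions for the sketch.

```python
import re

# Assumed Scrapyd layout: <logs_dir>/<project>/<spider>/<job_id>.log
def log_path(logs_dir, project, spider, job_id):
    """Build the path to a Scrapyd job's log file (hypothetical helper)."""
    return f"{logs_dir}/{project}/{spider}/{job_id}.log"

def count_log_errors(log_text):
    """Count lines logged at ERROR level in a Scrapy log.

    Scrapy's default log format looks like:
    2023-07-04 12:00:01 [scrapy.core.scraper] ERROR: Spider error processing ...
    """
    return sum(1 for line in log_text.splitlines() if "] ERROR:" in line)
```

In practice, scrapy-log-analyzer does this parsing (and more), so this is only meant to show why reading the log directly sidesteps errors introduced by Process or the network.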
Hmm, the job ID is stored by the data registry. It is sent to create_collection from Collect's spider_opened callback, but the ID is not yet stored at that point. See #341. Done.
jpmckinney changed the title from "Stop storing collection errors in the database" to "Stop storing FileError items from Kingfisher Collect in the database" on Jul 4, 2023.
Obviously, as part of this, we would also stop sending messages for file errors from Collect.
Edit: There is also some logic in collectionstatus that we can remove: `.exclude(data__has_key="http_error")`
As part of this, we can delete collection_note rows `WHERE note LIKE 'Couldn''t download %'`
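The cleanup above can be sketched as a single DELETE. The real collection_note table lives in Kingfisher Process's PostgreSQL database; the in-memory SQLite table below only illustrates the LIKE pattern and the doubled single quote that escapes the apostrophe in a SQL string literal.

```python
import sqlite3

# Minimal stand-in for the real collection_note table (PostgreSQL in Process).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collection_note (id INTEGER PRIMARY KEY, note TEXT)")
conn.executemany(
    "INSERT INTO collection_note (note) VALUES (?)",
    [
        ("Couldn't download http://example.com/a.json",),  # matches the pattern
        ("Some unrelated note",),                           # should survive
    ],
)

# '' inside a SQL string literal is a single quote, so this matches
# notes beginning with: Couldn't download
deleted = conn.execute(
    "DELETE FROM collection_note WHERE note LIKE 'Couldn''t download %'"
).rowcount
```

The same statement works against PostgreSQL; only rows recording failed downloads match, and other notes are left intact.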
jpmckinney added the "database" label (Changes to the database (adding indices, renaming columns)) and removed the "feature" label (Relating to loading data from the web API or CLI command) on Apr 17, 2024.
open-contracting/kingfisher-collect#917 (comment)