[Epic] Import additional data from zbMATH Open #3
Comments
@physikerwelt according to the documentation and the examples, the OAI API should return JSON, but it only returns XML. That's OK for me, but is it the intended behavior? |
Yes, the response content type is incorrect. I'll make a pull request. |
The content type itself is correct. Only the Swagger API specification expects the wrong content type. |
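For reference, a quick way to see what the endpoint actually returns is to request the standard OAI-PMH `Identify` verb and inspect the `Content-Type` header. A minimal sketch in Python, assuming the base URL https://oai.zbmath.org/ mentioned later in this thread (the exact path may differ from the documentation):

```python
import requests

# Minimal sketch: ask the OAI-PMH endpoint to identify itself and report the
# Content-Type it actually sends. The base URL is an assumption taken from a
# later comment; adjust it if the service is mounted under a different path.
BASE_URL = "https://oai.zbmath.org/"

resp = requests.get(BASE_URL, params={"verb": "Identify"}, timeout=30)
print("HTTP status:", resp.status_code)
print("Content-Type:", resp.headers.get("Content-Type"))
print(resp.text[:300])  # OAI-PMH responses are XML by specification
```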
@Hyper-Node suggested that we wouldn't necessarily need to import the data if it is already in a graph database in zbMATH Open (I didn't find any documentation on how the backend database is implemented). If it is, that would be the most elegant solution. Otherwise, would we import preview data (title, author, DOI, keywords etc., but no abstract etc.) for all publications in zbMATH Open? Taking licensing limitations into account, that would be roughly 3 million entries. In that case, I would first prototype it as described in this issue, then build an import container and put it on the server to run overnight. I estimate that importing 3 million entries (probably in batches of 100) using QuickStatements would take a couple of weeks. @Hyper-Node @physikerwelt what do you think? |
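For illustration, here is a rough sketch of what one QuickStatements V1 entry for a single publication could look like, generated from Python. The property and item IDs below are Wikidata-style placeholders, not the MaRDI portal's actual vocabulary, and the value types are simplified; treat the whole mapping as hypothetical.

```python
# Hypothetical sketch: build a QuickStatements V1 batch for one publication.
# CREATE starts a new item; LAST refers to it on the following lines.
# Property/item IDs are placeholders borrowed from Wikidata (P31 = instance of,
# P1476 = title, P356 = DOI, P2093 = author name string, Q13442814 = scholarly
# article); the MaRDI Wikibase will have its own IDs.
def publication_to_quickstatements(title, doi, authors):
    lines = [
        "CREATE",
        'LAST\tLen\t"{}"'.format(title),        # English label
        "LAST\tP31\tQ13442814",                 # instance of: scholarly article (placeholder)
        'LAST\tP1476\ten:"{}"'.format(title),   # title as monolingual text
        'LAST\tP356\t"{}"'.format(doi),         # DOI
    ]
    for name in authors:
        lines.append('LAST\tP2093\t"{}"'.format(name))  # author name string (placeholder)
    return "\n".join(lines)

print(publication_to_quickstatements(
    "An example title", "10.1000/example", ["A. Author", "B. Author"]))
```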
I would say stay focused. There is no graph database for zbMATH Open. I would be extremely happy if we could develop a tool that is capable of importing individual zbMATH Open entries (or a batch of entries) on demand, without creating duplicates (updating existing entries instead). If I interpret the ticket description correctly, that is what this ticket is about. I am not sure we need to import all of zbMATH at this point in time. I would like to create a different ticket for that and keep the focus here on building the first version of the zbMATH Open -> MaRDI portal ingestion pipeline. |
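To make the "no duplicates, update instead" behaviour concrete, the importer could query the portal's query service for an item that already carries the zbMATH identifier (or DOI) before creating a new one. A rough sketch, assuming a hypothetical SPARQL endpoint URL and a hypothetical property holding the zbMATH DE number:

```python
import requests

# Hypothetical duplicate check before import. Both the endpoint URL and the
# property ID are placeholders; substitute the MaRDI portal's real query-service
# URL and the property that stores the zbMATH DE number. The wdt: prefix is
# predeclared on Wikibase query services but resolves to installation-specific
# property URIs.
SPARQL_ENDPOINT = "https://query.example.org/sparql"  # placeholder
P_ZBMATH = "P9999"                                    # placeholder property ID

def find_existing_item(de_number):
    query = 'SELECT ?item WHERE {{ ?item wdt:{} "{}" }} LIMIT 1'.format(P_ZBMATH, de_number)
    resp = requests.get(
        SPARQL_ENDPOINT,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    bindings = resp.json()["results"]["bindings"]
    return bindings[0]["item"]["value"] if bindings else None

# If this returns an item URI, the importer should update that item;
# otherwise it can safely create a new one.
```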
Fixed now, cf. https://oai.zbmath.org/ |
thanks. |
The ListSets endpoint crashes with certain parameters, e.g. while
Since FORTRAN is probably much more popular than Gfan, is there some error related to the size of the result set? (also mailing OAI support) |
Don't go to the trouble of downloading 4.2+ million records from zbMATH - at least not yet. Two-thirds of all XML records are records whose title, authors and many other elements contain just the string "zbMATH Open Web Interface contents unavailable due to conflicting licenses". Example: take the item with "DE number" 3224368 and form the OAI-PMH URL for the "GetRecord" endpoint. Download this with your favorite web client to, say, 3224368.xml and inspect it - you'll see what I mean. This happens to 2 out of any 3 items that I try at random. Interestingly enough, requesting the BibTeX for the same item from https://zbmath.org/bibtex/03224368.bib will get you full information for exactly those fields where OAI-PMH reports "conflicting licenses" - go figure. I would understand if this appeared in XML elements that might contain copyrightable information, but I fail to see how titles or author names fall into any category where licensing restrictions might apply whatsoever... |
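For anyone reproducing this, a GetRecord request is formed from the three standard OAI-PMH parameters. The identifier scheme (`oai:zbmath.org:<DE number>`) and the metadata prefix below are assumptions on my part; the authoritative values come from the endpoint's ListIdentifiers and ListMetadataFormats responses.

```python
import requests

# Sketch of the GetRecord call discussed above. The identifier format and the
# metadataPrefix are assumptions; check ListMetadataFormats / ListIdentifiers
# on the endpoint for the exact values.
BASE_URL = "https://oai.zbmath.org/"
params = {
    "verb": "GetRecord",
    "identifier": "oai:zbmath.org:3224368",  # assumed identifier scheme for DE number 3224368
    "metadataPrefix": "oai_dc",              # standard Dublin Core; zbMATH may offer richer formats
}

resp = requests.get(BASE_URL, params=params, timeout=30)
with open("3224368.xml", "wb") as fh:
    fh.write(resp.content)
print(resp.text[:500])
```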
Indeed, title and authors cannot be exposed via the API due to license restrictions. The terms and conditions of zbMATH don't allow scraping the BibTeX information. Thanks to @rank-zero for the background information. I think, independent of the restricted fields, we should design the ingestion process so that it downloads the initial dataset once and then fetches updates at fixed intervals. Here the OAI-PMH protocol comes in handy. |
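Selective harvesting is built into OAI-PMH: ListRecords accepts from/until datestamps, and large result sets are paged via resumptionToken. A minimal harvesting loop might look like the sketch below (the base URL and metadata prefix are the same assumptions as above and should be checked against the endpoint):

```python
import xml.etree.ElementTree as ET
import requests

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "https://oai.zbmath.org/"  # assumed base URL, see above

def harvest(metadata_prefix="oai_dc", from_date=None):
    """Yield OAI-PMH <record> elements, following resumptionToken pages."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date  # e.g. "2023-01-01" for incremental updates
    while True:
        resp = requests.get(BASE_URL, params=params, timeout=60)
        root = ET.fromstring(resp.content)
        for record in root.iter(OAI_NS + "record"):
            yield record
        token = root.find(".//" + OAI_NS + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # Subsequent pages are requested with the token only.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Example: fetch everything changed since the last run.
# for rec in harvest(from_date="2023-01-01"):
#     process(rec)
```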
"The terms and conditions of zbMATH don't allow scaping the bibtex information." I thought zbMATH decided to become "open access" some time ago. Besides, if I present a title, does this give rise to a legal suspicion that I scraped a bibtex from somewhere? zbMATH does not have to reveal to anyone how it arrived at any given title or author name. Plus, the way you say it implies that it is zbMATH itself that imposes restrictions to...itself? - I don't understand all this. Anyway, to stay on topic: you plan to download 4.x million records? Even with a sleep interval of 1-2 seconds between downloads, which is short (I think), it may take a whole year, since the download itself will also consume some seconds: assuming ~6 seconds per record, you will get 10 records per minute, or 600 per hour, i.e ~12000/day - you need a year to get them all once. How often do you plan to hit the oai.zbmath.org server per second with requests? Are you OK with such a "long running" process? Just curious... |
I am not a lawyer and I agree with you that the situation is not intuitive, especially since one can use the DOI field to join data from Semantic Scholar or Crossref. However, one is still not allowed to redistribute the merged data. I have double-checked that zbMATH Open cannot expose titles and authors via APIs without breaking German law. The API is capable of providing the dataset in a few hours; I tested that last week, so one can get all the data quite quickly. The import into Wikibase is the long-running task. This issue is about developing the software to import records from zbMATH Open; importing everything is subject to another discussion. It is not entirely clear to me whether importing everything is a good idea or whether a lazy approach is preferable. |
O.K., I don't want to get into lengthy debates about this here, especially since you are not zbMATH. :-) But I do have some remarks and I urge you to consider them seriously in your project:
The TODO list above should be updated with a new, high priority item: Fight in the courts for the right to use titles/authors, no matter how we got them! In a sane society, this right would be self-evident - but we don't live in one, so this is the way to go. |
Thank you for your opinion. As said, I am not a lawyer, and therefore this is out of scope. I think your ambition to improve the legal situation is noble; however, this is not our area of expertise. There are other initiatives with the required legal expertise to go into these issues. In this project we need to respect the rules and regulations. |
This has now been in the making for over a year. @LizzAlice, can you estimate how long it will take to complete this task? |
I would think that this would take 1-2 months. However, if I am to do it, I would like to postpone it until after my link prediction work is in beta stage. |
@LizzAlice I feel we have different tickets for the same task. Can you close duplicates? |
Issue description:
Additional data from the zbMATH Open API should be imported into the MaRDI Portal.
Related: #6
Remarks:
TODO:
see also: Mass delete (Nuke) broken portal-compose#82
Acceptance criteria:
Checklist for this issue:
Using Crossref data: "No sign-up is required to use the REST API, and the data can be treated as facts from members. The data is not subject to copyright, and you may use it for any purpose.
Crossref generally provides metadata without restriction; however, some abstracts contained in the metadata may be subject to copyright by publishers or authors." (https://www.crossref.org/documentation/retrieve-metadata/rest-api/)
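As a concrete illustration of the Crossref route, metadata for a single DOI can be fetched from the public REST API; Crossref recommends including a contact address in the User-Agent to be routed to its "polite" pool. A minimal sketch (the contact address and the DOI are placeholders):

```python
import requests

# Minimal sketch of a Crossref REST API lookup by DOI. The mailto address is a
# placeholder; Crossref recommends supplying one for the polite pool.
def crossref_metadata(doi):
    url = "https://api.crossref.org/works/" + doi
    headers = {"User-Agent": "mardi-importer/0.1 (mailto:contact@example.org)"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]

# Example usage (replace with a real DOI taken from a zbMATH record):
# meta = crossref_metadata("10.1234/example")
# print(meta.get("title"), meta.get("DOI"))
```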