-
Same problem here. When I change page_n to any value below 500, the API starts responding but then stops. Console print below:
Does anyone know what is happening?
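For context, a minimal sketch of the kind of call described above, assuming academictwitteR's get_all_tweets() and its page_n argument (the query, dates, and n below are placeholders, not values from the report):

```r
library(academictwitteR)

# Assumes a bearer token has already been configured for the package
# (e.g. picked up via get_bearer()).
# page_n sets how many tweets are requested per page; the package default
# is 500, which is also the API maximum for full-archive search.
tweets <- get_all_tweets(
  "#example lang:en -is:retweet",  # placeholder query
  "2021-01-01T00:00:00Z",          # placeholder start
  "2021-01-31T00:00:00Z",          # placeholder end
  n = 10000,
  page_n = 100  # a page size below 500, as in the report above
)
```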
-
I tried to scrape Twitter data with the Academic Twitter API. The code works in almost every case, but there are a few cases where it doesn't.
I used the code tweets <- get_all_tweets("@honda lang: en -is:retweet", "2018-08-06T00:00:00Z", "2018-08-26T00:00:00Z", n = Inf)
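One detail worth flagging in that query: Twitter's v2 search operators take no space after the colon, so the documented form of the language filter is lang:en rather than lang: en. A sketch of the same call with the operator written that way (everything else unchanged):

```r
library(academictwitteR)

# Same call as above, but with the language operator in its documented
# "lang:en" form. With a space, "lang:" and "en" may be matched as plain
# keywords instead of acting as a language filter.
tweets <- get_all_tweets(
  "@honda lang:en -is:retweet",
  "2018-08-06T00:00:00Z",
  "2018-08-26T00:00:00Z",
  n = Inf
)
```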
After scraping 4 pages of tweets, the following error occurred:
“Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, : Too many errors. In addition: Warning messages: 1: Recommended to specify a data path in order to mitigate data loss when ingesting large amounts of data. 2: Tweets will not be stored as JSONs or as a .rds file and will only be available in local memory if assigned to an object.”
I actually don’t get what the problem is, because the code works even for cases with more than 35,000 tweets. Therefore, I don’t think the number of tweets is the reason.
Can somebody help me?
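The two warning messages quoted in the error point at one mitigation: giving get_all_tweets() a data_path so each page of results is written to disk as it arrives, which means a failure like this one no longer discards the tweets already fetched. A hedged sketch of that pattern, using the package's data_path and bind_tweets arguments and its bind_tweets() helper (the directory name is a placeholder):

```r
library(academictwitteR)

# Write each page of results to disk as JSON while paginating, so tweets
# collected before any failure are preserved; bind_tweets = FALSE defers
# assembling them into a data frame.
get_all_tweets(
  "@honda lang:en -is:retweet",
  "2018-08-06T00:00:00Z",
  "2018-08-26T00:00:00Z",
  n = Inf,
  data_path = "honda_tweets/",  # placeholder directory
  bind_tweets = FALSE
)

# Afterwards (even after an interruption), rebuild a data frame from the
# stored JSON files.
tweets <- bind_tweets(data_path = "honda_tweets/")
```

This does not by itself explain the "Too many errors" failure, but it keeps a partial collection intact while the cause is investigated.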