Error: MeiliSearch::TimeoutError: The request was not processed in the expected time. #355
Comments
This should not be difficult to implement, but I can predict a data race occurring: say I have document "constitution" with field "first_line". An update to "first_line" times out and is scheduled for retry; a newer update succeeds in the meantime, and then the retried (now stale) update overwrites it.

For reference, a retry can already be configured in an initializer without changes to the gem:

```ruby
# config/initializers/meilisearch.rb
MeiliSearch::Rails::MSJob.retry_on MeiliSearch::TimeoutError, wait: 30.seconds, attempts: :unlimited
```
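One way around that race (a sketch of the idea, not something the gem does today; the job and model names here are hypothetical) is to re-read the record on every attempt, so a retry always sends the current attributes instead of the ones captured when the job was first enqueued:

```ruby
# Hypothetical job: re-fetches the record per attempt, so a stale retry
# cannot overwrite data written by a newer, already-successful update.
class ReindexRecordJob < ApplicationJob
  retry_on MeiliSearch::TimeoutError, wait: 30.seconds, attempts: 10

  def perform(model_name, id)
    record = model_name.constantize.find_by(id: id)
    return if record.nil? # record was deleted in the meantime

    record.index! # meilisearch-rails indexes the record's current state
  end
end
```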
That's a very valid issue. Is there any guidance on what to do if Meilisearch Cloud is not available or is returning timeouts? It would be really helpful to have a built-in retry mechanism that discards updates that are out of date (as in your example). Otherwise it's basically retry and pray.
I did discuss it with Bruno in the past and he agreed that it would be a good feature to be able to opt into: #187 (comment). However, I prioritized refactoring the codebase before adding new features and unfortunately haven't had time to finish it.
I keep running into timeout issues constantly. This time it happened when trying to reindex a model with 7 records. Is there anything I can do about this? This is really breaking for me.

It worked when running it a second time.
Hey @drale2k, can you run this command:
Also, can you try setting a different value on the timeout:

```ruby
MeiliSearch::Rails.configuration = {
  meilisearch_url: '...',
  meilisearch_api_key: '...',
  timeout: 5,     # default is: 1
  max_retries: 3  # default is: 0
}
```
I am on a MacBook Pro M1. I used the recommended setting you posted above for 2 days now, and just now as I was writing this comment, I executed a couple of reindexes on my model which has about 7 records. The first 4 ran fine, the 5th timed out again:

```
Episode.reindex!
(irb):10:in `<main>': The request was not processed in the expected time. Net::ReadTimeout with #<TCPSocket:(closed)> (MeiliSearch::TimeoutError)
/Users/drale2k/.rbenv/versions/3.3.1/lib/ruby/3.3.0/net/protocol.rb:229:in `rbuf_fill': Net::ReadTimeout with #<TCPSocket:(closed)> (Net::ReadTimeout)
```

Trying to replicate, I ran 20 reindexes twice. The last request (the 40th) timed out again:

```
irb(main):062* 20.times do
irb(main):063*   Episode.reindex!
irb(main):064> end
```
Thanks for the ulimit info, and I think I have an idea why I can't reproduce your case:

Each network connection uses a file descriptor (I don't know if that's a Ruby Net thing). But when you make multiple requests simultaneously or in quick succession (like in a multi-threaded scenario), you may exhaust the available file descriptors allowed for your process. Also, I can see you have double the user processes I have, so it could mean that you are probably using this space with other processes that are consuming your file descriptors.

So, can you try updating this limit?
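If it helps while testing, the current limits can also be inspected and raised from inside the Ruby process itself (a sketch; the 4096 value is only an example, and the OS hard cap still governs):

```ruby
# Inspect this process's open-file (file descriptor) limits from Ruby.
soft, hard = Process.getrlimit(:NOFILE)
puts "fd limits: soft=#{soft}, hard=#{hard}"

# Raise the soft limit toward the hard cap (4096 is illustrative).
Process.setrlimit(:NOFILE, [4096, hard].min, hard)
```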
Hi @drale2k, did you have the time to make those changes?
Hi, yes I changed it since then, but had to move on to other features and did not test in high enough volume. I am about to deploy to production next week and will report on how it went there. Sorry for the late reply.
No problem, @drale2k. Let me know when you have any news! Good luck! 🙏
I have been running into the issue again today in production on a Debian server (Frankfurt, 2 cores, 4 GB RAM) running my Rails app. I used the above settings.

I am convinced this is an issue with the meilisearch Ruby library. I noticed that timeout issues have been opened by people before, like #120 and #260. I don't know if they are related to my issue, but #120 in particular seems to be basically the same. I will try the on-prem installation and have it be in the same region as my server to decrease latency. I hope that solves it.
I've been having this issue for a while myself, just as @drale2k described. Even with increased timeout and retries:

```ruby
MeiliSearch::Rails.configuration = {
  meilisearch_url: ENV.fetch('MEILISEARCH_URL', nil),
  meilisearch_api_key: ENV.fetch('MEILISEARCH_API_KEY', nil),
  timeout: 5,
  max_retries: 2,
  active: !Rails.env.test?
}
```
@doutatsu Are you on Cloud or self-hosted? Have you found a way to deal with it? I am struggling with this, as I would really like to stay with Meilisearch, but this is becoming a dealbreaker. I don't think there is a solution right now, and it seems to be an issue only with the Ruby gem, as I cannot see e.g. Laravel people complaining :/
@brunoocasali I could try porting the ms-ruby gem to http.rb (which doesn't use Net::HTTP).
Appreciate the effort guys, I am happy to test once you have a PoC. Not that I have a better idea, but is there a reason to believe it is due to Net::HTTP? Being the de facto standard library for Ruby, wouldn't this cause issues all over Ruby / Rails projects?
This is NOT meant to be the final implementation of the switch; this is only intended as a quick test to see if httprb can fix meilisearch/meilisearch-rails#355 (comment).
Apologies for the delay on this. I made a branch with some dirty edits that use httprb instead of httparty. You should be able to use it by specifying this in your Gemfile:

```ruby
gem 'meilisearch-rails', git: 'https://github.com/ellnix/meilisearch-rails.git', branch: 'use-httprb-ms-ruby'
```

Of course, all of the changes are on GitHub for you to review, but there aren't many. As far as I know there are no regressions in this new httprb version; all tests pass.

P.S.: If this fixes the error, I'll refactor the branch and PR it to both repos as soon as possible.

P.P.S.: httprb currently does not support retries, so the max_retries option is not used.
@ellnix Out of curiosity - is there any reason to use httprb directly instead of using something like Faraday? As it provides adapters for different middleware, you could test both httprb as well as other alternatives like httpx or Typhoeus. Plus other goodies, like retry logic, so you can make the connection to Meilisearch more resilient.
I also have no idea why this is happening, to be honest. Our code making use of Net::HTTP is very basic; I can't see any errors anywhere in it. Since this is a networking problem, it's very hard to know exactly what is failing and why. My idea was that by switching gems to httprb there might be large enough differences to coincidentally solve this issue.

That might be the best thing to do going forward; I was just not aware of Faraday. I'll reach for it if this switch works and I get around to refactoring, or if I need to switch to another backend. For now I think using httprb directly is good to minimize the number of variables.
Yup - Faraday is extensible, so you should have everything you need, from retries with backoff to serialisation goodies. Leaving this here so you can take a look when you get around to refactoring: https://github.com/lostisland/awesome-faraday
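For illustration, a minimal sketch of what a Faraday connection with retry middleware could look like (assumes the faraday and faraday-retry gems; the env var, values, and endpoint choice are placeholders, not how the gem is wired today):

```ruby
require 'faraday'
require 'faraday/retry' # provided by the faraday-retry gem

conn = Faraday.new(url: ENV.fetch('MEILISEARCH_URL')) do |f|
  # Retry timeouts and connection failures with exponential backoff.
  f.request :retry, max: 3, interval: 0.5, backoff_factor: 2,
                    exceptions: [Faraday::TimeoutError, Faraday::ConnectionFailed]
  f.response :json # parse JSON response bodies
end

conn.get('/health') # Meilisearch's health check endpoint
```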
Implemented a (bad) solution for retries, live on that ms-ruby branch. It shouldn't cause any issues except maybe performance ones, because I use
Trying something is already a good step. Let's see if something else breaks, which might help point us towards the actual issue, if it's not Net::HTTP. If it does not already, it would be good to log when it has to retry, so we have some idea whether retries are still happening and can try to hunt them down.
It didn't before, but it does now (make sure to remove the lock file and bundle again). You should be able to filter your logs with:
The log level is
I just ran a full reindex of my model 50 times - it has approx. 130 db entries, so about 6500 reindexes in total - with the new branch, and had no issues so far. However, I ran the same with the release branch again and also did not run into any timeout issues yet. 😐 I will keep trying and report any news.
Actually, I think I am not testing the new branch but the current release one. I updated my Gemfile:

but I still get the httparty error in my console, because the current release version of the meilisearch gem locks it at 0.21.0:

Looking at the new branch of the meilisearch gem, httparty is no longer a dependency and thus this error should not be thrown. When I

However, if I

Any ideas how to force it to use the new branch of the meilisearch gem?
It might be an issue of the gemspec dependency taking precedence over the Gemfile. Maybe you can specify both branches:

```ruby
gem 'meilisearch', git: 'https://github.com/ellnix/meilisearch-ruby.git', branch: 'switch-to-http-rb'
gem 'meilisearch-rails', git: 'https://github.com/ellnix/meilisearch-rails.git', branch: 'use-httprb-ms-ruby'
```
That worked fine, thanks. I re-ran my test, reindexing a model with 1213 records 50 times (about 60k reindexes), and benchmarked the duration for both the release version and the test branch. Not a single timeout issue in either test.

I ran this twice and the duration was approximately the same both times. It seems the http gem is about 18% slower than Net::HTTP for my test.
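(A sketch of how such a timing comparison might be done with Ruby's Benchmark module; the model name and iteration count mirror the test described above and are assumptions, not the exact script used:)

```ruby
require 'benchmark'

# Time 50 full reindexes of the model; swap in your own model class.
elapsed = Benchmark.realtime do
  50.times { Segment.reindex! }
end
puts format('50 reindexes took %.1f seconds', elapsed)
```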
I remain extremely confused 😆
Can't say for sure but it could easily be my implementation.
Thanks a lot for providing the PoCs, @ellnix, you rock! And thanks, @drale2k, for helping us hunt this issue down! But I don't know. Weirdly, only a few people have complained about this, and it is not happening to everybody. So I always think that it could be something on the host 🤔 @doutatsu, let us know if you have time to try it!
@drale2k, how big are your documents? And the issue is happening mostly at indexing time and not at search time, right?
@ellnix I thought we could try something like this: #120 (comment). If @drale2k confirms the documents are "too big", we could think about trying to reduce the average amount of data the Ruby process has to handle 🤔.
Seems size-independent, as it happens on a model which has very little data per record, just metadata basically. On top of that, I run into the same issue when trying to delete records from the index by running:

Sometimes deleting records from the index like that is very slow, not deleting anything for minutes before then continuing to delete just a couple of records. I monitor that by refreshing the Meilisearch dashboard. Not sure if that issue is related.
I just ran the remove_from_index loop with the new branch, using

The command was running for like 10 minutes, only deleted about 10 records in that time, and then threw a TimeoutError:

```
irb(main):001> Segment.all.each {|r| r.remove_from_index!}
  Segment Load (29.2ms)  SELECT "segments".* FROM "segments"
(irb):1:in `block in <main>': Timed out after using the allocated 1 seconds (HTTP::TimeoutError)
  from (irb):1:in `<main>'
/Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/timeout/global.rb:36:in `connect_nonblock': read would block (OpenSSL::SSL::SSLErrorWaitReadable)
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/timeout/global.rb:36:in `connect_ssl'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/timeout/null.rb:39:in `start_tls'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/connection.rb:169:in `start_tls'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/connection.rb:45:in `initialize'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/client.rb:70:in `new'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/client.rb:70:in `perform'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/client.rb:31:in `request'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/gems/http-5.2.0/lib/http/chainable.rb:20:in `get'
  from /Users/drale2k/.rbenv/versions/3.3.2/lib/ruby/gems/3.3.0/bundler/gems/meilisearch-ruby-8142e09dfd8e/lib/meilisearch/http_request.rb:13:in `public_send'
```

My file descriptor limit is at 2560 currently.
Wonder if this was partly to blame - meilisearch/meilisearch#4654 - will upgrade and see if it reduces the number of timeouts or other non-200 status codes.
Hello @doutatsu. Is there anyone else having the timeout error while being on the Cloud offer?
Did you have the issue before v1.8.0?
@curquiza Yup, I had this issue for a looong time. Although I don't remember it being as frequent - but that might also be due to my app becoming more popular and me using Meilisearch more heavily; hard to say. I haven't tried that CI flag - I'll try it out.
Can confirm @curquiza - the flag did nothing.
@doutatsu Maybe you are using the Meilisearch API client? I had a similar problem and solved it by adding a timeout when initializing the client.
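(For context, a sketch of what that could look like with the meilisearch-ruby client, whose constructor accepts timeout and max_retries options; the URL, key, and values are placeholders:)

```ruby
require 'meilisearch'

# URL and API key are placeholders for your own instance.
client = MeiliSearch::Client.new(
  'https://your-instance.meilisearch.io',
  'your-api-key',
  timeout: 10,    # seconds before raising MeiliSearch::TimeoutError
  max_retries: 2  # retry attempts on timeout
)
client.health # quick connectivity check
```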
@JoseValencia125 No, I am using this gem - I already have timeouts set in the configuration file, as shared above...
Description
Meilisearch Cloud is throwing random timeout errors, causing my records not to be indexed and the failing jobs to be discarded:

`Error: MeiliSearch::TimeoutError: The request was not processed in the expected time. Net::ReadTimeout with #<TCPSocket:(closed)>`

How is this case supposed to be handled with a Meilisearch Cloud installation? Adding a retry to the job would definitely help.

Ideally, wait time and attempts would be configurable in an initializer. Is there a way to do this without a change to the gem?
Expected behavior
Timed-out jobs should be automatically retried. Or is there any other solution for handling this?
Current behavior
Stacktrace