Logrotation on output log file causes empty log files #106
Was this issue ever resolved? Has it persisted on more recent versions of `forever`?
@giggsey, does this still happen or is it resolved?
Haven't updated and tested for a while.
Some daemon services like nginx respond to a signal like SIGUSR1 by reopening their log file, so you can use `postrotate` in the logrotate config file to handle log rotation. Maybe forever could have a similar implementation.
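For illustration, the nginx-style setup described above would pair a signal-aware daemon with a `postrotate` hook like this (a sketch with illustrative paths; it assumes a hypothetical forever feature that reopens its log on `SIGUSR1`, which does not exist today):

```
/var/log/myapp/app.log {
    daily
    rotate 7
    compress
    postrotate
        # hypothetical: ask the daemon to reopen its log file,
        # the way nginx does on SIGUSR1
        kill -USR1 "$(cat /var/run/myapp.pid)"
    endscript
}
```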
I'm using forever with node v0.8.8 and I still cannot rotate the forever logs without completely restarting the forever process. My vote is to include a log rotation feature within forever.
I would have expected to be able to do this to manually rotate logs (process 0 logs to ~/.forever/foo.log):
Instead forever kept logging to foo.log.0, I suppose because it doesn't close and then reopen its log files. It's not optimal, but
I just opened a pull request that makes it possible to use logrotate with forever. Feel free to leave any feedback :)
You should add `copytruncate` to your logrotate config file; that does the job.
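A minimal logrotate entry using `copytruncate` might look like the following (paths illustrative). `copytruncate` copies the live log and then truncates the original in place, so the writer's file descriptor stays valid; the trade-off is that lines written between the copy and the truncate can be lost:

```
/home/app/.forever/app.log {
    daily
    rotate 14
    compress
    copytruncate
    missingok
    notifempty
}
```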
I have copytruncate and I'm still seeing this issue with [email protected], [email protected], and this in my logrotate.d directory: /home/ec2-user/.forever/cloud.log { … }. All the forever logging moved to a file called cloud.log-20130704 (the first day it rotated), and each day a new empty file is created. cloud.log (the original) is zero bytes. And of course a forever restart doesn't fix it; logging continues to the same file. Any fixes, or am I switching to another logging / daemon framework?
I'm still having this problem.
This issue is affecting me as well. I'll submit a patch if somebody can point me to the right place in the code to fix it.
+1!
Is there any workaround?
+1!
The feature is still needed and still useful. It's hard to work around: logrotate's `copytruncate` is not perfect, replacing logrotate with something else is not viable for standard deployments, and restarting the server is not wanted in many cases. The foreversd/forever-monitor/pull/16 PR is outdated, and its reopening of the fd was not atomic. With the current master it seems easier to do, since we don't touch the child process's fds directly; we just read from them and pipe the data to a (file) stream.
+1!
+1!
+1
+1
+1
+1 Has anyone tested this: http://qzaidi.github.io/2013/05/14/node-in-production/ ?
We plan to move to recluster, which allows graceful restart & probably has a fix for this problem. |
The reload() in the link from @jfroffice is probably invalid, because it ignores the fd returned by fs.openSync. However, using the sync versions of close and open is one way to correctly implement an atomic reload. (Another, non-blocking way might be to async-open, then async-close, and in the close callback swap the newly opened fd into place. I'm not sure it works.) Also, writing to
What are people doing here in production? Are they just turning off the logs? We have winston logs with log rotation, but we can't figure out what to do with forever.
Why is this not a high-priority problem? We get heavy logs every day. Log rotation is really important to us.
If it's so important, provide a pull request or use alternatives like monit :-)
+1
+1
+1
+1
@indexzero @mmalecki could you please comment on this topic? I think it's one of the most valuable problems to solve.
Is there a script I can run to debug this quickly?
You mean like:
Or did I misunderstand the question?
So, with logrotate's `create` option, forever continues writing to the rotated log file instead of the main one. Does anyone have a solution?
Could a maintainer clarify whether this is understood to be a problem? The forever command-line utility seems to be useless to us in production, because we can't safely rotate its logs, and so we have to resort to programmatic use of forever-monitor so that we can configure the logging. But this issue has been open so long that perhaps I've misunderstood something that makes it a non-problem?
I've been forced to move away from forever in production due to this.
@sheldonh @lklepner it's a problem. No one has provided a usable script to reproduce this end-to-end, so we can't debug it. If you could do that, then we'd be happy to fix it. By end-to-end, this example should assume
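As a starting point for such a reproduction, here is a sketch of the underlying failure mode, with a plain shell writer standing in for forever (swapping in `forever start -l /tmp/tick.log -a` for the redirection would exercise forever itself; paths and timings are illustrative). The writer keeps its fd on the renamed file, so the freshly created log stays empty:

```shell
# Background writer that holds one fd open on /tmp/tick.log.
( exec > /tmp/tick.log
  i=0
  while [ "$i" -lt 50 ]; do
    echo "tick $i"
    i=$((i + 1))
    sleep 0.05
  done ) &
writer=$!

sleep 0.3
mv /tmp/tick.log /tmp/tick.log.1   # what logrotate does by default
: > /tmp/tick.log                  # logrotate's `create` step
sleep 0.3
kill "$writer" 2>/dev/null

echo "new file:     $(wc -c < /tmp/tick.log) bytes (stays empty)"
echo "rotated file: $(wc -c < /tmp/tick.log.1) bytes (keeps growing)"
```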
I have created a small test setup here: https://github.com/lasar/forever-106. I don't have any log rotation setup handy, but the problem exists even when just using
Having done server maintenance for several years, I can say this is a common issue that is not specific to `forever`. While you appear to be logging to a file, you are really logging to a file descriptor. After log rotation by an external application, the application continues to log to that file descriptor, but it is no longer connected with the file, which has been re-created through log rotation. While the log may be empty, your disk usage may well be continuing to increase.

### Possible solutions to log rotation complications

#### logrotate and copytruncate

Above there was a recommendation to use `copytruncate`.

#### Restart the app
#### Build log rotation into forever

You could submit a pull request which adds log rotation into `forever` itself.

#### Log directly from your app over the network to syslog or a 3rd-party service

This avoids the direct use of log files, but most of the options I've looked at for this in Node.js share the same design flaw: they don't (or didn't recently) handle the "sad path" of the remote logging server being unavailable. If they coped with it at all, the solution was to put buffered records into an in-memory queue of unlimited size. Given enough logging or a long enough outage, memory would eventually fill up and things would crash. Limiting the buffer queue size would address that issue, but it illustrates a point: designing robust network services is hard. You are likely busy building and maintaining your main application. Do you want to also be responsible for the memory, latency, and CPU concerns of a network logging client embedded in your application? For reference, here are the related bug reports I've opened about this.
If you are using a module that logs over the network directly, you might wish to check how it handles the possibility that the network or logging service is down.

#### Log to STDOUT and STDERR, use syslog

If your application simply logs to STDOUT and STDERR instead of a log file, then you've eliminated the problematic direct use of log files and created a foundation that lets something which specializes in logging handle the logs. I recommend reading the post "Logs are Streams, Not Files", which makes a good case for why you should log to STDOUT and shows how you can pipe your logs to syslog. Logging to STDOUT and STDERR is also considered a best practice in the App Container Spec. I expect to see more of this logging pattern as containerization catches on. There are also good arguments out there for logging as JSON, but I won't detour into that now.

#### Log to STDOUT, use systemd
systemd will be standard in future Ubuntu releases and is already standard in Fedora. CoreOS uses systemd inside its containers to handle process supervision and logging, but also because it starts in under a second.

#### How to log to STDOUT effectively with forever?

About now, you may be looking at the `forever` options and wondering how to do that. What you might hope works:
Besides that
You can use the same approach with the STDERR stream. You are not limited to using this syntax to pipe your logs to `logger`.
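A concrete sketch of the process-substitution idea (requires bash; `logger`, the tag `myapp`, and the file paths are illustrative). Here a shell function stands in for any tool that, like forever's `-o`/`-e` flags, expects a log file path:

```shell
#!/bin/bash
# Process substitution gives the program a /dev/fd/N path that is
# really a pipe to another process, so "write to this log file"
# becomes "stream to this log consumer".
write_to() {   # stand-in for a tool that takes a log *path* argument
  echo "hello from app" > "$1"
}

# Uppercasing stands in for a consumer such as `logger -t myapp`.
write_to >(tr '[:lower:]' '[:upper:]' > /tmp/psub-demo.txt)

sleep 1        # give the substituted process time to flush
cat /tmp/psub-demo.txt
```

With forever itself, the shape would be something like `forever start -o >(logger -t myapp) app.js`; whether the /dev/fd path survives forever's daemonization is worth verifying before relying on it.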
+1 for "log to stdout and use systemd" |
@markstos that is an awesome number of suggestions. |
Thanks for the feedback, @indexzero |
Noticed that there was a new `/var/log/18f-pages-server/pages.log` file, but that the logs were still going to an uncompressed `/var/log/18f-pages-server/pages.log.1`. The old `postrotate` script wasn't actually successfully restarting the server, and a manual restart also didn't allow the new process to write to the new log file. Found out about the `copytruncate` directive, which will work well enough for 18F Pages, though it's not completely ideal. For more information, this issue has a very helpful comment with a ton of technical background: foreversd/forever#106 (comment)
@markstos that is a brilliant suggestion: using process substitution. Thanks!
My logrotate.d script:
forever runs in daemon mode.
After logrotate runs overnight, it copies the log off to the rotated file fine, but forever then doesn't continue writing to the main log file.
It's most likely something up with my logrotate.d script, but any ideas?