
Recreate the current rolling file immediately when it is deleted

Open asset-2 opened this issue 4 years ago • 15 comments

This is not a new issue; it partially overlaps with #36, #87, and #96.

Our service uses an hourly rolling interval. If the current log file is deleted mid-interval, the logger waits until the chosen interval ends and only then creates a new file to log to; all events and messages emitted during the remaining time are lost. The behavior I expect is that the file is recreated immediately and written to as soon as there is something to log.

Do you guys plan to implement this thing? What possible workarounds do you suggest?

asset-2 avatar Feb 04 '20 09:02 asset-2

Do you mean you are manually deleting the log file during the period that Serilog would expect to be able to write to it?

cocowalla avatar Feb 04 '20 13:02 cocowalla

Yes, it's possible when you run your application in a Docker container, for example.

asset-2 avatar Feb 04 '20 14:02 asset-2

Hmm, I'm not getting how running in a container deletes log files during writing - I'd appreciate it if you could describe a scenario where this happens?

cocowalla avatar Feb 04 '20 14:02 cocowalla

No, 'running in a container' does not delete log files during writing. The application runs in a Docker container and logs to a file. It's possible to access that log file and delete it manually. I was expecting the file to be recreated immediately, as I wrote before - but no, it only creates a new one when the rolling interval is over. In this scenario we lose a bunch of messages.

asset-2 avatar Feb 04 '20 14:02 asset-2

It's possible to access that log file and delete it manually

OK, but why would you do that? 😄

I'm trying to figure out if this is a realistic, common thing to happen, or an edge case where you have to actively be trying to cause a problem for it to be an issue.

Also, as an aside, if you configure the sink using `shared: false`, it will hold an exclusive lock on the log file during writing, preventing you from deleting it.
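(For reference, a minimal configuration along those lines might look like the sketch below; the path and rolling interval are illustrative, not from this thread.)

```csharp
using Serilog;

// Illustrative setup: with shared: false (the sink's default), the active
// file is opened with an exclusive lock, so on Windows it cannot be deleted
// while the logger holds it open.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app-.log",
        rollingInterval: RollingInterval.Hour,
        shared: false)
    .CreateLogger();
```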

cocowalla avatar Feb 04 '20 16:02 cocowalla

Also, as an aside, if you configure the sink using `shared: false`, it will hold an exclusive lock on the log file during writing, preventing you from deleting it.

I will give it a try. But something tells me that this approach does not work.

OK, but why would you do that? 😄

Someone accidentally removes a log file on production... The data is lost, and at that point we have to embark on an investigation. As you can see, it is a pretty realistic scenario. From my perspective, it is pretty evident that the logger should proceed without interruption: the client loses its upcoming data for the remainder of the rolling period when a human or a system deletes the log file, unintentionally or not. Well, I really do not know whether that fits well with the strategy that you guys keep.

asset-2 avatar Feb 04 '20 18:02 asset-2

But something tells me that this approach does not work

I don't know about the intricacies of the Linux file system, but it absolutely works on Windows.

Well, I really do not know whether that fits well with the strategy that you guys keep

Just a little piece of unsolicited advice: being rude to people will not get you what you want from them, especially if they are maintaining OSS in their spare time, for free.

If I'm being honest, I'm not really seeing how this could be a realistic or common problem for a logging framework to deal with. That said, it would be nice if we didn't lose messages in the rare event that somebody explicitly deletes a file that wasn't locked.

The most obvious ways to do this (e.g. checking if the file exists before every write) are likely to hurt performance badly, so we need to be careful about that.
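(For illustration, the naive guard would look something like this sketch - it is not the sink's code, and the per-event File.Exists call is exactly the extra filesystem stat that would hurt throughput.)

```csharp
using System.IO;

// Naive sketch, NOT the sink's implementation: check the path before every write.
class NaiveFileWriter
{
    private readonly string _path;
    private StreamWriter _writer;

    public NaiveFileWriter(string path)
    {
        _path = path;
        _writer = new StreamWriter(path, append: true);
    }

    public void Emit(string message)
    {
        // One extra stat syscall per log event - cheap once, costly at volume.
        if (!File.Exists(_path))
        {
            _writer.Dispose();
            _writer = new StreamWriter(_path, append: true);
        }
        _writer.WriteLine(message);
        _writer.Flush();
    }
}
```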

Do you have any suggestions for an approach?

cocowalla avatar Feb 04 '20 19:02 cocowalla

Hi all! @asset-2 thanks for raising this as a separate issue; I see it's related to #96 but not covered by it, so I guess we'll track these two things separately. Thanks for digging in @cocowalla!

Well, I really do not know whether that fits well with the strategy that you guys keep

Just a little piece of unsolicited advice: being rude to people will not get you what you want from them, especially if they are maintaining OSS in their spare time, for free.

I think you might be misunderstanding each other; I can see both interpretations of that statement, and I think "From my perspective" is intended to put the later statement in context - does this fit with the goals of the project? (I'm not sure what the answer is, at this point :-))

#96 won't be an issue on Unix, as file "locks" are only advisory on that platform, unlike Windows, where a lock is enforced. Because of that, this ticket is something of an inverse of that problem.

I think we could attempt to implement this, but because the file will still "exist" from the app's perspective on Unix (it will just be unlinked from its name in the directory hierarchy until the process closes the file handle), we can't do it in response to an exception; instead, this would necessitate some kind of monitor process, AFAIK (need to grab my copy of Kerrisk's book to confirm!).
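(A rough sketch of that monitor idea, assuming a reopen callback is available - not a committed design. On Unix, File.Exists on the original path returns false once the name is unlinked, even though the open handle keeps writing into the now-anonymous inode, so polling the path is one way to notice the deletion.)

```csharp
using System;
using System.IO;
using System.Threading;

// Hypothetical monitor: poll the log path and trigger a reopen when the
// name disappears. No write exception occurs on Unix, so polling (or a
// filesystem watcher) is needed to detect the unlink at all.
class DeletionMonitor : IDisposable
{
    private readonly Timer _timer;

    public DeletionMonitor(string path, Action reopen)
    {
        _timer = new Timer(_ =>
        {
            if (!File.Exists(path)) reopen();
        }, null, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    public void Dispose() => _timer.Dispose();
}
```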

I'm not sure it will be worth that amount of effort, but I'm keeping an open mind if someone is interested in exploring it. To answer your original questions, @asset-2 👍

Do you guys plan to implement this thing?

Not at this point, sorry.

What possible workarounds do you suggest?

At this point it's probably an operational problem to solve (avoid accidentally deleting log files by using automated runbooks/scripts to manage production servers).

nblumhardt avatar Feb 04 '20 22:02 nblumhardt

@cocowalla @nblumhardt Thank you guys for considering the questions.

asset-2 avatar Feb 05 '20 12:02 asset-2

I need this feature too, and my use case is that I move logs to another storage location on a regular schedule for a bank application: we roll logs daily, but an audit happens every 2 hours, and we have to move the processed logs out without restarting the application. For perspective, log4net and NLog do this, but we chose Serilog because it just works - except for recreating deleted logs that it holds a handle to. I'm wondering if adding a conditional to check whether the file exists would fix it.

qiqo avatar Apr 27 '20 04:04 qiqo

Hi, just wanted to chime in and say that I'm getting around this issue by calling `Serilog.Log.CloseAndFlush()`, deleting the file with `System.IO.File.Delete(file)` (after a `System.IO.File.Exists(file)` check), then recreating it if I need to by calling my `Log.Logger = new LoggerConfiguration()...` code again. We upload log files with user-submitted error reports and then immediately delete the file. We usually do this as the application is closing, but recreating seems to work just fine in my testing.
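(In code, that workaround looks roughly like the sketch below; the path and the minimal WriteTo.File configuration are illustrative.)

```csharp
using Serilog;
using System.IO;

var file = "logs/app.log";

// Release the sink's handle so the file can be deleted safely.
Log.CloseAndFlush();

if (File.Exists(file))
{
    File.Delete(file);
}

// Recreate the logger if logging should continue afterwards.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File(file)
    .CreateLogger();
```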

rspears74 avatar Sep 14 '20 22:09 rspears74

Hi, is there a way to implement this? I think it's a realistic problem. Think about a high-availability application that cannot be restarted during operating hours; now the log disk is full, and the infrastructure team clears all the files to free up space.

esskar avatar Oct 13 '20 20:10 esskar

@cocowalla

Do you have any suggestions for an approach?

Is there a possibility of adopting a buffering-style approach, where entries enter a queue and are flushed after some limit is reached? It would reduce the number of file-existence checks required by a significant factor, and could possibly leverage the existing buffering options. Another approach may lie in leveraging filesystem hooks to determine whether the file was deleted, and temporarily caching new entries until another file can be created and written to.

impr0t avatar Dec 10 '20 14:12 impr0t

Is there a possibility of adopting a buffering-style approach, where entries enter a queue and are flushed after some limit is reached?

@impr0t the Serilog File Sink already supports buffered output. The default is not to buffer output, because you risk losing logs in the event of a crash - I think this is a pretty sane default.
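(For context, a minimal sketch of opting into buffering via the buffered parameter; the path is illustrative.)

```csharp
using Serilog;

// buffered: true trades crash-safety for fewer writes: events can sit in an
// in-memory buffer and are lost if the process dies before they are flushed.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app.log", buffered: true)
    .CreateLogger();
```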

Another approach may lie in leveraging filesystem hooks to determine whether the file was deleted, and temporarily caching new entries until another file can be created and written to

Hmm, unfortunately I think any approach that risks losing logs is probably a non-starter.

cocowalla avatar Dec 10 '20 15:12 cocowalla

Hi, is there a way to implement this? I think it's a realistic problem. Think about a high-availability application that cannot be restarted during operating hours; now the log disk is full, and the infrastructure team clears all the files to free up space.

I often go into the log folder and delete everything under it to save space. On Windows, the active log file is locked and cannot be deleted; I am fine with that. On Linux (Debian 11), the active log file can be deleted. However, I then have to restart the service/app - or, depending on `rollingInterval`, I have to wait a minute, an hour, or a day until the new log file is created. This is inconvenient.


Edited: Okay, I found a combination of commands that can delete the logs that are not opened by other apps on Linux (Debian 11), so I should be fine now.

Test run phase: `find . -maxdepth 1 -name "Apple*" ! -exec fuser {} \; -exec echo {} \;`

Actual deletion phase: `find . -maxdepth 1 -name "Apple*" ! -exec fuser {} \; -exec rm {} \;`

rolandwu777 avatar Jul 30 '23 00:07 rolandwu777