08 Jan 11, 09:27PM
(08 Jan 11, 09:05PM)Fiz Wrote: No, no, not that tail is a resource hog; reading the whole log file in anything at a rapid pace would be.
It doesn't read the whole file, as far as I'm aware...
Quote:tail itself isn't, because it only spits out the last few lines, but say you have tail set to show the last 10 lines and, in between processing those 10 lines, 20 lines get put into the log; then you miss the 10 extra lines that got written while you were processing the 10 you managed to grab.
This is 'tail -f', not normal 'tail', if you didn't notice.
Using tail to spit the whole file out each time WOULD be a resource hog if the log file were 10 MB or larger, which I see regularly just on an OSOK server; I imagine on CTF servers and such they grow even bigger, faster. Now, I know processing 10 lines of a log should be pretty much instant, but 10 lines can be written to the log pretty fast too, especially when a round changes or if some kind of abuse is happening.
Quote:I'm not saying this would happen often, but with that method there is always the possibility for data to get lost.
How? I don't think anyone would use UNIX software like tail if it were not reliable...
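For what it's worth, here's roughly how a 'tail -f'-style follower can be written, just to show why it never re-reads the whole file. This is only a sketch in Python; the log path and the function name are made up:
Code:
import os
import time

def follow(path):
    """Yield lines as they are appended to the file, without re-reading old content."""
    with open(path, "r") as f:
        f.seek(0, os.SEEK_END)           # start at the end, like 'tail -f -n 0'
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.2)          # no new data yet; poll again shortly

# hypothetical usage:
# for entry in follow("/path/to/server.log"):
#     process(entry)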
Quote:It's also very hard to keep track of what you have already processed, since there is no unique identifier provided. If you use syslog you have a pretty unique timestamp, but I imagine if some logs come in fast enough they might have the same timestamp; then again, it's using another external source (syslog) to collect the logs.
Here you go: unique id = timestamp + the rank of the line within that timestamp.
For example, these log lines:
Code:
12:13:13 x
12:13:14 y
12:13:14 z
12:13:14 x
become these ids:
Code:
12:13:13:1
12:13:14:1
12:13:14:2
12:13:14:3
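A quick sketch of how that could be computed while following the log. This is just illustrative Python, assuming every line starts with an HH:MM:SS timestamp; unique_ids is a made-up name and follow refers to the sketch above:
Code:
def unique_ids(lines):
    """Pair each log line with a 'timestamp:rank' id."""
    last_ts, rank = None, 0
    for line in lines:
        ts = line.split(" ", 1)[0]                 # e.g. "12:13:14"
        rank = rank + 1 if ts == last_ts else 1    # reset the rank on a new second
        last_ts = ts
        yield "%s:%d" % (ts, rank), line

# hypothetical usage:
# for uid, line in unique_ids(follow("/path/to/server.log")):
#     ...   # uid comes out as "12:13:13:1", "12:13:14:1", "12:13:14:2", ...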