Logfile analysis in R?

生来不讨喜 2021-02-01 06:04

I know there are other tools around like AWStats or Splunk, but I wonder whether there is some serious (web)server logfile analysis going on in R. I might not be the first, though.

5 Answers
  • 2021-02-01 06:19

    In connection with a project to build an analytics toolbox for our Network Ops guys, I built one of these about two months ago. My employer has no problem if I open source it, so if anyone is interested I can put it up on my github repo. I assume it's most useful to this group if I build an R package. I won't be able to do that straight away, though, because I need to research the docs on package building with non-R code (it might be as simple as tossing the python bytecode files in /exec along with a suitable python runtime, but I have no idea).

    I was actually surprised that I needed to undertake a project of this sort. There are at least several excellent open-source and free log file parsers/viewers (including the excellent Webalizer and AWStats), but neither parses server error logs (parsing server access logs is the primary use case for both).

    If you are not familiar with error logs or with the difference between them and access logs: in sum, Apache servers (likewise nginx and IIS) record two distinct logs and store them to disk by default next to each other in the same directory. On Mac OS X, that directory is in /var, just below root:

    $> pwd
       /var/log/apache2
    
    $> ls
       access_log   error_log
    

    For network diagnostics, error logs are often far more useful than the access logs. They also happen to be significantly more difficult to process because of the unstructured nature of the data in many of the fields and more significantly, because the data file you are left with after parsing is an irregular time series--you might have multiple entries keyed to a single timestamp, then the next entry is three seconds later, and so forth.

    I wanted an app into which I could toss raw error logs (of any size, but usually several hundred MB at a time) and have something useful come out the other end--which in this case had to be some pre-packaged analytics and also a data cube available inside R for command-line analytics. Given this, I coded the raw-log parser in python, while the processor (e.g., gridding the parser output to create a regular time series) and all analytics and data visualization I coded in R.
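
    To make that "gridding" step concrete, here is a minimal sketch (not the actual parser output; the data frame `err` and the column name `ts` are made-up names) of one way to regularize such a series in base R: bin the raw timestamps onto a fixed one-minute grid and count entries per bin.

    # Toy irregular series: repeated and unevenly spaced timestamps,
    # as you get from a parsed error log ('err' and 'ts' are illustrative names).
    set.seed(1)
    err <- data.frame(ts = as.POSIXct("2015-04-01 13:39:22", tz = "UTC") +
                           cumsum(sample(0:5, 200, replace = TRUE)))
    
    # Assign each entry to a one-minute bin; empty bins are kept as zero counts,
    # which turns the irregular series into a regular one.
    grid   <- cut(err$ts, breaks = "1 min")
    series <- as.data.frame(table(grid))
    colnames(series) <- c("minute", "n_errors")
    head(series)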

    I have been building analytics tools for a long time, but only in the past four years have I been using R. So my first impression--immediately upon parsing a raw log file and loading the data frame in R--was what a pleasure R is to work with and how well suited it is for tasks of this sort. A few welcome surprises:

    • Serialization. To persist working data in R is a single command (save). I knew this, but I didn't know how efficient this binary format is. The actual data: for every 50 MB of raw logfiles parsed, the .RData representation was about 500 KB--100:1 compression. (Note: I pushed this down further to about 300:1 by using the data.table library and manually setting the compression level argument to the save function; a short sketch follows this list);

    • IO. My data warehouse relies heavily on a lightweight data-structure server that resides entirely in RAM and writes to disk asynchronously, called redis. The project itself is only about two years old, yet there's already a redis client for R on CRAN (by B.W. Lewis, version 1.6.1 as of this post);

    • Primary Data Analysis. The purpose of this project was to build a library for our Network Ops guys to use. My goal was a "one command = one data view" type of interface. So, for instance, I used the excellent googleVis package to create professional-looking scrollable/paginated HTML tables with sortable columns, into which I loaded a data frame of aggregated data (>5,000 lines). Just those few interactive elements--e.g., sorting a column--delivered useful descriptive analytics. Another example: I wrote a lot of thin wrappers over some basic data-juggling and table-like functions; each of these functions I would, for instance, bind to a clickable button on a tabbed web page. Again, this was a pleasure to do in R, in part because quite often the function required no wrapper; the single command with the arguments supplied was enough to generate a useful view of the data.
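
    To make the serialization point in the first bullet concrete, here is a short, hedged sketch (the object name, file path, and compression settings are illustrative only): one command persists the parsed data, one command restores it, and save() exposes the compression knobs mentioned above.

    library(data.table)
    
    # A stand-in for the parsed error-log cube (names are made up).
    dt <- data.table(ts = Sys.time() + 1:1e5,
                     Issue_Descr = sample(letters, 1e5, replace = TRUE))
    
    # Persist in one command; compress / compression_level control the on-disk size.
    save(dt, file = "parsed_errlog.RData", compress = "bzip2", compression_level = 9)
    
    # Restore the object in one command.
    load("parsed_errlog.RData")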

    A couple of examples of the last bullet:

    # what are the most common issues that cause an error to be logged?
    
    err_order = function(df){
        t0 = xtabs(~Issue_Descr, df)
        m = cbind( names(t0), t0)
        rownames(m) = NULL
        colnames(m) = c("Cause", "Count")
        x = m[,2]
        x = as.numeric(x)
        ndx = order(x, decreasing=T)
        m = m[ndx,]
        m1 = data.frame(Cause=m[,1], Count=as.numeric(m[,2]),
                        CountAsProp=100*as.numeric(m[,2])/dim(df)[1])
        subset(m1, CountAsProp >= 1.)
    }
    
    # calling this function, passing in a data frame, returns something like:
    
    
                            Cause       Count    CountAsProp
    1  'connect to unix://var/ failed'    200        40.0
    2  'object buffered to temp file'     185        37.0
    3  'connection refused'                94        18.8
    


    [Image: The primary data cube displayed for interactive analysis using googleVis]

    [Image: A contingency table (from an xtabs call) displayed using googleVis]
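
    As a rough sketch of how such a table is produced (the pagination options shown are just one possible configuration), gvisTable() from googleVis turns an aggregated data frame, such as the output of err_order() above, into a sortable, paginated HTML table:

    library(googleVis)
    
    m1  <- err_order(df)   # df: the parsed error-log data frame from above
    tbl <- gvisTable(m1, options = list(page = "enable", pageSize = 25, width = 600))
    plot(tbl)              # renders the interactive table in the browser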

  • 2021-02-01 06:21
    #!python
    # Convert Apache/nginx combined-format access log lines into CSV that
    # R's read.csv can ingest.
    
    import argparse
    import csv
    import io   # io.StringIO replaces Python 2's cStringIO
    
    # Two sample log lines, used as a fallback when no file is supplied.
    SAMPLE = [
        '54.67.81.141 - - [01/Apr/2015:13:39:22 +0000] "GET / HTTP/1.1" 502 173 "-" "curl/7.41.0" "-"',
        '54.67.81.141 - - [01/Apr/2015:13:39:22 +0000] "GET / HTTP/1.1" 502 173 "-" "curl/7.41.0" "-"',
    ]
    
    # Space-delimited dialect with a comma escape character: when the single
    # remaining field is written out, every embedded space gets a comma put in
    # front of it, which is what produces the comma-separated demo output below.
    class OurDialect:
        escapechar = ','
        delimiter = ' '
        quoting = csv.QUOTE_NONE
    
    
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', '--source', dest='source', default=None,
                        help='path to an access log; the sample lines are used if omitted')
    arguments = parser.parse_args()
    
    if arguments.source:
        with open(arguments.source, 'r') as fin:   # 'r', not 'wb': we are reading
            raw = fin.readlines()
    else:
        raw = SAMPLE
    
    header = ['IP', 'Ident', 'User', 'Timestamp', 'Offset', 'HTTP Verb', 'HTTP Endpoint',
              'HTTP Version', 'HTTP Return code', 'Size in bytes', 'User-Agent']
    
    # Strip trailing newlines and drop the [] and "" delimiters so that only
    # spaces remain between fields.
    lines = [l.rstrip('\n').replace('[', '').replace(']', '').replace('"', '') for l in raw]
    
    out = io.StringIO()
    
    writer = csv.writer(out)        # header row: ordinary comma-separated CSV
    writer.writerow(header)
    
    writer = csv.writer(out, dialect=OurDialect)
    writer.writerows([[l] for l in lines])   # one field per row; the escapechar does the splitting
    
    print(out.getvalue())
    

    Demo output:

    IP,Ident,User,Timestamp,Offset,HTTP Verb,HTTP Endpoint,HTTP Version,HTTP Return code,Size in bytes,User-Agent
    54.67.81.141, -, -, 01/Apr/2015:13:39:22, +0000, GET, /, HTTP/1.1, 502, 173, -, curl/7.41.0, -
    54.67.81.141, -, -, 01/Apr/2015:13:39:22, +0000, GET, /, HTTP/1.1, 502, 173, -, curl/7.41.0, -
    

    This format can easily be read into R using read.csv, and it doesn't require any third-party libraries.
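
    For example (the file name and redirect are assumed, not part of the script above), after piping the script's output to a file, base R alone pulls it in. Note in the demo output that the data rows carry two more fields than the header names (the quoted referrer and the trailing "-"), so it is safest to skip the header row and let R name the columns:

    # python parse_log.py -f access_log > access_log.csv   (script/file names are hypothetical)
    logs <- read.csv("access_log.csv", header = FALSE, skip = 1,
                     strip.white = TRUE, stringsAsFactors = FALSE)
    str(logs)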

  • 2021-02-01 06:23

    It is in fact an excellent idea. R also has very good date/time capabilities, can do cluster analysis or use any variety of machine learning algorithms, and has three different regexp engines for parsing, etc.
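
    A small taste of those capabilities on a raw log record (an English locale is assumed for the "%b" month abbreviation; the record itself is just an example):

    # Parse an Apache-style timestamp into POSIXlt with base R.
    ts <- strptime("01/Apr/2015:13:39:22 +0000",
                   format = "%d/%b/%Y:%H:%M:%S %z", tz = "UTC")
    
    # Pull the request line out of a raw record with a regular expression.
    rec <- '54.67.81.141 - - [01/Apr/2015:13:39:22 +0000] "GET / HTTP/1.1" 502 173'
    sub('.*"([A-Z]+ [^"]+)".*', "\\1", rec)   # "GET / HTTP/1.1"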

    And it may not be a novel idea. A few years ago I was in brief email contact with someone using R for proactive (rather than reactive) logfile analysis: read the logs, (in their case) build time-series models, predict hot spots. That is so obviously a good idea. It was at one of the Department of Energy labs, but I no longer have a URL. Even outside of temporal patterns, there is a lot one could do here.

  • 2021-02-01 06:27

    I did a logfile analysis recently using R. It was nothing really complex, mostly descriptive tables. R's built-in functions were sufficient for this job.
    The problem was data storage, as my logfiles were about 10 GB. Revolution R does offer new methods to handle such big data, but in the end I decided to use a MySQL database as a backend (which in fact reduced the size to 2 GB through normalization).
    That could also solve your problem of reading logfiles into R.
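
    A rough sketch of that kind of backend (the DBI/RMySQL combination, connection details, and table/column names here are assumptions, not the setup described above):

    library(DBI)
    
    # Connect to the MySQL backend holding the normalized log tables.
    con <- dbConnect(RMySQL::MySQL(), dbname = "weblogs", host = "localhost",
                     user = "analyst", password = "secret")
    
    # Pull only an aggregated slice into R instead of the full 10 GB of raw logs.
    hits_per_status <- dbGetQuery(con,
        "SELECT status, COUNT(*) AS n FROM access_log GROUP BY status")
    
    dbDisconnect(con)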

  • 2021-02-01 06:36

    I have used R to load and parse IIS log files with some success; here is my code.

    Load IIS Log files
    require(data.table)
    
    setwd("Log File Directory")
    
    # get a list of all the log files
    log_files <- Sys.glob("*.log")
    
    # This line
    # 1) reads each log file
    # 2) concatenates them
    IIS <- do.call( "rbind", lapply( log_files,  read.csv, sep = " ", header = FALSE, comment.char = "#", na.strings = "-" ) )
    
    # Add field names - copied from the "#Fields:" header line of one of the log files
    colnames(IIS) <- c("date", "time", "s_ip", "cs_method", "cs_uri_stem", "cs_uri_query", "s_port", "cs_username", "c_ip", "cs_User_Agent", "sc_status", "sc_substatus", "sc_win32_status", "sc_bytes", "cs_bytes", "time-taken")
    
    #Change it to a data.table
    IIS <- data.table( IIS )
    
    #Query at will
    IIS[, .N, by = list(sc_status,cs_username, cs_uri_stem,sc_win32_status) ]
    