Question
I'm collecting some system data in a flatfile, which has this format:
YYYY-MM-DD-HH24:MI:SS DD1 DD2 DD3 DD4
Where DD1-DD4 are four items of data. An example of the file is this:
2011-02-01-13:29:53 16 8 7 68
2011-02-01-13:29:58 13 8 6 110
2011-02-01-13:30:03 26 25 1 109
2011-02-01-13:30:08 13 12 1 31
2011-02-01-13:30:14 192 170 22 34
2011-02-01-13:30:19 16 16 0 10
2011-02-01-13:30:24 137 61 76 9
2011-02-01-13:30:29 452 167 286 42
2011-02-01-13:30:34 471 177 295 11
2011-02-01-13:30:39 502 192 309 10
The file is over 2 million rows, with data points every five seconds.
I need to chart this data to be able to derive meaning from it.
What I've tried
At the moment I've tried gnuplot and rrdtool with a variety of unix tools (awk, sed, etc). Both of these work, but seem to require a lot of chopping and recutting the data each time I want to view it in a different way. My gut feel is that rrdtool is the right way to go, but at the moment I'm struggling to get the data into it fast enough, partly because I have to convert my timestamp into Unix epoch. My understanding is also that if I decide I want a new granularity of aggregation I have to rebuild the rrd (which makes sense for realtime collection, but not retrospective loads like this). These things make me think maybe I'm using the wrong tool.
The collection of the data to flat file is fixed - for example I cannot pipe the collection directly into rrdtool.
My Question
I would like people's opinions on the best way to do charting. I have these requirements:
- It should be as fast as possible to create a graph (not just render, but set up to render too)
- It should be as flexible as possible - I need to muck around with the graphs to work out the best granularity for the data (5 seconds is probably too granular)
- It should be able to aggregate (MAX/AVG/etc) where necessary
- It should be repeatable, and applicable to new data files as they come in
- Ideally I want to be able to overlay DD1 vs DD2, or DD1 last week with DD1 this week
- Unix or Windows, don't care. Prefer *nix though :-)
Any suggestions?
Answer 1:
This is a really good question. I'm glad to see some R folks weighing in. I too think R is the right tool for the job, although it's my main hammer so everything looks a bit like a nail to me.
There are a handful of R concepts needed to tackle this challenge. As I see it, you need the following (references in parentheses):
- Import data into R. (R Import Export Guide)
- Get the Data into an appropriate time series structure. (XTS Vignette PDF)
- A little bit of plotting. (Quick-R intro to graphics)
Here's example code using 2 million points. You'll notice I don't illustrate plotting all 2 million points; it's slow and not that informative. But this should give you some ideas on getting started. Feel free to come back with more specific questions if you do decide to jump down the R rabbit hole!
require( xts )
require( lubridate )
## set up some example data
dataLength <- 2e6
startTime <- ymd_hms("2011-02-01-13-29-53")
fistFullOfSeconds <- 1:dataLength
date <- startTime + fistFullOfSeconds
DD1 <- rnorm( dataLength )
DD2 <- DD1 + rnorm(dataLength, 0, .1 )
DD3 <- rnorm( dataLength, 10, 2)
DD4 <- rnorm( dataLength )
myXts <- xts(matrix( c( DD1, DD2, DD3, DD4 ), ncol=4 ), date)
## now all the data are in the myXts object so let's do some
## summarizing and visualization
## grabbing just a single day from the data
## converted to data.frame to illustrate default data frame plotting
oneDay <- data.frame( myXts["2011-02-02"] )
plot( oneDay )
The relationship between DD1 and DD2 kinda jumps out
boxplot( oneDay )
The boxplot is the pie chart of statistical graphics: the plot you love to hate.
## look at the max value of each variable every minute
par(mfrow=c(4,1)) ## partitions the graph window
ep <- endpoints(myXts,'minutes')
plot(period.apply(myXts[,1],INDEX=ep,FUN=max))
plot(period.apply(myXts[,2],INDEX=ep,FUN=max))
plot(period.apply(myXts[,3],INDEX=ep,FUN=max))
plot(period.apply(myXts[,4],INDEX=ep,FUN=max))
Even at one minute resolution I'm not sure this is informative. Should probably subset.
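For instance, a minimal sketch of that kind of subsetting, assuming the myXts object built above (the particular hour and the 10-second window are my own arbitrary choices, not from the original answer):
## zoom in on a single hour and plot 10-second maxima of the first column
oneHour <- myXts["2011-02-02T09:00/2011-02-02T10:00"]
ep10 <- endpoints(oneHour, on = "seconds", k = 10)
plot(period.apply(oneHour[, 1], INDEX = ep10, FUN = max))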
Answer 2:
Here's some R code for playing around with 8000000 numbers in 4 columns of 2000000 rows:
> d=matrix(runif(8000000),ncol=4)
> dim(d)
[1] 2000000 4
> plot(d[1:1000,1])
> plot(d[1:1000,1],type='l')
> plot(d[1:10000,1],type='l')
now it starts to get a bit slow:
> plot(d[1:100000,1],type='l')
what about correlation of two columns:
> cor(d[,1],d[,2])
[1] 0.001708502
-- instant. Fourier transform?
> f=fft(d[,1])
also instant. Don't try and plot it though.
Let's plot a thinned version of one of the columns:
> plot(d[seq(1,2000000,len=1000),1],type='l')
-- instant.
What's really missing is an interactive plot where you could zoom and pan around the whole data set.
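One hedged option for that (not part of the original answer) is to wrap the data in an xts object and hand it to the dygraphs add-on package, which draws a browser-based plot you can zoom and pan:
## sketch: interactive exploration of the first two columns of d
library(xts)
library(dygraphs)
times <- seq(as.POSIXct("2011-02-01 13:29:53"), by = 5, length.out = nrow(d))
dx <- xts(d[, 1:2], order.by = times)
colnames(dx) <- c("DD1", "DD2")
dyRangeSelector(dygraph(dx))  # interactive plot with a zoomable range selector
For the full 2 million points the browser may struggle, so feeding it the thinned series from above is probably the wiser move.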
Answer 3:
Here is an example along the lines of the data you have, as loaded into R, aggregated etc...
First, some dummy data to write out to a file:
stime <- as.POSIXct("2011-01-01-00:00:00", format = "%Y-%m-%d-%H:%M:%S")
## dummy data
dat <- data.frame(Timestamp = seq(from = stime, by = 5, length = 2000000),
                  DD1 = sample(1:1000, replace = TRUE),
                  DD2 = sample(1:1000, replace = TRUE),
                  DD3 = sample(1:1000, replace = TRUE),
                  DD4 = sample(1:1000, replace = TRUE))
## write it out
write.csv(dat, file = "timestamp_data.txt", row.names = FALSE)
Then we can time reading in the 2 million rows. To speed this up, we tell R the classes of the columns in the file: "POSIXct" is one way in R to store the sort of timestamps you have.
## read it in:
system.time({
    tsdat <- read.csv("timestamp_data.txt", header = TRUE,
                      colClasses = c("POSIXct", rep("integer", 4)))
})
which takes about 13 seconds to read in and convert to R's internal date-time representation on my modest laptop:
   user  system elapsed
 13.698   5.827  19.643
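Note that the real file in the question uses a "-" between the date and the time, which the default "POSIXct" coercion won't parse. One workaround (a sketch only; the file name and column names are assumptions, not from the original post) is to register a small coercion class for read.table to use via colClasses:
## define a coercion so read.table can parse YYYY-MM-DD-HH:MM:SS directly
setClass("dashPOSIXct")
setAs("character", "dashPOSIXct",
      function(from) as.POSIXct(from, format = "%Y-%m-%d-%H:%M:%S"))
tsdat <- read.table("system_data.txt", header = FALSE,
                    col.names = c("Timestamp", "DD1", "DD2", "DD3", "DD4"),
                    colClasses = c("dashPOSIXct", rep("integer", 4)))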
Aggregation can be done in lots of ways; one is using aggregate(). Say we aggregate to the hourly mean:
## generate some indexes that we'll use to aggregate over
tsdat <- transform(tsdat,
                   hours = factor(strftime(tsdat$Timestamp, format = "%H")),
                   jday  = factor(strftime(tsdat$Timestamp, format = "%j")))
## compute the mean of the 4 variables for each hour of each day
out <- aggregate(cbind(Timestamp, DD1, DD2, DD3, DD4) ~ hours + jday,
                 data = tsdat, FUN = mean)
## convert the average Timestamp back to a POSIX time
out <- transform(out,
                 Timestamp = as.POSIXct(Timestamp,
                                        origin = ISOdatetime(1970, 1, 1, 0, 0, 0)))
That (the line creating out) takes ~16 seconds on my laptop, and gives the following output:
> head(out)
  hours jday           Timestamp      DD1      DD2      DD3      DD4
1    00  001 2010-12-31 23:29:57 500.2125 491.4333 510.7181 500.4833
2    01  001 2011-01-01 00:29:57 516.0472 506.1264 519.0931 494.2847
3    02  001 2011-01-01 01:29:57 507.5653 499.4972 498.9653 509.1389
4    03  001 2011-01-01 02:29:57 520.4111 500.8708 514.1514 491.0236
5    04  001 2011-01-01 03:29:57 498.3222 500.9139 513.3194 502.6514
6    05  001 2011-01-01 04:29:57 515.5792 497.1194 510.2431 496.8056
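The same pattern adapts to other granularities; a sketch of per-minute means (my addition, reusing tsdat from above, and noticeably slower than the hourly version) just swaps in a finer grouping factor:
## per-minute means: group by minute-of-day within each day
tsdat <- transform(tsdat,
                   mins = factor(strftime(tsdat$Timestamp, format = "%H:%M")))
outMin <- aggregate(cbind(DD1, DD2, DD3, DD4) ~ mins + jday,
                    data = tsdat, FUN = mean)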
Simple plotting can be achieved using the plot() function:
plot(DD1 ~ Timestamp, data = out, type = "l")
We can overlay more variables via, e.g.:
ylim <- with(out, range(DD1, DD2))
plot(DD1 ~ Timestamp, data = out, type = "l", ylim = ylim)
lines(DD2 ~ Timestamp, data = out, type = "l", col = "red")
or via multiple panels:
layout(1:2)
plot(DD1 ~ Timestamp, data = out, type = "l", col = "blue")
plot(DD2 ~ Timestamp, data = out, type = "l", col = "red")
layout(1)
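For the "DD1 last week with DD1 this week" requirement, the same idea works with the variable plotted against time-of-day rather than absolute time. A sketch (my addition, comparing two days rather than two weeks, using the out object from above):
## overlay DD1 from two different days on a common hour-of-day axis
day1 <- subset(out, jday == "001")
day2 <- subset(out, jday == "002")
hrs1 <- as.numeric(as.character(day1$hours))
hrs2 <- as.numeric(as.character(day2$hours))
plot(hrs1, day1$DD1, type = "l", col = "blue", xlab = "Hour of day",
     ylab = "DD1", ylim = range(day1$DD1, day2$DD1))
lines(hrs2, day2$DD1, col = "red")
legend("topright", legend = c("day 1", "day 2"), col = c("blue", "red"),
       lty = 1, bty = "n")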
This has all been done with base R functionality. Others have shown how add-on packages can make working with dates easier.
Source: https://stackoverflow.com/questions/5119687/fastest-most-flexible-way-to-chart-over-2-million-rows-of-flat-file-data