lubridate

In R, is it possible to include the same row in multiple groups, or is there another workaround?

Submitted by 一个人想着一个人 on 2020-01-06 19:36:18
Question: I've measured N2O flux from soil at multiple time points in the day (not equally spaced). I'm trying to calculate the total N2O flux from soil for a subset of days by finding the area under the curve for each given day. I know how to do this using only measures from the given day; however, I'd like to include the last measure of the previous day and the first measure of the following day to improve the estimation of the curve. Here's an example to give a more concrete idea: library(MESS)
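
A minimal sketch of this padding idea, assuming MESS::auc() for trapezoidal integration; the datetime/flux values, UTC time zone, and hour units below are illustrative assumptions, not the asker's data:

    library(MESS)   # auc(): area under a curve by trapezoidal (or spline) integration

    # Hypothetical data: irregular flux measurements spanning three days (UTC)
    d <- data.frame(
      datetime = as.POSIXct("2020-06-01 08:00", tz = "UTC") +
                 c(0, 4, 9, 23, 28, 33, 47) * 3600,
      flux     = c(1.2, 1.8, 1.5, 1.1, 2.0, 1.7, 1.3)
    )

    target <- as.Date("2020-06-02")
    idx    <- which(as.Date(d$datetime, tz = "UTC") == target)

    # Pad the target day with the last measure of the previous day and the
    # first measure of the following day
    padded <- d[max(min(idx) - 1, 1):min(max(idx) + 1, nrow(d)), ]

    # Integrate from midnight to midnight (in hours); the padded points shape
    # the interpolated curve at the day's boundaries
    x_h  <- as.numeric(padded$datetime) / 3600
    day0 <- as.numeric(as.POSIXct(paste(target, "00:00:00"), tz = "UTC")) / 3600
    auc(x_h, padded$flux, from = day0, to = day0 + 24, type = "linear")

Repeating the last three steps for each target day (e.g. in a loop or a grouped call) would give one daily total per day.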

How to summarise overlaps between data points

Submitted by 蹲街弑〆低调 on 2020-01-06 04:31:06
Question: I have a data set of animals passing an RFID reader; it looks like this:

ID  date_time
A   2019-11-02 08:07:47
B   2019-11-02 08:07:48
A   2019-11-02 08:07:49
A   2019-11-02 08:07:50
A   2019-11-02 08:09:12
A   2019-11-02 08:09:13
B   2019-11-02 08:09:17

I asked this question recently (combine multiple rows into one time interval), and now my data looks like this, with the data organised into ten-second intervals:

ID  start_date_time       end_date_time
A   2019-11-02 08:07:47   2019-11-02 08:07:50
B   2019-11-02
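
A minimal sketch of one way to flag overlapping visits, assuming lubridate's interval() and int_overlaps(); the collapsed start/end times below are hypothetical:

    library(lubridate)

    # Hypothetical visits already collapsed to start/end times
    visits <- data.frame(
      ID    = c("A", "B", "A"),
      start = ymd_hms(c("2019-11-02 08:07:47", "2019-11-02 08:07:48", "2019-11-02 08:09:12")),
      end   = ymd_hms(c("2019-11-02 08:07:50", "2019-11-02 08:07:57", "2019-11-02 08:09:13"))
    )
    iv <- interval(visits$start, visits$end)

    # Compare every pair of visits and keep overlaps between different animals
    pairs <- expand.grid(i = seq_len(nrow(visits)), j = seq_len(nrow(visits)))
    keep  <- pairs$i < pairs$j &
             visits$ID[pairs$i] != visits$ID[pairs$j] &
             int_overlaps(iv[pairs$i], iv[pairs$j])
    pairs[keep, ]   # row indices of visit pairs that overlap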

Calculating time lag between sequential events after grouping for subsets

Submitted by 蓝咒 on 2020-01-05 15:36:40
Question: I am trying to calculate the time between sequential observations for different combinations of my columns. I have attached a sample of my data here. A subset of my data looks like:

head(d1)  # visualize the first few lines of the data
  date        time   year  km   sps    pp  datetime          prev  timedif  seque
  <fct>       <fct>  <int> <dbl> <fct> <dbl> <chr>            <dbl> <dbl>    <chr>
  2012/06/09  2:22   2012  110  MICRO  0   2012-06-09 02:22  0     260.     00
  2012/06/19  2:19   2012  80   MICRO  0   2012-06-19 02:19  1     4144     01
  2012/06/19  22:15  2012  110  MICRO
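
A minimal sketch of the lag calculation with dplyr, assuming the grouping columns are km and sps (the actual grouping combination is not shown in the truncated question) and using a small hypothetical subset:

    library(dplyr)
    library(lubridate)

    # Hypothetical subset with the columns shown above
    d1 <- tibble::tibble(
      km       = c(110, 80, 110),
      sps      = "MICRO",
      datetime = ymd_hm(c("2012-06-09 02:22", "2012-06-19 02:19", "2012-06-19 22:15"))
    )

    d1 %>%
      group_by(km, sps) %>%                        # one sequence per combination
      arrange(datetime, .by_group = TRUE) %>%
      mutate(timedif = as.numeric(difftime(datetime, lag(datetime), units = "mins"))) %>%
      ungroup()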

How to take subsets of lists in a tibble

Submitted by 北战南征 on 2020-01-05 04:40:13
Question: I have annual financial data for several stocks. I need to expand it to monthly data and, thanks to an answer to this question I'd asked earlier, I have a solution which involves mutating the date column into lists of dates:

library(tidyverse)
library(lubridate)
factors.subset.raw = structure(list(
  sec_id = c(1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1572L, 1676L, 1676L, 1676L,
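
A minimal sketch of expanding such annual rows to monthly rows and then subsetting the list column, assuming purrr::map, tidyr::unnest, and lubridate's %m+%; the sec_id/date/value columns and the cutoff date are hypothetical stand-ins for the truncated structure() above:

    library(tidyverse)
    library(lubridate)

    # Hypothetical annual data: one row per security per fiscal year end
    annual <- tibble(
      sec_id = c(1572, 1676),
      date   = ymd(c("2014-12-31", "2014-12-31")),
      value  = c(1.5, 2.3)
    )

    monthly <- annual %>%
      mutate(date = map(date, ~ .x %m+% months(0:11))) %>%           # 12 month-ends per row
      mutate(date = map(date, ~ .x[.x >= ymd("2015-03-01")])) %>%    # subset each list element
      unnest(date)

Using %m+% keeps month-end dates aligned (e.g. 2014-12-31 plus two months becomes 2015-02-28 rather than an invalid date).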

Merging tables based on time ranges/intervals using lubridate

Submitted by 守給你的承諾、 on 2020-01-05 04:00:13
Question: I am trying to merge two tables based on time ranges. I only found some old answers on this (e.g. Data Table merge based on date ranges) which don't use lubridate. lubridate provides the %within% operator, which can check whether a date falls within an interval. I constructed a minimal example and am wondering if there is a way to merge these data frames together based on the overlapping dates/intervals, i.e. checking whether df1$Date is in df2$interval. library(lubridate) df1 <- data.frame(Date=c
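
A minimal sketch of one way to do the merge, assuming a plain cross join (base merge() with no shared columns) followed by a %within% filter; the data frames below are hypothetical stand-ins for the truncated example:

    library(lubridate)

    # Hypothetical frames: events with a single Date, lookup table with date ranges
    df1 <- data.frame(Date  = ymd(c("2020-01-02", "2020-01-10", "2020-02-05")),
                      value = 1:3)
    df2 <- data.frame(group = c("a", "b"),
                      start = ymd(c("2020-01-01", "2020-02-01")),
                      end   = ymd(c("2020-01-31", "2020-02-29")))

    # Cartesian product (no common column names), then keep rows whose Date
    # falls inside that row's interval
    merged <- merge(df1, df2)
    merged[merged$Date %within% interval(merged$start, merged$end), ]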

Delete all times less than a specified value

Submitted by 妖精的绣舞 on 2020-01-04 00:41:08
Question: This is probably a simple question, but I am new to R and couldn't find an answer (or was googling the wrong thing). I am currently working on a project which involves deleting all Time values that are less than 5 minutes. An example of the data is as follows, with the times created using the lubridate package:

Time
19S
1M 24S
7M 53S
11M 6S
.
.
.

Now I wish to delete all values which are less than 5 minutes, so the final dataset I wish to get is:

Time
7M 53S
11M 6S
.
.
.

Any help
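
A minimal sketch of the filtering step, assuming the Time values are lubridate periods; period_to_seconds() turns each period into a plain number of seconds that can be compared against a five-minute threshold:

    library(lubridate)

    # Hypothetical Time values parsed as periods (minutes:seconds)
    Time <- ms(c("0:19", "1:24", "7:53", "11:06"))

    # Keep only values of at least five minutes (300 seconds)
    Time[period_to_seconds(Time) >= 5 * 60]   # leaves 7M 53S and 11M 6S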

In R, use lubridate to convert hms objects into seconds

Submitted by 落爺英雄遲暮 on 2020-01-01 05:01:26
Question: A simple question in lubridate: I want to convert an hms object into the number of seconds since the start of the day. For instance, after library(lubridate) and hms("12:34:45"), I want to know exactly how long 12 hours, 34 minutes, and 45 seconds is in seconds. Something obvious like seconds(hms("12:34:45")) just returns 45S, which is not what I want. How do I convert these hms values into seconds? I'd like to use lubridate. Answer 1: It doesn't matter which package you use -- it will have
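
A minimal sketch of one common approach: hms() returns a period, and lubridate's period_to_seconds() collapses it into a single number of seconds since midnight:

    library(lubridate)

    x <- hms("12:34:45")     # a period of 12 hours, 34 minutes, 45 seconds
    period_to_seconds(x)     # 45285 = 12*3600 + 34*60 + 45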

Converting yearmon column to last date of the month in R

Submitted by 陌路散爱 on 2019-12-31 03:06:39
Question: I have a data frame (df) like the following:

Date     Arrivals
2014-07  100
2014-08  150
2014-09  200

I know that I can convert the yearmon dates to the first date of each month as follows:

df$Date <- as.POSIXct(paste0(as.character(df[,1]), "-01"), format = "%Y-%m-%d")

However, given that my data is not available until the end of the month, I want to index it to the end rather than the beginning, and I cannot figure it out. Any help appreciated. Answer 1: If the Date variable is an actual yearmon class
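
A minimal sketch of the month-end conversion with lubridate, built on the question's own data; moving to the first of the next month with %m+% and stepping back one day sidesteps month-length edge cases (if the column is a true zoo::yearmon, zoo's as.Date(x, frac = 1) is another option):

    library(lubridate)

    df <- data.frame(Date     = c("2014-07", "2014-08", "2014-09"),
                     Arrivals = c(100, 150, 200))

    # First of each month, then forward one month and back one day
    first_of_month <- ymd(paste0(df$Date, "-01"))
    df$Date <- first_of_month %m+% months(1) - days(1)   # 2014-07-31, 2014-08-31, 2014-09-30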

Using ifelse with transform in ddply

Submitted by 核能气质少年 on 2019-12-30 18:59:43
Question: I am trying to use ddply with transform to populate a new variable (summary_Date) in a data frame with variables ID and Date. The value of the variable is chosen based on the length of the piece that is being evaluated, using ifelse: if there are fewer than five observations for an ID in a given month, I want summary_Date to be calculated by rounding the date to the nearest month (using round_date from the lubridate package); if there are more than five observations for an ID in a given
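
A minimal sketch of the pattern with plyr::ddply and transform, using a plain if() rather than ifelse() since the row count of each piece is a single TRUE/FALSE condition; the data and the rounding rule for the five-or-more branch are assumptions, as the question is truncated before that part:

    library(plyr)
    library(lubridate)

    # Hypothetical data: two IDs observed on irregular dates within one month
    df <- data.frame(
      ID   = rep(c("a", "b"), c(3, 8)),
      Date = as.POSIXct("2014-01-05", tz = "UTC") + days(c(1, 4, 9, 1:8))
    )
    df$month <- format(df$Date, "%Y-%m")

    out <- ddply(df, .(ID, month), transform,
                 summary_Date = if (length(Date) < 5)
                   round_date(Date, "month")   # few observations: round to month
                 else
                   round_date(Date, "week"))   # five or more: assumed rule for illustration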