python-datetime

strptime seems to create wrong date from week number

别来无恙 submitted 2020-01-14 22:25:04

Question: strptime seems to create the wrong date from a week number.

First case:

dt1 = dateutil.parser.parse('2016-01-04 00:00:00+01:00')
dt1.isocalendar()
=> (2016, 1, 1)  # (year, week number, week day)
from datetime import datetime
datetime.strptime('2016 1 1', '%Y %W %w')
=> datetime.datetime(2016, 1, 4, 0, 0)  # OK

Second case:

dt1 = dateutil.parser.parse('2015-12-28 00:00:00+01:00')
dt1.isocalendar()
=> (2015, 53, 1)  # (year, week number, week day)
datetime.strptime('2015 53 1', '%Y %W %w')
=>
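The mismatch above comes from mixing calendars: isocalendar() returns ISO 8601 week numbers, while %Y/%W use a non-ISO week convention (weeks counted from the first Monday of the year). A minimal sketch of the ISO-consistent parse, assuming Python 3.6+ where strptime gained the matching ISO directives:

```python
from datetime import datetime

# isocalendar() yields ISO 8601 (year, week, weekday). The matching
# strptime directives are %G (ISO year), %V (ISO week), %u (ISO weekday),
# available since Python 3.6 -- not %Y %W %w.
dt = datetime.strptime('2015 53 1', '%G %V %u')
print(dt)  # 2015-12-28 00:00:00 -- round-trips the isocalendar() tuple
```

With %G %V %u both cases round-trip: ISO 2016-W01-1 parses back to 2016-01-04 and ISO 2015-W53-1 to 2015-12-28.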

Python select random date in current year

て烟熏妆下的殇ゞ submitted 2020-01-12 18:51:34

Question: In Python, can you select a random date from a year? E.g. if the year was 2010, a returned date could be 15/06/2010.

Answer 1: It's much simpler to use ordinal dates (according to which today's date is 734158):

from datetime import date
import random
start_date = date.today().replace(day=1, month=1).toordinal()
end_date = date.today().toordinal()
random_day = date.fromordinal(random.randint(start_date, end_date))

This will fail for dates before 1 AD.

Answer 2: Not directly, but you could add a random
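Note that the answer above only draws between January 1st and today. A small sketch generalizing the same ordinal-date idea to any full year (function name is my own, not from the answer):

```python
from datetime import date
import random

def random_date_in_year(year):
    """Pick a uniformly random date in the given year.

    Ordinals map each date to an integer day count, so a random
    integer in [Jan 1, Dec 31] is a uniform random date and leap
    years are handled for free.
    """
    start = date(year, 1, 1).toordinal()
    end = date(year, 12, 31).toordinal()
    return date.fromordinal(random.randint(start, end))

print(random_date_in_year(2010))  # e.g. 2010-06-15
```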

Hours, Date, Day Count Calculation

有些话、适合烂在心里 submitted 2020-01-05 07:05:14

Question: I have a huge dataset which has dates and timestamps for several days, in UNIX (epoch) format. The datasets are logs of some login system. The code is supposed to group start and end time logs and provide log counts and unique ID counts. I am trying to get stats like: total log counts per hour and unique login IDs per hour; log counts with a choice of hours, i.e. 24 hrs, 12 hrs, 6 hrs, 1 hr, etc., and day of the week, and such options. I am able to split the data with start and end hours
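The per-hour counts described here are a natural fit for pandas resampling. A minimal sketch with synthetic data (the column names `ts` and `login_id` are assumptions, not from the question):

```python
import pandas as pd

# Hypothetical log rows: UNIX epoch seconds plus a login id.
df = pd.DataFrame({
    "ts": [1387440201, 1387449263, 1387449263, 1387452000],
    "login_id": ["a", "b", "a", "c"],
})
# unit="s" interprets the integers as seconds since the epoch.
df["ts"] = pd.to_datetime(df["ts"], unit="s")

# Bucket by hour; change "1h" to "6h", "12h", "24h", etc. for wider windows.
grouped = df.set_index("ts").resample("1h")["login_id"]
hourly = pd.DataFrame({
    "total_logs": grouped.size(),      # log count per hour
    "unique_ids": grouped.nunique(),   # distinct login ids per hour
})
print(hourly)
```

For day-of-week stats, the same groupby idea works on `df["ts"].dt.dayofweek` instead of resampling.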

Average day of the year across December-January

|▌冷眼眸甩不掉的悲伤 submitted 2020-01-05 04:19:07

Question: Imagine a time series that peaks cyclically around end-December/early-January. The maxima of the series will then have dates like those shown in dt1 or dt2 below. I need to compute the average day of the year (DOY) of those maxima. The problem is that a normal average would give very different results for dt1 (211) and dt2 (356). The cause is obviously that some elements of dt1 are in January, so the corresponding DOYs are very small and drag the resulting average down. I originally worked
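The standard remedy for averaging values that wrap around (days of year, compass bearings, hours of day) is a circular mean: map each day to an angle on a circle, average the unit vectors, and map back. A sketch, not taken from the question's answers:

```python
import math

def circular_mean_doy(doys, year_length=365.25):
    """Average day-of-year treating the year as a circle, so dates
    straddling the December/January boundary average sensibly."""
    angles = [2 * math.pi * d / year_length for d in doys]
    # Mean resultant vector of the angles, then back to an angle.
    mean_angle = math.atan2(
        sum(math.sin(a) for a in angles) / len(angles),
        sum(math.cos(a) for a in angles) / len(angles),
    )
    return (mean_angle / (2 * math.pi) * year_length) % year_length

# Days 360, 363 and 3 straddle New Year: a plain mean gives 242,
# the circular mean stays near the cluster, around day 364.
print(circular_mean_doy([360, 363, 3]))
```

For dates far from the boundary the circular mean agrees with the ordinary mean, so it is safe to use unconditionally.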

Add months to xaxis and legend on a matplotlib line plot

ぐ巨炮叔叔 submitted 2020-01-03 01:13:25

Question: I am trying to plot stacked yearly line graphs by months. I have a dataframe df_year as below:

Day         Number of Bicycle Hires
2010-07-30  6897
2010-07-31  5564
2010-08-01  4303
2010-08-02  6642
2010-08-03  7966

with the index set to the date, going from July 2010 to July 2017. I want to plot a line graph for each year, with the x-axis being months from Jan to Dec and only the total sum per month plotted. I have achieved this by converting the dataframe to a pivot table as below:

pt = pd.pivot_table(df
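One way the month-by-year pivot can be built is with the index's `.month` and `.year` accessors: rows become months 1-12, columns become years, and each column then plots as one line. A sketch with synthetic data standing in for the real 2010-2017 frame:

```python
import pandas as pd
import numpy as np

# Synthetic daily hire counts (the real frame runs 2010-07 to 2017-07).
idx = pd.date_range("2010-07-30", "2012-12-31", freq="D")
df_year = pd.DataFrame(
    {"Number of Bicycle Hires":
        np.random.default_rng(0).integers(3000, 9000, len(idx))},
    index=idx,
)

# Rows = month (1-12), columns = year, values = monthly totals.
pt = pd.pivot_table(
    df_year,
    values="Number of Bicycle Hires",
    index=df_year.index.month,
    columns=df_year.index.year,
    aggfunc="sum",
)
print(pt.shape)  # (12, 3): months 1-12 down, years 2010-2012 across

# Each year is now a column, so one line per year, Jan-Dec on the x-axis:
# pt.plot()  # requires matplotlib
```

Months missing from a partial year (here Jan-Jun 2010) come out as NaN, which matplotlib simply leaves unplotted.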

selecting observation of datetime64[ns] type in particular time range

倖福魔咒の submitted 2020-01-02 07:06:26

Question: I have a pandas dataframe (dfnew) in which one column (timestamp) is of datetime64[ns] type. Now I want to see how many observations fall in a particular time range, let's say 10:00:00 to 12:00:00.

dfnew['timestamp'] = dfnew['timestamp'].astype('datetime64[ns]')
dfnew['timestamp']
0   2013-12-19 09:03:21.223000
1   2013-12-19 11:34:23.037000
2   2013-12-19 11:34:23.050000
3   2013-12-19 11:34:23.067000
4   2013-12-19 11:34:23.067000
5   2013-12-19 11:34:23.067000
6   2013-12-19 11:34:23.067000
7   2013-12-19 11:34
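Two standard pandas routes for a time-of-day filter, sketched on the first few timestamps from the question:

```python
import pandas as pd

dfnew = pd.DataFrame({"timestamp": pd.to_datetime([
    "2013-12-19 09:03:21.223",
    "2013-12-19 11:34:23.037",
    "2013-12-19 11:34:23.050",
    "2013-12-19 11:34:23.067",
])})

# Option 1: compare the time-of-day component directly.
t = dfnew["timestamp"].dt.time
mask = (t >= pd.Timestamp("10:00:00").time()) & \
       (t <= pd.Timestamp("12:00:00").time())
print(mask.sum())  # 3 observations fall in 10:00-12:00

# Option 2: between_time does the same on a DatetimeIndex
# (inclusive at both ends by default).
count = dfnew.set_index("timestamp").between_time("10:00", "12:00").shape[0]
print(count)  # 3
```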

How to convert datetime from string format into datetime format in pyspark?

回眸只為那壹抹淺笑 submitted 2020-01-01 14:42:01

Question: I created a dataframe using sqlContext and I have a problem with the datetime format, as it is identified as a string.

df2 = sqlContext.createDataFrame(i[1])
df2.show()
df2.printSchema()

Result:

2016-07-05T17:42:55.238544+0900
2016-07-05T17:17:38.842567+0900
2016-06-16T19:54:09.546626+0900
2016-07-05T17:27:29.227750+0900
2016-07-05T18:44:12.319332+0900
string (nullable = true)

Since the datetime column's schema is a string, I want to change it to datetime format as follows:

df3 = df2.withColumn('_1', df2['
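The strings shown are ISO 8601 with microseconds and a +0900 offset; in plain Python they parse with strptime's %z, which is a quick way to verify the format before touching Spark. The PySpark lines are only a hedged sketch (the column name '_1' is assumed from the question, and the exact fraction/zone pattern letters may need adjusting for your Spark version):

```python
from datetime import datetime

# Verify the format string against one of the values shown above.
s = "2016-07-05T17:42:55.238544+0900"
dt = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z")
print(dt.year, dt.utcoffset())  # 2016 9:00:00

# In PySpark the usual route is functions.to_timestamp with a
# matching pattern (sketch, not run here):
# from pyspark.sql import functions as F
# df3 = df2.withColumn("_1",
#     F.to_timestamp("_1", "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ"))
```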
