tail

dplyr and tail to change last value in a group_by in R

被刻印的时光 ゝ Submitted 2019-12-18 21:17:56
Question: While using dplyr I'm having trouble changing the last value in my data frame. I want to group by user and tag and change Time to 0 for the last value/row in each group.

     user_id tag Time
1  268096674   1    3
2  268096674   1   10
3  268096674   1    1
4  268096674   1    0
5  268096674   1 9999
6  268096674   2    0
7  268096674   2    9
8  268096674   2  500
9  268096674   3    0
10 268096674   3    1
...

Desired output:

     user_id tag Time
1  268096674   1    3
2  268096674   1   10
3  268096674   1    1
4  268096674   1    0
5  268096674   1    0
6  268096674   2    0
7
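The question itself is about dplyr, but the zero-out-the-last-row-per-group idea translates directly to other data-frame libraries. A minimal sketch of an analogue in Python/pandas (the frame below is made up to mirror the question's first two groups):

```python
import pandas as pd

# Hypothetical frame mirroring the question's data (groups 1 and 2 only).
df = pd.DataFrame({
    "user_id": [268096674] * 8,
    "tag":     [1, 1, 1, 1, 1, 2, 2, 2],
    "Time":    [3, 10, 1, 0, 9999, 0, 9, 500],
})

# groupby(...).tail(1) picks the last row of each (user_id, tag) group;
# use its index to zero out Time on exactly those rows.
last_idx = df.groupby(["user_id", "tag"]).tail(1).index
df.loc[last_idx, "Time"] = 0
```

Fittingly for this page, the grouped `tail(1)` is the pandas counterpart of taking the last element per group.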

Get last n lines or bytes of a huge file in Windows (like Unix's tail). Avoid time consuming options

自闭症网瘾萝莉.ら Submitted 2019-12-18 12:12:38
Question: I need to retrieve the last n lines of huge files (1–4 GB) in Windows 7. Due to corporate restrictions, I cannot run any command that is not built-in. The problem is that all solutions I found appear to read the whole file, so they are extremely slow. Can this be accomplished fast? Notes: I managed to get the first n lines, fast. It is OK if I get the last n bytes. (I used https://stackoverflow.com/a/18936628/2707864 for the first n bytes.) Solutions here: Unix tail equivalent command
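The question is restricted to built-in Windows commands, but the underlying trick that makes this fast is the same everywhere: seek to the end of the file and read only a small suffix, never the whole file. A sketch of that idea in Python:

```python
import os

def tail_bytes(path, n):
    """Return the last n bytes of a file without reading the whole file."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)          # jump to the end
        size = f.tell()
        f.seek(max(0, size - n))        # back up at most n bytes
        return f.read()

def tail_lines(path, n, chunk=4096):
    """Return the last n lines, reading backwards in fixed-size chunks."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        data = b""
        # Keep prepending chunks until we have seen more than n newlines
        # (or we reach the start of the file).
        while pos > 0 and data.count(b"\n") <= n:
            step = min(chunk, pos)
            pos -= step
            f.seek(pos)
            data = f.read(step) + data
        return data.splitlines()[-n:]
```

Even on a multi-gigabyte file this touches only the final few kilobytes, which is why real `tail` implementations are fast.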

How to implement a pythonic equivalent of tail -F?

好久不见. Submitted 2019-12-17 03:04:23
Question: What is the pythonic way of watching the tail end of a growing file for the occurrence of certain keywords? In shell I might say:

tail -f "$file" | grep "$string" | while read hit; do
    # stuff
done

Answer 1: Well, the simplest way would be to constantly read from the file, check what's new, and test for hits.

import time

def watch(fn, words):
    fp = open(fn, 'r')
    while True:
        new = fp.readline()
        # Once all lines are read this just returns ''
        # until the file changes and a new line appears
        if new:
            for
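The answer's snippet is cut off above; a complete sketch of the same polling approach, split into a `tail -f`-like generator plus a keyword filter (function names are mine, not from the original answer):

```python
import time

def follow(fp, poll=0.1):
    """Yield lines appended to `fp` after its current position, like tail -f."""
    while True:
        line = fp.readline()
        if line:
            yield line
        else:
            time.sleep(poll)       # nothing new yet; wait and poll again

def grep_tail(fp, words):
    """Filter the followed lines for any of the given keywords."""
    for line in follow(fp):
        if any(w in line for w in words):
            yield line
```

Usage: open the log and `fp.seek(0, 2)` first if you want to skip existing content, then iterate over `grep_tail(fp, ["ERROR"])`. Note this polls rather than blocks; for production use, inotify-based tools avoid the busy wait.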

Tail read a growing dynamic file and extract two columns and then print a graph

本小妞迷上赌 Submitted 2019-12-13 11:50:36
Question: What is the best way to read a 1 GB file that gets time-series data logged into it and generate a real-time graph from two of its columns (one a time, the other a number)? I see that there are different ways of tailing the file.

Answer 1: Sounds like a good job for RRDTool. But if you want to stick with Python, I would use tail to stream the data into my program (this is assuming the file is continuously written to; otherwise a straight open() in Python will work).

tail -F data.log | python myprogram.py
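The answer pipes `tail -F` into a Python consumer. A minimal sketch of such a consumer, extracting the two columns from stdin (the whitespace-separated "time value" layout is an assumption, since the actual log format isn't shown):

```python
import sys

def parse_stream(lines):
    """Yield (time, value) pairs from an iterable of log lines.

    Assumes two whitespace-separated numeric columns; malformed lines
    are skipped rather than crashing the long-running consumer.
    """
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        try:
            yield float(parts[0]), float(parts[1])
        except ValueError:
            continue

if __name__ == "__main__":
    # tail -F data.log | python myprogram.py
    for t, v in parse_stream(sys.stdin):
        print(t, v)   # here you would update a live plot instead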

Replacing row elements in a dataframe based on values from another dataframe [duplicate]

回眸只為那壹抹淺笑 Submitted 2019-12-13 09:16:54
Question: This question already has answers here: How to join (merge) data frames (inner, outer, left, right) (13 answers). Closed 9 months ago. I'm fairly new to R, so I hope somebody can help me. An output table in one of my scripts is the averagetable below, showing different proportions of the event Standing in three different clusters:

> print(averagetable)
   Group.1  Standing
1 cluster1 0.5642857
2 cluster2 0.7795848
3 cluster3 0.7922980

Note that R can assign different cluster names (cluster1,
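The linked duplicate answers this with a join/merge; a sketch of the same lookup-and-replace pattern in Python/pandas (the second frame is invented for illustration, since the asker's other dataframe is not shown):

```python
import pandas as pd

# The asker's summary table, as printed above.
averagetable = pd.DataFrame({
    "Group.1":  ["cluster1", "cluster2", "cluster3"],
    "Standing": [0.5642857, 0.7795848, 0.7922980],
})

# Hypothetical second frame whose rows should pick up Standing values.
other = pd.DataFrame({"Group.1": ["cluster3", "cluster1"]})

# A left merge keys each row of `other` to its cluster's proportion,
# preserving the row order of `other`.
merged = other.merge(averagetable, on="Group.1", how="left")
```

The same idea in R is `merge(other, averagetable, by = "Group.1", all.x = TRUE)`.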

Oddity with PHP tail -n 1 returning multiple results

雨燕双飞 Submitted 2019-12-12 16:34:30
Question: I had this question answered, and very nice it was too. But an oddity has emerged: if the log file has a unique last line (i.e. the first few words are different from the preceding lines), it correctly returns that last line with tail -n 1 "file", but if the last few lines are similar to the last line, it returns all the lines that are similar. Let me show you... The file it's reading is:

frame= 1065 fps= 30 q=1.6 size= 11977kB time=35.54 bitrate=2761.1kbits/s
frame= 1081 fps= 30

How do I use Head and Tail to print specific lines of a file

血红的双手。 Submitted 2019-12-12 09:31:40
Question: I want to output lines 5–10 of a file, as arguments passed in, where filename = $1, firstline = $2, and lastline = $3. How could I use head and tail to do this? Running it should look like this:

./lines.sh filename firstline lastline

Answer 1: Aside from the answers given by fedorqui and Kent, you can also use a single sed command:

#! /bin/sh
filename=$1
firstline=$2
lastline=$3
# Basics of sed:
# 1. sed commands have a matching part and a command part.
# 2. The matching part matches
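For comparison with the head/tail combination the question asks about (which amounts to taking the first `lastline` lines and then keeping from `firstline` onward), here is the same slice sketched in Python:

```python
def lines_between(path, first, last):
    """Return lines first..last of a file (1-based, inclusive),
    the slice you'd get by combining head and tail."""
    out = []
    with open(path) as f:
        for i, line in enumerate(f, start=1):
            if i > last:
                break            # stop early, like head: don't read the rest
            if i >= first:
                out.append(line)
    return out
```

Breaking out of the loop at `last` mirrors head's behavior of not reading past the lines it needs, which matters on large files.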

tailing aws lambda/cloudwatch logs

蹲街弑〆低调 Submitted 2019-12-12 08:35:16
Question: Found out how to access Lambda logs from another answer. Is it possible to tail them? (Manually pressing refresh is cumbersome.)

Answer 1: Since you mentioned tail-ing, I'm expecting that you are comfortable working on the terminal with CLI tools. You can install awslogs locally and use it to tail CloudWatch, e.g.:

$ awslogs get /aws/lambda/my-api-lambda ALL --watch --profile production

Aside from not needing to refresh anything anymore (that's what tail is for), I also like that you don't have

Incorrect results with bash process substitution and tail?

雨燕双飞 Submitted 2019-12-12 08:22:09
Question: Using bash process substitution, I want to run two different commands on a file simultaneously. In this example it is not necessary, but imagine that "cat /usr/share/dict/words" was a very expensive operation, such as uncompressing a 50 GB file.

cat /usr/share/dict/words | tee >(head -1 > h.txt) >(tail -1 > t.txt) > /dev/null

After this command I would expect h.txt to contain the first line of the words file, "A", and t.txt to contain the last line, "Zyzzogeton". However, what actually
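If the real goal is just "first and last line in a single pass over an expensive stream", the shell plumbing (and its pitfall of `head` exiting early and breaking the pipe) can be sidestepped entirely. A one-pass sketch in Python:

```python
def first_and_last(lines):
    """Return (first, last) line of an iterable, reading it exactly once."""
    it = iter(lines)
    try:
        first = next(it)
    except StopIteration:
        return None, None        # empty stream
    last = first                 # a one-line stream is its own last line
    for last in it:              # exhaust the stream, keeping only the latest
        pass
    return first, last
```

Because the stream is consumed exactly once, a 50 GB decompression would only run once, which is the property the tee/process-substitution approach was trying to achieve.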

How to use tail -f recursively on a directory and its sub-directories?

喜夏-厌秋 Submitted 2019-12-11 16:12:35
Question: I am trying to use the tail utility in Linux to monitor the logs present under nested directories. I tried using tail -f /var/log/**/*, but this only reaches the direct children of the log directory; it does not dig beyond one level. Basically I am trying to tail all the application logs in a Docker container and pass them to /proc/1/fd/1, so that they appear under docker logs.

Answer 1: You need to enable shopt -s globstar if it is disabled in your shell. With this setting enabled, Bash will recurse directories
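The answer enables Bash's globstar so that `**` recurses. The same recursive expansion can be sketched in Python's glob module, which supports `**` via `recursive=True`, useful if you want to feed the resulting file list to `tail -f` yourself:

```python
import glob
import os

def log_files(root):
    """Recursively collect regular files under root,
    like /var/log/**/* with globstar enabled."""
    pattern = os.path.join(root, "**", "*")
    # recursive=True makes "**" match any depth; "**/*" also matches
    # directories, so filter those out.
    return [p for p in glob.glob(pattern, recursive=True) if os.path.isfile(p)]
```

One caveat either way: the glob is expanded once, so files created after `tail -f` starts won't be picked up without re-running.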