strace

Using strace fixes hung memory issue

自闭症网瘾萝莉.ら · Submitted on 2019-12-05 05:17:54
I have a multithreaded process running on RHEL 6.x (64-bit). The process hangs and some of its threads crash most of the time when I try to bring it up. Some threads wait for shared memory (shared between the threads) to be created, and I can see that not all of it gets created. But when I run the process under strace, it does not hang and works just fine: all of the memory that is supposed to be created gets created. Even interrupting strace after the memory is created leaves the process running fine for good. I have read this: strace fixes hung process which did

Linux Performance Optimization Notes

谁都会走 · Submitted on 2019-12-04 18:26:13
Linux performance optimization notes, based on *Optimizing Linux Performance: A Hands-On Guide to Linux Performance Tools*.

1. Performance-tracking advice: record every piece of information you can collect, even if some of it does not look useful at the time.

2. Performance tools: system CPU. Under Linux, a process is either runnable or blocked. Runnable processes fall into two groups: those that have CPU time and are currently running, and those waiting for CPU time. The run-queue length is the (average) number of runnable processes in the system. Blocked processes also fall into two groups: those blocked on I/O and those blocked in a system call. A CPU executes only one process at a time; when scheduling in another process, the CPU must save the current process's state and load the next process's state from memory. This is called a context switch. A context switch flushes the CPU's caches and pipeline, which hurts performance. To distribute CPU time fairly, Linux interrupts processes periodically, and the rate of these periodic context switches can serve as a baseline when judging the system's context-switch count: cat /proc/interrupts | grep timer; sleep 10; cat /proc/interrupts | grep timer. If the system's context-switch rate is higher than the timer's rate, processes are frequently stuck waiting on I/O or in system calls (such as sleeping). At any moment, a CPU can be in one of the following states: - idle
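The timer-interrupt baseline described above can be computed directly; here is a minimal sketch, assuming a Linux /proc filesystem (the grep pattern may need adjusting for your architecture's interrupt names):

```shell
# Sum the per-CPU timer interrupt counters, wait 10 seconds, sum
# again, and report the difference as the baseline rate.
count_timer() {
    grep -i timer /proc/interrupts |
    awk '{for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) s += $i} END {print s+0}'
}
before=$(count_timer)
sleep 10
after=$(count_timer)
echo "timer interrupts in 10s: $((after - before))"
```

Context-switch counts well above this baseline suggest processes are blocking voluntarily rather than being preempted by the timer.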

Disk Optimization Ideas

怎甘沉沦 · Submitted on 2019-12-04 10:27:18
Disk optimization ideas.

Locating the problem: as before, when a performance problem appears we cannot run every tool; instead, start with the tools that cover the most metrics, such as top, iostat, and vmstat, to narrow the scope:
- first use top and iostat to find the disk I/O bottleneck;
- then use iotop, pidstat, and similar tools to identify the process causing the bottleneck;
- next use strace, lsof, and similar tools to analyze that process's I/O behavior;
- finally, combine this with how the application works to determine where the I/O comes from.

Optimization ideas: since many factors affect disk I/O performance, we discuss optimization at three levels: the application, the file system, and the disk.

1. Application optimizations. The application sits at the top of the I/O stack; through system calls it can adjust the I/O pattern (sequential vs. random, synchronous vs. asynchronous), and it is also the ultimate source of the I/O data. In my view, there are several ways to optimize an application's I/O performance:
1) Replace random writes with appends to reduce seek overhead and speed up writes.
2) Use buffered I/O to make full use of the system cache and reduce the number of actual I/O operations.
3) Build a cache inside the application, or use an external cache system such as Redis. This lets the application control the cached data and its lifetime, and also reduces the impact that other applications' cache usage has on it.
4) When the same region of disk is read and written frequently, use mmap instead of read/write to reduce memory copies.
5)
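The triage steps above might look like this in practice. This is a sketch, not a prescription; 1234 stands in for whatever PID the earlier steps implicate, and the tools must be installed separately:

```shell
top                                    # step 1: overall load, iowait
iostat -x 1 3                          # step 1: per-device utilization, await
pidstat -d 1 3                         # step 2: per-process read/write rates
strace -f -e trace=read,write -p 1234  # step 3: the suspect's actual I/O calls
lsof -p 1234                           # step 3: which files it has open
```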

[Original] How to Choose a Timed Packet-Capture Approach

坚强是说给别人听的谎言 · Submitted on 2019-12-04 07:35:49
Requirement: capture packets for a specified length of time, e.g. a 10-second capture.

Candidate approaches:
- use the -G and -W options of the tcpdump command;
- script it yourself: kill the tcpdump capture process once the specified time is reached;
- use the -a duration:xx option of the tshark command.

A timed capture based on tcpdump's -G and -W options. Because tcpdump is so well known, nine people out of ten will think of it first. The tcpdump man page describes the timing-related options as follows:

-G rotate_seconds: if set, tcpdump rotates the dump file named via -w every rotate_seconds seconds. The file name is given by the -w option and should contain a timestamp format as defined by strftime(3); if no time format is given, each new file overwrites the previous one. When used together with -C, file names take the form 'file<count>'.

-W: used together with -C, limits the number of files created to the given value; once that number is reached, tcpdump starts overwriting files from the beginning, which in effect implements a 'rotating' buffer. This option also pads file names with enough leading zeros to support the requested maximum and keep them sorted correctly. Used together with -G
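Putting -G and -W together gives a one-shot timed capture. A sketch (it needs root, and eth0 is a placeholder for a real interface on your machine):

```shell
# Rotate the dump file after 10 seconds; with -W 1 tcpdump exits
# after the first rotation, i.e. after exactly 10 seconds of capture.
tcpdump -i eth0 -G 10 -W 1 -w 'cap_%Y%m%d_%H%M%S.pcap'
```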

How should I use strace to sniff the serial port?

末鹿安然 · Submitted on 2019-12-04 07:25:55
I am writing an application on Linux and need to access the serial port. For debugging purposes I need to sniff what comes and/or goes through the serial port. I looked around and found out I can use strace to do that. So I tried the following:
- I print the file descriptor of the serial device that I use. (After restarting my application a few times, I reassured myself that the file-descriptor number my application gets from the kernel is 4.)
- If I start my application as strace -e write=4 ./myapp, I would expect to get messages in the terminal from file descriptor 4 only. Instead I get lots
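One caveat worth noting about the approach in the question: -e write=4 only controls which descriptors get a full hex dump of their buffers; it does not filter which system calls strace prints. Combining it with a syscall filter cuts the noise. A sketch, where ./myapp and fd 4 are taken from the question:

```shell
# Show only read/write syscalls, and hex-dump the data passing
# through file descriptor 4 in both directions.
strace -e trace=read,write -e read=4 -e write=4 ./myapp
```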

Profiling for wall-time on Linux

人走茶凉 · Submitted on 2019-12-04 03:05:12
I have an application that I want to profile with respect to how much time is spent in various activities. Since this application is I/O-intensive, I want a report that summarizes how much time (wall time) is spent in each library/system call. I've tried OProfile, but it seems to report time in terms of unhalted CPU cycles (that's CPU time, not real time). I've tried strace -T, which gives wall time, but the data generated is huge and producing a summary report is difficult (do awk/Python scripts exist for this?). Now I'm looking at SystemTap, but I don't find any script that is close enough and
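One possible answer, assuming a reasonably recent strace (the -w flag needs strace 4.9 or later; ./myapp stands in for the real application): strace's built-in summary can be switched from system time to wall-clock time:

```shell
# -c: per-syscall summary table; -f: follow child processes;
# -w: summarize wall-clock latency instead of system CPU time,
#     which is what an I/O-bound profile actually needs.
strace -c -f -w ./myapp
```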

Tracing calls to a shared library

末鹿安然 · Submitted on 2019-12-04 01:57:02
I am developing a program under Linux. For debugging purposes I want to trace all calls from my program to a certain (preferably shared) library. (I do not want to trace calls happening inside the library.) For syscalls there is strace. Is there any instrument to trace calls to a shared library? The tool you are looking for is called ltrace. It lets you trace any call from the program to all (or a given set of) libraries. For example, the following call will list any call to an external function loaded from a shared library: $> ltrace ls / __libc_start_main(0x4028c0, 2, 0x7fff1f4e72d8,
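To restrict the trace to a single library rather than all of them, ltrace's -l option takes a library pattern. A sketch; the library name here is an assumption, substitute the one you care about:

```shell
# Trace only calls from ./myapp that resolve into libcrypto;
# calls into other libraries are left out of the output.
ltrace -l 'libcrypto.so*' ./myapp
```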

Show complete arguments in strace even in curly brackets

萝らか妹 · Submitted on 2019-12-03 11:19:11
Question: I know the -s option should display longer arguments, but it doesn't always work (probably because of those curly brackets, which denote a struct or nested arguments?). Even after running strace -s1000 my_command, this argument is still truncated: ioctl(3, SNDCTL_TMR_TEMPO or TCGETA, {B9600 -opost -isig -icanon -echo ...}) = 0. How can I see the complete arguments?

Answer 1: There is such an option among the strace parameters: use the -v command-line switch. Furthermore, due to the open-source nature of
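For the ioctl from the question, the fix might look like this (a sketch; stty -a is used here only because it issues a terminal ioctl similar to the one shown):

```shell
# -v prints structs unabbreviated instead of eliding them with "...";
# -s 1000 keeps long string arguments intact as well.
strace -v -s 1000 -e trace=ioctl stty -a
```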

How to trace per-file IO operations in Linux?

杀马特。学长 韩版系。学妹 · Submitted on 2019-12-03 07:32:34
Question: I need to track read system calls for specific files, and I'm currently doing this by parsing the output of strace. Since read operates on file descriptors, I have to keep track of the current mapping between fd and path. Additionally, seek has to be monitored to keep the current position up to date in the trace. Is there a better way to get per-application, per-file-path I/O traces on Linux?

Answer 1: First, you probably don't need to keep track, because the mapping between fd and path is available in
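Two sketches of how the fd-to-path bookkeeping can be avoided, assuming a Linux /proc filesystem:

```shell
# 1) strace -y decorates every fd argument with the path it refers
#    to, so no manual fd -> path tracking is needed:
strace -y -e trace=openat,read,lseek cat /etc/hostname
# 2) The live mapping for any process is visible under /proc:
ls -l /proc/self/fd
```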

Get all modules/packages used by a Python project

回眸只為那壹抹淺笑 · Submitted on 2019-12-03 04:55:50
Question: I have a Python GUI application, and I need to know all the libraries the application links to, so that I can check the license compatibility of all of them. I have tried using strace, but strace seems to report all the packages even if they are not used by the application. And I tried Python's ModuleFinder, but it just returns the modules that are inside python2.7, not the system-level packages that are linked. So is there any way I can get all the libraries that are linked from my
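One low-tech way to approximate this, assuming the application is running on Linux: every shared library the process has actually mapped shows up in /proc/<pid>/maps. A sketch, where $PID is a hypothetical placeholder for the GUI application's PID:

```shell
# Print the unique .so paths mapped into the running process.
awk '/\.so/ {print $NF}' "/proc/$PID/maps" | sort -u
```

Unlike a raw strace log, this lists only libraries that were actually loaded, though it still misses anything loaded and unloaded before you look.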