file-descriptor

How to retrieve the user name from the user ID

拈花ヽ惹草 submitted on 2019-11-27 19:29:32

Question: I am implementing the ls command on Unix while learning from a book. While coding my implementation of ls with the -l flag, I see that I have to print the user and group names of the file. So far I have the user and group IDs from the following lines: struct stat statBuf; statBuf.st_uid; //For the user id. statBuf.st_gid; //For the group id. In the default ls command on Unix, the information of the file is printed in such a way that the user name is shown
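The excerpt stops before the answer; the standard way to map the IDs from struct stat to names is getpwuid(3) and getgrgid(3). A minimal sketch, which falls back to the numeric ID when no database entry exists (the same behavior ls -l has):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <pwd.h>
#include <grp.h>

/* Resolve a uid to a user name; fall back to the numeric id
 * when there is no matching passwd entry. */
const char *user_name(uid_t uid, char *buf, size_t len)
{
    struct passwd *pw = getpwuid(uid);
    if (pw)
        snprintf(buf, len, "%s", pw->pw_name);
    else
        snprintf(buf, len, "%u", (unsigned)uid);
    return buf;
}

/* Same idea for the group id, via the group database. */
const char *group_name(gid_t gid, char *buf, size_t len)
{
    struct group *gr = getgrgid(gid);
    if (gr)
        snprintf(buf, len, "%s", gr->gr_name);
    else
        snprintf(buf, len, "%u", (unsigned)gid);
    return buf;
}
```

Typical use after stat(): pass statBuf.st_uid to user_name() and statBuf.st_gid to group_name() and print both before the file name.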

Get list of open files (descriptors) in OS X

折月煮酒 submitted on 2019-11-27 19:14:46

I would like to get a list of open files in a process on OS X (10.9.1). On Linux I was able to get this from /proc/PID/fd . However, I'm not sure how to get the same on OS X. I found that procfs is not present on OS X by default (possible implementations exist, but I do not want to go that way). So how do I get the list of open files in a process on OS X natively? One way is lsof ; is there any other support available? Please let me know where I can get more info on this. Thanks. At least on OS X 10.10 (Yosemite; didn't check on Mavericks), you can get the list of open files by

Stream live video from phone to phone using socket fd

醉酒当歌 submitted on 2019-11-27 17:24:37

I am new to Android programming and have found myself stuck. I have been researching various ways to stream live video from phone to phone and seem to have it mostly functional, except, of course, the most important part: playing the stream. It appears to be sending the stream from one phone, but the second phone is not able to play it. Here is the code for the playback side: public class VideoPlayback extends Activity implements Callback { MediaPlayer mp; private SurfaceView mPreview; private SurfaceHolder holder; private TextView mTextview; public static final int SERVERPORT = 6775;

Check the open FD limit for a given process in Linux

帅比萌擦擦* submitted on 2019-11-27 17:23:09

I recently had a Linux process which "leaked" file descriptors: it opened them and didn't properly close some of them. If I had monitored this, I could have told - in advance - that the process was reaching its limit. Is there a nice Bash/Python way to check the FD usage ratio for a given process on an Ubuntu Linux system? EDIT: I now know how to check how many open file descriptors there are; I only need to know how many file descriptors are allowed for a process. Some systems (like Amazon EC2) don't have the /proc/pid/limits file. Thanks, Udi Count the entries in /proc/<pid>/fd/ . The hard and
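Both halves of the answer can be done without /proc/pid/limits: count the entries in /proc/self/fd and ask getrlimit(2) for the allowance. A sketch (Linux-specific because of /proc):

```c
#include <stdio.h>
#include <dirent.h>
#include <sys/resource.h>

/* Count this process's open descriptors by listing /proc/self/fd.
 * The DIR stream itself occupies one descriptor, so subtract it. */
int count_open_fds(void)
{
    DIR *d = opendir("/proc/self/fd");
    if (d == NULL)
        return -1;
    int n = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        if (e->d_name[0] != '.')   /* skip "." and ".." */
            n++;
    closedir(d);
    return n - 1;
}

/* The per-process allowance comes from getrlimit(), which works even
 * on systems that lack the /proc/pid/limits file. */
long fd_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
        return -1;
    return (long)rl.rlim_cur;
}
```

The usage ratio is then simply count_open_fds() divided by fd_limit(); polling that from a monitoring script would have flagged the leak before the limit was hit.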

How to use sendmsg() to send a file-descriptor via sockets between 2 processes?

一世执手 submitted on 2019-11-27 16:46:42

Question: After @cnicutar answered me on this question, I tried to send a file descriptor from the parent process to its child. Based on this example, I wrote this code: int socket_fd, accepted_socket_fd, on = 1; int server_sd, worker_sd, pair_sd[2]; struct sockaddr_in client_address; struct sockaddr_in server_address; /* Setup the network socket. */ if(
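The excerpt cuts off before the sendmsg() call itself. The mechanism is SCM_RIGHTS ancillary data over a Unix-domain socket; a self-contained sketch of the two sides (the one mandatory payload byte is arbitrary):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send an open descriptor over a Unix-domain socket via SCM_RIGHTS. */
int send_fd(int sock, int fd)
{
    char data = 'F';  /* at least one byte of real payload is required */
    struct iovec iov = { .iov_base = &data, .iov_len = 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;          /* guarantees proper alignment */
    } u;
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;      /* "pass these descriptors" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor; the kernel installs it under a new number. */
int recv_fd(int sock)
{
    char data;
    struct iovec iov = { .iov_base = &data, .iov_len = 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;
    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}
```

In the parent/child scenario from the question, the pair_sd[2] array from socketpair(AF_UNIX, ...) is exactly the channel these two helpers would run over.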

Process leaked file descriptors error on JENKINS

情到浓时终转凉″ submitted on 2019-11-27 13:56:46

I am getting this error when I configured a job to stop and start a Tomcat server: Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information. When I googled it, I got a recommended solution: set BUILD_ID=dontKillMe. Is this the exact solution? If yes, where do I need to set BUILD_ID? Inside an ant/post-build script? Can anyone please clarify this? lu_ko Yes, setting a fake BUILD_ID for the process tells Jenkins to ignore it when detecting spawned processes, so the process will not be killed after the job finishes.
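Concretely, BUILD_ID is overridden in the job's "Execute shell" build step, on the same line that launches the long-lived process. A sketch, where the Tomcat path is an assumed example:

```shell
# Jenkins "Execute shell" build step.
# Overriding BUILD_ID hides the spawned process from Jenkins'
# ProcessTreeKiller, so Tomcat survives the end of the build.
BUILD_ID=dontKillMe /opt/tomcat/bin/startup.sh
```

The override only needs to cover the command that starts the daemon; the rest of the build step can keep the real BUILD_ID.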

How to check if a given file descriptor stored in a variable is still valid?

被刻印的时光 ゝ submitted on 2019-11-27 12:01:48

I have a file descriptor stored in a variable, say var. How can I check whether that descriptor is still valid at a later stage? fdvar1 = open(.....); fdvar2 = fdvar1; // Please ignore the bad design .... // lots of loops, conditionals and threads. It can call close(fdvar2) also. .... if(CheckValid(fdvar1)) // How can I do this check ? write(fdvar1, ....); Now I want to check whether fdvar1 (which still holds the opened descriptor) is still valid. Are there any APIs for that? R.. fcntl(fd, F_GETFD) is the canonical, cheapest way to check that fd is a valid open file descriptor. If you need to batch-check a lot,
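The CheckValid() placeholder from the question maps directly onto the fcntl() idiom the answer names; a minimal sketch:

```c
#include <errno.h>
#include <fcntl.h>

/* Returns 1 if fd refers to an open descriptor, 0 otherwise.
 * F_GETFD only fails with EBADF when fd is not open, and it has
 * no side effects on the descriptor. */
int fd_is_valid(int fd)
{
    return fcntl(fd, F_GETFD) != -1 || errno != EBADF;
}
```

Note the caveat implicit in the question's comment about threads: in a multithreaded program the check is only advisory, because another thread can close (or reuse) the descriptor number between the check and the subsequent write().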

two file descriptors to same file

狂风中的少年 submitted on 2019-11-27 11:33:06

Question: Using the POSIX read() and write() Linux calls, is it guaranteed that if I write through one file descriptor and read through another file descriptor, in a serial fashion such that the two actions are mutually exclusive of each other, my reading file descriptor will always see what was last written through the writing file descriptor? I believe this is the case, but I want to make sure, and the man page isn't very helpful on this. Answer 1: It depends on where you got the two file descriptors. If they come
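For the common case of two descriptors opened independently on the same regular file, a completed write() through one is visible to a subsequent read() through the other. A sketch that demonstrates this (the path is an arbitrary example):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write through one descriptor, then read the same file through a
 * second, independently opened descriptor. Returns 0 if the read
 * saw exactly what was written, -1 otherwise. */
int write_then_read(const char *path)
{
    int wfd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    int rfd = open(path, O_RDONLY);  /* separate open file description */
    if (wfd == -1 || rfd == -1)
        return -1;

    if (write(wfd, "hello", 5) != 5)  /* completes before the read */
        return -1;

    char buf[8] = {0};
    ssize_t n = read(rfd, buf, 5);    /* reads from its own offset, 0 */
    close(wfd);
    close(rfd);
    return (n == 5 && memcmp(buf, "hello", 5) == 0) ? 0 : -1;
}
```

Note the two descriptors here have independent file offsets because they come from separate open() calls; descriptors obtained via dup() or fork() instead share one offset, which is the distinction the truncated answer is heading toward.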

Can anyone explain a simple description regarding 'file descriptor' after fork()?

前提是你 submitted on 2019-11-27 11:21:19

In "Advanced Programming in the Unix Environment", 2nd edition, by W. Richard Stevens, Section 8.3 (the fork function), here's the description: It is important that the parent and the child share the same file offset. Consider a process that forks a child, then waits for the child to complete. Assume that both processes write to standard output as part of their normal processing. If the parent has its standard output redirected (by a shell, perhaps), it is essential that the parent's file offset be updated by the child when the child writes to standard output. [1. What does it mean? if parent's std

node and Error: EMFILE, too many open files

左心房为你撑大大i submitted on 2019-11-27 10:06:30

For some days I have searched for a working solution to the error Error: EMFILE, too many open files . It seems that many people have the same problem. The usual answer involves increasing the number of file descriptors. So I've tried this: sysctl -w kern.maxfiles=20480 (the default value is 10240). This seems a little strange to me, because the number of files I'm handling in the directory is under 10240. Even stranger, I still receive the same error after increasing the number of file descriptors. Second question: After a number of searches I found a workaround for the "too many open
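One plausible reason the sysctl change had no effect is that kern.maxfiles is the system-wide cap, while EMFILE is raised when the per-process limit is exceeded; that limit is separate and often much lower. A sketch of checking and raising it for the shell that launches node (the limit values are examples):

```shell
# System-wide cap (what sysctl -w kern.maxfiles changes):
sysctl kern.maxfiles

# Per-process cap -- this is the one EMFILE is about, and it is
# often only 256 on macOS:
ulimit -n

# Raise it for this shell and its children before running node:
ulimit -n 10240
```

The other common fix is at the application level: limiting how many files the node program opens concurrently (e.g. processing the directory in batches) instead of opening them all at once.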