Question
I am trying to write a simple C serial communication program for Linux. I am confused about the blocking/non-blocking reads and VMIN/VTIME relationships.
My question is: should I set VMIN/VTIME according to whether the open call is blocking or non-blocking?
For example, if I have the following open call:
open( "/dev/ttyS0", O_RDWR|O_NONBLOCK|O_NOCTTY)
Should I set the VMIN/VTIME to:
.c_cc[VTIME] = 0;
.c_cc[VMIN] = 0;
and if I have blocking mode like:
open( "/dev/ttyS0", O_RDWR|O_NOCTTY)
should I set the VMIN/VTIME to:
.c_cc[VTIME] = 0;
.c_cc[VMIN] = 1;
?
Does it make any difference what VMIN/VTIME are set to even though the port open flags are set appropriately?
If anybody could help me understand the relationship between VMIN/VTIME and blocking/non-blocking ports I would really appreciate it.
Thanks
Answer 1:
Andrey is right. In non-blocking mode, VMIN/VTIME have no effect (FNDELAY / O_NDELAY seem to be Linux variants of O_NONBLOCK, the portable POSIX flag).
When using select() with a file in non-blocking mode, you get an event for every byte that arrives. At high serial data rates, this hammers the CPU. It's better to use blocking mode with VMIN, so that select() waits for a block of data before firing an event, and VTIME to limit the delay, for blocks smaller than VMIN.
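For illustration, here is a minimal sketch of that approach, assuming a raw (non-canonical) port at 115200 bps; the device path and the VMIN/VTIME values (32 bytes, 0.1 s) are only placeholders, not values from the answer:

/* Sketch: blocking open, non-canonical mode, VMIN/VTIME chosen so that
 * select() fires once a block of data (or interbyte silence) is available.
 * Error handling is abbreviated. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   /* blocking open */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 /* non-canonical (raw) mode */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tio.c_cc[VMIN]  = 32;            /* wait for up to 32 bytes ...       */
    tio.c_cc[VTIME] = 1;             /* ... or 0.1 s of interbyte silence */
    tcsetattr(fd, TCSANOW, &tio);

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    if (select(fd + 1, &rfds, NULL, NULL, NULL) > 0) {
        unsigned char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);
        printf("got %zd bytes\n", n);
    }

    close(fd);
    return 0;
}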
Sam said "If you want to make sure you get data every half second you could set vtime" (VTIME = 5).
Intuitively, you may expect that to be true, but it's not. The BSD termios man page explains it better than the Linux one (though they both work the same way). The VTIME timer is an interbyte timer: it starts over with each new byte arriving at the serial port. In a worst-case scenario, select() can wait 20 seconds or more before firing an event.
Suppose you have VMIN = 250, VTIME = 1, and a serial port at 115200 bps. Also suppose you have an attached device sending single bytes slowly, at a consistent rate of 9 cps. The time between bytes is 0.11 seconds, long enough for the 0.10 s interbyte timer to expire and for select() to report a readable event for each byte. All is well.
Now suppose your device increases its output rate to 11 cps. The time between bytes is 0.09 seconds, not long enough for the interbyte timer to expire, so with each new byte it starts over. To get a readable event, VMIN = 250 must be satisfied, and at 11 cps that takes 22.7 seconds. It may seem that your device has stalled, but the VTIME design is the real cause of the delay.
I tested this with two Perl scripts, sender and receiver, a two port serial card, and a null modem cable. I proved that it works as the man page says. VTIME is an interbyte timer that's reset with the arrival of each new byte.
A better design would have the timer anchored, not rolling. It would continue ticking until it expires, or VMIN is satisfied, whichever comes first. The existing design could be fixed, but there is 30 years of legacy to overcome.
In practice, you may rarely encounter such a scenario. But it lurks, so beware.
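As a side note (this is not part of the answer above, just a sketch of a common application-level workaround): if you need an overall time bound rather than an interbyte one, you can keep an anchored deadline in user space and let select()'s timeout enforce it. This assumes the port is configured so select() reports readiness as soon as any data arrives (e.g. VMIN = 1, VTIME = 0); the function name and sizes are hypothetical:

/* Read up to `want` bytes, but return whatever arrived once `timeout_ms`
 * has elapsed since the call started. Returns bytes read, or -1 on error. */
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

ssize_t read_with_deadline(int fd, unsigned char *buf, size_t want, int timeout_ms)
{
    size_t got = 0;
    struct timeval deadline, now;
    gettimeofday(&deadline, NULL);
    deadline.tv_sec  += timeout_ms / 1000;
    deadline.tv_usec += (timeout_ms % 1000) * 1000;

    while (got < want) {
        gettimeofday(&now, NULL);
        long remain_us = (deadline.tv_sec - now.tv_sec) * 1000000L
                       + (deadline.tv_usec - now.tv_usec);
        if (remain_us <= 0)
            break;                              /* overall deadline reached */

        struct timeval tv = { remain_us / 1000000L, remain_us % 1000000L };
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        int r = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (r < 0)  return -1;                  /* select error */
        if (r == 0) break;                      /* timed out    */

        ssize_t n = read(fd, buf + got, want - got);
        if (n < 0)  return -1;
        got += (size_t)n;
    }
    return (ssize_t)got;
}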
Answer 2:
Make sure to unset the FNDELAY flag for the descriptor using fcntl, otherwise VMIN/VTIME are ignored. See the Serial Programming Guide for POSIX Operating Systems.
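A minimal sketch of that, using the standard fcntl() flag-manipulation pattern (the helper name is just an example):

#include <fcntl.h>

int make_blocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    /* FNDELAY / O_NDELAY are older spellings of the POSIX O_NONBLOCK flag */
    return fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
}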
Answer 3:
I'd recommend setting vmin and vtime both to 0 if you're using nonblocking reads. That gives you the behavior that if data is available, it will be returned, and the fd will be ready for select, poll, etc. whenever data is available.
vmin and vtime are useful if you're doing blocking reads. For example if you're expecting a particular packet size then you could set vmin. If you want to make sure you get data every half second you could set vtime.
Obviously, vmin and vtime only apply in non-canonical (non-line) mode.
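A rough sketch of those settings, assuming a hypothetical 64-byte packet size (the values are examples, not from the answer):

#include <termios.h>

void configure_packet_read(int fd)
{
    struct termios tio;
    tcgetattr(fd, &tio);
    tio.c_lflag &= ~(ICANON | ECHO);   /* non-canonical (non-line) mode  */
    tio.c_cc[VMIN]  = 64;              /* expected packet size (example) */
    tio.c_cc[VTIME] = 5;               /* 0.5 s interbyte timeout        */
    tcsetattr(fd, TCSANOW, &tio);
}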
My suspicion is that in nonblocking mode, if you set vmin to, say, 5, then the fd will not be read-ready and read() will fail with EWOULDBLOCK until 5 characters are ready. I don't know, and I don't have an easy test case to try, because all the serial work I've done has either been blocking or has set both to 0.
Source: https://stackoverflow.com/questions/20154157/termios-vmin-vtime-and-blocking-non-blocking-read-operations