#include <poll.h>

int poll(struct pollfd *fds, nfds_t nfds, int timeout);

#define _GNU_SOURCE         /* See feature_test_macros(7) */
#include <signal.h>
#include <poll.h>

int ppoll(struct pollfd *fds, nfds_t nfds,
          const struct timespec *timeout_ts, const sigset_t *sigmask);
The set of file descriptors to be monitored is specified in the fds argument, which is an array of structures of the following form:
struct pollfd {
    int   fd;         /* file descriptor */
    short events;     /* requested events */
    short revents;    /* returned events */
};
The caller should specify the number of items in the fds array in nfds.
The field fd contains a file descriptor for an open file. If this field is negative, then the corresponding events field is ignored and the revents field returns zero. (This provides an easy way of ignoring a file descriptor for a single poll() call: simply negate the fd field. Note, however, that this technique can't be used to ignore file descriptor 0.)
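The following fragment (illustrative only, not part of the original page; the array fds, its size nfds, the index i, and the variable ready are assumed to already exist, with fds[i].fd greater than zero) sketches this technique:

    fds[i].fd = -fds[i].fd;         /* negative fd: entry ignored by poll() */
    ready = poll(fds, nfds, -1);    /* fds[i].revents will be returned as 0 */
    fds[i].fd = -fds[i].fd;         /* restore the entry for later calls */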
The field events is an input parameter, a bit mask specifying the events the application is interested in for the file descriptor fd. This field may be specified as zero, in which case the only events that can be returned in revents are POLLHUP, POLLERR, and POLLNVAL (see below).
The field revents is an output parameter, filled by the kernel with the events that actually occurred. The bits returned in revents can include any of those specified in events, or one of the values POLLERR, POLLHUP, or POLLNVAL. (These three bits are meaningless in the events field, and will be set in the revents field whenever the corresponding condition is true.)
If none of the events requested (and no error) has occurred for any of the file descriptors, then poll() blocks until one of the events occurs.
The timeout argument specifies the number of milliseconds that poll() should block waiting for a file descriptor to become ready. The call will block until either:

*  a file descriptor becomes ready;

*  the call is interrupted by a signal handler; or

*  the timeout expires.
Note that the timeout interval will be rounded up to the system clock granularity, and kernel scheduling delays mean that the blocking interval may overrun by a small amount. Specifying a negative value in timeout means an infinite timeout. Specifying a timeout of zero causes poll() to return immediately, even if no file descriptors are ready.
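The following minimal program is an illustrative sketch (not taken from this page; the five-second timeout is an arbitrary choice): it waits up to five seconds for standard input to become readable.

    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct pollfd pfd;

        pfd.fd = STDIN_FILENO;      /* monitor standard input */
        pfd.events = POLLIN;        /* interested in "data to read" */

        int ready = poll(&pfd, 1, 5000);    /* timeout of 5000 ms */
        if (ready == -1) {
            perror("poll");
            return 1;
        }
        if (ready == 0)
            printf("timeout: no file descriptor became ready\n");
        else if (pfd.revents & POLLIN)
            printf("stdin is readable\n");

        return 0;
    }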
The bits that may be set/returned in events and revents are defined in <poll.h>:

POLLIN     There is data to read.

POLLPRI    There is urgent data to read (e.g., out-of-band data on a TCP socket; a pseudoterminal master in packet mode has seen a state change on the slave).

POLLOUT    Writing now will not block.

POLLRDHUP (since Linux 2.6.17)
           Stream socket peer closed connection, or shut down writing half of connection. The _GNU_SOURCE feature test macro must be defined (before including any header files) in order to obtain this definition.

POLLERR    Error condition (output only).

POLLHUP    Hang up (output only).

POLLNVAL   Invalid request: fd not open (output only).
When compiling with _XOPEN_SOURCE defined, one also has the following, which convey no further information beyond the bits listed above:

POLLRDNORM   Equivalent to POLLIN.

POLLRDBAND   Priority band data can be read (generally unused on Linux).

POLLWRNORM   Equivalent to POLLOUT.

POLLWRBAND   Priority data may be written.
Linux also knows about, but does not use, POLLMSG.
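A typical way of inspecting these bits after poll() returns is sketched below (an illustrative fragment, not from this page; the fds array and nfds count are assumed to exist, and error handling is simplified):

    /* Scan the array after poll() returns. */
    for (nfds_t i = 0; i < nfds; i++) {
        if (fds[i].fd < 0)
            continue;                   /* entry was deliberately ignored */
        if (fds[i].revents & (POLLERR | POLLHUP | POLLNVAL)) {
            /* Error, hangup, or descriptor not open: these bits can be
               returned even though they were not requested in events.
               Stop monitoring this entry. */
            fds[i].fd = -1;
        } else if (fds[i].revents & POLLIN) {
            /* Data can be read from fds[i].fd without blocking. */
        }
    }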
Other than the difference in the precision of the timeout argument, the following ppoll() call:
ready = ppoll(&fds, nfds, timeout_ts, &sigmask);

is equivalent to atomically executing the following calls:
sigset_t origmask;
int timeout;

timeout = (timeout_ts == NULL) ? -1 :
          (timeout_ts->tv_sec * 1000 + timeout_ts->tv_nsec / 1000000);
sigprocmask(SIG_SETMASK, &sigmask, &origmask);
ready = poll(&fds, nfds, timeout);
sigprocmask(SIG_SETMASK, &origmask, NULL);
See the description of pselect(2) for an explanation of why ppoll() is necessary.
If the sigmask argument is specified as NULL, then no signal mask manipulation is performed (and thus ppoll() differs from poll() only in the precision of the timeout argument).
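The usual reason for passing a non-NULL sigmask is sketched in the following program (an illustrative example, not taken from this page; SIGINT and the placeholder handler are arbitrary choices). SIGINT is blocked during normal execution and unblocked only while ppoll() waits, so the signal cannot be delivered between checking for it and starting to block.

    #define _GNU_SOURCE
    #include <poll.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void
    handler(int sig)
    {
        (void) sig;     /* placeholder handler */
    }

    int
    main(void)
    {
        struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
        struct sigaction sa = { .sa_handler = handler };
        sigset_t blocked, ppoll_mask;

        /* Install a handler and block SIGINT in normal execution. */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);
        sigemptyset(&blocked);
        sigaddset(&blocked, SIGINT);
        sigprocmask(SIG_BLOCK, &blocked, NULL);

        /* Mask to use inside ppoll(): current mask with SIGINT removed,
           so a pending SIGINT is delivered atomically while we wait. */
        sigprocmask(SIG_SETMASK, NULL, &ppoll_mask);
        sigdelset(&ppoll_mask, SIGINT);

        int ready = ppoll(&pfd, 1, NULL, &ppoll_mask);
        if (ready == -1)
            perror("ppoll");        /* EINTR here means SIGINT arrived */
        else
            printf("%d descriptor(s) ready\n", ready);
        return 0;
    }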
The timeout_ts argument specifies an upper limit on the amount of time that ppoll() will block. This argument is a pointer to a structure of the following form:
struct timespec {
    time_t tv_sec;     /* seconds */
    long   tv_nsec;    /* nanoseconds */
};
If timeout_ts is specified as NULL, then ppoll() can block indefinitely.
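For illustration (the 1.5-second value is arbitrary, and fds, nfds, and ready are assumed to be declared elsewhere), a finite timeout is built by filling in the two fields:

    struct timespec tmo;

    tmo.tv_sec = 1;                  /* 1 second ... */
    tmo.tv_nsec = 500000000;         /* ... plus 500 million ns = 1.5 s total */

    ready = ppoll(fds, nfds, &tmo, NULL);   /* NULL sigmask: behaves like poll() */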
The ppoll() system call was added to Linux in kernel 2.6.16. The ppoll() library call was added in glibc 2.4.
For a discussion of what may happen if a file descriptor being monitored by poll() is closed in another thread, see select(2).
The raw ppoll() system call has a fifth argument, size_t sigsetsize, which specifies the size in bytes of the sigmask argument. The glibc ppoll() wrapper function specifies this argument as a fixed value: the size of the kernel's sigset_t type, which is smaller than glibc's sizeof(sigset_t).
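The following sketch shows a direct invocation via syscall(2); it is illustrative only and assumes a typical 64-bit Linux/glibc environment where the kernel's signal-set size is _NSIG / 8 bytes (commonly 8). Normal code should use the glibc wrapper instead.

    #define _GNU_SOURCE
    #include <poll.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
        struct timespec ts = { .tv_sec = 2 };   /* arbitrary 2-second timeout */
        sigset_t mask;

        sigemptyset(&mask);

        /* Assumption: _NSIG / 8 is the size of the kernel's signal-set
           representation, which is what sigsetsize must equal.  Unlike
           the glibc wrapper, the raw call may also update ts with the
           remaining time. */
        long ready = syscall(SYS_ppoll, &pfd, 1UL, &ts, &mask,
                             (size_t) (_NSIG / 8));
        if (ready == -1)
            perror("ppoll (raw)");
        else
            printf("%ld descriptor(s) ready\n", ready);
        return 0;
    }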