virNetClientSetTLSSession: Restore original signal mask

Currently, we use pthread_sigmask(SIG_BLOCK, ...) prior to calling
poll(). This is okay, as we don't want poll() to be interrupted.
However, immediately after poll() returns we try to restore the
original signal mask - again using SIG_BLOCK. But as the man page
says, SIG_BLOCK adds signals to the signal mask:

SIG_BLOCK
      The set of blocked signals is the union of the current set and the set argument.

Therefore, when restoring the original mask, we need to completely
overwrite the current one with the saved copy, and hence we should
be using:

SIG_SETMASK
      The set of blocked signals is set to the argument set.
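
As a minimal sketch of the difference (a standalone demo, not part of
the patch): after blocking SIGUSR1 and saving the old mask, a
"restore" via SIG_BLOCK leaves SIGUSR1 blocked, while SIG_SETMASK
actually reinstates the saved mask:

#include <pthread.h>
#include <signal.h>
#include <stdio.h>

int main(void)
{
    sigset_t oldmask, blockmask, cur;

    sigemptyset(&blockmask);
    sigaddset(&blockmask, SIGUSR1);

    /* Block SIGUSR1, saving the original (empty) mask. */
    pthread_sigmask(SIG_BLOCK, &blockmask, &oldmask);

    /* Wrong: SIG_BLOCK unions oldmask into the current mask,
     * so SIGUSR1 stays blocked. */
    pthread_sigmask(SIG_BLOCK, &oldmask, NULL);
    pthread_sigmask(SIG_SETMASK, NULL, &cur);
    printf("after SIG_BLOCK restore:   blocked = %d\n",
           sigismember(&cur, SIGUSR1));

    /* Right: SIG_SETMASK overwrites the mask with oldmask,
     * so SIGUSR1 is unblocked again. */
    pthread_sigmask(SIG_SETMASK, &oldmask, NULL);
    pthread_sigmask(SIG_SETMASK, NULL, &cur);
    printf("after SIG_SETMASK restore: blocked = %d\n",
           sigismember(&cur, SIGUSR1));
    return 0;
}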

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
commit 3d4b4f5ac6
parent 963dcf905c
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   2014-03-19 18:10:34 +01:00

@@ -792,7 +792,7 @@ int virNetClientSetTLSSession(virNetClientPtr client,
         if (ret < 0 && (errno == EAGAIN || errno == EINTR))
             goto repoll;
 
-        ignore_value(pthread_sigmask(SIG_BLOCK, &oldmask, NULL));
+        ignore_value(pthread_sigmask(SIG_SETMASK, &oldmask, NULL));
     }
 
     ret = virNetTLSContextCheckCertificate(tls, client->tls);
@@ -816,7 +816,7 @@ int virNetClientSetTLSSession(virNetClientPtr client,
         if (ret < 0 && (errno == EAGAIN || errno == EINTR))
             goto repoll2;
 
-        ignore_value(pthread_sigmask(SIG_BLOCK, &oldmask, NULL));
+        ignore_value(pthread_sigmask(SIG_SETMASK, &oldmask, NULL));
 
         len = virNetTLSSessionRead(client->tls, buf, 1);
         if (len < 0 && errno != ENOMSG) {
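
For context, a minimal sketch of the surrounding pattern with a
hypothetical wait_for_fd() helper (the fd handling is illustrative,
not libvirt's actual code): signals are blocked for the duration of
poll(), and the saved mask is then restored with SIG_SETMASK:

#include <errno.h>
#include <poll.h>
#include <pthread.h>
#include <signal.h>

static int wait_for_fd(int fd, short events)
{
    struct pollfd fds[1] = { { .fd = fd, .events = events } };
    sigset_t blockedsigs, oldmask;
    int ret;

    /* Block signals so poll() is not interrupted mid-handshake. */
    sigfillset(&blockedsigs);
    pthread_sigmask(SIG_BLOCK, &blockedsigs, &oldmask);

 repoll:
    ret = poll(fds, 1, -1);
    if (ret < 0 && (errno == EAGAIN || errno == EINTR))
        goto repoll;

    /* Restore with SIG_SETMASK: it overwrites the mask with the
     * saved one, whereas SIG_BLOCK would merely add to it. */
    pthread_sigmask(SIG_SETMASK, &oldmask, NULL);
    return ret;
}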