In IPv6 networking, the IPV6_V6ONLY socket option is used to ensure that a socket will only use IPv6, and in particular that IPv4-to-IPv6 mapping won't be used for that socket. On many systems a dual-stack socket is the default: unless IPV6_V6ONLY is set, an IPv6 socket bound to the wildcard address also accepts IPv4 connections via IPv4-mapped IPv6 addresses.
There's one very common example where this duality of behavior is a problem. The standard getaddrinfo() call with the AI_PASSIVE flag accepts a nodename parameter and returns a list of addresses to listen on. Passing NULL for nodename is a special case that means listening on the wildcard addresses.
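
For reference, here is a minimal sketch of that call (the port "8080", the hints and the printing loop are illustrative assumptions of mine, not part of the text above):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void)
    {
        struct addrinfo hints, *res, *ai;
        char host[NI_MAXHOST];

        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;    /* IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags    = AI_PASSIVE;   /* wildcard addresses for bind() */

        /* NULL nodename + AI_PASSIVE: ask for the wildcard address(es). */
        int rc = getaddrinfo(NULL, "8080", &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }

        for (ai = res; ai != NULL; ai = ai->ai_next) {
            getnameinfo(ai->ai_addr, ai->ai_addrlen, host, sizeof host,
                        NULL, 0, NI_NUMERICHOST);
            printf("candidate listen address: %s\n", host);
        }
        freeaddrinfo(res);
        return 0;
    }
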
On some systems 0.0.0.0 and :: are returned in that order. When dual-stack sockets are enabled by default and you don't set IPV6_V6ONLY on the socket, the server binds to 0.0.0.0 and then fails to bind the dual-stack ::, and therefore (1) it only works over IPv4 and (2) it reports an error.
I would consider that order wrong, as IPv6 is expected to be preferred. But even when you first bind the dual-stack :: and then the IPv4-only 0.0.0.0, the server still reports an error for the second call.
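
To make the failure concrete, here is a sketch of the usual bind-to-everything loop over the getaddrinfo() result (the helper name bind_all and the error handling are my own assumptions). With the dual-stack default in effect, the second wildcard bind() fails with EADDRINUSE no matter which entry comes first:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>

    static void bind_all(struct addrinfo *res)
    {
        for (struct addrinfo *ai = res; ai != NULL; ai = ai->ai_next) {
            int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;

            if (bind(fd, ai->ai_addr, ai->ai_addrlen) < 0) {
                /* On a dual-stack system this is where the second wildcard
                 * address (:: after 0.0.0.0, or 0.0.0.0 after ::) fails. */
                perror("bind");
                close(fd);
                continue;
            }
            listen(fd, SOMAXCONN);
        }
    }

A loop like this typically logs the failure and carries on, which matches the behavior described above: the service stays up, but listens on only one address family, and an error lands in the log.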
I personally consider the whole idea of a dual-stack socket a mistake. In my project I would rather always set IPV6_V6ONLY explicitly to avoid it. Some people apparently saw dual stack as a good idea, but in that case I would probably explicitly unset IPV6_V6ONLY
and translate a NULL nodename directly to ::, bypassing the getaddrinfo() mechanism.
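
A sketch of the first approach, setting IPV6_V6ONLY explicitly on every AF_INET6 socket before bind() (again, the helper name and error handling are illustrative assumptions):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>

    static int make_listener(const struct addrinfo *ai)
    {
        int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            return -1;

        if (ai->ai_family == AF_INET6) {
            int on = 1;
            /* Restrict this socket to IPv6; the IPv4 wildcard gets its own
             * AF_INET socket from the same getaddrinfo() list. */
            if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof on) < 0) {
                perror("setsockopt(IPV6_V6ONLY)");
                close(fd);
                return -1;
            }
        }

        if (bind(fd, ai->ai_addr, ai->ai_addrlen) < 0 ||
            listen(fd, SOMAXCONN) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

For the dual-stack alternative mentioned above, one would instead skip getaddrinfo() for a NULL nodename, create a single AF_INET6 socket, clear IPV6_V6ONLY with the same setsockopt() call (value 0), and bind it to in6addr_any (::), relying on the dual-stack behavior to also serve IPv4 clients.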