NET+OS Socket SO_LINGER

I am frequently getting

socket.error: [Errno 24] Too many open files

All of my sockets are closed with the closesocket function.

Linger?

When closesocket is called, it will not return until all queued messages for the socket have been successfully sent or the linger timeout has been reached.

It looks like, regardless of the result of send, every socket waits for the linger timeout to expire before freeing itself.

In my specific application this is not desirable, since it needs to keep opening temporary connections inside a while loop:


while (1) {

    struct sockaddr_in server;
    // configure sockaddr_in with the correct s_addr and sin_port

    int tcp_socket = socket(AF_INET, SOCK_STREAM, 0);

    // connect to server
    // send message
    // recv response

    closesocket(tcp_socket);
}



I tried setting SO_LINGER:


struct linger so_linger;
so_linger.l_onoff  = 1;
so_linger.l_linger = 2; // linger timeout in seconds

setsockopt(tcp_socket, SOL_SOCKET, SO_LINGER, (void *)&so_linger, sizeof(so_linger));

It makes no difference; it looks like the 2-second linger is ignored.
Using:


so_linger.l_linger = 0; 

This solves the socket.error: [Errno 24] problem, but now I get an RST instead of a FIN, and that is not how I'd like to handle the issue.

After some research I've found that TIME_WAIT is the reason the sockets are not made available immediately, but I still do not know how to solve this.

Yes, your finding is correct. Setting linger to 0 is a special-case linger value that tells the stack to send an RST instead of a FIN. My guess is that in your case this is NOT what you are trying to do.
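To put the two close behaviours side by side, here is a small sketch using the same BSD-style calls already shown in the question. close_tcp is just a hypothetical helper name, and the exact behaviour on NET+OS should be verified against the docs.


static void close_tcp(int sock, int abortive)
{
    struct linger so_linger;
    so_linger.l_onoff = 1;

    if (abortive) {
        /* Special case: with a zero linger time, closesocket() sends an RST,
         * discards any unsent data, and the socket skips the timed wait state. */
        so_linger.l_linger = 0;
    } else {
        /* Normal case: closesocket() completes the FIN handshake, but the
         * socket then sits in the timed wait state, which is what ties up
         * socket objects when connections are opened in a tight loop. */
        so_linger.l_linger = 2; /* timeout in seconds */
    }

    setsockopt(sock, SOL_SOCKET, SO_LINGER, (void *)&so_linger, sizeof(so_linger));
    closesocket(sock);
}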

What you should be looking at is reducing the MSL (maximum segment lifetime). This is the time that a TCP socket remains tied up after a close; the state the socket sits in is referred to as the "timed wait" (TIME_WAIT) state.

In NET+OS, MSL is manipulated through the NAIpSetTcpMsl API. It is a GLOBAL setting, and therefore you want to set this value BEFORE opening any sockets. All sockets will pick up this value.

The default is (I believe) 60 seconds. You'll want to experiment with the value until you identify one that is adequate for your application.

Information about NAIpSetTcpMsl can be found in the API reference guide at:
Internetworking->Global IP Configuration->Functions->NAIpSetTcpMsl. BTW, there is also an NAIpGetTcpMsl that allows the application to check the current value.

My finding through experimentation has been that when an application is consuming sockets faster than the stack can restore them, reducing MSL generally addresses the issue. BTW, a reminder: NET+OS possesses a total of 128 socket objects, which are shared between network sockets and serial ports. I believe that by default 6 are allocated by NET+OS at boot time, so the supply is finite and cannot be increased.
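As a rough sketch, the call could be made once, early in application startup, before any sockets are created. The exact header and the prototypes of NAIpSetTcpMsl / NAIpGetTcpMsl are assumptions here (the value is assumed to be in seconds), so confirm them against the API reference guide entry above; the 5-second value is only an example starting point.


/* Include the NET+OS global IP configuration header that declares
 * NAIpSetTcpMsl / NAIpGetTcpMsl (see the API reference guide path above). */

static void configure_tcp_msl(void)
{
    /* GLOBAL setting: call this once, early in startup, BEFORE any
     * sockets are opened, so that every socket picks up the new value. */

    /* Reduce MSL from the (believed) 60-second default so that closed
     * sockets leave the timed-wait state sooner.  Assumed signature:
     * the argument is the MSL in seconds. */
    NAIpSetTcpMsl(5);

    /* Optionally read the value back to verify it took effect
     * (assuming NAIpGetTcpMsl simply returns the current MSL). */
    int current_msl = NAIpGetTcpMsl();
    (void)current_msl;
}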

It works! Thank you so much for your help!