What would cause HTTP_WAITCLOSE to discard 1543 bytes repeatedly?

My RCM5450W is hosting a web site via RABBITWEB. There are three subpages under the main page. One of these three accesses data from a C structure and prints it to the screen. After a few minutes of working correctly, the server “hangs” and stops sending data to the browser.

I’ve printf-ed the HTTP.LIB code and can see that this misbehavior is often presaged by the HTTP_WAITCLOSE state informing me that it is discarding 1543 bytes at close. Rabbit’s code here uses sock_readable to see how many bytes remain in the sock to be closed, and then invokes sock_fastread to read those and dump them to NULL. The trouble is, when http_handler is next invoked, we end up back here again: the sock_readable call returns the same number, and sock_fastread does the same thing. I don’t know whether the 1543 bytes are the exact same ones (and fastread is somehow failing to remove them from the sock) or whether something is stuffing bytes into the sock fast enough to keep up.

A timeout eventually comes along and aborts this cycle, but the server stops talking shortly afterwards. I can see from my LCD screen and from the debug output to STDIO that the Rabbit is still running, but it is no longer serving the browser. Any clues would be much appreciated; I’ve been banging my head on this for days now.
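For reference, the drain logic I’m describing looks roughly like the sketch below. This is not the actual HTTP.LIB source, just my reading of it: it assumes sock_readable returns the count of bytes still queued on the socket and that passing NULL to sock_fastread reads and discards them.

```c
#use "dcrtcp.lib"

// Minimal sketch of the HTTP_WAITCLOSE drain pattern described above --
// not the actual HTTP.LIB code.
void drain_at_close(tcp_Socket *s)
{
   int pending;

   // Assumption: sock_readable reports how many bytes remain to be read.
   pending = sock_readable(s);
   if (pending > 0) {
      printf("HTTP_WAITCLOSE: discarding %d bytes at close\n", pending);
      // Assumption: a NULL buffer tells sock_fastread to throw the bytes away.
      sock_fastread(s, NULL, pending);
   }
   // Symptom: on the next pass through http_handler, sock_readable returns
   // the same count and this repeats until the timeout aborts the cycle.
}
```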

What version of Dynamic C are you using? I haven’t heard of any issues like this. I suggest you download the latest version, including the library patches, and see if the same thing happens.

Turns out Digi’s 10.72 compiler and/or libraries are behind this. I reverted to 10.64 and the “service hang” problem disappeared. I don’t plan to debug this any further.