tcdrain() return performance

When using tcdrain() on a port driven by the Acceleport Xp, I suspect that the call (which the Digi driver translates into an ioctl() handled via TCSBRK) does not return "immediately" after transmission of a given message ends. I would like to know what this variance is and whether anything can be done about it. I know the kernel's generic TCSBRK handling is largely at the mercy of the scheduler, but I was hoping for somewhat more real-time handling from the Digi driver.

Our system runs RedHat 9.0 using the 2.4.20-9 Linux kernel. The Digi driver version we are using shows “Release Notes 93000351_G” and “Digi dgdm Driver Package dgdm-1.09-0.src.rpm”.

In our application, RTS is raised to put a half-duplex output circuit into transmit mode before a message is written out on the port. The tcdrain() routine is used to determine when to drop RTS so that the output circuit can be switched to receive mode to accept responses. It appears that there is an added delay between the time the final character is transmitted and the time tcdrain() returns: RTS is overshooting its mark (the end of message transmission) by 80+ msec at times.

The field units receiving the output message do not process it until the final character (the terminator) arrives, and they have a nominal response-delay setting that has worked in past applications. But the RTS overshoot during output transmission swings widely, so responses are received in full at times but have their leading characters chewed off at others (i.e. when RTS overshoots by 80+ msec).

We have run straces, and there is no measurable gap between the return of tcdrain() (actually its associated ioctl() call) and the ioctl() call used to drop RTS. There is also at most 1 msec between the message write() request and the ioctl() associated with tcdrain(), so application time is not the issue. The time spent inside the ioctl() associated with tcdrain(), however, varies widely. Some of that is attributable to the number of characters in a given message, but there is meaningful variance even between messages of the same length, e.g. 183 vs. 218 msec for a 5-character message, and 223 vs. 240 msec for a 6-character message.

The Acceleport Xp should have relatively low latency, so I'm not sure why this behavior is occurring. Your best bet would be to check for hardware contention with the card, i.e. other devices/peripherals in the server sharing an IRQ with the Xp. Some BIOS utilities also allow you to designate an IRQ for a particular PCI slot; also make sure you have the latest revision of the motherboard BIOS.

If those steps don't resolve the issue you're seeing, my recommendation would be to use a Digi Neo card instead. The Neo is a non-intelligent, interrupt-driven card, and I can specifically state that it can return from tcdrain() within 1-2 ms of when the last byte of data left the UART…