While investigating strange performance issues in socket IO, I peeked into the native code behind the socket input stream and noticed that the logic is as follows:

1) If the buffer is smaller than (or equal to) 8 KB, use the stack
2) Otherwise, malloc() a buffer (see the sketch below)
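In the OpenJDK sources this threshold appears to be MAX_BUFFER_LEN (8192 bytes): reads at or under it can use a stack buffer, while anything larger goes through malloc()/free() on every call. Here is a minimal sketch of one way to stay on the stack path by capping each read() at 8 KB; the helper class, its name, and the loop are mine, not JDK code:

```java
import java.io.IOException;
import java.io.InputStream;

public final class CappedReads {
    // 8 KB: at or below this size the native read can use its
    // stack buffer instead of malloc() (per my reading of the JDK native source).
    private static final int MAX_STACK_READ = 8 * 1024;

    /** Fills dst completely, issuing reads of at most 8 KB each. */
    public static void readFully(InputStream in, byte[] dst) throws IOException {
        int off = 0;
        while (off < dst.length) {
            int want = Math.min(MAX_STACK_READ, dst.length - off);
            int n = in.read(dst, off, want);
            if (n < 0) {
                throw new IOException("EOF after " + off + " bytes");
            }
            off += n;
        }
    }
}
```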

Since memory allocation is never free, it seems plausible that this per-read allocation can cause performance problems when aiming for high-performance socket IO. The trouble is that fixing this on the server side with SSL sockets is not exactly a piece of cake, because NIO + SSL + ServerSocket == wtf?!?
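To give a feel for the wtf part, below is a heavily trimmed sketch of just the read/unwrap step of a non-blocking SSL read with SSLEngine. It assumes the handshake has already completed and that the caller allocates and drains the buffers; a real server additionally needs the full handshake state machine, delegated tasks, and renegotiation handling:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;

public final class SslReadSketch {

    /**
     * Reads ciphertext from the channel and unwraps it into appIn.
     * Assumes the handshake is done; returns bytes of plaintext in appIn,
     * or -1 if the connection is closed.
     */
    static int readApplicationData(SocketChannel ch, SSLEngine engine,
                                   ByteBuffer netIn, ByteBuffer appIn)
            throws IOException {
        if (ch.read(netIn) < 0) {
            engine.closeInbound();       // peer closed the connection
            return -1;
        }
        netIn.flip();
        while (netIn.hasRemaining()) {
            SSLEngineResult r = engine.unwrap(netIn, appIn);
            switch (r.getStatus()) {
                case OK:
                    break;               // got some plaintext, keep unwrapping
                case BUFFER_UNDERFLOW:
                    netIn.compact();     // partial TLS record: need more bytes
                    return appIn.position();
                case BUFFER_OVERFLOW:
                    netIn.compact();     // appIn full: caller must drain/enlarge
                    return appIn.position();
                case CLOSED:
                    return -1;
            }
        }
        netIn.compact();
        return appIn.position();
    }
}
```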

As a side note, somebody else has also noticed something related: A Memory Problem With Java IO.

Update 13.6.2010:
The memory leak is caused by a missing free of the read buffer in SocketInputStream#read() (fixed only in JDK 7!)
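If moving to JDK 7 is not an option and the leak really sits on the malloc() branch as described above, one conceivable workaround (my own untested sketch) is to never ask the socket stream for more than 8 KB per call, so the malloc branch, and with it the missing free, is never reached:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Caps every read at 8 KB so the native socket read stays on its
 * stack-buffer path and never hits the leaky malloc() branch.
 */
public final class StackPathInputStream extends FilterInputStream {
    private static final int CAP = 8 * 1024;

    public StackPathInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        return super.read(b, off, Math.min(len, CAP));
    }
}
```

Wrapping socket.getInputStream() this way trades one large native read for several smaller ones, so whether it is a net win depends on the workload.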

