RPC server session leak?

To clarify, I do not think there is a problem with the software when used as currently designed. This is really only an issue on systems where the RPC server cannot fork.

I have added support for Windows to the C++ RPC server, mostly by filling in the blanks. It’s not ready for a PR or anything yet, but I had a question.

tl;dr: Does the RPC client have to make calls to free server-side resources, or should the allocated resources also be freed when the connection is closed?

Since there is no fork() on Windows, the RPC server runs and measurements are taken in-process, like the current stub in the C++ RPC server source code here:

The basic assumption @FrozenGene uses makes sense: “If the ServerLoopProc does not finish by the timeout, forcefully close the connection, which in turn causes the internal RPC implementation to clean up and shutdown properly.”
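For context, this is roughly the shape of my in-process timeout path on Windows. `Connection` and `ServerLoopProc` here are hypothetical stand-ins for the real socket and command-loop code, not the actual TVM API, just to show the control flow:

```cpp
#include <chrono>
#include <future>

struct Connection {
  // In the real code this would shut down the underlying socket so that
  // any blocking recv() inside the server loop returns with an error.
  void ForceClose() {}
};

// Placeholder for the per-connection RPC command loop.
void ServerLoopProc(Connection& /*conn*/) {}

void RunSessionWithTimeout(Connection& conn, std::chrono::seconds timeout) {
  // No fork() on Windows, so the server loop runs on a thread instead of
  // in a child process that could simply be killed.
  auto done = std::async(std::launch::async, [&conn] { ServerLoopProc(conn); });

  if (done.wait_for(timeout) == std::future_status::timeout) {
    // Timeout hit: force the connection closed so the loop unblocks,
    // hoping the RPC implementation then cleans up after itself.
    conn.ForceClose();
  }
  done.wait();  // join the worker before tearing anything else down
}
```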

I have noticed, though, that when the timeout is hit, .so files (DLLs in my case) don’t get unloaded and GPU memory can leak. Inspection of rpc_session.cc and rpc_module.cc leads me to believe the client is required to send cleanup command codes (e.g. kDevFreeData, kModuleFree, etc.). If the server is the one closing the connection, it never receives the commands to free anything. In the situations where fork() is used (everywhere on Linux), there is no issue, as the forked server process just exits.
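To illustrate why I think the leak happens (this is only a rough sketch of my understanding, not the actual dispatch code in rpc_session.cc): the free paths run only in response to command codes sent by the client, so a server-initiated close means they never execute.

```cpp
// Illustrative only: command codes named after the ones mentioned above.
enum class RPCCode { kDevFreeData, kModuleFree /* ... */ };

void HandlePacket(RPCCode code /*, args... */) {
  switch (code) {
    case RPCCode::kDevFreeData:
      // device memory is released only when the client asks for it
      break;
    case RPCCode::kModuleFree:
      // the loaded .so/.dll is unloaded only when the client asks for it
      break;
  }
}
// If the server closes the socket on timeout, neither code ever arrives,
// so the allocations and loaded modules stay alive in the server process.
```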

My work-around for now is to just remove the timeout functionality.

If this is true, would it be desirable to add some book-keeping in rpc_session.cc to track allocated resources and free them in the RPC server’s session destructor?
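Something along these lines is what I have in mind. The member and method names are hypothetical, not existing TVM API; the idea is just to record what the handlers allocate and release anything still outstanding when the session is destroyed:

```cpp
#include <unordered_set>

class RPCSessionBookkeeping {
 public:
  ~RPCSessionBookkeeping() {
    // If the client never sent kDevFreeData / kModuleFree (e.g. because the
    // server force-closed the connection on timeout), release everything
    // that is still outstanding.
    for (void* data : live_device_allocs_) {
      // would call the device API free for `data` here
    }
    for (void* mod : live_modules_) {
      // would unload the module (.so / .dll) behind `mod` here
    }
  }

  // Called from the alloc / load-module handlers.
  void TrackAlloc(void* data) { live_device_allocs_.insert(data); }
  void TrackModule(void* mod) { live_modules_.insert(mod); }

  // Called from the kDevFreeData / kModuleFree handlers.
  void UntrackAlloc(void* data) { live_device_allocs_.erase(data); }
  void UntrackModule(void* mod) { live_modules_.erase(mod); }

 private:
  std::unordered_set<void*> live_device_allocs_;
  std::unordered_set<void*> live_modules_;
};
```

On the client-initiated path nothing changes: the free handlers untrack as they free, so the destructor finds the sets empty. It only does work when the session is torn down with resources still live.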