The Interceptor pattern describes the transparent insertion of custom behavior at specific points (dubbed interception points) in an application's flow. The pattern involves four entities:

  1. Interception events: describe what has occurred.
  2. Interception points: the places in the application's flow where interception events are dispatched.
  3. The event dispatcher: dispatches interception events to interceptors (see below).
  4. Interceptors: subscribe to specific interception events and are notified when the latter occur.
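
Concretely, the four entities can be sketched as follows. This is a self-contained illustration, not Ubik's actual API (Ubik's implementation lives in org.sapia.ubik.rmi.interceptor); all class names here are stand-ins. Note the naming convention: the dispatcher notifies an interceptor by reflectively calling its on<EventClassName> method, which is also the convention used in the Ubik example further below.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Entity 1 - the interception event (marker interface).
interface Event {}

// Entity 4 - the interceptor (marker interface).
interface Interceptor {}

// A concrete event: "a method invocation is about to happen".
class PreInvokeEvent implements Event {
  final String methodName;
  PreInvokeEvent(String methodName) { this.methodName = methodName; }
}

// Entity 3 - the event dispatcher: notifies registered interceptors,
// in registration order, by reflectively invoking on<EventClassName>(event).
class Dispatcher {
  private final Map<Class<?>, List<Interceptor>> interceptors = new HashMap<>();

  void addInterceptor(Class<?> eventClass, Interceptor interceptor) {
    interceptors.computeIfAbsent(eventClass, k -> new ArrayList<>()).add(interceptor);
  }

  void dispatch(Event event) {
    for (Interceptor i : interceptors.getOrDefault(event.getClass(), List.of())) {
      try {
        Method m = i.getClass()
            .getMethod("on" + event.getClass().getSimpleName(), event.getClass());
        m.invoke(i, event);
      } catch (ReflectiveOperationException e) {
        throw new IllegalStateException(e);
      }
    }
  }
}

// An interceptor that counts invocations.
class CountingInterceptor implements Interceptor {
  int count;
  public void onPreInvokeEvent(PreInvokeEvent event) { count++; }
}

class InterceptorPatternDemo {
  public static void main(String[] args) {
    Dispatcher dispatcher = new Dispatcher();
    CountingInterceptor counter = new CountingInterceptor();
    dispatcher.addInterceptor(PreInvokeEvent.class, counter);

    // Entity 2 - the interception point: the spot in the application's
    // flow where the event is dispatched (e.g. just before a remote call).
    dispatcher.dispatch(new PreInvokeEvent("sayHello"));
    dispatcher.dispatch(new PreInvokeEvent("sayHello"));

    System.out.println(counter.count); // prints 2
  }
}
```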

The Ubik RMI API implements the Interceptor pattern. The implementation is provided as a set of classes and interfaces in the org.sapia.ubik.rmi.interceptor package. These classes are used within Ubik RMI itself, but can also be reused by applications in a totally independent fashion - see the javadoc for more information.

Ubik RMI triggers interception events at the following interception points:

  • prior to a remote method invocation on the client-side;
  • after a remote method invocation on the client-side;
  • prior to a remote method invocation on the server-side;
  • after a remote method invocation on the server-side.

Through the Ubik RMI API, it is possible to register interceptors for the above events, depending on "where" the events are triggered - on the client-side or on the server-side. Interception events are modeled through classes that implement the Event marker interface; each of the events defined above has a corresponding event class - for example, server-side pre-invocation events are modeled by the ServerPreInvokeEvent class.

Each of the events above encapsulates information that can often be modified. For example, an application could implement transparent security by wrapping the commands sent to the server with authentication information: an interceptor could intercept commands on the client-side and modify them so as to transparently add the authentication data.

Another example is how EJB containers manage transactions: every time a method is called on an EJB, the container checks the called method's transactional attribute - as specified in the EJB's deployment descriptor - and, if applicable, registers the calling thread with a new or the current transaction.

Various other uses can be imagined; the following example implements a "hit" counter: an interceptor that increments an invocation count every time a method is invoked on a remote object on the server-side.

Implementing the Interceptor

Our interceptor will intercept events of the ServerPreInvokeEvent class. The code implementation goes as follows:

package org.sapia.ubik.rmi.examples.interceptor;

import org.sapia.ubik.rmi.interceptor.Interceptor;
import org.sapia.ubik.rmi.server.invocation.ServerPreInvokeEvent;

public class HitCountInterceptor implements Interceptor {

  private int _count;

  public synchronized void onServerPreInvokeEvent(ServerPreInvokeEvent evt) {
    _count++;
  }

  public synchronized int getCount() {
    return _count;
  }
}

Registering for Interception Events

Once the interceptor has been implemented, it can be registered with the Ubik RMI runtime. This must be done before a server is exported, or before the first Ubik RMI client is created, since the methods that add interceptors are not synchronized. The code below demonstrates this:

// the hitInterceptor variable is an 
// instance of HitCountInterceptor
Hub.getModules().getServerRuntime().addInterceptor(ServerPreInvokeEvent.class, hitInterceptor);

As can be seen, the ServerRuntime has a dispatcher that allows registering interceptors for server-side events. For events that are triggered on the client-side, the following invocation would take place:

Hub.getModules().getClientRuntime().addInterceptor(someEventClass, someInterceptor);

Interception events are dispatched synchronously; it is therefore important that interceptors perform their task as fast as possible, to minimize the impact on performance. Multiple interceptors can be added for a given event class; they are called in the order in which they were added, so it is important that subsequent interceptors do not contradict the intended effect of previous ones - it is the application developer's responsibility to use interceptors in a consistent manner. Ubik RMI's runtime does not internally use interceptors, a precaution taken to avoid conflicts with potential application interceptors.

Dispatching Events

Applications can use Ubik RMI's interception API to dispatch custom interception events. Interception events must implement the Event interface, and must be dispatched through the server or client runtime - with which interceptors can thereafter be registered. The following snippet shows how to dispatch an event:
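
The original snippet did not survive in this copy; the following is a minimal, self-contained sketch of the idea. AuditEvent and EventBus are hypothetical stand-ins - in Ubik RMI, the custom event would implement the Event marker interface and be dispatched through the client or server runtime obtained from Hub.getModules(), as shown earlier.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A custom application event (hypothetical). In Ubik RMI, it would
// implement the Event marker interface.
class AuditEvent {
  final String user;
  AuditEvent(String user) { this.user = user; }
}

// Minimal stand-in for the runtime's event dispatcher.
class EventBus {
  private final List<Consumer<AuditEvent>> interceptors = new ArrayList<>();
  void addInterceptor(Consumer<AuditEvent> interceptor) { interceptors.add(interceptor); }
  void dispatch(AuditEvent event) { interceptors.forEach(i -> i.accept(event)); }
}

class DispatchDemo {
  public static void main(String[] args) {
    EventBus bus = new EventBus();
    bus.addInterceptor(event -> System.out.println("audited: " + event.user));
    bus.dispatch(new AuditEvent("jsmith")); // prints audited: jsmith
  }
}
```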



Ubik RMI's command protocol sits on top of the transport layer, as illustrated below:

  +------------------+
  | Command Protocol |
  +------------------+
  | Transport Layer  |
  +------------------+

The command protocol is based on the Command pattern, where commands are objects that encapsulate self-contained business logic whose execution is triggered by an external environment. In Ubik RMI, a command is sent from the client to the server (through Java's serialization), and executed by the latter. Commands are modeled by the RMICommand class; all commands in Ubik RMI extend this class. The command protocol has been completely separated from the transport layer in order to allow sending commands over different transports.

Applications can, to a certain extent, extend the protocol with their own commands. To that end, custom command classes must extend RMICommand, and their instances must be sent over the wire through Ubik RMI's transport layer. The following code shows how this is done:

package org.sapia.ubik.rmi.examples;

import org.sapia.ubik.net.Connection;
import org.sapia.ubik.net.TCPAddress;
import org.sapia.ubik.rmi.server.RMICommand;
import org.sapia.ubik.rmi.server.transport.TransportManager;

import java.io.IOException;
import java.rmi.RemoteException;

public class HelloWorldCommand extends RMICommand {

  public Object execute() throws Throwable {
    return "Hello World";
  }

  public static void main(String[] args) {
    // creating address of server we wish to connect to
    TCPAddress addr = new TCPAddress("localhost", 7070);

    Connection conn = null;

    try {
      // acquiring connection
      conn = TransportManager.getConnectionsFor(addr).acquire();
    } catch (RemoteException e) {
      e.printStackTrace();
      return;
    }

    try {
      conn.send(new HelloWorldCommand());
    } catch (IOException e) {
      e.printStackTrace();
      return;
    }

    // always perform the receive!!!
    try {
      Object response = conn.receive();

      if (response instanceof Throwable) {
        Throwable err = (Throwable) response;
        err.printStackTrace();
      } else {
        // should print 'Hello World'
        System.out.println(response);
      }

      // Very important: allows transport
      // providers to implement connection
      // pooling.
      TransportManager.getConnectionsFor(addr).release(conn);
    } catch (RemoteException e) {
      e.printStackTrace();
    } catch (IOException e) {
      e.printStackTrace();
    } catch (ClassNotFoundException e) {
      e.printStackTrace();
    }
  }
}


For servers to be scalable, it is important that they spawn a reasonable number of threads when handling incoming requests. By default, Ubik RMI servers process all requests (or commands) synchronously: when a command is received at the server, it is immediately executed, and the response is sent back to the client in the same thread. If commands execute fast enough and the number of concurrent clients remains small, this can prove good enough.

The JDK's RMI follows this synchronous model.

Yet, in this world of massive traffic, uncertain quality of service and heterogeneous system integration, execution speed is the first victim; the longer it takes to process requests, the more they pile up, and the more resources are consumed - especially threads.

In addition, note that the NIO transport should preferably be used: it offers the best scalability guarantees.

Ubik RMI spares server resources by dividing the work between servers and clients more evenly. To do so, it uses callbacks: when a command is received by the server, it is queued locally for later execution, and the server thread returns immediately. The client, meanwhile, blocks until it receives a response, which is sent back once the corresponding command has been executed - the response is the command's result. For this to happen, the client itself also becomes a server, to which the response is eventually sent: the command sent to the server encapsulates the address of the client's server - as strange as this might sound - so that the response can be sent back appropriately.
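
The queuing scheme just described can be sketched as follows. This is a self-contained model, not Ubik's actual classes: the CompletableFuture stands in for the response channel back to the "client's server".

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the callback model: the server thread only enqueues the
// incoming command and returns immediately; a worker executes it later
// and "calls back" the client with the result.
class CallbackServerSketch {

  static final class Command {
    final String payload;
    final CompletableFuture<String> replyTo; // stands in for the client's own server
    Command(String payload, CompletableFuture<String> replyTo) {
      this.payload = payload;
      this.replyTo = replyTo;
    }
  }

  private final BlockingQueue<Command> queue = new LinkedBlockingQueue<>();

  CallbackServerSketch() {
    // worker thread - cf. the ubik.rmi.callback.max-threads pool
    Thread worker = new Thread(() -> {
      try {
        while (true) {
          Command cmd = queue.take();                  // dequeued for later execution
          cmd.replyTo.complete("echo:" + cmd.payload); // response sent back to the client
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    worker.setDaemon(true);
    worker.start();
  }

  // The "server thread": queues the command and returns immediately.
  CompletableFuture<String> receive(String payload) {
    CompletableFuture<String> reply = new CompletableFuture<>();
    queue.add(new Command(payload, reply));
    return reply;
  }
}

class CallbackDemo {
  public static void main(String[] args) throws Exception {
    CallbackServerSketch server = new CallbackServerSketch();
    // the client blocks until the response arrives
    String result = server.receive("ping").get();
    System.out.println(result); // prints echo:ping
  }
}
```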

Although this pattern sacrifices raw throughput, it allows servers to scale extremely well, distributing the load of remote method invocations more evenly among clients and servers. Ubik RMI allows setting the number of callback processing threads on the server-side through the ubik.rmi.callback.max-threads system property. If the property is not specified, Ubik RMI uses 5 threads by default - which is probably not what you want (see the customization page for more details).

To enable callbacks, the classes of your remote objects must be annotated with the @Callback annotation. In addition, you must set the ubik.rmi.callback.enabled system property to true - this property must be set both on the client and server sides. The requirement to set it on the client-side stems from the fact that some clients might not be allowed to open servers on their side (in the case of applets, for example).
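
For instance, both properties might be set programmatically before any server is exported or client created (the values below are arbitrary examples; the properties can equally be passed as -D JVM arguments):

```java
class CallbackConfig {
  public static void main(String[] args) {
    // must be set on BOTH the client and server sides
    System.setProperty("ubik.rmi.callback.enabled", "true");
    // server-side callback processing threads (defaults to 5)
    System.setProperty("ubik.rmi.callback.max-threads", "25");
  }
}
```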

All system properties used by Ubik RMI's runtime are defined in the Consts interface.

Distributed Garbage Collection

As was explained in the Architecture section, Ubik RMI clients and servers interact to implement distributed garbage collection. For optimal performance, it might be necessary to override Ubik RMI's default settings. This section explains how.


The client garbage collector polls the server at a predefined interval to notify it about unreferenced stubs, so that the server can update its reference counts. This interval should be specified according to the server's behavior; the criterion that determines it is the rate at which remote objects are created: the higher this rate, the more often the client garbage collector should notify the server about unreferenced stubs, so that the server can clean up its memory. The rule of thumb is to ensure that distributed garbage collection keeps pace with remote object creation. The interval is specified through the ubik.rmi.client.gc.interval system property, which must map to a value in seconds.

This property is also important from another perspective: the server keeps an internal table of connected clients; to support stateless protocols, a "last access time" is kept internally and checked at a regular interval - see further below. If the server detects that a client has not polled for a specified amount of time, the client is considered "dead" and the server updates its reference counts accordingly. It is thus important that the interval at which the client polls the server be less than the delay after which clients are considered dead on the server-side.

Another property that can be tweaked is the number of object identifiers that are sent to the server on a DGC notification call from the client. To notify the server about unreferenced stubs, the client sends their corresponding object identifiers; the server then updates the reference count for each one. The number of object identifiers sent on each trip is specified through the ubik.rmi.client.gc.batch.size property - the default is 1000. Object identifiers are sent in batches to spare the sending thread from blocking too long on IO; yet too small a batch size results in too many network calls - all the more so if the "dereferencing rate" is high.

Of course, both properties (interval and batch size) should be balanced to provide an optimal combination; experimentation will probably be necessary.


The server garbage collector also runs at a regular interval to check for dead clients. This interval is specified through the ubik.rmi.server.gc.interval system property, which must map to a value in seconds and defaults to 30. A client is considered dead if it has not polled the server for an amount of time that can also be specified, with the ubik.rmi.server.gc.timeout property - which also defaults to 30 seconds.

As mentioned in the previous section, clients poll their server as part of their DGC notification; this polling interval must of course be less than the time-out after which clients are considered dead.
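
As an illustration, here is one hypothetical combination of the four properties (the values are examples only and would need tuning for a given deployment); note that the client poll interval stays well below the server-side timeout:

```java
class DgcConfig {
  public static void main(String[] args) {
    // client side: poll the server every 10 seconds, sending at most
    // 1000 object identifiers per batch (1000 is also the default)
    System.setProperty("ubik.rmi.client.gc.interval", "10");
    System.setProperty("ubik.rmi.client.gc.batch.size", "1000");

    // server side: sweep every 30 seconds (the default); clients that
    // have not polled for 60 seconds are considered dead
    System.setProperty("ubik.rmi.server.gc.interval", "30");
    System.setProperty("ubik.rmi.server.gc.timeout", "60");

    // invariant: client poll interval (10) < server timeout (60)
  }
}
```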


By default, Ubik transmits remote method call parameters "as is" over the wire; such a model is fine if the client and server have Ubik's libraries in their classpath, or if the method calls do not transit through intermediary VMs before reaching their destination.

Yet, this model will not work when client and server inherit Ubik from a parent classloader (such as in an app server scenario, where the remoting runtime is in the app server's classpath, and deployed applications have their own classloader that inherits from that classpath); indeed, at deserialization time, Ubik will not be able to find the classes of the deserialized objects if these classes are not in its classloader - or in a parent classloader.

In addition, if objects transit through other VMs on their way from client to server (and vice-versa), the intermediary VMs will not be able to deserialize remote method call information if the appropriate classes are not in their classpath; this also results in ClassNotFoundExceptions. Still with regard to a multi-VM scenario, another disadvantage is that whole object graphs have to be deserialized and reserialized, which can produce quite an overhead.

As a workaround, Ubik encapsulates method call parameters in MarshalledObject instances. Each method call parameter is transformed into an array of bytes that a MarshalledObject internally keeps before being sent over the wire. These "marshalled objects" are deserialized upon reaching their destination, but it is only just before the actual method call is performed that their internal object is itself resurrected from byte form.
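
The behavior can be illustrated with the JDK's own java.rmi.MarshalledObject, which works the same way (Ubik uses its own MarshalledObject class, so this block is an analogy, not Ubik code):

```java
import java.rmi.MarshalledObject;

class MarshalDemo {
  public static void main(String[] args) throws Exception {
    // the parameter is serialized to bytes immediately...
    MarshalledObject<String> param = new MarshalledObject<>("hello");

    // ...so intermediary VMs only need to carry the byte form, without
    // having the parameter's class on their classpath...

    // ...and the object is resurrected only at the destination,
    // just before the actual method call:
    String resurrected = param.get();
    System.out.println(resurrected); // prints hello
  }
}
```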

To enable marshalling, the ubik.rmi.marshalling system property must be set to true on the client side.

Clean Shutdown

Before terminating a VM that hosts running Ubik RMI servers, the Ubik RMI runtime should be shut down. This is very important in order for the system resources held by the runtime (mainly network connections) to be cleanly relinquished.

The following code demonstrates how a shutdown is invoked:

try {
  // give the subcomponents 30 seconds to terminate
  Hub.shutdown(30000);
} catch (InterruptedException e) {
  // could not shut down within specified time-out:
  // either retry shutdown or display error message.
}

Internally, the runtime cleanly shuts down Ubik RMI's subcomponents. The application must specify a timeout indicating the amount of time given to these subcomponents to abort their activities. The thread that calls the shutdown() method blocks while some components perform their shutdown asynchronously; if the thread is interrupted while blocking, an InterruptedException is thrown. In such a case, some components might not have had time to shut down properly; retrying the shutdown is then the best course of action, to ensure that all components have had the opportunity to terminate gracefully.