Network Level Framing:
Speeding Up a Multimedia Storage Server?

Pål Halvorsen, Thomas Plagemann, and Vera Goebel
University of Oslo, UniK - Center for Technology at Kjeller, Norway
Email: {paalh, plageman, goebel}@unik.no

In the INSTANCE project, we try to design a high-performance, cost-effective multimedia storage server. As shown in Figure 1, an event is recorded at a remote site, transmitted to a multimedia storage server, and stored on persistent storage (disks). A remote client can then retrieve and play out the information at any time, as in a Video-on-Demand or News-on-Demand scenario. However, traditional operating systems do not provide adequate support for large-scale multimedia-on-demand servers, and we try to identify and eliminate possible bottlenecks of the traditional operating systems used in most server-based systems. One of our approaches is to decrease the communication protocol processing overhead by introducing network level framing (NLF).

Figure 1: Application scenario.

Traditionally, when transmitting data between end-systems, all seven protocol layers of the OSI reference model (Figure 2) - or all four layers of the Internet protocol suite - have to be handled, while intermediate systems - or nodes - only have to handle the three lowest layers. The communication bottleneck lies in the transport and higher layer protocols, where, for example, checksum calculation is among the most time-consuming operations. Thus, for each client the same data must be processed through all the end-to-end protocols, performing the same CPU-intensive operations over and over again.
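To illustrate why this per-client processing is expensive, the sketch below (an illustration only, not code from the INSTANCE project) shows the 16-bit one's complement Internet checksum used by UDP and TCP (RFC 1071): every byte of every packet has to be read and summed each time the data is sent.

/*
 * Minimal sketch of the Internet checksum (RFC 1071).  It touches every
 * byte of the buffer, which is why transport-layer processing scales with
 * the amount of data and the number of clients served.
 */
#include <stddef.h>
#include <stdint.h>

uint16_t internet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    /* Sum the data as 16-bit big-endian words. */
    while (len > 1) {
        sum += (uint32_t)p[0] << 8 | p[1];
        p   += 2;
        len -= 2;
    }
    if (len == 1)                 /* odd trailing byte, padded with zero */
        sum += (uint32_t)p[0] << 8;

    /* Fold the carries back into the low 16 bits. */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;        /* one's complement of the sum */
}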

Figure 2: Traditional data storage in a server.
Figure 3: Network level framing.
Due to the high cost of higher layer processing, we wish to regard communication with a storage server as asynchronous people-to-people communication and to treat the server as an intermediate storage node where only the lowest layers are processed (Figure 3). This means that when the data has been processed through the ``intermediate system layers'', it is stored directly on disk including the higher layer packet headers (NLF), i.e., the output of time-consuming operations like checksum calculation is stored on disk, and these operations can be omitted when the data is later sent to remote clients.
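The following C sketch illustrates one possible realization of this receive path. It is only an illustration, not the INSTANCE implementation; it assumes a Linux raw socket (which requires root privileges) so that the incoming datagram still carries its transport header, and a simple length-prefixed record format on disk.

/*
 * NLF-style storage sketch (illustration only).  The IP (network layer)
 * header is inspected and stripped, as on an intermediate system, while
 * the UDP header - including its already computed checksum - is written
 * to disk together with the payload.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/ip.h>
#include <sys/socket.h>

int main(void)
{
    unsigned char buf[65536];
    int sock = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
    int file = open("nlf_store.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (sock < 0 || file < 0) {
        perror("setup");
        return 1;
    }

    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0)
            break;

        /* "Intermediate system" processing: look at the IP header only. */
        struct iphdr *ip = (struct iphdr *)buf;
        size_t iphdr_len = ip->ihl * 4;
        if ((size_t)n <= iphdr_len)
            continue;

        /* NLF: store UDP header + payload verbatim, checksum included,
         * preceded by a length prefix so records can be replayed later. */
        ssize_t rec_len = n - (ssize_t)iphdr_len;
        if (write(file, &rec_len, sizeof(rec_len)) < 0 ||
            write(file, buf + iphdr_len, rec_len) < 0)
            break;
    }
    close(file);
    close(sock);
    return 0;
}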

The advantage of this approach is that the CPU-intensive end-to-end protocol handling is not necessary - or at least is reduced to a minimum - at the intermediate storage node. This reduces the load on the CPU and the system bus, i.e., more consumers (clients) can be served with the same hardware resources.
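A corresponding send path could then look roughly as follows. Again, this is only an illustrative sketch with assumed names, and it glosses over one detail: the UDP checksum also covers a pseudo-header containing IP addresses and ports, so for a different client it would have to be patched incrementally (or disabled) rather than recomputed over the whole payload.

/*
 * NLF-style send path sketch (illustration only, raw socket, root needed).
 * Records written by the storage sketch above are read back and handed to
 * the kernel, which only adds the network-layer header; the stored UDP
 * header and checksum are transmitted as-is instead of being rebuilt per
 * client.  Caveat: the pseudo-header part of the checksum is not updated
 * here - a real system would patch it incrementally for each destination.
 */
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int send_stored_stream(const char *path, const char *client_ip)
{
    unsigned char buf[65536];
    ssize_t rec_len;
    int file = open(path, O_RDONLY);
    int sock = socket(AF_INET, SOCK_RAW, IPPROTO_UDP);
    struct sockaddr_in dst;

    if (file < 0 || sock < 0)
        return -1;

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, client_ip, &dst.sin_addr);

    /* Replay every stored record: UDP header + payload, no checksumming. */
    while (read(file, &rec_len, sizeof(rec_len)) == (ssize_t)sizeof(rec_len) &&
           rec_len > 0 && rec_len <= (ssize_t)sizeof(buf) &&
           read(file, buf, rec_len) == rec_len) {
        if (sendto(sock, buf, rec_len, 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0)
            break;
    }
    close(sock);
    close(file);
    return 0;
}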

However, this gain does not come without disadvantages. As we also store packet headers, more storage space is needed and more data must be retrieved from slow disks. This overhead varies with the packet size, e.g., storing a 1KB UDP packet requires 20 extra bytes of storage space for the prestored headers (an increase of about 2%). Nevertheless, larger packet sizes are often more appropriate for transmitting multimedia data, which makes the relative overhead smaller. The disk I/O might also become a problem as the amount of data increases, but as disks are getting faster and the data is stored in a disk array, the problem should be manageable. For example, the overhead of retrieving a stored 1KB UDP packet from a Cheetah disk (about 10000 RPM) would be minimal. Furthermore, in a multi-user scenario, a packet might be cached, and the gain would then increase further.
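The following toy calculation (our own numbers, assuming 20 prestored header bytes per packet as in the example above) illustrates how the relative storage overhead shrinks as the packet size grows.

/*
 * Back-of-the-envelope storage overhead for a few packet sizes,
 * assuming 20 prestored header bytes per packet.
 */
#include <stdio.h>

int main(void)
{
    const double header_bytes = 20.0;          /* assumed prestored headers */
    const int payload_sizes[] = { 1024, 2048, 4096, 8192, 65536 };

    for (size_t i = 0; i < sizeof(payload_sizes) / sizeof(payload_sizes[0]); i++) {
        double overhead = 100.0 * header_bytes / payload_sizes[i];
        printf("payload %6d B -> storage overhead %.2f%%\n",
               payload_sizes[i], overhead);    /* 1KB: ~1.95%, 64KB: ~0.03% */
    }
    return 0;
}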

So far, this work is at the design stage, and we have not yet measured the gain in CPU processing against the increased storage space and disk I/O requirements. However, we think it is an interesting idea and would like to implement and test such a system.