Client-Server Thinking


Client-Server Terminology

Client-server terminology is a way of viewing software components and their relationships. When one component accesses data from or calls a function of another component, the first component acts as a client of the second component, which acts as a server. The data access and the effects of called functions are the service that the server provides.

The Role of Client-Server Thinking in Design

Client-server thinking is more than just terminology. It is a way of thinking that focuses on the communication between two components of a software system. Its objective is to define the interface between the components by establishing a protocol for their communication.

The design process involves two crucial steps, repeated as often as necessary to break the software down into manageable pieces. First, the software is divided into components. For this step, the designer begins with a general sense of what kind of data the software needs to work with and what kind of tasks the software needs to perform. The designer then groups the data and tasks according to the kinds of abstractions that are involved. Data and tasks that involve closely related abstractions form the high-level components, or systems, of the software. Each component is responsible for dealing with a particular kind of abstraction.

After dividing the software into components, designers need to decide how the components will communicate with each other. This is where client-server protocols come into play. When a design is converted into code in some programming language, the client programmer needs to write code to command the server to perform a service. The protocol specifies what kind of code is needed. Sometimes the server makes variables accessible for retrieving data and provides functions for performing tasks. The protocol specifies which variables can be accessed directly and defines the effects and returned values of functions, along with the types of, and restrictions on, the parameters in function calls. In effect, the protocol specifies contracts between the client and the server: if the client meets certain conditions then the server will perform a specified service.
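For concreteness, here is a minimal sketch of what such a protocol might look like when written down as a C interface. The symbol-table module and all of its names are assumptions invented for the example; the point is that the comments record the contract: what each function does, what it returns, and what conditions the client must meet.

    /* symtab.h - hypothetical symbol-table server (illustrative sketch). */
    #ifndef SYMTAB_H
    #define SYMTAB_H

    /* Effect: records name as defined at the given address.
     * Precondition: name has not already been defined. */
    void symtab_define(const char *name, unsigned int address);

    /* Returns nonzero if name has been defined, zero otherwise. */
    int symtab_is_defined(const char *name);

    /* Returns the address recorded for name.
     * Precondition: symtab_is_defined(name) is nonzero. */
    unsigned int symtab_address(const char *name);

    #endif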

Why Client-Server Thinking Works

Client-server thinking is strongly client-oriented, dealing with components as they are seen by the client. This viewpoint is especially useful during the early phases of software development. It also encourages a language-independent point of view, which is preferable at that stage.

Client-server thinking relies on the possibility of thinking about the needs of the client without having to deal with the algorithms used by the client or the server. For example, consider the communication between a module for the second pass of an assembler and a scanner module.

When the pass module is dealing with a machine instruction, you need only a minimal understanding of instruction coding to determine what kind of information is needed.

The instruction can be passed as a character string. You can decide how to break down operand information into register specifiers, displacements, and machine addresses without concern for how the pass module is going to convert that information into machine code and without concern for how the scanner is going to extract the information from input.
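A scanner interface reflecting those decisions might be sketched in C as below. The function names and the exact breakdown of operand information are assumptions made for illustration, not a prescribed design; the point is that the protocol can be stated without saying anything about how either module does its work.

    /* Hypothetical scanner interface as seen by the second pass. */

    /* Returns the current instruction as a character string. */
    const char *scanner_instruction(void);

    /* Returns the register specifier of the current operand. */
    int scanner_register(void);

    /* Returns the displacement of the current operand. */
    long scanner_displacement(void);

    /* Returns nonzero if the current operand includes a machine address. */
    int scanner_has_address(void);

    /* Returns the machine address of the current operand.
     * Precondition: scanner_has_address() is nonzero. */
    unsigned long scanner_address(void);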

This capability of dealing with client-server communication without concern for client or server algorithms allows designers to focus on crucial issues that would otherwise be neglected:

- thinking about components in terms of the abstractions they provide rather than the way they will be implemented
- deciding how clients should access server data
- deciding what kind of data should be passed between client and server

Dealing with the first of these issues is difficult. It can only be learned from many years of experience. The first step is learning to put implementation questions into the background while working on software architecture and design. Some considerations involved in the last two issues are discussed in the next two sections.

Client Access to Server Data

Although a server can provide direct access to its variables, this is usually not a good idea. It is best to use only function calls for the client-server interface. There are several reasons.

First, in order to preserve the integrity of its data, it is often desirable for the server to limit the ability of the client to change the data. If the server provides a function that returns data then the client can read the data without being able to change it.

Second, accessing data through a functional interface ensures that the server has an opportunity to do internal bookkeeping when data is accessed. This is an important consideration in design: when the interface is being designed, the details of implementation have not been worked out yet. The functional interface specifies a protocol for data access that leaves server implementors free to make implementation changes.
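Both points can be seen in a small C sketch. The module, its location counter, and the read counter are assumptions made for the example: the client receives only a copy of the value, so it cannot corrupt the server's data, and the server can later add or change bookkeeping without touching the protocol.

    /* location.c - hypothetical server module (illustrative sketch). */

    static unsigned long location  = 0;   /* hidden from clients */
    static unsigned long num_reads = 0;   /* internal bookkeeping */

    /* Returns the current location counter.  The client gets a copy of
     * the value and so cannot change the server's data. */
    unsigned long current_location(void)
    {
        num_reads++;    /* bookkeeping added without changing any client */
        return location;
    }

    /* Effect: advances the location counter by the given number of bytes. */
    void advance_location(unsigned long bytes)
    {
        location += bytes;
    }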

Finally, implementing a design requires cooperation between implementors of the various components, and consideration of the fact that implementors are human beings. In spite of training and experience, they can make mistakes. There are times when accessing data would be a violation of the client-server contract. This is common in input-processing components: the data may not be there because of an error by the program's user. If the data is accessed through a function then the server has a chance to intercept these contract violations and give an immediate report. Then client implementors get information about their error that usually pins down precisely where it occurred. If the data is accessed directly from a variable then there is no indication of a problem until later. By then, it is often not clear who is at fault and it is almost always harder to pinpoint where the error occurred.

Passing Data

Even when data is only accessed through functions, the kind of data passed between modules has a significant effect on the number of errors and the difficulty of debugging.
Passing complex data structures
When server data involves complex structures, that complexity is an added burden on the implementors of clients. In addition to understanding the abstractions of their own components, they have to understand the abstractions involved in the data structure. The more abstractions they have to deal with, the more mistakes they make. A well-designed protocol should free their minds to deal with their own abstractions.

It is sometimes necessary to pass complex data. For example, one module may need to get complex information, work with part of it, and then pass the rest to another module for further processing. When this kind of organization is needed, a module should be set up for constructing the complex data type and accessing its parts.
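Sketched in C, such a module might look like the following. The operand-list type and its functions are invented for the example; what matters is that the representation is visible only inside this one module, so clients construct the data and take it apart entirely through the protocol.

    /* operand_list.h - hypothetical module owning a complex data type. */
    #ifndef OPERAND_LIST_H
    #define OPERAND_LIST_H

    typedef struct operand_list operand_list;   /* representation hidden */

    /* Returns a new, empty operand list. */
    operand_list *operand_list_create(void);

    /* Effect: appends the operand given as text to the end of the list. */
    void operand_list_append(operand_list *list, const char *text);

    /* Returns the number of operands in the list. */
    int operand_list_length(const operand_list *list);

    /* Returns the text of operand i.
     * Precondition: 0 <= i and i < operand_list_length(list). */
    const char *operand_list_get(const operand_list *list, int i);

    /* Effect: releases the list and everything it contains. */
    void operand_list_destroy(operand_list *list);

    #endif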

Passing classification information
Modules that deal with input often need to classify parts of the input. There are several ways that this information can be passed to the client:

- as an integer code
- as a character code
- as a descriptive string
- as a value of an enumerated type

The first two methods are likely to result in numerous errors in both client and server code. It is just too easy to forget what the codes mean. Sure, programmers should be careful, but they have a lot of things to think about. Developing algorithms requires a lot of concentration. If programmers have to break this concentration to look up the right classification code then algorithm errors are likely. If they try to rely on their memory regarding the codes there is a good chance that they will get some of the codes wrong. Either way, they lose.

Furthermore, obscure codes make debugging more difficult. Once you have pinpointed an error to a limited region of code, it is much easier to see an error involving classification if a descriptive string or enumerated type is used.

When enumerated types are available, they are usually the best choice for classification information. Descriptive strings avoid many of the problems of integer or character codes, but they do introduce some additional syntactic complexity and their use requires calling string comparison functions.
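A brief C sketch shows the difference; the line categories below are invented for the example. With the enumerated type, both client and server code say what they mean, and the compiler rejects a misspelled classification, whereas an integer protocol such as "2 means directive" must simply be remembered.

    /* Hypothetical classification of assembler input lines. */
    typedef enum {
        LINE_INSTRUCTION,
        LINE_DIRECTIVE,
        LINE_LABEL_ONLY,
        LINE_COMMENT
    } line_kind;

    /* Returns the classification of the current input line. */
    line_kind scanner_line_kind(void);

    /* Client code reads naturally and needs no table of codes:
     *
     *     if (scanner_line_kind() == LINE_DIRECTIVE)
     *         process_directive();
     */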

Using null pointers as flags
It is common in languages with pointers to use a null pointer as a returned value to indicate that requested data is unavailable. In a language like C, if the client forgets to check for the null value, the result is a segmentation fault. This error is detected at run time, and is often reported with no indication of where the error occurred. Debugging the error can take hours or sometimes days.

A better alternative is available: the server module can provide two functions. One returns a boolean indicating if the data is available, the other returns the data. The second function has a precondition requiring that the data be available. To reduce debugging time, the second function checks its precondition and reports an error if it is violated.
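Here is a self-contained C sketch of the two-function protocol, using a small fixed symbol table invented for the example. The availability function is the one clients are expected to call first; the data function checks the same condition again and reports a contract violation immediately, rather than letting a missing entry surface later as a mystery.

    /* Two-function protocol for possibly unavailable data (illustrative). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static const char     *names[] = { "start", "loop", "done" };
    static const unsigned  addrs[] = { 0x0000u, 0x0010u, 0x0040u };
    enum { TABLE_SIZE = 3 };

    /* Returns nonzero if an address is available for name. */
    int symtab_is_defined(const char *name)
    {
        for (int i = 0; i < TABLE_SIZE; i++)
            if (strcmp(names[i], name) == 0)
                return 1;
        return 0;
    }

    /* Returns the address recorded for name.
     * Precondition: symtab_is_defined(name) is nonzero. */
    unsigned symtab_address(const char *name)
    {
        if (!symtab_is_defined(name)) {        /* backup precondition check */
            fprintf(stderr, "symtab_address: \"%s\" is undefined\n", name);
            abort();   /* immediate report with a precise location */
        }
        for (int i = 0; ; i++)                 /* guaranteed to be found */
            if (strcmp(names[i], name) == 0)
                return addrs[i];
    }

    int main(void)
    {
        /* A well-behaved client checks availability before asking. */
        if (symtab_is_defined("loop"))
            printf("loop is at address %#x\n", symtab_address("loop"));
        return 0;
    }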

This does require extra effort on the part of the server, but only a few minutes' worth of coding. Programmers who are interested in saving time will gladly invest these few minutes to avoid the time involved in debugging a segmentation fault. Even after you reach the point of making fewer pointer mistakes, the time invested remains small compared with the time saved, so the technique is still worthwhile.

There is a reason for having the server do precondition checks in addition to the client. Servers often have several clients, and each client usually calls a server function at several places in the client code. Thus one check in the server acts as a backup for a large number of client checks.
