
 Motivations

Why do sensor nets need synchronized time?

Before discussing time synchronization methods, we must first clarify its motivations. In this section, we will describe some common uses of synchronized time in sensor networks. The wide net cast by the disparate application requirements is important; we will argue later that it precludes the use of any single sync method for all applications. Indeed, while the related work in this field (reviewed in Section 4) is extensive, it often relies on assumptions that are violated in the new sensor network regime.


 Distributed beam-forming and sensor fusion

In recent years, interest has grown in signal processing techniques such as sensor fusion and beam-forming. These are techniques used to combine the inputs of multiple sensors, sometimes using heterogeneous modalities, in applications such as noise reduction [YHR$^+$98], target tracking, and process control. This kind of DSP will serve as an important basis for sensor networks, but much of the extensive prior art in the field assumes centralized sensor fusion. That is, even if the sensors gathering data are distributed, sensor data is often assumed to be consolidated at one site before processing. However, centralized processing relies on an implicit form of time synchronization, one that must be made explicit to create a fully distributed system.

For example, consider a beamforming array designed to localize the source of sound, such as that described by YHRCL in [YHR$^+$98]. The array computes phase differences of signals received by sensors at different locations. From these phase differences, the processor can infer the time of flight of the sound from its source to each sensor. This allows the sound's source to be localized with respect to the spatial reference frame defined by the sensors in the array. However, this makes the implicit assumption that the sensors themselves are synchronized in time, as well. In other words, the beam-forming computation assumes that the observed phase differences are due to differences in the time of flight from the sound source to the sensor, and not variable delays incurred in transmitting signals from the sensor to the processor. In a centralized system, where there is tight coupling from sensors to a single central processor, this is a valid assumption; the sensor data share an implicitly synchronized time-base by virtue of the fact that the audio data are all fed to the same processor. However, for such an array to be implemented on a fully distributed set of autonomous wireless sensors, explicit time synchronization of the sensors is needed.

It is important to note that a beam-forming array actually contains two separate (but related) time-synchronization problems. The first, described above, is synchronization among the receivers themselves. The second arises in measuring the time-of-flight from the sound's source to the receiving sensors: some form of synchronization must exist between sender and receiver, because the array needs to know the time of emission relative to the time of detection. This second problem has traditionally been solved with ``over-sampling'': treating the clock bias between the emitter and receivers as an extra unknown in the system of localization equations, and adding an extra sensor measurement to balance the extra unknown. This works only if the receivers are synchronized with each other; that is, if there is only a single clock bias between the sender and any receiver. The need for explicit receiver synchronization described earlier is therefore not obviated by extra sensors: without synchronized receivers, each additional sensor brings with it both a measurement and its own unknown clock bias, adding an equation and an unknown to the system.
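To make the balance between equations and unknowns concrete, here is a small numerical sketch (our own construction, with invented positions and a hypothetical `residuals` helper, not drawn from any cited system): four mutually synchronized receivers measure arrival times of a sound whose emitter has an unknown clock bias, and a Gauss-Newton solver recovers both the source position and the shared bias.

```python
import numpy as np

# Hypothetical 2-D setup: four synchronized receivers at known positions
# measure the arrival time of a sound emitted at an unknown moment.
# Unknowns: source position (x, y) plus the single clock bias b between
# emitter and receivers -- 3 unknowns, so 4 measurements over-determine it.
C = 343.0  # speed of sound, m/s

rx = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_src = np.array([3.0, 4.0])
true_bias = 0.05  # emitter clock is 50 ms offset from the receivers' clock
t_meas = true_bias + np.linalg.norm(rx - true_src, axis=1) / C

def residuals(p):
    x, y, b = p
    return b + np.linalg.norm(rx - np.array([x, y]), axis=1) / C - t_meas

# Gauss-Newton iteration with a numerical Jacobian
p = np.array([5.0, 5.0, 0.0])
for _ in range(20):
    J = np.array([(residuals(p + dp) - residuals(p)) / 1e-7
                  for dp in np.eye(3) * 1e-7]).T
    p = p - np.linalg.solve(J.T @ J, J.T @ residuals(p))

print(p)  # recovers the source at (3, 4) and the 0.05 s bias
```

With synchronized receivers there is one bias for the whole array, so four measurements cover three unknowns. If each receiver instead carried its own unknown bias, every extra sensor would add an unknown along with its measurement, and the system would never close.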

 Multi-sensor integration

A common theme in sensor network design is that of multi-sensor integration--combining information gleaned from multiple sensors into a larger world-view not detectable by any single sensor alone. Unlike the previous examples, in which sensor fusion is done at the signal processing level, sensor integration focuses on algorithmically combining higher-level knowledge.

For example, consider a group of nodes that know their geographic positions and can each detect proximity to some static phenomenon $ P$. (The detection might involve localization as described in the previous section.) Alone, a single sensor can tell that it is near $ P$. However, by integrating their knowledge, the joint network can describe more than just a set of locations covered by $ P$; it can also compute $ P$'s size. In some sense, the whole of information has become greater than the sum of the parts.
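A minimal sketch of the idea, with invented node positions: each detecting node contributes only its own location, but pooling the reports yields an approximate center and extent for $ P$ that no single node can observe.

```python
import numpy as np

# Illustrative only: each node that detects proximity to P reports its
# known (x, y) position to the group.
reports = np.array([[2.0, 3.0], [4.0, 3.5], [3.0, 5.0], [5.0, 4.0]])

centroid = reports.mean(axis=0)
# Crude size estimate: the largest side of the bounding box spanned by
# the detecting nodes.
size = (reports.max(axis=0) - reports.min(axis=0)).max()
print(centroid, size)  # approximate center and extent of P
```

No timestamps appear anywhere in this computation, which is exactly why the static case needs no synchronization.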

This type of emergent behavior does not always require synchronized time. If $ P$ is static, as in the previous example, time sync may not be needed at all. However, what if $ P$ is moving, and the objective is to report $ P$'s speed and direction? At first glance, it might seem that the object localization system described above can easily be converted into a motion tracking system, simply by querying it repeatedly over time. If the data indicating the location of our tracked phenomenon $ P$ all arrives at the same processor for integration, perhaps no synchronization across nodes is required. The integrator can simply timestamp each localization reading as it arrives, and will then have all the information required to integrate the series into motion data. In this case, it seems that the time synchronization problem has been avoided entirely.

This scheme has serious limitations, discussed below, but it does work in some contexts. In particular, if the tracked object is moving very slowly relative to the delay between the sensors and the integration point, our brute-force approach might work. For example, imagine an asset tracking system capable of locating specific pieces of equipment. If we define the ``motion'' of an object as the history of all people who have used it, an equipment motion tracker can be designed very simply: we merely ask the object tracker for the equipment's location several times a day and compile, over time, a list of offices in which the equipment has been located. This works because we assume that someone who uses equipment does so over the course of days or weeks, which is extremely slow on the timescale over which the object tracker can locate an object and report its location to the user or a data integrator.

This method has serious limitations in contexts where the timing requirements are more critical than in our equipment example. If the tracked phenomenon moves quickly, many factors that were insignificant in equipment tracking become overwhelming. For example, consider the situation likely in wireless sensor networks: a spatially distributed group of sensors, each capable of communication over only a very short range relative to the total geographic area of sensor coverage. Information can only travel through this network hop-by-hop; therefore, the latency of messages from any given sensor to a central integrator will vary with the distance from the sensor to the integration point. In this situation, the brute-force approach may fail.

The reason for the failure of the brute-force approach is instructive to consider. The simple equipment tracker essentially assumed that the travel time of messages from the equipment sensors back to the integration point was zero: we ask the question ``Where is this object?'' and receive a reply that we assume is instantaneous and still correct when it is received. In the case of tracking equipment, this is probably a valid assumption, because a specific piece of equipment is unlikely to move on the timescale required to propagate a message through the sensor network. However, this assumption breaks down for faster-moving phenomena. Using the brute-force centralized approach, it is impossible to accurately track the motion of any phenomenon that moves appreciably within the network's round-trip time.
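The breakdown can be quantified with a back-of-envelope calculation (the numbers here are illustrative, not taken from the text): a timestamp-on-arrival integrator mislocates the target by roughly the target's speed times the variable delivery latency.

```python
# Position error incurred by timestamping a reading on arrival at the
# integrator rather than at the moment of detection.
def position_error(speed_mps, hops, per_hop_latency_s):
    return speed_mps * hops * per_hop_latency_s

# Equipment tracking: metres/day speeds and seconds of latency -> negligible.
# A 20 m/s vehicle reported over a 10-hop path at 50 ms per hop:
print(position_error(20.0, 10, 0.05))  # 10.0 metres of error per reading
```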

There are additional motivations for doing localized and distributed detection. A centralized aggregation point is not scalable and is prone to failures. In addition, in sensor networks where energy-efficiency is critical, it is unwise to design a system where a large volume of messages must be routed through many power-consuming nodes. A system that transmits each individual location reading through every node on the path from the phenomenon back to the integration point will have a high energy cost.

These limitations suggest that sensor readings should be timestamped inside the network, as near as possible to the original sensor event. Doing so dramatically reduces the variable delay introduced by message transmission latencies. Timestamping inside the network also allows tracking data to be post-processed into motion data, aggregated, and summarized within the network, so that far fewer bits need to travel back to the user. All of these advantages, however, come at a price: sensors in the network must share a common time base in order to ensure the consistency of readings taken at multiple sensors.

In a motion tracking application, the allowable synchronization error in nodes' clocks is informed by factors such as the speed of the target relative to sensor density and detection range. It is also affected by the system's desired spatial precision and detection frequency. The tighter the time synchronization, the more precisely a collection of proximity detectors can track motion. Very slow-moving objects may be tracked adequately by nodes with loosely synchronized clocks, but tighter and tighter synchronization is required if we wish to track faster and faster objects--or perhaps even phenomena such as wave-fronts.
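One way to make this relationship concrete is a rough rule of thumb (our own formulation, with illustrative numbers): to localize a target moving at $v$ to within a spatial precision $d$, clocks must agree to within about $d/v$.

```python
# Rough bound on tolerable clock error for proximity-based tracking:
# a clock error of e seconds misplaces a target moving at v m/s by
# roughly v * e metres, so for precision d we need e <= d / v.
def max_sync_error(spatial_precision_m, target_speed_mps):
    return spatial_precision_m / target_speed_mps

print(max_sync_error(1.0, 0.1))    # slow walker: ~10 s of clock error is fine
print(max_sync_error(1.0, 340.0))  # acoustic wave-front: roughly 3 ms needed
```

The four orders of magnitude between these two cases illustrate why no single synchronization scheme fits every tracking application.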


 In-network data aggregation and duplicate suppression

Because communication has a high energy cost compared to computation [PK00], a feature common to sensor networks is local processing, summarization, and aggregation of data in order to minimize the size and frequency of transmissions. Suppression of duplicate notifications of the same event from a group of nearby sensors can result in significant energy savings [IGE00]. To recognize duplicates, events must be timestamped with a precision on the same order as the event frequency; this might be only tens or hundreds of milliseconds. Since the data may be sent a long way through the network, and even cached by many of the intermediate nodes, the synchronization must be broad in scope and long in lifetime--perhaps even persistent.
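As a sketch of how such suppression might look (the window size, report format, and function name are our own assumptions, not from the text): reports of the same event are merged when their timestamps fall within one event-frequency window.

```python
# Timestamp-based duplicate suppression. Reports of the same event from
# nearby sensors are merged if they fall within WINDOW of the last
# report that was actually forwarded.
WINDOW = 0.100  # seconds -- on the order of the event frequency

def suppress_duplicates(reports):
    """reports: iterable of (timestamp_s, event_id) pairs."""
    kept = []
    for ts, ev in sorted(reports):
        if kept and ev == kept[-1][1] and ts - kept[-1][0] < WINDOW:
            continue  # duplicate of an already-forwarded report
        kept.append((ts, ev))
    return kept

obs = [(1.000, "boom"), (1.030, "boom"), (1.095, "boom"), (1.400, "boom")]
print(suppress_duplicates(obs))  # first three collapse into one report
```

Note that this comparison only works if the sensors producing the timestamps agree on time to well within WINDOW, which is the requirement stated above.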

 Energy-efficient radio scheduling

Low-power, short-range radios of the variety typically used for wireless sensor networks expend virtually as much energy passively listening to the channel as they do during transmission [PK00,APK00]. Sensor net MAC protocols are frequently designed around this assumption, aiming to keep the radio off for as long as possible. TDMA is a common starting point because of the natural mechanism it provides for adjusting the radio's duty cycle, trading energy expenditure for other factors such as channel utilization and message latency [Soh00].

While distributed time synchronization is central to any TDMA scheme, it is considerably more important in wireless sensor nets than in traditional (e.g. cellular phone) TDMA networks. Traditional wireless MAC protocols value only high channel utilization. Good time sync is therefore important because it shortens the guard time, but also easy because each frame received implicitly imparts information about the sender's clock. This information can be used to frequently re-synchronize a node's clock with those of its peers [LS96].

In sensor networks, the picture changes considerably. Energy-efficiency is the highest priority, so localized algorithms are used to minimize both the size and frequency of messages. Long inter-message intervals result in greater clock drift and therefore longer guard times. The high energy cost of passive listening described above makes these guard times expensive. In addition, small data payloads make the guard times a large proportion of the total time a receiver is listening. These factors make good clock synchronization critical for saving energy, and suggest a new technique is needed to achieve it.
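The coupling between resynchronization interval and guard time can be sketched with a simple calculation (the 50 ppm figure is a typical crystal-oscillator spec, used here illustratively): two clocks drifting apart at up to $\rho$ each can diverge by $2\rho$ times the interval since they last synchronized, and a TDMA receiver must listen that much longer per slot.

```python
# Guard time a TDMA receiver must add per slot, given worst-case clock
# drift of +/- drift_ppm on each node and a given resync interval:
# the two clocks can diverge by 2 * drift * interval between resyncs.
def guard_time_s(drift_ppm, resync_interval_s):
    return 2 * drift_ppm * 1e-6 * resync_interval_s

print(guard_time_s(50, 1.0))     # resync every second: 0.1 ms of guard
print(guard_time_s(50, 3600.0))  # resync hourly: 0.36 s of guard per slot
```

With small data payloads, a 0.36 s guard can dwarf the useful listening time, which is why infrequent messaging makes good synchronization an energy issue rather than merely a throughput issue.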

 Uses common in traditional distributed systems

The uses of time synchronization we have described so far have been specific to sensor networks, relating to their unique requirements in distributed signal processing, energy efficiency, and localized computation. However, at its core, a sensor network is also a distributed system, where time synchronization of various forms has been used extensively for some time. Many of these more traditional uses apply in sensor networks as well. For example:

