
Logical Time

(Santhosh K Vallabhaneni)
Abstract: This document describes the concept of logical time. For many purposes, it is sufficient that all machines agree on the same time; it is not essential that this time also agrees with the real time announced on the radio every hour. For running make, for example, it is adequate that all machines agree that it is 10:00, even if it is really 10:02. Thus, for a certain class of algorithms, it is the internal consistency of the clocks that matters, not whether they are particularly close to real time. For these algorithms, it is conventional to use the concept of logical clocks, which this document discusses briefly.

1. Introduction

The notion of time is subtle in distributed systems, where message delays are usually variable and where processes often do not have access to a global clock or to perfectly synchronized local clocks. To model a distributed system, one typically considers processes which communicate by messages and which execute sequences of events (i.e., elementary or atomic actions). These events occur at specific instants in time and are usually classified into send events, receive events, and internal events. An execution of a distributed system on such an abstract level can be depicted with the help of a space-time diagram where time moves from left to right: messages are drawn as arrows, and events are depicted by dots.

Causality is the relationship between an event (the cause) and a second event (the effect), where the second event is understood as a consequence of the first. The concept of causality is a very important principle in the design and analysis of distributed operating systems. In a system of logical clocks, every process has a logical clock that is advanced using a set of rules. Every event is assigned a timestamp, and the causality of events can be determined from their timestamps.

Events are related to each other: events occurring at a particular process are linearly ordered by their local sequence of occurrence, and each receive event has a corresponding send event that happens earlier. Formally, one defines the causality relation < as the smallest transitive relation on the set of events such that e < e' holds for any two events e, e' if
1. e and e' happen at the same process and e is the immediate predecessor of e', or
2. e' is the receipt of a message which was sent by event e.

This article describes four ways to implement logical time in a distributed system:
1. Lamport's scalar clock
2. Vector timestamps
3. Matrix timestamps
4. Virtual time

2.0 Types of Logical Clock

2.1 Lamport's Scalar Clock

The time domain in this context is the set of non-negative integers. Rules R1 and R2 for scalar time are as follows.

R1: Before executing an event, process pi executes:
Ci = Ci + d (d > 0)
Typically d is kept at 1.

R2: Each message piggybacks the clock value of its sender at sending time. When process pi receives a message with timestamp Cmsg, it:
1. Sets Ci = max(Ci, Cmsg)
2. Executes R1
3. Delivers the message

However, a scalar clock cannot model causally independent events: distinct events may receive timestamps that suggest an ordering even when they are causally unrelated.
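Rules R1 and R2 above can be sketched in code as follows. This is a minimal illustration, assuming a single-threaded simulation; the class and method names are hypothetical, not from the original text.

```python
class ScalarClock:
    """A sketch of a Lamport scalar clock for one process."""

    def __init__(self, d=1):
        self.c = 0   # local clock Ci, starts at zero
        self.d = d   # increment d > 0; typically 1

    def tick(self):
        """R1: advance the clock before executing any event."""
        self.c += self.d
        return self.c

    def send(self):
        """Sender side of R2: tick, then piggyback the clock value."""
        return self.tick()

    def receive(self, c_msg):
        """Receiver side of R2: take the max, execute R1, then deliver."""
        self.c = max(self.c, c_msg)
        return self.tick()

# Usage: p2 receives a message stamped by p1.
p1, p2 = ScalarClock(), ScalarClock()
t = p1.send()        # p1's clock advances to 1; message carries t = 1
p1.tick()            # an internal event at p1; clock becomes 2
r = p2.receive(t)    # p2: max(0, 1) = 1, then R1 gives 2
```

Note that the receive rule guarantees the receive event gets a strictly larger timestamp than the corresponding send event, which is what makes scalar time consistent with causality.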

Figure 1 [1]: Evolution of scalar time with d = 1.

Here are the basic properties of scalar time:
- Consistency
- Total ordering
- Event counting

2.2 Vector Clock

Vector clocks are an algorithm for generating a partial ordering of events in a distributed system and detecting causality violations. As with Lamport's timestamps, interprocess messages carry the state of the sending process's logical clock. A vector clock for a system of N processes is an array (vector) of N logical clocks, one clock per process. The rules for clock updates are:
- Initially, all clocks are zero.
- Each time a process experiences an internal event, it increments its own logical clock in the vector by one.
- Each time a process prepares to send a message, it increments its own logical clock in the vector by one and then sends its entire vector along with the message.
- Each time a process receives a message, it increments its own logical clock in the vector by one and updates each element of its vector by taking the maximum of the value in its own vector clock and the corresponding value in the received message's vector.

More formally, we equip each process Pi with a clock Ci that consists of a vector of length n (where n is the total number of processes). Such a vector clock Ci is initialized to the null vector; it ticks immediately before the execution of an event by incrementing its own component: Ci[i] := Ci[i] + 1. Each message is timestamped with the current value of the sender's vector clock. When receiving a timestamped message, a process combines its own time knowledge Ci with the timestamp t it receives by performing Ci := max(Ci, t) componentwise.

Given below is an illustration of a vector clock diagram.

Figure 2 [2]: Evolution of vector time.

The main property of vector clocks is that they induce an isomorphism between causal structure and temporal structure: for all e, e' ∈ E: e < e' ⇔ C(e) < C(e'), where vector timestamps are compared componentwise.
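The update rules and the causal-order test above can be sketched as follows. This is an illustrative sketch, assuming n is known to every process in advance; the names are hypothetical.

```python
class VectorClock:
    """A sketch of a vector clock for process i out of n processes."""

    def __init__(self, i, n):
        self.i = i          # this process's index
        self.v = [0] * n    # Ci initialized to the null vector

    def tick(self):
        """Internal event: increment own component Ci[i]."""
        self.v[self.i] += 1

    def send(self):
        """Increment own component, then piggyback a copy of the vector."""
        self.tick()
        return list(self.v)

    def receive(self, t):
        """Ci := max(Ci, t) componentwise, plus own increment."""
        self.v = [max(a, b) for a, b in zip(self.v, t)]
        self.tick()

def happened_before(u, v):
    """u < v iff u <= v componentwise and u != v (causal order)."""
    return all(a <= b for a, b in zip(u, v)) and u != v

# Usage: p0 sends to p1 in a two-process system.
p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
t = p0.send()    # p0's vector becomes [1, 0]
p1.receive(t)    # p1's vector becomes [1, 1]
```

Two timestamps where neither is componentwise below the other (for example [1, 0] and [0, 1]) identify concurrent events, which is exactly the causal independence that scalar clocks cannot express.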

2.3 Matrix Clock

A matrix clock is a mechanism for capturing chronological and causal relationships in distributed systems. In this case, the logical global time is represented by an n × n matrix. Each site Si is endowed with a matrix mti[1..n, 1..n] whose entries have the following meaning:
- mti[i, i] is the logical local clock of Si, increasing as the computation at site Si progresses.
- mti[k, l] represents the view (or knowledge) that site Si has of the knowledge of Sk about the logical local clock of Sl.
The whole matrix mti constitutes Si's local view of the logical global time. Rules R1 and R2 are similar to the preceding ones for each site Si:

R1: Before producing an event: mti[i, i] := mti[i, i] + d (d > 0)

R2: Each message m piggybacks a matrix time mt. When it receives such a message (m, mt) from a site Sj, the site Si executes (before R1):
1 ≤ k ≤ n: mti[i, k] := max(mti[i, k], mt[j, k])
1 ≤ k, l ≤ n: mti[k, l] := max(mti[k, l], mt[k, l])
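A sketch of rules R1 and R2 for matrix clocks, using 0-based indexing instead of the 1..n notation above; the class and method names, and the `all_know` helper illustrating the minimum property of matrix clocks, are assumptions for illustration.

```python
class MatrixClock:
    """A sketch of a matrix clock for site i out of n sites."""

    def __init__(self, i, n):
        self.i, self.n = i, n
        self.mt = [[0] * n for _ in range(n)]  # n x n matrix of zeros

    def tick(self, d=1):
        """R1: mti[i, i] := mti[i, i] + d before producing an event."""
        self.mt[self.i][self.i] += d

    def send(self):
        """Piggyback a deep copy of the whole matrix on the message."""
        self.tick()
        return [row[:] for row in self.mt]

    def receive(self, j, mt):
        """R2: merge the matrix received from site Sj, then apply R1."""
        for k in range(self.n):                      # row i: learn Sj's row
            self.mt[self.i][k] = max(self.mt[self.i][k], mt[j][k])
        for k in range(self.n):                      # elementwise max merge
            for l in range(self.n):
                self.mt[k][l] = max(self.mt[k][l], mt[k][l])
        self.tick()

    def all_know(self, l):
        """min over k of mti[k, l]: every site is known (to Si) to have
        seen Sl's local clock up to at least this value."""
        return min(self.mt[k][l] for k in range(self.n))

# Usage: S0 sends a message to S1 in a two-site system.
s0, s1 = MatrixClock(0, 2), MatrixClock(1, 2)
m = s0.send()       # s0's matrix becomes [[1, 0], [0, 0]]
s1.receive(0, m)    # s1's matrix becomes [[1, 0], [1, 1]]
```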

Figure 3 [3]: Evolution of matrix time.

Matrix clocks have all the properties of vector clocks, and in addition the following property:

min over k of mti[k, l] ≥ t ⇒ site Si knows that every other site Sk knows that Sl's local time has progressed up to t.

2.4 Virtual Time

Virtual time is a global, one-dimensional, temporal coordinate system imposed on a distributed computation to measure computational progress and to define synchronization.

In this model, a distributed system executes in coordination with a virtual clock that uses virtual time, and virtual time is totally ordered by the less-than relation <. Processes run concurrently and communicate with each other by exchanging messages. Every message is characterized by four values:
i. Name of the sender
ii. Virtual send time
iii. Name of the receiver
iv. Virtual receive time

Rules R1 and R2 are implemented as follows with virtual time:
R1: The virtual send time of each message < the virtual receive time of that message.
R2: The virtual time of each event in a process < the virtual time of the next event in that process.

The characteristics of virtual time are as follows:
1. Virtual time systems are not all isomorphic; they may be discrete or continuous.
2. Virtual time can be partially ordered.
3. Virtual time can be related to real time or be independent of it.
4. Virtual time values can be manipulated explicitly according to system-defined rules.
5. Virtual times associated with events can be calculated by user programs or defined by a fixed set of rules.
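The four message values and rules R1 and R2 above can be sketched as simple checks over a recorded execution trace. The record layout and function names here are hypothetical illustrations, not part of any real virtual-time implementation.

```python
from collections import namedtuple

# Each message carries the four values listed above.
Message = namedtuple("Message", "sender vsend receiver vreceive")

def check_r1(messages):
    """R1: virtual send time < virtual receive time for every message."""
    return all(m.vsend < m.vreceive for m in messages)

def check_r2(event_times_per_process):
    """R2: virtual times of successive events in each process
    are strictly increasing."""
    return all(a < b
               for times in event_times_per_process
               for a, b in zip(times, times[1:]))

# A small trace: P1 sends to P2 at virtual time 10 (received at 15),
# then P2 sends to P1 at virtual time 20 (received at 25).
trace = [Message("P1", 10, "P2", 15), Message("P2", 20, "P1", 25)]
ok = check_r1(trace) and check_r2([[10, 25], [15, 20]])
```

A scheduler that preserves R1 and R2 over every trace is what makes virtual time usable as a synchronization coordinate system.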

3. Conclusion

Scalar clocks are easier to implement than the other clock techniques; their message and computation overhead is small, but their capabilities are limited. Even though the overheads associated with vector clocks are higher, the isomorphism between vector time and causality makes them more powerful than scalar clocks. Matrix clocks also have high overheads, but they carry information about the latest direct dependencies present in the system; hence matrix clocks are used in applications such as distributed garbage collection. Virtual time is a paradigm for organizing and synchronizing distributed systems.

4. References

[1], [2], [3]: Ajay D. Kshemkalyani and Mukesh Singhal, Distributed Computing: Principles, Algorithms, and Systems.
Wikipedia (www.wikipedia.com)
