In the area of system/software verification, liveness means that something good
must eventually happen. A counterexample to a liveness property (against a
finite-state system) is typically a loop (or a deadlock state, which is viewed
as a trivial loop) during which the good thing never occurs.

Fairness, which is concerned with a fair resolution of non-determinism, is
often necessary to prove liveness properties. Fairness is an abstraction of the
fair scheduler (e.g., the random scheduler is a rather strong fair scheduler)
in a multi-threaded programming environment, or of the relative speed of the
processors in distributed systems. Without fairness, verification of liveness
properties often produces unrealistic loops during which one process or event
is ignored by the scheduler forever, or one processor is infinitely faster than
the others. It is important to rule out those counterexamples and to spend the
computational resources on identifying real bugs. However, systematically
ruling out counterexamples that are merely due to the lack of fairness is
non-trivial. It requires flexible specification of fairness as well as
efficient verification under fairness. The LTL model checking algorithm in PAT
is designed to handle a variety of fairness constraints efficiently. This is
partly motivated by recently developed population protocols, which only work
under weak fairness, strong local fairness or strong global fairness. The other
motivation is that the current practice of verification under fairness is
deficient. In the following, we briefly introduce the different notions of
fairness.

Models in PAT are implicitly interpreted as Labeled Transition Systems (LTSs)
by defining a complete set of operational semantics. A Labeled Transition System
is a 3-tuple (S, init, T) where S is a set of states, init is
an initial state, and T is a labeled transition relation. Because fairness
affects infinite rather than finite system behaviors, we focus on infinite
system executions in the following. Finite behaviors are extended to infinite
ones by appending infinitely many idling actions at the end. Given an LTS (S,
init, T), an execution is an infinite sequence of alternating states and
actions E = <s0, a0, s1, a1, ...> where s_0 = init and every step
(s_i, a_i, s_(i+1)) conforms to the transition relation. Without fairness
constraints, a system may behave freely as long as it starts in the initial
state and conforms to the transition relation. A fairness constraint restricts
the set of system behaviors to only the fair ones. Verification under fairness
checks whether all fair executions satisfy a given property. In the following,
we review a variety of fairness constraints and illustrate their
differences using examples.
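Before turning to the individual notions, the sketch below shows one possible,
purely illustrative encoding of an LTS and of a lasso-shaped execution (a
finite prefix followed by a loop that repeats forever). The class and field
names are our own assumptions for this page, not PAT's internal data
structures; the fairness sketches that follow operate on the loop part of such
a lasso.

```python
# Illustrative encoding (assumed names, not PAT's internals) of an LTS and of
# a lasso-shaped execution: a finite prefix followed by a loop repeated forever.
from dataclasses import dataclass, field
from typing import List, Set, Tuple

State = object
Action = str
Step = Tuple[State, Action, State]   # (source state, action label, target state)

@dataclass
class LTS:
    states: Set[State]
    init: State
    transitions: Set[Step]           # the labeled transition relation T

    def enabled(self, s: State) -> Set[Action]:
        """Actions enabled in state s."""
        return {a for (src, a, _) in self.transitions if src == s}

@dataclass
class Lasso:
    """An infinite execution represented finitely as prefix . loop^omega."""
    prefix: List[Step] = field(default_factory=list)
    loop: List[Step] = field(default_factory=list)   # repeated forever

    def loop_states(self) -> Set[State]:
        return {s for (s, _, _) in self.loop}

    def loop_actions(self) -> Set[Action]:
        return {a for (_, a, _) in self.loop}
```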
Process-level Weak Fairness

E satisfies process-level weak fairness (PWF) if and only if, for every process
P, if P eventually becomes enabled forever in E, then P participates in a_i for
infinitely many i, i.e., (<>[] P is enabled) implies ([]<> P is engaged).

Note: In figure (a) above, the property []<>a is false under PWF because the
process W may make progress infinitely often (by repeatedly engaging in b)
without ever engaging in event a. Alternatively, if the system is modeled using
two processes as shown in figure (b), []<>a becomes true under PWF because both
processes must make infinite progress and therefore both a and b must be
engaged infinitely often. In general, process-level fairness is related to the
system structure.
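On a lasso-shaped counterexample, "eventually always enabled" reduces to
"enabled in every state of the loop", and "engaged infinitely often" reduces to
"engaged somewhere in the loop". The following sketch checks PWF on the loop
part of a lasso; the helpers actions_of and enabled_actions are assumptions
made for this example, not PAT's API.

```python
# Hedged sketch of process-level weak fairness on the loop of a lasso.
from typing import Callable, Dict, List, Set, Tuple

Step = Tuple[object, str, object]   # (state, action, next_state)

def satisfies_pwf(loop_steps: List[Step],
                  actions_of: Dict[str, Set[str]],            # process -> its actions
                  enabled_actions: Callable[[object], Set[str]]) -> bool:
    loop_states = {s for (s, _, _) in loop_steps}
    loop_actions = {a for (_, a, _) in loop_steps}
    for proc, acts in actions_of.items():
        # On a loop, "eventually always enabled" = enabled in every loop state.
        enabled_forever = all(acts & enabled_actions(s) for s in loop_states)
        engaged = bool(acts & loop_actions)
        if enabled_forever and not engaged:
            return False   # proc stays enabled on the loop but is never engaged
    return True
```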
Process-level Strong Fairness

E satisfies process-level strong fairness (PSF) if and only if, for every
process P, if P is infinitely often enabled, then P participates in a_i for
infinitely many i, i.e., ([]<> P is enabled) implies ([]<> P is engaged).

Note: Given the LTS in figure (a) above, the property []<>b is true under PSF.
Because b is infinitely often enabled, the system must engage in b infinitely
often, which makes the property []<>b true. As for figure (b), the property
[]<>c is false under PWF but true under PSF. The reason is that event c is
guarded by the condition x = 1 and
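On a loop, "infinitely often enabled" reduces to "enabled in at least one state
of the loop", so the check only differs from the weak-fairness sketch above in
weakening all(...) to any(...). Again, the helper names are assumptions for
illustration.

```python
# Hedged sketch of process-level strong fairness on the loop of a lasso.
from typing import Callable, Dict, List, Set, Tuple

Step = Tuple[object, str, object]   # (state, action, next_state)

def satisfies_psf(loop_steps: List[Step],
                  actions_of: Dict[str, Set[str]],            # process -> its actions
                  enabled_actions: Callable[[object], Set[str]]) -> bool:
    loop_states = {s for (s, _, _) in loop_steps}
    loop_actions = {a for (_, a, _) in loop_steps}
    for proc, acts in actions_of.items():
        # On a loop, "infinitely often enabled" = enabled in some loop state.
        enabled_infinitely_often = any(acts & enabled_actions(s) for s in loop_states)
        engaged = bool(acts & loop_actions)
        if enabled_infinitely_often and not engaged:
            return False   # proc is repeatedly enabled on the loop but never engaged
    return True
```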
therefore is not enabled continuously but only infinitely often; hence only
strong fairness forces c to be engaged.

Strong Global Fairness

E satisfies strong global fairness (SGF) if and only if, for every (s, a, s')
in T, if s = s_i for infinitely many i, then s_i = s and a_i = a and
s_(i+1) = s' for infinitely many i. Strong global fairness was suggested by
Fischer and Jiang. It states that if a step (from s to s' by engaging in action
a) can be taken infinitely often, then it must actually be taken infinitely
often. Different from the above fairness notions, strong global fairness
concerns both states and actions, instead of actions only. It can be shown by a
simple argument that strong global fairness is stronger than strong fairness.
Strong global fairness requires that an infinitely enabled action must be taken
infinitely often in all contexts, whereas event-level strong fairness only
requires the enabled action to be taken in some context.

Note: The figure above illustrates the difference with two examples. State 2 in
figure (a) may never be visited under event-level strong fairness, because all
events are already engaged infinitely often if the left loop is taken forever.
As a result, the property []<> state 2 may be false. Under SGF, however, all
states in (a) must be visited infinitely often and therefore []<> state 2 is
true. Figure (b) illustrates the difference when there is non-determinism. Both
transitions labeled a must be taken infinitely often under SGF; thus, the
property []<> b is true only under
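On a lasso, strong global fairness amounts to: every transition of the LTS
whose source state appears on the loop must itself be taken somewhere on the
loop. A self-contained sketch of this check, with illustrative names only:

```python
# Hedged sketch of strong global fairness on the loop of a lasso.
from typing import Iterable, List, Set, Tuple

Step = Tuple[object, str, object]   # (state, action, next_state)

def satisfies_sgf(loop_steps: List[Step], transitions: Iterable[Step]) -> bool:
    loop_states: Set[object] = {s for (s, _, _) in loop_steps}
    loop_step_set: Set[Step] = set(loop_steps)
    for (s, a, t) in transitions:
        # A step whose source is visited infinitely often must itself be taken.
        if s in loop_states and (s, a, t) not in loop_step_set:
            return False
    return True
```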
Two different approaches to verification under fairness are supported in PAT,
targeting different users. For ordinary users, one of the fairness notions may
be chosen (in the Verification window; refer to the Verifier) and applied to
the whole system. The model checking algorithm works by examining one fair
execution at a time and checking whether the desired property is satisfied.
Notice that unfair executions are considered unrealistic and therefore are not
reported as counterexamples. Because of the fairness constraints, nested
depth-first search is not feasible, and therefore the algorithm is based on an
improved version of Tarjan's algorithm for identifying strongly connected
components (details will be revealed in Section 4.2). We have successfully
applied it to prove or disprove (with a counterexample) a range of
systems where fairness is essential.
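The following is a highly simplified sketch, not PAT's actual algorithm, of the
general SCC-based idea: instead of nested DFS, compute the strongly connected
components reachable in the search graph and report a violation only when some
SCC both witnesses the property violation and admits a fair loop. The
predicates is_bad_scc and admits_fair_loop are placeholders for the property-
and fairness-specific checks; for strong and strong global fairness, PAT
additionally prunes states/events and re-examines sub-SCCs.

```python
# Simplified SCC-based search sketch (illustrative only).
from typing import Callable, Dict, Iterable, List, Set

def tarjan_sccs(succ: Dict[object, Iterable[object]], init: object) -> List[Set[object]]:
    """Tarjan's algorithm over the states reachable from init."""
    index: Dict[object, int] = {}
    low: Dict[object, int] = {}
    on_stack: Set[object] = set()
    stack: List[object] = []
    sccs: List[Set[object]] = []
    counter = 0

    def strongconnect(v: object) -> None:
        nonlocal counter
        index[v] = low[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in succ.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:            # v is the root of an SCC
            scc: Set[object] = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    strongconnect(init)
    return sccs

def holds_under_fairness(succ: Dict[object, Iterable[object]], init: object,
                         is_bad_scc: Callable[[Set[object]], bool],
                         admits_fair_loop: Callable[[Set[object]], bool]) -> bool:
    """True iff no reachable SCC yields a fair counterexample loop.
    In practice, trivial SCCs without self-loops should be skipped."""
    return not any(is_bad_scc(scc) and admits_fair_loop(scc)
                   for scc in tarjan_sccs(succ, init))
```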
In general, however, system-level fairness may sometimes be overwhelming. The
worst-case complexity is high and, worse, partial order reduction is not
feasible for model checking under strong fairness or strong global fairness. A
typical scenario for network protocols is that fairness constraints are
associated only with messaging but not with local actions. PAT therefore
supports an alternative (and more flexible) approach, which allows users to
annotate individual actions with fairness, i.e., to associate fairness with
only part of the system or to associate different parts with different fairness
constraints. Notice that this option is intended only for advanced users who
know exactly which parts of the system need fairness constraints. Nevertheless,
this approach is much more flexible, since different parts of the system may
have different fairness. Furthermore, it allows partial order reduction over
actions that are irrelevant to the fairness constraints, which lets us handle
much larger systems. Refer to the language reference on how the different kinds
of annotations can be associated with individual actions.
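As a rough illustration of action-level fairness (the annotation syntax itself
is documented in the language reference and is not reproduced here), the sketch
below enforces fairness only for the actions a user has marked, leaving all
other actions unconstrained and therefore available for partial order
reduction. Every helper name here is an assumption made for the example.

```python
# Hedged sketch: fairness restricted to user-annotated actions, checked on the
# loop of a lasso-shaped execution.
from typing import Callable, List, Set, Tuple

Step = Tuple[object, str, object]   # (state, action, next_state)

def fair_wrt_annotations(loop_steps: List[Step],
                         weak_fair_actions: Set[str],
                         strong_fair_actions: Set[str],
                         enabled_actions: Callable[[object], Set[str]]) -> bool:
    loop_states = {s for (s, _, _) in loop_steps}
    taken = {a for (_, a, _) in loop_steps}
    for a in weak_fair_actions:      # annotated weak fairness: always enabled => taken
        if all(a in enabled_actions(s) for s in loop_states) and a not in taken:
            return False
    for a in strong_fair_actions:    # annotated strong fairness: repeatedly enabled => taken
        if any(a in enabled_actions(s) for s in loop_states) and a not in taken:
            return False
    return True                      # unannotated actions impose no constraint
```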