THE POST CORRESPONDENCE PROBLEM
AS A TOOL FOR PROVING UNDECIDABILITY
IN LOGICS OF PROBABILISTIC PROGRAMS
Wiktor DANKO, Jolanta KOSZELEW
Department of Computer Science
Technical University of Białystok
Wiejska 45A, 15-351 Białystok, Poland
e-mail: danko@ii.pb.bialystok.pl
Abstract. We present a simple proof of undecidability of the set of tautologies
of Probabilistic Algorithmic Logic (cf. [1]). It is based on the well-known
Post Correspondence Problem (e.g., cf. [6]).
1. Introduction.
Logics for probabilistic programs, e.g., Probabilistic Dynamic Logic (cf. [4]) and Probabilistic Algorithmic Logic (cf. [1]), are highly undecidable formal systems. To establish their degrees of undecidability we need advanced techniques of recursion theory (cf. [4,8]). Nevertheless, the Post Correspondence Problem is sufficient to prove the undecidability of such systems.
2. The Post Correspondence Problem.
We recall basic notions related to the problem mentioned in the title of this section (cf. [6]).
Let A = {φ1, ..., φq} be a finite alphabet, and let S, T be two sequences of strings (words) on A, say
S = <w1 , ..., ws > ,
T = <v1 , ..., vs > .
We say that there exists a Post correspondence solution (PC-solution) for the pair (S,T) if there is a nonempty sequence of integers <m1, ..., mr>, m1, ..., mr ∈ {1,...,s}, such that the strings
w_{m1} w_{m2} ... w_{mr}   and   v_{m1} v_{m2} ... v_{mr}
are identical.
The Post correspondence problem is to devise an algorithm that will tell us for any (S,T) whether or not there exists a PC-solution.
2.1. Theorem (cf. [6]).
The Post correspondence problem is undecidable.
∎
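Although Theorem 2.1 excludes a general decision procedure, a candidate index sequence can of course be checked mechanically, and solutions can be searched for by bounded enumeration; in particular, no computable bound on the length of a shortest PC-solution can exist, for otherwise the search below would decide the problem. A minimal Python sketch (our own illustration; the function names are invented for this sketch):

from itertools import product

def is_pc_solution(S, T, indices):
    """Check whether the (0-based) index sequence is a PC-solution for (S, T)."""
    if not indices:
        return False
    return ''.join(S[m] for m in indices) == ''.join(T[m] for m in indices)

def bounded_pc_search(S, T, max_length):
    """Enumerate index sequences of length 1..max_length and return the first
    PC-solution found, or None.  Failure up to max_length proves nothing."""
    for r in range(1, max_length + 1):
        for indices in product(range(len(S)), repeat=r):
            if is_pc_solution(S, T, indices):
                return indices
    return None

# Example: S = <a, ab, bba>, T = <baa, aa, bb> has the PC-solution <3, 2, 3, 1>
# (1-based, as in the text above; the code below uses 0-based indices).
S = ['a', 'ab', 'bba']
T = ['baa', 'aa', 'bb']
print(is_pc_solution(S, T, (2, 1, 2, 0)))   # True
print(bounded_pc_search(S, T, 4))           # (2, 1, 2, 0)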
3. Probabilistic Algorithmic Logic.
We shall use an abstract programming language LP, which is an extension of a first-order language L, based on a countable set V = {x0, x1, x2, ...} of individual variables (denoted briefly by x, y, z, ...), a set F = {f, g, ...} of function symbols, a set R = {r, s, ...} of predicate symbols and a set C = {c, d, ...} of constant symbols.
Let ℑ = <U, f, g, ..., r, s, ..., c, d, =, ...> be a structure for L with the universe U. By a valuation of the variables from the set Xk = {x0, x1, ..., xk-1} we understand any mapping w: Xk → U. The set of all valuations of the variables from Xk will be denoted by Wk. The fact that a formula α(x0, ..., xk-1) of L is satisfied at a valuation w will be denoted by ℑ, w ⊨ α (we write α(x0, ..., xk-1) instead of α provided that all free variables of α are among {x0, ..., xk-1}).
Probabilistic programs of LP are understood as iterative programs with the typical program constructions begin ... ; ... end, if ... then ... else ..., while ... do ... (tests are assumed to be open formulas of L) and with two probabilistic constructions: x := ? and either ... or ..., interpreted as follows: the first corresponds to a random generation of a value of the variable x, and each part of the second is chosen with probability 0.5. We denote by ρ the probability distribution corresponding to the random generation of elements in the realization of assignments of the form x := ?. Informally,
ρ : U → [0,1] ,   Σ_{u∈U} ρ(u) = 1.
By a probabilistic structure for LP we understand the pair <ℑ, ρ>, denoted by ℑρ.
We recall that a deterministic iterative program K(x0, ..., xk-1) is interpreted in ℑ as a (partial) function transforming (input) valuations into (output) valuations (cf. [1,4]):
Kℑ : Wk → Wk .
(Similarly to the case of formulas, we write K(x0, ..., xk-1) instead of K, provided that all free variables of K are among {x0, ..., xk-1}; a formal definition of the set of free variables of a program is given in [7].)
In the case of probabilistic programs, each possible valuation of the program variables is assumed to appear with some probability. A probabilistic program P is interpreted in the structure <ℑ,ρ> as a (total) mapping transforming input (sub)distributions into output (sub)distributions (cf. [1,4]):
P<ℑ,ρ> : M → M ,
where P<ℑ,ρ>(μ) denotes the output subdistribution realized by the program P in the structure <ℑ,ρ> at the initial subdistribution μ. We can illustrate this situation as follows:
(w1, μ1)  ──▶  P  ──▶  (w1, η1)
(w2, μ2)  ──▶  P  ──▶  (w2, η2)
   ...                    ...
For details we refer the reader to the paper [1]. An example illustrating the notions introduced above is given in the Appendix (Example 6.2). It remains to explain why we use the term "(sub)distribution" instead of "distribution":
3.1. Remark.
Note that if a program does not terminate any computation, then the output subdistribution μ satisfies μ(w) = 0 for every valuation w. ∎
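The following minimal Python sketch (our own illustration, not the formalism of [1]) models subdistributions as dictionaries and shows how a program with a non-terminating branch produces an output of total mass smaller than 1; the program and the representation are invented for this example.

from typing import Dict

# A valuation is represented here simply by the value of the single variable x;
# a subdistribution is a mapping from valuations to probabilities.
Valuation = str
SubDist = Dict[Valuation, float]

def either_diverge_or_assign_a(mu: SubDist) -> SubDist:
    """Interpretation of the hypothetical program: either (while true do x := x) or x := a."""
    out: SubDist = {}
    for valuation, prob in mu.items():
        # branch 1 (probability 0.5): never terminates, so it contributes no output mass;
        # branch 2 (probability 0.5): x := a, so the mass moves to the valuation 'a'.
        out['a'] = out.get('a', 0.0) + 0.5 * prob
    return out

mu = {'a': 0.25, 'b': 0.75}            # an input distribution of total mass 1
eta = either_diverge_or_assign_a(mu)
print(eta, sum(eta.values()))          # {'a': 0.5} 0.5  -- a proper SUBdistribution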
The language LP contains an additional set Vℜ of variables ranging over the set ℜ of real numbers and the set of symbols {+, −, ×, /, <, 0, 1}, interpreted in the standard way in the ordered field ℜ of real numbers. These variables are used to denote probabilities of transitions (in program computations) from an input valuation to an output valuation and/or probabilities that a formula is satisfied.
Thus LP is a two-sorted language with the set T of terms (defined in the standard way) and the set Tℜ of arithmetical terms, defined as the least set satisfying:
- 0, 1 ∈ Tℜ and each variable from Vℜ belongs to Tℜ ,
- if α is a formula of L then P(α) belongs to Tℜ ,
- if t′, t″ belong to Tℜ then t′+t″, t′−t″, t′·t″, t′/t″ belong to Tℜ .
The language LP contains the set FP of probabilistic algorithmic formulas, which are used to express properties of probabilistic programs. FP is defined as the least set of expressions satisfying:
- if t′, t″ belong to Tℜ then (t′ = t″) and (t′ < t″) belong to FP ,
- if K is a probabilistic program of LP and A is a formula of FP then KA belongs to FP ,
- if A, B ∈ FP then ¬A, (A ∧ B), (A ∨ B), (A → B) ∈ FP ,
- if A ∈ FP and r ∈ Vℜ then (∃r)A, (∀r)A ∈ FP .
The intuitive meaning of a formula of the form KA is the following:
KA is satisfied at an initial state μ iff the program K ends its computation at a final state μ′ and the formula A is satisfied at the state μ′.
We omit here the formal definition of the interpretation of the language of Probabilistic Algorithmic (Dynamic) Logic and refer the reader to the papers [1,4]. In Section 4 we consider the case of finite interpretations, where the intuitive meaning of the interpretation is simpler.
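To make the two-sorted syntax concrete, here is a minimal Python sketch of one possible representation of arithmetical terms and probabilistic algorithmic formulas; it is our own illustration, not the formalization of [1], and all class names are invented. For simplicity P may here be applied to any formula of the sketch, which covers the usage made in Section 4.

from dataclasses import dataclass
from typing import Union

@dataclass
class RealVar:
    name: str                          # a variable from Vℜ

@dataclass
class Const:
    value: int                         # 0 or 1

@dataclass
class Prob:
    formula: "Formula"                 # P(alpha)

@dataclass
class Arith:
    op: str                            # '+', '-', '*' or '/'
    left: "ATerm"
    right: "ATerm"

ATerm = Union[RealVar, Const, Prob, Arith]

@dataclass
class Atom:
    text: str                          # an open formula of the base language L, kept abstract

@dataclass
class Compare:
    left: ATerm
    op: str                            # '<' or '='
    right: ATerm

@dataclass
class Connective:
    op: str                            # 'not', 'and', 'or' or 'implies'
    args: tuple

@dataclass
class Modal:
    program: str                       # a probabilistic program K, kept abstract here
    formula: "Formula"                 # the formula A in KA

@dataclass
class Quantifier:
    kind: str                          # 'exists' or 'forall' (over a real variable)
    var: RealVar
    formula: "Formula"

Formula = Union[Atom, Compare, Connective, Modal, Quantifier]

# The halting property used in Section 4, P(K(S,T)(gamma or not gamma)) > 0:
gamma = Atom("gamma")
halting = Compare(Const(0), "<",
                  Prob(Modal("K(S,T)",
                             Connective("or", (gamma, Connective("not", (gamma,)))))))
print(halting)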
4. Decidable and Undecidable Problems in Probabilistic Algorithmic Logic.
Probabilistic Dynamic Logic (PrDL) is the system for reasoning about probabilistic programs proposed by Feldman and Harel in [4]. This system differs from the system of Probabilistic Algorithmic Logic in two respects:
- the language of PrDL contains disjoint sets of variables for reals and for integers;
- the results in [4] are presented under the additional assumption that the universe of interpretation is the set of real numbers or, more generally, an arithmetical universe (cf. [4] for the definition of an arithmetical universe).
We would like to emphasize that in the case of Probabilistic Algorithmic Logic we do not use variables for integers, the values of real variables are not changed during program computations, and the field of reals is an external tool for the probabilistic estimation of the behaviour of algorithms.
4.1. Remark (cf. [4]).
In [4], Feldman and Harel mention that the set of tautologies of Probabilistic Dynamic Logic is highly undecidable (for information about the position of this set in the Kleene-Mostowski hierarchy, we refer the reader to [4,8]). In [4] Feldman and Harel present an axiom system for Probabilistic Dynamic Logic that is complete relative to an extension of first-order analysis (second-order arithmetic is definable in first-order analysis, cf. [8]). Therefore Probabilistic Dynamic Logic does not admit a complete axiomatization in the classical sense. ∎
In the paper [4] the following question is formulated:
when is first-order analysis, without integer variables, sufficient to describe properties of probabilistic algorithms?
We answer this question in [1]:
4.2. Theorem (cf. [1]).
The set of formulas of Probabilistic Algorithmic Logic valid in a fixed finite structure ℑ is decidable with respect to the diagram of the structure ℑ. ∎
By the diagram of a structure ℑ for L we understand the set of all atomic formulas valid in ℑ (for each element of U = |ℑ| there is a constant symbol in L). The proof consists in reducing the validity of a formula α of LP in ℑ to the validity of a corresponding formula α′ in the ordered field ℜ of reals (cf. [1]). Some facts related to the proof of this theorem are given in the Appendix.
Moreover, this theorem enables us to construct an effective axiom system sufficient to prove all valid sentences in a given finite structure ℑ, provided that the diagram of ℑ is taken as the set of specific axioms.
Now we shall demonstrate that the proof of undecidability of the set of tautologies of Probabilistic Algorithmic Logic does not require advanced results of recursion theory and can be based on the Post Correspondence Problem.
Let S = <w1, ..., ws>, T = <v1, ..., vs> be two sequences of strings on A = {φ1, ..., φq}. We shall treat the symbols of A as one-argument function symbols of the language LP. Denote by K(S,T) the following program of LP:
begin
  y := x ; z := x ;
  repeat
    either
      begin
        y := w1(y) ; z := v1(z) ;
      end
    or either
      begin
        y := w2(y) ; z := v2(z) ;
      end
    or either
        .
        .
        .
    or either
      begin
        y := ws-1(y) ; z := vs-1(z) ;
      end
    or
      begin
        y := ws(y) ; z := vs(z) ;
      end ;
  until y = z ;
end.
Note that the repeat ... until ... construction used in this program can easily be eliminated in favour of the while ... do ... construction, and therefore this program can be viewed as a program of LP. The subprogram of K(S,T) contained between repeat and until will be denoted by K′.
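To see the reduction at work, the following Python sketch (our own illustration; the helper name simulate_K and the uniform branch choice are simplifications we introduce) simulates runs of K(S,T) over strings, anticipating the Herbrand interpretation used in point (2) below; a run halts exactly when the accumulated strings built from S and from T coincide.

import random

def simulate_K(S, T, x="", max_steps=200):
    """Simulate one random run of (a simplified version of) the program K(S,T).

    Values of y and z are strings; y := w_i(y) is modelled as prepending the
    string w_i, and the index i is drawn uniformly instead of via the nested
    0.5-weighted either ... or ..., which changes the probabilities but not
    which events have positive probability.  Returns the iteration at which
    y = z held, or None if the step bound was reached (the real program may
    simply run forever).
    """
    y, z = x, x
    for step in range(1, max_steps + 1):
        i = random.randrange(len(S))
        y, z = S[i] + y, T[i] + z
        if y == z:
            return step
    return None

random.seed(0)

S, T = ['a', 'ab', 'bba'], ['baa', 'aa', 'bb']      # has a PC-solution, so some runs halt
runs = [simulate_K(S, T) for _ in range(2000)]
print(sum(r is not None for r in runs), "of 2000 runs halted")

S2, T2 = ['ab'], ['a']                              # no PC-solution (lengths never match),
runs2 = [simulate_K(S2, T2) for _ in range(2000)]   # so no run can ever halt
print(sum(r is not None for r in runs2), "of 2000 runs halted")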
The following facts reduce the halting problem for the program K(S,T) to the Post Correspondence Problem; the fact that the program K(S,T) halts with a positive probability can be expressed by the formula of LP of the form P(K(S,T)(γ ∨ ¬γ)) > 0, where γ is an open formula of L.
(1) Suppose that the sequence <m1, ..., mr>, m1, ..., mr ∈ {1,...,s}, is a PC-solution for the pair (S,T), S = <w1, ..., ws>, T = <v1, ..., vs>. Let μ be a distribution such that some value u of the variable x appears with a positive probability.
From the definition of the interpretation of the either ... or ... construction it follows that after r repetitions of the subprogram K′ the probability of realization of the following sequence of assignments is positive:
y := w_{m1}(y) ; z := v_{m1}(z) ;
y := w_{m2}(y) ; z := v_{m2}(z) ;
. . .
y := w_{mr}(y) ; z := v_{mr}(z) ;
Note that at the beginning of the realization of the repeat ... until ... construction the initial values of the variables y, z are identical (the common value is the initial value of x).
Since <m1, ..., mr> is a PC-solution for (S,T), the strings
w_{m1} w_{m2} ... w_{mr}   and   v_{m1} v_{m2} ... v_{mr}
are identical. Thus the values of the terms
w_{mr}(... w_{m2}(w_{m1}(y)) ...)   and   v_{mr}(... v_{m2}(v_{m1}(z)) ...)
are identical, which means that the probability that K(S,T) ends its computation is positive.
∎
(2) Now suppose that for the pair (S,T), S = <w1, ..., ws>, T = <v1, ..., vs>, there is no PC-solution, i.e., for each sequence <m1, ..., mr>, m1, ..., mr ∈ {1,...,s}, the strings
w_{m1} w_{m2} ... w_{mr}   and   v_{m1} v_{m2} ... v_{mr}
are different.
Let us consider a Herbrand interpretation ℑ of LP such that its universe is A* (i.e., it consists of all strings on A) and, for every function symbol φ of A = {φ1, ..., φq}, the interpretation φℑ : A* → A* is defined as follows:
φℑ(w) is the string φw of A*.
It is easy to note that for each initial value v of x, for each number r of repetitions of the subprogram K′ during the realization of K(S,T), and for the sequence of assignments corresponding to these repetitions
y := w_{m1}(y) ; z := v_{m1}(z) ;
y := w_{m2}(y) ; z := v_{m2}(z) ;
. . .
y := w_{mr}(y) ; z := v_{mr}(z) ;
the values of y, z after r repetitions of K′ are the strings
w_{mr} ... w_{m2} w_{m1} v   and   v_{mr} ... v_{m2} v_{m1} v
respectively. These strings are different, since by assumption the strings w_{mr} ... w_{m1} and v_{mr} ... v_{m1} are different.
This means that the halting formula (the exit test y = z) of the repeat ... until ... instruction of K(S,T) cannot be satisfied, i.e., K(S,T) halts with probability 0.
∎
From points (1) and (2) we obtain the following equivalence:
4.4. Corollary.
Let S, T be two sequences of strings on A, and let K(S,T) have the meaning given above. The halting formula of the program K(S,T) can be satisfied in an interpretation ℑ if and only if the Post Correspondence Problem has a PC-solution for (S,T).
∎
Thus the undecidability of the Post Correspondence Problem implies the undecidability of the validity of halting formulas of Probabilistic Algorithmic Logic, and this proves the following theorem:
4.5. Theorem.
The set of tautologies of the Probabilistic Algorithmic Logic is undecidable.
∎
We end this section with a remark:
4.6. Remark.
The Post Correspondence Problem can also be used to prove the undecidability of the set of tautologies in logics of nondeterministic programs, like the Algorithmic Logic of Nondeterministic Programs (cf. [8]).
∎
5. References.
[1] Danko W., The set of probabilistic algorithmic formulas valid in a finite structure is decidable with respect to its diagram, Fundamenta Informaticae, vol. 19 (3-4), 1993, 417-431.
[2] Danko W., Koszelew J., Properties of Probabilistic Algorithms Provable in First-Order Analysis, submitted to Fundamenta Informaticae.
[3] Danko W., A Criterion of Undecidability of Algorithmic Theories, Fundamenta Informaticae, vol. IV.3, 1981.
[4] Feldman Y.A., Harel D., A Probabilistic Dynamic Logic, ACM Journal of Comp., 1982, 181-195.
[5] Feller W., An Introduction to Probability Theory, John Wiley & Sons, Inc., New York, London, 1961.
[6] Linz P., An Introduction to Formal Languages and Automata, D.C. Heath and Co., Lexington, Massachusetts, Toronto, 1990.
[7] Mirkowska G., Salwicki A., Algorithmic Logic, D. Reidel Publ. Co. & PWN, Warsaw, 1987.
[8] Rogers H.J., Theory of Recursive Functions and Effective Computability, McGraw-Hill Book Co., New York, Toronto, London, Sydney, 1967.
[9] Skowron A., Data Filtration: A Rough Set Approach, in: W. Ziarko (ed.), Rough Sets, Fuzzy Sets and Knowledge Discovery, Workshops in Computing, Springer-Verlag & British Computer Society, London, Berlin, 1994, 108-118.
[10] Stapp L., The normal form theorem for nondeterministic programs, University of Warsaw, 1979, unpublished manuscript.
6. Appendix.
We recall here some facts related to the proof of Theorem 4.2. In particular, we give a lemma, proved in [1], on which the proof of Theorem 4.2 depends in an essential manner. For details we refer the reader to the paper [1].
Let U = {u1, u2, ..., un} and let us denote by ρ the probability distribution corresponding to the realization of random assignments of the form x := ? :
ρ : U → [0,1] ,   ρ(u1) + ρ(u2) + ... + ρ(un) = 1.
Let P(x0, ..., xk-1) be a program. Since U = {u1, ..., un} and Xk = {x0, x1, ..., xk-1}, the set Wk consists of N = n^k elements, Wk = {w1, w2, ..., wN}. Thus each distribution μ can be viewed as an N-element vector μ = [μ1, μ2, ..., μN], where μi = μ(wi), i = 1, ..., N. Denote by M the set of all such vectors of distributions. Then
P<ℑ,ρ> : M → M .
To avoid any misunderstanding, we write [(w1, μ1), ..., (wN, μN)] instead of [μ1, μ2, ..., μN] if the ordering of the valuations of Wk is not obvious.
The following lemma shows that each program P can be interpreted as a linear operator P on the space M of vectors of distributions:
6.1. Lemma (cf. [1]).
Let <ℑ, ρ> be a structure for LP with the universe U = {u1, ..., un}. For every program P(x0, ..., xk-1) we can construct, in an effective way, an N×N matrix P = [pij], i, j = 1, ..., N, such that for all vectors
μ = [μ1, ..., μN],  η = [η1, ..., ηN] = P<ℑ,ρ>(μ), the following holds:
η = μ · P
(· denotes here the matrix product).
Moreover, the element pij of P is the probability that wj is the output valuation after a computation of P, provided that the valuation wi appears as the input valuation with probability 1.
∎
We illustrate the Lemma as follows:
[μ1, ..., μN] · [pij]_{i,j = 1, ..., N} = [η1, ..., ηN] ,
where μ = [μ1, ..., μN], η = [η1, ..., ηN] = P<ℑ,ρ>(μ), P = [pij], i, j = 1, ..., N.
6.2. Example.
Let K denote the program
while x = b do
either x := ? or x := a
and denote by K′ the subprogram either x := ? or x := a .
Assume that the universe U of the interpretation ℑ consists of two elements, U = {a, b}.
Note that in this case the above-defined set W1 of valuations of the variable x consists of two valuations w1, w2, satisfying:
w1(x) = a ,  w2(x) = b .
Moreover, assume that the random generator distribution ρ is defined as follows:
ρ(a) = p ,  ρ(b) = 1 − p ,
and that the initial distribution μ satisfies
μ(w1) = q ,  μ(w2) = r .
One can check that the output distribution μ′ for the program K′ at the initial distribution μ satisfies:
q′ = μ′(w1) = 0.5·(1+p)·q + 0.5·(1+p)·r ,
r′ = μ′(w2) = 0.5·(1−p)·q + 0.5·(1−p)·r .
This may be written in the form
[q′, r′] = [q, r] · | 0.5·(1+p)   0.5·(1−p) |
                    | 0.5·(1+p)   0.5·(1−p) | ,
which exhibits the transition matrix K′ of the subprogram K′.
Using the method proposed in [1] we can check that the matrix K for the program K is
| 1   0 |
| 1   0 |
(from the initial valuation w2 the loop terminates with probability 1, ending at the valuation w1).
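The computations of Example 6.2 can be checked mechanically. The following sketch (our own illustration, using numpy; the helper name and the concrete values of p, q, r are arbitrary choices) builds the transition matrix of K′ from its two branches, applies it to the initial distribution as in the Lemma (η = μ · P), and iterates the loop semantics to recover the matrix of K.

import numpy as np

p = 0.3                                    # rho(a) = p, rho(b) = 1 - p
q, r = 0.6, 0.4                            # initial distribution: mu(w1) = q, mu(w2) = r

# Transition matrix of K' = "either x := ? or x := a" (rows and columns ordered w1, w2):
# with probability 0.5 draw x from rho, with probability 0.5 execute x := a.
K_prime = (0.5 * np.array([[p, 1 - p],     # branch x := ?  (the same row for every input)
                           [p, 1 - p]])
           + 0.5 * np.array([[1.0, 0.0],   # branch x := a  (all mass moves to w1)
                             [1.0, 0.0]]))
# K_prime equals [[0.5*(1+p), 0.5*(1-p)], [0.5*(1+p), 0.5*(1-p)]], as computed in the text.

mu = np.array([q, r])
print(mu @ K_prime)                        # [q', r'] = [0.5*(1+p)*(q+r), 0.5*(1-p)*(q+r)]

def while_x_eq_b(step_matrix, d, steps=200):
    """Output subdistribution of `while x = b do K'` at the input distribution d."""
    d = np.array(d, dtype=float)
    out = np.zeros(2)
    for _ in range(steps):
        out[0] += d[0]                             # mass at w1 (x = a) leaves the loop
        d = np.array([0.0, d[1]]) @ step_matrix    # mass at w2 (x = b) runs K' once more
    return out

# The rows of K are the outputs for the point distributions concentrated at w1 and at w2.
K = np.vstack([while_x_eq_b(K_prime, [1.0, 0.0]),
               while_x_eq_b(K_prime, [0.0, 1.0])])
print(np.round(K, 6))                      # [[1. 0.]
                                           #  [1. 0.]]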