J. Wagoner (1979), K_2 and diffeomorphisms of two and three dimensional manifolds.
V. Kaimanovich, A. Vershik (1983), Random walks on discrete groups: boundary and entropy, Annals of Probability, 11.
A. Connes (1975), Outer conjugacy classes of automorphisms of factors, Annales Scientifiques de l'École Normale Supérieure, 8.
(1971), Introduction to Algebraic K-theory.
S. Tuncel (1983), A dimension, dimension modules, and Markov chains, Proceedings of the London Mathematical Society.
J. Cuntz (1981), A class of C*-algebras and topological Markov chains II: Reducible chains and the Ext-functor for C*-algebras, Inventiones Mathematicae, 63.
E. Effros (1981), Dimensions and C*-algebras, 46.
W. Krieger (1980), On dimension functions and topological Markov chains, Inventiones Mathematicae, 56.
(1980), On the homotopy groups of the space of endomorphisms of a C*-algebra (with applications to topological Markov chains).
R. Bowen, J. Franks (1977), Homology for zero-dimensional nonwandering sets, Annals of Mathematics, 106.
J. Cuntz, W. Krieger (1980), A class of C*-algebras and topological Markov chains, Inventiones Mathematicae, 56.
R. Williams (1973), Classification of subshifts of finite type, Annals of Mathematics, 98.
S. Friedl (2020), Algebraic Topology, Graduate Studies in Mathematics.
W. Parry, R. Williams (1977), Block coding and a zeta function for finite Markov chains, Proceedings of the London Mathematical Society.
Wagoner (1973), Pseudo-isotopies of compact manifolds, Astérisque.
W. Parry, S. Tuncel (1982), Classification Problems in Ergodic Theory.
Rudolph (1986), The automorphism group of a subshift of finite type, preprint.
G. Segal (1968), Classifying spaces and spectral sequences, Publications mathématiques de l'IHÉS, 34.
D. Quillen (1973), Higher algebraic K-theory: I.
R. Williams (1974), Errata to "Classification of subshifts of finite type", Annals of Mathematics, 99.
G. Hedlund (1969), Endomorphisms and automorphisms of the shift dynamical system, Mathematical Systems Theory, 3.
J. Wagoner (1988), Topological Markov chains, C*-algebras, and K_2, Advances in Mathematics, 71.
J. Franks (1982), Homology and Dynamical Systems.
M. Boyle, W. Krieger (1987), Periodic points and automorphisms of the shift, Transactions of the American Mathematical Society, 302.
(1968), Infinite cyclic coverings, Conference on Topology of Manifolds, ed. J.
J. Milnor (1966), Whitehead torsion, Bulletin of the American Mathematical Society, 72.
A. Connes, E. Størmer (1975), Entropy for automorphisms of II_1 von Neumann algebras, Acta Mathematica, 134.
F. Zizza (1988), Automorphisms of hyperbolic dynamical systems and …, Transactions of the American Mathematical Society, 307.
by J. B. WAGONER*

TABLE OF CONTENTS
1. Introduction .............................................................................. 91
2. Markov partitions ......................................................................... 95
3. The strong triangle identities ............................................................. 110
4. Invariants for Aut(σ_A) ................................................................... 116
References ................................................................................... 123

1. Introduction

Let S be a finite or a countably infinite collection of "states", and let A : S × S → {0, 1} be a zero-one matrix with the corresponding subshift σ_A : X_A → X_A. See [Wi], [PW], [F], or [W2] for example. Let Aut(σ_A) denote the discrete group of uniformly continuous homeomorphisms α : X_A → X_A which commute with σ_A and which have α^{-1} uniformly continuous also. This is the group of uniform equivalences or symmetries of σ_A. A central problem or theme in symbolic dynamics is to examine the algebraic structure and homological properties of Aut(σ_A). The aim of this paper is to introduce a method for studying Aut(σ_A) by making it act on a contractible simplicial complex 𝒫_A constructed in a very natural way from the set of uniform Markov partitions for σ_A on X_A.

The first paper giving a systematic account of Aut(σ_A) for the Bernoulli 2-shift σ_2 : X_2 → X_2 was written by Hedlund [H]. It was proved there that Aut(σ_2) contains every finite group as well as elements of infinite order not of the form σ_2^k for some k ∈ Z. These extra infinite order elements were obtained by taking products of non-commuting elements of finite order. More recent results can be found in [BK] and [BLR]. Despite notable progress in the last several years, there is an old and central problem which still remains wide open. This is the finite order generation conjecture (dubbed FOG by D.
Lind) which states that Aut(σ_p) for the full Bernoulli p-shift with p prime is generated by σ_p together with elements of finite order. This problem arose from the early work by Hedlund and coworkers such as Arnold, Curtis, Lyndon, Rhodes, and Welch. In more recent papers, all exotic elements of infinite order have been of this form. For example, Boyle-Lind [BLR] have shown Aut(σ_2) contains the free non-abelian group on two generators, each of which is constructed as a product of elements of finite order. Concrete computations of gyration numbers by Boyle-Krieger [BK] lend credence to the possibility raised by Rhodes that Aut(σ_2) is generated by σ_2 and by involutions. This would imply that the first Eilenberg-MacLane homology group H_1(Aut(σ_2)) is the direct sum of Z and a vector space over Z/2 which, by [BK], would be infinite dimensional. Conversely, information about H_1 could bear on Rhodes' question or on FOG. A general open problem is to obtain information about the higher Eilenberg-MacLane homology groups H_n(Aut(σ_A)), and the action of Aut(σ_A) on 𝒫_A is a natural setting for this type of question. Gromov asked whether 𝒫_A had certain combinatorial properties similar to a space with non-positive curvature. For example, he asked whether every loop with L edges in 𝒫_A could be spanned by a disc with at most cL² triangles for some universal constant c. This turns out to be the case for c = 40. We hope that more detailed analysis of 𝒫_A will prove useful for FOG. The present paper concentrates on H_1 and uses the action of Aut(σ_A) on 𝒫_A to construct homomorphisms of Aut(σ_A) into various other simpler groups such as the group of automorphisms of the dimension group or the algebraic K-theory group K_2.

* Partially supported by National Science Foundation grant DMS85-02351.
These are obtained via a canonical homomorphism

ψ_A : Aut(σ_A) → π_1(S(𝒮), A)

into the fundamental group of the space of all shift equivalences in the category 𝒮 of non-negative integral matrices. When A is finite, we show

π_1(S(𝒮), A) ≅ Aut(G(A), G(A)^+, s_A)

where the right-hand side is the group of automorphisms of the dimension group G(A) considered as an ordered group as in [E] and a Z[t, t^{-1}]-module via the action of the automorphism s_A of G(A) coming from A. The first example of the general theory developed here was a commutative diagram relating Aut(σ_A) to K_2(F(t)) [displayed in the original], where A is assumed to be finite, F(t) is the field of rational functions over a field F, and ∂ is the tame symbol [M1]. Another typical example comes from random walk on a discrete group G. See § 4.

Here is why one might expect a connection between Aut(σ_A) and K_2 in the first place. Let f : M → M be a Smale diffeomorphism [F] of a compact manifold M. In [F] it is shown that ζ(t) ≡ ζ_s(t) mod 2, where ζ(t) is the "homology" zeta function obtained by counting the periodic points of f algebraically using the Lefschetz numbers of f^n and ζ_s(t) is the "symbolic" zeta function obtained by counting periodic points in the non-wandering set of f. Knowing the contribution to ζ_s(t) from each basic set enables one in principle to compute the entropy of f. On the other hand, let M_f denote the mapping torus of f. This is just M × I modulo the identification of (x, 0) with (f(x), 1). From algebraic K-theory and simple homotopy theory there is a rational function τ(M_f) called the Reidemeister torsion of M_f which is an invariant of the manifold M_f. This is a special case of a Whitehead torsion invariant which arises in connection with the algebraic K-theory group K_1. See [M2], [M3]. It is well known [F], [M2] that ζ_s(t) ≐ τ(M_f). So in a loose sense we say "entropy is a K_1-type invariant".
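The periodic-point count behind the "symbolic" zeta function can be made concrete: the number of points of X_A fixed by σ_A^n equals tr(A^n), and these counts package into ζ_s(t) = exp(Σ_n tr(A^n) t^n / n) = 1/det(I − tA). The sketch below is illustrative only; the golden-mean matrix is an assumption, not an example taken from the paper.

```python
# Periodic points of sigma_A versus traces of powers of A, for an
# illustrative golden-mean matrix (word "11" forbidden).
import itertools

A = [[1, 1],
     [1, 0]]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_power(A, n):
    """tr(A^n)."""
    P = A
    for _ in range(n - 1):
        P = mat_mul(P, A)
    return sum(P[i][i] for i in range(len(A)))

def periodic_points(A, n):
    """Directly count the allowed cyclic words of length n."""
    states = range(len(A))
    return sum(1 for w in itertools.product(states, repeat=n)
               if all(A[w[i]][w[(i + 1) % n]] == 1 for i in range(n)))

for n in range(1, 7):
    assert trace_power(A, n) == periodic_points(A, n)
# For this A, det(I - tA) = 1 - t - t^2, so zeta_s(t) = 1/(1 - t - t^2).
```

The agreement of the two counts is exactly the identity that makes ζ_s(t) a rational function.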
Now in differential topology it is an old story that just as K_1 provides useful invariants for manifolds, the algebraic K-theory group K_2 gives invariants for diffeomorphisms of a manifold. See [HW], [W1]. Consider the group of symmetries Aut(f) of our original Smale diffeomorphism f : M → M. By definition, this consists of all homeomorphisms g : M → M satisfying gf = fg on M. Each such g induces a homeomorphism of M_f by the formula G(x, t) = (g(x), t). In this setting, it is then possible to define a homomorphism from Aut(f) into K_2(F(t)). It was consequently natural to ask whether there was a homomorphism such as ψ_A, and this was the starting place for our paper. Incidentally, in his thesis [Z] Zizza shows for a fitted Smale diffeomorphism f and the field F = Z/2 that this homomorphism is an alternating product of the various "symbolic" homomorphisms coming from the basic sets in the non-wandering set of f. The argument makes use of 𝒫_A. In summary, the idea is that if K_1 is related to the dynamical system (X_A, σ_A), then K_2 should be (and is) related to its symmetry group Aut(σ_A). To carry over the analogy from differential topology to symbolic dynamics, the space of C^∞ functions on a manifold is replaced by the space 𝒫_A. Put differently, a given Markov partition for σ_A on X_A is like a particular triangulation of a finite complex. In view of the embedding of Aut(σ_A) into the automorphisms Aut(𝒜_A) of the Cuntz-Krieger type C*-algebra 𝒜_A discussed in [W2], it is reasonable to expect a "non-commutative" version of the theory for automorphisms of operator algebras. In [W2] a K_2 invariant for Aut(σ_A) was constructed using the definition of the dimension group G(A) in terms of unstable manifolds of σ_A on X_A. In case A is aperiodic, this leads to a homomorphism Aut(σ_A) → R^*_+ which turns out to be the inverse of Connes' module homomorphism m : Aut(𝒜_A) → R^*_+.
The map m comes from seeing how an automorphism multiplies the line of semi-finite traces on 𝒜_A. See [C]. In [CS] Connes-Størmer define the entropy of an automorphism θ of a type II_1 von Neumann algebra preserving a faithful normal trace τ. Can one define a homomorphism Aut(θ) → R^*_+ using "non-commutative" partitions? Or other methods? The module homomorphism is of course trivial on Aut(θ) because the trace is normalized to satisfy τ(1) = 1.

In § 2 we define the simplicial structure on 𝒫_A and develop some of its basic properties. The main theorem (2.12) is that 𝒫_A is contractible. 𝒫_A is locally finite when A is a finite matrix. In § 3 we prove the new algebraic Triangle Identities (3.3) coming from the analysis of geometric triangles in 𝒫_A. This is basic to the remainder of the paper. The first theme of § 4 is invariants of Aut(σ_A) that come from "inverting functors", of which the dimension group is the prime example when A is finite. Let 𝒮 be the category of non-negative integral matrices and, for simplicity, let 𝒞 be an abelian category such as modules and module homomorphisms over a commutative ring Λ. Consider a new category in which the objects are endomorphisms (square matrices) A of 𝒮 and where a morphism from A to B is a non-negative integral matrix X such that AX = XB. A functor F on this new category assigns an object F(A) in 𝒞 to each endomorphism A and a morphism f(X) : F(B) → F(A) in 𝒞 to each morphism X. In particular, we have f(A) : F(A) → F(A). We say F is inverting provided each f(A) is an isomorphism. In addition to the dimension group, there are a number of other inverting functors arising in dynamical systems and C*-algebras. See [BF], [CK], [Cu1], [Cu2], [E], [F], [K] and [W2]. The principal result of this section constructs a homomorphism ψ_{F,A}
from Aut(σ_A) into the group aut(f(A)) of automorphisms of F(A) commuting with f(A), together with a commutative diagram [displayed in the original] relating it to ψ_A. When A is finite, this implies (4.19) that ψ_{F,A} factors through Aut(G(A), G(A)^+, s_A). Some examples are given, including the relation of Aut(σ_A) to K_2. To finish § 4, we briefly discuss one way to further explore the implications of the Triangle Identities by constructing the space SS(𝒵) of strong shift equivalences for the set 𝒵 of zero-one matrices. Like S(𝒮), the space SS(𝒵) is built independently of any dynamical system. Its simplices are formed using the algebraic identities (3.3). The homomorphism ψ_A factors through a canonical homomorphism from π_1(SS(𝒵), A) to π_1(S(𝒮), A). An open problem is to obtain more information about π_1(SS(𝒵), A). It is a new but hard to compute invariant of strong shift equivalence. Whether it is also an invariant of shift equivalence is far from clear at the present time.

The material in this paper can be extended in several ways. For example, stochastic shift equivalence and strong shift equivalence for finite irreducible matrices was developed by Parry-Williams in [PW]; see also [PT], [T]. Their theory goes through in the infinite case. Also, if μ is a Markov measure invariant under σ_A, basically all the results on 𝒫_A and Aut(σ_A) can be carried over to the contractible simplicial complex 𝒫_μ of μ-Markov partitions and the subgroup Aut_μ(σ_A) of μ-preserving symmetries of σ_A. In another direction, the classical "marker method" produces many ways of embedding any finite group G into Aut(σ_A) when A is aperiodic [BLR]. The G-fixed point sets of 𝒫_A are contractible and can be used to give a criterion for conjugacy classes of G in Aut(σ_A) in terms of G-equivariant strong shift equivalence.

To finish this introduction, we would like to thank M. Boyle, D. Lind, W. Krieger, and F.
Zizza for useful discussions and comments over the last few years while this paper underwent several revisions. We would also like to thank IMPA (Rio de Janeiro), the University of Geneva (Switzerland), and IHES (Paris) for their hospitality during Spring 1986.

2. Markov partitions

In this section we develop the basic properties of the simplicial complex 𝒫_A of Markov partitions. In particular we show in (2.12) that 𝒫_A is contractible and is locally compact when A is finite.

Let S be a countable set of "states" and let A : S × S → {0, 1} be a zero-one matrix such that each row and each column has a non-zero entry. We let X_A denote the space of sequences x = (x_i), −∞ < i < ∞, such that A(x_i, x_{i+1}) = 1. The metric on X_A is defined to be d(x, y) = 0 if x = y and, for x ≠ y, d(x, y) = 1/(k + 1) where k is the least non-negative integer such that x_k ≠ y_k or x_{−k} ≠ y_{−k}. The space X_A is complete. It is locally compact if and only if A is locally finite, and compact if and only if A is finite. See [W2]. The shift σ_A : X_A → X_A is to the left; i.e. σ_A(x)_i = x_{i+1}. Both σ_A and σ_A^{−1} are uniformly continuous. A homeomorphism α : X_A → X_B between two shift spaces satisfying α σ_A = σ_B α will be called a uniform equivalence or an isomorphism provided both α and α^{−1} are uniformly continuous. We let Isom(σ_A, σ_B) denote the set of all isomorphisms from σ_A to σ_B. When A = B, we often write Aut(σ_A) for Isom(σ_A, σ_A) and will call α ∈ Aut(σ_A) a symmetry of σ_A.

There are generally two definitions of a Markov partition in the literature. We start with the approach of [PT] and later in this section will discuss the definition used in [F, p. 100]. If U = {U_i} and V = {V_j} are coverings of X_A, we let U ∩ V = {U_i ∩ V_j}. We say V refines U, written U < V, provided each V_j is a subset of some U_i. If m, n ∈ Z and m ≤ n, let U(m, n) = σ_A^{−m}(U) ∩ ... ∩ σ_A^{−n}(U) where σ_A^k(U) = {σ_A^k(U_i)} for k ∈ Z. Let U_A = {U_s^A} where for each s ∈ S we define U_s^A = {x ∈ X_A | x_0 = s}.
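The space X_A, its metric, and the shift can be sketched concretely. The fragment below is illustrative only: the matrix, the window size N, and the sample points are assumptions of ours, and bi-infinite sequences are truncated to the coordinates −N..N.

```python
# A finite-window sketch of X_A, its metric, and sigma_A, assuming the
# golden-mean matrix below (states 'a', 'b', with the word 'bb' forbidden).
N = 4
A = {('a', 'a'): 1, ('a', 'b'): 1, ('b', 'a'): 1, ('b', 'b'): 0}

def allowed(x):
    """A(x_i, x_{i+1}) = 1 for every consecutive pair in the window."""
    return all(A[(x[i], x[i + 1])] == 1 for i in range(len(x) - 1))

def dist(x, y):
    """d(x, y) = 1/(k+1), k least with x_k != y_k or x_{-k} != y_{-k};
    coordinate i of the window sits at position N + i."""
    if x == y:
        return 0.0
    for k in range(N + 1):
        if x[N + k] != y[N + k] or x[N - k] != y[N - k]:
            return 1.0 / (k + 1)

def shift(x):
    """sigma_A(x)_i = x_{i+1}: on a window, drop the left edge."""
    return x[1:]

x = tuple('abaababab')   # coordinates -4..4
y = tuple('abaabaaab')   # agrees with x for |i| <= 1, differs at i = 2
assert allowed(x) and allowed(y)
assert dist(x, y) == 1.0 / 3
```

Two points are close exactly when they agree on a long central block, which is why σ_A and its inverse are uniformly continuous.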
The partition U_A will be a standard example of a Markov partition for σ_A on X_A. Observe that the sets in U_A(−n, n) are all disjoint, closed, open, and form a basis for the topology of X_A as n runs over the non-negative integers.

Definition 2.1. — A topological Markov partition for σ_A on X_A is a covering U = {U_i} of X_A such that
(a) the U_i are disjoint and open (and therefore closed);
(b) any intersection ⋂_{n=−∞}^{∞} σ_A^{−n}(U_{i(n)}) consists of at most one point;
(c) if U_{i(n)} ∩ σ_A^{−1}(U_{i(n+1)}) ≠ ∅ for n ∈ Z, then ⋂_{n=−∞}^{∞} σ_A^{−n}(U_{i(n)}) ≠ ∅.
Moreover, we say U is uniform provided
(d) there are m, n ≥ 0 such that U_A < U(−m, m) and U < U_A(−n, n).
We let 𝒫_A denote the set of all uniform topological Markov partitions for σ_A.

Remark 2.2. — Any two coverings U and V satisfying (d) with respect to U_A have mutual refinements in the sense that there are m, n ≥ 0 so that U < V(−m, m) and V < U(−n, n). If A is finite, then any two coverings satisfying (a), (b), (c) also satisfy this mutual refinement condition. So as in [PT] the condition (d) is not needed in the definition when A is finite.

If α : X_A → X_B is a homeomorphism and U = {U_i} is a covering of X_A, let α(U) = {α(U_i)}. Observe also that uniform continuity can be expressed as follows: Given a refinement U_B(−n, n) there is a refinement U_A(−m, m) such that U_B(−n, n) < α(U_A(−m, m)). It is not hard to verify that if α ∈ Isom(σ_A, σ_B) and U ∈ 𝒫_A, then α(U) ∈ 𝒫_B. Thus we have a bijection

(2.3) 𝒫_A ≅ 𝒫_B

given by the correspondence U ↦ α(U). For future reference we observe

Lemma 2.4. — If U ∈ 𝒫_A, then U ∩ σ_A^{−1}(U) and σ_A(U) ∩ U are in 𝒫_A. If U ∈ 𝒫_A and V = {V_j} is a cover of X_A by disjoint, open sets such that U < V < U ∩ σ_A^ε(U) where ε = +1 or −1, then V ∈ 𝒫_A also.

The proof of this is straightforward.

Remark. — It is not true in general that U, V ∈ 𝒫_A implies U ∩ V ∈ 𝒫_A. However, this property does hold for Markov partitions as defined in [F, p. 100]; see (2.17) below.
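The refinement U ∩ σ_A^{−1}(U) of Lemma 2.4 can be computed explicitly for the standard partition U_A. The sketch below assumes an illustrative golden-mean matrix: the members of U_A ∩ σ_A^{−1}(U_A) are indexed by the allowed 2-blocks, and the transition matrix of the refined partition is the familiar 2-block presentation of the same shift.

```python
# The refinement U_A ∩ sigma_A^{-1}(U_A) for a golden-mean matrix:
# its members are U_{st} = {x : x_0 = s, x_1 = t} with A(s, t) = 1.
A = {('a', 'a'): 1, ('a', 'b'): 1, ('b', 'a'): 1, ('b', 'b'): 0}
states = ['a', 'b']

# Members of the refined partition: the allowed words of length 2.
U = [(s, t) for s in states for t in states if A[(s, t)] == 1]

def M(u, v):
    """U_{st} ∩ sigma_A^{-1}(U_{t'u'}) is nonempty exactly when t = t'."""
    return 1 if u[1] == v[0] else 0

matrix = {(u, v): M(u, v) for u in U for v in U}

# Each entry 1 corresponds to an allowed word of length 3, so the number
# of 1's in the matrix equals the number of allowed 3-blocks.
words3 = [(s, t, u) for (s, t) in U for u in states if A[(t, u)] == 1]
assert sum(matrix.values()) == len(words3)
```

This is one concrete instance of the passage from a partition to its matrix taken up in (2.5) below.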
If U = {U_s} is in 𝒫_A, let the matrix M = M(U) associated to U be the function M : U × U → {0, 1} defined as usual by

(2.5) M(U_s, U_t) = 1 if U_s ∩ σ_A^{−1}(U_t) ≠ ∅, and M(U_s, U_t) = 0 otherwise.

The definition of M as a function from U × U to {0, 1} is independent of the choice of a bijection U ≅ I between the sets of U and some countable indexing set I which is tacitly assumed when we write U = {U_i} for i ∈ I. However, such an identification does give a matrix M : I × I → {0, 1} where M(i, j) = M(U_i, U_j). In particular, there is a canonical bijection S ≅ U_A under which M(U_A) = A. Just as in the case when A is finite we have

Lemma 2.6. — Let U ∈ 𝒫_A and B = M(U). Then there is a uniform equivalence α : X_A → X_B.

Proof. — Let U = {U_t} where t runs through the countable indexing set T. Define α : X_A → X_B by the condition that α(x)_i = t if and only if σ_A^i(x) ∈ U_t. Thus α is continuous by (a) of (2.1), injective by (b) of (2.1), and surjective by (c) of (2.1). Uniform continuity of α follows from U < U_A(−n, n) and uniform continuity of α^{−1} follows from U_A < U(−m, m).

We now define the simplicial structure on 𝒫_A. If U, V ∈ 𝒫_A, we let

(2.7) U →{−m, n} V

mean that U < V < U(−m, n). The special cases U →{0, 1} V and U →{−1, 0} V will be denoted respectively by

(2.8) U →+ V and U →− V.

If U < V, then from (2.2) we have V < U(−m, n) for some m, n ≥ 0. Let

(2.9) l(U, V) = min{ m + n | V < U(−m, n) }.

Then l(U, V) is like a length function, but it is only defined when U < V and it is not symmetric. If U < V < W, then l(U, W) ≤ l(U, V) + l(V, W).

Definition 2.10. — If U, V ∈ 𝒫_A, then we write U → V if and only if U →+ U ∩ V and V →− U ∩ V. Observe by (2.4) that this condition implies U ∩ V ∈ 𝒫_A.

Definition 2.11. — The simplicial complex 𝒫_A has as n-simplices those (n + 1)-tuples (V_0, ..., V_n) where each V_i ∈ 𝒫_A and V_i → V_j for i ≤ j. It is clear that the bijection 𝒫_A ≅ 𝒫_B of (2.3) produces an isomorphism of simplicial complexes.

Proposition 2.12.
— 𝒫_A is contractible and is locally compact if A is finite.

Proof. — First we verify the easy part that 𝒫_A is locally compact when A is finite. Under this hypothesis there are, for a given U ∈ 𝒫_A, only finitely many V ∈ 𝒫_A such that U < V < U ∩ σ_A^ε(U) with ε = ±1. Consequently there are only finitely many V such that either V → U or U → V. In particular, a given vertex U can belong to only finitely many simplices. This implies that 𝒫_A is locally compact.

Now we prove that 𝒫_A is contractible in three stages: Step I: 𝒫_A is connected; Step II: π_1(𝒫_A) = 0; Step III: H_n(𝒫_A) = 0 for n ≥ 2. Contractibility then follows from the Whitehead theorem [Sp].

Step I. — Connectivity of 𝒫_A was essentially proved by Williams in [Wi] where he introduced the notion of strong shift equivalence. Another exposition of this is found in [PW]. For completeness, we give the argument here. Let U and V be vertices in 𝒫_A. We show that there is a path from U to V in 𝒫_A having edges of the form (U_0, U_1) where U_0 →ε U_1 for ε = ±1. By (d) of (2.1) we know that U < U_A(−n, n) > V for some n ≥ 0; so we may as well assume U < V. The proof then proceeds by induction on l(U, V). By (2.2) we know U →{m, n} V for some m, n ≥ 0. If n ≥ 1, then

U → U(−m, n) ← V ∩ σ_A^{−1}(U) ← V,

where the last edge is of type {0, 1}. Similarly, if m ≥ 1, then

U → U(−m, n) ← σ_A(U) ∩ V ← V,

where the last edge is of type {−1, 0}. Finally, we have the chain U →+ U(0, 1) →+ ⋯ leading to U(−m, n). Observe that by (2.4) each of the partitions V ∩ σ_A^{−1}(U), σ_A(U) ∩ V, and U(−p, q) are still in 𝒫_A.

Step II. — Simple connectivity of 𝒫_A can be proved in two forms, (2.13) and (2.14). As mentioned in the introduction, it was Gromov who asked whether such properties held for 𝒫_A, and he remarked that they might indicate 𝒫_A is something like a space with non-positive curvature.
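Step I is Williams's strong shift equivalence in action: two non-negative integral matrices A and B are elementarily equivalent when A = RS and B = SR. A minimal numerical check on an illustrative pair of matrices (our choice, not one from the paper):

```python
# One elementary strong shift equivalence A = RS, B = SR: the full 2-shift
# in a 2-state presentation versus a 1-state presentation.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1],
     [1, 1]]
B = [[2]]
R = [[1],
     [1]]
S = [[1, 1]]

assert mat_mul(R, S) == A   # A = RS
assert mat_mul(S, R) == B   # B = SR
```

Chaining such elementary steps is exactly what the edge path constructed in Step I does inside 𝒫_A.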
By a path in 𝒫_A we always mean a simplicial path, and by a homotopy between paths with the same endpoints we mean a sequence of intermediate paths having the same endpoints such that one path differs from the next one by replacing a single edge by the two opposite edges of a triangle or vice versa.

Proposition 2.13. — A path in 𝒫_A having L edges can be spanned by a (possibly singular) triangulated 2-disc in 𝒫_A with at most 40L² − 39L triangles.

Remark 2.14. — An argument similar to the one for (2.13) shows that any two paths of length at most L are homotopic through intermediate paths of length at most 6L + 2. The specific constants in these propositions do not seem all that important, and they can very likely be improved.

Proof of 2.13. — Step 1: Consider a segment of a loop α passing through vertices X, U, V, W. [The original displays a diagram showing how to replace this segment.] The diagram shows how to deform α to a loop β which is alternating in the sense that the natural direction (2.10) of each edge switches as one moves around the loop. Equivalently, the two edges containing any vertex either point both toward that vertex or both point away from it. If α has at most L edges, then β has at most 2L edges and the number of simplices used to deform α to β is at most L.

Step 2: Assume β is an alternating loop with 2L edges and successive vertices V_0, V_1, ..., V_{2L−1}, V_0. [The original displays a ladder of squares built on β.] In Step 3 we will show how to triangulate each square, but for the moment observe that the bottom horizontal loop γ is alternating with 2L edges having successive vertices V_{2L−1} ∩ V_1, V_0 ∩ V_2, V_1 ∩ V_3, ..., V_{2L−2} ∩ V_0. Repeating this construction a total of L − 1 times produces an alternating loop of length 2L with only two distinct vertices V_1 ∩ ... ∩ V_{2L−1} and V_0 ∩ ... ∩ V_{2L−2}. Observe that the number of squares involved is 2L(L − 1) = 2L² − 2L.
Step 3: Consider a loop with 2L edges having just two alternating vertices U and V. [The original pictures the case L = 3.] The loop has two identical sides of length L emanating from U, say, and ending in a common vertex which is either U or V. These sides may be identified to fill in the 2-disc. No extra triangles are added at this stage.

Step 4: A square of the type arising in Step 2 can be filled in by a triangulation with 20 triangles. [The original gives an explicit picture of this triangulation, with vertices such as V_i, V_j, V_i ∩ V_j, and triple intersections.]

We finally see that the total number of triangles required to fill in the loop using Steps 1 through 4 is at most L + 2L(L − 1)·20 = 40L² − 39L. This completes the proof of (2.13).

Step III. — To prove H_n(𝒫_A) = 0 for n ≥ 2 it suffices to show for any finite complex K ⊂ 𝒫_A that the homomorphism H_n(K) → H_n(𝒫_A) is zero.

Step 1: Subdivision. — Let K be any finite simplicial complex such that each simplex is equipped with an ordering of the vertices which is compatible with the face maps. For example, K could be any finite subcomplex of 𝒫_A. We construct a subdivision K' of K as follows: The vertices of K' are pairs v_{ij} = (v_i, v_j) where v_i and v_j are vertices occurring in a simplex ⟨v_0, ..., v_i, ..., v_j, ..., v_p⟩ of K and v_i comes before v_j in the ordering; that is, i ≤ j. More generally, an n-simplex of K' is an (n + 1)-tuple ⟨v_{i_0 j_0}, v_{i_1 j_1}, ..., v_{i_n j_n}⟩ satisfying
(i) all v_{ij} = (v_i, v_j) come from vertices lying in a simplex ⟨v_0, v_1, ..., v_p⟩ of K,
(ii) i_a ≤ i_{a+1}, j_a ≤ j_{a+1},
(iii) i_{a+1} − i_a + j_{a+1} − j_a = 1.
For example, if K = ⟨v_0, v_1, v_2⟩, then K' is the subdivision pictured in the original. Note that neither ⟨v_{00}, v_{11}⟩ nor ⟨v_{01}, v_{22}⟩ is a simplex of K' in this example.
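The subdivision rule (i)-(iii) can be checked mechanically on the example K = ⟨v_0, v_1, v_2⟩. The enumeration scheme below is ours, for illustration; it confirms that the two tuples named above are excluded.

```python
# Enumerate the simplices of K' for K = <v0, v1, v2> using rules (ii)-(iii):
# indices non-decreasing, with total increment exactly 1 at each step.
from itertools import combinations

p = 2
verts = [(i, j) for i in range(p + 1) for j in range(i, p + 1)]

def is_simplex(tup):
    return all(a[0] <= b[0] and a[1] <= b[1] and
               (b[0] - a[0]) + (b[1] - a[1]) == 1
               for a, b in zip(tup, tup[1:]))

edges = [t for t in combinations(verts, 2) if is_simplex(t)]

assert ((0, 0), (0, 1)) in edges        # a legitimate edge of K'
assert ((0, 0), (1, 1)) not in edges    # <v00, v11> is not a simplex of K'
assert ((0, 1), (2, 2)) not in edges    # <v01, v22> is not a simplex of K'
```

Rule (iii) is what forbids "diagonal" jumps such as ⟨v_{00}, v_{11}⟩, since there both indices increase at once.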
A homeomorphism θ : K' → K is defined by taking a point in a simplex ⟨v_{i_0 j_0}, ..., v_{i_n j_n}⟩, represented in terms of barycentric coordinates as the linear combination Σ_a λ_a v_{i_a j_a} with Σ_a λ_a = 1 and λ_a ≥ 0, to the point Σ_a (λ_a/2) v_{i_a} + (λ_a/2) v_{j_a} inside ⟨v_0, ..., v_p⟩.

Now let K ⊂ 𝒫_A be a subcomplex. Each simplex of K is of the form ⟨V_0, ..., V_p⟩ where each V_i ∈ 𝒫_A. We let L denote the isomorphic simplicial complex with a simplex ⟨v_0, ..., v_p⟩ corresponding to each simplex ⟨V_0, ..., V_p⟩ of K and we let τ : L → K ⊂ 𝒫_A be the simplicial map where τ(v_i) = V_i. Let φ : L' → 𝒫_A be defined on ⟨v_{i_0 j_0}, ..., v_{i_p j_p}⟩ by the formula φ(v_{ij}) = V_i ∩ V_j. That this is a simplicial map follows from the observation that if U → V, then U →+ U ∩ V and V →− U ∩ V; and if U → X and V → Y, then U ∩ V → X ∩ Y.

Let K̃ ⊂ 𝒫_A be the subcomplex of 𝒫_A consisting of all possible simplices having vertices of the form V_0 ∩ V_1 ∩ ... ∩ V_r ∈ 𝒫_A where the V_i are vertices of K. The vertices V_i need not all belong to the same simplex and not every such intersection is in 𝒫_A. But certainly some are, and K̃ consists of the simplices which can be formed in this way. Let θ : L' → L be the homeomorphism as above.

Claim: τθ and φ are homotopic as maps from L' into K̃ ⊂ 𝒫_A.

To see this define ρ : L' → K̃ ⊂ 𝒫_A by sending a simplex ⟨v_{i_0 j_0}, ..., v_{i_p j_p}⟩ of L' to the simplex ⟨V_{i_0}, ..., V_{i_p}⟩ of K̃ ⊂ 𝒫_A. Then φ and ρ are homotopic as follows: L' × I is triangulated by simplices of the form ⟨(v_{i_0 j_0}, 0), ..., (v_{i_a j_a}, 0), (v_{p_0 q_0}, 1), ..., (v_{p_b q_b}, 1)⟩ where ⟨v_{i_0 j_0}, ..., v_{i_a j_a}⟩ and ⟨v_{p_0 q_0}, ..., v_{p_b q_b}⟩ are sub-simplices of the same simplex in L' and i_a ≤ p_0, j_a ≤ q_0. The homotopy from ρ to φ takes this simplex to ⟨V_{i_0}, ..., V_{i_a}, V_{p_0} ∩ V_{q_0}, ..., V_{p_b} ∩ V_{q_b}⟩. Now observe that the two continuous maps ρ : L' → K̃ ⊂ 𝒫_A and τθ : L' → K̃ ⊂ 𝒫_A have the property that if S is a simplex of L', then ρ(S) and τθ(S) lie in the same simplex of K̃. Hence, the one parameter family (1 − t) ρ + t τθ, 0 ≤ t ≤ 1, is a homotopy from ρ to τθ.
This step can now be summarized in the following way: Suppose we have a chain α = Σ_p h_p S_p representing an n-dimensional homology class in H_n(K), where each S = S_p is a non-degenerate n-simplex of the form S = ⟨V_0, ..., V_n⟩. Let α also denote the corresponding chain in the isomorphic simplicial complex L. Let α' = Σ_p h_p S'_p be the chain on L' where S'_p is the subdivision of S_p as above. Let θ_* : H_n(L') → H_n(K) be the induced map on homology. Then θ_*(α') = α in H_n(K) and, moreover, φ_*(α') = α as homology classes in H_n(K̃), where φ_* : H_n(L') → H_n(K̃) is the map induced on homology by φ : L' → K̃.

Step 2: Shrinking. — This is the main inductive step. Let α = Σ_p h_p S_p be an n-cycle representing an n-dimensional homology class in H_n(𝒫_A). We may assume that each of the n-simplices S_p is non-degenerate. Let K ⊂ 𝒫_A be the finite subcomplex obtained by taking all faces of the simplices S_p. From Step 1 we know that α in H_n(K̃) is represented by the cycle φ_*(α'). We will show that φ_*(α') ∈ H_n(K̃) is represented by an n-cycle ω(ᾱ) where ᾱ is an n-cycle having only non-degenerate n-simplices on a complex M' which is simplicially isomorphic to L' and ω : M' → K̃ is a simplicial map with the property that whenever ω(S̄) is a non-degenerate simplex of K̃, its vertices are of the form V_0 ∩ ... ∩ V_r where at least two of the vertices V_i ∈ K are distinct. Since K is finite and (K̃)~ = K̃, we can continue this process to eventually represent α ∈ H_n(K̃) in the form ω(ᾱ) where ω : M → K̃ is a simplicial map from some high order subdivision M of K which takes each n-simplex of M into a simplex of lower dimension in K̃. This says that α = ω(ᾱ) must be zero in H_n(K̃).

Let E denote the set of vertices v of L' which correspond to the vertices V of K in 𝒫_A under φ. Let F denote the remaining vertices of L'. These are all of the form v_{ij} = (v_i, v_j) for v_i, v_j ∈ E.
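The sign bookkeeping in the two cases of the shrinking step below comes down to the standard simplicial boundary formula ∂⟨v_0, ..., v_n⟩ = Σ_j (−1)^j ⟨..., v̂_j, ...⟩. A small sketch, with illustrative vertex names, checking the convention and that the boundary of a boundary vanishes:

```python
# Integer simplicial chains as dicts from vertex tuples to coefficients.
from collections import defaultdict

def boundary(chain):
    """Apply the boundary formula termwise and drop zero coefficients."""
    out = defaultdict(int)
    for simplex, h in chain.items():
        for j in range(len(simplex)):
            face = simplex[:j] + simplex[j + 1:]
            out[face] += (-1) ** j * h
    return {s: c for s, c in out.items() if c != 0}

R = {('v', 'y1', 'y2'): 1}
dR = boundary(R)
# boundary of <v, y1, y2> is <y1, y2> - <v, y2> + <v, y1>
assert dR == {('y1', 'y2'): 1, ('v', 'y2'): -1, ('v', 'y1'): 1}
assert boundary(dR) == {}       # the boundary of a boundary is zero
```

The cancellations computed by hand in Cases 1 and 2 are instances of exactly this sign pattern.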
Under φ, each v ∈ E goes to V ∈ 𝒫_A and each v_{ij} ∈ F goes to V_i ∩ V_j ∈ 𝒫_A. For each vertex v ∈ E, let C_v denote the set of all simplices in L' containing v. Let C = ∪_{v∈E} C_v. Let D denote the subcomplex of L' consisting of those simplices with vertices in F. Then L' = C ∪ D and C ∩ D lies in the (n − 1)-skeleton of L'. Observe that any (n − 1)-simplex of C_v having v as a vertex does not belong to C_u for u ≠ v. Let β_v be the part of α' supported in C_v, let β = Σ_{v∈E} β_v, and let γ be the part of α' supported in D. We have α' = β + γ.

Now fix a vertex v in E. We will construct a complex M' = D ∪ C̄_v ∪ [∪_{u≠v} C_u] simplicially isomorphic to L' where C̄_v is isomorphic to C_v but with a different ordering of the vertices. We will also construct a complex P' containing M' and L' as deformation retracts and will produce an n-cycle ᾱ on M' which agrees with α' on M' ∩ L' and is homologous to it in P'. In fact, P' will have exactly one more vertex z than L'. Finally, we will produce a simplicial map ω : P' → K̃ which agrees with φ on L' and takes the vertex z to a vertex in K̃ of the form V_0 ∩ ... ∩ V_r where at least two of the V_i are distinct. Continuing this for each vertex v ∈ E gives a simplicial map ω : M' → K̃ taking each vertex to a higher order intersection V_0 ∩ ... ∩ V_r and a cycle ᾱ on M' such that ω(ᾱ) = φ_*(α') in H_n(K̃).

So once again fix a vertex v ∈ E. A typical simplex R in C_v looks like ⟨x_1, ..., x_a, v, y_1, ..., y_b⟩ with a + b = n, x_i ∈ F, and y_j ∈ F. Under φ, this goes vertex by vertex to a simplex ⟨X_1, ..., X_a, V, Y_1, ..., Y_b⟩ where each X_i and Y_j is of the form U ∩ V for vertices U and V in K, where V → X_i and V → Y_j. We allow for the possibility that there are no x_i's or y_j's and therefore no X_i's or Y_j's.

Case 1: There are no x_i's appearing in any simplex of C_v. Define P' to be L' together with all faces of ⟨v, y_1, ..., y_n, z⟩ where ⟨v, y_1, ..., y_n⟩ is a simplex in C_v.
Define M' to be D ∪ C̄_v ∪ [∪_{u≠v} C_u] where C̄_v consists of all faces of the ⟨y_1, ..., y_n, z⟩. [The original illustrates the construction in dimension one.]

First we define the cycle ᾱ on M'. The chain β_v is a sum of terms hR where h ∈ Z and R = ⟨v, y_1, ..., y_n⟩. Let S = (−1)^n ⟨y_1, ..., y_n, z⟩ and define β̄_v by replacing each term hR of β_v by hS. Let ᾱ = β̄_v + Σ_{u≠v} β_u + γ.

Next we show ᾱ is homologous to α' in P'. This implies in particular that ᾱ is a cycle. Corresponding to an n-simplex R of C_v, let T = (−1)^{n+1} ⟨v, y_1, ..., y_n, z⟩. Let η denote the sum of all the terms hT as hR runs over the terms of β_v. We claim that ∂(η) = α' − ᾱ. To see this, first recall that the formula for the boundary of a term hR of β_v is

∂(hR) = h⟨y_1, ..., y_n⟩ + Σ_{j=1}^{n} (−1)^j h⟨v, y_1, ..., ŷ_j, ..., y_n⟩.

Since ∂α' = 0, the disjointness property of the (n − 1)-simplices in the various β_v implies that summing up all the factors like (−1)^j h⟨v, y_1, ..., ŷ_j, ..., y_n⟩ for 1 ≤ j ≤ n over all the terms hR in all the simplices of β_v gives zero. Now compute the boundary of hT:

∂(hT) = (−1)^{n+1} h⟨y_1, ..., y_n, z⟩ + h⟨v, y_1, ..., y_n⟩ + Σ_{j=1}^{n} (−1)^{n+1}(−1)^j h⟨v, y_1, ..., ŷ_j, ..., y_n, z⟩.

Summing up the terms (−1)^{n+1}(−1)^j h⟨v, y_1, ..., ŷ_j, ..., y_n, z⟩ for 1 ≤ j ≤ n over all terms hT will produce a coefficient equal to (−1)^{n+1} times the coefficient obtained by summing the terms (−1)^j h⟨v, y_1, ..., ŷ_j, ..., y_n⟩ in ∂(hR). Hence this coefficient is zero, and we therefore conclude that summing the terms ∂(hT) is the same as summing the terms (−1)^{n+1} h⟨y_1, ..., y_n, z⟩ + h⟨v, y_1, ..., y_n⟩. Namely, ∂(η) = α' − ᾱ.

It remains to define ω : P' → K̃. Recall as above that if U → V in 𝒫_A, then U →+ U ∩ V and V →− U ∩ V. Also if U → X and V → Y, then U ∩ V → X ∩ Y. The simplex ⟨v, y_1, ..., y_n⟩ of β_v goes to ⟨V, Y_1, ..., Y_n⟩ where V → Y_j. Let Z denote the intersection of all these Y_j's appearing for all the n-simplices in C_v.
By (2.4) we know $Z \in \mathscr{P}_A$, and it is easy to verify that $V \to Z$ and $Y_j \to Z$. Extend $\tau' : L' \to K$ to the desired map $\bar{\tau} : P' \to K$ by sending $\langle v, y_1, \ldots, y_n, z \rangle$ to $\langle V, Y_1, \ldots, Y_n, Z \rangle$ vertex by vertex.

Case 2: There is at least one vertex $x_i$ appearing in some simplex $R = \langle x_1, \ldots, x_a, v, y_1, \ldots, y_b \rangle$ of $\beta_v$. Define $P'$ to be $L'$ together with a simplex $T = \langle z, x_1, \ldots, x_a, v, y_1, \ldots, y_b \rangle$ corresponding to each $R$, where possibly there is no $x_i$ and $a = 0$. Define $M'$ to be $D \cup \bar{C}_v \cup [\bigcup_{u \neq v} C_u]$, where $\bar{C}_v$ consists of all faces of the simplices $\langle z, x_1, \ldots, x_a, y_1, \ldots, y_b \rangle$. To obtain $\bar{\alpha}$, let $S = (-1)^a \langle z, x_1, \ldots, x_a, y_1, \ldots, y_b \rangle$ correspond to the simplex $R$ of $\beta_v$, and let $\bar{\beta}_v$ be the sum of terms $hS$ as $hR$ runs over the terms in $\beta_v$. As in Case 1, let $\bar{\alpha} = \bar{\beta}_v + \sum_{u \neq v} \beta_u + \gamma$.

To show $\bar{\alpha}$ and $\alpha'$ are homologous, let $\eta$ be the sum of the terms $hT$ as $hR$ runs through $\beta_v$. Again, we must show $\partial(\eta) = \alpha' - \bar{\alpha}$. Consider the formula

$$\partial(R) = (-1)^a \langle x_1, \ldots, x_a, y_1, \ldots, y_b \rangle + \sum_{q=1}^{a} (-1)^{q-1} \langle x_1, \ldots, \hat{x}_q, \ldots, x_a, v, y_1, \ldots, y_b \rangle + \sum_{j=1}^{b} (-1)^{a+j} \langle x_1, \ldots, x_a, v, y_1, \ldots, \hat{y}_j, \ldots, y_b \rangle.$$

As one sums over the terms $\partial(hR)$ in $\partial(\beta_v)$, the last two sums in the above formula add up to zero as in Case 1. Now consider

$$\partial(T) = \langle x_1, \ldots, x_a, v, y_1, \ldots, y_b \rangle + (-1)^{a+1} \langle z, x_1, \ldots, x_a, y_1, \ldots, y_b \rangle + \sum_{q=1}^{a} (-1)^q \langle z, x_1, \ldots, \hat{x}_q, \ldots, x_a, v, y_1, \ldots, y_b \rangle + \sum_{j=1}^{b} (-1)^{a+j+1} \langle z, x_1, \ldots, x_a, v, y_1, \ldots, \hat{y}_j, \ldots, y_b \rangle.$$

There is a one-to-one correspondence between the terms in the last two sums in this formula and the similar terms in $\partial R$, and the corresponding coefficients all differ by a factor of $-1$. Hence, upon summing the terms $\partial(hT)$, we see that $\partial(\eta)$ is the sum of the various expressions $h \langle x_1, \ldots, x_a, v, y_1, \ldots, y_b \rangle - h(-1)^a \langle z, x_1, \ldots, x_a, y_1, \ldots, y_b \rangle$. In other words, $\partial(\eta) = \alpha' - \bar{\alpha}$.

Finally, we define $\bar{\tau} : P' \to K$. Let $Z$ denote the intersection of all the $X_i$ of all the simplices $R$ of $C_v$. Observe that $V \to X_i$, and therefore $V \to Z$ and $X_i \to Z$ for all $X_i$. Since $X_i \leftarrow V \to Y_j$ for all $X_i$ and $Y_j$, regardless of whether $X_i$
and $Y_j$ belong to the same simplex of $K$, it is easy to verify that $X_i \to X_i \cap Y_j \leftarrow Y_j$. See (2.24) below. Therefore $X_i \to Y_j$, and in particular $Z \to Y_j$ for all $Y_j$ as well. Extend $\tau' : L' \to K$ to $\bar{\tau} : P' \to K$ by sending $\langle z, x_1, \ldots, x_a, v, y_1, \ldots, y_b \rangle$ to $\langle Z, X_1, \ldots, X_a, V, Y_1, \ldots, Y_b \rangle$ vertex by vertex. This completes the proof of the inductive step. We are now finished proving that $\mathscr{P}_A$ is contractible.

Next we discuss certain boundedness properties of the matrices $M = M(U)$ where $U \in \mathscr{P}_A$. Recall that a non-negative matrix $M = \{M(s, t)\}$ is row finite (resp. column finite) provided each row (resp. column) has at most finitely many non-zero entries. A matrix is locally finite provided it is both row and column finite. Let

$$\|M\|_r = \sup_s \sum_t M(s, t), \qquad \|M\|_c = \sup_t \sum_s M(s, t), \qquad \|M\| = \max\{\|M\|_r, \|M\|_c\}.$$

Proposition 2.15. — Let $M = M(U)$ for $U \in \mathscr{P}_A$. If $A$ is finite, then $M$ is finite. If $A$ is locally finite, then so is $M$, and moreover $\|A\| < \infty$ implies $\|M\| < \infty$.

If $A$ is finite, then $X_A$ is compact, and therefore any cover by open and closed disjoint sets must be finite. Hence $M$ is finite. So now assume $A$ is locally finite. The proof that $\mathscr{P}_A$ is connected shows that it is sufficient to consider the special cases

(I) $U < V$ and $\ell(U, V) \le 1$; (II) $V < U$ and $\ell(V, U) \le 1$;

where we assume (2.15) has already been proved for $U \in \mathscr{P}_A$ and must show it then holds for $V \in \mathscr{P}_A$. The proof of these two cases is straightforward.

Next we consider Markov partitions from the viewpoint expounded in [F, p. 100], which uses canonical coordinates. Let $x \in X_A$ and $n \in \mathbf{Z}$. Let

$$W^s(x, n) = \{ y \mid y_i = x_i, \ i \ge n \}, \qquad W^u(x, n) = \{ y \mid y_i = x_i, \ i < n \}.$$

If $x, y \in X_A$ satisfy $x_0 = y_0$, then $W^u(x, 0) \cap W^s(y, 0)$ consists of the single point $z$ where $z_i = x_i$ for $i \le 0$ and $z_i = y_i$ for $i \ge 0$. Define $[x, y] = z$. A set $R \subset X_A$ is a rectangle provided it is open and closed and $x, y \in R$ implies $x_0 = y_0$ and $[x, y] \in R$. If $R$ is a rectangle, let $W^s(x, R) = W^s(x, 0) \cap R$ and $W^u(x, R) = W^u(x, 0) \cap R$.
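The bracket operation $[x, y]$ just defined is simple enough to model directly. Here is a small illustrative sketch (our own finite-window model, not from the paper: a point of $X_A$ is represented as a dict from coordinate indices to symbols, ignoring the bi-infinite index set):

```python
def bracket(x, y):
    """The canonical-coordinate bracket [x, y]: the unique point z with
    z_i = x_i for i <= 0 and z_i = y_i for i >= 0 (requires x_0 = y_0)."""
    assert x[0] == y[0], "bracket is only defined when x_0 = y_0"
    return {i: (x[i] if i <= 0 else y[i]) for i in set(x) | set(y)}

x = {-2: 'a', -1: 'b', 0: 'c', 1: 'a', 2: 'b'}
y = {-2: 'c', -1: 'a', 0: 'c', 1: 'b', 2: 'a'}
z = bracket(x, y)
# z agrees with x on the past (i <= 0) and with y on the future (i >= 0):
# {-2: 'a', -1: 'b', 0: 'c', 1: 'b', 2: 'a'}
```

In these terms a rectangle is precisely a clopen set of points sharing the $0$-th coordinate and closed under this gluing.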
Then for any $x \in R$ we have $W^u(x, R) \times W^s(x, R) \cong R$ via the correspondence $(a, b) \mapsto [b, a]$.

Definition 2.16. — A topological Markov partition of rectangles for $\sigma_A$ on $X_A$ is a covering $U = \{U_i\}$ by disjoint rectangles such that if $x \in U_i$ and $\sigma_A(x) \in U_j$, then

$$\sigma_A(W^u(x, U_i)) \supseteq W^u(\sigma_A(x), U_j) \quad \text{and} \quad \sigma_A(W^s(x, U_i)) \subseteq W^s(\sigma_A(x), U_j).$$

Moreover, $U$ is said to be uniform provided $U < U^A(-n, n)$ for some $n \ge 0$. Note that $U^A < U$ automatically, because each $U_i$ is a rectangle. We let $\mathscr{R}_A$ denote the set of all topological Markov partitions by rectangles for $\sigma_A$. Clearly $U^A \in \mathscr{R}_A$. The following proposition extends to $\mathscr{R}_A$ several properties which are well known when $A$ is finite.

Proposition 2.17. — If $U$ and $V$ belong to $\mathscr{R}_A$, then so do $U \cap V$, $U \cap \sigma_A(V)$, and $U \cap \sigma_A^{-1}(V)$. Moreover, if $U \in \mathscr{R}_A$, then $U$ […].

The proof is similar to the case when $A$ is finite. Note also that $\mathscr{R}_A$ is closed under intersection while $\mathscr{P}_A$ is not.

Remark 2.18. — The subcomplex of $\mathscr{P}_A$ formed by only considering simplices with vertices in $\mathscr{R}_A$ is also contractible. The proof is the same as for $\mathscr{P}_A$. However, $\mathscr{R}_A$ is closed under intersection; so we automatically know, for example, that the vertex $Z$ in Step III of (2.12) lies in $\mathscr{R}_A$. It is not necessary to invoke something like (2.4).

The set of uniform equivalences $\mathrm{Isom}(\sigma_A, \sigma_A)$ does not act on all of $\mathscr{R}_A$ but only on "small" elements of $\mathscr{R}_A$. For example, $U^A \in \mathscr{R}_A$ but $\sigma_A^{-1}(U^A) \notin \mathscr{R}_A$, even though $\sigma_A^{-1}(U^A) \in \mathscr{P}_A$. However, $\sigma_A^{-1}(U^A(-1, 0)) \in \mathscr{R}_A$. Here is the precise statement.

Lemma 2.19. — Let $\alpha_1, \ldots, \alpha_k$ be a finite collection of uniform equivalences from $X_A$ to $X_B$. Let $m \ge 0$. Then there is an integer $n \ge 0$ such that for each $j = 1, \ldots, k$: if $V \in \mathscr{R}_A$ refines $U^A(-n, n)$, then $\alpha_j(V) \in \mathscr{R}_B$ and $\alpha_j(V)$ refines $U^B(-m, m)$.

Definition 2.20. — Any partition $V$ as in (2.19) will be called $(\alpha, m, n)$-small or, for brevity, just $\alpha$-small.

Proof of 2.19. — This is well known [F] for finite matrices. For completeness we give the argument in the general case. Let $\mathscr{S}$ and $\mathscr{T}$ be the state spaces for $A$ and $B$ respectively.
Recall from [H, W2] that a uniformly continuous shift-commuting $\alpha : X_A \to X_B$ can be written in the form $\alpha = \sigma^q h_\infty = h_\infty \sigma^q$ (for some integer $q$), where $h_\infty$ is the $(p+1)$-block map determined by a map $h : \mathscr{S}^{p+1}_A \to \mathscr{T}$. Here $\mathscr{S}^{p+1}_A$ consists of those $(p+1)$-tuples $[x_0, \ldots, x_p]$ with $A(x_i, x_{i+1}) = 1$ for $0 \le i \le p - 1$, and $h$ satisfies $B(h(x_0, \ldots, x_p), h(y_0, \ldots, y_p)) = 1$ whenever $x_i = y_{i-1}$ for $1 \le i \le p$.

Consider the special case of a single $\alpha : X_A \to X_B$ of the form $\alpha = h_\infty$. Suppose $U^A(0, p) < V$ and $R$ is a rectangle of $V$ with $x \in R$. We will show that $\alpha(R)$ is a rectangle, that $\alpha(W^s(x, R)) = W^s(\alpha(x), \alpha(R))$, and that $\alpha(W^u(x, R)) = W^u(\alpha(x), \alpha(R))$. Property (2.16) for $\alpha(V)$ is an immediate consequence of this. For brevity of notation, let $W^s_k(x) = W^s(x, k)$ and $W^u_k(x) = W^u(x, k)$.

(a) $\alpha(R)$ is a rectangle. Let $x, y \in R$. Then $[\alpha(x), \alpha(y)] = \alpha[x, y] \in \alpha(R)$, because $x_i = y_i$ for $0 \le i \le p$.

(b) $\alpha(W^s(x, R)) = W^s(\alpha(x), \alpha(R))$. Clearly the left-hand side $\alpha(W^s_0(x)) \cap \alpha(R)$ is contained in the right-hand side $W^s_0(\alpha(x)) \cap \alpha(R)$, because $\alpha(W^s_0(x)) \subseteq W^s_0(\alpha(x))$. To see the other inclusion, suppose that $z = \alpha(u)$ for some $u \in R$ and that $z_i = \alpha(x)_i$ for $i \ge 0$. Let $v = [u, x]$. Then $v \in R$ and $\alpha(v) = z$, because $u_i = x_i$ for $0 \le i \le p$.

(c) $\alpha(W^u(x, R)) = W^u(\alpha(x), \alpha(R))$. The assumption on $R$ implies that $W^u_0(x) \cap R = W^u_{p+1}(x) \cap R$. Since $\alpha(W^u_{p+1}(x)) \subseteq W^u_0(\alpha(x))$, we see that the left-hand side $\alpha(W^u_0(x) \cap R) = \alpha(W^u_0(x)) \cap \alpha(R)$ is contained in the right-hand side. Conversely, suppose $z$ satisfies $z = \alpha(u)$ for $u \in R$ and $z_i = \alpha(x)_i$ for $i \le 0$. Let $v = [x, u] \in W^u_0(x)$. Then $v \in R$ and $\alpha(v) = z$, because $x_i = u_i$ for $0 \le i \le p$.

It remains to see that $\alpha(V)$ is a uniform Markov partition. Suppose $V < U^A(-k, k)$. By uniform continuity of $\alpha^{-1}$ there is an $\ell \ge 0$ so that $U^A(-k, k) < \alpha^{-1}(U^B(-\ell, \ell))$. Then $\alpha(V) < \alpha(U^A(-k, k)) < U^B(-\ell, \ell)$. It is now clear that, given any $m \ge 0$ and a finite collection of maps written in the form $\sigma^q h_\infty$, it is possible to choose $n$ so large that the conclusion of (2.19) holds. This completes the proof.

We now discuss the action (2.3) of $\mathrm{Aut}(\sigma_A)$ on $\mathscr{P}_A$.
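In the proof above, a uniform equivalence is a power of the shift composed with a block map $h_\infty$. On finite words the $(p+1)$-block map is just a sliding window; here is a toy model (our own helper, ignoring the bi-infinite index set and the subshift constraints):

```python
def block_map_apply(h, xs, p):
    """Apply the (p+1)-block map h_infty to a finite word xs:
    output position i gets h(xs[i], ..., xs[i + p])."""
    return [h(tuple(xs[i:i + p + 1])) for i in range(len(xs) - p)]

# A 2-block map (p = 1) on the full 2-shift: h(a, b) = a + b mod 2.
word = [0, 1, 1, 0, 1]
image = block_map_apply(lambda w: (w[0] + w[1]) % 2, word, 1)
# image == [1, 0, 1, 1]
```

The output word is shorter by $p$ symbols, which is why, on the genuine bi-infinite shift space, $h_\infty$ is defined coordinate by coordinate with no loss of length.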
If $G$ is any subgroup, let $\mathscr{P}_A^G$ denote the subcomplex of all simplices which are pointwise fixed by $G$. Assume, moreover, that $A$ is finite.

Proposition 2.21. — (A) $\mathscr{P}_A^G$ is non-empty if and only if $G$ is finite, in which case $\mathscr{P}_A^G$ is contractible. (B) The action of $\mathrm{Aut}(\sigma_A)$ on $\mathscr{P}_A$ is properly discontinuous.

Proof of (A). — Assume $|G| < \infty$ and let $U \in \mathscr{R}_A$ be small. Since $\mathscr{R}_A$ is closed under intersections, the partition $\bigcap_{\alpha \in G} \alpha(U)$ is again in $\mathscr{R}_A$ and is fixed by $G$. Thus $\mathscr{P}_A^G$ is non-empty. The proof that $\mathscr{P}_A$ is contractible carries over directly to $\mathscr{P}_A^G$, using the fact that if $U$ and $V$ are partitions fixed by $G$, then so is $U \cap V$. As in the proof of (2.12), these intersections are only to be taken under circumstances where they will still be Markov.

Conversely, we now show that if $U \in \mathscr{P}_A$, then the isotropy group $H(U)$ of $U$ in $\mathrm{Aut}(\sigma_A)$ is finite. Let $B = M(U)$ as in (2.5) and let $\theta : X_A \to X_B$ be as in (2.6). Observe $\theta(U) = U^B$. Therefore, if $\alpha(U) = U$, then $\bar{\alpha} = \theta \alpha \theta^{-1} \in \mathrm{Aut}(\sigma_B)$ fixes $U^B$ and determines a permutation matrix $\pi(\bar{\alpha})$ of the states corresponding to the sets in $U$ such that $\pi(\bar{\alpha}) B = B \pi(\bar{\alpha})$. Conversely, any such permutation gives a one-block automorphism of $\sigma_B$, and hence an automorphism of $\sigma_A$. This procedure defines an isomorphism between $H(U)$ and the finite group of such permutations.

Proof of (B). — We show that if $K$ is a finite subcomplex of $\mathscr{P}_A$ and $H$ is the set of symmetries $\alpha$ satisfying $K \cap \alpha(K) \neq \emptyset$, then $H$ is finite. For each pair of vertices $(U, V)$ of $K \times K$, let $H(U, V)$ denote the set of those $\alpha$ for which $\alpha(U) = V$. A given $H(U, V)$ may be empty, but $H$ is the union of the finite collection of the $H(U, V)$. Moreover, $H(U, V)$ is a coset of the isotropy subgroup $H(U)$, which is finite by (A).

Finally, in this section we present some facts about Markov partitions which will be used in § 4.

Lemma 2.22. — Let $U, V \in \mathscr{P}_A$. If $U \xrightarrow{+} V$ and $U \xrightarrow{-} V$, then $U = V$.

Proof. — Let $U = \{U_i\}$ for $i \in I$ and let $B = M(U)$.
According to (2.6) there is an isomorphism $\theta : X_A \to X_B$ between $\sigma_A$ and $\sigma_B$ under which $\theta(U_i) = U_i^B$, the standard cylinder set, for $i \in I$. Thus $\theta(U) = U^B$. Moreover, $\theta(V) \in \mathscr{P}_B$ and we have $U^B \xrightarrow{+} \theta(V)$ and $U^B \xrightarrow{-} \theta(V)$. So it really suffices to consider the special case $U = U^A$.

Let $V_k \in V$. Since $U < V < U(-1, 0)$, we can write $V_k$ as a union of sets $\sigma_A(U_i) \cap U_j$, where $V_k \subseteq U_j$. Thus every $q \in \mathscr{S}$ for which $A(j, q) = 1$ can occur as $q = x_1$ for some $x = \{x_i\} \in V_k$. On the other hand, write $V_k$ as a union of sets $U_j \cap \sigma_A^{-1}(U_q)$, using the hypothesis $U < V < U \cap \sigma_A^{-1}(U)$. From the above, every $q$ with $A(j, q) = 1$ must occur in this union. Thus

$$V_k = \bigcup_{A(j, q) = 1} U_j \cap \sigma_A^{-1}(U_q) = U_j.$$

Lemma 2.23. — If $V, W, X \in \mathscr{P}_A$ satisfy the condition $V \xrightarrow{+} X \xleftarrow{-} W$, then $X = V \cap W$.

Proof. — Check that $V \cap W \xrightarrow{+} X$ and $V \cap W \xrightarrow{-} X$ and apply (2.22):

$$X < V \cap \sigma_A^{-1}(V) < (W \cap \sigma_A^{-1}(W)) \cap (V \cap \sigma_A^{-1}(V)) = (V \cap W) \cap \sigma_A^{-1}(V \cap W),$$
$$X < W \cap \sigma_A(W) < (V \cap \sigma_A(V)) \cap (W \cap \sigma_A(W)) = (V \cap W) \cap \sigma_A(V \cap W).$$

Lemma 2.24. — Let $U, V, W \in \mathscr{P}_A$ satisfy $V \xleftarrow{-} U \xrightarrow{+} W$. Then $V \xrightarrow{+} V \cap W \xleftarrow{-} W$.

Proof. — We have $U < V$, $\sigma_A^{-1}(U) < \sigma_A^{-1}(V)$, and $W < U \cap \sigma_A^{-1}(U) < V \cap \sigma_A^{-1}(V)$. Hence $V \cap W < V \cap (V \cap \sigma_A^{-1}(V)) = V \cap \sigma_A^{-1}(V)$. On the other hand, we have $U < W$, $\sigma_A(U) < \sigma_A(W)$, and $V < U \cap \sigma_A(U) < W \cap \sigma_A(W)$. Therefore $V \cap W < (W \cap \sigma_A(W)) \cap W = W \cap \sigma_A(W)$.

3. The triangle identities

The main goal of this section is to prove the algebraic identities (3.3) arising from triangles in $\mathscr{P}_A$. Let $U = \{U_i\}$ and $V = \{V_k\}$ be any two vertices of $\mathscr{P}_A$. As in [PW] define $R = R(U, V)$ and $S = S(V, U)$ by

(3.1) $R(i, k) = 1$ if $U_i \cap V_k \neq \emptyset$, and $0$ otherwise; $S(k, i) = 1$ if $V_k \cap \sigma_A^{-1}(U_i) \neq \emptyset$, and $0$ otherwise.

Let $P = M(U)$ and $Q = M(V)$ be as in (2.5).

Proposition 3.2. — If $U \to V$, then $P = RS$ and $Q = SR$.

Remark. — We have not made any assumptions about finiteness of the matrices. So part of (3.2) asserts that $R(i, k) S(k, j) = 1$ for at most finitely many $k$; similarly for $S(k, i) R(i, t)$. In fact, a step in the proof is to show that $RS$ and $SR$ are indeed zero-one matrices.

Proof of 3.2. Step I: $RS = P$.
Fix a pair of indices $(i, j)$. We must show $P(i, j) = \sum_k R(i, k) S(k, j)$. First assume the right-hand side (RHS) is not zero. We will verify that the left-hand side (LHS) is non-zero and, moreover, that only one term on the RHS is non-zero. Since $U < U \cap V < \sigma_A(V) \cap V$, we can write $U_i$ as the disjoint union $U_i = \bigcup \sigma_A(V_a) \cap V_b$ of certain non-empty sets $\sigma_A(V_a) \cap V_b$. Similarly we write $U_j = \bigcup \sigma_A(V_c) \cap V_d$ where $\sigma_A(V_c) \cap V_d \neq \emptyset$. Suppose $k$ is an index where $R(i, k) S(k, j) = 1$. Then $U_i \cap V_k \neq \emptyset$ and $V_k \cap \sigma_A^{-1}(U_j) \neq \emptyset$, and therefore some $b = k$ and some $c = k$. In particular,

$$U_i \cap \sigma_A^{-1}(U_j) \supseteq \sigma_A(V_a) \cap V_k \cap \sigma_A^{-1}(V_d)$$

for some pair of indices $(a, d)$, where the triple intersection is non-empty by (c) of (2.1). In particular $P(i, j) \neq 0$. Recall that $V < U \cap V < U \cap \sigma_A^{-1}(U)$. Therefore, if a pair of indices $(i, j)$ is given with $U_i \cap \sigma_A^{-1}(U_j) \neq \emptyset$, then there is only one $V_k$ such that $V_k \supseteq U_i \cap \sigma_A^{-1}(U_j)$. Therefore, if $R(i, k) S(k, j) = 1$, we have

$$V_k \supseteq U_i \cap \sigma_A^{-1}(U_j) \supseteq \sigma_A(V_a) \cap V_k \cap \sigma_A^{-1}(V_d),$$

and $k$ is determined by the pair $(i, j)$. That is, there is only one $k$ with $R(i, k) S(k, j) = 1$. Conversely, suppose $P(i, j) = 1$. We show there is a $k$ satisfying $R(i, k) S(k, j) = 1$. Let $V_k$ be the unique element of $V$ such that $V_k \supseteq U_i \cap \sigma_A^{-1}(U_j) \neq \emptyset$. Then $U_i \cap V_k \neq \emptyset$ and $V_k \cap \sigma_A^{-1}(U_j) \neq \emptyset$. Hence $R(i, k) S(k, j) = 1$.

Step II: $SR = Q$. Fix a pair of indices $(k, t)$. We must show $Q(k, t) = \sum_i S(k, i) R(i, t)$. Assume the RHS is non-zero. We must show $Q(k, t) \neq 0$ and that there is exactly one $i$ such that $S(k, i) R(i, t) = 1$. Write $V_k = \bigcup U_a \cap \sigma_A^{-1}(U_b)$ and $V_t = \bigcup U_c \cap \sigma_A^{-1}(U_d)$, using the condition $V < U \cap V < U \cap \sigma_A^{-1}(U)$. Let $i$ satisfy $S(k, i) R(i, t) = 1$. Then some $b = i$ and some $c = i$. Hence

$$V_k \cap \sigma_A^{-1}(V_t) \supseteq U_a \cap \sigma_A^{-1}(U_i) \cap \sigma_A^{-2}(U_d)$$

for some pair of indices $(a, d)$, where the triple intersection is non-empty by (c) of (2.1). As in Step I, we use the condition $U < U \cap V < \sigma_A(V) \cap V$ to show that $i$ is determined by $k$ and $t$. Conversely, suppose $Q(k, t) = 1$. Let $i$ be the unique index such that $U_i \supseteq \sigma_A(V_k) \cap V_t \neq \emptyset$. Then $S(k, i) R(i, t) = 1$.
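The pattern $P = RS$, $Q = SR$ verified in the two steps is the elementary strong shift equivalence pattern. The smallest numeric instance (matrices chosen by hand for illustration, not derived from an actual partition pair) also satisfies the intertwining relations $PR = RQ$, $SP = QS$ of a shift equivalence with lag $1$:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

P = [[2]]                  # one state with two self-loops
Q = [[1, 1], [1, 1]]       # two states, all transitions allowed
R = [[1, 1]]               # 1 x 2
S = [[1], [1]]             # 2 x 1

assert matmul(R, S) == P                 # RS = P
assert matmul(S, R) == Q                 # SR = Q
assert matmul(P, R) == matmul(R, Q)      # PR = RQ
assert matmul(S, P) == matmul(Q, S)      # SP = QS
```

These last two relations are exactly the ones required of a shift equivalence in (4.2) below, here with lag $n = 1$.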
This completes the proof of (3.2).

Now consider a triangle $\langle U, V, W \rangle$ in $\mathscr{P}_A$, where $U = \{U_i\}$, $V = \{V_k\}$, $W = \{W_\ell\}$, and let

$$M = M(U), \quad R_1 = R(U, V), \quad S_1 = S(V, U), \quad R_2 = R(V, W), \quad S_2 = S(W, V), \quad R_3 = R(U, W), \quad S_3 = S(W, U).$$

Proposition 3.3 (Triangle identities). — $R_1 R_2 = R_3$ and $S_2 S_1 = S_3 M$.

Step I: The triangle identity for the $R$-matrices. The argument proceeds by several special cases. Consider the triangle $\langle U, V, W \rangle$ with edge signs $\varepsilon_i = \pm 1$ for $i = 1, 2, 3$. If all $\varepsilon_i = +1$, then it follows easily from (3.1) that

(3.4) $R(U, W) = R(U, V) R(V, W)$.

If all $\varepsilon_i = -1$, we similarly have

(3.5) $R(W, U) = R(W, V) R(V, U)$.

Lemma 3.6. — If one $\varepsilon_i = +1$ and another $\varepsilon_j = -1$, then either $U = V$ or $V = W$.

Proof. — By (2.22) it suffices to show that $U \to V$ or $V \to W$ holds with both signs. There are four cases to consider. For example, suppose $\varepsilon_1 = 1$, $\varepsilon_2 = -1$, and $\varepsilon_3 = 1$. Then $W < U \cap \sigma_A^{-1}(U) < V \cap \sigma_A^{-1}(V)$, so that $V \xrightarrow{+} W$. Hence $V = W$. The other cases are similar.

Lemma 3.7. — For any $U, V \in \mathscr{P}_A$, we have $R(U, V) = R(U, U \cap V) R(U \cap V, V)$.

Proof. — Easy from (3.1).

Lemma 3.8. — If $V \xleftarrow{-} U \xrightarrow{+} W$, then $R(V, W) = R(V, U) R(U, W)$.

Proof. — Let $R_1 = R(V, W)$, $R_2 = R(V, U)$, and $R_3 = R(U, W)$. Let $U = \{U_i\}$, $V = \{V_k\}$, and $W = \{W_\ell\}$. We must show $R_1 = R_2 R_3$. First observe that $R_2 R_3$ is a zero-one matrix. To see this, fix a pair of indices $(k, \ell)$ and write

$$V_k = \bigcup \sigma_A(U_a) \cap U_b, \qquad W_\ell = \bigcup U_c \cap \sigma_A^{-1}(U_d).$$

Suppose a term $R_2(k, j) R_3(j, \ell) = 1$ in the sum $\sum_j R_2(k, j) R_3(j, \ell)$. Then $V_k \cap U_j \neq \emptyset$ and $U_j \cap W_\ell \neq \emptyset$. This implies $j = b = c$. Thus $j$ is determined by $(k, \ell)$ and there is at most one non-zero term. Moreover, $V_k \cap W_\ell \supseteq \sigma_A(U_a) \cap U_j \cap \sigma_A^{-1}(U_d)$, which is non-empty by (c) of (2.1). Thus $R_1(k, \ell) = 1$. Conversely, if $R_1(k, \ell) = 1$, then for some pair of indices $(a, d)$ the intersection $\sigma_A(U_a) \cap U_b \cap U_c \cap \sigma_A^{-1}(U_d)$ must be non-empty.
Thus $b = c$, and the term $R_2(k, j) R_3(j, \ell) = 1$ for $j = b = c$. Now observe that the triangle $\langle U, V, W \rangle$ gives rise to a commutative diagram of Markov partitions relating $U$, $V$, $W$, and $U \cap W$ (diagram not reproduced). From (2.22) we see that $U \cap W = U \cap V \cap W$. Hence we obtain the commutative diagram (3.9) (not reproduced). Note in particular that $V < U \cap W$. The required $R$-identity is now a consequence of several applications of (3.4), (3.5), (3.7), and (3.8).

Step II: The triangle identity for the $S$-matrices. The diagram (3.9) yields a commutative diagram (3.10) in which the edge from $U$ to $W$ is factored through $U \cap W$. Let $T_1 = S(U \cap W, U)$ and $T_2 = S(W, U \cap W)$. We now verify the identity

(3.11) $S_2 S_1 = T_2 T_1$.

Fix a pair of indices $(p, i)$ and consider the two expressions

(LHS) $\sum_k S_2(p, k) S_1(k, i)$ and (RHS) $\sum_{(q, j)} T_2(p, (q, j)) T_1((q, j), i)$.

In general these sums will be non-negative integers, not just zero or one. We prove that LHS $=$ RHS by showing that for each pair $(q, j)$ for which $T_2(p, (q, j)) T_1((q, j), i) = 1$ there is exactly one corresponding index $k$ such that $S_2(p, k) S_1(k, i) = 1$, and vice versa. Let $\sigma = \sigma_A$. Let $(q, j)$ be an index pair such that $T_2(p, (q, j)) T_1((q, j), i) = 1$. Let $V_k$ be the unique element of $V$ such that $V_k \supseteq W_q \cap U_j \neq \emptyset$. Then

$$W_p \cap \sigma^{-1}(V_k) \supseteq W_p \cap \sigma^{-1}(W_q \cap U_j) \neq \emptyset, \qquad V_k \cap \sigma^{-1}(U_i) \supseteq W_q \cap U_j \cap \sigma^{-1}(U_i) \neq \emptyset,$$

so that $S_2(p, k) S_1(k, i) = 1$. Now suppose $(q', j')$ is another pair such that $T_2(p, (q', j')) T_1((q', j'), i) = 1$, and let $k'$ satisfy $V_{k'} \supseteq W_{q'} \cap U_{j'} \neq \emptyset$. If we can show that the condition $(q, j) \neq (q', j')$ implies $k \neq k'$, then we will know that RHS $\le$ LHS. From (3.9) we can write

$$W_p = \bigcup V_a \cap \sigma^{-1}(V_b), \quad W_q = \bigcup V_c \cap \sigma^{-1}(V_d), \quad U_j = \bigcup \sigma(V_e) \cap V_f, \quad U_i = \bigcup \sigma(V_g) \cap V_h.$$

Then we can write the condition $W_p \cap \sigma^{-1}(W_q \cap U_j) \neq \emptyset$ more explicitly as a union of sets

$$(V_a \cap \sigma^{-1}(V_b)) \cap (\sigma^{-1}(V_c) \cap \sigma^{-2}(V_d)) \cap (V_e \cap \sigma^{-1}(V_f)).$$

Since $V < W \cap U$ and $V < \sigma(W) \cap W$, we have $\sigma^{-1}(V) < W \cap \sigma^{-1}(W) \cap \sigma^{-1}(U)$. The above expression then simplifies to a union of sets

$$(V_a \cap \sigma^{-1}(V_b)) \cap (\sigma^{-1}(V_b) \cap \sigma^{-2}(V_d)) \cap (V_a \cap \sigma^{-1}(V_b)),$$

in which the same element of $\sigma^{-1}(V)$ occurs in each factor.
Since $\sigma^{-1}(U) < W \cap \sigma^{-1}(W)$, we must have $W_p \cap \sigma^{-1}(W_q) \cap \sigma^{-1}(U_j) = W_p \cap \sigma^{-1}(W_q)$, i.e. $W_p \cap \sigma^{-1}(W_q) \subseteq \sigma^{-1}(U_j)$, and can therefore conclude that

(3.12) every $a$ for which $V_a \cap \sigma^{-1}(V_k) \neq \emptyset$ in the expression for $W_p$ must occur as some $e$ for which $\sigma(V_e) \cap V_k \neq \emptyset$ in the expression for $U_j$.

Next write the condition $W_q \cap U_j \cap \sigma^{-1}(U_i) \neq \emptyset$ more explicitly as the union of the sets

$$(V_c \cap \sigma^{-1}(V_d)) \cap (\sigma(V_e) \cap V_f) \cap (V_g \cap \sigma^{-1}(V_h)).$$

Since $V < W \cap U$ and $V < U \cap \sigma^{-1}(U)$, we have $V < W \cap U \cap \sigma^{-1}(U)$. The above expression simplifies to the union of the sets

$$(V_c \cap \sigma^{-1}(V_d)) \cap (\sigma(V_e) \cap V_c) \cap (V_c \cap \sigma^{-1}(V_h)).$$

Since $W < U \cap \sigma^{-1}(U)$, we see that $W_q \cap U_j \cap \sigma^{-1}(U_i) = U_j \cap \sigma^{-1}(U_i) \subseteq W_q$, and therefore have

(3.13) every $h$ for which $\sigma(V_k) \cap V_h \neq \emptyset$ in the expression for $U_i$ must occur as some $d$ for which $V_k \cap \sigma^{-1}(V_d) \neq \emptyset$ in the expression for $W_q$.

Suppose now that $k = k'$. It then follows from (3.12) and (3.13) that $(W_q \cap U_j) \cap (W_{q'} \cap U_{j'}) \neq \emptyset$, contrary to the assumption that $(q, j) \neq (q', j')$. Hence $k \neq k'$, and the correspondence $(q, j) \mapsto k$ sends no two $(q, j)$ to the same $k$.

It remains to show that LHS $\le$ RHS. Choose an index $k$ such that $S_2(p, k) S_1(k, i) = 1$. This means $W_p \cap \sigma^{-1}(V_k) \neq \emptyset$ and $V_k \cap \sigma^{-1}(U_i) \neq \emptyset$. Use (3.9) to write $W_p = \bigcup V_a \cap \sigma^{-1}(V_b)$ and $U_i = \bigcup \sigma(V_c) \cap V_d$. Then

$$W_p \cap \sigma^{-1}(V_k) = \bigcup_a V_a \cap \sigma^{-1}(V_k), \qquad V_k \cap \sigma^{-1}(U_i) = \bigcup_d V_k \cap \sigma^{-1}(V_d).$$

Hence the Markov property (2.1) implies

$$W_p \cap \sigma^{-1}(V_k) \cap \sigma^{-2}(U_i) \neq \emptyset.$$

Now write $V_k = \bigcup W_q \cap U_j$ and then

$$W_p \cap \sigma^{-1}(V_k) = \bigcup W_p \cap \sigma^{-1}(W_q) \cap \sigma^{-1}(U_j), \qquad V_k \cap \sigma^{-1}(U_i) = \bigcup W_q \cap U_j \cap \sigma^{-1}(U_i).$$

We see that there must be some $W_q \cap U_j$ in the expression for $V_k$ such that

$$W_p \cap \sigma^{-1}(W_q \cap U_j) \neq \emptyset \qquad \text{and} \qquad (W_q \cap U_j) \cap \sigma^{-1}(U_i) \neq \emptyset,$$

so that $T_2(p, (q, j)) T_1((q, j), i) = 1$. We demonstrated above that for a given $V_k$ satisfying $S_2(p, k) S_1(k, i) = 1$ there is exactly one pair $(q, j)$ satisfying the above condition. Hence the correspondence $k \mapsto (q, j)$ is well defined.
It is clearly injective; for if $k \mapsto (q, j)$ and $k' \mapsto (q, j)$, then $V_k \cap V_{k'} \supseteq W_q \cap U_j \neq \emptyset$, which is contrary to the basic assumption that the $V_k$ are disjoint. This completes the proof of (3.11).

The triangle identity for the $S$-matrices of the triangle $\langle U, V, W \rangle$ now follows from two applications of (3.10): the first for (3.10) as it stands, and the second for (3.10) with $U = V$, because in this case $S_1 = S(V, U) = M$ and $S_2 = S(W, U) = S_3$. This completes the proof of (3.3).

4. Invariants for Aut(σ_A)

We first construct a homomorphism from $\mathrm{Aut}(\sigma_A)$ to the fundamental group $\pi_1(S(\mathscr{E}), A)$ of the space $S(\mathscr{E})$ of shift equivalences. Throughout this section we let $\mathscr{A}$ denote the set of zero-one matrices $M$ on products $\mathscr{S} \times \mathscr{T}$ of various finite or countable state spaces $\mathscr{S}$ and $\mathscr{T}$ such that each row and each column has at least one non-zero entry. We will moreover assume that the matrices in $\mathscr{A}$ belong to one of the four following classes:

(4.1) (a) $M$ is finite; (b) $M$ is infinite but locally finite; (c) $M$ is infinite and $\|M\| < \infty$; (d) $M$ is infinite.

In case (d) there is the tacit assumption that a product $RS$ of matrices is written down only when it is well defined. Thus the equation $P = RS$ assumes that even though we may have $R(i, k) > 0$ and $S(\ell, j) > 0$ for infinitely many $k$ and $\ell$, there are only finitely many $k$ such that $R(i, k) > 0$ and $S(k, j) > 0$ simultaneously for a given pair of indices $i$ and $j$. It is not hard to verify, along the lines of (2.15), that if $U \to V$ in $\mathscr{P}_A$ and $A$ is in $\mathscr{A}$, then the matrices $R$ and $S$ of (3.1) also belong to $\mathscr{A}$. We let $\mathscr{E}$ be the category of matrices formed by taking the "closure" of $\mathscr{A}$; namely, all products of matrices in $\mathscr{A}$. A standard state splitting argument as in [K] shows that $\mathscr{E}$ consists of all non-negative integral matrices satisfying the corresponding condition in (4.1), as do the matrices in $\mathscr{A}$.

Let $A$ be an $\mathscr{S} \times \mathscr{S}$ matrix and $B$ a $\mathscr{T} \times \mathscr{T}$ matrix in $\mathscr{E}$.
Recall from, say, [E] that a shift equivalence $R : A \to B$ in $\mathscr{E}$ is an $\mathscr{S} \times \mathscr{T}$ matrix $R$ in $\mathscr{E}$ such that there exist a $\mathscr{T} \times \mathscr{S}$ matrix $S$ in $\mathscr{E}$ and an integer $n > 0$ satisfying

(4.2) $AR = RB$, $SA = BS$, $RS = A^n$, $SR = B^n$.

Observe that if $P : A \to B$ and $Q : B \to C$, then $PQ : A \to C$.

Definition 4.3. — The space $S(\mathscr{E})$ of shift equivalences in $\mathscr{E}$ is the realization of the simplicial set in which an $n$-simplex consists of (a) an $(n+1)$-tuple $\langle A_0, \ldots, A_n \rangle$ of square matrices $A_i$ in $\mathscr{E}$, and (b) a shift equivalence $R_i : A_{i-1} \to A_i$ in $\mathscr{E}$ for $1 \le i \le n$. The face operators come from composition and the degeneracy operators insert the identity. See [S] or [Sp] for background on simplicial sets and simplicial complexes. It is immediate from the definitions that the set of path components $\pi_0(S(\mathscr{E}))$ of $S(\mathscr{E})$ is just the set of shift equivalence classes of matrices in $\mathscr{E}$.

Take note of the following conventions. Composition will be read from left to right in $\mathscr{A}$ and $\mathscr{E}$. In the category of sets and functions, or of spaces and continuous maps, composition will be read from right to left. Thus if $f : X \to Y$ and $g : Y \to Z$ are functions, then the composition is $gf : X \to Z$. Also, in the category of right modules and module homomorphisms, composition will be read from right to left. If $f : I \to J$ is a bijection of sets, let $f$ also denote the $J \times I$ permutation matrix which is $1$ in the $(j, i)$ entry if and only if $j = f(i)$. If $f : I \to J$ and $g : J \to K$ are bijections, then the $K \times I$ matrix associated to $gf$ is the product of the $K \times J$ matrix for $g$ followed on the right by the $J \times I$ matrix for $f$.

Let $\alpha : X_A \to X_B$ be a uniform equivalence from $\sigma_A$ to $\sigma_B$. Let $U = \{U_i\}$ be in $\mathscr{P}_A$ with $P = M(U)$, and let $U' = \{U'_k\}$ be in $\mathscr{P}_B$ with $P' = M(U')$. Assume $\alpha(U) = \{\alpha(U_i)\} = \{U'_k\} = U'$. Then $U_i \cap \sigma_A^{-1}(U_j) \neq \emptyset$ if and only if $\alpha(U_i) \cap \sigma_B^{-1}(\alpha(U_j)) \neq \emptyset$. Considering $\alpha$ as a bijection between the indexing sets $I$ for $U$ and $K$ for $U'$, we have the matrix identity

(4.4) $P' = \alpha P \alpha^{-1}$.

Hence $\alpha^{-1} : P \to P'$ and $\alpha : P' \to P$. Suppose now that $U \to V$ in $\mathscr{P}_A$. Let $Q = M(V)$ and let $R$ and $S$ be as in (3.1).
Let $V' = \alpha(V) \in \mathscr{P}_B$. Then $U' \to V'$ in $\mathscr{P}_B$, and we have the corresponding matrices $R'$ and $S'$. These matrices satisfy the matrix equations

$$Q' = \alpha Q \alpha^{-1}, \qquad R' = \alpha R \alpha^{-1}, \qquad S' = \alpha S \alpha^{-1},$$

which translate into a diagram (4.5) of triangles in $S(\mathscr{E})$ relating the edges $P \to Q$ and $P' \to Q'$ (diagram not reproduced).

Now let $A$ and $B$ be endomorphisms in $\mathscr{E}$ which lie in the same path component of $S(\mathscr{E})$. We let $\pi_1(S(\mathscr{E}); A, B)$ denote the set of homotopy classes of paths starting at $A$ and ending at $B$. Concatenation of paths gives a pairing $\pi_1(S(\mathscr{E}); A, B) \times \pi_1(S(\mathscr{E}); B, C) \to \pi_1(S(\mathscr{E}); A, C)$ denoted by "$\cdot$". When $A = B$ we just get the fundamental group $\pi_1(S(\mathscr{E}), A)$. If $\gamma \in \pi_1(S(\mathscr{E}); A, B)$, then $\gamma^{-1}$ denotes the path in $\pi_1(S(\mathscr{E}); B, A)$ which is the reverse of $\gamma$, going back from $B$ to $A$. If $R : A \to B$ in $\mathscr{E}$, let $\gamma(R) \in \pi_1(S(\mathscr{E}); A, B)$ be the homotopy class of the corresponding edge from $A$ to $B$ in $S(\mathscr{E})$.

Lemma 4.6. — (a) $\gamma(\alpha)^{-1} = \gamma(\alpha^{-1})$. (b) If $\gamma \in \pi_1(S(\mathscr{E}); A, B)$, then $\gamma(A) \cdot \gamma = \gamma \cdot \gamma(B)$.

Proof. — The proof of (a) is left as an exercise. To verify (b), observe that since $\gamma$ is a product of paths $\gamma(R)^{\pm 1}$, it suffices to consider the case $\gamma = \gamma(R)$. The formula then follows from the diagram (4.7) (not reproduced).

Let $\mathrm{Isom}(\sigma_A, \sigma_B)$ denote the set of all uniform equivalences from $(X_A, \sigma_A)$ to $(X_B, \sigma_B)$.

Proposition 4.8. — There is a map $\Psi = \Psi(A, B)$ from $\mathrm{Isom}(\sigma_A, \sigma_B)$ to $\pi_1(S(\mathscr{E}); A, B)$ such that if $\alpha \in \mathrm{Isom}(\sigma_A, \sigma_B)$ and $\beta \in \mathrm{Isom}(\sigma_B, \sigma_C)$, then $\Psi(\beta\alpha) = \Psi(\alpha) \cdot \Psi(\beta)$. Considering $\sigma_A \in \mathrm{Isom}(\sigma_A, \sigma_A)$, we have $\Psi(\sigma_A) = \gamma(A)$.

From this we get a homomorphism $\Psi_A : \mathrm{Aut}(\sigma_A) \to \pi_1(S(\mathscr{E}), A)$ by taking $A = B$ and letting $\Psi_A(\alpha) = \Psi(\alpha^{-1})$. The proof of (4.8) is based on the following more technical result.

Proposition 4.9. — Let $U$ and $V$ be in $\mathscr{P}_A$ with $P = M(U)$ and $Q = M(V)$. Then there is a well defined path $\Gamma(U, V)$ in $\pi_1(S(\mathscr{E}); P, Q)$ such that

$$\Gamma(U, U) = 1, \qquad \Gamma(U, W) = \Gamma(U, V) \cdot \Gamma(V, W).$$

Moreover, if $\alpha \in \mathrm{Isom}(\sigma_A, \sigma_B)$, then $\Gamma(\alpha(U), \alpha(V)) = \gamma(\alpha) \cdot \Gamma(U, V) \cdot \gamma(\alpha)^{-1}$.

Consider the special case $U \to V$ in $\mathscr{P}_A$.
Then define $\Gamma(U, V) = \gamma(R)$. In general, choose a path from $U$ to $V$ in $\mathscr{P}_A$ which is a concatenation of edges $\langle U(i-1), U(i) \rangle^{\varepsilon(i)}$ for $i = 1, \ldots, n$, where $\varepsilon(i) = \pm 1$, $U(0) = U$, $U(n) = V$, and $U(i-1) \to U(i)$. Then define

(4.10) $\Gamma(U, V) = \Gamma(U, U(1))^{\varepsilon(1)} \cdot \Gamma(U(1), U(2))^{\varepsilon(2)} \cdots \Gamma(U(n-1), U(n))^{\varepsilon(n)}$.

It must be shown that $\Gamma(U, V)$ is independent of the particular path chosen in $\mathscr{P}_A$ from $U$ to $V$. Before completing (4.9) we show how to derive (4.8) from it.

Proof of Proposition 4.8. — Let $\alpha \in \mathrm{Isom}(\sigma_A, \sigma_B)$. As in (4.4), we have $\alpha^{-1} : M(U^A) \to M(\alpha(U^A))$. Define

(4.11) $\Psi(\alpha) = \gamma(\alpha^{-1}) \cdot \Gamma(\alpha(U^A), U^B)$.

Now let $\alpha \in \mathrm{Isom}(\sigma_A, \sigma_B)$ and $\beta \in \mathrm{Isom}(\sigma_B, \sigma_C)$. Then

$$\Psi(\beta\alpha) = \gamma(\alpha^{-1} \beta^{-1}) \cdot \Gamma(\beta\alpha(U^A), U^C) = \gamma(\alpha^{-1}) \cdot \gamma(\beta^{-1}) \cdot \Gamma(\beta\alpha(U^A), \beta(U^B)) \cdot \Gamma(\beta(U^B), U^C).$$

From (4.6) and (4.9) we see that

$$\Gamma(\beta\alpha(U^A), \beta(U^B)) = \gamma(\beta) \cdot \Gamma(\alpha(U^A), U^B) \cdot \gamma(\beta)^{-1}.$$

Substituting and then simplifying gives

$$\Psi(\beta\alpha) = \gamma(\alpha^{-1}) \cdot \Gamma(\alpha(U^A), U^B) \cdot \gamma(\beta^{-1}) \cdot \Gamma(\beta(U^B), U^C) = \Psi(\alpha) \cdot \Psi(\beta).$$

To compute $\Psi(\sigma_A)$, let $U^A = \{U^A_i\}$ be the standard partition and let $\alpha = \sigma_A$. Observe that $\sigma_A(U^A)$ is joined to $U^A$ because $\sigma_A(U^A) \to \sigma_A(U^A) \cap U^A \leftarrow U^A$. Then we have a triangle in $S(\mathscr{E})$ with vertices $A$, $A$, and $M(\sigma_A(U^A))$ (diagram not reproduced), which shows

$$\Psi(\sigma_A) = \gamma(\sigma_A^{-1}) \cdot \Gamma(\sigma_A(U^A), U^A) = \gamma(A).$$

Proof of Proposition 4.9. — From the definition (4.10) we have $\Gamma(U, U) = \gamma(1) = 1$. We also see that $\Gamma(U, V) \cdot \Gamma(V, W) = \Gamma(U, W)$, provided $\Gamma$ is independent of the path chosen from $U$ to $V$ in $\mathscr{P}_A$. But this follows immediately from the Triangle Identities (3.3) and simple connectivity of $\mathscr{P}_A$. The property $\Gamma(\alpha(U), \alpha(V)) = \gamma(\alpha) \cdot \Gamma(U, V) \cdot \gamma(\alpha)^{-1}$ is a consequence of (4.5) and (4.6). This completes the proof.

In dynamical systems and operator algebras, inverting functors provide a way to obtain invariants for the shift dynamical system $(X_A, \sigma_A)$. Here is a framework for making these constructions natural enough to get invariants for $\mathrm{Aut}(\sigma_A)$. When $A$ is finite, it turns out that the dimension group is ubiquitous. As usual let $\mathscr{E}$ be one of the four classes (4.1). Let $\mathscr{C}$ be a category where composition reads from right to left.
Assume that if $f$ and $g$ are morphisms and both $fg$ and $gf$ are isomorphisms, then so are $f$ and $g$. For example, $\mathscr{C}$ could be an abelian category. An inverting functor $F$ on $\mathscr{C}$ first of all assigns to each endomorphism $A$ of $\mathscr{E}$ an object $F(A)$ of $\mathscr{C}$. Next suppose $A$ and $B$ are endomorphisms of $\mathscr{E}$ and $X$ is a morphism of $\mathscr{E}$ such that $AX = XB$. We are then given a morphism $f(X) : F(B) \to F(A)$ which must satisfy the composition rule $f(XY) = f(X) f(Y)$. Finally, observe this produces a morphism $f(A) : F(A) \to F(A)$ for each endomorphism $A$ in $\mathscr{E}$. We say $F$ is inverting provided $f(A)$ is an isomorphism.

A wholesale method for manufacturing such $F$ is to take $F(A) = \operatorname{coker}(I - A\,q(A))$ or $F(A) = \ker(I - A\,q(A))$, where $q$ is a polynomial over a commutative ring $\Lambda$ and $\mathscr{C}$ is the category of right $\Lambda$-modules and $\Lambda$-homomorphisms. A shift equivalence $R : A \to B$ induces $f(R)$ via the $\Lambda$-homomorphism $R : \Lambda[\mathscr{T}] \to \Lambda[\mathscr{S}]$. Some familiar examples are:

(i) $F(A) = \operatorname{coker}(I - tA)$, $\Lambda = \mathbf{Z}[t, t^{-1}]$. This is the dimension group. See [BF], [Cu1], [CK], [K], [E], [W2].
(ii) $F(A) = \operatorname{coker}(I - A)$, $\Lambda = \mathbf{Z}$.
(iii) $F(A) = \ker(I - A)$, $\Lambda = \mathbf{Z}$.
(iv) $F(A) = \operatorname{coker}(I - \ldots)$, $\Lambda = \mathbf{Z}$. See [BF], [Cu1], [Cu2], [F].
(v) $F(A)$ = the bounded solutions of $(I - A)x = 0$, where $A$ is infinite. See [KV] and (4.25) below.

Observe that if $F$ is an inverting functor, then $F(A)$ is an invariant of shift equivalence. This is because $f(R) f(S) = f(RS) = f(P)^n$ and $f(S) f(R) = f(SR) = f(Q)^n$.

Let $A, B$ be endomorphisms in $\mathscr{E}$. $\mathrm{Isom}(f(A), f(B))$ will denote all the isomorphisms $g : F(A) \to F(B)$ in $\mathscr{C}$ such that $f(B) g = g f(A)$. When $A = B$, we let $\mathrm{Aut}(f(A)) = \mathrm{Isom}(f(A), f(A))$.

Proposition 4.12. — Let $A, B$ be endomorphisms in $\mathscr{E}$. There is a map $\Psi_F = \Psi_F(A, B)$ from $\mathrm{Isom}(\sigma_A, \sigma_B)$ to $\mathrm{Isom}(f(B), f(A))$ such that if $\alpha \in \mathrm{Isom}(\sigma_A, \sigma_B)$ and $\beta \in \mathrm{Isom}(\sigma_B, \sigma_C)$, then $\Psi_F(\beta\alpha) = \Psi_F(\alpha)\,\Psi_F(\beta)$. If $A = B$, then $\Psi_F(\sigma_A) = f(A)$.

From this we obtain a homomorphism

(4.13) $\Psi_{F, A} : \mathrm{Aut}(\sigma_A) \to \mathrm{Aut}(f(A))$

by taking $A = B$ and letting $\Psi_{F, A}(\alpha) = \Psi_F(\alpha^{-1})$.
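Example (ii), $\operatorname{coker}(I - A) = \mathbf{Z}^m/(I - A)\mathbf{Z}^m$, is the Bowen–Franks group, and it is directly computable: diagonalize $I - A$ over $\mathbf{Z}$ by unimodular row and column operations and read off the cyclic factors. A small sketch (our own helper, not from the paper):

```python
def cokernel_invariants(M):
    """Diagonalize an integer matrix by unimodular row/column operations.
    coker(M) on Z^n is then the direct sum of Z/|d| over the returned
    diagonal entries d (a Z summand for each d = 0)."""
    M = [row[:] for row in M]
    m, n = len(M), len(M[0])
    for t in range(min(m, n)):
        # bring some nonzero entry of the lower-right submatrix to (t, t)
        pos = next(((i, j) for i in range(t, m) for j in range(t, n) if M[i][j]), None)
        if pos is None:
            break
        i, j = pos
        M[t], M[i] = M[i], M[t]
        for r in range(m):
            M[r][t], M[r][j] = M[r][j], M[r][t]
        changed = True
        while changed:
            changed = False
            for i in range(t + 1, m):          # clear column t below the pivot
                if M[i][t]:
                    q = M[i][t] // M[t][t]
                    for c in range(n):
                        M[i][c] -= q * M[t][c]
                    if M[i][t]:                # Euclidean step: smaller remainder up
                        M[t], M[i] = M[i], M[t]
                    changed = True
            for j in range(t + 1, n):          # clear row t right of the pivot
                if M[t][j]:
                    q = M[t][j] // M[t][t]
                    for r in range(m):
                        M[r][j] -= q * M[r][t]
                    if M[t][j]:
                        for r in range(m):
                            M[r][t], M[r][j] = M[r][j], M[r][t]
                    changed = True
    return [abs(M[t][t]) for t in range(min(m, n))]

# I - A for A = [[1, 3], [2, 1]]; factors 3 and 2, so coker(I - A) = Z/6
inv = cokernel_invariants([[0, -3], [-2, 0]])
```

Since $F(A)$ is a shift equivalence invariant, shift equivalent matrices necessarily give isomorphic cokernels, so this is a quick computable obstruction.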
The homomorphism $\Psi_{F, A}$ was first developed in connection with the algebraic K-theory group $K_2$; see (4.21) below. D. Lind observed that the method goes through for $F(A) = \operatorname{coker}(I - A\,q(A))$ with basically no changes. This led to (4.12). There are entirely similar versions of (4.12) and (4.13), depending on whether $F$ is covariant or contravariant and on whether composition in $\mathscr{C}$ is read from right to left or vice versa. The general character of (4.12) suggested that there should be a "universal version". This turns out to be the case, and it involves the dimension group.

Proposition 4.14. — Let $F : \mathscr{E} \to \mathscr{C}$ be an inverting functor and let $A, B$ be endomorphisms in $\mathscr{E}$. Then there is a map $\eta_F = \eta_F(A, B) : \pi_1(S(\mathscr{E}); A, B) \to \mathrm{Isom}(f(B), f(A))$ such that

(i) if $\gamma \in \pi_1(S(\mathscr{E}); A, B)$ and $\delta \in \pi_1(S(\mathscr{E}); B, C)$, then $\eta_F(\gamma \cdot \delta) = \eta_F(\gamma)\,\eta_F(\delta)$;
(ii) $\eta_F$ takes $\gamma(A) \in \pi_1(S(\mathscr{E}), A)$ to $f(A)$ in $\mathrm{Aut}(f(A))$.

The required map of (4.12) is then clearly just

(4.15) $\Psi_F = \eta_F \circ \Psi$.

The proof of (4.14) is really "general nonsense"; see [Q]. But for the convenience of readers who find [Q] overly abstract, we give a more concrete formula for $\Psi_F$ in the spirit of (4.10) and (4.11). Assume $U \to V$ in $\mathscr{P}_A$. Let $P = M(U)$, $Q = M(V)$, and $R = R(U, V)$. Then define $f(U, V) = f(R) \in \mathrm{Isom}(F(Q), F(P))$. In general, for any $U, V \in \mathscr{P}_A$, choose a path from $U$ to $V$ in $\mathscr{P}_A$ which is a concatenation of edges $\langle U(i-1), U(i) \rangle^{\varepsilon(i)}$ for $i = 1, \ldots, n$, where $\varepsilon(i) = \pm 1$, $U(0) = U$, $U(n) = V$, and $U(i-1) \to U(i)$. Let

(4.16) $f(U, V) = \prod_i f(U(i-1), U(i))^{\varepsilon(i)}$.

Remember we are reading composition from right to left in $\mathscr{C}$. If $\alpha \in \mathrm{Isom}(\sigma_A, \sigma_B)$, then

(4.17) $\Psi_F(\alpha) = f(\alpha^{-1})\, f(\alpha(U^A), U^B)$.

Now we discuss the unique role played by the dimension group $G(A)$. Let $\mathscr{E}$ be as in (a), (b) or (c) of (4.1). Let $A \in \mathscr{E}$ and define

$$G(A) = \varinjlim \big( \mathbf{Z}[\mathscr{S}] \xrightarrow{A} \mathbf{Z}[\mathscr{S}] \xrightarrow{A} \cdots \big) \cong \operatorname{coker}(I - tA)$$

as in [E], [W2]. $G(A)$ will be considered as a right $\mathbf{Z}[t, t^{-1}]$-module. Let $G(A)^+$ denote the set of positive elements, and let $s_A = g(A)$.
The homomorphism $\eta_G : \pi_1(S(\mathscr{E}), A) \to \mathrm{Aut}(g(A))$ is constructed by sending a path in $S(\mathscr{E})$ corresponding to $R : P \to Q$ to the isomorphism $g(R) : G(Q) \to G(P)$, which takes $G(Q)^+$ to $G(P)^+$. In particular, for each loop $\gamma \in \pi_1(S(\mathscr{E}), A)$, the isomorphism $\eta_G(\gamma)$ preserves the order structure of $G(A)$. Let $\mathrm{Aut}(G(A), G(A)^+, s_A)$ denote all those automorphisms of $G(A)$ which preserve the order structure and commute with $s_A$.

Proposition 4.18. — If $A \in \mathscr{E}$ is finite, then $\eta_G : \pi_1(S(\mathscr{E}), A) \to \mathrm{Aut}(G(A), G(A)^+, s_A)$ is an isomorphism.

Corollary 4.19. — If $A \in \mathscr{E}$ is finite, then $\Psi_{F, A} : \mathrm{Aut}(\sigma_A) \to \mathrm{Aut}(f(A))$ factors through $\mathrm{Aut}(G(A), G(A)^+, s_A)$.

Proof of 4.18. — Surjectivity of $\eta = \eta_G$: This is another way of interpreting Krieger's argument proving that two finite non-negative integral matrices are shift equivalent if and only if the triples $(G(P), G(P)^+, s_P)$ and $(G(Q), G(Q)^+, s_Q)$ are isomorphic. See [K, 4.2] or [E, 6.4]. In fact, the argument shows that any element in $\mathrm{Aut}(G(A), G(A)^+, s_A)$ is the image under $\eta_G$ of a path of the form $\gamma(R) \cdot \gamma(A)^n$, where $R : A \to A$ is a morphism in $S(\mathscr{E})$ and $n \in \mathbf{Z}$.

Injectivity of $\eta = \eta_G$: Let $R : P \to Q$ be a path in $S(\mathscr{E})$ and choose $S : Q \to P$ as in (4.2) so that $RS = P^k$. Then $\gamma(R) \cdot \gamma(S) = \gamma(P)^k$, and hence $\gamma(R)^{-1} = \gamma(S) \cdot \gamma(P)^{-k}$. Any loop $\gamma$ in $\pi_1(S(\mathscr{E}), A)$ is a product of paths $\gamma(R)^\varepsilon$ for $\varepsilon = \pm 1$. Hence it is a product of paths $\gamma(R) \cdot \gamma(P)^k$ for $k \in \mathbf{Z}$ and various $P$. Since $\gamma(P) \cdot \gamma = \gamma \cdot \gamma(Q)$, the $\gamma(P)^k$ can be pushed to the end of the product. The relation $\gamma(R_1 R_2) = \gamma(R_1) \cdot \gamma(R_2)$ can then be used to deform the loop to one of the form $\gamma(A)^n \cdot \gamma(R)$ for some $n \in \mathbf{Z}$. Since all matrices are assumed to be finite, it is a consequence of the definition of $G(A)$ as a direct limit that shift equivalences $P : A \to A$ and $Q : A \to A$ induce the same automorphism of $G(A)$ if and only if there is a non-negative integer $k$ such that $A^k P = A^k Q$. We want to apply this under the assumption that $\eta_G(\gamma) = 1$.

Case 1: $\gamma = \gamma(A)^n \cdot \gamma(R)$, $n \ge 0$.
Then γ = γ(AⁿR). Since we assume AⁿR induces the identity on G(A), there is a k ≥ 1 such that A^k = A^k AⁿR. Hence we have

    γ(A)^k = γ(A^k) = γ(A^k AⁿR) = γ(A)^k ∗ γ(AⁿR)

and γ(AⁿR) = 1.

Case 2: γ = γ(A)^{−n} ∗ γ(R), n ≥ 0. Then R and Aⁿ induce the same automorphism of G(A), and A^k R = A^k Aⁿ for some k ≥ 1. We then have

    γ(A)^k ∗ γ(R) = γ(A^k R) = γ(A^k Aⁿ) = γ(A^k) ∗ γ(Aⁿ).

Hence γ(R) = γ(A)ⁿ, which gives γ = 1. ∎

Product Formula

Let F : 𝓔 → 𝓒 be an inverting covariant functor into a category 𝓒 of right modules over a commutative ring Λ with identity 1. We say F is compatible with tensor products provided there is an isomorphism F(A ⊗_Z B) ≅ F(A) ⊗_Λ F(B) of Λ-modules whenever A and B are endomorphisms in 𝓔, such that if R : A₁ → A₂, then there is a commutative diagram

    F(A₁ ⊗_Z B) ≅ F(A₁) ⊗_Λ F(B)
         ↓                 ↓
    F(A₂ ⊗_Z B) ≅ F(A₂) ⊗_Λ F(B)

in which the vertical maps are induced by R. The prime example is the dimension group G(A), which is compatible because the tensor product commutes with direct limits.

Let (X_A, σ_A) and (X_B, σ_B) have standard partitions U_A = {U_α} and U_B = {U_β} respectively. Let C = A ⊗ B. Then C((i₁, j₁), (i₂, j₂)) = A(i₁, i₂) B(j₁, j₂), and consequently there is an isomorphism (X_C, σ_C) ≅ (X_A × X_B, σ_A × σ_B) under which U_C = U_A × U_B = {U_α × U_β}. In particular σ_A × 1 ∈ Aut(σ_{A⊗B}).

Proposition 4.20. — If F is compatible with tensor products, then φ_{F, A⊗B}(σ_A × 1) = φ_{F, A}(σ_A) ⊗ 1.

Proof. — Direct computation using (4.17), similar to the proof that Γ(σ_A) = γ(A) in (4.8). ∎

Relation to K₂

Let F be a field and, for each prime ideal 𝔭 ⊂ F[t, t⁻¹], let F_𝔭 denote the residue field F[t, t⁻¹]/𝔭. Let K₂ be the algebraic K-theory group of [M1].

Proposition 4.21. — Assume A is finite. Then there is a commutative diagram

    Aut(σ_A) --κ_A--> K₂(F(t))
        \                 |
         λ_A              | ∂
          \               ↓
           `---->  ⊕_𝔭 F_𝔭*

where ∂ is the tame symbol. The image of λ_A is contained in the sum of those F_𝔭* where 𝔭 divides det(I − tA).

The first step is to define κ_A and λ_A. Suppose A is an m × m matrix. Let

    G(A; F) = G(A) ⊗_Z F ≅ coker(I − tA)

where I − tA is now viewed as an m × m matrix over F[t, t⁻¹].
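Since Proposition 4.21 singles out the primes 𝔭 dividing det(I − tA), it may help to compute that polynomial in a couple of small cases. The following Python sketch (our own illustration; the matrices are standard small examples, not drawn from this paper) evaluates det(I − tA) by the Leibniz formula with polynomial entries:

```python
from itertools import permutations

# Illustrative sketch: compute det(I - tA) for an integer matrix A.
# Its prime factors control which residue fields F_p occur in the
# torsion module G(A; F) = coker(I - tA).
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists [c0, c1, ...]."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def det_I_minus_tA(A):
    """Coefficients [c0, c1, ...] of det(I - tA) via the Leibniz formula."""
    m = len(A)
    total = [0] * (m + 1)
    for perm in permutations(range(m)):
        sign = 1
        for i in range(m):
            for j in range(i + 1, m):
                if perm[i] > perm[j]:
                    sign = -sign
        term = [sign]
        for i in range(m):
            # the entry (I - tA)[i][perm[i]] is a degree-1 polynomial in t
            const = 1 if i == perm[i] else 0
            term = poly_mul(term, [const, -A[i][perm[i]]])
        for k, c in enumerate(term):
            total[k] += c
    return total

# Golden-mean shift: det(I - tA) = 1 - t - t^2.
assert det_I_minus_tA([[1, 1], [1, 0]]) == [1, -1, -1]
# Full 2-shift: det(I - tA) = 1 - 2t.
assert det_I_minus_tA([[2]]) == [1, -2]
```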
Since G(A; F) ≅ Image(A^k : F^m → F^m) for k large enough, it is finite dimensional over F. If α ∈ Aut(σ_A), then φ_G(α) is a vector space automorphism, and both φ_G(α) and I − tA are commuting automorphisms of G(A; F) ⊗_F F(t) as a vector space over F(t). We let

(4.22)    κ_A(α) = φ_G(α) ∗ (I − tA)

where the "∗" product is defined in [M1, § 8]. Since G(A; F) is finite dimensional over F, it is certainly a finitely generated torsion module over F[t, t⁻¹]. Let G(A; F)_𝔭 denote the 𝔭-primary part of G(A; F), i.e., those elements killed by some power 𝔭^r. Then G(A; F) decomposes naturally as a direct sum ⊕_𝔭 G(A; F)_𝔭. Each G(A; F)_𝔭 is filtered as

    G(A; F)_𝔭 ⊃ 𝔭G(A; F)_𝔭 ⊃ 𝔭²G(A; F)_𝔭 ⊃ … ⊃ 𝔭^{r−1}G(A; F)_𝔭 ⊃ 0

so that G^i_𝔭 = 𝔭^i G(A; F)_𝔭 / 𝔭^{i+1} G(A; F)_𝔭 is a vector space over F_𝔭. Any automorphism α of G(A; F) as an F[t, t⁻¹]-module takes each G(A; F)_𝔭 to itself and respects the filtration. Let

    Δ_𝔭(α) = ∏_i { det of α on G^i_𝔭 } ∈ F_𝔭*    and    Δ(α) = ⊕_𝔭 Δ_𝔭(α) ∈ ⊕_𝔭 F_𝔭*.

From (4.13) we have a homomorphism φ_{G,A} : Aut(σ_A) → Aut(s_A), where s_A = φ_{G,A}(σ_A) is multiplication by t⁻¹ on G(A; F). We define

    λ_A(α) = Δ(φ_G(α))

for α ∈ Aut(σ_A).

Proof of 4.21. — By naturality of the exact sequence involving the tame symbol as boundary map [M1], it suffices to consider the case where F is algebraically closed. Choose a basis for G(A; F) over F for which φ_G(α) and A are diagonal. Using the identities in [M1, § 9], one shows that κ_A(α) can be computed just using the diagonal parts of φ_G(α) and of A. Also, λ_A(α) can be computed from the diagonals. Lemma 8.3 of [M1] reduces the computation to symbols {λ, 1 − λt}, where the result follows directly from the definition of the tame symbol [M1, § 11]. Cramer's Rule, as expressed by Proposition 6.6 of [B; IX, § 6], shows that the prime factors of G(A; F) can only be those involving prime polynomials which divide det(I − tA). So λ_A only brings in those primes as well. ∎
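For the reader's convenience, here is the standard formula for the tame symbol from [M1, § 11], written in our notation (v = v_𝔭 is the valuation attached to the prime 𝔭, and the bar denotes reduction to the residue field F_𝔭):

```latex
% Tame symbol at a prime \mathfrak{p} of F[t,t^{-1}], with
% v = v_{\mathfrak{p}} the associated valuation on F(t) and
% \overline{(\,\cdot\,)} reduction to the residue field F_{\mathfrak{p}}:
\partial_{\mathfrak{p}}\{f,g\}
   = (-1)^{v(f)\,v(g)}\,
     \overline{\left(\frac{f^{\,v(g)}}{g^{\,v(f)}}\right)}
   \;\in\; F_{\mathfrak{p}}^{*}
```

For instance, for λ ≠ 0 and the prime 𝔭 = (t − λ⁻¹), one has v(λ) = 0 and v(1 − λt) = 1, so ∂_𝔭{λ, 1 − λt} is the reduction of λ itself, which is the kind of symbol computation invoked in the proof above.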
The Dual Dimension Group

Let A be an endomorphism in 𝓔 with ‖A‖ finite. Define the dual dimension group to be the inverse limit

    G^∞(A) = lim←( ℓ^∞ ←^A ℓ^∞ ).

This is an inverting functor into the category of Banach spaces, so we obtain a homomorphism

(4.24)    Aut(σ_A) → Aut(s_A^∞)

where s_A^∞ is the automorphism induced on G^∞(A) by the standard shift one step to the left.

Random Walk on an Infinite Group

For general background about random walks on a countably infinite discrete group G, see [KV]. Here we consider the very special case of a measure μ on G with finite support satisfying the condition that μ(g) = 1/n whenever μ(g) ≠ 0, where n is the number of those g for which μ(g) ≠ 0. Let ℋ^∞(μ) denote the space of μ-harmonic functions on G, namely those bounded functions κ : G → C satisfying

    κ(g) = (Pκ)(g) = Σ_h P(g, h) κ(h) = Σ_h μ(g⁻¹h) κ(h) = Σ_h κ(gh) μ(h),

where P = (1/n)A and A(g, h) = 1 if and only if μ(g⁻¹h) ≠ 0. Then ℋ^∞ is given as the inverting functor

    ℋ^∞ = ℋ^∞(A) = ker(1 − (1/n)A)  on  ℓ^∞,

and we can apply the preceding machinery to obtain a homomorphism

(4.25)    h : Aut(σ_A) → Isomorphisms of ℋ^∞(A).

There is a version of this for a general Markov measure μ and μ-preserving symmetries of σ_A.

Finally, we discuss strong shift equivalence. Let 𝓔 be one of the classes of zero-one matrices in (4.1). First there is the generalization of Williams' strong shift equivalence criterion for topological conjugacy, as given in, say, [Wi] or [PT].

Proposition 4.26. — Let A ∈ 𝓔 and let B be a zero-one matrix. If there is an isomorphism (X_A, σ_A) ≅ (X_B, σ_B), then B ∈ 𝓔. Moreover, if A, B ∈ 𝓔, there exists an isomorphism (X_A, σ_A) ≅ (X_B, σ_B) if and only if A and B are strong shift equivalent in 𝓔.

The proof is basically the same as in [Wi], [PT], [PW], with the key ingredient being the connectedness of 𝓟_A. Now let A and B be two endomorphisms (square matrices) in 𝓔.
As in [Wi], [PT], [PW], we say a pair (R, S) of matrices in 𝓔 is a strong shift equivalence in 𝓔 from A to B provided

(4.27)    RS = A  and  SR = B.

We denote this by (R, S) : A → B or A →^{(R,S)} B.

Definition 4.28. — The space SS(𝓔) of strong shift equivalences in 𝓔 is the realization of the simplicial set where an n-simplex consists of the following data:

(a) an (n + 1)-tuple ⟨A₀, A₁, …, A_n⟩ of endomorphisms A_i ∈ 𝓔, and

(b) for each i < j a strong shift equivalence (R_{ij}, S_{ij}) from A_i to A_j such that whenever i < j < k, the triangle identities hold; that is, R_{jk}R_{ij} = R_{ik} and S_{ij}S_{jk} = S_{ik}.

As with S(𝓔), it follows directly from the definition that the set of path components of SS(𝓔) is just the set of strong shift equivalence classes in 𝓔. If (R, S) : A → B in 𝓔, then R : A → B. The correspondence (R, S) ↦ R induces a map of simplicial sets and a continuous map

(4.29)    SS(𝓔) → S(𝓔).

In fact, there is a commutative diagram

(4.30)    Aut(σ_A) --Γ_A--> π₁(SS(𝓔), A) --> π₁(S(𝓔), A).

The homomorphism Γ_A is obtained by proving (4.8) and (4.9) with S(𝓔) replaced by SS(𝓔). The proofs are similar, and the key observations are as follows. The equation (4.4) shows that

(4.31)    (α⁻¹, αP) : P → P′  and  (α, Pα⁻¹) : P′ → P,

and the analogue for (4.5) is the corresponding commutative diagram of strong shift equivalences relating P, Q and P′. Next, let γ(R, S) be the path from A to B in π₁(SS(𝓔); A, B).

Lemma 4.32. —
(a) γ(α⁻¹, αP) = γ(α, Pα⁻¹)⁻¹.
(b) γ(1, A) = 1.
(c) If γ ∈ π₁(SS(𝓔); A, B), then γ(A, 1) ∗ γ = γ ∗ γ(B, 1).
(d) γ(R, S) ∗ γ(S, R) = γ(A, 1).

Verification of Lemma 4.32 uses the diagram above and the analogous diagram relating A and B through the pairs (R, S) and (S, R). The formulas (4.10) and (4.11) are virtually the same, with γ(R) replaced by γ(R, S) and γ(α⁻¹) by γ(α⁻¹, αP). That Γ is well defined uses the Triangle Identities plus the fact that 𝓟_A is simply connected. Williams' problem of "strong shift equivalence vs.
shift equivalence" [Wi], [E] for the category 𝓔 of non-negative integral matrices can be rephrased as asking whether π₀(SS(𝓔)) → π₀(S(𝓔)) is a bijection. This mere reformulation is heuristic and does not help in solving the problem. But we do note that if A and B are strong shift equivalent, then π₁(SS(𝓔), A) ≅ π₁(SS(𝓔), B), because A and B lie in the same path component of SS(𝓔). The groups π₁(S(𝓔), A) are clearly invariants of shift equivalence. However, it is not known and, at any rate, certainly not obvious, that π₁(SS(𝓔), A) is an invariant of shift equivalence. An open problem is to obtain more information about π₁(SS(𝓔), A) or, for that matter, about π_i(SS(𝓔), A) for i ≥ 2.

REFERENCES

[B] H. Bass, Algebraic K-Theory, W. A. Benjamin, 1968.
[BF] R. Bowen and J. Franks, Homology for zero-dimensional non-wandering sets, Ann. of Math., 106 (1977), 73-92.
[BK] M. Boyle and W. Krieger, Periodic points and automorphisms of the shift, to appear in Trans. Amer. Math. Soc.
[BLR] M. Boyle, D. Lind and D. Rudolph, The automorphism group of a subshift of finite type, preprint, University of Washington / University of Maryland, 1986.
[C] A. Connes, Outer conjugacy classes of automorphisms of factors, Ann. Sci. Éc. Norm. Sup., 4e série, 8 (1975), 383-420.
[CS] A. Connes and E. Størmer, Entropy for automorphisms of II₁ von Neumann algebras, Acta Mathematica, 134 (1975), 289-306.
[Cu1] J. Cuntz, A class of C*-algebras and topological Markov chains, II: Reducible chains and the Ext-functor for C*-algebras, Invent. Math., 63 (1981), 25-40.
[Cu2] J. Cuntz, On the homotopy groups of the space of endomorphisms of a C*-algebra (with applications to topological Markov chains), in Operator Algebras and Group Representations, Vols. 1 and 2, Monographs and Studies in Mathematics, Nos. 17 and 18, Conference at Neptun, Romania, 1980, pp. 124-137.
[CK] J. Cuntz and W. Krieger, A class of C*-algebras and topological Markov chains, Invent. Math., 56 (1980), 251-268.
[E] E. G. Effros, Dimensions and C*-algebras, CBMS, no. 46, American Mathematical Society, 1981.
[F] J. Franks, Homology and dynamical systems, CBMS, no. 49, American Mathematical Society, 1982.
[H] G. Hedlund, Endomorphisms and automorphisms of the shift dynamical system, Mathematical Systems Theory, 3, no. 4 (1969), 320-375.
[HW] A. Hatcher and J. Wagoner, Pseudo-isotopies of compact manifolds, Astérisque, 6, Société Mathématique de France, 1973.
[K] W. Krieger, On dimension functions and topological Markov chains, Invent. Math., 56 (1980), 239-250.
[KV] V. A. Kaimanovich and A. M. Vershik, Random walks on discrete groups: boundary and entropy, Annals of Probability, 11, no. 3 (1983), 457-490.
[M1] J. Milnor, Introduction to Algebraic K-theory, Annals of Math. Studies No. 72, Princeton University Press, 1971.
[M2] J. Milnor, Infinite cyclic coverings, in Conference on Topology of Manifolds, ed. J. G. Hocking, Vol. 13, Prindle, Weber & Schmidt, Inc., 1968, 115-133.
[M3] J. Milnor, Whitehead torsion, Bull. Amer. Math. Soc., 72 (1966), 358-426.
[PT] W. Parry and S. Tuncel, Classification Problems in Ergodic Theory, LMS Lecture Notes, 67, Cambridge University Press, 1982.
[PW] W. Parry and R. F. Williams, Block coding and a zeta function for finite Markov chains, Proc. London Math. Soc. (3), 35 (1977), 483-495.
[Q] D. Quillen, Higher algebraic K-theory, I, Springer Lecture Notes in Mathematics, No. 341 (1973), 85-147.
[S] G. Segal, Classifying spaces and spectral sequences, Publ. Math. I.H.É.S., 34 (1968), 105-112.
[Sp] E. Spanier, Algebraic Topology, McGraw-Hill, 1966.
[T] S. Tuncel, A dimension, dimension modules, and Markov chains, Proc. London Math. Soc. (3), 46 (1983), 100-116.
[W1] J. Wagoner, K₂ and diffeomorphisms of two and three dimensional manifolds, in Geometric Topology, ed. J. C. Cantrell, Academic Press, 1979, 557-580.
[W2] J. Wagoner, Topological Markov chains, C*-algebras, and K₂, preprint, to appear in Advances in Mathematics.
[Wi] R. F. Williams, Classification of subshifts of finite type, Ann. of Math., 98 (1973), 120-153; Errata, ibid., 99 (1974), 380-381.
[Z] F. Zizza, Automorphisms of hyperbolic dynamical systems and K₂, Ph.D. Thesis, University of California, Berkeley, 1985. Also a preprint from University of Washington, Seattle.

Department of Mathematics
University of California
Berkeley, California 94720

Manuscript received July 12, 1985; revised February 3, 1987.
Publications mathématiques de l'IHÉS – Springer Journals
Published: Aug 30, 2007