Algorithmic Graph Theory and Perfect Graphs has now become the classic introduction to the field. It continues to convey the message that intersection graph models are a necessary and important tool for solving real-world problems. Solutions to the algorithmic problems on these special graph classes are continually integrated into systems for a large variety of application areas, from VLSI circuit design to scheduling, from resource allocation to physical mapping of DNA, from temporal reasoning in artificial intelligence to pavement deterioration analysis.
On the mathematical side, perfect graph classes have provided rich soil for deep theoretical results. In short, it remains a stepping stone from which the reader may embark on one of many fascinating research trails.

Martin Charles Golumbic
Haifa, Israel

Foreword

Research in graph theory and its applications has increased considerably in recent years. Typically, the elaboration of new theoretical structures has motivated a search for new algorithms compatible with those structures. Rather than the arduous and systematic study of every new concept definable with a graph, the main task for the mathematician is to eliminate the often arbitrary and cumbersome definitions, keeping only the "deep" mathematical problems.
Of course, the deep problems may well be elusive; indeed, there have been many definitions, from Dieudonné among others, of what a deep problem is. In graph theory, it should relate to a variety of other combinatorial structures and must therefore be connected with many difficult practical problems. Among these will be problems that classical algebra is not able to solve completely or that the computer scientist would not attack by himself.
This book, by Martin Golumbic, is intended as an introduction to graph theory through just these practical problems, nearly all of them related to the structure of permutation graphs, interval graphs, circle graphs, threshold graphs, perfect graphs, and others. The reader will not find motivations drawn from number theory, as is usual for most of the extremal graph problems, or from such refinements of old riddles as the four-color problem and the Hamiltonian tour.
Instead, Golumbic has selected practical problems that occur in operations research, scheduling, econometrics, and even genetics or ecology. The author's point of view has also enjoyed increasing favor in the area of complexity analysis. Each time a new structure appears, the author immediately devotes some effort to a description of efficient algorithms, if any are known to exist, and to a determination of whether a proposed algorithm is able to solve the problem within a reasonable amount of time.
Certainly a wealth of literature on graph theory has developed by now. Yet it is clear that this book brings a new point of view and deserves a special place in the literature. Since that time many classes of graphs, interesting in their own right, have been shown to be perfect.
Research, in the meantime, has proceeded along two lines. The first line of investigation has included the proof of the perfect graph theorem (Theorem 3. ). The second line of approach has been to discover mathematical and algorithmic properties of special classes of perfect graphs: comparability graphs, triangulated graphs, and interval graphs, to name just a few.
Many of these graphs arise quite naturally in real-world applications. For example, uses include optimization of computer storage, analysis of genetic structure, synchronization of parallel processes, and certain scheduling problems. Recently it appeared to me that the time was ripe to assemble and organize the many results on perfect graphs that are scattered throughout the literature, some of which are difficult to locate.
A serious attempt has been made to coordinate the mélange of papers referenced here in a manner that would make the subject more accessible to those interested in algorithmic and algebraic graph theory. I have tried to include most of the important results that are currently known. In addition, a few new results and new proofs of old results appear throughout the text.
In particular, Chapter 9, on superperfect graphs, contains results due to Alan J. Hoffman, Ellis Johnson, Larry J. Stockmeyer, and myself that are appearing in print for the first time. The emphasis of any book naturally reflects the bias of the author. As a mathematician and computer scientist, I am doubly biased. First, I have tried to present a rigorous and coherent theory. Proofs are constructive and are streamlined as much as possible. The notation has been chosen to facilitate these matters. Second, I have directed much attention to the algorithmic aspects of every problem.
The complexity of every algorithm is analyzed so that some measure of its efficiency can be determined. These two approaches enhance one another very well. By exploiting the mathematical properties satisfied a priori by a structure, one is often able to reduce the time or space complexity required to solve a problem. Conversely, the algorithmic approach often leads to startling theoretical results. To illustrate this point, consider the fact that certain NP-complete problems become tractable when restricted to certain classes of perfect graphs, and that the algorithm for recognizing comparability graphs gives rise to a matroid associated with the graph.
A glance at the table of contents will provide a rough outline of the topics to be discussed. The first two chapters are introductory in the sense that they provide the foundations, respectively, of the graph theoretic notions and the algorithmic design and analysis techniques that will be used in the remaining chapters.
The reader may wish to read these two chapters quickly and refer to them as needed. The chapters are structured in such a way that the book will be suitable as a textbook in a course on algorithmic combinatorics, graph theory, or perfect graphs.
In addition, the book will be very useful for applied mathematicians and computer scientists at the research level. Many applications of the theoretical and computational aspects of the subject are described throughout the text. At the end of each chapter there are numerous exercises to test the reader's understanding and to introduce further results. An extensive bibliography follows each chapter, and, when possible, the Mathematical Reviews number is included for further reference. The topics covered in this book have been chosen to fill a vacuum in the literature, and their interrelation and importance will become evident as early as Section 1.
Since the intersection of this volume with the traditional material covered by most graph theory books has been designed to be small, it is highly recommended that the serious student augment his studies with one of these excellent textbooks. A one-year course with two concurrent texts is suggested. Special thanks are due to Claude Berge for the kind words that introduce this volume.
I am happy to acknowledge the help received from Mark Buckingham, particularly in Chapters 3 and . He is the coauthor of Sections 3. The suggestions and critical comments of my "trio" of students, Clyde Kruskal, Larry Rudolph, and Elia Weixelbaum, led to numerous improvements in the exposition. I would also like to express my appreciation to Alan J. Hoffman for many interesting discussions and for his help with the material in Chapter 9.
My thanks go to Uri Peled, Fred S. Roberts, Allan Gottlieb, W. Trotter, and Peter L. I am also indebted to my teacher, Samuel Eilenberg, for the guidance, insight, and kindness shown me during my days at Columbia University. But the greatest and most crucial help has come from my wife Lynn. Although not a mathematician, she managed to unconfound much of this mathematician's gibberish. She also "axed" some of my best jokes, much to my dismay.
More importantly, she has been the rock on which I have always relied for encouragement and inspiration, during our travels and at home, in the course of the research and writing of this book. As it is written in Proverbs: .

List of Symbols

There exists a y.
The subgraph spanned by a subset S of edges.
The adjacency set of vertex v.
The out-degree of vertex v.
The in-degree of vertex v.
The degree of vertex v in an undirected graph.
The reversal of a set E of edges.
The symmetric closure of a set E of edges.
The complement of an undirected graph G.
Graphs G and G' are isomorphic.
The stability number of G.
The chromatic number of G.
The number of transitive orientations of G.
The threshold dimension of G.
Kn: the complete graph on n vertices.
Cn: the chordless cycle on n vertices.
The chordless path graph on n vertices.
The forcing relation on edges.
The collection of linear extensions of a partial order P.
The stack sorting graph of π.
The inverse of the permutation π.
The shuffle product.
The class of stack sorting graphs.
P: the class of deterministic polynomial-time problems.
NP: the class of nondeterministic polynomial-time problems.
Problem Π1 is polynomially transformable to problem Π2.

Errata

Apologies are due to George Lueker for the misspelling of his family name throughout the text; all occurrences of "Leuker" should be "Lueker". Page : the graph in Figure 1. Page : Exercise 21 is false. For example, it can use as many as 7 colors on the graph Gi in Figure 4. A different technique, due to Martin Farber, can be used to obtain a linear-time coloring algorithm for triangulated graphs.
Yannakakis has now proved that determining whether a poset has dimension 3 is NP-complete. Theory B 28. A necessary condition for the existence of a Hamiltonian cycle in split graphs is proved. Erdos and Gallai [I]: change "" to "". Foldes and Hammer : add "MR80c". Hammer, Ibaraki, and Simeone : change to the following: Degree sequences of threshold graphs, Proc.
Page : there should be edges between and (corrected in this edition). Page : Figure 8. (Algebraic Discrete Methods 1.) It will then be the same as the "bull's head" graph on page 16 (corrected in this edition). The two vertical edges should be removed (corrected in this edition). See also Section .

Graph Theoretic Foundations

A permutation is simply a bijection from a set to itself. When A and B are disjoint subsets, we often write their union with a plus sign. Throughout this book we will deal exclusively with finite sets.
For each x ∈ X, the image of x under R is a . It is customary to represent the relation R as a . In this case we say that x' is related to x. Notice that this does not necessarily imply that x is related to x'. Perhaps one should read "will inherit from" instead of "is related to," as in the case of a poor nephew with ten children and his rich widowed childless aunt. Such a relation is said to be an equivalence if it is reflexive, symmetric, and transitive.
A binary relation is called a strict partial order if it is irreflexive and transitive. It is a simple exercise to show that a strict partial order is also antisymmetric.

Graphs

Let us formally define the notion of a graph. Both of these representations will be used interchangeably. Clearly (v, w) ∈ E if and only if w ∈ Adj(v). In this case we say that w is adjacent to v, and v and w are endpoints of the edge (v, w). In this book we will usually drop the parentheses and the comma when denoting an edge. Thus xy ∈ E and (x, y) ∈ E will have the same meaning.
This convention, we believe, improves the clarity of exposition. We have defined a graph as a set and a certain relation on that set. It is often convenient to draw a "picture" of the graph. This may be done in many ways. Usually one draws a circle for each vertex and connects vertex x and vertex y with a directed arrow whenever xy is an edge. If both xy and yx are edges, then sometimes a single line joins x and y without arrows.
Figure 1. In each case the adjacency structure remains unchanged. Occasionally, very intelligent persons will become extremely angry because one does not like the other's pictures. When this happens it is best to remember that our figures are meant simply as a tool to help understand the underlying mathematical structure or as an aid in constructing a mathematical model for some application.
Figure 1. Three pictures of the same graph. The four nonisomorphic orientations of the pentagon are given in Figure 1. The four nonisomorphic orientations of the pentagon. Intuitively, the edges of G become the nonedges of G and vice versa. A graph is complete if every pair of distinct vertices is adjacent. Two types of subgraphs are of particular importance, namely, the subgraph spanned by a given subset of edges and the subgraph induced by a given subset of vertices.
They will now be described. We call H the partial subgraph spanned by S. Some complete graphs. Examples of subgraphs. Obviously not every subgraph of G is an induced subgraph of G (Figure 1. ). Consider the following definitions. A single vertex is a 1-clique. A clique A is maximal if there is no clique of G which properly contains A as a subset. A clique is maximum if there is no clique of G of larger cardinality. Some authors use the term complete set to indicate a clique. A stable set is a subset X of vertices no two of which are adjacent.
Some authors use the term independent set to indicate a stable set. In such a case, the members of Xi are "painted" with the color i and adjacent vertices will receive different colors. We say that G is c-colorable. It is common to omit the word proper; a coloring will always be assumed to be a proper coloring. χ(G) is the smallest possible c for which there exists a proper c-coloring of G; it is called the chromatic number of G. It is easy to see that ω(G) ≤ χ(G), and that α(G) is at most the size of any clique cover, since every vertex of a maximum clique (maximum stable set) must be contained in a different partition segment in any minimum proper coloring (minimum clique cover).
A vertex whose out-degree (in-degree) equals zero is called a sink (source). When G is an undirected graph the situation is somewhat special. That is, the degree of x in an undirected graph is the size of its adjacency set. We present some fairly standard definitions. A path or chain in G is called simple if no vertex occurs more than once. Connected graph: A graph G is connected if between any two vertices there exists a chain in G joining them.
Strongly connected graph: A graph G is strongly connected if for any two vertices x and y there exists a path in G from x to y. Equivalently, G is bipartite if and only if it is 2-colorable. Throughout the text certain graphs will occur many times. We give names to some of them (see Figure 1. ). There is obviously some overlap with these names. The intersection graph of a family of sets is obtained by representing each set in the family by a vertex and connecting two vertices by an edge if and only if their corresponding sets intersect.
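The definition translates directly into code. The following sketch (the function name and index-based vertex labels are ours, not the book's) builds the edge set of the intersection graph of a family of finite sets:

```python
from itertools import combinations

def intersection_graph(sets):
    """Vertices are the indices 0..n-1 of the family; two vertices
    are joined iff their corresponding sets intersect."""
    return {(i, j) for i, j in combinations(range(len(sets)), 2)
            if sets[i] & sets[j]}

# Three intervals, discretized as sets of integer points:
# [1,3] and [2,5] meet, [2,5] and [4,6] meet, [1,3] and [4,6] do not.
family = [set(range(1, 4)), set(range(2, 6)), set(range(4, 7))]
print(sorted(intersection_graph(family)))
```

When the sets are intervals, as in the example, the result is by definition an interval graph.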
The problem of characterizing the intersection graphs of families of sets having some specific topological or other pattern is often very interesting and frequently has applications to the real world. The intersection graph of a family of intervals on a linearly ordered set like the real line is called an interval graph. If these intervals are required to have unit length, then we have a unit interval graph; a proper interval graph is constructed from a family of intervals on a line such that no interval properly contains another. Roberts [a] showed that the classes of unit interval graphs and proper interval graphs coincide.
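The defining condition of a proper interval representation, that no interval properly contains another, is easy to verify for a concrete family; this small check is our own illustration, not an algorithm from the book:

```python
def is_proper(intervals):
    """True iff no interval (a, b) in the family properly contains
    another (c, d), i.e. a <= c and d <= b with the two distinct."""
    return not any(a <= c and d <= b and (a, b) != (c, d)
                   for a, b in intervals for c, d in intervals)

assert is_proper([(0, 2), (1, 3), (2, 4)])   # a "staircase" family
assert not is_proper([(0, 5), (1, 2)])       # (0, 5) contains (1, 2)
```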
Interval graphs are discussed in Section 1. Consider the following relaxation of the notion of intervals on a line. If we join the two ends of our line, thus forming a circle, the intervals will become arcs on the circle. Allowing arcs to slip over and include the point of connection, we obtain a class of intersection graphs called the circular-arc graphs, which properly contains the interval graphs. Circular-arc graphs have been extensively studied by A.
Tucker and others. We will survey these results in Section 8. There are a number of interesting applications of circular-arc graphs, including computer storage allocation and the phasing of traffic lights. Let us look at an example of the latter application.
The traffic flow at the corner of Holly, Vood, and Wine is pictured in Figure 1. Each lane will be assigned an arc on a circle representing the time interval during which it has a green light. Incompatible lanes must be assigned disjoint arcs. The circle may be regarded as a clock representing an entire cycle which will be continually repeated.
An arc assignment for our example is given in Figure 1. In general, if G is the intersection graph of the arcs of such an assignment (see Figure 1. ). Additional aspects of this problem, such as how to choose an arc assignment which minimizes waiting time, can also be incorporated into the model. The reader is referred to Stoffers  and Roberts [, pp. ]. A proper circular-arc graph is the intersection graph of a family of arcs none of which properly contains another. It can be shown (Theorem 8. ). Holly Street. The clock cycle. In a different generalization of interval graphs, Renz  characterized the intersection graphs of paths in a tree, and Gavril  gives a recognition algorithm for them.
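The traffic-light model can be prototyped with circular arcs on a discretized clock. Everything below is our own illustrative convention: the clock length, and the encoding of an arc as a half-open [start, end) range of time units that may wrap past the zero point.

```python
def points(arc, cycle):
    """Set of integer time units covered by an arc (start, end) on a
    clock of length `cycle`; the arc may wrap past the zero point."""
    start, end = arc
    if start <= end:
        return set(range(start, end))
    return set(range(start, cycle)) | set(range(0, end))

def arcs_overlap(a, b, cycle=60):
    """Two lanes conflict iff their green-light arcs share a time unit."""
    return bool(points(a, cycle) & points(b, cycle))

# A wrapping arc (55, 10) meets (5, 20) but misses (20, 40).
assert arcs_overlap((55, 10), (5, 20))
assert not arcs_overlap((55, 10), (20, 40))
```

Incompatible lanes must then be assigned arcs for which `arcs_overlap` is false, exactly the disjointness condition described above.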
Walter , Buneman , and Gavril  carried this idea further and showed that the intersection graphs of subtrees of a tree are exactly the triangulated graphs of Chapter 4. All of this is summarized in Figure 1. A permutation diagram consists of n points on each of two parallel lines and n straight line segments matching the points. The intersection graph of the line segments is called a permutation graph.
These graphs will be discussed later. Figure 1. G, the circular-arc graph. If the 2n points are located randomly around a circle, then the matching segments will be chords of the circle and the resulting class of intersection graphs, studied in Chapter 11, properly contains the permutation graphs. A simple argument shows that every proper circular-arc graph is also the graph of intersecting chords of a circle: We may assume that no pair of arcs together covers the entire circle (Theorem 8. ).
For each arc on the circle, draw the chord connecting its two endpoints. Clearly, two arcs overlap if and only if their corresponding chords intersect. There are many other interesting classes of intersection graphs. We have introduced you to only some of them, specifically those classes which will be developed further in the text. To the reader who wishes to investigate other intersection graphs we offer the following references: Cubes and boxes in n-space: Danzer and Grunbaum , Roberts [b], Wegner . Convex sets in n-space: Ogden and Roberts .

Interval Graphs—A Sneak Preview
We also hope to imbue the reader with a sense of how the subject matter is relevant to applied mathematics and computer science. It is unimportant whether we use open intervals or closed intervals; the resulting class of graphs will be the same. An interval representation of the windmill graph is given in Figure 1. Let us discuss one application of interval graphs. Many other such applications will be presented in Section 8. We would like to assign courses to classrooms so that no two courses meet in the same room at the same time.
Each color corresponds to a different classroom. The graph G is obviously an interval graph, since it is represented by time intervals. An interval graph (the windmill graph at left) and an interval representation for it. This example is especially interesting because efficient, linear-time algorithms are known for coloring interval graphs with a minimum number of colors. The minimum coloring problem is NP-complete for general graphs (Section 2. ). We will discuss these algorithms in subsequent chapters. The determination of whether a given graph is an interval graph can also be carried out in linear time (Section 8. ).
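Although the book develops these coloring algorithms in later chapters, the classroom application can already be sketched: sort the courses by starting time and always reuse the room that frees up earliest. The function below is our hedged illustration of this greedy idea (it treats distinct course intervals as half-open, so a course may start exactly when another ends):

```python
import heapq

def assign_rooms(intervals):
    """Greedy room assignment for distinct time intervals (courses).

    Sorting by start time and reusing the room that frees up earliest
    uses the minimum number of rooms for an interval graph: its
    clique number, i.e. the largest number of pairwise overlapping
    courses.
    """
    rooms = []            # heap of (finish_time, room_number)
    assignment = {}
    next_room = 0
    for start, end in sorted(intervals):
        if rooms and rooms[0][0] <= start:
            _, room = heapq.heappop(rooms)   # reuse a freed room
        else:
            room = next_room                 # open a new room
            next_room += 1
        assignment[(start, end)] = room
        heapq.heappush(rooms, (end, room))
    return assignment

courses = [(9, 11), (10, 12), (11, 13), (12, 14)]
assert len(set(assign_rooms(courses).values())) == 2
```

At most two of these courses overlap at any moment, so two rooms suffice, and the greedy procedure finds such an assignment.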
We have chosen interval graphs as an introduction to our studies because they satisfy so many interesting properties. The first fact that we notice is that being an interval graph is a hereditary property. Proposition 1. An induced subgraph of an interval graph is an interval graph. Some of our favorites include planarity, bipartiteness, and any "forbidden subgraph" characterization.
The next property of interval graphs is also a hereditary property. Triangulated graph property. Every simple cycle of length strictly greater than 3 possesses a chord. Graphs which satisfy this property are called triangulated graphs. The graph in Figure 1. An interval graph satisfies the triangulated graph property.
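For a single given cycle, the triangulated graph property is straightforward to test: a chord is an edge joining two nonconsecutive cycle vertices. The helper below is our own sketch, not an algorithm from the book (efficient recognition of triangulated graphs is the subject of Chapter 4):

```python
def has_chord(cycle, edges):
    """True iff some edge joins two nonconsecutive vertices of the
    cycle; the wrap-around pair cycle[0], cycle[-1] is consecutive."""
    n = len(cycle)
    edgeset = {frozenset(e) for e in edges}
    return any(frozenset((cycle[i], cycle[j])) in edgeset
               for i in range(n) for j in range(i + 2, n)
               if not (i == 0 and j == n - 1))

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert not has_chord([0, 1, 2, 3], square)          # C4 has no chord
assert has_chord([0, 1, 2, 3], square + [(0, 2)])   # a chord triangulates it
```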
A graph which is not triangulated: the house graph. A triangulated graph which is not an interval graph. Not every triangulated graph is an interval graph. Consider the tree T given in Figure 1. Clearly we would be stuck. So there must be more to the story of interval graphs than we have told so far. Transitive orientation property. Each edge can be assigned a one-way direction in such a way that the resulting oriented graph (V, F) satisfies the following condition: ab ∈ F and bc ∈ F imply ac ∈ F, for all a, b, c ∈ V.
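Whether a particular orientation F is transitive can be checked directly from this condition. The sketch below, our own illustration, encodes arcs as ordered pairs:

```python
from itertools import product

def is_transitive(arcs):
    """Check the transitivity condition on a set of directed arcs:
    (a, b) and (b, c) present imply (a, c) present."""
    arcset = set(arcs)
    return all((a, c) in arcset
               for (a, b), (b2, c) in product(arcset, repeat=2)
               if b == b2 and a != c)

# Orienting the path a-b-c "toward b" is (vacuously) transitive;
# orienting it a -> b -> c is not unless a -> c is also present.
assert is_transitive([("a", "b"), ("c", "b")])
assert not is_transitive([("a", "b"), ("b", "c")])
assert is_transitive([("a", "b"), ("b", "c"), ("a", "c")])
```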
The odd-length chordless cycles C5, C7, C9, ... The complement of an interval graph satisfies the transitive orientation property. Transitive orientations of two comparability graphs. The bull's head graph. Two graphs which are not transitively orientable. For these graphs, their chromatic number equals their clique number. This is not an accident. In Chapters 4 and 5 we will show that any triangulated graph and any comparability graph also satisfies the following properties. χ-Perfect property.
This equivalence was originally conjectured by Claude Berge, and it was proved some ten years later by Laszlo Lovasz. Summary The reader has been introduced to the graph theoretic foundations needed for the remainder of the book. In addition, he has had a taste of some of the particular notions that we intend to investigate further. Returning to the table of contents at this point, he will recognize many of the topics listed.
The chapter dependencies are given in Figure 1. The chapter dependencies. The reader may wish to read Chapters 1 and 2 quickly and refer back to them as needed. Graph Theoretic Foundations In the next chapter we will present the foundations of algorithmic design and analysis. As was the case in this chapter, many examples will be given which will introduce the reader to the ideas and techniques that he will find helpful in subsequent chapters. Show that the graphs in Figures 1.
Can you find graphs for each zone of the Venn diagram in Figure 1. Consider a family of intervals on a line such that no interval contains another. Show that none of the left endpoints coincide. Let G be the intersection graph of a family of paths in a tree and let v be a vertex of G. Prove directly using only the definition that the graph in Figure 1.
Give an interval representation for the graph in Figure 1. Show that it is not a comparability graph. Why is this not in conflict with the Gilmore-Hoffman theorem? Give a graph theoretic solution to the following problem: A group of calculus teaching assistants each gives two office hours weekly which are chosen in advance. Because of budgetary reasons, the TAs must share offices. Since each office has only one blackboard, how can office space be assigned so that at any particular time no more than one TA is meeting with students?
Give an example to show that the graph you obtain in Exercise 8 is not necessarily an interval graph. How could we alter the problem so that we would obtain only interval graphs? Is the bull's head graph Figure 1. Is the complement of the suspension bridge graph Figure 1.
What is a good name for this last graph? An undirected graph is self-complementary if it is isomorphic to its complement. Show that there are exactly two self-complementary graphs having five vertices. How many are there for four vertices? Six vertices? A representation is minimum if the set S is of smallest possible cardinality over all representations of G. A star 7-gon. The Berge mystery story: Six professors had been to the library on the day that the rare tractate was stolen. Each had entered once, stayed for some time, and then left.
If two were in the library at the same time, then at least one of them saw the other. One of the professors lied!! Who was it?

Research Problem.

Bibliography

Alter, R., Discrete Math.
Fulkerson, ed., Studies in Mathematics Vol. , Amsterdam. MR50.
Buneman, Peter , A characterization of rigid circuit graphs. MR53.
Danzer, L.
Erdos, P. MR32.
Gavril, Fanica [ ], The intersection graphs of subtrees in trees are exactly the chordal graphs, J. Combinatorial Theory B 16. MR48.
Ghouila-Houri, Alain , Caractérisation des graphes non orientés dont on peut orienter les arêtes de manière à obtenir le graphe d'une relation d'ordre, C. R. Acad. Sci. Paris. MR30. MR31.
Hajos, G. (First posed the problem of characterizing interval graphs.)
Marczewski, E.
Ogden, W., in Guy, H. Hanani, N. Sauer, and J. Schonheim, eds., Gordon and Breach, New York.
Renz, P., Pacific J. MR42.
Roberts, Fred S., in Harary, ed., Academic Press, New York. MR40. Tutte, ed.
Stoffers, K.
Transportation Res.
Walter, J.
Wang, D.
Wegner, G.

The Complexity of Computer Algorithms

With the advent of the high-speed electronic computer, new branches of applied mathematics have sprouted forth. One area that has enjoyed a most rapid growth in the past decade is the complexity analysis of computer algorithms. At one level, we may wish to compare the relative efficiencies of procedures which solve the same problem.
At a second level, we can ask whether one problem is intrinsically harder to solve than another problem. It may even turn out that a task is too hard for a computer to solve within a reasonable amount of time. Measuring the costs to be incurred by implementing various algorithms is a vital necessity in computer science, but it can be a formidable challenge. Let us reflect for a moment on the differences between computability and computational complexity. These two topics, along with formal languages, become the pillars of the theory of computation.
Computability addresses itself mostly to questions of existence: Is there an algorithm which solves problem Π? An early surprise for many math and computer science students is that one can prove mathematically that computers cannot do everything. A standard example is the unsolvability of the halting problem. Loosely stated, this says that it is impossible for a professor to write a computer program which will accept as data any student's programming assignment and will return either the answer "yes, this student's program will halt within finite time" or "no, this student's program has an infinite loop and will run forever."
Solvability is typically established by demonstrating an actual algorithm which will terminate with a correct answer for every input. The amount of resources (time and space) used in the calculation, although finite, is unlimited. Thus, computability gives us an understanding of the capabilities and limitations of the machines that mankind can build, but without regard to resource restrictions. In contrast to this, computational complexity deals precisely with the quantitative aspects of problem solving. It addresses the issue of what can be computed within a practical or reasonable amount of time and space by measuring the resource requirements exactly or by obtaining upper and lower bounds.
Complexity is actually determined on three levels: the problem, the algorithm, and the implementation. Naturally, we want the best algorithm which solves our problem, and we want to choose the best implementation of that algorithm. A problem consists of a question to be answered, a requirement to be fulfilled, or a best possible situation or structure to be found, called a solution, usually in response to several input parameters or variables, which are described but whose values are left unspecified.
A decision problem is one which requires a simple "yes" or "no" answer. An instance of a problem Π is a specification of particular values for its parameters. An algorithm for Π is a step-by-step procedure which when applied to any instance of Π produces a solution. Usually we can rewrite an optimization problem as a decision problem which at first seems to be much easier to solve than the original but turns out to be just about as hard. Consider the following two versions of the graph coloring problem.
Question: What is the smallest number of colors needed for a proper coloring of G? The optimization version can be solved by applying an algorithm for the decision version n times for an n-vertex graph. If the n decision problems are solved sequentially, then the time needed to solve the optimization version is larger than that for the decision version by at most a factor of n. However, if they can be solved simultaneously (in parallel), then the time needed for both versions is essentially the same. It is customary to express complexity as a function of the size of the input.
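The n-fold reduction from the optimization version to the decision version can be made concrete. In the sketch below a brute-force test stands in for the decision oracle; the function names and encoding are ours:

```python
from itertools import product

def colorable(edges, n, c):
    """Decision version: is the n-vertex graph c-colorable?
    (Brute force over all colorings stands in for an oracle.)"""
    return any(all(col[u] != col[v] for u, v in edges)
               for col in product(range(c), repeat=n))

def chromatic_number(edges, n):
    """Optimization version, via at most n calls to the decision
    version: ask c = 1, 2, ... until the answer is "yes"."""
    for c in range(1, n + 1):
        if colorable(edges, n, c):
            return c

triangle = [(0, 1), (1, 2), (0, 2)]
assert chromatic_number(triangle, 3) == 3
```

Since the answers to the decision questions are monotone in c, a binary search would cut the number of oracle calls from n to about log n.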
Thus, demonstrating and analyzing the complexity of a particular algorithm for Π provides us with an upper bound on the complexity of Π. Consider the example of testing a graph for planarity. A graph is planar if it can be drawn on the plane or on the surface of a sphere such that no two edges cross one another. Auslander and Parter  gave a planar embedding procedure, which Goldstein  was able to formulate in such a way that halting was guaranteed.
Hopcroft and Tarjan [, ] then improved the Auslander-Parter method first to O(n log n) and finally to O(n), which is the best possible. Booth and Lueker showed that the Lempel-Even-Cederbaum method could also be implemented to run in O(n) time. Table 2. Tarjan  summarizes the progress on a number of other problems. Determining the complexity of a problem Π requires a two-sided attack: (1) The upper bound—the minimum complexity over all known algorithms solving Π.
A gap between (1) and (2) tells us how much more research is needed to achieve this goal. One may also formulate complexity according to the average case. A good discussion of the pros and cons of average-case analysis can be found in Weide [, Section 4]. Table 2. An example of this is the problem of matrix multiplication.
In Strassen  an algorithm is presented for multiplying a pair of 2 x 2 matrices using only seven scalar multiplications. It is now known that seven multiplications is the best possible. The best algorithm known for the case of 3 x 3 matrices is given by Laderman ; it uses 23 scalar multiplications. Schachtel has an algorithm using  multiplications, an improvement of one given by O. Sykora.

Success causes all copies of the algorithm to stop execution and indicates a "yes" answer to that instance of the problem.
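Strassen's seven products for the 2 x 2 case are short enough to state in full; the identities below are the standard ones.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    (Strassen's identities) instead of the naive 8."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    p1 = (a11 + a22) * (b11 + b22)
    p2 = (a21 + a22) * b11
    p3 = a11 * (b12 - b22)
    p4 = a22 * (b21 - b11)
    p5 = (a11 + a12) * b22
    p6 = (a21 - a11) * (b11 + b12)
    p7 = (a12 - a22) * (b21 + b22)
    return [[p1 + p4 - p5 + p7, p3 + p5],
            [p2 + p4, p1 - p2 + p3 + p6]]

assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

Applied recursively to block matrices, this recurrence is what lowers the exponent of matrix multiplication below 3.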
If success is reached in one of the copies, then the final value of A in that copy is a clique of size at least k. Using the above procedure we obtain a nondeterministic polynomial-time algorithm for the optimization version of the CLIQUE problem as follows: Let G be an undirected graph with n vertices. An important open question in the theory of computation is whether the containment of P in NP is proper. The NP-complete problems are the most difficult of those in the "zone of uncertainty." Emphasizing the significance of polynomial-time reducibility, he focused attention on NP decision problems.
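A deterministic (exponential-time) stand-in for the guess-and-verify CLIQUE procedure simply tries every candidate vertex set; the sketch below is our own:

```python
from itertools import combinations

def has_clique(edges, vertices, k):
    """CLIQUE decision problem: does the graph contain k mutually
    adjacent vertices? Exhaustive search over all k-subsets."""
    edgeset = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in edgeset for p in combinations(S, 2))
               for S in combinations(vertices, k))

triangle = [(0, 1), (1, 2), (0, 2)]
assert has_clique(triangle, [0, 1, 2, 3], 3)
assert not has_clique(triangle, [0, 1, 2, 3], 4)
```

The nondeterministic version guesses the k-subset in a single step and verifies it in polynomial time, which is what places CLIQUE in NP.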
Figure 2. The hierarchy of complexities. The big open question is whether or not the "zone of uncertainty," NP − P, is empty. Cook showed that the satisfiability problem of mathematical logic is NP-complete (Cook's theorem), and he suggested other problems which might be NP-complete. Karp presented a large collection of NP-complete problems, about two dozen, arising from combinatorics, logic, set theory, and other areas of discrete mathematics.
In the next few years, hundreds of problems were shown to be NP-complete. Next, repeat the following sequence of instructions a few hundred times: find a candidate Π which might be NP-complete; select an appropriate Π′ from the bag of NP-complete problems; add Π to the bag. An amount of cleverness is needed in selecting Π′ and finding a transformation from Π′ to Π. By way of illustration we will demonstrate such a reduction in the proof of the next theorem.
For a more complete treatment of Cook's theorem and the reductions following from it, see (in increasing level of scope) Reingold, Nievergelt, and Deo; Aho, Hopcroft, and Ullman; and Garey and Johnson. To illustrate the technique of reduction, we present the following result. Theorem 2. The idea of our proof will be to construct from G a certain triangle-free graph H with the property that knowing α(H) will immediately give us α(G).
Subdivide each edge of G into a path of length 3; call the resulting graph H. Next we construct H′ from H as follows: the vertices of H′ correspond to the edges of H, and we connect two vertices of H′ if their corresponding edges in H do not share a common vertex. Thus, α(G) can be determined from χ(H′). In this book we will consider this situation for various families of perfect graphs and some not so perfect graphs.
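Both constructions in the proof are purely mechanical. A sketch, assuming G is given by a vertex list and an edge list (the labels and helper names are ours, not the text's):

```python
from itertools import combinations, count

def subdivide_twice(vertices, edges):
    """Replace each edge uv of G by a path u-a-b-v of length 3,
    producing the triangle-free graph H (as an edge list)."""
    fresh = count(max(vertices) + 1)  # generator of new vertex labels
    h_edges = []
    for u, v in edges:
        a, b = next(fresh), next(fresh)
        h_edges += [(u, a), (a, b), (b, v)]
    return h_edges

def disjointness_graph(h_edges):
    """H': one vertex per edge of H, two vertices adjacent exactly
    when their edges share no endpoint."""
    return [(e, f) for e, f in combinations(h_edges, 2)
            if not set(e) & set(f)]

# Illustrative: G is a single triangle.
G_vertices = [0, 1, 2]
G_edges = [(0, 1), (1, 2), (0, 2)]
H_edges = subdivide_twice(G_vertices, G_edges)
Hp_edges = disjointness_graph(H_edges)
```

Each original edge contributes three edges and two new vertices to H, which is why the parameters of H, and hence of H′, stay polynomial in the size of G.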
A more perplexing topic currently under investigation by many complexity theorists is that of finding and understanding the cause of the boundary between the tractability and intractability of various problems. One final note: our definition of complexity suppressed one fundamental point.
An implementation of an algorithm is always taken relative to some specified type of machine. As an underlying assumption throughout this book we will take the random access machine (RAM), introduced by Cook and Reckhow, as our model of computation. The RAM is an abstraction of a general-purpose digital computer in which each storage cell has a unique address, allowing it to perform in one computational step an access to any cell, an arithmetic or Boolean operation, or a comparison.
A computation is performed sequentially by a RAM, one step at a time. This presents no difficulty, however, since any RAM can be simulated on a deterministic Turing machine with only a polynomial increase in running time.

Summary

Besides providing a basis for comparing algorithms which solve the same problem, algorithmic analysis has other practical uses. Most importantly, it affords us the opportunity to know, in advance of the computation, an estimate or a bound on the storage and running time requirements.
Such advance knowledge would be essential when designing a computer system for a manned spacecraft, in which the ability to calculate trajectories and fire the guidance rockets appropriately within tight constraints had better be guaranteed. Even in less urgent situations, having advance estimates allows a programmer to set job card limits to abort those runs which exceed the expected bounds, and hence probably contain errors, and to avoid aborting correct programs.
Also, such estimates are needed by the person who must decide whether or not it is worthwhile spending the necessary funds on computer time to carry out a certain very large computation.

Data Structures

As the name suggests, data structures provide a systematic framework in which the variables being processed (both input and internal) can be organized. Data structures are really mathematical objects, but we will usually refer to their computer implementations by the same names.
The most familiar data structure is the array, which is used in conjunction with subscripted variables. A 0-dimensional array is a single variable or storage location. A d-dimensional array can be defined recursively as a finite sequence of (d − 1)-dimensional arrays, all of the same size. A vector is usually stored as a 1-dimensional array and a matrix as a 2-dimensional array. It is generally accepted that the entries of an array must be homogeneous, i.e., all of the same type.
The main feature of an array is its indexing capability: the subscripts uniquely determine the location of each data item. The entries of an array are stored consecutively, and an addressing scheme using multipliers allows access to any entry in a constant amount of time, independent of the size of the array, on a random access machine. For those unfamiliar with the use of multipliers, the technique will be illustrated for an m1 × m2 matrix A whose entries each occupy s storage locations; the space used by each row of A then equals m2 · s. This idea easily extends to d-dimensional arrays (Exercise). A list is a data structure which consists of homogeneous records linked together in a linear fashion.
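Returning to the multiplier scheme for arrays: it amounts to one multiplication per dimension. A minimal sketch of the two-dimensional, row-major case (the function name and parameters are ours):

```python
def address(base, s, m2, i, j):
    """Storage address of entry A[i][j] (0-indexed) of an m1 x m2
    matrix stored row by row starting at `base`, each entry occupying
    s cells. The multiplier m2 * s is the space used by one row."""
    return base + i * (m2 * s) + j * s
```

For a 3 x 4 matrix of one-cell entries starting at address 1000, entry A[2][3] sits at 1000 + 2·4 + 3 = 1011; the computation takes constant time regardless of the array's size.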
Each record will contain one or more fields of data and one or more fields of pointers (Figure 2). Unlike an array, in which the entries are stored consecutively, the records of a list may be scattered throughout storage; the pointers maintain law and order. This allows the flexibility of changing the size of the data structure, inserting and deleting items, by simply changing the values of a few pointers rather than shifting large blocks of data. An implementation of our examples is given in Figure 2; it uses two arrays and two single variables. The symbol Λ is a special symbol indicating an undefined pointer.
Scanning takes time proportional to the length of the list.
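A list held in two parallel arrays with a head variable, in the spirit of the implementation described above (the record numbering and data values are ours; -1 stands in for the undefined-pointer symbol Λ):

```python
NIL = -1  # plays the role of the undefined-pointer symbol

DATA = ["b", "d", "a", "c"]
NEXT = [3, NIL, 0, 1]   # record 2 -> 0 -> 3 -> 1, i.e. a, b, c, d
HEAD = 2                # the list starts at record 2 ("a")

def scan(head):
    """Traverse the list; time is proportional to its length."""
    out, i = [], head
    while i != NIL:
        out.append(DATA[i])
        i = NEXT[i]
    return out

def insert_after(i, j):
    """Splice record j in after record i by changing two pointers,
    with no shifting of data."""
    NEXT[j] = NEXT[i]
    NEXT[i] = j
```

Note that the logical order (a, b, c, d) has nothing to do with the physical order of the records in DATA; only the pointers in NEXT matter.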
Two special types of lists should be mentioned here because of their usefulness in computer science. A stack is a list in which we are only permitted to insert and delete elements at one end, called the top of the stack. A queue is a list in which we are only permitted to insert at one end, called the tail of the queue, and delete from the other end, called the head of the queue.
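In Python terms the two disciplines look as follows (a sketch: the built-in list serves as a stack, and collections.deque as a queue):

```python
from collections import deque

# A stack: insertions and deletions at one end only, the top.
stack = []
stack.append(1); stack.append(2); stack.append(3)
top = stack.pop()            # removes 3: last in, first out

# A queue: insert at the tail, delete from the head.
queue = deque()
queue.append(1); queue.append(2); queue.append(3)
head = queue.popleft()       # removes 1: first in, first out
```

Both restricted disciplines give constant-time insertion and deletion, which is why stacks and queues recur throughout the search algorithms later in the chapter.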
By definition, the main diagonal of M is all zeros, and M is symmetric about the main diagonal if and only if G is an undirected graph. A graph whose edges are weighted can be represented in the same fashion. Some of the performance figures above can be improved upon when the density of M is low. The adjacency lists are not necessarily sorted, although one might wish them to be; two implementations are given in Figure 2. Often, it is also advantageous from time considerations to store a sparse graph using adjacency lists.
Similarly, "mark each edge" takes O(e) steps using adjacency lists, a substantial saving over the adjacency matrix for a sparse graph. However, erasing an edge is more complex with lists than with the matrix (see Table 2). Thus there is no representation of a graph that is best for all operations and processes. Since the selection of a particular data structure can noticeably affect the speed and efficiency of an algorithm, decisions about the representation must incorporate a knowledge of the algorithms to be applied. Conversely, the choice of an algorithm may depend on how the data is initially given.
For example, an algorithm to set up the adjacency lists of a sparse graph will take longer if we are initially given its adjacency matrix as an n × n array rather than as a collection of ordered pairs representing the edges. Linear time, O(n + e), is usually the best that one could expect for a graph problem. By a careful choice of algorithm and data structure a number of simple problems can be solved in linear time; these include testing for connectivity (Section 2.3), biconnectivity (Exercise 5), and planarity (Table 2).
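Setting up adjacency lists from a collection of ordered pairs is a single O(n + e) pass; a sketch (the names are ours):

```python
def adjacency_lists(n, edge_pairs):
    """Build the adjacency lists of an undirected graph on vertices
    0..n-1 from a collection of ordered pairs, in O(n + e) time."""
    adj = [[] for _ in range(n)]
    for u, v in edge_pairs:
        adj[u].append(v)   # each edge is recorded at both endpoints
        adj[v].append(u)
    return adj
```

Starting from an n × n adjacency matrix instead would force Ω(n²) work just to read the input, which is the point of the example above.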
We will illustrate this on the problem of converting the adjacency lists of a graph into sorted adjacency lists. It is by now a well-known fact that any algorithm which correctly sorts a set of k numbers using comparisons will require on the order of k log k comparisons, both in the worst case and in the average case. Happily, there is yet another method for ordering the adjacency lists, which turns out to be linear.
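The linear method can be sketched as a single pass that builds all the sorted lists simultaneously: scanning the vertices v in increasing order and appending v to the new list of each of its neighbors guarantees that every list is produced in sorted order. This is an illustrative reconstruction of the idea, not the book's Algorithm 2 verbatim:

```python
def sorted_adjacency(adj):
    """Sort all adjacency lists of a graph on vertices 0..n-1
    simultaneously, in O(n + e) total time. Each value appended to
    sorted_adj[u] is larger than everything already in that list."""
    n = len(adj)
    sorted_adj = [[] for _ in range(n)]
    for v in range(n):          # vertices in increasing order
        for u in adj[v]:
            sorted_adj[u].append(v)
    return sorted_adj
```

No comparisons between list entries are ever made, so the comparison-sorting lower bound simply does not apply.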
It is conceptually very simple and differs from the above in that the lists SortedAdj(v) are all constructed simultaneously (Algorithm 2: sorting the adjacency lists of a graph). Concatenation is independent of the length of a list provided that a pointer variable is used to remember the address of the end of the list. Thus line 4 takes O(1) steps, and the loop takes O(d_i) steps. The usual implementation of adjacency sets as linked lists is illustrated in Figure 2. There is an alternate way of storing the adjacency sets when no inserting or deleting is anticipated: under these circumstances, sequential storage can be used to eliminate the links that were present in the list representation and thus save space.

How to Explore a Graph

In designing algorithms we frequently require a mechanism for exploring the vertices and edges of a graph. Having the adjacency sets at hand allows us to repeatedly pass from a vertex to one of its neighbors and thus "walk" through the graph.
Typically, in the midst of such a searching algorithm, some of the vertices will have been visited, the remainder not yet visited. A decision will have to be made as to which vertex x is being visited next. Since, in general, there will be many eligible candidates for x, we may want to establish some sort of priority among them. Two criteria of priority which prove to be especially useful in exploring a graph are discussed in this section. In both methods each edge is traversed exactly once in the forward and reverse directions and each vertex is visited. By examining a graph in such a structured way, some algorithms become easier to understand and faster to execute.
The choice of which method to use will often affect the efficiency of the algorithm. Thus, simply selecting a clever data structure is not sufficient to ensure a good implementation; a carefully chosen search technique is also needed.

Depth-First Search

In DFS we select and visit a vertex a, then visit a vertex b adjacent to a, continuing with a vertex c adjacent to b (but different from a), followed by an "unvisited" d adjacent to c, and so forth.
As we go deeper and deeper into the graph, we will eventually visit a vertex y with no unvisited neighbors; when this happens, we return to the vertex x immediately preceding y in the search and revisit x. Note that if G is a connected undirected graph, then every vertex will eventually be visited and every edge explored.
If G is not connected, then such a search is carried out for each connected component of G. The edge xy is placed into T if vertex y was visited for the first time immediately following a visit to x. In this case x is called the father of y and y is the son of x. (The origin of this male-dominated nomenclature appears to be biblical.) The edges in T are called tree edges.
If G is connected, then (V, T) is called a depth-first spanning tree. We consider each tree of the depth-first spanning forest to be rooted at the vertex at which the DFS of that tree was begun. An algorithm for depth-first search is given below.
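A recursive sketch of depth-first search in the terms just introduced, recording the father of each vertex and the order of first visits (a standard formulation, not necessarily the book's exact pseudocode):

```python
def depth_first_search(adj):
    """DFS over all components of a graph on vertices 0..n-1.
    Returns the father of each vertex in the depth-first spanning
    forest (roots have father None) and the visit order."""
    n = len(adj)
    father = [None] * n
    visited = [False] * n
    order = []

    def dfs(x):
        visited[x] = True
        order.append(x)
        for y in adj[x]:
            if not visited[y]:
                father[y] = x       # xy becomes a tree edge
                dfs(y)

    for v in range(n):
        if not visited[v]:
            dfs(v)                  # one tree per connected component
    return father, order

# Illustrative: a path 0-1-2 plus an isolated vertex 3.
father, order = depth_first_search([[1], [0, 2], [1], []])
```

Each edge is examined once from each endpoint and each vertex is visited once, so the whole search takes O(n + e) steps.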