Data Structures And Algorithm Analysis

The questions are in the file; there are just two, but they need to be done by 10pm today. The book is uploaded.

Thanks for helping.

1. In the probability review in class, we showed an easier way to compute the Expected value (average) of the sum of two dice by using the Linearity of Expectations. (a) Show how you would use the Linearity of Expectations property to compute the Expected value (average) of the sum of 10 fair dice. (b) Do the same, but now compute the Expected value (average) of the sum of 10 biased dice in which the probability of obtaining the value 6 is 1 and the probabilities of all other values are 0. Show all your steps of using Expectations and Linearity of Expectations.

(10 points)
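As a refresher (not a substitute for the write-up the question asks for), here is the two-dice computation from class, sketched under the assumption of fair six-sided dice. For one fair die X, each value 1 through 6 has probability 1/6, so E[X] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 21/6 = 3.5. Linearity of expectations says E[X1 + X2] = E[X1] + E[X2] (no independence assumption is needed), so the expected sum of two fair dice is 3.5 + 3.5 = 7, with no need to enumerate all 36 outcomes. Parts (a) and (b) ask you to apply the same identity, E[X1 + ... + Xk] = E[X1] + ... + E[Xk], recomputing the single-die expectation from its probabilities in the biased case.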

2. In the randomized hiring problem, (a) What is the probability that the first candidate is hired?

(b) What is the probability that the fifth candidate is hired? Explain your answers.

(10 points)
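The following is an optional Monte Carlo sketch, not part of the assignment and not a replacement for the explanation the question asks for. It assumes the standard setup of the randomized hiring problem from CLRS Sections 5.1-5.2: candidates are interviewed in uniformly random order, and a candidate is hired exactly when he or she is better than every candidate interviewed so far. The function names and the choice of n = 10 candidates are illustrative, not given in the problem.

import random

def hire_positions(n):
    """One run of the randomized hiring process on a random ordering of
    candidate ranks 1..n; returns the 1-based interview positions at which
    a hire occurs (candidate strictly better than all seen so far)."""
    ranks = list(range(1, n + 1))
    random.shuffle(ranks)            # random interview order
    best_so_far = 0
    hired = set()
    for position, rank in enumerate(ranks, start=1):
        if rank > best_so_far:       # better than every previous candidate
            best_so_far = rank
            hired.add(position)
    return hired

def estimate_hire_probability(position, n=10, trials=100_000):
    """Estimate P(the candidate interviewed at 'position' is hired)."""
    hits = sum(position in hire_positions(n) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    print("estimated P(1st candidate hired):", estimate_hire_probability(1))
    print("estimated P(5th candidate hired):", estimate_hire_probability(5))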
Introduction to Algorithms Third Edition

Thomas H. Cormen Charles E. Leiserson Ronald L. Rivest Clifford Stein


The MIT Press Cambridge, Massachusetts London, England

© 2009 Massachusetts Institute of Technology. All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

For information about special quantity discounts, please email special_sales@mitpress.mit.edu.

This book was set in Times Roman and Mathtime Pro 2 by the authors.

Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Introduction to algorithms / Thomas H. Cormen . . . [et al.].—3rd ed. p. cm.

Includes bibliographical references and index. ISBN 978-0-262-03384-8 (hardcover : alk. paper)—ISBN 978-0-262-53305-8 (pbk. : alk. paper) 1. Computer programming. 2. Computer algorithms. I. Cormen, Thomas H.

QA76.6.I5858 2009 005.1—dc22

2009008593

10 9 8 7 6 5 4 3 2

Contents

Preface xiii

I Foundations

Introduction 3

1 The Role of Algorithms in Computing 5 1.1 Algorithms 5 1.2 Algorithms as a technology 11

2 Getting Started 16 2.1 Insertion sort 16 2.2 Analyzing algorithms 23 2.3 Designing algorithms 29

3 Growth of Functions 43 3.1 Asymptotic notation 43 3.2 Standard notations and common functions 53

4 Divide-and-Conquer 65 4.1 The maximum-subarray problem 68 4.2 Strassen’s algorithm for matrix multiplication 75 4.3 The substitution method for solving recurrences 83 4.4 The recursion-tree method for solving recurrences 88 4.5 The master method for solving recurrences 93

⋆ 4.6 Proof of the master theorem 97

5 Probabilistic Analysis and Randomized Algorithms 114 5.1 The hiring problem 114 5.2 Indicator random variables 118 5.3 Randomized algorithms 122

⋆ 5.4 Probabilistic analysis and further uses of indicator random variables 130


II Sorting and Order Statistics

Introduction 147

6 Heapsort 151 6.1 Heaps 151 6.2 Maintaining the heap property 154 6.3 Building a heap 156 6.4 The heapsort algorithm 159 6.5 Priority queues 162

7 Quicksort 170 7.1 Description of quicksort 170 7.2 Performance of quicksort 174 7.3 A randomized version of quicksort 179 7.4 Analysis of quicksort 180

8 Sorting in Linear Time 191 8.1 Lower bounds for sorting 191 8.2 Counting sort 194 8.3 Radix sort 197 8.4 Bucket sort 200

9 Medians and Order Statistics 213 9.1 Minimum and maximum 214 9.2 Selection in expected linear time 215 9.3 Selection in worst-case linear time 220

III Data Structures

Introduction 229

10 Elementary Data Structures 232 10.1 Stacks and queues 232 10.2 Linked lists 236 10.3 Implementing pointers and objects 241 10.4 Representing rooted trees 246

11 Hash Tables 253 11.1 Direct-address tables 254 11.2 Hash tables 256 11.3 Hash functions 262 11.4 Open addressing 269

⋆ 11.5 Perfect hashing 277


12 Binary Search Trees 286 12.1 What is a binary search tree? 286 12.2 Querying a binary search tree 289 12.3 Insertion and deletion 294

⋆ 12.4 Randomly built binary search trees 299

13 Red-Black Trees 308 13.1 Properties of red-black trees 308 13.2 Rotations 312 13.3 Insertion 315 13.4 Deletion 323

14 Augmenting Data Structures 339 14.1 Dynamic order statistics 339 14.2 How to augment a data structure 345 14.3 Interval trees 348

IV Advanced Design and Analysis Techniques

Introduction 357

15 Dynamic Programming 359 15.1 Rod cutting 360 15.2 Matrix-chain multiplication 370 15.3 Elements of dynamic programming 378 15.4 Longest common subsequence 390 15.5 Optimal binary search trees 397

16 Greedy Algorithms 414 16.1 An activity-selection problem 415 16.2 Elements of the greedy strategy 423 16.3 Huffman codes 428

⋆ 16.4 Matroids and greedy methods 437
⋆ 16.5 A task-scheduling problem as a matroid 443

17 Amortized Analysis 451 17.1 Aggregate analysis 452 17.2 The accounting method 456 17.3 The potential method 459 17.4 Dynamic tables 463


V Advanced Data Structures

Introduction 481

18 B-Trees 484 18.1 Definition of B-trees 488 18.2 Basic operations on B-trees 491 18.3 Deleting a key from a B-tree 499

19 Fibonacci Heaps 505 19.1 Structure of Fibonacci heaps 507 19.2 Mergeable-heap operations 510 19.3 Decreasing a key and deleting a node 518 19.4 Bounding the maximum degree 523

20 van Emde Boas Trees 531 20.1 Preliminary approaches 532 20.2 A recursive structure 536 20.3 The van Emde Boas tree 545

21 Data Structures for Disjoint Sets 561 21.1 Disjoint-set operations 561 21.2 Linked-list representation of disjoint sets 564 21.3 Disjoint-set forests 568

⋆ 21.4 Analysis of union by rank with path compression 573

VI Graph Algorithms

Introduction 587

22 Elementary Graph Algorithms 589 22.1 Representations of graphs 589 22.2 Breadth-first search 594 22.3 Depth-first search 603 22.4 Topological sort 612 22.5 Strongly connected components 615

23 Minimum Spanning Trees 624 23.1 Growing a minimum spanning tree 625 23.2 The algorithms of Kruskal and Prim 631


24 Single-Source Shortest Paths 643 24.1 The Bellman-Ford algorithm 651 24.2 Single-source shortest paths in directed acyclic graphs 655 24.3 Dijkstra’s algorithm 658 24.4 Difference constraints and shortest paths 664 24.5 Proofs of shortest-paths properties 671

25 All-Pairs Shortest Paths 684 25.1 Shortest paths and matrix multiplication 686 25.2 The Floyd-Warshall algorithm 693 25.3 Johnson’s algorithm for sparse graphs 700

26 Maximum Flow 708 26.1 Flow networks 709 26.2 The Ford-Fulkerson method 714 26.3 Maximum bipartite matching 732

⋆ 26.4 Push-relabel algorithms 736
⋆ 26.5 The relabel-to-front algorithm 748

VII Selected Topics

Introduction 769

27 Multithreaded Algorithms 772 27.1 The basics of dynamic multithreading 774 27.2 Multithreaded matrix multiplication 792 27.3 Multithreaded merge sort 797

28 Matrix Operations 813 28.1 Solving systems of linear equations 813 28.2 Inverting matrices 827 28.3 Symmetric positive-definite matrices and least-squares approximation 832

29 Linear Programming 843 29.1 Standard and slack forms 850 29.2 Formulating problems as linear programs 859 29.3 The simplex algorithm 864 29.4 Duality 879 29.5 The initial basic feasible solution 886


30 Polynomials and the FFT 898 30.1 Representing polynomials 900 30.2 The DFT and FFT 906 30.3 Efficient FFT implementations 915

31 Number-Theoretic Algorithms 926 31.1 Elementary number-theoretic notions 927 31.2 Greatest common divisor 933 31.3 Modular arithmetic 939 31.4 Solving modular linear equations 946 31.5 The Chinese remainder theorem 950 31.6 Powers of an element 954 31.7 The RSA public-key cryptosystem 958

⋆ 31.8 Primality testing 965
⋆ 31.9 Integer factorization 975

32 String Matching 985 32.1 The naive string-matching algorithm 988 32.2 The Rabin-Karp algorithm 990 32.3 String matching with finite automata 995

⋆ 32.4 The Knuth-Morris-Pratt algorithm 1002

33 Computational Geometry 1014 33.1 Line-segment properties 1015 33.2 Determining whether any pair of segments intersects 1021 33.3 Finding the convex hull 1029 33.4 Finding the closest pair of points 1039

34 NP-Completeness 1048 34.1 Polynomial time 1053 34.2 Polynomial-time verification 1061 34.3 NP-completeness and reducibility 1067 34.4 NP-completeness proofs 1078 34.5 NP-complete problems 1086

35 Approximation Algorithms 1106 35.1 The vertex-cover problem 1108 35.2 The traveling-salesman problem 1111 35.3 The set-covering problem 1117 35.4 Randomization and linear programming 1123 35.5 The subset-sum problem 1128


VIII Appendix: Mathematical Background

Introduction 1143

A Summations 1145 A.1 Summation formulas and properties 1145 A.2 Bounding summations 1149

B Sets, Etc. 1158 B.1 Sets 1158 B.2 Relations 1163 B.3 Functions 1166 B.4 Graphs 1168 B.5 Trees 1173

C Counting and Probability 1183 C.1 Counting 1183 C.2 Probability 1189 C.3 Discrete random variables 1196 C.4 The geometric and binomial distributions 1201

⋆ C.5 The tails of the binomial distribution 1208

D Matrices 1217 D.1 Matrices and matrix operations 1217 D.2 Basic matrix properties 1222

Bibliography 1231

Index 1251

Preface

Before there were computers, there were algorithms. But now that there are com- puters, there are even more algorithms, and algorithms lie at the heart of computing.

This book provides a comprehensive introduction to the modern study of com- puter algorithms. It presents many algorithms and covers them in considerable depth, yet makes their design and analysis accessible to all levels of readers. We have tried to keep explanations elementary without sacrificing depth of coverage or mathematical rigor.

Each chapter presents an algorithm, a design technique, an application area, or a related topic. Algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The book contains 244 figures—many with multiple parts—illustrating how the algorithms work. Since we emphasize efficiency as a design criterion, we include careful analyses of the running times of all our algorithms.

The text is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Because it discusses engineering issues in algorithm design, as well as mathematical aspects, it is equally well suited for self-study by technical professionals.

In this, the third edition, we have once again updated the entire book. The changes cover a broad spectrum, including new chapters, revised pseudocode, and a more active writing style.

To the teacher

We have designed this book to be both versatile and complete. You should find it useful for a variety of courses, from an undergraduate course in data structures up through a graduate course in algorithms. Because we have provided considerably more material than can fit in a typical one-term course, you can consider this book to be a “buffet” or “smorgasbord” from which you can pick and choose the material that best supports the course you wish to teach.


You should find it easy to organize your course around just the chapters you need. We have made chapters relatively self-contained, so that you need not worry about an unexpected and unnecessary dependence of one chapter on another. Each chapter presents the easier material first and the more difficult material later, with section boundaries marking natural stopping points. In an undergraduate course, you might use only the earlier sections from a chapter; in a graduate course, you might cover the entire chapter.

We have included 957 exercises and 158 problems. Each section ends with exer- cises, and each chapter ends with problems. The exercises are generally short ques- tions that test basic mastery of the material. Some are simple self-check thought exercises, whereas others are more substantial and are suitable as assigned home- work. The problems are more elaborate case studies that often introduce new ma- terial; they often consist of several questions that lead the student through the steps required to arrive at a solution.

Departing from our practice in previous editions of this book, we have made publicly available solutions to some, but by no means all, of the problems and ex- ercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to these solutions. You will want to check this site to make sure that it does not contain the solution to an exercise or problem that you plan to assign. We expect the set of solutions that we post to grow slowly over time, so you will need to check it each time you teach the course.

We have starred (⋆) the sections and exercises that are more suitable for graduate students than for undergraduates. A starred section is not necessarily more difficult than an unstarred one, but it may require an understanding of more advanced mathematics. Likewise, starred exercises may require an advanced background or more than average creativity.

To the student

We hope that this textbook provides you with an enjoyable introduction to the field of algorithms. We have attempted to make every algorithm accessible and interesting. To help you when you encounter unfamiliar or difficult algorithms, we describe each one in a step-by-step manner. We also provide careful explanations of the mathematics needed to understand the analysis of the algorithms. If you already have some familiarity with a topic, you will find the chapters organized so that you can skim introductory sections and proceed quickly to the more advanced material.

This is a large book, and your class will probably cover only a portion of its material. We have tried, however, to make this a book that will be useful to you now as a course textbook and also later in your career as a mathematical desk reference or an engineering handbook.


What are the prerequisites for reading this book?

• You should have some programming experience. In particular, you should understand recursive procedures and simple data structures such as arrays and linked lists.

• You should have some facility with mathematical proofs, and especially proofs by mathematical induction. A few portions of the book rely on some knowledge of elementary calculus. Beyond that, Parts I and VIII of this book teach you all the mathematical techniques you will need.

We have heard, loud and clear, the call to supply solutions to problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for a few of the problems and exercises. Feel free to check your solutions against ours. We ask, however, that you do not send your solutions to us.

To the professional

The wide range of topics in this book makes it an excellent handbook on algo- rithms. Because each chapter is relatively self-contained, you can focus in on the topics that most interest you.

Most of the algorithms we discuss have great practical utility. We therefore address implementation concerns and other engineering issues. We often provide practical alternatives to the few algorithms that are primarily of theoretical interest.

If you wish to implement any of the algorithms, you should find the transla- tion of our pseudocode into your favorite programming language to be a fairly straightforward task. We have designed the pseudocode to present each algorithm clearly and succinctly. Consequently, we do not address error-handling and other software-engineering issues that require specific assumptions about your program- ming environment. We attempt to present each algorithm simply and directly with- out allowing the idiosyncrasies of a particular programming language to obscure its essence.

We understand that if you are using this book outside of a course, then you might be unable to check your solutions to problems and exercises against solutions provided by an instructor. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for some of the problems and exercises so that you can check your work. Please do not send your solutions to us.

To our colleagues

We have supplied an extensive bibliography and pointers to the current literature. Each chapter ends with a set of chapter notes that give historical details and ref- erences. The chapter notes do not provide a complete reference to the whole field


of algorithms, however. Though it may be hard to believe for a book of this size, space constraints prevented us from including many interesting algorithms.

Despite myriad requests from students for solutions to problems and exercises, we have chosen as a matter of policy not to supply references for problems and exercises, to remove the temptation for students to look up a solution rather than to find it themselves.

Changes for the third edition

What has changed between the second and third editions of this book? The mag- nitude of the changes is on a par with the changes between the first and second editions. As we said about the second-edition changes, depending on how you look at it, the book changed either not much or quite a bit.

A quick look at the table of contents shows that most of the second-edition chap- ters and sections appear in the third edition. We removed two chapters and one section, but we have added three new chapters and two new sections apart from these new chapters.

We kept the hybrid organization from the first two editions. Rather than organiz- ing chapters by only problem domains or according only to techniques, this book has elements of both. It contains technique-based chapters on divide-and-conquer, dynamic programming, greedy algorithms, amortized analysis, NP-Completeness, and approximation algorithms. But it also has entire parts on sorting, on data structures for dynamic sets, and on algorithms for graph problems. We find that although you need to know how to apply techniques for designing and analyzing al- gorithms, problems seldom announce to you which techniques are most amenable to solving them.

Here is a summary of the most significant changes for the third edition:

• We added new chapters on van Emde Boas trees and multithreaded algorithms, and we have broken out material on matrix basics into its own appendix chapter.

• We revised the chapter on recurrences to more broadly cover the divide-and-conquer technique, and its first two sections apply divide-and-conquer to solve two problems. The second section of this chapter presents Strassen's algorithm for matrix multiplication, which we have moved from the chapter on matrix operations.

• We removed two chapters that were rarely taught: binomial heaps and sorting networks. One key idea in the sorting networks chapter, the 0-1 principle, appears in this edition within Problem 8-7 as the 0-1 sorting lemma for compare-exchange algorithms. The treatment of Fibonacci heaps no longer relies on binomial heaps as a precursor.

• We revised our treatment of dynamic programming and greedy algorithms. Dynamic programming now leads off with a more interesting problem, rod cutting, than the assembly-line scheduling problem from the second edition. Furthermore, we emphasize memoization a bit more than we did in the second edition, and we introduce the notion of the subproblem graph as a way to understand the running time of a dynamic-programming algorithm. In our opening example of greedy algorithms, the activity-selection problem, we get to the greedy algorithm more directly than we did in the second edition.

• The way we delete a node from binary search trees (which includes red-black trees) now guarantees that the node requested for deletion is the node that is actually deleted. In the first two editions, in certain cases, some other node would be deleted, with its contents moving into the node passed to the deletion procedure. With our new way to delete nodes, if other components of a program maintain pointers to nodes in the tree, they will not mistakenly end up with stale pointers to nodes that have been deleted.

• The material on flow networks now bases flows entirely on edges. This approach is more intuitive than the net flow used in the first two editions.

• With the material on matrix basics and Strassen's algorithm moved to other chapters, the chapter on matrix operations is smaller than in the second edition.

• We have modified our treatment of the Knuth-Morris-Pratt string-matching algorithm.

• We corrected several errors. Most of these errors were posted on our Web site of second-edition errata, but a few were not.

• Based on many requests, we changed the syntax (as it were) of our pseudocode. We now use "=" to indicate assignment and "==" to test for equality, just as C, C++, Java, and Python do. Likewise, we have eliminated the keywords do and then and adopted "//" as our comment-to-end-of-line symbol. We also now use dot-notation to indicate object attributes. Our pseudocode remains procedural, rather than object-oriented. In other words, rather than running methods on objects, we simply call procedures, passing objects as parameters.

• We added 100 new exercises and 28 new problems. We also updated many bibliography entries and added several new ones.

• Finally, we went through the entire book and rewrote sentences, paragraphs, and sections to make the writing clearer and more active.


Web site

You can use our Web site, http://mitpress.mit.edu/algorithms/, to obtain supple- mentary information and to communicate with us. The Web site links to a list of known errors, solutions to selected exercises and problems, and (of course) a list explaining the corny professor jokes, as well as other content that we might add. The Web site also tells you how to report errors or make suggestions.

How we produced this book

Like the second edition, the third edition was produced in LaTeX 2ε. We used the Times font with mathematics typeset using the MathTime Pro 2 fonts. We thank Michael Spivak from Publish or Perish, Inc., Lance Carnes from Personal TeX, Inc., and Tim Tregubov from Dartmouth College for technical support. As in the previous two editions, we compiled the index using Windex, a C program that we wrote, and the bibliography was produced with BibTeX. The PDF files for this book were created on a MacBook running OS 10.5.

We drew the illustrations for the third edition using MacDraw Pro, with some of the mathematical expressions in illustrations laid in with the psfrag package for LaTeX 2ε. Unfortunately, MacDraw Pro is legacy software, having not been marketed for over a decade now. Happily, we still have a couple of Macintoshes that can run the Classic environment under OS 10.4, and hence they can run MacDraw Pro—mostly. Even under the Classic environment, we find MacDraw Pro to be far easier to use than any other drawing software for the types of illustrations that accompany computer-science text, and it produces beautiful output.1 Who knows how long our pre-Intel Macs will continue to run, so if anyone from Apple is listening: Please create an OS X-compatible version of MacDraw Pro!

Acknowledgments for the third edition

We have been working with the MIT Press for over two decades now, and what a terrific relationship it has been! We thank Ellen Faran, Bob Prior, Ada Brunstein, and Mary Reilly for their help and support.

We were geographically distributed while producing the third edition, working in the Dartmouth College Department of Computer Science, the MIT Computer

1We investigated several drawing programs that run under Mac OS X, but all had significant short- comings compared with MacDraw Pro. We briefly attempted to produce the illustrations for this book with a different, well known drawing program. We found that it took at least five times as long to produce each illustration as it took with MacDraw Pro, and the resulting illustrations did not look as good. Hence the decision to revert to MacDraw Pro running on older Macintoshes.


Science and Artificial Intelligence Laboratory, and the Columbia University De- partment of Industrial Engineering and Operations Research. We thank our re- spective universities and colleagues for providing such supportive and stimulating environments.

Julie Sussman, P.P.A., once again bailed us out as the technical copyeditor. Time and again, we were amazed at the errors that eluded us, but that Julie caught. She also helped us improve our presentation in several places. If there is a Hall of Fame for technical copyeditors, Julie is a sure-fire, first-ballot inductee. She is nothing short of phenomenal. Thank you, thank you, thank you, Julie! Priya Natarajan also found some errors that we were able to correct before this book went to press. Any errors that remain (and undoubtedly, some do) are the responsibility of the authors (and probably were inserted after Julie read the material).

The treatment for van Emde Boas trees derives from Erik Demaine’s notes, which were in turn influenced by Michael Bender. We also incorporated ideas from Javed Aslam, Bradley Kuszmaul, and Hui Zha into this edition.

The chapter on multithreading was based on notes originally written jointly with Harald Prokop. The material was influenced by several others working on the Cilk project at MIT, including Bradley Kuszmaul and Matteo Frigo. The design of the multithreaded pseudocode took its inspiration from the MIT Cilk extensions to C and by Cilk Arts’s Cilk++ extensions to C++.

We also thank the many readers of the first and second editions who reported errors or submitted suggestions for how to improve this book. We corrected all the bona fide errors that were reported, and we incorporated as many suggestions as we could. We rejoice that the number of such contributors has grown so great that we must regret that it has become impractical to list them all.

Finally, we thank our wives—Nicole Cormen, Wendy Leiserson, Gail Rivest, and Rebecca Ivry—and our children—Ricky, Will, Debby, and Katie Leiserson; Alex and Christopher Rivest; and Molly, Noah, and Benjamin Stein—for their love and support while we prepared this book. The patience and encouragement of our families made this project possible. We affectionately dedicate this book to them.

THOMAS H. CORMEN Lebanon, New Hampshire CHARLES E. LEISERSON Cambridge, Massachusetts RONALD L. RIVEST Cambridge, Massachusetts CLIFFORD STEIN New York, New York

February 2009


I Foundations

Introduction

This part will start you thinking about designing and analyzing algorithms. It is intended to be a gentle introduction to how we specify algorithms, some of the design strategies we will use throughout this book, and many of the fundamental ideas used in algorithm analysis. Later parts of this book will build upon this base.

Chapter 1 provides an overview of algorithms and their place in modern com- puting systems. This chapter defines what an algorithm is and lists some examples. It also makes a case that we should consider algorithms as a technology, along- side technologies such as fast hardware, graphical user interfaces, object-oriented systems, and networks.

In Chapter 2, we see our first algorithms, which solve the problem of sorting a sequence of n numbers. They are written in a pseudocode which, although not directly translatable to any conventional programming language, conveys the struc- ture of the algorithm clearly enough that you should be able to implement it in the language of your choice. The sorting algorithms we examine are insertion sort, which uses an incremental approach, and merge sort, which uses a recursive tech- nique known as “divide-and-conquer.” Although the time each requires increases with the value of n, the rate of increase differs between the two algorithms. We determine these running times in Chapter 2, and we develop a useful notation to express them.

Chapter 3 precisely defines this notation, which we call asymptotic notation. It starts by defining several asymptotic notations, which we use for bounding algo- rithm running times from above and/or below. The rest of Chapter 3 is primarily a presentation of mathematical notation, more to ensure that your use of notation matches that in this book than to teach you new mathematical concepts.


Chapter 4 delves further into the divide-and-conquer method introduced in Chapter 2. It provides additional examples of divide-and-conquer algorithms, in- cluding Strassen’s surprising method for multiplying two square matrices. Chap- ter 4 contains methods for solving recurrences, which are useful for describing the running times of recursive algorithms. One powerful technique is the “mas- ter method,” which we often use to solve recurrences that arise from divide-and- conquer algorithms. Although much of Chapter 4 is devoted to proving the cor- rectness of the master method, you may skip this proof yet still employ the master method.

Chapter 5 introduces probabilistic analysis and randomized algorithms. We typ- ically use probabilistic analysis to determine the running time of an algorithm in cases in which, due to the presence of an inherent probability distribution, the running time may differ on different inputs of the same size. In some cases, we assume that the inputs conform to a known probability distribution, so that we are averaging the running time over all possible inputs. In other cases, the probability distribution comes not from the inputs but from random choices made during the course of the algorithm. An algorithm whose behavior is determined not only by its input but by the values produced by a random-number generator is a randomized algorithm. We can use randomized algorithms to enforce a probability distribution on the inputs—thereby ensuring that no particular input always causes poor perfor- mance—or even to bound the error rate of algorithms that are allowed to produce incorrect results on a limited basis.

Appendices A–D contain other mathematical material that you will find helpful as you read this book. You are likely to have seen much of the material in the appendix chapters before having read this book (although the specific definitions and notational conventions we use may differ in some cases from what you have seen in the past), and so you should think of the Appendices as reference material. On the other hand, you probably have not already seen most of the material in Part I. All the chapters in Part I and the Appendices are written with a tutorial flavor.

1 The Role of Algorithms in Computing

What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers? In this chapter, we will answer these questions.

1.1 Algorithms

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.

We can also view an algorithm as a tool for solving a well-specified computa- tional problem. The statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational proce- dure for achieving that input/output relationship.

For example, we might need to sort a sequence of numbers into nondecreasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

Input: A sequence of n numbers ⟨a_1, a_2, …, a_n⟩.

Output: A permutation (reordering) ⟨a'_1, a'_2, …, a'_n⟩ of the input sequence such that a'_1 ≤ a'_2 ≤ … ≤ a'_n.

For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.


Because many programs use it as an intermediate step, sorting is a fundamental operation in computer science. As a result, we have a large number of good sorting algorithms at our disposal. Which algorithm is best for a given application depends on—among other factors—the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, the architecture of the computer, and the kind of storage devices to be used: main memory, disks, or even tapes.

An algorithm is said to be correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an incorrect answer. Contrary to what you might expect, incorrect algorithms can sometimes be useful, if we can control their error rate. We shall see an example of an algorithm with a controllable error rate in Chapter 31 when we study algorithms for finding large prime numbers. Ordinarily, however, we shall be concerned only with correct algorithms.

An algorithm can be specified in English, as a computer program, or even as a hardware design. The only requirement is that the specification must provide a precise description of the computational procedure to be followed.

What kinds of problems are solved by algorithms?

Sorting is by no means the only computational problem for which algorithms have been developed. (You probably suspected as much when you saw the size of this book.) Practical applications of algorithms are ubiquitous and include the follow- ing examples:

• The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. Although the solutions to the various problems involved are beyond the scope of this book, many methods to solve these biological problems use ideas from several of the chapters in this book, thereby enabling scientists to accomplish tasks while using resources efficiently. The savings are in time, both human and machine, and in money, as more information can be extracted from laboratory techniques.

• The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel (techniques for solving such problems appear in Chapter 24), and using a search engine to quickly find pages on which particular information resides (related techniques are in Chapters 11 and 32).

• Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures (covered in Chapter 31), which are based on numerical algorithms and number theory.

• Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming, which we shall study in Chapter 29.

Although some of the details of these examples are beyond the scope of this book, we do give underlying techniques that apply to these problems and problem areas. We also show how to solve many specific problems, including the following:

• We are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest? Here, we model the road map (which is itself a model of the actual roads) as a graph (which we will meet in Part VI and Appendix B), and we wish to find the shortest path from one vertex to another in the graph. We shall see how to solve this problem efficiently in Chapter 24.

• We are given two ordered sequences of symbols, X = ⟨x_1, x_2, …, x_m⟩ and Y = ⟨y_1, y_2, …, y_n⟩, and we wish to find a longest common subsequence of X and Y. A subsequence of X is just X with some (or possibly all or none) of its elements removed. For example, one subsequence of ⟨A, B, C, D, E, F, G⟩ would be ⟨B, C, E, G⟩. The length of a longest common subsequence of X and Y gives one measure of how similar these two sequences are. For example, if the two sequences are base pairs in DNA strands, then we might consider them similar if they have a long common subsequence. If X has m symbols and Y has n symbols, then X and Y have 2^m and 2^n possible subsequences, respectively. Selecting all possible subsequences of X and Y and matching them up could take a prohibitively long time unless m and n are very small. We shall see in Chapter 15 how to use a general technique known as dynamic programming to solve this problem much more efficiently (a short sketch of that approach follows this list).

• We are given a mechanical design in terms of a library of parts, where each part may include instances of other parts, and we need to list the parts in order so that each part appears before any part that uses it. If the design comprises n parts, then there are n! possible orders, where n! denotes the factorial function. Because the factorial function grows faster than even an exponential function, we cannot feasibly generate each possible order and then verify that, within that order, each part appears before the parts using it (unless we have only a few parts). This problem is an instance of topological sorting, and we shall see in Chapter 22 how to solve this problem efficiently.

• We are given n points in the plane, and we wish to find the convex hull of these points. The convex hull is the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board. The convex hull would be represented by a tight rubber band that surrounds all the nails. Each nail around which the rubber band makes a turn is a vertex of the convex hull. (See Figure 33.6 on page 1029 for an example.) Any of the 2^n subsets of the points might be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull. Chapter 33 gives two good methods for finding the convex hull.
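To make the dynamic-programming remark above concrete, here is a minimal sketch (in Python, which is not the book's pseudocode) of the standard longest-common-subsequence length computation covered in Chapter 15. It fills a table c in which c[i][j] is the LCS length of the first i symbols of X and the first j symbols of Y, so the whole computation takes time proportional to mn rather than examining 2^m subsequences. The function name is ours, not the book's.

def lcs_length(x, y):
    # c[i][j] = length of a longest common subsequence of x[:i] and y[:j]
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

# The subsequence example from the text: <B, C, E, G> is common to both sequences.
print(lcs_length("ABCDEFG", "BCEG"))   # prints 4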

These lists are far from exhaustive (as you again have probably surmised from this book’s heft), but exhibit two characteristics that are common to many interest- ing algorithmic problems:

1. They have many candidate solutions, the overwhelming majority of which do not solve the problem at hand. Finding one that does, or one that is “best,” can present quite a challenge.

2. They have practical applications. Of the problems in the above list, finding the shortest path provides the easiest examples. A transportation firm, such as a trucking or railroad company, has a financial interest in finding shortest paths through a road or rail network because taking shorter paths results in lower labor and fuel costs. Or a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly. Or a person wishing to drive from New York to Boston may want to find driving directions from an appropriate Web site, or she may use her GPS while driving.


Not every problem solved by algorithms has an easily identified set of candidate solutions. For example, suppose we are given a set of numerical values representing samples of a signal, and we want to compute the discrete Fourier transform of these samples. The discrete Fourier transform converts the time domain to the frequency domain, producing a set of numerical coefficients, so that we can determine the strength of various frequencies in the sampled signal. In addition to lying at the heart of signal processing, discrete Fourier transforms have applications in data compression and multiplying large polynomials and integers. Chapter 30 gives an efficient algorithm, the fast Fourier transform (commonly called the FFT), for this problem, and the chapter also sketches out the design of a hardware circuit to compute the FFT.
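As an informal illustration of what the discrete Fourier transform computes (not the FFT of Chapter 30, which obtains the same coefficients far faster), here is a direct O(n^2) evaluation of the defining sum X[k] = sum over t of x[t]·e^(-2πi·tk/n), written as a small Python sketch. The sample signal and function name are illustrative, not from the book.

import cmath
import math

def naive_dft(samples):
    # Directly evaluate X[k] = sum_t x[t] * exp(-2*pi*i*t*k/n) for each k.
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * t * k / n)
                for t, x in enumerate(samples))
            for k in range(n)]

# Eight samples of a pure cosine: the energy concentrates in two coefficients.
signal = [math.cos(2 * math.pi * t / 8) for t in range(8)]
for k, coefficient in enumerate(naive_dft(signal)):
    print(k, round(abs(coefficient), 3))   # large magnitudes at k = 1 and k = 7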

Data structures

This book also contains several data structures. A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them.

Technique

Although you can use this book as a “cookbook” for algorithms, you may someday encounter a problem for which you cannot readily find a published algorithm (many of the exercises and problems in this book, for example). This book will teach you techniques of algorithm design and analysis so that you can develop algorithms on your own, show that they give the correct answer, and understand their efficiency. Different chapters address different aspects of algorithmic problem solving. Some chapters address specific problems, such as finding medians and order statistics in Chapter 9, computing minimum spanning trees in Chapter 23, and determining a maximum flow in a network in Chapter 26. Other chapters address techniques, such as divide-and-conquer in Chapter 4, dynamic programming in Chapter 15, and amortized analysis in Chapter 17.

Hard problems

Most of this book is about efficient algorithms. Our usual measure of efficiency is speed, i.e., how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. Chapter 34 studies an interesting subset of these problems, which are known as NP-complete.

Why are NP-complete problems interesting? First, although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proven that an efficient algorithm for one cannot exist. In other words, no one knows whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them. This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do know of efficient algorithms. Computer scientists are intrigued by how a small change to the problem statement can cause a big change to the efficiency of the best known algorithm.

You should know about NP-complete problems because some of them arise sur- prisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If you can show that the problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution.

As a concrete example, consider a delivery company with a central depot. Each day, it loads up each delivery truck at the depot and sends it around to deliver goods to several addresses. At the end of the day, each truck must end up back at the depot so that it is ready to be loaded for the next day. To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by each truck. This problem is the well-known “traveling-salesman problem,” and it is NP-complete. It has no known efficient algorithm. Under certain assumptions, however, we know of efficient algorithms that give an overall distance which is not too far above the smallest possible. Chapter 35 discusses such “approximation algorithms.”

Parallelism

For many years, we could count on processor clock speeds increasing at a steady rate. Physical limitations present a fundamental roadblock to ever-increasing clock speeds, however: because power density increases superlinearly with clock speed, chips run the risk of melting once their clock speeds become high enough. In order to perform more computations per second, therefore, chips are being designed to contain not just one but several processing “cores.” We can liken these multicore computers to several sequential computers on a single chip; in other words, they are a type of “parallel computer.” In order to elicit the best performance from multicore computers, we need to design algorithms with parallelism in mind. Chapter 27 presents a model for “multithreaded” algorithms, which take advantage of multiple cores. This model has advantages from a theoretical standpoint, and it forms the basis of several successful computer programs, including a championship chess program.


Exercises

1.1-1 Give a real-world example that requires sorting or a real-world example that re- quires computing a convex hull.

1.1-2 Other than speed, what other measures of efficiency might one use in a real-world setting?

1.1-3 Select a data structure that you have seen previously, and discuss its strengths and limitations.

1.1-4 How are the shortest-path and traveling-salesman problems given above similar? How are they different?

1.1-5 Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is “approximately” the best is good enough.

1.2 Algorithms as a technology

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is yes, if for no other reason than that you would still like to demonstrate that your solution method terminates and does so with the correct answer.

If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice (for example, your implementation should be well designed and documented), but you would most often use whichever method was the easiest to implement.

Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.


Efficiency

Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.

As an example, in Chapter 2, we will see two algorithms for sorting. The first, known as insertion sort, takes time roughly equal to c₁n² to sort n items, where c₁ is a constant that does not depend on n. That is, it takes time roughly proportional to n². The second, merge sort, takes time roughly equal to c₂n lg n, where lg n stands for log₂ n and c₂ is another constant that also does not depend on n. Insertion sort typically has a smaller constant factor than merge sort, so that c₁ < c₂.

2.1 Insertion sort

INSERTION-SORT(A)
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1..j - 1].
4      i = j - 1
5      while i > 0 and A[i] > key
6          A[i + 1] = A[i]
7          i = i - 1
8      A[i + 1] = key
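For readers who want to run the procedure, here is a minimal translation of the INSERTION-SORT pseudocode above into Python. It uses Python's 0-based list indices, so the pseudocode's j = 2..A.length becomes j = 1..len(a)-1; this translation is ours, not the book's.

def insertion_sort(a):
    """Sort the list a in place into nondecreasing order."""
    for j in range(1, len(a)):            # pseudocode line 1
        key = a[j]                        # line 2
        i = j - 1                         # line 4
        while i >= 0 and a[i] > key:      # line 5 (i > 0 becomes i >= 0 with 0-based indices)
            a[i + 1] = a[i]               # line 6
            i = i - 1                     # line 7
        a[i + 1] = key                    # line 8
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]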

Loop invariants and the correctness of insertion sort

Figure 2.2 shows how this algorithm works for A = ⟨5, 2, 4, 6, 1, 3⟩. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1..j - 1] constitutes the currently sorted hand, and the remaining subarray A[j + 1..n] corresponds to the pile of cards still on the table. In fact, elements A[1..j - 1] are the elements originally in positions 1 through j - 1, but now in sorted order. We state these properties of A[1..j - 1] formally as a loop invariant:

At the start of each iteration of the for loop of lines 1–8, the subarray A[1..j - 1] consists of the elements originally in A[1..j - 1], but in sorted order.

We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:


Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

When the first two properties hold, the loop invariant is true prior to every iteration of the loop. (Of course, we are free to use established facts other than the loop invariant itself to prove that the loop invariant remains true before each iteration.) Note the similarity to mathematical induction, where to prove that a property holds, you prove a base case and an inductive step. Here, showing that the invariant holds before the first iteration corresponds to the base case, and showing that the invariant holds from iteration to iteration corresponds to the inductive step.

The third property is perhaps the most important one, since we are using the loop invariant to show correctness. Typically, we use the loop invariant along with the condition that caused the loop to terminate. The termination property differs from how we usually use mathematical induction, in which we apply the inductive step infinitely; here, we stop the “induction” when the loop terminates.

Let us see how these properties hold for insertion sort.

Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2.¹ The subarray A[1..j - 1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted (trivially, of course), which shows that the loop invariant holds prior to the first iteration of the loop.

Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j - 1], A[j - 2], A[j - 3], and so on by one position to the right until it finds the proper position for A[j] (lines 4–7), at which point it inserts the value of A[j] (line 8). The subarray A[1..j] then consists of the elements originally in A[1..j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant.

A more formal treatment of the second property would require us to state and show a loop invariant for the while loop of lines 5–7. At this point, however, we prefer not to get bogged down in such formalism, and so we rely on our informal analysis to show that the second property holds for the outer loop.

¹When the loop is a for loop, the moment at which we check the loop invariant just prior to the first iteration is immediately after the initial assignment to the loop-counter variable and just before the first test in the loop header. In the case of INSERTION-SORT, this time is after assigning 2 to the variable j but before the first test of whether j ≤ A.length.

Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. Observing that the subarray A[1..n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.

We shall use this method of loop invariants to show correctness later in this chapter and in other chapters as well.
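As a small illustration of checking a loop invariant mechanically (our own sketch, not a procedure from the book), the Python version below asserts the insertion-sort invariant at the start of every iteration and the sorted property at termination. In 0-based terms, the invariant says that a[0..j-1] holds the elements originally in a[0..j-1], in sorted order.

def insertion_sort_with_invariant(a):
    original = list(a)                    # snapshot of the input
    for j in range(1, len(a)):
        # Loop invariant: the prefix a[0..j-1] is a sorted permutation
        # of the original prefix (Initialization and Maintenance).
        assert a[:j] == sorted(original[:j])
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    # Termination: with j = len(a), the invariant says the whole array is sorted.
    assert a == sorted(original)
    return a

print(insertion_sort_with_invariant([31, 41, 59, 26, 41, 58]))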

Pseudocode conventions

We use the following conventions in our pseudocode.

• Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2–8, and the body of the while loop that begins on line 5 contains lines 6–7 but not line 8. Our indentation style applies to if-else statements² as well. Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity.³

• The looping constructs while, for, and repeat-until and the if-else conditional construct have interpretations similar to those in C, C++, Java, Python, and Pascal.⁴ In this book, the loop counter retains its value after exiting the loop, unlike some situations that arise in C++, Java, and Pascal. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound. We used this property in our correctness argument for insertion sort. The for loop header in line 1 is for j = 2 to A.length, and so when this loop terminates, j = A.length + 1 (or, equivalently, j = n + 1, since n = A.length). We use the keyword to when a for loop increments its loop counter in each iteration, and we use the keyword downto when a for loop decrements its loop counter. When the loop counter changes by an amount greater than 1, the amount of change follows the optional keyword by.

²In an if-else statement, we indent else at the same level as its matching if. Although we omit the keyword then, we occasionally refer to the portion executed when the test following if is true as a then clause. For multiway tests, we use elseif for tests after the first one.

³Each pseudocode procedure in this book appears on one page so that you will not have to discern levels of indentation in code that is split across pages.

⁴Most block-structured languages have equivalent constructs, though the exact syntax may differ. Python lacks repeat-until loops, and its for loops operate a little differently from the for loops in this book.

• The symbol "//" indicates that the remainder of the line is a comment.

• A multiple assignment of the form i = j = e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j = e followed by the assignment i = j.

• Variables (such as i, j, and key) are local to the given procedure. We shall not use global variables without explicit indication.

• We access array elements by specifying the array name followed by the index in square brackets. For example, A[i] indicates the ith element of the array A. The notation ".." is used to indicate a range of values within an array. Thus, A[1..j] indicates the subarray of A consisting of the j elements A[1], A[2], ..., A[j].

• We typically organize compound data into objects, which are composed of attributes. We access a particular attribute using the syntax found in many object-oriented programming languages: the object name, followed by a dot, followed by the attribute name. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write A.length.

We treat a variable representing an array or object as a pointer to the data representing the array or object. For all attributes f of an object x, setting y = x causes y.f to equal x.f. Moreover, if we now set x.f = 3, then afterward not only does x.f equal 3, but y.f equals 3 as well. In other words, x and y point to the same object after the assignment y = x; a short sketch illustrating this behavior appears after this list. Our attribute notation can "cascade." For example, suppose that the attribute f is itself a pointer to some type of object that has an attribute g. Then the notation x.f.g is implicitly parenthesized as (x.f).g. In other words, if we had assigned y = x.f, then x.f.g is the same as y.g. Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL.

� We pass parameters to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure. When objects are passed, the pointer to the data representing the object is copied, but the object’s attributes are not. For example, if x is a parameter of a called procedure, the assignment x D y within the called procedure is not visible to the calling procedure. The assignment x: f D 3, however, is visible. Similarly, arrays are passed by pointer, so that


a pointer to the array is passed, rather than the entire array, and changes to individual array elements are visible to the calling procedure.

� A return statement immediately transfers control back to the point of call in the calling procedure. Most return statements also take a value to pass back to the caller. Our pseudocode differs from many programming languages in that we allow multiple values to be returned in a single return statement.

� The boolean operators “and” and “or” are short circuiting. That is, when we evaluate the expression “x and y” we first evaluate x. If x evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression. Similarly, in the expression “x or y” we eval- uate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as “x ¤ NIL and x: f D y” without worrying about what happens when we try to evaluate x: f when x is NIL.

� The keyword error indicates that an error occurred because conditions were wrong for the procedure to have been called. The calling procedure is respon- sible for handling the error, and so we do not specify what action to take.
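The pointer semantics for objects described in the list above can be illustrated with a short Python sketch (not from the book; Python objects happen to behave the same way, and the class name Node is our own):

class Node:
    pass

x = Node()
x.f = 1
y = x                    # y and x now point to the same object
x.f = 3
print(y.f)               # prints 3: the attribute is shared through the pointer

x.f = Node()             # an attribute can itself be a pointer to an object
x.f.g = 7
y2 = x.f
print(x.f.g == y2.g)     # True: x.f.g is implicitly (x.f).g

x = None                 # the analogue of the special value NIL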

Exercises

2.1-1 Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.

2.1-2 Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

2.1-3 Consider the searching problem:

Input: A sequence of n numbers A = ⟨a1, a2, ..., an⟩ and a value ν.
Output: An index i such that ν = A[i], or the special value NIL if ν does not appear in A.

Write pseudocode for linear search, which scans through the sequence, looking for ν. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

2.1-4 Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B . The sum of the two integers should be stored in binary form in


an (n + 1)-element array C. State the problem formally and write pseudocode for adding the two integers.

2.2 Analyzing algorithms

Analyzing an algorithm has come to mean predicting the resources that the algo- rithm requires. Occasionally, resources such as memory, communication band- width, or computer hardware are of primary concern, but most often it is compu- tational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, we can identify a most efficient one. Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process.

Before we can analyze an algorithm, we must have a model of the implemen- tation technology that we will use, including a model for the resources of that technology and their costs. For most of this book, we shall assume a generic one- processor, random-access machine (RAM) model of computation as our imple- mentation technology and understand that our algorithms will be implemented as computer programs. In the RAM model, instructions are executed one after an- other, with no concurrent operations.

Strictly speaking, we should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts? Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are de- signed. The RAM model contains instructions commonly found in real computers: arithmetic (such as add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time.

The data types in the RAM model are integer and floating point (for storing real numbers). Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time—clearly an unrealistic scenario.)


Real computers contain instructions not listed above, and such instructions represent a gray area in the RAM model. For example, is exponentiation a constant-time instruction? In the general case, no; it takes several instructions to compute x^y when x and y are real numbers. In restricted situations, however, exponentiation is a constant-time operation. Many computers have a "shift left" instruction, which in constant time shifts the bits of an integer by k positions to the left. In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2, so that shifting the bits by k positions to the left is equivalent to multiplication by 2^k. Therefore, such computers can compute 2^k in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2^k as a constant-time operation when k is a small enough positive integer.
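As a concrete illustration (not from the book), most languages expose this shift instruction directly. In Python the expression 1 << k shifts the integer 1 left by k positions; since Python integers are unbounded, the constant-time claim applies only when k fits in a machine word, as the text notes.

def power_of_two(k):
    """Compute 2**k by shifting the integer 1 left by k positions."""
    return 1 << k

for k in range(5):
    print(k, power_of_two(k))   # second column: 1, 2, 4, 8, 16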

In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory. Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, and so they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines.

Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, alge- braic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

Even though we typically select only one machine model to analyze a given al- gorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important char- acteristics of an algorithm’s resource requirements, and suppresses tedious details.

Analysis of insertion sort

The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers takes longer than sorting three numbers. Moreover, INSERTION- SORT can take different amounts of time to sort two input sequences of the same size depending on how nearly sorted they already are. In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms “running time” and “size of input” more carefully.


The best notion for input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most nat- ural measure is the number of items in the input—for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algo- rithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study.

The running time of an algorithm on a particular input is the number of primitive operations or “steps” executed. It is convenient to define the notion of step so that it is as machine-independent as possible. For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the i th line takes time ci , where ci is a constant. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers.5

In the following discussion, our expression for the running time of INSERTION- SORT will evolve from a messy formula that uses all the statement costs ci to a much simpler notation that is more concise and more easily manipulated. This simpler notation will also make it easy to determine whether one algorithm is more efficient than another.

We start by presenting the INSERTION-SORT procedure with the time “cost” of each statement and the number of times each statement is executed. For each j D 2; 3; : : : ; n, where n D A: length, we let tj denote the number of times the while loop test in line 5 is executed for that value of j . When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time.

5There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say “sort the points by x-coordinate,” which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more. That is, we separate the process of calling the subroutine—passing parameters to it, etc.—from the process of executing the subroutine.


INSERTION-SORT(A)                                          cost    times
1  for j = 2 to A.length                                   c1      n
2      key = A[j]                                          c2      n − 1
3      // Insert A[j] into the sorted sequence A[1..j − 1].  0      n − 1
4      i = j − 1                                           c4      n − 1
5      while i > 0 and A[i] > key                          c5      Σ_{j=2}^{n} t_j
6          A[i + 1] = A[i]                                 c6      Σ_{j=2}^{n} (t_j − 1)
7          i = i − 1                                       c7      Σ_{j=2}^{n} (t_j − 1)
8      A[i + 1] = key                                      c8      n − 1

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes ci steps to execute and executes n times will contribute ci·n to the total running time.6 To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining

T(n) = c1·n + c2·(n − 1) + c4·(n − 1) + c5·Σ_{j=2}^{n} t_j + c6·Σ_{j=2}^{n} (t_j − 1) + c7·Σ_{j=2}^{n} (t_j − 1) + c8·(n − 1).

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT, the best case occurs if the array is already sorted. For each j = 2, 3, ..., n, we then find that A[i] ≤ key in line 5 when i has its initial value of j − 1. Thus t_j = 1 for j = 2, 3, ..., n, and the best-case running time is

T(n) = c1·n + c2·(n − 1) + c4·(n − 1) + c5·(n − 1) + c8·(n − 1)
     = (c1 + c2 + c4 + c5 + c8)·n − (c2 + c4 + c5 + c8).

We can express this running time as a·n + b for constants a and b that depend on the statement costs ci; it is thus a linear function of n.

If the array is in reverse sorted order—that is, in decreasing order—the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1..j − 1], and so t_j = j for j = 2, 3, ..., n. Noting that

Σ_{j=2}^{n} j = n(n + 1)/2 − 1

and

Σ_{j=2}^{n} (j − 1) = n(n − 1)/2

(see Appendix A for a review of how to solve these summations), we find that in the worst case, the running time of INSERTION-SORT is

T(n) = c1·n + c2·(n − 1) + c4·(n − 1) + c5·(n(n + 1)/2 − 1) + c6·(n(n − 1)/2) + c7·(n(n − 1)/2) + c8·(n − 1)
     = (c5/2 + c6/2 + c7/2)·n² + (c1 + c2 + c4 + c5/2 − c6/2 − c7/2 + c8)·n − (c2 + c4 + c5 + c8).

We can express this worst-case running time as a·n² + b·n + c for constants a, b, and c that again depend on the statement costs ci; it is thus a quadratic function of n.

6This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily reference mn distinct words of memory.

Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting “randomized” algorithms whose behavior can vary even for a fixed input.
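To see the linear and quadratic behaviors concretely, here is a small Python sketch (our own instrumentation, not from the book) that counts how many times the while-loop test of line 5 executes, i.e., the sum of the t_j, for an already-sorted input and for a reverse-sorted input.

def insertion_sort_test_count(A):
    """Return the total number of while-loop tests (the sum of the t_j)."""
    tests = 0
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while True:
            tests += 1                       # one execution of the while test
            if i >= 0 and A[i] > key:
                A[i + 1] = A[i]
                i -= 1
            else:
                break
        A[i + 1] = key
    return tests

n = 100
print(insertion_sort_test_count(list(range(n))))          # sorted input: n - 1 = 99 tests
print(insertion_sort_test_count(list(range(n, 0, -1))))   # reverse-sorted: about n^2 / 2 tests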

Worst-case and average-case analysis

In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.

� The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.

� For some algorithms, the worst case occurs fairly often. For example, in search- ing a database for a particular piece of information, the searching algorithm’s worst case will often occur when the information is not present in the database. In some applications, searches for absent information may be frequent.


� The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1..j − 1] to insert element A[j]? On average, half the elements in A[1..j − 1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1..j − 1], and so t_j is about j/2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time.

In some particular cases, we shall be interested in the average-case running time of an algorithm; we shall see the technique of probabilistic analysis applied to various algorithms throughout this book. The scope of average-case analysis is limited, because it may not be apparent what constitutes an “average” input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis and yield an expected running time. We explore randomized algorithms more in Chapter 5 and in several other subsequent chapters.
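A quick experiment (our own, with arbitrarily chosen problem size and trial count) supports the claim that t_j is about j/2 on average for random inputs, which makes the expected total number of while-loop tests roughly n²/4.

import random

def total_while_tests(A):
    """Count the while-loop tests of insertion sort on a copy of A."""
    A = list(A)
    tests = 0
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        while True:
            tests += 1
            if i >= 0 and A[i] > key:
                A[i + 1], i = A[i], i - 1
            else:
                break
        A[i + 1] = key
    return tests

n, trials = 200, 50
avg = sum(total_while_tests(random.sample(range(10 * n), n)) for _ in range(trials)) / trials
print(avg, n * n / 4)   # the empirical average is on the order of n^2 / 4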

Order of growth

We used some simplifying abstractions to ease our analysis of the INSERTION- SORT procedure. First, we ignored the actual cost of each statement, using the constants ci to represent these costs. Then, we observed that even these constants give us more detail than we really need: we expressed the worst-case running time as an2 C bn C c for some constants a, b, and c that depend on the statement costs ci . We thus ignored not only the actual statement costs, but also the abstract costs ci .

We shall now make one more simplifying abstraction: it is the rate of growth, or order of growth, of the running time that really interests us. We therefore con- sider only the leading term of a formula (e.g., an2), since the lower-order terms are relatively insignificant for large values of n. We also ignore the leading term’s con- stant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. For insertion sort, when we ignore the lower-order terms and the leading term’s constant coefficient, we are left with the factor of n2 from the leading term. We write that insertion sort has a worst-case running time of ‚.n2/ (pronounced “theta of n-squared”). We shall use ‚-notation informally in this chapter, and we will define it precisely in Chapter 3.

We usually consider one algorithm to be more efficient than another if its worst- case running time has a lower order of growth. Due to constant factors and lower- order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower


order of growth. But for large enough inputs, a ‚.n2/ algorithm, for example, will run more quickly in the worst case than a ‚.n3/ algorithm.

Exercises

2.2-1 Express the function n³/1000 − 100n² − 100n + 3 in terms of Θ-notation.

2.2-2 Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n − 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n − 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.

2.2-3 Consider linear search again (see Exercise 2.1-3). How many elements of the in- put sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in ‚-notation? Justify your answers.

2.2-4 How can we modify almost any algorithm to have a good best-case running time?

2.3 Designing algorithms

We can choose from a wide range of algorithm design techniques. For insertion sort, we used an incremental approach: having sorted the subarray AŒ1 : : j � 1�, we inserted the single element AŒj � into its proper place, yielding the sorted subarray AŒ1 : : j �.

In this section, we examine an alternative design approach, known as “divide- and-conquer,” which we shall explore in more detail in Chapter 4. We’ll use divide- and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort. One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that we will see in Chapter 4.


2.3.1 The divide-and-conquer approach

Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related sub- problems. These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original prob- lem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.

The divide-and-conquer paradigm involves three steps at each level of the recur- sion:

Divide the problem into a number of subproblems that are smaller instances of the same problem.

Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

Combine the solutions to the subproblems into the solution for the original prob- lem.

The merge sort algorithm closely follows the divide-and-conquer paradigm. In- tuitively, it operates as follows.

Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.

Conquer: Sort the two subsequences recursively using merge sort.

Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion “bottoms out” when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.
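Before turning to the MERGE procedure itself, here is a compact Python sketch of the whole scheme (our own rendering, not the book's pseudocode), using an infinite sentinel value in the spirit of the MERGE procedure discussed below; the function names are ours.

import math

def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] (0-based, inclusive)."""
    L = A[p:q + 1] + [math.inf]      # sentinel at the bottom of each pile
    R = A[q + 1:r + 1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):        # place the smallest remaining element into A[k]
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    """Sort A[p..r] by divide-and-conquer."""
    if p < r:
        q = (p + r) // 2             # divide
        merge_sort(A, p, q)          # conquer
        merge_sort(A, q + 1, r)      # conquer
        merge(A, p, q, r)            # combine

A = [3, 41, 52, 26, 38, 57, 9, 49]
merge_sort(A, 0, len(A) - 1)
print(A)                             # [3, 9, 26, 38, 41, 49, 52, 57]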

The key operation of the merge sort algorithm is the merging of two sorted sequences in the "combine" step. We merge by calling an auxiliary procedure MERGE(A, p, q, r), where A is an array and p, q, and r are indices into the array such that p ≤ q < r. The procedure assumes that the subarrays A[p..q] and A[q + 1..r] are in sorted order, and it merges them into a single sorted subarray that replaces the current subarray A[p..r]. In the maintenance step of the MERGE loop invariant, if instead L[i] > R[j], then lines 16–17 perform the appropriate action to maintain the loop invariant.

Termination: At termination, k = r + 1. By the loop invariant, the subarray A[p..k − 1], which is A[p..r], contains the k − p = r − p + 1 smallest elements of L[1..n1 + 1] and R[1..n2 + 1], in sorted order. The arrays L and R together contain n1 + n2 + 2 = r − p + 3 elements. All but the two largest have been copied back into A, and these two largest elements are the sentinels.


To see that the MERGE procedure runs in Θ(n) time, where n = r − p + 1, observe that each of lines 1–3 and 8–11 takes constant time, the for loops of lines 4–7 take Θ(n1 + n2) = Θ(n) time,7 and there are n iterations of the for loop of lines 12–17, each of which takes constant time.

We can now use the MERGE procedure as a subroutine in the merge sort algorithm. The procedure MERGE-SORT(A, p, r) sorts the elements in the subarray A[p..r]. If p ≥ r, the subarray has at most one element and is therefore already sorted. Otherwise, the divide step simply computes an index q that partitions A[p..r] into two subarrays: A[p..q], containing ⌈n/2⌉ elements, and A[q + 1..r], containing ⌊n/2⌋ elements.8

MERGE-SORT(A, p, r)

1  if p < r
2      q = ⌊(p + r)/2⌋
3      MERGE-SORT(A, p, q)
4      MERGE-SORT(A, q + 1, r)
5      MERGE(A, p, q, r)

Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows.

Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D(n) = Θ(1).

Conquer: We recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.

Combine: We have already noted that the MERGE procedure on an n-element subarray takes time Θ(n), and so C(n) = Θ(n).

When we add the functions D.n/ and C.n/ for the merge sort analysis, we are adding a function that is ‚.n/ and a function that is ‚.1/. This sum is a linear function of n, that is, ‚.n/. Adding it to the 2T .n=2/ term from the “conquer” step gives the recurrence for the worst-case running time T .n/ of merge sort:

T(n) = Θ(1)               if n = 1,
       2T(n/2) + Θ(n)     if n > 1.                        (2.1)

In Chapter 4, we shall see the “master theorem,” which we can use to show that T .n/ is ‚.n lg n/, where lg n stands for log2 n. Because the logarithm func- tion grows more slowly than any linear function, for large enough inputs, merge sort, with its ‚.n lg n/ running time, outperforms insertion sort, whose running time is ‚.n2/, in the worst case.

We do not need the master theorem to intuitively understand why the solution to the recurrence (2.1) is T .n/ D ‚.n lg n/. Let us rewrite recurrence (2.1) as

T(n) = c                  if n = 1,
       2T(n/2) + cn       if n > 1,                        (2.2)

where the constant c represents the time required to solve problems of size 1 as well as the time per array element of the divide and combine steps.9

9It is unlikely that the same constant exactly represents both the time to solve problems of size 1 and the time per array element of the divide and combine steps. We can get around this problem by letting c be the larger of these times and understanding that our recurrence gives an upper bound on the running time, or by letting c be the lesser of these times and understanding that our recurrence gives a lower bound on the running time. Both bounds are on the order of n lg n and, taken together, give a ‚.n lg n/ running time.


Figure 2.5 shows how we can solve recurrence (2.2). For convenience, we as- sume that n is an exact power of 2. Part (a) of the figure shows T .n/, which we expand in part (b) into an equivalent tree representing the recurrence. The cn term is the root (the cost incurred at the top level of recursion), and the two subtrees of the root are the two smaller recurrences T .n=2/. Part (c) shows this process carried one step further by expanding T .n=2/. The cost incurred at each of the two sub- nodes at the second level of recursion is cn=2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence, until the problem sizes get down to 1, each with a cost of c. Part (d) shows the resulting recursion tree.

Next, we add the costs across each level of the tree. The top level has total cost cn, the next level down has total cost c(n/2) + c(n/2) = cn, the level after that has total cost c(n/4) + c(n/4) + c(n/4) + c(n/4) = cn, and so on. In general, the level i below the top has 2^i nodes, each contributing a cost of c(n/2^i), so that the i-th level below the top has total cost 2^i · c(n/2^i) = cn. The bottom level has n nodes, each contributing a cost of c, for a total cost of cn.

The total number of levels of the recursion tree in Figure 2.5 is lg n + 1, where n is the number of leaves, corresponding to the input size. An informal inductive argument justifies this claim. The base case occurs when n = 1, in which case the tree has only one level. Since lg 1 = 0, we have that lg n + 1 gives the correct number of levels. Now assume as an inductive hypothesis that the number of levels of a recursion tree with 2^i leaves is lg 2^i + 1 = i + 1 (since for any value of i, we have that lg 2^i = i). Because we are assuming that the input size is a power of 2, the next input size to consider is 2^(i+1). A tree with n = 2^(i+1) leaves has one more level than a tree with 2^i leaves, and so the total number of levels is (i + 1) + 1 = lg 2^(i+1) + 1.

To compute the total cost represented by the recurrence (2.2), we simply add up the costs of all the levels. The recursion tree has lg nC 1 levels, each costing cn, for a total cost of cn.lg n C 1/ D cn lg n C cn. Ignoring the low-order term and the constant c gives the desired result of ‚.n lg n/.
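A short numeric check of this conclusion (our own, not from the book): evaluate recurrence (2.2) directly for powers of 2 with c = 1 and compare with cn lg n + cn.

import math

def T(n, c=1):
    """Evaluate recurrence (2.2) exactly when n is a power of 2."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

for n in [2, 8, 64, 1024]:
    print(n, T(n), n * math.log2(n) + n)   # the last two columns agree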

Exercises

2.3-1 Using Figure 2.4 as a model, illustrate the operation of merge sort on the array A = ⟨3, 41, 52, 26, 38, 57, 9, 49⟩.

2.3-2 Rewrite the MERGE procedure so that it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A.


Figure 2.5 How to construct a recursion tree for the recurrence T(n) = 2T(n/2) + cn. Part (a) shows T(n), which progressively expands in (b)–(d) to form the recursion tree. The fully expanded tree in part (d) has lg n + 1 levels (i.e., it has height lg n, as indicated), and each level contributes a total cost of cn. The total cost, therefore, is cn lg n + cn, which is Θ(n lg n).


2.3-3 Use mathematical induction to show that when n is an exact power of 2, the solution of the recurrence

T(n) = 2                  if n = 2,
       2T(n/2) + n        if n = 2^k, for k > 1

is T(n) = n lg n.

2.3-4 We can express insertion sort as a recursive procedure as follows. In order to sort A[1..n], we recursively sort A[1..n − 1] and then insert A[n] into the sorted array A[1..n − 1]. Write a recurrence for the running time of this recursive version of insertion sort.

2.3-5 Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against ν and eliminate half of the sequence from further consideration. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is Θ(lg n).

2.3-6 Observe that the while loop of lines 5–7 of the INSERTION-SORT procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray AŒ1 : : j � 1�. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to ‚.n lg n/?

2.3-7 ? Describe a ‚.n lg n/-time algorithm that, given a set S of n integers and another integer x, determines whether or not there exist two elements in S whose sum is exactly x.

Problems

2-1 Insertion sort on small arrays in merge sort Although merge sort runs in ‚.n lg n/ worst-case time and insertion sort runs in ‚.n2/ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when


subproblems become sufficiently small. Consider a modification to merge sort in which n/k sublists of length k are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined.

a. Show that insertion sort can sort the n=k sublists, each of length k, in ‚.nk/ worst-case time.

b. Show how to merge the sublists in ‚.n lg.n=k// worst-case time.

c. Given that the modified algorithm runs in ‚.nkC n lg.n=k// worst-case time, what is the largest value of k as a function of n for which the modified algorithm has the same running time as standard merge sort, in terms of ‚-notation?

d. How should we choose k in practice?

2-2 Correctness of bubblesort Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order.

BUBBLESORT(A)

1  for i = 1 to A.length − 1
2      for j = A.length downto i + 1
3          if A[j] < A[j − 1]
4              exchange A[j] with A[j − 1]

2-4 Inversions
Let A[1..n] be an array of n distinct numbers. If i < j and A[i] > A[j], then the pair (i, j) is called an inversion of A.

a. List the five inversions of the array ⟨2, 3, 8, 6, 1⟩.


b. What array with elements from the set {1, 2, ..., n} has the most inversions? How many does it have?

c. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.

d. Give an algorithm that determines the number of inversions in any permutation on n elements in ‚.n lg n/ worst-case time. (Hint: Modify merge sort.)

Chapter notes

In 1968, Knuth published the first of three volumes with the general title The Art of Computer Programming [209, 210, 211]. The first volume ushered in the modern study of computer algorithms with a focus on the analysis of running time, and the full series remains an engaging and worthwhile reference for many of the topics presented here. According to Knuth, the word “algorithm” is derived from the name “al-Khowârizmı̂,” a ninth-century Persian mathematician.

Aho, Hopcroft, and Ullman [5] advocated the asymptotic analysis of algo- rithms—using notations that Chapter 3 introduces, including ‚-notation—as a means of comparing relative performance. They also popularized the use of re- currence relations to describe the running times of recursive algorithms.

Knuth [211] provides an encyclopedic treatment of many sorting algorithms. His comparison of sorting algorithms (page 381) includes exact step-counting analyses, like the one we performed here for insertion sort. Knuth’s discussion of insertion sort encompasses several variations of the algorithm. The most important of these is Shell’s sort, introduced by D. L. Shell, which uses insertion sort on periodic subsequences of the input to produce a faster sorting algorithm.

Merge sort is also described by Knuth. He mentions that a mechanical colla- tor capable of merging two decks of punched cards in a single pass was invented in 1938. J. von Neumann, one of the pioneers of computer science, apparently wrote a program for merge sort on the EDVAC computer in 1945.

The early history of proving programs correct is described by Gries [153], who credits P. Naur with the first article in this field. Gries attributes loop invariants to R. W. Floyd. The textbook by Mitchell [256] describes more recent progress in proving programs correct.

3 Growth of Functions

The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple characterization of the algorithm’s efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its ‚.n lg n/ worst-case running time, beats insertion sort, whose worst-case running time is ‚.n2/. Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 2, the extra precision is not usually worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself.

When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs.

This chapter gives several standard methods for simplifying the asymptotic anal- ysis of algorithms. The next section begins by defining several types of “asymp- totic notation,” of which we have already seen an example in ‚-notation. We then present several notational conventions used throughout this book, and finally we review the behavior of functions that commonly arise in the analysis of algorithms.

3.1 Asymptotic notation

The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N D f0; 1; 2; : : :g. Such notations are convenient for describing the worst-case running-time function T .n/, which usually is defined only on integer input sizes. We sometimes find it convenient, however, to abuse asymptotic notation in a va-


riety of ways. For example, we might extend the notation to the domain of real numbers or, alternatively, restrict it to a subset of the natural numbers. We should make sure, however, to understand the precise meaning of the notation so that when we abuse, we do not misuse it. This section defines the basic asymptotic notations and also introduces some common abuses.

Asymptotic notation, functions, and running times

We will use asymptotic notation primarily to describe the running times of algo- rithms, as when we wrote that insertion sort’s worst-case running time is ‚.n2/. Asymptotic notation actually applies to functions, however. Recall that we charac- terized insertion sort’s worst-case running time as an2CbnCc, for some constants a, b, and c. By writing that insertion sort’s running time is ‚.n2/, we abstracted away some details of this function. Because asymptotic notation applies to func- tions, what we were writing as ‚.n2/ was the function an2 C bn C c, which in that case happened to characterize the worst-case running time of insertion sort.

In this book, the functions to which we apply asymptotic notation will usually characterize the running times of algorithms. But asymptotic notation can apply to functions that characterize some other aspect of algorithms (the amount of space they use, for example), or even to functions that have nothing whatsoever to do with algorithms.

Even when we use asymptotic notation to apply to the running time of an al- gorithm, we need to understand which running time we mean. Sometimes we are interested in the worst-case running time. Often, however, we wish to characterize the running time no matter what the input. In other words, we often wish to make a blanket statement that covers all inputs, not just the worst case. We shall see asymptotic notations that are well suited to characterizing running times no matter what the input.

‚-notation

In Chapter 2, we found that the worst-case running time of insertion sort is T .n/ D ‚.n2/. Let us define what this notation means. For a given function g.n/, we denote by ‚.g.n// the set of functions

Θ(g(n)) = {f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}.1

1Within set notation, a colon means “such that.”


Figure 3.1 Graphic examples of the Θ, O, and Ω notations. In each part, the value of n0 shown is the minimum possible value; any greater value would also work. (a) Θ-notation bounds a function to within constant factors. We write f(n) = Θ(g(n)) if there exist positive constants n0, c1, and c2 such that at and to the right of n0, the value of f(n) always lies between c1·g(n) and c2·g(n) inclusive. (b) O-notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that at and to the right of n0, the value of f(n) always lies on or below c·g(n). (c) Ω-notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that at and to the right of n0, the value of f(n) always lies on or above c·g(n).

A function f .n/ belongs to the set ‚.g.n// if there exist positive constants c1 and c2 such that it can be “sandwiched” between c1g.n/ and c2g.n/, for suffi- ciently large n. Because ‚.g.n// is a set, we could write “f .n/ 2 ‚.g.n//” to indicate that f .n/ is a member of ‚.g.n//. Instead, we will usually write “f .n/ D ‚.g.n//” to express the same notion. You might be confused because we abuse equality in this way, but we shall see later in this section that doing so has its advantages.

Figure 3.1(a) gives an intuitive picture of functions f .n/ and g.n/, where f .n/ D ‚.g.n//. For all values of n at and to the right of n0, the value of f .n/ lies at or above c1g.n/ and at or below c2g.n/. In other words, for all n � n0, the function f .n/ is equal to g.n/ to within a constant factor. We say that g.n/ is an asymptotically tight bound for f .n/.

The definition of ‚.g.n// requires that every member f .n/ 2 ‚.g.n// be asymptotically nonnegative, that is, that f .n/ be nonnegative whenever n is suf- ficiently large. (An asymptotically positive function is one that is positive for all sufficiently large n.) Consequently, the function g.n/ itself must be asymptotically nonnegative, or else the set ‚.g.n// is empty. We shall therefore assume that every function used within ‚-notation is asymptotically nonnegative. This assumption holds for the other asymptotic notations defined in this chapter as well.


In Chapter 2, we introduced an informal notion of Θ-notation that amounted to throwing away lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal definition to show that (1/2)n² − 3n = Θ(n²). To do so, we must determine positive constants c1, c2, and n0 such that

c1·n² ≤ (1/2)n² − 3n ≤ c2·n²

for all n ≥ n0. Dividing by n² yields

c1 ≤ 1/2 − 3/n ≤ c2.

We can make the right-hand inequality hold for any value of n ≥ 1 by choosing any constant c2 ≥ 1/2. Likewise, we can make the left-hand inequality hold for any value of n ≥ 7 by choosing any constant c1 ≤ 1/14. Thus, by choosing c1 = 1/14, c2 = 1/2, and n0 = 7, we can verify that (1/2)n² − 3n = Θ(n²). Certainly, other choices for the constants exist, but the important thing is that some choice exists. Note that these constants depend on the function (1/2)n² − 3n; a different function belonging to Θ(n²) would usually require different constants.

We can also use the formal definition to verify that 6n³ ≠ Θ(n²). Suppose for the purpose of contradiction that c2 and n0 exist such that 6n³ ≤ c2·n² for all n ≥ n0. But then dividing by n² yields n ≤ c2/6, which cannot possibly hold for arbitrarily large n, since c2 is constant.

Intuitively, the lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds because they are insignificant for large n. When n is large, even a tiny fraction of the highest-order term suf- fices to dominate the lower-order terms. Thus, setting c1 to a value that is slightly smaller than the coefficient of the highest-order term and setting c2 to a value that is slightly larger permits the inequalities in the definition of ‚-notation to be sat- isfied. The coefficient of the highest-order term can likewise be ignored, since it only changes c1 and c2 by a constant factor equal to the coefficient.

As an example, consider any quadratic function f(n) = an² + bn + c, where a, b, and c are constants and a > 0. Throwing away the lower-order terms and ignoring the constant yields f(n) = Θ(n²). Formally, to show the same thing, we take the constants c1 = a/4, c2 = 7a/4, and n0 = 2·max(|b|/a, √(|c|/a)). You may verify that 0 ≤ c1·n² ≤ an² + bn + c ≤ c2·n² for all n ≥ n0. In general, for any polynomial p(n) = Σ_{i=0}^{d} a_i·n^i, where the a_i are constants and a_d > 0, we have p(n) = Θ(n^d) (see Problem 3-1).
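As a sanity check (our own, with an arbitrarily chosen example quadratic, not from the book), these constants can be verified numerically over a range of n:

import math

a, b, c = 3.0, -10.0, 5.0                       # an example quadratic with a > 0
c1, c2 = a / 4, 7 * a / 4
n0 = 2 * max(abs(b) / a, math.sqrt(abs(c) / a))

def f(n):
    return a * n * n + b * n + c

ok = all(0 <= c1 * n * n <= f(n) <= c2 * n * n for n in range(math.ceil(n0), 1000))
print(n0, ok)                                    # ok is True for every tested n >= n0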

Since any constant is a degree-0 polynomial, we can express any constant func- tion as ‚.n0/, or ‚.1/. This latter notation is a minor abuse, however, because the


expression does not indicate what variable is tending to infinity.2 We shall often use the notation ‚.1/ to mean either a constant or a constant function with respect to some variable.

O-notation

The ‚-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given func- tion g.n/, we denote by O.g.n// (pronounced “big-oh of g of n” or sometimes just “oh of g of n”) the set of functions

O(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}.

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n at and to the right of n0, the value of the function f .n/ is on or below cg.n/.

We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have Θ(g(n)) ⊆ O(g(n)). Thus, our proof that any quadratic function an² + bn + c, where a > 0, is in Θ(n²) also shows that any such quadratic function is in O(n²). What may be more surprising is that when a > 0, any linear function an + b is in O(n²), which is easily verified by taking c = a + |b| and n0 = max(1, −b/a).

If you have seen O-notation before, you might find it strange that we should write, for example, n D O.n2/. In the literature, we sometimes find O-notation informally describing asymptotically tight bounds, that is, what we have defined using ‚-notation. In this book, however, when we write f .n/ D O.g.n//, we are merely claiming that some constant multiple of g.n/ is an asymptotic upper bound on f .n/, with no claim about how tight an upper bound it is. Distinguish- ing asymptotic upper bounds from asymptotically tight bounds is standard in the algorithms literature.

Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm’s overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 2 immediately yields an O.n2/ upper bound on the worst-case running time: the cost of each it- eration of the inner loop is bounded from above by O.1/ (constant), the indices i

2The real problem is that our ordinary notation for functions does not distinguish functions from values. In λ-calculus, the parameters to a function are clearly specified: the function n² could be written as λn.n², or even λr.r². Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.


and j are both at most n, and the inner loop is executed at most once for each of the n2 pairs of values for i and j .

Since O-notation describes an upper bound, when we use it to bound the worst- case running time of an algorithm, we have a bound on the running time of the algo- rithm on every input—the blanket statement we discussed earlier. Thus, the O.n2/ bound on worst-case running time of insertion sort also applies to its running time on every input. The ‚.n2/ bound on the worst-case running time of insertion sort, however, does not imply a ‚.n2/ bound on the running time of insertion sort on every input. For example, we saw in Chapter 2 that when the input is already sorted, insertion sort runs in ‚.n/ time.

Technically, it is an abuse to say that the running time of insertion sort is O.n2/, since for a given n, the actual running time varies, depending on the particular input of size n. When we say “the running time is O.n2/,” we mean that there is a function f .n/ that is O.n2/ such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value f .n/. Equivalently, we mean that the worst-case running time is O.n2/.

Ω-notation

Just as O-notation provides an asymptotic upper bound on a function, �-notation provides an asymptotic lower bound. For a given function g.n/, we denote by �.g.n// (pronounced “big-omega of g of n” or sometimes just “omega of g of n”) the set of functions

Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}.

Figure 3.1(c) shows the intuition behind �-notation. For all values n at or to the right of n0, the value of f .n/ is on or above cg.n/.

From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 3.1-5).

Theorem 3.1 For any two functions f .n/ and g.n/, we have f .n/ D ‚.g.n// if and only if f .n/ D O.g.n// and f .n/ D �.g.n//.

As an example of the application of this theorem, our proof that an² + bn + c = Θ(n²) for any constants a, b, and c, where a > 0, immediately implies that an² + bn + c = Ω(n²) and an² + bn + c = O(n²). In practice, rather than using Theorem 3.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.


When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n. Equivalently, we are giving a lower bound on the best-case running time of an algorithm. For example, the best-case running time of insertion sort is Ω(n), which implies that the running time of insertion sort is Ω(n).

The running time of insertion sort therefore belongs to both Ω(n) and O(n²), since it falls anywhere between a linear function of n and a quadratic function of n. Moreover, these bounds are asymptotically as tight as possible: for instance, the running time of insertion sort is not Ω(n²), since there exists an input for which insertion sort runs in Θ(n) time (e.g., when the input is already sorted). It is not contradictory, however, to say that the worst-case running time of insertion sort is Ω(n²), since there exists an input that causes the algorithm to take Ω(n²) time.

Asymptotic notation in equations and inequalities

We have already seen how asymptotic notation can be used within mathematical formulas. For example, in introducing O-notation, we wrote “n D O.n2/.” We might also write 2n2C3nC1 D 2n2C‚.n/. How do we interpret such formulas?

When the asymptotic notation stands alone (that is, not within a larger formula) on the right-hand side of an equation (or inequality), as in n D O.n2/, we have already defined the equal sign to mean set membership: n 2 O.n2/. In general, however, when asymptotic notation appears in a formula, we interpret it as stand- ing for some anonymous function that we do not care to name. For example, the formula 2n2 C 3nC 1 D 2n2 C ‚.n/ means that 2n2 C 3n C 1 D 2n2 C f .n/, where f .n/ is some function in the set ‚.n/. In this case, we let f .n/ D 3nC 1, which indeed is in ‚.n/.

Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation. For example, in Chapter 2 we expressed the worst-case running time of merge sort as the recurrence

T .n/ D 2T .n=2/C‚.n/ : If we are interested only in the asymptotic behavior of T .n/, there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term ‚.n/.

The number of anonymous functions in an expression is understood to be equal to the number of times the asymptotic notation appears. For example, in the expression

Σ_{i=1}^{n} O(i),


there is only a single anonymous function (a function of i). This expression is thus not the same as O(1) + O(2) + ··· + O(n), which doesn't really have a clean interpretation.

In some cases, asymptotic notation appears on the left-hand side of an equation, as in

2n2 C‚.n/ D ‚.n2/ : We interpret such equations using the following rule: No matter how the anony- mous functions are chosen on the left of the equal sign, there is a way to choose the anonymous functions on the right of the equal sign to make the equation valid. Thus, our example means that for any function f .n/ 2 ‚.n/, there is some func- tion g.n/ 2 ‚.n2/ such that 2n2 C f .n/ D g.n/ for all n. In other words, the right-hand side of an equation provides a coarser level of detail than the left-hand side.

We can chain together a number of such relationships, as in

2n2 C 3nC 1 D 2n2 C‚.n/ D ‚.n2/ :

We can interpret each equation separately by the rules above. The first equa- tion says that there is some function f .n/ 2 ‚.n/ such that 2n2 C 3n C 1 D 2n2 C f .n/ for all n. The second equation says that for any function g.n/ 2 ‚.n/ (such as the f .n/ just mentioned), there is some function h.n/ 2 ‚.n2/ such that 2n2 C g.n/ D h.n/ for all n. Note that this interpretation implies that 2n2 C 3nC 1 D ‚.n2/, which is what the chaining of equations intuitively gives us.

o-notation

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set

o(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0}.

The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ c·g(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < c·g(n) holds for all constants c > 0. Intuitively, in o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,


lim_{n→∞} f(n)/g(n) = 0.                                    (3.1)

Some authors use this limit as a definition of the o-notation; the definition in this book also restricts the anonymous functions to be asymptotically nonnegative.
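A quick numeric illustration (our own, not from the book): the ratio f(n)/g(n) goes to 0 when f(n) = 2n and g(n) = n², so 2n = o(n²), but it does not when f(n) = 2n², which is O(n²) yet not o(n²).

for n in [10, 100, 1000, 10000]:
    print(n, (2 * n) / n**2, (2 * n**2) / n**2)   # the first ratio tends to 0, the second stays at 2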

ω-notation

By analogy, !-notation is to �-notation as o-notation is to O-notation. We use !-notation to denote a lower bound that is not asymptotically tight. One way to define it is by

f .n/ 2 !.g.n// if and only if g.n/ 2 o.f .n// : Formally, however, we define !.g.n// (“little-omega of g of n”) as the set

!.g.n// D ff .n/ W for any positive constant c > 0, there exists a constant n0 > 0 such that 0 � cg.n/ b : We say that f .n/ is asymptotically smaller than g.n/ if f .n/ D o.g.n//, and f .n/ is asymptotically larger than g.n/ if f .n/ D !.g.n//.

One property of real numbers, however, does not carry over to asymptotic nota- tion:

Trichotomy: For any two real numbers a and b, exactly one of the following must hold: a < b, a = b, or a > b.

Although any two real numbers can be compared, not all functions are asymptotically comparable. That is, for two functions f(n) and g(n), it may be the case that neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds. For example, we cannot compare the functions n and n^(1+sin n) using asymptotic notation, since the value of the exponent in n^(1+sin n) oscillates between 0 and 2, taking on all values in between.

Exercises

3.1-1 Let f(n) and g(n) be asymptotically nonnegative functions. Using the basic definition of Θ-notation, prove that max(f(n), g(n)) = Θ(f(n) + g(n)).

3.1-2 Show that for any real constants a and b, where b > 0,

(n + a)^b = Θ(n^b).                                          (3.2)


3.1-3 Explain why the statement, “The running time of algorithm A is at least O.n2/,” is meaningless.

3.1-4 Is 2^(n+1) = O(2^n)? Is 2^(2n) = O(2^n)?

3.1-5 Prove Theorem 3.1.

3.1-6 Prove that the running time of an algorithm is ‚.g.n// if and only if its worst-case running time is O.g.n// and its best-case running time is �.g.n//.

3.1-7 Prove that o(g(n)) ∩ ω(g(n)) is the empty set.

3.1-8 We can extend our notation to the case of two parameters n and m that can go to infinity independently at different rates. For a given function g(n, m), we denote by O(g(n, m)) the set of functions

O(g(n, m)) = {f(n, m) : there exist positive constants c, n0, and m0 such that 0 ≤ f(n, m) ≤ c·g(n, m) for all n ≥ n0 or m ≥ m0}.

Give corresponding definitions for Ω(g(n, m)) and Θ(g(n, m)).

3.2 Standard notations and common functions

This section reviews some standard mathematical functions and notations and ex- plores the relationships among them. It also illustrates the use of the asymptotic notations.

Monotonicity

A function f(n) is monotonically increasing if m ≤ n implies f(m) ≤ f(n). Similarly, it is monotonically decreasing if m ≤ n implies f(m) ≥ f(n). A function f(n) is strictly increasing if m < n implies f(m) < f(n), and strictly decreasing if m < n implies f(m) > f(n).


Floors and ceilings

For any real number x, we denote the greatest integer less than or equal to x by bxc (read “the floor of x”) and the least integer greater than or equal to x by dxe (read “the ceiling of x”). For all real x,

x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1 .   (3.3)

For any integer n,

⌈n/2⌉ + ⌊n/2⌋ = n ,

and for any real number x ≥ 0 and integers a, b > 0,

⌈⌈x/a⌉ / b⌉ = ⌈x/(ab)⌉ ,   (3.4)
⌊⌊x/a⌋ / b⌋ = ⌊x/(ab)⌋ ,   (3.5)
⌈a/b⌉ ≤ (a + (b − 1))/b ,   (3.6)
⌊a/b⌋ ≥ (a − (b − 1))/b .   (3.7)

The floor function f(x) = ⌊x⌋ is monotonically increasing, as is the ceiling function f(x) = ⌈x⌉.
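The identities above are easy to spot-check numerically. The following short Python sketch checks (3.3)–(3.7) on a handful of sample values; it is only an illustration under those sampled inputs, not a proof.

import math

# Spot-check identities (3.3)-(3.7) on a few sample values (illustration only).
for x in [0.25, 7.5, 19.0, 124.0]:
    assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1                    # (3.3)
    for a in [2, 3, 5]:
        for b in [2, 4, 7]:
            assert math.ceil(math.ceil(x / a) / b) == math.ceil(x / (a * b))     # (3.4)
            assert math.floor(math.floor(x / a) / b) == math.floor(x / (a * b))  # (3.5)
for n in [1, 2, 9, 10]:
    assert math.ceil(n / 2) + math.floor(n / 2) == n
for a in [1, 7, 22]:
    for b in [1, 2, 5]:
        assert math.ceil(a / b) <= (a + (b - 1)) / b                             # (3.6)
        assert math.floor(a / b) >= (a - (b - 1)) / b                            # (3.7)
print("all sampled floor/ceiling identities hold")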

Modular arithmetic

For any integer a and any positive integer n, the value a mod n is the remainder (or residue) of the quotient a=n:

a mod n = a − n⌊a/n⌋ .   (3.8)

It follows that

0 ≤ a mod n < n .   (3.9)

If (a mod n) = (b mod n), we write a ≡ b (mod n) and say that a is equivalent to b, modulo n.

Polynomials

Given a nonnegative integer d, a polynomial in n of degree d is a function p(n) of the form

p(n) = Σ_{i=0}^{d} a_i n^i ,

where the constants a_0, a_1, ..., a_d are the coefficients of the polynomial and a_d ≠ 0. A polynomial is asymptotically positive if and only if a_d > 0. For an asymptotically positive polynomial p(n) of degree d, we have p(n) = Θ(n^d). For any real constant a ≥ 0, the function n^a is monotonically increasing, and for any real constant a ≤ 0, the function n^a is monotonically decreasing. We say that a function f(n) is polynomially bounded if f(n) = O(n^k) for some constant k.

Exponentials

For all real a > 0, m, and n, we have the following identities:

a^0 = 1 ,
a^1 = a ,
a^{−1} = 1/a ,
(a^m)^n = a^{mn} ,
(a^m)^n = (a^n)^m ,
a^m a^n = a^{m+n} .

For all n and a ≥ 1, the function a^n is monotonically increasing in n. When convenient, we shall assume 0^0 = 1.

We can relate the rates of growth of polynomials and exponentials by the fol- lowing fact. For all real constants a and b such that a > 1,

lim_{n→∞} n^b / a^n = 0 ,   (3.10)

from which we can conclude that

n^b = o(a^n) .

Thus, any exponential function with a base strictly greater than 1 grows faster than any polynomial function.

Using e to denote 2:71828 : : :, the base of the natural logarithm function, we have for all real x,

e^x = 1 + x + x²/2! + x³/3! + ··· = Σ_{i=0}^{∞} x^i / i! ,   (3.11)


where "!" denotes the factorial function defined later in this section. For all real x, we have the inequality

e^x ≥ 1 + x ,   (3.12)

where equality holds only when x = 0. When |x| ≤ 1, we have the approximation

1 + x ≤ e^x ≤ 1 + x + x² .   (3.13)

When x → 0, the approximation of e^x by 1 + x is quite good:

e^x = 1 + x + Θ(x²) .

(In this equation, the asymptotic notation is used to describe the limiting behavior as x → 0 rather than as x → ∞.) We have for all x,

lim_{n→∞} (1 + x/n)^n = e^x .   (3.14)
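Equation (3.14) is easy to see numerically. The following Python sketch (an illustration only, with an arbitrarily chosen x) prints (1 + x/n)^n alongside e^x for increasing n:

import math

x = 1.5
for n in [10, 100, 10_000, 1_000_000]:
    print(n, (1 + x / n) ** n, math.exp(x))   # the two values converge as n grows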

Logarithms

We shall use the following notations:

lg n = log₂ n    (binary logarithm) ,
ln n = log_e n   (natural logarithm) ,
lg^k n = (lg n)^k    (exponentiation) ,
lg lg n = lg(lg n)   (composition) .

An important notational convention we shall adopt is that logarithm functions will apply only to the next term in the formula, so that lg n + k will mean (lg n) + k and not lg(n + k). If we hold b > 1 constant, then for n > 0, the function log_b n is strictly increasing.

For all real a > 0, b > 0, c > 0, and n,

a = b^{log_b a} ,
log_c(ab) = log_c a + log_c b ,
log_b a^n = n log_b a ,
log_b a = log_c a / log_c b ,   (3.15)
log_b(1/a) = −log_b a ,
log_b a = 1 / log_a b ,
a^{log_b c} = c^{log_b a} ,   (3.16)

where, in each equation above, logarithm bases are not 1.


By equation (3.15), changing the base of a logarithm from one constant to an- other changes the value of the logarithm by only a constant factor, and so we shall often use the notation “lg n” when we don’t care about constant factors, such as in O-notation. Computer scientists find 2 to be the most natural base for logarithms because so many algorithms and data structures involve splitting a problem into two parts.

There is a simple series expansion for ln(1 + x) when |x| < 1:

ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ··· .

We also have the following inequalities for x > −1:

x/(1 + x) ≤ ln(1 + x) ≤ x ,   (3.17)

where equality holds only for x = 0.

We say that a function f(n) is polylogarithmically bounded if f(n) = O(lg^k n) for some constant k. We can relate the growth of polynomials and polylogarithms by substituting lg n for n and 2^a for a in equation (3.10), yielding

lim_{n→∞} lg^b n / (2^a)^{lg n} = lim_{n→∞} lg^b n / n^a = 0 .

From this limit, we can conclude that

lg^b n = o(n^a)

for any constant a > 0. Thus, any positive polynomial function grows faster than any polylogarithmic function.

Factorials

The notation n! (read "n factorial") is defined for integers n ≥ 0 as

n! = { 1              if n = 0 ,
     { n · (n − 1)!   if n > 0 .

Thus, n! = 1 · 2 · 3 ··· n.

A weak upper bound on the factorial function is n! ≤ n^n, since each of the n

terms in the factorial product is at most n. Stirling’s approximation,

n! = √(2πn) (n/e)^n (1 + Θ(1/n)) ,   (3.18)


where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower bound as well. As Exercise 3.2-3 asks you to prove,

n! = o(n^n) ,
n! = ω(2^n) ,
lg(n!) = Θ(n lg n) ,   (3.19)

where Stirling's approximation is helpful in proving equation (3.19). The following equation also holds for all n ≥ 1:

n! = √(2πn) (n/e)^n e^{α_n} ,   (3.20)

where

1/(12n + 1) < α_n < 1/(12n) .   (3.21)

Functional iteration

We use the notation f^(i)(n) to denote the function f(n) iteratively applied i times to an initial value of n. Formally, let f(n) be a function over the reals. For nonnegative integers i, we recursively define

f^(i)(n) = { n                if i = 0 ,
           { f(f^(i−1)(n))    if i > 0 .

For example, if f(n) = 2n, then f^(i)(n) = 2^i n.

The iterated logarithm function

We use the notation lg* n (read "log star of n") to denote the iterated logarithm, defined as follows. Let lg^(i) n be as defined above, with f(n) = lg n. Because the logarithm of a nonpositive number is undefined, lg^(i) n is defined only if lg^(i−1) n > 0. Be sure to distinguish lg^(i) n (the logarithm function applied i times in succession, starting with argument n) from lg^i n (the logarithm of n raised to the ith power). Then we define the iterated logarithm function as

lg* n = min {i ≥ 0 : lg^(i) n ≤ 1} .

The iterated logarithm is a very slowly growing function:

lg* 2 = 1 ,
lg* 4 = 2 ,
lg* 16 = 3 ,
lg* 65536 = 4 ,
lg*(2^65536) = 5 .
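The values above can be reproduced with a few lines of Python; the function name lg_star below is illustrative, and the loop simply applies lg until the argument drops to 1 or below.

import math

def lg_star(n):
    """Iterated logarithm: how many times lg must be applied to n
    until the result is at most 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

for n in [2, 4, 16, 65536, 2 ** 65536]:
    print(lg_star(n))   # prints 1, 2, 3, 4, 5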


Since the number of atoms in the observable universe is estimated to be about 1080, which is much less than 265536, we rarely encounter an input size n such that lg� n > 5.

Fibonacci numbers

We define the Fibonacci numbers by the following recurrence:

F_0 = 0 ,
F_1 = 1 ,   (3.22)
F_i = F_{i−1} + F_{i−2}   for i ≥ 2 .

Thus, each Fibonacci number is the sum of the two previous ones, yielding the sequence

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ... .
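A minimal Python sketch of recurrence (3.22), useful for generating the sequence above (the helper name fibonacci is illustrative):

def fibonacci(k):
    """Return the list F_0, F_1, ..., F_k defined by recurrence (3.22)."""
    fib = [0, 1]
    for i in range(2, k + 1):
        fib.append(fib[i - 1] + fib[i - 2])
    return fib[:k + 1]

print(fibonacci(10))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]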

Fibonacci numbers are related to the golden ratio φ and to its conjugate φ̂, which are the two roots of the equation

x² = x + 1   (3.23)

and are given by the following formulas (see Exercise 3.2-6):

φ = (1 + √5)/2 = 1.61803... ,   (3.24)
φ̂ = (1 − √5)/2 = −0.61803... .   (3.25)

Specifically, we have

F_i = (φ^i − φ̂^i)/√5 ,

which we can prove by induction (Exercise 3.2-7). Since |φ̂| < 1, we have |φ̂^i|/√5 < 1/√5 < 1/2, which implies that the ith Fibonacci number F_i is equal to φ^i/√5 rounded to the nearest integer. Thus, Fibonacci numbers grow exponentially.

Problems

3-1 Asymptotic behavior of polynomials
Let

p(n) = Σ_{i=0}^{d} a_i n^i ,

where a_d > 0, be a degree-d polynomial in n, and let k be a constant. Use the definitions of the asymptotic notations to prove the following properties.

a. If k ≥ d, then p(n) = O(n^k).

b. If k ≤ d, then p(n) = Ω(n^k).

c. If k = d, then p(n) = Θ(n^k).

d. If k > d, then p(n) = o(n^k).

e. If k < d, then p(n) = ω(n^k).

3-2 Relative asymptotic growths
Indicate, for each pair of expressions (A, B) in the table below, whether A is O, o, Ω, ω, or Θ of B. Assume that k ≥ 1, ε > 0, and c > 1 are constants. Your answer should be in the form of the table with "yes" or "no" written in each box.

        A            B            O    o    Ω    ω    Θ
a.      lg^k n       n^ε
b.      n^k          c^n
c.      √n           n^{sin n}
d.      2^n          2^{n/2}
e.      n^{lg c}     c^{lg n}
f.      lg(n!)       lg(n^n)

3-3 Ordering by asymptotic growth rates a. Rank the following functions by order of growth; that is, find an arrangement

g1, g2, ..., g30 of the functions satisfying g1 = Ω(g2), g2 = Ω(g3), ..., g29 = Ω(g30). Partition your list into equivalence classes such that functions f(n) and g(n) are in the same class if and only if f(n) = Θ(g(n)).


lg(lg* n)      2^{lg* n}       (√2)^{lg n}    n²             n!             (lg n)!
(3/2)^n        n³              lg² n          lg(n!)         2^{2^n}        n^{1/lg n}
ln ln n        lg* n           n · 2^n        n^{lg lg n}    ln n           1
2^{lg n}       (lg n)^{lg n}   e^n            4^{lg n}       (n + 1)!       √(lg n)
lg*(lg n)      2^{√(2 lg n)}   n              2^n            n lg n         2^{2^{n+1}}

b. Give an example of a single nonnegative function f(n) such that for all functions g_i(n) in part (a), f(n) is neither O(g_i(n)) nor Ω(g_i(n)).

3-4 Asymptotic notation properties Let f .n/ and g.n/ be asymptotically positive functions. Prove or disprove each of the following conjectures.

a. f(n) = O(g(n)) implies g(n) = O(f(n)).

b. f(n) + g(n) = Θ(min(f(n), g(n))).

c. f(n) = O(g(n)) implies lg(f(n)) = O(lg(g(n))), where lg(g(n)) ≥ 1 and f(n) ≥ 1 for all sufficiently large n.

d. f(n) = O(g(n)) implies 2^{f(n)} = O(2^{g(n)}).

e. f(n) = O((f(n))²).

f. f(n) = O(g(n)) implies g(n) = Ω(f(n)).

g. f(n) = Θ(f(n/2)).

h. f(n) + o(f(n)) = Θ(f(n)).

3-5 Variations on O and Ω
Some authors define Ω in a slightly different way than we do; let's use Ω∞ (read "omega infinity") for this alternative definition. We say that f(n) = Ω∞(g(n)) if there exists a positive constant c such that f(n) ≥ cg(n) ≥ 0 for infinitely many integers n.

a. Show that for any two functions f(n) and g(n) that are asymptotically nonnegative, either f(n) = O(g(n)) or f(n) = Ω∞(g(n)) or both, whereas this is not true if we use Ω in place of Ω∞.


b. Describe the potential advantages and disadvantages of using Ω∞ instead of Ω to characterize the running times of programs.

Some authors also define O in a slightly different manner; let's use O' for the alternative definition. We say that f(n) = O'(g(n)) if and only if |f(n)| = O(g(n)).

c. What happens to each direction of the "if and only if" in Theorem 3.1 if we substitute O' for O but still use Ω?

Some authors define Õ (read "soft-oh") to mean O with logarithmic factors ignored:

Õ(g(n)) = {f(n) : there exist positive constants c, k, and n0 such that 0 ≤ f(n) ≤ cg(n) lg^k(n) for all n ≥ n0} .

d. Define Ω̃ and Θ̃ in a similar manner. Prove the corresponding analog to Theorem 3.1.

3-6 Iterated functions
We can apply the iteration operator * used in the lg* function to any monotonically increasing function f(n) over the reals. For a given constant c ∈ R, we define the iterated function f*_c by

f*_c(n) = min {i ≥ 0 : f^(i)(n) ≤ c} ,

which need not be well defined in all cases. In other words, the quantity f*_c(n) is the number of iterated applications of the function f required to reduce its argument down to c or less.

For each of the following functions f(n) and constants c, give as tight a bound as possible on f*_c(n).

        f(n)         c       f*_c(n)
a.      n − 1        0
b.      lg n         1
c.      n/2          1
d.      n/2          2
e.      √n           2
f.      √n           1
g.      n^{1/3}      2
h.      n / lg n     2


Chapter notes

Knuth [209] traces the origin of the O-notation to a number-theory text by P. Bach- mann in 1892. The o-notation was invented by E. Landau in 1909 for his discussion of the distribution of prime numbers. The � and ‚ notations were advocated by Knuth [213] to correct the popular, but technically sloppy, practice in the literature of using O-notation for both upper and lower bounds. Many people continue to use the O-notation where the ‚-notation is more technically precise. Further dis- cussion of the history and development of asymptotic notations appears in works by Knuth [209, 213] and Brassard and Bratley [54].

Not all authors define the asymptotic notations in the same way, although the various definitions agree in most common situations. Some of the alternative def- initions encompass functions that are not asymptotically nonnegative, as long as their absolute values are appropriately bounded.

Equation (3.20) is due to Robbins [297]. Other properties of elementary math- ematical functions can be found in any good mathematical reference, such as Abramowitz and Stegun [1] or Zwillinger [362], or in a calculus book, such as Apostol [18] or Thomas et al. [334]. Knuth [209] and Graham, Knuth, and Patash- nik [152] contain a wealth of material on discrete mathematics as used in computer science.

4 Divide-and-Conquer

In Section 2.3.1, we saw how merge sort serves as an example of the divide-and- conquer paradigm. Recall that in divide-and-conquer, we solve a problem recur- sively, applying three steps at each level of the recursion:

Divide the problem into a number of subproblems that are smaller instances of the same problem.

Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

Combine the solutions to the subproblems into the solution for the original prob- lem.

When the subproblems are large enough to solve recursively, we call that the recur- sive case. Once the subproblems become small enough that we no longer recurse, we say that the recursion “bottoms out” and that we have gotten down to the base case. Sometimes, in addition to subproblems that are smaller instances of the same problem, we have to solve subproblems that are not quite the same as the original problem. We consider solving such subproblems as part of the combine step.

In this chapter, we shall see more algorithms based on divide-and-conquer. The first one solves the maximum-subarray problem: it takes as input an array of num- bers, and it determines the contiguous subarray whose values have the greatest sum. Then we shall see two divide-and-conquer algorithms for multiplying n n matri- ces. One runs in ‚.n3/ time, which is no better than the straightforward method of multiplying square matrices. But the other, Strassen’s algorithm, runs in O.n2:81/ time, which beats the straightforward method asymptotically.

Recurrences

Recurrences go hand in hand with the divide-and-conquer paradigm, because they give us a natural way to characterize the running times of divide-and-conquer algo- rithms. A recurrence is an equation or inequality that describes a function in terms


of its value on smaller inputs. For example, in Section 2.3.2 we described the worst-case running time T .n/ of the MERGE-SORT procedure by the recurrence

T(n) = { Θ(1)             if n = 1 ,
       { 2T(n/2) + Θ(n)   if n > 1 ,   (4.1)

whose solution we claimed to be T .n/ D ‚.n lg n/. Recurrences can take many forms. For example, a recursive algorithm might

divide subproblems into unequal sizes, such as a 2=3-to-1=3 split. If the divide and combine steps take linear time, such an algorithm would give rise to the recurrence T .n/ D T .2n=3/C T .n=3/C‚.n/.

Subproblems are not necessarily constrained to being a constant fraction of the original problem size. For example, a recursive version of linear search (see Exercise 2.1-3) would create just one subproblem containing only one el- ement fewer than the original problem. Each recursive call would take con- stant time plus the time for the recursive calls it makes, yielding the recurrence T .n/ D T .n � 1/C‚.1/.

This chapter offers three methods for solving recurrences—that is, for obtaining asymptotic “‚” or “O” bounds on the solution:

� In the substitution method, we guess a bound and then use mathematical in- duction to prove our guess correct.

� The recursion-tree method converts the recurrence into a tree whose nodes represent the costs incurred at various levels of the recursion. We use techniques for bounding summations to solve the recurrence.

� The master method provides bounds for recurrences of the form

T(n) = aT(n/b) + f(n) ,   (4.2)

where a ≥ 1, b > 1, and f(n) is a given function. Such recurrences arise frequently. A recurrence of the form in equation (4.2) characterizes a divide-and-conquer algorithm that creates a subproblems, each of which is 1/b the size of the original problem, and in which the divide and combine steps together take f(n) time.

To use the master method, you will need to memorize three cases, but once you do that, you will easily be able to determine asymptotic bounds for many simple recurrences. We will use the master method to determine the running times of the divide-and-conquer algorithms for the maximum-subarray problem and for matrix multiplication, as well as for other algorithms based on divide- and-conquer elsewhere in this book.
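For the common special case in which f(n) = Θ(n^d) for a constant d ≥ 0, the three cases of the master method reduce to comparing d with log_b a. The following Python sketch encodes just that special case; the function name master_bound is illustrative rather than from the text, and the full theorem in Section 4.5 is more general.

import math

def master_bound(a, b, d):
    """Solution of T(n) = a*T(n/b) + Theta(n^d) by the master method,
    for the special case of a polynomial driving function (a >= 1, b > 1, d >= 0)."""
    crit = math.log(a, b)                 # the critical exponent log_b a
    if math.isclose(d, crit):
        return f"Theta(n^{d} lg n)"       # case 2: every level contributes equally
    if d < crit:
        return f"Theta(n^{crit:.3f})"     # case 1: the leaves dominate
    return f"Theta(n^{d})"                # case 3: the root dominates

print(master_bound(2, 2, 1))   # merge sort / maximum subarray: Theta(n^1 lg n)
print(master_bound(8, 2, 2))   # simple recursive matrix multiplication: Theta(n^3.000)
print(master_bound(7, 2, 2))   # Strassen's recurrence: Theta(n^2.807)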


Occasionally, we shall see recurrences that are not equalities but rather inequalities, such as T(n) ≤ 2T(n/2) + Θ(n). Because such a recurrence states only an upper bound on T(n), we will couch its solution using O-notation rather than Θ-notation. Similarly, if the inequality were reversed to T(n) ≥ 2T(n/2) + Θ(n), then because the recurrence gives only a lower bound on T(n), we would use Ω-notation in its solution.

Technicalities in recurrences

In practice, we neglect certain technical details when we state and solve recur- rences. For example, if we call MERGE-SORT on n elements when n is odd, we end up with subproblems of size bn=2c and dn=2e. Neither size is actually n=2, because n=2 is not an integer when n is odd. Technically, the recurrence describing the worst-case running time of MERGE-SORT is really

T(n) = { Θ(1)                              if n = 1 ,
       { T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n)        if n > 1 .   (4.3)

Boundary conditions represent another class of details that we typically ignore. Since the running time of an algorithm on a constant-sized input is a constant, the recurrences that arise from the running times of algorithms generally have T .n/ D ‚.1/ for sufficiently small n. Consequently, for convenience, we shall generally omit statements of the boundary conditions of recurrences and assume that T .n/ is constant for small n. For example, we normally state recurrence (4.1) as

T(n) = 2T(n/2) + Θ(n) ,   (4.4)

without explicitly giving values for small n. The reason is that although changing the value of T(1) changes the exact solution to the recurrence, the solution typically doesn't change by more than a constant factor, and so the order of growth is unchanged.

When we state and solve recurrences, we often omit floors, ceilings, and bound- ary conditions. We forge ahead without these details and later determine whether or not they matter. They usually do not, but you should know when they do. Ex- perience helps, and so do some theorems stating that these details do not affect the asymptotic bounds of many recurrences characterizing divide-and-conquer algo- rithms (see Theorem 4.1). In this chapter, however, we shall address some of these details and illustrate the fine points of recurrence solution methods.


4.1 The maximum-subarray problem

Suppose that you have been offered the opportunity to invest in the Volatile Chemical Corporation. Like the chemicals the company produces, the stock price of the Volatile Chemical Corporation is rather volatile. You are allowed to buy one unit of stock only one time and then sell it at a later date, buying and selling after the close of trading for the day. To compensate for this restriction, you are allowed to learn what the price of the stock will be in the future. Your goal is to maximize your profit. Figure 4.1 shows the price of the stock over a 17-day period. You may buy the stock at any one time, starting after day 0, when the price is $100 per share. Of course, you would want to "buy low, sell high"—buy at the lowest possible price and later on sell at the highest possible price—to maximize your profit. Unfortunately, you might not be able to buy at the lowest price and then sell at the highest price within a given period. In Figure 4.1, the lowest price occurs after day 7, which occurs after the highest price, after day 1.

You might think that you can always maximize profit by either buying at the lowest price or selling at the highest price. For example, in Figure 4.1, we would maximize profit by buying at the lowest price, after day 7. If this strategy always worked, then it would be easy to determine how to maximize profit: find the highest and lowest prices, and then work left from the highest price to find the lowest prior price, work right from the lowest price to find the highest later price, and take the pair with the greater difference. Figure 4.2 shows a simple counterexample,

[Figure 4.1 charts the stock price over the 17-day period; the table below reproduces its data.]

Day     0    1    2    3    4    5    6   7   8    9   10   11   12   13  14  15  16
Price  100  113  110   85  105  102   86  63  81  101   94  106  101   79  94  90  97
Change      13   -3  -25   20   -3  -16 -23  18   20   -7   12   -5  -22  15  -4   7

Figure 4.1 Information about the price of stock in the Volatile Chemical Corporation after the close of trading over a period of 17 days. The horizontal axis of the chart indicates the day, and the vertical axis shows the price. The bottom row of the table gives the change in price from the previous day.


[Figure 4.2 charts a 5-day price sequence; the table below reproduces its data.]

Day     0   1   2   3   4
Price  10  11   7  10   6
Change      1  -4   3  -4

Figure 4.2 An example showing that the maximum profit does not always start at the lowest price or end at the highest price. Again, the horizontal axis indicates the day, and the vertical axis shows the price. Here, the maximum profit of $3 per share would be earned by buying after day 2 and selling after day 3. The price of $7 after day 2 is not the lowest price overall, and the price of $10 after day 3 is not the highest price overall.

demonstrating that the maximum profit sometimes comes neither by buying at the lowest price nor by selling at the highest price.

A brute-force solution

We can easily devise a brute-force solution to this problem: just try every possible pair of buy and sell dates in which the buy date precedes the sell date. A period of n days has (n choose 2) such pairs of dates. Since (n choose 2) is Θ(n²), and the best we can hope for is to evaluate each pair of dates in constant time, this approach would take Ω(n²) time. Can we do better?

A transformation

In order to design an algorithm with an o.n2/ running time, we will look at the input in a slightly different way. We want to find a sequence of days over which the net change from the first day to the last is maximum. Instead of looking at the daily prices, let us instead consider the daily change in price, where the change on day i is the difference between the prices after day i � 1 and after day i . The table in Figure 4.1 shows these daily changes in the bottom row. If we treat this row as an array A, shown in Figure 4.3, we now want to find the nonempty, contiguous subarray of A whose values have the largest sum. We call this contiguous subarray the maximum subarray. For example, in the array of Figure 4.3, the maximum subarray of AŒ1 : : 16� is AŒ8 : : 11�, with the sum 43. Thus, you would want to buy the stock just before day 8 (that is, after day 7) and sell it after day 11, earning a profit of $43 per share.
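The transformation is easy to carry out on the data of Figure 4.1. The following Python lines (an illustration only) compute the change array from the prices and confirm that the subarray claimed above sums to 43; note that Python lists are 0-indexed, so A[8..11] in the text corresponds to the slice A[7:11]:

prices = [100, 113, 110, 85, 105, 102, 86, 63, 81, 101, 94, 106, 101, 79, 94, 90, 97]
A = [prices[i] - prices[i - 1] for i in range(1, len(prices))]   # daily changes
print(A)             # [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(sum(A[7:11]))  # 43, the sum of the maximum subarray A[8..11] (1-indexed)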

At first glance, this transformation does not help. We still need to check (n−1 choose 2) = Θ(n²) subarrays for a period of n days. Exercise 4.1-2 asks you to show


[Figure 4.3 depicts the change array A = <13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7>, indexed 1 through 16, with the maximum subarray A[8..11] highlighted.]

Figure 4.3 The change in stock prices as a maximum-subarray problem. Here, the subarray A[8..11], with sum 43, has the greatest sum of any contiguous subarray of array A.

that although computing the cost of one subarray might take time proportional to the length of the subarray, when computing all ‚.n2/ subarray sums, we can orga- nize the computation so that each subarray sum takes O.1/ time, given the values of previously computed subarray sums, so that the brute-force solution takes ‚.n2/ time.

So let us seek a more efficient solution to the maximum-subarray problem. When doing so, we will usually speak of “a” maximum subarray rather than “the” maximum subarray, since there could be more than one subarray that achieves the maximum sum.

The maximum-subarray problem is interesting only when the array contains some negative numbers. If all the array entries were nonnegative, then the maximum-subarray problem would present no challenge, since the entire array would give the greatest sum.

A solution using divide-and-conquer

Let’s think about how we might solve the maximum-subarray problem using the divide-and-conquer technique. Suppose we want to find a maximum subar- ray of the subarray AŒlow : : high�. Divide-and-conquer suggests that we divide the subarray into two subarrays of as equal size as possible. That is, we find the midpoint, say mid, of the subarray, and consider the subarrays AŒlow : : mid� and AŒmidC 1 : : high�. As Figure 4.4(a) shows, any contiguous subarray AŒi : : j � of AŒlow : : high� must lie in exactly one of the following places:

- entirely in the subarray A[low..mid], so that low ≤ i ≤ j ≤ mid,
- entirely in the subarray A[mid+1..high], so that mid < i ≤ j ≤ high, or
- crossing the midpoint, so that low ≤ i ≤ mid < j ≤ high.

We can find maximum subarrays of A[low..mid] and A[mid+1..high] recursively, because these two subproblems are smaller instances of the original problem. All that remains is to find a maximum subarray that crosses the midpoint, and then to take a subarray with the largest sum of the three. Any subarray crossing the midpoint is made of two pieces A[i..mid] and A[mid+1..j], so we can find a maximum crossing subarray in time linear in the size of A[low..high] by finding the best such pieces and combining them. The procedure FIND-MAX-CROSSING-SUBARRAY takes as input the array A and the indices low, mid, and high, and it returns a tuple containing the indices demarcating a maximum subarray that crosses the midpoint, along with the sum of the values in such a subarray.

FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
 1  left-sum = −∞
 2  sum = 0
 3  for i = mid downto low
 4      sum = sum + A[i]
 5      if sum > left-sum
 6          left-sum = sum
 7          max-left = i
 8  right-sum = −∞
 9  sum = 0
10  for j = mid + 1 to high
11      sum = sum + A[j]
12      if sum > right-sum
13          right-sum = sum
14          max-right = j
15  return (max-left, max-right, left-sum + right-sum)


This procedure works as follows. Lines 1–7 find a maximum subarray of the left half, AŒlow : : mid�. Since this subarray must contain AŒmid�, the for loop of lines 3–7 starts the index i at mid and works down to low, so that every subarray it considers is of the form AŒi : : mid�. Lines 1–2 initialize the variables left-sum, which holds the greatest sum found so far, and sum, holding the sum of the entries in AŒi : : mid�. Whenever we find, in line 5, a subarray AŒi : : mid� with a sum of values greater than left-sum, we update left-sum to this subarray’s sum in line 6, and in line 7 we update the variable max-left to record this index i . Lines 8–14 work analogously for the right half, AŒmidC1 : : high�. Here, the for loop of lines 10–14 starts the index j at midC1 and works up to high, so that every subarray it considers is of the form AŒmid C 1 : : j �. Finally, line 15 returns the indices max-left and max-right that demarcate a maximum subarray crossing the midpoint, along with the sum left-sumCright-sum of the values in the subarray AŒmax-left : : max-right�.
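A direct Python transcription of FIND-MAX-CROSSING-SUBARRAY is given below as a sketch; it uses 0-indexed arrays and -inf for the initial sums, but otherwise mirrors the pseudocode line for line.

def find_max_crossing_subarray(A, low, mid, high):
    """Return (max_left, max_right, total): the inclusive indices and the sum of a
    maximum subarray of A[low..high] that crosses the midpoint mid (0-indexed)."""
    left_sum, total, max_left = float("-inf"), 0, mid
    for i in range(mid, low - 1, -1):        # from mid down to low
        total += A[i]
        if total > left_sum:
            left_sum, max_left = total, i
    right_sum, total, max_right = float("-inf"), 0, mid + 1
    for j in range(mid + 1, high + 1):       # from mid + 1 up to high
        total += A[j]
        if total > right_sum:
            right_sum, max_right = total, j
    return max_left, max_right, left_sum + right_sum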

If the subarray A[low..high] contains n entries (so that n = high − low + 1), we claim that the call FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high) takes Θ(n) time. Since each iteration of each of the two for loops takes Θ(1) time, we just need to count up how many iterations there are altogether. The for loop of lines 3–7 makes mid − low + 1 iterations, and the for loop of lines 10–14 makes high − mid iterations, and so the total number of iterations is

(mid − low + 1) + (high − mid) = high − low + 1
                               = n .

With a linear-time FIND-MAX-CROSSING-SUBARRAY procedure in hand, we can write pseudocode for a divide-and-conquer algorithm to solve the maximum-subarray problem:

FIND-MAXIMUM-SUBARRAY(A, low, high)
 1  if high == low
 2      return (low, high, A[low])       // base case: only one element
 3  else mid = ⌊(low + high)/2⌋
 4      (left-low, left-high, left-sum) = FIND-MAXIMUM-SUBARRAY(A, low, mid)
 5      (right-low, right-high, right-sum) = FIND-MAXIMUM-SUBARRAY(A, mid + 1, high)
 6      (cross-low, cross-high, cross-sum) = FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
 7      if left-sum ≥ right-sum and left-sum ≥ cross-sum
 8          return (left-low, left-high, left-sum)
 9      elseif right-sum ≥ left-sum and right-sum ≥ cross-sum
10          return (right-low, right-high, right-sum)
11      else return (cross-low, cross-high, cross-sum)


The initial call FIND-MAXIMUM-SUBARRAY.A; 1; A: length/ will find a maxi- mum subarray of AŒ1 : : n�.

Similar to FIND-MAX-CROSSING-SUBARRAY, the recursive procedure FIND- MAXIMUM-SUBARRAY returns a tuple containing the indices that demarcate a maximum subarray, along with the sum of the values in a maximum subarray. Line 1 tests for the base case, where the subarray has just one element. A subar- ray with just one element has only one subarray—itself—and so line 2 returns a tuple with the starting and ending indices of just the one element, along with its value. Lines 3–11 handle the recursive case. Line 3 does the divide part, comput- ing the index mid of the midpoint. Let’s refer to the subarray AŒlow : : mid� as the left subarray and to AŒmid C 1 : : high� as the right subarray. Because we know that the subarray AŒlow : : high� contains at least two elements, each of the left and right subarrays must have at least one element. Lines 4 and 5 conquer by recur- sively finding maximum subarrays within the left and right subarrays, respectively. Lines 6–11 form the combine part. Line 6 finds a maximum subarray that crosses the midpoint. (Recall that because line 6 solves a subproblem that is not a smaller instance of the original problem, we consider it to be in the combine part.) Line 7 tests whether the left subarray contains a subarray with the maximum sum, and line 8 returns that maximum subarray. Otherwise, line 9 tests whether the right subarray contains a subarray with the maximum sum, and line 10 returns that max- imum subarray. If neither the left nor right subarrays contain a subarray achieving the maximum sum, then a maximum subarray must cross the midpoint, and line 11 returns it.
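A matching Python sketch of the recursive procedure appears below; it reuses find_max_crossing_subarray from the earlier sketch, and because it is 0-indexed the answer for the change array of Figure 4.1 comes out as indices 7 through 10 rather than 8 through 11.

def find_maximum_subarray(A, low, high):
    """Return (i, j, total) such that A[i..j] is a maximum subarray of A[low..high]."""
    if high == low:
        return low, high, A[low]                        # base case: one element
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    return max(left, right, cross, key=lambda t: t[2])  # keep the largest sum

A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(find_maximum_subarray(A, 0, len(A) - 1))          # (7, 10, 43)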

Analyzing the divide-and-conquer algorithm

Next we set up a recurrence that describes the running time of the recursive FIND- MAXIMUM-SUBARRAY procedure. As we did when we analyzed merge sort in Section 2.3.2, we make the simplifying assumption that the original problem size is a power of 2, so that all subproblem sizes are integers. We denote by T .n/ the running time of FIND-MAXIMUM-SUBARRAY on a subarray of n elements. For starters, line 1 takes constant time. The base case, when n D 1, is easy: line 2 takes constant time, and so

T .1/ D ‚.1/ : (4.5) The recursive case occurs when n > 1. Lines 1 and 3 take constant time. Each

of the subproblems solved in lines 4 and 5 is on a subarray of n=2 elements (our assumption that the original problem size is a power of 2 ensures that n=2 is an integer), and so we spend T .n=2/ time solving each of them. Because we have to solve two subproblems—for the left subarray and for the right subarray—the contribution to the running time from lines 4 and 5 comes to 2T .n=2/. As we have


already seen, the call to FIND-MAX-CROSSING-SUBARRAY in line 6 takes ‚.n/ time. Lines 7–11 take only ‚.1/ time. For the recursive case, therefore, we have

T(n) = Θ(1) + 2T(n/2) + Θ(n) + Θ(1)
     = 2T(n/2) + Θ(n) .   (4.6)

Combining equations (4.5) and (4.6) gives us a recurrence for the running time T .n/ of FIND-MAXIMUM-SUBARRAY:

T(n) = { Θ(1)             if n = 1 ,
       { 2T(n/2) + Θ(n)   if n > 1 .   (4.7)

This recurrence is the same as recurrence (4.1) for merge sort. As we shall see from the master method in Section 4.5, this recurrence has the solution T .n/ D ‚.n lg n/. You might also revisit the recursion tree in Figure 2.5 to un- derstand why the solution should be T .n/ D ‚.n lg n/.

Thus, we see that the divide-and-conquer method yields an algorithm that is asymptotically faster than the brute-force method. With merge sort and now the maximum-subarray problem, we begin to get an idea of how powerful the divide- and-conquer method can be. Sometimes it will yield the asymptotically fastest algorithm for a problem, and other times we can do even better. As Exercise 4.1-5 shows, there is in fact a linear-time algorithm for the maximum-subarray problem, and it does not use divide-and-conquer.

Exercises

4.1-1 What does FIND-MAXIMUM-SUBARRAY return when all elements of A are nega- tive?

4.1-2 Write pseudocode for the brute-force method of solving the maximum-subarray problem. Your procedure should run in ‚.n2/ time.

4.1-3 Implement both the brute-force and recursive algorithms for the maximum- subarray problem on your own computer. What problem size n0 gives the crossover point at which the recursive algorithm beats the brute-force algorithm? Then, change the base case of the recursive algorithm to use the brute-force algorithm whenever the problem size is less than n0. Does that change the crossover point?

4.1-4 Suppose we change the definition of the maximum-subarray problem to allow the result to be an empty subarray, where the sum of the values of an empty subar-


ray is 0. How would you change any of the algorithms that do not allow empty subarrays to permit an empty subarray to be the result?

4.1-5 Use the following ideas to develop a nonrecursive, linear-time algorithm for the maximum-subarray problem. Start at the left end of the array, and progress toward the right, keeping track of the maximum subarray seen so far. Knowing a maximum subarray of AŒ1 : : j �, extend the answer to find a maximum subarray ending at in- dex jC1 by using the following observation: a maximum subarray of AŒ1 : : j C 1� is either a maximum subarray of AŒ1 : : j � or a subarray AŒi : : j C 1�, for some 1 � i � j C 1. Determine a maximum subarray of the form AŒi : : j C 1� in constant time based on knowing a maximum subarray ending at index j .

4.2 Strassen’s algorithm for matrix multiplication

If you have seen matrices before, then you probably know how to multiply them. (Otherwise, you should read Section D.1 in Appendix D.) If A = (a_ij) and B = (b_ij) are square n × n matrices, then in the product C = A · B, we define the entry c_ij, for i, j = 1, 2, ..., n, by

c_ij = Σ_{k=1}^{n} a_ik · b_kj .   (4.8)

We must compute n² matrix entries, and each is the sum of n values. The following procedure takes n × n matrices A and B and multiplies them, returning their n × n product C. We assume that each matrix has an attribute rows, giving the number of rows in the matrix.

SQUARE-MATRIX-MULTIPLY(A, B)
1  n = A.rows
2  let C be a new n × n matrix
3  for i = 1 to n
4      for j = 1 to n
5          c_ij = 0
6          for k = 1 to n
7              c_ij = c_ij + a_ik · b_kj
8  return C

The SQUARE-MATRIX-MULTIPLY procedure works as follows. The for loop of lines 3–7 computes the entries of each row i , and within a given row i , the


for loop of lines 4–7 computes each of the entries cij , for each column j . Line 5 initializes cij to 0 as we start computing the sum given in equation (4.8), and each iteration of the for loop of lines 6–7 adds in one more term of equation (4.8).

Because each of the triply-nested for loops runs exactly n iterations, and each execution of line 7 takes constant time, the SQUARE-MATRIX-MULTIPLY proce- dure takes ‚.n3/ time.
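The procedure translates directly into Python; the sketch below multiplies the two matrices of Exercise 4.2-1 as a usage example.

def square_matrix_multiply(A, B):
    """Theta(n^3) multiplication of n x n matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(square_matrix_multiply([[1, 3], [7, 5]], [[6, 8], [4, 2]]))   # [[18, 14], [62, 66]]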

You might at first think that any matrix multiplication algorithm must take �.n3/ time, since the natural definition of matrix multiplication requires that many mul- tiplications. You would be incorrect, however: we have a way to multiply matrices in o.n3/ time. In this section, we shall see Strassen’s remarkable recursive algo- rithm for multiplying n n matrices. It runs in ‚.nlg 7/ time, which we shall show in Section 4.5. Since lg 7 lies between 2:80 and 2:81, Strassen’s algorithm runs in O.n2:81/ time, which is asymptotically better than the simple SQUARE-MATRIX- MULTIPLY procedure.

A simple divide-and-conquer algorithm

To keep things simple, when we use a divide-and-conquer algorithm to compute the matrix product C D A � B , we assume that n is an exact power of 2 in each of the n n matrices. We make this assumption because in each divide step, we will divide n n matrices into four n=2 n=2 matrices, and by assuming that n is an exact power of 2, we are guaranteed that as long as n � 2, the dimension n=2 is an integer.

Suppose that we partition each of A, B, and C into four n/2 × n/2 matrices

A = [ A11  A12 ]      B = [ B11  B12 ]      C = [ C11  C12 ]         (4.9)
    [ A21  A22 ] ,        [ B21  B22 ] ,        [ C21  C22 ] ,

so that we rewrite the equation C = A · B as

[ C11  C12 ]   [ A11  A12 ]   [ B11  B12 ]
[ C21  C22 ] = [ A21  A22 ] · [ B21  B22 ] .                         (4.10)

Equation (4.10) corresponds to the four equations

C11 = A11 · B11 + A12 · B21 ,   (4.11)
C12 = A11 · B12 + A12 · B22 ,   (4.12)
C21 = A21 · B11 + A22 · B21 ,   (4.13)
C22 = A21 · B12 + A22 · B22 .   (4.14)

Each of these four equations specifies two multiplications of n/2 × n/2 matrices and the addition of their n/2 × n/2 products. We can use these equations to create a straightforward, recursive, divide-and-conquer algorithm:


SQUARE-MATRIX-MULTIPLY-RECURSIVE(A, B)
 1  n = A.rows
 2  let C be a new n × n matrix
 3  if n == 1
 4      c11 = a11 · b11
 5  else partition A, B, and C as in equations (4.9)
 6      C11 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A11, B11)
            + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A12, B21)
 7      C12 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A11, B12)
            + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A12, B22)
 8      C21 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A21, B11)
            + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A22, B21)
 9      C22 = SQUARE-MATRIX-MULTIPLY-RECURSIVE(A21, B12)
            + SQUARE-MATRIX-MULTIPLY-RECURSIVE(A22, B22)
10  return C

This pseudocode glosses over one subtle but important implementation detail. How do we partition the matrices in line 5? If we were to create 12 new n=2 n=2 matrices, we would spend ‚.n2/ time copying entries. In fact, we can partition the matrices without copying entries. The trick is to use index calculations. We identify a submatrix by a range of row indices and a range of column indices of the original matrix. We end up representing a submatrix a little differently from how we represent the original matrix, which is the subtlety we are glossing over. The advantage is that, since we can specify submatrices by index calculations, executing line 5 takes only ‚.1/ time (although we shall see that it makes no difference asymptotically to the overall running time whether we copy or partition in place).

Now, we derive a recurrence to characterize the running time of SQUARE- MATRIX-MULTIPLY-RECURSIVE. Let T .n/ be the time to multiply two n n matrices using this procedure. In the base case, when n D 1, we perform just the one scalar multiplication in line 4, and so

T .1/ D ‚.1/ : (4.15) The recursive case occurs when n > 1. As discussed, partitioning the matrices in

line 5 takes ‚.1/ time, using index calculations. In lines 6–9, we recursively call SQUARE-MATRIX-MULTIPLY-RECURSIVE a total of eight times. Because each recursive call multiplies two n=2 n=2 matrices, thereby contributing T .n=2/ to the overall running time, the time taken by all eight recursive calls is 8T .n=2/. We also must account for the four matrix additions in lines 6–9. Each of these matrices contains n2=4 entries, and so each of the four matrix additions takes ‚.n2/ time. Since the number of matrix additions is a constant, the total time spent adding ma-


trices in lines 6–9 is ‚.n2/. (Again, we use index calculations to place the results of the matrix additions into the correct positions of matrix C , with an overhead of ‚.1/ time per entry.) The total time for the recursive case, therefore, is the sum of the partitioning time, the time for all the recursive calls, and the time to add the matrices resulting from the recursive calls:

T(n) = Θ(1) + 8T(n/2) + Θ(n²)
     = 8T(n/2) + Θ(n²) .   (4.16)

Notice that if we implemented partitioning by copying matrices, which would cost ‚.n2/ time, the recurrence would not change, and hence the overall running time would increase by only a constant factor.

Combining equations (4.15) and (4.16) gives us the recurrence for the running time of SQUARE-MATRIX-MULTIPLY-RECURSIVE:

T(n) = { Θ(1)              if n = 1 ,
       { 8T(n/2) + Θ(n²)   if n > 1 .   (4.17)

As we shall see from the master method in Section 4.5, recurrence (4.17) has the solution T .n/ D ‚.n3/. Thus, this simple divide-and-conquer approach is no faster than the straightforward SQUARE-MATRIX-MULTIPLY procedure.

Before we continue on to examining Strassen’s algorithm, let us review where the components of equation (4.16) came from. Partitioning each n n matrix by index calculation takes ‚.1/ time, but we have two matrices to partition. Although you could say that partitioning the two matrices takes ‚.2/ time, the constant of 2 is subsumed by the ‚-notation. Adding two matrices, each with, say, k entries, takes ‚.k/ time. Since the matrices we add each have n2=4 entries, you could say that adding each pair takes ‚.n2=4/ time. Again, however, the ‚-notation subsumes the constant factor of 1=4, and we say that adding two n2=4 n2=4 matrices takes ‚.n2/ time. We have four such matrix additions, and once again, instead of saying that they take ‚.4n2/ time, we say that they take ‚.n2/ time. (Of course, you might observe that we could say that the four matrix additions take ‚.4n2=4/ time, and that 4n2=4 D n2, but the point here is that ‚-notation subsumes constant factors, whatever they are.) Thus, we end up with two terms of ‚.n2/, which we can combine into one.

When we account for the eight recursive calls, however, we cannot just sub- sume the constant factor of 8. In other words, we must say that together they take 8T .n=2/ time, rather than just T .n=2/ time. You can get a feel for why by looking back at the recursion tree in Figure 2.5, for recurrence (2.1) (which is identical to recurrence (4.7)), with the recursive case T .n/ D 2T .n=2/C‚.n/. The factor of 2 determined how many children each tree node had, which in turn determined how many terms contributed to the sum at each level of the tree. If we were to ignore


the factor of 8 in equation (4.16) or the factor of 2 in recurrence (4.1), the recursion tree would just be linear, rather than “bushy,” and each level would contribute only one term to the sum.

Bear in mind, therefore, that although asymptotic notation subsumes constant multiplicative factors, recursive notation such as T .n=2/ does not.

Strassen’s method

The key to Strassen’s method is to make the recursion tree slightly less bushy. That is, instead of performing eight recursive multiplications of n=2 n=2 matrices, it performs only seven. The cost of eliminating one matrix multiplication will be several new additions of n=2 n=2 matrices, but still only a constant number of additions. As before, the constant number of matrix additions will be subsumed by ‚-notation when we set up the recurrence equation to characterize the running time.

Strassen’s method is not at all obvious. (This might be the biggest understate- ment in this book.) It has four steps:

1. Divide the input matrices A and B and output matrix C into n=2 n=2 subma- trices, as in equation (4.9). This step takes ‚.1/ time by index calculation, just as in SQUARE-MATRIX-MULTIPLY-RECURSIVE.

2. Create 10 matrices S1; S2; : : : ; S10, each of which is n=2 n=2 and is the sum or difference of two matrices created in step 1. We can create all 10 matrices in ‚.n2/ time.

3. Using the submatrices created in step 1 and the 10 matrices created in step 2, recursively compute seven matrix products P1; P2; : : : ; P7. Each matrix Pi is n=2 n=2.

4. Compute the desired submatrices C11; C12; C21; C22 of the result matrix C by adding and subtracting various combinations of the Pi matrices. We can com- pute all four submatrices in ‚.n2/ time.

We shall see the details of steps 2–4 in a moment, but we already have enough information to set up a recurrence for the running time of Strassen’s method. Let us assume that once the matrix size n gets down to 1, we perform a simple scalar mul- tiplication, just as in line 4 of SQUARE-MATRIX-MULTIPLY-RECURSIVE. When n > 1, steps 1, 2, and 4 take a total of ‚.n2/ time, and step 3 requires us to per- form seven multiplications of n=2 n=2 matrices. Hence, we obtain the following recurrence for the running time T .n/ of Strassen’s algorithm:

T(n) = { Θ(1)              if n = 1 ,
       { 7T(n/2) + Θ(n²)   if n > 1 .   (4.18)


We have traded off one matrix multiplication for a constant number of matrix ad- ditions. Once we understand recurrences and their solutions, we shall see that this tradeoff actually leads to a lower asymptotic running time. By the master method in Section 4.5, recurrence (4.18) has the solution T .n/ D ‚.nlg 7/.

We now proceed to describe the details. In step 2, we create the following 10 matrices:

S1 = B12 − B22 ,
S2 = A11 + A12 ,
S3 = A21 + A22 ,
S4 = B21 − B11 ,
S5 = A11 + A22 ,
S6 = B11 + B22 ,
S7 = A12 − A22 ,
S8 = B21 + B22 ,
S9 = A11 − A21 ,
S10 = B11 + B12 .

Since we must add or subtract n/2 × n/2 matrices 10 times, this step does indeed take Θ(n²) time.

In step 3, we recursively multiply n=2 n=2 matrices seven times to compute the following n=2 n=2 matrices, each of which is the sum or difference of products of A and B submatrices:

P1 = A11 · S1  = A11 · B12 − A11 · B22 ,
P2 = S2 · B22  = A11 · B22 + A12 · B22 ,
P3 = S3 · B11  = A21 · B11 + A22 · B11 ,
P4 = A22 · S4  = A22 · B21 − A22 · B11 ,
P5 = S5 · S6   = A11 · B11 + A11 · B22 + A22 · B11 + A22 · B22 ,
P6 = S7 · S8   = A12 · B21 + A12 · B22 − A22 · B21 − A22 · B22 ,
P7 = S9 · S10  = A11 · B11 + A11 · B12 − A21 · B11 − A21 · B12 .

Note that the only multiplications we need to perform are those in the middle column of the above equations. The right-hand column just shows what these products equal in terms of the original submatrices created in step 1.

Step 4 adds and subtracts the Pi matrices created in step 3 to construct the four n/2 × n/2 submatrices of the product C. We start with

C11 = P5 + P4 − P2 + P6 .

Expanding out the right-hand side, with the expansion of each Pi on its own line, we see that C11 equals

  (A11·B11 + A11·B22 + A22·B11 + A22·B22)
+ (A22·B21 − A22·B11)
− (A11·B22 + A12·B22)
+ (A12·B21 + A12·B22 − A22·B21 − A22·B22)
= A11·B11 + A12·B21 ,

which corresponds to equation (4.11).

Similarly, we set

C12 = P1 + P2 ,

and so C12 equals

(A11·B12 − A11·B22) + (A11·B22 + A12·B22) = A11·B12 + A12·B22 ,

corresponding to equation (4.12).

Setting

C21 = P3 + P4

makes C21 equal

(A21·B11 + A22·B11) + (A22·B21 − A22·B11) = A21·B11 + A22·B21 ,

corresponding to equation (4.13).

Finally, we set

C22 = P5 + P1 − P3 − P7 ,

so that C22 equals

  (A11·B11 + A11·B22 + A22·B11 + A22·B22)
+ (A11·B12 − A11·B22)
− (A21·B11 + A22·B11)
− (A11·B11 + A11·B12 − A21·B11 − A21·B12)
= A22·B22 + A21·B12 ,


which corresponds to equation (4.14). Altogether, we add or subtract n=2 n=2 matrices eight times in step 4, and so this step indeed takes ‚.n2/ time.

Thus, we see that Strassen’s algorithm, comprising steps 1–4, produces the cor- rect matrix product and that recurrence (4.18) characterizes its running time. Since we shall see in Section 4.5 that this recurrence has the solution T .n/ D ‚.nlg 7/, Strassen’s method is asymptotically faster than the straightforward SQUARE- MATRIX-MULTIPLY procedure. The notes at the end of this chapter discuss some of the practical aspects of Strassen’s algorithm.
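One quick way to convince yourself that steps 2–4 are algebraically consistent is to check them in the 2 × 2 case, where each submatrix A11, ..., B22 is a single number and the S, P, and C formulas can be evaluated with ordinary arithmetic. The Python sketch below does so for the matrices of Exercise 4.2-1; it is a spot check of the formulas above, not a full implementation of the algorithm.

# Scalar check of Strassen's formulas for a 2 x 2 product (submatrices are scalars).
a11, a12, a21, a22 = 1, 3, 7, 5
b11, b12, b21, b22 = 6, 8, 4, 2
s1 = b12 - b22; s2 = a11 + a12; s3 = a21 + a22; s4 = b21 - b11; s5 = a11 + a22
s6 = b11 + b22; s7 = a12 - a22; s8 = b21 + b22; s9 = a11 - a21; s10 = b11 + b12
p1 = a11 * s1; p2 = s2 * b22; p3 = s3 * b11; p4 = a22 * s4
p5 = s5 * s6;  p6 = s7 * s8;  p7 = s9 * s10
c11 = p5 + p4 - p2 + p6
c12 = p1 + p2
c21 = p3 + p4
c22 = p5 + p1 - p3 - p7
# Compare with equations (4.11)-(4.14) computed directly.
assert (c11, c12, c21, c22) == (a11*b11 + a12*b21, a11*b12 + a12*b22,
                                a21*b11 + a22*b21, a21*b12 + a22*b22)
print(c11, c12, c21, c22)   # 18 14 62 66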

Exercises

Note: Although Exercises 4.2-3, 4.2-4, and 4.2-5 are about variants on Strassen’s algorithm, you should read Section 4.5 before trying to solve them.

Use Strassen's algorithm to compute the matrix product

[ 1  3 ] [ 6  8 ]
[ 7  5 ] [ 4  2 ] .

Show your work.

4.2-2 Write pseudocode for Strassen’s algorithm.

4.2-3 How would you modify Strassen’s algorithm to multiply n n matrices in which n is not an exact power of 2? Show that the resulting algorithm runs in time ‚.nlg 7/.

4.2-4 What is the largest k such that if you can multiply 3 3 matrices using k multi- plications (not assuming commutativity of multiplication), then you can multiply n n matrices in time o.nlg 7/? What would the running time of this algorithm be? 4.2-5 V. Pan has discovered a way of multiplying 68 68 matrices using 132,464 mul- tiplications, a way of multiplying 70 70 matrices using 143,640 multiplications, and a way of multiplying 72 72 matrices using 155,424 multiplications. Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare to Strassen’s algorithm?


4.2-6 How quickly can you multiply a kn n matrix by an n kn matrix, using Strassen’s algorithm as a subroutine? Answer the same question with the order of the input matrices reversed.

4.2-7 Show how to multiply the complex numbers a C bi and c C di using only three multiplications of real numbers. The algorithm should take a, b, c, and d as input and produce the real component ac � bd and the imaginary component ad C bc separately.

4.3 The substitution method for solving recurrences

Now that we have seen how recurrences characterize the running times of divide- and-conquer algorithms, we will learn how to solve recurrences. We start in this section with the “substitution” method.

The substitution method for solving recurrences comprises two steps:

1. Guess the form of the solution.

2. Use mathematical induction to find the constants and show that the solution works.

We substitute the guessed solution for the function when applying the inductive hypothesis to smaller values; hence the name “substitution method.” This method is powerful, but we must be able to guess the form of the answer in order to apply it.

We can use the substitution method to establish either upper or lower bounds on a recurrence. As an example, let us determine an upper bound on the recurrence

T(n) = 2T(⌊n/2⌋) + n ,   (4.19)

which is similar to recurrences (4.3) and (4.4). We guess that the solution is T(n) = O(n lg n). The substitution method requires us to prove that T(n) ≤ cn lg n for an appropriate choice of the constant c > 0. We start by assuming that this bound holds for all positive m < n, in particular for m = ⌊n/2⌋, yielding T(⌊n/2⌋) ≤ c⌊n/2⌋ lg(⌊n/2⌋). Substituting into the recurrence yields

T(n) ≤ 2(c⌊n/2⌋ lg(⌊n/2⌋)) + n
     ≤ cn lg(n/2) + n
     = cn lg n − cn lg 2 + n
     = cn lg n − cn + n
     ≤ cn lg n ,

where the last step holds as long as c ≥ 1.

Mathematical induction now requires us to show that our solution holds for the boundary conditions; that is, we must be able to choose c large enough that the bound also works for small n. If, say, T(1) = 1 is the sole boundary condition, then for n = 1 the bound would require T(1) ≤ c · 1 · lg 1 = 0, which contradicts T(1) = 1. We sidestep this difficulty by recalling that asymptotic notation requires the bound to hold only for n ≥ n0, where n0 is a constant we get to choose. Observe that for n > 3, the recurrence does not depend directly on T(1). Thus, we can replace T(1) by T(2) and T(3) as the base cases in the inductive proof, letting n0 = 2. Note that we make a distinction between the base case of the recurrence (n = 1) and the base cases of the inductive proof (n = 2 and n = 3). With T(1) = 1, we derive from the recurrence that T(2) = 4 and T(3) = 5. Now we can complete the inductive proof that T(n) ≤ cn lg n for some constant c ≥ 1 by choosing c large enough so that T(2) ≤ c · 2 lg 2 and T(3) ≤ c · 3 lg 3. As it turns out, any choice of c ≥ 2 suffices for the base cases of n = 2 and n = 3 to hold. For most of the recurrences we shall examine, it is straightforward to extend boundary conditions to make the inductive assumption work for small n, and we shall not always explicitly work out the details.
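The bound just argued is easy to sanity-check numerically. The Python sketch below evaluates recurrence (4.19) with the boundary condition T(1) = 1 and verifies T(n) ≤ 2n lg n for n up to a few thousand; this is only a spot check, not a substitute for the inductive proof.

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Recurrence (4.19) with boundary condition T(1) = 1."""
    return 1 if n == 1 else 2 * T(n // 2) + n

print(all(T(n) <= 2 * n * math.log2(n) for n in range(2, 5000)))   # True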

Making a good guess

Unfortunately, there is no general way to guess the correct solutions to recurrences. Guessing a solution takes experience and, occasionally, creativity. Fortunately, though, you can use some heuristics to help you become a good guesser. You can also use recursion trees, which we shall see in Section 4.4, to generate good guesses.

If a recurrence is similar to one you have seen before, then guessing a similar solution is reasonable. As an example, consider the recurrence

T(n) = 2T(⌊n/2⌋ + 17) + n ,

which looks difficult because of the added "17" in the argument to T on the right-hand side. Intuitively, however, this additional term cannot substantially affect the


solution to the recurrence. When n is large, the difference between bn=2c and bn=2c C 17 is not that large: both cut n nearly evenly in half. Consequently, we make the guess that T .n/ D O.n lg n/, which you can verify as correct by using the substitution method (see Exercise 4.3-6).

Another way to make a good guess is to prove loose upper and lower bounds on the recurrence and then reduce the range of uncertainty. For example, we might start with a lower bound of T .n/ D �.n/ for the recurrence (4.19), since we have the term n in the recurrence, and we can prove an initial upper bound of T .n/ D O.n2/. Then, we can gradually lower the upper bound and raise the lower bound until we converge on the correct, asymptotically tight solution of T .n/ D ‚.n lg n/.

Subtleties

Sometimes you might correctly guess an asymptotic bound on the solution of a recurrence, but somehow the math fails to work out in the induction. The problem frequently turns out to be that the inductive assumption is not strong enough to prove the detailed bound. If you revise the guess by subtracting a lower-order term when you hit such a snag, the math often goes through.

Consider the recurrence

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1 .

We guess that the solution is T(n) = O(n), and we try to show that T(n) ≤ cn for an appropriate choice of the constant c. Substituting our guess in the recurrence, we obtain

T(n) ≤ c⌊n/2⌋ + c⌈n/2⌉ + 1
     = cn + 1 ,

which does not imply T(n) ≤ cn for any choice of c. We might be tempted to try a larger guess, say T(n) = O(n²). Although we can make this larger guess work, our original guess of T(n) = O(n) is correct. In order to show that it is correct, however, we must make a stronger inductive hypothesis.

Intuitively, our guess is nearly right: we are off only by the constant 1, a lower-order term. Nevertheless, mathematical induction does not work unless we prove the exact form of the inductive hypothesis. We overcome our difficulty by subtracting a lower-order term from our previous guess. Our new guess is T(n) ≤ cn − d, where d ≥ 0 is a constant. We now have

T(n) ≤ (c⌊n/2⌋ − d) + (c⌈n/2⌉ − d) + 1
     = cn − 2d + 1
     ≤ cn − d ,


as long as d � 1. As before, we must choose the constant c large enough to handle the boundary conditions.

You might find the idea of subtracting a lower-order term counterintuitive. Af- ter all, if the math does not work out, we should increase our guess, right? Not necessarily! When proving an upper bound by induction, it may actually be more difficult to prove that a weaker upper bound holds, because in order to prove the weaker bound, we must use the same weaker bound inductively in the proof. In our current example, when the recurrence has more than one recursive term, we get to subtract out the lower-order term of the proposed bound once per recursive term. In the above example, we subtracted out the constant d twice, once for the T .bn=2c/ term and once for the T .dn=2e/ term. We ended up with the inequality T .n/ � cn� 2d C 1, and it was easy to find values of d to make cn� 2d C 1 be less than or equal to cn � d .

Avoiding pitfalls

It is easy to err in the use of asymptotic notation. For example, in the recur- rence (4.19) we can falsely “prove” T .n/ D O.n/ by guessing T .n/ � cn and then arguing

T(n) ≤ 2(c⌊n/2⌋) + n
     ≤ cn + n
     = O(n) ,   ⇐ wrong!!

since c is a constant. The error is that we have not proved the exact form of the inductive hypothesis, that is, that T .n/ � cn. We therefore will explicitly prove that T .n/ � cn when we want to show that T .n/ D O.n/.

Changing variables

Sometimes, a little algebraic manipulation can make an unknown recurrence simi- lar to one you have seen before. As an example, consider the recurrence

T(n) = 2T(⌊√n⌋) + lg n ,

which looks difficult. We can simplify this recurrence, though, with a change of variables. For convenience, we shall not worry about rounding off values, such as √n, to be integers. Renaming m = lg n yields

T(2^m) = 2T(2^{m/2}) + m .

We can now rename S(m) = T(2^m) to produce the new recurrence

S(m) = 2S(m/2) + m ,

which is very much like recurrence (4.19). Indeed, this new recurrence has the same solution: S(m) = O(m lg m). Changing back from S(m) to T(n), we obtain

T(n) = T(2^m) = S(m) = O(m lg m) = O(lg n lg lg n) .

Exercises

4.3-1
Show that the solution of T(n) = T(n − 1) + n is O(n²).

4.3-2
Show that the solution of T(n) = T(⌈n/2⌉) + 1 is O(lg n).

4.3-3
We saw that the solution of T(n) = 2T(⌊n/2⌋) + n is O(n lg n). Show that the solution of this recurrence is also Ω(n lg n). Conclude that the solution is Θ(n lg n).

4.3-4
Show that by making a different inductive hypothesis, we can overcome the difficulty with the boundary condition T(1) = 1 for recurrence (4.19) without adjusting the boundary conditions for the inductive proof.

4.3-5
Show that Θ(n lg n) is the solution to the "exact" recurrence (4.3) for merge sort.

4.3-6
Show that the solution to T(n) = 2T(⌊n/2⌋ + 17) + n is O(n lg n).

4.3-7
Using the master method in Section 4.5, you can show that the solution to the recurrence T(n) = 4T(n/3) + n is T(n) = Θ(n^{log₃ 4}). Show that a substitution proof with the assumption T(n) ≤ cn^{log₃ 4} fails. Then show how to subtract off a lower-order term to make a substitution proof work.

4.3-8
Using the master method in Section 4.5, you can show that the solution to the recurrence T(n) = 4T(n/2) + n² is T(n) = Θ(n²). Show that a substitution proof with the assumption T(n) ≤ cn² fails. Then show how to subtract off a lower-order term to make a substitution proof work.

4.3-9
Solve the recurrence T(n) = 3T(√n) + log n by making a change of variables. Your solution should be asymptotically tight. Do not worry about whether values are integral.

4.4 The recursion-tree method for solving recurrences

Although you can use the substitution method to provide a succinct proof that a solution to a recurrence is correct, you might have trouble coming up with a good guess. Drawing out a recursion tree, as we did in our analysis of the merge sort recurrence in Section 2.3.2, serves as a straightforward way to devise a good guess. In a recursion tree, each node represents the cost of a single subproblem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion.

A recursion tree is best used to generate a good guess, which you can then verify by the substitution method. When using a recursion tree to generate a good guess, you can often tolerate a small amount of “sloppiness,” since you will be verifying your guess later on. If you are very careful when drawing out a recursion tree and summing the costs, however, you can use a recursion tree as a direct proof of a solution to a recurrence. In this section, we will use recursion trees to generate good guesses, and in Section 4.6, we will use recursion trees directly to prove the theorem that forms the basis of the master method.

For example, let us see how a recursion tree would provide a good guess for the recurrence T(n) = 3T(⌊n/4⌋) + Θ(n^2). We start by focusing on finding an upper bound for the solution. Because we know that floors and ceilings usually do not matter when solving recurrences (here's an example of sloppiness that we can tolerate), we create a recursion tree for the recurrence T(n) = 3T(n/4) + cn^2, having written out the implied constant coefficient c > 0.

Figure 4.5 shows how we derive the recursion tree for T(n) = 3T(n/4) + cn^2. For convenience, we assume that n is an exact power of 4 (another example of tolerable sloppiness) so that all subproblem sizes are integers. Part (a) of the figure shows T(n), which we expand in part (b) into an equivalent tree representing the recurrence. The cn^2 term at the root represents the cost at the top level of recursion, and the three subtrees of the root represent the costs incurred by the subproblems of size n/4. Part (c) shows this process carried one step further by expanding each node with cost T(n/4) from part (b). The cost for each of the three children of the root is c(n/4)^2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence.

Figure 4.5 Constructing a recursion tree for the recurrence T(n) = 3T(n/4) + cn^2. Part (a) shows T(n), which progressively expands in (b)–(d) to form the recursion tree. The fully expanded tree in part (d) has height log_4 n (it has log_4 n + 1 levels); the per-level costs are cn^2, (3/16)cn^2, (3/16)^2 cn^2, ..., the bottom level of n^{log_4 3} leaves costs Θ(n^{log_4 3}), and the total is O(n^2).


Because subproblem sizes decrease by a factor of 4 each time we go down one level, we eventually must reach a boundary condition. How far from the root do we reach one? The subproblem size for a node at depth i is n/4^i. Thus, the subproblem size hits n = 1 when n/4^i = 1 or, equivalently, when i = log_4 n. Thus, the tree has log_4 n + 1 levels (at depths 0, 1, 2, ..., log_4 n).

Next we determine the cost at each level of the tree. Each level has three times more nodes than the level above, and so the number of nodes at depth i is 3^i. Because subproblem sizes reduce by a factor of 4 for each level we go down from the root, each node at depth i, for i = 0, 1, 2, ..., log_4 n − 1, has a cost of c(n/4^i)^2. Multiplying, we see that the total cost over all nodes at depth i, for i = 0, 1, 2, ..., log_4 n − 1, is 3^i c(n/4^i)^2 = (3/16)^i cn^2. The bottom level, at depth log_4 n, has 3^{log_4 n} = n^{log_4 3} nodes, each contributing cost T(1), for a total cost of n^{log_4 3} T(1), which is Θ(n^{log_4 3}), since we assume that T(1) is a constant.

Now we add up the costs over all levels to determine the cost for the entire tree:

T(n) = cn^2 + (3/16)cn^2 + (3/16)^2 cn^2 + ⋯ + (3/16)^{log_4 n − 1} cn^2 + Θ(n^{log_4 3})
     = Σ_{i=0}^{log_4 n − 1} (3/16)^i cn^2 + Θ(n^{log_4 3})
     = ((3/16)^{log_4 n} − 1)/((3/16) − 1) · cn^2 + Θ(n^{log_4 3})      (by equation (A.5)).

This last formula looks somewhat messy until we realize that we can again take advantage of small amounts of sloppiness and use an infinite decreasing geometric series as an upper bound. Backing up one step and applying equation (A.6), we have

T(n) = Σ_{i=0}^{log_4 n − 1} (3/16)^i cn^2 + Θ(n^{log_4 3})
     < Σ_{i=0}^{∞} (3/16)^i cn^2 + Θ(n^{log_4 3})
     = (1/(1 − 3/16)) cn^2 + Θ(n^{log_4 3})
     = (16/13) cn^2 + Θ(n^{log_4 3})
     = O(n^2).

Thus, we have derived a guess of T(n) = O(n^2) for our original recurrence T(n) = 3T(⌊n/4⌋) + Θ(n^2). Since the root alone contributes a cost of Θ(n^2), Ω(n^2) is also a lower bound, so if O(n^2) is indeed an upper bound then it is a tight bound.

Now we can use the substitution method to verify that our guess was correct, that is, that T(n) = O(n^2) is an upper bound for the recurrence. We want to show that T(n) ≤ dn^2 for some constant d > 0. Using the same constant c > 0 as before, we have

T(n) ≤ 3T(⌊n/4⌋) + cn^2
     ≤ 3d⌊n/4⌋^2 + cn^2
     ≤ 3d(n/4)^2 + cn^2
     = (3/16)dn^2 + cn^2
     ≤ dn^2,

where the last step holds as long as d ≥ (16/13)c.
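A quick numerical check of this guess is easy. The following Python sketch (not from the book; it assumes n is an exact power of 4 and uses c = 1 and a constant leaf cost of 1) tabulates the per-level costs of the recursion tree and confirms that the total stays within a small constant factor of n^2:

def recursion_tree_cost(n, c=1.0):
    # Per-level costs of the recursion tree for T(n) = 3 T(n/4) + c*n^2,
    # where n is an exact power of 4 (a simplifying assumption, as in the text).
    depth, m = 0, n
    while m > 1:              # the tree has log_4 n levels of internal nodes
        m //= 4
        depth += 1
    internal = sum((3 / 16) ** i * c * n * n for i in range(depth))
    leaves = 3 ** depth       # n^{log_4 3} leaves, each contributing a constant cost
    return internal + leaves

for k in (4, 8, 12):
    n = 4 ** k
    print(n, recursion_tree_cost(n) / (n * n))   # ratio approaches 16/13, so the total is Theta(n^2)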

In another, more intricate, example, Figure 4.6 shows the recursion tree for

T(n) = T(n/3) + T(2n/3) + O(n).

(Again, we omit floor and ceiling functions for simplicity.) As before, we let c represent the constant factor in the O(n) term. When we add the values across the levels of the recursion tree shown in the figure, we get a value of cn for every level.

The longest simple path from the root to a leaf is n → (2/3)n → (2/3)^2 n → ⋯ → 1. Since (2/3)^k n = 1 when k = log_{3/2} n, the height of the tree is log_{3/2} n.

Intuitively, we expect the solution to the recurrence to be at most the number of levels times the cost of each level, or O(cn log_{3/2} n) = O(n lg n). Figure 4.6 shows only the top levels of the recursion tree, however, and not every level in the tree contributes a cost of cn. Consider the cost of the leaves. If this recursion tree were a complete binary tree of height log_{3/2} n, there would be 2^{log_{3/2} n} = n^{log_{3/2} 2} leaves. Since the cost of each leaf is a constant, the total cost of all leaves would then be Θ(n^{log_{3/2} 2}) which, since log_{3/2} 2 is a constant strictly greater than 1, is ω(n lg n). This recursion tree is not a complete binary tree, however, and so it has fewer than n^{log_{3/2} 2} leaves. Moreover, as we go down from the root, more and more internal nodes are absent. Consequently, levels toward the bottom of the recursion tree contribute less than cn to the total cost. We could work out an accurate accounting of all costs, but remember that we are just trying to come up with a guess to use in the substitution method. Let us tolerate the sloppiness and attempt to show that a guess of O(n lg n) for the upper bound is correct.

Indeed, we can use the substitution method to verify that O(n lg n) is an upper bound for the solution to the recurrence. We show that T(n) ≤ dn lg n, where d is a suitable positive constant. We have

T(n) ≤ T(n/3) + T(2n/3) + cn
     ≤ d(n/3) lg(n/3) + d(2n/3) lg(2n/3) + cn
     = (d(n/3) lg n − d(n/3) lg 3) + (d(2n/3) lg n − d(2n/3) lg(3/2)) + cn
     = dn lg n − d((n/3) lg 3 + (2n/3) lg(3/2)) + cn
     = dn lg n − d((n/3) lg 3 + (2n/3) lg 3 − (2n/3) lg 2) + cn
     = dn lg n − dn(lg 3 − 2/3) + cn
     ≤ dn lg n,

as long as d ≥ c/(lg 3 − (2/3)). Thus, we did not need to perform a more accurate accounting of costs in the recursion tree.
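The same guess can be checked numerically. The following Python sketch (not from the book) evaluates the recurrence bottom-up, rounding both subproblem sizes down and assuming T(0) = T(1) = 1 as base cases, and watches the ratio T(n)/(n lg n):

import math

def T_values(N):
    # Bottom-up table for T(n) = T(n/3) + T(2n/3) + n (subproblem sizes rounded down),
    # with T(0) = T(1) = 1 assumed as base cases.
    T = [1] * (N + 1)
    for n in range(2, N + 1):
        T[n] = T[n // 3] + T[2 * n // 3] + n
    return T

N = 10**6
T = T_values(N)
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, T[n] / (n * math.log2(n)))   # ratio stays bounded, consistent with the O(n lg n) guess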

Exercises

4.4-1
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 3T(⌊n/2⌋) + n. Use the substitution method to verify your answer.

4.4-2
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = T(n/2) + n^2. Use the substitution method to verify your answer.

4.4-3
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 4T(n/2 + 2) + n. Use the substitution method to verify your answer.

4.4-4
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 2T(n − 1) + 1. Use the substitution method to verify your answer.

4.4-5
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = T(n − 1) + T(n/2) + n. Use the substitution method to verify your answer.

4.4-6
Argue that the solution to the recurrence T(n) = T(n/3) + T(2n/3) + cn, where c is a constant, is Ω(n lg n) by appealing to a recursion tree.

4.4-7
Draw the recursion tree for T(n) = 4T(⌊n/2⌋) + cn, where c is a constant, and provide a tight asymptotic bound on its solution. Verify your bound by the substitution method.

4.4-8
Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(n − a) + T(a) + cn, where a ≥ 1 and c > 0 are constants.

4.4-9
Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(αn) + T((1 − α)n) + cn, where α is a constant in the range 0 < α < 1 and c > 0 is also a constant.

4.5 The master method for solving recurrences

The master method provides a “cookbook” method for solving recurrences of the form

T(n) = aT(n/b) + f(n),      (4.20)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. To use the master method, you will need to memorize three cases, but then you will be able to solve many recurrences quite easily, often without pencil and paper.


The recurrence (4.20) describes the running time of an algorithm that divides a problem of size n into a subproblems, each of size n/b, where a and b are positive constants. The a subproblems are solved recursively, each in time T(n/b). The function f(n) encompasses the cost of dividing the problem and combining the results of the subproblems. For example, the recurrence arising from Strassen's algorithm has a = 7, b = 2, and f(n) = Θ(n^2).

As a matter of technical correctness, the recurrence is not actually well defined, because n/b might not be an integer. Replacing each of the a terms T(n/b) with either T(⌊n/b⌋) or T(⌈n/b⌉) will not affect the asymptotic behavior of the recurrence, however. (We will prove this assertion in the next section.) We normally find it convenient, therefore, to omit the floor and ceiling functions when writing divide-and-conquer recurrences of this form.

The master theorem

The master method depends on the following theorem.

Theorem 4.1 (Master theorem)
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence

T(n) = aT(n/b) + f(n),

where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) has the following asymptotic bounds:

1. If f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n).
3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

In each of the three cases, we compare the function f(n) with the function n^{log_b a}. Intuitively, the larger of the two functions determines the solution to the recurrence: in case 1 the function n^{log_b a} is the larger, and the solution is T(n) = Θ(n^{log_b a}); in case 3 the function f(n) is the larger, and the solution is T(n) = Θ(f(n)); in case 2 the two functions are the same size, we multiply by a logarithmic factor, and the solution is T(n) = Θ(n^{log_b a} lg n) = Θ(f(n) lg n).

Beyond this intuition, you need to be aware of some technicalities. In the first case, not only must f(n) be smaller than n^{log_b a}, it must be polynomially smaller; that is, f(n) must be asymptotically smaller than n^{log_b a} by a factor of n^ε for some constant ε > 0. In the third case, not only must f(n) be larger than n^{log_b a}, it also must be polynomially larger and in addition satisfy the "regularity" condition that af(n/b) ≤ cf(n). This condition is satisfied by most of the polynomially bounded functions that we shall encounter.

Note that the three cases do not cover all the possibilities for f(n). There is a gap between cases 1 and 2 when f(n) is smaller than n^{log_b a} but not polynomially smaller. Similarly, there is a gap between cases 2 and 3 when f(n) is larger than n^{log_b a} but not polynomially larger. If the function f(n) falls into one of these gaps, or if the regularity condition in case 3 fails to hold, you cannot use the master method to solve the recurrence.

Using the master method

To use the master method, we simply determine which case (if any) of the master theorem applies and write down the answer.

As a first example, consider

T(n) = 9T(n/3) + n.

For this recurrence, we have a = 9, b = 3, f(n) = n, and thus we have that n^{log_b a} = n^{log_3 9} = Θ(n^2). Since f(n) = O(n^{log_3 9 − ε}), where ε = 1, we can apply case 1 of the master theorem and conclude that the solution is T(n) = Θ(n^2).

Now consider

T(n) = T(2n/3) + 1,

in which a = 1, b = 3/2, f(n) = 1, and n^{log_b a} = n^{log_{3/2} 1} = n^0 = 1. Case 2 applies, since f(n) = Θ(n^{log_b a}) = Θ(1), and thus the solution to the recurrence is T(n) = Θ(lg n).

For the recurrence

T(n) = 3T(n/4) + n lg n,

we have a = 3, b = 4, f(n) = n lg n, and n^{log_b a} = n^{log_4 3} = O(n^{0.793}). Since f(n) = Ω(n^{log_4 3 + ε}), where ε ≈ 0.2, case 3 applies if we can show that the regularity condition holds for f(n). For sufficiently large n, we have that af(n/b) = 3(n/4) lg(n/4) ≤ (3/4)n lg n = cf(n) for c = 3/4. Consequently, by case 3, the solution to the recurrence is T(n) = Θ(n lg n).

The master method does not apply to the recurrence

T(n) = 2T(n/2) + n lg n,

even though it appears to have the proper form: a = 2, b = 2, f(n) = n lg n, and n^{log_b a} = n. You might mistakenly think that case 3 should apply, since f(n) = n lg n is asymptotically larger than n^{log_b a} = n. The problem is that it is not polynomially larger. The ratio f(n)/n^{log_b a} = (n lg n)/n = lg n is asymptotically less than n^ε for any positive constant ε. Consequently, the recurrence falls into the gap between case 2 and case 3. (See Exercise 4.6-2 for a solution.)
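The case analysis can be mechanized for the common family of driving functions f(n) = n^k lg^p n. The following Python sketch (an illustration under that restricted assumption, not the book's procedure) compares k with log_b a and reports which case, if any, applies; for this family the regularity condition of case 3 holds automatically when k > log_b a, so the sketch does not check it separately:

import math

def master_case(a, b, k, p=0):
    """Classify T(n) = a*T(n/b) + f(n) with f(n) = n**k * (lg n)**p.

    Only a sketch for this restricted family of driving functions."""
    e = math.log(a, b)                      # the critical exponent log_b a
    if k < e:
        return f"case 1: T(n) = Theta(n^{e:.3f})"
    if k > e:
        # a*f(n/b) is about (a/b^k)*f(n) and a/b^k < 1 here, so regularity holds.
        return f"case 3: T(n) = Theta(n^{k} lg^{p} n)"
    if p == 0:
        return f"case 2: T(n) = Theta(n^{e:.3f} lg n)"
    # k == log_b a but p != 0: f(n) differs from n^{log_b a} by only a polylog
    # factor, so it falls into a gap between the cases (cf. Exercise 4.6-2).
    return "no case applies: f(n) differs from n^{log_b a} by only a polylog factor"

print(master_case(9, 3, 1))        # T(n) = 9T(n/3) + n        -> case 1
print(master_case(1, 3/2, 0))      # T(n) = T(2n/3) + 1        -> case 2
print(master_case(3, 4, 1, 1))     # T(n) = 3T(n/4) + n lg n   -> case 3
print(master_case(2, 2, 1, 1))     # T(n) = 2T(n/2) + n lg n   -> gap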

Let’s use the master method to solve the recurrences we saw in Sections 4.1 and 4.2. Recurrence (4.7),

T(n) = 2T(n/2) + Θ(n),

characterizes the running times of the divide-and-conquer algorithm for both the maximum-subarray problem and merge sort. (As is our practice, we omit stating the base case in the recurrence.) Here, we have a = 2, b = 2, f(n) = Θ(n), and thus we have that n^{log_b a} = n^{log_2 2} = n. Case 2 applies, since f(n) = Θ(n), and so we have the solution T(n) = Θ(n lg n).

Recurrence (4.17),

T(n) = 8T(n/2) + Θ(n^2),

describes the running time of the first divide-and-conquer algorithm that we saw for matrix multiplication. Now we have a = 8, b = 2, and f(n) = Θ(n^2), and so n^{log_b a} = n^{log_2 8} = n^3. Since n^3 is polynomially larger than f(n) (that is, f(n) = O(n^{3 − ε}) for ε = 1), case 1 applies, and T(n) = Θ(n^3).

Finally, consider recurrence (4.18),

T(n) = 7T(n/2) + Θ(n^2),

which describes the running time of Strassen's algorithm. Here, we have a = 7, b = 2, f(n) = Θ(n^2), and thus n^{log_b a} = n^{lg 7}. Recalling that 2.80 < lg 7 < 2.81, we see that f(n) = O(n^{lg 7 − ε}) for ε = 0.8. Again, case 1 applies, and we have the solution T(n) = Θ(n^{lg 7}).

Exercises

4.5-5
Consider the regularity condition af(n/b) ≤ cf(n) in the statement of case 3 of the master theorem. Give an example of constants a ≥ 1 and b > 1 and a function f(n) that satisfies all the conditions in case 3 of the master theorem except the regularity condition.

? 4.6 Proof of the master theorem

This section contains a proof of the master theorem (Theorem 4.1). You do not need to understand the proof in order to apply the master theorem.

The proof appears in two parts. The first part analyzes the master recurrence (4.20), under the simplifying assumption that T(n) is defined only on exact powers of b > 1, that is, for n = 1, b, b^2, .... This part gives all the intuition needed to understand why the master theorem is true. The second part shows how to extend the analysis to all positive integers n; it applies mathematical technique to the problem of handling floors and ceilings.

In this section, we shall sometimes abuse our asymptotic notation slightly by using it to describe the behavior of functions that are defined only over exact powers of b. Recall that the definitions of asymptotic notations require that

bounds be proved for all sufficiently large numbers, not just those that are powers of b. Since we could make new asymptotic notations that apply only to the set {b^i : i = 0, 1, 2, ...}, instead of to the nonnegative numbers, this abuse is minor.

Nevertheless, we must always be on guard when we use asymptotic notation over a limited domain lest we draw improper conclusions. For example, proving that T(n) = O(n) when n is an exact power of 2 does not guarantee that T(n) = O(n). The function T(n) could be defined as

T(n) = n      if n = 1, 2, 4, 8, ...,
T(n) = n^2    otherwise,

in which case the best upper bound that applies to all values of n is T(n) = O(n^2). Because of this sort of drastic consequence, we shall never use asymptotic notation over a limited domain without making it absolutely clear from the context that we are doing so.

4.6.1 The proof for exact powers

The first part of the proof of the master theorem analyzes the recurrence (4.20)

T(n) = aT(n/b) + f(n)

for the master method, under the assumption that n is an exact power of b > 1, where b need not be an integer. We break the analysis into three lemmas. The first reduces the problem of solving the master recurrence to the problem of evaluating an expression that contains a summation. The second determines bounds on this summation. The third lemma puts the first two together to prove a version of the master theorem for the case in which n is an exact power of b.

Lemma 4.2
Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. Define T(n) on exact powers of b by the recurrence

T(n) = Θ(1)               if n = 1,
T(n) = aT(n/b) + f(n)     if n = b^i,

where i is a positive integer. Then

T(n) = Θ(n^{log_b a}) + Σ_{j=0}^{log_b n − 1} a^j f(n/b^j).      (4.21)

Proof We use the recursion tree in Figure 4.7. The root of the tree has cost f(n), and it has a children, each with cost f(n/b). (It is convenient to think of a as being

Figure 4.7 The recursion tree generated by T(n) = aT(n/b) + f(n). The tree is a complete a-ary tree with n^{log_b a} leaves and height log_b n. The cost of the nodes at each depth is shown at the right — f(n), af(n/b), a^2 f(n/b^2), ..., with Θ(n^{log_b a}) for the leaves — and their sum is given in equation (4.21).

an integer, especially when visualizing the recursion tree, but the mathematics does not require it.) Each of these children has a children, making a^2 nodes at depth 2, and each of the a children has cost f(n/b^2). In general, there are a^j nodes at depth j, and each has cost f(n/b^j). The cost of each leaf is T(1) = Θ(1), and each leaf is at depth log_b n, since n/b^{log_b n} = 1. There are a^{log_b n} = n^{log_b a} leaves in the tree.

We can obtain equation (4.21) by summing the costs of the nodes at each depth in the tree, as shown in the figure. The cost for all internal nodes at depth j is a^j f(n/b^j), and so the total cost of all internal nodes is

Σ_{j=0}^{log_b n − 1} a^j f(n/b^j).

In the underlying divide-and-conquer algorithm, this sum represents the costs of dividing problems into subproblems and then recombining the subproblems. The cost of all the leaves, which is the cost of doing all n^{log_b a} subproblems of size 1, is Θ(n^{log_b a}).

In terms of the recursion tree, the three cases of the master theorem correspond to cases in which the total cost of the tree is (1) dominated by the costs in the leaves, (2) evenly distributed among the levels of the tree, or (3) dominated by the cost of the root.

The summation in equation (4.21) describes the cost of the dividing and com- bining steps in the underlying divide-and-conquer algorithm. The next lemma pro- vides asymptotic bounds on the summation’s growth.
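Equation (4.21) is easy to sanity-check numerically when n is an exact power of b. The following Python sketch (not from the book) assumes T(1) = 1 and uses 1 for the constant hidden in the Θ(n^{log_b a}) leaf term, so both sides agree exactly:

def T_direct(n, a, b, f):
    # T(n) = a*T(n/b) + f(n) for n an exact power of b, with T(1) = 1 assumed.
    if n == 1:
        return 1
    return a * T_direct(n // b, a, b, f) + f(n)

def T_by_sum(n, a, b, f):
    # Right-hand side of equation (4.21): the per-level sums plus the leaf term.
    total, power, m = 0, 1, n
    while m > 1:                 # levels j = 0, 1, ..., log_b n - 1
        total += power * f(m)    # a^j * f(n / b^j)
        power *= a
        m //= b
    return total + power         # power is now a^{log_b n} = n^{log_b a} leaves

a, b = 3, 4
f = lambda n: n * n
for k in range(1, 8):
    n = b ** k
    assert T_direct(n, a, b, f) == T_by_sum(n, a, b, f)
print("equation (4.21) matches the direct recursion for n = 4^1 .. 4^7")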

Lemma 4.3
Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. A function g(n) defined over exact powers of b by

g(n) = Σ_{j=0}^{log_b n − 1} a^j f(n/b^j)      (4.22)

has the following asymptotic bounds for exact powers of b:

1. If f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then g(n) = O(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then g(n) = Θ(n^{log_b a} lg n).
3. If af(n/b) ≤ cf(n) for some constant c < 1 and for all sufficiently large n, then g(n) = Θ(f(n)).

Lemma 4.4
Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. Define T(n) on exact powers of b by the recurrence

T(n) = Θ(1)               if n = 1,
T(n) = aT(n/b) + f(n)     if n = b^i,

where i is a positive integer. Then T(n) has the following asymptotic bounds for exact powers of b:

1. If f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n).
3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and for all sufficiently large n, then T(n) = Θ(f(n)).

4.6.2 Floors and ceilings

To complete the proof of the master theorem, we must extend the analysis to the situation in which floors and ceilings appear in the master recurrence, so that the recurrence is defined for all integers, not just exact powers of b. Consider the recurrence with a ceiling, T(n) = aT(⌈n/b⌉) + f(n), and let n_j denote the argument passed to T at depth j of the recursion, so that

n_j = n                    if j = 0,
n_j = ⌈n_{j−1}/b⌉          if j > 0.      (4.27)

Figure 4.8 The recursion tree generated by T(n) = aT(⌈n/b⌉) + f(n). The recursive argument n_j is given by equation (4.27); the tree has height ⌊log_b n⌋, the cost at depth j is a^j f(n_j), and the sum of all costs is Θ(n^{log_b a}) + Σ_{j=0}^{⌊log_b n⌋ − 1} a^j f(n_j).

Our first goal is to determine the depth k such that n_k is a constant. Using the inequality ⌈x⌉ ≤ x + 1, we obtain

n_0 ≤ n,
n_1 ≤ n/b + 1,
n_2 ≤ n/b^2 + 1/b + 1,
n_3 ≤ n/b^3 + 1/b^2 + 1/b + 1,
⋮

In general, we have

n_j ≤ n/b^j + Σ_{i=0}^{j−1} 1/b^i
    < n/b^j + b/(b − 1).

Letting j = ⌊log_b n⌋, we obtain n_{⌊log_b n⌋} < b + b/(b − 1) = O(1), so that by depth ⌊log_b n⌋ the problem size is at most a constant. From here the analysis parallels the one for exact powers of b; the key step for case 2 is to show that f(n_j) = O(n^{log_b a}/a^j). The bound f(n) = O(n^{log_b a}) implies that there exists a constant c > 0 such that for all sufficiently large n_j,

f(n_j) ≤ c(n/b^j + b/(b − 1))^{log_b a}
       = c((n/b^j)(1 + (b^j/n) · (b/(b − 1))))^{log_b a}
       = c(n^{log_b a}/a^j)(1 + (b^j/n) · (b/(b − 1)))^{log_b a}
       ≤ c(n^{log_b a}/a^j)(1 + b/(b − 1))^{log_b a}
       = O(n^{log_b a}/a^j),

since c(1 + b/(b − 1))^{log_b a} is a constant. Thus, we have proved case 2. The proof of case 1 is almost identical. The key is to prove the bound f(n_j) = O(n^{log_b a − ε}), which is similar to the corresponding proof of case 2, though the algebra is more intricate.

We have now proved the upper bounds in the master theorem for all integers n. The proof of the lower bounds is similar.
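The bound on n_j used above can also be checked numerically. The following Python sketch (not from the book) assumes an integer b for simplicity, although the text allows any real b > 1, and verifies n_j ≤ n/b^j + b/(b − 1) along the ceiling recurrence (4.27):

def check_ceiling_bound(n, b):
    # n_j is defined by n_0 = n and n_j = ceil(n_{j-1}/b); the claim is that
    # n_j <= n/b^j + b/(b-1) at every depth j.
    nj, j = n, 0
    while True:
        assert nj <= n / b**j + b / (b - 1) + 1e-9
        if nj == 1:
            return
        nj = (nj + b - 1) // b        # exact integer ceiling of nj / b
        j += 1

for n in (10, 97, 1000, 123457):
    for b in (2, 3, 7):
        check_ceiling_bound(n, b)
print("n_j <= n/b^j + b/(b-1) held in every sampled case")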

Exercises

4.6-1 ?
Give a simple and exact expression for n_j in equation (4.27) for the case in which b is a positive integer instead of an arbitrary real number.

4.6-2 ?
Show that if f(n) = Θ(n^{log_b a} lg^k n), where k ≥ 0, then the master recurrence has solution T(n) = Θ(n^{log_b a} lg^{k+1} n). For simplicity, confine your analysis to exact powers of b.

4.6-3 ?
Show that case 3 of the master theorem is overstated, in the sense that the regularity condition af(n/b) ≤ cf(n) for some constant c < 1 implies that there exists a constant ε > 0 such that f(n) = Ω(n^{log_b a + ε}).


Problems

4-1 Recurrence examples
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for n ≤ 2. Make your bounds as tight as possible, and justify your answers.

a. T(n) = 2T(n/2) + n^4.

b. T(n) = T(7n/10) + n.

c. T(n) = 16T(n/4) + n^2.

d. T(n) = 7T(n/3) + n^2.

e. T(n) = 7T(n/2) + n^2.

f. T(n) = 2T(n/4) + √n.

g. T(n) = T(n − 2) + n^2.

4-2 Parameter-passing costs
Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:

1. An array is passed by pointer. Time = Θ(1).

2. An array is passed by copying. Time = Θ(N), where N is the size of the array.

3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time = Θ(q − p + 1) if the subarray A[p..q] is passed.

a. Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let N be the size of the original problem and n be the size of a subproblem.

b. Redo part (a) for the MERGE-SORT algorithm from Section 2.3.1.

4-3 More recurrence examples
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for sufficiently small n. Make your bounds as tight as possible, and justify your answers.

a. T(n) = 4T(n/3) + n lg n.

b. T(n) = 3T(n/3) + n/lg n.

c. T(n) = 4T(n/2) + n^2 √n.

d. T(n) = 3T(n/3 − 2) + n/2.

e. T(n) = 2T(n/2) + n/lg n.

f. T(n) = T(n/2) + T(n/4) + T(n/8) + n.

g. T(n) = T(n − 1) + 1/n.

h. T(n) = T(n − 1) + lg n.

i. T(n) = T(n − 2) + 1/lg n.

j. T(n) = √n T(√n) + n.

4-4 Fibonacci numbers
This problem develops properties of the Fibonacci numbers, which are defined by recurrence (3.22). We shall use the technique of generating functions to solve the Fibonacci recurrence. Define the generating function (or formal power series) F as

F(z) = Σ_{i=0}^∞ F_i z^i
     = 0 + z + z^2 + 2z^3 + 3z^4 + 5z^5 + 8z^6 + 13z^7 + 21z^8 + ⋯,

where F_i is the ith Fibonacci number.

a. Show that F(z) = z + zF(z) + z^2 F(z).

b. Show that

F(z) = z/(1 − z − z^2)
     = z/((1 − φz)(1 − φ̂z))
     = (1/√5)(1/(1 − φz) − 1/(1 − φ̂z)),

where

φ = (1 + √5)/2 = 1.61803...

and

φ̂ = (1 − √5)/2 = −0.61803... .

c. Show that

F(z) = Σ_{i=0}^∞ (1/√5)(φ^i − φ̂^i) z^i.

d. Use part (c) to prove that F_i = φ^i/√5 for i > 0, rounded to the nearest integer. (Hint: Observe that |φ̂| < 1.)

Chapter notes

Akra and Bazzi gave a method for solving recurrences of the form

T(x) = Θ(1)                                if 1 ≤ x ≤ x_0,
T(x) = Σ_{i=1}^k a_i T(b_i x) + f(x)       if x > x_0,      (4.30)

where

- x ≥ 1 is a real number,
- x_0 is a constant such that x_0 ≥ 1/b_i and x_0 ≥ 1/(1 − b_i) for i = 1, 2, ..., k,
- a_i is a positive constant for i = 1, 2, ..., k,
- b_i is a constant in the range 0 < b_i < 1 for i = 1, 2, ..., k,

5.2-5
Let A[1..n] be an array of n distinct numbers. If i < j and A[i] > A[j], then the pair (i, j) is called an inversion of A. (See Problem 2-4 for more on inversions.) Suppose that the elements of A form a uniform random permutation of ⟨1, 2, ..., n⟩. Use indicator random variables to compute the expected number of inversions.

5.3 Randomized algorithms

In the previous section, we showed how knowing a distribution on the inputs can help us to analyze the average-case behavior of an algorithm. Many times, we do not have such knowledge, thus precluding an average-case analysis. As mentioned in Section 5.1, we may be able to use a randomized algorithm.

For a problem such as the hiring problem, in which it is helpful to assume that all permutations of the input are equally likely, a probabilistic analysis can guide the development of a randomized algorithm. Instead of assuming a distribution of inputs, we impose a distribution. In particular, before running the algorithm, we randomly permute the candidates in order to enforce the property that every permutation is equally likely. Although we have modified the algorithm, we still expect to hire a new office assistant approximately ln n times. But now we expect


this to be the case for any input, rather than for inputs drawn from a particular distribution.

Let us further explore the distinction between probabilistic analysis and randomized algorithms. In Section 5.2, we claimed that, assuming that the candidates arrive in a random order, the expected number of times we hire a new office assistant is about ln n. Note that the algorithm here is deterministic; for any particular input, the number of times a new office assistant is hired is always the same. Furthermore, the number of times we hire a new office assistant differs for different inputs, and it depends on the ranks of the various candidates. Since this number depends only on the ranks of the candidates, we can represent a particular input by listing, in order, the ranks of the candidates, i.e., ⟨rank(1), rank(2), ..., rank(n)⟩. Given the rank list A_1 = ⟨1, 2, 3, 4, 5, 6, 7, 8, 9, 10⟩, a new office assistant is always hired 10 times, since each successive candidate is better than the previous one, and lines 5–6 are executed in each iteration. Given the list of ranks A_2 = ⟨10, 9, 8, 7, 6, 5, 4, 3, 2, 1⟩, a new office assistant is hired only once, in the first iteration. Given a list of ranks A_3 = ⟨5, 2, 1, 8, 4, 7, 10, 9, 3, 6⟩, a new office assistant is hired three times, upon interviewing the candidates with ranks 5, 8, and 10. Recalling that the cost of our algorithm depends on how many times we hire a new office assistant, we see that there are expensive inputs such as A_1, inexpensive inputs such as A_2, and moderately expensive inputs such as A_3.

Consider, on the other hand, the randomized algorithm that first permutes the candidates and then determines the best candidate. In this case, we randomize in the algorithm, not in the input distribution. Given a particular input, say A3 above, we cannot say how many times the maximum is updated, because this quantity differs with each run of the algorithm. The first time we run the algorithm on A3, it may produce the permutation A1 and perform 10 updates; but the second time we run the algorithm, we may produce the permutation A2 and perform only one update. The third time we run it, we may perform some other number of updates. Each time we run the algorithm, the execution depends on the random choices made and is likely to differ from the previous execution of the algorithm. For this algorithm and many other randomized algorithms, no particular input elicits its worst-case behavior. Even your worst enemy cannot produce a bad input array, since the random permutation makes the input order irrelevant. The randomized algorithm performs badly only if the random-number generator produces an “un- lucky” permutation.

For the hiring problem, the only change needed in the code is to randomly per- mute the array.

RANDOMIZED-HIRE-ASSISTANT(n)
1  randomly permute the list of candidates
2  best = 0      // candidate 0 is a least-qualified dummy candidate
3  for i = 1 to n
4      interview candidate i
5      if candidate i is better than candidate best
6          best = i
7          hire candidate i

With this simple change, we have created a randomized algorithm whose perfor- mance matches that obtained by assuming that the candidates were presented in a random order.

Lemma 5.3
The expected hiring cost of the procedure RANDOMIZED-HIRE-ASSISTANT is O(c_h ln n).

Proof After permuting the input array, we have achieved a situation identical to that of the probabilistic analysis of HIRE-ASSISTANT.

Comparing Lemmas 5.2 and 5.3 highlights the difference between probabilistic analysis and randomized algorithms. In Lemma 5.2, we make an assumption about the input. In Lemma 5.3, we make no such assumption, although randomizing the input takes some additional time. To remain consistent with our terminology, we couched Lemma 5.2 in terms of the average-case hiring cost and Lemma 5.3 in terms of the expected hiring cost. In the remainder of this section, we discuss some issues involved in randomly permuting inputs.
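The claim that the randomized algorithm hires about ln n times on every input is easy to illustrate by simulation. Here is a minimal Python sketch (not the book's pseudocode); the input size and the number of trials are arbitrary choices:

import math
import random

def randomized_hire_count(n):
    # Permute the candidate ranks first, then hire whenever a new best appears,
    # mirroring lines 5-7 of RANDOMIZED-HIRE-ASSISTANT.
    ranks = list(range(1, n + 1))
    random.shuffle(ranks)           # the random permutation imposed by the algorithm
    best, hires = 0, 0
    for r in ranks:
        if r > best:
            best = r
            hires += 1
    return hires

n, trials = 1000, 2000
avg = sum(randomized_hire_count(n) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, n + 1))   # H_n, which is about ln n
print(f"average hires = {avg:.2f}, H_n = {harmonic:.2f}, ln n = {math.log(n):.2f}")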

Randomly permuting arrays

Many randomized algorithms randomize the input by permuting the given input array. (There are other ways to use randomization.) Here, we shall discuss two methods for doing so. We assume that we are given an array A which, without loss of generality, contains the elements 1 through n. Our goal is to produce a random permutation of the array.

One common method is to assign each element A[i] of the array a random priority P[i], and then sort the elements of A according to these priorities. For example, if our initial array is A = ⟨1, 2, 3, 4⟩ and we choose random priorities P = ⟨36, 3, 62, 19⟩, we would produce an array B = ⟨2, 4, 1, 3⟩, since the second priority is the smallest, followed by the fourth, then the first, and finally the third. We call this procedure PERMUTE-BY-SORTING:

PERMUTE-BY-SORTING(A)
1  n = A.length
2  let P[1..n] be a new array
3  for i = 1 to n
4      P[i] = RANDOM(1, n^3)
5  sort A, using P as sort keys

Line 4 chooses a random number between 1 and n^3. We use a range of 1 to n^3 to make it likely that all the priorities in P are unique. (Exercise 5.3-5 asks you to prove that the probability that all entries are unique is at least 1 − 1/n, and Exercise 5.3-6 asks how to implement the algorithm even if two or more priorities are identical.) Let us assume that all the priorities are unique.

The time-consuming step in this procedure is the sorting in line 5. As we shall see in Chapter 8, if we use a comparison sort, sorting takes Ω(n lg n) time. We can achieve this lower bound, since we have seen that merge sort takes Θ(n lg n) time. (We shall see other comparison sorts that take Θ(n lg n) time in Part II. Exercise 8.3-4 asks you to solve the very similar problem of sorting numbers in the range 0 to n^3 − 1 in O(n) time.) After sorting, if P[i] is the jth smallest priority, then A[i] lies in position j of the output. In this manner we obtain a permutation. It remains to prove that the procedure produces a uniform random permutation, that is, that the procedure is equally likely to produce every permutation of the numbers 1 through n.
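Here is a minimal Python sketch of the same idea (not the book's pseudocode); it draws priorities in 1..n^3 and, as an extra assumption, breaks any priority ties by index, the situation the book defers to Exercise 5.3-6:

import random

def permute_by_sorting(A):
    # Assign each element a random priority in 1..n^3 and sort by priority.
    # Ties, if any, are broken by index here (an assumption beyond the text).
    n = len(A)
    P = [random.randint(1, n ** 3) for _ in range(n)]
    order = sorted(range(n), key=lambda i: P[i])   # indices in order of priority
    return [A[i] for i in order]

print(permute_by_sorting([1, 2, 3, 4]))   # one of the 24 permutations, e.g. [2, 4, 1, 3]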

Lemma 5.4 Procedure PERMUTE-BY-SORTING produces a uniform random permutation of the input, assuming that all priorities are distinct.

Proof We start by considering the particular permutation in which each element A[i] receives the ith smallest priority. We shall show that this permutation occurs with probability exactly 1/n!. For i = 1, 2, ..., n, let E_i be the event that element A[i] receives the ith smallest priority. Then we wish to compute the probability that for all i, event E_i occurs, which is

Pr{E_1 ∩ E_2 ∩ E_3 ∩ ⋯ ∩ E_{n−1} ∩ E_n}.

Using Exercise C.2-5, this probability is equal to

Pr{E_1} · Pr{E_2 | E_1} · Pr{E_3 | E_2 ∩ E_1} · Pr{E_4 | E_3 ∩ E_2 ∩ E_1} ⋯ Pr{E_i | E_{i−1} ∩ E_{i−2} ∩ ⋯ ∩ E_1} ⋯ Pr{E_n | E_{n−1} ∩ ⋯ ∩ E_1}.

We have that Pr{E_1} = 1/n because it is the probability that one priority chosen randomly out of a set of n is the smallest priority. Next, we observe that Pr{E_2 | E_1} = 1/(n − 1) because given that element A[1] has the smallest priority, each of the remaining n − 1 elements has an equal chance of having the second smallest priority. In general, for i = 2, 3, ..., n, we have that Pr{E_i | E_{i−1} ∩ E_{i−2} ∩ ⋯ ∩ E_1} = 1/(n − i + 1), since, given that elements A[1] through A[i − 1] have the i − 1 smallest priorities (in order), each of the remaining n − (i − 1) elements has an equal chance of having the ith smallest priority. Thus, we have

Pr{E_1 ∩ E_2 ∩ E_3 ∩ ⋯ ∩ E_{n−1} ∩ E_n} = (1/n) · (1/(n − 1)) ⋯ (1/2) · (1/1)
                                         = 1/n!,

and we have shown that the probability of obtaining the identity permutation is 1/n!.

We can extend this proof to work for any permutation of priorities. Consider any fixed permutation σ = ⟨σ(1), σ(2), ..., σ(n)⟩ of the set {1, 2, ..., n}. Let us denote by r_i the rank of the priority assigned to element A[i], where the element with the jth smallest priority has rank j. If we define E_i as the event in which element A[i] receives the σ(i)th smallest priority, or r_i = σ(i), the same proof still applies. Therefore, if we calculate the probability of obtaining any particular permutation, the calculation is identical to the one above, so that the probability of obtaining this permutation is also 1/n!.

You might think that to prove that a permutation is a uniform random permutation, it suffices to show that, for each element A[i], the probability that the element winds up in position j is 1/n. Exercise 5.3-4 shows that this weaker condition is, in fact, insufficient.

A better method for generating a random permutation is to permute the given array in place. The procedure RANDOMIZE-IN-PLACE does so in O(n) time. In its ith iteration, it chooses the element A[i] randomly from among elements A[i] through A[n]. Subsequent to the ith iteration, A[i] is never altered.

RANDOMIZE-IN-PLACE(A)
1  n = A.length
2  for i = 1 to n
3      swap A[i] with A[RANDOM(i, n)]
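For reference, here is a direct Python transcription of the same procedure (not from the book), rewritten for 0-based indexing:

import random

def randomize_in_place(A):
    # In iteration i, swap A[i] with a random element of A[i..n-1],
    # the 0-based counterpart of lines 2-3 of RANDOMIZE-IN-PLACE.
    n = len(A)
    for i in range(n):
        j = random.randint(i, n - 1)   # RANDOM(i, n) in the 0-based setting
        A[i], A[j] = A[j], A[i]
    return A

print(randomize_in_place([1, 2, 3, 4, 5]))   # a uniform random permutation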

We shall use a loop invariant to show that procedure RANDOMIZE-IN-PLACE produces a uniform random permutation. A k-permutation on a set of n elements is a sequence containing k of the n elements, with no repetitions. (See Appendix C.) There are n!/(n − k)! such possible k-permutations.


Lemma 5.5 Procedure RANDOMIZE-IN-PLACE computes a uniform random permutation.

Proof We use the following loop invariant:

Just prior to the ith iteration of the for loop of lines 2–3, for each possible (i − 1)-permutation of the n elements, the subarray A[1..i − 1] contains this (i − 1)-permutation with probability (n − i + 1)!/n!.

We need to show that this invariant is true prior to the first loop iteration, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

Initialization: Consider the situation just before the first loop iteration, so that i = 1. The loop invariant says that for each possible 0-permutation, the subarray A[1..0] contains this 0-permutation with probability (n − i + 1)!/n! = n!/n! = 1. The subarray A[1..0] is an empty subarray, and a 0-permutation has no elements. Thus, A[1..0] contains any 0-permutation with probability 1, and the loop invariant holds prior to the first iteration.

Maintenance: We assume that just before the ith iteration, each possible (i − 1)-permutation appears in the subarray A[1..i − 1] with probability (n − i + 1)!/n!, and we shall show that after the ith iteration, each possible i-permutation appears in the subarray A[1..i] with probability (n − i)!/n!. Incrementing i for the next iteration then maintains the loop invariant.

Let us examine the ith iteration. Consider a particular i-permutation, and denote the elements in it by ⟨x_1, x_2, ..., x_i⟩. This permutation consists of an (i − 1)-permutation ⟨x_1, ..., x_{i−1}⟩ followed by the value x_i that the algorithm places in A[i]. Let E_1 denote the event in which the first i − 1 iterations have created the particular (i − 1)-permutation ⟨x_1, ..., x_{i−1}⟩ in A[1..i − 1]. By the loop invariant, Pr{E_1} = (n − i + 1)!/n!. Let E_2 be the event that the ith iteration puts x_i in position A[i]. The i-permutation ⟨x_1, ..., x_i⟩ appears in A[1..i] precisely when both E_1 and E_2 occur, and so we wish to compute Pr{E_2 ∩ E_1}. Using equation (C.14), we have

Pr{E_2 ∩ E_1} = Pr{E_2 | E_1} Pr{E_1}.

The probability Pr{E_2 | E_1} equals 1/(n − i + 1) because in line 3 the algorithm chooses x_i randomly from the n − i + 1 values in positions A[i..n]. Thus, we have

Pr{E_2 ∩ E_1} = Pr{E_2 | E_1} Pr{E_1}
              = (1/(n − i + 1)) · ((n − i + 1)!/n!)
              = (n − i)!/n!.

Termination: At termination, i = n + 1, and we have that the subarray A[1..n] is a given n-permutation with probability (n − (n + 1) + 1)!/n! = 0!/n! = 1/n!.

Thus, RANDOMIZE-IN-PLACE produces a uniform random permutation.

A randomized algorithm is often the simplest and most efficient way to solve a problem. We shall use randomized algorithms occasionally throughout this book.

Exercises

5.3-1 Professor Marceau objects to the loop invariant used in the proof of Lemma 5.5. He questions whether it is true prior to the first iteration. He reasons that we could just as easily declare that an empty subarray contains no 0-permutations. Therefore, the probability that an empty subarray contains a 0-permutation should be 0, thus invalidating the loop invariant prior to the first iteration. Rewrite the procedure RANDOMIZE-IN-PLACE so that its associated loop invariant applies to a nonempty subarray prior to the first iteration, and modify the proof of Lemma 5.5 for your procedure.

5.3-2 Professor Kelp decides to write a procedure that produces at random any permuta- tion besides the identity permutation. He proposes the following procedure:

PERMUTE-WITHOUT-IDENTITY(A)
1  n = A.length
2  for i = 1 to n − 1
3      swap A[i] with A[RANDOM(i + 1, n)]

Does this code do what Professor Kelp intends?

Suppose that instead of swapping element A[i] with a random element from the subarray A[i..n], we swapped it with a random element from anywhere in the array:

PERMUTE-WITH-ALL(A)
1  n = A.length
2  for i = 1 to n
3      swap A[i] with A[RANDOM(1, n)]

Does this code produce a uniform random permutation? Why or why not?

5.3-4 Professor Armstrong suggests the following procedure for generating a uniform random permutation:

PERMUTE-BY-CYCLIC(A)
1  n = A.length
2  let B[1..n] be a new array
3  offset = RANDOM(1, n)
4  for i = 1 to n
5      dest = i + offset
6      if dest > n
7          dest = dest − n
8      B[dest] = A[i]
9  return B

Show that each element A[i] has a 1/n probability of winding up in any particular position in B. Then show that Professor Armstrong is mistaken by showing that the resulting permutation is not uniformly random.

5.3-5 ?
Prove that in the array P in procedure PERMUTE-BY-SORTING, the probability that all elements are unique is at least 1 − 1/n.

5.3-6
Explain how to implement the algorithm PERMUTE-BY-SORTING to handle the case in which two or more priorities are identical. That is, your algorithm should produce a uniform random permutation, even if two or more priorities are identical.

5.3-7
Suppose we want to create a random sample of the set {1, 2, 3, ..., n}, that is, an m-element subset S, where 0 ≤ m ≤ n, such that each m-subset is equally likely to be created. One way would be to set A[i] = i for i = 1, 2, 3, ..., n, call RANDOMIZE-IN-PLACE(A), and then take just the first m array elements. This method would make n calls to the RANDOM procedure. If n is much larger than m, we can create a random sample with fewer calls to RANDOM. Show that the following recursive procedure returns a random m-subset S of {1, 2, 3, ..., n}, in which each m-subset is equally likely, while making only m calls to RANDOM:

RANDOM-SAMPLE(m, n)
1  if m == 0
2      return ∅
3  else S = RANDOM-SAMPLE(m − 1, n − 1)
4      i = RANDOM(1, n)
5      if i ∈ S
6          S = S ∪ {n}
7      else S = S ∪ {i}
8      return S

? 5.4 Probabilistic analysis and further uses of indicator random variables

This advanced section further illustrates probabilistic analysis by way of four ex- amples. The first determines the probability that in a room of k people, two of them share the same birthday. The second example examines what happens when we randomly toss balls into bins. The third investigates “streaks” of consecutive heads when we flip coins. The final example analyzes a variant of the hiring prob- lem in which you have to make decisions without actually interviewing all the candidates.

5.4.1 The birthday paradox

Our first example is the birthday paradox. How many people must there be in a room before there is a 50% chance that two of them were born on the same day of the year? The answer is surprisingly few. The paradox is that it is in fact far fewer than the number of days in a year, or even half the number of days in a year, as we shall see.

To answer this question, we index the people in the room with the integers 1, 2, ..., k, where k is the number of people in the room. We ignore the issue of leap years and assume that all years have n = 365 days. For i = 1, 2, ..., k, let b_i be the day of the year on which person i's birthday falls, where 1 ≤ b_i ≤ n. We also assume that birthdays are uniformly distributed across the n days of the year, so that Pr{b_i = r} = 1/n for i = 1, 2, ..., k and r = 1, 2, ..., n.

The probability that two given people, say i and j , have matching birthdays depends on whether the random selection of birthdays is independent. We assume from now on that birthdays are independent, so that the probability that i’s birthday

and j's birthday both fall on day r is

Pr{b_i = r and b_j = r} = Pr{b_i = r} Pr{b_j = r} = 1/n^2.

Thus, the probability that they both fall on the same day is

Pr{b_i = b_j} = Σ_{r=1}^n Pr{b_i = r and b_j = r}
              = Σ_{r=1}^n (1/n^2)
              = 1/n.      (5.6)

More intuitively, once b_i is chosen, the probability that b_j is chosen to be the same day is 1/n. Thus, the probability that i and j have the same birthday is the same as the probability that the birthday of one of them falls on a given day. Notice, however, that this coincidence depends on the assumption that the birthdays are independent.
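Before doing any exact calculation, a quick simulation already suggests how few people are needed. The following Python sketch (not from the book) assumes independent, uniformly distributed birthdays over n = 365 days, as in the text, and estimates the probability of a shared birthday for a few group sizes k; the estimate crosses 1/2 around k = 23:

import random

def prob_shared_birthday(k, n=365, trials=20000):
    # Monte Carlo estimate of the probability that at least two of k people,
    # with independent uniform birthdays over n days, share a birthday.
    hits = 0
    for _ in range(trials):
        days = [random.randrange(n) for _ in range(k)]
        if len(set(days)) < k:
            hits += 1
    return hits / trials

for k in (10, 23, 50):
    print(k, prob_shared_birthday(k))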

We can analyze the probability of at least 2 out of k people having matching birthdays by looking at the complementary event. The probability that at least two of the birthdays match is 1 minus the probability that all the birthdays are different. The event that k people have distinct birthdays is

B_k = ⋂_{i=1}^k A_i,

where A_i is the event that person i's birthday is different from person j's for all j < i.

ON-LINE-MAXIMUM(k, n)
1  bestscore = −∞
2  for i = 1 to k
3      if score(i) > bestscore
4          bestscore = score(i)
5  for i = k + 1 to n
6      if score(i) > bestscore
7          return i
8  return n

We wish to determine, for each possible value of k, the probability that we hire the most qualified applicant. We then choose the best possible k, and implement the strategy with that value. For the moment, assume that k is fixed. Let M(j) = max_{1≤i≤j} {score(i)} denote the maximum score among applicants 1 through j. Let S be the event that we succeed in choosing the best-qualified applicant, and let S_i be the event that we succeed when the best-qualified applicant is the ith one interviewed. Since the various S_i are disjoint, we have that Pr{S} = Σ_{i=1}^n Pr{S_i}. Noting that we never succeed when the best-qualified applicant is one of the first k, we have that Pr{S_i} = 0 for i = 1, 2, ..., k. Thus, we obtain

Pr{S} = Σ_{i=k+1}^n Pr{S_i}.      (5.12)

We now compute Pr{S_i}. In order to succeed when the best-qualified applicant is the ith one, two things must happen. First, the best-qualified applicant must be in position i, an event which we denote by B_i. Second, the algorithm must not select any of the applicants in positions k + 1 through i − 1, which happens only if, for each j such that k + 1 ≤ j ≤ i − 1, we find that score(j) < bestscore in line 6.

Problems

5-1 Probabilistic counting
With a b-bit counter, we can ordinarily only count up to 2^b − 1. With R. Morris's probabilistic counting, we can count up to a much larger value at the expense of some loss of precision. We let a counter value of i represent a count of n_i for i = 0, 1, ..., 2^b − 1, where the n_i form an increasing sequence of nonnegative values, and we assume the counter starts at 0, representing a count of n_0 = 0. The INCREMENT operation works on a counter containing the value i in a probabilistic manner: if i = 2^b − 1, the operation reports an overflow error; otherwise, it increases the counter by 1 with probability 1/(n_{i+1} − n_i), and it leaves the counter unchanged with probability 1 − 1/(n_{i+1} − n_i). If we select n_i = i for all i ≥ 0, then the counter is an ordinary one. More interesting situations arise if we select, say, n_i = 2^{i−1} for i > 0 or n_i = F_i (the ith Fibonacci number—see Section 3.2).

For this problem, assume that n_{2^b − 1} is large enough that the probability of an overflow error is negligible.

a. Show that the expected value represented by the counter after n INCREMENT operations have been performed is exactly n.

b. The analysis of the variance of the count represented by the counter depends on the sequence of the n_i. Let us consider a simple case: n_i = 100i for all i ≥ 0. Estimate the variance in the value represented by the register after n INCREMENT operations have been performed.

5-2 Searching an unsorted array This problem examines three algorithms for searching for a value x in an unsorted array A consisting of n elements.

Consider the following randomized strategy: pick a random index i into A. If A[i] = x, then we terminate; otherwise, we continue the search by picking a new random index into A. We continue picking random indices into A until we find an index j such that A[j] = x or until we have checked every element of A. Note that we pick from the whole set of indices each time, so that we may examine a given element more than once.

a. Write pseudocode for a procedure RANDOM-SEARCH to implement the strat- egy above. Be sure that your algorithm terminates when all indices into A have been picked.

b. Suppose that there is exactly one index i such that A[i] = x. What is the expected number of indices into A that we must pick before we find x and RANDOM-SEARCH terminates?

c. Generalizing your solution to part (b), suppose that there are k ≥ 1 indices i such that A[i] = x. What is the expected number of indices into A that we must pick before we find x and RANDOM-SEARCH terminates? Your answer should be a function of n and k.

d. Suppose that there are no indices i such that A[i] = x. What is the expected number of indices into A that we must pick before we have checked all elements of A and RANDOM-SEARCH terminates?

Now consider a deterministic linear search algorithm, which we refer to as DETERMINISTIC-SEARCH. Specifically, the algorithm searches A for x in order, considering A[1], A[2], A[3], ..., A[n] until either it finds A[i] = x or it reaches the end of the array. Assume that all possible permutations of the input array are equally likely.

e. Suppose that there is exactly one index i such that A[i] = x. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?

f. Generalizing your solution to part (e), suppose that there are k ≥ 1 indices i such that A[i] = x. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH? Your answer should be a function of n and k.

g. Suppose that there are no indices i such that A[i] = x. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?

Finally, consider a randomized algorithm SCRAMBLE-SEARCH that works by first randomly permuting the input array and then running the deterministic lin- ear search given above on the resulting permuted array.

h. Letting k be the number of indices i such that A[i] = x, give the worst-case and expected running times of SCRAMBLE-SEARCH for the cases in which k = 0 and k = 1. Generalize your solution to handle the case in which k ≥ 1.

i. Which of the three searching algorithms would you use? Explain your answer.


Chapter notes

Bollobás [53], Hofri [174], and Spencer [321] contain a wealth of advanced prob- abilistic techniques. The advantages of randomized algorithms are discussed and surveyed by Karp [200] and Rabin [288]. The textbook by Motwani and Raghavan [262] gives an extensive treatment of randomized algorithms.

Several variants of the hiring problem have been widely studied. These problems are more commonly referred to as “secretary problems.” An example of work in this area is the paper by Ajtai, Meggido, and Waarts [11].

II Sorting and Order Statistics

Introduction

This part presents several algorithms that solve the following sorting problem:

Input: A sequence of n numbers ⟨a_1, a_2, ..., a_n⟩.
Output: A permutation (reordering) ⟨a′_1, a′_2, ..., a′_n⟩ of the input sequence such that a′_1 ≤ a′_2 ≤ ⋯ ≤ a′_n.

The input sequence is usually an n-element array, although it may be represented in some other fashion, such as a linked list.

The structure of the data

In practice, the numbers to be sorted are rarely isolated values. Each is usually part of a collection of data called a record. Each record contains a key, which is the value to be sorted. The remainder of the record consists of satellite data, which are usually carried around with the key. In practice, when a sorting algorithm permutes the keys, it must permute the satellite data as well. If each record includes a large amount of satellite data, we often permute an array of pointers to the records rather than the records themselves in order to minimize data movement.

In a sense, it is these implementation details that distinguish an algorithm from a full-blown program. A sorting algorithm describes the method by which we determine the sorted order, regardless of whether we are sorting individual numbers or large records containing many bytes of satellite data. Thus, when focusing on the problem of sorting, we typically assume that the input consists only of numbers. Translating an algorithm for sorting numbers into a program for sorting records


is conceptually straightforward, although in a given engineering situation other subtleties may make the actual programming task a challenge.

Why sorting?

Many computer scientists consider sorting to be the most fundamental problem in the study of algorithms. There are several reasons:

• Sometimes an application inherently needs to sort information. For example, in order to prepare customer statements, banks need to sort checks by check number.

• Algorithms often use sorting as a key subroutine. For example, a program that renders graphical objects which are layered on top of each other might have to sort the objects according to an "above" relation so that it can draw these objects from bottom to top. We shall see numerous algorithms in this text that use sorting as a subroutine.

• We can draw from among a wide variety of sorting algorithms, and they employ a rich set of techniques. In fact, many important techniques used throughout algorithm design appear in the body of sorting algorithms that have been developed over the years. In this way, sorting is also a problem of historical interest.

• We can prove a nontrivial lower bound for sorting (as we shall do in Chapter 8). Our best upper bounds match the lower bound asymptotically, and so we know that our sorting algorithms are asymptotically optimal. Moreover, we can use the lower bound for sorting to prove lower bounds for certain other problems.

• Many engineering issues come to the fore when implementing sorting algorithms. The fastest sorting program for a particular situation may depend on many factors, such as prior knowledge about the keys and satellite data, the memory hierarchy (caches and virtual memory) of the host computer, and the software environment. Many of these issues are best dealt with at the algorithmic level, rather than by "tweaking" the code.

Sorting algorithms

We introduced two algorithms that sort n real numbers in Chapter 2. Insertion sort takes Θ(n^2) time in the worst case. Because its inner loops are tight, however, it is a fast in-place sorting algorithm for small input sizes. (Recall that a sorting algorithm sorts in place if only a constant number of elements of the input array are ever stored outside the array.) Merge sort has a better asymptotic running time, Θ(n lg n), but the MERGE procedure it uses does not operate in place.

In this part, we shall introduce two more algorithms that sort arbitrary real numbers. Heapsort, presented in Chapter 6, sorts n numbers in place in O(n lg n) time. It uses an important data structure, called a heap, with which we can also implement a priority queue.

Quicksort, in Chapter 7, also sorts n numbers in place, but its worst-case running time is Θ(n^2). Its expected running time is Θ(n lg n), however, and it generally outperforms heapsort in practice. Like insertion sort, quicksort has tight code, and so the hidden constant factor in its running time is small. It is a popular algorithm for sorting large input arrays.

Insertion sort, merge sort, heapsort, and quicksort are all comparison sorts: they determine the sorted order of an input array by comparing elements. Chapter 8 begins by introducing the decision-tree model in order to study the performance limitations of comparison sorts. Using this model, we prove a lower bound of Ω(n lg n) on the worst-case running time of any comparison sort on n inputs, thus showing that heapsort and merge sort are asymptotically optimal comparison sorts.

Chapter 8 then goes on to show that we can beat this lower bound of Ω(n lg n) if we can gather information about the sorted order of the input by means other than comparing elements. The counting sort algorithm, for example, assumes that the input numbers are in the set {0, 1, ..., k}. By using array indexing as a tool for determining relative order, counting sort can sort n numbers in Θ(k + n) time. Thus, when k = O(n), counting sort runs in time that is linear in the size of the input array. A related algorithm, radix sort, can be used to extend the range of counting sort. If there are n integers to sort, each integer has d digits, and each digit can take on up to k possible values, then radix sort can sort the numbers in Θ(d(n + k)) time. When d is a constant and k is O(n), radix sort runs in linear time. A third algorithm, bucket sort, requires knowledge of the probabilistic distribution of numbers in the input array. It can sort n real numbers uniformly distributed in the half-open interval [0, 1) in average-case O(n) time.

The following table summarizes the running times of the sorting algorithms from Chapters 2 and 6–8. As usual, n denotes the number of items to sort. For counting sort, the items to sort are integers in the set {0, 1, ..., k}. For radix sort, each item is a d-digit number, where each digit takes on k possible values. For bucket sort, we assume that the keys are real numbers uniformly distributed in the half-open interval [0, 1). The rightmost column gives the average-case or expected running time, indicating which it gives when it differs from the worst-case running time. We omit the average-case running time of heapsort because we do not analyze it in this book.

Algorithm        Worst-case running time    Average-case/expected running time
Insertion sort   Θ(n^2)                      Θ(n^2)
Merge sort       Θ(n lg n)                   Θ(n lg n)
Heapsort         O(n lg n)                   —
Quicksort        Θ(n^2)                      Θ(n lg n) (expected)
Counting sort    Θ(k + n)                    Θ(k + n)
Radix sort       Θ(d(n + k))                 Θ(d(n + k))
Bucket sort      Θ(n^2)                      Θ(n) (average-case)

Order statistics

The ith order statistic of a set of n numbers is the ith smallest number in the set. We can, of course, select the ith order statistic by sorting the input and indexing the ith element of the output. With no assumptions about the input distribution, this method runs in Ω(n lg n) time, as the lower bound proved in Chapter 8 shows.

In Chapter 9, we show that we can find the ith smallest element in O(n) time, even when the elements are arbitrary real numbers. We present a randomized algorithm with tight pseudocode that runs in Θ(n^2) time in the worst case, but whose expected running time is O(n). We also give a more complicated algorithm that runs in O(n) worst-case time.

Background

Although most of this part does not rely on difficult mathematics, some sections do require mathematical sophistication. In particular, analyses of quicksort, bucket sort, and the order-statistic algorithm use probability, which is reviewed in Appendix C, and the material on probabilistic analysis and randomized algorithms in Chapter 5. The analysis of the worst-case linear-time algorithm for order statistics involves somewhat more sophisticated mathematics than the other worst-case analyses in this part.

6 Heapsort

In this chapter, we introduce another sorting algorithm: heapsort. Like merge sort, but unlike insertion sort, heapsort's running time is O(n lg n). Like insertion sort, but unlike merge sort, heapsort sorts in place: only a constant number of array elements are stored outside the input array at any time. Thus, heapsort combines the better attributes of the two sorting algorithms we have already discussed.

Heapsort also introduces another algorithm design technique: using a data structure, in this case one we call a "heap," to manage information. Not only is the heap data structure useful for heapsort, but it also makes an efficient priority queue. The heap data structure will reappear in algorithms in later chapters.

The term “heap” was originally coined in the context of heapsort, but it has since come to refer to “garbage-collected storage,” such as the programming languages Java and Lisp provide. Our heap data structure is not garbage-collected storage, and whenever we refer to heaps in this book, we shall mean a data structure rather than an aspect of garbage collection.

6.1 Heaps

The (binary) heap data structure is an array object that we can view as a nearly complete binary tree (see Section B.5.3), as shown in Figure 6.1. Each node of the tree corresponds to an element of the array. The tree is completely filled on all levels except possibly the lowest, which is filled from the left up to a point. An array A that represents a heap is an object with two attributes: A.length, which (as usual) gives the number of elements in the array, and A.heap-size, which represents how many elements in the heap are stored within array A. That is, although A[1..A.length] may contain numbers, only the elements in A[1..A.heap-size], where 0 ≤ A.heap-size ≤ A.length, are valid elements of the heap. The root of the tree is A[1], and given the index i of a node, we can easily compute the indices of its parent, left child, and right child:


Figure 6.1 A max-heap viewed as (a) a binary tree and (b) an array. The number within the circle at each node in the tree is the value stored at that node. The number above a node is the corresponding index in the array. Above and below the array are lines showing parent-child relationships; parents are always to the left of their children. The tree has height three; the node at index 4 (with value 8) has height one.

PARENT(i)
1  return ⌊i/2⌋

LEFT(i)
1  return 2i

RIGHT(i)
1  return 2i + 1

On most computers, the LEFT procedure can compute 2i in one instruction by simply shifting the binary representation of i left by one bit position. Similarly, the RIGHT procedure can quickly compute 2i + 1 by shifting the binary representation of i left by one bit position and then adding in a 1 as the low-order bit. The PARENT procedure can compute ⌊i/2⌋ by shifting i right one bit position. Good implementations of heapsort often implement these procedures as "macros" or "inline" procedures.
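As a concrete illustration (our own sketch, not the book's code), the three index computations translate directly into bit operations in Python; the function names and the 1-based indexing convention below are assumptions chosen to match the pseudocode.

# Heap index arithmetic with bit operations, using 1-based indices as in the text.
def parent(i):
    return i >> 1          # floor(i/2): shift right by one bit

def left(i):
    return i << 1          # 2*i: shift left by one bit

def right(i):
    return (i << 1) | 1    # 2*i + 1: shift left, then set the low-order bit

assert (parent(9), left(4), right(4)) == (4, 8, 9)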

There are two kinds of binary heaps: max-heaps and min-heaps. In both kinds, the values in the nodes satisfy a heap property, the specifics of which depend on the kind of heap. In a max-heap, the max-heap property is that for every node i other than the root,

A[PARENT(i)] ≥ A[i] ,

that is, the value of a node is at most the value of its parent. Thus, the largest element in a max-heap is stored at the root, and the subtree rooted at a node contains


values no larger than that contained at the node itself. A min-heap is organized in the opposite way; the min-heap property is that for every node i other than the root,

A[PARENT(i)] ≤ A[i] .

The smallest element in a min-heap is at the root.

For the heapsort algorithm, we use max-heaps. Min-heaps commonly implement priority queues, which we discuss in Section 6.5. We shall be precise in specifying whether we need a max-heap or a min-heap for any particular application, and when properties apply to either max-heaps or min-heaps, we just use the term "heap."

Viewing a heap as a tree, we define the height of a node in a heap to be the number of edges on the longest simple downward path from the node to a leaf, and we define the height of the heap to be the height of its root. Since a heap of n elements is based on a complete binary tree, its height is Θ(lg n) (see Exercise 6.1-2). We shall see that the basic operations on heaps run in time at most proportional to the height of the tree and thus take O(lg n) time. The remainder of this chapter presents some basic procedures and shows how they are used in a sorting algorithm and a priority-queue data structure.

• The MAX-HEAPIFY procedure, which runs in O(lg n) time, is the key to maintaining the max-heap property.

• The BUILD-MAX-HEAP procedure, which runs in linear time, produces a max-heap from an unordered input array.

• The HEAPSORT procedure, which runs in O(n lg n) time, sorts an array in place.

• The MAX-HEAP-INSERT, HEAP-EXTRACT-MAX, HEAP-INCREASE-KEY, and HEAP-MAXIMUM procedures, which run in O(lg n) time, allow the heap data structure to implement a priority queue.

Exercises

6.1-1 What are the minimum and maximum numbers of elements in a heap of height h?

6.1-2 Show that an n-element heap has height ⌊lg n⌋.

6.1-3 Show that in any subtree of a max-heap, the root of the subtree contains the largest value occurring anywhere in that subtree.


6.1-4 Where in a max-heap might the smallest element reside, assuming that all elements are distinct?

6.1-5 Is an array that is in sorted order a min-heap?

6.1-6 Is the array with values ⟨23, 17, 14, 6, 13, 10, 1, 5, 7, 12⟩ a max-heap?

6.1-7 Show that, with the array representation for storing an n-element heap, the leaves are the nodes indexed by ⌊n/2⌋ + 1, ⌊n/2⌋ + 2, ..., n.

6.2 Maintaining the heap property

In order to maintain the max-heap property, we call the procedure MAX-HEAPIFY. Its inputs are an array A and an index i into the array. When it is called, MAX-HEAPIFY assumes that the binary trees rooted at LEFT(i) and RIGHT(i) are max-heaps, but that A[i] might be smaller than its children, thus violating the max-heap property. MAX-HEAPIFY lets the value at A[i] "float down" in the max-heap so that the subtree rooted at index i obeys the max-heap property.

MAX-HEAPIFY(A, i)
 1  l = LEFT(i)
 2  r = RIGHT(i)
 3  if l ≤ A.heap-size and A[l] > A[i]
 4      largest = l
 5  else largest = i
 6  if r ≤ A.heap-size and A[r] > A[largest]
 7      largest = r
 8  if largest ≠ i
 9      exchange A[i] with A[largest]
10      MAX-HEAPIFY(A, largest)

Figure 6.2 illustrates the action of MAX-HEAPIFY. At each step, the largest of the elements A[i], A[LEFT(i)], and A[RIGHT(i)] is determined, and its index is stored in largest. If A[i] is largest, then the subtree rooted at node i is already a max-heap and the procedure terminates. Otherwise, one of the two children has the largest element, and A[i] is swapped with A[largest], which causes node i and its

Figure 6.2 The action of MAX-HEAPIFY(A, 2), where A.heap-size = 10. (a) The initial configuration, with A[2] at node i = 2 violating the max-heap property since it is not larger than both children. The max-heap property is restored for node 2 in (b) by exchanging A[2] with A[4], which destroys the max-heap property for node 4. The recursive call MAX-HEAPIFY(A, 4) now has i = 4. After swapping A[4] with A[9], as shown in (c), node 4 is fixed up, and the recursive call MAX-HEAPIFY(A, 9) yields no further change to the data structure.

children to satisfy the max-heap property. The node indexed by largest, however, now has the original value A[i], and thus the subtree rooted at largest might violate the max-heap property. Consequently, we call MAX-HEAPIFY recursively on that subtree.

The running time of MAX-HEAPIFY on a subtree of size n rooted at a given node i is the Θ(1) time to fix up the relationships among the elements A[i], A[LEFT(i)], and A[RIGHT(i)], plus the time to run MAX-HEAPIFY on a subtree rooted at one of the children of node i (assuming that the recursive call occurs). The children's subtrees each have size at most 2n/3—the worst case occurs when the bottom level of the tree is exactly half full—and therefore we can describe the running time of MAX-HEAPIFY by the recurrence

T(n) ≤ T(2n/3) + Θ(1) .


The solution to this recurrence, by case 2 of the master theorem (Theorem 4.1), is T(n) = O(lg n). Alternatively, we can characterize the running time of MAX-HEAPIFY on a node of height h as O(h).
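A minimal Python sketch of MAX-HEAPIFY follows (ours, not the book's code). It keeps the pseudocode's 1-based indexing by leaving slot 0 unused, and it uses a loop rather than recursion, in the spirit of Exercise 6.2-5.

# MAX-HEAPIFY in Python: float A[i] down until the subtree rooted at i is a max-heap.
def max_heapify(A, i, heap_size):
    while True:
        l, r = 2 * i, 2 * i + 1
        largest = i
        if l <= heap_size and A[l] > A[largest]:
            largest = l
        if r <= heap_size and A[r] > A[largest]:
            largest = r
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]   # exchange and continue down the tree
        i = largest

A = [None, 16, 4, 10, 14, 7, 9, 3, 2, 8, 1]   # the array of Figure 6.2, 1-based
max_heapify(A, 2, heap_size=10)
assert A[1:] == [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]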

Exercises

6.2-1 Using Figure 6.2 as a model, illustrate the operation of MAX-HEAPIFY(A, 3) on the array A = ⟨27, 17, 3, 16, 13, 10, 1, 5, 7, 12, 4, 8, 9, 0⟩.

6.2-2 Starting with the procedure MAX-HEAPIFY, write pseudocode for the procedure MIN-HEAPIFY(A, i), which performs the corresponding manipulation on a min-heap. How does the running time of MIN-HEAPIFY compare to that of MAX-HEAPIFY?

6.2-3 What is the effect of calling MAX-HEAPIFY(A, i) when the element A[i] is larger than its children?

6.2-4 What is the effect of calling MAX-HEAPIFY(A, i) for i > A.heap-size/2?

6.2-5 The code for MAX-HEAPIFY is quite efficient in terms of constant factors, except possibly for the recursive call in line 10, which might cause some compilers to produce inefficient code. Write an efficient MAX-HEAPIFY that uses an iterative control construct (a loop) instead of recursion.

6.2-6 Show that the worst-case running time of MAX-HEAPIFY on a heap of size n is Ω(lg n). (Hint: For a heap with n nodes, give node values that cause MAX-HEAPIFY to be called recursively at every node on a simple path from the root down to a leaf.)

6.3 Building a heap

We can use the procedure MAX-HEAPIFY in a bottom-up manner to convert an array A[1..n], where n = A.length, into a max-heap. By Exercise 6.1-7, the elements in the subarray A[⌊n/2⌋+1..n] are all leaves of the tree, and so each is


a 1-element heap to begin with. The procedure BUILD-MAX-HEAP goes through the remaining nodes of the tree and runs MAX-HEAPIFY on each one.

BUILD-MAX-HEAP(A)
1  A.heap-size = A.length
2  for i = ⌊A.length/2⌋ downto 1
3      MAX-HEAPIFY(A, i)

Figure 6.3 shows an example of the action of BUILD-MAX-HEAP. To show why BUILD-MAX-HEAP works correctly, we use the following loop invariant:

At the start of each iteration of the for loop of lines 2–3, each node i+1, i+2, ..., n is the root of a max-heap.

We need to show that this invariant is true prior to the first loop iteration, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

Initialization: Prior to the first iteration of the loop, i = ⌊n/2⌋. Each node ⌊n/2⌋+1, ⌊n/2⌋+2, ..., n is a leaf and is thus the root of a trivial max-heap.

Maintenance: To see that each iteration maintains the loop invariant, observe that the children of node i are numbered higher than i. By the loop invariant, therefore, they are both roots of max-heaps. This is precisely the condition required for the call MAX-HEAPIFY(A, i) to make node i a max-heap root. Moreover, the MAX-HEAPIFY call preserves the property that nodes i+1, i+2, ..., n are all roots of max-heaps. Decrementing i in the for loop update reestablishes the loop invariant for the next iteration.

Termination: At termination, i = 0. By the loop invariant, each node 1, 2, ..., n is the root of a max-heap. In particular, node 1 is.

We can compute a simple upper bound on the running time of BUILD-MAX-HEAP as follows. Each call to MAX-HEAPIFY costs O(lg n) time, and BUILD-MAX-HEAP makes O(n) such calls. Thus, the running time is O(n lg n). This upper bound, though correct, is not asymptotically tight.

We can derive a tighter bound by observing that the time for MAX-HEAPIFY to run at a node varies with the height of the node in the tree, and the heights of most nodes are small. Our tighter analysis relies on the properties that an n-element heap has height ⌊lg n⌋ (see Exercise 6.1-2) and at most ⌈n/2^(h+1)⌉ nodes of any height h (see Exercise 6.3-3).

The time required by MAX-HEAPIFY when called on a node of height h is O(h), and so we can express the total cost of BUILD-MAX-HEAP as being bounded from above by


Figure 6.3 The operation of BUILD-MAX-HEAP, showing the data structure before the call to MAX-HEAPIFY in line 3 of BUILD-MAX-HEAP. (a) A 10-element input array A and the binary tree it represents. The figure shows that the loop index i refers to node 5 before the call MAX-HEAPIFY(A, i). (b) The data structure that results. The loop index i for the next iteration refers to node 4. (c)–(e) Subsequent iterations of the for loop in BUILD-MAX-HEAP. Observe that whenever MAX-HEAPIFY is called on a node, the two subtrees of that node are both max-heaps. (f) The max-heap after BUILD-MAX-HEAP finishes.


Σ_{h=0}^{⌊lg n⌋} ⌈n/2^(h+1)⌉ O(h) = O(n Σ_{h=0}^{⌊lg n⌋} h/2^h) .

We evaluate the last summation by substituting x = 1/2 in the formula (A.8), yielding

Σ_{h=0}^{∞} h/2^h = (1/2)/(1 − 1/2)² = 2 .

Thus, we can bound the running time of BUILD-MAX-HEAP as

O(n Σ_{h=0}^{⌊lg n⌋} h/2^h) = O(n Σ_{h=0}^{∞} h/2^h) = O(n) .

Hence, we can build a max-heap from an unordered array in linear time.

We can build a min-heap by the procedure BUILD-MIN-HEAP, which is the same as BUILD-MAX-HEAP but with the call to MAX-HEAPIFY in line 3 replaced by a call to MIN-HEAPIFY (see Exercise 6.2-2). BUILD-MIN-HEAP produces a min-heap from an unordered linear array in linear time.

Exercises

6.3-1 Using Figure 6.3 as a model, illustrate the operation of BUILD-MAX-HEAP on the array A = ⟨5, 3, 17, 10, 84, 19, 6, 22, 9⟩.

6.3-2 Why do we want the loop index i in line 2 of BUILD-MAX-HEAP to decrease from ⌊A.length/2⌋ to 1 rather than increase from 1 to ⌊A.length/2⌋?

6.3-3 Show that there are at most ⌈n/2^(h+1)⌉ nodes of height h in any n-element heap.

6.4 The heapsort algorithm

The heapsort algorithm starts by using BUILD-MAX-HEAP to build a max-heap on the input array A[1..n], where n = A.length. Since the maximum element of the array is stored at the root A[1], we can put it into its correct final position


by exchanging it with A[n]. If we now discard node n from the heap—and we can do so by simply decrementing A.heap-size—we observe that the children of the root remain max-heaps, but the new root element might violate the max-heap property. All we need to do to restore the max-heap property, however, is call MAX-HEAPIFY(A, 1), which leaves a max-heap in A[1..n−1]. The heapsort algorithm then repeats this process for the max-heap of size n−1 down to a heap of size 2. (See Exercise 6.4-2 for a precise loop invariant.)

HEAPSORT(A)
1  BUILD-MAX-HEAP(A)
2  for i = A.length downto 2
3      exchange A[1] with A[i]
4      A.heap-size = A.heap-size − 1
5      MAX-HEAPIFY(A, 1)

Figure 6.4 shows an example of the operation of HEAPSORT after line 1 has built the initial max-heap. The figure shows the max-heap before the first iteration of the for loop of lines 2–5 and after each iteration.

The HEAPSORT procedure takes time O(n lg n), since the call to BUILD-MAX-HEAP takes time O(n) and each of the n−1 calls to MAX-HEAPIFY takes time O(lg n).
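Putting the pieces together, here is a compact Python sketch of the whole algorithm (ours, not the book's code). It uses 0-based indices, so the children of index i sit at 2i+1 and 2i+2, and the sift-down helper repeats the operation of the max_heapify sketch in Section 6.2.

# Heapsort: build a max-heap, then repeatedly move the root to the end of the
# shrinking heap and restore the heap property.
def _max_heapify(A, i, heap_size):
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < heap_size and A[l] > A[largest]:
            largest = l
        if r < heap_size and A[r] > A[largest]:
            largest = r
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]
        i = largest

def heapsort(A):
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):   # BUILD-MAX-HEAP: sift down each internal node
        _max_heapify(A, i, n)
    for end in range(n - 1, 0, -1):       # lines 2-5 of HEAPSORT
        A[0], A[end] = A[end], A[0]       # move the maximum into its final position
        _max_heapify(A, 0, end)           # restore the max-heap on A[0..end-1]

data = [5, 13, 2, 25, 7, 17, 20, 8, 4]
heapsort(data)
assert data == sorted([5, 13, 2, 25, 7, 17, 20, 8, 4])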

Exercises

6.4-1 Using Figure 6.4 as a model, illustrate the operation of HEAPSORT on the array A = ⟨5, 13, 2, 25, 7, 17, 20, 8, 4⟩.

6.4-2 Argue the correctness of HEAPSORT using the following loop invariant:

At the start of each iteration of the for loop of lines 2–5, the subarray A[1..i] is a max-heap containing the i smallest elements of A[1..n], and the subarray A[i+1..n] contains the n−i largest elements of A[1..n], sorted.

6.4-3 What is the running time of HEAPSORT on an array A of length n that is already sorted in increasing order? What about decreasing order?

6.4-4 Show that the worst-case running time of HEAPSORT is Ω(n lg n).


Figure 6.4 The operation of HEAPSORT. (a) The max-heap data structure just after BUILD-MAX-HEAP has built it in line 1. (b)–(j) The max-heap just after each call of MAX-HEAPIFY in line 5, showing the value of i at that time. Only lightly shaded nodes remain in the heap. (k) The resulting sorted array A.


6.4-5 ? Show that when all elements are distinct, the best-case running time of HEAPSORT is Ω(n lg n).

6.5 Priority queues

Heapsort is an excellent algorithm, but a good implementation of quicksort, presented in Chapter 7, usually beats it in practice. Nevertheless, the heap data structure itself has many uses. In this section, we present one of the most popular applications of a heap: as an efficient priority queue. As with heaps, priority queues come in two forms: max-priority queues and min-priority queues. We will focus here on how to implement max-priority queues, which are in turn based on max-heaps; Exercise 6.5-3 asks you to write the procedures for min-priority queues.

A priority queue is a data structure for maintaining a set S of elements, each with an associated value called a key. A max-priority queue supports the following operations:

INSERT(S, x) inserts the element x into the set S, which is equivalent to the operation S = S ∪ {x}.

MAXIMUM(S) returns the element of S with the largest key.

EXTRACT-MAX(S) removes and returns the element of S with the largest key.

INCREASE-KEY(S, x, k) increases the value of element x's key to the new value k, which is assumed to be at least as large as x's current key value.

Among their other applications, we can use max-priority queues to schedule jobs on a shared computer. The max-priority queue keeps track of the jobs to be performed and their relative priorities. When a job is finished or interrupted, the scheduler selects the highest-priority job from among those pending by calling EXTRACT-MAX. The scheduler can add a new job to the queue at any time by calling INSERT.

Alternatively, a min-priority queue supports the operations INSERT, MINIMUM, EXTRACT-MIN, and DECREASE-KEY. A min-priority queue can be used in an event-driven simulator. The items in the queue are events to be simulated, each with an associated time of occurrence that serves as its key. The events must be simulated in order of their time of occurrence, because the simulation of an event can cause other events to be simulated in the future. The simulation program calls EXTRACT-MIN at each step to choose the next event to simulate. As new events are produced, the simulator inserts them into the min-priority queue by calling INSERT.


We shall see other uses for min-priority queues, highlighting the DECREASE-KEY operation, in Chapters 23 and 24.

Not surprisingly, we can use a heap to implement a priority queue. In a given application, such as job scheduling or event-driven simulation, elements of a priority queue correspond to objects in the application. We often need to determine which application object corresponds to a given priority-queue element, and vice versa. When we use a heap to implement a priority queue, therefore, we often need to store a handle to the corresponding application object in each heap element. The exact makeup of the handle (such as a pointer or an integer) depends on the application. Similarly, we need to store a handle to the corresponding heap element in each application object. Here, the handle would typically be an array index. Because heap elements change locations within the array during heap operations, an actual implementation, upon relocating a heap element, would also have to update the array index in the corresponding application object. Because the details of accessing application objects depend heavily on the application and its implementation, we shall not pursue them here, other than noting that in practice, these handles do need to be correctly maintained.

Now we discuss how to implement the operations of a max-priority queue. The procedure HEAP-MAXIMUM implements the MAXIMUM operation in Θ(1) time.

HEAP-MAXIMUM(A)
1  return A[1]

The procedure HEAP-EXTRACT-MAX implements the EXTRACT-MAX operation. It is similar to the for loop body (lines 3–5) of the HEAPSORT procedure.

HEAP-EXTRACT-MAX(A)
1  if A.heap-size < 1
2      error "heap underflow"
3  max = A[1]
4  A[1] = A[A.heap-size]
5  A.heap-size = A.heap-size − 1
6  MAX-HEAPIFY(A, 1)
7  return max

The running time of HEAP-EXTRACT-MAX is O(lg n), since it performs only a constant amount of work on top of the O(lg n) time for MAX-HEAPIFY.
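Since the remaining max-priority-queue procedures do not appear in this excerpt, the following Python class is purely an illustrative sketch of how a heap supports the four operations in O(lg n) time each; the class and method names are ours, and the list is 0-based.

# An illustrative max-priority queue built on a Python list.
class MaxPriorityQueue:
    def __init__(self):
        self.A = []

    def maximum(self):
        return self.A[0]                     # the largest key sits at the root

    def extract_max(self):
        if not self.A:
            raise IndexError("heap underflow")
        A = self.A
        A[0], A[-1] = A[-1], A[0]
        largest = A.pop()
        self._sift_down(0)
        return largest

    def insert(self, key):
        self.A.append(key)
        self._sift_up(len(self.A) - 1)

    def increase_key(self, i, key):
        if key < self.A[i]:
            raise ValueError("new key is smaller than current key")
        self.A[i] = key
        self._sift_up(i)

    def _sift_up(self, i):
        A = self.A
        while i > 0 and A[(i - 1) // 2] < A[i]:
            A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
            i = (i - 1) // 2

    def _sift_down(self, i):
        A, n = self.A, len(self.A)
        while True:
            l, r, largest = 2 * i + 1, 2 * i + 2, i
            if l < n and A[l] > A[largest]:
                largest = l
            if r < n and A[r] > A[largest]:
                largest = r
            if largest == i:
                return
            A[i], A[largest] = A[largest], A[i]
            i = largest

pq = MaxPriorityQueue()
for k in (4, 1, 3, 2, 16, 9, 10):
    pq.insert(k)
assert pq.extract_max() == 16 and pq.maximum() == 10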

7 Quicksort

The quicksort algorithm has a worst-case running time of Θ(n²) on an input array of n numbers. Despite this slow worst-case running time, quicksort is often the best practical choice for sorting because it is remarkably efficient on the average: its expected running time is Θ(n lg n), and the constant factors hidden in the Θ(n lg n) notation are quite small. It also has the advantage of sorting in place (see page 17), and it works well even in virtual-memory environments.

Section 7.1 describes the algorithm and an important subroutine used by quicksort for partitioning. Because the behavior of quicksort is complex, we start with an intuitive discussion of its performance in Section 7.2 and postpone its precise analysis to the end of the chapter. Section 7.3 presents a version of quicksort that uses random sampling. This algorithm has a good expected running time, and no particular input elicits its worst-case behavior. Section 7.4 analyzes the randomized algorithm, showing that it runs in Θ(n²) time in the worst case and, assuming distinct elements, in expected O(n lg n) time.

7.1 Description of quicksort

Quicksort, like merge sort, applies the divide-and-conquer paradigm introduced in Section 2.3.1. Here is the three-step divide-and-conquer process for sorting a typical subarray A[p..r]:

Divide: Partition (rearrange) the array A[p..r] into two (possibly empty) subarrays A[p..q−1] and A[q+1..r] such that each element of A[p..q−1] is less than or equal to A[q], which is, in turn, less than or equal to each element of A[q+1..r]. Compute the index q as part of this partitioning procedure.

Conquer: Sort the two subarrays A[p..q−1] and A[q+1..r] by recursive calls to quicksort.


Combine: Because the subarrays are already sorted, no work is needed to combine them: the entire array A[p..r] is now sorted.

The following procedure implements quicksort:

QUICKSORT(A, p, r)
1  if p < r
2      q = PARTITION(A, p, r)
3      QUICKSORT(A, p, q − 1)
4      QUICKSORT(A, q + 1, r)

To sort an entire array A, the initial call is QUICKSORT(A, 1, A.length).

Partitioning the array

The key to the algorithm is the PARTITION procedure, which rearranges the subarray A[p..r] in place.

PARTITION(A, p, r)
1  x = A[r]
2  i = p − 1
3  for j = p to r − 1
4      if A[j] ≤ x
5          i = i + 1
6          exchange A[i] with A[j]
7  exchange A[i + 1] with A[r]
8  return i + 1

Figure 7.1 shows how PARTITION works on an 8-element array. PARTITION always selects an element x = A[r] as a pivot element around which to partition the subarray A[p..r]. As the procedure runs, the regions it maintains satisfy the following loop invariant (see Figure 7.2):

At the beginning of each iteration of the loop of lines 3–6, for any array index k,
1. If p ≤ k ≤ i, then A[k] ≤ x.
2. If i + 1 ≤ k ≤ j − 1, then A[k] > x.
3. If k = r, then A[k] = x.
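A 0-based Python sketch of QUICKSORT and PARTITION (ours, not the book's code) may help connect the pseudocode to the figures.

# Quicksort with the last element as pivot; quicksort(A) sorts A in place.
def partition(A, lo, hi):
    x = A[hi]                           # the pivot is the last element of the subarray
    i = lo - 1
    for j in range(lo, hi):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[hi] = A[hi], A[i + 1]   # put the pivot between the two partitions
    return i + 1

def quicksort(A, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        q = partition(A, lo, hi)
        quicksort(A, lo, q - 1)
        quicksort(A, q + 1, hi)

data = [2, 8, 7, 1, 3, 5, 6, 4]              # the array of Figure 7.1
assert partition(list(data), 0, 7) == 3      # the pivot 4 ends up at index 3 (0-based)
quicksort(data)
assert data == [1, 2, 3, 4, 5, 6, 7, 8]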


Figure 7.1 The operation of PARTITION on a sample array. Array entry A[r] becomes the pivot element x. Lightly shaded array elements are all in the first partition with values no greater than x. Heavily shaded elements are in the second partition with values greater than x. The unshaded elements have not yet been put in one of the first two partitions, and the final white element is the pivot x. (a) The initial array and variable settings. None of the elements have been placed in either of the first two partitions. (b) The value 2 is "swapped with itself" and put in the partition of smaller values. (c)–(d) The values 8 and 7 are added to the partition of larger values. (e) The values 1 and 8 are swapped, and the smaller partition grows. (f) The values 3 and 7 are swapped, and the smaller partition grows. (g)–(h) The larger partition grows to include 5 and 6, and the loop terminates. (i) In lines 7–8, the pivot element is swapped so that it lies between the two partitions.

The indices between j and r − 1 are not covered by any of the three cases, and the values in these entries have no particular relationship to the pivot x.

We need to show that this loop invariant is true prior to the first iteration, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.


Figure 7.2 The four regions maintained by the procedure PARTITION on a subarray A[p..r]. The values in A[p..i] are all less than or equal to x, the values in A[i+1..j−1] are all greater than x, and A[r] = x. The subarray A[j..r−1] can take on any values.

Initialization: Prior to the first iteration of the loop, i = p − 1 and j = p. Because no values lie between p and i and no values lie between i + 1 and j − 1, the first two conditions of the loop invariant are trivially satisfied. The assignment in line 1 satisfies the third condition.

Maintenance: As Figure 7.3 shows, we consider two cases, depending on the outcome of the test in line 4. Figure 7.3(a) shows what happens when A[j] > x; the only action in the loop is to increment j. After j is incremented, condition 2 holds for A[j−1] and all other entries remain unchanged. Figure 7.3(b) shows what happens when A[j] ≤ x; the loop increments i, swaps A[i] and A[j], and then increments j. Because of the swap, we now have that A[i] ≤ x, and condition 1 is satisfied. Similarly, we also have that A[j−1] > x, since the item that was swapped into A[j−1] is, by the loop invariant, greater than x.

Termination: At termination, j = r. Therefore, every entry in the array is in one of the three sets described by the invariant, and we have partitioned the values in the array into three sets: those less than or equal to x, those greater than x, and a singleton set containing x.

The final two lines of PARTITION finish up by swapping the pivot element with the leftmost element greater than x, thereby moving the pivot into its correct place in the partitioned array, and then returning the pivot's new index. The output of PARTITION now satisfies the specifications given for the divide step. In fact, it satisfies a slightly stronger condition: after line 2 of QUICKSORT, A[q] is strictly less than every element of A[q+1..r].

The running time of PARTITION on the subarray A[p..r] is Θ(n), where n = r − p + 1 (see Exercise 7.1-3).

Exercises

7.1-1 Using Figure 7.1 as a model, illustrate the operation of PARTITION on the array A = ⟨13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11⟩.


Figure 7.3 The two cases for one iteration of procedure PARTITION. (a) If A[j] > x, the only action is to increment j, which maintains the loop invariant. (b) If A[j] ≤ x, index i is incremented, A[i] and A[j] are swapped, and then j is incremented. Again, the loop invariant is maintained.

7.1-2 What value of q does PARTITION return when all elements in the array A[p..r] have the same value? Modify PARTITION so that q = ⌊(p + r)/2⌋ when all elements in the array A[p..r] have the same value.

7.1-3 Give a brief argument that the running time of PARTITION on a subarray of size n is Θ(n).

7.1-4 How would you modify QUICKSORT to sort into nonincreasing order?

7.2 Performance of quicksort

The running time of quicksort depends on whether the partitioning is balanced or unbalanced, which in turn depends on which elements are used for partitioning. If the partitioning is balanced, the algorithm runs asymptotically as fast as merge


sort. If the partitioning is unbalanced, however, it can run asymptotically as slowly as insertion sort. In this section, we shall informally investigate how quicksort performs under the assumptions of balanced versus unbalanced partitioning.

Worst-case partitioning

The worst-case behavior for quicksort occurs when the partitioning routine produces one subproblem with n − 1 elements and one with 0 elements. (We prove this claim in Section 7.4.1.) Let us assume that this unbalanced partitioning arises in each recursive call. The partitioning costs Θ(n) time. Since the recursive call on an array of size 0 just returns, T(0) = Θ(1), and the recurrence for the running time is

T(n) = T(n − 1) + T(0) + Θ(n)
     = T(n − 1) + Θ(n) .

Intuitively, if we sum the costs incurred at each level of the recursion, we get an arithmetic series (equation (A.2)), which evaluates to Θ(n²). Indeed, it is straightforward to use the substitution method to prove that the recurrence T(n) = T(n − 1) + Θ(n) has the solution T(n) = Θ(n²). (See Exercise 7.2-1.)
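For readers who want to see the substitution step spelled out, here is one way it can go (a sketch, not the book's worked solution; the constants c and d are ours, and the Θ(n) term is written as at most dn for n ≥ 1):

% Guess T(n) <= cn^2 and substitute into T(n) = T(n-1) + Theta(n).
\begin{aligned}
T(n) &\le c(n-1)^2 + dn \\
     &=   cn^2 - (2c - d)\,n + c \\
     &\le cn^2 \qquad \text{whenever } c \ge d \text{ and } n \ge 1,
\end{aligned}
% since then (2c - d)n >= cn >= c.  A symmetric argument gives a matching
% lower bound T(n) >= c'n^2, so T(n) = \Theta(n^2).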

Thus, if the partitioning is maximally unbalanced at every recursive level of the algorithm, the running time is Θ(n²). Therefore the worst-case running time of quicksort is no better than that of insertion sort. Moreover, the Θ(n²) running time occurs when the input array is already completely sorted—a common situation in which insertion sort runs in O(n) time.

Best-case partitioning

In the most even possible split, PARTITION produces two subproblems, each of size no more than n/2, since one is of size ⌊n/2⌋ and one of size ⌈n/2⌉ − 1. In this case, quicksort runs much faster. The recurrence for the running time is then

T(n) = 2T(n/2) + Θ(n) ,

where we tolerate the sloppiness from ignoring the floor and ceiling and from subtracting 1. By case 2 of the master theorem (Theorem 4.1), this recurrence has the solution T(n) = Θ(n lg n). By equally balancing the two sides of the partition at every level of the recursion, we get an asymptotically faster algorithm.

Balanced partitioning

The average-case running time of quicksort is much closer to the best case than to the worst case, as the analyses in Section 7.4 will show.


Figure 7.4 A recursion tree for QUICKSORT in which PARTITION always produces a 9-to-1 split, yielding a running time of O(n lg n). Nodes show subproblem sizes, with per-level costs on the right. The per-level costs include the constant c implicit in the Θ(n) term.

The key to understanding why is to understand how the balance of the partitioning is reflected in the recurrence that describes the running time.

Suppose, for example, that the partitioning algorithm always produces a 9-to-1 proportional split, which at first blush seems quite unbalanced. We then obtain the recurrence

T(n) = T(9n/10) + T(n/10) + cn

on the running time of quicksort, where we have explicitly included the constant c hidden in the Θ(n) term. Figure 7.4 shows the recursion tree for this recurrence. Notice that every level of the tree has cost cn, until the recursion reaches a boundary condition at depth log₁₀ n = Θ(lg n), and then the levels have cost at most cn. The recursion terminates at depth log_{10/9} n = Θ(lg n). The total cost of quicksort is therefore O(n lg n). Thus, with a 9-to-1 proportional split at every level of recursion, which intuitively seems quite unbalanced, quicksort runs in O(n lg n) time—asymptotically the same as if the split were right down the middle. Indeed, even a 99-to-1 split yields an O(n lg n) running time. In fact, any split of constant proportionality yields a recursion tree of depth Θ(lg n), where the cost at each level is O(n). The running time is therefore O(n lg n) whenever the split has constant proportionality.
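As a quick check on the depth claim (our own arithmetic, not the book's), the larger subproblem shrinks by a factor of 9/10 per level, so:

% The deepest leaf is reached when (9/10)^d n = 1, i.e.
d = \log_{10/9} n = \frac{\lg n}{\lg(10/9)} \approx 6.6\,\lg n = \Theta(\lg n),
% and since every level costs at most cn, the total cost is at most
cn \cdot \log_{10/9} n = O(n \lg n).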


Figure 7.5 (a) Two levels of a recursion tree for quicksort. The partitioning at the root costs n and produces a "bad" split: two subarrays of sizes 0 and n − 1. The partitioning of the subarray of size n − 1 costs n − 1 and produces a "good" split: subarrays of size (n−1)/2 − 1 and (n−1)/2. (b) A single level of a recursion tree that is very well balanced. In both parts, the partitioning cost for the subproblems shown with elliptical shading is Θ(n). Yet the subproblems remaining to be solved in (a), shown with square shading, are no larger than the corresponding subproblems remaining to be solved in (b).

Intuition for the average case

To develop a clear notion of the randomized behavior of quicksort, we must make an assumption about how frequently we expect to encounter the various inputs. The behavior of quicksort depends on the relative ordering of the values in the array elements given as the input, and not on the particular values in the array. As in our probabilistic analysis of the hiring problem in Section 5.2, we will assume for now that all permutations of the input numbers are equally likely.

When we run quicksort on a random input array, the partitioning is highly unlikely to happen in the same way at every level, as our informal analysis has assumed. We expect that some of the splits will be reasonably well balanced and that some will be fairly unbalanced. For example, Exercise 7.2-6 asks you to show that about 80 percent of the time PARTITION produces a split that is more balanced than 9 to 1, and about 20 percent of the time it produces a split that is less balanced than 9 to 1.

In the average case, PARTITION produces a mix of "good" and "bad" splits. In a recursion tree for an average-case execution of PARTITION, the good and bad splits are distributed randomly throughout the tree. Suppose, for the sake of intuition, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits. Figure 7.5(a) shows the splits at two consecutive levels in the recursion tree. At the root of the tree, the cost is n for partitioning, and the subarrays produced have sizes n − 1 and 0: the worst case. At the next level, the subarray of size n − 1 undergoes best-case partitioning into subarrays of size (n−1)/2 − 1 and (n−1)/2. Let's assume that the boundary-condition cost is 1 for the subarray of size 0.


The combination of the bad split followed by the good split produces three subarrays of sizes 0, (n−1)/2 − 1, and (n−1)/2 at a combined partitioning cost of Θ(n) + Θ(n−1) = Θ(n). Certainly, this situation is no worse than that in Figure 7.5(b), namely a single level of partitioning that produces two subarrays of size (n−1)/2, at a cost of Θ(n). Yet this latter situation is balanced! Intuitively, the Θ(n−1) cost of the bad split can be absorbed into the Θ(n) cost of the good split, and the resulting split is good. Thus, the running time of quicksort, when levels alternate between good and bad splits, is like the running time for good splits alone: still O(n lg n), but with a slightly larger constant hidden by the O-notation. We shall give a rigorous analysis of the expected running time of a randomized version of quicksort in Section 7.4.2.

Exercises

7.2-1 Use the substitution method to prove that the recurrence T(n) = T(n−1) + Θ(n) has the solution T(n) = Θ(n²), as claimed at the beginning of Section 7.2.

7.2-2 What is the running time of QUICKSORT when all elements of array A have the same value?

7.2-3 Show that the running time of QUICKSORT is Θ(n²) when the array A contains distinct elements and is sorted in decreasing order.

7.2-4 Banks often record transactions on an account in order of the times of the transactions, but many people like to receive their bank statements with checks listed in order by check number. People usually write checks in order by check number, and merchants usually cash them with reasonable dispatch. The problem of converting time-of-transaction ordering to check-number ordering is therefore the problem of sorting almost-sorted input. Argue that the procedure INSERTION-SORT would tend to beat the procedure QUICKSORT on this problem.

7.2-5 Suppose that the splits at every level of quicksort are in the proportion 1 − α to α, where 0 < α ≤ 1/2 is a constant. Show that the minimum depth of a leaf in the recursion tree is approximately −lg n / lg α and the maximum depth is approximately −lg n / lg(1 − α). (Don't worry about integer round-off.)

8 Sorting in Linear Time

8.1 Lower bounds for sorting

In a comparison sort, we use only comparisons between elements to gain order information about an input sequence ⟨a1, a2, ..., an⟩. That is, given two elements ai and aj, we perform one of the tests ai < aj, ai ≤ aj, ai = aj, ai ≥ aj, or ai > aj to determine their relative order. We may not inspect the values of the elements or gain order information about them in any other way.

In this section, we assume without loss of generality that all the input elements are distinct. Given this assumption, comparisons of the form ai = aj are useless, so we can assume that no comparisons of this form are made. We also note that the comparisons ai ≤ aj, ai ≥ aj, ai > aj, and ai < aj are all equivalent in that

Figure 8.1 The decision tree for insertion sort operating on three elements. An internal node annotated by i:j indicates a comparison between ai and aj. A leaf annotated by the permutation ⟨π(1), π(2), ..., π(n)⟩ indicates the ordering aπ(1) ≤ aπ(2) ≤ ... ≤ aπ(n). The shaded path indicates the decisions made when sorting the input sequence ⟨a1 = 6, a2 = 8, a3 = 5⟩; the permutation ⟨3, 1, 2⟩ at the leaf indicates that the sorted ordering is a3 = 5 ≤ a1 = 6 ≤ a2 = 8. There are 3! = 6 possible permutations of the input elements, and so the decision tree must have at least 6 leaves.

they yield identical information about the relative order of ai and aj. We therefore assume that all comparisons have the form ai ≤ aj.

The decision-tree model

We can view comparison sorts abstractly in terms of decision trees. A decision tree is a full binary tree that represents the comparisons between elements that are performed by a particular sorting algorithm operating on an input of a given size. Control, data movement, and all other aspects of the algorithm are ignored. Figure 8.1 shows the decision tree corresponding to the insertion sort algorithm from Section 2.1 operating on an input sequence of three elements.

In a decision tree, we annotate each internal node by i:j for some i and j in the range 1 ≤ i, j ≤ n, where n is the number of elements in the input sequence. We also annotate each leaf by a permutation ⟨π(1), π(2), ..., π(n)⟩. (See Section C.1 for background on permutations.) The execution of the sorting algorithm corresponds to tracing a simple path from the root of the decision tree down to a leaf. Each internal node indicates a comparison ai ≤ aj. The left subtree then dictates subsequent comparisons once we know that ai ≤ aj, and the right subtree dictates subsequent comparisons knowing that ai > aj. When we come to a leaf, the sorting algorithm has established the ordering aπ(1) ≤ aπ(2) ≤ ... ≤ aπ(n). Because any correct sorting algorithm must be able to produce each permutation of its input, each of the n! permutations on n elements must appear as one of the leaves of the decision tree for a comparison sort to be correct. Furthermore, each of these leaves must be reachable from the root by a downward path corresponding to an actual


execution of the comparison sort. (We shall refer to such leaves as “reachable.”) Thus, we shall consider only decision trees in which each permutation appears as a reachable leaf.

A lower bound for the worst case

The length of the longest simple path from the root of a decision tree to any of its reachable leaves represents the worst-case number of comparisons that the corresponding sorting algorithm performs. Consequently, the worst-case number of comparisons for a given comparison sort algorithm equals the height of its decision tree. A lower bound on the heights of all decision trees in which each permutation appears as a reachable leaf is therefore a lower bound on the running time of any comparison sort algorithm. The following theorem establishes such a lower bound.

Theorem 8.1 Any comparison sort algorithm requires Ω(n lg n) comparisons in the worst case.

Proof From the preceding discussion, it suffices to determine the height of a decision tree in which each permutation appears as a reachable leaf. Consider a decision tree of height h with l reachable leaves corresponding to a comparison sort on n elements. Because each of the n! permutations of the input appears as some leaf, we have n! ≤ l. Since a binary tree of height h has no more than 2^h leaves, we have

n! ≤ l ≤ 2^h ,

which, by taking logarithms, implies

h ≥ lg(n!)     (since the lg function is monotonically increasing)
  = Ω(n lg n)  (by equation (3.19)) .
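A quick way to see the last step without invoking Stirling's approximation (equation (3.19) in the text gives the sharper statement) is the following sketch, ours rather than the book's:

% Half of the terms of \sum \lg k are at least \lg(n/2):
\lg(n!) = \sum_{k=1}^{n} \lg k
        \;\ge\; \sum_{k=\lceil n/2 \rceil}^{n} \lg k
        \;\ge\; \frac{n}{2}\,\lg\frac{n}{2}
        \;=\; \Omega(n \lg n).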

Corollary 8.2 Heapsort and merge sort are asymptotically optimal comparison sorts.

Proof The O(n lg n) upper bounds on the running times for heapsort and merge sort match the Ω(n lg n) worst-case lower bound from Theorem 8.1.

Exercises

8.1-1 What is the smallest possible depth of a leaf in a decision tree for a comparison sort?


8.1-2 Obtain asymptotically tight bounds on lg(n!) without using Stirling's approximation. Instead, evaluate the summation Σ_{k=1}^{n} lg k using techniques from Section A.2.

8.1-3 Show that there is no comparison sort whose running time is linear for at least half of the n! inputs of length n. What about a fraction of 1/n of the inputs of length n? What about a fraction 1/2^n?

8.1-4 Suppose that you are given a sequence of n elements to sort. The input sequence consists of n/k subsequences, each containing k elements. The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the elements in the preceding subsequence. Thus, all that is needed to sort the whole sequence of length n is to sort the k elements in each of the n/k subsequences. Show an Ω(n lg k) lower bound on the number of comparisons needed to solve this variant of the sorting problem. (Hint: It is not rigorous to simply combine the lower bounds for the individual subsequences.)

8.2 Counting sort

Counting sort assumes that each of the n input elements is an integer in the range 0 to k, for some integer k. When k = O(n), the sort runs in Θ(n) time.

Counting sort determines, for each input element x, the number of elements less than x. It uses this information to place element x directly into its position in the output array. For example, if 17 elements are less than x, then x belongs in output position 18. We must modify this scheme slightly to handle the situation in which several elements have the same value, since we do not want to put them all in the same position.

In the code for counting sort, we assume that the input is an array A[1..n], and thus A.length = n. We require two other arrays: the array B[1..n] holds the sorted output, and the array C[0..k] provides temporary working storage.


Figure 8.2 The operation of COUNTING-SORT on an input array A[1..8], where each element of A is a nonnegative integer no larger than k = 5. (a) The array A and the auxiliary array C after line 5. (b) The array C after line 8. (c)–(e) The output array B and the auxiliary array C after one, two, and three iterations of the loop in lines 10–12, respectively. Only the lightly shaded elements of array B have been filled in. (f) The final sorted output array B.

COUNTING-SORT(A, B, k)
 1  let C[0..k] be a new array
 2  for i = 0 to k
 3      C[i] = 0
 4  for j = 1 to A.length
 5      C[A[j]] = C[A[j]] + 1
 6  // C[i] now contains the number of elements equal to i.
 7  for i = 1 to k
 8      C[i] = C[i] + C[i − 1]
 9  // C[i] now contains the number of elements less than or equal to i.
10  for j = A.length downto 1
11      B[C[A[j]]] = A[j]
12      C[A[j]] = C[A[j]] − 1

Figure 8.2 illustrates counting sort. After the for loop of lines 2–3 initializes the array C to all zeros, the for loop of lines 4–5 inspects each input element. If the value of an input element is i, we increment C[i]. Thus, after line 5, C[i] holds the number of input elements equal to i for each integer i = 0, 1, ..., k. Lines 7–8 determine for each i = 0, 1, ..., k how many input elements are less than or equal to i by keeping a running sum of the array C.


Finally, the for loop of lines 10–12 places each element A[j] into its correct sorted position in the output array B. If all n elements are distinct, then when we first enter line 10, for each A[j], the value C[A[j]] is the correct final position of A[j] in the output array, since there are C[A[j]] elements less than or equal to A[j]. Because the elements might not be distinct, we decrement C[A[j]] each time we place a value A[j] into the B array. Decrementing C[A[j]] causes the next input element with a value equal to A[j], if one exists, to go to the position immediately before A[j] in the output array.

How much time does counting sort require? The for loop of lines 2–3 takes time Θ(k), the for loop of lines 4–5 takes time Θ(n), the for loop of lines 7–8 takes time Θ(k), and the for loop of lines 10–12 takes time Θ(n). Thus, the overall time is Θ(k + n). In practice, we usually use counting sort when we have k = O(n), in which case the running time is Θ(n).

Counting sort beats the lower bound of Ω(n lg n) proved in Section 8.1 because it is not a comparison sort. In fact, no comparisons between input elements occur anywhere in the code. Instead, counting sort uses the actual values of the elements to index into an array. The Ω(n lg n) lower bound for sorting does not apply when we depart from the comparison sort model.

An important property of counting sort is that it is stable: numbers with the same value appear in the output array in the same order as they do in the input array. That is, it breaks ties between two numbers by the rule that whichever number appears first in the input array appears first in the output array. Normally, the property of stability is important only when satellite data are carried around with the element being sorted. Counting sort’s stability is important for another reason: counting sort is often used as a subroutine in radix sort. As we shall see in the next section, in order for radix sort to work correctly, counting sort must be stable.
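For concreteness, here is a 0-based Python sketch of COUNTING-SORT (ours, not the book's code); note how the right-to-left final loop preserves stability when equal keys occur.

# Counting sort for integers in 0..k; returns a sorted copy of A.
def counting_sort(A, k):
    C = [0] * (k + 1)
    for a in A:                    # lines 4-5: count occurrences of each value
        C[a] += 1
    for i in range(1, k + 1):      # lines 7-8: C[i] = number of elements <= i
        C[i] += C[i - 1]
    B = [0] * len(A)
    for a in reversed(A):          # lines 10-12: place elements, right to left
        C[a] -= 1                  # with 0-based output, decrement before placing
        B[C[a]] = a
    return B

assert counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5) == [0, 0, 2, 2, 3, 3, 3, 5]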

Exercises

8.2-1 Using Figure 8.2 as a model, illustrate the operation of COUNTING-SORT on the array A = ⟨6, 0, 2, 0, 1, 3, 4, 6, 1, 3, 2⟩.

8.2-2 Prove that COUNTING-SORT is stable.

8.2-3 Suppose that we were to rewrite the for loop header in line 10 of the COUNTING-SORT as

10  for j = 1 to A.length

Show that the algorithm still works properly. Is the modified algorithm stable?


8.2-4 Describe an algorithm that, given n integers in the range 0 to k, preprocesses its input and then answers any query about how many of the n integers fall into a range [a..b] in O(1) time. Your algorithm should use Θ(n + k) preprocessing time.

8.3 Radix sort

Radix sort is the algorithm used by the card-sorting machines you now find only in computer museums. The cards have 80 columns, and in each column a machine can punch a hole in one of 12 places. The sorter can be mechanically “programmed” to examine a given column of each card in a deck and distribute the card into one of 12 bins depending on which place has been punched. An operator can then gather the cards bin by bin, so that cards with the first place punched are on top of cards with the second place punched, and so on.

For decimal digits, each column uses only 10 places. (The other two places are reserved for encoding nonnumeric characters.) A d-digit number would then occupy a field of d columns. Since the card sorter can look at only one column at a time, the problem of sorting n cards on a d-digit number requires a sorting algorithm.

Intuitively, you might sort numbers on their most significant digit, sort each of the resulting bins recursively, and then combine the decks in order. Unfortunately, since the cards in 9 of the 10 bins must be put aside to sort each of the bins, this procedure generates many intermediate piles of cards that you would have to keep track of. (See Exercise 8.3-5.)

Radix sort solves the problem of card sorting—counterintuitively—by sorting on the least significant digit first. The algorithm then combines the cards into a single deck, with the cards in the 0 bin preceding the cards in the 1 bin preceding the cards in the 2 bin, and so on. Then it sorts the entire deck again on the second-least significant digit and recombines the deck in a like manner. The process continues until the cards have been sorted on all d digits. Remarkably, at that point the cards are fully sorted on the d-digit number. Thus, only d passes through the deck are required to sort. Figure 8.3 shows how radix sort operates on a "deck" of seven 3-digit numbers.

In order for radix sort to work correctly, the digit sorts must be stable. The sort performed by a card sorter is stable, but the operator has to be wary about not changing the order of the cards as they come out of a bin, even though all the cards in a bin have the same digit in the chosen column.


Figure 8.3 The operation of radix sort on a list of seven 3-digit numbers. The leftmost column is the input. The remaining columns show the list after successive sorts on increasingly significant digit positions. Shading indicates the digit position sorted on to produce each list from the previous one.

In a typical computer, which is a sequential random-access machine, we sometimes use radix sort to sort records of information that are keyed by multiple fields. For example, we might wish to sort dates by three keys: year, month, and day. We could run a sorting algorithm with a comparison function that, given two dates, compares years, and if there is a tie, compares months, and if another tie occurs, compares days. Alternatively, we could sort the information three times with a stable sort: first on day, next on month, and finally on year.

The code for radix sort is straightforward. The following procedure assumes that each element in the n-element array A has d digits, where digit 1 is the lowest-order digit and digit d is the highest-order digit.

RADIX-SORT(A, d)
1  for i = 1 to d
2      use a stable sort to sort array A on digit i

Lemma 8.3 Given n d-digit numbers in which each digit can take on up to k possible values, RADIX-SORT correctly sorts these numbers in Θ(d(n + k)) time if the stable sort it uses takes Θ(n + k) time.

Proof The correctness of radix sort follows by induction on the column being sorted (see Exercise 8.3-3). The analysis of the running time depends on the stable sort used as the intermediate sorting algorithm. When each digit is in the range 0 to k−1 (so that it can take on k possible values), and k is not too large, counting sort is the obvious choice. Each pass over n d-digit numbers then takes time Θ(n + k). There are d passes, and so the total time for radix sort is Θ(d(n + k)).
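The following Python sketch (ours, not the book's code) combines the two ideas: each pass is a stable counting sort on one digit, starting with the least significant. The function name and the default base of 10 are our own assumptions.

# LSD radix sort for nonnegative integers; returns a sorted list.
def radix_sort(A, base=10):
    if not A:
        return A
    max_val, exp = max(A), 1
    while max_val // exp > 0:                  # one pass per digit position
        C = [0] * base
        for a in A:                            # count each digit value
            C[(a // exp) % base] += 1
        for i in range(1, base):               # prefix sums give final positions
            C[i] += C[i - 1]
        B = [0] * len(A)
        for a in reversed(A):                  # stable placement, right to left
            d = (a // exp) % base
            C[d] -= 1
            B[C[d]] = a
        A = B
        exp *= base
    return A

assert radix_sort([329, 457, 657, 839, 436, 720, 355]) == \
       [329, 355, 436, 457, 657, 720, 839]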

When d is constant and k = O(n), we can make radix sort run in linear time. More generally, we have some flexibility in how to break each key into digits.


Lemma 8.4 Given n b-bit numbers and any positive integer r ≤ b, RADIX-SORT correctly sorts these numbers in Θ((b/r)(n + 2^r)) time if the stable sort it uses takes Θ(n + k) time for inputs in the range 0 to k.

Proof For a value r ≤ b, we view each key as having d = ⌈b/r⌉ digits of r bits each. Each digit is an integer in the range 0 to 2^r − 1, so that we can use counting sort with k = 2^r − 1. (For example, we can view a 32-bit word as having four 8-bit digits, so that b = 32, r = 8, k = 2^r − 1 = 255, and d = b/r = 4.) Each pass of counting sort takes time Θ(n + k) = Θ(n + 2^r) and there are d passes, for a total running time of Θ(d(n + 2^r)) = Θ((b/r)(n + 2^r)).

For given values of n and b, we wish to choose the value of r, with r ≤ b, that minimizes the expression (b/r)(n + 2^r).

Problems for Chapter 8

8-1 Probabilistic lower bounds on comparison sorting

b. Suppose that T is a decision tree with k > 1 leaves, and let LT and RT be the left and right subtrees of T. Show that D(T) = D(LT) + D(RT) + k.

c. Let d(k) be the minimum value of D(T) over all decision trees T with k > 1 leaves. Show that d(k) = min_{1 ≤ i ≤ k−1} {d(i) + d(k − i) + k}. (Hint: Consider a decision tree T with k leaves that achieves the minimum. Let i₀ be the number of leaves in LT and k − i₀ the number of leaves in RT.)

d. Prove that for a given value of k > 1 and i in the range 1 ≤ i ≤ k − 1, the function i lg i + (k − i) lg(k − i) is minimized at i = k/2. Conclude that d(k) = Ω(k lg k).

e. Prove that D(TA) = Ω(n! lg(n!)), and conclude that the average-case time to sort n elements is Ω(n lg n).

Now, consider a randomized comparison sort B. We can extend the decision-tree model to handle randomization by incorporating two kinds of nodes: ordinary comparison nodes and "randomization" nodes. A randomization node models a random choice of the form RANDOM(1, r) made by algorithm B; the node has r children, each of which is equally likely to be chosen during an execution of the algorithm.

f. Show that for any randomized comparison sort B , there exists a deterministic comparison sort A whose expected number of comparisons is no more than those made by B .


8-2 Sorting in place in linear time
Suppose that we have an array of n data records to sort and that the key of each record has the value 0 or 1. An algorithm for sorting such a set of records might possess some subset of the following three desirable characteristics:

1. The algorithm runs in O.n/ time.

2. The algorithm is stable.

3. The algorithm sorts in place, using no more than a constant amount of storage space in addition to the original array.

a. Give an algorithm that satisfies criteria 1 and 2 above.

b. Give an algorithm that satisfies criteria 1 and 3 above.

c. Give an algorithm that satisfies criteria 2 and 3 above.

d. Can you use any of your sorting algorithms from parts (a)–(c) as the sorting method used in line 2 of RADIX-SORT, so that RADIX-SORT sorts n records with b-bit keys in O.bn/ time? Explain how or why not.

e. Suppose that the n records have keys in the range from 1 to k. Show how to modify counting sort so that it sorts the records in place in O(n + k) time. You may use O(k) storage outside the input array. Is your algorithm stable? (Hint: How would you do it for k = 3?)

8-3 Sorting variable-length items
a. You are given an array of integers, where different integers may have different numbers of digits, but the total number of digits over all the integers in the array is n. Show how to sort the array in O(n) time.

b. You are given an array of strings, where different strings may have different numbers of characters, but the total number of characters over all the strings is n. Show how to sort the strings in O.n/ time.

(Note that the desired order here is the standard alphabetical order; for example, a < ab < b.)

8-7 The 0-1 sorting lemma and columnsort
A compare-exchange operation on two array elements A[i] and A[j], where i < j, has the form

COMPARE-EXCHANGE(A, i, j)
1  if A[i] > A[j]
2      exchange A[i] with A[j]

After the compare-exchange operation, we know that A[i] ≤ A[j]. An oblivious compare-exchange algorithm operates solely by a sequence of

prespecified compare-exchange operations. The indices of the positions compared in the sequence must be determined in advance, and although they can depend on the number of elements being sorted, they cannot depend on the values being sorted, nor can they depend on the result of any prior compare-exchange operation. For example, here is insertion sort expressed as an oblivious compare-exchange algorithm:

INSERTION-SORT(A)
1  for j = 2 to A.length
2      for i = j − 1 downto 1
3          COMPARE-EXCHANGE(A, i, i + 1)
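As a hedged illustration (not from the text), the same fixed schedule of compare-exchange operations can be written in Python; compare_exchange and oblivious_insertion_sort are names of our own choosing, and the indices compared depend only on the length of the array, never on its contents.

```python
def compare_exchange(A, i, j):
    """After this call, A[i] <= A[j]."""
    if A[i] > A[j]:
        A[i], A[j] = A[j], A[i]

def oblivious_insertion_sort(A):
    """Insertion sort as a prespecified sequence of compare-exchanges
    (a 0-indexed version of the pseudocode above)."""
    n = len(A)
    for j in range(1, n):
        for i in range(j - 1, -1, -1):
            compare_exchange(A, i, i + 1)
    return A

print(oblivious_insertion_sort([5, 2, 4, 6, 1, 3]))
```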


The 0-1 sorting lemma provides a powerful way to prove that an oblivious compare-exchange algorithm produces a sorted result. It states that if an oblivi- ous compare-exchange algorithm correctly sorts all input sequences consisting of only 0s and 1s, then it correctly sorts all inputs containing arbitrary values.

You will prove the 0-1 sorting lemma by proving its contrapositive: if an oblivious compare-exchange algorithm fails to sort an input containing arbitrary values, then it fails to sort some 0-1 input. Assume that an oblivious compare-exchange algorithm X fails to correctly sort the array A[1..n]. Let A[p] be the smallest value in A that algorithm X puts into the wrong location, and let A[q] be the value that algorithm X moves to the location into which A[p] should have gone. Define an array B[1..n] of 0s and 1s as follows:

B[i] = 0 if A[i] ≤ A[p] ,
       1 if A[i] > A[p] .

a. Argue that A[q] > A[p], so that B[p] = 0 and B[q] = 1.

b. To complete the proof of the 0-1 sorting lemma, prove that algorithm X fails to sort array B correctly.

Now you will use the 0-1 sorting lemma to prove that a particular sorting algorithm works correctly. The algorithm, columnsort, works on a rectangular array of n elements. The array has r rows and s columns (so that n = rs), subject to three restrictions:

• r must be even,
• s must be a divisor of r, and
• r ≥ 2s².

When columnsort completes, the array is sorted in column-major order: reading down the columns, from left to right, the elements monotonically increase.

Columnsort operates in eight steps, regardless of the value of n. The odd steps are all the same: sort each column individually. Each even step is a fixed permuta- tion. Here are the steps:

1. Sort each column.

2. Transpose the array, but reshape it back to r rows and s columns. In other words, turn the leftmost column into the top r=s rows, in order; turn the next column into the next r=s rows, in order; and so on.

3. Sort each column.

4. Perform the inverse of the permutation performed in step 2.


[Figure 8.5: nine 6 × 3 arrays, panels (a)–(i), showing the contents of the example array after each step of columnsort; see the caption below.]

Figure 8.5 The steps of columnsort. (a) The input array with 6 rows and 3 columns. (b) After sorting each column in step 1. (c) After transposing and reshaping in step 2. (d) After sorting each column in step 3. (e) After performing step 4, which inverts the permutation from step 2. (f) After sorting each column in step 5. (g) After shifting by half a column in step 6. (h) After sorting each column in step 7. (i) After performing step 8, which inverts the permutation from step 6. The array is now sorted in column-major order.

5. Sort each column.

6. Shift the top half of each column into the bottom half of the same column, and shift the bottom half of each column into the top half of the next column to the right. Leave the top half of the leftmost column empty. Shift the bottom half of the last column into the top half of a new rightmost column, and leave the bottom half of this new column empty.

7. Sort each column.

8. Perform the inverse of the permutation performed in step 6.

Figure 8.5 shows an example of the steps of columnsort with r = 6 and s = 3. (Even though this example violates the requirement that r ≥ 2s², it happens to work.)

c. Argue that we can treat columnsort as an oblivious compare-exchange algo- rithm, even if we do not know what sorting method the odd steps use.

Although it might seem hard to believe that columnsort actually sorts, you will use the 0-1 sorting lemma to prove that it does. The 0-1 sorting lemma applies because we can treat columnsort as an oblivious compare-exchange algorithm. A


couple of definitions will help you apply the 0-1 sorting lemma. We say that an area of an array is clean if we know that it contains either all 0s or all 1s. Otherwise, the area might contain mixed 0s and 1s, and it is dirty. From here on, assume that the input array contains only 0s and 1s, and that we can treat it as an array with r rows and s columns.

d. Prove that after steps 1–3, the array consists of some clean rows of 0s at the top, some clean rows of 1s at the bottom, and at most s dirty rows between them.

e. Prove that after step 4, the array, read in column-major order, starts with a clean area of 0s, ends with a clean area of 1s, and has a dirty area of at most s² elements in the middle.

f. Prove that steps 5–8 produce a fully sorted 0-1 output. Conclude that column- sort correctly sorts all inputs containing arbitrary values.

g. Now suppose that s does not divide r. Prove that after steps 1–3, the array consists of some clean rows of 0s at the top, some clean rows of 1s at the bottom, and at most 2s − 1 dirty rows between them. How large must r be, compared with s, for columnsort to correctly sort when s does not divide r?

h. Suggest a simple change to step 1 that allows us to maintain the requirement that r ≥ 2s² even when s does not divide r, and prove that with your change, columnsort correctly sorts.

Chapter notes

The decision-tree model for studying comparison sorts was introduced by Ford and Johnson [110]. Knuth’s comprehensive treatise on sorting [211] covers many variations on the sorting problem, including the information-theoretic lower bound on the complexity of sorting given here. Ben-Or [39] studied lower bounds for sorting using generalizations of the decision-tree model.

Knuth credits H. H. Seward with inventing counting sort in 1954, as well as with the idea of combining counting sort with radix sort. Radix sorting starting with the least significant digit appears to be a folk algorithm widely used by operators of mechanical card-sorting machines. According to Knuth, the first published refer- ence to the method is a 1929 document by L. J. Comrie describing punched-card equipment. Bucket sorting has been in use since 1956, when the basic idea was proposed by E. J. Isaac and R. C. Singleton [188].

Munro and Raman [263] give a stable sorting algorithm that performs O(n^{1+ε}) comparisons in the worst case, where 0 < ε is a fixed constant.

MINIMUM(A)
1  min = A[1]
2  for i = 2 to A.length
3      if min > A[i]
4          min = A[i]
5  return min

We can, of course, find the maximum with n − 1 comparisons as well.

Is this the best we can do? Yes, since we can obtain a lower bound of n − 1 comparisons for the problem of determining the minimum. Think of any algorithm that determines the minimum as a tournament among the elements. Each comparison is a match in the tournament in which the smaller of the two elements wins. Observing that every element except the winner must lose at least one match, we conclude that n − 1 comparisons are necessary to determine the minimum. Hence, the algorithm MINIMUM is optimal with respect to the number of comparisons performed.

Simultaneous minimum and maximum

In some applications, we must find both the minimum and the maximum of a set of n elements. For example, a graphics program may need to scale a set of .x; y/ data to fit onto a rectangular display screen or other graphical output device. To do so, the program must first determine the minimum and maximum value of each coordinate.

At this point, it should be obvious how to determine both the minimum and the maximum of n elements using Θ(n) comparisons, which is asymptotically optimal: simply find the minimum and maximum independently, using n − 1 comparisons for each, for a total of 2n − 2 comparisons.

In fact, we can find both the minimum and the maximum using at most 3⌊n/2⌋ comparisons. We do so by maintaining both the minimum and maximum elements seen thus far. Rather than processing each element of the input by comparing it against the current minimum and maximum, at a cost of 2 comparisons per element,


we process elements in pairs. We compare pairs of elements from the input first with each other, and then we compare the smaller with the current minimum and the larger to the current maximum, at a cost of 3 comparisons for every 2 elements.

How we set up initial values for the current minimum and maximum depends on whether n is odd or even. If n is odd, we set both the minimum and maximum to the value of the first element, and then we process the rest of the elements in pairs. If n is even, we perform 1 comparison on the first 2 elements to determine the initial values of the minimum and maximum, and then process the rest of the elements in pairs as in the case for odd n.

Let us analyze the total number of comparisons. If n is odd, then we perform 3⌊n/2⌋ comparisons. If n is even, we perform 1 initial comparison followed by 3(n − 2)/2 comparisons, for a total of 3n/2 − 2. Thus, in either case, the total number of comparisons is at most 3⌊n/2⌋.
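As a rough sketch (not part of the text), the pairwise scheme above can be written in Python as follows; min_and_max is an illustrative name.

```python
def min_and_max(A):
    """Find the minimum and maximum of A using about 3*floor(n/2) comparisons,
    processing the remaining elements in pairs as described above."""
    n = len(A)
    if n % 2 == 1:                       # odd n: both start at the first element
        lo = hi = A[0]
        start = 1
    else:                                # even n: one comparison sets up lo and hi
        lo, hi = (A[0], A[1]) if A[0] < A[1] else (A[1], A[0])
        start = 2
    for i in range(start, n, 2):         # 3 comparisons per remaining pair
        small, big = (A[i], A[i + 1]) if A[i] < A[i + 1] else (A[i + 1], A[i])
        if small < lo:
            lo = small
        if big > hi:
            hi = big
    return lo, hi

print(min_and_max([8, 3, 6, 1, 9, 4, 7]))   # (1, 9)
```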

Exercises

9.1-1 Show that the second smallest of n elements can be found with n + ⌈lg n⌉ − 2 comparisons in the worst case. (Hint: Also find the smallest element.)

9.1-2 ? Prove the lower bound of ⌈3n/2⌉ − 2 comparisons in the worst case to find both the maximum and minimum of n numbers. (Hint: Consider how many numbers are potentially either the maximum or minimum, and investigate how a comparison affects these counts.)

9.2 Selection in expected linear time

The general selection problem appears more difficult than the simple problem of finding a minimum. Yet, surprisingly, the asymptotic running time for both problems is the same: Θ(n). In this section, we present a divide-and-conquer algorithm for the selection problem. The algorithm RANDOMIZED-SELECT is modeled after the quicksort algorithm of Chapter 7. As in quicksort, we partition the input array recursively. But unlike quicksort, which recursively processes both sides of the partition, RANDOMIZED-SELECT works on only one side of the partition. This difference shows up in the analysis: whereas quicksort has an expected running time of Θ(n lg n), the expected running time of RANDOMIZED-SELECT is Θ(n), assuming that the elements are distinct.


RANDOMIZED-SELECT uses the procedure RANDOMIZED-PARTITION intro- duced in Section 7.3. Thus, like RANDOMIZED-QUICKSORT, it is a randomized al- gorithm, since its behavior is determined in part by the output of a random-number generator. The following code for RANDOMIZED-SELECT returns the i th smallest element of the array AŒp : : r�.

RANDOMIZED-SELECT(A, p, r, i)
1  if p == r
2      return A[p]
3  q = RANDOMIZED-PARTITION(A, p, r)
4  k = q − p + 1
5  if i == k           // the pivot value is the answer
6      return A[q]
7  elseif i < k
8      return RANDOMIZED-SELECT(A, p, q − 1, i)
9  else return RANDOMIZED-SELECT(A, q + 1, r, i − k)

Line 4 computes k, the number of elements in the subarray A[p..q], that is, the number of elements on the low side of the partition plus one for the pivot. If i = k, then A[q] is the ith smallest element, and line 6 returns it. If i < k, then the desired element lies on the low side of the partition, and line 8 finds it recursively in A[p..q − 1]. If i > k, however, then the desired element lies on the high side of the partition. Since we already know k values that are smaller than the ith smallest element of A[p..r] (namely, the elements of A[p..q]), the desired element is the (i − k)th smallest element of A[q + 1..r], which line 9 finds recursively. The code appears to allow recursive calls to subarrays with 0 elements, but Exercise 9.2-1 asks you to show that this situation cannot happen.
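For concreteness, here is a Python sketch of the same procedure (not the text's pseudocode); randomized_select and randomized_partition mirror the names above, and the elements are assumed distinct.

```python
import random

def randomized_partition(A, p, r):
    """Partition A[p..r] around a uniformly chosen pivot; return the pivot's final index."""
    s = random.randint(p, r)
    A[s], A[r] = A[r], A[s]
    pivot = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def randomized_select(A, p, r, i):
    """Return the i-th smallest element (1-indexed rank) of A[p..r]."""
    if p == r:
        return A[p]
    q = randomized_partition(A, p, r)
    k = q - p + 1                        # rank of the pivot within A[p..r]
    if i == k:                           # the pivot value is the answer
        return A[q]
    elif i < k:
        return randomized_select(A, p, q - 1, i)
    else:
        return randomized_select(A, q + 1, r, i - k)

print(randomized_select([3, 2, 9, 0, 7, 5, 4], 0, 6, 3))   # 3rd smallest -> 3
```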

The worst-case running time for RANDOMIZED-SELECT is Θ(n²), even to find the minimum, because we could be extremely unlucky and always partition around the largest remaining element, and partitioning takes Θ(n) time. We will see that


the algorithm has a linear expected running time, though, and because it is random- ized, no particular input elicits the worst-case behavior.

To analyze the expected running time of RANDOMIZED-SELECT, we let the running time on an input array A[p..r] of n elements be a random variable that we denote by T(n), and we obtain an upper bound on E[T(n)] as follows. The procedure RANDOMIZED-PARTITION is equally likely to return any element as the pivot. Therefore, for each k such that 1 ≤ k ≤ n, the subarray A[p..q] has k elements (all less than or equal to the pivot) with probability 1/n. For k = 1, 2, …, n, we define indicator random variables X_k where

X_k = I{the subarray A[p..q] has exactly k elements} ,

and so, assuming that the elements are distinct, we have

E[X_k] = 1/n .    (9.1)

When we call RANDOMIZED-SELECT and choose A[q] as the pivot element, we do not know, a priori, if we will terminate immediately with the correct answer, recurse on the subarray A[p..q − 1], or recurse on the subarray A[q + 1..r]. This decision depends on where the ith smallest element falls relative to A[q]. Assuming that T(n) is monotonically increasing, we can upper-bound the time needed for the recursive call by the time needed for the recursive call on the largest possible input. In other words, to obtain an upper bound, we assume that the ith element is always on the side of the partition with the greater number of elements. For a given call of RANDOMIZED-SELECT, the indicator random variable X_k has the value 1 for exactly one value of k, and it is 0 for all other k. When X_k = 1, the two subarrays on which we might recurse have sizes k − 1 and n − k. Hence, we have the recurrence

T(n) ≤ Σ_{k=1}^{n} X_k · (T(max(k − 1, n − k)) + O(n))
     = Σ_{k=1}^{n} X_k · T(max(k − 1, n − k)) + O(n) .


Taking expected values, we have

E[T(n)]
  ≤ E[ Σ_{k=1}^{n} X_k · T(max(k − 1, n − k)) + O(n) ]
  = Σ_{k=1}^{n} E[X_k · T(max(k − 1, n − k))] + O(n)        (by linearity of expectation)
  = Σ_{k=1}^{n} E[X_k] · E[T(max(k − 1, n − k))] + O(n)     (by equation (C.24))
  = Σ_{k=1}^{n} (1/n) · E[T(max(k − 1, n − k))] + O(n)      (by equation (9.1)) .

In order to apply equation (C.24), we rely on X_k and T(max(k − 1, n − k)) being independent random variables. Exercise 9.2-2 asks you to justify this assertion.

Let us consider the expression max(k − 1, n − k). We have

max(k − 1, n − k) = k − 1 if k > ⌈n/2⌉ ,
                    n − k if k ≤ ⌈n/2⌉ .

If n is even, each term from T(⌈n/2⌉) up to T(n − 1) appears exactly twice in the summation, and if n is odd, all these terms appear twice and T(⌊n/2⌋) appears once. Thus, we have

E[T(n)] ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} E[T(k)] + O(n) .

We show that E[T(n)] = O(n) by substitution. Assume that E[T(n)] ≤ cn for some constant c that satisfies the initial conditions of the recurrence. We assume that T(n) = O(1) for n less than some constant; we shall pick this constant later. We also pick a constant a such that the function described by the O(n) term above (which describes the non-recursive component of the running time of the algorithm) is bounded from above by an for all n > 0. Using this inductive hypothesis, we have

E[T(n)] ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} ck + an
        = (2c/n) ( Σ_{k=1}^{n−1} k − Σ_{k=1}^{⌊n/2⌋−1} k ) + an
        = (2c/n) ( (n − 1)n/2 − (⌊n/2⌋ − 1)⌊n/2⌋/2 ) + an
        ≤ (2c/n) ( (n − 1)n/2 − (n/2 − 2)(n/2 − 1)/2 ) + an
        = (2c/n) ( (n² − n)/2 − (n²/4 − 3n/2 + 2)/2 ) + an
        = (c/n) ( 3n²/4 + n/2 − 2 ) + an
        = c ( 3n/4 + 1/2 − 2/n ) + an
        ≤ 3cn/4 + c/2 + an
        = cn − (cn/4 − c/2 − an) .

In order to complete the proof, we need to show that for sufficiently large n, this last expression is at most cn or, equivalently, that cn/4 − c/2 − an ≥ 0. If we add c/2 to both sides and factor out n, we get n(c/4 − a) ≥ c/2. As long as we choose the constant c so that c/4 − a > 0, i.e., c > 4a, we can divide both sides by c/4 − a, giving

n ≥ (c/2) / (c/4 − a) = 2c/(c − 4a) .

Thus, if we assume that T(n) = O(1) for n < 2c/(c − 4a), we have E[T(n)] = O(n).

9.3 Selection in worst-case linear time

The SELECT algorithm determines the ith smallest of an input array of n > 1 distinct elements by executing the following steps. (If n = 1, then SELECT merely returns its only input value as the ith smallest.)

1. Divide the n elements of the input array into ⌊n/5⌋ groups of 5 elements each and at most one group made up of the remaining n mod 5 elements.

2. Find the median of each of the ⌈n/5⌉ groups by first insertion-sorting the elements of each group (of which there are at most 5) and then picking the median from the sorted list of group elements.

3. Use SELECT recursively to find the median x of the ⌈n/5⌉ medians found in step 2. (If there are an even number of medians, then by our convention, x is the lower median.)

4. Partition the input array around the median-of-medians x using the modified version of PARTITION. Let k be one more than the number of elements on the low side of the partition, so that x is the kth smallest element and there are n − k elements on the high side of the partition.

5. If i = k, then return x. Otherwise, use SELECT recursively to find the ith smallest element on the low side if i < k, or the (i − k)th smallest element on the high side if i > k.
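The five steps above can be sketched compactly in Python (not part of the text); this version copies sublists rather than partitioning in place, so it is only an illustration of the structure, and the name select is ours.

```python
def select(A, i):
    """Return the i-th smallest (1-indexed rank) of the distinct values in list A."""
    n = len(A)
    if n == 1:
        return A[0]
    # Steps 1-2: groups of 5; the lower median of each group via a small sort.
    medians = [sorted(A[j:j + 5])[(len(A[j:j + 5]) - 1) // 2] for j in range(0, n, 5)]
    # Step 3: recursively find the (lower) median of the medians.
    x = select(medians, (len(medians) + 1) // 2)
    # Step 4: partition around x.
    low  = [a for a in A if a < x]
    high = [a for a in A if a > x]
    k = len(low) + 1                     # x is the k-th smallest
    # Step 5: return x or recurse into the side containing the answer.
    if i == k:
        return x
    elif i < k:
        return select(low, i)
    else:
        return select(high, i - k)

print(select([12, 3, 17, 8, 5, 20, 1, 9, 14, 6, 2], 6))   # 6th smallest -> 8
```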

To analyze the running time of SELECT, we first determine a lower bound on the number of elements that are greater than the partitioning element x. Figure 9.1 helps us to visualize this bookkeeping. At least half of the medians found in



Figure 9.1 Analysis of the algorithm SELECT. The n elements are represented by small circles, and each group of 5 elements occupies a column. The medians of the groups are whitened, and the median-of-medians x is labeled. (When finding the median of an even number of elements, we use the lower median.) Arrows go from larger elements to smaller, from which we can see that 3 out of every full group of 5 elements to the right of x are greater than x, and 3 out of every group of 5 elements to the left of x are less than x. The elements known to be greater than x appear on a shaded background.

step 2 are greater than or equal to the median-of-medians x.¹ Thus, at least half of the ⌈n/5⌉ groups contribute at least 3 elements that are greater than x, except for the one group that has fewer than 5 elements if 5 does not divide n exactly, and the one group containing x itself. Discounting these two groups, it follows that the number of elements greater than x is at least

3( ⌈(1/2)⌈n/5⌉⌉ − 2 ) ≥ 3n/10 − 6 .

Similarly, at least 3n/10 − 6 elements are less than x. Thus, in the worst case, step 5 calls SELECT recursively on at most 7n/10 + 6 elements.

We can now develop a recurrence for the worst-case running time T(n) of the algorithm SELECT. Steps 1, 2, and 4 take O(n) time. (Step 2 consists of O(n) calls of insertion sort on sets of size O(1).) Step 3 takes time T(⌈n/5⌉), and step 5 takes time at most T(7n/10 + 6), assuming that T is monotonically increasing. We make the assumption, which seems unmotivated at first, that any input of fewer than 140 elements requires O(1) time; the origin of the magic constant 140 will be clear shortly. We can therefore obtain the recurrence

1Because of our assumption that the numbers are distinct, all medians except x are either greater than or less than x.


T(n) ≤ O(1) if n < 140 ,
       T(⌈n/5⌉) + T(7n/10 + 6) + O(n) if n ≥ 140 .

We show that the running time of SELECT is linear by substitution. We pick a constant a such that the function described by the O(n) term above (the non-recursive component of the running time) is bounded above by an for all n > 0. We begin by assuming that T(n) ≤ cn for some suitably large constant c and all n < 140; this assumption holds if c is large enough. Substituting this inductive hypothesis into the right-hand side of the recurrence yields

T(n) ≤ c⌈n/5⌉ + c(7n/10 + 6) + an
     ≤ cn/5 + c + 7cn/10 + 6c + an
     = 9cn/10 + 7c + an
     = cn + (−cn/10 + 7c + an) ,

which is at most cn if

−cn/10 + 7c + an ≤ 0 .    (9.2)

Inequality (9.2) is equivalent to the inequality c ≥ 10a(n/(n − 70)) when n > 70. Because we assume that n ≥ 140, we have n/(n − 70) ≤ 2, and so choosing c ≥ 20a will satisfy inequality (9.2). (Note that there is nothing special about the constant 140; we could replace it by any integer strictly greater than 70 and then choose c accordingly.) The worst-case running time of SELECT is therefore linear.

As in a comparison sort (see Section 8.1), SELECT and RANDOMIZED-SELECT determine information about the relative order of elements only by comparing elements. Recall from Chapter 8 that sorting requires Ω(n lg n) time in the comparison model, even on average (see Problem 8-1). The linear-time sorting algorithms in Chapter 8 make assumptions about the input. In contrast, the linear-time selection algorithms in this chapter do not require any assumptions about the input. They are not subject to the Ω(n lg n) lower bound because they manage to solve the selection problem without sorting. Thus, solving the selection problem by sorting and indexing, as presented in the introduction to this chapter, is asymptotically inefficient.


Exercises

9.3-1 In the algorithm SELECT, the input elements are divided into groups of 5. Will the algorithm work in linear time if they are divided into groups of 7? Argue that SELECT does not run in linear time if groups of 3 are used.

9.3-2 Analyze SELECT to show that if n ≥ 140, then at least ⌈n/4⌉ elements are greater than the median-of-medians x and at least ⌈n/4⌉ elements are less than x.

9.3-3 Show how quicksort can be made to run in O(n lg n) time in the worst case, assuming that all elements are distinct.

9.3-4 ? Suppose that an algorithm uses only comparisons to find the ith smallest element in a set of n elements. Show that it can also find the i − 1 smaller elements and the n − i larger elements without performing any additional comparisons.

9.3-5 Suppose that you have a “black-box” worst-case linear-time median subroutine. Give a simple, linear-time algorithm that solves the selection problem for an arbitrary order statistic.

9.3-6 The kth quantiles of an n-element set are the k − 1 order statistics that divide the sorted set into k equal-sized sets (to within 1). Give an O(n lg k)-time algorithm to list the kth quantiles of a set.

9.3-7 Describe an O(n)-time algorithm that, given a set S of n distinct numbers and a positive integer k ≤ n, determines the k numbers in S that are closest to the median of S.

9.3-8 Let X[1..n] and Y[1..n] be two arrays, each containing n numbers already in sorted order. Give an O(lg n)-time algorithm to find the median of all 2n elements in arrays X and Y.

9.3-9 Professor Olay is consulting for an oil company, which is planning a large pipeline running east to west through an oil field of n wells. The company wants to connect


Figure 9.2 Professor Olay needs to determine the position of the east-west oil pipeline that mini- mizes the total length of the north-south spurs.

a spur pipeline from each well directly to the main pipeline along a shortest route (either north or south), as shown in Figure 9.2. Given the x- and y-coordinates of the wells, how should the professor pick the optimal location of the main pipeline, which would be the one that minimizes the total length of the spurs? Show how to determine the optimal location in linear time.

Problems

9-1 Largest i numbers in sorted order Given a set of n numbers, we wish to find the i largest in sorted order using a comparison-based algorithm. Find the algorithm that implements each of the fol- lowing methods with the best asymptotic worst-case running time, and analyze the running times of the algorithms in terms of n and i .

a. Sort the numbers, and list the i largest.

b. Build a max-priority queue from the numbers, and call EXTRACT-MAX i times.

c. Use an order-statistic algorithm to find the i th largest number, partition around that number, and sort the i largest numbers.


9-2 Weighted median For n distinct elements x_1, x_2, …, x_n with positive weights w_1, w_2, …, w_n such that Σ_{i=1}^{n} w_i = 1, the weighted (lower) median is the element x_k satisfying

Σ_{x_i < x_k} w_i < 1/2   and   Σ_{x_i > x_k} w_i ≤ 1/2 .

For example, if the elements are 0.1, 0.35, 0.05, 0.1, 0.15, 0.05, 0.2 and each element equals its weight (that is, w_i = x_i for i = 1, 2, …, 7), then the median is 0.1, but the weighted median is 0.2.

a. Argue that the median of x_1, x_2, …, x_n is the weighted median of the x_i with weights w_i = 1/n for i = 1, 2, …, n.

b. Show how to compute the weighted median of n elements in O(n lg n) worst-case time using sorting.

c. Show how to compute the weighted median in Θ(n) worst-case time using a linear-time median algorithm such as SELECT from Section 9.3.

The post-office location problem is defined as follows. We are given n points p_1, p_2, …, p_n with associated weights w_1, w_2, …, w_n. We wish to find a point p (not necessarily one of the input points) that minimizes the sum Σ_{i=1}^{n} w_i d(p, p_i), where d(a, b) is the distance between points a and b.

d. Argue that the weighted median is a best solution for the 1-dimensional post-office location problem, in which points are simply real numbers and the distance between points a and b is d(a, b) = |a − b|.

e. Find the best solution for the 2-dimensional post-office location problem, in which the points are (x, y) coordinate pairs and the distance between points a = (x_1, y_1) and b = (x_2, y_2) is the Manhattan distance given by d(a, b) = |x_1 − x_2| + |y_1 − y_2|.

9-3 Small order statistics We showed that the worst-case number T(n) of comparisons used by SELECT to select the ith order statistic from n numbers satisfies T(n) = Θ(n), but the constant hidden by the Θ-notation is rather large. When i is small relative to n, we can implement a different procedure that uses SELECT as a subroutine but makes fewer comparisons in the worst case.


a. Describe an algorithm that uses U_i(n) comparisons to find the ith smallest of n elements, where

U_i(n) = T(n) if i ≥ n/2 ,
         ⌊n/2⌋ + U_i(⌈n/2⌉) + T(2i) otherwise .

(Hint: Begin with ⌊n/2⌋ disjoint pairwise comparisons, and recurse on the set containing the smaller element from each pair.)

b. Show that, if i < n/2, then U_i(n) = n + O(T(2i) lg(n/i)).

COMPACT-LIST-SEARCH(L, n, k)
1  i = L
2  while i ≠ NIL and key[i] < k
3      j = RANDOM(1, n)
4      if key[i] < key[j] and key[j] ≤ k
5          i = j
6          if key[i] == k
7              return i
8      i = next[i]
9  if i == NIL or key[i] > k
10     return NIL
11 else return i

If we ignore lines 3–7 of the procedure, we have an ordinary algorithm for searching a sorted linked list, in which index i points to each position of the list in

1Because we have defined a mergeable heap to support MINIMUM and EXTRACT-MIN, we can also refer to it as a mergeable min-heap. Alternatively, if it supported MAXIMUM and EXTRACT-MAX, it would be a mergeable max-heap.


turn. The search terminates once the index i “falls off” the end of the list or once key[i] ≥ k. In the latter case, if key[i] = k, clearly we have found a key with the value k. If, however, key[i] > k, then we will never find a key with the value k, and so terminating the search was the right thing to do.

Lines 3–7 attempt to skip ahead to a randomly chosen position j. Such a skip benefits us if key[j] is larger than key[i] and no larger than k; in such a case, j marks a position in the list that i would have to reach during an ordinary list search. Because the list is compact, we know that any choice of j between 1 and n indexes some object in the list rather than a slot on the free list.

Instead of analyzing the performance of COMPACT-LIST-SEARCH directly, we shall analyze a related algorithm, COMPACT-LIST-SEARCH', which executes two separate loops. This algorithm takes an additional parameter t which determines an upper bound on the number of iterations of the first loop.

COMPACT-LIST-SEARCH'(L, n, k, t)
1  i = L
2  for q = 1 to t
3      j = RANDOM(1, n)
4      if key[i] < key[j] and key[j] ≤ k
5          i = j
6          if key[i] == k
7              return i
8  while i ≠ NIL and key[i] < k
9      i = next[i]
10 if i == NIL or key[i] > k
11     return NIL
12 else return i
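As a rough sketch under the assumptions above (key and next stored as 1-indexed arrays, None marking the end of the list), the two-phase procedure can be written in Python; compact_list_search_prime is an illustrative name.

```python
import random

def compact_list_search_prime(key, nxt, head, n, k, t):
    """Search a sorted compact list for k: first t random skip attempts,
    then an ordinary walk through the next pointers."""
    i = head
    for _ in range(t):                     # phase 1: random skips
        j = random.randint(1, n)
        if key[i] < key[j] <= k:           # j lies ahead of i but not past k
            i = j
            if key[i] == k:
                return i
    while i is not None and key[i] < k:    # phase 2: linear walk
        i = nxt[i]
    if i is None or key[i] > k:
        return None
    return i

# Example: the sorted list 2 -> 5 -> 8 -> 11 stored in positions 1..4 (index 0 unused).
key = [None, 2, 5, 8, 11]
nxt = [None, 2, 3, 4, None]
print(compact_list_search_prime(key, nxt, head=1, n=4, k=8, t=2))   # position 3
```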

To compare the execution of the algorithms COMPACT-LIST-SEARCH(L, n, k) and COMPACT-LIST-SEARCH'(L, n, k, t), assume that the sequence of integers returned by the calls of RANDOM(1, n) is the same for both algorithms.

a. Suppose that COMPACT-LIST-SEARCH(L, n, k) takes t iterations of the while loop of lines 2–8. Argue that COMPACT-LIST-SEARCH'(L, n, k, t) returns the same answer and that the total number of iterations of both the for and while loops within COMPACT-LIST-SEARCH' is at least t.

In the call COMPACT-LIST-SEARCH'(L, n, k, t), let X_t be the random variable that describes the distance in the linked list (that is, through the chain of next pointers) from position i to the desired key k after t iterations of the for loop of lines 2–7 have occurred.


b. Argue that the expected running time of COMPACT-LIST-SEARCH'(L, n, k, t) is O(t + E[X_t]).

c. Show that E[X_t] ≤ Σ_{r=1}^{n} (1 − r/n)^t. (Hint: Use equation (C.25).)

d. Show that Σ_{r=0}^{n−1} r^t ≤ n^{t+1}/(t + 1).

e. Prove that E[X_t] ≤ n/(t + 1).

f. Show that COMPACT-LIST-SEARCH'(L, n, k, t) runs in O(t + n/t) expected time.

g. Conclude that COMPACT-LIST-SEARCH runs in O(√n) expected time.

h. Why do we assume that all keys are distinct in COMPACT-LIST-SEARCH? Argue that random skips do not necessarily help asymptotically when the list contains repeated key values.

Chapter notes

Aho, Hopcroft, and Ullman [6] and Knuth [209] are excellent references for ele- mentary data structures. Many other texts cover both basic data structures and their implementation in a particular programming language. Examples of these types of textbooks include Goodrich and Tamassia [147], Main [241], Shaffer [311], and Weiss [352, 353, 354]. Gonnet [145] provides experimental data on the perfor- mance of many data-structure operations.

The origin of stacks and queues as data structures in computer science is un- clear, since corresponding notions already existed in mathematics and paper-based business practices before the introduction of digital computers. Knuth [209] cites A. M. Turing for the development of stacks for subroutine linkage in 1947.

Pointer-based data structures also seem to be a folk invention. According to Knuth, pointers were apparently used in early computers with drum memories. The A-1 language developed by G. M. Hopper in 1951 represented algebraic formulas as binary trees. Knuth credits the IPL-II language, developed in 1956 by A. Newell, J. C. Shaw, and H. A. Simon, for recognizing the importance and promoting the use of pointers. Their IPL-III language, developed in 1957, included explicit stack operations.

11 Hash Tables

Many applications require a dynamic set that supports only the dictionary opera- tions INSERT, SEARCH, and DELETE. For example, a compiler that translates a programming language maintains a symbol table, in which the keys of elements are arbitrary character strings corresponding to identifiers in the language. A hash table is an effective data structure for implementing dictionaries. Although search- ing for an element in a hash table can take as long as searching for an element in a linked list—‚.n/ time in the worst case—in practice, hashing performs extremely well. Under reasonable assumptions, the average time to search for an element in a hash table is O.1/.

A hash table generalizes the simpler notion of an ordinary array. Directly ad- dressing into an ordinary array makes effective use of our ability to examine an arbitrary position in an array in O.1/ time. Section 11.1 discusses direct address- ing in more detail. We can take advantage of direct addressing when we can afford to allocate an array that has one position for every possible key.

When the number of keys actually stored is small relative to the total number of possible keys, hash tables become an effective alternative to directly addressing an array, since a hash table typically uses an array of size proportional to the number of keys actually stored. Instead of using the key as an array index directly, the array index is computed from the key. Section 11.2 presents the main ideas, focusing on “chaining” as a way to handle “collisions,” in which more than one key maps to the same array index. Section 11.3 describes how we can compute array indices from keys using hash functions. We present and analyze several variations on the basic theme. Section 11.4 looks at “open addressing,” which is another way to deal with collisions. The bottom line is that hashing is an extremely effective and practical technique: the basic dictionary operations require only O.1/ time on the average. Section 11.5 explains how “perfect hashing” can support searches in O.1/ worst- case time, when the set of keys being stored is static (that is, when the set of keys never changes once stored).


11.1 Direct-address tables

Direct addressing is a simple technique that works well when the universe U of keys is reasonably small. Suppose that an application needs a dynamic set in which each element has a key drawn from the universe U = {0, 1, …, m − 1}, where m is not too large. We shall assume that no two elements have the same key.

To represent the dynamic set, we use an array, or direct-address table, denoted by T[0..m − 1], in which each position, or slot, corresponds to a key in the universe U. Figure 11.1 illustrates the approach; slot k points to an element in the set with key k. If the set contains no element with key k, then T[k] = NIL.

The dictionary operations are trivial to implement:

DIRECT-ADDRESS-SEARCH(T, k)
1  return T[k]

DIRECT-ADDRESS-INSERT(T, x)
1  T[x.key] = x

DIRECT-ADDRESS-DELETE(T, x)
1  T[x.key] = NIL

Each of these operations takes only O(1) time.
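For concreteness, here is a minimal Python sketch of a direct-address table (not part of the text); the class and attribute names are ours.

```python
class DirectAddressTable:
    """A direct-address table over the key universe {0, 1, ..., m-1}."""
    def __init__(self, m):
        self.slots = [None] * m          # slot k holds the element with key k, or None (NIL)

    def search(self, k):
        return self.slots[k]

    def insert(self, x):                 # x is any object with a .key attribute
        self.slots[x.key] = x

    def delete(self, x):
        self.slots[x.key] = None

class Element:
    def __init__(self, key, data):
        self.key, self.data = key, data

T = DirectAddressTable(10)
T.insert(Element(3, "three"))
print(T.search(3).data)                  # "three"
```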


Figure 11.1 How to implement a dynamic set by a direct-address table T. Each key in the universe U = {0, 1, …, 9} corresponds to an index in the table. The set K = {2, 3, 5, 8} of actual keys determines the slots in the table that contain pointers to elements. The other slots, heavily shaded, contain NIL.


For some applications, the direct-address table itself can hold the elements in the dynamic set. That is, rather than storing an element’s key and satellite data in an object external to the direct-address table, with a pointer from a slot in the table to the object, we can store the object in the slot itself, thus saving space. We would use a special key within an object to indicate an empty slot. Moreover, it is often unnecessary to store the key of the object, since if we have the index of an object in the table, we have its key. If keys are not stored, however, we must have some way to tell whether the slot is empty.

Exercises

11.1-1 Suppose that a dynamic set S is represented by a direct-address table T of length m. Describe a procedure that finds the maximum element of S . What is the worst-case performance of your procedure?

11.1-2 A bit vector is simply an array of bits (0s and 1s). A bit vector of length m takes much less space than an array of m pointers. Describe how to use a bit vector to represent a dynamic set of distinct elements with no satellite data. Dictionary operations should run in O.1/ time.

11.1-3 Suggest how to implement a direct-address table in which the keys of stored el- ements do not need to be distinct and the elements can have satellite data. All three dictionary operations (INSERT, DELETE, and SEARCH) should run in O.1/ time. (Don’t forget that DELETE takes as an argument a pointer to an object to be deleted, not a key.)

11.1-4 ? We wish to implement a dictionary by using direct addressing on a huge array. At the start, the array entries may contain garbage, and initializing the entire array is impractical because of its size. Describe a scheme for implementing a direct- address dictionary on a huge array. Each stored object should use O.1/ space; the operations SEARCH, INSERT, and DELETE should take O.1/ time each; and initializing the data structure should take O.1/ time. (Hint: Use an additional array, treated somewhat like a stack whose size is the number of keys actually stored in the dictionary, to help determine whether a given entry in the huge array is valid or not.)


11.2 Hash tables

The downside of direct addressing is obvious: if the universe U is large, storing a table T of size |U| may be impractical, or even impossible, given the memory available on a typical computer. Furthermore, the set K of keys actually stored may be so small relative to U that most of the space allocated for T would be wasted.

When the set K of keys stored in a dictionary is much smaller than the universe U of all possible keys, a hash table requires much less storage than a direct-address table. Specifically, we can reduce the storage requirement to Θ(|K|) while we maintain the benefit that searching for an element in the hash table still requires only O(1) time. The catch is that this bound is for the average-case time, whereas for direct addressing it holds for the worst-case time.

With direct addressing, an element with key k is stored in slot k. With hashing, this element is stored in slot h(k); that is, we use a hash function h to compute the slot from the key k. Here, h maps the universe U of keys into the slots of a hash table T[0..m − 1]:

h : U → {0, 1, …, m − 1} ,

where the size m of the hash table is typically much less than |U|. We say that an element with key k hashes to slot h(k); we also say that h(k) is the hash value of key k. Figure 11.2 illustrates the basic idea. The hash function reduces the range of array indices and hence the size of the array. Instead of a size of |U|, the array can have size m.


Figure 11.2 Using a hash function h to map keys to hash-table slots. Because keys k2 and k5 map to the same slot, they collide.



Figure 11.3 Collision resolution by chaining. Each hash-table slot T[j] contains a linked list of all the keys whose hash value is j. For example, h(k1) = h(k4) and h(k5) = h(k7) = h(k2). The linked list can be either singly or doubly linked; we show it as doubly linked because deletion is faster that way.

There is one hitch: two keys may hash to the same slot. We call this situation a collision. Fortunately, we have effective techniques for resolving the conflict created by collisions.

Of course, the ideal solution would be to avoid collisions altogether. We might try to achieve this goal by choosing a suitable hash function h. One idea is to make h appear to be “random,” thus avoiding collisions or at least minimizing their number. The very term “to hash,” evoking images of random mixing and chopping, captures the spirit of this approach. (Of course, a hash function h must be deterministic in that a given input k should always produce the same output h.k/.) Because jU j > m, however, there must be at least two keys that have the same hash value; avoiding collisions altogether is therefore impossible. Thus, while a well- designed, “random”-looking hash function can minimize the number of collisions, we still need a method for resolving the collisions that do occur.

The remainder of this section presents the simplest collision resolution tech- nique, called chaining. Section 11.4 introduces an alternative method for resolving collisions, called open addressing.

Collision resolution by chaining

In chaining, we place all the elements that hash to the same slot into the same linked list, as Figure 11.3 shows. Slot j contains a pointer to the head of the list of all stored elements that hash to j ; if there are no such elements, slot j contains NIL.


The dictionary operations on a hash table T are easy to implement when colli- sions are resolved by chaining:

CHAINED-HASH-INSERT(T, x)
1  insert x at the head of list T[h(x.key)]

CHAINED-HASH-SEARCH(T, k)
1  search for an element with key k in list T[h(k)]

CHAINED-HASH-DELETE(T, x)
1  delete x from the list T[h(x.key)]

The worst-case running time for insertion is O(1). The insertion procedure is fast in part because it assumes that the element x being inserted is not already present in the table; if necessary, we can check this assumption (at additional cost) by searching for an element whose key is x.key before we insert. For searching, the worst-case running time is proportional to the length of the list; we shall analyze this operation more closely below. We can delete an element in O(1) time if the lists are doubly linked, as Figure 11.3 depicts. (Note that CHAINED-HASH-DELETE takes as input an element x and not its key k, so that we don't have to search for x first. If the hash table supports deletion, then its linked lists should be doubly linked so that we can delete an item quickly. If the lists were only singly linked, then to delete element x, we would first have to find x in the list T[h(x.key)] so that we could update the next attribute of x's predecessor. With singly linked lists, both deletion and searching would have the same asymptotic running times.)
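As a rough sketch (not the text's pseudocode), a chained hash table can be written in Python as follows; it stores (key, value) pairs in ordinary Python lists, so its delete rebuilds the chain rather than achieving the O(1) doubly-linked deletion described above. The class name and the caller-supplied hash function h are assumptions of this example.

```python
class ChainedHashTable:
    """Hashing with chaining: slot j holds a list of the pairs whose keys hash to j."""
    def __init__(self, m, h):
        self.m, self.h = m, h
        self.table = [[] for _ in range(m)]

    def insert(self, key, value):        # assumes key is not already present
        self.table[self.h(key)].insert(0, (key, value))   # insert at the head of the chain

    def search(self, key):
        for k, v in self.table[self.h(key)]:
            if k == key:
                return v
        return None

    def delete(self, key):
        j = self.h(key)
        self.table[j] = [(k, v) for (k, v) in self.table[j] if k != key]

T = ChainedHashTable(9, lambda k: k % 9)
for k in [5, 28, 19, 15, 20]:
    T.insert(k, str(k))
print(T.search(19))                      # "19"
```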

Analysis of hashing with chaining

How well does hashing with chaining perform? In particular, how long does it take to search for an element with a given key?

Given a hash table T with m slots that stores n elements, we define the load factor α for T as n/m, that is, the average number of elements stored in a chain. Our analysis will be in terms of α, which can be less than, equal to, or greater than 1.

The worst-case behavior of hashing with chaining is terrible: all n keys hash to the same slot, creating a list of length n. The worst-case time for searching is thus Θ(n) plus the time to compute the hash function, no better than if we used one linked list for all the elements. Clearly, we do not use hash tables for their worst-case performance. (Perfect hashing, described in Section 11.5, does provide good worst-case performance when the set of keys is static, however.)

The average-case performance of hashing depends on how well the hash func- tion h distributes the set of keys to be stored among the m slots, on the average.


Section 11.3 discusses these issues, but for now we shall assume that any given element is equally likely to hash into any of the m slots, independently of where any other element has hashed to. We call this the assumption of simple uniform hashing.

For j = 0, 1, …, m − 1, let us denote the length of the list T[j] by n_j, so that

n = n_0 + n_1 + ⋯ + n_{m−1} ,    (11.1)

and the expected value of n_j is E[n_j] = α = n/m.

We assume that O(1) time suffices to compute the hash value h(k), so that the time required to search for an element with key k depends linearly on the length n_{h(k)} of the list T[h(k)]. Setting aside the O(1) time required to compute the hash function and to access slot h(k), let us consider the expected number of elements examined by the search algorithm, that is, the number of elements in the list T[h(k)] that the algorithm checks to see whether any have a key equal to k. We shall consider two cases. In the first, the search is unsuccessful: no element in the table has key k. In the second, the search successfully finds an element with key k.

Theorem 11.1
In a hash table in which collisions are resolved by chaining, an unsuccessful search takes average-case time Θ(1 + α), under the assumption of simple uniform hashing.

Proof Under the assumption of simple uniform hashing, any key k not already stored in the table is equally likely to hash to any of the m slots. The expected time to search unsuccessfully for a key k is the expected time to search to the end of list T[h(k)], which has expected length E[n_{h(k)}] = α. Thus, the expected number of elements examined in an unsuccessful search is α, and the total time required (including the time for computing h(k)) is Θ(1 + α).

The situation for a successful search is slightly different, since each list is not equally likely to be searched. Instead, the probability that a list is searched is proportional to the number of elements it contains. Nonetheless, the expected search time still turns out to be Θ(1 + α).

Theorem 11.2
In a hash table in which collisions are resolved by chaining, a successful search takes average-case time Θ(1 + α), under the assumption of simple uniform hashing.

Proof We assume that the element being searched for is equally likely to be any of the n elements stored in the table. The number of elements examined during a successful search for an element x is one more than the number of elements that


appear before x in x's list. Because new elements are placed at the front of the list, elements before x in the list were all inserted after x was inserted. To find the expected number of elements examined, we take the average, over the n elements x in the table, of 1 plus the expected number of elements added to x's list after x was added to the list. Let x_i denote the ith element inserted into the table, for i = 1, 2, …, n, and let k_i = x_i.key. For keys k_i and k_j, we define the indicator random variable X_ij = I{h(k_i) = h(k_j)}. Under the assumption of simple uniform hashing, we have Pr{h(k_i) = h(k_j)} = 1/m, and so by Lemma 5.1, E[X_ij] = 1/m. Thus, the expected number of elements examined in a successful search is

E[ (1/n) Σ_{i=1}^{n} (1 + Σ_{j=i+1}^{n} X_ij) ]
  = (1/n) Σ_{i=1}^{n} (1 + Σ_{j=i+1}^{n} E[X_ij])      (by linearity of expectation)
  = (1/n) Σ_{i=1}^{n} (1 + Σ_{j=i+1}^{n} 1/m)
  = 1 + (1/(nm)) Σ_{i=1}^{n} (n − i)
  = 1 + (1/(nm)) (Σ_{i=1}^{n} n − Σ_{i=1}^{n} i)
  = 1 + (1/(nm)) (n² − n(n + 1)/2)                     (by equation (A.1))
  = 1 + (n − 1)/(2m)
  = 1 + α/2 − α/(2n) .

Thus, the total time required for a successful search (including the time for computing the hash function) is Θ(2 + α/2 − α/(2n)) = Θ(1 + α).

What does this analysis mean? If the number of hash-table slots is at least proportional to the number of elements in the table, we have n = O(m) and, consequently, α = n/m = O(m)/m = O(1). Thus, searching takes constant time on average. Since insertion takes O(1) worst-case time and deletion takes O(1) worst-case time when the lists are doubly linked, we can support all dictionary operations in O(1) time on average.


Exercises

11.2-1 Suppose we use a hash function h to hash n distinct keys into an array T of length m. Assuming simple uniform hashing, what is the expected number of collisions? More precisely, what is the expected cardinality of {{k, l} : k ≠ l and h(k) = h(l)}?

11.2-2 Demonstrate what happens when we insert the keys 5, 28, 19, 15, 20, 33, 12, 17, 10 into a hash table with collisions resolved by chaining. Let the table have 9 slots, and let the hash function be h(k) = k mod 9.

11.2-3 Professor Marley hypothesizes that he can obtain substantial performance gains by modifying the chaining scheme to keep each list in sorted order. How does the professor's modification affect the running time for successful searches, unsuccessful searches, insertions, and deletions?

11.2-4 Suggest how to allocate and deallocate storage for elements within the hash table itself by linking all unused slots into a free list. Assume that one slot can store a flag and either one element plus a pointer or two pointers. All dictionary and free-list operations should run in O(1) expected time. Does the free list need to be doubly linked, or does a singly linked free list suffice?

11.2-5 Suppose that we are storing a set of n keys into a hash table of size m. Show that if the keys are drawn from a universe U with |U| > nm, then U has a subset of size n consisting of keys that all hash to the same slot, so that the worst-case searching time for hashing with chaining is Θ(n).

11.2-6 Suppose we have stored n keys in a hash table of size m, with collisions resolved by chaining, and that we know the length of each chain, including the length L of the longest chain. Describe a procedure that selects a key uniformly at random from among the keys in the hash table and returns it in expected time O(L · (1 + 1/α)).


11.3 Hash functions

In this section, we discuss some issues regarding the design of good hash functions and then present three schemes for their creation. Two of the schemes, hashing by division and hashing by multiplication, are heuristic in nature, whereas the third scheme, universal hashing, uses randomization to provide provably good perfor- mance.

What makes a good hash function?

A good hash function satisfies (approximately) the assumption of simple uniform hashing: each key is equally likely to hash to any of the m slots, independently of where any other key has hashed to. Unfortunately, we typically have no way to check this condition, since we rarely know the probability distribution from which the keys are drawn. Moreover, the keys might not be drawn independently.

Occasionally we do know the distribution. For example, if we know that the keys are random real numbers k independently and uniformly distributed in the range 0 ≤ k < 1, then the hash function h(k) = ⌊km⌋ satisfies the condition of simple uniform hashing.

For universal hashing, we fix a prime number p large enough that every possible key k lies in the range 0 to p − 1, and we let Z_p denote the set {0, 1, …, p − 1} and Z_p* denote the set {1, 2, …, p − 1}. Because the size of the universe of keys is greater than the number of slots in the hash table, we have p > m.

We now define the hash function h_ab for any a ∈ Z_p* and any b ∈ Z_p using a linear transformation followed by reductions modulo p and then modulo m:

h_ab(k) = ((ak + b) mod p) mod m .    (11.3)

For example, with p = 17 and m = 6, we have h_{3,4}(8) = 5. The family of all such hash functions is

H_pm = {h_ab : a ∈ Z_p* and b ∈ Z_p} .    (11.4)

Each hash function h_ab maps Z_p to Z_m. This class of hash functions has the nice property that the size m of the output range is arbitrary, not necessarily prime, a feature which we shall use in Section 11.5. Since we have p − 1 choices for a and p choices for b, the collection H_pm contains p(p − 1) hash functions.
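As a minimal illustration (not part of the text), drawing a hash function at random from this family can be sketched in Python; random_hash is our own name, and the values of p and m here just reproduce the example following equation (11.3).

```python
import random

p = 17                                   # a prime larger than any key we will hash
m = 6                                    # number of hash-table slots

def random_hash(p, m):
    """Pick h_ab uniformly from H_pm by choosing a in Z_p* and b in Z_p (equation (11.3))."""
    a = random.randint(1, p - 1)
    b = random.randint(0, p - 1)
    return lambda k: ((a * k + b) % p) % m

h = random_hash(p, m)
print(h(8))                              # some slot in 0..5; with a = 3, b = 4 this would be 5
```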

Theorem 11.5
The class H_pm of hash functions defined by equations (11.3) and (11.4) is universal.

Proof Consider two distinct keys k and l from Z_p, so that k ≠ l. For a given hash function h_ab we let

r = (ak + b) mod p ,
s = (al + b) mod p .

We first note that r ≠ s. Why? Observe that

r − s ≡ a(k − l) (mod p) .

It follows that r ≠ s because p is prime and both a and (k − l) are nonzero modulo p, and so their product must also be nonzero modulo p by Theorem 31.6. Therefore, when computing any h_ab ∈ H_pm, distinct inputs k and l map to distinct


values r and s modulo p; there are no collisions yet at the “mod p level.” Moreover, each of the possible p(p − 1) choices for the pair (a, b) with a ≠ 0 yields a different resulting pair (r, s) with r ≠ s, since we can solve for a and b given r and s:

a = ((r − s)((k − l)^{−1} mod p)) mod p ,
b = (r − ak) mod p ,

where ((k − l)^{−1} mod p) denotes the unique multiplicative inverse, modulo p, of k − l. Since there are only p(p − 1) possible pairs (r, s) with r ≠ s, there is a one-to-one correspondence between pairs (a, b) with a ≠ 0 and pairs (r, s) with r ≠ s. Thus, for any given pair of inputs k and l, if we pick (a, b) uniformly at random from Z_p* × Z_p, the resulting pair (r, s) is equally likely to be any pair of distinct values modulo p.

Therefore, the probability that distinct keys k and l collide is equal to the probability that r ≡ s (mod m) when r and s are randomly chosen as distinct values modulo p. For a given value of r, of the p − 1 possible remaining values for s, the number of values s such that s ≠ r and s ≡ r (mod m) is at most

⌈p/m⌉ − 1 ≤ ((p + m − 1)/m) − 1    (by inequality (3.6))
          = (p − 1)/m .

The probability that s collides with r when reduced modulo m is at most ((p − 1)/m)/(p − 1) = 1/m.

Therefore, for any pair of distinct values k, l ∈ Z_p,

Pr{h_ab(k) = h_ab(l)} ≤ 1/m ,

so that H_pm is indeed universal.

Exercises

11.3-1 Suppose we wish to search a linked list of length n, where each element contains a key k along with a hash value h.k/. Each key is a long character string. How might we take advantage of the hash values when searching the list for an element with a given key?

11.3-2 Suppose that we hash a string of r characters into m slots by treating it as a radix-128 number and then using the division method. We can easily represent the number m as a 32-bit computer word, but the string of r characters, treated as a radix-128 number, takes many words. How can we apply the division method to compute the hash value of the character string without using more than a constant number of words of storage outside the string itself?


11.3-3 Consider a version of the division method in which h(k) = k mod m, where m = 2^p − 1 and k is a character string interpreted in radix 2^p. Show that if we can derive string x from string y by permuting its characters, then x and y hash to the same value. Give an example of an application in which this property would be undesirable in a hash function.

11.3-4 Consider a hash table of size m = 1000 and a corresponding hash function h(k) = ⌊m(kA mod 1)⌋ for A = (√5 − 1)/2. Compute the locations to which the keys 61, 62, 63, 64, and 65 are mapped.

11.3-5 ? Define a family H of hash functions from a finite set U to a finite set B to be ε-universal if for all pairs of distinct elements k and l in U,

Pr{h(k) = h(l)} ≤ ε ,

where the probability is over the choice of the hash function h drawn at random from the family H. Show that an ε-universal family of hash functions must have

ε ≥ 1/|B| − 1/|U| .

11.3-6 ? Let U be the set of n-tuples of values drawn from Z_p, and let B = Z_p, where p is prime. Define the hash function h_b : U → B for b ∈ Z_p on an input n-tuple ⟨a_0, a_1, …, a_{n−1}⟩ from U as

h_b(⟨a_0, a_1, …, a_{n−1}⟩) = (Σ_{j=0}^{n−1} a_j b^j) mod p ,

and let H = {h_b : b ∈ Z_p}. Argue that H is ((n − 1)/p)-universal according to the definition of ε-universal in Exercise 11.3-5. (Hint: See Exercise 31.4-4.)

11.4 Open addressing

In open addressing, all elements occupy the hash table itself. That is, each table entry contains either an element of the dynamic set or NIL. When searching for an element, we systematically examine table slots until either we find the desired element or we have ascertained that the element is not in the table. No lists and


no elements are stored outside the table, unlike in chaining. Thus, in open ad- dressing, the hash table can “fill up” so that no further insertions can be made; one consequence is that the load factor ˛ can never exceed 1.

Of course, we could store the linked lists for chaining inside the hash table, in the otherwise unused hash-table slots (see Exercise 11.2-4), but the advantage of open addressing is that it avoids pointers altogether. Instead of following pointers, we compute the sequence of slots to be examined. The extra memory freed by not storing pointers provides the hash table with a larger number of slots for the same amount of memory, potentially yielding fewer collisions and faster retrieval.

To perform insertion using open addressing, we successively examine, or probe, the hash table until we find an empty slot in which to put the key. Instead of being fixed in the order 0, 1, …, m − 1 (which requires Θ(n) search time), the sequence of positions probed depends upon the key being inserted. To determine which slots to probe, we extend the hash function to include the probe number (starting from 0) as a second input. Thus, the hash function becomes

h : U × {0, 1, …, m − 1} → {0, 1, …, m − 1} .

With open addressing, we require that for every key k, the probe sequence

⟨h(k, 0), h(k, 1), …, h(k, m − 1)⟩

be a permutation of ⟨0, 1, …, m − 1⟩, so that every hash-table position is eventually considered as a slot for a new key as the table fills up. In the following pseudocode, we assume that the elements in the hash table T are keys with no satellite information; the key k is identical to the element containing key k. Each slot contains either a key or NIL (if the slot is empty). The HASH-INSERT procedure takes as input a hash table T and a key k. It either returns the slot number where it stores key k or flags an error because the hash table is already full.

HASH-INSERT(T, k)
1  i = 0
2  repeat
3      j = h(k, i)
4      if T[j] == NIL
5          T[j] = k
6          return j
7      else i = i + 1
8  until i == m
9  error "hash table overflow"

The algorithm for searching for key k probes the same sequence of slots that the insertion algorithm examined when key k was inserted. Therefore, the search can


terminate (unsuccessfully) when it finds an empty slot, since k would have been inserted there and not later in its probe sequence. (This argument assumes that keys are not deleted from the hash table.) The procedure HASH-SEARCH takes as input a hash table T and a key k, returning j if it finds that slot j contains key k, or NIL if key k is not present in table T .

HASH-SEARCH(T, k)

1  i = 0
2  repeat
3      j = h(k, i)
4      if T[j] == k
5          return j
6      i = i + 1
7  until T[j] == NIL or i == m
8  return NIL
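
A matching sketch of the search procedure, under the same representation (None for NIL, probe function h supplied by the caller):

def hash_search(T, k, h):
    """Return the slot containing key k, or None if k is not in table T."""
    m = len(T)
    for i in range(m):
        j = h(k, i)
        if T[j] == k:
            return j
        if T[j] is None:        # empty slot: k would have been inserted here, so it is absent
            return None
    return None                 # examined all m probes without finding k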

Deletion from an open-address hash table is difficult. When we delete a key from slot i , we cannot simply mark that slot as empty by storing NIL in it. If we did, we might be unable to retrieve any key k during whose insertion we had probed slot i and found it occupied. We can solve this problem by marking the slot, storing in it the special value DELETED instead of NIL. We would then modify the procedure HASH-INSERT to treat such a slot as if it were empty so that we can insert a new key there. We do not need to modify HASH-SEARCH, since it will pass over DELETED values while searching. When we use the special value DELETED, however, search times no longer depend on the load factor ˛, and for this reason chaining is more commonly selected as a collision resolution technique when keys must be deleted.
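
One way to realize the DELETED idea, continuing the sketch above, is to use a distinct sentinel object; insertion then treats DELETED like an empty slot, while search simply passes over it:

DELETED = object()   # special marker, distinct from every key and from None

def hash_delete(T, k, h):
    """Mark the slot holding key k as DELETED rather than emptying it."""
    m = len(T)
    for i in range(m):
        j = h(k, i)
        if T[j] == k:
            T[j] = DELETED      # slot stays "occupied" for later searches
            return j
        if T[j] is None:
            return None
    return None

# hash_insert would change its test to:  if T[j] is None or T[j] is DELETED: ...
# hash_search needs no change, since a DELETED slot is neither k nor None.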

In our analysis, we assume uniform hashing: the probe sequence of each key is equally likely to be any of the mŠ permutations of h0; 1; : : : ; m � 1i. Uni- form hashing generalizes the notion of simple uniform hashing defined earlier to a hash function that produces not just a single number, but a whole probe sequence. True uniform hashing is difficult to implement, however, and in practice suitable approximations (such as double hashing, defined below) are used.

We will examine three commonly used techniques to compute the probe se- quences required for open addressing: linear probing, quadratic probing, and dou- ble hashing. These techniques all guarantee that hh.k; 0/; h.k; 1/; : : : ; h.k;m� 1/i is a permutation of h0; 1; : : : ; m� 1i for each key k. None of these techniques ful- fills the assumption of uniform hashing, however, since none of them is capable of generating more than m2 different probe sequences (instead of the mŠ that uniform hashing requires). Double hashing has the greatest number of probe sequences and, as one might expect, seems to give the best results.


Linear probing

Given an ordinary hash function h0 W U ! f0; 1; : : : ; m � 1g, which we refer to as an auxiliary hash function, the method of linear probing uses the hash function

h.k; i/ D .h0.k/C i/ mod m for i D 0; 1; : : : ; m � 1. Given key k, we first probe T Œh0.k/�, i.e., the slot given by the auxiliary hash function. We next probe slot T Œh0.k/ C 1�, and so on up to slot T Œm � 1�. Then we wrap around to slots T Œ0�; T Œ1�; : : : until we finally probe slot T Œh0.k/ � 1�. Because the initial probe determines the entire probe sequence, there are only m distinct probe sequences.
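
For concreteness, a linear-probing probe function might look as follows in Python; the auxiliary hash function h′ is passed in, and the trivial default h′(k) = k mod m below is only a placeholder:

def linear_probe(k, i, m, h_aux=None):
    """h(k, i) = (h'(k) + i) mod m."""
    if h_aux is None:
        h_aux = lambda key: key % m      # placeholder auxiliary hash function
    return (h_aux(k) + i) % m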

Linear probing is easy to implement, but it suffers from a problem known as primary clustering. Long runs of occupied slots build up, increasing the average search time. Clusters arise because an empty slot preceded by i full slots gets filled next with probability .i C 1/=m. Long runs of occupied slots tend to get longer, and the average search time increases.

Quadratic probing

Quadratic probing uses a hash function of the form

h.k; i/ D .h0.k/C c1i C c2i2/ mod m ; (11.5) where h0 is an auxiliary hash function, c1 and c2 are positive auxiliary constants, and i D 0; 1; : : : ; m � 1. The initial position probed is T Œh0.k/�; later positions probed are offset by amounts that depend in a quadratic manner on the probe num- ber i . This method works much better than linear probing, but to make full use of the hash table, the values of c1, c2, and m are constrained. Problem 11-3 shows one way to select these parameters. Also, if two keys have the same initial probe position, then their probe sequences are the same, since h.k1; 0/ D h.k2; 0/ im- plies h.k1; i/ D h.k2; i/. This property leads to a milder form of clustering, called secondary clustering. As in linear probing, the initial probe determines the entire sequence, and so only m distinct probe sequences are used.
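
A corresponding quadratic-probing sketch; the default constants c1 and c2 here are placeholders only, since (as the text notes) they must be chosen together with m so that the whole table is covered (Problem 11-3 gives one valid choice):

def quadratic_probe(k, i, m, c1=1, c2=1, h_aux=None):
    """h(k, i) = (h'(k) + c1*i + c2*i*i) mod m, as in equation (11.5)."""
    if h_aux is None:
        h_aux = lambda key: key % m      # placeholder auxiliary hash function
    return (h_aux(k) + c1 * i + c2 * i * i) % m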

Double hashing

Double hashing offers one of the best methods available for open addressing be- cause the permutations produced have many of the characteristics of randomly chosen permutations. Double hashing uses a hash function of the form

h.k; i/ D .h1.k/C ih2.k// mod m ; where both h1 and h2 are auxiliary hash functions. The initial probe goes to posi- tion T Œh1.k/�; successive probe positions are offset from previous positions by the



Figure 11.5 Insertion by double hashing. Here we have a hash table of size 13 with h1.k/ D k mod 13 and h2.k/ D 1C .k mod 11/. Since 14 � 1 .mod 13/ and 14 � 3 .mod 11/, we insert the key 14 into empty slot 9, after examining slots 1 and 5 and finding them to be occupied.

amount h2.k/, modulo m. Thus, unlike the case of linear or quadratic probing, the probe sequence here depends in two ways upon the key k, since the initial probe position, the offset, or both, may vary. Figure 11.5 gives an example of insertion by double hashing.

The value h2.k/ must be relatively prime to the hash-table size m for the entire hash table to be searched. (See Exercise 11.4-4.) A convenient way to ensure this condition is to let m be a power of 2 and to design h2 so that it always produces an odd number. Another way is to let m be prime and to design h2 so that it always returns a positive integer less than m. For example, we could choose m prime and let

h1.k/ D k mod m ; h2.k/ D 1C .k mod m0/ ; where m0 is chosen to be slightly less than m (say, m � 1). For example, if k D 123456, m D 701, and m0 D 700, we have h1.k/ D 80 and h2.k/ D 257, so that we first probe position 80, and then we examine every 257th slot (modulo m) until we find the key or have examined every slot.
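
A small Python sketch of this particular choice (m prime and m′ = m − 1); the numbers in the comment reproduce the example above:

def double_hash_probe(k, i, m):
    """h(k, i) = (h1(k) + i*h2(k)) mod m with h1(k) = k mod m, h2(k) = 1 + (k mod (m - 1))."""
    h1 = k % m
    h2 = 1 + (k % (m - 1))
    return (h1 + i * h2) % m

# With m = 701 and k = 123456: h1 = 80 and h2 = 257, so successive probes
# examine slot 80 and then every 257th slot (mod 701) thereafter.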

When m is prime or a power of 2, double hashing improves over linear or qua- dratic probing in that ‚.m2/ probe sequences are used, rather than ‚.m/, since each possible .h1.k/; h2.k// pair yields a distinct probe sequence. As a result, for


such values of m, the performance of double hashing appears to be very close to the performance of the “ideal” scheme of uniform hashing.

Although values of m other than primes or powers of 2 could in principle be used with double hashing, in practice it becomes more difficult to efficiently gen- erate h2.k/ in a way that ensures that it is relatively prime to m, in part because the relative density �.m/=m of such numbers may be small (see equation (31.24)).

Analysis of open-address hashing

As in our analysis of chaining, we express our analysis of open addressing in terms of the load factor ˛ D n=m of the hash table. Of course, with open addressing, at most one element occupies each slot, and thus n � m, which implies ˛ � 1.

We assume that we are using uniform hashing. In this idealized scheme, the probe sequence hh.k; 0/; h.k; 1/; : : : ; h.k; m � 1/i used to insert or search for each key k is equally likely to be any permutation of h0; 1; : : : ; m � 1i. Of course, a given key has a unique fixed probe sequence associated with it; what we mean here is that, considering the probability distribution on the space of keys and the operation of the hash function on the keys, each possible probe sequence is equally likely.

We now analyze the expected number of probes for hashing with open address- ing under the assumption of uniform hashing, beginning with an analysis of the number of probes made in an unsuccessful search.

Theorem 11.6 Given an open-address hash table with load factor α = n/m < 1, the expected number of probes in an unsuccessful search is at most 1/(1 − α), assuming uniform hashing. In the proof, the probability that there is a j th probe and that it is to an occupied slot, given that the first j − 1 probes were to occupied slots, is (n − j + 1)/(m − j + 1). This probability follows because we would be finding one of the remaining (n − (j − 1)) elements in one of the (m − (j − 1)) unexamined slots, and, by the assumption of uniform hashing, the probability is the ratio of these quantities; observing that n < m, each such ratio is at most α = n/m.

Pr {X_i > 2 lg n} = O(1/n²). Let the random variable X = max_{1≤i≤n} X_i denote the maximum number of probes required by any of the n insertions.

c. Show that Pr {X > 2 lg n} = O(1/n).

d. Show that the expected length E[X] of the longest probe sequence is O(lg n).


11-2 Slot-size bound for chaining Suppose that we have a hash table with n slots, with collisions resolved by chain- ing, and suppose that n keys are inserted into the table. Each key is equally likely to be hashed to each slot. Let M be the maximum number of keys in any slot after all the keys have been inserted. Your mission is to prove an O.lg n= lg lg n/ upper bound on E ŒM �, the expected value of M .

a. Argue that the probability Qk that exactly k keys hash to a particular slot is given by

Q_k = (1/n)^k (1 − 1/n)^{n−k} (n choose k) .

b. Let Pk be the probability that M D k, that is, the probability that the slot containing the most keys contains k keys. Show that Pk � nQk.

c. Use Stirling's approximation, equation (3.18), to show that Q_k < e^k / k^k.

d. Show that there exists a constant c > 1 such that Q_{k_0} < 1/n³ for k_0 = c lg n / lg lg n. Conclude that P_k < 1/n² for k ≥ k_0 = c lg n / lg lg n.

e. Argue that

E[M] ≤ Pr {M > c lg n / lg lg n} · n + Pr {M ≤ c lg n / lg lg n} · (c lg n / lg lg n) .

Conclude that E[M] = O(lg n / lg lg n).

11-3 Quadratic probing Suppose that we are given a key k to search for in a hash table with positions 0; 1; : : : ; m�1, and suppose that we have a hash function h mapping the key space into the set f0; 1; : : : ; m � 1g. The search scheme is as follows: 1. Compute the value j D h.k/, and set i D 0. 2. Probe in position j for the desired key k. If you find it, or if this position is

empty, terminate the search.

3. Set i D i C 1. If i now equals m, the table is full, so terminate the search. Otherwise, set j D .i C j / mod m, and return to step 2.

Assume that m is a power of 2.

a. Show that this scheme is an instance of the general “quadratic probing” scheme by exhibiting the appropriate constants c1 and c2 for equation (11.5).

b. Prove that this algorithm examines every table position in the worst case.


11-4 Hashing and authentication Let H be a class of hash functions in which each hash function h ∈ H maps the universe U of keys to {0, 1, …, m − 1}. We say that H is k-universal if, for every fixed sequence of k distinct keys ⟨x(1), x(2), …, x(k)⟩ and for any h chosen at random from H, the sequence ⟨h(x(1)), h(x(2)), …, h(x(k))⟩ is equally likely to be any of the m^k sequences of length k with elements drawn from {0, 1, …, m − 1}. a. Show that if the family H of hash functions is 2-universal, then it is universal.

b. Suppose that the universe U is the set of n-tuples of values drawn from Zp D f0; 1; : : : ; p � 1g, where p is prime. Consider an element x D hx0; x1; : : : ; xn�1i 2 U . For any n-tuple a D ha0; a1; : : : ; an�1i 2 U , de- fine the hash function ha by

ha.x/ D

n�1X j D0

aj xj

! mod p :

Let H D fhag. Show that H is universal, but not 2-universal. (Hint: Find a key for which all hash functions in H produce the same value.)

c. Suppose that we modify H slightly from part (b): for any a 2 U and for any b 2 Zp , define

h0ab.x/ D

n�1X j D0

aj xj C b !

mod p

and H 0 D fh0abg. Argue that H 0 is 2-universal. (Hint: Consider fixed n-tuples x 2 U and y 2 U , with xi ¤ yi for some i . What happens to h0ab.x/ and h0

ab .y/ as ai and b range over Zp?)

d. Suppose that Alice and Bob secretly agree on a hash function h from a 2-universal family H of hash functions. Each h 2 H maps from a universe of keys U to Zp , where p is prime. Later, Alice sends a message m to Bob over the Internet, where m 2 U . She authenticates this message to Bob by also sending an authentication tag t D h.m/, and Bob checks that the pair .m; t/ he receives indeed satisfies t D h.m/. Suppose that an adversary intercepts .m; t/ en route and tries to fool Bob by replacing the pair .m; t/ with a different pair .m0; t 0/. Argue that the probability that the adversary succeeds in fooling Bob into ac- cepting .m0; t 0/ is at most 1=p, no matter how much computing power the ad- versary has, and even if the adversary knows the family H of hash functions used.


Chapter notes

Knuth [211] and Gonnet [145] are excellent references for the analysis of hash- ing algorithms. Knuth credits H. P. Luhn (1953) for inventing hash tables, along with the chaining method for resolving collisions. At about the same time, G. M. Amdahl originated the idea of open addressing.

Carter and Wegman introduced the notion of universal classes of hash functions in 1979 [58].

Fredman, Komlós, and Szemerédi [112] developed the perfect hashing scheme for static sets presented in Section 11.5. An extension of their method to dynamic sets, handling insertions and deletions in amortized expected time O.1/, has been given by Dietzfelbinger et al. [86].

12 Binary Search Trees

The search tree data structure supports many dynamic-set operations, including SEARCH, MINIMUM, MAXIMUM, PREDECESSOR, SUCCESSOR, INSERT, and DELETE. Thus, we can use a search tree both as a dictionary and as a priority queue.

Basic operations on a binary search tree take time proportional to the height of the tree. For a complete binary tree with n nodes, such operations run in ‚.lg n/ worst-case time. If the tree is a linear chain of n nodes, however, the same oper- ations take ‚.n/ worst-case time. We shall see in Section 12.4 that the expected height of a randomly built binary search tree is O.lg n/, so that basic dynamic-set operations on such a tree take ‚.lg n/ time on average.

In practice, we can’t always guarantee that binary search trees are built ran- domly, but we can design variations of binary search trees with good guaranteed worst-case performance on basic operations. Chapter 13 presents one such vari- ation, red-black trees, which have height O.lg n/. Chapter 18 introduces B-trees, which are particularly good for maintaining databases on secondary (disk) storage.

After presenting the basic properties of binary search trees, the following sec- tions show how to walk a binary search tree to print its values in sorted order, how to search for a value in a binary search tree, how to find the minimum or maximum element, how to find the predecessor or successor of an element, and how to insert into or delete from a binary search tree. The basic mathematical properties of trees appear in Appendix B.

12.1 What is a binary search tree?

A binary search tree is organized, as the name suggests, in a binary tree, as shown in Figure 12.1. We can represent such a tree by a linked data structure in which each node is an object. In addition to a key and satellite data, each node contains attributes left, right, and p that point to the nodes corresponding to its left child,



Figure 12.1 Binary search trees. For any node x, the keys in the left subtree of x are at most x:key, and the keys in the right subtree of x are at least x:key. Different binary search trees can represent the same set of values. The worst-case running time for most search-tree operations is proportional to the height of the tree. (a) A binary search tree on 6 nodes with height 2. (b) A less efficient binary search tree with height 4 that contains the same keys.

its right child, and its parent, respectively. If a child or the parent is missing, the appropriate attribute contains the value NIL. The root node is the only node in the tree whose parent is NIL.
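
A minimal Python rendering of this node representation (with None standing in for NIL) might be:

class TreeNode:
    """A binary-search-tree node with a key, optional satellite data, and links."""
    def __init__(self, key, data=None):
        self.key = key
        self.data = data      # satellite data
        self.left = None      # left child, or None for NIL
        self.right = None     # right child, or None for NIL
        self.p = None         # parent, or None for NIL (only the root has p == None)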

The keys in a binary search tree are always stored in such a way as to satisfy the binary-search-tree property:

Let x be a node in a binary search tree. If y is a node in the left subtree of x, then y.key ≤ x.key. If y is a node in the right subtree of x, then y.key ≥ x.key.

Thus, in Figure 12.1(a), the key of the root is 6, the keys 2, 5, and 5 in its left subtree are no larger than 6, and the keys 7 and 8 in its right subtree are no smaller than 6. The same property holds for every node in the tree. For example, the key 5 in the root’s left child is no smaller than the key 2 in that node’s left subtree and no larger than the key 5 in the right subtree.

The binary-search-tree property allows us to print out all the keys in a binary search tree in sorted order by a simple recursive algorithm, called an inorder tree walk. This algorithm is so named because it prints the key of the root of a subtree between printing the values in its left subtree and printing those in its right subtree. (Similarly, a preorder tree walk prints the root before the values in either subtree, and a postorder tree walk prints the root after the values in its subtrees.) To use the following procedure to print all the elements in a binary search tree T , we call INORDER-TREE-WALK.T:root/.


INORDER-TREE-WALK(x)

1  if x ≠ NIL
2      INORDER-TREE-WALK(x.left)
3      print x.key
4      INORDER-TREE-WALK(x.right)
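
In the Python node representation sketched earlier, the same walk is:

def inorder_tree_walk(x):
    """Print the keys of the subtree rooted at x in sorted order."""
    if x is not None:
        inorder_tree_walk(x.left)
        print(x.key)
        inorder_tree_walk(x.right)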

As an example, the inorder tree walk prints the keys in each of the two binary search trees from Figure 12.1 in the order 2; 5; 5; 6; 7; 8. The correctness of the algorithm follows by induction directly from the binary-search-tree property.

It takes ‚.n/ time to walk an n-node binary search tree, since after the ini- tial call, the procedure calls itself recursively exactly twice for each node in the tree—once for its left child and once for its right child. The following theorem gives a formal proof that it takes linear time to perform an inorder tree walk.

Theorem 12.1 If x is the root of an n-node subtree, then the call INORDER-TREE-WALK.x/ takes ‚.n/ time.

Proof Let T .n/ denote the time taken by INORDER-TREE-WALK when it is called on the root of an n-node subtree. Since INORDER-TREE-WALK visits all n nodes of the subtree, we have T .n/ D �.n/. It remains to show that T .n/ D O.n/.

Since INORDER-TREE-WALK takes a small, constant amount of time on an empty subtree (for the test x ¤ NIL), we have T .0/ D c for some constant c > 0.

For n > 0, suppose that INORDER-TREE-WALK is called on a node x whose left subtree has k nodes and whose right subtree has n � k � 1 nodes. The time to perform INORDER-TREE-WALK.x/ is bounded by T .n/ � T .k/CT .n�k�1/Cd for some constant d > 0 that reflects an upper bound on the time to execute the body of INORDER-TREE-WALK.x/, exclusive of the time spent in recursive calls.

We use the substitution method to show that T .n/ D O.n/ by proving that T .n/ � .cCd/nC c. For n D 0, we have .cCd/ �0C c D c D T .0/. For n > 0, we have

T(n) ≤ T(k) + T(n − k − 1) + d
     = ((c + d)k + c) + ((c + d)(n − k − 1) + c) + d
     = (c + d)n + c − (c + d) + c + d
     = (c + d)n + c ,

which completes the proof.


Exercises

12.1-1 For the set of f1; 4; 5; 10; 16; 17; 21g of keys, draw binary search trees of heights 2, 3, 4, 5, and 6.

12.1-2 What is the difference between the binary-search-tree property and the min-heap property (see page 153)? Can the min-heap property be used to print out the keys of an n-node tree in sorted order in O.n/ time? Show how, or explain why not.

12.1-3 Give a nonrecursive algorithm that performs an inorder tree walk. (Hint: An easy solution uses a stack as an auxiliary data structure. A more complicated, but ele- gant, solution uses no stack but assumes that we can test two pointers for equality.)

12.1-4 Give recursive algorithms that perform preorder and postorder tree walks in ‚.n/ time on a tree of n nodes.

12.1-5 Argue that since sorting n elements takes �.n lg n/ time in the worst case in the comparison model, any comparison-based algorithm for constructing a binary search tree from an arbitrary list of n elements takes �.n lg n/ time in the worst case.

12.2 Querying a binary search tree

We often need to search for a key stored in a binary search tree. Besides the SEARCH operation, binary search trees can support such queries as MINIMUM, MAXIMUM, SUCCESSOR, and PREDECESSOR. In this section, we shall examine these operations and show how to support each one in time O.h/ on any binary search tree of height h.

Searching

We use the following procedure to search for a node with a given key in a binary search tree. Given a pointer to the root of the tree and a key k, TREE-SEARCH returns a pointer to a node with key k if one exists; otherwise, it returns NIL.



Figure 12.2 Queries on a binary search tree. To search for the key 13 in the tree, we follow the path 15 ! 6 ! 7 ! 13 from the root. The minimum key in the tree is 2, which is found by following left pointers from the root. The maximum key 20 is found by following right pointers from the root. The successor of the node with key 15 is the node with key 17, since it is the minimum key in the right subtree of 15. The node with key 13 has no right subtree, and thus its successor is its lowest ancestor whose left child is also an ancestor. In this case, the node with key 15 is its successor.

TREE-SEARCH(x, k)

1  if x == NIL or k == x.key
2      return x
3  if k < x.key
4      return TREE-SEARCH(x.left, k)
5  else return TREE-SEARCH(x.right, k)

for any constant k > 0, all but O(1/n^k) of the n! input permutations yield an O(n lg n) running time.
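
Returning to TREE-SEARCH above, a recursive Python version under the same node representation:

def tree_search(x, k):
    """Return a node with key k in the subtree rooted at x, or None if absent."""
    if x is None or k == x.key:
        return x
    if k < x.key:
        return tree_search(x.left, k)
    else:
        return tree_search(x.right, k)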

Problems

12-1 Binary search trees with equal keys Equal keys pose a problem for the implementation of binary search trees.

a. What is the asymptotic performance of TREE-INSERT when used to insert n items with identical keys into an initially empty binary search tree?

We propose to improve TREE-INSERT by testing before line 5 to determine whether ´:key D x:key and by testing before line 11 to determine whether ´:key D y:key.


If equality holds, we implement one of the following strategies. For each strategy, find the asymptotic performance of inserting n items with identical keys into an initially empty binary search tree. (The strategies are described for line 5, in which we compare the keys of ´ and x. Substitute y for x to arrive at the strategies for line 11.)

b. Keep a boolean flag x:b at node x, and set x to either x: left or x:right based on the value of x:b, which alternates between FALSE and TRUE each time we visit x while inserting a node with the same key as x.

c. Keep a list of nodes with equal keys at x, and insert ´ into the list.

d. Randomly set x to either x: left or x:right. (Give the worst-case performance and informally derive the expected running time.)

12-2 Radix trees Given two strings a D a0a1 : : : ap and b D b0b1 : : : bq, where each ai and each bj is in some ordered set of characters, we say that string a is lexicographically less than string b if either

1. there exists an integer j, where 0 ≤ j ≤ min(p, q), such that a_i = b_i for all i = 0, 1, …, j − 1 and a_j < b_j, or

13.3-5
Consider a red-black tree formed by inserting n nodes with RB-INSERT. Argue that if n > 1, the tree has at least one red node.

13.3-6 Suggest how to implement RB-INSERT efficiently if the representation for red- black trees includes no storage for parent pointers.


13.4 Deletion

Like the other basic operations on an n-node red-black tree, deletion of a node takes time O.lg n/. Deleting a node from a red-black tree is a bit more complicated than inserting a node.

The procedure for deleting a node from a red-black tree is based on the TREE- DELETE procedure (Section 12.3). First, we need to customize the TRANSPLANT subroutine that TREE-DELETE calls so that it applies to a red-black tree:

RB-TRANSPLANT(T, u, v)

1  if u.p == T.nil
2      T.root = v
3  elseif u == u.p.left
4      u.p.left = v
5  else u.p.right = v
6  v.p = u.p
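
A Python sketch, assuming the red-black tree object T carries an explicit sentinel node T.nil, so that the assignment to v.p is always safe, exactly as the text exploits:

def rb_transplant(T, u, v):
    """Replace the subtree rooted at u by the subtree rooted at v."""
    if u.p is T.nil:
        T.root = v
    elif u is u.p.left:
        u.p.left = v
    else:
        u.p.right = v
    v.p = u.p    # unconditional: works even when v is the sentinel T.nil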

The procedure RB-TRANSPLANT differs from TRANSPLANT in two ways. First, line 1 references the sentinel T.nil instead of NIL. Second, the assignment to v.p in line 6 occurs unconditionally: we can assign to v.p even if v points to the sentinel. In fact, we shall exploit the ability to assign to v.p when v = T.nil.

The procedure RB-DELETE is like the TREE-DELETE procedure, but with ad- ditional lines of pseudocode. Some of the additional lines keep track of a node y that might cause violations of the red-black properties. When we want to delete node ´ and ´ has fewer than two children, then ´ is removed from the tree, and we want y to be ´. When ´ has two children, then y should be ´’s successor, and y moves into ´’s position in the tree. We also remember y’s color before it is re- moved from or moved within the tree, and we keep track of the node x that moves into y’s original position in the tree, because node x might also cause violations of the red-black properties. After deleting node ´, RB-DELETE calls an auxiliary procedure RB-DELETE-FIXUP, which changes colors and performs rotations to restore the red-black properties.


RB-DELETE.T; ´/

1 y D ´ 2 y-original-color D y:color 3 if ´: left == T:nil 4 x D ´:right 5 RB-TRANSPLANT.T; ´; ´:right/ 6 elseif ´:right == T:nil 7 x D ´: left 8 RB-TRANSPLANT.T; ´; ´: left/ 9 else y D TREE-MINIMUM.´:right/

10 y-original-color D y:color 11 x D y:right 12 if y:p == ´ 13 x:p D y 14 else RB-TRANSPLANT.T; y; y:right/ 15 y:right D ´:right 16 y:right:p D y 17 RB-TRANSPLANT.T; ´; y/ 18 y: left D ´: left 19 y: left:p D y 20 y:color D ´:color 21 if y-original-color == BLACK 22 RB-DELETE-FIXUP.T; x/

Although RB-DELETE contains almost twice as many lines of pseudocode as TREE-DELETE, the two procedures have the same basic structure. You can find each line of TREE-DELETE within RB-DELETE (with the changes of replacing NIL by T:nil and replacing calls to TRANSPLANT by calls to RB-TRANSPLANT), executed under the same conditions.

Here are the other differences between the two procedures:

� We maintain node y as the node either removed from the tree or moved within the tree. Line 1 sets y to point to node ´ when ´ has fewer than two children and is therefore removed. When ´ has two children, line 9 sets y to point to ´’s successor, just as in TREE-DELETE, and y will move into ´’s position in the tree.

� Because node y’s color might change, the variable y-original-color stores y’s color before any changes occur. Lines 2 and 10 set this variable immediately after assignments to y. When ´ has two children, then y ¤ ´ and node y moves into node ´’s original position in the red-black tree; line 20 gives y the same color as ´. We need to save y’s original color in order to test it at the


end of RB-DELETE; if it was black, then removing or moving y could cause violations of the red-black properties.

� As discussed, we keep track of the node x that moves into node y’s original position. The assignments in lines 4, 7, and 11 set x to point to either y’s only child or, if y has no children, the sentinel T:nil. (Recall from Section 12.3 that y has no left child.)

� Since node x moves into node y’s original position, the attribute x:p is always set to point to the original position in the tree of y’s parent, even if x is, in fact, the sentinel T:nil. Unless ´ is y’s original parent (which occurs only when ´ has two children and its successor y is ´’s right child), the assignment to x:p takes place in line 6 of RB-TRANSPLANT. (Observe that when RB-TRANSPLANT is called in lines 5, 8, or 14, the second parameter passed is the same as x.)

When y’s original parent is ´, however, we do not want x:p to point to y’s orig- inal parent, since we are removing that node from the tree. Because node y will move up to take ´’s position in the tree, setting x:p to y in line 13 causes x:p to point to the original position of y’s parent, even if x D T:nil.

� Finally, if node y was black, we might have introduced one or more violations of the red-black properties, and so we call RB-DELETE-FIXUP in line 22 to restore the red-black properties. If y was red, the red-black properties still hold when y is removed or moved, for the following reasons:

1. No black-heights in the tree have changed.

2. No red nodes have been made adjacent. Because y takes ´’s place in the tree, along with ´’s color, we cannot have two adjacent red nodes at y’s new position in the tree. In addition, if y was not ´’s right child, then y’s original right child x replaces y in the tree. If y is red, then x must be black, and so replacing y by x cannot cause two red nodes to become adjacent.

3. Since y could not have been the root if it was red, the root remains black.

If node y was black, three problems may arise, which the call of RB-DELETE- FIXUP will remedy. First, if y had been the root and a red child of y becomes the new root, we have violated property 2. Second, if both x and x:p are red, then we have violated property 4. Third, moving y within the tree causes any simple path that previously contained y to have one fewer black node. Thus, property 5 is now violated by any ancestor of y in the tree. We can correct the violation of property 5 by saying that node x, now occupying y’s original position, has an “extra” black. That is, if we add 1 to the count of black nodes on any simple path that contains x, then under this interpretation, property 5 holds. When we remove or move the black node y, we “push” its blackness onto node x. The problem is that now node x is neither red nor black, thereby violating property 1. Instead,


node x is either “doubly black” or “red-and-black,” and it contributes either 2 or 1, respectively, to the count of black nodes on simple paths containing x. The color attribute of x will still be either RED (if x is red-and-black) or BLACK (if x is doubly black). In other words, the extra black on a node is reflected in x’s pointing to the node rather than in the color attribute.

We can now see the procedure RB-DELETE-FIXUP and examine how it restores the red-black properties to the search tree.

RB-DELETE-FIXUP(T, x)

 1  while x ≠ T.root and x.color == BLACK
 2      if x == x.p.left
 3          w = x.p.right
 4          if w.color == RED
 5              w.color = BLACK                      // case 1
 6              x.p.color = RED                      // case 1
 7              LEFT-ROTATE(T, x.p)                  // case 1
 8              w = x.p.right                        // case 1
 9          if w.left.color == BLACK and w.right.color == BLACK
10              w.color = RED                        // case 2
11              x = x.p                              // case 2
12          else if w.right.color == BLACK
13                  w.left.color = BLACK             // case 3
14                  w.color = RED                    // case 3
15                  RIGHT-ROTATE(T, w)               // case 3
16                  w = x.p.right                    // case 3
17              w.color = x.p.color                  // case 4
18              x.p.color = BLACK                    // case 4
19              w.right.color = BLACK                // case 4
20              LEFT-ROTATE(T, x.p)                  // case 4
21              x = T.root                           // case 4
22      else (same as then clause with "right" and "left" exchanged)
23  x.color = BLACK

The procedure RB-DELETE-FIXUP restores properties 1, 2, and 4. Exercises 13.4-1 and 13.4-2 ask you to show that the procedure restores properties 2 and 4, and so in the remainder of this section, we shall focus on property 1. The goal of the while loop in lines 1–22 is to move the extra black up the tree until

1. x points to a red-and-black node, in which case we color x (singly) black in line 23;

2. x points to the root, in which case we simply “remove” the extra black; or

3. having performed suitable rotations and recolorings, we exit the loop.


Within the while loop, x always points to a nonroot doubly black node. We determine in line 2 whether x is a left child or a right child of its parent x:p. (We have given the code for the situation in which x is a left child; the situation in which x is a right child—line 22—is symmetric.) We maintain a pointer w to the sibling of x. Since node x is doubly black, node w cannot be T:nil, because otherwise, the number of blacks on the simple path from x:p to the (singly black) leaf w would be smaller than the number on the simple path from x:p to x.

The four cases2 in the code appear in Figure 13.7. Before examining each case in detail, let’s look more generally at how we can verify that the transformation in each of the cases preserves property 5. The key idea is that in each case, the transformation applied preserves the number of black nodes (including x’s extra black) from (and including) the root of the subtree shown to each of the subtrees ˛; ˇ; : : : ; �. Thus, if property 5 holds prior to the transformation, it continues to hold afterward. For example, in Figure 13.7(a), which illustrates case 1, the num- ber of black nodes from the root to either subtree ˛ or ˇ is 3, both before and after the transformation. (Again, remember that node x adds an extra black.) Similarly, the number of black nodes from the root to any of , ı, “, and � is 2, both be- fore and after the transformation. In Figure 13.7(b), the counting must involve the value c of the color attribute of the root of the subtree shown, which can be either RED or BLACK. If we define count.RED/ D 0 and count.BLACK/ D 1, then the number of black nodes from the root to ˛ is 2 C count.c/, both before and after the transformation. In this case, after the transformation, the new node x has color attribute c, but this node is really either red-and-black (if c D RED) or doubly black (if c D BLACK). You can verify the other cases similarly (see Exercise 13.4-5).

Case 1: x’s sibling w is red Case 1 (lines 5–8 of RB-DELETE-FIXUP and Figure 13.7(a)) occurs when node w, the sibling of node x, is red. Since w must have black children, we can switch the colors of w and x:p and then perform a left-rotation on x:p without violating any of the red-black properties. The new sibling of x, which is one of w’s children prior to the rotation, is now black, and thus we have converted case 1 into case 2, 3, or 4.

Cases 2, 3, and 4 occur when node w is black; they are distinguished by the colors of w’s children.

2As in RB-INSERT-FIXUP, the cases in RB-DELETE-FIXUP are not mutually exclusive.


Case 2: x’s sibling w is black, and both of w’s children are black In case 2 (lines 10–11 of RB-DELETE-FIXUP and Figure 13.7(b)), both of w’s children are black. Since w is also black, we take one black off both x and w, leaving x with only one black and leaving w red. To compensate for removing one black from x and w, we would like to add an extra black to x:p, which was originally either red or black. We do so by repeating the while loop with x:p as the new node x. Observe that if we enter case 2 through case 1, the new node x is red-and-black, since the original x:p was red. Hence, the value c of the color attribute of the new node x is RED, and the loop terminates when it tests the loop condition. We then color the new node x (singly) black in line 23.

Case 3: x’s sibling w is black, w’s left child is red, and w’s right child is black Case 3 (lines 13–16 and Figure 13.7(c)) occurs when w is black, its left child is red, and its right child is black. We can switch the colors of w and its left child w: left and then perform a right rotation on w without violating any of the red-black properties. The new sibling w of x is now a black node with a red right child, and thus we have transformed case 3 into case 4.

Case 4: x’s sibling w is black, and w’s right child is red Case 4 (lines 17–21 and Figure 13.7(d)) occurs when node x’s sibling w is black and w’s right child is red. By making some color changes and performing a left ro- tation on x:p, we can remove the extra black on x, making it singly black, without violating any of the red-black properties. Setting x to be the root causes the while loop to terminate when it tests the loop condition.

Analysis

What is the running time of RB-DELETE? Since the height of a red-black tree of n nodes is O.lg n/, the total cost of the procedure without the call to RB-DELETE- FIXUP takes O.lg n/ time. Within RB-DELETE-FIXUP, each of cases 1, 3, and 4 lead to termination after performing a constant number of color changes and at most three rotations. Case 2 is the only case in which the while loop can be re- peated, and then the pointer x moves up the tree at most O.lg n/ times, performing no rotations. Thus, the procedure RB-DELETE-FIXUP takes O.lg n/ time and per- forms at most three rotations, and the overall time for RB-DELETE is therefore also O.lg n/.


Figure 13.7 The cases in the while loop of the procedure RB-DELETE-FIXUP. Darkened nodes have color attributes BLACK, heavily shaded nodes have color attributes RED, and lightly shaded nodes have color attributes represented by c and c0, which may be either RED or BLACK. The letters ˛; ˇ; : : : ; � represent arbitrary subtrees. Each case transforms the configuration on the left into the configuration on the right by changing some colors and/or performing a rotation. Any node pointed to by x has an extra black and is either doubly black or red-and-black. Only case 2 causes the loop to repeat. (a) Case 1 is transformed to case 2, 3, or 4 by exchanging the colors of nodes B and D and performing a left rotation. (b) In case 2, the extra black represented by the pointer x moves up the tree by coloring node D red and setting x to point to node B . If we enter case 2 through case 1, the while loop terminates because the new node x is red-and-black, and therefore the value c of its color attribute is RED. (c) Case 3 is transformed to case 4 by exchanging the colors of nodes C and D and performing a right rotation. (d) Case 4 removes the extra black represented by x by changing some colors and performing a left rotation (without violating the red-black properties), and then the loop terminates.


Exercises

13.4-1 Argue that after executing RB-DELETE-FIXUP, the root of the tree must be black.

13.4-2 Argue that if in RB-DELETE both x and x:p are red, then property 4 is restored by the call to RB-DELETE-FIXUP.T; x/.

13.4-3 In Exercise 13.3-2, you found the red-black tree that results from successively inserting the keys 41; 38; 31; 12; 19; 8 into an initially empty tree. Now show the red-black trees that result from the successive deletion of the keys in the order 8; 12; 19; 31; 38; 41.

13.4-4 In which lines of the code for RB-DELETE-FIXUP might we examine or modify the sentinel T:nil?

13.4-5 In each of the cases of Figure 13.7, give the count of black nodes from the root of the subtree shown to each of the subtrees ˛; ˇ; : : : ; �, and verify that each count remains the same after the transformation. When a node has a color attribute c or c 0, use the notation count.c/ or count.c 0/ symbolically in your count.

13.4-6 Professors Skelton and Baron are concerned that at the start of case 1 of RB- DELETE-FIXUP, the node x:p might not be black. If the professors are correct, then lines 5–6 are wrong. Show that x:p must be black at the start of case 1, so that the professors have nothing to worry about.

13.4-7 Suppose that a node x is inserted into a red-black tree with RB-INSERT and then is immediately deleted with RB-DELETE. Is the resulting red-black tree the same as the initial red-black tree? Justify your answer.


Problems

13-1 Persistent dynamic sets During the course of an algorithm, we sometimes find that we need to maintain past versions of a dynamic set as it is updated. We call such a set persistent. One way to implement a persistent set is to copy the entire set whenever it is modified, but this approach can slow down a program and also consume much space. Sometimes, we can do much better.

Consider a persistent set S with the operations INSERT, DELETE, and SEARCH, which we implement using binary search trees as shown in Figure 13.8(a). We maintain a separate root for every version of the set. In order to insert the key 5 into the set, we create a new node with key 5. This node becomes the left child of a new node with key 7, since we cannot modify the existing node with key 7. Similarly, the new node with key 7 becomes the left child of a new node with key 8 whose right child is the existing node with key 10. The new node with key 8 becomes, in turn, the right child of a new root r 0 with key 4 whose left child is the existing node with key 3. We thus copy only part of the tree and share some of the nodes with the original tree, as shown in Figure 13.8(b).

Assume that each tree node has the attributes key, left, and right but no parent. (See also Exercise 13.3-6.)


Figure 13.8 (a) A binary search tree with keys 2; 3; 4; 7; 8; 10. (b) The persistent binary search tree that results from the insertion of key 5. The most recent version of the set consists of the nodes reachable from the root r 0, and the previous version consists of the nodes reachable from r . Heavily shaded nodes are added when key 5 is inserted.


a. For a general persistent binary search tree, identify the nodes that we need to change to insert a key k or delete a node y.

b. Write a procedure PERSISTENT-TREE-INSERT that, given a persistent tree T and a key k to insert, returns a new persistent tree T 0 that is the result of insert- ing k into T .

c. If the height of the persistent binary search tree T is h, what are the time and space requirements of your implementation of PERSISTENT-TREE-INSERT? (The space requirement is proportional to the number of new nodes allocated.)

d. Suppose that we had included the parent attribute in each node. In this case, PERSISTENT-TREE-INSERT would need to perform additional copying. Prove that PERSISTENT-TREE-INSERT would then require �.n/ time and space, where n is the number of nodes in the tree.

e. Show how to use red-black trees to guarantee that the worst-case running time and space are O.lg n/ per insertion or deletion.

13-2 Join operation on red-black trees The join operation takes two dynamic sets S1 and S2 and an element x such that for any x1 2 S1 and x2 2 S2, we have x1:key � x:key � x2:key. It returns a set S D S1 [ fxg [ S2. In this problem, we investigate how to implement the join operation on red-black trees.

a. Given a red-black tree T , let us store its black-height as the new attribute T:bh. Argue that RB-INSERT and RB-DELETE can maintain the bh attribute with- out requiring extra storage in the nodes of the tree and without increasing the asymptotic running times. Show that while descending through T , we can de- termine the black-height of each node we visit in O.1/ time per node visited.

We wish to implement the operation RB-JOIN.T1; x; T2/, which destroys T1 and T2 and returns a red-black tree T D T1[fxg[T2. Let n be the total number of nodes in T1 and T2.

b. Assume that T1:bh � T2:bh. Describe an O.lg n/-time algorithm that finds a black node y in T1 with the largest key from among those nodes whose black- height is T2:bh.

c. Let Ty be the subtree rooted at y. Describe how Ty [ fxg [ T2 can replace Ty in O.1/ time without destroying the binary-search-tree property.

d. What color should we make x so that red-black properties 1, 3, and 5 are main- tained? Describe how to enforce properties 2 and 4 in O.lg n/ time.


e. Argue that no generality is lost by making the assumption in part (b). Describe the symmetric situation that arises when T1:bh � T2:bh.

f. Argue that the running time of RB-JOIN is O.lg n/.

13-3 AVL trees An AVL tree is a binary search tree that is height balanced: for each node x, the heights of the left and right subtrees of x differ by at most 1. To implement an AVL tree, we maintain an extra attribute in each node: x:h is the height of node x. As for any other binary search tree T , we assume that T:root points to the root node.

a. Prove that an AVL tree with n nodes has height O.lg n/. (Hint: Prove that an AVL tree of height h has at least Fh nodes, where Fh is the hth Fibonacci number.)

b. To insert into an AVL tree, we first place a node into the appropriate place in bi- nary search tree order. Afterward, the tree might no longer be height balanced. Specifically, the heights of the left and right children of some node might differ by 2. Describe a procedure BALANCE.x/, which takes a subtree rooted at x whose left and right children are height balanced and have heights that differ by at most 2, i.e., jx:right:h� x: left:hj � 2, and alters the subtree rooted at x to be height balanced. (Hint: Use rotations.)

c. Using part (b), describe a recursive procedure AVL-INSERT.x; ´/ that takes a node x within an AVL tree and a newly created node ´ (whose key has al- ready been filled in), and adds ´ to the subtree rooted at x, maintaining the property that x is the root of an AVL tree. As in TREE-INSERT from Sec- tion 12.3, assume that ´:key has already been filled in and that ´: left D NIL and ´:right D NIL; also assume that ´:h D 0. Thus, to insert the node ´ into the AVL tree T , we call AVL-INSERT.T:root; ´/.

d. Show that AVL-INSERT, run on an n-node AVL tree, takes O.lg n/ time and performs O.1/ rotations.

13-4 Treaps If we insert a set of n items into a binary search tree, the resulting tree may be horribly unbalanced, leading to long search times. As we saw in Section 12.4, however, randomly built binary search trees tend to be balanced. Therefore, one strategy that, on average, builds a balanced tree for a fixed set of items would be to randomly permute the items and then insert them in that order into the tree.

What if we do not have all the items at once? If we receive the items one at a time, can we still randomly build a binary search tree out of them?



Figure 13.9 A treap. Each node x is labeled with x:key : x:priority. For example, the root has key G and priority 4.

We will examine a data structure that answers this question in the affirmative. A treap is a binary search tree with a modified way of ordering the nodes. Figure 13.9 shows an example. As usual, each node x in the tree has a key value x:key. In addition, we assign x:priority, which is a random number chosen independently for each node. We assume that all priorities are distinct and also that all keys are distinct. The nodes of the treap are ordered so that the keys obey the binary-search- tree property and the priorities obey the min-heap order property:

• If v is a left child of u, then v.key < u.key.

• If v is a right child of u, then v.key > u.key.

• If v is a child of u, then v.priority > u.priority.

(This combination of properties is why the tree is called a “treap”: it has features of both a binary search tree and a heap.)

It helps to think of treaps in the following way. Suppose that we insert nodes x1; x2; : : : ; xn, with associated keys, into a treap. Then the resulting treap is the tree that would have been formed if the nodes had been inserted into a normal binary search tree in the order given by their (randomly chosen) priorities, i.e., xi :priority x:priority, y:key r , then the i th smallest element resides in x’s right subtree. Since the subtree rooted at x contains r elements that come before x’s right subtree in an inorder tree walk, the i th smallest element in the subtree rooted at x is the .i � r/th smallest element in the subtree rooted at x:right. Line 6 determines this element recursively.
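
A Python sketch of the OS-SELECT recursion just described, assuming every node (including a sentinel used for missing children) carries a size attribute, with the sentinel's size equal to 0:

def os_select(x, i):
    """Return the node containing the i-th smallest key in the subtree rooted at x."""
    r = x.left.size + 1              # rank of x within its own subtree
    if i == r:
        return x
    elif i < r:
        return os_select(x.left, i)
    else:
        return os_select(x.right, i - r)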

To see how OS-SELECT operates, consider a search for the 17th smallest ele- ment in the order-statistic tree of Figure 14.1. We begin with x as the root, whose key is 26, and with i D 17. Since the size of 26’s left subtree is 12, its rank is 13. Thus, we know that the node with rank 17 is the 17 � 13 D 4th smallest element in 26’s right subtree. After the recursive call, x is the node with key 41, and i D 4. Since the size of 41’s left subtree is 5, its rank within its subtree is 6. Thus, we know that the node with rank 4 is the 4th smallest element in 41’s left subtree. Af- ter the recursive call, x is the node with key 30, and its rank within its subtree is 2. Thus, we recurse once again to find the 4�2 D 2nd smallest element in the subtree rooted at the node with key 38. We now find that its left subtree has size 1, which means it is the second smallest element. Thus, the procedure returns a pointer to the node with key 38.

Because each recursive call goes down one level in the order-statistic tree, the total time for OS-SELECT is at worst proportional to the height of the tree. Since the tree is a red-black tree, its height is O.lg n/, where n is the number of nodes. Thus, the running time of OS-SELECT is O.lg n/ for a dynamic set of n elements.

Determining the rank of an element

Given a pointer to a node x in an order-statistic tree T , the procedure OS-RANK returns the position of x in the linear order determined by an inorder tree walk of T .


OS-RANK(T, x)

1  r = x.left.size + 1
2  y = x
3  while y ≠ T.root
4      if y == y.p.right
5          r = r + y.p.left.size + 1
6      y = y.p
7  return r
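
The same procedure in Python, under the sentinel-based representation assumed above:

def os_rank(T, x):
    """Return the position of node x in an inorder walk of the tree T."""
    r = x.left.size + 1              # nodes before x in x's own subtree, plus x itself
    y = x
    while y is not T.root:
        if y is y.p.right:           # y.p and all of y.p's left subtree precede x
            r = r + y.p.left.size + 1
        y = y.p
    return r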

The procedure works as follows. We can think of node x’s rank as the number of nodes preceding x in an inorder tree walk, plus 1 for x itself. OS-RANK maintains the following loop invariant:

At the start of each iteration of the while loop of lines 3–6, r is the rank of x:key in the subtree rooted at node y.

We use this loop invariant to show that OS-RANK works correctly as follows:

Initialization: Prior to the first iteration, line 1 sets r to be the rank of x:key within the subtree rooted at x. Setting y D x in line 2 makes the invariant true the first time the test in line 3 executes.

Maintenance: At the end of each iteration of the while loop, we set y D y:p. Thus we must show that if r is the rank of x:key in the subtree rooted at y at the start of the loop body, then r is the rank of x:key in the subtree rooted at y:p at the end of the loop body. In each iteration of the while loop, we consider the subtree rooted at y:p. We have already counted the number of nodes in the subtree rooted at node y that precede x in an inorder walk, and so we must add the nodes in the subtree rooted at y’s sibling that precede x in an inorder walk, plus 1 for y:p if it, too, precedes x. If y is a left child, then neither y:p nor any node in y:p’s right subtree precedes x, and so we leave r alone. Otherwise, y is a right child and all the nodes in y:p’s left subtree precede x, as does y:p itself. Thus, in line 5, we add y:p: left:sizeC 1 to the current value of r .

Termination: The loop terminates when y D T:root, so that the subtree rooted at y is the entire tree. Thus, the value of r is the rank of x:key in the entire tree.

As an example, when we run OS-RANK on the order-statistic tree of Figure 14.1 to find the rank of the node with key 38, we get the following sequence of values of y:key and r at the top of the while loop:

iteration    y.key    r
    1          38      2
    2          30      4
    3          41      4
    4          26     17


The procedure returns the rank 17. Since each iteration of the while loop takes O.1/ time, and y goes up one level in

the tree with each iteration, the running time of OS-RANK is at worst proportional to the height of the tree: O.lg n/ on an n-node order-statistic tree.

Maintaining subtree sizes

Given the size attribute in each node, OS-SELECT and OS-RANK can quickly compute order-statistic information. But unless we can efficiently maintain these attributes within the basic modifying operations on red-black trees, our work will have been for naught. We shall now show how to maintain subtree sizes for both insertion and deletion without affecting the asymptotic running time of either op- eration.

We noted in Section 13.3 that insertion into a red-black tree consists of two phases. The first phase goes down the tree from the root, inserting the new node as a child of an existing node. The second phase goes up the tree, changing colors and performing rotations to maintain the red-black properties.

To maintain the subtree sizes in the first phase, we simply increment x:size for each node x on the simple path traversed from the root down toward the leaves. The new node added gets a size of 1. Since there are O.lg n/ nodes on the traversed path, the additional cost of maintaining the size attributes is O.lg n/.

In the second phase, the only structural changes to the underlying red-black tree are caused by rotations, of which there are at most two. Moreover, a rotation is a local operation: only two nodes have their size attributes invalidated. The link around which the rotation is performed is incident on these two nodes. Referring to the code for LEFT-ROTATE.T; x/ in Section 13.2, we add the following lines:

13  y.size = x.size
14  x.size = x.left.size + x.right.size + 1

Figure 14.2 illustrates how the attributes are updated. The change to RIGHT-ROTATE is symmetric.
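
A hedged sketch of LEFT-ROTATE with the two extra size-maintenance lines folded in; the structural part follows the rotation of Section 13.2, and the sentinel-based representation is assumed as before:

def left_rotate(T, x):
    """Rotate left around x, keeping subtree sizes correct."""
    y = x.right                      # y becomes the new root of this subtree
    x.right = y.left
    if y.left is not T.nil:
        y.left.p = x
    y.p = x.p
    if x.p is T.nil:
        T.root = y
    elif x is x.p.left:
        x.p.left = y
    else:
        x.p.right = y
    y.left = x
    x.p = y
    # lines 13-14: only x and y need their size attributes recomputed
    y.size = x.size
    x.size = x.left.size + x.right.size + 1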

Since at most two rotations are performed during insertion into a red-black tree, we spend only O.1/ additional time updating size attributes in the second phase. Thus, the total time for insertion into an n-node order-statistic tree is O.lg n/, which is asymptotically the same as for an ordinary red-black tree.

Deletion from a red-black tree also consists of two phases: the first operates on the underlying search tree, and the second causes at most three rotations and otherwise performs no structural changes. (See Section 13.4.) The first phase either removes one node y from the tree or moves it upward within the tree. To update the subtree sizes, we simply traverse a simple path from node y (starting from its original position within the tree) up to the root, decrementing the size


Figure 14.2 Updating subtree sizes during rotations. The link around which we rotate is incident on the two nodes whose size attributes need to be updated. The updates are local, requiring only the size information stored in x, y, and the roots of the subtrees shown as triangles.

attribute of each node on the path. Since this path has length O.lg n/ in an n- node red-black tree, the additional time spent maintaining size attributes in the first phase is O.lg n/. We handle the O.1/ rotations in the second phase of deletion in the same manner as for insertion. Thus, both insertion and deletion, including maintaining the size attributes, take O.lg n/ time for an n-node order-statistic tree.

Exercises

14.1-1 Show how OS-SELECT.T:root; 10/ operates on the red-black tree T of Fig- ure 14.1.

14.1-2 Show how OS-RANK.T; x/ operates on the red-black tree T of Figure 14.1 and the node x with x:key D 35. 14.1-3 Write a nonrecursive version of OS-SELECT.

14.1-4 Write a recursive procedure OS-KEY-RANK.T; k/ that takes as input an order- statistic tree T and a key k and returns the rank of k in the dynamic set represented by T . Assume that the keys of T are distinct.

14.1-5 Given an element x in an n-node order-statistic tree and a natural number i , how can we determine the i th successor of x in the linear order of the tree in O.lg n/ time?


14.1-6 Observe that whenever we reference the size attribute of a node in either OS- SELECT or OS-RANK, we use it only to compute a rank. Accordingly, suppose we store in each node its rank in the subtree of which it is the root. Show how to maintain this information during insertion and deletion. (Remember that these two operations can cause rotations.)

14.1-7 Show how to use an order-statistic tree to count the number of inversions (see Problem 2-4) in an array of size n in time O.n lg n/.

14.1-8 ? Consider n chords on a circle, each defined by its endpoints. Describe an O(n lg n)-time algorithm to determine the number of pairs of chords that intersect inside the circle. (For example, if the n chords are all diameters that meet at the center, then the correct answer is (n choose 2).) Assume that no two chords share an endpoint.

14.2 How to augment a data structure

The process of augmenting a basic data structure to support additional functionality occurs quite frequently in algorithm design. We shall use it again in the next section to design a data structure that supports operations on intervals. In this section, we examine the steps involved in such augmentation. We shall also prove a theorem that allows us to augment red-black trees easily in many cases.

We can break the process of augmenting a data structure into four steps:

1. Choose an underlying data structure.

2. Determine additional information to maintain in the underlying data structure.

3. Verify that we can maintain the additional information for the basic modifying operations on the underlying data structure.

4. Develop new operations.

As with any prescriptive design method, you should not blindly follow the steps in the order given. Most design work contains an element of trial and error, and progress on all steps usually proceeds in parallel. There is no point, for example, in determining additional information and developing new operations (steps 2 and 4) if we will not be able to maintain the additional information efficiently. Neverthe- less, this four-step method provides a good focus for your efforts in augmenting a data structure, and it is also a good way to organize the documentation of an augmented data structure.


We followed these steps in Section 14.1 to design our order-statistic trees. For step 1, we chose red-black trees as the underlying data structure. A clue to the suitability of red-black trees comes from their efficient support of other dynamic- set operations on a total order, such as MINIMUM, MAXIMUM, SUCCESSOR, and PREDECESSOR.

For step 2, we added the size attribute, in which each node x stores the size of the subtree rooted at x. Generally, the additional information makes operations more efficient. For example, we could have implemented OS-SELECT and OS-RANK using just the keys stored in the tree, but they would not have run in O.lg n/ time. Sometimes, the additional information is pointer information rather than data, as in Exercise 14.2-1.

For step 3, we ensured that insertion and deletion could maintain the size at- tributes while still running in O.lg n/ time. Ideally, we should need to update only a few elements of the data structure in order to maintain the additional information. For example, if we simply stored in each node its rank in the tree, the OS-SELECT and OS-RANK procedures would run quickly, but inserting a new minimum ele- ment would cause a change to this information in every node of the tree. When we store subtree sizes instead, inserting a new element causes information to change in only O.lg n/ nodes.

For step 4, we developed the operations OS-SELECT and OS-RANK. After all, the need for new operations is why we bother to augment a data structure in the first place. Occasionally, rather than developing new operations, we use the additional information to expedite existing ones, as in Exercise 14.2-1.
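As an illustration of step 4, here is a hedged Python sketch of the two queries. It assumes nodes carrying key, left, right, parent, and size attributes, with None in place of the book's sentinel; the function names simply mirror OS-SELECT and OS-RANK.

def subtree_size(node):
    return node.size if node is not None else 0

def os_select(x, i):
    # Return the node holding the i-th smallest key in the subtree rooted at x
    # (requires 1 <= i <= subtree_size(x)).
    r = subtree_size(x.left) + 1          # rank of x within its own subtree
    if i == r:
        return x
    elif i < r:
        return os_select(x.left, i)
    else:
        return os_select(x.right, i - r)

def os_rank(root, x):
    # Return the position of node x in an inorder walk of the tree rooted at root.
    r = subtree_size(x.left) + 1
    y = x
    while y is not root:
        if y is y.parent.right:           # y.parent and its left subtree all precede y
            r += subtree_size(y.parent.left) + 1
        y = y.parent
    return r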

Augmenting red-black trees

When red-black trees underlie an augmented data structure, we can prove that in- sertion and deletion can always efficiently maintain certain kinds of additional in- formation, thereby making step 3 very easy. The proof of the following theorem is similar to the argument from Section 14.1 that we can maintain the size attribute for order-statistic trees.

Theorem 14.1 (Augmenting a red-black tree) Let f be an attribute that augments a red-black tree T of n nodes, and suppose that the value of f for each node x depends on only the information in nodes x, x: left, and x:right, possibly including x: left: f and x:right: f . Then, we can maintain the values of f in all nodes of T during insertion and deletion without asymptotically affecting the O.lg n/ performance of these operations.

Proof The main idea of the proof is that a change to an f attribute in a node x propagates only to ancestors of x in the tree. That is, changing x.f may


require x.p.f to be updated, but nothing else; updating x.p.f may require x.p.p.f to be updated, but nothing else; and so on up the tree. Once we have updated T.root.f, no other node will depend on the new value, and so the process terminates. Since the height of a red-black tree is O(lg n), changing an f attribute in a node costs O(lg n) time in updating all nodes that depend on the change.

Insertion of a node x into T consists of two phases. (See Section 13.3.) The first phase inserts x as a child of an existing node x:p. We can compute the value of x: f in O.1/ time since, by supposition, it depends only on information in the other attributes of x itself and the information in x’s children, but x’s children are both the sentinel T:nil. Once we have computed x: f , the change propagates up the tree. Thus, the total time for the first phase of insertion is O.lg n/. During the second phase, the only structural changes to the tree come from rotations. Since only two nodes change in a rotation, the total time for updating the f attributes is O.lg n/ per rotation. Since the number of rotations during insertion is at most two, the total time for insertion is O.lg n/.

Like insertion, deletion has two phases. (See Section 13.4.) In the first phase, changes to the tree occur when the deleted node is removed from the tree. If the deleted node had two children at the time, then its successor moves into the position of the deleted node. Propagating the updates to f caused by these changes costs at most O.lg n/, since the changes modify the tree locally. Fixing up the red-black tree during the second phase requires at most three rotations, and each rotation requires at most O.lg n/ time to propagate the updates to f . Thus, like insertion, the total time for deletion is O.lg n/.

In many cases, such as maintaining the size attributes in order-statistic trees, the cost of updating after a rotation is O.1/, rather than the O.lg n/ derived in the proof of Theorem 14.1. Exercise 14.2-3 gives an example.

Exercises

14.2-1 Show, by adding pointers to the nodes, how to support each of the dynamic-set queries MINIMUM, MAXIMUM, SUCCESSOR, and PREDECESSOR in O.1/ worst- case time on an augmented order-statistic tree. The asymptotic performance of other operations on order-statistic trees should not be affected.

14.2-2 Can we maintain the black-heights of nodes in a red-black tree as attributes in the nodes of the tree without affecting the asymptotic performance of any of the red- black tree operations? Show how, or argue why not. How about maintaining the depths of nodes?


14.2-3 ? Let ⊗ be an associative binary operator, and let a be an attribute maintained in each node of a red-black tree. Suppose that we want to include in each node x an additional attribute f such that x.f = x1.a ⊗ x2.a ⊗ ⋯ ⊗ xm.a, where x1, x2, ..., xm is the inorder listing of nodes in the subtree rooted at x. Show how to update the f attributes in O(1) time after a rotation. Modify your argument slightly to apply it to the size attributes in order-statistic trees.

14.2-4 ? We wish to augment red-black trees with an operation RB-ENUMERATE.x; a; b/ that outputs all the keys k such that a � k � b in a red-black tree rooted at x. Describe how to implement RB-ENUMERATE in ‚.mC lg n/ time, where m is the number of keys that are output and n is the number of internal nodes in the tree. (Hint: You do not need to add new attributes to the red-black tree.)

14.3 Interval trees

In this section, we shall augment red-black trees to support operations on dynamic sets of intervals. A closed interval is an ordered pair of real numbers [t1, t2], with t1 ≤ t2. The interval [t1, t2] represents the set {t ∈ ℝ : t1 ≤ t ≤ t2}. Open and half-open intervals omit both or one of the endpoints from the set, respectively. In this section, we shall assume that intervals are closed; extending the results to open and half-open intervals is conceptually straightforward.

Intervals are convenient for representing events that each occupy a continuous period of time. We might, for example, wish to query a database of time intervals to find out what events occurred during a given interval. The data structure in this section provides an efficient means for maintaining such an interval database.

We can represent an interval [t1, t2] as an object i, with attributes i.low = t1 (the low endpoint) and i.high = t2 (the high endpoint). We say that intervals i and i′ overlap if i ∩ i′ ≠ ∅, that is, if i.low ≤ i′.high and i′.low ≤ i.high. As Figure 14.3 shows, any two intervals i and i′ satisfy the interval trichotomy; that is, exactly one of the following three properties holds:

a. i and i′ overlap,

b. i is to the left of i′ (i.e., i.high < i′.low),

c. i is to the right of i′ (i.e., i′.high < i.low).

PRINT-CUT-ROD-SOLUTION(p, n)
1  (r, s) = EXTENDED-BOTTOM-UP-CUT-ROD(p, n)
2  while n > 0
3      print s[n]
4      n = n − s[n]

In our rod-cutting example, the call EXTENDED-BOTTOM-UP-CUT-ROD.p; 10/ would return the following arrays:

i      0   1   2   3   4   5   6   7   8   9  10
r[i]   0   1   5   8  10  13  17  18  22  25  30
s[i]   0   1   2   3   2   2   6   1   2   3  10

A call to PRINT-CUT-ROD-SOLUTION.p; 10/ would print just 10, but a call with n D 7 would print the cuts 1 and 6, corresponding to the first optimal decomposi- tion for r7 given earlier.
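A runnable Python sketch of the two procedures just described is below. The price list p is the usual sample table for this example (restated here as an assumption, since the table itself is not shown in this excerpt); running the sketch reproduces the r and s arrays and the printed cuts above.

def extended_bottom_up_cut_rod(p, n):
    # p[i] is the price of a rod of length i (p[0] is unused).
    r = [0] * (n + 1)          # r[j]: best revenue obtainable for a rod of length j
    s = [0] * (n + 1)          # s[j]: length of the first piece in an optimal cut of j
    for j in range(1, n + 1):
        best = float("-inf")
        for i in range(1, j + 1):
            if p[i] + r[j - i] > best:
                best = p[i] + r[j - i]
                s[j] = i
        r[j] = best
    return r, s

def print_cut_rod_solution(p, n):
    r, s = extended_bottom_up_cut_rod(p, n)
    while n > 0:
        print(s[n])
        n -= s[n]

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print_cut_rod_solution(p, 10)   # prints 10
print_cut_rod_solution(p, 7)    # prints 1, then 6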

Exercises

15.1-1 Show that equation (15.4) follows from equation (15.3) and the initial condition T .0/ D 1.


15.1-2 Show, by means of a counterexample, that the following "greedy" strategy does not always determine an optimal way to cut rods. Define the density of a rod of length i to be p_i / i, that is, its value per inch. The greedy strategy for a rod of length n cuts off a first piece of length i, where 1 ≤ i ≤ n, having maximum density. It then continues by applying the greedy strategy to the remaining piece of length n − i.

15.1-3 Consider a modification of the rod-cutting problem in which, in addition to a price p_i for each rod, each cut incurs a fixed cost of c. The revenue associated with a solution is now the sum of the prices of the pieces minus the costs of making the cuts. Give a dynamic-programming algorithm to solve this modified problem.

15.1-4 Modify MEMOIZED-CUT-ROD to return not only the value but the actual solution, too.

15.1-5 The Fibonacci numbers are defined by recurrence (3.22). Give an O.n/-time dynamic-programming algorithm to compute the nth Fibonacci number. Draw the subproblem graph. How many vertices and edges are in the graph?

15.2 Matrix-chain multiplication

Our next example of dynamic programming is an algorithm that solves the problem of matrix-chain multiplication. We are given a sequence (chain) hA1; A2; : : : ; Ani of n matrices to be multiplied, and we wish to compute the product

A1 A2 ⋯ An .     (15.5)

We can evaluate the expression (15.5) using the standard algorithm for multiplying pairs of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in how the matrices are multiplied together. Matrix multiplication is associative, and so all parenthesizations yield the same product. A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses. For example, if the chain of matrices is ⟨A1, A2, A3, A4⟩, then we can fully parenthesize the product A1A2A3A4 in five distinct ways:


(A1(A2(A3A4))) ,
(A1((A2A3)A4)) ,
((A1A2)(A3A4)) ,
((A1(A2A3))A4) ,
(((A1A2)A3)A4) .

How we parenthesize a chain of matrices can have a dramatic impact on the cost of evaluating the product. Consider first the cost of multiplying two matrices. The standard algorithm is given by the following pseudocode, which generalizes the SQUARE-MATRIX-MULTIPLY procedure from Section 4.2. The attributes rows and columns are the numbers of rows and columns in a matrix.

MATRIX-MULTIPLY(A, B)

1  if A.columns ≠ B.rows
2      error "incompatible dimensions"
3  else let C be a new A.rows × B.columns matrix
4      for i = 1 to A.rows
5          for j = 1 to B.columns
6              c_ij = 0
7              for k = 1 to A.columns
8                  c_ij = c_ij + a_ik · b_kj
9      return C

We can multiply two matrices A and B only if they are compatible: the number of columns of A must equal the number of rows of B. If A is a p × q matrix and B is a q × r matrix, the resulting matrix C is a p × r matrix. The time to compute C is dominated by the number of scalar multiplications in line 8, which is pqr. In what follows, we shall express costs in terms of the number of scalar multiplications.

To illustrate the different costs incurred by different parenthesizations of a matrix product, consider the problem of a chain ⟨A1, A2, A3⟩ of three matrices. Suppose that the dimensions of the matrices are 10 × 100, 100 × 5, and 5 × 50, respectively. If we multiply according to the parenthesization ((A1A2)A3), we perform 10 · 100 · 5 = 5000 scalar multiplications to compute the 10 × 5 matrix product A1A2, plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar multiplications. If instead we multiply according to the parenthesization (A1(A2A3)), we perform 100 · 5 · 50 = 25,000 scalar multiplications to compute the 100 × 50 matrix product A2A3, plus another 10 · 100 · 50 = 50,000 scalar multiplications to multiply A1 by this matrix, for a total of 75,000 scalar multiplications. Thus, computing the product according to the first parenthesization is 10 times faster.
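The arithmetic behind the 7500 versus 75,000 comparison is easy to check mechanically. In the small sketch below, p = [10, 100, 5, 50] encodes the three dimensions, and cost(p, i, k, j) is the pqr count for combining A_{i..k} with A_{k+1..j}; the helper name is ours, not the book's.

def cost(p, i, k, j):
    # Scalar multiplications to multiply a p[i-1] x p[k] matrix by a p[k] x p[j] matrix.
    return p[i - 1] * p[k] * p[j]

p = [10, 100, 5, 50]                              # A1: 10x100, A2: 100x5, A3: 5x50
first  = cost(p, 1, 1, 2) + cost(p, 1, 2, 3)      # ((A1 A2) A3): 5000 + 2500
second = cost(p, 2, 2, 3) + cost(p, 1, 1, 3)      # (A1 (A2 A3)): 25000 + 50000
print(first, second)                              # 7500 75000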

We state the matrix-chain multiplication problem as follows: given a chain hA1;A2; : : : ;Ani of n matrices, where for i D 1; 2; : : : ; n, matrix Ai has dimension


p_{i−1} × p_i, fully parenthesize the product A1 A2 ⋯ An in a way that minimizes the number of scalar multiplications.

Note that in the matrix-chain multiplication problem, we are not actually multi- plying matrices. Our goal is only to determine an order for multiplying matrices that has the lowest cost. Typically, the time invested in determining this optimal order is more than paid for by the time saved later on when actually performing the matrix multiplications (such as performing only 7500 scalar multiplications instead of 75,000).

Counting the number of parenthesizations

Before solving the matrix-chain multiplication problem by dynamic programming, let us convince ourselves that exhaustively checking all possible parenthesizations does not yield an efficient algorithm. Denote the number of alternative parenthe- sizations of a sequence of n matrices by P.n/. When n D 1, we have just one matrix and therefore only one way to fully parenthesize the matrix product. When n � 2, a fully parenthesized matrix product is the product of two fully parenthe- sized matrix subproducts, and the split between the two subproducts may occur between the kth and .k C 1/st matrices for any k D 1; 2; : : : ; n � 1. Thus, we obtain the recurrence

P(n) = 1                                    if n = 1 ,
P(n) = Σ_{k=1}^{n−1} P(k) P(n − k)          if n ≥ 2 .     (15.6)

Problem 12-4 asked you to show that the solution to a similar recurrence is the sequence of Catalan numbers, which grows as Ω(4^n / n^{3/2}). A simpler exercise (see Exercise 15.2-3) is to show that the solution to the recurrence (15.6) is Ω(2^n). The number of solutions is thus exponential in n, and the brute-force method of exhaustive search makes for a poor strategy when determining how to optimally parenthesize a matrix chain.
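Evaluating recurrence (15.6) directly makes the exponential growth visible. The sketch below is only an illustration of the count, not an algorithm we would ever use to parenthesize a chain; memoization is added so the small demonstration itself runs quickly.

from functools import lru_cache

@lru_cache(maxsize=None)
def num_parenthesizations(n):
    # Recurrence (15.6): one way for a single matrix, otherwise sum over the split point.
    if n == 1:
        return 1
    return sum(num_parenthesizations(k) * num_parenthesizations(n - k)
               for k in range(1, n))

print([num_parenthesizations(n) for n in range(1, 8)])
# [1, 1, 2, 5, 14, 42, 132] -- the Catalan numbers, shifted by one index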

Applying dynamic programming

We shall use the dynamic-programming method to determine how to optimally parenthesize a matrix chain. In so doing, we shall follow the four-step sequence that we stated at the beginning of this chapter:

1. Characterize the structure of an optimal solution.

2. Recursively define the value of an optimal solution.

3. Compute the value of an optimal solution.


4. Construct an optimal solution from computed information.

We shall go through these steps in order, demonstrating clearly how we apply each step to the problem.

Step 1: The structure of an optimal parenthesization

For our first step in the dynamic-programming paradigm, we find the optimal substructure and then use it to construct an optimal solution to the problem from optimal solutions to subproblems. In the matrix-chain multiplication problem, we can perform this step as follows. For convenience, let us adopt the notation A_{i..j}, where i ≤ j, for the matrix that results from evaluating the product A_i A_{i+1} ⋯ A_j. Observe that if the problem is nontrivial, i.e., i < j, then any parenthesization of the product must split it between A_k and A_{k+1} for some k in the range i ≤ k < j.

Letting T(n) denote the time taken by the straightforward recursive (non-memoized) procedure RECURSIVE-MATRIX-CHAIN on a chain of n matrices, we have the recurrence

T(1) ≥ 1 ,
T(n) ≥ 1 + Σ_{k=1}^{n−1} (T(k) + T(n − k) + 1)     for n > 1 .     (15.7)

Noting that for i = 1, 2, ..., n − 1, each term T(i) appears once as T(k) and once as T(n − k), and collecting the n − 1 1s in the summation together with the 1 out front, we can rewrite the recurrence as

T(n) ≥ 2 Σ_{i=1}^{n−1} T(i) + n .     (15.8)

We shall prove that T(n) = Ω(2^n) using the substitution method. Specifically, we shall show that T(n) ≥ 2^{n−1} for all n ≥ 1. The basis is easy, since T(1) ≥ 1 = 2^0. Inductively, for n ≥ 2 we have

T(n) ≥ 2 Σ_{i=1}^{n−1} 2^{i−1} + n
     = 2 Σ_{i=0}^{n−2} 2^i + n
     = 2 (2^{n−1} − 1) + n     (by equation (A.5))
     = 2^n − 2 + n
     ≥ 2^{n−1} ,

which completes the proof. Thus, the total amount of work performed by the call RECURSIVE-MATRIX-CHAIN.p; 1; n/ is at least exponential in n.

Compare this top-down, recursive algorithm (without memoization) with the bottom-up dynamic-programming algorithm. The latter is more efficient because it takes advantage of the overlapping-subproblems property. Matrix-chain mul- tiplication has only ‚.n2/ distinct subproblems, and the dynamic-programming algorithm solves each exactly once. The recursive algorithm, on the other hand, must again solve each subproblem every time it reappears in the recursion tree. Whenever a recursion tree for the natural recursive solution to a problem contains the same subproblem repeatedly, and the total number of distinct subproblems is small, dynamic programming can improve efficiency, sometimes dramatically.


Reconstructing an optimal solution

As a practical matter, we often store which choice we made in each subproblem in a table so that we do not have to reconstruct this information from the costs that we stored.

For matrix-chain multiplication, the table sŒi; j � saves us a significant amount of work when reconstructing an optimal solution. Suppose that we did not maintain the sŒi; j � table, having filled in only the table mŒi; j � containing optimal subprob- lem costs. We choose from among j � i possibilities when we determine which subproblems to use in an optimal solution to parenthesizing AiAiC1 � � �Aj , and j � i is not a constant. Therefore, it would take ‚.j � i/ D !.1/ time to recon- struct which subproblems we chose for a solution to a given problem. By storing in sŒi; j � the index of the matrix at which we split the product AiAiC1 � � �Aj , we can reconstruct each choice in O.1/ time.

Memoization

As we saw for the rod-cutting problem, there is an alternative approach to dy- namic programming that often offers the efficiency of the bottom-up dynamic- programming approach while maintaining a top-down strategy. The idea is to memoize the natural, but inefficient, recursive algorithm. As in the bottom-up ap- proach, we maintain a table with subproblem solutions, but the control structure for filling in the table is more like the recursive algorithm.

A memoized recursive algorithm maintains an entry in a table for the solution to each subproblem. Each table entry initially contains a special value to indicate that the entry has yet to be filled in. When the subproblem is first encountered as the recursive algorithm unfolds, its solution is computed and then stored in the table. Each subsequent time that we encounter this subproblem, we simply look up the value stored in the table and return it.5
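The footnote to this passage mentions that one can also memoize by hashing on the subproblem parameters instead of preallocating a table. Here is a hedged Python sketch of that variant for matrix-chain multiplication; the dictionary memo keyed by (i, j) plays the role of the table of "not yet filled in" entries, and all names are illustrative.

def memoized_matrix_chain(p):
    # p is the dimension sequence, so matrix A_i is p[i-1] x p[i].
    n = len(p) - 1
    memo = {}                                    # (i, j) -> optimal cost of A_i..j

    def lookup(i, j):
        if (i, j) in memo:                       # subproblem already solved
            return memo[(i, j)]
        if i == j:
            best = 0
        else:
            best = min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))
        memo[(i, j)] = best                      # store the solution the first time
        return best

    return lookup(1, n)

print(memoized_matrix_chain([10, 100, 5, 50]))   # 7500 for the three-matrix example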

Here is a memoized version of RECURSIVE-MATRIX-CHAIN. Note where it resembles the memoized top-down method for the rod-cutting problem.

5This approach presupposes that we know the set of all possible subproblem parameters and that we have established the relationship between table positions and subproblems. Another, more general, approach is to memoize by using hashing with the subproblem parameters as keys.


MEMOIZED-MATRIX-CHAIN(p)

1  n = p.length − 1
2  let m[1..n, 1..n] be a new table
3  for i = 1 to n
4      for j = i to n
5          m[i, j] = ∞
6  return LOOKUP-CHAIN(m, p, 1, n)

LOOKUP-CHAIN(m, p, i, j)

1  if m[i, j] < ∞
2      return m[i, j]
3  if i == j
4      m[i, j] = 0
5  else for k = i to j − 1
6          q = LOOKUP-CHAIN(m, p, i, k) + LOOKUP-CHAIN(m, p, k + 1, j) + p_{i−1} p_k p_j
7          if q < m[i, j]
8              m[i, j] = q
9  return m[i, j]

c[i, j] = 0                                    if i = 0 or j = 0 ,
c[i, j] = c[i − 1, j − 1] + 1                  if i, j > 0 and x_i = y_j ,
c[i, j] = max(c[i, j − 1], c[i − 1, j])        if i, j > 0 and x_i ≠ y_j .     (15.9)

Observe that in this recursive formulation, a condition in the problem restricts which subproblems we may consider. When xi D yj , we can and should consider the subproblem of finding an LCS of Xi�1 and Yj �1. Otherwise, we instead con- sider the two subproblems of finding an LCS of Xi and Yj �1 and of Xi�1 and Yj . In the previous dynamic-programming algorithms we have examined—for rod cutting and matrix-chain multiplication—we ruled out no subproblems due to conditions in the problem. Finding an LCS is not the only dynamic-programming algorithm that rules out subproblems based on conditions in the problem. For example, the edit-distance problem (see Problem 15-5) has this characteristic.

Step 3: Computing the length of an LCS

Based on equation (15.9), we could easily write an exponential-time recursive al- gorithm to compute the length of an LCS of two sequences. Since the LCS problem


has only ‚.mn/ distinct subproblems, however, we can use dynamic programming to compute the solutions bottom up.

Procedure LCS-LENGTH takes two sequences X D hx1; x2; : : : ; xmi and Y D hy1;y2; : : : ;yni as inputs. It stores the cŒi; j � values in a table cŒ0 : : m; 0 : : n�, and it computes the entries in row-major order. (That is, the procedure fills in the first row of c from left to right, then the second row, and so on.) The procedure also maintains the table bŒ1 : : m; 1 : : n� to help us construct an optimal solution. Intu- itively, bŒi; j � points to the table entry corresponding to the optimal subproblem solution chosen when computing cŒi; j �. The procedure returns the b and c tables; cŒm; n� contains the length of an LCS of X and Y .

LCS-LENGTH(X, Y)

1   m = X.length
2   n = Y.length
3   let b[1..m, 1..n] and c[0..m, 0..n] be new tables
4   for i = 1 to m
5       c[i, 0] = 0
6   for j = 0 to n
7       c[0, j] = 0
8   for i = 1 to m
9       for j = 1 to n
10          if x_i == y_j
11              c[i, j] = c[i − 1, j − 1] + 1
12              b[i, j] = "↖"
13          elseif c[i − 1, j] ≥ c[i, j − 1]
14              c[i, j] = c[i − 1, j]
15              b[i, j] = "↑"
16          else c[i, j] = c[i, j − 1]
17              b[i, j] = "←"
18  return c and b

Figure 15.8 shows the tables produced by LCS-LENGTH on the sequences X = ⟨A, B, C, B, D, A, B⟩ and Y = ⟨B, D, C, A, B, A⟩. The running time of the procedure is Θ(mn), since each table entry takes Θ(1) time to compute.
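For readers who prefer running code, here is a straightforward Python sketch of LCS-LENGTH. It stores the strings 'diag', 'up', and 'left' in b in place of the arrows; everything else follows the procedure above.

def lcs_length(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]      # c[i][j]: LCS length of X[:i], Y[:j]
    b = [[None] * (n + 1) for _ in range(m + 1)]   # which case produced c[i][j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = 'diag'
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = 'up'
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = 'left'
    return c, b

c, b = lcs_length("ABCBDAB", "BDCABA")
print(c[7][6])   # 4, the length of the LCS "BCBA"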

Step 4: Constructing an LCS

The b table returned by LCS-LENGTH enables us to quickly construct an LCS of X = ⟨x1, x2, ..., xm⟩ and Y = ⟨y1, y2, ..., yn⟩. We simply begin at b[m, n] and trace through the table by following the arrows. Whenever we encounter a "↖" in entry b[i, j], it implies that x_i = y_j is an element of the LCS that LCS-LENGTH



Figure 15.8 The c and b tables computed by LCS-LENGTH on the sequences X = ⟨A, B, C, B, D, A, B⟩ and Y = ⟨B, D, C, A, B, A⟩. The square in row i and column j contains the value of c[i, j] and the appropriate arrow for the value of b[i, j]. The entry 4 in c[7, 6], the lower right-hand corner of the table, is the length of an LCS ⟨B, C, B, A⟩ of X and Y. For i, j > 0, entry c[i, j] depends only on whether x_i = y_j and the values in entries c[i − 1, j], c[i, j − 1], and c[i − 1, j − 1], which are computed before c[i, j]. To reconstruct the elements of an LCS, follow the b[i, j] arrows from the lower right-hand corner; the sequence is shaded. Each "↖" on the shaded sequence corresponds to an entry (highlighted) for which x_i = y_j is a member of an LCS.

found. With this method, we encounter the elements of this LCS in reverse order. The following recursive procedure prints out an LCS of X and Y in the proper, forward order. The initial call is PRINT-LCS.b; X; X: length; Y: length/.

PRINT-LCS(b, X, i, j)

1  if i == 0 or j == 0
2      return
3  if b[i, j] == "↖"
4      PRINT-LCS(b, X, i − 1, j − 1)
5      print x_i
6  elseif b[i, j] == "↑"
7      PRINT-LCS(b, X, i − 1, j)
8  else PRINT-LCS(b, X, i, j − 1)

For the b table in Figure 15.8, this procedure prints BCBA. The procedure takes time O.mC n/, since it decrements at least one of i and j in each recursive call.


Improving the code

Once you have developed an algorithm, you will often find that you can improve on the time or space it uses. Some changes can simplify the code and improve constant factors but otherwise yield no asymptotic improvement in performance. Others can yield substantial asymptotic savings in time and space.

In the LCS algorithm, for example, we can eliminate the b table altogether. Each cŒi; j � entry depends on only three other c table entries: cŒi � 1; j � 1�, cŒi � 1; j �, and cŒi; j � 1�. Given the value of cŒi; j �, we can determine in O.1/ time which of these three values was used to compute cŒi; j �, without inspecting table b. Thus, we can reconstruct an LCS in O.mCn/ time using a procedure similar to PRINT-LCS. (Exercise 15.4-2 asks you to give the pseudocode.) Although we save ‚.mn/ space by this method, the auxiliary space requirement for computing an LCS does not asymptotically decrease, since we need ‚.mn/ space for the c table anyway.
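A sketch of the reconstruction just described, and of what Exercise 15.4-2 asks for, is below: given only the completed c table (for instance the one returned by the lcs_length sketch earlier) and the two sequences, it retraces an LCS in O(m + n) time without consulting b.

def lcs_from_c(c, X, Y):
    i, j = len(X), len(Y)
    out = []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])           # this character belongs to the LCS
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:   # c[i][j] must have come from the entry above
            i -= 1
        else:                              # otherwise it came from the entry to the left
            j -= 1
    return "".join(reversed(out))

# With c from the earlier lcs_length sketch, lcs_from_c(c, "ABCBDAB", "BDCABA")
# returns "BCBA".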

We can, however, reduce the asymptotic space requirements for LCS-LENGTH, since it needs only two rows of table c at a time: the row being computed and the previous row. (In fact, as Exercise 15.4-4 asks you to show, we can use only slightly more than the space for one row of c to compute the length of an LCS.) This improvement works if we need only the length of an LCS; if we need to reconstruct the elements of an LCS, the smaller table does not keep enough information to retrace our steps in O.mC n/ time.
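The two-row idea translates into a few lines of Python; the sketch below keeps only the previous and current rows, and swaps the inputs so the row length is min(m, n) + 1. It returns only the length, which is exactly the limitation discussed above.

def lcs_length_two_rows(X, Y):
    if len(Y) > len(X):
        X, Y = Y, X                        # make Y the shorter sequence
    prev = [0] * (len(Y) + 1)              # row i-1 of the c table
    for x in X:
        cur = [0]                          # row i, built left to right
        for j, y in enumerate(Y, start=1):
            if x == y:
                cur.append(prev[j - 1] + 1)
            else:
                cur.append(max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))   # 4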

Exercises

15.4-1 Determine an LCS of ⟨1, 0, 0, 1, 0, 1, 0, 1⟩ and ⟨0, 1, 0, 1, 1, 0, 1, 1, 0⟩.

15.4-2 Give pseudocode to reconstruct an LCS from the completed c table and the original sequences X = ⟨x1, x2, ..., xm⟩ and Y = ⟨y1, y2, ..., yn⟩ in O(m + n) time, without using the b table.

15.4-3 Give a memoized version of LCS-LENGTH that runs in O.mn/ time.

15.4-4 Show how to compute the length of an LCS using only 2 · min(m, n) entries in the c table plus O(1) additional space. Then show how to do the same thing, but using min(m, n) entries plus O(1) additional space.


15.4-5 Give an O.n2/-time algorithm to find the longest monotonically increasing subse- quence of a sequence of n numbers.

15.4-6 ? Give an O.n lg n/-time algorithm to find the longest monotonically increasing sub- sequence of a sequence of n numbers. (Hint: Observe that the last element of a candidate subsequence of length i is at least as large as the last element of a can- didate subsequence of length i � 1. Maintain candidate subsequences by linking them through the input sequence.)

15.5 Optimal binary search trees

Suppose that we are designing a program to translate text from English to French. For each occurrence of each English word in the text, we need to look up its French equivalent. We could perform these lookup operations by building a binary search tree with n English words as keys and their French equivalents as satellite data. Because we will search the tree for each individual word in the text, we want the total time spent searching to be as low as possible. We could ensure an O.lg n/ search time per occurrence by using a red-black tree or any other balanced binary search tree. Words appear with different frequencies, however, and a frequently used word such as the may appear far from the root while a rarely used word such as machicolation appears near the root. Such an organization would slow down the translation, since the number of nodes visited when searching for a key in a binary search tree equals one plus the depth of the node containing the key. We want words that occur frequently in the text to be placed nearer the root.6 Moreover, some words in the text might have no French translation,7 and such words would not appear in the binary search tree at all. How do we organize a binary search tree so as to minimize the number of nodes visited in all searches, given that we know how often each word occurs?

What we need is known as an optimal binary search tree. Formally, we are given a sequence K = ⟨k1, k2, ..., kn⟩ of n distinct keys in sorted order (so that k1 < k2 < ⋯ < kn), and we wish to build a binary search tree from these keys.

15-8 Image compression by seam carving
b. Suppose now that along with each pixel A[i, j], we have calculated a real-valued disruption measure d[i, j], indicating how disruptive it would be to remove pixel A[i, j]. Intuitively, the lower a pixel's disruption measure, the more similar the pixel is to its neighbors. Suppose further that we define the disruption measure of a seam to be the sum of the disruption measures of its pixels.


Give an algorithm to find a seam with the lowest disruption measure. How efficient is your algorithm?

15-9 Breaking a string A certain string-processing language allows a programmer to break a string into two pieces. Because this operation copies the string, it costs n time units to break a string of n characters into two pieces. Suppose a programmer wants to break a string into many pieces. The order in which the breaks occur can affect the total amount of time used. For example, suppose that the programmer wants to break a 20-character string after characters 2, 8, and 10 (numbering the characters in ascending order from the left-hand end, starting from 1). If she programs the breaks to occur in left-to-right order, then the first break costs 20 time units, the second break costs 18 time units (breaking the string from characters 3 to 20 at character 8), and the third break costs 12 time units, totaling 50 time units. If she programs the breaks to occur in right-to-left order, however, then the first break costs 20 time units, the second break costs 10 time units, and the third break costs 8 time units, totaling 38 time units. In yet another order, she could break first at 8 (costing 20), then break the left piece at 2 (costing 8), and finally the right piece at 10 (costing 12), for a total cost of 40.

Design an algorithm that, given the numbers of characters after which to break, determines a least-cost way to sequence those breaks. More formally, given a string S with n characters and an array LŒ1 : : m� containing the break points, com- pute the lowest cost for a sequence of breaks, along with a sequence of breaks that achieves this cost.
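Before designing the algorithm, it can help to verify the three costs quoted in the example mechanically. The following sketch (our own helper, not part of the problem statement) charges each break the length of the piece it splits and reproduces the totals 50, 38, and 40.

def order_cost(length, breaks):
    # Apply the breaks in the given order; each break costs the size of the
    # piece currently being broken.  Pieces are half-open character ranges.
    pieces = [(0, length)]
    total = 0
    for b in breaks:
        for idx, (lo, hi) in enumerate(pieces):
            if lo < b < hi:
                total += hi - lo
                pieces[idx:idx + 1] = [(lo, b), (b, hi)]
                break
    return total

print(order_cost(20, [2, 8, 10]))   # left to right: 50
print(order_cost(20, [10, 8, 2]))   # right to left: 38
print(order_cost(20, [8, 2, 10]))   # break at 8 first: 40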

15-10 Planning an investment strategy Your knowledge of algorithms helps you obtain an exciting job with the Acme Computer Company, along with a $10,000 signing bonus. You decide to invest this money with the goal of maximizing your return at the end of 10 years. You decide to use the Amalgamated Investment Company to manage your investments. Amalgamated Investments requires you to observe the following rules. It offers n different investments, numbered 1 through n. In each year j , investment i provides a return rate of rij . In other words, if you invest d dollars in investment i in year j , then at the end of year j , you have drij dollars. The return rates are guaranteed, that is, you are given all the return rates for the next 10 years for each investment. You make investment decisions only once per year. At the end of each year, you can leave the money made in the previous year in the same investments, or you can shift money to other investments, by either shifting money between existing investments or moving money to a new investement. If you do not move your money between two consecutive years, you pay a fee of f1 dollars, whereas if you switch your money, you pay a fee of f2 dollars, where f2 > f1.


a. The problem, as stated, allows you to invest your money in multiple investments in each year. Prove that there exists an optimal investment strategy that, in each year, puts all the money into a single investment. (Recall that an optimal investment strategy maximizes the amount of money after 10 years and is not concerned with any other objectives, such as minimizing risk.)

b. Prove that the problem of planning your optimal investment strategy exhibits optimal substructure.

c. Design an algorithm that plans your optimal investment strategy. What is the running time of your algorithm?

d. Suppose that Amalgamated Investments imposed the additional restriction that, at any point, you can have no more than $15,000 in any one investment. Show that the problem of maximizing your income at the end of 10 years no longer exhibits optimal substructure.

15-11 Inventory planning
The Rinky Dink Company makes machines that resurface ice rinks. The demand for such products varies from month to month, and so the company needs to develop a strategy to plan its manufacturing given the fluctuating, but predictable, demand. The company wishes to design a plan for the next n months. For each month i, the company knows the demand d_i, that is, the number of machines that it will sell. Let D = Σ_{i=1}^{n} d_i be the total demand over the next n months. The company keeps a full-time staff who provide labor to manufacture up to m machines per month. If the company needs to make more than m machines in a given month, it can hire additional, part-time labor, at a cost that works out to c dollars per machine. Furthermore, if, at the end of a month, the company is holding any unsold machines, it must pay inventory costs. The cost for holding j machines is given as a function h(j) for j = 1, 2, ..., D, where h(j) ≥ 0 for 1 ≤ j ≤ D and h(j) ≤ h(j + 1) for 1 ≤ j ≤ D − 1.

Give an algorithm that calculates a plan for the company that minimizes its costs while fulfilling all the demand. The running time should be polynomial in n and D.

15-12 Signing free-agent baseball players Suppose that you are the general manager for a major-league baseball team. During the off-season, you need to sign some free-agent players for your team. The team owner has given you a budget of $X to spend on free agents. You are allowed to spend less than $X altogether, but the owner will fire you if you spend any more than $X .


You are considering N different positions, and for each position, P free-agent players who play that position are available.8 Because you do not want to overload your roster with too many players at any position, for each position you may sign at most one free agent who plays that position. (If you do not sign any players at a particular position, then you plan to stick with the players you already have at that position.)

To determine how valuable a player is going to be, you decide to use a sabermet- ric statistic9 known as “VORP,” or “value over replacement player.” A player with a higher VORP is more valuable than a player with a lower VORP. A player with a higher VORP is not necessarily more expensive to sign than a player with a lower VORP, because factors other than a player’s value determine how much it costs to sign him.

For each available free-agent player, you have three pieces of information:

� the player’s position,

� the amount of money it will cost to sign the player, and

� the player’s VORP.

Devise an algorithm that maximizes the total VORP of the players you sign while spending no more than $X altogether. You may assume that each player signs for a multiple of $100,000. Your algorithm should output the total VORP of the players you sign, the total amount of money you spend, and a list of which players you sign. Analyze the running time and space requirement of your algorithm.

Chapter notes

R. Bellman began the systematic study of dynamic programming in 1955. The word “programming,” both here and in linear programming, refers to using a tab- ular solution method. Although optimization techniques incorporating elements of dynamic programming were known earlier, Bellman provided the area with a solid mathematical basis [37].

8Although there are nine positions on a baseball team, N is not necessarily equal to 9 because some general managers have particular ways of thinking about positions. For example, a general manager might consider right-handed pitchers and left-handed pitchers to be separate "positions," as well as starting pitchers, long relief pitchers (relief pitchers who can pitch several innings), and short relief pitchers (relief pitchers who normally pitch at most only one inning).

9Sabermetrics is the application of statistical analysis to baseball records. It provides several ways to compare the relative values of individual players.


Galil and Park [125] classify dynamic-programming algorithms according to the size of the table and the number of other table entries each entry depends on. They call a dynamic-programming algorithm tD=eD if its table size is O.nt / and each entry depends on O.ne/ other entries. For example, the matrix-chain multiplication algorithm in Section 15.2 would be 2D=1D, and the longest-common-subsequence algorithm in Section 15.4 would be 2D=0D.

Hu and Shing [182, 183] give an O.n lg n/-time algorithm for the matrix-chain multiplication problem.

The O.mn/-time algorithm for the longest-common-subsequence problem ap- pears to be a folk algorithm. Knuth [70] posed the question of whether subquadratic algorithms for the LCS problem exist. Masek and Paterson [244] answered this question in the affirmative by giving an algorithm that runs in O.mn= lg n/ time, where n � m and the sequences are drawn from a set of bounded size. For the special case in which no element appears more than once in an input sequence, Szymanski [326] shows how to solve the problem in O..nCm/ lg.nCm// time. Many of these results extend to the problem of computing string edit distances (Problem 15-5).

An early paper on variable-length binary encodings by Gilbert and Moore [133] had applications to constructing optimal binary search trees for the case in which all probabilities pi are 0; this paper contains an O.n3/-time algorithm. Aho, Hopcroft, and Ullman [5] present the algorithm from Section 15.5. Exercise 15.5-4 is due to Knuth [212]. Hu and Tucker [184] devised an algorithm for the case in which all probabilities pi are 0 that uses O.n2/ time and O.n/ space; subsequently, Knuth [211] reduced the time to O.n lg n/.

Problem 15-8 is due to Avidan and Shamir [27], who have posted on the Web a wonderful video illustrating this image-compression technique.

16 Greedy Algorithms

Algorithms for optimization problems typically go through a sequence of steps, with a set of choices at each step. For many optimization problems, using dynamic programming to determine the best choices is overkill; simpler, more efficient al- gorithms will do. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. This chapter explores optimization prob- lems for which greedy algorithms provide optimal solutions. Before reading this chapter, you should read about dynamic programming in Chapter 15, particularly Section 15.3.

Greedy algorithms do not always yield optimal solutions, but for many problems they do. We shall first examine, in Section 16.1, a simple but nontrivial problem, the activity-selection problem, for which a greedy algorithm efficiently computes an optimal solution. We shall arrive at the greedy algorithm by first consider- ing a dynamic-programming approach and then showing that we can always make greedy choices to arrive at an optimal solution. Section 16.2 reviews the basic elements of the greedy approach, giving a direct approach for proving greedy al- gorithms correct. Section 16.3 presents an important application of greedy tech- niques: designing data-compression (Huffman) codes. In Section 16.4, we inves- tigate some of the theory underlying combinatorial structures called “matroids,” for which a greedy algorithm always produces an optimal solution. Finally, Sec- tion 16.5 applies matroids to solve a problem of scheduling unit-time tasks with deadlines and penalties.

The greedy method is quite powerful and works well for a wide range of prob- lems. Later chapters will present many algorithms that we can view as applica- tions of the greedy method, including minimum-spanning-tree algorithms (Chap- ter 23), Dijkstra’s algorithm for shortest paths from a single source (Chapter 24), and Chvátal’s greedy set-covering heuristic (Chapter 35). Minimum-spanning-tree algorithms furnish a classic example of the greedy method. Although you can read


this chapter and Chapter 23 independently of each other, you might find it useful to read them together.

16.1 An activity-selection problem

Our first example is the problem of scheduling several competing activities that require exclusive use of a common resource, with a goal of selecting a maximum-size set of mutually compatible activities. Suppose we have a set S = {a1, a2, ..., an} of n proposed activities that wish to use a resource, such as a lecture hall, which can serve only one activity at a time. Each activity a_i has a start time s_i and a finish time f_i, where 0 ≤ s_i < f_i < ∞.

If we could find a set A′_kj of mutually compatible activities in S_kj with |A′_kj| > |A_kj|, then we could use A′_kj, rather than A_kj, in a solution to the subproblem for S_ij. We would have constructed a set of |A_ik| + |A′_kj| + 1 > |A_ik| + |A_kj| + 1 = |A_ij| mutually compatible activities, which contradicts the assumption that A_ij is an optimal solution. A symmetric argument applies to the activities in S_ik.

This way of characterizing optimal substructure suggests that we might solve the activity-selection problem by dynamic programming. If we denote the size of an optimal solution for the set Sij by cŒi; j �, then we would have the recurrence

c[i, j] = c[i, k] + c[k, j] + 1 .

Of course, if we did not know that an optimal solution for the set S_ij includes activity a_k, we would have to examine all activities in S_ij to find which one to choose, so that

c[i, j] = 0                                            if S_ij = ∅ ,
c[i, j] = max_{a_k ∈ S_ij} { c[i, k] + c[k, j] + 1 }   if S_ij ≠ ∅ .     (16.2)

We could then develop a recursive algorithm and memoize it, or we could work bottom-up and fill in table entries as we go along. But we would be overlooking another important characteristic of the activity-selection problem that we can use to great advantage.


Making the greedy choice

What if we could choose an activity to add to our optimal solution without having to first solve all the subproblems? That could save us from having to consider all the choices inherent in recurrence (16.2). In fact, for the activity-selection problem, we need consider only one choice: the greedy choice.

What do we mean by the greedy choice for the activity-selection problem? Intu- ition suggests that we should choose an activity that leaves the resource available for as many other activities as possible. Now, of the activities we end up choos- ing, one of them must be the first one to finish. Our intuition tells us, therefore, to choose the activity in S with the earliest finish time, since that would leave the resource available for as many of the activities that follow it as possible. (If more than one activity in S has the earliest finish time, then we can choose any such activity.) In other words, since the activities are sorted in monotonically increasing order by finish time, the greedy choice is activity a1. Choosing the first activity to finish is not the only way to think of making a greedy choice for this problem; Exercise 16.1-3 asks you to explore other possibilities.
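The greedy choice can be phrased iteratively in a few lines; the Python sketch below is one such phrasing, not the book's procedure. It assumes the start times s and finish times f are already sorted by monotonically increasing finish time, and the sample data are the eleven activities of this section's running example (restated here as an assumption, since the table does not appear in this excerpt).

def greedy_activity_selector(s, f):
    n = len(s)
    selected = [0]             # always take the first activity to finish (index 0)
    k = 0                      # index of the most recently selected activity
    for m in range(1, n):
        if s[m] >= f[k]:       # a_m starts no earlier than a_k finishes: compatible
            selected.append(m)
            k = m
    return selected

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selector(s, f))   # [0, 3, 7, 10], i.e. {a1, a4, a8, a11}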

If we make the greedy choice, we have only one remaining subproblem to solve: finding activities that start after a1 finishes. Why don't we have to consider activities that finish before a1 starts? We have that s1 < f1, and f1 is the earliest finish time of any activity, so no activity can have a finish time of s1 or earlier.

Otherwise, the loop terminates because m > n, in which case we have examined all activities in S_k without finding one that is compatible with a_k. In this case, S_k = ∅, and so the procedure returns ∅ in line 6.

Assuming that the activities have already been sorted by finish times, the running time of the call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n) is Θ(n), which we can see as follows. Over all recursive calls, each activity is examined exactly once in the while loop test of line 2. In particular, activity a_i is examined in the last call made in which k < i.

Suppose that G_A = (V, A) and G_B = (V, B) are forests of G and that |B| > |A|. That is, A and B are acyclic sets of edges, and B contains more edges than A does.

We claim that a forest F = (V_F, E_F) contains exactly |V_F| − |E_F| trees. To see why, suppose that F consists of t trees, where the i-th tree contains v_i vertices and e_i edges. Then, we have

|E_F| = Σ_{i=1}^{t} e_i
      = Σ_{i=1}^{t} (v_i − 1)     (by Theorem B.2)
      = Σ_{i=1}^{t} v_i − t
      = |V_F| − t ,

which implies that t = |V_F| − |E_F|. Thus, forest G_A contains |V| − |A| trees, and forest G_B contains |V| − |B| trees.

Since forest GB has fewer trees than forest GA does, forest GB must contain some tree T whose vertices are in two different trees in forest GA. Moreover, since T is connected, it must contain an edge .u; �/ such that vertices u and � are in different trees in forest GA. Since the edge .u; �/ connects vertices in two different trees in forest GA, we can add the edge .u; �/ to forest GA without creating a cycle. Therefore, MG satisfies the exchange property, completing the proof that MG is a matroid.

Given a matroid M D .S; � /, we call an element x … A an extension of A 2 � if we can add x to A while preserving independence; that is, x is an extension of A if A [ fxg 2 � . As an example, consider a graphic matroid MG . If A is an independent set of edges, then edge e is an extension of A if and only if e is not in A and the addition of e to A does not create a cycle.

If A is an independent subset in a matroid M , we say that A is maximal if it has no extensions. That is, A is maximal if it is not contained in any larger independent subset of M . The following property is often useful.


Theorem 16.6 All maximal independent subsets in a matroid have the same size.

Proof Suppose to the contrary that A is a maximal independent subset of M and there exists another larger maximal independent subset B of M . Then, the exchange property implies that for some x 2 B � A, we can extend A to a larger independent set A[ fxg, contradicting the assumption that A is maximal.

As an illustration of this theorem, consider a graphic matroid MG for a con- nected, undirected graph G. Every maximal independent subset of MG must be a free tree with exactly jV j � 1 edges that connects all the vertices of G. Such a tree is called a spanning tree of G.

We say that a matroid M D .S; � / is weighted if it is associated with a weight function w that assigns a strictly positive weight w.x/ to each element x 2 S . The weight function w extends to subsets of S by summation:

w(A) = Σ_{x∈A} w(x)

for any A � S . For example, if we let w.e/ denote the weight of an edge e in a graphic matroid MG , then w.A/ is the total weight of the edges in edge set A.

Greedy algorithms on a weighted matroid

Many problems for which a greedy approach provides optimal solutions can be for- mulated in terms of finding a maximum-weight independent subset in a weighted matroid. That is, we are given a weighted matroid M D .S; � /, and we wish to find an independent set A 2 � such that w.A/ is maximized. We call such a sub- set that is independent and has maximum possible weight an optimal subset of the matroid. Because the weight w.x/ of any element x 2 S is positive, an optimal subset is always a maximal independent subset—it always helps to make A as large as possible.

For example, in the minimum-spanning-tree problem, we are given a connected undirected graph G = (V, E) and a length function w such that w(e) is the (positive) length of edge e. (We use the term "length" here to refer to the original edge weights for the graph, reserving the term "weight" to refer to the weights in the associated matroid.) We wish to find a subset of the edges that connects all of the vertices together and has minimum total length. To view this as a problem of finding an optimal subset of a matroid, consider the weighted matroid M_G with weight function w′, where w′(e) = w₀ − w(e) and w₀ is larger than the maximum length of any edge. In this weighted matroid, all weights are positive and an optimal subset is a spanning tree of minimum total length in the original graph. More specifically, each maximal independent subset A corresponds to a spanning tree


with |V| − 1 edges, and since

w′(A) = Σ_{e∈A} w′(e)
      = Σ_{e∈A} (w₀ − w(e))
      = (|V| − 1) w₀ − Σ_{e∈A} w(e)
      = (|V| − 1) w₀ − w(A)

for any maximal independent subset A, an independent subset that maximizes the quantity w′(A) must minimize w(A). Thus, any algorithm that can find an optimal subset A in an arbitrary matroid can solve the minimum-spanning-tree problem.

Chapter 23 gives algorithms for the minimum-spanning-tree problem, but here we give a greedy algorithm that works for any weighted matroid. The algorithm takes as input a weighted matroid M = (S, ℐ) with an associated positive weight function w, and it returns an optimal subset A. In our pseudocode, we denote the components of M by M.S and M.ℐ and the weight function by w. The algorithm is greedy because it considers in turn each element x ∈ S, in order of monotonically decreasing weight, and immediately adds it to the set A being accumulated if A ∪ {x} is independent.

GREEDY(M, w)

1  A = ∅
2  sort M.S into monotonically decreasing order by weight w
3  for each x ∈ M.S, taken in monotonically decreasing order by weight w(x)
4      if A ∪ {x} ∈ M.ℐ
5          A = A ∪ {x}
6  return A

Line 4 checks whether adding each element x to A would maintain A as an inde- pendent set. If A would remain independent, then line 5 adds x to A. Otherwise, x is discarded. Since the empty set is independent, and since each iteration of the for loop maintains A’s independence, the subset A is always independent, by induc- tion. Therefore, GREEDY always returns an independent subset A. We shall see in a moment that A is a subset of maximum possible weight, so that A is an optimal subset.

The running time of GREEDY is easy to analyze. Let n denote jS j. The sorting phase of GREEDY takes time O.n lg n/. Line 4 executes exactly n times, once for each element of S . Each execution of line 4 requires a check on whether or not the set A [ fxg is independent. If each such check takes time O.f .n//, the entire algorithm runs in time O.n lg nC nf .n//.
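The pseudocode above translates almost directly into Python once the matroid is represented abstractly by its ground set and an independence oracle; the oracle corresponds to the line-4 test, and its cost is the f(n) in the running-time bound. The sketch and its toy example (the size-at-most-k matroid of Exercise 16.4-1) are illustrative, not part of the book's text.

def greedy_matroid(ground_set, weight, is_independent):
    A = set()
    for x in sorted(ground_set, key=weight, reverse=True):   # decreasing weight
        if is_independent(A | {x}):                          # the line-4 check
            A.add(x)
    return A

# Toy example: independent = "has at most 2 elements".
weights = {'a': 7, 'b': 5, 'c': 9, 'd': 2}
best = greedy_matroid(weights, weights.get, lambda A: len(A) <= 2)
print(best)   # the two heaviest elements, 'c' and 'a'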


We now prove that GREEDY returns an optimal subset.

Lemma 16.7 (Matroids exhibit the greedy-choice property) Suppose that M D .S; � / is a weighted matroid with weight function w and that S is sorted into monotonically decreasing order by weight. Let x be the first element of S such that fxg is independent, if any such x exists. If x exists, then there exists an optimal subset A of S that contains x.

Proof If no such x exists, then the only independent subset is the empty set and the lemma is vacuously true. Otherwise, let B be any nonempty optimal subset. Assume that x … B; otherwise, letting A D B gives an optimal subset of S that contains x.

No element of B has weight greater than w.x/. To see why, observe that y 2 B implies that fyg is independent, since B 2 � and � is hereditary. Our choice of x therefore ensures that w.x/ � w.y/ for any y 2 B .

Construct the set A as follows. Begin with A = {x}. By the choice of x, set A is independent. Using the exchange property, repeatedly find a new element of B that we can add to A until |A| = |B|, while preserving the independence of A. At that point, A and B are the same except that A has x and B has some other element y. That is, A = B − {y} ∪ {x} for some y ∈ B, and so

w(A) = w(B) − w(y) + w(x)
     ≥ w(B) .

Because set B is optimal, set A, which contains x, must also be optimal.

We next show that if an element is not an option initially, then it cannot be an option later.

Lemma 16.8 Let M D .S; � / be any matroid. If x is an element of S that is an extension of some independent subset A of S , then x is also an extension of ;.

Proof Since x is an extension of A, we have that A[fxg is independent. Since � is hereditary, fxg must be independent. Thus, x is an extension of ;.

Corollary 16.9 Let M D .S; � / be any matroid. If x is an element of S such that x is not an extension of ;, then x is not an extension of any independent subset A of S .

Proof This corollary is simply the contrapositive of Lemma 16.8.


Corollary 16.9 says that any element that cannot be used immediately can never be used. Therefore, GREEDY cannot make an error by passing over any initial elements in S that are not an extension of ;, since they can never be used.

Lemma 16.10 (Matroids exhibit the optimal-substructure property)
Let x be the first element of S chosen by GREEDY for the weighted matroid M = (S, ℐ). The remaining problem of finding a maximum-weight independent subset containing x reduces to finding a maximum-weight independent subset of the weighted matroid M′ = (S′, ℐ′), where

S′ = {y ∈ S : {x, y} ∈ ℐ} ,
ℐ′ = {B ⊆ S − {x} : B ∪ {x} ∈ ℐ} ,

and the weight function for M′ is the weight function for M, restricted to S′. (We call M′ the contraction of M by the element x.)

Proof If A is any maximum-weight independent subset of M containing x, then A0 D A � fxg is an independent subset of M 0. Conversely, any independent sub- set A0 of M 0 yields an independent subset A D A0 [ fxg of M . Since we have in both cases that w.A/ D w.A0/Cw.x/, a maximum-weight solution in M contain- ing x yields a maximum-weight solution in M 0, and vice versa.

Theorem 16.11 (Correctness of the greedy algorithm on matroids) If M D .S; � / is a weighted matroid with weight function w, then GREEDY.M; w/ returns an optimal subset.

Proof By Corollary 16.9, any elements that GREEDY passes over initially be- cause they are not extensions of ; can be forgotten about, since they can never be useful. Once GREEDY selects the first element x, Lemma 16.7 implies that the algorithm does not err by adding x to A, since there exists an optimal subset containing x. Finally, Lemma 16.10 implies that the remaining problem is one of finding an optimal subset in the matroid M 0 that is the contraction of M by x. After the procedure GREEDY sets A to fxg, we can interpret all of its remaining steps as acting in the matroid M 0 D .S 0; � 0/, because B is independent in M 0 if and only if B [fxg is independent in M , for all sets B 2 � 0. Thus, the subsequent operation of GREEDY will find a maximum-weight independent subset for M 0, and the overall operation of GREEDY will find a maximum-weight independent subset for M .


Exercises

16.4-1 Show that (S, ℐ_k) is a matroid, where S is any finite set and ℐ_k is the set of all subsets of S of size at most k, where k ≤ |S|.

16.4-2 ? Given an m × n matrix T over some field (such as the reals), show that (S, ℐ) is a matroid, where S is the set of columns of T and A ∈ ℐ if and only if the columns in A are linearly independent.

16.4-3 ? Show that if .S; � / is a matroid, then .S; � 0/ is a matroid, where

ℐ′ = {A′ : S − A′ contains some maximal A ∈ ℐ} .

That is, the maximal independent sets of .S; � 0/ are just the complements of the maximal independent sets of .S; � /.

16.4-4 ? Let S be a finite set and let S1, S2, ..., Sk be a partition of S into nonempty disjoint subsets. Define the structure (S, ℐ) by the condition that ℐ = {A : |A ∩ S_i| ≤ 1 for i = 1, 2, ..., k}. Show that (S, ℐ) is a matroid. That is, the set of all sets A that contain at most one member of each subset in the partition determines the independent sets of a matroid.

16.4-5 Show how to transform the weight function of a weighted matroid problem, where the desired optimal solution is a minimum-weight maximal independent subset, to make it a standard weighted-matroid problem. Argue carefully that your transfor- mation is correct.

? 16.5 A task-scheduling problem as a matroid

An interesting problem that we can solve using matroids is the problem of op- timally scheduling unit-time tasks on a single processor, where each task has a deadline, along with a penalty paid if the task misses its deadline. The problem looks complicated, but we can solve it in a surprisingly simple manner by casting it as a matroid and using a greedy algorithm.

A unit-time task is a job, such as a program to be run on a computer, that requires exactly one unit of time to complete. Given a finite set S of unit-time tasks, a


schedule for S is a permutation of S specifying the order in which to perform these tasks. The first task in the schedule begins at time 0 and finishes at time 1, the second task begins at time 1 and finishes at time 2, and so on.

The problem of scheduling unit-time tasks with deadlines and penalties for a single processor has the following inputs:

• a set S = {a1, a2, ..., an} of n unit-time tasks;

• a set of n integer deadlines d1, d2, ..., dn, such that each d_i satisfies 1 ≤ d_i ≤ n and task a_i is supposed to finish by time d_i; and

• a set of n nonnegative weights or penalties w1, w2, ..., wn, such that we incur a penalty of w_i if task a_i is not finished by time d_i, and we incur no penalty if a task finishes by its deadline.

We wish to find a schedule for S that minimizes the total penalty incurred for missed deadlines.

Consider a given schedule. We say that a task is late in this schedule if it finishes after its deadline. Otherwise, the task is early in the schedule. We can always trans- form an arbitrary schedule into early-first form, in which the early tasks precede the late tasks. To see why, note that if some early task ai follows some late task aj , then we can switch the positions of ai and aj , and ai will still be early and aj will still be late.

Furthermore, we claim that we can always transform an arbitrary schedule into canonical form, in which the early tasks precede the late tasks and we schedule the early tasks in order of monotonically increasing deadlines. To do so, we put the schedule into early-first form. Then, as long as there exist two early tasks a_i and a_j finishing at respective times k and k + 1 in the schedule such that d_j < d_i, we swap the positions of a_i and a_j. Since a_j was early before the swap, we have k + 1 ≤ d_j; because d_j < d_i, task a_i still finishes by its deadline when it moves to time k + 1, and a_j, which moves earlier, remains early as well.

The search for an optimal schedule thus reduces to finding a set A of tasks that we assign to be early in the optimal schedule. Having determined A, we can create the schedule itself by listing the elements of A in order of monotonically increasing deadlines and then listing the late tasks (that is, S − A) in any order. We say that a set A of tasks is independent if there exists a schedule for these tasks in which no tasks are late, and we let ℐ denote the set of all independent sets of tasks. For t = 0, 1, 2, ..., n, let N_t(A) denote the number of tasks in A whose deadline is t or earlier; note that N_0(A) = 0 for any set A.

Lemma 16.12
For any set of tasks A, the following statements are equivalent.
1. The set A is independent.
2. For t = 0, 1, 2, ..., n, we have N_t(A) ≤ t.
3. If the tasks in A are scheduled in order of monotonically increasing deadlines, then no task is late.

Proof If N_t(A) > t for some t, then there is no way to make a schedule with no late tasks for set A, because more than t tasks must finish before time t. Therefore, (1) implies (2). If (2) holds, then (3) must follow: there is no way to “get stuck” when scheduling the tasks in order of monotonically increasing deadlines, since (2) implies that the ith largest deadline is at least i. Finally, (3) trivially implies (1).

Using property 2 of Lemma 16.12, we can easily compute whether or not a given set of tasks is independent (see Exercise 16.5-2).
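A minimal Python sketch of that independence test (function name ours), assuming each unit-time task is represented only by its integer deadline between 1 and n; it checks property 2 of Lemma 16.12, namely N_t(A) ≤ t for every t.

def is_independent(deadlines, n):
    """Return True iff the tasks with the given deadlines can all be
    scheduled in slots 1..n with no task late (N_t(A) <= t for all t)."""
    count = [0] * (n + 1)        # count[t] = number of tasks with deadline exactly t
    for d in deadlines:
        count[d] += 1
    n_t = 0
    for t in range(n + 1):
        n_t += count[t]          # N_t(A): tasks whose deadline is t or earlier
        if n_t > t:
            return False
    return True

print(is_independent((1, 4, 3), 4))   # True
print(is_independent((1, 1, 3), 4))   # False: two tasks cannot both finish by time 1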

The problem of minimizing the sum of the penalties of the late tasks is the same as the problem of maximizing the sum of the penalties of the early tasks. The following theorem thus ensures that we can use the greedy algorithm to find an independent set A of tasks with the maximum total penalty.

Theorem 16.13 If S is a set of unit-time tasks with deadlines, and � is the set of all independent sets of tasks, then the corresponding system .S; � / is a matroid.

Proof Every subset of an independent set of tasks is certainly independent. To prove the exchange property, suppose that B and A are independent sets of tasks and that |B| > |A|. Let k be the largest t such that N_t(B) ≤ N_t(A). (Such a value of t exists, since N_0(A) = N_0(B) = 0.) Since N_n(B) = |B| and N_n(A) = |A|, but |B| > |A|, we must have that k < n and that N_j(B) > N_j(A) for all j in the range k + 1 ≤ j ≤ n. Therefore, B contains more tasks with deadline k + 1 than A does. Let a_i be a task in B − A with deadline k + 1. Let A′ = A ∪ {a_i}.

We now show that A′ must be independent by using property 2 of Lemma 16.12. For 0 ≤ t ≤ k, we have N_t(A′) = N_t(A) ≤ t, since A is independent. For k < t ≤ n, we have N_t(A′) ≤ N_t(B) ≤ t, since B is independent. Therefore, A′ is independent, which completes the proof that (S, ℐ) is a matroid.

Problems

16-1 Coin changing
Consider the problem of making change for n cents using the fewest number of coins. Assume that each coin's value is an integer.

a. Describe a greedy algorithm to make change consisting of quarters, dimes, nickels, and pennies. Prove that your algorithm yields an optimal solution.

b. Suppose that the available coins are in denominations that are powers of c, that is, the denominations are c^0, c^1, ..., c^k for some integers c > 1 and k ≥ 1. Show that the greedy algorithm always yields an optimal solution.

c. Give a set of coin denominations for which the greedy algorithm does not yield an optimal solution. Your set should include a penny so that there is a solution for every value of n.

d. Give an O(nk)-time algorithm that makes change for any set of k different coin denominations, assuming that one of the coins is a penny.

16-2 Scheduling to minimize average completion time
Suppose you are given a set S = {a_1, a_2, ..., a_n} of tasks, where task a_i requires p_i units of processing time to complete, once it has started. You have one computer on which to run these tasks, and the computer can run only one task at a time. Let c_i be the completion time of task a_i, that is, the time at which task a_i completes processing. Your goal is to minimize the average completion time, that is, to minimize (1/n) Σ_{i=1}^{n} c_i. For example, suppose there are two tasks, a_1 and a_2, with p_1 = 3 and p_2 = 5, and consider the schedule in which a_2 runs first, followed by a_1. Then c_2 = 5, c_1 = 8, and the average completion time is (5 + 8)/2 = 6.5. If task a_1 runs first, however, then c_1 = 3, c_2 = 8, and the average completion time is (3 + 8)/2 = 5.5.

a. Give an algorithm that schedules the tasks so as to minimize the average completion time. Each task must run non-preemptively, that is, once task a_i starts, it must run continuously for p_i units of time. Prove that your algorithm minimizes the average completion time, and state the running time of your algorithm.

b. Suppose now that the tasks are not all available at once. That is, each task cannot start until its release time ri . Suppose also that we allow preemption, so that a task can be suspended and restarted at a later time. For example, a task ai with processing time pi D 6 and release time ri D 1 might start running at time 1 and be preempted at time 4. It might then resume at time 10 but be preempted at time 11, and it might finally resume at time 13 and complete at time 15. Task ai has run for a total of 6 time units, but its running time has been divided into three pieces. In this scenario, ai ’s completion time is 15. Give an algorithm that schedules the tasks so as to minimize the average completion time in this new scenario. Prove that your algorithm minimizes the average completion time, and state the running time of your algorithm.


16-3 Acyclic subgraphs
a. The incidence matrix for an undirected graph G = (V, E) is a |V| × |E| matrix M such that M_{ve} = 1 if edge e is incident on vertex v, and M_{ve} = 0 otherwise. Argue that a set of columns of M is linearly independent over the field of integers modulo 2 if and only if the corresponding set of edges is acyclic. Then, use the result of Exercise 16.4-2 to provide an alternate proof that (E, ℐ) of part (a) is a matroid.

b. Suppose that we associate a nonnegative weight w(e) with each edge in an undirected graph G = (V, E). Give an efficient algorithm to find an acyclic subset of E of maximum total weight.

c. Let G = (V, E) be an arbitrary directed graph, and let (E, ℐ) be defined so that A ∈ ℐ if and only if A does not contain any directed cycles. Give an example of a directed graph G such that the associated system (E, ℐ) is not a matroid. Specify which defining condition for a matroid fails to hold.

d. The incidence matrix for a directed graph G = (V, E) with no self-loops is a |V| × |E| matrix M such that M_{ve} = −1 if edge e leaves vertex v, M_{ve} = 1 if edge e enters vertex v, and M_{ve} = 0 otherwise. Argue that if a set of columns of M is linearly independent, then the corresponding set of edges does not contain a directed cycle.

e. Exercise 16.4-2 tells us that the set of linearly independent sets of columns of any matrix M forms a matroid. Explain carefully why the results of parts (c) and (d) are not contradictory. How can there fail to be a perfect correspondence between the notion of a set of edges being acyclic and the notion of the associated set of columns of the incidence matrix being linearly independent?

16-4 Scheduling variations Consider the following algorithm for the problem from Section 16.5 of scheduling unit-time tasks with deadlines and penalties. Let all n time slots be initially empty, where time slot i is the unit-length slot of time that finishes at time i . We consider the tasks in order of monotonically decreasing penalty. When considering task aj , if there exists a time slot at or before aj ’s deadline dj that is still empty, assign aj to the latest such slot, filling it. If there is no such slot, assign task aj to the latest of the as yet unfilled slots.

a. Argue that this algorithm always gives an optimal answer.

b. Use the fast disjoint-set forest presented in Section 21.3 to implement the algo- rithm efficiently. Assume that the set of input tasks has already been sorted into


monotonically decreasing order by penalty. Analyze the running time of your implementation.

16-5 Off-line caching Modern computers use a cache to store a small amount of data in a fast memory. Even though a program may access large amounts of data, by storing a small subset of the main memory in the cache—a small but faster memory—overall access time can greatly decrease. When a computer program executes, it makes a sequence hr1; r2; : : : ; rni of n memory requests, where each request is for a particular data element. For example, a program that accesses 4 distinct elements fa; b; c; dg might make the sequence of requests hd; b; d; b; d; a; c; d; b; a; c; bi. Let k be the size of the cache. When the cache contains k elements and the program requests the .k C 1/st element, the system must decide, for this and each subsequent request, which k elements to keep in the cache. More precisely, for each request ri , the cache-management algorithm checks whether element ri is already in the cache. If it is, then we have a cache hit; otherwise, we have a cache miss. Upon a cache miss, the system retrieves ri from the main memory, and the cache-management algorithm must decide whether to keep ri in the cache. If it decides to keep ri and the cache already holds k elements, then it must evict one element to make room for ri . The cache-management algorithm evicts data with the goal of minimizing the number of cache misses over the entire sequence of requests.

Typically, caching is an on-line problem. That is, we have to make decisions about which data to keep in the cache without knowing the future requests. Here, however, we consider the off-line version of this problem, in which we are given in advance the entire sequence of n requests and the cache size k, and we wish to minimize the total number of cache misses.

We can solve this off-line problem by a greedy strategy called furthest-in-future, which chooses to evict the item in the cache whose next access in the request sequence comes furthest in the future.
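A minimal Python sketch of the furthest-in-future rule under the stated off-line assumption that the whole request sequence is known in advance; it simulates the cache and returns only the number of misses, and the function name is ours.

def furthest_in_future(requests, k):
    """Simulate a cache holding at most k elements on the given request
    sequence, evicting the cached element whose next request lies furthest
    in the future (or never occurs again).  Returns the number of misses."""
    cache, misses = set(), 0
    for i, r in enumerate(requests):
        if r in cache:
            continue                       # cache hit
        misses += 1                        # cache miss
        if len(cache) == k:
            def next_use(x):
                later = (j for j in range(i + 1, len(requests)) if requests[j] == x)
                return next(later, float("inf"))
            cache.remove(max(cache, key=next_use))
        cache.add(r)
    return misses

print(furthest_in_future(list("dbdbdacdbacb"), 3))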

a. Write pseudocode for a cache manager that uses the furthest-in-future strategy. The input should be a sequence hr1; r2; : : : ; rni of requests and a cache size k, and the output should be a sequence of decisions about which data element (if any) to evict upon each request. What is the running time of your algorithm?

b. Show that the off-line caching problem exhibits optimal substructure.

c. Prove that furthest-in-future produces the minimum possible number of cache misses.


Chapter notes

Much more material on greedy algorithms and matroids can be found in Lawler [224] and Papadimitriou and Steiglitz [271].

The greedy algorithm first appeared in the combinatorial optimization literature in a 1971 article by Edmonds [101], though the theory of matroids dates back to a 1935 article by Whitney [355].

Our proof of the correctness of the greedy algorithm for the activity-selection problem is based on that of Gavril [131]. The task-scheduling problem is studied in Lawler [224]; Horowitz, Sahni, and Rajasekaran [181]; and Brassard and Bratley [54].

Huffman codes were invented in 1952 [185]; Lelewer and Hirschberg [231] sur- veys data-compression techniques known as of 1987.

An extension of matroid theory to greedoid theory was pioneered by Korte and Lovász [216, 217, 218, 219], who greatly generalize the theory presented here.

17 Amortized Analysis

In an amortized analysis, we average the time required to perform a sequence of data-structure operations over all the operations performed. With amortized analy- sis, we can show that the average cost of an operation is small, if we average over a sequence of operations, even though a single operation within the sequence might be expensive. Amortized analysis differs from average-case analysis in that prob- ability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.

The first three sections of this chapter cover the three most common techniques used in amortized analysis. Section 17.1 starts with aggregate analysis, in which we determine an upper bound T .n/ on the total cost of a sequence of n operations. The average cost per operation is then T .n/=n. We take the average cost as the amortized cost of each operation, so that all operations have the same amortized cost.

Section 17.2 covers the accounting method, in which we determine an amortized cost of each operation. When there is more than one type of operation, each type of operation may have a different amortized cost. The accounting method overcharges some operations early in the sequence, storing the overcharge as “prepaid credit” on specific objects in the data structure. Later in the sequence, the credit pays for operations that are charged less than they actually cost.

Section 17.3 discusses the potential method, which is like the accounting method in that we determine the amortized cost of each operation and may overcharge op- erations early on to compensate for undercharges later. The potential method main- tains the credit as the “potential energy” of the data structure as a whole instead of associating the credit with individual objects within the data structure.

We shall use two examples to examine these three methods. One is a stack with the additional operation MULTIPOP, which pops several objects at once. The other is a binary counter that counts up from 0 by means of the single operation INCREMENT.


While reading this chapter, bear in mind that the charges assigned during an amortized analysis are for analysis purposes only. They need not—and should not—appear in the code. If, for example, we assign a credit to an object x when using the accounting method, we have no need to assign an appropriate amount to some attribute, such as x:credit, in the code.

When we perform an amortized analysis, we often gain insight into a particular data structure, and this insight can help us optimize the design. In Section 17.4, for example, we shall use the potential method to analyze a dynamically expanding and contracting table.

17.1 Aggregate analysis

In aggregate analysis, we show that for all n, a sequence of n operations takes worst-case time T .n/ in total. In the worst case, the average cost, or amortized cost, per operation is therefore T .n/=n. Note that this amortized cost applies to each operation, even when there are several types of operations in the sequence. The other two methods we shall study in this chapter, the accounting method and the potential method, may assign different amortized costs to different types of operations.

Stack operations

In our first example of aggregate analysis, we analyze stacks that have been aug- mented with a new operation. Section 10.1 presented the two fundamental stack operations, each of which takes O.1/ time:

PUSH.S; x/ pushes object x onto stack S .

POP.S/ pops the top of stack S and returns the popped object. Calling POP on an empty stack generates an error.

Since each of these operations runs in O.1/ time, let us consider the cost of each to be 1. The total cost of a sequence of n PUSH and POP operations is therefore n, and the actual running time for n operations is therefore ‚.n/.

Now we add the stack operation MULTIPOP.S; k/, which removes the k top ob- jects of stack S , popping the entire stack if the stack contains fewer than k objects. Of course, we assume that k is positive; otherwise the MULTIPOP operation leaves the stack unchanged. In the following pseudocode, the operation STACK-EMPTY returns TRUE if there are no objects currently on the stack, and FALSE otherwise.



Figure 17.1 The action of MULTIPOP on a stack S , shown initially in (a). The top 4 objects are popped by MULTIPOP.S; 4/, whose result is shown in (b). The next operation is MULTIPOP.S; 7/, which empties the stack—shown in (c)—since there were fewer than 7 objects remaining.

MULTIPOP(S, k)
1  while not STACK-EMPTY(S) and k > 0
2      POP(S)
3      k = k − 1

Figure 17.1 shows an example of MULTIPOP. What is the running time of MULTIPOP.S; k/ on a stack of s objects? The

actual running time is linear in the number of POP operations actually executed, and thus we can analyze MULTIPOP in terms of the abstract costs of 1 each for PUSH and POP. The number of iterations of the while loop is the number min.s; k/ of objects popped off the stack. Each iteration of the loop makes one call to POP in line 2. Thus, the total cost of MULTIPOP is min.s; k/, and the actual running time is a linear function of this cost.

Let us analyze a sequence of n PUSH, POP, and MULTIPOP operations on an ini- tially empty stack. The worst-case cost of a MULTIPOP operation in the sequence is O.n/, since the stack size is at most n. The worst-case time of any stack opera- tion is therefore O.n/, and hence a sequence of n operations costs O.n2/, since we may have O.n/ MULTIPOP operations costing O.n/ each. Although this analysis is correct, the O.n2/ result, which we obtained by considering the worst-case cost of each operation individually, is not tight.

Using aggregate analysis, we can obtain a better upper bound that considers the entire sequence of n operations. In fact, although a single MULTIPOP operation can be expensive, any sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack can cost at most O.n/. Why? We can pop each object from the stack at most once for each time we have pushed it onto the stack. Therefore, the number of times that POP can be called on a nonempty stack, including calls within MULTIPOP, is at most the number of PUSH operations, which is at most n. For any value of n, any sequence of n PUSH, POP, and MULTIPOP operations takes a total of O.n/ time. The average cost of an operation is O.n/=n D O.1/. In aggregate


analysis, we assign the amortized cost of each operation to be the average cost. In this example, therefore, all three stack operations have an amortized cost of O.1/.

We emphasize again that although we have just shown that the average cost, and hence the running time, of a stack operation is O.1/, we did not use probabilistic reasoning. We actually showed a worst-case bound of O.n/ on a sequence of n operations. Dividing this total cost by n yielded the average cost per operation, or the amortized cost.
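The aggregate bound is easy to check experimentally. The following Python sketch (names ours) performs a random mix of the three stack operations, charges one unit per elementary push or pop, and verifies that the total cost never exceeds 2n, so the average cost per operation is O(1).

import random

def run_stack_ops(n):
    """Perform n random PUSH / POP / MULTIPOP operations on an initially
    empty stack and return the total number of elementary pushes and pops."""
    stack, cost = [], 0
    for _ in range(n):
        op = random.choice(("PUSH", "POP", "MULTIPOP"))
        if op == "PUSH":
            stack.append(0)
            cost += 1
        elif op == "POP" and stack:
            stack.pop()
            cost += 1
        elif op == "MULTIPOP":
            k = random.randint(1, 5)
            while stack and k > 0:        # costs min(s, k) elementary pops
                stack.pop()
                cost += 1
                k -= 1
    return cost

n = 10000
total = run_stack_ops(n)
assert total <= 2 * n          # each object is popped at most once per push
print(total, total / n)        # amortized cost per operation is O(1)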

Incrementing a binary counter

As another example of aggregate analysis, consider the problem of implementing a k-bit binary counter that counts upward from 0. We use an array A[0..k − 1] of bits, where A.length = k, as the counter. A binary number x that is stored in the counter has its lowest-order bit in A[0] and its highest-order bit in A[k − 1], so that x = Σ_{i=0}^{k−1} A[i] · 2^i. Initially, x = 0, and thus A[i] = 0 for i = 0, 1, ..., k − 1. To add 1 (modulo 2^k) to the value in the counter, we use the following procedure.

INCREMENT(A)
1  i = 0
2  while i < A.length and A[i] == 1
3      A[i] = 0
4      i = i + 1
5  if i < A.length
6      A[i] = 1

To analyze the counter with the potential method, let t_i be the number of bits reset to 0 by the ith INCREMENT operation, so that its actual cost is at most t_i + 1 (it resets t_i bits and sets at most one bit), and take the potential Φ(D_i) to be b_i, the number of 1s in the counter after the ith operation. If b_i = 0, then the ith operation resets all k bits, and so b_{i−1} = t_i = k. If b_i > 0, then b_i = b_{i−1} − t_i + 1. In either case, b_i ≤ b_{i−1} − t_i + 1, and the potential difference is

Φ(D_i) − Φ(D_{i−1}) ≤ (b_{i−1} − t_i + 1) − b_{i−1} = 1 − t_i .

The amortized cost is therefore

ĉ_i = c_i + Φ(D_i) − Φ(D_{i−1}) ≤ (t_i + 1) + (1 − t_i) = 2 .

If the counter starts at zero, then Φ(D_0) = 0. Since Φ(D_i) ≥ 0 for all i, the total amortized cost of a sequence of n INCREMENT operations is an upper bound on the total actual cost, and so the worst-case cost of n INCREMENT operations is O(n).
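A minimal Python sketch of the k-bit counter (names ours): INCREMENT returns the number of bits it actually flips, so we can check that the running total never exceeds twice the number of operations, matching the amortized cost of 2 derived above.

def increment(A):
    """INCREMENT on a bit array A, where A[0] is the lowest-order bit.
    Returns the number of bits flipped (the actual cost of the operation)."""
    i = 0
    flips = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0                 # reset a trailing 1
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1                 # set the lowest 0 bit, if the counter did not wrap
        flips += 1
    return flips

A = [0] * 16                     # a 16-bit counter starting at zero
total = 0
for op in range(1, 1001):
    total += increment(A)
    assert total <= 2 * op       # total actual cost is at most 2n
print(total)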

The potential method gives us an easy way to analyze the counter even when it does not start at zero. The counter starts with b0 1s, and after n INCREMENT


operations it has b_n 1s, where 0 ≤ b_0, b_n ≤ k. (Recall that k is the number of bits in the counter.) We can rewrite equation (17.3) as

Σ_{i=1}^{n} c_i = Σ_{i=1}^{n} ĉ_i − Φ(D_n) + Φ(D_0) .          (17.4)

We have ĉ_i ≤ 2 for all 1 ≤ i ≤ n. Since Φ(D_0) = b_0 and Φ(D_n) = b_n, the total actual cost of n INCREMENT operations is

Σ_{i=1}^{n} c_i ≤ Σ_{i=1}^{n} 2 − b_n + b_0
              = 2n − b_n + b_0 .

Note in particular that since b_0 ≤ k, as long as k = O(n), the total actual cost is O(n). In other words, if we execute at least n = Ω(k) INCREMENT operations, the total actual cost is O(n), no matter what initial value the counter contains.

Exercises

17.3-1 Suppose we have a potential function ˆ such that ˆ.Di / � ˆ.D0/ for all i , but ˆ.D0/ ¤ 0. Show that there exists a potential function ˆ0 such that ˆ0.D0/ D 0, ˆ0.Di / � 0 for all i � 1, and the amortized costs using ˆ0 are the same as the amortized costs using ˆ.

17.3-2 Redo Exercise 17.1-3 using a potential method of analysis.

17.3-3 Consider an ordinary binary min-heap data structure with n elements supporting the instructions INSERT and EXTRACT-MIN in O.lg n/ worst-case time. Give a potential function ˆ such that the amortized cost of INSERT is O.lg n/ and the amortized cost of EXTRACT-MIN is O.1/, and show that it works.

17.3-4 What is the total cost of executing n of the stack operations PUSH, POP, and MULTIPOP, assuming that the stack begins with s0 objects and finishes with sn objects?

17.3-5 Suppose that a counter begins at a number with b 1s in its binary representa- tion, rather than at 0. Show that the cost of performing n INCREMENT operations is O.n/ if n D �.b/. (Do not assume that b is constant.)


17.3-6 Show how to implement a queue with two ordinary stacks (Exercise 10.1-6) so that the amortized cost of each ENQUEUE and each DEQUEUE operation is O.1/.

17.3-7 Design a data structure to support the following two operations for a dynamic multiset S of integers, which allows duplicate values:

INSERT.S; x/ inserts x into S .

DELETE-LARGER-HALF(S) deletes the largest ⌈|S|/2⌉ elements from S.

Explain how to implement this data structure so that any sequence of m INSERT and DELETE-LARGER-HALF operations runs in O(m) time. Your implementation should also include a way to output the elements of S in O(|S|) time.

17.4 Dynamic tables

We do not always know in advance how many objects some applications will store in a table. We might allocate space for a table, only to find out later that it is not enough. We must then reallocate the table with a larger size and copy all objects stored in the original table over into the new, larger table. Similarly, if many objects have been deleted from the table, it may be worthwhile to reallocate the table with a smaller size. In this section, we study this problem of dynamically expanding and contracting a table. Using amortized analysis, we shall show that the amortized cost of insertion and deletion is only O.1/, even though the actual cost of an operation is large when it triggers an expansion or a contraction. Moreover, we shall see how to guarantee that the unused space in a dynamic table never exceeds a constant fraction of the total space.

We assume that the dynamic table supports the operations TABLE-INSERT and TABLE-DELETE. TABLE-INSERT inserts into the table an item that occupies a sin- gle slot, that is, a space for one item. Likewise, TABLE-DELETE removes an item from the table, thereby freeing a slot. The details of the data-structuring method used to organize the table are unimportant; we might use a stack (Section 10.1), a heap (Chapter 6), or a hash table (Chapter 11). We might also use an array or collection of arrays to implement object storage, as we did in Section 10.3.

We shall find it convenient to use a concept introduced in our analysis of hashing (Chapter 11). We define the load factor ˛.T / of a nonempty table T to be the number of items stored in the table divided by the size (number of slots) of the table. We assign an empty table (one with no items) size 0, and we define its load factor to be 1. If the load factor of a dynamic table is bounded below by a constant,


the unused space in the table is never more than a constant fraction of the total amount of space.

We start by analyzing a dynamic table in which we only insert items. We then consider the more general case in which we both insert and delete items.

17.4.1 Table expansion

Let us assume that storage for a table is allocated as an array of slots. A table fills up when all slots have been used or, equivalently, when its load factor is 1.1 In some software environments, upon attempting to insert an item into a full table, the only alternative is to abort with an error. We shall assume, however, that our software environment, like many modern ones, provides a memory-management system that can allocate and free blocks of storage on request. Thus, upon inserting an item into a full table, we can expand the table by allocating a new table with more slots than the old table had. Because we always need the table to reside in contiguous memory, we must allocate a new array for the larger table and then copy items from the old table into the new table.

A common heuristic allocates a new table with twice as many slots as the old one. If the only table operations are insertions, then the load factor of the table is always at least 1=2, and thus the amount of wasted space never exceeds half the total space in the table.

In the following pseudocode, we assume that T is an object representing the table. The attribute T.table contains a pointer to the block of storage representing the table, T.num contains the number of items in the table, and T.size gives the total number of slots in the table. Initially, the table is empty: T.num = T.size = 0.

TABLE-INSERT(T, x)
 1  if T.size == 0
 2      allocate T.table with 1 slot
 3      T.size = 1
 4  if T.num == T.size
 5      allocate new-table with 2 · T.size slots
 6      insert all items in T.table into new-table
 7      free T.table
 8      T.table = new-table
 9      T.size = 2 · T.size
10  insert x into T.table
11  T.num = T.num + 1

1In some situations, such as an open-address hash table, we may wish to consider a table to be full if its load factor equals some constant strictly less than 1. (See Exercise 17.4-1.)


Notice that we have two “insertion” procedures here: the TABLE-INSERT proce- dure itself and the elementary insertion into a table in lines 6 and 10. We can analyze the running time of TABLE-INSERT in terms of the number of elementary insertions by assigning a cost of 1 to each elementary insertion. We assume that the actual running time of TABLE-INSERT is linear in the time to insert individual items, so that the overhead for allocating an initial table in line 2 is constant and the overhead for allocating and freeing storage in lines 5 and 7 is dominated by the cost of transferring items in line 6. We call the event in which lines 5–9 are executed an expansion.

Let us analyze a sequence of n TABLE-INSERT operations on an initially empty table. What is the cost ci of the i th operation? If the current table has room for the new item (or if this is the first operation), then ci D 1, since we need only perform the one elementary insertion in line 10. If the current table is full, however, and an expansion occurs, then ci D i : the cost is 1 for the elementary insertion in line 10 plus i � 1 for the items that we must copy from the old table to the new table in line 6. If we perform n operations, the worst-case cost of an operation is O.n/, which leads to an upper bound of O.n2/ on the total running time for n operations.

This bound is not tight, because we rarely expand the table in the course of n TABLE-INSERT operations. Specifically, the i th operation causes an expansion only when i � 1 is an exact power of 2. The amortized cost of an operation is in fact O.1/, as we can show using aggregate analysis. The cost of the i th operation is

c_i = { i   if i − 1 is an exact power of 2 ,
        1   otherwise .

The total cost of n TABLE-INSERT operations is therefore

Σ_{i=1}^{n} c_i ≤ n + Σ_{j=0}^{⌊lg n⌋} 2^j
              < n + 2n
              = 3n ,

since at most n operations cost 1 each and the costs of the expansions form a geometric series. Because the total cost of n TABLE-INSERT operations is bounded by 3n, the amortized cost of a single operation is at most 3.
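The same accounting can be observed by running a small Python sketch of the doubling table (class and attribute layout ours, following the pseudocode above): for n insertions the total number of elementary insertions stays below 3n.

class DynamicTable:
    def __init__(self):
        self.table = []          # the allocated slots
        self.size = 0            # number of slots
        self.num = 0             # number of items stored
        self.cost = 0            # elementary insertions performed so far

    def insert(self, x):
        if self.size == 0:
            self.table = [None]
            self.size = 1
        if self.num == self.size:                # table is full: expand
            new_table = [None] * (2 * self.size)
            for i in range(self.num):            # copy each old item
                new_table[i] = self.table[i]
                self.cost += 1
            self.table = new_table
            self.size *= 2
        self.table[self.num] = x                 # the elementary insertion of x
        self.cost += 1
        self.num += 1

T = DynamicTable()
n = 1000
for i in range(n):
    T.insert(i)
assert T.cost < 3 * n
print(T.cost, T.cost / n)        # amortized cost per insertion is below 3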

B-TREE-SEARCH(x, k)
1  i = 1
2  while i ≤ x.n and k > x.key_i
3      i = i + 1
4  if i ≤ x.n and k == x.key_i
5      return (x, i)
6  elseif x.leaf
7      return NIL
8  else DISK-READ(x.c_i)
9      return B-TREE-SEARCH(x.c_i, k)

Using a linear-search procedure, lines 1–3 find the smallest index i such that k � x:keyi , or else they set i to x:n C 1. Lines 4–5 check to see whether we have now discovered the key, returning if we have. Otherwise, lines 6–9 either ter- minate the search unsuccessfully (if x is a leaf) or recurse to search the appropriate subtree of x, after performing the necessary DISK-READ on that child.

Figure 18.1 illustrates the operation of B-TREE-SEARCH. The procedure exam- ines the lightly shaded nodes during a search for the key R.
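For readers who prefer running code, here is a minimal Python sketch of the same search (node layout and the DISK-READ stub are ours); each node stores a sorted list of keys and, unless it is a leaf, one more child than it has keys.

class BTreeNode:
    def __init__(self, keys, children=None):
        self.keys = keys                  # x.key_1 .. x.key_n, in sorted order
        self.children = children          # None for a leaf, else n + 1 children
        self.leaf = children is None

def disk_read(node):
    """Stand-in for DISK-READ; here every node is already in main memory."""
    return node

def btree_search(x, k):
    """Return (node, index) locating key k in the subtree rooted at x, or None."""
    i = 0
    while i < len(x.keys) and k > x.keys[i]:   # linear search within the node
        i += 1
    if i < len(x.keys) and k == x.keys[i]:
        return (x, i)
    if x.leaf:
        return None
    return btree_search(disk_read(x.children[i]), k)

leaves = [BTreeNode(list(s)) for s in ("ACDE", "JK", "NO", "RSTUV", "YZ")]
root = BTreeNode(list("GMPX"), leaves)
print(btree_search(root, "R"))    # found in the fourth leaf, at index 0
print(btree_search(root, "B"))    # None: B is not in the tree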

As in the TREE-SEARCH procedure for binary search trees, the nodes encountered during the recursion form a simple path downward from the root of the tree. The B-TREE-SEARCH procedure therefore accesses O(h) = O(log_t n) disk pages, where h is the height of the B-tree and n is the number of keys in the B-tree. Since x.n < 2t, the while loop of lines 2–3 takes O(t) time within each node, and the total CPU time is O(th) = O(t log_t n).

B-TREE-INSERT-NONFULL(x, k)
    ⋮
13  if x.c_i.n == 2t − 1
14      B-TREE-SPLIT-CHILD(x, i)
15      if k > x.key_i
16          i = i + 1
17  B-TREE-INSERT-NONFULL(x.c_i, k)

The B-TREE-INSERT-NONFULL procedure works as follows. Lines 3–8 handle the case in which x is a leaf node by inserting key k into x. If x is not a leaf node, then we must insert k into the appropriate leaf node in the subtree rooted at internal node x. In this case, lines 9–11 determine the child of x to which the recursion descends. Line 13 detects whether the recursion would descend to a full child, in which case line 14 uses B-TREE-SPLIT-CHILD to split that child into two nonfull children, and lines 15–16 determine which of the two children is now the


correct one to descend to. (Note that there is no need for a DISK-READ.x:ci/ after line 16 increments i , since the recursion will descend in this case to a child that was just created by B-TREE-SPLIT-CHILD.) The net effect of lines 13–16 is thus to guarantee that the procedure never recurses to a full node. Line 17 then recurses to insert k into the appropriate subtree. Figure 18.7 illustrates the various cases of inserting into a B-tree.

For a B-tree of height h, B-TREE-INSERT performs O.h/ disk accesses, since only O.1/ DISK-READ and DISK-WRITE operations occur between calls to B-TREE-INSERT-NONFULL. The total CPU time used is O.th/ D O.t logt n/. Since B-TREE-INSERT-NONFULL is tail-recursive, we can alternatively imple- ment it as a while loop, thereby demonstrating that the number of pages that need to be in main memory at any time is O.1/.

Exercises

18.2-1 Show the results of inserting the keys

F; S; Q; K; C; L; H; T; V; W; M; R; N; P; A; B; X; Y; D; Z; E

in order into an empty B-tree with minimum degree 2. Draw only the configura- tions of the tree just before some node must split, and also draw the final configu- ration.

18.2-2 Explain under what circumstances, if any, redundant DISK-READ or DISK-WRITE operations occur during the course of executing a call to B-TREE-INSERT. (A redundant DISK-READ is a DISK-READ for a page that is already in memory. A redundant DISK-WRITE writes to disk a page of information that is identical to what is already stored there.)

18.2-3 Explain how to find the minimum key stored in a B-tree and how to find the prede- cessor of a given key stored in a B-tree.

18.2-4 ? Suppose that we insert the keys f1; 2; : : : ; ng into an empty B-tree with minimum degree 2. How many nodes does the final B-tree have?

18.2-5 Since leaf nodes require no pointers to children, they could conceivably use a dif- ferent (larger) t value than internal nodes for the same disk page size. Show how to modify the procedures for creating and inserting into a B-tree to handle this variation.



Figure 18.7 Inserting keys into a B-tree. The minimum degree t for this B-tree is 3, so a node can hold at most 5 keys. Nodes that are modified by the insertion process are lightly shaded. (a) The initial tree for this example. (b) The result of inserting B into the initial tree; this is a simple insertion into a leaf node. (c) The result of inserting Q into the previous tree. The node RST U V splits into two nodes containing RS and U V , the key T moves up to the root, and Q is inserted in the leftmost of the two halves (the RS node). (d) The result of inserting L into the previous tree. The root splits right away, since it is full, and the B-tree grows in height by one. Then L is inserted into the leaf containing JK. (e) The result of inserting F into the previous tree. The node ABCDE splits before F is inserted into the rightmost of the two halves (the DE node).


18.2-6 Suppose that we were to implement B-TREE-SEARCH to use binary search rather than linear search within each node. Show that this change makes the CPU time required O.lg n/, independently of how t might be chosen as a function of n.

18.2-7 Suppose that disk hardware allows us to choose the size of a disk page arbitrarily, but that the time it takes to read the disk page is aCbt , where a and b are specified constants and t is the minimum degree for a B-tree using pages of the selected size. Describe how to choose t so as to minimize (approximately) the B-tree search time. Suggest an optimal value of t for the case in which a D 5 milliseconds and b D 10 microseconds.

18.3 Deleting a key from a B-tree

Deletion from a B-tree is analogous to insertion but a little more complicated, be- cause we can delete a key from any node—not just a leaf—and when we delete a key from an internal node, we will have to rearrange the node’s children. As in insertion, we must guard against deletion producing a tree whose structure violates the B-tree properties. Just as we had to ensure that a node didn’t get too big due to insertion, we must ensure that a node doesn’t get too small during deletion (except that the root is allowed to have fewer than the minimum number t � 1 of keys). Just as a simple insertion algorithm might have to back up if a node on the path to where the key was to be inserted was full, a simple approach to deletion might have to back up if a node (other than the root) along the path to where the key is to be deleted has the minimum number of keys.

The procedure B-TREE-DELETE deletes the key k from the subtree rooted at x. We design this procedure to guarantee that whenever it calls itself recursively on a node x, the number of keys in x is at least the minimum degree t . Note that this condition requires one more key than the minimum required by the usual B-tree conditions, so that sometimes a key may have to be moved into a child node before recursion descends to that child. This strengthened condition allows us to delete a key from the tree in one downward pass without having to “back up” (with one ex- ception, which we’ll explain). You should interpret the following specification for deletion from a B-tree with the understanding that if the root node x ever becomes an internal node having no keys (this situation can occur in cases 2c and 3b on pages 501–502), then we delete x, and x’s only child x:c1 becomes the new root of the tree, decreasing the height of the tree by one and preserving the property that the root of the tree contains at least one key (unless the tree is empty).



Figure 18.8 Deleting keys from a B-tree. The minimum degree for this B-tree is t D 3, so a node (other than the root) cannot have fewer than 2 keys. Nodes that are modified are lightly shaded. (a) The B-tree of Figure 18.7(e). (b) Deletion of F . This is case 1: simple deletion from a leaf. (c) Deletion of M . This is case 2a: the predecessor L of M moves up to take M ’s position. (d)Dele- tion of G. This is case 2c: we push G down to make node DEGJK and then delete G from this leaf (case 1).

We sketch how deletion works instead of presenting the pseudocode. Figure 18.8 illustrates the various cases of deleting keys from a B-tree.

1. If the key k is in node x and x is a leaf, delete the key k from x.

2. If the key k is in node x and x is an internal node, do the following:



Figure 18.8, continued (e) Deletion of D. This is case 3b: the recursion cannot descend to node CL because it has only 2 keys, so we push P down and merge it with CL and TX to form CLP TX ; then we delete D from a leaf (case 1). (e0)After (e), we delete the root and the tree shrinks in height by one. (f) Deletion of B . This is case 3a: C moves to fill B’s position and E moves to fill C ’s position.

a. If the child y that precedes k in node x has at least t keys, then find the predecessor k′ of k in the subtree rooted at y. Recursively delete k′, and replace k by k′ in x. (We can find k′ and delete it in a single downward pass.)

b. If y has fewer than t keys, then, symmetrically, examine the child z that follows k in node x. If z has at least t keys, then find the successor k′ of k in the subtree rooted at z. Recursively delete k′, and replace k by k′ in x. (We can find k′ and delete it in a single downward pass.)

c. Otherwise, if both y and z have only t − 1 keys, merge k and all of z into y, so that x loses both k and the pointer to z, and y now contains 2t − 1 keys. Then free z and recursively delete k from y.

3. If the key k is not present in internal node x, determine the root x.c_i of the appropriate subtree that must contain k, if k is in the tree at all. If x.c_i has only t − 1 keys, execute step 3a or 3b as necessary to guarantee that we descend to a node containing at least t keys. Then finish by recursing on the appropriate child of x.


a. If x:ci has only t � 1 keys but has an immediate sibling with at least t keys, give x:ci an extra key by moving a key from x down into x:ci , moving a key from x:ci ’s immediate left or right sibling up into x, and moving the appropriate child pointer from the sibling into x:ci .

b. If x:ci and both of x:ci ’s immediate siblings have t � 1 keys, merge x:ci with one sibling, which involves moving a key from x down into the new merged node to become the median key for that node.

Since most of the keys in a B-tree are in the leaves, we may expect that in practice, deletion operations are most often used to delete keys from leaves. The B-TREE-DELETE procedure then acts in one downward pass through the tree, without having to back up. When deleting a key in an internal node, however, the procedure makes a downward pass through the tree but may have to return to the node from which the key was deleted to replace the key with its predecessor or successor (cases 2a and 2b).

Although this procedure seems complicated, it involves only O.h/ disk oper- ations for a B-tree of height h, since only O.1/ calls to DISK-READ and DISK- WRITE are made between recursive invocations of the procedure. The CPU time required is O.th/ D O.t logt n/.

Exercises

18.3-1 Show the results of deleting C , P , and V , in order, from the tree of Figure 18.8(f).

18.3-2 Write pseudocode for B-TREE-DELETE.

Problems

18-1 Stacks on secondary storage Consider implementing a stack in a computer that has a relatively small amount of fast primary memory and a relatively large amount of slower disk storage. The operations PUSH and POP work on single-word values. The stack we wish to support can grow to be much larger than can fit in memory, and thus most of it must be stored on disk.

A simple, but inefficient, stack implementation keeps the entire stack on disk. We maintain in memory a stack pointer, which is the disk address of the top element on the stack. If the pointer has value p, the top element is the .p mod m/th word on page bp=mc of the disk, where m is the number of words per page.


To implement the PUSH operation, we increment the stack pointer, read the ap- propriate page into memory from disk, copy the element to be pushed to the ap- propriate word on the page, and write the page back to disk. A POP operation is similar. We decrement the stack pointer, read in the appropriate page from disk, and return the top of the stack. We need not write back the page, since it was not modified.

Because disk operations are relatively expensive, we count two costs for any implementation: the total number of disk accesses and the total CPU time. Any disk access to a page of m words incurs charges of one disk access and ‚.m/ CPU time.

a. Asymptotically, what is the worst-case number of disk accesses for n stack operations using this simple implementation? What is the CPU time for n stack operations? (Express your answer in terms of m and n for this and subsequent parts.)

Now consider a stack implementation in which we keep one page of the stack in memory. (We also maintain a small amount of memory to keep track of which page is currently in memory.) We can perform a stack operation only if the relevant disk page resides in memory. If necessary, we can write the page currently in memory to the disk and read in the new page from the disk to memory. If the relevant disk page is already in memory, then no disk accesses are required.

b. What is the worst-case number of disk accesses required for n PUSH opera- tions? What is the CPU time?

c. What is the worst-case number of disk accesses required for n stack operations? What is the CPU time?

Suppose that we now implement the stack by keeping two pages in memory (in addition to a small number of words for bookkeeping).

d. Describe how to manage the stack pages so that the amortized number of disk accesses for any stack operation is O.1=m/ and the amortized CPU time for any stack operation is O.1/.

18-2 Joining and splitting 2-3-4 trees
The join operation takes two dynamic sets S′ and S″ and an element x such that for any x′ ∈ S′ and x″ ∈ S″, we have x′.key < x.key < x″.key.

Lemma 19.3
For all integers k ≥ 0, the (k + 2)nd Fibonacci number satisfies F_{k+2} ≥ φ^k, where φ = (1 + √5)/2 is the golden ratio.

Proof The proof is by induction on k. The base cases are k = 0 and k = 1: we have F_2 = 1 = φ^0 and F_3 = 2 > 1.619 > φ^1. The inductive step is for k ≥ 2, and we assume that F_{i+2} ≥ φ^i for i = 0, 1, ..., k − 1. Recall that φ is the positive root of equation (3.23), x² = x + 1. Thus, we have

F_{k+2} = F_{k+1} + F_k
        ≥ φ^{k−1} + φ^{k−2}      (by the inductive hypothesis)
        = φ^{k−2}(φ + 1)
        = φ^{k−2} · φ²            (by equation (3.23))
        = φ^k .

The following lemma and its corollary complete the analysis.


Lemma 19.4
Let x be any node in a Fibonacci heap, and let k = x.degree. Then size(x) ≥ F_{k+2} ≥ φ^k, where φ = (1 + √5)/2.

Proof Let s_k denote the minimum possible size of any node of degree k in any Fibonacci heap. Trivially, s_0 = 1 and s_1 = 2. The number s_k is at most size(x) and, because adding children to a node cannot decrease the node's size, the value of s_k increases monotonically with k. Consider some node z, in any Fibonacci heap, such that z.degree = k and size(z) = s_k. Because s_k ≤ size(x), we compute a lower bound on size(x) by computing a lower bound on s_k. As in Lemma 19.1, let y_1, y_2, ..., y_k denote the children of z in the order in which they were linked to z. To bound s_k, we count one for z itself and one for the first child y_1 (for which size(y_1) ≥ 1), giving

size(x) ≥ s_k ≥ 2 + Σ_{i=2}^{k} s_{y_i.degree}
              ≥ 2 + Σ_{i=2}^{k} s_{i−2} ,

where the last line follows from Lemma 19.1 (so that y_i.degree ≥ i − 2) and the monotonicity of s_k (so that s_{y_i.degree} ≥ s_{i−2}).

We now show by induction on k that s_k ≥ F_{k+2} for all nonnegative integers k. The bases, for k = 0 and k = 1, are trivial. For the inductive step, we assume that k ≥ 2 and that s_i ≥ F_{i+2} for i = 0, 1, ..., k − 1. We have

s_k ≥ 2 + Σ_{i=2}^{k} s_{i−2}
    ≥ 2 + Σ_{i=2}^{k} F_i
    = 1 + Σ_{i=0}^{k} F_i
    = F_{k+2}        (by Lemma 19.2)
    ≥ φ^k            (by Lemma 19.3) .

Thus, we have shown that size(x) ≥ s_k ≥ F_{k+2} ≥ φ^k.


Corollary 19.5 The maximum degree D.n/ of any node in an n-node Fibonacci heap is O.lg n/.

Proof Let x be any node in an n-node Fibonacci heap, and let k = x.degree. By Lemma 19.4, we have n ≥ size(x) ≥ φ^k. Taking base-φ logarithms gives us k ≤ log_φ n. (In fact, because k is an integer, k ≤ ⌊log_φ n⌋.) The maximum degree D(n) of any node is thus O(lg n).
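The bound is easy to sanity-check numerically; this short Python sketch (purely illustrative) verifies F_{k+2} ≥ φ^k for small k and prints the degree bound ⌊log_φ n⌋ for a few values of n.

import math

phi = (1 + math.sqrt(5)) / 2

def fib(m):
    a, b = 0, 1                   # F_0 = 0, F_1 = 1
    for _ in range(m):
        a, b = b, a + b
    return a

for k in range(20):               # Lemma 19.4: F_{k+2} >= phi**k
    assert fib(k + 2) >= phi ** k

for n in (10, 1000, 10**6):       # Corollary 19.5: D(n) <= floor(log_phi n) = O(lg n)
    print(n, math.floor(math.log(n, phi)))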

Exercises

19.4-1 Professor Pinocchio claims that the height of an n-node Fibonacci heap is O.lg n/. Show that the professor is mistaken by exhibiting, for any positive integer n, a sequence of Fibonacci-heap operations that creates a Fibonacci heap consisting of just one tree that is a linear chain of n nodes.

19.4-2 Suppose we generalize the cascading-cut rule to cut a node x from its parent as soon as it loses its kth child, for some integer constant k. (The rule in Section 19.3 uses k D 2.) For what values of k is D.n/ D O.lg n/?

Problems

19-1 Alternative implementation of deletion Professor Pisano has proposed the following variant of the FIB-HEAP-DELETE procedure, claiming that it runs faster when the node being deleted is not the node pointed to by H:min.

PISANO-DELETE(H, x)
1  if x == H.min
2      FIB-HEAP-EXTRACT-MIN(H)
3  else y = x.p
4      if y ≠ NIL
5          CUT(H, x, y)
6          CASCADING-CUT(H, y)
7      add x's child list to the root list of H
8      remove x from the root list of H


a. The professor’s claim that this procedure runs faster is based partly on the as- sumption that line 7 can be performed in O.1/ actual time. What is wrong with this assumption?

b. Give a good upper bound on the actual time of PISANO-DELETE when x is not H:min. Your bound should be in terms of x:degree and the number c of calls to the CASCADING-CUT procedure.

c. Suppose that we call PISANO-DELETE.H; x/, and let H 0 be the Fibonacci heap that results. Assuming that node x is not a root, bound the potential of H 0 in terms of x:degree, c, t.H/, and m.H/.

d. Conclude that the amortized time for PISANO-DELETE is asymptotically no better than for FIB-HEAP-DELETE, even when x ¤ H:min.

19-2 Binomial trees and binomial heaps The binomial tree Bk is an ordered tree (see Section B.5.2) defined recursively. As shown in Figure 19.6(a), the binomial tree B0 consists of a single node. The binomial tree Bk consists of two binomial trees Bk�1 that are linked together so that the root of one is the leftmost child of the root of the other. Figure 19.6(b) shows the binomial trees B0 through B4.

a. Show that for the binomial tree Bk ,

1. there are 2k nodes,

2. the height of the tree is k,

3. there are exactly (k choose i) nodes at depth i for i = 0, 1, ..., k, and

4. the root has degree k, which is greater than that of any other node; moreover, as Figure 19.6(c) shows, if we number the children of the root from left to right by k � 1; k � 2; : : : ; 0, then child i is the root of a subtree Bi .

A binomial heapH is a set of binomial trees that satisfies the following proper- ties:

1. Each node has a key (like a Fibonacci heap).

2. Each binomial tree in H obeys the min-heap property.

3. For any nonnegative integer k, there is at most one binomial tree in H whose root has degree k.

b. Suppose that a binomial heap H has a total of n nodes. Discuss the relationship between the binomial trees that H contains and the binary representation of n. Conclude that H consists of at most blg nc C 1 binomial trees.



Figure 19.6 (a) The recursive definition of the binomial tree Bk . Triangles represent rooted sub- trees. (b) The binomial trees B0 through B4. Node depths in B4 are shown. (c) Another way of looking at the binomial tree Bk .

Suppose that we represent a binomial heap as follows. The left-child, right- sibling scheme of Section 10.4 represents each binomial tree within a binomial heap. Each node contains its key; pointers to its parent, to its leftmost child, and to the sibling immediately to its right (these pointers are NIL when appropriate); and its degree (as in Fibonacci heaps, how many children it has). The roots form a singly linked root list, ordered by the degrees of the roots (from low to high), and we access the binomial heap by a pointer to the first node on the root list.

c. Complete the description of how to represent a binomial heap (i.e., name the attributes, describe when attributes have the value NIL, and define how the root list is organized), and show how to implement the same seven operations on binomial heaps as this chapter implemented on Fibonacci heaps. Each opera- tion should run in O.lg n/ worst-case time, where n is the number of nodes in


the binomial heap (or in the case of the UNION operation, in the two binomial heaps that are being united). The MAKE-HEAP operation should take constant time.

d. Suppose that we were to implement only the mergeable-heap operations on a Fibonacci heap (i.e., we do not implement the DECREASE-KEY or DELETE op- erations). How would the trees in a Fibonacci heap resemble those in a binomial heap? How would they differ? Show that the maximum degree in an n-node Fibonacci heap would be at most blg nc.

e. Professor McGee has devised a new data structure based on Fibonacci heaps. A McGee heap has the same structure as a Fibonacci heap and supports just the mergeable-heap operations. The implementations of the operations are the same as for Fibonacci heaps, except that insertion and union consolidate the root list as their last step. What are the worst-case running times of operations on McGee heaps?

19-3 More Fibonacci-heap operations We wish to augment a Fibonacci heap H to support two new operations without changing the amortized running time of any other Fibonacci-heap operations.

a. The operation FIB-HEAP-CHANGE-KEY.H; x; k/ changes the key of node x to the value k. Give an efficient implementation of FIB-HEAP-CHANGE-KEY, and analyze the amortized running time of your implementation for the cases in which k is greater than, less than, or equal to x:key.

b. Give an efficient implementation of FIB-HEAP-PRUNE.H; r/, which deletes q D min.r; H:n/ nodes from H . You may choose any q nodes to delete. Ana- lyze the amortized running time of your implementation. (Hint: You may need to modify the data structure and potential function.)

19-4 2-3-4 heaps Chapter 18 introduced the 2-3-4 tree, in which every internal node (other than pos- sibly the root) has two, three, or four children and all leaves have the same depth. In this problem, we shall implement 2-3-4 heaps, which support the mergeable-heap operations.

The 2-3-4 heaps differ from 2-3-4 trees in the following ways. In 2-3-4 heaps, only leaves store keys, and each leaf x stores exactly one key in the attribute x:key. The keys in the leaves may appear in any order. Each internal node x contains a value x:small that is equal to the smallest key stored in any leaf in the subtree rooted at x. The root r contains an attribute r:height that gives the height of the


tree. Finally, 2-3-4 heaps are designed to be kept in main memory, so that disk reads and writes are not needed.

Implement the following 2-3-4 heap operations. In parts (a)–(e), each operation should run in O.lg n/ time on a 2-3-4 heap with n elements. The UNION operation in part (f) should run in O.lg n/ time, where n is the number of elements in the two input heaps.

a. MINIMUM, which returns a pointer to the leaf with the smallest key.

b. DECREASE-KEY, which decreases the key of a given leaf x to a given value k � x:key.

c. INSERT, which inserts leaf x with key k.

d. DELETE, which deletes a given leaf x.

e. EXTRACT-MIN, which extracts the leaf with the smallest key.

f. UNION, which unites two 2-3-4 heaps, returning a single 2-3-4 heap and de- stroying the input heaps.

Chapter notes

Fredman and Tarjan [114] introduced Fibonacci heaps. Their paper also describes the application of Fibonacci heaps to the problems of single-source shortest paths, all-pairs shortest paths, weighted bipartite matching, and the minimum-spanning- tree problem.

Subsequently, Driscoll, Gabow, Shrairman, and Tarjan [96] developed “relaxed heaps” as an alternative to Fibonacci heaps. They devised two varieties of re- laxed heaps. One gives the same amortized time bounds as Fibonacci heaps. The other allows DECREASE-KEY to run in O.1/ worst-case (not amortized) time and EXTRACT-MIN and DELETE to run in O.lg n/ worst-case time. Relaxed heaps also have some advantages over Fibonacci heaps in parallel algorithms.

See also the chapter notes for Chapter 6 for other data structures that support fast DECREASE-KEY operations when the sequence of values returned by EXTRACT- MIN calls are monotonically increasing over time and the data are integers in a specific range.

20 van Emde Boas Trees

In previous chapters, we saw data structures that support the operations of a priority queue—binary heaps in Chapter 6, red-black trees in Chapter 13,1 and Fibonacci heaps in Chapter 19. In each of these data structures, at least one important operation took O(lg n) time, either worst case or amortized. In fact, because each of these data structures bases its decisions on comparing keys, the Ω(n lg n) lower bound for sorting in Section 8.1 tells us that at least one operation will have to take Ω(lg n) time. Why? If we could perform both the INSERT and EXTRACT-MIN operations in o(lg n) time, then we could sort n keys in o(n lg n) time by first performing n INSERT operations, followed by n EXTRACT-MIN operations.

We saw in Chapter 8, however, that sometimes we can exploit additional information about the keys to sort in o(n lg n) time. In particular, with counting sort we can sort n keys, each an integer in the range 0 to k, in time Θ(n + k), which is Θ(n) when k = O(n).

Since we can circumvent the Ω(n lg n) lower bound for sorting when the keys are integers in a bounded range, you might wonder whether we can perform each of the priority-queue operations in o(lg n) time in a similar scenario. In this chapter, we shall see that we can: van Emde Boas trees support the priority-queue operations, and a few others, each in O(lg lg n) worst-case time. The hitch is that the keys must be integers in the range 0 to n − 1, with no duplicates allowed.

Specifically, van Emde Boas trees support each of the dynamic set operations listed on page 230—SEARCH, INSERT, DELETE, MINIMUM, MAXIMUM, SUC- CESSOR, and PREDECESSOR—in O.lg lg n/ time. In this chapter, we will omit discussion of satellite data and focus only on storing keys. Because we concentrate on keys and disallow duplicate keys to be stored, instead of describing the SEARCH

1Chapter 13 does not explicitly discuss how to implement EXTRACT-MIN and DECREASE-KEY, but we can easily build these operations for any data structure that supports MINIMUM, DELETE, and INSERT.


operation, we will implement the simpler operation MEMBER.S; x/, which returns a boolean indicating whether the value x is currently in dynamic set S .

So far, we have used the parameter n for two distinct purposes: the number of elements in the dynamic set, and the range of the possible values. To avoid any further confusion, from here on we will use n to denote the number of elements currently in the set and u as the range of possible values, so that each van Emde Boas tree operation runs in O.lg lg u/ time. We call the set f0; 1; 2; : : : ; u � 1g the universe of values that can be stored and u the universe size. We assume throughout this chapter that u is an exact power of 2, i.e., u D 2k for some integer k � 1.

Section 20.1 starts us out by examining some simple approaches that will get us going in the right direction. We enhance these approaches in Section 20.2, introducing proto van Emde Boas structures, which are recursive but do not achieve our goal of O.lg lg u/-time operations. Section 20.3 modifies proto van Emde Boas structures to develop van Emde Boas trees, and it shows how to implement each operation in O.lg lg u/ time.

20.1 Preliminary approaches

In this section, we shall examine various approaches for storing a dynamic set. Although none will achieve the O.lg lg u/ time bounds that we desire, we will gain insights that will help us understand van Emde Boas trees when we see them later in this chapter.

Direct addressing

Direct addressing, as we saw in Section 11.1, provides the simplest approach to storing a dynamic set. Since in this chapter we are concerned only with storing keys, we can simplify the direct-addressing approach to store the dynamic set as a bit vector, as discussed in Exercise 11.1-2. To store a dynamic set of values from the universe {0, 1, 2, ..., u − 1}, we maintain an array A[0..u − 1] of u bits. The entry A[x] holds a 1 if the value x is in the dynamic set, and it holds a 0 otherwise. Although we can perform each of the INSERT, DELETE, and MEMBER operations in O(1) time with a bit vector, the remaining operations—MINIMUM, MAXIMUM, SUCCESSOR, and PREDECESSOR—each take Θ(u) time in the worst case because we might have to scan through Θ(u) elements.² For example, if a set contains only the values 0 and u − 1, then to find the successor of 0, we would have to scan entries 1 through u − 2 before finding a 1 in A[u − 1].

Figure 20.1 A binary tree of bits superimposed on top of a bit vector representing the set {2, 3, 4, 5, 7, 14, 15} when u = 16. Each internal node contains a 1 if and only if some leaf in its subtree contains a 1. The arrows show the path followed to determine the predecessor of 14 in the set.
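As a rough illustration (a sketch in Python, not the book's code; the class name BitVectorSet and its method names are invented here), the bit-vector approach supports constant-time INSERT, DELETE, and MEMBER, while SUCCESSOR degenerates into the scan just described:

class BitVectorSet:
    """Direct-addressing set over the universe {0, 1, ..., u - 1}."""

    def __init__(self, u):
        self.u = u
        self.A = [0] * u              # A[x] == 1 iff x is in the set

    def insert(self, x):              # O(1)
        self.A[x] = 1

    def delete(self, x):              # O(1)
        self.A[x] = 0

    def member(self, x):              # O(1)
        return self.A[x] == 1

    def successor(self, x):           # Theta(u) in the worst case
        for y in range(x + 1, self.u):
            if self.A[y]:
                return y
        return None                   # plays the role of NIL

# With only 0 and u - 1 present, successor(0) scans nearly the whole array.
s = BitVectorSet(16)
s.insert(0)
s.insert(15)
assert s.successor(0) == 15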

Superimposing a binary tree structure

We can short-cut long scans in the bit vector by superimposing a binary tree of bits on top of it. Figure 20.1 shows an example. The entries of the bit vector form the leaves of the binary tree, and each internal node contains a 1 if and only if any leaf in its subtree contains a 1. In other words, the bit stored in an internal node is the logical-or of its two children.

The operations that took Θ(u) worst-case time with an unadorned bit vector now use the tree structure:

• To find the minimum value in the set, start at the root and head down toward the leaves, always taking the leftmost node containing a 1.

• To find the maximum value in the set, start at the root and head down toward the leaves, always taking the rightmost node containing a 1.

² We assume throughout this chapter that MINIMUM and MAXIMUM return NIL if the dynamic set is empty and that SUCCESSOR and PREDECESSOR return NIL if the element they are given has no successor or predecessor, respectively.


• To find the successor of x, start at the leaf indexed by x, and head up toward the root until we enter a node from the left and this node has a 1 in its right child z. Then head down through node z, always taking the leftmost node containing a 1 (i.e., find the minimum value in the subtree rooted at the right child z).

• To find the predecessor of x, start at the leaf indexed by x, and head up toward the root until we enter a node from the right and this node has a 1 in its left child z. Then head down through node z, always taking the rightmost node containing a 1 (i.e., find the maximum value in the subtree rooted at the left child z).

Figure 20.1 shows the path taken to find the predecessor, 7, of the value 14. We also augment the INSERT and DELETE operations appropriately. When inserting a value, we store a 1 in each node on the simple path from the appropriate leaf up to the root. When deleting a value, we go from the appropriate leaf up to the root, recomputing the bit in each internal node on the path as the logical-or of its two children.

Since the height of the tree is lg u and each of the above operations makes at most one pass up the tree and at most one pass down, each operation takes O(lg u) time in the worst case.

This approach is only marginally better than just using a red-black tree. We can still perform the MEMBER operation in O(1) time, whereas searching a red-black tree takes O(lg n) time. Then again, if the number n of elements stored is much smaller than the size u of the universe, a red-black tree would be faster for all the other operations.
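One possible Python rendering of the superimposed tree of bits uses the familiar implicit-array layout of a complete binary tree (the class name TreeOfBits and its helper _min_below are invented for this sketch, and u is assumed to be a power of 2):

class TreeOfBits:
    """Bit vector with a binary tree of 'or' bits on top (u a power of 2)."""

    def __init__(self, u):
        self.u = u
        self.T = [0] * (2 * u)       # T[u + x] is leaf x; T[i] = T[2i] | T[2i+1]

    def insert(self, x):             # O(lg u): set the leaf, fix bits up to the root
        i = self.u + x
        self.T[i] = 1
        i //= 2
        while i >= 1:
            self.T[i] = self.T[2 * i] | self.T[2 * i + 1]
            i //= 2

    def _min_below(self, i):         # leftmost 1 in the subtree rooted at node i
        while i < self.u:
            i = 2 * i if self.T[2 * i] else 2 * i + 1
        return i - self.u

    def successor(self, x):          # O(lg u): up until a right sibling holds a 1, then down
        i = self.u + x
        while i > 1:
            if i % 2 == 0 and self.T[i + 1]:
                return self._min_below(i + 1)
            i //= 2
        return None

t = TreeOfBits(16)
for x in (2, 3, 4, 5, 7, 14, 15):
    t.insert(x)
assert t.successor(7) == 14

Insertion fixes the logical-or bits on the leaf-to-root path, and successor climbs until some right sibling's subtree contains a 1, exactly as described above.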

Superimposing a tree of constant height

What happens if we superimpose a tree with greater degree? Let us assume that the size of the universe is u = 2^{2k} for some integer k, so that √u is an integer. Instead of superimposing a binary tree on top of the bit vector, we superimpose a tree of degree √u. Figure 20.2(a) shows such a tree for the same bit vector as in Figure 20.1. The height of the resulting tree is always 2.

As before, each internal node stores the logical-or of the bits within its subtree, so that the √u internal nodes at depth 1 summarize each group of √u values. As Figure 20.2(b) demonstrates, we can think of these nodes as an array summary[0..√u − 1], where summary[i] contains a 1 if and only if the subarray A[i√u..(i + 1)√u − 1] contains a 1. We call this √u-bit subarray of A the ith cluster. For a given value of x, the bit A[x] appears in cluster number ⌊x/√u⌋. Now INSERT becomes an O(1)-time operation: to insert x, set both A[x] and summary[⌊x/√u⌋] to 1.


Figure 20.2 (a) A tree of degree √u superimposed on top of the same bit vector as in Figure 20.1. Each internal node stores the logical-or of the bits in its subtree. (b) A view of the same structure, but with the internal nodes at depth 1 treated as an array summary[0..√u − 1], where summary[i] is the logical-or of the subarray A[i√u..(i + 1)√u − 1].

We can use the summary array to perform each of the operations MINIMUM, MAXIMUM, SUCCESSOR, PREDECESSOR, and DELETE in O(√u) time:

• To find the minimum (maximum) value, find the leftmost (rightmost) entry in summary that contains a 1, say summary[i], and then do a linear search within the ith cluster for the leftmost (rightmost) 1.

• To find the successor (predecessor) of x, first search to the right (left) within its cluster. If we find a 1, that position gives the result. Otherwise, let i = ⌊x/√u⌋ and search to the right (left) within the summary array from index i. The first position that holds a 1 gives the index of a cluster. Search within that cluster for the leftmost (rightmost) 1. That position holds the successor (predecessor).

• To delete the value x, let i = ⌊x/√u⌋. Set A[x] to 0 and then set summary[i] to the logical-or of the bits in the ith cluster.

In each of the above operations, we search through at most two clusters of √u bits plus the summary array, and so each operation takes O(√u) time.

At first glance, it seems as though we have made negative progress. Superimposing a binary tree gave us O(lg u)-time operations, which are asymptotically faster than O(√u) time. Using a tree of degree √u will turn out to be a key idea of van Emde Boas trees, however. We continue down this path in the next section.
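A sketch of the two-level idea in Python (class and method names invented; u is assumed to be a perfect square, so the cluster width is exactly math.isqrt(u)):

import math

class TwoLevelBitSet:
    """Bit vector plus a sqrt(u)-entry summary array."""

    def __init__(self, u):
        self.u = u
        self.root = math.isqrt(u)
        self.A = [0] * u
        self.summary = [0] * self.root        # summary[i] == 1 iff cluster i holds a 1

    def insert(self, x):                      # O(1)
        self.A[x] = 1
        self.summary[x // self.root] = 1

    def successor(self, x):                   # O(sqrt(u))
        root = self.root
        # 1. look to the right inside x's own cluster
        for y in range(x + 1, (x // root + 1) * root):
            if self.A[y]:
                return y
        # 2. find the next nonempty cluster via the summary array
        for i in range(x // root + 1, root):
            if self.summary[i]:
                # 3. return the leftmost 1 in that cluster
                for y in range(i * root, (i + 1) * root):
                    if self.A[y]:
                        return y
        return None

s = TwoLevelBitSet(16)
for x in (2, 3, 4, 5, 7, 14, 15):
    s.insert(x)
assert s.successor(7) == 14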

Exercises

20.1-1 Modify the data structures in this section to support duplicate keys.


20.1-2 Modify the data structures in this section to support keys that have associated satel- lite data.

20.1-3 Observe that, using the structures in this section, the way we find the successor and predecessor of a value x does not depend on whether x is in the set at the time. Show how to find the successor of x in a binary search tree when x is not stored in the tree.

20.1-4 Suppose that instead of superimposing a tree of degree √u, we were to superimpose a tree of degree u^{1/k}, where k > 1 is a constant. What would be the height of such a tree, and how long would each of the operations take?

20.2 A recursive structure

In this section, we modify the idea of superimposing a tree of degree √u on top of a bit vector. In the previous section, we used a summary structure of size √u, with each entry pointing to another structure of size √u. Now, we make the structure recursive, shrinking the universe size by the square root at each level of recursion. Starting with a universe of size u, we make structures holding √u = u^{1/2} items, which themselves hold structures of u^{1/4} items, which hold structures of u^{1/8} items, and so on, down to a base size of 2.

For simplicity, in this section, we assume that u = 2^{2^k} for some integer k, so that u, u^{1/2}, u^{1/4}, ... are integers. This restriction would be quite severe in practice, allowing only values of u in the sequence 2, 4, 16, 256, 65536, .... We shall see in the next section how to relax this assumption and assume only that u = 2^k for some integer k. Since the structure we examine in this section is only a precursor to the true van Emde Boas tree structure, we tolerate this restriction in favor of aiding our understanding.

Recalling that our goal is to achieve running times of O(lg lg u) for the operations, let's think about how we might obtain such running times. At the end of Section 4.3, we saw that by changing variables, we could show that the recurrence

T(n) = 2T(⌊√n⌋) + lg n   (20.1)

has the solution T(n) = O(lg n lg lg n). Let's consider a similar, but simpler, recurrence:

T(u) = T(√u) + O(1) .   (20.2)


If we use the same technique, changing variables, we can show that recurrence (20.2) has the solution T(u) = O(lg lg u). Let m = lg u, so that u = 2^m and we have

T(2^m) = T(2^{m/2}) + O(1) .

Now we rename S(m) = T(2^m), giving the new recurrence

S(m) = S(m/2) + O(1) .

By case 2 of the master method, this recurrence has the solution S(m) = O(lg m). We change back from S(m) to T(u), giving T(u) = T(2^m) = S(m) = O(lg m) = O(lg lg u).

Recurrence (20.2) will guide our search for a data structure. We will design a recursive data structure that shrinks by a factor of √u in each level of its recursion. When an operation traverses this data structure, it will spend a constant amount of time at each level before recursing to the level below. Recurrence (20.2) will then characterize the running time of the operation.

Here is another way to think of how the term lg lg u ends up in the solution to recurrence (20.2). As we look at the universe size in each level of the recursive data structure, we see the sequence u, u^{1/2}, u^{1/4}, u^{1/8}, .... If we consider how many bits we need to store the universe size at each level, we need lg u at the top level, and each level needs half the bits of the previous level. In general, if we start with b bits and halve the number of bits at each level, then after lg b levels, we get down to just one bit. Since b = lg u, we see that after lg lg u levels, we have a universe size of 2.

Looking back at the data structure in Figure 20.2, a given value x resides in cluster number ⌊x/√u⌋. If we view x as a lg u-bit binary integer, that cluster number, ⌊x/√u⌋, is given by the most significant (lg u)/2 bits of x. Within its cluster, x appears in position x mod √u, which is given by the least significant (lg u)/2 bits of x. We will need to index in this way, and so let us define some functions that will help us do so:

high(x) = ⌊x/√u⌋ ,
low(x) = x mod √u ,
index(x, y) = x√u + y .

The function high(x) gives the most significant (lg u)/2 bits of x, producing the number of x's cluster. The function low(x) gives the least significant (lg u)/2 bits of x and provides x's position within its cluster. The function index(x, y) builds an element number from x and y, treating x as the most significant (lg u)/2 bits of the element number and y as the least significant (lg u)/2 bits. We have the identity x = index(high(x), low(x)). The value of u used by each of these functions will always be the universe size of the data structure in which we call the function, which changes as we descend into the recursive structure.
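In Python the three helpers might look as follows (a sketch assuming u is an even power of 2, so that √u equals math.isqrt(u); u is passed explicitly since, as just noted, its value changes as we descend):

import math

def high(x, u):
    """Cluster number of x: the most significant (lg u)/2 bits."""
    return x // math.isqrt(u)

def low(x, u):
    """Position of x within its cluster: the least significant (lg u)/2 bits."""
    return x % math.isqrt(u)

def index(x, y, u):
    """Rebuild an element number from a cluster number x and an offset y."""
    return x * math.isqrt(u) + y

# With u = 16 (so sqrt(u) = 4), the element 14 = 0b1110 splits into
# high bits 11 (cluster 3) and low bits 10 (offset 2).
u = 16
assert (high(14, u), low(14, u)) == (3, 2)
assert index(high(14, u), low(14, u), u) == 14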


Figure 20.3 The information in a proto-vEB(u) structure when u ≥ 4. The structure contains the universe size u, a pointer summary to a proto-vEB(√u) structure, and an array cluster[0..√u − 1] of √u pointers to proto-vEB(√u) structures.

20.2.1 Proto van Emde Boas structures

Taking our cue from recurrence (20.2), let us design a recursive data structure to support the operations. Although this data structure will fail to achieve our goal of O(lg lg u) time for some operations, it serves as a basis for the van Emde Boas tree structure that we will see in Section 20.3.

For the universe {0, 1, 2, ..., u − 1}, we define a proto van Emde Boas structure, or proto-vEB structure, which we denote as proto-vEB(u), recursively as follows. Each proto-vEB(u) structure contains an attribute u giving its universe size. In addition, it contains the following:

• If u = 2, then it is the base size, and it contains an array A[0..1] of two bits.

• Otherwise, u = 2^{2^k} for some integer k ≥ 1, so that u ≥ 4. In addition to the universe size u, the data structure proto-vEB(u) contains the following attributes, illustrated in Figure 20.3:

  • a pointer named summary to a proto-vEB(√u) structure and

  • an array cluster[0..√u − 1] of √u pointers, each to a proto-vEB(√u) structure.

The element x, where 0 ≤ x < u, is stored in cluster number high(x) as element number low(x) within that cluster.
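A minimal Python sketch of this recursive layout (not the book's code; only MEMBER and the two-recursive-call version of INSERT are shown, which is precisely why this structure falls short of O(lg lg u) for insertion):

import math

class ProtoVEB:
    """A proto-vEB(u) structure; u must be of the form 2^(2^k)
    so that repeated square roots stay integral (2, 4, 16, 256, ...)."""

    def __init__(self, u):
        self.u = u
        if u == 2:
            self.A = [0, 0]                                     # base case: two bits
        else:
            root = math.isqrt(u)
            self.summary = ProtoVEB(root)                       # one proto-vEB(sqrt(u)) summary
            self.cluster = [ProtoVEB(root) for _ in range(root)]  # sqrt(u) clusters

    def high(self, x):
        return x // math.isqrt(self.u)

    def low(self, x):
        return x % math.isqrt(self.u)

    def member(self, x):
        if self.u == 2:
            return self.A[x] == 1
        return self.cluster[self.high(x)].member(self.low(x))

    def insert(self, x):
        if self.u == 2:
            self.A[x] = 1
        else:
            self.cluster[self.high(x)].insert(self.low(x))
            self.summary.insert(self.high(x))   # second recursive call: not O(lg lg u)

V = ProtoVEB(16)
V.insert(14)
assert V.member(14) and not V.member(13)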

20.3 The van Emde Boas tree

Figure 20.5 The information in a vEB(u) tree when u > 2. The structure contains the universe size u, elements min and max, a pointer summary to a vEB(↑√u) tree, and an array cluster[0..↑√u − 1] of ↑√u pointers to vEB(↓√u) trees.

If √u is not an integer—that is, if u is an odd power of 2 (u = 2^{2k+1} for some integer k ≥ 0)—then we will divide the lg u bits of a number into the most significant ⌈(lg u)/2⌉ bits and the least significant ⌊(lg u)/2⌋ bits. For convenience, we denote 2^⌈(lg u)/2⌉ (the "upper square root" of u) by ↑√u and 2^⌊(lg u)/2⌋ (the "lower square root" of u) by ↓√u, so that u = ↑√u · ↓√u and, when u is an even power of 2 (u = 2^{2k} for some integer k), ↑√u = ↓√u = √u. Because we now allow u to be an odd power of 2, we must redefine our helpful functions from Section 20.2:

high(x) = ⌊x/↓√u⌋ ,
low(x) = x mod ↓√u ,
index(x, y) = x · ↓√u + y .
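A small Python sketch of the upper and lower square roots and the redefined helpers (function names invented; u is assumed to be a power of 2):

def lower_sqrt(u):
    """2^floor(lg(u)/2), the 'lower square root' of u (u a power of 2)."""
    return 1 << (u.bit_length() - 1) // 2

def upper_sqrt(u):
    """2^ceil(lg(u)/2), the 'upper square root' of u (u a power of 2)."""
    return 1 << -(-(u.bit_length() - 1) // 2)

def high(x, u):
    return x // lower_sqrt(u)

def low(x, u):
    return x % lower_sqrt(u)

def index(x, y, u):
    return x * lower_sqrt(u) + y

# For u = 32 = 2^5, an odd power of 2: upper square root 8, lower square root 4,
# and 8 * 4 == 32 as required.
assert (upper_sqrt(32), lower_sqrt(32)) == (8, 4)
assert upper_sqrt(32) * lower_sqrt(32) == 32
# x = 21 = 0b10101 splits into its high 3 bits (cluster 5) and low 2 bits (offset 1).
assert (high(21, 32), low(21, 32), index(5, 1, 32)) == (5, 1, 21)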

20.3.1 van Emde Boas trees

The van Emde Boas tree, or vEB tree, modifies the proto-vEB structure. We denote a vEB tree with a universe size of u as vEB(u) and, unless u equals the base size of 2, the attribute summary points to a vEB(↑√u) tree and the array cluster[0..↑√u − 1] points to ↑√u vEB(↓√u) trees. As Figure 20.5 illustrates, a vEB tree contains two attributes not found in a proto-vEB structure:

• min stores the minimum element in the vEB tree, and

• max stores the maximum element in the vEB tree.

Furthermore, the element stored in min does not appear in any of the recursive vEB(↓√u) trees that the cluster array points to. The elements stored in a vEB(u) tree V, therefore, are V.min plus all the elements recursively stored in the vEB(↓√u) trees pointed to by V.cluster[0..↑√u − 1]. Note that when a vEB tree contains two or more elements, we treat min and max differently: the element stored in min does not appear in any of the clusters, but the element stored in max does.

Since the base size is 2, a vEB(2) tree does not need the array A that the corresponding proto-vEB(2) structure has. Instead, we can determine its elements from its min and max attributes. In a vEB tree with no elements, regardless of its universe size u, both min and max are NIL.

Figure 20.6 shows a vEB(16) tree V holding the set {2, 3, 4, 5, 7, 14, 15}. Because the smallest element is 2, V.min equals 2, and even though high(2) = 0, the element 2 does not appear in the vEB(4) tree pointed to by V.cluster[0]: notice that V.cluster[0].min equals 3, and so 2 is not in this vEB tree. Similarly, since V.cluster[0].min equals 3, and 2 and 3 are the only elements in V.cluster[0], the vEB(2) clusters within V.cluster[0] are empty.

The min and max attributes will turn out to be key to reducing the number of recursive calls within the operations on vEB trees. These attributes will help us in four ways:

1. The MINIMUM and MAXIMUM operations do not even need to recurse, for they can just return the values of min or max.

2. The SUCCESSOR operation can avoid making a recursive call to determine whether the successor of a value x lies within high(x). That is because x's successor lies within its cluster if and only if x is strictly less than the max attribute of its cluster. A symmetric argument holds for PREDECESSOR and min.

3. We can tell whether a vEB tree has no elements, exactly one element, or at least two elements in constant time from its min and max values. This ability will help in the INSERT and DELETE operations. If min and max are both NIL, then the vEB tree has no elements. If min and max are non-NIL but are equal to each other, then the vEB tree has exactly one element. Otherwise, both min and max are non-NIL but are unequal, and the vEB tree has two or more elements.

4. If we know that a vEB tree is empty, we can insert an element into it by updating only its min and max attributes. Hence, we can insert into an empty vEB tree in constant time. Similarly, if we know that a vEB tree has only one element, we can delete that element in constant time by updating only min and max. These properties will allow us to cut short the chain of recursive calls.

Even if the universe size u is an odd power of 2, the difference in the sizes of the summary vEB tree and the clusters will not turn out to affect the asymptotic running times of the vEB-tree operations. The recursive procedures that implement the vEB-tree operations will all have running times characterized by the recurrence

T(u) ≤ T(↑√u) + O(1) .   (20.4)


Figure 20.6 A vEB(16) tree corresponding to the proto-vEB tree in Figure 20.4. It stores the set {2, 3, 4, 5, 7, 14, 15}. Slashes indicate NIL values. The value stored in the min attribute of a vEB tree does not appear in any of its clusters. Heavy shading serves the same purpose here as in Figure 20.4.


This recurrence looks similar to recurrence (20.2), and we will solve it in a similar fashion. Letting m = lg u, we rewrite it as

T(2^m) ≤ T(2^{⌈m/2⌉}) + O(1) .

Noting that ⌈m/2⌉ ≤ 2m/3 for all m ≥ 2, we have

T(2^m) ≤ T(2^{2m/3}) + O(1) .

Letting S(m) = T(2^m), we rewrite this last recurrence as

S(m) ≤ S(2m/3) + O(1) ,

which, by case 2 of the master method, has the solution S(m) = O(lg m). (In terms of the asymptotic solution, the fraction 2/3 does not make any difference compared with the fraction 1/2, because when we apply the master method, we find that log_{3/2} 1 = log_2 1 = 0.) Thus, we have T(u) = T(2^m) = S(m) = O(lg m) = O(lg lg u).

Before using a van Emde Boas tree, we must know the universe size u, so that we can create a van Emde Boas tree of the appropriate size that initially represents an empty set. As Problem 20-1 asks you to show, the total space requirement of a van Emde Boas tree is O(u), and it is straightforward to create an empty tree in O(u) time. In contrast, we can create an empty red-black tree in constant time. Therefore, we might not want to use a van Emde Boas tree when we perform only a small number of operations, since the time to create the data structure would exceed the time saved in the individual operations. This drawback is usually not significant, since we typically use a simple data structure, such as an array or linked list, to represent a set with only a few elements.

20.3.2 Operations on a van Emde Boas tree

We are now ready to see how to perform operations on a van Emde Boas tree. As we did for the proto van Emde Boas structure, we will consider the querying operations first, and then INSERT and DELETE. Due to the slight asymmetry between the minimum and maximum elements in a vEB tree—when a vEB tree contains at least two elements, the minimum element does not appear within a cluster but the maximum element does—we will provide pseudocode for all five querying operations. As in the operations on proto van Emde Boas structures, the operations here that take parameters V and x, where V is a van Emde Boas tree and x is an element, assume that 0 ≤ x < u.

The VEB-TREE-PREDECESSOR procedure is symmetric to VEB-TREE-SUCCESSOR, but with one additional case:

VEB-TREE-PREDECESSOR(V, x)
 1  if V.u == 2
 2      if x == 1 and V.min == 0
 3          return 0
 4      else return NIL
 5  elseif V.max ≠ NIL and x > V.max
 6      return V.max
 7  else min-low = VEB-TREE-MINIMUM(V.cluster[high(x)])
 8       if min-low ≠ NIL and low(x) > min-low
 9           offset = VEB-TREE-PREDECESSOR(V.cluster[high(x)], low(x))
10           return index(high(x), offset)
11       else pred-cluster = VEB-TREE-PREDECESSOR(V.summary, high(x))
12            if pred-cluster == NIL
13                if V.min ≠ NIL and x > V.min
14                    return V.min
15                else return NIL
16            else offset = VEB-TREE-MAXIMUM(V.cluster[pred-cluster])
17                 return index(pred-cluster, offset)

Lines 13–14 form the additional case. This case occurs when x's predecessor, if it exists, does not reside in x's cluster. In VEB-TREE-SUCCESSOR, we were assured that if x's successor resides outside of x's cluster, then it must reside in a higher-numbered cluster. But if x's predecessor is the minimum value in vEB tree V, then the predecessor resides in no cluster at all. Line 13 checks for this condition, and line 14 returns the minimum value as appropriate.

This extra case does not affect the asymptotic running time of VEB-TREE-PREDECESSOR when compared with VEB-TREE-SUCCESSOR, and so VEB-TREE-PREDECESSOR runs in O(lg lg u) worst-case time.

Inserting an element

Now we examine how to insert an element into a vEB tree. Recall that PROTO-VEB-INSERT made two recursive calls: one to insert the element and one to insert the element's cluster number into the summary. The VEB-TREE-INSERT procedure will make only one recursive call. How can we get away with just one? When we insert an element, either the cluster that it goes into already has another element or it does not. If the cluster already has another element, then the cluster number is already in the summary, and so we do not need to make that recursive call. If


the cluster does not already have another element, then the element being inserted becomes the only element in the cluster, and we do not need to recurse to insert an element into an empty vEB tree:

VEB-EMPTY-TREE-INSERT(V, x)
1  V.min = x
2  V.max = x

With this procedure in hand, here is the pseudocode for VEB-TREE-INSERT(V, x), which assumes that x is not already an element in the set represented by vEB tree V:

VEB-TREE-INSERT(V, x)
 1  if V.min == NIL
 2      VEB-EMPTY-TREE-INSERT(V, x)
 3  else if x < V.min
 4           exchange x with V.min
 5       if V.u > 2
 6           if VEB-TREE-MINIMUM(V.cluster[high(x)]) == NIL
 7               VEB-TREE-INSERT(V.summary, high(x))
 8               VEB-EMPTY-TREE-INSERT(V.cluster[high(x)], low(x))
 9           else VEB-TREE-INSERT(V.cluster[high(x)], low(x))
10       if x > V.max
11           V.max = x

This procedure works as follows. Line 1 tests whether V is an empty vEB tree and, if it is, then line 2 handles this easy case. Lines 3–11 assume that V is not empty, and therefore some element will be inserted into one of V's clusters. But that element might not necessarily be the element x passed to VEB-TREE-INSERT. If x < min, as tested in line 3, then x needs to become the new min, but we do not want to lose the original min, which must now be inserted into one of V's clusters; line 4 therefore exchanges x with min, so that the procedure goes on to insert the old min. Lines 6–9 execute only if V is not a base-case vEB tree. Line 6 determines whether the cluster that x goes into is empty; if so, line 7 inserts x's cluster number into the summary and line 8 performs the constant-time insertion of x into the empty cluster. If x's cluster is not empty, line 9 inserts x into it, and no summary update is needed, since x's cluster number already appears in the summary. Finally, lines 10–11 update max if x > max. Note that if V is a base-case vEB tree that is not empty, then lines 3–4 and 10–11 update min and max properly.


Once again, we can easily see how recurrence (20.4) characterizes the running time. Depending on the result of the test in line 6, either the recursive call in line 7 (run on a vEB tree with universe size ↑√u) or the recursive call in line 9 (run on a vEB tree with universe size ↓√u) executes. In either case, the one recursive call is on a vEB tree with universe size at most ↑√u. Because the remainder of VEB-TREE-INSERT takes O(1) time, recurrence (20.4) applies, and so the running time is O(lg lg u).
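Putting the pieces together, here is a hedged Python sketch of a vEB tree supporting MEMBER and INSERT, with MINIMUM and MAXIMUM available in O(1) time from the min and max attributes (a simplified illustration, not the book's implementation; it allocates every cluster eagerly, which is consistent with the O(u) creation time noted earlier, and it assumes u is a power of 2):

class VEBTree:
    """A van Emde Boas tree sketch supporting MINIMUM, MAXIMUM, MEMBER,
    and INSERT; u must be a power of 2 that is at least 2."""

    def __init__(self, u):
        self.u = u
        self.min = None                               # None plays the role of NIL
        self.max = None
        if u > 2:
            lg_u = u.bit_length() - 1
            self.lo_root = 1 << lg_u // 2             # lower square root of u
            self.hi_root = u // self.lo_root          # upper square root of u
            self.summary = VEBTree(self.hi_root)
            self.cluster = [VEBTree(self.lo_root) for _ in range(self.hi_root)]

    def _high(self, x):
        return x // self.lo_root

    def _low(self, x):
        return x % self.lo_root

    def member(self, x):
        if x == self.min or x == self.max:
            return True
        if self.u == 2:
            return False
        return self.cluster[self._high(x)].member(self._low(x))

    def insert(self, x):
        if self.min is None:                          # empty tree: constant time
            self.min = self.max = x
            return
        if x < self.min:
            x, self.min = self.min, x                 # x now carries the old minimum
        if self.u > 2:
            c = self.cluster[self._high(x)]
            if c.min is None:                         # empty cluster: constant-time insert there,
                self.summary.insert(self._high(x))    # so the one recursive call is on the summary
                c.min = c.max = self._low(x)
            else:                                     # nonempty cluster: the one recursive call
                c.insert(self._low(x))                # goes into the cluster instead
        if x > self.max:
            self.max = x

V = VEBTree(16)
for x in (2, 3, 4, 5, 7, 14, 15):
    V.insert(x)
assert (V.min, V.max) == (2, 15)
assert V.member(7) and not V.member(8)

The single recursive call in insert—either into the summary alongside a constant-time insertion into an empty cluster, or into a nonempty cluster—is what makes recurrence (20.4) apply.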

Deleting an element

Finally, we look at how to delete an element from a vEB tree. The procedure VEB-TREE-DELETE(V, x) assumes that x is currently an element in the set represented by the vEB tree V.

VEB-TREE-DELETE(V, x)
 1  if V.min == V.max
 2      V.min = NIL
 3      V.max = NIL
 4  elseif V.u == 2
 5      if x == 0
 6          V.min = 1
 7      else V.min = 0
 8      V.max = V.min
 9  else if x == V.min
10           first-cluster = VEB-TREE-MINIMUM(V.summary)
11           x = index(first-cluster, VEB-TREE-MINIMUM(V.cluster[first-cluster]))
12           V.min = x
13       VEB-TREE-DELETE(V.cluster[high(x)], low(x))
14       if VEB-TREE-MINIMUM(V.cluster[high(x)]) == NIL
15           VEB-TREE-DELETE(V.summary, high(x))
16           if x == V.max
17               summary-max = VEB-TREE-MAXIMUM(V.summary)
18               if summary-max == NIL
19                   V.max = V.min
20               else V.max = index(summary-max, VEB-TREE-MAXIMUM(V.cluster[summary-max]))
21       elseif x == V.max
22           V.max = index(high(x), VEB-TREE-MAXIMUM(V.cluster[high(x)]))


The VEB-TREE-DELETE procedure works as follows. If the vEB tree V contains only one element, then it's just as easy to delete it as it was to insert an element into an empty vEB tree: just set min and max to NIL. Lines 1–3 handle this case. Otherwise, V has at least two elements. Line 4 tests whether V is a base-case vEB tree and, if so, lines 5–8 set min and max to the one remaining element.

Lines 9–22 assume that V has two or more elements and that u ≥ 4. In this case, we will have to delete an element from a cluster. The element we delete from a cluster might not be x, however, because if x equals min, then once we have deleted x, some other element within one of V's clusters becomes the new min, and we have to delete that other element from its cluster. If the test in line 9 reveals that we are in this case, then line 10 sets first-cluster to the number of the cluster that contains the lowest element other than min, and line 11 sets x to the value of the lowest element in that cluster. This element becomes the new min in line 12 and, because we set x to its value, it is the element that will be deleted from its cluster.

When we reach line 13, we know that we need to delete element x from its cluster, whether x was the value originally passed to VEB-TREE-DELETE or x is the element becoming the new minimum. Line 13 deletes x from its cluster. That cluster might now become empty, which line 14 tests, and if it does, then we need to remove x’s cluster number from the summary, which line 15 handles. After updating the summary, we might need to update max. Line 16 checks to see whether we are deleting the maximum element in V and, if we are, then line 17 sets summary-max to the number of the highest-numbered nonempty cluster. (The call VEB-TREE-MAXIMUM.V:summary/ works because we have already recursively called VEB-TREE-DELETE on V:summary, and therefore V:summary:max has al- ready been updated as necessary.) If all of V ’s clusters are empty, then the only remaining element in V is min; line 18 checks for this case, and line 19 updates max appropriately. Otherwise, line 20 sets max to the maximum element in the highest-numbered cluster. (If this cluster is where the element has been deleted, we again rely on the recursive call in line 13 having already corrected that cluster’s max attribute.)

Finally, we have to handle the case in which x’s cluster did not become empty due to x being deleted. Although we do not have to update the summary in this case, we might have to update max. Line 21 tests for this case, and if we have to update max, line 22 does so (again relying on the recursive call to have corrected max in the cluster).

Now we show that VEB-TREE-DELETE runs in O(lg lg u) time in the worst case. At first glance, you might think that recurrence (20.4) does not always apply, because a single call of VEB-TREE-DELETE can make two recursive calls: one on line 13 and one on line 15. Although the procedure can make both recursive calls, let's think about what happens when it does. In order for the recursive call on


line 15 to occur, the test on line 14 must show that x's cluster is empty. The only way that x's cluster can be empty is if x was the only element in its cluster when we made the recursive call on line 13. But if x was the only element in its cluster, then that recursive call took O(1) time, because it executed only lines 1–3. Thus, we have two mutually exclusive possibilities:

� The recursive call on line 13 took constant time.

� The recursive call on line 15 did not occur.

In either case, recurrence (20.4) characterizes the running time of VEB-TREE-DELETE, and hence its worst-case running time is O(lg lg u).

Exercises

20.3-1 Modify vEB trees to support duplicate keys.

20.3-2 Modify vEB trees to support keys that have associated satellite data.

20.3-3 Write pseudocode for a procedure that creates an empty van Emde Boas tree.

20.3-4 What happens if you call VEB-TREE-INSERT with an element that is already in the vEB tree? What happens if you call VEB-TREE-DELETE with an element that is not in the vEB tree? Explain why the procedures exhibit the behavior that they do. Show how to modify vEB trees and their operations so that we can check in constant time whether an element is present.

20.3-5 Suppose that instead of ↑√u clusters, each with universe size ↓√u, we constructed vEB trees to have u^{1/k} clusters, each with universe size u^{1−1/k}, where k > 1 is a constant. If we were to modify the operations appropriately, what would be their running times? For the purpose of analysis, assume that u^{1/k} and u^{1−1/k} are always integers.

20.3-6 Creating a vEB tree with universe size u requires O(u) time. Suppose we wish to explicitly account for that time. What is the smallest number of operations n for which the amortized time of each operation in a vEB tree is O(lg lg u)?


Problems

20-1 Space requirements for van Emde Boas trees
This problem explores the space requirements for van Emde Boas trees and suggests a way to modify the data structure to make its space requirement depend on the number n of elements actually stored in the tree, rather than on the universe size u. For simplicity, assume that √u is always an integer.

a. Explain why the following recurrence characterizes the space requirement P(u) of a van Emde Boas tree with universe size u:

P(u) = (√u + 1) P(√u) + Θ(√u) .   (20.5)

b. Prove that recurrence (20.5) has the solution P(u) = O(u).

In order to reduce the space requirements, let us define a reduced-space van Emde Boas tree, or RS-vEB tree, as a vEB tree V but with the following changes:

• The attribute V.cluster, rather than being stored as a simple array of pointers to vEB trees with universe size √u, is a hash table (see Chapter 11) stored as a dynamic table (see Section 17.4). Corresponding to the array version of V.cluster, the hash table stores pointers to RS-vEB trees with universe size √u. To find the ith cluster, we look up the key i in the hash table, so that we can find the ith cluster by a single search in the hash table.

• The hash table stores only pointers to nonempty clusters. A search in the hash table for an empty cluster returns NIL, indicating that the cluster is empty.

• The attribute V.summary is NIL if all clusters are empty. Otherwise, V.summary points to an RS-vEB tree with universe size √u.

Because the hash table is implemented with a dynamic table, the space it requires is proportional to the number of nonempty clusters.

When we need to insert an element into an empty RS-vEB tree, we create the RS-vEB tree by calling the following procedure, where the parameter u is the universe size of the RS-vEB tree:

CREATE-NEW-RS-VEB-TREE(u)
1  allocate a new vEB tree V
2  V.u = u
3  V.min = NIL
4  V.max = NIL
5  V.summary = NIL
6  create V.cluster as an empty dynamic hash table
7  return V
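In Python, the corresponding creation step might look like this, with an ordinary dict standing in for the dynamic hash table (the class name RSVEBTree and the helper get_cluster are invented for illustration):

class RSVEBTree:
    """Reduced-space vEB sketch: the cluster table is a dict that stores only
    pointers to nonempty clusters (standing in for the dynamic hash table)."""

    def __init__(self, u):
        self.u = u
        self.min = None
        self.max = None
        self.summary = None          # NIL until some cluster becomes nonempty
        self.cluster = {}            # maps cluster number -> nonempty RSVEBTree

    def get_cluster(self, i):
        """Look up cluster i; None means the cluster is empty."""
        return self.cluster.get(i)

V = RSVEBTree(16)
assert V.get_cluster(3) is None      # all clusters start out empty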


c. Modify the VEB-TREE-INSERT procedure to produce pseudocode for the procedure RS-VEB-TREE-INSERT(V, x), which inserts x into the RS-vEB tree V, calling CREATE-NEW-RS-VEB-TREE as appropriate.

d. Modify the VEB-TREE-SUCCESSOR procedure to produce pseudocode for the procedure RS-VEB-TREE-SUCCESSOR(V, x), which returns the successor of x in RS-vEB tree V, or NIL if x has no successor in V.

e. Prove that, under the assumption of simple uniform hashing, your RS-VEB-TREE-INSERT and RS-VEB-TREE-SUCCESSOR procedures run in O(lg lg u) expected time.

f. Assuming that elements are never deleted from a vEB tree, prove that the space requirement for the RS-vEB tree structure is O(n), where n is the number of elements actually stored in the RS-vEB tree.

g. RS-vEB trees have another advantage over vEB trees: they require less time to create. How long does it take to create an empty RS-vEB tree?

20-2 y-fast tries
This problem investigates D. Willard's "y-fast tries" which, like van Emde Boas trees, perform each of the operations MEMBER, MINIMUM, MAXIMUM, PREDECESSOR, and SUCCESSOR on elements drawn from a universe with size u in O(lg lg u) worst-case time. The INSERT and DELETE operations take O(lg lg u) amortized time. Like reduced-space van Emde Boas trees (see Problem 20-1), y-fast tries use only O(n) space to store n elements. The design of y-fast tries relies on perfect hashing (see Section 11.5).

As a preliminary structure, suppose that we create a perfect hash table containing not only every element in the dynamic set, but every prefix of the binary representation of every element in the set. For example, if u = 16, so that lg u = 4, and x = 13 is in the set, then because the binary representation of 13 is 1101, the perfect hash table would contain the strings 1, 11, 110, and 1101. In addition to the hash table, we create a doubly linked list of the elements currently in the set, in increasing order.
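For instance, the prefix strings for one element could be generated as follows (a small sketch; the function name is invented):

def binary_prefixes(x, lg_u):
    """All nonempty prefixes of the lg_u-bit binary representation of x."""
    s = format(x, "0{}b".format(lg_u))
    return [s[:i] for i in range(1, lg_u + 1)]

# With u = 16 (lg u = 4), the element 13 = 1101 contributes four strings.
assert binary_prefixes(13, 4) == ["1", "11", "110", "1101"]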

a. How much space does this structure require?

b. Show how to perform the MINIMUM and MAXIMUM operations in O(1) time; the MEMBER, PREDECESSOR, and SUCCESSOR operations in O(lg lg u) time; and the INSERT and DELETE operations in O(lg u) time.

To reduce the space requirement to O(n), we make the following changes to the data structure:


• We cluster the n elements into n/lg u groups of size lg u. (Assume for now that lg u divides n.) The first group consists of the lg u smallest elements in the set, the second group consists of the next lg u smallest elements, and so on.

• We designate a "representative" value for each group. The representative of the ith group is at least as large as the largest element in the ith group, and it is smaller than every element of the (i + 1)st group. (The representative of the last group can be the maximum possible element u − 1.) Note that a representative might be a value not currently in the set.

• We store the lg u elements of each group in a balanced binary search tree, such as a red-black tree. Each representative points to the balanced binary search tree for its group, and each balanced binary search tree points to its group's representative.

• The perfect hash table stores only the representatives, which are also stored in a doubly linked list in increasing order.

We call this structure a y-fast trie.

c. Show that a y-fast trie requires only O(n) space to store n elements.

d. Show how to perform the MINIMUM and MAXIMUM operations in O(lg lg u) time with a y-fast trie.

e. Show how to perform the MEMBER operation in O(lg lg u) time.

f. Show how to perform the PREDECESSOR and SUCCESSOR operations in O(lg lg u) time.

g. Explain why the INSERT and DELETE operations take Ω(lg lg u) time.

h. Show how to relax the requirement that each group in a y-fast trie has exactly lg u elements to allow INSERT and DELETE to run in O(lg lg u) amortized time without affecting the asymptotic running times of the other operations.

Chapter notes

The data structure in this chapter is named after P. van Emde Boas, who described an early form of the idea in 1975 [339]. Later papers by van Emde Boas [340] and van Emde Boas, Kaas, and Zijlstra [341] refined the idea and the exposition. Mehlhorn and Näher [252] subsequently extended the ideas to apply to universe


sizes that are prime. Mehlhorn’s book [249] contains a slightly different treatment of van Emde Boas trees than the one in this chapter.

Using the ideas behind van Emde Boas trees, Dementiev et al. [83] developed a nonrecursive, three-level search tree that ran faster than van Emde Boas trees in their own experiments.

Wang and Lin [347] designed a hardware-pipelined version of van Emde Boas trees, which achieves constant amortized time per operation and uses O(lg lg u) stages in the pipeline.

A lower bound by Pǎtraşcu and Thorup [273, 274] for finding the predecessor shows that van Emde Boas trees are optimal for this operation, even if randomiza- tion is allowed.

21 Data Structures for Disjoint Sets

Some applications involve grouping n distinct elements into a collection of disjoint sets. These applications often need to perform two operations in particular: finding the unique set that contains a given element and uniting two sets. This chapter explores methods for maintaining a data structure that supports these operations.

Section 21.1 describes the operations supported by a disjoint-set data structure and presents a simple application. In Section 21.2, we look at a simple linked-list implementation for disjoint sets. Section 21.3 presents a more efficient representation using rooted trees. The running time using the tree representation is theoretically superlinear, but for all practical purposes it is linear. Section 21.4 defines and discusses a very quickly growing function and its very slowly growing inverse, which appears in the running time of operations on the tree-based implementation, and then, by a complex amortized analysis, proves an upper bound on the running time that is just barely superlinear.

21.1 Disjoint-set operations

A disjoint-set data structure maintains a collection S = {S1, S2, ..., Sk} of disjoint dynamic sets. We identify each set by a representative, which is some member of the set. In some applications, it doesn't matter which member is used as the representative; we care only that if we ask for the representative of a dynamic set twice without modifying the set between the requests, we get the same answer both times. Other applications may require a prespecified rule for choosing the representative, such as choosing the smallest member in the set (assuming, of course, that the elements can be ordered).

As in the other dynamic-set implementations we have studied, we represent each element of a set by an object. Letting x denote an object, we wish to support the following operations:


MAKE-SET(x) creates a new set whose only member (and thus representative) is x. Since the sets are disjoint, we require that x not already be in some other set.

UNION(x, y) unites the dynamic sets that contain x and y, say Sx and Sy, into a new set that is the union of these two sets. We assume that the two sets are disjoint prior to the operation. The representative of the resulting set is any member of Sx ∪ Sy, although many implementations of UNION specifically choose the representative of either Sx or Sy as the new representative. Since we require the sets in the collection to be disjoint, conceptually we destroy sets Sx and Sy, removing them from the collection S. In practice, we often absorb the elements of one of the sets into the other set.

FIND-SET(x) returns a pointer to the representative of the (unique) set containing x.

Throughout this chapter, we shall analyze the running times of disjoint-set data structures in terms of two parameters: n, the number of MAKE-SET operations, and m, the total number of MAKE-SET, UNION, and FIND-SET operations. Since the sets are disjoint, each UNION operation reduces the number of sets by one. After n − 1 UNION operations, therefore, only one set remains. The number of UNION operations is thus at most n − 1. Note also that since the MAKE-SET operations are included in the total number of operations m, we have m ≥ n. We assume that the n MAKE-SET operations are the first n operations performed.

An application of disjoint-set data structures

One of the many applications of disjoint-set data structures arises in determin- ing the connected components of an undirected graph (see Section B.4). Fig- ure 21.1(a), for example, shows a graph with four connected components.

The procedure CONNECTED-COMPONENTS that follows uses the disjoint-set operations to compute the connected components of a graph. Once CONNECTED- COMPONENTS has preprocessed the graph, the procedure SAME-COMPONENT answers queries about whether two vertices are in the same connected component.1

(In pseudocode, we denote the set of vertices of a graph G by G.V and the set of edges by G.E.)

1When the edges of the graph are static—not changing over time—we can compute the connected components faster by using depth-first search (Exercise 22.3-12). Sometimes, however, the edges are added dynamically and we need to maintain the connected components as each edge is added. In this case, the implementation given here can be more efficient than running a new depth-first search for each new edge.



Figure 21.1 (a) A graph with four connected components: {a, b, c, d}, {e, f, g}, {h, i}, and {j}. (b) The collection of disjoint sets after processing each edge.

CONNECTED-COMPONENTS(G)
1  for each vertex v ∈ G.V
2      MAKE-SET(v)
3  for each edge (u, v) ∈ G.E
4      if FIND-SET(u) ≠ FIND-SET(v)
5          UNION(u, v)

SAME-COMPONENT(u, v)
1  if FIND-SET(u) == FIND-SET(v)
2      return TRUE
3  else return FALSE

The procedure CONNECTED-COMPONENTS initially places each vertex v in its own set. Then, for each edge (u, v), it unites the sets containing u and v. By Exercise 21.1-2, after processing all the edges, two vertices are in the same connected component if and only if the corresponding objects are in the same set. Thus, CONNECTED-COMPONENTS computes sets in such a way that the procedure SAME-COMPONENT can determine whether two vertices are in the same connected component. Figure 21.1(b) illustrates how CONNECTED-COMPONENTS computes the disjoint sets.

In an actual implementation of this connected-components algorithm, the representations of the graph and the disjoint-set data structure would need to reference each other. That is, an object representing a vertex would contain a pointer to the corresponding disjoint-set object, and vice versa. These programming details depend on the implementation language, and we do not address them further here.
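A compact Python sketch of the two procedures, backed by a deliberately naive pointer-chasing disjoint-set structure with no heuristics (all names invented), run on the edges of Figure 21.1:

class DisjointSets:
    """A tiny disjoint-set structure (dict-based, no heuristics), for illustration."""

    def __init__(self):
        self.parent = {}

    def make_set(self, x):
        self.parent[x] = x

    def find_set(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find_set(x)] = self.find_set(y)

def connected_components(vertices, edges):
    s = DisjointSets()
    for v in vertices:
        s.make_set(v)
    for u, v in edges:
        if s.find_set(u) != s.find_set(v):
            s.union(u, v)
    return s

def same_component(s, u, v):
    return s.find_set(u) == s.find_set(v)

s = connected_components("abcdefghij",
                         [("b", "d"), ("e", "g"), ("a", "c"), ("h", "i"),
                          ("a", "b"), ("e", "f"), ("b", "c")])
assert same_component(s, "a", "d") and not same_component(s, "a", "e")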

Exercises

21.1-1 Suppose that CONNECTED-COMPONENTS is run on the undirected graph G = (V, E), where V = {a, b, c, d, e, f, g, h, i, j, k} and the edges of E are processed in the order (d, i), (f, k), (g, i), (b, g), (a, h), (i, j), (d, k), (b, j), (d, f), (g, j), (a, e). List the vertices in each connected component after each iteration of lines 3–5.

21.1-2 Show that after all edges are processed by CONNECTED-COMPONENTS, two vertices are in the same connected component if and only if they are in the same set.

21.1-3 During the execution of CONNECTED-COMPONENTS on an undirected graph G = (V, E) with k connected components, how many times is FIND-SET called? How many times is UNION called? Express your answers in terms of |V|, |E|, and k.

21.2 Linked-list representation of disjoint sets

Figure 21.2(a) shows a simple way to implement a disjoint-set data structure: each set is represented by its own linked list. The object for each set has attributes head, pointing to the first object in the list, and tail, pointing to the last object. Each object in the list contains a set member, a pointer to the next object in the list, and a pointer back to the set object. Within each linked list, the objects may appear in any order. The representative is the set member in the first object in the list.

With this linked-list representation, both MAKE-SET and FIND-SET are easy, requiring O(1) time. To carry out MAKE-SET(x), we create a new linked list whose only object is x. For FIND-SET(x), we just follow the pointer from x back to its set object and then return the member in the object that head points to. For example, in Figure 21.2(a), the call FIND-SET(g) would return f.



Figure 21.2 (a) Linked-list representations of two sets. Set S1 contains members d, f, and g, with representative f, and set S2 contains members b, c, e, and h, with representative c. Each object in the list contains a set member, a pointer to the next object in the list, and a pointer back to the set object. Each set object has pointers head and tail to the first and last objects, respectively. (b) The result of UNION(g, e), which appends the linked list containing e to the linked list containing g. The representative of the resulting set is f. The set object for e's list, S2, is destroyed.

A simple implementation of union

The simplest implementation of the UNION operation using the linked-list set representation takes significantly more time than MAKE-SET or FIND-SET. As Figure 21.2(b) shows, we perform UNION(x, y) by appending y's list onto the end of x's list. The representative of x's list becomes the representative of the resulting set. We use the tail pointer for x's list to quickly find where to append y's list. Because all members of y's list join x's list, we can destroy the set object for y's list. Unfortunately, we must update the pointer to the set object for each object originally on y's list, which takes time linear in the length of y's list. In Figure 21.2, for example, the operation UNION(g, e) causes pointers to be updated in the objects for b, c, e, and h.

In fact, we can easily construct a sequence of m operations on n objects that requires Θ(n²) time. Suppose that we have objects x1, x2, ..., xn. We execute the sequence of n MAKE-SET operations followed by n − 1 UNION operations shown in Figure 21.3, so that m = 2n − 1. We spend Θ(n) time performing the n MAKE-SET operations. Because the ith UNION operation updates i objects, the total number of objects updated by all n − 1 UNION operations is


Operation            Number of objects updated
MAKE-SET(x1)         1
MAKE-SET(x2)         1
   ⋮                  ⋮
MAKE-SET(xn)         1
UNION(x2, x1)        1
UNION(x3, x2)        2
UNION(x4, x3)        3
   ⋮                  ⋮
UNION(xn, xn−1)      n − 1

Figure 21.3 A sequence of 2n − 1 operations on n objects that takes Θ(n²) time, or Θ(n) time per operation on average, using the linked-list set representation and the simple implementation of UNION.

∑_{i=1}^{n−1} i = Θ(n²) .

The total number of operations is 2n − 1, and so each operation on average requires Θ(n) time. That is, the amortized time of an operation is Θ(n).

A weighted-union heuristic

In the worst case, the above implementation of the UNION procedure requires an average of Θ(n) time per call because we may be appending a longer list onto a shorter list; we must update the pointer to the set object for each member of the longer list. Suppose instead that each list also includes the length of the list (which we can easily maintain) and that we always append the shorter list onto the longer, breaking ties arbitrarily. With this simple weighted-union heuristic, a single UNION operation can still take Ω(n) time if both sets have Ω(n) members. As the following theorem shows, however, a sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, takes O(m + n lg n) time.

Theorem 21.1
Using the linked-list representation of disjoint sets and the weighted-union heuristic, a sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, takes O(m + n lg n) time.


Proof Because each UNION operation unites two disjoint sets, we perform at most n − 1 UNION operations over all. We now bound the total time taken by these UNION operations. We start by determining, for each object, an upper bound on the number of times the object's pointer back to its set object is updated. Consider a particular object x. We know that each time x's pointer was updated, x must have started in the smaller set. The first time x's pointer was updated, therefore, the resulting set must have had at least 2 members. Similarly, the next time x's pointer was updated, the resulting set must have had at least 4 members. Continuing on, we observe that for any k ≤ n, after x's pointer has been updated ⌈lg k⌉ times, the resulting set must have at least k members. Since the largest set has at most n members, each object's pointer is updated at most ⌈lg n⌉ times over all the UNION operations. Thus the total time spent updating object pointers over all UNION operations is O(n lg n). We must also account for updating the tail pointers and the list lengths, which take only Θ(1) time per UNION operation. The total time spent in all UNION operations is thus O(n lg n).

The time for the entire sequence of m operations follows easily. Each MAKE-SET and FIND-SET operation takes O(1) time, and there are O(m) of them. The total time for the entire sequence is thus O(m + n lg n).
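A Python sketch of the weighted-union heuristic, with Python lists standing in for the book's linked lists and a dict playing the role of each element's back pointer to its set object (names invented; the point is the length comparison and the one pointer update per moved element):

class LinkedListSets:
    """Disjoint sets in the linked-list style with the weighted-union heuristic,
    modelled here with Python lists rather than the book's exact layout."""

    def __init__(self):
        self.set_of = {}                    # element -> the list object that holds it

    def make_set(self, x):
        self.set_of[x] = [x]                # representative is the first element

    def find_set(self, x):
        return self.set_of[x][0]

    def union(self, x, y):
        sx, sy = self.set_of[x], self.set_of[y]
        if sx is sy:
            return
        if len(sx) < len(sy):               # always append the shorter list...
            sx, sy = sy, sx
        for z in sy:                        # ...updating one back pointer per moved element
            self.set_of[z] = sx
        sx.extend(sy)

L = LinkedListSets()
for x in "bcdefgh":
    L.make_set(x)
L.union("g", "e")
assert L.find_set("e") == L.find_set("g")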

Exercises

21.2-1 Write pseudocode for MAKE-SET, FIND-SET, and UNION using the linked-list representation and the weighted-union heuristic. Make sure to specify the attributes that you assume for set objects and list objects.

21.2-2 Show the data structure that results and the answers returned by the FIND-SET operations in the following program. Use the linked-list representation with the weighted-union heuristic.

1   for i = 1 to 16
2       MAKE-SET(xi)
3   for i = 1 to 15 by 2
4       UNION(xi, xi+1)
5   for i = 1 to 13 by 4
6       UNION(xi, xi+2)
7   UNION(x1, x5)
8   UNION(x11, x13)
9   UNION(x1, x10)
10  FIND-SET(x2)
11  FIND-SET(x9)


Assume that if the sets containing xi and xj have the same size, then the operation UNION(xi, xj) appends xj's list onto xi's list.

21.2-3 Adapt the aggregate proof of Theorem 21.1 to obtain amortized time bounds of O(1) for MAKE-SET and FIND-SET and O(lg n) for UNION using the linked-list representation and the weighted-union heuristic.

21.2-4 Give a tight asymptotic bound on the running time of the sequence of operations in Figure 21.3 assuming the linked-list representation and the weighted-union heuris- tic.

21.2-5 Professor Gompers suspects that it might be possible to keep just one pointer in each set object, rather than two (head and tail), while keeping the number of point- ers in each list element at two. Show that the professor’s suspicion is well founded by describing how to represent each set by a linked list such that each operation has the same running time as the operations described in this section. Describe also how the operations work. Your scheme should allow for the weighted-union heuristic, with the same effect as described in this section. (Hint: Use the tail of a linked list as its set’s representative.)

21.2-6 Suggest a simple change to the UNION procedure for the linked-list representation that removes the need to keep the tail pointer to the last object in each list. Whether or not the weighted-union heuristic is used, your change should not change the asymptotic running time of the UNION procedure. (Hint: Rather than appending one list to another, splice them together.)

21.3 Disjoint-set forests

In a faster implementation of disjoint sets, we represent sets by rooted trees, with each node containing one member and each tree representing one set. In a disjoint- set forest, illustrated in Figure 21.4(a), each member points only to its parent. The root of each tree contains the representative and is its own parent. As we shall see, although the straightforward algorithms that use this representation are no faster than ones that use the linked-list representation, by introducing two heuris- tics—“union by rank” and “path compression”—we can achieve an asymptotically optimal disjoint-set data structure.



Figure 21.4 A disjoint-set forest. (a) Two trees representing the two sets of Figure 21.2. The tree on the left represents the set {b, c, e, h}, with c as the representative, and the tree on the right represents the set {d, f, g}, with f as the representative. (b) The result of UNION(e, g).

We perform the three disjoint-set operations as follows. A MAKE-SET operation simply creates a tree with just one node. We perform a FIND-SET operation by following parent pointers until we find the root of the tree. The nodes visited on this simple path toward the root constitute the find path. A UNION operation, shown in Figure 21.4(b), causes the root of one tree to point to the root of the other.

Heuristics to improve the running time

So far, we have not improved on the linked-list implementation. A sequence of n − 1 UNION operations may create a tree that is just a linear chain of n nodes. By using two heuristics, however, we can achieve a running time that is almost linear in the total number of operations m.

The first heuristic, union by rank, is similar to the weighted-union heuristic we used with the linked-list representation. The obvious approach would be to make the root of the tree with fewer nodes point to the root of the tree with more nodes. Rather than explicitly keeping track of the size of the subtree rooted at each node, we shall use an approach that eases the analysis. For each node, we maintain a rank, which is an upper bound on the height of the node. In union by rank, we make the root with smaller rank point to the root with larger rank during a UNION operation.

The second heuristic, path compression, is also quite simple and highly effec- tive. As shown in Figure 21.5, we use it during FIND-SET operations to make each node on the find path point directly to the root. Path compression does not change any ranks.



Figure 21.5 Path compression during the operation FIND-SET. Arrows and self-loops at roots are omitted. (a) A tree representing a set prior to executing FIND-SET(a). Triangles represent subtrees whose roots are the nodes shown. Each node has a pointer to its parent. (b) The same set after executing FIND-SET(a). Each node on the find path now points directly to the root.

Pseudocode for disjoint-set forests

To implement a disjoint-set forest with the union-by-rank heuristic, we must keep track of ranks. With each node x, we maintain the integer value x.rank, which is an upper bound on the height of x (the number of edges in the longest simple path between x and a descendant leaf). When MAKE-SET creates a singleton set, the single node in the corresponding tree has an initial rank of 0. Each FIND-SET operation leaves all ranks unchanged. The UNION operation has two cases, depending on whether the roots of the trees have equal rank. If the roots have unequal rank, we make the root with higher rank the parent of the root with lower rank, but the ranks themselves remain unchanged. If, instead, the roots have equal ranks, we arbitrarily choose one of the roots as the parent and increment its rank.

Let us put this method into pseudocode. We designate the parent of node x by x.p. The LINK procedure, a subroutine called by UNION, takes pointers to two roots as inputs.

21.3 Disjoint-set forests 571

MAKE-SET(x)
1  x.p = x
2  x.rank = 0

UNION(x, y)
1  LINK(FIND-SET(x), FIND-SET(y))

LINK(x, y)
1  if x.rank > y.rank
2      y.p = x
3  else x.p = y
4      if x.rank == y.rank
5          y.rank = y.rank + 1

The FIND-SET procedure with path compression is quite simple:

FIND-SET(x)
1  if x ≠ x.p
2      x.p = FIND-SET(x.p)
3  return x.p

The FIND-SET procedure is a two-pass method: as it recurses, it makes one pass up the find path to find the root, and as the recursion unwinds, it makes a second pass back down the find path to update each node to point directly to the root. Each call of FIND-SET(x) returns x.p in line 3. If x is the root, then FIND-SET skips line 2 and instead returns x.p, which is x; this is the case in which the recursion bottoms out. Otherwise, line 2 executes, and the recursive call with parameter x.p returns a pointer to the root. Line 2 updates node x to point directly to the root, and line 3 returns this pointer.
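A direct Python transcription of these four procedures (a sketch that stores the p and rank attributes in dicts):

class DisjointSetForest:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        if x != self.parent[x]:
            self.parent[x] = self.find_set(self.parent[x])   # path compression
        return self.parent[x]

    def link(self, x, y):
        if self.rank[x] > self.rank[y]:
            self.parent[y] = x
        else:
            self.parent[x] = y
            if self.rank[x] == self.rank[y]:
                self.rank[y] += 1

    def union(self, x, y):
        self.link(self.find_set(x), self.find_set(y))

F = DisjointSetForest()
for x in range(6):
    F.make_set(x)
F.union(0, 1)
F.union(2, 3)
F.union(1, 3)
assert F.find_set(0) == F.find_set(2)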

Effect of the heuristics on the running time

Separately, either union by rank or path compression improves the running time of the operations on disjoint-set forests, and the improvement is even greater when we use the two heuristics together. Alone, union by rank yields a running time of O(m lg n) (see Exercise 21.4-4), and this bound is tight (see Exercise 21.3-3). Although we shall not prove it here, for a sequence of n MAKE-SET operations (and hence at most n − 1 UNION operations) and f FIND-SET operations, the path-compression heuristic alone gives a worst-case running time of Θ(n + f · (1 + log_{2+f/n} n)).

When we use both union by rank and path compression, the worst-case running time is O(m α(n)), where α(n) is a very slowly growing function, which we define in Section 21.4. In any conceivable application of a disjoint-set data structure, α(n) ≤ 4; thus, we can view the running time as linear in m in all practical situations. Strictly speaking, however, it is superlinear. In Section 21.4, we prove this upper bound.

Exercises

21.3-1 Redo Exercise 21.2-2 using a disjoint-set forest with union by rank and path com- pression.

21.3-2 Write a nonrecursive version of FIND-SET with path compression.

21.3-3 Give a sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, that takes Ω(m lg n) time when we use union by rank only.

21.3-4 Suppose that we wish to add the operation PRINT-SET(x), which is given a node x and prints all the members of x's set, in any order. Show how we can add just a single attribute to each node in a disjoint-set forest so that PRINT-SET(x) takes time linear in the number of members of x's set and the asymptotic running times of the other operations are unchanged. Assume that we can print each member of the set in O(1) time.

21.3-5 ? Show that any sequence of m MAKE-SET, FIND-SET, and LINK operations, where all the LINK operations appear before any of the FIND-SET operations, takes only O(m) time if we use both path compression and union by rank. What happens in the same situation if we use only the path-compression heuristic?


? 21.4 Analysis of union by rank with path compression

As noted in Section 21.3, the combined union-by-rank and path-compression heuristic runs in time O(m α(n)) for m disjoint-set operations on n elements. In this section, we shall examine the function α to see just how slowly it grows. Then we prove this running time using the potential method of amortized analysis.

A very quickly growing function and its very slowly growing inverse

For integers k ≥ 0 and j ≥ 1, we define the function A_k(j) as

A_k(j) = j + 1                 if k = 0 ,
         A_{k−1}^{(j+1)}(j)    if k ≥ 1 ,

where the expression A_{k−1}^{(j+1)}(j) uses the functional-iteration notation given in Section 3.2. Specifically, A_{k−1}^{(0)}(j) = j and A_{k−1}^{(i)}(j) = A_{k−1}(A_{k−1}^{(i−1)}(j)) for i ≥ 1.

We will refer to the parameter k as the level of the function A. The function A_k(j) strictly increases with both j and k. To see just how quickly this function grows, we first obtain closed-form expressions for A_1(j) and A_2(j).

Lemma 21.2
For any integer j ≥ 1, we have A_1(j) = 2j + 1.

Proof  We first use induction on i to show that A_0^{(i)}(j) = j + i. For the base case, we have A_0^{(0)}(j) = j = j + 0. For the inductive step, assume that A_0^{(i−1)}(j) = j + (i − 1). Then A_0^{(i)}(j) = A_0(A_0^{(i−1)}(j)) = (j + (i − 1)) + 1 = j + i. Finally, we note that A_1(j) = A_0^{(j+1)}(j) = j + (j + 1) = 2j + 1.

Lemma 21.3
For any integer j ≥ 1, we have A_2(j) = 2^{j+1}(j + 1) − 1.

Proof  We first use induction on i to show that A_1^{(i)}(j) = 2^i (j + 1) − 1. For the base case, we have A_1^{(0)}(j) = j = 2^0 (j + 1) − 1. For the inductive step, assume that A_1^{(i−1)}(j) = 2^{i−1}(j + 1) − 1. Then A_1^{(i)}(j) = A_1(A_1^{(i−1)}(j)) = A_1(2^{i−1}(j + 1) − 1) = 2·(2^{i−1}(j + 1) − 1) + 1 = 2^i (j + 1) − 2 + 1 = 2^i (j + 1) − 1. Finally, we note that A_2(j) = A_1^{(j+1)}(j) = 2^{j+1}(j + 1) − 1.

Now we can see how quickly A_k(j) grows by simply examining A_k(1) for levels k = 0, 1, 2, 3, 4. From the definition of A_0 and the above lemmas, we have A_0(1) = 1 + 1 = 2, A_1(1) = 2·1 + 1 = 3, and A_2(1) = 2^{1+1}·(1 + 1) − 1 = 7.

We also have

A_3(1) = A_2^{(2)}(1) = A_2(A_2(1)) = A_2(7) = 2^8 · 8 − 1 = 2^{11} − 1 = 2047

and

A_4(1) = A_3^{(2)}(1) = A_3(A_3(1)) = A_3(2047) = A_2^{(2048)}(2047) ≫ A_2(2047) = 2^{2048} · 2048 − 1 > 2^{2048} = (2^4)^{512} = 16^{512} ≫ 10^{80} ,

which is the estimated number of atoms in the observable universe. (The symbol "≫" denotes the "much-greater-than" relation.)

We define the inverse of the function A_k(n), for integer n ≥ 0, by

α(n) = min {k : A_k(1) ≥ n} .

In words, α(n) is the lowest level k for which A_k(1) is at least n. From the above values of A_k(1), we see that

α(n) = 0   for 0 ≤ n ≤ 2 ,
       1   for n = 3 ,
       2   for 4 ≤ n ≤ 7 ,
       3   for 8 ≤ n ≤ 2047 ,
       4   for 2048 ≤ n ≤ A_4(1) .

It is only for values of n so large that the term "astronomical" understates them (greater than A_4(1), a huge number) that α(n) > 4, and so α(n) ≤ 4 for all practical purposes.
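Under the assumption that we only evaluate it at tiny arguments, the definition of A_k(j) can be checked directly; the following short Python sketch reproduces the values A_1(1) = 3, A_2(1) = 7, and A_3(1) = 2047 computed above (A_4(1) is already far too large to compute this way).

def A(k, j):
    # A_0(j) = j + 1; for k >= 1, A_k(j) iterates A_{k-1} a total of j + 1
    # times starting from j, i.e., the functional iteration A_{k-1}^{(j+1)}(j).
    if k == 0:
        return j + 1
    result = j
    for _ in range(j + 1):
        result = A(k - 1, result)
    return result

assert A(1, 1) == 3        # 2*1 + 1
assert A(2, 1) == 7        # 2^2 * 2 - 1
assert A(3, 1) == 2047     # A_2(7) = 2^8 * 8 - 1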


Properties of ranks

In the remainder of this section, we prove an O(m α(n)) bound on the running time of the disjoint-set operations with union by rank and path compression. In order to prove this bound, we first prove some simple properties of ranks.

Lemma 21.4
For all nodes x, we have x.rank ≤ x.p.rank, with strict inequality if x ≠ x.p. The value of x.rank is initially 0 and increases through time until x ≠ x.p; from then on, x.rank does not change. The value of x.p.rank monotonically increases over time.

Proof  The proof is a straightforward induction on the number of operations, using the implementations of MAKE-SET, UNION, and FIND-SET that appear in Section 21.3. We leave it as Exercise 21.4-1.

Corollary 21.5 As we follow the simple path from any node toward a root, the node ranks strictly increase.

Lemma 21.6
Every node has rank at most n − 1.

Proof  Each node's rank starts at 0, and it increases only upon LINK operations. Because there are at most n − 1 UNION operations, there are also at most n − 1 LINK operations. Because each LINK operation either leaves all ranks alone or increases some node's rank by 1, all ranks are at most n − 1.

Lemma 21.6 provides a weak bound on ranks. In fact, every node has rank at most ⌊lg n⌋ (see Exercise 21.4-2). The looser bound of Lemma 21.6 will suffice for our purposes, however.

Proving the time bound

We shall use the potential method of amortized analysis (see Section 17.3) to prove the O(m α(n)) time bound. In performing the amortized analysis, we will find it convenient to assume that we invoke the LINK operation rather than the UNION operation. That is, since the parameters of the LINK procedure are pointers to two roots, we act as though we perform the appropriate FIND-SET operations separately. The following lemma shows that even if we count the extra FIND-SET operations induced by UNION calls, the asymptotic running time remains unchanged.

Lemma 21.7
Suppose we convert a sequence S′ of m′ MAKE-SET, UNION, and FIND-SET operations into a sequence S of m MAKE-SET, LINK, and FIND-SET operations by turning each UNION into two FIND-SET operations followed by a LINK. Then, if sequence S runs in O(m α(n)) time, sequence S′ runs in O(m′ α(n)) time.

Proof  Since each UNION operation in sequence S′ is converted into three operations in S, we have m′ ≤ m ≤ 3m′. Since m = O(m′), an O(m α(n)) time bound for the converted sequence S implies an O(m′ α(n)) time bound for the original sequence S′.

In the remainder of this section, we shall assume that the initial sequence of m′ MAKE-SET, UNION, and FIND-SET operations has been converted to a sequence of m MAKE-SET, LINK, and FIND-SET operations. We now prove an O(m α(n)) time bound for the converted sequence and appeal to Lemma 21.7 to prove the O(m′ α(n)) running time of the original sequence of m′ operations.

Potential function

The potential function we use assigns a potential φ_q(x) to each node x in the disjoint-set forest after q operations. We sum the node potentials for the potential of the entire forest: Φ_q = Σ_x φ_q(x), where Φ_q denotes the potential of the forest after q operations. The forest is empty prior to the first operation, and we arbitrarily set Φ_0 = 0. No potential Φ_q will ever be negative.

The value of φ_q(x) depends on whether x is a tree root after the qth operation. If it is, or if x.rank = 0, then φ_q(x) = α(n) · x.rank.

Now suppose that after the qth operation, x is not a root and that x.rank ≥ 1. We need to define two auxiliary functions on x before we can define φ_q(x). First we define

level(x) = max {k : x.p.rank ≥ A_k(x.rank)} .

That is, level(x) is the greatest level k for which A_k, applied to x's rank, is no greater than x's parent's rank. We claim that

0 ≤ level(x) < α(n) ,   (21.1)

since x.p.rank ≥ x.rank + 1 = A_0(x.rank) (by Lemma 21.4), which implies that level(x) ≥ 0, and since A_{α(n)}(x.rank) ≥ A_{α(n)}(1) ≥ n > x.p.rank (by Lemma 21.6), which implies that level(x) < α(n). Second, we define

iter(x) = max {i : x.p.rank ≥ A_{level(x)}^{(i)}(x.rank)} .

That is, iter(x) is the largest number of times we can iteratively apply A_{level(x)}, starting from x's rank, before obtaining a value greater than x's parent's rank. We claim that

1 ≤ iter(x) ≤ x.rank ,   (21.2)

since x.p.rank ≥ A_{level(x)}(x.rank) = A_{level(x)}^{(1)}(x.rank) (by definition of level(x)), which implies that iter(x) ≥ 1, and since A_{level(x)}^{(x.rank+1)}(x.rank) = A_{level(x)+1}(x.rank) > x.p.rank (by definition of level(x)), which implies that iter(x) ≤ x.rank. Note that because x.p.rank monotonically increases over time, in order for iter(x) to decrease, level(x) must increase. As long as level(x) remains unchanged, iter(x) must either increase or remain unchanged.

With these auxiliary functions in place, we are ready to define the potential of node x after q operations:

φ_q(x) = α(n) · x.rank                               if x is a root or x.rank = 0 ,
         (α(n) − level(x)) · x.rank − iter(x)        if x is not a root and x.rank ≥ 1 .

We next investigate some useful properties of node potentials.

Lemma 21.8
For every node x, and for all operation counts q, we have

0 ≤ φ_q(x) ≤ α(n) · x.rank .


Proof  If x is a root or x.rank = 0, then φ_q(x) = α(n) · x.rank by definition. Now suppose that x is not a root and that x.rank ≥ 1. We obtain a lower bound on φ_q(x) by maximizing level(x) and iter(x). By the bound (21.1), level(x) ≤ α(n) − 1, and by the bound (21.2), iter(x) ≤ x.rank. Thus,

φ_q(x) = (α(n) − level(x)) · x.rank − iter(x)
       ≥ (α(n) − (α(n) − 1)) · x.rank − x.rank
       = x.rank − x.rank
       = 0 .

Similarly, we obtain an upper bound on φ_q(x) by minimizing level(x) and iter(x). By the bound (21.1), level(x) ≥ 0, and by the bound (21.2), iter(x) ≥ 1. Thus,

φ_q(x) ≤ (α(n) − 0) · x.rank − 1
       = α(n) · x.rank − 1
       < α(n) · x.rank .

The heart of the analysis is to show that each FIND-SET operation whose find path contains s nodes decreases the total potential by at least max(0, s − (α(n) + 2)). Suppose that x is a node on the find path such that x.rank > 0 and x is followed somewhere on the find path by another node y that is not a root, where level(y) = level(x) just before the FIND-SET operation. (Node y need not immediately follow x on the find path.) All but at most α(n) + 2 nodes on the find path satisfy these constraints on x. Those that do not satisfy them are the first node on the find path (if it has rank 0), the last node on the path (i.e., the root), and the last node w on the path for which level(w) = k, for each k = 0, 1, 2, ..., α(n) − 1.

Let us fix such a node x, and we shall show that x's potential decreases by at least 1. Let k = level(x) = level(y). Just prior to the path compression caused by the FIND-SET, we have

x.p.rank ≥ A_k^{(iter(x))}(x.rank)   (by definition of iter(x)) ,
y.p.rank ≥ A_k(y.rank)               (by definition of level(y)) ,
y.rank ≥ x.p.rank                    (by Corollary 21.5 and because y follows x on the find path) .

Putting these inequalities together and letting i be the value of iter(x) before path compression, we have

y.p.rank ≥ A_k(y.rank)
         ≥ A_k(x.p.rank)                  (because A_k(j) is strictly increasing)
         ≥ A_k(A_k^{(iter(x))}(x.rank))
         = A_k^{(i+1)}(x.rank) .

Because path compression will make x and y have the same parent, we know that after path compression, x.p.rank = y.p.rank and that the path compression does not decrease y.p.rank. Since x.rank does not change, after path compression we have that x.p.rank ≥ A_k^{(i+1)}(x.rank). Thus, path compression will cause either iter(x) to increase (to at least i + 1) or level(x) to increase (which occurs if iter(x) increases to at least x.rank + 1). In either case, by Lemma 21.10, we have φ_q(x) ≤ φ_{q−1}(x) − 1. Hence, x's potential decreases by at least 1.

The amortized cost of the FIND-SET operation is the actual cost plus the change in potential. The actual cost is O(s), and we have shown that the total potential decreases by at least max(0, s − (α(n) + 2)). The amortized cost, therefore, is at most O(s) − (s − (α(n) + 2)) = O(s) − s + O(α(n)) = O(α(n)), since we can scale up the units of potential to dominate the constant hidden in O(s).

Putting the preceding lemmas together yields the following theorem.

Theorem 21.14
A sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, can be performed on a disjoint-set forest with union by rank and path compression in worst-case time O(m α(n)).

Proof Immediate from Lemmas 21.7, 21.11, 21.12, and 21.13.

Exercises

21.4-1 Prove Lemma 21.4.

21.4-2 Prove that every node has rank at most ⌊lg n⌋.

21.4-3 In light of Exercise 21.4-2, how many bits are necessary to store x.rank for each node x?

21.4-4 Using Exercise 21.4-2, give a simple proof that operations on a disjoint-set forest with union by rank but without path compression run in O(m lg n) time.

21.4-5 Professor Dante reasons that because node ranks increase strictly along a simple path to the root, node levels must monotonically increase along the path. In other words, if x.rank > 0 and x.p is not a root, then level(x) ≤ level(x.p). Is the professor correct?

21.4-6 ? Consider the function α′(n) = min {k : A_k(1) ≥ lg(n + 1)}. Show that α′(n) ≤ 3 for all practical values of n and, using Exercise 21.4-2, show how to modify the potential-function argument to prove that we can perform a sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, on a disjoint-set forest with union by rank and path compression in worst-case time O(m α′(n)).

Problems

21-1 Off-line minimum
The off-line minimum problem asks us to maintain a dynamic set T of elements from the domain {1, 2, ..., n} under the operations INSERT and EXTRACT-MIN. We are given a sequence S of n INSERT and m EXTRACT-MIN calls, where each key in {1, 2, ..., n} is inserted exactly once. We wish to determine which key is returned by each EXTRACT-MIN call. Specifically, we wish to fill in an array extracted[1..m], where for i = 1, 2, ..., m, extracted[i] is the key returned by the i-th EXTRACT-MIN call. The problem is "off-line" in the sense that we are allowed to process the entire sequence S before determining any of the returned keys.

a. In the following instance of the off-line minimum problem, each operation INSERT(i) is represented by the value of i and each EXTRACT-MIN is represented by the letter E:

4, 8, E, 3, E, 9, 2, 6, E, E, E, 1, 7, E, 5 .

Fill in the correct values in the extracted array.

To develop an algorithm for this problem, we break the sequence S into homogeneous subsequences. That is, we represent S by

I_1, E, I_2, E, I_3, ..., I_m, E, I_{m+1} ,

where each E represents a single EXTRACT-MIN call and each I_j represents a (possibly empty) sequence of INSERT calls. For each subsequence I_j, we initially place the keys inserted by these operations into a set K_j, which is empty if I_j is empty. We then do the following:

OFF-LINE-MINIMUM(m, n)
1  for i = 1 to n
2      determine j such that i ∈ K_j
3      if j ≠ m + 1
4          extracted[j] = i
5          let l be the smallest value greater than j
              for which set K_l exists
6          K_l = K_j ∪ K_l, destroying K_j
7  return extracted

b. Argue that the array extracted returned by OFF-LINE-MINIMUM is correct.

c. Describe how to implement OFF-LINE-MINIMUM efficiently with a disjoint- set data structure. Give a tight bound on the worst-case running time of your implementation.

21-2 Depth determination
In the depth-determination problem, we maintain a forest F = {T_i} of rooted trees under three operations:

MAKE-TREE(v) creates a tree whose only node is v.

FIND-DEPTH(v) returns the depth of node v within its tree.

GRAFT(r, v) makes node r, which is assumed to be the root of a tree, become the child of node v, which is assumed to be in a different tree than r but may or may not itself be a root.

a. Suppose that we use a tree representation similar to a disjoint-set forest: v.p is the parent of node v, except that v.p = v if v is a root. Suppose further that we implement GRAFT(r, v) by setting r.p = v and FIND-DEPTH(v) by following the find path up to the root, returning a count of all nodes other than v encountered. Show that the worst-case running time of a sequence of m MAKE-TREE, FIND-DEPTH, and GRAFT operations is Θ(m²).

By using the union-by-rank and path-compression heuristics, we can reduce the worst-case running time. We use the disjoint-set forest S = {S_i}, where each set S_i (which is itself a tree) corresponds to a tree T_i in the forest F. The tree structure within a set S_i, however, does not necessarily correspond to that of T_i. In fact, the implementation of S_i does not record the exact parent-child relationships but nevertheless allows us to determine any node's depth in T_i.

The key idea is to maintain in each node v a "pseudodistance" v.d, which is defined so that the sum of the pseudodistances along the simple path from v to the root of its set S_i equals the depth of v in T_i. That is, if the simple path from v to its root in S_i is v_0, v_1, ..., v_k, where v_0 = v and v_k is S_i's root, then the depth of v in T_i is Σ_{j=0}^{k} v_j.d.

b. Give an implementation of MAKE-TREE.

c. Show how to modify FIND-SET to implement FIND-DEPTH. Your implemen- tation should perform path compression, and its running time should be linear in the length of the find path. Make sure that your implementation updates pseudodistances correctly.

d. Show how to implement GRAFT(r, v), which combines the sets containing r and v, by modifying the UNION and LINK procedures. Make sure that your implementation updates pseudodistances correctly. Note that the root of a set S_i is not necessarily the root of the corresponding tree T_i.

e. Give a tight bound on the worst-case running time of a sequence of m MAKE- TREE, FIND-DEPTH, and GRAFT operations, n of which are MAKE-TREE op- erations.

21-3 Tarjan’s off-line least-common-ancestors algorithm The least common ancestor of two nodes u and � in a rooted tree T is the node w that is an ancestor of both u and � and that has the greatest depth in T . In the off-line least-common-ancestors problem, we are given a rooted tree T and an arbitrary set P D ffu; �gg of unordered pairs of nodes in T , and we wish to deter- mine the least common ancestor of each pair in P .

To solve the off-line least-common-ancestors problem, the following procedure performs a tree walk of T with the initial call LCA.T:root/. We assume that each node is colored WHITE prior to the walk.

LCA(u)
 1  MAKE-SET(u)
 2  FIND-SET(u).ancestor = u
 3  for each child v of u in T
 4      LCA(v)
 5      UNION(u, v)
 6      FIND-SET(u).ancestor = u
 7  u.color = BLACK
 8  for each node v such that {u, v} ∈ P
 9      if v.color == BLACK
10          print "The least common ancestor of" u "and" v "is" FIND-SET(v).ancestor

a. Argue that line 10 executes exactly once for each pair {u, v} ∈ P.

b. Argue that at the time of the call LCA(u), the number of sets in the disjoint-set data structure equals the depth of u in T.

c. Prove that LCA correctly prints the least common ancestor of u and v for each pair {u, v} ∈ P.

d. Analyze the running time of LCA, assuming that we use the implementation of the disjoint-set data structure in Section 21.3.

Chapter notes

Many of the important results for disjoint-set data structures are due at least in part to R. E. Tarjan. Using aggregate analysis, Tarjan [328, 330] gave the first tight upper bound in terms of the very slowly growing inverse α̂(m, n) of Ackermann's function. (The function A_k(j) given in Section 21.4 is similar to Ackermann's function, and the function α(n) is similar to the inverse. Both α(n) and α̂(m, n) are at most 4 for all conceivable values of m and n.) An O(m lg* n) upper bound was proven earlier by Hopcroft and Ullman [5, 179]. The treatment in Section 21.4 is adapted from a later analysis by Tarjan [332], which is in turn based on an analysis by Kozen [220]. Harfst and Reingold [161] give a potential-based version of Tarjan's earlier bound.

Tarjan and van Leeuwen [333] discuss variants on the path-compression heuris- tic, including “one-pass methods,” which sometimes offer better constant factors in their performance than do two-pass methods. As with Tarjan’s earlier analyses of the basic path-compression heuristic, the analyses by Tarjan and van Leeuwen are aggregate. Harfst and Reingold [161] later showed how to make a small change to the potential function to adapt their path-compression analysis to these one-pass variants. Gabow and Tarjan [121] show that in certain applications, the disjoint-set operations can be made to run in O.m/ time.

Tarjan [329] showed that a lower bound of Ω(m α̂(m, n)) time is required for operations on any disjoint-set data structure satisfying certain technical conditions. This lower bound was later generalized by Fredman and Saks [113], who showed that in the worst case, Ω(m α̂(m, n)) (lg n)-bit words of memory must be accessed.

VI Graph Algorithms

Introduction

Graph problems pervade computer science, and algorithms for working with them are fundamental to the field. Hundreds of interesting computational problems are couched in terms of graphs. In this part, we touch on a few of the more significant ones.

Chapter 22 shows how we can represent a graph in a computer and then discusses algorithms based on searching a graph using either breadth-first search or depth- first search. The chapter gives two applications of depth-first search: topologically sorting a directed acyclic graph and decomposing a directed graph into its strongly connected components.

Chapter 23 describes how to compute a minimum-weight spanning tree of a graph: the least-weight way of connecting all of the vertices together when each edge has an associated weight. The algorithms for computing minimum spanning trees serve as good examples of greedy algorithms (see Chapter 16).

Chapters 24 and 25 consider how to compute shortest paths between vertices when each edge has an associated length or “weight.” Chapter 24 shows how to find shortest paths from a given source vertex to all other vertices, and Chapter 25 examines methods to compute shortest paths between every pair of vertices.

Finally, Chapter 26 shows how to compute a maximum flow of material in a flow network, which is a directed graph having a specified source vertex of material, a specified sink vertex, and specified capacities for the amount of material that can traverse each directed edge. This general problem arises in many forms, and a good algorithm for computing maximum flows can help solve a variety of related problems efficiently.

When we characterize the running time of a graph algorithm on a given graph G = (V, E), we usually measure the size of the input in terms of the number of vertices |V| and the number of edges |E| of the graph. That is, we describe the size of the input with two parameters, not just one. We adopt a common notational convention for these parameters. Inside asymptotic notation (such as O-notation or Θ-notation), and only inside such notation, the symbol V denotes |V| and the symbol E denotes |E|. For example, we might say, "the algorithm runs in time O(V E)," meaning that the algorithm runs in time O(|V| |E|). This convention makes the running-time formulas easier to read, without risk of ambiguity.

Another convention we adopt appears in pseudocode. We denote the vertex set of a graph G by G.V and its edge set by G.E. That is, the pseudocode views vertex and edge sets as attributes of a graph.

22 Elementary Graph Algorithms

This chapter presents methods for representing a graph and for searching a graph. Searching a graph means systematically following the edges of the graph so as to visit the vertices of the graph. A graph-searching algorithm can discover much about the structure of a graph. Many algorithms begin by searching their input graph to obtain this structural information. Several other graph algorithms elabo- rate on basic graph searching. Techniques for searching a graph lie at the heart of the field of graph algorithms.

Section 22.1 discusses the two most common computational representations of graphs: as adjacency lists and as adjacency matrices. Section 22.2 presents a sim- ple graph-searching algorithm called breadth-first search and shows how to cre- ate a breadth-first tree. Section 22.3 presents depth-first search and proves some standard results about the order in which depth-first search visits vertices. Sec- tion 22.4 provides our first real application of depth-first search: topologically sort- ing a directed acyclic graph. A second application of depth-first search, finding the strongly connected components of a directed graph, is the topic of Section 22.5.

22.1 Representations of graphs

We can choose between two standard ways to represent a graph G = (V, E): as a collection of adjacency lists or as an adjacency matrix. Either way applies to both directed and undirected graphs. Because the adjacency-list representation provides a compact way to represent sparse graphs—those for which |E| is much less than |V|²—it is usually the method of choice. Most of the graph algorithms presented in this book assume that an input graph is represented in adjacency-list form. We may prefer an adjacency-matrix representation, however, when the graph is dense—|E| is close to |V|²—or when we need to be able to tell quickly if there is an edge connecting two given vertices. For example, two of the all-pairs


Figure 22.1 Two representations of an undirected graph. (a) An undirected graph G with 5 vertices and 7 edges. (b) An adjacency-list representation of G. (c) The adjacency-matrix representation of G.


Figure 22.2 Two representations of a directed graph. (a) A directed graph G with 6 vertices and 8 edges. (b) An adjacency-list representation of G. (c) The adjacency-matrix representation of G.

shortest-paths algorithms presented in Chapter 25 assume that their input graphs are represented by adjacency matrices.

The adjacency-list representation of a graph G = (V, E) consists of an array Adj of |V| lists, one for each vertex in V. For each u ∈ V, the adjacency list Adj[u] contains all the vertices v such that there is an edge (u, v) ∈ E. That is, Adj[u] consists of all the vertices adjacent to u in G. (Alternatively, it may contain pointers to these vertices.) Since the adjacency lists represent the edges of a graph, in pseudocode we treat the array Adj as an attribute of the graph, just as we treat the edge set E. In pseudocode, therefore, we will see notation such as G.Adj[u]. Figure 22.1(b) is an adjacency-list representation of the undirected graph in Figure 22.1(a). Similarly, Figure 22.2(b) is an adjacency-list representation of the directed graph in Figure 22.2(a).

If G is a directed graph, the sum of the lengths of all the adjacency lists is |E|, since an edge of the form (u, v) is represented by having v appear in Adj[u]. If G is an undirected graph, the sum of the lengths of all the adjacency lists is 2|E|, since if (u, v) is an undirected edge, then u appears in v's adjacency list and vice versa. For both directed and undirected graphs, the adjacency-list representation has the desirable property that the amount of memory it requires is Θ(V + E).

We can readily adapt adjacency lists to represent weighted graphs, that is, graphs for which each edge has an associated weight, typically given by a weight function w : E → ℝ. For example, let G = (V, E) be a weighted graph with weight function w. We simply store the weight w(u, v) of the edge (u, v) ∈ E with vertex v in u's adjacency list. The adjacency-list representation is quite robust in that we can modify it to support many other graph variants.
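A minimal Python sketch of the adjacency-list representation, with a dictionary of lists playing the role of the array Adj and a parallel dictionary storing edge weights; the names here are illustrative, not from the text.

from collections import defaultdict

class AdjListGraph:
    def __init__(self, directed=False):
        self.adj = defaultdict(list)   # Adj[u]: the vertices adjacent to u
        self.w = {}                    # optional weight w(u, v)
        self.directed = directed

    def add_edge(self, u, v, weight=None):
        self.adj[u].append(v)
        if weight is not None:
            self.w[(u, v)] = weight
        if not self.directed:
            # an undirected edge appears in both adjacency lists
            self.adj[v].append(u)
            if weight is not None:
                self.w[(v, u)] = weight

# A small undirected example: the sum of the list lengths is 2|E|.
g = AdjListGraph()
for u, v in [(1, 2), (2, 3), (1, 3)]:
    g.add_edge(u, v)
assert sum(len(g.adj[u]) for u in g.adj) == 2 * 3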

A potential disadvantage of the adjacency-list representation is that it provides no quicker way to determine whether a given edge (u, v) is present in the graph than to search for v in the adjacency list Adj[u]. An adjacency-matrix representation of the graph remedies this disadvantage, but at the cost of using asymptotically more memory. (See Exercise 22.1-8 for suggestions of variations on adjacency lists that permit faster edge lookup.)

For the adjacency-matrix representation of a graph G = (V, E), we assume that the vertices are numbered 1, 2, ..., |V| in some arbitrary manner. Then the adjacency-matrix representation of a graph G consists of a |V| × |V| matrix A = (a_ij) such that

a_ij = 1 if (i, j) ∈ E ,
       0 otherwise .

Figures 22.1(c) and 22.2(c) are the adjacency matrices of the undirected and directed graphs in Figures 22.1(a) and 22.2(a), respectively. The adjacency matrix of a graph requires Θ(V²) memory, independent of the number of edges in the graph.
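A corresponding sketch of the adjacency-matrix representation in Python, assuming for convenience that vertices are numbered 0, 1, ..., n − 1 rather than 1, ..., |V|.

def adjacency_matrix(n, edges, directed=False):
    # A[i][j] = 1 if (i, j) is an edge, and 0 otherwise; Θ(V^2) storage.
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = 1
        if not directed:
            A[v][u] = 1    # an undirected graph's matrix is its own transpose
    return A

A = adjacency_matrix(3, [(0, 1), (1, 2)])
assert A == [[0, 1, 0], [1, 0, 1], [0, 1, 0]]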

Observe the symmetry along the main diagonal of the adjacency matrix in Figure 22.1(c). Since in an undirected graph, (u, v) and (v, u) represent the same edge, the adjacency matrix A of an undirected graph is its own transpose: A = A^T. In some applications, it pays to store only the entries on and above the diagonal of the adjacency matrix, thereby cutting the memory needed to store the graph almost in half.

Like the adjacency-list representation of a graph, an adjacency matrix can represent a weighted graph. For example, if G = (V, E) is a weighted graph with edge-weight function w, we can simply store the weight w(u, v) of the edge (u, v) ∈ E as the entry in row u and column v of the adjacency matrix. If an edge does not exist, we can store a NIL value as its corresponding matrix entry, though for many problems it is convenient to use a value such as 0 or ∞.

Although the adjacency-list representation is asymptotically at least as space-efficient as the adjacency-matrix representation, adjacency matrices are simpler, and so we may prefer them when graphs are reasonably small. Moreover, adjacency matrices carry a further advantage for unweighted graphs: they require only one bit per entry.

Representing attributes

Most algorithms that operate on graphs need to maintain attributes for vertices and/or edges. We indicate these attributes using our usual notation, such as v.d for an attribute d of a vertex v. When we indicate edges as pairs of vertices, we use the same style of notation. For example, if edges have an attribute f, then we denote this attribute for edge (u, v) by (u, v).f. For the purpose of presenting and understanding algorithms, our attribute notation suffices.

Implementing vertex and edge attributes in real programs can be another story entirely. There is no one best way to store and access vertex and edge attributes. For a given situation, your decision will likely depend on the programming language you are using, the algorithm you are implementing, and how the rest of your program uses the graph. If you represent a graph using adjacency lists, one design represents vertex attributes in additional arrays, such as an array d[1..|V|] that parallels the Adj array. If the vertices adjacent to u are in Adj[u], then what we call the attribute u.d would actually be stored in the array entry d[u]. Many other ways of implementing attributes are possible. For example, in an object-oriented programming language, vertex attributes might be represented as instance variables within a subclass of a Vertex class.
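For example, one simple arrangement in Python is to keep each attribute in a dictionary keyed by vertex, paralleling the adjacency structure; the attribute names below are the ones BFS uses, but the layout itself is only an illustrative sketch.

adj = {1: [2, 5], 2: [1, 3], 3: [2], 5: [1]}   # the graph's Adj "array"

d = {u: float("inf") for u in adj}     # what the text writes as u.d
pi = {u: None for u in adj}            # what the text writes as u.pi
color = {u: "WHITE" for u in adj}      # what the text writes as u.color

d[1] = 0                               # e.g., initialize a source vertex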

Exercises

22.1-1 Given an adjacency-list representation of a directed graph, how long does it take to compute the out-degree of every vertex? How long does it take to compute the in-degrees?

22.1-2 Give an adjacency-list representation for a complete binary tree on 7 vertices. Give an equivalent adjacency-matrix representation. Assume that vertices are numbered from 1 to 7 as in a binary heap.

22.1-3 The transpose of a directed graph G = (V, E) is the graph G^T = (V, E^T), where E^T = {(v, u) ∈ V × V : (u, v) ∈ E}. Thus, G^T is G with all its edges reversed. Describe efficient algorithms for computing G^T from G, for both the adjacency-list and adjacency-matrix representations of G. Analyze the running times of your algorithms.

22.1-4 Given an adjacency-list representation of a multigraph G = (V, E), describe an O(V + E)-time algorithm to compute the adjacency-list representation of the "equivalent" undirected graph G′ = (V, E′), where E′ consists of the edges in E with all multiple edges between two vertices replaced by a single edge and with all self-loops removed.

22.1-5 The square of a directed graph G = (V, E) is the graph G² = (V, E²) such that (u, v) ∈ E² if and only if G contains a path with at most two edges between u and v. Describe efficient algorithms for computing G² from G for both the adjacency-list and adjacency-matrix representations of G. Analyze the running times of your algorithms.

22.1-6 Most graph algorithms that take an adjacency-matrix representation as input require time Ω(V²), but there are some exceptions. Show how to determine whether a directed graph G contains a universal sink—a vertex with in-degree |V| − 1 and out-degree 0—in time O(V), given an adjacency matrix for G.

22.1-7 The incidence matrix of a directed graph G = (V, E) with no self-loops is a |V| × |E| matrix B = (b_ij) such that

b_ij = −1 if edge j leaves vertex i ,
        1 if edge j enters vertex i ,
        0 otherwise .

Describe what the entries of the matrix product B B^T represent, where B^T is the transpose of B.

22.1-8 Suppose that instead of a linked list, each array entry Adj[u] is a hash table containing the vertices v for which (u, v) ∈ E. If all edge lookups are equally likely, what is the expected time to determine whether an edge is in the graph? What disadvantages does this scheme have? Suggest an alternate data structure for each edge list that solves these problems. Does your alternative have disadvantages compared to the hash table?

22.2 Breadth-first search

Breadth-first search is one of the simplest algorithms for searching a graph and the archetype for many important graph algorithms. Prim’s minimum-spanning- tree algorithm (Section 23.2) and Dijkstra’s single-source shortest-paths algorithm (Section 24.3) use ideas similar to those in breadth-first search.

Given a graph G = (V, E) and a distinguished source vertex s, breadth-first search systematically explores the edges of G to "discover" every vertex that is reachable from s. It computes the distance (smallest number of edges) from s to each reachable vertex. It also produces a "breadth-first tree" with root s that contains all reachable vertices. For any vertex v reachable from s, the simple path in the breadth-first tree from s to v corresponds to a "shortest path" from s to v in G, that is, a path containing the smallest number of edges. The algorithm works on both directed and undirected graphs.

Breadth-first search is so named because it expands the frontier between discovered and undiscovered vertices uniformly across the breadth of the frontier. That is, the algorithm discovers all vertices at distance k from s before discovering any vertices at distance k + 1.

To keep track of progress, breadth-first search colors each vertex white, gray, or black. All vertices start out white and may later become gray and then black. A vertex is discovered the first time it is encountered during the search, at which time it becomes nonwhite. Gray and black vertices, therefore, have been discovered, but breadth-first search distinguishes between them to ensure that the search proceeds in a breadth-first manner.1 If (u, v) ∈ E and vertex u is black, then vertex v is either gray or black; that is, all vertices adjacent to black vertices have been discovered. Gray vertices may have some adjacent white vertices; they represent the frontier between discovered and undiscovered vertices.

Breadth-first search constructs a breadth-first tree, initially containing only its root, which is the source vertex s. Whenever the search discovers a white vertex v in the course of scanning the adjacency list of an already discovered vertex u, the vertex v and the edge (u, v) are added to the tree. We say that u is the predecessor or parent of v in the breadth-first tree. Since a vertex is discovered at most once, it has at most one parent. Ancestor and descendant relationships in the breadth-first tree are defined relative to the root s as usual: if u is on the simple path in the tree from the root s to vertex v, then u is an ancestor of v and v is a descendant of u.

1We distinguish between gray and black vertices to help us understand how breadth-first search op- erates. In fact, as Exercise 22.2-3 shows, we would get the same result even if we did not distinguish between gray and black vertices.

The breadth-first-search procedure BFS below assumes that the input graph G = (V, E) is represented using adjacency lists. It attaches several additional attributes to each vertex in the graph. We store the color of each vertex u ∈ V in the attribute u.color and the predecessor of u in the attribute u.π. If u has no predecessor (for example, if u = s or u has not been discovered), then u.π = NIL. The attribute u.d holds the distance from the source s to vertex u computed by the algorithm. The algorithm also uses a first-in, first-out queue Q (see Section 10.1) to manage the set of gray vertices.

BFS(G, s)
 1  for each vertex u ∈ G.V − {s}
 2      u.color = WHITE
 3      u.d = ∞
 4      u.π = NIL
 5  s.color = GRAY
 6  s.d = 0
 7  s.π = NIL
 8  Q = ∅
 9  ENQUEUE(Q, s)
10  while Q ≠ ∅
11      u = DEQUEUE(Q)
12      for each v ∈ G.Adj[u]
13          if v.color == WHITE
14              v.color = GRAY
15              v.d = u.d + 1
16              v.π = u
17              ENQUEUE(Q, v)
18      u.color = BLACK
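The pseudocode translates almost line for line into Python; the following sketch assumes the graph is given as a dictionary mapping each vertex to its adjacency list and returns the computed d and π attributes.

from collections import deque

def bfs(adj, s):
    color = {u: "WHITE" for u in adj}
    d = {u: float("inf") for u in adj}
    pi = {u: None for u in adj}
    color[s], d[s] = "GRAY", 0

    q = deque([s])                     # the FIFO queue Q of gray vertices
    while q:
        u = q.popleft()
        for v in adj[u]:
            if color[v] == "WHITE":    # v is discovered for the first time
                color[v] = "GRAY"
                d[v] = d[u] + 1
                pi[v] = u
                q.append(v)
        color[u] = "BLACK"             # u's adjacency list is fully examined
    return d, pi

# Example on a small undirected graph given by its adjacency lists:
adj = {"s": ["r", "w"], "r": ["s", "v"], "v": ["r"], "w": ["s", "x"], "x": ["w"]}
d, pi = bfs(adj, "s")
assert d["x"] == 2 and pi["x"] == "w"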

Figure 22.3 illustrates the progress of BFS on a sample graph.

The procedure BFS works as follows. With the exception of the source vertex s, lines 1–4 paint every vertex white, set u.d to be infinity for each vertex u, and set the parent of every vertex to be NIL. Line 5 paints s gray, since we consider it to be discovered as the procedure begins. Line 6 initializes s.d to 0, and line 7 sets the predecessor of the source to be NIL. Lines 8–9 initialize Q to the queue containing just the vertex s.

The while loop of lines 10–18 iterates as long as there remain gray vertices, which are discovered vertices that have not yet had their adjacency lists fully ex- amined. This while loop maintains the following invariant:

At the test in line 10, the queue Q consists of the set of gray vertices.


Figure 22.3 The operation of BFS on an undirected graph. Tree edges are shown shaded as they are produced by BFS. The value of u.d appears within each vertex u. The queue Q is shown at the beginning of each iteration of the while loop of lines 10–18. Vertex distances appear below vertices in the queue.

Although we won’t use this loop invariant to prove correctness, it is easy to see that it holds prior to the first iteration and that each iteration of the loop maintains the invariant. Prior to the first iteration, the only gray vertex, and the only vertex in Q, is the source vertex s. Line 11 determines the gray vertex u at the head of the queue Q and removes it from Q. The for loop of lines 12–17 considers each vertex � in the adjacency list of u. If � is white, then it has not yet been discovered, and the procedure discovers it by executing lines 14–17. The procedure paints vertex � gray, sets its distance �:d to u:dC1, records u as its parent �:� , and places it at the tail of the queue Q. Once the procedure has examined all the vertices on u’s

22.2 Breadth-first search 597

adjacency list, it blackens u in line 18. The loop invariant is maintained because whenever a vertex is painted gray (in line 14) it is also enqueued (in line 17), and whenever a vertex is dequeued (in line 11) it is also painted black (in line 18).

The results of breadth-first search may depend upon the order in which the neigh- bors of a given vertex are visited in line 12: the breadth-first tree may vary, but the distances d computed by the algorithm will not. (See Exercise 22.2-5.)

Analysis

Before proving the various properties of breadth-first search, we take on the somewhat easier job of analyzing its running time on an input graph G = (V, E). We use aggregate analysis, as we saw in Section 17.1. After initialization, breadth-first search never whitens a vertex, and thus the test in line 13 ensures that each vertex is enqueued at most once, and hence dequeued at most once. The operations of enqueuing and dequeuing take O(1) time, and so the total time devoted to queue operations is O(V). Because the procedure scans the adjacency list of each vertex only when the vertex is dequeued, it scans each adjacency list at most once. Since the sum of the lengths of all the adjacency lists is Θ(E), the total time spent in scanning adjacency lists is O(E). The overhead for initialization is O(V), and thus the total running time of the BFS procedure is O(V + E). Thus, breadth-first search runs in time linear in the size of the adjacency-list representation of G.

Shortest paths

At the beginning of this section, we claimed that breadth-first search finds the distance to each reachable vertex in a graph G = (V, E) from a given source vertex s ∈ V. Define the shortest-path distance δ(s, v) from s to v as the minimum number of edges in any path from vertex s to vertex v; if there is no path from s to v, then δ(s, v) = ∞. We call a path of length δ(s, v) from s to v a shortest path² from s to v. Before showing that breadth-first search correctly computes shortest-path distances, we investigate an important property of shortest-path distances.

2In Chapters 24 and 25, we shall generalize our study of shortest paths to weighted graphs, in which every edge has a real-valued weight and the weight of a path is the sum of the weights of its con- stituent edges. The graphs considered in the present chapter are unweighted or, equivalently, all edges have unit weight.

Lemma 22.1
Let G = (V, E) be a directed or undirected graph, and let s ∈ V be an arbitrary vertex. Then, for any edge (u, v) ∈ E,

δ(s, v) ≤ δ(s, u) + 1 .

Proof  If u is reachable from s, then so is v. In this case, the shortest path from s to v cannot be longer than the shortest path from s to u followed by the edge (u, v), and thus the inequality holds. If u is not reachable from s, then δ(s, u) = ∞, and the inequality holds.

We want to show that BFS properly computes v.d = δ(s, v) for each vertex v ∈ V. We first show that v.d bounds δ(s, v) from above.

Lemma 22.2
Let G = (V, E) be a directed or undirected graph, and suppose that BFS is run on G from a given source vertex s ∈ V. Then upon termination, for each vertex v ∈ V, the value v.d computed by BFS satisfies v.d ≥ δ(s, v).

Proof  We use induction on the number of ENQUEUE operations. Our inductive hypothesis is that v.d ≥ δ(s, v) for all v ∈ V.

The basis of the induction is the situation immediately after enqueuing s in line 9 of BFS. The inductive hypothesis holds here, because s.d = 0 = δ(s, s) and v.d = ∞ ≥ δ(s, v) for all v ∈ V − {s}.

For the inductive step, consider a white vertex v that is discovered during the search from a vertex u. The inductive hypothesis implies that u.d ≥ δ(s, u). From the assignment performed by line 15 and from Lemma 22.1, we obtain

v.d = u.d + 1 ≥ δ(s, u) + 1 ≥ δ(s, v) .

Vertex v is then enqueued, and it is never enqueued again because it is also grayed and the then clause of lines 14–17 is executed only for white vertices. Thus, the value of v.d never changes again, and the inductive hypothesis is maintained.

To prove that v.d = δ(s, v), we must first show more precisely how the queue Q operates during the course of BFS. The next lemma shows that at all times, the queue holds at most two distinct d values.

Lemma 22.3
Suppose that during the execution of BFS on a graph G = (V, E), the queue Q contains the vertices ⟨v_1, v_2, ..., v_r⟩, where v_1 is the head of Q and v_r is the tail. Then, v_r.d ≤ v_1.d + 1 and v_i.d ≤ v_{i+1}.d for i = 1, 2, ..., r − 1.

Proof  The proof is by induction on the number of queue operations. Initially, when the queue contains only s, the lemma certainly holds.

For the inductive step, we must prove that the lemma holds after both dequeuing and enqueuing a vertex. If the head v_1 of the queue is dequeued, v_2 becomes the new head. (If the queue becomes empty, then the lemma holds vacuously.) By the inductive hypothesis, v_1.d ≤ v_2.d. But then we have v_r.d ≤ v_1.d + 1 ≤ v_2.d + 1, and the remaining inequalities are unaffected. Thus, the lemma follows with v_2 as the head.

In order to understand what happens upon enqueuing a vertex, we need to examine the code more closely. When we enqueue a vertex v in line 17 of BFS, it becomes v_{r+1}. At that time, we have already removed vertex u, whose adjacency list is currently being scanned, from the queue Q, and by the inductive hypothesis, the new head v_1 has v_1.d ≥ u.d. Thus, v_{r+1}.d = v.d = u.d + 1 ≤ v_1.d + 1. From the inductive hypothesis, we also have v_r.d ≤ u.d + 1, and so v_r.d ≤ u.d + 1 = v.d = v_{r+1}.d, and the remaining inequalities are unaffected. Thus, the lemma follows when v is enqueued.

The following corollary shows that the d values at the time that vertices are enqueued are monotonically increasing over time.

Corollary 22.4
Suppose that vertices v_i and v_j are enqueued during the execution of BFS, and that v_i is enqueued before v_j. Then v_i.d ≤ v_j.d at the time that v_j is enqueued.

Proof Immediate from Lemma 22.3 and the property that each vertex receives a finite d value at most once during the course of BFS.

We can now prove that breadth-first search correctly finds shortest-path dis- tances.

Theorem 22.5 (Correctness of breadth-first search)
Let G = (V, E) be a directed or undirected graph, and suppose that BFS is run on G from a given source vertex s ∈ V. Then, during its execution, BFS discovers every vertex v ∈ V that is reachable from the source s, and upon termination, v.d = δ(s, v) for all v ∈ V. Moreover, for any vertex v ≠ s that is reachable from s, one of the shortest paths from s to v is a shortest path from s to v.π followed by the edge (v.π, v).

Proof  Assume, for the purpose of contradiction, that some vertex receives a d value not equal to its shortest-path distance. Let v be the vertex with minimum δ(s, v) that receives such an incorrect d value; clearly v ≠ s. By Lemma 22.2, v.d ≥ δ(s, v), and thus we have that v.d > δ(s, v). Vertex v must be reachable from s, for if it is not, then δ(s, v) = ∞ ≥ v.d. Let u be the vertex immediately preceding v on a shortest path from s to v, so that δ(s, v) = δ(s, u) + 1. Because δ(s, u) < δ(s, v), and because we chose v as the vertex of minimum shortest-path distance receiving an incorrect d value, we have u.d = δ(s, u). Putting these properties together, we have

v.d > δ(s, v) = δ(s, u) + 1 = u.d + 1 .   (22.1)

Now consider the time when BFS chooses to dequeue vertex u from Q in line 11. At this time, vertex v is either white, gray, or black. We shall show that in each of these cases, we derive a contradiction to inequality (22.1). If v is white, then line 15 sets v.d = u.d + 1, contradicting inequality (22.1). If v is black, then it was already removed from the queue and, by Corollary 22.4, we have v.d ≤ u.d, again contradicting inequality (22.1). If v is gray, then it was painted gray upon dequeuing some vertex w, which was removed from Q earlier than u and for which v.d = w.d + 1. By Corollary 22.4, however, w.d ≤ u.d, and so we have v.d = w.d + 1 ≤ u.d + 1, once again contradicting inequality (22.1).

Thus we conclude that v.d = δ(s, v) for all v ∈ V. All vertices v reachable from s must be discovered, for otherwise they would have ∞ = v.d > δ(s, v). To conclude the proof of the theorem, observe that if v.π = u, then v.d = u.d + 1. Thus, we can obtain a shortest path from s to v by taking a shortest path from s to v.π and then traversing the edge (v.π, v).

Breadth-first trees

The procedure BFS builds a breadth-first tree as it searches the graph, as Figure 22.3 illustrates. The tree corresponds to the π attributes. More formally, for a graph G = (V, E) with source s, we define the predecessor subgraph of G as G_π = (V_π, E_π), where

V_π = {v ∈ V : v.π ≠ NIL} ∪ {s}

and

E_π = {(v.π, v) : v ∈ V_π − {s}} .

The predecessor subgraph G_π is a breadth-first tree if V_π consists of the vertices reachable from s and, for all v ∈ V_π, the subgraph G_π contains a unique simple path from s to v that is also a shortest path from s to v in G. A breadth-first tree is in fact a tree, since it is connected and |E_π| = |V_π| − 1 (see Theorem B.2). We call the edges in E_π tree edges.
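Given the π attributes computed by BFS, the tree edges E_π can be read off directly; a small Python sketch (with a hypothetical π table) is:

# pi maps each vertex to its predecessor in the breadth-first tree (None for
# the source); the values below are illustrative, not from the text.
pi = {"s": None, "r": "s", "w": "s", "v": "r", "x": "w"}

tree_edges = [(pi[v], v) for v in pi if pi[v] is not None]
# Each vertex of V_pi other than the source contributes exactly one tree edge,
# so |E_pi| = |V_pi| - 1.
assert len(tree_edges) == len(pi) - 1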

The following lemma shows that the predecessor subgraph produced by the BFS procedure is a breadth-first tree.

Lemma 22.6
When applied to a directed or undirected graph G = (V, E), procedure BFS constructs π so that the predecessor subgraph G_π = (V_π, E_π) is a breadth-first tree.

Proof  Line 16 of BFS sets v.π = u if and only if (u, v) ∈ E and δ(s, v) < ∞—that is, if v is reachable from s—and thus V_π consists of the vertices in V reachable from s.

An undirected graph may entail some ambiguity in how we classify edges, since (u, v) and (v, u) are really the same edge. In such a case, we classify the edge as the first type in the classification list that applies. Equivalently (see Exercise 22.3-6), we classify the edge according to whichever of (u, v) or (v, u) the search encounters first.

We now show that forward and cross edges never occur in a depth-first search of an undirected graph.

Theorem 22.10 In a depth-first search of an undirected graph G, every edge of G is either a tree edge or a back edge.

Proof  Let (u, v) be an arbitrary edge of G, and suppose without loss of generality that u.d < v.d.

Lemma 22.14
Let C and C′ be distinct strongly connected components in directed graph G = (V, E). Suppose that there is an edge (u, v) ∈ E, where u ∈ C and v ∈ C′. Then f(C) > f(C′).

Proof  We consider two cases, depending on which strongly connected component, C or C′, had the first discovered vertex during the depth-first search.

If d(C) < d(C′), let x be the first vertex discovered in C. At time x.d, all vertices in C and C′ are white, and G contains a path of white vertices from x to each vertex in C and, via the edge (u, v), to each vertex in C′. By the white-path theorem, all vertices in C and C′ become descendants of x in the depth-first tree, and by Corollary 22.8, x.f = f(C) > f(C′).

If instead we have d(C) > d(C′), let y be the first vertex discovered in C′. At time y.d, all vertices in C′ are white and G contains a path from y to each vertex in C′ consisting only of white vertices. By the white-path theorem, all vertices in C′ become descendants of y in the depth-first tree, and by Corollary 22.8, y.f = f(C′). At time y.d, all vertices in C are white. Since there is an edge (u, v) from C to C′, Lemma 22.13 implies that there cannot be a path from C′ to C. Hence, no vertex in C is reachable from y. At time y.f, therefore, all vertices in C are still white. Thus, for any vertex w ∈ C, we have w.f > y.f, which implies that f(C) > f(C′).

The following corollary tells us that each edge in G^T that goes between different strongly connected components goes from a component with an earlier finishing time (in the first depth-first search) to a component with a later finishing time.

Corollary 22.15
Let C and C′ be distinct strongly connected components in directed graph G = (V, E). Suppose that there is an edge (u, v) ∈ E^T, where u ∈ C and v ∈ C′. Then f(C) < f(C′).

In the inductive step of the correctness argument for the strongly-connected-components algorithm, the depth-first search of G^T is started from a root u belonging to component C, where u.f = f(C) > f(C′) for any strongly connected component C′ other than C that has yet to be visited. By the inductive hypothesis, at the time that the search visits u, all other vertices of C are white. By the white-path theorem, therefore, all other vertices of C are descendants of u in its depth-first tree. Moreover, by the inductive hypothesis and by Corollary 22.15, any edges in G^T that leave C must be to strongly connected components that have already been visited. Thus, no vertex in any strongly connected component other than C will be a descendant of u during the depth-first search of G^T. Thus, the vertices of the depth-first tree in G^T that is rooted at u form exactly one strongly connected component, which completes the inductive step and the proof.

Here is another way to look at how the second depth-first search operates. Consider the component graph (G^T)^SCC of G^T. If we map each strongly connected component visited in the second depth-first search to a vertex of (G^T)^SCC, the second depth-first search visits vertices of (G^T)^SCC in the reverse of a topologically sorted order. If we reverse the edges of (G^T)^SCC, we get the graph ((G^T)^SCC)^T. Because ((G^T)^SCC)^T = G^SCC (see Exercise 22.5-4), the second depth-first search visits the vertices of G^SCC in topologically sorted order.
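A minimal Python sketch of this two-pass strategy (first a depth-first search of G to obtain finishing times, then a depth-first search of G^T in order of decreasing finishing time); it uses an explicit stack instead of recursion, and the function and variable names are illustrative.

def strongly_connected_components(adj):
    # adj: dict mapping each vertex to the list of its successors in G.
    def finish_order(graph, order):
        # Iterative DFS that records vertices in increasing finishing time.
        seen, finish = set(), []
        for s in order:
            if s in seen:
                continue
            seen.add(s)
            stack = [(s, iter(graph.get(s, ())))]
            while stack:
                u, it = stack[-1]
                advanced = False
                for v in it:
                    if v not in seen:
                        seen.add(v)
                        stack.append((v, iter(graph.get(v, ()))))
                        advanced = True
                        break
                if not advanced:
                    stack.pop()
                    finish.append(u)
        return finish

    finish = finish_order(adj, list(adj))          # first pass, on G

    transpose = {u: [] for u in adj}               # build G^T
    for u in adj:
        for v in adj[u]:
            transpose.setdefault(v, []).append(u)

    # Second pass: DFS on G^T in decreasing order of finishing time; each
    # tree of the resulting forest is one strongly connected component.
    components, assigned = [], set()
    for u in reversed(finish):
        if u in assigned:
            continue
        comp, stack = [], [u]
        assigned.add(u)
        while stack:
            x = stack.pop()
            comp.append(x)
            for y in transpose.get(x, ()):
                if y not in assigned:
                    assigned.add(y)
                    stack.append(y)
        components.append(comp)
    return components

# Two 2-cycles joined by a one-way edge give two components.
adj = {1: [2], 2: [1, 3], 3: [4], 4: [3]}
assert sorted(map(sorted, strongly_connected_components(adj))) == [[1, 2], [3, 4]]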

Exercises

22.5-1 How can the number of strongly connected components of a graph change if a new edge is added?

22.5-2 Show how the procedure STRONGLY-CONNECTED-COMPONENTS works on the graph of Figure 22.6. Specifically, show the finishing times computed in line 1 and the forest produced in line 3. Assume that the loop of lines 5–7 of DFS considers vertices in alphabetical order and that the adjacency lists are in alphabetical order.

22.5-3 Professor Bacon claims that the algorithm for strongly connected components would be simpler if it used the original (instead of the transpose) graph in the second depth-first search and scanned the vertices in order of increasing finishing times. Does this simpler algorithm always produce correct results?

22.5-4 Prove that for any directed graph G, we have ((G^T)^SCC)^T = G^SCC. That is, the transpose of the component graph of G^T is the same as the component graph of G.

22.5-5 Give an O(V + E)-time algorithm to compute the component graph of a directed graph G = (V, E). Make sure that there is at most one edge between two vertices in the component graph your algorithm produces.

22.5-6 Given a directed graph G = (V, E), explain how to create another graph G′ = (V, E′) such that (a) G′ has the same strongly connected components as G, (b) G′ has the same component graph as G, and (c) E′ is as small as possible. Describe a fast algorithm to compute G′.

22.5-7 A directed graph G = (V, E) is semiconnected if, for all pairs of vertices u, v ∈ V, we have u ⇝ v or v ⇝ u. Give an efficient algorithm to determine whether or not G is semiconnected. Prove that your algorithm is correct, and analyze its running time.

Problems

22-1 Classifying edges by breadth-first search A depth-first forest classifies the edges of a graph into tree, back, forward, and cross edges. A breadth-first tree can also be used to classify the edges reachable from the source of the search into the same four categories.

a. Prove that in a breadth-first search of an undirected graph, the following prop- erties hold:

1. There are no back edges and no forward edges.

2. For each tree edge (u, v), we have v.d = u.d + 1.

3. For each cross edge (u, v), we have v.d = u.d or v.d = u.d + 1.

b. Prove that in a breadth-first search of a directed graph, the following properties hold:

1. There are no forward edges.

2. For each tree edge (u, v), we have v.d = u.d + 1.

3. For each cross edge (u, v), we have v.d ≤ u.d + 1.

4. For each back edge (u, v), we have 0 ≤ v.d ≤ u.d.

22-2 Articulation points, bridges, and biconnected components
Let G = (V, E) be a connected, undirected graph. An articulation point of G is a vertex whose removal disconnects G. A bridge of G is an edge whose removal disconnects G. A biconnected component of G is a maximal set of edges such that any two edges in the set lie on a common simple cycle. Figure 22.10 illustrates


Figure 22.10 The articulation points, bridges, and biconnected components of a connected, undi- rected graph for use in Problem 22-2. The articulation points are the heavily shaded vertices, the bridges are the heavily shaded edges, and the biconnected components are the edges in the shaded regions, with a bcc numbering shown.

these definitions. We can determine articulation points, bridges, and biconnected components using depth-first search. Let G� D .V; E�/ be a depth-first tree of G. a. Prove that the root of G� is an articulation point of G if and only if it has at

least two children in G� .

b. Let � be a nonroot vertex of G� . Prove that � is an articulation point of G if and only if � has a child s such that there is no back edge from s or any descendant of s to a proper ancestor of �.

c. Let

    v.low = min { v.d,  w.d : (u, w) is a back edge for some descendant u of v } .

Show how to compute v.low for all vertices v ∈ V in O(E) time.

d. Show how to compute all articulation points in O.E/ time.

e. Prove that an edge of G is a bridge if and only if it does not lie on any simple cycle of G.

f. Show how to compute all the bridges of G in O.E/ time.

g. Prove that the biconnected components of G partition the nonbridge edges of G.

h. Give an O.E/-time algorithm to label each edge e of G with a positive in- teger e:bcc such that e:bcc D e0:bcc if and only if e and e0 are in the same biconnected component.


22-3 Euler tour An Euler tour of a strongly connected, directed graph G D .V; E/ is a cycle that traverses each edge of G exactly once, although it may visit a vertex more than once.

a. Show that G has an Euler tour if and only if in-degree.�/ D out-degree.�/ for each vertex � 2 V .

b. Describe an O.E/-time algorithm to find an Euler tour of G if one exists. (Hint: Merge edge-disjoint cycles.)

22-4 Reachability Let G D .V; E/ be a directed graph in which each vertex u 2 V is labeled with a unique integer L.u/ from the set f1; 2; : : : ; jV jg. For each vertex u 2 V , let R.u/ D f� 2 V W u � �g be the set of vertices that are reachable from u. Define min.u/ to be the vertex in R.u/ whose label is minimum, i.e., min.u/ is the vertex � such that L.�/ D min fL.w/ W w 2 R.u/g. Give an O.V CE/-time algorithm that computes min.u/ for all vertices u 2 V .

Chapter notes

Even [103] and Tarjan [330] are excellent references for graph algorithms. Breadth-first search was discovered by Moore [260] in the context of finding

paths through mazes. Lee [226] independently discovered the same algorithm in the context of routing wires on circuit boards.

Hopcroft and Tarjan [178] advocated the use of the adjacency-list representation over the adjacency-matrix representation for sparse graphs and were the first to recognize the algorithmic importance of depth-first search. Depth-first search has been widely used since the late 1950s, especially in artificial intelligence programs.

Tarjan [327] gave a linear-time algorithm for finding strongly connected compo- nents. The algorithm for strongly connected components in Section 22.5 is adapted from Aho, Hopcroft, and Ullman [6], who credit it to S. R. Kosaraju (unpublished) and M. Sharir [314]. Gabow [119] also developed an algorithm for strongly con- nected components that is based on contracting cycles and uses two stacks to make it run in linear time. Knuth [209] was the first to give a linear-time algorithm for topological sorting.

23 Minimum Spanning Trees

Electronic circuit designs often need to make the pins of several components elec- trically equivalent by wiring them together. To interconnect a set of n pins, we can use an arrangement of n� 1 wires, each connecting two pins. Of all such arrange- ments, the one that uses the least amount of wire is usually the most desirable.

We can model this wiring problem with a connected, undirected graph G = (V, E), where V is the set of pins, E is the set of possible interconnections between pairs of pins, and for each edge (u, v) ∈ E, we have a weight w(u, v) specifying the cost (amount of wire needed) to connect u and v. We then wish to find an acyclic subset T ⊆ E that connects all of the vertices and whose total weight

    w(T) = Σ_{(u,v) ∈ T} w(u, v)

is minimized. Since T is acyclic and connects all of the vertices, it must form a tree, which we call a spanning tree since it "spans" the graph G. We call the problem of determining the tree T the minimum-spanning-tree problem.1 Figure 23.1 shows an example of a connected graph and a minimum spanning tree.

In this chapter, we shall examine two algorithms for solving the minimum- spanning-tree problem: Kruskal’s algorithm and Prim’s algorithm. We can easily make each of them run in time O.E lg V / using ordinary binary heaps. By using Fibonacci heaps, Prim’s algorithm runs in time O.E C V lg V /, which improves over the binary-heap implementation if jV j is much smaller than jEj.

The two algorithms are greedy algorithms, as described in Chapter 16. Each step of a greedy algorithm must make one of several possible choices. The greedy strategy advocates making the choice that is the best at the moment. Such a strat- egy does not generally guarantee that it will always find globally optimal solutions

1The phrase “minimum spanning tree” is a shortened form of the phrase “minimum-weight spanning tree.” We are not, for example, minimizing the number of edges in T , since all spanning trees have exactly jV j � 1 edges by Theorem B.2.


Figure 23.1 A minimum spanning tree for a connected graph. The weights on edges are shown, and the edges in a minimum spanning tree are shaded. The total weight of the tree shown is 37. This minimum spanning tree is not unique: removing the edge .b; c/ and replacing it with the edge .a; h/ yields another spanning tree with weight 37.

to problems. For the minimum-spanning-tree problem, however, we can prove that certain greedy strategies do yield a spanning tree with minimum weight. Although you can read this chapter independently of Chapter 16, the greedy methods pre- sented here are a classic application of the theoretical notions introduced there.

Section 23.1 introduces a “generic” minimum-spanning-tree method that grows a spanning tree by adding one edge at a time. Section 23.2 gives two algorithms that implement the generic method. The first algorithm, due to Kruskal, is similar to the connected-components algorithm from Section 21.1. The second, due to Prim, resembles Dijkstra’s shortest-paths algorithm (Section 24.3).

Because a tree is a type of graph, in order to be precise we must define a tree in terms of not just its edges, but its vertices as well. Although this chapter focuses on trees in terms of their edges, we shall operate with the understanding that the vertices of a tree T are those that some edge of T is incident on.

23.1 Growing a minimum spanning tree

Assume that we have a connected, undirected graph G D .V; E/ with a weight function w W E ! R, and we wish to find a minimum spanning tree for G. The two algorithms we consider in this chapter use a greedy approach to the problem, although they differ in how they apply this approach.

This greedy strategy is captured by the following generic method, which grows the minimum spanning tree one edge at a time. The generic method manages a set of edges A, maintaining the following loop invariant:

Prior to each iteration, A is a subset of some minimum spanning tree.

At each step, we determine an edge .u; �/ that we can add to A without violating this invariant, in the sense that A[f.u; �/g is also a subset of a minimum spanning


tree. We call such an edge a safe edge for A, since we can add it safely to A while maintaining the invariant.

GENERIC-MST(G, w)
1  A = ∅
2  while A does not form a spanning tree
3      find an edge (u, v) that is safe for A
4      A = A ∪ {(u, v)}
5  return A

We use the loop invariant as follows:

Initialization: After line 1, the set A trivially satisfies the loop invariant.

Maintenance: The loop in lines 2–4 maintains the invariant by adding only safe edges.

Termination: All edges added to A are in a minimum spanning tree, and so the set A returned in line 5 must be a minimum spanning tree.

The tricky part is, of course, finding a safe edge in line 3. One must exist, since when line 3 is executed, the invariant dictates that there is a spanning tree T such that A � T . Within the while loop body, A must be a proper subset of T , and therefore there must be an edge .u; �/ 2 T such that .u; �/ 62 A and .u; �/ is safe for A.

In the remainder of this section, we provide a rule (Theorem 23.1) for recogniz- ing safe edges. The next section describes two algorithms that use this rule to find safe edges efficiently.

We first need some definitions. A cut .S; V � S/ of an undirected graph G D .V; E/ is a partition of V . Figure 23.2 illustrates this notion. We say that an edge .u; �/ 2 E crosses the cut .S; V � S/ if one of its endpoints is in S and the other is in V � S . We say that a cut respects a set A of edges if no edge in A crosses the cut. An edge is a light edge crossing a cut if its weight is the minimum of any edge crossing the cut. Note that there can be more than one light edge crossing a cut in the case of ties. More generally, we say that an edge is a light edge satisfying a given property if its weight is the minimum of any edge satisfying the property.
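For instance, under these definitions a light edge crossing a given cut can be found by a single scan over the edges. The small Python helper below is illustrative only; S is assumed to be a set of vertices and each edge a (u, v, w) triple.

    def light_edge(edges, S):
        """Return a minimum-weight edge crossing the cut (S, V - S) as a
        (weight, u, v) triple, or None if no edge crosses the cut."""
        crossing = [(w, u, v) for u, v, w in edges if (u in S) != (v in S)]
        return min(crossing, key=lambda t: t[0], default=None)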

Our rule for recognizing safe edges is given by the following theorem.

Theorem 23.1 Let G D .V; E/ be a connected, undirected graph with a real-valued weight func- tion w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, let .S; V � S/ be any cut of G that respects A, and let .u; �/ be a light edge crossing .S; V � S/. Then, edge .u; �/ is safe for A.


Figure 23.2 Two ways of viewing a cut .S; V � S/ of the graph from Figure 23.1. (a) Black vertices are in the set S , and white vertices are in V � S . The edges crossing the cut are those connecting white vertices with black vertices. The edge .d; c/ is the unique light edge crossing the cut. A subset A of the edges is shaded; note that the cut .S; V � S/ respects A, since no edge of A crosses the cut. (b) The same graph with the vertices in the set S on the left and the vertices in the set V � S on the right. An edge crosses the cut if it connects a vertex on the left with a vertex on the right.

Proof Let T be a minimum spanning tree that includes A, and assume that T does not contain the light edge .u; �/, since if it does, we are done. We shall construct another minimum spanning tree T 0 that includes A [ f.u; �/g by using a cut-and-paste technique, thereby showing that .u; �/ is a safe edge for A.

The edge .u; �/ forms a cycle with the edges on the simple path p from u to � in T , as Figure 23.3 illustrates. Since u and � are on opposite sides of the cut .S; V � S/, at least one edge in T lies on the simple path p and also crosses the cut. Let .x; y/ be any such edge. The edge .x; y/ is not in A, because the cut respects A. Since .x; y/ is on the unique simple path from u to � in T , remov- ing .x; y/ breaks T into two components. Adding .u; �/ reconnects them to form a new spanning tree T 0 D T � f.x; y/g [ f.u; �/g.

We next show that T 0 is a minimum spanning tree. Since .u; �/ is a light edge crossing .S; V �S/ and .x; y/ also crosses this cut, w.u; �/ � w.x; y/. Therefore, w.T 0/ D w.T / � w.x; y/Cw.u; �/

� w.T / :


Figure 23.3 The proof of Theorem 23.1. Black vertices are in S , and white vertices are in V � S . The edges in the minimum spanning tree T are shown, but the edges in the graph G are not. The edges in A are shaded, and .u; �/ is a light edge crossing the cut .S; V � S/. The edge .x; y/ is an edge on the unique simple path p from u to � in T . To form a minimum spanning tree T 0 that contains .u; �/, remove the edge .x; y/ from T and add the edge .u; �/.

But T is a minimum spanning tree, so that w.T / � w.T 0/; thus, T 0 must be a minimum spanning tree also.

It remains to show that .u; �/ is actually a safe edge for A. We have A � T 0, since A � T and .x; y/ 62 A; thus, A [ f.u; �/g � T 0. Consequently, since T 0 is a minimum spanning tree, .u; �/ is safe for A.

Theorem 23.1 gives us a better understanding of the workings of the GENERIC- MST method on a connected graph G D .V; E/. As the method proceeds, the set A is always acyclic; otherwise, a minimum spanning tree including A would contain a cycle, which is a contradiction. At any point in the execution, the graph GA D .V; A/ is a forest, and each of the connected components of GA is a tree. (Some of the trees may contain just one vertex, as is the case, for example, when the method begins: A is empty and the forest contains jV j trees, one for each vertex.) Moreover, any safe edge .u; �/ for A connects distinct components of GA, since A [ f.u; �/g must be acyclic.

The while loop in lines 2–4 of GENERIC-MST executes jV j � 1 times because it finds one of the jV j � 1 edges of a minimum spanning tree in each iteration. Initially, when A D ;, there are jV j trees in GA, and each iteration reduces that number by 1. When the forest contains only a single tree, the method terminates.

The two algorithms in Section 23.2 use the following corollary to Theorem 23.1.


Corollary 23.2 Let G D .V; E/ be a connected, undirected graph with a real-valued weight func- tion w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, and let C D .VC ; EC / be a connected component (tree) in the forest GA D .V; A/. If .u; �/ is a light edge connecting C to some other component in GA, then .u; �/ is safe for A.

Proof The cut .VC ; V � VC / respects A, and .u; �/ is a light edge for this cut. Therefore, .u; �/ is safe for A.

Exercises

23.1-1 Let .u; �/ be a minimum-weight edge in a connected graph G. Show that .u; �/ belongs to some minimum spanning tree of G.

23.1-2 Professor Sabatier conjectures the following converse of Theorem 23.1. Let G D .V; E/ be a connected, undirected graph with a real-valued weight function w de- fined on E. Let A be a subset of E that is included in some minimum spanning tree for G, let .S; V � S/ be any cut of G that respects A, and let .u; �/ be a safe edge for A crossing .S; V � S/. Then, .u; �/ is a light edge for the cut. Show that the professor’s conjecture is incorrect by giving a counterexample.

23.1-3 Show that if an edge .u; �/ is contained in some minimum spanning tree, then it is a light edge crossing some cut of the graph.

23.1-4 Give a simple example of a connected graph such that the set of edges f.u; �/ W there exists a cut .S; V � S/ such that .u; �/ is a light edge crossing .S; V � S/g does not form a minimum spanning tree.

23.1-5 Let e be a maximum-weight edge on some cycle of connected graph G D .V; E/. Prove that there is a minimum spanning tree of G0 D .V; E � feg/ that is also a minimum spanning tree of G. That is, there is a minimum spanning tree of G that does not include e.


23.1-6 Show that a graph has a unique minimum spanning tree if, for every cut of the graph, there is a unique light edge crossing the cut. Show that the converse is not true by giving a counterexample.

23.1-7 Argue that if all edge weights of a graph are positive, then any subset of edges that connects all vertices and has minimum total weight must be a tree. Give an example to show that the same conclusion does not follow if we allow some weights to be nonpositive.

23.1-8 Let T be a minimum spanning tree of a graph G, and let L be the sorted list of the edge weights of T . Show that for any other minimum spanning tree T 0 of G, the list L is also the sorted list of edge weights of T 0.

23.1-9 Let T be a minimum spanning tree of a graph G D .V; E/, and let V 0 be a subset of V . Let T 0 be the subgraph of T induced by V 0, and let G0 be the subgraph of G induced by V 0. Show that if T 0 is connected, then T 0 is a minimum spanning tree of G0.

23.1-10 Given a graph G and a minimum spanning tree T, suppose that we decrease the weight of one of the edges in T. Show that T is still a minimum spanning tree for G. More formally, let T be a minimum spanning tree for G with edge weights given by weight function w. Choose one edge (x, y) ∈ T and a positive number k, and define the weight function w' by

    w'(u, v) = w(u, v)        if (u, v) ≠ (x, y) ,
    w'(u, v) = w(x, y) − k    if (u, v) = (x, y) .

Show that T is a minimum spanning tree for G with edge weights given by w'.

23.1-11 ? Given a graph G and a minimum spanning tree T , suppose that we decrease the weight of one of the edges not in T . Give an algorithm for finding the minimum spanning tree in the modified graph.


23.2 The algorithms of Kruskal and Prim

The two minimum-spanning-tree algorithms described in this section elaborate on the generic method. They each use a specific rule to determine a safe edge in line 3 of GENERIC-MST. In Kruskal’s algorithm, the set A is a forest whose vertices are all those of the given graph. The safe edge added to A is always a least-weight edge in the graph that connects two distinct components. In Prim’s algorithm, the set A forms a single tree. The safe edge added to A is always a least-weight edge connecting the tree to a vertex not in the tree.
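As a rough illustration of Prim's rule only (a sketch, not the book's pseudocode), the following Python function grows a single tree from a chosen root, repeatedly extracting a least-weight edge that leaves the tree from a binary heap; graph is assumed to map each vertex to a list of (weight, neighbor) pairs.

    import heapq

    def prim_mst(graph, root):
        """Grow one tree from root, always adding a least-weight edge that
        connects the tree to a vertex not yet in the tree."""
        in_tree = {root}
        total = 0
        tree_edges = []
        heap = [(w, root, v) for w, v in graph[root]]
        heapq.heapify(heap)
        while heap and len(in_tree) < len(graph):
            w, u, v = heapq.heappop(heap)      # lightest candidate edge so far
            if v in in_tree:
                continue                       # both endpoints already in the tree
            in_tree.add(v)
            tree_edges.append((u, v, w))
            total += w
            for w2, x in graph[v]:
                if x not in in_tree:
                    heapq.heappush(heap, (w2, v, x))
        return total, tree_edges

Each extracted edge is a light edge connecting the current tree to some other component of G_A, so by Corollary 23.2 it is safe to add.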

Kruskal’s algorithm

Kruskal’s algorithm finds a safe edge to add to the growing forest by finding, of all the edges that connect any two trees in the forest, an edge .u; �/ of least weight. Let C1 and C2 denote the two trees that are connected by .u; �/. Since .u; �/ must be a light edge connecting C1 to some other tree, Corollary 23.2 implies that .u; �/ is a safe edge for C1. Kruskal’s algorithm qualifies as a greedy algorithm because at each step it adds to the forest an edge of least possible weight.

Our implementation of Kruskal’s algorithm is like the algorithm to compute connected components from Section 21.1. It uses a disjoint-set data structure to maintain several disjoint sets of elements. Each set contains the vertices in one tree of the current forest. The operation FIND-SET.u/ returns a representative element from the set that contains u. Thus, we can determine whether two vertices u and � belong to the same tree by testing whether FIND-SET.u/ equals FIND-SET.�/. To combine trees, Kruskal’s algorithm calls the UNION procedure.

MST-KRUSKAL(G, w)
1  A = ∅
2  for each vertex v ∈ G.V
3      MAKE-SET(v)
4  sort the edges of G.E into nondecreasing order by weight w
5  for each edge (u, v) ∈ G.E, taken in nondecreasing order by weight
6      if FIND-SET(u) ≠ FIND-SET(v)
7          A = A ∪ {(u, v)}
8          UNION(u, v)
9  return A

Figure 23.4 shows how Kruskal’s algorithm works. Lines 1–3 initialize the set A to the empty set and create jV j trees, one containing each vertex. The for loop in lines 5–8 examines edges in order of weight, from lowest to highest. The loop

Figure 23.4 The execution of Kruskal’s algorithm on the graph from Figure 23.1. Shaded edges belong to the forest A being grown. The algorithm considers each edge in sorted order by weight. An arrow points to the edge under consideration at each step of the algorithm. If the edge joins two distinct trees in the forest, it is added to the forest, thereby merging the two trees.

checks, for each edge .u; �/, whether the endpoints u and � belong to the same tree. If they do, then the edge .u; �/ cannot be added to the forest without creating a cycle, and the edge is discarded. Otherwise, the two vertices belong to different trees. In this case, line 7 adds the edge .u; �/ to A, and line 8 merges the vertices in the two trees.
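For concreteness, here is a hedged Python sketch of this procedure (not the book's code). It replaces MAKE-SET, FIND-SET, and UNION with an inline disjoint-set forest using union by rank and path halving, and it assumes edges are given as (weight, u, v) triples.

    def kruskal_mst(vertices, edges):
        """edges: iterable of (weight, u, v) triples for an undirected graph."""
        parent = {v: v for v in vertices}      # MAKE-SET for every vertex
        rank = {v: 0 for v in vertices}

        def find(x):                           # FIND-SET with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(x, y):                       # UNION by rank
            rx, ry = find(x), find(y)
            if rank[rx] < rank[ry]:
                rx, ry = ry, rx
            parent[ry] = rx
            if rank[rx] == rank[ry]:
                rank[rx] += 1

        mst = []
        for w, u, v in sorted(edges, key=lambda e: e[0]):   # nondecreasing weight
            if find(u) != find(v):             # endpoints lie in different trees
                mst.append((u, v, w))
                union(u, v)
        return mst

The returned list contains the |V| − 1 edges of one minimum spanning tree of a connected input graph.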

Figure 23.4, continued Further steps in the execution of Kruskal’s algorithm.

The running time of Kruskal's algorithm for a graph G = (V, E) depends on how we implement the disjoint-set data structure. We assume that we use the disjoint-set-forest implementation of Section 21.3 with the union-by-rank and path-compression heuristics, since it is the asymptotically fastest implementation known. Initializing the set A in line 1 takes O(1) time, and the time to sort the edges in line 4 is O(E lg E). (We will account for the cost of the |V| MAKE-SET operations in the for loop of lines 2–3 in a moment.) The for loop of lines 5–8 performs O(E) FIND-SET and UNION operations on the disjoint-set forest. Along with the |V| MAKE-SET operations, these take a total of O((V + E) α(V)) time, where α is the very slowly growing function defined in Section 21.4. Because we assume that G is connected, we have |E| ≥ |V| − 1, and so the disjoint-set operations take O(E α(V)) time. Moreover, since α(|V|) = O(lg V) = O(lg E), the total running time of Kruskal's algorithm is O(E lg E). Observing that |E| < |V|², we have lg |E| = O(lg V), and so we can restate the running time of Kruskal's algorithm as O(E lg V).

Because the cycle ⟨c, d, c⟩ has weight 6 + (−3) = 3 > 0, the shortest path from s to c is ⟨s, c⟩, with weight δ(s, c) = w(s, c) = 5. Similarly, the shortest path from s to d is ⟨s, c, d⟩, with weight δ(s, d) = w(s, c) + w(c, d) = 11. Analogously, there are infinitely many paths from s to e: ⟨s, e⟩, ⟨s, e, f, e⟩, ⟨s, e, f, e, f, e⟩, and so on. Because the cycle ⟨e, f, e⟩ has weight 3 + (−6) = −3 < 0, however, there is no shortest path from s to e.

If p = ⟨v0, v1, ..., vk⟩ is a path and c = ⟨vi, vi+1, ..., vj⟩ is a positive-weight cycle on this path (so that vi = vj and w(c) > 0), then the path p' = ⟨v0, v1, ..., vi, vj+1, vj+2, ..., vk⟩ has weight w(p') = w(p) − w(c) < w(p), and so p cannot be a shortest path from v0 to vk.

Figure 24.3 Relaxing an edge (u, v). (a) Because v.d > u.d + w(u, v) prior to relaxation, the value of v.d decreases. (b) Here, v.d ≤ u.d + w(u, v) before relaxing the edge, and so the relaxation step leaves v.d unchanged.

A relaxation step may decrease the value of the shortest-path estimate v.d and update v's predecessor attribute v.π. The following code performs a relaxation step on edge (u, v) in O(1) time:

RELAX(u, v, w)
1  if v.d > u.d + w(u, v)
2      v.d = u.d + w(u, v)
3      v.π = u

Figure 24.3 shows two examples of relaxing an edge, one in which a shortest-path estimate decreases and one in which no estimate changes.

Each algorithm in this chapter calls INITIALIZE-SINGLE-SOURCE and then re- peatedly relaxes edges. Moreover, relaxation is the only means by which shortest- path estimates and predecessors change. The algorithms in this chapter differ in how many times they relax each edge and the order in which they relax edges. Dijk- stra’s algorithm and the shortest-paths algorithm for directed acyclic graphs relax each edge exactly once. The Bellman-Ford algorithm relaxes each edge jV j � 1 times.
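To make the relaxation machinery concrete, the following Python sketch (illustrative only, with d and π kept in dictionaries rather than as vertex attributes) mirrors INITIALIZE-SINGLE-SOURCE and RELAX as they are used above.

    import math

    def initialize_single_source(vertices, s):
        """Set every shortest-path estimate to infinity and every predecessor
        to None, then set the source estimate to 0."""
        d = {v: math.inf for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        return d, pi

    def relax(u, v, w_uv, d, pi):
        """If the path to v through u beats the current estimate, update
        v's estimate and predecessor (the RELAX step)."""
        if d[v] > d[u] + w_uv:
            d[v] = d[u] + w_uv
            pi[v] = u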

Properties of shortest paths and relaxation

To prove the algorithms in this chapter correct, we shall appeal to several prop- erties of shortest paths and relaxation. We state these properties here, and Sec- tion 24.5 proves them formally. For your reference, each property stated here in- cludes the appropriate lemma or corollary number from Section 24.5. The latter five of these properties, which refer to shortest-path estimates or the predecessor subgraph, implicitly assume that the graph is initialized with a call to INITIALIZE- SINGLE-SOURCE.G; s/ and that the only way that shortest-path estimates and the predecessor subgraph change are by some sequence of relaxation steps.


Triangle inequality (Lemma 24.10) For any edge .u; �/ 2 E, we have ı.s; �/ � ı.s; u/C w.u; �/.

Upper-bound property (Lemma 24.11) We always have �:d � ı.s; �/ for all vertices � 2 V , and once �:d achieves the value ı.s; �/, it never changes.

No-path property (Corollary 24.12) If there is no path from s to �, then we always have �:d D ı.s; �/ D1.

Convergence property (Lemma 24.14) If s � u! � is a shortest path in G for some u; � 2 V , and if u:d D ı.s; u/ at any time prior to relaxing edge .u; �/, then �:d D ı.s; �/ at all times afterward.

Path-relaxation property (Lemma 24.15) If p D h�0; �1; : : : ; �ki is a shortest path from s D �0 to �k, and we relax the edges of p in the order .�0; �1/; .�1; �2/; : : : ; .�k�1; �k/, then �k:d D ı.s; �k/. This property holds regardless of any other relaxation steps that occur, even if they are intermixed with relaxations of the edges of p.

Predecessor-subgraph property (Lemma 24.17) Once �:d D ı.s; �/ for all � 2 V , the predecessor subgraph is a shortest-paths tree rooted at s.

Chapter outline

Section 24.1 presents the Bellman-Ford algorithm, which solves the single-source shortest-paths problem in the general case in which edges can have negative weight. The Bellman-Ford algorithm is remarkably simple, and it has the further benefit of detecting whether a negative-weight cycle is reachable from the source. Sec- tion 24.2 gives a linear-time algorithm for computing shortest paths from a single source in a directed acyclic graph. Section 24.3 covers Dijkstra’s algorithm, which has a lower running time than the Bellman-Ford algorithm but requires the edge weights to be nonnegative. Section 24.4 shows how we can use the Bellman-Ford algorithm to solve a special case of linear programming. Finally, Section 24.5 proves the properties of shortest paths and relaxation stated above.

We require some conventions for doing arithmetic with infinities. We shall assume that for any real number a ≠ −∞, we have a + ∞ = ∞ + a = ∞. Also, to make our proofs hold in the presence of negative-weight cycles, we shall assume that for any real number a ≠ ∞, we have a + (−∞) = (−∞) + a = −∞.

All algorithms in this chapter assume that the directed graph G is stored in the adjacency-list representation. Additionally, stored with each edge is its weight, so that as we traverse each adjacency list, we can determine the edge weights in O.1/ time per edge.


24.1 The Bellman-Ford algorithm

The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case in which edge weights may be negative. Given a weighted, di- rected graph G D .V; E/ with source s and weight function w W E ! R, the Bellman-Ford algorithm returns a boolean value indicating whether or not there is a negative-weight cycle that is reachable from the source. If there is such a cy- cle, the algorithm indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights.

The algorithm relaxes edges, progressively decreasing an estimate �:d on the weight of a shortest path from the source s to each vertex � 2 V until it achieves the actual shortest-path weight ı.s; �/. The algorithm returns TRUE if and only if the graph contains no negative-weight cycles that are reachable from the source.

BELLMAN-FORD(G, w, s)
1  INITIALIZE-SINGLE-SOURCE(G, s)
2  for i = 1 to |G.V| − 1
3      for each edge (u, v) ∈ G.E
4          RELAX(u, v, w)
5  for each edge (u, v) ∈ G.E
6      if v.d > u.d + w(u, v)
7          return FALSE
8  return TRUE

Figure 24.4 shows the execution of the Bellman-Ford algorithm on a graph with 5 vertices. After initializing the d and � values of all vertices in line 1, the algorithm makes jV j � 1 passes over the edges of the graph. Each pass is one iteration of the for loop of lines 2–4 and consists of relaxing each edge of the graph once. Figures 24.4(b)–(e) show the state of the algorithm after each of the four passes over the edges. After making jV j � 1 passes, lines 5–8 check for a negative-weight cycle and return the appropriate boolean value. (We’ll see a little later why this check works.)

The Bellman-Ford algorithm runs in time O.VE/, since the initialization in line 1 takes ‚.V / time, each of the jV j � 1 passes over the edges in lines 2–4 takes ‚.E/ time, and the for loop of lines 5–7 takes O.E/ time.
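A self-contained Python sketch of the procedure (an illustration under the assumption that the graph is given as a list of (u, v, w) edge triples, not the book's code) makes the |V| − 1 passes and the final negative-cycle check explicit.

    import math

    def bellman_ford(vertices, edges, s):
        """edges: list of (u, v, w) triples.  Returns (ok, d, pi), where ok is
        False if a negative-weight cycle is reachable from s."""
        d = {v: math.inf for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        for _ in range(len(vertices) - 1):     # |V| - 1 passes over the edges
            for u, v, w in edges:              # relax every edge once per pass
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
                    pi[v] = u
        for u, v, w in edges:                  # check for a reachable negative cycle
            if d[u] + w < d[v]:
                return False, d, pi
        return True, d, pi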

To prove the correctness of the Bellman-Ford algorithm, we start by showing that if there are no negative-weight cycles, the algorithm computes correct shortest-path weights for all vertices reachable from the source.

Figure 24.4 The execution of the Bellman-Ford algorithm. The source is vertex s. The d val- ues appear within the vertices, and shaded edges indicate predecessor values: if edge .u; �/ is shaded, then �:� D u. In this particular example, each pass relaxes the edges in the order .t; x/; .t; y/; .t; ´/; .x; t/; .y; x/; .y; ´/; .´; x/; .´; s/; .s; t/; .s; y/. (a) The situation just before the first pass over the edges. (b)–(e) The situation after each successive pass over the edges. The d and � values in part (e) are the final values. The Bellman-Ford algorithm returns TRUE in this example.

Lemma 24.2 Let G D .V; E/ be a weighted, directed graph with source s and weight func- tion w W E ! R, and assume that G contains no negative-weight cycles that are reachable from s. Then, after the jV j � 1 iterations of the for loop of lines 2–4 of BELLMAN-FORD, we have �:d D ı.s; �/ for all vertices � that are reachable from s.

Proof We prove the lemma by appealing to the path-relaxation property. Con- sider any vertex � that is reachable from s, and let p D h�0; �1; : : : ; �ki, where �0 D s and �k D �, be any shortest path from s to �. Because shortest paths are simple, p has at most jV j � 1 edges, and so k � jV j � 1. Each of the jV j � 1 itera- tions of the for loop of lines 2–4 relaxes all jEj edges. Among the edges relaxed in the i th iteration, for i D 1; 2; : : : ; k, is .�i�1; �i /. By the path-relaxation property, therefore, �:d D �k :d D ı.s; �k/ D ı.s; �/.


Corollary 24.3 Let G = (V, E) be a weighted, directed graph with source vertex s and weight function w : E → R, and assume that G contains no negative-weight cycles that are reachable from s. Then, for each vertex v ∈ V, there is a path from s to v if and only if BELLMAN-FORD terminates with v.d < ∞ when it is run on G.

24.3-3 Suppose we change line 4 of Dijkstra's algorithm to "while |Q| > 1". This change causes the while loop to execute |V| − 1 times instead of |V| times. Is this proposed algorithm correct?

24.3-4 Professor Gaedel has written a program that he claims implements Dijkstra’s al- gorithm. The program produces �:d and �:� for each vertex � 2 V . Give an O.V CE/-time algorithm to check the output of the professor’s program. It should determine whether the d and � attributes match those of some shortest-paths tree. You may assume that all edge weights are nonnegative.

24.3-5 Professor Newman thinks that he has worked out a simpler proof of correctness for Dijkstra’s algorithm. He claims that Dijkstra’s algorithm relaxes the edges of every shortest path in the graph in the order in which they appear on the path, and therefore the path-relaxation property applies to every vertex reachable from the source. Show that the professor is mistaken by constructing a directed graph for which Dijkstra’s algorithm could relax the edges of a shortest path out of order.

24.3-6 We are given a directed graph G D .V; E/ on which each edge .u; �/ 2 E has an associated value r.u; �/, which is a real number in the range 0 � r.u; �/ � 1 that represents the reliability of a communication channel from vertex u to vertex �. We interpret r.u; �/ as the probability that the channel from u to � will not fail, and we assume that these probabilities are independent. Give an efficient algorithm to find the most reliable path between two given vertices.

24.3-7 Let G D .V; E/ be a weighted, directed graph with positive weight function w W E ! f1; 2; : : : ; W g for some positive integer W , and assume that no two ver- tices have the same shortest-path weights from source vertex s. Now suppose that we define an unweighted, directed graph G0 D .V [ V 0; E 0/ by replacing each edge .u; �/ 2 E with w.u; �/ unit-weight edges in series. How many vertices does G0 have? Now suppose that we run a breadth-first search on G0. Show that


the order in which the breadth-first search of G0 colors vertices in V black is the same as the order in which Dijkstra’s algorithm extracts the vertices of V from the priority queue when it runs on G.

24.3-8 Let G D .V; E/ be a weighted, directed graph with nonnegative weight function w W E ! f0; 1; : : : ; W g for some nonnegative integer W . Modify Dijkstra’s algo- rithm to compute the shortest paths from a given source vertex s in O.W V C E/ time.

24.3-9 Modify your algorithm from Exercise 24.3-8 to run in O..V C E/ lg W / time. (Hint: How many distinct shortest-path estimates can there be in V � S at any point in time?)

24.3-10 Suppose that we are given a weighted, directed graph G D .V; E/ in which edges that leave the source vertex s may have negative weights, all other edge weights are nonnegative, and there are no negative-weight cycles. Argue that Dijkstra’s algorithm correctly finds shortest paths from s in this graph.

24.4 Difference constraints and shortest paths

Chapter 29 studies the general linear-programming problem, in which we wish to optimize a linear function subject to a set of linear inequalities. In this section, we investigate a special case of linear programming that we reduce to finding shortest paths from a single source. We can then solve the single-source shortest-paths problem that results by running the Bellman-Ford algorithm, thereby also solving the linear-programming problem.

Linear programming

In the general linear-programming problem, we are given an m × n matrix A, an m-vector b, and an n-vector c. We wish to find a vector x of n elements that maximizes the objective function Σ_{i=1}^{n} c_i x_i subject to the m constraints given by Ax ≤ b.

Although the simplex algorithm, which is the focus of Chapter 29, does not

always run in time polynomial in the size of its input, there are other linear- programming algorithms that do run in polynomial time. We offer here two reasons to understand the setup of linear-programming problems. First, if we know that we


can cast a given problem as a polynomial-sized linear-programming problem, then we immediately have a polynomial-time algorithm to solve the problem. Second, faster algorithms exist for many special cases of linear programming. For exam- ple, the single-pair shortest-path problem (Exercise 24.4-4) and the maximum-flow problem (Exercise 26.1-5) are special cases of linear programming.

Sometimes we don’t really care about the objective function; we just wish to find any feasible solution, that is, any vector x that satisfies Ax � b, or to determine that no feasible solution exists. We shall focus on one such feasibility problem.

Systems of difference constraints

In a system of difference constraints, each row of the linear-programming matrix A contains one 1 and one �1, and all other entries of A are 0. Thus, the constraints given by Ax � b are a set of m difference constraints involving n unknowns, in which each constraint is a simple linear inequality of the form

xj − xi ≤ bk , where 1 ≤ i, j ≤ n, i ≠ j, and 1 ≤ k ≤ m.

For example, consider the problem of finding a 5-vector x = (xi) that satisfies

    [  1  −1   0   0   0 ]              [  0 ]
    [  1   0   0   0  −1 ]   [ x1 ]     [ −1 ]
    [  0   1   0   0  −1 ]   [ x2 ]     [  1 ]
    [ −1   0   1   0   0 ]   [ x3 ]     [  5 ]
    [ −1   0   0   1   0 ] · [ x4 ]  ≤  [  4 ]
    [  0   0  −1   1   0 ]   [ x5 ]     [ −1 ]
    [  0   0  −1   0   1 ]              [ −3 ]
    [  0   0   0  −1   1 ]              [ −3 ]

This problem is equivalent to finding values for the unknowns x1; x2; x3; x4; x5, satisfying the following 8 difference constraints:

    x1 − x2 ≤  0 ,   (24.3)
    x1 − x5 ≤ −1 ,   (24.4)
    x2 − x5 ≤  1 ,   (24.5)
    x3 − x1 ≤  5 ,   (24.6)
    x4 − x1 ≤  4 ,   (24.7)
    x4 − x3 ≤ −1 ,   (24.8)
    x5 − x3 ≤ −3 ,   (24.9)
    x5 − x4 ≤ −3 .   (24.10)


One solution to this problem is x D .�5;�3; 0;�1;�4/, which you can verify di- rectly by checking each inequality. In fact, this problem has more than one solution. Another is x 0 D .0; 2; 5; 4; 1/. These two solutions are related: each component of x 0 is 5 larger than the corresponding component of x. This fact is not mere coincidence.
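One quick way to check such claims is to test each inequality directly; the snippet below (illustrative only) encodes constraints (24.3)–(24.10) as (j, i, b) triples meaning xj − xi ≤ b and verifies both solutions.

    # Each triple (j, i, b) encodes the constraint x_j - x_i <= b (1-based indices).
    constraints = [(1, 2, 0), (1, 5, -1), (2, 5, 1), (3, 1, 5),
                   (4, 1, 4), (4, 3, -1), (5, 3, -3), (5, 4, -3)]

    def feasible(x):
        """x is a tuple (x1, ..., x5); return True if every constraint holds."""
        return all(x[j - 1] - x[i - 1] <= b for j, i, b in constraints)

    print(feasible((-5, -3, 0, -1, -4)))   # True
    print(feasible((0, 2, 5, 4, 1)))       # True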

Lemma 24.8 Let x D .x1; x2; : : : ; xn/ be a solution to a system Ax � b of difference con- straints, and let d be any constant. Then x C d D .x1 C d; x2 C d; : : : ; xn C d/ is a solution to Ax � b as well.

Proof For each xi and xj , we have .xj C d/ � .xi C d/ D xj � xi . Thus, if x satisfies Ax � b, so does x C d .

Systems of difference constraints occur in many different applications. For ex- ample, the unknowns xi may be times at which events are to occur. Each constraint states that at least a certain amount of time, or at most a certain amount of time, must elapse between two events. Perhaps the events are jobs to be performed dur- ing the assembly of a product. If we apply an adhesive that takes 2 hours to set at time x1 and we have to wait until it sets to install a part at time x2, then we have the constraint that x2 � x1 C 2 or, equivalently, that x1 � x2 � �2. Alternatively, we might require that the part be installed after the adhesive has been applied but no later than the time that the adhesive has set halfway. In this case, we get the pair of constraints x2 � x1 and x2 � x1C1 or, equivalently, x1�x2 � 0 and x2�x1 � 1.

Constraint graphs

We can interpret systems of difference constraints from a graph-theoretic point of view. In a system Ax � b of difference constraints, we view the m n linear-programming matrix A as the transpose of an incidence matrix (see Exer- cise 22.1-7) for a graph with n vertices and m edges. Each vertex �i in the graph, for i D 1; 2; : : : ; n, corresponds to one of the n unknown variables xi . Each di- rected edge in the graph corresponds to one of the m inequalities involving two unknowns.

More formally, given a system Ax ≤ b of difference constraints, the corresponding constraint graph is a weighted, directed graph G = (V, E), where V = {v0, v1, ..., vn} and

    E = {(vi, vj) : xj − xi ≤ bk is a constraint}
        ∪ {(v0, v1), (v0, v2), (v0, v3), ..., (v0, vn)} .

Figure 24.8 The constraint graph corresponding to the system (24.3)–(24.10) of difference con- straints. The value of ı.�0; �i / appears in each vertex �i . One feasible solution to the system is x D .�5;�3; 0;�1;�4/.

The constraint graph contains the additional vertex �0, as we shall see shortly, to guarantee that the graph has some vertex which can reach all other vertices. Thus, the vertex set V consists of a vertex �i for each unknown xi , plus an additional vertex �0. The edge set E contains an edge for each difference constraint, plus an edge .�0; �i / for each unknown xi . If xj � xi � bk is a difference constraint, then the weight of edge .�i ; �j / is w.�i ; �j / D bk. The weight of each edge leav- ing �0 is 0. Figure 24.8 shows the constraint graph for the system (24.3)–(24.10) of difference constraints.
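Combining this construction with the Bellman-Ford algorithm of Section 24.1 gives a compact solver; Theorem 24.9 below justifies the recipe. The Python sketch that follows is illustrative only: constraints are (j, i, b) triples meaning xj − xi ≤ b, and vertex 0 plays the role of v0.

    import math

    def solve_difference_constraints(n, constraints):
        """constraints: list of (j, i, b) triples meaning x_j - x_i <= b,
        with 1 <= i, j <= n.  Returns a feasible assignment [x_1, ..., x_n]
        or None if the constraint graph contains a negative-weight cycle."""
        # Edge (v_i, v_j) of weight b for each constraint, plus 0-weight edges from v_0.
        edges = [(i, j, b) for (j, i, b) in constraints]
        edges += [(0, j, 0) for j in range(1, n + 1)]

        d = [math.inf] * (n + 1)
        d[0] = 0                               # v_0 is the source
        for _ in range(n):                     # |V| - 1 = n Bellman-Ford passes
            for u, v, w in edges:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
        for u, v, w in edges:                  # negative cycle => infeasible system
            if d[u] + w < d[v]:
                return None
        return d[1:]                           # x_i = delta(v_0, v_i)

For the system (24.3)–(24.10) above, solve_difference_constraints(5, constraints) with the eight (j, i, b) triples returns [-5, -3, 0, -1, -4], matching the δ(v0, vi) values shown in Figure 24.8.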

The following theorem shows that we can find a solution to a system of differ- ence constraints by finding shortest-path weights in the corresponding constraint graph.

Theorem 24.9 Given a system Ax ≤ b of difference constraints, let G = (V, E) be the corresponding constraint graph. If G contains no negative-weight cycles, then

    x = (δ(v0, v1), δ(v0, v2), δ(v0, v3), ..., δ(v0, vn))        (24.11)

is a feasible solution for the system. If G contains a negative-weight cycle, then there is no feasible solution for the system.

Proof We first show that if the constraint graph contains no negative-weight cycles, then equation (24.11) gives a feasible solution. Consider any edge .�i ; �j / 2 E. By the triangle inequality, ı.�0; �j / � ı.�0; �i / C w.�i ; �j / or, equivalently, ı.�0; �j / � ı.�0; �i / � w.�i ; �j /. Thus, letting xi D ı.�0; �i / and


xj D ı.�0; �j / satisfies the difference constraint xj � xi � w.�i ; �j / that corre- sponds to edge .�i ; �j /.

Now we show that if the constraint graph contains a negative-weight cycle, then the system of difference constraints has no feasible solution. Without loss of gen- erality, let the negative-weight cycle be c D h�1; �2; : : : ; �ki, where �1 D �k. (The vertex �0 cannot be on cycle c, because it has no entering edges.) Cycle c corresponds to the following difference constraints:

    x2 − x1 ≤ w(v1, v2) ,
    x3 − x2 ≤ w(v2, v3) ,
        ⋮
    xk−1 − xk−2 ≤ w(vk−2, vk−1) ,
    xk − xk−1 ≤ w(vk−1, vk) .

We will assume that x has a solution satisfying each of these k inequalities and then derive a contradiction. The solution must also satisfy the inequality that results when we sum the k inequalities together. If we sum the left-hand sides, each unknown xi is added in once and subtracted out once (remember that v1 = vk implies x1 = xk), so that the left-hand side of the sum is 0. The right-hand side sums to w(c), and thus we obtain 0 ≤ w(c). But since c is a negative-weight cycle, w(c) < 0, and we obtain the contradiction that 0 ≤ w(c) < 0.

If, just prior to relaxing edge (u, v), we have v.d > u.d + w(u, v), then v.d = u.d + w(u, v) afterward. If, instead, v.d ≤ u.d + w(u, v) just before the relaxation, then neither u.d nor v.d changes, and so v.d ≤ u.d + w(u, v) afterward.

Lemma 24.14 (Convergence property) Let G D .V; E/ be a weighted, directed graph with weight function w W E ! R, let s 2 V be a source vertex, and let s � u ! � be a shortest path in G for


some vertices u; � 2 V . Suppose that G is initialized by INITIALIZE-SINGLE- SOURCE.G; s/ and then a sequence of relaxation steps that includes the call RELAX.u; �; w/ is executed on the edges of G. If u:d D ı.s; u/ at any time prior to the call, then �:d D ı.s; �/ at all times after the call.

Proof By the upper-bound property, if u:d D ı.s; u/ at some point prior to re- laxing edge .u; �/, then this equality holds thereafter. In particular, after relaxing edge .u; �/, we have

    v.d ≤ u.d + w(u, v)            (by Lemma 24.13)
        = δ(s, u) + w(u, v)
        = δ(s, v)                  (by Lemma 24.1) .

By the upper-bound property, �:d � ı.s; �/, from which we conclude that �:d D ı.s; �/, and this equality is maintained thereafter.

Lemma 24.15 (Path-relaxation property) Let G D .V; E/ be a weighted, directed graph with weight function w W E ! R, and let s 2 V be a source vertex. Consider any shortest path p D h�0; �1; : : : ; �ki from s D �0 to �k. If G is initialized by INITIALIZE-SINGLE-SOURCE.G; s/ and then a sequence of relaxation steps occurs that includes, in order, relaxing the edges .�0; �1/; .�1; �2/; : : : ; .�k�1; �k/, then �k:d D ı.s; �k/ after these relaxations and at all times afterward. This property holds no matter what other edge relaxations occur, including relaxations that are intermixed with relaxations of the edges of p.

Proof We show by induction that after the i th edge of path p is relaxed, we have �i :d D ı.s; �i /. For the basis, i D 0, and before any edges of p have been relaxed, we have from the initialization that �0:d D s:d D 0 D ı.s; s/. By the upper-bound property, the value of s:d never changes after initialization.

For the inductive step, we assume that �i�1:d D ı.s; �i�1/, and we examine what happens when we relax edge .�i�1; �i/. By the convergence property, after relaxing this edge, we have �i :d D ı.s; �i /, and this equality is maintained at all times thereafter.

Relaxation and shortest-paths trees

We now show that once a sequence of relaxations has caused the shortest-path es- timates to converge to shortest-path weights, the predecessor subgraph G� induced by the resulting � values is a shortest-paths tree for G. We start with the follow- ing lemma, which shows that the predecessor subgraph always forms a rooted tree whose root is the source.


Lemma 24.16 Let G D .V; E/ be a weighted, directed graph with weight function w W E ! R, let s 2 V be a source vertex, and assume that G contains no negative-weight cycles that are reachable from s. Then, after the graph is initialized by INITIALIZE- SINGLE-SOURCE.G; s/, the predecessor subgraph G� forms a rooted tree with root s, and any sequence of relaxation steps on edges of G maintains this property as an invariant.

Proof Initially, the only vertex in G� is the source vertex, and the lemma is triv- ially true. Consider a predecessor subgraph G� that arises after a sequence of relaxation steps. We shall first prove that G� is acyclic. Suppose for the sake of contradiction that some relaxation step creates a cycle in the graph G� . Let the cy- cle be c D h�0; �1; : : : ; �ki, where �k D �0. Then, �i :� D �i�1 for i D 1; 2; : : : ; k and, without loss of generality, we can assume that relaxing edge .�k�1; �k/ created the cycle in G� .

We claim that all vertices on cycle c are reachable from the source s. Why? Each vertex on c has a non-NIL predecessor, and so each vertex on c was assigned a finite shortest-path estimate when it was assigned its non-NIL � value. By the upper-bound property, each vertex on cycle c has a finite shortest-path weight, which implies that it is reachable from s.

We shall examine the shortest-path estimates on c just prior to the call RELAX.�k�1; �k; w/ and show that c is a negative-weight cycle, thereby contra- dicting the assumption that G contains no negative-weight cycles that are reachable from the source. Just before the call, we have �i :� D �i�1 for i D 1; 2; : : : ; k � 1. Thus, for i D 1; 2; : : : ; k � 1, the last update to �i :d was by the assignment �i :d D �i�1:dCw.�i�1; �i /. If �i�1:d changed since then, it decreased. Therefore, just before the call RELAX.�k�1; �k ; w/, we have

    vi.d ≤ vi−1.d + w(vi−1, vi)   for all i = 1, 2, ..., k − 1 .      (24.12)

Because vk.π is changed by the call, immediately beforehand we also have the strict inequality

    vk.d > vk−1.d + w(vk−1, vk) .

Summing this strict inequality with the k − 1 inequalities (24.12), we obtain the sum of the shortest-path estimates around cycle c:

    Σ_{i=1}^{k} vi.d > Σ_{i=1}^{k} (vi−1.d + w(vi−1, vi))
                     = Σ_{i=1}^{k} vi−1.d + Σ_{i=1}^{k} w(vi−1, vi) .

Figure 24.9 Showing that a simple path in G� from source s to vertex � is unique. If there are two paths p1 (s � u � x ! ´ � �) and p2 (s � u � y ! ´ � �), where x ¤ y, then ´:� D x and ´:� D y, a contradiction.

But

    Σ_{i=1}^{k} vi.d = Σ_{i=1}^{k} vi−1.d ,

since each vertex in the cycle c appears exactly once in each summation. This equality implies

    0 > Σ_{i=1}^{k} w(vi−1, vi) .

Thus, the sum of weights around the cycle c is negative, which provides the desired contradiction.

We have now proven that G� is a directed, acyclic graph. To show that it forms a rooted tree with root s, it suffices (see Exercise B.5-2) to prove that for each vertex � 2 V� , there is a unique simple path from s to � in G� .

We first must show that a path from s exists for each vertex in V� . The ver- tices in V� are those with non-NIL � values, plus s. The idea here is to prove by induction that a path exists from s to all vertices in V� . We leave the details as Exercise 24.5-6.

To complete the proof of the lemma, we must now show that for any vertex � 2 V� , the graph G� contains at most one simple path from s to �. Suppose other- wise. That is, suppose that, as Figure 24.9 illustrates, G� contains two simple paths from s to some vertex �: p1, which we decompose into s � u � x ! ´ � �, and p2, which we decompose into s � u � y ! ´ � �, where x ¤ y (though u could be s and ´ could be �). But then, ´:� D x and ´:� D y, which implies the contradiction that x D y. We conclude that G� contains a unique simple path from s to �, and thus G� forms a rooted tree with root s.

We can now show that if, after we have performed a sequence of relaxation steps, all vertices have been assigned their true shortest-path weights, then the predeces- sor subgraph G� is a shortest-paths tree.


Lemma 24.17 (Predecessor-subgraph property) Let G D .V; E/ be a weighted, directed graph with weight function w W E ! R, let s 2 V be a source vertex, and assume that G contains no negative-weight cycles that are reachable from s. Let us call INITIALIZE-SINGLE-SOURCE.G; s/ and then execute any sequence of relaxation steps on edges of G that produces �:d D ı.s; �/ for all � 2 V . Then, the predecessor subgraph G� is a shortest-paths tree rooted at s.

Proof We must prove that the three properties of shortest-paths trees given on page 647 hold for G� . To show the first property, we must show that V� is the set of vertices reachable from s. By definition, a shortest-path weight ı.s; �/ is finite if and only if � is reachable from s, and thus the vertices that are reachable from s are exactly those with finite d values. But a vertex � 2 V � fsg has been assigned a finite value for �:d if and only if �:� ¤ NIL. Thus, the vertices in V� are exactly those reachable from s.

The second property follows directly from Lemma 24.16. It remains, therefore, to prove the last property of shortest-paths trees: for each

vertex � 2 V� , the unique simple path s p� � in G� is a shortest path from s to � in G. Let p D h�0; �1; : : : ; �ki, where �0 D s and �k D �. For i D 1; 2; : : : ; k, we have both �i :d D ı.s; �i / and �i :d � �i�1:d C w.�i�1; �i /, from which we conclude w.�i�1; �i / � ı.s; �i / � ı.s; �i�1/. Summing the weights along path p yields

    w(p) = Σ_{i=1}^{k} w(vi−1, vi)
         ≥ Σ_{i=1}^{k} (δ(s, vi) − δ(s, vi−1))
         = δ(s, vk) − δ(s, v0)      (because the sum telescopes)
         = δ(s, vk)                 (because δ(s, v0) = δ(s, s) = 0) .

Thus, w.p/ � ı.s; �k/. Since ı.s; �k/ is a lower bound on the weight of any path from s to �k, we conclude that w.p/ D ı.s; �k/, and thus p is a shortest path from s to � D �k.

Exercises

24.5-1 Give two shortest-paths trees for the directed graph of Figure 24.2 (on page 648) other than the two shown.


24.5-2 Give an example of a weighted, directed graph G D .V; E/ with weight function w W E ! R and source vertex s such that G satisfies the following property: For every edge .u; �/ 2 E, there is a shortest-paths tree rooted at s that contains .u; �/ and another shortest-paths tree rooted at s that does not contain .u; �/.

24.5-3 Embellish the proof of Lemma 24.10 to handle cases in which shortest-path weights are ∞ or −∞.

24.5-4 Let G = (V, E) be a weighted, directed graph with source vertex s, and let G be initialized by INITIALIZE-SINGLE-SOURCE(G, s). Prove that if a sequence of relaxation steps sets s.π to a non-NIL value, then G contains a negative-weight cycle.

24.5-5 Let G D .V; E/ be a weighted, directed graph with no negative-weight edges. Let s 2 V be the source vertex, and suppose that we allow �:� to be the predecessor of � on any shortest path to � from source s if � 2 V � fsg is reachable from s, and NIL otherwise. Give an example of such a graph G and an assignment of � values that produces a cycle in G� . (By Lemma 24.16, such an assignment cannot be produced by a sequence of relaxation steps.)

24.5-6 Let G D .V; E/ be a weighted, directed graph with weight function w W E ! R and no negative-weight cycles. Let s 2 V be the source vertex, and let G be initial- ized by INITIALIZE-SINGLE-SOURCE.G; s/. Prove that for every vertex � 2 V� , there exists a path from s to � in G� and that this property is maintained as an invariant over any sequence of relaxations.

24.5-7 Let G = (V, E) be a weighted, directed graph that contains no negative-weight cycles. Let s ∈ V be the source vertex, and let G be initialized by INITIALIZE-SINGLE-SOURCE(G, s). Prove that there exists a sequence of |V| − 1 relaxation steps that produces v.d = δ(s, v) for all v ∈ V.

24.5-8 Let G be an arbitrary weighted, directed graph with a negative-weight cycle reachable from the source vertex s. Show how to construct an infinite sequence of relaxations of the edges of G such that every relaxation causes a shortest-path estimate to change.


Problems

24-1 Yen's improvement to Bellman-Ford Suppose that we order the edge relaxations in each pass of the Bellman-Ford algorithm as follows. Before the first pass, we assign an arbitrary linear order v1, v2, ..., v|V| to the vertices of the input graph G = (V, E). Then, we partition the edge set E into Ef ∪ Eb, where Ef = {(vi, vj) ∈ E : i < j} and Eb = {(vi, vj) ∈ E : i > j}. (Assume that G contains no self-loops, so that every edge is in either Ef or Eb.) Define Gf = (V, Ef) and Gb = (V, Eb).

a. Prove that Gf is acyclic with topological sort ⟨v1, v2, ..., v|V|⟩ and that Gb is acyclic with topological sort ⟨v|V|, v|V|−1, ..., v1⟩.

Suppose that we implement each pass of the Bellman-Ford algorithm in the following way. We visit each vertex in the order v1, v2, ..., v|V|, relaxing edges of Ef that leave the vertex. We then visit each vertex in the order v|V|, v|V|−1, ..., v1, relaxing edges of Eb that leave the vertex.

b. Prove that with this scheme, if G contains no negative-weight cycles that are reachable from the source vertex s, then after only djV j =2e passes over the edges, �:d D ı.s; �/ for all vertices � 2 V .

c. Does this scheme improve the asymptotic running time of the Bellman-Ford algorithm?

24-2 Nesting boxes A d-dimensional box with dimensions (x1, x2, ..., xd) nests within another box with dimensions (y1, y2, ..., yd) if there exists a permutation π on {1, 2, ..., d} such that xπ(1) < y1, xπ(2) < y2, ..., xπ(d) < yd.

Analyze the running time of your algorithm.

b. Give an efficient algorithm to print out such a sequence if one exists. Analyze the running time of your algorithm.

24-4 Gabow’s scaling algorithm for single-source shortest paths A scaling algorithm solves a problem by initially considering only the highest- order bit of each relevant input value (such as an edge weight). It then refines the initial solution by looking at the two highest-order bits. It progressively looks at more and more high-order bits, refining the solution each time, until it has exam- ined all bits and computed the correct solution.

In this problem, we examine an algorithm for computing the shortest paths from a single source by scaling edge weights. We are given a directed graph G D .V; E/ with nonnegative integer edge weights w. Let W D max.u;�/2E fw.u; �/g. Our goal is to develop an algorithm that runs in O.E lg W / time. We assume that all vertices are reachable from the source.

The algorithm uncovers the bits in the binary representation of the edge weights one at a time, from the most significant bit to the least significant bit. Specifically, let k = ⌈lg(W + 1)⌉ be the number of bits in the binary representation of W, and for i = 1, 2, ..., k, let wi(u, v) = ⌊w(u, v) / 2^(k−i)⌋. That is, wi(u, v) is the "scaled-down" version of w(u, v) given by the i most significant bits of w(u, v). (Thus, wk(u, v) = w(u, v) for all (u, v) ∈ E.) For example, if k = 5 and w(u, v) = 25, which has the binary representation ⟨11001⟩, then w3(u, v) = ⟨110⟩ = 6. As another example with k = 5, if w(u, v) = ⟨00100⟩ = 4, then w3(u, v) = ⟨001⟩ = 1. Let us define δi(u, v) as the shortest-path weight from vertex u to vertex v using weight function wi. Thus, δk(u, v) = δ(u, v) for all u, v ∈ V. For a given source vertex s, the scaling algorithm first computes the


shortest-path weights ı1.s; �/ for all � 2 V , then computes ı2.s; �/ for all � 2 V , and so on, until it computes ık.s; �/ for all � 2 V . We assume throughout that jEj � jV j � 1, and we shall see that computing ıi from ıi�1 takes O.E/ time, so that the entire algorithm takes O.kE/ D O.E lg W / time. a. Suppose that for all vertices � 2 V , we have ı.s; �/ � jEj. Show that we can

compute ı.s; �/ for all � 2 V in O.E/ time.

b. Show that we can compute δ_1(s, v) for all v ∈ V in O(E) time.

Let us now focus on computing δ_i from δ_{i-1}.

c. Prove that for i = 2, 3, ..., k, we have either w_i(u, v) = 2w_{i-1}(u, v) or w_i(u, v) = 2w_{i-1}(u, v) + 1. Then, prove that

2δ_{i-1}(s, v) ≤ δ_i(s, v) ≤ 2δ_{i-1}(s, v) + |V| - 1

for all v ∈ V.

d. Define for i = 2, 3, ..., k and all (u, v) ∈ E,

ŵ_i(u, v) = w_i(u, v) + 2δ_{i-1}(s, u) - 2δ_{i-1}(s, v).

Prove that for i = 2, 3, ..., k and all u, v ∈ V, the "reweighted" value ŵ_i(u, v) of edge (u, v) is a nonnegative integer.

e. Now, define δ̂_i(s, v) as the shortest-path weight from s to v using the weight function ŵ_i. Prove that for i = 2, 3, ..., k and all v ∈ V,

δ_i(s, v) = δ̂_i(s, v) + 2δ_{i-1}(s, v)

and that δ̂_i(s, v) ≤ |E|.

f. Show how to compute δ_i(s, v) from δ_{i-1}(s, v) for all v ∈ V in O(E) time, and conclude that we can compute δ(s, v) for all v ∈ V in O(E lg W) time.
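The overall phase structure the problem describes can be sketched in Python as follows. This is an illustration only, not the O(E lg W)-time solution the problem asks for: it runs an ordinary binary-heap Dijkstra on the reweighted edges ŵ_i in each phase (parts (d) and (e) guarantee that these values are nonnegative and that δ_i can be recovered from δ̂_i), rather than the O(E)-per-phase routine of parts (a), (b), and (f). The function name and input format are assumptions, and all vertices are assumed reachable from s.

import heapq

def gabow_sssp(n, adj, s):
    # adj[u] is a list of (v, w) pairs with nonnegative integer weights w.
    W = max((w for u in range(n) for _, w in adj[u]), default=0)
    k = max(1, W.bit_length())            # k = ceil(lg(W + 1))
    delta = [0] * n                       # "delta_0": with all weights scaled to 0
    for i in range(1, k + 1):
        shift = k - i                     # w_i(u, v) = w(u, v) >> (k - i)
        dist = [float('inf')] * n
        dist[s] = 0
        heap = [(0, s)]
        while heap:                       # Dijkstra on the reweighted edges w_hat_i
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                wh = (w >> shift) + 2 * delta[u] - 2 * delta[v]
                if d + wh < dist[v]:
                    dist[v] = d + wh
                    heapq.heappush(heap, (dist[v], v))
        # part (e): delta_i(s, v) = delta_hat_i(s, v) + 2 * delta_{i-1}(s, v)
        delta = [dist[v] + 2 * delta[v] for v in range(n)]
    return delta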

24-5 Karp’s minimum mean-weight cycle algorithm Let G D .V; E/ be a directed graph with weight function w W E ! R, and let n D jV j. We define the mean weight of a cycle c D he1; e2; : : : ; eki of edges in E to be

.c/ D 1 k

kX iD1

w.ei/ :


Let μ* = min_c μ(c), where c ranges over all directed cycles in G. We call a cycle c for which μ(c) = μ* a minimum mean-weight cycle. This problem investigates an efficient algorithm for computing μ*.

Assume without loss of generality that every vertex v ∈ V is reachable from a source vertex s ∈ V. Let δ(s, v) be the weight of a shortest path from s to v, and let δ_k(s, v) be the weight of a shortest path from s to v consisting of exactly k edges. If there is no path from s to v with exactly k edges, then δ_k(s, v) = ∞.

a. Show that if μ* = 0, then G contains no negative-weight cycles and δ(s, v) = min_{0≤k≤n-1} δ_k(s, v) for all vertices v ∈ V.

b. Show that if μ* = 0, then

max_{0≤k≤n-1} (δ_n(s, v) - δ_k(s, v)) / (n - k) ≥ 0

for all vertices v ∈ V. (Hint: Use both properties from part (a).)

c. Let c be a 0-weight cycle, and let u and v be any two vertices on c. Suppose that μ* = 0 and that the weight of the simple path from u to v along the cycle is x. Prove that δ(s, v) = δ(s, u) + x. (Hint: The weight of the simple path from v to u along the cycle is -x.)

d. Show that if μ* = 0, then on each minimum mean-weight cycle there exists a vertex v such that

max_{0≤k≤n-1} (δ_n(s, v) - δ_k(s, v)) / (n - k) = 0.

(Hint: Show how to extend a shortest path to any vertex on a minimum mean-weight cycle along the cycle to make a shortest path to the next vertex on the cycle.)

e. Show that if μ* = 0, then

min_{v∈V} max_{0≤k≤n-1} (δ_n(s, v) - δ_k(s, v)) / (n - k) = 0.

f. Show that if we add a constant t to the weight of each edge of G, then μ* increases by t. Use this fact to show that

μ* = min_{v∈V} max_{0≤k≤n-1} (δ_n(s, v) - δ_k(s, v)) / (n - k).

g. Give an O(VE)-time algorithm to compute μ*.
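For concreteness, here is a short Python sketch of the kind of algorithm part (g) asks for, built directly from the formula of part (f): compute δ_k(s, v) for k = 0, 1, ..., n by dynamic programming over the edges, then take the min-max. It is an illustration only; the helper name and edge-list format are assumptions, and it presumes every vertex is reachable from vertex 0, which the problem allows without loss of generality.

def karp_min_mean_cycle(n, edges):
    # edges: list of (u, v, w) triples; vertices are 0..n-1, source s = 0.
    INF = float('inf')
    d = [[INF] * n for _ in range(n + 1)]   # d[k][v] = delta_k(s, v)
    d[0][0] = 0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        ratios = [(d[n][v] - d[k][v]) / (n - k) for k in range(n) if d[k][v] < INF]
        if ratios:
            best = min(best, max(ratios))
    return None if best == INF else best     # None means G has no directed cycle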


24-6 Bitonic shortest paths
A sequence is bitonic if it monotonically increases and then monotonically decreases, or if by a circular shift it monotonically increases and then monotonically decreases. For example the sequences ⟨1, 4, 6, 8, 3, -2⟩, ⟨9, 2, -4, -10, -5⟩, and ⟨1, 2, 3, 4⟩ are bitonic, but ⟨1, 3, 12, 4, 2, 10⟩ is not bitonic. (See Problem 15-3 for the bitonic euclidean traveling-salesman problem.)

Suppose that we are given a directed graph G = (V, E) with weight function w: E → R, where all edge weights are unique, and we wish to find single-source shortest paths from a source vertex s. We are given one additional piece of information: for each vertex v ∈ V, the weights of the edges along any shortest path from s to v form a bitonic sequence.

Give the most efficient algorithm you can to solve this problem, and analyze its running time.

Chapter notes

Dijkstra’s algorithm [88] appeared in 1959, but it contained no mention of a priority queue. The Bellman-Ford algorithm is based on separate algorithms by Bellman [38] and Ford [109]. Bellman describes the relation of shortest paths to difference constraints. Lawler [224] describes the linear-time algorithm for shortest paths in a dag, which he considers part of the folklore.

When edge weights are relatively small nonnegative integers, we have more efficient algorithms to solve the single-source shortest-paths problem. The sequence of values returned by the EXTRACT-MIN calls in Dijkstra's algorithm monotonically increases over time. As discussed in the chapter notes for Chapter 6, in this case several data structures can implement the various priority-queue operations more efficiently than a binary heap or a Fibonacci heap. Ahuja, Mehlhorn, Orlin, and Tarjan [8] give an algorithm that runs in O(E + V √(lg W)) time on graphs with nonnegative edge weights, where W is the largest weight of any edge in the graph. The best bounds are by Thorup [337], who gives an algorithm that runs in O(E lg lg V) time, and by Raman [291], who gives an algorithm that runs in O(E + V · min{(lg V)^{1/3+ε}, (lg W)^{1/4+ε}}) time. These two algorithms use an amount of space that depends on the word size of the underlying machine. Although the amount of space used can be unbounded in the size of the input, it can be reduced to be linear in the size of the input using randomized hashing.

For undirected graphs with integer weights, Thorup [336] gives an O(V + E)-time algorithm for single-source shortest paths. In contrast to the algorithms mentioned in the previous paragraph, this algorithm is not an implementation of Dijkstra's algorithm, since the sequence of values returned by EXTRACT-MIN calls does not monotonically increase over time.

For graphs with negative edge weights, an algorithm due to Gabow and Tarjan [122] runs in O(√V E lg(VW)) time, and one by Goldberg [137] runs in O(√V E lg W) time, where W = max_{(u,v)∈E} {|w(u, v)|}.

Cherkassky, Goldberg, and Radzik [64] conducted extensive experiments comparing various shortest-path algorithms.

25 All-Pairs Shortest Paths

In this chapter, we consider the problem of finding shortest paths between all pairs of vertices in a graph. This problem might arise in making a table of distances between all pairs of cities for a road atlas. As in Chapter 24, we are given a weighted, directed graph G = (V, E) with a weight function w: E → R that maps edges to real-valued weights. We wish to find, for every pair of vertices u, v ∈ V, a shortest (least-weight) path from u to v, where the weight of a path is the sum of the weights of its constituent edges. We typically want the output in tabular form: the entry in u's row and v's column should be the weight of a shortest path from u to v.

We can solve an all-pairs shortest-paths problem by running a single-source shortest-paths algorithm |V| times, once for each vertex as the source. If all edge weights are nonnegative, we can use Dijkstra's algorithm. If we use the linear-array implementation of the min-priority queue, the running time is O(V³ + VE) = O(V³). The binary min-heap implementation of the min-priority queue yields a running time of O(VE lg V), which is an improvement if the graph is sparse. Alternatively, we can implement the min-priority queue with a Fibonacci heap, yielding a running time of O(V² lg V + VE).

If the graph has negative-weight edges, we cannot use Dijkstra's algorithm. Instead, we must run the slower Bellman-Ford algorithm once from each vertex. The resulting running time is O(V²E), which on a dense graph is O(V⁴). In this chapter we shall see how to do better. We also investigate the relation of the all-pairs shortest-paths problem to matrix multiplication and study its algebraic structure.

Unlike the single-source algorithms, which assume an adjacency-list representation of the graph, most of the algorithms in this chapter use an adjacency-matrix representation. (Johnson's algorithm for sparse graphs, in Section 25.3, uses adjacency lists.) For convenience, we assume that the vertices are numbered 1, 2, ..., |V|, so that the input is an n × n matrix W representing the edge weights of an n-vertex directed graph G = (V, E). That is, W = (w_ij), where


w_ij = 0                                    if i = j,
       the weight of directed edge (i, j)   if i ≠ j and (i, j) ∈ E,
       ∞                                    if i ≠ j and (i, j) ∉ E.     (25.1)

We allow negative-weight edges, but we assume for the time being that the input graph contains no negative-weight cycles.

The tabular output of the all-pairs shortest-paths algorithms presented in this chapter is an n × n matrix D = (d_ij), where entry d_ij contains the weight of a shortest path from vertex i to vertex j. That is, if we let δ(i, j) denote the shortest-path weight from vertex i to vertex j (as in Chapter 24), then d_ij = δ(i, j) at termination.

To solve the all-pairs shortest-paths problem on an input adjacency matrix, we need to compute not only the shortest-path weights but also a predecessor matrix Π = (π_ij), where π_ij is NIL if either i = j or there is no path from i to j, and otherwise π_ij is the predecessor of j on some shortest path from i. Just as the predecessor subgraph G_π from Chapter 24 is a shortest-paths tree for a given source vertex, the subgraph induced by the i-th row of the Π matrix should be a shortest-paths tree with root i. For each vertex i ∈ V, we define the predecessor subgraph of G for i as G_{π,i} = (V_{π,i}, E_{π,i}), where

V_{π,i} = {j ∈ V : π_ij ≠ NIL} ∪ {i}  and  E_{π,i} = {(π_ij, j) : j ∈ V_{π,i} - {i}}.

If G_{π,i} is a shortest-paths tree, then the following procedure, which is a modified version of the PRINT-PATH procedure from Chapter 22, prints a shortest path from vertex i to vertex j.

PRINT-ALL-PAIRS-SHORTEST-PATH(Π, i, j)
1  if i == j
2      print i
3  elseif π_ij == NIL
4      print "no path from" i "to" j "exists"
5  else PRINT-ALL-PAIRS-SHORTEST-PATH(Π, i, π_ij)
6      print j
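For readers who prefer executable code, a direct Python transcription of this procedure might look as follows; it is an illustration only, with None playing the role of NIL and pi being the predecessor matrix Π as a list of lists.

def print_all_pairs_shortest_path(pi, i, j):
    # Prints the vertices of a shortest path from i to j, one per line.
    if i == j:
        print(i)
    elif pi[i][j] is None:
        print("no path from", i, "to", j, "exists")
    else:
        print_all_pairs_shortest_path(pi, i, pi[i][j])
        print(j)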

In order to highlight the essential features of the all-pairs algorithms in this chapter, we won’t cover the creation and properties of predecessor matrices as extensively as we dealt with predecessor subgraphs in Chapter 24. Some of the exercises cover the basics.


Chapter outline

Section 25.1 presents a dynamic-programming algorithm based on matrix multiplication to solve the all-pairs shortest-paths problem. Using the technique of "repeated squaring," we can achieve a running time of Θ(V³ lg V). Section 25.2 gives another dynamic-programming algorithm, the Floyd-Warshall algorithm, which runs in time Θ(V³). Section 25.2 also covers the problem of finding the transitive closure of a directed graph, which is related to the all-pairs shortest-paths problem. Finally, Section 25.3 presents Johnson's algorithm, which solves the all-pairs shortest-paths problem in O(V² lg V + VE) time and is a good choice for large, sparse graphs.

Before proceeding, we need to establish some conventions for adjacency-matrix representations. First, we shall generally assume that the input graph G = (V, E) has n vertices, so that n = |V|. Second, we shall use the convention of denoting matrices by uppercase letters, such as W, L, or D, and their individual elements by subscripted lowercase letters, such as w_ij, l_ij, or d_ij. Some matrices will have parenthesized superscripts, as in L^(m) = (l_ij^(m)) or D^(m) = (d_ij^(m)), to indicate iterates. Finally, for a given n × n matrix A, we shall assume that the value of n is stored in the attribute A.rows.

25.1 Shortest paths and matrix multiplication

This section presents a dynamic-programming algorithm for the all-pairs shortest-paths problem on a directed graph G = (V, E). Each major loop of the dynamic program will invoke an operation that is very similar to matrix multiplication, so that the algorithm will look like repeated matrix multiplication. We shall start by developing a Θ(V⁴)-time algorithm for the all-pairs shortest-paths problem and then improve its running time to Θ(V³ lg V).

Before proceeding, let us briefly recap the steps given in Chapter 15 for developing a dynamic-programming algorithm.

1. Characterize the structure of an optimal solution.

2. Recursively define the value of an optimal solution.

3. Compute the value of an optimal solution in a bottom-up fashion.

We reserve the fourth step—constructing an optimal solution from computed in- formation—for the exercises.


The structure of a shortest path

We start by characterizing the structure of an optimal solution. For the all-pairs shortest-paths problem on a graph G = (V, E), we have proven (Lemma 24.1) that all subpaths of a shortest path are shortest paths. Suppose that we represent the graph by an adjacency matrix W = (w_ij). Consider a shortest path p from vertex i to vertex j, and suppose that p contains at most m edges. Assuming that there are no negative-weight cycles, m is finite. If i = j, then p has weight 0 and no edges. If vertices i and j are distinct, then we decompose path p into i ⇝ k → j, where the subpath p′ from i to k now contains at most m - 1 edges. By Lemma 24.1, p′ is a shortest path from i to k, and so δ(i, j) = δ(i, k) + w_kj.

A recursive solution to the all-pairs shortest-paths problem

Now, let l_ij^(m) be the minimum weight of any path from vertex i to vertex j that contains at most m edges. When m = 0, there is a shortest path from i to j with no edges if and only if i = j. Thus,

l_ij^(0) = 0   if i = j,
           ∞   if i ≠ j.

For m ≥ 1, we compute l_ij^(m) as the minimum of l_ij^(m-1) (the weight of a shortest path from i to j consisting of at most m - 1 edges) and the minimum weight of any path from i to j consisting of at most m edges, obtained by looking at all possible predecessors k of j. Thus, we recursively define

l_ij^(m) = min( l_ij^(m-1), min_{1≤k≤n} { l_ik^(m-1) + w_kj } )
         = min_{1≤k≤n} { l_ik^(m-1) + w_kj }.     (25.2)

The latter equality follows since w_jj = 0 for all j.

What are the actual shortest-path weights δ(i, j)? If the graph contains no negative-weight cycles, then for every pair of vertices i and j for which δ(i, j) < ∞, there is a shortest path from i to j that is simple and thus contains at most n - 1 edges.

π_ij^(k) = π_ij^(k-1)   if d_ij^(k-1) ≤ d_ik^(k-1) + d_kj^(k-1),
           π_kj^(k-1)   if d_ij^(k-1) > d_ik^(k-1) + d_kj^(k-1).     (25.7)

We leave the incorporation of the Π^(k) matrix computations into the FLOYD-WARSHALL procedure as Exercise 25.2-3. Figure 25.4 shows the sequence of Π^(k) matrices that the resulting algorithm computes for the graph of Figure 25.1. The exercise also asks for the more difficult task of proving that the predecessor subgraph G_{π,i} is a shortest-paths tree with root i. Exercise 25.2-7 asks for yet another way to reconstruct shortest paths.
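As a concrete illustration of the Floyd-Warshall computation together with the predecessor matrices just mentioned, here is a minimal Python sketch (an illustration only, not the book's pseudocode; Exercise 25.2-3 asks you to work out the details yourself). It assumes the input matrix W follows equation (25.1), with float('inf') standing for ∞, and maintains the predecessors according to recurrence (25.7).

def floyd_warshall(W):
    n = len(W)
    d = [row[:] for row in W]                       # D^(0) = W
    # pi^(0): predecessor is i when an edge (i, j) exists, otherwise NIL (None)
    pi = [[None if i == j or W[i][j] == float('inf') else i for j in range(n)]
          for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:     # path through k is shorter
                    d[i][j] = d[i][k] + d[k][j]
                    pi[i][j] = pi[k][j]
    return d, pi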

Transitive closure of a directed graph

Given a directed graph G = (V, E) with vertex set V = {1, 2, ..., n}, we might wish to determine whether G contains a path from i to j for all vertex pairs i, j ∈ V. We define the transitive closure of G as the graph G* = (V, E*), where

E* = {(i, j) : there is a path from vertex i to vertex j in G}.

One way to compute the transitive closure of a graph in Θ(n³) time is to assign a weight of 1 to each edge of E and run the Floyd-Warshall algorithm. If there is a path from vertex i to vertex j, we get d_ij < n; otherwise, we get d_ij = ∞.

The query times are O(1) with high probability. For transitive closure, the amortized time for each update is O(V^{4/3} lg^{1/3} V). For all-pairs shortest paths, the update times depend on the queries. For queries just giving the shortest-path weights, the amortized time per update is O(V³/E · lg² V). To report the actual shortest path, the amortized update time is min(O(V^{3/2} √(lg V)), O(V³/E · lg² V)). Demetrescu and Italiano [84] showed how to handle update and query operations when edges are both inserted and deleted, as long as each given edge has a bounded range of possible values drawn from the real numbers.
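As a quick illustration of the Θ(n³) approach mentioned above, the Floyd-Warshall recurrence can also be specialized to boolean values (OR and AND in place of min and +). The following Python sketch is an illustration only; the function name and edge-list format are assumptions.

def transitive_closure(n, edges):
    # t[i][j] is True when G contains a path from vertex i to vertex j; 0..n-1.
    t = [[i == j for j in range(n)] for i in range(n)]
    for i, j in edges:
        t[i][j] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if t[i][k] and t[k][j]:
                    t[i][j] = True
    return t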

Aho, Hopcroft, and Ullman [5] defined an algebraic structure known as a "closed semiring," which serves as a general framework for solving path problems in directed graphs. Both the Floyd-Warshall algorithm and the transitive-closure algorithm from Section 25.2 are instantiations of an all-pairs algorithm based on closed semirings. Maggs and Plotkin [240] showed how to find minimum spanning trees using a closed semiring.

26 Maximum Flow

Just as we can model a road map as a directed graph in order to find the shortest path from one point to another, we can also interpret a directed graph as a "flow network" and use it to answer questions about material flows. Imagine a material coursing through a system from a source, where the material is produced, to a sink, where it is consumed. The source produces the material at some steady rate, and the sink consumes the material at the same rate. The "flow" of the material at any point in the system is intuitively the rate at which the material moves. Flow networks can model many problems, including liquids flowing through pipes, parts through assembly lines, current through electrical networks, and information through communication networks.

We can think of each directed edge in a flow network as a conduit for the material. Each conduit has a stated capacity, given as a maximum rate at which the material can flow through the conduit, such as 200 gallons of liquid per hour through a pipe or 20 amperes of electrical current through a wire. Vertices are conduit junctions, and other than the source and sink, material flows through the vertices without collecting in them. In other words, the rate at which material enters a vertex must equal the rate at which it leaves the vertex. We call this property "flow conservation," and it is equivalent to Kirchhoff's current law when the material is electrical current.

In the maximum-flow problem, we wish to compute the greatest rate at which we can ship material from the source to the sink without violating any capacity constraints. It is one of the simplest problems concerning flow networks and, as we shall see in this chapter, this problem can be solved by efficient algorithms. Moreover, we can adapt the basic techniques used in maximum-flow algorithms to solve other network-flow problems.

This chapter presents two general methods for solving the maximum-flow problem. Section 26.1 formalizes the notions of flow networks and flows, formally defining the maximum-flow problem. Section 26.2 describes the classical method of Ford and Fulkerson for finding maximum flows. An application of this method, finding a maximum matching in an undirected bipartite graph, appears in Section 26.3. Section 26.4 presents the push-relabel method, which underlies many of the fastest algorithms for network-flow problems. Section 26.5 covers the "relabel-to-front" algorithm, a particular implementation of the push-relabel method that runs in time O(V³). Although this algorithm is not the fastest algorithm known, it illustrates some of the techniques used in the asymptotically fastest algorithms, and it is reasonably efficient in practice.

26.1 Flow networks

In this section, we give a graph-theoretic definition of flow networks, discuss their properties, and define the maximum-flow problem precisely. We also introduce some helpful notation.

Flow networks and flows

A flow network G = (V, E) is a directed graph in which each edge (u, v) ∈ E has a nonnegative capacity c(u, v) ≥ 0. We further require that if E contains an edge (u, v), then there is no edge (v, u) in the reverse direction. (We shall see shortly how to work around this restriction.) If (u, v) ∉ E, then for convenience we define c(u, v) = 0, and we disallow self-loops. We distinguish two vertices in a flow network: a source s and a sink t. For convenience, we assume that each vertex lies on some path from the source to the sink. That is, for each vertex v ∈ V, the flow network contains a path s ⇝ v ⇝ t. The graph is therefore connected and, since each vertex other than s has at least one entering edge, |E| ≥ |V| - 1. Figure 26.1 shows an example of a flow network.

We are now ready to define flows more formally. Let G = (V, E) be a flow network with a capacity function c. Let s be the source of the network, and let t be the sink. A flow in G is a real-valued function f: V × V → R that satisfies the following two properties:

Capacity constraint: For all u, v ∈ V, we require 0 ≤ f(u, v) ≤ c(u, v).

Flow conservation: For all u ∈ V - {s, t}, we require

Σ_{v∈V} f(v, u) = Σ_{v∈V} f(u, v).

When (u, v) ∉ E, there can be no flow from u to v, and f(u, v) = 0.
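These two properties are easy to check mechanically. The following small Python sketch is an illustration only (the dict-based representation and function name are assumptions, not the text's); it tests whether a candidate function f is a flow in a network with capacities c.

def is_flow(c, f, s, t):
    # c and f map (u, v) pairs to numbers; missing pairs mean capacity 0, flow 0.
    vertices = {x for (u, v) in c for x in (u, v)}
    cap = lambda u, v: c.get((u, v), 0)
    flow = lambda u, v: f.get((u, v), 0)
    # Capacity constraint: 0 <= f(u, v) <= c(u, v) for all u, v.
    for u in vertices:
        for v in vertices:
            if not (0 <= flow(u, v) <= cap(u, v)):
                return False
    # Flow conservation: flow in equals flow out at every vertex except s and t.
    for u in vertices - {s, t}:
        if sum(flow(v, u) for v in vertices) != sum(flow(u, v) for v in vertices):
            return False
    return True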


Figure 26.1 (a) A flow network G = (V, E) for the Lucky Puck Company's trucking problem. The Vancouver factory is the source s, and the Winnipeg warehouse is the sink t. The company ships pucks through intermediate cities, but only c(u, v) crates per day can go from city u to city v. Each edge is labeled with its capacity. (b) A flow f in G with value |f| = 19. Each edge (u, v) is labeled by f(u, v)/c(u, v). The slash notation merely separates the flow and capacity; it does not indicate division.

We call the nonnegative quantity f(u, v) the flow from vertex u to vertex v. The value |f| of a flow f is defined as

|f| = Σ_{v∈V} f(s, v) - Σ_{v∈V} f(v, s),     (26.1)

that is, the total flow out of the source minus the flow into the source. (Here, the |·| notation denotes flow value, not absolute value or cardinality.) Typically, a flow network will not have any edges into the source, and the flow into the source, given by the summation Σ_{v∈V} f(v, s), will be 0. We include it, however, because when we introduce residual networks later in this chapter, the flow into the source will become significant. In the maximum-flow problem, we are given a flow network G with source s and sink t, and we wish to find a flow of maximum value.

Before seeing an example of a network-flow problem, let us briefly explore the definition of flow and the two flow properties. The capacity constraint simply says that the flow from one vertex to another must be nonnegative and must not exceed the given capacity. The flow-conservation property says that the total flow into a vertex other than the source or sink must equal the total flow out of that vertex—informally, “flow in equals flow out.”

An example of flow

A flow network can model the trucking problem shown in Figure 26.1(a). The Lucky Puck Company has a factory (source s) in Vancouver that manufactures hockey pucks, and it has a warehouse (sink t) in Winnipeg that stocks them.


Figure 26.2 Converting a network with antiparallel edges to an equivalent one with no antiparallel edges. (a) A flow network containing both the edges (v_1, v_2) and (v_2, v_1). (b) An equivalent network with no antiparallel edges. We add the new vertex v′, and we replace edge (v_1, v_2) by the pair of edges (v_1, v′) and (v′, v_2), both with the same capacity as (v_1, v_2).

Lucky Puck leases space on trucks from another firm to ship the pucks from the factory to the warehouse. Because the trucks travel over specified routes (edges) between cities (vertices) and have a limited capacity, Lucky Puck can ship at most c(u, v) crates per day between each pair of cities u and v in Figure 26.1(a). Lucky Puck has no control over these routes and capacities, and so the company cannot alter the flow network shown in Figure 26.1(a). They need to determine the largest number p of crates per day that they can ship and then to produce this amount, since there is no point in producing more pucks than they can ship to their warehouse. Lucky Puck is not concerned with how long it takes for a given puck to get from the factory to the warehouse; they care only that p crates per day leave the factory and p crates per day arrive at the warehouse.

We can model the “flow” of shipments with a flow in this network because the number of crates shipped per day from one city to another is subject to a capacity constraint. Additionally, the model must obey flow conservation, for in a steady state, the rate at which pucks enter an intermediate city must equal the rate at which they leave. Otherwise, crates would accumulate at intermediate cities.

Modeling problems with antiparallel edges

Suppose that the trucking firm offered Lucky Puck the opportunity to lease space for 10 crates in trucks going from Edmonton to Calgary. It would seem natural to add this opportunity to our example and form the network shown in Figure 26.2(a). This network suffers from one problem, however: it violates our original assumption that if an edge (v_1, v_2) ∈ E, then (v_2, v_1) ∉ E. We call the two edges (v_1, v_2) and (v_2, v_1) antiparallel. Thus, if we wish to model a flow problem with antiparallel edges, we must transform the network into an equivalent one containing no antiparallel edges. Figure 26.2(b) displays this equivalent network. We choose one of the two antiparallel edges, in this case (v_1, v_2), and split it by adding a new vertex v′ and replacing edge (v_1, v_2) with the pair of edges (v_1, v′) and (v′, v_2). We also set the capacity of both new edges to the capacity of the original edge. The resulting network satisfies the property that if an edge is in the network, the reverse edge is not. Exercise 26.1-1 asks you to prove that the resulting network is equivalent to the original one.

Thus, we see that a real-world flow problem might be most naturally modeled by a network with antiparallel edges. It will be convenient to disallow antiparallel edges, however, and so we have a straightforward way to convert a network containing antiparallel edges into an equivalent one with no antiparallel edges.
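A minimal Python sketch of this conversion (an illustration only; the dict representation and the fresh-vertex naming are assumptions) splits one edge of each antiparallel pair through a new vertex, exactly as in Figure 26.2(b).

def remove_antiparallel_edges(c):
    # c maps edges (u, v) to capacities; returns a new capacity dict.
    out = dict(c)
    counter = 0
    for (u, v) in list(c):
        if (u, v) in out and (v, u) in out:     # an antiparallel pair still present
            cap = out.pop((u, v))
            w = ("split", counter)              # the new vertex v' from the text
            counter += 1
            out[(u, w)] = cap
            out[(w, v)] = cap
    return out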

Networks with multiple sources and sinks

A maximum-flow problem may have several sources and sinks, rather than just one of each. The Lucky Puck Company, for example, might actually have a set of m factories {s_1, s_2, ..., s_m} and a set of n warehouses {t_1, t_2, ..., t_n}, as shown in Figure 26.3(a). Fortunately, this problem is no harder than ordinary maximum flow.

We can reduce the problem of determining a maximum flow in a network with multiple sources and multiple sinks to an ordinary maximum-flow problem. Figure 26.3(b) shows how to convert the network from (a) to an ordinary flow network with only a single source and a single sink. We add a supersource s and add a directed edge (s, s_i) with capacity c(s, s_i) = ∞ for each i = 1, 2, ..., m. We also create a new supersink t and add a directed edge (t_i, t) with capacity c(t_i, t) = ∞ for each i = 1, 2, ..., n. Intuitively, any flow in the network in (a) corresponds to a flow in the network in (b), and vice versa. The single source s simply provides as much flow as desired for the multiple sources s_i, and the single sink t likewise consumes as much flow as desired for the multiple sinks t_i. Exercise 26.1-2 asks you to prove formally that the two problems are equivalent.
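In code, this reduction is a few lines. The following Python sketch is an illustration only; the dict representation and the names of the added vertices are assumptions.

def add_super_source_sink(c, sources, sinks):
    # c maps edges to capacities; returns (new capacities, supersource, supersink).
    INF = float('inf')
    out = dict(c)
    super_s, super_t = "super_source", "super_sink"
    for s_i in sources:
        out[(super_s, s_i)] = INF       # infinite-capacity edge into each source
    for t_i in sinks:
        out[(t_i, super_t)] = INF       # infinite-capacity edge out of each sink
    return out, super_s, super_t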

Exercises

26.1-1 Show that splitting an edge in a flow network yields an equivalent network. More formally, suppose that flow network G contains edge (u, v), and we create a new flow network G′ by creating a new vertex x and replacing (u, v) by new edges (u, x) and (x, v) with c(u, x) = c(x, v) = c(u, v). Show that a maximum flow in G′ has the same value as a maximum flow in G.


Figure 26.3 Converting a multiple-source, multiple-sink maximum-flow problem into a problem with a single source and a single sink. (a) A flow network with five sources S = {s_1, s_2, s_3, s_4, s_5} and three sinks T = {t_1, t_2, t_3}. (b) An equivalent single-source, single-sink flow network. We add a supersource s and an edge with infinite capacity from s to each of the multiple sources. We also add a supersink t and an edge with infinite capacity from each of the multiple sinks to t.

26.1-2 Extend the flow properties and definitions to the multiple-source, multiple-sink problem. Show that any flow in a multiple-source, multiple-sink flow network corresponds to a flow of identical value in the single-source, single-sink network obtained by adding a supersource and a supersink, and vice versa.

26.1-3 Suppose that a flow network G = (V, E) violates the assumption that the network contains a path s ⇝ v ⇝ t for all vertices v ∈ V. Let u be a vertex for which there is no path s ⇝ u ⇝ t. Show that there must exist a maximum flow f in G such that f(u, v) = f(v, u) = 0 for all vertices v ∈ V.


26.1-4 Let f be a flow in a network, and let α be a real number. The scalar flow product, denoted αf, is a function from V × V to R defined by

(αf)(u, v) = α · f(u, v).

Prove that the flows in a network form a convex set. That is, show that if f_1 and f_2 are flows, then so is αf_1 + (1 - α)f_2 for all α in the range 0 ≤ α ≤ 1.

26.1-5 State the maximum-flow problem as a linear-programming problem.

26.1-6 Professor Adam has two children who, unfortunately, dislike each other. The problem is so severe that not only do they refuse to walk to school together, but in fact each one refuses to walk on any block that the other child has stepped on that day. The children have no problem with their paths crossing at a corner. Fortunately both the professor's house and the school are on corners, but beyond that he is not sure if it is going to be possible to send both of his children to the same school. The professor has a map of his town. Show how to formulate the problem of determining whether both his children can go to the same school as a maximum-flow problem.

26.1-7 Suppose that, in addition to edge capacities, a flow network has vertex capacities. That is, each vertex v has a limit l(v) on how much flow can pass through v. Show how to transform a flow network G = (V, E) with vertex capacities into an equivalent flow network G′ = (V′, E′) without vertex capacities, such that a maximum flow in G′ has the same value as a maximum flow in G. How many vertices and edges does G′ have?

26.2 The Ford-Fulkerson method

This section presents the Ford-Fulkerson method for solving the maximum-flow problem. We call it a "method" rather than an "algorithm" because it encompasses several implementations with differing running times. The Ford-Fulkerson method depends on three important ideas that transcend the method and are relevant to many flow algorithms and problems: residual networks, augmenting paths, and cuts. These ideas are essential to the important max-flow min-cut theorem (Theorem 26.6), which characterizes the value of a maximum flow in terms of cuts of the flow network. We end this section by presenting one specific implementation of the Ford-Fulkerson method and analyzing its running time.

The Ford-Fulkerson method iteratively increases the value of the flow. We start with f(u, v) = 0 for all u, v ∈ V, giving an initial flow of value 0. At each iteration, we increase the flow value in G by finding an "augmenting path" in an associated "residual network" G_f. Once we know the edges of an augmenting path in G_f, we can easily identify specific edges in G for which we can change the flow so that we increase the value of the flow. Although each iteration of the Ford-Fulkerson method increases the value of the flow, we shall see that the flow on any particular edge of G may increase or decrease; decreasing the flow on some edges may be necessary in order to enable an algorithm to send more flow from the source to the sink. We repeatedly augment the flow until the residual network has no more augmenting paths. The max-flow min-cut theorem will show that upon termination, this process yields a maximum flow.

FORD-FULKERSON-METHOD(G, s, t)
1  initialize flow f to 0
2  while there exists an augmenting path p in the residual network G_f
3      augment flow f along p
4  return f

In order to implement and analyze the Ford-Fulkerson method, we need to introduce several additional concepts.

Residual networks

Intuitively, given a flow network G and a flow f, the residual network G_f consists of edges with capacities that represent how we can change the flow on edges of G. An edge of the flow network can admit an amount of additional flow equal to the edge's capacity minus the flow on that edge. If that value is positive, we place that edge into G_f with a "residual capacity" of c_f(u, v) = c(u, v) - f(u, v). The only edges of G that are in G_f are those that can admit more flow; those edges (u, v) whose flow equals their capacity have c_f(u, v) = 0, and they are not in G_f.

The residual network G_f may also contain edges that are not in G, however. As an algorithm manipulates the flow, with the goal of increasing the total flow, it might need to decrease the flow on a particular edge. In order to represent a possible decrease of a positive flow f(u, v) on an edge in G, we place an edge (v, u) into G_f with residual capacity c_f(v, u) = f(u, v)—that is, an edge that can admit flow in the opposite direction to (u, v), at most canceling out the flow on (u, v). These reverse edges in the residual network allow an algorithm to send back flow it has already sent along an edge. Sending flow back along an edge is equivalent to decreasing the flow on the edge, which is a necessary operation in many algorithms.

More formally, suppose that we have a flow network G = (V, E) with source s and sink t. Let f be a flow in G, and consider a pair of vertices u, v ∈ V. We define the residual capacity c_f(u, v) by

c_f(u, v) = c(u, v) - f(u, v)   if (u, v) ∈ E,
            f(v, u)             if (v, u) ∈ E,
            0                   otherwise.     (26.2)

Because of our assumption that (u, v) ∈ E implies (v, u) ∉ E, exactly one case in equation (26.2) applies to each ordered pair of vertices.

As an example of equation (26.2), if c(u, v) = 16 and f(u, v) = 11, then we can increase f(u, v) by up to c_f(u, v) = 5 units before we exceed the capacity constraint on edge (u, v). We also wish to allow an algorithm to return up to 11 units of flow from v to u, and hence c_f(v, u) = 11.

Given a flow network G = (V, E) and a flow f, the residual network of G induced by f is G_f = (V, E_f), where

E_f = {(u, v) ∈ V × V : c_f(u, v) > 0}.     (26.3)

That is, as promised above, each edge of the residual network, or residual edge, can admit a flow that is greater than 0. Figure 26.4(a) repeats the flow network G and flow f of Figure 26.1(b), and Figure 26.4(b) shows the corresponding residual network G_f. The edges in E_f are either edges in E or their reversals, and thus

|E_f| ≤ 2|E|.

Observe that the residual network G_f is similar to a flow network with capacities given by c_f. It does not satisfy our definition of a flow network because it may contain both an edge (u, v) and its reversal (v, u). Other than this difference, a residual network has the same properties as a flow network, and we can define a flow in the residual network as one that satisfies the definition of a flow, but with respect to capacities c_f in the network G_f.
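In code, equation (26.2) translates directly into the following Python sketch (an illustration only; the dict representation is an assumption). It builds the residual capacities c_f for a flow f, relying on the convention that at most one of (u, v) and (v, u) is an edge of G.

def residual_network(c, f):
    # c and f map edges (u, v) to numbers; returns the residual capacities c_f.
    cf = {}
    for (u, v), cap in c.items():
        flow = f.get((u, v), 0)
        if cap - flow > 0:
            cf[(u, v)] = cap - flow      # remaining capacity on the original edge
        if flow > 0:
            cf[(v, u)] = flow            # flow that can be cancelled
    return cf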

A flow in a residual network provides a roadmap for adding flow to the original flow network. If f is a flow in G and f′ is a flow in the corresponding residual network G_f, we define f ↑ f′, the augmentation of flow f by f′, to be a function from V × V to R, defined by

(f ↑ f′)(u, v) = f(u, v) + f′(u, v) - f′(v, u)   if (u, v) ∈ E,
                 0                               otherwise.     (26.4)
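Equation (26.4) is likewise straightforward to compute. A small Python sketch (an illustration only; dict-based flows are an assumption):

def augment(f, f_prime, E):
    # The augmentation of flow f by a flow f_prime in the residual network,
    # per equation (26.4). E is the edge set of the original network.
    g = {}
    for (u, v) in E:
        g[(u, v)] = f.get((u, v), 0) + f_prime.get((u, v), 0) - f_prime.get((v, u), 0)
    return g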


Figure 26.4 (a) The flow network G and flow f of Figure 26.1(b). (b) The residual network G_f with augmenting path p shaded; its residual capacity is c_f(p) = c_f(v_2, v_3) = 4. Edges with residual capacity equal to 0, such as (v_1, v_3), are not shown, a convention we follow in the remainder of this section. (c) The flow in G that results from augmenting along path p by its residual capacity 4. Edges carrying no flow, such as (v_3, v_2), are labeled only by their capacity, another convention we follow throughout. (d) The residual network induced by the flow in (c).

The intuition behind this definition follows the definition of the residual network. We increase the flow on (u, v) by f′(u, v) but decrease it by f′(v, u) because pushing flow on the reverse edge in the residual network signifies decreasing the flow in the original network. Pushing flow on the reverse edge in the residual network is also known as cancellation. For example, if we send 5 crates of hockey pucks from u to v and send 2 crates from v to u, we could equivalently (from the perspective of the final result) just send 3 crates from u to v and none from v to u. Cancellation of this type is crucial for any maximum-flow algorithm.

Lemma 26.1
Let G = (V, E) be a flow network with source s and sink t, and let f be a flow in G. Let G_f be the residual network of G induced by f, and let f′ be a flow in G_f. Then the function f ↑ f′ defined in equation (26.4) is a flow in G with value |f ↑ f′| = |f| + |f′|.

Proof We first verify that f ↑ f′ obeys the capacity constraint for each edge in E and flow conservation at each vertex in V - {s, t}.


For the capacity constraint, first observe that if (u, v) ∈ E, then c_f(v, u) = f(u, v). Therefore, we have f′(v, u) ≤ c_f(v, u) = f(u, v), and hence

(f ↑ f′)(u, v) = f(u, v) + f′(u, v) - f′(v, u)   (by equation (26.4))
               ≥ f(u, v) + f′(u, v) - f(u, v)    (because f′(v, u) ≤ f(u, v))
               = f′(u, v)
               ≥ 0.

In addition,

(f ↑ f′)(u, v) = f(u, v) + f′(u, v) - f′(v, u)   (by equation (26.4))
               ≤ f(u, v) + f′(u, v)              (because flows are nonnegative)
               ≤ f(u, v) + c_f(u, v)             (capacity constraint)
               = f(u, v) + c(u, v) - f(u, v)     (definition of c_f)
               = c(u, v).

For flow conservation, because both f and f′ obey flow conservation, we have that for all u ∈ V - {s, t},

Σ_{v∈V} (f ↑ f′)(u, v) = Σ_{v∈V} (f(u, v) + f′(u, v) - f′(v, u))
                        = Σ_{v∈V} f(u, v) + Σ_{v∈V} f′(u, v) - Σ_{v∈V} f′(v, u)
                        = Σ_{v∈V} f(v, u) + Σ_{v∈V} f′(v, u) - Σ_{v∈V} f′(u, v)
                        = Σ_{v∈V} (f(v, u) + f′(v, u) - f′(u, v))
                        = Σ_{v∈V} (f ↑ f′)(v, u),

where the third line follows from the second by flow conservation.

Finally, we compute the value of f ↑ f′. Recall that we disallow antiparallel edges in G (but not in G_f), and hence for each vertex v ∈ V, we know that there can be an edge (s, v) or (v, s), but never both. We define V_1 = {v : (s, v) ∈ E} to be the set of vertices with edges from s, and V_2 = {v : (v, s) ∈ E} to be the set of vertices with edges to s. We have V_1 ∪ V_2 ⊆ V and, because we disallow antiparallel edges, V_1 ∩ V_2 = ∅. We now compute

|f ↑ f′| = Σ_{v∈V} (f ↑ f′)(s, v) - Σ_{v∈V} (f ↑ f′)(v, s)
         = Σ_{v∈V_1} (f ↑ f′)(s, v) - Σ_{v∈V_2} (f ↑ f′)(v, s),     (26.5)


where the second line follows because (f ↑ f′)(w, x) is 0 if (w, x) ∉ E. We now apply the definition of f ↑ f′ to equation (26.5), and then reorder and group terms to obtain

|f ↑ f′| = Σ_{v∈V_1} (f(s, v) + f′(s, v) - f′(v, s)) - Σ_{v∈V_2} (f(v, s) + f′(v, s) - f′(s, v))
         = Σ_{v∈V_1} f(s, v) + Σ_{v∈V_1} f′(s, v) - Σ_{v∈V_1} f′(v, s)
             - Σ_{v∈V_2} f(v, s) - Σ_{v∈V_2} f′(v, s) + Σ_{v∈V_2} f′(s, v)
         = Σ_{v∈V_1} f(s, v) - Σ_{v∈V_2} f(v, s)
             + Σ_{v∈V_1} f′(s, v) + Σ_{v∈V_2} f′(s, v) - Σ_{v∈V_1} f′(v, s) - Σ_{v∈V_2} f′(v, s)
         = Σ_{v∈V_1} f(s, v) - Σ_{v∈V_2} f(v, s) + Σ_{v∈V_1∪V_2} f′(s, v) - Σ_{v∈V_1∪V_2} f′(v, s).     (26.6)

In equation (26.6), we can extend all four summations to sum over V, since each additional term has value 0. (Exercise 26.2-1 asks you to prove this formally.) We thus have

|f ↑ f′| = Σ_{v∈V} f(s, v) - Σ_{v∈V} f(v, s) + Σ_{v∈V} f′(s, v) - Σ_{v∈V} f′(v, s)     (26.7)
         = |f| + |f′|.

Augmenting paths

Given a flow network G = (V, E) and a flow f, an augmenting path p is a simple path from s to t in the residual network G_f. By the definition of the residual network, we may increase the flow on an edge (u, v) of an augmenting path by up to c_f(u, v) without violating the capacity constraint on whichever of (u, v) and (v, u) is in the original flow network G.

The shaded path in Figure 26.4(b) is an augmenting path. Treating the residual network G_f in the figure as a flow network, we can increase the flow through each edge of this path by up to 4 units without violating a capacity constraint, since the smallest residual capacity on this path is c_f(v_2, v_3) = 4. We call the maximum amount by which we can increase the flow on each edge in an augmenting path p the residual capacity of p, given by

c_f(p) = min {c_f(u, v) : (u, v) is on p}.


The following lemma, whose proof we leave as Exercise 26.2-7, makes the above argument more precise.

Lemma 26.2
Let G = (V, E) be a flow network, let f be a flow in G, and let p be an augmenting path in G_f. Define a function f_p: V × V → R by

f_p(u, v) = c_f(p)   if (u, v) is on p,
            0        otherwise.     (26.8)

Then, f_p is a flow in G_f with value |f_p| = c_f(p) > 0.

The following corollary shows that if we augment f by f_p, we get another flow in G whose value is closer to the maximum. Figure 26.4(c) shows the result of augmenting the flow f from Figure 26.4(a) by the flow f_p in Figure 26.4(b), and Figure 26.4(d) shows the ensuing residual network.

Corollary 26.3
Let G = (V, E) be a flow network, let f be a flow in G, and let p be an augmenting path in G_f. Let f_p be defined as in equation (26.8), and suppose that we augment f by f_p. Then the function f ↑ f_p is a flow in G with value |f ↑ f_p| = |f| + |f_p| > |f|.

Proof Immediate from Lemmas 26.1 and 26.2.

Cuts of flow networks

The Ford-Fulkerson method repeatedly augments the flow along augmenting paths until it has found a maximum flow. How do we know that when the algorithm terminates, we have actually found a maximum flow? The max-flow min-cut theorem, which we shall prove shortly, tells us that a flow is maximum if and only if its residual network contains no augmenting path. To prove this theorem, though, we must first explore the notion of a cut of a flow network.

A cut (S, T) of flow network G = (V, E) is a partition of V into S and T = V - S such that s ∈ S and t ∈ T. (This definition is similar to the definition of "cut" that we used for minimum spanning trees in Chapter 23, except that here we are cutting a directed graph rather than an undirected graph, and we insist that s ∈ S and t ∈ T.) If f is a flow, then the net flow f(S, T) across the cut (S, T) is defined to be

f(S, T) = Σ_{u∈S} Σ_{v∈T} f(u, v) - Σ_{u∈S} Σ_{v∈T} f(v, u).     (26.9)


Figure 26.5 A cut (S, T) in the flow network of Figure 26.1(b), where S = {s, v_1, v_2} and T = {v_3, v_4, t}. The vertices in S are black, and the vertices in T are white. The net flow across (S, T) is f(S, T) = 19, and the capacity is c(S, T) = 26.

The capacity of the cut (S, T) is

c(S, T) = Σ_{u∈S} Σ_{v∈T} c(u, v).     (26.10)

A minimum cut of a network is a cut whose capacity is minimum over all cuts of the network.

The asymmetry between the definitions of flow and capacity of a cut is intentional and important. For capacity, we count only the capacities of edges going from S to T, ignoring edges in the reverse direction. For flow, we consider the flow going from S to T minus the flow going in the reverse direction from T to S. The reason for this difference will become clear later in this section.

Figure 26.5 shows the cut ({s, v_1, v_2}, {v_3, v_4, t}) in the flow network of Figure 26.1(b). The net flow across this cut is

f(v_1, v_3) + f(v_2, v_4) - f(v_3, v_2) = 12 + 11 - 4 = 19,

and the capacity of this cut is

c(v_1, v_3) + c(v_2, v_4) = 12 + 14 = 26.

The following lemma shows that, for a given flow f, the net flow across any cut is the same, and it equals |f|, the value of the flow.

Lemma 26.4
Let f be a flow in a flow network G with source s and sink t, and let (S, T) be any cut of G. Then the net flow across (S, T) is f(S, T) = |f|.


Proof We can rewrite the flow-conservation condition for any node u ∈ V - {s, t} as

Σ_{v∈V} f(u, v) - Σ_{v∈V} f(v, u) = 0.     (26.11)

Taking the definition of |f| from equation (26.1) and adding the left-hand side of equation (26.11), which equals 0, summed over all vertices in S - {s}, gives

|f| = Σ_{v∈V} f(s, v) - Σ_{v∈V} f(v, s) + Σ_{u∈S-{s}} ( Σ_{v∈V} f(u, v) - Σ_{v∈V} f(v, u) ).

Expanding the right-hand summation and regrouping terms yields

|f| = Σ_{v∈V} f(s, v) - Σ_{v∈V} f(v, s) + Σ_{u∈S-{s}} Σ_{v∈V} f(u, v) - Σ_{u∈S-{s}} Σ_{v∈V} f(v, u)
    = Σ_{v∈V} ( f(s, v) + Σ_{u∈S-{s}} f(u, v) ) - Σ_{v∈V} ( f(v, s) + Σ_{u∈S-{s}} f(v, u) )
    = Σ_{v∈V} Σ_{u∈S} f(u, v) - Σ_{v∈V} Σ_{u∈S} f(v, u).

Because V = S ∪ T and S ∩ T = ∅, we can split each summation over V into summations over S and T to obtain

|f| = Σ_{v∈S} Σ_{u∈S} f(u, v) + Σ_{v∈T} Σ_{u∈S} f(u, v) - Σ_{v∈S} Σ_{u∈S} f(v, u) - Σ_{v∈T} Σ_{u∈S} f(v, u)
    = Σ_{v∈T} Σ_{u∈S} f(u, v) - Σ_{v∈T} Σ_{u∈S} f(v, u)
        + ( Σ_{v∈S} Σ_{u∈S} f(u, v) - Σ_{v∈S} Σ_{u∈S} f(v, u) ).

The two summations within the parentheses are actually the same, since for all vertices x, y ∈ V, the term f(x, y) appears once in each summation. Hence, these summations cancel, and we have

|f| = Σ_{u∈S} Σ_{v∈T} f(u, v) - Σ_{u∈S} Σ_{v∈T} f(v, u)
    = f(S, T).

A corollary to Lemma 26.4 shows how we can use cut capacities to bound the value of a flow.


Corollary 26.5 The value of any flow f in a flow network G is bounded from above by the capacity of any cut of G.

Proof Let (S, T) be any cut of G and let f be any flow. By Lemma 26.4 and the capacity constraint,

|f| = f(S, T)
    = Σ_{u∈S} Σ_{v∈T} f(u, v) - Σ_{u∈S} Σ_{v∈T} f(v, u)
    ≤ Σ_{u∈S} Σ_{v∈T} f(u, v)
    ≤ Σ_{u∈S} Σ_{v∈T} c(u, v)
    = c(S, T).

Corollary 26.5 yields the immediate consequence that the value of a maximum flow in a network is bounded from above by the capacity of a minimum cut of the network. The important max-flow min-cut theorem, which we now state and prove, says that the value of a maximum flow is in fact equal to the capacity of a minimum cut.

Theorem 26.6 (Max-flow min-cut theorem)
If f is a flow in a flow network G = (V, E) with source s and sink t, then the following conditions are equivalent:

1. f is a maximum flow in G.

2. The residual network G_f contains no augmenting paths.

3. |f| = c(S, T) for some cut (S, T) of G.

Proof (1) ⇒ (2): Suppose for the sake of contradiction that f is a maximum flow in G but that G_f has an augmenting path p. Then, by Corollary 26.3, the flow found by augmenting f by f_p, where f_p is given by equation (26.8), is a flow in G with value strictly greater than |f|, contradicting the assumption that f is a maximum flow.

(2) ⇒ (3): Suppose that G_f has no augmenting path, that is, that G_f contains no path from s to t. Define

S = {v ∈ V : there exists a path from s to v in G_f}

and T = V - S. The partition (S, T) is a cut: we have s ∈ S trivially and t ∉ S because there is no path from s to t in G_f. Now consider a pair of vertices u ∈ S and v ∈ T.


If (u, v) ∈ E, we must have f(u, v) = c(u, v), since otherwise (u, v) ∈ E_f, which would place v in set S. If (v, u) ∈ E, we must have f(v, u) = 0, because otherwise c_f(u, v) = f(v, u) would be positive and we would have (u, v) ∈ E_f, which would place v in S. Of course, if neither (u, v) nor (v, u) is in E, then f(u, v) = f(v, u) = 0. We thus have

f(S, T) = Σ_{u∈S} Σ_{v∈T} f(u, v) - Σ_{v∈T} Σ_{u∈S} f(v, u)
        = Σ_{u∈S} Σ_{v∈T} c(u, v) - Σ_{v∈T} Σ_{u∈S} 0
        = c(S, T).

By Lemma 26.4, therefore, |f| = f(S, T) = c(S, T).

(3) ⇒ (1): By Corollary 26.5, |f| ≤ c(S, T) for all cuts (S, T). The condition |f| = c(S, T) thus implies that f is a maximum flow.

The basic Ford-Fulkerson algorithm

In each iteration of the Ford-Fulkerson method, we find some augmenting path p and use p to modify the flow f. As Lemma 26.2 and Corollary 26.3 suggest, we replace f by f ↑ f_p, obtaining a new flow whose value is |f| + |f_p|. The following implementation of the method computes the maximum flow in a flow network G = (V, E) by updating the flow attribute (u, v).f for each edge (u, v) ∈ E.¹ If (u, v) ∉ E, we assume implicitly that (u, v).f = 0. We also assume that we are given the capacities c(u, v) along with the flow network, and c(u, v) = 0 if (u, v) ∉ E. We compute the residual capacity c_f(u, v) in accordance with the formula (26.2). The expression c_f(p) in the code is just a temporary variable that stores the residual capacity of the path p.

FORD-FULKERSON(G, s, t)
1  for each edge (u, v) ∈ G.E
2      (u, v).f = 0
3  while there exists a path p from s to t in the residual network G_f
4      c_f(p) = min {c_f(u, v) : (u, v) is in p}
5      for each edge (u, v) in p
6          if (u, v) ∈ E
7              (u, v).f = (u, v).f + c_f(p)
8          else (v, u).f = (v, u).f - c_f(p)

¹Recall from Section 22.1 that we represent an attribute f for edge (u, v) with the same style of notation—(u, v).f—that we use for an attribute of any other object.


The FORD-FULKERSON algorithm simply expands on the FORD-FULKERSON-METHOD pseudocode given earlier. Figure 26.6 shows the result of each iteration in a sample run. Lines 1–2 initialize the flow f to 0. The while loop of lines 3–8 repeatedly finds an augmenting path p in G_f and augments flow f along p by the residual capacity c_f(p). Each residual edge in path p is either an edge in the original network or the reversal of an edge in the original network. Lines 6–8 update the flow in each case appropriately, adding flow when the residual edge is an original edge and subtracting it otherwise. When no augmenting paths exist, the flow f is a maximum flow.
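A runnable Python sketch of the procedure may also help; it is an illustration only (the dict-based representation and function names are assumptions), and it finds each augmenting path with a simple depth-first search over the residual network.

def ford_fulkerson(c, s, t):
    # c maps edges (u, v) to capacities; the returned dict plays the role of (u, v).f.
    f = {e: 0 for e in c}                    # lines 1-2
    vertices = {x for e in c for x in e}
    def cf(u, v):                            # residual capacity, equation (26.2)
        if (u, v) in c:
            return c[(u, v)] - f[(u, v)]
        if (v, u) in c:
            return f[(v, u)]
        return 0
    def find_augmenting_path():              # any s-to-t path in G_f (depth-first)
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                break
            for v in vertices:
                if v not in parent and cf(u, v) > 0:
                    parent[v] = u
                    stack.append(v)
        if t not in parent:
            return None
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        return path
    while True:
        p = find_augmenting_path()           # line 3
        if p is None:
            return f                         # no augmenting path: f is maximum
        cfp = min(cf(u, v) for (u, v) in p)  # line 4
        for (u, v) in p:                     # lines 5-8
            if (u, v) in c:
                f[(u, v)] += cfp
            else:
                f[(v, u)] -= cfp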

Analysis of Ford-Fulkerson

The running time of FORD-FULKERSON depends on how we find the augmenting path p in line 3. If we choose it poorly, the algorithm might not even terminate: the value of the flow will increase with successive augmentations, but it need not even converge to the maximum flow value.² If we find the augmenting path by using a breadth-first search (which we saw in Section 22.2), however, the algorithm runs in polynomial time. Before proving this result, we obtain a simple bound for the case in which we choose the augmenting path arbitrarily and all capacities are integers.

In practice, the maximum-flow problem often arises with integral capacities. If the capacities are rational numbers, we can apply an appropriate scaling transformation to make them all integral. If f* denotes a maximum flow in the transformed network, then a straightforward implementation of FORD-FULKERSON executes the while loop of lines 3–8 at most |f*| times, since the flow value increases by at least one unit in each iteration.

We can perform the work done within the while loop efficiently if we implement the flow network G = (V, E) with the right data structure and find an augmenting path by a linear-time algorithm. Let us assume that we keep a data structure corresponding to a directed graph G′ = (V, E′), where E′ = {(u, v) : (u, v) ∈ E or (v, u) ∈ E}. Edges in the network G are also edges in G′, and therefore we can easily maintain capacities and flows in this data structure. Given a flow f on G, the edges in the residual network G_f consist of all edges (u, v) of G′ such that c_f(u, v) > 0, where c_f conforms to equation (26.2). The time to find a path in a residual network is therefore O(V + E′) = O(E) if we use either depth-first search or breadth-first search. Each iteration of the while loop thus takes O(E) time, as does the initialization in lines 1–2, making the total running time of the FORD-FULKERSON algorithm O(E |f*|).

²The Ford-Fulkerson method might fail to terminate only if edge capacities are irrational numbers.


Figure 26.6 The execution of the basic Ford-Fulkerson algorithm. (a)–(e) Successive iterations of the while loop. The left side of each part shows the residual network G_f from line 3 with a shaded augmenting path p. The right side of each part shows the new flow f that results from augmenting f by f_p. The residual network in (a) is the input network G.

When the capacities are integral and the optimal flow value |f*| is small, the running time of the Ford-Fulkerson algorithm is good. Figure 26.7(a) shows an example of what can happen on a simple flow network for which |f*| is large. A maximum flow in this network has value 2,000,000: 1,000,000 units of flow traverse the path s → u → t, and another 1,000,000 units traverse the path s → v → t. If the first augmenting path found by FORD-FULKERSON is s → u → v → t, shown in Figure 26.7(a), the flow has value 1 after the first iteration. The resulting residual network appears in Figure 26.7(b). If the second iteration finds the augmenting path s → v → u → t, as shown in Figure 26.7(b), the flow then has value 2. Figure 26.7(c) shows the resulting residual network. We can continue, choosing the augmenting path s → u → v → t in the odd-numbered iterations and the augmenting path s → v → u → t in the even-numbered iterations. We would perform a total of 2,000,000 augmentations, increasing the flow value by only 1 unit in each.


Figure 26.6, continued (f) The residual network at the last while loop test. It has no augmenting paths, and the flow f shown in (e) is therefore a maximum flow. The value of the maximum flow found is 23.

The Edmonds-Karp algorithm

We can improve the bound on FORD-FULKERSON by finding the augmenting path p in line 3 with a breadth-first search. That is, we choose the augmenting path as a shortest path from s to t in the residual network, where each edge has unit distance (weight). We call the Ford-Fulkerson method so implemented the Edmonds-Karp algorithm. We now prove that the Edmonds-Karp algorithm runs in O(VE²) time.
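To make the path choice concrete, here is a small Python sketch of a breadth-first search for a fewest-edge s-to-t path in a residual network, the choice that turns FORD-FULKERSON into the Edmonds-Karp algorithm. It is an illustration only (names and the dict representation are assumptions): cf maps residual edges to positive residual capacities, and the function could replace the depth-first search in the earlier ford_fulkerson sketch.

from collections import deque

def shortest_augmenting_path(cf, s, t):
    adj = {}
    for (u, v) in cf:
        adj.setdefault(u, []).append(v)
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    if t not in parent:
        return None                       # no augmenting path exists
    path, v = [], t
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    return list(reversed(path))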

The analysis depends on the distances to vertices in the residual network G_f. The following lemma uses the notation δ_f(u, v) for the shortest-path distance from u to v in G_f, where each edge has unit distance.

Lemma 26.7
If the Edmonds-Karp algorithm is run on a flow network G = (V, E) with source s and sink t, then for all vertices v ∈ V - {s, t}, the shortest-path distance δ_f(s, v) in the residual network G_f increases monotonically with each flow augmentation.


Figure 26.7 (a) A flow network for which FORD-FULKERSON can take Θ(E |f*|) time, where f* is a maximum flow, shown here with |f*| = 2,000,000. The shaded path is an augmenting path with residual capacity 1. (b) The resulting residual network, with another augmenting path whose residual capacity is 1. (c) The resulting residual network.

Proof We will suppose that for some vertex v ∈ V - {s, t}, there is a flow augmentation that causes the shortest-path distance from s to v to decrease, and then we will derive a contradiction. Let f be the flow just before the first augmentation that decreases some shortest-path distance, and let f′ be the flow just afterward. Let v be the vertex with the minimum δ_f′(s, v) whose distance was decreased by the augmentation, so that δ_f′(s, v) < δ_f(s, v).

M = {(u, v) : u ∈ L, v ∈ R, and f(u, v) > 0}.

Each vertex u ∈ L has only one entering edge, namely (s, u), and its capacity is 1. Thus, each u ∈ L has at most one unit of flow entering it, and if one unit of flow does enter, by flow conservation, one unit of flow must leave. Furthermore, since f is integer-valued, for each u ∈ L, the one unit of flow can enter on at most one edge and can leave on at most one edge. Thus, one unit of flow enters u if and only if there is exactly one vertex v ∈ R such that f(u, v) = 1, and at most one edge leaving each u ∈ L carries positive flow. A symmetric argument applies to each v ∈ R. The set M is therefore a matching.

To see that jM j D jf j, observe that for every matched vertex u 2 L, we have f .s; u/ D 1, and for every edge .u; �/ 2 E �M , we have f .u; �/ D 0. Conse- quently, f .L [ fsg ; R [ ftg/, the net flow across cut .L [ fsg ; R [ ftg/, is equal to jM j. Applying Lemma 26.4, we have that jf j D f .L[fsg ; R[ftg/ D jM j.

Based on Lemma 26.9, we would like to conclude that a maximum matching in a bipartite graph G corresponds to a maximum flow in its corresponding flow network G0, and we can therefore compute a maximum matching in G by running a maximum-flow algorithm on G0. The only hitch in this reasoning is that the maximum-flow algorithm might return a flow in G0 for which some f .u; �/ is not an integer, even though the flow value jf j must be an integer. The following theorem shows that if we use the Ford-Fulkerson method, this difficulty cannot arise.

Theorem 26.10 (Integrality theorem) If the capacity function c takes on only integral values, then the maximum flow f produced by the Ford-Fulkerson method has the property that jf j is an integer. Moreover, for all vertices u and �, the value of f .u; �/ is an integer.

Proof The proof is by induction on the number of iterations. We leave it as Exercise 26.3-2.

We can now prove the following corollary to Lemma 26.9.

26.3 Maximum bipartite matching 735

Corollary 26.11 The cardinality of a maximum matching M in a bipartite graph G equals the value of a maximum flow f in its corresponding flow network G0.

Proof We use the nomenclature from Lemma 26.9. Suppose that M is a max- imum matching in G and that the corresponding flow f in G0 is not maximum. Then there is a maximum flow f 0 in G0 such that jf 0j > jf j. Since the ca- pacities in G0 are integer-valued, by Theorem 26.10, we can assume that f 0 is integer-valued. Thus, f 0 corresponds to a matching M 0 in G with cardinality jM 0j D jf 0j > jf j D jM j, contradicting our assumption that M is a maximum matching. In a similar manner, we can show that if f is a maximum flow in G0, its corresponding matching is a maximum matching on G.

Thus, given a bipartite undirected graph G, we can find a maximum matching by creating the flow network G0, running the Ford-Fulkerson method, and directly ob- taining a maximum matching M from the integer-valued maximum flow f found. Since any matching in a bipartite graph has cardinality at most min.L; R/ D O.V /, the value of the maximum flow in G0 is O.V /. We can therefore find a maximum matching in a bipartite graph in time O.VE 0/ D O.VE/, since jE 0j D ‚.E/.
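To make the reduction concrete, the following Python sketch finds a maximum matching by repeatedly searching for an augmenting path, which is exactly what the Ford-Fulkerson method does, one unit of flow at a time, on the unit-capacity network G0 described above. The adjacency-dictionary representation and the function names are our own.

def maximum_bipartite_matching(L, R, edges):
    # edges[u] is an iterable of the vertices in R adjacent to u in L.
    match_l = {u: None for u in L}        # match_l[u] = v  means edge (u, v) is matched
    match_r = {v: None for v in R}

    def augment(u, visited):
        # Depth-first search for an augmenting (alternating) path starting at u.
        for v in edges.get(u, ()):
            if v not in visited:
                visited.add(v)
                if match_r[v] is None or augment(match_r[v], visited):
                    match_l[u] = v
                    match_r[v] = u
                    return True
        return False

    for u in L:
        augment(u, set())
    return {(u, v) for u, v in match_l.items() if v is not None}

# Example: a maximum matching of size 2.
# maximum_bipartite_matching(['a', 'b'], ['x', 'y'],
#                            {'a': ['x', 'y'], 'b': ['x']})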

Exercises

26.3-1 Run the Ford-Fulkerson algorithm on the flow network in Figure 26.8(c) and show the residual network after each flow augmentation. Number the vertices in L top to bottom from 1 to 5 and in R top to bottom from 6 to 9. For each iteration, pick the augmenting path that is lexicographically smallest.

26.3-2 Prove Theorem 26.10.

26.3-3 Let G D .V; E/ be a bipartite graph with vertex partition V D L [ R, and let G0 be its corresponding flow network. Give a good upper bound on the length of any augmenting path found in G0 during the execution of FORD-FULKERSON.

26.3-4 ? A perfect matching is a matching in which every vertex is matched. Let G D .V; E/ be an undirected bipartite graph with vertex partition V D L [ R, where jLj D jRj. For any X � V , define the neighborhood of X as N.X/ D fy 2 V W .x; y/ 2 E for some x 2 Xg ;

736 Chapter 26 Maximum Flow

that is, the set of vertices adjacent to some member of X . Prove Hall’s theorem: there exists a perfect matching in G if and only if jAj � jN.A/j for every subset A � L. 26.3-5 ? We say that a bipartite graph G D .V; E/, where V D L[R, is d -regular if every vertex � 2 V has degree exactly d . Every d -regular bipartite graph has jLj D jRj. Prove that every d -regular bipartite graph has a matching of cardinality jLj by arguing that a minimum cut of the corresponding flow network has capacity jLj.

? 26.4 Push-relabel algorithms

In this section, we present the “push-relabel” approach to computing maximum flows. To date, many of the asymptotically fastest maximum-flow algorithms are push-relabel algorithms, and the fastest actual implementations of maximum-flow algorithms are based on the push-relabel method. Push-relabel methods also effi- ciently solve other flow problems, such as the minimum-cost flow problem. This section introduces Goldberg’s “generic” maximum-flow algorithm, which has a simple implementation that runs in O.V 2E/ time, thereby improving upon the O.VE2/ bound of the Edmonds-Karp algorithm. Section 26.5 refines the generic algorithm to obtain another push-relabel algorithm that runs in O.V 3/ time.

Push-relabel algorithms work in a more localized manner than the Ford-Fulkerson method. Rather than examine the entire residual network to find an augmenting path, push-relabel algorithms work on one vertex at a time, looking only at the vertex's neighbors in the residual network. Furthermore, unlike the Ford-Fulkerson method, push-relabel algorithms do not maintain the flow-conservation property throughout their execution. They do, however, maintain a preflow, which is a function f : V × V → R that satisfies the capacity constraint and the following relaxation of flow conservation:

Σ_{v∈V} f(v, u) − Σ_{v∈V} f(u, v) ≥ 0

for all vertices u ∈ V − {s}. That is, the flow into a vertex may exceed the flow out. We call the quantity

e(u) = Σ_{v∈V} f(v, u) − Σ_{v∈V} f(u, v)     (26.14)

the excess flow into vertex u. The excess at a vertex is the amount by which the flow in exceeds the flow out. We say that a vertex u ∈ V − {s, t} is overflowing if e(u) > 0.

26.4 Push-relabel algorithms 737

We shall begin this section by describing the intuition behind the push-relabel method. We shall then investigate the two operations employed by the method: “pushing” preflow and “relabeling” a vertex. Finally, we shall present a generic push-relabel algorithm and analyze its correctness and running time.

Intuition

You can understand the intuition behind the push-relabel method in terms of fluid flows: we consider a flow network G D .V; E/ to be a system of interconnected pipes of given capacities. Applying this analogy to the Ford-Fulkerson method, we might say that each augmenting path in the network gives rise to an additional stream of fluid, with no branch points, flowing from the source to the sink. The Ford-Fulkerson method iteratively adds more streams of flow until no more can be added.

The generic push-relabel algorithm has a rather different intuition. As before, directed edges correspond to pipes. Vertices, which are pipe junctions, have two interesting properties. First, to accommodate excess flow, each vertex has an out- flow pipe leading to an arbitrarily large reservoir that can accumulate fluid. Second, each vertex, its reservoir, and all its pipe connections sit on a platform whose height increases as the algorithm progresses.

Vertex heights determine how flow is pushed: we push flow only downhill, that is, from a higher vertex to a lower vertex. The flow from a lower vertex to a higher vertex may be positive, but operations that push flow push it only downhill. We fix the height of the source at jV j and the height of the sink at 0. All other vertex heights start at 0 and increase with time. The algorithm first sends as much flow as possible downhill from the source toward the sink. The amount it sends is exactly enough to fill each outgoing pipe from the source to capacity; that is, it sends the capacity of the cut .s; V � fsg/. When flow first enters an intermediate vertex, it collects in the vertex’s reservoir. From there, we eventually push it downhill.

We may eventually find that the only pipes that leave a vertex u and are not already saturated with flow connect to vertices that are on the same level as u or are uphill from u. In this case, to rid an overflowing vertex u of its excess flow, we must increase its height—an operation called “relabeling” vertex u. We increase its height to one unit more than the height of the lowest of its neighbors to which it has an unsaturated pipe. After a vertex is relabeled, therefore, it has at least one outgoing pipe through which we can push more flow.

Eventually, all the flow that can possibly get through to the sink has arrived there. No more can arrive, because the pipes obey the capacity constraints; the amount of flow across any cut is still limited by the capacity of the cut. To make the preflow a “legal” flow, the algorithm then sends the excess collected in the reservoirs of overflowing vertices back to the source by continuing to relabel vertices to above

738 Chapter 26 Maximum Flow

the fixed height jV j of the source. As we shall see, once we have emptied all the reservoirs, the preflow is not only a “legal” flow, it is also a maximum flow.

The basic operations

From the preceding discussion, we see that a push-relabel algorithm performs two basic operations: pushing flow excess from a vertex to one of its neighbors and relabeling a vertex. The situations in which these operations apply depend on the heights of vertices, which we now define precisely.

Let G D .V; E/ be a flow network with source s and sink t , and let f be a preflow in G. A function h W V ! N is a height function3 if h.s/ D jV j, h.t/ D 0, and h.u/ � h.�/C 1 for every residual edge .u; �/ 2 Ef . We immediately obtain the following lemma.

Lemma 26.12 Let G D .V; E/ be a flow network, let f be a preflow in G, and let h be a height function on V . For any two vertices u; � 2 V , if h.u/ > h.�/C 1, then .u; �/ is not an edge in the residual network.
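These conditions are easy to test mechanically. The following small Python helper, a sketch of our own using dictionaries for capacities, flows, and heights, checks whether h is a height function with respect to a preflow f.

def is_height_function(h, c, f, vertices, s, t):
    # Residual capacity under the chapter's convention of no antiparallel edges.
    def cf(u, v):
        return c.get((u, v), 0) - f.get((u, v), 0) + f.get((v, u), 0)
    if h[s] != len(vertices) or h[t] != 0:
        return False
    # h(u) <= h(v) + 1 must hold for every residual edge (u, v).
    return all(h[u] <= h[v] + 1
               for u in vertices for v in vertices
               if u != v and cf(u, v) > 0)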

The push operation The basic operation PUSH.u; �/ applies if u is an overflowing vertex, cf .u; �/ > 0, and h.u/ D h.�/C1. The pseudocode below updates the preflow f and the excess flows for u and �. It assumes that we can compute residual capacity cf .u; �/ in constant time given c and f . We maintain the excess flow stored at a vertex u as the attribute u:e and the height of u as the attribute u:h. The expression �f .u; �/ is a temporary variable that stores the amount of flow that we can push from u to �.

3In the literature, a height function is typically called a “distance function,” and the height of a vertex is called a “distance label.” We use the term “height” because it is more suggestive of the intuition behind the algorithm. We retain the use of the term “relabel” to refer to the operation that increases the height of a vertex. The height of a vertex is related to its distance from the sink t , as would be found in a breadth-first search of the transpose GT.

26.4 Push-relabel algorithms 739

PUSH(u, v)
1  // Applies when: u is overflowing, c_f(u, v) > 0, and u.h == v.h + 1.
2  // Action: Push Δ_f(u, v) = min(u.e, c_f(u, v)) units of flow from u to v.
3  Δ_f(u, v) = min(u.e, c_f(u, v))
4  if (u, v) ∈ E
5      (u, v).f = (u, v).f + Δ_f(u, v)
6  else (v, u).f = (v, u).f − Δ_f(u, v)
7  u.e = u.e − Δ_f(u, v)
8  v.e = v.e + Δ_f(u, v)

The code for PUSH operates as follows. Because vertex u has a positive excess u:e and the residual capacity of .u; �/ is positive, we can increase the flow from u to � by �f .u; �/ D min.u:e; cf .u; �// without causing u:e to become negative or the capacity c.u; �/ to be exceeded. Line 3 computes the value �f .u; �/, and lines 4–6 update f . Line 5 increases the flow on edge .u; �/, because we are pushing flow over a residual edge that is also an original edge. Line 6 decreases the flow on edge .�; u/, because the residual edge is actually the reverse of an edge in the original network. Finally, lines 7–8 update the excess flows into vertices u and �. Thus, if f is a preflow before PUSH is called, it remains a preflow afterward.
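The following Python sketch transcribes PUSH directly, using dictionaries f (flow on original edges), e (excess), and h (height); this representation, like the function name push, is our own convention rather than the book's.

def push(c, f, e, h, u, v):
    # Applies when u is overflowing, c_f(u, v) > 0, and h[u] == h[v] + 1.
    cf_uv = c.get((u, v), 0) - f.get((u, v), 0) + f.get((v, u), 0)
    delta = min(e[u], cf_uv)                 # amount of flow to push
    if (u, v) in c:                          # (u, v) is an original edge
        f[(u, v)] = f.get((u, v), 0) + delta
    else:                                    # cancel flow on the reverse edge
        f[(v, u)] = f.get((v, u), 0) - delta
    e[u] -= delta
    e[v] += delta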

Observe that nothing in the code for PUSH depends on the heights of u and �, yet we prohibit it from being invoked unless u:h D �:hC 1. Thus, we push excess flow downhill only by a height differential of 1. By Lemma 26.12, no residual edges exist between two vertices whose heights differ by more than 1, and thus, as long as the attribute h is indeed a height function, we would gain nothing by allowing flow to be pushed downhill by a height differential of more than 1.

We call the operation PUSH.u; �/ a push from u to �. If a push operation ap- plies to some edge .u; �/ leaving a vertex u, we also say that the push operation applies to u. It is a saturating push if edge .u; �/ in the residual network becomes saturated (cf .u; �/ D 0 afterward); otherwise, it is a nonsaturating push. If an edge becomes saturated, it disappears from the residual network. A simple lemma characterizes one result of a nonsaturating push.

Lemma 26.13 After a nonsaturating push from u to �, the vertex u is no longer overflowing.

Proof Since the push was nonsaturating, the amount of flow �f .u; �/ actually pushed must equal u:e prior to the push. Since u:e is reduced by this amount, it becomes 0 after the push.

740 Chapter 26 Maximum Flow

The relabel operation The basic operation RELABEL.u/ applies if u is overflowing and if u:h � �:h for all edges .u; �/ 2 Ef . In other words, we can relabel an overflowing vertex u if for every vertex � for which there is residual capacity from u to �, flow cannot be pushed from u to � because � is not downhill from u. (Recall that by definition, neither the source s nor the sink t can be overflowing, and so s and t are ineligible for relabeling.)

RELABEL(u)
1  // Applies when: u is overflowing and for all v ∈ V such that (u, v) ∈ E_f, we have u.h ≤ v.h.
2  // Action: Increase the height of u.
3  u.h = 1 + min { v.h : (u, v) ∈ E_f }

When we call the operation RELABEL(u), we say that vertex u is relabeled. Note that when u is relabeled, E_f must contain at least one edge that leaves u, so that the minimization in the code is over a nonempty set. This property follows from the assumption that u is overflowing, which in turn tells us that

u.e = Σ_{v∈V} f(v, u) − Σ_{v∈V} f(u, v) > 0 .

Since all flows are nonnegative, we must therefore have at least one vertex � such that .�; u/: f > 0. But then, cf .u; �/ > 0, which implies that .u; �/ 2 Ef . The operation RELABEL.u/ thus gives u the greatest height allowed by the constraints on height functions.
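A matching Python sketch of RELABEL, using the same dictionary representation as the push sketch above (again our own convention):

def relabel(c, f, h, u, vertices):
    # Applies when u is overflowing and h[u] <= h[v] for every residual edge (u, v).
    def cf(x, y):
        return c.get((x, y), 0) - f.get((x, y), 0) + f.get((y, x), 0)
    # The minimization is over a nonempty set, because an overflowing vertex
    # always has at least one outgoing residual edge.
    h[u] = 1 + min(h[v] for v in vertices if v != u and cf(u, v) > 0)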

The generic algorithm

The generic push-relabel algorithm uses the following subroutine to create an ini- tial preflow in the flow network.

INITIALIZE-PREFLOW(G, s)
 1  for each vertex v ∈ G.V
 2      v.h = 0
 3      v.e = 0
 4  for each edge (u, v) ∈ G.E
 5      (u, v).f = 0
 6  s.h = |G.V|
 7  for each vertex v ∈ s.Adj
 8      (s, v).f = c(s, v)
 9      v.e = c(s, v)
10      s.e = s.e − c(s, v)

26.4 Push-relabel algorithms 741

INITIALIZE-PREFLOW creates an initial preflow f defined by

(u, v).f = c(u, v) if u = s, and 0 otherwise.     (26.15)

That is, we fill to capacity each edge leaving the source s, and all other edges carry no flow. For each vertex v adjacent to the source, we initially have v.e = c(s, v), and we initialize s.e to the negative of the sum of these capacities. The generic algorithm also begins with an initial height function h, given by

u.h = |V| if u = s, and 0 otherwise.     (26.16)

Equation (26.16) defines a height function because the only edges .u; �/ for which u:h > �:h C 1 are those for which u D s, and those edges are saturated, which means that they are not in the residual network.

Initialization, followed by a sequence of push and relabel operations, executed in no particular order, yields the GENERIC-PUSH-RELABEL algorithm:

GENERIC-PUSH-RELABEL(G)
1  INITIALIZE-PREFLOW(G, s)
2  while there exists an applicable push or relabel operation
3      select an applicable push or relabel operation and perform it
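Putting the pieces together, here is a self-contained Python sketch of the generic algorithm. It performs applicable operations in an arbitrary (stack-like) order, which is all that the correctness argument requires; the representation and names are our own.

from collections import defaultdict

def generic_push_relabel(c, vertices, s, t):
    # c maps each directed edge (u, v) to its capacity; antiparallel edges are
    # assumed absent, as in the chapter's flow-network convention.
    f = defaultdict(int)
    def cf(u, v):
        return c.get((u, v), 0) - f[(u, v)] + f[(v, u)]

    # INITIALIZE-PREFLOW: saturate every edge leaving s and set s's height to |V|.
    e = {v: 0 for v in vertices}
    h = {v: 0 for v in vertices}
    h[s] = len(vertices)
    for (u, v), cap in c.items():
        if u == s:
            f[(u, v)] = cap
            e[v] += cap
            e[s] -= cap

    def overflowing(u):
        return u not in (s, t) and e[u] > 0

    active = [u for u in vertices if overflowing(u)]
    while active:
        u = active[-1]
        pushed = False
        for v in vertices:
            if cf(u, v) > 0 and h[u] == h[v] + 1:      # an applicable push
                delta = min(e[u], cf(u, v))
                if (u, v) in c:
                    f[(u, v)] += delta
                else:
                    f[(v, u)] -= delta
                e[u] -= delta
                e[v] += delta
                if overflowing(v) and v not in active:
                    active.append(v)
                pushed = True
                break
        if not pushed:                                  # no admissible edge: relabel u
            h[u] = 1 + min(h[v] for v in vertices if cf(u, v) > 0)
        if not overflowing(u):
            active.remove(u)
    return e[t]                                         # |f| once the preflow is a flow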

The following lemma tells us that as long as an overflowing vertex exists, at least one of the two basic operations applies.

Lemma 26.14 (An overflowing vertex can be either pushed or relabeled) Let G D .V; E/ be a flow network with source s and sink t , let f be a preflow, and let h be any height function for f . If u is any overflowing vertex, then either a push or relabel operation applies to it.

Proof For any residual edge (u, v), we have h(u) ≤ h(v) + 1 because h is a height function. If a push operation does not apply to an overflowing vertex u, then for all residual edges (u, v), we must have h(u) < h(v) + 1, which implies h(u) ≤ h(v), and so a relabel operation applies to u.

Since e(x) > 0, x ∈ U, all vertices other than s have nonnegative excess, and, by assumption, s ∉ U. Thus, we have

Σ_{u∈U} Σ_{v∈V−U} f(v, u) − Σ_{u∈U} Σ_{v∈V−U} f(u, v) > 0 .   (26.17)

All edge flows are nonnegative, and so for equation (26.17) to hold, we must have Σ_{u∈U} Σ_{v∈V−U} f(v, u) > 0. Hence, there must exist at least one pair of vertices u′ ∈ U and v′ ∈ V − U with f(v′, u′) > 0. But, if f(v′, u′) > 0, there must be a residual edge (u′, v′), which means that there is a simple path from x to v′ (the path x ⇝ u′ → v′), thus contradicting the definition of U.

The next lemma bounds the heights of vertices, and its corollary bounds the number of relabel operations that are performed in total.

Lemma 26.20 Let G D .V; E/ be a flow network with source s and sink t . At any time during the execution of GENERIC-PUSH-RELABEL on G, we have u:h � 2 jV j�1 for all vertices u 2 V .

Proof The heights of the source s and the sink t never change because these vertices are by definition not overflowing. Thus, we always have s:h D jV j and t:h D 0, both of which are no greater than 2 jV j � 1.

Now consider any vertex u 2 V �fs; tg. Initially, u:h D 0 � 2 jV j�1. We shall show that after each relabeling operation, we still have u:h � 2 jV j � 1. When u is

26.4 Push-relabel algorithms 745

relabeled, it is overflowing, and Lemma 26.19 tells us that there is a simple path p from u to s in Gf . Let p D h�0;�1; : : : ;�ki, where �0 D u, �k D s, and k � jV j�1 because p is simple. For i D 0; 1; : : : ; k � 1, we have .�i ; �iC1/ 2 Ef , and therefore, by Lemma 26.16, �i :h � �iC1:hC 1. Expanding these inequalities over path p yields u:h D �0:h � �k:hC k � s:hC .jV j � 1/ D 2 jV j � 1.

Corollary 26.21 (Bound on relabel operations) Let G = (V, E) be a flow network with source s and sink t. Then, during the execution of GENERIC-PUSH-RELABEL on G, the number of relabel operations is at most 2|V| − 1 per vertex and at most (2|V| − 1)(|V| − 2) < 2|V|² overall.

To bound the number of nonsaturating pushes, define the potential function Φ = Σ_{v∈V : e(v)>0} v.h. Initially, Φ = 0, and the value of Φ may change after each relabeling, saturating push, and nonsaturating push. We will bound the amount that saturating pushes and relabelings can contribute to the increase of Φ. Then we will show that each nonsaturating push must decrease Φ by at least 1, and will use these bounds to derive an upper bound on the number of nonsaturating pushes.

Let us examine the two ways in which Φ might increase. First, relabeling a vertex u increases Φ by less than 2|V|, since the set over which the sum is taken is the same and the relabeling cannot increase u's height by more than its maximum possible height, which, by Lemma 26.20, is at most 2|V| − 1. Second, a saturating push from a vertex u to a vertex v increases Φ by less than 2|V|, since no heights change and only vertex v, whose height is at most 2|V| − 1, can possibly become overflowing.

Now we show that a nonsaturating push from u to v decreases Φ by at least 1. Why? Before the nonsaturating push, u was overflowing, and v may or may not have been overflowing. By Lemma 26.13, u is no longer overflowing after the push. In addition, unless v is the source, it may or may not be overflowing after the push. Therefore, the potential function Φ has decreased by exactly u.h, and it has increased by either 0 or v.h. Since u.h − v.h = 1, the net effect is that the potential function has decreased by at least 1.

Thus, during the course of the algorithm, the total amount of increase in Φ is due to relabelings and saturating pushes, and Corollary 26.21 and Lemma 26.22 constrain the increase to be less than (2|V|)(2|V|²) + (2|V|)(2|V||E|) = 4|V|²(|V| + |E|). Since Φ ≥ 0, the total amount of decrease, and therefore the total number of nonsaturating pushes, is less than 4|V|²(|V| + |E|).

Having bounded the number of relabelings, saturating pushes, and nonsaturating pushes, we have set the stage for the following analysis of the GENERIC-PUSH-RELABEL procedure, and hence of any algorithm based on the push-relabel method.

Theorem 26.24 During the execution of GENERIC-PUSH-RELABEL on any flow network G D .V; E/, the number of basic operations is O.V 2E/.

Proof Immediate from Corollary 26.21 and Lemmas 26.22 and 26.23.

26.4 Push-relabel algorithms 747

Thus, the algorithm terminates after O.V 2E/ operations. All that remains is to give an efficient method for implementing each operation and for choosing an appropriate operation to execute.

Corollary 26.25 There is an implementation of the generic push-relabel algorithm that runs in O.V 2E/ time on any flow network G D .V; E/.

Proof Exercise 26.4-2 asks you to show how to implement the generic algorithm with an overhead of O.V / per relabel operation and O.1/ per push. It also asks you to design a data structure that allows you to pick an applicable operation in O.1/ time. The corollary then follows.

Exercises

26.4-1 Prove that, after the procedure INITIALIZE-PREFLOW.G; s/ terminates, we have s:e � � jf �j, where f � is a maximum flow for G.

26.4-2 Show how to implement the generic push-relabel algorithm using O.V / time per relabel operation, O.1/ time per push, and O.1/ time to select an applicable operation, for a total time of O.V 2E/.

26.4-3 Prove that the generic push-relabel algorithm spends a total of only O.VE/ time in performing all the O.V 2/ relabel operations.

26.4-4 Suppose that we have found a maximum flow in a flow network G D .V; E/ using a push-relabel algorithm. Give a fast algorithm to find a minimum cut in G.

26.4-5 Give an efficient push-relabel algorithm to find a maximum matching in a bipartite graph. Analyze your algorithm.

26.4-6 Suppose that all edge capacities in a flow network G D .V; E/ are in the set f1; 2; : : : ; kg. Analyze the running time of the generic push-relabel algorithm in terms of jV j, jEj, and k. (Hint: How many times can each edge support a nonsat- urating push before it becomes saturated?)

748 Chapter 26 Maximum Flow

26.4-7 Show that we could change line 6 of INITIALIZE-PREFLOW to

6 s:h D jG:Vj � 2

without affecting the correctness or asymptotic performance of the generic push- relabel algorithm.

26.4-8 Let ıf .u; �/ be the distance (number of edges) from u to � in the residual network Gf . Show that the GENERIC-PUSH-RELABEL procedure maintains the properties that u:h < jV j implies u:h ≤ ıf .u; t/, and that u:h ≥ jV j implies u:h − jV j ≤ ıf .u; s/.

? 26.5 The relabel-to-front algorithm

Admissible edges and networks

Given a flow network G D .V; E/, a preflow f , and a height function h, we say that .u; �/ is an admissible edge if cf .u; �/ > 0 and h.u/ D h.�/C 1. Otherwise, .u; �/ is inadmissible. The admissible network is Gf;h D .V; Ef;h/, where Ef;h is the set of admissible edges.

The admissible network consists of those edges through which we can push flow. The following lemma shows that this network is a directed acyclic graph (dag).

Lemma 26.26 (The admissible network is acyclic) If G D .V; E/ is a flow network, f is a preflow in G, and h is a height function on G, then the admissible network Gf;h D .V; Ef;h/ is acyclic.

Proof The proof is by contradiction. Suppose that Gf;h contains a cycle p D h�0; �1; : : : ; �ki, where �0 D �k and k > 0. Since each edge in p is admissible, we have h.�i�1/ D h.�i /C 1 for i D 1; 2; : : : ; k. Summing around the cycle gives

Σ_{i=1}^{k} h(v_{i−1}) = Σ_{i=1}^{k} (h(v_i) + 1) = Σ_{i=1}^{k} h(v_i) + k .

Because each vertex in cycle p appears once in each of the summations, we derive the contradiction that 0 D k.

The next two lemmas show how push and relabel operations change the admis- sible network.

Lemma 26.27 Let G D .V; E/ be a flow network, let f be a preflow in G, and suppose that the attribute h is a height function. If a vertex u is overflowing and .u; �/ is an ad- missible edge, then PUSH.u; �/ applies. The operation does not create any new admissible edges, but it may cause .u; �/ to become inadmissible.

750 Chapter 26 Maximum Flow

Proof By the definition of an admissible edge, we can push flow from u to �. Since u is overflowing, the operation PUSH.u; �/ applies. The only new residual edge that pushing flow from u to � can create is .�; u/. Since �:h D u:h � 1, edge .�; u/ cannot become admissible. If the operation is a saturating push, then cf .u; �/ D 0 afterward and .u; �/ becomes inadmissible.

Lemma 26.28 Let G D .V; E/ be a flow network, let f be a preflow in G, and suppose that the attribute h is a height function. If a vertex u is overflowing and there are no admissible edges leaving u, then RELABEL.u/ applies. After the relabel operation, there is at least one admissible edge leaving u, but there are no admissible edges entering u.

Proof If u is overflowing, then by Lemma 26.14, either a push or a relabel op- eration applies to it. If there are no admissible edges leaving u, then no flow can be pushed from u and so RELABEL.u/ applies. After the relabel operation, u:h D 1 C min f�:h W .u; �/ 2 Ef g. Thus, if � is a vertex that realizes the mini- mum in this set, the edge .u; �/ becomes admissible. Hence, after the relabel, there is at least one admissible edge leaving u.

To show that no admissible edges enter u after a relabel operation, suppose that there is a vertex � such that .�; u/ is admissible. Then, �:h D u:h C 1 after the relabel, and so �:h > u:h C 1 just before the relabel. But by Lemma 26.12, no residual edges exist between vertices whose heights differ by more than 1. More- over, relabeling a vertex does not change the residual network. Thus, .�; u/ is not in the residual network, and hence it cannot be in the admissible network.

Neighbor lists

Edges in the relabel-to-front algorithm are organized into “neighbor lists.” Given a flow network G D .V; E/, the neighbor list u:N for a vertex u 2 V is a singly linked list of the neighbors of u in G. Thus, vertex � appears in the list u:N if .u; �/ 2 E or .�; u/ 2 E. The neighbor list u:N contains exactly those vertices � for which there may be a residual edge .u; �/. The attribute u:N:head points to the first vertex in u:N, and �:next-neighbor points to the vertex following � in a neighbor list; this pointer is NIL if � is the last vertex in the neighbor list.

The relabel-to-front algorithm cycles through each neighbor list in an arbitrary order that is fixed throughout the execution of the algorithm. For each vertex u, the attribute u:current points to the vertex currently under consideration in u:N. Initially, u:current is set to u:N:head.
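In Python, the neighbor lists and current pointers might be set up as follows, with list indices standing in for the linked-list pointers of the text (the names N and current are ours):

def build_neighbor_lists(c, vertices):
    # v belongs to u.N if (u, v) or (v, u) is an edge of the flow network.
    N = {u: [] for u in vertices}
    for (u, v) in c:
        if v not in N[u]:
            N[u].append(v)
        if u not in N[v]:
            N[v].append(u)
    # current[u] is the index of the neighbor currently under consideration;
    # index 0 plays the role of u.N.head, and len(N[u]) plays the role of NIL.
    current = {u: 0 for u in vertices}
    return N, current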

26.5 The relabel-to-front algorithm 751

Discharging an overflowing vertex

An overflowing vertex u is discharged by pushing all of its excess flow through admissible edges to neighboring vertices, relabeling u as necessary to cause edges leaving u to become admissible. The pseudocode goes as follows.

DISCHARGE(u)
1  while u.e > 0
2      v = u.current
3      if v == NIL
4          RELABEL(u)
5          u.current = u.N.head
6      elseif c_f(u, v) > 0 and u.h == v.h + 1
7          PUSH(u, v)
8      else u.current = v.next-neighbor

Figure 26.9 steps through several iterations of the while loop of lines 1–8, which executes as long as vertex u has positive excess. Each iteration performs exactly one of three actions, depending on the current vertex � in the neighbor list u:N.

1. If � is NIL, then we have run off the end of u:N. Line 4 relabels vertex u, and then line 5 resets the current neighbor of u to be the first one in u:N. (Lemma 26.29 below states that the relabel operation applies in this situation.)

2. If � is non-NIL and .u; �/ is an admissible edge (determined by the test in line 6), then line 7 pushes some (or possibly all) of u’s excess to vertex �.

3. If � is non-NIL but .u; �/ is inadmissible, then line 8 advances u:current one position further in the neighbor list u:N.

Observe that if DISCHARGE is called on an overflowing vertex u, then the last action performed by DISCHARGE must be a push from u. Why? The procedure terminates only when u:e becomes zero, and neither the relabel operation nor ad- vancing the pointer u:current affects the value of u:e.
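Here is a Python sketch of DISCHARGE built on the neighbor-list representation sketched earlier; as before, the dictionaries c, f, e, and h and the helper names are our own, and an index equal to len(N[u]) plays the role of NIL.

def discharge(u, c, f, e, h, N, current):
    def cf(x, y):
        return c.get((x, y), 0) - f.get((x, y), 0) + f.get((y, x), 0)
    while e[u] > 0:
        if current[u] == len(N[u]):                    # ran off the end of u.N
            h[u] = 1 + min(h[v] for v in N[u] if cf(u, v) > 0)   # RELABEL(u)
            current[u] = 0                             # reset to the head of u.N
        else:
            v = N[u][current[u]]
            if cf(u, v) > 0 and h[u] == h[v] + 1:      # admissible edge: PUSH(u, v)
                delta = min(e[u], cf(u, v))
                if (u, v) in c:
                    f[(u, v)] = f.get((u, v), 0) + delta
                else:
                    f[(v, u)] = f.get((v, u), 0) - delta
                e[u] -= delta
                e[v] += delta
            else:
                current[u] += 1                        # advance u.current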

We must be sure that when PUSH or RELABEL is called by DISCHARGE, the operation applies. The next lemma proves this fact.

Lemma 26.29 If DISCHARGE calls PUSH.u; �/ in line 7, then a push operation applies to .u; �/. If DISCHARGE calls RELABEL.u/ in line 4, then a relabel operation applies to u.

Proof The tests in lines 1 and 6 ensure that a push operation occurs only if the operation applies, which proves the first statement in the lemma.

752 Chapter 26 Maximum Flow


Figure 26.9 Discharging a vertex y. It takes 15 iterations of the while loop of DISCHARGE to push all the excess flow from y. Only the neighbors of y and edges of the flow network that enter or leave y are shown. In each part of the figure, the number inside each vertex is its excess at the beginning of the first iteration shown in the part, and each vertex is shown at its height throughout the part. The neighbor list y:N at the beginning of each iteration appears on the right, with the iteration number on top. The shaded neighbor is y:current. (a) Initially, there are 19 units of excess to push from y, and y:current D s. Iterations 1, 2, and 3 just advance y:current, since there are no admissible edges leaving y. In iteration 4, y:current D NIL (shown by the shading being below the neighbor list), and so y is relabeled and y:current is reset to the head of the neighbor list. (b) After relabeling, vertex y has height 1. In iterations 5 and 6, edges .y; s/ and .y; x/ are found to be inadmissible, but iteration 7 pushes 8 units of excess flow from y to z. Because of the push, y:current does not advance in this iteration. (c) Because the push in iteration 7 saturated edge .y; z/, it is found inadmissible in iteration 8. In iteration 9, y:current D NIL, and so vertex y is again relabeled and y:current is reset.

26.5 The relabel-to-front algorithm 753


Figure 26.9, continued (d) In iteration 10, .y; s/ is inadmissible, but iteration 11 pushes 5 units of excess flow from y to x. (e) Because y:current did not advance in iteration 11, iteration 12 finds .y; x/ to be inadmissible. Iteration 13 finds .y; z/ inadmissible, and iteration 14 relabels vertex y and resets y:current. (f) Iteration 15 pushes 6 units of excess flow from y to s. (g) Vertex y now has no excess flow, and DISCHARGE terminates. In this example, DISCHARGE both starts and finishes with the current pointer at the head of the neighbor list, but in general this need not be the case.

754 Chapter 26 Maximum Flow

To prove the second statement, according to the test in line 1 and Lemma 26.28, we need only show that all edges leaving u are inadmissible. If a call to DISCHARGE.u/ starts with the pointer u:current at the head of u’s neighbor list and finishes with it off the end of the list, then all of u’s outgoing edges are in- admissible and a relabel operation applies. It is possible, however, that during a call to DISCHARGE.u/, the pointer u:current traverses only part of the list be- fore the procedure returns. Calls to DISCHARGE on other vertices may then oc- cur, but u:current will continue moving through the list during the next call to DISCHARGE.u/. We now consider what happens during a complete pass through the list, which begins at the head of u:N and finishes with u:current D NIL. Once u:current reaches the end of the list, the procedure relabels u and begins a new pass. For the u:current pointer to advance past a vertex � 2 u:N during a pass, the edge .u; �/ must be deemed inadmissible by the test in line 6. Thus, by the time the pass completes, every edge leaving u has been determined to be inadmissible at some time during the pass. The key observation is that at the end of the pass, every edge leaving u is still inadmissible. Why? By Lemma 26.27, pushes cannot create any admissible edges, regardless of which vertex the flow is pushed from. Thus, any admissible edge must be created by a relabel operation. But the vertex u is not relabeled during the pass, and by Lemma 26.28, any other vertex � that is relabeled during the pass (resulting from a call of DISCHARGE.�/) has no entering admissible edges after relabeling. Thus, at the end of the pass, all edges leaving u remain inadmissible, which completes the proof.

The relabel-to-front algorithm

In the relabel-to-front algorithm, we maintain a linked list L consisting of all ver- tices in V � fs; tg. A key property is that the vertices in L are topologically sorted according to the admissible network, as we shall see in the loop invariant that fol- lows. (Recall from Lemma 26.26 that the admissible network is a dag.)

The pseudocode for the relabel-to-front algorithm assumes that the neighbor lists u:N have already been created for each vertex u. It also assumes that u:next points to the vertex that follows u in list L and that, as usual, u:next D NIL if u is the last vertex in the list.

26.5 The relabel-to-front algorithm 755

RELABEL-TO-FRONT(G, s, t)
 1  INITIALIZE-PREFLOW(G, s)
 2  L = G.V − {s, t}, in any order
 3  for each vertex u ∈ G.V − {s, t}
 4      u.current = u.N.head
 5  u = L.head
 6  while u ≠ NIL
 7      old-height = u.h
 8      DISCHARGE(u)
 9      if u.h > old-height
10          move u to the front of list L
11      u = u.next

The relabel-to-front algorithm works as follows. Line 1 initializes the preflow and heights to the same values as in the generic push-relabel algorithm. Line 2 initializes the list L to contain all potentially overflowing vertices, in any order. Lines 3–4 initialize the current pointer of each vertex u to the first vertex in u’s neighbor list.

As Figure 26.10 illustrates, the while loop of lines 6–11 runs through the list L, discharging vertices. Line 5 makes it start with the first vertex in the list. Each time through the loop, line 8 discharges a vertex u. If u was relabeled by the DISCHARGE procedure, line 10 moves it to the front of list L. We can determine whether u was relabeled by comparing its height before the discharge operation, saved into the variable old-height in line 7, with its height afterward, in line 9. Line 11 makes the next iteration of the while loop use the vertex following u in list L. If line 10 moved u to the front of the list, the vertex used in the next iteration is the one following u in its new position in the list.

To show that RELABEL-TO-FRONT computes a maximum flow, we shall show that it is an implementation of the generic push-relabel algorithm. First, ob- serve that it performs push and relabel operations only when they apply, since Lemma 26.29 guarantees that DISCHARGE performs them only when they apply. It remains to show that when RELABEL-TO-FRONT terminates, no basic opera- tions apply. The remainder of the correctness argument relies on the following loop invariant:

At each test in line 6 of RELABEL-TO-FRONT, list L is a topological sort of the vertices in the admissible network Gf;h D .V; Ef;h/, and no vertex before u in the list has excess flow.

Initialization: Immediately after INITIALIZE-PREFLOW has been run, s:h D jV j and �:h D 0 for all � 2 V � fsg. Since jV j � 2 (because V contains at

756 Chapter 26 Maximum Flow


Figure 26.10 The action of RELABEL-TO-FRONT. (a) A flow network just before the first iteration of the while loop. Initially, 26 units of flow leave source s. On the right is shown the initial list L D hx; y; zi, where initially u D x. Under each vertex in list L is its neighbor list, with the current neighbor shaded. Vertex x is discharged. It is relabeled to height 1, 5 units of excess flow are pushed to y, and the 7 remaining units of excess are pushed to the sink t . Because x is relabeled, it moves to the head of L, which in this case does not change the structure of L. (b) After x, the next vertex in L that is discharged is y. Figure 26.9 shows the detailed action of discharging y in this situation. Because y is relabeled, it is moved to the head of L. (c) Vertex x now follows y in L, and so it is again discharged, pushing all 5 units of excess flow to t . Because vertex x is not relabeled in this discharge operation, it remains in place in list L.

26.5 The relabel-to-front algorithm 757


Figure 26.10, continued (d) Since vertex z follows vertex x in L, it is discharged. It is relabeled to height 1 and all 8 units of excess flow are pushed to t . Because z is relabeled, it moves to the front of L. (e) Vertex y now follows vertex z in L and is therefore discharged. But because y has no excess, DISCHARGE immediately returns, and y remains in place in L. Vertex x is then discharged. Because it, too, has no excess, DISCHARGE again returns, and x remains in place in L. RELABEL-TO-FRONT has reached the end of list L and terminates. There are no overflowing vertices, and the preflow is a maximum flow.

least s and t), no edge can be admissible. Thus, Ef;h D ;, and any ordering of V � fs; tg is a topological sort of Gf;h. Because u is initially the head of the list L, there are no vertices before it and so there are none before it with excess flow.

Maintenance: To see that each iteration of the while loop maintains the topolog- ical sort, we start by observing that the admissible network is changed only by push and relabel operations. By Lemma 26.27, push operations do not cause edges to become admissible. Thus, only relabel operations can create admissi- ble edges. After a vertex u is relabeled, however, Lemma 26.28 states that there are no admissible edges entering u but there may be admissible edges leaving u. Thus, by moving u to the front of L, the algorithm ensures that any admissible edges leaving u satisfy the topological sort ordering.

758 Chapter 26 Maximum Flow

To see that no vertex preceding u in L has excess flow, we denote the vertex that will be u in the next iteration by u0. The vertices that will precede u0 in the next iteration include the current u (due to line 11) and either no other vertices (if u is relabeled) or the same vertices as before (if u is not relabeled). When u is discharged, it has no excess flow afterward. Thus, if u is relabeled during the discharge, no vertices preceding u0 have excess flow. If u is not relabeled during the discharge, no vertices before it on the list acquired excess flow during this discharge, because L remained topologically sorted at all times during the discharge (as just pointed out, admissible edges are created only by relabeling, not pushing), and so each push operation causes excess flow to move only to vertices further down the list (or to s or t). Again, no vertices preceding u0 have excess flow.

Termination: When the loop terminates, u is just past the end of L, and so the loop invariant ensures that the excess of every vertex is 0. Thus, no basic oper- ations apply.

Analysis

We shall now show that RELABEL-TO-FRONT runs in O.V 3/ time on any flow network G D .V; E/. Since the algorithm is an implementation of the generic push-relabel algorithm, we shall take advantage of Corollary 26.21, which pro- vides an O.V / bound on the number of relabel operations executed per vertex and an O.V 2/ bound on the total number of relabel operations overall. In addition, Ex- ercise 26.4-3 provides an O.VE/ bound on the total time spent performing relabel operations, and Lemma 26.22 provides an O.VE/ bound on the total number of saturating push operations.

Theorem 26.30 The running time of RELABEL-TO-FRONT on any flow network G D .V; E/ is O.V 3/.

Proof Let us consider a “phase” of the relabel-to-front algorithm to be the time between two consecutive relabel operations. There are O.V 2/ phases, since there are O.V 2/ relabel operations. Each phase consists of at most jV j calls to DIS- CHARGE, which we can see as follows. If DISCHARGE does not perform a re- label operation, then the next call to DISCHARGE is further down the list L, and the length of L is less than jV j. If DISCHARGE does perform a relabel, the next call to DISCHARGE belongs to a different phase. Since each phase contains at most jV j calls to DISCHARGE and there are O.V 2/ phases, the number of times DISCHARGE is called in line 8 of RELABEL-TO-FRONT is O.V 3/. Thus, the total

26.5 The relabel-to-front algorithm 759

work performed by the while loop in RELABEL-TO-FRONT, excluding the work performed within DISCHARGE, is at most O.V 3/.

We must now bound the work performed within DISCHARGE during the ex- ecution of the algorithm. Each iteration of the while loop within DISCHARGE performs one of three actions. We shall analyze the total amount of work involved in performing each of these actions.

We start with relabel operations (lines 4–5). Exercise 26.4-3 provides an O.VE/ time bound on all the O.V 2/ relabels that are performed.

Now, suppose that the action updates the u:current pointer in line 8. This action occurs O.degree.u// times each time a vertex u is relabeled, and O.V �degree.u// times overall for the vertex. For all vertices, therefore, the total amount of work done in advancing pointers in neighbor lists is O.VE/ by the handshaking lemma (Exercise B.4-1).

The third type of action performed by DISCHARGE is a push operation (line 7). We already know that the total number of saturating push operations is O.VE/. Observe that if a nonsaturating push is executed, DISCHARGE immediately returns, since the push reduces the excess to 0. Thus, there can be at most one nonsaturating push per call to DISCHARGE. As we have observed, DISCHARGE is called O.V 3/ times, and thus the total time spent performing nonsaturating pushes is O.V 3/.

The running time of RELABEL-TO-FRONT is therefore O.V 3 C VE/, which is O.V 3/.

Exercises

26.5-1 Illustrate the execution of RELABEL-TO-FRONT in the manner of Figure 26.10 for the flow network in Figure 26.1(a). Assume that the initial ordering of vertices in L is h�1; �2; �3; �4i and that the neighbor lists are �1:N D hs; �2; �3i ; �2:N D hs; �1; �3; �4i ; �3:N D h�1; �2; �4; ti ; �4:N D h�2; �3; ti :

26.5-2 ? We would like to implement a push-relabel algorithm in which we maintain a first- in, first-out queue of overflowing vertices. The algorithm repeatedly discharges the vertex at the head of the queue, and any vertices that were not overflowing before the discharge but are overflowing afterward are placed at the end of the queue. After the vertex at the head of the queue is discharged, it is removed. When the

760 Chapter 26 Maximum Flow

queue is empty, the algorithm terminates. Show how to implement this algorithm to compute a maximum flow in O.V 3/ time.

26.5-3 Show that the generic algorithm still works if RELABEL updates u:h by sim- ply computing u:h D u:h C 1. How would this change affect the analysis of RELABEL-TO-FRONT?

26.5-4 ? Show that if we always discharge a highest overflowing vertex, we can make the push-relabel method run in O.V 3/ time.

26.5-5 Suppose that at some point in the execution of a push-relabel algorithm, there exists an integer 0 < k ≤ jV j − 1 for which no vertex has �:h D k. Show that all vertices with �:h > k are on the source side of a minimum cut. If such a k exists, the gap heuristic updates every vertex � 2 V � fsg for which �:h > k, to set �:h D max.�:h; jV j C 1/. Show that the resulting attribute h is a height function. (The gap heuristic is crucial in making implementations of the push-relabel method perform well in practice.)

Problems

26-1 Escape problem An n × n grid is an undirected graph consisting of n rows and n columns of vertices, as shown in Figure 26.11. We denote the vertex in the i th row and the j th column by .i; j /. All vertices in a grid have exactly four neighbors, except for the boundary vertices, which are the points .i; j / for which i D 1, i D n, j D 1, or j D n.

Given m � n2 starting points .x1; y1/; .x2; y2/; : : : ; .xm; ym/ in the grid, the escape problem is to determine whether or not there are m vertex-disjoint paths from the starting points to any m different points on the boundary. For example, the grid in Figure 26.11(a) has an escape, but the grid in Figure 26.11(b) does not.

a. Consider a flow network in which vertices, as well as edges, have capacities. That is, the total positive flow entering any given vertex is subject to a capacity constraint. Show that determining the maximum flow in a network with edge and vertex capacities can be reduced to an ordinary maximum-flow problem on a flow network of comparable size.

Problems for Chapter 26 761

(a) (b)

Figure 26.11 Grids for the escape problem. Starting points are black, and other grid vertices are white. (a) A grid with an escape, shown by shaded paths. (b) A grid with no escape.

b. Describe an efficient algorithm to solve the escape problem, and analyze its running time.

26-2 Minimum path cover A path cover of a directed graph G D .V; E/ is a set P of vertex-disjoint paths such that every vertex in V is included in exactly one path in P . Paths may start and end anywhere, and they may be of any length, including 0. A minimum path cover of G is a path cover containing the fewest possible paths.

a. Give an efficient algorithm to find a minimum path cover of a directed acyclic graph G D .V; E/. (Hint: Assuming that V D f1; 2; : : : ; ng, construct the graph G0 D .V 0; E 0/, where V 0 D fx0; x1; : : : ; xng [ fy0; y1; : : : ; yng ; E 0 D f.x0; xi / W i 2 V g [ f.yi ; y0/ W i 2 V g [ f.xi ; yj / W .i; j / 2 Eg ; and run a maximum-flow algorithm.)

b. Does your algorithm work for directed graphs that contain cycles? Explain.

26-3 Algorithmic consulting Professor Gore wants to open up an algorithmic consulting company. He has iden- tified n important subareas of algorithms (roughly corresponding to different por- tions of this textbook), which he represents by the set A D fA1; A2; : : : ; Ang. In each subarea Ak, he can hire an expert in that area for ck dollars. The consulting company has lined up a set J D fJ1; J2; : : : ; Jmg of potential jobs. In order to perform job Ji , the company needs to have hired experts in a subset Ri � A of

762 Chapter 26 Maximum Flow

subareas. Each expert can work on multiple jobs simultaneously. If the company chooses to accept job Ji , it must have hired experts in all subareas in Ri , and it will take in revenue of pi dollars.

Professor Gore’s job is to determine which subareas to hire experts in and which jobs to accept in order to maximize the net revenue, which is the total income from jobs accepted minus the total cost of employing the experts.

Consider the following flow network G. It contains a source vertex s, vertices A1; A2; : : : ; An, vertices J1; J2; : : : ; Jm, and a sink vertex t . For k D 1; 2; : : : ; n, the flow network contains an edge .s; Ak/ with capacity c.s; Ak/ D ck, and for i D 1; 2; : : : ; m, the flow network contains an edge .Ji ; t/ with capacity c.Ji ; t/ D pi . For k D 1; 2; : : : ; n and i D 1; 2; : : : ; m, if Ak 2 Ri , then G contains an edge .Ak; Ji / with capacity c.Ak; Ji / D ∞.

a. Show that if Ji 2 T for a finite-capacity cut .S; T / of G, then Ak 2 T for each

Ak 2 Ri .

b. Show how to determine the maximum net revenue from the capacity of a mini- mum cut of G and the given pi values.

c. Give an efficient algorithm to determine which jobs to accept and which experts to hire. Analyze the running time of your algorithm in terms of m, n, and r DPmiD1 jRi j.

26-4 Updating maximum flow Let G D .V; E/ be a flow network with source s, sink t , and integer capacities. Suppose that we are given a maximum flow in G.

a. Suppose that we increase the capacity of a single edge .u; �/ 2 E by 1. Give an O.V CE/-time algorithm to update the maximum flow.

b. Suppose that we decrease the capacity of a single edge .u; �/ 2 E by 1. Give an O.V CE/-time algorithm to update the maximum flow.

26-5 Maximum flow by scaling Let G D .V; E/ be a flow network with source s, sink t , and an integer capac- ity c.u; �/ on each edge .u; �/ 2 E. Let C D max.u;�/2E c.u; �/. a. Argue that a minimum cut of G has capacity at most C jEj.

b. For a given number K, show how to find an augmenting path of capacity at least K in O.E/ time, if such a path exists.

Problems for Chapter 26 763

We can use the following modification of FORD-FULKERSON-METHOD to com- pute a maximum flow in G:

MAX-FLOW-BY-SCALING(G, s, t)
1  C = max_{(u,v)∈E} c(u, v)
2  initialize flow f to 0
3  K = 2^⌊lg C⌋
4  while K ≥ 1
5      while there exists an augmenting path p of capacity at least K
6          augment flow f along p
7      K = K / 2
8  return f
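The following Python sketch is one possible reading of this pseudocode, using a depth-first search that follows only residual edges of capacity at least K to implement line 5; the representation and names are our own, and capacities are assumed to be positive integers.

from collections import defaultdict

def max_flow_by_scaling(c, vertices, s, t):
    f = defaultdict(int)
    def cf(u, v):
        return c.get((u, v), 0) - f[(u, v)] + f[(v, u)]
    neighbors = defaultdict(set)
    for (u, v) in c:
        neighbors[u].add(v)
        neighbors[v].add(u)

    def find_path(K):
        # Depth-first search that uses only residual edges of capacity >= K.
        parent = {s: None}
        stack = [s]
        while stack:
            u = stack.pop()
            if u == t:
                path, v = [], t
                while parent[v] is not None:
                    path.append((parent[v], v))
                    v = parent[v]
                return path
            for v in neighbors[u]:
                if v not in parent and cf(u, v) >= K:
                    parent[v] = u
                    stack.append(v)
        return None

    C = max(c.values())
    K = 1 << (C.bit_length() - 1)            # K = 2^floor(lg C)
    while K >= 1:
        path = find_path(K)
        while path is not None:
            delta = min(cf(u, v) for (u, v) in path)
            for (u, v) in path:
                if (u, v) in c:
                    f[(u, v)] += delta
                else:
                    f[(v, u)] -= delta
            path = find_path(K)
        K //= 2
    flow_out = sum(f[(s, v)] for v in neighbors[s])
    flow_in = sum(f[(v, s)] for v in neighbors[s])
    return flow_out - flow_in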

c. Argue that MAX-FLOW-BY-SCALING returns a maximum flow.

d. Show that the capacity of a minimum cut of the residual network Gf is at most 2K jEj each time line 4 is executed.

e. Argue that the inner while loop of lines 5–6 executes O.E/ times for each value of K.

f. Conclude that MAX-FLOW-BY-SCALING can be implemented so that it runs in O.E2 lg C / time.

26-6 The Hopcroft-Karp bipartite matching algorithm In this problem, we describe a faster algorithm, due to Hopcroft and Karp, for finding a maximum matching in a bipartite graph. The algorithm runs in O(√V E) time. Given an undirected, bipartite graph G D .V; E/, where V D L [ R and all edges have exactly one endpoint in L, let M be a matching in G. We say that a simple path P in G is an augmenting path with respect to M if it starts at an unmatched vertex in L, ends at an unmatched vertex in R, and its edges belong alternately to M and E � M . (This definition of an augmenting path is related to, but different from, an augmenting path in a flow network.) In this problem, we treat a path as a sequence of edges, rather than as a sequence of vertices. A shortest augmenting path with respect to a matching M is an augmenting path with a minimum number of edges.

Given two sets A and B , the symmetric difference A˚B is defined as .A�B/[ .B � A/, that is, the elements that are in exactly one of the two sets.

764 Chapter 26 Maximum Flow

a. Show that if M is a matching and P is an augmenting path with respect to M , then the symmetric difference M ˚P is a matching and jM ˚ P j D jM j C 1. Show that if P1; P2; : : : ; Pk are vertex-disjoint augmenting paths with respect to M , then the symmetric difference M ˚ .P1 [ P2 [ � � � [ Pk/ is a matching with cardinality jM j C k.

The general structure of our algorithm is the following:

HOPCROFT-KARP(G)
1  M = ∅
2  repeat
3      let P = {P1, P2, ..., Pk} be a maximal set of vertex-disjoint
           shortest augmenting paths with respect to M
4      M = M ⊕ (P1 ∪ P2 ∪ ⋯ ∪ Pk)
5  until P == ∅
6  return M

The remainder of this problem asks you to analyze the number of iterations in the algorithm (that is, the number of iterations in the repeat loop) and to describe an implementation of line 3.

b. Given two matchings M and M � in G, show that every vertex in the graph G0 D .V; M ˚ M �/ has degree at most 2. Conclude that G0 is a disjoint union of simple paths or cycles. Argue that edges in each such simple path or cycle belong alternately to M or M �. Prove that if jM j � jM �j, then M ˚M � contains at least jM �j � jM j vertex-disjoint augmenting paths with respect to M .

Let l be the length of a shortest augmenting path with respect to a matching M , and let P1; P2; : : : ; Pk be a maximal set of vertex-disjoint augmenting paths of length l with respect to M . Let M 0 D M˚.P1[� � �[Pk/, and suppose that P is a shortest augmenting path with respect to M 0.

c. Show that if P is vertex-disjoint from P1; P2; : : : ; Pk, then P has more than l edges.

d. Now suppose that P is not vertex-disjoint from P1; P2; : : : ; Pk . Let A be the set of edges .M ˚M 0/˚ P . Show that A D .P1 [ P2 [ � � � [ Pk/˚ P and that jAj � .k C 1/l . Conclude that P has more than l edges.

e. Prove that if a shortest augmenting path with respect to M has l edges, the size of the maximum matching is at most jM j C jV j =.l C 1/.

Notes for Chapter 26 765

f. Show that the number of repeat loop iterations in the algorithm is at most 2√|V|. (Hint: By how much can M grow after iteration number √|V|?)

g. Give an algorithm that runs in O.E/ time to find a maximal set of vertex-disjoint shortest augmenting paths P1; P2; : : : ; Pk for a given matching M . Conclude that the total running time of HOPCROFT-KARP is O(√V E).

Chapter notes

Ahuja, Magnanti, and Orlin [7], Even [103], Lawler [224], Papadimitriou and Stei- glitz [271], and Tarjan [330] are good references for network flow and related algo- rithms. Goldberg, Tardos, and Tarjan [139] also provide a nice survey of algorithms for network-flow problems, and Schrijver [304] has written an interesting review of historical developments in the field of network flows.

The Ford-Fulkerson method is due to Ford and Fulkerson [109], who originated the formal study of many of the problems in the area of network flow, including the maximum-flow and bipartite-matching problems. Many early implementations of the Ford-Fulkerson method found augmenting paths using breadth-first search; Edmonds and Karp [102], and independently Dinic [89], proved that this strategy yields a polynomial-time algorithm. A related idea, that of using “blocking flows,” was also first developed by Dinic [89]. Karzanov [202] first developed the idea of preflows. The push-relabel method is due to Goldberg [136] and Goldberg and Tar- jan [140]. Goldberg and Tarjan gave an O.V 3/-time algorithm that uses a queue to maintain the set of overflowing vertices, as well as an algorithm that uses dynamic trees to achieve a running time of O.VE lg.V 2=EC2//. Several other researchers have developed push-relabel maximum-flow algorithms. Ahuja and Orlin [9] and Ahuja, Orlin, and Tarjan [10] gave algorithms that used scaling. Cheriyan and Maheshwari [62] proposed pushing flow from the overflowing vertex of maximum height. Cheriyan and Hagerup [61] suggested randomly permuting the neighbor lists, and several researchers [14, 204, 276] developed clever derandomizations of this idea, leading to a sequence of faster algorithms. The algorithm of King, Rao, and Tarjan [204] is the fastest such algorithm and runs in O.VE logE=.V lg V / V / time.

The asymptotically fastest algorithm to date for the maximum-flow problem, by Goldberg and Rao [138], runs in time O.min.V 2=3; E1=2/E lg.V 2=E C 2/ lg C /, where C D max.u;�/2E c.u; �/. This algorithm does not use the push-relabel method but instead is based on finding blocking flows. All previous maximum- flow algorithms, including the ones in this chapter, use some notion of distance (the push-relabel algorithms use the analogous notion of height), with a length of 1

766 Chapter 26 Maximum Flow

assigned implicitly to each edge. This new algorithm takes a different approach and assigns a length of 0 to high-capacity edges and a length of 1 to low-capacity edges. Informally, with respect to these lengths, shortest paths from the source to the sink tend to have high capacity, which means that fewer iterations need be performed.

In practice, push-relabel algorithms currently dominate augmenting-path or linear-programming based algorithms for the maximum-flow problem. A study by Cherkassky and Goldberg [63] underscores the importance of using two heuris- tics when implementing a push-relabel algorithm. The first heuristic is to peri- odically perform a breadth-first search of the residual network in order to obtain more accurate height values. The second heuristic is the gap heuristic, described in Exercise 26.5-5. Cherkassky and Goldberg conclude that the best choice of push- relabel variants is the one that chooses to discharge the overflowing vertex with the maximum height.

The best algorithm to date for maximum bipartite matching, discovered by Hopcroft and Karp [176], runs in O(√V E) time and is described in Problem 26-6.

The book by Lovász and Plummer [239] is an excellent reference on matching problems.

VII Selected Topics

Introduction

This part contains a selection of algorithmic topics that extend and complement earlier material in this book. Some chapters introduce new models of computation such as circuits or parallel computers. Others cover specialized domains such as computational geometry or number theory. The last two chapters discuss some of the known limitations to the design of efficient algorithms and introduce techniques for coping with those limitations.

Chapter 27 presents an algorithmic model for parallel computing based on dynamic multithreading. The chapter introduces the basics of the model, showing how to quantify parallelism in terms of the measures of work and span. It then investigates several interesting multithreaded algorithms, including algorithms for matrix multiplication and merge sorting.

Chapter 28 studies efficient algorithms for operating on matrices. It presents two general methods, LU decomposition and LUP decomposition, for solving linear equations by Gaussian elimination in O(n³) time. It also shows that matrix inversion and matrix multiplication can be performed equally fast. The chapter concludes by showing how to compute a least-squares approximate solution when a set of linear equations has no exact solution.

Chapter 29 studies linear programming, in which we wish to maximize or minimize an objective, given limited resources and competing constraints. Linear programming arises in a variety of practical application areas. This chapter covers how to formulate and solve linear programs. The solution method covered is the simplex algorithm, which is the oldest algorithm for linear programming. In contrast to many algorithms in this book, the simplex algorithm does not run in polynomial time in the worst case, but it is fairly efficient and widely used in practice.


Chapter 30 studies operations on polynomials and shows how to use a well-known signal-processing technique, the fast Fourier transform (FFT), to multiply two degree-n polynomials in O(n lg n) time. It also investigates efficient implementations of the FFT, including a parallel circuit.

Chapter 31 presents number-theoretic algorithms. After reviewing elementary number theory, it presents Euclid's algorithm for computing greatest common divisors. Next, it studies algorithms for solving modular linear equations and for raising one number to a power modulo another number. Then, it explores an important application of number-theoretic algorithms: the RSA public-key cryptosystem. This cryptosystem can be used not only to encrypt messages so that an adversary cannot read them, but also to provide digital signatures. The chapter then presents the Miller-Rabin randomized primality test, with which we can find large primes efficiently, an essential requirement for the RSA system. Finally, the chapter covers Pollard's "rho" heuristic for factoring integers and discusses the state of the art of integer factorization.

Chapter 32 studies the problem of finding all occurrences of a given pattern string in a given text string, a problem that arises frequently in text-editing programs. After examining the naive approach, the chapter presents an elegant approach due to Rabin and Karp. Then, after showing an efficient solution based on finite automata, the chapter presents the Knuth-Morris-Pratt algorithm, which modifies the automaton-based algorithm to save space by cleverly preprocessing the pattern.

Chapter 33 considers a few problems in computational geometry. After discussing basic primitives of computational geometry, the chapter shows how to use a "sweeping" method to efficiently determine whether a set of line segments contains any intersections. Two clever algorithms for finding the convex hull of a set of points, Graham's scan and Jarvis's march, also illustrate the power of sweeping methods. The chapter closes with an efficient algorithm for finding the closest pair from among a given set of points in the plane.

Chapter 34 concerns NP-complete problems. Many interesting computational problems are NP-complete, but no polynomial-time algorithm is known for solving any of them. This chapter presents techniques for determining when a problem is NP-complete. Several classic problems are proved to be NP-complete: determining whether a graph has a hamiltonian cycle, determining whether a boolean formula is satisfiable, and determining whether a given set of numbers has a subset that adds up to a given target value. The chapter also proves that the famous traveling-salesman problem is NP-complete.

Chapter 35 shows how to find approximate solutions to NP-complete problems efficiently by using approximation algorithms. For some NP-complete problems, approximate solutions that are near optimal are quite easy to produce, but for others even the best approximation algorithms known work progressively more poorly as the problem size increases. Then, there are some problems for which we can invest increasing amounts of computation time in return for increasingly better approximate solutions. This chapter illustrates these possibilities with the vertex-cover problem (unweighted and weighted versions), an optimization version of 3-CNF satisfiability, the traveling-salesman problem, the set-covering problem, and the subset-sum problem.

27 Multithreaded Algorithms

The vast majority of algorithms in this book are serial algorithms suitable for running on a uniprocessor computer in which only one instruction executes at a time. In this chapter, we shall extend our algorithmic model to encompass parallel algorithms, which can run on a multiprocessor computer that permits multiple instructions to execute concurrently. In particular, we shall explore the elegant model of dynamic multithreaded algorithms, which are amenable to algorithmic design and analysis, as well as to efficient implementation in practice.

Parallel computers, computers with multiple processing units, have become increasingly common, and they span a wide range of prices and performance. Relatively inexpensive desktop and laptop chip multiprocessors contain a single multicore integrated-circuit chip that houses multiple processing "cores," each of which is a full-fledged processor that can access a common memory. At an intermediate price/performance point are clusters built from individual computers, often simple PC-class machines, with a dedicated network interconnecting them. The highest-priced machines are supercomputers, which often use a combination of custom architectures and custom networks to deliver the highest performance in terms of instructions executed per second.

Multiprocessor computers have been around, in one form or another, for decades. Although the computing community settled on the random-access machine model for serial computing early on in the history of computer science, no single model for parallel computing has gained as wide acceptance. A major reason is that vendors have not agreed on a single architectural model for parallel computers. For example, some parallel computers feature shared memory, where each processor can directly access any location of memory. Other parallel computers employ distributed memory, where each processor's memory is private, and an explicit message must be sent between processors in order for one processor to access the memory of another. With the advent of multicore technology, however, every new laptop and desktop machine is now a shared-memory parallel computer, and the trend appears to be toward shared-memory multiprocessing. Although time will tell, that is the approach we shall take in this chapter.

One common means of programming chip multiprocessors and other shared-memory parallel computers is by using static threading, which provides a software abstraction of "virtual processors," or threads, sharing a common memory. Each thread maintains an associated program counter and can execute code independently of the other threads. The operating system loads a thread onto a processor for execution and switches it out when another thread needs to run. Although the operating system allows programmers to create and destroy threads, these operations are comparatively slow. Thus, for most applications, threads persist for the duration of a computation, which is why we call them "static."

Unfortunately, programming a shared-memory parallel computer directly using static threads is difficult and error-prone. One reason is that dynamically partitioning the work among the threads so that each thread receives approximately the same load turns out to be a complicated undertaking. For any but the simplest of applications, the programmer must use complex communication protocols to implement a scheduler to load-balance the work. This state of affairs has led toward the creation of concurrency platforms, which provide a layer of software that coordinates, schedules, and manages the parallel-computing resources. Some concurrency platforms are built as runtime libraries, but others provide full-fledged parallel languages with compiler and runtime support.

Dynamic multithreaded programming

One important class of concurrency platform is dynamic multithreading, which is the model we shall adopt in this chapter. Dynamic multithreading allows programmers to specify parallelism in applications without worrying about communication protocols, load balancing, and other vagaries of static-thread programming. The concurrency platform contains a scheduler, which load-balances the computation automatically, thereby greatly simplifying the programmer's chore. Although the functionality of dynamic-multithreading environments is still evolving, almost all support two features: nested parallelism and parallel loops. Nested parallelism allows a subroutine to be "spawned," allowing the caller to proceed while the spawned subroutine is computing its result. A parallel loop is like an ordinary for loop, except that the iterations of the loop can execute concurrently.

These two features form the basis of the model for dynamic multithreading that we shall study in this chapter. A key aspect of this model is that the programmer needs to specify only the logical parallelism within a computation, and the threads within the underlying concurrency platform schedule and load-balance the computation among themselves. We shall investigate multithreaded algorithms written for this model, as well as how the underlying concurrency platform can schedule computations efficiently.

Our model for dynamic multithreading offers several important advantages:

• It is a simple extension of our serial programming model. We can describe a multithreaded algorithm by adding to our pseudocode just three "concurrency" keywords: parallel, spawn, and sync. Moreover, if we delete these concurrency keywords from the multithreaded pseudocode, the resulting text is serial pseudocode for the same problem, which we call the "serialization" of the multithreaded algorithm.

• It provides a theoretically clean way to quantify parallelism based on the notions of "work" and "span."

• Many multithreaded algorithms involving nested parallelism follow naturally from the divide-and-conquer paradigm. Moreover, just as serial divide-and-conquer algorithms lend themselves to analysis by solving recurrences, so do multithreaded algorithms.

• The model is faithful to how parallel-computing practice is evolving. A growing number of concurrency platforms support one variant or another of dynamic multithreading, including Cilk [51, 118], Cilk++ [71], OpenMP [59], Task Parallel Library [230], and Threading Building Blocks [292].

Section 27.1 introduces the dynamic multithreading model and presents the metrics of work, span, and parallelism, which we shall use to analyze multithreaded algorithms. Section 27.2 investigates how to multiply matrices with multithreading, and Section 27.3 tackles the tougher problem of multithreading merge sort.

27.1 The basics of dynamic multithreading

We shall begin our exploration of dynamic multithreading using the example of computing Fibonacci numbers recursively. Recall that the Fibonacci numbers are defined by recurrence (3.22):

F_0 = 0 ,
F_1 = 1 ,
F_i = F_{i−1} + F_{i−2}   for i ≥ 2 .

Here is a simple, recursive, serial algorithm to compute the nth Fibonacci number:



Figure 27.1 The tree of recursive procedure instances when computing FIB(6). Each instance of FIB with the same argument does the same work to produce the same result, providing an inefficient but interesting way to compute Fibonacci numbers.

FIB(n)
1  if n ≤ 1
2      return n
3  else x = FIB(n − 1)
4      y = FIB(n − 2)
5      return x + y

You would not really want to compute large Fibonacci numbers this way, because this computation does much repeated work. Figure 27.1 shows the tree of recursive procedure instances that are created when computing F_6. For example, a call to FIB(6) recursively calls FIB(5) and then FIB(4). But, the call to FIB(5) also results in a call to FIB(4). Both instances of FIB(4) return the same result (F_4 = 3). Since the FIB procedure does not memoize, the second call to FIB(4) replicates the work that the first call performs.

Let T(n) denote the running time of FIB(n). Since FIB(n) contains two recursive calls plus a constant amount of extra work, we obtain the recurrence

T(n) = T(n − 1) + T(n − 2) + Θ(1) .

This recurrence has solution T(n) = Θ(F_n), which we can show using the substitution method. For an inductive hypothesis, assume that T(n) ≤ aF_n − b, where a > 1 and b > 0 are constants. Substituting, we obtain

T(n) ≤ (aF_{n−1} − b) + (aF_{n−2} − b) + Θ(1)
     = a(F_{n−1} + F_{n−2}) − 2b + Θ(1)
     = aF_n − b − (b − Θ(1))
     ≤ aF_n − b

if we choose b large enough to dominate the constant in the Θ(1). We can then choose a large enough to satisfy the initial condition. The analytical bound

T(n) = Θ(φ^n) ,   (27.1)

where φ = (1 + √5)/2 is the golden ratio, now follows from equation (3.25). Since F_n grows exponentially in n, this procedure is a particularly slow way to compute Fibonacci numbers. (See Problem 31-3 for much faster ways.)
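As a quick sanity check on this bound, here is a small Python sketch of my own (not part of the text) that counts the procedure calls the naive recursion makes; the call count itself grows like the Fibonacci numbers, matching T(n) = Θ(F_n):

def fib_calls(n):
    """Return (F_n, number of FIB calls made while computing it)."""
    if n <= 1:
        return n, 1
    x, cx = fib_calls(n - 1)
    y, cy = fib_calls(n - 2)
    return x + y, cx + cy + 1

for n in (10, 20, 30):
    value, calls = fib_calls(n)
    print(n, value, calls)   # calls = 2*F_{n+1} - 1, so the count grows as Theta(phi^n)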

Although the FIB procedure is a poor way to compute Fibonacci numbers, it makes a good example for illustrating key concepts in the analysis of multithreaded algorithms. Observe that within FIB(n), the two recursive calls in lines 3 and 4 to FIB(n − 1) and FIB(n − 2), respectively, are independent of each other: they could be called in either order, and the computation performed by one in no way affects the other. Therefore, the two recursive calls can run in parallel.

We augment our pseudocode to indicate parallelism by adding the concurrency keywords spawn and sync. Here is how we can rewrite the FIB procedure to use dynamic multithreading:

P-FIB(n)
1  if n ≤ 1
2      return n
3  else x = spawn P-FIB(n − 1)
4      y = P-FIB(n − 2)
5      sync
6      return x + y

Notice that if we delete the concurrency keywords spawn and sync from P-FIB, the resulting pseudocode text is identical to FIB (other than renaming the procedure in the header and in the two recursive calls). We define the serialization of a multithreaded algorithm to be the serial algorithm that results from deleting the multithreaded keywords: spawn, sync, and when we examine parallel loops, parallel. Indeed, our multithreaded pseudocode has the nice property that a serialization is always ordinary serial pseudocode to solve the same problem.

Nested parallelism occurs when the keyword spawn precedes a procedure call, as in line 3. The semantics of a spawn differs from an ordinary procedure call in that the procedure instance that executes the spawn (the parent) may continue to execute in parallel with the spawned subroutine (its child) instead of waiting for the child to complete, as would normally happen in a serial execution. In this case, while the spawned child is computing P-FIB(n − 1), the parent may go on to compute P-FIB(n − 2) in line 4 in parallel with the spawned child. Since the P-FIB procedure is recursive, these two subroutine calls themselves create nested parallelism, as do their children, thereby creating a potentially vast tree of subcomputations, all executing in parallel.

The keyword spawn does not say, however, that a procedure must execute concurrently with its spawned children, only that it may. The concurrency keywords express the logical parallelism of the computation, indicating which parts of the computation may proceed in parallel. At runtime, it is up to a scheduler to determine which subcomputations actually run concurrently by assigning them to available processors as the computation unfolds. We shall discuss the theory behind schedulers shortly.

A procedure cannot safely use the values returned by its spawned children until after it executes a sync statement, as in line 5. The keyword sync indicates that the procedure must wait as necessary for all its spawned children to complete before proceeding to the statement after the sync. In the P-FIB procedure, a sync is required before the return statement in line 6 to avoid the anomaly that would occur if x and y were summed before x was computed. In addition to explicit synchronization provided by the sync statement, every procedure executes a sync implicitly before it returns, thus ensuring that all its children terminate before it does.
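To make the spawn/sync semantics concrete, here is a minimal Python sketch of my own (using the standard threading module, not any concurrency platform named in the text): "spawn" becomes starting a thread, and "sync" becomes joining it before its result is used. Because of CPython's global interpreter lock, this illustrates the semantics rather than a real speedup.

import threading

def p_fib(n):
    if n <= 1:
        return n
    result = {}
    def child():
        result["x"] = p_fib(n - 1)   # the spawned child computes P-FIB(n-1)
    t = threading.Thread(target=child)
    t.start()                        # "spawn": the parent keeps running
    y = p_fib(n - 2)                 # parent works on P-FIB(n-2) meanwhile
    t.join()                         # "sync": wait for the spawned child
    return result["x"] + y

print(p_fib(10))   # prints 55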

A model for multithreaded execution

It helps to think of a multithreaded computation (the set of runtime instructions executed by a processor on behalf of a multithreaded program) as a directed acyclic graph G = (V, E), called a computation dag. As an example, Figure 27.2 shows the computation dag that results from computing P-FIB(4). Conceptually, the vertices in V are instructions, and the edges in E represent dependencies between instructions, where (u, v) ∈ E means that instruction u must execute before instruction v. For convenience, however, if a chain of instructions contains no parallel control (no spawn, sync, or return from a spawn, via either an explicit return statement or the return that happens implicitly upon reaching the end of a procedure), we may group them into a single strand, each of which represents one or more instructions. Instructions involving parallel control are not included in strands, but are represented in the structure of the dag. For example, if a strand has two successors, one of them must have been spawned, and a strand with multiple predecessors indicates the predecessors joined because of a sync statement. Thus, in the general case, the set V forms the set of strands, and the set E of directed edges represents dependencies between strands induced by parallel control.



Figure 27.2 A directed acyclic graph representing the computation of P-FIB(4). Each circle represents one strand, with black circles representing either base cases or the part of the procedure (instance) up to the spawn of P-FIB(n − 1) in line 3, shaded circles representing the part of the procedure that calls P-FIB(n − 2) in line 4 up to the sync in line 5, where it suspends until the spawn of P-FIB(n − 1) returns, and white circles representing the part of the procedure after the sync where it sums x and y up to the point where it returns the result. Each group of strands belonging to the same procedure is surrounded by a rounded rectangle, lightly shaded for spawned procedures and heavily shaded for called procedures. Spawn edges and call edges point downward, continuation edges point horizontally to the right, and return edges point upward. Assuming that each strand takes unit time, the work equals 17 time units, since there are 17 strands, and the span is 8 time units, since the critical path (shown with shaded edges) contains 8 strands.

If G has a directed path from strand u to strand v, we say that the two strands are (logically) in series. Otherwise, strands u and v are (logically) in parallel.

We can picture a multithreaded computation as a dag of strands embedded in a tree of procedure instances. For example, Figure 27.1 shows the tree of procedure instances for P-FIB(6) without the detailed structure showing strands. Figure 27.2 zooms in on a section of that tree, showing the strands that constitute each procedure. All directed edges connecting strands run either within a procedure or along undirected edges in the procedure tree.

We can classify the edges of a computation dag to indicate the kind of dependencies between the various strands. A continuation edge (u, u′), drawn horizontally in Figure 27.2, connects a strand u to its successor u′ within the same procedure instance. When a strand u spawns a strand v, the dag contains a spawn edge (u, v), which points downward in the figure. Call edges, representing normal procedure calls, also point downward. Strand u spawning strand v differs from u calling v in that a spawn induces a horizontal continuation edge from u to the strand u′ following u in its procedure, indicating that u′ is free to execute at the same time as v, whereas a call induces no such edge. When a strand u returns to its calling procedure and x is the strand immediately following the next sync in the calling procedure, the computation dag contains return edge (u, x), which points upward. A computation starts with a single initial strand (the black vertex in the procedure labeled P-FIB(4) in Figure 27.2) and ends with a single final strand (the white vertex in the procedure labeled P-FIB(4)).

We shall study the execution of multithreaded algorithms on an ideal parallel computer, which consists of a set of processors and a sequentially consistent shared memory. Sequential consistency means that the shared memory, which may in reality be performing many loads and stores from the processors at the same time, produces the same results as if at each step, exactly one instruction from one of the processors is executed. That is, the memory behaves as if the instructions were executed sequentially according to some global linear order that preserves the individual orders in which each processor issues its own instructions. For dynamic multithreaded computations, which are scheduled onto processors automatically by the concurrency platform, the shared memory behaves as if the multithreaded computation's instructions were interleaved to produce a linear order that preserves the partial order of the computation dag. Depending on scheduling, the ordering could differ from one run of the program to another, but the behavior of any execution can be understood by assuming that the instructions are executed in some linear order consistent with the computation dag.

In addition to making assumptions about semantics, the ideal-parallel-computer model makes some performance assumptions. Specifically, it assumes that each processor in the machine has equal computing power, and it ignores the cost of scheduling. Although this last assumption may sound optimistic, it turns out that for algorithms with sufficient “parallelism” (a term we shall define precisely in a moment), the overhead of scheduling is generally minimal in practice.

Performance measures

We can gauge the theoretical efficiency of a multithreaded algorithm by using two metrics: "work" and "span." The work of a multithreaded computation is the total time to execute the entire computation on one processor. In other words, the work is the sum of the times taken by each of the strands. For a computation dag in which each strand takes unit time, the work is just the number of vertices in the dag. The span is the longest time to execute the strands along any path in the dag. Again, for a dag in which each strand takes unit time, the span equals the number of vertices on a longest or critical path in the dag. (Recall from Section 24.2 that we can find a critical path in a dag G = (V, E) in Θ(V + E) time.) For example, the computation dag of Figure 27.2 has 17 vertices in all and 8 vertices on its critical path, so that if each strand takes unit time, its work is 17 time units and its span is 8 time units.

The actual running time of a multithreaded computation depends not only on its work and its span, but also on how many processors are available and how the scheduler allocates strands to processors. To denote the running time of a multithreaded computation on P processors, we shall subscript by P. For example, we might denote the running time of an algorithm on P processors by T_P. The work is the running time on a single processor, or T_1. The span is the running time if we could run each strand on its own processor (in other words, if we had an unlimited number of processors), and so we denote the span by T_∞.

The work and span provide lower bounds on the running time T_P of a multithreaded computation on P processors:

• In one step, an ideal parallel computer with P processors can do at most P units of work, and thus in T_P time, it can perform at most P·T_P work. Since the total work to do is T_1, we have P·T_P ≥ T_1. Dividing by P yields the work law:

  T_P ≥ T_1/P .   (27.2)

• A P-processor ideal parallel computer cannot run any faster than a machine with an unlimited number of processors. Looked at another way, a machine with an unlimited number of processors can emulate a P-processor machine by using just P of its processors. Thus, the span law follows:

  T_P ≥ T_∞ .   (27.3)

We define the speedup of a computation on P processors by the ratio T_1/T_P, which says how many times faster the computation is on P processors than on 1 processor. By the work law, we have T_P ≥ T_1/P, which implies that T_1/T_P ≤ P. Thus, the speedup on P processors can be at most P. When the speedup is linear in the number of processors, that is, when T_1/T_P = Θ(P), the computation exhibits linear speedup, and when T_1/T_P = P, we have perfect linear speedup.
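The following few lines of Python (my own helper, not the book's) simply apply the work law (27.2) and the span law (27.3) to the dag of Figure 27.2, whose work is 17 and span is 8, to see how the lower bound on T_P and the cap on speedup behave as P grows.

def bounds(work, span, procs):
    lower = max(work / procs, span)         # T_P >= max(T_1/P, T_inf) by (27.2) and (27.3)
    speedup_cap = min(procs, work / span)   # T_1/T_P <= P and T_1/T_P <= T_1/T_inf
    return lower, speedup_cap

for p in (1, 2, 4, 8, 16):
    print(p, bounds(17, 8, p))   # for larger P, the span of 8 dominates the lower bound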

The ratio T_1/T_∞ of the work to the span gives the parallelism of the multithreaded computation. We can view the parallelism from three perspectives. As a ratio, the parallelism denotes the average amount of work that can be performed in parallel for each step along the critical path. As an upper bound, the parallelism gives the maximum possible speedup that can be achieved on any number of processors. Finally, and perhaps most important, the parallelism provides a limit on the possibility of attaining perfect linear speedup. Specifically, once the number of processors exceeds the parallelism, the computation cannot possibly achieve perfect linear speedup. To see this last point, suppose that P > T_1/T_∞, in which case the span law implies that the speedup satisfies T_1/T_P ≤ T_1/T_∞ < P.

Thus, we obtain the contradiction that the P processors would perform more work than the computation requires, which allows us to conclude that the number of complete steps is at most ⌊T_1/P⌋.

Now, consider an incomplete step. Let G be the dag representing the entire computation, and without loss of generality, assume that each strand takes unit time. (We can replace each longer strand by a chain of unit-time strands.) Let G′ be the subgraph of G that has yet to be executed at the start of the incomplete step, and let G″ be the subgraph remaining to be executed after the incomplete step. A longest path in a dag must necessarily start at a vertex with in-degree 0. Since an incomplete step of a greedy scheduler executes all strands with in-degree 0 in G′, the length of a longest path in G″ must be 1 less than the length of a longest path in G′. In other words, an incomplete step decreases the span of the unexecuted dag by 1. Hence, the number of incomplete steps is at most T_∞.

Since each step is either complete or incomplete, the theorem follows.


The following corollary to Theorem 27.1 shows that a greedy scheduler always performs well.

Corollary 27.2
The running time T_P of any multithreaded computation scheduled by a greedy scheduler on an ideal parallel computer with P processors is within a factor of 2 of optimal.

Proof  Let T*_P be the running time produced by an optimal scheduler on a machine with P processors, and let T_1 and T_∞ be the work and span of the computation, respectively. Since the work and span laws, inequalities (27.2) and (27.3), give us T*_P ≥ max(T_1/P, T_∞), Theorem 27.1 implies that

T_P ≤ T_1/P + T_∞
    ≤ 2 · max(T_1/P, T_∞)
    ≤ 2 T*_P .

The next corollary shows that, in fact, a greedy scheduler achieves near-perfect linear speedup on any multithreaded computation as the slackness grows.

Corollary 27.3
Let T_P be the running time of a multithreaded computation produced by a greedy scheduler on an ideal parallel computer with P processors, and let T_1 and T_∞ be the work and span of the computation, respectively. Then, if P ≪ T_1/T_∞, we have T_P ≈ T_1/P, or equivalently, a speedup of approximately P.

Proof  If we suppose that P ≪ T_1/T_∞, then we also have T_∞ ≪ T_1/P, and hence Theorem 27.1 gives us T_P ≤ T_1/P + T_∞ ≈ T_1/P. Since the work law (27.2) dictates that T_P ≥ T_1/P, we conclude that T_P ≈ T_1/P, or equivalently, that the speedup is T_1/T_P ≈ P.

The ≪ symbol denotes "much less," but how much is "much less"? As a rule of thumb, a slackness of at least 10, that is, 10 times more parallelism than processors, generally suffices to achieve good speedup. Then, the span term in the greedy bound, inequality (27.4), is less than 10% of the work-per-processor term, which is good enough for most engineering situations. For example, if a computation runs on only 10 or 100 processors, it doesn't make sense to value parallelism of, say 1,000,000 over parallelism of 10,000, even with the factor of 100 difference. As Problem 27-2 shows, sometimes by reducing extreme parallelism, we can obtain algorithms that are better with respect to other concerns and which still scale up well on reasonable numbers of processors.


(a) Series composition:   work T_1(A ∪ B) = T_1(A) + T_1(B);   span T_∞(A ∪ B) = T_∞(A) + T_∞(B)
(b) Parallel composition:  work T_1(A ∪ B) = T_1(A) + T_1(B);   span T_∞(A ∪ B) = max(T_∞(A), T_∞(B))

Figure 27.3 The work and span of composed subcomputations. (a) When two subcomputations are joined in series, the work of the composition is the sum of their work, and the span of the composition is the sum of their spans. (b) When two subcomputations are joined in parallel, the work of the composition remains the sum of their work, but the span of the composition is only the maximum of their spans.

Analyzing multithreaded algorithms

We now have all the tools we need to analyze multithreaded algorithms and provide good bounds on their running times on various numbers of processors. Analyzing the work is relatively straightforward, since it amounts to nothing more than analyzing the running time of an ordinary serial algorithm (namely, the serialization of the multithreaded algorithm), which you should already be familiar with, since that is what most of this textbook is about! Analyzing the span is more interesting, but generally no harder once you get the hang of it. We shall investigate the basic ideas using the P-FIB program.

Analyzing the work T_1(n) of P-FIB(n) poses no hurdles, because we've already done it. The original FIB procedure is essentially the serialization of P-FIB, and hence T_1(n) = T(n) = Θ(φ^n) from equation (27.1).

Figure 27.3 illustrates how to analyze the span. If two subcomputations are joined in series, their spans add to form the span of their composition, whereas if they are joined in parallel, the span of their composition is the maximum of the spans of the two subcomputations. For P-FIB(n), the spawned call to P-FIB(n − 1) in line 3 runs in parallel with the call to P-FIB(n − 2) in line 4. Hence, we can express the span of P-FIB(n) as the recurrence

T_∞(n) = max(T_∞(n − 1), T_∞(n − 2)) + Θ(1)
       = T_∞(n − 1) + Θ(1) ,

which has solution T_∞(n) = Θ(n).

The parallelism of P-FIB(n) is T_1(n)/T_∞(n) = Θ(φ^n/n), which grows dramatically as n gets large. Thus, on even the largest parallel computers, a modest value for n suffices to achieve near perfect linear speedup for P-FIB(n), because this procedure exhibits considerable parallel slackness.
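A quick numeric illustration of my own of how fast Θ(φ^n/n) grows:

phi = (1 + 5 ** 0.5) / 2
for n in (20, 30, 40):
    print(n, round(phi ** n / n))   # parallelism estimate phi^n / n
# n = 20 already gives roughly 760; n = 40 gives several million, far beyond any realistic P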

Parallel loops

Many algorithms contain loops all of whose iterations can operate in parallel. As we shall see, we can parallelize such loops using the spawn and sync keywords, but it is much more convenient to specify directly that the iterations of such loops can run concurrently. Our pseudocode provides this functionality via the parallel concurrency keyword, which precedes the for keyword in a for loop statement.

As an example, consider the problem of multiplying an n × n matrix A = (a_ij) by an n-vector x = (x_j). The resulting n-vector y = (y_i) is given by the equation

y_i = Σ_{j=1}^{n} a_ij · x_j ,

for i = 1, 2, ..., n. We can perform matrix-vector multiplication by computing all the entries of y in parallel as follows:

MAT-VEC(A, x)
1  n = A.rows
2  let y be a new vector of length n
3  parallel for i = 1 to n
4      y_i = 0
5  parallel for i = 1 to n
6      for j = 1 to n
7          y_i = y_i + a_ij · x_j
8  return y

In this code, the parallel for keywords in lines 3 and 5 indicate that the iterations of the respective loops may be run concurrently. A compiler can implement each parallel for loop as a divide-and-conquer subroutine using nested parallelism. For example, the parallel for loop in lines 5–7 can be implemented with the call MAT-VEC-MAIN-LOOP(A, x, y, n, 1, n), where the compiler produces the auxiliary subroutine MAT-VEC-MAIN-LOOP as follows:



Figure 27.4 A dag representing the computation of MAT-VEC-MAIN-LOOP(A, x, y, 8, 1, 8). The two numbers within each rounded rectangle give the values of the last two parameters (i and i′ in the procedure header) in the invocation (spawn or call) of the procedure. The black circles represent strands corresponding to either the base case or the part of the procedure up to the spawn of MAT-VEC-MAIN-LOOP in line 5; the shaded circles represent strands corresponding to the part of the procedure that calls MAT-VEC-MAIN-LOOP in line 6 up to the sync in line 7, where it suspends until the spawned subroutine in line 5 returns; and the white circles represent strands corresponding to the (negligible) part of the procedure after the sync up to the point where it returns.

MAT-VEC-MAIN-LOOP(A, x, y, n, i, i′)
1  if i == i′
2      for j = 1 to n
3          y_i = y_i + a_ij · x_j
4  else mid = ⌊(i + i′)/2⌋
5      spawn MAT-VEC-MAIN-LOOP(A, x, y, n, i, mid)
6      MAT-VEC-MAIN-LOOP(A, x, y, n, mid + 1, i′)
7      sync

This code recursively spawns the first half of the iterations of the loop to execute in parallel with the second half of the iterations and then executes a sync, thereby creating a binary tree of execution where the leaves are individual loop iterations, as shown in Figure 27.4.
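For concreteness, here is a serial Python rendering of my own (a sketch, not the book's code) of the divide-and-conquer subroutine that a compiler could generate for the parallel for loop in lines 5–7 of MAT-VEC; the recursive call marked as the spawn is where the two halves of the iteration range would run concurrently.

def mat_vec_main_loop(A, x, y, n, i, ip):
    # handles iterations i..ip of the outer loop (0-based indices here)
    if i == ip:
        for j in range(n):
            y[i] += A[i][j] * x[j]
    else:
        mid = (i + ip) // 2
        mat_vec_main_loop(A, x, y, n, i, mid)       # would be spawned
        mat_vec_main_loop(A, x, y, n, mid + 1, ip)  # runs in the parent strand
        # an implicit sync would follow here

def mat_vec(A, x):
    n = len(A)
    y = [0] * n                                     # corresponds to lines 1-4 of MAT-VEC
    mat_vec_main_loop(A, x, y, n, 0, n - 1)
    return y

print(mat_vec([[1, 2], [3, 4]], [1, 1]))   # prints [3, 7]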

To calculate the work T_1(n) of MAT-VEC on an n × n matrix, we simply compute the running time of its serialization, which we obtain by replacing the parallel for loops with ordinary for loops. Thus, we have T_1(n) = Θ(n²), because the quadratic running time of the doubly nested loops in lines 5–7 dominates. This analysis seems to ignore the overhead for recursive spawning in implementing the parallel loops, however. In fact, the overhead of recursive spawning does increase the work of a parallel loop compared with that of its serialization, but not asymptotically. To see why, observe that since the tree of recursive procedure instances is a full binary tree, the number of internal nodes is 1 fewer than the number of leaves (see Exercise B.5-3). Each internal node performs constant work to divide the iteration range, and each leaf corresponds to an iteration of the loop, which takes at least constant time (Θ(n) time in this case). Thus, we can amortize the overhead of recursive spawning against the work of the iterations, contributing at most a constant factor to the overall work.

As a practical matter, dynamic-multithreading concurrency platforms sometimes coarsen the leaves of the recursion by executing several iterations in a single leaf, either automatically or under programmer control, thereby reducing the overhead of recursive spawning. This reduced overhead comes at the expense of also reducing the parallelism, however, but if the computation has sufficient parallel slackness, near-perfect linear speedup need not be sacrificed.

We must also account for the overhead of recursive spawning when analyzing the span of a parallel-loop construct. Since the depth of recursive calling is logarithmic in the number of iterations, for a parallel loop with n iterations in which the ith iteration has span iter_∞(i), the span is

T_∞(n) = Θ(lg n) + max_{1≤i≤n} iter_∞(i) .

For example, for MAT-VEC on an n × n matrix, the parallel initialization loop in lines 3–4 has span Θ(lg n), because the recursive spawning dominates the constant-time work of each iteration. The span of the doubly nested loops in lines 5–7 is Θ(n), because each iteration of the outer parallel for loop contains n iterations of the inner (serial) for loop. The span of the remaining code in the procedure is constant, and thus the span is dominated by the doubly nested loops, yielding an overall span of Θ(n) for the whole procedure. Since the work is Θ(n²), the parallelism is Θ(n²)/Θ(n) = Θ(n). (Exercise 27.1-6 asks you to provide an implementation with even more parallelism.)

Race conditions

A multithreaded algorithm is deterministic if it always does the same thing on the same input, no matter how the instructions are scheduled on the multicore computer. It is nondeterministic if its behavior might vary from run to run. Often, a multithreaded algorithm that is intended to be deterministic fails to be, because it contains a "determinacy race."

Race conditions are the bane of concurrency. Famous race bugs include the Therac-25 radiation therapy machine, which killed three people and injured several others, and the North American Blackout of 2003, which left over 50 million people without power. These pernicious bugs are notoriously hard to find. You can run tests in the lab for days without a failure only to discover that your software sporadically crashes in the field.

A determinacy race occurs when two logically parallel instructions access the same memory location and at least one of the instructions performs a write. The following procedure illustrates a race condition:

RACE-EXAMPLE()
1  x = 0
2  parallel for i = 1 to 2
3      x = x + 1
4  print x

After initializing x to 0 in line 1, RACE-EXAMPLE creates two parallel strands, each of which increments x in line 3. Although it might seem that RACE-EXAMPLE should always print the value 2 (its serialization certainly does), it could instead print the value 1. Let's see how this anomaly might occur.

When a processor increments x, the operation is not indivisible, but is composed of a sequence of instructions:

1. Read x from memory into one of the processor’s registers.

2. Increment the value in the register.

3. Write the value in the register back into x in memory.

Figure 27.5(a) illustrates a computation dag representing the execution of RACE-EXAMPLE, with the strands broken down to individual instructions. Recall that since an ideal parallel computer supports sequential consistency, we can view the parallel execution of a multithreaded algorithm as an interleaving of instructions that respects the dependencies in the dag. Part (b) of the figure shows the values in an execution of the computation that elicits the anomaly. The value x is stored in memory, and r1 and r2 are processor registers. In step 1, one of the processors sets x to 0. In steps 2 and 3, processor 1 reads x from memory into its register r1 and increments it, producing the value 1 in r1. At that point, processor 2 comes into the picture, executing instructions 4–6. Processor 2 reads x from memory into register r2; increments it, producing the value 1 in r2; and then stores this value into x, setting x to 1. Now, processor 1 resumes with step 7, storing the value 1 in r1 into x, which leaves the value of x unchanged. Therefore, step 8 prints the value 1, rather than 2, as the serialization would print.
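The same lost-update behavior is easy to reproduce in real code. The sketch below is my own (ordinary Python threads rather than the chapter's pseudocode); it splits the increment into the read, increment, and write steps listed above, and with two threads racing, the final count usually comes out below the expected total.

import threading

counter = 0

def increment_many(reps):
    global counter
    for _ in range(reps):
        tmp = counter    # read x into a "register"
        tmp += 1         # increment the register
        counter = tmp    # write back: may overwrite another thread's update

threads = [threading.Thread(target=increment_many, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # usually less than 200000 because some updates are lost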

We can see what has happened. If the effect of the parallel execution were that processor 1 executed all its instructions before processor 2, the value 2 would be


(a) Instructions, numbered by the step in which they execute: 1: x = 0; 2: r1 = x; 3: incr r1; 4: r2 = x; 5: incr r2; 6: x = r2; 7: x = r1; 8: print x.

(b) Values after each step:

step   1   2   3   4   5   6   7
x      0   0   0   0   0   1   1
r1     –   0   1   1   1   1   1
r2     –   –   –   0   1   1   1

Figure 27.5 Illustration of the determinacy race in RACE-EXAMPLE. (a) A computation dag showing the dependencies among individual instructions. The processor registers are r1 and r2. Instructions unrelated to the race, such as the implementation of loop control, are omitted. (b) An execution sequence that elicits the bug, showing the values of x in memory and registers r1 and r2 for each step in the execution sequence.

printed. Conversely, if the effect were that processor 2 executed all its instructions before processor 1, the value 2 would still be printed. When the instructions of the two processors execute at the same time, however, it is possible, as in this example execution, that one of the updates to x is lost.

Of course, many executions do not elicit the bug. For example, if the execution order were ⟨1, 2, 3, 7, 4, 5, 6, 8⟩ or ⟨1, 4, 5, 6, 2, 3, 7, 8⟩, we would get the correct result. That's the problem with determinacy races. Generally, most orderings produce correct results, such as any in which the instructions on the left execute before the instructions on the right, or vice versa. But some orderings generate improper results when the instructions interleave. Consequently, races can be extremely hard to test for. You can run tests for days and never see the bug, only to experience a catastrophic system crash in the field when the outcome is critical.

Although we can cope with races in a variety of ways, including using mutual-exclusion locks and other methods of synchronization, for our purposes, we shall simply ensure that strands that operate in parallel are independent: they have no determinacy races among them. Thus, in a parallel for construct, all the iterations should be independent. Between a spawn and the corresponding sync, the code of the spawned child should be independent of the code of the parent, including code executed by additional spawned or called children. Note that arguments to a spawned child are evaluated in the parent before the actual spawn occurs, and thus the evaluation of arguments to a spawned subroutine is in series with any accesses to those arguments after the spawn.


As an example of how easy it is to generate code with races, here is a faulty implementation of multithreaded matrix-vector multiplication that achieves a span of Θ(lg n) by parallelizing the inner for loop:

MAT-VEC-WRONG(A, x)
1  n = A.rows
2  let y be a new vector of length n
3  parallel for i = 1 to n
4      y_i = 0
5  parallel for i = 1 to n
6      parallel for j = 1 to n
7          y_i = y_i + a_ij · x_j
8  return y

This procedure is, unfortunately, incorrect due to races on updating y_i in line 7, which executes concurrently for all n values of j. Exercise 27.1-6 asks you to give a correct implementation with Θ(lg n) span.

A multithreaded algorithm with races can sometimes be correct. As an example, two parallel threads might store the same value into a shared variable, and it wouldn't matter which stored the value first. Generally, however, we shall consider code with races to be illegal.

A chess lesson

We close this section with a true story that occurred during the development of the world-class multithreaded chess-playing program ⋆Socrates [80], although the timings below have been simplified for exposition. The program was prototyped on a 32-processor computer but was ultimately to run on a supercomputer with 512 processors. At one point, the developers incorporated an optimization into the program that reduced its running time on an important benchmark on the 32-processor machine from T_32 = 65 seconds to T′_32 = 40 seconds. Yet, the developers used the work and span performance measures to conclude that the optimized version, which was faster on 32 processors, would actually be slower than the original version on 512 processors. As a result, they abandoned the "optimization."

Here is their analysis. The original version of the program had work T_1 = 2048 seconds and span T_∞ = 1 second. If we treat inequality (27.4) as an equation, T_P = T_1/P + T_∞, and use it as an approximation to the running time on P processors, we see that indeed T_32 = 2048/32 + 1 = 65. With the optimization, the work became T′_1 = 1024 seconds and the span became T′_∞ = 8 seconds. Again using our approximation, we get T′_32 = 1024/32 + 8 = 40.

The relative speeds of the two versions switch when we calculate the running times on 512 processors, however. In particular, we have T_512 = 2048/512 + 1 = 5 seconds, and T′_512 = 1024/512 + 8 = 10 seconds. The optimization that sped up the program on 32 processors would have made the program twice as slow on 512 processors! The optimized version's span of 8, which was not the dominant term in the running time on 32 processors, became the dominant term on 512 processors, nullifying the advantage from using more processors.
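The arithmetic is easy to reproduce; a few lines of Python (my own) evaluate the approximation T_P = T_1/P + T_∞ for both versions:

def approx_time(work, span, procs):
    return work / procs + span   # inequality (27.4) treated as an equation

for label, work, span in (("original", 2048, 1), ("optimized", 1024, 8)):
    print(label, approx_time(work, span, 32), approx_time(work, span, 512))
# original:  65.0 on 32 processors,  5.0 on 512
# optimized: 40.0 on 32 processors, 10.0 on 512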

The moral of the story is that work and span can provide a better means of extrapolating performance than can measured running times.

Exercises

27.1-1
Suppose that we spawn P-FIB(n − 2) in line 4 of P-FIB, rather than calling it as is done in the code. What is the impact on the asymptotic work, span, and parallelism?

27.1-2
Draw the computation dag that results from executing P-FIB(5). Assuming that each strand in the computation takes unit time, what are the work, span, and parallelism of the computation? Show how to schedule the dag on 3 processors using greedy scheduling by labeling each strand with the time step in which it is executed.

27.1-3
Prove that a greedy scheduler achieves the following time bound, which is slightly stronger than the bound proven in Theorem 27.1:

T_P ≤ (T_1 − T_∞)/P + T_∞ .   (27.5)

27.1-4
Construct a computation dag for which one execution of a greedy scheduler can take nearly twice the time of another execution of a greedy scheduler on the same number of processors. Describe how the two executions would proceed.

27.1-5
Professor Karan measures her deterministic multithreaded algorithm on 4, 10, and 64 processors of an ideal parallel computer using a greedy scheduler. She claims that the three runs yielded T_4 = 80 seconds, T_10 = 42 seconds, and T_64 = 10 seconds. Argue that the professor is either lying or incompetent. (Hint: Use the work law (27.2), the span law (27.3), and inequality (27.5) from Exercise 27.1-3.)


27.1-6
Give a multithreaded algorithm to multiply an n × n matrix by an n-vector that achieves Θ(n²/lg n) parallelism while maintaining Θ(n²) work.

27.1-7
Consider the following multithreaded pseudocode for transposing an n × n matrix A in place:

P-TRANSPOSE(A)
1  n = A.rows
2  parallel for j = 2 to n
3      parallel for i = 1 to j − 1
4          exchange a_ij with a_ji

Analyze the work, span, and parallelism of this algorithm.

27.1-8
Suppose that we replace the parallel for loop in line 3 of P-TRANSPOSE (see Exercise 27.1-7) with an ordinary for loop. Analyze the work, span, and parallelism of the resulting algorithm.

27.1-9
For how many processors do the two versions of the chess programs run equally fast, assuming that T_P = T_1/P + T_∞?

27.2 Multithreaded matrix multiplication

In this section, we examine how to multithread matrix multiplication, a problem whose serial running time we studied in Section 4.2. We’ll look at multithreaded algorithms based on the standard triply nested loop, as well as divide-and-conquer algorithms.

Multithreaded matrix multiplication

The first algorithm we study is the straightforward algorithm based on parallelizing the loops in the procedure SQUARE-MATRIX-MULTIPLY on page 75:


P-SQUARE-MATRIX-MULTIPLY(A, B)
1  n = A.rows
2  let C be a new n × n matrix
3  parallel for i = 1 to n
4      parallel for j = 1 to n
5          c_ij = 0
6          for k = 1 to n
7              c_ij = c_ij + a_ik · b_kj
8  return C

To analyze this algorithm, observe that since the serialization of the algorithm is just SQUARE-MATRIX-MULTIPLY, the work is therefore simply T_1(n) = Θ(n³), the same as the running time of SQUARE-MATRIX-MULTIPLY. The span is T_∞(n) = Θ(n), because it follows a path down the tree of recursion for the parallel for loop starting in line 3, then down the tree of recursion for the parallel for loop starting in line 4, and then executes all n iterations of the ordinary for loop starting in line 6, resulting in a total span of Θ(lg n) + Θ(lg n) + Θ(n) = Θ(n). Thus, the parallelism is Θ(n³)/Θ(n) = Θ(n²). Exercise 27.2-3 asks you to parallelize the inner loop to obtain a parallelism of Θ(n³/lg n), which you cannot do straightforwardly using parallel for, because you would create races.
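Here is the serialization of P-SQUARE-MATRIX-MULTIPLY as runnable Python (my own rendering, not the book's code); the comments mark which loops the pseudocode declares parallel and why the innermost loop must stay serial.

def p_square_matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):              # parallel for in the pseudocode
        for j in range(n):          # parallel for in the pseudocode
            for k in range(n):      # serial: each iteration updates the same c_ij
                C[i][j] += A[i][k] * B[k][j]
    return C

print(p_square_matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# prints [[19, 22], [43, 50]]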

A divide-and-conquer multithreaded algorithm for matrix multiplication

As we learned in Section 4.2, we can multiply n × n matrices serially in time Θ(n^{lg 7}) = O(n^{2.81}) using Strassen's divide-and-conquer strategy, which motivates us to look at multithreading such an algorithm. We begin, as we did in Section 4.2, with multithreading a simpler divide-and-conquer algorithm.

Recall from page 77 that the SQUARE-MATRIX-MULTIPLY-RECURSIVE procedure, which multiplies two n × n matrices A and B to produce the n × n matrix C, relies on partitioning each of the three matrices into four n/2 × n/2 submatrices:

A = [ A11  A12 ]     B = [ B11  B12 ]     C = [ C11  C12 ]
    [ A21  A22 ] ,       [ B21  B22 ] ,       [ C21  C22 ] .

Then, we can write the matrix product as

[ C11  C12 ]   [ A11  A12 ] [ B11  B12 ]
[ C21  C22 ] = [ A21  A22 ] [ B21  B22 ]

               [ A11·B11  A11·B12 ]   [ A12·B21  A12·B22 ]
             = [ A21·B11  A21·B12 ] + [ A22·B21  A22·B22 ] .   (27.6)

Thus, to multiply two n × n matrices, we perform eight multiplications of n/2 × n/2 matrices and one addition of n × n matrices. The following pseudocode implements this divide-and-conquer strategy using nested parallelism. Unlike the SQUARE-MATRIX-MULTIPLY-RECURSIVE procedure on which it is based, P-MATRIX-MULTIPLY-RECURSIVE takes the output matrix as a parameter to avoid allocating matrices unnecessarily.

P-MATRIX-MULTIPLY-RECURSIVE(C, A, B)
 1  n = A.rows
 2  if n == 1
 3      c_11 = a_11 · b_11
 4  else let T be a new n × n matrix
 5      partition A, B, C, and T into n/2 × n/2 submatrices
        A11, A12, A21, A22; B11, B12, B21, B22; C11, C12, C21, C22;
        and T11, T12, T21, T22; respectively
 6      spawn P-MATRIX-MULTIPLY-RECURSIVE(C11, A11, B11)
 7      spawn P-MATRIX-MULTIPLY-RECURSIVE(C12, A11, B12)
 8      spawn P-MATRIX-MULTIPLY-RECURSIVE(C21, A21, B11)
 9      spawn P-MATRIX-MULTIPLY-RECURSIVE(C22, A21, B12)
10      spawn P-MATRIX-MULTIPLY-RECURSIVE(T11, A12, B21)
11      spawn P-MATRIX-MULTIPLY-RECURSIVE(T12, A12, B22)
12      spawn P-MATRIX-MULTIPLY-RECURSIVE(T21, A22, B21)
13      P-MATRIX-MULTIPLY-RECURSIVE(T22, A22, B22)
14      sync
15      parallel for i = 1 to n
16          parallel for j = 1 to n
17              c_ij = c_ij + t_ij

Line 3 handles the base case, where we are multiplying 1 × 1 matrices. We handle the recursive case in lines 4–17. We allocate a temporary matrix T in line 4, and line 5 partitions each of the matrices A, B, C, and T into n/2 × n/2 submatrices. (As with SQUARE-MATRIX-MULTIPLY-RECURSIVE on page 77, we gloss over the minor issue of how to use index calculations to represent submatrix sections of a matrix.) The recursive call in line 6 sets the submatrix C11 to the submatrix product A11·B11, so that C11 equals the first of the two terms that form its sum in equation (27.6). Similarly, lines 7–9 set C12, C21, and C22 to the first of the two terms that equal their sums in equation (27.6). Line 10 sets the submatrix T11 to the submatrix product A12·B21, so that T11 equals the second of the two terms that form C11's sum. Lines 11–13 set T12, T21, and T22 to the second of the two terms that form the sums of C12, C21, and C22, respectively. The first seven recursive calls are spawned, and the last one runs in the main strand. The sync statement in line 14 ensures that all the submatrix products in lines 6–13 have been computed, after which we add the products from T into C using the doubly nested parallel for loops in lines 15–17.
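As a runnable reference point, the following Python sketch of my own (not the book's code) is essentially the serialization of P-MATRIX-MULTIPLY-RECURSIVE for n a power of 2, using index offsets instead of explicit submatrix objects. Because it runs serially, it can safely accumulate both products for each quadrant directly into C; the multithreaded version needs the temporary T and the sync before combining, since each quadrant of C is the target of two of the eight products.

def pmm_rec(C, A, B, n, ci, cj, ai, aj, bi, bj):
    # adds the product of the n x n blocks of A and B (at the given offsets) into C
    if n == 1:
        C[ci][cj] += A[ai][aj] * B[bi][bj]
        return
    h = n // 2
    # the eight block products of equation (27.6): C_{di,dj} += A_{di,dk} * B_{dk,dj}
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                pmm_rec(C, A, B, h,
                        ci + di * h, cj + dj * h,
                        ai + di * h, aj + dk * h,
                        bi + dk * h, bj + dj * h)

def matrix_multiply(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    pmm_rec(C, A, B, n, 0, 0, 0, 0, 0, 0)
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]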

We first analyze the work M_1(n) of the P-MATRIX-MULTIPLY-RECURSIVE procedure, echoing the serial running-time analysis of its progenitor SQUARE-MATRIX-MULTIPLY-RECURSIVE. In the recursive case, we partition in Θ(1) time, perform eight recursive multiplications of n/2 × n/2 matrices, and finish up with the Θ(n²) work from adding two n × n matrices. Thus, the recurrence for the work M_1(n) is

M_1(n) = 8·M_1(n/2) + Θ(n²)
       = Θ(n³)

by case 1 of the master theorem. In other words, the work of our multithreaded algorithm is asymptotically the same as the running time of the procedure SQUARE-MATRIX-MULTIPLY in Section 4.2, with its triply nested loops.

To determine the span M_∞(n) of P-MATRIX-MULTIPLY-RECURSIVE, we first observe that the span for partitioning is Θ(1), which is dominated by the Θ(lg n) span of the doubly nested parallel for loops in lines 15–17. Because the eight parallel recursive calls all execute on matrices of the same size, the maximum span for any recursive call is just the span of any one. Hence, the recurrence for the span M_∞(n) of P-MATRIX-MULTIPLY-RECURSIVE is

M_∞(n) = M_∞(n/2) + Θ(lg n) .   (27.7)

This recurrence does not fall under any of the cases of the master theorem, but it does meet the condition of Exercise 4.6-2. By Exercise 4.6-2, therefore, the solution to recurrence (27.7) is M_∞(n) = Θ(lg² n).

Now that we know the work and span of P-MATRIX-MULTIPLY-RECURSIVE, we can compute its parallelism as M_1(n)/M_∞(n) = Θ(n³/lg² n), which is very high.
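A quick numeric check of my own that the span recurrence M_∞(n) = M_∞(n/2) + Θ(lg n) grows like lg² n, as Exercise 4.6-2 promises: summing lg n + lg(n/2) + lg(n/4) + ... gives roughly (lg² n)/2.

from math import log2

def span(n):
    return 0 if n <= 1 else span(n // 2) + log2(n)

for k in (8, 16, 32):
    n = 2 ** k
    print(k, span(n), log2(n) ** 2)   # span(2^k) = k(k+1)/2, about half of lg^2 n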

Multithreading Strassen’s method

To multithread Strassen’s algorithm, we follow the same general outline as on page 79, only using nested parallelism:

1. Divide the input matrices A and B and output matrix C into n/2 × n/2 submatrices, as in equation (27.6). This step takes Θ(1) work and span by index calculation.

2. Create 10 matrices S_1, S_2, ..., S_10, each of which is n/2 × n/2 and is the sum or difference of two matrices created in step 1. We can create all 10 matrices with Θ(n²) work and Θ(lg n) span by using doubly nested parallel for loops.


3. Using the submatrices created in step 1 and the 10 matrices created in step 2, recursively spawn the computation of seven n/2 × n/2 matrix products P_1, P_2, ..., P_7.

4. Compute the desired submatrices C11, C12, C21, C22 of the result matrix C by adding and subtracting various combinations of the P_i matrices, once again using doubly nested parallel for loops. We can compute all four submatrices with Θ(n²) work and Θ(lg n) span.

To analyze this algorithm, we first observe that since the serialization is the same as the original serial algorithm, the work is just the running time of the serialization, namely, Θ(n^{lg 7}). As for P-MATRIX-MULTIPLY-RECURSIVE, we can devise a recurrence for the span. In this case, seven recursive calls execute in parallel, but since they all operate on matrices of the same size, we obtain the same recurrence (27.7) as we did for P-MATRIX-MULTIPLY-RECURSIVE, which has solution Θ(lg² n). Thus, the parallelism of multithreaded Strassen's method is Θ(n^{lg 7}/lg² n), which is high, though slightly less than the parallelism of P-MATRIX-MULTIPLY-RECURSIVE.

Exercises

27.2-1
Draw the computation dag for computing P-SQUARE-MATRIX-MULTIPLY on 2 × 2 matrices, labeling how the vertices in your diagram correspond to strands in the execution of the algorithm. Use the convention that spawn and call edges point downward, continuation edges point horizontally to the right, and return edges point upward. Assuming that each strand takes unit time, analyze the work, span, and parallelism of this computation.

27.2-2 Repeat Exercise 27.2-1 for P-MATRIX-MULTIPLY-RECURSIVE.

27.2-3 Give pseudocode for a multithreaded algorithm that multiplies two n n matrices with work ‚.n3/ but span only ‚.lg n/. Analyze your algorithm.

27.2-4 Give pseudocode for an efficient multithreaded algorithm that multiplies a p q matrix by a q r matrix. Your algorithm should be highly parallel even if any of p, q, and r are 1. Analyze your algorithm.

27.3 Multithreaded merge sort 797

27.2-5 Give pseudocode for an efficient multithreaded algorithm that transposes an n n matrix in place by using divide-and-conquer to divide the matrix recursively into four n=2 n=2 submatrices. Analyze your algorithm. 27.2-6 Give pseudocode for an efficient multithreaded implementation of the Floyd- Warshall algorithm (see Section 25.2), which computes shortest paths between all pairs of vertices in an edge-weighted graph. Analyze your algorithm.

27.3 Multithreaded merge sort

We first saw serial merge sort in Section 2.3.1, and in Section 2.3.2 we analyzed its running time and showed it to be Θ(n lg n). Because merge sort already uses the divide-and-conquer paradigm, it seems like a terrific candidate for multithreading using nested parallelism. We can easily modify the pseudocode so that the first recursive call is spawned:

MERGE-SORT′(A, p, r)
1  if p < r
2      q = ⌊(p + r)/2⌋
3      spawn MERGE-SORT′(A, p, q)
4      MERGE-SORT′(A, q + 1, r)
5      sync
6      MERGE(A, p, q, r)

P-SCAN-3(x)
1  n = x.length
2  let y[1..n] and t[1..n] be new arrays
3  y[1] = x[1]
4  if n > 1
5      P-SCAN-UP(x, t, 2, n)
6      P-SCAN-DOWN(x[1], x, t, y, 2, n)
7  return y

P-SCAN-UP(x, t, i, j)
1  if i == j
2      return x[i]
3  else
4      k = ⌊(i + j)/2⌋
5      t[k] = spawn P-SCAN-UP(x, t, i, k)
6      right = P-SCAN-UP(x, t, k + 1, j)
7      sync
8      return ____                            // fill in the blank

P-SCAN-DOWN(v, x, t, y, i, j)
1  if i == j
2      y[i] = v ⊗ x[i]
3  else
4      k = ⌊(i + j)/2⌋
5      spawn P-SCAN-DOWN(____, x, t, y, i, k)      // fill in the blank
6      P-SCAN-DOWN(____, x, t, y, k + 1, j)        // fill in the blank
7      sync

d. Fill in the three missing expressions in line 8 of P-SCAN-UP and lines 5 and 6 of P-SCAN-DOWN. Argue that with the expressions you supplied, P-SCAN-3 is correct. (Hint: Prove that the value v passed to P-SCAN-DOWN(v, x, t, y, i, j) satisfies v = x[1] ⊗ x[2] ⊗ ⋯ ⊗ x[i − 1].)

e. Analyze the work, span, and parallelism of P-SCAN-3.

27-5 Multithreading a simple stencil calculation Computational science is replete with algorithms that require the entries of an array to be filled in with values that depend on the values of certain already computed neighboring entries, along with other information that does not change over the course of the computation. The pattern of neighboring entries does not change during the computation and is called a stencil. For example, Section 15.4 presents


a stencil algorithm to compute a longest common subsequence, where the value in entry c[i, j] depends only on the values in c[i−1, j], c[i, j−1], and c[i−1, j−1], as well as the elements xi and yj within the two sequences given as inputs. The input sequences are fixed, but the algorithm fills in the two-dimensional array c so that it computes entry c[i, j] after computing all three entries c[i−1, j], c[i, j−1], and c[i−1, j−1].

In this problem, we examine how to use nested parallelism to multithread a simple stencil calculation on an n × n array A in which, of the values in A, the value placed into entry A[i, j] depends only on values in A[i′, j′], where i′ ≤ i and j′ ≤ j (and of course, i′ ≠ i or j′ ≠ j). In other words, the value in an entry depends only on values in entries that are above it and/or to its left, along with static information outside of the array. Furthermore, we assume throughout this problem that once we have filled in the entries upon which A[i, j] depends, we can fill in A[i, j] in Θ(1) time (as in the LCS-LENGTH procedure of Section 15.4).

We can partition the n × n array A into four n/2 × n/2 subarrays as follows:

    A = [ A11  A12 ]                                              (27.11)
        [ A21  A22 ] .

Observe now that we can fill in subarray A11 recursively, since it does not depend on the entries of the other three subarrays. Once A11 is complete, we can continue to fill in A12 and A21 recursively in parallel, because although they both depend on A11, they do not depend on each other. Finally, we can fill in A22 recursively.

a. Give multithreaded pseudocode that performs this simple stencil calculation using a divide-and-conquer algorithm SIMPLE-STENCIL based on the decomposition (27.11) and the discussion above. (Don't worry about the details of the base case, which depends on the specific stencil.) Give and solve recurrences for the work and span of this algorithm in terms of n. What is the parallelism?

b. Modify your solution to part (a) to divide an n × n array into nine n/3 × n/3 subarrays, again recursing with as much parallelism as possible. Analyze this algorithm. How much more or less parallelism does this algorithm have compared with the algorithm from part (a)?

c. Generalize your solutions to parts (a) and (b) as follows. Choose an integer b ≥ 2. Divide an n × n array into b² subarrays, each of size n/b × n/b, recursing with as much parallelism as possible. In terms of n and b, what are the work, span, and parallelism of your algorithm? Argue that, using this approach, the parallelism must be o(n) for any choice of b ≥ 2. (Hint: For this last argument, show that the exponent of n in the parallelism is strictly less than 1 for any choice of b ≥ 2.)


d. Give pseudocode for a multithreaded algorithm for this simple stencil calculation that achieves Θ(n/lg n) parallelism. Argue using notions of work and span that the problem, in fact, has Θ(n) inherent parallelism. As it turns out, the divide-and-conquer nature of our multithreaded pseudocode does not let us achieve this maximal parallelism.

27-6 Randomized multithreaded algorithms Just as with ordinary serial algorithms, we sometimes want to implement random- ized multithreaded algorithms. This problem explores how to adapt the various performance measures in order to handle the expected behavior of such algorithms. It also asks you to design and analyze a multithreaded algorithm for randomized quicksort.

a. Explain how to modify the work law (27.2), span law (27.3), and greedy scheduler bound (27.4) to work with expectations when TP, T1, and T∞ are all random variables.

b. Consider a randomized multithreaded algorithm for which 1% of the time we have T1 = 10⁴ and T10,000 = 1, but for 99% of the time we have T1 = T10,000 = 10⁹. Argue that the speedup of a randomized multithreaded algorithm should be defined as E[T1]/E[TP], rather than E[T1/TP].

c. Argue that the parallelism of a randomized multithreaded algorithm should be defined as the ratio E[T1]/E[T∞].

d. Multithread the RANDOMIZED-QUICKSORT algorithm on page 179 by using nested parallelism. (Do not parallelize RANDOMIZED-PARTITION.) Give the pseudocode for your P-RANDOMIZED-QUICKSORT algorithm.

e. Analyze your multithreaded algorithm for randomized quicksort. (Hint: Re- view the analysis of RANDOMIZED-SELECT on page 216.)

Chapter notes

Parallel computers, models for parallel computers, and algorithmic models for parallel programming have been around in various forms for years. Prior editions of this book included material on sorting networks and the PRAM (Parallel Random-Access Machine) model. The data-parallel model [48, 168] is another popular algorithmic programming model, which features operations on vectors and matrices as primitives.

Graham [149] and Brent [55] showed that there exist schedulers achieving the bound of Theorem 27.1. Eager, Zahorjan, and Lazowska [98] showed that any greedy scheduler achieves this bound and proposed the methodology of using work and span (although not by those names) to analyze parallel algorithms. Blelloch [47] developed an algorithmic programming model based on work and span (which he called the "depth" of the computation) for data-parallel programming. Blumofe and Leiserson [52] gave a distributed scheduling algorithm for dynamic multithreading based on randomized "work-stealing" and showed that it achieves the bound E[TP] ≤ T1/P + O(T∞). Arora, Blumofe, and Plaxton [19] and Blelloch, Gibbons, and Matias [49] also provided provably good algorithms for scheduling dynamic multithreaded computations.

The multithreaded pseudocode and programming model were heavily influenced by the Cilk [51, 118] project at MIT and the Cilk++ [71] extensions to C++ distributed by Cilk Arts, Inc. Many of the multithreaded algorithms in this chapter appeared in unpublished lecture notes by C. E. Leiserson and H. Prokop and have been implemented in Cilk or Cilk++. The multithreaded merge-sorting algorithm was inspired by an algorithm of Akl [12].

The notion of sequential consistency is due to Lamport [223].

28 Matrix Operations

Because operations on matrices lie at the heart of scientific computing, efficient al- gorithms for working with matrices have many practical applications. This chapter focuses on how to multiply matrices and solve sets of simultaneous linear equa- tions. Appendix D reviews the basics of matrices.

Section 28.1 shows how to solve a set of linear equations using LUP decomposi- tions. Then, Section 28.2 explores the close relationship between multiplying and inverting matrices. Finally, Section 28.3 discusses the important class of symmetric positive-definite matrices and shows how we can use them to find a least-squares solution to an overdetermined set of linear equations.

One important issue that arises in practice is numerical stability. Due to the limited precision of floating-point representations in actual computers, round-off errors in numerical computations may become amplified over the course of a com- putation, leading to incorrect results; we call such computations numerically un- stable. Although we shall briefly consider numerical stability on occasion, we do not focus on it in this chapter. We refer you to the excellent book by Golub and Van Loan [144] for a thorough discussion of stability issues.

28.1 Solving systems of linear equations

Numerous applications need to solve sets of simultaneous linear equations. We can formulate a linear system as a matrix equation in which each matrix or vector element belongs to a field, typically the real numbers R. This section discusses how to solve a system of linear equations using a method called LUP decomposition.

We start with a set of linear equations in n unknowns x1, x2, ..., xn:

    a11 x1 + a12 x2 + ⋯ + a1n xn = b1 ,
    a21 x1 + a22 x2 + ⋯ + a2n xn = b2 ,
                ⋮                                                 (28.1)
    an1 x1 + an2 x2 + ⋯ + ann xn = bn .

A solution to the equations (28.1) is a set of values for x1, x2, ..., xn that satisfy all of the equations simultaneously. In this section, we treat only the case in which there are exactly n equations in n unknowns.

We can conveniently rewrite equations (28.1) as the matrix-vector equation

    [ a11  a12  ⋯  a1n ] [ x1 ]   [ b1 ]
    [ a21  a22  ⋯  a2n ] [ x2 ] = [ b2 ]
    [  ⋮    ⋮        ⋮  ] [  ⋮ ]   [  ⋮ ]
    [ an1  an2  ⋯  ann ] [ xn ]   [ bn ]

or, equivalently, letting A = (aij), x = (xi), and b = (bi), as

    Ax = b .                                                      (28.2)

If A is nonsingular, it possesses an inverse A⁻¹, and

    x = A⁻¹b                                                      (28.3)

is the solution vector. We can prove that x is the unique solution to equation (28.2) as follows. If there are two solutions, x and x′, then Ax = Ax′ = b and, letting I denote an identity matrix,

    x = Ix = (A⁻¹A)x = A⁻¹(Ax) = A⁻¹(Ax′) = (A⁻¹A)x′ = x′ .

In this section, we shall be concerned predominantly with the case in which A is nonsingular or, equivalently (by Theorem D.1), the rank of A is equal to the number n of unknowns. There are other possibilities, however, which merit a brief discussion. If the number of equations is less than the number n of unknowns—or, more generally, if the rank of A is less than n—then the system is underdeter- mined. An underdetermined system typically has infinitely many solutions, al- though it may have no solutions at all if the equations are inconsistent. If the number of equations exceeds the number n of unknowns, the system is overdeter- mined, and there may not exist any solutions. Section 28.3 addresses the important


problem of finding good approximate solutions to overdetermined systems of linear equations.

Let us return to our problem of solving the system Ax = b of n equations in n unknowns. We could compute A⁻¹ and then, using equation (28.3), multiply b by A⁻¹, yielding x = A⁻¹b. This approach suffers in practice from numerical instability. Fortunately, another approach—LUP decomposition—is numerically stable and has the further advantage of being faster in practice.

Overview of LUP decomposition

The idea behind LUP decomposition is to find three n × n matrices L, U, and P such that

    PA = LU ,                                                     (28.4)

where

•  L is a unit lower-triangular matrix,

•  U is an upper-triangular matrix, and

•  P is a permutation matrix.

We call matrices L, U , and P satisfying equation (28.4) an LUP decomposition of the matrix A. We shall show that every nonsingular matrix A possesses such a decomposition.

Computing an LUP decomposition for the matrix A has the advantage that we can more easily solve linear systems when they are triangular, as is the case for both matrices L and U. Once we have found an LUP decomposition for A, we can solve equation (28.2), Ax = b, by solving only triangular linear systems, as follows. Multiplying both sides of Ax = b by P yields the equivalent equation PAx = Pb, which, by Exercise D.1-4, amounts to permuting the equations (28.1). Using our decomposition (28.4), we obtain

    LUx = Pb .

We can now solve this equation by solving two triangular linear systems. Let us define y = Ux, where x is the desired solution vector. First, we solve the lower-triangular system

    Ly = Pb                                                       (28.5)

for the unknown vector y by a method called "forward substitution." Having solved for y, we then solve the upper-triangular system

    Ux = y                                                        (28.6)

for the unknown x by a method called "back substitution." Because the permutation matrix P is invertible (Exercise D.2-3), multiplying both sides of equation (28.4) by P⁻¹ gives P⁻¹PA = P⁻¹LU, so that

    A = P⁻¹LU .                                                   (28.7)

Hence, the vector x is our solution to Ax = b:

    Ax = P⁻¹LUx    (by equation (28.7))
       = P⁻¹Ly     (by equation (28.6))
       = P⁻¹Pb     (by equation (28.5))
       = b .

Our next step is to show how forward and back substitution work and then attack the problem of computing the LUP decomposition itself.

Forward and back substitution

Forward substitution can solve the lower-triangular system (28.5) in Θ(n²) time, given L, P, and b. For convenience, we represent the permutation P compactly by an array π[1..n]. For i = 1, 2, ..., n, the entry π[i] indicates that Pi,π[i] = 1 and Pij = 0 for j ≠ π[i]. Thus, PA has aπ[i],j in row i and column j, and Pb has bπ[i] as its ith element. Since L is unit lower-triangular, we can rewrite equation (28.5) as

    y1                                 = bπ[1] ,
    l21 y1 + y2                        = bπ[2] ,
    l31 y1 + l32 y2 + y3               = bπ[3] ,
        ⋮
    ln1 y1 + ln2 y2 + ln3 y3 + ⋯ + yn  = bπ[n] .

The first equation tells us that y1 = bπ[1]. Knowing the value of y1, we can substitute it into the second equation, yielding

    y2 = bπ[2] − l21 y1 .

Now, we can substitute both y1 and y2 into the third equation, obtaining

    y3 = bπ[3] − (l31 y1 + l32 y2) .

In general, we substitute y1, y2, ..., yi−1 "forward" into the ith equation to solve for yi:

    yi = bπ[i] − Σ_{j=1}^{i−1} lij yj .

Having solved for y, we solve for x in equation (28.6) using back substitution, which is similar to forward substitution. Here, we solve the nth equation first and work backward to the first equation. Like forward substitution, this process runs in Θ(n²) time. Since U is upper-triangular, we can rewrite the system (28.6) as

    u11 x1 + u12 x2 + ⋯ + u1,n−2 xn−2 + u1,n−1 xn−1 + u1n xn  = y1 ,
             u22 x2 + ⋯ + u2,n−2 xn−2 + u2,n−1 xn−1 + u2n xn  = y2 ,
                 ⋮
                       un−2,n−2 xn−2 + un−2,n−1 xn−1 + un−2,n xn  = yn−2 ,
                                       un−1,n−1 xn−1 + un−1,n xn  = yn−1 ,
                                                         un,n xn  = yn .

Thus, we can solve for xn, xn−1, ..., x1 successively as follows:

    xn   = yn / un,n ,
    xn−1 = (yn−1 − un−1,n xn) / un−1,n−1 ,
    xn−2 = (yn−2 − (un−2,n−1 xn−1 + un−2,n xn)) / un−2,n−2 ,
        ⋮

or, in general,

    xi = ( yi − Σ_{j=i+1}^{n} uij xj ) / uii .

Given P, L, U, and b, the procedure LUP-SOLVE solves for x by combining forward and back substitution. The pseudocode assumes that the dimension n appears in the attribute L.rows and that the permutation matrix P is represented by the array π.

LUP-SOLVE(L, U, π, b)
1  n = L.rows
2  let x be a new vector of length n
3  for i = 1 to n
4      yi = bπ[i] − Σ_{j=1}^{i−1} lij yj
5  for i = n downto 1
6      xi = ( yi − Σ_{j=i+1}^{n} uij xj ) / uii
7  return x


Procedure LUP-SOLVE solves for y using forward substitution in lines 3–4, and then it solves for x using backward substitution in lines 5–6. Since the summation within each of the for loops includes an implicit loop, the running time is Θ(n²).
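For concreteness, here is a direct NumPy transcription of LUP-SOLVE (a sketch; the function name lup_solve and the 0-based indexing are our own choices, whereas the pseudocode above is 1-based).

import numpy as np

def lup_solve(L, U, pi, b):
    """Solve Ax = b given an LUP decomposition PA = LU.

    L  -- unit lower-triangular matrix (n x n)
    U  -- upper-triangular matrix (n x n)
    pi -- permutation as an array of row indices: row i of P has its 1
          in column pi[i] (0-based here)
    b  -- right-hand side vector
    """
    n = L.shape[0]
    y = np.zeros(n)
    x = np.zeros(n)
    # Forward substitution: solve Ly = Pb.
    for i in range(n):
        y[i] = b[pi[i]] - L[i, :i] @ y[:i]
    # Back substitution: solve Ux = y.
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x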

As an example of these methods, consider the system of linear equations defined by

    [ 1  2  0 ]       [ 3 ]
    [ 3  4  4 ] x  =  [ 7 ] ,
    [ 5  6  3 ]       [ 8 ]

where

    A = [ 1  2  0 ]          b = [ 3 ]
        [ 3  4  4 ] ,            [ 7 ] ,
        [ 5  6  3 ]              [ 8 ]

and we wish to solve for the unknown x. The LUP decomposition is

    L = [ 1    0    0 ]      U = [ 5  6    3   ]      P = [ 0  0  1 ]
        [ 0.2  1    0 ] ,        [ 0  0.8  -0.6 ] ,       [ 1  0  0 ] .
        [ 0.6  0.5  1 ]          [ 0  0    2.5  ]         [ 0  1  0 ]

(You might want to verify that PA = LU.) Using forward substitution, we solve Ly = Pb for y:

    [ 1    0    0 ] [ y1 ]   [ 8 ]
    [ 0.2  1    0 ] [ y2 ] = [ 3 ] ,
    [ 0.6  0.5  1 ] [ y3 ]   [ 7 ]

obtaining

    y = [ 8   ]
        [ 1.4 ]
        [ 1.5 ]

by computing first y1, then y2, and finally y3. Using back substitution, we solve Ux = y for x:

    [ 5  6    3    ] [ x1 ]   [ 8   ]
    [ 0  0.8  -0.6 ] [ x2 ] = [ 1.4 ] ,
    [ 0  0    2.5  ] [ x3 ]   [ 1.5 ]

thereby obtaining the desired answer

    x = [ -1.4 ]
        [  2.2 ]
        [  0.6 ]

by computing first x3, then x2, and finally x1.
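The following short NumPy check (ours, not part of the text) verifies this example end to end: it confirms that PA = LU for the matrices above and that solving the two triangular systems recovers x = (−1.4, 2.2, 0.6).

import numpy as np

# The matrices from the worked example.
A = np.array([[1., 2., 0.], [3., 4., 4.], [5., 6., 3.]])
b = np.array([3., 7., 8.])
L = np.array([[1., 0., 0.], [0.2, 1., 0.], [0.6, 0.5, 1.]])
U = np.array([[5., 6., 3.], [0., 0.8, -0.6], [0., 0., 2.5]])
P = np.array([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]])

assert np.allclose(P @ A, L @ U)          # PA = LU holds

y = np.linalg.solve(L, P @ b)             # forward substitution: Ly = Pb
x = np.linalg.solve(U, y)                 # back substitution:    Ux = y
print(y)                                  # [8.  1.4 1.5]
print(x)                                  # [-1.4  2.2  0.6]
assert np.allclose(A @ x, b)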

Computing an LU decomposition

We have now shown that if we can create an LUP decomposition for a nonsingular matrix A, then forward and back substitution can solve the system Ax = b of linear equations. Now we show how to efficiently compute an LUP decomposition for A. We start with the case in which A is an n × n nonsingular matrix and P is absent (or, equivalently, P = In). In this case, we factor A = LU. We call the two matrices L and U an LU decomposition of A.

We use a process known as Gaussian elimination to create an LU decomposition. We start by subtracting multiples of the first equation from the other equations in order to remove the first variable from those equations. Then, we subtract multiples of the second equation from the third and subsequent equations so that now the first and second variables are removed from them. We continue this process until the system that remains has an upper-triangular form—in fact, it is the matrix U. The matrix L is made up of the row multipliers that cause variables to be eliminated.

Our algorithm to implement this strategy is recursive. We wish to construct an LU decomposition for an n × n nonsingular matrix A. If n = 1, then we are done, since we can choose L = I1 and U = A. For n > 1, we break A into four parts:

    A = [ a11  a12  ⋯  a1n ]   [ a11  wᵀ ]
        [ a21  a22  ⋯  a2n ] = [  v   A′ ] ,
        [  ⋮    ⋮        ⋮  ]
        [ an1  an2  ⋯  ann ]

where v is a column (n−1)-vector, wᵀ is a row (n−1)-vector, and A′ is an (n−1) × (n−1) matrix. Then, using matrix algebra (verify the equations by simply multiplying through), we can factor A as

    A = [ a11  wᵀ ] = [ 1       0    ] [ a11  wᵀ            ]
        [  v   A′ ]   [ v/a11   In−1 ] [  0   A′ − v wᵀ/a11 ] .    (28.8)

The 0s in the first and second matrices of equation (28.8) are row and column (n−1)-vectors, respectively. The term v wᵀ/a11, formed by taking the outer product of v and w and dividing each element of the result by a11, is an (n−1) × (n−1) matrix, which conforms in size to the matrix A′ from which it is subtracted. The resulting (n−1) × (n−1) matrix

    A′ − v wᵀ/a11                                                 (28.9)

is called the Schur complement of A with respect to a11.

We claim that if A is nonsingular, then the Schur complement is nonsingular, too. Why? Suppose that the Schur complement, which is (n−1) × (n−1), is singular. Then by Theorem D.1, it has row rank strictly less than n − 1. Because the bottom n − 1 entries in the first column of the matrix

    [ a11  wᵀ            ]
    [  0   A′ − v wᵀ/a11 ]

are all 0, the bottom n − 1 rows of this matrix must have row rank strictly less than n − 1. The row rank of the entire matrix, therefore, is strictly less than n. Applying Exercise D.2-8 to equation (28.8), A has rank strictly less than n, and from Theorem D.1 we derive the contradiction that A is singular.

Because the Schur complement is nonsingular, we can now recursively find an LU decomposition for it. Let us say that

    A′ − v wᵀ/a11 = L′U′ ,

where L′ is unit lower-triangular and U′ is upper-triangular. Then, using matrix algebra, we have

    A = [ 1       0    ] [ a11  wᵀ            ]
        [ v/a11   In−1 ] [  0   A′ − v wᵀ/a11 ]

      = [ 1       0    ] [ a11  wᵀ   ]
        [ v/a11   In−1 ] [  0   L′U′ ]

      = [ 1       0  ] [ a11  wᵀ ]
        [ v/a11   L′ ] [  0   U′ ]

      = LU ,

thereby providing our LU decomposition. (Note that because L′ is unit lower-triangular, so is L, and because U′ is upper-triangular, so is U.)


Of course, if a11 = 0, this method doesn't work, because it divides by 0. It also doesn't work if the upper leftmost entry of the Schur complement A′ − v wᵀ/a11 is 0, since we divide by it in the next step of the recursion. The elements by which we divide during LU decomposition are called pivots, and they occupy the diagonal elements of the matrix U. The reason we include a permutation matrix P during LUP decomposition is that it allows us to avoid dividing by 0. When we use permutations to avoid division by 0 (or by small numbers, which would contribute to numerical instability), we are pivoting.

An important class of matrices for which LU decomposition always works cor- rectly is the class of symmetric positive-definite matrices. Such matrices require no pivoting, and thus we can employ the recursive strategy outlined above with- out fear of dividing by 0. We shall prove this result, as well as several others, in Section 28.3.

Our code for LU decomposition of a matrix A follows the recursive strategy, except that an iteration loop replaces the recursion. (This transformation is a standard optimization for a "tail-recursive" procedure—one whose last operation is a recursive call to itself. See Problem 7-4.) It assumes that the attribute A.rows gives the dimension of A. We initialize the matrix U with 0s below the diagonal and matrix L with 1s on its diagonal and 0s above the diagonal.

LU-DECOMPOSITION(A)
1   n = A.rows
2   let L and U be new n × n matrices
3   initialize U with 0s below the diagonal
4   initialize L with 1s on the diagonal and 0s above the diagonal
5   for k = 1 to n
6       ukk = akk
7       for i = k + 1 to n
8           lik = aik / ukk        // lik holds vi
9           uki = aki              // uki holds wiᵀ
10      for i = k + 1 to n
11          for j = k + 1 to n
12              aij = aij − lik ukj
13  return L and U

The outer for loop beginning in line 5 iterates once for each recursive step. Within this loop, line 6 determines the pivot to be ukk = akk. The for loop in lines 7–9 (which does not execute when k = n) uses the v and wᵀ vectors to update L and U. Line 8 determines the elements of the v vector, storing vi in lik, and line 9 computes the elements of the wᵀ vector, storing wiᵀ in uki. Finally, lines 10–12 compute the elements of the Schur complement and store them back into the matrix A.


[Figure 28.1 in the original: panels (a)–(d) trace LU-DECOMPOSITION on the 4 × 4 matrix below; panel (e) shows the resulting factorization A = LU:]

    [ 2   3   1   5 ]   [ 1  0  0  0 ] [ 2  3  1  5 ]
    [ 6  13   5  19 ] = [ 3  1  0  0 ] [ 0  4  2  4 ]
    [ 2  19  10  23 ]   [ 1  4  1  0 ] [ 0  0  1  2 ]
    [ 4  10  11  31 ]   [ 2  1  7  1 ] [ 0  0  0  3 ]
            A                  L              U

Figure 28.1  The operation of LU-DECOMPOSITION. (a) The matrix A. (b) The element a11 = 2 in the black circle is the pivot, the shaded column is v/a11, and the shaded row is wᵀ. The elements of U computed thus far are above the horizontal line, and the elements of L are to the left of the vertical line. The Schur complement matrix A′ − v wᵀ/a11 occupies the lower right. (c) We now operate on the Schur complement matrix produced from part (b). The element a22 = 4 in the black circle is the pivot, and the shaded column and row are v/a22 and wᵀ (in the partitioning of the Schur complement), respectively. Lines divide the matrix into the elements of U computed so far (above), the elements of L computed so far (left), and the new Schur complement (lower right). (d) After the next step, the matrix A is factored. (The element 3 in the new Schur complement becomes part of U when the recursion terminates.) (e) The factorization A = LU.

(We don't need to divide by akk in line 12 because we already did so when we computed lik in line 8.) Because line 12 is triply nested, LU-DECOMPOSITION runs in time Θ(n³).

Figure 28.1 illustrates the operation of LU-DECOMPOSITION. It shows a standard optimization of the procedure in which we store the significant elements of L and U in place in the matrix A. That is, we can set up a correspondence between each element aij and either lij (if i > j) or uij (if i ≤ j) and update the matrix A so that it holds both L and U when the procedure terminates. To obtain the pseudocode for this optimization from the above pseudocode, just replace each reference to l or u by a; you can easily verify that this transformation preserves correctness.
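A minimal NumPy version of LU-DECOMPOSITION (a sketch; the function name lu_decomposition and the 0-based indexing are our own choices) makes the strategy concrete. It follows the pseudocode directly rather than the in-place optimization, and, like the pseudocode, it assumes that no pivot akk is ever 0.

import numpy as np

def lu_decomposition(A):
    """Return (L, U) with A = L @ U, assuming no zero pivots arise."""
    A = A.astype(float)              # work on a copy; this copy gets overwritten
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        U[k, k] = A[k, k]                    # the pivot
        L[k+1:, k] = A[k+1:, k] / U[k, k]    # column v / a_kk
        U[k, k+1:] = A[k, k+1:]              # row w^T
        # Replace the lower-right block by the Schur complement.
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])
    return L, U

# The 4 x 4 example of Figure 28.1:
A = np.array([[2., 3., 1., 5.],
              [6., 13., 5., 19.],
              [2., 19., 10., 23.],
              [4., 10., 11., 31.]])
L, U = lu_decomposition(A)
assert np.allclose(L @ U, A)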

Computing an LUP decomposition

Generally, in solving a system of linear equations Ax = b, we must pivot on off-diagonal elements of A to avoid dividing by 0. Dividing by 0 would, of course, be disastrous. But we also want to avoid dividing by a small value—even if A is


nonsingular—because numerical instabilities can result. We therefore try to pivot on a large value.

The mathematics behind LUP decomposition is similar to that of LU decomposition. Recall that we are given an n × n nonsingular matrix A, and we wish to find a permutation matrix P, a unit lower-triangular matrix L, and an upper-triangular matrix U such that PA = LU. Before we partition the matrix A, as we did for LU decomposition, we move a nonzero element, say ak1, from somewhere in the first column to the (1, 1) position of the matrix. For numerical stability, we choose ak1 as the element in the first column with the greatest absolute value. (The first column cannot contain only 0s, for then A would be singular, because its determinant would be 0, by Theorems D.4 and D.5.) In order to preserve the set of equations, we exchange row 1 with row k, which is equivalent to multiplying A by a permutation matrix Q on the left (Exercise D.1-4). Thus, we can write QA as

    QA = [ ak1  wᵀ ]
         [  v   A′ ] ,

where v = (a21, a31, ..., an1)ᵀ, except that a11 replaces ak1; wᵀ = (ak2, ak3, ..., akn); and A′ is an (n−1) × (n−1) matrix. Since ak1 ≠ 0, we can now perform much the same linear algebra as for LU decomposition, but now guaranteeing that we do not divide by 0:

    QA = [ ak1  wᵀ ] = [ 1       0    ] [ ak1  wᵀ            ]
         [  v   A′ ]   [ v/ak1   In−1 ] [  0   A′ − v wᵀ/ak1 ] .

As we saw for LU decomposition, if A is nonsingular, then the Schur complement A′ − v wᵀ/ak1 is nonsingular, too. Therefore, we can recursively find an LUP decomposition for it, with unit lower-triangular matrix L′, upper-triangular matrix U′, and permutation matrix P′, such that

    P′(A′ − v wᵀ/ak1) = L′U′ .

Define

    P = [ 1  0  ] Q ,
        [ 0  P′ ]

which is a permutation matrix, since it is the product of two permutation matrices (Exercise D.1-4). We now have

    PA = [ 1  0  ] QA
         [ 0  P′ ]

       = [ 1  0  ] [ 1       0    ] [ ak1  wᵀ            ]
         [ 0  P′ ] [ v/ak1   In−1 ] [  0   A′ − v wᵀ/ak1 ]

       = [ 1        0  ] [ ak1  wᵀ            ]
         [ P′v/ak1  P′ ] [  0   A′ − v wᵀ/ak1 ]

       = [ 1        0    ] [ ak1  wᵀ                ]
         [ P′v/ak1  In−1 ] [  0   P′(A′ − v wᵀ/ak1) ]

       = [ 1        0    ] [ ak1  wᵀ   ]
         [ P′v/ak1  In−1 ] [  0   L′U′ ]

       = [ 1        0  ] [ ak1  wᵀ ]
         [ P′v/ak1  L′ ] [  0   U′ ]

       = LU ,

yielding the LUP decomposition. Because L′ is unit lower-triangular, so is L, and because U′ is upper-triangular, so is U.

Notice that in this derivation, unlike the one for LU decomposition, we must multiply both the column vector v/ak1 and the Schur complement A′ − v wᵀ/ak1 by the permutation matrix P′. Here is the pseudocode for LUP decomposition:

LUP-DECOMPOSITION(A)
1   n = A.rows
2   let π[1..n] be a new array
3   for i = 1 to n
4       π[i] = i
5   for k = 1 to n
6       p = 0
7       for i = k to n
8           if |aik| > p
9               p = |aik|
10              k′ = i
11      if p == 0
12          error "singular matrix"
13      exchange π[k] with π[k′]
14      for i = 1 to n
15          exchange aki with ak′i
16      for i = k + 1 to n
17          aik = aik / akk
18          for j = k + 1 to n
19              aij = aij − aik akj


Like LU-DECOMPOSITION, our LUP-DECOMPOSITION procedure replaces the recursion with an iteration loop. As an improvement over a direct implementation of the recursion, we dynamically maintain the permutation matrix P as an array π, where π[i] = j means that the ith row of P contains a 1 in column j. We also implement the code to compute L and U "in place" in the matrix A. Thus, when the procedure terminates,

    aij = { lij   if i > j ,
          { uij   if i ≤ j .

Figure 28.2 illustrates how LUP-DECOMPOSITION factors a matrix. Lines 3–4

initialize the array π to represent the identity permutation. The outer for loop beginning in line 5 implements the recursion. Each time through the outer loop, lines 6–10 determine the element ak′k with largest absolute value of those in the current first column (column k) of the (n − k + 1) × (n − k + 1) matrix whose LUP decomposition we are finding. If all elements in the current first column are zero, lines 11–12 report that the matrix is singular. To pivot, we exchange π[k′] with π[k] in line 13 and exchange the kth and k′th rows of A in lines 14–15, thereby making the pivot element akk. (The entire rows are swapped because in the derivation of the method above, not only is A′ − v wᵀ/ak1 multiplied by P′, but so is v/ak1.) Finally, the Schur complement is computed by lines 16–19 in much the same way as it is computed by lines 7–12 of LU-DECOMPOSITION, except that here the operation is written to work in place.

Because of its triply nested loop structure, LUP-DECOMPOSITION has a running time of Θ(n³), which is the same as that of LU-DECOMPOSITION. Thus, pivoting costs us at most a constant factor in time.
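The same procedure in NumPy (a sketch; the function name and 0-based indexing are ours) keeps L and U in place in A and returns the permutation as an array pi, which can be fed directly to the lup_solve sketch given earlier.

import numpy as np

def lup_decomposition(A):
    """In-place LUP decomposition.  Returns (A, pi), where A holds L strictly
    below the diagonal (unit diagonal implied) and U on and above the
    diagonal, and pi represents the permutation matrix P."""
    A = A.astype(float)
    n = A.shape[0]
    pi = np.arange(n)
    for k in range(n):
        # Choose the pivot with the largest absolute value in column k, rows k..n-1.
        kp = k + np.argmax(np.abs(A[k:, k]))
        if A[kp, k] == 0:
            raise ValueError("singular matrix")
        pi[[k, kp]] = pi[[kp, k]]        # record the row exchange
        A[[k, kp], :] = A[[kp, k], :]    # swap entire rows of A
        A[k+1:, k] /= A[k, k]            # column of L
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])   # Schur complement
    return A, pi

# If explicit L and U are wanted afterwards:
#   L = np.tril(A, -1) + np.eye(n);  U = np.triu(A)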

Exercises

28.1-1 Solve the equation

    [  1  0  0 ] [ x1 ]   [  3 ]
    [  4  1  0 ] [ x2 ] = [ 14 ]
    [ -6  5  1 ] [ x3 ]   [ -7 ]

by using forward substitution.

28.1-2 Find an LU decomposition of the matrix

    [  4  -5   6 ]
    [  8  -6   7 ]
    [ 12  -7  12 ] .


[Figure 28.2 in the original: panels (a)–(i) trace LUP-DECOMPOSITION on the 4 × 4 matrix below; panel (j) shows the resulting factorization PA = LU:]

    [ 0  0  1  0 ] [  2   0   2   0.6 ]   [  1    0    0    0 ] [ 5   5    4     2   ]
    [ 1  0  0  0 ] [  3   3   4  -2   ] = [  0.4  1    0    0 ] [ 0  -2    0.4  -0.2 ]
    [ 0  0  0  1 ] [  5   5   4   2   ]   [ -0.2  0.5  1    0 ] [ 0   0    4    -0.5 ]
    [ 0  1  0  0 ] [ -1  -2   3.4 -1  ]   [  0.6  0    0.4  1 ] [ 0   0    0    -3   ]
          P                  A                      L                     U

Figure 28.2  The operation of LUP-DECOMPOSITION. (a) The input matrix A with the identity permutation of the rows on the left. The first step of the algorithm determines that the element 5 in the black circle in the third row is the pivot for the first column. (b) Rows 1 and 3 are swapped and the permutation is updated. The shaded column and row represent v and wᵀ. (c) The vector v is replaced by v/5, and the lower right of the matrix is updated with the Schur complement. Lines divide the matrix into three regions: elements of U (above), elements of L (left), and elements of the Schur complement (lower right). (d)–(f) The second step. (g)–(i) The third step. No further changes occur on the fourth (final) step. (j) The LUP decomposition PA = LU.


28.1-3 Solve the equation

    [ 1  5  4 ] [ x1 ]   [ 12 ]
    [ 2  0  3 ] [ x2 ] = [  9 ]
    [ 5  8  2 ] [ x3 ]   [  5 ]

by using an LUP decomposition.

28.1-4 Describe the LUP decomposition of a diagonal matrix.

28.1-5 Describe the LUP decomposition of a permutation matrix A, and prove that it is unique.

28.1-6 Show that for all n ≥ 1, there exists a singular n × n matrix that has an LU decomposition.

28.1-7 In LU-DECOMPOSITION, is it necessary to perform the outermost for loop iteration when k = n? How about in LUP-DECOMPOSITION?

28.2 Inverting matrices

Although in practice we do not generally use matrix inverses to solve systems of linear equations, preferring instead to use more numerically stable techniques such as LUP decomposition, sometimes we need to compute a matrix inverse. In this section, we show how to use LUP decomposition to compute a matrix inverse. We also prove that matrix multiplication and computing the inverse of a matrix are equivalently hard problems, in that (subject to technical conditions) we can use an algorithm for one to solve the other in the same asymptotic running time. Thus, we can use Strassen's algorithm (see Section 4.2) for matrix multiplication to invert a matrix. Indeed, Strassen's original paper was motivated by the problem of showing that a set of linear equations could be solved more quickly than by the usual method.


Computing a matrix inverse from an LUP decomposition

Suppose that we have an LUP decomposition of a matrix A in the form of three matrices L, U, and P such that PA = LU. Using LUP-SOLVE, we can solve an equation of the form Ax = b in time Θ(n²). Since the LUP decomposition depends on A but not b, we can run LUP-SOLVE on a second set of equations of the form Ax = b′ in additional time Θ(n²). In general, once we have the LUP decomposition of A, we can solve, in time Θ(kn²), k versions of the equation Ax = b that differ only in b.

We can think of the equation

    AX = In ,                                                     (28.10)

which defines the matrix X, the inverse of A, as a set of n distinct equations of the form Ax = b. To be precise, let Xi denote the ith column of X, and recall that the unit vector ei is the ith column of In. We can then solve equation (28.10) for X by using the LUP decomposition for A to solve each equation

    AXi = ei

separately for Xi. Once we have the LUP decomposition, we can compute each of the n columns Xi in time Θ(n²), and so we can compute X from the LUP decomposition of A in time Θ(n³). Since we can determine the LUP decomposition of A in time Θ(n³), we can compute the inverse A⁻¹ of a matrix A in time Θ(n³).
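A small sketch of this idea (ours): SciPy's lu_factor and lu_solve play the roles of LUP-DECOMPOSITION and LUP-SOLVE here, so we factor once and then perform one triangular solve per column of the identity matrix.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def invert(A):
    """Compute A^{-1} column by column from one LUP factorization:
    factor once in Theta(n^3), then solve A x = e_i in Theta(n^2) per column."""
    n = A.shape[0]
    lu_piv = lu_factor(A)                    # PA = LU, computed once
    X = np.empty((n, n))
    for i in range(n):
        X[:, i] = lu_solve(lu_piv, np.eye(n)[:, i])
    return X

A = np.array([[1., 2., 0.], [3., 4., 4.], [5., 6., 3.]])
assert np.allclose(invert(A) @ A, np.eye(3))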

Matrix multiplication and matrix inversion

We now show that the theoretical speedups obtained for matrix multiplication translate to speedups for matrix inversion. In fact, we prove something stronger: matrix inversion is equivalent to matrix multiplication, in the following sense. If M(n) denotes the time to multiply two n × n matrices, then we can invert a nonsingular n × n matrix in time O(M(n)). Moreover, if I(n) denotes the time to invert a nonsingular n × n matrix, then we can multiply two n × n matrices in time O(I(n)). We prove these results as two separate theorems.

Theorem 28.1 (Multiplication is no harder than inversion) If we can invert an n × n matrix in time I(n), where I(n) = Ω(n²) and I(n) satisfies the regularity condition I(3n) = O(I(n)), then we can multiply two n × n matrices in time O(I(n)).

Proof  Let A and B be n × n matrices whose matrix product C we wish to compute. We define the 3n × 3n matrix D by

    D = [ In  A   0  ]
        [ 0   In  B  ]
        [ 0   0   In ] .

The inverse of D is

    D⁻¹ = [ In  -A   AB ]
          [ 0    In  -B ]
          [ 0    0   In ] ,

and thus we can compute the product AB by taking the upper right n × n submatrix of D⁻¹.

We can construct matrix D in Θ(n²) time, which is O(I(n)) because we assume that I(n) = Ω(n²), and we can invert D in O(I(3n)) = O(I(n)) time, by the regularity condition on I(n). We thus have M(n) = O(I(n)).

Note that I(n) satisfies the regularity condition whenever I(n) = Θ(nᶜ lgᵈ n) for any constants c > 0 and d ≥ 0.
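The reduction in the proof is easy to check numerically. The sketch below (ours) builds the 3n × 3n matrix D with numpy.block, inverts it, and reads the product AB out of the upper-right block.

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

I, Z = np.eye(n), np.zeros((n, n))
D = np.block([[I, A, Z],
              [Z, I, B],
              [Z, Z, I]])

D_inv = np.linalg.inv(D)
AB_from_inverse = D_inv[:n, 2*n:]      # upper-right n x n block of D^{-1} is AB
assert np.allclose(AB_from_inverse, A @ B)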

The proof that matrix inversion is no harder than matrix multiplication relies on some properties of symmetric positive-definite matrices that we will prove in Section 28.3.

Theorem 28.2 (Inversion is no harder than multiplication) Suppose we can multiply two n × n real matrices in time M(n), where M(n) = Ω(n²) and M(n) satisfies the two regularity conditions M(n + k) = O(M(n)) for any k in the range 0 ≤ k ≤ n and M(n/2) ≤ cM(n) for some constant c < 1/2. Then we can compute the inverse of any real nonsingular n × n matrix in time O(M(n)).

Proof  We can assume that n is an exact power of 2, since we have

    [ A  0  ]⁻¹   [ A⁻¹  0  ]
    [ 0  Ik ]   = [ 0    Ik ]

for any k > 0. Thus, by choosing k such that n + k is a power of 2, we enlarge the matrix to a size that is the next power of 2 and obtain the desired answer A⁻¹ from the answer to the enlarged problem. The first regularity condition on M(n) ensures that this enlargement does not cause the running time to increase by more than a constant factor.

For the moment, let us assume that the n × n matrix A is symmetric and positive-definite. We partition each of A and its inverse A⁻¹ into four n/2 × n/2 submatrices:

    A = [ B   Cᵀ ]        and        A⁻¹ = [ R  T ]
        [ C   D  ]                          [ U  V ] .            (28.11)

Then, if we let

    S = D − CB⁻¹Cᵀ                                                (28.12)

be the Schur complement of A with respect to B (we shall see more about this form of Schur complement in Section 28.3), we have

    A⁻¹ = [ R  T ] = [ B⁻¹ + B⁻¹CᵀS⁻¹CB⁻¹   −B⁻¹CᵀS⁻¹ ]
          [ U  V ]   [ −S⁻¹CB⁻¹             S⁻¹       ] ,         (28.13)

since AA⁻¹ = In, as you can verify by performing the matrix multiplication. Because A is symmetric and positive-definite, Lemmas 28.4 and 28.5 in Section 28.3 imply that B and S are both symmetric and positive-definite. By Lemma 28.3 in Section 28.3, therefore, the inverses B⁻¹ and S⁻¹ exist, and by Exercise D.2-6, B⁻¹ and S⁻¹ are symmetric, so that (B⁻¹)ᵀ = B⁻¹ and (S⁻¹)ᵀ = S⁻¹. Therefore, we can compute the submatrices R, T, U, and V of A⁻¹ as follows, where all matrices mentioned are n/2 × n/2:

1. Form the submatrices B, C, Cᵀ, and D of A.

2. Recursively compute the inverse B⁻¹ of B.

3. Compute the matrix product W = CB⁻¹, and then compute its transpose Wᵀ, which equals B⁻¹Cᵀ (by Exercise D.1-2 and (B⁻¹)ᵀ = B⁻¹).

4. Compute the matrix product X = WCᵀ, which equals CB⁻¹Cᵀ, and then compute the matrix S = D − X = D − CB⁻¹Cᵀ.

5. Recursively compute the inverse S⁻¹ of S, and set V to S⁻¹.

6. Compute the matrix product Y = S⁻¹W, which equals S⁻¹CB⁻¹, and then compute its transpose Yᵀ, which equals B⁻¹CᵀS⁻¹ (by Exercise D.1-2, (B⁻¹)ᵀ = B⁻¹, and (S⁻¹)ᵀ = S⁻¹). Set T to −Yᵀ and U to −Y.

7. Compute the matrix product Z = WᵀY, which equals B⁻¹CᵀS⁻¹CB⁻¹, and set R to B⁻¹ + Z.

Thus, we can invert an n × n symmetric positive-definite matrix by inverting two n/2 × n/2 matrices in steps 2 and 5; performing four multiplications of n/2 × n/2 matrices in steps 3, 4, 6, and 7; plus an additional cost of O(n²) for extracting submatrices from A, inserting submatrices into A⁻¹, and performing a constant number of additions, subtractions, and transposes on n/2 × n/2 matrices. We get the recurrence

    I(n) ≤ 2I(n/2) + 4M(n/2) + O(n²)
         = 2I(n/2) + Θ(M(n))
         = O(M(n)) .

The second line holds because the second regularity condition in the statement of the theorem implies that 4M(n/2) < 2M(n).
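The seven steps translate almost line for line into the following NumPy sketch (ours). It assumes, as the proof does at this point, that A is symmetric positive-definite and that its size is a power of 2, and it inverts directly only at the 1 × 1 base case.

import numpy as np

def spd_inverse(A):
    """Invert a symmetric positive-definite matrix whose size is a power of 2,
    following steps 1-7 above."""
    n = A.shape[0]
    if n == 1:
        return np.array([[1.0 / A[0, 0]]])
    h = n // 2
    B, Ct = A[:h, :h], A[:h, h:]            # step 1: B and C^T ...
    C, D = A[h:, :h], A[h:, h:]             # ... C and D
    B_inv = spd_inverse(B)                  # step 2
    W = C @ B_inv                           # step 3: W = C B^{-1}
    S = D - W @ Ct                          # step 4: Schur complement S
    V = spd_inverse(S)                      # step 5: V = S^{-1}
    Y = V @ W                               # step 6: Y = S^{-1} C B^{-1}
    T, U = -Y.T, -Y
    R = B_inv + W.T @ Y                     # step 7: R = B^{-1} + B^{-1}C^T S^{-1} C B^{-1}
    return np.block([[R, T], [U, V]])

# A random symmetric positive-definite test matrix of size 8.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
assert np.allclose(spd_inverse(A) @ A, np.eye(8))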


Since A is positive-definite, xᵀAx > 0 for any nonzero vector x. Let us break x into two subvectors y and z compatible with Ak and C, respectively. Because Ak⁻¹ exists, we have

    xᵀAx = ( yᵀ  zᵀ ) [ Ak  Bᵀ ] [ y ]
                      [ B   C  ] [ z ]

         = ( yᵀ  zᵀ ) [ Ak y + Bᵀz ]
                      [ By + Cz    ]

         = yᵀAk y + yᵀBᵀz + zᵀBy + zᵀCz

         = (y + Ak⁻¹Bᵀz)ᵀ Ak (y + Ak⁻¹Bᵀz) + zᵀ(C − BAk⁻¹Bᵀ)z ,   (28.16)

by matrix magic. (Verify by multiplying through.) This last equation amounts to "completing the square" of the quadratic form. (See Exercise 28.3-2.)

Since xᵀAx > 0 holds for any nonzero x, let us pick any nonzero z and then choose y = −Ak⁻¹Bᵀz, which causes the first term in equation (28.16) to vanish, leaving

    zᵀ(C − BAk⁻¹Bᵀ)z = zᵀSz

as the value of the expression. For any z ≠ 0, we therefore have zᵀSz = xᵀAx > 0, and thus S is positive-definite.


Corollary 28.6 LU decomposition of a symmetric positive-definite matrix never causes a division by 0.

Proof  Let A be a symmetric positive-definite matrix. We shall prove something stronger than the statement of the corollary: every pivot is strictly positive. The first pivot is a11. Let e1 be the first unit vector, from which we obtain a11 = e1ᵀAe1 > 0. Since the first step of LU decomposition produces the Schur complement of A with respect to A1 = (a11), Lemma 28.5 implies by induction that all pivots are positive.

Least-squares approximation

One important application of symmetric positive-definite matrices arises in fitting curves to given sets of data points. Suppose that we are given a set of m data points

    (x1, y1), (x2, y2), ..., (xm, ym) ,

where we know that the yi are subject to measurement errors. We would like to determine a function F(x) such that the approximation errors

    ηi = F(xi) − yi                                               (28.17)

are small for i = 1, 2, ..., m. The form of the function F depends on the problem at hand. Here, we assume that it has the form of a linearly weighted sum,

    F(x) = Σ_{j=1}^{n} cj fj(x) ,

where the number of summands n and the specific basis functions fj are chosen based on knowledge of the problem at hand. A common choice is fj(x) = x^{j−1}, which means that

    F(x) = c1 + c2 x + c3 x² + ⋯ + cn x^{n−1}

is a polynomial of degree n − 1 in x. Thus, given m data points (x1, y1), (x2, y2), ..., (xm, ym), we wish to calculate n coefficients c1, c2, ..., cn that minimize the approximation errors η1, η2, ..., ηm.

By choosing n = m, we can calculate each yi exactly in equation (28.17). Such a high-degree F "fits the noise" as well as the data, however, and generally gives poor results when used to predict y for previously unseen values of x. It is usually better to choose n significantly smaller than m and hope that by choosing the coefficients cj well, we can obtain a function F that finds the significant patterns in the data points without paying undue attention to the noise. Some theoretical principles exist for choosing n, but they are beyond the scope of this text. In any case, once we choose a value of n that is less than m, we end up with an overdetermined set of equations whose solution we wish to approximate. We now show how to do so.

Let

    A = [ f1(x1)  f2(x1)  ⋯  fn(x1) ]
        [ f1(x2)  f2(x2)  ⋯  fn(x2) ]
        [   ⋮       ⋮           ⋮   ]
        [ f1(xm)  f2(xm)  ⋯  fn(xm) ]

denote the matrix of values of the basis functions at the given points; that is, aij = fj(xi). Let c = (ck) denote the desired n-vector of coefficients. Then,

    Ac = [ f1(x1)  f2(x1)  ⋯  fn(x1) ] [ c1 ]   [ F(x1) ]
         [ f1(x2)  f2(x2)  ⋯  fn(x2) ] [ c2 ] = [ F(x2) ]
         [   ⋮       ⋮           ⋮   ] [  ⋮ ]   [   ⋮   ]
         [ f1(xm)  f2(xm)  ⋯  fn(xm) ] [ cn ]   [ F(xm) ]

is the m-vector of "predicted values" for y. Thus,

    η = Ac − y

is the m-vector of approximation errors.

To minimize approximation errors, we choose to minimize the norm of the error vector η, which gives us a least-squares solution, since

    ‖η‖ = ( Σ_{i=1}^{m} ηi² )^{1/2} .

Because

    ‖η‖² = ‖Ac − y‖² = Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij cj − yi )² ,

we can minimize ‖η‖ by differentiating ‖η‖² with respect to each ck and then setting the result to 0:

    d‖η‖²/dck = Σ_{i=1}^{m} 2 ( Σ_{j=1}^{n} aij cj − yi ) aik = 0 .    (28.18)

The n equations (28.18) for k = 1, 2, ..., n are equivalent to the single matrix equation

    (Ac − y)ᵀA = 0

or, equivalently (using Exercise D.1-2), to

    Aᵀ(Ac − y) = 0 ,

which implies

    AᵀAc = Aᵀy .                                                  (28.19)

In statistics, this is called the normal equation. The matrix AᵀA is symmetric by Exercise D.1-2, and if A has full column rank, then by Theorem D.6, AᵀA is positive-definite as well. Hence, (AᵀA)⁻¹ exists, and the solution to equation (28.19) is

    c = ((AᵀA)⁻¹Aᵀ) y = A⁺y ,                                     (28.20)

where the matrix A⁺ = ((AᵀA)⁻¹Aᵀ) is the pseudoinverse of the matrix A. The pseudoinverse naturally generalizes the notion of a matrix inverse to the case in which A is not square. (Compare equation (28.20) as the approximate solution to Ac = y with the solution A⁻¹b as the exact solution to Ax = b.)

As an example of producing a least-squares fit, suppose that we have five data points

    (x1, y1) = (−1, 2) ,
    (x2, y2) = (1, 1) ,
    (x3, y3) = (2, 1) ,
    (x4, y4) = (3, 0) ,
    (x5, y5) = (5, 3) ,

shown as black dots in Figure 28.3. We wish to fit these points with a quadratic polynomial

    F(x) = c1 + c2 x + c3 x² .

We start with the matrix of basis-function values

[Figure 28.3 in the original: a plot of the five data points and the fitted quadratic F(x) = 1.2 − 0.757x + 0.214x².]

Figure 28.3  The least-squares fit of a quadratic polynomial to the set of five data points {(−1, 2), (1, 1), (2, 1), (3, 0), (5, 3)}. The black dots are the data points, and the white dots are their estimated values predicted by the polynomial F(x) = 1.2 − 0.757x + 0.214x², the quadratic polynomial that minimizes the sum of the squared errors. Each shaded line shows the error for one data point.

    A = [ 1  x1  x1² ]   [ 1  -1   1 ]
        [ 1  x2  x2² ]   [ 1   1   1 ]
        [ 1  x3  x3² ] = [ 1   2   4 ]
        [ 1  x4  x4² ]   [ 1   3   9 ]
        [ 1  x5  x5² ]   [ 1   5  25 ] ,

whose pseudoinverse is

    A⁺ = [  0.500   0.300   0.200   0.100  -0.100 ]
         [ -0.388   0.093   0.190   0.193  -0.088 ]
         [  0.060  -0.036  -0.048  -0.036   0.060 ] .

Multiplying y by A⁺, we obtain the coefficient vector

    c = [  1.200 ]
        [ -0.757 ]
        [  0.214 ] ,

which corresponds to the quadratic polynomial


    F(x) = 1.200 − 0.757x + 0.214x²

as the closest-fitting quadratic to the given data, in a least-squares sense.

As a practical matter, we solve the normal equation (28.19) by multiplying y

by Aᵀ and then finding an LU decomposition of AᵀA. If A has full rank, the matrix AᵀA is guaranteed to be nonsingular, because it is symmetric and positive-definite. (See Exercise D.1-2 and Theorem D.6.)
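A short NumPy sketch (ours) reproduces this example by forming the normal equation (28.19) directly; numpy.linalg.lstsq would give the same coefficients more stably, but solving AᵀAc = Aᵀy mirrors the text.

import numpy as np

x = np.array([-1., 1., 2., 3., 5.])
y = np.array([2., 1., 1., 0., 3.])

# Basis functions 1, x, x^2: A[i, j] = f_j(x_i).
A = np.vander(x, 3, increasing=True)

# Solve the normal equation A^T A c = A^T y.
c = np.linalg.solve(A.T @ A, A.T @ y)
print(c)     # approximately [ 1.200  -0.757   0.214 ]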

Exercises

28.3-1 Prove that every diagonal element of a symmetric positive-definite matrix is posi- tive.

28.3-2

Let A D �

a b

b c

� be a 2 2 symmetric positive-definite matrix. Prove that its

determinant ac � b2 is positive by “completing the square” in a manner similar to that used in the proof of Lemma 28.5.

28.3-3 Prove that the maximum element in a symmetric positive-definite matrix lies on the diagonal.

28.3-4 Prove that the determinant of each leading submatrix of a symmetric positive- definite matrix is positive.

28.3-5 Let Ak denote the kth leading submatrix of a symmetric positive-definite matrix A. Prove that det(Ak)/det(Ak−1) is the kth pivot during LU decomposition, where, by convention, det(A0) = 1.

28.3-6 Find the function of the form

    F(x) = c1 + c2 x lg x + c3 eˣ

that is the best least-squares fit to the data points

    (1, 1), (2, 1), (3, 3), (4, 8) .


28.3-7 Show that the pseudoinverse A⁺ satisfies the following four equations:

    A A⁺ A = A ,
    A⁺ A A⁺ = A⁺ ,
    (A A⁺)ᵀ = A A⁺ ,
    (A⁺ A)ᵀ = A⁺ A .

Problems

28-1 Tridiagonal systems of linear equations
Consider the tridiagonal matrix

    A = [  1  -1   0   0   0 ]
        [ -1   2  -1   0   0 ]
        [  0  -1   2  -1   0 ]
        [  0   0  -1   2  -1 ]
        [  0   0   0  -1   2 ] .

a. Find an LU decomposition of A.

b. Solve the equation Ax = ( 1  1  1  1  1 )ᵀ by using forward and back substitution.

c. Find the inverse of A.

d. Show how, for any n × n symmetric positive-definite, tridiagonal matrix A and any n-vector b, to solve the equation Ax = b in O(n) time by performing an LU decomposition. Argue that any method based on forming A⁻¹ is asymptotically more expensive in the worst case.

e. Show how, for any n × n nonsingular, tridiagonal matrix A and any n-vector b, to solve the equation Ax = b in O(n) time by performing an LUP decomposition.

28-2 Splines
A practical method for interpolating a set of points with a curve is to use cubic splines. We are given a set {(xi, yi) : i = 0, 1, ..., n} of n + 1 point-value pairs, where x0 < x1 < ⋯ < xn.

SIMPLEX(A, b, c)
1   (N, B, A, b, c, v) = INITIALIZE-SIMPLEX(A, b, c)
2   let Δ be a new vector of length m
3   while some index j ∈ N has cj > 0
4       choose an index e ∈ N for which ce > 0
5       for each index i ∈ B
6           if aie > 0
7               Δi = bi / aie
8           else Δi = ∞
9       choose an index l ∈ B that minimizes Δi
10      if Δl == ∞
11          return "unbounded"
12      else (N, B, A, b, c, v) = PIVOT(N, B, A, b, c, v, l, e)
13  for i = 1 to n
14      if i ∈ B
15          x̄i = bi
16      else x̄i = 0
17  return (x̄1, x̄2, ..., x̄n)

The SIMPLEX procedure works as follows. In line 1, it calls the procedure INITIALIZE-SIMPLEX(A, b, c), described above, which either determines that the linear program is infeasible or returns a slack form for which the basic solution is feasible. The while loop of lines 3–12 forms the main part of the algorithm. If all coefficients in the objective function are negative, then the while loop terminates. Otherwise, line 4 selects a variable xe, whose coefficient in the objective function is positive, as the entering variable. Although we may choose any such variable as the entering variable, we assume that we use some prespecified deterministic rule. Next, lines 5–9 check each constraint and pick the one that most severely limits the amount by which we can increase xe without violating any of the


nonnegativity constraints; the basic variable associated with this constraint is xl. Again, we are free to choose one of several variables as the leaving variable, but we assume that we use some prespecified deterministic rule. If none of the constraints limits the amount by which the entering variable can increase, the algorithm returns "unbounded" in line 11. Otherwise, line 12 exchanges the roles of the entering and leaving variables by calling PIVOT(N, B, A, b, c, v, l, e), as described above. Lines 13–16 compute a solution x̄1, x̄2, ..., x̄n for the original linear-programming variables by setting all the nonbasic variables to 0 and each basic variable x̄i to bi, and line 17 returns these values.

To show that SIMPLEX is correct, we first show that if SIMPLEX has an initial feasible solution and eventually terminates, then it either returns a feasible solution or determines that the linear program is unbounded. Then, we show that SIMPLEX terminates. Finally, in Section 29.4 (Theorem 29.10) we show that the solution returned is optimal.

Lemma 29.2 Given a linear program (A, b, c), suppose that the call to INITIALIZE-SIMPLEX in line 1 of SIMPLEX returns a slack form for which the basic solution is feasible. Then if SIMPLEX returns a solution in line 17, that solution is a feasible solution to the linear program. If SIMPLEX returns "unbounded" in line 11, the linear program is unbounded.

Proof We use the following three-part loop invariant:

At the start of each iteration of the while loop of lines 3–12,

1. the slack form is equivalent to the slack form returned by the call of INITIALIZE-SIMPLEX,

2. for each i ∈ B, we have bi ≥ 0, and

3. the basic solution associated with the slack form is feasible.

Initialization: The equivalence of the slack forms is trivial for the first iteration. We assume, in the statement of the lemma, that the call to INITIALIZE-SIMPLEX in line 1 of SIMPLEX returns a slack form for which the basic solution is feasible. Thus, the third part of the invariant is true. Because the basic solution is feasible, each basic variable xi is nonnegative. Furthermore, since the basic solution sets each basic variable xi to bi, we have that bi ≥ 0 for all i ∈ B. Thus, the second part of the invariant holds.

Maintenance: We shall show that each iteration of the while loop maintains the loop invariant, assuming that the return statement in line 11 does not execute. We shall handle the case in which line 11 executes when we discuss termination.


An iteration of the while loop exchanges the role of a basic and a nonbasic variable by calling the PIVOT procedure. By Exercise 29.3-3, the slack form is equivalent to the one from the previous iteration which, by the loop invariant, is equivalent to the initial slack form.

We now demonstrate the second part of the loop invariant. We assume that at the start of each iteration of the while loop, bi ≥ 0 for each i ∈ B, and we shall show that these inequalities remain true after the call to PIVOT in line 12. Since the only changes to the variables bi and the set B of basic variables occur in this assignment, it suffices to show that line 12 maintains this part of the invariant. We let bi, aij, and B refer to values before the call of PIVOT, and b̂i refer to values returned from PIVOT.

First, we observe that b̂e ≥ 0 because bl ≥ 0 by the loop invariant, ale > 0 by lines 6 and 9 of SIMPLEX, and b̂e = bl/ale by line 3 of PIVOT. For the remaining indices i ∈ B − {l}, we have that

    b̂i = bi − aie b̂e           (by line 9 of PIVOT)
       = bi − aie (bl/ale)      (by line 3 of PIVOT) .            (29.76)

We have two cases to consider, depending on whether aie > 0 or aie ≤ 0. If aie > 0, then since we chose l such that

    bl/ale ≤ bi/aie   for all i ∈ B ,                             (29.77)

we have

    b̂i = bi − aie (bl/ale)      (by equation (29.76))
       ≥ bi − aie (bi/aie)      (by inequality (29.77))
       = bi − bi
       = 0 ,

and thus b̂i ≥ 0. If aie ≤ 0, then because ale, bi, and bl are all nonnegative, equation (29.76) implies that b̂i must be nonnegative, too.

We now argue that the basic solution is feasible, i.e., that all variables have nonnegative values. The nonbasic variables are set to 0 and thus are nonnegative. Each basic variable xi is defined by the equation

    xi = bi − Σ_{j∈N} aij xj .

The basic solution sets x̄i = bi. Using the second part of the loop invariant, we conclude that each basic variable x̄i is nonnegative.


Termination: The while loop can terminate in one of two ways. If it terminates because of the condition in line 3, then the current basic solution is feasible and line 17 returns this solution. The other way it terminates is by returning "unbounded" in line 11. In this case, for each iteration of the for loop in lines 5–8, when line 6 is executed, we find that aie ≤ 0. Consider the solution x̄ defined as

    x̄i = { ∞                      if i = e ,
          { 0                      if i ∈ N − {e} ,
          { bi − Σ_{j∈N} aij x̄j   if i ∈ B .

We now show that this solution is feasible, i.e., that all variables are nonnegative. The nonbasic variables other than x̄e are 0, and x̄e = ∞ > 0; thus all nonbasic variables are nonnegative. For each basic variable x̄i, we have

    x̄i = bi − Σ_{j∈N} aij x̄j
        = bi − aie x̄e .

The loop invariant implies that bi ≥ 0, and we have aie ≤ 0 and x̄e = ∞ > 0. Thus, x̄i ≥ 0.

Now we show that the objective value for the solution x̄ is unbounded. From equation (29.42), the objective value is

    z = v + Σ_{j∈N} cj x̄j
      = v + ce x̄e .

Since ce > 0 (by line 4 of SIMPLEX) and x̄e = ∞, the objective value is ∞, and thus the linear program is unbounded.

It remains to show that SIMPLEX terminates, and when it does terminate, the solution it returns is optimal. Section 29.4 will address optimality. We now discuss termination.

Termination

In the example given in the beginning of this section, each iteration of the simplex algorithm increased the objective value associated with the basic solution. As Ex- ercise 29.3-2 asks you to show, no iteration of SIMPLEX can decrease the objective value associated with the basic solution. Unfortunately, it is possible that an itera- tion leaves the objective value unchanged. This phenomenon is called degeneracy, and we shall now study it in greater detail.


The assignment in line 14 of PIVOT, v̂ = v + ce b̂e, changes the objective value. Since SIMPLEX calls PIVOT only when ce > 0, the only way for the objective value to remain unchanged (i.e., v̂ = v) is for b̂e to be 0. This value is assigned as b̂e = bl/ale in line 3 of PIVOT. Since we always call PIVOT with ale ≠ 0, we see that for b̂e to equal 0, and hence the objective value to be unchanged, we must have bl = 0.

Indeed, this situation can occur. Consider the linear program

    z  =      x1 + x2 + x3
    x4 =  8 − x1 − x2
    x5 =           x2      − x3 .

Suppose that we choose x1 as the entering variable and x4 as the leaving variable. After pivoting, we obtain

    z  =  8           + x3 − x4
    x1 =  8 − x2           − x4
    x5 =      x2 − x3 .

At this point, our only choice is to pivot with x3 entering and x5 leaving. Since b5 = 0, the objective value of 8 remains unchanged after pivoting:

    z  =  8 + x2 − x4 − x5
    x1 =  8 − x2 − x4
    x3 =      x2      − x5 .

The objective value has not changed, but our slack form has. Fortunately, if we pivot again, with x2 entering and x1 leaving, the objective value increases (to 16), and the simplex algorithm can continue.
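As a sanity check on this example (ours, not part of the text), the same linear program (maximize x1 + x2 + x3 subject to x1 + x2 ≤ 8 and x3 ≤ x2, with x ≥ 0) can be handed to scipy.optimize.linprog, which indeed reports the optimal objective value 16 that the third pivot reaches.

from scipy.optimize import linprog

# Maximize x1 + x2 + x3  subject to  x1 + x2 <= 8,  x3 - x2 <= 0,  x >= 0.
# linprog minimizes, so we negate the objective.
res = linprog(c=[-1, -1, -1],
              A_ub=[[1, 1, 0],
                    [0, -1, 1]],
              b_ub=[8, 0],
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)     # an optimal solution; objective value 16.0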

Degeneracy can prevent the simplex algorithm from terminating, because it can lead to a phenomenon known as cycling: the slack forms at two different iterations of SIMPLEX are identical. Because of degeneracy, SIMPLEX could choose a sequence of pivot operations that leave the objective value unchanged but repeat a slack form within the sequence. Since SIMPLEX is a deterministic algorithm, if it cycles, then it will cycle through the same series of slack forms forever, never terminating.

Cycling is the only reason that SIMPLEX might not terminate. To show this fact, we must first develop some additional machinery.

At each iteration, SIMPLEX maintains $A$, $b$, $c$, and $\nu$ in addition to the sets $N$ and $B$. Although we need to explicitly maintain $A$, $b$, $c$, and $\nu$ in order to implement the simplex algorithm efficiently, we can get by without maintaining them. In other words, the sets of basic and nonbasic variables suffice to uniquely determine the slack form. Before proving this fact, we prove a useful algebraic lemma.


Lemma 29.3
Let $I$ be a set of indices. For each $j \in I$, let $\alpha_j$ and $\beta_j$ be real numbers, and let $x_j$ be a real-valued variable. Let $\gamma$ be any real number. Suppose that for any settings of the $x_j$, we have

$$\sum_{j \in I} \alpha_j x_j = \gamma + \sum_{j \in I} \beta_j x_j \;. \tag{29.78}$$

Then $\alpha_j = \beta_j$ for each $j \in I$, and $\gamma = 0$.

Proof Since equation (29.78) holds for any values of the $x_j$, we can use particular values to draw conclusions about $\alpha$, $\beta$, and $\gamma$. If we let $x_j = 0$ for each $j \in I$, we conclude that $\gamma = 0$. Now pick an arbitrary index $j \in I$, and set $x_j = 1$ and $x_k = 0$ for all $k \neq j$. Then we must have $\alpha_j = \beta_j$. Since we picked $j$ as any index in $I$, we conclude that $\alpha_j = \beta_j$ for each $j \in I$.
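As a quick concrete illustration (an added example, not part of the proof), take $I = \{1, 2\}$, so the hypothesis reads $\alpha_1 x_1 + \alpha_2 x_2 = \gamma + \beta_1 x_1 + \beta_2 x_2$ for all $x_1, x_2$:

$$\begin{aligned}
(x_1, x_2) = (0, 0) &\;\Rightarrow\; 0 = \gamma \;, \\
(x_1, x_2) = (1, 0) &\;\Rightarrow\; \alpha_1 = \gamma + \beta_1 = \beta_1 \;, \\
(x_1, x_2) = (0, 1) &\;\Rightarrow\; \alpha_2 = \gamma + \beta_2 = \beta_2 \;.
\end{aligned}$$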

A particular linear program has many different slack forms; recall that each slack form has the same set of feasible and optimal solutions as the original linear program. We now show that the slack form of a linear program is uniquely determined by the set of basic variables. That is, given the set of basic variables, a unique slack form (unique set of coefficients and right-hand sides) is associated with those basic variables.

Lemma 29.4
Let $(A, b, c)$ be a linear program in standard form. Given a set $B$ of basic variables, the associated slack form is uniquely determined.

Proof Assume for the purpose of contradiction that there are two different slack forms with the same set $B$ of basic variables. The slack forms must also have identical sets $N = \{1, 2, \ldots, n+m\} - B$ of nonbasic variables. We write the first slack form as

$$z = \nu + \sum_{j \in N} c_j x_j \tag{29.79}$$
$$x_i = b_i - \sum_{j \in N} a_{ij} x_j \quad \text{for } i \in B \;, \tag{29.80}$$

and the second as

$$z = \nu' + \sum_{j \in N} c'_j x_j \tag{29.81}$$
$$x_i = b'_i - \sum_{j \in N} a'_{ij} x_j \quad \text{for } i \in B \;. \tag{29.82}$$


Consider the system of equations formed by subtracting each equation in line (29.82) from the corresponding equation in line (29.80). The resulting system is

$$0 = (b_i - b'_i) - \sum_{j \in N} (a_{ij} - a'_{ij}) x_j \quad \text{for } i \in B$$

or, equivalently,

$$\sum_{j \in N} a_{ij} x_j = (b_i - b'_i) + \sum_{j \in N} a'_{ij} x_j \quad \text{for } i \in B \;.$$

Now, for each $i \in B$, apply Lemma 29.3 with $\alpha_j = a_{ij}$, $\beta_j = a'_{ij}$, $\gamma = b_i - b'_i$, and $I = N$. Since $\alpha_j = \beta_j$, we have that $a_{ij} = a'_{ij}$ for each $j \in N$, and since $\gamma = 0$, we have that $b_i = b'_i$. Thus, for the two slack forms, $A$ and $b$ are identical to $A'$ and $b'$. Using a similar argument, Exercise 29.3-1 shows that it must also be the case that $c = c'$ and $\nu = \nu'$, and hence that the slack forms must be identical.

We can now show that cycling is the only possible reason that SIMPLEX might not terminate.

Lemma 29.5
If SIMPLEX fails to terminate in at most $\binom{n+m}{m}$ iterations, then it cycles.

Proof By Lemma 29.4, the set $B$ of basic variables uniquely determines a slack form. There are $n + m$ variables and $|B| = m$, and therefore, there are at most $\binom{n+m}{m}$ ways to choose $B$. Thus, there are only at most $\binom{n+m}{m}$ unique slack forms. Therefore, if SIMPLEX runs for more than $\binom{n+m}{m}$ iterations, it must cycle.

Cycling is theoretically possible, but extremely rare. We can prevent it by choosing the entering and leaving variables somewhat more carefully. One option is to perturb the input slightly so that it is impossible to have two solutions with the same objective value. Another option is to break ties by always choosing the variable with the smallest index, a strategy known as Bland's rule. We omit the proof that these strategies avoid cycling.
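A minimal sketch of Bland's rule, under an assumed dictionary-style data layout (an illustration, not the book's SIMPLEX pseudocode): among all eligible candidates, always pick the smallest index.

```python
def blands_entering(N, c):
    """Smallest nonbasic index j with c[j] > 0, or None if no improving variable exists."""
    eligible = [j for j in sorted(N) if c[j] > 0]
    return eligible[0] if eligible else None

def blands_leaving(B, a, b, e):
    """Smallest basic index attaining the minimum ratio b[i] / a[i][e] over a[i][e] > 0."""
    ratios = {i: b[i] / a[i][e] for i in B if a[i][e] > 0}
    if not ratios:
        return None                      # entering column nonpositive: unbounded
    best = min(ratios.values())
    return min(i for i, r in ratios.items() if r == best)
```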

Lemma 29.6 If lines 4 and 9 of SIMPLEX always break ties by choosing the variable with the smallest index, then SIMPLEX must terminate.

We conclude this section with the following lemma.


Lemma 29.7
Assuming that INITIALIZE-SIMPLEX returns a slack form for which the basic solution is feasible, SIMPLEX either reports that a linear program is unbounded, or it terminates with a feasible solution in at most $\binom{n+m}{m}$ iterations.

Proof Lemmas 29.2 and 29.6 show that if INITIALIZE-SIMPLEX returns a slack form for which the basic solution is feasible, SIMPLEX either reports that a linear program is unbounded, or it terminates with a feasible solution. By the contrapositive of Lemma 29.5, if SIMPLEX terminates with a feasible solution, then it terminates in at most $\binom{n+m}{m}$ iterations.

Exercises

29.3-1
Complete the proof of Lemma 29.4 by showing that it must be the case that $c = c'$ and $\nu = \nu'$.

29.3-2
Show that the call to PIVOT in line 12 of SIMPLEX never decreases the value of $\nu$.

29.3-3
Prove that the slack form given to the PIVOT procedure and the slack form that the procedure returns are equivalent.

29.3-4
Suppose we convert a linear program $(A, b, c)$ in standard form to slack form. Show that the basic solution is feasible if and only if $b_i \geq 0$ for $i = 1, 2, \ldots, m$.

29.3-5
Solve the following linear program using SIMPLEX:

$$\begin{aligned} \text{maximize} \quad & 18x_1 + 12.5x_2 \\ \text{subject to} \quad & x_1 + x_2 \leq 20 \\ & x_1 \leq 12 \\ & x_2 \leq 16 \\ & x_1, x_2 \geq 0 \;. \end{aligned}$$
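One way to sanity-check a hand execution of SIMPLEX on this exercise is to feed the same data to an off-the-shelf solver (assumes SciPy; linprog minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

c = [-18, -12.5]                     # maximize 18*x1 + 12.5*x2
A_ub = [[1, 1], [1, 0], [0, 1]]      # x1 + x2 <= 20,  x1 <= 12,  x2 <= 16
b_ub = [20, 12, 16]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print("optimal value:", -res.fun, "at x =", res.x)
```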


29.3-6
Solve the following linear program using SIMPLEX:

$$\begin{aligned} \text{maximize} \quad & 5x_1 - 3x_2 \\ \text{subject to} \quad & x_1 - x_2 \leq 1 \\ & 2x_1 + x_2 \leq 2 \\ & x_1, x_2 \geq 0 \;. \end{aligned}$$

29.3-7
Solve the following linear program using SIMPLEX:

$$\begin{aligned} \text{minimize} \quad & x_1 + x_2 + x_3 \\ \text{subject to} \quad & 2x_1 + 7.5x_2 + 3x_3 \geq 10000 \\ & 20x_1 + 5x_2 + 10x_3 \geq 30000 \\ & x_1, x_2, x_3 \geq 0 \;. \end{aligned}$$

29.3-8
In the proof of Lemma 29.5, we argued that there are at most $\binom{m+n}{n}$ ways to choose a set $B$ of basic variables. Give an example of a linear program in which there are strictly fewer than $\binom{m+n}{n}$ ways to choose the set $B$.

29.4 Duality

We have proven that, under certain assumptions, SIMPLEX terminates. We have not yet shown that it actually finds an optimal solution to a linear program, however. In order to do so, we introduce a powerful concept called linear-programming duality.

Duality enables us to prove that a solution is indeed optimal. We saw an example of duality in Chapter 26 with Theorem 26.6, the max-flow min-cut theorem. Suppose that, given an instance of a maximum-flow problem, we find a flow $f$ with value $|f|$. How do we know whether $f$ is a maximum flow? By the max-flow min-cut theorem, if we can find a cut whose value is also $|f|$, then we have verified that $f$ is indeed a maximum flow. This relationship provides an example of duality: given a maximization problem, we define a related minimization problem such that the two problems have the same optimal objective values.

Given a linear program in which the objective is to maximize, we shall describe how to formulate a dual linear program in which the objective is to minimize and


whose optimal value is identical to that of the original linear program. When referring to dual linear programs, we call the original linear program the primal.

Given a primal linear program in standard form, as in (29.16)–(29.18), we define the dual linear program as

$$\text{minimize} \quad \sum_{i=1}^{m} b_i y_i \tag{29.83}$$
$$\text{subject to} \quad \sum_{i=1}^{m} a_{ij} y_i \geq c_j \quad \text{for } j = 1, 2, \ldots, n \;, \tag{29.84}$$
$$y_i \geq 0 \quad \text{for } i = 1, 2, \ldots, m \;. \tag{29.85}$$

To form the dual, we change the maximization to a minimization, exchange the roles of coefficients on the right-hand sides and the objective function, and replace each less-than-or-equal-to by a greater-than-or-equal-to. Each of the $m$ constraints in the primal has an associated variable $y_i$ in the dual, and each of the $n$ constraints in the dual has an associated variable $x_j$ in the primal. For example, consider the linear program given in (29.53)–(29.57). The dual of this linear program is

$$\begin{aligned} \text{minimize} \quad & 30y_1 + 24y_2 + 36y_3 && (29.86) \\ \text{subject to} \quad & y_1 + 2y_2 + 4y_3 \geq 3 && (29.87) \\ & y_1 + 2y_2 + y_3 \geq 1 && (29.88) \\ & 3y_1 + 5y_2 + 2y_3 \geq 2 && (29.89) \\ & y_1, y_2, y_3 \geq 0 \;. && (29.90) \end{aligned}$$
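A hedged numeric check (assumes SciPy): taking the primal coefficients that can be read off the dual above by reversing the transformation just described, objective $(3, 1, 2)$, constraint rows $(1, 1, 3)$, $(2, 2, 5)$, $(4, 1, 2)$, and right-hand sides $(30, 24, 36)$, the primal and its dual attain the same optimal objective value, as Theorem 29.10 promises.

```python
from scipy.optimize import linprog

A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]]   # primal constraint matrix
b = [30, 24, 36]                        # primal right-hand sides
c = [3, 1, 2]                           # primal objective coefficients

# Primal: maximize c.x subject to A x <= b, x >= 0 (negate c: linprog minimizes).
primal = linprog([-cj for cj in c], A_ub=A, b_ub=b, method="highs")

# Dual: minimize b.y subject to A^T y >= c, y >= 0 (flip signs to get <= constraints).
neg_AT = [[-A[i][j] for i in range(3)] for j in range(3)]
dual = linprog(b, A_ub=neg_AT, b_ub=[-cj for cj in c], method="highs")

print("primal optimum:", -primal.fun)   # the two printed values should agree
print("dual   optimum:", dual.fun)
```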

We shall show in Theorem 29.10 that the optimal value of the dual linear program is always equal to the optimal value of the primal linear program. Furthermore, the simplex algorithm actually implicitly solves both the primal and the dual linear programs simultaneously, thereby providing a proof of optimality.

We begin by demonstrating weak duality, which states that any feasible solution to the primal linear program has a value no greater than that of any feasible solution to the dual linear program.

Lemma 29.8 (Weak linear-programming duality)
Let $\bar{x}$ be any feasible solution to the primal linear program in (29.16)–(29.18) and let $\bar{y}$ be any feasible solution to the dual linear program in (29.83)–(29.85). Then, we have

$$\sum_{j=1}^{n} c_j \bar{x}_j \leq \sum_{i=1}^{m} b_i \bar{y}_i \;.$$


Proof We have

$$\begin{aligned}
\sum_{j=1}^{n} c_j \bar{x}_j &\leq \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} \bar{y}_i \right) \bar{x}_j && \text{(by inequalities (29.84))} \\
&= \sum_{i=1}^{m} \left( \sum_{j=1}^{n} a_{ij} \bar{x}_j \right) \bar{y}_i \\
&\leq \sum_{i=1}^{m} b_i \bar{y}_i && \text{(by inequalities (29.17))} \;.
\end{aligned}$$
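As a tiny sanity check of the lemma on the example data used in the check above (the feasible points below are chosen by hand and are assumptions, not output of SIMPLEX):

```python
A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]]
b = [30, 24, 36]
c = [3, 1, 2]

x_bar = [1, 1, 1]   # feasible for the primal: each row of A dotted with x_bar is <= b
y_bar = [1, 1, 1]   # feasible for the dual: each column of A dotted with y_bar is >= c

primal_value = sum(cj * xj for cj, xj in zip(c, x_bar))   # 6
dual_value = sum(bi * yi for bi, yi in zip(b, y_bar))     # 90
assert primal_value <= dual_value                         # Lemma 29.8
print(primal_value, "<=", dual_value)
```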

Corollary 29.9
Let $\bar{x}$ be a feasible solution to a primal linear program $(A, b, c)$, and let $\bar{y}$ be a feasible solution to the corresponding dual linear program. If

$$\sum_{j=1}^{n} c_j \bar{x}_j = \sum_{i=1}^{m} b_i \bar{y}_i \;,$$

then $\bar{x}$ and $\bar{y}$ are optimal solutions to the primal and dual linear programs, respectively.

Proof By Lemma 29.8, the objective value of a feasible solution to the primal cannot exceed that of a feasible solution to