Publisher: Morgan Kaufmann
Publication date: 2011-2-21
Pages: 824
Price: USD 89.95
Binding: Hardcover
ISBN: 9780120884780
Description · · · · · ·
This entirely revised second edition of Engineering a Compiler is full of technical updates and new material covering the latest developments in compiler technology. In this comprehensive text you will learn important techniques for constructing a modern compiler. Leading educators and researchers Keith Cooper and Linda Torczon combine basic principles with pragmatic insights from their experience building state-of-the-art compilers. They will help you fully understand important techniques such as compilation of imperative and object-oriented languages, construction of static single-assignment form, instruction scheduling, and graph-coloring register allocation.
In-depth treatment of algorithms and techniques used in the front end of a modern compiler
Focus on code optimization and code generation, the primary areas of recent research and development
Improvements in presentation including conceptual overviews for each chapter, summaries and review questions for sections, and prominent placement of definitions for new terms
Examples drawn from several different programming languages
About the Authors · · · · · ·
Keith D. Cooper is the Doerr Professor of Computational Engineering at Rice University. He has worked on a broad collection of problems in optimization of compiled code, including interprocedural data-flow analysis and its applications, value numbering, algebraic reassociation, register allocation, and instruction scheduling. His recent work has focused on a fundamental re-examination of the structure and behavior of traditional compilers. He has taught a variety of courses, from introductory programming at the undergraduate level through code optimization at the graduate level. He is a Fellow of the ACM.
Linda Torczon, Senior Research Scientist, Department of Computer Science at Rice University, is a principal investigator on the Platform-Aware Compilation Environment project (PACE), a DARPA-sponsored project that is developing an optimizing compiler environment which automatically adjusts its optimizations and strategies to new platforms. From 1990 to 2000, Dr. Torczon served as executive director of the Center for Research on Parallel Computation (CRPC), a National Science Foundation Science and Technology Center. She also served as the executive director of HiPerSoft, of the Los Alamos Computer Science Institute, and of the Virtual Grid Application Development Software Project (VGrADS).
Reviews of Engineering a Compiler, Second Edition · · · · · · ( all 5 )

牧野飞霜 (loves reading and seeks thorough understanding)
So compilers and runtimes really are a grand synthesis: every kind of hard problem and sophisticated technique turns up in them.
Compiler construction is a complex task. A good compiler combines ideas from formal language theory, from the study of algorithms, from artificial intelligence, from systems design, from computer architecture, and from the theory of programming languages and applies them to the problem of translating a program. A compiler brings together greedy algorithms, heuristic techniques, graph algorithms, dynamic programming, DFAs and NFAs, fixed-point algorithms, synchronization and locality, allocation and naming, and pipeline management.
2015-10-18 22:46
牧野飞霜 (loves reading and seeks thorough understanding)
If a field contains NP-complete problems, then in the long run, since no optimal solution can be found in bounded time, each concrete problem gets its own family of approximations, and success depends heavily on human ingenuity. If a field has no NP-complete problems, it means mature algorithms and third-party libraries can handle the vast majority of cases. Sadly, most programmers do not care what an NP-complete problem is; they only care about the latest "new" technologies.
On the other side, compiler construction exposes complex problems that defy good solutions. The back end of a compiler for a modern processor approximates the solution to two or more interacting NP-complete problems (instruction scheduling, register allocation, and, perhaps, instruction and data placement). These NP-complete problems, however, look easy next to problems such as algebraic reassociation of expressions (see, for example, Figure 7.1). This problem admits a huge number of solutions; to make matters worse, the desired solution depends on context in both the compiler and the application code. As the compiler approximates the solutions to such problems, it faces constraints on compile time and available memory. A good compiler artfully blends theory, practical knowledge, engineering, and experience.
2015-10-06 17:00
A type system associates with each value in the program some textual name, a type, that represents a set of common properties held by all values of that type. The definition of a programming language specifies interactions between objects of the same type, such as legal operations on values of a type, and between objects of different type, such as mixed-type arithmetic operations. A well-designed type system can increase the expressiveness of a programming language, allowing safe use of features such as overloading. It can expose subtle errors in a program long before they become puzzling runtime errors or wrong answers. It can let the compiler avoid runtime checks that waste time and space. A type system consists of a set of base types, rules for constructing new types from existing ones, a method for determining equivalence of two types, and rules for inferring the types of each expression in a program. The notions of base types, constructed types, and type equivalence should be familiar to anyone who has programmed in a high-level language. Type inference plays a critical role in compiler implementation.
2014-10-18 13:35
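The components named in this excerpt, base types, rules for constructing new types, and a method for deciding equivalence, can be sketched in a few lines. This is a minimal illustration using structural equivalence, one common choice (the book also discusses name equivalence); all type names here are invented for the example:

```python
# Hypothetical sketch: base and constructed types plus a structural
# equivalence test. Not the book's implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:                 # base types: int, real, char, ...
    name: str

@dataclass(frozen=True)
class Array:                # a constructed type: array of some element type
    elem: object

@dataclass(frozen=True)
class Pair:                 # a constructed type: product of two types
    left: object
    right: object

def equivalent(a, b):
    """Structural equivalence: same constructor, equivalent components."""
    if type(a) is not type(b):
        return False
    if isinstance(a, Base):
        return a.name == b.name
    if isinstance(a, Array):
        return equivalent(a.elem, b.elem)
    return equivalent(a.left, b.left) and equivalent(a.right, b.right)

INT = Base("int")
print(equivalent(Array(Pair(INT, INT)), Array(Pair(INT, INT))))   # True
print(equivalent(INT, Base("real")))                              # False
```

Under name equivalence, by contrast, two types would be equivalent only if they refer to the same declared type name, so the recursive comparison above would be replaced by an identity check on declarations.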
SECTION REVIEW Given a regular expression, we can derive a minimal DFA to recognize the language specified by the RE using the following steps: (1) apply Thompson's construction to build an NFA for the RE; (2) use the subset construction to derive a DFA that simulates the behavior of the RE; and (3) use Hopcroft's algorithm to identify equivalent states in the DFA and construct a minimal DFA. This trio of constructions produces an efficient recognizer for any language that can be specified with an RE. Both the subset construction and the DFA minimization algorithm are fixed-point computations. They are characterized by repeated application of a monotone function to some set; the properties of the domain play an important role in reasoning about the termination and complexity of these algorithms. We will see more fixed-point computations in later chapters ...
This section discusses three implementation strategies for converting a DFA into executable code: a table-driven scanner, a direct-coded scanner, and a hand-coded scanner. All of these scanners operate in the same manner, by simulating the DFA. They repeatedly read the next character in the input and simulate the DFA transition caused by that character. This process stops when the DFA recognizes a word. As described in the previous section, that occurs when the current state, s, has no outbound transition on the current input character. If s is an accepting state, the scanner recognizes the word and returns a lexeme and its syntactic category to the calling procedure. If s is a nonaccepting state, the scanner must determine whether or not it passed through an accepting state on the way to s. If the scanner did encounter an accepting state, it should roll back its internal state and its input stream to that point and report success. If it did not, it should report the failure. These three implementation strategies, table driven, direct coded, and hand coded, differ in the details of their runtime costs. However, they all have the same asymptotic complexity—constant cost per character, plus the cost of roll back. The differences in the efficiency of well-implemented scanners change the constant costs per character but not the asymptotic complexity of scanning.
2014-10-09 13:52
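The table-driven simulation with rollback described in this excerpt can be sketched as follows. The DFA (which recognizes unsigned integers), its transition table, and the category names are hypothetical examples, not drawn from the book's figures:

```python
# Hypothetical sketch of a table-driven scanner with rollback.
# DFA for unsigned integers: [0-9][0-9]*
ACCEPTING = {"s1": "integer"}          # accepting state -> syntactic category
TRANSITIONS = {                        # delta(state, char class) -> next state
    ("s0", "digit"): "s1",
    ("s1", "digit"): "s1",
}

def char_class(c):
    return "digit" if c.isdigit() else "other"

def next_token(text, pos):
    """Simulate the DFA from `pos`; on failure, roll back to the
    last accepting state seen (the longest-match rule)."""
    state, i = "s0", pos
    last_accept = None                 # (end position, category)
    while i < len(text):
        nxt = TRANSITIONS.get((state, char_class(text[i])))
        if nxt is None:                # no outbound transition: stop
            break
        state, i = nxt, i + 1
        if state in ACCEPTING:         # remember the rollback point
            last_accept = (i, ACCEPTING[state])
    if last_accept is None:
        return None                    # never passed an accepting state
    end, category = last_accept
    return text[pos:end], category     # lexeme and its category

print(next_token("42+x", 0))   # ('42', 'integer')
```

A direct-coded scanner would replace the dictionary lookup with explicit branches per state, and a hand-coded one might further specialize buffering and character classification; as the excerpt notes, these choices change the constant per character, not the asymptotic cost.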


COMPILER CONSTRUCTION IS ENGINEERING A typical compiler has a series of passes that, together, translate code from some source language into some target language. Along the way, the compiler uses dozens of algorithms and data structures. The compiler writer must select, for each step in the process, an appropriate solution. A successful compiler executes an unimaginable number of times. Consider the total number of times that the GCC compiler has run. Over GCC's lifetime, even small inefficiencies add up to a significant amount of time. The savings from good design and implementation accumulate over time. Thus, the compiler writer must pay attention to compile-time costs, such as the asymptotic complexity of algorithms, the actual running time of the implementation, and the space used by data structures. The compiler writer should have in mind a budget for how much time the compiler will spend on its various tasks. For example, scanning and parsing are two problems for which efficient algorithms abound. Scanners recognize and classify words in time proportional to the number of characters in the input program. For a typical programming language, a parser can build derivations in time proportional to the length of the derivation. (The restricted structure of programming languages makes efficient parsing possible.) Because efficient and effective techniques exist for scanning and parsing, the compiler writer should expect to spend just a small fraction of compile time on these tasks. By contrast, optimization and code generation contain several problems that require more time. Many of the algorithms that we will examine for program analysis and optimization will have complexities greater than O(n). Thus, algorithm choice in the optimizer and code generator has a larger impact on compile time than it does in the compiler's front end. The compiler writer may need to trade precision of analysis and effectiveness of optimization against increases in compile time.
He or she should make such decisions consciously and carefully.
2014-10-08 18:37
Formally, a finite automaton (FA) is a five-tuple (S, Σ, δ, s0, SA), where
1. S is the finite set of states in the recognizer, along with an error state se.
2. Σ is the finite alphabet used by the recognizer. Typically, Σ is the union of the edge labels in the transition diagram.
3. δ(s, c) is the recognizer's transition function. It maps each state s ∈ S and each character c ∈ Σ into some next state. In state si with input character c, the FA takes the transition si →c δ(si, c).
4. s0 ∈ S is the designated start state.
5. SA is the set of accepting states, SA ⊆ S. Each state in SA appears as a double circle in the transition diagram.
...
For a given RE, r, we denote the language that it specifies as L(r). An RE is built up from three basic operations:
1. Alternation The alternation, or union, of two sets of strings, R and S, denoted R | S, is {x | x ∈ R or x ∈ S}.
2. Concatenation The concatenation of two sets R and S, denoted RS, contains all strings formed by prepending an element of R onto one from S, or {xy | x ∈ R and y ∈ S}.
3. Closure The Kleene closure of a set R, denoted R*, is the union of R^i for i from 0 to ∞. This is just the union of the concatenations of R with itself, zero or more times.
...
Regular expressions are closed under many operations—that is, if we apply the operation to an RE or a collection of REs, the result is an RE. Obvious examples are concatenation, union, and closure. The concatenation of two REs x and y is just xy. Their union is x | y. The Kleene closure of x is just x*. From the definition of an RE, all of these expressions are also REs. These closure properties play a critical role in the use of REs to build scanners. ...
A scanner for English could use FA-based techniques to recognize potential words, since all English words are drawn from a restricted alphabet. After that, however, it must look up the prospective word in a dictionary to determine if it is, in fact, a word. If the word has a unique part of speech, dictionary lookup will also resolve that issue.
However, many English words can be classified with several parts of speech. Examples include buoy and stress; both can be either a noun or a verb. For these words, the part of speech depends on the surrounding context. In some cases, understanding the grammatical context suffices to classify the word. In other cases, it requires an understanding of meaning, for both the word and its context. In contrast, the words in a programming language are almost always specified lexically. Thus, any string in [1...9][0...9]* is a positive integer. The RE [a...z]([a...z] | [0...9])* defines a subset of the Algol identifiers; arz, are, and art are all identifiers, with no lookup needed to establish the fact. To be sure, some identifiers may be reserved as keywords. However, these exceptions can be specified lexically, as well. No context is required. This property results from a deliberate decision in programming language design. The choice to make spelling imply a unique part of speech simplifies scanning, simplifies parsing, and, apparently, gives up little in the expressiveness of the language. Some languages have allowed words with dual parts of speech—for example, PL/I has no reserved keywords. The fact that more recent languages abandoned the idea suggests that the complications outweighed the extra linguistic flexibility. ...
This point is critical: the cost of operating an FA is proportional to the length of the input, not to the length or complexity of the RE that generates the FA. More complex REs may produce FAs with more states that, in turn, need more space. The cost of generating an FA from an RE may also rise with increased complexity in the RE. But, the cost of FA operation remains one transition per input character.
2014-10-08 22:05
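The five-tuple definition quoted above maps directly onto code. This is a minimal sketch in which the toy FA (for the RE ab*) and every name are invented for illustration; it also shows the point about cost, since the loop performs exactly one transition per input character:

```python
# Hypothetical sketch of a finite automaton as the five-tuple
# (S, Sigma, delta, s0, SA), for the RE ab*.
S = {"s0", "s1", "se"}                 # states, including error state se
SIGMA = {"a", "b"}                     # alphabet: the edge labels
DELTA = {                              # transition function delta(s, c)
    ("s0", "a"): "s1",
    ("s1", "b"): "s1",
}
S0 = "s0"                              # designated start state
SA = {"s1"}                            # accepting states, SA a subset of S

def accepts(word):
    """One transition per input character: cost is proportional to the
    length of the input, not to the complexity of the RE."""
    state = S0
    for c in word:
        state = DELTA.get((state, c), "se")   # missing entry -> error state
        if state == "se":
            return False
    return state in SA

print([w for w in ["a", "ab", "abbb", "b", "ba", ""] if accepts(w)])
# ['a', 'ab', 'abbb']
```

The closure operations on REs described in the excerpt correspond, on the automaton side, to Thompson's construction, which builds an NFA for an alternation, concatenation, or Kleene closure out of the NFAs for its operands.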

Other editions of this book · · · · · · ( all 5 )
 Posts & Telecom Press edition, 2012-12 / 78 readers / for sale
 China Machine Press edition, 2006-2 / 16 readers
 Morgan Kaufmann edition, 2003-11-10 / 11 readers
 Morgan Kaufmann edition, 2003-12-02
Recommended in these Douban lists · · · · · · ( all )
 Learning JVM implementation from the surface to the internals (RednaxelaFX)
 Compilers (小天)
 compiling (雨果僧)
 Compiler Principle (FGDBBB)
 Computer science fundamentals (轻装前行)
0 useful · Michael · 2016-10-23
I bought the Chinese edition first but found it unsatisfying, so I ended up buying the original. I have read it several times; the theory is presented really well.
0 useful · Florian · 2013-03-14
Modern compiler design, EAC for short.
1 useful · Sinclair · 2017-06-26
The first half of this book can be skimmed, and the second half could be condensed further. The Chapter Notes in each chapter are good, with plenty of references. Overall, a decent introductory book that serves well as an outline of the field.
0 useful · 阿丹 · 2016-06-06
A better fit than the Dragon Book; the front-end material really does not need to be that long, and this book is more balanced.
0 useful · eataix · 2014-06-10
To be honest, I only flipped through a few chapters...
0 useful · 堕落天使 · 2019-06-02
Accessible yet thorough. But to understand it deeply, you still have to write code yourself.
0 useful · jakeheng · 2018-11-10
Hard.
0 useful · 苍月 · 2019-03-20
A roadmap on compilers.