Authors: Jerome H. Saltzer / M. Frans Kaashoek
Publisher: Morgan Kaufmann
Subtitle: An Introduction
Published: 2009-7-7
Pages: 560
List price: USD 82.95
Binding: Paperback
ISBN: 9780123749574
This text identifies, examines, and illustrates fundamental concepts in computer system design that are common across operating systems, networks, database systems, distributed systems, programming languages, software engineering, security, fault tolerance, and architecture. Through carefully analyzed case studies from each of these disciplines, it demonstrates how to apply these concepts to tackle practical system design problems.
To support the focus on design, the text identifies and explains abstractions that have proven successful in practice, such as remote procedure call, client/service organization, file systems, data integrity, consistency, and authenticated messages. Most computer systems are built using a handful of such abstractions. The text describes how these abstractions are implemented, demonstrates how they are used in different systems, and prepares the reader to apply them in future designs.
Features:
Concepts of computer system design guided by fundamental principles.
Cross-cutting approach that identifies abstractions common to networking, operating systems, transaction systems, distributed systems, architecture, and software engineering.
Case studies that make the abstractions real: naming (DNS and the URL); file systems (the UNIX file system); clients and services (NFS); virtualization (virtual machines); scheduling (disk arms); security (TLS).
Numerous pseudocode fragments that provide concrete examples of abstract concepts.
Extensive support. The authors and MIT OpenCourseWare provide on-line, free of charge, open educational resources, including additional chapters, course syllabi, board layouts and slides, lecture videos, and an archive of lecture schedules, class assignments, and design projects.
2015-01-07 13:39 (2 likes)
Incommensurate scaling: as a system increases in size or speed, not all parts of it follow the same scaling rules, so things stop working.
Quoted from Chapter 1
As a system grows in scale, its parts do not all grow at the same rate. An organism's weight grows with the cube of its body length, while the load-bearing capacity of its bones grows only with the square, which explains why human and elephant anatomy are so different: an elephant's leg bones have to be far thicker than ours, or they could not carry its weight. The observation applies not only to spatial scale but to time scale as well. …
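A compact way to write the square-cube argument above, taking weight to scale with volume and bone strength with cross-sectional area for a characteristic body length L:

W \propto L^{3}, \qquad S \propto L^{2} \quad\Longrightarrow\quad \frac{W}{S} \propto L

The load carried per unit of bone cross-section therefore grows linearly with size, so a skeleton that works at human scale fails at elephant scale unless the bones become disproportionately thick.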
2012-06-07 11:33
We now have seen examples of two forms of atomicity: all-or-nothing and before-or-after. These two forms have a common underlying goal: to hide the internal structure of an action. With that insight, it becomes apparent that atomicity is really a unifying concept:

An action is atomic if there is no way for a higher layer to discover the internal structure of its implementation.

This description is really the fundamental definition of atomicity. From it, one can immediately draw two important consequences, corresponding to all-or-nothing atomicity and to before-or-after atomicity:

1. From the point of view of a procedure that invokes an atomic action, the atomic action always appears either to complete as anticipated, or to do nothing. This consequence is the one that makes atomic actions useful in recovering from failures.

2. From the point of view of a concurrent thread, an atomic action acts as though it occurs either completely before or completely after every other concurrent atomic action. This consequence is the one that makes atomic actions useful for coordinating concurrent threads.

These two consequences are not fundamentally different. They are simply two perspectives, the first from other modules within the thread that invokes the action, the second from other threads. Both points of view follow from the single idea that the internal structure of the action is not visible outside of the module that implements the action. Such hiding of internal structure is the essence of modularity, but atomicity is an exceptionally strong form of modularity. Atomicity hides not just the details of which steps form the atomic action, but the very fact that it has structure. There is a kinship between atomicity and other system-building techniques such as data abstraction and client/server organization. Data abstraction has the goal of hiding the internal structure of data; client/server organization has the goal of hiding the internal structure of major subsystems. Similarly, atomicity has the goal of hiding the internal structure of an action. All three are methods of enforcing industrial-strength modularity, and thereby of guaranteeing absence of unanticipated interactions among components of a complex system.
Quoted from All-or-Nothing and Before-or-After Atomicity
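As a concrete illustration of the two consequences, here is a minimal Python sketch built around a hypothetical in-memory Bank (the names and structure are illustrative, not from the book): the lock makes each transfer appear entirely before or entirely after every other transfer, and validating everything before touching either balance keeps each transfer all-or-nothing.

import threading

class Bank:
    """Toy in-memory bank used only to illustrate atomic actions."""

    def __init__(self):
        self.balances = {"A": 100, "B": 100}
        self._lock = threading.Lock()   # serializes transfers: before-or-after

    def transfer(self, src, dst, amount):
        # Before-or-after: holding the lock makes this transfer appear to occur
        # entirely before or entirely after every other concurrent transfer.
        with self._lock:
            # All-or-nothing: check everything first, then apply both updates;
            # a refused transfer leaves the state completely untouched.
            if src not in self.balances or dst not in self.balances:
                raise KeyError("unknown account; no partial effect")
            if amount <= 0 or self.balances[src] < amount:
                raise ValueError("transfer refused; no partial effect")
            self.balances[src] -= amount
            self.balances[dst] += amount

bank = Bank()
bank.transfer("A", "B", 30)
print(bank.balances)   # {'A': 70, 'B': 130}

A caller therefore observes either both balances updated or neither, and a concurrent thread never observes the intermediate state in which only one balance has changed.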
2012-06-01 15:12
8.6.1 Design Strategies and Design Principles

Standing back from the maze of detail about redundancy, we can identify and abstract three particularly effective design strategies:

• N-modular redundancy is a simple but powerful tool for masking failures and increasing availability, and it can be used at any convenient level of granularity.
• Fail-fast modules provide a sweeping simplification of the problem of containing errors. When containment can be described simply, reasoning about fault tolerance becomes easier.
• Pair-and-compare allows fail-fast modules to be constructed from commercial, off-the-shelf components.

Standing back still further, it is apparent that several general design principles are directly applicable to fault tolerance. In the formulation of the fault-tolerance design process in Section 8.1.2, we invoked be explicit, design for iteration, keep digging, and the safety margin principle, and in exploring different fault tolerance techniques we have seen several examples of adopt sweeping simplifications. One additional design principle that applies to fault tolerance (and also, as we will see in Chapter 11 [on-line], to security) comes from experience, as documented in the case studies of Section 8.8:

Avoid rarely used components
Deterioration and corruption accumulate unnoticed—until the next use.

In applying these design principles, it is important to consider the threats, the consequences, the environment, and the application.

There is a potential tension between error masking and an end-to-end argument. An end-to-end argument suggests that a subsystem need not do anything about errors and should not do anything that might compromise other goals such as low latency, high throughput, or low cost. The subsystem should instead let the higher layer system of which it is a component take care of the problem, because only the higher layer knows whether or not the error matters and what is the best course of action to take.

There are two counterarguments to that line of reasoning:

• Ignoring an error allows it to propagate, thus contradicting the modularity goal of error containment. This observation points out an important distinction between error detection and error masking. Error detection and containment must be performed where the error happens, so that the error does not propagate wildly. Error masking, in contrast, presents a design choice: masking can be done locally, or the error can be handled by reporting it at the interface (that is, by making the module design fail-fast) and allowing the next higher layer to decide what masking action—if any—to take.
• The lower layer may know the nature of the error well enough that it can mask it far more efficiently than the upper layer. The specialized burst error correction codes used on DVDs come to mind. They are designed specifically to mask errors caused by scratches and dust particles, rather than random bit-flips. So we have a trade-off between the cost of masking the fault locally and the cost of letting the error propagate and handling it in a higher layer.

These two points interact: when an error propagates it can contaminate otherwise correct data, which can increase the cost of masking and perhaps even render masking impossible. The result is that when the cost is small, error masking is usually done locally. (That is assuming that masking is done at all. Many personal computer designs omit memory error masking. Section 8.8.1 discusses some of the reasons for this design decision.)

A closely related observation is that when a lower layer masks a fault it is important that it also report the event to a higher layer, so that the higher layer can keep track of how much masking is going on and thus how much failure tolerance there remains. Reporting to a higher layer is a key aspect of the safety margin principle.
引自 Chapter 8.6 Wrapping up Reliability
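A toy Python sketch of N-modular redundancy with a majority voter (the function names and the failure model are illustrative, not from the book): a minority of faulty replicas is masked, and when no majority exists the voter fails fast by reporting the error to its caller rather than guessing.

from collections import Counter

def nmr(replicas, *args):
    """Run every replica and return the majority answer, masking a minority
    of faulty replicas; raise (fail fast) if no majority exists."""
    outputs = [replica(*args) for replica in replicas]
    answer, votes = Counter(outputs).most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise RuntimeError("no majority; failure cannot be masked")
    return answer

def correct_square(x):
    return x * x

def faulty_square(x):
    return x * x + 1   # models a replica that produces a wrong answer

# Triple-modular redundancy (N = 3) with one faulty replica.
print(nmr([correct_square, correct_square, faulty_square], 5))   # 25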
2012-06-01 11:07
Incidentally, the strategy of employing multiple design teams can also be applied to hardware replicas, with a goal of increasing the independence of the replicas by reducing the chance of replicated design errors and systematic manufacturing defects. Much of software engineering is devoted to a different approach: devising specification and programming techniques that avoid faults in the first place, and test techniques that systematically root out faults so that they can be repaired once and for all before deploying the software. This approach, sometimes called valid construction, can dramatically reduce the number of software faults in a delivered system, but because it is difficult both to completely specify and to completely test a system, some faults inevitably remain. Valid construction is based on the observation that software, unlike hardware, is not subject to wear and tear, so if it is once made correct, it should stay that way. Unfortunately, this observation can turn out to be wishful thinking, first because it is hard to make software correct, and second because it is nearly always necessary to make changes after installing a program because the requirements, the environment surrounding the program, or both, have changed. There is thus a potential for tension between valid construction and the principle that one should design for iteration. Worse, later maintainers and reworkers often do not have a complete understanding of the ground rules that went into the original design, so their work is likely to introduce new faults for which the original designers did not anticipate providing tests. Even if the original design is completely understood, when a system is modified to add features that were not originally planned, the original ground rules may be subjected to some violence.

Software faults more easily creep into areas that lack systematic design.

8.5.2 Tolerating Software (and other) Faults by Separating State

Designers of reliable systems usually assume that, despite the best efforts of programmers, there will always be a residue of software faults, just as there is also always a residue of hardware, operation, and environment faults. The response is to develop a strategy for tolerating all of them. Software adds the complication that the current state of a running program tends to be widely distributed. Parts of that state may be in non-volatile storage, while other parts are in temporary variables held in volatile memory locations, processor registers, and kernel tables. This wide distribution of state makes containment of errors problematic. As a result, when an error occurs, any strategy that involves stopping some collection of running threads, tinkering to repair the current state (perhaps at the same time replacing a buggy program module), and then resuming the stopped threads is usually unrealistic.

In the face of these observations, a programming discipline has proven to be effective: systematically divide the current state of a running program into two mutually exclusive categories and separate the two categories with a firewall. The two categories are:

• State that the system can safely abandon in the event of a failure.
• State whose integrity the system should preserve despite failure.

Upon detecting a failure, the plan becomes to abandon all state in the first category and instead concentrate just on maintaining the integrity of the data in the second category. An important part of the strategy is a sweeping simplification: classify the state of running threads (that is, the thread table, stacks, and registers) as abandonable. When a failure occurs, the system abandons the thread or threads that were running at the time and instead expects a restart procedure, the system operator, or the individual user to start a new set of threads with a clean slate. The new thread or threads can then, working with only the data found in the second category, verify the integrity of that data and return to normal operation. The primary challenge then becomes to build a firewall that can protect the integrity of the second category of data despite the failure.
Quoted from Chapter 8.5.1 Tolerating Software Faults
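A small Python sketch of the separate-state discipline, with assumed details (the file name, checksum scheme, and helper names are hypothetical): everything in memory, including thread state, is treated as abandonable, while the state to preserve lives in a checksummed file that a clean-slate restart verifies before resuming normal operation.

import hashlib
import json
import os

STATE_FILE = "counter.state"   # the "preserve" category: durable and checksummed

def save_state(value):
    # Write the preserved state together with a checksum so that a later
    # restart can verify its integrity before trusting it.
    payload = json.dumps({"value": value})
    record = {"payload": payload,
              "digest": hashlib.sha256(payload.encode()).hexdigest()}
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(record, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, STATE_FILE)   # atomic rename: the old or the new state, never half of each

def recover_state():
    # The threads running at the time of the failure were abandoned; a fresh
    # thread rebuilds itself from the preserved category alone.
    if not os.path.exists(STATE_FILE):
        return 0
    with open(STATE_FILE) as f:
        record = json.load(f)
    if hashlib.sha256(record["payload"].encode()).hexdigest() != record["digest"]:
        raise RuntimeError("preserved state failed its integrity check")
    return json.loads(record["payload"])["value"]

value = recover_state()    # clean-slate restart
save_state(value + 1)      # normal operation updates only the durable state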
2012-04-02 08:43
Coping with complexity: Iteration
1. Design for iteration: easy to change
2. Document the assumptions
3. Take small steps
4. Don't rush to implementation
5. Plan for feedback: bug reports, etc.
6. Study failures rather than assign blame for them
7. Constantly be on guard to keep the overall design clean despite iterations; this needs foresight
8. Adopt sweeping simplifications
Quoted from Coping with complexity: Iteration
2012-04-02 08:45
Although the number of potential abstractions for computer system components is unlimited, remarkably the vast majority that actually appear in practice fall into one of three well-defined classes: the memory, the interpreter, and the communication link. These three abstractions are so fundamental that theoreticians compare computer algorithms in terms of the number of data items they must remember, the number of steps their interpreter must execute, and the number of messages they must communicate.

To meet the many requirements of different applications, system designers build layers on this fundamental base, but in doing so they do not routinely create completely different abstractions. Instead, they elaborate the same three abstractions, rearranging and repackaging them to create features that are useful and interfaces that are convenient for each application.
Quoted from Chapter 2 Elements of Computer System Organization
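A toy rendering of the three abstractions as Python classes (the class and method names are illustrative, not interfaces defined in the book): the memory remembers values, the interpreter executes the steps of a program, and the communication link carries messages.

class Memory:
    """Memory: remembers a value from WRITE until a later READ."""
    def __init__(self):
        self.cells = {}
    def write(self, name, value):
        self.cells[name] = value
    def read(self, name):
        return self.cells[name]

class Interpreter:
    """Interpreter: fetches and executes the steps of a program, one by one."""
    def run(self, program, memory):
        for op, name, value in program:
            if op == "STORE":
                memory.write(name, value)
            elif op == "ADD":
                memory.write(name, memory.read(name) + value)

class Link:
    """Communication link: carries messages from a sender to a receiver."""
    def __init__(self):
        self.queue = []
    def send(self, message):
        self.queue.append(message)
    def receive(self):
        return self.queue.pop(0)

mem, link = Memory(), Link()
Interpreter().run([("STORE", "x", 1), ("ADD", "x", 2)], mem)   # 2 steps to execute
link.send(mem.read("x"))    # 1 message, 1 remembered data item
print(link.receive())       # 3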
2012-05-21 15:55
1. Properties of Networks
1.1. The trio of fundamental physical properties:
- The speed of light is finite.
- Communication environments are hostile.
- Communication media have limited bandwidth.
- Different network links may thus have radically different data rates.
1.2. They are nearly always shared. The reasons are:
- Any-to-any connection is expensive: a dedicated link per pair needs roughly n * n links (n(n-1)/2 for a full mesh of n computers).
- The cost of communication has changed at a rate incommensurate with the cost of CPU and disk, which means the communication medium should be shared.
1.3. The wide range of parameter values.
The propagation times, data rates, and the number of communicating computers can each vary by seven or more orders of magnitude.
2. Isochronous and Asynchronous Multiplexing
2.1. The telephone system traditionally uses a line multiplexing technique known as isochronous (from Greek roots meaning "equally timed") communication.
2.2. When communicating data between two computers, a system designer is usually willing to forgo the guarantee of uniform data rate and uniform latency if in return an entire message can get through more quickly. Data communication networks achieve this trade-off by using what is called asynchronous multiplexing.
3. Packet Forwarding; Delay
Asynchronous communication links are usually organized in a packet-forwarding network structure, which introduces several kinds of delay (a worked example follows this note):
- Propagation delay.
- Transmission delay.
- Processing delay.
- Queuing delay.
4. Buffer Overflow and Discarded Packets
5. Duplicate Packets and Duplicate Suppression
6. Damaged Packets and Broken Links
7. Reordered Delivery
Because transmission links traverse hostile environments and must be considered fragile, a packet network usually has multiple interconnection paths, which may cause reordered delivery.
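A quick numeric sketch of the first two delay components, under assumed link parameters (the numbers are illustrative, not from the book); processing and queuing delay depend on load and implementation and are left out.

# One packet over a single link.
packet_bits  = 1_000 * 8        # 1,000-byte packet
data_rate    = 10e6             # assumed 10 megabit/s link
distance_m   = 3_000_000        # assumed 3,000 km path
signal_speed = 2e8              # roughly 2/3 of c in fiber or copper, m/s

transmission_delay = packet_bits / data_rate     # time to push all bits onto the link
propagation_delay  = distance_m / signal_speed   # time for a bit to travel the distance

print(f"transmission delay: {transmission_delay * 1e3:.2f} ms")   # 0.80 ms
print(f"propagation delay:  {propagation_delay * 1e3:.2f} ms")    # 15.00 ms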
0 useful xiaom 2020-12-30
- The exercises are great fun, especially for checking the design decisions in projects I have worked on myself. Worth reading repeatedly. - Will reread it if I get the chance; for me this is a matter of interest, curiosity, and mental models.
1 useful 对我就是那个谁 2014-11-17
Whoever below calls it bad or hard to read is talking nonsense; this is one of the best introductory books on system design, or architecture, that I have ever seen. The book may call itself an introduction, but what it covers goes far beyond the level of the run-of-the-mill Bay Area programmer.
4 useful icemelon 2016-07-11
Senior-year textbook at the School of Software at Fudan; the course was bought directly from MIT, where it is currently taught by Spark author Matei Zaharia. A systems design book that is both fun and hardcore.
0 useful 史努比 2019-09-03
Read chapters 8, 9, and 10 and am counting the book as finished; exhausting. Networking, operating systems, databases: read enough of it and it turns into a stew of implementation details. It does nothing for a working business developer's pay raise; strictly a hobby.
1 useful E-Neo 2020-03-17
Good; one thread runs through the whole book: Modularity, Abstraction, Layering, Hierarchy.
1 useful 东城(Tony) 2020-04-06
Just got halfway through the first half; the material is pretty lofty, especially the second half.
0 useful Quack 2019-09-12
The textbook for 6.033; teaches systems engineering in the abstract. I envy the people who get to take that course.