A hall-of-fame book; its lessons carry across languages.
# Java Concurrency in Practice Notes
Notes:
- Links
- [Java Concurrency in Practice (豆瓣)](https://book.douban.com/subject/1888733/)
- [Java Concurrency in Practice: Brian Goetz: 0785342349603: Amazon.com: Books](https://www.amazon.com/Java-Concurrency-Practice-Brian-Goetz/dp/0321349601)
- Genre
- Programming
- Topic
- Java Concurrency
- Audience
- Java developers
- Questions before reading
- TODO
- Questions while reading
- TODO
- Outline (reordered bottom-up by the reader)
- Concurrency introduction and Fundamentals
- Concurrency benefits: resource utilization / fairness / convenience
- Concurrency forms (added by the reader): process / thread / coroutine / actors / reactive
- Thread benefits and risks
- Benefits: exploiting multiple processors / simplicity of modeling (sequential) / simplified handling of async events / more responsive user interfaces (GUIs)
- Risks: thread safety / liveness / performance
- Amdahl's law
- $S \leq \frac{1}{F + \frac{1-F}{N}}$
- N is the number of processors; F is the fraction of the calculation that must be executed serially
- JSR 166
- Thread safety
- Definition: managing access to state, in particular shared, mutable state
- How to make it safe: don't share / make it immutable / synchronize
- Race condition
- Lock
- Liveness and Performance
- ⭐️ Sharing Objects
- Visibility: multiprocessor L1 cache; locks; `volatile`
- Publication and escape
- Confinement: ad hoc (semantic) / stack (function-local variables) / ThreadLocal
- Immutability: final / safe publication
- How to share: thread-confined / shared read-only / shared thread-safe (internal sync) / guarded (by the invoker)
- Composing Objects
- Designing a thread-safe class: `synchronized` / delegating thread safety
- Modifying an existing thread-safe class: client-side locking (be aware of which object is the lock) / composition
- Please document the synchronization policy
- Basic usage (built-in utilities in the JDK)
- Synchronized collections
- `synchronized`: Vector / Hashtable
- Be aware of `ConcurrentModificationException`
- Concurrent Collections
- ConcurrentHashMap / CopyOnWriteArrayList (`Arrays.copyOf` on write) / BlockingQueue (for producer/consumer; `put`, `take`, built on `Condition`'s `await` and `signal`) / Deque (double-ended, pronounced "deck")
- ⭐️ Synchronizers
- Latches: implemented by `CountDownLatch`; delays progress until an event (start or stop); `await()`, `countDown()`
- `FutureTask`: waits for completion; `java.util.concurrent.FutureTask#awaitDone` (CAS and timed wait)
- `Semaphore`: useful for resource pools
- Barriers: all threads must come together at a barrier point at the same time
- Others
- DCL(Double Check Lock)
- Explicit Lock
- ReentrantLock: performance, fairness
- ReadWriteLock: read, write, upgrade / downgrade
- Under the hood
- CAS
- `Condition`
- Signals: `notify` / `notifyAll`
- AQS: `acquireShared`
- Advanced usage (concurrency in the real world)
- Task Execution
- Task definition: an isolated block of work, `Runnable`
- Disadvantages of unbounded thread creation: request overhead, resource consumption, stability
- `Executor` framework; policies: task ordering / concurrency count / queue size / rejection policy / hooks
- `Executor` types: newFixedThreadPool / newCachedThreadPool / newSingleThreadExecutor / newScheduledThreadPool
- `CompletionService` = `Executor` + `BlockingQueue`
- Summary: decouple task submission from execution policy
- ⭐️ Cancellation / Shutdown
- Java does not provide a mechanism for safely forcing a thread to stop
- Types of cancellation: user request / time limit / application event / error
- Interruption: `interrupt` merely delivers a message; it sets the thread's interrupted status (`isInterrupted`)
- Policies: the task responds itself; a thread should be interrupted only by its owner; clean up on the way out
- Responding: propagate the exception, or restore the interruption status so code higher up can deal with it
- DO NOT swallow the exception in business logic
- Dealing with non-interruptible blocking: synchronous I/O, asynchronous I/O, `Selector`, intrinsic lock acquisition
- Stopping thread-based services: shut down with a loop and a condition (still deliver the interrupt); poison pills
- Exception handling: `UncaughtExceptionHandler`
- JVM Shutdown: hooks
- Thread Pools
- Execution policy requirements: dependent tasks / tasks exploiting thread confinement / response-time-sensitive tasks / tasks using ThreadLocal
- Sizing: (CPU-bound) $N_{cpu} + 1$
$$N_{\text{threads}} = N_{cpu} \cdot U_{cpu} \cdot \left(1 + \frac{W}{C}\right)$$
- `ThreadPoolExecutor`
- Queued tasks, sizing, saturation, `RejectedExecutionException`, ThreadFactory, extending
- GUI
- Sequential event processing: handle short tasks on the event thread, long-running tasks in the background
- shared data model
- Liveness
- Deadlock: lock ordering, dynamic ordering (induced order number), thread dumps
- Starvation / poor responsiveness / livelock (the mirror image of deadlock: no thread is blocked, yet no progress is made)
- Performance
- Two approaches: utilize existing processors more effectively; exploit additional processing resources when they become available
- measure first
- Thread Costs: tradeoff
- Context Switch
- Memory Synchronization
- Blocking
- Strategy
- Reducing lock contention: narrowing lock scope (code range) / reducing lock granularity (lock splitting) / lock striping / avoiding hot fields / alternatives to exclusive locks (RWLock) / monitoring CPU utilization / no object pooling / reducing context-switch overhead
- Testing
- Correctness / Resource management / Performance
- Atomic Variables and Nonblocking Synchronization
- CAS; the ABA problem (solvable with a version stamp)
- Nonblocking algorithms
- Java Memory Model
- Reordering
- Publication
- What it changed for me
- SKIP
- Other books mentioned
- SKIP
- Authors
- Brian Goetz: primary member of the JCP JSR 166 Expert Group
- Tim Peierls: primary member of the JCP JSR 166 Expert Group, JCP EG
- Joshua Bloch: Googler, formerly at Sun, Jolt Award winner, designed the JDK Collections framework
- Joseph Bowbeer: Java ME, JCP JSR 166 EG
- David Holmes: JSR 166 EG, java.util.concurrent author
- Doug Lea: SUNY Oswego, java.util.concurrent author
- Review
- Strengths
- Content organized top-down, with featured sample code.
- Weaknesses
- None
Questions:
- `LockSupport` relies on many native methods: `sun.misc.Unsafe#unpark`, `sun.misc.Unsafe#putObject(java.lang.Object, long, java.lang.Object)`
- Semaphore vs AtomicInteger?
- Fair/unfair acquisition, waiting, drain
- If a CAS fails, what happens?
- Nothing; the caller retries in a `while` loop, e.g. `sun.misc.Unsafe#getAndAddInt`
- What does `park` mean? `java.util.concurrent.locks.LockSupport#parkNanos(java.lang.Object, long)`
## Quotes
> For the past 30 years, computer performance has been driven by Moore's Law; from now on, it will be driven by Amdahl's Law. Writing code that effectively exploits multiple processors can be very challenging.
> Modern GUI frameworks, such as the AWT and Swing toolkits, replace the main event loop with an event dispatch thread (EDT).
> In other words, servlets need to be thread-safe.
> Writing thread-safe code is, at its core, about managing access to state, and in particular to shared, mutable state.
> We may talk about thread safety as if it were about code, but what we are really trying to do is protect data from uncontrolled concurrent access.
> There are three ways to fix it: Don't share the state variable across threads; Make the state variable immutable; or Use synchronization whenever accessing the state variable.
> At the heart of any reasonable definition of thread safety is the concept of correctness.
> A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
> Stateless objects are always thread-safe.
> A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads by the runtime; in other words, when getting the right answer relies on lucky timing.
> Not all race conditions are data races, and not all data races are race conditions, but they both can cause concurrent programs to fail in unpredictable ways.
> To ensure thread safety, check-then-act operations (like lazy initialization) and read-modify-write operations (like increment) must always be atomic.
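The read-modify-write requirement above can be met without a lock by using an atomic variable; a minimal sketch (the class and method names are my own):

```java
import java.util.concurrent.atomic.AtomicLong;

// A hit counter whose increment is a single atomic read-modify-write,
// so no check-then-act race is possible. Names here are illustrative.
public class SafeCounter {
    private final AtomicLong count = new AtomicLong();

    public long increment() {
        return count.incrementAndGet(); // atomic read-modify-write
    }

    public long get() {
        return count.get();
    }
}
```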
> Reentrancy means that locks are acquired on a per-thread rather than per-invocation basis.
> Reentrancy facilitates encapsulation of locking behavior, and thus simplifies the development of object-oriented concurrent code.
> Whenever you use locking, you should be aware of what the code in the block is doing and how likely it is to take a long time to execute.
> Synchronization also has another significant, and subtle, aspect: memory visibility.
> Volatile variables are not cached in registers or in caches where they are hidden from other processors, so a read of a volatile variable always returns the most recent write by any thread.
> Listing 3.4 illustrates a typical use of volatile variables: checking a status flag to determine when to exit a loop.
> You can use volatile variables only when all the following criteria are met: Writes to the variable do not depend on its current value, or you can ensure that only a single thread ever updates the value; The variable does not participate in invariants with other state variables; and Locking is not required for any other reason while the variable is being accessed.
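A minimal sketch of the volatile status-flag idiom the criteria above describe, in the pattern of the book's Listing 3.4 (identifiers are mine):

```java
// The volatile status-flag idiom: one thread sets the flag, another polls
// it to decide when to exit its loop. All three volatile criteria hold:
// the write does not depend on the current value, the flag participates in
// no invariants, and no locking is needed.
public class Worker implements Runnable {
    private volatile boolean asleep;

    public void stop() {
        asleep = true; // visible to the running thread without a lock
    }

    @Override
    public void run() {
        while (!asleep) {
            // count some sheep (placeholder for real work)
        }
    }
}
```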
> Accessing shared, mutable data requires using synchronization; one way to avoid this requirement is to not share.
> When an object is confined to a thread, such usage is automatically thread-safe even if the confined object itself is not \[CPJ 2.3.2\].
> this pattern of connection management implicitly confines the Connection to that thread for the duration of the request.
> In this case, you are confining the modification to a single thread to prevent race conditions, and the visibility guarantees for volatile variables ensure that other threads see the most up-to-date value.
> Using a non-thread-safe object in a within-thread context is still thread-safe.
> The thread-specific values are stored in the Thread object itself; when the thread terminates, the thread-specific values can be garbage collected.
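Per-thread storage can be sketched with `ThreadLocal`; the `RequestId` name below is hypothetical:

```java
// Thread confinement via ThreadLocal: each thread gets its own copy,
// stored on the Thread object itself and eligible for GC when the
// thread terminates.
public class RequestId {
    private static final ThreadLocal<String> ID =
            ThreadLocal.withInitial(() -> "unset");

    public static void set(String id) { ID.set(id); }

    public static String get() { return ID.get(); }
}
```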
> This is convenient in that it reduces the need to pass execution context information into every method, but couples any code that uses this mechanism to the framework.
> An object is immutable if: Its state cannot be modified after construction; All its fields are final;\[12\] and
> It is properly constructed (the this reference does not escape during construction).
> Immutable objects can still use mutable objects internally to manage their state, as illustrated by ThreeStooges in Listing 3.11.
> Program state stored in immutable objects can still be updated by "replacing" immutable objects with a new instance holding new state; the next section offers an example of this technique.\[13\]
> but they also have special semantics under the Java Memory Model. It is the use of final fields that makes possible the guarantee of initialization safety
> A properly constructed object can be safely published by: Initializing an object reference from a static initializer; Storing a reference to it into a volatile field or AtomicReference; Storing a reference to it into a final field of a properly constructed object; or Storing a reference to it into a field that is properly guarded by a lock.
> Objects that are not technically immutable, but whose state will not be modified after publication, are called effectively immutable.
> The most useful policies for using and sharing objects in a concurrent program are: Thread-confined. A thread-confined object is owned exclusively by and confined to one thread, and can be modified by its owning thread. Shared read-only. A shared read-only object can be accessed concurrently by multiple threads without additional synchronization, but cannot be modified by any thread. Shared read-only objects include immutable and effectively immutable objects. Shared thread-safe. A thread-safe object performs synchronization internally, so multiple threads can freely access it through its public interface without further synchronization. Guarded. A guarded object can be accessed only with a specific lock held. Guarded objects include those that are encapsulated within other thread-safe objects and published objects that are known to be guarded by a specific lock.
> Ownership implies control, but once you publish a reference to a mutable object, you no longer have exclusive control; at best, you might have "shared ownership".
> There are advantages to using a private lock object instead of an object's intrinsic lock (or any other publicly accessible lock).
> If a class has compound actions, as NumberRange does, delegation alone is again not a suitable approach for thread safety.
> As an example, let's say we need a thread-safe List with an atomic put-if-absent operation.
> Listing 4.13. Extending Vector to have a Put-if-absent Method.
> ImprovedList adds an additional level of locking using its own intrinsic lock. It does not care whether the underlying List is thread-safe, because it provides its own consistent locking that provides thread safety even if the List is not thread-safe or changes its locking implementation.
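A sketch of the composition approach described above: every access goes through the wrapper's own intrinsic lock, so thread safety does not depend on the wrapped implementation (this is my simplification of the ImprovedList idea, not the book's exact listing):

```java
import java.util.ArrayList;
import java.util.List;

// Composition: the wrapper provides its own consistent locking, so the
// check-then-act of putIfAbsent is atomic whether or not the underlying
// List is thread-safe.
public class PutIfAbsentList<E> {
    private final List<E> list = new ArrayList<>();

    public synchronized boolean putIfAbsent(E x) {
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x); // check-then-act made atomic by the wrapper's lock
        }
        return absent;
    }

    public synchronized boolean contains(E x) {
        return list.contains(x);
    }
}
```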
> Crafting a synchronization policy requires a number of decisions: which variables to make volatile, which variables to guard with locks, which lock(s) guard which variables, which variables to make immutable or confine to a thread, which operations must be atomic, etc.
> Where practical, delegation is one of the most effective strategies for creating thread-safe classes: just let existing thread-safe classes manage all the state.
> The synchronized collection classes include Vector and Hashtable, part of the original JDK, as well as their cousins added in JDK 1.2, the synchronized wrapper classes created by the Collections.synchronizedXxx factory methods.
> Common compound actions on collections include iteration (repeatedly fetch elements until the collection is exhausted), navigation (find the next element after this one according to some order), and conditional operations such as put-if-absent (check if a Map has a mapping for key K, and if not, add the mapping (K,V)).
> The problem of unreliable iteration can again be addressed by client-side locking, at some additional cost to scalability. By holding the Vector lock for the duration of iteration, as shown in Listing 5.4, we prevent other threads from modifying the Vector while we are iterating it.
> Listing 5.4. Iteration with Client-side Locking.
> The iterators returned by the synchronized collections are not designed to deal with concurrent modification, and they are fail-fast---meaning that if they detect that the collection has changed since iteration began, they throw the unchecked ConcurrentModificationException.
> if the modification count changes during iteration, hasNext or next throws ConcurrentModificationException.
> BlockingQueue extends Queue to add blocking insertion and retrieval operations. If the queue is empty, a retrieval blocks until an element is available, and if the queue is full (for bounded queues) an insertion blocks until there is space available.
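The blocking insertion/retrieval behavior can be sketched with `ArrayBlockingQueue`; here the wrapper restores the interrupted status rather than swallowing it (class and method names are mine):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A bounded handoff between a producer and a consumer: put blocks while
// the queue is full, take blocks while it is empty. On interruption the
// methods restore the interrupted status for callers higher up the stack.
public class Pipeline {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

    public void produce(String item) {
        try {
            queue.put(item); // blocks while the queue is full
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore status
        }
    }

    public String consume() {
        try {
            return queue.take(); // blocks while the queue is empty
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```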
> A weakly consistent iterator can tolerate concurrent modification, traverses elements as they existed when the iterator was constructed, and may (but is not guaranteed to) reflect modifications to the collection after the construction of the iterator.
> Since the result of size could be out of date by the time it is computed, it is really only an estimate, so size is allowed to return an approximation instead of an exact count.
> The one feature offered by the synchronized Map implementations but not by ConcurrentHashMap is the ability to lock the map for exclusive access.
> Producers and consumers can execute concurrently; if one is I/O-bound and the other is CPU-bound, executing them concurrently yields better overall throughput than executing them sequentially.
> Java 6 also adds another two collection types, Deque (pronounced "deck") and BlockingDeque, that extend Queue and BlockingQueue. A Deque is a double-ended queue that allows efficient insertion and removal from both the head and the tail. Implementations include ArrayDeque and LinkedBlockingDeque.
> Work stealing can be more scalable than a traditional producer-consumer design because workers don't contend for a shared work queue; most of the time they access only their own
> When a thread blocks, it is usually suspended and placed in one of the blocked thread states (BLOCKED, WAITING, or TIMED_WAITING).
> This could involve not catching InterruptedException, or catching it and throwing it again after performing some brief activity-specific cleanup.
> Runnable. In these situations, you must catch InterruptedException and restore the interrupted status by calling interrupt on the current thread, so that code higher up the call stack can see that an interrupt was issued, as demonstrated in Listing 5.10.
> Blocking queues can act as synchronizers; other types of synchronizers include semaphores, barriers, and latches. There are a number of synchronizer classes in the platform library; if these do not meet your needs, you can also create your own using the mechanisms described in Chapter 14.
> FutureTask also acts like a latch. (FutureTask implements Future, which describes an abstract result-bearing computation \[CPJ 4.3.3\].)
> Semaphores are useful for implementing resource pools such as database connection pools. While it is easy to construct a fixed-sized pool that fails if you request a resource from an empty pool, what you really want is to block if the pool is empty and unblock when it becomes nonempty again.
> The key difference is that with a barrier, all the threads must come together at a barrier point at the same time in order to proceed. Latches are for waiting for events; barriers are for waiting for other threads.
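A start-gate/end-gate sketch using two `CountDownLatch`es: the driver releases all workers at once and then waits for every worker to finish (a simplified harness of my own, not the book's listing):

```java
import java.util.concurrent.CountDownLatch;

// Latches are for waiting for events: the start gate is the "go" event
// all workers await; the end gate is the "everyone finished" event the
// driver awaits.
public class LatchHarness {
    public static long timeTasks(int nThreads, Runnable task) {
        CountDownLatch startGate = new CountDownLatch(1);
        CountDownLatch endGate = new CountDownLatch(nThreads);
        for (int i = 0; i < nThreads; i++) {
            new Thread(() -> {
                try {
                    startGate.await(); // wait for the starting signal
                    try {
                        task.run();
                    } finally {
                        endGate.countDown(); // report completion
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
        long start = System.nanoTime();
        startGate.countDown(); // release all workers at once
        try {
            endGate.await();   // wait for all workers to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return System.nanoTime() - start;
    }
}
```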
> Another form of barrier is Exchanger, a two-party barrier in which the parties exchange data at the barrier point \[CPJ 3.4.3\]. Exchangers are useful when the parties perform asymmetric activities, for example when one thread fills a buffer with data and the other thread consumes the data from the buffer;
> Caching a Future instead of a value creates the possibility of cache pollution: if a computation is cancelled or fails, future attempts to compute the result will also indicate cancellation or failure. To avoid this, Memoizer removes the Future from the cache if it detects that the computation was cancelled; it might also be desirable to remove the Future upon detecting a RuntimeException if the computation might succeed on a future attempt.
> Summary of Part I
> Execution Policies
> In what thread will tasks be executed? In what order should tasks be executed (FIFO, LIFO, priority order)? How many tasks may execute concurrently? How many tasks may be queued pending execution? If a task has to be rejected because the system is overloaded, which task should be selected as the victim, and how should the application be notified? What actions should be taken before or after executing a task?
> But the JVM can't exit until all the (nondaemon) threads have terminated, so failing to shut down an Executor could prevent the JVM from exiting.
> run, the recurring task either (depending on whether it was scheduled at fixed rate or fixed delay) gets called four times in rapid succession after the long-running task completes, or "misses" four invocations completely.
> Another problem with Timer is that it behaves poorly if a TimerTask throws an unchecked exception. The Timer thread doesn't catch the exception, so an unchecked exception thrown from a TimerTask terminates the timer thread.
> Java does not provide any mechanism for safely forcing a thread to stop what it is doing.\[1\]
> The deprecated Thread.stop and suspend methods were an attempt to provide such a mechanism, but were quickly realized to be seriously flawed and should be avoided. See http://java.sun.com/j2se/1.5.0/docs/guide/misc/ threadPrimitiveDeprecation.html for an explanation of the problems with these methods.
> One such cooperative mechanism is setting a "cancellation requested" flag that the task checks periodically; if it finds the flag set, the task terminates early.
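The "cancellation requested" flag can be sketched as a volatile boolean the task polls each iteration (a simplified example of mine; the book's version generates primes):

```java
import java.util.ArrayList;
import java.util.List;

// Cooperative cancellation: the task polls a volatile flag once per
// iteration and exits early when it is set.
public class NumberGenerator implements Runnable {
    private final List<Integer> numbers = new ArrayList<>();
    private volatile boolean cancelled;

    @Override
    public void run() {
        int n = 0;
        while (!cancelled) { // check the cancellation flag periodically
            synchronized (this) {
                numbers.add(n++);
            }
            if (n >= 1_000_000) break; // safety bound for this sketch
        }
    }

    public void cancel() {
        cancelled = true;
    }

    public synchronized List<Integer> snapshot() {
        return new ArrayList<>(numbers);
    }
}
```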
> we could have a more serious problem---the task might never check the cancellation flag and therefore might never terminate.
> As we hinted in Chapter 5, certain blocking library methods support interruption. Thread interruption is a cooperative mechanism for a thread to signal another thread that it should, at its convenience and if it feels like it, stop what it is doing and do something else.
> Calling interrupt does not necessarily stop the target thread from doing what it is doing; it merely delivers the message that interruption has been requested.
> The static interrupted method should be used with caution, because it clears the current thread's interrupted status.
> The most sensible interruption policy is some form of thread-level or servicelevel cancellation: exit as quickly as practical, cleaning up if necessary, and possibly notifying some owning entity that the thread is exiting.
> A thread should be interrupted only by its owner; the owner can encapsulate knowledge of the thread's interruption policy in an appropriate cancellation mechanism such as a shutdown method.
> Only code that implements a thread's interruption policy may swallow an interruption request. General-purpose task and library code should never swallow interruption requests.
> For example, when a worker thread owned by a ThreadPoolExecutor detects interruption, it checks whether the pool is being shut down. If so, it performs some pool cleanup before terminating; otherwise it may create a new thread to restore the thread pool to the desired size.
> This is an appealingly simple approach, but it violates the rules: you should know a thread's interruption policy before interrupting it. Since timedRun can be called from an arbitrary thread, it cannot know the calling thread's interruption policy.
> This version addresses the problems in the previous examples, but because it relies on a timed join, it shares a deficiency with join: we don't know if control was returned because the thread exited normally or because the join timed out.\[2\]
> (This tells you only whether it was able to deliver the interruption, not whether the task detected and acted on it.) When mayInterruptIfRunning is true and the task is currently running in some thread, then that thread is interrupted.
> ExecutorService provides the shutdown and shutdownNow methods; other thread-owning services should provide a similar shutdown mechanism.
> Another way to convince a producer-consumer service to shut down is with a poison pill: a recognizable object placed on the queue that means "when you get this, stop."
> Poison pills work reliably only with unbounded queues.
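A minimal poison-pill consumer sketch: a recognizable sentinel on the queue means "stop" (the identifiers and the identity-comparison choice are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// The consumer drains its queue until it takes the poison pill. The
// sentinel is compared by identity, so no real work item can match it.
public class PillConsumer {
    public static final String POISON = new String("POISON"); // sentinel

    public static List<String> drain(BlockingQueue<String> queue) {
        List<String> processed = new ArrayList<>();
        try {
            for (String item = queue.take(); item != POISON; item = queue.take()) {
                processed.add(item); // stand-in for real processing
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed;
    }
}
```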
> If you are writing a worker thread class that executes submitted tasks, or calling untrusted external code (such as dynamically loaded plugins), use one of these approaches to prevent a poorly written task or plugin from taking down the thread that happens to call it.
> The Thread API also provides the UncaughtExceptionHandler facility, which lets you detect when a thread dies due to an uncaught exception.
> When a thread exits due to an uncaught exception, the JVM reports this event to an application-provided UncaughtExceptionHandler (see Listing 7.24); if no handler exists, the default behavior is to print the stack trace to System.err.\[8\]
> To set an UncaughtExceptionHandler for pool threads, provide a ThreadFactory to the ThreadPoolExecutor constructor. (As with all thread manipulation, only the thread's owner should change its UncaughtExceptionHandler.)
> Shutdown hooks are unstarted threads that are registered with Runtime.addShutdownHook. The JVM makes no guarantees on the order in which shutdown hooks are started.
> Sometimes you want to create a thread that performs some helper function but you don't want the existence of this thread to prevent the JVM from shutting down. This is what daemon threads are for.
> Normal threads and daemon threads differ only in what happens when they exit. When a thread exits, the JVM performs an inventory of running threads, and if the only threads that are left are daemon threads, it initiates an orderly shutdown.
> Daemon threads are best saved for "housekeeping" tasks, such as a background thread that periodically removes expired entries from an in-memory cache.
> Java does not provide a preemptive mechanism for cancelling activities or terminating threads. Instead, it provides a cooperative interruption mechanism that can be used to facilitate cancellation, but it is up to you to construct protocols for cancellation and use them consistently.
> This chapter looks at advanced options for configuring and tuning thread pools, describes hazards to watch for when using the task execution framework, and offers some more advanced examples of using Executor.
> The optimal pool size for keeping the processors at the desired utilization is:
> $$N_{\text{threads}} = N_{cpu} \cdot U_{cpu} \cdot \left(1 + \frac{W}{C}\right)$$
> The default for newFixedThreadPool and newSingleThreadExecutor is to use an unbounded LinkedBlockingQueue. Tasks will queue up if all worker threads are busy, but the queue could grow without bound if the tasks keep arriving faster than they can be executed.
> A SynchronousQueue is not really a queue at all, but a mechanism for managing handoffs between threads.
> bounded thread pools or queues can cause thread starvation deadlock; instead, use an unbounded pool configuration like newCachedThreadPool.\[6\]
> The caller-runs policy implements a form of throttling that neither discards tasks nor throws an exception, but instead tries to slow down the flow of new tasks by pushing some of the work back to the caller.
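A pool configured with the caller-runs saturation policy might look like this (the pool and queue sizes below are arbitrary choices for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Once the worker and the bounded queue are both busy, execute() runs the
// task in the submitting thread itself, which naturally throttles the
// rate of new submissions instead of discarding work or throwing.
public class ThrottledPool {
    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                1, 1,                          // a single worker thread
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),   // small bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```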
> There are a number of reasons to use a custom thread factory. You might want to specify an UncaughtExceptionHandler for pool threads, or instantiate an instance of a custom Thread class, such as one that performs debug logging. You might want to modify the priority (generally not a very good idea; see Section 10.3.1) or set the daemon status (again, not all that good an idea; see Section 7.4.2) of pool threads. Or maybe you just want to give pool threads more meaningful names to simplify interpreting thread dumps and error logs.
> The Executor framework is a powerful and flexible framework for concurrently executing tasks. It offers a number of tuning options, such as policies for creating and tearing down threads, handling queued tasks, and what to do with excess tasks, and provides several hooks for extending its behavior.
> The Swing single-thread rule: Swing components and models should be created, modified, and queried only from the event-dispatching thread.
> In a split-model design, the presentation model is confined to the event thread and the other model, the shared model, is thread-safe and may be accessed by both the event thread and application threads. The presentation model registers listeners with the shared model so it can be notified of updates.
> Thread confinement is not restricted to GUIs: it can be used whenever a facility is implemented as a single-threaded subsystem. Sometimes thread confinement is forced on the developer for reasons that have nothing to do with avoiding synchronization or deadlock.
> There is often a tension between safety and liveness. We use locking to ensure thread safety, but indiscriminate use of locking can cause lock-ordering deadlocks. Similarly, we use thread pools and semaphores to bound resource consumption, but failure to understand the activities being bounded can cause resource deadlocks.
> dining philosophers
Note: the five dining philosophers problem.
> Database systems are designed to detect and recover from deadlock.
> One way to induce an ordering on objects is to use System.identityHashCode, which returns the value that would be returned by Object.hashCode.
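Induced lock ordering via `System.identityHashCode` can be sketched as follows; for brevity this omits the extra tie-breaking lock the book acquires for the rare case of equal hash codes:

```java
// Both locks are always acquired in the same global order, regardless of
// the order the caller passes the arguments, so two threads locking the
// same pair of objects cannot deadlock on them.
public class OrderedLocking {
    public static void withBothLocks(Object a, Object b, Runnable action) {
        if (System.identityHashCode(a) < System.identityHashCode(b)) {
            synchronized (a) {
                synchronized (b) {
                    action.run();
                }
            }
        } else {
            synchronized (b) {
                synchronized (a) {
                    action.run();
                }
            }
        }
    }
}
```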
> In programs that use fine-grained locking, audit your code for deadlock freedom using a two-part strategy: first, identify where multiple locks could be acquired (try to make this a small set), and then perform a global analysis of all such instances to ensure that lock ordering is consistent across your entire program.
> This form of livelock often comes from overeager error-recovery code that mistakenly treats an unrecoverable error as a recoverable one.
> Worse, some techniques intended to improve performance are actually counterproductive or trade one sort of performance problem for another.
> In using concurrency to achieve better performance, we are trying to do two things: utilize the processing resources we have more effectively, and enable our program to exploit additional processing resources if they become available.
> Nearly all engineering decisions involve some form of tradeoff.
> When making engineering decisions, sometimes you are trading one form of cost for another (service time versus memory consumption); sometimes you are trading cost for safety. Safety doesn't necessarily mean risk to human lives, as it did in the bridge example. Many performance optimizations come at the cost of readability or maintainability---the more "clever" or nonobvious code is, the harder it is to understand and maintain.
> What do you mean by "faster"? Under what conditions will this approach actually be faster? Under light or heavy load? With large or small data sets? Can you support your answer with measurements? How often are these conditions likely to arise in your situation? Can you support your answer with measurements? Is this code likely to be used in other situations where the conditions may be different? What hidden costs, such as increased development or maintenance risk, are you trading for this improved performance? Is this a good tradeoff?
> Most concurrent programs have a lot in common with farming, consisting of a mix of parallelizable and serial portions. Amdahl's law describes how much a program can theoretically be sped up by additional computing resources, based on the proportion of parallelizable and serial components. If F is the fraction of the calculation that must be executed serially, then Amdahl's law says that on a machine with N processors, we can achieve a speedup of at most:
> $$\text{Speedup} \leq \frac{1}{F + \frac{1-F}{N}}$$
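The bound is a one-line calculation (symbols follow Amdahl's law as quoted above; the class name is mine):

```java
// Amdahl's law: speedup <= 1 / (F + (1 - F) / N), where F is the serial
// fraction of the work and N the number of processors.
public class Amdahl {
    public static double maxSpeedup(double serialFraction, int processors) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / processors);
    }
}
```

With F = 0.5, even an enormous machine cannot exceed a speedup of 2, which is why "thinking in the limit" about the serial fraction matters.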
> When evaluating an algorithm, thinking "in the limit" about what would happen with hundreds or thousands of processors can offer some insight into where scaling limits might appear.
> Modern JVMs can reduce the cost of incidental synchronization by optimizing away locking that can be proven never to contend.
> We've seen that serialization hurts scalability and that context switches hurt performance.
> The other way to reduce the fraction of time that a lock is held (and therefore the likelihood that it will be contended) is to have threads ask for it less often. This can be accomplished by lock splitting and lock striping
> Lock splitting can sometimes be extended to partition locking on a variable-sized set of independent objects, in which case it is called lock striping.
> ConcurrentHashMap avoids this problem by having size enumerate the stripes and add up the number of elements in each stripe, instead of maintaining a global count.
> and are implemented using low-level concurrency primitives (such as compare-and-swap) provided by most modern processors.
> Insufficient
> Threads calling log no longer block waiting for the output stream lock or for I/O to complete; they need only queue the message and can then return to their task.
> Since the primary source of serialization in Java programs is the exclusive resource lock, scalability can often be improved by spending less time holding locks, either by reducing lock granularity, reducing the duration for which locks are held, or replacing exclusive locks with nonexclusive or nonblocking alternatives.
> Most tests of concurrent classes fall into one or both of the classic categories of safety and liveness.
> In Chapter 1, we defined safety as "nothing bad ever happens" and liveness as "something good eventually happens".
Note (location 4,858): "Heisenbug" is a pun on the Heisenberg uncertainty principle.
> A secondary aspect to test is that it does not do things it is not supposed to do, such as leak resources.
> The timing of compilation is unpredictable. Your timing tests should run only after all code has been compiled; there is no value in measuring the speed of the interpreted code since most programs run long enough that all frequently executed code paths are compiled.
> the goal of a quality assurance (QA) plan should be to achieve the greatest possible confidence given the testing resources available.
> When ReentrantLock was added in Java 5.0, it offered far better contended performance than intrinsic locking. For synchronization primitives, contended performance is the key to scalability: if more resources are expended on lock management and scheduling…
> (Semaphore also offers the choice of fair or nonfair acquisition ordering.)
> Use it if you need its advanced features: timed, polled, or interruptible lock acquisition, fair queueing, or non-block-structured locking. Otherwise, prefer synchronized.
> The JVM knows nothing about which threads hold ReentrantLocks and therefore cannot help in debugging threading problems using ReentrantLock.
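One of the "advanced features" mentioned above, timed lock acquisition, can be sketched as follows (the class and method names here are illustrative, not from the book): `tryLock` with a timeout lets a thread give up instead of blocking indefinitely, something intrinsic `synchronized` cannot express.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // Run the action under the lock, but give up after timeoutMillis
    // instead of blocking forever; returns false if the lock was not acquired.
    public boolean runWithTimeout(Runnable action, long timeoutMillis)
            throws InterruptedException {
        if (!lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS))
            return false;
        try {
            action.run();
            return true;
        } finally {
            lock.unlock();          // always release in finally
        }
    }
}
```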
> But if the library classes do not provide the functionality you need, you can also build your own synchronizers using the low-level mechanisms provided by the language and libraries, including intrinsic condition queues, explicit Condition objects, and the AbstractQueuedSynchronizer framework.
> A condition queue gets its name because it gives a group of threads---called the wait set---a way to wait for a specific condition to become true. Unlike typical queues in which the elements are data items, the elements of a condition queue are the threads waiting for the condition.
> The key to using condition queues correctly is identifying the condition predicates that the object may wait for.
> When using condition waits (Object.wait or Condition.await): Always have a condition predicate---some test of object state that must hold before proceeding; Always test the condition predicate before calling wait, and again after returning from wait; Always call wait in a loop; Ensure that the state variables making up the condition predicate are guarded by the lock associated with the condition queue; Hold the lock associated with the condition queue when calling wait, notify, or notifyAll; and Do not release the lock after checking the condition predicate but before acting on it.
> Missed signals are the result of coding errors like those warned against in the list above, such as failing to test the condition predicate before calling wait.
> Most classes don't meet these requirements, so the prevailing wisdom is to use notifyAll in preference to single notify.
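The rules above add up to one canonical pattern: test the condition predicate in a loop around `wait()`, under the lock that guards the state, and `notifyAll()` after every state change. A minimal bounded-buffer sketch of that pattern (class name is mine):

```java
public class BoundedBuffer {
    private final Object[] items;
    private int count = 0, putIdx = 0, takeIdx = 0;

    public BoundedBuffer(int capacity) { items = new Object[capacity]; }

    public synchronized void put(Object x) throws InterruptedException {
        while (count == items.length)   // predicate "not full": loop, never 'if'
            wait();
        items[putIdx] = x;
        putIdx = (putIdx + 1) % items.length;
        count++;
        notifyAll();                    // state changed: wake all waiters
    }

    public synchronized Object take() throws InterruptedException {
        while (count == 0)              // predicate "not empty"
            wait();
        Object x = items[takeIdx];
        items[takeIdx] = null;
        takeIdx = (takeIdx + 1) % items.length;
        count--;
        notifyAll();
        return x;
    }
}
```

The `while` loop is what protects against both spurious wakeups and the missed-signal errors warned about above.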
> And just as Lock offers a richer feature set than intrinsic locking, Condition offers a richer feature set than intrinsic condition queues: multiple wait sets per lock, interruptible and uninterruptible condition waits, deadline-based waiting, and a choice of fair or nonfair queueing.
> In actuality, they are both implemented using a common base class, AbstractQueuedSynchronizer (AQS)---as are many other synchronizers. AQS is a framework for building locks and synchronizers,
> Using AQS to build synchronizers offers several benefits. Not only does it substantially reduce the implementation effort, but you also needn't pay for multiple points of contention, as you would when constructing one synchronizer on top of another.
> With a lock or semaphore, the meaning of acquire is straightforward---acquire the lock or a permit---and the caller may have to wait until the synchronizer is in a state where that can happen. With CountDownLatch, acquire means "wait until the latch has reached its terminal state", and with FutureTask, it means "wait until the task has completed". Release is not a blocking operation; a release may allow threads blocked in acquire to proceed.
> If you need to implement a state-dependent class---one whose methods must block if a state-based precondition does not hold---the best strategy is usually to build upon an existing library class such as Semaphore, BlockingQueue, or CountDownLatch, as in ValueLatch on page 187. However, sometimes existing library classes do not provide a sufficient foundation; in these cases, you can build your own synchronizers using intrinsic condition queues, explicit Condition objects, or AbstractQueuedSynchronizer.
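A minimal sketch of building a synchronizer on AQS, along the lines of the one-shot latch idea above (assuming synchronizer state 1 means "open"; class names are mine): the subclass only defines what acquire and release mean, and AQS handles queueing and blocking.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class OneShotLatch {
    private final Sync sync = new Sync();

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int ignored) {
            // "Acquire" means: wait until the latch has reached its
            // terminal (open) state; succeed iff state == 1.
            return (getState() == 1) ? 1 : -1;
        }
        @Override
        protected boolean tryReleaseShared(int ignored) {
            setState(1);     // latch is now open
            return true;     // blocked acquirers may now proceed
        }
    }

    public void signal() { sync.releaseShared(0); }

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(0);
    }
}
```

Shared (rather than exclusive) acquisition is used because any number of threads may pass the latch once it opens.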
> …it could use profiling data to decide adaptively between suspension and spin locking based on how long the lock has been held during previous acquisitions.
> However, volatile variables have some limitations compared to locking: while they provide similar visibility guarantees, they cannot be used to construct atomic compound actions.
> An algorithm is called nonblocking if failure or suspension of any thread cannot cause failure or suspension of another thread; an algorithm is called lock-free if, at each step, some thread can make progress.
> The key to creating nonblocking algorithms is figuring out how to limit the scope of atomic changes to a single variable while maintaining data consistency.
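A classic instance of confining the atomic change to a single variable is a Treiber-style nonblocking stack, where the only shared mutable state is the `top` reference, updated in a compare-and-swap retry loop (a sketch; the class name is mine, but the structure follows the book's ConcurrentStack example):

```java
import java.util.concurrent.atomic.AtomicReference;

public class NonblockingStack<E> {
    private static class Node<E> {
        final E item;
        Node<E> next;
        Node(E item) { this.item = item; }
    }

    private final AtomicReference<Node<E>> top = new AtomicReference<>();

    public void push(E item) {
        Node<E> newHead = new Node<>(item);
        Node<E> oldHead;
        do {
            oldHead = top.get();
            newHead.next = oldHead;
        } while (!top.compareAndSet(oldHead, newHead));  // retry if raced
    }

    public E pop() {
        Node<E> oldHead, newHead;
        do {
            oldHead = top.get();
            if (oldHead == null) return null;            // empty stack
            newHead = oldHead.next;
        } while (!top.compareAndSet(oldHead, newHead));
        return oldHead.item;
    }
}
```

If the CAS fails, some other thread's CAS succeeded, so some thread always makes progress: the lock-free property defined above.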
> However, sometimes we really want to ask "Has the value of V changed since I last observed it to be A?" For some algorithms, changing V from A to B and then back to A still counts as a change that requires us to retry some algorithmic step.
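The JDK addresses this "ABA problem" with `AtomicStampedReference`, which pairs the reference with an integer stamp so an A→B→A sequence is still detected. A small sketch (class and method names are mine):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Returns whether a stale CAS succeeds after an A -> B -> A sequence.
    public static boolean staleCasSucceeds() {
        AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("A", 0);

        int[] stampHolder = new int[1];
        String observed = ref.get(stampHolder);      // observe ("A", stamp 0)
        int observedStamp = stampHolder[0];

        // Meanwhile, "another thread" changes A -> B -> A,
        // bumping the stamp on every update.
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // The value is "A" again, but the stamp (now 2) exposes the change.
        return ref.compareAndSet(observed, "C", observedStamp, observedStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleCasSucceeds());      // false: ABA was detected
    }
}
```

A plain `AtomicReference.compareAndSet` would have succeeded here, silently missing the intermediate update.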
> These low-level primitives are exposed through the atomic variable classes, which can also be used as "better volatile variables" providing atomic update operations for integers and object references.
> Compilers may generate instructions in a different order than the "obvious" one suggested by the source code, or store variables in registers instead of in memory; processors may execute instructions in parallel or out of order; caches may vary the order in which writes to variables are committed to main memory; and values stored in processor-local caches may not be visible to other processors.
> …some good, some bad, and some ugly. DCL falls into the "ugly" category.
> Subsequent changes in the JMM (Java 5.0 and later) have enabled DCL to work if resource is made volatile, and the performance impact of this is small since volatile reads are usually only slightly more expensive than nonvolatile reads.
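With that JMM change, the safe form of double-checked locking looks like the sketch below; the essential part is that the lazily initialized field is `volatile` (the class name is mine):

```java
public class ResourceHolder {
    // volatile is what makes post-Java-5 DCL correct: it guarantees the
    // fully constructed object is visible to other threads.
    private static volatile ResourceHolder instance;

    private ResourceHolder() { }

    public static ResourceHolder getInstance() {
        if (instance == null) {                     // first check, no lock
            synchronized (ResourceHolder.class) {
                if (instance == null)               // second check, under lock
                    instance = new ResourceHolder();
            }
        }
        return instance;
    }
}
```

Without `volatile`, a thread could observe a non-null but partially constructed `instance`; in most cases the lazy-initialization-holder class idiom is still simpler than DCL.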
## More
- [Concurrency vs. Parallelism](http://tutorials.jenkov.com/java-concurrency/concurrency-vs-parallelism.html)
- [Java Concurrency in Practice 读书笔记 第十章 - macemers - 博客园](https://www.cnblogs.com/techyc/p/3328614.html)
- [Hong's Blog, Java Concurrency in Practice 读书笔记1](https://rockhong.github.io/java-concurrency-in-practice-notes-1.html)
- [Java Concurrency in Practice 读书笔记 第二章 \| 四号程序员](https://www.coder4.com/archives/872)
- [Java Concurrency in Practice 读书笔记 第三章 \| 四号程序员](https://www.coder4.com/archives/892)
- [Java Concurrency in Practice 读书笔记 第四章 \| 四号程序员](https://www.coder4.com/archives/895)
- [Java Concurrency in Practice 读书笔记 第五章 \| 四号程序员](https://www.coder4.com/archives/923)
- [Java Concurrency in Practice 读书笔记 第六章 \| 四号程序员](https://www.coder4.com/archives/937)
- [Java Concurrency in Practice 读书笔记 第七章 \| 四号程序员](https://www.coder4.com/archives/954)
- [缓慢阅读之Java Concurrency in Practice-5 - 知乎](https://zhuanlan.zhihu.com/p/54847448)
- [Java 并发编程的艺术 - InfoQ](https://www.infoq.cn/article/art-of-java-concurrent-program)