Reading the J.U.C Concurrency Framework Source: AbstractQueuedSynchronizer
Posted by stevenczp
Based on version jdk1.7.0_80.
java.util.concurrent.locks.AbstractQueuedSynchronizer
The source code is as follows:
/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent.locks;

import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
import sun.misc.Unsafe;

/**
 * Provides a framework for implementing blocking locks and related synchronizers (semaphores,
 * events, etc) that rely on first-in-first-out (FIFO) wait queues. This class is designed to be a
 * useful basis for most kinds of synchronizers that rely on a single atomic <tt>int</tt> value to
 * represent state. Subclasses must define the protected methods that change this state, and which
 * define what that state means in terms of this object being acquired or released. Given these,
 * the other methods in this class carry out all queuing and blocking mechanics. Subclasses can
 * maintain other state fields, but only the atomically updated <tt>int</tt> value manipulated
 * using methods {@link #getState}, {@link #setState} and {@link #compareAndSetState} is tracked
 * with respect to synchronization.
 *
 * <p>Subclasses should be defined as non-public internal helper classes that are used to
 * implement the synchronization properties of their enclosing class. Class
 * <tt>AbstractQueuedSynchronizer</tt> does not implement any synchronization interface. Instead
 * it defines methods such as {@link #acquireInterruptibly} that can be invoked as appropriate by
 * concrete locks and related synchronizers to implement their public methods.
 *
 * <p>This class supports either or both a default <em>exclusive</em> mode and a <em>shared</em>
 * mode. When acquired in exclusive mode, attempted acquires by other threads cannot succeed.
 * Shared mode acquires by multiple threads may (but need not) succeed. This class does not
 * "understand" these differences except in the mechanical sense that when a shared mode acquire
 * succeeds, the next waiting thread (if one exists) must also determine whether it can acquire as
 * well. Threads waiting in the different modes share the same FIFO queue. Usually, implementation
 * subclasses support only one of these modes, but both can come into play for example in a
 * {@link ReadWriteLock}. Subclasses that support only exclusive or only shared modes need not
 * define the methods supporting the unused mode.
 *
 * <p>This class defines a nested {@link ConditionObject} class that can be used as a
 * {@link Condition} implementation by subclasses supporting exclusive mode for which method
 * {@link #isHeldExclusively} reports whether synchronization is exclusively held with respect to
 * the current thread, method {@link #release} invoked with the current {@link #getState} value
 * fully releases this object, and {@link #acquire}, given this saved state value, eventually
 * restores this object to its previous acquired state. No <tt>AbstractQueuedSynchronizer</tt>
 * method otherwise creates such a condition, so if this constraint cannot be met, do not use it.
 * The behavior of {@link ConditionObject} depends of course on the semantics of its synchronizer
 * implementation.
 *
 * <p>This class provides inspection, instrumentation, and monitoring methods for the internal
 * queue, as well as similar methods for condition objects. These can be exported as desired into
 * classes using an <tt>AbstractQueuedSynchronizer</tt> for their synchronization mechanics.
 *
 * <p>Serialization of this class stores only the underlying atomic integer maintaining state, so
 * deserialized objects have empty thread queues. Typical subclasses requiring serializability
 * will define a <tt>readObject</tt> method that restores this to a known initial state upon
 * deserialization.
 *
 * <h3>Usage</h3>
 *
 * <p>To use this class as the basis of a synchronizer, redefine the following methods, as
 * applicable, by inspecting and/or modifying the synchronization state using {@link #getState},
 * {@link #setState} and/or {@link #compareAndSetState}:
 *
 * <ul>
 * <li> {@link #tryAcquire}
 * <li> {@link #tryRelease}
 * <li> {@link #tryAcquireShared}
 * <li> {@link #tryReleaseShared}
 * <li> {@link #isHeldExclusively}
 * </ul>
 *
 * Each of these methods by default throws {@link UnsupportedOperationException}. Implementations
 * of these methods must be internally thread-safe, and should in general be short and not block.
 * Defining these methods is the <em>only</em> supported means of using this class. All other
 * methods are declared <tt>final</tt> because they cannot be independently varied.
 *
 * <p>You may also find the inherited methods from {@link AbstractOwnableSynchronizer} useful to
 * keep track of the thread owning an exclusive synchronizer. You are encouraged to use them --
 * this enables monitoring and diagnostic tools to assist users in determining which threads hold
 * locks.
 *
 * <p>Even though this class is based on an internal FIFO queue, it does not automatically enforce
 * FIFO acquisition policies. The core of exclusive synchronization takes the form:
 *
 * <pre>
 * Acquire:
 *     while (!tryAcquire(arg)) {
 *        <em>enqueue thread if it is not already queued</em>;
 *        <em>possibly block current thread</em>;
 *     }
 *
 * Release:
 *     if (tryRelease(arg))
 *        <em>unblock the first queued thread</em>;
 * </pre>
 *
 * (Shared mode is similar but may involve cascading signals.)
 *
 * <p><a name="barging">Because checks in acquire are invoked before enqueuing, a newly acquiring
 * thread may <em>barge</em> ahead of others that are blocked and queued. However, you can, if
 * desired, define <tt>tryAcquire</tt> and/or <tt>tryAcquireShared</tt> to disable barging by
 * internally invoking one or more of the inspection methods, thereby providing a <em>fair</em>
 * FIFO acquisition order. In particular, most fair synchronizers can define <tt>tryAcquire</tt>
 * to return <tt>false</tt> if {@link #hasQueuedPredecessors} (a method specifically designed to
 * be used by fair synchronizers) returns <tt>true</tt>. Other variations are possible.
 *
 * <p>Throughput and scalability are generally highest for the default barging (also known as
 * <em>greedy</em>, <em>renouncement</em>, and <em>convoy-avoidance</em>) strategy. While this is
 * not guaranteed to be fair or starvation-free, earlier queued threads are allowed to recontend
 * before later queued threads, and each recontention has an unbiased chance to succeed against
 * incoming threads. Also, while acquires do not "spin" in the usual sense, they may perform
 * multiple invocations of <tt>tryAcquire</tt> interspersed with other computations before
 * blocking. This gives most of the benefits of spins when exclusive synchronization is only
 * briefly held, without most of the liabilities when it isn't. If so desired, you can augment
 * this by preceding calls to acquire methods with "fast-path" checks, possibly prechecking
 * {@link #hasContended} and/or {@link #hasQueuedThreads} to only do so if the synchronizer is
 * likely not to be contended.
 *
 * <p>This class provides an efficient and scalable basis for synchronization in part by
 * specializing its range of use to synchronizers that can rely on <tt>int</tt> state, acquire,
 * and release parameters, and an internal FIFO wait queue. When this does not suffice, you can
 * build synchronizers from a lower level using {@link java.util.concurrent.atomic atomic}
 * classes, your own custom {@link java.util.Queue} classes, and {@link LockSupport} blocking
 * support.
 *
 * <h3>Usage Examples</h3>
 *
 * <p>Here is a non-reentrant mutual exclusion lock class that uses the value zero to represent
 * the unlocked state, and one to represent the locked state. While a non-reentrant lock does not
 * strictly require recording of the current owner thread, this class does so anyway to make usage
 * easier to monitor. It also supports conditions and exposes one of the instrumentation methods:
 *
 * <pre>
 * class Mutex implements Lock, java.io.Serializable {
 *
 *   // Our internal helper class
 *   private static class Sync extends AbstractQueuedSynchronizer {
 *     // Report whether in locked state
 *     protected boolean isHeldExclusively() {
 *       return getState() == 1;
 *     }
 *
 *     // Acquire the lock if state is zero
 *     public boolean tryAcquire(int acquires) {
 *       assert acquires == 1; // Otherwise unused
 *       if (compareAndSetState(0, 1)) {
 *         setExclusiveOwnerThread(Thread.currentThread());
 *         return true;
 *       }
 *       return false;
 *     }
 *
 *     // Release the lock by setting state to zero
 *     protected boolean tryRelease(int releases) {
 *       assert releases == 1; // Otherwise unused
 *       if (getState() == 0) throw new IllegalMonitorStateException();
 *       setExclusiveOwnerThread(null);
 *       setState(0);
 *       return true;
 *     }
 *
 *     // Provide a Condition
 *     Condition newCondition() { return new ConditionObject(); }
 *
 *     // Deserialize properly
 *     private void readObject(ObjectInputStream s)
 *         throws IOException, ClassNotFoundException {
 *       s.defaultReadObject();
 *       setState(0); // reset to unlocked state
 *     }
 *   }
 *
 *   // The sync object does all the hard work. We just forward to it.
 *   private final Sync sync = new Sync();
 *
 *   public void lock()                { sync.acquire(1); }
 *   public boolean tryLock()          { return sync.tryAcquire(1); }
 *   public void unlock()              { sync.release(1); }
 *   public Condition newCondition()   { return sync.newCondition(); }
 *   public boolean isLocked()         { return sync.isHeldExclusively(); }
 *   public boolean hasQueuedThreads() { return sync.hasQueuedThreads(); }
 *   public void lockInterruptibly() throws InterruptedException {
 *     sync.acquireInterruptibly(1);
 *   }
 *   public boolean tryLock(long timeout, TimeUnit unit)
 *       throws InterruptedException {
 *     return sync.tryAcquireNanos(1, unit.toNanos(timeout));
 *   }
 * }
 * </pre>
 *
 * <p>Here is a latch class that is like a {@link CountDownLatch} except that it only requires a
 * single <tt>signal</tt> to fire. Because a latch is non-exclusive, it uses the <tt>shared</tt>
 * acquire and release methods.
 *
 * <pre>
 * class BooleanLatch {
 *
 *   private static class Sync extends AbstractQueuedSynchronizer {
 *     boolean isSignalled() { return getState() != 0; }
 *
 *     protected int tryAcquireShared(int ignore) {
 *       return isSignalled() ? 1 : -1;
 *     }
 *
 *     protected boolean tryReleaseShared(int ignore) {
 *       setState(1);
 *       return true;
 *     }
 *   }
 *
 *   private final Sync sync = new Sync();
 *   public boolean isSignalled() { return sync.isSignalled(); }
 *   public void signal()         { sync.releaseShared(1); }
 *   public void await() throws InterruptedException {
 *     sync.acquireSharedInterruptibly(1);
 *   }
 * }
 * </pre>
 *
 * @since 1.5
 * @author Doug Lea
 */
public abstract class AbstractQueuedSynchronizer
    extends AbstractOwnableSynchronizer
    implements java.io.Serializable {

    private static final long serialVersionUID = 7373984972572414691L;

    /**
     * Creates a new <tt>AbstractQueuedSynchronizer</tt> instance
     * with initial synchronization state of zero.
     */
    protected AbstractQueuedSynchronizer() { }

    /**
     * Wait queue node class.
     *
     * <p>The wait queue is a variant of a "CLH" (Craig, Landin, and Hagersten) lock queue. CLH
     * locks are normally used for spinlocks. We instead use them for blocking synchronizers, but
     * use the same basic tactic of holding some of the control information about a thread in the
     * predecessor of its node. A "status" field in each node keeps track of whether a thread
     * should block. A node is signalled when its predecessor releases. Each node of the queue
     * otherwise serves as a specific-notification-style monitor holding a single waiting thread.
     * The status field does NOT control whether threads are granted locks etc though. A thread
     * may try to acquire if it is first in the queue. But being first does not guarantee success;
     * it only gives the right to contend. So the currently released contender thread may need to
     * rewait.
     *
     * <p>To enqueue into a CLH lock, you atomically splice it in as new tail. To dequeue, you
     * just set the head field.
     * <pre>
     *      +------+  prev +-----+       +-----+
     * head |      | <---- |     | <---- |     |  tail
     *      +------+       +-----+       +-----+
     * </pre>
     *
     * <p>Insertion into a CLH queue requires only a single atomic operation on "tail", so there
     * is a simple atomic point of demarcation from unqueued to queued. Similarly, dequeing
     * involves only updating the "head". However, it takes a bit more work for nodes to determine
     * who their successors are, in part to deal with possible cancellation due to timeouts and
     * interrupts.
     *
     * <p>The "prev" links (not used in original CLH locks), are mainly needed to handle
     * cancellation. If a node is cancelled, its successor is (normally) relinked to a
     * non-cancelled predecessor. For explanation of similar mechanics in the case of spin locks,
     * see the papers by Scott and Scherer at
     * http://www.cs.rochester.edu/u/scott/synchronization/
     *
     * <p>We also use "next" links to implement blocking mechanics. The thread id for each node is
     * kept in its own node, so a predecessor signals the next node to wake up by traversing next
     * link to determine which thread it is. Determination of successor must avoid races with
     * newly queued nodes to set the "next" fields of their predecessors. This is solved when
     * necessary by checking backwards from the atomically updated "tail" when a node's successor
     * appears to be null. (Or, said differently, the next-links are an optimization so that we
     * don't usually need a backward scan.)
     *
     * <p>Cancellation introduces some conservatism to the basic algorithms. Since we must poll
     * for cancellation of other nodes, we can miss noticing whether a cancelled node is ahead or
     * behind us. This is dealt with by always unparking successors upon cancellation, allowing
     * them to stabilize on a new predecessor, unless we can identify an uncancelled predecessor
     * who will carry this responsibility.
     *
     * <p>CLH queues need a dummy header node to get started. But we don't create them on
     * construction, because it would be wasted effort if there is never contention. Instead, the
     * node is constructed and head and tail pointers are set upon first contention.
     *
     * <p>Threads waiting on Conditions use the same nodes, but use an additional link. Conditions
     * only need to link nodes in simple (non-concurrent) linked queues because they are only
     * accessed when exclusively held. Upon await, a node is inserted into a condition queue. Upon
     * signal, the node is transferred to the main queue. A special value of status field is used
     * to mark which queue a node is on.
     *
     * <p>Thanks go to Dave Dice, Mark Moir, Victor Luchangco, Bill Scherer and Michael Scott,
     * along with members of JSR-166 expert group, for helpful ideas, discussions, and critiques
     * on the design of this class.
     */
    static final class Node {
        /** Marker to indicate a node is waiting in shared mode */
        static final Node SHARED = new Node();
        /** Marker to indicate a node is waiting in exclusive mode */
        static final Node EXCLUSIVE = null;

        /** waitStatus value to indicate thread has cancelled */
        static final int CANCELLED =  1;
        /** waitStatus value to indicate successor's thread needs unparking */
        static final int SIGNAL    = -1;
        /** waitStatus value to indicate thread is waiting on condition */
        static final int CONDITION = -2;
        /**
         * waitStatus value to indicate the next acquireShared should
         * unconditionally propagate
         */
        static final int PROPAGATE = -3;

        /**
         * Status field, taking on only the values:
         *   SIGNAL:     The successor of this node is (or will soon be) blocked (via park), so
         *               the current node must unpark its successor when it releases or cancels.
         *               To avoid races, acquire methods must first indicate they need a signal,
         *               then retry the atomic acquire, and then, on failure, block.
         *   CANCELLED:  This node is cancelled due to timeout or interrupt. Nodes never leave
         *               this state. In particular, a thread with cancelled node never again
         *               blocks.
         *   CONDITION:  This node is currently on a condition queue. It will not be used as a
         *               sync queue node until transferred, at which time the status will be set
         *               to 0. (Use of this value here has nothing to do with the other uses of
         *               the field, but simplifies mechanics.)
         *   PROPAGATE:  A releaseShared should be propagated to other nodes. This is set (for
         *               head node only) in doReleaseShared to ensure propagation continues, even
         *               if other operations have since intervened.
         *   0:          None of the above
         *
         * The values are arranged numerically to simplify use. Non-negative values mean that a
         * node doesn't need to signal. So, most code doesn't need to check for particular values,
         * just for sign.
         *
         * The field is initialized to 0 for normal sync nodes, and CONDITION for condition nodes.
         * It is modified using CAS (or when possible, unconditional volatile writes).
         */
        volatile int waitStatus;

        /**
         * Link to predecessor node that current node/thread relies on for checking waitStatus.
         * Assigned during enqueing, and nulled out (for sake of GC) only upon dequeuing. Also,
         * upon cancellation of a predecessor, we short-circuit while finding a non-cancelled one,
         * which will always exist because the head node is never cancelled: A node becomes head
         * only as a result of successful acquire. A cancelled thread never succeeds in acquiring,
         * and a thread only cancels itself, not any other node.
         */
        volatile Node prev;

        /**
         * Link to the successor node that the current node/thread unparks upon release. Assigned
         * during enqueuing, adjusted when bypassing cancelled predecessors, and nulled out (for
         * sake of GC) when dequeued. The enq operation does not assign next field of a
         * predecessor until after attachment, so seeing a null next field does not necessarily
         * mean that node is at end of queue. However, if a next field appears to be null, we can
         * scan prev's from the tail to double-check. The next field of cancelled nodes is set to
         * point to the node itself instead of null, to make life easier for isOnSyncQueue.
         */
        volatile Node next;

        /**
         * The thread that enqueued this node. Initialized on construction and nulled out after
         * use.
         */
        volatile Thread thread;

        /**
         * Link to next node waiting on condition, or the special value SHARED. Because condition
         * queues are accessed only when holding in exclusive mode, we just need a simple linked
         * queue to hold nodes while they are waiting on conditions. They are then transferred to
         * the queue to re-acquire. And because conditions can only be exclusive, we save a field
         * by using special value to indicate shared mode.
         */
        Node nextWaiter;

        /**
         * Returns true if node is waiting in shared mode
         */
        final boolean isShared() {
            return nextWaiter == SHARED;
        }

        /**
         * Returns previous node, or throws NullPointerException if null. Use when predecessor
         * cannot be null. The null check could be elided, but is present to help the VM.
         *
         * @return the predecessor of this node
         */
        final Node predecessor() throws NullPointerException {
            Node p = prev;
            if (p == null)
                throw new NullPointerException();
            else
                return p;
        }

        Node() {    // Used to establish initial head or SHARED marker
        }

        Node(Thread thread, Node mode) {     // Used by addWaiter
            this.nextWaiter = mode;
            this.thread = thread;
        }

        Node(Thread thread, int waitStatus) { // Used by Condition
            this.waitStatus = waitStatus;
            this.thread = thread;
        }
    }

    /**
     * Head of the wait queue, lazily initialized. Except for initialization, it is modified only
     * via method setHead. Note: If head exists, its waitStatus is guaranteed not to be CANCELLED.
     */
    private transient volatile Node head;

    /**
     * Tail of the wait queue, lazily initialized. Modified only via method enq to add new wait
     * node.
     */
    private transient volatile Node tail;

    /**
     * The synchronization state.
     */
    private volatile int state;

    /**
     * Returns the current value of synchronization state.
     * This operation has memory semantics of a <tt>volatile</tt> read.
     * @return current state value
     */
    protected final int getState() {
        return state;
    }

    /**
     * Sets the value of synchronization state.
     * This operation has memory semantics of a <tt>volatile</tt> write.
     * @param newState the new state value
     */
    protected final void setState(int newState) {
        state = newState;
    }

    /**
     * Atomically sets synchronization state to the given updated value if the current state value
     * equals the expected value. This operation has memory semantics of a <tt>volatile</tt> read
     * and write.
     *
     * @param expect the expected value
     * @param update the new value
     * @return true if successful. False return indicates that the actual value was not equal to
     *         the expected value.
     */
    protected final boolean compareAndSetState(int expect, int update) {
        // See below for intrinsics setup to support this
        return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
    }

    // Queuing utilities

    /**
     * The number of nanoseconds for which it is faster to spin rather than to use timed park. A
     * rough estimate suffices to improve responsiveness with very short timeouts.
     */
    static final long spinForTimeoutThreshold = 1000L;

    /**
     * Inserts node into queue, initializing if necessary. See picture above.
     * @param node the node to insert
     * @return node's predecessor
     */
    private Node enq(final Node node) {
        for (;;) {
            Node t = tail;
            if (t == null) { // Must initialize
                if (compareAndSetHead(new Node()))
                    tail = head;
            } else {
                node.prev = t;
                if (compareAndSetTail(t, node)) {
                    t.next = node;
                    return t;
                }
            }
        }
    }

    /**
     * Creates and enqueues node for current thread and given mode.
     *
     * @param mode Node.EXCLUSIVE for exclusive, Node.SHARED for shared
     * @return the new node
     */
    private Node addWaiter(Node mode) {
        Node node = new Node(Thread.currentThread(), mode);
        // Try the fast path of enq; backup to full enq on failure
        Node pred = tail;
        if (pred != null) {
            node.prev = pred;
            if (compareAndSetTail(pred, node)) {
                pred.next = node;
                return node;
            }
        }
        enq(node);
        return node;
    }

    /**
     * Sets head of queue to be node, thus dequeuing. Called only by acquire methods. Also nulls
     * out unused fields for sake of GC and to suppress unnecessary signals and traversals.
     *
     * @param node the node
     */
    private void setHead(Node node) {
        head = node;
        node.thread = null;
        node.prev = null;
    }

    /**
     * Wakes up node's successor, if one exists.
     *
     * @param node the node
     */
    private void unparkSuccessor(Node node) {
        /*
         * If status is negative (i.e., possibly needing signal) try to clear in anticipation of
         * signalling. It is OK if this fails or if status is changed by waiting thread.
         */
        int ws = node.waitStatus;
        if (ws < 0)
            compareAndSetWaitStatus(node, ws, 0);

        /*
         * Thread to unpark is held in successor, which is normally just the next node. But if
         * cancelled or apparently null, traverse backwards from tail to find the actual
         * non-cancelled successor.
         */
        Node s = node.next;
        if (s == null || s.waitStatus > 0) {
            s = null;
            for (Node t = tail; t != null && t != node; t = t.prev)
                if (t.waitStatus <= 0)
                    s = t;
        }
        if (s != null)
            LockSupport.unpark(s.thread);
    }

    /**
     * Release action for shared mode -- signal successor and ensure propagation. (Note: For
     * exclusive mode, release just amounts to calling unparkSuccessor of head if it needs
     * signal.)
     */
    private void doReleaseShared() {
        /*
         * Ensure that a release propagates, even if there are other in-progress
         * acquires/releases. This proceeds in the usual way of trying to unparkSuccessor of head
         * if it needs signal. But if it does not, status is set to PROPAGATE to ensure that upon
         * release, propagation continues. Additionally, we must loop in case a new node is added
         * while we are doing this. Also, unlike other uses of unparkSuccessor, we need to know if
         * CAS to reset status fails, if so rechecking.
         */
        for (;;) {
            Node h = head;
            if (h != null && h != tail) {
                int ws = h.waitStatus;
                if (ws == Node.SIGNAL) {
                    if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                        continue;            // loop to recheck cases
                    unparkSuccessor(h);
                }
                else if (ws == 0 &&
                         !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                    continue;                // loop on failed CAS
            }
            if (h == head)                   // loop if head changed
                break;
        }
    }

    /**
     * Sets head of queue, and checks if successor may be waiting in shared mode, if so
     * propagating if either propagate > 0 or PROPAGATE status was set.
     *
     * @param node the node
     * @param propagate the return value from a tryAcquireShared
     */
    private void setHeadAndPropagate(Node node, int propagate) {
        Node h = head; // Record old head for check below
        setHead(node);
        /*
         * Try to signal next queued node if:
         *   Propagation was indicated by caller,
         *     or was recorded (as h.waitStatus) by a previous operation
         *     (note: this uses sign-check of waitStatus because
         *     PROPAGATE status may transition to SIGNAL.)
         * and
         *   The next node is waiting in shared mode,
         *     or we don't know, because it appears null
         *
         * The conservatism in both of these checks may cause unnecessary wake-ups, but only when
         * there are multiple racing acquires/releases, so most need signals now or soon anyway.
         */
        if (propagate > 0 || h == null || h.waitStatus < 0) {
            Node s = node.next;
            if (s == null || s.isShared())
                doReleaseShared();
        }
    }

    // Utilities for various versions of acquire

    /**
     * Cancels an ongoing attempt to acquire.
     *
     * @param node the node
     */
    private void cancelAcquire(Node node) {
        // Ignore if node doesn't exist
        if (node == null)
            return;

        node.thread = null;

        // Skip cancelled predecessors
        Node pred = node.prev;
        while (pred.waitStatus > 0)
            node.prev = pred = pred.prev;

        // predNext is the apparent node to unsplice. CASes below will
        // fail if not, in which case, we lost race vs another cancel
        // or signal, so no further action is necessary.
        Node predNext = pred.next;

        // Can use unconditional write instead of CAS here.
        // After this atomic step, other Nodes can skip past us.
        // Before, we are free of interference from other threads.
        node.waitStatus = Node.CANCELLED;

        // If we are the tail, remove ourselves.
        if (node == tail && compareAndSetTail(node, pred)) {
            compareAndSetNext(pred, predNext, null);
        } else {
            // If successor needs signal, try to set pred's next-link
            // so it will get one. Otherwise wake it up to propagate.
            int ws;
            if (pred != head &&
                ((ws = pred.waitStatus) == Node.SIGNAL ||
                 (ws <= 0 && compareAndSetWaitStatus(pred, ws, Node.SIGNAL))) &&
                pred.thread != null) {
                Node next = node.next;
                if (next != null && next.waitStatus <= 0)
                    compareAndSetNext(pred, predNext, next);
            } else {
                unparkSuccessor(node);
            }

            node.next = node; // help GC
        }
    }

    /**
     * Checks and updates status for a node that failed to acquire. Returns true if thread should
     * block. This is the main signal control in all acquire loops. Requires that pred ==
     * node.prev
     *
     * @param pred node's predecessor holding status
     * @param node the node
     * @return {@code true} if thread should block
     */
    private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
        int ws = pred.waitStatus;
        if (ws == Node.SIGNAL)
            /*
             * This node has already set status asking a release to signal it, so it can safely
             * park.
             */
            return true;
        if (ws > 0) {
            /*
             * Predecessor was cancelled. Skip over predecessors and indicate retry.
             */
            do {
                node.prev = pred = pred.prev;
            } while (pred.waitStatus > 0);
            pred.next = node;
        } else {
            /*
             * waitStatus must be 0 or PROPAGATE. Indicate that we need a signal, but don't park
             * yet. Caller will need to retry to make sure it cannot acquire before parking.
             */
            compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
        }
        return false;
    }

    /**
     * Convenience method to interrupt current thread.
     */
    private static void selfInterrupt() {
        Thread.currentThread().interrupt();
    }

    /**
     * Convenience method to park and then check if interrupted
     *
     * @return {@code true} if interrupted
     */
    private final boolean parkAndCheckInterrupt() {
        LockSupport.park(this);
        return Thread.interrupted();
    }

    /*
     * Various flavors of acquire, varying in exclusive/shared and control modes. Each is mostly
     * the same, but annoyingly different. Only a little bit of factoring is possible due to
     * interactions of exception mechanics (including ensuring that we cancel if tryAcquire throws
     * exception) and other control, at least not without hurting performance too much.
     */

    /**
     * Acquires in exclusive uninterruptible mode for thread already in queue. Used by condition
     * wait methods as well as acquire.
     *
     * @param node the node
     * @param arg the acquire argument
     * @return {@code true} if interrupted while waiting
     */
    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            for (;;) {
                final Node p = node.predecessor();
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return interrupted;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    interrupted = true;
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }
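The listing above stops at acquireQueued; the class-level Javadoc already contains a complete Mutex built on the framework. As a quick way to exercise the exclusive path (tryAcquire via CAS on state, addWaiter/acquireQueued for contended threads, unparkSuccessor on release), here is a small driver of my own. It is only a sketch: it assumes the Mutex class from the Javadoc example is compiled as shown and lives in the same package, and the class name MutexDemo, the counter, and the iteration count are illustrative, not part of the JDK source.

public class MutexDemo {
    public static void main(String[] args) throws InterruptedException {
        final Mutex mutex = new Mutex();   // the Mutex from the Javadoc example above (assumed available)
        final int[] counter = {0};

        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    mutex.lock();          // Sync.tryAcquire: CAS state 0 -> 1, otherwise enqueue and park
                    try {
                        counter[0]++;      // guarded by exclusive ownership of the synchronizer
                    } finally {
                        mutex.unlock();    // Sync.tryRelease: state back to 0, head's successor is unparked
                    }
                }
            }
        };

        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("counter = " + counter[0]);                      // 20000 if mutual exclusion held
        System.out.println("queued threads? " + mutex.hasQueuedThreads());  // false once both threads finish
    }
}

Under contention, lock() is just sync.acquire(1): tryAcquire is attempted first (the barging check described in the Javadoc), and only on failure is the thread wrapped in a Node by addWaiter and parked inside acquireQueued until its predecessor becomes head and releases.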
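The shared path runs through tryAcquireShared/releaseShared instead, with setHeadAndPropagate and doReleaseShared passing the wake-up along the queue. The BooleanLatch from the second Javadoc example is enough to observe this: several threads park in shared mode and a single signal() releases them all. The driver below is again my own illustration; it assumes BooleanLatch is compiled exactly as shown above, and the names and the sleep-based sequencing are arbitrary choices for the demo.

public class BooleanLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        final BooleanLatch latch = new BooleanLatch(); // the BooleanLatch from the Javadoc example above (assumed available)

        Runnable waiter = new Runnable() {
            public void run() {
                try {
                    // tryAcquireShared returns -1 until the latch is signalled,
                    // so each waiter is queued in shared mode and parked
                    latch.await();
                    System.out.println(Thread.currentThread().getName() + " released");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        };

        for (int i = 0; i < 3; i++)
            new Thread(waiter, "waiter-" + i).start();

        Thread.sleep(100); // crude pause so the waiters have enqueued and parked before the signal
        latch.signal();    // releaseShared(1): sets state to 1 and lets doReleaseShared propagate the wake-up
    }
}

Each woken waiter re-runs tryAcquireShared, now gets a non-negative result, and setHeadAndPropagate hands the head position on to the next shared waiter, which is why a single signal is enough to release every queued thread.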