HashMap Source Code
Approach: the original Javadoc and implementation notes are translated first, and extra inline comments are then added to explain the code. A few small, self-contained examples of the key mechanisms (hash bit-mixing, capacity rounding, and bucket splitting on resize) follow the listing below.
1 package java.util; 2 3 import java.io.IOException; 4 import java.io.InvalidObjectException; 5 import java.io.Serializable; 6 import java.lang.reflect.ParameterizedType; 7 import java.lang.reflect.Type; 8 import java.util.function.BiConsumer; 9 import java.util.function.BiFunction; 10 import java.util.function.Consumer; 11 import java.util.function.Function; 12 import sun.misc.SharedSecrets; 13 14 /** 15 * 基于Hash表的Map界面的实现。 16 * 此实现提供了所有可选的映射操作,并允许NULL value和NULL key。 17 * (HASMAP类大致等同于Hash表,除了它是不同步的,并且允许空值。) 18 * 这个类不保证映射的顺序;特别是,它不保证顺序将随时间保持不变。 19 * 20 * 这个实现为基本操作(get和put)提供了恒定时间的性能,假设Hash函数将元素适当地分散到桶中。 21 * 对集合视图的迭代需要与HashMap实例的“容量”(桶的数量)及其大小(key-value映射的数量)成比例的时间。 22 * 因此,如果迭代性能很重要,那么不要将初始容量设置得太高(或负载系数太低)非常重要。 23 * 24 * HASMAP的一个实例有两个参数影响其性能:初始容量和负载因子。 25 * 容量是Hash表中的桶数,初始容量只是创建Hash表时的容量。 26 * 负载因子是在容量自动增加之前允许Hash表的满度的量度。 27 * 当Hash表中的条目的数量超过负载系数和当前容量的乘积时,对Hash表进行重新Hash(即,重建内部数据结构),使得Hash表具有大约两倍于桶的数量。 28 * 29 * 作为一般规则,默认负载因子(0.75)在时间和空间成本之间提供了很好的折衷。 30 * 较高的值减少了空间开销,但是增加了查找成本(反映在HashMap类的大多数操作中,包括get和put)。 31 * 在设置映射的初始容量时,应该考虑映射中的预期条目数量及其负载因子,以便最小化重散列操作的数量。 32 * 如果初始容量大于最大条目数除以负载因子,则不会发生重新散列操作。 33 * 34 * 如果要在HashMap实例中存储许多映射,则创建具有足够大容量的映射将允许更有效地存储映射,而不是让它根据需要执行自动重散列来增长表。 35 * 注意,使用同一个HASCODE()的多个key是降低任何Hash表性能的可靠方法。 36 * 为了改善影响,当key可比较时,该类可以使用key之间的比较顺序来帮助断开关系。 37 * 38 * 请注意,此实现不同步。 39 * 如果多个线程同时访问Hash映射,并且至少一个线程在结构上修改映射,则必须在外部对其进行同步。 40 * (结构修改是添加或删除一个或多个映射的任何操作;仅仅更改与实例已经包含的key相关联的value不是结构修改。) 41 * 这通常是通过对一些自然封装Map的对象进行同步来实现的。 42 * 43 * 如果不存在这样的对象,则该映射应该使用Collections.synchronizedMap方法“包装”。 44 * 最好在创建时这样做,以防止意外地异步访问映射:Map m=Collections.synchronizedMap(new HashMap(...)); 45 * 46 * 所有此类的“集合视图方法”返回的迭代器都是fail-fast的:如果映射在创建迭代器之后的任何时间以除了通过迭代器自己的remove方法之外的任何方式在结构上被修改,则迭代器将抛出ConcurrentModificationException。 47 * 因此,在面临并发修改时,迭代器会快速而干净地失败,而不会在未来不确定的时间冒任意、非确定性行为的风险。 48 * 49 * 注意,迭代器的fail-fast行为不能得到保证,因为一般来说,在存在非同步的并发修改的情况下,不可能做出任何硬保证。 50 * 故障快速迭代器在尽力的基础上抛出CONTRONUTION修改异常。 51 * 因此,编写依赖于此异常的程序来确保其正确性是错误的:迭代器的快速失败行为应该只用于检测bug。 52 * 53 */ 54 public class HashMap<K,V> extends AbstractMap<K,V> 55 implements Map<K,V>, Cloneable, Serializable { 56 57 private static final long serialVersionUID = 362498820763181265L; 58 59 /** 60 * * 执行的笔记。 61 * 62 * 这个映射通常作为 链表(用桶装)哈希表,但是当链表太大时,它们被转换为树箱,每个树箱的结构与java.util.TreeMap中的类似。 63 * 大多数方法尝试使用正常的容器,但在适用时中继到树型方法(简单地通过检查节点的实例)。 64 * 可以像其他任何一样遍历和使用树状物的容器,但是当人口过密时,还支持更快的查找。 65 * 然而,由于正常使用的绝大多数箱子都不是过度填充的,因此在表方法过程中,检查树箱是否存在可能会延迟。 66 * 67 * 树箱(即,其元素都是TreeNode的箱)主要由hashCode排序,但是在联结的情况下,如果两个元素具有相同的“C类实现Comparable<C>”,则键入它们的 compareTo 方法来排序。 68 * (我们通过反射来保守地检查泛型类型,以验证这一点——参见方法SababeCabaseFor)。 69 * 当键具有不同的散列或者可排序时,树箱增加的复杂性对于提供最差情况O(log n)操作是值得的,因此,在hashCode()方法返回分布较差的值的意外或恶意使用下,性能优雅地降低,就像我们LL是许多密钥共享Hash码的那些,只要它们是可比的。 70 * (如果这两种方法都不适用,我们可能会浪费时间和空间的两倍,而不采取任何预防措施。 71 * 但是,唯一已知的案例源于用户编程的不良行为,这些做法已经非常缓慢,这几乎没有什么区别。 72 * 73 * 因为TreeNode的大小大约是常规节点的两倍,因此我们只有在容器中包含足够的节点来保证使用时才使用它们(参见TREEIFY_THRESHOLD)。 74 * 当它们变得太小(由于移除或调整大小)时,它们被转换回普通容器。 75 * 在使用分布良好的用户哈希代码的情况下,很少使用树箱。 76 * 77 * 理想情况下,在随机hashCodes下,容器中的节点的频率遵循泊松分布(http://en.wikipedia.org/wiki/Poisson_.),对于0.75的默认大小调整阈值,平均参数约为0.5,尽管由于大小调整粒度而存在很大的差异。 78 * 忽略方差,列表大小k的期望发生是(exp(-0.5) * pow(0.5, k) / factorial(k)). 
79 * 第一个值是: 80 * 0: 0.60653066 81 * 1: 0.30326533 82 * 2: 0.07581633 83 * 3: 0.01263606 84 * 4: 0.00157952 85 * 5: 0.00015795 86 * 6: 0.00001316 87 * 7: 0.00000094 88 * 8: 0.00000006 89 * 更多:小于千万分之一 90 * 91 * 树箱的根通常是它的第一个节点。 92 * 但是,有时候(当前仅在Iterator.remove时),根可能在其他地方,但是可以在父链接之后恢复(方法TreeNode.root())。 93 * 94 * 所有适用的内部方法都接受哈希代码作为参数(通常由公共方法提供),允许它们彼此调用而不重新计算用户哈希代码。 95 * 大多数内部方法还接受“tab”参数,通常是当前表,但在调整大小或转换时可能是新表或旧表。 96 * 97 * 在桶树化,分裂,或 反树化,我们把他们放在同一个相对访问/遍历顺序(即场节点。下)更好的保存的地方,并稍微简化的分裂和遍历调用Iterator.remove操作。 98 * 当在插入中使用比较器时,为了在重新平衡中保持总排序(或在这里需要的接近),我们将类和身份HashCodes作为连接断路器进行比较。 99 * 100 * 由于子类LinkedHashMap的存在,普通VS树模式之间的使用和转换是复杂的。 101 * 见下面的钩子被调用方法定义的插入后,去除和访问,让LinkedHashMap的内部或保持独立的力学。 102 * (这也要求将一个MAP实例传递给可能创建新节点的一些实用工具方法。) 103 * 104 * 类似SSA的编码风格的并发编程有助于避免所有扭曲的指针操作中的混叠错误。 105 * 106 */ 107 108 109 /** 110 * 默认初始容量。必须是2的幂 111 */ 112 113 static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16 114 115 /** 116 * 最大容量,如果一个更高的值隐式地由任何一个带有参数的构造函数指定。必须是2的30次幂 117 */ 118 static final int MAXIMUM_CAPACITY = 1 << 30; 119 120 /** 121 * 在构造函数中没有指定时使用的负载因子。 122 */ 123 static final float DEFAULT_LOAD_FACTOR = 0.75f; 124 125 /** 126 * 用于桶转换为树的阀值 ,当添加一个节点到一个有多个节点的桶中时,超过阀值转化为树 127 * 该值必须大于2,并且至少应为8,以与树移除中的假设相吻合,在收缩后转换为桶。 128 */ 129 static final int TREEIFY_THRESHOLD = 8; 130 131 /** 132 * 在重新调整大小操作期间,非树化的计数阈值。应小于TeeIFIY阈值,最多6个网格与收缩检测下去除。 133 */ 134 135 static final int UNTREEIFY_THRESHOLD = 6; 136 137 /** 138 * 容器可被树化的最小表容量。 139 * 如果表中的节点太多,则重新调整表大小 140 * 应该至少有 4*TREEIFY_THRESHOLD ,以避免调整大小和树化阈值之间的冲突。 141 */ 142 static final int MIN_TREEIFY_CAPACITY = 64; 143 144 /** 145 * 基本节点,用于大多数条目 146 */ 147 static class Node<K,V> implements Map.Entry<K,V> { 148 final int hash; 149 final K key; 150 V value; 151 Node<K,V> next; 152 153 Node(int hash, K key, V value, Node<K,V> next) { 154 this.hash = hash; 155 this.key = key; 156 this.value = value; 157 this.next = next; 158 } 159 160 public final K getKey() { return key; } 161 public final V getValue() { return value; } 162 public final String toString() { return key + "=" + value; } 163 164 public final int hashCode() { 165 return Objects.hashCode(key) ^ Objects.hashCode(value); 166 } 167 168 public final V setValue(V newValue) { 169 V oldValue = value; 170 value = newValue; 171 return oldValue; 172 } 173 174 public final boolean equals(Object o) { 175 if (o == this) 176 return true; 177 if (o instanceof Map.Entry) { 178 Map.Entry<?,?> e = (Map.Entry<?,?>)o; 179 if (Objects.equals(key, e.getKey()) && 180 Objects.equals(value, e.getValue())) 181 return true; 182 } 183 return false; 184 } 185 } 186 187 /* ---------------- 静态方法 -------------- */ 188 189 /** 190 * 计算KEY.HASCODE()和散列(XOR)更高的散列位。 191 * 因为表使用power-of-two,所以仅在当前掩码上方的位上变化的散列集合总是会发生碰撞。 192 * (已知的例子是在小表中持有连续整数的浮动键集)。 193 * 因此,我们应用一种将更高比特的影响向下传播的变换。在比特扩展的速度、效用和质量之间存在权衡。 194 * 因为许多公共散列集已经合理地分布了(因此不能从扩展中受益),并且因为我们使用树来处理容器中的大型冲突集 195 * ,所以我们只以最便宜的方式异或某些移位的位,以减少系统损失,以及合并最高的位,否则将永远不会在索引计算中使用,因为表的边界。 196 */ 197 198 static final int hash(Object key) { 199 int h; 200 return (key == null) ? 
0 : (h = key.hashCode()) ^ (h >>> 16); 201 } 202 203 /** 204 * 如果他是Comparable的子类返回 x 的类 , 否则返回null 205 */ 206 static Class<?> comparableClassFor(Object x) { 207 if (x instanceof Comparable) { 208 Class<?> c; Type[] ts, as; Type t; ParameterizedType p; 209 if ((c = x.getClass()) == String.class) // bypass checks 210 return c; 211 if ((ts = c.getGenericInterfaces()) != null) { 212 for (int i = 0; i < ts.length; ++i) { 213 if (((t = ts[i]) instanceof ParameterizedType) && 214 ((p = (ParameterizedType)t).getRawType() == 215 Comparable.class) && 216 (as = p.getActualTypeArguments()) != null && 217 as.length == 1 && as[0] == c) // type arg is c 218 return c; 219 } 220 } 221 } 222 return null; 223 } 224 225 /** 226 * 如果x匹配kc (k的隐藏比较类) 返回k与x的比较值 否则返回0 227 */ 228 @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable 229 static int compareComparables(Class<?> kc, Object k, Object x) { 230 return (x == null || x.getClass() != kc ? 0 : 231 ((Comparable)k).compareTo(x)); 232 } 233 234 /** 235 * 返回一个最接近的2的次幂的权值,用来给目标作为容量 236 */ 237 static final int tableSizeFor(int cap) { 238 int n = cap - 1; 239 n |= n >>> 1; 240 n |= n >>> 2; 241 n |= n >>> 4; 242 n |= n >>> 8; 243 n |= n >>> 16; 244 return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1; 245 } 246 247 /* ---------------- 字段 -------------- */ 248 249 250 251 /** 252 *表,在第一次使用时初始化,并根据需要调整大小。分配时,长度总是两个幂。 253 *(在某些操作中,我们也允许长度为零,以允许当前不需要的引导机制)。 254 */ 255 transient Node<K,V>[] table; 256 257 /** 258 * 保存缓存的entrySet ,注意在抽象类AbstractMap的字段中,用与keySet和values 259 */ 260 transient Set<Entry<K,V>> entrySet; 261 262 /** 263 * map 中 key - value 映射的总数 264 * 265 */ 266 transient int size; 267 268 /** 269 * 这个map 被 修改结构的次数, 270 * 这个字段用于使 HashMap 的 Collection-views 上的迭代器 fail-fast 参见 ConcurrentModificationException 271 */ 272 transient int modCount; 273 274 /** 275 * 下一次大小调整后的大小 容量*负载因子 capacity * loadfactor 276 */ 277 int threshold; 278 279 /** 280 * hash 表的 负载因子 281 */ 282 final float loadFactor; 283 284 /*---------------- 公共方法 -------------- */ 285 286 /** 287 * 构造具有指定初始容量和负载因子的空HashMap。 288 * @param initialCapacity 初始容量 289 * @param loadFactor 负载因子 290 * @throws IllegalArgumentException 如果初始容量为负或负载因子为非正 291 */ 292 public HashMap(int initialCapacity, float loadFactor) { 293 if (initialCapacity < 0) 294 throw new IllegalArgumentException("Illegal initial capacity: " + 295 initialCapacity); 296 if (initialCapacity > MAXIMUM_CAPACITY) 297 initialCapacity = MAXIMUM_CAPACITY; 298 if (loadFactor <= 0 || Float.isNaN(loadFactor)) 299 throw new IllegalArgumentException("Illegal load factor: " + 300 loadFactor); 301 this.loadFactor = loadFactor; 302 this.threshold = tableSizeFor(initialCapacity); 303 } 304 305 /** 306 * 构造具有指定初始容量和默认负载因子(0.75)的空HashMap。 307 */ 308 public HashMap(int initialCapacity) { 309 this(initialCapacity, DEFAULT_LOAD_FACTOR); 310 } 311 312 /** 313 * 构造具有默认初始容量(16)和负载因子(0.75)的空HashMap 314 */ 315 public HashMap() { 316 this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted 317 } 318 319 /** 320 *构造一个新的哈希图,其映射与指定的映射相同。 321 *创建HashMap时具有默认的负载因子(0.75)和足够的初始容量,以便将映射保存在指定的Map中。 322 */ 323 public HashMap(Map<? extends K, ? extends V> m) { 324 this.loadFactor = DEFAULT_LOAD_FACTOR; 325 putMapEntries(m, false); 326 } 327 328 /** 329 * 实现 Map.putAll 和 Map 构造函数 330 */ 331 final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) { 332 int s = m.size(); 333 if (s > 0) { 334 if (table == null) { // pre-size 335 float ft = ((float)s / loadFactor) + 1.0F; 336 int t = ((ft < (float)MAXIMUM_CAPACITY) ? 
337 (int)ft : MAXIMUM_CAPACITY); 338 if (t > threshold) 339 threshold = tableSizeFor(t); 340 } 341 else if (s > threshold) 342 resize(); 343 for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) { 344 K key = e.getKey(); 345 V value = e.getValue(); 346 putVal(hash(key), key, value, false, evict); 347 } 348 } 349 } 350 351 /** 352 * 返回 键值对的数量 353 */ 354 public int size() { 355 return size; 356 } 357 358 /** 359 * 返回 容器是否为空 360 */ 361 public boolean isEmpty() { 362 return size == 0; 363 } 364 365 /** 366 * 返回指定的键映射到的值,如果该映射不包含该键的映射 返回null。 367 * 返回null 不一定不包含key 也可能存的就是null 368 */ 369 370 public V get(Object key) { 371 Node<K,V> e; 372 return (e = getNode(hash(key), key)) == null ? null : e.value; 373 } 374 375 /** 376 * 实现Map.get 和相关方法 377 */ 378 final Node<K,V> getNode(int hash, Object key) { 379 Node<K,V>[] tab; Node<K,V> first, e; int n; K k; 380 if ((tab = table) != null && (n = tab.length) > 0 && 381 (first = tab[(n - 1) & hash]) != null) { 382 if (first.hash == hash && // always check first node 383 ((k = first.key) == key || (key != null && key.equals(k)))) 384 return first; 385 if ((e = first.next) != null) { 386 if (first instanceof TreeNode) 387 return ((TreeNode<K,V>)first).getTreeNode(hash, key); 388 do { 389 if (e.hash == hash && 390 ((k = e.key) == key || (key != null && key.equals(k)))) 391 return e; 392 } while ((e = e.next) != null); 393 } 394 } 395 return null; 396 } 397 398 /** 399 * 返回map是否包含指定key 400 */ 401 public boolean containsKey(Object key) { 402 return getNode(hash(key), key) != null; 403 } 404 405 /** 406 * 将指定key-value 关联放入map 407 * 如果之前存在key 则替换旧值 408 * 返回旧值或null 409 */ 410 public V put(K key, V value) { 411 return putVal(hash(key), key, value, false, true); 412 } 413 414 /** 415 * 实现Map.put 和相关方法 * 416 * @param hash hash for key 417 * @param key the key 418 * @param value the value to put 419 * @param onlyIfAbsent 如果为true 不改变现有的值 420 * @param evict 如果 false 则表处于创建模式 421 * @return 返回之前key映射的值 422 */ 423 final V putVal(int hash, K key, V value, boolean onlyIfAbsent, 424 boolean evict) { 425 Node<K,V>[] tab; Node<K,V> p; int n, i; 426 if ((tab = table) == null || (n = tab.length) == 0) 427 n = (tab = resize()).length; 428 if ((p = tab[i = (n - 1) & hash]) == null) 429 tab[i] = newNode(hash, key, value, null); 430 else { 431 Node<K,V> e; K k; 432 if (p.hash == hash && 433 ((k = p.key) == key || (key != null && key.equals(k)))) 434 e = p; 435 else if (p instanceof TreeNode) 436 e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value); 437 else { 438 for (int binCount = 0; ; ++binCount) { 439 if ((e = p.next) == null) { 440 p.next = newNode(hash, key, value, null); 441 if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st 442 treeifyBin(tab, hash); 443 break; 444 } 445 if (e.hash == hash && 446 ((k = e.key) == key || (key != null && key.equals(k)))) 447 break; 448 p = e; 449 } 450 } 451 if (e != null) { // existing mapping for key 452 V oldValue = e.value; 453 if (!onlyIfAbsent || oldValue == null) 454 e.value = value; 455 afterNodeAccess(e); 456 return oldValue; 457 } 458 } 459 ++modCount; 460 if (++size > threshold) 461 resize(); 462 afterNodeInsertion(evict); 463 return null; 464 } 465 466 /** 467 * 初始化或者变为之前的2倍大小 , 如果为空,则按照保持在字段阈值中的初始容量目标分配。 468 * 否则,因为我们使用的是二倍展开的幂,所以每个桶中的元素必须保持在相同的索引上,或者在新表中以2次幂移动。 469 * @return the table 470 */ 471 final Node<K,V>[] resize() { 472 Node<K,V>[] oldTab = table; 473 int oldCap = (oldTab == null) ? 
0 : oldTab.length; 474 int oldThr = threshold; 475 int newCap, newThr = 0; 476 if (oldCap > 0) { 477 if (oldCap >= MAXIMUM_CAPACITY) { 478 threshold = Integer.MAX_VALUE; 479 return oldTab; 480 } 481 else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && 482 oldCap >= DEFAULT_INITIAL_CAPACITY) 483 newThr = oldThr << 1; // double threshold 484 } 485 else if (oldThr > 0) // initial capacity was placed in threshold 486 newCap = oldThr; 487 else { // zero initial threshold signifies using defaults 488 newCap = DEFAULT_INITIAL_CAPACITY; 489 newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY); 490 } 491 if (newThr == 0) { 492 float ft = (float)newCap * loadFactor; 493 newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ? 494 (int)ft : Integer.MAX_VALUE); 495 } 496 threshold = newThr; 497 @SuppressWarnings({"rawtypes","unchecked"}) 498 Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap]; 499 table = newTab; 500 if (oldTab != null) { 501 for (int j = 0; j < oldCap; ++j) { 502 Node<K,V> e; 503 if ((e = oldTab[j]) != null) { 504 oldTab[j] = null; 505 if (e.next == null) 506 newTab[e.hash & (newCap - 1)] = e; 507 else if (e instanceof TreeNode) 508 ((TreeNode<K,V>)e).split(this, newTab, j, oldCap); 509 else { // preserve order 510 Node<K,V> loHead = null, loTail = null; 511 Node<K,V> hiHead = null, hiTail = null; 512 Node<K,V> next; 513 do { 514 next = e.next; 515 if ((e.hash & oldCap) == 0) { 516 if (loTail == null) 517 loHead = e; 518 else 519 loTail.next = e; 520 loTail = e; 521 } 522 else { 523 if (hiTail == null) 524 hiHead = e; 525 else 526 hiTail.next = e; 527 hiTail = e; 528 } 529 } while ((e = next) != null); 530 if (loTail != null) { 531 loTail.next = null; 532 newTab[j] = loHead; 533 } 534 if (hiTail != null) { 535 hiTail.next = null; 536 newTab[j + oldCap] = hiHead; 537 } 538 } 539 } 540 } 541 } 542 return newTab; 543 } 544 545 /** 546 * 替换给定哈希索引中的所有链表节点,除非表太小,在这种情况下,调整大小。 547 */ 548 final void treeifyBin(Node<K,V>[] tab, int hash) { 549 int n, index; Node<K,V> e; 550 if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY) 551 resize(); 552 else if ((e = tab[index = (n - 1) & hash]) != null) { 553 TreeNode<K,V> hd = null, tl = null; 554 do { 555 TreeNode<K,V> p = replacementTreeNode(e, null); 556 if (tl == null) 557 hd = p; 558 else { 559 p.prev = tl; 560 tl.next = p; 561 } 562 tl = p; 563 } while ((e = e.next) != null); 564 if ((tab[index] = hd) != null) 565 hd.treeify(tab); 566 } 567 } 568 569 /** 570 * 从指定map抄写全部的键值对到此map 571 * 这些映射将替换该映射对当前在指定映射中的任何键的任何映射。 572 * @param m mappings to be stored in this map 573 * @throws NullPointerException if the specified map is null 574 */ 575 public void putAll(Map<? extends K, ? extends V> m) { 576 putMapEntries(m, true); 577 } 578 579 /** 580 * 如果存在,则从该映射中移除指定键的映射。 581 */ 582 public V remove(Object key) { 583 Node<K,V> e; 584 return (e = removeNode(hash(key), key, null, false, true)) == null ? 
585 null : e.value; 586 } 587 588 /** 589 * 实现Map.remove 和相关方法 590 * @param hash hash for key 591 * @param key the key 592 * @param value 匹配匹配值的值,否则忽略 593 * @param matchValue 如果为true,则仅在值相等时移除 594 * @param movable 如果false在移除时不移动其他节点 595 * @return the node, or null if none 596 */ 597 final Node<K,V> removeNode(int hash, Object key, Object value, 598 boolean matchValue, boolean movable) { 599 Node<K,V>[] tab; Node<K,V> p; int n, index; 600 if ((tab = table) != null && (n = tab.length) > 0 && 601 (p = tab[index = (n - 1) & hash]) != null) { 602 Node<K,V> node = null, e; K k; V v; 603 if (p.hash == hash && 604 ((k = p.key) == key || (key != null && key.equals(k)))) 605 node = p; 606 else if ((e = p.next) != null) { 607 if (p instanceof TreeNode) 608 node = ((TreeNode<K,V>)p).getTreeNode(hash, key); 609 else { 610 do { 611 if (e.hash == hash && 612 ((k = e.key) == key || 613 (key != null && key.equals(k)))) { 614 node = e; 615 break; 616 } 617 p = e; 618 } while ((e = e.next) != null); 619 } 620 } 621 if (node != null && (!matchValue || (v = node.value) == value || 622 (value != null && value.equals(v)))) { 623 if (node instanceof TreeNode) 624 ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable); 625 else if (node == p) 626 tab[index] = node.next; 627 else 628 p.next = node.next; 629 ++modCount; 630 --size; 631 afterNodeRemoval(node); 632 return node; 633 } 634 } 635 return null; 636 } 637 638 /** 639 * 从该映射中移除所有映射。 640 *此调用返回后,map将为空。 641 */ 642 public void clear() { 643 Node<K,V>[] tab; 644 modCount++; 645 if ((tab = table) != null && size > 0) { 646 size = 0; 647 for (int i = 0; i < tab.length; ++i) 648 tab[i] = null; 649 } 650 } 651 652 /** 653 * 如果此映射将一个或多个键映射到指定值,则返回true。 654 */ 655 public boolean containsValue(Object value) { 656 Node<K,V>[] tab; V v; 657 if ((tab = table) != null && size > 0) { 658 for (int i = 0; i < tab.length; ++i) { 659 for (Node<K,V> e = tab[i]; e != null; e = e.next) { 660 if ((v = e.value) == value || 661 (value != null && value.equals(v))) 662 return true; 663 } 664 } 665 } 666 return false; 667 } 668 669 /** 670 * 返回map中所有的key的set集合 671 * 这个set由map支持,因此map的改变反应在set中,反之亦然 672 * 如果在对set进行迭代时,除了通过迭代器自己的移除操作之外,修改map,则迭代的结果是未定义的 673 * set支持元素删除,通过Set.remove, removeAll, retainAll, and clear操作从map中删除响应的键值对 674 * 它不支持add和addAll操作 675 */ 676 public Set<K> keySet() { 677 Set<K> ks = keySet; 678 if (ks == null) { 679 ks = new KeySet(); 680 keySet = ks; 681 } 682 return ks; 683 } 684 685 final class KeySet extends AbstractSet<K> { 686 public final int size() { return size; } 687 public final void clear() { HashMap.this.clear(); } 688 public final Iterator<K> iterator() { return new KeyIterator(); } 689 public final boolean contains(Object o) { return containsKey(o); } 690 public final boolean remove(Object key) { 691 return removeNode(hash(key), key, null, false, true) != null; 692 } 693 public final Spliterator<K> spliterator() { 694 return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0); 695 } 696 public final void forEach(Consumer<? 
super K> action) { 697 Node<K,V>[] tab; 698 if (action == null) 699 throw new NullPointerException(); 700 if (size > 0 && (tab = table) != null) { 701 int mc = modCount; 702 for (int i = 0; i < tab.length; ++i) { 703 for (Node<K,V> e = tab[i]; e != null; e = e.next) 704 action.accept(e.key); 705 } 706 if (modCount != mc) 707 throw new ConcurrentModificationException(); 708 } 709 } 710 } 711 712 /** 713 * 返回一个map中所有value的集合Collection 714 * 这个集合是由map支持的,所以map的改变会反应在集合中,反之亦然 715 * 如果在对集合进行迭代期间,除了通过迭代器自己的移除操作之外,修改map,则迭代的结果是未定义的 716 * 集合支持删除元素,通过Collection.remove, removeAll, retainAll and clear 操作,从map中删除对应的键值对, 717 * 它不支持add和addAll操作 718 * 719 * @return a view of the values contained in this map 720 */ 721 public Collection<V> values() { 722 Collection<V> vs = values; 723 if (vs == null) { 724 vs = new Values(); 725 values = vs; 726 } 727 return vs; 728 } 729 730 final class Values extends AbstractCollection<V> { 731 public final int size() { return size; } 732 public final void clear() { HashMap.this.clear(); } 733 public final Iterator<V> iterator() { return new ValueIterator(); } 734 public final boolean contains(Object o) { return containsValue(o); } 735 public final Spliterator<V> spliterator() { 736 return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0); 737 } 738 public final void forEach(Consumer<? super V> action) { 739 Node<K,V>[] tab; 740 if (action == null) 741 throw new NullPointerException(); 742 if (size > 0 && (tab = table) != null) { 743 int mc = modCount; 744 for (int i = 0; i < tab.length; ++i) { 745 for (Node<K,V> e = tab[i]; e != null; e = e.next) 746 action.accept(e.value); 747 } 748 if (modCount != mc) 749 throw new ConcurrentModificationException(); 750 } 751 } 752 } 753 754 /** 755 * 返回一个map中包含的所有键值对的set 756 * 这个set是由map支持的,所以对map的变化会反应在set中,反之亦然 757 * 如果在set迭代期间,除了通过迭代器自己的移除操作之外,修改map,则迭代的结果是未定义的 758 * 集合支持删除元素,通过Iterator.remove, 759 * Set.remove, removeAll, retainAll and 760 * clear操作,从map中删除对应的键值对 761 * 它不支持add和addAll操作 762 */ 763 public Set<Map.Entry<K,V>> entrySet() { 764 Set<Map.Entry<K,V>> es; 765 return (es = entrySet) == null ? (entrySet = new EntrySet()) : es; 766 } 767 768 final class EntrySet extends AbstractSet<Map.Entry<K,V>> { 769 public final int size() { return size; } 770 public final void clear() { HashMap.this.clear(); } 771 public final Iterator<Map.Entry<K,V>> iterator() { 772 return new EntryIterator(); 773 } 774 public final boolean contains(Object o) { 775 if (!(o instanceof Map.Entry)) 776 return false; 777 Map.Entry<?,?> e = (Map.Entry<?,?>) o; 778 Object key = e.getKey(); 779 Node<K,V> candidate = getNode(hash(key), key); 780 return candidate != null && candidate.equals(e); 781 } 782 public final boolean remove(Object o) { 783 if (o instanceof Map.Entry) { 784 Map.Entry<?,?> e = (Map.Entry<?,?>) o; 785 Object key = e.getKey(); 786 Object value = e.getValue(); 787 return removeNode(hash(key), key, value, true, true) != null; 788 } 789 return false; 790 } 791 public final Spliterator<Map.Entry<K,V>> spliterator() { 792 return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0); 793 } 794 public final void forEach(Consumer<? 
super Map.Entry<K,V>> action) { 795 Node<K,V>[] tab; 796 if (action == null) 797 throw new NullPointerException(); 798 if (size > 0 && (tab = table) != null) { 799 int mc = modCount; 800 for (int i = 0; i < tab.length; ++i) { 801 for (Node<K,V> e = tab[i]; e != null; e = e.next) 802 action.accept(e); 803 } 804 if (modCount != mc) 805 throw new ConcurrentModificationException(); 806 } 807 } 808 } 809 810 // 对JDK8 Map 扩展方法的重写 811 812 @Override 813 public V getOrDefault(Object key, V defaultValue) { 814 Node<K,V> e; 815 return (e = getNode(hash(key), key)) == null ? defaultValue : e.value; 816 } 817 818 @Override 819 public V putIfAbsent(K key, V value) { 820 return putVal(hash(key), key, value, true, true); 821 } 822 823 @Override 824 public boolean remove(Object key, Object value) { 825 return removeNode(hash(key), key, value, true, true) != null; 826 } 827 828 @Override 829 public boolean replace(K key, V oldValue, V newValue) { 830 Node<K,V> e; V v; 831 if ((e = getNode(hash(key), key)) != null && 832 ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) { 833 e.value = newValue; 834 afterNodeAccess(e); 835 return true; 836 } 837 return false; 838 } 839 840 @Override 841 public V replace(K key, V value) { 842 Node<K,V> e; 843 if ((e = getNode(hash(key), key)) != null) { 844 V oldValue = e.value; 845 e.value = value; 846 afterNodeAccess(e); 847 return oldValue; 848 } 849 return null; 850 } 851 852 @Override 853 public V computeIfAbsent(K key, 854 Function<? super K, ? extends V> mappingFunction) { 855 if (mappingFunction == null) 856 throw new NullPointerException(); 857 int hash = hash(key); 858 Node<K,V>[] tab; Node<K,V> first; int n, i; 859 int binCount = 0; 860 TreeNode<K,V> t = null; 861 Node<K,V> old = null; 862 if (size > threshold || (tab = table) == null || 863 (n = tab.length) == 0) 864 n = (tab = resize()).length; 865 if ((first = tab[i = (n - 1) & hash]) != null) { 866 if (first instanceof TreeNode) 867 old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key); 868 else { 869 Node<K,V> e = first; K k; 870 do { 871 if (e.hash == hash && 872 ((k = e.key) == key || (key != null && key.equals(k)))) { 873 old = e; 874 break; 875 } 876 ++binCount; 877 } while ((e = e.next) != null); 878 } 879 V oldValue; 880 if (old != null && (oldValue = old.value) != null) { 881 afterNodeAccess(old); 882 return oldValue; 883 } 884 } 885 V v = mappingFunction.apply(key); 886 if (v == null) { 887 return null; 888 } else if (old != null) { 889 old.value = v; 890 afterNodeAccess(old); 891 return v; 892 } 893 else if (t != null) 894 t.putTreeVal(this, tab, hash, key, v); 895 else { 896 tab[i] = newNode(hash, key, v, first); 897 if (binCount >= TREEIFY_THRESHOLD - 1) 898 treeifyBin(tab, hash); 899 } 900 ++modCount; 901 ++size; 902 afterNodeInsertion(true); 903 return v; 904 } 905 906 public V computeIfPresent(K key, 907 BiFunction<? super K, ? super V, ? extends V> remappingFunction) { 908 if (remappingFunction == null) 909 throw new NullPointerException(); 910 Node<K,V> e; V oldValue; 911 int hash = hash(key); 912 if ((e = getNode(hash, key)) != null && 913 (oldValue = e.value) != null) { 914 V v = remappingFunction.apply(key, oldValue); 915 if (v != null) { 916 e.value = v; 917 afterNodeAccess(e); 918 return v; 919 } 920 else 921 removeNode(hash, key, null, false, true); 922 } 923 return null; 924 } 925 926 @Override 927 public V compute(K key, 928 BiFunction<? super K, ? super V, ? 
extends V> remappingFunction) { 929 if (remappingFunction == null) 930 throw new NullPointerException(); 931 int hash = hash(key); 932 Node<K,V>[] tab; Node<K,V> first; int n, i; 933 int binCount = 0; 934 TreeNode<K,V> t = null; 935 Node<K,V> old = null; 936 if (size > threshold || (tab = table) == null || 937 (n = tab.length) == 0) 938 n = (tab = resize()).length; 939 if ((first = tab[i = (n - 1) & hash]) != null) { 940 if (first instanceof TreeNode) 941 old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key); 942 else { 943 Node<K,V> e = first; K k; 944 do { 945 if (e.hash == hash && 946 ((k = e.key) == key || (key != null && key.equals(k)))) { 947 old = e; 948 break; 949 } 950 ++binCount; 951 } while ((e = e.next) != null); 952 } 953 } 954 V oldValue = (old == null) ? null : old.value; 955 V v = remappingFunction.apply(key, oldValue); 956 if (old != null) { 957 if (v != null) { 958 old.value = v; 959 afterNodeAccess(old); 960 } 961 else 962 removeNode(hash, key, null, false, true); 963 } 964 else if (v != null) { 965 if (t != null) 966 t.putTreeVal(this, tab, hash, key, v); 967 else { 968 tab[i] = newNode(hash, key, v, first); 969 if (binCount >= TREEIFY_THRESHOLD - 1) 970 treeifyBin(tab, hash); 971 } 972 ++modCount; 973 ++size; 974 afterNodeInsertion(true); 975 } 976 return v; 977 } 978 979 @Override 980 public V merge(K key, V value, 981 BiFunction<? super V, ? super V, ? extends V> remappingFunction) { 982 if (value == null) 983 throw new NullPointerException(); 984 if (remappingFunction == null) 985 throw new NullPointerException(); 986 int hash = hash(key); 987 Node<K,V>[] tab; Node<K,V> first; int n, i; 988 int binCount = 0; 989 TreeNode<K,V> t = null; 990 Node<K,V> old = null; 991 if (size > threshold || (tab = table) == null || 992 (n = tab.length) == 0) 993 n = (tab = resize()).length; 994 if ((first = tab[i = (n - 1) & hash]) != null) { 995 if (first instanceof TreeNode) 996 old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key); 997 else { 998 Node<K,V> e = first; K k; 999 do { 1000 if (e.hash == hash && 1001 ((k = e.key) == key || (key != null && key.equals(k)))) { 1002 old = e; 1003 break; 1004 } 1005 ++binCount; 1006 } while ((e = e.next) != null); 1007 } 1008 } 1009 if (old != null) { 1010 V v; 1011 if (old.value != null) 1012 v = remappingFunction.apply(old.value, value); 1013 else 1014 v = value; 1015 if (v != null) { 1016 old.value = v; 1017 afterNodeAccess(old); 1018 } 1019 else 1020 removeNode(hash, key, null, false, true); 1021 return v; 1022 } 1023 if (value != null) { 1024 if (t != null) 1025 t.putTreeVal(this, tab, hash, key, value); 1026 else { 1027 tab[i] = newNode(hash, key, value, first); 1028 if (binCount >= TREEIFY_THRESHOLD - 1) 1029 treeifyBin(tab, hash); 1030 } 1031 ++modCount; 1032 ++size; 1033 afterNodeInsertion(true); 1034 } 1035 return value; 1036 } 1037 1038 @Override 1039 public void forEach(BiConsumer<? super K, ? super V> action) { 1040 Node<K,V>[] tab; 1041 if (action == null) 1042 throw new NullPointerException(); 1043 if (size > 0 && (tab = table) != null) { 1044 int mc = modCount; 1045 for (int i = 0; i < tab.length; ++i) { 1046 for (Node<K,V> e = tab[i]; e != null; e = e.next) 1047 action.accept(e.key, e.value); 1048 } 1049 if (modCount != mc) 1050 throw new ConcurrentModificationException(); 1051 } 1052 } 1053 1054 @Override 1055 public void replaceAll(BiFunction<? super K, ? super V, ? 
extends V> function) { 1056 Node<K,V>[] tab; 1057 if (function == null) 1058 throw new NullPointerException(); 1059 if (size > 0 && (tab = table) != null) { 1060 int mc = modCount; 1061 for (int i = 0; i < tab.length; ++i) { 1062 for (Node<K,V> e = tab[i]; e != null; e = e.next) { 1063 e.value = function.apply(e.key, e.value); 1064 } 1065 } 1066 if (modCount != mc) 1067 throw new ConcurrentModificationException(); 1068 } 1069 } 1070 1071 /*------------------------------------------------------------ */ 1072 // 克隆和序列化 1073 1074 /** 1075 * 返回此哈希映射实例的浅拷贝:键和值本身不被克隆。 1076 * 1077 * @return a shallow copy of this map 1078 */ 1079 @SuppressWarnings("unchecked") 1080 @Override 1081 public Object clone() { 1082 HashMap<K,V> result; 1083 try { 1084 result = (HashMap<K,V>)super.clone(); 1085 } catch (CloneNotSupportedException e) { 1086 // 这不应该发生,因为我们是克隆的 1087 throw new InternalError(e); 1088 } 1089 result.reinitialize(); 1090 result.putMapEntries(this, false); 1091 return result; 1092 } 1093 1094 // 序列化HashSet时也使用这些方法 1095 final float loadFactor() { return loadFactor; } 1096 final int capacity() { 1097 return (table != null) ? table.length : 1098 (threshold > 0) ? threshold : 1099 DEFAULT_INITIAL_CAPACITY; 1100 } 1101 1102 /** 1103 * 将HashMap 实例,保存到流中 * 1104 * 序列化的数据: HashMap的容量(桶数组的长度),然后是size(键值对的数量),然后是key和value, 1105 * 键值对 没有特定的顺序 1106 */ 1107 private void writeObject(java.io.ObjectOutputStream s) 1108 throws IOException { 1109 int buckets = capacity(); 1110 // 写出阈值、负载因子和任何隐藏的内容 1111 s.defaultWriteObject(); 1112 s.writeInt(buckets); 1113 s.writeInt(size); 1114 internalWriteEntries(s); 1115 } 1116 1117 /** 1118 * 从流中重新构造HashMap实例,即反序列化 1119 */ 1120 private void readObject(java.io.ObjectInputStream s) 1121 throws IOException, ClassNotFoundException { 1122 // 读取阈值(忽略)、负载因子和任何隐藏的内容 1123 s.defaultReadObject(); 1124 reinitialize(); 1125 if (loadFactor <= 0 || Float.isNaN(loadFactor)) 1126 throw new InvalidObjectException("Illegal load factor: " + 1127 loadFactor); 1128 s.readInt(); // 阅读并忽略桶的数量 1129 int mappings = s.readInt(); // 读取映射数(大小) 1130 if (mappings < 0) 1131 throw new InvalidObjectException("Illegal mappings count: " + 1132 mappings); 1133 else if (mappings > 0) { // (if zero, use defaults) 1134 // 只有在内部时,才使用给定的负载因子来调整表的大小 1135 // range of 0.25...4.0 1136 float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f); 1137 float fc = (float)mappings / lf + 1.0f; 1138 int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ? 1139 DEFAULT_INITIAL_CAPACITY : 1140 (fc >= MAXIMUM_CAPACITY) ? 1141 MAXIMUM_CAPACITY : 1142 tableSizeFor((int)fc)); 1143 float ft = (float)cap * lf; 1144 threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ? 
1145 (int)ft : Integer.MAX_VALUE); 1146 1147 // 检查map.entry[]。类,因为它是最接近我们实际创建的公共类型。 1148 SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap); 1149 @SuppressWarnings({"rawtypes","unchecked"}) 1150 Node<K,V>[] tab = (Node<K,V>[])new Node[cap]; 1151 table = tab; 1152 1153 // 读取键和值,并将映射放入HashMap中 1154 for (int i = 0; i < mappings; i++) { 1155 @SuppressWarnings("unchecked") 1156 K key = (K) s.readObject(); 1157 @SuppressWarnings("unchecked") 1158 V value = (V) s.readObject(); 1159 putVal(hash(key), key, value, false, false); 1160 } 1161 } 1162 } 1163 1164 /*------------------------------------------------------------ */ 1165 // 迭代器 1166 1167 abstract class HashIterator { 1168 Node<K,V> next; // next entry to return 1169 Node<K,V> current; // current entry 1170 int expectedModCount; // for fast-fail 1171 int index; // current slot 1172 1173 HashIterator() { 1174 expectedModCount = modCount; 1175 Node<K,V>[] t = table; 1176 current = next = null; 1177 index = 0; 1178 if (t != null && size > 0) { // advance to first entry 1179 do {} while (index < t.length && (next = t[index++]) == null); 1180 } 1181 } 1182 1183 public final boolean hasNext() { 1184 return next != null; 1185 } 1186 1187 final Node<K,V> nextNode() { 1188 Node<K,V>[] t; 1189 Node<K,V> e = next; 1190 if (modCount != expectedModCount) 1191 throw new ConcurrentModificationException(); 1192 if (e == null) 1193 throw new NoSuchElementException(); 1194 if ((next = (current = e).next) == null && (t = table) != null) { 1195 do {} while (index < t.length && (next = t[index++]) == null); 1196 } 1197 return e; 1198 } 1199 1200 public final void remove() { 1201 Node<K,V> p = current; 1202 if (p == null) 1203 throw new IllegalStateException(); 1204 if (modCount != expectedModCount) 1205 throw new ConcurrentModificationException(); 1206 current = null; 1207 K key = p.key; 1208 removeNode(hash(key), key, null, false, false); 1209 expectedModCount = modCount; 1210 } 1211 } 1212 1213 final class KeyIterator extends HashIterator 1214 implements Iterator<K> { 1215 public final K next() { return nextNode().key; } 1216 } 1217 1218 final class ValueIterator extends HashIterator 1219 implements Iterator<V> { 1220 public final V next() { return nextNode().value; } 1221 } 1222 1223 final class EntryIterator extends HashIterator 1224 implements Iterator<Map.Entry<K,V>> { 1225 public final Map.Entry<K,V> next() { return nextNode(); } 1226 } 1227 1228 /*------------------------------------------------------------ */ 1229 // spliterators 1230 1231 static class HashMapSpliterator<K,V> { 1232 final HashMap<K,V> map; 1233 Node<K,V> current; // current node 1234 int index; // current index, modified on advance/split 1235 int fence; // one past last index 1236 int est; // size estimate 1237 int expectedModCount; // for comodification checks 1238 1239 HashMapSpliterator(HashMap<K,V> m, int origin, 1240 int fence, int est, 1241 int expectedModCount) { 1242 this.map = m; 1243 this.index = origin; 1244 this.fence = fence; 1245 this.est = est; 1246 this.expectedModCount = expectedModCount; 1247 } 1248 1249 final int getFence() { // 在第一次使用时初始化围栏和大小 1250 int hi; 1251 if ((hi = fence) < 0) { 1252 HashMap<K,V> m = map; 1253 est = m.size; 1254 expectedModCount = m.modCount; 1255 Node<K,V>[] tab = m.table; 1256 hi = fence = (tab == null) ? 
0 : tab.length; 1257 } 1258 return hi; 1259 } 1260 1261 public final long estimateSize() { 1262 getFence(); // force init 1263 return (long) est; 1264 } 1265 } 1266 1267 static final class KeySpliterator<K,V> 1268 extends HashMapSpliterator<K,V> 1269 implements Spliterator<K> { 1270 KeySpliterator(HashMap<K,V> m, int origin, int fence, int est, 1271 int expectedModCount) { 1272 super(m, origin, fence, est, expectedModCount); 1273 } 1274 1275 public KeySpliterator<K,V> trySplit() { 1276 int hi = getFence(), lo = index, mid = (lo + hi) >>> 1; 1277 return (lo >= mid || current != null) ? null : 1278 new KeySpliterator<>(map, lo, index = mid, est >>>= 1, 1279 expectedModCount); 1280 } 1281 1282 public void forEachRemaining(Consumer<? super K> action) { 1283 int i, hi, mc; 1284 if (action == null) 1285 throw new NullPointerException(); 1286 HashMap<K,V> m = map; 1287 Node<K,V>[] tab = m.table; 1288 if ((hi = fence) < 0) { 1289 mc = expectedModCount = m.modCount; 1290 hi = fence = (tab == null) ? 0 : tab.length; 1291 } 1292 else 1293 mc = expectedModCount; 1294 if (tab != null && tab.length >= hi && 1295 (i = index) >= 0 && (i < (index = hi) || current != null)) { 1296 Node<K,V> p = current; 1297 current = null; 1298 do { 1299 if (p == null) 1300 p = tab[i++]; 1301 else { 1302 action.accept(p.key); 1303 p = p.next; 1304 } 1305 } while (p != null || i < hi); 1306 if (m.modCount != mc) 1307 throw new ConcurrentModificationException(); 1308 } 1309 } 1310 1311 public boolean tryAdvance(Consumer<? super K> action) { 1312 int hi; 1313 if (action == null) 1314 throw new NullPointerException(); 1315 Node<K,V>[] tab = map.table; 1316 if (tab != null && tab.length >= (hi = getFence()) && index >= 0) { 1317 while (current != null || index < hi) { 1318 if (current == null) 1319 current = tab[index++]; 1320 else { 1321 K k = current.key; 1322 current = current.next; 1323 action.accept(k); 1324 if (map.modCount != expectedModCount) 1325 throw new ConcurrentModificationException(); 1326 return true; 1327 } 1328 } 1329 } 1330 return false; 1331 } 1332 1333 public int characteristics() { 1334 return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) | 1335 Spliterator.DISTINCT; 1336 } 1337 } 1338 1339 static final class ValueSpliterator<K,V> 1340 extends HashMapSpliterator<K,V> 1341 implements Spliterator<V> { 1342 ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est, 1343 int expectedModCount) { 1344 super(m, origin, fence, est, expectedModCount); 1345 } 1346 1347 public ValueSpliterator<K,V> trySplit() { 1348 int hi = getFence(), lo = index, mid = (lo + hi) >>> 1; 1349 return (lo >= mid || current != null) ? null : 1350 new ValueSpliterator<>(map, lo, index = mid, est >>>= 1, 1351 expectedModCount); 1352 } 1353 1354 public void forEachRemaining(Consumer<? super V> action) { 1355 int i, hi, mc; 1356 if (action == null) 1357 throw new NullPointerException(); 1358 HashMap<K,V> m = map; 1359 Node<K,V>[] tab = m.table; 1360 if ((hi = fence) < 0) { 1361 mc = expectedModCount = m.modCount; 1362 hi = fence = (tab == null) ? 
0 : tab.length; 1363 } 1364 else 1365 mc = expectedModCount; 1366 if (tab != null && tab.length >= hi && 1367 (i = index) >= 0 && (i < (index = hi) || current != null)) { 1368 Node<K,V> p = current; 1369 current = null; 1370 do { 1371 if (p == null) 1372 p = tab[i++]; 1373 else { 1374 action.accept(p.value); 1375 p = p.next; 1376 } 1377 } while (p != null || i < hi); 1378 if (m.modCount != mc) 1379 throw new ConcurrentModificationException(); 1380 } 1381 } 1382 1383 public boolean tryAdvance(Consumer<? super V> action) { 1384 int hi; 1385 if (action == null) 1386 throw new NullPointerException(); 1387 Node<K,V>[] tab = map.table; 1388 if (tab != null && tab.length >= (hi = getFence()) && index >= 0) { 1389 while (current != null || index < hi) { 1390 if (current == null) 1391 current = tab[index++]; 1392 else { 1393 V v = current.value; 1394 current = current.next; 1395 action.accept(v); 1396 if (map.modCount != expectedModCount) 1397 throw new ConcurrentModificationException(); 1398 return true; 1399 } 1400 } 1401 } 1402 return false; 1403 } 1404 1405 public int characteristics() { 1406 return (fence < 0 || est == map.size ? Spliterator.SIZED : 0); 1407 } 1408 } 1409 1410 static final class EntrySpliterator<K,V> 1411 extends HashMapSpliterator<K,V> 1412 implements Spliterator<Map.Entry<K,V>> { 1413 EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est, 1414 int expectedModCount) { 1415 super(m, origin, fence, est, expectedModCount); 1416 } 1417 1418 public EntrySpliterator<K,V> trySplit() { 1419 int hi = getFence(), lo = index, mid = (lo + hi) >>> 1; 1420 return (lo >= mid || current != null) ? null : 1421 new EntrySpliterator<>(map, lo, index = mid, est >>>= 1, 1422 expectedModCount); 1423 } 1424 1425 public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) { 1426 int i, hi, mc; 1427 if (action == null) 1428 throw new NullPointerException(); 1429 HashMap<K,V> m = map; 1430 Node<K,V>[] tab = m.table; 1431 if ((hi = fence) < 0) { 1432 mc = expectedModCount = m.modCount; 1433 hi = fence = (tab == null) ? 0 : tab.length; 1434 } 1435 else 1436 mc = expectedModCount; 1437 if (tab != null && tab.length >= hi && 1438 (i = index) >= 0 && (i < (index = hi) || current != null)) { 1439 Node<K,V> p = current; 1440 current = null; 1441 do { 1442 if (p == null) 1443 p = tab[i++]; 1444 else { 1445 action.accept(p); 1446 p = p.next; 1447 } 1448 } while (p != null || i < hi); 1449 if (m.modCount != mc) 1450 throw new ConcurrentModificationException(); 1451 } 1452 } 1453 1454 public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) { 1455 int hi; 1456 if (action == null) 1457 throw new NullPointerException(); 1458 Node<K,V>[] tab = map.table; 1459 if (tab != null && tab.length >= (hi = getFence()) && index >= 0) { 1460 while (current != null || index < hi) { 1461 if (current == null) 1462 current = tab[index++]; 1463 else { 1464 Node<K,V> e = current; 1465 current = current.next; 1466 action.accept(e); 1467 if (map.modCount != expectedModCount) 1468 throw new ConcurrentModificationException(); 1469 return true; 1470 } 1471 } 1472 } 1473 return false; 1474 } 1475 1476 public int characteristics() { 1477 return (fence < 0 || est == map.size ? 
Spliterator.SIZED : 0) | 1478 Spliterator.DISTINCT; 1479 } 1480 } 1481 1482 /* ------------------------------------------------------------ * / 1483 // LinkedHashMap support 1484 1485 1486 /* 1487 下面的包保护方法被设计成被LinkedHashMap覆盖,但不被任何其他子类覆盖。 1488 几乎所有其他内部方法都是包保护的,但都声明为final,因此可以由LinkedHashMap、视图类和HashSet使用。 1489 */ 1490 1491 // 创建一个常规(非树)节点 1492 Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) { 1493 return new Node<>(hash, key, value, next); 1494 } 1495 1496 // 用于从treenode到普通节点的转换 1497 Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) { 1498 return new Node<>(p.hash, p.key, p.value, next); 1499 } 1500 1501 // 创建一个树bin节点 1502 TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) { 1503 return new TreeNode<>(hash, key, value, next); 1504 } 1505 1506 // For treeifyBin 1507 TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) { 1508 return new TreeNode<>(p.hash, p.key, p.value, next); 1509 } 1510 1511 /** 1512 * 重置为初始默认状态。由克隆和readObject调用。 1513 */ 1514 void reinitialize() { 1515 table = null; 1516 entrySet = null; 1517 keySet = null; 1518 values = null; 1519 modCount = 0; 1520 threshold = 0; 1521 size = 0; 1522 } 1523 1524 // 回调允许LinkedHashMap后操作 1525 void afterNodeAccess(Node<K,V> p) { } 1526 void afterNodeInsertion(boolean evict) { } 1527 void afterNodeRemoval(Node<K,V> p) { } 1528 1529 // 仅从writeObject调用,以确保兼容排序。 1530 void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException { 1531 Node<K,V>[] tab; 1532 if (size > 0 && (tab = table) != null) { 1533 for (int i = 0; i < tab.length; ++i) { 1534 for (Node<K,V> e = tab[i]; e != null; e = e.next) { 1535 s.writeObject(e.key); 1536 s.writeObject(e.value); 1537 } 1538 } 1539 } 1540 } 1541 1542 /*------------------------------------------------------------ */ 1543 // Tree bins 1544 1545 /** 1546 * 仅从writeObject调用,以确保兼容排序。 1547 * Entry for Tree bins. 
Extends LinkedHashMap.Entry(它依次扩展节点)可以用作常规节点或链接节点的扩展。 1548 */ 1549 static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> { 1550 TreeNode<K,V> parent; // red-black tree links 1551 TreeNode<K,V> left; 1552 TreeNode<K,V> right; 1553 TreeNode<K,V> prev; // 删除后需要取消下一个链接 1554 boolean red; 1555 TreeNode(int hash, K key, V val, Node<K,V> next) { 1556 super(hash, key, val, next); 1557 } 1558 1559 /** 1560 * 返回包含此节点的树的根。 1561 */ 1562 final TreeNode<K,V> root() { 1563 for (TreeNode<K,V> r = this, p;;) { 1564 if ((p = r.parent) == null) 1565 return r; 1566 r = p; 1567 } 1568 } 1569 1570 /** 1571 * 确保给定的根是其bin的第一个节点。 1572 */ 1573 static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) { 1574 int n; 1575 if (root != null && tab != null && (n = tab.length) > 0) { 1576 int index = (n - 1) & root.hash; 1577 TreeNode<K,V> first = (TreeNode<K,V>)tab[index]; 1578 if (root != first) { 1579 Node<K,V> rn; 1580 tab[index] = root; 1581 TreeNode<K,V> rp = root.prev; 1582 if ((rn = root.next) != null) 1583 ((TreeNode<K,V>)rn).prev = rp; 1584 if (rp != null) 1585 rp.next = rn; 1586 if (first != null) 1587 first.prev = root; 1588 root.next = first; 1589 root.prev = null; 1590 } 1591 assert checkInvariants(root); 1592 } 1593 } 1594 1595 /** 1596 * 确保给定的根是其bin的第一个节点。使用给定的哈希和键查找从根p开始的节点。kc参数在首次使用比较键时缓存comparableClassFor(键)。 1597 */ 1598 final TreeNode<K,V> find(int h, Object k, Class<?> kc) { 1599 TreeNode<K,V> p = this; 1600 do { 1601 int ph, dir; K pk; 1602 TreeNode<K,V> pl = p.left, pr = p.right, q; 1603 if ((ph = p.hash) > h) 1604 p = pl; 1605 else if (ph < h) 1606 p = pr; 1607 else if ((pk = p.key) == k || (k != null && k.equals(pk))) 1608 return p; 1609 else if (pl == null) 1610 p = pr; 1611 else if (pr == null) 1612 p = pl; 1613 else if ((kc != null || 1614 (kc = comparableClassFor(k)) != null) && 1615 (dir = compareComparables(kc, k, pk)) != 0) 1616 p = (dir < 0) ? pl : pr; 1617 else if ((q = pr.find(h, k, kc)) != null) 1618 return q; 1619 else 1620 p = pl; 1621 } while (p != null); 1622 return null; 1623 } 1624 1625 /** 1626 * 调用查找根节点。 1627 */ 1628 final TreeNode<K,V> getTreeNode(int h, Object k) { 1629 return ((parent != null) ? root() : this).find(h, k, null); 1630 } 1631 1632 /** 1633 * 在相同的哈希码和不可比较的情况下,调用排序插入的破接实用程序。 1634 * 我们不需要一个总顺序,只需要一个一致的插入规则来保持重新平衡之间的等效性。超越必要的程度简化了对根节点的测试。 1635 */ 1636 static int tieBreakOrder(Object a, Object b) { 1637 int d; 1638 if (a == null || b == null || 1639 (d = a.getClass().getName(). 1640 compareTo(b.getClass().getName())) == 0) 1641 d = (System.identityHashCode(a) <= System.identityHashCode(b) ? 1642 -1 : 1); 1643 return d; 1644 } 1645 1646 /** 1647 * 形成从该节点链接的节点树。 1648 * @return root of tree 1649 */ 1650 final void treeify(Node<K,V>[] tab) { 1651 TreeNode<K,V> root = null; 1652 for (TreeNode<K,V> x = this, next; x != null; x = next) { 1653 next = (TreeNode<K,V>)x.next; 1654 x.left = x.right = null; 1655 if (root == null) { 1656 x.parent = null; 1657 x.red = false; 1658 root = x; 1659 } 1660 else { 1661 K k = x.key; 1662 int h = x.hash; 1663 Class<?> kc = null; 1664 for (TreeNode<K,V> p = root;;) { 1665 int dir, ph; 1666 K pk = p.key; 1667 if ((ph = p.hash) > h) 1668 dir = -1; 1669 else if (ph < h) 1670 dir = 1; 1671 else if ((kc == null && 1672 (kc = comparableClassFor(k)) == null) || 1673 (dir = compareComparables(kc, k, pk)) == 0) 1674 dir = tieBreakOrder(k, pk); 1675 1676 TreeNode<K,V> xp = p; 1677 if ((p = (dir <= 0) ? 
p.left : p.right) == null) { 1678 x.parent = xp; 1679 if (dir <= 0) 1680 xp.left = x; 1681 else 1682 xp.right = x; 1683 root = balanceInsertion(root, x); 1684 break; 1685 } 1686 } 1687 } 1688 } 1689 moveRootToFront(tab, root); 1690 } 1691 1692 /** 1693 * 返回一个非treenode列表,替换从该节点链接的那些。 1694 */ 1695 final Node<K,V> untreeify(HashMap<K,V> map) { 1696 Node<K,V> hd = null, tl = null; 1697 for (Node<K,V> q = this; q != null; q = q.next) { 1698 Node<K,V> p = map.replacementNode(q, null); 1699 if (tl == null) 1700 hd = p; 1701 else 1702 tl.next = p; 1703 tl = p; 1704 } 1705 return hd; 1706 } 1707 1708 /** 1709 * 树版本的putVal。 1710 */ 1711 final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab, 1712 int h, K k, V v) { 1713 Class<?> kc = null; 1714 boolean searched = false; 1715 TreeNode<K,V> root = (parent != null) ? root() : this; 1716 for (TreeNode<K,V> p = root;;) { 1717 int dir, ph; K pk; 1718 if ((ph = p.hash) > h) 1719 dir = -1; 1720 else if (ph < h) 1721 dir = 1; 1722 else if ((pk = p.key) == k || (k != null && k.equals(pk))) 1723 return p; 1724 else if ((kc == null && 1725 (kc = comparableClassFor(k)) == null) || 1726 (dir = compareComparables(kc, k, pk)) == 0) { 1727 if (!searched) { 1728 TreeNode<K,V> q, ch; 1729 searched = true; 1730 if (((ch = p.left) != null && 1731 (q = ch.find(h, k, kc)) != null) || 1732 ((ch = p.right) != null && 1733 (q = ch.find(h, k, kc)) != null)) 1734 return q; 1735 } 1736 dir = tieBreakOrder(k, pk); 1737 } 1738 1739 TreeNode<K,V> xp = p; 1740 if ((p = (dir <= 0) ? p.left : p.right) == null) { 1741 Node<K,V> xpn = xp.next; 1742 TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn); 1743 if (dir <= 0) 1744 xp.left = x; 1745 else 1746 xp.right = x; 1747 xp.next = x; 1748 x.parent = x.prev = xp; 1749 if (xpn != null) 1750 ((TreeNode<K,V>)xpn).prev = x; 1751 moveRootToFront(tab, balanceInsertion(root, x)); 1752 return null; 1753 } 1754 } 1755 } 1756 1757 /** 1758 * 移除指定的节点,该节点必须在此调用之前出现。 1759 * 这比典型的红黑删除代码要混乱得多,因为我们不能用在遍历期间可以独立访问的“next”指针固定的叶子继承节点来交换内部节点的内容。 1760 * 所以我们交换树连杆。 1761 * 如果当前树看起来节点太少,那么bin将被转换回普通bin。(根据树的结构,测试在2到6个节点之间触发)。 1762 */ 1763 final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab, 1764 boolean movable) { 1765 int n; 1766 if (tab == null || (n = tab.length) == 0) 1767 return; 1768 int index = (n - 1) & hash; 1769 TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl; 1770 TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev; 1771 if (pred == null) 1772 tab[index] = first = succ; 1773 else 1774 pred.next = succ; 1775 if (succ != null) 1776 succ.prev = pred; 1777 if (first == null) 1778 return; 1779 if (root.parent != null) 1780 root = root.root(); 1781 if (root == null || root.right == null || 1782 (rl = root.left) == null || rl.left == null) { 1783 tab[index] = first.untreeify(map); // too small 1784 return; 1785 } 1786 TreeNode<K,V> p = this, pl = left, pr = right, replacement; 1787 if (pl != null && pr != null) { 1788 TreeNode<K,V> s = pr, sl; 1789 while ((sl = s.left) != null) // find successor 1790 s = sl; 1791 boolean c = s.red; s.red = p.red; p.red = c; // swap colors 1792 TreeNode<K,V> sr = s.right; 1793 TreeNode<K,V> pp = p.parent; 1794 if (s == pr) { // p was s‘s direct parent 1795 p.parent = s; 1796 s.right = p; 1797 } 1798 else { 1799 TreeNode<K,V> sp = s.parent; 1800 if ((p.parent = sp) != null) { 1801 if (s == sp.left) 1802 sp.left = p; 1803 else 1804 sp.right = p; 1805 } 1806 if ((s.right = pr) != null) 1807 pr.parent = s; 1808 } 1809 p.left = null; 1810 if ((p.right = sr) != null) 1811 sr.parent = p; 1812 if 
((s.left = pl) != null) 1813 pl.parent = s; 1814 if ((s.parent = pp) == null) 1815 root = s; 1816 else if (p == pp.left) 1817 pp.left = s; 1818 else 1819 pp.right = s; 1820 if (sr != null) 1821 replacement = sr; 1822 else 1823 replacement = p; 1824 } 1825 else if (pl != null) 1826 replacement = pl; 1827 else if (pr != null) 1828 replacement = pr; 1829 else 1830 replacement = p; 1831 if (replacement != p) { 1832 TreeNode<K,V> pp = replacement.parent = p.parent; 1833 if (pp == null) 1834 root = replacement; 1835 else if (p == pp.left) 1836 pp.left = replacement; 1837 else 1838 pp.right = replacement; 1839 p.left = p.right = p.parent = null; 1840 } 1841 1842 TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement); 1843 1844 if (replacement == p) { // detach 1845 TreeNode<K,V> pp = p.parent; 1846 p.parent = null; 1847 if (pp != null) { 1848 if (p == pp.left) 1849 pp.left = null; 1850 else if (p == pp.right) 1851 pp.right = null; 1852 } 1853 } 1854 if (movable) 1855 moveRootToFront(tab, r); 1856 } 1857 1858 /** 1859 * 将树仓中的节点分解为上下树仓,如果现在太小,则将树仓拆成树仓。仅从调整大小调用;参见上面关于分裂位和索引的讨论。 1860 * 1861 * @param map the map 1862 * @param tab the table for recording bin heads 1863 * @param index the index of the table being split 1864 * @param bit the bit of hash to split on 1865 */ 1866 final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) { 1867 TreeNode<K,V> b = this; 1868 // Relink into lo and hi lists, preserving order 1869 TreeNode<K,V> loHead = null, loTail = null; 1870 TreeNode<K,V> hiHead = null, hiTail = null; 1871 int lc = 0, hc = 0; 1872 for (TreeNode<K,V> e = b, next; e != null; e = next) { 1873 next = (TreeNode<K,V>)e.next; 1874 e.next = null; 1875 if ((e.hash & bit) == 0) { 1876 if ((e.prev = loTail) == null) 1877 loHead = e; 1878 else 1879 loTail.next = e; 1880 loTail = e; 1881 ++lc; 1882 } 1883 else { 1884 if ((e.prev = hiTail) == null) 1885 hiHead = e; 1886 else 1887 hiTail.next = e; 1888 hiTail = e; 1889 ++hc; 1890 } 1891 } 1892 1893 if (loHead != null) { 1894 if (lc <= UNTREEIFY_THRESHOLD) 1895 tab[index] = loHead.untreeify(map); 1896 else { 1897 tab[index] = loHead; 1898 if (hiHead != null) // (else is already treeified) 1899 loHead.treeify(tab); 1900 } 1901 } 1902 if (hiHead != null) { 1903 if (hc <= UNTREEIFY_THRESHOLD) 1904 tab[index + bit] = hiHead.untreeify(map); 1905 else { 1906 tab[index + bit] = hiHead; 1907 if (loHead != null) 1908 hiHead.treeify(tab); 1909 } 1910 } 1911 } 1912 1913 /*------------------------------------------------------------ */ 1914 // Red-black tree methods, all adapted from CLR 1915 1916 static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root, 1917 TreeNode<K,V> p) { 1918 TreeNode<K,V> r, pp, rl; 1919 if (p != null && (r = p.right) != null) { 1920 if ((rl = p.right = r.left) != null) 1921 rl.parent = p; 1922 if ((pp = r.parent = p.parent) == null) 1923 (root = r).red = false; 1924 else if (pp.left == p) 1925 pp.left = r; 1926 else 1927 pp.right = r; 1928 r.left = p; 1929 p.parent = r; 1930 } 1931 return root; 1932 } 1933 1934 static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root, 1935 TreeNode<K,V> p) { 1936 TreeNode<K,V> l, pp, lr; 1937 if (p != null && (l = p.left) != null) { 1938 if ((lr = p.left = l.right) != null) 1939 lr.parent = p; 1940 if ((pp = l.parent = p.parent) == null) 1941 (root = l).red = false; 1942 else if (pp.right == p) 1943 pp.right = l; 1944 else 1945 pp.left = l; 1946 l.right = p; 1947 p.parent = l; 1948 } 1949 return root; 1950 } 1951 1952 static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root, 
1953 TreeNode<K,V> x) { 1954 x.red = true; 1955 for (TreeNode<K,V> xp, xpp, xppl, xppr;;) { 1956 if ((xp = x.parent) == null) { 1957 x.red = false; 1958 return x; 1959 } 1960 else if (!xp.red || (xpp = xp.parent) == null) 1961 return root; 1962 if (xp == (xppl = xpp.left)) { 1963 if ((xppr = xpp.right) != null && xppr.red) { 1964 xppr.red = false; 1965 xp.red = false; 1966 xpp.red = true; 1967 x = xpp; 1968 } 1969 else { 1970 if (x == xp.right) { 1971 root = rotateLeft(root, x = xp); 1972 xpp = (xp = x.parent) == null ? null : xp.parent; 1973 } 1974 if (xp != null) { 1975 xp.red = false; 1976 if (xpp != null) { 1977 xpp.red = true; 1978 root = rotateRight(root, xpp); 1979 } 1980 } 1981 } 1982 } 1983 else { 1984 if (xppl != null && xppl.red) { 1985 xppl.red = false; 1986 xp.red = false; 1987 xpp.red = true; 1988 x = xpp; 1989 } 1990 else { 1991 if (x == xp.left) { 1992 root = rotateRight(root, x = xp); 1993 xpp = (xp = x.parent) == null ? null : xp.parent; 1994 } 1995 if (xp != null) { 1996 xp.red = false; 1997 if (xpp != null) { 1998 xpp.red = true; 1999 root = rotateLeft(root, xpp); 2000 } 2001 } 2002 } 2003 } 2004 } 2005 } 2006 2007 static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root, 2008 TreeNode<K,V> x) { 2009 for (TreeNode<K,V> xp, xpl, xpr;;) { 2010 if (x == null || x == root) 2011 return root; 2012 else if ((xp = x.parent) == null) { 2013 x.red = false; 2014 return x; 2015 } 2016 else if (x.red) { 2017 x.red = false; 2018 return root; 2019 } 2020 else if ((xpl = xp.left) == x) { 2021 if ((xpr = xp.right) != null && xpr.red) { 2022 xpr.red = false; 2023 xp.red = true; 2024 root = rotateLeft(root, xp); 2025 xpr = (xp = x.parent) == null ? null : xp.right; 2026 } 2027 if (xpr == null) 2028 x = xp; 2029 else { 2030 TreeNode<K,V> sl = xpr.left, sr = xpr.right; 2031 if ((sr == null || !sr.red) && 2032 (sl == null || !sl.red)) { 2033 xpr.red = true; 2034 x = xp; 2035 } 2036 else { 2037 if (sr == null || !sr.red) { 2038 if (sl != null) 2039 sl.red = false; 2040 xpr.red = true; 2041 root = rotateRight(root, xpr); 2042 xpr = (xp = x.parent) == null ? 2043 null : xp.right; 2044 } 2045 if (xpr != null) { 2046 xpr.red = (xp == null) ? false : xp.red; 2047 if ((sr = xpr.right) != null) 2048 sr.red = false; 2049 } 2050 if (xp != null) { 2051 xp.red = false; 2052 root = rotateLeft(root, xp); 2053 } 2054 x = root; 2055 } 2056 } 2057 } 2058 else { // symmetric 2059 if (xpl != null && xpl.red) { 2060 xpl.red = false; 2061 xp.red = true; 2062 root = rotateRight(root, xp); 2063 xpl = (xp = x.parent) == null ? null : xp.left; 2064 } 2065 if (xpl == null) 2066 x = xp; 2067 else { 2068 TreeNode<K,V> sl = xpl.left, sr = xpl.right; 2069 if ((sl == null || !sl.red) && 2070 (sr == null || !sr.red)) { 2071 xpl.red = true; 2072 x = xp; 2073 } 2074 else { 2075 if (sl == null || !sl.red) { 2076 if (sr != null) 2077 sr.red = false; 2078 xpl.red = true; 2079 root = rotateLeft(root, xpl); 2080 xpl = (xp = x.parent) == null ? 2081 null : xp.left; 2082 } 2083 if (xpl != null) { 2084 xpl.red = (xp == null) ? 
false : xp.red; 2085 if ((sl = xpl.left) != null) 2086 sl.red = false; 2087 } 2088 if (xp != null) { 2089 xp.red = false; 2090 root = rotateRight(root, xp); 2091 } 2092 x = root; 2093 } 2094 } 2095 } 2096 } 2097 } 2098 2099 /** 2100 * 递归不变量校验 2101 */ 2102 static <K,V> boolean checkInvariants(TreeNode<K,V> t) { 2103 TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right, 2104 tb = t.prev, tn = (TreeNode<K,V>)t.next; 2105 if (tb != null && tb.next != t) 2106 return false; 2107 if (tn != null && tn.prev != t) 2108 return false; 2109 if (tp != null && t != tp.left && t != tp.right) 2110 return false; 2111 if (tl != null && (tl.parent != t || tl.hash > t.hash)) 2112 return false; 2113 if (tr != null && (tr.parent != t || tr.hash < t.hash)) 2114 return false; 2115 if (t.red && tl != null && tl.red && tr != null && tr.red) 2116 return false; 2117 if (tl != null && !checkInvariants(tl)) 2118 return false; 2119 if (tr != null && !checkInvariants(tr)) 2120 return false; 2121 return true; 2122 } 2123 } 2124 2125 }
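The class Javadoc above stresses that HashMap is unsynchronized and that its collection-view iterators are only fail-fast, and it recommends wrapping the map with Collections.synchronizedMap at creation time. The following is a minimal usage sketch of that recommendation (the class name SynchronizedWrapDemo is ours, purely for illustration):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedWrapDemo {
    public static void main(String[] args) {
        // Wrap at creation time, as the Javadoc suggests, so no caller can
        // accidentally reach the unsynchronized HashMap directly.
        Map<String, Integer> shared = Collections.synchronizedMap(new HashMap<>());
        shared.put("a", 1);
        shared.put("b", 2);

        // Iteration still needs an explicit lock on the wrapper: the wrapper only
        // synchronizes individual calls, and the iterator itself is fail-fast,
        // not thread-safe.
        synchronized (shared) {
            for (Map.Entry<String, Integer> e : shared.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
```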
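The implementation notes quote Poisson probabilities for bucket sizes under ideally random hash codes with the default 0.75 load factor (parameter about 0.5). Those numbers can be reproduced directly; this small sketch (our own, not JDK code) prints the same table:

```java
public class BinSizePoissonDemo {
    public static void main(String[] args) {
        // Expected frequency of a bucket holding exactly k entries:
        // exp(-0.5) * pow(0.5, k) / k!, as cited in the HashMap implementation notes.
        double lambda = 0.5;
        double factorial = 1.0;
        for (int k = 0; k <= 8; k++) {
            if (k > 0) factorial *= k;
            double p = Math.exp(-lambda) * Math.pow(lambda, k) / factorial;
            System.out.printf("%d: %.8f%n", k, p);
        }
        // Matches the table in the source comment (0.60653066, 0.30326533, ..., 0.00000006),
        // which is why a bucket ever reaching TREEIFY_THRESHOLD (8) is vanishingly rare
        // when user hash codes are well distributed.
    }
}
```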
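The comment on the static hash(Object) method explains why the high 16 bits are XORed downward: with a power-of-two table, only the low bits select the bucket, so hash codes that vary only above the index mask would otherwise always collide. A small illustrative sketch (synthetic hash codes and class name are ours):

```java
public class HashSpreadDemo {
    // Same bit-mixing step as HashMap.hash(Object): XOR the high 16 bits into the low 16,
    // so information from the upper half of the hash code can reach the bucket index.
    static int spread(int hashCode) {
        return hashCode ^ (hashCode >>> 16);
    }

    public static void main(String[] args) {
        int capacity = 16;                  // table length is always a power of two
        int mask = capacity - 1;            // bucket index = hash & (capacity - 1)
        int[] hashCodes = {0x10000, 0x20000, 0x30000, 0x40000}; // differ only in high bits
        for (int h : hashCodes) {
            System.out.printf("hash=0x%08X  rawIndex=%d  spreadIndex=%d%n",
                    h, h & mask, spread(h) & mask);
        }
        // rawIndex is 0 for every hash (all four keys would share one bucket),
        // while spreadIndex comes out as 1, 2, 3, 4.
    }
}
```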
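tableSizeFor(int) rounds a requested capacity up to the next power of two by smearing the highest set bit of cap - 1 into every lower position and then adding one. A standalone sketch that replays the same bit trick on a few sample requests:

```java
public class TableSizeDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same logic as HashMap.tableSizeFor(int): after the shifts, n has all bits set
    // below its highest one-bit, so n + 1 is the next power of two.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        int[] requests = {1, 7, 8, 9, 17, 100, 1000};
        for (int cap : requests) {
            System.out.println(cap + " -> " + tableSizeFor(cap));
        }
        // Prints 1, 8, 8, 16, 32, 128, 1024: the smallest power of two >= the request.
    }
}
```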
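resize() relies on the property described in its comment: because the table length doubles, each entry either keeps its index (the "lo" chain) or moves exactly oldCap slots up (the "hi" chain), decided by the single bit (hash & oldCap). A small sketch verifying that invariant with hash values that all share one old bucket (the numbers are chosen purely for illustration):

```java
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        int newCap = oldCap << 1;            // table doubles on resize
        int[] hashes = {5, 21, 37, 53};      // all land in bucket 5 of the old table
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            int newIndex = h & (newCap - 1);
            // The new index depends only on one extra bit of the hash: (h & oldCap).
            int predicted = ((h & oldCap) == 0) ? oldIndex : oldIndex + oldCap;
            System.out.printf("hash=%d old=%d new=%d predicted=%d%n",
                    h, oldIndex, newIndex, predicted);
        }
        // Every entry either keeps its index or moves exactly oldCap slots up,
        // which is exactly how resize() relinks the lo/hi lists without rehashing keys.
    }
}
```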