   /*
    * Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.
    * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
    *
    * This code is free software; you can redistribute it and/or modify it
    * under the terms of the GNU General Public License version 2 only, as
    * published by the Free Software Foundation.  Oracle designates this
    * particular file as subject to the "Classpath" exception as provided
    * by Oracle in the LICENSE file that accompanied this code.
   *
   * This code is distributed in the hope that it will be useful, but WITHOUT
   * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
   * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
   * version 2 for more details (a copy is included in the LICENSE file that
   * accompanied this code).
   *
   * You should have received a copy of the GNU General Public License version
   * 2 along with this work; if not, write to the Free Software Foundation,
   * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
   *
   * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
   * or visit www.oracle.com if you need additional information or have any
   * questions.
   */
  
  package java.util;
  
Hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets. Iteration over collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important.

An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.

As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.

If many mappings are to be stored in a HashMap instance, creating it with a sufficiently large capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table. Note that using many keys with the same hashCode() is a sure way to slow down performance of any hash table. To ameliorate impact, when keys are java.lang.Comparable, this class may use comparison order among keys to help break ties.
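The pre-sizing advice above can be sketched as follows; `capacityFor` is a hypothetical helper (not part of the JDK API) that picks an initial capacity large enough that the expected number of entries never exceeds capacity times the default load factor, so no rehash occurs while the map fills:

```java
import java.util.HashMap;
import java.util.Map;

public class Presize {
    // Hypothetical helper: smallest capacity such that expectedSize
    // entries stay below capacity * 0.75, avoiding automatic rehashing.
    static int capacityFor(int expectedSize) {
        return (int) (expectedSize / 0.75f) + 1;
    }

    public static void main(String[] args) {
        int expected = 1000;
        Map<String, Integer> map = new HashMap<>(capacityFor(expected));
        for (int i = 0; i < expected; i++)
            map.put("key" + i, i); // no resize happens during this loop
        System.out.println(map.size());
    }
}
```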

Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be "wrapped" using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:

   Map m = Collections.synchronizedMap(new HashMap(...));

The iterators returned by all of this class's "collection view methods" are fail-fast: if the map is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.
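The fail-fast behavior described above is easy to observe in a single thread; as the text warns, it is a bug detector, not a correctness guarantee:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    // Returns true if iterating after an external structural
    // modification throws ConcurrentModificationException.
    static boolean triggersCme() {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Iterator<String> it = map.keySet().iterator();
        it.next();
        map.put("c", 3); // structural modification outside the iterator
        try {
            it.next();   // modCount mismatch detected here
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersCme());
    }
}
```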

This class is a member of the Java Collections Framework.

Parameters:
<K> the type of keys maintained by this map
<V> the type of mapped values
Author(s):
Doug Lea
Josh Bloch
Arthur van Hoff
Neal Gafter
Since:
1.2
See also:
java.lang.Object.hashCode()
Collection
Map
TreeMap
Hashtable
 
 public class HashMap<K,V> extends AbstractMap<K,V>
     implements Map<K,V>, Cloneable, Serializable {
 
     private static final long serialVersionUID = 362498820763181265L;
 
     /*
      * Implementation notes.
      *
      * This map usually acts as a binned (bucketed) hash table, but
      * when bins get too large, they are transformed into bins of
      * TreeNodes, each structured similarly to those in
      * java.util.TreeMap. Most methods try to use normal bins, but
      * relay to TreeNode methods when applicable (simply by checking
      * instanceof a node).  Bins of TreeNodes may be traversed and
      * used like any others, but additionally support faster lookup
      * when overpopulated. However, since the vast majority of bins in
      * normal use are not overpopulated, checking for existence of
      * tree bins may be delayed in the course of table methods.
      *
      * Tree bins (i.e., bins whose elements are all TreeNodes) are
      * ordered primarily by hashCode, but in the case of ties, if two
      * elements are of the same "class C implements Comparable<C>",
      * type then their compareTo method is used for ordering. (We
      * conservatively check generic types via reflection to validate
      * this -- see method comparableClassFor).  The added complexity
      * of tree bins is worthwhile in providing worst-case O(log n)
      * operations when keys either have distinct hashes or are
      * orderable. Thus, performance degrades gracefully under
      * accidental or malicious usages in which hashCode() methods
      * return values that are poorly distributed, as well as those in
      * which many keys share a hashCode, so long as they are also
      * Comparable. (If neither of these apply, we may waste about a
      * factor of two in time and space compared to taking no
      * precautions. But the only known cases stem from poor user
      * programming practices that are already so slow that this makes
      * little difference.)
      *
      * Because TreeNodes are about twice the size of regular nodes, we
      * use them only when bins contain enough nodes to warrant use
      * (see TREEIFY_THRESHOLD). And when they become too small (due to
      * removal or resizing) they are converted back to plain bins.  In
      * usages with well-distributed user hashCodes, tree bins are
      * rarely used.  Ideally, under random hashCodes, the frequency of
      * nodes in bins follows a Poisson distribution
      * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
      * parameter of about 0.5 on average for the default resizing
      * threshold of 0.75, although with a large variance because of
      * resizing granularity. Ignoring variance, the expected
      * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
      * factorial(k)). The first values are:
      *
      * 0:    0.60653066
      * 1:    0.30326533
      * 2:    0.07581633
      * 3:    0.01263606
      * 4:    0.00157952
      * 5:    0.00015795
      * 6:    0.00001316
      * 7:    0.00000094
      * 8:    0.00000006
      * more: less than 1 in ten million
      *
      * The root of a tree bin is normally its first node.  However,
      * sometimes (currently only upon Iterator.remove), the root might
      * be elsewhere, but can be recovered following parent links
      * (method TreeNode.root()).
      *
      * All applicable internal methods accept a hash code as an
      * argument (as normally supplied from a public method), allowing
      * them to call each other without recomputing user hashCodes.
      * Most internal methods also accept a "tab" argument, that is
      * normally the current table, but may be a new or old one when
      * resizing or converting.
      *
      * When bin lists are treeified, split, or untreeified, we keep
      * them in the same relative access/traversal order (i.e., field
      * Node.next) to better preserve locality, and to slightly
      * simplify handling of splits and traversals that invoke
      * iterator.remove. When using comparators on insertion, to keep a
      * total ordering (or as close as is required here) across
      * rebalancings, we compare classes and identityHashCodes as
      * tie-breakers.
      *
      * The use and transitions among plain vs tree modes is
      * complicated by the existence of subclass LinkedHashMap. See
      * below for hook methods defined to be invoked upon insertion,
      * removal and access that allow LinkedHashMap internals to
      * otherwise remain independent of these mechanics. (This also
      * requires that a map instance be passed to some utility methods
      * that may create new nodes.)
      *
      * The concurrent-programming-like SSA-based coding style helps
      * avoid aliasing errors amid all of the twisty pointer operations.
      */
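The bin-size frequencies quoted in the implementation notes follow directly from the Poisson formula exp(-0.5) * pow(0.5, k) / factorial(k); this standalone sketch reproduces the table:

```java
import java.util.Locale;

public class PoissonBins {
    // Probability that a bin holds exactly k nodes under random
    // hashes at the default resize threshold (Poisson parameter 0.5).
    static double poisson(int k) {
        double p = Math.exp(-0.5);      // k = 0 term
        for (int i = 1; i <= k; i++)
            p = p * 0.5 / i;            // fold in the next 0.5^k / k! factor
        return p;
    }

    public static void main(String[] args) {
        for (int k = 0; k <= 8; k++)
            System.out.printf(Locale.ROOT, "%d: %.8f%n", k, poisson(k));
    }
}
```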

    
The default initial capacity - MUST be a power of two.
 
     static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
 
    
The maximum capacity, used if a higher value is implicitly specified by either of the constructors with arguments. MUST be a power of two <= 1<<30.
 
     static final int MAXIMUM_CAPACITY = 1 << 30;

    
The load factor used when none specified in constructor.
 
     static final float DEFAULT_LOAD_FACTOR = 0.75f;

    
The bin count threshold for using a tree rather than list for a bin. Bins are converted to trees when adding an element to a bin with at least this many nodes. The value must be greater than 2 and should be at least 8 to mesh with assumptions in tree removal about conversion back to plain bins upon shrinkage.
 
     static final int TREEIFY_THRESHOLD = 8;

    
The bin count threshold for untreeifying a (split) bin during a resize operation. Should be less than TREEIFY_THRESHOLD, and at most 6 to mesh with shrinkage detection under removal.
 
     static final int UNTREEIFY_THRESHOLD = 6;

    
The smallest table capacity for which bins may be treeified. (Otherwise the table is resized if too many nodes in a bin.) Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts between resizing and treeification thresholds.
 
     static final int MIN_TREEIFY_CAPACITY = 64;

    
Basic hash bin node, used for most entries. (See below for TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
 
     static class Node<K,V> implements Map.Entry<K,V> {
         final int hash;
         final K key;
         V value;
         Node<K,V> next;
 
         Node(int hash, K key, V value, Node<K,V> next) {
             this.hash = hash;
             this.key = key;
             this.value = value;
             this.next = next;
         }
 
         public final K getKey()        { return key; }
         public final V getValue()      { return value; }
         public final String toString() { return key + "=" + value; }
 
         public final int hashCode() {
             return Objects.hashCode(key) ^ Objects.hashCode(value);
         }
 
         public final V setValue(V newValue) {
             V oldValue = value;
             value = newValue;
             return oldValue;
         }
 
         public final boolean equals(Object o) {
             if (o == this)
                 return true;
             if (o instanceof Map.Entry) {
                 Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                 if (Objects.equals(key, e.getKey()) &&
                     Objects.equals(value, e.getValue()))
                     return true;
             }
             return false;
         }
     }
 
     /* ---------------- Static utilities -------------- */

    
Computes key.hashCode() and spreads (XORs) higher bits of hash to lower. Because the table uses power-of-two masking, sets of hashes that vary only in bits above the current mask will always collide. (Among known examples are sets of Float keys holding consecutive whole numbers in small tables.) So we apply a transform that spreads the impact of higher bits downward. There is a tradeoff between speed, utility, and quality of bit-spreading. Because many common sets of hashes are already reasonably distributed (so don't benefit from spreading), and because we use trees to handle large sets of collisions in bins, we just XOR some shifted bits in the cheapest possible way to reduce systematic lossage, as well as to incorporate impact of the highest bits that would otherwise never be used in index calculations because of table bounds.
 
     static final int hash(Object key) {
         int h;
         return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
     }
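The effect of the spreading step can be seen with the Float example from the comment above: raw hash codes for consecutive whole numbers vary mostly above the mask of a small table, while the XOR folds those high bits down into the index bits.

```java
public class HashSpread {
    // Same transform as HashMap.hash(): XOR the high half of the
    // hash code into the low half.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int mask = 15; // a 16-bucket table indexes with (n - 1) & hash
        for (float f = 32f; f <= 35f; f++) {
            int raw = Float.hashCode(f); // consecutive floats differ in high bits
            System.out.println(f + ": raw bucket " + (raw & mask)
                               + ", spread bucket " + (hash(f) & mask));
        }
    }
}
```

Without spreading, all four keys land in bucket 0; after spreading they scatter across the table.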

    
Returns x's Class if it is of the form "class C implements Comparable<C>", else null.
 
     static Class<?> comparableClassFor(Object x) {
         if (x instanceof Comparable) {
             Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
             if ((c = x.getClass()) == String.class) // bypass checks
                 return c;
             if ((ts = c.getGenericInterfaces()) != null) {
                 for (int i = 0; i < ts.length; ++i) {
                     if (((t = ts[i]) instanceof ParameterizedType) &&
                         ((p = (ParameterizedType)t).getRawType() ==
                          Comparable.class) &&
                         (as = p.getActualTypeArguments()) != null &&
                         as.length == 1 && as[0] == c) // type arg is c
                         return c;
                 }
             }
         }
         return null;
     }

    
Returns k.compareTo(x) if x matches kc (k's screened comparable class), else 0.
 
     @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
     static int compareComparables(Class<?> kc, Object k, Object x) {
         return (x == null || x.getClass() != kc ? 0 :
                 ((Comparable)k).compareTo(x));
     }

    
Returns a power of two size for the given target capacity.
 
     static final int tableSizeFor(int cap) {
         int n = cap - 1;
         n |= n >>> 1;
         n |= n >>> 2;
         n |= n >>> 4;
         n |= n >>> 8;
         n |= n >>> 16;
         return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
     }
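A quick check of tableSizeFor (reproduced verbatim in this self-contained sketch) shows the round-up-to-power-of-two behavior:

```java
public class TableSize {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Copy of HashMap.tableSizeFor: smear the highest set bit of
    // cap - 1 into every lower position, then add one.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(1));    // smallest table
        System.out.println(tableSizeFor(16));   // already a power of two
        System.out.println(tableSizeFor(17));   // rounds up to 32
        System.out.println(tableSizeFor(1000)); // rounds up to 1024
    }
}
```

Subtracting one first is what keeps exact powers of two (like 16) from being doubled unnecessarily.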
 
     /* ---------------- Fields -------------- */

    
The table, initialized on first use, and resized as necessary. When allocated, length is always a power of two. (We also tolerate length zero in some operations to allow bootstrapping mechanics that are currently not needed.)
 
     transient Node<K,V>[] table;

    
Holds cached entrySet(). Note that AbstractMap fields are used for keySet() and values().
 
     transient Set<Map.Entry<K,V>> entrySet;

    
The number of key-value mappings contained in this map.
 
     transient int size;

    
The number of times this HashMap has been structurally modified. Structural modifications are those that change the number of mappings in the HashMap or otherwise modify its internal structure (e.g., rehash). This field is used to make iterators on Collection-views of the HashMap fail-fast. (See ConcurrentModificationException).
 
     transient int modCount;

    
The next size value at which to resize (capacity * load factor).

Serial:
 
     // (The javadoc description is true upon serialization.
     // Additionally, if the table array has not been allocated, this
     // field holds the initial array capacity, or zero signifying
     // DEFAULT_INITIAL_CAPACITY.)
     int threshold;

    
The load factor for the hash table.

Serial:
 
     final float loadFactor;
 
     /* ---------------- Public operations -------------- */

    
Constructs an empty HashMap with the specified initial capacity and load factor.

Parameters:
initialCapacity the initial capacity
loadFactor the load factor
Throws:
java.lang.IllegalArgumentException if the initial capacity is negative or the load factor is nonpositive
 
     public HashMap(int initialCapacity, float loadFactor) {
         if (initialCapacity < 0)
             throw new IllegalArgumentException("Illegal initial capacity: " +
                                                initialCapacity);
         if (initialCapacity > MAXIMUM_CAPACITY)
             initialCapacity = MAXIMUM_CAPACITY;
         if (loadFactor <= 0 || Float.isNaN(loadFactor))
             throw new IllegalArgumentException("Illegal load factor: " +
                                                loadFactor);
         this.loadFactor = loadFactor;
         this.threshold = tableSizeFor(initialCapacity);
     }

    
Constructs an empty HashMap with the specified initial capacity and the default load factor (0.75).

Parameters:
initialCapacity the initial capacity.
Throws:
java.lang.IllegalArgumentException if the initial capacity is negative.
 
     public HashMap(int initialCapacity) {
         this(initialCapacity, DEFAULT_LOAD_FACTOR);
     }

    
Constructs an empty HashMap with the default initial capacity (16) and the default load factor (0.75).
 
     public HashMap() {
         this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
     }

    
Constructs a new HashMap with the same mappings as the specified Map. The HashMap is created with default load factor (0.75) and an initial capacity sufficient to hold the mappings in the specified Map.

Parameters:
m the map whose mappings are to be placed in this map
Throws:
java.lang.NullPointerException if the specified map is null
 
     public HashMap(Map<? extends K, ? extends V> m) {
         this.loadFactor = DEFAULT_LOAD_FACTOR;
         putMapEntries(m, false);
     }

    
Implements Map.putAll and Map constructor

Parameters:
m the map
evict false when initially constructing this map, else true (relayed to method afterNodeInsertion).
 
     final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
         int s = m.size();
         if (s > 0) {
             if (table == null) { // pre-size
                 float ft = ((float)s / loadFactor) + 1.0F;
                 int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                          (int)ft : MAXIMUM_CAPACITY);
                 if (t > threshold)
                     threshold = tableSizeFor(t);
             }
             else if (s > threshold)
                 resize();
             for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                 K key = e.getKey();
                 V value = e.getValue();
                 putVal(hash(key), key, value, false, evict);
             }
         }
     }

    
Returns the number of key-value mappings in this map.

Returns:
the number of key-value mappings in this map
 
     public int size() {
         return size;
     }

    
Returns true if this map contains no key-value mappings.

Returns:
true if this map contains no key-value mappings
 
     public boolean isEmpty() {
         return size == 0;
     }

    
Returns the value to which the specified key is mapped, or null if this map contains no mapping for the key.

More formally, if this map contains a mapping from a key k to a value v such that (key==null ? k==null : key.equals(k)), then this method returns v; otherwise it returns null. (There can be at most one such mapping.)

A return value of null does not necessarily indicate that the map contains no mapping for the key; it's also possible that the map explicitly maps the key to null. The containsKey operation may be used to distinguish these two cases.

 
     public V get(Object key) {
         Node<K,V> e;
         return (e = getNode(hash(key), key)) == null ? null : e.value;
     }
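The null ambiguity noted above, and the containsKey disambiguation, in a short usage sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class NullValueDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("present", null); // key explicitly mapped to null

        // get() alone cannot tell these two cases apart:
        System.out.println(map.get("present")); // null
        System.out.println(map.get("absent"));  // also null

        // containsKey() distinguishes them:
        System.out.println(map.containsKey("present")); // true
        System.out.println(map.containsKey("absent"));  // false
    }
}
```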

    
Implements Map.get and related methods

Parameters:
hash hash for key
key the key
Returns:
the node, or null if none
 
     final Node<K,V> getNode(int hash, Object key) {
         Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
         if ((tab = table) != null && (n = tab.length) > 0 &&
             (first = tab[(n - 1) & hash]) != null) {
             if (first.hash == hash && // always check first node
                 ((k = first.key) == key || (key != null && key.equals(k))))
                 return first;
             if ((e = first.next) != null) {
                 if (first instanceof TreeNode)
                     return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                 do {
                     if (e.hash == hash &&
                         ((k = e.key) == key || (key != null && key.equals(k))))
                         return e;
                 } while ((e = e.next) != null);
             }
         }
         return null;
     }

    
Returns true if this map contains a mapping for the specified key.

Parameters:
key The key whose presence in this map is to be tested
Returns:
true if this map contains a mapping for the specified key.
 
     public boolean containsKey(Object key) {
         return getNode(hash(key), key) != null;
     }

    
Associates the specified value with the specified key in this map. If the map previously contained a mapping for the key, the old value is replaced.

Parameters:
key key with which the specified value is to be associated
value value to be associated with the specified key
Returns:
the previous value associated with key, or null if there was no mapping for key. (A null return can also indicate that the map previously associated null with key.)
 
     public V put(K key, V value) {
         return putVal(hash(key), key, value, false, true);
     }
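The return-value contract of put, including the null caveat from the javadoc above, in a short sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("k", 1)); // null: no previous mapping
        System.out.println(map.put("k", 2)); // 1: the replaced value
        System.out.println(map.get("k"));    // 2
    }
}
```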

    
Implements Map.put and related methods

Parameters:
hash hash for key
key the key
value the value to put
onlyIfAbsent if true, don't change existing value
evict if false, the table is in creation mode.
Returns:
previous value, or null if none
 
     final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                    boolean evict) {
         Node<K,V>[] tab; Node<K,V> p; int n, i;
         if ((tab = table) == null || (n = tab.length) == 0)
             n = (tab = resize()).length;
         if ((p = tab[i = (n - 1) & hash]) == null)
             tab[i] = newNode(hash, key, value, null);
         else {
             Node<K,V> e; K k;
             if (p.hash == hash &&
                 ((k = p.key) == key || (key != null && key.equals(k))))
                 e = p;
             else if (p instanceof TreeNode)
                 e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
             else {
                 for (int binCount = 0; ; ++binCount) {
                     if ((e = p.next) == null) {
                         p.next = newNode(hash, key, value, null);
                         if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                             treeifyBin(tab, hash);
                         break;
                     }
                     if (e.hash == hash &&
                         ((k = e.key) == key || (key != null && key.equals(k))))
                         break;
                     p = e;
                 }
             }
             if (e != null) { // existing mapping for key
                 V oldValue = e.value;
                 if (!onlyIfAbsent || oldValue == null)
                     e.value = value;
                 afterNodeAccess(e);
                 return oldValue;
             }
         }
         ++modCount;
         if (++size > threshold)
             resize();
         afterNodeInsertion(evict);
         return null;
     }

    
Initializes or doubles table size. If null, allocates in accord with initial capacity target held in field threshold. Otherwise, because we are using power-of-two expansion, the elements from each bin must either stay at same index, or move with a power of two offset in the new table.

Returns:
the table
 
     final Node<K,V>[] resize() {
         Node<K,V>[] oldTab = table;
         int oldCap = (oldTab == null) ? 0 : oldTab.length;
         int oldThr = threshold;
         int newCap, newThr = 0;
         if (oldCap > 0) {
             if (oldCap >= MAXIMUM_CAPACITY) {
                 threshold = Integer.MAX_VALUE;
                 return oldTab;
             }
             else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                      oldCap >= DEFAULT_INITIAL_CAPACITY)
                 newThr = oldThr << 1; // double threshold
         }
         else if (oldThr > 0) // initial capacity was placed in threshold
             newCap = oldThr;
         else {               // zero initial threshold signifies using defaults
             newCap = DEFAULT_INITIAL_CAPACITY;
             newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
         }
         if (newThr == 0) {
             float ft = (float)newCap * loadFactor;
             newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                       (int)ft : Integer.MAX_VALUE);
         }
         threshold = newThr;
         @SuppressWarnings({"rawtypes","unchecked"})
             Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
         table = newTab;
         if (oldTab != null) {
             for (int j = 0; j < oldCap; ++j) {
                 Node<K,V> e;
                 if ((e = oldTab[j]) != null) {
                     oldTab[j] = null;
                     if (e.next == null)
                         newTab[e.hash & (newCap - 1)] = e;
                     else if (e instanceof TreeNode)
                         ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                     else { // preserve order
                         Node<K,V> loHead = null, loTail = null;
                         Node<K,V> hiHead = null, hiTail = null;
                         Node<K,V> next;
                         do {
                             next = e.next;
                             if ((e.hash & oldCap) == 0) {
                                 if (loTail == null)
                                     loHead = e;
                                 else
                                     loTail.next = e;
                                 loTail = e;
                             }
                             else {
                                 if (hiTail == null)
                                     hiHead = e;
                                 else
                                     hiTail.next = e;
                                 hiTail = e;
                             }
                         } while ((e = next) != null);
                         if (loTail != null) {
                             loTail.next = null;
                             newTab[j] = loHead;
                         }
                         if (hiTail != null) {
                             hiTail.next = null;
                             newTab[j + oldCap] = hiHead;
                         }
                     }
                 }
             }
         }
         return newTab;
     }
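The power-of-two split invariant that resize relies on can be checked in isolation: a node with hash h in bucket j of an oldCap-sized table lands either in bucket j of the doubled table (when (h & oldCap) == 0, the "lo" list) or in bucket j + oldCap (the "hi" list), decided by a single bit.

```java
public class ResizeSplit {
    // New bucket index after doubling a power-of-two table.
    static int newIndex(int h, int oldCap) {
        return h & (2 * oldCap - 1);
    }

    public static void main(String[] args) {
        int oldCap = 16;
        // All of these hashes share bucket 5 in the 16-bucket table.
        for (int h : new int[] {5, 21, 37, 53}) {
            int j = h & (oldCap - 1);
            boolean lo = (h & oldCap) == 0; // the single deciding bit
            System.out.println(j + " -> " + newIndex(h, oldCap)
                               + (lo ? " (stays)" : " (moves by oldCap)"));
        }
    }
}
```

This is why resize never needs to recompute full hashes: only the one new mask bit matters.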

    
Replaces all linked nodes in bin at index for given hash unless table is too small, in which case resizes instead.
 
     final void treeifyBin(Node<K,V>[] tab, int hash) {
         int n, index; Node<K,V> e;
         if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
             resize();
         else if ((e = tab[index = (n - 1) & hash]) != null) {
             TreeNode<K,V> hd = null, tl = null;
             do {
                 TreeNode<K,V> p = replacementTreeNode(e, null);
                 if (tl == null)
                     hd = p;
                 else {
                     p.prev = tl;
                     tl.next = p;
                 }
                 tl = p;
             } while ((e = e.next) != null);
             if ((tab[index] = hd) != null)
                 hd.treeify(tab);
         }
     }

    
Copies all of the mappings from the specified map to this map. These mappings will replace any mappings that this map had for any of the keys currently in the specified map.

Parameters:
m mappings to be stored in this map
Throws:
java.lang.NullPointerException if the specified map is null
 
     public void putAll(Map<? extends K, ? extends V> m) {
         putMapEntries(m, true);
     }

    
Removes the mapping for the specified key from this map if present.

Parameters:
key key whose mapping is to be removed from the map
Returns:
the previous value associated with key, or null if there was no mapping for key. (A null return can also indicate that the map previously associated null with key.)
 
     public V remove(Object key) {
         Node<K,V> e;
         return (e = removeNode(hash(key), key, null, false, true)) == null ?
             null : e.value;
     }

    
Implements Map.remove and related methods

Parameters:
hash hash for key
key the key
value the value to match if matchValue, else ignored
matchValue if true only remove if value is equal
movable if false do not move other nodes while removing
Returns:
the node, or null if none
 
     final Node<K,V> removeNode(int hash, Object key, Object value,
                                boolean matchValue, boolean movable) {
         Node<K,V>[] tab; Node<K,V> p; int n, index;
         if ((tab = table) != null && (n = tab.length) > 0 &&
             (p = tab[index = (n - 1) & hash]) != null) {
             Node<K,V> node = null, e; K k; V v;
             if (p.hash == hash &&
                 ((k = p.key) == key || (key != null && key.equals(k))))
                 node = p;
             else if ((e = p.next) != null) {
                 if (p instanceof TreeNode)
                     node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
                 else {
                     do {
                         if (e.hash == hash &&
                             ((k = e.key) == key ||
                              (key != null && key.equals(k)))) {
                             node = e;
                             break;
                         }
                         p = e;
                     } while ((e = e.next) != null);
                 }
             }
             if (node != null && (!matchValue || (v = node.value) == value ||
                                  (value != null && value.equals(v)))) {
                 if (node instanceof TreeNode)
                     ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
                 else if (node == p)
                     tab[index] = node.next;
                 else
                     p.next = node.next;
                 ++modCount;
                 --size;
                 afterNodeRemoval(node);
                 return node;
             }
         }
         return null;
     }

    
Removes all of the mappings from this map. The map will be empty after this call returns.
 
     public void clear() {
         Node<K,V>[] tab;
         ++modCount;
         if ((tab = table) != null && size > 0) {
             size = 0;
             for (int i = 0; i < tab.length; ++i)
                 tab[i] = null;
         }
     }
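From the caller's side, the contract above is simple; this short sketch (class and variable names are mine, for illustration) shows that clear() empties the map while the instance itself remains usable:

```java
import java.util.HashMap;
import java.util.Map;

public class ClearDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.clear();                       // drops every mapping in one pass
        System.out.println(map.isEmpty()); // true
        // clear() nulls out the buckets but keeps the table array,
        // so the current capacity is retained for reuse.
    }
}
```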

    
Returns true if this map maps one or more keys to the specified value.

Parameters:
value value whose presence in this map is to be tested
Returns:
true if this map maps one or more keys to the specified value
 
     public boolean containsValue(Object value) {
         Node<K,V>[] tab; V v;
         if ((tab = table) != null && size > 0) {
             for (int i = 0; i < tab.length; ++i) {
                 for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                     if ((v = e.value) == value ||
                         (value != null && value.equals(v)))
                         return true;
                 }
             }
         }
         return false;
     }
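A small usage sketch of containsValue (identifiers here are illustrative). Note that, unlike containsKey, it must scan every bucket, so it runs in time proportional to capacity plus size:

```java
import java.util.HashMap;
import java.util.Map;

public class ContainsValueDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("x", 10);
        map.put("y", null);                          // null values are permitted
        System.out.println(map.containsValue(10));   // true
        System.out.println(map.containsValue(null)); // true: null matched with ==
        System.out.println(map.containsValue(99));   // false
    }
}
```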

    
Returns a Set view of the keys contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress (except through the iterator's own remove operation), the results of the iteration are undefined. The set supports element removal, which removes the corresponding mapping from the map, via the Iterator.remove, Set.remove, removeAll, retainAll, and clear operations. It does not support the add or addAll operations.

Returns:
a set view of the keys contained in this map
 
     public Set<K> keySet() {
         Set<K> ks;
         return (ks = keySet) == null ? (keySet = new KeySet()) : ks;
     }
 
     final class KeySet extends AbstractSet<K> {
         public final int size()                 { return size; }
         public final void clear()               { HashMap.this.clear(); }
         public final Iterator<K> iterator()     { return new KeyIterator(); }
         public final boolean contains(Object o) { return containsKey(o); }
         public final boolean remove(Object key) {
             return removeNode(hash(key), key, null, false, true) != null;
         }
         public final Spliterator<K> spliterator() {
             return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
         }
         public final void forEach(Consumer<? super K> action) {
             Node<K,V>[] tab;
             if (action == null)
                 throw new NullPointerException();
             if (size > 0 && (tab = table) != null) {
                 int mc = modCount;
                 for (int i = 0; i < tab.length; ++i) {
                     for (Node<K,V> e = tab[i]; e != null; e = e.next)
                         action.accept(e.key);
                 }
                 if (modCount != mc)
                     throw new ConcurrentModificationException();
             }
         }
     }
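The "backed by the map" wording above means keySet() returns a live view, not a snapshot. A minimal sketch (names are mine) of both directions of that coupling:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class KeySetViewDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Set<String> keys = map.keySet();          // a live view, not a copy
        keys.remove("a");                         // removal through the view...
        System.out.println(map.containsKey("a")); // ...removes the mapping: false
        map.put("c", 3);                          // map changes show up in the view
        System.out.println(keys.contains("c"));   // true
        // keys.add("d") would throw UnsupportedOperationException
    }
}
```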

    
Returns a Collection view of the values contained in this map. The collection is backed by the map, so changes to the map are reflected in the collection, and vice-versa. If the map is modified while an iteration over the collection is in progress (except through the iterator's own remove operation), the results of the iteration are undefined. The collection supports element removal, which removes the corresponding mapping from the map, via the Iterator.remove, Collection.remove, removeAll, retainAll and clear operations. It does not support the add or addAll operations.

Returns:
a view of the values contained in this map
 
     public Collection<V> values() {
         Collection<V> vs;
         return (vs = values) == null ? (values = new Values()) : vs;
     }
 
     final class Values extends AbstractCollection<V> {
         public final int size()                 { return size; }
         public final void clear()               { HashMap.this.clear(); }
         public final Iterator<V> iterator()     { return new ValueIterator(); }
         public final boolean contains(Object o) { return containsValue(o); }
         public final Spliterator<V> spliterator() {
             return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
         }
         public final void forEach(Consumer<? super V> action) {
             Node<K,V>[] tab;
             if (action == null)
                 throw new NullPointerException();
             if (size > 0 && (tab = table) != null) {
                 int mc = modCount;
                 for (int i = 0; i < tab.length; ++i) {
                     for (Node<K,V> e = tab[i]; e != null; e = e.next)
                         action.accept(e.value);
                 }
                 if (modCount != mc)
                     throw new ConcurrentModificationException();
             }
         }
     }
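The values() view behaves the same way, with one wrinkle worth showing: it is a Collection, not a Set, so duplicate values each appear once per mapping, and remove deletes only one matching mapping (sketch with illustrative names):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class ValuesViewDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 1);                 // duplicate values are fine
        Collection<Integer> vals = map.values();
        System.out.println(vals.size()); // 2: one element per mapping
        vals.remove(1);                  // removes exactly ONE matching mapping
        System.out.println(map.size());  // 1
    }
}
```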

    
Returns a Set view of the mappings contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress (except through the iterator's own remove operation, or through the setValue operation on a map entry returned by the iterator) the results of the iteration are undefined. The set supports element removal, which removes the corresponding mapping from the map, via the Iterator.remove, Set.remove, removeAll, retainAll and clear operations. It does not support the add or addAll operations.

Returns:
a set view of the mappings contained in this map
 
     public Set<Map.Entry<K,V>> entrySet() {
         Set<Map.Entry<K,V>> es;
         return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
     }
    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<Map.Entry<K,V>> iterator() {
            return new EntryIterator();
        }
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            Object key = e.getKey();
            Node<K,V> candidate = getNode(hash(key), key);
            return candidate != null && candidate.equals(e);
        }
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>) o;
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }
        public final Spliterator<Map.Entry<K,V>> spliterator() {
            return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                for (int i = 0; i < tab.length; ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }
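The javadoc above singles out setValue on an entry as the one mutation that is safe during iteration; a minimal sketch (names are mine) of updating every value through the entry-set view:

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        // setValue is a non-structural change, so it does not trip the
        // fail-fast modCount check during iteration.
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            e.setValue(e.getValue() * 10); // writes straight through to the map
        }
        System.out.println(map.get("a")); // 10
        System.out.println(map.get("b")); // 20
    }
}
```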
    // Overrides of JDK8 Map extension methods
    @Override
    public V getOrDefault(Object key, V defaultValue) {
        Node<K,V> e;
        return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
    }
    @Override
    public V putIfAbsent(K key, V value) {
        return putVal(hash(key), key, value, true, true);
    }
    @Override
    public boolean remove(Object key, Object value) {
        return removeNode(hash(key), key, value, true, true) != null;
    }
    @Override
    public boolean replace(K key, V oldValue, V newValue) {
        Node<K,V> e; V v;
        if ((e = getNode(hash(key), key)) != null &&
            ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
            e.value = newValue;
            afterNodeAccess(e);
            return true;
        }
        return false;
    }
    @Override
    public V replace(K key, V value) {
        Node<K,V> e;
        if ((e = getNode(hash(key), key)) != null) {
            V oldValue = e.value;
            e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
        return null;
    }
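A quick tour of the JDK 8 Map extension methods implemented above, showing the conditional semantics each one adds over plain get/put/remove (sketch; identifiers are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultMethodsDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        System.out.println(map.getOrDefault("missing", 0)); // 0, map untouched
        map.putIfAbsent("a", 99);         // no-op: "a" is already mapped
        System.out.println(map.get("a")); // 1
        map.remove("a", 2);               // no-op: current value does not match
        System.out.println(map.containsKey("a")); // true
        map.replace("a", 1, 5);           // succeeds: old value matches
        System.out.println(map.get("a")); // 5
    }
}
```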
    @Override
    public V computeIfAbsent(K key,
                             Function<? super K, ? extends V> mappingFunction) {
        if (mappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
            old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
            V oldValue;
            if (old != null && (oldValue = old.value) != null) {
                afterNodeAccess(old);
                return oldValue;
            }
        }
        V v = mappingFunction.apply(key);
        if (v == null) {
            return null;
        } else if (old != null) {
            old.value = v;
            afterNodeAccess(old);
            return v;
        }
        else if (t != null)
            t.putTreeVal(this, tab, hash, key, v);
        else {
            tab[i] = newNode(hash, key, v, first);
            if (binCount >= TREEIFY_THRESHOLD - 1)
                treeifyBin(tab, hash);
        }
        ++modCount;
        ++size;
        afterNodeInsertion(true);
        return v;
    }
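The classic use of computeIfAbsent is building a multimap: the mapping function runs only when the key is absent (or mapped to null), and a null result records nothing. A sketch with illustrative names:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ComputeIfAbsentDemo {
    public static void main(String[] args) {
        Map<String, List<Integer>> index = new HashMap<>();
        // First call creates the list; second call reuses it.
        index.computeIfAbsent("evens", k -> new ArrayList<>()).add(2);
        index.computeIfAbsent("evens", k -> new ArrayList<>()).add(4);
        System.out.println(index.get("evens"));        // [2, 4]
        index.computeIfAbsent("odds", k -> null);      // null result: no entry
        System.out.println(index.containsKey("odds")); // false
    }
}
```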
    @Override
    public V computeIfPresent(K key,
                              BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        Node<K,V> e; V oldValue;
        int hash = hash(key);
        if ((e = getNode(hash, key)) != null &&
            (oldValue = e.value) != null) {
            V v = remappingFunction.apply(key, oldValue);
            if (v != null) {
                e.value = v;
                afterNodeAccess(e);
                return v;
            }
            else
                removeNode(hash, key, null, false, true);
        }
        return null;
    }
    @Override
    public V compute(K key,
                     BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        V oldValue = (old == null) ? null : old.value;
        V v = remappingFunction.apply(key, oldValue);
        if (old != null) {
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            }
            else
                removeNode(hash, key, null, false, true);
        }
        else if (v != null) {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, v);
            else {
                tab[i] = newNode(hash, key, v, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
        }
        return v;
    }
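A sketch contrasting compute and computeIfPresent (names are mine): compute always invokes the function, passing null for an absent key, while computeIfPresent skips absent keys; in both, a null result removes the mapping:

```java
import java.util.HashMap;
import java.util.Map;

public class ComputeDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        // compute handles both absent and present keys (oldValue may be null)
        counts.compute("hits", (k, v) -> (v == null) ? 1 : v + 1);
        counts.compute("hits", (k, v) -> (v == null) ? 1 : v + 1);
        System.out.println(counts.get("hits"));           // 2
        // computeIfPresent leaves absent keys untouched
        counts.computeIfPresent("misses", (k, v) -> v + 1);
        System.out.println(counts.containsKey("misses")); // false
        // Returning null removes the mapping
        counts.compute("hits", (k, v) -> null);
        System.out.println(counts.containsKey("hits"));   // false
    }
}
```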
    @Override
    public V merge(K key, V value,
                   BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
        if (value == null)
            throw new NullPointerException();
        if (remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        if (old != null) {
            V v;
            if (old.value != null)
                v = remappingFunction.apply(old.value, value);
            else
                v = value;
            if (v != null) {
                old.value = v;
                afterNodeAccess(old);
            }
            else
                removeNode(hash, key, null, false, true);
            return v;
        }
        if (value != null) {
            if (t != null)
                t.putTreeVal(this, tab, hash, key, value);
            else {
                tab[i] = newNode(hash, key, value, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
        }
        return value;
    }
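merge is the idiomatic way to accumulate into a map: for an absent key it stores the given value, for a present key it applies the remapping function to the old and new values, and a null result deletes the entry. A word-count sketch (illustrative names):

```java
import java.util.HashMap;
import java.util.Map;

public class MergeDemo {
    public static void main(String[] args) {
        Map<String, Integer> freq = new HashMap<>();
        for (String w : new String[] {"to", "be", "or", "not", "to", "be"}) {
            // absent key: stores 1; present key: applies Integer::sum
            freq.merge(w, 1, Integer::sum);
        }
        System.out.println(freq.get("to"));         // 2
        System.out.println(freq.get("or"));         // 1
        freq.merge("or", 1, (oldV, newV) -> null);  // null result deletes
        System.out.println(freq.containsKey("or")); // false
    }
}
```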
    @Override
    public void forEach(BiConsumer<? super K, ? super V> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e.key, e.value);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
    @Override
    public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
        Node<K,V>[] tab;
        if (function == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                    e.value = function.apply(e.key, e.value);
                }
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
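Both bulk operations above snapshot modCount and re-check it after the traversal, which is what makes them fail fast. A sketch of typical use (names are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class BulkOpsDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.replaceAll((k, v) -> v * 100); // rewrite every value in place
        int[] sum = {0};
        map.forEach((k, v) -> sum[0] += v);
        System.out.println(sum[0]);        // 300
        // A structural modification inside the lambda (e.g. map.remove(k))
        // would change modCount and trigger ConcurrentModificationException.
    }
}
```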
    /* ------------------------------------------------------------ */
    // Cloning and serialization

    
Returns a shallow copy of this HashMap instance: the keys and values themselves are not cloned.

Returns:
a shallow copy of this map
    @SuppressWarnings("unchecked")
    @Override
    public Object clone() {
        HashMap<K,V> result;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }
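"Shallow" in the javadoc above means the table and entries are duplicated but key and value objects are shared. This sketch (illustrative names) shows both sides of that: the copies diverge structurally, yet a shared mutable value is visible through both maps:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class CloneDemo {
    public static void main(String[] args) {
        HashMap<String, List<Integer>> map = new HashMap<>();
        List<Integer> inner = new ArrayList<>();
        inner.add(1);
        map.put("a", inner);
        @SuppressWarnings("unchecked")
        HashMap<String, List<Integer>> copy =
            (HashMap<String, List<Integer>>) map.clone();
        copy.put("b", new ArrayList<>());         // structural change: copy only
        System.out.println(map.containsKey("b")); // false
        inner.add(2);                             // but the List itself is shared
        System.out.println(copy.get("a"));        // [1, 2]
    }
}
```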
    // These methods are also used when serializing HashSets
    final float loadFactor() { return loadFactor; }
    final int capacity() {
        return (table != null) ? table.length :
            (threshold > 0) ? threshold :
            DEFAULT_INITIAL_CAPACITY;
    }

    
Save the state of the HashMap instance to a stream (i.e., serialize it).

SerialData:
The capacity of the HashMap (the length of the bucket array) is emitted (int), followed by the size (an int, the number of key-value mappings), followed by the key (Object) and value (Object) for each key-value mapping. The key-value mappings are emitted in no particular order.
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException {
        int buckets = capacity();
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();
        s.writeInt(buckets);
        s.writeInt(size);
        internalWriteEntries(s);
    }
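A round-trip sketch of the serialized form described above (names are mine). Note that readObject, shown next, reads and ignores the stored bucket count and re-sizes the table from the mapping count and load factor instead:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class SerializeDemo {
    public static void main(String[] args) throws Exception {
        HashMap<String, Integer> map = new HashMap<>(64); // oversized capacity
        map.put("k", 7);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(map); // capacity, size, then key/value pairs
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            HashMap<String, Integer> copy =
                (HashMap<String, Integer>) in.readObject();
            System.out.println(copy.get("k")); // 7
        }
    }
}
```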

    
Reconstitute the HashMap instance from a stream (i.e., deserialize it).
    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        reinitialize();
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new InvalidObjectException("Illegal load factor: " +
                                             loadFactor);
        s.readInt();                // Read and ignore number of buckets
        int mappings = s.readInt(); // Read number of mappings (size)
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " +
                                             mappings);
        else if (mappings > 0) { // (if zero, use defaults)
            // Size the table using given load factor only if within
            // range of 0.25...4.0
            float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
            float fc = (float)mappings / lf + 1.0f;
            int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
                       DEFAULT_INITIAL_CAPACITY :
                       (fc >= MAXIMUM_CAPACITY) ?
                       MAXIMUM_CAPACITY :
                       tableSizeFor((int)fc));
            float ft = (float)cap * lf;
            threshold = ((cap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : Integer.MAX_VALUE);
            @SuppressWarnings({"rawtypes","unchecked"})
                Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
            table = tab;