Brief Introduction
HashMap is a key/value storage structure in which each key maps to a single value. Lookups and modifications are fast, with an average time complexity of O(1). It is not thread-safe and does not guarantee the order in which elements are stored.
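As a quick, minimal usage sketch of these points (plain java.util API, nothing HashMap-internal assumed):

import java.util.HashMap;
import java.util.Map;

public class HashMapBasics {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);
        ages.put("alice", 31);          // same key: the value is overwritten
        ages.put(null, -1);             // one null key is allowed

        System.out.println(ages.get("alice")); // 31
        System.out.println(ages.get(null));    // -1
        System.out.println(ages.get("carol")); // null: absent key

        // Iteration order is not guaranteed and may change after a resize
        System.out.println(ages);
    }
}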
Inheritance System
HashMap implements Cloneable and can be cloned.
HashMap implements Serializable and can be serialized.
HashMap inherits from AbstractMap, implements the Map interface, and has all the capabilities of Map.
Storage Structure
In Java, HashMap is implemented with a composite structure (array + linked list + red-black tree). A slot of the array is also called a bucket.
When an element is added, its position in the array is computed from its hash value. If that position is empty, the element is placed there directly. If an element is already there, the new element is appended to the end of the linked list at that position.
When the number of elements in a linked list reaches a certain threshold (and the length of the array has reached a certain size), the linked list is converted into a red-black tree to improve efficiency.
Array lookup is O(1), linked-list lookup is O(k), and red-black-tree lookup is O(log k), where k is the number of elements in the bucket. So when a bucket holds many elements, converting it to a red-black tree greatly improves efficiency.
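Because the capacity is always a power of two, the bucket index is computed as (capacity - 1) & hash, which for non-negative hashes is equivalent to hash % capacity. A small illustrative sketch (class and variable names are my own, not from the JDK source):

public class BucketIndexDemo {
    public static void main(String[] args) {
        int capacity = 16;                    // always a power of two in HashMap
        int[] hashes = {3, 7, 11, 15, 19, 35};
        for (int h : hashes) {
            // (capacity - 1) is a bit mask like 0b1111, so & keeps only the low bits
            int index = (capacity - 1) & h;
            System.out.println("hash " + h + " -> bucket " + index);
        }
        // hashes 3, 19 and 35 all land in bucket 3: a collision chain forms there
    }
}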
Source Code Analysis
Attributes
/** The default initial capacity is 16 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;

/** The maximum capacity is 2^30 */
static final int MAXIMUM_CAPACITY = 1 << 30;

/** The default load factor */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/** Treeify a bucket when the number of elements in it reaches 8 */
static final int TREEIFY_THRESHOLD = 8;

/** Convert a tree back into a linked list when the number of elements in a bucket drops to 6 or fewer */
static final int UNTREEIFY_THRESHOLD = 6;

/** Treeification only happens once the number of buckets reaches 64 */
static final int MIN_TREEIFY_CAPACITY = 64;

/** The array; its slots are also called buckets */
transient Node<K,V>[] table;

/** Cache for entrySet() */
transient Set<Map.Entry<K,V>> entrySet;

/** Number of elements */
transient int size;

/** Number of structural modifications, used for the fail-fast strategy during iteration */
transient int modCount;

/** Resize when the number of elements reaches threshold = capacity * loadFactor */
int threshold;

/** Load factor */
final float loadFactor;
- Capacity

Capacity is the length of the array, i.e. the number of buckets. It defaults to 16 and has a maximum of 2^30. Treeification can only happen once the capacity has reached 64.

- Load factor

The load factor is used to compute the threshold at which the table is resized: threshold = capacity * loadFactor. The default load factor is 0.75.

- Treeification

A bucket is treeified when the capacity has reached 64 and the length of its linked list reaches 8; when the length of the list drops to 6 or fewer, the tree is converted back into a linked list.
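A tiny sketch of how the capacity, load factor and resize threshold relate (values here are just the defaults quoted above):

public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;
        float loadFactor = 0.75f;

        // A resize is triggered once size exceeds threshold = capacity * loadFactor
        int threshold = (int) (capacity * loadFactor);
        System.out.println("resize once size exceeds " + threshold);   // 12

        // After a resize both the capacity and the threshold double
        System.out.println("next threshold: " + (int) (capacity * 2 * loadFactor)); // 24
    }
}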
Node Internal Class
Node is a typical singly linked list node, where hash stores the hash value computed from the key.
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Node<K,V> next;
}
TreeNode Internal Class
TreeNode inherits from the Entry class in LinkedHashMap, which will be covered later.
TreeNode is a typical tree node. Its prev field points to the previous node in the linked list and is used to quickly find the preceding node when an element is deleted.
// In HashMap
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
    TreeNode<K,V> parent;  // red-black tree links
    TreeNode<K,V> left;
    TreeNode<K,V> right;
    TreeNode<K,V> prev;    // needed to unlink next upon deletion
    boolean red;
}

// Typical doubly linked list node in LinkedHashMap
static class Entry<K,V> extends HashMap.Node<K,V> {
    Entry<K,V> before, after;
    Entry(int hash, K key, V value, Node<K,V> next) {
        super(hash, key, value, next);
    }
}
HashMap() constructor
The no-argument constructor; all fields use their default values.
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
HashMap(int initialCapacity) constructor
Calls the HashMap(int initialCapacity, float loadFactor) constructor, passing in the default load factor.
public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
HashMap(int initialCapacity, float loadFactor) constructor
Checks that the given initial capacity and load factor are valid, and sets the resize threshold to the smallest power of two that is greater than or equal to the given initial capacity.
public HashMap(int initialCapacity, float loadFactor) {
    // Check that the given initial capacity is legal
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    // Check that the load factor is legal
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
    this.loadFactor = loadFactor;
    // Compute the resize threshold
    this.threshold = tableSizeFor(initialCapacity);
}

static final int tableSizeFor(int cap) {
    // The threshold is set to the smallest power of two >= the given capacity
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
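To see what the bit-smearing in tableSizeFor() produces, here is a small standalone copy of the same logic (the MAXIMUM_CAPACITY clamp is omitted for brevity):

public class TableSizeForDemo {
    // Same bit-twiddling as HashMap.tableSizeFor: round up to the next power of two
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : n + 1;
    }

    public static void main(String[] args) {
        int[] caps = {1, 7, 8, 9, 100};
        for (int c : caps) {
            System.out.println(c + " -> " + tableSizeFor(c));
        }
        // 1 -> 1, 7 -> 8, 8 -> 8, 9 -> 16, 100 -> 128
    }
}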
put(K key, V value) method
The entry point for adding an element.
public V put(K key, V value) {
    // Call hash(key) to compute the hash value of the key
    return putVal(hash(key), key, value, false, true);
}

static final int hash(Object key) {
    int h;
    // If the key is null the hash value is 0; otherwise call the key's hashCode()
    // and XOR the high 16 bits into the low bits, so the resulting hashes are more spread out
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
    Node<K, V>[] tab;
    Node<K, V> p;
    int n, i;
    // If the number of buckets is 0, initialize the table
    if ((tab = table) == null || (n = tab.length) == 0)
        // Call resize() to initialize
        n = (tab = resize()).length;
    // (n - 1) & hash computes which bucket the element belongs to
    // If there is no element in the bucket, place the element directly in the first slot
    if ((p = tab[i = (n - 1) & hash]) == null)
        // Create a new node in the bucket
        tab[i] = newNode(hash, key, value, null);
    else {
        // An element already exists in the bucket
        Node<K, V> e;
        K k;
        // If the key of the first element in the bucket equals the key being inserted,
        // save it in e so its value can be replaced later
        if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        else if (p instanceof TreeNode)
            // If the first element is a tree node, call the tree node's putTreeVal() to insert the element
            e = ((TreeNode<K, V>) p).putTreeVal(this, tab, hash, key, value);
        else {
            // Traverse the linked list of this bucket; binCount counts the elements in the list
            for (int binCount = 0; ; ++binCount) {
                // If the end of the list is reached without finding an element with the same key,
                // the key is not present yet, so append a new node at the end of the list
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    // If the list is long enough after inserting the new node, consider treeifying it;
                    // the first element is not counted in binCount, hence the -1
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                // Exit the loop if the key being inserted is found in the list
                if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        // If an element with the same key was found
        if (e != null) { // existing mapping for key
            // Record the old value
            V oldValue = e.value;
            // Decide whether the old value needs to be replaced
            if (!onlyIfAbsent || oldValue == null)
                // Replace the old value with the new one
                e.value = value;
            // Post-access hook, used by LinkedHashMap
            afterNodeAccess(e);
            // Return the old value
            return oldValue;
        }
    }
    // Reaching this point means no existing element was found
    // Increase the modification count
    ++modCount;
    // Increase the element count and check whether a resize is needed
    if (++size > threshold)
        // Resize
        resize();
    // Post-insertion hook, used by LinkedHashMap
    afterNodeInsertion(evict);
    // No existing element was found, so return null
    return null;
}
- Compute the hash value of the key;
- If the number of buckets (the array length) is 0, initialize the table;
- If the bucket the key falls into contains no elements, insert the new node directly;
- If the key of the first element in that bucket equals the key being inserted, the element has been found and is handled by the later step (9);
- If the first element is a tree node, call the tree node's putTreeVal() to find the element or insert a new tree node;
- Otherwise, traverse the bucket's linked list to check whether the key already exists;
- If an element with the matching key is found, it is handled by the later step (9);
- If no matching element is found, append a new node at the end of the list and check whether the bucket needs to be treeified;
- If an element with the matching key was found, decide whether to replace its old value and return the old value;
- If a new element was inserted, increase size by 1 and check whether a resize is needed (a short usage sketch follows below).
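From the caller's point of view, the most visible part of this flow is the return value and the onlyIfAbsent flag. A minimal sketch using only the public Map API (putIfAbsent takes the same putVal path with onlyIfAbsent = true):

import java.util.HashMap;

public class PutSemanticsDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();

        // put returns the previous value mapped to the key, or null if there was none
        System.out.println(map.put("k", 1));          // null (new mapping)
        System.out.println(map.put("k", 2));          // 1   (old value returned, value replaced)

        // putIfAbsent keeps the existing non-null value instead of replacing it
        System.out.println(map.putIfAbsent("k", 9));  // 2   (existing value kept)
        System.out.println(map.get("k"));             // 2
    }
}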
resize() method
Expansion method.
final Node<K, V>[] resize() {
    // The old array
    Node<K, V>[] oldTab = table;
    // The old capacity
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    // The old resize threshold
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            // If the old capacity has already reached the maximum capacity, do not resize
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            // If doubling the old capacity stays below the maximum and the old capacity is at least
            // the default initial capacity (16), double the capacity and double the resize threshold
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        // A map created with a non-default constructor takes this branch on the first insertion:
        // the old capacity is 0 and the old threshold is greater than 0, so the new capacity is the old threshold
        newCap = oldThr;
    else { // zero initial threshold signifies using defaults
        // A map created with the default constructor takes this branch on the first insertion:
        // capacity and threshold are both 0 (not yet initialized), so use the default capacity,
        // and the threshold is the default capacity * the default load factor
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int) (DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        // If the new threshold is still 0, compute it as capacity * load factor, capped by the maximum capacity
        float ft = (float) newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float) MAXIMUM_CAPACITY ?
                  (int) ft : Integer.MAX_VALUE);
    }
    // Set the resize threshold to the new threshold
    threshold = newThr;
    // Create an array with the new capacity
    @SuppressWarnings({"rawtypes", "unchecked"})
    Node<K, V>[] newTab = (Node<K, V>[]) new Node[newCap];
    // Make the new array the bucket array
    table = newTab;
    // Move the elements if the old array is not empty
    if (oldTab != null) {
        // Traverse the old array
        for (int j = 0; j < oldCap; ++j) {
            Node<K, V> e;
            // If the first element of the bucket is not empty, assign it to e
            if ((e = oldTab[j]) != null) {
                // Clear the old bucket so it can be garbage collected
                oldTab[j] = null;
                // If there is only one element in this bucket, compute its position in the new array and move it there;
                // since the capacity doubles every time, the target bucket in the new array must still be empty
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    // If the first element is a tree node, split the tree into two trees and insert them into the new buckets
                    ((TreeNode<K, V>) e).split(this, newTab, j, oldCap);
                else { // preserve order
                    // The bucket holds more than one element and is not a tree,
                    // so split the list into two lists and insert them into the new buckets.
                    // For example, if the original capacity was 4 and the elements 3, 7, 11, 15 were all in bucket 3,
                    // then after expanding to 8, 3 and 11 stay in bucket 3 while 7 and 15 move to bucket 7.
                    // In other words, the list is split into two lists.
                    Node<K, V> loHead = null, loTail = null;
                    Node<K, V> hiHead = null, hiTail = null;
                    Node<K, V> next;
                    do {
                        next = e.next;
                        // (e.hash & oldCap) == 0 means the element belongs to the low list
                        // For example, 3 & 4 == 0
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            // (e.hash & oldCap) != 0 means the element belongs to the high list
                            // For example, 7 & 4 != 0
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    // The traversal is done and the bucket has been split into two lists
                    // The low list goes into the same position in the new array as in the old one (3 and 11 stay in bucket 3)
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    // The high list goes into the old position plus the old capacity (7 and 15 move to bucket 7)
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
- If the default constructor was used, the table is initialized on the first insertion with the default capacity of 16 and a resize threshold of 12;
- If a non-default constructor was used, the initial capacity on the first insertion equals the resize threshold, which the constructor set to the smallest power of two greater than or equal to the requested capacity;
- If the old capacity is greater than 0, the new capacity is twice the old capacity (but no more than the maximum capacity of 2^30), and the new resize threshold is twice the old threshold;
- Create a new array with the new capacity;
- Move the elements: each original linked list is split into two lists, the low list stays in the bucket at the original index, and the high list moves to the bucket at the original index plus the old capacity (see the sketch below).
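The low/high split can be reproduced with plain arithmetic: the bit tested by (hash & oldCap) decides whether an element stays put or moves by oldCap. A small sketch using the same 3, 7, 11, 15 example as the comments above:

public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 4;
        int[] hashes = {3, 7, 11, 15};   // all land in bucket 3 when the capacity is 4

        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            // (h & oldCap) == 0 -> low list (index unchanged), otherwise high list (index + oldCap)
            int newIndex = ((h & oldCap) == 0) ? oldIndex : oldIndex + oldCap;
            System.out.println("hash " + h + ": bucket " + oldIndex + " -> " + newIndex);
        }
        // hash 3:  bucket 3 -> 3   (low list)
        // hash 7:  bucket 3 -> 7   (high list)
        // hash 11: bucket 3 -> 3   (low list)
        // hash 15: bucket 3 -> 7   (high list)
    }
}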
TreeNode.putTreeVal(...) Method
Method of inserting elements into a red-black tree.
final TreeNode<K, V> putTreeVal(HashMap<K, V> map, Node<K, V>[] tab,
                                int h, K k, V v) {
    Class<?> kc = null;
    // Marks whether a full search for this key has already been done
    boolean searched = false;
    // Find the root node of the tree
    TreeNode<K, V> root = (parent != null) ? root() : this;
    // Traverse starting from the root of the tree
    for (TreeNode<K, V> p = root; ; ) {
        // dir = direction, marks whether to go left or right
        // ph = p.hash, the hash value of the current node
        int dir, ph;
        // pk = p.key, the key of the current node
        K pk;
        if ((ph = p.hash) > h) {
            // The current hash is larger than the target hash, so go left
            dir = -1;
        }
        else if (ph < h)
            // The current hash is smaller than the target hash, so go right
            dir = 1;
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            // Same hash and same key: the node was found, return it directly;
            // back in putVal() the caller decides whether to replace its value
            return p;
        else if ((kc == null &&
                  // Returns the actual class if k implements Comparable, otherwise returns null
                  (kc = comparableClassFor(k)) == null) ||
                 // Returns 0 if k and pk are not mutually comparable, otherwise the comparison result
                 (dir = compareComparables(kc, k, pk)) == 0) {
            // Reaching here means the hashes are equal but the keys are not comparable (or compare as equal).
            // For example, if the key type is Object, both a String and an Integer could be passed in and happen to have the same hash.
            // Elements with the same hash live in the same subtree, so p is the root of that subtree.
            // Search its left and right subtrees to see whether an element equal to the key already exists.
            if (!searched) {
                TreeNode<K, V> q, ch;
                searched = true;
                // If the search of the left or right subtree finds the key, return it directly
                if (((ch = p.left) != null &&
                     (q = ch.find(h, k, kc)) != null) ||
                    ((ch = p.right) != null &&
                     (q = ch.find(h, k, kc)) != null))
                    return q;
            }
            // Otherwise break the tie by comparing the keys' class names and identity hash codes
            dir = tieBreakOrder(k, pk);
        }
        TreeNode<K, V> xp = p;
        if ((p = (dir <= 0) ? p.left : p.right) == null) {
            // If no element with this key was found, create a new node here
            Node<K, V> xpn = xp.next;
            TreeNode<K, V> x = map.newTreeNode(h, k, v, xpn);
            if (dir <= 0)
                xp.left = x;
            else
                xp.right = x;
            xp.next = x;
            x.parent = x.prev = xp;
            if (xpn != null)
                ((TreeNode<K, V>) xpn).prev = x;
            // Rebalance after inserting the tree node,
            // then move the root node to the first position of the bucket's linked list
            moveRootToFront(tab, balanceInsertion(root, x));
            return null;
        }
    }
}
- Find the root node;
- Search starting from the root node;
- Compare the hash and key values; if both match, the node is found and returned directly, and putVal() decides whether to replace its value;
- Otherwise, use the hash and key values to decide whether to continue in the left or right subtree, returning directly if the key is found;
- If the key is still not found at a leaf position, insert a new tree node and rebalance the tree (a collision example follows below).
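A hedged sketch of the situation this method handles: keys whose hashCode values all collide end up in the same bucket, and once the thresholds quoted earlier are reached the bucket becomes a red-black tree. The tree structure is internal and cannot be observed through the public API, but lookups remain correct either way. BadKey below is a made-up class for illustration:

import java.util.HashMap;

public class CollisionDemo {
    // A key type whose hashCode always collides, forcing every entry into one bucket
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }   // deliberate collision
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        HashMap<BadKey, Integer> map = new HashMap<>();
        // BadKey does not implement Comparable, so ordering inside the tree
        // falls back to tieBreakOrder(); lookups still work through equals()
        for (int i = 0; i < 100; i++) {
            map.put(new BadKey(i), i);
        }
        System.out.println(map.get(new BadKey(57)));  // 57
        System.out.println(map.size());               // 100
    }
}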
treeifyBin() method
Called after an insertion when the length of the bucket's linked list has reached 8, to decide whether the bucket should actually be treeified or the table resized instead.
final void treeifyBin(Node<K, V>[] tab, int hash) {
    int n, index;
    Node<K, V> e;
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        // If the number of buckets is less than 64, resize instead of treeifying,
        // because after a resize the linked list is split into two lists, reducing the number of elements.
        // This is not guaranteed, though: if all the elements in the bucket still map to the same
        // bucket after the capacity is doubled, resizing will not shorten the list at all.
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K, V> hd = null, tl = null;
        // Replace every node with a tree node
        do {
            TreeNode<K, V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        // If the loop above ran, treeify the list starting from its head node
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}
TreeNode.treeify() method
The method that actually builds the red-black tree.
final void treeify(Node<K, V>[] tab) {
    TreeNode<K, V> root = null;
    for (TreeNode<K, V> x = this, next; x != null; x = next) {
        next = (TreeNode<K, V>) x.next;
        x.left = x.right = null;
        // The first element becomes the root node and is colored black;
        // the remaining elements are inserted into the tree one by one, rebalancing after each insertion
        if (root == null) {
            x.parent = null;
            x.red = false;
            root = x;
        }
        else {
            K k = x.key;
            int h = x.hash;
            Class<?> kc = null;
            // Search from the root node for the position where this element should be inserted
            for (TreeNode<K, V> p = root; ; ) {
                int dir, ph;
                K pk = p.key;
                if ((ph = p.hash) > h)
                    dir = -1;
                else if (ph < h)
                    dir = 1;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0)
                    dir = tieBreakOrder(k, pk);
                // If an empty position is reached, insert the element there
                TreeNode<K, V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    x.parent = xp;
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    // Rebalance after insertion; balanceInsertion() inserts the node as a red node by default
                    root = balanceInsertion(root, x);
                    break;
                }
            }
        }
    }
    // Move the root node to the head of the bucket's linked list,
    // because after rebalancing the first element is not necessarily the root
    moveRootToFront(tab, root);
}
- Start traversing from the first element of the linked list;
- Use the first element as the root node;
- Insert the remaining elements into the red-black tree one by one, rebalancing after each insertion;
- Move the root node to the position of the first element in the list (because the root may change during rebalancing).
get(Object key) method
public V get(Object key) {
    Node<K, V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

final Node<K, V> getNode(int hash, Object key) {
    Node<K, V>[] tab;
    Node<K, V> first, e;
    int n;
    K k;
    // If the number of buckets is greater than 0 and the first element of the bucket the key maps to is not empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        // Check whether the first element is the one being looked for; if so, return it directly
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            // If the first element is a tree node, search the bucket as a tree
            if (first instanceof TreeNode)
                return ((TreeNode<K, V>) first).getTreeNode(hash, key);
            // Otherwise, traverse the whole linked list looking for the element
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
- Calculates the hash value of the key;
- Find the bucket where the key is located and its first element;
- If the key of the first element is equal to the key to be found, it is returned directly.
- If the first element is a tree node, search the bucket as a tree; otherwise search it as a linked list (see the sketch below).
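One caller-visible consequence of this flow is that get() returns null both when the key is absent and when the key is mapped to a null value, so containsKey() is needed to tell the two cases apart. A small sketch:

import java.util.HashMap;

public class GetSemanticsDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put("present", null);          // a key may be mapped to a null value

        System.out.println(map.get("present"));          // null (mapped to null)
        System.out.println(map.get("absent"));           // null (no mapping)
        System.out.println(map.containsKey("present"));  // true
        System.out.println(map.containsKey("absent"));   // false
    }
}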
TreeNode.getTreeNode(int h, Object k) method
final TreeNode<K, V> getTreeNode(int h, Object k) {
    // Search starting from the root node of the tree
    return ((parent != null) ? root() : this).find(h, k, null);
}

final TreeNode<K, V> find(int h, Object k, Class<?> kc) {
    TreeNode<K, V> p = this;
    do {
        int ph, dir;
        K pk;
        TreeNode<K, V> pl = p.left, pr = p.right, q;
        if ((ph = p.hash) > h)
            // Go to the left subtree
            p = pl;
        else if (ph < h)
            // Go to the right subtree
            p = pr;
        else if ((pk = p.key) == k || (k != null && k.equals(pk)))
            // Found it, return directly
            return p;
        else if (pl == null)
            // Same hash but different key; the left subtree is empty, so search the right subtree
            p = pr;
        else if (pr == null)
            // The right subtree is empty, so search the left subtree
            p = pl;
        else if ((kc != null ||
                  (kc = comparableClassFor(k)) != null) &&
                 (dir = compareComparables(kc, k, pk)) != 0)
            // Compare the keys with compareTo() to decide whether to search the left or right subtree
            p = (dir < 0) ? pl : pr;
        else if ((q = pr.find(h, k, kc)) != null)
            // If none of the above settles it, recursively try the right subtree
            return q;
        else
            // Otherwise continue in the left subtree
            p = pl;
    } while (p != null);
    return null;
}

This is the classical binary search tree lookup: first compare by hash value, then compare the keys to decide whether to search the left or right subtree.
remove(Object key) method
public V remove(Object key) {
    Node<K, V> e;
    return (e = removeNode(hash(key), key, null, false, true)) == null ?
        null : e.value;
}

final Node<K, V> removeNode(int hash, Object key, Object value,
                            boolean matchValue, boolean movable) {
    Node<K, V>[] tab;
    Node<K, V> p;
    int n, index;
    // If the number of buckets is greater than 0 and the first element of the bucket holding the element to delete is not empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (p = tab[index = (n - 1) & hash]) != null) {
        Node<K, V> node = null, e;
        K k;
        V v;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            // If the first element happens to be the one being looked for, store it in node for deletion later
            node = p;
        else if ((e = p.next) != null) {
            if (p instanceof TreeNode)
                // If the first element is a tree node, search for the node as a tree
                node = ((TreeNode<K, V>) p).getTreeNode(hash, key);
            else {
                // Otherwise, traverse the whole linked list looking for the element
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key ||
                         (key != null && key.equals(k)))) {
                        node = e;
                        break;
                    }
                    p = e;
                } while ((e = e.next) != null);
            }
        }
        // If an element was found, check whether the value must match as well:
        // delete directly if no match is required, otherwise check whether the value equals the one passed in
        if (node != null && (!matchValue || (v = node.value) == value ||
                             (value != null && value.equals(v)))) {
            if (node instanceof TreeNode)
                // If it is a tree node, call the tree's removal method (called on node, so it removes itself)
                ((TreeNode<K, V>) node).removeTreeNode(this, tab, movable);
            else if (node == p)
                // If the element to delete is the first element, move the second element to the first position
                tab[index] = node.next;
            else
                // Otherwise unlink node from the list
                p.next = node.next;
            ++modCount;
            --size;
            // Post-removal hook
            afterNodeRemoval(node);
            return node;
        }
    }
    return null;
}
- First find the node holding the element;
- If the node found is a tree node, remove it via the tree's removal method;
- If the node found is the first node of the bucket, move the second node to the first position;
- Otherwise unlink the node from the linked list;
- Decrement size, call the post-removal hook, and so on (a short usage sketch follows below).
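The matchValue parameter is visible to callers through the two-argument Map.remove(key, value). A minimal sketch of both removal variants:

import java.util.HashMap;

public class RemoveSemanticsDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // remove(key) returns the removed value, or null if the key was absent
        System.out.println(map.remove("a"));      // 1
        System.out.println(map.remove("a"));      // null (already gone)

        // remove(key, value) takes the matchValue path: it only removes on an exact value match
        System.out.println(map.remove("b", 99));  // false, value does not match
        System.out.println(map.remove("b", 2));   // true
        System.out.println(map.size());           // 0
    }
}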
TreeNode.removeTreeNode(...) Method
final void removeTreeNode(HashMap<K, V> map, Node<K, V>[] tab,
                          boolean movable) {
    int n;
    // Return directly if the number of buckets is 0
    if (tab == null || (n = tab.length) == 0)
        return;
    // Index of the node's bucket
    int index = (n - 1) & hash;
    // The first node of the bucket, the root node, and the root's left child
    TreeNode<K, V> first = (TreeNode<K, V>) tab[index], root = first, rl;
    // The successor node and the predecessor node in the linked list
    TreeNode<K, V> succ = (TreeNode<K, V>) next, pred = prev;
    if (pred == null)
        // If the predecessor is empty, the current node is the first node of the bucket;
        // placing the successor in the first position removes the current node from the list
        tab[index] = first = succ;
    else
        // Otherwise, pointing the predecessor's next at the successor removes the current node from the list
        pred.next = succ;
    // If the successor is not empty, point its prev at the predecessor of the current node
    if (succ != null)
        succ.prev = pred;
    // If the first node is now empty, the bucket is empty, so return directly
    if (first == null)
        return;
    // If the root node's parent is not empty, look up the real root again
    if (root.parent != null)
        root = root.root();
    // If the root is empty, or nodes may be moved and the tree has become too small,
    // untreeify the bucket (convert the tree back into a linked list)
    if (root == null ||
        (movable && (root.right == null ||
                     (rl = root.left) == null ||
                     rl.left == null))) {
        tab[index] = first.untreeify(map); // too small
        return;
    }
    // The code above only removes the node from the linked list; what follows removes it from the
    // red-black tree (a TreeNode is both a linked list node and a tree node).
    // The remaining code deals purely with the red-black tree and is not commented further here.
    // Roughly, deleting a red-black tree node means finding the smallest node in its right subtree,
    // moving it into the deleted node's place, and then rebalancing the tree.
    // ... (rest of the method omitted)

- A TreeNode is both a linked list node and a red-black tree node;
- First remove the node from the linked list;
- Then remove it from the red-black tree and rebalance the tree;
Summary
- HashMap is a hash table whose storage structure is array + linked list + red-black tree;
- HashMap has a default initial capacity of 16 (1 << 4), a default load factor of 0.75f, and its capacity is always a power of two;
- HashMap doubles its capacity each time it expands;
- When the number of buckets is less than 64, buckets are not treeified; the table is only resized;
- A bucket is treeified when the number of buckets has reached 64 and the number of elements in that single bucket has reached 8;
- When the number of elements in a single bucket drops to 6 or fewer, the tree is converted back into a linked list;
- HashMap is a non-thread-safe container;
- HashMap looks up elements with an average time complexity of O(1).
Red-Black Tree Notes
A red-black tree has the following five properties:
(1) Every node is either red or black.
(2) The root node is black.
(3) Each leaf node (NIL node, empty node) is black.
(4) Both children of every red node are black. (There cannot be two consecutive red nodes on any path from a leaf to the root.)
(5) All paths from any node to each leaf contain the same number of black nodes.
The time complexity of operations on a red-black tree is O(log n), proportional to the height of the tree.
Every insertion and deletion in a red-black tree requires rebalancing, which may change the root node and involve recoloring, left rotations, right rotations, and so on.
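A standard argument for the O(log n) bound (not part of the original text): by property (5) every root-to-leaf path contains the same number of black nodes, and by property (4) red nodes can never be adjacent, so the longest root-to-leaf path is at most twice the shortest one. A red-black tree with n internal nodes therefore has height h <= 2 * log2(n + 1), which is why lookups, insertions and deletions all take O(log n) time.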