A Brief Analysis of the HashMap Source Code:

1 Everything starts with HashMap's fields:

    /**
     * The default initial capacity - MUST be a power of two. (the initial capacity)
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30. (the maximum capacity)
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     * The default load factor: the map is resized when size >= loadFactor * capacity
     * AND the bucket table[i] being inserted into is already non-null.
     * The resize check lives in java.util.HashMap#addEntry.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * An empty table instance to share when the table is not inflated.
     */
    static final Entry<?,?>[] EMPTY_TABLE = {};

    /**
     * The table, resized as necessary. Length MUST Always be a power of two. (the hash table)
     */
    transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;

    /**
     * The number of key-value mappings contained in this map. (the size of the map)
     */
    transient int size;

    /**
     * The next size value at which to resize (capacity * load factor).
     * @serial
     */
    // If table == EMPTY_TABLE then this is the initial capacity at which the
    // table will be created when inflated. (the resize threshold = capacity * load factor)
    int threshold;

    /**
     * The load factor for the hash table. Defaults to 0.75 and can be set
     * through a HashMap constructor.
     *
     * @serial
     */
    final float loadFactor;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash). This field is used to make iterators on Collection-views of
     * the HashMap fail-fast. (See ConcurrentModificationException.)
     * Modification count: e.g. a rehash, or a structural change detected while
     * iterating a view of the map, lets the iterator fail fast.
     */
    transient int modCount;

    /**
     * The default threshold of map capacity above which alternative hashing is
     * used for String keys. Alternative hashing reduces the incidence of
     * collisions due to weak hash code calculation for String keys.
     * <p/>
     * This value may be overridden by defining the system property
     * {@code jdk.map.althashing.threshold}. A property value of {@code 1}
     * forces alternative hashing to be used at all times whereas
     * {@code -1} value ensures that alternative hashing is never used.
     * (A threshold consulted when deciding whether to rehash.)
     */
    static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;
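
For orientation, here is a minimal sketch, not part of the JDK source, showing how the defaults above interact; the class name DefaultsSketch is made up for illustration:

    // A minimal sketch (not JDK source) of how the default fields interact.
    public class DefaultsSketch {
        public static void main(String[] args) {
            int capacity = 1 << 4;            // DEFAULT_INITIAL_CAPACITY = 16
            float loadFactor = 0.75f;         // DEFAULT_LOAD_FACTOR
            int threshold = (int) (capacity * loadFactor);
            // With the defaults the threshold is 12, so growth to 32 is first considered
            // once size has reached 12 and an insert lands in an already occupied bucket.
            System.out.println("capacity=" + capacity + ", threshold=" + threshold);
        }
    }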

2 Next, let's look at HashMap's put method:

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V put(K key, V value) {
        if (table == EMPTY_TABLE) { // initialize the hash table
            inflateTable(threshold);
        }
        if (key == null) // a null key is stored at table[0]
            return putForNullKey(value);
        int hash = hash(key);                 // compute the hash value
        int i = indexFor(hash, table.length); // compute the entry's index in the table
        // this loop updates the value of a key that is already present
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue; // on an update, return the old value
            }
        }
        // bump the modification count
        modCount++;
        // add the new entry to the hash table
        addEntry(hash, key, value, i);
        // when a new entry was added, return null
        return null;
    }
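
As a quick illustration of the return-value contract above, a small usage sketch (plain client code, nothing HashMap-internal):

    import java.util.HashMap;
    import java.util.Map;

    public class PutReturnValueDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<>();
            System.out.println(map.put("a", 1));  // null -> "a" had no previous mapping
            System.out.println(map.put("a", 2));  // 1    -> the old value is returned
            System.out.println(map.put(null, 3)); // null -> the null key is stored at table[0]
            System.out.println(map.get("a"));     // 2
        }
    }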

3 The inflateTable method called from put:

    /**
     * Inflates the table.
     */
    private void inflateTable(int toSize) {
        // Find a power of 2 >= toSize,
        // i.e. the smallest power of two that is >= toSize
        int capacity = roundUpToPowerOf2(toSize);
        // compute the resize threshold
        threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
        // allocate the hash table
        table = new Entry[capacity];
        // refresh the hash-seed state so later resizes can decide whether to rehash
        initHashSeedAsNeeded(capacity);
    }
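
The body of roundUpToPowerOf2 is not shown here; the following is only a sketch of the value it has to produce (the smallest power of two >= toSize), written with Integer.highestOneBit and ignoring the MAXIMUM_CAPACITY cap:

    // Sketch only: "smallest power of two >= n", ignoring the MAXIMUM_CAPACITY cap.
    static int roundUpToPowerOf2Sketch(int n) {
        if (n <= 1) return 1;
        int highest = Integer.highestOneBit(n);   // e.g. 12 -> 8, 16 -> 16
        return (highest == n) ? n : highest << 1; // e.g. 12 -> 16, 16 -> 16, 17 -> 32
    }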

4 The indexFor method called from put:

    /**
     * Returns index for hash code h. (maps a hash value to a table index)
     */
    static int indexFor(int h, int length) {
        // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
        // A bitwise & is used instead of %: the table length is always a power of two,
        // so length-1 is all 1s in binary; h & (length-1) spreads entries evenly over
        // the table and is cheaper than a modulo.
        return h & (length-1);
    }
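
A small self-contained sketch to convince yourself of the equivalence: for a non-negative hash and a power-of-two length, h & (length - 1) gives the same result as h % length:

    public class IndexForSketch {
        static int indexFor(int h, int length) {
            return h & (length - 1);
        }

        public static void main(String[] args) {
            int length = 16; // power of two, so length - 1 == 0b1111
            for (int h : new int[]{0, 1, 15, 16, 17, 123456}) {
                System.out.println(h + " -> " + indexFor(h, length)
                        + " (h % length = " + (h % length) + ")");
            }
        }
    }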

5 The addEntry method called from put:

    /**
     * Adds a new entry with the specified key, value and hash code to
     * the specified bucket. It is the responsibility of this
     * method to resize the table if appropriate.
     *
     * Subclass overrides this to alter the behavior of put method.
     */
    void addEntry(int hash, K key, V value, int bucketIndex) {
        // resize only when size has reached the threshold AND the target bucket
        // table[bucketIndex] is already occupied (non-null)
        if ((size >= threshold) && (null != table[bucketIndex])) {
            resize(2 * table.length);
            // after a resize, the hash and the bucket index must be recomputed
            hash = (null != key) ? hash(key) : 0;
            bucketIndex = indexFor(hash, table.length);
        }
        // insert the entry into the hash table
        createEntry(hash, key, value, bucketIndex);
    }
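
Restating that check as a standalone predicate (a hypothetical helper, not JDK code): reaching the threshold alone is not enough; in this version an insert into an empty bucket never triggers a resize.

    // Hypothetical restatement of the resize check in addEntry.
    static boolean shouldResize(int size, int threshold, Object bucketHead) {
        // resize only when size has reached the threshold AND the target bucket is occupied
        return size >= threshold && bucketHead != null;
    }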

6 The createEntry method called from addEntry:

    /**
     * Like addEntry except that this version is used when creating entries
     * as part of Map construction or "pseudo-construction" (cloning,
     * deserialization). This version needn't worry about resizing the table.
     *
     * Subclass overrides this to alter the behavior of HashMap(Map),
     * clone, and readObject.
     */
    void createEntry(int hash, K key, V value, int bucketIndex) {
        // grab the current head of the bucket
        Entry<K,V> e = table[bucketIndex];
        // head insertion: the linking happens in the Entry constructor, and the
        // new node then becomes table[bucketIndex]
        table[bucketIndex] = new Entry<>(hash, key, value, e);
        size++; // bump the size
    }

The Entry constructor:

    /**
     * @param h the hash value
     * @param k the key
     * @param v the value
     * @param n the existing chain (the old head of the bucket)
     */
    Entry(int h, K k, V v, Entry<K,V> n) {
        value = v;
        // hang the existing chain after this new node
        next = n;
        key = k;
        hash = h;
    }
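
A standalone sketch of the head-insertion pattern used by createEntry together with this constructor; Node is a made-up stand-in for the JDK's Entry:

    public class HeadInsertSketch {
        static class Node {
            final String key;
            Node next;
            Node(String key, Node next) { this.key = key; this.next = next; }
        }

        public static void main(String[] args) {
            Node bucket = null;              // an empty bucket: table[i] == null
            bucket = new Node("A", bucket);  // bucket: A
            bucket = new Node("B", bucket);  // bucket: B -> A (new node becomes the head)
            bucket = new Node("C", bucket);  // bucket: C -> B -> A
            for (Node n = bucket; n != null; n = n.next) {
                System.out.print(n.key + " "); // prints: C B A
            }
        }
    }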

7 Next, the resize mechanism in java.util.HashMap#addEntry:

During a resize, the hash value and the position in the hash table have to be recomputed.

    void addEntry(int hash, K key, V value, int bucketIndex) {
        // resize when the resize conditions are met
        if ((size >= threshold) && (null != table[bucketIndex])) {
            // grow the table to twice its current length
            resize(2 * table.length);
            // recompute the hash value
            hash = (null != key) ? hash(key) : 0;
            // recompute the bucket index in the new table
            bucketIndex = indexFor(hash, table.length);
        }
        createEntry(hash, key, value, bucketIndex);
    }

Next, the java.util.HashMap#resize method:

    /**
     * Rehashes the contents of this map into a new array with a
     * larger capacity. This method is called automatically when the
     * number of keys in this map reaches its threshold.
     *
     * If current capacity is MAXIMUM_CAPACITY, this method does not
     * resize the map, but sets threshold to Integer.MAX_VALUE.
     * This has the effect of preventing future calls.
     *
     * @param newCapacity the new capacity, MUST be a power of two;
     *        must be greater than current capacity unless current
     *        capacity is MAXIMUM_CAPACITY (in which case value
     *        is irrelevant).
     */
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) { // already at the maximum capacity: only raise the threshold
            threshold = Integer.MAX_VALUE;
            return;
        }
        // allocate the new table
        Entry[] newTable = new Entry[newCapacity];
        // move the entries over; initHashSeedAsNeeded decides whether to rehash
        transfer(newTable, initHashSeedAsNeeded(newCapacity));
        // install the new table
        table = newTable;
        // update the threshold
        threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
    }
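
Since each resize doubles the capacity and the last line recomputes the threshold, the two values grow in lockstep. A small sketch (illustration only) of that progression under the default load factor:

    public class ThresholdProgressionSketch {
        public static void main(String[] args) {
            float loadFactor = 0.75f;
            for (int capacity = 16; capacity <= 128; capacity *= 2) {
                // mirrors the last line of resize()
                int threshold = (int) Math.min(capacity * loadFactor, (1 << 30) + 1);
                System.out.println("capacity=" + capacity + ", threshold=" + threshold);
                // prints: 16/12, 32/24, 64/48, 128/96
            }
        }
    }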

On common questions about HashMap under concurrency: using HashMap in a multi-threaded environment is inherently wrong and risky, but interviews love to ask about it anyway, so here are the common questions as I remember them:

1: While a resize is in progress, can other threads still perform inserts? (Under concurrency this can also drive HashMap into an infinite loop, which we set aside for now.)

Answer: First of all, HashMap is not a thread-safe container, so using it from multiple threads is wrong to begin with. That said, an insert can physically happen during a resize; it is just unsafe. For example:

Suppose the main thread is inside the transfer method, copying entries:

    /**
     * Transfers all entries from current table to newTable.
     */
    void transfer(Entry[] newTable, boolean rehash) {
        int newCapacity = newTable.length;
        for (Entry<K,V> e : table) {
            while (null != e) {
                Entry<K,V> next = e.next;
                if (rehash) {
                    e.hash = null == e.key ? 0 : hash(e.key);
                }
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            }
        }
    }

At that point another thread can still add new elements, and they go into table. If that other thread needs to resize, it can do so and then assigns its own new array to table. Meanwhile the main thread's transfer is still moving the elements of table into its newTable. Note what happens inside the main thread's transfer:

If newTable has a size of 32 when the main thread enters transfer, but the other thread's inserts have meanwhile grown table to, say, 128 elements, then all 128 elements get packed into a newTable of size 32 (no further resize happens inside transfer). That badly degrades HashMap's performance.

You can verify this by debugging under a multi-threaded setup (IntelliJ IDEA's debugger is excellent here, especially the Evaluate Expression feature).

2: Do the elements have to be re-hashed during a resize?

It depends: resize calls initHashSeedAsNeeded to decide (the decision logic is covered in question 3 below).

    /**
     * Rehashes the contents of this map into a new array with a
     * larger capacity. This method is called automatically when the
     * number of keys in this map reaches its threshold.
     *
     * If current capacity is MAXIMUM_CAPACITY, this method does not
     * resize the map, but sets threshold to Integer.MAX_VALUE.
     * This has the effect of preventing future calls.
     *
     * @param newCapacity the new capacity, MUST be a power of two;
     *        must be greater than current capacity unless current
     *        capacity is MAXIMUM_CAPACITY (in which case value
     *        is irrelevant).
     */
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }
        Entry[] newTable = new Entry[newCapacity];
        // initHashSeedAsNeeded decides whether a re-hash is needed
        transfer(newTable, initHashSeedAsNeeded(newCapacity));
        table = newTable;
        threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
    }

Then the entries are transferred:

    /**
     * Transfers all entries from current table to newTable.
     */
    void transfer(Entry[] newTable, boolean rehash) {
        int newCapacity = newTable.length;
        // In a multi-threaded environment, if other threads keep growing table,
        // newTable cannot grow here, which hurts performance; a later put can
        // trigger another resize, though.
        for (Entry<K,V> e : table) {
            while (null != e) {
                Entry<K,V> next = e.next;
                if (rehash) { // re-hash only when required
                    e.hash = null == e.key ? 0 : hash(e.key);
                }
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            }
        }
    }

3: How is the decision to re-hash made?

    /**
     * Initialize the hashing mask value. We defer initialization until we
     * really need it.
     */
    final boolean initHashSeedAsNeeded(int capacity) {
        // hashSeed is a seed used to reduce hash collisions; its initial value is 0
        boolean currentAltHashing = hashSeed != 0;
        // ALTERNATIVE_HASHING_THRESHOLD: alternative hashing is used once the map's
        // capacity reaches this value (and the VM has booted)
        boolean useAltHashing = sun.misc.VM.isBooted() &&
                (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
        // XOR: switch (and therefore rehash) exactly when the two flags differ
        boolean switching = currentAltHashing ^ useAltHashing;
        if (switching) {
            // update the hash seed
            hashSeed = useAltHashing
                    ? sun.misc.Hashing.randomHashSeed(this)
                    : 0;
        }
        return switching;
    }
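
The XOR above means "switch exactly when the two flags disagree". A tiny hypothetical helper spelling out the four cases:

    // Hypothetical restatement of the switching decision.
    static boolean shouldSwitch(boolean currentAltHashing, boolean useAltHashing) {
        // false^false -> false, true^true -> false (already in the right mode: no rehash)
        // false^true  -> true,  true^false -> true (mode must change: rehash)
        return currentAltHashing ^ useAltHashing;
    }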

4: How can put operations on a HashMap in a multi-threaded environment cause an infinite loop?

When the infinite loop arises:

Two threads need to resize at the same time and work on the same bucket (table[i]). Thread 1 has just read its local e and next references when it gets suspended. Thread 2 then gets the CPU and completes the transfer of that bucket (for the problem to show up, the elements that thread 1's e and next point to must land in the same bucket of the new table, and after thread 2's transfer their order has been reversed). When thread 1 resumes, it walks straight into an infinite loop. See: https://coolshell.cn/articles/9606.html

Root cause: the concurrent operations link two nodes of the same bucket into a cycle, so the transfer over that bucket can never finish and spins forever.
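
To make the root cause concrete: head insertion in transfer reverses each bucket's chain. The single-threaded sketch below (Node is a made-up stand-in for Entry) shows that reversal; the linked article builds the two-thread cycle on exactly these reversed links.

    public class TransferReversalSketch {
        static class Node {
            final String key;
            Node next;
            Node(String key, Node next) { this.key = key; this.next = next; }
        }

        public static void main(String[] args) {
            // old bucket: A -> B -> C
            Node oldBucket = new Node("A", new Node("B", new Node("C", null)));

            // transfer with head insertion, assuming all three land in the same new bucket
            Node newBucket = null;
            Node e = oldBucket;
            while (e != null) {
                Node next = e.next;   // remember the rest of the old chain
                e.next = newBucket;   // head-insert into the new bucket
                newBucket = e;
                e = next;
            }

            for (Node n = newBucket; n != null; n = n.next) {
                System.out.print(n.key + " "); // prints: C B A (order reversed)
            }
            // If another thread replays this same loop on the same nodes after the
            // first thread has already reversed them, the two passes can link a
            // node back to its predecessor, forming the cycle.
        }
    }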
