Opening with a single diagram; the write-up below is my own retelling.

HashMap vs. Hashtable vs. HashSet: A Source-Code Analysis of the Differences
1. HashMap and Hashtable are essentially a combination of an array and linked lists.
2. HashSet is implemented internally on top of a HashMap: the set's elements become the keys of the backing map, and every key maps to the same shared dummy value:

private static final Object PRESENT = new Object();

If a key that already exists in the HashMap is put again, no new entry is added; the value of the existing key is updated instead. This is exactly what guarantees a HashSet cannot store duplicate elements:

public boolean add(E e) {
    return map.put(e, PRESENT) == null;
}
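A quick check of this contract (a minimal sketch; the class name is just for illustration): add returns false when the backing map already contained the key, so duplicates are silently rejected.

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetDedupDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();

        // First insertion: map.put returns null, so add returns true.
        System.out.println(set.add("a"));  // true
        // Duplicate: map.put returns the old PRESENT value, so add returns false.
        System.out.println(set.add("a"));  // false

        System.out.println(set.size());    // 1
    }
}
```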

3. Their resizing strategies also differ: Hashtable grows to 2 * old + 1 (shift left one bit, then add 1), while HashMap grows to 2 * old; HashSet, being backed by a HashMap, likewise grows to 2 * old.
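A small sketch of the two growth rules, starting from each class's default initial capacity (capacity arithmetic only, not the real resizing code):

```java
public class GrowthRules {
    public static void main(String[] args) {
        int hashtableCap = 11;  // Hashtable's default initial capacity
        int hashMapCap = 16;    // HashMap's default initial capacity (1 << 4)

        for (int i = 0; i < 3; i++) {
            hashtableCap = (hashtableCap << 1) + 1;  // 2 * old + 1
            hashMapCap = hashMapCap << 1;            // 2 * old
        }
        System.out.println(hashtableCap);  // 11 -> 23 -> 47 -> 95
        System.out.println(hashMapCap);    // 16 -> 32 -> 64 -> 128
    }
}
```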
HashMap
The put method:

public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

hash() spreads the high 16 bits of the hashCode into the low 16 bits. Note that when h is less than 65536 (2^16) the high bits are all zero, so h >>> 16 is 0 and the XOR leaves h unchanged:

static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
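To see why the spreading matters, consider a table of the default size 16: the bucket index is (n - 1) & hash, which only looks at the low 4 bits. Without the XOR, hashCodes that differ only in their high bits would all collide (class and variable names below are just for illustration):

```java
public class HashSpreadDemo {
    // Same spreading as HashMap.hash(), minus the null check.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;       // default table size
        int a = 0x10000;  // two hashCodes that differ only in high bits
        int b = 0x20000;

        // Without spreading, both land in bucket 0.
        System.out.println((n - 1) & a);          // 0
        System.out.println((n - 1) & b);          // 0

        // With spreading, the high bits influence the bucket index.
        System.out.println((n - 1) & spread(a));  // 1
        System.out.println((n - 1) & spread(b));  // 2
    }
}
```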
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        // collision: walk the chain (or tree) and append a new node, or --
        // if the key already exists -- replace its value and return the
        // old value early (elided here)
    }
    ++modCount;
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    // ... allocate the new table and rehash every entry into it (elided) ...
}

The default resizing mechanism: HashMap's default capacity is 16 (1 shifted left four bits) and its load factor loadFactor is 0.75, giving a computed threshold of 12 (0.75 * 16 = 12). Once an insertion pushes size past 12, resize is called to create a new table of capacity 32 (16 * 2, the old capacity shifted left one bit), and the threshold becomes 24 (0.75 * 32 = 24). After every insertion, size is compared against threshold, and resizing must recompute the position of every element in the new array, which is a very expensive operation. So if we already know how many elements the HashMap will hold, presizing it accordingly can effectively improve its performance.
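A sketch of that presizing advice: to hold a known number of entries without triggering a resize, the initial capacity must satisfy capacity * 0.75 >= expected, i.e. expected / 0.75 rounded up (HashMap then rounds the capacity up to the next power of two internally). The names below are just for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        int expected = 1000;

        // capacity * 0.75 >= expected  =>  capacity >= expected / 0.75
        int initialCapacity = (int) (expected / 0.75f) + 1;
        System.out.println(initialCapacity);  // 1334

        // All `expected` entries fit without an intermediate resize.
        Map<Integer, Integer> map = new HashMap<>(initialCapacity);
        for (int i = 0; i < expected; i++) {
            map.put(i, i);
        }
        System.out.println(map.size());  // 1000
    }
}
```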
The idea:
1. Compute the hash from the key, then derive the bucket index into table[].
2. Colliding entries are chained at that index (in JDK 7 a new entry was inserted at the head of the chain; in JDK 8, whose code is shown above, it is appended at the tail, or into a red-black tree once the chain grows long). The resulting structure is a horizontal array table[] with vertical chains of entries (Entry<K,V> in JDK 7, Node<K,V> in JDK 8) that share the same bucket.
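The structure described above can be sketched with a toy map (a hypothetical simplification, not the JDK code): an array of buckets, each holding a singly linked chain; equal keys update in place, new keys are inserted at the chain head for simplicity (JDK 7 style), and resizing is omitted.

```java
public class ToyMap<K, V> {
    static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    private final Node<K, V>[] table = (Node<K, V>[]) new Node[16];

    public V put(K key, V value) {
        int i = (table.length - 1) & key.hashCode();  // bucket index
        for (Node<K, V> e = table[i]; e != null; e = e.next) {
            if (e.key.equals(key)) {                  // key exists: update in place
                V old = e.value;
                e.value = value;
                return old;
            }
        }
        Node<K, V> node = new Node<>(key, value);     // new key: insert at chain head
        node.next = table[i];
        table[i] = node;
        return null;
    }

    public V get(K key) {
        int i = (table.length - 1) & key.hashCode();
        for (Node<K, V> e = table[i]; e != null; e = e.next) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }

    public static void main(String[] args) {
        ToyMap<String, Integer> m = new ToyMap<>();
        System.out.println(m.put("a", 1));  // null (new key)
        System.out.println(m.put("a", 2));  // 1 (old value, updated in place)
        System.out.println(m.get("a"));     // 2
    }
}
```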

The get method:

public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}

The idea:
1. Keep the structure in mind: a horizontal array plus vertical chains. First compute the hash from the key to get the array index.
2. At that index, walk the chain with e = e.next (or descend the tree via getTreeNode) and return the value of the node whose key matches.
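Step 2 is why both hashCode and equals matter: keys that collide into the same bucket are told apart by equals while walking the chain. A small demo with a deliberately constant hashCode (the key class is hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Every instance hashes to the same bucket; only equals() tells them apart.
    static final class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        map.put(new BadKey("a"), 1);
        map.put(new BadKey("b"), 2);  // same bucket as "a", chained behind it

        System.out.println(map.get(new BadKey("a")));  // 1
        System.out.println(map.get(new BadKey("b")));  // 2
        System.out.println(map.size());                // 2
    }
}
```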

The remove method:

public V remove(Object key) {
    Node<K,V> e;
    return (e = removeNode(hash(key), key, null, false, true)) == null ?
        null : e.value;
}

final Node<K,V> removeNode(int hash, Object key, Object value,
                           boolean matchValue, boolean movable) {
    Node<K,V>[] tab; Node<K,V> p; int n, index;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (p = tab[index = (n - 1) & hash]) != null) {
        Node<K,V> node = null, e; K k; V v;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            node = p;
        else if ((e = p.next) != null) {
            if (p instanceof TreeNode)
                node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
            else {
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key ||
                         (key != null && key.equals(k)))) {
                        node = e;
                        break;
                    }
                    p = e;  // p trails e as the predecessor
                } while ((e = e.next) != null);
            }
        }
        if (node != null && (!matchValue || (v = node.value) == value ||
                             (value != null && value.equals(v)))) {
            if (node instanceof TreeNode)
                ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
            else if (node == p)           // node is the bucket head
                tab[index] = node.next;
            else                          // unlink from the middle of the chain
                p.next = node.next;
            ++modCount;
            --size;
            afterNodeRemoval(node);
            return node;
        }
    }
    return null;
}

The idea:
First locate the node, then unlink it much like a linked list's remove: redirect the predecessor's next pointer (or tab[index] itself when the node is the bucket head).
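From the caller's side, remove returns the value of the unlinked node, or null when the key was absent:

```java
import java.util.HashMap;
import java.util.Map;

public class RemoveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);

        System.out.println(map.remove("a"));       // 1: value of the removed node
        System.out.println(map.remove("a"));       // null: key no longer present
        System.out.println(map.containsKey("a"));  // false
    }
}
```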

Note: HashMap, Hashtable, and HashSet all implement the java.io.Serializable interface, and in each of them the array that stores the data is declared transient; the writeObject and readObject methods then perform the actual writing and reading of the entries during serialization.
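A round-trip sketch of that behavior: because the table is transient, writeObject serializes the size and the entries rather than the raw array, and readObject re-inserts the entries, rebuilding the table on the receiving side. The class name is just for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        HashMap<String, Integer> original = new HashMap<>();
        original.put("a", 1);

        // writeObject: writes the entries, not the transient table itself.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // readObject: re-inserts the entries into a freshly built table.
        HashMap<?, ?> copy;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            copy = (HashMap<?, ?>) in.readObject();
        }
        System.out.println(copy);                   // {a=1}
        System.out.println(copy.equals(original));  // true
    }
}
```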