Hashtable

public class Hashtable<K,V>
    extends Dictionary<K,V>
    implements Map<K,V>, Cloneable, java.io.Serializable
  • This class implements a hash table, which maps keys to values. Any non-null object can be used as a key or as a value.
  • fail-fast: if the Hashtable is structurally modified at any time after the iterator is created, in any way except through the iterator’s own remove method, the iterator will throw a {@link ConcurrentModificationException}.
    The Enumerations returned by Hashtable's keys and elements methods are not fail-fast.
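This difference is easy to observe. The sketch below (a hypothetical `FailFastDemo` class, not part of the JDK) shows the iterator from `keySet()` failing fast after a structural modification, while the legacy `Enumeration` from `keys()` tolerates it silently:

```java
import java.util.ConcurrentModificationException;
import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Iterator;

public class FailFastDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> t = new Hashtable<>();
        t.put("a", 1);
        t.put("b", 2);

        // The iterator from keySet() is fail-fast
        Iterator<String> it = t.keySet().iterator();
        t.put("c", 3); // structural modification after the iterator was created
        try {
            it.next();
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast iterator threw");
        }

        // The legacy Enumeration is NOT fail-fast
        Enumeration<String> keys = t.keys();
        t.put("d", 4); // another structural modification
        while (keys.hasMoreElements()) {
            keys.nextElement(); // no exception despite the modification
        }
    }
}
```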

Unlike the new collection implementations, {@code Hashtable} is synchronized.

  • If a thread-safe implementation is not needed, it is recommended to use {@link HashMap} in place of {@code Hashtable}.
  • If a thread-safe highly-concurrent implementation is desired, then it is recommended to use {@link java.util.concurrent.ConcurrentHashMap} in place of {@code Hashtable}.
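The three choices can be put side by side. A minimal sketch (the `MapChoice` class name is my own, not from the JDK):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapChoice {
    public static void main(String[] args) {
        // Single-threaded: plain HashMap (unlike Hashtable, it allows
        // one null key and null values)
        Map<String, Integer> plain = new HashMap<>();
        plain.put(null, 0); // legal for HashMap, would NPE on Hashtable

        // Legacy synchronized map: every public method locks the whole table
        Map<String, Integer> legacy = new Hashtable<>();

        // Modern thread-safe choices
        Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();

        concurrent.put("a", 1);
        System.out.println(concurrent.get("a")); // 1
    }
}
```

`ConcurrentHashMap` avoids the single table-wide lock, so readers never block and writers contend only per bin, which is why it is the usual replacement under high concurrency.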

Dictionary

public abstract class Dictionary<K,V>

The Dictionary class is the abstract parent of any class, such as Hashtable, which maps keys to values.
Any non-null object can be used as a key and as a value.
NOTE: This class is obsolete. New implementations should implement the Map interface, rather than extending this class.

public abstract class Dictionary<K, V> {
    public Dictionary() {
    }

    public abstract int size();

    public abstract boolean isEmpty();

    public abstract Enumeration<K> keys();

    public abstract Enumeration<V> elements();

    public abstract V get(Object key);

    public abstract V put(K key, V value);

    public abstract V remove(Object key);
}

Hashtable

An instance of Hashtable has two parameters that affect its performance: initial capacity and load factor.

  • The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created.
    Note that the hash table is open: in the case of a “hash collision”, a single bucket stores multiple entries, which must be searched sequentially.
  • The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased.

If many entries are to be made into a Hashtable, creating it with a sufficiently large capacity may allow the entries to be inserted more efficiently than letting it perform automatic rehashing as needed to grow the table.
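One way to apply this advice is to size the table so the planned number of entries never crosses the threshold. A sketch (the `Presize` class and the 10,000-entry figure are my own illustration):

```java
import java.util.Hashtable;

public class Presize {
    public static void main(String[] args) {
        int expected = 10_000;
        float loadFactor = 0.75f;

        // threshold = capacity * loadFactor, so choosing
        // capacity >= expected / loadFactor keeps the count
        // below the threshold and avoids automatic rehashing.
        int capacity = (int) Math.ceil(expected / loadFactor);

        Hashtable<Integer, String> table = new Hashtable<>(capacity, loadFactor);
        for (int i = 0; i < expected; i++) {
            table.put(i, "v" + i);
        }
        System.out.println(table.size()); // 10000
    }
}
```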

Constructors

private transient Entry<?,?>[] table;

public Hashtable(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal Capacity: " + initialCapacity);
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal Load: " + loadFactor);

    if (initialCapacity == 0)
        initialCapacity = 1;
    this.loadFactor = loadFactor;
    table = new Entry<?,?>[initialCapacity];
    threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
}

new Hashtable(1000) uses the requested capacity as-is, so the table has exactly 1000 buckets; new HashMap(1000) rounds the capacity up to the next power of two, giving a capacity of 1024.
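The power-of-two rounding can be reproduced with the bit trick HashMap uses internally. A sketch (the `SizingDemo` class is my own; `tableSizeFor` mirrors the idea of `HashMap.tableSizeFor`, minus its overflow guards):

```java
public class SizingDemo {
    // Round up to the next power of two, as HashMap does for its capacity.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        // Smear the highest set bit into every lower position...
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        // ...so n+1 is the next power of two
        return n + 1;
    }

    public static void main(String[] args) {
        System.out.println(1000);              // Hashtable keeps 1000
        System.out.println(tableSizeFor(1000)); // HashMap rounds to 1024
    }
}
```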

Contains

// Hashtable bucket collision list entry
private static class Entry<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    Entry<K,V> next;

    protected Entry(int hash, K key, V value, Entry<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}

// the same linked-node structure that HashMap uses for its buckets
public synchronized boolean containsKey(Object key) {
    Entry<?,?> tab[] = table;
    int hash = key.hashCode();
    int index = (hash & 0x7FFFFFFF) % tab.length; // bucket index
    for (Entry<?,?> e = tab[index]; e != null; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            return true;
        }
    }
    return false;
}
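The `(hash & 0x7FFFFFFF) % tab.length` step deserves a note: `hashCode()` may be negative, and Java's `%` keeps the sign of the dividend, so a plain `hash % length` could yield a negative array index. Masking off the sign bit first guarantees a non-negative result. A sketch (the `IndexDemo` class and the sample values are my own):

```java
public class IndexDemo {
    public static void main(String[] args) {
        int length = 11;     // an odd table length, like Hashtable's default
        int hash = -123456;  // a negative hashCode

        int naive = hash % length;                    // -3: would crash as an index
        int index = (hash & 0x7FFFFFFF) % length;     // sign bit cleared first

        System.out.println(naive); // -3
        System.out.println(index); // 10
    }
}
```

HashMap instead relies on power-of-two capacities and computes `(n - 1) & hash`, which is both branch-free and automatically non-negative; Hashtable's odd capacities force it to use `%` with the mask.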

get & put

public synchronized V get(Object key) {
    Entry<?,?> tab[] = table;
    int hash = key.hashCode();
    int index = (hash & 0x7FFFFFFF) % tab.length; // bucket index for this key
    for (Entry<?,?> e = tab[index]; e != null; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) { // walk the list for a match
            return (V)e.value;
        }
    }
    return null;
}

public synchronized V put(K key, V value) {
    // Make sure the value is not null
    if (value == null) {
        throw new NullPointerException();
    }

    // Makes sure the key is not already in the hashtable.
    Entry<?,?> tab[] = table;
    int hash = key.hashCode();
    int index = (hash & 0x7FFFFFFF) % tab.length;
    @SuppressWarnings("unchecked")
    Entry<K,V> entry = (Entry<K,V>)tab[index];
    for (; entry != null; entry = entry.next) {
        if ((entry.hash == hash) && entry.key.equals(key)) {
            V old = entry.value;
            entry.value = value;
            return old;
        }
    }

    addEntry(hash, key, value, index);
    return null;
}

private void addEntry(int hash, K key, V value, int index) {
    modCount++; // structural modification
    Entry<?,?> tab[] = table;
    if (count >= threshold) {
        // Rehash the table if the threshold is exceeded
        rehash();

        tab = table;
        hash = key.hashCode();
        index = (hash & 0x7FFFFFFF) % tab.length; // recompute against the new table
    }

    // Creates the new entry at the head of the bucket's list.
    @SuppressWarnings("unchecked")
    Entry<K,V> e = (Entry<K,V>) tab[index];
    tab[index] = new Entry<>(hash, key, value, e);
    count++;
}
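The observable contract of `put` follows directly from this code: inserting a new key returns `null`, replacing an existing key returns the old value, and a `null` value (or a `null` key, via `key.hashCode()`) throws `NullPointerException`. A sketch (the `PutDemo` class is my own):

```java
import java.util.Hashtable;

public class PutDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> t = new Hashtable<>();

        // First put of a key returns null...
        System.out.println(t.put("a", 1)); // null
        // ...replacing an existing key returns the previous value
        System.out.println(t.put("a", 2)); // 1

        // Unlike HashMap, null values are rejected up front
        try {
            t.put("b", null);
        } catch (NullPointerException expected) {
            System.out.println("null value rejected");
        }
    }
}
```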

rehash

Increases the capacity of and internally reorganizes this hashtable,
in order to accommodate and access its entries more efficiently.
This method is called automatically when the number of keys in the hashtable
exceeds this hashtable’s capacity and load factor.

@SuppressWarnings("unchecked")
protected void rehash() {
    int oldCapacity = table.length;
    Entry<?,?>[] oldMap = table;

    // overflow-conscious code
    int newCapacity = (oldCapacity << 1) + 1; // new capacity: 2x + 1
    if (newCapacity - MAX_ARRAY_SIZE > 0) {   // may be too large
        if (oldCapacity == MAX_ARRAY_SIZE)
            // Keep running with MAX_ARRAY_SIZE buckets
            return;
        newCapacity = MAX_ARRAY_SIZE;
    }
    Entry<?,?>[] newMap = new Entry<?,?>[newCapacity];

    modCount++; // structural modification
    threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
    table = newMap;

    for (int i = oldCapacity; i-- > 0; ) {
        for (Entry<K,V> old = (Entry<K,V>)oldMap[i]; old != null; ) {
            Entry<K,V> e = old;
            old = old.next;

            int index = (e.hash & 0x7FFFFFFF) % newCapacity; // recompute the bucket index
            e.next = (Entry<K,V>)newMap[index];
            newMap[index] = e;
        }
    }
}
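Starting from the default capacity of 11, the `2x + 1` rule produces the capacity sequence 11, 23, 47, 95, ... staying odd on every step, which helps the `% length` bucket calculation spread poorly distributed hash codes. A sketch of the arithmetic (the `GrowthDemo` class is my own):

```java
public class GrowthDemo {
    public static void main(String[] args) {
        int capacity = 11; // Hashtable's default initial capacity
        for (int step = 0; step < 4; step++) {
            System.out.print(capacity + " ");
            capacity = (capacity << 1) + 1; // rehash: double and add one
        }
        // prints: 11 23 47 95
    }
}
```

Contrast this with HashMap, which doubles to the next power of two and relies on extra bit-mixing of the hash instead of an odd modulus.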

Serializable (Write/Read)

Just like HashMap, Hashtable writes its entries out by hand in writeObject rather than serializing the bucket array directly.

private void writeObject(java.io.ObjectOutputStream s) throws IOException {
    Entry<Object, Object> entryStack = null;

    synchronized (this) {
        // Write out the threshold and loadFactor
        s.defaultWriteObject();

        // Write out the length and count of elements
        s.writeInt(table.length);
        s.writeInt(count);

        // Stack copies of the entries in the table
        for (int index = 0; index < table.length; index++) {
            Entry<?,?> entry = table[index];

            while (entry != null) {
                entryStack =
                    new Entry<>(0, entry.key, entry.value, entryStack);
                entry = entry.next;
            }
        }
    }

    // Write out the key/value objects from the stacked entries
    while (entryStack != null) {
        s.writeObject(entryStack.key);
        s.writeObject(entryStack.value);
        entryStack = entryStack.next;
    }
}
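From the caller's side, all of this is invisible: a Hashtable round-trips through standard object streams and the copy compares equal to the original. A sketch (the `SerDemo` class is my own):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Hashtable;

public class SerDemo {
    public static void main(String[] args) throws Exception {
        Hashtable<String, Integer> original = new Hashtable<>();
        original.put("a", 1);
        original.put("b", 2);

        // Write the table out (invokes Hashtable.writeObject above)
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // Read it back; entries are re-inserted by readObject
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            Hashtable<String, Integer> copy =
                (Hashtable<String, Integer>) in.readObject();
            System.out.println(copy.equals(original)); // true
        }
    }
}
```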

… …