HTreeMap provides HashMap and HashSet collections for MapDB. It optionally supports entry expiration and can be used as a cache. It is thread-safe and supports parallel writes by using multiple segments, each with a separate ReadWriteLock. ConcurrentHashMap in JDK 7 works in a similar way. The number of segments (also called the concurrency factor) is configurable.
HTreeMap is a segmented Hash Tree. Unlike other HashMaps it does not use a fixed-size Hash Table, and it does not rehash all data when the Hash Table grows. HTreeMap uses an auto-expanding Index Tree, so it never needs a resize. It also occupies less space, since empty hash slots do not consume any space. On the other hand, the tree structure requires more seeks and is slower on access. Its performance degrades with size (TODO at what scale?).
HTreeMap optionally supports entry expiration based on four criteria: maximal map size, maximal storage size, time-to-live since last modification and time-to-live since last access. Expired entries are automatically removed. This feature uses a FIFO queue, and each segment has an independent expiration queue.
HTreeMap has a number of parameters. The most important are the name, which identifies the Map within the DB object, and the serializers, which handle data inside the Map:
HTreeMap<String, Long> map = db.hashMap("name_of_map")
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.LONG)
        .create();

//or shorter form
HTreeMap<String, Long> map2 = db
        .hashMap("some_other_map", Serializer.STRING, Serializer.LONG)
        .create();
It is also possible to skip the serializer definition, but MapDB will then fall back to slower generic serialization; this is not recommended:
HTreeMap map = db
        .hashMap("name_of_map")
        .create();
HTreeMap is recommended for handling large keys/values. In some cases you may want to use compression. It is possible to enable compression store-wide, but that has some overhead. Instead, it is better to apply compression just to a specific key or value serializer, using a serializer wrapper:
HTreeMap<Long, String> map = db.hashMap("map")
        .valueSerializer(
                new SerializerCompressionWrapper(Serializer.STRING))
        .create();
Most hash maps use a 32bit hash generated by Object.hashCode() and check equality with Object.equals(other). However, many classes (such as byte[]) do not implement these correctly. MapDB uses the Key Serializer to generate the Hash Code and to compare keys. For example, byte[] can be used directly as a key in HTreeMap, if Serializer.BYTE_ARRAY is used as the Key Serializer:
HTreeMap<byte[], Long> map = db.hashMap("map")
        .keySerializer(Serializer.BYTE_ARRAY)
        .valueSerializer(Serializer.LONG)
        .create();
Another problem is a weak hashCode() in some classes; it causes collisions and degrades performance. String.hashCode() is weak, but it is part of the specification, so it cannot be changed. HashMap in the JDK implements many workarounds, at the expense of memory and performance overhead. HTreeMap has no such workarounds, and a weak hash would slow it down dramatically. Instead, HTreeMap fixes the root of the problem: Serializer.STRING uses the stronger XXHash, which generates fewer collisions. String.hashCode() is still available, but with a different serializer:
//this will use strong XXHash for Strings
HTreeMap<String, Long> map = db.hashMap("map")
        // by default it uses strong XXHash
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.LONG)
        .create();

//this will use weak `String.hashCode()`
HTreeMap<String, Long> map2 = db.hashMap("map2")
        // use weak String.hashCode()
        .keySerializer(Serializer.STRING_ORIGHASH)
        .valueSerializer(Serializer.LONG)
        .create();
Hash Maps are vulnerable to Hash Collision Attacks. HTreeMap adds a Hash Seed for protection. It is randomly generated when the collection is created and persisted together with its definition. Users can also supply their own Hash Seed:
HTreeMap<String, Long> map = db
        .hashMap("map", Serializer.STRING, Serializer.LONG)
        .hashSeed(111) //force Hash Seed value
        .create();
TODO 64bit Hash Code
TODO custom hash code generator, bit spread, use DataIO.hashInt()
HashMap has parameters such as Initial Capacity, Load Factor etc. MapDB has a different set of parameters to control its access time and maximal size. Those are grouped under the term Map Layout.
Concurrency is implemented by using multiple segments, each with a separate read-write lock. Each concurrent segment is independent: it has its own Size Counter, iterators and Expiration Queues. The number of segments is configurable. Too small a number will cause congestion on concurrent updates; too large a number will increase memory overhead.
HTreeMap uses an Index Tree instead of a growing Object[] for its Hash Table. The Index Tree is a sparse array-like structure which uses a tree hierarchy of arrays. It is sparse, so unused entries do not occupy any space. It does not do rehashing (copy all entries to a bigger array), but it also cannot grow beyond its initial capacity.
HTreeMap layout is controlled by the layout function. It takes three parameters:
- concurrency, the number of segments. The default value is 8; it always rounds up to a power of two.
- maximal node size of the Index Tree Dir Node. The default value is 16; it always rounds up to a power of two. The maximal value is 128 entries.
- number of Levels in the Index Tree; the default value is 4.
The maximal Hash Table Size is calculated as: segment count * node size ^ level count. The default maximal Hash Table Size is 8 * 16^4 = 0.5 million entries. (TODO too low?)
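As a sanity check, the formula can be evaluated directly. The snippet below is plain Java; the variable names are ours for illustration and not part of the MapDB API:

```java
public class LayoutSize {
    public static void main(String[] args) {
        // Maximal Hash Table Size = segments * nodeSize ^ levels
        long segments = 8;   // default concurrency, rounded up to a power of two
        long nodeSize = 16;  // default Dir Node size
        long levels   = 4;   // default number of Index Tree levels
        long maxSize  = segments * (long) Math.pow(nodeSize, levels);
        System.out.println(maxSize); // prints 524288, roughly 0.5 million
    }
}
```

If your MapDB version exposes a layout(concurrency, dirSize, levels) method on the map builder, the same three values are what it expects; check the javadoc of your release for the exact signature.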
If the Hash Table Size is set too low, hash collisions will start to occur once it fills up, and performance will degrade. HTreeMap will still accept new entries even after the Hash Table is full, but at a performance cost.
The 32bit hash imposes an upper limit on the Hash Table Size: 4 billion entries. There is a plan to support 64bit hashing.
Another parameter is the size counter. By default HTreeMap does not keep track of its size, and map.size() performs a linear scan to count all entries. You can enable the size counter, in which case map.size() is instant, but there is some overhead on inserts:
HTreeMap<String, Long> map = db
        .hashMap("map", Serializer.STRING, Serializer.LONG)
        .counterEnable()
        .create();
And finally some sugar. There is the value loader, a function to load a value if a key is not found. The newly created key/value pair is inserted into the map. This way map.get(key) never returns null. This is mainly useful for various generators and caches:
HTreeMap<String,Long> map = db
        .hashMap("map", Serializer.STRING, Serializer.LONG)
        .valueLoader(s -> 1L)
        .create();

//return 1, even if key does not exist
Long one = map.get("Non Existent");

// Value Creator output was added to Map
map.size(); // => 1
Sharded Stores for better concurrency
HTreeMap is split into separate segments. Each segment is independent and does not share any state with other segments. However, they still share the underlying Store, and that affects performance under concurrent load. It is possible to make the segments truly independent by using a separate Store for each segment. That is called a Sharded HTreeMap, and it is created directly by DBMaker:
HTreeMap<String, byte[]> map = DBMaker
        //param is number of Stores (concurrency factor)
        .memoryShardedHashMap(8)
        .keySerializer(Serializer.STRING)
        .valueSerializer(Serializer.BYTE_ARRAY)
        .create();

//DB does not exist, so close map directly
map.close();
A Sharded HTreeMap has similar configuration options to an HTreeMap created by DB. But there is no DB object associated with a Sharded HTreeMap, so in order to close it, one has to invoke the HTreeMap.close() method directly.
HTreeMap offers optional entry expiration when some conditions are met. An entry can expire if:
- it has existed in the map for longer than the expiration period; the period can be measured since creation, last modification or last read access,
- the number of entries in the map would exceed the maximal number,
- the map consumes more disk space or memory than the space limit.
This will set the expiration time since creation, since last update and since last access:
// remove entries 10 minutes after their last modification,
// or 1 minute after last get()
HTreeMap cache = db
        .hashMap("cache")
        .expireAfterUpdate(10, TimeUnit.MINUTES)
        .expireAfterCreate(10, TimeUnit.MINUTES)
        .expireAfterGet(1, TimeUnit.MINUTES)
        .create();
This will create an HTreeMap with a 16GB space limit:

// Off-heap map with max size 16GB
// note the long literal: 16 * 1024*1024*1024 would overflow int
Map cache = db
        .hashMap("map")
        .expireStoreSize(16L * 1024*1024*1024)
        .expireAfterGet()
        .create();
It is also possible to limit the maximal size of a map:
HTreeMap cache = db
        .hashMap("cache")
        .expireMaxSize(128)
        .expireAfterGet()
        .create();
HTreeMap maintains a FIFO Expiration Queue for each segment; eviction traverses the queue and removes the oldest entries. Not all Map entries are placed into the Expiration Queue. For illustration, in this example new entries never expire; an entry is placed into the Expiration Queue only after an update (value change):
HTreeMap cache = db
        .hashMap("cache")
        .expireAfterUpdate(1000)
        .create();
Time-based eviction will always place an entry into the Expiration Queue. But the other expiration criteria (size and space limit) also need a hint for when to place an entry into the Expiration Queue. In the following example, no entry is ever placed into the queue and no entry ever expires:
HTreeMap cache = db
        .hashMap("cache")
        .expireMaxSize(1000)
        .create();
There are three possible triggers which will place an entry into the Expiration Queue: expireAfterCreate(), expireAfterUpdate() and expireAfterGet(). Notice there is no TTL parameter.
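For example, to make a size limit actually evict entries, combine it with one of these triggers. This is a sketch; it assumes that expireAfterCreate() without a TTL only marks entries for eviction once a limit is exceeded:

```java
HTreeMap cache = db
        .hashMap("cache")
        .expireMaxSize(1000)
        //trigger: each new entry is placed into the Expiration Queue
        .expireAfterCreate()
        .create();
```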
Entry expiration is done inside other methods. If you call map.get(), it might evict some entries. But eviction has some overhead, and it would slow down user operations. There is an option to supply HTreeMap with an executor and perform eviction in background threads. This will evict entries in two background threads, with eviction triggered every 10 seconds:
DB db = DBMaker.memoryDB().make();

ScheduledExecutorService executor =
        Executors.newScheduledThreadPool(2);

HTreeMap cache = db
        .hashMap("cache")
        .expireMaxSize(1000)
        .expireAfterGet()
        .expireExecutor(executor)
        .expireExecutorPeriod(10000)
        .create();

//once we are done, background threads need to be stopped
db.close();
Expiration can be combined with a Sharded HTreeMap for better concurrency. In this case each segment has an independent Store, and that improves scalability under parallel updates:
HTreeMap cache = DBMaker
        .memoryShardedHashMap(16)
        .expireAfterUpdate()
        .expireStoreSize(128*1024*1024)
        .create();
A Sharded HTreeMap should be combined with multiple background threads for eviction. Also, over time the Store becomes fragmented, and eventually space cannot be reclaimed. There is an option to schedule periodic compaction if there is too much free space. Compaction reclaims the free space. Because each Store (segment) is compacted separately, compaction does not affect all running threads:
HTreeMap cache = DBMaker
        .memoryShardedHashMap(16)
        .expireAfterUpdate()
        .expireStoreSize(128*1024*1024)
        //entry expiration in 3 background threads
        .expireExecutor(
                Executors.newScheduledThreadPool(3))
        //trigger Store compaction if 40% of space is free
        .expireCompactThreshold(0.4)
        .create();
HTreeMap supports Modification Listeners, which notify a listener about inserts, updates and removes from the HTreeMap. This makes it possible to link two collections together: usually a faster in-memory one with limited size, and a slower on-disk one with unlimited size. After an entry expires from the in-memory map, it is automatically moved to the on-disk map by a Modification Listener. And a Value Loader loads values back into the in-memory map if they are not found by the map.get() operation.
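A standalone listener can also be registered directly. The sketch below assumes the MapDB 3.x MapModificationListener callback shape (key, old value, new value, expiration flag); consult the javadoc of your release for the exact interface:

```java
import org.mapdb.*;

DB db = DBMaker.memoryDB().make();

HTreeMap<String, Long> map = db
        .hashMap("listened", Serializer.STRING, Serializer.LONG)
        //called on every insert, update and remove
        .modificationListener((key, oldVal, newVal, triggered) -> {
            if (oldVal == null)      System.out.println("insert: " + key);
            else if (newVal == null) System.out.println("remove: " + key);
            else                     System.out.println("update: " + key);
        })
        .create();
```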
To establish a Disk Overflow, use the following code:
DB dbDisk = DBMaker
        .fileDB(file)
        .make();

DB dbMemory = DBMaker
        .memoryDB()
        .make();

// Big map populated with data expired from cache
HTreeMap onDisk = dbDisk
        .hashMap("onDisk")
        .create();

// fast in-memory collection with limited size
HTreeMap inMemory = dbMemory
        .hashMap("inMemory")
        .expireAfterGet(1, TimeUnit.SECONDS)
        //this registers overflow to `onDisk`
        .expireOverflow(onDisk)
        //good idea is to enable background expiration
        .expireExecutor(Executors.newScheduledThreadPool(2))
        .create();
Once the binding is established, every entry expired from the inMemory map will be added to the onDisk map. The overflow applies only to expired entries: map.remove() will instead remove the entry from onDisk as well:
//insert entry manually into both maps for demonstration
inMemory.put("key", "map");

//first remove from inMemory
inMemory.remove("key");
onDisk.get("key"); // -> not found
The other way around: if inMemory.get(key) is called and the value does not exist, the Value Loader will try to find the value in onDisk. If it is found inside onDisk, it is added into inMemory:
onDisk.put(1,"one");   //onDisk has content, inMemory is empty
inMemory.size();       //> 0

// get method will not find value inMemory, and will get value from onDisk
inMemory.get(1);       //> "one"

// inMemory now caches result, it will later expire and move to onDisk
inMemory.size();       //> 1
It is also possible to clear the entire primary map and move all its data to disk:
inMemory.put(1,11);
inMemory.put(2,11);

//expire entire content of inMemory Map
inMemory.clearWithExpire();
TODO expiration counts are approximate. Map size can go slightly over limits for short period of time.
TODO modification listeners