It's a deque, represented internally as a linked list, that can be used concurrently by many threads. It's non-blocking.
It's a deque, but a blocking data structure, i.e. if a thread calls a take operation on it while the collection is empty, the thread is blocked.
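A minimal sketch of that blocking behavior using LinkedBlockingDeque (the class and method names below are my own illustration, not from the notes): the consumer's takeFirst() blocks until a producer thread inserts an element.

```java
import java.util.concurrent.LinkedBlockingDeque;

public class BlockingDequeDemo {
    // Returns the value a consumer receives after waiting for a producer.
    public static int takeAfterDelayedPut() throws InterruptedException {
        LinkedBlockingDeque<Integer> deque = new LinkedBlockingDeque<>();
        Thread producer = new Thread(() -> {
            try {
                Thread.sleep(100);      // deque stays empty for ~100 ms
                deque.putFirst(42);     // wakes up the blocked consumer
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        // takeFirst() blocks here until the producer inserts an element
        int value = deque.takeFirst();
        producer.join();
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(takeAfterDelayedPut()); // prints 42
    }
}
```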
All the elements you want to add to PriorityBlockingQueue have to implement the Comparable interface.
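A short sketch of the Comparable requirement (the Task class here is a hypothetical example element): the queue orders elements by compareTo, so the smallest element comes out first.

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityDemo {
    // Elements must implement Comparable; the head of the queue is the
    // element that compares smallest.
    static class Task implements Comparable<Task> {
        final int priority;
        Task(int priority) { this.priority = priority; }
        @Override public int compareTo(Task other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    public static int headPriority() throws InterruptedException {
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();
        queue.put(new Task(5));
        queue.put(new Task(1));
        queue.put(new Task(3));
        return queue.take().priority; // lowest priority value is taken first
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(headPriority()); // prints 1
    }
}
```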
In this class, you can store elements with an activation date. The methods that return or extract elements of the queue will ignore those elements whose date is in the future. They are invisible to those methods.
To obtain this behavior, the elements you want to store in the DelayQueue class have to implement the Delayed interface. This interface forces you to implement the following two methods:
compareTo(Delayed o) : The Delayed interface extends the Comparable interface. This method will return a value less than zero if the object that is executing the method has a delay smaller than the object passed as a parameter, a value greater than zero if the object that is executing the method has a delay bigger than the object passed as a parameter, and the zero value if both objects have the same delay.
getDelay(TimeUnit unit) : This method has to return the time remaining until the activation date, in the units specified by the unit parameter. The TimeUnit class is an enumeration with the following constants: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS, and SECONDS.
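The two methods above can be sketched like this (the Event class is a hypothetical example element): getDelay converts the remaining time into the requested unit, and compareTo orders elements by their remaining delay.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayDemo {
    // An element becomes visible to take()/poll() only after its activation time.
    static class Event implements Delayed {
        final long activationTime; // absolute time in milliseconds
        final String name;
        Event(String name, long delayMs) {
            this.name = name;
            this.activationTime = System.currentTimeMillis() + delayMs;
        }
        @Override public long getDelay(TimeUnit unit) {
            long remainingMs = activationTime - System.currentTimeMillis();
            return unit.convert(remainingMs, TimeUnit.MILLISECONDS);
        }
        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static String firstReady() throws InterruptedException {
        DelayQueue<Event> queue = new DelayQueue<>();
        queue.put(new Event("later", 500));
        queue.put(new Event("sooner", 50));
        // take() blocks until the element with the shortest delay is activated
        return queue.take().name;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(firstReady()); // prints "sooner"
    }
}
```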
Atomic variables were introduced in Java 5 to provide atomic operations on single variables.
Why we need them
When you work with a normal variable, each operation that you implement in Java is transformed into several instructions understandable by the machine when you compile the program. For example, when you assign a value to a variable, you use only one instruction in Java, but when you compile this program, that instruction is transformed into several instructions in the JVM language. This can cause data inconsistency errors when multiple threads share a variable.
How does the magic work behind the scenes
- Get the value of the variable and compute the new value in a local variable,
- and then try to change the old value to the new one.
- If the old value is still the same, the change is made.
- If not, the method begins the operation again.
Atomic variables don't use locks or other synchronization mechanisms to protect access to their values. All their operations are based on the compare-and-set operation. It's guaranteed that several threads can work with an atomic variable at the same time without generating data inconsistency errors, and its performance is better than using a normal variable protected by a synchronization mechanism.
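The retry loop described above can be written by hand against AtomicInteger (the addTen helper is my own illustration; it follows the same pattern that methods like incrementAndGet use internally):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Hand-written compare-and-set loop.
    public static int addTen(AtomicInteger counter) {
        while (true) {
            int oldValue = counter.get();   // get the current value
            int newValue = oldValue + 10;   // compute the result in a local variable
            // If the value is still oldValue, swap it atomically; otherwise retry.
            if (counter.compareAndSet(oldValue, newValue)) {
                return newValue;
            }
        }
    }

    public static int runConcurrently() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> addTen(counter));
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return counter.get(); // always 100: the CAS loop loses no updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runConcurrently()); // prints 100
    }
}
```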
- However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is no support for locking the entire table in a way that prevents all access.
- The table is internally partitioned to try to permit the indicated number of concurrent updates without contention.
- Not only is retrieval non-blocking; updates can also happen concurrently.
- Update operations lock portions of the map; read operations don't lock.
- If one thread starts doing putAll ( hugeCollection ) and another thread repeatedly calls contains ( theSameObjectForAllCalls ) at the same time, then it's more than likely that the second thread gets different results while putAll is still working.
- The Iterator returned may or may not reflect changes that occur in the Map while iterating. Imagine you create the iterator, traverse an entire segment, and go on to the next segment. If, after you move on, the first segment you finished traversing has an add or remove done, you won't see it, and that's fine; it does not violate the API.
- ConcurrentHashMap is designed to optimize retrieval operations. (It supports more readers and fewer writers, a case like a cache; if used where the number of writers exceeds the number of readers, performance will be low.)
- Some map-wide operations, such as size() and isEmpty() may be able to get away without locking the entire map at once (by suitably qualifying the semantics of these operations), but some operations, such as map rehashing (expanding the number of hash buckets and redistributing elements as the map grows) must guarantee exclusive access.
Though this code will work fine with HashMap and Hashtable, it won't work with ConcurrentHashMap, because during a put operation the whole map is not locked, and while one thread is putting a value, another thread's get() call can still return null, which results in one thread overriding the value inserted by the other thread.
You have to use ConcurrentHashMap.putIfAbsent(K key, V value) and pay attention to the return value.
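A minimal sketch of why the return value matters (the class and method names are my own illustration): putIfAbsent performs the check and the insert atomically, and returns the value that was already mapped, or null if our value was installed.

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    // Check-then-act ("if get(k) == null then put") is not atomic across
    // threads; putIfAbsent does both steps atomically.
    public static String firstWriterWins() {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        String prev1 = map.putIfAbsent("key", "first");   // null: our value went in
        String prev2 = map.putIfAbsent("key", "second");  // "first": we lost the race
        // Use the return value to decide which value is actually in the map.
        return prev2 != null ? prev2 : "second";
    }

    public static void main(String[] args) {
        System.out.println(firstWriterWins()); // prints "first"
    }
}
```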
- The Iterator returned by ConcurrentHashMap is weakly consistent, fail-safe, and never throws ConcurrentModificationException.
- ConcurrentHashMap doesn't allow null as key or value.
- You can use ConcurrentHashMap in place of Hashtable, but with caution, as CHM doesn't lock the whole Map.
- During putAll() and clear() operations, a concurrent read may reflect the insertion or deletion of only some entries.
- Why would the concurrencyLevel impact memory then? Memory has been an issue with ConcurrentHashMap since it was released. Each level of concurrency will by default create a single Segment. This segment has a HashEntry table and is also a ReentrantLock (so it carries all the overhead that comes with that as well). Java 8 released CHMv8, which actually solves this issue.
ConcurrentHashMap & Iterators
From Java docs "However, iterators are designed to be used by only one thread at a time."
What does it mean?
That means that each iterator you obtain from a ConcurrentHashMap is designed to be used by a single thread and should not be passed around. This includes the syntactic sugar that the for-each loop provides.
What happens if I try to iterate the map with two threads at the same time? It will work as expected if each of the threads uses its own iterator.
You don't need to use explicit concurrency locks with it.
- @param initialCapacity the initial capacity. The implementation performs internal sizing to accommodate this many elements.
- @param loadFactor the load factor threshold, used to control resizing. Resizing may be performed when the average number of elements per bin exceeds this threshold.
- @param concurrencyLevel the estimated number of concurrently updating threads. The implementation performs internal sizing to try to accommodate this many threads.
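The three parameters above correspond to a ConcurrentHashMap constructor; a quick sketch of using it (the sizing numbers here are arbitrary example values):

```java
import java.util.concurrent.ConcurrentHashMap;

public class TunedMapDemo {
    public static int size() {
        // 64 expected entries, the default 0.75 load factor,
        // and an estimate of 8 concurrently updating threads.
        ConcurrentHashMap<String, Integer> map =
                new ConcurrentHashMap<>(64, 0.75f, 8);
        map.put("a", 1);
        map.put("b", 2);
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(size()); // prints 2
    }
}
```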
The first value isn't overridden just because the second key has the same hashCode.
It will be overridden only if the key is also equal (as determined by equals). If not, both values will be kept in the linked list.
When fetching a key, all nodes with the same hashCode will be compared (using equals) to the provided key until one is equal, and then its value will be returned.
If no key in the map is equal, you'll get null.
The only problem you have if many objects have the same hashCode (or more precisely the same hashCode modulo the size of the internal Entry table) is that the linked list will always be read, which may be slow (and defeats the purpose of any hash table).
What will happen if two different objects have the same hashcode?
Since the hashcode is the same, the bucket location will be the same and a collision will occur in the HashMap. Since HashMap uses a LinkedList to store objects, this entry (an object of Map.Entry comprising key and value) will be stored in the LinkedList. ( a tree in Java 8 )
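The collision behavior above can be demonstrated with a key class that deliberately forces every instance into the same bucket (the Key class is a hypothetical example): equals() still distinguishes the keys, so both entries survive.

```java
import java.util.HashMap;

public class CollisionDemo {
    // Two distinct keys that share a hashCode. equals() still tells them
    // apart, so both entries are kept in the same bucket's list.
    static class Key {
        final String id;
        Key(String id) { this.id = id; }
        @Override public int hashCode() { return 1; }            // forced collision
        @Override public boolean equals(Object o) {
            return o instanceof Key && ((Key) o).id.equals(id);  // real identity
        }
    }

    public static int bothKept() {
        HashMap<Key, String> map = new HashMap<>();
        map.put(new Key("a"), "valueA");
        map.put(new Key("b"), "valueB"); // same bucket, but not overridden
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(bothKept()); // prints 2
    }
}
```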