Atomic snapshot-and-clear for TrieMap?

I want to use a map as a buffer: periodically, I want to process all elements in the map and clear it. The catch is that the map can be updated concurrently.

I was hoping to avoid locks by using a TrieMap. However, I cannot find a way to do so.

Both its iterator and clear methods are atomic. However, they are separate methods. Calling one after the other is not safe, as some elements could be added to the map between the calls and get lost.
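For concreteness, a minimal sketch of that lost-update window (`TrieMap`'s iterator takes an atomic snapshot, but `clear()` is a separate atomic step, so a write can land between them):

```scala
import scala.collection.concurrent.TrieMap

object LostUpdateDemo {
  val buf = TrieMap("a" -> 1)
  val snapshot = buf.iterator.toList // atomic snapshot of the contents at this instant
  buf.put("b", 2)                    // a concurrent writer slips in between the two calls
  buf.clear()                        // wipes "b" -> 2, which the snapshot never saw
}
```

Here `"b" -> 2` is neither in the snapshot nor in the map afterwards: it is simply lost.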

Is there a way to implement an atomic non-locking getIteratorAndClear?

(If not, I wonder whether such an operation could be implemented in TrieMap directly, using its low-level private methods.)

Are locks (using synchronized blocks) the only way to use a map as a concurrent buffer?
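For reference, the lock-based version I am trying to avoid would look roughly like this (a minimal sketch; the class and method names are mine):

```scala
import scala.collection.mutable

// A plain mutable map guarded by a lock; snapshotAndClear swaps the
// whole map out under synchronized, so no element can be lost.
final class SyncBuffer[K, V] {
  private var map = mutable.Map.empty[K, V]

  def put(k: K, v: V): Unit = synchronized { map.update(k, v) }

  def snapshotAndClear(): collection.Map[K, V] = synchronized {
    val old = map
    map = mutable.Map.empty[K, V]
    old
  }
}
```

This is safe but serializes every writer against the periodic processing step, which is exactly the contention I was hoping a lock-free structure would avoid.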

Seems like a plausible suggestion at first glance (although I don’t know the internals of the TrieMap data structure well enough to know whether there might be issues) – you might want to also bring it up in the scala-library-next repo.



Can we assume that the concurrent accesses to the map only add to it, never remove? In that case, you might be able to use an iterator on a ConcurrentHashMap: process each element from values.iterator and use remove on the iterator as they are processed to clear them from the map.
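A sketch of that approach (using `entrySet` rather than `values` so the key is available during processing; entries added after the iterator was created may be missed in this pass and are simply picked up on the next one):

```scala
import java.util.concurrent.ConcurrentHashMap

object DrainDemo {
  val map = new ConcurrentHashMap[String, Int]()
  map.put("a", 1)
  map.put("b", 2)

  val it = map.entrySet().iterator()
  while (it.hasNext) {
    val e = it.next()
    // process e.getKey -> e.getValue here ...
    it.remove() // removes exactly the entry just processed; concurrent adds are untouched
  }
}
```

Because `it.remove()` only deletes the entry the iterator just returned, a value written concurrently under the same key after `next()` could still be removed unseen, which is why the only-adds assumption matters.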

The following looks like a plausible implementation of what you are looking for:

/** UNTESTED, NO WARRANTY **/
@tailrec def snapshotAndClear(): scala.collection.Map[K, V] = {
  // Read the current root descriptor and its main node.
  val r = RDCSS_READ_ROOT()
  val expmain = r.gcasRead(this)
  // CAS a fresh empty root in; on success the old tree is ours alone,
  // so hand it out as a read-only map. On contention, retry.
  if (RDCSS_ROOT(r, expmain, INode.newRootNode[K, V](equality)))
    new TrieMap(r, null, hashing, equality)
  else snapshotAndClear()
}

Summary: read a descriptor of the current root; if it still matches, atomically swap in a new empty root (exactly what .clear() does) and return a read-only copy of the original tree. Crucially, because the swap is a single atomic step, any mutation that happens after it goes to the new tree, so no element is lost. I also don’t know the internals of the TrieMap data structure well enough to know whether there might be issues.