When I communicate with many of the big data servers from apache.org, such as HDFS, Kafka, Solr, etc., the client libraries provided are often driven by Akka or fs2. Why does the community rely on these two frameworks? Doesn't it increase the user's learning cost?
Because most Scala programs (and programmers) are already using Akka, fs2, or Spark, and lately also ZIO. There are other alternatives, but these are the major ones.
Thus, most people are usually looking for something that integrates with the ecosystem they are already using.
Also, those libraries provide a robust answer to handling concurrency, which is one of the biggest concerns in modern applications.
So yeah, it's a bit sad that the ecosystem has a steep learning curve, but the end result is that you get a pretty solid solution.