The world turned upside down?

So today is release day for AMD’s Zen 2 CPUs and Navi graphics cards, and I’m considering buying a new desktop. It looks like AMD is not going to take the performance crown from Nvidia. However, I’m looking to run both Windows and Linux, with Kubuntu as my workhorse operating system, and as the Nvidia open-source drivers for Linux are appalling I’m going to go with an AMD graphics card. I run 3 1920 x 1200 monitors, but I don’t play super-graphics-intensive modern games, so I’m thinking of waiting for the release of the AMD RX 5600 before purchasing. I think that should be good enough for my needs. I’m currently using an HD 7790, which I think is a bit underpowered.

I’m thinking of getting two 500GB Samsung 970 Evos, one for Windows and one for my other Linux distro installs, and a 4TB hard drive for storing films and stuff.

I’ll probably go for 64GB of 3000MHz DDR4, so that leaves the CPU. At the moment my main resource-hog workflow is running IntelliJ with the inner sbt console on repeat compile (~compile), while independently running a repeat build-and-run with mill, on an 18,000-line repository. I’m considering an i7 9700K, a 3800X, a 3900X or possibly even a 3950X. The recent security bugs seem to have killed virtually all the benefits of hyper-threading, so the i9 9900K no longer seems to justify the premium.

Any thoughts appreciated, but particularly as to how many cores IntelliJ / Sublime / VS Code / sbt / mill / Metals can effectively utilise.

The Scala compiler itself is largely single-threaded, although there is a lot of work in progress to change this, so I would expect that whichever current CPU has the best single-core performance is a good bet for compiling Scala code. Unless you use Hydra. Even so, I think the speedup Hydra provides largely depends upon the structure of your codebase: if your code’s module structure makes it hard to compile modules in parallel then you will see a smaller benefit.
sbt will benefit much more from multiple cores, but once again only to the extent that your build can execute in parallel.
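As an aside, sbt’s task engine is tag-based, so you can also cap how many CPU-bound tasks it will run at once. A minimal sketch, assuming sbt 1.x (the numbers are illustrative, not recommendations):

```scala
// build.sbt -- illustrative only: tune sbt's task-level parallelism
Global / concurrentRestrictions := Seq(
  // at most one CPU-bound task per available core
  Tags.limit(Tags.CPU, java.lang.Runtime.getRuntime.availableProcessors()),
  // an overall ceiling across all tasks, whatever their tags
  Tags.limitAll(16)
)
```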
mill doesn’t currently run tasks in parallel (although there is a PR open experimenting with this), so once again single-core performance seems like a bigger concern.
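To make that concrete, here is a hypothetical mill build.sc (module names invented for illustration) whose task graph would permit parallel compilation, even though mill currently walks it sequentially:

```scala
// build.sc -- hypothetical layout, for illustration only
import mill._, scalalib._

object core extends ScalaModule {
  def scalaVersion = "2.13.0"
}

// `api` and `impl` depend only on `core`, so nothing in the task graph
// forces them to compile one after the other...
object api extends ScalaModule {
  def scalaVersion = "2.13.0"
  def moduleDeps = Seq(core)
}

object impl extends ScalaModule {
  def scalaVersion = "2.13.0"
  def moduleDeps = Seq(core)
}
// ...but until the parallelism PR lands, mill still compiles them in
// sequence, which is why single-core speed dominates.
```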
It’s hard to say about IntelliJ. I can’t say that any particular combination of hardware or OS has ever made it feel responsive when editing Scala code, and that includes the monster Xeon desktop with 32GB of RAM I use at work, although that is certainly the least painful experience I’ve had with IntelliJ.

I was a happy Eclipse user; I only switched over to IntelliJ for 2.13. Maybe I need to put more time into experimenting with Metals. I used Visual Studio with C# before that, so I’ve never got used to proper project development in a bare editor.


Most of us don’t specify -optimize in the IDE, since source-level optimized code is near impossible to debug. Unless your Maven (sbt, Ant, Make) build specifies it, your run-time code isn’t optimized either; if it does, Maven’s test phase will execute different code than your IDE does, which isn’t intuitively obvious to the average developer. From my experience not too many developers write FP-style code, so it might not be an issue in practice. I don’t know if sbt or Maven use multiple compiler processes; if they don’t, it would be trivial for them to add that capability, and much easier than making the compiler itself multithreaded. I presume multi-core processors can execute multiple processes as well as multiple threads.
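For concreteness, here is a hedged sketch of that mismatch in an sbt build on Scala 2.12+, where the old -optimize flag has been superseded by the -opt family (the inlining pattern shown is illustrative):

```scala
// build.sbt -- illustrative: the optimizer is enabled for the build only,
// so the IDE, compiling without these flags, runs different bytecode than
// the test phase does
scalacOptions ++= Seq(
  "-opt:l:inline",       // enable the inliner
  "-opt-inline-from:**"  // allow inlining from anywhere on the classpath
)
```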

I know server-level processors can run multiple processes on their separate compute units; Sun’s slow-clock, many-thread designs certainly do, and would be useless otherwise. I don’t know if consumer-level processors allow it, but all server-level processors do. I’d encourage the sbt team to add multi-process compilation; it wouldn’t cost much even on machines that can’t exploit it. The compiler could drive this as well, and since scalac already accepts multiple files on the command line, the work would mostly be in partitioning sources across processes.

If you have an sbt project with submodules that are “dependency siblings”, then it compiles them in parallel.
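A minimal sketch of that layout (module names are invented):

```scala
// build.sbt -- hypothetical sibling modules
lazy val core   = project

// Neither `server` nor `client` depends on the other, so once `core` is
// compiled, sbt is free to compile the two siblings concurrently.
lazy val server = project.dependsOn(core)
lazy val client = project.dependsOn(core)
```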

Mill currently does not do this although there is a PR for parallelism that you can try.

In general a lot of things use multiple threads, just not with the same CPU intensiveness: GC and so on.

I don’t know if IntelliJ’s inspections (error highlighting etc.) are multithreaded (I suspect they are), but you probably want them to be able to run at the same time sbt is compiling.

So my thoughts after the initial reviews are that Intel still holds the single-threaded performance crown, but the 3000 series seems to be pushing Intel’s prices down. I think I’m going to wait for Kubuntu 19.10, particularly as a new desktop seemingly won’t solve IntelliJ’s sluggishness.

Kubuntu / Ubuntu 19.10 should come with Linux kernel 5.3 and possibly a new version of Mesa, allowing proper driver support for the 5000-series graphics cards. It will also give AMD a chance to sort out their drivers and firmware. I’d quite like to have three DisplayPort outputs, and I don’t know if the 5600 will have that, whenever it is released. The extra time will also allow for the release of partner cards for the 5700 with extra fans to reduce heat and noise.

I think I will probably go with Comet Lake, which should be out by then. I don’t think buying CPUs or graphics cards just after release is the best policy, as there are often issues that get sorted out in the initial weeks after release, but Comet Lake is going to be so similar to the Coffee Lake refresh that I don’t think that should be a problem here.