I got this (slightly odd) script working on an M2 Mac.
https://github.com/Quafadas/vecxt/blob/gonative/experiments/src/mlx.scala
Which is more-or-less a translation of
https://github.com/ml-explore/mlx-c/blob/main/examples/example.c
MLX is Apple's NumPy/PyTorch-like framework for Apple silicon.
It goes through Java's Project Panama (all a bit "build from source" - both jextract and mlx-c), but, to my astonishment, it works. Here's the output:
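For anyone unfamiliar with Panama: jextract generates bindings automatically, but underneath it's just downcall handles into the native library. A minimal sketch of that raw mechanism (not the author's code - this calls libc's strlen rather than mlx-c, and assumes Java 22+ with the finalised FFM API and Scala 3):

```scala
import java.lang.foreign.*
import scala.util.Using

@main def strlenDemo(): Unit =
  val linker = Linker.nativeLinker()
  // Build a method handle for the C function `size_t strlen(const char*)`
  val strlen = linker.downcallHandle(
    linker.defaultLookup().find("strlen").get, // symbol from the default (libc) lookup
    FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS)
  )
  // A confined Arena frees the native memory deterministically on close
  Using.resource(Arena.ofConfined()) { arena =>
    val cString = arena.allocateFrom("Hello, MLX!") // copies the string into native memory, NUL-terminated
    val len = strlen.invoke(cString).asInstanceOf[Long]
    println(s"strlen = $len") // 11 characters, excluding the NUL
  }
```

jextract wraps exactly this boilerplate for every function in mlx-c's headers, which is why the translation of example.c ends up fairly mechanical.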
=== Testing MetalDeviceInfo wrapper ===
Info: (applegpu_g14s,17179869184,22906503168,34359738368)
=== String creation with data test ===
Created string data: Hello, MLX!
=== Array creation test ===
Created MLX stream: Stream(Device(cpu, 0), 1)
Created MLX stream: Stream(Device(gpu, 0), 0)
Created MLX arrays:
array([[1, 2, 3],
       [4, 5, 6]], dtype=float32)
array([[5, 2, 7],
       [45, 5.5, 6]], dtype=float32)
Added on CPU:
array([[6, 4, 10],
       [49, 10.5, 12]], dtype=float32)
Added on GPU:
array([[6, 4, 10],
       [49, 10.5, 12]], dtype=float32)
Which is kind of cool, as it shows you can get at your Apple silicon GPU's compute from JVM Scala.
The hypothesis is that Scala 3 could be a cool place to explore MLX:
- Arenas make memory allocation convenient
- opaque type MlxArray = MemorySegment makes it possible to retain a nice, low-runtime-cost, type-safe API
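To make the hypothesis concrete, here's a sketch of what that opaque-type pattern could look like (names like `MlxTypes` and `wrap` are illustrative, not the repo's actual API): the handle gets a distinct compile-time identity, but erases to a bare MemorySegment at runtime, so the type safety costs nothing.

```scala
import java.lang.foreign.{Arena, MemorySegment, ValueLayout}

object MlxTypes:
  // At runtime this IS a MemorySegment; outside this object the two
  // types are unrelated, so handles can't be mixed up.
  opaque type MlxArray = MemorySegment

  object MlxArray:
    def wrap(seg: MemorySegment): MlxArray = seg

  extension (a: MlxArray)
    def segment: MemorySegment = a
    // real ops (add, matmul, ...) would delegate to the
    // jextract-generated mlx-c bindings here

@main def opaqueDemo(): Unit =
  import MlxTypes.*
  val arena = Arena.ofConfined()
  val raw   = arena.allocate(ValueLayout.JAVA_FLOAT, 6) // room for a 2x3 float array
  val arr   = MlxArray.wrap(raw)
  println(arr.segment.byteSize()) // 24 bytes = 6 floats
  arena.close()
```

The nice property is that `arr: MlxArray` won't typecheck where a raw `MemorySegment` (or some other native handle) is expected, yet no wrapper object is ever allocated.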
The point of posting here is to see whether anyone else is interested in contributing to an exploration of this idea - if so, I'd carve what I have out of its current chaos.
It's a niche (Apple silicon) within a niche (people interested in ML) within a niche (those who want to explore something other than Python)... but you never know, maybe there is someone!
As I'm never likely to have a beefy Nvidia card to futz around with, I figured I'd investigate making the most of what I already have...