Tim Lebailly (KU Leuven, Department of Electrical Engineering) is currently testing the GPU partition of LUMI for his project "Spatial-Aware Self-Supervised Learning". His experience with LUMI:
"LUMI is great for Belgium as it allows users to access very large amounts of compute. Currently, I am finishing my allocation on Hortense (Tier-1) in Ghent. To give an example, during the pilot phase on LUMI, I was able to run a single 4-day experiment that consumed a bit more compute than my full 8-month allocation on Hortense! This gives me the opportunity to scale up my research to state-of-the-art neural networks.
Also, because LUMI is so large, queue times are significantly shorter: the jobs you run don't have a system-wide impact, unlike on smaller supercomputers such as Hortense. In that regard, the user experience is really nice.
It's not for the faint-hearted, though. LUMI uses AMD GPUs, for which the online documentation is far more limited than for NVIDIA hardware, so getting your code running is not as straightforward as on Hortense, for instance. As time goes on, user support will improve, and the LUMI support team will make the experience easier for newcomers by providing native installs and containers for common types of software."