Batch Status

Summary

last updated: 23:15:02 05.03.2026

71 active nodes (9 used, 62 free)

6752 hw threads (428 used, 6324 free)

10 running jobs, 29664:00:00 remaining core hours

0 waiting jobs, - waiting core hours

Nodes


Running Jobs (10)

| job | queue | user | #proc | #nodes | ppn | gpn | vmem_req | vmem_used | t_remain | t_req | t_used | started | jobname | hosts |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 130974 | any | dgromm3m | 64 | 1 | 64 | 0 | 64 GB | 49 GB | 20:21:39 | 3:00:00:00 | 2:03:38:21 | 03.03.2026 19:36:41 | start_mpi.sh | wr51 |
| 131006 | any | dgromm3m | 64 | 1 | 64 | 0 | 64 GB | 49 GB | 1:12:57:20 | 3:00:00:00 | 1:11:02:40 | 04.03.2026 12:12:22 | start_mpi.sh | wr50 |
| 131297 | hpc1 | hfataf3m | 64 | 1 | 64 | 1 | 40 GB | 85 GB | 1:15:18:23 | 3:00:00:00 | 1:08:41:37 | 04.03.2026 14:33:25 | psi4_clH | wr53 |
| 131397 | hpc | ahagg2s | 64 | 1 | 64 | 1 | 128 GB | 62 GB | 1:16:07:46 | 3:00:00:00 | 1:07:52:14 | 04.03.2026 15:22:48 | VEP-species | wr54 |
| 131398 | any | dgromm3m | 64 | 1 | 64 | 0 | 64 GB | 57 GB | 1:16:11:44 | 3:00:00:00 | 1:07:48:16 | 04.03.2026 15:26:46 | start_mpi.sh | wr55 |
| 132191 | gpu4 | smoses2s | 8 | 1 | 8 | 1 | 32 GB | 103 GB | 1:23:30:26 | 3:00:00:00 | 1:00:29:34 | 04.03.2026 22:45:28 | ulr2ss_training3_joint_off_bs16_gpu4 | wr22 |
| 132648 | gpu4 | smoses2s | 8 | 1 | 8 | 1 | 32 GB | 172 GB | 2:10:41:44 | 3:00:00:00 | 13:18:16 | 05.03.2026 9:56:46 | ulr2ss_training4_joint_off_bs16_gpu4 | wr20 |
| 132726 | gpu4 | ipolat2s | 4 | 1 | 4 | 1 | 16 GB | 13 GB | 2:18:10:44 | 3:00:00:00 | 5:49:16 | 05.03.2026 17:25:46 | grid_bai | wr21 |
| 132757 | gpu4 | smoses2s | 8 | 1 | 8 | 1 | 32 GB | 136 GB | 2:21:35:29 | 3:00:00:00 | 2:24:31 | 05.03.2026 20:50:31 | ulr2ss_training5_joint_off_bs16_gpu4 | wr20 |
| 132780 | hpc3 | hfataf3m | 64 | 1 | 64 | 1 | 40 GB | 1 GB | 2:23:07:24 | 3:00:00:00 | 52:36 | 05.03.2026 22:22:26 | clH_3 | wr75 |
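The 29664:00:00 figure in the Summary can be reproduced from the table above. A minimal sketch, assuming the page charges each running job its full requested walltime (t_req, 3 days for every job here) rather than its actual remaining time, so the total is sum of #proc × t_req over all running jobs (412 cores × 72 h):

```python
# Sketch: reproduce the "remaining core hours" summary figure.
# Assumption (not confirmed by the page): each running job is charged
# its full requested walltime t_req, i.e. total = sum(#proc * t_req).

jobs = [
    # (#proc, t_req in hours) -- one tuple per running job, from the table
    (64, 72),  # 130974
    (64, 72),  # 131006
    (64, 72),  # 131297
    (64, 72),  # 131397
    (64, 72),  # 131398
    (8, 72),   # 132191
    (8, 72),   # 132648
    (4, 72),   # 132726
    (8, 72),   # 132757
    (64, 72),  # 132780
]

total_core_hours = sum(procs * hours for procs, hours in jobs)
print(f"{total_core_hours}:00:00")  # -> 29664:00:00, matching the Summary
```

Note that summing #proc × t_remain instead would give a noticeably smaller number, so the label "remaining core hours" appears to refer to the remaining reserved allocation, not elapsed-time-adjusted usage.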