Batch Status

Summary

last updated: 12:35:02 25.02.2026

70 active nodes (10 used, 60 free)

6688 hw threads (248 used, 6440 free)

23 running jobs, 17856:00:00 remaining core hours

1 waiting job, 72:00:00 waiting core hours
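The core-hour totals above can be reproduced from the job tables below. A minimal sketch, assuming "core hours" means #proc × requested walltime (t_req is 3:00:00:00 = 72 h for every listed job) — the page does not state this definition, but it matches both totals exactly, which suggests the figure is requested rather than elapsed-adjusted core hours:

```python
# Sketch: reproduce the summary core-hour figures from the job tables.
# Assumption (not stated on the page): "core hours" = #proc x requested
# walltime; every job below requests t_req = 3:00:00:00 = 72 hours.

t_req_hours = 72

# #proc column of the 23 running jobs, in table order
running_procs = [64, 64, 8, 8, 2, 2, 2, 2, 2, 8,
                 2, 2, 2, 2, 2, 2, 2, 2, 2, 32,
                 2, 2, 32]
assert sum(running_procs) == 248              # matches "248 used" hw threads

print(sum(running_procs) * t_req_hours)       # 17856 ("remaining core hours")

# #proc column of the 1 waiting job
waiting_procs = [1]
print(sum(waiting_procs) * t_req_hours)       # 72 ("waiting core hours")
```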


Running Jobs (23)

job queue user #proc #nodes ppn gpn vmem_req vmem_used t_remain t_req t_used started jobname hosts
124726 any dgromm3m 64 1 64 0 64 GB 55 GB 1:03:29:54 3:00:00:00 1:20:30:06 23.02.2026 16:04:55 start_mpi.sh wr50
124727 any dgromm3m 64 1 64 0 64 GB 54 GB 1:03:30:38 3:00:00:00 1:20:29:22 23.02.2026 16:05:39 start_mpi.sh wr54
124730 gpu4 smoses2s 8 1 8 1 32 GB 163 GB 1:07:30:26 3:00:00:00 1:16:29:34 23.02.2026 20:05:27 ulr2ss_training2_joint_off_bs16_gpu4 wr20
124766 gpu4 smoses2s 8 1 8 1 32 GB 274 GB 1:12:40:25 3:00:00:00 1:11:19:35 24.02.2026 1:15:26 ulr2ss_training4_joint_off_bs16_gpu4 wr20
124837 gpu4 bpicar3s 2 1 2 0 80 GB 59 GB 2:04:24:08 3:00:00:00 19:35:52 24.02.2026 16:59:09 CF3D_Re180_h90_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr24
124856 gpu4 bpicar3s 2 1 2 0 80 GB 44 GB 2:05:33:37 3:00:00:00 18:26:23 24.02.2026 18:08:38 CF3D_Re180_h100_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr22
124857 gpu4 bpicar3s 2 1 2 0 80 GB 49 GB 2:05:34:54 3:00:00:00 18:25:06 24.02.2026 18:09:55 CF3D_Re180_h105_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr20
124858 gpu4 bpicar3s 2 1 2 0 80 GB 55 GB 2:06:01:15 3:00:00:00 17:58:45 24.02.2026 18:36:16 CF3D_Re180_h110_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr21
124859 gpu4 bpicar3s 2 1 2 0 80 GB 61 GB 2:06:04:31 3:00:00:00 17:55:29 24.02.2026 18:39:32 CF3D_Re180_h115_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr22
124926 gpu4 smoses2s 8 1 8 1 32 GB 194 GB 2:14:42:22 3:00:00:00 9:17:38 3:17:23 train_diffusion wr25
124928 gpu4 bpicar3s 2 1 2 0 80 GB 41 GB 2:16:33:09 3:00:00:00 7:26:51 5:08:10 CF3D_Re180_h75_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr21
124929 gpu4 bpicar3s 2 1 2 0 80 GB 32 GB 2:16:49:25 3:00:00:00 7:10:35 5:24:26 CF3D_Re180_h85_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr23
124937 gpu4 bpicar3s 2 1 2 0 80 GB 30 GB 2:18:37:10 3:00:00:00 5:22:50 7:12:11 CF3D_Re180_h65_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr24
124938 gpu4 bpicar3s 2 1 2 0 80 GB 36 GB 2:18:38:31 3:00:00:00 5:21:29 7:13:32 CF3D_Re180_h90_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr21
124947 gpu4 bpicar3s 2 1 2 0 80 GB 25 GB 2:19:50:36 3:00:00:00 4:09:24 8:25:37 CF3D_Re180_h75_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr21
124951 gpu4 bpicar3s 2 1 2 0 80 GB 26 GB 2:20:45:10 3:00:00:00 3:14:50 9:20:11 CF3D_Re180_h60_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr20
124952 gpu4 bpicar3s 2 1 2 0 80 GB 41 GB 2:21:16:41 3:00:00:00 2:43:19 9:51:42 CF3D_Re180_h95_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr22
124958 gpu4 bpicar3s 2 1 2 0 80 GB 54 GB 2:22:04:53 3:00:00:00 1:55:07 10:39:54 CF3D_Re180_h85_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr24
124961 gpu4 bpicar3s 2 1 2 0 80 GB 23 GB 2:22:17:36 3:00:00:00 1:42:24 10:52:37 CF3D_Re180_h70_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr22
124962 any hfataf3m 32 1 32 0 40 GB 2 GB 2:22:18:30 3:00:00:00 1:41:30 10:53:31 psi4_uni wr59
124964 gpu4 bpicar3s 2 1 2 0 80 GB 22 GB 2:22:38:09 3:00:00:00 1:21:51 11:13:10 CF3D_Re180_h55_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr23
124965 gpu4 bpicar3s 2 1 2 0 80 GB 35 GB 2:22:47:43 3:00:00:00 1:12:17 11:22:44 CF3D_Re180_h70_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr23
124966 any hfataf3m 32 1 32 0 40 GB 2 GB 2:23:07:20 3:00:00:00 52:40 11:42:21 rms wr51
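The time columns in the table above drop leading zero fields: durations of a day or more read D:HH:MM:SS (e.g. 1:03:29:54), shorter ones HH:MM:SS (e.g. 19:35:52) or MM:SS (e.g. the 52:40 of t_used in the last row, consistent with t_req minus t_remain). A small helper to normalize any of these to seconds — a sketch, since the page itself does not document the format:

```python
def duration_to_seconds(s: str) -> int:
    """Parse 'D:HH:MM:SS', 'HH:MM:SS', or 'MM:SS' into total seconds.

    The status page omits leading zero fields, so pad the parsed
    fields on the left up to (days, hours, minutes, seconds).
    """
    parts = [int(p) for p in s.split(":")]
    parts = [0] * (4 - len(parts)) + parts
    d, h, m, sec = parts
    return ((d * 24 + h) * 60 + m) * 60 + sec

print(duration_to_seconds("1:03:29:54"))  # 98994 (t_remain of job 124726)
print(duration_to_seconds("19:35:52"))    # 70552 (t_used of job 124837)
print(duration_to_seconds("52:40"))       # 3160  (t_used of job 124966)
```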

Waiting/Blocked Jobs (1)

job queue user state #proc #nodes ppn gpn vmem t_req prio enqueued waiting jobname wait reason
124967 gpu4 bpicar3s PD 1 1 1 0 80 GB 3:00:00:00 2355 11:48:51 46:10 CF3D_Re180_h65_Ma0.1_BGK_Single_wallfunction_tol0p000001 (Nodes required for job are DOWN, DRAINED or reserved for jobs in higher priority partitions)