Batch Status

Summary

Last updated: 26.02.2026, 13:15:02

71 active nodes (9 used, 62 free)

6752 hw threads (224 used, 6528 free)

18 running jobs, 15264:00:00 remaining core hours

9 waiting jobs, 14472:00:00 waiting core hours
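Duration fields throughout this report use a [d:]h:mm:ss format, and core hours are processor count times wall time. A minimal conversion sketch, assuming that format (the helper name `parse_duration_hours` is ours, not part of the batch system):

```python
def parse_duration_hours(s: str) -> float:
    """Convert a batch-system duration string to hours.

    Durations appear as [d:]h:mm:ss: '2:49:54' is 2 h 49 m 54 s,
    while '3:00:00:00' is 3 days (72 h).
    """
    parts = [int(p) for p in s.split(":")]
    while len(parts) < 4:  # pad the optional days field with zero
        parts.insert(0, 0)
    d, h, m, sec = parts
    return d * 24 + h + m / 60 + sec / 3600

print(parse_duration_hours("3:00:00:00"))  # → 72.0
```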

Nodes


Running Jobs (18)

job queue user #proc #nodes ppn gpn vmem_req vmem_used t_remain t_req t_used started jobname hosts
(ppn = processors per node, gpn = GPUs per node; durations are [d:]h:mm:ss; started times without a date are from today)
124726 any dgromm3m 64 1 64 0 64 GB 55 GB 2:49:54 3:00:00:00 2:21:10:06 23.02.2026 16:04:55 start_mpi.sh wr50
124727 any dgromm3m 64 1 64 0 64 GB 54 GB 2:50:38 3:00:00:00 2:21:09:22 23.02.2026 16:05:39 start_mpi.sh wr54
124976 gpu4 smoses2s 8 1 8 1 32 GB 191 GB 2:01:52:15 3:00:00:00 22:07:45 25.02.2026 15:07:16 train_diffusion wr25
124979 gpu4 bpicar3s 2 1 2 1 80 GB 62 GB 2:02:37:39 3:00:00:00 21:22:21 25.02.2026 15:52:40 CF3D_Re180_h90_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr22
125002 gpu4 bpicar3s 2 1 2 1 80 GB 52 GB 2:06:39:55 3:00:00:00 17:20:05 25.02.2026 19:54:56 CF3D_Re180_h105_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr23
125007 gpu4 bpicar3s 2 1 2 1 80 GB 58 GB 2:09:57:28 3:00:00:00 14:02:32 25.02.2026 23:12:29 CF3D_Re180_h110_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr21
125008 gpu4 bpicar3s 2 1 2 1 80 GB 41 GB 2:12:44:25 3:00:00:00 11:15:35 1:59:26 CF3D_Re180_h95_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr25
125010 gpu4 smoses2s 8 1 8 1 32 GB 103 GB 2:14:20:28 3:00:00:00 9:39:32 3:35:29 ulr2ss_training3_joint_off_bs16_gpu4 wr20
125011 gpu4 smoses2s 8 1 8 1 32 GB 280 GB 2:14:21:34 3:00:00:00 9:38:26 3:36:35 ulr2ss_training5_joint_off_bs16_gpu4 wr20
125013 gpu4 bpicar3s 2 1 2 1 80 GB 54 GB 2:15:04:47 3:00:00:00 8:55:13 4:19:48 CF3D_Re180_h85_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr21
125014 gpu4 bpicar3s 2 1 2 1 80 GB 64 GB 2:15:53:59 3:00:00:00 8:06:01 5:09:00 CF3D_Re180_h115_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr23
125015 gpu4 bpicar3s 2 1 2 1 80 GB 41 GB 2:17:35:41 3:00:00:00 6:24:19 6:50:42 CF3D_Re180_h75_Ma0.1_BGK_Double_wallfunction_tol0p000001 wr23
125016 gpu4 bpicar3s 2 1 2 1 80 GB 36 GB 2:20:00:24 3:00:00:00 3:59:36 9:15:25 CF3D_Re180_h90_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr23
125017 hpc3 tludwi2s 2 1 2 1 32 GB 680 MB 2:21:41:31 3:00:00:00 2:18:29 10:56:32 Knapsack_job.sh wr75
125018 gpu4 bpicar3s 2 1 2 1 80 GB 46 GB 2:21:59:47 3:00:00:00 2:00:13 11:14:48 CF3D_Re180_h100_Ma0.1_BGK_Single_wallfunction_tol0p000001 wr24
125019 gpu4 ipolat2s 4 1 4 1 16 GB 13 GB 2:22:40:32 3:00:00:00 1:19:28 11:55:33 grid_a wr24
125024 gpu4 ipolat2s 4 1 4 1 16 GB 0 B 2:23:51:22 3:00:00:00 8:38 13:06:23 grid_b wr24
125038 hpc3 hfataf3m 32 1 32 1 40 GB 7 GB 2:23:54:26 3:00:00:00 5:34 13:09:27 psi4_freq_chunk wr75
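For each running job, t_remain and t_used should sum to t_req. A quick consistency check on the first row above (job 124726), assuming the [d:]h:mm:ss duration format; the helper name `to_seconds` is ours:

```python
def to_seconds(s: str) -> int:
    """[d:]h:mm:ss -> total seconds; the days field is optional."""
    parts = [int(p) for p in s.split(":")]
    while len(parts) < 4:
        parts.insert(0, 0)
    d, h, m, sec = parts
    return ((d * 24 + h) * 60 + m) * 60 + sec

# Job 124726: t_remain + t_used equals the requested 3-day limit.
assert to_seconds("2:49:54") + to_seconds("2:21:10:06") == to_seconds("3:00:00:00")
```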

Waiting/Blocked Jobs (9)

job queue user state #proc #nodes ppn gpn vmem t_req prio enqueued waiting jobname wait reason
125026 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 82181 11:57:50 1:17:11 grid_d (Priority)
125025 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 82181 11:57:40 1:17:21 grid_c (Priority)
125034 gpu4 bpicar3s PD 1 1 1 1 80 GB 3:00:00:00 2290 12:56:36 18:25 CF3D_Re180_h85_Ma0.1_BGK_Single_wallfunction_tol0p000001 (Priority)
125044 hpc3 hfataf3m PD 32 1 32 1 40 GB 3:00:00:00 1226 13:11:50 3:11 psi4_freq_chunk (Dependency)
125043 hpc3 hfataf3m PD 32 1 32 1 40 GB 3:00:00:00 1226 13:11:13 3:48 psi4_freq_chunk (Dependency)
125042 hpc3 hfataf3m PD 32 1 32 1 40 GB 3:00:00:00 1226 13:10:56 4:05 psi4_freq_chunk (Dependency)
125041 hpc3 hfataf3m PD 32 1 32 1 40 GB 3:00:00:00 1226 13:10:29 4:32 psi4_freq_chunk (Dependency)
125040 hpc3 hfataf3m PD 32 1 32 1 40 GB 3:00:00:00 1226 13:10:10 4:51 psi4_freq_chunk (Dependency)
125039 hpc3 hfataf3m PD 32 1 32 1 40 GB 3:00:00:00 1226 13:10:00 5:01 psi4_freq_chunk (Dependency)
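The 14472:00:00 of waiting core hours in the summary is the sum over waiting jobs of #proc × t_req. Reproducing it from the nine rows above (the helper name `parse_hours` is ours):

```python
def parse_hours(s: str) -> float:
    """[d:]h:mm:ss -> hours; the days field is optional."""
    parts = [int(p) for p in s.split(":")]
    while len(parts) < 4:
        parts.insert(0, 0)
    d, h, m, sec = parts
    return d * 24 + h + m / 60 + sec / 3600

# (#proc, t_req) for each of the nine waiting jobs listed above.
waiting = [(4, "3:00:00:00"), (4, "3:00:00:00"), (1, "3:00:00:00")] \
        + [(32, "3:00:00:00")] * 6
core_hours = sum(p * parse_hours(t) for p, t in waiting)
assert core_hours == 14472  # matches "14472:00:00 waiting core hours"
```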