Batch Status

Summary

last updated: 08:08:02 09.03.2026

71 active nodes (14 used, 57 free)

6752 hw threads (912 used, 5840 free)

19 running jobs, 62208:00:00 remaining core hours

6 waiting jobs, 1728:00:00 waiting core hours
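
All durations on this page use a d:hh:mm:ss (or hh:mm:ss) notation, including the aggregate core-hour figures above. Below is a minimal helper for converting them to hours for rough checks; the sample values are copied from this page, and the meaning of the aggregate fields is assumed from their labels.

```python
def to_hours(t: str) -> float:
    """Convert a 'd:hh:mm:ss' or 'hh:mm:ss' string, as printed on this page, into hours."""
    parts = [int(p) for p in t.split(":")]
    if len(parts) == 4:                    # d:hh:mm:ss
        d, h, m, s = parts
    else:                                  # hh:mm:ss
        d, (h, m, s) = 0, parts
    return d * 24 + h + m / 60 + s / 3600

print(to_hours("3:00:00:00"))    # 72.0    -- the 3-day t_req of the running jobs
print(to_hours("62208:00:00"))   # 62208.0 -- remaining core hours (summary)
print(to_hours("1728:00:00"))    # 1728.0  -- waiting core hours (summary)
```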

Running Jobs (19)

job queue user #proc #nodes ppn gpn vmem_req vmem_used t_remain t_req t_used started jobname hosts
132902 any dgromm3m 64 1 64 0 64 GB 55 GB 5:10:47 3:00:00:00 2:18:49:13 06.03.2026 13:18:49 start_mpi.sh wr53
132903 any dgromm3m 64 1 64 0 64 GB 55 GB 5:11:56 3:00:00:00 2:18:48:04 06.03.2026 13:19:58 start_mpi.sh wr53
132904 any dgromm3m 64 1 64 0 64 GB 55 GB 5:12:31 3:00:00:00 2:18:47:29 06.03.2026 13:20:33 start_mpi.sh wr58
132905 any dgromm3m 64 1 64 0 64 GB 55 GB 5:13:47 3:00:00:00 2:18:46:13 06.03.2026 13:21:49 start_mpi.sh wr58
132906 any dgromm3m 64 1 64 0 64 GB 55 GB 5:14:24 3:00:00:00 2:18:45:36 06.03.2026 13:22:26 start_mpi.sh wr59
132907 any dgromm3m 64 1 64 0 64 GB 72 GB 5:15:31 3:00:00:00 2:18:44:29 06.03.2026 13:23:33 start_mpi.sh wr59
132913 any dgromm3m 64 1 64 0 128 GB 56 GB 5:23:12 3:00:00:00 2:18:36:48 06.03.2026 13:31:14 start_mpi.sh wr61
132914 any dgromm3m 64 1 64 0 64 GB 55 GB 5:23:19 3:00:00:00 2:18:36:41 06.03.2026 13:31:21 start_mpi.sh wr62
132916 any dgromm3m 64 1 64 0 64 GB 56 GB 5:26:28 3:00:00:00 2:18:33:32 06.03.2026 13:34:30 start_mpi.sh wr62
132917 any dgromm3m 64 1 64 0 64 GB 56 GB 5:26:39 3:00:00:00 2:18:33:21 06.03.2026 13:34:41 start_mpi.sh wr63
132923 any dgromm3m 64 1 64 0 64 GB 49 GB 5:34:50 3:00:00:00 2:18:25:10 06.03.2026 13:42:52 start_mpi.sh wr64
132935 any dgromm3m 64 1 64 0 64 GB 49 GB 7:13:23 3:00:00:00 2:16:46:37 06.03.2026 15:21:25 start_mpi_relaxation.sh wr67
133222 gpu4 smoses2s 8 1 8 1 32 GB 274 GB 2:07:10:14 3:00:00:00 16:49:46 08.03.2026 15:18:16 ulr2ss_training6_joint_off_bs16_gpu4 wr20
133228 gpu4 smoses2s 8 1 8 1 32 GB 263 GB 2:07:19:44 3:00:00:00 16:40:16 08.03.2026 15:27:46 ulr2ss_training5_joint_off_bs16_gpu4 wr20
133090 gpu4 ipolat2s 4 1 4 1 16 GB 13 GB 2:20:17:40 3:00:00:00 3:42:20 4:25:42 aml_k3_d6_f24 wr22
133091 gpu4 ipolat2s 4 1 4 1 16 GB 13 GB 2:21:25:02 3:00:00:00 2:34:58 5:33:04 aml_k3_d7_f24 wr21
133092 gpu4 ipolat2s 4 1 4 1 16 GB 12 GB 2:22:13:38 3:00:00:00 1:46:22 6:21:40 aml_k5_d4_f24 wr25
133366 hpc3 hfataf3m 64 1 64 1 40 GB 85 GB 2:23:50:08 3:00:00:00 9:52 7:58:10 S3_opt wr76
133093 gpu4 ipolat2s 4 1 4 1 16 GB 0 B 2:23:53:22 3:00:00:00 6:38 8:01:24 aml_k5_d5_f24 wr22
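
The timing columns are internally consistent: t_remain equals t_req minus t_used, and rows for jobs apparently started on the report date (09.03.2026) show only a time of day in the started column. A quick spot check with the values of job 132902, hand-copied from the table above rather than parsed from the live page:

```python
def secs(t: str) -> int:
    """'d:hh:mm:ss' or 'hh:mm:ss' to seconds."""
    parts = [int(p) for p in t.split(":")]
    while len(parts) < 4:
        parts.insert(0, 0)                 # pad the missing day field
    d, h, m, s = parts
    return ((d * 24 + h) * 60 + m) * 60 + s

# Job 132902: requested 3 days, used 2:18:49:13, so 5:10:47 remain.
assert secs("3:00:00:00") - secs("2:18:49:13") == secs("5:10:47")
```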

Waiting/Blocked Jobs (6)

job queue user state #proc #nodes ppn gpn vmem t_req prio enqueued waiting jobname wait_reason
133099 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 87018 07.03.2026 16:28:22 1:15:39:40 aml_k8_d7_f24 (Priority)
133098 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 87018 07.03.2026 16:28:21 1:15:39:41 aml_k8_d6_f24 (Priority)
133097 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 87018 07.03.2026 16:28:20 1:15:39:42 aml_k8_d5_f24 (Priority)
133096 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 87018 07.03.2026 16:28:19 1:15:39:43 aml_k8_d4_f24 (Priority)
133095 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 87018 07.03.2026 16:28:18 1:15:39:44 aml_k5_d7_f24 (Priority)
133094 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 87018 07.03.2026 16:28:17 1:15:39:45 aml_k5_d6_f24 (Priority)
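
The waiting column is the time since enqueueing, measured against the page's last-updated timestamp, and the 1728:00:00 waiting core hours in the summary match 6 jobs × 4 cores × 72 requested hours each. The PD state and (Priority) wait reason follow Slurm conventions, which would mean these jobs are simply queued behind higher-priority work; whether the underlying scheduler really is Slurm is an assumption here. A small consistency check, with values hand-copied from job 133099 and the page header:

```python
from datetime import datetime

FMT = "%d.%m.%Y %H:%M:%S"
enqueued = datetime.strptime("07.03.2026 16:28:22", FMT)   # job 133099, enqueued
updated  = datetime.strptime("09.03.2026 08:08:02", FMT)   # page "last updated"
print(updated - enqueued)    # 1 day, 15:39:40 -> matches the waiting column

# Waiting core hours in the summary: 6 pending jobs x 4 cores x 72 h requested each.
print(6 * 4 * 72)            # 1728 -> matches 1728:00:00
```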