Batch Status

Summary

last updated: 01.02.2026 06:42:03

71 active nodes (24 used, 47 free)

6752 hw threads (2328 used, 4424 free)

34 running jobs, 166848:00:00 remaining core hours

5 waiting jobs, 31360:00:00 waiting core hours
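The core-hour totals above can be reproduced from the job tables below: each figure equals the sum of #proc x t_req over the listed jobs (e.g. 31360 h for the five waiting jobs), which suggests the summary counts requested rather than elapsed-remaining walltime. A minimal sketch of that calculation, assuming the rows have been parsed into dicts; the field names used here are illustrative, not the page's internal ones:

# Sketch only: reproduce the summary core-hour figures from the job tables.
def walltime_to_hours(t: str) -> float:
    """Convert 'D:HH:MM:SS' or 'HH:MM:SS' to hours."""
    parts = [int(p) for p in t.split(":")]
    if len(parts) == 3:            # no day field
        parts = [0] + parts
    d, h, m, s = parts
    return d * 24 + h + m / 60 + s / 3600

def requested_core_hours(jobs):
    """Sum #proc x t_req over all jobs."""
    return sum(j["proc"] * walltime_to_hours(j["t_req"]) for j in jobs)

# The five waiting jobs from the table below:
waiting = [
    {"proc": 128, "t_req": "3:00:00:00"},   # 102349
    {"proc": 128, "t_req": "3:00:00:00"},   # 102350
    {"proc": 64,  "t_req": "10:00:00"},     # 102391
    {"proc": 128, "t_req": "2:00:00:00"},   # 102384
    {"proc": 128, "t_req": "2:00:00:00"},   # 102385
]
print(requested_core_hours(waiting))        # 31360.0, matching the summary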


Running Jobs (34)

job queue user #proc #nodes ppn gpn vmem_req vmem_used t_remain t_req t_used started jobname hosts
102117 hpc1 adietr2s 128 1 128 0 256 GB 19 GB 5:12:47 3:00:00:00 2:18:47:13 29.01.2026 11:54:48 size_test_200_j wr53
102118 hpc1 adietr2s 128 1 128 0 256 GB 16 GB 5:12:54 3:00:00:00 2:18:47:06 29.01.2026 11:54:55 size_test_300_r wr54
102119 hpc1 adietr2s 128 1 128 0 256 GB 16 GB 5:12:56 3:00:00:00 2:18:47:04 29.01.2026 11:54:57 size_test_400_r wr55
102120 hpc1 adietr2s 128 1 128 0 256 GB 16 GB 5:13:00 3:00:00:00 2:18:47:00 29.01.2026 11:55:01 size_test_500_j wr56
102126 gpu4 sgeorg2s 16 1 16 1 64 GB 122 GB 5:53:41 3:00:00:00 2:18:06:19 29.01.2026 12:35:42 fastc_fusion_exp1 wr21
102310 any dgromm3m 64 1 64 0 128 GB 72 GB 17:34:59 2:12:00:00 1:18:25:01 30.01.2026 12:17:00 start_mpi.sh wr60
102301 any dgromm3m 64 1 64 0 128 GB 71 GB 1:05:25:20 3:00:00:00 1:18:34:40 30.01.2026 12:07:21 start_mpi.sh wr52
102303 any dgromm3m 64 1 64 0 128 GB 70 GB 1:05:27:24 3:00:00:00 1:18:32:36 30.01.2026 12:09:25 start_mpi.sh wr52
102304 any dgromm3m 64 1 64 0 128 GB 69 GB 1:05:28:00 3:00:00:00 1:18:32:00 30.01.2026 12:10:01 start_mpi.sh wr57
102305 any dgromm3m 64 1 64 0 128 GB 71 GB 1:05:28:36 3:00:00:00 1:18:31:24 30.01.2026 12:10:37 start_mpi.sh wr57
102306 any dgromm3m 64 1 64 0 128 GB 71 GB 1:05:31:21 3:00:00:00 1:18:28:39 30.01.2026 12:13:22 start_mpi.sh wr58
102307 any dgromm3m 64 1 64 0 128 GB 72 GB 1:05:31:56 3:00:00:00 1:18:28:04 30.01.2026 12:13:57 start_mpi.sh wr58
102308 any dgromm3m 64 1 64 0 128 GB 71 GB 1:05:33:43 3:00:00:00 1:18:26:17 30.01.2026 12:15:44 start_mpi.sh wr59
102309 any dgromm3m 64 1 64 0 128 GB 72 GB 1:05:34:23 3:00:00:00 1:18:25:37 30.01.2026 12:16:24 start_mpi.sh wr59
102311 any dgromm3m 64 1 64 0 128 GB 72 GB 1:05:35:37 3:00:00:00 1:18:24:23 30.01.2026 12:17:38 start_mpi.sh wr60
102312 any dgromm3m 64 1 64 0 128 GB 55 GB 1:05:36:48 3:00:00:00 1:18:23:12 30.01.2026 12:18:49 start_mpi.sh wr61
102313 any dgromm3m 64 1 64 0 128 GB 55 GB 1:05:40:44 3:00:00:00 1:18:19:16 30.01.2026 12:22:45 start_mpi.sh wr61
102314 any dgromm3m 64 1 64 0 128 GB 55 GB 1:05:41:24 3:00:00:00 1:18:18:36 30.01.2026 12:23:25 start_mpi.sh wr62
102315 any dgromm3m 64 1 64 0 128 GB 55 GB 1:05:41:59 3:00:00:00 1:18:18:01 30.01.2026 12:24:00 start_mpi.sh wr62
102317 any dgromm3m 64 1 64 0 128 GB 56 GB 1:05:43:35 3:00:00:00 1:18:16:25 30.01.2026 12:25:36 start_mpi.sh wr63
102318 any dgromm3m 64 1 64 0 128 GB 56 GB 1:05:44:15 3:00:00:00 1:18:15:45 30.01.2026 12:26:16 start_mpi.sh wr64
102321 any dgromm3m 64 1 64 0 128 GB 56 GB 1:05:47:36 3:00:00:00 1:18:12:24 30.01.2026 12:29:37 start_mpi.sh wr65
102322 any dgromm3m 64 1 64 0 128 GB 54 GB 1:05:48:26 3:00:00:00 1:18:11:34 30.01.2026 12:30:27 start_mpi.sh wr66
102323 any dgromm3m 64 1 64 0 128 GB 56 GB 1:05:49:04 3:00:00:00 1:18:10:56 30.01.2026 12:31:05 start_mpi.sh wr66
102324 any dgromm3m 64 1 64 0 128 GB 56 GB 1:05:49:46 3:00:00:00 1:18:10:14 30.01.2026 12:31:47 start_mpi.sh wr67
102326 any dgromm3m 64 1 64 0 128 GB 55 GB 1:05:51:05 3:00:00:00 1:18:08:55 30.01.2026 12:33:06 start_mpi.sh wr68
102344 gpu4 pchatu2s 128 1 128 4 200 GB 172 GB 2:17:34:13 3:00:00:00 6:25:47 0:16:14 flwr_dropout50_alpha1000 wr20
102345 gpu4 pchatu2s 128 1 128 4 200 GB 172 GB 2:17:36:37 3:00:00:00 6:23:23 0:18:38 flwr_dropout50_alpha100 wr24
102348 gpu4 pchatu2s 128 1 128 4 200 GB 172 GB 2:17:38:13 3:00:00:00 6:21:47 0:20:14 flwr_dropout50_alpha10 wr25
102396 gpu smoses2s 4 1 4 1 16 GB 109 GB 2:18:32:42 3:00:00:00 5:27:18 1:14:43 ulr2ss_training_joint_off wr15
102398 gpu4 bpicar3s 2 1 2 0 80 GB 46 GB 2:21:15:51 3:00:00:00 2:44:09 3:57:52 idk_what_im_doing wr23
102440 gpu4 bpicar3s 2 1 2 0 80 GB 54 GB 2:23:03:57 3:00:00:00 56:03 5:45:58 idk_what_im_doing wr23
102445 hpc1 hfataf3m 32 1 32 0 40 GB 7 GB 2:23:58:49 3:00:00:00 1:11 6:40:50 psi4_conformer_1-psi wr50
102446 hpc1 hfataf3m 32 1 32 0 40 GB 7 GB 2:23:58:50 3:00:00:00 1:10 6:40:51 psi4_conformer_4-psi wr50

Waiting/Blocked Jobs (5)

job queue user state #proc #nodes ppn gpn vmem t_req prio enqueued waiting jobname wait reason
102349 gpu4 pchatu2s PD 128 1 128 4 200 GB 3:00:00:00 37799 30.01.2026 19:48:21 1:10:53:40 flwr_dropout50_alpha1 (Resources)
102350 gpu4 pchatu2s PD 128 1 128 4 200 GB 3:00:00:00 37798 30.01.2026 19:48:54 1:10:53:07 flwr_dropout50_alpha0.1 (ReqNodeNotAvail, UnavailableNodes:wr[20-22,24-25])
102391 gpu4 jghofr2m PD 64 1 64 4 128 GB 10:00:00 37767 31.01.2026 22:32:32 8:09:29 get_nodes_rm (Priority)
102384 gpu4 pchatu2s PD 128 1 128 4 200 GB 2:00:00:00 34563 31.01.2026 19:06:12 11:35:49 flwr_redundant50_alpha1000 (ReqNodeNotAvail, UnavailableNodes:wr[20-22,24-25])
102385 gpu4 pchatu2s PD 128 1 128 4 200 GB 2:00:00:00 34555 31.01.2026 19:09:37 11:32:24 flwr_redundant50_alpha100 (Priority)
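The state and wait-reason strings appear to follow Slurm's squeue conventions: PD means pending; (Resources) means the job is waiting for enough free resources; (Priority) means higher-priority jobs are ahead of it in the queue; (ReqNodeNotAvail, UnavailableNodes:...) means one or more of the requested nodes is currently unavailable, with the affected nodes listed.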