Batch Status

Summary

Last updated: 06:26:01 17.01.2026

71 active nodes (19 used, 52 free)

6752 hw threads (1614 used, 5138 free)

29 running jobs, 91632:00:00 remaining core hours

0 waiting jobs, - waiting core hours
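The summary counts are aggregates over the running-job table below; the 19 used nodes, for instance, are exactly the distinct hosts listed there (wr20, wr21, wr50–wr56, wr58–wr67). As a minimal illustration of that bookkeeping, the following Python sketch derives comparable figures from a parsed job list. The Job structure and all names in it are invented for this example, and the dashboard's own thread and core-hour accounting may differ in detail (e.g. for jobs on shared GPU nodes).

```python
from dataclasses import dataclass

@dataclass
class Job:
    """One row of the running-jobs table (illustrative subset of columns)."""
    nprocs: int       # '#proc' column
    hosts: set[str]   # 'hosts' column, e.g. {"wr50"}
    t_remain_s: int   # 't_remain' column, converted to seconds

def summarize(jobs: list[Job], total_nodes: int, total_threads: int) -> dict:
    # A node counts as used if at least one running job has it in its host list.
    used_nodes = len({h for job in jobs for h in job.hosts})
    used_threads = sum(job.nprocs for job in jobs)
    # "Remaining core hours": per-job remaining walltime weighted by core count.
    # (Only an approximation of the dashboard's figure.)
    core_hours_left = sum(job.nprocs * job.t_remain_s for job in jobs) / 3600
    return {
        "active nodes": total_nodes,
        "used nodes": used_nodes,
        "free nodes": total_nodes - used_nodes,
        "used threads": used_threads,
        "free threads": total_threads - used_threads,
        "running jobs": len(jobs),
        "remaining core hours": round(core_hours_left),
    }
```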

Nodes


Running Jobs (29)

job queue user #proc #nodes ppn gpn vmem_req vmem_used t_remain t_req t_used started jobname hosts
97646 any dgromm3m 64 1 64 0 128 GB 54 GB 7:28:13 3:00:00:00 2:16:31:47 14.01.2026 13:53:14 start_mpi.sh wr50
97837 hpc1 adietr2s 128 1 128 0 256 GB 56 GB 7:38:57 1:00:00:00 16:21:03 16.01.2026 14:03:58 size_test_200_part_j wr52
97715 hpc ahagg2s 32 1 32 0 128 GB 14 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e256_b256_r1 wr58
97716 hpc ahagg2s 32 1 32 0 128 GB 15 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e256_b256_r2 wr59
97717 hpc ahagg2s 32 1 32 0 128 GB 15 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e256_b256_r3 wr59
97718 hpc ahagg2s 32 1 32 0 128 GB 18 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e256_b512_r1 wr60
97719 hpc ahagg2s 32 1 32 0 128 GB 17 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e256_b512_r2 wr60
97720 hpc ahagg2s 32 1 32 0 128 GB 17 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e256_b512_r3 wr61
97724 hpc ahagg2s 32 1 32 0 128 GB 17 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e512_b256_r1 wr63
97725 hpc ahagg2s 32 1 32 0 128 GB 17 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e512_b256_r2 wr63
97726 hpc ahagg2s 32 1 32 0 128 GB 17 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e512_b256_r3 wr64
97727 hpc ahagg2s 32 1 32 0 128 GB 22 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e512_b512_r1 wr64
97728 hpc ahagg2s 32 1 32 0 128 GB 22 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e512_b512_r2 wr65
97729 hpc ahagg2s 32 1 32 0 128 GB 23 GB 9:51:14 3:00:00:00 2:14:08:46 14.01.2026 16:16:15 mapgp_e512_b512_r3 wr65
97942 hpc1 mmensi2s 128 1 128 0 256 GB 0 B 1:19:12:24 2:00:00:00 4:47:36 17.01.2026 1:37:25 slurm_klam_single_node_report.sh wr51
97943 hpc1 mmensi2s 128 1 128 0 256 GB 430 GB 1:19:13:08 2:00:00:00 4:46:52 17.01.2026 1:38:09 slurm_klam_single_node_report.sh wr55
97944 hpc1 mmensi2s 128 1 128 0 256 GB 431 GB 1:19:13:12 2:00:00:00 4:46:48 17.01.2026 1:38:13 slurm_klam_single_node_report.sh wr56
97945 hpc1 mmensi2s 128 1 128 0 256 GB 430 GB 1:19:13:18 2:00:00:00 4:46:42 17.01.2026 1:38:19 slurm_klam_single_node_report.sh wr62
97946 hpc1 mmensi2s 128 1 128 0 256 GB 0 B 1:19:13:21 2:00:00:00 4:46:39 17.01.2026 1:38:22 slurm_klam_single_node_report.sh wr66
97947 hpc1 mmensi2s 128 1 128 0 256 GB 430 GB 1:19:13:25 2:00:00:00 4:46:35 17.01.2026 1:38:26 slurm_klam_single_node_report.sh wr67
97838 hpc1 adietr2s 128 1 128 0 256 GB 50 GB 2:07:39:03 3:00:00:00 16:20:57 16.01.2026 14:04:04 size_test_300_r wr53
97839 hpc1 adietr2s 128 1 128 0 256 GB 63 GB 2:07:39:07 3:00:00:00 16:20:53 16.01.2026 14:04:08 size_test_400_r wr54
97957 gpu4 bpicar3s 2 1 2 0 80 GB 35 GB 2:21:07:14 3:00:00:00 2:52:46 17.01.2026 3:32:15 idk_what_im_doing wr20
97958 gpu4 bpicar3s 2 1 2 0 80 GB 39 GB 2:21:07:14 3:00:00:00 2:52:46 17.01.2026 3:32:15 idk_what_im_doing wr20
97959 gpu4 bpicar3s 2 1 2 0 80 GB 44 GB 2:21:07:14 3:00:00:00 2:52:46 17.01.2026 3:32:15 idk_what_im_doing wr21
97960 gpu4 bpicar3s 2 1 2 0 80 GB 49 GB 2:21:07:14 3:00:00:00 2:52:46 17.01.2026 3:32:15 idk_what_im_doing wr21
97961 gpu4 bpicar3s 2 1 2 0 80 GB 55 GB 2:21:07:14 3:00:00:00 2:52:46 17.01.2026 3:32:15 idk_what_im_doing wr21
97965 gpu4 bpicar3s 2 1 2 0 80 GB 32 GB 2:23:44:03 3:00:00:00 15:57 17.01.2026 6:09:04 idk_what_im_doing wr20
97966 gpu4 bpicar3s 2 1 2 0 80 GB 29 GB 2:23:51:12 3:00:00:00 8:48 17.01.2026 6:16:13 idk_what_im_doing wr20
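The t_remain, t_req, and t_used columns use a variable-width colon format: mm:ss (8:48), hh:mm:ss (4:47:36), or d:hh:mm:ss (2:16:31:47); the summary's 91632:00:00 is the same hh:mm:ss form with an hours field beyond 24. Below is a hedged parsing sketch assuming that reading; parse_duration and parse_started are names made up for this example.

```python
from datetime import datetime

def parse_duration(s: str) -> int:
    """Convert a duration like '8:48', '4:47:36', or '2:16:31:47' to seconds.

    Fields are colon-separated with seconds last; each additional leading
    field adds minutes, then hours, then days.
    """
    parts = [int(p) for p in s.split(":")]
    weights = (1, 60, 3600, 86400)  # seconds, minutes, hours, days
    return sum(p * w for p, w in zip(reversed(parts), weights))

def parse_started(s: str) -> datetime:
    """Parse a 'started' timestamp such as '14.01.2026 13:53:14'."""
    return datetime.strptime(s, "%d.%m.%Y %H:%M:%S")

# Sanity check against job 97646: t_used + t_remain should equal t_req.
assert parse_duration("2:16:31:47") + parse_duration("7:28:13") == \
       parse_duration("3:00:00:00")
```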