Batch Status

Summary

Last updated: 19:03:09 17.01.2026

71 active nodes (20 used, 51 free)

6752 hw threads (1784 used, 4968 free)

40 running jobs, 80784:00:00 remaining core hours

2 waiting jobs, 576:00:00 waiting core hours
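
The core-hour figures above can be reproduced from the job tables below. One observation, not documented on the page itself: the totals match the sum of #proc × t_req (requested walltime) rather than #proc × t_remain, so "remaining core hours" appears to mean core hours requested at submission. A minimal Python sketch, assuming times are formatted as [D:]HH:MM:SS:

    # Reproducing the summary core-hour totals from the job tables below.
    # Assumption: "core hours" = sum(#proc * t_req) over all listed jobs.

    def to_hours(t: str) -> float:
        """Parse '[D:]HH:MM:SS' into hours, e.g. '3:00:00:00' -> 72.0."""
        parts = [int(p) for p in t.split(":")]
        while len(parts) < 4:           # pad a missing leading day field
            parts.insert(0, 0)
        d, h, m, s = parts
        return d * 24 + h + m / 60 + s / 3600

    # (#proc, t_req) pairs copied from the tables on this page
    running = ([(21, "12:00:00")] * 20       # vschar2s, jobs 98003-98022
               + [(128, "3:00:00:00")] * 2   # adietr2s, jobs 97838-97839
               + [(128, "2:00:00:00")] * 6   # mmensi2s, jobs 98070-98075
               + [(128, "3:00:00:00")] * 2   # ewangl2s, jobs 98030-98031
               + [(2, "3:00:00:00")] * 6     # bpicar3s
               + [(4, "3:00:00:00")] * 4)    # ipolat2s (running)
    waiting = [(4, "3:00:00:00")] * 2        # ipolat2s, jobs 98067-98068

    print(sum(n * to_hours(t) for n, t in running))  # 80784.0
    print(sum(n * to_hours(t) for n, t in waiting))  # 576.0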

Nodes

(interactive per-node usage display omitted)

Running Jobs (40)

job queue user #proc #nodes ppn gpn vmem_req vmem_used t_remain t_req t_used started jobname hosts
98003 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s0 wr51
98004 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s1 wr51
98005 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s2 wr51
98006 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s3 wr51
98007 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s4 wr51
98008 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s5 wr55
98009 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s6 wr55
98010 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s7 wr55
98011 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s8 wr55
98012 hpc vschar2s 21 1 21 0 64 GB 27 GB 4:38:11 12:00:00 7:21:49 11:41:13 unified_FEMALE_s9 wr55
98013 hpc vschar2s 21 1 21 0 64 GB 57 GB 4:38:15 12:00:00 7:21:45 11:41:17 unified_MALE_s0 wr56
98014 hpc vschar2s 21 1 21 0 64 GB 58 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s1 wr56
98015 hpc vschar2s 21 1 21 0 64 GB 57 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s2 wr56
98016 hpc vschar2s 21 1 21 0 64 GB 57 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s3 wr56
98017 hpc vschar2s 21 1 21 0 64 GB 58 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s4 wr56
98018 hpc vschar2s 21 1 21 0 64 GB 56 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s5 wr57
98019 hpc vschar2s 21 1 21 0 64 GB 57 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s6 wr57
98020 hpc vschar2s 21 1 21 0 64 GB 58 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s7 wr57
98021 hpc vschar2s 21 1 21 0 64 GB 54 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s8 wr57
98022 hpc vschar2s 21 1 21 0 64 GB 58 GB 4:38:16 12:00:00 7:21:44 11:41:18 unified_MALE_s9 wr57
97838 hpc1 adietr2s 128 1 128 0 256 GB 75 GB 1:19:01:02 3:00:00:00 1:04:58:58 16.01.2026 14:04:04 size_test_300_r wr53
97839 hpc1 adietr2s 128 1 128 0 256 GB 78 GB 1:19:01:06 3:00:00:00 1:04:58:54 16.01.2026 14:04:08 size_test_400_r wr54
98070 hpc1 mmensi2s 128 1 128 0 256 GB 440 GB 1:22:31:25 2:00:00:00 1:28:35 17:34:27 slurm_klam_single_node_report.sh wr50
98071 hpc1 mmensi2s 128 1 128 0 256 GB 440 GB 1:22:31:32 2:00:00:00 1:28:28 17:34:34 slurm_klam_single_node_report.sh wr52
98072 hpc1 mmensi2s 128 1 128 0 256 GB 440 GB 1:22:31:39 2:00:00:00 1:28:21 17:34:41 slurm_klam_single_node_report.sh wr58
98073 hpc1 mmensi2s 128 1 128 0 256 GB 438 GB 1:22:31:45 2:00:00:00 1:28:15 17:34:47 slurm_klam_single_node_report.sh wr59
98074 hpc1 mmensi2s 128 1 128 0 256 GB 438 GB 1:22:31:50 2:00:00:00 1:28:10 17:34:52 slurm_klam_single_node_report.sh wr60
98075 hpc1 mmensi2s 128 1 128 0 256 GB 438 GB 1:22:31:55 2:00:00:00 1:28:05 17:34:57 slurm_klam_single_node_report.sh wr61
98030 any ewangl2s 128 1 128 0 20 GB 11 GB 2:18:22:50 3:00:00:00 5:37:10 13:25:52 hyperparameter_optimization_ingolstadt.sh wr62
98031 any ewangl2s 128 1 128 0 20 GB 11 GB 2:18:22:54 3:00:00:00 5:37:06 13:25:56 hyperparameter_optimization_spider.sh wr66
98044 gpu4 bpicar3s 2 1 2 0 80 GB 41 GB 2:20:26:34 3:00:00:00 3:33:26 15:29:36 idk_what_im_doing wr20
98045 gpu4 bpicar3s 2 1 2 0 80 GB 52 GB 2:20:28:32 3:00:00:00 3:31:28 15:31:34 idk_what_im_doing wr21
98047 gpu4 bpicar3s 2 1 2 0 80 GB 36 GB 2:21:24:36 3:00:00:00 2:35:24 16:27:38 idk_what_im_doing wr20
98048 gpu4 bpicar3s 2 1 2 0 80 GB 32 GB 2:21:27:04 3:00:00:00 2:32:56 16:30:06 idk_what_im_doing wr20
98049 gpu4 ipolat2s 4 1 4 1 16 GB 13 GB 2:22:09:40 3:00:00:00 1:50:20 17:12:42 bai_4b_16f_k4 wr21
98054 gpu4 ipolat2s 4 1 4 1 16 GB 13 GB 2:22:10:08 3:00:00:00 1:49:52 17:13:10 bai_4b_16f_k8 wr24
98059 gpu4 ipolat2s 4 1 4 1 16 GB 13 GB 2:22:10:14 3:00:00:00 1:49:46 17:13:16 bai_5b_24f_k4 wr24
98064 gpu4 ipolat2s 4 1 4 1 16 GB 0 B 2:22:10:19 3:00:00:00 1:49:41 17:13:21 bai_6b_16f_k8 wr25
98076 gpu4 bpicar3s 2 1 2 0 80 GB 58 GB 2:22:35:57 3:00:00:00 1:24:03 17:38:59 idk_what_im_doing wr21
98077 gpu4 bpicar3s 2 1 2 0 80 GB 46 GB 2:23:03:53 3:00:00:00 56:07 18:06:55 idk_what_im_doing wr21
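
The table's columns are whitespace-separated, but two quirks break naive splitting: memory values span two tokens ("256 GB"), and "started" is either a bare time (started today) or a date plus time (jobs 97838/97839). A sketch of a row parser under exactly those assumptions; parse_running_row is a hypothetical helper, not part of any batch tool:

    import re

    ROW = ("97838 hpc1 adietr2s 128 1 128 0 256 GB 75 GB 1:19:01:02 "
           "3:00:00:00 1:04:58:58 16.01.2026 14:04:04 size_test_300_r wr53")

    def parse_running_row(row: str) -> dict:
        """Split one running-jobs line into named fields.

        Format assumptions inferred from this page: memory columns are
        '<value> <unit>' pairs, and 'started' is 'HH:MM:SS' for jobs
        started today or 'DD.MM.YYYY HH:MM:SS' otherwise.
        """
        tok = row.split()
        job, queue, user, nproc, nnodes, ppn, gpn = tok[:7]
        vmem_req, vmem_used = " ".join(tok[7:9]), " ".join(tok[9:11])
        t_remain, t_req, t_used = tok[11:14]
        rest = tok[14:]
        if re.fullmatch(r"\d{2}\.\d{2}\.\d{4}", rest[0]):  # dated start
            started, rest = " ".join(rest[:2]), rest[2:]
        else:
            started, rest = rest[0], rest[1:]
        return {"job": int(job), "queue": queue, "user": user,
                "nproc": int(nproc), "nnodes": int(nnodes),
                "ppn": int(ppn), "gpn": int(gpn),
                "vmem_req": vmem_req, "vmem_used": vmem_used,
                "t_remain": t_remain, "t_req": t_req, "t_used": t_used,
                "started": started, "jobname": rest[0], "hosts": rest[1:]}

    print(parse_running_row(ROW)["started"])  # '16.01.2026 14:04:04'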

Waiting/Blocked Jobs (2)

job queue user state #proc #nodes ppn gpn vmem t_req prio enqueued waiting jobname wait_reason
98067 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 2525 17:13:25 1:49:37 bai_7b_24f_k4 (Priority)
98068 gpu4 ipolat2s PD 4 1 4 1 16 GB 3:00:00:00 2524 17:13:28 1:49:34 bai_7b_24f_k8 (Priority)
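
The PD state and "(Priority)" wait reason follow Slurm conventions (one jobname above even references a slurm script), so assuming the scheduler behind this page is Slurm, the waiting table can be cross-checked directly against squeue using standard field specifiers:

    # Assumption: the backend is Slurm. JobID, UserName, Priority, Reason,
    # and TimeLimit are standard squeue --Format field names.
    import subprocess

    out = subprocess.run(
        ["squeue", "--states=PENDING", "--noheader",
         "--Format=JobID,UserName,Priority,Reason,TimeLimit"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        job_id, user, prio, reason, t_req = line.split()
        print(job_id, user, prio, reason, t_req)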