Batch Status

Summary

last updated: 02:21:02 02.04.2023

65 active nodes (25 used, 40 free)

4760 hw threads (1600 used, 3160 free)

25 running jobs, 92160:00:00 remaining core hours

1 waiting job, - waiting core hours
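
The summary figures can be cross-checked against the running-job table below. A minimal sketch in Python, under the assumption that "remaining core hours" is derived as requested walltime (t_req) times allocated cores (#proc), summed over the running jobs, which reproduces the 92160:00:00 shown above:

    # Cross-check of the summary figures against the running-job table below.
    # Assumption: "remaining core hours" = sum over running jobs of
    # requested walltime (t_req, in hours) * allocated cores (#proc).
    jobs = (
        [(64, 3 * 24)] * 9     # 728132-728140: 64 cores, 3-day request
        + [(64, 2 * 24)] * 15  # 728636-728651: 64 cores, 2-day request (15 jobs)
        + [(64, 3 * 24)] * 1   # 728663:        64 cores, 3-day request
    )

    used_threads = sum(cores for cores, _ in jobs)            # 25 * 64 = 1600
    core_hours = sum(cores * hours for cores, hours in jobs)  # 92160

    print(len(jobs), used_threads, core_hours)  # 25 1600 92160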

Running Jobs (25)

job queue user #proc #nodes ppn vmem_req vmem_used t_remain t_req t_used started jobname hosts
728132 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:17:09 3:00:00:00 1:10:42:51 31.03.2023 15:38:10 start_mpi.sh wr51
728133 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:18:17 3:00:00:00 1:10:41:43 31.03.2023 15:39:18 start_mpi.sh wr52
728134 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:19:04 3:00:00:00 1:10:40:56 31.03.2023 15:40:05 start_mpi.sh wr53
728135 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:20:09 3:00:00:00 1:10:39:51 31.03.2023 15:41:10 start_mpi.sh wr54
728136 hpc3 dgromm3m 64 1 64 96 GB 251 GB 1:13:20:48 3:00:00:00 1:10:39:12 31.03.2023 15:41:49 start_mpi.sh wr55
728137 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:22:13 3:00:00:00 1:10:37:47 31.03.2023 15:43:14 start_mpi.sh wr56
728138 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:22:55 3:00:00:00 1:10:37:05 31.03.2023 15:43:56 start_mpi.sh wr57
728139 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:24:46 3:00:00:00 1:10:35:14 31.03.2023 15:45:47 start_mpi.sh wr58
728140 hpc dgromm3m 64 1 64 96 GB 251 GB 1:13:25:34 3:00:00:00 1:10:34:26 31.03.2023 15:46:35 start_mpi.sh wr59
728636 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:27:09 2:00:00:00 9:32:51 01.04.2023 16:48:10 start_ppa_seperate.sh wr50
728637 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:27:34 2:00:00:00 9:32:26 01.04.2023 16:48:35 start_ppa_seperate.sh wr60
728638 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:27:51 2:00:00:00 9:32:09 01.04.2023 16:48:52 start_ppa_seperate.sh wr61
728639 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:28:09 2:00:00:00 9:31:51 01.04.2023 16:49:10 start_ppa_seperate.sh wr62
728640 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:28:26 2:00:00:00 9:31:34 01.04.2023 16:49:27 start_ppa_seperate.sh wr63
728641 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:28:45 2:00:00:00 9:31:15 01.04.2023 16:49:46 start_ppa_seperate.sh wr65
728642 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:29:02 2:00:00:00 9:30:58 01.04.2023 16:50:03 start_ppa_seperate.sh wr66
728644 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:49:20 2:00:00:00 9:10:40 01.04.2023 17:10:21 start_ppa_seperate.sh wr68
728645 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:49:38 2:00:00:00 9:10:22 01.04.2023 17:10:39 start_ppa_seperate.sh wr69
728646 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:49:54 2:00:00:00 9:10:06 01.04.2023 17:10:55 start_ppa_seperate.sh wr70
728647 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:50:13 2:00:00:00 9:09:47 01.04.2023 17:11:14 start_ppa_seperate.sh wr71
728648 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:51:24 2:00:00:00 9:08:36 01.04.2023 17:12:25 start_ppa_seperate.sh wr72
728649 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:51:41 2:00:00:00 9:08:19 01.04.2023 17:12:42 start_ppa_seperate.sh wr73
728650 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:51:58 2:00:00:00 9:08:02 01.04.2023 17:12:59 start_ppa_seperate.sh wr74
728651 hpc3 dgromm3m 64 1 64 96 GB 115 GB 1:14:56:32 2:00:00:00 9:03:28 01.04.2023 17:17:33 start_ppa_seperate.sh wr75
728663 gpu mnadar2s 64 1 64 100 GB 62 GB 2:23:13:07 3:00:00:00 46:53 1:34:08 dia-ret-data wr12
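
The t_remain, t_req and t_used columns use a days:HH:MM:SS notation, with leading fields dropped when they are zero (e.g. 46:53 is 46 minutes 53 seconds). A small sketch, assuming that reading of the format, verifying t_req = t_used + t_remain for one row (job 728132):

    from datetime import timedelta

    def parse_duration(s):
        # Parse 'D:HH:MM:SS', 'HH:MM:SS' or 'MM:SS' into a timedelta,
        # padding omitted leading fields with zeros.
        parts = [int(p) for p in s.split(":")]
        days, hours, minutes, seconds = [0] * (4 - len(parts)) + parts
        return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

    # Job 728132: requested walltime should equal used plus remaining time.
    t_remain = parse_duration("1:13:17:09")
    t_req = parse_duration("3:00:00:00")
    t_used = parse_duration("1:10:42:51")
    assert t_used + t_remain == t_req
    print(t_used + t_remain)  # 3 days, 0:00:00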

Waiting/Blocked Jobs (1)

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname wait reason
728658 hpc3 mchaou2s PD 3200 50 64 8 GB 3:00 71748 01.04.2023 21:31:06 4:49:55 job5.sh (Nodes required for job are DOWN, DRAINED or reserved for jobs in higher priority partitions)