Batch Status

Summary

Last updated: 22.10.2018 09:27:01

81 active nodes (41 used, 40 free)

4920 cores (2536 used, 2384 free)

32 running jobs, 137312:00:00 remaining core hours

4 waiting jobs, - waiting core hours
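
The remaining-core-hours figure appears to be the sum over all running jobs of #proc x t_req (the requested walltime); summing the 32 rows below reproduces 137312:00:00 exactly. A minimal sketch of that calculation in Python follows; the helper and variable names are illustrative only and not part of the dashboard, and only two sample rows from the table are shown as input.

# Sketch: reproducing the "remaining core hours" summary figure.
# Assumption: the dashboard sums #proc * t_req over all running jobs
# (summing all 32 rows of the table below gives exactly 137312 hours).

def to_hours(t):
    """Parse '[D:]HH:MM:SS' (e.g. '3:00:00:00' or '6:00:00') into hours."""
    parts = [int(p) for p in t.split(":")]
    days = parts[0] if len(parts) == 4 else 0
    h, m, s = parts[-3:]
    return days * 24 + h + m / 60 + s / 3600

# (#proc, t_req) pairs; two sample rows taken from the running-jobs table.
jobs = [
    (272, "6:00:00"),     # job 20939
    (64, "3:00:00:00"),   # job 20645
    # ... remaining 30 running jobs ...
]

core_hours = sum(nproc * to_hours(t_req) for nproc, t_req in jobs)
print(f"{core_hours:.0f} core hours")  # 137312 when all 32 rows are included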

Nodes


Running Jobs (32)

job queue user #proc #nodes ppn vmem t_remain t_req t_used started jobname hosts
20939 wr13 rberre2m 272 1 272 60GB 1:13:29 6:00:00 4:46:31 4:40:30 job13.sh wr13
20645 hpc3 koedderm 64 1 64 185GB 8:34:19 3:00:00:00 2:15:25:41 19.10.2018 18:01:20 gmx_test_hpc3 wr54
20746 hpc2 rberre2m 48 1 48 60GB 1:00:52:22 3:00:00:00 1:23:07:38 20.10.2018 10:19:23 job.sh wr28
20748 hpc2 rberre2m 48 1 48 60GB 1:00:52:24 3:00:00:00 1:23:07:36 20.10.2018 10:19:25 job.sh wr30
20749 hpc2 rberre2m 48 1 48 60GB 1:00:52:24 3:00:00:00 1:23:07:36 20.10.2018 10:19:25 job.sh wr31
20750 hpc2 rberre2m 48 1 48 60GB 1:00:52:24 3:00:00:00 1:23:07:36 20.10.2018 10:19:25 job.sh wr32
20747 hpc2 rberre2m 48 1 48 60GB 1:00:52:24 3:00:00:00 1:23:07:36 20.10.2018 10:19:25 job.sh wr29
20751 hpc2 rberre2m 48 1 48 60GB 1:00:52:27 3:00:00:00 1:23:07:33 20.10.2018 10:19:28 job.sh wr33
20752 hpc2 rberre2m 48 1 48 60GB 1:00:52:27 3:00:00:00 1:23:07:33 20.10.2018 10:19:28 job.sh wr34
20753 hpc2 rberre2m 48 1 48 60GB 1:00:52:27 3:00:00:00 1:23:07:33 20.10.2018 10:19:28 job.sh wr35
21002 hpc3 lproch3m 256 4 64 120GB 1:11:07:16 1:12:00:00 52:44 8:34:17 510c_rm3 wr66,wr68,wr69,wr70
21003 hpc2 lproch3m 192 4 48 120GB 1:11:09:50 1:12:00:00 50:10 8:36:51 510c_rm4 wr36,wr37,wr38,wr39
21004 hpc lproch3m 256 4 64 120GB 1:11:33:07 1:12:00:00 26:53 9:00:08 10dd_init wr71,wr72,wr73,wr74
20802 hpc dgromm3m 64 1 64 64GB 1:15:04:51 3:00:00:00 1:08:55:09 21.10.2018 0:31:52 start_mpi.sh wr57
20803 hpc dgromm3m 64 1 64 64GB 1:15:06:33 3:00:00:00 1:08:53:27 21.10.2018 0:33:34 start_mpi.sh wr62
20997 hpc dgromm3m 64 1 64 96GB 1:22:19:40 2:00:00:00 1:40:20 7:46:41 start_mpi.sh wr55
20895 hpc dgromm3m 64 1 64 96GB 2:00:51:14 3:00:00:00 23:08:46 21.10.2018 10:18:15 start_mpi.sh wr56
20899 wr14 pbecke2m 56 1 56 64GB 2:01:54:46 3:00:00:00 22:05:14 21.10.2018 11:21:47 job.sh wr14
20900 hpc dgromm3m 64 1 64 96GB 2:02:26:04 3:00:00:00 21:33:56 21.10.2018 11:53:05 start_mpi.sh wr51
20935 hpc dgromm3m 64 1 64 96GB 2:04:14:59 3:00:00:00 19:45:01 21.10.2018 13:42:00 start_mpi.sh wr53
20936 hpc dgromm3m 64 1 64 96GB 2:04:16:02 3:00:00:00 19:43:58 21.10.2018 13:43:03 start_mpi.sh wr58
20947 gpu amalli2s 64 1 64 70GB 2:04:48:29 2:20:30:00 15:41:31 21.10.2018 17:45:30 usainbolt.sh wr19
20951 hpc1 amalli2s 32 1 32 70GB 2:05:01:39 2:20:30:00 15:28:21 21.10.2018 17:58:40 usainbolt.sh wr23
20978 hpc1 amalli2s 32 1 32 70GB 2:06:13:54 2:20:30:00 14:16:06 21.10.2018 19:10:55 usainbolt.sh wr24
20984 hpc3 koedderm 64 1 64 185GB 2:11:21:56 3:00:00:00 12:38:04 21.10.2018 20:48:57 gmx_test_hpc3 wr50
20989 hpc3 koedderm 64 1 64 185GB 2:11:48:21 3:00:00:00 12:11:39 21.10.2018 21:15:22 gmx_test_hpc3 wr59
20990 hpc3 koedderm 64 1 64 185GB 2:12:21:20 3:00:00:00 11:38:40 21.10.2018 21:48:21 gmx_test_hpc3 wr52
20996 hpc1 dgromm3m 32 1 32 120GB 2:22:05:43 3:00:00:00 1:54:17 7:32:44 start_sf.sh wr20
21000 hpc dgromm3m 64 1 64 120GB 2:22:26:08 3:00:00:00 1:33:52 7:53:09 start_omp.sh wr63
21001 hpc dgromm3m 64 1 64 64GB 2:22:34:08 3:00:00:00 1:25:52 8:01:09 start_omp.sh wr64
21005 hpc dgromm3m 64 1 64 96GB 2:23:50:58 3:00:00:00 9:02 9:17:59 start_mpi.sh wr60
21006 hpc dgromm3m 64 1 64 96GB 2:23:52:13 3:00:00:00 7:47 9:19:14 start_mpi.sh wr61

Waiting/Blocked Jobs (4)

Jobs with problems are highlighted. For each highlighted job, check whether its resource request can be satisfied by any node serving the queue (most probably it cannot); a minimal sketch of such a check follows the table below.

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname est.hosts
17352 any jlewer3s PD 8 1 8 16GB 3:08:00:00 910 09.10.2018 18:21:56 12:15:05:05 abaqus_slurm.sh
20940 wr13 rberre2m PD 272 1 272 60GB 6:00:00 83 21.10.2018 16:37:48 16:49:13 job13.sh
20941 wr13 rberre2m PD 272 1 272 60GB 6:00:00 83 21.10.2018 16:37:49 16:49:12 job13.sh
20942 wr13 rberre2m PD 272 1 272 60GB 6:00:00 83 21.10.2018 16:37:51 16:49:10 job13.sh
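
As a rough illustration of the check suggested above, the sketch below tests whether a pending job's per-node request (ppn and vmem) fits at least one node in a node inventory. The inventory entries and the satisfiable helper are hypothetical examples, not actual specs or tooling of the wr* cluster.

# Sketch of the "can my request be satisfied at all?" check suggested above.
# The node inventory below is hypothetical; substitute the real core and
# memory figures of the nodes serving your queue.

nodes = [
    {"name": "wr13", "cores": 272, "mem_gb": 96},   # hypothetical specs
    {"name": "wr28", "cores": 48,  "mem_gb": 192},  # hypothetical specs
    {"name": "wr50", "cores": 64,  "mem_gb": 256},  # hypothetical specs
]

def satisfiable(ppn, vmem_gb, nodes):
    """True if at least one node offers enough cores and memory per node."""
    return any(n["cores"] >= ppn and n["mem_gb"] >= vmem_gb for n in nodes)

# Example: waiting job 20940 asks for ppn=272 and vmem=60GB per node.
print(satisfiable(272, 60, nodes))  # True only if a 272-core node exists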