Batch Status

Summary

last updated: 23:52:02 11.12.2017

38 active nodes (31 used, 7 free)

1612 cores (524 used, 1088 free)

29 running jobs, 11047:04:00 remaining core hours

5 waiting jobs, 3840:00:00 waiting core hours
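
Both core-hour figures are consistent with summing the product of #proc and t_req over the corresponding job list below: the five waiting jobs give 32 x 72h + 32 x 24h + 32 x 24h = 3840:00:00, and the 29 running jobs give 11047:04:00. The following is a minimal sketch of that aggregation, assuming the job tables have been parsed into (procs, t_req) pairs; the function and variable names are illustrative, not part of the batch system.

  from datetime import timedelta

  def parse_walltime(s):
      # "H:MM:SS" or "D:HH:MM:SS", as the times are printed in the job tables below
      parts = [int(p) for p in s.split(":")]
      if len(parts) == 3:
          parts = [0] + parts   # no day field
      d, h, m, sec = parts
      return timedelta(days=d, hours=h, minutes=m, seconds=sec)

  def core_time(jobs):
      # jobs: iterable of (procs, t_req string); returns the total core time as H:MM:SS
      total = sum((procs * parse_walltime(t) for procs, t in jobs), timedelta())
      hours, rest = divmod(int(total.total_seconds()), 3600)
      return "%d:%02d:%02d" % (hours, rest // 60, rest % 60)

  # The five waiting jobs from the table further down: (#proc, t_req)
  waiting = [(32, "3:00:00:00"), (32, "1:00:00:00"), (32, "1:00:00:00"),
             (0, "4:01:00"), (0, "4:01:00")]
  print(core_time(waiting))   # -> 3840:00:00, the "waiting core hours" above
  # Feeding the 29 running jobs' (#proc, t_req) pairs through the same function gives 11047:04:00.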

Nodes

node    #cores  used by jobs
wr3     272
wr4     96
wr5     56      18
wr6     12
wr7     8       160
wr8     48      262
wr10    16      272
wr11    16      264
wr12    16      225
wr13    16      224
wr14    16      223
wr15    16      222
wr16    16      271
wr17    16      221
wr19    16      261
wr20    32      17
wr21    32      16
wr22    32      15
wr23    32      14
wr24    32      13
wr25    32      12
wr26    32      11
wr27    32      10
wr28    48      21
wr29    48      26660
wr30    48      20
wr31    48      26659
wr32    48
wr33    48
wr34    48
wr35    48
wr36    48      266024
wr37    48      266023
wr38    48      266022
wr39    48      26614, 266021
wr40    48      26661
wr41    48      19
wr42    48      44
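
The node counts in the summary follow directly from this table: 38 listed nodes with 1612 cores in total, of which the 31 nodes carrying at least one job ID count as used and the remaining 7 (wr3, wr4, wr6, wr32, wr33, wr34, wr35) as free. A minimal sketch of that tally, assuming the table has been parsed into (name, cores, job ids) tuples (only an excerpt of the rows is shown):

  # Nodes table above, parsed as (name, cores, job ids) tuples (excerpt only)
  nodes = [("wr3", 272, []), ("wr4", 96, []), ("wr5", 56, ["18"]),
           ("wr6", 12, []), ("wr32", 48, []), ("wr42", 48, ["44"])]

  total_cores = sum(cores for _, cores, _ in nodes)
  used = sum(1 for _, _, jobs in nodes if jobs)
  print(len(nodes), "active nodes,", used, "used,", len(nodes) - used, "free,",
        total_cores, "cores")
  # With all 38 rows this prints: 38 active nodes, 31 used, 7 free, 1612 cores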

Running Jobs (29)

job    queue    user      #proc  #nodes  ppn  vmem   t_remain    t_req       t_used      started              jobname       hosts
221    mpi      rberre2m  16     1       16   2GB    4:13        3:01:00     2:55:37     20:55:14             job.sh        wr17
222    mpi      rberre2m  16     1       16   2GB    8:27        3:01:00     2:51:41     20:59:28             job.sh        wr15
160    wr7      rberre2m  8      1       8    1GB    28:11       6:05:00     5:35:57     18:15:12             job7.sh       wr7
223    mpi      rberre2m  16     1       16   2GB    28:46       3:01:00     2:31:25     21:19:47             job.sh        wr14
262    wr8      rberre2m  48     1       48   6GB    36:32       2:05:00     1:27:40     22:23:33             job8.sh       wr8
224    mpi      rberre2m  16     1       16   2GB    37:10       3:01:00     2:22:39     21:28:11             job.sh        wr13
225    mpi      rberre2m  16     1       16   2GB    50:16       3:01:00     2:09:42     21:41:17             job.sh        wr12
261    mpi      rberre2m  16     1       16   2GB    1:24:46     3:01:00     1:35:56     22:15:47             job.sh        wr19
264    mpi      rberre2m  16     1       16   2GB    2:29:58     3:01:00     29:55       23:20:59             job.sh        wr11
271    mpi      rberre2m  16     1       16   2GB    2:35:53     3:01:00     23:57       23:26:54             job.sh        wr16
272    mpi      rberre2m  16     1       16   2GB    2:39:14     3:01:00     20:56       23:30:15             job.sh        wr10
10     default  kkirsc3m  8      1       8    113GB  3:41:34     18:00:00    14:17:39    9:33:35              01            wr27
11     default  kkirsc3m  8      1       8    113GB  3:41:34     18:00:00    14:17:55    9:33:35              02            wr26
12     default  kkirsc3m  8      1       8    113GB  3:41:34     18:00:00    14:17:38    9:33:35              03            wr25
13     default  kkirsc3m  8      1       8    113GB  3:41:34     18:00:00    14:17:40    9:33:35              04            wr24
14     default  kkirsc3m  8      1       8    113GB  3:41:35     18:00:00    14:17:52    9:33:36              05            wr23
15     default  kkirsc3m  8      1       8    113GB  3:41:35     18:00:00    14:17:44    9:33:36              06            wr22
16     default  kkirsc3m  8      1       8    113GB  3:41:35     18:00:00    14:17:43    9:33:36              07            wr21
17     default  kkirsc3m  8      1       8    113GB  3:41:35     18:00:00    14:17:33    9:33:36              08            wr20
18     default  kkirsc3m  8      1       8    113GB  3:41:35     18:00:00    14:17:38    9:33:36              09            wr5
19     default  kkirsc3m  8      1       8    113GB  3:47:19     18:00:00    14:11:58    9:39:20              10            wr41
20     default  kkirsc3m  8      1       8    113GB  3:50:17     18:00:00    14:08:13    9:42:18              11            wr30
21     default  kkirsc3m  8      1       8    113GB  3:51:36     18:00:00    14:07:35    9:43:37              12            wr28
26614  default  agaier2m  16     1       16   41GB   23:11:56    2:00:00:00  1:00:47:32  10.12.2017 23:03:57  SA-NEAT       wr39
26659  hpc2     dgromm3m  48     1       48   29GB   1:09:50:17  2:00:00:00  14:08:36    9:42:18              start.sh      wr31
26660  hpc2     dgromm3m  48     1       48   29GB   1:09:51:36  2:00:00:00  14:07:44    9:43:37              start.sh      wr29
26661  hpc2     dgromm3m  48     1       48   29GB   1:10:37:31  2:00:00:00  13:21:43    10:29:32             start.sh      wr40
26602  default  tjandt2s  16     1       16   64GB   2:00:00:00  2:00:00:00  -           -                    eff
44     hpc      coligs5m  4      1       4    3GB    2:14:47:48  3:00:00:00  9:11:13     14:39:49             jobscript.sh  wr42

Waiting/Blocked Jobs (5)

Jobs with problems are highlighted. For these jobs, check whether your resource request can be satisfied by the nodes in the queue (most probably it cannot); a core-count sketch of that check follows the table below.

job  queue  user      state  #proc  #nodes  ppn  vmem   t_req       prio  enqueued  waiting   jobname   est.hosts
47   hpc1   marco     Q      32     1       32   120GB  3:00:00:00  7790  12:53:57  10:58:04  PSA       wr27
288  hpc1   mschen3m  Q      32     1       32   20GB   1:00:00:00  7136  23:43:37  8:24      lysozym   wr26
291  hpc1   mschen3m  Q      32     1       32   20GB   1:00:00:00  7136  23:43:37  8:24      lysozym   wr25
23   hpc    rberre2m  Q      0      1       1    100GB  4:01:00     2257  9:39:19   14:12:42  job41.sh  wr41
73   hpc    rberre2m  Q      0      1       1    100GB  4:01:00     1944  14:39:48  9:12:13   job42.sh  wr42
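
With the information on this page, only the per-node core request (ppn against the #cores column of the node table) can be checked; per-node memory is not listed here, so a vmem request such as 120GB cannot be verified from this table alone. The following is a minimal sketch of the core-count part of that check, assuming the node table has been parsed into a dictionary (the node excerpt and function name are illustrative):

  # Core counts per node, taken from the "Nodes" table above (excerpt).
  NODE_CORES = {"wr3": 272, "wr4": 96, "wr5": 56, "wr6": 12, "wr7": 8,
                "wr8": 48, "wr20": 32, "wr28": 48, "wr42": 48}

  def fits_by_cores(ppn, nodes=NODE_CORES):
      # True if at least one node has enough cores for one chunk (ppn) of the job.
      return any(cores >= ppn for cores in nodes.values())

  # Waiting jobs from the table above: (job id, ppn)
  for job, ppn in [(47, 32), (288, 32), (291, 32), (23, 1), (73, 1)]:
      print(job, "fits by core count" if fits_by_cores(ppn) else "no node has enough cores")

Queue-to-node assignment and memory limits still have to be checked against the batch system itself; this sketch only rules out jobs whose per-node core request exceeds every node listed above.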