Batch Status

Summary

Last updated: 23:33:01 17.06.2018

38 active nodes (29 used, 9 free)

1612 cores (1160 used, 452 free)

25 running jobs, 34808:24:00 remaining core hours

8 waiting jobs, 13694:40:00 waiting core hours

Nodes

node #cores used by jobs
wr3 272 6639
wr4 96 6877
wr5 56 5739
wr6 12
wr7 8 6635
wr8 48 6847
wr10 16 6818
wr11 16 6813
wr12 16 6878
wr13 16 6806
wr14 16 6805
wr15 16 6812
wr16 16 6814
wr17 16 6804
wr19 16 6839
wr20 32 6621
wr21 32 6623
wr22 32 6622
wr23 32
wr24 32
wr25 32
wr26 32 6869
wr27 32 6869
wr28 48 5861
wr29 48
wr30 48 6416
wr31 48 6415
wr32 48 5756
wr33 48
wr34 48
wr35 48 5756
wr36 48
wr37 48 6441
wr38 48
wr39 48 5756
wr40 48 5756
wr41 48 5738
wr42 48 6866

Running Jobs (25)

job queue user #proc #nodes ppn vmem t_remain t_req t_used started jobname hosts
6804 mpi rberre2m 16 1 16 2GB 51:57 6:01:00 5:08:28 18:23:58 job.sh wr17
6805 mpi rberre2m 16 1 16 2GB 52:10 6:01:00 5:08:27 18:24:11 job.sh wr14
6806 mpi rberre2m 16 1 16 3GB 52:44 6:01:00 5:07:42 18:24:45 job.sh wr13
6877 wr4 alysek2s 96 1 96 59GB 1:01:52 1:20:00 17:23 23:14:53 job_vector.sh wr4
6812 mpi rberre2m 16 1 16 3GB 1:09:42 6:01:00 4:50:26 18:41:43 job.sh wr15
6813 mpi rberre2m 16 1 16 3GB 1:09:43 6:01:00 4:50:26 18:41:44 job.sh wr11
6814 mpi rberre2m 16 1 16 3GB 1:09:47 6:01:00 4:50:27 18:41:48 job.sh wr16
6818 mpi rberre2m 16 1 16 2GB 1:25:27 6:01:00 4:34:42 18:57:28 job.sh wr10
6839 mpi rberre2m 16 1 16 3GB 2:11:17 6:01:00 3:48:58 19:43:18 job.sh wr19
6866 hpc rberre2m 0 1 1 4GB 2:30:30 4:01:00 1:29:22 22:02:31 job42.sh wr42
6847 wr8 rberre2m 48 1 48 6GB 2:39:15 6:05:00 3:24:53 20:07:16 job8.sh wr8
6635 wr7 rberre2m 8 1 8 2GB 5:17:34 6:05:00 46:33 22:45:35 job7.sh wr7
6639 wr3 rberre2m 272 1 272 7GB 5:19:41 6:05:00 44:41 22:47:42 job3.sh wr3
6878 mpi rberre2m 16 1 16 2GB 5:43:39 6:01:00 17:10 23:15:40 job.sh wr12
6869 hpc1 rsharm2s 64 2 32 62GB 10:01:45 11:10:00 1:07:39 22:24:46 hanuman.sh wr26 wr27
5738 default dgromm3m 48 1 48 58GB 13:02:50 3:00:00:00 2:10:56:06 15.06.2018 12:35:51 start.sh wr41
5739 default dgromm3m 48 1 48 74GB 17:06:51 3:00:00:00 2:06:52:26 15.06.2018 16:39:52 start.sh wr5
5861 default dgromm3m 48 1 48 55GB 20:44:02 2:00:00:00 1:03:15:05 16.06.2018 20:17:03 start.sh wr28
5756 hpc2 lproch3m 96 4 24 22GB 23:22:54 2:00:00:00 1:00:36:30 16.06.2018 22:55:55 w_518_init wr32 wr35 wr39 wr40
6415 hpc2 dgromm3m 48 1 48 43GB 1:20:44:21 3:00:00:00 1:03:15:02 16.06.2018 20:17:22 start.sh wr31
6416 default dgromm3m 48 1 48 42GB 1:23:22:54 3:00:00:00 1:00:36:27 16.06.2018 22:55:55 start.sh wr30
6441 default dgromm3m 48 1 48 44GB 2:04:07:08 3:00:00:00 19:52:17 3:40:09 start.sh wr37
6621 hpc kkuell3m 32 1 32 5GB 2:12:33:32 3:00:00:00 11:26:27 12:06:33 wopt_gen_martys_gAds0.360000.sh wr20
6622 hpc kkuell3m 32 1 32 5GB 2:12:33:36 3:00:00:00 11:25:37 12:06:37 wopt_gen_martys_gAds0.405000.sh wr22
6623 hpc kkuell3m 32 1 32 5GB 2:12:33:46 3:00:00:00 11:25:08 12:06:47 wopt_gen_martys_gAds0.450000.sh wr21
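The t_remain, t_req and t_used columns above (and the core-hour totals in the summary) are colon-separated time fields read from the right: seconds, minutes, hours, and an optional leading day count, so 51:57 is 51 minutes 57 seconds and 3:00:00:00 is three days. The following Python sketch converts such a field to hours; the function name and the per-job core-hour estimate are illustrative, not part of the batch system:

def duration_to_hours(field: str) -> float:
    """Convert a colon-separated batch time field to hours.

    Fields are read from the right (seconds, minutes, hours, days):
    "51:57" is 51 min 57 s, "6:01:00" is 6 h 1 min, and
    "2:04:07:08" is 2 days 4 h 7 min 8 s.
    """
    weights = (1, 60, 3600, 86400)   # seconds per unit, right to left
    parts = [int(p) for p in field.split(":")]
    seconds = sum(w * v for w, v in zip(weights, reversed(parts)))
    return seconds / 3600.0

# Example from the table above: job 6441 has t_remain 2:04:07:08 on 48 cores,
# i.e. roughly 48 * 52.1 ~ 2500 core hours still to run.
print(round(duration_to_hours("2:04:07:08") * 48))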

Waiting/Blocked Jobs (8)

Jobs with problems are highlighted. For these jobs, check whether your resource request can actually be satisfied by the nodes available in the requested queue (most probably it cannot!); a rough core-count check is sketched after the table below.

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname est.hosts
6879 wr4 sobst2s Q 96 1 96 16GB 20:00 8107 23:16:49 16:12 job_image.sh wr4
6636 wr7 rberre2m Q 8 1 8 6GB 6:05:00 1994 12:30:26 11:02:35 job7.sh wr7
6637 wr7 rberre2m Q 8 1 8 6GB 6:05:00 1994 12:30:26 11:02:35 job7.sh wr7
6640 wr3 rberre2m Q 272 1 272 60GB 6:05:00 1989 12:30:29 11:02:32 job3.sh wr3
6641 wr3 rberre2m Q 272 1 272 60GB 6:05:00 1989 12:30:30 11:02:31 job3.sh wr3
6066 wr7 amalli2s Q 16 1 16 8GB 2:17:00:00 0 15.06.2018 15:48:11 2:07:44:50 my-first-shell.sh
5757 hpc2 lproch3m H 96 4 24 120GB 2:00:00:00 0 14.06.2018 16:28:37 3:07:04:24 w_519_init
5758 hpc2 lproch3m H 96 4 24 120GB 2:00:00:00 0 14.06.2018 16:28:37 3:07:04:24 w_520_init
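As a rough illustration of the check suggested above, the sketch below compares a waiting job's core request against the node list from this report. The node core counts are taken from the Nodes table; the function name and the example requests are illustrative, and queue membership and per-node memory are not listed in this report, so they are not checked here.

# Cores per node, taken from the Nodes table above.
NODE_CORES = {
    "wr3": 272, "wr4": 96, "wr5": 56, "wr6": 12, "wr7": 8, "wr8": 48,
    **{f"wr{i}": 16 for i in list(range(10, 18)) + [19]},
    **{f"wr{i}": 32 for i in range(20, 28)},
    **{f"wr{i}": 48 for i in range(28, 43)},
}

def cores_can_fit(num_nodes: int, ppn: int) -> bool:
    """True if at least num_nodes nodes have ppn or more cores each.

    Only core counts are checked; queue membership and per-node memory
    are not part of this report and have to be verified separately.
    """
    return sum(1 for cores in NODE_CORES.values() if cores >= ppn) >= num_nodes

# Waiting jobs 6640/6641 ask for 1 node with ppn=272: only wr3 qualifies,
# so they have to wait until wr3 is free again.
print(cores_can_fit(1, 272))   # True (exactly one matching node)

# Held jobs 5757/5758 ask for 4 nodes with ppn=24: all 32- and 48-core
# nodes qualify core-wise; the 120GB vmem request is not checked here.
print(cores_can_fit(4, 24))    # True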