Batch Status

Summary

Last updated: 18.10.2017 07:19:02

37 active nodes (24 used, 13 free)

1564 cores (725 used, 839 free)

31 running jobs, 24247:04:00 core-hours remaining

8 waiting jobs, 312:00:00 core-hours requested
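
Both core-hour figures are aggregates over the job tables below; they match the sum of requested walltime times process count (t_req × #proc) over the respective table. For example, the eight waiting jobs request 6 × 96 × 20:00 + 240 × 10:00 + 480 × 10:00 = 312:00:00 core-hours. A minimal sketch of that arithmetic (field names mirror the table headers; parsing the raw page into dicts is assumed to have happened elsewhere):

```python
from datetime import timedelta

def parse_walltime(s):
    """Parse 'd:hh:mm:ss', 'hh:mm:ss' or 'mm:ss' into a timedelta."""
    parts = [int(p) for p in s.split(":")]
    parts = [0] * (4 - len(parts)) + parts  # left-pad to (d, h, m, s)
    d, h, m, sec = parts
    return timedelta(days=d, hours=h, minutes=m, seconds=sec)

def core_hours(jobs):
    """Sum t_req x #proc over a job table, as a timedelta."""
    return sum((parse_walltime(j["t_req"]) * j["proc"] for j in jobs),
               timedelta())

# the eight waiting jobs from the table below
waiting = 6 * [{"proc": 96, "t_req": "20:00"}] \
        + [{"proc": 240, "t_req": "10:00"},
           {"proc": 480, "t_req": "10:00"}]
print(core_hours(waiting))  # 13 days, 0:00:00 == 312:00:00 core-hours
```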

Nodes

node #cores used by jobs (job IDs)
wr3 272
wr4 96 2032
wr5 56
wr6 12
wr7 8 2210
wr8 48 2233
wr10 16 2230
wr11 16 2227
wr12 16 2228
wr13 16 2224
wr14 16 2232
wr15 16 2231
wr16 16 2223
wr17 16 2234
wr19 16 2229
wr20 32
wr21 32
wr22 32
wr23 32
wr24 32
wr25 32
wr26 32
wr27 32 2148
wr28 48 2147,2207
wr29 48 2145,2146
wr30 48 2143,2144
wr31 48 2141,2142
wr32 48 2139,2140
wr33 48
wr34 48 2107
wr35 48 2106
wr36 48 1934
wr37 48
wr38 48 2019
wr39 48
wr41 48 2225
wr42 48 2226
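
The Summary's node counts follow directly from this table: a node counts as used when at least one job ID is listed for it (24 of the 37 nodes above), and the 1564-core total is the sum of the #cores column. A small sketch, assuming the rows have been parsed into (name, cores, job-ID list) tuples:

```python
# rows from the Nodes table above (excerpt); [] means the node is free
nodes = [("wr3", 272, []), ("wr4", 96, [2032]),
         ("wr5", 56, []), ("wr28", 48, [2147, 2207])]

used = [name for name, _, jobs in nodes if jobs]
total_cores = sum(cores for _, cores, _ in nodes)
print(f"{len(nodes)} active nodes ({len(used)} used, "
      f"{len(nodes) - len(used)} free), {total_cores} cores")
```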

Running Jobs (31)

job queue user #proc #nodes ppn vmem t_remain t_req t_used started jobname hosts
2210 wr7 rberre2m 8 1 8 1GB 33:24 6:05:00 5:30:42 1:47:26 job7.sh wr7
2223 mpi rberre2m 16 1 16 2GB 1:00:35 3:01:00 1:59:56 5:18:37 job.sh wr16
2224 mpi rberre2m 16 1 16 2GB 1:16:52 3:01:00 1:43:25 5:34:54 job.sh wr13
2233 wr8 rberre2m 48 1 48 6GB 1:37:08 2:05:00 27:06 6:51:10 job8.sh wr8
2227 mpi rberre2m 16 1 16 2GB 1:55:17 3:01:00 1:04:42 6:13:19 job.sh wr11
2228 mpi rberre2m 16 1 16 2GB 1:55:25 3:01:00 1:04:41 6:13:27 job.sh wr12
2229 mpi rberre2m 16 1 16 3GB 1:55:27 3:01:00 1:04:42 6:13:29 job.sh wr19
2230 mpi rberre2m 16 1 16 2GB 1:55:31 3:01:00 1:04:41 6:13:33 job.sh wr10
2231 mpi rberre2m 16 1 16 2GB 1:55:40 3:01:00 1:04:41 6:13:42 job.sh wr15
2232 mpi rberre2m 16 1 16 2GB 1:55:42 3:01:00 1:04:41 6:13:44 job.sh wr14
2225 hpc rberre2m 0 1 1 2GB 2:28:50 4:01:00 1:31:28 5:46:52 job41.sh wr41
2226 hpc rberre2m 0 1 1 3GB 2:39:02 4:01:00 1:21:11 5:57:04 job42.sh wr42
2234 mpi rberre2m 16 1 16 2GB 2:46:54 3:01:00 13:25 7:04:56 job.sh wr17
1934 hpc2 dgromm3m 48 1 48 26GB 3:43:46 2:00:00:00 1:20:15:18 16.10.2017 11:02:48 start.sh wr36
2148 default tjandt2s 16 1 16 35GB 6:29:24 1:00:00:00 17:30:08 17.10.2017 13:48:26 E3CEff wr27
2019 hpc2 dgromm3m 48 1 48 27GB 7:52:38 2:00:00:00 1:16:07:00 16.10.2017 15:11:40 start.sh wr38
2106 hpc2 dgromm3m 48 1 48 30GB 1:01:01:34 2:00:00:00 22:57:29 17.10.2017 8:20:36 start.sh wr35
2107 hpc2 dgromm3m 48 1 48 27GB 1:01:03:41 2:00:00:00 22:55:51 17.10.2017 8:22:43 start.sh wr34
2147 default agaier2m 12 1 12 32GB 1:06:26:55 2:00:00:00 17:31:55 17.10.2017 13:45:57 SAIL_ffd wr28
2137 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 10:51:20 17.10.2017 13:45:57 OpenFOAM_caseRunner wr40
2138 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 10:51:20 17.10.2017 13:45:57 OpenFOAM_caseRunner wr40
2139 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:31:47 17.10.2017 13:45:57 OpenFOAM_caseRunner wr32
2140 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:31:47 17.10.2017 13:45:57 OpenFOAM_caseRunner wr32
2141 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:31:56 17.10.2017 13:45:57 OpenFOAM_caseRunner wr31
2142 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:31:56 17.10.2017 13:45:57 OpenFOAM_caseRunner wr31
2143 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:32:04 17.10.2017 13:45:57 OpenFOAM_caseRunner wr30
2144 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:32:04 17.10.2017 13:45:57 OpenFOAM_caseRunner wr30
2145 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:31:54 17.10.2017 13:45:57 OpenFOAM_caseRunner wr29
2146 hpc2 agaier2m 12 1 12 6GB 1:06:26:55 2:00:00:00 17:31:54 17.10.2017 13:45:57 OpenFOAM_caseRunner wr29
2032 wr4 smuell3g 96 1 96 123GB 1:09:34:08 3:00:00:00 1:14:25:18 16.10.2017 16:53:10 OF_v1606_B008_kOmegaSSTDDES wr4
2207 hpc ahagg2s 17 1 17 27GB 1:17:42:27 2:00:00:00 6:16:23 1:01:29 submit_acq_gps.sh wr28

Waiting/Blocked Jobs (8)

Jobs with problems are highlighted. For each of these jobs, check whether its resource request can be satisfied at all by the nodes in the queue (most probably it cannot); a sketch of that check follows the table.

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname est.hosts
2038 wr4 kgully2s Q 96 1 96 16GB 20:00 11539 16.10.2017 17:14:22 1:14:04:40 job_heat.sh wr4
2039 wr4 kgully2s Q 96 1 96 16GB 20:00 11538 16.10.2017 17:14:46 1:14:04:16 job_heat.sh wr4
2040 wr4 kgully2s Q 96 1 96 16GB 20:00 11538 16.10.2017 17:14:46 1:14:04:16 job_heat.sh wr4
2035 wr4 drusch2s Q 96 1 96 16GB 20:00 11190 16.10.2017 17:13:00 1:14:06:02 job_heat.sh wr4
2036 wr4 drusch2s Q 96 1 96 16GB 20:00 11190 16.10.2017 17:13:01 1:14:06:01 job_heat.sh wr4
2037 wr4 drusch2s Q 96 1 96 16GB 20:00 11190 16.10.2017 17:13:02 1:14:06:00 job_heat.sh wr4
2170 hpc2 drusch2s Q 240 5 48 1GB 10:00 9702 17.10.2017 16:24:38 14:54:24 job_mpi-5.sh wr33 wr36 wr37 wr38 wr39
2171 hpc2 drusch2s Q 480 10 48 1GB 10:00 9702 17.10.2017 16:24:40 14:54:22 job_mpi-10.sh wr30 wr31 wr32 wr33 wr34 wr35 wr36 wr37 wr38 wr39
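
One way to perform that check: a request for #nodes × ppn can only ever be scheduled if the queue contains at least #nodes nodes with ppn or more cores, regardless of current load. A rough sketch, using core counts from the Nodes table above (the queue-to-node mapping is not part of this status page, so the hpc2 node set below is an assumption taken from the est.hosts column):

```python
# cores per node -- assumed hpc2 nodes, taken from est.hosts above
HPC2_CORES = {"wr30": 48, "wr31": 48, "wr32": 48, "wr33": 48,
              "wr34": 48, "wr35": 48, "wr36": 48, "wr37": 48,
              "wr38": 48, "wr39": 48}

def satisfiable(n_nodes, ppn, node_cores):
    """True if the queue has at least n_nodes nodes with >= ppn cores,
    i.e. the request could be met once enough nodes drain."""
    return sum(1 for c in node_cores.values() if c >= ppn) >= n_nodes

print(satisfiable(5, 48, HPC2_CORES))   # job 2170: 5 nodes x 48 -> True
print(satisfiable(10, 48, HPC2_CORES))  # job 2171: 10 nodes x 48 -> True
print(satisfiable(1, 96, HPC2_CORES))   # a 96-ppn request here -> False
```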