Batch Status

Summary

last updated: 14:46:01 19.08.2017

39 active nodes (36 used, 3 free)
1644 cores (1072 used, 572 free)
38 running jobs, 31687:04:00 remaining core hours
6 waiting jobs, 9216:00:00 waiting core hours
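
Both core-hour totals above can be reproduced from the job tables below as the sum of #proc x t_req per job (31687:04:00 for the running jobs, 9216:00:00 for the waiting ones). The Python sketch below illustrates the arithmetic for the waiting jobs; it is an illustration only, not the code that generates this page.

def to_hours(t):
    # Parse "H:MM:SS" or "D:HH:MM:SS" (the formats used in the tables) into hours.
    parts = [int(p) for p in t.split(":")]
    if len(parts) == 4:              # D:HH:MM:SS
        d, h, m, s = parts
        h += 24 * d
    else:                            # H:MM:SS
        h, m, s = parts
    return h + m / 60 + s / 3600

# (#proc, t_req) pairs copied from the waiting-jobs table below
waiting = [(32, "1:00:00:00"), (32, "1:00:00:00"), (32, "1:00:00:00"),
           (0, "4:01:00"), (0, "4:01:00"), (144, "2:00:00:00")]
print(sum(p * to_hours(t) for p, t in waiting))   # 9216.0 waiting core hours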

Nodes

node   #cores  used by jobs
wr0    32
wr3    272
wr4    96      832
wr5    56      829
wr6    12
wr7    8       1304
wr8    48      1307
wr10   16      1317
wr11   16      1322
wr12   16      1321
wr13   16      1320
wr14   16      1319
wr15   16      1315
wr16   16      1306
wr17   16      1324
wr19   16      1323
wr20   32      840
wr21   32      835
wr22   32      835
wr23   32      839
wr24   32      1291
wr25   32      828
wr26   32      827,830
wr27   32      1300
wr28   48      1266
wr29   48      844,845
wr30   48      1255
wr31   48      1289
wr32   48      1297
wr33   48      1267
wr34   48      1296
wr35   48      1298
wr36   48      1194,1318
wr37   48      1278
wr38   48      1299
wr39   48      1279
wr40   48      1190
wr41   48      1290
wr42   48      1243

Running Jobs (38)

job queue user #proc #nodes ppn vmem t_remain t_req t_used started jobname hosts
1307 wr8 rberre2m 48 1 48 6GB 1:38 2:05:00 2:02:24 12:42:39 job8.sh wr8
1306 mpi rberre2m 16 1 16 2GB 55:37 3:01:00 2:04:26 12:40:38 job.sh wr16
1315 mpi rberre2m 16 1 16 2GB 2:27:50 3:01:00 32:24 14:12:51 job.sh wr15
1317 mpi rberre2m 16 1 16 2GB 2:28:09 3:01:00 31:45 14:13:10 job.sh wr10
1319 mpi rberre2m 16 1 16 2GB 2:31:31 3:01:00 29:11 14:16:32 job.sh wr14
1320 mpi rberre2m 16 1 16 2GB 2:39:48 3:01:00 20:23 14:24:49 job.sh wr13
1321 mpi rberre2m 16 1 16 2GB 2:39:49 3:01:00 20:56 14:24:50 job.sh wr12
1322 mpi rberre2m 16 1 16 2GB 2:52:02 3:01:00 7:45 14:37:03 job.sh wr11
1323 mpi rberre2m 16 1 16 2GB 2:54:51 3:01:00 5:27 14:39:52 job.sh wr19
1324 mpi rberre2m 16 1 16 2GB 2:56:36 3:01:00 3:11 14:41:37 job.sh wr17
1304 wr7 rberre2m 8 1 8 1GB 3:38:49 6:05:00 2:25:41 12:19:50 job7.sh wr7
827 default dgromm3m 16 1 16 5GB 20:57:43 2:00:00:00 1:03:01:23 18.08.2017 11:43:44 start.sh wr26
828 default dgromm3m 24 1 24 8GB 20:58:48 2:00:00:00 1:03:00:17 18.08.2017 11:44:49 start.sh wr25
829 default dgromm3m 48 1 48 18GB 21:00:19 2:00:00:00 1:02:59:14 18.08.2017 11:46:20 start.sh wr5
830 default dgromm3m 16 1 16 5GB 21:01:10 2:00:00:00 1:02:58:41 18.08.2017 11:47:11 start.sh wr26
832 wr4 dgromm3m 96 1 96 57GB 21:07:17 2:00:00:00 1:02:52:03 18.08.2017 11:53:18 start.sh wr4
835 default dgromm3m 64 2 32 26GB 21:12:46 2:00:00:00 1:02:46:32 18.08.2017 11:58:47 start.sh wr21 wr22
839 default dgromm3m 16 1 16 5GB 21:34:27 2:00:00:00 1:02:24:40 18.08.2017 12:20:28 start.sh wr23
1243 hpc ashahi3s 32 1 32 5GB 21:52:59 1:00:00:00 2:06:14 12:39:00 cavity_grid79.000000_re1000.000000.sh wr42
840 default dgromm3m 32 1 32 10GB 22:10:59 2:00:00:00 1:01:47:58 18.08.2017 12:57:00 start.sh wr20
1255 hpc ashahi3s 32 1 32 5GB 22:13:10 1:00:00:00 1:46:06 12:59:11 cavity_grid89.000000_re1000.000000.sh wr30
1266 hpc ashahi3s 32 1 32 5GB 22:38:52 1:00:00:00 1:20:38 13:24:53 cavity_grid99.000000_re800.000000.sh wr28
1267 hpc ashahi3s 32 1 32 5GB 22:44:20 1:00:00:00 1:14:46 13:30:21 cavity_grid99.000000_re1000.000000.sh wr33
844 default dgromm3m 16 1 16 5GB 22:46:53 2:00:00:00 1:01:12:23 18.08.2017 13:32:54 start.sh wr29
845 default dgromm3m 16 1 16 5GB 22:46:53 2:00:00:00 1:01:12:23 18.08.2017 13:32:54 start.sh wr29
1278 hpc ashahi3s 32 1 32 5GB 23:09:08 1:00:00:00 50:09 13:55:09 cavity_grid109.000000_re800.000000.sh wr37
1279 hpc ashahi3s 32 1 32 5GB 23:09:59 1:00:00:00 48:58 13:56:00 cavity_grid109.000000_re1000.000000.sh wr39
1289 hpc ashahi3s 32 1 32 5GB 23:25:44 1:00:00:00 33:47 14:11:45 cavity_grid119.000000_re500.000000.sh wr31
1290 hpc ashahi3s 32 1 32 5GB 23:28:51 1:00:00:00 30:54 14:14:52 cavity_grid119.000000_re800.000000.sh wr41
1291 hpc ashahi3s 32 1 32 5GB 23:32:17 1:00:00:00 27:11 14:18:18 cavity_grid119.000000_re1000.000000.sh wr24
1296 hpc ashahi3s 32 1 32 5GB 23:45:26 1:00:00:00 13:19 14:31:27 cavity_grid129.000000_re250.000000.sh wr34
1297 hpc ashahi3s 32 1 32 5GB 23:47:17 1:00:00:00 11:34 14:33:18 cavity_grid129.000000_re300.000000.sh wr32
1298 hpc ashahi3s 32 1 32 5GB 23:50:03 1:00:00:00 9:02 14:36:04 cavity_grid129.000000_re350.000000.sh wr35
1299 hpc ashahi3s 32 1 32 5GB 23:53:23 1:00:00:00 6:16 14:39:24 cavity_grid129.000000_re400.000000.sh wr38
1300 hpc ashahi3s 32 1 32 5GB 23:57:20 1:00:00:00 1:30 14:43:21 cavity_grid129.000000_re450.000000.sh wr27
1190 default dgromm3m 48 1 48 18GB 1:20:51:57 2:00:00:00 3:07:15 11:37:58 start.sh wr40
1194 default dgromm3m 16 1 16 5GB 1:21:00:59 2:00:00:00 2:58:31 11:47:00 start.sh wr36
1318 default dgromm3m 16 1 16 5GB 1:23:27:57 2:00:00:00 31:51 14:13:58 start.sh wr36

Waiting/Blocked Jobs (6)

Jobs with problems are highlighted. For these jobs, check whether your resource request can be satisfied at all by the nodes in this queue (most probably it cannot); a minimal sketch of such a check follows the table below.

job queue user state #proc #nodes ppn vmem t_req prio enqueued waiting jobname est.hosts
1301 hpc ashahi3s Q 32 1 32 120GB 1:00:00:00 5164 12:04:13 2:41:48 cavity_grid129.000000_re500.000000.sh wr25
1302 hpc ashahi3s Q 32 1 32 120GB 1:00:00:00 5164 12:04:13 2:41:48 cavity_grid129.000000_re800.000000.sh wr26
1303 hpc ashahi3s Q 32 1 32 120GB 1:00:00:00 5164 12:04:13 2:41:48 cavity_grid129.000000_re1000.000000.sh wr22
1305 hpc rberre2m Q 0 1 1 100GB 4:01:00 2407 12:38:59 2:07:02 job42.sh wr42
1313 hpc rberre2m Q 0 1 1 100GB 4:01:00 2333 13:49:32 56:29 job41.sh wr41
1314 default dgromm3m Q 144 3 48 64GB 2:00:00:00 1767 13:55:12 50:49 start.sh wr4 wr5 wr8
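
As an illustration of that check (a minimal sketch, not part of the batch system), the core counts from the Nodes table can be compared against a job's ppn request. Per-node memory limits are not listed on this page, so large vmem requests such as the 120GB asked for by jobs 1301-1303 still have to be checked against the cluster documentation separately.

# Which nodes could satisfy a given per-node core request (ppn)?
# Core counts are an excerpt of the Nodes table above; treat this as a sketch.
node_cores = {
    "wr0": 32, "wr3": 272, "wr4": 96, "wr5": 56, "wr6": 12, "wr7": 8,
    "wr8": 48, "wr20": 32, "wr28": 48, "wr42": 48,
}

def candidate_nodes(ppn, nodes=node_cores):
    # Return all nodes that have at least ppn cores.
    return sorted(n for n, cores in nodes.items() if cores >= ppn)

print(candidate_nodes(48))    # e.g. job 1314 needs 3 nodes with ppn=48
print(candidate_nodes(300))   # an over-sized request matches nothing -> []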