Batch Status

Summary

last updated: 18:56:02 23.02.2024

71 active nodes (16 used, 55 free)

6752 hw threads (1200 used, 5552 free)

27 running jobs, 83710:00:00 remaining core hours

0 waiting jobs, - waiting core hours

Running Jobs (27)

job queue user #proc #nodes ppn vmem_req vmem_used t_remain t_req t_used started jobname hosts
157136 any lsieve3m 56 1 56 120 GB 45 GB 16:57:59 23:59:00 7:01:01 11:55:00 python_bidirectional_losscomputation wr14
156972 gpu4 mbedru3s 2 1 2 100 GB 51 GB 17:10:21 2:23:59:00 2:06:48:39 21.02.2024 12:07:22 NCO_Cylinder3D wr22
156975 gpu4 mbedru3s 2 1 2 100 GB 51 GB 17:11:08 2:23:59:00 2:06:47:52 21.02.2024 12:08:09 NCO_Cylinder3D wr23
156976 gpu4 mbedru3s 2 1 2 100 GB 51 GB 17:11:28 2:23:59:00 2:06:47:32 21.02.2024 12:08:29 NCO_Cylinder3D wr23
156977 gpu4 mbedru3s 2 1 2 100 GB 51 GB 17:16:37 2:23:59:00 2:06:42:23 21.02.2024 12:13:38 NCO_Cylinder3D wr23
157025 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:45:58 2:23:59:00 1:19:13:02 21.02.2024 23:42:59 NCO_Cylinder3D wr21
157026 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:46:15 2:23:59:00 1:19:12:45 21.02.2024 23:43:16 NCO_Cylinder3D wr22
157027 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:46:23 2:23:59:00 1:19:12:37 21.02.2024 23:43:24 NCO_Cylinder3D wr22
157028 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:46:34 2:23:59:00 1:19:12:26 21.02.2024 23:43:35 NCO_Cylinder3D wr23
157029 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:46:44 2:23:59:00 1:19:12:16 21.02.2024 23:43:45 NCO_Cylinder3D wr24
157030 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:46:53 2:23:59:00 1:19:12:07 21.02.2024 23:43:54 NCO_Cylinder3D wr24
157031 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:47:02 2:23:59:00 1:19:11:58 21.02.2024 23:44:03 NCO_Cylinder3D wr24
157032 gpu4 mbedru3s 2 1 2 100 GB 51 GB 1:04:47:15 2:23:59:00 1:19:11:45 21.02.2024 23:44:16 NCO_Cylinder3D wr24
157109 hpc dgromm3m 128 1 128 128 GB 125 GB 1:21:01:03 3:00:00:00 1:02:58:57 22.02.2024 15:57:04 start_mpi.sh wr50
157110 hpc dgromm3m 128 1 128 128 GB 124 GB 1:22:18:43 3:00:00:00 1:01:41:17 22.02.2024 17:14:44 start_mpi.sh wr52
157111 hpc dgromm3m 128 1 128 128 GB 125 GB 1:22:18:43 3:00:00:00 1:01:41:17 22.02.2024 17:14:44 start_mpi.sh wr53
157112 hpc dgromm3m 128 1 128 128 GB 124 GB 1:22:18:43 3:00:00:00 1:01:41:17 22.02.2024 17:14:44 start_mpi.sh wr54
157113 hpc dgromm3m 128 1 128 128 GB 124 GB 1:22:18:43 3:00:00:00 1:01:41:17 22.02.2024 17:14:44 start_mpi.sh wr55
157114 hpc dgromm3m 128 1 128 128 GB 144 GB 1:22:18:43 3:00:00:00 1:01:41:17 22.02.2024 17:14:44 start_mpi.sh wr56
157115 hpc dgromm3m 128 1 128 128 GB 125 GB 1:22:18:43 3:00:00:00 1:01:41:17 22.02.2024 17:14:44 start_mpi.sh wr57
157147 gpu4 schaar3m 32 1 32 240 GB 34 GB 2:20:14:55 3:00:00:00 3:45:05 15:10:56 Load_Forecast_training_job.sh wr25
157148 gpu4 schaar3m 32 1 32 240 GB 21 GB 2:20:18:30 3:00:00:00 3:41:30 15:14:31 Load_Forecast_training_job.sh wr25
157164 any kkirsc3m 32 1 32 40 GB 7 GB 2:23:47:51 3:00:00:00 12:09 18:43:52 conformer_10-psi-f.run wr75
157165 any kkirsc3m 32 1 32 40 GB 7 GB 2:23:48:04 3:00:00:00 11:56 18:44:05 conformer_14-psi-f.run wr75
157166 any kkirsc3m 32 1 32 40 GB 7 GB 2:23:48:44 3:00:00:00 11:16 18:44:45 conformer_11-psi-f.run wr76
157167 any kkirsc3m 32 1 32 40 GB 6 GB 2:23:49:36 3:00:00:00 10:24 18:45:37 conformer_12-psi-f.run wr76
157168 any kkirsc3m 32 1 32 40 GB 6 GB 2:23:50:10 3:00:00:00 9:50 18:46:11 conformer_13-psi-f.run wr77
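The summary figures above can be reproduced from this table: the 1200 used hw threads equal the sum of #proc over the running jobs, the 16 used nodes correspond to the distinct hosts, and the 83710 "remaining core hours" appear to match summing #proc × t_req (requested walltime) rather than #proc × t_remain. Below is a minimal Python sketch of that accounting, assuming exactly this derivation; the parse_walltime helper and the abbreviated job list are illustrative, not part of the batch system's own tooling.

from datetime import timedelta

def parse_walltime(s: str) -> timedelta:
    """Parse a '[d:]hh:mm:ss' walltime such as '23:59:00' or '2:23:59:00'."""
    parts = [int(p) for p in s.split(":")]
    days = parts[0] if len(parts) == 4 else 0
    h, m, sec = parts[-3:]
    return timedelta(days=days, hours=h, minutes=m, seconds=sec)

# (job id, #proc, t_req, host) -- a few representative rows from the table above;
# extend with the remaining rows to reproduce the full summary totals.
jobs = [
    (157136, 56, "23:59:00", "wr14"),
    (156972, 2, "2:23:59:00", "wr22"),
    (157109, 128, "3:00:00:00", "wr50"),
    (157147, 32, "3:00:00:00", "wr25"),
    (157164, 32, "3:00:00:00", "wr75"),
]

used_threads = sum(procs for _, procs, _, _ in jobs)        # 1200 over the full table
used_nodes = len({host for _, _, _, host in jobs})          # 16 over the full table
core_hours = sum(procs * parse_walltime(t_req).total_seconds() / 3600
                 for _, procs, t_req, _ in jobs)            # ~83710 over the full table

print(f"{used_nodes} nodes used, {used_threads} hw threads used, "
      f"{core_hours:.0f} core hours requested by running jobs")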