The DFG Priority Programme SPP 1726 is organizing an international conference on microswimmers at the Forschungszentrum caesar in ...
Under the limited budget of the ESMI interim arrangement, the supercomputing infrastructure is now available.
ESMI SUPERCOMPUTING FACILITY
2208 compute nodes, each with:
    2 Intel Xeon X5570 (Nehalem-EP) quad-core processors at 2.93 GHz
    SMT (Simultaneous Multithreading)
    24 GB memory (DDR3, 1066 MHz)
17664 cores in total
207 Teraflops peak performance
183.5 Teraflops Linpack performance
Sun Blade 6048 system
InfiniBand QDR with non-blocking fat-tree topology
In $HOME there is a limit of 3 TB. For larger data sets, the archive $GPFSARCH is recommended; at present no restrictions on data size apply there.
At present a maximum of 512 compute nodes (i.e. <= 4096 cores) can be allocated in a production run.
The ESMI proposals will be based on the unit TFlop-h. For convenience, the conversion to other units is given below:
1 TFlop-h = 10.67 Node-h = 85.36 Core-h
1 Core-h = 0.125 Node-h = 0.0117 TFlop-h
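These factors follow directly from the machine layout: each node has 8 cores and a peak performance of 207 TFlops / 2208 nodes, i.e. about 0.09375 TFlops per node. A minimal Python sketch of the conversion (the function names are illustrative, not part of any ESMI tooling):

    # Machine constants for the ESMI facility (from the specification above)
    CORES_PER_NODE = 8                 # 2 quad-core Xeon X5570 processors per node
    TFLOPS_PER_NODE = 207.0 / 2208.0   # peak performance per node, ~0.09375 TFlops

    def core_hours_to_tflop_hours(core_hours):
        """Convert Core-h to the TFlop-h unit used in ESMI proposals."""
        return core_hours / CORES_PER_NODE * TFLOPS_PER_NODE

    def tflop_hours_to_node_hours(tflop_hours):
        """Convert TFlop-h to Node-h."""
        return tflop_hours / TFLOPS_PER_NODE

    # Example: a 24-hour production run on the maximum of 512 nodes (4096 cores)
    print(core_hours_to_tflop_hours(4096 * 24))  # -> 1152.0 TFlop-h
    print(tflop_hours_to_node_hours(1.0))        # -> ~10.67 Node-h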
Users are welcome to install their own simulation codes and libraries locally in their directories
Note: for the CPU-time application process it should be demonstrated that codes scale up to at least 8 processes.
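A minimal sketch of how such a strong-scaling check might be evaluated in Python (the wall-clock times below are placeholders, not measured values):

    # Strong-scaling check: speedup and parallel efficiency from measured runtimes.
    # Replace the placeholder wall-clock times (in seconds) with your own measurements.
    runtimes = {1: 800.0, 2: 410.0, 4: 215.0, 8: 120.0}  # processes -> seconds

    t_serial = runtimes[1]
    for procs in sorted(runtimes):
        speedup = t_serial / runtimes[procs]          # speedup relative to 1 process
        efficiency = speedup / procs                  # ideal scaling gives 100%
        print(f"{procs:2d} processes: speedup {speedup:5.2f}, efficiency {efficiency:6.1%}")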
Some common simulation, visualization, and mathematics programs and libraries are installed on Juropa.
Information on data transfer after project termination
After the project phase, users are responsible for migrating their data to other machines.
There will be no long-term storage of user data after the project has expired.
ESMI access and service are possible.
Facility available at Forschungszentrum Jülich, Germany
Dr. Godehard Sutmann, Forschungszentrum Jülich, Germany