Simulation Servers of Systems Ecology - ETH Zurich
The cluster shown consists of rack-mounted Macintosh Xserve G5 machines. It
comprises a central file server (se-server) and an arbitrary number of
attached, clustered simulation servers. All nodes can be used in the same
manner and are identical, except for the first node, superior. Superior
offers additional functionality: it stores the users' home directories on
behalf of the entire cluster.
- Simulationists have their home directory on a single node of the
cluster, i.e. on superior. As a simulationist you always work with this
very same directory, regardless of which clustered node you are currently
logged in to;
- The cluster offers ssh (including scp and sftp) and afp services. All
these services are restricted to members of the Systems Ecology group at
ETH Zurich;
- To use these services from a Mac OS X machine, the following means are
recommended:
  - afp: the Finder's menu command 'Go -> Connect to Server...' (Command-K),
  - ssh: the preinstalled application 'Terminal', e.g. via 'TelnetLauncher', and
  - sftp: 'Fetch 5.x';
- Any access to the cluster uses the same login and password, since access
is controlled by an Open Directory / LDAP master. Access is offered only to
registered users of the group simulationists;
- Every node of the cluster is protected by a firewall;
- No backup is provided on any node of the cluster. As a
simulationist it is your personal responsibility to save the results
and remove them from the cluster once your calculations are completed;
- Any afp volume (afp share point; afp stands for Apple Filing Protocol)
can be accessed from any Macintosh computer (given it supports AFP services
via TCP/IP, i.e. MacOS 8.1 and later). The names of all these volumes are
shown in the graph above in dark blue letters. E.g. access the centrally
stored models and data via afp://se-server.ethz.ch/Models-Data.
Your access to individual folders may be restricted, depending on the
project on which you are currently working;
- To access your personal home directory on superior, use afp to connect
from any Macintosh to superior (afp://superior.ethz.ch). Alternatively you
can use sftp from any host, e.g. from your Mac or any of our Suns. Connect
to any node of the cluster, e.g. to michigan via sftp://michigan.ethz.ch,
and you can access your home directory on superior;
- On superior you can also access the directory 'Simulationists'
(afp, sftp). That directory contains all users' home directories. Note,
however, that your access is restricted to the publicly available parts of
other users' home directories. All other data are fully protected. This
allows you to share data with other users while preserving privacy;
- Data stored centrally in the folder Data of the afp volume Models-Data
are stored only once, on the se-server. They are made available read-only
to every clustered simulation server (nfs export, shown as arrows in the
graph above). Thus there is no need to first copy data redundantly to the
cluster node where you will execute simulation experiments;
- Any execution of programs is done only via command line tools, i.e. the
ssh service is the only means available to simulationists to issue
commands;
- The clustered simulation servers provide mainly RASS-OSX as the modeling
and simulation software;
- RASS-OSX plus other software is provided to the cluster via a single
installation on the se-server (nfs export, shown as arrows in the graph
above). This ensures identical behavior throughout the entire cluster.
Clustered simulation servers find that software in the directory
/usr/local, which is by default accessible to any simulationist (global
/etc/profile). It is not necessary to have a local '.profile' in your home
directory, and it is actually recommended to have none as long as you are
working only with the standard simulation software;
- Some software is also installed locally on every clustered simulation
server, but this is kept to an absolute minimum to minimize maintenance
costs;
- Each cluster node offers an individual large data pool. It is available
via afp and sftp. This is a "commons" data storage to which all users have
access. Its name begins with 'SimServ...' and ends with the first portion
of the node's name, e.g. on superior it is called 'SimServSuper'. As a
simulationist you find in your home directory symbolic links to the
currently available data pools. If you currently have several ssh sessions
open to several cluster nodes, you will find symbolic links to all the
nodes to which you are currently logged in. Note, however, that from any
given ssh session you have working access only to the pool on the machine
to which that session is logged in. This data pool storage is the most
efficient way to store simulation results (see the usage sketch after this
list). For efficiency reasons you should use your home directory only
sparingly: it is recommended to store there only models and settings files,
but no large files resulting from simulation experiments. The only
exception to these usage rules is the node superior, where this does not
matter;
- The se-server cannot be used to execute simulations;
- Use the se-server for any permanent storage of data, models, and
simulation results. Store the files via afp, sftp, or scp at
afp://se-server.ethz.ch/Models-Data or sftp://se-server.ethz.ch/Models-Data;
- The se-server offers backup (netbackup);
- The se-server offers various other services, which are all completely
independent of the server's role for the cluster. An example is the
central FileMaker database (LiteratureSE) used to store and exchange
literature references;
- Benchmarks for ForClim 2.6-4.1 tests:
  Test    | Runs (1200 a) | Quadra 950 (68K and FPU, 33 MHz) | PPC Mac (G4, ca. 1 GHz) | superior (G5, dual 2.3 GHz), no optimization | superior (G5, dual 2.3 GHz), fast option
  *-test  | 6             | ca. 4'12"                        | ca. 2'                  | 19"                                          | 10"
  **-test | 600           | ca. 7h                           | 2h 45'                  | 33'37"                                       | 3'07"
- Should you encounter inconsistencies in your home directory, e.g.
dysfunctional symbolic links to the data pool storage, try running the
simulation cluster utility 'fixHome'. In addition it may also help to
'source .profile' or even to log out and log back in to make sure
everything works as expected (this restores all environment variables,
should you have tampered with them);
- You can work with StuffIt archives using the command line tools
'stuff' and 'unstuff'.
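The shell sketch below illustrates a typical session following the rules above:
log in to a cluster node, work in that node's data pool, and move the finished
results to the se-server for permanent, backed-up storage. Host names and the
Models-Data volume come from the text above; the user name 'jdoe', the folder
name 'forclim_run', the data pool link name 'SimServMichigan', and the remote
mount path on se-server are assumptions for illustration only.

    # Log in to a clustered simulation server; your home directory on
    # superior is the same on every node.
    ssh jdoe@michigan.ethz.ch

    # (The following commands are typed on michigan.)
    # Work inside that node's data pool rather than in your home directory;
    # a symbolic link to it appears in your home directory while logged in.
    cd ~/SimServMichigan          # placeholder link name, check your home directory
    mkdir forclim_run && cd forclim_run
    # ... run your simulation experiments here ...

    # Once the calculations are completed, copy the results to the se-server
    # for permanent storage (the remote path is an assumption; the
    # Models-Data volume may be mounted elsewhere), then clean up the pool.
    scp -r ~/SimServMichigan/forclim_run \
        jdoe@se-server.ethz.ch:/Volumes/Models-Data/Data/forclim_run
    rm -r ~/SimServMichigan/forclim_run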
Aspects relevant for cluster administrators:
- RASS-OSX is generated on a dedicated node, i.e. the first cluster
node superior;
- RASS-OSX can be easily installed for the cluster by running the script
doRassInstallation as a system administrator on the se-server;
- The cluster can easily be extended with further nodes by cloning the
first ordinary cluster node, i.e. michigan, using 'NetRestore Helper' and
'NetRestore' (cf.
afp://se-server.ethz.ch/SE_Software/Unix_Software/OS_X/NetRestore-ncutil,
in particular consult 'Cloning cluster nodes.txt'). Alternatively it should
also be possible to NetBoot a new node by pressing the key 'n' during the
very first startup (se-server is also a NetBoot server for the cluster),
after having added the new node's hardware address, IP number, and server
serial number to the file 'machine_specific_data.csv';
- Restart sequence: 1) se-server, 2) superior, 3) rest of nodes. Restarting
any machine early in this sequence typically requires restarting the
machines later in the sequence as well. Within the 'rest of nodes' no
particular restart order needs to be observed (see the sketch after this
list);
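As an illustration of the restart sequence, the following sketch shows how the
cluster could be restarted from an administrator's machine over ssh. Only the
ordering (se-server first, then superior, then the remaining nodes in any
order) comes from the text above; the account name 'admin', the waiting times,
and the node names other than michigan are assumptions.

    #!/bin/sh
    # Sketch: restart the cluster in the documented order.

    ssh admin@se-server.ethz.ch 'sudo shutdown -r now'   # 1) file server first
    sleep 300                                            # wait until se-server is back up

    ssh admin@superior.ethz.ch 'sudo shutdown -r now'    # 2) then superior (home directories)
    sleep 300

    # 3) the remaining nodes, in any order (names beyond michigan are assumptions)
    for node in michigan erie huron; do
        ssh "admin@${node}.ethz.ch" 'sudo shutdown -r now'
    done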
Specifications of all our Unix machines (includes the Suns)
af, ETHZ - 10/29/24