

The ARC-cluster

The ARC is equipped with a dedicated computer server connected to the outside world through a high-speed optical-fiber network, allowing fast data transfer (10 Gbit/s).

We have 318 TB of disk space and a 13-node cluster (136 cores) dedicated to the ARC, with 64-256 GB of RAM per node.

ALMA and CASA users can request access to the server and the disk space by sending an e-mail indicating the reason for the request.

User policy

New ARC users can access the Italian ARC node computing facilities by requesting a face-to-face visit (ALMA users only, through the ALMA Helpdesk) or by visiting the ARC node in Bologna (for any data-reduction issue to be solved in collaboration with the ARC staff).
In both cases, users are requested to send an e-mail indicating the reason for the request.

Please note that a request for a new account implies that the requesting user (and/or their collaborators) visits the ARC for an introduction to the usage of the ARC facilities and to data reduction with CASA, for ALMA or any other telescope. If the request is positively evaluated, the visit details will be arranged via e-mail.

The account guarantees usage of the facilities and support for 6 months. Once the account expires, access to the data is suspended and, after a 1-month quarantine, all data are removed. Only one gentle reminder is sent before the account expires. Extensions of the account duration can be considered on request (via e-mail). No visit is needed in case of account renewal.

Support from the ARC members is guaranteed for any ALMA-related issue. For data-reduction issues that do not involve ALMA, support (other than technical support in the usage of the ARC computing facilities) is limited by the knowledge, experience, and availability of the ARC members. The same rules also apply to IRA staff members. IRA collaborators with temporary positions (e.g. students) can have an account for the entire duration of their position.

To ensure a well-balanced load on the cluster nodes, please follow the instructions on accessing the computer cluster below.

Queries can be sent to us via e-mail.

Users are automatically added to the mailing list used for any communication from our side.

Accessing the computer cluster

Once you have obtained an ARC account at IRA, you can access the computer cluster nodes from anywhere through the cluster's login host. Graphical applications can be run on the cluster through remote X access. The accessible working nodes are listed in the table below. You can enter a node for interactive work by typing:

ssh -X <node>

Useful tip: typing 'hostname' tells you which node you are on.

The only nodes accessible to almaf2f accounts are nodes 19 and 22.
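A minimal interactive session might look like the following sketch (arcbl05 is just an example node name taken from the table below; use any node your account is allowed on):

```shell
# Log in to a working node with X forwarding enabled
# (the ssh step only works from an ARC account, so it is shown as a comment):
#   ssh -X arcbl05

# Once logged in, confirm which node you are on:
hostname
```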

Here you can find some statistics about resource consumption on the arcblXX nodes.

Using ARC storage

To access your ARC storage on the cluster, change to the following directory:

cd /iranet/groups/arc/homesarc/<username>

Beware that the disks have no redundancy at all: never leave important data on them.
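Since the disks are not redundant, it is good practice to keep an eye on how much you store there and to copy important results elsewhere. A quick, locally runnable check of usage and free space (run it from inside your ARC directory):

```shell
# Report how much space the current directory tree uses...
du -sh .

# ...and how full the filesystem it lives on is:
df -h .
```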

Mounting ARC storage on your workstation

  • On IRA workstations, the ARC home filesystem can be accessed at /iranet/homesarc
  • On your laptop, the ARC filesystems can be seamlessly accessed with fuse-sshfs. As root, install the sshfs package:
# on RedHat/Centos/ScientificLinux
yum install fuse-sshfs
# on Debian/Ubuntu
apt-get install sshfs

then, as a regular user:

sshfs <user>@<arc-host>:/remote/path /your/local/mount/point/

By omitting /remote/path you can mount your home directory.

Be aware that this method is suboptimal for heavy input/output loads: running disk-intensive applications directly on the ARC cluster results in file access 10-50 times faster.
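The full mount/unmount cycle can be sketched as follows; <user> and <arc-host> are placeholders for your ARC account name and the cluster's SSH host (not specified here), so the remote steps are shown as comments:

```shell
# Create a local mount point (this step is safe to run anywhere):
mkdir -p "$HOME/arc"

# Mount your remote ARC home over SSH (placeholders: <user>, <arc-host>):
#   sshfs <user>@<arc-host>:/iranet/groups/arc/homesarc/<user> "$HOME/arc"

# ... work on the files under $HOME/arc ...

# Unmount when you are done:
#   fusermount -u "$HOME/arc"
```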

Software packages available

The software available on the ARC cluster can be listed by typing the command setup-help
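As a sketch, a typical session for CASA chains a setup command and a launch command taken from the table below (these commands exist only on the cluster nodes):

```
setup-help      # list all available packages and their setup commands
casapy-setup    # make CASA available in the current shell
casapy          # launch CASA
```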

Software package   Setup command           Launch command   Notes
CASA               casapy-setup            casapy           data reduction package (link)
Miriad             miriad-setup            miriad           data reduction package (link)
AIPS               aips-setup                               Astronomical Image Processing System (link)
analysis utils     analysisUtils-setup
analytic infall    analytic_infall-setup
astron             astron-setup
Coyote library     coyote-setup
FITS Viewer        fv-setup                                 (link)
GCC compiler       gcc-setup                                (link)
Gildas             gildas-setup                             (link)
HEALPix            healpix-setup                            (link)
IDL                idl-setup                                (link)
HEASoft            heasoft-setup                            (link)
QA2                qa2-setup
RATRAN             ratran-setup                             (link)
Starlink           starlink-setup                           (link)

Computing Nodes

Name     RAM   CPU¹  Cores  Clock (MHz)  Data Net  Work Disk  Scratch Disk  Sch.²  Group³     Notes
arcbl01  256G  [C]   6/12   3600         10GbE     15T        57G           N      (a,b)
arcbl02  256G  [C]   6/12   3600         10GbE     15T        57G           N      (a,b)
arcbl03  256G  [C]   6/12   3600         10GbE     11T        65G           N      (a,b)
arcbl04  256G  [C]   6/12   3600         10GbE     11T        65G           N      (a,b)
arcbl05  256G  [C]   6/12   3600         10GbE     11T        65G           N      (a,b)
arcbl06  256G  [C]   6/12   3600         10GbE     11T        65G           N      (a,b)      VM
arcbl07  256G  [C]   6/12   3600         10GbE     11T        65G           N      (a,b,c)
arcbl08  256G  [C]   6/12   3600         10GbE     11T        65G           N      (a,b)
arcbl09  64G   [B]   4/8    3600         10GbE     15T        57G           N      (a,b)
arcbl10  64G   [B]   4/8    3600         10GbE     11T        57G           N      (a,b)
arcbl11  64G   [B]   4/8    3600         10GbE     11T        57G           N      (a,b,c,d)  NFS server
arcbl12  64G   [B]   4/8    3600         10GbE     22T        57G           N      (a,b)      Data transfer
arcbl13  64G   [A]   8/16   3600         1GbE      3,5TB                    N      (a,b)

¹ CPU: [A] AMD Ryzen 7 1800X; [B] Intel Xeon E3-1275 v6; [C] Intel Xeon E5-1650 v4;
² SCH.: Scheduler
³ Group: (a) arc-staff, (b) arc-vlbi, (c) arc-f2f, (d) arc-user; Blades are always dedicated to (a) and (b);

Storage Nodes

Name     RAM  CPU                     Cores  Clock (MHz)  Data Net  RAID controller                   Space  Storage                                Export
arcnas2  32G  Intel Xeon Silver 4108  8/16   1800         10GbE     ARC-1883IX-24                     91T    12x10TB (HGST HUH721010ALE600) RAID6   /lustre/arcfs0/ost3
arcnas3  32G  Intel Xeon Silver 4108  8/16   1800         10GbE     ARC-1883IX-24                     72,8T  12x8TB (HGST HUH728080AL5200) RAID6    /lustre/arcfs0/ost0
arcnas4  16G  Intel Xeon E5-2603 v3   6/6    1600         10GbE     ARC-1284ML-24                     36,4T  12x4TB (WDC WD4000F9YZ-09N20L1) RAID6  /lustre/arcfs0/ost1
                                                                                                      91T    12x10TB (ST10000NM0086-2AA101) RAID6   /lustre/arcfs0/ost2
arcnas5  32G  Intel Xeon E5-2640 v4   10/20  2400         10GbE     Broadcom/LSI MegaRAID SAS-3 3108  255G   2x255GB RAID1