INFN data disk servers on CAF
- fcdfdata009, fcdfdata010, fcdfdata011. Three 1.8TB fileservers (5.4TB total) were part of the original CAF
static data disk pool, so that pool could be made large enough to
contain most of the hadronic B data set before stripping. See the Datasets
on CAF list
- As of May 2003 CAF1 is gone, and those fileservers are an indistinguishable
part of the dCache pool
- 4 fileservers, 1.8 TB each, to be used for data stripping output
etc.
- fcdfdata013
- fcdfdata014
- fcdfdata015
- fcdfdata026
- Data sets on these file servers are documented here
- 2 fileservers, 5 TB each, purchased in 2003, operational by end of
January 2004, still to be assigned; the plan is:
- fcdfdata103: 5TB as personal icaf & scratch area
All INFN users have 200GB quota on this disk.
- fcdfdata105: one 5TB fileserver in the dCache pool.
As of Feb 2004 this is used to guarantee
golden data set status
for the compressed B-Charm dataset
xbhd0c
As of February 2004 there are seven 1.8TB and two 5TB file servers,
for a total of 22.6 TB.
50% is for internal usage, 50% for hosting
data sets of common CDF utilisation, according to CDF policy
Policies and Access tools:
- icaf/scratch
- Access to icaf/scratch server fcdfdata014 is made under the guidelines
of the CAF user
manual, e.g. with Igor's
icaf tools
from offsite.
- Access to the icaf/scratch server from within CAF jobs has to be done
using rcp. From outside the CAF, use ftp. Always with a proper Kerberos
ticket. (Kerberised) rsh to the file server is only possible from a CAF job,
and only using the -N qualifier.
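The rules above can be sketched as follows. This is an illustrative sketch only: the paths, the user name and the Kerberos realm are hypothetical, and the commands assume a valid ticket and the kerberised Fermilab tools.

```shell
# Obtain a Kerberos ticket first (principal and realm are examples):
kinit myuser@FNAL.GOV

# From within a CAF job: copy output to the icaf/scratch server with rcp
# (hypothetical destination path).
rcp myhistos.root fcdfdata014:/icaf/myuser/myhistos.root

# rsh to the file server works from a CAF job only, and only with -N:
rsh -N fcdfdata014 "ls -l /icaf/myuser"

# From outside the CAF, use (kerberised) ftp instead of rcp/rsh:
ftp fcdfdata014.fnal.gov
```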
- There is no backup, and no automatic deletion of old files.
- Nevertheless please clean up unused files: the
total assigned quota is always larger than the disk, so users should not "live
at the edge of their quota" and should not leave large dead files around
once their work is finished.
- As of today every INFN user should have 50/100 GB of soft/hard
quota on icaf/scratch area (twice as much as common CDF'ers).
- My (Stefano's) goal is to provide 100GB of icaf+scratch space
for everybody. I hope to get this by the end of 2003 and do not plan to expand
the personal space further for the rest of Run 2.
- A few users have been granted larger quotas to accommodate physics
data sets while waiting for the new static file servers to come online,
and as long as overall icaf/scratch usage is low.
- static file servers
- Static file servers (fcdfdata013/15/26) are managed via the cdfdata
account.
- There is no other user on these machines but cdfdata; cdfdata
owns all the local disk, with no quota
- Data is managed by users who have access by having their principal
listed in cdfdata's .k5login. This list will always include me (Stefano)
for management purposes (i.e. adding/removing others as disk areas are
reassigned).
- Local disk is not NFS mounted to worker nodes, access is only
via network.
- Data is put/retrieved according to the same rules as icaf/scratch,
with the obvious caveat of specifying -l cdfdata in kerberised commands.
From CAF jobs it is possible to use rcp, rsh and rootd (rootd is the preferred
data access method); from offsite it is possible to use rootd and/or ftp.
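A minimal sketch of access to a static file server, assuming your principal is in cdfdata's .k5login; the data paths and file names below are hypothetical examples.

```shell
# From a CAF job: copy stripping output to a static file server
# as the cdfdata account (note the -l cdfdata qualifier).
rcp -l cdfdata mystrip.root fcdfdata013:/data/bhad/mystrip.root

# rsh works from CAF jobs only, with both -N and -l cdfdata:
rsh -N -l cdfdata fcdfdata013 "ls -l /data/bhad"

# Preferred read access from analysis jobs is rootd, e.g. inside ROOT:
#   TFile *f = TFile::Open("root://fcdfdata013//data/bhad/mystrip.root");
```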
- data access
- Data on all file servers are accessible via anonymous ftp. See the Caf User's Guide
- When accessing a large data set that sits on a single file
server, care must be taken to avoid overloading the server with network
access requests from tens of concurrent processes (especially if using
the italy queue); even rootd may fail at that point. In this case it is
recommended to use fcp to copy
each file to the local disk before opening it. Here is an example of using fcp around
anonymous ftp to achieve this.
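The copy-then-open pattern might look like the sketch below. This is an assumption-laden illustration: fcp is taken to accept rcp-style source/destination arguments (it is a throttling wrapper that limits the number of simultaneous copies from one server), and the server path and file name are invented examples.

```shell
# Copy the input file to the job's local scratch disk with fcp,
# which queues the transfer so the file server is not overloaded
# by many concurrent jobs (hypothetical path and file name):
fcp fcdfdata013:/data/bhad/run152000.root ./run152000.root

# Then open the local copy in the analysis job instead of
# reading over the network:
root -b -q 'myanalysis.C("./run152000.root")'
```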
Last modified: Thu Feb 5 12:08:54 CET 2004
Stefano.Belforte@ts.infn.it