Click here for system status.
RCIF
The Research Computing and Informatics Facility (RCIF) is dedicated to helping you create and execute computationally intensive studies for human imaging applications. See our FY25 RCIF Overview, or browse our featured resources:
- RCIF's Center for High-Performance Computing (CHPC) offers hardware and software resources.
- Shared storage - high-throughput and high-volume storage systems; more details are available here.
- Data services - information on shared datasets, imaging data management, and clinical data pulls is available here.
- Expertise - we provide support in HPC, data, AI, and more. Find support and training options here, and don't miss our events page for upcoming and past events!
Funding
To help the RCIF secure continued funding, we ask you, our users, to cite the RCIF in your publications:
Computations were performed using the facilities of the Washington University Research Computing and Informatics Facility (RCIF). The RCIF has received funding from NIH S10 program grants: 1S10OD025200-01A1 and 1S10OD030477-01.
To help you access funding:
- Facilities document (most projects) - view an example of our facilities document.
- Facilities document (HCP-related projects) - view an example of our HCP facilities document.
- Pilot funds - available to anyone using the RCIF; learn more about pilot funds.
- ICTS JIT Funds - also available to anyone using the RCIF. More information about JIT Funds is available here.
- Note: we are listed as the "Center for High Performance Computing (CHPC)" in this supporting PDF document.
See a list of the publications that have used the RCIF.
Organization of the Site
We have organized this site as a (mostly) linear workflow, from getting an account to specialized applications and recipes for specific tasks.
- Requesting an account
- Connecting to the cluster
- Free vs. paid access and other accounting FAQs
- Importing and exporting data
- Accessing shared datasets
- Using SLURM to run jobs
- Advanced: Working with containers
- Support and Training
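Once you have an account, a typical SLURM job looks something like the sketch below. The partition, module, and script names here are placeholders, not RCIF-specific values; check the cluster documentation linked above for the actual ones.

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in squeue
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --cpus-per-task=4         # CPU cores for the task
#SBATCH --mem=8G                  # total memory for the job
#SBATCH --output=slurm-%j.out     # output file (%j expands to the job ID)

# Load the software your analysis needs (module names vary by cluster)
module load python

# Run the actual computation (my_analysis.py is a placeholder)
python my_analysis.py
```

Save this as, say, `job.sh`, submit it with `sbatch job.sh`, and monitor it with `squeue -u $USER`.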
Remote Development
We encourage you to use the cluster not just for compute-intensive jobs, but also for one-off tasks and day-to-day development. Developing your cluster jobs on the cluster itself reduces headaches, speeds deployment, and eases testing and debugging. See below for resources and tutorials to get started:
- Jupyter Notebook
- MATLAB
- Visual Studio Code
- Remote Desktop Environment (coming Q1 2024)
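As a sketch of the Jupyter workflow, one common pattern is to start the notebook server on a compute node and tunnel to it over SSH. The hostnames, port, and resource flags below are illustrative placeholders; the tutorials linked above give the cluster-specific values.

```shell
# 1. From a login node, request an interactive allocation,
#    then start Jupyter on the compute node you are given:
srun --cpus-per-task=2 --mem=4G --pty bash
jupyter notebook --no-browser --port=8888 --ip=0.0.0.0

# 2. From your laptop, forward the port through the login node
#    (replace nodename with the compute node Jupyter reported,
#    and login.cluster.example.edu with the real login host):
ssh -L 8888:nodename:8888 username@login.cluster.example.edu

# 3. Open http://localhost:8888 in your local browser and paste
#    the token that Jupyter printed in step 1.
```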
As always, please be courteous and avoid running compute-intensive jobs on the login nodes.
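For quick one-off commands, wrapping the command in `srun` is an easy way to keep the work off the login nodes; the flags below are illustrative.

```shell
# Instead of running a heavy command directly on the login node:
#   python preprocess.py          # <-- avoid this on login nodes
# wrap it in srun so it executes on a compute node:
srun --cpus-per-task=2 --mem=4G python preprocess.py
```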