The bioinformatics analysis will be performed using the infrastructure of the Dan L. Duncan Cancer Center Computing Facilities. In addition to up-to-date desktop computers for all faculty and staff, including both 32-bit and 64-bit personal computers for most bioinformatically oriented members, we have two major computing facilities: one in the Breast SPORE facilities on the main Baylor campus and one in the Energy Transfer Data Center approximately one mile away.
Both sit inside the Baylor firewall, have nightly offsite backups, are protected by non-aqueous fire suppression, have redundant power and, for high- and moderate-capability machines, are accessible via a 10-gigabit Ethernet switched local area network (LAN). The availability of two physically separated facilities substantially improves availability by enabling more rapid recovery from a disaster, such as a fire or flood, that incapacitates one facility.
We have a 35-node high-performance compute cluster (nodes with 2-8 cores each, and newer nodes with 96 or 128 GB of RAM) representing a total of 375 CPUs, with 34 terabytes of fast-access SAN storage for high-performance computing needs. We are in the process of expanding and substantially upgrading this capacity for 2013 with an extensible NetApp storage appliance. For archival storage, tens to hundreds of terabytes can be readily leased at very low cost from Baylor Information Technology. Four cluster nodes are set aside for interactive jobs; the remaining nodes are available for batch jobs. Queues are managed by Sun Grid Engine, and the system itself is administered by an expert system architect with more than 10 years of experience in high-performance computing.
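Batch work on the cluster is submitted through Sun Grid Engine; a minimal job script might look like the following sketch. The directive names (`-N`, `-cwd`, `-pe`, `-l`) are standard SGE options, but the parallel-environment name, memory values, and workload shown are illustrative assumptions, not this cluster's actual configuration:

```shell
#!/bin/bash
# Hypothetical SGE job script -- directives are standard SGE syntax,
# but the values below are placeholders, not this cluster's configuration.
#$ -N align_sample            # job name
#$ -cwd                       # run from the submission directory
#$ -pe smp 8                  # request 8 slots on one node (PE name is an assumption)
#$ -l h_vmem=4G               # per-slot memory limit
#$ -o logs/align_sample.out   # stdout log
#$ -e logs/align_sample.err   # stderr log

# Example workload (illustrative only)
bwa mem -t 8 reference.fa sample.fastq > sample.sam
```

Such a script would be submitted with `qsub align_sample.sh`; interactive sessions would instead be started with `qlogin`, which the scheduler routes to one of the four nodes set aside for interactive use.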
Access to these resources is supported by partial chargeback, commensurate with level of use. Because the cluster is self-contained (i.e., located at a single site without an identical sister cluster offsite), the compute nodes themselves do not enjoy the full disaster-recovery benefit of the two sites, whereas the archival storage does.
Beyond the HPC cluster, there are three Sun Fire X4170 virtualization servers at SUDC and two Cisco UCS C210 M2 virtualization servers at the Breast SPORE facility. Servers at both sites use VMware to create virtual servers that can run any operating system with varying system requirements. Each location's virtualization servers have attached 37 TB of NetApp storage, with vMotion in place to migrate virtual machines from one site to the other should a disaster arise. In addition, there are two HP servers with 96 TB of direct-attached storage for Oracle 11g (backed up offsite nightly) and four Sun physical servers with 37 terabytes of storage running the Solaris operating system, among other resources.