EPCC is the High-Performance and Novel Computing Centre of the University of Edinburgh. EPCC provides a wide variety of services to academia and industry, including application design, development and re-engineering; performance optimisation; distributed computing consultancy and solutions; facilities access; and data integration and data mining consultancy.
At the ECO2Clouds UK-EPCC site, OpenNebula runs in a version derived from OpenNebula 3.2 for BonFIRE (a sketch of querying this front-end programmatically follows the configuration summary below).
Hypervisor used: nodes run Xen 3.0.3.
Image management: block devices are managed using the same modified version of the LVM manager as used by Inria.
Image storage: images are stored in the "raw" format.
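As an illustration of how a client might talk to this front-end, the sketch below lists the VM pool over OpenNebula's XML-RPC interface using only the Python standard library. The endpoint URL and the session credentials are placeholders, not site values, and the five-argument one.vmpool.info call is the OpenNebula 3.x form; the actual interface exposed by the site's modified 3.2 derivative may differ.

    import xmlrpc.client

    # Placeholder endpoint and "user:password" session token; the real
    # values depend on the UK-EPCC front-end configuration, not shown here.
    ENDPOINT = "http://localhost:2633/RPC2"
    SESSION = "oneadmin:secret"

    server = xmlrpc.client.ServerProxy(ENDPOINT)

    # one.vmpool.info(session, filter, start_id, end_id, state):
    # filter -2 = all VMs, start/end -1 = whole pool, state -1 = any state.
    response = server.one.vmpool.info(SESSION, -2, -1, -1, -1)
    success, body = response[0], response[1]
    print(body if success else "Error: " + body)  # body is XML on success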
OpenNebula scheduler configuration
These values are subject to frequent change; their meaning is documented at http://opennebula.org/documentation:archives:rel3.0:schg (see the sketch after the list below for their combined effect).
-t (seconds between two scheduling actions): 30
-m (max number of VMs managed in each scheduling action): 300
-d (max number of VMs dispatched in each scheduling action): 30
-h (max number of VMs dispatched to a given host in each scheduling action): 1
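To make the combined effect of these limits concrete, the sketch below estimates how long the scheduler needs to drain a backlog of pending VMs under the values above. The backlog size and host count are hypothetical; only the flag values come from this configuration.

    import math

    def cycles_to_dispatch(pending_vms, hosts,
                           per_cycle=30,  # -d: max VMs dispatched per action
                           per_host=1):   # -h: max VMs per host per action
        # Each scheduling action places at most min(-d, hosts * -h) VMs;
        # -m (300) does not bind for backlogs this small.
        dispatched = min(per_cycle, hosts * per_host)
        return math.ceil(pending_vms / dispatched)

    # Hypothetical backlog of 20 VMs across 3 hosts: -h 1 caps dispatch at
    # 3 VMs per action, one action every 30 seconds (-t), so 7 actions
    # and roughly 210 seconds.
    print(cycles_to_dispatch(20, 3) * 30, "seconds")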
Permanent resources
EPCC provides 3 dedicated nodes as permanent resources. Two of these nodes each offer four 12-core AMD Opteron 6176 processors (2.3GHz) and 128GB of memory (96 cores and 256GB of memory in total). The third dedicated node hosts the EPCC front-end and service VMs. It currently offers 6TB of storage to BonFIRE, with a further 6-10TB available, subject to RAID configuration.
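For clarity, the quoted totals follow directly from the per-node figures:

    # Two compute nodes, each with four 12-core processors and 128GB RAM.
    nodes = 2
    total_cores = nodes * 4 * 12   # = 96 cores
    total_memory_gb = nodes * 128  # = 256GB
    print(total_cores, total_memory_gb)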
On-request resources
On-request resources will be made available in the last year of the project.
Networking
The nodes are connected by Gigabit Ethernet. Six public IP addresses are available for VMs. At present the firewall allows incoming connections to these public IPs on TCP/UDP ports 80 and 443 only; additional ports may be opened on request.
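A quick reachability check against these rules can be scripted with the standard library alone. In the sketch below the address is a placeholder (from the TEST-NET-1 documentation range), and only the TCP side of the rule is probed:

    import socket

    HOST = "192.0.2.10"        # placeholder; substitute a real VM public IP
    ALLOWED_PORTS = (80, 443)  # the only inbound ports currently permitted

    for port in ALLOWED_PORTS:
        try:
            with socket.create_connection((HOST, port), timeout=5):
                print(f"tcp/{port}: open")
        except OSError as exc:
            print(f"tcp/{port}: unreachable ({exc})")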
The following networking features are planned for future releases: IPv6 networking for VMs, interconnection with FEDERICA, and a bandwidth-on-demand service to PSNC via AutoBAHN.
HP offers 32 dedicated dual-processor nodes as permanent resources. The 64 processors are Xeon X5450 (3GHz), and the nodes share 5TB of storage. The nodes are interconnected by switched Gigabit Ethernet.
HP may provide up to 96 additional nodes and up to 32TB of additional SAN storage dedicated to BonFIRE. This additional infrastructure will be provided on request and is strictly limited by the available capacity of the HP cloud research infrastructure.