Center for High Performance Computing of Babes-Bolyai University

The Center for High Performance Computing of Babeș-Bolyai University is jointly operated by the Faculty of Mathematics and Computer Science, through the Research Center of Modeling, Optimization and Simulation (http://www.cs.ubbcluj.ro/mos/), and the Faculty of Economics and Business Administration, through the Business Informatics Research Center (http://ccie.econ.ubbcluj.ro/). The center serves as an infrastructure for computational research at Babeș-Bolyai University of Cluj-Napoca. Our HPC infrastructure supports research groups in a variety of scientific domains, such as mathematics, computer science, physics, chemistry, business and economics, biology, geography, meteorology, and communication science, among others.

The Center for High Performance Computing was established through the MADECIP project, “Disaster Management Research Infrastructure Based on HPC” (“Dezvoltarea Infrastructurii de Cercetare pentru Managementul Dezastrelor Bazat pe Calcul de Înaltă Performanţă”). The project was awarded to Babeş-Bolyai University, with funding provided by the Sectoral Operational Programme “Increase of Economic Competitiveness”, Priority Axis 2, co-financed by the European Union through the European Regional Development Fund, “Investments in Your Future” (POSCCE COD SMIS CSNR 48806/1862).

The infrastructure was subsequently upgraded through the project “Upgrade of the Cloud Infrastructure of the Babeș-Bolyai University Cluj-Napoca in Order to Develop an Academic Management and Decision Support Integrated System Based on Big&Smart Data – SmartCloudDSS” (POC/398/1/1/124155), co-financed by the European Regional Development Fund (ERDF) through the Competitiveness Operational Programme 2014-2020.

The system serves two main purposes:

A. High performance computing

    Two interconnected clusters are available for HPC:

    a1. IBM NextScale cluster located in the FSEGA Building, with the following technical description:

    • Performance: Rpeak 62 TFlops (theoretical), Rmax 40 TFlops (measured)
    • 68 IBM nx360 M5 compute nodes, of which:
      • 50 nodes with 2 × Intel Xeon E5-2660 v3 CPUs (10 cores each)
      • 12 nodes with 2 × Intel Xeon E5-2660 v3 CPUs (10 cores each) and 2 × Nvidia K40X GPUs
      • 6 nodes with 2 × Intel Xeon E5-2660 v3 CPUs (10 cores each) and an Intel Xeon Phi coprocessor
    • CPU: 2 × Intel Xeon E5-2660 v3 processors per node, 10 cores each
    • Memory: 128 GB RAM per node
    • Local storage: 2 × 500 GB SATA HDDs per node
    • Fast networking: 56 Gb/s InfiniBand (Mellanox FDR switch SX6512 with 216 ports, 1:1 subscription rate)
    • Shared storage: NetApp E5660 disk array with 120 × 600 GB SAS HDDs, 72 TB total
    • Parallel file system: IBM GPFS 4.x
    • Backup: IBM TS3100 tape library for data archiving
    • Operating system on nodes: Red Hat Enterprise Linux 6 (with subscription)
    • HPC management software: IBM Platform HPC 4.2
    • Others: 2 management nodes, 2 NSD (GPFS Network Shared Disk) servers, Fast Ethernet switches

    a2. DELL cluster located in the main UBB building on Kogalniceanu Street, Cluj-Napoca, with the following technical description:

    • 12 × DELL PowerEdge R640 servers, each with:
      • CPU: 2 × Intel Xeon Gold 6230 @ 2.10 GHz, 20 cores each
      • Memory: 512 GB RAM
      • Storage: 2 × 450 GB SSDs
      • Network: 4 × 10 Gbps Ethernet NICs
    • Dell EMC Unity 380 storage unit: 48 TB raw, 8 × 16 Gbps Fibre Channel ports, 8 × 10 Gbps SFP+ ports, 128 GB cache memory
    • Fibre Channel switch with 16 ports
    • 2 × fast networking switches with 10 Gbps, 40 Gbps & 100 Gbps ports

    The two clusters are interconnected by a high-speed 40 Gbps link.

    We offer domain-specific HPC software, including:

    • Intel Parallel Studio Cluster Edition
    • Matlab, Mathematica
    • Rogue Wave TotalView
    • Lumerical FDTD
    • Comsol Multiphysics
    • Gaussian
    • Materials Studio

    B. Cloud computing

    An IBM Flex System consists of:

    • 10 × IBM Flex System x240 virtualization servers, each with:
      • 2 × Intel Xeon E5-2640 v2 processors
      • 128 GB RAM
      • 2 × 240 GB SATA SSDs
    • 1 management server
    • Private cloud software: IBM Cloud Manager with OpenStack 4.2
    • Monitoring and management software: IBM Flex System Manager software stack
    • Virtualization software: VMware vSphere Enterprise 5.1

    The clusters are used extensively in compute-intensive research across Babeș-Bolyai University, in domains such as artificial intelligence, chemistry, physics, biology, disaster management, environmental science, psychology, and economics and business studies.

    Access policy and costs

    The system operates around the clock (24/7), and users access it with a username and password. Access is restricted to computers with a BBU IP address. The system is open free of charge to all researchers and students of BBU. External researchers may be granted access if they demonstrate involvement in a research project or activity carried out in collaboration with our university. Before access is granted, users must sign the usage agreement available at http://hpc.cs.ubbcluj.ro/new-user-form/. All users should acknowledge the use of the infrastructure in the acknowledgements section of any publication resulting from research in which it was used.
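Once an account has been created, access follows the usual SSH workflow. The commands below are an illustrative sketch only: the hostname is a placeholder, not the real login node, whose address is communicated when an account is issued.

```shell
# Log in from a machine with a BBU IP address.
# "cluster.hpc.example" is a placeholder hostname, not the real login node.
ssh your_username@cluster.hpc.example

# Copy input data to your home directory on the cluster with scp:
scp input_data.tar.gz your_username@cluster.hpc.example:~/
```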

    Contact

    Prof.dr. Gheorghe Cosmin Silaghi – gheorghe.silaghi@ubbcluj.ro

    Conf.dr. Virginia Niculescu – virginia.niculescu@ubbcluj.ro

    Conf.dr. Adrian Sterca – adrian.sterca@ubbcluj.ro