MHPC Workshop on High Performance Computing

Timezone: Europe/Rome
SISSA, International School for Advanced Studies

Via Bonomea 265, 34136 Trieste, Italy
Giuseppe Piero Brandino, Ivan Girotto, Luca Heltai, Nicola Cavallini (SISSA mathLab), Stefano Cozzini
Description

The Master in High Performance Computing (MHPC) program is organizing its first workshop to highlight its two founding pillars: training and science.

The workshop, which will run from 24 to 26 February 2016, is an international event showcasing state-of-the-art High Performance Computing applied to computational science and engineering.

The workshop will be held in Trieste (Italy) at SISSA, the International School for Advanced Studies, and is an excellent opportunity for leading scientists, students and vendors to meet.

The prize for the best MHPC thesis will be awarded during the event, and the official graduation ceremony will take place.

Participation is free of charge.

Partners:

INFN 

INAF

Material: Notes, Poster, Slides
Participants
  • Adriano Amaricci
  • Alberto Branchesi
  • Alberto Ferrari
  • Alberto Morgante
  • Alberto Sartori
  • Alberto Venturato
  • Aleksandre Lomadze
  • Alessandro Candolini
  • Alessandro Laio
  • Alessandro Marassi
  • Alessandro Michelangeli
  • Alessandro Renzi
  • Alessia Andò
  • Alessio Ansuini
  • Alessio Berti
  • Alex Rodriguez
  • Andrea Bressan
  • Andrea Magrin
  • Andrea Marangon
  • Andrea Mola
  • Anirban Roy
  • Anna Somma
  • Antonio Lanza
  • Arjuna Scagnetto
  • Aurora Maurizio
  • Bojan Zunkovic
  • Claudia Parma
  • Clement Onime
  • Cristiano De Nobili
  • Célia Laurent
  • César Ernesto González González
  • Damas Makweba
  • Daniele Bertolini
  • Daniele Ceravolo
  • Daniele Tavagnacco
  • Daniele Tolmelli
  • Davide Cingolani
  • Davide Gei
  • Donatella Lucchesi
  • Edmondo Orlotti
  • Edoardo Milotti
  • Edvin Močibob
  • Edwin Fernando Posada Correa
  • Eric Pascolo
  • Erik Romelli
  • Estelle Maeva Inack
  • Evelina Parisi
  • Ezio Corso
  • Fabio Affinito
  • Fabio Gallo
  • Fabio Pasian
  • Fabio Pichierri
  • Fatema Mohamed
  • Federico Carminati
  • Filip Skrinjar
  • Filippo Salmoiraghi
  • Filippo Spiga
  • Francesco Ballarin
  • Francesco De Giorgi
  • Francesco Longo
  • Francesco Schiumerini
  • Franco Vaccari
  • Gianfranco Gallizia
  • Gianluca Coidessa
  • Gianluca Gustin
  • Gianluca Orlando
  • Gianluigi Rozza
  • Giorgia Del Bianco
  • Giorgio Bolzon
  • Giorgio Pastore
  • Giovanni Alzetta
  • Giovanni Corsi
  • Giovanni Grilli di Cortona
  • Giulia Matilde Ferrante
  • Giuliano Taffoni
  • Giuseppe Chechile
  • Giuseppe Murante
  • Giuseppe Piero Brandino
  • Giuseppe Pitton
  • Giuseppe Puglisi
  • Guido Bortolami
  • Guido Lupieri
  • Ivan Girardi
  • Ivan Girotto
  • Jack Dongarra
  • Jacopo Surace
  • Jimmy Aguilar Mena
  • Juan Carlos Vasquez Carmona
  • Juan Manuel Carmona Loaiza
  • Jure Pečar
  • Katy Alazo-Cuartas
  • Kevin Bianco
  • Klaus Zimmermann
  • Laura Bertolini
  • Leonardo Belpassi
  • Leonardo Romor
  • Loris Ercole
  • Luca Degano
  • Luca Della Mora
  • Luca Donatini
  • Luca Heltai
  • Luca Tornatore
  • Luka Živulović
  • Mahbube Rustaee
  • Marc Saint Georges
  • Marco Borelli
  • Marco Briscolini
  • Marco Buttu
  • Marco De Pasquale
  • Marco Pividori
  • Marco Raveri
  • Marco Reale
  • Marco Tezzele
  • Maria Berti
  • Maria d'Errico
  • Maria Peressi
  • Maria Verina
  • Mariami Rusishvili
  • Marina Cobal
  • Marko Kobal
  • Marlon Brenes
  • Massimo Masera
  • Massimo Tormen
  • Matteo Cerminara
  • Matteo Nori
  • Matteo Rinaldi
  • Matteo Sandri
  • Matteo Simone
  • Mauro Bardelloni
  • Michele Vidotto
  • Miguel Carvajal
  • Mila Bottegal
  • Milena Valentini
  • Minase Tekleab
  • Moreno Baricevic
  • Muhammad Owais
  • Najmeh Foroozani
  • Nicola Bassan
  • Nicola Cavallini
  • Nicola Demo
  • Nicola Giuliani
  • Nicola Marzari
  • Noe Caruso
  • Ornela Maloku
  • Ornela Mulita
  • Paolo F.sco Lenti
  • Paolo Giannozzi
  • Peter Klin
  • Peter Labus
  • Philippe Cance
  • Pierluigi Di Cerbo
  • Piero Colli Franzone
  • Rajesh Babu Muda
  • Ralph Gebauer
  • Riccarda Bonsignori
  • Riccardo Di Meo
  • Riccardo Pigazzini
  • Rita Carbone
  • Roberto Siagri
  • Romain Murenzi
  • Rossella Aversa
  • Sabrina Visintin
  • Sandro Scandolo
  • Sanzio Bassini
  • Sebastiano Saccani
  • Seher Karakuzu
  • Seyed Ehsan Nedaaee Oskoee
  • Seyyedmaalek Momeni
  • Shima Talehy Moineddin
  • Silvano Simula
  • Simone Brazzale
  • Simone Economo
  • Simone Martini
  • Simone Peirone
  • Simone Piccinin
  • Simone Scacchi
  • Stefano Alberto Russo
  • Stefano Borgani
  • Stefano Cozzini
  • Stefano Cristiani
  • Stefano de Gironcoli
  • Stefano Piani
  • Stefano Piano
  • Stefano Salon
  • Stella Valentina Paronuzzi Ticco
  • Tadej Kanduc
  • Thomas Gasparetto
  • Thomas Puzzera
  • Tomaso Esposti Ongaro
  • Tommaso Bianucci
  • Tommaso Gorni
  • Ulrich Singe
  • Valentino Pizzone
  • Vedran Skrinjar
  • Veronica Biffi
  • Virginia Carnevali
  • Vittorio Sciortino
  • Volker Springel
  • Wolfgang Bangerth
  • Zakia Zainib
Timetable
    • 1:00 PM 2:00 PM
      Registration (Room 128)
    • 2:00 PM 2:15 PM
      Welcome and Introduction (Room 128)
    • 2:15 PM 3:45 PM
      Tutorial: HPC aspects of the Quantum ESPRESSO package, Part 1 (Room 128)
      Conveners: Fabio Affinito, Filippo Spiga (QE Foundation - University of Cambridge)
      • 2:15 PM
        QE, main strategies of parallelization and levels of parallelism 45m
        Speaker: Fabio Affinito
        Slides
      • 3:00 PM
        QE, methodologies for development, maintenance and testing toward code modernization and code sustainability 45m
        Speaker: Filippo Spiga (QE Foundation - University of Cambridge)
        Slides
    • 3:45 PM 4:15 PM
      Coffee 30m
    • 4:15 PM 5:45 PM
      Tutorial: HPC aspects of the Quantum ESPRESSO package, Part 2 (Room 128)
      • 4:15 PM
        QE and many-core architectures 45m
        Speaker: Fabio Affinito
        Slides
      • 5:00 PM
        QE and heterogeneous architectures 45m
        Speaker: Filippo Spiga (QE Foundation - University of Cambridge)
        Slides
    • 8:30 AM 9:00 AM
      Registration (Aula Magna Paolo Budinich)
    • 9:00 AM 9:25 AM
      Welcome and general introduction (Aula Magna Paolo Budinich)
      • 9:00 AM
        Welcome address by SISSA Director 5m
      • 9:05 AM
        Welcome address by ICTP Director 5m
      • 9:10 AM
        Welcome address by Governmental Institutions 15m
    • 10:30 AM 11:00 AM
      Coffee Break 30m (Lobby, Aula Magna Paolo Budinich)
    • 11:00 AM 12:15 PM
      HPC projects in FVG, Italy, and Europe (Aula Magna Paolo Budinich)
      Convener: Stefano Ruffo
      • 11:00 AM
        Perspectives of HPC in FVG 15m
        Speaker: Sandro Scandolo (ICTP)
        Slides
      • 11:15 AM
        The Quantum ESPRESSO Project 15m
        Speaker: Paolo Giannozzi (UNIUD)
        Slides
      • 11:30 AM
        The MAX Center of Excellence 15m
        Speaker: Elisa Molinari (CNR NANO)
        Slides
      • 11:45 AM
        The EXANEST project 15m
        Speaker: Giuliano Taffoni (OATS - INAF)
        Slides
      • 12:00 PM
        The INFN vision for the future of HPC and HTC, from regional areas to Europe 15m
        Speaker: Donatella Lucchesi (INFN - University of Padova)
        Slides
    • 12:15 PM 12:45 PM
      Graduation and Best Thesis Award Ceremony (Aula Magna Paolo Budinich)

      The 2015 class will be awarded the MHPC diploma, and the prize for the best thesis will be awarded as well.

    • 12:45 PM 2:00 PM
      Lunch 1h 15m (Lobby, Aula Magna Paolo Budinich)
    • 2:00 PM 3:15 PM
      HPC in industry: some regional examples (Aula Magna Paolo Budinich)
      Convener: Stefano Cozzini
      • 2:00 PM
        The Personal Computer is dead. Long live the Personal HPC. 30m
        The exponential growth of computation is bringing us close to an evolutionary step in the way we use HPC, extending and expanding the class of problems it can address. The ongoing digital transformation and software containerization are enabling the use of HPC systems in most fields of human activity. The new digital, hyperconnected world needs HPC scientists, not just data scientists.
        Speaker: Roberto Siagri (Eurotech spa)
        Slides
      • 2:30 PM
        Big Data and HPC @ Generali 15m
        Generali is one of the most established insurance companies in Europe, looking ahead to innovative product development and new markets across the world. In order to better serve its business lines and to identify products valuable to customers, Generali created the Group Chief Data Office function, whose mission is to define and implement strategies and methods to acquire, analyze and govern data. By evaluating and adopting several modern data analysis techniques, the GCDO is addressing the Big Data challenge and will leverage HPC in dedicated research beyond the traditional insurance modeling analysis.
        Speaker: Dr Alberto Branchesi (Generali spa)
      • 2:45 PM
        HPC solutions for Ship Design @ Fincantieri 15m
        Fincantieri is one of the leading ship design companies in the world. In 2014 we deployed an HPC cluster that is mainly used to perform Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEM) calculations. We equipped our cluster with a Citrix virtualization solution that allows our users to perform their pre-processing activities directly on a few reserved nodes of the cluster. CFD calculations are usually performed to optimize the hull, the appendages and the propellers; FEM calculations are performed to assess the behavior of the ship's internal structure and its response to vibrations.
        Speakers: Dr Gianluca Gustin (Fincantieri), Dr Giuseppe Chechile (Fincantieri)
      • 3:00 PM
        MHPC Thesis: A computational ecosystem for near-real-time processing of satellite data 15m
        The aim of this work is the development of a computational ecosystem for near-real-time inversion of high-spectral-resolution infrared data coming from meteorological satellites. The ecosystem has been developed as a near-real-time demonstration project to process the Level 2 products derived from MTG-IRS.
        Speaker: Stefano Piani (MHPC - eXact-lab srl)
        Slides
    • 3:15 PM 3:45 PM
      Coffee Break 30m (Lobby, Aula Magna Paolo Budinich)
    • 3:45 PM 5:45 PM
      HPC in Mathematics (Aula Magna Paolo Budinich)
      Conveners: Luca Heltai, Nicola Cavallini
      • 3:45 PM
        Finite Element Methods at Realistic Complexities 40m
        Solving realistic, applied problems with the most modern numerical methods introduces many levels of complexity. In particular, one has to think about not just a single method, but a whole collection of algorithms: a single code may utilize fully adaptive, unstructured meshes; nonlinear, globalized solvers; algebraic multigrid and block preconditioners; and do all this on 1,000 processors or more with realistic material models. Codes at this level of complexity can no longer be written from scratch. However, over the past decade, many high quality libraries have been developed that make writing advanced computational software simpler. In this talk, I will briefly introduce the deal.II finite element library (http://www.dealii.org) whose development I lead and show how it has enabled us to develop the ASPECT code (http://aspect.dealii.org) for simulation of thermal convection. The project also builds on a variety of other libraries (e.g., p4est, Threading Building Blocks, and Trilinos) that provide parallelism at various levels. I will discuss some of the results obtained with this code and comment on the lessons learned from developing this massively parallel code for the solution of a complex problem.
        Speaker: Wolfgang Bangerth (Texas A&M)
        Slides
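        For readers who have not used deal.II before, a minimal sketch of what a program built on the library looks like is given below. It loosely follows the library's introductory "step-1" tutorial; the output file name and refinement level are illustrative choices, not taken from the talk.

        // Minimal deal.II sketch: build a square mesh, refine it uniformly,
        // and write it to a file for visualisation (cf. the step-1 tutorial).
        #include <deal.II/grid/tria.h>
        #include <deal.II/grid/grid_generator.h>
        #include <deal.II/grid/grid_out.h>
        #include <fstream>

        int main()
        {
          using namespace dealii;

          Triangulation<2> triangulation;            // 2D mesh container
          GridGenerator::hyper_cube(triangulation);  // unit square, one cell
          triangulation.refine_global(4);            // four uniform refinements

          GridOut grid_out;
          std::ofstream out("grid.vtk");             // illustrative file name
          grid_out.write_vtk(triangulation, out);    // view with ParaView/VisIt
        }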
      • 4:25 PM
        MHPC Thesis: Hybrid Parallelisation Strategies for Boundary Element Methods 20m
        Whenever a mathematical problem admits a boundary integral representation, it can be straightforwardly discretised by Boundary Element Methods (BEM). In this work, we present an efficient hybrid parallel solver for FSI problems based on collocation BEM. The major bottlenecks of a serial BEM implementation are the computational cost and the memory requirements needed to assemble and store, respectively, the full BEM matrices. Both memory storage and assembly CPU time scale with the square of the number of degrees of freedom. We present two different strategies to parallelise BEM implementations. The first uses an MPI strategy, in which we distribute both the assembly workload and the storage requirements among different processors, maintaining the classical BEM structure (and algorithmic complexity). This approach leads to optimal strong and weak scalability for the matrix assembly cycles and the matrix-vector multiplication, although the overall algorithm remains of order O(N^2). In the second strategy, we employ a Fast Multipole Method (FMM) to reduce the computational cost and memory allocation of the BEM problem to O(N), and we use a hybrid MPI and multi-threaded parallelization strategy. This implementation combines direct BEM close-range interactions with FMM long-range couplings, and represents the state of the art in parallel BEM solvers. The BEM-FMM algorithm calls for a hybrid solution, since it inherently requires a lot of communication among different processors. We address the main parallelisation techniques to be used in a hybrid parallel BEM-FMM implementation, for which we used the Intel Threading Building Blocks paradigm to handle multicore platforms and MPI for the communication between different processors. We present strong and weak scalability results, together with an optimality result concerning the proper way to set the hierarchical FMM space subdivision.
        Speaker: Mr Nicola Giuliani (MHPC - SISSA)
        Slides
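        A rough flavour of the first (pure-MPI) strategy is sketched below: the rows of a dense BEM-like matrix are distributed in blocks across MPI ranks, each rank assembles and stores only its own block, and the matrix-vector product gathers the distributed result. The kernel, sizes and names are illustrative placeholders, not the thesis implementation (which uses collocation BEM and, in the hybrid variant, Intel Threading Building Blocks for the threaded parts).

        // Sketch: row-block distribution of a dense BEM-like matrix over MPI ranks.
        // Each rank assembles and stores only its O(N^2/P) row block; the
        // matrix-vector product gathers the distributed result on every rank.
        #include <mpi.h>
        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Placeholder 1/r-type interaction between "panels" i and j.
        double kernel(int i, int j)
        {
          return (i == j) ? 1.0 : 1.0 / std::fabs(double(i - j));
        }

        int main(int argc, char **argv)
        {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          const int n = 4096;                     // global number of unknowns
          const int r0 = rank * n / size;         // first row owned by this rank
          const int r1 = (rank + 1) * n / size;   // one past the last owned row
          const int rows = r1 - r0;

          // Local assembly: only the owned rows are ever touched or stored.
          std::vector<double> A(rows * n);
          for (int i = 0; i < rows; ++i)
            for (int j = 0; j < n; ++j)
              A[i * n + j] = kernel(r0 + i, j);

          // Distributed y = A x, with x replicated on every rank.
          std::vector<double> x(n, 1.0), y_local(rows, 0.0), y(n, 0.0);
          for (int i = 0; i < rows; ++i)
            for (int j = 0; j < n; ++j)
              y_local[i] += A[i * n + j] * x[j];

          // Gather the row blocks of y so every rank holds the full vector.
          std::vector<int> counts(size), displs(size);
          for (int p = 0; p < size; ++p) {
            counts[p] = (p + 1) * n / size - p * n / size;
            displs[p] = p * n / size;
          }
          MPI_Allgatherv(y_local.data(), rows, MPI_DOUBLE,
                         y.data(), counts.data(), displs.data(), MPI_DOUBLE,
                         MPI_COMM_WORLD);

          if (rank == 0)
            std::printf("y[0] = %g\n", y[0]);
          MPI_Finalize();
        }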
      • 4:45 PM
        High performance computing for computational electrocardiology. Part I: motivation and mathematical models. 30m
        Life sciences could benefit immensely from the massive growth of HPC processing power that has occurred over the last ten years. Indeed, complex biological systems are described by sophisticated mathematical models, whose solution requires highly scalable solvers. In particular, in cardiac electrophysiology, the simulation of the electrical excitation of the heart muscle, and of the subsequent contraction-relaxation process, represents a challenging computational task. In the present talk, we will describe the main mathematical model of the cardiac electrical and mechanical interactions, the so-called cardiac electro-mechanical coupling model. This model consists of a system of non-linear partial differential equations (PDEs), constituted by four sub-models: the quasi-static anisotropic finite elasticity equations describing the macroscopic deformation of the cardiac tissue; the active tension model, i.e. a system of ordinary differential equations (ODEs) describing the intracellular calcium dynamics and the consequent generation of the cellular force; the anisotropic Bidomain model, i.e. a system of degenerate parabolic reaction-diffusion PDEs describing the electrical current flow through the tissue; and the membrane model, i.e. a stiff system of ODEs describing the bioelectrical activity of the membrane of cardiac cells. We will finally present the results of three-dimensional simulations of the full cardiac excitation-contraction process.
        Speaker: Piero Colli Franzone (UNIPV)
        Slides
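        For reference, a standard parabolic-parabolic form of the Bidomain model mentioned in the abstract reads as follows; notation and scaling may differ from the speaker's.

        \begin{aligned}
          \chi\, c_m\, \partial_t v - \nabla\cdot\bigl(D_i \nabla u_i\bigr) + \chi\, i_{\mathrm{ion}}(v,w) &= i^{\mathrm{app}}_i, \\
          -\chi\, c_m\, \partial_t v - \nabla\cdot\bigl(D_e \nabla u_e\bigr) - \chi\, i_{\mathrm{ion}}(v,w) &= -\, i^{\mathrm{app}}_e, \\
          \partial_t w &= R(v,w), \qquad v = u_i - u_e,
        \end{aligned}

        where u_i and u_e are the intra- and extracellular potentials, v = u_i - u_e the transmembrane potential, D_i and D_e the anisotropic conductivity tensors, chi the membrane surface-to-volume ratio, c_m the membrane capacitance, w the gating and ionic variables, and i_ion and R come from the membrane model.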
      • 5:15 PM
        High performance computing for computational electrocardiology. Part II: scalable solvers. 30m
        The complex interaction between the cardiac bioelectrical and mechanical phenomena is modeled by a system of non-linear partial differential equations (PDEs), known as the cardiac electro-mechanical coupling (EMC) model. Due to the extremely different spatial and temporal scales of the physical phenomena occurring during a single heartbeat, the discretization of the EMC model with finite elements in space and finite differences in time yields the solution of thousands of large-scale linear systems, with $O(10^6-10^8)$ degrees of freedom each. The effective solution of such linear systems requires the use of hundreds or thousands of processors and, consequently, of highly scalable preconditioners. In this presentation, we will first introduce two classes of Domain Decomposition preconditioners, the Multilevel Additive Schwarz (MAS) and the Balancing Domain Decomposition by Constraints (BDDC) preconditioners, in the simple setting of a scalar elliptic PDE. Then, we will extend such preconditioners to the solution of the reaction-diffusion PDEs and of the non-linear elasticity system constituting the EMC model. Finally, the results of three-dimensional parallel simulations will demonstrate the effectiveness of the resulting algorithms.
        Speaker: Simone Scacchi (UNIMI)
        Slides
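        As a pointer to what "scalable preconditioner" means in this context, the textbook two-level additive Schwarz operator, on which multilevel Schwarz methods build, can be written as below; the MAS and BDDC preconditioners discussed in the talk are more elaborate variants.

        B_{\mathrm{AS}}^{-1} = R_0^{T} A_0^{-1} R_0 + \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i,
        \qquad A_0 = R_0 A R_0^{T}, \quad A_i = R_i A R_i^{T},

        where R_0 restricts to a coarse space and each R_i restricts to an overlapping subdomain; the N local solves are independent of each other, which is what makes this family of preconditioners attractive on hundreds or thousands of processors.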
    • 8:45 AM 10:15 AM
      HPC in science: Condensed Matter (Room 128)
      Convener: Ivan Girotto
      • 8:45 AM
        Here and now: the intersection of computational science and computer science 40m
        Quantum-mechanical simulations have become dominant and widely used tools for scientific discovery and technological advancement; since they are performed without any experimental input or parameter they can streamline, accelerate, or replace actual physical experiments. This is a far-reaching paradigm shift, substituting the cost- and time-scaling of brick-and-mortar facilities, equipment, and personnel with those, very different, of computing engines. Nevertheless, computational science remains anchored to a renaissance model of individual artisans gathered in a workshop, under the guidance of an established practitioner. Great benefits could follow from rethinking such model, while adopting concepts and tools from computer science for the automation, management, preservation, analytics, and dissemination of these computational efforts. I will offer my perspective on the current state-of-the-art in the field, its power and limitations, and the role and opportunities of high-throughput computing (HTC, rather than HPC), of open source codes and workflows, and of big data available on demand.
        Speaker: Nicola Marzari (EPFL)
      • 9:25 AM
        High Performance Computing and Materials Science: How atomistic simulations can pave the way for clean and sustainable energy 20m
        The availability of cheap and abundant energy was one of the main drivers of the industrial revolution. To this day, energy remains an essential ingredient for many aspects of human activity. It is recognized that a major challenge of our times is the transition towards sustainable energy conversion, moving away from carbon-based fossil fuels. Developing more efficient and cheaper ways to convert wind or solar radiation into electricity, or to store electric energy, are important steps in this transition. Computer simulations at the atomic scale can lead to a detailed understanding of the fundamental steps during energy conversion. In this presentation, I will illustrate a few cases where such a "computational microscope" can be used by materials scientists to develop better solar cells or to use solar light more efficiently to split water into hydrogen and oxygen. In these cases, high performance computing allows for a screening of potential materials before they have even been synthesized in a laboratory.
        Speaker: Ralph Gebauer (ICTP)
      • 9:45 AM
        MHPC thesis: High-performance implementation of the Density Peak clustering algorithm 15m
        We developed a parallel implementation of the “Density Peak” clustering algorithm, exploiting C++11, OpenMP and the FLANN library for k-nearest-neighbour search. The modified algorithm is approximately 50 times faster than the original version on datasets with half a million points, and scales almost linearly with the dataset size. Thanks to improvements to the density estimation and assignment procedures, the algorithm is also unsupervised and non-parametric.
        Speaker: Marco Borelli (MHPC - SISSA)
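        For context, the two per-point quantities at the heart of the Density Peak algorithm, the local density rho_i and the distance delta_i to the nearest point of higher density, can be computed by a brute-force O(N^2) loop, sketched below in C++ with OpenMP. The thesis implementation replaces the brute-force neighbour search with FLANN k-nearest-neighbour queries and an improved density estimator; names, the 2D points and the cutoff choice here are illustrative.

        // Sketch of the "decision graph" quantities of Density Peak clustering
        // (Rodriguez & Laio, 2014): rho_i counts neighbours within a cutoff d_c,
        // delta_i is the distance to the nearest point of higher density.
        // Brute-force O(N^2) version, parallelised over points with OpenMP.
        #include <algorithm>
        #include <cmath>
        #include <limits>
        #include <vector>

        struct Point { double x, y; };

        double dist(const Point &a, const Point &b)
        {
          return std::hypot(a.x - b.x, a.y - b.y);
        }

        void density_peak_quantities(const std::vector<Point> &pts, double d_c,
                                     std::vector<double> &rho,
                                     std::vector<double> &delta)
        {
          const int n = static_cast<int>(pts.size());
          rho.assign(n, 0.0);
          delta.assign(n, std::numeric_limits<double>::max());

          // Local density with a hard cutoff estimator.
          #pragma omp parallel for
          for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
              if (j != i && dist(pts[i], pts[j]) < d_c)
                rho[i] += 1.0;

          // Distance to the nearest point of higher density; the global
          // density maximum conventionally receives the largest distance.
          #pragma omp parallel for
          for (int i = 0; i < n; ++i) {
            double dmax = 0.0;
            for (int j = 0; j < n; ++j) {
              const double d = dist(pts[i], pts[j]);
              dmax = std::max(dmax, d);
              if (rho[j] > rho[i])
                delta[i] = std::min(delta[i], d);
            }
            if (delta[i] == std::numeric_limits<double>::max())
              delta[i] = dmax;
          }
          // Cluster centres are the points with anomalously large rho_i and
          // delta_i; the remaining points are assigned to the cluster of their
          // nearest higher-density neighbour.
        }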
      • 10:00 AM
        Improving Performance of Basis-set-free Hartree-Fock Calculations Through Grid-based Massively Parallel Techniques 15m
        A multicenter numerical integration scheme for polyatomic molecules has been implemented as an initial step towards developing a complete basis-set-free Hartree-Fock (HF) code. The validation of the integration scheme includes the integration of the total density and the calculation of Coulomb potentials for several diatomic molecules. A finite difference method is used to solve Poisson's equation for the Coulomb potential on numerical orbitals expanded on the interlocking multicenter quadrature grid. The implementation, which relies on OpenMP and CUDA, shows a speedup of up to 30x.
        Speaker: Mr Fernando Posada (MHPC)
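        The multicenter quadrature referred to above is, in essence, a partition-of-unity construction; assuming a Becke-type scheme (the common choice for such grids, used here purely for illustration), a molecular integral is split into atom-centred pieces that are each evaluated on a radial-angular grid:

        \int f(\mathbf{r}) \, d\mathbf{r}
          = \sum_{A} \int w_A(\mathbf{r})\, f(\mathbf{r}) \, d\mathbf{r}
          \approx \sum_{A} \sum_{i \in \mathrm{grid}(A)} w_A(\mathbf{r}_i)\, \omega_i\, f(\mathbf{r}_i),
        \qquad \sum_{A} w_A(\mathbf{r}) = 1,

        where the atomic weights w_A form a smooth partition of unity (close to 1 near nucleus A and decaying towards the other nuclei) and the omega_i are the quadrature weights of the atom-centred radial-angular grid.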
    • 10:15 AM 10:45 AM
      Coffee 30m
    • 10:45 AM 1:00 PM
      HPC in science: Astrophysics and Earth Science (Room 128)
      Convener: Giuliano Taffoni
      • 10:45 AM
        Simulating Cosmic Structure Formation 30m
        Numerical simulations on supercomputers play an ever more important role in astrophysics. They have become the tool of choice to predict the non-linear outcome of the initial conditions left behind by the Big Bang, providing crucial tests of cosmological theories. However, the problem of galaxy and star formation confronts us with a staggering multi-physics complexity and an enormous dynamic range that severely challenges existing numerical methods. In my talk, I review current strategies to address these problems, focusing on recent developments in the field such as hierarchical time integration schemes, improved particle- and mesh-based hydrodynamical solvers. I will also discuss a selection of current results and highlight some challenges for the future.
        Speaker: Volker Springel (The Heidelberg Institute for Theoretical Studies)
      • 11:15 AM
        Numerical simulations of galaxies and galaxy clusters at the Trieste Observatory 20m
        Speaker: Giuseppe Murante (OATS-INAF)
        Slides
      • 11:35 AM
        HPC for Earth Sciences: training opportunities and research challenges 20m
        Speaker: Stefano Salon (OGS)
      • 11:55 AM
        MHPC Thesis: SHYFEM Parallelization: An innovative task approach for coastal environment FEM software 20m
        SHYFEM is a finite element hydrodynamic code written by Georg Umgiesser in the 1980s to model the Venice lagoon for his master thesis; its development has been continued by the CNR-ISMAR group. It is one of the few open-source codes for coastal areas that use a finite element approach. SHYFEM is a very important resource because it is focused on coastal areas and can be coupled with other software in order to increase the simulation accuracy in such areas. Coastal areas are strategic because many human activities are concentrated there, which means that software producing an accurate representation of coastal areas may also benefit socio-economic activities. SHYFEM has already been successfully applied to several coastal and lagoon environments; for example, it is used to produce tidal forecasts in the Venice lagoon and other lagoons in the Mediterranean Sea. It is also used in the Danube Delta, to estimate its effects on the Black Sea, and in Malta to produce coastal forecasts. The main goal of this work is to obtain a new version of SHYFEM that is faster, parallel, capable of using modern hardware efficiently, and easily coupled with other software.
        Speaker: Eric Pascolo (MHPC - OGS)
      • 12:15 PM
        HPC in Europe: PRACE 30m
        Speaker: Sanzio Bassini (CINECA)
    • 1:00 PM 2:15 PM
      Lunch 1h 15m
    • 2:15 PM 4:15 PM
      HPC in science: High Energy Physics (Room 128)
      Convener: Andrea Bressan
      • 2:15 PM
        The path toward High Performance Computing in High Energy Physics 40m
        Speaker: Federico Carminati (CERN)
      • 2:55 PM
        Studies of Flavor and Hadron Physics using Lattice QCD simulations with modern HPC hardware 20m
        Speaker: Silvano Simula (INFN - Roma3)
        Slides
      • 3:15 PM
        High Performance Computing in the ALICE experiment 20m
        Speaker: Massimo Masera (Università di Torino)
        Slides
      • 3:35 PM
        MHPC Thesis: Analysis of Hybrid Parallelization strategies: Simulation of Anderson Localization 20m
        This thesis presents two experiences of hybrid programming applied to condensed matter and high energy physics. The two projects differ in various aspects, but both of them aim to analyse the benefits of using accelerated hardware to speed up calculations in current science-research scenarios. The first project enables massive parallelism in a simulation of the Anderson localisation phenomenon in a disordered quantum system. The code represents a Hamiltonian in momentum space, then executes a diagonalization of the corresponding matrix using linear algebra libraries, and finally analyses the energy-level spacing statistics averaged over several realisations of the disorder. The implementation combines different parallelization approaches in a hybrid scheme. The averaging over the ensemble of disorder realisations exploits massive parallelism with a master-slave configuration based on both multi-threading and the Message Passing Interface (MPI). This framework is designed and implemented to easily interface with similar applications commonly adopted in scientific research, for example Monte Carlo simulations. The diagonalization uses multi-core and GPU hardware, interfacing with the MAGMA, PLASMA or MKL libraries. The access to the libraries is modular, to guarantee portability, maintainability and extension in the near future. The second project is the development of a Kalman Filter, including the porting on GPU architectures and autovectorization, for online LHCb triggers. The developed codes provide information about the viability and advantages of applying GPU technologies in the first triggering step of the Large Hadron Collider beauty (LHCb) experiment. The optimisation introduced in both codes, for CPU and GPU, delivered a relevant speedup of the Kalman Filter. The two GPU versions, in CUDA and OpenCL, have similar performances and are adequate to be considered for the upgrade and the corresponding implementations in the Gaudi framework. In both projects we implement optimisation techniques in the CPU code. This report presents extensive benchmark analyses of the correctness and of the performance of both projects.
        Speaker: Jimmy Aguilar Mena (MHPC - ICTP)
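        As a toy version of the ensemble-averaging layer described above, the sketch below distributes disorder realisations over MPI ranks in round-robin fashion, diagonalises a small disordered tight-binding Hamiltonian with LAPACK on each rank, and averages the mean level spacing across the ensemble. The actual thesis code works in momentum space, uses MAGMA/PLASMA/MKL and a threaded master-slave scheme; the matrix size, the real-space disorder model and the use of plain LAPACKE here are simplifications for illustration.

        // Toy sketch of the ensemble-averaging layer: disorder realisations are
        // distributed round-robin over MPI ranks, each rank diagonalises a 1D
        // Anderson-type tight-binding Hamiltonian (diagonal disorder, nearest-
        // neighbour hopping) with LAPACK, and the mean level spacing is averaged.
        #include <mpi.h>
        #include <lapacke.h>
        #include <cstdio>
        #include <random>
        #include <vector>

        int main(int argc, char **argv)
        {
          MPI_Init(&argc, &argv);
          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          const int n = 512;                 // Hamiltonian dimension (illustrative)
          const int n_realisations = 64;     // size of the disorder ensemble
          const double W = 2.0;              // disorder strength

          std::mt19937 gen(12345 + rank);    // independent random stream per rank
          std::uniform_real_distribution<double> eps(-W / 2, W / 2);

          double local_sum = 0.0;
          int local_count = 0;
          for (int r = rank; r < n_realisations; r += size) {  // round-robin split
            std::vector<double> H(n * n, 0.0), w(n);
            for (int i = 0; i < n; ++i) {
              H[i * n + i] = eps(gen);                          // random on-site energy
              if (i + 1 < n)
                H[i * n + (i + 1)] = H[(i + 1) * n + i] = -1.0; // hopping term
            }
            // Eigenvalues of the real symmetric Hamiltonian (no eigenvectors).
            LAPACKE_dsyev(LAPACK_ROW_MAJOR, 'N', 'U', n, H.data(), n, w.data());
            local_sum += (w[n - 1] - w[0]) / (n - 1);           // mean level spacing
            ++local_count;
          }

          double sum = 0.0;
          int count = 0;
          MPI_Reduce(&local_sum, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
          MPI_Reduce(&local_count, &count, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
          if (rank == 0)
            std::printf("mean level spacing over %d realisations: %g\n",
                        count, sum / count);
          MPI_Finalize();
        }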
    • 4:15 PM 4:45 PM
      Coffee 30m