As Computational Science applications grow ever more demanding, both in computing power and in development complexity, it is necessary to provide programming languages and tools that offer a high degree of abstraction, easing the programming of parallel, distributed and grid computing systems. Moreover, these high-level languages are very often based on formal semantics, making it possible to certify the correctness of critical parts of applications.
This special issue of Scalable Computing: Practice and Experience presents recent work of researchers in these fields. The articles are a selection of extended and revised versions of papers presented at the third international workshop on Practical Aspects of High-Level Parallel Programming (PAPP), affiliated with the International Conference on Computational Science (ICCS 2006). The PAPP workshop is aimed both at researchers involved in the development of high-level approaches for parallel and grid computing and at computational science researchers who are potential users of these languages and tools. The topics of the PAPP workshop include high-level models (CGM, BSP, MPM, LogP, etc.) and tools for parallel and grid computing; high-level parallel language design, implementation and optimisation; functional, logic and constraint programming for parallel, distributed and grid computing systems; algorithmic skeletons, patterns and high-level parallel libraries; generative (e.g. template-based) programming with algorithmic skeletons, patterns and high-level parallel libraries; applications in all fields of high-performance computing (using high-level tools); and benchmarks and experiments using such languages and tools.
The Java programming language increases programmer productivity, for example by taking care of memory management, by providing a wide collection of data structures (made safer by the recent introduction of genericity into the language), and through many other features. JIT compilation techniques make Java virtual machines quite efficient. Java is therefore an interesting basis for high-level parallelism. The two papers selected from the PAPP 2006 workshop focus on parallel programming with Java. In their paper MUSKEL: an expandable skeleton environment, Marco Aldinucci, Marco Danelutto and Patrizio Dazzi propose a new skeleton language, Muskel, based on data-flow technology. It implements both the usual predefined skeletons and user-defined parallelism exploitation patterns. Muskel is a pure Java implementation and relies on the annotation and RMI facilities of Java. A Buffering Layer to Support Derived Types and Proprietary Networks for Java HPC, by Mark Baker, Bryan Carpenter and Aamir Shafi, presents a new MPI-like binding for Java. MPJ Express combines two strengths rarely found together in other MPI bindings for Java: it supports derived datatypes and it is implemented in pure Java. The buffering layer used to provide these features also offers a way to implement efficient communication devices for proprietary networks.
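To give readers unfamiliar with algorithmic skeletons a concrete picture, the following is a minimal sketch of the most common skeleton, the task farm, in plain Java. It is an illustration of the concept only, using the standard java.util.concurrent library; the class and method names are our own and do not reflect Muskel's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// A minimal task-farm skeleton: the programmer supplies only the
// sequential worker function; the skeleton handles the parallel
// dispatch of input items to a pool of worker threads.
public class FarmSkeleton {
    public static <A, B> List<B> farm(Function<A, B> worker,
                                      List<A> inputs,
                                      int nWorkers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        try {
            // Submit one task per input item.
            List<Future<B>> futures = new ArrayList<>();
            for (A item : inputs) {
                futures.add(pool.submit(() -> worker.apply(item)));
            }
            // Collect results, preserving input order.
            List<B> results = new ArrayList<>();
            for (Future<B> f : futures) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Apply x -> x*x to each element using 2 worker threads.
        List<Integer> squares = farm(x -> x * x, List.of(1, 2, 3, 4), 2);
        System.out.println(squares); // prints [1, 4, 9, 16]
    }
}
```

The point of the skeleton approach is visible even in this toy version: the coordination code (thread pool, dispatch, result collection) is written once, and applications only plug in sequential functions.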
We would like to thank all the people who made the PAPP workshop possible: the organizers of the ICCS conference and the other members of the programme committee: Marco Aldinucci (CNR/Univ. of Pisa, Italy), Olav Beckmann (Imperial College London, UK), Alexandros Gerbessiotis (NJIT, USA), Stephen Gilmore (Univ. of Edinburgh, UK), Clemens Grelck (Univ. of Luebeck, Germany), Christoph Herrmann (Univ. of Passau, Germany), Zhenjiang Hu (Univ. of Tokyo, Japan), Casiano Rodriguez Leon (Univ. La Laguna, Spain) and Alexander Tiskin (Univ. of Warwick, UK). We also thank the referees external to the PC for their efficient help. Finally, we thank all authors who submitted papers for their interest in the workshop and for the quality and variety of the research topics they proposed.
École Normale Supérieure de Lyon,
46 Allée d'Italie,
69364 Lyon Cedex 07, France.
Laboratoire d'Informatique Fondamentale d'Orléans (LIFO),
University of Orléans,
rue Léonard de Vinci, B. P. 6759,
F-45067 Orléans Cedex 2, France.