Conference and journal papers and books


Timothy G. Mattson, Beverly A. Sanders, and Berna L. Massingill; A Pattern Language for Parallel Programming; Addison-Wesley Software Patterns Series; 2004.

This book presents a pattern language designed to guide the programmer through the entire process of developing a parallel application program. The pattern language includes patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency in parallel execution, and patterns describing lower-level implementation issues. More information is available at http://www.cise.ufl.edu/research/ParallelPatterns.


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "Reengineering for Parallelism: An Entry Point into PLPP (Pattern Language for Parallel Programming) for Legacy Applications"; Proceedings of the Twelfth Pattern Languages of Programs Workshop (PLoP 2005), 2005.

Abstract:

We have developed a pattern language for developing parallel application programs (Patterns for Parallel Programming above). This pattern language, which we call PLPP (Pattern Language for Parallel Programming), embodies a development methodology in which we develop a parallel application by starting with a good understanding of the problem and then working through a sequence of patterns, ending up with code. Often, however, people begin not with a problem to solve from scratch but with a piece of legacy code they need to speed up by parallelizing it. Most of the patterns in PLPP are applicable to this situation, but it is not always clear how to get started. The pattern in this paper addresses this question and in essence provides an alternate point of entry into our pattern language.


Berna L. Massingill; "Some Patterns for CS1 Students"; Proceedings of the Eleventh Pattern Languages of Programs Workshop (PLoP 2004), 2004.

Abstract:

Students in beginning programming courses struggle with many aspects of programming. This paper outlines some patterns intended to be useful to these beginners; they represent advice I have frequently offered (or wanted to offer) to students in beginning courses.


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "Additional Patterns for Parallel Application Programs"; Proceedings of the Tenth Pattern Languages of Programs Workshop (PLoP 2003), 2003.

Abstract:

We are developing a pattern language to guide the programmer through the entire process of developing a parallel application program. The pattern language includes patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency in parallel execution, and patterns describing lower-level implementation issues. Other patterns in the pattern language can be seen at http://www.cise.ufl.edu/research/ParallelPatterns.

In this paper, we briefly outline the overall structure of the pattern language and present selected patterns from the group of patterns that represent different strategies for exploiting concurrency once it has been identified.


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "Some Algorithm Structure and Support Patterns for Parallel Application Programs"; Proceedings of the Ninth Pattern Languages of Programs Workshop (PLoP 2002), 2002.

Abstract:

We are developing a pattern language to guide the programmer through the entire process of developing a parallel application program. The pattern language includes patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency in parallel execution, and patterns describing lower-level implementation issues. The current version of the pattern language can be seen at http://www.cise.ufl.edu/research/ParallelPatterns.

In this paper, we outline the overall structure of the pattern language and present two groups of selected patterns, one group chosen from the subset of patterns that represent different strategies for exploiting concurrency once it has been identified and one group chosen from the subset of patterns that represent commonly used computational and data structures.


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "More Patterns for Parallel Application Programs"; Proceedings of the Eighth Pattern Languages of Programs Workshop (PLoP 2001), 2001.

Abstract:

We are involved in an effort to develop a pattern language for parallel application programs. The pattern language consists of a set of patterns that guide the programmer through the entire process of developing a parallel program, including patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency in parallel execution, and patterns describing lower-level implementation issues. The current version of the pattern language can be seen at http://www.cise.ufl.edu/research/ParallelPatterns.

In this paper, we present three patterns from our pattern language, selected from the set of patterns that are used after the problem has been analyzed to identify the exploitable concurrency. ChooseStructure addresses the question of how to select an appropriate pattern from the others in this set. DivideAndConquer is used when the problem can be solved by recursively dividing it into subproblems, solving each subproblem independently, and then recombining the subsolutions into a solution to the original problem. PipelineProcessing is used when the problem can be decomposed into ordered groups of tasks connected by data dependencies.
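The DivideAndConquer pattern described in this abstract can be illustrated with a minimal Python sketch using the standard-library thread pool. The function name parallel_sum and the threshold and pool-size parameters are illustrative choices, not from the paper; this is a sketch of the general technique, not the paper's own code.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, executor, threshold=10):
    # Base case: small enough to solve directly.
    if len(data) <= threshold:
        return sum(data)
    # Divide: split into two independent subproblems; solve one half in
    # the pool and the other in the current thread.
    mid = len(data) // 2
    left = executor.submit(parallel_sum, data[:mid], executor, threshold)
    right = parallel_sum(data[mid:], executor, threshold)
    # Combine: recombine the subsolutions into a solution to the whole.
    return left.result() + right

# max_workers must exceed the number of in-flight recursive tasks, or
# workers blocked in left.result() can starve the queue (deadlock).
with ThreadPoolExecutor(max_workers=32) as pool:
    print(parallel_sum(list(range(100)), pool))  # 4950
```

The split/solve/recombine structure mirrors the pattern's description; PipelineProcessing would instead connect ordered stages by queues, with each stage consuming its predecessor's output.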


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "Parallel Programming with a Pattern Language"; Software Tools for Technology Transfer volume 3 number 2 (2001).

Abstract:

A design pattern is a description of a high-quality solution to a frequently occurring problem in some domain. A pattern language is a collection of design patterns that are carefully organized to embody a design methodology. A designer is led through the pattern language, at each step choosing an appropriate pattern, until the final design is obtained in terms of a web of patterns. This paper describes a pattern language for parallel application programs aimed at lowering the barrier to parallel programming by guiding a programmer through the entire process of developing a parallel program. We describe the pattern language, present two example patterns, and sketch a case study illustrating the design process using the pattern language.


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "A Pattern Language for Parallel Application Programs"; Proceedings of the Sixth European Conference on Parallel Computing (Euro-Par 2000), 2000; extended version UF CISE TR 99-022.

Abstract:

A design pattern is a description of a high-quality solution to a frequently occurring problem in some domain. A pattern language is a collection of design patterns that are carefully organized to embody a design methodology. A designer is led through the pattern language, at each step choosing an appropriate pattern, until the final design is obtained in terms of a web of patterns. This paper describes a pattern language for parallel application programs. The goal of our pattern language is to lower the barrier to parallel programming by guiding a programmer through the entire process of developing a parallel program.


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "Patterns for Finding Concurrency for Parallel Application Programs"; Proceedings of the Seventh Pattern Languages of Programs Workshop (PLoP 2000), 2000.

Abstract:

We are involved in an ongoing effort to develop a pattern language for parallel application programs. The pattern language consists of a set of patterns that guide the programmer through the entire process of developing a parallel program, including patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency in parallel execution, and patterns describing lower-level implementation issues. The current version of the pattern language can be seen at http://www.cise.ufl.edu/research/ParallelPatterns.

In this paper, we present patterns from the FindingConcurrency design space. These patterns form the starting point for novice parallel programmers and guide them through the process of identifying exploitable concurrency in a problem and designing a high-level algorithm to take advantage of this concurrency.


Berna L. Massingill, Timothy G. Mattson, and Beverly A. Sanders; "Patterns for Parallel Application Programs"; Proceedings of the Sixth Pattern Languages of Programs Workshop (PLoP 1999), 1999.

Abstract:

We are involved in an ongoing effort to design a pattern language for parallel application programs. The pattern language consists of a set of patterns that guide the programmer through the entire process of developing a parallel program, including patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency in parallel execution, and patterns describing lower-level implementation issues. The current version of the pattern language can be seen at http://www.cise.ufl.edu/research/ParallelPatterns.

In this paper, we present three patterns from our pattern language, selected from the set of patterns that are used after the problem has been analyzed to identify the exploitable concurrency. The EmbarrassinglyParallel pattern is used when the problem can be decomposed into a set of independent tasks. The SeparableDependencies pattern can be used when dependencies between tasks can be pulled outside the concurrent execution by replicating data prior to the concurrent execution and then combining the replicated data afterwards. The GeometricDecomposition pattern is used when the problem space can be decomposed into discrete subspaces and the problem solved by first exchanging information among subspaces and then concurrently computing solutions for the subspaces.
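The SeparableDependencies pattern described in this abstract can be sketched in a few lines of Python: each task accumulates into its own replicated copy of the shared result, and the copies are combined after the concurrent phase. The histogram example, chunking scheme, and function names here are illustrative assumptions, not taken from the paper.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def local_histogram(chunk, bins=5):
    # Replicated data: each task accumulates into its own private copy,
    # so there are no dependencies during the concurrent execution.
    counts = [0] * bins
    for x in chunk:
        counts[x % bins] += 1
    return counts

data = list(range(100))
chunks = [data[i::4] for i in range(4)]   # four independent tasks
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(local_histogram, chunks))
# After the concurrent phase, combine the replicas (the reduction step).
total = reduce(lambda a, b: [x + y for x, y in zip(a, b)], partials)
print(total)  # [20, 20, 20, 20, 20]
```

Dropping the replication and reduction, so that the pool simply maps an independent function over the chunks, gives the EmbarrassinglyParallel pattern in its simplest form.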


Berna L. Massingill; "Experiments with Program Parallelization Using Archetypes and Stepwise Refinement"; Parallel Processing Letters volume 9 number 4 (1999); also in Proceedings of the Third International Workshop on Formal Methods for Parallel Programming: Theory and Applications (FMPPTA'98 / IPPS'98), 1998; extended version UF CISE TR 98-012.

Abstract:

Parallel programming continues to be difficult and error-prone, whether starting from specifications or from an existing sequential program. This paper presents (1) a methodology for parallelizing sequential applications and (2) experiments in applying the methodology. The methodology is based on the use of stepwise refinement together with what we call parallel programming archetypes (briefly, abstractions that capture common features of classes of programs), in which most of the work of parallelization is done using familiar sequential tools and techniques, and those parts of the process that cannot be addressed with sequential tools and techniques are addressed with formally justified transformations. The experiments consist of applying the methodology to sequential application programs, and they provide evidence that the methodology produces correct and reasonably efficient programs at reasonable human-effort cost. Of particular interest is the fact that the aspect of the methodology that is most completely formally justified is the aspect that in practice was the most trouble-free.


Beverly A. Sanders, Berna L. Massingill, and Svetlana Kryukova; "Specification and Proof of an Algorithm for Location Management for Mobile Communication Devices"; Parallel Processing Letters volume 8 number 4 (1998); also in Proceedings of the Second International Workshop on Formal Methods for Parallel Programming: Theory and Applications (FMPPTA'97 / IPPS'97), 1997; also UF CISE TR 96-015.

Abstract:

In a network supporting mobile communication devices, a mechanism to find the location of a device, wherever it may be, is needed. In this paper, we present a distributed algorithm for this purpose along with its formal specification and proof sketch. Starting with an algorithm due to Wang, the process of formalization together with careful attention to abstraction leads to a more regular, general, and robust algorithm with a clearer description. An incidental contribution is a useful theorem for proving progress properties in distributed algorithms that use tokens.


Adam Rifkin and Berna L. Massingill; "Performance Analysis for Archetypes"; Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'98), 1998; extended version Caltech CS-TR-96-27.

Abstract:

This document outlines a simple method for benchmarking a parallel communication library and for using the results to model the performance of applications developed with that communication library. We use compositional performance analysis -- decomposing a parallel program into its modular parts and analyzing their respective performances -- to gain perspective on the performance of the whole program. This model is useful for predicting parallel program execution times for different types of program archetypes (e.g., mesh and mesh-spectral) using communication libraries built with different message-passing schemes (e.g., Fortran M and Fortran with MPI) running on different architectures (e.g., IBM SP2 and a network of Pentium personal computers).
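The compositional analysis described in this abstract can be illustrated with a toy model: total runtime is the sum of each phase's computation time plus its communication time, with communication modeled by the classic latency/bandwidth form. The parameter values (alpha latency, beta per-byte cost) and the function names are illustrative assumptions for the sketch, not measured values or code from the paper.

```python
def comm_time(messages, bytes_per_msg, alpha=1e-5, beta=1e-9):
    # Latency/bandwidth model: each message costs alpha + beta * size.
    return messages * (alpha + beta * bytes_per_msg)

def predict(phases):
    # Compositional estimate: sum per-phase compute and communication terms.
    return sum(t_comp + comm_time(msgs, size) for t_comp, msgs, size in phases)

# Two phases: (compute seconds, message count, bytes per message)
phases = [(0.50, 100, 8192), (0.25, 10, 1024)]
print(round(predict(phases), 6))
```

In practice the alpha and beta parameters would come from benchmarking the communication library on the target machine, which is the role the benchmarks in the paper play.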


K. Mani Chandy, Rajit Manohar, Berna L. Massingill, and Daniel I. Meiron; "Integrating Task and Data Parallelism with the Collective Communication Archetype"; Proceedings of the 9th International Parallel Processing Symposium (IPPS'95), 1995; also Caltech CS-TR-94-08.

Abstract:

A parallel program archetype aids in the development of reliable, efficient parallel applications with common computation/communication structures by providing stepwise refinement methods and code libraries specific to the structure. The methods and libraries help in transforming a sequential program into a parallel program via a sequence of refinement steps that help maintain correctness while refining the program to obtain the appropriate level of granularity for a target machine. The specific archetype discussed here deals with the integration of task and data parallelism using group communication. This archetype has been used to develop several applications.