John E. Howland
Department of Computer Science
Trinity University
One Trinity Place
San Antonio, Texas 78212-7200
Voice: (210) 999-7364
Fax: (210) 999-7477
E-mail: jhowland@Trinity.Edu
Web: http://www.cs.trinity.edu/jhowland/
Subject Areas: Computer Science Education, Computer Science Curriculum.
Keywords: Parallelism, Parallel Algorithms.
As Bob Dylan said,
``...the times they are a-changin' ...'' [2]
From the very beginning, the computing discipline has experienced change at an astonishing pace. Computing in academe has grown accustomed to this rapid pace and to frequent, periodic curriculum review. However, an announcement last fall caught many within our field by surprise.
On Friday, October 15, 2004, Intel Corporation announced [5] to the business community that ``Intel Corp. is scrapping plans to hit a high-performance milestone for its flagship microprocessor, the latest in a series of course changes and miscues by the big chip maker.'' The company said that it would not be able to meet its goal of releasing a four-gigahertz Pentium 4 processor chip. Moreover, future chips in the Pentium 4 line were likely to be slightly slower than the 3.8-gigahertz chip available at the time.
It was not that Intel did not want its processors to run at higher clock rates; rather, the company could not deliver chips that would reliably achieve frequencies at or above four gigahertz.
In the April 19, 1965 issue of Electronics [3], electrical engineer Gordon Moore noted that the number of circuit elements inside chips was doubling about every year. Ten years later, Moore observed that, because of the increasing complexity of chips, the pace would slow to a doubling of the number of transistors in a chip about every 24 months. Moore's law is not a law of physics; it reflects, more than anything else, the competitive forces within the chip manufacturing industry. Moore's law continues to hold, and Moore now predicts that the number of transistors in processor chips will continue to double about every 18 months for the next twenty years.
Until recently, the increase in speed of processor chips tracked the rate of increase in transistors. However, Intel's announcement indicates that even though Moore's law continues to hold, speed increases will not continue, because of heat-dissipation and current-leakage problems as the internal components of a chip get smaller.
Intel, like other processor chip manufacturers such as AMD, IBM, and Freescale, has decided to use the transistor budget provided by Moore's law to produce processors which contain multiple CPU units sharing a common memory system. This means that in order to produce software designs which run faster, those designs must employ the techniques of parallel computing.
Moore's law has provided an environment in which system designers could assume that more powerful processors would soon be available to properly execute bloated designs. Computer science education has often contributed to this trend by teaching techniques which are not optimal; students learn to rely on ever-faster processors to run their non-optimal algorithms and designs.
This author told the 2004 graduating computer science class that they could no longer rely on faster processors in their future designs even though that possibility existed when they began their computer science educations in 2000. They were challenged to become more expert in parallel computing techniques so that their algorithms and designs could take advantage of parallel processing chips.
Intel's announcement provoked discussion within our computer science department about the kinds of changes needed within our curriculum to properly prepare our graduates to be productive in an increasingly parallel future. Input was also received from the department's external advisory board concerning the effects processor speed limits would impose on the computing industry. The idea of parallelism across the undergraduate computer science curriculum emerged from these discussions. It should be noted that Computing Curricula 2001: Computer Science [1] does not address distributing parallel computing across the undergraduate curriculum.
Two faculty members of our department have research specialties in parallel computing. As a result, our department already offers two elective courses (Parallel Processing and Advanced Topics in Parallel Computing) which may be taken by junior and senior computer science majors.
The following modules illustrate how parallelism can be distributed across existing courses:

- A simple threads model illustrating two tasks computed in parallel using pthreads.
- Threading examples which hide device latencies in common algorithms using the Java thread library.
- Array data structures for parallel algorithms.
- Mathematical models for synchronizing parallel tasks.
- The Church-Rosser property: independence of order of evaluation.
- Parallel processing within modern raster display processors; techniques of parallel image rendering.
- Multiple network interfaces to increase network throughput; multiple daemons for handling client-server requests such as serving web pages.
- Multiple daemons for handling client-server requests in database applications.
- Applications of parallel processing in scientific simulations.
- Parallel algorithms for evaluating decision nets.
The two upper-division electives mentioned earlier provide in-depth treatment of current topics and trends in parallel computing and parallel languages.
Parallel hardware is necessary to demonstrate concepts and provide laboratory experience. Fortunately, such equipment is now relatively inexpensive to acquire, as desktop systems with dual-core or dual-processor configurations are common. We have found that a few such machines running some dialect of Unix (Linux or OS X) are sufficient to provide laboratory experience for all of the above course modules and courses. It is also possible to use lab or classroom computers running Linux to provide laboratory experience with parallel clusters using the MPI libraries. An NSF-CCLI grant proposal is being prepared to fund a cluster of dual-processor machines which will support laboratory experiences for our parallelism-across-the-curriculum initiative.
The motivation for this paper is to share preliminary ideas and experience with introducing parallel computing throughout the undergraduate computer science curriculum, in the hope of provoking discussion and interchange concerning changes within the computing discipline which will likely define the future of computing. Our initial results indicate that it is possible to introduce parallel processing concepts at a very early point in the curriculum and to expand and develop these ideas through several intermediate courses, culminating, depending on student interest, in junior- and senior-level courses in parallel computing.