Archive for July, 2007

Piano sketches, 2 of 6

July 22, 2007


Piano sketches, 1 of 6

July 20, 2007

The recording quality is suboptimal, but I need to externalize this so I can move on to other projects. Otherwise it will become “brain crack.”

“Sketches of the Sierra, Part 1 of 6: Alpenglow”

The MP3 file is here.

A blurry (and slightly incomplete) version of the sheet music is here.

88 strings of wisdom

Expressing Parallelism II: explicit expressions

July 17, 2007

In the previous post, I explained the necessity for novel parallel programming languages. In this post, I will discuss the precision of parallel languages. By precision, I mean the degree to which an algorithm’s parallel implementation can be optimized for a particular computational architecture.

In 1982, the Poker programming environment introduced three semantic levels of parallel code. The lowest level of code focuses on the execution of a single processor within a larger multiprocessor system. At the middle level, code describes the communication protocol between two or more processors. Finally, the highest level of code orchestrates a global algorithm.

Poker’s three semantic levels provide a good framework for discussing the precision of parallel languages. In conventional parlance, languages are either explicitly parallel or implicitly parallel. Explicit languages emphasize the lowest and middle levels of the Poker model: they verbosely express inter-process communication and precisely articulate the mapping of data to processors. This focus on the lower semantic levels typically obscures the logic of the overall parallel algorithm. Implicit languages, on the other hand, emphasize the highest semantic level and express parallelism abstractly. Although implicit languages are semantically terse, they traditionally incur a performance penalty because they cannot be fine-tuned for a particular architecture.

Explicit and implicit languages assume two different philosophies for achieving computational performance. Explicit languages rely on human developers to make fine-grained, architecture-specific adjustments to a code’s performance. Implicit languages rely on sophisticated compilers to “auto-magically” optimize data mappings and inter-process communication. Until recently, it was assumed that human optimizations outperformed compiler optimizations. In other words, the following relationship was historically true:

Explicit languages: verbose semantics => fine-tuned performance

Implicit languages: clean semantics => suboptimal performance

Today I listened to Qing Yi present a talk titled “Parameterizing optimizations for empirical tuning (POET)”. POET scripts provide language-independent annotations that can be inserted into existing code. The annotations can be generated automatically by a compiler or written manually by a human. POET’s philosophy is to give developers direct access to the compiler’s results (in a more explicit manner than compiler flags). Under this paradigm, it is easy to envision a hybridization of explicit and implicit languages: in the ideal future, we could use highly abstract languages to implement parallel algorithms, and intermediary tools (such as POET) to fine-tune the compilation.

So what’s my point? Developers typically assume that compilers cannot generate efficient parallel code. Consequently, developers accept the burden of explicitly expressing parallelism. Emergent compiler tools, such as POET, point towards a hybridized future where we can enjoy the semantic ease of implicitly parallel languages with the fine-grained optimizations of explicit languages.

Expressing Parallelism

July 16, 2007

Unless you live in a cave, you’re aware of the microarchitectural paradigm shift towards multicore/multinode technology. This computational sea change is accompanied by a heated debate about how programming languages should express parallelism. Graduate students around the world invent parallel programming languages du jour, but the most popular solutions for expressing parallelism remain OpenMP and MPI. Despite their widespread appeal, each suffers from its own limitations. OpenMP provides an expedient avenue for parallelizing sequential code, but it only targets shared-memory architectures. MPI, on the other hand, is portable across a diversity of architectures, but it requires verbose expression of boundary conditions and inter-node communication.

In response to these language limitations, I point to the ZPL programming language. ZPL, developed at the University of Washington, demonstrates two concepts:

  1. Raising the semantic level of a programming language eliminates the problem of coalescing communication (i.e., bundling/scheduling concurrent inter-node messages).
  2. A sophisticated high-level language can eliminate the need to express boundary conditions.

Most importantly, ZPL accomplishes #1 and #2 with little (or no) performance loss, as compared to languages like High Performance Fortran. How is ZPL so wonderful? Read “The Design and Development of ZPL”. In short, ZPL introduces a concept called regions, which allows data dependencies to be described independently of the processor-to-data mapping. (The aforementioned paper also discusses several more innovations in ZPL.)

So, what’s my point? When we write parallel code, let’s not complacently use ancient tools. Although OpenMP and MPI can be used effectively in some contexts, we should also embrace next-generation languages. ZPL allows programmers to globally orchestrate parallel algorithms, which is certainly worth the learning curve. Ultimately, the use of next-gen languages promotes the improvement of said languages.

City planning makes a difference.

July 3, 2007

I had low expectations for Livermore, CA. I was scared by visions of strip malls and tract homes. Instead, I discovered a healthy community with a thriving town center (it’s pedestrian-friendly!). Today I walked around Livermore and found trendy coffee shops, appealing thrift stores, and tempting organic markets. This town boasts a local symphony, a Hindu temple, and an impressive public library. Wow!

Why is Livermore unlike most NorCal towns? I see three reasons:

  1. Downtown Livermore is people-centric. The streets are lined with storefronts, not parking lots.
  2. The largest employer is Lawrence Livermore National Laboratory. This makes the average citizenry intelligent, vibrant, and sophisticated.
  3. Money, money, money. Livermore is an escape from San Francisco... for better and for worse. The 2007 median Livermore income is $96,632.

I’m here until September. Come visit and we’ll have fun.

Dreams End

July 1, 2007

Eleven days of climbing, swimming, writing, dancing, and laughing: one of the best vacations of my life.

TRAGEDY AVERTED: In Kings Canyon National Park, a recent rockslide transformed my easy class-2 route into a treacherous class-4 climb. Potluck Pass was particularly unsafe. On several occasions, I lowered my pack with rope and free-descended exposed granite faces. I was most terrified when I tumbled off a ledge and broke my trekking pole. In hindsight, this trip was reckless! The route from Dusy Basin to Palisade Lakes is not advisable for anyone without a helmet and training. (Pictured below: the price of bad decision-making.)

WONDERFUL PEOPLE: After my dance with danger, I vacated to San Diego. Annie and Erik are generous hosts, and leaving was difficult. I wish I could remain in the fantasyland of beach trips, graduate student parties, and perfect weather... but dreams end.

  • Annie never fails to make me smile. She has a talent for building empires.
  • Erik and his friends are beyond superlatives, but here are a few: gregarious, provocative, and perspicacious.
  • I’m not sure which is more awesome: Grant’s endless supply of positive energy, or his coiffed hair.

(Pictured below: Erik rocks the summit of Mount San Jacinto.)

(Pictured below: My MacBook sports a label for Grant and Erik’s homebrew: Government Standoff Ginger IPA. Photo credit to Erik Pukinskis).