A Coordination Language for Multidisciplinary Applications

Data parallel languages, such as High Performance Fortran, can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are multidisciplinary and heterogeneous in nature, and thus do not fit well into the data parallel paradigm. In this paper we present Opus, a language designed to fill this gap. The central concept of Opus is a mechanism called ShareD Abstractions (SDA). An SDA can be used as a computation server, i.e., a locus of computational activity, or as a data repository for sharing data between asynchronous tasks. SDAs can be internally data parallel, providing support for the integration of data and task parallelism as well as nested task parallelism. They can thus be used to express multidisciplinary applications in a natural and efficient way. In this paper we describe the features of the language through a series of examples and give an overview of the runtime support required to implement these concepts in parallel and distributed environments.

The remainder of the paper is organized as follows: The next section discusses the language extensions defined in Opus and their use. Section 3 presents a couple of multidisciplinary applications, using the concepts introduced in Section 2. Section 4 outlines the runtime support necessary for implementing these extensions. This is followed by a section on related work and a brief set of conclusions.

In this coordination application, the two methods are asynchronously invoked on two distinct sets of processors of the available computing system to run the weather codes (these may well be on different computers in practice). An HPF directive has been used to declare the processors involved; it specifies both the number of processors and gives them a global name. This is then referred to in the method calls which create the SDA and asynchronously spawn the global and local codes. Thus the user can ensure that the two applications run on different sets of processors and that an appropriate set of processors is allocated for each code. In the above code, a decision has been made to locate the data produced by global on the same processors as the code, local, which will read them. HPF notation has also been used to distribute the data associated with the SDA. We may assume that the specification of this distribution enables the reading of data to be performed locally when the method gettemp is invoked.
In practice, a non-trivial filter will be required to transfer data between two such models: not only will the grid points have different spacings, the models may well use different coordinate systems. We do not consider this aspect here.
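The coordination pattern described above, two asynchronously spawned codes sharing data through an SDA acting as a data repository, can be sketched in Python. This is a minimal illustration, not the Opus implementation: the class name `TemperatureSDA` and the `puttemp` method are hypothetical, and `gettemp` mirrors the method name used in the text. The monitor-like exclusive access of SDA methods is approximated with a condition variable.

```python
import threading

class TemperatureSDA:
    """Stand-in for an Opus SDA used as a data repository.
    Methods execute under a lock, mimicking the exclusive-access
    semantics of SDA method calls (hypothetical API)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._temps = None

    def puttemp(self, temps):
        # Producer method: deposit data and wake any waiting readers.
        with self._cond:
            self._temps = list(temps)
            self._cond.notify_all()

    def gettemp(self):
        # Consumer method: block until data is available, then read it.
        with self._cond:
            while self._temps is None:
                self._cond.wait()
            return list(self._temps)

def global_model(sda):
    # The "global" weather code produces boundary temperatures.
    sda.puttemp([15.0, 14.2, 13.8])

def local_model(sda, out):
    # The "local" code reads the shared data via the SDA method.
    out.append(sda.gettemp())

sda = TemperatureSDA()
results = []
t_local = threading.Thread(target=local_model, args=(sda, results))
t_global = threading.Thread(target=global_model, args=(sda,))
t_local.start(); t_global.start()
t_global.join(); t_local.join()
print(results[0])  # [15.0, 14.2, 13.8]
```

In Opus the two codes would additionally run on distinct processor sets and the SDA data would carry an HPF distribution; the sketch captures only the asynchronous producer/consumer coordination through SDA methods.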

MDO for Aircraft Design
In this subsection we present a short description of the multidisciplinary design of an aircraft and then discuss how one version could be encoded using the Opus language constructs. The overall goal of the application is to optimize the design of an aircraft relative to some goal or "objective function".

On the other hand, the machine "ABC" is designated as the data server, and the two SDAs SurfaceGeom and Sensitivities use four processors each on it.
These processor allocations match up with the HPF processor and distribution directives specified in the respective SDA type definitions. For example, since the SDA SurfaceGeom is allocated on four processors, the processor array P declared in its type definition (see the SDA type SGeomSDA shown in Figure 4) will be instantiated as an array of four processors. That is, for the SDA instance SurfaceGeom, the HPF function number_of_processors() will return four. As indicated before, the data within the SDA can now be distributed using the full power of the HPF mapping directives.
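The relationship between an SDA instance, the processor set it is allocated on, and the value returned by the processor inquiry can be sketched as follows. All names here are illustrative stand-ins: Opus and HPF express processor arrays through directives, which we mimic with an explicit list of processor identifiers, and the eight-processor machine "ABC" is an assumption for the example.

```python
class SDAInstance:
    """Hypothetical stand-in for an Opus SDA instance bound to a
    subset of a machine's processors at creation time."""

    def __init__(self, name, processors):
        self.name = name
        self.processors = list(processors)  # analogue of PROCESSORS P(4)

    def number_of_processors(self):
        # Analogue of HPF's NUMBER_OF_PROCESSORS() inquiry, evaluated
        # relative to the processor set this instance was given.
        return len(self.processors)

# Assume machine "ABC" exposes eight processors.
machine_abc = [f"ABC:{i}" for i in range(8)]

# Each SDA gets four of them, matching the allocations in the text.
surface_geom = SDAInstance("SurfaceGeom", machine_abc[0:4])
sensitivities = SDAInstance("Sensitivities", machine_abc[4:8])

print(surface_geom.number_of_processors())  # 4
```

The point mirrored from the text is that the inquiry function is evaluated per SDA instance, so data distribution directives inside the SDA type definition map onto exactly the processors that instance was allocated.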
Runtime Support

In this section we describe the runtime system required to support these features.
The Opus runtime system consists of two layers (see Figure 6):
• a language-specific layer, providing the functionality for managing SDAs and their interaction via method calls, and
• a language-independent layer, which provides support for thread-based data parallelism in parallel and distributed environments.
We discuss first the thread-based layer and then describe the implementation of method invocation, including the handling of distributed arguments in the Opus runtime system.
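One plausible realization of SDA method invocation, sketched here under our own assumptions rather than taken from the Opus runtime, is a request queue served by a dedicated thread: callers enqueue a method descriptor and block on a reply channel, and the server executes requests one at a time, which gives each method exclusive access to the SDA state. The names `SDAServer`, `call`, and `deposit` are hypothetical.

```python
import threading
import queue

class SDAServer:
    """Sketch of SDA method invocation as a request queue served by a
    dedicated thread (hypothetical design, not the Opus runtime API)."""

    def __init__(self, state):
        self.state = state
        self.requests = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Execute one method at a time: exclusive access to self.state.
        while True:
            method, args, reply = self.requests.get()
            reply.put(method(self.state, *args))

    def call(self, method, *args):
        # Synchronous method invocation: block until the server replies.
        reply = queue.Queue()
        self.requests.put((method, args, reply))
        return reply.get()

def deposit(state, x):
    # Example SDA method: mutate the shared state and return a result.
    state.append(x)
    return len(state)

server = SDAServer(state=[])
print(server.call(deposit, 42))  # 1
```

An asynchronous invocation, as used when spawning the weather codes, would simply skip the blocking `reply.get()` and collect the result later; distributed arguments would additionally require marshalling data between processor sets, which the text takes up below.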

Lightweight Threads
As described in the previous sections, SDAs can be configured either as computation servers or as data servers. In general, the computation server tasks and the data servers will utilize the same (or overlapping) physical resources. Thus, any given processor in the system might be responsible for executing both computational tasks and servicing data requests concurrently.

Figure 9: Illustration of the method invocation process for a distributed SDA