The Gigaview Multiprocessor Multidisk Image Server

Professionals in various fields such as medical imaging, biology, and civil engineering require rapid access to huge amounts of pixmap image data. Multimedia interfaces further increase the need for large image databases. To fulfill these requirements, the GigaView parallel image server architecture relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one disk. This contribution reviews the design of the GigaView hardware and file system, compares it to other storage servers available on the market, and evaluates fields of applications for the architecture.


INTRODUCTION
In the fields of scientific modeling, medical imaging, biology, civil engineering, cartography, and graphic arts, there is an urgent need for huge storage capacities, fast access, and real-time interactive visualization of pixmap images.
While processing power and memory capacity double every 2 years, disk bandwidth increases at a much slower rate. Interactive real-time visualization of full-color pixmap image data requires throughputs of 2 to 10 MBytes/s. Parallel input/output devices are required to access and manipulate image data at high speed.
A high-performance, high-capacity image server must provide users located on local or public networks with a set of adequate services for immediate access to images stored on disk arrays. Basic services include real-time extraction of image parts for panning purposes, resampling for zooming in and out, browsing through three-dimensional (3-D) image cuts, and accessing image sequences at the required resolution and speed. Previous research focused on increasing transfer rates between CPU and disks by using redundant arrays of inexpensive disks (RAID) [7]. Access to disk blocks was parallelized, but block and file management continued to be handled by a single CPU with limited processing power and memory bandwidth. In a more recent research project [6], the RAID concept was further extended to offer very high bandwidth disk arrays directly hooked onto high-speed networks (HIPPI-based networks).
In this article, we use a different approach: the multiprocessor multidisk (MPMD) approach we propose aims at associating disks and processors into an array of intelligent disk nodes capable of applying parallel local preprocessing operations before sending data from the disk to the client workstation. We have shown that such preprocessing operations are highly valuable in the case of image accesses: large pixmap images can be reduced into displayable-size images at disk reading speed [4, 5]. In the MPMD approach, pixmap image data are partitioned into rectangular extents, each extent having a size that minimizes global access time. To ensure high throughput, image extents are stored on a parallel array of disk nodes. Each disk node includes one disk-node processor (T800 transputer), cache memory (6 MBytes), and one disk (400 to 1,000 MBytes).
The authors have implemented an MPMD image server, called the GigaView. Through its SCSI-II interface, it sustains throughputs of up to 5 MBytes/s, which allows browsing through images and maps of arbitrary size at the rate of three to four 512 x 512 full-color image visualization windows per second [2].
This contribution describes the design of the GigaView image server: the hardware architecture, the multidimensional file system (MDFS), and the server's data redundancy scheme. It analyzes the performance of the architecture through simulation and experimentation, and compares this performance to that of existing storage servers. The multimedia behavior of the GigaView has been studied in [1].
Section 2 describes the hardware architecture, the MDFS file system, and the server's redundancy scheme. Section 3 analyzes the GigaView performance under single and multiple requests. Section 4 compares the performance of the GigaView to existing storage servers. Section 5 describes two application fields for the GigaView parallel image server: geographical information systems and medical imaging. Section 6 summarizes the results of this contribution and describes the directions of future image server research.

Hardware Architecture
The parallel image server consists of a server interface processor connected through a crossbar switch to an array of disk nodes (Fig. 1). The server interface processor provides the network interface. Each disk node consists of a standard disk connected through an SCSI-II bus to a local processing unit. The disk-node processors are transputers (T800 in the current version and T9000 when they become available). They provide both processing power and communication links. The number of links between the interface processor and the disk array is four, equal to the number of links of a single transputer. The disk-node local processor supports disk access, image part extraction, image reduction, and data compression and decompression.

MDFS
To access images in parallel, images are partitioned into rectangular extents (Fig. 2). The MDFS stores 1-D, 2-D, and 3-D images divided into 1-D, 2-D, and 3-D extents, respectively, and provides excellent access performance, regardless of the size of the accessed file and of the architecture on which it is executed [5]. Image access performance is heavily influenced by how extents are distributed over a disk array. In a previous publication [4], we showed that the extent size should be between 12 and 48 KBytes, and described algorithms to allocate extents efficiently on a disk array. The parallel low-level file system supports a single directory containing all the files stored on one MPMD cluster. Files are accessed through a directory entry that points to the file distribution information block (DIB). The DIB contains information relative to the file size, the file extension in the x and y dimensions, the extent width and height, the number of consecutive extents per disk, the number of disks, a table with the successive disk numbers contributing to this file, and, for each disk, a pointer to the file local extent index block (FLEIB) containing the local pointers to the data extent blocks (Fig. 3). At file opening time, the file system returns part of the content of the DIB. Directories and DIBs have a fixed, maximal size. For safety reasons, they are duplicated on each of the disks in the cluster. At file opening time, each extent process reads the DIB and the FLEIB from its disk. Once the DIB and FLEIB are stored in memory, read and write operations on a given file can be executed at the rate of one disk access per extent. With an extent size between 12 and 48 KBytes, the throughput between disk and extent server is close to the maximal disk transfer rate.
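As an illustration of this extent addressing, the following Python sketch maps a visualization window onto extent coordinates and disk numbers for a hypothetical 2-D file; the round-robin placement with a per-row disk offset, and all names and parameters, are illustrative assumptions rather than the actual MDFS layout.

# Illustrative sketch of 2-D extent addressing, not the actual MDFS implementation.
# Assumes round-robin extent placement with a per-row disk offset (hypothetical).

def extents_for_window(x, y, w, h, extent_w, extent_h, n_disks, row_offset):
    """Return (extent_col, extent_row, disk) triples covering the window."""
    first_col, last_col = x // extent_w, (x + w - 1) // extent_w
    first_row, last_row = y // extent_h, (y + h - 1) // extent_h
    result = []
    for row in range(first_row, last_row + 1):
        for col in range(first_col, last_col + 1):
            # Hypothetical placement: consecutive extents of a row on consecutive
            # disks, successive rows shifted by 'row_offset' disks.
            disk = (col + row * row_offset) % n_disks
            result.append((col, row, disk))
    return result

# Example: a 512 x 512 window in an image split into 128 x 128 extents on 8 disks.
extents = extents_for_window(x=1024, y=512, w=512, h=512,
                             extent_w=128, extent_h=128,
                             n_disks=8, row_offset=3)
print(len(extents), "extents,", len({d for _, _, d in extents}), "disks touched")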

Redundancy Scheme
The redundancy scheme on the GigaView server differs from the approach taken on RAID servers. RAID servers compute the redundancy information as the data are stored on the disk. This costs a write-access delay penalty (four disk accesses are required for every user write operation), but ensures almost complete reliability. The GigaView server takes into account the improved reliability of single devices (up to 1,000,000 hours mean time between failures (MTBF) for modern disk drives) to design a less restrictive redundancy scheme.
The delayed parity scheme (DPS) implemented on the GigaView enables the redundancy information to be computed some time after the data have been written on disk. This assumes that the single-disk reliability is high, and that some recently written data may be lost in the event of a single disk failure. The following analysis justifies the DPS approach. The mean time to data loss for a RAID-5 server is given by the following formula [3]:

MTTDL = (MTTF / N) x (MTTF / ((N - 1) x MTTR))

where MTTDL is the disk array mean time to data loss, MTTF is the single-disk mean time between failures, N is the number of disks in the array (including the parity disk) on which the data are distributed, and MTTR is the single-disk mean time to repair. The formula is written as the product of the MTBF of the array without redundancy, MTTF/N, multiplied by a term showing the effect of the parity scheme. Considering an MTTR of 1 h, eight data disks plus one parity disk in the array, and an MTBF of 1,000,000 h, we get an MTTDL of 13.89 billion hours, or 1.5 million years. Even without redundancy, the MTBF of an eight-disk array is 125,000 h, or 14 years. The MTTDL of a disk array featuring delayed parity is given by the following formula:

MTTDL = (MTTF / N) x 1 / ((1 - PTR) + PTR x (N - 1) x MTTR / MTTF)

where PTR (parity time ratio) measures the fraction of time during which the parity information is available for the whole data. For example, a 90% PTR disk array is an array for which parity on the whole data is available 90% of the time. In this formula, the correction term consists of two parts corresponding to the periods of time where parity is (resp. is not) available. Considering a PTR of 0.9, the MTTDL for the same array is 1.1 million hours, similar to the MTBF of a single disk (over a hundred years), which is more than sufficient. This theoretical analysis assumes that the loss of a single bit amounts to a data loss. However, although the DPS does not guarantee total data integrity, it guarantees that most of the data (and in most cases, the whole data) can be recovered in the event of a single disk failure. Only some recently written data may be lost as a result of a single disk failure.
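As a check on these figures, the short Python sketch below evaluates both formulas; taking N = 9 (eight data disks plus the parity disk) is an interpretation consistent with the quoted numbers rather than a value stated explicitly above.

# Sketch reproducing the reliability figures above (N = 9 disks including parity
# is an assumption; MTTF and MTTR in hours).
MTTF, MTTR, N, PTR = 1_000_000.0, 1.0, 9, 0.9

mttdl_raid5 = (MTTF / N) * (MTTF / ((N - 1) * MTTR))                 # ~1.39e10 h
mttdl_dps = (MTTF / N) / ((1 - PTR) + PTR * (N - 1) * MTTR / MTTF)   # ~1.11e6 h

print(f"RAID-5 MTTDL: {mttdl_raid5:.3g} h")
print(f"Delayed-parity MTTDL (PTR = {PTR}): {mttdl_dps:.3g} h")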
This analysis justifies the delayed parity redundancy scheme adopted in the GigaView design. Another approach is to study the effect of external causes on data integrity. For this analysis, we assume that an external cause (e.g., a power supply breakdown) increases the probability of disk failure. In an n-disk system where each disk has a probability p of failing, the probability that exactly f disks fail, P(F = f), is given by the binomial probability law:

P(F = f) = (n choose f) x p^f x (1 - p)^(n - f)

Without parity, the data loss probability is the probability that one or more disks fail. With parity (RAID server approach), the data loss probability is the probability that two or more disks fail. With delayed parity (GigaView approach), the data loss probability is the weighted average of the with-parity and without-parity data loss probabilities. We plot the array data loss probability as a function of the single-disk failure probability, with and without the redundancy scheme. Figure 4 shows the array data loss probability in three cases: no parity (0% PTR), 90% PTR, and 100% PTR (equivalent to RAID-5 parity). In the case of a failure due to external causes, it confirms that the reliability of a GigaView with 90% PTR is almost as good as the reliability of a RAID server.
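A minimal sketch of the data-loss probabilities plotted in Figure 4 is shown below; the single-disk failure probability p is an arbitrary example value.

# Sketch of the data-loss probabilities plotted in Figure 4 (example values).
from math import comb

def p_at_least(k, n, p):
    """Probability that at least k of n disks fail (binomial law)."""
    return sum(comb(n, f) * p**f * (1 - p)**(n - f) for f in range(k, n + 1))

n, p, ptr = 8, 0.01, 0.9
loss_no_parity = p_at_least(1, n, p)      # any single failure loses data
loss_full_parity = p_at_least(2, n, p)    # two failures needed (RAID-5 parity)
loss_delayed = ptr * loss_full_parity + (1 - ptr) * loss_no_parity
print(loss_no_parity, loss_full_parity, loss_delayed)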

GIGAVIEW PERFORMANCE ANALYSIS
This section analyzes, through simulation, the performance of the GigaView image server. Section 3.1 describes the simulation model. The performance under a single request is modeled in terms of throughput and latency (Section 3.2). The performance under multiple requests is shown to depend on the single-request delay and utilization (Section 3.3).

Simulation Model
Figure 5 describes the modeled behavior of the GigaView. Reading a visualization window from the GigaView consists of decomposing a window request into extent requests. As soon as an extent request is generated by the interface processor, it is transferred down the appropriate transputer link to the disk where the extent is located. The extent is fetched from the disk and transferred up a transputer link back to the interface processor, where it is merged with the other extents to form the visualization window. The simulation model assumes that the disk access time, the transputer link transfer time, and the transputer memory-to-memory copy operations obey simple linear formulas of the form Delay = Latency + (DataSize / Throughput).
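The following Python fragment sketches this linear delay model for a single extent travelling from disk, over a transputer link, into the interface processor memory; the seek time, link, and memory figures are the T800-generation values quoted in the next section, used here only for illustration, and the absence of overlap between stages is a simplifying assumption.

# Sketch of the linear delay model used in the simulation:
# Delay = Latency + DataSize / Throughput for each stage (assumed T800-era figures).

def stage_delay(data_bytes, latency_s, throughput_bytes_s):
    return latency_s + data_bytes / throughput_bytes_s

EXTENT = 48 * 1024  # one 48-KByte extent (128 x 128 3-byte pixels)

disk = stage_delay(EXTENT, 0.020, 2.28e6)   # disk access (20-ms seek)
link = stage_delay(EXTENT, 0.0,   1.6e6)    # transputer link transfer
copy = stage_delay(EXTENT, 0.0,   18e6)     # memory-to-memory merge on the interface processor

print(f"one extent, no stage overlap: {disk + link + copy:.3f} s")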

Single-Request Behavior
This section shows by simulation that it is possible to describe the behavior of a parallel storage server using two numbers, latency and throughput. This is similar to the way secondary storage devices are described by two numbers, seek time and throughput. The approach is to measure the delay of the parallel storage server for increasing visualization window sizes, to linearize the delay using a least-squares fit (Mathematica), and to obtain a formula of the type:

Delay = Latency + (RequestSize / Throughput)

The GigaView architecture performance is sensitive to the extent allocation scheme. In particular, the extent size and the row offset have to be chosen carefully to reach the best performance. As shown in Section 2.4, an extent size of 128 x 128 pixels and an extent row offset of 3 are effective for a wide range of visualization window sizes and optimal for a visualization window size of 512 x 512 pixels. In this experiment, the T800 transputers are modeled with a memory bandwidth of 18 MBytes/s, and each communication link has a throughput of 1.6 MBytes/s. The disk nodes are T800/Quantum SCSI-II pairs; the disks' seek time and throughput have been measured experimentally at, respectively, 20 ms and 2.28 MBytes/s. The linearization approach has proved particularly effective, regardless of the data allocation and the architecture of the system. Using the linear model of the performance of the GigaView, it is easy to demonstrate the effect of the number of disk nodes on the performance of the system. Figure 6 shows the access time to visualization windows of increasing sizes for four architectures: one-disk-node, two-disk-node, four-disk-node, and eight-disk-node architectures.
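The sketch below illustrates this linearization step, using NumPy's polyfit in place of the Mathematica fit mentioned above; the delay samples are made-up placeholders, not measured GigaView data.

# Sketch: fit Delay = Latency + RequestSize / Throughput to measured (size, delay)
# pairs with a least-squares line (slope = 1/Throughput). Sample data is made up.
import numpy as np

sizes = np.array([64, 128, 256, 512, 768]) ** 2 * 3.0      # request sizes in bytes
delays = np.array([0.055, 0.075, 0.130, 0.330, 0.650])     # delays in s (placeholders)

slope, latency = np.polyfit(sizes, delays, 1)
print(f"latency    ~ {latency * 1000:.1f} ms")
print(f"throughput ~ {1.0 / slope / 1e6:.2f} MBytes/s")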
Figure 6 shows that latency decreases and throughput increases as the number of disk nodes increases. With a T800-based architecture, adding more disk nodes eventually ceases being beneficial, because link communication bandwidth limits overall performance. Beyond eight disk nodes, the throughput increases only marginally and the latency does not decrease. It is possible to get a precise idea of the maximum number of disk nodes the architecture supports by carrying out a single single-request experiment. The key concept is that of component utilization, defined as the ratio between a given component's active time and the total simulation time. The component utilization is a simulation result, together with the individual operation delays.

FIGURE 6 GigaView single-request delay vs. visualization window size (T800-based architecture, simulation results).
The simulation consists of requesting a single 512 x 512 3-byte-pixel visualization window on a four-disk-node T800-based architecture. In a four-disk-node architecture, the average disk node utilization is 86%, the links are 42% utilized, and the interface processor is 33% utilized. The ratio between disk-node and link utilization is 0.86/0.42, i.e., about 2.

This suggests that an eight-disk-node architecture provides an equal utilization of disk nodes and links. The utilizations of disks and links in an eight-disk architecture are indeed equal, at 66%: the eight-disk architecture is said to be balanced. Simulations show that above eight disks the throughput does not increase. Balancing the architecture should therefore be a design target.
The utilization data for the eight-disk architecture also show that the maximum component utilization decreases significantly when stepping up the architecture from four to eight disk nodes. This explains why the delay of an eight-disk architecture (0.218 s for a 512 x 512 3-byte-pixel visualization window) is more than half the delay of a four-disk architecture (0.332 s). Changing the data allocation scheme to improve the utilization by decreasing the extent size does not improve performance: the overhead due to the larger number of extents negates the effect of the improved data allocation.

Multiple-Request Behavior
This section describes the behavior of the GigaView under multiple requests. To provide a reference point, this study compares the behavior of the GigaView under multiple requests to the behavior of an abstract fixed-service-time server. It shows that, due to internal pipelining, the GigaView sustains higher throughput than the fixed-service-time server. The amount of additional throughput depends on the single-request utilization of the disk array.

Simulation Characteristics
Requests to the GigaView form a Poisson process. This means that individual requests are independent and that the number of requests in a given time interval depends only on the length of that interval. The interval between requests therefore follows an exponential distribution. The load on the system is expressed in terms of requested throughput. In our simulations, all users request a 512 x 512 3-byte-pixel visualization window (786 KBytes). Therefore, a requested throughput of 3 MBytes/s corresponds to four window requests per second. The Poisson process hypothesis also ensures that, for a given load, the number of users requesting windows from the system has no effect. Only the requested throughput affects the average response time of the system. For a given system architecture, each simulation consists of requesting 5,000 visualization windows at random positions in an image for a given load. Each configuration is simulated for 20 loads chosen in the range of loads sustainable by the architecture. The result of each simulation is the delay averaged over the 5,000 requests. For these simulations, the architecture consists of T9000 transputers and Quantum SCSI-II disks. Because the T9000 transputers were not yet available at the time of submission, their performance was conservatively estimated at 36 MBytes/s memory bandwidth and 8 MBytes/s link transfer rate. The Quantum SCSI-II latency and throughput were measured experimentally at 20 ms and 2.23 MBytes/s.
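As an illustration of this load model, the sketch below draws exponential inter-arrival times for a requested throughput of 3 MBytes/s; the request size and request count are the simulation parameters given above, and the random seed is arbitrary.

# Sketch of the multiple-request load model: Poisson arrivals of 786-KByte window
# requests at a requested throughput of 3 MBytes/s (roughly four requests/s).
import random

WINDOW_BYTES = 512 * 512 * 3          # 786 KBytes
REQ_THROUGHPUT = 3e6                  # requested throughput in bytes/s
rate = REQ_THROUGHPUT / WINDOW_BYTES  # requests per second

random.seed(0)
t = 0.0
arrivals = []
for _ in range(5000):                 # 5,000 window requests per simulation run
    t += random.expovariate(rate)     # exponential inter-arrival times
    arrivals.append(t)

print(f"{len(arrivals)} requests over {arrivals[-1]:.1f} s "
      f"(mean inter-arrival {arrivals[-1] / len(arrivals) * 1000:.1f} ms)")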

Fixed Service-Time Server
The fixed-service-time server provides a reference point for the GigaView simulations. Its only property is its service time, equal to the service time of a single visualization window request. If a new request occurs while a request (the current request) is being served, the new request is delayed until the current request is completely served. Requests to the reference server follow the same distribution as requests to the GigaView. For example, a T9000-based four-disk-node GigaView architecture satisfies a 512 x 512 3-byte-pixel visualization window request in 0.305 s. The maximum throughput sustainable by the fixed-service-time server is

MST = SRS / SRD

where SRS is the single request size, SRD is the single request delay, and MST is the maximum sustainable throughput. Figure 7 shows the performance results of the GigaView. The continuous line represents the GigaView performance (delay average), whereas the crosses represent the performance of the fixed-service-time server (delay average). Figure 7 shows that the performance of the GigaView is superior to the performance of a fixed-service-time server. This result is not difficult to explain. During a single-request experiment, no component of a four-disk-node GigaView architecture is used more than 90% of the time. Therefore, under multiple requests, some amount of internal pipelining occurs, making the GigaView able to sustain higher loads than the fixed-service-time server. One can match the behavior of the fixed-service-time server and the GigaView by scaling the x-axis of the fixed-service-time server performance curve by a factor equal to the inverse of the single-request utilization of the GigaView. This suggests that the GigaView MST must be defined as

MST = SRS / (SRD x SRU)

where SRU is the single request utilization. In this formula, the SRS is a simulation parameter; the SRD and SRU are simulation results. The formula holds true regardless of the SRS. A single single-request simulation is enough to evaluate an architecture's MST.
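The following sketch evaluates both definitions for the four-disk-node case; the 0.305-s single request delay is the figure quoted above, whereas the single request utilization of 0.86 is an assumed value (the text only gives SRU for the 12- and 16-disk-node configurations).

# Sketch: maximum sustainable throughput (MST) for the fixed-service-time server
# and for the GigaView. SRU = 0.86 is an assumed value for the 4-disk configuration.
SRS = 512 * 512 * 3      # single request size in bytes (786 KBytes)
SRD = 0.305              # single request delay in seconds (4-disk T9000, from text)
SRU = 0.86               # single request utilization (assumption)

mst_fixed = SRS / SRD              # fixed-service-time server
mst_gigaview = SRS / (SRD * SRU)   # GigaView, internal pipelining taken into account

print(f"fixed server MST: {mst_fixed / 1e6:.2f} MBytes/s")
print(f"GigaView MST:     {mst_gigaview / 1e6:.2f} MBytes/s")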

Effect of the Number of Disk Nodes
Figure 8 shows the effect of the number of disk nodes on the performance of the GigaView. Adding disk nodes to the architecture improves the delay of each request and the GigaView's ability to sustain higher loads. Consider a requested throughput of 6 MBytes/s. The average delay for a 12-disk-node architecture is around 400 ms, whereas a 16-disk-node architecture satisfies requests on average within 200 ms, i.e., an improvement by a factor of 2. This seems to be in contradiction with the single-request analysis of the same architecture.
The single-request analysis applied to a T9000-based architecture (Table 1) shows that the maximum throughput is reached for a 12-disk-node architecture. The 16-disk-node architecture offers very little benefit over the 12-disk-node architecture in terms of single-request throughput or access delay. The major difference between the two architectures lies in their component utilization under a single request. In a 12-disk-node architecture, disk-node components are utilized on average 76% of the time, and in a 16-disk-node architecture, they are used on average 61% of the time.
Using the MST formula introduced earlier, we find that the MSTs are 2.98 MBytes/s (respectively, 5.97 MBytes/s, 9.04 MBytes/s, and 11.94 MBytes/s) for a 4-disk (respectively, 8-disk, 12-disk, and 16-disk) architecture. Although the single-request throughput does not increase above 12 disks, the MST under multiple requests increases linearly with the number of disks, for up to 16 disks. Above 16 disks, the interface processor becomes saturated and the MST does not increase anymore.
As the throughput approaches the MST, the access delay increases exponentially. At 6 MBytes/s, the 12-disk architecture is closer to its MST than the 16-disk architecture. Hence, the access delay is much higher for the 12-disk architecture.

MEASURED PERFORMANCE COMPARISONS
This section compares the access delays of four storage systems. The first configuration is an actual SparcClassic workstation and its local disk; the second configuration is an actual RAIDER-5 system connected to a SPARCserver 1000. The third system is an actual RAID level 3 system connected to a Cray Y-MP. The fourth system is an actual four-disk-node GigaView system. The SparcClassic local disk is a 1-GByte Quantum drive with an SCSI interface, 10-ms seek time, and 2.93 MBytes/s sustained throughput. The RAIDER-5 system is a RAID-5 architecture consisting of (4 + 1) Wren-9 disks having a latency of 12.9 ms and a wide SCSI-II interface. The RAID level 3 system consists of 10 Hitachi DK-516-15 disks (8 + 2 spare).
The experiment consists of transferring visualization windows of increasing sizes from the disk(s) to host memory and measuring the transfer times. All architectures run MDFS. The image from which the visualization windows are selected is 3072 x 2048 3-byte pixels in size, and is divided into 384 extents of 128 x 128 pixels. On the first three configurations (the single-disk workstation and the two RAID servers), the data are entirely experimental.
For the GigaView performance measurements, it is assumed that transferring a visualization window from disk to host is a two-stage pipeline. The first stage of the pipeline transfers rows of extents from the disks to the GigaView server interface processor memory. The second stage transfers rows of extents from the server interface processor memory to the host memory. The access delay is a combination of (1) actual delays measured on the GigaView system (transfer of the whole visualization window between the disks and the GigaView interface processor) and (2) a conservative estimate of the transfer delay of one row of extents between the GigaView interface processor and the host memory. The estimate of the SCSI bus transfer time follows the same linear model used throughout this article, i.e., Transfer time = SCSI latency + (DataSize / SCSI throughput). The fact that the GigaView performance is superior to the RAID systems' performance (Figure 9) can be traced to the fact that the GigaView has excellent control over extent allocation, which could not be achieved on the tested RAID level 3 and RAIDER-5 systems. To achieve the best visualization window access times, it is necessary to control precisely the disk allocation of each image extent.
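A minimal sketch of this two-stage estimate is given below; the assumption that only the last row's SCSI transfer remains exposed once the pipeline is full, as well as the SCSI latency and throughput values, are illustrative rather than taken from the measurements.

# Sketch of the two-stage pipeline estimate for the GigaView measurements:
# measured disk-to-interface-processor delay plus the (estimated) SCSI transfer of
# the last row of extents. SCSI latency and throughput below are placeholder values.
def scsi_transfer_time(data_bytes, scsi_latency=0.001, scsi_throughput=5e6):
    return scsi_latency + data_bytes / scsi_throughput   # Delay = Latency + Size/Throughput

def window_access_estimate(measured_disk_to_sip_s, window_w, extent_h=128):
    row_bytes = window_w * extent_h * 3                  # one row of extents, 3-byte pixels
    # With the two stages overlapped, only the last row's SCSI transfer is exposed.
    return measured_disk_to_sip_s + scsi_transfer_time(row_bytes)

print(f"{window_access_estimate(0.332, 512):.3f} s")     # 0.332 s measured, 512-pixel-wide window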

APPLICATIONS
The authors consider two application fields for the GigaView image server: geographical information systems and medical imaging. Both fields require large amounts of pixmap data, as well as the ability to define relationships between various pixmaps (hypermedia documents). Both fields also require the ability to display the information stored on the server; the techniques described in [1] can be used to provide the best presentation of the data.

Geographical Information System
EPFL and BSI Engineering are developing civil engineering network planning facilities based on the superimposition of networks (road, gas, electricity) and scanned topographic maps. Experience acquired during exhibitions and interactions with potential users led us to the conclusion that the GigaView must support various layers of information, such as orthophotos, scanned 1:25'000 topographic maps, 1:5'000 local maps, and 1:500 cadastral maps. For reference, topographic (resp. cadastral) maps scanned at 500 dpi covering the whole of Switzerland represent 37.5 GBytes (resp. 27.5 GBytes) uncompressed. To pack the sparse scanned maps of a significant region onto a disk array of reasonable size (16 disks, for example), there is an imperative need for lossless compression techniques.
The GigaView uses several lossless compression algorithms tuned to the kind of data stored on disk. The algorithms are variations of run-length coding and are optimized to provide high-speed software decompression, at the expense of compression efficiency.
Scanned topographic maps consist of 1-byte pixels. The BRL1 algorithm recognizes uniform sequences and divides each map into two kinds of runs: compressed uniform runs and uncompressed runs. Scanned cadastral maps are predominantly white and very sparse bitmaps. They are compressed using two versions of a lossless compression algorithm, called BRL2 and BRL3, working at the byte level. The BRL2 algorithm divides bitmaps into three kinds of byte runs: runs of black bytes, runs of white bytes, and runs of gray bytes. The BRL3 algorithm takes into account the fact that, in most cases, black and white runs are followed by a single gray byte: each black or white run consists of several identical bytes followed by a single gray byte. The compression and decompression facilities are integrated into the server's file system. The data access pipeline, i.e., the path from the compressed data on disk to a visualization window on the GigaView interface processor, consists of four steps: moving the required compressed-image extents from disk to the disk-node processor cache; decompressing the extents on the disk-node processor and storing the uncompressed extents back in the disk-node processor cache; transferring the uncompressed extents from the disk nodes to the server interface processor; and merging the decompressed extents into the visualization window buffer. The four steps are pipelined for extents extracted from the same disk node.
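A byte-level run-length codec in the spirit of BRL2 is sketched below; the token format (a kind byte and a length byte, with gray runs stored as literals) is an illustrative assumption, not the actual on-disk BRL2 format.

# Sketch of a byte-level run-length codec in the spirit of BRL2 (three kinds of
# runs: black, white, gray). The token format is an illustrative assumption.
BLACK, WHITE = 0x00, 0xFF

def brl2_like_encode(data: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        b = data[i]
        if b in (BLACK, WHITE):
            run = 1
            while i + run < len(data) and data[i + run] == b and run < 255:
                run += 1
            out += bytes((0 if b == BLACK else 1, run))       # (kind, length)
            i += run
        else:
            run = 1
            while i + run < len(data) and data[i + run] not in (BLACK, WHITE) and run < 255:
                run += 1
            out += bytes((2, run)) + data[i:i + run]          # gray run stored as literals
            i += run
    return bytes(out)

def brl2_like_decode(coded: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(coded):
        kind, length = coded[i], coded[i + 1]
        i += 2
        if kind == 0:
            out += bytes([BLACK]) * length
        elif kind == 1:
            out += bytes([WHITE]) * length
        else:
            out += coded[i:i + length]
            i += length
    return bytes(out)

sample = bytes([WHITE]) * 255 + bytes([10, 20]) + bytes([BLACK]) * 5
assert brl2_like_decode(brl2_like_encode(sample)) == sample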
The authors evaluated the three decompression algorithms on three processor architectures: the sparc-sun4m processor (in Sparc 5 workstations), the sparc-sun4c processor (in Sparc IPC workstations), and the T800 transputer (Table 2). The delays of each algorithm were measured on each architecture for windows of varying sizes. The delay curves were linearized, and the slope of the linearized curve represents the algorithm throughput. The Sparc 5 workstation is able to decompress at around 10 MBytes/s (40 MBytes/s peak), the Sparc IPC at 2.5 MBytes/s (9 MBytes/s peak), and the T800 transputer (which has no internal cache) at a rate of 900 KBytes/s (2.5 MBytes/s peak).
The authors also tested the four-disk GigaView architecture connected to a Macintosh computer, for compressed and uncompressed maps, and for various zoom factors (Table 3). A zoom factor of n is achieved by selecting one in n^2 pixels of a decompressed image. For each zoom factor, the experiment consists of extracting visualization windows of increasing size. When the zoom factor is increased, the visualization window sizes are not changed; consequently, the size of the data fetched from the disks is increased. In uncompressed mode, the current SCSI-Macintosh interface limits the server interface processor throughput to 660 KBytes/s. At the disk-node level, however, the throughput can reach up to 7.66 MBytes/s, enabling a complete topographic map (128 MBytes uncompressed) to be visualized in less than 20 s. In compressed mode, the combined disk access and decompression throughput reaches, in the average case (a typical cadastral map), 2.75 MBytes/s, or 700 KBytes/s per processor in the GigaView architecture, and, in the best case (a completely white cadastral map), 8 MBytes/s, i.e., the same throughput as in uncompressed mode. The next-generation T9000 transputers will allow the decompression process to be completely transparent to the user.
These results show the benefits of integrating decompression into the data access pipeline. Furthermore, the image-oriented file system can be ported to a high-end workstation with multiple processors and SCSI channels, while retaining excellent decompression performance.

Medical Imaging
The authors acquired and stored on the GigaView a 3-D magnetic resonance imaging (MRI) scan. The image size is 100 MBytes (384 x 512 x 512 1-byte pixels). Thanks to the GigaView, image views orthogonal to the main axes can be extracted at the rate of several frames per second. The image frames through which the user is browsing come directly from the disk, without the costly operation of preloading them in memory.
For this application, the 3-D images are divided into 3-D extents, which improve the locality of both disk and memory accesses. This feature is essential, as access times depend almost completely on access locality. Let us assume an image width (X-axis), height (Y-axis), and depth (Z-axis) of W, H, and D pixels; a visualization window width and height of w and h; and an extent size of e pixels.
Consider first the case where the 3-D images are stored as a set of 2-D images (X-Y planes stacked along the Z-axis, top of Fig. 10). In this format, an extent (i.e., a few kilobytes of data with good locality) has a width of W and a height of v = e/W. Fetching a visualization window along the XY plane requires accessing an extent for every v lines in the visualization window. Along the XZ plane, it requires accessing an extent for every line in the visualization window. Along the YZ plane, it requires accessing one extent for every v pixels in a visualization window line. More formally, accessing images along the XY (resp. XZ, YZ) plane requires h/v (resp. h, resp. w x h/v) extent accesses. To give some numbers, assuming W, H, and D at 2048 pixels, w and h at 512 pixels, e at 32'768 pixels, a single extent access time of 20 ms, and a single disk, we get an access time of 640 ms (respectively, 10.24 s and 327.68 s) along the XY (respectively, XZ, YZ) plane. The access anisotropy is large. On the other hand, if we consider cubical extents, the number of extent accesses is identical along all three planes, namely (w/e^(1/3)) x (h/e^(1/3)). With the same image and visualization window, the access time becomes 5.12 s along any axis. Moreover, access to contiguous planes will be much faster, as the relevant extents can be maintained in the disk-node cache. For example, 32 frames can be visualized in the same 5.12 s. If we consider an eight-disk architecture, the access time drops below 1 s. Thanks to 3-D extents, the amount of data read from the disk depends only on the visualization window size. This last feature is essential, considering for example that the 3-D scan of a complete human body represents about 24 GBytes of data (2048 x 2048 x 2048 3-byte pixels).
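The sketch below reproduces this extent-access count argument for both storage formats, using the example figures above and assuming a single disk.

# Sketch of the extent-access counts for 2-D slab extents vs. cubical 3-D extents.
# Figures follow the example in the text; a single disk is assumed.
W = H = D = 2048          # image dimensions in pixels
w = h = 512               # visualization window size
e = 32_768                # extent size in pixels
t_extent = 0.020          # single extent access time in seconds

# 2-D storage (X-Y planes stacked along Z): extents are W wide and v = e/W high.
v = e // W
accesses_2d = {"XY": h // v, "XZ": h, "YZ": (w * h) // v}

# 3-D storage: cubical extents with side e ** (1/3).
side = round(e ** (1 / 3))
accesses_3d = (w // side) * (h // side)

for plane, n in accesses_2d.items():
    print(f"2-D extents, {plane} plane: {n:5d} accesses, {n * t_extent:8.2f} s")
print(f"3-D extents, any plane:  {accesses_3d:5d} accesses, {accesses_3d * t_extent:8.2f} s")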
The authors are developing visualization algorithms that can be implemented efficiently on the GigaView architecture. These algorithms are based on the extraction, rotation, and projection of planes of arbitrary direction in the 3-D reference image (Fig. 11).

CONCLUSION
This article has presented the design, evaluation, and applications of the GigaView multiprocessor multidisk image server. The GigaView is a dedicated multiprocessor architecture connected through a standard SCSI bus to a workstation. It can interactively display 2-D and 3-D pixmap images accessed simultaneously from several disks. The division of data into extents gives excellent locality to random accesses of 2-D and 3-D pixmap images. The MDFS file system enables data access and processing to be pipelined, allowing, for example, decompression to be performed almost transparently to the user.
Future research aims at adapting the GigaView concept to MPMD workstations. It will evaluate the modifications required to the file system to achieve the performance of the current GigaView server on a standard UNIX multiprocessor platform.

FIGURE 4 Data loss probability vs. PTR.

FIGURE 8 Effect of the number of disk nodes (T9000-based architecture).

Table 3. GigaView Access Throughput (MBytes/s).

FIGURE 10 MRI scan and 3-D extents.
FIGURE 11 3-D visualization of MRI scan.