Picture Archiving and Communication System (PACS), as its full title implies, addresses two main functions: picture archiving and communication of those pictures. This blog post is a personal classification of PACS into generations, written to explore the relationship between a PACS and its archive.

First Generation PACS

First generation PACS, in the early '90s, was designed to deliver relatively large files over the radiology department's low-bandwidth network to a dedicated diagnostic workstation, with minimal time to render and display images.

The picture archiving was in two parts: fast spinning disk for active studies, in what is often referred to as the short-term store or cache; and a deep, or long-term, archive on removable media such as tape or optical disk for closed studies.

These archives were actively managed by the PACS, which moved studies in and out of the short-term store and pre-fetched relevant prior images from the deep archive to reduce waiting times. The DICOM images were sometimes compressed in a proprietary format when they were written to the archive, but the DICOM headers were never modified or updated.
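
As a rough sketch of that pre-fetch logic, the Python below uses purely hypothetical stand-ins for the PACS database, deep archive and cache objects; none of the names come from a real product.

def prefetch_priors(order, pacs_db, deep_archive, cache):
    # The PACS database, not the image files themselves, knows which priors are
    # relevant (same patient, related body part or modality); all names here are
    # hypothetical illustrations of first-generation behaviour.
    for study_uid in pacs_db.find_prior_study_uids(order.patient_id, order.body_part, limit=3):
        if not cache.contains(study_uid):
            # Studies may be held compressed (sometimes in a proprietary format) on tape
            # or optical disk; they are staged back to the spinning-disk cache before the
            # radiologist opens the new study, hiding the slow media behind the workflow.
            cache.store(study_uid, deep_archive.retrieve(study_uid))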

Any changes to the metadata of the image, such as patient updates or merges, were recorded only in the PACS database. Other information, such as window levels and annotations made by the PACS user, was likewise stored only in the PACS database. These changes were applied only when the image was reviewed; no changes were made to the file itself.
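
A minimal sketch of that display path, assuming a hypothetical PACS database record and using pydicom purely for illustration; the field names are invented, not any vendor's schema.

from pydicom import dcmread

def present_image(dicom_path, pacs_db_record):
    # Hypothetical first-generation display path: the file on disk is never edited;
    # corrections and user state live only in the PACS database and are applied at view time.
    ds = dcmread(dicom_path)  # original file, header untouched since acquisition

    # Demographic corrections (patient updates, merges) held only in the database
    ds.PatientName = pacs_db_record["current_patient_name"]
    ds.PatientID = pacs_db_record["current_patient_id"]

    # User display state (window level, annotations) also held only in the database
    ds.WindowCenter = pacs_db_record["window_center"]
    ds.WindowWidth = pacs_db_record["window_width"]
    annotations = pacs_db_record["annotations"]  # drawn over the pixels by the viewer

    return ds, annotations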

Second Generation PACS

Second generation PACS coincided in the UK, in the early 2000s, with the NPfIT initiative and the move towards data-centric solutions, laying the foundation for standards-based central archives. This PACS began to archive images in DICOM Part 10 format on dedicated archives using the DICOM protocol. The theory was that, once in a standard format and structure, any DICOM-compliant application could retrieve and view the images.

However, the PACS's internal workflow did not really change: instead of storing a UNC path and file pointer to manage the files in the archive, it stored the DICOM Study UID. To retrieve a file from the DICOM archive, the PACS would perform a C-MOVE, providing a list of UIDs from its database for the images required. Only the original DICOM file was sent to the archive; just as with first generation PACS, all changes to patient demographics or presentation state were stored only in the PACS database and not sent to the archive. Standards and IHE profiles detailed how the PACS could pass these changes on to the DICOM archive, but they were rarely implemented.
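
For illustration, that style of retrieve might look roughly like the following pynetdicom sketch; the AE titles, host name, port and Study Instance UID are all made up, and a real PACS would of course drive this from its own database rather than a script.

from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelMove

# The PACS already knows the Study Instance UID from its own database,
# so the C-MOVE identifier is simply that UID at STUDY level.
ae = AE(ae_title="PACS")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelMove)

identifier = Dataset()
identifier.QueryRetrieveLevel = "STUDY"
identifier.StudyInstanceUID = "1.2.3.4.5.6.7.8.9"  # illustrative UID only

assoc = ae.associate("archive.example.org", 104)  # hypothetical DICOM archive
if assoc.is_established:
    # Ask the archive to move the study to the PACS's own storage SCP ("PACS_STORE")
    for status, _ in assoc.send_c_move(identifier, "PACS_STORE", StudyRootQueryRetrieveInformationModelMove):
        if status:
            print("C-MOVE status: 0x{0:04X}".format(status.Status))
    assoc.release()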

When exchanging images via DICOM Query/Retrieve, the PACS exports a modified copy of the image using the current information held in its database. It may accompany the image with additional objects, such as annotations and presentation states. The upshot is that the information retrieved from the PACS may differ from the information received from its DICOM archive. Particular care must therefore be taken when retrieving studies from the archive rather than from the PACS in cases where the metadata has been changed in the PACS after the study was received. Examples are where the image is labelled with the wrong patient details (incorrectly selected from the modality worklist), or where the image is rejected for quality or safety reasons; these changes are not made to the copy in the DICOM archive.
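
A simplified sketch of the difference between the two retrieval routes, again with a hypothetical database record and invented field names:

from pydicom import dcmread

def copy_from_pacs(archived_path, pacs_db_record):
    # What a requester gets from the PACS: the stored file, patched at export time
    # with the current database values (record fields are hypothetical).
    ds = dcmread(archived_path)
    ds.PatientName = pacs_db_record["current_patient_name"]
    ds.PatientID = pacs_db_record["current_patient_id"]
    return ds  # possibly accompanied by separate annotation / presentation state objects

def copy_from_archive(archived_path):
    # What a requester gets by going straight to the DICOM archive: the original
    # file, still carrying whatever demographics it was stored with.
    return dcmread(archived_path)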

Third Generation PACS

So what is required of the interim, third generation PACS of the future? On the network side there is a move to advanced, data-centric post-processing to deliver the results and the original images quickly to web or thin clients. The volume of data to be delivered to the end user has increased dramatically since the 1990s, and performance expectations have increased with it. On the archive side, the requirement now is to support truly vendor-neutral (DICOM) archives. The PACS must publish all its changes to the archive, not just the original image. The PACS must become independent of its archive, able to query and retrieve data from any one of a number of Vendor Neutral Archives (VNAs) and present those images alongside its own acquired images, with all the correct metadata, presentation state, annotations and status. Web-based unified clinical viewers that display non-DICOM data alongside the DICOM image are already commonplace.

To maintain the metadata in its associated VNA(s), a third generation PACS (and the VNA) must fully comply with the relevant IHE profiles and DICOM standards, and publish all changes it makes internally to the images and metadata to its associated archive(s), repositories and registries. This ensures that the metadata remains in sync across all systems. Failure to do so means the PACS is, in effect, tethered to its archive, and direct access to the images in its VNA is potentially unsafe.
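
The standards route for this is the relevant IHE profiles (for example, patient information reconciliation and imaging object change management) rather than ad hoc messaging, but as a crude sketch of "publish your changes", a PACS could simply re-store the corrected objects to the VNA. The host, port, AE title and file paths below are hypothetical.

from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

def publish_correction_to_vna(corrected_paths, vna_host, vna_port):
    # Crude sketch: push corrected DICOM objects back to the VNA so its copies match
    # the PACS database; real deployments would follow the relevant IHE profiles
    # rather than relying on a bare C-STORE.
    ae = AE(ae_title="PACS")
    ae.requested_contexts = StoragePresentationContexts  # common storage SOP classes

    assoc = ae.associate(vna_host, vna_port)
    if assoc.is_established:
        for path in corrected_paths:
            status = assoc.send_c_store(dcmread(path))
            if status:
                print("C-STORE status: 0x{0:04X}".format(status.Status))
        assoc.release()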

What do I mean by Tethered and Unsafe?

The PACS fully manages the metadata in its database, so images in the archive accessed via the PACS will be presented with the most current information. If this information is not passed to the archive, then accessing the same study directly may deliver different results. In extreme cases the archive could return out-of-date, erroneous metadata, where the PACS would have corrected this data before presenting it to the user. Direct access to the image in the archive would therefore be unsafe; the only safe route to the data would be via the source PACS. Hence the PACS and the archive are tethered.
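
One way to see whether a PACS and its archive have drifted apart is to ask both the same question and compare the answers. The sketch below queries a system for the patient name it currently holds against a study; the hosts, port, AE title and UID would all be site-specific and are illustrative only.

from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

def patient_name_for_study(host, port, study_uid):
    # Ask one system (PACS or VNA) which patient name it currently associates
    # with a given study; all connection details here are hypothetical.
    ae = AE(ae_title="AUDIT")
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

    identifier = Dataset()
    identifier.QueryRetrieveLevel = "STUDY"
    identifier.StudyInstanceUID = study_uid
    identifier.PatientName = ""  # return key

    names = []
    assoc = ae.associate(host, port)
    if assoc.is_established:
        for status, rsp in assoc.send_c_find(identifier, StudyRootQueryRetrieveInformationModelFind):
            if status and status.Status in (0xFF00, 0xFF01) and rsp is not None:
                names.append(str(rsp.PatientName))
        assoc.release()
    return names

# A tethered archive shows up as a mismatch between
# patient_name_for_study("pacs.example.org", 104, uid) and
# patient_name_for_study("vna.example.org", 104, uid).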

Summary

The fourth generation of PACS, yet to be developed, would publish all its changes to the metadata, presentation states, user-added data, etc. to a fully independent, standards-based archive. Personally, I do not believe there will be a fourth generation PACS. The functionality and clinical workflow will instead be spread over an array of XDS-compliant applications designed to support the cross-enterprise workflow of all clinical data for a clinical episode in a single view.

The requirement to access a diagnostic-quality image (or any raw data) from any location is widely agreed. Increasingly, there is also a requirement for remote diagnostics and reporting. I believe this requires a move away from vertical application stacks per department towards horizontal functional layers. Hence the ability to independently generate a report, and publish it, should not be restricted to a single department, nor even to a single enterprise. This ability should become a functional layer in its own right, sitting above the other layers that manage workflow, acquire data, present data for diagnostics, and store and protect the data.