To this end, in this paper I present one approach to modeling and evaluating these issues, as a strawman proposal. The visualization process is modeled in an object-oriented manner. The key element of this model is the Quality Metric (QM) associated with each data type. The QM captures the intrinsic quality of the data, including error information (e.g., the dose range represented by a dose surface, or the positional error resulting from surface simplification), non-information that is added (e.g., smooth shading a surface with very coarse triangulation), information that is lost (e.g., reducing the dose volume, or losses due to the interaction of the colormap with human visual perception), and the intrinsic value of a visualization (e.g., displaying dose as raw numbers is certainly accurate, but worthless as a visualization tool). QMs are modified by each transformation, and thus are propagated from the raw data to the final observer, providing a quantitative handle on the meaning of the resulting visualization. I have only scratched the surface here, but I hope this provides a framework for future discussion.
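The propagation idea can be sketched in code. The following is a minimal illustration, not the paper's actual design: the class name `QualityMetric`, the fields `positional_error` and `information_loss`, and the two example transformations are all hypothetical, chosen to mirror the surface-simplification and volume-reduction examples above.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class QualityMetric:
    """Hypothetical QM carried alongside a dataset (illustrative fields only)."""
    positional_error: float = 0.0   # e.g., worst-case displacement from simplification
    information_loss: float = 0.0   # fraction of the original information discarded

def simplify_surface(qm: QualityMetric, max_displacement: float) -> QualityMetric:
    # Surface simplification adds positional error to the QM.
    return replace(qm, positional_error=qm.positional_error + max_displacement)

def reduce_volume(qm: QualityMetric, kept_fraction: float) -> QualityMetric:
    # Downsampling the dose volume loses a share of the remaining information.
    lost = (1.0 - kept_fraction) * (1.0 - qm.information_loss)
    return replace(qm, information_loss=qm.information_loss + lost)

# Propagate the QM from raw data through a chain of transformations,
# yielding a quantitative summary at the final visualization stage.
qm = QualityMetric()
qm = reduce_volume(qm, kept_fraction=0.5)
qm = simplify_surface(qm, max_displacement=2.0)
print(qm)
```

Each transformation returns a new QM rather than mutating the old one, so the quality record at any pipeline stage remains inspectable.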