My research focuses on information visualization, perceptual visualization, and human-computer interaction. Currently, I am working on constructing a framework for task-optimized visualization.
Design choices in visualization, such as graphical encodings, directly impact the quality of decision making. Effective visualizations improve understanding of data by leveraging visual perception, e.g., the size of marks in a scatterplot is better suited to representing quantitative data, while color is better suited to categorical data. In addition, the effectiveness of a visualization varies with the task being performed on it, e.g., when searching for clusters vs. outliers, mark opacity affects the visibility of the data. Hence, frameworks that consider both the perception of visual encodings and the task being performed enable optimizing a visualization's design to maximize its efficacy. In this project, we outline our research on two such frameworks. First, we build a new framework for modeling the perception of clustering in scatterplots. Second, we present a framework for evaluating the effectiveness of line chart smoothing across a range of visual analytics tasks, and we describe how this framework can be extended to construct other task-optimized visualization frameworks. Using these frameworks, we provide less ambiguous presentations of data, leading to higher-quality, higher-confidence decision making.
Clustering occurs when patterns in the data form distinct groups. However, at its core, clustering is an ill-posed problem, as the "correct" clustering depends upon multiple factors. Experiments in visualization have tried to quantify how effectively viewers perceive different data properties encoded using various visual features. Understanding a viewer's ability to rapidly and accurately perceive the clusters in a scatterplot is the theme of this work. We present a rigorous empirical study on the visual perception of clustering in scatterplots, modeled around a topological data structure known as the merge tree.
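To give a concrete sense of the structure involved, the sketch below builds a merge tree of a 1D scalar field (e.g., a density estimate along one axis). This is an illustrative simplification and an assumption on my part, not the implementation used in the study: it sweeps samples from high to low value, starting a branch at each local maximum and recording where branches merge at saddles.

```python
def merge_tree_1d(values):
    """Merge tree of a 1D scalar field via a high-to-low sweep with union-find.

    Returns a list of merge events (dying_peak_index, surviving_peak_index,
    merge_height): each event records a branch (born at a local maximum)
    merging into a taller branch at a saddle.
    """
    parent = {}   # union-find forest over processed sample indices
    peak = {}     # component root -> index of that component's highest sample
    events = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Process samples from highest to lowest value.
    for i in sorted(range(len(values)), key=lambda k: -values[k]):
        parent[i] = i
        peak[i] = i
        # Roots of already-processed (i.e., higher-valued) neighbors.
        nbr_roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if not nbr_roots:
            continue  # i is a local maximum: a new branch is born here
        # The branch with the highest peak survives; the others die at i.
        roots = sorted(nbr_roots, key=lambda r: values[peak[r]], reverse=True)
        survivor = roots[0]
        parent[i] = survivor
        for r in roots[1:]:
            events.append((peak[r], peak[survivor], values[i]))
            parent[r] = survivor
    return events
```

For example, `merge_tree_1d([1, 3, 2, 4, 1])` reports the branch peaked at index 1 (height 3) merging into the branch peaked at index 3 (height 4) at the saddle height 2. The persistence of each branch (peak height minus merge height) is what makes the merge tree useful for reasoning about which clusters are visually salient.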
When line chart data are noisy, visualization designers can turn to smoothing to reduce the visual clutter. However, many techniques are available, and while the results they produce may look similar, each preserves different properties of the data. To preserve some properties of the input data, each smoothing technique must also lose information, which can negatively impact the utility of the resulting data. The importance of the lost information depends on both the data being used and the visual analytics task being performed. We present an analytical framework for measuring the effectiveness of various smoothing techniques and evaluate perceptual judgments through user studies.
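The point that different techniques preserve different properties can be illustrated with two simple filters. The functions below are a minimal sketch of my own, not the techniques evaluated in the study: a box (moving-average) filter preserves the total mass of the signal but flattens spikes, while a median filter removes isolated spikes entirely but preserves sharp level shifts.

```python
def moving_average(ys, w=3):
    """Box filter: each output is the mean of a window around the sample.
    Preserves the signal's total mass, but spreads spikes into neighbors."""
    half = w // 2
    out = []
    for i in range(len(ys)):
        win = ys[max(0, i - half):i + half + 1]
        out.append(sum(win) / len(win))
    return out

def moving_median(ys, w=3):
    """Median filter: each output is the window's median.
    Discards isolated spikes entirely, but keeps sharp edges sharp."""
    half = w // 2
    out = []
    for i in range(len(ys)):
        win = sorted(ys[max(0, i - half):i + half + 1])
        out.append(win[len(win) // 2])
    return out
```

On a series with a single spike, such as `[0, 0, 10, 0, 0]`, the median filter returns all zeros, while the moving average spreads the spike across three samples. For a task like outlier detection, the median filter has destroyed exactly the information the viewer needs; for trend estimation, the average may be preferable. This is the sense in which the suitability of a smoothing technique depends on the task.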
Knowledge of human perception has long been incorporated into visualizations to enhance their quality and effectiveness. The last decade has shown a particular increase in perception-based visualization research. In this paper, we provide a systematic and comprehensive survey of experimentation and theory on perception related to visualization, to help readers understand and apply the principles of perception to their visualization designs. Implementing the insights from the research summarized in this survey can lead to better visualization design and more effective use of visual encodings.
Reproducibility has been increasingly encouraged by scientific communities to validate experimental conclusions, and replication studies represent a significant opportunity for vision scientists wishing to contribute new perceptual models, methods, or insights to the visualization community. Unfortunately, the notion of replicating previous studies does not lend itself to how we communicate research findings. Simply put, studies that re-conduct and confirm earlier results hold no novelty, a key element of the modern research publication system. Nevertheless, savvy researchers have discovered ways to produce replication studies by embedding them into other, sufficiently novel studies. In this position work, we define three methods---re-evaluation, expansion, and specialization---for implanting a replication study into a novel published work. Finally, we discuss why publishing a true replication study on its own should be avoided, while providing suggestions for how vision scientists and others can still use replication studies as a vehicle for producing visualization research publications.
Visual Analytics Science and Technology (VAST) is an annual contest that advances visual analytics through competition. The VAST Challenge is designed to help researchers understand how their software would be used in a novel analytic task and to determine whether their data transformations, visualizations, and interactions would be beneficial for particular analytical tasks. In the summer of 2017, our team of three (with Sulav Malla and Anwesh Tuladhar), under the guidance of Dr. Paul Rosen, participated in three mini-challenges (MC1, MC2, and MC3) and submitted our work to the IEEE VAST Challenge community. Our MC3 submission was awarded an Honorable Mention for Good Facilitation of Single Image Analysis.