Program

Week at a glance (Monday–Friday; parallel sessions per time slot):

09:00 – 10:40   EGPGV 1 · EuroVA 1 · MolVA 1 · EnvirVis 1 · Opening & Keynote · FP 3 · STAR 1 · FP 7 · FP 8 · FP 13 · SP 4
11:00 – 12:40   EGPGV 2 · EuroVA 2 · MolVA 2 · EnvirVis 2 · FP 1 · FP 4 · SP 1 · FP 9 · FP 10 · Dirk Bartz Prize · STAR 4
14:00 – 15:40   EGPGV 3 · EuroVA 3 · VisGap 1 · MLVis 1 · Invited C&G · FP 2 (Best Papers) · FP 5 · STAR 2 · FP 11 · SP 3 · Capstone, Award & Closing
15:40           Industrial Keynote
16:00 – 17:40   EGPGV 4 · EuroVA 4 · VisGap 2 · MLVis 2 · Posters · FP 6 · SP 2 · FP 12 · STAR 3

All times are in Central European Summer Time (CEST, UTC+2).

Monday, 14 June, 2021

09:00 – 10:40
EGPGV 1: Opening & Keynote  
Chair: Markus Hadwiger
09:00 – 09:25
Opening
Markus Hadwiger, Matthew Larsen, Filip Sadlo
09:25 – 10:40
Keynote: High-performance visual computing for large-scale biomedical image analysis
Won-Ki Jeong

Abstract: High-resolution, large-scale image data play a central role in biomedical research, but they also pose challenging computational problems for image processing and visualization in terms of developing suitable algorithms, coping with the ever-increasing data sizes, and maintaining interactive performance. Massively parallel computing systems, such as graphics processing units (GPUs) and distributed cluster systems, can be a solution for such computation-demanding tasks due to their scalable and parallel architecture. In addition, recent advances in machine learning can be another solution by shifting the time-consuming computing process into the training (pre-processing) phase and reducing prediction time by performing only one-pass deployment of a feed-forward neural network. In this talk, I will introduce several examples of such research directions from our work on large-scale biomedical image analysis using high-performance computing and machine learning techniques, for example, how to leverage parallel computing architecture and machine learning algorithms to accelerate tera-scale microscopy image processing and analysis for biomedical applications.

Presenter: Won-Ki Jeong
Won-Ki Jeong is currently a full professor of computer science and engineering at Korea University. He was an assistant and associate professor in the School of Electrical and Computer Engineering at UNIST (2011–2020), a visiting associate professor of neurobiology at Harvard Medical School (2017–2018), and a research scientist in the Center for Brain Science at Harvard University (2008–2011). His research interests include visualization, image processing, and parallel computing. He received a Ph.D. degree in Computer Science from the University of Utah in 2008, where he was a member of the Scientific Computing and Imaging (SCI) Institute. He hosted the NVIDIA GPU Research Center at UNIST in 2014. He co-authored chapters in GPU Gems published in 2011 and has published more than 60 refereed research articles.


09:00 – 10:40
EuroVA 1: Immersive Analytics and Interaction  
Chair: Panagiotis Ritsos
09:00 – 09:25
Opening
Jürgen Bernard, Katerina Vrotsou, Michael Behrisch
09:25 – 09:50
Talk2Hand: Knowledge board interaction easing analysis with machine learning assistants
Yu-Lun Hong, Benjamin Watson, Kenneth Thompson, Paul Davis


Abstract: Analysts now often use machine learning (ML) assistants, but find them difficult to use, since most have little ML expertise. Talk2Hand improves the usability of ML assistants by supporting interaction with them using knowledge boards, which intuitively show association, visually aid human recall, and offer natural interaction that eases improvement of displayed associations and addition of new data into emerging models. Knowledge boards are familiar to most and studied by analytics researchers, but not in wide use, because of their large size and the challenges of using them for several projects simultaneously. Talk2Hand uses augmented reality to address these shortcomings, overlaying large but virtual knowledge boards onto typical analyst offices, and enabling analysts to switch easily between different knowledge boards. This paper describes our Talk2Hand prototype.

Presenter: Yu-Lun Hong

09:50 – 10:15
Immersive 3D Visualization of Multi-Modal Brain Connectivity
Britta Pester, Raimund Dachselt, Oliver Winke, Carolin Ligges, Stefan Gumhold


Abstract: In neuroscience, the investigation of connectivity between different brain regions suffers from the lack of adequate solutions for visualizing detected networks. One reason is the high number of dimensions that have to be combined within the same view: neuroscientists examine brain connectivity in its natural spatial context across the additional dimensions time and frequency. To combine all these dimensions without prior merging or filtering steps, we propose a visualization in virtual reality to realize multiple coordinated views of the networks in a virtual visual analysis lab. We implemented a prototype of the new idea. In a first qualitative user study, we included experts in the fields of computer science, psychology, and neuroscience. Time series of electroencephalography recordings evoked by visual stimuli were used to provide a first proof-of-concept trial. The positive user feedback shows that our application successfully fills a gap in the visualization of high-dimensional brain networks.

Presenter: Britta Pester

10:15 – 10:40
Immersive Analytics of Heterogeneous Biological Data Informed through Need-finding Interviews
Christine Ripken, Sebastian Tusk, Christian Tominski


Abstract: The goal of this work is to improve existing biological analysis processes by means of immersive analytics. In a first step, we conducted need-finding interviews with 12 expert biologists to understand the limits of current practices and identify the requirements for an enhanced immersive analysis. Based on the gained insights, a novel immersive analytics solution is being developed. Biological data is highly interdependent. This requires biologists to relate various types of data, including genomes, transcriptomes, and phenomes. We use an abstract tabular representation of heterogeneous data projected onto a curved virtual wall. Several visual and interactive mechanisms are offered to allow biologists to get an overview of large data, to access details and additional information on the fly, to compare selected parts of the data, and to navigate up to about 5 million data values in real-time. Although a formal user evaluation is still pending, initial feedback indicates that our solution can be useful to expert biologists.

Presenter: Christine Ripken


09:00 – 10:40
MolVA 1: Keynote and Presentations  
Chair: Michael Krone, Björn Sommer
09:00 – 09:05
Opening
Jan Byška, Michael Krone
09:05 – 09:45
Keynote: A la carte biomolecular design: algorithms and supercomputing
Victor Guallar

Abstract: We are witnessing a significant revolution in the way science is performed. Just 20 years ago, pharmaceutical companies fired many of their computational chemists after the frustration from the (false?) high expectations that modelling had created. We should admit that most of the scientific community did not appreciate bioinformatics. My experimental faculty colleagues, at Washington University at that time, always looked over us with a fake smile. And in the last few years, all has changed. Today, there is no serious pharmaceutical effort that does not start with an exhaustive in silico study. And it is catching up in many more areas of biotechnology, such as enzyme engineering, or material science. One could even state that it is general to all science: it does first get modelled. What has happened in these 10-15 years? Clearly, we have seen an explosion of better algorithms and of the available data to test/train them; all these happening under easy and cheap access to vast supercomputing resources. We will discuss in this talk these advances, focusing on some contributions from our lab and on what we foresee for the next few years.

Presenter: Victor Guallar
Dr. Guallar performed his PhD between the Autonomous University of Barcelona (Spain) and UC Berkeley (USA), with his defense in November 1999. In 2003, after three years as a postdoctoral researcher at Columbia University (New York, USA), he was appointed assistant professor at Washington University School of Medicine (St Louis, USA). In 2006 he was awarded his current ICREA professor position at the Barcelona Supercomputing Center (BSC). Since then, his laboratory (EAPM) has grown considerably, keeping a productive international character and developing important contributions in computational biophysics, such as the protein-ligand modeling software PELE, and in biochemistry, recently centered on enzyme engineering. Prof. Guallar has been awarded several important research projects, including a prestigious advanced ERC grant (the youngest researcher to receive it in Spain). His research has produced over 170 papers in international journals, reaching an H-index of 44, and he has directed 16 PhD theses. In addition to algorithm development (and its application), the group has recently placed importance on adding interdisciplinary fields, such as molecular visualization techniques, and data mining and software optimization through machine learning algorithms. Prof. Guallar is also a founder of the first spin-off from BSC, Nostrum Biodiscovery, a young biotech enterprise created in 2015 which aims to collaborate with pharmaceutical and biotech companies dedicated to the development of drugs and molecules of biotechnological interest.

09:45 – 10:00
A collaborative molecular graphics tool for knowledge dissemination with augmented reality and 3D printing
Mathieu Noizet, Valentine Peltier, Hervé Deleau, Manuel Dauchez, Stéphanie Prévost, Jessica Jonquet-Prevoteau


Abstract: We propose in this article a concept called “augmented 3D printing with molecular modeling” as an application framework. Visualization is an essential means to represent complex biochemical and biological objects in order to understand their structures as functions. By pairing augmented reality systems and 3D printing, we propose to design a new collaborative molecular graphics tool (under implementation) for scientific visualization and visual analytics. The printed object is then used as a support for the visual augmentation by allowing the superimposition of different visualizations. Thus, still aware of his environment, the user can easily communicate with his collaborators while moving around the object. This user-friendly tool, dedicated to non-initiated scientists, will facilitate the dissemination of knowledge and collaboration between interdisciplinary researchers. Here, we present a first prototype and we focus on the main molecule tracking component. Initial feedback from our users suggests that our proposal is valid, and shows a real interest in this type of tool, with an intuitive interface.

Presenter: Mathieu Noizet

10:00 – 10:20
Computational design, fabrication and evaluation of rubber protein models
Thomas Alderighi, Daniela Giorgi, Luigi Malomo, Paolo Cignoni, Monica Zoppè


Abstract: Tangible 3D molecular models conceptualize complex phenomena in a stimulating and engaging format. This is especially true for learning environments, where additive manufacturing is increasingly used to produce teaching aids for chemical education. However, the 3D models presented previously are limited in the type of molecules they can represent and the amount of information they carry. In addition, they have little role in representing complex biological entities such as proteins. We present the first complete workflow for the fabrication of soft models of complex proteins of any size. We leverage molding technologies to generate accurate, soft models which incorporate both spatial and functional aspects of large molecules. Our method covers the whole pipeline from molecular surface preparation and editing to actual 3D model fabrication. The models fabricated with our strategy can be used as aids to illustrate biological functional behavior, such as assembly in quaternary structure and docking mechanisms, which are difficult to convey with traditional visualization methods. We applied the proposed framework to fabricate a set of 3D protein models, and we validated the appeal of our approach in a classroom setting.

Presenter: Thomas Alderighi

10:20 – 10:40
VRdeo: Creating Engaging Educational Material for Asynchronous Student-Teacher Exchange Using Virtual Reality
Vojtech Bruža, Jan Byška, Jan Mican, Barbora Kozlíková


Abstract: Educational videos are traditional means of communicating scientific findings to a broader audience. Nowadays, they are also a very common medium in distance teaching. However, creating videos using the existing software tools can be very challenging for inexperienced users. Also, the student’s engagement in standard videos is often very limited, as the experience and exploration of the presented phenomena is indirect and far from being interactive. To overcome this, we propose a novel tool, called VRdeo, for creating and presenting educational material that utilizes the advantages of virtual reality (VR). Each stage of the production is represented by an operating mode, where the user acts in a predefined role. VRdeo is a versatile platform that enables tutors to immerse themselves in the virtual scene, explore 3D models of scientific data, and record a narrated story where the tutor is an active element, represented by a virtual avatar. The recording can be exported as a virtual scene. The observers (e.g., students) can then enter such a scene, where they can move around and have several options for replaying and interacting with the tutor’s story. In cases when the observer cannot use a virtual reality device, VRdeo can generate a traditional 2D video as well. To evaluate VRdeo, we asked several experts in diverse fields for their feedback and conducted a user study with students, which gave us valuable information about its usefulness for both creating and consuming the narrated stories.

Presenter: Vojtech Bruža


09:00 – 10:40
EnvirVis 1: Probabilistic and Uncertainty-based Techniques  
Chair: Soumya Dutta
09:00 – 09:12
Opening
Soumya Dutta, Kathrin Feige, Karsten Rink, Dirk Zeckzer
09:12 – 09:34
GPU-Assisted Visual Analysis of Flood Ensemble Interaction
Donald Johnson, T.J. Jankun-Kelly


Abstract: Analysis of overlapping spatial data sets is a challenging problem, with tension between clearly identifying individual surfaces and exploring significant overlaps/conflicts. One area where this problem occurs is when dealing with multiple flood scenes that cover an area of interest. To allow easier analysis of scenes with multiple overlapping data layers, we introduce a visualization system designed to aid in their analysis. It allows the user both to see where different data sets agree and to categorize areas of disagreement based on the surfaces participating in each area. The results are stable with regard to render order, and GPU acceleration via OpenCL allows interaction with large datasets by performing preprocessing dynamically. This interactivity is further enhanced by data streaming, which allows datasets too large to be loaded directly onto the GPU to be processed. After demonstrating our approach on a diverse set of ensemble datasets, we provide feedback from expert users.

Presenter: T.J. Jankun-Kelly

09:34 – 09:56
Probabilistic Principal Component Analysis Guided Spatial Partitioning of Multivariate Ocean Biogeochemistry Data
Subhashis Hazarika, Ayan Biswas, Earl Lawrence, Philip Wolfram


Abstract: Farm-scale cultivation of macroalgae for the production of renewable biofuel depends on complex ocean hydrodynamics and also on the availability of different essential nutrients. To better understand such conditions that are conducive to the growth of macroalgae, scientists implement large-scale computational models, simulating several physical variables (essential nutrients and other chemical compounds) relevant to studying oceanic biogeochemistry (BGC). Visualizing and analyzing the different physical variables and their inter-variable relationships across the spatial domain is crucial to form a concrete understanding of the underlying physical phenomenon. To facilitate such multivariate analyses for large-scale simulation data, a popular and effective way is to decompose the spatial domain into smaller local regions based on the variable relationships. However, spatial decomposition of multivariate data is not trivial. In this paper, we propose a novel multivariate spatial data partitioning approach using probabilistic principal component analysis. We also perform a detailed study of other prospective multivariate partitioning schemes and compare them with our proposed method. To demonstrate the efficacy of our approach, we studied nutrient relationships across different regions of the ocean using a high-resolution ocean BGC simulation data set, which comprises multiple physical variables essential for macroalgae cultivation. We further validate the results of our analyses by getting feedback from domain experts in the field of ocean sciences.

Presenter: Subhashis Hazarika
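The core step of the paper — deriving a dominant multivariate relationship per spatial region and grouping regions accordingly — can be illustrated with a small sketch. This is not the authors' implementation (they use probabilistic PCA on large simulation output); the block size, similarity threshold, and function names below are illustrative assumptions.

```python
import numpy as np

def dominant_loading(samples):
    """First principal direction of a block of multivariate samples,
    with its sign fixed so loadings are comparable across blocks."""
    X = samples - samples.mean(axis=0)
    cov = X.T @ X / len(X)
    _, vecs = np.linalg.eigh(cov)            # eigenvectors in ascending order
    pc = vecs[:, -1]                         # direction of largest variance
    return pc * np.sign(pc[np.argmax(np.abs(pc))])

def partition_blocks(field, block=8, sim=0.9):
    """field: (H, W, V) grid of V variables. Greedily group spatial blocks
    whose dominant variable loadings have |cosine similarity| above `sim`."""
    H, W, V = field.shape
    loadings = []
    for i in range(0, H, block):
        for j in range(0, W, block):
            loadings.append(dominant_loading(
                field[i:i + block, j:j + block].reshape(-1, V)))
    labels, protos = [], []
    for pc in loadings:
        for k, proto in enumerate(protos):
            if abs(pc @ proto) > sim:        # same variable relationship
                labels.append(k)
                break
        else:                                # new kind of relationship
            protos.append(pc)
            labels.append(len(protos) - 1)
    return np.array(labels).reshape(H // block, W // block)
```

On a synthetic field whose left half has positively correlated variables and whose right half has anti-correlated ones, the left and right blocks fall into two separate partitions.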

09:56 – 10:18
A Winding Angle Framework for Tracking and Exploring Eddy Transport in Oceanic Ensemble Simulations
Anke Friederici, Martin Falk, Ingrid Hotz


Abstract: Oceanic eddies, which are highly mass-coherent vortices traveling through the earth’s waters, are of special interest for their mixing properties. Therefore, large-scale ensemble simulations are performed to approximate their possible evolution. Analyzing their development and transport behavior requires a stable extraction of both their shape and the properties of the water masses within. We present a framework for extracting the time series of full 3D eddy geometries based on a winding angle criterion. Our analysis tool enables users to explore the results in depth by linking extracted volumes to extensive statistics collected across several ensemble members. The methods are showcased on an ensemble simulation of the Red Sea. We show that our extraction produces stable and coherent geometries even for highly irregular eddies. These capabilities are utilized to evaluate the stability of our method with respect to variations of user-defined parameters. Feedback gathered from domain experts was very positive and indicates that our methods will be considered for newly simulated, even larger data sets.

Presenter: Anke Friederici
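The winding angle criterion itself is simple to state: accumulate the signed turning angle along a pathline, and a magnitude near 2π indicates one full revolution around a vortex core. A minimal 2D sketch of this idea (illustrative only, not the authors' 3D ensemble pipeline):

```python
import numpy as np

def winding_angle(points):
    """Accumulated signed turning angle along a 2D trajectory.
    A magnitude of ~2*pi means the path completes one revolution,
    the classic indicator that it circles a vortex (eddy) core."""
    p = np.asarray(points, dtype=float)
    seg = np.diff(p, axis=0)                                 # segment vectors
    cross = seg[:-1, 0] * seg[1:, 1] - seg[:-1, 1] * seg[1:, 0]
    dot = (seg[:-1] * seg[1:]).sum(axis=1)
    return np.arctan2(cross, dot).sum()                      # signed angles

# a trajectory sampled from a circle winds by ~2*pi; a straight line by ~0
t = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.c_[np.cos(t), np.sin(t)]
line = np.c_[t, 2.0 * t]
```

Thresholding this quantity per pathline is what makes the criterion robust to the highly irregular eddy shapes mentioned in the abstract, since it depends on rotation rather than geometric roundness.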

10:18 – 10:40
Uncertainty-aware Detection and Visualization of Ocean Eddies in Ensemble Flow Fields – A Case Study of the Red Sea
Felix Raith, Gerik Scheuermann, Christina Gillmann


Abstract: Eddy detection is a state-of-the-art tool for examining transport behavior in oceans, as eddies form circular movements that are heavily involved in transferring mass in an ocean. To achieve this, ocean simulations are run multiple times, and eddy detection is performed on the final simulation results. Unfortunately, this process is affected by a variety of uncertainties. In this manuscript, we aim to identify the types of uncertainty inherent in ocean simulations. For each of the identified uncertainties, we provide a quantification approach. Based on the quantified uncertainties, we provide a visualization approach that consists of domain-embedded views and an uncertainty space view connected via interaction. We show the effectiveness of our approach through a case study of the Red Sea.

Presenter: Felix Raith


10:40 – 11:00 BREAK

11:00 – 12:25
EGPGV 2: Flow  
Chair: Holger Theisel
11:00 – 11:25
HyLiPoD: Parallel Particle Advection Via a Hybrid of Lifeline Scheduling and Parallelization-Over-Data
Roba Binyahib, David Pugmire, Hank Childs


Abstract: Performance characteristics of parallel particle advection algorithms can vary greatly based on workload. With this short paper, we build a new algorithm based on results from a previous bake-off study which evaluated the performance of four algorithms on a variety of workloads. Our algorithm, called HyLiPoD, is a “meta-algorithm,” i.e., it considers the desired workload to choose from existing algorithms to maximize performance. To demonstrate HyLiPoD’s benefit, we analyze results from 162 tests including concurrencies of up to 8192 cores, meshes as large as 34 billion cells, and particle counts as large as 300 million. Our findings demonstrate that HyLiPoD’s adaptive approach allows it to match the best performance of existing algorithms across diverse workloads.

Presenter: Roba Binyahib

11:25 – 11:55
Machine Learning-Based Auto-tuning for Parallel Particle Advection
Samuel David Schwartz, Hank Childs, David Pugmire


Abstract: Data-parallel particle advection algorithms contain multiple controls that affect their execution characteristics and performance, in particular how often to communicate and how much work to perform between communications. Unfortunately, the optimal settings for these controls vary based on workload, and, further, it is not easy to devise straightforward heuristics that automate calculation of these settings. To solve this problem, we investigate a machine learning-based autotuning approach for optimizing data-parallel particle advection. During a pre-processing step, we train multiple machine learning techniques using a corpus of performance data that includes results across a variety of workloads and control settings. The best performing of these techniques is then used to form an oracle, i.e., a module that can determine good algorithm control settings for a given workload immediately before execution begins. To evaluate this approach, we assessed the ability of seven machine learning models to capture particle advection performance behavior and then ran experiments for 108 particle advection workloads on 64 GPUs of a supercomputer. Our findings show that our machine learning-based oracle achieves good speedups relative to the available gains.

Presenter: Samuel David Schwartz
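The oracle idea — learn runtime as a function of workload features and control settings from past runs, then pick the settings with the lowest predicted runtime — can be sketched with a toy nearest-neighbour predictor. This is purely illustrative (the paper evaluates seven real ML models); all names and the synthetic runtime model are assumptions.

```python
import numpy as np

def make_oracle(workloads, settings, runtimes):
    """Train a 1-nearest-neighbour runtime predictor on past runs.
    workloads: (n, dw), settings: (n, ds), runtimes: (n,)."""
    X = np.hstack([workloads, settings])

    def predict(w, s):
        # predicted runtime = runtime of the most similar past run
        q = np.concatenate([w, s])
        return runtimes[np.argmin(((X - q) ** 2).sum(axis=1))]

    def best_settings(w, candidates):
        """Choose the candidate control settings with the lowest
        predicted runtime for workload w, before execution begins."""
        return min(candidates, key=lambda s: predict(w, np.asarray(s)))

    return best_settings
```

On a synthetic corpus where the true runtime is (setting − workload)², the oracle correctly selects the setting matching the workload.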

11:55 – 12:25
Scalable In Situ Computation of Lagrangian Representations via Local Flow Maps
Sudhanshu Sane, Abhishek Yenpure, Roxana Bujack, Matthew Larsen, Kenneth Moreland, Christoph Garth, Chris R. Johnson, Hank Childs


Abstract: In situ computation of Lagrangian flow maps to enable post hoc time-varying vector field analysis has recently become an active area of research. However, the current literature is largely limited to theoretical settings and lacks a solution to address scalability of the technique in distributed memory. To improve scalability, we propose and evaluate the benefits and limitations of a simple, yet novel, performance optimization. Our proposed optimization is a communication-free model resulting in local Lagrangian flow maps, requiring no message passing or synchronization between processes, intrinsically improving scalability, and thereby reducing overall execution time and alleviating the encumbrance placed on simulation codes from communication overheads. To evaluate our approach, we computed Lagrangian flow maps for four time-varying simulation vector fields and investigated how execution time and reconstruction accuracy are impacted by the number of GPUs per compute node, the total number of compute nodes, particles per rank, and storage intervals. Our study consisted of experiments computing Lagrangian flow maps with up to 67M particle trajectories over 500 cycles and used as many as 2048 GPUs across 512 compute nodes. In all, our study contributes an evaluation of a communication-free model as well as a scalability study of computing distributed Lagrangian flow maps at scale using in situ infrastructure on a modern supercomputer.

Presenter: Sudhanshu Sane


11:00 – 12:40
EuroVA 2: VA Applications and Workflows  
Chair: Gennady Andrienko
11:00 – 11:25
Lessons learned while supporting Cyber Situational Awareness
Graziano Blasilli, Emiliano De Paoli, Simone Lenti, Sergio Picca


Abstract: The increasing number of cyberattacks against critical infrastructures has pushed researchers to develop many Visual Analytics solutions that provide valid defensive approaches and improve the situational awareness of security operators. Applying such solutions to complex infrastructures is often challenging, and existing tools can present limitations and exhibit various issues. In this paper, supported by cybersecurity experts of a world-leading company in the military domain, we apply an existing Visual Analytics solution, MAD, to the complex network of a critical infrastructure, highlighting its limitations in this scenario and proposing further solutions to improve cyber situational awareness in both proactive and reactive risk analyses. The results of this research contribute to characterizing the activities performed by domain experts and the implications for the design of Visual Analytics solutions that aim to support them.

Presenter: Graziano Blasilli

11:25 – 11:50
Customizable Coordination of Independent Visual Analytics Tools
Lars Nonnemann, Marius Hogräfer, Heidrun Schumann, Bodo Urban, Hans-Jörg Schulz


Abstract: While it is common to use multiple independent analysis tools in combination, it is still cumbersome to carry out a cross-tool visual analysis. Some dedicated frameworks addressing this issue exist, yet in order to use them, a Visual Analytics tool must support their API or architecture. In this paper, we do not rely on a single predetermined exchange mechanism for the whole ensemble of VA tools. Instead, we propose using any available channel for exchanging data between two subsequently used VA tools. This effectively allows mixing and matching different data exchange strategies within one cross-tool analysis, which considerably reduces the overhead of adding a new VA tool to a given tool ensemble. We demonstrate our approach with a first implementation called AnyProc and its application to a use case of three VA tools in a Health IT data analysis scenario.

Presenter: Lars Nonnemann, Marius Hogräfer

11:50 – 12:15
A Taxonomy of Attribute Scoring Functions
Jenny Schmid, Jürgen Bernard


Abstract: Shifting the analysis from items to the granularity of attributes is a promising approach to address complex decision-making problems. In this work, we study attribute scoring functions (ASFs), which transform values from data attributes to numerical scores. As the output of ASFs for different attributes is always comparable and scores carry user preferences, ASFs are particularly useful for analysis goals such as multi-attribute ranking, multi-criteria optimization, or similarity modeling. However, non-programmers cannot yet fully leverage their individual preferences on attribute values, as visual analytics (VA) support for the creation of ASFs is still in its infancy, and guidelines for the creation of ASFs are missing almost entirely. We present a taxonomy of eight types of ASFs and an overview of tools for the creation of ASFs as a result of an extensive literature review. Both the taxonomy and the tools overview have descriptive power, as they represent and combine non-visual math and statistics perspectives with the VA perspective. We underpin the usefulness of VA support for broader user groups in real-world cases for all eight types of ASFs, unveil missing VA support for the ASF creation, and discuss the integration of ASF in VA workflows.

Presenter: Jenny Schmid
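To make the notion concrete, two common ASF types — a "higher is better" linear rescaling and a "prefer a target value" function — can be sketched as follows. These are generic illustrative examples, not the paper's taxonomy; the attribute names, ranges, and weights are made up.

```python
import numpy as np

def linear_asf(x, lo, hi):
    """'Higher is better': rescale attribute values to [0, 1] scores."""
    return np.clip((np.asarray(x, float) - lo) / (hi - lo), 0.0, 1.0)

def target_asf(x, target, tol):
    """'Prefer values near a target': Gaussian falloff to (0, 1]."""
    return np.exp(-0.5 * ((np.asarray(x, float) - target) / tol) ** 2)

# Because every ASF outputs comparable [0, 1] scores carrying user
# preferences, a weighted sum yields a multi-attribute ranking.
price = np.array([100.0, 250.0, 400.0])      # lower is better -> invert
rating = np.array([3.0, 4.5, 4.0])           # higher is better
score = 0.5 * (1.0 - linear_asf(price, 100, 400)) + 0.5 * linear_asf(rating, 3, 5)
```

Here the mid-priced, top-rated item ranks first, showing how per-attribute preferences combine once each attribute is mapped to a common score scale.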

12:15 – 12:40
Rumble Flow++: Interactive Visual Analysis of Dota2 Encounters
Wilma Weixelbaum, Kresimir Matkovic


Abstract: In the last decade, the popularity of ESports has grown rapidly. The financial leader in the tournament scene is Dota2, a complex and strategic multiplayer game. Analysis and exploration of game data could lead to better outcomes. Available data resources include the combat log, which logs every event at an atomic level and excels at providing great detail at the expense of readability, and concise third-party summaries that provide little detail. In this paper, we introduce Rumble Flow++, a web-based exploratory analysis application that provides details in an easy-to-understand manner while providing meaningful aggregations. Rumble Flow++ supports exploration and analysis at different levels of granularity. It supports analysis at the level of the entire match, at the level of individual team fights, and at the level of individual heroes. The user can easily switch between levels in a fully interactive environment. Rumble Flow++ provides much more detail than a summary visualization typically uses, and much better readability than an atomic log file.

Presenter: Wilma Weixelbaum


11:00 – 12:40
MolVA 2: Presentations and Capstone  
Chair: Jan Byška
11:00 – 11:20
A Framework for Uncertainty-Aware Visual Analytics of Proteins
Robin G. C. Maack, Michael L. Raymer, Thomas Wischgoll, Hans Hagen, Christina Gillmann


Abstract: Due to the limitations of existing experimental methods for capturing stereochemical molecular data, there usually is an inherent level of uncertainty present in models describing the conformation of macromolecules. This uncertainty can originate from various sources and can have a significant effect on algorithms and decisions based upon such models. Incorporating uncertainty in state-of-the-art visualization approaches for molecular data is an important issue to ensure that scientists analyzing the data are aware of the inherent uncertainty present in the representation of the molecular data. In this work, we introduce a framework that allows biochemists to explore molecular data in a familiar environment while including uncertainty information within the visualizations. Our framework is based on an anisotropic description of proteins that can be propagated along with required computations, providing multiple views that extend prominent visualization approaches to visually encode uncertainty of atom positions, allowing interactive exploration. We show the effectiveness of our approach by applying it to multiple real-world datasets and gathering user feedback.

Presenter: Robin Maack

11:20 – 11:40
Topological analysis of density fields: an evaluation of segmentation methods
Alexei I Abrikosov, Talha bin Masood, Martin Falk, Ingrid Hotz


Abstract: Topological and geometric segmentation methods provide powerful concepts for detailed field analysis and visualization. However, when it comes to a quantitative analysis that requires highly accurate geometric segmentation, there is a large discrepancy between the promising theory and the available computational approaches. In this paper, we compare and evaluate various segmentation methods with the aim to identify and quantify the extent of these discrepancies. Thereby, we focus on an application from quantum chemistry: the analysis of electron density fields. It is a scalar quantity that can be experimentally measured or theoretically computed. In the evaluation we consider methods originating from the domain of quantum chemistry and computational topology. We apply the methods to the charge density of a set of crystals and molecules. Therefore, we segment the volumes into atomic regions and derive and compare quantitative measures such as total charge and dipole moments from these regions. As a result, we conclude that an accurate geometry determination can be crucial for correctly segmenting and analyzing a scalar field, here demonstrated on the electron density field.

Presenter: Alexei Abrikosov

11:40 – 12:35
Invited talk: On protein interactions – how to visually communicate them?
Barbora Kozlíková

Abstract: In molecular visualization, there are several specific challenges, spanning from understanding single molecular structures, their function, and behavior, to studying their interactivity with other molecular structures. In this talk, I will focus on the latter problem, interactions of protein structures and their visual representation. We will discuss the already existing approaches, as well as the most intrinsic problems that are still waiting to be addressed.

Presenter: Barbora Kozlíková
Dr. Barbora Kozlíková is an Associate Professor at the Masaryk University in Brno, Czech Republic, where she established and is heading Visitlab, the research laboratory focusing on designing visualizations for different application domains. One of the core topics of her research is the visualization and visual analysis of biomolecular structures, with a specific focus on molecular dynamics simulations, molecular docking, and molecular interactions. Dr. Kozlíková is also experimenting with virtual reality and its application in molecular modeling and education.

12:35 – 12:40
Closing
Jan Byška, Michael Krone

11:00 – 12:40
EnvirVis 2: Interactive Digital and Virtual Visualization Techniques  
Chair: Kathrin Feige
11:00 – 11:22
Digital Earth Viewer: a 4D visualisation platform for geoscience datasets & Spatiotemporal visualisation of a deep sea sediment plume dispersion experiment
Valentin Buck, Flemming Stäbler, Everardo González, Jens Greinert


Abstract: A comprehensive study of the Earth System and its different environments requires understanding of multi-dimensional data acquired with a multitude of different sensors or produced by various models. Here we present a component-wise scalable web-based framework for simultaneous visualisation of multiple data sources. It helps contextualise mixed observation and simulation data in time and space.

Presenter: Valentin Buck

11:22 – 11:44
Air Quality Temporal Analyser: Interactive temporal analyses with visual predictive assessments
Shubhi Harbola, Steffen Koch, Thomas Ertl, Volker Coors


Abstract: This work presents Air Quality Temporal Analyser (AQTA), an interactive system to support visual analyses of air quality data over time. This interactive AQTA allows the seamless integration of predictive models and detailed pattern analyses. While previous approaches lack predictive air quality options, this interface provides back-and-forth dialogue with the designed multiple Machine Learning (ML) models and comparisons for better visual predictive assessments. These models can be dynamically selected in real time, and the user can visually compare the results in different time conditions for chosen parameters. Moreover, AQTA provides data selection, display, visualisation of past, present, future (prediction) and correlation structure among air parameters, highlighting the predictive models' effectiveness. AQTA has been evaluated using Stuttgart (Germany) city air pollutants, i.e., Particulate Matter (PM10), Nitrogen Oxide (NO), Nitrogen Dioxide (NO2), and Ozone (O3), and meteorological parameters like pressure, temperature, wind and humidity. Initial findings are presented that corroborate the city's COVID lockdown (year 2020) conditions and sudden changes in patterns, highlighting the improvements in pollutant concentrations. AQTA thus successfully discovers temporal relationships among complex air quality data, interactively in different time frames, by harnessing the user's knowledge of factors influencing the past, present and future behavior, with the aid of ML models. Further, this study also reveals that a decrease in the concentration of one pollutant does not ensure that the surrounding air quality will improve, as other factors are interrelated.

Presenter: Shubhi Harbola

11:44 – 12:06
A Virtual Geographic Environment for the Exploration of Hydro-Meteorological Extremes
Karsten Rink, Özgür Ozan Sen, Marco Hannemann, Uta Ködel, Erik Nixdorf, Ute Weber, Ulrike Werban, Martin Schrön, Thomas Kalbacher, Olaf Kolditz


Abstract: We propose a Virtual Geographic Environment for the exploration of hydro-meteorological events. Focussing on the catchment of the Müglitz River in south-eastern Germany, a large collection of observation data acquired via a wide range of measurement devices has been integrated in a geographical reference frame for the region. Results of area-wide numerical simulations for both groundwater and soil moisture have been added to the scene and allow for the exploration of the delayed consequences of transient phenomena such as heavy rainfall events and their impact on the catchment scale. Implemented in a framework based on Unity, this study focusses on the concurrent visualisation and synchronised animation of multiple area-wide datasets from different environmental compartments. The resulting application allows users to explore the region of interest during specific hydrological events for an assessment of the interrelation of processes. As such, it offers the opportunity for knowledge transfer between researchers of different domains as well as for outreach to an interested public.

Presenter: Karsten Rink

12:06 – 12:28
Assessing the Geographical Structure of Species Richness Data with Interactive Graphics
Pauline Morgades, Aidan Slingsby, Justin Moat


Abstract: Understanding species richness is an important aspect of biodiversity studies and conservation planning, but varying collection effort often results in insufficient data to have a complete picture of species richness. Species accumulation curves can help assess collection completeness of species richness data, but these are usually considered by discrete area and do not consider the geographical structure of collection. We consider how these can be adapted to assess the geographical structure of species richness over geographical space. We design and implement two interactive visualisation approaches to help assess how species richness data varies over continuous geographical space. We propose these designs, critique them, report on the reactions of four ecologists and provide perspectives on their use for assessing geographical incompleteness in species richness.

Presenter: Pauline Morgades

12:28 – 12:40
Closing
Soumya Dutta, Kathrin Feige, Karsten Rink, Dirk Zeckzer

12:40 – 14:00 BREAK

14:00 – 15:20
EGPGV 3: Volumes  
Chair: Ken Moreland
14:00 – 14:25
Evaluation of PyTorch as a Data-Parallel Programming API for GPU Volume Rendering
Nathan X. Marshak, Pascal Grosset, Aaron Knoll, James Ahrens, Chris R. Johnson


Abstract: Data-parallel programming (DPP) has attracted considerable interest from the visualization community, fostering major software initiatives such as VTK-m. However, there has been relatively little recent investigation of data-parallel APIs in higher-level languages such as Python, which could help developers sidestep the need for low-level application programming in C++ and CUDA. Moreover, machine learning frameworks exposing data-parallel primitives, such as PyTorch and TensorFlow, have exploded in popularity, making them attractive platforms for parallel visualization and data analysis. In this work, we benchmark data-parallel primitives in PyTorch, and investigate its application to GPU volume rendering using two distinct DPP formulations: a parallel scan and reduce over the entire volume, and repeated application of data-parallel operators to an array of rays. We find that most relevant DPP primitives exhibit performance similar to a native CUDA library. However, our volume rendering implementation reveals that PyTorch is limited in expressiveness when compared to other DPP APIs. Furthermore, while render times are sufficient for an early “proof of concept”, memory usage acutely limits scalability.

Presenter: Nathan Marshak

14:25 – 14:55
Faster RTX-Accelerated Empty Space Skipping using Triangulated Active Region Boundary Geometry
Ingo Wald, Stefan Zellmann, Nate Morrical


Abstract: We describe a technique for GPU and RTX accelerated space skipping of structured volumes that improves on prior work by replacing clustered proxy boxes with a GPU-extracted triangle mesh that bounds the active regions. Unlike prior methods, our technique avoids costly clustering operations, significantly reduces data structure construction cost, and incurs less overhead when traversing active regions.

Presenter: Stefan Zellmann, Nate Morrical

14:55 – 15:20
Performance Tradeoffs in Shared-memory Platform Portable Implementations of a Stencil Kernel
Wes Bethel, Colleen Heinemann, Talita Perciano


Abstract: Building on a significant amount of current research that examines the idea of platform-portable parallel code across different types of processor families, this work focuses on two sets of related questions. First, using a performance analysis methodology that leverages multiple metrics including hardware performance counters and elapsed time on both CPU and GPU platforms, we examine the performance differences that arise when using two common platform portable parallel programming approaches, namely OpenMP and VTK-m, for a stencil-based computation, which serves as a proxy for many different types of computations in visualization and analytics. Second, we explore the performance differences that result when using coarser- and finer-grained parallelism approaches that are afforded by both OpenMP and VTK-m.

Presenter: Wes Bethel


14:00 – 15:15
EuroVA 3: Temporal Data and Clustering  
Chair: Cagatay Turkay
14:00 – 14:25
Towards the Detection and Visual Analysis of COVID-19 Infection Clusters
Dario Antweiler, David Sessler, Sebastian Ginzel, Jörn Kohlhammer


Abstract: A major challenge for departments of public health (DPHs) in dealing with the ongoing COVID-19 pandemic is tracing contacts in exponentially growing SARS-CoV2 infection clusters. Prevention of further disease spread requires a comprehensive registration of the connections between individuals and clusters. Due to the high number of infections with unknown origin, the healthcare analysts need to identify connected cases and clusters through accumulated epidemiological knowledge and the metadata of the infections in their database. Here we contribute a visual analytics framework to identify, assess and visualize clusters in COVID-19 contact tracing networks. Additionally, we demonstrate how graph-based machine learning methods can be used to find missing links between infection clusters and thus support the mission to get a comprehensive view on infection events. This work was developed through close collaboration with DPHs in Germany. We argue how our system supports the identification of clusters by public health experts and discuss ongoing developments and possible extensions.

Presenter: Dario Antweiler

14:25 – 14:50
LFPeers: Temporal Similarity Search in Covid-19 Data
Jan Burmeister, Jürgen Bernard, Jörn Kohlhammer


Abstract: While there is a wide variety of visualizations and dashboards to help understand the data of the Covid-19 pandemic, hardly any of these support important analytical tasks, especially of temporal attributes. In this paper, we introduce a general concept for the analysis of temporal and multimodal data and the system LFPeers that applies this concept to the analysis of countries in a Covid-19 dataset. Our concept divides the analysis in two phases: a search phase to find the most similar objects to a target object before a time point t0, and an exploration phase to analyze this subset of objects after t0. LFPeers targets epidemiologists and the public who want to learn from the Covid-19 pandemic and distinguish successful and ineffective measures.

Presenter: Jan Burmeister

14:50 – 15:15
Multi-resolution analysis for vector plots of time series data
Bao Dien Quoc Nguyen, Tommy Dang, Rattikorn Hewett


Abstract: This paper studies the use of vector plots in multivariate time series analysis. One drawback of vector plots is the lack of global temporal information. To ease the problem, we propose an interactive visualization supporting a multi-resolution view and integrating multiple linked visual metaphors in a novel combination. The method is applied to two real time series data sets to validate and demonstrate its benefits. The results show the potential of this approach in temporal data analysis.

Presenter: Bao Dien Quoc Nguyen


14:00 – 15:40
VisGap 1: Keynote and Presentations  
Chair: Guido Reina
14:00 – 14:05
Opening
Christina Gillmann, Michael Krone, Guido Reina, Thomas Wischgoll
14:05 – 15:00
Keynote: Lessons for Sustainable Visualization Systems Learned from the Inviwo Development
Timo Ropinski

Abstract: To enable both basic and applied research in visualization, it is essential to have access to reliable visualization systems. Only with such systems, an easy comparison with the state of the art as well as the exploitation of reusable components becomes possible. As the development and maintenance of such visualization systems lead to several challenges, I will address these in my talk and discuss how we have tackled them during the development of Inviwo (www.inviwo.org). Inviwo is a flexible visualization framework that is targeted to scientific visualization. It has been used in several research projects and industry projects, whereby diverse applications were supported through carefully designed usage abstraction scenarios. In this context, I will talk about technical and organizational challenges and derive a few lessons we have learned during the development process.

Presenter: Timo Ropinski
Timo Ropinski is a Professor in Visual Computing at Ulm University, Germany, where he is heading the Visual Computing Research Group. Before moving to Ulm, he was Professor in Interactive Visualization at Linköping University, Sweden. Timo holds a PhD from the University of Münster, Germany, where he also finished his Habilitation. His research interests lie in data visualization and visual data analysis. Together with his research group, Timo works on biomedical visualization techniques, rendering algorithms and deep learning models for spatial data. Most of the visualization related research projects are realized through own software frameworks, most prominently through the Inviwo Interactive Visualization Workshop. Inviwo was initiated in 2012, and is now primarily developed at Linköping University, Ulm University and KTH Royal Institute of Technology.

15:00 – 15:20
OSPRay Studio: Enabling Multi-workflow visualizations with OSPRay
Isha Sharma, Dave DeMarle, Alok Hota, Bruce Cherniak, Johannes Günther


Abstract: There are a number of established production-ready scientific visualization tools in the field today, including ParaView [Aya15], VisIt [CBW*11] and EnSight [ens]. However, often they come with well-defined core feature sets, established visual appearance characteristics, and steep learning curves – especially for software developers. They have vast differences with other rendering applications such as Blender or Maya (known for their high-quality rendering and 3D content creation uses) in terms of design and features, and have over time become monolithic in nature with difficult-to-customize workflows [UFK*89]. As such, a multi-purpose visualization solution for Scientific, Product, Architectural and Medical Visualization is hard to find. This is a gap we identify; and with this paper we present the idea of a minimal application called OSPRay Studio, with a flexible design to support high-quality physically-based rendering and scientific visualization workflows. We will describe the motivation, design philosophy, features, targeted use-cases and real-world applications along with future opportunities for this application.

Presenter: Isha Sharma

15:20 – 15:40
Property-Based Testing for Visualization Development
Michael Stegmaier, Dominik Engel, Jannik Olbrich, Timo Ropinski, Matthias Tichy


Abstract: As the testing capabilities of current visualization software fail to cover a large space of rendering parameters, we propose to use property-based testing to automatically generate a large set of tests with different parameter sets. By comparing the resulting renderings for pairs of different parameters, we can verify certain effects to be expected in the rendering upon change of a specific parameter. This allows for testing visualization algorithms with a large coverage of rendering parameters. Our proposed approach can also be used in a test-driven manner, meaning the tests can be defined alongside the actual algorithm. Lastly, we show that by integrating the proposed concepts into the existing regression testing pipeline of Inviwo, we can execute the property-based testing process in a continuous integration setup. To demonstrate our approach, we describe use cases where property-based testing can help to find errors during visualization development.

Presenter: Dominik Engel


14:00 – 15:35
MLVis 1: Introductory Talks and Tutorial  
Chair: Daniel Archambault
14:00 – 14:05
Introduction
Daniel Archambault, Ian Nabney, Jaakko Peltonen
14:05 – 15:05
Keynote: Visually supported exploration of multiple objectives
Torsten Möller

Abstract: When we build models, in Data Science or in Computational Science, we need to specify an objective function to find (or learn) a best model. However, in many scenarios there are multiple objectives one has to consider. Oftentimes, we simply specify a weighted average among relevant objective functions. In this talk I will report on how to deal with cases where such a weighting is not clear or possible: how can we deal with multiple objectives in model building? By creating a visual analysis workflow that complements algorithmic and mathematical analysis workflows, I will try to convince you that we are able to better understand multi-dimensional problems and solution spaces.

Presenter: Torsten Möller

15:05 – 15:15
Visualisation of Mixed Data Types
Ian Nabney
15:15 – 15:25
Dimensionality Reduction and Visual Interfaces
Jaakko Peltonen
15:25 – 15:35
Visual Analytics Solutions to Compartmental Modelling in Response to COVID-19
Daniel Archambault

14:00 – 15:40
Invited C&G Presentations  
Chair: Tobias Isenberg
14:00 – 14:20
Visception: An interactive visual framework for nested visualization design
Yngve Sekse Kristiansen, Stefan Bruckner


Abstract: Nesting is the embedding of charts into the marks of another chart. Related to principles such as Tufte’s rule of utilizing micro/macro readings, nested visualizations have been employed to increase information density, providing compact representations of multi-dimensional and multi-typed data entities. Visual authoring tools are becoming increasingly prevalent, as they make visualization technology accessible to non-expert users such as data journalists, but existing frameworks provide no or only very limited functionality related to the creation of nested visualizations. In this paper, we present an interactive visual approach for the flexible generation of nested multilayer visualizations. Based on a hierarchical representation of nesting relationships coupled with a highly customizable mechanism for specifying data mappings, we contribute a flexible framework that enables defining and editing data-driven multi-level visualizations. As a demonstration of the viability of our framework, we contribute a visual builder for exploring, customizing and switching between different designs, along with example visualizations to demonstrate the range of expression. The resulting system allows for the generation of complex nested charts with a high degree of flexibility and fluidity using a drag and drop interface.

Presenter: Yngve Kristiansen

14:20 – 14:40
On the perceptual influence of shape overlap on data-comparison using scatterplots
Christian van Onzenoodt, Anke Huckauf, Timo Ropinski


Abstract: Scatterplots can be used for a wide range of visual analysis tasks, for example comparing correlations or variances of clusters across potentially multiple classes of data, in order to find answers to higher-level questions. Comparing classes of data in one scatterplot demands additional visual channels to encode this dimension. While perception research suggests colors as rather perceptually dominant, other studies show that shapes can also be visually salient. However, with an increasing amount of data, overlapping shapes can cause perceptual difficulties and obscure data. Even though shapes in scatterplots have been investigated extensively, the overlap between these shapes has usually been avoided by using synthetic scatterplots. To overcome this limitation, we investigate the perceptual implications of overlap when comparing data using scatterplots through a series of crowd-sourced user studies. These studies include common visual analysis tasks, like comparing the number of points, comparing mean values, and determining the set of points that is more clustered. To support our investigations, we introduced and compared four metrics for overlap in scatterplots. Our results provide insight into the overlap in scatterplots, recommend combinations of shapes that are less prone to overlap, and outline how our metrics could be used to optimize future scatterplot design.

Presenter: Christian van Onzenoodt

14:40 – 15:00
CrossVis: A Visual Analytics System for Exploring Heterogeneous Multivariate Data with Applications to Materials and Climate Sciences
Chad A. Steed, John R. Goodall, Junghoon Chae, Artem Trofimov


Abstract: We present a new visual analytics system, called CrossVis, that allows flexible exploration of multivariate data with heterogeneous data types. After presenting the design requirements, which were derived from prior collaborations with domain experts, we introduce key features of CrossVis beginning with a tabular data model that coordinates multiple linked views and performance enhancements that enable scalable exploration of complex data. Next, we introduce extensions to the parallel coordinates plot, which include new axis representations for numerical, temporal, categorical, and image data, an embedded bivariate axis option, dynamic selections, focus+context axis scaling, and graphical indicators of key statistical values. We demonstrate the practical effectiveness of CrossVis through two scientific use cases; one focused on understanding neural network image classifications from a genetic engineering project and another involving general exploration of a large and complex data set of historical hurricane observations. We conclude with discussions regarding domain expert feedback, future enhancements to address limitations, and the interdisciplinary process used to design CrossVis.

Presenter: Chad Steed

15:00 – 15:20
Spectrum-preserving sparsification for visualization of big graphs
Martin Imre, Jun Tao, Yongyu Wang, Zhiqiang Zhao, Zhuo Feng, Chaoli Wang


Abstract: We present a novel spectrum-preserving sparsification algorithm for visualizing big graph data. Although spectral methods have many advantages, the high memory and computation costs due to the involved Laplacian eigenvalue problems could immediately hinder their applications in big graph analytics. In this paper, we introduce a practically efficient, nearly-linear time spectral sparsification algorithm for tackling real-world big graph data. Besides spectral sparsification, we further propose a node reduction scheme based on intrinsic spectral graph properties to allow more aggressive, level-of-detail simplification. To enable effective visual exploration of the resulting spectrally sparsified graphs, we implement spectral clustering and edge bundling. Our framework does not depend on a particular graph layout and can be integrated into different graph drawing algorithms. We experiment with publicly available graph data of different sizes and characteristics to demonstrate the efficiency and effectiveness of our approach. To further verify our solution, we quantitatively compare our method against different graph simplification solutions using a proxy quality metric and statistical properties of the graphs.

Presenter: Martin Imre

15:20 – 15:40
Descriptions and evaluations of methods for determining surface curvature in volumetric data
Jacob D. Hauenstein, Timothy S. Newman


Abstract: Three methods developed for determining surface curvature in volumetric data are described, including one convolution-based method, one fitting-based method, and one method that uses normal estimates to directly determine curvature. Additionally, a study of the accuracy and computational performance of these methods and prior methods is presented. The study considers synthetic data, noise-added synthetic data, and real data. Sample volume renderings using curvature-based transfer functions, where curvatures were determined with the methods, are also exhibited.

Presenter: Jacob Hauenstein


15:40 – 16:00 BREAK

16:00 – 17:25
EGPGV 4: Particles & Closing  
Chair: Will Usher
16:00 – 16:30
UnityPIC: Unity Point-Cloud Interactive Core
Yaocheng Wu, Jie Gong, Zhigang Zhu, Huy T. Vo


Abstract: In this work, we present Unity Point-Cloud Interactive Core, a novel interactive point cloud rendering pipeline for the Unity Development Platform. The goal of the proposed pipeline is to expedite the development process for point cloud applications by encapsulating the rendering process as a standalone component, while maintaining flexibility through an implementable interface. The proposed pipeline allows for rendering arbitrarily large point clouds with improved performance and visual quality. First, a novel dynamic batching scheme is proposed to address the adaptive point sizing problem for level-of-detail (LOD) point cloud structures. Then, an approximate rendering algorithm is proposed to reduce overdraw by minimizing the overall number of fragment operations through an intermediate occlusion culling pass. For the purpose of analysis, the visual quality of renderings is quantified and measured by comparing against a high-quality baseline. In the experiments, the proposed pipeline maintains above 90 FPS for a 20 million point budget while achieving greater than 90% visual quality during interaction when rendering a point-cloud with more than 20 billion points.

Presenter: Yaocheng Wu

16:30 – 17:00
Interactive Selection on Calculated Attributes of Large-Scale Particle
Benjamin Wollet, Stefan Reinhardt, Daniel Weiskopf, Bernhard Eberhardt


Abstract: We present a GPU-based technique for efficient selection in interactive visualizations of large particle datasets. In particular, we address multiple attributes attached to particles, such as pressure, density, or surface tension. Unfortunately, such intermediate attributes are often available only during the simulation run. They are either not accessible during visualization or have to be saved as additional information along with the usual simulation data. The latter increases the size of the dataset significantly, and the required variables may not be known in advance. Therefore, we choose to compute intermediate attributes on the fly. In this way, we are even able to obtain attributes that were not calculated by the simulation but may be relevant for data analysis or debugging. We present an interactive selection technique designed for such attributes. It leverages spatial regions of the selection to efficiently compute attributes only where needed. This lazy evaluation also works for intelligent and data-driven selection, extending the region to include neighboring particles. Our technique is evaluated by measurements of performance scalability and case studies for typical usage examples.

Presenter: Benjamin Wollet

17:00 – 17:25
Closing
Markus Hadwiger, Matthew Larssen, Filip Sadlo

16:00 – 17:30
EuroVA 4: Keynote & Closing  
Chair: Jörn Kohlhammer, Katerina Vrotsou, Jürgen Bernard
16:00 – 17:00
Keynote: A tool is not enough: research contributions through design study
Miriah Meyer
17:15 – 17:30
Closing
Jürgen Bernard, Katerina Vrotsou, Michael Behrisch

16:00 – 17:40
VisGap 2: Presentations and Keynote  
Chair: Christina Gillmann
16:00 – 16:20
Tools for Virtual Reality Visualization of Highly Detailed Meshes
Mark Bo Jensen, Egill Ingi Jacobsen, Jeppe Revall Frisvad, J. Andreas Bærentzen


Abstract: The number of polygons in meshes acquired using 3D scanning or by computational methods for shape generation is rapidly increasing. With this growing complexity of geometric models, new visualization modalities need to be explored for more effortless and intuitive inspection and analysis. Virtual reality (VR) is a step in this direction but comes at the cost of a tighter performance budget. In this paper, we explore different starting points for achieving high performance when visualizing large meshes in virtual reality. We explore two rendering pipelines and mesh optimization algorithms and find that a mesh shading pipeline shows great promise when compared to a normal vertex shading pipeline. We also test the VR performance of commonly used visualization tools (ParaView and Unity) and ray tracing running on the graphics processing unit (GPU). Finally, we find that mesh pre-processing is important to performance and that the specific type of pre-processing needed depends intricately on the choice of rendering pipeline.

Presenter: Mark Bo Jensen

16:20 – 16:40
The Gap between Visualization Research and Visualization Software in High-Performance Computing Centers
Tommy Dang, Ngan V. T. Nguyen, Jon Hass, Jie Li, Yong Chen, Alan Sill


Abstract: Visualizing and monitoring high-performance computing centers is a daunting task due to the systems’ complex and dynamic nature. Moreover, different users may have different requirements and needs. For example, computer scientists often need to manage jobs. System administrators need to monitor and manage the system. In this paper, we discuss the gap between visual monitoring research and practical applicability. We will start with the general requirements for managing high-performance computing centers and then share the experiences working with academic and industrial experts in this domain.

Presenter: Ngan Nguyen

16:40 – 17:35
The role of visualization in decision support systems – Differences between academia and industry
Benedikt Kämpgen

Abstract: In this presentation, Benedikt will look back at working on decision support systems for over 10 years, half of which in academia, the other in industry. He will specifically try to answer the question of what it takes to have visualization approaches applied in either work environment, and what are the differences thereof.

Presenter: Benedikt Kämpgen

17:35 – 17:40
Closing
Christina Gillmann, Michael Krone, Guido Reina, Thomas Wischgoll

16:00 – 17:40
MLVis 2: Papers and Panel  
Chair: Ian Nabney
16:00 – 16:25
Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts
Jay Roberts, Theodoros Tsiligkaridis


Abstract: Explaining the predictions of a deep neural network (DNN) in image classification is an active area of research. Many methods focus on localizing pixels, or groups of pixels, which maximize a relevance metric for the prediction. Others aim at creating local “proxy” explainers which aim to account for an individual prediction of a model. We aim to explore “why” a model made a prediction by perturbing inputs to robust classifiers and interpreting the semantically meaningful results. For such an explanation to be useful for humans it is desirable for it to be sparse; however, generating sparse perturbations can be computationally expensive and infeasible on high-resolution data. Here we introduce controllably sparse explanations that can be efficiently generated on higher-resolution data to provide improved counter-factual explanations. Further, we use these controllably sparse explanations to probe what the robust classifier has learned. These explanations could provide insight for model developers as well as assist in detecting dataset bias.

Presenter: Jay Roberts

16:25 – 16:50
Revealing Multimodality in Ensemble Weather Prediction
Natacha Galmiche, Helwig Hauser, Thomas Spengler, Clemens Spensberger, Morten Brun, Nello Blaser


Abstract: Ensemble methods are widely used to simulate complex non-linear systems and to estimate forecast uncertainty. However, visualizing and analyzing ensemble data is challenging, in particular when multimodality arises, i.e., distinct likely outcomes. We propose a graph-based approach that explores multimodality in univariate ensemble data from weather prediction. Our solution utilizes clustering and a novel concept of life span associated with each cluster. We applied our method to historical predictions of extreme weather events and illustrate that our method aids the understanding of the respective ensemble forecasts.

Presenter: Natacha Galmiche
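The paper's graph-based, life-span approach is its own contribution; as background, a minimal way to detect multimodality in a univariate ensemble is to count the peaks of a kernel density estimate. The `count_modes` helper and its fixed bandwidth are illustrative assumptions, not the authors' method.

```python
import numpy as np

def count_modes(samples, grid_size=256, bandwidth=0.5):
    """Count local maxima of a Gaussian kernel density estimate.
    A count above 1 signals a multimodal ensemble."""
    xs = np.linspace(samples.min() - 1.0, samples.max() + 1.0, grid_size)
    d = (xs[:, None] - samples[None, :]) / bandwidth
    dens = np.exp(-0.5 * d ** 2).sum(axis=1)   # unnormalized KDE on the grid
    interior = dens[1:-1]
    peaks = (interior > dens[:-2]) & (interior > dens[2:])
    return int(peaks.sum())
```

A bimodal ensemble (e.g., members clustered around two distinct outcomes) yields a count of 2, a unimodal one a count of 1; the bandwidth controls how distinct two outcomes must be to register as separate modes.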

16:50 – 17:30
Live Panel with Presenters
Jay Roberts, Natacha Galmiche, Daniel Archambault, Ian Nabney, Jaakko Peltonen
17:30 – 17:40
Closing
Daniel Archambault, Ian Nabney, Jaakko Peltonen

17:40 – 18:00 BREAK


Tuesday, 15 June, 2021

09:00 – 10:40
Opening and Keynote  
Chair: Robert S Laramee
09:00 – 09:45
Opening
09:45 – 10:40
Visualization is where Information Theory Meets Psychology  
Min Chen

Abstract: Building a theoretical foundation for visualization and visual analytics is a collective responsibility of the community of visualization and visual analytics (VIS). There are many pathways for making contributions to this endeavour, including through the observation, development, and evaluation of practical VIS applications. In this talk, the speaker will focus on one particular pathway that connects VIS with information theory and psychology. One can anticipate such connections easily since all VIS processes deal with information while involving human perception and cognition. In most applications of information theory, such as communication, compression, and encryption, encoders and decoders are developed as pairs of machine-centric solutions. VIS offers an intriguing platform for studying phenomena and developing applications that feature machine-centric encoders and human-centric decoders, providing opportunities for advancing information theory. Meanwhile, any improvement of our fundamental understanding of visualization processes and visual analytics workflows — through information theory or any helpful theoretical development — will likely inform theoretical discourse in psychology. It is our ambition as well as obligation to find theories that can explain and measure phenomena in VIS, and predict the cost-benefit and guide the optimization of visual designs and visual analytics workflows. Hopefully such VIS theories will inspire further advancement in other disciplines including information theory and psychology.

Presenter: Min Chen
Min Chen developed his academic career in Wales between 1984 and 2011. He is currently Professor of Scientific Visualization at Oxford University and a fellow of Pembroke College. His research interests include many aspects of data science in general, and visualization and visual analytics in particular. He has co-authored over 200 publications, including his recent contributions in areas such as theory of visualization, visual analytics for machine learning, and perception and cognition in visualization. He has worked on a broad spectrum of interdisciplinary research topics, ranging from the sciences to sports, and from digital humanities to cybersecurity. His services to the research community include papers co-chair of IEEE Visualization 2007 and 2008, Eurographics 2011, IEEE VAST 2014 and 2015; co-chair of Volume Graphics 1999 and 2006, EuroVis 2014; associate editor-in-chief of IEEE Transactions on Visualization and Computer Graphics; editor-in-chief of Computer Graphics Forum; and co-director of Wales Research Institute of Visual Computing. He is a fellow of British Computer Society, European Computer Graphics Association, and Learned Society of Wales. URL: https://sites.google.com/site/drminchen/

Session Chair: Robert S Laramee


10:40 – 11:00 BREAK

11:00 – 12:40
Full Papers 1: Social Science, Security and Accessibility  
Chair: Wolfgang Aigner
11:00 – 11:25
VEHICLE: Validation and Exploration of the Hierarchical Integration of Conflict Event Data
Benedikt Mayer, Kai Lawonn, Karsten Donnay, Bernhard Preim, Monique Meuschke


Abstract: The exploration of large-scale conflicts, as well as their causes and effects, is an important aspect of socio-political analysis. Since event data related to major conflicts are usually obtained from different sources, researchers developed a semi-automatic matching algorithm to integrate event data of different origins into one comprehensive dataset using hierarchical taxonomies. The validity of the corresponding integration results is not easy to assess since the results depend on user-defined input parameters and the relationships between the original data sources. However, only rudimentary visualization techniques have been used so far to analyze the results, allowing no trustworthy validation or exploration of how the final dataset is composed. To overcome this problem, we developed VEHICLE, a web-based tool to validate and explore the results of the hierarchical integration. For the design, we collaborated with a domain expert to identify the underlying domain problems and derive a task and workflow description. The tool combines both traditional and novel visual analysis techniques, employing statistical and map-based depictions as well as advanced interaction techniques. We showed the usefulness of VEHICLE in two case studies and by conducting an evaluation together with conflict researchers, confirming domain hypotheses and generating new insights.

Presenter: Benedikt Mayer

11:25 – 11:50
Topography of Violence: Considerations for Ethical and Collaborative Visualization Design
Fabian Ehmel, Viktoria Brüggemann, Marian Dörk


Abstract: Based on a collaborative visualization design process involving sensitive historical data and historiographical expertise, we investigate the relevance of ethical principles in visualization design. While fundamental ethical norms like truthfulness and accuracy are already well-described and common goals in visualization design, datasets that are accompanied by specific ethical concerns need to be processed and visualized with an additional level of carefulness and thought. There has been little research on adequate visualization design incorporating such considerations. To address this gap we present insights from Topography of Violence, a visualization project with the Jewish Museum Berlin that focuses on a dataset of more than 4,500 acts of violence against Jews in Germany between 1930 and 1938. Drawing from the joint project, we develop an approach to the visualization of sensitive data, which features both conceptual and procedural considerations for visualization design. Our findings provide value for both visualization researchers and practitioners by highlighting challenges and opportunities for ethical data visualization.

Presenter: Fabian Ehmel

11:50 – 12:15
CommAID: Visual Analytics for Communication Analysis through Interactive Dynamics Modeling
Maximilian T. Fischer, Daniel Seebacher, Rita Sevastjanova, Daniel Keim, Mennatallah El-Assady


Abstract: Communication consists of both meta-information as well as content. Currently, the automated analysis of such data often focuses either on the network aspects via social network analysis or on the content, utilizing methods from text-mining. However, the first category of approaches does not leverage the rich content information, while the latter ignores the conversation environment and the temporal evolution, as evident in the meta-information. Contrary to communication research, which stresses the importance of a holistic approach, both aspects are rarely applied simultaneously, and consequently, their combination has not yet received enough attention in automated analysis systems. In this work, we aim to address this challenge by discussing the difficulties and design decisions of such a path as well as contributing CommAID, a blueprint for a holistic strategy to communication analysis. It features an integrated visual analytics design to analyze communication networks through dynamics modeling, semantic pattern retrieval, and a user-adaptable and problem-specific machine learning-based retrieval system. An interactive multi-level matrix-based visualization facilitates a focused analysis of both network and content using inline visuals supporting cross-checks and reducing context switches. We evaluate our approach in both a case study and through a formative evaluation with eight law enforcement experts using a real-world communication corpus. Results show that our solution surpasses existing techniques in terms of integration level and applicability. With this contribution, we aim to pave the path for a more holistic approach to communication analysis.

Presenter: Max Fischer

12:15 – 12:40
ProBGP: Progressive Visual Analytics of Live BGP Updates
Alex Ulmer, David Sessler, Jörn Kohlhammer


Abstract: The global routing network is the backbone of the Internet. However, it is quite vulnerable to attacks that cause major disruptions or routing manipulations. Prior related works have visualized routing path changes with node-link diagrams, but it requires strong domain expertise to understand if a routing change between autonomous systems is suspicious. Geographic visualization has an advantage over conventional node-link diagrams by helping uncover such suspicious routes, as the user can immediately see if a path is the shortest path to the target or an unreasonable detour. In this paper, we present ProBGP, a web-based progressive approach to visually analyze BGP update routes. We created a novel progressive data processing algorithm for the geographic approximation of autonomous systems and combined it with a progressively updating visualization. While the newest log data is continuously loaded, our approach also allows querying the entire log recordings since 1999. We present the usefulness of our approach with a real use case of a major route leak from June 2019. We report on multiple interviews with domain experts throughout the development. Finally, we evaluated our algorithm quantitatively against a public peering database and qualitatively against AS network maps.

Presenter: Alex Ulmer


12:40 – 14:00 BREAK

14:00 – 14:50
Full Papers 2: Best Papers  
Chair: Rita Borgo, G. Elisabeta Marai, Tatiana von Landesberger
14:00 – 14:25
Color Nameability Predicts Inference Accuracy in Spatial Visualizations
Khairi Reda, Amey A Salvi, Jack Gray, Michael E. Papka


Abstract: Color encoding is foundational to visualizing quantitative data. Guidelines for colormap design have traditionally emphasized perceptual principles, such as order and uniformity. However, colors also evoke cognitive and linguistic associations whose role in data interpretation remains underexplored. We study how two linguistic factors, name salience and name variation, affect people’s ability to draw inferences from spatial visualizations. In two experiments, we found that participants are better at interpreting visualizations when viewing colors with more salient names (e.g., prototypical ‘blue’, ‘yellow’, and ‘red’ over ‘teal’, ‘beige’, and ‘maroon’). The effect was robust across four visualization types, but was more pronounced in continuous (e.g., smooth geographical maps) than in similar discrete representations (e.g., choropleths). Participants’ accuracy also improved as the number of nameable colors increased, although the latter had a less robust effect. Our findings suggest that color nameability is an important design consideration for quantitative colormaps, and may even outweigh traditional perceptual metrics. In particular, we found that the linguistic associations of color are a better predictor of performance than the perceptual properties of those colors. We discuss the implications and outline research opportunities. The data and materials for this study are available at https://osf.io/asb7n

Presenter: Khairi Reda

14:25 – 14:50
What are Table Cartograms Good for Anyway? An Algebraic Analysis
Andrew M McNutt


Abstract: Unfamiliar or esoteric visual forms arise in many areas of visualization. While such forms can be intriguing, it can be unclear how to make effective use of them without long periods of practice or costly user studies. In this work we analyze the table cartogram: a graphic which visualizes tabular data by bringing the areas of a grid of quadrilaterals into correspondence with the input data, like a heat map that has been “area-ed” rather than colored. Despite having existed for several years, little is known about its appropriate usage. We address this gap by using Algebraic Visualization Design to show that table cartograms are best suited to relatively small tables with ordinal axes for some comparison and outlier identification tasks. In doing so we demonstrate a discount theory-based analysis that can be used to cheaply determine best practices for unknown visualizations.

Presenter: Andrew McNutt


14:50 – 15:00 BREAK

15:00 – 15:40
Industrial Keynote: Industrial Keynote from Intel  
Chair: Renato Pajarola
15:00 – 15:40
High Performance Intel Ray Tracing accelerating time to visual realization
Jim Jeffers, Sr. Principal Engineer, Sr. Director

Abstract: In this session you’ll hear how the Intel Rendering Toolkit enables modern graphics applications that scale to create amazing, hyper-realistic renderings via ray tracing for the largest datasets. Intel’s open platform approach delivers high-performance, high-fidelity rendering via a family of cost-efficient open source libraries to tackle the most demanding renders.

Presenter: Jim Jeffers


15:40 – 16:00 BREAK

16:00 – 17:40
Posters  
Chair: Jan Byška, Stefan Jänicke, Johanna Schmidt
16:00 – 16:05
Elastic Tree Layouts for Interactive Exploration of Mentorship
Xinyuan Yan, Yifang Ma

Abstract: Mentorship is an important collaborative relationship among scholars. The existing tools to visualize it mainly suffer from wasted space, a lack of overview representation, and limited display of attribute information. To solve these problems, we propose a novel elastic tree layout based on node-link diagrams, in which nodes and edges are represented as elastic rectangles and bands, respectively. By stretching, compressing, aggregating, and expanding nodes and edges, we can (1) get a compact tree layout with high space-efficiency, (2) display both the detailed subtree and the compressed context in a single view, and (3) use labeling, charts, and node opacity to show multiple attributes. Besides, we designed various animated interactions to facilitate the exploration.

Presenter: Xinyuan Yan

16:05 – 16:10
SimBaTex: Similarity-based Text Exploration
Daniel Witschard, Ilir Jusufi, Andreas Kerren

Abstract: Natural language processing in combination with visualization can provide efficient ways to discover latent patterns of similarity which can be useful for exploring large sets of text documents. In this poster abstract, we describe the ongoing work on a visual analytics application, called SimBaTex, which is based on embedding technology, dynamic specification of similarity criteria, and a novel approach for similarity-based clustering. The goal of SimBaTex is to provide search-and-explore functionality to enable the user to identify items of interest in a large set of text documents by interactive assessment of both high-level similarity patterns and pairwise similarity of chosen texts.

Presenter: Daniel Witschard

16:10 – 16:15
Towards a Collaborative Experimental Environment for Graph Visualization Research in Virtual Reality
David Heidrich, Annika Meinecke, Andreas Schreiber

Abstract: Graph visualizations benefit from virtual reality (VR) technology and collaborative environments. However, implementing collaborative graph visualizations can be very resource-consuming, and existing prototypes cannot be reused easily. We present a work-in-progress collaborative experimental environment for graph visualization research in VR, which is highly modular, contains all fundamental functionality of a collaborative graph visualization, and provides common interaction techniques. Our environment enables researchers to create and evaluate modules in the same environment for a wide range of experiments.

Presenter: David Heidrich

16:15 – 16:20
Online Study of Word-Sized Visualizations in Social Media
Franziska Huth, Miriam Awad-Mohammed, Johannes Knittel, Tanja Blascheck, Petra Isenberg

Abstract: We report on an online study that compares three different representations to show topic diversity in social media threads: a word-sized visualization, a background color, and a text representation. Our results do not provide significant evidence that people gain knowledge about topic diversity with word-sized visualizations faster than with the other two conditions. Further, participants who were shown word-sized visualizations performed tasks with equally few or only slightly fewer errors.

Presenter: Franziska Huth

16:20 – 16:25
Unfolding Edges for Exploring Multivariate Edge Attributes in Graphs
Mark-Jan Bludau, Marian Dörk, Christian Tominski

Abstract: With this research we present an approach to network visualization that expands the capabilities for visual encoding and interactive exploration through edges in node-link diagrams. Compared to the various possibilities for visual and interactive properties of nodes, there are few techniques for interactive visualization of multivariate edge attributes in node-link diagrams. Visualization of edge attributes is oftentimes limited by the occlusion and space issues of methods that globally encode attributes in a node-link diagram for all edges, not sufficiently exploiting the potential of interaction. Building on existing techniques for edge encoding and interaction, we propose ‘Unfolding Edges’ as an exemplary use of an on-demand detail-enhancing approach for exploration of multivariate edge attributes.

Presenter: Mark-Jan Bludau

16:25 – 17:40
Poster Session in Topia

17:40 – 18:00 BREAK


Wednesday, 16 June, 2021

09:00 – 10:15
Full Papers 3: Multivariate Data & Dimension Reduction  
Chair: Stefan Bruckner
09:00 – 09:25
Exploring Multi-dimensional Data via Subset Embedding
Peng Xie, Wenyuan Tao, Jie Li, Wentao Huang, Siming Chen


Abstract: Multi-dimensional data exploration is a classic research topic in visualization. Most existing approaches are designed for identifying record patterns in dimensional space or subspace. In this paper, we propose a visual analytics approach to exploring subset patterns. The core of the approach is a subset embedding network (SEN) that represents a group of subsets as uniformly-formatted embeddings. We implement the SEN as multiple subnets with separate loss functions. The design enables handling arbitrary subsets and capturing the similarity of subsets on single features, thus achieving accurate pattern exploration, which in most cases means searching for subsets having similar values on a few features. Moreover, each subnet is a fully-connected neural network with one hidden layer. The simple structure brings high training efficiency. We integrate the SEN into a visualization system that achieves a 3-step workflow. Specifically, analysts (1) partition the given dataset into subsets, (2) select portions in a projected latent space created using the SEN, and (3) determine the existence of patterns within selected subsets. Generally, the system combines visualizations, interactions, automatic methods, and quantitative measures to balance exploration flexibility and operation efficiency, and to improve the interpretability and faithfulness of the identified patterns. Case studies and quantitative experiments on multiple open datasets demonstrate the general applicability and effectiveness of our approach.

Presenter: Peng Xie

09:25 – 09:50
Guided Stable Dynamic Projections
Eduardo Faccin Vernier, João Comba, Alexandru Telea


Abstract: Projections attempt to convey the relationships and similarity of data points from a high dimensional dataset into a lower dimensional representation. Most projection techniques are designed for static data. When used for time-dependent data, they usually fail to create a stable and suitable low dimensional representation. We propose two new dynamic projection methods (PCD-tSNE and LD-tSNE) based on the idea of using global guides to steer projection points. This avoids unstable movement that hinders the ability to reason about high dimensional dynamics while keeping t-SNE’s neighborhood preservation ability. PCD-tSNE scores a good balance between stability, neighborhood preservation, and distance preservation, while LD-tSNE allows us to create stable and customizable projections. We demonstrate our methods by comparing them to 11 other techniques using quality metrics and datasets provided by a recent benchmark for dynamic projections.

Presenter: Eduardo Vernier

09:50 – 10:15
Texture Browser: Feature-based Texture Exploration
Xuejiao Luo, Leonardo Scandolo, Elmar Eisemann


Abstract: Texture is a key characteristic in the definition of the physical appearance of an object and a crucial element in the creation process of 3D artists. However, retrieving a texture that matches an intended look from an image collection is difficult. Contrary to most photo collections, for which object recognition has proven quite useful, syntactic descriptions of texture characteristics are not straightforward, and even creating appropriate metadata is a very difficult task. In this paper, we propose a system to help explore large unlabeled collections of texture images. The key insight is that spatially grouping textures sharing similar features can simplify navigation. Our system uses a pre-trained convolutional neural network to extract high-level semantic image features, which are then mapped to a 2-dimensional location using an adaptation of t-SNE, a dimensionality-reduction technique. We describe an interface to visualize and explore the resulting distribution and provide a series of enhanced navigation tools, our prioritized t-SNE, scalable clustering, and multi-resolution embedding, to further facilitate exploration and retrieval tasks. Finally, we also present the results of a user evaluation that demonstrates the effectiveness of our solution.

Presenter: Xuejiao Luo


09:00 – 10:40
STARs 1: Machine Learning and Networks  
Chair: Natalia Andrienko
09:00 – 09:50
Survey of Evaluations in Human-Centered Machine Learning: Dimensions for Measuring Trust, Interpretability & Explainability
Fabian Sperrle, Mennatallah El-Assady, Grace Guo, Rita Borgo, Duen Horng Chau, Alex Endert, Daniel Keim


Abstract: Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human-centered machine learning. We particularly focus on human-related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human-computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.

Presenter: Fabian Sperrle

09:50 – 10:40
Visualizing and Interacting with Geospatial Networks: A Survey and Design Space
Sarah Schöttler, Yalong Yang, Hanspeter Pfister, Benjamin Bach


Abstract: This paper surveys visualization and interaction techniques for geospatial networks from a total of 95 papers. Geospatial networks are graphs where nodes and links can be associated with geographic locations. Examples include social networks, trade and migration, as well as traffic and transport networks. Visualizing geospatial networks poses numerous challenges around the integration of both network and geographical information as well as additional information such as node and link attributes, time, and uncertainty. Our overview analyses existing techniques along four dimensions: (i) the representation of geographical information, (ii) the representation of network information, (iii) the visual integration of both, and (iv) the use of interaction. These four dimensions allow us to discuss techniques with respect to the trade-offs they make between showing information across all these dimensions and how they solve the problem of showing as much information as necessary while maintaining readability of the visualization. https://geonetworks.github.io.

Presenter: Sarah Schöttler


10:40 – 11:00 BREAK

11:00 – 12:40
Full Papers 4: Volume and Vector Computing and Representation  
Chair: Hamish Carr
11:00 – 11:25
Local Extraction of 3D Time-Dependent Vector Field Topology
Lutz Hofmann, Filip Sadlo


Abstract: We present an approach to local extraction of 3D time-dependent vector field topology. In this concept, Lagrangian coherent structures, which represent the separating manifolds in time-dependent transport, correspond to generalized streak manifolds seeded along hyperbolic path surfaces (HPSs). Instead of expensive and numerically challenging direct computation of the HPSs by intersection of ridges in the forward and backward finite-time Lyapunov exponent (FTLE) fields, our approach employs local extraction of respective candidates in the four-dimensional space-time domain. These candidates are subsequently refined toward the hyperbolic path surfaces, which provides unsteady equivalents of saddle-type critical points, periodic orbits, and bifurcation lines from steady, traditional vector field topology. In contrast to FTLE-based methods, we obtain an explicit geometric representation of the topological skeleton of the flow, which for steady flows coincides with the hyperbolic invariant manifolds of vector field topology. We evaluate our approach on analytical flows, as well as data from computational fluid dynamics, using the FTLE as a ground truth superset, i.e., we also show that FTLE ridges exhibit several types of false positives.

Presenter: Lutz Hofmann
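For readers unfamiliar with the FTLE baseline the abstract contrasts against, a minimal forward-FTLE computation on a simple analytic 2D steady flow might look like the sketch below. The cellular flow, seed grid, and Euler integrator are illustrative choices only; the paper itself extracts hyperbolic path surfaces locally in 4D space-time rather than computing FTLE ridges.

```python
import numpy as np

def velocity(p, t):
    """Simple steady analytic cellular flow (illustrative stand-in)."""
    x, y = p[..., 0], p[..., 1]
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return np.stack([u, v], axis=-1)

def ftle(xs, ys, T, steps=50):
    """Forward FTLE: advect a seed grid, differentiate the flow map,
    and take the log of its largest singular value, scaled by 1/|T|."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    p = np.stack([X, Y], axis=-1).astype(float)
    dt = T / steps
    for i in range(steps):
        p = p + dt * velocity(p, i * dt)        # Euler advection
    hx, hy = xs[1] - xs[0], ys[1] - ys[0]
    # Flow-map gradient F via central differences on the seed grid.
    f11 = np.gradient(p[..., 0], hx, axis=0)
    f12 = np.gradient(p[..., 0], hy, axis=1)
    f21 = np.gradient(p[..., 1], hx, axis=0)
    f22 = np.gradient(p[..., 1], hy, axis=1)
    # Largest eigenvalue of the Cauchy-Green tensor F^T F.
    a = f11 ** 2 + f21 ** 2
    b = f11 * f12 + f21 * f22
    c = f12 ** 2 + f22 ** 2
    lam = 0.5 * (a + c + np.sqrt((a - c) ** 2 + 4.0 * b ** 2))
    return np.log(np.sqrt(np.maximum(lam, 1e-12))) / abs(T)
```

Ridges of this scalar field approximate the separating manifolds; the paper's point is that such fields give only implicit geometry, whereas their local extraction yields an explicit topological skeleton.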

11:25 – 11:50
Parameterized Splitting of Summed Volume Tables
Christian Reinbold, Rüdiger Westermann


Abstract: Summed Volume Tables (SVTs) allow one to compute integrals over the data values in any cubical area of a three-dimensional orthogonal grid in constant time, and they are especially interesting for building spatial search structures for sparse volumes. However, SVTs become extremely memory-consuming due to the large values they need to store; for a dataset of n values an SVT requires O(n log n) bits. The 3D Fenwick tree allows recovering the integral values in O(log^3 n) time, at a memory consumption of O(n) bits. We propose an algorithm that generates SVT representations that can flexibly trade speed for memory: from similar characteristics as SVTs, over equal memory consumption as 3D Fenwick trees at significantly lower computational complexity, to even further reduced memory consumption at the cost of raising computational complexity. For a 641 × 9601 × 9601 binary dataset the algorithm can generate an SVT representation that requires 27.0 GB and 46×8 data fetch operations to retrieve an integral value, compared to 27.5 GB and 1521×8 fetches for 3D Fenwick trees, a decrease in fetches of 97%. A full SVT requires 247.6 GB and 8 fetches per integral value. We present a novel hierarchical approach to compute and store intermediate prefix sums of SVTs, so that any prescribed memory consumption between O(n) bits and O(n log n) bits is achieved. We evaluate the performance of the proposed algorithm in a number of examples considering large volume data, and we perform comparisons to existing alternatives.

Presenter: Christian Reinbold
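As background for the trade-off discussed above, a plain (non-split) summed volume table and its constant-time, 8-fetch box query can be sketched as follows. This is the classic structure the paper starts from, not its hierarchical splitting; note that each table entry must hold a full prefix sum, which is where the O(n log n)-bit footprint comes from.

```python
import numpy as np

def build_svt(vol):
    """Summed volume table with a zero border: svt[i, j, k] is the sum
    of vol[:i, :j, :k]."""
    s = vol.cumsum(0).cumsum(1).cumsum(2)
    return np.pad(s, ((1, 0), (1, 0), (1, 0)))

def box_sum(svt, lo, hi):
    """Sum over vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] using the
    classic 8-fetch inclusion-exclusion (hi is exclusive)."""
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    return (svt[x1, y1, z1]
            - svt[x0, y1, z1] - svt[x1, y0, z1] - svt[x1, y1, z0]
            + svt[x0, y0, z1] + svt[x0, y1, z0] + svt[x1, y0, z0]
            - svt[x0, y0, z0])
```

The paper's parameterized splitting replaces some of these full prefix sums with smaller intermediate sums, trading the 8 fetches for more fetches at lower memory cost.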

11:50 – 12:15
Compressive Neural Representations of Volumetric Scalar Fields
Yuzhe Lu, Kairong Jiang, Joshua A Levine, Matthew Berger


Abstract: We present an approach for compressing volumetric scalar fields using implicit neural representations. Our approach represents a scalar field as a learned function, wherein a neural network maps a point in the domain to an output scalar value. By setting the number of weights of the neural network to be smaller than the input size, we achieve compressed representations of scalar fields, thus framing compression as a type of function approximation. Combined with carefully quantizing network weights, we show that this approach yields highly compact representations that outperform state-of-the-art volume compression approaches. The conceptual simplicity of our approach enables a number of benefits, such as support for time-varying scalar fields, optimizing to preserve spatial gradients, and random-access field evaluation. We study the impact of network design choices on compression performance, highlighting how simple network architectures are effective for a broad range of volumes.

Presenter: Yuzhe Lu
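The core idea of an implicit neural representation, a small network mapping a domain point to a scalar so that the network weights become the compressed volume, can be sketched with a toy NumPy MLP. The architecture, sizes, training loop, and stand-in field below are illustrative assumptions; the paper's networks and its weight quantization differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar field sampled on an 8x8x8 grid, coordinates in [-1, 1].
n = 8
axes = [np.linspace(-1.0, 1.0, n)] * 3
coords = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)
values = np.sin(np.pi * coords).prod(axis=1)   # stand-in "volume"

# Tiny MLP (3 -> 16 -> 1): far fewer weights than the 512 voxels,
# so fitting it is a form of lossy compression.
h = 16
W1 = rng.normal(0.0, 1.0, (3, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.1, (h, 1)); b2 = np.zeros(1)

losses = []
lr = 0.05
for _ in range(500):
    z = np.tanh(coords @ W1 + b1)              # forward pass
    pred = (z @ W2 + b2).ravel()
    err = pred - values
    losses.append(float((err ** 2).mean()))
    g = 2.0 * err[:, None] / len(err)          # dL/dpred
    gW2 = z.T @ g; gb2 = g.sum(0)
    gz = (g @ W2.T) * (1.0 - z ** 2)           # back through tanh
    gW1 = coords.T @ gz; gb1 = gz.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2             # gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1
```

Because the representation is a function, the field can be evaluated at any point (random access) and at any resolution, two of the benefits the abstract highlights.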

12:15 – 12:40
Thin-Volume Visualization on Curved Domains
Felix Herter, Hans-Christian Hege, Markus Hadwiger, Verena Lepper, Daniel Baum


Abstract: Thin, curved structures occur in many volumetric datasets. Their analysis using classical volume rendering is difficult because parts of such structures can bend away or hide behind occluding elements. This problem cannot be fully compensated by effective navigation alone, because structure-adapted navigation in the volume is cumbersome and only parts of the structure are visible in each view. We solve this problem by rendering a spatially transformed view into the volume so that an unobscured visualization of the entire curved structure is obtained. As a result, simple and intuitive navigation becomes possible. The domain of the spatial transform is defined by a triangle mesh that is topologically equivalent to an open disc and that approximates the structure of interest. The rendering is based on ray-casting in which the rays traverse the original curved sub-volume. In order to carve out volumes of varying thickness, the lengths of the rays as well as the position of the mesh vertices can be easily modified in a view-controlled manner by interactive painting. We describe a prototypical implementation and demonstrate the interactive visual inspection of complex structures from digital humanities, biology, medicine, and materials science. Displaying the structure as a whole enables simple inspection of interesting substructures in their original spatial context. Overall, we show that transformed views utilizing ray-casting-based volume rendering supported by guiding surface meshes and supplemented by local, interactive modifications of ray lengths and vertex positions represent a simple but versatile approach to effectively visualize thin, curved structures in volumetric data.

Presenter: Felix Herter


11:00 – 12:40
Short Papers 1: Machine Learning & SciVis Applications  
Chair: Michaël J. Aupetit
11:00 – 11:20
Loss-contribution-based in situ visualization for neural network training
Teng-Yok Lee


Abstract: This paper presents an in situ visualization algorithm for neural network training. As each training data item leads to multiple hidden variables when being forward-propagated through a neural network, our algorithm first estimates how much each hidden variable contributes to the training loss. Based on linear approximation, we can approximate the contribution mainly from the forward-propagated value and the backward-propagated derivative per hidden variable, both of which are available during training at no extra cost. By aggregating the loss contribution of hidden variables per data item, we can detect difficult data items that contribute most to the loss, which can be ambiguous or even incorrectly labeled. For convolutional neural networks (CNNs) with images as inputs, we extend the estimation of loss contribution to measure how different image areas impact the loss, which can be visualized over time to see how a CNN evolves to handle ambiguous images.

Presenter: Teng-Yok Lee
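The linear approximation described above, combining the forward-propagated value with the backward-propagated derivative, can be illustrated on a case where the linearization is exact. The helper name and the toy linear loss are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def loss_contribution(activation, grad):
    """First-order estimate of how much a hidden variable contributes
    to the loss: the change in loss from zeroing the activation,
    linearized as activation * dL/d(activation)."""
    return activation * grad

# Sanity check on a model where the linearization is exact:
# L = w . h, so dL/dh = w and h * dL/dh recovers each term of the loss.
h = np.array([1.0, 2.0, 3.0])       # hidden activations (forward pass)
w = np.array([0.5, -1.0, 2.0])      # dL/dh (backward pass)
contrib = loss_contribution(h, w)
per_item_loss = contrib.sum()       # aggregate per data item
```

Summing contributions per data item, as in the abstract, ranks items by how much they drive the loss, surfacing ambiguous or mislabeled examples.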

11:20 – 11:40
VATUN: Visual Analytics for Testing and Understanding Convolutional Neural Networks
Cheonbok Park, Soyoung Yang, Inyoup Na, Sunghyo Chung, Sungbok Shin, Bum Chul Kwon, Deokgun Park, Jaegul Choo


Abstract: Convolutional neural networks (CNNs) are popularly used in a wide range of applications, such as computer vision, natural language processing, and human-computer interaction. However, testing and understanding a trained model is difficult and very time-consuming. This is because their inner mechanisms are often considered as a ‘black box’ due to difficulty in understanding the causal relationships between processes and results. To help the testing and understanding of such models, we present a user-interactive visual analytics system, VATUN, to analyze a CNN-based image classification model. Users can accomplish the following four tasks in our integrated system: (1) detect data instances in which the model confuses classification, (2) compare outcomes of the model by manipulating the conditions of the image, (3) understand reasons for the prediction of the model by highlighting highly influential parts from the image, and (4) analyze the overall what-if scenarios when augmenting the instances for each class. Moreover, by combining multiple techniques, our system lets users analyze the behavior of the model from various perspectives. We conduct a user study of an image classification scenario with three domain experts. Our study will contribute to reducing the time cost for testing and understanding CNN-based models in several industrial areas.

Presenter: Soyoung Yang

11:40 – 12:00
RoomCanvas: A Visualization Technique for Spatiotemporal Temperature Data in Smart Homes
Bastian König, Daniel Limberger, Jan Klimke, Benjamin Hagedorn, Jürgen Döllner


Abstract: Spatiotemporal measurements such as power consumption, temperature, humidity, movement, noise, brightness, etc., will become ubiquitously available in both old and modern homes to capture and analyze behavioral patterns. The data is fed into analytics platforms and tapped by services but is generally not readily available to consumers for exploration due in part to its inherent complexity and volume. We present an interactive visualization system that uses a simplified 3D representation of building interiors as a canvas for a unified sensor data display. The system’s underlying visualization supports spatial as well as temporal accumulation of data, e.g., temperature and humidity values. It introduces a volumetric data interpolation approach which takes 3D room boundaries such as walls, doors, and windows into account. We showcase an interactive, web-based prototype that allows for the exploration of historical as well as real-time data of multiple temperature and humidity sensors. Finally, we sketch an integrated pipeline from sensor data acquisition to visualization, discuss the creation of semantic geometry and subsequent preprocessing, and provide insights into our real-time rendering implementation.

Presenter: Bastian König

12:00 – 12:20
SailVis: Reconstruction and multifaceted visualization of sail shape
Danfeng Mu, Marcos Pieras, Douwe Broekens, Ricardo Marroquim


Abstract: While sailing, sailors rely on their eyes to inspect the sail shape and adjust the configurations to achieve an appropriate shape for a given weather condition. Mastering this so-called trimming process requires years of experience, since the visual inspection of the sail shape suffers from inaccuracies and is often difficult to communicate verbally. Therefore, this research proposes a visual analysis tool that presents an accurate sail shape representation and supports sailors in investigating the optimal sail shape for certain weather conditions. To achieve our goals, we reconstruct the 3D sail shape from point clouds acquired by photogrammetry methods. For incomplete acquisitions, we deform a complete template sail to estimate the missing parts. We designed a visualization dashboard for sailors to explore the 3D structure, 2D profiles, and characteristics of the time-varying sail shape, as well as analyze their relation to boat speed. The usability of the visualization tool is tested through a qualitative evaluation with two sailing experts. The results show that the reconstruction and deformation of the sail shape are plausible. Furthermore, the visualization dashboard has the potential to enhance sailors' comprehension of sail shape and provide insights towards optimal trimming.

Presenter: Danfeng Mu

12:20 – 12:40
RISSAD: Rule-based Interactive Semi-Supervised Anomaly Detection
Jiahao Deng, Eli T Brown


Abstract: Anomaly detection has gained increasing attention from researchers in recent times. Owing to a lack of reliable ground-truth labels, many current state-of-the-art techniques focus on unsupervised learning, which lacks a mechanism for user involvement. Further, these techniques do not provide interpretable results in a way that is understandable to the general public. To address this problem, we present RISSAD: an interactive technique that not only helps users detect anomalies but also automatically characterizes those anomalies with descriptive rules. The technique employs a semi-supervised learning approach based on an algorithm that relies on a partially-labeled dataset. Addressing the need for feedback and interpretability, the tool enables users to label anomalies individually or in groups using visual tools. We demonstrate the tool's effectiveness using quantitative experiments simulated on existing anomaly-detection datasets, and a usage scenario that illustrates a real-world application.

Presenter: Jiahao Deng


12:40 – 14:00 BREAK

14:00 – 15:15
Full Papers 5: Situated Displays and Guidance  
Chair: Michael Sedlmair
14:00 – 14:25
Public Data Visualization: Analyzing Local Running Statistics on Situated Displays
Jorgos Coenen, Andrew Vande Moere


Abstract: Popular sports tracking applications allow athletes to share and compare their personal performance data with others. Visualizing this data in relevant public settings can be beneficial in provoking novel types of opportunistic and communal sense-making. We investigated this premise by situating an analytical visualization of running performances on two touch-enabled public displays in proximity to a local community running trail. Using a rich mixed-methods evaluation protocol during a three-week-long in-the-wild deployment, we captured its social and analytical impact across 235 distinct interaction sessions. Our results show how our public analytical visualization supported passers-by in creating novel insights that were rather casual in nature. Several textual features that surrounded the visualization, such as titles framed as provocative hypotheses and predefined attention-grabbing data queries, sparked interest and social debate, while a narrative tutorial facilitated more analytical interaction patterns. Our detailed mixed-methods evaluation approach led to a set of actionable takeaways for public visualizations that allow novice audiences to engage with data-analytical insights that have local relevance.

Presenter: Jorgos Coenen

14:25 – 14:50
Guide Me in Analysis: A Framework for Guidance Designers
Davide Ceneda, Natalia Andrienko, Gennady Andrienko, Theresia Gschwandtner, Silvia Miksch, Nikolaus Piccolotto, Tobias Schreck, Marc Streit, Josef Suschnigg, Christian Tominski


Abstract: Guidance is an emerging topic in the field of visual analytics. Guidance can support users in pursuing their analytical goals more efficiently and help make the analysis successful. However, it is not clear how guidance approaches should be designed and what specific factors should be considered for effective support. In this paper, we approach this problem from the perspective of guidance designers. We present a framework comprising requirements and a set of specific phases designers should go through when designing guidance for visual analytics. We relate this process to a set of quality criteria that our framework aims to support and that are necessary for obtaining a suitable and effective guidance solution. To demonstrate the practical usability of our methodology, we apply our framework to the design of guidance in three analysis scenarios and a design walk-through session. Moreover, we list the emerging challenges and report how the framework can be used to design guidance solutions that mitigate these issues.

Presenter: Davide Ceneda

14:50 – 15:15
Accessible Visualization: Design Space, Opportunities, and Challenges
Nam Wook Kim, Shakila Cherise S Joyner, Amalia Riegelhuth, Yea-Seul Kim


Abstract: Visualizations are now widely used across disciplines to understand and communicate data. The benefit of visualizations lies in leveraging our natural visual perception. However, the sole dependency on vision can produce unintended discrimination against people with visual impairments. While the visualization field has seen enormous growth in recent years, supporting people with disabilities is much less explored. In this work, we examine approaches to support this marginalized user group, focusing on visual disabilities. We collected and analyzed papers published over the last 20 years on visualization accessibility. We mapped a design space for accessible visualization that includes seven dimensions: user group, literacy task, chart type, interaction, information granularity, sensory modality, and assistive technology. We describe the current knowledge gap in light of the latest advances in visualization and present a preliminary accessibility model by synthesizing findings from existing research. Finally, we reflect on the dimensions and discuss opportunities and challenges for future research.

Presenter: Shakila Cherise Joyner


14:00 – 15:40
STARs 2: Interaction and Physicalization  
Chair: Ingrid Hotz
14:00 – 14:50
The State of the Art of Spatial Interfaces for 3D Visualization
Lonni Besançon, Anders Ynnerman, Daniel F. Keefe, Lingyun Yu, Tobias Isenberg


Abstract: We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet, research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand, and the visualization task they support on the other. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data, (2) 3D interaction techniques for dissemination, which are under-explored yet show great promise for helping museums and science centers in their mission to share recent knowledge, and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.

Presenter: Lonni Besançon

14:50 – 15:40
Data to Physicalization: A Survey of the Physical Rendering Process
Hessam Djavaherpour, Faramarz Samavati, Ali Mahdavi-Amiri, Fatemeh Yazdanbakhsh, Samuel Huron, Richard Levy, Yvonne Jansen, Lora Oehlberg


Abstract: Physical representations of data offer physical and spatial ways of looking at, navigating, and interacting with data. While digital fabrication has facilitated the creation of objects with data-driven geometry, rendering data as a physically fabricated object is still a daunting leap for many physicalization designers. Rendering, in the scope of this research, refers to the back-and-forth process from digital design to digital fabrication and its specific challenges. We developed a corpus of example data physicalizations from research literature and physicalization practice. This survey then unpacks the “rendering” phase of the extended InfoVis pipeline in greater detail through these examples, with the aim of identifying ways that researchers, artists, and industry practitioners “render” physicalizations using digital design and fabrication tools.

Presenter: Hessam Djavaherpour


15:40 – 16:00 BREAK

16:00 – 17:40
Full Papers 6: Machine Learning and Explainable AI  
Chair: Daniel Archambault
16:00 – 16:25
iQUANT: Interactive Quantitative Investment Using Sparse Regression Factors
Xuanwu Yue, Qiao Gu, Deyun Wang, Huamin Qu, Yong Wang


Abstract: Model-based investing using financial factors is evolving into a principal method for quantitative investment. The main challenge lies in the selection of effective factors towards excess market returns. Existing approaches, whether hand-picking factors or applying feature selection algorithms, do not orchestrate both human knowledge and computational power. This paper presents iQUANT, an interactive quantitative investment system that assists equity traders in quickly spotting promising financial factors from initial recommendations suggested by algorithmic models, and in conducting a joint refinement of factors and stocks for investment portfolio composition. We work closely with professional traders to assemble empirical characteristics of “good” factors and propose effective visualization designs to illustrate the collective performance of financial factors, stock portfolios, and their interactions. We evaluate iQUANT through a formal user study, two case studies, and expert interviews, using a real stock market dataset consisting of 3,000 stocks × 6,000 days × 56 factors.

Presenter: Yong Wang
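As a rough illustration of the sparse-regression idea behind such factor recommendations, the sketch below uses a generic Lasso fit; this is not iQUANT's actual model, and the factor matrix, return series, and planted factor indices are all synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic setup: 500 trading days of 56 candidate factors, where excess
# returns are driven by factors 3 and 17 plus noise (indices are arbitrary).
rng = np.random.default_rng(0)
F = rng.standard_normal((500, 56))
returns = 2.0 * F[:, 3] - 1.5 * F[:, 17] + 0.1 * rng.standard_normal(500)

# L1-regularized regression drives most coefficients to exactly zero,
# leaving a sparse set of suggested factors for the trader to inspect.
model = Lasso(alpha=0.1).fit(F, returns)
suggested = np.flatnonzero(model.coef_)
```

A system like the one described would start from such algorithmic suggestions and then let the trader refine them interactively.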

16:25 – 16:50
VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization
Angelos Chatzimparmpas, Rafael M. Martins, Kostiantyn Kucher, Andreas Kerren


Abstract: During the training phase of machine learning (ML) models, it is usually necessary to configure several hyperparameters. This process is computationally intensive and requires an extensive search to infer the best hyperparameter set for the given problem. The challenge is exacerbated by the fact that most ML models are complex internally, and training involves trial-and-error processes that could remarkably affect the predictive result. Moreover, each hyperparameter of an ML algorithm is potentially intertwined with the others, and changing it might result in unforeseeable impacts on the remaining hyperparameters. Evolutionary optimization is a promising method to try and address those issues. According to this method, performant models are stored, while the remainder are improved through crossover and mutation processes inspired by genetic algorithms. We present VisEvol, a visual analytics tool that supports interactive exploration of hyperparameters and intervention in this evolutionary procedure. In summary, our proposed tool helps the user to generate new models through evolution and eventually explore powerful hyperparameter combinations in diverse regions of the extensive hyperparameter space. The outcome is a voting ensemble (with equal rights) that boosts the final predictive performance. The utility and applicability of VisEvol are demonstrated with two use cases and interviews with ML experts who evaluated the effectiveness of the tool.

Presenter: Angelos Chatzimparmpas

16:50 – 17:15
Learning Contextualized User Preferences for Co-Adaptive Guidance in Mixed-Initiative Topic Model Refinement
Fabian Sperrle, Hanna Schaefer, Daniel Keim, Mennatallah El-Assady


Abstract: Mixed-initiative visual analytics systems support collaborative human-machine decision-making processes. However, many multi-objective optimization tasks, such as topic model refinement, are highly subjective and context-dependent. Hence, systems need to adapt their optimization suggestions throughout the interactive refinement process to provide efficient guidance. To tackle this challenge, we present a technique for learning context-dependent user preferences and demonstrate its applicability to topic model refinement. We deploy agents with distinct associated optimization strategies that compete for the user’s acceptance of their suggestions. To decide when to provide guidance, each agent maintains an intelligible, rule-based classifier over context vectorizations that captures the development of quality metrics between distinct analysis states. By observing implicit and explicit user feedback, agents learn in which contexts to provide their specific guidance operation. An agent in topic model refinement might, for example, learn to react to declining model coherence by suggesting to split a topic. Our results confirm that the rules learned by agents capture contextual user preferences. Further, we show that the learned rules are transferable between similar datasets, avoiding common cold-start problems and enabling a continuous refinement of agents across corpora.

Presenter: Fabian Sperrle

17:15 – 17:40
A Visual Designer of Layer-wise Relevance Propagation Models
Xinyi Huang, Suphanut Jamonnak, Ye Zhao, Tsung Heng Wu, Wei Xu


Abstract: Layer-wise Relevance Propagation (LRP) is an emerging and widely used method for interpreting the prediction results of convolutional neural networks (CNNs). LRP developers often select and employ different relevance backpropagation rules and parameters to compute relevance scores on input images. However, there exists no obvious way to define a “best” LRP model. A satisfactory model is highly reliant on the pertinent images and the designer's goals. We develop a visual model designer, named VisLRPDesigner, to overcome the challenges in the design and use of LRP models. Various LRP rules are unified into an integrated framework with an intuitive workflow for parameter setup. VisLRPDesigner thus allows users to interactively configure and compare LRP models. It also facilitates relevance-based visual analysis with two important functions: relevance-based pixel flipping and neuron ablation. Several use cases illustrate the benefits of VisLRPDesigner. The usability and limitations of the visual designer are evaluated by LRP users.

Presenter: Xinyi Huang


16:00 – 17:40
Short Papers 2: Scientific Visualization  
Chair: Georges-Pierre Bonneau
16:00 – 16:20
Analytic Ray Splitting for Controlled Precision DVR
Sebastian Weiss, Rüdiger Westermann


Abstract: For direct volume rendering of post-classified data, we propose an algorithm that analytically splits a ray through a cubical cell at the control points of a piecewise-polynomial transfer function. This splitting generates segments over which the variation of the optical properties is described by piecewise cubic functions. This allows using numerical quadrature rules with controlled precision to obtain an approximation with prescribed error bounds. The proposed splitting scheme can be used to find all piecewise linear or monotonic segments along a ray, and it can thus be used to improve the accuracy of direct volume rendering, scale-invariant volume rendering, and multi-isosurface rendering.

Presenter: Sebastian Weiss
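The core splitting idea can be illustrated under a strong simplification: if the scalar varies linearly along a ray segment (the paper handles the cubic variation arising from trilinear interpolation inside a cell), the split points are simply the parametric positions where the scalar crosses the transfer function's control points. The sketch below is illustrative only, not the paper's algorithm.

```python
def split_ray(s0, s1, control_points):
    """Return sorted parametric positions t in (0, 1) where the linearly
    interpolated scalar s(t) = s0 + t * (s1 - s0) crosses a control point
    of a piecewise-defined transfer function."""
    ts = []
    for c in control_points:
        if s0 != s1:                      # constant scalar: no crossings
            t = (c - s0) / (s1 - s0)
            if 0.0 < t < 1.0:             # keep only interior crossings
                ts.append(t)
    return sorted(ts)

# Scalar rises from 0 to 1 across the segment; control points at
# 0.25, 0.5, 0.75 split it into four sub-segments (2.0 lies outside).
print(split_ray(0.0, 1.0, [0.25, 0.5, 0.75, 2.0]))  # -> [0.25, 0.5, 0.75]
```

Within each resulting sub-segment the optical properties vary smoothly, which is what enables quadrature with controlled precision.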

16:20 – 16:40
Visual Analysis of the Relation Between Stiffness Tensor and the Cauchy-Green Tensor
Christian Blecha, Chiara Hergl, Thomas Nagel, Gerik Scheuermann


Abstract: Stress and strain tensors, two well-known quantities in mechanical engineering, are linked through a fourth-order stiffness tensor, which is not considered by many visualizations due to its complexity. Considering an orthotropic material, the tensor naturally decomposes into nine known material properties. We used fiber surfaces to analyze a data set representing a biological tissue. A sphere is pushed into the material to confirm the mathematical link as well as the possibility to extract highly deformed regions even if only the stiffness tensor is available.

Presenter: Christian Blecha

16:40 – 17:00
Visualization of Uncertain Multivariate Data via Feature Confidence Level-Sets
Sudhanshu Sane, Tushar M. Athawale, Chris R. Johnson


Abstract: Recent advancements in multivariate data visualization have opened new research opportunities for the visualization community. In this paper, we propose an uncertain multivariate data visualization technique called feature confidence level-sets. Conceptually, feature level-sets refer to level-sets of multivariate data. Our proposed technique extends the existing idea of univariate confidence isosurfaces to multivariate feature level-sets. Feature confidence level-sets are computed by considering the trait for a specific feature, a confidence interval, and the distribution of data at each grid point in the domain. Using uncertain multivariate data sets, we demonstrate the utility of the technique to visualize regions with uncertainty in relation to the specific trait or feature, and the ability of the technique to provide secondary feature structure visualization based on uncertainty.

Presenter: Sudhanshu Sane

17:00 – 17:20
Integration-Aware Vector Field Super Resolution
Saroj Sahoo, Matthew Berger


Abstract: In this work, we propose an integration-aware super-resolution approach for 3D vector fields. Recent work in flow field super-resolution has achieved remarkable success using deep learning approaches. However, existing approaches fail to account for how vector fields are used in practice once an upsampled vector field is obtained. Specifically, a cornerstone of flow visualization is the visual analysis of streamlines, or integral curves of the vector field. To this end, we study how to incorporate streamlines as part of super-resolution in a deep learning context, such that upsampled vector fields are optimized to produce streamlines that resemble the ground truth upon integration. We consider common factors of integration as part of our approach (seeding and streamline length) and how these factors impact the resulting upsampled vector field. To demonstrate the effectiveness of our approach, we evaluate our model both quantitatively and qualitatively on different flow field datasets and compare our method against state-of-the-art techniques.

Presenter: Saroj Sahoo

17:20 – 17:40
Selection of Optimal Salient Time Steps by Non-negative Tucker Tensor Decomposition
Jesus Pulido, John M Patchett, Manish Bhattarai, Boian Alexandrov, James Ahrens


Abstract: Choosing salient time steps from spatio-temporal data is useful for summarizing the sequence and developing visualizations for animations prior to committing time and resources to their production on an entire time series. Animations can be developed more quickly with visualization choices that work best for a small set of the important salient time steps. Here we introduce a new unsupervised learning method for finding such salient time steps. The volumetric data is represented by a 4-dimensional non-negative tensor, X(t, x, y, z). The presence of latent (not directly observable) structure in this tensor allows a unique representation and compression of the data. To extract the latent time-features, we utilize non-negative Tucker tensor decomposition. We then map these time-features to their maximal values to identify the salient time steps. We demonstrate that this choice of time steps allows a good representation of the time series as a whole.

Presenter: Jesus Pulido
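A minimal sketch of the overall idea: extract non-negative latent time-features and take the time step where each attains its maximum. As a stand-in for the paper's non-negative Tucker decomposition, this sketch uses plain non-negative matrix factorization on the time-matricized tensor, applied to synthetic data with two temporal events; it is illustrative, not the authors' method.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic X(t, x, y, z): 20 time steps of an 8x8x8 volume containing two
# latent temporal "events" peaking near t = 5 and t = 14.
rng = np.random.default_rng(1)
vol = rng.random((8, 8, 8))
X = np.stack([np.exp(-0.5 * (ti - 5) ** 2) * vol
              + np.exp(-0.5 * (ti - 14) ** 2) * (1 - vol)
              for ti in range(20)])

# Matricize along time and factor into non-negative latent time-features
# (a simplification of extracting the time-mode factor of a Tucker model).
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
time_features = model.fit_transform(X.reshape(20, -1))   # shape (20, 2)

# Salient time steps: where each latent time-feature is maximal.
salient = sorted(time_features.argmax(axis=0))
```

On this toy data the two recovered salient time steps land near the two planted event peaks.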


17:40 – 18:00 BREAK


Thursday, 17 June, 2021

09:00 – 10:15
Full Papers 7: Medical Applications and GPUs  
Chair: Gerik Scheuermann
09:00 – 09:25
Daisen: A Framework for Visualizing Detailed GPU Execution
Yifan Sun, Yixuan Zhang, Ali Mosallaei, Michael D Shah, Cody Dunne, David Kaeli


Abstract: Graphics Processing Units (GPUs) have been widely used to accelerate artificial intelligence, physics simulation, medical imaging, and information visualization applications. To improve GPU performance, GPU hardware designers need to identify performance issues by inspecting a huge amount of simulator-generated traces. Visualizing the execution traces can reduce the cognitive burden of users and facilitate making sense of behaviors of GPU hardware components. In this paper, we first formalize the process of GPU performance analysis and characterize the design requirements of visualizing execution traces based on a survey study and interviews with GPU hardware designers. We contribute data and task abstraction for GPU performance analysis. Based on our task analysis, we propose Daisen, a framework that supports data collection from GPU simulators and provides visualization of the simulator-generated GPU execution traces. Daisen features a data abstraction and trace format that can record simulator-generated GPU execution traces. Daisen also includes a web-based visualization tool that helps GPU hardware designers examine GPU execution traces, identify performance bottlenecks, and verify performance improvement. Our qualitative evaluation with GPU hardware designers demonstrates that the design of Daisen reflects the typical workflow of GPU hardware designers. Using Daisen, participants were able to effectively identify potential performance bottlenecks and opportunities for performance improvement. The open-sourced implementation of Daisen can be found at gitlab.com/akita/vis. Supplemental materials including a demo video, survey questions, evaluation study guide, and post-study evaluation survey are available at osf.io/j5ghq.

Presenter: Yifan Sun

09:25 – 09:50
Leveraging Topological Events in Tracking Graphs for Understanding Particle Diffusion
Torin McDonald, Rebika Shrestha, Xiyu Yi, Harsh Bhatia, De Chen, Debanjan Goswami, Valerio Pascucci, Thomas Turbyville, Peer-Timo Bremer


Abstract: Single particle tracking (SPT) of fluorescent molecules provides significant insights into the diffusion and relative motion of tagged proteins and other structures of interest in biology. However, despite the latest advances in high-resolution microscopy, individual particles are typically not distinguished from clusters of particles. This lack of resolution obscures potential evidence for how merging and splitting of particles affect their diffusion and any implications on the biological environment. The particle tracks are typically decomposed into individual segments at observed merge and split events, and analysis is performed without knowing the true count of particles in the resulting segments. Here, we address the challenges in analyzing particle tracks in the context of cancer biology. In particular, we study the tracks of the KRAS protein, which is implicated in nearly 20% of all human cancers, and whose clustering and aggregation have been linked to the signaling pathway leading to uncontrolled cell growth. We present a new analysis approach for particle tracks by representing them as tracking graphs and using topological events (merging and splitting) to disambiguate the tracks. Using this analysis, we infer a lower bound on the count of particles as they cluster and create conditional distributions of diffusion speeds before and after merge and split events. Using thousands of time steps of simulated and in-vitro SPT data, we demonstrate the efficacy of our method, as it offers biologists a new, detailed look into the relationship between KRAS clustering and diffusion speeds.

Presenter: Torin McDonald

09:50 – 10:15
SumRe: Design and Evaluation of a Gist-based Summary Visualization for Incident Reports Triage
Tabassum Kakar, Xiao Qin, Thang La, Sanjay K Sahoo, Suranjan De, Elke Rundensteiner, Lane Harrison


Abstract: Incident report triage is a common endeavor in many industry sectors, often coupled with serious public safety implications. For example, at the US Food and Drug Administration (FDA), analysts triage an influx of incident reports to identify previously undiscovered drug safety problems. However, these analysts currently conduct this critical yet error-prone incident report triage using a generic table-based interface, with no formal support. Visualization design, task-characterization methodologies, and evaluation models offer several possibilities for better supporting triage workflows, including those dealing with drug safety and beyond. In this work, we aim to elevate the work of triage through a task-abstraction activity with FDA analysts. Second, we design an alternative gist-based summary of text documents used in triage (SumRe). Third, we conduct a crowdsourced evaluation of SumRe with medical experts. Results of the crowdsourced study with medical experts (n = 20) suggest that SumRe better supports accuracy in understanding the gist of a given report, and in identifying important reports for follow-up activities. We discuss implications of these results, including design considerations for triage workflows beyond the drug domain, as well as methodologies for comparing visualization-enabled text summaries.

Presenter: Tabassum Kakar


09:00 – 10:40
Full Papers 8: Analytics in Science and Engineering  
Chair: Thomas Schulz
09:00 – 09:25
SenVis: Interactive Tensor-based Sensitivity Visualization
Haiyan Yang, Rafael Ballester-Ripoll, Renato Pajarola


Abstract: Sobol’s method is one of the most powerful and widely used frameworks for global sensitivity analysis, and it maps every possible combination of input variables to an associated Sobol index. However, these indices are often challenging to analyze in depth, due in part to the lack of suitable, flexible enough, and fast-to-query data access structures as well as visualization techniques. We propose a visualization tool that leverages tensor decomposition, a compressed data format that can quickly and approximately answer sophisticated queries over exponential-sized sets of Sobol indices. This way, we are able to capture the complete global sensitivity information of high-dimensional scalar models. Our application is based on a three-stage visualization, to which variables to be analyzed can be added or removed interactively. It includes a novel hourglass-like diagram presenting the relative importance for any single variable or combination of input variables with respect to any composition of the rest of the input variables. We showcase our visualization with a range of example models, whereby we demonstrate the high expressive power and analytical capability made possible with the proposed method.

Presenter: Haiyan Yang

09:25 – 09:50
Visual Analysis of Electronic Densities and Transitions in Molecules
Talha Bin Masood, Signe Sidwall Thygesen, Mathieu Linares, Alexei I. Abrikosov, Vijay Natarajan, Ingrid Hotz


Abstract: The study of electronic transitions within a molecule connected to the absorption or emission of light is a common task in the process of the design of new materials. The transitions are complex quantum mechanical processes and a detailed analysis requires a breakdown of these processes into components that can be interpreted via characteristic chemical properties. We approach these tasks by providing a detailed analysis of the electron density field. This entails methods to quantify and visualize electron localization and transfer from molecular subgroups combining spatial and abstract representations. The core of our method uses geometric segmentation of the electronic density field coupled with a graph-theoretic formulation of charge transfer between molecular subgroups. The design of the methods has been guided by the goal of providing a generic and objective analysis following fundamental concepts. We illustrate the proposed approach using several case studies involving the study of electronic transitions in different molecular systems.

Presenter: Talha Bin Masood

09:50 – 10:15
Hornero: Thunderstorms Characterization using Visual Analytics
Alexandra Diehl, Rodrigo Pelorosso, Juan J. Ruiz, Renato Pajarola, Eduard Gröller, Stefan Bruckner


Abstract: Analyzing the evolution of thunderstorms is critical in determining the potential for the development of severe weather events. Existing visualization systems for short-term weather forecasting (nowcasting) allow for basic analysis and prediction of storm developments. However, they lack advanced visual features for efficient decision-making. We developed a visual analytics tool for the detection and characterization of hazardous thunderstorms, using a visual design centered on a reformulated expert task workflow. It includes visual features to overview storms and quickly identify high-impact weather events, a novel storm graph visualization to inspect and analyze the storm structure, as well as a set of interactive views for efficient identification of similar storm cells (known as analogs) in historical data and their use for nowcasting. Our tool was designed with and evaluated by meteorologists and expert forecasters working in short-term operational weather forecasting of severe weather events. Results show that our solution suits the forecasters' workflow. Our visual design is expressive, easy to use, and effective for prompt analysis and quick decision-making in the context of short-range operational weather forecasting.

Presenter: Alexandra Diehl

10:15 – 10:40
A Modified Double Gyre with Ground Truth Hyperbolic Trajectories for Flow Visualization
Steve Wolligandt, Thomas Wilde, Christian Rössl, Holger Theisel


Abstract: The model of a double gyre flow by Shadden et al. is a standard benchmark data set for the computation of hyperbolic Lagrangian Coherent Structures (LCS) in flow data. While structurally extremely simple, it generates hyperbolic LCS of arbitrary complexity. Unfortunately, the double gyre does not come with a well-defined ground truth: the location of hyperbolic LCS boundaries can only be approximated by numerical methods that usually involve the gradient of the flowmap. We present a new benchmark data set that is a small but carefully designed modification of the double gyre, which comes with ground truth closed-form hyperbolic trajectories. This allows for computing hyperbolic LCS boundaries by a simple particle integration without the consideration of the flow map gradient. We use these hyperbolic LCS as a ground truth solution for testing an existing numerical approach for extracting hyperbolic trajectories. In addition, we are able to construct hyperbolic LCS curves that are significantly longer than in existing numerical methods.
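For readers unfamiliar with the benchmark the abstract builds on, the following is a minimal sketch of the standard (unmodified) double gyre velocity field of Shadden et al., using the commonly cited default parameters (A = 0.1, ε = 0.25, ω = π/5); the paper’s modification with closed-form hyperbolic trajectories is not reproduced here.

```python
import math

def double_gyre(x, y, t, A=0.1, eps=0.25, omega=math.pi / 5):
    """Velocity (u, v) of the standard double gyre on the domain [0, 2] x [0, 1]."""
    a = eps * math.sin(omega * t)
    b = 1.0 - 2.0 * eps * math.sin(omega * t)
    f = a * x * x + b * x          # f(x, t) = a(t) x^2 + b(t) x
    dfdx = 2.0 * a * x + b         # df/dx
    u = -math.pi * A * math.sin(math.pi * f) * math.cos(math.pi * y)
    v = math.pi * A * math.cos(math.pi * f) * math.sin(math.pi * y) * dfdx
    return u, v

# Sanity check: no flow crosses the domain boundaries (v = 0 at y = 0 and y = 1).
print(double_gyre(1.3, 0.0, 2.5))
```

Seeding particles in this field and integrating them forward is the usual starting point for computing the flow-map gradients the abstract refers to.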

Presenter: Steve Wolligandt


10:40 – 11:00 BREAK

11:00 – 12:40
Full Papers 9: Geo-spatial Design and Analysis  
Chair: Aidan Slingsby
11:00 – 11:25
A Deeper Understanding of Visualization–Text Interplay in Geographic Data-driven Stories
Shahid Latif, Siming Chen, Fabian Beck


Abstract: Data-driven stories comprise visualizations and a textual narrative. The two representations coexist and complement each other. Although existing research has explored the design strategies and structure of such stories, it remains an open research question how the two representations play together on a detailed level and how they are linked with each other. In this paper, we aim at understanding the fine-grained interplay of text and visualizations in geographic data-driven stories. We focus on geographic content as it often includes complex spatiotemporal data presented as versatile visualizations and rich textual descriptions. We conduct a qualitative empirical study on 22 stories collected from a variety of news media outlets; 10 of the stories report the COVID-19 pandemic, the others cover diverse topics. We investigate the role of every sentence and visualization within the narrative to reveal how they reference each other and interact. Moreover, we explore the positioning and sequence of various parts of the narrative to find patterns that further consolidate the stories. Drawing from the findings, we discuss study implications with respect to best practices and possibilities to automate the report generation.

Presenter: Shahid Latif

11:25 – 11:50
Design Space of Origin-Destination Data Visualization
Martijn Tennekes, Min Chen


Abstract: Visualization is an essential tool for observing and analyzing origin-destination (OD) data, which encodes flows between geographic locations, e.g., in applications concerning commuting, migration, and transport of goods. However, depicting OD data often encounters issues of cluttering and occlusion. To address these issues, many visual designs feature data abstraction and visual abstraction, such as node aggregation and edge bundling, resulting in information loss. The recent theoretical and empirical developments in visualization have substantiated the merits of such abstraction, while confirming that viewers’ knowledge can alleviate the negative impact due to information loss. It is thus desirable to map out different ways of losing and adding information in origin-destination data visualization (ODDV). We therefore formulate a new design space of ODDV based on the categorization of informative operations on OD data in data abstraction and visual abstraction. We apply this design space to existing ODDV methods, outline strategies for exploring the design space, and suggest ideas for further exploration.

Presenter: Martijn Tennekes

11:50 – 12:15
Visual Analysis of Spatio-temporal Phenomena with 1D Projections
Max Franke, Henry Martin, Steffen Koch, Kuno Kurzhals


Abstract: To understand critical spatio-temporal events such as earthquakes, fires, or the spreading of a disease, it is crucial to visually extrapolate the characteristics of their evolution. Animations embedded in the spatial context can be helpful for understanding details, but have proven to be less effective for overview and comparison tasks. We present an interactive approach for the exploration of spatio-temporal data, based on a set of neighborhood-preserving 1D projections which help identify patterns and support the comparison of numerous time steps and multivariate data. An important objective of the proposed approach is the visual description of local neighborhoods in the 1D projection to reveal patterns of similarity and propagation. As this locality cannot generally be guaranteed, we provide a selection of different projection techniques, as well as a hierarchical approach, to support the analysis of different data characteristics. In addition, we offer an interactive exploration technique to reorganize and improve the mapping locally to users’ foci of interest. We demonstrate the usefulness of our approach with different real-world application scenarios and discuss the feedback we received from domain and visualization experts.

Presenter: Max Franke

12:15 – 12:40
Boundary Objects in Design Studies: Reflections on the Collaborative Creation of Isochrone Maps
Romain Vuillemot, Philippe Rivière, Anaëlle Beignon, Aurélien Tabard


Abstract: We propose to take an artifact-centric approach to design studies by leveraging the concept of boundary object. Design studies typically focus on processes and articulate design decisions in a project-specific context with a goal of transferability. We argue that design studies could benefit from paying attention to the material conditions in which teams collaborate to reach design outcomes. We report on a design study of isochrone maps following cartographic generalization principles. Focusing on boundary objects enables us to characterize five categories of artifacts and tools that facilitated collaboration between actors involved in the design process (structured collections, structuring artifacts, process-centric artifacts, generative artifacts, and bridging artifacts). We found that artifacts such as layered maps and map collections played a unifying role for our inter-disciplinary team. We discuss how such artifacts can be pivotal in the design process. Finally, we discuss how considering boundary objects could improve the transferability of design study results, and support reflection on inter-disciplinary collaboration in the domain of Information Visualization.

Presenter: Romain Vuillemot, Aurélien Tabard


11:00 – 12:40
Full Papers 10: Charts, Design and Interaction  
Chair: Christian Tominski
11:00 – 11:25
Automatic Improvement of Continuous Colormaps in Euclidean Colorspaces
Pascal Nardini, Min Chen, Michael Böttinger, Gerik Scheuermann, Roxana Bujack


Abstract: Colormapping is one of the simplest and most widely used data visualization methods within and outside the visualization community. Uniformity, order, discriminative power, and smoothness of continuous colormaps are the most important criteria for evaluating and potentially improving colormaps. In this work, we present a local and a global automatic optimization algorithm in Euclidean color spaces for each of these design rules. As a foundation for our optimization algorithms, we used the CCC-Tool colormap specification (CMS); each algorithm has been implemented in the CCC-Tool. In addition to synthetic examples that demonstrate each method’s effect, we show the outcome of some of the methods applied to a typhoon simulation.

Presenter: Pascal Nardini

11:25 – 11:50
ParSetgnostics: Quality Metrics for Parallel Sets
Frederik L. Dennig, Maximilian T. Fischer, Michael Blumenschein, Johannes Fuchs, Daniel Keim, Evanthia Dimara


Abstract: While there are many visualization techniques for exploring numeric data, only a few work with categorical data. One prominent example is Parallel Sets, showing data frequencies instead of data points – analogous to parallel coordinates for numerical data. As nominal data does not have an intrinsic order, the design of Parallel Sets is sensitive to visual clutter due to overlaps, crossings, and subdivision of ribbons, hindering readability and pattern detection. In this paper, we propose a set of quality metrics, called ParSetgnostics (Parallel Sets diagnostics), which aim to improve Parallel Sets by reducing clutter. These quality metrics quantify important properties of Parallel Sets such as overlap, orthogonality, ribbon width variance, and mutual information to optimize the category and dimension ordering. By conducting a systematic correlation analysis between the individual metrics, we ensure their distinctiveness. Further, we evaluate the clutter reduction effect of ParSetgnostics by reconstructing six datasets from previous publications that use Parallel Sets, measuring and comparing their respective properties. Our results show that ParSetgnostics facilitates multi-dimensional analysis of categorical data by automatically providing optimized Parallel Set designs with a clutter reduction of up to 81% compared to the originally proposed Parallel Sets visualizations.
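Among the metrics the abstract names, mutual information between two categorical dimensions is a standard quantity. The sketch below is a minimal, generic implementation of mutual information for two categorical columns; it is not the authors’ ParSetgnostics code, only an illustration of the underlying measure.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two equally long categorical columns."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)      # marginal counts
    pxy = Counter(zip(xs, ys))             # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        mi += p * math.log2(p / ((px[x] / n) * (py[y] / n)))
    return mi

# Identical columns share maximal information; independent ones share none.
print(mutual_information("aabb", "aabb"))  # 1.0 bit
print(mutual_information("aabb", "cdcd"))  # 0.0 bits
```

In an ordering optimizer, dimensions with high mutual information would typically be placed adjacently so that their ribbons stay coherent.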

Presenter: Frederik Dennig

11:50 – 12:15
A novel approach for exploring annotated data with interactive lenses
Fabio Bettio, Moonisa Ahsan, Fabio Marton, Enrico Gobbetti


Abstract: We introduce a novel approach for assisting users in exploring 2D data representations with an interactive lens. Focus-and-context exploration is supported by translating user actions to the joint adjustments in camera and lens parameters that ensure a good placement and sizing of the lens within the view. This general approach, implemented using standard device mappings, overcomes the limitations of current solutions, which force users to continuously switch from lens positioning and scaling to view panning and zooming. Navigation is further assisted by exploiting data annotations. In addition to traditional visual markups and information links, we associate a lens configuration with each annotation that highlights the region of interest. During interaction, an assisting controller determines the next best lens in the database based on the current view and lens parameters and the navigation history. Then, the controller interactively guides the user’s lens towards the selected target and displays its annotation markup. As only one annotation markup is displayed at a time, clutter is reduced. Moreover, in addition to guidance, the navigation can also be automated to create a tour through the data. While our methods are applicable to general 2D visualization, we have implemented them for the exploration of stratigraphic relightable models. The capabilities of our approach are demonstrated in cultural heritage use cases. A user study was performed to validate our approach.

Presenter: Moonisa Ahsan

12:15 – 12:40
Line Weaver: Importance-Driven Order Enhanced Rendering of Dense Line Charts
Thomas Trautner, Stefan Bruckner


Abstract: Line charts are an effective and widely used technique for visualizing series of ordered two-dimensional data points. The relationship between consecutive points is indicated by connecting line segments, revealing potential trends or clusters in the underlying data. However, when dealing with an increasing number of lines, the render order substantially influences the resulting visualization. Rendering transparent lines can help, but unfortunately the blending order is currently either ignored or naively used, for example, assuming it is implicitly given by the order in which the data was saved in a file. Due to the non-commutativity of classic alpha blending, this results in contradicting visualizations of the same underlying data set, so-called “hallucinators”. In this paper, we therefore present Line Weaver, a novel visualization technique for dense line charts. Using an importance function, we developed an approach that correctly considers the blending order independently of the render order and without any prior sorting of the data. We allow for importance functions which are either explicitly given or implicitly derived from the geometric properties of the data if no external data is available. The importance can then be applied globally to entire lines, or locally per pixel, which simultaneously supports various types of user interaction. Finally, we discuss the potential of our contribution based on different synthetic and real-world data sets where classic or naive approaches would fail.
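The non-commutativity of alpha blending that the abstract refers to is easy to demonstrate with the classic “over” operator: compositing the same two translucent colors in opposite orders yields different pixels. The sketch below illustrates this standard operator only; it is not the paper’s importance-driven method.

```python
def over(src_rgb, src_a, dst_rgb):
    """Classic 'over' blend of a translucent color onto an opaque background."""
    return tuple(src_a * s + (1.0 - src_a) * d for s, d in zip(src_rgb, dst_rgb))

white = (1.0, 1.0, 1.0)
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

# Draw red then blue vs. blue then red, both at 50% opacity, onto white.
red_then_blue = over(blue, 0.5, over(red, 0.5, white))   # (0.5, 0.25, 0.75)
blue_then_red = over(red, 0.5, over(blue, 0.5, white))   # (0.75, 0.25, 0.5)
print(red_then_blue != blue_then_red)  # True: the result depends on draw order
```

This order dependence is exactly why an unsorted dense line chart can show different “hallucinated” structures for the same data.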

Presenter: Thomas Trautner


12:40 – 14:00 BREAK

14:00 – 15:40
Full Papers 11: Bio-Medical Image Analysis  
Chair: Barbora Kozlíková
14:00 – 14:25
A Progressive Approach for Uncertainty Visualization in Diffusion Tensor Imaging
Faizan Siddiqui, Thomas Höllt, Anna Vilanova


Abstract: Diffusion Tensor Imaging (DTI) is a non-invasive magnetic resonance imaging technique that, combined with fiber tracking algorithms, allows the characterization and visualization of white matter structures in the brain. The resulting fiber tracts are used, for example, in tumor surgery to evaluate the potential brain functional damage due to tumor resection. The DTI processing pipeline from image acquisition to the final visualization is rather complex, generating undesirable uncertainties in the final results. Most DTI visualization techniques do not provide any information regarding the presence of uncertainty. When planning surgery, a fixed safety margin around the fiber tracts is often used; however, it cannot capture local variability and distribution of the uncertainty, thereby limiting the informed decision-making process. Stochastic techniques are one possibility for estimating uncertainty in the DTI pipeline. However, they have high computational and memory requirements that make them infeasible in a clinical setting, and the delay in visualizing the results further hinders the workflow. We propose a progressive approach that relies on a combination of wild-bootstrapping and fiber tracking to be used within the progressive visual analytics paradigm. We present a local bootstrapping strategy, which reduces the computational and memory costs, and provides fiber-tracking results in a progressive manner. We have also implemented a progressive aggregation technique that computes the distances in the fiber ensemble during progressive bootstrap computations. We present experiments with different scenarios to highlight the benefits of using our progressive visual analytic pipeline in a clinical workflow along with a use case and analysis obtained by discussions with our collaborators.

Presenter: Faizan Siddiqui

14:25 – 14:50
Implicit Modeling of Patient-Specific Aortic Dissections with Elliptic Fourier Descriptors
Gabriel Mistelbauer, Christian Roessl, Kathrin Baeumler, Bernhard Preim, Dominik Fleischmann


Abstract: Aortic dissection is a life-threatening vascular disease characterized by abrupt formation of a new flow channel (false lumen) within the aortic wall. Survivors of the acute phase remain at high risk for late complications, such as aneurysm formation, rupture, and death. Morphologic features of aortic dissection determine not only treatment strategies in the acute phase (surgical vs. endovascular vs. medical), but also modulate the hemodynamics in the false lumen, ultimately responsible for late complications. Accurate description of the true and false lumen, any communications across the dissection membrane separating the two lumina, and blood supply from each lumen to aortic branch vessels is critical for risk prediction. Patient-specific surface representations are also a prerequisite for hemodynamic simulations, but currently require time-consuming manual segmentation of CT data. We present an aortic dissection cross-sectional model that captures the varying aortic anatomy, allowing for reliable measurements and creation of high-quality surface representations. In contrast to the traditional spline-based cross-sectional model, we employ elliptic Fourier descriptors, which allow users to control the accuracy of the cross-sectional contour of a flow channel. We demonstrate (i) how our approach can solve the requirements for generating surface and wall representations of the flow channels, (ii) how any number of communications between flow channels can be specified in a consistent manner, and (iii) how well branches connected to the respective flow channels are handled. Finally, we discuss how our approach is a step forward to an automated generation of surface models for aortic dissections from raw 3D imaging segmentation masks.

Presenter: Gabriel Mistelbauer

14:50 – 15:15
Visualizing Carotid Blood Flow Simulations for Stroke Prevention
Pepe Eulzer, Monique Meuschke, Carsten Klingner, Kai Lawonn


Abstract: In this work, we investigate how concepts from medical flow visualization can be applied to enhance stroke prevention diagnostics. Our focus lies on carotid stenoses, i.e., local narrowings of the major brain-supplying arteries, which are a frequent cause of stroke. Carotid surgery can reduce the stroke risk associated with stenoses, however, the procedure entails risks itself. Therefore, a thorough assessment of each case is necessary. In routine diagnostics, the morphology and hemodynamics of an afflicted vessel are separately analyzed using angiography and sonography, respectively. Blood flow simulations based on computational fluid dynamics could enable the visual integration of hemodynamic and morphological information and provide a higher resolution on relevant parameters. We identify and abstract the tasks involved in the assessment of stenoses and investigate how clinicians could derive relevant insights from carotid blood flow simulations. We adapt and refine a combination of techniques to facilitate this purpose, integrating spatiotemporal navigation, dimensional reduction, and contextual embedding. We evaluated and discussed our approach with an interdisciplinary group of medical practitioners, fluid simulation and flow visualization researchers. Our initial findings indicate that visualization techniques could promote usage of carotid blood flow simulations in practice.

Presenter: Pepe Eulzer

15:15 – 15:40
VICE: Visual Identification and Correction of Neural Circuit Errors
Felix Gonda, Johanna Beyer, Xueying Wang, Markus Hadwiger, Jeff Lichtman, Hanspeter Pfister


Abstract: A connectivity graph of neurons at the resolution of single synapses provides scientists with a tool for understanding the nervous system in health and disease. Recent advances in automatic image segmentation and synapse prediction in electron microscopy (EM) datasets of the brain have made reconstructions of neurons possible at the nanometer scale. However, automatic segmentation sometimes struggles to segment large neurons correctly, requiring human effort to proofread its output. General proofreading involves inspecting large volumes to correct segmentation errors at the pixel level, a visually intensive and time-consuming process. This paper presents the design and implementation of an analytics framework that streamlines proofreading, focusing on connectivity-related errors. We accomplish this with automated likely-error detection and synapse clustering that drives the proofreading effort with highly interactive 3D visualizations. In particular, our strategy centers on proofreading the local circuit of a single cell to ensure a basic level of completeness. We demonstrate our framework’s utility with a user study and report quantitative and subjective feedback from our users. Overall, users find the framework more efficient for proofreading, understanding evolving graphs, and sharing error correction strategies.

Presenter: Felix Gonda


14:00 – 15:40
Short Papers 3: Analytics & Applications  
Chair: Kostiantyn Kucher
14:00 – 14:20
VisMiFlow: Visual Analytics to Support Citizen Migration Understanding Over Time and Space
Andreas Scheidl, Roger A. Leite, Silvia Miksch


Abstract: Multivariate networks are complex data structures, which are ubiquitous in many application domains. Driven by a real-world problem, namely the movement behavior of citizens in Vienna, we designed and implemented a Visual Analytics (VA) approach to ease citizen behavior analyses over time and space. We used a dataset of citizens’ movement behavior to, from, or within Vienna from 2007 to 2018, provided by the City of Vienna. To tackle the complexity of time, space, and other attributes of the moving people, we follow a data-user-tasks design approach to support urban developers. We qualitatively evaluated our VA approach with five experts from the field of VA and one non-expert. The evaluation illustrated the importance of task-specific visualization and interaction techniques to support users’ decision-making and insights. We elaborate on our findings and suggest potential future work in the field.

Presenter: Andreas Scheidl

14:20 – 14:40
DanceMoves: A Visual Analytics Tool for Dance Movement Analysis
Vasiliki Arpatzoglou, Artemis Kardara, Alexandra Diehl, Barbara Flueckiger, Sven Helmer, Renato Pajarola


Abstract: Analyzing body movement as a means of expression is of interest in diverse areas, such as dance, sports, films, as well as anthropology or archaeology. In particular, in choreography, body movements are at the core of artistic expression. Dance moves are composed of spatial and temporal structures that are difficult to address without interactive visual data analysis tools. We present a visual analytics solution that allows the user to get an overview of, compare, and visually search dance move features in video archives. With the help of similarity measures, a user can compare dance moves and assess dance poses. We illustrate our approach through three use cases and an analysis of the performance of our similarity measures. The expert feedback and the experimental results show that 75% to 80% of dance moves can correctly be categorized. Domain experts recognize great potential in this standardized analysis. Comparative and motion analysis allows them to get detailed insights into temporal and spatial development of motion patterns and poses.

Presenter: Alexandra Diehl

14:40 – 15:00
Graceful Degradation for Real-time Visualization of Streaming Geospatial Data
João Rafael, João Moreira, Daniel Mendes, Mário Alves, Daniel Gonçalves


Abstract: The availability of devices that can record locations and are connected to the Internet creates a huge amount of geospatial data that are continuously streamed. The informative visualization of such data is a challenging problem, given their sheer volume and the real-time nature of the incoming stream. A simple approach like plotting all data points would generate visual noise and would not scale well. To tackle this problem, we have developed a visualization technique based on graceful degradation along three overlaid time periods (ongoing, recent, and history), each with a different visual idiom. A usability test of the proposed technique showed promising results.

Presenter: João Moreira

15:00 – 15:20
Evaluating Interactive Comparison Techniques in a Multiclass Density Map for Visual Crime Analytics
Lukas Svicarovic, Denis Parra, María Jesús Lobo


Abstract: Techniques for presenting objects spatially via density maps have been thoroughly studied, but there is a lack of research on how to display this information in the presence of several classes, i.e., multiclass density maps. Moreover, there is even less research on how to design an interactive visualization for comparison tasks on multiclass density maps. One application domain which requires this type of visualization for comparison tasks is crime analytics, and the lack of research in this area results in ineffective visual designs. To fill this gap, we study four types of techniques to compare multiclass density maps, using car theft data. The interactive techniques studied are swipe, translucent overlay, magic lens, and juxtaposition. The results of a user study (N=32) indicate that juxtaposition yields the worst performance to compare distributions, whereas swipe and magic lens perform the best in terms of time needed to complete the experiment. Our research provides empirical evidence on how to design interactive idioms for multiclass density spatial data, and it opens a line of research for other domains and visual tasks.

Presenter: Denis Parra

15:20 – 15:40
Discussion Flows: An Interactive Visualization for Analyzing Engagement in Multi-Party Meetings
Tao Wang, Mandy Keck, Zana Vosough


Abstract: Engagement in multi-party meetings is a key indicator of meeting outcomes. Poor attendee involvement can hinder progress and hurt team cohesion. Thus, there is a strong motivation for organizations to better understand what happens in meetings and improve upon their experience. However, analyzing multi-party meetings is a challenging task, as one needs to consider both verbal exchanges and meeting dynamics among speakers. There is currently a lack of support for these unique tasks. In this paper, we present a new visual approach to help analyze multi-party meetings in industry settings: Discussion Flows, a multi-level interactive visualization tool. Its glyph-based overview allows effortless comparison of overall interactions among different meetings, whereas the individual meeting view uses flow diagrams to convey the relative participation of different speakers throughout the meeting agenda at different levels of detail. We demonstrate our approach with meeting recordings from an open-source dialogue corpus and use them as the benchmark dataset.

Presenter: Tao Wang


15:40 – 16:00 BREAK

16:00 – 17:15
Full Papers 12: Design Guidelines  
Chair: Alfie Abdul-Rahman
16:00 – 16:25
Design Patterns and Trade-Offs in Responsive Visualization for Communication
Hyeok Kim, Dominik Moritz, Jessica Hullman


Abstract: Increased access to mobile devices motivates the need to design communicative visualizations that are responsive to varying screen sizes. However, relatively little design guidance or tooling is currently available to authors. We contribute a detailed characterization of responsive visualization strategies in communication-oriented visualizations, identifying 76 total strategies by analyzing 378 pairs of large screen (LS) and small screen (SS) visualizations from online articles and reports. Our analysis distinguishes between the Targets of responsive visualization, referring to what elements of a design are changed, and Actions, representing how targets are changed. We identify key trade-offs related to authors’ need to maintain graphical density, referring to the amount of information per pixel, while also maintaining the “message” or intended takeaways for users of a visualization. We discuss implications of our findings for future visualization tool design to support responsive transformation of visualization designs, including requirements for automated recommenders for communication-oriented responsive visualizations.

Presenter: Hyeok Kim

16:25 – 16:50
ClusterSets: Optimizing Planar Clusters in Categorical Point Data
Jakob Geiger, Sabine Cornelsen, Jan-Henrik Haunert, Philipp Kindermann, Tamara Mchedlidze, Martin Nöllenburg, Yoshio Okamoto, Alexander Wolff


Abstract: In geographic data analysis, one is often given point data of different categories (such as facilities of a university categorized by department). Drawing upon recent research on set visualization, we want to visualize category membership by connecting points of the same category with visual links. Existing approaches that follow this path usually insist on connecting all members of a category, which may lead to many crossings and visual clutter. We propose an approach that avoids crossings between connections of different categories completely. Instead of connecting all data points of the same category, we subdivide categories into smaller, local clusters where needed. We do a case study comparing the legibility of drawings produced by our approach and those by existing approaches. In our problem formulation, we are additionally given a graph G on the data points whose edges express some sort of proximity. Our aim is to find a subgraph G’ of G with the following properties: (i) edges connect only data points of the same category, (ii) no two edges cross, and (iii) the number of connected components (clusters) is minimized. We then visualize the clusters in G’. For arbitrary graphs, the resulting optimization problem, Cluster Minimization, is NP-hard (even to approximate). Therefore, we introduce two heuristics. We do an extensive benchmark test on real-world data. Comparisons with exact solutions indicate that our heuristics do astonishingly well for certain relative-neighborhood graphs.

Presenter: Philipp Kindermann

16:50 – 17:15
Optimal Axes for Data Value Estimation in Star Coordinates and Radial Axes Plots
Manuel Rubio-Sánchez, Dirk Joachim Lehmann, Alberto Sanchez, Jose Luis Rojo-Álvarez


Abstract: Radial axes plots are projection methods that represent high-dimensional data samples as points on a two-dimensional plane. These techniques define mappings through a set of axis vectors, each associated with a data variable, which users can manipulate interactively to create different plots and analyze data from multiple points of view. However, updating the direction and length of an axis vector is far from trivial. Users must consider the data analysis task, domain knowledge, the directions in which values should increase, the relative importance of each variable, or the correlations between variables, among other factors. Another issue is the difficulty to approximate high-dimensional data values in the two-dimensional visualizations, which can hamper searching for data with particular characteristics, analyzing the most common data values in clusters, inspecting outliers, etc. In this paper we present and analyze several optimization approaches for enhancing radial axes plots regarding their ability to represent high-dimensional data values. The techniques can be used not only to approximate data values with greater accuracy, but also to guide users when updating axis vectors or extending visualizations with new variables, since they can reveal poor choices of axis vectors. The optimal axes can also be included in nonlinear plots. In particular, we show how they can be used within RadViz to assess the quality of a variable ordering. The in-depth analysis carried out is useful for visualization designers developing radial axes techniques, or planning to incorporate axes into other visualization methods.

Presenter: Manuel Rubio-Sánchez


16:00 – 17:40
STARs 3: Scalar Field Topology and Astrophysics  
Chair: Michaël J. Aupetit
16:00 – 16:50
Scalar Field Comparison with Topological Descriptors: Properties and Applications for Scientific Visualization
Lin Yan, Talha Bin Masood, Raghavendra Sridharamurthy, Farhan Rasheed, Vijay Natarajan, Ingrid Hotz, Bei Wang


Abstract: In topological data analysis and visualization, topological descriptors such as persistence diagrams, merge trees, contour trees, Reeb graphs, and Morse–Smale complexes play an essential role in capturing the shape of scalar field data. We present a state-of-the-art report on scalar field comparison using topological descriptors. We provide a taxonomy of existing approaches based on visualization tasks associated with three categories of data: single fields, time-varying fields, and ensembles. These tasks include symmetry detection, periodicity detection, key event/feature detection, feature tracking, clustering, and structure statistics. Our main contributions include the formulation of a set of desirable mathematical and computational properties of comparative measures and the classification of visualization tasks and applications that are enabled by these measures.

Presenter: Lin Yan

16:50 – 17:40
Visualization in Astrophysics: Developing New Methods, Discovering Our Universe, and Educating the Earth
Fangfei Lan, Michael Young, Lauren Anderson, Anders Ynnerman, Alexander Bock, Michelle A. Borkin, Angus G. Forbes, Juna Kollmeier, Bei Wang


Abstract: We present a state-of-the-art report on visualization in astrophysics. We survey representative papers from both astrophysics and visualization and provide a taxonomy of existing approaches based on data analysis tasks. The approaches are classified into five categories: data wrangling, data exploration, feature identification, object reconstruction, and education and outreach. Our unique contribution is to combine the diverse viewpoints from both astronomers and visualization experts to identify challenges and opportunities for visualization in astrophysics. The main goal is to provide a reference point to bring modern data analysis and visualization techniques to the rich datasets in astrophysics.

Presenter: Fangfei Lan


17:40 – 18:00 BREAK


Friday, 18 June, 2021

09:00 – 10:40
Full Papers 13: Temporal Data and Animation  
Chair: Jürgen Bernard
09:00 – 09:25
AutoClips: An Automatic Approach to Video Generation from Data Facts
Danqing Shi, Fuling Sun, Xinyue Xu, Xingyu Lan, David Gotz, Nan Cao


Abstract: Data videos, a storytelling genre that visualizes data facts with motion graphics, are gaining increasing popularity among data journalists, non-profits, and marketers to communicate data to broad audiences. However, crafting a data video is often time-consuming and requires domain knowledge in areas such as data visualization, animation design, and screenwriting. Existing authoring tools usually enable users to edit and compose a set of templates manually, which still costs considerable human effort. To further lower the barrier of creating data videos, this work introduces a new approach, AutoClips, which can automatically generate data videos given the input of a sequence of data facts. We built AutoClips through two stages. First, we constructed a fact-driven clip library where we mapped ten data facts to potential animated visualizations respectively by analyzing 230 online data videos and conducting interviews. Next, we constructed an algorithm that generates data videos from data facts through three steps: selecting the optimal clip for each data fact, arranging the clips into a coherent video, and optimizing the duration of the video. The results from two user studies indicated that the data videos generated by AutoClips are comprehensible, engaging, and of comparable quality to human-made videos.

Presenter: Danqing Shi

09:25 – 09:50
Animated Presentation of Static Infographics with InfoMotion
Yun Wang, Yi Gao, He Huang, Weiwei Cui, Haidong Zhang, Dongmei Zhang


Abstract: By displaying visual elements logically in temporal order, animated infographics can help readers better understand layers of information expressed in an infographic. While many techniques and tools target the quick generation of static infographics, few support animation designs. We propose InfoMotion, which automatically generates animated presentations of static infographics. We first conduct a survey to explore the design space of animated infographics. Based on this survey, InfoMotion extracts graphical properties of an infographic to analyze the underlying information structures; then, animation effects are applied to the visual elements in temporal order to present the infographic. The generated animations can be used in data videos or presentations. We demonstrate the utility of InfoMotion with two example applications, including mixed-initiative animation authoring and animation recommendation. To further understand the quality of the generated animations, we conduct a user study to gather subjective feedback on the animations generated by InfoMotion.

Presenter: Yun Wang

09:50 – 10:15
Uncertainty-aware Visualization of Regional Time Series Correlation in Spatio-temporal Ensembles
Marina Evers, Karim Huesmann, Lars Linsen


Abstract: Given a time-varying scalar field, the analysis of correlations between different spatial regions, i.e., the linear dependence of time series within these regions, provides insights into the structural properties of the data. In this context, regions are connected components of the spatial domain with high time series correlations. The detection and analysis of such regions is often performed globally, which requires pairwise correlation computations that are quadratic in the number of spatial data samples. Thus, operations based on all pairwise correlations are computationally demanding, especially when dealing with ensembles that model the uncertainty in the spatio-temporal phenomena using multiple simulation runs. We propose a two-step procedure: In the first step, we map the spatial samples to a 3D embedding based on a pairwise correlation matrix computed from the ensemble of time series. The 3D embedding allows for a one-to-one mapping to a 3D color space such that the outcome can be visually investigated by rendering the colors for all samples in the spatial domain. In the second step, we generate a hierarchical image segmentation based on the color images. From then on, we can visually analyze correlations of regions at all levels of the hierarchy within an interactive setting, which includes the uncertainty-aware analysis of the regions' time series correlations and respective time lags.

Presenter: Karim Huesmann
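The first step of the procedure above can be sketched as follows: compute the pairwise correlation matrix, embed the samples in 3D from the induced dissimilarities, and normalize the embedding to an RGB color per sample. Classical MDS stands in here for the paper's embedding method, and all data and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
series = rng.normal(size=(50, 30))   # 50 spatial samples, 30 time steps each

C = np.corrcoef(series)              # 50x50 pairwise correlation matrix
D = 1.0 - C                          # dissimilarity: small for correlated pairs

# classical MDS: double-center the squared dissimilarities, take top 3 eigenvectors
J = np.eye(50) - np.ones((50, 50)) / 50
B = -0.5 * J @ (D ** 2) @ J
w, v = np.linalg.eigh(B)             # eigenvalues in ascending order
emb = v[:, -3:] * np.sqrt(np.maximum(w[-3:], 0.0))   # 3D embedding

# normalize each embedding axis to [0, 1] so it maps directly to an RGB color
rgb = (emb - emb.min(0)) / (emb.max(0) - emb.min(0))
```

Rendering `rgb` at each sample's spatial location then lets highly correlated regions appear as similarly colored areas, which the second step segments hierarchically.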

10:15 – 10:40
TourVis: Narrative Visualization of Multi-Stage Bicycle Races
Marta Fort, Jose Díaz, Pere-Pau Vázquez


Abstract: There are many multiple-stage racing competitions in various sports such as swimming, running, or cycling. The wide availability of affordable tracking devices facilitates monitoring the positions of all participants throughout a race, even for non-professional contests. Getting real-time information about contenders is useful, but it also unleashes the possibility of creating more complex visualization systems that ease the understanding of the behavior of all participants during a single stage or throughout the whole competition. In this paper we focus on bicycle races, which are highly popular, especially in Europe, with the Tour de France as their greatest exponent. Current visualizations from TV broadcasting or real-time tracking websites are useful for understanding the current stage status, up to a certain extent. Unfortunately, no system yet exists that visualizes a whole multi-stage contest in such a way that users can interactively explore the relevant events of a single stage (e.g. breakaways, groups, virtual leadership…), as well as the full competition. In this paper, we present an interactive system that is useful both for aficionados and professionals to visually analyze the development of multi-stage cycling competitions.

Presenter: Pere-Pau Vázquez


09:00 – 10:40
Short Papers 4: Information Visualization  
Chair: Manuela Waldner
09:00 – 09:20
TaskVis: Task-oriented Visualization Recommendation
Leixian Shen, Enya Shen, Zhiwei Tai, Yiran Song, Jianmin Wang


Abstract: General visualization recommendation systems typically make design decisions for a dataset automatically. However, these systems are only able to prune meaningless visualizations but fail to recommend targeted results. In this paper, we contribute TaskVis, a task-oriented visualization recommendation approach with detailed modeling of the user's analysis task. We first summarize a task base of 18 analysis tasks, derived from a survey of both academia and industry. On this basis, we further maintain a rule base, which extends empirical wisdom with our targeted modeling of analysis tasks. Inspired by Draco, we enumerate candidate visualizations through answer set programming. After visualization generation, TaskVis supports four ranking schemes according to chart complexity and the coverage of the user's columns and tasks of interest. In two user studies, we found that TaskVis reflects the user's preferences well and strikes a good balance between automation and the user's intent.

Presenter: Leixian Shen

09:20 – 09:40
Toward an Interactive Voronoi Treemap for Manual Arrangement and Grouping
Ala Abuthawabeh, Michael Aupetit


Abstract: Interactive spatial arrangement and grouping (A&G) of images is a critical step of the sense-making process. We argue that to support A&G tasks, a visual encoding idiom should avoid clutter, show groups explicitly, and maximize the use of space while allowing free positioning. None of the existing interactive idioms supporting A&G tasks optimizes all these criteria at once. We propose and implement an interactive Voronoi treemap for A&G that fulfills all these requirements. The cells representing groups or objects can be dragged or clicked to arrange objects and groups and to create, merge, split, expand, or collapse groups. We present a usage scenario for an art quiz game and a comparative analysis of our approach to the recent Piling.js library for a categorization task of HiC data images. We discuss limitations and future work.

Presenter: Michael Aupetit

09:40 – 10:00
A Multilevel Approach for Event-Based Dynamic Graph Drawing
Alessio Arleo, Silvia Miksch, Daniel Archambault


Abstract: The timeslice is the predominant method for drawing and visualizing dynamic graphs. However, when nodes and edges have real coordinates along the time axis, it becomes difficult to organize them into discrete timeslices, without a loss of temporal information due to projection. Event-based dynamic graph drawing rejects the notion of a timeslice and allows each node and edge to have its own real-valued time coordinate. Nodes are represented as trajectories of adaptive complexity that are drawn directly in the three-dimensional space-time cube (2D + t). Existing work has demonstrated clear advantages for this approach, but these advantages come at a running time cost. In response to this scalability issue, we present MultiDynNoS, the first multilevel approach for event-based dynamic graph drawing. We consider three operators for coarsening and placement, inspired by Walshaw, GRIP, and FM3, which we couple with an event-based graph drawing algorithm. We evaluate our approach on a selection of real graphs, showing that it outperforms timeslice-based and existing event-based techniques.

Presenter: Alessio Arleo
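The coarsening operators mentioned above follow the generic multilevel recipe: repeatedly merge matched node pairs into coarse nodes, lay out the small graph, then refine back down. The sketch below shows one level of greedy edge-matching coarsening; it illustrates the general idea only and is not the MultiDynNoS operators themselves.

```python
def coarsen(edges, n):
    """One level of graph coarsening via greedy edge matching: each matched
    pair of nodes is merged into a single coarse node (a generic multilevel
    sketch, not the paper's Walshaw/GRIP/FM3-inspired operators)."""
    matched = [-1] * n
    for u, v in edges:                       # greedily match unmatched endpoints
        if u != v and matched[u] == -1 and matched[v] == -1:
            matched[u], matched[v] = v, u
    coarse_id, next_id = [-1] * n, 0
    for u in range(n):                       # merge each matched pair into one id
        if coarse_id[u] == -1:
            coarse_id[u] = next_id
            if matched[u] != -1:
                coarse_id[matched[u]] = next_id
            next_id += 1
    coarse_edges = {(min(coarse_id[u], coarse_id[v]), max(coarse_id[u], coarse_id[v]))
                    for u, v in edges if coarse_id[u] != coarse_id[v]}
    return sorted(coarse_edges), next_id

# a path 0-1-2-3 coarsens to two super-nodes joined by one edge
print(coarsen([(0, 1), (1, 2), (2, 3)], 4))  # → ([(0, 1)], 2)
```

Applying `coarsen` repeatedly yields the graph hierarchy on which a multilevel layout, such as the event-based one in the paper, can place coarse nodes first and then refine.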

10:00 – 10:20
Selective Angular Brushing of Parallel Coordinate Plots
Raphael Sahann, Ivana Gajic, Torsten Moeller, Johanna Schmidt


Abstract: Parallel coordinates are an established technique to visualize multivariate data. Since these graphs are generally hard to read, we need interaction techniques to judge them accurately. Adding to the existing brushing techniques used in parallel coordinate plots, we present a triangular selection that highlights lines with a single click-and-drag mouse motion. Our selection starts by clicking on an axis and dragging the mouse away to select different ranges of lines. The position of the mouse determines the angle and the scope of the selection. We refined the interaction by running and adapting our method in two small user studies and present the most intuitive version to use.

Presenter: Raphael Sahann
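One plausible reading of the triangular selection described above: the click point on an axis is the wedge's vertex, and the mouse position sets the wedge's opening angle; lines whose segment to the next axis falls inside the wedge are selected. The function, its parameters, and the unit axis gap are hypothetical illustrations, not the authors' implementation.

```python
import math

def angular_brush(next_axis_values, y_click, mouse_x, mouse_y, axis_gap=1.0):
    """Select polylines between two adjacent parallel-coordinate axes whose
    segment from the clicked point lies inside the wedge spanned by the
    horizontal and the drag direction (a simplified sketch)."""
    theta = math.atan2(mouse_y - y_click, mouse_x)   # drag angle from click point
    lo, hi = sorted((0.0, theta))                    # wedge between horizontal and drag
    selected = []
    for k, y in enumerate(next_axis_values):
        seg = math.atan2(y - y_click, axis_gap)      # angle of the line's segment
        if lo <= seg <= hi:
            selected.append(k)
    return selected

# dragging up-right from y=0 selects the line rising gently to 0.5
print(angular_brush([0.5, 2.0, -1.0], y_click=0.0, mouse_x=1.0, mouse_y=1.0))  # → [0]
```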

10:20 – 10:40
Algorithmic Improvements on Hilbert and Moore Treemaps for Visualization of Large Tree-structured Datasets
Willy Scheibel, Christopher Weyand, Joseph Bethge, Jürgen Döllner


Abstract: Hilbert and Moore treemaps are based on the same-named space-filling curves to lay out tree-structured data for visualization. A main component of both is a partitioning subroutine, whose algorithmic complexity poses problems when scaling to industry-sized datasets. Further, the subroutine allows for different optimization criteria that result in different layout decisions. This paper proposes conceptual and algorithmic improvements to this partitioning subroutine. Two measures for the quality of partitioning are proposed, resulting in the min-max and min-variance optimization tasks. For both tasks, linear-time algorithms are presented that find an optimal solution. The implementation variants are evaluated with respect to layout metrics and run-time performance against a previously available greedy approach. The results show significantly improved run time and no deterioration in layout metrics, suggesting effective use of Hilbert and Moore treemaps for datasets with millions of nodes.

Presenter: Willy Scheibel
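As an illustration of the min-max criterion mentioned above, the sketch below finds the contiguous two-way split of a weight sequence that minimizes the larger part sum, in one linear pass over prefix sums. It is a simplified stand-in for the paper's partitioning subroutine, which handles the general case along the curve.

```python
def minmax_split(weights):
    """Split a sequence of node weights into two contiguous parts so that the
    larger part sum is minimized (min-max criterion); one linear-time pass."""
    total = sum(weights)
    prefix = 0
    best_i, best_cost = 1, float("inf")
    for i in range(1, len(weights)):
        prefix += weights[i - 1]               # sum of the left part [0, i)
        cost = max(prefix, total - prefix)     # min-max objective for this split
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i, best_cost

# splitting [3, 1, 4, 1, 5] after index 3 balances the parts as 8 vs 6
print(minmax_split([3, 1, 4, 1, 5]))  # → (3, 8)
```

A min-variance variant would swap the `max(...)` objective for the variance of the part sums; both admit the same single-pass structure.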


10:40 – 11:00 BREAK

11:00 – 12:05
Dirk Bartz Prize: Talks and Awards  
Chair: Steffen Oeltze-Jafra, Renata Raidou
11:00 – 11:10
Opening
Steffen Oeltze-Jafra, Renata Georgia Raidou
11:10 – 11:25
Visual Analysis of Tissue Images at Cellular Level
Antonios Somarakis, Marieke Erica Ijsselsteijn, Boyd Kenkhuis, Vincent van Unen, Sietse Luk, Frits Koning, Louise van der Weerd, Noel de Miranda, Boudewijn Lelieveldt, Thomas Höllt


Abstract: The detailed analysis of tissue composition is crucial for the understanding of tissue functionality. For example, the location of immune cells related to a tumour area is highly correlated with the effectiveness of immunotherapy. Therefore, experts are interested in the presence of cells with specific characteristics as well as the spatial patterns they form. Recent advances in single-cell imaging modalities, producing high-dimensional, high-resolution images, enable the analysis of both of these features. However, extracting useful insight on tissue functionality from these high-dimensional images poses serious and diverse challenges to data analysis. We have developed an interactive, data-driven pipeline covering the main analysis challenges experts face, from the preprocessing of images via the exploration of tissue samples to the comparison of cohorts of samples. All parts of our pipeline have been developed in close collaboration with domain experts and are already a vital part of their daily analysis routine.

Presenter: Antonios Somarakis

11:25 – 11:40
Visual Assistance in Clinical Decision Support
Juliane Müller, Mario Cypko, Alexander Oeser, Matthäus Stöhr, Veit Zebralla, Stefanie Schreiber, Susanne Wiegand, Andreas Dietz, Steffen Oeltze-Jafra


Abstract: Clinical decision-making for complex diseases such as cancer aims at finding the right diagnosis, optimal treatment or best aftercare for a specific patient. The decision-making process is very challenging due to the distributed storage of patient information entities in multiple hospital information systems, the required inclusion of multiple clinical disciplines with their different views of disease and therapy, and the multitude of available medical examinations, therapy options and aftercare strategies. Clinical Decision Support Systems (CDSS) address these difficulties by presenting all relevant information entities in a concise manner and providing a recommendation based on interdisciplinary disease- and patient-specific models of diagnosis and treatment. This work summarizes our research on visual assistance for therapy decision-making. We aim at supporting the preparation and implementation of expert meetings discussing cancer cases (tumor boards) and the aftercare consultation. In very recent work, we started to address the generation of models underlying a CDSS. The developed solutions combine state-of-the-art interactive visualizations with methods from statistics, machine learning and information organization.

Presenter: Juliane Müller

11:40 – 11:55
Visual exploration of intracranial aneurysm blood flow adapted to the clinical researcher
Benjamin Behrendt, Wito Engelke, Philipp Berg, Oliver Beuing, Ingrid Hotz, Bernhard Preim, Sylvia Saalfeld


Abstract: Rupture risk assessment is key to devising patient-specific treatment plans for cerebral aneurysms. To understand and predict the development of aneurysms and other vascular diseases over time, both hemodynamic flow patterns and their effect on the vessel surface need to be analyzed. Flow structures close to the vessel wall often correlate directly with local changes in surface parameters, such as pressure or wall shear stress. However, especially for the identification of specific blood flow characteristics that cause conspicuous local parameters on the vessel surface, like elevated pressure values, an interactive analysis tool is missing. In order to find meaningful structures in the entirety of the flow, the data has to be filtered based on the respective explorative aim. Thus, we present a combination of visualization, filtering and interaction techniques for explorative analysis of blood flow with a focus on the relation of local surface parameters and underlying flow structures. In combination with a filtering-based approach, we propose the use of evolutionary algorithms to reduce the overhead of computing pathlines that do not contribute to the analysis, while simultaneously reducing undersampling artifacts. We present clinical cases to demonstrate the benefits of both our filter-based and evolutionary approach and showcase its potential for patient-specific treatment plans.

Presenter: Benjamin Behrendt

11:55 – 12:05
Awards and Closing
Steffen Oeltze-Jafra, Renata Georgia Raidou

11:00 – 12:40
STARs 4: Applications: From Music to Medical Imaging  
Chair: Bernhard Preim
11:00 – 11:50
A Survey on Visualizations for Musical Data
Richard Khulusi, Jakob Kusnick, Christofer Meinecke, Christina Gillmann, Josef Focht, Stefan Jänicke


Abstract: Digital methods are increasingly applied to store, structure and analyse vast amounts of musical data. In this context, visualization plays a crucial role, as it assists musicologists and non-expert users in data analysis and in gaining new knowledge. This survey focuses on this unique link between musicology and visualization. We classify 129 related works according to the visualized data types, and we analyse which visualization techniques were applied for certain research inquiries and to fulfill specific tasks. Next to scientific references, we take commercial music software and public websites into account that contribute novel concepts for visualizing musicological data. We encounter different aspects of uncertainty as major problems when dealing with musicological data and show how occurring inconsistencies are processed and visually communicated. Drawing from our overview in the field, we identify open challenges for research on the interface of musicology and visualization to be tackled in the future.

Presenter: Jakob Kusnick

11:50 – 12:40
Uncertainty-aware Visualization in Medical Imaging – A Survey
Christina Gillmann, Dorothee Saur, Thomas Wischgoll, Gerik Scheuermann


Abstract: Medical imaging (image acquisition, image transformation, and image visualization) is a standard tool for clinicians in order to make diagnoses, plan surgeries, or educate students. Each of these steps is affected by uncertainty, which can highly influence the decision-making process of clinicians. Visualization can help in understanding and communicating these uncertainties. In this manuscript, we aim to summarize the current state-of-the-art in uncertainty-aware visualization in medical imaging. Our report is based on the steps involved in medical imaging as well as its applications. Requirements are formulated to examine the considered approaches. In addition, this manuscript shows which approaches can be combined to form uncertainty-aware medical imaging pipelines. Based on our analysis, we are able to point to open problems in uncertainty-aware medical imaging.

Presenter: Christina Gillmann


12:40 – 14:00 BREAK

14:00 – 15:40
Capstone, Awards and Closing  
Chair: Renato Pajarola, Tobias Günther
14:00 – 14:05
Introduction
14:05 – 15:00
Slowing Down How We Think With Visualisations  
Yvonne Rogers

Abstract: Most visualisations and data science tools have been developed to speed up human cognition so that users can efficiently and rapidly draw conclusions from the emerging patterns and anomalies being shown from their datasets. A core UX technique is filtering, enabling the selection and switching on and off of various options, at the touch of a finger. However, the downside of this kind of 'speed-dial' interaction is that it often results in fixed ways of inspecting and 'seeing' data, preventing users from developing different ways of querying and exploring data. How can we design the UX side of visualisations to encourage other kinds of thinking out of the box? An alternative approach we have been developing is to deliberately design the UX to slow down users' thinking. In particular, we have been developing agents that can probe the user, make suggestions, and even contest their thinking at opportune times. While this approach may seem counter-intuitive, we suggest that for certain settings and tasks, it can encourage different lines of thinking, disrupting routinized problem-solving steps and facilitating more creativity. In so doing, our aim is to enable users to visualise more possibilities in their own minds when interacting with external visualisations. In my talk, I will describe our recent research into how to design the UX to support a slower way of thinking.

Presenter: Yvonne Rogers
Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC and a deputy head of the Computer Science department at University College London. Her research interests are in the areas of interaction design, human-computer interaction and ubiquitous computing. A central theme of her work is concerned with designing interactive technologies that augment humans. A current focus of her research is on human-data interaction and human-centered AI. Central to her work is a critical stance towards how visions, theories and frameworks shape the fields of HCI, cognitive science and Ubicomp. She has been instrumental in promulgating new theories (e.g., external cognition), alternative methodologies (e.g., in-the-wild studies) and far-reaching research agendas (e.g., "Being Human: HCI in 2020"). She has published over 250 articles, including two monographs, "HCI Theory: Classical, Modern and Contemporary" and "Research in the Wild". She is a fellow of the ACM, BCS and the ACM CHI Academy. She was also awarded a Microsoft Research Outstanding Collaborator Award in 2016 and an EPSRC dream fellowship concerned with rethinking the relationship between ageing, computing and creativity.

15:00 – 15:40
Awards and Closing