
Virtual Reality Application for Exploration and Data Communication of Artificial Neural Networks
Neural Networks in VR

1. Introduction
Software systems and components increasingly rely on machine learning methods such as Artificial Neural Networks (ANNs) to provide intelligent functionality. Therefore, software developers and machine learning users should have a basic understanding of such methods. However, since ANNs are complex and thus challenging to grasp, novel visualization approaches can contribute to better comprehension. We conducted an online survey to identify use cases and requirements for visualizing ANNs. Based on our results, we designed and implemented an ANN visualization in virtual reality (VR) specifically targeted at machine learning users. Our approach is particularly suitable for teaching purposes and for machine learning novices or non-experts who want to get an impression of the general functionality of neural networks.
Developer / Researcher (TU Kaiserslautern) - Dirk Queck
Researcher (DLR) - Annika Wohlan
Researcher (DLR) - Meike Schaller

2D-Visualization of Multilayer Neural Network
2. Requirement Engineering
The main requirements emerged from two sources: an expert survey and a literature review. Requirements named by both sources were prioritized and translated into application features. The following figure shows the requirements identified in the literature and in the expert survey; from these, we determined the intersection of relevant requirements based on the frequency of each aspect.
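As an illustration, this prioritization step could be computed roughly as sketched below; the requirement items and frequencies are placeholders, not the actual survey or literature data.

```python
from collections import Counter

# Placeholder requirement mentions from the expert survey and the literature
# review; the real items and counts differ from these examples.
survey_mentions = ["layer overview", "show weights", "free navigation",
                   "layer overview", "show activations"]
literature_mentions = ["layer overview", "show activations", "show weights",
                       "compare architectures", "layer overview"]

survey_counts = Counter(survey_mentions)
literature_counts = Counter(literature_mentions)

# Requirements named by both sources, prioritized by combined frequency.
intersection = set(survey_counts) & set(literature_counts)
prioritized = sorted(intersection,
                     key=lambda r: survey_counts[r] + literature_counts[r],
                     reverse=True)
print(prioritized)  # e.g. ['layer overview', 'show weights', 'show activations']
```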

3. Architecture

Due to the complexity of neural networks and the potential for cognitive overload, we decided to split the information across three information spaces (1. Main Area, 2. CNN Info, 3. General Info).
3.1 Main Area
The Main Area acts as a general information source to give an overview of CNNs. The representative model includes every common component of CNNs and shows the connections between the individual layers.

Concept 3D-model of CNN

3D-concept model of connection between the layers
These components are the Input Layer, Convolutional Layer, Pooling Layer, Fully-Connected Layer, and Output Layer. Each element is spatially separated from the others, and users can move around the model and between the individual layers. Neurons are represented as three-dimensional cubes. In addition, the Main Area contains all direct layer connections, weights, and biases.
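For illustration, a representative model containing exactly these components could be defined in TensorFlow/Keras as sketched below; the layer sizes and the 28x28 grayscale input shape are assumptions, not necessarily the network used in the application.

```python
import tensorflow as tf

# A small representative CNN with the components visualized in the Main Area:
# input, convolution, pooling, fully-connected, and output layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),                      # Input Layer
    tf.keras.layers.Conv2D(8, kernel_size=3, activation="relu"),   # Convolutional Layer
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # Pooling Layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),                  # Fully-Connected Layer
    tf.keras.layers.Dense(10, activation="softmax"),               # Output Layer
])
model.summary()
```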

Visualization of filters
3.2 CNN Info
The 02_CNN Info area is intended for all additional information. For each layer, this space explains how the mathematical model of the CNN works. Within the 02_CNN Info area, there are three sub-areas placed close to the corresponding 01_Main Area components, so any additional information on a layer has a local reference to it.
3.3 General ANN Info
03_General ANN Info contains additional information relevant to the understanding of deep learning and the history of ANNs. Moreover, the user is able to compare visualizations of different neural networks.
4. User Interaction
Input panel and 3D-Slider
Via the input panel (see figure), the user can select different input images for the CNN. Moreover, the user can show and hide the individual components of the model. In future work, this panel will also allow the user to upload different kinds of neural networks.

Input panel for main user interaction with the 3D-model
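Behind the panel, selecting an image amounts to running it through the trained network and collecting the per-layer activations that the scene displays. A minimal Python-side sketch, assuming a Keras model and a hypothetical image path:

```python
import numpy as np
import tensorflow as tf

def run_selected_image(model, image_path):
    """Load the image chosen on the input panel and compute the per-layer
    activations to visualize. Path and preprocessing are assumptions."""
    img = tf.keras.utils.load_img(image_path, color_mode="grayscale",
                                  target_size=(28, 28))
    x = tf.keras.utils.img_to_array(img)[np.newaxis] / 255.0

    # Build a model that returns the output of every layer at once.
    activation_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[layer.output for layer in model.layers])
    return activation_model.predict(x)

# activations = run_selected_image(model, "inputs/digit_7.png")  # hypothetical path
```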
The user can freely move through the whole environment via teleportation, and a few teleportation points highlight important interaction points (Figure 6). Near the input panel, users can interact with the 3D model in different ways to obtain detailed information about each layer. In the input layer, for example, the user can change the transparency of the neurons with a 3D slider to reveal the value of each input neuron.
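Conceptually, the slider maps each input neuron's value to the opacity of its cube. A minimal sketch of such a mapping; the value range and slider scaling are assumptions:

```python
def neuron_alpha(value, slider=1.0, v_min=0.0, v_max=255.0):
    """Map an input neuron's pixel value to a cube opacity (alpha).
    `slider` in [0, 1] is the 3D-slider position; the ranges are assumptions."""
    normalized = (value - v_min) / (v_max - v_min)   # scale value to 0..1
    return max(0.0, min(1.0, normalized * slider))   # 0 = transparent, 1 = opaque

print(neuron_alpha(128, slider=0.5))  # roughly 0.25
```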


5. CNN Data
After the CNN was modeled and trained, the trained parameters (weights, biases, filters) and the layer structure were transferred to the game engine. This was done in individual packages so that different file formats could be used. Essentially, all layers and parameters consist of one- or multi-dimensional arrays holding the respective trained parameter data. We used TensorFlow to extract the individual data and exported them in different output formats; every layer and parameter set was exported as a CSV or PNG file. The figure shows a schematic representation of the data transport process and the associated output formats.

Data export of the trained CNN
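A hedged sketch of such a parameter export with TensorFlow and NumPy; the layer naming, output directory, and CSV layout are assumptions, and the actual export scripts may differ.

```python
import os
import numpy as np

def export_parameters(model, out_dir="export"):
    """Write the trained weights and biases of every layer to CSV files for
    import into the game engine. File layout and naming are assumptions."""
    os.makedirs(out_dir, exist_ok=True)
    for layer in model.layers:
        for i, array in enumerate(layer.get_weights()):   # e.g. [kernel, bias]
            # Flatten multi-dimensional kernels into 2D so they fit a CSV table.
            flat = array.reshape(array.shape[0], -1) if array.ndim > 1 else array[np.newaxis, :]
            np.savetxt(os.path.join(out_dir, f"{layer.name}_{i}.csv"), flat, delimiter=",")

# export_parameters(model)  # `model` as in the earlier sketch (an assumption)
```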
In performance tests, we explored visualizing the 3D model with individual neurons for the convolutional and other layers. We found that an increasing number of neurons leads to a significant drop in the VR frame rate, and these performance drawbacks had a negative effect on the UX. The feature maps and pooling layers are therefore exported as image files.
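Following this decision, a feature map could be rasterized to a grayscale PNG roughly as follows; the normalization and file naming are assumptions.

```python
import numpy as np
from PIL import Image

def feature_map_to_png(activation, path):
    """Save one 2D feature map as an 8-bit grayscale PNG instead of spawning
    one cube per neuron. The min-max normalization is an assumption."""
    a = activation.astype("float32")
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)        # scale to 0..1
    Image.fromarray((a * 255).astype("uint8"), mode="L").save(path)

# feature_map_to_png(activations[1][0, :, :, 0], "export/conv_map_0.png")  # hypothetical
```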
6. Development Process
Iterative Incremental Model
The increments and features were defined through the requirement intersection shown above. We described every feature in small user stories. The individual functions of the features were developed in self-contained environments and were iteratively refined and tested. The figure shows a schematic representation of the development model.

General procedure model
References
1. J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding Neural Networks Through Deep Visualization. In ICML Workshop on Deep Learning, 2015.
2. N. Meissler, A. Wohlan, and N. Hochgeschwender. Visualizing Convolutional Neural Networks with Virtual Reality. In 25th ACM Symposium on Virtual Reality Software and Technology (VRST '19), November 12-15, 2019, Parramatta, NSW, Australia. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3359996.3364817
3. A. Schreiber and M. Bock. Visualization and Exploration of Deep Learning Networks in 3D and Virtual Reality. In HCI International 2019 - Posters. Springer International Publishing, 2019, 206–211. https://doi.org/10.1007/978-3-030-23528-4_29
4. J. Nielsen and R. Molich. Heuristic Evaluation of User Interfaces. In Proceedings of CHI '90. ACM, New York, NY, 1990, 249–256.