Summarizing Big Data using visualizations can be very challenging. Read this article to discover how you can apply the capabilities of Augmented Reality and Virtual Reality to the field of Big Data Visualization.
Integration with augmented and virtual reality
It is well known that the visual perception capabilities of the human brain are limited. Moreover, carrying out a visualization process on today's screens imposes high costs in both time and eye strain, which makes proper usage essential for image interpretation. Meanwhile, the market is being flooded with countless wearable devices as well as various display devices.
The term Augmented Reality was coined by Tom Caudell and David Mizell in 1992 to describe computer-generated data superimposed on the real world. However, Ivan Sutherland had already created the first AR/VR system in 1968: an optical see-through head-mounted display that could render simple three-dimensional models in real time. This invention was the predecessor of modern VR displays and AR helmets, which appear set to remain an established research and industrial area for the coming decade. Applications have already been found in the military, education, healthcare, industry and gaming. At the moment, the Oculus Rift helmet offers many opportunities for AR practice; in particular, it makes it possible to embed virtual content into the physical world, a direction in which William Steptoe has already done research. Using such devices for visualization might solve many issues, from a narrow visual angle to navigation and scaling. For example, a helmet offering a complete 360-degree view solves the viewing-angle problem; alternatively, the same result can be obtained with dedicated widescreen rooms, which by definition involve enormous budgets. By combining dynamic projection with interactive filtering, AR devices paired with motion-recognition tools might also solve a significant scaling problem, especially for multidimensional representations. This problem comes to the area from the field of architecture: designers specialized in 3D visualization work with flat projections to produce a visual model, yet the only way to present the final image is to move around it, so navigation inside the model is another influential issue.
From the Big Data visualization point of view, scaling is a significant issue, mainly caused by multidimensional systems in which one needs to delve into a branch of information to obtain a specific value or piece of knowledge. Unfortunately, it cannot be solved from a static point of view. Integration with motion-detection wearables would therefore greatly increase the usability of such a visualization system. For example, adding a Myo armband could be a key to interacting with visualized data in the most natural way, comparable to rummaging through a pencil case with one's fingers to find a sharpener.
However, the use of AR displays and helmets is also limited by specific characteristics of the human visual system, such as the field of view and conditions like scotomas and blind spots. Central vision is the most significant for human activities such as reading or driving: it is responsible for accurate vision in the direction of gaze and occupies most of the visual cortex, yet its retinal size is less than 1%. Furthermore, it captures only about two degrees of the visual field, which nonetheless remains the most important for text and object recognition. It is complemented by peripheral vision, which is responsible for events outside the center of gaze. Many researchers around the world are currently using VR and AR to train young professionals, develop new areas and analyze patient behavior.
Beyond well-known topics such as color blindness, the natural field of view and other physiological abnormalities, recent research by Israel Abramov et al. surveys physiological gender and age differences, grounded in the cerebral cortex and its large number of testosterone receptors, as a basis for variation in perception. The study focused mainly on the image projected onto the retina at the back of the eyeball and its processing by the visual system. The main reasons for these differences date back to prehistoric times: in forested African habitats, the distance for object detection and identification was limited, which may explain the higher acuity attained by males. Sex differences might also be related to different roles in the survival commune: under the hunter-gatherer hypothesis, males mainly hunted and therefore had to detect enemies and predators much faster. Moreover, there are significant gender differences for far and near vision: males have an advantage in far space, whereas females are much more sensitive to brightness and color changes, as well as to static objects in near space. We can conclude that male/female differences in sensory capacities are adaptive, but they should be considered in order to optimize represented and visualized data for end users. Additionally, there is a research area focusing on human eye-movement patterns during the perception of scenes and objects; these patterns can depend on factors ranging from cultural peculiarities to specific search tasks, and they are in high demand for Big Data visualization purposes.
Further studies should focus on applying ophthalmology and neurology to the development of new visualization tools. Such cross-disciplinary collaboration would support decision making in image-position selection, which is mainly related to the problem of significant information loss as the viewing angle is extended. Moreover, it is highly important to take into account current hardware quality and screen resolution in addition to the software side. There is also a need to improve multicore GPUs, to refine bus throughput between CPU and GPU, or even to offload computations wirelessly to cluster systems. Nevertheless, it remains important to discuss current visualization challenges in order to support future research.
Future research agenda and data visualization challenges
Visualized data can significantly improve an average user's understanding of the preselected information. In fact, people start exploring the world through their visual abilities from birth, and images are often easier to perceive than text. In the modern world, there is a clear evolution towards visual data representation and imagery experience. Moreover, visualization software is becoming ubiquitous and publicly available to ordinary users. As a result, visual objects are widely distributed, from social media to scientific papers, and the role of visualization when working with large amounts of data should therefore be reconsidered. In this section, we review important challenges and possible solutions related to the future agenda for Big Data visualization with AR and VR:
- Application development integration: In order to operate with visualized objects, it is necessary to create a new interactive system for the user. It should support actions such as scaling; navigating in visualized 3D space; selecting sub-spaces, objects, groups of visual elements (flow/path elements) and views; manipulating and placing; planning routes of view; and generating, extracting and collecting data based on the reviewed visualization. A novel system should allow multimodal control by voice and/or gestures to make it more intuitive for users. One of the main obstacles in this direction is that implementing effective gestural and voice interaction is not trivial: it requires a machine learning system and a set of basic intuitive gestures, both of which are still under research for general and more specific (e.g. medical) purposes.
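To make the multimodal-control idea concrete, the sketch below maps recognized gesture and voice events onto the kinds of visualization actions listed above. All names here (the lookup tables, `dispatch`, the event strings) are illustrative assumptions, not part of any existing AR toolkit:

```python
# Minimal sketch of a multimodal command dispatcher for a visualization
# system. Gesture/voice recognizers are assumed to emit plain event
# strings; every identifier here is hypothetical, for illustration only.

GESTURE_ACTIONS = {
    "pinch_out": "scale_up",
    "pinch_in": "scale_down",
    "swipe_left": "navigate_left",
    "point_and_hold": "select_object",
}
VOICE_ACTIONS = {
    "zoom in": "scale_up",
    "show details": "drill_down",
    "go back": "navigate_back",
}

def dispatch(event_type: str, payload: str) -> str:
    """Translate a raw gesture/voice event into a visualization action."""
    table = GESTURE_ACTIONS if event_type == "gesture" else VOICE_ACTIONS
    # Fall back to a no-op so unrecognized input never crashes the scene.
    return table.get(payload, "noop")

print(dispatch("gesture", "pinch_out"))   # scale_up
print(dispatch("voice", "show details"))  # drill_down
print(dispatch("voice", "mumble"))        # noop
```

Routing both modalities through one action vocabulary keeps the visualization logic independent of whichever recognizer produced the input.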
- Equipment and virtual interface: Certain equipment is necessary to implement such an interactive system in practice. Currently there are optical and video see-through head-mounted displays (HMDs) that merge virtual objects into the real scene view. Both share the following issues: distortion and resolution of the real scene; system delay; viewpoint matching; and engineering and cost factors. As for interaction, appropriate haptic feedback in an MR environment requires a framework that allows interaction through intuitive gestures. As discussed in the section Integration with Augmented and Virtual Reality, glove-based systems are mainly used for virtual object manipulation; the disadvantage of hand-tracking input is that it provides no tactile feedback. In summary, the interface should be redesigned or reinvented in order to simplify user interaction, and software engineers should create new approaches, principles and methods in user interface design to make all instruments easily accessible and intuitive to use.
- Tracking and recognition system: Objects and tools have to be tracked in virtual space. The position and orientation of virtual items are dynamic and have to be re-estimated throughout the presentation. Tracking head movement is another significant challenge; its aim is to avoid a mismatch between the real scene view and computer-generated objects. This challenge may be addressed with more flexible software platforms.
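As a rough illustration of the re-estimation step, the sketch below smooths a stream of noisy head-pose samples with an exponential filter, so a rendered overlay follows the head without jitter. The `(x, y, z)` pose format and the smoothing factor are assumptions made for illustration, not any tracker's actual API:

```python
# Minimal sketch: exponentially smoothing noisy head-pose readings so a
# rendered overlay is re-estimated every frame. The pose format (x, y, z)
# and the smoothing factor alpha are illustrative assumptions.

def smooth_pose(samples, alpha=0.5):
    """Return the filtered pose after processing all samples in order."""
    estimate = list(samples[0])
    for sample in samples[1:]:
        # Blend the new measurement with the running estimate per axis.
        estimate = [alpha * s + (1 - alpha) * e
                    for s, e in zip(sample, estimate)]
    return estimate

# Noisy readings oscillating around roughly (1.0, 2.0, 0.0).
readings = [(1.0, 2.0, 0.0), (1.2, 1.8, 0.1), (0.8, 2.2, -0.1)]
print(smooth_pose(readings))  # stays close to (1.0, 2.0, 0.0)
```

A higher `alpha` makes the overlay more responsive but jitterier; a lower one makes it smoother but laggier, which is exactly the delay trade-off mentioned under equipment issues.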
- Perception and cognition: Computers are fast, but even with neural networks they are still not as effective as the human brain. As mentioned in the section Integration with Augmented and Virtual Reality, human perception and cognition have their own characteristics and features, and it is vital that developers consider them during hardware and interface design for AR. In addition, the user's ability to recognize and understand the data is a central issue: tasks such as browsing and searching require a certain cognitive effort, and users may react differently to visualized objects depending on their personal and cultural backgrounds. Simplicity in information visualization therefore has to be achieved in order to avoid misperception and cognitive overload. Psychophysical studies would answer questions regarding perception and provide the opportunity to improve performance through motion prediction.
- Virtual and physical objects mismatch: In an Augmented Reality environment, virtual images are integrated with the real-world scenery at a fixed distance in the display, while the distance to real objects varies. Consequently, a mismatch of virtual and physical distances is unavoidable, and it may result in incorrect focus, contrast and brightness of virtual objects compared to real ones. The human eye can distinguish many levels of brightness, saturation and contrast, but most contemporary optical technologies cannot display all of them adequately. Moreover, potential optical illusions arise from conflicts between computer-generated and real-environment objects. Using modern equipment would mitigate this challenge.
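The focus mismatch described above can be quantified in diopters, the reciprocal of distance in meters, which is the standard optical measure of accommodation. The sketch below computes the mismatch between a display's fixed virtual image plane and a real object; the 2 m image-plane distance is an illustrative assumption, not a value for any specific headset:

```python
def focus_mismatch_diopters(virtual_plane_m: float, real_object_m: float) -> float:
    """Accommodation mismatch, in diopters, between the display's fixed
    virtual image plane and a real object the user is looking at."""
    return abs(1.0 / virtual_plane_m - 1.0 / real_object_m)

# Display with its image plane fixed at 2 m (illustrative assumption):
print(focus_mismatch_diopters(2.0, 0.5))   # object at arm's length -> 1.5 D
print(focus_mismatch_diopters(2.0, 10.0))  # distant object -> 0.4 D
```

Note the asymmetry: the mismatch grows quickly for near objects, which is why interaction at arm's length is the hardest case for fixed-focus displays.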
- Screen limitations: At the current level of technology, visualized information is presented mainly on screens; even a VR helmet is equipped with two displays. Unfortunately, because of the close-to-the-eye proximity, users can experience discomfort while working with it. This discomfort stems mainly from low display resolution and high graininess, which manufacturers should take into consideration for further improvement.
- Education: As this concept is relatively new, there is a need to specify the value of data visualization and its contribution to users' work. That value is not always obvious, which is why compelling showcase examples and publicly available tutorials can reveal the potential of AR and VR in visual analytics. Moreover, users need to be educated and trained for interaction with this evolving technology, and visual literacy skills should be improved to achieve high performance when working with visualized objects. A suitable guideline is the Visual Information-Seeking Mantra: overview first, zoom and filter, then details on demand.
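The mantra's three stages can be sketched over a toy dataset; the record fields, region names and threshold below are illustrative assumptions, not a real visualization API:

```python
# Sketch of the Visual Information-Seeking Mantra over a toy dataset:
# overview first, zoom and filter, then details on demand.
# Field names and thresholds are illustrative assumptions.

records = [
    {"region": "north", "sensor": "s1", "value": 12.0},
    {"region": "north", "sensor": "s2", "value": 48.5},
    {"region": "south", "sensor": "s3", "value": 7.2},
]

# 1. Overview first: one aggregate number per region.
def overview(rows):
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["value"]
    return totals

# 2. Zoom and filter: narrow down to one region above a threshold.
def zoom_and_filter(rows, region, min_value):
    return [r for r in rows if r["region"] == region and r["value"] >= min_value]

# 3. Details on demand: the full record for one selected item.
def details(rows, sensor):
    return next(r for r in rows if r["sensor"] == sensor)

print(overview(records))                      # {'north': 60.5, 'south': 7.2}
print(zoom_and_filter(records, "north", 20))  # only sensor s2 survives
print(details(records, "s2"))                 # full record for sensor s2
```

In an AR setting, the same three stages would simply be driven by gaze, gestures or voice instead of function calls.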
Despite all these challenges, the main benefit of implementing the MR approach is an improved human experience. Such visualization allows convenient access to huge amounts of data and provides views from different angles. Navigation is smooth and natural through tangible and verbal interaction. It also minimizes perceptual inaccuracy in data analysis and makes visualization powerful at conveying knowledge to the end user. Furthermore, it yields actionable insights that improve decision making.
In conclusion, the challenges of data visualization for AR and VR are associated not only with current technology but also with human-centric issues. Interestingly, some researchers are already working on combining such complex fields as massive data analysis, its visualization and the control of the visualized environment. It is worth noting that these factors should be taken into account simultaneously in order to achieve the best outcome for the established industrial field.