This study investigates how social and physical environments affect human wayfinding and locomotion behaviors in a virtual multi-level shopping mall. Participants were asked to locate a store inside the virtual building as efficiently as possible. We examined the effects of crowdedness, start floor, and trial number on wayfinding strategies, initial route choices, and locomotion behaviors. The results showed that crowdedness did not affect wayfinding strategies or initial route choices, but did affect locomotion in that participants in the high crowdedness condition were more likely to avoid crowds by moving close to the boundaries of the environment. The results also revealed that participants who started on the second floor were more likely to use the floor strategy than participants who started on the third floor, possibly because of the structure of the virtual building. These results suggest that both physical and social environments can influence multi-level indoor wayfinding.
Living in a disadvantaged neighborhood is associated with worse health and early mortality. Although many mechanisms may partially account for this effect, disadvantaged neighborhood environments are hypothesized to elicit stress and emotional responses that accumulate over time and influence physical and mental health. However, evidence for neighborhood effects on stress and emotion is limited due to methodological challenges. In order to address this question, we developed a virtual reality experimental model of neighborhood disadvantage and affluence and examined the effects of simulated neighborhoods on immediate stress and emotion. Exposure to neighborhood disadvantage resulted in greater negative emotion, less positive emotion, and more compassion, compared to exposure to affluence. However, the effect of virtual neighborhood environments on blood pressure and electrodermal reactivity depended on parental education. Participants from families with lower education exhibited greater reactivity to the disadvantaged neighborhood, while those from families with higher education exhibited greater reactivity to the affluent neighborhood. These results demonstrate that simulated neighborhood environments can elicit immediate stress reactivity and emotion, but the nature of physiological effects depends on sensitization to prior experience.
The role of affective states in learning has recently attracted considerable attention in education research. The accurate prediction of affective states can help increase the learning gain by incorporating targeted interventions that are capable of adjusting to changes in the individual affective states of students. Until recently, most work on the prediction of affective states has relied on expensive and stationary lab devices that are not well suited for classrooms and everyday use. Here, we present an automated pipeline capable of accurately predicting (AUC up to 0.86) the affective states of participants solving tablet-based math tasks using signals from low-cost mobile bio-sensors. In addition, we show that we can achieve a similar classification performance (AUC up to 0.84) by only using handwriting data recorded from a stylus while students solved the math tasks. Given the emerging digitization of classrooms and increased reliance on tablets as teaching tools, stylus data may be a viable alternative to bio-sensors for the prediction of affective states.
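The classification performance reported above is measured by ROC AUC. As a hedged illustration (not the study's actual pipeline), the sketch below computes AUC from hypothetical classifier scores via the Mann-Whitney formulation; the labels, scores, and function name are ours.

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive example outranks a random negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical affective-state labels (1 = target state) and model scores
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```

An AUC of 0.5 corresponds to chance-level ranking, so values such as the 0.84–0.86 reported above indicate that the scores separate the two affective classes well.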
Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation obtained better user experience, lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.
The collective behavior of human crowds often exhibits surprisingly regular patterns of movement. These patterns stem from social interactions between pedestrians such as when individuals imitate others, follow their neighbors, avoid collisions with other pedestrians, or push each other. While some of these patterns are beneficial and promote efficient collective motion, others can seriously disrupt the flow, ultimately leading to deadly crowd disasters. Understanding the dynamics of crowd movements can help urban planners manage crowd safety in dense urban areas and develop an understanding of dynamic social systems. However, the study of crowd behavior has been hindered by technical and methodological challenges. Laboratory experiments involving large crowds can be difficult to organize, and quantitative field data collected from surveillance cameras are difficult to evaluate. Nevertheless, crowd research has undergone important developments in the past few years that have led to numerous research opportunities. For example, the development of crowd monitoring based on the virtual signals emitted by pedestrians’ smartphones has changed the way researchers collect and analyze live field data. In addition, the use of virtual reality, and multi-user platforms in particular, has paved the way for new types of experiments. In this review, we describe these methodological developments in detail and discuss how these novel technologies can be used to deepen our understanding of crowd behavior.
Virtual reality (VR) experiments are increasingly employed because of their internal and external validity compared to real-world observation and laboratory experiments, respectively. VR is especially useful for geographic visualizations and investigations of spatial behavior. In spatial behavior research, VR provides a platform for studying the relationship between navigation and physiological measures (e.g., skin conductance, heart rate, blood pressure). Specifically, physiological measures allow researchers to address novel questions and constrain previous theories of spatial abilities, strategies, and performance. For example, individual differences in navigation performance may be explained by the extent to which changes in arousal mediate the effects of task difficulty. However, the complexities in the design and implementation of VR experiments can distract experimenters from their primary research goals and introduce irregularities in data collection and analysis. To address these challenges, the Experiments in Virtual Environments (EVE) framework includes standardized modules such as participant training with the control interface, data collection using questionnaires, the synchronization of physiological measurements, and data storage. EVE also provides the necessary infrastructure for data management, visualization, and evaluation. The present paper describes a protocol that employs the EVE framework to conduct navigation experiments in VR with physiological sensors. The protocol lists the steps necessary for recruiting participants, attaching the physiological sensors, administering the experiment using EVE, and assessing the collected data with EVE evaluation tools. Overall, this protocol will facilitate future research by streamlining the design and implementation of VR experiments with physiological sensors.
Investigating the interactions among multiple participants is a challenge for researchers from various disciplines, including the decision sciences and spatial cognition. With a local area network and dedicated software platform, experimenters can efficiently monitor the behavior of participants who are simultaneously immersed in a desktop virtual environment and digitalize the collected data. These capabilities allow for experimental designs in spatial cognition and navigation research that would be difficult (if not impossible) to conduct in the real world. Possible experimental variations include stress during an evacuation, cooperative and competitive search tasks, and other contextual factors that may influence emergent crowd behavior. However, such a laboratory requires maintenance and strict protocols for data collection in a controlled setting. While the external validity of laboratory studies with human participants is sometimes questioned, a number of recent papers suggest that the correspondence between real and virtual environments may be sufficient for studying social behavior in terms of trajectories, hesitations, and spatial decisions. In this article, we describe a method for conducting experiments on decision-making and navigation with up to 36 participants in a networked desktop virtual reality setup (i.e., the Decision Science Laboratory or DeSciL). This experiment protocol can be adapted and applied by other researchers in order to set up a networked desktop virtual reality laboratory.
Cognitive neuroscience has provided additional techniques for investigations of spatial and geographic thinking. However, the incorporation of neuroscientific methods still lacks the theoretical motivation necessary for the progression of geography as a discipline. Rather than reflecting a shortcoming of neuroscience, this weakness has developed from previous attempts to establish a positivist approach to behavioral geography. In this chapter, we will discuss the connection between the challenges of positivism in behavioral geography and the current drive to incorporate neuroscientific evidence. We will also provide an overview of research in geography and neuroscience. Here, we will focus specifically on large-scale spatial thinking and navigation. We will argue that research at the intersection of geography and neuroscience would benefit from an explanatory, theory-driven approach rather than a descriptive, exploratory approach. Future considerations include the extent to which geographers have the skills necessary to conduct neuroscientific studies, whether or not geographers should be equipped with these skills, and the extent to which collaboration between neuroscientists and geographers can be useful.
EVE is a framework for the setup, implementation, and evaluation of experiments in virtual reality. The framework aims to reduce repetitive and error-prone steps that occur during experiment setup while providing data management and evaluation capabilities. EVE aims to assist researchers who do not have specialized training in computer science. The framework is based on the popular platforms of Unity and MiddleVR. Database support, visualization tools, and scripting for R make EVE a comprehensive solution for research using VR. In this article, we illustrate the functions and flexibility of EVE in the context of an ongoing VR experiment called Neighbourhood Walk.
Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performances on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) is capable of predicting navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR that tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performances on dynamic tasks should also be assessed in order to avoid confounding skill with an HID and spatial knowledge acquisition.
We tell stories to save the past. Most of these stories today are experienced through reading texts, and we consequently are denied the visceral experience of the past even though we strive to recapture and animate lost worlds through our distinct senses. Virtual Plasencia is our highly realistic and interactive model of the Spanish medieval city of Plasencia. Virtual Plasencia offers dynamic new ways of storytelling via visual and auditory senses. By navigating the three-dimensional city simulation, users begin to experience the sights and sounds of daily life in a medieval city, meander its cobbled streets and contemplate its principal structures and residences, and observe human interactions from different (e.g., religious, personal, communal) points of view. Inside Virtual Plasencia, users encounter people and places in ways that cannot usually be achieved through traditional written narratives. The opportunity to observe historical events in loco represents a valuable new form of representation of the past.
One significant feature of urbanisation in the twenty-first century is the increase in large, complex and densely populated city quarters. Airports, shopping precincts, sports venues and cultural facilities increasingly combine with generic function buildings such as hotels, housing, businesses and offices to produce horizontal and vertical nodes in a city. The capacity of such city quarters to bring large numbers of people into proximity produces crowds of unprecedented complexity. The manner in which such crowds ‘behave’ in space by aggregating, disaggregating, flowing or stalling generate new kinds of urban experience that can be thrilling, bewildering, stressful or even threatening. In turn, this creates a set of complex challenges for architectural design and its capacity to understand human behaviour and crowd dynamics.
Signage systems are critical for communicating environmental information. Signage that is visible and properly located can assist individuals in making efficient navigation decisions during wayfinding. Drawing upon concepts from information theory, we propose a framework to quantify the wayfinding information available in a virtual environment. Towards this end, we calculate and visualize the uncertainty in the information available to agents for individual signs. In addition, we examine the influence of new signs on overall information (e.g., joint entropy, conditional entropy, mutual information). The proposed framework can serve as the backbone for an evaluation tool to help architects during different stages of the design process by analyzing the efficiency of the signage system.
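The information-theoretic quantities named above can be sketched for discrete distributions. The toy joint distributions below are ours (hypothetical binary cues conveyed by two signs), not taken from the proposed framework: a sign that duplicates another adds no information (high mutual information), whereas independent signs share none.

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """I(A;B) = H(A) + H(B) - H(A,B), from a joint table joint[i][j]."""
    pa = [sum(row) for row in joint]               # marginal of sign A
    pb = [sum(col) for col in zip(*joint)]         # marginal of sign B
    pab = [p for row in joint for p in row]        # flattened joint
    return entropy(pa) + entropy(pb) - entropy(pab)

# Hypothetical signs conveying a binary cue (e.g., "exit left/right"):
redundant = [[0.5, 0.0], [0.0, 0.5]]      # sign B always matches sign A
independent = [[0.25, 0.25], [0.25, 0.25]]  # sign B is unrelated to sign A
print(mutual_information(redundant))    # 1.0 bit shared
print(mutual_information(independent))  # 0.0 bits shared
```

In a signage analysis of this kind, a new sign with high mutual information relative to existing signs mostly repeats what agents already know, while low mutual information (and high conditional entropy reduction) marks a sign that resolves genuinely new uncertainty.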
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation-relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations.
Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have progressed our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population.
We present the results of a study that investigated the interaction of strategy and scale on search quality and efficiency for vista-scale spaces. The experiment was designed such that sighted participants were required to locate “invisible” objects whose locations were marked only with audio cues, thus enabling sight to be used for search coordination, but not for object detection. Participants were assigned to one of three conditions: a small indoor space (~20 m2), a medium-sized outdoor space (~250 m2), or a large outdoor space (~1000 m2), and the entire search for each participant was recorded either by a laser tracking system (indoor) or by GPS (outdoor). Results revealed a clear relationship between the size of space and search strategy. Individuals were likely to use ad-hoc methods in smaller spaces, but they were much more likely to search large spaces in a systematic fashion. In the smallest space, 21.5% of individuals used a systematic gridline search, but the rate increased to 56.2% for the medium-sized space, and 66.7% for the large-sized space. Similarly, individuals were much more likely to revisit previously found locations in small spaces, but avoided doing so in large spaces, instead devoting proportionally more time to search. Our results suggest that even within vista-scale spaces, perceived transport costs increase at a decreasing rate with distance, resulting in a distinct shift in exploration strategy type.
We investigated the interpolation of missing values in data that were fit by bidimensional regression models. This addresses a problem in spatial cognition research in which sketch maps are used to assess the veracity of spatial representations. In several simulations, we compared samples of different sizes with different numbers of interpolated coordinate pairs. A genetic algorithm was used in order to estimate parameter values. We found that artificial inflation in the fit of bidimensional regression models increased with the percent of interpolated coordinate pairs. Furthermore, samples with fewer coordinate pairs resulted in more inflation than samples with more coordinate pairs. These results have important implications for statistical models, especially those applied to the analysis of spatial data.
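For readers unfamiliar with bidimensional regression, the sketch below shows the Euclidean-similarity case (translation, rotation, and uniform scale) with a closed-form least-squares fit; the study itself used a genetic algorithm for parameter estimation, and the coordinate data here are hypothetical.

```python
def bidimensional_regression(src, dst):
    """Fit dst ~ translation + rotation + uniform scale of src (the
    Euclidean-similarity case), where the fitted map is
    u = b1 + a1*x - a2*y,  v = b2 + a2*x + a1*y.
    Returns (a1, a2, r_squared)."""
    n = len(src)
    mx = sum(x for x, _ in src) / n
    my = sum(y for _, y in src) / n
    mu = sum(u for u, _ in dst) / n
    mv = sum(v for _, v in dst) / n
    cs = [(x - mx, y - my) for x, y in src]   # centered source coords
    cd = [(u - mu, v - mv) for u, v in dst]   # centered target coords
    den = sum(x * x + y * y for x, y in cs)
    a1 = sum(x * u + y * v for (x, y), (u, v) in zip(cs, cd)) / den
    a2 = sum(x * v - y * u for (x, y), (u, v) in zip(cs, cd)) / den
    sse = sum((u - (a1 * x - a2 * y)) ** 2 + (v - (a2 * x + a1 * y)) ** 2
              for (x, y), (u, v) in zip(cs, cd))
    sst = sum(u * u + v * v for u, v in cd)
    return a1, a2, 1.0 - sse / sst

# A sketch map that is an exact rotated, scaled, shifted copy of the
# true layout yields a perfect fit (r^2 = 1).
true_pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
sketch = [(3 - 2 * y, 4 + 2 * x) for x, y in true_pts]  # rotate 90°, scale 2
a1, a2, r2 = bidimensional_regression(true_pts, sketch)
print(round(r2, 6))  # 1.0
```

Interpolating missing coordinate pairs before such a fit tends to inflate r-squared, which is the artifact quantified in the simulations above.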
There are marked individual differences in the formation of cognitive maps both in the real world and in virtual environments (VE; e.g., Blajenkova, Motes, & Kozhevnikov, 2005; Chai & Jacobs, 2010; Ishikawa & Montello, 2006; Wen, Ishikawa, & Sato, 2011). These differences, however, are poorly understood and can be difficult to assess except by self-report methods. VEs offer an opportunity to collect objective data in environments that can be controlled and standardized. In this study, we designed a VE consisting of buildings arrayed along 2 separated routes, allowing for differentiation of between-route and within-route representation. Performance on a pointing task and a model-building task correlated with self-reported navigation ability. However, for participants with lower levels of between-route pointing, the Santa Barbara Sense of Direction scale (Hegarty, Richardson, Montello, Lovelace, & Subbiah, 2002) did not predict individual differences in accuracy when pointing to buildings within the same route. Thus, we confirm the existence of individual differences in the ability to construct a cognitive map of an environment, identify both the strengths and the potential weaknesses of self-report measures, and isolate a dimension that may help to characterize individual differences more completely. The VE designed for this study provides an objective behavioral measure of navigation ability that can be widely used as a research tool.
The idea that humans use flexible map-like representations of their environment to guide spatial navigation has a long and controversial history. One reason for this enduring controversy might be that individuals vary considerably in their ability to form and utilize cognitive maps. Here we investigate the behavioral and neuroanatomical signatures of these individual differences. Participants learned an unfamiliar campus environment over a period of three weeks. In their first visit, they learned the position of different buildings along two routes in separate areas of the campus. During the following weeks, they learned these routes for a second and third time, along with two paths that connected both areas of the campus. Behavioral assessments after each learning session indicated that subjects formed a coherent representation of the spatial structure of the entire campus after learning a single connecting path. Volumetric analyses of structural MRI data and voxel-based morphometry (VBM) indicated that the size of the right posterior hippocampus predicted the ability to use this spatial knowledge to make inferences about the relative positions of different buildings on the campus. An inverse relationship between gray matter volume and performance was observed in the caudate. These results suggest that (i) humans can rapidly acquire cognitive maps of large-scale environments and (ii) individual differences in hippocampal anatomy may provide the neuroanatomical substrate for individual differences in the ability to learn and flexibly use these cognitive maps.
Classical theories of spatial microgenesis (Siegel and White, 1975) posit that information about landmarks and the paths between them is acquired prior to the establishment of more holistic survey-level representations. To test this idea, we examined the neural and behavioral correlates of landmark and path encoding during a real-world route learning episode. Subjects were taught a novel 3 km route around the University of Pennsylvania campus and then brought to the laboratory where they performed a recognition task that required them to discriminate between on-route and off-route buildings. Each building was preceded by a masked prime, which could either be the building that immediately preceded the target building along the route or immediately succeeded it. Consistent with previous reports using a similar paradigm in a virtual environment (Janzen and Weststeijn, 2007), buildings at navigational decision points (DPs) were more easily recognized than non-DP buildings and recognition was facilitated by in-route vs. against-route primes. Functional magnetic resonance imaging (fMRI) data collected during the recognition task revealed two effects of interest: first, greater response to DP vs. non-DP buildings in a wide network of brain regions previously implicated in spatial processing; second, a significant interaction between building location (DP vs. non-DP) and route direction (in-route vs. against-route) in a retrosplenial/parietal-occipital sulcus region previously labeled the retrosplenial complex (RSC). These results indicate that newly learned real-world routes are coded in terms of paths between decision points and suggest that the RSC may be a critical locus for integrating landmark and path information.
This article discusses several aspects of psychosocial adjustment to blindness and low vision and proposes that the education of both the self and society is essential for positive adjustment. It exposes some of the general misunderstandings about visual impairment and demonstrates how these are partly responsible for the perpetuation of myths and misconceptions regarding the character and abilities of this population. It argues that confidence and self-esteem are deeply connected to ability and should be regarded as constructive elements of the ego usually manifested in different types of introverted or extroverted behaviour. Wherever possible, arguments will be backed by current and past research in social and abnormal psychology as well as specific case studies recorded by the author during the years he spent conducting research and working as a life-skills tutor at the Royal London Society for the Blind.
The paper reports on two studies being conducted with students from Dorton College, Royal London Society for the Blind (RLSB) in Kent. The first experiment will examine the content and accuracy of mental representations of a well-known environment. Students will walk a route around the college campus and learn the position of 10 buildings or structures. They will then be asked to make heading judgments, estimate distances, complete a spatial cued model and sequentially visit a series of locations. The second experiment will examine the strategies and coding heuristics used to explore a complex novel environment. Students will be asked to explore a maze and learn the location of different places. Their search patterns will be digitally tracked, coded and analyzed using GIS software. Students will be tested using the same methods as in the first experiment and their performance level will be correlated with their exploratory patterns. Throughout the paper we are reminded that construct validity can only be secured by employing multiple converging techniques in the collection and analysis of cognitive data. Methods should be designed to test content and accuracy as well as the utility of mental representations.
The article reports on the second and final stage of a study concerned with the impact of an entertainment retrofit on the performance of a shopping center. The study focused on the changes in the type of visitor and the level of patronage inside the Place Alexis Nihon in downtown Montreal after the construction of the neighboring Pepsi Forum. By tracking 729 individuals, a comprehensive picture of the spatial behavior and trip characteristics of visitors was developed that was compared with the behavior of 722 individuals before the entertainment center was opened. Motivations, trip-planning and evaluations were also probed with a questionnaire applied to 283 individuals. Expectations that each center would benefit from the presence of the other were largely not fulfilled. Results indicated that only a slight synergy exists between the entertainment venues and shopping. The estimated contribution to the shopping center of visitors whose first destination was the entertainment center was 5%. Except for anchor store patronage, the center experienced a decrease in visits to small stores and a tendency for visitors to remain on floors close to the ground. One year after opening, the entertainment center operators continue to try new retailing combinations to build their own clientele.
Choosing the topic for research is an expression of a person’s fascination for the subject. This fascination is nothing more than the culmination of perceived peculiarities about someone or something that constantly intrigues the individual. In my case, I was taken aback by the astounding differences found in the neighbouring districts of Parc-Extension and the Town of Mont-Royal. Call it fate or serendipity, but all it took was a wrong turn on l’Acadie Boulevard to prompt my curiosity about the differences in lifestyle between these two areas. The sight of the fence that separated the calm, quiet and spatially organized environment of the Town of Mont-Royal from the noisy and crowded setting of Parc-Extension was enough to offend me. I had never come across such a variation in land use within such a short distance. Intrigued by this phenomenon, I set out to investigate the reason for this spatial segregation. The study of the different lifestyles required that I go beyond a simple observation of the residents’ daily activities and find a way to experience life as a fellow resident.