Virtual Reality to Foster Social Integration by Allowing Wheelchair Users to Tour Complex Archaeological Sites Realistically

Abstract: People with disabilities encounter numerous barriers when dealing with even the simplest and most ordinary things in their daily lives. This is even more noticeable when they are faced with archaeological heritage buildings or environments. People with reduced mobility all too often come upon architectural barriers that stop them from enjoying their visits to sites and monuments. This paper introduces a virtual reality (VR) experience developed to provide wheelchair users with the most realistic sensations possible while virtually touring archaeological sites. To this end, the remote sensing of the site enables the production of a realistic 3D model, leading to the creation of a virtual world for the user to explore. This VR application has been developed to traverse one of the most important monumental buildings of Spanish Protohistory, the site of Cancho Roano (Zalamea de la Serena, Spain).


Introduction
Improving the social participation of disabled people is still a considerable challenge in our society, despite the efforts in this regard over the past few decades [1]. A set of important issues needs to be addressed for these people to fully achieve social integration, which would allow them to participate in valued life activities, achieve social roles, and contribute to their community [2]. Equal access to services and urban places must be guaranteed by law for all, regardless of any functional limitations, according to the United Nations Convention on the Rights of Persons with Disabilities [3].
Cultural heritage is currently perceived as a perfect tool to promote social integration. This is the reason why all initiatives that help to improve and promote cultural heritage experiences while providing disciplines such as Computer Science or Humanities with new research tools have been encouraged [4].
With the aim of achieving the effective integration of those socio-cultural groups that are not yet fully integrated, we have directed our attention toward people with disabilities and the difficulties they face when visiting cultural heritage monuments and sites. For example, visually impaired people and wheelchair users too often encounter architectural barriers that hinder them from enjoying this type of activity, when it is not altogether impossible for them to even access the sites.
Different technologies have recently focused on people with reduced mobility in the field of rehabilitation engineering. Their objective is to develop assistance systems that adapt to the wheelchair users' needs and improve their quality of life. As a result, motor and cognitive skill assessment methods have been developed for years to facilitate the process of choosing the appropriate technical assistance systems. Thus, together with experts in Neurosciences and Physical Medicine, several research groups have been working on prototypes of wheelchair simulators to improve the users' daily lives by creating experiences in a risk-free environment [5,6]. The creation and validation of virtual environments have therefore become an indispensable tool for wheelchair users to simulate movements and real situations to improve the process, sensations, and development of systems and devices. For example, Mahajan et al. [7] have created a virtual power mobility simulator for evaluating power wheelchair driving, concluding that it is a valid tool and that it can be used as a complement to their real-world assessments. John et al. [8] propose another virtual training system for power wheelchairs that has produced good results.
Several studies have examined the state of the art, exposed the problems, and explained the different solutions reached, regarding actions that range from the very first task of learning how to use a power wheelchair, to the training in daily actions. For example, Fernández-Panadero et al. [9,10] presented a power wheelchair simulator that was evaluated with generic users (without disability) to raise awareness of the difficulties that a person driving a wheelchair has to face daily. Devigne et al. [11] presented and evaluated a virtual electric wheelchair driving simulator. The simulator was designed so that different control inputs can be used (regular wheelchair joystick, chin control device, or game controller) by people with different disabilities. More advanced control proposals for simulators have also been developed, such as that of Pinheiro et al. [12]. In this case, a brain-computer interface was introduced to allow a wheelchair to be operated in a virtual environment. The users targeted by this type of system are those with severe motor disabilities. Tao and Archambault [13] compared the results of user training on reaching tasks (working at a desk, using an elevator, and opening a door) during wheelchair handling in a virtual trainer (the McGill Immersive Wheelchair (miWe) simulator) and in the real world. Although the handling of the chair was poorer in the virtual world than in the real one, it was demonstrated that similar strategies were followed to achieve the objectives in both cases. Finally, in a very interesting paper, Alshaer et al. [14] evaluated the factors that most affect the perception and behavior of users in a virtual reality simulator for driving wheelchairs. They concluded that those factors are the type of visualization system (Head-Mounted Display vs. Monitor), the ability to freely change the field of vision (FOV) and the visualization of the user's avatar.
With the conviction that the application of the methodology developed in other contexts can also help the real integration of people in wheelchairs, this work explores its application to cultural heritage. The reason for this, as mentioned, is that people with disabilities often face architectural barriers that hamper their experience when visiting sites and ancient monuments. If they are lucky, they may find narrow entrances leading to difficult access to the sites. However, most of the time they cannot make the visits because it is dangerous or even impossible, due to the lack of appropriate facilities.
Virtual reality (VR) has been used in cultural heritage primarily for educational purposes (see [15] for an interesting review of different proposals) and dissemination intentions (see [16] as one of the first examples, among others). Recently, there have also been some approaches to the problem of visiting certain places, such as the archaeological site of Choirokoitia (Larnaca, Cyprus) [17] or the Forum Adiectum of Augusta Emerita (Mérida, Spain) [18], in addition to a large number of virtual visits and tours that can be made online. Wheelchair users are of course indirect beneficiaries of these initiatives. Nevertheless, to the best of our knowledge, the cases where VR has been applied to provide wheelchair users with vivid experiences, as if they were really wheeling through monuments or sites, are still scarce. It is even rarer to find cases where these VR applications have been created from the actual data obtained with laser scanners to tour the models of cultural heritage spaces.
This work tackles the challenge of generating virtual visits to sites and monuments by implementing the tools provided by rehabilitation engineering, to offer realistic routes that can be enjoyed by wheelchair users. In addition, we believe that the experience acquired with the use of the VR application can help design more suitable infrastructures that would facilitate a real visit to this type of environment.
The main goal of the VR experience presented is to let its users explore archaeological heritage sites from an accessible position, with the same realistic sensations as if they were exploring the actual place in their wheelchair. From this standpoint, we have developed a VR application designed to visit the pre-Roman site of Cancho Roano (Zalamea de la Serena, Spain) in a wheelchair.
The rest of the paper is structured as follows. Section 2 begins with the description of the archaeological site of Cancho Roano, followed by an explanation of the methodology used to develop the proposed VR application. Section 3 is devoted to the results and deals with the integration of all the elements that compose the VR system. Section 4 presents a discussion on the results obtained. Finally, the conclusions of this work are summarized in Section 5.

Cancho Roano Archaeological Site
Our collaboration with the Institute of Archaeology of Mérida led to the possibility of working on the 3D digitization of the main Tartessian sites of the Guadiana. The most significant of them, Cancho Roano [19,20], also offered us the challenge of having a complex architecture and difficult accessibility. The other site where this type of experience could be applied, known as "El Turuñuelo," is still in the process of excavation, so it will take some years to implement a VR application like this one.
Cancho Roano is one of the most important monumental buildings of the whole of Spanish Protohistory. Its lifespan extends from the end of the 7th century and the beginning of the 6th century BC, at the height of the Tartessian culture, until the 4th century BC, when the monument was intentionally destroyed by its users in a great fire. Several construction phases have been documented during its lifetime, always preserving its religious significance. The site is located in the countryside, in a hidden position near several streams, which made it possible for the inhabitants of the territory to have contact with Mediterranean cultures [21].
Nowadays, the site can only be reached by car, which must be parked nearby. A brick road leads to the interpretation center and then to the ancient building, which can be visited freely. To reach the monument, it is necessary to climb a wooden ramp and then a step to access an open area (Figure 1a-c). The main building, which can currently be visited, has a square floor plan of approximately 24 m on each side. It consists of eleven rooms of different heights, some of them inaccessible to visitors for conservation reasons, and a large courtyard with a characteristic U shape. The whole building is surrounded by a slightly sloping 2-meter-wide terrace, from which it is easy to see all the rooms despite the difficult walk to get there [22] (Figure 1d). If this complex architecture hampers a regular tour, it is obviously worse for wheelchair users, for whom it is next to impossible to explore the site due to the barriers imposed by slopes, narrow corridors, irregular floors, and steps.

The VR Experience Developed
The tasks leading to the implementation of the VR application can be broadly categorized into two main stages: the generation of the 3D model of the site to be virtually toured, and the design of the VR application itself. In stage 1, two types of 3D models are generated in parallel processes: on the one hand, the models based on the real data acquired with a laser scanner are obtained through the consecutive steps of planning, data acquisition, and data processing. The models thus generated must be adapted afterward to fulfill the requirements of VR applications, especially in terms of real-time processing. On the other hand, a synthetic model of the site is also generated, together with some accessory elements, as will be explained below. In the second stage, the real data are integrated into the synthetic data to program the whole virtual world. Figure 2 shows an outline of this procedure, specifying the software used for the different steps. The details of each of these two stages are explained in the following paragraphs. In addition, the application must also play the important role of integrating the different technologies that make up the VR system: a motion simulator with a haptic interface that lets disabled people experience movement sensations.

Stage 1: Generation of the 3D Elements

Models from Real Data
To generate the 3D model of Cancho Roano, we used a Faro LS 880 laser scanner and a Nikon D200 camera coupled with a Nikon AF DX Fisheye lens to obtain both the geometry and the color information. This scanner has a distance range of 0.6 m to 76 m with an accuracy of ±3 mm in 25 m.
The angular range is 360° on the vertical axis and 320° on the horizontal one, with an accuracy of 16 µrad on both axes. Finally, the maximum vertical resolution is 13 µrad and the maximum horizontal resolution is 157 µrad. The system takes about 27 minutes to scan a panoramic sample (at half the maximum resolution), plus 8 minutes to take the sequence of photos. The data were acquired in 3 sessions of 10 hours each.

Undoubtedly, the first task to be performed in situ is planning the positions accessible to the scanner. The volume that can be covered in a single scan is affected by several factors, namely the field of vision, the occlusion conditions, the accessibility of the scanner, and the overlap with other scans [23]. In the case of this heritage building, the captures were made in a series of concentric rings, as shown in Figure 3a, where the 33 positions of the scanner are indicated on a site map. Figure 3b is a picture of the scanner placed at the location marked with a red circle in Figure 3a.

Once the data have been gathered, a noise and outlier filtering pre-processing task is performed before starting the model generation [24]. This filtering process eliminates the erroneous points provided by the scanner: points generated at infinity (spread points), outliers (points outside the scene), and noise (due to reflections and shiny surfaces).
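As a rough illustration of this concentric-ring planning, candidate scanner stations can be generated programmatically. The following is a minimal sketch in plain Python; the radii and per-ring station counts are made-up values for illustration, not the actual layout used at Cancho Roano:

```python
import math

def ring_positions(radii, counts, center=(0.0, 0.0)):
    """Generate candidate scanner stations on concentric rings around
    `center`; returns a list of (x, y) coordinates in meters."""
    cx, cy = center
    positions = []
    for r, n in zip(radii, counts):
        for k in range(n):
            a = 2.0 * math.pi * k / n  # evenly spaced angles on the ring
            positions.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return positions

# Illustrative layout: 3 rings, 33 stations in total (assumed values).
layout = ring_positions(radii=[8.0, 14.0, 20.0], counts=[9, 11, 13])
```

In practice each candidate station would still be checked against occlusions and scanner accessibility before the survey.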
Next, the partial views acquired must be registered. Registration is the process of matching two overlapping point clouds so that they lie in the same coordinate system. This is a well-studied problem, and methods for automatically registering laser scan data are commercially available. After this process, a coarse transformation is attained. The registration refinement stage then calculates a precise alignment between the two point clouds. Finally, the merging stage resamples the points and rewraps them to make smooth transitions and generate a unique point cloud.
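The refinement step can be pictured with a toy example. The sketch below (plain Python, reduced to 2D for brevity, and assuming the point correspondences are already known) computes the closed-form rigid transform that best aligns two paired point sets, which is the core computation inside fine-registration methods such as ICP:

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form rigid registration of paired 2D point sets: returns the
    (angle, tx, ty) that maps `src` onto `dst` in the least-squares sense."""
    n = len(src)
    mx_s = sum(p[0] for p in src) / n
    my_s = sum(p[1] for p in src) / n
    mx_d = sum(p[0] for p in dst) / n
    my_d = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered point sets
    sxx = sxy = syx = syy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - mx_s, ys - my_s
        xd, yd = xd - mx_d, yd - my_d
        sxx += xs * xd
        sxy += xs * yd
        syx += ys * xd
        syy += ys * yd
    # Optimal rotation angle (2D Procrustes), then the matching translation
    angle = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(angle), math.sin(angle)
    tx = mx_d - (c * mx_s - s * my_s)
    ty = my_d - (s * mx_s + c * my_s)
    return angle, tx, ty
```

Real scan registration works in 3D, estimates the correspondences iteratively, and is performed here with commercial software, but the underlying least-squares alignment is of this form.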
Aside from the noise and outlier filtering tasks in the pre-processing stage, errors in the color assignment must be corrected. The system can associate color information to each point of the scene but, due to small errors in camera alignment, the color-geometry registration must be refined. To solve this problem, we establish pairs of corresponding points on both reflectance and color images and align them precisely [25].
Eventually, a high-resolution 3D digital model of the site is available for inclusion in digital heritage repositories, which would allow its use in a multitude of applications: generation of replicas, measurement of significant elements, high quality rendered images from any desired point of view, etc.
In the case considered in this study, the specific application of the model obtained is to be represented in real-time, so that the ruins can be visited virtually, as mentioned before.
Real-time representation and high-resolution models are not yet compatible, due to the limitations imposed by current hardware. Therefore, an essential step in developing a virtual tour is to adapt the high-resolution 3D models. In addition, a data type conversion is necessary, as the data obtained after the model generation stage consist of a colored point cloud, while the virtual application engine (Unity) requires triangulated colored meshes.
This process of adapting the 3D model obtained from real data to the virtual application engine can be divided into three steps:

1.
Resolution lowering: the number of points must be reduced to obtain a model that can be represented in real-time on an average computer. This reduction is performed with the open-source 3D processing software MeshLab, which is well known and popular within the Computer Vision community. Specifically, we applied the "Quadric Edge Collapse Decimation" simplification to the original point cloud of 25,000,000 points, reducing it to 14% of the original data, that is, a point cloud with 3,500,000 points.

2.
Data type conversion: the point cloud must become a triangular surface. To do this, all normals in the point cloud are first oriented using the "Compute normal" tool in MeshLab. A reconstruction process is then applied using the "Screened Poisson" method. The final triangular surface obtained consists of 6,800,000 triangles (Figure 4a).

3.

Color conversion: this is done using the MeshLab tools called "Trivial Parameterization by Triangle" (to map all triangles onto triangles of equal size) and "Transfer Vertex Attributes to Texture" (to assign one color per vertex to the texture of the mesh), obtaining a JPG image file that stores the color information. Figure 4b is a snapshot of the textured final 3D model.
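To make the color-conversion step concrete, the following sketch reproduces the idea behind a trivial per-triangle parameterization: every mesh triangle receives an equal-size triangle in the texture image, two per cell of a regular grid. This is a conceptual illustration in plain Python, not MeshLab's actual implementation:

```python
def trivial_triangle_uvs(num_triangles, quads_per_row):
    """Assign each mesh triangle an equal-size triangle in texture space,
    packing two triangles per square cell of a regular grid.
    Returns a list of ((u0, v0), (u1, v1), (u2, v2)) tuples in [0, 1]^2."""
    rows = (num_triangles + 2 * quads_per_row - 1) // (2 * quads_per_row)
    w = 1.0 / quads_per_row   # cell width in texture units
    h = 1.0 / rows            # cell height in texture units
    uvs = []
    for t in range(num_triangles):
        cell, half = divmod(t, 2)            # two triangles per cell
        r, c = divmod(cell, quads_per_row)   # grid coordinates of the cell
        x, y = c * w, r * h
        if half == 0:   # lower-left triangle of the cell
            uvs.append(((x, y), (x + w, y), (x, y + h)))
        else:           # upper-right triangle of the cell
            uvs.append(((x + w, y), (x + w, y + h), (x, y + h)))
    return uvs
```

Once each triangle owns its own patch of texture, the per-vertex colors can be rasterized into the image, which is what "Transfer Vertex Attributes to Texture" does.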

Synthetic Models
During the phases of planning, acquisition, and adaptation of the 3D data, a simple schematic model of the archaeological site is generated in parallel. This model facilitates the design of the VR application and provides a test bench to detect the needs of disabled people when using the simulator.
On the other hand, some elements necessary for the design of the VR application must be modeled. The archaeological ruins can be considered an extreme case of a building not adapted to be traveled by people in wheelchairs. As the simulator is intended to make the users feel as if they really were inside the historic site using a wheelchair, certain technical aids need to be designed, such as ramps, floors, and lifts. These aids could even be designed so that the user can enjoy a visit to the inaccessible rooms, improving their experience compared to that of a common visitor.
These elements are generated from scratch with Blender to be part of the VR scenes. During their modeling, all requirements regarding their real-time 3D representation must also be taken into account.

Stage 2: Virtual Reality System
The VR system developed is a movement simulator with a haptic interface, so that people with disabilities can experience the sensations of movement while visiting a heritage site. They can also decide the trajectory by manually moving the wheels of the wheelchair as if they were moving within the real location. The simulator is composed of four different types of devices: motion capture systems, visualization systems, physical systems, and a computer system. Figure 5 shows the relationships between them.

1.
The main component of the VR system is the computer system, consisting of a workstation that enables the communication between all elements. The simulator management application has been developed in Unity, which allows the incorporation of models of virtual worlds and the possibility to interact with them. In addition, this software permits the administration of any hardware device that has the corresponding programming libraries, thanks to the possibility of introducing scripts in the C# language. It is important to point out that the graphic power of this computer could limit the level of detail or complexity of the virtual worlds designed. In our case, it must manage several visualization outputs (those of the visualization systems) simultaneously, so the graphics hardware must support this functionality. Two graphics cards are used simultaneously in this simulator: an Nvidia GTX 970 and an Nvidia Quadro K5200.

2.
The movements made by the user are captured by the motion capture systems: an optical system and a 5DT data glove. This system has a double purpose: to improve the users' sense of immersion, as they can see their avatar moving in sync with the movements of their own body; and to register data for further analysis to diagnose possible problems or erroneous behavior. The data captured by the OptiTrack system are transmitted in real-time to the virtual world avatar in Unity via its Motive software, which also allows the workspace to be calibrated and configured. The glove data are transmitted to Unity via its own scripts. Additionally, an inertial capture device can be used if necessary. This device captures the user's movements autonomously, without the need for external cameras.

3.
The visualization systems offer two alternatives to display the virtual world, which can be used in different evaluation configurations. The first one consists of a large 3D screen (100 inches), on which to project the virtual world, and a set of active glasses, which provide the 3D immersion.
This kind of representation allows both the user and the technical team to jointly observe the users' interactions with the virtual world. The second visualization subsystem is a VR headset, which is operated exclusively by the user.

4.
The physical systems are a wheelchair, a motion platform, and an elevator system to access this platform. One of the most important and innovative parts of the developed system is a six-degrees-of-freedom motion platform containing the wheelchair. It allows the users both to feel the accelerations to which they are subjected in response to the movements that take place in the virtual world, and to perceive the different inclines of the virtual terrain on which they are moving. The entire platform can change its orientation, and it incorporates a haptic system composed of two motor-driven active cylinders acting as an interface with the wheelchair, which must be positioned so that its wheels are in permanent contact with the cylinders. These cylinders have a dual function: on the one hand, they are activated when the user moves the wheels, detecting the intention of the user's movement; on the other, they emulate the ground conditions of the virtual world with the movement of the rollers: depending on the inclination and type of surface of the virtual world in which the users are located, they may find it easier or harder to move the wheels. The movement of the rollers is measured and processed at the workstation, which also generates the rotation of the rollers corresponding to the movements of the wheels of the virtual wheelchair. Figure 6 shows the details of these components.
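As a simplified sketch of how the incline of the virtual terrain can be turned into a platform orientation, the function below derives pitch and roll from the terrain's surface normal. It is illustrative only (it assumes a z-up convention and ignores the platform's remaining degrees of freedom); the actual platform control is more involved:

```python
import math

def platform_tilt(normal):
    """Pitch and roll (radians) that tilt the platform so its up-axis
    matches the virtual terrain normal (nx, ny, nz), with z up."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length  # normalize
    pitch = math.atan2(nx, nz)   # forward/backward incline
    roll = math.atan2(ny, nz)    # sideways incline
    return pitch, roll
```

On flat ground the normal is (0, 0, 1) and both angles are zero; a slope tilts the normal and the platform follows.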

Integration of the Motion Platform.
As previously indicated, Unity was used to design the VR application. All the elements comprising the virtual world were integrated in Unity, where the application interfaces were also designed and where several scripts were programmed to control the interaction between all the elements, the virtual objects, and the hardware of the application. It is necessary to point out that Unity includes a physics engine that governs the physical laws within the virtual world, such as gravity or collisions. In addition, it includes a 3D rendering engine with lighting, which manages the visualization of objects according to the properties of their materials. Scene lights have also been included, along with ambient lighting and the indirect lighting between objects.
Unity's physics engine is a determining factor for the performance of the platform and the active cylinders. It is worth noting that the scripts programmed to control the simulator create a bidirectional connection between the real world and the virtual world, following the scheme shown in Figure 7. As can be seen, there are two closed loops: one between Unity and the active cylinders, and one between Unity and the motion platform. In these loops, two parameters, γ and ρ, adjust the interaction between real devices and virtual elements. The ρ parameter is exclusively aimed at compensating for any future wear or misalignment in the structure of the platform. An application of the γ parameter will be explained later (Section 3.3).

VR Application Development: Initial Synthetic Model and Design of Accessory Elements.
The process of planning, acquiring, and adapting the 3D data of an archaeological site takes considerable time. During this time, a simple schematic model of the archaeological site was generated in parallel. This model made it possible to work on the design of the VR application and to have a test bench available to detect the needs of disabled people when using the simulator. Figure 8 shows the synthetic model generated using the schema of Figure 3a and some quantitative data available for the site.

One of the peculiarities of our simulator is that it offers the user a new way to move around the world: the wheelchair. While motion controllers are usually employed in this kind of application, the one presented in this study requires the user to move the wheels of a real wheelchair in order to move around. Therefore, it was necessary to analyze whether it was feasible to move within the site scenario using this new method. First, the possibility of taking the same routes that a person would take on foot through the site was analyzed. The two obvious drawbacks were the different levels, some of which were only accessible by steps, and the width of the route in some areas. As a result, a wheelchair user would only be able to travel around the enclosure, outside its exterior walls.
To allow the user to move inside, two solutions were implemented for the aforementioned obstacles. To overcome the unevenness, the use of what is called teleportation in VR was tested. Through this functionality, the users point with the controller to a location where they want to move and, upon pressing a button, they instantly appear in that new position. The particular use of our simulator makes it difficult to operate these controllers simultaneously with the wheelchair. In addition, the OptiTrack capture system is already available to detect the position of the user's hands. For this reason, the controller was anchored to the arm of the chair, as shown in Figure 9b, so that its buttons can be used for certain actions. Consequently, teleportation was implemented by making the users point their heads towards the locations where they wanted to move. This is an effective solution that we kept as an alternative, configurable within the user interface.
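Head-pointing teleportation reduces, at its core, to intersecting the gaze ray with the ground. The sketch below illustrates this with a ray-plane intersection in plain Python; the function and its y-up, flat-ground assumptions are ours for illustration, not Unity API calls:

```python
def gaze_teleport_target(head_pos, gaze_dir, ground_y=0.0):
    """Teleport destination obtained by intersecting the head's gaze ray
    with the ground plane y = ground_y. head_pos and gaze_dir are (x, y, z)
    tuples; returns the target point, or None if the user is not looking
    at the ground."""
    px, py, pz = head_pos
    dx, dy, dz = gaze_dir
    if dy >= 0.0:          # looking horizontally or upward: no target
        return None
    t = (ground_y - py) / dy   # ray parameter at the ground plane
    return (px + t * dx, ground_y, pz + t * dz)
```

In the actual application the target would additionally be validated against the passable areas of the site before moving the avatar.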
However, to add a certain realism related to the difficulty of movement within the site, we decided to include accessory elements to overcome the changes in level. By means of a button on the controller, the user can activate the deployment of ramps at specific points indicated within the site. The design of these ramps follows the Spanish Technical Building Code, which establishes a maximum slope of 10% for lengths of less than 3 m, 8% for lengths of less than 6 m, and 6% in all other cases.
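These slope limits are easy to encode and check. A small sketch (our own helper functions, written to illustrate the rule stated above):

```python
def max_ramp_slope(length_m):
    """Maximum slope allowed by the Spanish Technical Building Code for an
    accessible ramp of the given horizontal length (in meters)."""
    if length_m < 3.0:
        return 0.10
    if length_m < 6.0:
        return 0.08
    return 0.06

def ramp_length_for(rise_m):
    """Shortest compliant ramp length for a given rise, trying the three
    slope bands from steepest to shallowest."""
    for slope in (0.10, 0.08, 0.06):
        length = rise_m / slope
        if max_ramp_slope(length) >= slope:
            return length
    return rise_m / 0.06
```

For example, a 0.25 m step can be saved with a 2.5 m ramp at 10%, while a 0.40 m rise already requires 5 m at 8%.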
Even though several rooms on the site are inaccessible when visiting on foot, it was decided to include the possibility of accessing these rooms in the application. To this end, and to add an entertainment component to the visit, elevators with ramps have been designed to allow the user to descend to these locations. These accessory elements are activated by the user via the controller at certain points along the route. Figure 10 shows an example of the use of ramps: an icon is shown on the screen when the user approaches some stairs and, if the user presses a controller button, a ramp is deployed.

Integration with 3D Data: Visiting the Digitized Site.
After the adaptation process, data are integrated into the VR application. The basic difference between using the synthetic model and the digitized data is the roughness of the terrain and the subtle variations in slope that did not appear in the synthetic model.
To add realism to the virtual tour, we have considered the influence of the terrain's roughness within the VR experience. As mentioned above, there are active rollers on the simulator platform that allow bidirectional communication between the turn of the real wheels and that of the virtual wheels. If a coefficient is added to the transmission of the rotation of the wheels from the virtual world to the real world, it is then possible to simulate transit over different types of soil. In other words, the rotation angle of the virtual wheels is multiplied by a value f_t (terrain friction), which yields a positive or negative rotation value for the active cylinders' motors when the user intends to turn the wheels. This results in a greater ease or difficulty in manually turning the real wheels.

In the application, the implementation consists in assigning a specific f_t value to each of the 3D models of the floors on which the wheelchair can circulate. Then, when the virtual chair enters one of these floors, that f_t value is automatically transferred to the active cylinders. When the user spins a wheel by dφ radians, this value is transmitted to the corresponding wheel of the virtual world. This wheel lies on a specific terrain with its associated f_t, and Unity's physics engine manages all interactions between elements in the virtual world. The resulting rotation angle, dθ radians, multiplied by f_t (the above-mentioned γ factor), is then sent to the motors of the active cylinders. If this value is 0, the motors do not act, i.e., the virtual wheel is over a terrain whose friction is equivalent to that of the bare cylinders. A negative value implies higher friction: the cylinders rotate opposite to dφ, making the movement more difficult. With a positive value, the motors rotate in the same sense as dφ and help the users in their effort.
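The virtual-to-real half of this loop can be sketched in a few lines. The f_t values below are purely illustrative placeholders (the paper's coefficients were tuned empirically on the platform), and the function stands in for the C# script that drives the cylinders' motors:

```python
# Illustrative terrain-friction values, NOT the tuned coefficients:
# 0 means the bare cylinders already match the terrain's friction.
TERRAIN_FT = {"asphalt": 0.0, "soil": -0.15, "grass": -0.35, "ice": 0.25}

def cylinder_command(d_theta, terrain):
    """Rotation command for the active cylinders' motors, given the virtual
    wheel rotation d_theta (radians) produced by the physics engine.
    0        -> motors idle (terrain feels like the bare cylinders);
    negative -> motors oppose the user's push (harder terrain);
    positive -> motors assist the push (easier terrain)."""
    return TERRAIN_FT[terrain] * d_theta
```

For a forward push (positive d_theta), "grass" yields a negative command that resists the user, while "asphalt" leaves the motors idle.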
Figure 11 shows a schematic representation that illustrates this continuous transmission of real and virtual wheel rotation angles, considering the effect of the physics engine and the assigned terrain friction value.
For the moment, the adjustment of f_t has been done empirically, with illustrative values for soils with large differences in their coefficients of friction, in order to perceive the differences in the behavior of the haptic system. Specifically, the behavior of soils such as "asphalt", "grass", "soil", and "ice" has been modeled by developing a simple application (Figure 12).
In the virtual visit to the Cancho Roano site, the f_t value of "soil" has been applied to all the terrains of the site, and that of "asphalt" to the displacement along ramps and elevators.

Discussion
The European Union considers cultural heritage as one of the means to achieve social integration [26]. Within its challenges for the forthcoming future, the EU promotes multidisciplinary initiatives, combining the expertise in cultural heritage with the resources provided by technology, which foster the social inclusion of those groups of people that have not yet achieved real integration into our society. For example, DT-Transformations-11-2019 of the Horizon2020-2018-2020 work program is dedicated to the development of collaborative approaches to cultural heritage for social cohesion.
Accordingly, people with disabilities must be seen as an important target for these types of actions. They still face many obstacles daily, even in their most ordinary activities. Although there is a trend toward their incorporation into ordinary life, their social integration is far from fully achieved. This situation is aggravated when visiting monuments and archaeological sites, as many of them are not prepared to receive these types of visitors, which makes it impossible for them to enjoy such an experience.
Broadly speaking, accessibility to heritage has two fundamental phases [27]: physical accessibility, in which visitors use their bodies and senses to collect stimuli or experiences related to the heritage asset being visited, and perceptual accessibility, related to the understanding of heritage. The combination of these two phases gives rise to appropriational accessibility, which has to do with the emotions that the heritage environment arouses in us and how it is integrated into our own experiences. In the case of people with motor disabilities, the limitations in the first phase clearly impoverish appropriational accessibility. Accordingly, any action directed at improving this situation will enhance their accessibility in the broadest sense. That is the case with the solution proposed in this work, which offers users a physical accessibility very close to that of people who can actually visit the heritage sites.
It can be said that technological advances are meaningful when they facilitate people's lives, but they reach their zenith when they improve the lives of those who live with a disability. This is the main reason for planning and designing a VR application like the one presented here. It combines archaeological knowledge with 3D modeling and computer graphics expertise to create an immersive virtual experience that allows one to visit archaeological remains remotely and realistically. We are also aware of the system's potential as a test bed for designing accessibility measures for archaeological sites, thanks to the feedback provided by its users. We plan to use it for designing real ramps and accesses that can be implemented at the actual sites. Once the application is fully developed, it will make it possible to evaluate virtually, for instance, the most suitable slope inclination or corridor width to ease the movement of wheelchair users, the ultimate aim being the full integration of these people. In this process, it is essential to have the help of cultural heritage managers and curators in order to find solutions that do not physically or aesthetically damage structures as unique as those belonging to cultural heritage.
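The kind of virtual screening described above can be illustrated with a minimal sketch. The thresholds below (a 1:12 maximum gradient, a common accessibility guideline, and a 0.90 m minimum clear width) and the candidate ramp names are illustrative assumptions, not values from the Cancho Roano application:

```python
# Hypothetical helper for screening candidate ramp and corridor designs in
# the virtual site before any physical intervention is made. Thresholds are
# assumed for illustration only.

MAX_GRADIENT = 1 / 12   # rise/run; a commonly cited accessibility guideline
MIN_WIDTH_M = 0.90      # assumed minimum clear corridor width in meters

def ramp_ok(rise_m: float, run_m: float) -> bool:
    """True if the ramp's gradient does not exceed MAX_GRADIENT."""
    return run_m > 0 and (rise_m / run_m) <= MAX_GRADIENT

def corridor_ok(width_m: float) -> bool:
    """True if the corridor is wide enough for a wheelchair to pass."""
    return width_m >= MIN_WIDTH_M

# Two hypothetical candidate ramps climbing the same 0.5 m rise
candidates = [
    {"name": "long ramp", "rise_m": 0.5, "run_m": 7.0},   # gentle slope
    {"name": "short ramp", "rise_m": 0.5, "run_m": 5.0},  # too steep
]
for ramp in candidates:
    verdict = "passes" if ramp_ok(ramp["rise_m"], ramp["run_m"]) else "fails"
    print(f"{ramp['name']}: {verdict}")
```

In the envisioned workflow, parameters that fail such a check would be adjusted and re-tested in the simulator with real wheelchair users before proposing any physical modification to the site.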
This experience is also remarkably powerful in making people conscious of the difficulties that wheelchair users constantly face. Although there have been previous initiatives in other contexts, this case is more striking because visiting archaeological sites relates not only to cultural activities but also to leisure and enjoyment. This should be one of the key points to be investigated in future studies.
During the design of the application, some tests were performed to refine its parameters. Nevertheless, we consider it necessary to run a pilot test with a significant number of subjects to assess the application and improve the experience. We plan to conduct an experimental test with at least 10 people, both wheelchair users and generic users. All of them will take the same tour inside the building, which will consist of wheeling along the passable paths and accessing two rooms using the lifts designed for that purpose. Before starting, the participants will have ten minutes to adapt to the simulator while learning the basic controls, and the time each participant takes to complete the tour will be measured. At the end of the experience, the users will be asked to answer a brief questionnaire. Despite the small number of users, we are confident that some initial conclusions can be drawn from the survey results. First, we will check whether participants perceived the visit as realistic and whether the possibility of visiting otherwise inaccessible areas of the site adds value to the experience. Second, we will verify whether people feel as if they are realistically handling a wheelchair while touring the site (slopes, narrow passages, and different types of soil) and explore the visitor's reaction to the site itself.

Although we amended the small delays detected in the motion system during the design phase, subtle delays (on the order of 20-50 ms) are occasionally still perceived. This is due to several factors, mainly hardware limitations, since one computer simultaneously manages several systems (motion platform, VR headset with its controller, data gloves, and body motion tracking), as well as tracking losses caused by occasional lighting problems. We have observed that these delays only arise with very sudden movements of the user. Since using a wheelchair generally involves slow movements, they are not considered significant.
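The reason the residual delays matter only for sudden movements can be sketched with a simple back-of-the-envelope model: the visual offset produced by a delay is roughly the movement speed multiplied by the delay, so slow wheelchair-style movements stay below any perceptibility threshold. The threshold value and the example speeds below are illustrative assumptions, not measurements from our system:

```python
# Back-of-the-envelope check of when a tracking delay becomes noticeable:
# angular offset (deg) ~= angular speed (deg/s) * delay (s).
# The just-noticeable offset is an assumed illustrative value.

PERCEPTIBLE_OFFSET_DEG = 1.0   # assumed just-noticeable angular error
DELAY_S = 0.05                 # worst observed delay, 50 ms

def delay_noticeable(angular_speed_deg_s: float, delay_s: float = DELAY_S) -> bool:
    """True when the speed-induced offset exceeds the perceptibility threshold."""
    return angular_speed_deg_s * delay_s > PERCEPTIBLE_OFFSET_DEG

# Slow, wheelchair-style head movement vs. a sudden turn (assumed speeds)
print(delay_noticeable(10.0))   # ~10 deg/s: offset of 0.5 deg, below threshold
print(delay_noticeable(200.0))  # ~200 deg/s: offset of 10 deg, clearly visible
```

This is consistent with the observation above: at the slow speeds typical of wheelchair use, a 20-50 ms delay produces an offset well below what users perceive.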
Despite this, we continue working on optimizing and polishing the system. Finally, as mentioned earlier, we will assess the usefulness of the application in raising people's awareness of the access difficulties that people in wheelchairs face in these sorts of environments.

Conclusions
This work presents a VR application developed to allow people in wheelchairs to visit the archaeological site of Cancho Roano (Zalamea de la Serena, Badajoz), one of the most important monumental buildings in Spanish Protohistory.
The aim of the application is to help people with disabilities achieve true social integration, in terms of access to cultural heritage, by allowing its users to visit a place of high historical value. To reinforce this aspect, two strategies have been used in addition to the classic VR procedures. The first is to create 3D models from data gathered with a laser scanner, allowing visualization of the site as it is today; this model was then adapted for integration into the VR application. The second is to make the experience of touring the site as realistic as possible for a visitor in a wheelchair. To this end, a novel VR system has been developed by integrating four types of devices: motion capture systems, visualization systems, a motion platform with haptic rollers, and a workstation. A simple application has been developed for the simultaneous use of these technologies, and different types of soil have been empirically modeled into it to provide realism to the user's experience.
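To give a flavor of how soil types can feed the haptic rollers, the sketch below maps each surface to a resistive torque via a rolling-resistance coefficient. The coefficients, wheel radius, load, and torque formula are illustrative assumptions, not the empirically fitted values used in the actual application:

```python
# Hypothetical mapping from soil type to the resistive torque the haptic
# rollers would apply, in the spirit of the empirically modeled soils
# described above. All numeric values are assumed for illustration.

# Assumed rolling-resistance coefficients per surface (dimensionless)
SOIL_RESISTANCE = {
    "stone_pavement": 0.010,
    "compacted_earth": 0.020,
    "loose_sand": 0.080,
}

WHEEL_RADIUS_M = 0.30   # assumed rear wheelchair wheel radius
LOAD_N = 900.0          # assumed weight of user plus chair on the wheels

def roller_torque(soil: str) -> float:
    """Resistive torque (N*m) for a soil type: coeff * load * wheel radius."""
    return SOIL_RESISTANCE[soil] * LOAD_N * WHEEL_RADIUS_M

for soil in SOIL_RESISTANCE:
    print(f"{soil}: {roller_torque(soil):.2f} N*m")
```

Under this simple model, loose sand resists roughly eight times more than stone pavement, which matches the intuition that pushing a wheelchair over sand demands far more effort.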
In future work, we plan for the application to undergo an experimental test with wheelchair users as well as generic users to verify whether they perceive the visit as realistic and whether the possibility of visiting areas of the site that are not accessible in a real visit adds value to the experience. Finally, the usefulness of the application in raising people's awareness of the access difficulties that people in wheelchairs face in this type of environment will also be tested.