As part of its proposed vision, Society 5.0, the Japanese government aims to realise a society in which the digital (cyber) and real (physical) worlds are highly integrated through the use of cyber-physical systems (CPSs). One example of a CPS is a system that operates a device based on an understanding of real space built from data acquired by sensors. To realise such a mechanism, devices and systems must have a correct understanding of the real space in which people live. However, currently available data consists mostly of 2D map data, which lacks sufficient vertical-axis information, and 3D spatial data, which has neither a common identifier system nor a common distribution method.
This article introduces ‘spatial IDs’, a type of common identifier for 3D spatial information; 3D spatial information infrastructure, which provides methods for distributing spatial information; real-time locating system (RTLS) technologies for measuring the current location of devices; and, finally, an overview and explanation of the architecture of an autonomous vehicle that utilises spatial IDs.
A spatial ID is a common identifier for 3D spatial information that assigns an altitude*1 to the existing absolute 2D coordinates used by various industry, government, and academic organisations. Spatial IDs provide the following two advantages (a minimal conversion sketch follows the list):
By standardising the structures and rules for identifying spatial locations as spatial IDs, various attribute information linked to these IDs can be shared among devices and systems.
Because spatial data is distributed as spatial voxels*2, which are easier for humans, devices, and systems to handle than point clouds, even devices and systems with low processing performance can process the data.
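To make the identifier concrete, the minimal Python sketch below converts a latitude, longitude, and altitude into a spatial-ID-style string. It assumes the horizontal split follows standard Web Mercator (XYZ) tiling and that the vertical voxel height at zoom level z is 2^(25−z) metres, which is our reading of the guideline cited in note *2; treat it as an illustrative sketch, not a reference implementation.

```python
import math

def latlon_alt_to_spatial_id(lat_deg: float, lon_deg: float, alt_m: float, zoom: int) -> str:
    """Convert WGS84 latitude/longitude/altitude to a ZFXY-style spatial ID.

    Assumptions (ours, not the guideline's official wording): horizontal
    indices follow Web Mercator XYZ tiling; the vertical voxel height at
    zoom level z is 2**(25 - z) metres.
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)                          # east-west index
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)  # north-south index
    f = math.floor(alt_m / 2 ** (25 - zoom))                        # vertical index
    return f"{zoom}/{f}/{x}/{y}"

# Example: a point 30 m above ground near Tokyo, at zoom level 22.
print(latlon_alt_to_spatial_id(35.68, 139.77, 30.0, 22))
```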
In addition to the abovementioned spatial IDs, the spatial voxels that make up an individual space have the following three characteristics:
3D spatial information infrastructure is infrastructure designed to unify the management of 3D spatial data. Data acquired, for example, via IoT devices is currently managed under different ID systems depending on the data's owner; the infrastructure unifies this management through the common identifier system of spatial IDs. It holds spatial voxel data that is easy for computers to process, consisting of spatial IDs and the 3D spatial information associated with them, and provides a mechanism for sharing that data among humans, devices, and systems.
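As a rough illustration of such a sharing mechanism, the sketch below models the infrastructure as an attribute store keyed by spatial IDs. Everything here, including the function names and the sample ID, is hypothetical; it is not a published API.

```python
from collections import defaultdict

# Hypothetical in-memory stand-in for the 3D spatial information
# infrastructure: attributes from many owners, keyed by spatial ID.
store = defaultdict(dict)

def update_attributes(spatial_id, attributes):
    """Merge attribute data (e.g. from an IoT device) into a voxel's record."""
    store[spatial_id].update(attributes)

def query(spatial_id):
    """Return all attributes currently associated with a voxel."""
    return dict(store[spatial_id])

# One device reports an obstacle; another system reads the same voxel back.
update_attributes("22/3/3725590/1651550", {"occupied": True, "source": "lidar"})
print(query("22/3/3725590/1651550"))
```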
For an entity to dynamically manage 3D data in real space, the devices that acquire the data need to know their own current location. To verify the following three points, which may become issues if and when spatial IDs come into wide use, we took stock of currently available positioning technologies and selected those suitable for this demonstration.
RTLS technologies that satisfy the above conditions are known as computer vision systems. In recent years, a type of hybrid positioning technology called a visual positioning system (VPS), which combines camera-based positioning with inertial measurement units (IMUs), global navigation satellite systems (GNSSs), and Wi-Fi, has come into popular use. We decided to use a VPS for this demonstration.
Finally, we would like to introduce an overview and the architecture of an autonomous vehicle prototype used in a digital twin*3 of the Technology Laboratory, a research facility for 3D spatial information, which we demonstrated at a seminar*4 on spatial IDs.
This demonstration is based on the following scenario: an autonomous vehicle carries a package from an outdoor source to an indoor destination, utilising different positioning methods and traversing different elevations. The scenario is divided into two main phases, the ‘data update’ phase and the ‘data extraction’ phase of 3D spatial data, and the data extraction phase is further divided into four sub-parts, (i) to (iv) below.
The 3D map required for positioning is saved together with a time axis by scanning the surroundings with a smartphone (standing in for a drone) equipped with a LiDAR (light detection and ranging) sensor and an RGB sensor. In this demonstration, the spatial information on the route needed for automatic driving is acquired by scanning the route after the ramp near point (iv) has been set up, thereby creating a connection (node) to the destination.
(i) The vehicle crosses the boundary between the outdoor space and the indoor space of the Technology Laboratory:
Outdoor positioning is performed using IMUs and a GNSS, while indoor positioning is performed by a VPS integrated with IMUs. When the vehicle approaches the destination building, the 3D maps for indoor positioning and the 3D meshes for route generation are acquired as zoom level 22 spatial voxels (the size depends on latitude and is about 9.6 m around Japan) in order to secure the data necessary for automatic driving and to switch from outdoor to indoor positioning.
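For reference, the sketch below shows how the horizontal edge of a voxel scales with zoom level, again assuming Web Mercator tiling: dividing the equator into 2^22 parts gives roughly 9.6 m, and the edge shrinks with the cosine of latitude.

```python
import math

EQUATOR_M = 40_075_016.686  # WGS84 equatorial circumference in metres

def voxel_edge_m(zoom: int, lat_deg: float) -> float:
    """Approximate horizontal edge length of a spatial voxel at a given latitude."""
    return EQUATOR_M / 2 ** zoom * math.cos(math.radians(lat_deg))

print(round(voxel_edge_m(22, 0.0), 1))    # ~9.6 m at the equator
print(round(voxel_edge_m(26, 35.68), 2))  # ~0.48 m near Tokyo (cf. the ~0.5 m in (iii))
```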
(ii) The vehicle pauses in front of an automatic door that has unexpectedly broken down:
Obstacles can be detected by acquiring data with the LiDAR and RGB sensors on smartphones. However, the Technology Laboratory's automatic door has a mirrored glass surface that laser beams pass through, making proper positioning impossible. In this demonstration we overcame this by attaching a different material to the door, but similar situations could also be handled in other ways, such as by managing the opening and closing of doors in the cloud.
(iii) The vehicle navigates a path around a drone:
A smartphone, which plays the role of a drone in this demonstration, uses the VPS to obtain the location occupied by the drone and sends it to the server. The length and direction of each movement the autonomous vehicle needs to make to reach the destination are then transmitted as an automatically generated route that avoids the location occupied by the drone, extracted by the smartphone as zoom level 26 spatial voxels (the size depends on latitude and is about 0.5 m around Japan).
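The article does not detail the route-generation algorithm itself, so the sketch below is only one plausible reading: a breadth-first search over voxel indices at a fixed zoom level that treats the drone-occupied voxels as blocked. The indices are made up for illustration.

```python
from collections import deque

def plan_route(start, goal, blocked):
    """Find a shortest 4-neighbour path over (x, y) voxel indices, avoiding `blocked`."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route avoids the blocked voxels

# Hypothetical indices: the drone occupies two voxels between start and goal.
print(plan_route((0, 0), (3, 0), blocked={(1, 0), (2, 0)}))
```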
(iv) The vehicle goes up a ramp to reach a destination at a different altitude than its departure point:
Because the autonomous vehicle chooses a route based on location information that includes altitude, it acquires only the data necessary to travel to the destination. For this demonstration, to take advantage of the lightweight representation of occupied locations as spatial voxels, we moved the vehicle by synchronising the locations occupied by each device (a minimal sketch of this idea follows).
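Below is one way such synchronisation could work (all names and IDs are hypothetical): each device claims the voxel it is about to enter and releases the one it leaves, so two devices never occupy the same voxel at the same time.

```python
occupied = {}  # spatial ID -> ID of the device currently holding that voxel

def claim(device_id, voxel_id):
    """Try to claim a voxel before entering it; fail if another device holds it."""
    holder = occupied.get(voxel_id)
    if holder is not None and holder != device_id:
        return False
    occupied[voxel_id] = device_id
    return True

def release(device_id, voxel_id):
    """Release a voxel once the device has left it."""
    if occupied.get(voxel_id) == device_id:
        del occupied[voxel_id]

# The drone holds a voxel, so the vehicle's claim fails and it must replan.
claim("drone", "26/60/59609440/26424800")
print(claim("vehicle", "26/60/59609440/26424800"))  # False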
The above demonstration is part of our efforts to promote spatial IDs. We also provide solutions to various other issues related to 3D spatial information. If you are interested in any of these topics, please contact us.
*1 Real space can also be captured as 4D spatiotemporal data that adds a time axis, but the discussion here is limited to spatial coordinates.
*2 A term coined from ‘volume’ and ‘pixel’: the smallest cubic unit used for 3D representation. The spatial voxels used for spatial IDs are rectangular boxes of varying heights. For details, see page 15 of the 4D spatiotemporal information infrastructure architecture guideline (beta version):
https://www.ipa.go.jp/digital/architecture/Individual-link/ps6vr7000001gz5z-att/3dspatial_guideline.pdf
*3 How we built a digital twin of the Technology Laboratory, our research facility for three-dimensional spatial information, and manage related data
https://www.pwc.com/jp/ja/knowledge/column/technology-front-line/vol14.html
*4 Technology to fuse real and digital spaces: Utilisation of 3D spatial information, interspace research, and the potential of spatial IDs
https://www.pwc.com/jp/ja/seminars/archive/c1230113.html
S. Yanagisawa
Manager, PwC Consulting LLC
Fields and industries of focus:
AR (augmented reality), VR (virtual reality), XR (extended reality), spatial information, metaverse
After working as an augmented reality (AR) app developer at a company specialising in AR and artificial intelligence (AI) solution development and as an extended reality (XR) research engineer at a mega-venture company, Satoshi Yanagisawa joined PwC Consulting LLC. He is also a special lecturer at a fashion school. His strengths lie in the XR field, especially AR, spanning planning, research, proof-of-concept (PoC) work, and service development utilising smart glasses and VR devices. Most recently, he has been involved in R&D related to 3D spatial information and the metaverse.
T. Sasaki
Director, PwC Consulting LLC
Fields and industries of focus:
3D spatial information, drone, mobility, metaverse, digital technology business strategy and concept development
Tomohiro Sasaki worked for an IT solution service provider affiliated with a major general trading company, a foreign consulting firm, and a major accounting audit firm before assuming his current position. He has supported major companies in a variety of industries, including information and telecommunications, high-tech, and media and entertainment, in planning and executing growth and IT strategies. Since joining PwC, he has provided planning and consulting services for analytics-related services. His areas of focus include strategies for utilising digital technologies such as 3D spatial information, drones, flying cars, and the metaverse, as well as smart cities, OMO (online merges with offline), transformation of work styles in the digital age, and RPA planning and implementation promotion. He also speaks at digital-related seminars, including external seminars.
M. Minami
Senior Manager, PwC Consulting LLC
Fields and industries of focus:
Internet, robotics, drones, flying cars, industrial architecture
For more than 25 years, since the dawn of the Internet, Masaki Minami has been engaged in research in the area of the digital society supported by cyber-physical systems. In 2020, he led the architectural design of the autonomous mobile robotics area at the Digital Architecture Design Centre, newly established by the Information-technology Promotion Agency, Japan, and as the centre's programme director he leads the architectural design of drones, service robots, flying cars, and smart cities for Society 5.0. In his current position, he is involved in research on the smart city reference architecture defined by the Cabinet Office, and in supporting the planning, construction, and operation of autonomous mobile robots, smart cities, urban spatial information infrastructure, data collaboration infrastructure, and more.
In particular, by integrating flexible data distribution in cyberspace (virtual space) based on the Internet with technologies that move in physical space (real space), such as drones, robotics, and flying vehicles, on the basis of industrial architecture, new added value can be created in industrial and social infrastructures. His strength lies in supporting projects, businesses, and services that create such value in this way.