Article

Situated AR Simulations of a Lantern Festival Using a Smartphone and LiDAR-Based 3D Models

Naai-Jung Shih 1,*, Pai-Huang Diao 2, Yi-Ting Qiu 1 and Tzu-Yu Chen 1

1 Department of Architecture, National Taiwan University of Science and Technology, 43, Taipei 106, Taiwan; chooi0215@gmail.com (Y.-T.Q.); ziyouchen.0617@gmail.com (T.-Y.C.)
2 Department of Advertising, School of Art and Design, Guangdong University of Finance and Economics, Guangzhou 510320, China; phd@gdufe.edu.cn
* Correspondence: shihnj@mail.ntust.edu.tw; Tel.: +886-2-2737-6718

Citation: Shih, N.-J.; Diao, P.-H.; Qiu, Y.-T.; Chen, T.-Y. Situated AR Simulations of a Lantern Festival Using a Smartphone and LiDAR-Based 3D Models. Appl. Sci. 2021, 11, 12. https://doi.org/10.3390/app11010012

Received: 1 November 2020; Accepted: 16 December 2020; Published: 22 December 2020

Abstract: A lantern festival was 3D-scanned to elucidate its unique complexity and cultural identity in terms of Intangible Cultural Heritage (ICH). Three augmented reality (AR) instancing scenarios were applied to the converted scan data: an interaction with the entire site, a forward additive instancing, and interactions with a pre-defined model layout. The novelty and contributions of this study are three-fold: documentation, development of an AR app for situated tasks, and AR verification. We presented ready-made and customized smartphone apps for AR verification to extend the model's elaboration of different site contexts. Both were applied to assess their feasibility in the restructuring and management of the scene. The apps were implemented under homogeneous and heterogeneous combinations of contexts, ranging from an as-built event description to a remote site as a sustainable cultural effort. A second reconstruction from screenshots, in an AR loop process of interaction, reconstruction, and confirmation, was also made to study the manipulated result in 3D prints.

Keywords: AR; LiDAR; RP; smartphone; 3D sustainability; lantern festival; intangible cultural heritage

1. Introduction

Festivals have been categorized as a form of Intangible Cultural Heritage (ICH) [1–3] and promoted by different governments, organizations, and media [4–7]. A lantern festival not only constitutes an important cultural asset, but also creates dynamic interactions with the urban fabric. The festival setting generates a temporary fabric, which is usually removed after the event and deserves both documentation and dissemination. For physical installations, related studies have focused on photogrammetry modeling [8], interactive systems [9], augmented reality (AR)/virtual reality (VR)/mixed reality (MR) [10], and 3D printing [11], in which devices such as 3D scanners [12–14], low-cost 3D scans using Google Tango (2014, Google Inc., Mountain View, CA, USA) tablets [15,16], smartphones or handheld devices [17], and unmanned aerial vehicles (UAV) and light detection and ranging (LiDAR) [18] have been applied with different levels of efficiency and effectiveness. The photogrammetry models were created by applying multi-sourcing [19] or crowdsourcing [20], or in a collaborative manner [21].
Moreover, virtual humans [22], virtual crowds [23], large populations [24], video data of real actors [25], and cartoon-like characters [26] were applied to attract users' interest or as a way to promote communication. These applications have proven feasible for varied cases [27–31] and scales [32,33], with awareness of location and context [34]. The documentation is related to information systems [35] and forms part of a communication model [36] for exploration and dissemination [37]. Engagement and learning were also assessed [38].

Point clouds have been applied to create mesh models for AR applications [39]. Current technology already enables automatic documentation and registration, at the size of a backpack, for terrains that are unreachable by vehicle. Although light detection and ranging (LiDAR) scans can apply a Global Positioning System (GPS) and Inertial Measurement Unit (IMU) on a vehicle for city model reconstruction [40], a static scan is usually applied for a superior scan and registration result. The modeling effort in conversion, meshing, and hole-filling can be alleviated by the direct utilization of a point cloud in AR, despite its limits in transparency, larger file size, and the interference of other objects during the scan process. Indeed, a well-sampled and well-textured point cloud model provides a convincing effect with the high resolution of an as-built scene.

AR-related applications have been successfully implemented in navigation, education, industry, and medical practice [41–44]. They combine, register, and interact with real and virtual information in a real environment [45–47]. Reality with an interactive and highly dynamic experience can be explored using mobile AR applications for layers of information [48,49]. Considering a case study of a local lantern festival formerly applied to a communication model [50], engagement can be facilitated, with improved comprehension, through interactions with 3D scenes or entities in AR. To recall a festival that comprises a complicated setting, AR should be applied so that the representation of the past virtual environment can be enriched to achieve a vivid tourism experience.

AR applies 3D models in different environments for meaningful context elaboration. The interactive models are usually applied to different sites to fulfill aims in pedagogy or professional practice. The models, which are presented as part of a general structure with no specific preference or constraint, may require re-construction or re-framing of a scene for a specific task. Management issues arise for models, or their instances, with matching backgrounds. The reconstruction of appropriate scenes has proven to be highly beneficial in pedagogy with real-time streaming of contents. Mixed reality (VR and AR) has also been applied to cultural heritage for the benefits of both [51]. Although post-processing needs to be applied for a higher reality effect, direct collaboration in a real environment using AR can be more straightforward in connecting the temporal experience of an event between different locations and periods of time. The proposed 3D data acquisition was conducted in the evening.
The 3D laser scan constitutes an effective tool for acquiring geometric attributes, such as detailed trees and vegetation [52]. For night-time road and street environments, lighting conditions were evaluated through 3D point clouds [53]. A previous investigation used a 3D scan for a special urban evening event in a city and a festival [54]. Although laser scanners were assessed for accuracy under different illumination conditions [55], structural details still need to be verified in low-luminance conditions. However, 3D scanning laser rangefinders can support visual navigation techniques by converting raw laser intensity data into greyscale camera-like images [56].

2. Research Purpose and Methodology

For a lantern festival held in the evening with crowds, the current study aimed to determine whether large-scale documented 3D data can be applied in AR interactions or to assist planning [57] in professional design practice. Moreover, we aimed to investigate whether the various planning flexibilities of AR interactions can be exercised in a ready-made or customized environment, whether convenient mobile devices, such as smartphones, can be applied in AR, and whether human characters can be applied as part of a scene by increasing the involvement of the surrounding crowds in AR.

This research intended to represent and reconstruct a cultural festival. The representation required appropriate 3D documentation to retrieve and identify significant settings of the scene. The reconstruction of old scenes concerns related context applications in terms of the correlations and interactions of installations of different types and scales. The exemplification should also be applicable to traditional festivals held in the past.

We proposed LiDAR-based 3D data and an AR app to support documentation and interaction in the representation and reconstruction. The 3D entities should be interacted with using a convenient smartphone and app for different scenarios with matching contexts. Cloud access to a ready-made decimated point cloud or mesh models should enable the app to be implemented under homogeneous and heterogeneous combinations of contexts, as an extension to 3D sustainability.

This study should not only develop an app, but also restructure a cultural event to test the feasibility of context adaption. The representation of cultural heritage as a single object differs from that of an event, which consists of urban fabrics, installations, facilities, supporting structures, and visitors. The introduction of an entity should be represented with a collection of multiple correlations in an arrangement of a spatial structure. The correlation forms the complexity of the scene and the AR to be applied.

Challenges exist in presenting a contemporaneous traditional event, concerning how to define related elements, establish correlations, and create media to determine pertinent relationships between factors. As-built data from real scenes should be utilized to support the illustration of details and spatial structure to a level that establishes a synergistic relationship with the AR application. Reverse engineering should also be implemented with approaches, processes, and expected outcomes to interpret the unique case of a festival, instead of inspecting original design plans.

A verification of the simulation was performed by the reconstruction of a 3D model in an AR loop process of interaction, reconstruction, and confirmation.
A series of screenshots was recorded for a second, photogrammetric modeling of a 3D-printed model to study the manipulated result. The novelty of this research was the application of AR to a cultural festival, with data originating from a relatively large point cloud. This app should offer models with higher documented reality. The 3D entities should be interacted with using a convenient smartphone and app for different scenarios with matching contexts. Tests by the researchers should be conducted at the original site and at a different location as a situated application. Test opinions should verify the applications in terms of, for example, re-experiencing a former event, enabling the display or 3D layout of context elements such as a canopy with an on/off option, and their successes and restrictions. One of the main purposes of this article is to discuss the feasibility of the application; the survey of user experience will be the focus of a second stage in the future.

This paper is organized in the following sections: introduction, research purpose and methodology, background of a lantern festival, restructuring and management of AR instancing, implementation of the 3D scan data, three AR instancing scenarios, verification by the situated application and reconstruction of a 3D model, discussions, and conclusions.

3. Background of a Lantern Festival

A lantern festival was selected for its unique complexity, cultural identity, and role in memory. The festival is an important Chinese event held immediately after the Lunar New Year. It features enormous collections of installations inspired by folk stories and/or traditional architecture. The installations create a remarkable night scene and attract visitors island-wide (Figure 1a). The event has a major theme lantern of the year, with designated subjects distributed in various zones. Large-scale displays have become a regional highlight and have promoted tourism. This extraordinary urban evening event represents the planning of a display and interaction with traditional icons and customs.

The festival only lasts for approximately two weeks. A feasible application of model instancing in different urban contexts should be provided to fulfill a recall or promotion attempt. For a short-term festival event with a large number of installations, a 3D scan provides rapid and detailed architectural documentation of the lanterns and the spatial structures (Figure 1b,c). A system should be developed to integrate the documented data and a convenient device to represent the festival with a co-existing relationship of entities. The 3D data are important in the construction of spatial frameworks, so that a tourism-like experience can be constructed from linear axes with relative reference made between the main lantern, smaller installations, visitors, and supporting constructions.

Figure 1. (a) The 2016 Taipei Lantern Festival with different types of installations; (b) the spatial structure; (c) the main entrance to the main theme lantern using the point cloud model.

4. Restructuring and Management of AR Instancing

An AR app works differently with the management of instances and backgrounds before and after instancing. An interaction starts by selecting models and manipulating backgrounds for subsequent subjective study. Then, a model has to work with its relationship to other models. Assistance may be required to anchor or align one to another.
Management here differs from instancing in a general AR application with a free selection of elements for a room or a space, since a problem space is defined by specific elements (e.g., a furniture set) and a constrained operation of relative location.

A bottom-up or top-down metaphor is present in the restructuring and management process. A scene is usually created as a bottom-up process, from sequential instancing of individual models to the final scene. All of the models come from categories that concern the same hierarchy. A general-purpose off-the-shelf app or predefined platform, such as Sketchfab® (2020, Sketchfab, New York, NY, USA) [58] or Augment® (2020, Augment, Paris, France) [59], usually supports multiple categories of 3D models. The instances present a general structure with no specific preferences or constraints in application.

The structuring and management of AR models are different using the 3D scan process. For a festival with 3D scans, the individual scans are registered into a whole as a bottom-up process. Each model object must then be separated from the registered point cloud model in a top-down differentiation process. Both processes construct and restructure a hierarchy. Irrespective of how a scene is organized, a group of instances is initially subject to a certain organization of framework. The restructuring and re-framing of a scene involves, for example, the grouping or ungrouping of specific instances for a task. Moreover, each instance may have dissimilar degrees of freedom. For instance, an interior design may apply four degrees of freedom (three-way translation + Z-axis rotation) to each piece of furniture, fixed-shape installation, or facility (see the sketch at the end of this section).

AR requires the restructuring and re-framing of a scene with an appropriate switch between the macro- and micro-scope. Comparing the constrained and free manipulation of instances, a scene should be initiated with a top-down decomposition of elements for bottom-up context matching. The switch between a macro- and micro-perspective is crucial, not just in scaling the scene for appropriate visibility, but also in restructuring a scene and its elements to support design comprehension and development against the different backgrounds of urban fabrics or objects with cultural identities. In real applications, the macro-view presents the application, Sketchfab®, with the full scope of the point cloud. The micro-view, on the other hand, presents the development of an app using ARKit (v1.5, Apple, Cupertino, CA, USA) [60] or the application, Augment®. The former is a customized, domain-specific, and design-specific application with a constrained arrangement of instances; the latter constitutes a general-purpose app with general support of instancing arrangements.
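As an illustration of such constrained manipulation, the following minimal sketch (Python with NumPy; the class name and conventions are our own, not taken from any app discussed here) composes a pose from exactly four free parameters:

from dataclasses import dataclass
import numpy as np

# Hypothetical sketch of a 4-degree-of-freedom instance pose: three-way
# translation plus Z-axis (yaw) rotation only, so an instance can be moved
# and turned on the ground plane but never tilted.
@dataclass
class Instance4DOF:
    x: float = 0.0    # translation, in meters
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0  # rotation about the vertical axis, in radians

    def matrix(self) -> np.ndarray:
        """Compose the homogeneous 4x4 transform from the four parameters."""
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        return np.array([[c, -s, 0.0, self.x],
                         [s,  c, 0.0, self.y],
                         [0.0, 0.0, 1.0, self.z],
                         [0.0, 0.0, 0.0, 1.0]])

# Example: a lantern instance moved 3 m along X and rotated 90 degrees.
print(Instance4DOF(x=3.0, yaw=np.pi / 2).matrix())

Restricting the pose to these four parameters is what keeps a furniture-like instance upright while still allowing free placement and orientation on the ground plane.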
5. Implementation of the 3D Scan Data

The 3D data were implemented for cultural preservation, context extension, and the spatial structure analysis of a sustainable festival. The preservation was conducted with a direct connection to our daily life, in which the memory should be preserved and recalled. The presentation of structure was exemplified by the definition of spatial frames and elements, such as the canopy and ground level, and infills, including installations and crowds. The structure led to an appropriate representational hierarchy of the scenes to be displayed. Indeed, any part of the framework or lantern installation was feasible for a homogeneous and heterogeneous combination of contexts.

The mesh model of multiple objects or the point cloud of the entire site practically correlates the data in different hierarchies, from field work and laboratory experiments to different sites. Both data types were able to be interacted with. The app application and development followed the layout of objects and the spatial structure of an event. Since the off-site instancing experience is to be extended to other architectural spaces or urban fabrics, explorations were made of different development approaches, data retrieval, format conversions, app platforms, and instancing scenarios.

3D scan data were utilized to define festival entities and their correlations, both visually and physically. A Faro Focus 3D scanner (FARO Technologies Inc., Stuttgart, Germany) was used to scan the 2016 Taipei Lantern Festival. Field work included 37 scan locations, from which more than 19 GB of data, or 543 million points, were retrieved in a region approximately 300 m wide and 250 m long. The subsequent data manipulation effort constituted more than 10 times the field effort. A detailed description of the methods applied can be seen in Figure 2, which includes registration inspection, filtering and decimation, scene segmentation, texture adjustment, and format conversion. It is worth noting that holes were purposely not filled to prevent misleading outcomes of the configuration in over-crowded scans.

Figure 2. The data flowchart. Registration of the 3D scans produces a point cloud for two branches of 3D documentation: (1) decimation/filtering of the point cloud, mesh model wrapping and hole filling, mesh model inspection and normalization, creation and adjustment of textures, export as OBJ or ZIP, Unity import and menu scripting, and publishing/browsing in AR (app by ARKit®, Augment®); (2) decimation and format conversion to .ply point cloud models, VR import and model adjustment (Sketchfab®), and publishing/browsing in VR.

Two approaches were utilized. The first used a public platform to engage the point cloud as the data type to interactively manipulate in VR. The second was an app developed specifically for the mesh model, converted from the point cloud, in AR. Both types of interactions have to be made on a smartphone platform for the different scenarios of site contexts.

The point cloud model was also able to create a mesh model for AR user interaction (Figure 3). Although photogrammetry can create a large-area terrain model using images taken by an Unmanned Aerial Vehicle (UAV) or Unmanned Aircraft System (UAS), the mid-range 3D scanner captures small objects, such as power wires, stage frames, lamp poles, tree branches (Figure 4), and installations on building façades, with sufficient detail to facilitate inspection. In addition, crowds were also used as a specific element for the reference of scale (Figure 5), and real depth was usually applied by the Depth API (ARCore 1.18, Google Inc., Mountain View, CA, USA) [61].

The model worked with software, including CloudCompare® (v2.6.1, EDF R&D, Paris, France) [62], Geomagic Studio® (v2014.1.0, former Raindrop Geomagic Inc., Morrisville, NC, USA, now 3D Systems Company, Rock Hill, SC, USA) [63], Meshlab® (v1.3.3, Visual Computing Lab–ISTI–CNR, Pisa, Italy) [64], and Autodesk Revit® (2020, Autodesk Inc., San Rafael, CA, USA) [65], to review the model, convert formats, and retrieve sections, skylines, ground plans, and images.
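The conversion chain sketched in Figure 2 can be approximated with open tooling. The following hedged sketch uses the open-source Open3D library in Python instead of the commercial packages listed above; the file names and parameter values are illustrative only:

import open3d as o3d

# Illustrative pipeline following Figure 2: decimate/filter a registered scan,
# wrap it into a mesh, and export OBJ for import into an AR toolchain.
pcd = o3d.io.read_point_cloud("festival_registered.ply")  # registered point cloud
pcd = pcd.voxel_down_sample(voxel_size=0.05)              # decimation / filtering
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

# Ball pivoting stands in for "mesh model wrapping"; unlike Poisson
# reconstruction, it leaves scan holes unfilled, matching the deliberate
# choice not to fill holes in over-crowded scans.
radii = o3d.utility.DoubleVector([0.1, 0.2])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
o3d.io.write_triangle_mesh("festival_part.obj", mesh)     # for Unity/ARKit import

The voxel size and pivoting radii would need tuning to the scan's point spacing; they are placeholders here, not the settings used in the study.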
In addition to the color illustrated on 3D point clouds, an Eye Dome Lighting (EDL) rendering process [66] was applied to differentiate physical elements, particularly in a colorless mode, for a better display of edges, depths, or silhouettes (Figure 5b) during the study process.

Of the two major AR types [67], markerless AR contributes to more diversified application scenarios [68,69], considering the difficulty involved in arranging markers pasted on building surfaces in marker-based AR. A markerless smartphone AR app was developed for convenient interactions with lanterns and facilities.

Figure 3. (a) Augmented reality (AR) models for situated simulations; (b) corresponding representations in the AR app.

Figure 4. Original scan: 1.38 million points vs. Augment AR mesh: 0.28 million points, 0.50 million polygons.

Figure 5. (a) The crowd appeared as multiple moving trails in radial shape around the scanner; (b) grouping configurations.

6. Three AR Instancing Scenarios

Three AR instancing scenarios were applied (Figure 6), based on whether a scene could be manipulated at the element level and how the manipulation was conducted (with or without a pre-defined model structure in the initial state of interaction). The tests ranged from the interaction made with an entire scene using an existing app to an individual model within a pre-defined layout using a customized app. A homogeneous combination of backgrounds refers to a plain one, or the same site as a festival recall. A heterogeneous combination refers to different backgrounds in the context or site of study, promotion, or design development.

Figure 6. Three scenarios of AR instancing and the corresponding data and manipulation: (1) one entity, the festival scene, as a 3D point cloud in Sketchfab®, scaled against a homogeneous plain background for festival recall; (2) one or multiple entities, the festival elements, as a set of 3D meshes created from the point cloud with an undefined model structure in Augment®, scaled, translated, and rotated against a homogeneous (same site, festival recall) or heterogeneous (new site and context, festival promotion or study) background; (3) multiple entities, the festival structure, as a set of 3D meshes created from the point cloud with a pre-defined model structure in an ARKit-based app, translated, rotated, and structure-aligned against a heterogeneous background (new site and context) for design development.

The three scenarios were applied as follows. The first two were conducted as tests and led to the development of the last one, a customized app to represent a previous lantern festival. The tests were conducted in four places: a laboratory, the original festival site, a remote site, and a personal working space.

• The first: to browse. An interaction with the entire site was made. The point cloud of the entire festival scene was decimated, viewed, and interacted with in VR, but no sufficient support for converting the point data to AR was provided.
• The second: to interact and interpret. A forward additive instancing and interaction was made. A general set of instances was used for context-matching, from individual elements to a scene. Mesh models were created and were effective in free instancing. However, no sufficient support for an on/off display option or assistance for precisely controlling the relative location was provided. Different types of background context were applied as needed.
• The third: to recall, interact, interpret, switch displays, and evaluate. An interaction with the pre-defined model layout was made. A structure with a set of pre-loaded instances was used for the recall of former lantern settings. The customized app had the relative locations of the models pre-defined as a reference to familiar scenes. Context matching was carried out from a remote site. Design alternatives were supported with on/off options for each model for evaluation. Different types of background context were applied as needed.

6.1. An Interaction with the Entire Site

For the comparison of the application environment, a smartphone with Android 8.1 (Google Inc., Mountain View, CA, USA) or later was adequate to display individual entities using Sketchfab® without menu customization. It allowed multiple cloud downloads of 3D models in mesh or point cloud format. Instead of restructuring the scene, the entire scan was used (Figure 7a–c). The original registered file was 19 GB in PLY format. It was decimated to 190 MB, or 12.2 million points (truncated from 22 million by the app), which is just under the 200 MB limit of the subscribed plan (see the sketch at the end of this subsection). The 3D point cloud model in AR was not supported. For a smaller mesh model, the mesh model in Augment® (Figure 7d) was also able to be interacted with (Figure 7e).

Figure 7. (a) Screenshots of the Sketchfab® app in browse and virtual reality (VR) mode; (b) converting process for decimation and model setting; (c) a full resolution with the silhouettes of crowds identifiable when a small part was presented; (d) the Augment® models; (e) the decimated mesh model in Augment®; (f) the point cloud model in Meshlab®.

Trade-offs existed between the details, the file size, and the scope of the scene. The file size was approximately two-thirds of that of the app developed with ARKit in the third scenario, irrespective of whether or not the entire scene was available. A more direct application of the point cloud was made possible without segmentation into parts. A more connected working environment was achieved between the manipulations prior to the AR conversion and afterwards. The full resolution of the point cloud before decimation can identify the scale of crowds, from small particles to curved silhouettes (Figure 7c,f).

One of the axes in Figure 1b was applied to illustrate the main theme lantern and a group of purposely modeled visitor configurations for context adaption. Users were encouraged to relocate themselves and run the app in different contexts to make comparisons.
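The decimation step described above (the 19 GB registered cloud reduced to roughly 12.2 million points to fit under the platform's 200 MB plan limit) can be sketched as follows. This is an assumed workflow using Open3D, not the exact tool chain used, and the numbers and file names are placeholders:

import open3d as o3d

TARGET_POINTS = 12_200_000  # approximate budget that kept the PLY under 200 MB

pcd = o3d.io.read_point_cloud("festival_registered.ply")
ratio = min(1.0, TARGET_POINTS / len(pcd.points))
small = pcd.random_down_sample(ratio)  # keep a uniform random subset of points
o3d.io.write_point_cloud("festival_upload.ply", small, write_ascii=False)
print(f"kept {len(small.points):,} points")

Random downsampling preserves the overall silhouette of crowds and installations at a fraction of the size, which is why the decimated cloud remains legible in Figure 7.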
6.2. A Forward Additive Instancing and Interaction

The smartphone AR app, Augment®, was used for the individual instancing of the components of the spatial structure presented in the second scenario. Each lantern was manipulated as needed in different urban fabrics or in drawings when a matching context was selected. The models were re-installed back at the original site as a recall of a former event (Figure 8). The interaction was mainly recorded in the evening by screenshots, third-person photographs, and the stepwise operation process (Figure 8d). Each model could be relocated or scaled based on its relative proportion to the researcher for a personal and flexible interpretation of the design. The interaction also brought people's attention to how a former temporary fabric had enriched the activity of a public open space with cultural diversity.

Figure 8. The situated revisit and recall of an old festival with AR screenshots (a), screenshots in the evening and late afternoon (b), third-person photographs (c), and the stepwise operation process: launch the app, scan the QR code of a selected 3D model, detect a plane, tap to allocate the model, and adjust the model's scale, location, or rotation angles (d).

For a group layout tested in the laboratory, all of the model instances created a working space without a predefined relative location to each other. Although the working space could be shared and recalled afterwards, the initial state was created from scratch as part of a forward design loop. In particular, a model could not be individually displayed on/off as needed for layout evaluation unless it was deleted. This on/off option, which is important in the presentation of design elements, was changed in the customized app described in the following section.

6.3. An Interaction with a Pre-Defined Model Layout

A meta-relationship between elements was expected, with identified arrangements between people, installations, temporary fabrics, and permanent fabrics. The development steps and the structural diagram of the elements of the AR app can be seen in Figure 9. The markerless AR app was created in Unity (2020, Unity Technologies, San Francisco, CA, USA) using the ARKit SDK (2020, Apple Inc., Cupertino, CA, USA). Visual Studio® (2020, Microsoft, Redmond, WA, USA) was used to edit the display and switch the virtual models. The menu to the left of the screen could be retracted as needed. The interacted items and system interface of the app included the selection of a series of objects. All of the contents could be selected individually or accumulatively. Only a surface, such as the ground level, was required to anchor the 3D models, instead of a barcode or a picture. The markerless app made the application operate more straightforwardly, without being restricted to additional marker-pasting procedures or potentially causing damage to the targeted surface.

Figure 9. The development process of the AR app (top), the smartphone interface (middle), and the structural entities of the lantern festival (bottom).

An iPhone XS Max® (A12 Bionic chip, 4 GB RAM, 64 GB internal storage; Apple, Cupertino, CA, USA) was used to test the app. The final size of the app was 288.5 MB. Due to the limited storage space of a smartphone, increasing the polygon number caused a delay in the screen display, with flickering. Decimation was made by a defined polygon count or percentage. The cloud model was converted to a mesh model in OBJ format and decimated using Geomagic Studio® (v2014.1.0, former Raindrop Geomagic Inc., Morrisville, NC, USA, now 3D Systems Company, Rock Hill, SC, USA). For display efficiency, the original mesh model of 121 MB (0.73–62.70 MB each) was divided into seven parts along two linear and perpendicular axes (Figure 10a); a partition sketch follows Figure 10. Each decimated part comprised approximately 88,106–887,408 polygons. The standalone mesh model was 171.1 MB, comprising approximately half of the polygon count wrapped afterwards from the original scanned points. The axes represented one of many possible combinations that visitors perceived from different perspectives (Figure 10).

Figure 10. Two axes of lantern models (a) and many possible combinations perceived by visitors from different orientations (b).
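The seven-part division and per-part decimation described above can be sketched as follows; the slab count, polygon budget, and file names are illustrative assumptions using Open3D, not the Geomagic settings actually used:

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("festival_site.obj")
bbox = mesh.get_axis_aligned_bounding_box()
cuts = np.linspace(bbox.min_bound[0], bbox.max_bound[0], num=8)  # 7 slabs on X

for i in range(7):
    lo = np.array([cuts[i], bbox.min_bound[1], bbox.min_bound[2]])
    hi = np.array([cuts[i + 1], bbox.max_bound[1], bbox.max_bound[2]])
    part = mesh.crop(o3d.geometry.AxisAlignedBoundingBox(lo, hi))
    # Decimate each part to a polygon budget for smooth smartphone display.
    part = part.simplify_quadric_decimation(target_number_of_triangles=500_000)
    o3d.io.write_triangle_mesh(f"festival_part_{i}.obj", part)

Loading the site as separately decimated parts is what lets the app toggle individual scene elements while keeping the on-screen polygon count within the device's limits.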
The site planning, which was subject to the programming of the festival, created a hierarchical space based on the management of two-axis spatial structures for lanterns and facilities (Figure 11). The structure constituted a semi-open arcade consisting of the arrangement of a roof, visitors, and three groups of lanterns. One of the three groups of lanterns was selected to form another axis, with two additional instances to be interacted with upon selection from the menu. The app display scenario was constructed based on the as-built spatial structure, similar to what occurs in the adjacent fabric of an existing 3D spatial framework in reality. Lantern installations and crowds can be displayed selectively to reveal the scale factor of the surrounding environment, from a smaller installation to a large canopy.

Figure 11. The two-axis spatial structure (a) or an individual lantern (b) combined with different real environmental contexts in the evening or in the daytime.

An event-specific or occasion-related app should be designed to enable 3D scenes with unlimited adaption outside of the original display scenario. Our app was designed with the individual scenes separated, so that it can be carried around to project the standalone 3D model in different urban fabrics as a correspondence to different cultural contexts or as a feasibility test of different environmental settings. Since this app does not require markers, the designated models are suggested to be projected at ground level to merge with existing urban contexts as shown on screen, such as different landscapes, architecture, streets, or open spaces. Indeed, a user can walk inside a virtual crowd and feel the relative proportion with the theme lantern, just like in reality. The main theme can be relocated and evaluated in a more diversified arrangement of urban space, or next to the theme lantern of the next year as a connection to a past memory to promote festival sustainability.

The on/off display is a typical function commonly applied in drafting or modeling environments. Many tools achieve this through, for example, an option to control layer visibility. However, none of these tools should delete the model, since a model represents a trail of a former design and a record for future reference. This is particularly the case when a modification is made inside an environment replete with as-built facilities. The modification should start with a set of models called from a former version of a design plan, followed by an on/off display switch to reveal their correlation to the spatial structure. The display support, the individual manipulation options, and a pre-defined structure of models made the app a diversified design tool for management.
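This hide-rather-than-delete behavior can be modeled as a small registry; the following is a hypothetical sketch of the management logic only, not the app's actual (Unity/C#) implementation:

from dataclasses import dataclass, field

@dataclass
class SceneModel:
    name: str
    visible: bool = True  # toggled on/off; the model itself is never deleted

@dataclass
class FestivalScene:
    models: dict = field(default_factory=dict)

    def add(self, name: str) -> None:
        self.models[name] = SceneModel(name)

    def toggle(self, name: str) -> None:
        self.models[name].visible = not self.models[name].visible

    def visible_names(self) -> list:
        return [m.name for m in self.models.values() if m.visible]

scene = FestivalScene()
for name in ("canopy", "main_lantern", "crowd"):
    scene.add(name)
scene.toggle("canopy")        # hide the canopy; its record and pose remain
print(scene.visible_names())  # ['main_lantern', 'crowd']

Because a hidden model keeps its record and pre-defined pose, the design trail stays available for later comparison, unlike the delete-only option in the ready-made app.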
Combining AR and a 3D model of night scans achieved a captivating experience. The interaction with entities created an extended system of point cloud data. Scenes are better shown in the evening or in a room with the lights turned off. The app can also be used in daylight for the design layout evaluation of different sites. The crowds, which are usually avoided for a clear display of a scene, were purposely retained as a scale reference. This allows a user to walk into the crowds, exactly as experienced in the festival.

The validation of the implementation was primarily made with field tests of multiple urban contexts or cross-region tests from different locations. Other than the weather issues present in outdoor application, the app proved feasible for indoor browsing at any time. Since the file size is up to 190 MB, a capable bit stream connecting to the cloud should be provided. A 4G network was used and showed that the benchmark browsing experience was acceptable.

7. Verification by Situated Application and Reconstruction of the 3D Model

The verification was made by a situated application and the reconstruction of a 3D model in a loop process (Figure 12) of interaction, reconstruction, and confirmation. The AR interaction from different orientations was recorded as a series of screenshots for photogrammetric modeling. The reconstruction from interaction created a more solid integration of the context during the situated study. To confirm the manipulated result, the AR simulation was substantiated and further verified by a 3D-printed model (Figure 13). We found that rapid prototyping (RP) confirmation enabled a broader interpretation of lantern installations with context, based on personal preference. The interpretation brought back the former entity and facilitated a more flexible elaboration of the original design. A sketch of this screenshot-to-model step follows Figure 13.

Figure 12. The AR interaction, reversed reconstruction, and rapid prototyping (RP) confirmation process.

Figure 13. Reconstruction of the lantern in a new context as a sustainable preservation effort: (a) new allocated setting with the surrounding context; (b) photogrammetry modeling case 1; (c) modeling of case 2 made from the screenshots taken of the AR scenes; (d) the modification process; (e) AR QR code of the model; (f) 3D-printed model and layout case made by reversed reconstruction.
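The paper does not name the photogrammetry package used for this step; as one hedged way to reproduce it with open tooling, a folder of AR screenshots can be passed to the COLMAP command-line reconstructor (assuming a local COLMAP installation; the folder names are illustrative):

import subprocess
from pathlib import Path

workspace = Path("recon_ws")
workspace.mkdir(exist_ok=True)

# Run COLMAP's end-to-end reconstructor on screenshots captured while
# circling the AR model; sparse and dense results land in the workspace.
subprocess.run(
    ["colmap", "automatic_reconstructor",
     "--workspace_path", str(workspace),
     "--image_path", "ar_screenshots",
     "--quality", "high"],
    check=True)
# The dense result can then be meshed, cleaned, and sent to a 3D printer.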
7.1. Field and Laboratory Tests

The three AR instancing scenarios mentioned above represented simulations made from a programmer's or a user's perspective. The application in Guangdong proved that the spatial structure and large staged installations can be elaborated directly from an app programmer's point of view in a park or plaza, the same as a designer could do. The application in Taipei proved that installations can be easily interpreted and improvised by researchers to assimilate into the original context, as visitors would.

Although situated applications have been used to enhance or re-experience a former event, a pure application should evolve into a reconstruction of a test object and context in architectural pedagogy or design practice. More detailed tests were conducted in a laboratory or personal working spaces, where different contexts were deployed for possible design elaboration and a user was also a designer. The tests followed the sequence and relationship illustrated in Figure 13. The reconstruction as a design process has to lay out the 3D model of an installation in a real environment or an artificial setting to replicate, simulate, or extend the former context. Based on the layout and complexity of the background, the context was classified into a plain background using a white cloth sheet, a more featured background using a piece of wood to simulate the landform, and a field test with a visit to a real site.

The experiments summarized the cases of success and restriction (Table 1). The reversed reconstruction created models with different levels of detail. Most isolated subjects had models created with more detail than the rough models represented in a circled background, because of the depth issue or being merged into the background. Due to the interference of pedestrians during the 3D scan, some models were not created because of incomplete boundaries.

Table 1. Experiments in terms of field cases and design applications. Background: White Sheet / Wood / Field; Arrangement: Circled / None; a V marks the applicable background and arrangement columns; Model V indicates that a photogrammetric model was created. Rows A–N: field simulation (Figure 8a, Figure 13f); rows a–l: design simulation (Figure 13b,c, and below).

Test; Background/Arrangement; Images; Time (min); Model
A; V V; 16; 10; –
B; V V; 33; 10; V
C; V V; 25; 10; V
D; V V; 46; 10; V
E; V V; 4; 10; –
F; V V; 3; 5; –
G; V V; 7; 5; –
H; V V; 1; 5; –
I; V V; 29; 15; V
J; V V; 2; 5; –
K; V V; 1; 1; –
L; V V; 30; 6; –
M; V V; 17; 5; V
N; V V; 1; 1; –
a (lab.); V V; 28; 7; V
b (lab.); V V; 74; 10; V
c (lab.); V V; 74; 10; V
d (lab.); V V; 81; 7; V
e (lab.); V V; 55; 11; V
f (lab.); V V V; 63; 8; –
g (lab.); V V V; 32; 5; –
h (lab.); V V V; 100; 16; –
i (lab.); V V V; 64; 10; –
j (home); V V; 51; 6; V
k (home); V V; 86; 16; V
l (home); V V; 64; 12; V

The complexity of the background contributed to the level of detail. Using the same cloth but a more complicated background, or being surrounded by wood, was more likely to create details. In a laboratory test with a white sheet acting as a simple and plain background, it was less likely that horse feet would be modeled, due to the lack of reference. Models were deformed if the images were taken in an extended zoom-out mode, since a conflict of depth occurred and the model always appeared in front of the wood. Success, with more detail, was made possible when we zoomed in on the subject through 360°. Similar results were obtained by circling separated pieces of wood around the AR model to create a vacancy for the subject.

Field tests were undertaken at the original festival site in the daytime and in the evening. Researchers reported that AR models with dimensions marked on the screen were very helpful in allocating a subject with a close resemblance of size at a 1:1 scale in the field. Most of the lanterns were originally allocated at the center of a grassland. The simulation of the original setting was not successful, since the soft pavement, with grass waving in the wind, added a drifting problem to the model. Only one model was created on the grassland. Glare from street lamps occurred and prevented taking good pictures. Therefore, a model should be allocated to avoid these problems. More interference from pedestrians occurred in the evening than in the daytime. Although adjustments had to be made for the interference, the picture-taking time remained similar.

7.2. Drafting-Like Assistant and Context-Based Assistant

The initial design stage, without the models placed at the correct locations, can be time-consuming. The test actually took much longer to allocate all of the models in a relatively linear or perpendicular layout (Figure 14), since adjustments to specific measurements or alignments were difficult in AR. Although the model size was annotated clearly, the plan projection of the final layout exhibited a larger tolerance than anticipated. As a consequence, a good interaction environment would require an auxiliary function for layout assistance (a snapping sketch follows Figure 14). This concern led to the development of the app with a pre-set layout of models with which to start.

Figure 14. AR app for individual instancing.
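One such auxiliary function could snap roughly placed instances onto a best-fit axis. The following is a hypothetical NumPy sketch of that idea, not part of the developed app:

import numpy as np

def snap_to_axis(positions: np.ndarray) -> np.ndarray:
    """Project (N, 2) ground-plane positions onto their best-fit line."""
    center = positions.mean(axis=0)
    _, _, vt = np.linalg.svd(positions - center)  # PCA via SVD
    axis = vt[0]                                  # principal layout direction
    t = (positions - center) @ axis               # signed offsets along the axis
    return center + np.outer(t, axis)

# Example: four models placed roughly in a line are aligned exactly.
placed = np.array([[0.1, 0.0], [1.0, 0.2], [2.1, -0.1], [3.0, 0.15]])
print(snap_to_axis(placed))

Snapping of this kind would reduce the plan-projection tolerance noted above while leaving the spacing of the models along the axis untouched.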
8. Discussions

The AR application with cultural concerns needs to be explored more deeply, since the interaction is not limited to the manipulation of geometric instances, but also involves the cultural background that exists behind those instances. After all of the context-matching AR effort, it was concluded that the interaction between the AR design and the real-world case was critical, as either could provide a deeper comprehension of the local cultural identity. The local icon contributed a meaningful representation for the promotion process on the AR platform. The interaction experience of users under the tourism paradigm can be extended to a design practice that facilitates future urban or landscape design. General-purpose AR instancing is also meaningful when a background is provided and related to a cultural activity. The instancing can then be applied to different contexts and immediately induce a familiar festival feeling.

A traditional heritage is location-specific and cannot be relocated. The contributions can now be elaborated and symbolized in different forms of media for the creative culture industry, between the traditional and modern values of time and space. The core cultural activity can be promoted from an annual cross-city event to all-year-round cloud access. In addition to the recycling of physical lantern creations in the subsequent year, the sustainability of the lantern festival can be achieved through the scenes archived in digital format, the AR experiences of previous festivals, the construction of the spatial structure, and the 3D interaction with theme lanterns. In contrast to the richer color retrieved using photogrammetry technology, 3D scans collect more detail to differentiate parts. Both types of mesh models can be accessed from cloud data and viewed through an Internet platform. In contrast to a photogrammetry model with high reality, the level of structural detail is much greater.

The 3D sustainability of a point cloud in VR and a mesh model in AR represents an extended experience for point cloud-related applications. It constituted the execution of reverse engineering applied to festival preservation, with an extension to current heritage data and technology. A subjective interaction with cultural instances was made by context instancing, selective instancing, and management, as a tool of design by AR.

8.1. Novelty and Contribution

The novelty and contribution of this study are three-fold: documentation, development of an AR app for situated tasks, and AR verification. The documentation was made of a special evening event, which had yet to be achieved in 3D and AR promotion. The newly evolved festival of cultural heritage made this 3D record highly valuable. Moreover, both the customized and ready-made AR apps were applied to situated tasks, such as promotion, comparison, and possible design evaluation at different sites. The AR did not exist only as a simulation; RP-assisted AR verification was also conducted with a well-defined process to reverse-reconstruct a 3D model of the simulated scene. The feasibility of AR was extended to a new dimension, with follow-up verification of the results.

The current representation of, and interaction with, any artifact created in an event can be achieved through AR using customized or ready-made platforms. However, the simulation is usually accomplished in a forward manipulation, instead of a backward evaluation afterward.
After the tests of festival installations fulfilled the study goal, we would like to approach this from a design perspective to provide a more involved and profound experience than a normal AR application can achieve.

We believe that the simulation should move to a different level of reality. With the assistance of photogrammetry modeling, images taken in the process of interaction can be referred to in order to reconstruct a model. This simple approach connects a virtual world to a physical world with a feasible RP output. Although problems of depth and deformation existed, the results here justify the feasibility of this approach.

8.2. Ready-Made App and Customized Application

Trade-offs exist between the ready-made app and the customized application: the former could create more elaborate representations of an entity, while the latter had more control of the display options and a predefined relative allocation between entities. Both approaches enabled the transformation of individual entities as an extended exploration of traditional lantern craftsmanship, local folklore, and island-wide identities. However, the development of the latter was time- and effort-intensive.

The authors developed an AR app that was feasible for simulations in a collaborative effort, since the data were created in one place and applied in another within similar or different contexts. Modification was added to enable freedom of manipulation in rotation and scaling. For cultural promotion, ready-made apps have the mobile advantage of smartphone and cloud access to 3D models. The customized app has the mobile advantage of a predefined design layout with smartphone and cloud access. Both facilitated situated simulations.

8.3. Revisit Experience

The revisit experience was very interesting, and also different from the original visits that occurred years ago. An observation was made by examining whether the researchers simulated the installations differently from the original settings, or how much the settings were changed. The results showed both context and technical issues: the former showed that the location of an installation could be easily identified by checking the resemblance of the 3D scans, while the latter suggested that a flat and solid ground surface was more preferable and applicable in AR than a setting of grass waving in the wind. The modification of the original locations can be seen from the outer boundary of the scattered pattern of test locations in Figure 8a, in which researchers did reconstruct the former experience by referring to the old axis linking the entrance and the main theme lantern.

Most of the tests were conducted by a team of two for the AR interaction, with a third person for picture-taking. Researchers reported an urge to look behind the smartphone for a clearer inspection, just as if an installation were actually located there. A 360° interaction with the model and scene enhanced the sense of reality. A new sense of scale was created by moving and orienting the smartphone toward different parts of a huge installation to catch the full picture of the object. The interaction made by the researchers in the air often confused pedestrians, who, assuming pictures were being taken in front of them at that moment, tried to avoid interrupting.
9. Conclusions

One of the main purposes of this article was to discuss the feasibility of the application; the survey of user experience will be the focus of the second stage in a future study. We did not provide a specific target-audience survey evaluation, except for the researchers or assistants, who were also audiences of past festivals over several years. The apps and 3D data were prepared based on a real event and checked to determine whether the old experience could be recalled, before moving forward to reconstruct the space. We found this novel approach valuable as a looped process.

Correlation exists between an application, a simulation, a simulation that facilitates design, and a design whose outcome can be verified. This study illustrated that an additive application and a customized application could enable the festival experience at the same or a remote site, although the procedure was not applicable to the original format of point cloud data. A novel extension was made to re-represent the results of the object and background in an AR simulation as a physical reality of the 3D model.

The three scenarios presented different levels of interaction with a scene. A forward interaction loop worked just like a normal design process, since it was in its conceptual design stage. In reality, a project may need to consider the constraints of space or installations initially and come up with a specification to be followed by a design team. The constraints included the application of constrained instance locations, re-instancing for different contexts, and the adaption of a 3D scan data-originated model in AR. The state of design is usually required and monitored by a project manager in the schematic design or design development stage. As a result, the management of models actually reflects how AR should work in professional practice. In fact, the way each model is accessed and interacted with does matter. The application of LiDAR for remodeling projects is very common now. Not only does a demand exist for a platform with an app for manipulation in different hierarchies, but seamless work between the point cloud and the mesh model should also be achieved.

The importance of using the festival as an instance is to prove that this is more than a desktop app. Indeed, it is a design tool and a sharable experience for others, as well as a type of as-built data that can be created and applied from different input devices, experiences, data flows, and levels of reality. The festival represents a special event that has carried the development of the local culture and economy from one decade to another. The created transformation included an evolving urban boundary and a tourist-oriented temporary fabric experience. In order to witness the richness of the event, the process of reverse engineering was applied to reconstruct an as-built 3D model of the site and festival-related elements. The 3D documentation fulfills the representation needs.

Author Contributions: Conceptualization, methodology, validation, formal analysis, investigation, resources, 3D scan data curation, RP models, writing—original draft preparation, writing—review and editing, visualization, supervision, project administration, and funding acquisition, N.-J.S.; iOS app software, AR format transfer, validation in the remote field, visualization, validation, P.-H.D.; data curation, visualization, validation, Y.-T.Q. and T.-Y.C. All authors have read and agreed to the published version of the manuscript.
Funding: This research was an extended study with funding sponsored by the Ministry of Science and Technology, Taiwan, under project number MOST 105-2221-E-011-014-MY2. The authors appreciate the support.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Lopez-Guzman, T.; Santa-Cruz, F.G. International tourism and the UNESCO category of intangible cultural heritage. Int. J. Cult. Tourism Hospit. Res. 2016, 10, 310–322.
2. International Council on Monuments and Sites (ICOMOS). Principles and Guidelines for Managing Tourism at Places of Cultural and Heritage Significance. International Cultural Tourism Charter; ICOMOS International Cultural Tourism Committee: Charenton-le-Pont, France, 2002.
3. United Nations Educational, Scientific and Cultural Organization. Representative List of the Intangible Cultural Heritage of Humanity. 2009. Available online: https://ich.unesco.org/doc/src/06859-EN.pdf (accessed on 29 October 2020).
4. Bureau of Cultural Heritage, Taiwan. Available online: https://twh.boch.gov.tw/non_material/index.aspx (accessed on 9 October 2020).
5. Cultural Affairs Bureau, Macau. Elements of Intangible Cultural Heritage, Lantern Festival. Available online: http://www.culturalheritage.mo/en/detail/2759/1 (accessed on 7 October 2020).
6. Aomori Prefectural Government. Hirosaki Neputa Festival, Aptinet, Aomori Sightseeing Guide, Aomori Prefecture, Tourism and International Affairs Strategy Bureau. Available online: https://www.en-aomori.com/culture-039.html (accessed on 29 October 2020).
7. Intangible Cultural Heritage Shows Staged to Celebrate Lantern Festival. 1 March 2018. Available online: http://www.enghunan.gov.cn/newscollection/news2018/March2018/201807/t20180713_5052575.html (accessed on 7 October 2020).
8. Aicardi, I.; Chiabrando, F.; Lingua, A.M.; Noardo, F. Recent trends in cultural heritage 3D survey: The photogrammetric computer vision approach. J. Cult. Herit. 2018, 32, 257–266, doi:10.1016/j.culher.2017.11.006.
9. Koutsabasis, P. Empirical evaluations of interactive systems in cultural heritage: A review. Int. J. Comput. Methods Herit. Sci. 2017, 1, 1–23, doi:10.4018/IJCMHS.2017010107.
10. Bekele, M.K.; Pierdicca, R.; Frontoni, E.; Malinverni, E.S.; Gain, J. A survey of augmented, virtual, and mixed reality for cultural heritage. J. Comput. Cult. Herit. 2018, 11, 1–36, doi:10.1145/3145534.
11. Scopigno, R.; Cignoni, P.; Pietroni, N.; Callieri, M.; Dellepiane, M. Digital fabrication techniques for cultural heritage: A survey. Comput. Graph. Forum 2017, 36, 6–21, doi:10.1111/cgf.12781.
12. Di Angelo, L.; Di Stefano, P.; Fratocchi, L.; Marzola, A. An AHP-based method for choosing the best 3D scanner for cultural heritage applications. J. Cult. Herit. 2018, 34, 109–115, doi:10.1016/j.culher.2018.03.026.
13. Donlic, M.; Petkovic, T.; Pribanic, T. On tablet 3D structured light reconstruction and registration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2462–2471.
14. Lachat, E.; Landes, T.; Grussenmeyer, P. Investigation of a combined surveying and scanning device: The Trimble SX10 scanning total station. Sensors 2017, 17, 730, doi:10.3390/s17040730.
15. Froehlich, M.; Azhar, S.; Vanture, M. An investigation of Google Tango tablet for low cost 3D scanning. In Proceedings of the 34th International Symposium on Automation and Robotics in Construction, Taipei, Taiwan, 28 June–1 July 2017; pp. 864–871.
16. Voinea, G.D.; Girbacia, F.; Postelnicu, C.C.; Marto, A. Exploring Cultural Heritage Using Augmented Reality through Google's Project Tango and ARCore. In Communications in Computer and Information Science, Proceedings of the International Conference on VR Technologies in Cultural Heritage, Brasov, Romania, 29–30 May 2018; Duguleană, M., Carrozzino, M., Gams, M., Tanea, I., Eds.; Springer: Cham, Switzerland, 2019; Volume 904, pp. 93–106, doi:10.1007/978-3-030-05819-7_8.
17. Senthilvel, M.; Soman, R.K.; Varghese, K. Comparison of handheld devices for 3D reconstruction in construction. In Proceedings of the 34th International Symposium on Automation and Robotics in Construction, Taipei, Taiwan, 28 June–1 July 2017; pp. 698–705.
18. Kwon, S.; Park, J.-W.; Moon, D.; Jung, S.; Park, H. Smart merging method for hybrid point cloud data using UAV and LIDAR in earthwork construction. Procedia Eng. 2017, 196, 21–28, doi:10.1016/j.proeng.2017.07.168.
19. Grifoni, E.; Legnaioli, S.; Nieri, P.; Campanella, B.; Lorenzetti, G.; Pagnotta, S.; Poggialini, F.; Palleschi, V. Construction and comparison of 3D multi-source multi-band models for cultural heritage applications. J. Cult. Herit. 2018, 34, 261–267, doi:10.1016/j.culher.2018.04.014.
20. Alsadik, B. Practicing the geometric designation of sensor networks using the Crowdsource 3D models of cultural heritage objects. J. Cult. Herit. 2018, 31, 202–207, doi:10.1016/j.culher.2017.11.001.
21. Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y.T.; Remondino, F.; Chippendale, P.; Van Gool, L. 3D reconstruction with a collaborative approach based on smartphones and a cloud-based server. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W8, 187–194, doi:10.5194/isprs-archives-XLII-2-W8-187-2017.
22. Machidon, O.M.; Duguleana, M.; Carrozzino, M. Virtual humans in cultural heritage ICT applications: A review. J. Cult. Herit. 2018, 33, 249–260, doi:10.1016/j.culher.2018.01.007.
23. Ryder, G.; Flack, P.; Day, A. A framework for real-time virtual crowds in cultural heritage environments. In Proceedings of the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, Pisa, Italy, 8–11 November 2005; pp. 108–113.
24. Thalmann, D.; Maim, B.; Maim, J. Geometric issues in reconstruction of virtual heritage involving large populations. In 3D Research Challenges in Cultural Heritage; Springer: Berlin/Heidelberg, Germany, 2014; pp. 78–92.
25. Carrozzino, M.; Lorenzini, C.; Evangelista, C.; Tecchia, F.; Bergamasco, M. AMICA: Virtual Reality as a tool for learning and communicating the craftsmanship of engraving. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; pp. 187–188, doi:10.1109/DigitalHeritage.2015.7419486.
26. Jang, S.A.; Baik, K.; Ko, K.H. Muru in Wonderland: An immersive video tour with gameful character interaction for children. In Proceedings of the 2016 ACM Conference Companion Publication on Designing Interactive Systems, Brisbane, Australia, 4–8 June 2016; pp. 173–176, doi:10.1145/2908805.2909417.
27. Tamborrino, R.; Wendrich, W. Cultural heritage in context: The temples of Nubia, digital technologies and the future of conservation. J. Inst. Conserv. 2017, 40, 168–182, doi:10.1080/19455224.2017.1321562.
28. Younes, G.; Kahil, R.; Jallad, M.; Asmar, D.; Elhajj, I. Virtual and augmented reality for rich interaction with cultural heritage sites: A case study from the Roman Theater at Byblos. Digit. Appl. Archaeol. Cult. Herit. 2017, 5, 1–9, doi:10.1016/j.daach.2017.03.002.
Jung, T.; Chung, N.; Leue, M.C. The determinants of recommendations to use augmented reality technologies: The case of a Korean Theme Park. Tour. Manag. 2015, 49, 75–86, doi:10.1016/j.tourman.2015.02.013. 30. Fernández‐Palacios, B.J.; Morabito, D.; Remondino, F. Access to complex reality‐based 3D models using virtual reality solutions. J. Cult. Herit. 2017, 23, 40–48, doi:10.1016/j.culher.2016.09.003. 31. Kasapakis, V.; Gavalas, D.; Galatis, P. Augmented reality in cultural heritage: Field of view awareness in an archaeological site mobile guide. J. Ambient Intell. Smart Environ. 2016, 8, 501–514, doi:10.3233/AIS‐160394. 32. Schöps, T.; Sattler, T.; Häne, C.; Pollefeys, M. Large‐scale outdoor 3D reconstruction on a mobile device. Comput. Vis. Image Underst. 2017, 157, 151–166, doi:10.1016/j.cviu.2016.09.007. 33. Golodetz, S.; Cavallari, T.; Lord, N.; Prisacariu, V.; Murray, D.; Torr, P. Collaborative Large‐Scale Dense 3D Reconstruction with Online Inter‐Agent Pose Optimisation. arXiv 2018, arXiv:1801.08361, doi:10.1109/TVCG.2018.2868533. 34. Rubino, I.; Barberis, C.; Xhembulla, J.; Malnati, G. Integrating a location‐based mobile game in the museum visit: Evaluating visitors’ behaviour and learning. J. Comput. Cultur. Herit. 2015, 8, 1–18, doi:10.1145/2724723. 35. Soler, F.; Melero, F.J.; Luzón, M.V. A complete 3D information system for cultural heritage documentation. J. Cult. Herit. 2017, 23, 49–57, doi:10.1016/j.culher.2016.09.008. 36. Xue, K.; Li, Y.; Meng, X. An evaluation model to assess the communication effects of intangible cultural heritage. J. Cult. Herit. 2019, 40, 124–132, doi:10.1016/j.culher.2019.05.021. 37. Vosinakis, S.; Avradinis, N.; Koutsabasis, P. Dissemination of Intangible Cultural Heritage using a Multi‐Agent Virtual World. In Advances in Digital Cultural Heritage; Springer: Berlin/Heidelberg, Germany, 2018; pp.197–207, doi:10.1007/978‐3‐319‐75789‐ 6_14. Appl. Sci. 2021, 11, 12 21 of 22 38. Galani, A.; Kidd, J. Evaluating digital cultural heritage ‘In the Wild’: The case for reflexivity. J. Comput. Cult. Herit. 2019, 12, 1– 15, doi:10.1145/3287272. 39. Comes, R.; Neamțu, C.; Buna, Z.; Badiu, I.; Pupeză, P. Methodology to Create 3D Models for Augmented Reality Applications Using Scanned Point Clouds. Mediterr. Archaeol. Archaeom. 2014, 14, 35–44, doi:10.5281/ZENODO.13703. 40. Pylvänäinen, T.; Berclaz, J.; Korah, T.; Hedau, V.; Aanjaneya, M.; Grzeszczuk, R. 3D city modeling from street‐level data for augmented reality applications. In Proceedings of the Second Joint 3DIM/3DPVT Conference: 3D Imaging, Modeling, Pro‐ cessing, Visualization & Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 238–245, doi:10.1109/3DIMPVT.2012.19. 41. Diao, P.‐H.; Shih, N.‐J. Trends and Research Issues of Augmented Reality Studies in Architectural and Civil Engineering Edu‐ cation—A Review of Academic Journal Publications. Appl. Sci. 2019, 9, 1840, doi:10.3390/app9091840. Zhou, Y. Integrating Augmented Reality with Building Information Modeling: Onsite 42. Wang, X.; Truijens, M.; Hou, L.; Wang, Y.; Construction Process Controlling for Liquefied Natural Gas Industry. Autom. Constr. 2014, 40, 96–105, doi:10.1016/j.autcon.2013.12.003. 43. Wang, S.; Parsons, M.; Stone‐McLean, J.; Rogers, P.; Boyd, S.; Hoover, K.; Meruvia‐Pastor, O.; Gong, M.; Smith, A. Augmented Reality as a Telemedicine Platform for Remote Procedural Training. Sensors 2017, 17, 2294, doi:10.3390/s17102294. 44. Zhou, Y.; Luo, H.; Yang, Y. 
Implementation of Augmented Reality for Segment Displacement Inspection during Tunneling Construction. Autom. Constr. 2017, 82, 112–121, doi:10.1016/j.autcon.2017.02.007. 45. Abdullah, F.; Kassim, M.H.B.; Sanusi, A.N.Z. Go Virtual: Exploring Augmented Reality Application in Representation of Steel Architectural Construction for the Enhancement of Architecture Education. Adv. Sci. Lett. 2017, 23, 804–808, doi:10.1166/asl.2017.7449. 46. Ayer, S.K.; Messner, J.I.; Anumba, C.J. Augmented Reality Gaming in Sustainable Design Education. J. Archit. Eng. 2016, 22, 04015012, doi:10.1061/(ASCE)AE.1943‐5568.0000195. 47. Behzadan, A.H.; Kamat, V.R. Enabling Discovery‐based Learning in Construction Using Telepresent Augmented Reality. Au‐ tom. Constr. 2013, 33, 3–10, doi:10.1016/j.autcon.2012.09.003. 48. Fritz, F.; Susperrgui, A.; Linaza, M.T. Enhancing Cultural Tourism Experiences with Augmented Reality Technologies. In Pro‐ ceedings of the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage VAST, Pisa, Italy, 8–11 November 2005; pp. 1–6. 49. Kounavis, C.D.; Kasimati, A.E.; Zamani, E.D. Enhancing the Tourism Experience through Mobile Augmented Reality: Chal‐ lenges and Prospects. Int. J. Eng. Bus. Manag. 2012, 4, 1–6, doi:10.5772/51644. 50. Lee, T.; Huh, C.; Yeh, H.; Tsaur, W. Effectiveness of a Communication Model in City Branding Using Events: The Case of the Taiwan Lantern Festival. Int. J. Event Festiv. Manag. 2016, 7, 137–148, doi:10.1108/IJEFM‐01‐2016‐0001. 51. Plecher, D.A.; Wandinger, M.; Klinker, G. Mixed reality for cultural heritage. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces, Osaka, Japan, 23–27 March 2019; pp. 1618–1622. 52. Du, S.; Lindenbergh, R.; Ledoux, H.; Stoter, J.; Nan, L. AdTree: Accurate, Detailed, and Automatic Modelling of Laser‐Scanned Trees. Remote Sens. 2019, 11, 2074, doi:10.3390/rs11182074. 53. Vaaja, M.T.; Kurkela, M.; Virtanen, J.P.; Maksimainen, M.; Hyyppä, H.; Hyyppä, J.; Tetri, E. Luminance‐Corrected 3D Point Clouds for Road and Street Environments. Remote Sens. 2015, 7, 11389–11402, doi:10.3390/rs70911389. 54. Shih, N.J.; Kuo, H.C.; Chang, C.F. 3D Scan for Special Urban Evening Occasion. In Proceedings of the 2016 International Con‐ ference on Applied System Innovation (IEEE ICASI 2016), Okinawa, Japan, 26–30 May 2016; pp. 1–4. 55. Arslan, A.E.; Kalkan, K. Comparison of Working Efficiency of Terrestrial Laser Scanner in Day and Night Conditions. In Pro‐ ceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS 2013‐ SSG), Antalya, Turkey, 11–17 November 2013; Volume XL‐7/W2, pp. 19–21. 56. McManus, C.; Furgale, P.; Barfoot, T.D. Towards Lighting‐invariant Visual Navigation: An Appearance‐based Approach Using Scanning Laser‐rangefinders. Robotics and Autonomous Systems. Robot. Auton. Syst. 2013, 61, 836–852, doi:10.1016/j.ro‐ bot.2013.04.008. 57. Zhang, Y.; Han, M.; Chen, W. The strategy of digital scenic area planning from the perspective of intangible cultural heritage protection. J. Image Video Process. 2018, 130, 1–11, doi:10.1186/s13640‐018‐0366‐7. 58. Sketchfab. Available online: https://sketchfab.com/ (accessed on 28 November 2020). 59. Augment. Available online: https://www.augment.com/ (accessed on 28 November 2020). 60. ARKit. Available online: https://developer.apple.com/cn/augmented‐reality/arkit/ (accessed on 29 November 2020). 61. Google Developers. Depth API Overview for Android. 
Available online: https://developers.google.com/ar/de‐ velop/java/depth/overview (accessed on 27 November 2020). 62. CloudCompare. Available online: https://www.danielgm.net/cc/ (accessed on 28 November 2020). 63. Geomagic Studio. Previous Version Purchased from Formerly Owned Raindrop Corp. Available online: https://www.3dsys‐ tems.com/software (accessed on 27 November 2020). 64. Meshlab. Meshlab_64bit v1.3.3, Visual Computing Lab of CNR‐ISTI. Available online: https://www.meshlab.net/ (accessed on 28 November 2020). 65. Autodesk Inc. Autodesk Revit. Available online: https://www.autodesk.com/products/revit/overview?plc=RVT&term=1‐ YEAR&support=ADVANCED&quantity=1 (accessed on 27 November 2020). 66. Boucheny, C.; Ribes, A. Eye‐Dome Lighting: A Non‐Photorealistic Shading Technique. 2011. Available online: https://blog.kit‐ ware.com/eye‐dome‐lighting‐a‐non‐photorealistic‐shading‐technique/ (accessed on 11 April 2020). Appl. Sci. 2021, 11, 12 22 of 22 67. Wang, X.; Kim, M.J.; Love, P.E.D.; Kang, S.‐C. Augmented Reality in Built Environment: Classification and Implications for Future Research. Autom. Constr. 2013, 32, 1–13, doi:10.1016/j.autcon.2012.11.021. 68. Diao, P.‐H.; Shih, N.‐J. MARINS: A Mobile Smartphone AR System for Pathfinding in a Dark Environment. Sensors 2018, 18, 3442, doi:10.3390/s18103442. 69. Kwon, O.‐S.; Park, C.‐S.; Lim, C.‐R. A Defect Management System for Reinforced Concrete Work Utilizing BIM, Image‐matching and Augmented Reality. Autom. Constr. 2014, 46, 74–81, doi:10.1016/j.autcon.2014.05.005.