Making Shopping Easy for People with Visual Impairment Using Mobile Assistive Technologies

Review

Mostafa Elgendy 1,2,*, Cecilia Sik-Lanyi 3 and Arpad Kelemen 4

1 Department of Electrical Engineering and Information Systems, University of Pannonia, 8200 Veszprém, Hungary
2 Department of Computer Science, Faculty of Computers and Informatics, Benha University, 13511 Benha, Egypt
3 Department of Electrical Engineering and Information Systems, University of Pannonia, 8200 Veszprém, Hungary; lanyi@almos.uni-pannon.hu
4 Department of Organizational Systems and Adult Health, University of Maryland, Baltimore, MD 21201, USA; kelemen@umaryland.edu
* Correspondence: mostafa.elgendy@virt.uni-pannon.hu; Tel.: +36-88-624-000/6188

Received: 27 December 2018; Accepted: 7 March 2019; Published: 13 March 2019

Abstract: People with visual impairment face various difficulties in their daily activities in comparison to people without visual impairment. Much research has been done to find smart solutions using mobile devices to help people with visual impairment perform tasks such as shopping. One of the most challenging tasks for researchers is to create a solution that offers a good quality of life for people with visual impairment. It is also essential to develop solutions that encourage people with visual impairment to participate in social life. This study provides an overview of the various technologies that have been developed in recent years to assist people with visual impairment in shopping tasks. It introduces the latest directions in this area, which will help developers incorporate such solutions into their research.

Keywords: smartphone; assistive technology; visually impaired; shopping; computer vision
Appl. Sci. 2019, 9, 1061; doi:10.3390/app9061061; www.mdpi.com/journal/applsci

1. Introduction

Visual Impairment (VI), which results from various diseases and degenerative conditions, causes significant limitations in visual capability. VI cannot be corrected by conventional means [1]. Currently, more than 253 million people live with VI, and this number is projected to increase in the coming decades [2]. People with Visual Impairment (PVI) have limitations in the function of their visual system. These limitations prevent them from seeing and performing daily activities such as navigation or shopping [3–9]. For example, PVI have difficulties reading product labels during shopping; they thus miss important information about the content of their food and sometimes make bad choices. During shopping, PVI also face navigation troubles, which encourage them to consume takeout [10,11]. Another problem is how to move in an environment with many barriers, such as walking in unknown places or crossing a street [12,13]. Moreover, the lack of support services in their surrounding environment makes PVI dependent on their families and prevents them from being socially active [14,15]. Last, but not least, PVI face social barriers, such as the attitudes of other people and of society [16]. Therefore, it is important to develop solutions that help PVI improve their mobility, protect them from injury, encourage them to travel outside of their own environments, and interact socially [17].

Recently, mobile devices such as smartphones, smart glasses, and notebooks have become popular. These devices have various capabilities that are useful in developing complex software applications. Mobile devices can also be connected to cloud computing and offload tasks to be executed there, which saves processing power, memory, and battery life [18–20].
The advantages of mobile technologies make them useful for accessing information from any place at any time and give PVI the opportunity to use smartphones in their daily activities [21–25]. In this way, smartphones are combined with Assistive Technology (AT) to offer multiple solutions; this combination is called Mobile Assistive Technology (MAT). Researchers have conducted extensive investigations on using MAT to help PVI navigate from one place to another and shop without any support from people without disabilities [26–34]. In this study, we concentrate on the available solutions to help PVI in the shopping process. We divided the shopping process into two parts. The first part, which takes place before shopping, is preparing the shopping list that will provide assistance during shopping. The second part helps PVI navigate inside shopping malls and identify products during shopping.

The purpose of this literature review is twofold. The first aim is to answer the following research questions:

Q1: What are the main categories of MAT shopping solutions for PVI?
Q2: What are the strengths and weaknesses of the latest MAT shopping help systems for PVI?
Q3: What capabilities do the best and most effective solutions for PVI provide?

The second aim is to review the available MAT solutions that help PVI prepare shopping lists, navigate inside shopping malls, and recognize products during shopping. The review also discusses how the proposed solutions can help PVI in the shopping process and summarizes their challenges and drawbacks.

This paper is structured as follows: Section 2 describes the research methodology. Section 3 discusses multiple solutions and how they can help PVI. Section 4 explains the main benefits and research challenges. Finally, Section 5 outlines the conclusions of this study.
2. Research Methodology

In order to identify most of the available MAT solutions for PVI, we searched the following databases: Springer, Science Direct, Web of Science, Institute of Electrical and Electronics Engineers (IEEE) Xplore, Google Scholar, Association for Computing Machinery (ACM) Digital Library, and Microsoft Academic. We used the following keywords to search for peer-reviewed journal articles: (“Assistive technology” OR “Assistive technology devices” OR “Mobile assistive technology devices” OR “navigation solution” OR “shopping”) AND (“visual impairment” OR “blind *”); (“avoiding obstacles” OR “write * notes” OR “text to speech”) AND (“visual impairment” OR “blind *”). We limited the search to articles published between January 2010 and December 2018.

The search query returned 8893 records. Duplicates were removed, reducing the search results to 842 articles. Then, we eliminated 433 results by restricting the selection to articles written in English, articles whose titles describe a research intervention for PVI, and articles that are freely downloadable. Next, all keywords were screened, which eliminated a further 206 articles because they were not technical papers, or because they were literature reviews or surveys. Abstracts of the resulting 203 papers were then screened for relevance to our research goals. One hundred and thirteen of the articles were deemed inappropriate because they did not study visually impaired or blind populations, or they were not related to MAT. Next, two different researchers conducted a full-text review of the remaining 90 articles. Forty-six articles were eliminated because they did not address helping PVI in the shopping process and avoiding obstacles. The resulting 44 articles met all inclusion criteria and were evaluated in this study. Figure 1 shows the article selection process, based on the PRISMA flowchart [35].

Figure 1. PRISMA flowchart [35].
3. MAT Solutions for PVI Shopping

When entering the world of PVI, one should be aware of certain obstacles PVI face while shopping alone, particularly when support from shop assistants is limited [26,34]. Many retailers offer online shopping, but this method is difficult and time-consuming, as PVI have to listen to all the choices before choosing a product. Even worse, if any items are missing, they must listen to the whole list again [36]. Some shops offer home delivery, but this option requires the PVI to make an appointment and wait for delivery. These alternatives limit personal autonomy and make independent shopping difficult, so PVI often avoid using these services.

Buying a product in person at the grocery store is also difficult for PVI. They often wait for help from store employees, which is time-consuming. Moreover, most stores cannot assign an employee to help PVI, as hiring an assistant is too expensive and offers no privacy [34,37]. PVI also face difficulties when searching for a product on the shelves and checking its details, as shopkeepers frequently move products around. Kostyra and co-workers found that sensory attributes are important for PVI when they select different products that have the same appearance. They administered a questionnaire, and the results showed that using mobile devices can make PVI feel independent during the shopping process, which is important to them [11].

We searched for available MAT solutions that help PVI before and during the shopping process. We found that the shopping process is divided into two parts: the first part is preparing the shopping list to make shopping easy and fast; the second part deals with navigating and identifying products during the shopping process, as shown in Figure 2 [38–47].
Figure 2 also shows that MATs for the shopping process are classified into three categories, based on the technology used: tag based, Computer Vision (CV) based, and hybrid systems. The remainder of this section gives an overview of the technologies developed in recent years for each part of the shopping process. Moreover, it takes an in-depth look at these solutions and presents some research examples to show how they work.

Figure 2. MAT solutions for the parts of the shopping process for PVI.

3.1. Shopping Preparation

Earlier studies [8,11,48] assume that shopping list preparation is a useful activity, as most PVI prefer to follow a predetermined list. It is necessary to help PVI prepare shopping lists and store them in a database. During shopping, PVI can then retrieve and use their lists. Several applications take an image of a printed text, a written list, or Braille and send it to an Optical Character Recognition (OCR) module, which analyzes the image and converts it to text [49–51]. In some cases, the system sends the image to a character recognition API in the cloud, which identifies the words and converts them to text [52]. Other applications use Speech-to-Text (STT) techniques to transform the PVI's voice commands into a list of items [53]. After the shopping lists are prepared, they are stored in a database. During shopping, PVI retrieve the shopping items from the database and listen to them using Text-to-Speech (TTS) [51,54].

3.2. Navigation and Product Identification

Getting to the store is not the only navigation challenge for PVI; in-store navigation is also a complex problem [24]. It is challenging for PVI to navigate inside shops and to reach and identify products. It is also difficult for them to obtain detailed information about products, such as production and expiration dates. Additionally, PVI frequently need help from others, as most shops are not well prepared to assist them [11].
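The shopping-list preparation flow described in Section 3.1 (capture the list via STT, store it in a database, and read it back with TTS while shopping) can be sketched as a minimal pipeline. The `recognize_speech` and `speak` functions below are hypothetical stand-ins for real speech engines; a deployed application would call a platform STT/TTS service instead.

```python
import sqlite3

# Hypothetical stand-ins: a production app would invoke a real
# speech-to-text service here and a text-to-speech engine below.
def recognize_speech(audio_phrase: str) -> str:
    return audio_phrase.strip().lower()   # simulated STT result

def speak(text: str) -> str:
    return f"[TTS] {text}"                # simulated TTS output

def prepare_shopping_list(phrases, db):
    """Before shopping: convert voice phrases to items and store them."""
    db.execute("CREATE TABLE IF NOT EXISTS shopping_list (item TEXT)")
    for phrase in phrases:
        db.execute("INSERT INTO shopping_list VALUES (?)",
                   (recognize_speech(phrase),))

def read_back_list(db):
    """During shopping: retrieve the stored items and read them aloud."""
    items = [row[0] for row in db.execute("SELECT item FROM shopping_list")]
    return [speak(item) for item in items]

db = sqlite3.connect(":memory:")
prepare_shopping_list(["Milk ", "whole-grain bread"], db)
print(read_back_list(db))   # ['[TTS] milk', '[TTS] whole-grain bread']
```

An in-memory SQLite database stands in for whatever persistent store a real app would use; the two functions mirror the before-shopping and during-shopping phases described above.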
As a result, researchers have developed multiple solutions to help PVI navigate and identify products. These solutions are divided into three categories, as shown in Figure 2:

(1) Tag Based Systems: systems such as Radio-Frequency Identification (RFID) and Near Field Communication (NFC), which use wireless components to transfer data from a tag attached to an object for the purposes of automatic identification and tracking.
(2) Computer Vision Based Systems: some of these systems require unique visual tags, such as Quick Response (QR) codes, barcodes, or Augmented Reality (AR) markers, to be placed on products. These tags are used for detecting products and giving PVI all available details about them. Other systems do not require tags to be placed on products; instead, they utilize information about the objects' physical features to identify them.
(3) Hybrid Systems: these take the strengths of two or more systems and combine them into a new system to deliver better accuracy and performance [9,41–43,55–62].

3.2.1. Tag Based Systems

Tag systems use wireless communication technology to transfer data from a tag attached to an object to a tag reader for automatic identification and tracking. Developers use several types of tags, but we will concentrate on RFID and NFC tags [63]. RFID uniquely identifies items by using radio waves. Each RFID system has three components: a tag, a reader, and an antenna. The reader sends a signal to the tag via the antenna, and the tag responds by sending its unique identification information [64]. There are two types of RFID tags: active and passive. Active tags broadcast a signal up to a range of 100 m. Passive tags harvest electromagnetic energy from the reader, as they do not have their own power source, and can broadcast a signal up to a range of 25 m. Passive tags are typically smaller and cheaper than active tags [65].
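The tag, reader, and database flow of such systems can be illustrated with a small sketch. The tag IDs, the product table, and the simulated reader below are illustrative assumptions for this review, not taken from any of the cited systems; a real reader would return tag IDs over a radio link rather than from a list.

```python
# Hypothetical product database keyed by RFID tag IDs.
PRODUCTS = {
    "E200-3412": {"name": "Tomato soup", "price": 1.99, "offer": None},
    "E200-9001": {"name": "Rice 1 kg", "price": 2.49, "offer": "2 for 1"},
}

def read_tags_in_range(simulated_field):
    """A passive-tag reader reports every tag energized by its antenna.
    Here the radio link is simulated by a plain list of tag IDs."""
    return list(simulated_field)

def announce(tag_id):
    """Build the audio message a PVI would hear for one identified item."""
    p = PRODUCTS.get(tag_id)
    if p is None:
        return "Unknown item"
    msg = f"{p['name']}, {p['price']:.2f} euro"
    return msg + (f", special offer: {p['offer']}" if p["offer"] else "")

# Unlike visual tags, several RFID tags can be read in a single pass.
for tag in read_tags_in_range(["E200-3412", "E200-9001"]):
    print(announce(tag))
```

The loop at the end reflects a property discussed later in the review: an RFID reader can identify multiple tagged items at once, whereas a camera-based tag reader typically decodes one tag per capture.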
For shopping, RFID tags are attached to products, and PVI hold tag readers to identify them. After identifying the items, PVI receive details about them, such as name, price, and special offers [55,66–68]. These systems use a server or a local database to store product details. Other solutions use RFID tags for navigation inside shopping malls. These solutions use a tag reader attached to a white cane to identify RFID tags. They use a map to guide PVI through the store and suggest the shortest route to reach their products, and they give audio messages that provide verbal directions [69,70].

NFC technology is a version of RFID that operates at a maximum range of about 10 cm. An NFC system consists of three main components: a tag, a reader, and a database. The tag stores information, while the reader is an NFC mobile device that reads the content of the tag. The database stores additional information associated with the NFC tags. NFC technology supports active and passive communication. In active mode, both the NFC tag and the NFC reader produce a radio field during communication. In passive mode, only the NFC reader generates a radio field and begins the communication session, so NFC tags do not process information sent from other sources and cannot connect to other passive components. The main differences from RFID are the communication distance and the use of mobile devices as readers instead of dedicated RFID readers [33,71]. For shopping, NFC tags are attached to products and identified by smartphones, as with RFID. After identifying the items, PVI receive details about them [33,72]. These systems use a database to store product details, so PVI can scan tags and obtain information such as name, price, and special offers [37]. Some other solutions use NFC tags for navigation inside shopping malls. As with RFID, these solutions use a map to guide PVI through the store and suggest the shortest route to reach their products.
These tags also give audio messages that provide verbal directions [33,73].

3.2.2. Computer Vision Based Systems

CV based systems accept visual input from a camera and use CV techniques to extract valuable information and recognize objects in the surrounding environment. Finally, they provide information to the PVI through tactile or auditory channels [74]. Researchers classify CV based systems into tag based and non-tag based. In tag based systems, unique visual tags such as QR codes, barcodes, and AR markers are placed on products to aid the recognition process. Recognition is accomplished by capturing an image of the tag and analyzing the image to determine the identity of the object based on its tag information. A database is then used to retrieve product details, such as name, price, and special offers [42,75,76]. Other solutions use these tags for navigation and for providing the shortest route to products [77,78]. Finally, tactile feedback or voice commands are used to give warnings and direction commands to the PVI [79].

In non-tag based systems, developers do not attach tags to objects. They use CV techniques to analyze images and identify objects [80–82]. Non-tag systems require extensive computational power to analyze images and give accurate results [44]. For example, Zientara et al. used smart glasses to accurately locate and identify objects using CV techniques [41,83]. They also used a glove with a camera, which guides hand movements to point at and grasp things. Kumar and Meher used a color recognition module with a convolutional neural network to recognize objects and their colors [84,85]. Jafri and co-workers processed depth and motion tracking data obtained via a Google Tango Tablet to create and update a 3D box around the PVI to detect and avoid obstacles [43,86]. Hoang and co-workers utilized color images, depth images, and accelerometer information from a mobile Kinect and transferred them to a laptop for processing and detecting obstacles [87].
Concerning the obstacle warning module, a tactile-visual substitution system uses voice commands to warn the PVI to avoid obstacles.

3.2.3. Hybrid Systems

In the previous two sections, several solutions to assist PVI in navigating and identifying products were discussed. These solutions use tags such as RFID and NFC, visual tags such as QR codes and AR markers, or CV techniques. However, these solutions are not suitable in all situations, because each environment has specific features. For example, CV techniques perform poorly under unfavorable lighting, because the quality of the captured image degrades. In this case, it is better to use a different technology, like RFID or NFC, to improve system accuracy. Another example is a shop whose items already carry barcodes or QR codes containing all the needed information; developers can use these tags for product identification and use CV techniques or non-visual tags (RFID, NFC) for navigation. Two or more such technologies can be combined, which leads to the development of hybrid systems. For example, McDaniel and co-authors proposed a system that integrates CV techniques with RFID systems. This system identifies information about relevant objects in the surroundings and sends it to PVI [56]. López-de-Ipiña and co-authors integrated RFID with QR codes to allow PVI to navigate inside a grocery store [57,88]. The system used an RFID reader to identify RFID tags for navigation inside the store and used the smartphone camera to identify QR codes placed on product shelves. Finally, Fernandes and co-authors developed a solution to help PVI identify objects and navigate indoor locations using RFID and CV technologies [58]. This system used an RFID reader to receive the current location of the PVI and CV techniques to identify objects.
4. Discussion

In this study, we have presented different approaches, techniques, and technologies to help PVI navigate and identify products while shopping. We classified the shopping approaches into preparing a list of what to purchase and navigating inside the store to identify items on the shelves. Solutions for shopping list preparation use techniques such as CV, OCR, and STT to read list details from PVI and store them in a database. During shopping, audio messages about the stored shopping items are given to PVI using TTS. Shopping preparation makes it easy for users to buy from a list, but it has some limitations: it assumes that shoppers already know what they wish to buy on their trip [8], and this is not always the case. Also, shopping may take a long time. Shopping is not simply buying items on a list. For people without visual impairment, shopping also entails opportunistically exploring new products or brands, engaging in cultural learning about tastes, and making different choices based on occasional sales [89]. Some solutions require taking an image of the shopping list, but it is difficult for PVI to take a good quality image.

Researchers have also developed multiple solutions to help PVI navigate and identify products using RFID and NFC for tag based systems. RFID tags have some benefits: their signals can penetrate walls and obstacles, which provides wide coverage, and they can reuse existing infrastructure, resulting in cost reduction [90]. Also, PVI do not need to face a certain direction to receive messages from a tagged item [91]. Moreover, a tag does not need to be within the line of sight of the RFID reader, which allows it to be placed inside items. However, there are some drawbacks: to set up the environment, hundreds of RFID tags need to be installed, which is costly.
Information overload is another major problem, as it is overwhelming to receive information about all the items in the store at the same time and attempt to use this information to identify various objects [56]. Another significant limitation is that these systems can be used only in a restricted environment in which objects have been tagged, and these tags need regular maintenance. Furthermore, when RFID tags are attached to items such as liquids in metal cans, the radio waves are reflected during communication; using RFID technology with glass causes a similar reflection, which affects the system outcome.

NFC tags have the following benefits: PVI can simply touch an NFC tag with an NFC reader, such as a smartphone, to start the required service. NFC readers read the information stored in the tags, which enables PVI to obtain product information. Moreover, by using NFC tags, researchers can build low-cost indoor navigation systems. NFC technology also has a very low response time, because the time required to transfer data from an NFC tag to a mobile device and generate the walking path to the items is short. NFC tags also provide accurate position and orientation information, which facilitates orienting the user toward the destination. Moreover, NFC tags work well in unclean environments and do not require a direct line of sight between the reader and the tags. Finally, PVI do not have to carry large or bulky devices, only their smartphones. However, there are some drawbacks: NFC is not as effective and efficient as Bluetooth or Wi-Fi Direct when it comes to data transfer rates. NFC can only send and receive very small packets of data, so real-time positioning cannot be provided by an NFC-only system. Also, PVI must be inside the reading area in order to identify NFC tags and must have an NFC-equipped smartphone.
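As a concrete illustration of how little data an NFC tag carries, the sketch below decodes an NDEF Text record, a standard way to store a short product description on a tag. The payload layout (one status byte, then a language code, then the text) follows the NFC Forum Text Record Type Definition; the sample bytes are an invented example, not data from any cited system.

```python
def decode_ndef_text(payload: bytes):
    """Decode an NDEF Text record payload into (language, text)."""
    status = payload[0]
    utf16 = bool(status & 0x80)      # bit 7: 1 = UTF-16, 0 = UTF-8
    lang_len = status & 0x3F         # bits 0-5: language-code length
    lang = payload[1:1 + lang_len].decode("ascii")
    text = payload[1 + lang_len:].decode("utf-16" if utf16 else "utf-8")
    return lang, text

# A reader would obtain these bytes from the tag; hard-coded here.
print(decode_ndef_text(b"\x02enTomato soup, 1.99 euro"))
# → ('en', 'Tomato soup, 1.99 euro')
```

A whole product announcement fits in a few dozen bytes, which is consistent with the point above that NFC suits small lookups rather than bulk data transfer.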
Moreover, researchers have developed multiple solutions to help PVI navigate and identify products using CV techniques. CV techniques either rely on visual tags, such as QR codes, barcodes, and AR markers, or utilize information about the objects' physical features. CV tag based systems offer several advantages: they only need to identify tags to obtain product details, so they require low computational power and little storage space. Many of these approaches do not need tags to be explicitly placed, as products already carry unique visual tags such as barcodes and QR codes. Such tags can be generated and printed at very low cost compared to non-visual tags such as RFID, and they can be easily modified. CV tag based systems are ideal for tasks that require differentiating among groups of objects. They are vital for PVI when the contents of items differ, such as a tube of glue versus a tube of eye drops; these have the same shape, and it may be dangerous to choose the wrong one [92]. However, tag based CV techniques require a prior selection of items and the correct placement of tags on those items. Moreover, if there are many tagged items in a small area, PVI would be confused by receiving information about them all at the same time. Visual tags also must be in the line of sight of the camera; otherwise, they will not be detected. Furthermore, visual tags cannot be placed inside items, as their visibility is essential. These tags can also be damaged during movement through the supply chain or by weather. Also, it is difficult for a smartphone camera to detect CV tags if the PVI is moving fast, and the recognition rate decreases as the distance between the camera and the tags increases [75]. CV non-tag based systems have several advantages: they are cost-effective, as they need little or no infrastructure, and most of them can be easily installed on smartphones. However, they have several limitations.
Their performance may be unreliable in real-world environments because of factors like motion blur and image resolution, as well as changes in conditions such as illumination, orientation, and scale. These systems use extensive computational power, and PVI need to take many photos; however, taking good quality photos is difficult for PVI. Finally, feedback latency must be reduced to make these systems more effective.

Finally, researchers have created hybrid systems by taking the strengths of two or more systems and combining them. Numerous attempts have been made in this area to balance the trade-offs of the combined technologies. As a result, there is a significant improvement in accuracy, robustness, performance, and usability. However, the major drawback of these systems is that they require significant infrastructure due to the combination of technologies, which results in increased complexity and cost. Figure 3 shows a complete scenario of the shopping process, from preparation of the shopping list to completion of the shopping task.

Figure 3. The scenario of the shopping solutions for PVI.

To summarize, the answers to Q1, Q2, and Q3 are:

A1: This study presented the main categories of MAT that help PVI shop. We divided the shopping task into two parts: (1) preparing a list of what to buy, and (2) navigating inside a store and identifying items on the shelves. Shopping preparation solutions use techniques such as CV and STT to obtain list details from PVI and store them in a database. During shopping, audio feedback about the shopping items is given to PVI using TTS. Navigation and product identification solutions use RFID and NFC for tag based systems; visual tags such as QR codes, barcodes, and AR markers for CV tag based systems; or information about the physical features of items for non-tag CV approaches.

A2: Table 1 shows the strengths and weaknesses of the latest MAT shopping systems for PVI.

Table 1. Strengths, challenges and drawbacks of each category.

Shopping preparation; CV, OCR and STT [49–54,93,94]
Strengths:
- Makes it easy for users to buy from a list
Challenges and drawbacks:
- Assumes that PVI already know what they wish to buy on their trip
- Shopping is not composed of simply purchasing a set of items on a list
- It is hard for PVI to take good quality pictures
- Hard to integrate with other systems

Navigation and product identification, tag based systems; RFID [55,66–70]
Strengths:
- RFID signals are able to penetrate walls and obstacles
- RFID works well with existing infrastructure, which results in cost reduction
- Tags do not need to be within the line of sight of the RFID reader, which allows them to be placed inside items
- PVI can read more than one tag at the same time
Challenges and drawbacks:
- Small coverage area
- Hundreds of RFID tags are needed in the environment, which is costly
- Receiving information from all the items at the same time is confusing for PVI
- PVI must be in a certain direction to receive messages from tags
- RFID tags need regular maintenance
- Using RFID tags with liquid in metal cans reflects the radio waves during communication

Navigation and product identification, tag based systems; NFC [33,37,72,73]
Strengths:
- PVI simply touch the NFC tag to begin the required service
- NFC tags can be used to build low-cost indoor navigation systems in which PVI do not have to carry large devices
- NFC minimizes response time and provides accurate position information
- NFC tags work well in unclean environments
- PVI do not need a direct line of sight between the reader and tags
Challenges and drawbacks:
- NFC is not as effective and efficient as Bluetooth or Wi-Fi Direct
- PVI should be inside the reading area in order to identify NFC tags
- PVI must have an NFC-equipped mobile

Navigation and product identification, CV based systems; QR code, Barcode, Markers [42,75–79]
Strengths:
- They need only to extract the item tag, so they need low computational power and storage space
- They do not need tags to be explicitly placed, as products already have unique visual tags such as barcodes and QR codes
- These tags can be generated and printed at very low cost
Challenges and drawbacks:
- In areas with many tagged items, PVI will be confused by receiving information about all items at the same time
- Tags have to be in the line of sight of the camera
- Tags cannot be placed inside items
- Tags are damaged by movement across the supply chain or weather
- It is difficult to detect tags if the PVI is moving fast, and the recognition rate decreases as the distance between the reader and tags increases

Navigation and product identification, CV based systems; CV [41,43,46,47,59,60,80–87]
Strengths:
- Cost-effective, as they need little or no infrastructure
- Most of them can be installed easily on smartphones
Challenges and drawbacks:
- Inconsistent performance in real-world environments because of factors such as motion blur
- Use extensive computation power
- PVI need to take many good quality photos, which is hard for them
- Feedback latency must be reduced to make the systems more effective

Hybrid systems [56–58,62,88,92]
Strengths:
- Balanced trade-off between the combined technologies
- Improved accuracy, robustness, usability and performance
Challenges and drawbacks:
- Increased infrastructure usage
- Increased time usage
- Increased complexity
- Increased cost

A3: Table 2 summarizes the criteria for the most effective solutions.

Table 2. Comparison of identification technologies for PVI.

Technology | Cost | Equipment | Scanned Items | Requires Line-of-Sight | Range | Capacity
NFC | Low | NFC reader | 1 | No | Up to 10 cm | Maximum 1.6 MB
RFID | Low | RFID reader | Multiple | No | Up to 3 m | Maximum 8000 bytes
QR code | Free | Camera | 1 | Yes | Depends on code size | Maximum 2953 bytes
Barcode | Free | Camera | 1 | Yes | Depends on code size | N/A
Markers | Free | Camera | Multiple | Yes | Depends on marker size | N/A
CV techniques | High | Camera | Multiple | Yes | Depends on camera | -

Based on the categories and solutions discussed in Section 3, we selected some criteria and compared the technologies against them in Table 2. The first criterion is the cost of applying the technology to any solution.
It is shown that CV tag techniques can be used at almost no cost, apart from printing the QR codes or AR markers and placing them correctly; when barcodes are used, there is no need to print anything, as they are already present on each product. Tag based techniques can be used at low cost, as shops only need some RFID or NFC tags installed in the correct places. If CV techniques are used, high-quality equipment, such as cameras, is needed for good results.

The second criterion is the equipment needed to detect and identify products or places. For CV tag based solutions, PVI need only their smartphone cameras to detect and identify items. For non-tag CV techniques, some solutions need only smartphone cameras, while others need high-quality cameras to take high-resolution images and machines with powerful processors for the computations. For tag based techniques, PVI need an RFID reader or a smartphone supporting NFC.

The third criterion is the number of items that can be scanned at the same time. Only RFID readers, AR markers, and CV techniques can scan more than one item at a time, which is useful in some situations, for example, if PVI want to identify and count the items in their shopping cart.

The fourth criterion is whether or not the PVI must be in the line of sight of the identified products. For tag based solutions, PVI do not have to be in the line of sight of items and can identify them from any direction, although tags must be within 3 m for RFID and within 10 cm for NFC. CV solutions depend on other parameters, such as the tag size for QR codes and barcodes, and the marker and camera parameters for CV techniques.

The last criterion is the storage capacity of each solution. Only some tags, such as RFID, NFC, and QR codes, have storage capacity, while others, such as AR markers and barcodes, do not.
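The criteria compared in Table 2 can be condensed into a rough selection heuristic. The priorities and labels in the sketch below are illustrative assumptions made for discussion, not a prescription drawn from the reviewed literature; a real design decision would weigh many more factors.

```python
def suggest_technology(multi_item: bool, line_of_sight_ok: bool,
                       budget_low: bool) -> str:
    """Illustrative rule of thumb based on the Table 2 criteria."""
    if multi_item and not line_of_sight_ok:
        return "RFID"          # only non-visual tag reading multiple items
    if not line_of_sight_ok:
        return "NFC"           # single item, no line of sight, low cost
    if budget_low:
        return "Barcode/QR"    # free visual tags, camera only
    return "CV techniques"     # no tags needed, but costly equipment

print(suggest_technology(multi_item=True, line_of_sight_ok=False,
                         budget_low=True))   # → RFID
```

For example, counting several items in a cart without pointing a camera at each one favors RFID, while a shop that already labels everything with barcodes can be served with nothing more than a smartphone camera.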
Researchers can select and design new technology solutions based on their specific requirements, deciding which criteria to focus on and how to evaluate the trade-offs. Further research is needed to develop precise, more effective, low-cost, and easy-to-use assistive systems for PVI. Finally, when scientists and engineers develop MAT for PVI, they should study and take into account the Web Content Accessibility Guidelines [95] and the Section 508 standards [96].

5. Conclusions

PVI face many problems during the shopping process. This study has discussed the current, most prevalent solutions to help PVI shop effectively. We conclude that all presented solutions have some advantages and disadvantages. Researchers have tried to design and evaluate hybrid solutions that exploit the main advantages, and avoid the disadvantages, of the individual systems. However, these hybrid systems increase infrastructure use, time consumption, and system complexity. This study provides an introduction to guide and motivate researchers towards carrying out more studies that may lead to good solutions to help PVI in the shopping process.

Author Contributions: All authors contributed to the present paper with the same effort in finding available literature resources, as well as writing the paper.

Funding: The authors would like to thank the financial support of Széchenyi 2020 under the EFOP-3.6.1-16-2016-00015 project.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. What is Visual Impairment? Available online: https://www.news-medical.net/health/What-is-visual-impairment.aspx (accessed on 23 December 2018). 2. Ackland, P.; Resnikoff, S.; Bourne, R. World blindness and visual impairment: Despite many successes, the problem is growing. Community Eye Health J. 2018, 30, 71–73. 3. Capella-McDonnall, M. The Need for Health Promotion for Adults Who Are Visually Impaired. J. Vis. Impair. Blind. 2007, 101, 133–146. [CrossRef] 4.
Kollmuss, A.; Agyeman, J. Mind the Gap: Why do people act environmentally and what are the barriers to pro-environmental behavior. Environ. Educ. Res. 2002, 8, 239–260. [CrossRef] 5. Giudice, N.A. Navigating without Vision: Principles of Blind Spatial Cognition. In Handbook of Behavioral and Cognitive Geography; Montello, D.R., Ed.; Edward Elgar: UK, 2018; pp. 1–32. 6. Legge, G.E.; Granquist, C.; Baek, Y.; Gage, R. Indoor Spatial Updating with Impaired Vision. Investig. Ophthalmol. Vis. Sci. 2016, 57, 6757–6765. [CrossRef] [PubMed] 7. Schinazi, V.R.; Thrash, T.; Chebat, D. Spatial navigation by congenitally blind individuals. Wiley Interdiscip. Rev. Cogn. Sci. 2016, 7, 37–58. [CrossRef] [PubMed] 8. Yuan, C.W.; Hanrahan, B.V.; Lee, S.; Rosson, M.B.; Carroll, J.M. Constructing a holistic view of shopping with people with visual impairment: a participatory design approach. Univers. Access. Inf. Soc. 2017, 1–14. Available online: http://link.springer.com/10.1007/s10209-017-0577-1 (accessed on 21 November 2018). 9. Wong, E.J.; Yap, K.M.; Alexander, J.; Karnik, A. HABOS: Towards a platform of haptic-audio based online shopping for the visually impaired. In Proceedings of the ICOS 2015—2015 IEEE Conference on Open Systems, Bandar Melaka, Malaysia, 24–26 August 2016; pp. 62–67. 10. Wong, S. The limitations of using activity space measurements for representing the mobilities of individuals with visual impairment: A mixed methods case study in the San Francisco Bay Area. J. Transp. Geogr. 2018, 66, 300–308. [CrossRef] 11. Kostyra, E.; Zakowska-Biemans, S.; Sniegocka, K.; Piotrowska, A. Food shopping, sensory determinants of food choice and meal preparation by visually impaired people. Obstacles and expectations in daily food experiences. Appetite 2017, 113, 14–22. [CrossRef] 12. Bradley, N.A.; Dunlop, M.D. An Experimental Investigation into Wayfinding Directions for Visually Impaired People. Pers. Ubiquitous Comput. 2005, 9, 395–403. [CrossRef] 13.
Geruschat, D.R.; Hassan, S.E.; Turano, K.A.; Quigley, H.A.; Congdon, N.G. Gaze Behavior of the Visually Impaired During Street Crossing. Optom. Vis. Sci. 2006, 83, 550–558. [CrossRef] 14. Kbar, G.; Al-Daraiseh, A.; Mian, S.H.; Abidi, M.H. Utilizing sensors networks to develop a smart and context-aware solution for people with disabilities at the workplace (design and implementation). Int. J. Distrib. Sens. Netw. 2016, 12, 1–25. [CrossRef] 15. Nurjaman, T.A. Exploring the Interdependence between Visually Impaired and Sighted People in the Early Phase of Friendship. IJDS 2018, 5, 115–126. [CrossRef] 16. Leissner, J.; Coenen, M.; Froehlich, S.; Loyola, D.; Cieza, A. What explains health in persons with visual impairment? Health Qual. Life Outcomes 2014, 12, 65–81. [CrossRef] [PubMed] 17. Bhowmick, A.; Hazarika, S.M. An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends. J. Multimodal User Interfaces 2017, 11, 149–172. [CrossRef] 18. Elgendy, M.A.; Shawish, A.; Moussa, M.I. MCACC: New approach for augmenting the computing capabilities of mobile devices with Cloud Computing. In Proceedings of the IEEE Science and Information Conference, London, UK, 27–29 August 2014; pp. 79–86. 19. Shiraz, M.; Gani, A.; Khokhar, R.H.; Buyya, R. A review on distributed application processing frameworks in smart mobile devices for mobile cloud computing. IEEE Commun. Surv. Tutor. 2013, 15, 1294–1313. [CrossRef] 20. Elgendy, I.; Zhang, W.; Liu, C.; Hsu, C.H. An efficient and secured framework for mobile cloud computing. IEEE Trans. Cloud Comput. 2018, 6, 1. [CrossRef] 21. Angin, P.; Bhargava, B.K. Real-time mobile-cloud computing for context-aware blind navigation. Int. J. Next-Gener. Comput. 2011, 2, 1–13. 22. Bai, J.; Liu, D.; Su, G.; Fu, Z. A cloud and vision-based navigation system used for blind people.
In Proceedings of the International Conference on Artificial Intelligence, Automation and Control Technologies, Wuhan, China, 7–9 April 2017; pp. 1–6. 23. Habiba, U.; Barua, S.; Ahmed, F.; Dey, G.K.; Ahmmed, K.T. 3rd Hand: A device to support elderly and disabled person. In Proceedings of the 2nd International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 10–12 December 2015; pp. 1–6. 24. Domingo, M.C. An overview of the Internet of Things for people with disabilities. J. Netw. Comput. Appl. 2012, 35, 584–596. [CrossRef] 25. Vatavu, R.-D. Visual Impairments and Mobile Touchscreen Interaction: State-of-the-Art, Causes of Visual Impairment, and Design Guidelines. Int. J. Hum.–Comput. Interact. 2017, 33, 486–509. [CrossRef] 26. Ashraf, M.M.; Hasan, N.; Lewis, L.; Hasan, M.R.; Ray, P. A Systematic Literature Review of the Application of Information Communication Technology for Visually Impaired People. Int. J. Disabil. Manag. 2017, 11, 1–18. [CrossRef] 27. Csapó, Á.; Wersényi, G.; Nagy, H.; Stockman, T. A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research. J. Multimodal User Interfaces 2015, 9, 275–286. [CrossRef] 28. Csapó, Á.; Wersényi, G.; Jeon, M. A Survey on Hardware and Software Solutions for Multimodal Wearable Assistive Devices Targeting the Visually Impaired. Acta Polytech. Hungarica 2016, 13, 39–63. 29. Elgendy, M.; Lanyi, C.S. Review on Smart Solutions for People with Visual Impairment. In Proceedings of the International Conference on Computers for Handicapped Persons (ICCHP) Conference, Linz, Austria, 11–13 July 2018; pp. 81–84. 30. Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions. Sensors 2017, 17, 565. [CrossRef] [PubMed] 31. Hakobyan, L.; Lumsden, J.; O’Sullivan, D.; Bartlett, H. Mobile assistive technologies for the visually impaired. Surv. 
Ophthalmol. 2013, 58, 513–528. [CrossRef] [PubMed] 32. Ahmadi, H.; Arji, G.; Shahmoradi, L.; Safdari, R.; Nilashi, M.; Alizadeh, M. The application of internet of things in healthcare: a systematic literature review and classification. Univers. Access. Inf. Soc. 2018, 1–33. Available online: https://link.springer.com/article/10.1007/s10209-018-0618-4 (accessed on 21 November 2018). 33. Sakpere, W.E.; Mlitwa, N.B.W.; Oshin, M.A. Towards an efficient indoor navigation system: a near field communication approach. J. Eng. Des. Technol. 2017, 15, 505–527. [CrossRef] 34. Kulyukin, V.; Kutiyanawala, A. Accessible Shopping Systems for Blind and Visually Impaired Individuals: Design Requirements and the State of the Art. Open Rehabil. J. 2010, 3, 158–168. [CrossRef] 35. PRISMA guidelines. Available online: http://prisma-statement.org/PRISMAStatement/FlowDiagram.aspx (accessed on 12 March 2019). 36. Blind people and the World Wide Web. Available online: https://www.webbie.org.uk/webbie.htm (accessed on 3 December 2018). 37. Virtualeyez, A.M. Developing NFC Technology to Enable the Visually Impaired to Shop Independently. Master's Thesis, Dalhousie University, Halifax, Nova Scotia, July 2014. 38. Sakpere, W.; Adeyeye-Oshin, M.; Mlitwa, N.B. A State-of-the-Art Survey of Indoor Positioning and Navigation Systems and Technologies. South Afr. Comput. J. 2017, 29, 145–197. [CrossRef] 39. Dakopoulos, D.; Bourbakis, N.G. Wearable Obstacle Avoidance Electronic Travel Aids for Blind: A Survey. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2010, 40, 25–35. [CrossRef] 40. Woods, R.L.; Satgunam, P. Television, computer and portable display device use by people with central vision impairment. Ophthalmic Physiol. Opt. 2011, 31, 258–274. [CrossRef] 41. Zientara, P.A.; Lee, S.; Smith, G.H.; Brenner, R.; Itti, L.; Rosson, M.B.; Carroll, J.M.; Irick, K.M.; Narayanan, V. Third Eye: A Shopping Assistant for the Visually Impaired. Computer 2017, 50, 16–24. [CrossRef] 42.
Azenkot, S.; Zhao, Y. Designing smartglasses applications for people with low vision. ACM SIGACCESS Access Comput. 2017, 119, 19–24. [CrossRef] 43. Jafri, R.; Ali, S.A. A Multimodal Tablet-Based Application for the Visually Impaired for Detecting and Recognizing Objects in a Home Environment. In Proceedings of the International Conference on Computers for Handicapped Persons (ICCHP), Paris, France, 9–11 July 2014; pp. 356–359. 44. Jafri, R.; Ali, S.A.; Arabnia, H.R.; Fatima, S. Computer vision-based object recognition for the visually impaired in an indoors environment: A survey. Vis. Comput. 2014, 30, 1197–1222. [CrossRef] 45. Szpiro, S.; Zhao, Y.; Azenkot, S. Finding a store, searching for a product. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing—UbiComp, Heidelberg, Germany, 12–16 September 2016; ACM: New York, NY, USA, 2016; pp. 61–72. 46. Tai, K.C.; Chen, C.H. Symbol detection in low-resolution images using a novel method. Int. J. Control Autom. 2014, 7, 143–154. [CrossRef] 47. Li, Y. An Object Recognition Method Based on the Improved Convolutional Neural Network. J. Comput. Theor. Nanosci. 2016, 13, 870–877. [CrossRef] 48. Compeau, L.D.; Monroe, K.B.; Grewal, D.; Reynolds, K. Expressing and defining self and relationships through everyday shopping experiences. J. Bus. Res. 2016, 69, 1035–1042. [CrossRef] 49. Joan, S.F.; Valli, S. An enhanced text detection technique for the visually impaired to read text. Inf. Syst. Front. 2017, 19, 1039–1056. [CrossRef] 50. Stearns, L.; Du, R.; Oh, U.; Wang, Y.; Findlater, L.; Chellappa, R.; Froehlich, J.E. The design and preliminary evaluation of a finger-mounted camera and feedback system to enable reading of printed text for the blind. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Zurich, Switzerland, 2014; pp. 615–631. 51. Samal, B.M.; Parvathi, K.; Das, J.K.
A Bidirectional Text Transcription of Braille for Odia, Hindi, Telugu and English via Image Processing on FPGA. Acad. Educ. 2015, 4, 483–494. 52. Sakai, T.; Matsumoto, T.; Takeuchi, Y.; Kudo, H.; Ohnishi, N. A Mobile System of Reading out Restaurant Menus for Blind People. In Proceedings of the International Conference on Enabling Access for Persons with Visual Impairment, Athens, Greece, 12–14 February 2015; pp. 12–14. 53. Ani, R.; Maria, E.; Joyce, J.J.; Sakkaravarthy, V.; Raja, M.A. Smart Specs: Voice assisted text reading system for visually impaired persons using TTS method. In Proceedings of the Innovations in Green Energy and Healthcare Technologies (IGEHT), Coimbatore, India, 16–18 March 2017; pp. 1–6. 54. Chalamandaris, A.; Karabetsos, S.; Tsiakoulis, P.; Raptis, S. A unit selection text-to-speech synthesis system optimized for use with screen readers. IEEE Trans. Consum. Electron. 2010, 56, 1890–1897. [CrossRef] 55. Andò, B.; Baglio, S.; Marletta, V.; Crispino, R.; Pistorio, A. A Measurement Strategy to Assess the Optimal Design of an RFID-Based Navigation Aid. IEEE Trans. Instrum. Meas. 2018, 1–7. Available online: https://ieeexplore.ieee.org/abstract/document/8540063 (accessed on 21 November 2018). 56. McDaniel, T.L.; Kahol, K.; Villanueva, D.; Panchanathan, S. Integration of RFID and computer vision for remote object perception for individuals who are blind. In Proceedings of the Ambi-Sys Workshop on Haptic User Interfaces in Ambient Media Systems, Quebec City, QC, Canada, 11–14 February 2008; p. 7. 57. López-de-Ipiña, D.; Lorido, T.; López, U. BlindShopping: Enabling accessible shopping for visually impaired people through mobile technologies. In Proceedings of the International Conference on Smart Homes and Health Telematics, Montreal, QC, Canada, 20–22 June 2011; pp. 266–270. 58. Fernandes, H.; Costa, P.; Paredes, H.; Filipe, V.; Barroso, J. Integrating computer vision object recognition with location based services for the blind.
In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, Crete, Greece, 22–27 June 2014; pp. 493–500. 59. Kang, M.C.; Chae, S.H.; Sun, J.Y.; Lee, S.H.; Ko, S.J. An enhanced obstacle avoidance method for the visually impaired using deformable grid. IEEE Trans. Consum. Electron. 2017, 63, 169–177. [CrossRef] 60. Kang, M.C.; Chae, S.H.; Sun, J.Y.; Yoo, J.W.; Ko, S.J. A novel obstacle detection method based on deformable grid for the visually impaired. IEEE Trans. Consum. Electron. 2015, 61, 376–383. [CrossRef] 61. Medeiros, V.U.S.; Araújo, R.P.; Silva, R.L.A.; Slaets, A.F.F. Device for Location Assistance and Identification of Products in a Closed Environment. In Proceedings of the VI Latin American Congress on Biomedical Engineering CLAIB, Paraná, Argentina, 29–31 October 2014; pp. 992–994. 62. Rashid, Z.; Melià-Seguí, J.; Pous, R.; Peig, E. Using Augmented Reality and Internet of Things to improve accessibility of people with motor disabilities in the context of Smart Cities. Future Gener. Comput. Syst. 2017, 76, 248–261. [CrossRef] 63. RFID versus NFC: What's the Difference between NFC and RFID? Available online: https://blog.atlasrfidstore.com/rfid-vs-nfc (accessed on 23 December 2018). 64. Valero, E.; Adán, A.; Cerrada, C. Evolution of RFID applications in construction: A literature review. Sensors 2015, 15, 15988–16008. [CrossRef] [PubMed] 65. Active RFID vs. Passive RFID: What's the Difference? Available online: https://blog.atlasrfidstore.com/active-rfid-vs-passive-rfid (accessed on 2 December 2018). 66. Kornsingha, T.; Punyathep, P. A voice system, reading medicament label for visually impaired people. In Proceedings of the RFID SysTech 7th European Workshop, Smart Objects: Systems, Technologies and Applications, Dresden, Germany, 17–18 May 2011; pp. 1–6. 67. Mathankumar, M.; Sugandhi, N. A low cost smart shopping facilitator for visually impaired.
In Proceedings of the Advances in Computing, Communications and Informatics (ICACCI) International Conference, Mysore, India, 22–25 August 2013; pp. 1088–1092. 68. Kesh, S. Shopping by Blind People: Detection of Interactions in Ambient Assisted Living Environments using RFID. Int. J. 2017, 6, 7–11. 69. Fernandes, H.; Filipe, V.; Costa, P.; Barroso, J. Location based services for the blind supported by RFID technology. Procedia Comput. Sci. 2014, 27, 2–8. [CrossRef] 70. Tsirmpas, C.; Rompas, A.; Fokou, O.; Koutsouris, D. An indoor navigation system for visually impaired and elderly people based on Radio Frequency Identification (RFID). Inf. Sci. 2015, 320, 288–305. [CrossRef] 71. What’s an NFC Tag? Available online: https://electronics.howstuffworks.com/nfc-tag.htm (accessed on 2 December 2018). 72. Ozdenizci, B.; Coskun, V.; Ok, K. NFC internal: An indoor navigation system. Sensors 2015, 15, 7571–7595. [CrossRef] 73. Ozdenizci, B.; Ok, K.; Coskun, V.; Aydin, M.N. Development of an indoor navigation system using NFC technology. In Proceedings of the Information and Computing (ICIC) International Conference, Phuket Island, Thailand, 25–27 April 2011; pp. 11–14. 74. Jafri, R.; Ali, S.A.; Arabnia, H.R. Computer Vision-based Object Recognition for the Visually Impaired Using Visual Tags. In Proceedings of the International Conference on Image Processing, Computer Vision, Las Vegas, NV, USA, 22–25 July 2013; pp. 400–406. 75. Zhang, H.; Zhang, C.; Yang, W.; Chen, C.Y. Localization and navigation using QR code for mobile robot in indoor environment. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; pp. 2501–2506. 76. Zhao, Y.; Szpiro, S.; Knighten, J.; Azenkot, S. CueSee: Exploring visual cues for people with low vision to facilitate a visual search task. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 
73–84. 77. Kutiyanawala, A.; Kulyukin, V. Eyes-free barcode localization and decoding for visually impaired mobile phone users. In Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 12–15 July 2010; pp. 130–135. 78. Lee, S.J.; Lim, J.; Tewolde, G.; Kwon, J. Autonomous tour guide robot by using ultrasonic range sensors and QR code recognition in indoor environment. In Proceedings of the IEEE International Conference on Electro/Information Technology, Milwaukee, WI, USA, 5–7 June 2014; pp. 410–415. 79. Al-Khalifa, S.; Al-Razgan, M. Ebsar: Indoor guidance for the visually impaired. Comput. Electr. Eng. 2016, 54, 26–39. [CrossRef] 80. Yi, C.; Tian, Y.; Arditi, A. Portable camera-based assistive text and product label reading from hand-held objects for blind persons. IEEE/ASME Trans. Mechatron. 2014, 19, 808–817. [CrossRef] 81. Mekhalfi, M.L.; Melgani, F.; Zeggada, A.; De Natale, F.G.; Salem, M.A.M.; Khamis, A. Recovering the sight to blind people in indoor environments with smart technologies. Expert Syst. Appl. 2016, 46, 129–138. [CrossRef] 82. Takizawa, H.; Yamaguchi, S.; Aoyagi, M.; Ezaki, N.; Mizuno, S. Kinect cane: An assistive system for the visually impaired based on the concept of object recognition aid. Pers. Ubiquitous Comput. 2015, 19, 955–965. [CrossRef] 83. Advani, S.; Zientara, P.; Shukla, N.; Okafor, I.; Irick, K.; Sampson, J.; Datta, S.; Narayanan, V. A Multitask Grocery Assist System for the Visually Impaired: Smart glasses, gloves, and shopping carts provide auditory and tactile feedback. IEEE Consum. Electron. Mag. 2017, 6, 73–81. [CrossRef] 84. Kumar, R.; Meher, S. A Novel method for visually impaired using object recognition. In Proceedings of the International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 2–4 April 2015; pp. 772–776. 85. Aladren, A.; López-Nicolás, G.; Puig, L.; Guerrero, J.J.
Navigation Assistance for the Visually Impaired Using RGB-D Sensor with Range Expansion. IEEE Syst. J. 2016, 10, 922–932. [CrossRef] 86. Jafri, R.; Campos, R.L.; Ali, S.A.; Arabnia, H.R. Visual and Infrared Sensor Data-Based Obstacle Detection for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine. IEEE Access 2017, 6, 443–454. [CrossRef] 87. Hoang, V.N.; Nguyen, T.H.; Le, T.L.; Tran, T.H.; Vuong, T.P.; Vuillerme, N. Obstacle detection and warning system for visually impaired people based on electrode matrix and mobile Kinect. Vietnam J. Comput. Sci. 2017, 4, 71–83. [CrossRef] 88. López-de-Ipiña, D.; Lorido, T.; López, U. Indoor navigation and product recognition for blind people assisted shopping. In Proceedings of the International Workshop on Ambient Assisted Living (IWAAL), Torremolinos-Málaga, Spain, 8–10 June 2011; pp. 33–40. 89. Yuan, C.W.; Hanrahan, B.V.; Lee, S.; Carroll, J.M. Designing Equal Participation in Informal Learning for People with Visual Impairment. Interact. Des. Archit. J. 2015, 27, 93–106. 90. Farid, Z.; Nordin, R.; Ismail, M. Recent advances in wireless indoor localization techniques and system. J. Comput. Netw. Commun. 2013, 2013, 1–12. [CrossRef] 91. Giudice, N.A.; Legge, G.E. Blind Navigation and the Role of Technology. In The Engineering Handbook of Smart Technology for Aging, Disability, and Independence; Helal, A., Mokhtari, M., Abdulrazak, B., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2008; pp. 479–500. 92. Lanigan, P.E.; Paulos, A.M.; Williams, A.W.; Rossi, D.; Narasimhan, P. Trinetra: Assistive technologies for grocery shopping for the blind. In Proceedings of the International Symposium on Wearable Computers, ISWC, Montreux, Switzerland, 11–14 October 2006; pp. 147–148. 93. Billah, S.M.; Ashok, V.; Ramakrishnan, I.V. Write-it-Yourself with the Aid of Smartwatches: A Wizard-of-Oz Experiment with Blind People. 
In Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan, 7–11 March 2018; pp. 427–431. 94. Velmurugan, D.; Sonam, M.S.; Umamaheswari, S.; Parthasarathy, S.; Arun, K.R. A smart reader for visually impaired people using Raspberry Pi. Int. J. Eng. Sci. Comput. IJESC 2016, 6, 2997–3001. 95. Web Content Accessibility Guidelines (WCAG) Overview | Web Accessibility Initiative (WAI) | W3C. Available online: https://www.w3.org/WAI/standards-guidelines/wcag/ (accessed on 1 February 2019). 96. About the Section 508 Standards—United States Access Board. Available online: https://www.access-board.gov/guidelines-and-standards/communications-and-it/about-the-section-508-standards (accessed on 1 February 2019). © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Publisher
Multidisciplinary Digital Publishing Institute
Copyright
© 1996-2019 MDPI (Basel, Switzerland) unless otherwise stated
ISSN
2076-3417
DOI
10.3390/app9061061

applied sciences
Review

Making Shopping Easy for People with Visual Impairment Using Mobile Assistive Technologies

Mostafa Elgendy 1,2,*, Cecilia Sik-Lanyi 3 and Arpad Kelemen 4

1 Department of Electrical Engineering and Information Systems, University of Pannonia, 8200 Veszprém, Hungary
2 Department of Computer Science, Faculty of Computers and Informatics, Benha University, 13511 Benha, Egypt
3 Department of Electrical Engineering and Information Systems, University of Pannonia, 8200 Veszprém, Hungary; lanyi@almos.uni-pannon.hu
4 Department of Organizational Systems and Adult Health, University of Maryland, Baltimore, MD 21201, USA; kelemen@umaryland.edu
* Correspondence: mostafa.elgendy@virt.uni-pannon.hu; Tel.: +36-88-624-000/6188

Received: 27 December 2018; Accepted: 7 March 2019; Published: 13 March 2019

Abstract: People with visual impairment face various difficulties in their daily activities in comparison to people without visual impairment. Much research has been done to find smart solutions using mobile devices to help people with visual impairment perform tasks like shopping. One of the most challenging tasks for researchers is to create a solution that offers a good quality of life for people with visual impairment. It is also essential to develop solutions that encourage people with visual impairment to participate in social life. This study provides an overview of the various technologies that have been developed in recent years to assist people with visual impairment in shopping tasks. It gives an introduction to the latest direction in this area, which will help developers to incorporate such solutions into their research.

Keywords: smartphone; assistive technology; visually impaired; shopping; computer vision

1. Introduction

Visual Impairment (VI), which results from various diseases and degenerative conditions, causes significant limitations in visual capability. VI cannot be corrected by conventional means [1].
Currently, more than 253 million people live with VI, and this number is projected to increase in the coming decades [2]. People with Visual Impairment (PVI) have limitations in the function of their visual system. These limitations prevent them from seeing and doing daily activities, such as navigation or shopping [3–9]. For example, PVI have difficulties in reading product labels during shopping; they thus miss important information about the content of their food and sometimes make bad choices. During shopping, PVI also face navigation troubles, which encourage them to consume takeout [10,11]. Another problem is how to walk in an environment with many barriers, such as when walking in unknown places or crossing a street [12,13]. Moreover, the lack of support services in their surrounding environment makes PVI dependent on their families and prevents them from being socially active [14,15]. Last, but not least, PVI face social barriers such as the attitudes of other people and society [16]. Therefore, it is important to develop solutions to help PVI improve their mobility, protect them from injury, encourage them to travel outside of their own environments, and interact socially [17]. Recently, mobile devices, such as smartphones, smart glasses, and notebooks, have become popular. These new devices have various capabilities that are useful in developing complex software applications. Mobile devices can also be connected to cloud computing and offload tasks to be executed there, which saves computation, memory, and battery power [18–20]. The advantages of mobile technologies make them useful for accessing information from any place at any time and give PVI the opportunity to use smartphones in their daily activities [21–25].
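The offloading idea mentioned above reduces to a cost comparison: a task is worth sending to the cloud when transmitting its input and waiting for the remote result is estimated to cost less than executing it on the device. The sketch below is our own illustration of that trade-off; the energy constants and function name are invented and do not reproduce the policy of any cited framework [18–20].

```python
# Illustrative offloading decision: offload when the estimated remote cost
# (sending the data plus a fixed cloud round-trip overhead) is below the
# estimated local execution cost. All constants are hypothetical.
def should_offload(cycles, data_bits, local_j_per_cycle=1e-9,
                   send_j_per_bit=1e-7, cloud_overhead_j=0.05):
    """Return True when remote execution is estimated to be cheaper."""
    local_cost = cycles * local_j_per_cycle                    # energy on device
    remote_cost = data_bits * send_j_per_bit + cloud_overhead_j  # energy to offload
    return remote_cost < local_cost

# A compute-heavy recognition task with a small input is worth offloading:
print(should_offload(cycles=2e9, data_bits=8e5))   # True
# A light task with a large input is better kept on the device:
print(should_offload(cycles=1e7, data_bits=8e6))   # False
```

Real frameworks also weigh latency, network availability, and privacy, but the same cost-comparison structure underlies the decision.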
In this way, smartphones are combined with Assistive Technology (AT) to offer multiple solutions; this combination is called Mobile Assistive Technology (MAT). Researchers have conducted extensive investigations on using MAT to help PVI navigate from one place to another and shop without any support from people without disabilities [26–34]. In this study, we concentrate on the available solutions for helping PVI in the shopping process, which we divide into two parts. The first part is preparing the shopping list before shopping, which then provides assistance during shopping. The second part is navigating inside shopping malls and identifying products during shopping. The purpose of this literature review is twofold. The first aim is to answer the following research questions:

Q1: What are the main categories of MAT shopping solutions for PVI?
Q2: What are the strengths and weaknesses of the latest MAT shopping help systems for PVI?
Q3: What capabilities do the best and most effective solutions for PVI provide?

The second aim is to give an overview of the available MAT solutions for helping PVI prepare shopping lists, navigate inside shopping malls, and recognize products during shopping. We also discuss how the proposed solutions can help PVI in the shopping process and summarize their challenges and drawbacks. This paper is structured as follows: Section 2 describes the research methodology. Section 3 discusses multiple solutions and how they can help PVI. Section 4 explains the main benefits and research challenges. Finally, Section 5 outlines the conclusions of this study.

2. Research Methodology

In order to identify most of the available MAT solutions for PVI, we searched the following databases: Springer, Science Direct, Web of Science, Institute of Electrical and Electronics Engineers (IEEE) Xplore, Google Scholar, Association for Computing Machinery (ACM) Digital Library and Microsoft Academic.
We used the following keywords to search for peer-reviewed journal articles: ("Assistive technology" OR "Assistive technology devices" OR "Mobile assistive technology devices" OR "navigation solution" OR "shopping") AND ("visual impairment" OR "blind*"); ("avoiding obstacles" OR "write* notes" OR "text to speech") AND ("visual impairment" OR "blind*"). We set the search period to articles published between January 2010 and December 2018. The search query returned 8893 records. Duplicates were removed, reducing the search results to 842 articles. Then, we eliminated 433 results by restricting the set to articles written in English, describing a research intervention for PVI based on their titles, and freely downloadable. Next, all keywords were screened, which eliminated 206 articles because they were not technical papers, or they were literature reviews or surveys. Abstracts of the resulting 203 papers were then screened for relevance to our research goals. One hundred and thirteen of the articles were deemed inappropriate because they did not study visually impaired or blind populations, or they were not related to MAT. Next, two different researchers conducted a full-text review of the remaining 90 articles. Forty-six articles were eliminated because they did not address helping PVI in the shopping process or avoiding obstacles. The resulting 44 articles met all inclusion criteria and were evaluated in this study. Figure 1 shows the article selection flowchart, based on the PRISMA flow diagram [35].

Figure 1. PRISMA flowchart [35].

3. MAT Solutions for PVI Shopping

When entering into the world of PVI, one should be aware of certain obstacles PVI face while shopping alone, particularly when the support from shop assistants is limited [26,34]. Many retailers offer online shopping, but this method is difficult and time-consuming, as PVI have to listen to all the choices before choosing the product.
Even worse, if any items are missing, they must re-listen to the list again [36]. Some shops offer home delivery, but this option requires the PVI to make an appointment and wait for delivery. These alternatives limit personal autonomy and make independent shopping difficult, so PVI often avoid using these services. Buying a product in person at the grocery store is also difficult for PVI. They often wait for help from store employees, which is time-consuming. Moreover, most stores cannot assign an employee to help PVI, as hiring an assistant is too expensive and offers no privacy [34,37]. PVI also face difficulties when searching for a product on the shelves and checking its details, as the shopkeepers frequently move products around. Kostyra and co-workers found that sensory attributes are important for PVI when they select different products that have the same appearance. They administered a questionnaire, and the results showed that using mobile devices can make PVI feel independent during the shopping process, which is important to them [11]. We searched for available MAT solutions to help PVI before and during the shopping process. We found that the shopping process is divided into two parts: the first part is how to prepare the shopping list to make shopping easy and fast; the second part deals with how to navigate and identify products during the shopping process, as shown in Figure 2 [38–47]. The figure also shows that MATs for the shopping process are classified into three categories, based on the technology used: tag based, Computer Vision (CV) based, and hybrid systems. The remaining part of this section gives an overview of technologies that have been developed in recent years related to each part of the shopping process. Moreover, it gives an in-depth look at these solutions and some research examples to show how they work.

Figure 2. MAT solutions for the parts of the shopping process for PVI.
3.1. Shopping Preparation

Earlier studies [8,11,48] assume that shopping list preparation is a useful activity, as most PVI prefer to follow a predetermined list. It is therefore necessary to help PVI prepare shopping lists and store them in a database; during shopping, PVI can then retrieve and use their lists. Several applications take an image of printed text, a handwritten list, or Braille and send it to an Optical Character Recognition (OCR) module, which analyses the image and converts it to text [49–51]. In some cases, the system sends the image to a character recognition API in the cloud, which identifies the words and converts them to text [52]. Other applications use Speech-to-Text (STT) techniques to transform PVI voice commands into a list of items [53]. After preparation, the shopping lists are stored in a database. During shopping, PVI retrieve the shopping items from the database and listen to them through Text-To-Speech (TTS) [51,54].

3.2. Navigation and Product Identification

Getting to the store is not the only navigation challenge for PVI; in-store navigation is also a complex problem [24]. It is challenging for PVI to navigate inside shops and to reach and identify products. It is also difficult for them to obtain detailed information about products, such as production and expiration dates. Additionally, PVI always need help from others, as most shops are not well prepared to assist them [11]. As a result, researchers have developed multiple solutions to help PVI navigate and identify products. These solutions are divided into three categories, as shown in Figure 2: (1) Tag Based Systems: such as Radio-Frequency Identification (RFID) and Near Field Communication (NFC), which use wireless components to transfer data from a tag attached to an object for the purposes of automatic identification and tracking.
(2) Computer Vision Based Systems: some of these systems require unique visual tags, such as Quick Response (QR) codes, barcodes, or Augmented Reality (AR) markers, to be placed on products. These tags are used for detecting products and giving PVI all available details about them. Other systems do not require tags to be placed on products; instead, they utilize information about the objects' physical features to identify them. (3) Hybrid Systems: these take the strengths of two or more systems and combine them into a new system to deliver better accuracy and performance [9,41–43,55–62].

3.2.1. Tag Based Systems

Tag systems use wireless communication technology to transfer data from the tag attached to an object to a tag reader for automatic identification and tracking. Developers use several types of tags, but we will concentrate on RFID and NFC tags [63]. RFID uniquely identifies items by using radio waves. Each RFID system has three components: a tag, a reader, and an antenna. The reader sends a signal to the tag via the antenna, and the tag responds by sending its unique identification information [64]. There are two types of RFID tags: active and passive. Active tags broadcast a signal up to a range of 100 m. Passive tags draw electromagnetic energy from the reader, as they do not have their own power source, and can broadcast a signal up to a range of 25 m. Passive tags are typically smaller and cheaper than active tags [65]. For shopping, RFID tags are attached to products and PVI hold tag readers to identify them. After identifying the items, PVI get details about them, such as name, price, and special offers [55,66–68]. These systems use a server or a local database to store product details. Other solutions use RFID tags for navigation inside shopping malls; these solutions use a tag reader attached to a white cane to identify RFID tags.
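The tag based identification flow described above (read a tag's unique ID, look it up in a local product database, and build the message a TTS engine would speak) can be sketched as follows. The tag IDs, product names, and the `announce` helper are invented for illustration and are not taken from any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float
    offer: str = ""

# Hypothetical local database mapping tag IDs to product details,
# as in the RFID shopping systems discussed above.
PRODUCTS = {
    "E2000017221101441890": Product("Orange juice 1L", 1.99, "2 for 3.50"),
    "E2000017221101441891": Product("Whole-wheat bread", 2.49),
}

def announce(tag_id: str) -> str:
    """Build the audio message a TTS engine would speak for a scanned tag."""
    product = PRODUCTS.get(tag_id)
    if product is None:
        return "Unknown item, please scan again."
    message = f"{product.name}, {product.price:.2f} euros"
    if product.offer:
        message += f", special offer: {product.offer}"
    return message
```

The same lookup structure applies whether the ID comes from an RFID reader, an NFC-equipped smartphone, or a decoded visual tag; only the scanning front end changes.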
These navigation solutions use a map to guide PVI through the store and suggest the shortest route to their products, giving audio messages that provide verbal directions [69,70]. NFC technology is a version of RFID that operates at a maximum range of about 10 cm. An NFC system consists of three main components: a tag, a reader, and a database. The tag stores information, while the reader is an NFC mobile device that reads the content of the tag. The database stores additional information associated with the NFC tags. NFC technology is used for active or passive communication. In active mode, both the NFC tag and the NFC reader produce a radio field during communication. In passive mode, only the NFC reader generates a radio field and begins the communication session, so NFC tags do not process any information sent from other sources and cannot connect to other passive components. The main differences from RFID are the communication distance and the use of mobile devices by NFC systems versus RFID readers by RFID systems [33,71]. For shopping, NFC tags are attached to products and identified by smartphones, as with RFID. After identifying the items, PVI get details about them [33,72]. These systems use a database to store product details, so PVI can scan the tags and get information such as name, price, and special offers [37]. Some other solutions use NFC tags for navigation inside shopping malls. As with RFID, these solutions use a map to guide PVI through the store and suggest the shortest route to their products, giving audio messages that provide verbal directions [33,73].

3.2.2. Computer Vision Based Systems

CV based systems accept visual input from the camera and use CV techniques to extract valuable information and recognize objects in the surrounding environment. Finally, they provide information to the PVI through tactile or auditory channels [74]. Researchers classify CV based systems into tag based and non-tag based.
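The in-store route guidance described above for RFID and NFC navigation can be sketched as a shortest-path search over a map whose nodes are tag locations. The store layout and node names below are invented for illustration; a real deployment would derive the graph from the shop's floor plan.

```python
from collections import deque

# Illustrative store map: nodes are tagged waypoints, edges connect
# adjacent tags. This layout is an assumption made for the sketch.
STORE_MAP = {
    "entrance": ["aisle1"],
    "aisle1": ["entrance", "aisle2", "dairy"],
    "aisle2": ["aisle1", "bakery"],
    "dairy": ["aisle1"],
    "bakery": ["aisle2"],
}

def shortest_route(start: str, goal: str) -> list[str]:
    """Breadth-first search: shortest tag-to-tag route measured in hops."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in STORE_MAP[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return []  # no route found
```

Each hop along the returned route would be announced as a verbal direction, e.g. "entrance, aisle 1, aisle 2, bakery".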
In tag based systems, unique visual tags such as QR codes, barcodes, and AR markers are placed on products to aid the recognition process. Recognition is accomplished by capturing an image of the tag and analyzing it to determine the identity of the object from its tag information. A database is then used to retrieve product details, such as name, price, and special offers [42,75,76]. Other solutions use these tags for navigation and for giving the shortest route to products [77,78]. Finally, tactile or voice commands are used for warnings and for providing directions to the PVI [79]. In non-tag based systems, developers do not attach tags to objects; they use CV techniques to analyze images and identify objects [80–82]. Non-tag systems require extensive computational power to analyze images and give accurate results [44]. For example, Zientara et al. used smartglasses to accurately locate and identify objects using CV techniques [41,83]. They also used a glove with a camera, which guides hand movements to point at and grasp things. Kumar and Meher used a color recognition module with a convolutional neural network to recognize objects and their colors [84,85]. Jafri and co-workers processed the depth and motion tracking data obtained via a Google Tango Tablet to create and update a 3D box around the PVI to detect and avoid obstacles [43,86]. Hoang and co-workers utilized color images, depth images, and accelerometer information from a mobile Kinect and transferred them to a laptop for processing and obstacle detection [87]. Concerning the obstacle warning module, a tactile–visual substitution system uses voice commands to warn the PVI to avoid obstacles.

3.2.3. Hybrid Systems

In the previous two sections, several solutions to assist PVI in navigating and identifying products have been discussed. These solutions use tags such as RFID and NFC, visual tags such as QR codes and AR markers, or CV techniques.
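Barcode based identification normally validates a scan before the database lookup. As a generic illustration of this step (not a method from the cited systems), the standard EAN-13 check digit can be verified as follows:

```python
def ean13_valid(code: str) -> bool:
    """Verify the standard EAN-13 check digit of a scanned barcode."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1, 3 over the first 12 digits.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == digits[12]
```

A failed check lets the system prompt the PVI to rescan instead of reporting a wrong product, which matters when misreads are likely (motion blur, low resolution).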
However, these solutions are not suitable in all situations, because each environment has specific features. For example, CV techniques cannot be used in areas with unsuitable lighting, because the quality of the captured image will be poor; in this case, it is better to use a different technology, like RFID or NFC, to improve system accuracy. Another example is a shop in which items already carry barcodes or QR codes containing all the needed information; developers can use these tags for product identification and use CV techniques or non-visual tags (RFID, NFC) for navigation. Two or more such technologies can be combined, which leads to the development of hybrid systems. For example, McDaniel and co-authors proposed a system that integrates CV techniques with RFID systems; it identifies information about relevant objects in the surroundings and sends it to PVI [56]. López-de-Ipiña and co-authors integrated RFID with QR codes to allow PVI to navigate inside a grocery store [57,88]. The system used an RFID reader to identify RFID tags for navigating inside the store, and adopted the smartphone camera to identify QR codes placed on product shelves. Finally, Fernandes and co-authors developed a solution to help PVI identify objects and navigate indoor locations using RFID and CV technologies [58]; this system used the RFID reader to receive the current location of the PVI and CV techniques to identify objects.

4. Discussion

In this study we have presented different approaches, techniques, and technologies to help PVI navigate and identify products while shopping. We classified the shopping approaches into preparing a list of what to purchase and navigating inside the store to identify items on the shelves. Solutions for shopping list preparation use techniques such as CV, OCR, and STT to read list details from PVI and store them in a database. During shopping, audio messages about the stored shopping items are given to PVI using TTS.
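The list-preparation flow recapped above (capture items via OCR or STT, store them, read them back during shopping where a TTS engine would speak them) can be sketched with a small local database. The schema and function names are illustrative assumptions, not taken from any cited application.

```python
import sqlite3

def make_store() -> sqlite3.Connection:
    """Create an in-memory database for the prepared shopping list."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE shopping_list (item TEXT, quantity INTEGER)")
    return conn

def add_items(conn: sqlite3.Connection, items) -> None:
    """Store items obtained from the OCR or speech-to-text front end."""
    conn.executemany("INSERT INTO shopping_list VALUES (?, ?)", items)

def read_back(conn: sqlite3.Connection) -> list[str]:
    """Return the phrases a TTS module would speak for each stored item."""
    rows = conn.execute("SELECT item, quantity FROM shopping_list").fetchall()
    return [f"{qty} of {item}" for item, qty in rows]
```

In a real application the list would live in persistent storage on the phone, and `read_back` would feed the platform's screen reader or TTS API.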
Shopping preparation makes it easy for users to buy from a list, but it has some limitations: it assumes that shoppers already know what they wish to buy on their trip [8], which is not always the case, and shopping may still take a long time. Shopping is not simply buying items on a list. For people without visual impairment, shopping also entails opportunistically exploring new products or brands, engaging in cultural learning about tastes, and making different choices based on occasional sales [89]. Some solutions require taking an image of the shopping list, but it is difficult for PVI to take a good quality image. Researchers have also developed multiple solutions to help PVI navigate and identify products using RFID and NFC for tag based systems. RFID tags have some benefits: their signals can penetrate walls and obstacles, covering a wide area, and they can reuse existing infrastructure, resulting in cost reduction [90]. Also, PVI do not need to face a certain direction to receive messages from the tagged item [91]. Moreover, the tag does not need to be within the line of sight of the RFID reader, which allows it to be placed inside items. However, there are some drawbacks: to set up the environment, hundreds of RFID tags need to be installed, which is costly. Information overload is another major problem, as it is overwhelming to receive information about all the items in the store at the same time and attempt to use this information to identify various objects [56]. Another significant limitation is that these systems can only be used in a restricted environment in which objects have been tagged, and the tags need regular maintenance. Moreover, RFID tags are sometimes attached to items, such as liquids in metal cans, that reflect radio waves during communication; using RFID technology with glass likewise causes a reflection of the radio waves, which affects the system outcome. NFC tags have the following benefits: PVI can simply touch the NFC tag with an NFC reader, such as a smartphone, to begin the required service. NFC readers read the information stored in the tags, which enables PVI to get product information. Moreover, by using NFC tags, researchers can build low-cost indoor navigation systems. NFC technology also has a very low response time, because the time required to transfer data from an NFC tag to a mobile device and generate the walking path to the items is short. NFC tags also provide accurate position and orientation information, so orienting the user toward the destination is facilitated. Moreover, NFC tags work well in unclean environments and do not require a direct line of sight between the reader and the tags. Finally, PVI do not have to carry large or bulky devices; they need only their smartphones. However, there are some drawbacks: NFC is not as effective and efficient as Bluetooth or Wi-Fi Direct when it comes to data transfer rates. NFC can only send and receive very small packets of data, so real-time positioning cannot be provided by an NFC-based system. Also, PVI must be inside the reading area to identify NFC tags and must have an NFC-equipped smartphone. Researchers have also developed multiple solutions to help PVI navigate and identify products using CV techniques. CV techniques rely on visual tags, such as QR codes, barcodes, and AR markers, or utilize information about the objects' physical features. CV tag based systems offer several advantages: they only need to identify tags to get product details, so they require low computational power and little storage space. Many of these approaches do not need tags to be explicitly placed, as products already carry unique visual tags such as barcodes and QR codes. Such tags can be generated and printed at a very low cost compared to non-visual tags, such as RFID, and can be easily modified.
CV tag based systems are ideal for tasks that require differentiating among groups of objects. They are vital for PVI when the contents of items differ but their packaging looks alike, such as a tube of glue versus a tube of eye drops: the two have the same shape, and it may be dangerous to choose the wrong one [92]. However, tag based CV techniques require a prior selection of items and the correct placement of tags on those items. Moreover, if there are many tagged items in a small area, PVI would be confused by receiving information about all of them at the same time. Visual tags must also be in the line of sight of the camera, otherwise they will not be detected. Furthermore, visual tags cannot be placed inside items, as the tags must remain visible. They can also be damaged during movement through the supply chain or by weather. Also, it is difficult for a smartphone camera to detect CV tags if the PVI is moving fast, and the recognition rate decreases as the distance between the camera and the tags increases [75]. CV non-tag based systems have several advantages: they are cost-effective, as they need little or no infrastructure, and most of them can be easily installed on smartphones. However, they have several limitations. Their performance may be unreliable in real-world environments because of factors like motion blur and image resolution, as well as changes in conditions such as illumination, orientation, and scale. These systems use extensive computational power, and PVI need to take many photos, although taking good quality photos is difficult for them. Finally, feedback latency must be reduced to make these systems more effective. Lastly, researchers have created hybrid systems by taking the strengths of two or more systems and combining them. Numerous attempts have been made in this area to balance the trade-offs of the combined technologies.
As a result, there is a significant improvement in accuracy, robustness, performance, and usability. However, the major drawback of these systems is that they require significant infrastructure due to the combination of technologies, which results in increased complexity and cost. Figure 3 shows a complete scenario of the shopping process, from preparation of the shopping list to completion of the shopping task.

Figure 3. The scenario of the shopping solutions for PVI.

To summarize, the answers to Q1, Q2, and Q3 are: A1: This study presented the main categories of MAT to help PVI shop. We divided the shopping task into two parts: (1) preparing a list of what to buy, and (2) navigation inside a store and identification of items on the shelves. Shopping list preparation solutions use techniques such as CV and STT to obtain list details from PVI and store them in a database. During shopping, audio feedback about the shopping items is given to PVI using TTS. Navigation and product identification solutions use RFID and NFC for tag based systems, visual tags like QR codes, barcodes, and AR markers for CV tag based systems, or information about the physical features of items for CV non-tag based approaches. A2: Table 1 shows the strengths and weaknesses of the latest MAT shopping systems for PVI.

Table 1. Strengths, challenges and drawbacks of each category.
Shopping preparation (CV, OCR and STT) [49–54,93,94]
Strengths: makes it easy for users to buy from a list.
Challenges and drawbacks: assumes that PVI already know what they wish to buy on their trip; shopping is not composed of simply purchasing a set of items on a list; it is hard for PVI to take good quality pictures; hard to integrate with other systems.

Navigation and product identification – tag based systems – RFID [55,66–70]
Strengths: RFID signals are able to penetrate walls and obstacles; RFID works well with existing infrastructure, which results in cost reduction; tags do not need to be within the line of sight of the RFID reader, which allows them to be placed inside items; PVI can read more than one tag at the same time.
Challenges and drawbacks: small coverage area; hundreds of RFID tags are needed in the environment, which is costly; receiving information from all the items at the same time is confusing for PVI; PVI must be in a certain direction to receive messages from tags; RFID tags need regular maintenance; using RFID tags with liquids in metal cans reflects the radio waves during communication.

Navigation and product identification – tag based systems – NFC [33,37,72,73]
Strengths: PVI simply touch the NFC tag to begin the required service; NFC tags can be used to build low-cost indoor navigation systems in which PVI do not have to carry large devices; NFC minimizes response time and provides accurate position information; NFC tags work well in unclean environments; PVI do not need a direct line of sight between the reader and the tags.
Challenges and drawbacks: NFC is not as effective and efficient as Bluetooth or Wi-Fi Direct; PVI should be inside the reading area in order to identify NFC tags; PVI must have an NFC-equipped mobile device.

Navigation and product identification – CV based systems – QR code, barcode, markers [42,75–79]
Strengths: they only need to extract the item tag, so they need low computational power and storage space; they do not need tags to be explicitly placed, as products already have unique visual tags such as barcodes and QR codes; these tags can be generated and printed at very low cost.
Challenges and drawbacks: in areas with many tagged items, PVI will be confused by receiving information about all items at the same time; tags have to be in the line of sight of the camera; tags cannot be placed inside items; tags are damaged by movement across the supply chain or by weather; it is difficult to detect tags if the PVI is moving fast, and the recognition rate decreases as the distance between the reader and the tags increases.

Navigation and product identification – CV based systems – CV [41,43,46,47,59,60,80–87]
Strengths: cost-effective, as they need little or no infrastructure; most of them can be installed easily on smartphones.
Challenges and drawbacks: inconsistent performance in real-world environments because of factors such as motion blur; extensive computation power; PVI need to take many good quality photos, which is hard for them; feedback latency must be reduced to make the systems more effective.

Hybrid systems [56–58,62,88,92]
Strengths: balanced trade-off between the combined technologies; improved accuracy, robustness, usability and performance.
Challenges and drawbacks: increased infrastructure usage; increased time usage; increased complexity; increased cost.

A3: Table 2 summarizes the criteria for the most effective solutions.

Table 2. Comparison of identification technologies for PVI.

Technology | Cost | Equipment | Number of Scanned Items | Requires Line of Sight | Range | Capacity
NFC | Low | NFC reader | 1 | No | Up to 10 cm | Maximum 1.6 MB
RFID | Low | RFID reader | Multiple | No | Up to 3 m | Maximum 8000 bytes
QR code | Free | Camera | 1 | Yes | Depends on code size | Maximum 2953 bytes
Barcode | Free | Camera | 1 | Yes | Depends on code size | N/A
Markers | Free | Camera | Multiple | Yes | Depends on marker size | N/A
CV techniques | High | Camera | Multiple | Yes | Depends on camera | N/A

Based on the categories and solutions discussed above, we selected some criteria and compared them in Table 2. The first criterion is the cost of applying the technology to any solution.
It is shown that CV tag techniques can be used at no cost beyond printing the QR codes or AR markers and putting them in the correct places; when barcodes are used, there is no need to print them, as they are already present on each product. Tag based techniques can be used at a low cost, as shops only need some RFID or NFC tags installed in the correct places. If CV techniques are used, high-quality equipment, such as cameras, is needed for good results. The second criterion is the equipment needed to detect and identify products or places. For CV tag based solutions, PVI need only their smartphone cameras to detect and identify items. For CV techniques, some solutions need only smartphone cameras, while others need high-quality cameras to take high-resolution images and machines with powerful processors for the computations. For tag based techniques, PVI need an RFID reader or a smartphone supporting NFC technology. The third criterion is the number of items that can be scanned at the same time. Only RFID readers, AR markers, and CV techniques can scan more than one item at the same time, which is useful in some situations, for example if PVI want to identify and count the items in their shopping cart. The fourth criterion is whether or not the PVI must be in the line of sight of the identified products, together with the reading range. For tag based solutions, PVI do not have to be in the line of sight of items and can identify them from any direction, but tags must be within 3 m for RFID and within 10 cm for NFC; CV solutions depend on other parameters, like the tag size for QR codes and barcodes, the marker size, or the camera parameters for CV techniques. The last criterion is the storage capacity of each solution. Only some tags, such as RFID, NFC, and QR codes, have a storage capacity, while others, such as AR markers and barcodes, do not.
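The criteria just discussed can also be encoded as data, so that a designer can filter candidate technologies against requirements. The helper below is a hypothetical illustration of this kind of trade-off screening, not a method from the reviewed papers; only two of the criteria are encoded.

```python
# The cost, multi-item, and line-of-sight criteria from Table 2,
# transcribed as data for a simple requirements filter.
TECHNOLOGIES = {
    "NFC": {"cost": "low", "multi_item": False, "line_of_sight": False},
    "RFID": {"cost": "low", "multi_item": True, "line_of_sight": False},
    "QR code": {"cost": "free", "multi_item": False, "line_of_sight": True},
    "Barcode": {"cost": "free", "multi_item": False, "line_of_sight": True},
    "Markers": {"cost": "free", "multi_item": True, "line_of_sight": True},
    "CV techniques": {"cost": "high", "multi_item": True, "line_of_sight": True},
}

def candidates(need_multi_item: bool, allow_line_of_sight: bool) -> list[str]:
    """Return the technologies that satisfy the given requirements."""
    result = []
    for name, props in TECHNOLOGIES.items():
        if need_multi_item and not props["multi_item"]:
            continue
        if not allow_line_of_sight and props["line_of_sight"]:
            continue
        result.append(name)
    return result
```

For instance, requiring multi-item scanning without line of sight (e.g. counting items in a cart) leaves only RFID among the compared technologies.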
Researchers can select and design new technology solutions based on their specific requirements, deciding which criteria to focus on and how to evaluate the trade-offs. Further research is needed to develop precise, more effective, low-cost, and easy-to-use assistive systems for PVI. Finally, when scientists and engineers develop MAT for PVI, they should study and take into account the Web Content Accessibility Guidelines [95] and the Section 508 standards [96].

5. Conclusions

PVI face many problems during the shopping process. This study has discussed the current, most prevalent solutions to help PVI shop effectively. We conclude that all presented solutions have some advantages and disadvantages. Researchers have tried to design and evaluate hybrid solutions that exploit the main advantages, and avoid the disadvantages, of individual systems. However, these hybrid systems increase infrastructure use, time consumption, and system complexity. This study provides an introduction to guide and motivate researchers towards carrying out more studies that may lead to good solutions to help PVI in the shopping process.

Author Contributions: All authors contributed to the present paper with the same effort in finding available literature resources, as well as writing the paper.

Funding: The authors would like to thank the financial support of Széchenyi 2020 under the EFOP-3.6.1-16-2016-00015.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. What is Visual Impairment? Available online: https://www.news-medical.net/health/What-is-visual-impairment.aspx (accessed on 23 December 2018). 2. Ackland, P.; Resnikoff, S.; Bourne, R. World blindness and visual impairment: Despite many successes, the problem is growing. Community Eye Health J. 2018, 30, 71–73. 3. Capella-McDonnall, M. The Need for Health Promotion for Adults Who Are Visually Impaired. J. Vis. Impair. Blind 2007, 2002, 133–146. [CrossRef] 4.
Kollmuss, A.; Agyeman, J. Mind the Gap: Why do people act environmentally and what are the barriers to pro-environmental behavior. Environ. Educ. Res. 2002, 8, 239–260. [CrossRef] 5. Giudice, N.A. Navigating without Vision: Principles of Blind Spatial Cognition. In Handbook of Behavioral and Cognitive Geography; Montello, D.R., Ed.; Publisher: Edward Elgar, UK, 2018; pp. 1–32. 6. Legge, G.E.; Granquist, C.; Baek, Y.; Gage, R. Indoor Spatial Updating with Impaired Vision. Investig. Opthalmol. Vis. Sci. 2016, 57, 6757–6765. [CrossRef] [PubMed] 7. Schinazi, V.R.; Thrash, T.; Chebat, D. Spatial navigation by congenitally blind individuals. Wiley Interdiscip. Rev. Cogn. Sci 2016, 7, 37–58. [CrossRef] [PubMed] 8. Yuan, C.W.; Hanrahan, B.V.; Lee, S.; Rosson, M.B.; Carroll, J.M. Constructing a holistic view of shopping with people with visual impairment: a participatory design approach. Univers. Access. Inf. Soc. 2017, 1–14. Available online: http://link.springer.com/10.1007/s10209-017-0577-1 (accessed on 21 November 2018). 9. Wong, E.J.; Yap, K.M.; Alexander, J.; Karnik, A. HABOS: Towards a platform of haptic-audio based online shopping for the visually impaired. In Proceedings of the ICOS 2015—2015 IEEE Conference on Open Systems, Bandar Melaka, Malaysia, 24–26 August 2016; pp. 62–67. 10. Wong, S. The limitations of using activity space measurements for representing the mobilities of individuals with visual impairment: A mixed methods case study in the San Francisco Bay Area. J. Transp. Geogr. 2018, 66, 300–308. [CrossRef] 11. Kostyra, E.; Zakowska-Biemans, S.; Sniegocka, K.; Piotrowska, A. Food shopping, sensory determinants of food choice and meal preparation by visually impaired people. Obstacles and expectations in daily food experiences. Appetite 2017, 113, 14–22. [CrossRef] 12. Bradley, N.A.; Dunlop, M.D. An Experimental Investigation into Wayfinding Directions for Visually Impaired People. Pers. Ubiquitous Comput. 2005, 9, 395–403. [CrossRef] 13. 
Geruschat, D.R.; Hassan, S.E.; Turano, K.A.; Quigley, H.A.; Congdon, N.G. Gaze Behavior of the Visually Impaired During Street Crossing. Optom. Vis. Sci. 2006, 83, 550–558. [CrossRef] 14. Kbar, G.; Al-Daraiseh, A.; Mian, S.H.; Abidi, M.H. Utilizing sensor networks to develop a smart and context-aware solution for people with disabilities at the workplace (design and implementation). Int. J. Distrib. Sens. Netw. 2016, 12, 1–25. [CrossRef] 15. Nurjaman, T.A. Exploring the Interdependence between Visually Impaired and Sighted People in the Early Phase of Friendship. IJDS 2018, 5, 115–126. [CrossRef] 16. Leissner, J.; Coenen, M.; Froehlich, S.; Loyola, D.; Cieza, A. What explains health in persons with visual impairment? Health Qual. Life Outcomes 2014, 12, 65–81. [CrossRef] [PubMed] 17. Bhowmick, A.; Hazarika, S.M. An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends. J. Multimodal User Interfaces 2017, 11, 149–172. [CrossRef] 18. Elgendy, M.A.; Shawish, A.; Moussa, M.I. MCACC: New approach for augmenting the computing capabilities of mobile devices with Cloud Computing. In Proceedings of the IEEE Science and Information Conference, London, UK, 27–29 August 2014; pp. 79–86. 19. Shiraz, M.; Gani, A.; Khokhar, R.H.; Buyya, R. A review on distributed application processing frameworks in smart mobile devices for mobile cloud computing. IEEE Commun. Surv. Tutor. 2013, 15, 1294–1313. [CrossRef] 20. Elgendy, I.; Zhang, W.; Liu, C.; Hsu, C.H. An efficient and secured framework for mobile cloud computing. IEEE Trans. Cloud Comput. 2018, 6, 1. [CrossRef] 21. Angin, P.; Bhargava, B.K. Real-time mobile-cloud computing for context-aware blind navigation. Int. J. Next-Gener. Comput. 2011, 2, 1–13. 22. Bai, J.; Liu, D.; Su, G.; Fu, Z. A cloud and vision-based navigation system used for blind people.
In Proceedings of the International Conference on Artificial Intelligence, Automation and Control Technologies, Wuhan, China, 7–9 April 2017; pp. 1–6.
23. Habiba, U.; Barua, S.; Ahmed, F.; Dey, G.K.; Ahmmed, K.T. 3rd Hand: A device to support elderly and disabled person. In Proceedings of the 2nd International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 10–12 December 2015; pp. 1–6.
24. Domingo, M.C. An overview of the Internet of Things for people with disabilities. J. Netw. Comput. Appl. 2012, 35, 584–596. [CrossRef]
25. Vatavu, R.-D. Visual Impairments and Mobile Touchscreen Interaction: State-of-the-Art, Causes of Visual Impairment, and Design Guidelines. Int. J. Hum.–Comput. Interact. 2017, 33, 486–509. [CrossRef]
26. Ashraf, M.M.; Hasan, N.; Lewis, L.; Hasan, M.R.; Ray, P. A Systematic Literature Review of the Application of Information Communication Technology for Visually Impaired People. Int. J. Disabil. Manag. 2017, 11, 1–18. [CrossRef]
27. Csapó, Á.; Wersényi, G.; Nagy, H.; Stockman, T. A survey of assistive technologies and applications for blind users on mobile platforms: A review and foundation for research. J. Multimodal User Interfaces 2015, 9, 275–286. [CrossRef]
28. Csapó, Á.; Wersényi, G.; Jeon, M. A Survey on Hardware and Software Solutions for Multimodal Wearable Assistive Devices Targeting the Visually Impaired. Acta Polytech. Hung. 2016, 13, 39–63.
29. Elgendy, M.; Lanyi, C.S. Review on Smart Solutions for People with Visual Impairment. In Proceedings of the International Conference on Computers for Handicapped Persons (ICCHP), Linz, Austria, 11–13 July 2018; pp. 81–84.
30. Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions. Sensors 2017, 17, 565. [CrossRef] [PubMed]
31. Hakobyan, L.; Lumsden, J.; O'Sullivan, D.; Bartlett, H. Mobile assistive technologies for the visually impaired. Surv. Ophthalmol. 2013, 58, 513–528. [CrossRef] [PubMed]
32. Ahmadi, H.; Arji, G.; Shahmoradi, L.; Safdari, R.; Nilashi, M.; Alizadeh, M. The application of internet of things in healthcare: A systematic literature review and classification. Univers. Access Inf. Soc. 2018, 1–33. Available online: https://link.springer.com/article/10.1007/s10209-018-0618-4 (accessed on 21 November 2018).
33. Sakpere, W.E.; Mlitwa, N.B.W.; Oshin, M.A. Towards an efficient indoor navigation system: A near field communication approach. J. Eng. Des. Technol. 2017, 15, 505–527. [CrossRef]
34. Kulyukin, V.; Kutiyanawala, A. Accessible Shopping Systems for Blind and Visually Impaired Individuals: Design Requirements and the State of the Art. Open Rehabil. J. 2010, 3, 158–168. [CrossRef]
35. PRISMA Guidelines. Available online: http://prisma-statement.org/PRISMAStatement/FlowDiagram.aspx (accessed on 12 March 2019).
36. Blind People and the World Wide Web. Available online: https://www.webbie.org.uk/webbie.htm (accessed on 3 December 2018).
37. Virtualeyez, A.M. Developing NFC Technology to Enable the Visually Impaired to Shop Independently. Master's Thesis, Dalhousie University, Halifax, NS, Canada, July 2014.
38. Sakpere, W.; Adeyeye-Oshin, M.; Mlitwa, N.B. A State-of-the-Art Survey of Indoor Positioning and Navigation Systems and Technologies. South Afr. Comput. J. 2017, 29, 145–197. [CrossRef]
39. Dakopoulos, D.; Bourbakis, N.G. Wearable Obstacle Avoidance Electronic Travel Aids for Blind: A Survey. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2010, 40, 25–35. [CrossRef]
40. Woods, R.L.; Satgunam, P. Television, computer and portable display device use by people with central vision impairment. Ophthalmic Physiol. Opt. 2011, 31, 258–274. [CrossRef]
41. Zientara, P.A.; Lee, S.; Smith, G.H.; Brenner, R.; Itti, L.; Rosson, M.B.; Carroll, J.M.; Irick, K.M.; Narayanan, V. Third Eye: A Shopping Assistant for the Visually Impaired. Computer 2017, 50, 16–24. [CrossRef]
42. Azenkot, S.; Zhao, Y. Designing smartglasses applications for people with low vision. ACM SIGACCESS Access. Comput. 2017, 119, 19–24. [CrossRef]
43. Jafri, R.; Ali, S.A. A Multimodal Tablet-Based Application for the Visually Impaired for Detecting and Recognizing Objects in a Home Environment. In Proceedings of the International Conference on Computers for Handicapped Persons (ICCHP), Paris, France, 9–11 July 2014; pp. 356–359.
Appl. Sci. 2019, 9, 1061
44. Jafri, R.; Ali, S.A.; Arabnia, H.R.; Fatima, S. Computer vision-based object recognition for the visually impaired in an indoors environment: A survey. Vis. Comput. 2014, 30, 1197–1222. [CrossRef]
45. Szpiro, S.; Zhao, Y.; Azenkot, S. Finding a store, searching for a product. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), Heidelberg, Germany, 12–16 September 2016; ACM: New York, NY, USA, 2016; pp. 61–72.
46. Tai, K.C.; Chen, C.H. Symbol detection in low-resolution images using a novel method. Int. J. Control Autom. 2014, 7, 143–154. [CrossRef]
47. Li, Y. An Object Recognition Method Based on the Improved Convolutional Neural Network. J. Comput. Theor. Nanosci. 2016, 13, 870–877. [CrossRef]
48. Compeau, L.D.; Monroe, K.B.; Grewal, D.; Reynolds, K. Expressing and defining self and relationships through everyday shopping experiences. J. Bus. Res. 2016, 69, 1035–1042. [CrossRef]
49. Joan, S.F.; Valli, S. An enhanced text detection technique for the visually impaired to read text. Inf. Syst. Front. 2017, 19, 1039–1056. [CrossRef]
50. Stearns, L.; Du, R.; Oh, U.; Wang, Y.; Findlater, L.; Chellappa, R.; Froehlich, J.E. The design and preliminary evaluation of a finger-mounted camera and feedback system to enable reading of printed text for the blind. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Zurich, Switzerland, 2014; pp. 615–631.
51. Samal, B.M.; Parvathi, K.; Das, J.K. A bidirectional text transcription of Braille for Odia, Hindi, Telugu and English via image processing on FPGA. Acad. Educ. 2015, 4, 483–494.
52. Sakai, T.; Matsumoto, T.; Takeuchi, Y.; Kudo, H.; Ohnishi, N. A Mobile System of Reading out Restaurant Menus for Blind People. In Proceedings of the International Conference on Enabling Access for Persons with Visual Impairment, Athens, Greece, 12–14 February 2015; pp. 12–14.
53. Ani, R.; Maria, E.; Joyce, J.J.; Sakkaravarthy, V.; Raja, M.A. Smart Specs: Voice assisted text reading system for visually impaired persons using TTS method. In Proceedings of the Innovations in Green Energy and Healthcare Technologies (IGEHT), Coimbatore, India, 16–18 March 2017; pp. 1–6.
54. Chalamandaris, A.; Karabetsos, S.; Tsiakoulis, P.; Raptis, S. A unit selection text-to-speech synthesis system optimized for use with screen readers. IEEE Trans. Consum. Electron. 2010, 56, 1890–1897. [CrossRef]
55. Andò, B.; Baglio, S.; Marletta, V.; Crispino, R.; Pistorio, A. A Measurement Strategy to Assess the Optimal Design of an RFID-Based Navigation Aid. IEEE Trans. Instrum. Meas. 2018, 1–7. Available online: https://ieeexplore.ieee.org/abstract/document/8540063 (accessed on 21 November 2018).
56. McDaniel, T.L.; Kahol, K.; Villanueva, D.; Panchanathan, S. Integration of RFID and computer vision for remote object perception for individuals who are blind. In Proceedings of the Ambi-Sys Workshop on Haptic User Interfaces in Ambient Media Systems, Quebec City, QC, Canada, 11–14 February 2008; p. 7.
57. López-de-Ipiña, D.; Lorido, T.; López, U. BlindShopping: Enabling accessible shopping for visually impaired people through mobile technologies. In Proceedings of the International Conference on Smart Homes and Health Telematics, Montreal, QC, Canada, 20–22 June 2011; pp. 266–270.
58. Fernandes, H.; Costa, P.; Paredes, H.; Filipe, V.; Barroso, J. Integrating computer vision object recognition with location based services for the blind. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, Crete, Greece, 22–27 June 2014; pp. 493–500.
59. Kang, M.C.; Chae, S.H.; Sun, J.Y.; Lee, S.H.; Ko, S.J. An enhanced obstacle avoidance method for the visually impaired using deformable grid. IEEE Trans. Consum. Electron. 2017, 63, 169–177. [CrossRef]
60. Kang, M.C.; Chae, S.H.; Sun, J.Y.; Yoo, J.W.; Ko, S.J. A novel obstacle detection method based on deformable grid for the visually impaired. IEEE Trans. Consum. Electron. 2015, 61, 376–383. [CrossRef]
61. Medeiros, V.U.S.; Araújo, R.P.; Silva, R.L.A.; Slaets, A.F.F. Device for Location Assistance and Identification of Products in a Closed Environment. In Proceedings of the VI Latin American Congress on Biomedical Engineering (CLAIB), Paraná, Argentina, 29–31 October 2014; pp. 992–994.
62. Rashid, Z.; Melià-Seguí, J.; Pous, R.; Peig, E. Using Augmented Reality and Internet of Things to improve accessibility of people with motor disabilities in the context of Smart Cities. Future Gener. Comput. Syst. 2017, 76, 248–261. [CrossRef]
63. RFID versus NFC: What's the Difference between NFC and RFID? Available online: https://blog.atlasrfidstore.com/rfid-vs-nfc (accessed on 23 December 2018).
64. Valero, E.; Adán, A.; Cerrada, C. Evolution of RFID applications in construction: A literature review. Sensors 2015, 15, 15988–16008. [CrossRef] [PubMed]
65. Active RFID vs. Passive RFID: What's the Difference? Available online: https://blog.atlasrfidstore.com/active-rfid-vs-passive-rfid (accessed on 2 December 2018).
66. Kornsingha, T.; Punyathep, P. A voice system, reading medicament label for visually impaired people. In Proceedings of the RFID SysTech 7th European Workshop on Smart Objects: Systems, Technologies and Applications, Dresden, Germany, 17–18 May 2011; pp. 1–6.
67. Mathankumar, M.; Sugandhi, N. A low cost smart shopping facilitator for visually impaired. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI), Mysore, India, 22–25 August 2013; pp. 1088–1092.
68. Kesh, S. Shopping by Blind People: Detection of Interactions in Ambient Assisted Living Environments using RFID. Int. J. 2017, 6, 7–11.
69. Fernandes, H.; Filipe, V.; Costa, P.; Barroso, J. Location based services for the blind supported by RFID technology. Procedia Comput. Sci. 2014, 27, 2–8. [CrossRef]
70. Tsirmpas, C.; Rompas, A.; Fokou, O.; Koutsouris, D. An indoor navigation system for visually impaired and elderly people based on Radio Frequency Identification (RFID). Inf. Sci. 2015, 320, 288–305. [CrossRef]
71. What's an NFC Tag? Available online: https://electronics.howstuffworks.com/nfc-tag.htm (accessed on 2 December 2018).
72. Ozdenizci, B.; Coskun, V.; Ok, K. NFC internal: An indoor navigation system. Sensors 2015, 15, 7571–7595. [CrossRef]
73. Ozdenizci, B.; Ok, K.; Coskun, V.; Aydin, M.N. Development of an indoor navigation system using NFC technology. In Proceedings of the International Conference on Information and Computing (ICIC), Phuket Island, Thailand, 25–27 April 2011; pp. 11–14.
74. Jafri, R.; Ali, S.A.; Arabnia, H.R. Computer Vision-based Object Recognition for the Visually Impaired Using Visual Tags. In Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 22–25 July 2013; pp. 400–406.
75. Zhang, H.; Zhang, C.; Yang, W.; Chen, C.Y. Localization and navigation using QR code for mobile robot in indoor environment. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; pp. 2501–2506.
76. Zhao, Y.; Szpiro, S.; Knighten, J.; Azenkot, S. CueSee: Exploring visual cues for people with low vision to facilitate a visual search task. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 73–84.
77. Kutiyanawala, A.; Kulyukin, V. Eyes-free barcode localization and decoding for visually impaired mobile phone users. In Proceedings of the International Conference on Image Processing, Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 12–15 July 2010; pp. 130–135.
78. Lee, S.J.; Lim, J.; Tewolde, G.; Kwon, J. Autonomous tour guide robot by using ultrasonic range sensors and QR code recognition in indoor environment. In Proceedings of the IEEE International Conference on Electro/Information Technology, Milwaukee, WI, USA, 5–7 June 2014; pp. 410–415.
79. Al-Khalifa, S.; Al-Razgan, M. Ebsar: Indoor guidance for the visually impaired. Comput. Electr. Eng. 2016, 54, 26–39. [CrossRef]
80. Yi, C.; Tian, Y.; Arditi, A. Portable camera-based assistive text and product label reading from hand-held objects for blind persons. IEEE/ASME Trans. Mechatron. 2014, 19, 808–817. [CrossRef]
81. Mekhalfi, M.L.; Melgani, F.; Zeggada, A.; De Natale, F.G.; Salem, M.A.M.; Khamis, A. Recovering the sight to blind people in indoor environments with smart technologies. Expert Syst. Appl. 2016, 46, 129–138. [CrossRef]
82. Takizawa, H.; Yamaguchi, S.; Aoyagi, M.; Ezaki, N.; Mizuno, S. Kinect cane: An assistive system for the visually impaired based on the concept of object recognition aid. Pers. Ubiquitous Comput. 2015, 19, 955–965. [CrossRef]
83. Advani, S.; Zientara, P.; Shukla, N.; Okafor, I.; Irick, K.; Sampson, J.; Datta, S.; Narayanan, V. A Multitask Grocery Assist System for the Visually Impaired: Smart glasses, gloves, and shopping carts provide auditory and tactile feedback. IEEE Consum. Electron. Mag. 2017, 6, 73–81. [CrossRef]
84. Kumar, R.; Meher, S. A Novel method for visually impaired using object recognition. In Proceedings of the International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 2–4 April 2015; pp. 772–776.
85. Aladren, A.; López-Nicolás, G.; Puig, L.; Guerrero, J.J. Navigation Assistance for the Visually Impaired Using RGB-D Sensor with Range Expansion. IEEE Syst. J. 2016, 10, 922–932. [CrossRef]
86. Jafri, R.; Campos, R.L.; Ali, S.A.; Arabnia, H.R. Visual and Infrared Sensor Data-Based Obstacle Detection for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine. IEEE Access 2017, 6, 443–454. [CrossRef]
87. Hoang, V.N.; Nguyen, T.H.; Le, T.L.; Tran, T.H.; Vuong, T.P.; Vuillerme, N. Obstacle detection and warning system for visually impaired people based on electrode matrix and mobile Kinect. Vietnam J. Comput. Sci. 2017, 4, 71–83. [CrossRef]
88. López-de-Ipiña, D.; Lorido, T.; López, U. Indoor navigation and product recognition for blind people assisted shopping. In Proceedings of the International Workshop on Ambient Assisted Living (IWAAL), Torremolinos-Málaga, Spain, 8–10 June 2011; pp. 33–40.
89. Yuan, C.W.; Hanrahan, B.V.; Lee, S.; Carroll, J.M. Designing Equal Participation in Informal Learning for People with Visual Impairment. Interact. Des. Archit. J. 2015, 27, 93–106.
90. Farid, Z.; Nordin, R.; Ismail, M. Recent advances in wireless indoor localization techniques and system. J. Comput. Netw. Commun. 2013, 2013, 1–12. [CrossRef]
91. Giudice, N.A.; Legge, G.E. Blind Navigation and the Role of Technology. In The Engineering Handbook of Smart Technology for Aging, Disability, and Independence; Helal, A., Mokhtari, M., Abdulrazak, B., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2008; pp. 479–500.
92. Lanigan, P.E.; Paulos, A.M.; Williams, A.W.; Rossi, D.; Narasimhan, P. Trinetra: Assistive technologies for grocery shopping for the blind. In Proceedings of the International Symposium on Wearable Computers (ISWC), Montreux, Switzerland, 11–14 October 2006; pp. 147–148.
93. Billah, S.M.; Ashok, V.; Ramakrishnan, I.V. Write-it-Yourself with the Aid of Smartwatches: A Wizard-of-Oz Experiment with Blind People. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, Tokyo, Japan, 7–11 March 2018; pp. 427–431.
94. Velmurugan, D.; Sonam, M.S.; Umamaheswari, S.; Parthasarathy, S.; Arun, K.R. A smart reader for visually impaired people using Raspberry Pi. Int. J. Eng. Sci. Comput. (IJESC) 2016, 6, 2997–3001.
95. Web Content Accessibility Guidelines (WCAG) Overview | Web Accessibility Initiative (WAI) | W3C. Available online: https://www.w3.org/WAI/standards-guidelines/wcag/ (accessed on 1 February 2019).
96. About the Section 508 Standards—United States Access Board. Available online: https://www.access-board.gov/guidelines-and-standards/communications-and-it/about-the-section-508-standards (accessed on 1 February 2019).
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
