Abstract: The unprecedented success of speech recognition methods has stimulated the wide usage of intelligent audio systems, which provides new attack opportunities for stealing user privacy by eavesdropping on loudspeakers. Effective eavesdropping methods either employ a high-speed camera, which relies on line-of-sight (LOS) to measure object vibrations, or utilize a WiFi MIMO antenna array, which requires a quiet environment. In this paper, we explore the possibility of eavesdropping on the loudspeaker based on COTS RFID tags, which are prevalently deployed in many corners of our daily lives. We propose Tag-Bug, which focuses on the human voice with its complex frequency bands and performs thru-the-wall eavesdropping on the loudspeaker by capturing sub-mm level vibrations. Tag-Bug extracts sound characteristics through two means: (1) the vibration effect, where a tag directly vibrates due to sounds; (2) the reflection effect, where a tag does not vibrate but senses the signals reflected from nearby vibrating objects. To amplify the influence of vibration signals, we design a new signal feature, referred to as the Modulated Signal Difference (MSD), to reconstruct the sound from RF-signals. To improve the quality of the reconstructed sound for human voice recognition, we apply a Conditional Generative Adversarial Network (CGAN) to recover the full frequency band from the partial frequency band of the reconstructed sound. Extensive experiments on the USRP platform show that Tag-Bug can successfully capture a monotone sound when the loudness is greater than 60 dB. Tag-Bug can efficiently recognize spoken numbers with 95.3%, 85.3% and 87.5% precision in free-space, thru-the-brick-wall and thru-the-insulating-glass eavesdropping, respectively. Tag-Bug can also accurately recognize letters with 87% precision in free-space eavesdropping.
Chuyu Wang, Lei Xie, Yuancan Lin, Wei Wang, Yingying Chen, Yanling Bu, Kai Zhang, and Sanglu Lu
ACM IMWUT/UbiComp, 2022
Abstract: Nowadays, the growing demand for 3D human-computer interaction (HCI) has brought about a number of novel approaches, which achieve HCI by tracking the motion of different devices, including both translation and rotation. In this paper, we propose to use a spinning linearly polarized antenna to track the 3D motion of a specified object attached with a passive RFID tag array. Different from fixed-antenna-based solutions, which suffer from unavoidable signal interference at specific positions/orientations and achieve good performance only under favorable sensing conditions, our spinning-antenna-based solution seeks to sufficiently suppress the ambient signal interference and extract the most distinctive features, by actively spinning the antenna to create the optimal sensing condition. Moreover, we leverage the matching/mismatching property of the linearly polarized antenna, i.e., in comparison to a circularly polarized antenna, the phase variation around the matching direction is more stable, and the RSSI variation in the mismatching direction is more distinctive, so we are able to find more distinctive features to estimate the position and the orientation. We build a model to investigate the RSSI and phase variation of an RFID tag along with the spinning of the antenna, and further extend the model from a single RFID tag to an RFID tag array. Furthermore, we design corresponding solutions to extract the distinctive RSSI and phase values from the RF-signal variation. Our solution tracks the translation of the tag array based on the phase features, and the rotation of the tag array based on the RSSI variation. The experimental results show that our system can achieve an average error of 13.6 cm in translation tracking, and an average error of 8.3° in rotation tracking in 3D space.
Chuyu Wang, Lei Xie, Keyan Zhang, Wei Wang, Yanling Bu, and Sanglu Lu
IEEE INFOCOM, 2019
Probing into the Physical Layer: Moving Tag Detection for Large-Scale RFID Systems
Abstract: Logistics monitoring is a fundamental application that utilizes RFID systems to manage numerous tagged objects. Due to the frequent rearrangement of tagged objects, a fast RFID-based tracking approach is highly desired for accurate logistics distribution. However, traditional RFID systems usually take tens of seconds to interrogate hundreds of RFID tags, not to mention the additional time needed to locate all the tags, which severely hinders timely tracking. To address this issue, we reduce the problem domain by first distinguishing the motion status of the tagged objects, i.e., "stationary" or "moving", and then tracking the moving objects with state-of-the-art localization schemes, which significantly reduces the effort of tracking all the objects. Toward this end, we propose a moving tag detection mechanism, which achieves time efficiency by exploiting the otherwise useless collision signals in RFID systems. In particular, we extract two kinds of physical-layer features (namely, the phase profile and the backscatter link frequency) from the collision signals received by the USRP to distinguish tags at different positions. We further develop the Graph Matching (GM) method and the Coherent Phase Variance (CPV) method to detect the moving tagged objects. Experimental results show that our approach can accurately detect the moving objects while reducing the inventory time by 80% compared with state-of-the-art solutions.
Chuyu Wang, Lei Xie, Wei Wang, Yingying Chen, Tao Xue, and Sanglu Lu
IEEE Transactions on Mobile Computing
Abstract: As an important indicator of autonomic regulation of circulatory function, Heart Rate Variability (HRV) is widely used for general health evaluation. Apart from using dedicated devices (e.g., ECG) in a wired manner, current methods pursue ubiquitous sensing by either using wearable devices, which suffer from low accuracy and limited battery life, or applying wireless techniques (e.g., FMCW), which usually require dedicated devices (e.g., USRP) for the measurement. To address these issues, we present RF-ECG based on Commercial-Off-The-Shelf (COTS) RFID, a wireless approach to sense the human heartbeat through an RFID tag array attached to the chest area of the clothes. In particular, as the RFID reader continuously interrogates the tag array, two main effects are captured: the reflection effect, representing the RF-signal reflected from the heart movement due to the heartbeat; and the moving effect, representing the tag movement caused by chest movement due to respiration. To extract the reflection signal from the noisy RF-signals, we develop a mechanism to capture the RF-signal variation of the tag array caused by the moving effect, aiming to eliminate the signals related to respiration. To estimate the HRV from the reflection signal, we propose a signal reflection model to depict the relationship between the RF-signal variation of the tag array and the reflection effect associated with the heartbeat. A fusion technique is developed to combine multiple reflection signals from the tag array for accurate estimation of HRV. Experiments with 15 volunteers show that RF-ECG can achieve a median error of 3% in the Inter-Beat Interval (IBI), which is comparable to existing wired techniques.
Chuyu Wang, Lei Xie, Wei Wang, Yingying Chen, Yanling Bu, and Sanglu Lu
ACM IMWUT/UbiComp, 2018
Abstract: The rising popularity of electronic devices with gesture recognition capabilities makes gesture-based human-computer interaction more attractive. Along this direction, tracking the body movement in 3D space is desirable to further facilitate behavior recognition in various scenarios. Existing solutions attempt to track the body movement based on computer vision or wearable sensors, but they either depend on lighting conditions or incur high energy consumption. This paper presents RF-Kinect, a training-free system which tracks the body movement in 3D space by analyzing the phase information of wearable RFID tags attached to the limbs. Instead of locating each tag independently in 3D space to recover the body postures, RF-Kinect treats each limb as a whole, and estimates the corresponding orientations by extracting two types of phase features: the Phase Difference between Tags (PDT) on the same part of a limb and the Phase Difference between Antennas (PDA) of the same tag. It then reconstructs the body posture based on the determined limb orientations, grounded on the human body geometric model, and exploits a Kalman filter to smooth the body movement results, which form the temporal sequence of body postures. Real-world experiments with 5 volunteers show that RF-Kinect achieves an 8.7° angle error in determining the orientation of limbs and a 4.4 cm relative position error in the position estimation of joints, compared with a Kinect 2.0 testbed.
Abstract: Human–computer interactions have moved from the conventional approaches of entering inputs via keyboards/touchpads to the brand-new approaches of performing interactions in the air. In this paper, we propose RF-glove, a system that recognizes concurrent micromovements of multiple fingers using RF signals, so as to realize the vision of "multi-touch in the air." It uses a commercial-off-the-shelf (COTS) RFID reader with three antennas and five COTS tags attached to the five fingers of a glove, one tag per finger. While a user performs finger micromovements, we let the RFID reader continuously interrogate these tags and obtain the backscattered RF signals from each tag. For each antenna–tag pair, the reader obtains a sequence of RF phase values, called a phase profile, from the tag's responses over time. To trade off between accuracy and robustness in terms of matching resolution, we propose a two-phase approach, including coarse-grained filtering and fine-grained matching. To tackle the variation of template phase profiles at different positions, we propose a phase-model-based solution to reconstruct the template phase profiles based on the exact locations. Experimental results show that we achieve an average accuracy of 92.1% under various moving speeds, orientation deviations, and so on.
Lei Xie, Chuyu Wang, Alex X. Liu, Jianqiang Sun, and Sanglu Lu
ACM/IEEE Transactions on Networking, vol. 26, no. 1, pp. 231-244, 2018
Abstract: Pen-based handwriting has become one of the major human-computer interaction methods. Traditional approaches either require writing on a specific supporting device like a touch screen, or limit the use of the pen to pure rotation or translation. In this paper, we propose Handwriting-Assistant, which captures the free handwriting of ordinary pens on regular planes with mm-level accuracy. By attaching an inertial measurement unit (IMU) to the pen tail, we can infer the handwriting on a notebook, blackboard or other planes. Particularly, we build a generalized writing model to comprehensively correlate the rotation and translation of the IMU with the tip displacement, so that we can infer the tip trace accurately. Further, to display the effective handwriting during the continuous writing process, we leverage a principal component analysis (PCA) based method to detect the candidate writing plane, and then exploit the distance variation of each segment relative to the plane to distinguish on-plane strokes. Moreover, our solution can be applied to other rigid bodies, enabling smart devices embedded with IMUs to act as handwriting tools. Experimental results show that our approach can capture the handwriting with high accuracy, e.g., the average tracking error is 1.84 mm for letters about 2 cm × 1 cm in size, and the character recognition rate for recovered single letters reaches 98.2% against the ground truth recorded by a touch screen.
Yanling Bu, Lei Xie, Yafeng Yin, Chuyu Wang, Jingyi Ning, Jiannong Cao and Sanglu Lu
ACM IMWUT/UbiComp, 2022
Abstract: As an important indicator for infusion monitoring in clinical treatment, the drip rate is expected to be monitored in an accurate and real-time manner. However, state-of-the-art drip rate monitoring schemes either suffer from high maintenance overhead or incur high hardware cost. In this paper, we propose DropMonitor, an RFID-based approach that performs mm-level sensing for infusion drip rate monitoring. By attaching a pair of batteryless RFID tags to the drip chamber, we can estimate the drip rate by capturing the RF-signals reflected from the liquid surface as it vibrates due to the falling droplets. Particularly, we use the sensing tag to perceive the liquid surface vibration in the drip chamber and further derive the drip rate for infusion monitoring. Moreover, to sufficiently mitigate the multi-path interference from surrounding human activities, we use the reference tag to perceive the multi-path signals from the indoor environment. By computing the difference of the RF-signals from the tag pair, we cancel the multi-path interference and extract the drip-rate-related signals. We have implemented a prototype system and evaluated its performance in real applications. The experimental results show that DropMonitor can accurately estimate the infusion drip rate, with an average relative error below 1% in conventional cases. Considering the required sampling rate of each tag, DropMonitor is able to monitor the drip rate of over a dozen infusion bottles/bags in parallel with one COTS RFID system.
Yuancan Lin, Lei Xie, Chuyu Wang, Yanling Bu, and Sanglu Lu
ACM IMWUT/UbiComp, 2021
Mag-Barcode: Magnet Barcode Scanning for Indoor Pedestrian Tracking
Zefan Ge, Lei Xie, Shuangquan Wang, Xinran Lu, Chuyu Wang, Gang Zhou, and Sanglu Lu
IEEE IWQoS, 2020
RF-Rhythm: Secure and Usable Two-Factor RFID Authentication
Abstract: Passive RFID technology is widely used in user authentication and access control. We propose RF-Rhythm, a secure and usable two-factor RFID authentication system with strong resilience to lost/stolen/cloned RFID cards. In RF-Rhythm, each legitimate user performs a sequence of taps on his/her RFID card according to a self-chosen secret melody. Such rhythmic taps can induce phase changes in the backscattered signals, which the RFID reader can detect to recover the user’s tapping rhythm. In addition to verifying the RFID card’s identification information as usual, the backend server compares the extracted tapping rhythm with what it acquires in the user enrollment phase. The user passes authentication checks if and only if both verifications succeed. We also propose a novel phase-hopping protocol in which the RFID reader emits Continuous Wave (CW) with random phases for extracting the user’s secret tapping rhythm. Our protocol can prevent a capable adversary from extracting and then replaying a legitimate tapping rhythm from sniffed RFID signals. Comprehensive user experiments confirm the high security and usability of RF-Rhythm with false-positive and false-negative rates close to zero.
Jiawei Li, Chuyu Wang, Ang Li, Dianqi Han, Yan Zhang, Jinhang Zuo, Rui Zhang, Lei Xie, and Yanchao Zhang.
IEEE INFOCOM, 2020
Abstract: With computer vision-based technologies, current augmented reality (AR) systems can effectively recognize multiple objects with different visual characteristics. However, only limited degrees of distinction can be offered among different objects with similar natural features, and inherent information about these objects cannot be effectively extracted. In this paper, we propose TaggedAR, an RFID-based approach to assist the recognition of multiple tagged objects in AR systems, by deploying additional RFID antennas on the COTS depth camera. By sufficiently exploring the correlations between the depth of field and the received RF-signal, we propose a rotate scanning-based scheme to distinguish multiple tagged objects in the stationary situation, and a continuous scanning-based scheme to distinguish multiple tagged human subjects in the mobile situation. By pairing the tags with the objects according to the correlations between the depth of field and the RF-signals, we can accurately identify and distinguish multiple tagged objects to realize the vision of "tell me what I see" in the AR system. We have implemented a prototype system to evaluate the actual performance with case studies in real-world environments. The experimental results show that our solution achieves an average match ratio of 91% in distinguishing up to dozens of tagged objects with a high deployment density.
Lei Xie, Chuyu Wang, Yanling Bu, Jianqiang Sun, Qingliang Cai, Jie Wu, and Sanglu Lu
IEEE TMC, 2018
Abstract: Recently, gesture recognition has gained considerable attention in emerging applications (e.g., AR/VR systems) to provide a better user experience for human-computer interaction. Existing solutions usually recognize gestures based on wearable sensors or specialized signals (e.g., WiFi, acoustic and visible light), but they either incur high energy consumption or are susceptible to the ambient environment, which prevents them from efficiently sensing fine-grained finger movements. In this paper, we present RF-finger, a device-free system based on Commercial-Off-The-Shelf (COTS) RFID, which leverages a tag array on a letter-size paper to sense the fine-grained finger movements performed in front of the paper. Particularly, we focus on two sensing modes: finger tracking, which recovers the moving trace of finger writings, and multi-touch gesture recognition, which identifies multi-touch gestures involving multiple fingers. Specifically, we build a theoretical model to extract the fine-grained reflection feature from the raw RF-signal, which describes the finger's influence on the tag array at cm-level resolution. For finger tracking, we leverage K-Nearest Neighbors (KNN) to pinpoint the finger position based on the fine-grained reflection features, and obtain a smoothed trace via a Kalman filter. Additionally, we construct the reflection image of each multi-touch gesture from the reflection features by regarding the multiple fingers as a whole. Finally, we use a Convolutional Neural Network (CNN) to identify the multi-touch gestures based on these images. Extensive experiments validate that RF-finger can achieve as high as 88% and 92% accuracy for finger tracking and multi-touch gesture recognition, respectively.
Chuyu Wang, Jian Liu, Yingying Chen, Hongbo Liu, Lei Xie, Wei Wang, Bingbing He, and Sanglu Lu
IEEE INFOCOM, 2018
Abstract: Nowadays, the demand for novel approaches to 2D human-computer interaction has enabled the emergence of a number of intelligent devices, such as the Microsoft Surface Dial, which realizes 2D interactions with the computer via simple clicks and rotations. In this paper, we propose RF-Dial, a battery-free solution for 2D human-computer interaction based on RFID tag arrays. We attach an array of RFID tags to the surface of an object, and continuously track the translation and rotation of the tagged object with an orthogonally deployed RFID antenna pair. In this way, we are able to transform an ordinary object like a board eraser into an intelligent HCI device. Based on the RF-signals from the tag array, we build a geometric model to depict the relationship between the phase variations of the tag array and the rigid transformation of the tagged object, including translation and rotation. By referring to the fixed topology of the tag array, we are able to accurately extract the translation and rotation of the tagged object during the moving process. Moreover, considering the variation of the phase contours of the RF-signals at different positions, we divide the overall scanning area into a linear region and a non-linear region according to the relationship between the phase variation and the tag movement, and propose tracking solutions for the two regions, respectively. We implemented a prototype system and evaluated the performance of RF-Dial in a real environment. The experiments show that RF-Dial achieves an average accuracy of 0.6 cm in translation tracking, and an average accuracy of 1.9° in rotation tracking.
Yanling Bu, Lei Xie, Yinyin Gong, Chuyu Wang, Lei Yang, Jia Liu, and Sanglu Lu
IEEE INFOCOM, 2018
Abstract: Nowadays, novel approaches to 3D human-computer interaction have enabled manipulation in 3D space rather than in the 2D plane. For example, the Microsoft Surface Pen leverages embedded sensors to sense 3D manipulations, such as inclining the pen to get bolder handwriting. In this paper, we propose RF-Brush, a battery-free and lightweight solution for 3D human-computer interaction based on RFID, which simply attaches a linear RFID tag array onto a linearly shaped object like a brush. RF-Brush senses the 3D orientation and 2D movement of the linearly shaped object while the human subject is drawing with it in 3D space. Here, the 3D orientation refers to the relative orientation of the object to the operating plane, whereas the 2D movement refers to the moving trace in the 2D operating plane. In this way, we are able to transform an ordinary linearly shaped object like a brush or pen into an intelligent HCI device. Particularly, we build two geometric models to depict the relationship between the RF-signal and the 3D orientation as well as the 2D movement, respectively. Based on the geometric models, we propose a linear tag array-based HCI solution, implemented a prototype system, and evaluated its performance in a real environment. The experiments show that RF-Brush achieves average errors of 5.7° and 8.6° in the elevation and azimuthal angles, respectively, and average errors of 3.8 cm and 4.2 cm in movement tracking along the X-axis and Y-axis, respectively. Moreover, RF-Brush achieves 89% letter recognition accuracy.
Yinyin Gong, Lei Xie, Chuyu Wang, Yanling Bu, and Sanglu Lu
IEEE MASS, 2018
Abstract: While mobile users enjoy anytime-anywhere Internet access by connecting their mobile devices through Wi-Fi services, the increasing deployment of access points (APs) has raised a number of privacy concerns. This paper explores the potential of smartphone privacy leakage caused by surrounding APs. In particular, we study to what extent users' personal information, such as social relationships and demographics, could be revealed by leveraging simple signal information from APs without examining the Wi-Fi traffic. Our approach utilizes users' activities at daily visited places, derived from the surrounding APs, to infer users' social interactions and individual behaviors. Furthermore, we develop two new mechanisms: the Closeness-based Social Relationships Inference algorithm captures how closely people interact with each other by evaluating their physical closeness and derives fine-grained social relationships, whereas the Behavior-based Demographics Inference method differentiates individual behaviors via the extracted activity features (e.g., activeness and time slots) at each daily place to reveal users' demographics. Extensive experiments conducted with 21 participants' real daily lives, covering 257 different places in three cities over a 6-month period, demonstrate that the simple signal information from surrounding APs has a high potential to reveal people's social relationships and infer demographics with over 90% accuracy when using our approach.
Chen Wang, Chuyu Wang, Yingying Chen, Lei Xie, and Sanglu Lu
IEEE ICDCS, 2017
Abstract: In a number of RFID-based applications such as logistics monitoring, RFID systems are deployed to monitor a large number of RFID tags. They are usually required to track the movement of all tags in real time, since the tagged goods are moved in and out rather frequently. However, a typical cycle of tag inventory in a COTS RFID system usually takes tens of seconds to interrogate hundreds of RFID tags, which prevents the system from tracking the movement of all tags in time. One critical issue in this type of tag monitoring is to efficiently distinguish the motion status of all tags, i.e., stationary or moving. According to the motion status of different tags, state-of-the-art localization schemes can then track only the moving tags, instead of tracking all tags. In this paper, we propose a real-time approach to detect the moving tags in the monitoring area, which is a fundamental premise for tracking the movement of all tags. We achieve time efficiency by decoding collisions at the physical layer. Instead of using the EPC ID, which cannot be decoded in collision slots, we extract two kinds of physical-layer features of RFID tags, i.e., the phase profile and the backscatter link frequency, to distinguish among tags at different positions. By resolving the two physical-layer features from the tag collisions, we are able to derive the motion status of multiple tags simultaneously, greatly improving the time efficiency. Experimental results show that our solution can accurately detect the moving tags while reducing the inventory time by 80% compared with state-of-the-art solutions.
Chuyu Wang, Lei Xie, Wei Wang, and Sanglu Lu
IEEE INFOCOM, 2016
Abstract: Gait rehabilitation is a common method of postoperative recovery after a user sustains an injury or disability. However, traditional gait rehabilitation is usually performed under the supervision of rehabilitation specialists, meaning that patients cannot receive adequate care continuously. In this paper, we propose IMU-Kinect, a novel system to remotely and continuously monitor gait rehabilitation via a wearable kit. The system consists of a wearable hardware platform and a user-friendly software application. The hardware platform is composed of four Inertial Measurement Units (IMUs), which are attached to the shanks and thighs of the human body. The software application estimates the rotation and displacement of these sensors, then reconstructs the gait movements and calculates the gait parameters according to the geometric model of the human lower limbs. With the IMU-Kinect system, users undergoing gait rehabilitation just need to walk normally while wearing the IMU-Kinect kit, and the rehabilitation specialists can then analyze the status of postoperative recovery by remotely viewing animations of the users' gait movements and charts of the general gait parameters. Extensive experiments in a real environment show that our system can efficiently track the gait movements with 9% rotation and displacement error.
Peicheng Yang, Lei Xie, Chuyu Wang, Sanglu Lu
ACM UbiComp, 2019
Abstract: Currently, conventional indoor localization schemes mainly leverage WiFi-based or Bluetooth-based techniques to locate users in the indoor environment. These schemes require deploying infrastructure, such as WiFi APs and Bluetooth beacons, in advance to assist indoor localization, which prevents them from scaling to situations without such infrastructure. In this paper, we propose FootStep-Tracker, an anchor-free indoor localization scheme purely based on sensing the user's footsteps. By embedding a tiny SensorTag into the user's shoes, FootStep-Tracker is able to accurately perceive the user's moving trace, including the moving direction and distance, by leveraging the accelerometers and gyroscopes. Furthermore, by detecting the user's activities, such as ascending/descending stairs and taking an elevator, FootStep-Tracker can effectively correlate them with specified positions such as stairs and elevators, and further determine the exact moving traces in the indoor map by leveraging the spatial constraints in the map. Realistic experimental results show that FootStep-Tracker is able to achieve an average localization accuracy of 1 m for indoor localization, without any infrastructure having been deployed in advance.
Chang Liu, Lei Xie, Chuyu Wang, Jie Wu, Sanglu Lu
IEEE MASS, 2016
Abstract: Nowadays, people usually depend on augmented reality (AR) systems to obtain an augmented view of a real-world environment. With the help of advanced AR technology (e.g., object recognition), users can effectively distinguish multiple objects of different types. However, these techniques can only offer limited degrees of distinction among different objects and cannot provide more inherent information about these objects. In this paper, we leverage RFID technology to further label different objects with RFID tags. We deploy additional RFID antennas on the COTS depth camera and propose a continuous scanning-based scheme to scan the objects, i.e., the system continuously rotates and samples the depth of field and the RF-signals from these tagged objects. In this way, by pairing the tags with the objects according to the correlations between the depth of field and the RF-signals, we can accurately identify and distinguish multiple tagged objects to realize the vision of "tell me what I see" from the augmented reality system. For example, in front of multiple unknown people wearing RFID-tagged badges at public events, our system can identify these people and further show their inherent information from the RFID tags, such as their names, jobs, titles, etc. We have implemented a prototype system to evaluate the actual performance. The experimental results show that our solution achieves an average match ratio of 91% in distinguishing up to dozens of tagged objects with a high deployment density.
Lei Xie, Jianqiang Sun, Qingliang Cai, Chuyu Wang, Jie Wu, and Sanglu Lu
ACM UbiComp, 2016
Abstract: In this paper, we show the first comprehensive experimental study on mobile RFID reading performance based on a relatively large number of tags. By making a number of observations regarding the tag reading performance, we build a model to depict how various parameters affect the reading performance. Through our model, we have designed very efficient algorithms to maximize the time-efficiency and energy-efficiency by adjusting the reader’s power and moving speed. Our experiments show that our algorithms can reduce the total scanning time by 50 percent and the total energy consumption by 83 percent compared to the prior solutions.
Lei Xie, Qun Li, Chuyu Wang, Xi Chen, and Sanglu Lu
IEEE TMC, 2015
Abstract: As a supporting technology for most pervasive applications, indoor localization and navigation has attracted extensive attention in recent years. Conventional solutions mainly leverage techniques like WiFi and cellular networks to effectively locate the user for indoor localization and navigation. In this paper, we investigate the problem of indoor navigation using an RFID-based delay tolerant network. Different from previous work, we aim to efficiently locate and navigate to a specified mobile user who is continuously moving within the indoor environment. As low-cost RFID tags are widely deployed inside the indoor environment and act as landmarks, mobile users can actively interrogate the surrounding tags with devices like smartphones and leave messages or traces on the tags. These messages or traces can then be carried and forwarded to more tags by other mobile users. In this way, the RFID-based infrastructure forms a delay tolerant network. Using crowd-sourcing in this RFID-based delay tolerant network, we propose a framework, namely CrowdSensing, to schedule the tasks and manage the resources in the network. We further propose a navigation algorithm to locate and navigate to the moving target. We verify the performance of the proposed framework and navigation algorithm on a mobility model built on a real-world human trace set. Experimental results show that our solution can efficiently reduce the average searching time for indoor navigation.
Hao Ji, Lei Xie, Chuyu Wang, Yafeng Yin and Sanglu Lu
Abstract: In real life, looking for a misplaced object like a key in a room is often like searching for a needle in a haystack. In this paper, we propose a novel solution to accurately locate specified objects attached with RFID tags in indoor environments, by efficiently leveraging RFID technology. By making a number of novel observations regarding the tag reading performance, we obtain several regularities that depict how various parameters, including the reader's power and the antenna's scanning angle, affect the reading performance. Based on these regularities, we have designed efficient algorithms to maximize the accuracy and time efficiency of localization. Without the help of any anchor nodes, our solution can rapidly navigate to the target object from a specific initial position. We have implemented a system prototype to evaluate the actual performance in realistic applications. The realistic experimental results show that our solution can restrict the average localization error to within 49 cm and reduce the total navigation time by 33% compared to the baseline solutions.
Chuyu Wang, Lei Xie, and Sanglu Lu
IEEE WCNC, 2014
Abstract: In many pervasive applications, such as intelligent bookshelves in libraries, it is essential to accurately locate the items to provide location-based services; e.g., the average localization error should be smaller than 50 cm and the localization delay should be within several seconds. Conventional indoor-localization schemes cannot provide such accurate localization results. In this paper, we design an adaptive, accurate indoor-localization scheme using passive RFID systems. We propose two adaptive solutions, i.e., adaptive power stepping and adaptive calibration, which adaptively adjust the critical parameters and leverage feedback to improve the localization accuracy. The realistic experimental results indicate that our adaptive localization scheme can achieve an accuracy of 31 cm within 2.6 seconds on average.
Xi Chen, Lei Xie, Chuyu Wang, Sanglu Lu
IEEE ICPADS, 2013