SURF-IoT Fellows Present at Symposium
Irvine, September 1, 2017 - Summer Undergraduate Research Fellows in the Internet of Things (SURF-IoT) presented their projects Thursday, Aug. 31, at the SURF-IoT Symposium in the CALIT2 Auditorium.
Eight students participated in the program. Here is a brief overview of the projects and participants.
Iris: A Social Media Platform for Healthy Living
Student: Emma Anderson
Mentor: Dr. Mark Bachman
Orange County is home to 4.6 million people age 65 and older. This number is set to double by 2040 (Orange County Healthy Aging Initiative, 2016). Iris is a social media platform that promotes healthy living among the elderly Orange County community. The Iris website offers games, health services, news, and local events that help users stay healthy and active in their communities. For the SURF-IoT project, we wanted to take Iris beyond a website and integrate the platform with external devices like Amazon Alexa. Users now have the option to use Iris services through their Amazon Echo devices. Features of the Iris Alexa app include reading recent news articles, giving users daily reminders, and announcing upcoming events. Our progress with Amazon Alexa paves the way for connecting Iris to even more devices, including health monitoring devices and other wearables.
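An Alexa skill like the one described receives a JSON request and returns a spoken response. The sketch below shows the general shape of such a handler; the intent names and reply texts are illustrative assumptions, not the actual Iris skill's interaction model.

```python
def handle_iris_request(event):
    """Dispatch a simplified Alexa skill request to an Iris feature.

    Intent names ("GetNewsIntent", "GetEventsIntent") are hypothetical
    placeholders, not the real Iris skill's interaction model.
    """
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        text = "Welcome to Iris. You can ask for news, reminders, or events."
    elif request.get("type") == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "GetNewsIntent":
            text = "Here is a recent article on healthy living."
        elif intent == "GetEventsIntent":
            text = "There is a community walk this Saturday."
        else:
            text = "Sorry, I did not understand that."
    else:
        text = "Goodbye."
    # Standard Alexa skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }
```

The same dispatch pattern extends naturally to new devices and new intents, which is what makes the platform easy to grow beyond the Echo.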
Improving Texera Interface for Powerful IoT Data Analytics
Student: Wei Han (Henry) Chen
Mentor: Professor Chen Li
We live in an era where much information is stored as text: popular social media sites such as Facebook and Instagram produce huge amounts of it every second. Analyzing this information is critical for researchers to gain insights and make better decisions. However, most existing text-centric applications require significant programming knowledge and effort to operate. Our team therefore began developing Texera, an open-source data-management system with an easy-to-use graphical user interface that lets users manipulate and prepare text data without any programming effort. While Texera's functionality is powerful, polishing the user interface is also critical to creating a better experience for both IT and non-IT experts. Our initial prototype used a third-party library called jQuery Flowchart, but its limited functionality constrained the interface design. In this research, we modified the jQuery Flowchart framework to improve the appearance of each operator and added new features to the library, such as right-click options and per-operator detail views. We also developed a way to present results in an Excel-style table, making the interface more user-friendly and allowing users to download the generated Excel file for further analysis. As the Texera frontend continues to be polished and more features are introduced, people with no programming background will soon be able to perform deep data analysis easily and efficiently.
Using PyMongo for PET’s Data Storage
Student: Ahmed Gorashi
Mentor: Dr. Sergio Gago
Exercise is an important factor in sustaining and strengthening a person's health, yet it is often neglected for lack of time and motivation. PET (Personal Embodied Trainer) is an interactive, user-centered application that guides users through physical exercises that can be performed at home, tracks progress across every session, and provides constant feedback for improving form. It differs from traditional workout applications by using motion-sensor data to provide accurate real-time feedback, along with visual and verbal guidance, enjoyment, and motivation. By using PyMongo (the Python driver for the MongoDB database) to build the application's backend server, users can now easily access and view their account and exercise-log information whenever they use the application. Storing the user's performance on prior exercises allows PET to offer beneficial insight on how to optimally improve future workouts. The data storage that PyMongo supplies will not only keep track of the user's past actions; the application will also use the stored data to display graphs, infographics, and charts that holistically showcase the user's performance. Overall, the PyMongo backend server will encourage greater interactivity and motivate continued use of the application.
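A backend like this boils down to building session documents and inserting and querying them through PyMongo. The sketch below shows that shape; the database, collection, and field names are illustrative assumptions, and the connection is left in comments so the helpers only presume a PyMongo-like collection object.

```python
from datetime import datetime, timezone

# In the real application you would connect with PyMongo, e.g.:
#   from pymongo import MongoClient
#   col = MongoClient("mongodb://localhost:27017")["pet"]["sessions"]
# All names below ("pet", "sessions", "user_id", ...) are hypothetical.

def make_session_doc(user_id, exercise, reps, form_score):
    """Build one exercise-session document for storage."""
    return {
        "user_id": user_id,
        "exercise": exercise,
        "reps": reps,
        "form_score": form_score,  # e.g. 0-100 feedback from motion sensors
        "timestamp": datetime.now(timezone.utc),
    }

def log_session(col, doc):
    """Insert a session document (PyMongo's insert_one)."""
    return col.insert_one(doc)

def recent_sessions(col, user_id, limit=10):
    """Fetch a user's latest sessions, newest first, for the progress views."""
    return list(col.find({"user_id": user_id})
                   .sort("timestamp", -1)
                   .limit(limit))
```

Documents returned by `recent_sessions` are plain dictionaries, which makes them easy to feed into the charts and infographics the application displays.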
Voice Assistant Technology for Stroke Rehabilitation
Student: John Janecek
Mentors: Yu Chen, Yunan Chen
Stroke survivors often suffer a variety of physical impairments, including muscle weakness, speech difficulties, and loss of coordination. Technology that assists rehabilitation can help survivors regain physical movement and overcome impairments in everyday life; it can also ease the burden on caretakers and keep survivors more connected to family and friends. This project builds and assesses voice-driven technology, based on Amazon's Echo, to assist stroke survivors in the home setting. We are using the Echo to help stroke survivors perform tasks that would otherwise be restricted by physical impairments. We built prototypes for physical therapy, social networking, and therapist-patient communication. After feedback from our mentors and the medical school, we moved toward an Echo application that lets stroke survivors connect with each other, exchange messages, and post messages to the world via Twitter. The Echo already integrates with Twitter but cannot post; we are building an improved version that can. We hope this system can facilitate communication and provide an environment where stroke survivors, including those with physical impairments, can communicate, and that it gives them an easy way to socialize with family and friends through Twitter.
Indoor Mobile Localization Systems
Student: Brandon Metcalf
Mentors: Professor Kia Solmaz, Jianan Zhu, Seyyed Ahmad Razavi Majomard
Localization (positioning) systems such as GPS are effective at tracking objects outdoors on a large scale; however, tracking objects indoors, for purposes like search and rescue and firefighting, still has much room for improvement. The main challenge is designing a sensor system that works indoors when every device in the system is moving. This study aims to create new indoor-positioning algorithms that are more accurate for moving sensors. Although similar systems have been built, their size, accuracy, and lack of practicality have kept them from wide use. This study has begun making improvements through a relatively new type of sensor: ultra-wide band (UWB). Initially, robots are being used to test and implement these algorithms; once testing succeeds on the robots, the study will progress to human trials. So far the study has yielded promising results, with its localization algorithms reducing error propagation; these algorithms are what ultimately distinguish this study from other research. Although indoor positioning systems have been implemented with ultra-wide band before, they have never been fully implemented in a system that requires all of its devices to move. With the need for stationary localizing devices eliminated, these algorithms can be extended to firefighting and beyond.
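To see the kind of computation a UWB range-based system performs, here is a textbook 2-D trilateration sketch: range equations to three known anchors are linearized by subtracting the first one, leaving a small linear system. This assumes stationary anchors and noise-free ranges, so it is only a baseline; the project's algorithms for fully mobile nodes are necessarily more sophisticated.

```python
import math

def trilaterate_2d(anchors, ranges):
    """Estimate a 2-D position from ranges to three known anchor points.

    Subtracting the first range equation from the others cancels the
    quadratic terms, leaving a 2x2 linear system solved here by
    Cramer's rule. Illustrative only: it assumes fixed anchors,
    unlike the moving-device setting this study targets.
    """
    (x0, y0), d0 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], ranges[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    (a, b), (c, d) = rows
    e, f = rhs
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)
```

With noisy ranges and more than three anchors, the same linear system is typically solved in a least-squares sense, and filtering across time steps is what keeps error from propagating as the devices move.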
Healthcare IoT Monitoring System
Student: Ruoyi Nie
Mentors: Professor Nikil Dutt, Dr. Amir Rahmani, Delaram Amiri
Health monitoring is essential for accurate medical diagnosis and is commonly used in hospitals. With the development of Internet of Things (IoT) technology, a stable, customized in-home health monitoring system becomes possible using a three-tier architecture that places a gateway between the sensors and the cloud. Although general-purpose gateways are increasingly popular among researchers, smart gateways are rarely applied to the healthcare domain. Our methodology uses a fog computing architecture, in which a smart gateway performs basic data processing and acts as a bridge between the sensors and the cloud. Before in-depth analytics on the cloud, the gateway processes the data and adjusts the sampling rate after calculating the patient's Early Warning Score (EWS), reducing the sensors' power consumption while still effectively monitoring vital signs. Data transmission between the sensors and the gateway does not rely on the internet, allowing the gateway to send patient alerts based on the EWS even when Wi-Fi is unavailable. On the cloud, a visual, user-friendly interface is generated from deeper analysis of the data for medical professionals. Using fog computing in health monitoring gives patients not only the convenience of in-home monitoring but also the robustness and reliability of continuous monitoring. In the future, more automated data interpretation will be developed on the cloud, and the gateway will be able to alert both the patient and medical professionals based on the EWS.
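The gateway's core loop, scoring a vital sign into EWS points and then choosing a sampling rate from the total score, can be sketched as below. The bands are loosely modeled on published early-warning-score heart-rate ranges, but both the thresholds and the sampling intervals here are illustrative assumptions, not the project's clinical parameters.

```python
# Each band is (upper_bound_inclusive, points). Loosely NEWS-style
# heart-rate bands; the exact numbers are illustrative, not the
# clinical table used by the project.
HEART_RATE_BANDS = [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (float("inf"), 3)]

def score_vital(value, bands):
    """Return the EWS points contributed by one vital-sign reading."""
    for upper, points in bands:
        if value <= upper:
            return points
    return 0

def sampling_interval(ews_total):
    """Map the total EWS to a sensor sampling interval in seconds:
    the worse the score, the more often the gateway samples.
    Interval values are made-up placeholders."""
    if ews_total >= 5:
        return 10    # near-continuous monitoring for deteriorating patients
    if ews_total >= 1:
        return 60
    return 300       # stable patient: back off and save sensor power
```

Because this logic runs on the gateway rather than the cloud, the alerting path keeps working even when Wi-Fi is unavailable, exactly the property the fog architecture is chosen for.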
PICARD (Patient Controlled Analgesic Recording Dispenser)
Student: Chiwei Peng
Mentors: G.P. Li, Sergio Gago
Developing Projected Augmented Reality Games on an Interactive Sandpit
Student: Alexander Sidenko
Mentors: Mehdi Rahimzadeh, Mahdi Abbaspour Tehrani, and Dr. Aditi Majumder
Almost everyone has played a video game in their life. There are many genres, but more importantly, there are many ways of playing. Virtual reality (VR), one of the newest, takes the user into the game; augmented reality (AR), newer still, brings the game to the user. To bring the virtual world into ours, iGravi created an AR system of projectors and cameras with awareness and interaction. One difference between our system and others is that no screen is needed to see the virtual world: it can be viewed and interacted with by multiple users in real space. Before developing games, we ported the lab's code. It was written in MATLAB, a slow language meant for prototyping on the CPU; using C++, a faster language, and OpenGL, we ported it to the GPU. After porting, we coded the first game, a top-down platformer in which the user shapes the sandpit to create an elevated path. The path is the safe zone and the troughs are a game-over zone. The controller is a handheld projector that the user moves to project a ball, and the goal is to move the ball across the path. The ball, however, interacts with the landscape as a real ball would, which makes following the path harder since the projected ball is effectively influenced by gravity. With this first game and others to follow, we demonstrate how our system and AR work and how they can be used in the future.
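The gravity effect described above amounts to accelerating the ball down the local slope of the sand's height field. The following is a minimal CPU sketch of that idea, using finite-difference gradients and Euler integration; the function names and constants are illustrative, and the actual game runs this kind of update on the GPU with proper physics.

```python
def simulate_ball(height, x, y, steps=50, dt=0.1, g=9.8):
    """Roll a ball across a height field h(x, y).

    The ball accelerates down the local slope (-g times the gradient
    of h), with the gradient estimated by central finite differences.
    A sketch of the concept only; the real system is GPU-based.
    """
    eps = 1e-3
    vx = vy = 0.0
    for _ in range(steps):
        gx = (height(x + eps, y) - height(x - eps, y)) / (2 * eps)
        gy = (height(x, y + eps) - height(x, y - eps)) / (2 * eps)
        vx -= g * gx * dt       # accelerate downhill
        vy -= g * gy * dt
        x += vx * dt            # Euler position update
        y += vy * dt
    return x, y
```

In the installation, `height` would come from the depth camera scanning the sand, so reshaping the sandpit immediately changes how the projected ball rolls.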
Smart Pain Assessment using Internet of Things
Student: Ajan Subramanian
Mentors: Professor Nikil Dutt, Professor Amir Rahmani
Current pain assessment techniques rely on subjective measures of a patient's pain level. This creates a need for a remote health monitoring system that maps pain levels to physiological parameters like heart rate, breathing rate, and galvanic skin response. This project proposes a platform that uses both cloud and IoT technologies to create an automatic pain assessment tool for remote patient monitoring systems. During trials, patients' heart rates were measured while they were exposed to two pain stimuli: thermal and electrical. Several heart rate variability features were extracted from the trial data and used to map three distinct pain states: no pain, mild pain, and severe pain. Given these features, we used machine learning techniques such as one-versus-all classification, artificial neural networks, and support vector machines to predict a patient's pain intensity level, achieving classification accuracies upwards of 80 percent. These results could help us better understand which physiological features contribute to higher pain intensity in patients. In the future, we plan to integrate additional physiological sensors into a cloud and IoT architecture to create a fully functioning remote health-monitoring device.
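The core task, mapping heart rate variability features to one of three pain states, can be illustrated with a classifier far simpler than the project's SVMs and neural networks: a nearest-centroid rule over labeled feature vectors. All feature values below are synthetic placeholders, not trial data.

```python
import math

def centroids(samples):
    """samples: {label: [feature_vectors]} -> {label: mean feature vector}."""
    out = {}
    for label, vecs in samples.items():
        n = len(vecs)
        out[label] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
    return out

def classify(features, cents):
    """Assign the pain state whose centroid is nearest (Euclidean).

    A stand-in for the one-vs-all / SVM / neural-network models the
    project actually evaluated; it only illustrates the feature-to-
    label mapping.
    """
    return min(cents, key=lambda lbl: math.dist(features, cents[lbl]))
```

A toy run with two made-up HRV-style features per sample (mean heart rate and an HRV index) shows the mapping: train centroids per pain state, then label a new reading by its nearest centroid. The real pipeline swaps this rule for the learned models while keeping the same feature-in, label-out interface.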
The SURF-IoT program, co-sponsored by UCI’s Undergraduate Research Opportunities Program (UROP) and CALIT2, provides students with a unique experience. Each student has the guidance of a UCI faculty mentor, along with the opportunity to gain experience and advanced training in state-of-the-art facilities and techniques.
To learn more about SURF-IoT, visit here.