Machine Learning Research

Machine Learning Club proudly supports ML research at TJ. We encourage our members to apply the knowledge they've gained from the lectures and competitions to real-world data: from biology to computer security, the applications are endless!

Here is a brief sample of the many projects, both completed and ongoing, that we have helped mentor.

HypeFL: A Novel Blockchain-Based Architecture Using Federated Learning and Cooperative Perception for a Fully-Connected Autonomous Vehicle System

Aryaman Khanna and Mihika Dusad

★★★ Regeneron ISEF Finalist ★★★

We propose HypeFL, a novel framework built on Hyperledger Fabric that combines blockchain and ML to create a decentralized, collaborative, fully-connected autonomous vehicle system. HypeFL enables cooperative perception, where vehicles fuse road perceptions to collaboratively perceive their environments and make better driving decisions. Our system uses federated learning, a distributed ML approach, to protect data privacy by sharing only model parameters between vehicles, rather than raw data. The blockchain serves as an immutable, decentralized server that stores model parameters, with vehicles acting as nodes; it protects against single-point failures, mitigates malicious attacks, and preserves data privacy through a novel consensus protocol based on the Multi-Krum algorithm. We tested HypeFL in realistic conditions using the open-source CARLA simulator, recording an average object detection accuracy of 93.1% and a 35% decrease in collision rate compared to current approaches. We then generalized HypeFL to a physical environment with miniature Raspberry Pi-powered cars, observing a 71.7% decrease in collision rate.
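The core idea of federated learning is that vehicles share trained parameters, not raw sensor data, and a coordinator averages them into a global model. The sketch below is a minimal, hypothetical illustration of plain federated averaging (it omits HypeFL's Multi-Krum consensus and blockchain layer, and the parameter values are made up):

```python
# Minimal sketch of federated averaging: each vehicle trains locally
# and shares only model parameters, never raw sensor data.

def federated_average(client_params):
    """Average model parameters (flat lists of floats) from several clients."""
    n_clients = len(client_params)
    n_params = len(client_params[0])
    return [
        sum(params[i] for params in client_params) / n_clients
        for i in range(n_params)
    ]

# Three vehicles report locally trained parameters (toy values).
vehicle_updates = [
    [0.2, 1.0, -0.5],
    [0.4, 0.8, -0.3],
    [0.3, 0.9, -0.4],
]
global_params = federated_average(vehicle_updates)
print(global_params)  # element-wise mean of the three updates
```

In a robust scheme like Multi-Krum, updates that lie far from the others would be filtered out before this averaging step, which is what protects the shared model from malicious nodes.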


OtoScan: A Novel, Inexpensive System for Comprehensive Diagnosis of Middle Ear Infections With an Intelligent Mobile Otoscope

Sritan Motati and Omkar Kovvali

★★★ Regeneron ISEF Finalist ★★★

This research presents OtoScan, a revolutionary pipeline for the detection of middle ear infections using novel ensemble networks for diagnosis and a custom 3D-printed mobile otoscope. The physical attachment was developed using custom-designed 3D models, a compact magnification lens, fiber optics, and various electronics for illumination. To develop detection algorithms, public otoscopic images were collected and augmented with realistic perturbations. An ensemble of the InceptionV3, InceptionResNetV2, and Xception architectures trained using transfer learning and label smoothing was developed to mitigate class imbalance and overconfidence while improving diagnostic accuracy for acute and chronic suppurative otitis media (OM). Regions of interest are highlighted as gradient saliency maps in a smartphone application using Grad-CAM++. Evaluation on public images shows that the proposed algorithm greatly surpasses standard and state-of-the-art architectures such as a pretrained ResNet-50 in accuracy and F1 score. Further testing using an industry-standard otoscopic simulator validated the real-world viability of this system.
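Label smoothing, mentioned above as a defense against overconfidence, simply softens the one-hot training targets so the model is never pushed toward probability 1.0 for any class. A minimal sketch of the transformation (the epsilon value and three-class setup here are illustrative, not the project's actual configuration):

```python
def smooth_labels(one_hot, epsilon=0.1):
    """Soften a one-hot target: the true class keeps 1 - epsilon of the mass,
    and epsilon is spread uniformly over all classes."""
    n = len(one_hot)
    return [y * (1 - epsilon) + epsilon / n for y in one_hot]

# Three diagnostic classes, true class = index 1.
soft = smooth_labels([0.0, 1.0, 0.0])
print(soft)  # true class drops below 1.0; other classes get a small share
```

Training against these softened targets penalizes extreme confidence, which tends to improve calibration on small or imbalanced medical datasets.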


Automated Bias Reduction in Deep Learning Based Melanoma Diagnosis Using a Semi-Supervised Algorithm

Sauman Das

★★★ Regeneron ISEF Finalist ★★★

In this study, we introduce a novel architecture called LatentNet, which automatically detects over-represented features and reduces their weights during training. We tested our model on four distinct categories: three skin color levels corresponding to Types I, II, and III on the Fitzpatrick Scale, and images containing visible hair. We then compared its accuracy against a conventional deep convolutional neural network (DCNN) trained using the standard approach (i.e., without detecting over-represented features) and with the same hyperparameters as LatentNet. LatentNet outperformed the standard DCNN on average, with accuracies of 89.52%, 79.05%, 64.31%, and 64.35% compared to the DCNN's 90.41%, 70.82%, 45.28%, and 56.52% in the corresponding categories. Differences in average performance between the models were statistically significant (p < 0.05), suggesting that the proposed model successfully reduced bias among the tested categories.


An Accessible Parkinson's and Alzheimer's Novel Diagnostic Framework Using Vision-Based Handwriting Kinematic Analysis and Machine Learning

Ron Nachum

★★★ Regeneron ISEF Finalist ★★★

PANDwriting is a novel system for diagnostic assessment of neurodegenerative diseases (NDs) using handwriting kinematic analysis, designed to increase access to Alzheimer's disease (AD) and Parkinson's disease (PD) diagnosis in lower-income areas and resource-poor health systems. An ensemble classifier consisting of a neural network, a support vector machine, and a random forest was trained with 10-fold cross-validation, achieving a classification accuracy of 74% in distinguishing PD patients from healthy controls (n = 75).
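The two evaluation ideas above, k-fold cross-validation and combining several classifiers, can be sketched in a few lines. This is a generic illustration, not the project's code; the fold-assignment scheme and the hard majority vote are simplifying assumptions:

```python
from collections import Counter

def k_fold_indices(n_samples, k=10):
    """Split sample indices into k roughly equal folds for cross-validation."""
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)
    return folds

def majority_vote(predictions):
    """Combine labels from several classifiers by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# 75 subjects split into 10 folds: each fold serves once as the test set.
folds = k_fold_indices(75, k=10)
print([len(f) for f in folds])

# e.g. neural net, SVM, and random forest each predict a label for one subject
print(majority_vote(["PD", "PD", "control"]))  # -> PD
```

Each of the 10 training rounds would hold out one fold for testing and train the three models on the rest, so every subject is used for evaluation exactly once.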


Decentralized Shared Intelligence of Autonomous Vehicles with Real-Time Multi-Agent Reinforcement Learning

Irfan Nafi, Raffu Khondaker, and Eugene Choi

★★★ Regeneron ISEF Finalist ★★★

Over 1.3 million people are killed each year in road traffic crashes worldwide, and autonomous vehicles are widely seen as the future of transportation. We devised an autonomous system for navigating highways with multi-agent reinforcement learning (MARL) built on a shared intelligence system, allowing each agent (in this case, each vehicle) to avoid collisions and move efficiently through traffic. We first tested our algorithm in a simulator, then generalized it to the real world by training our agents on varying parameters using a technique called active domain randomization.
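Domain randomization works by resampling environment parameters each training episode, so a policy trained in simulation does not overfit to one simulated world. A minimal sketch of the sampling step (the parameter names and ranges are hypothetical, and this shows plain randomization rather than the active variant, which adapts the ranges based on agent performance):

```python
import random

def randomize_environment(rng):
    """Sample one set of environment parameters for a training episode.
    Parameter names and ranges are illustrative, not the project's values."""
    return {
        "friction":      rng.uniform(0.6, 1.0),   # road surface grip
        "traffic_count": rng.randint(2, 10),      # number of other vehicles
        "sensor_noise":  rng.uniform(0.0, 0.05),  # simulated sensor error
    }

rng = random.Random(42)  # seeded for reproducibility
for episode in range(3):
    params = randomize_environment(rng)
    print(f"episode {episode}: {params}")
```

Because every episode sees a differently configured world, the learned policy must succeed across the whole parameter range, which is what lets it transfer to physical vehicles.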


3D Sound Localization and Classification for the Deaf and Hard of Hearing

Irfan Nafi, Raffu Khondaker, and Eugene Choi

★★★ Regeneron ISEF Finalist ★★★

Current devices geared towards the deaf and hard of hearing, such as hearing aids, struggle to localize and transmit sounds to those with severe hearing impairments. Our goal was to convey the directionality, pitch, amplitude, sound classification, and speech recognition of multiple sound sources to those with hearing impairments through a low-cost device. We localized sounds with the SRP-PHAT-HSDA algorithm applied to incoming audio captured by a six-microphone array. We classified each source in real time using a stacked generalization ensemble trained on an augmented audio dataset.


Real-time Object Search and Detection for the Visually Impaired

Irfan Nafi, Raffu Khondaker, and Eugene Choi

★★★ Regeneron ISEF Finalist ★★★

There are over 289 million people with visual disabilities worldwide, and that number is expected to grow to 579 million within only three decades. Current techniques to aid the visually disabled are expensive and limited in capability, relying on low-accuracy computer vision. Using ensemble learning, we achieved accuracies as high as 88% with four computer vision architectures trained on COCO and OIDv4. Furthermore, with an inexpensive camera-vibration interface combined with an app, we were able to effectively guide a user to a designated object in real time using several onboard microcontrollers.


Alternative to Echocardiography: Using Deep Learning to Diagnose Heart Murmurs

George Tang, Sylesh Suresh, and Ankit Gupta

Cardiovascular disease (CD) is the leading cause of death worldwide, accounting for more than 17 million deaths in 2015. Built on a simple interface and machine learning, HeartFit allows users to administer diagnoses themselves. The model consists of a deep recurrent convolutional neural network trained on 132 pre-labeled heartbeat audio samples. Validated on a previously unseen set of 44 heartbeat audio samples, it achieved an F-beta score of 0.93 and an accuracy of 93.1%. This exceeds the roughly 83% accuracy of clinical examination, demonstrating the effectiveness of the HeartFit platform.
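The F-beta score reported above combines precision and recall into one number; beta > 1 weights recall more heavily, which matters when missing a murmur is costlier than a false alarm. A small sketch of the formula (the confusion counts below are hypothetical, chosen only to illustrate the calculation, not taken from the HeartFit evaluation):

```python
def f_beta(tp, fp, fn, beta=1.0):
    """F-beta score from confusion counts; beta > 1 favors recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts for a small validation set.
score = f_beta(tp=27, fp=2, fn=2, beta=1.0)
print(round(score, 3))  # -> 0.931
```

With beta = 1 this reduces to the familiar F1 score, the harmonic mean of precision and recall.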


Parameter Study of a Kernel-Based Approach to Anomaly Detection in Multispectral Imaging

Neal Bayya

The purpose of this parametric study is to improve the US Navy's ability to detect illegal ships. Building on Dr. Hoffman's work, Neal, along with Dr. Colin Olsen and Dr. Timothy Doster, proposes methods using kernel PCA (kPCA) that outperform current RX (Reed-Xiaoli) detection methods. In addition, he constructs an object-level analysis of anomaly detection performance, which is more intuitive and more representative of success in identifying ships than the conventional pixel-level analysis. This research was originally conducted at a naval research lab, and Neal continued his work under the guidance of TJML.


Breast Cancer Classification and Recurrence Prediction Using Machine Learning and Java

Min Kang

Min explored a possible use of breast cancer classification models as a prediction tool in clinical settings. This empirical research had three phases: discriminating tumor type, classifying recurrence outcome within 1, 2, 3, and 5 years after surgery, and creating a predictive application with Java. The SMO models classified tumor types with 98% accuracy and recurrence outcomes with 91%, 83%, 78%, and 69% accuracy within 1, 2, 3, and 5 years, respectively. Lastly, a Java application implementing the SMO models was created to predict each year's recurrence outcome from specific input data.


Retinal Image Segmentation

Nithin Dass

Retinal image analysis has many applications, from checking the health of the eye to serving as an indicator of other, more severe diseases. To provide valuable insights to eye doctors as well as those looking for symptoms in eyes, this project aims to automatically segment retinal images. Several important features, such as blood vessels, will be filtered out and fed into more complex, application-specific machine learning structures.


EEG-Controlled Exoskeleton

Nithin Dass, Yash Bollisetty, and Srinidhi Krishnamurthy

★★★ Intel ISEF Finalist ★★★

Modern exoskeletons cost thousands of dollars and are far too expensive for the majority of the paralyzed population. This project set out to create an inexpensive exoskeleton that a paralyzed patient could control with their brain, turning to machine learning to classify EEG signals. Using a Discrete Wavelet Transform to convert EEG signals into feature vectors, they then used a support vector machine to separate the EEG signals into two classes of motion, enabling a basic exoskeleton that supports upper-body movement and body stabilization.
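The Discrete Wavelet Transform step splits a signal into a smoothed "approximation" and a "detail" component that captures fast changes; these coefficients make compact features for an SVM. A minimal sketch of one level of the Haar wavelet, the simplest DWT (this is an illustration of the general technique, not the project's wavelet choice, and the toy samples are made up):

```python
def haar_dwt_step(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients.
    Assumes the signal length is even."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# Toy EEG window of 8 samples -> 4 approximation + 4 detail coefficients,
# which could be concatenated into a feature vector for an SVM.
eeg = [3.0, 1.0, 0.0, 4.0, 2.0, 6.0, 5.0, 5.0]
approx, detail = haar_dwt_step(eeg)
print(approx)  # -> [2.0, 2.0, 4.0, 5.0]
print(detail)  # -> [1.0, -2.0, -2.0, 0.0]
```

Applying the step recursively to the approximation coefficients yields a multi-level decomposition, which is how EEG frequency bands are typically separated.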


Diagnosing Diabetic Retinopathy

Justin Zhang, Kavya Kopparapu, and Neeyanth Kopparapu

★★★ Intel ISEF Finalist ★★★

Blindness related to diabetes, formally known as diabetic retinopathy, is a common condition in poor regions where difficult access to medical care prevents diagnosis. After this project, all you need to diagnose diabetic retinopathy is a smartphone, an inexpensive lens system, and a convolutional neural network trained on tens of thousands of images. Justin used Keras to create the deep learning model, taking advantage of the power of transfer learning with ResNet-50 pretrained on ImageNet.


Automating Plagiarism Detection

Yuki Oyama and Arvind Srinivasan

Detecting certain characteristics of writing related to style is very difficult using traditional methods. However, using machine learning, this project was able to associate authors with 72 different linguistic parameters characterizing writing style. They turned to Scikit-Learn for a fast model, which yielded impressive results. This approach can serve as an improvement over traditional methods of preventing plagiarism.
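Stylometric classifiers like this one start by turning each text into a numeric vector of linguistic measurements. The sketch below computes three illustrative features; the project's 72 parameters are not public here, so these stand-ins (and the sample sentence) are hypothetical:

```python
def style_features(text):
    """Extract a few simple stylometric features from a text.
    These three are illustrative stand-ins for the 72 linguistic
    parameters used in the project."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "words_per_sentence": len(words) / len(sentences),
        "comma_rate": text.count(",") / len(words),
    }

sample = "The results, however, were surprising. We tried again."
print(style_features(sample))
```

Vectors like these, one per document, are exactly what a Scikit-Learn classifier would be trained on to associate an author with a writing style.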


Detecting Terrorism on Social Media

Mihir Patel and Nikhil Sardana

★★★ Intel ISEF Finalist ★★★

Extremist organizations such as ISIS have used social media to recruit and influence thousands of potential jihadists and foreign fighters. TerroristTracker aims to stop this form of propaganda by turning these organizations' own techniques against them. By following specific symbols, such as the ISIS flag, using computer vision techniques and convolutional neural networks, a new system has been developed that tracks image features with 93.5% accuracy. These features, along with those extracted from caption analysis and user content, have allowed an SVM to identify terrorist accounts with 90.5% accuracy.