JACOB ASHWORTH

Title: Solving the Nonisomorphic Realization Problem for DNA Self-Assembly

Abstract: The study of nanostructure production has become increasingly important in recent years with the growth of areas such as biomedical computing and molecular robotics. One method of nanoengineering is to construct branched DNA molecules that bond to each other and self-assemble into larger molecules. The mathematics of how this process occurs, and of which DNA molecules are required to build a target structure, is captured by the Flexible Tile Assembly Model (FTAM). To generate molecules that are more likely to assemble into a given target structure, various constraints are placed upon the FTAM. While the constrained FTAM generates more practically feasible solutions, the constraints also make solving for an optimal tile set much more difficult. One such constraint can be represented as the Nonisomorphic Realization Problem, and in this paper we develop an algorithm to efficiently solve this problem for any given target nanostructure.
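
Below is a minimal sketch, not the thesis algorithm, of the kind of check the Nonisomorphic Realization Problem involves: two candidate realizations built from the same hypothetical pot of tiles are represented as labeled graphs and tested for isomorphism with networkx. The tile names and bond-edge labels are illustrative, and simple graphs are used even though the FTAM permits multigraphs.

```python
# A minimal sketch (not the thesis algorithm): compare two hypothetical
# realizations of the same pot of tiles for graph isomorphism.
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match, categorical_edge_match

def make_realization(tile_labels, bonds):
    """tile_labels: {vertex: tile type}; bonds: [(u, v, bond_edge_type), ...]."""
    g = nx.Graph()
    for v, label in tile_labels.items():
        g.add_node(v, tile=label)
    for u, v, bond in bonds:
        g.add_edge(u, v, bond=bond)
    return g

# Two candidate realizations built from hypothetical tile types t1, t2
# and a single bond-edge type "a".
r1 = make_realization({0: "t1", 1: "t2", 2: "t2"}, [(0, 1, "a"), (0, 2, "a")])
r2 = make_realization({0: "t1", 1: "t2", 2: "t2"}, [(0, 1, "a"), (1, 2, "a")])

same = nx.is_isomorphic(
    r1, r2,
    node_match=categorical_node_match("tile", None),
    edge_match=categorical_edge_match("bond", None),
)
print("isomorphic realizations:", same)  # False: a nonisomorphic realization exists
```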

ZIQI (ANDREA) CHEN

Title: A Computational Model of Why Males are Honest When Pursuing a Mate

Abstract: If you and another person are competing for the same resource, say money, and each of you makes a decision based on what the other has said, will you lie to get more money? Lying might seem like the natural tendency, yet in nature many species have evolved to stay honest, especially during mating. Over the past year, I built a computational model to explore the dynamics between males and females in a mating context where males employ different honesty strategies and females can evolve their choosiness. Males base their signals on their quality, so intuitively higher-quality males send higher signals, which females prefer when choosing a mate to reproduce with. In my experiments, females can in some cases use choosiness alone to set low-quality lying males apart from the rest of the population. The size of the effect a single mutation can have on choosiness and on male quality also plays a role in distinguishing lying males from the others.
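
As a rough illustration of the kind of dynamics the model explores (not the model itself), the sketch below runs one generation of a toy signaling game in which liars inflate their signal and females filter candidates with an evolving choosiness threshold; the population size, lie bonus, and mutation scale are hypothetical.

```python
# A toy one-generation signaling game: honest males signal their quality,
# liars inflate it, and females filter candidates by a choosiness threshold.
import random

random.seed(0)
N, LIE_BONUS, MUTATION_SD = 50, 0.3, 0.05  # hypothetical parameters

males = [{"quality": random.random(), "honest": random.random() < 0.5} for _ in range(N)]
females = [{"choosiness": random.random()} for _ in range(N)]

def signal(m):
    # Honest males signal their true quality; liars inflate the signal.
    return m["quality"] + (0.0 if m["honest"] else LIE_BONUS)

offspring_choosiness = []
for f in females:
    # A female only considers males whose signal clears her threshold.
    candidates = [m for m in males if signal(m) >= f["choosiness"]]
    if not candidates:
        continue
    mate = max(candidates, key=signal)  # prefer the highest signal
    # Offspring inherit a mutated copy of the mother's choosiness.
    child = f["choosiness"] + random.gauss(0.0, MUTATION_SD)
    offspring_choosiness.append(min(max(child, 0.0), 1.0))

if offspring_choosiness:
    print("mean offspring choosiness:",
          sum(offspring_choosiness) / len(offspring_choosiness))
```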

SPENCER CHUBB

Title: Streamlining Combinatorial Auctions with Unified Neural Networks          

Abstract: We build on prior auction research that uses neural networks to learn bidder preferences. The speed of auctions is crucial for real-world applications, motivating research into ways to speed them up. Previous approaches use a separate neural network for each bidder, whereas we propose a unified neural network for all bidders. For testing, we use the Spectrum Auction Test Suite, in accordance with previous research. In our experiment, the unified approach is 13 times faster with only a 1.3% loss in efficiency. The unified and separate approaches can also be used together to increase efficiency by about 0.7%: the faster algorithm provides an initial solution, which is then refined by the slower algorithm. Furthermore, the unified approach simplifies implementation by requiring one set of hyperparameters rather than many.
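
The sketch below illustrates only the architectural difference described above, not the paper's actual networks: a separate value network per bidder versus one unified network that conditions on a learned bidder embedding. Layer sizes and names are hypothetical.

```python
# Architectural sketch: per-bidder value networks vs. one unified network.
import torch
import torch.nn as nn

n_bidders, n_items, hidden = 5, 10, 64  # hypothetical sizes

# Separate approach: one value network per bidder.
separate = nn.ModuleList([
    nn.Sequential(nn.Linear(n_items, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    for _ in range(n_bidders)
])

# Unified approach: a single network for all bidders, conditioned on a
# learned bidder embedding.
class UnifiedValueNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_bidders, hidden)
        self.net = nn.Sequential(
            nn.Linear(n_items + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, bundle, bidder_id):
        # bundle: (batch, n_items) 0/1 indicator of the items in the bundle.
        return self.net(torch.cat([bundle, self.embed(bidder_id)], dim=-1))

unified = UnifiedValueNet()
bundle = torch.randint(0, 2, (8, n_items)).float()
bidder_id = torch.randint(0, n_bidders, (8,))
print(separate[0](bundle).shape, unified(bundle, bidder_id).shape)  # both (8, 1)
```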

ENDIA CLARK

Title: A Comparison of Statistical Models and Machine Learning for Epilepsy Prediction Analysis         

Abstract: Epilepsy, a neurological disorder affecting millions worldwide, poses a continuous threat to patient safety and quality of life. Automatic Seizure Detection (ASD) systems aim to identify seizures in real-time, enabling timely interventions. To improve these systems, this study compares the predictive accuracy and computational efficiency of machine learning and statistical models in epilepsy prediction. Leveraging the CHB-MIT dataset, statistical models including linear classification and Hidden Markov Models (HMM) are compared with machine learning models like Support Vector Machines (SVM) and Long Short-Term Memory Models (LSTM). This study provides valuable insights for healthcare practitioners and contributes to predictive modeling in healthcare.
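
A minimal sketch of the comparison setup, using synthetic stand-in features rather than CHB-MIT recordings: fit a linear classifier and an SVM, then report accuracy and training time. The feature dimensions and labels are placeholders.

```python
# Toy comparison of a linear model and an SVM on stand-in EEG window features.
import time
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))              # stand-in EEG window features
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # stand-in seizure/non-seizure labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("linear", LogisticRegression(max_iter=1000)),
                    ("SVM", SVC(kernel="rbf"))]:
    start = time.perf_counter()
    model.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy={acc:.3f}, train_time={elapsed:.3f}s")
```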

OLIVIA DAVIS

Title: Evaluating K-mers in Measuring Population Diversity of Unaligned Sequences

Abstract: In population genomics, scientists measure the diversity of populations with pairwise nucleotide diversity (π), a metric that compares the differences at each position of a genome sequence. However, π underestimates diversity relative to what theory predicts. This discrepancy is known as Lewontin’s Paradox, and one contributing issue may be the loss of information during alignment (the process of aligning the short fragments, called reads, produced when sequencing a genome). We investigate a way to measure population diversity without aligning sequences. That is, we collect k-length sections of the reads, called k-mers, and compare the sets of k-mers from two individuals in three ways: Bray-Curtis dissimilarity computed directly on k-mer counts, cosine similarity computed directly on k-mer counts, and cosine similarity after compressing the k-mer information with a counting Bloom filter. We determine optimal lengths of k across various coverage levels and genome types. We also compare the efficacy of these measures against each other in simulations with different population sizes, mutation rates, and genomes. Finally, we compare these measures directly with the pairwise nucleotide diversity score.
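
A minimal sketch of the alignment-free idea (not the thesis pipeline): count k-mers in two sequences and compare the count vectors with Bray-Curtis dissimilarity and cosine similarity. The sequences and k are illustrative, and the counting-Bloom-filter variant is omitted.

```python
# Count k-mers in two toy sequences and compare the count vectors.
from collections import Counter
from math import sqrt

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def bray_curtis(a, b):
    keys = set(a) | set(b)
    num = sum(abs(a[x] - b[x]) for x in keys)
    den = sum(a[x] + b[x] for x in keys)
    return num / den

def cosine_similarity(a, b):
    keys = set(a) | set(b)
    dot = sum(a[x] * b[x] for x in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm

seq1, seq2, k = "ACGTACGTTGCA", "ACGTACGATGCA", 4  # hypothetical reads and k
c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
print("Bray-Curtis dissimilarity:", round(bray_curtis(c1, c2), 3))
print("cosine similarity:        ", round(cosine_similarity(c1, c2), 3))
```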

WILLIAM D. HAWKINS

Title: Modeling Baseball as a Markov Chain using a 313-State Transition Matrix

Abstract: This thesis explores the application of a 313-state transition matrix to model the game of baseball as a Markov chain. Traditionally, baseball has been studied using just 25 states, while more recent research has extended this to a 288-state matrix. My study advances this area by utilizing a comprehensive 313-state matrix, providing a more detailed representation of the game. I analyze the effectiveness of this approach through simulations and statistical comparisons, highlighting its potential to capture the complexities of baseball more accurately than previous models. Further details and findings from this investigation are discussed later in this thesis.
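
As a sketch of the underlying machinery only, the code below steps a tiny toy transition matrix, standing in for the 25-, 288-, or 313-state baseball matrices, until it reaches an absorbing state, and computes the expected number of transitions before absorption via the fundamental matrix. The states and probabilities are hypothetical placeholders.

```python
# A minimal Markov-chain sketch on a toy 3-state chain (not the 313-state model).
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: state 2 is absorbing (think "three outs, inning over").
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])
ABSORBING = 2

def simulate_until_absorbed(P, start=0, max_steps=1000):
    state, path = start, [start]
    for _ in range(max_steps):
        if state == ABSORBING:
            break
        state = int(rng.choice(len(P), p=P[state]))
        path.append(state)
    return path

print("sample path:", simulate_until_absorbed(P))

# Expected number of transitions before absorption, via N = (I - Q)^-1.
Q = P[:ABSORBING, :ABSORBING]
N = np.linalg.inv(np.eye(len(Q)) - Q)
print("expected steps from each transient state:", N.sum(axis=1))
```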

NAT HURTIG

Title: An Improved Policy for the Known-size M/G/1 Queue with Preemption Costs      

Abstract: We introduce a new single-server policy that outperforms the previously best-known policy (Görg, 1986) in the known-size M/G/1 queue with constant preemption costs. This result is based on simulation. We also introduce a new problem, “fool’s no-arrival known-size preemptive queueing,” which is related to the known-size M/G/1 queue with preemption costs but more amenable to theoretical analysis. Our new M/G/1 policy is based on our provably optimal solution to fool’s no-arrival known-size preemptive queueing. Our policy’s main departure from the previous literature is that it considers the state of the entire queue when making a preemption decision. We prove that any index-based policy that does not consider queue state, including that of Görg (1986), is suboptimal in fool’s no-arrival known-size preemptive queueing.
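
To illustrate why preemption costs matter (this is a toy example, not the thesis policy or its proof), the sketch below compares, for a handful of jobs with known sizes and no arrivals, a non-preemptive shortest-first schedule against a naive round-robin schedule that pays a constant cost at every preemption. The sizes, quantum, and cost are hypothetical.

```python
# Toy comparison: total completion time with and without preemptions,
# where every preemption incurs a constant cost.
PREEMPTION_COST = 0.5
sizes = [3.0, 2.0, 5.0]  # hypothetical known job sizes, all present at time 0

def shortest_first(sizes):
    # Non-preemptive shortest-processing-time-first schedule.
    t, total = 0.0, 0.0
    for s in sorted(sizes):
        t += s
        total += t
    return total

def round_robin(sizes, quantum=1.0, cost=PREEMPTION_COST):
    # Naive preemptive schedule: run each unfinished job for one quantum,
    # paying a constant cost whenever we switch away from an unfinished job.
    remaining = list(sizes)
    t, total, done = 0.0, 0.0, 0
    while done < len(remaining):
        for i in range(len(remaining)):
            if remaining[i] <= 0:
                continue
            run = min(quantum, remaining[i])
            t += run
            remaining[i] -= run
            if remaining[i] <= 0:
                done += 1
                total += t
            elif sum(1 for x in remaining if x > 0) > 1:
                t += cost  # preemption cost when switching to another job
    return total

print("shortest-first total completion time:", shortest_first(sizes))
print("round-robin with preemption costs:   ", round_robin(sizes))
```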

STEVEN JUNG

Title: Evaluating Mutation Coverage: Human-Created vs. Automated Test Cases

Abstract: This study compares mutation coverage between test cases written by people and those generated by the KLEE symbolic execution engine, focusing on GNU software. We aim to find out which type of test, human-written or automatically generated, is better at identifying bugs. Early results show differences in effectiveness, highlighting the strengths and weaknesses of each method. This research helps us understand how automated testing tools might support or improve the traditional testing done by developers in software development.
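
For reference, mutation coverage reduces to a simple ratio: the fraction of injected mutants a test suite kills. The sketch below computes that score for two hypothetical suites; the kill results are placeholders, not data from the study.

```python
# Mutation score = killed mutants / total mutants, tallied per test suite.
def mutation_score(kill_results):
    """kill_results: list of booleans, True if the suite killed that mutant."""
    return sum(kill_results) / len(kill_results)

# Hypothetical kill results for the same six mutants under two suites.
human_suite = [True, True, False, True, False, True]   # developer-written tests
klee_suite  = [True, False, False, True, True, True]   # KLEE-generated tests
print(f"human: {mutation_score(human_suite):.2f}, KLEE: {mutation_score(klee_suite):.2f}")
```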

NYOMI CARRINE MORRIS

Title: Designing a Robotic Sidekick to Aid Self-disclosure in Domestic Abuse Survivors  

Abstract: Intimate partner violence, which takes the form of physical and emotional control of one person by another, occurs at high rates for both men and women. Survivors of such abuse are often burdened with overcoming trauma-related mental health conditions such as post-traumatic stress disorder. Recovery, which includes rebuilding lost social bonds, often comes in the form of self-disclosure, where survivors share their history of abuse with another person. Despite the benefits of self-disclosure, an additional burden lies on the survivor to manage their own feelings and the reactions of their conversation partner. I developed a co-design workshop for survivors to identify these burdens and construct designs for a robot that would support the self-disclosure process. Key themes emerged from group discussions, hands-on activities, and physical designs: emotional management, conversational guidance, and behavior-correcting feedback. I additionally contribute several design implications for future survivor-supporting robots regarding their behavior, interaction modality, and form factor.

JACOB J. OLINGER

Title: AI Applications in Radiology: Regulation, Evaluation, and Usage

Abstract: Artificial Intelligence (AI) holds immense promise for revolutionizing healthcare, offering potential benefits for patients and healthcare professionals. However, its full potential remains constrained by significant barriers to adoption within the healthcare industry. This research investigates the regulatory, operational, and ethical domains surrounding AI software in healthcare, specifically in the field of radiology. The main objectives are to 1) examine the current barriers to AI implementation from different perspectives: manufacturer, radiologist, technologist, and sales representative, 2) evaluate current regulatory taxonomies (the EU AI Act and FDA AI approval procedures), and 3) propose an extended framework that increases the practicality of current AI regulations. The research methodology involved a qualitative survey administered at the Annual Radiological Society Conference. Survey respondents provided valuable insights into existing software solutions and adoption barriers, informing the extension of current models for AI regulation. The preliminary findings suggested a five-pillar classification system for radiological AI-enabled devices: learning, documenting, planning, communication, and discovery applications. This classification system offers a more nuanced and accurate representation than the FDA’s existing three-class system, which is based on risk and impact to the patient, from low- to high-risk applications. The findings also revealed a potential “safety” gap in the current FDA regulation.

CONNOR RHYS PEPER

Title: Optimizing Flag Selection and Phase-Order in the GHC Compiler    

Abstract: Limited research exists on automated flag-selection and phase-ordering optimizations in the Glasgow Haskell Compiler (GHC). This paper introduces two machine learning algorithms, BOCA and RIO, tailored for compiler flag selection. Additionally, modified versions of BOCA and RIO are proposed to address the phase-ordering problem. Our experiments demonstrate significant runtime improvements over default optimization flag presets such as -O2, achieved through the application of RIO and BOCA. Furthermore, our modified BOCA algorithm outperforms the default phase ordering implemented in GHC. While we identify program-specific optimal flag combinations and phase orderings for each test application, we note that the default phase ordering in the simplifier is generally effective across most applications. We also find instances where -O2 performs suboptimally, or even worse than using no optimization flags at all.
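
The sketch below shows the shape of the flag-selection search loop, with plain random sampling standing in for BOCA and RIO: sample a flag subset, compile with GHC, time the resulting binary, and keep the best. The source file, candidate flag list, and search budget are illustrative assumptions.

```python
# Random-search stand-in for learned flag selection: compile a test program
# with sampled GHC flag subsets and time the resulting binary.
import random
import subprocess
import time

CANDIDATE_FLAGS = ["-fcse", "-fstrictness", "-fspecialise", "-ffull-laziness"]
SOURCE, BINARY = "Main.hs", "./Main"   # hypothetical test program

def evaluate(flags):
    # Recompile with the chosen flags, then time one run of the binary.
    subprocess.run(["ghc", "-fforce-recomp", "-o", "Main", SOURCE, *flags],
                   check=True, capture_output=True)
    start = time.perf_counter()
    subprocess.run([BINARY], check=True, capture_output=True)
    return time.perf_counter() - start

best_flags, best_time = None, float("inf")
for _ in range(20):  # small random-search budget
    flags = [f for f in CANDIDATE_FLAGS if random.random() < 0.5]
    runtime = evaluate(flags)
    if runtime < best_time:
        best_flags, best_time = flags, runtime

print("best flags:", best_flags, "runtime:", round(best_time, 3), "s")
```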

TRISTAN C. SCHEINER

Title: Channel Selection for Intermediate Fusion Using Reinforcement Learning in Cooperative Vehicle-to-Vehicle Perception   

Abstract: In order for autonomous vehicles to make safe navigation plans, they need to be able to detect obstacles on the road accurately and in real time. While cooperative vehicle-to-vehicle perception solves many of the problems facing individual perception, it requires both time and resources to share data with other vehicles. In many cases, sending certain pieces of data is redundant or unnecessary, which increases the overall time between object predictions. In our work, we train a reinforcement learning agent to select a subset of channels in an intermediate fusion model to share with surrounding vehicles. We show that we can reduce the amount of data shared between vehicles while maintaining similar prediction performance. We also show that we can identify and remove noisy channels in order to improve the prediction model's performance.
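
The sketch below shows only the channel-selection step, not the full cooperative-perception pipeline: a binary mask over feature channels (standing in for the RL agent's action) determines which channels of a cooperating vehicle's intermediate features are transmitted and fused. The shapes, the random mask, and the element-wise max fusion are hypothetical choices.

```python
# Channel-selection sketch: mask a cooperating vehicle's intermediate
# features before "transmitting" and fusing them with the ego features.
import torch

channels, h, w = 64, 32, 32
ego_features = torch.randn(1, channels, h, w)    # ego vehicle's intermediate features
other_features = torch.randn(1, channels, h, w)  # cooperating vehicle's features

# Action from the (hypothetical) RL policy: keep roughly half of the channels.
mask = (torch.rand(channels) < 0.5).float().view(1, channels, 1, 1)

shared = other_features * mask                   # unselected channels are not sent
fused = torch.maximum(ego_features, shared)      # simple element-wise max fusion

bytes_full = other_features.numel() * 4
bytes_sent = int(mask.sum().item()) * h * w * 4
print(f"transmitted {bytes_sent / bytes_full:.0%} of the full feature map")
print("fused shape:", tuple(fused.shape))
```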
