Below you will find a schedule of events as well as session details, available electronically. The full program is available online by clicking here. A hard copy will be provided in the folder you receive when checking in at the conference.
LaTeX is the de facto standard for typesetting mathematics and scientific documents. In this workshop, we aim to give an overview of some of the more advanced features available in LaTeX. Specifically, we will explore: (1) the creation of custom packages and document classes to avoid mile-long preambles; (2) other TeX engines, with particular emphasis on LuaLaTeX, which includes Lua as an embedded scripting language and thereby enables a host of useful applications (e.g., creating custom assignment classes that auto-populate a table of scores or an answer key, or that randomly generate similar problems upon compiling); and (3) the various tools and add-ons available for creating high-quality 2D and 3D vector graphics in LaTeX. Templates and/or minimal working examples will be provided for those who wish to use or otherwise tinker with the demos from the workshop. Participants are encouraged to bring a laptop to this session!
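As a taste of the first topic, a custom document class can be only a few lines long. The sketch below is illustrative only (the file name myassignment.cls, its contents, and the command it defines are our own placeholders, not the workshop's materials): it wraps the standard article class and preloads common packages so that individual documents need almost no preamble.

\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{myassignment}[2024/01/01 illustrative assignment class]
\LoadClass[11pt]{article}          % build on the standard article class
\RequirePackage{amsmath,amssymb}   % load frequently used packages once, here
% a small convenience command the class makes available to every assignment
\newcommand{\assignmenttitle}[1]{\begin{center}\Large\bfseries #1\end{center}}

Saved as myassignment.cls, it is loaded with \documentclass{myassignment}, keeping each assignment's preamble short.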
In probability theory, events represent sets of outcomes whose probability can be sensibly defined. In a typical undergraduate probability course, essentially any set of outcomes is allowed to be an event -- if you're lucky, with a footnote about how one could, with much more advanced tools, construct a set that is not an event (i.e. a non-measurable set). However, when describing random processes evolving in time, there is an easy and natural way to restrict which sets of outcomes can and cannot be events, and this restriction encodes the amount of "information" available to an "observer" of the process. This interpretation is fundamental for understanding models in mathematical finance.
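In standard probability terminology (our gloss, not a quote from the talk), this restriction is captured by a filtration: an increasing family of sigma-algebras indexed by time,
\[
  \mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F} \quad \text{for all } s \le t,
\]
where $\mathcal{F}_t$ consists of exactly those events whose occurrence or non-occurrence can be decided by observing the process up to time $t$.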
This workshop will be a hands-on introduction to some examples of data wrangling, and it should be accessible to everyone. We will show examples of how to work with large data sets using Python, R, or SQL in a JupyterLab environment. Participants should bring a laptop, but no software needs to be installed; instead, we will demonstrate the high-performance computing environment that our students use for introductory courses on data wrangling. Students and professors are both welcome to attend! We hope to get participants excited about ways to work with large data sets. No computational background is needed for this workshop.
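As one illustrative example of the kind of pattern we have in mind (a sketch only; the file name "large_dataset.csv" and the column "category" are placeholders, not the workshop's actual data sets), a large CSV file can be summarized in Python without ever loading the whole file into memory:

import pandas as pd

totals = {}
# read the file in 100,000-row chunks so only one chunk is in memory at a time
for chunk in pd.read_csv("large_dataset.csv", chunksize=100_000):
    counts = chunk.groupby("category").size()   # aggregate within the chunk
    for key, value in counts.items():
        totals[key] = totals.get(key, 0) + int(value)

print(totals)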
Hyperinflation has become an increasingly common problem in the last century. In 2007, Zimbabwe entered a period of extreme hyperinflation that led to the collapse of the Zimbabwean dollar. In this research, we evaluate Makochekanwa's model, which formed hypotheses about Zimbabwean inflation from 1999 to 2006. Because data on this period in Zimbabwe are sparse, we work to find appropriate estimators for the missing pieces of data. We apply linear regression and the Toda-Yamamoto variant of Granger causality to check Makochekanwa's findings.
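For readers unfamiliar with the statistical machinery, the sketch below shows a plain Granger-causality test in Python (the file "zimbabwe_series.csv" and the column names are placeholders, not our data). The Toda-Yamamoto variant used in this work additionally augments the underlying VAR with extra lags equal to the maximum order of integration of the series and restricts the Wald test to the original lag coefficients.

import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("zimbabwe_series.csv")            # placeholder data file
pair = df[["inflation", "money_supply"]].dropna()  # placeholder column names
# null hypothesis: the second column does not Granger-cause the first
results = grangercausalitytests(pair, maxlag=4)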
In recent and past works, convexity is usually assumed on each individual part of the action functional in order to demonstrate the existence and uniqueness of a Nash equilibrium on some interval [0, T] (that is, each Hessian was assumed to be nonnegative). In particular, a certain assumption was imposed in order to quantify the smallness of T. The contribution of this project is to expand on this, with the key insight being that one does not need the convexity of each part of the action, but rather just an appropriate combination of them, which essentially "compensates" for the other two terms to yield convexity of the action. This is meaningful in both the pure and applied settings: it generalizes the existence and uniqueness of a Nash equilibrium slightly, but perhaps more importantly it matches real-world applications more closely, since in reality there are many settings in which not every part of the action is convex. Thus, it is more accurate for modern applications of Mean Field Game Theory.
In the 1950s and 1960s, University of Chicago algebraist Abraham Adrian Albert (1905-1972) participated in various mathematical research projects for the National Security Agency (NSA). Recently, the reports that Albert wrote describing those projects have been declassified. This project is part of our larger effort to explore the more than 50 reports that Albert submitted under contract with the NSA; in particular, it is an exploration of the group-theoretic methods described in Albert's reports on Project Voodoo, a system for Identification Friend or Foe (IFF). Albert describes several permutation systems operating on n-tuples with binary components, aiming to obfuscate a known challenge vector while producing unique responses. Individual systems will be discussed alongside Albert's conclusions; additionally, similar constructions between reports will be highlighted and analyzed.
Pascal's triangle is one of the most famous recreational mathematics topics, but that does not mean there's no serious math involved in it. One area where research continues to this day is the study of Pascal's triangle modulo p, where p is a prime. While the results in this topic can be simply stated, they are proven with a variety of mathematics, including analytic number theory, group theory, fractal geometry, and combinatorics. In this lecture, we'll discuss some of the results and some of the constructions used to prove them.
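One classical example of such a simply stated result (offered here for context; the talk may focus on different ones) is Lucas' theorem: if $m$ and $n$ have base-$p$ expansions $m = m_k p^k + \cdots + m_1 p + m_0$ and $n = n_k p^k + \cdots + n_1 p + n_0$, then
\[
  \binom{m}{n} \equiv \prod_{i=0}^{k} \binom{m_i}{n_i} \pmod{p},
\]
which explains, for instance, the self-similar pattern of zero entries in Pascal's triangle modulo $p$.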
Let's say you want to see the inside of your hand: the muscles, the tendons, etc. The problem isn't that your hand doesn't transmit light. If you go into a dark room and cover a flashlight with your hand, you'll see the light coming through. The problem is that the light inside your hand is scattered: it bounces around a lot inside the hand, so the light that comes out doesn't carry information about any one place in particular. That's why if you try the flashlight thing, you'll only see some blurry redness. To do better -- to undo the effects of scattering -- you need a mathematical model of how light behaves in a scattering medium. You need to study the solutions to the resulting equations and try to understand what they have to say about the optical parameters of the medium they pass through. And once you've understood scattering on that level, it turns out there are many interesting questions you can ask next -- not just how you can see inside your hand, but how you can see around corners, how you can hear the sound of light, and many more. In this talk I'll try to describe some of these ideas in broad terms, starting with the basics, and building all the way up to the questions I mentioned above. I'll bring a flashlight.
Positron emission tomography (PET) is a medical imaging technique that uses nuclear decay events inside a patient's body to image physiological activity. It finds particular use in monitoring metabolic activity in the brain and in imaging cancer. Since the nuclear decay events cannot be measured at their source, they are inferred from detection of gamma ray pairs emitted by annihilation events between an electron and a positron emitted by the decay of a radioisotope delivered via a radiopharmaceutical. From these detector data, an image of the radiopharmaceutical distribution is reconstructed, indicating the location of whatever physiological activity we are concerned with. A variety of strategies are employed to solve the reconstruction problem, the simplest being filtered back projection, which is a discretized inverse Radon transform. Here, however, we investigate the expectation maximization algorithm of Shepp and Vardi, which seeks a solution to the maximum likelihood problem with observed detector data and unknown parameters of a discretized distribution of radiopharmaceutical.
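For context, the expectation maximization update has a closed form in one common formulation. Writing $\lambda_j$ for the unknown mean emission intensity in pixel $j$, $n_i$ for the observed counts in detector pair $i$, and $c_{ij}$ for the probability that an emission in pixel $j$ is detected in pair $i$ (our notation, not necessarily the talk's), each iteration sets
\[
  \lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i c_{ij}} \sum_i \frac{c_{ij}\, n_i}{\sum_{j'} c_{ij'}\, \lambda_{j'}^{(k)}},
\]
which keeps the estimate nonnegative and never decreases the likelihood from one iteration to the next.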
The Game of Cycles is a game played by two players on a simple connected planar graph called a board. The players take turns directing edges of the board with arrows, without allowing a source or sink to occur. The game continues until a player creates a directed cycle cell or makes the last possible move. In this talk, we will use Python to simulate the game and examine optimal winning strategies on boards involving tree graphs.
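To give a flavor of what such a simulation must track (a simplified sketch of our own, not necessarily the speakers' code), the core legality check asks whether directing the edge {u, v} as u -> v would turn either endpoint into a source (all incident edges pointing out) or a sink (all pointing in):

def makes_source_or_sink(neighbors, directed, u, v):
    """neighbors: dict mapping each vertex to a list of its neighbors.
    directed: set of ordered pairs (a, b) meaning the edge {a, b} is directed a -> b.
    Proposed move: direct the edge {u, v} as u -> v."""
    trial = set(directed) | {(u, v)}
    for w in (u, v):
        if all((w, x) in trial for x in neighbors[w]):   # every edge at w points outward
            return True                                  # w would become a source
        if all((x, w) in trial for x in neighbors[w]):   # every edge at w points inward
            return True                                  # w would become a sink
    return False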
The Ten Unspeakable Words problem appeared on the internet in 2015. Two prisoners must decide on a common testimony, but are each banned from using ten words during proceedings. With limited communication between them before the trial, how can they communicate their banned lists to each other, and decide on a valid common testimony? The basic solution, generalized to b banned words, requires an available dictionary of asymptotically b^2 words. We present a solution that exploits the structure of cyclic groups to bring this bound down by a factor of 2, and to extend the problem to a setting with 3 or more prisoners. The generalized bound on dictionary size is considerably smaller than the naive solution.
The Game of Cycles is a relatively new two-player game. The players take turns placing an orientation on an edge of a graph, until someone wins. This occurs when the player either creates a cycle or makes the last legal move. Our research led us to find winning strategies on two types of graphs: paths and caterpillars. We will outline these strategies and discuss some of the nuances and subtleties of this mathematical game.
Locating individual trees spatially is a critical component of efficient forest management. However, conventional forest inventorying procedures provide limited to no information on the location of individual trees. Using Global Positioning System (GPS) units within a forest requires multiple expensive base stations outside the forest perimeter and still yields limited accuracy due to interference from the tree canopy. Survey techniques for mapping and storing individual tree positions need to be both robust and cost-efficient. In this study, we use multiple Global Navigation Satellite System (GNSS) equipment models to obtain clusters of GPS coordinates on the perimeters of forest plots. These coordinate measurements will be used to compare the precision and accuracy of the GNSS units and to understand the effects of survey equipment placement relative to the trees. This study will contribute to the development of inexpensive location-tagged tree inventory systems by determining the optimal distance from the perimeter of a forest plot for GNSS measurements, choosing the best survey equipment to be used, and demonstrating a method for the placement of benchmarks.
The chip-firing game is a process of moving objects called chips around the vertices of a graph. This process is applied in some models, perhaps most famously the abelian sandpile model, which describes how grains of sand on a tabletop fall and eventually produce a stable configuration, i.e. a configuration in which every grain of sand is at rest. In this work, we consider chip-firing on graphs called trees, which are connected graphs with no cycles. Our goal in this talk is to investigate the connection between the chip-firing game and polyhedral geometry, a branch of geometry that explores polygons and their analogues in higher dimensions. As a means of describing this connection, we develop the concept of a self-reachable chip-firing configuration, which is a chip configuration that can revert to itself after a nonzero number of legal chip-firing moves. We explore the properties of the smallest polytope that contains every self-reachable chip-firing configuration with a particular number of chips on a particular tree.
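As a concrete illustration of the basic move (our own sketch, not the speakers' code): a vertex holding at least as many chips as its degree may fire, sending one chip to each neighbor.

def fire(neighbors, chips, v):
    """neighbors: dict vertex -> list of adjacent vertices; chips: dict vertex -> chip count."""
    degree = len(neighbors[v])
    if chips[v] < degree:
        raise ValueError("vertex cannot legally fire")
    new_chips = dict(chips)            # leave the original configuration untouched
    new_chips[v] -= degree
    for w in neighbors[v]:
        new_chips[w] += 1
    return new_chips

# On the path a - b - c, the configuration (0, 2, 0) fires b and becomes (1, 0, 1).
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(fire(path, {"a": 0, "b": 2, "c": 0}, "b"))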
This presentation will analyze the impact of various factors on soccer World Cup matches, including team formation, average player age, and possession of the ball. Using predictive models, we will examine which of these factors have the greatest influence on the outcome of the game. By studying the data and analyzing the results, we hope to gain a deeper understanding of the dynamics of World Cup matches and provide insights that can be used by coaches and players to improve their performance.
Monsky's theorem of 1970 says that we cannot dissect a unit square into an odd number of triangles of equal area. A related question, dissecting a rectangle into three triangles of equal area, illustrates the importance of the curvature of space. We prove that the dissection can be done in hyperbolic space, the non-Euclidean geometry of negative curvature.
A longboard rider can choose from several techniques to perform at different points during a ride, and this research aims to create a machine-learning model that can efficiently classify these techniques over time using raw acceleration data. This paper presents a period-window feature-extraction method for non-linear, specifically wavelet, time-series data processing, in which data is divided into windows of wavelet periods. The method involves analytical geometry, multidimensional calculus, and linear algebra, and can be used to visualize and normalize time-invariant object paths. It focuses on displacement data calculated from raw acceleration and gyroscope data collected with the smartphone application "Physics Toolbox Sensor Suite". We extracted features from each dynamic window of time in the displacement data and then fed them, along with various statistical features, into machine learning algorithms, including supervised learning classifiers and long short-term memory (LSTM) networks. With recurrent neural networks, we obtained an overall accuracy of 100% and a loss close to 0.3. With deep neural networks, we obtained an overall accuracy close to 89% and a loss close to 0.28.
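As a simplified, fixed-window stand-in for the dynamic period-window method described above (the file name, window size, and step below are placeholders, not the paper's pipeline), feature extraction from a one-dimensional acceleration trace might look like:

import numpy as np

def window_features(signal, window_size, step):
    """Slide a fixed-size window over a 1-D signal and summarize each window."""
    features = []
    for start in range(0, len(signal) - window_size + 1, step):
        w = signal[start:start + window_size]
        features.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(features)

accel = np.loadtxt("acceleration.csv", delimiter=",")    # placeholder recording
X = window_features(accel, window_size=128, step=64)     # one feature row per window

The resulting feature rows would then be passed to a classifier, for example an LSTM over the sequence of windows.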
The Game of Cycles is a two-player deterministic game and a relatively new object of study in mathematics. Over the summer, we looked at different graphs the game can be played on and the corresponding winning strategies for certain players, depending on the graph. Our presentation highlights these findings and describes the appropriate winning strategies for the graphs we studied. We hope to continue this research in the future, and we hope our work throughout the school year produces even more findings in this area.
"Correlation does not imply causation." We've all heard it, but how can we actually use statistics to answer questions of causality such as "Will a new feature increase revenue in my app?" or "Does this vaccine prevent the flu?". This talk will introduce the field of Causal Inference along with key assumptions required to make causal claims. We will dive into one common technique in the field known as "matching" using real data and the R programming language.