Below you will find the schedule of events as well as session details, all available electronically. The full program is available online. A hard copy can be provided, upon request, at the registration desk when checking in at the conference.
We will have a hands-on overview of some of the tools that data scientists use for working with data, including large data sets. The workshop topics are somewhat flexible and open to discussion, depending on the interests of the participants. At a minimum, we will introduce students to R and RStudio, data visualization, and perhaps some tools for scraping and parsing XML directly from the web and processing the scraped data in R. All participants are encouraged to bring a laptop and to come excited to learn about some of the introductory nuts and bolts of data science. No computational background is needed for this workshop.
Weaving is one of the most mathematical of all art forms, and it has something for every mathematician: geometry, topology, combinatorics, number theory, algebra, and even a little differential equations. We'll take a hands-on tour of many different ways to explore mathematics with weaving. Be prepared to learn and experiment with various techniques for weaving paper strips. We will also demonstrate weaving with yarn on a loom, and there may be opportunities for you to try that out, too. No previous knowledge of weaving or of any particular area of mathematics is required. Participants are encouraged to bring their creativity! All materials will be provided.
LaTeX is the de facto standard for typesetting mathematics and scientific documents. In this workshop, we aim to give an overview of some of the more advanced features available in LaTeX. Specifically, we will explore: (1) the creation of custom packages and document classes to avoid mile-long preambles; (2) other TeX engines, with particular emphasis on LuaLaTeX, which includes Lua as an embedded scripting language and thereby allows a host of useful applications (e.g., custom assignment classes that auto-populate a table of scores or an answer key, or that randomly generate similar problems upon compiling); and (3) the various tools and add-ons available for creating high-quality 2D and 3D vector graphics in LaTeX. Templates and/or minimal working examples will be provided for those who wish to use or otherwise tinker with the demos from the workshop. Participants are encouraged to bring a laptop to this session!
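To make the custom-class idea concrete, here is a minimal sketch of a personal document class (the file name and macro are illustrative, not the workshop's actual demos). Saved as myassignment.cls on the TeX search path, it lets every document begin with \documentclass{myassignment} instead of repeating a long preamble.

```latex
% myassignment.cls --- a minimal custom class sketch (illustrative name)
\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{myassignment}[2024/01/01 Minimal assignment class sketch]
\LoadClass[11pt]{article}          % build on the standard article class
\RequirePackage{amsmath,amssymb}   % preamble material moved out of documents
\RequirePackage{geometry}
\geometry{margin=1in}
% A convenience macro every assignment can share:
\newcommand{\assignmenttitle}[1]{\begin{center}\Large\bfseries #1\end{center}}
```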
In this paper, we establish a method-of-moments-type estimator for the parameter involved in stochastic processes time-changed by a stable subordinator. We use only the number of constant periods observed in the process to define the estimator. We provide approximations for the variance and the bias of the estimator, and compare its performance to the CDF curve-fitting method in simulations. After adjusting for bias, the resulting estimator is quick to compute and especially well-suited to situations in which multiple observations of the same process are available. We continue with an application of this method to market price data for low-volume stocks, estimating the parameter with the method-of-moments-type estimator and comparing the resulting estimates to those of the CDF curve-fitting method. The estimator produces similar results with significantly less computational effort, so it is well-suited to situations in which many estimates must be made or in which estimates are made over large amounts of data.
The Unified Transform Method is a recently developed method for solving boundary value problems. This method subsumes classical methods such as separation of variables and Fourier series, and has several beneficial properties. In this talk, we will discuss how we utilized the Unified Transform Method to solve the Dirac system of partial differential equations, a system of equations developed by Dirac to model quantum mechanical particles. In addition, we will highlight the key concepts behind the method and explain its utility.
The computer vision task of semantic segmentation, which involves assigning a label to each pixel in an image, has many applications to vital areas such as medical imaging and autonomous driving. This project uses the semantic segmentation deep learning architecture DeepLabv3+ to make inferences about the intensity of vehicular and pedestrian traffic. This approach to traffic estimation has advantages over traditional methods such as object detection in its ability to more accurately account for both large and small vehicles, each of which affects traffic differently. This project and its methods are one step toward the "smart city": a city whose traffic system, based on artificial intelligence, can adapt to traffic in real time with the end goal of mitigating gridlock in large cities.
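As a rough illustration of the pixel-counting idea (not the project's actual pipeline), the sketch below uses torchvision's pretrained DeepLabv3, the closest off-the-shelf relative of DeepLabv3+, to segment a frame and report what fraction of pixels fall in vehicle and pedestrian classes; the input file name is hypothetical.

```python
# Sketch: estimate traffic intensity from per-pixel class coverage.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("intersection.jpg").convert("RGB")    # hypothetical camera frame
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    labels = model(batch)["out"].argmax(dim=1)         # per-pixel class labels

# Pascal VOC class indices used by this pretrained model: 6 = bus, 7 = car, 15 = person.
vehicle_pixels = ((labels == 6) | (labels == 7)).sum().item()
person_pixels = (labels == 15).sum().item()
total = labels.numel()
print(f"vehicle coverage: {vehicle_pixels / total:.1%}, "
      f"pedestrian coverage: {person_pixels / total:.1%}")
```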
In this talk, I will introduce the concepts of a detection system and information fusion, and then discuss using ROC (receiver operating characteristic) curves to quantify the accuracy of detection systems. I will discuss Boolean rules for fusing different detection systems and the mathematical structures that result from this fusion. I will justify results that at first seem surprising; in this context, Boolean rules do not always work as expected.
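As one concrete instance of a Boolean fusion rule (stated here under an independence assumption the abstract does not make), fusing two detectors with true-positive rates p1, p2 and false-positive rates f1, f2 gives:

```latex
\begin{align*}
\text{AND rule:}\quad P_{TP} &= p_1 p_2, & P_{FP} &= f_1 f_2,\\
\text{OR rule:}\quad  P_{TP} &= 1-(1-p_1)(1-p_2), & P_{FP} &= 1-(1-f_1)(1-f_2).
\end{align*}
```

The AND rule lowers both rates while the OR rule raises both, which is one reason the fused system's ROC behavior can defy first expectations.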
A magic cube of order n is an n × n × n array filled with the n³ distinct positive integers 1, 2, ..., n³ such that the n integers in each row, column, pillar, and space diagonal all sum to the magic sum. A magic cube of any odd order can be constructed through an algorithm that places the elements in order along the 2-dimensional backward diagonals of the cube. We prove that our construction gives a magic cube by first deconstructing the magic cube into 4 cubes, each containing the entries 0, 1, ..., n−1. Then we show that the sums of the entries along the rows, columns, pillars, and space diagonals are all the same. Time permitting, we show that the algorithm extends to create magic hypercubes of odd dimensions.
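For intuition, here is the classical two-dimensional analog of this kind of construction: the well-known "Siamese" method, which places 1, 2, ..., n² in order along backward diagonals of an odd-order square (the talk's algorithm lifts diagonal placement to three dimensions).

```python
# Classical odd-order magic square via backward-diagonal placement (2-D analog).
def odd_magic_square(n):
    assert n % 2 == 1
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2                       # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n  # step along the backward diagonal
        if square[nr][nc]:                 # cell occupied: drop down one row instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return square

for row in odd_magic_square(5):
    print(row)   # every row, column, and main diagonal sums to 65
```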
Wolves are among the earliest known animals to be domesticated. However, the mechanism by which gray wolves were domesticated into dogs is still unknown. The prevailing domestication hypothesis is that humans selectively bred the gray wolves that were more docile. A more recent hypothesis states that the wolves that were less hostile toward humans essentially domesticated themselves: the availability of food near human settlements naturally selected for such wolves. Simulating these conditions could help assess the plausibility of domestication via natural selection. Previously published mathematical models are based on systems of differential equations, and these models make critical simplifications such as homogeneous, randomly mixed populations and infinite population sizes. Therefore, we created an agent-based simulation featuring single-trait evolution, user-defined and literature-based parameters, and sexual reproduction.
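A drastically simplified sketch of this kind of agent-based model follows (the population size, fitness rule, and mutation rate are invented for illustration; the actual simulation is far richer):

```python
# Minimal single-trait evolution sketch: tamer wolves feed near the settlement
# more reliably, so they are more likely to reproduce.
import random

POP, GENERATIONS, MUTATION = 200, 100, 0.05

wolves = [random.random() for _ in range(POP)]   # heritable tameness in [0, 1]
for gen in range(GENERATIONS):
    # Feeding success (fitness) increases with tameness near the settlement.
    parents = random.choices(wolves, weights=[0.2 + t for t in wolves], k=POP)
    # Sexual reproduction: offspring trait = parental midpoint plus mutation.
    wolves = [
        max(0.0, min(1.0, (random.choice(parents) + random.choice(parents)) / 2
                     + random.gauss(0, MUTATION)))
        for _ in range(POP)
    ]
print(f"mean tameness after {GENERATIONS} generations: "
      f"{sum(wolves) / len(wolves):.2f}")
```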
I will introduce some geometric and combinatorial properties of hypersurfaces. In particular, I will be discussing a class of hypersurfaces called "well-poised" which exhibit strong irreducibility properties. This property is revealed through the combinatorics of the Newton polytope of the hypersurface.
Region crossing change (RCC) is an operation performed on a selected region of a link diagram that reverses all crossings incident to that region. A set of regions is ineffective if, after RCC moves are performed on each region in the set, the resulting link diagram has the same crossing information as the original. Ineffective sets are key to understanding how many RCCs it takes to transform one diagram into another. In this talk, we characterize the ineffective sets for all knots and reduced link diagrams. These are determined by checkerboard shading and a variant of checkerboard shading called tri-coloring. As an application, we will determine the maximum number of RCC moves needed to transform one knot diagram into another diagram with the same underlying projection.
Forest ecosystems are constantly changing due to factors such as climate change, invasive species, and management. Traditionally, changes to forest ecosystems are described by shifts in species composition, successional stage, or phylogenetic diversity. Recent advancements in remote sensing, particularly LiDAR, have revealed the importance of stand structure (the arrangement of trunks, branches, and leaves in 3-dimensional space) on forest dynamics. However, the limited availability of LiDAR data at a continental scale prevents the analysis of forest structure across the United States. In this exploratory analysis, we compiled data from over 120,000 plots in the U.S. Forest Inventory and Analysis (FIA) Program. We computed a suite of metrics related to the distribution of tree size classes, species diversity, and phylogenetic diversity in FIA plots and mapped the resulting metrics across the U.S. In addition, we subset our data by ecoregion and assessed the relationships between these metrics. We found a positive relationship between forest structural diversity and both species and phylogenetic diversity at low levels of structural diversity, but the relationships tend to flatten out or become negative in areas with higher levels of structural diversity. These relationships also varied in strength between ecoregions, which may be related to regional variability in the species pool. Moving forward, we plan to explore the relationship between forest structure and ecosystem functions, which will help broaden our understanding of forest structural dynamics in a changing global environment.
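As an illustration of the kind of plot-level metric involved (the column names are hypothetical, not the FIA schema), Shannon diversity can be computed over species counts and over tree-size classes:

```python
# Sketch: per-plot Shannon diversity of species and of size classes with pandas.
import numpy as np
import pandas as pd

def shannon(counts):
    """Shannon diversity H = -sum(p * ln p) over nonzero proportions."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

trees = pd.read_csv("fia_trees.csv")   # hypothetical extract: one row per tree
trees["size_class"] = pd.cut(trees["dbh_cm"], bins=[0, 10, 25, 50, 100, np.inf])

per_plot = trees.groupby("plot_id").apply(
    lambda g: pd.Series({
        "species_diversity": shannon(g["species"].value_counts()),
        "structural_diversity": shannon(g["size_class"].value_counts()),
    })
)
print(per_plot.head())
```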
This talk will explore how quadratic integer rings, such as Z[i], can be used to solve Diophantine equations. While classical questions, such as which numbers can be written as a sum of squares, are difficult to solve using elementary number-theoretic methods, more abstract methods can simplify the solutions. These solutions depend on the properties of the particular quadratic integer ring, including the structure of the ideals within the ring. Some of these properties can be seen graphically as well as algebraically. In particular, the unique factorization of an element into primes is reflected by the tiling of the plane with parallelograms generated by the ideals of the ring.
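A standard worked example of the method (not necessarily one from the talk): in Z[i] the prime 5 factors, and taking norms recovers a sum-of-squares representation.

```latex
% Factoring 5 in the Gaussian integers $\mathbb{Z}[i]$ and taking norms:
\[
  5 = (2+i)(2-i), \qquad N(2+i) = 2^2 + 1^2 = 5
  \quad\Longrightarrow\quad 5 = 2^2 + 1^2.
\]
% More generally, an odd prime is a sum of two squares exactly when it is
% congruent to 1 mod 4, i.e., exactly when it splits in $\mathbb{Z}[i]$.
```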
Sometimes, when we pose questions of mathematics, its answers are strikingly contrary. Why can't we trisect an angle with the same tools we use to bisect one? It's not possible. Why haven't we found the quintic formula? It doesn't exist. Can we at least prove that arithmetic is logically consistent? Nope! We can view these results as intransigent obstacles to human knowledge, or we can accept them as fascinating illustrations of the boundaries of different mathematical techniques. In this talk, we will explore analogous results for techniques in the fiber arts. Each form of knitting, crochet, embroidery, and so forth has its own set of limitations on the types of designs it can produce. Sometimes, these limits are broad enough that the art form can encompass every mathematical possibility. Other times, the craft imposes intriguing restrictions on what patterns we can produce, and we will make the case that these restrictions have their own intrinsic beauty.
Blackjack, or 21, is among the most popular casino table games. Since, unlike most other games of chance, successive hands of blackjack are not independent, the mathematics behind blackjack is at once more complicated and more interesting than for games like craps or roulette, and there can be times during play when the gambler has an edge over the casino. This talk will briefly review the rules of the game and then describe some of the calculations, both theoretical and experimental, that led to blackjack basic strategy and the advantages derived from card counting.
This presentation analyzes the impact of the Great Recession (2007-2009) on age-related declines in health. Previous research has yielded mixed findings on whether economic downturns are associated with periods of deteriorating or improving health. This study contributes to that body of results by making extensive use of statistical matching methods. Using data from three waves of the Survey of Mid-Life Development in the United States (MIDUS), a national study of health and well-being in adults, this study hypothesizes that those who lived through a stressful period such as the Great Recession experienced significantly greater declines in physical and mental health than those who lived through a period prior to the recession. The study forms a target group of subjects who were surveyed on two separate occasions spanning the onset of the recession. Each of these subjects is paired with a subject from a comparison group who was surveyed on two separate occasions prior to the recession. Pairings are based on the results of a nearest-neighbor matching algorithm applied to a series of explanatory variables. Changes in self-reported functional limitations, depression, and self-rated health are used as response variables to determine differences in health outcomes between the matched pairs. Contrary to the hypothesis, the study results indicate that the Great Recession was associated with improved health outcomes. This presentation is based on a study supported by the National Science Foundation under Grant No. 1246818.
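A minimal sketch of the nearest-neighbor pairing step follows (the variable and file names are hypothetical, not the MIDUS codebook, and this version matches with replacement, which may differ from the study's procedure):

```python
# Sketch: pair each target-group subject with its nearest comparison-group
# subject on standardized covariates.
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

covariates = ["age", "sex", "education", "baseline_health"]   # assumed names
target = pd.read_csv("target_group.csv")          # spans onset of the recession
comparison = pd.read_csv("comparison_group.csv")  # surveyed before the recession

scaler = StandardScaler().fit(comparison[covariates])
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(comparison[covariates]))
_, idx = nn.kneighbors(scaler.transform(target[covariates]))

matched = comparison.iloc[idx.ravel()].reset_index(drop=True)
pairs = pd.concat([target.reset_index(drop=True).add_suffix("_target"),
                   matched.add_suffix("_comparison")], axis=1)
print(pairs.head())
```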
The directed power graph of a group is the graph whose vertex set is the set of elements of the group, with an edge from x to y if y is a power of x. The power graph of a group is obtained from the directed power graph by forgetting the orientations of its edges. This work discusses properties of cliques, cycles, paths, and colorings in power graphs of finite groups. We give a construction of the longest directed path in power graphs of cyclic groups, along with some results on distance in power graphs. We also discuss the cyclic subgroup graph of a group and show that it shares a remarkable number of properties with the power graph, including its domination number and independence number.
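A small sketch of the directed power graph for the cyclic group Z_n (written additively, so "powers" of x are the multiples of x mod n, i.e., the subgroup generated by x):

```python
# Sketch: directed power graph of Z_n with networkx.
import networkx as nx

def directed_power_graph(n):
    g = nx.DiGraph()
    g.add_nodes_from(range(n))
    for x in range(n):
        generated = {(k * x) % n for k in range(n)}   # <x>, the cyclic subgroup
        g.add_edges_from((x, y) for y in generated if y != x)
    return g

g = directed_power_graph(6)
print(sorted(g.successors(2)))   # [0, 4]: the other elements of <2> = {0, 2, 4}
undirected = g.to_undirected()   # the (undirected) power graph
```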
A Beatty sequence is a sequence of the form [a*n], where a is an irrational number and the bracket denotes the floor function. A remarkable result, called Beatty's Theorem, says that if a and b are irrational numbers such that 1/a+1/b=1, then the associated Beatty sequences "partition" the natural numbers. That is, every natural number belongs to exactly one of these two sequences. It is known that Beatty's Theorem does not extend directly to partitions into three or more sets, and finding appropriate analogs of Beatty's Theorem for such partitions is an interesting and wide open problem, which has applications to optimal scheduling problems. One example of such a problem is the "Chairman Assignment Problem" due to Robert Tijdeman: Suppose that k states with k positive weights (which sum up to 1) form a union. Each year, a state is selected from which the Chair of the union is to be chosen. The problem asks for the optimal assignment of these states such that, for each n and each state, the actual count of Chairs from this state in the first n years is closest to the expected count, given by n times the weight of the state. Tijdeman gave an algorithm to generate the optimal assignment. In the case k=2, the resulting partition of the natural numbers turns out to be a partition into non-homogeneous Beatty sequences. In our research, we consider the case k=3. We construct partitions of the natural numbers into three sequences that are in some sense closest to actual Beatty sequences, and we apply these results to optimal scheduling problems, such as the Chairman Assignment Problem.
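A quick numerical check of the k = 2 case of Beatty's Theorem (an illustration, not part of the talk's k = 3 construction), using the golden ratio a, for which 1/a + 1/b = 1 forces b = a/(a-1) = a + 1:

```python
# Verify that [a*n] and [b*n] partition 1..10000 when 1/a + 1/b = 1.
import math

a = (1 + math.sqrt(5)) / 2   # golden ratio
b = a / (a - 1)              # = a + 1 = a**2

N = 10_000
beatty_a = {math.floor(a * n) for n in range(1, N)}
beatty_b = {math.floor(b * n) for n in range(1, N)}

covered = beatty_a | beatty_b
print(beatty_a.isdisjoint(beatty_b))                 # True: no number in both
print(all(k in covered for k in range(1, 10_000)))   # True: every number appears
```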
We present a computational methodology for determining the structure of subtraction games. One of the oldest problems in combinatorial game theory is to characterize the structure of subtraction games. Although the structure can be analyzed recursively, no methodology for explicitly characterizing the structure of a subtraction game is yet known. In the last two years, our team characterized the eventual period lengths of the Sprague-Grundy values of subtraction games with 3 parameters. Recently, we greatly generalized these results to fully characterize the complete sequences of SG-values, including both the periodic and pre-periodic portions of the sequences. We have analyzed 72 PB of data for this problem to verify this computational approach to the analysis of these games.
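The recursive analysis mentioned above is the standard mex recursion for Sprague-Grundy values, G(n) = mex{ G(n - s) : s in S, s <= n }. A short sketch (the example subtraction set is arbitrary, not one of the team's studied families):

```python
# Sprague-Grundy values of a subtraction game via the mex recursion.
def sg_values(subtraction_set, limit):
    g = []
    for n in range(limit):
        reachable = {g[n - s] for s in subtraction_set if s <= n}
        mex = 0
        while mex in reachable:   # mex = least non-negative integer not reachable
            mex += 1
        g.append(mex)
    return g

# Example 3-parameter game S = {2, 5, 6}: print the values and eyeball the
# pre-periodic portion and the eventual period.
print(sg_values({2, 5, 6}, 40))
```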
We will prove a Wallace-Simson-type theorem in elliptic geometry and give some counter-examples to Monsky's theorem in hyperbolic geometry.
We consider the problem of tiling a rectangular domain of the triangular lattice using the six elementary tiles obtained by cutting a regular hexagon in half through opposite vertices, so that each tile is composed of three adjacent triangles. A rectangular domain of size 3n can be tiled completely in a number of ways, and the number of distinct tilings depends on the size of the domain. To count the distinct tilings, we used transformation matrices, which represent different qualities of each tile. To visualize the many possibilities, we wrote a search-tree program that generates all distinct tilings of a domain of any dimensions by trying every possible order in which the tiles can be placed. The program works for any domain but, owing to its exhaustive search, is very inefficient.
Generating functions are an extremely valuable tool for solving and generalizing probability and counting problems. We will focus on their applications to extracting information about words formed from a geometrically distributed alphabet. Using generating functions, we can take this complicated probability problem and solve it in a very generalizable way. We will also demonstrate other uses, including the counting and asymptotic analysis of Wolf partitions.
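One standard computation of this flavor (illustrative, not necessarily the talk's): if letters take value j with probability p q^(j-1), where q = 1 - p, then the probability that two independent letters agree falls out of a geometric series:

```latex
\[
  \Pr[X = Y] \;=\; \sum_{j \ge 1} \left(p\,q^{\,j-1}\right)^{2}
  \;=\; \frac{p^{2}}{1 - q^{2}} \;=\; \frac{p}{1 + q}.
\]
```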
In this talk, we investigate the eigenvalues and corresponding eigenfunctions of second-order ordinary differential equations (ODEs) with a singular weight. We employ general ODE techniques and Mathematica programming to visualize the dynamics of the eigenvalues and eigenfunctions.
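As a simplified illustration of the numerics involved (with a constant weight rather than the singular weight studied in the talk, and in Python rather than Mathematica), a shooting method recovers the eigenvalues of y'' + λy = 0 with y(0) = y(1) = 0, which are known to be (kπ)²:

```python
# Shooting method for a second-order eigenvalue problem.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(lam):
    """y(1) for the IVP y(0)=0, y'(0)=1; zero exactly when lam is an eigenvalue."""
    sol = solve_ivp(lambda t, y: [y[1], -lam * y[0]], (0, 1), [0, 1],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Bracket sign changes of y(1) on a lambda grid, then refine with brentq.
grid = np.linspace(1, 120, 600)
vals = [shoot(l) for l in grid]
eigs = [brentq(shoot, a, b) for a, b, fa, fb in
        zip(grid, grid[1:], vals, vals[1:]) if fa * fb < 0]
print(eigs)                                   # ~ [pi^2, (2 pi)^2, (3 pi)^2]
print([(k * np.pi) ** 2 for k in (1, 2, 3)])  # exact values for comparison
```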
In this paper, we provide a method for "cloaking" the outside world from an observer inside a fixed region. It is assumed the observer uses emitted and reflected waves, like radar, to image the exterior of the region. Our approach surrounds the region containing the observer with a special material, an absorbing buffer layer that does not reflect waves. As a result, the observer thinks the waves have traveled away to "infinity" and is unaware of even the presence of the buffer. We discuss two choices for the damping model and their properties. We then optimize the cloaking effect based on the balance between the decay rate of the wave amplitude and the thickness of the surrounding material. From there, we investigate a much more efficient cloaking scheme with an extended property of the surrounding material and the possibility of perfect cloaking. We also analyze an existing cloaking scheme, the so-called "perfectly matched layer" (PML). Finally, we present numerical examples showing the efficiency of cloaking for the two different choices, and compare them to the results of the PML scheme.
John von Neumann's paper "Approximative Properties of Matrices of High Finite Order" (1941) explores the asymptotic behavior of matrices as their dimension increases but remains finite. Essentially, von Neumann ventured into the neglected middle ground between finite- and infinite-dimensional analysis. The major result of the paper is a proof of the existence of "big, bad matrices", that is, matrices of large dimension that possess "bad" qualities. Von Neumann's proof was nonconstructive, making use of what he called a "volumetric" argument. We utilize computational techniques in a quest to find a construction of these matrices; discovering what the matrices look like will potentially have applications to data science and the theory of random matrices.
Construct a wire network in the shape of a cube and reinforce it by adding all twelve face diagonals. Consider a symmetric random walk on this reinforced cube: in each step, a particle moves from its current vertex to any one of the six adjacent vertices (every vertex except the one diametrically opposite to it) with equal probability. We answer three questions: (1) What is the distribution of the number of steps taken until the particle returns to its starting vertex? (2) What are the mean and the variance of the cover time (the time to visit all vertices)? (3) Among all vertices, which will be visited last?
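A Monte Carlo sketch of the walk (an illustration, not the presenters' analysis): label the cube's vertices 0..7 by 3-bit coordinates, so the antipode of v is its bitwise complement v ^ 7, and a vertex is adjacent to every vertex except itself and its antipode.

```python
# Simulate return times and cover times on the reinforced cube.
import random

def neighbors(v):
    return [w for w in range(8) if w != v and w != v ^ 7]

def return_time(start=0):
    v, steps = start, 0
    while True:
        v = random.choice(neighbors(v))
        steps += 1
        if v == start:
            return steps

def cover_time(start=0):
    v, seen, steps = start, {start}, 0
    while len(seen) < 8:
        v = random.choice(neighbors(v))
        seen.add(v)
        steps += 1
    return steps

trials = 100_000
# The graph is 6-regular, so the stationary distribution is uniform and the
# mean return time should approach 8 -- a useful sanity check.
print(sum(return_time() for _ in range(trials)) / trials)
print(sum(cover_time() for _ in range(trials)) / trials)
```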
A cellular automaton is a type of mathematical system that models the behavior of a group of cells with discrete values over progressing time steps. The often complicated and systematic behaviors of cellular automata are studied in computer science, mathematics, biology, and other scientific fields. This thesis discusses and analyzes the patterns that appear in some elementary cellular automata and in hexagonal lattice gases. A Python program was implemented to help visualize and analyze the behavior of the hexagonal lattice gas model.
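In the spirit of the thesis's Python program, here is a minimal elementary-automaton sketch (the rule and grid sizes are chosen for illustration):

```python
# Evolve a row of cells under a Wolfram rule number; rule 90 draws a
# Sierpinski triangle from a single live cell.
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, generations, rule = 63, 32, 90
row = [0] * width
row[width // 2] = 1                  # single live cell in the center
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row, rule)
```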