ISEF 2011 Abstracts - MA

2011 - MA001

Kate Alexandra Geschwind
Mayo High School, Rochester, MN

In this project, data-driven methodologies for developing mathematical models of three wind farms are presented. Hourly observed weather data were taken from the three national weather stations nearest the wind farm locations (Rochester, MN; Tulsa, OK; and Burlington, VT) for December 2009 through May 2010. The wind farms differed in size and in geographic location, to demonstrate how each factor affects the ability to accurately forecast wind farm output. Multiple explanatory variables were then derived for each data set: wind speed squared, wind speed cubed, a one-hour lag of the actual wind farm production, and the change in the one-hour lag. These variables, along with wind speed, were combined in various ways to form different mathematical models using regression analysis and artificial intelligence. Each model was then used to calculate the RMSE on testing data from the wind farm it had been derived from. The RMSE values were calculated over four forecast horizons: one, six, twelve, and twenty-four hours. For each wind farm, the models with the lowest RMSE were deemed the most accurate at forecasting that farm's output.
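The modeling step described above can be sketched in a few lines. The snippet below is a minimal illustration only: the data are synthetic (a toy cubic power curve plus noise stands in for the weather-station observations), and the model is an ordinary least-squares fit on wind speed, its square and cube, and the one-hour lag, scored by RMSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly data; the real project used observed weather data.
n = 500
wind = rng.uniform(0, 20, n)                      # wind speed
power = 0.3 * wind**3 + rng.normal(0, 50, n)      # assumed toy power curve
lag1 = np.roll(power, 1); lag1[0] = power[0]      # one-hour lag of production

# Design matrix: intercept, wind speed, its square and cube, one-hour lag.
X = np.column_stack([np.ones(n), wind, wind**2, wind**3, lag1])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

pred = X @ coef
rmse = np.sqrt(np.mean((power - pred) ** 2))
print(f"RMSE: {rmse:.1f}")
```

In the actual study the model would be fit on a training period and the RMSE computed on held-out testing data at each forecast horizon.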

Awards won at the 2011 ISEF
Certificate of Honorable Mention - American Mathematical Society
Certificate of Honorable Mention - American Statistical Association
Third Award of $1,000 - Mathematical Sciences - Presented by Intel


2011 - MA002


Matthew Russel Bauerle
Bauerle Homeschool, Fenton, MI

From robust data fitting in statistics to L1 finite element methods for fluid dynamics, the overdetermined L1 minimization problem arises in many practical applications. First, this paper presents a proof of the convergence of Newton's method for the smoothed overdetermined L1 minimization problem. Next, new formulas are derived for computing the Newton search direction by solving a linear least squares problem. Numerical examples show that the new least squares reformulation stably computes L1 minimizers for coefficient matrices with condition numbers from 10^3 to 10^6. In contrast, the existing normal-equations formulation completely failed to provide an accurate solution for ill-conditioned coefficient matrices with condition numbers of 10^5 and 10^6. The reformulation holds great potential for any overdetermined ill-conditioned L1 minimization problem and can be easily extended to nonlinear or underdetermined problems. This paper also presents many practical applications and the circumstances in which ill-conditioned matrices appear.
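The paper's least-squares reformulation of the Newton step is not reproduced here; instead, the closely related iteratively reweighted least squares (IRLS) scheme for the same smoothed L1 objective illustrates the key idea of replacing a normal-equations solve with a stable least-squares solve at each iteration. All names and data below are illustrative.

```python
import numpy as np

def l1_irls(A, b, eps=1e-6, iters=50):
    """Minimize sum_i sqrt((Ax - b)_i^2 + eps^2), a smoothed L1 objective,
    by iteratively reweighted least squares: each iteration is a stable
    least-squares solve rather than a normal-equations solve."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # L2 starting point
    for _ in range(iters):
        r = A @ x - b
        w = (r**2 + eps**2) ** -0.25              # sqrt of the IRLS weights
        x = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[::10] += 20.0                                   # sparse gross outliers
print(l1_irls(A, b))                              # close to x_true
```

The L1 fit ignores the sparse outliers that would badly bias an ordinary least-squares fit, which is the "robust data fitting" motivation named in the abstract.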

Awards won at the 2011 ISEF
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Mathematical Sciences - Presented by Intel
Awards totaling $3,000; first award minimum of $1,500 - Mu Alpha Theta, National High School and Two-Year College Mathematics Honor Society


2011 - MA003 
Myles Reid Scolnick
Cherry Creek High School, Greenwood Village, CO

The purpose of this research was to explore the ordering of the set of divisors of an integer n, as well as its two closest divisors. This was done by creating number-theoretic functions, in the spirit of the classical tau and sigma functions, in order to study properties of integers. These functions were defined as F(n) = (x, y), the ordered pair of the two closest divisors of n, and Delta(n) = x - y. The problem this research seeks to resolve is how to order all the divisors of an integer n. A special case examined closely is n = p^l q^m, and thus F(p^l q^m). These functions were studied from several points of view. First, they were viewed through directed graphs, for example tree graphs. They were also viewed as a semigroup of ordered comparisons, and lastly they were studied using rational approximations to irrational numbers via continued fractions and their convergents. Using inductive and deductive reasoning, many connections were found. First, it can be concluded that only two precise relationships between the two primes dividing n are needed to fully order the set of divisors of n. Second, formulas were developed for all integers n divisible by one prime or by two distinct primes. Lastly, these methods led to the construction of a fully ordered set of the divisors of n.
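Interpreting F(n) as the complementary pair of divisors x >= y with xy = n and minimal difference (an assumption; the abstract does not spell the definition out), the two functions can be sketched directly:

```python
def closest_divisors(n):
    """F(n): the pair (x, y), x >= y, of complementary divisors of n
    (x * y = n) minimizing the gap Delta(n) = x - y."""
    y = int(n**0.5)            # the closest pair straddles sqrt(n)
    while n % y:
        y -= 1
    return n // y, y

print(closest_divisors(12))            # (4, 3): Delta(12) = 1
print(closest_divisors(36))            # (6, 6): perfect square, Delta = 0
print(closest_divisors(2**3 * 3**2))   # an n = p^l q^m example
```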


2011 - MA004 
Gabriel Wolfgang Shih
Laramie Senior High School, Laramie, WY

The purpose of my project was to understand the connection between continued fractions and mathematical constants. My goal was to study continued fractions and investigate their applications to mathematical constants. The constants I examined were Pythagoras' constant, the Fibonacci numbers and the golden ratio, Archimedes' constant, Euler's number, and the Euler-Mascheroni constant. These famous numbers are used both inside and outside mathematics. In addition, I examined rational and irrational numbers as continued fractions. My project is important because it introduces many new mathematical concepts. A significant application of continued fractions is efficient deciphering in cryptography. Individuals can also appreciate the beauty of continued fractions and mathematical constants. In future work I would extend the theory of continued fractions to more mathematical constants as well as to functions, including the exponential, logarithmic, and trigonometric functions.
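A minimal sketch of the two basic computations involved (function names are illustrative): expanding a number into its partial quotients, and folding partial quotients back into a convergent.

```python
from fractions import Fraction
import math

def cf_digits(x, n):
    """First n partial quotients of the simple continued fraction of x."""
    a = []
    for _ in range(n):
        q = math.floor(x)
        a.append(q)
        frac = x - q
        if frac == 0:
            break
        x = 1 / frac
    return a

def convergent(a):
    """Fold a list of partial quotients back into a single fraction."""
    v = Fraction(a[-1])
    for q in reversed(a[:-1]):
        v = q + 1 / v
    return v

print(cf_digits(math.sqrt(2), 6))            # Pythagoras' constant: [1, 2, 2, ...]
print(cf_digits((1 + math.sqrt(5)) / 2, 6))  # golden ratio: all partial quotients 1
print(convergent([3, 7, 15, 1]))             # 355/113, a famous approximation of pi
```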


2011 - MA005 
Jason Mark Benedict Long
The Glasgow Academy, Glasgow, UNITED KINGDOM

This project explores RSA cryptography and the problem of integer factorisation from a mathematical and computational perspective, examining some of the recent breakthroughs in these fields. A specific objective of the project was to consider the problem of parameter optimisation in the quadratic sieve algorithm. The work began with a study of the mechanics of RSA cryptography and the mathematical foundations that underpin its security and widespread use. A detailed empirical and theoretical analysis of the quadratic sieve (QS) and the elliptic curve method (ECM) was undertaken, and these algorithms were implemented using Haskell as the primary programming language. This allowed the two methods to be compared, both mathematically and observationally. Finally, the implementation of the QS was used to study the problem of parameter optimisation and relevant empirical results were gathered and analysed. Due to the computation required to estimate optimal parameters using a brute force approach, a cluster of 10 computers was used to collect data. Graphs were constructed to show the frontier at which factorisation becomes feasible on the 2-dimensional plane of possible bound choices. These graphs were then compared to the theoretical asymptotically optimal bound choice. The project also compared alternative factorisation methods, both exponential and sub-exponential, for a range of instances, and examined the complementary problem of primality testing and primality proving; both probabilistic and deterministic methods were considered. This field is central to secure communications in the digital age and this project tackles some of the most dynamic and influential questions of modern mathematics.
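The quadratic sieve itself is too long to sketch here, but the congruence-of-squares idea it builds on (find x and y with x^2 ≡ y^2 mod n, so that gcd(x - y, n) splits n) can be illustrated with Fermat's method, its simplest special case. This is a hedged toy sketch, not the project's Haskell implementation.

```python
import math

def fermat_factor(n):
    """Find a nontrivial factor of an odd composite n by searching for
    x with x^2 - n a perfect square: then x^2 ≡ y^2 (mod n) and
    gcd(x - y, n) splits n. The quadratic sieve finds such congruences
    far faster by combining many smooth relations."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:
            return math.gcd(x - y, n)
        x += 1

print(fermat_factor(5959))   # 5959 = 59 * 101
```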


2011 - MA006 
Laura Marie Moore
Gallup Catholic High School, Gallup, NM

This project tests the theory of probability by applying the law of large numbers to the popular game show LET'S MAKE A DEAL, hosted by Monty Hall; hence the project's name, the Monty Hall Problem. The purpose is to determine the probability of winning a prize hidden behind one of three doors. The question is: when the contestant is left with a choice between two doors after a third has been eliminated, is the probability of having chosen the correct door still 33.3% (1/3), or is it 50% (1/2), if the contestant keeps the original choice? The hypothesis was that, since the original game involved three doors, eliminating one should not matter: by the law of large numbers, the probability of having chosen the correct door would still be 33.3% (1/3) if the contestant keeps the original door. The experiment was tested with a bowl of 300 marbles: 100 red for door one, 100 white for door two, and 100 blue for door three. Marbles were drawn randomly to determine which door contained the prize and which door the contestant picked, over 2,000 trials. The experiment was repeated with 3 marbles, and again on a computer using Microsoft Excel's random number generator with one million trials; the large sample sizes were chosen to demonstrate the law of large numbers. The results came out to approximately 33.3% correct when the contestant kept the original choice.
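A minimal simulation in the spirit of the Excel experiment (Python rather than Excel; the trial count here is smaller than the study's one million):

```python
import random

def monty_trials(trials, switch):
    """Fraction of wins over `trials` games, keeping or switching doors."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the prize
        # (which door he opens does not change the win probabilities).
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(monty_trials(100_000, switch=False))   # ≈ 1/3 when keeping
print(monty_trials(100_000, switch=True))    # ≈ 2/3 when switching
```

Keeping the original door wins about a third of the time, matching the study's result, while switching wins the remaining two thirds.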



2011 - MA007 
Wayne Liao
National Taichung First Senior High School, Taichung City, CHINESE TAIPEI

Given a point and a triangle, the pedal circle is the unique circle through the feet of the perpendiculars from the point to the three sides. Using the dynamic geometry software GSP, we discovered chains of infinitely many theorems about the pedal circle. Chain 1: For a point and a quadrilateral, the four pedal circles formed by omitting each vertex in turn are concurrent at a point, named the pedal point. For a point and a pentagon, the five pedal points similarly formed are concyclic, on a circle named the pedal circle. This process can be extended indefinitely. Chain 2: Given a point and a cyclic quadrilateral, the centers of the four pedal circles so formed lie on a circle NC(3; 4). Given a point and a cyclic pentagon, the five centers of the corresponding circles NC(3; 4), formed by omitting each vertex in turn, lie on a circle NC(3; 5). This process can be extended indefinitely. Chain 3: Given a point and a cyclic hexagon, the centers of the six pedal circles (with respect to pentagons) so formed lie on a circle NC(5; 6). Given a point and a cyclic heptagon, the seven centers of the corresponding circles NC(5; 6) lie on a circle NC(5; 7). This process can be extended indefinitely. Chain n is established by repeating the process above, starting with a cyclic 2n-gon.



2011 - MA008 
Tzu-Hsuan Su
Taipei Municipal Jianguo High School, Taipei City, CHINESE TAIPEI

Throughout the project, all rectangles are assumed to have sides of integral length lying in horizontal or vertical directions. We study the combinatorial properties of dissections of a rectangle into smaller ones. Following Duijvestijn and Leeuw, a dissection is said to be perfect if all rectangles have different areas, and even if the area of each rectangle is an even number. Given an m by n rectangle, we address the following three problems: What is the maximum number f(m, n) of component rectangles in a perfect even dissection? What is the number N(m, n) of perfect even dissections? What is the maximum number g(m, n) of component rectangles in a perfect dissection? Our main results are: f(m, n) equals the unique integer k such that the product mn lies in the half-open interval [k(k+1), (k+1)(k+2)). N(m, n) is at most the number of ways to express [mn - f(m, n)(f(m, n)+1)]/2 as a sum of positive integers, minus a small term: the number of ways to express mn as a sum of distinct even positive integers, one of which contains a prime factor greater than both m and n. Finally, g(m, n) is at most the unique integer k such that mn lies in the half-open interval [k(k+1)/2, (k+1)(k+2)/2).
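The formula for f(m, n) is easy to evaluate directly; the comment records why the interval [k(k+1), (k+1)(k+2)) appears. A minimal sketch:

```python
def f(m, n):
    """Maximum number of parts in a perfect even dissection of an m x n
    rectangle: the unique k with k(k+1) <= m*n < (k+1)(k+2)."""
    k = 1
    while (k + 1) * (k + 2) <= m * n:
        k += 1
    return k

# k(k+1) = 2 + 4 + ... + 2k is the least total area of k distinct even
# positive integers, which is where the interval endpoints come from.
print(f(4, 5))   # mn = 20 lies in [4*5, 5*6) = [20, 30), so f = 4
```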

Awards won at the 2011 ISEF
Second Award of $500 - American Mathematical Society



2011 - MA009 
Michael G. Carmona-Soto
Petra Mercado Bougart, Humacao, PUERTO RICO

The determinant of a matrix is computed from its elements. Determinants are useful in solving linear systems and in computing the inverse of a matrix, among many other applications. A magic square is a grid in whose cells whole numbers are placed so that every row, column, and diagonal sums to the same number, called the magic constant. One of the most common methods for filling a magic square of odd order is the Loubère method. The purpose of this investigation was to find a new method for computing determinants of matrices. After evaluating existing methods, the following problem was proposed: is there a method for finding determinants of matrices using magic squares? The hypothesis was that the Loubère algorithm for filling a magic square could lead to a new method for finding determinants of 3x3 matrices. From observation of the algorithm, the MaGa method was created, by which determinants of 3x3 matrices are found. It was concluded that when the Loubère algorithm is applied to fill a magic square, a new, easier, more organized, and systematic method can be developed. This investigation recommends the MaGa method for finding determinants of 3x3 matrices, and also for solving systems of equations in three variables; it facilitates the solution of equations for students, making mathematics interesting and fun.
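The MaGa method itself is not specified in the abstract, so only its two ingredients are sketched here: the Loubère (Siamese) construction of an odd-order magic square, and a 3x3 determinant computed by the rule of Sarrus.

```python
def loubere(n):
    """Siamese (de la Loubère) construction of an odd-order magic square:
    start at the top-middle cell, move up-and-right, drop down on collision."""
    sq = [[0] * n for _ in range(n)]
    r, c = 0, n // 2
    for k in range(1, n * n + 1):
        sq[r][c] = k
        r2, c2 = (r - 1) % n, (c + 1) % n
        if sq[r2][c2]:               # cell taken: move down instead
            r2, c2 = (r + 1) % n, c
        r, c = r2, c2
    return sq

def det3(m):
    """Rule of Sarrus for a 3x3 determinant."""
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
          + m[0][2]*m[1][0]*m[2][1] - m[0][2]*m[1][1]*m[2][0]
          - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2])

m = loubere(3)
print(m)          # the classic order-3 magic square
print(det3(m))    # its determinant, -360
```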



2011 - MA010 
Axel Manuel Pagan-Nieves
Francisco Morales High School, Naranjito, PUERTO RICO

The purpose of this research was to show that the area of the arbelos, the region bounded by three semicircles, is equal to the area of the circle with diameter CD. Throughout this study, we examined the relationship between the lengths of the diameters of the semicircles and the diameter of the circle. Our question was: is the area of the arbelos (the region delimited by three tangent semicircles with diameters AB, AD, and DB, where D belongs to the segment AB, all lying in the same plane) equal to the area of the circle with diameter CD, where C is the point of the large semicircle such that CD is perpendicular to AB at D? Our hypothesis was that the two areas are equal. To confirm it, we used the Pythagorean theorem, the formula for the area of a circle, and algebraic manipulation to determine the area of the arbelos and the area of the circle. Finally, we were able to conclude that, independently of the value (less than one) representing the diameter to the left of the center of AB, the area of the arbelos equals the area of the circle. Our results confirmed our initial hypothesis.
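A numerical check of the identity, taking AB = 1 and AD = d. The relation CD^2 = AD * DB used below is the geometric-mean relation for the altitude to the hypotenuse, which follows from the right angle at C (Thales) and the Pythagorean theorem.

```python
import math

def arbelos_area(d, ab=1.0):
    """Area between the semicircle on AB and those on AD and DB (AD = d*AB)."""
    ad, db = d * ab, (1 - d) * ab
    return math.pi / 8 * (ab**2 - ad**2 - db**2)

def circle_area(d, ab=1.0):
    """Circle on diameter CD, with C on the big semicircle and CD ⟂ AB at D,
    so that CD^2 = AD * DB."""
    cd2 = (d * ab) * ((1 - d) * ab)
    return math.pi / 4 * cd2

for d in (0.1, 0.25, 0.5, 0.8):
    assert math.isclose(arbelos_area(d), circle_area(d))
print("arbelos area equals circle area for all tested positions of D")
```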



2011 - MA011 
Yamil Rosario-Vazquez
University Gardens High School, San Juan, PUERTO RICO

This research deals with the study of the discrete volume of polytopes. A polytope is the generalization of the two-dimensional polygon, or the three-dimensional polyhedron, to any dimension. Geometrically, a polytope is a finite region of space enclosed by a finite number of hyperplanes. Examples of three-dimensional polytopes include crystals, boxes, tetrahedra, and convex objects. It is interesting to see how many problems in combinatorics, number theory, and other areas of mathematics can be expressed by means of polytopes in a Euclidean space. The computation of areas and volumes through integrals has been studied extensively in theory. However, approximating the surface areas of common objects is complex, because real surfaces are irregular; finding their areas through integration is not always feasible, so an alternative to integrals is needed. This project focuses on approximating areas under linear functions on the interval [0, 1]. A formula was developed, using inductive reasoning, by counting points in grids; it approximates the area below the curve of a linear function G_M(x). It was verified that the expression [M(n/2) + 1](n+1)/(n+1)^2 converges to the integral over [0, 1] of G_M(x) dx, since its limit as n tends to infinity is M/2. This formula can be used in civil engineering, architecture, cartography, land surveying, and for general mathematical knowledge.
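Assuming G_M(x) = Mx (an inference consistent with the stated limit M/2, since the integral of Mx over [0, 1] is M/2), the convergence can be checked numerically. Note that (n+1)/(n+1)^2 simplifies to 1/(n+1).

```python
def approx(M, n):
    """Grid-point approximation [M(n/2) + 1](n+1)/(n+1)^2 = (M*n/2 + 1)/(n+1)
    to the integral of G_M(x) = M*x over [0, 1]."""
    return (M * n / 2 + 1) / (n + 1)

M = 3                      # the integral of 3x over [0, 1] is M/2 = 1.5
for n in (10, 100, 10_000):
    print(n, approx(M, n))
# The printed values approach 1.5 as n grows.
```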



2011 - MA013 
Minsuk Kim
Paul VI Catholic High School, Fairfax, VA

Sequences of polygons generated by iterative processes on an initial polygon have been studied extensively. One of the most popular is the sequence sometimes referred to as Kasner polygons. Given a polygon K, the first Kasner descendant K' of K is obtained by placing the vertices of K' at the midpoints of the edges of K. More generally, for any fixed m in (0, 1) one may define a sequence of polygons {K_t}, t > 0, where each polygon K_t is obtained by dividing every edge of K_{t-1} in the ratio m : (1 - m) in the counterclockwise (or clockwise) direction and taking these division points as the vertices of K_t. We study the following problem: let m be a fixed number in (0, 1) and let n be a fixed integer, n >= 3. Further, let K be a convex n-gon and denote by K' the first m-Kasner descendant of K. What can be said about the ratio between the area of K' and the area of K, as K varies over the class of convex n-gons? We provide a partial answer to this question. Theorem: if K is a triangle, then K'/K = 1 - 3m(1 - m); if K is a quadrilateral, then K'/K = 1 - 2m(1 - m); if K is a pentagon, then 1 - 2m(1 - m) < K'/K < 1 - m(1 - m); and if K is an n-gon with n >= 6, then 1 - 2m(1 - m) < K'/K < 1.
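The triangle and quadrilateral cases of the theorem can be checked numerically with the shoelace formula and the m-descendant construction; the sample polygons below are arbitrary.

```python
def shoelace(pts):
    """Unsigned polygon area via the shoelace formula."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

def kasner(pts, m):
    """First m-Kasner descendant: divide each edge in the ratio m : (1 - m)."""
    out = []
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        out.append((x1 + m * (x2 - x1), y1 + m * (y2 - y1)))
    return out

tri = [(0, 0), (4, 1), (1, 3)]
quad = [(0, 0), (5, 0), (6, 4), (1, 5)]
for m in (0.25, 0.5, 0.7):
    r3 = shoelace(kasner(tri, m)) / shoelace(tri)
    r4 = shoelace(kasner(quad, m)) / shoelace(quad)
    assert abs(r3 - (1 - 3 * m * (1 - m))) < 1e-12
    assert abs(r4 - (1 - 2 * m * (1 - m))) < 1e-12
print("area ratios match 1 - 3m(1 - m) and 1 - 2m(1 - m)")
```

At m = 1/2 the quadrilateral ratio gives the familiar Varignon result: the midpoint quadrilateral has half the area of the original.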


2011 - MA014

Will Gooding
Roanoke Valley Governor's School for Science and Technology, Roanoke, VA

The purpose of this experiment was to observe the existence and location of eigenvalues of the Zakharov-Shabat system as the chirp of a Gaussian-type pulse was manipulated. The hypothesis was that if the chirp factor was greater than fifteen, then no eigenvalues would be supported by the system. This objective was accomplished primarily using Mathematica. The differential equations of the Zakharov-Shabat system were solved for the chirped Gaussian pulse, allowing the chirp factor to vary. The initial eigenvalues, present with no chirp, were tracked until "knocked off" the imaginary axis, at which point they were tracked through the complex plane up to a chirp of 50. Initially, three eigenvalues were present at zero chirp, occurring at xi = 3.75i (#1), 2.89i (#2), and 0.41i (#3). From there, eigenvalues #1 and #2 moved toward each other along the imaginary axis until they collided at (c, xi) = (1.186, 3.25i). These eigenvalues produce a soliton solution in the system up until c = 18.50, at which point the eigenvalue hits the real axis. Eigenvalue #3 moved up the imaginary axis until, at (c, xi) = (4.285, 1.84i), it turned around, moved down the imaginary axis, and was tracked up to a chirp of 50. This eigenvalue produces a soliton solution up until c = 10.762. Based on this, the ideal case in a real-world system would be the single-eigenvalue case for chirps between 10.762 and 18.50.



2011 - MA015 
Mari Kimura
Ritsumeikan Senior High School, Kyoto, JAPAN

In earlier studies, I discovered that cubes and rhombic dodecahedra are among the space-filling solids. However, the definition of which polyhedra are space-filling has not been made clear, so I tried to build new space-filling polyhedra using origami and to analyze them in order to understand the unsolved cases. I considered forming a polyhedron from a sheet of origami paper and examined whether space can be filled by a new polyhedron created by combining origami-formed polyhedra. In this research, I created space-filling polyhedra using three methods: (1) division, (2) cutting, and (3) combination. Division refers to dividing a polyhedron into new solids so that the vertices, edges, or diagonals of the original polyhedron are shared. Cutting refers to dividing a polyhedron at a desired point without passing through a vertex of the source polyhedron. Combination refers to forming a new space-filling polyhedron by combining polyhedra formed by division or cutting. To begin, I made a pyramid from a sheet of paper and used it as a basic element. Using this element together with trapezoidal pyramids, I found that it is possible to make the rhombic and trapezoidal dodecahedra. Based on these findings, I concluded that space-filling solids can be made with origami. Possible applications and extensions of the investigation include models for molecular biology as well as cardiovascular models made with origami.



2011 - MA016 
Jeffery Brunson Holste
Mills E. Godwin High School, Henrico, VA

The relationship between the era of music and the fractal dimension of melody lines from each era was analyzed. Twelve pieces, four from each of the three periods Baroque, Classical, and Romantic, were studied at the eight-measure level. After converting each melody line's notes to frequencies (Hz), each piece was analyzed for its fractal dimension using the Grassberger-Procaccia algorithm. Using Microsoft Excel, the fractal dimension was calculated, with music from the Baroque era having a mean dimension of 1.2372, the Classical era 1.0081, and the Romantic era 0.775. A t-test showed that the Baroque music's mean fractal dimension differed significantly from that of both the Classical and the Romantic eras. However, the t-test did not indicate a statistically significant difference between music of the Classical era and that of the Romantic era. A linear regression of each piece's year of publication or composition against its fractal dimension revealed a negative correlation with a slope significantly less than zero. This suggests a correlation between the era of music and its fractal dimension. Further studies should analyze additional pieces within each of these three musical eras as well as consider other eras than those examined here.



2011 - MA017 
Allen Yuan
Detroit Country Day School, Beverly Hills, MI

Parallel computing network architectures based on star graphs were shown to have desirable properties with respect to fault tolerance: in particular, it was shown that when relatively large numbers of vertices are deleted, the network stays mostly connected. However, star graphs suffer the problem of having a factorial number of vertices, resulting in large gaps in the possible vertex counts of star graphs. Thus, it is important to find an alternative to the star graph for well-structured interconnection networks. In this paper, we explore the connectivity of one such alternative, the (n,k)-star graph, a generalization of the star graph which does not run into the same problem of vertex count. We show that the (n,k)-star has properties similar to those of the star graph by giving both asymptotic and precise results relating vertex deletion in (n,k)-star graphs to the size of the disconnected components of the graph. In addition to solving the original problem, the results in the paper are shown to immediately resolve several other issues of connectivity for (n,k)-star graphs, establishing them as a viable structure for future parallel computing networks.

Awards won at the 2011 ISEF
First Award of $3,000 - Mathematical Sciences - Presented by Intel
Award of three $1,000 U.S. Savings Bonds, a certificate of achievement and a gold medallion. - United States Army
First Award of $3,000 - Air Force Research Laboratory on behalf of the United States Air Force
Each winning project will receive $2,000 in shares of UTC common stock. - United Technologies Corporation



2011 - MA018 
Goutham Kapa
Troy High School, Troy, MI

The iteration of the logarithmic function on a complex number is complicated to calculate by hand; however, computer experiments that simplify the logarithmic operations suggest that iterating the logarithm on any complex number eventually reaches a unique limit regardless of the initial value. My project gives a rigorous mathematical proof of why this convergence occurs. Manipulating common equations of complex analysis and applying the Mean Value Theorem on the complex plane shows not only that the limit exists, but also that repeatedly taking the logarithm of a number, real or complex, yields a sequence that converges to the limit at a constant rate. Furthermore, data collected with mathematical software supports the proof. Graphing the data points on the complex plane, we see that the iterates form a spiral with interesting properties of its own. With these new constants we can find new equilibria in areas that apply complex logarithms, such as quantum mechanics, computer science, and electrical engineering. Thus, this research gives a proof of a unique mathematical phenomenon with potential real-world applications.
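A quick numerical illustration of the convergence, using Python's principal-branch complex logarithm; the starting value is arbitrary.

```python
import cmath

z = 2 + 3j
for _ in range(200):
    z = cmath.log(z)       # iterate the principal-branch logarithm

print(z)                   # the limit z* with log(z*) = z*

# The limit is a fixed point of log, so it satisfies exp(z*) = z*.
print(abs(cmath.exp(z) - z))   # essentially zero
```

The fixed point lies near 0.318 + 1.337i; since the derivative 1/z* has modulus less than 1 there, the iteration spirals into it, matching the spiral seen when the iterates are graphed.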



2011 - MA019 
Wenyu Cao
Phillips Academy, Andover, MA

Expander graphs are graphs that exhibit the seemingly contradictory features of high connectivity and few edges. The construction of expander graphs is an area of research with several applications in error-correcting codes, derandomization, networking, complexity theory, and cryptography. I studied bipartite regular, or biregular, graphs. Biregular graphs are bipartite graphs with an additional symmetric property: any two vertices in the same vertex set have the same number of emanating edges. Specifically, I researched how biregular graphs behave as expander graphs and how biregular graphs with high expansion, or connectivity, can be constructed. My analysis relied on combinatorial graph theory to study walks taken on a biregular graph and algebraic graph theory to investigate the eigenvalues of biregular graphs. I also introduced probability theory to handle random biregular graphs. First, I proved an upper bound on the second eigenvalue of a random biregular graph. Since a known result states that a smaller second eigenvalue necessarily implies higher expansion, I was able to guarantee expansion for random biregular graphs with high probability through this bound. Additionally, I created a new model for random biregular graphs. This new model allows random biregular graphs to be generated computationally fast and provides mathematical structure for these random objects. With this model, I proved a better, and near optimal, bound on the second eigenvalue of a randomly generated biregular graph. Finally, I explicitly constructed a class of asymptotically optimal graphs. My results show that both random generation and my construction produce expander graphs with high expansion. 

Awards won at the 2011 ISEF
Third Award of $1,000 - Mathematical Sciences - Presented by Intel
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the United States Navy and Marine Corps



2011 - MA020 
Aaron Lawrence Zweig
Randolph High School, Randolph, NJ

Prime numbers are the subject of many conjectures, but due to their irregular distribution it is very difficult to prove theorems regarding primes. The Sieve of Eratosthenes isolates prime numbers by removing multiples of surviving elements. This process may be randomized, giving what is known as the Hawkins random sieve. Beginning with the set of natural numbers excluding one, the sieve takes the smallest value A, which for the first cycle is always 2, and uses its reciprocal 1/A as the probability with which each subsequent value is removed. Then the smallest remaining value greater than A is assigned to B, whose reciprocal 1/B is the new probability with which subsequent values are removed. This process is repeated indefinitely, and the non-deterministic set of remaining values is the set of Hawkins primes. Because Hawkins primes are generated in a manner similar to the way primes are isolated, they share many distributional properties with the primes. This project proves a probabilistic analogue of the Bunyakovsky conjecture: using the set of all possible sets of Hawkins primes, it is shown that the set of values produced by a non-constant polynomial will contain infinitely many Hawkins primes almost surely.
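The sieve as described above can be simulated directly; the snippet below draws one random sample of Hawkins primes up to a bound (the seed is arbitrary).

```python
import random

def hawkins(limit):
    """One sample of the Hawkins random sieve on 2..limit: repeatedly take
    the smallest untreated survivor a and remove each larger survivor
    independently with probability 1/a."""
    survivors = list(range(2, limit + 1))
    i = 0
    while i < len(survivors):
        p = 1 / survivors[i]
        survivors = survivors[:i + 1] + [
            v for v in survivors[i + 1:] if random.random() >= p]
        i += 1
    return survivors

random.seed(0)
sample = hawkins(10_000)
# Hawkins primes mimic the density ~ n / log n of the true primes.
print(len(sample))
```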

Awards won at the 2011 ISEF
Certificate of Honorable Mention - American Mathematical Society
Third Award of $1,000 - Mathematical Sciences - Presented by Intel



2011 - MA021 
Elise Victoria Meade
Sanford H. Calhoun High School, Merrick, NY

The Gompertz and logistic equations are growth functions used to track the growth of breast cancer tumors; each is expressed in terms of the maximum tumor volume, or tumor carrying capacity. In this study, two new equations derived from these functions were created to solve for two variables within the original equations, also expressed in terms of the carrying capacity. Extracted data measuring the initial and final volumes of 23 tumors were substituted into the newly derived equations, which were adjusted until the lowest variance was found for both beta (the relative growth rate) and alpha (the rate of decay of beta); the same procedure was applied to the variables of the logistic equation. The results were compiled graphically and the differences between the two curves determined. The points of inflection were found for both functions, illustrating a symmetrical difference between them. The logistic equation represented tumor growth rates more appropriately. These derived equations may be able to predict future tumor growth rates, and the results indicate that an intermediate volume reading would facilitate future studies.
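The inflection points that distinguish the two curves can be located numerically. With illustrative parameter values (not the study's fitted ones), the logistic curve inflects at half the carrying capacity K, while the Gompertz curve inflects at K/e, which is the "symmetrical difference" between them.

```python
import math

K = 100.0      # tumor carrying capacity (illustrative value)
beta = 0.5     # relative growth rate (illustrative value)

def logistic(t, v0=1.0):
    a = (K - v0) / v0
    return K / (1 + a * math.exp(-beta * t))

def gompertz(t, v0=1.0):
    a = math.log(K / v0)
    return K * math.exp(-a * math.exp(-beta * t))

def inflection_volume(f, t_max=40.0, steps=40_000):
    """Volume at the inflection point, found as the point of maximum
    growth rate via a finite-difference scan."""
    best_t, best_rate = 0.0, -1.0
    h = t_max / steps
    for i in range(steps):
        t = i * h
        rate = (f(t + h) - f(t)) / h
        if rate > best_rate:
            best_t, best_rate = t, rate
    return f(best_t)

print(inflection_volume(logistic))   # ≈ K/2 = 50
print(inflection_volume(gompertz))   # ≈ K/e ≈ 36.8
```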



2011 - MA023 
Cissy Chen
Fairview High School, Boulder, CO

Cyclic groups, an integral part of group theory, can be generated by some element a (called a generator) by raising a to different powers. The group of units of Z(n) (the set of congruence classes modulo n) contains all positive integers modulo n that are relatively prime to n. This project determines which n produce cyclic groups of units of Z(n). First, I generated the first 100 groups of units using Mathematica. Then, I tested for cyclicity by raising each of the elements to different powers and comparing the results to the original group of units. The results of these tests led to several conjectures characterizing the n that produce cyclic groups of units. In sum, the group of units is cyclic if and only if n is 2, 4, a power of an odd prime (p^k), or twice a power of an odd prime (2p^k). This theorem was proved in two parts: first, that the groups of units for n meeting these criteria are cyclic, and second, that the groups of units for n not meeting them (in which case n must be a multiple of 8 or have at least two distinct odd prime factors) are not cyclic. These results have applications in cryptography, primality testing, and molecular symmetry. The characterization of cyclic groups is especially significant in cryptography: many encryption systems, such as the Diffie-Hellman key protocol, are based on the properties of cyclic groups.
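A brute-force check of the characterization (Python rather than Mathematica), which tests cyclicity by searching for a generator directly:

```python
from math import gcd

def units(n):
    """The group of units of Z(n): residues coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def is_cyclic(n):
    """Check whether the group of units of Z(n) is cyclic by looking
    for an element whose powers cover the whole group."""
    U = units(n)
    for g in U:
        seen, x = {1}, 1
        for _ in range(len(U) - 1):
            x = x * g % n
            seen.add(x)
        if len(seen) == len(U):   # g generates the whole group
            return True
    return False

cyclic = [n for n in range(2, 50) if is_cyclic(n)]
print(cyclic)   # exactly 2, 4, p^k, and 2*p^k for odd primes p
```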



2011 - MA024 
Evgeny Vyacheslavovich Varganov
Lyceum #572, Center of Mathematical Education, Saint-Petersburg, RUSSIA

Maximal functions play an important role in understanding the differentiability properties of functions, the theory of singular integrals, and partial differential equations. The best-known maximal function was defined by Hardy and Littlewood. This work deals with the maximal functions M^#f and M^b f defined by Calderon. These functions make it possible to measure higher derivatives by means of maximal functions; for example, if M^b f is locally summable, then f is a Sobolev function. However, nothing was known about the relation between M^#f and M^b f. In particular, there was no answer to the question of whether M^b f is summable whenever M^#f is summable. The first task was to answer this question. The answer is negative in general, but for a wide class of functions described in this work it is positive. The second task was to study the general properties of these operators. A set of non-trivial properties was proved in this part, including estimates of their local behavior. This work has theoretical interest and provides an opportunity to better understand the nature of these maximal functions.



2011 - MA025 
Daniel David White
Somerset High School, Somerset, MA

The objective of this experiment was to establish a mathematical means of diagnosing premature cancer using fractals and the Hausdorff dimension. Fractals are objects that possess recursive patterns, and the Hausdorff dimension is used to describe these repetitive patterns. Cauliflower is one example of a naturally occurring fractal and is the foundation of this experiment, serving as a practical cell substitute. The diameters of each branch and sub-branch of a cauliflower form a predictable ratio, which is related to the Hausdorff dimension; if the ratio changes, then this dimension will also change. This is analogous to cells: just as a burned cauliflower has a different dimension or structure than a normal cauliflower, an atypical cell will have a different dimension than a typical cell. A cauliflower was dissected into its branches and sub-branches, the diameters were measured, and ratios were derived. For contrast, a cauliflower was burned and the ratio between its diameters was compared to that of a typical cauliflower. Finally, an original computer program was created to compare the ratios of images of a typical cell and an atypical cell. It was found that a cauliflower had a specific ratio between its branches and sub-branches: the ratio averaged 3 and, with the Hausdorff dimension computed, produced 2.3. When this process was repeated with burned cauliflower, the ratio averaged 4 and the dimension was approximately 1.8. As expected, the computer program output different dimensions for the typical and atypical cells. Therefore, mathematical diagnosis of premature cancer is possible.
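The link between a branching ratio and a fractal dimension can be sketched with the similarity-dimension formula D = log N / log r, for an object that splits into N self-similar pieces each scaled down by a factor r. The piece count below is hypothetical, chosen only to illustrate how a diameter ratio near 3 can yield a dimension near 2.3:

```python
from math import log

def similarity_dimension(num_pieces, ratio):
    """Dimension of a self-similar object that splits into `num_pieces`
    copies, each scaled down by `ratio`: D = log(N) / log(r)."""
    return log(num_pieces) / log(ratio)

# Hypothetical example: ratio 3 with 12 sub-branches gives D close to 2.26.
d = similarity_dimension(12, 3)
```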



2011 - MA026 
Kang-Ying Liu
Saint Andrew's Priory School, Honolulu, HI

In the American Mathematical Monthly, Clark Kimberling and Peter Yff state, “Let A' be the point in which the incircle is tangent to a circle that passes through vertices B and C, and define B' and C' cyclically. The lines AA', BB', CC' concur in X(479).” These unique circles are defined as simulated circumcircles. I first found the points that the simulated circumcircles pass through, and then obtained the equations of the simulated circumcircles based on these points. There are seven theorems, as follows. Theorem 1 restates and proves the X(479) concurrence. Theorem 2 shows that triangle ABC and the triangle bounded by the three common tangents of the incircle and the simulated circumcircles are perspective from X(57), which is also the point of concurrency of the lines determined by the excenters and the tangencies of the incircle and the simulated circumcircles. Theorem 3 states that points Ta, Tb, and Tc are the intersections of the lines passing through the vertices of triangle ABC and the tangencies of the incircle and the simulated circumcircles, and that lines ATa, BTb, and CTc are concurrent at X(55). Theorem 4 discovers a new triangle center that is the Gergonne point of the triangle bounded by the three common tangents of the incircle and the simulated circumcircles. Theorem 5 discovers a new triangle center that is the perspector of triangle ABC and the triangle formed by the radical axes of the excircles and the simulated circumcircles. Theorem 6 discovers another triangle center that is the perspector of triangle TaTbTc and the triangle determined by the tangencies of the incircle and the simulated circumcircles. Theorem 7 states collinearity properties among some well-known triangle centers and the new triangle centers. After assessment by Dr. Clark Kimberling, two of the new triangle centers were designated X(3598) and X(3599) and named the 1st and 2nd Liu Points in the Encyclopedia of Triangle Centers (ETC) in March 2011.

Awards won at the 2011 ISEF
Fourth Award of $500 - Mathematical Sciences - Presented by Intel



2011 - MA027 
Caroline Knight Snowden
Ponte Vedra High School, Ponte Vedra, FL

PROBLEM: The researcher aspires to develop a secure cipher based on variation of non-base-10 number systems. HYPOTHESIS: The researcher believes that, based on her acquired knowledge of ciphers and the related mathematics, the goal can feasibly be completed. PROCEDURE AND RESULTS: After extensive research, the researcher independently developed an original algorithm for the conversion of base-10 numbers into a known non-base-10 number system and back again. This algorithm provides the foundation for a cipher based on variation of non-base-10 number systems, thereby achieving the researcher’s proposed goal. CONCLUSION AND DISCUSSION: The cipher developed by the researcher offers a novel and secure method of encipherment. Unlike many of the monoalphabetic and polyalphabetic systems in use today, no method for breaking the number-system cipher has been developed aside from trial and error. The number-system cipher offers a computationally more manageable alternative to the RSA cipher: while large numbers heighten the security of both ciphers, the number-system cipher expands the message only slightly upon encryption, whereas the RSA cipher expands it exponentially. Therefore, the number-system cipher offers a secure, convenient method of encipherment to those lacking powerful computational equipment, in either the military or commercial fields. Additionally, the number-system cipher can be layered with the RSA cipher (parallel to the layering of an affine cipher) for added security.
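The base-conversion step underlying such a cipher can be sketched with the standard divmod method (a generic illustration of base-10 to base-b conversion and back, not the researcher's cipher itself):

```python
def to_base(n, b):
    """Convert a non-negative base-10 integer to a list of base-b digits,
    most significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, b)  # peel off the least significant digit
        digits.append(r)
    return digits[::-1]

def from_base(digits, b):
    """Convert a list of base-b digits back to a base-10 integer."""
    n = 0
    for d in digits:
        n = n * b + d
    return n
```

Round-tripping through any base b >= 2 recovers the original number, which is what makes the conversion usable as an encoding layer.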

Awards won at the 2011 ISEF
Second Award of $1,500 - Air Force Research Laboratory on behalf of the United States Air Force



2011 - MA028 
Damon Chizuru Kawamoto
Pacific Collegiate School, Santa Cruz, CA

Many challenges are faced when forecasting the abundance of salmon for the ocean-fishing harvest. I compared two methods of estimating the abundance of Sacramento River fall-run Chinook salmon. The standard method, the jack count method, uses the number of jacks (fish that have matured one to two years early) returning to freshwater to predict the Sacramento salmon abundance for the following year. I investigated an alternative method in which the Sacramento abundance is derived from a separate abundance estimate for Klamath Chinook and the relative occurrence (determined by genetic methods) of Sacramento and Klamath fish in the harvest. I first assessed the feasibility of this new approach using historical data from the Pacific Fishery Management Council (PFMC), including the Klamath harvest, Sacramento harvest, Sacramento Index, Klamath abundance, and the Sacramento jack count. Second, I applied this method to historical fishery composition data recently obtained using genetic analysis. I used the statistical modeling program R to assess the strength of the relationship between the overall abundance of Klamath and Sacramento Chinook salmon and their proportional representation in ocean fisheries across different seasons and coastal locations. Analysis of the PFMC data revealed little relationship between overall abundance and harvest proportion. Based on an r^2 measure, the jack count method is much more accurate than the alternative methods I investigated. This suggests that fishery composition estimates from genetic data are unlikely to provide substantial improvement in abundance estimates.



2011 - MA029 
Markus Robert Woltjer
Wilsonville High School, Wilsonville, OR

The purpose of this project is to invent a creative solution to division by zero, which can also be seen as defining the solution that already occurs with division by zero. Division by zero has, throughout history, been explored and abandoned by great mathematicians because of the extensive mathematical fallacies associated with the concept. The invention of this solution relies on the assumption of logical axioms and the investigation of their reliability by extending them. The resulting proofs also give an idea of the typical behavior of the solution when applied to various mathematical concepts. The axioms found were: (1) “All elements of the Nullinary Loop can result in operations of elements outside of the Nullinary Loop, but the element Xi is closed in the Nullinary Loop.” (2) “Any fraction with denominator zero is equal to Xi, and the differences in the numerator cause irrelevant divergences of Xi, similar to the concept of infinity-plus-one.” (3) “The truth Xi*0=1 does not come algebraically, but is required to uphold principles of inverse operations using Axiom 2.” These were used to prove theorems and eventually implications. According to these results, division by zero can be defined using “The Nullinary Loop”, consisting of zero, one, and the Nullinary Number (Xi). This solution gives rise to many mathematical extensions beyond the limiting “undefined”, especially those concerning projective geometry. The practical importance of this solution is yet to be discovered, but it is likely existent, considering the late application of non-Euclidean geometry in Albert Einstein’s theory of relativity.

Awards won at the 2011 ISEF
Fourth Award of $500 - Mathematical Sciences - Presented by Intel



2011 - MA030 
Ryan Thomas Baker
Hillcrest High School, Midvale, UT

Due to concerns regarding global warming, sustainability, and energy security, researchers are seeking to utilize the potential of renewable energy sources. One of the most promising renewable energy sources is wind power. Wind power generation depends on wind speed, which is highly uncertain. This presents a challenge in analyzing the effects of a hybrid power system with both gas-powered and wind-powered generators, because the equations used to model power systems require deterministic inputs. Researchers have recently proposed a method called Polynomial Chaos (PC) for propagating uncertainty from model inputs to model outputs. The purpose of this project is to apply PC methods to hybrid power systems to analyze the impacts of using an uncertain energy source in the electric power grid. Applying this method requires creating a new set of orthogonal polynomials, constructing a PC approximation for wind power, and then incorporating the wind power PC approximation into the power flow equations, which describe the steady-state condition of a power network. When compared to a Monte Carlo method, the new method is just as accurate but significantly faster. The results from this project can be used to better evaluate the reliability, safety, and cost efficiency of incorporating wind power into a power grid and to maximize the efficiency of using wind power as a major power source.
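The core PC idea, representing an uncertain quantity in an orthogonal polynomial basis and reading moments directly off the coefficients, can be sketched with a classical test case: expanding exp(X) for standard normal X in the (probabilists') Hermite basis, where the coefficients c_k = exp(1/2)/k! are known in closed form. This is a textbook illustration, not the project's wind-power expansion, which required constructing a new set of orthogonal polynomials:

```python
from math import exp, factorial

# PC expansion of Y = exp(X), X ~ N(0,1), in the Hermite basis He_k:
#   Y = sum_k c_k He_k(X)  with  c_k = exp(1/2) / k!
# Moments follow from orthogonality (E[He_j He_k] = k! * delta_jk):
#   E[Y] = c_0,   Var[Y] = sum_{k>=1} c_k^2 * k!
ORDER = 12  # truncation order of the expansion

coeffs = [exp(0.5) / factorial(k) for k in range(ORDER + 1)]
pc_mean = coeffs[0]
pc_var = sum(c * c * factorial(k) for k, c in enumerate(coeffs) if k >= 1)

# Exact lognormal moments for comparison
exact_mean = exp(0.5)
exact_var = exp(2) - exp(1)
```

Even a low truncation order reproduces the exact mean and variance to high accuracy, which is the efficiency advantage PC methods hold over sampling-based Monte Carlo.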

Awards won at the 2011 ISEF
Certificate of Honorable Mention - American Mathematical Society
Second Award of $1,500 - GE Energy



2011 - MA031 
Aishwarya Ananda Vardhana
Jesuit High School, Portland, OR

The Elliptic Curve Method (ECM) computes a large multiple of a point P on an elliptic curve modulo n to find a prime factor of n. The ECM has been implemented in three existing forms and one new form: Affine Weierstrass (AW): y^2 = x^3 + ax + b; Projective Weierstrass: y^2 z = x^3 - jxz^2 - kz^3; Projective Jacobian: y^2 = x^3 - kxz^4 - jz^6; and the newly created Projective Edwards curves (PE): (x^2 + y^2)z^2 = z^4 + kx^2 y^2. The four forms were combined to create the Multicurve I algorithm. ECM contains two phases, stage 1 and stage 2; stage 2 factors n if stage 1 fails. Stage 2 focuses strictly on large prime multiples of P within a range [B1, B2], where a large prime multiple of P may yield the point at infinity, which in turn yields the smallest prime factor of n. The 2011 work concentrates on the creation of Projective Edwards stage 2, because Edwards curves had thus far not been implemented in projective space. PE stage 2 has been combined with AW stage 2 to form the Multicurve II algorithm. A novel twist in PE stage 2 is the use of Montgomery reduction for fast modular arithmetic. Multicurve II has been implemented in C# as a parallel, multi-threaded program that runs on multiple cores. The objective was to create an algorithm with 10% faster factorization and a 5% higher success rate that can find factors 25 digits in length. The Multicurve II algorithm has a success rate 22% higher than AW stage 1 and time efficiency 34% greater than AW stage 1, resulting in a partially deterministic algorithm, compared to the probabilistic ECM, that drives towards polynomial time. It has also found a 33-digit factor. Current work concentrates on integrating Multicurve II into the General Number Field Sieve for faster factorization.
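For orientation, ECM stage 1 on the affine Weierstrass form can be sketched in a few lines: a failed modular inversion during point arithmetic exposes a factor of n through a gcd. This is an illustrative Python sketch of the general method, not the author's C# Multicurve implementation, and the curve parameters used below are hypothetical textbook choices:

```python
from math import gcd

def ecm_stage1(n, a, x0, y0, bound):
    """Lenstra ECM stage 1 on an affine Weierstrass curve
    y^2 = x^3 + a*x + b (mod n); b is fixed implicitly by the start
    point (x0, y0).  Multiplies P by 2, 3, ..., bound; a modular
    inversion that fails reveals a nontrivial factor of n via a gcd."""

    def add(P, Q):
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % n == 0:
            return None  # the point at infinity
        if P == Q:
            num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
        else:
            num, den = (y2 - y1) % n, (x2 - x1) % n
        g = gcd(den, n)
        if g > 1:
            raise ZeroDivisionError(g)  # inversion failed: g divides n
        s = num * pow(den, -1, n) % n
        x3 = (s * s - x1 - x2) % n
        return (x3, (s * (x1 - x3) - y1) % n)

    def mul(k, P):
        R = None  # accumulator, starting from the point at infinity
        while k and P is not None:
            if k & 1:
                R = P if R is None else add(R, P)
            k >>= 1
            if k:
                P = add(P, P)
        return R

    P = (x0 % n, y0 % n)
    try:
        for k in range(2, bound + 1):
            P = mul(k, P)
            if P is None:
                return None  # hit infinity modulo every factor: no split
    except ZeroDivisionError as e:
        g = e.args[0]
        return g if 1 < g < n else None
    return None
```

The stage-2 extension described in the abstract continues from a surviving stage-1 point, trying large primes in [B1, B2] instead of all small multipliers.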

Awards won at the 2011 ISEF
All expense paid trip to tour CERN - European Organization for Nuclear Research-CERN
Second Award of $1,500 - Mathematical Sciences - Presented by Intel
$ 3,000.00 will be the total of all awards. First award minimum of $1500.00 - Mu Alpha Theta, National High School and Two-Year College Mathematics Honor Society



2011 - MA032 
Pratheek Nagaraj
Marjory Stoneman Douglas High School, Parkland, FL

Computational algorithms based on the Monte Carlo method (MCM) are extensively used in scientific fields to solve complex problems ranging from climate modeling to financial risk analysis. MCMs are used for modeling phenomena involving a large number of variables with significant uncertainty, where deterministic techniques are impractical for computing an exact result. This project focused on the implementation, evaluation, accuracy, and feasibility of MCM multidimensional integration. The research goal was to identify a more computationally efficient and accurate MCM than the Plain MCM and to further analyze the various components of the optimized algorithm in order to develop a novel hybrid MCM algorithm. This study analyzed algorithms in C++ and Mathematica for the Plain, Quasi, Adaptive, MISER, and VEGAS MCMs and tested them through extensive simulations of multidimensional integration scenarios involving oscillatory functions, exponential functions, and localized variance, spanning various sampling sizes and totaling 1200 trials. Compared to the Plain MCM, the VEGAS MCM was 89.68% more accurate and the Quasi MCM was 98.50% more accurate (excluding Integral #5, which highly skewed the results); both the VEGAS and Quasi MCMs were also significantly more precise. Overall, the VEGAS MCM outperformed the Plain MCM in accuracy and efficiency, demonstrating an optimal blend of importance sampling with recursion, quasi-random numbers, and probability distributions. Ultimately, this research serves as the foundation for my development of a novel hybrid MCM, using the VEGAS MCM as the base model, which will dynamically adapt to complex scenarios where current methods fail to compute a reliable solution.
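The contrast between plain and quasi-random sampling can be sketched with a two-dimensional toy integral (a stdlib-only illustration using a Halton low-discrepancy sequence; the project's Quasi MCM implementation and its test integrals are not reproduced here):

```python
import random

def halton(i, base):
    """i-th element of the Halton low-discrepancy sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def plain_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^2."""
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

def quasi_mc(f, n):
    """Quasi-Monte Carlo estimate using Halton points in bases 2 and 3."""
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n

# Integrate f(x, y) = x * y over the unit square; the exact value is 1/4.
f = lambda x, y: x * y
rng = random.Random(0)
plain = plain_mc(f, 2000, rng)
quasi = quasi_mc(f, 2000)
```

At equal sample counts, the low-discrepancy points typically land much closer to the exact answer than pseudo-random sampling, which is the effect the Quasi MCM exploits.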

Awards won at the 2011 ISEF
All expense paid trip to tour CERN - European Organization for Nuclear Research-CERN
Third Award of $1,000 - Mathematical Sciences - Presented by Intel



2011 - MA033 
Casey Ann Gibson
Nettleton High School, Jonesboro, AR

The purpose of this project was to calculate probabilities and expected values (EVs: the average amount of money won or lost per dollar bet) for specific hands in the card game “blackjack”, and to use the results to better inform potential blackjack players of all possible outcomes they face. The hypothesis was that potential hands (pairs) of cards dealt to the player and dealer (in addition to another card that may or may not be taken) would have a wide range of probabilities and expected values because of the colossal number of possible outcomes. First the probabilities of the player’s possible hands were calculated, followed by the dealer’s possible hands. The product of the two was then either inserted directly into the EV formula, µ = Σ x·P(x), or (if either the player or dealer took another card) multiplied by the probability of winning, tying, or losing on the second draw. This final product was then used in the EV formula via an extensive EV table. The results supported the hypothesis: EVs ranged from -0.000120356 cents per dollar bet to +0.116231108 cents per dollar bet. The data showed that the closer a player’s original sum was to twenty-one, the higher the EV. When the player was dealt seventeen or more points and the dealer was dealt either seventeen or more points or fewer than seventeen points, the player’s EV was positive. However, if the player was dealt fewer than seventeen points and the dealer was dealt seventeen or more points, the EV was negative. In conclusion, the decisions made by the player in blackjack can make the game profitable.
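The EV formula µ = Σ x·P(x) can be illustrated with a toy payoff distribution (the probabilities below are hypothetical roulette-style numbers chosen for illustration, not the project's blackjack tables):

```python
from fractions import Fraction

def expected_value(outcomes):
    """µ = Σ x·P(x) over (payoff, probability) pairs.
    The probabilities must sum to 1."""
    assert sum(p for _, p in outcomes) == 1
    return sum(x * p for x, p in outcomes)

# Hypothetical even-money bet: win +1 with probability 9/19,
# lose -1 with probability 10/19 (roulette-style odds).
bet = [(Fraction(1), Fraction(9, 19)), (Fraction(-1), Fraction(10, 19))]
ev = expected_value(bet)  # -1/19 per dollar bet
```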



2011 - MA034 
Anjana Ram
Texas Academy of Mathematics & Science, Denton, TX

Influenza is one of the leading causes of hospitalization and death in the United States. Because of vaccine shortages and inefficient administration, a novel vaccination regime is required that can manage epidemic spread. This study establishes the impact of strategic vaccination on the spread of influenza using a rule-based cellular automata model (RBCAM). The simulations establish dependencies on the initial infection spread and the vaccination of the population. The model is parameterized with factors of disease spread such as population infection rates (PIR), infection period (PIP), infection threshold (PIT), and re-infection threshold (PRT). The compartmentalized model uses a cellular automaton on a discrete time and space domain, where each state is determined by a single value or a range of values. A random walk simulation method is incorporated to capture the dynamic nature of infection probability among neighboring cells. MATLAB was used to implement the RBCAM. Results indicate that the values of the input parameters significantly influence disease spread. The initial infection distribution affects the spread of influenza, and selecting strategic vaccination areas aided in preventing further spread by identifying specific cells as hot spots for immunization. The peak infection decreased at a reasonably consistent rate with increasing vaccination, showing its favorable effect on the rate of disease spread. The data compare well with the 2006-2010 influenza seasons, and the error is within +/- 21%. The model can effectively be extended to consider other interrelated parameters such as population distribution, dual viral interactions, and inter-cell mixing.
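The flavor of such a rule-based cellular automaton can be sketched with a minimal SIR-style grid model (all parameters and rules here are hypothetical simplifications for illustration, not the paper's RBCAM or its MATLAB implementation):

```python
import random

def step(grid, timers, p, period, rng):
    """One synchronous update: an S cell becomes I with probability p per
    infected neighbor (8-neighborhood, wrapping edges); an I cell becomes
    R after `period` steps."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] == 'S':
                infected = sum(
                    grid[(i + di) % n][(j + dj) % n] == 'I'
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
                if any(rng.random() < p for _ in range(infected)):
                    new[i][j], timers[(i, j)] = 'I', 0
            elif grid[i][j] == 'I':
                timers[(i, j)] += 1
                if timers[(i, j)] >= period:
                    new[i][j] = 'R'
    return new

rng = random.Random(1)
N = 20
grid = [['S'] * N for _ in range(N)]
grid[N // 2][N // 2] = 'I'          # one seed infection at the center
timers = {(N // 2, N // 2): 0}
for _ in range(30):
    grid = step(grid, timers, p=0.2, period=3, rng=rng)
recovered = sum(row.count('R') for row in grid)
```

Strategic vaccination in such a model amounts to pre-assigning selected cells to the recovered/immune state before the simulation starts.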



2011 - MA036 
Vasily Sergeevich Bolbachan
Advanced Science and Education Center - A.N.Kolmogorov School, Moscow, Moscow Region, RUSSIA

We obtain two sequences of rational numbers whose ratio converges to the Euler-Gompertz constant. Denote by <f(x)> the integral of f(x)exp(-x) from 0 to infinity; recall that the Euler-Gompertz constant is <ln(x+1)>. Main idea: let P_n(x) be a polynomial with integer coefficients. It is easy to prove that <P_n(x)ln(x+1)> = a_n + <ln(x+1)>b_n for some integers a_n, b_n. Hence if <P_n(x)ln(x+1)>/b_n converges to zero, then a_n/b_n converges to -<ln(x+1)>. Main Theorem: let u be a positive real number. There exist polynomials P_n(x) (they are explicitly given in the paper) such that <P_n(x)ln(xu+1)> tends to u as n tends to infinity. The proof of the Main Theorem is elementary. We also present some conjectures.
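The constant itself is easy to approximate numerically; a small sketch using Simpson's rule on a truncated interval (an editorial illustration, not part of the paper):

```python
from math import exp, log

def euler_gompertz(upper=60.0, steps=6000):
    """Approximate the Euler-Gompertz constant
    G = integral from 0 to infinity of ln(x+1) * exp(-x) dx
    with Simpson's rule on [0, upper]; the exp(-x) tail beyond
    `upper` is negligible.  `steps` must be even."""
    h = upper / steps
    def f(x):
        return log(x + 1) * exp(-x)
    total = f(0.0) + f(upper)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3
```

The value is approximately 0.596347, which any rational-approximation scheme such as the one in the abstract must converge to.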

Awards won at the 2011 ISEF
Third Award of $250 - American Mathematical Society
Fourth Award of $500 - Mathematical Sciences - Presented by Intel



2011 - MA037 
Eithar Mohammedamin Abdulhalim
49 Secondary, Makkah, Central, SAUDI ARABIA

The nature of mathematics demands specific mental abilities, which makes the subject hard for many students. The goal of this project was to create a creative method for solving arithmetic operations. A field study was carried out in which two school visits were conducted and the created method was explained to students of the 4th and 5th grades. Comparing the students' results under the traditional methods and the created method, the created method was more accurate: it cut solution time roughly in half, and its probability of success was 95%, versus 75% for the traditional method. The students were then tested again using the creative method, recording the time required to solve each operation, the extent to which they comprehended the solution under the new method, and the percentage of correct answers. I recorded the results obtained by the female students and summarized them in graphs, accompanied by percentages for the students' comprehension. Three variables were considered during the course of the experiment. The striking results in the female students' understanding of the new method, their assimilation of it, and the speed of its application led me to make the following recommendations: (1) the method can be applied to male and female students of the second and third grades of the elementary stage; (2) male and female secondary school students can use this method in assessment and ability tests for the rapid solution of problems; and (3) the method can be explained to lower-performing male and female students, as it is easily understood, assimilated, and applied.



2011 - MA038 
Vahid Fazel-Rezai
Red River High School, Grand Forks, ND

When the diagonals of a convex pentagon are drawn, a new, smaller convex pentagon is formed inside the first. This interesting pattern led to a more general question: are there more patterns like this with interesting properties? A three-step algorithm was used to describe the process: first, draw all line segments between any two starting points; then, take all points lying on at least two line segments as the new set of points; finally, remove from the new set all points in the starting set. For convex polygons this rule reduces to the intersection of diagonals. To make exploration easier, a MATLAB program was developed to plot the intersection points for larger polygons. The resulting images consisted of patterns of concentric circles and points, resembling flowers. On the other hand, not all starting sets of points reduce the algorithm to diagonal intersections; such arrangements include collinear points and continuous curves. Here, the algorithm could be applied multiple times to form a sequence. Some inquiry in this area found starting sets that produced infinitely large and infinitely small sequences, along with other interesting shapes. As a result of this project, more questions and possibilities were opened to inquiry.
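The three-step rule can be sketched directly (a Python illustration of the algorithm as described; the original work used MATLAB). For a regular pentagon, one application returns exactly the five vertices of the inner pentagon:

```python
from itertools import combinations
from math import cos, sin, pi

def intersect(p, q, r, s, eps=1e-9):
    """Intersection point of segments pq and rs, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p, q, r, s
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(d) < eps:
        return None  # parallel (collinear overlaps ignored in this sketch)
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if -eps <= t <= 1 + eps and -eps <= u <= 1 + eps:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def close(p, q, tol=1e-6):
    return abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol

def next_generation(points):
    """One application of the three-step rule: draw all segments between
    point pairs, keep points lying on at least two segments, then remove
    the starting points."""
    segs = list(combinations(points, 2))
    result = []
    for (a, b), (c, d) in combinations(segs, 2):
        pt = intersect(a, b, c, d)
        if pt is None:
            continue
        if any(close(pt, s) for s in points):
            continue  # step three: drop the starting points
        if not any(close(pt, kept) for kept in result):
            result.append(pt)
    return result

pentagon = [(cos(2 * pi * k / 5), sin(2 * pi * k / 5)) for k in range(5)]
inner = next_generation(pentagon)
```

Iterating `next_generation` on other starting sets reproduces the sequences the abstract describes.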

Awards won at the 2011 ISEF
Fourth Award of $500 - Mathematical Sciences - Presented by Intel



2011 - MA039 
Benjamin Jerome Kraft
Liberty High School, Bethlehem, PA

Let U_n be the group of n by n unitary matrices. To select a random unitary matrix, we use the Haar measure. Much study has been devoted to the eigenvalues of random unitary matrices, but little is known about the entries of random unitary matrices and their powers. In this work, we use eigenvalues to understand the entries of random unitary matrices and their powers. We characterize the exact distribution of the top-left entry in the case where the matrix is raised to a power at least n, and give some relationships for smaller powers. These results may have applications in quantum mechanics, telephone encryption, and statistical analysis, in addition to helping illuminate the field of random matrix theory. 

Awards won at the 2011 ISEF
Third Award of $250 - American Mathematical Society



2011 - MA040 
Alan Ziyuan Yang
Upper Dublin High School, Fort Washington, PA

This study was conducted to prove Archimedes’ fifth proposition from his Book of Lemmas using Euclidean geometry. The hypothesis is: given proposition five, the proposition can be proven using Euclidean geometry, algebra, and deduction. In his book, Archimedes defines an arbelos to be the region bounded by three mutually tangent semicircles such that the diameter of one of the semicircles contains the diameters of the other two. The main goal of this proof was to justify that if a segment is constructed within an arbelos such that it contains a point of tangency and is perpendicular to the line containing the centers of the semicircles, then the two circles inscribed within the arbelos on either side of this segment have equal area. Based on the original proposition, a diagram was constructed. By using algebraic variables and the Pythagorean theorem, the radii of the two circles were shown to be equal; hence, the proposition was proven. The hypothesis was accepted, and further studies can be conducted to investigate similar Archimedean propositions.
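In modern notation (an editorial addition, not part of the original abstract), the equal-area claim of Proposition 5 is the congruence of the arbelos "twin circles":

```latex
% Arbelos with inner semicircle radii r_1, r_2 and outer radius r_1 + r_2.
% Each circle inscribed between the arbelos and the perpendicular segment
% at the point of tangency has the same radius:
\rho_1 = \rho_2 = \frac{r_1 r_2}{r_1 + r_2},
% and equal radii imply equal areas, which is Archimedes' Proposition 5.
```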



2011 - MA041 
Nikita Maksimovich Nikita
High- School #1, Ekibastuz, Ekibastuz, KAZAKHSTAN

Purpose: to evaluate the difference between the length of the product of fast additive chains and the length of a fast additive chain when they end in the same number. Hypothesis: for some integer k, there are fast additive chains a, b, and c such that ||ab|| - ||c|| = k and a * b = c. Stages of the research: (1) search of the literature on the subject; (2) making a plan and posing questions; (3) finding solutions to the questions; (4) writing up the work. Methods: research, analysis, and search. Scientific novelty and degree of independence: a formula was found for calculating the length of additive chains ending in certain numbers, a table of fast chains and their lengths was compiled for numbers of certain forms, and two theorems and a lemma were formulated and proved. Practical use of the results: the results of the work can be used for performing calculations, preparing algorithms and programs, and reducing the number of operations when evaluating certain cumbersome formulas and large baselines.



2011 - MA042 
Georgiy Vladimirovich Kolyshev
Stuyvesant High School, New York, NY

In this paper, we consider the twenty-one-year-old problems jointly posed by Alexander Soifer and Paul Erdos in a letter included in Soifer's book _How Does One Cut a Triangle?_. This book deals with several natural generalizations of standard high school problems; in their most general form, these problems are extremely difficult and do not yet have solutions. Suppose there is a closed, convex figure with points inside it. If we look at the triangles formed by three of these points and choose the one with the smallest area, we can obtain a value alpha by dividing the area of this triangle by the area of the figure. One of the problems posed by Soifer and Erdos asks for the smallest and largest values of alpha such that this alpha is obtainable with five points or six points, not obtainable with four points, and always obtainable -- irrespective of the position of the points or the shape of the figure -- with seven points. We obtain the lowest value by applying results from Paolo Gronchi and Marco Longinetti's work and then improving their results (as applied to the problem) by extending methods developed by Dmytro Karabash to solve another of the problems posed in the book. Additionally, we prove some asymptotic results for sequences related to the problem and provide a conjecture, with reasoning, for the largest value of alpha.

Awards won at the 2011 ISEF
Certificate of Honorable Mention - American Mathematical Society



2011 - MA043 
Tyler Gordon Toepke-Floyd
Wishek Public School, Wishek, ND

The purpose of my project was to see if the shape of a parachute affected the speed of its descent. To do this I kept the surface area of each parachute the same: 144 square inches. The shapes of the parachutes were an equilateral triangle, a square, a rectangle, a pentagon, a hexagon, and a circle. My procedure was as follows: 1. I used a plastic material and drew my parachutes according to mathematical formulas. 2. I cut them out and attached one twelve-inch piece of string to each vertex (eight strings for the circle, in a double crossing pattern). 3. I attached one washer to each parachute. 4. I recorded each fall from a height of 138 inches, repeated 10 times. The results of my project were that the triangle fell the fastest, followed by the rectangle, then the square, followed by the hexagon; the second slowest was the pentagon and the slowest was the circle. In conclusion, I found that my hypothesis, that the parachutes would fall at the same rates, was only partly correct. They did not all fall at the same rates, and there was a difference of about a second in average time between the fastest and slowest parachutes. Overall the parachutes were fairly close in their speeds and times, but not as close as I thought they would be. 
Differences in shape do matter.



2011 - MA044 
Katherine Leigh Cordwell
Manzano High School, Albuquerque, NM

Field theory is the study of fields, which are basic mathematical structures that possess certain properties. Using the ideas of extension fields and automorphisms, we explore three different applications of field theory. One application is rationalizing complicated denominators, such as 1/[2^(1/2) + 5^(1/3) + 7^(1/5)] or 1/(2z^2 + z^5 + 7), where z is a root of an equation such as x^9 + 5x^8 + 21x^5 + 49x + 368 = 0. A straightforward approach uses the Extended Euclidean Algorithm for polynomials; we discuss a significantly simpler method, based on consecutive extension fields, that works in many cases. Factoring integers is a second application, of great importance to public-key cryptography. Integers factor differently in extension fields than they do in Z, and we can utilize the extension-field factors to find integer divisors. Another application is the construction of regular polygons. We consider n-gons, where n is a prime of the form 2^(2^k) + 1. Using automorphisms of extension fields, we explicitly find formulas for cos(2*pi/5), cos(2*pi/17), and cos(2*pi/257), which account for three of the five known polygons of this form.
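The straightforward Extended-Euclidean approach mentioned above can be sketched on a toy example: rationalizing 1/(z + 1) where z = 2^(1/2) has minimal polynomial x^2 - 2. This is a minimal sketch over Q (the paper's simpler extension-field method is not reproduced here):

```python
from fractions import Fraction

# Polynomials over Q as coefficient lists, lowest degree first.
def pdiv(a, b):
    """Polynomial long division: returns (quotient, remainder)."""
    a = a[:]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        c = a[-1] / b[-1]
        q[shift] = c
        for i, bc in enumerate(b):
            a[shift + i] -= c * bc
        while len(a) > 1 and a[-1] == 0:
            a.pop()  # trim the cancelled leading term
    return q, a

def psub(a, b):
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a))
    b = b + [Fraction(0)] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def pmul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def inverse_mod(d, m):
    """Extended Euclid: u with u*d = 1 (mod m), for m irreducible over Q."""
    r0, r1 = m, d
    s0, s1 = [Fraction(0)], [Fraction(1)]
    while any(r1):
        q, r = pdiv(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, psub(s0, pmul(q, s1))
    # r0 is now a nonzero constant gcd; rescale so s0*d = 1 (mod m)
    return [x / r0[0] for x in s0]

# Rationalize 1/(z + 1) for z = sqrt(2), minimal polynomial x^2 - 2:
m = [Fraction(-2), Fraction(0), Fraction(1)]   # x^2 - 2
d = [Fraction(1), Fraction(1)]                 # z + 1
inv = inverse_mod(d, m)                        # z - 1, since (z+1)(z-1) = 1
```

The returned coefficients express the inverse as a polynomial in z, i.e. with a rational denominator.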



2011 - MA045 
Jonathan F Li
St. Margaret's Episcopal School, San Juan Capistrano, CA

I analyze the effects of cell migration, compression, and contact inhibition on tumor growth using the Cellular Potts Model (CPM). Cell proliferation, motility, cell-to-cell adhesion, contact inhibition, and cell compressibility are incorporated in the model. I find that increased motility has a direct effect on the growth rate of a tumor. Cell lines with greater motility overcome the attractive forces of cell-to-cell adhesion and have more space to proliferate. I analyze the interplay between cell motility and compressibility within the CPM, and find that more motile cells are generally smaller than their more sedentary counterparts. I obtain an explicit inverse-relationship between the cell compressibility and motility parameters. In addition, I find contact inhibition in the CPM penalizes clumped cells by halting their growth, giving motile cells a greater advantage. The model also shows that cells that are less responsive to contact inhibition are more invasive. Together, these results raise questions on the effectiveness of some chemotherapy treatments, which, by targeting cells that undergo mitosis, may actually select for these equally or more invasive motile cells. I have begun testing my model with in vitro data obtained from a lab source and my model is reflective of the data. 

Awards won at the 2011 ISEF
Third Award of $250 - American Mathematical Society
Second Award of $500 U.S. savings bond - AVASC-Ashtavadhani Vidwan Ambati Subbaraya Chetty Foundation
Award of $3,000 - China Association for Science and Technology (CAST)



2011 - MA046 
Ilias Valer'evich Iliasov
Lyceum #572, Center of Mathematical Education, Saint-Petersburg, Saint-Petersburg, RUSSIA

In this paper we investigate some properties of the subvariety lattices of varieties generated by left zero semigroups of some positive degree. It is well known that such semigroups generate a small permutational variety which embeds in a natural way into the universal small permutational combinatorial semigroup variety. Our results give a way to investigate small varieties in general. We also obtained a complete description of the subvariety interval between the varieties generated by the left zero semigroups of degree k and k+1, respectively. A modern mathematician proved that there exists a limit combinatorial finitely generated variety; in light of this result, our description has an application in the theory of infinitely based varieties.



2011 - MA047 
Constantine Borisovich Tsvetkov
Lyceum #572 Center of Mathematical Education, Saint-Petersburg, Saint-Petersburg, RUSSIA

Let R be a commutative ring with unit, let F be a reduced irreducible root system, and let P be a subset of the simple roots of F. Let a and b be a pair of simple roots. One says that the elementary subgroup E(F, R) of the Chevalley group G(F, R) admits the parabolic (or Dennis-Vaserstein type) factorization for the roots a and b if any element from the group E(F, R) can be written as a product puq, where the elements p and q lie in the corresponding parabolic subgroups EP(a) and EP(b), and the element u lies in the intersection of the opposite unipotent radicals. For classical root systems Fn, with a and b being the opposite endnodes of the Dynkin diagram of Fn, the existence of the factorization was established in the late 1970s under some assumptions on the stable rank of R. This particular case of factorization implies injective stability for the sequences of K1 functors modeled on classical groups. It seems natural to ask what conditions one needs to require of R in the general situation. It turns out that the case of a classical root system can be handled by similar methods, and the result can be formulated in terms of the stable rank of R and the distance between roots on the Dynkin diagram. The purpose of my project is to investigate the case of an exceptional group for some specific choices of the pair a, b. I find sufficient conditions on the ring R which ensure parabolic factorizations for these cases.



2011 - MA048 
Manosij G Dastidar
South Point High School, Kolkata, INDIA

This project presents an extension of Stanley's theorem related to partitions of positive integers. Stanley's theorem states that the total number of distinct members in the partitions of a positive integer n is equal to the total number of 1's that occur in the partitions of n. My generalization states a similar relation between the total number of distinct members in the partitions of n and the total number of 2's or 3's or any general integer k that occur in the partitions of n and the subsequent integers. More precisely, the total number of distinct members in the partitions of a positive integer n is equal to the total number of k's that occur in the partitions of n, n+1, . . . , n + k - 1. To the best of my knowledge, this result is a new extension of Stanley's theorem. This is applied to obtain an array of interesting corollaries, including alternate proofs and analogues of some well-known results in the theory of partitions. I extend Ramanujan's results on congruence behavior of the `number of partitions' function p(n) to get analogous results for the `number of occurrences of an element k in partitions of n'. Further, I provide an alternate proof of Ramanujan's congruence identities related to integer partitions. Finally, I have proposed a general structure for adding points to existing partitions in the Ferrers diagram, and have formulated and proved a combinatorial result for adding k points to partitions of n and counting the number of resulting partitions. 
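The stated generalization is easy to check numerically. The sketch below (helper names are mine, not the paper's) verifies it by brute force for a small case:

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples of parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def total_distinct_parts(n):
    # Sum, over all partitions of n, of the number of distinct parts.
    return sum(len(set(p)) for p in partitions(n))

def total_occurrences(k, n):
    # Total number of times the part k appears across all partitions of n.
    return sum(p.count(k) for p in partitions(n))

n, k = 6, 3
lhs = total_distinct_parts(n)                                   # distinct members in partitions of n
rhs = sum(total_occurrences(k, m) for m in range(n, n + k))     # k's in partitions of n, ..., n+k-1
print(lhs, rhs)  # 19 19 -- the two counts agree, as the theorem asserts
```

Setting k = 1 recovers Stanley's original theorem as a special case.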

Awards won at the 2011 ISEF
First Award of $1,000 - American Mathematical Society
Second Award of $1,500 - Mathematical Sciences - Presented by Intel



2011 - MA049 
Aishwarya Saluja
Hedgesville High School, Hedgesville, WV

Purpose: The Mandelbrot set is considered one of the most fascinating mathematical images. It is a map of the complex plane that shows information about the Julia sets for each complex number "c". The mathematical basis for the Mandelbrot set is f(z) = z^2 + c. The project explores various Julia sets generated by altering the value of "z" with c held constant. Methods: The mathematical basis of the Mandelbrot set was first studied. Approximately 200 variables were altered in its unique quadratic equation and the results were computed both manually and utilizing computer software. The different variables were manipulated using generic fractal generator software from the NLVM website. The generated fractal images represent various Julia sets. Analysis: A mathematical basis of the Mandelbrot set is described. For each c of f(z) = z^2 + c there exists a particular Julia set Jc. A total of approximately 200 Julia set images were created using the fractal generator, some of which are displayed. Since the mathematical basis of each image was also manually generated, the results are tabulated illustrating the number of iterations and the coordinates in the complex plane. Conclusions: The Julia sets are represented by Fc(z) = z^2 + c, where c is constant and z is variable. The Julia sets can be classified into two distinct types: connected and totally disconnected (Cantor dust). The fusion of all possible connected Julia sets will form the composite image of the Mandelbrot set.
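The escape-time iteration behind such fractal generators can be sketched as follows; the parameter choices are illustrative, not those used in the project:

```python
def in_julia(z, c, max_iter=100, bound=2.0):
    """Escape-time test: iterate f(z) = z*z + c. Points whose orbit stays
    bounded approximate membership in the filled Julia set J_c."""
    for _ in range(max_iter):
        if abs(z) > bound:
            return False  # orbit escaped: z is outside J_c
        z = z * z + c
    return True

c = -1.0 + 0.0j             # c = -1 yields a connected Julia set
print(in_julia(0j, c))      # orbit of 0 cycles 0 -> -1 -> 0: True
print(in_julia(2 + 2j, c))  # far points escape immediately: False
```

Sampling `in_julia` over a grid of z values, with c held constant, produces exactly the kind of Julia set image the abstract describes; varying c instead (with z starting at 0) produces the Mandelbrot set.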



2011 - MA050 
Anirudh Prabhu
West Lafayette Junior-Senior High School, West Lafayette, IN

A number that is the sum of its proper divisors is called a perfect number. The study of perfect numbers began more than 2300 years ago with Euclid’s work (circa 300 B.C.). To date, 47 perfect numbers have been discovered. All the known perfect numbers are even. It is not known if any odd perfect numbers exist. Many analytic upper bounds for odd perfect numbers have been published in the literature. However, no nontrivial analytic lower bound has been reported to date. The hypothesis of this project was that there exists a nontrivial analytic lower bound for odd perfect numbers. The project sought to prove the hypothesis by deriving such a bound. First, a lower bound for the product of the distinct prime factors of an odd perfect number was derived using Euler's Theorem, certain previous results of Suryanarayana and Hagis, the binomial theorem and the QM-AM-GM-HM inequalities. Using the above bound, Euler's Theorem, Rosser's Theorem, and Dusart's Theorem the first nontrivial analytic lower bound for odd perfect numbers was established. Some improvements of the new lower bound are also discussed. The analytic lower bound established in this project complements known analytic upper bounds for odd perfect numbers and suggests a promising approach to this old problem. If an odd perfect number does not exist, then one way to prove its nonexistence is to derive a lower bound that exceeds a known upper bound. The work reported in this project represents a first step in that direction. 
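The definition in the opening sentence can be checked directly; a brute-force search (not part of the project, which is analytic) recovers the first few perfect numbers:

```python
def proper_divisor_sum(n):
    """Sum of the proper divisors of n (all divisors except n itself)."""
    total = 1 if n > 1 else 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:   # avoid double-counting a square divisor
                total += n // d
        d += 1
    return total

def is_perfect(n):
    return n > 1 and proper_divisor_sum(n) == n

print([n for n in range(2, 10000) if is_perfect(n)])  # [6, 28, 496, 8128]
```

All four of these are even, consistent with the abstract's remark that every known perfect number is even; the next one, 33550336, lies far beyond this search range.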

Awards won at the 2011 ISEF
Third Award of $250 - American Mathematical Society
Third Award of $1,000 - Mathematical Sciences - Presented by Intel



2011 - MA051 
Simanta Gautam
Albemarle High School, Charlottesville, VA

The initial purpose of this project was to understand the structure of Carousel Primes in various bases. As I found new conjectures and proofs, I tried to tackle an unsolved problem in this topic: how many Carousel Primes exist? To create data, I generated a couple of these primes by hand but realized that it would be too time consuming. Therefore, I wrote two computer programs in Java providing me with more than enough data. The first program finds the first 50 Carousel Primes in bases 2 to 1000. The second program finds the first 200 Carousel Primes in a specific base. Through all the data generated by these programs, I was able to search for patterns and understand the structure of Carousel Primes not just in base 10, but in hundreds of bases. Some of the conjectures I created were very surprising and I went on to prove/disprove those using my background in number theory. One of the proofs I wrote solved a small case of the unsolved problem mentioned earlier. I was able to prove (using Fermat's Little Theorem) that any base that is a perfect square does not have any nontrivial Carousel Primes. This research is new because I didn't limit myself to base 10 when searching for Carousel Primes. A great application for Carousel Primes could potentially be in cryptography. Through understanding the patterns of these primes in various bases, it is possible to create a new cipher which could improve modern cryptography. 

Awards won at the 2011 ISEF
Second Award of $1,500 - Mathematical Sciences - Presented by Intel



2011 - MA052 
Rebecca Chen
Park Tudor School, Indianapolis, IN

Unitary representations of the braid group are sought for their potential usefulness in topological quantum computation, in which two-dimensional quasi-particles called anyons are braided to process quantum information. Useful braid group representations are difficult to find because current methods yield representations that either are not unitary or have the wrong dimensions. My project explores a new method for finding braid group representations: solving a generalized version of the famous Yang-Baxter equation. The equation I worked with is the (2, 3, 1)-generalized Yang-Baxter equation (gYBE) for an 8x8 matrix R. After making several assumptions about the form of R to ensure unitary solutions, I systematically solved the (2, 3, 1)-gYBE and classified all solutions of the form R = X direct sum Y, where the 4x4 matrices X, Y, are (2x2)-diagonally unitary, into three families, each described by a single parameter. My project goes on to show that my solutions to the gYBE lead to unitary braid group representations, thus generating braiding quantum gates. The realization of such braiding quantum gates in physical systems would lead to useful topological quantum computers and thereby bring benefits to our society. Another significance of my work lies in my promising new method, which may be repeated by other researchers to find more braid group representations that generate braiding quantum gates. 
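For reference, with R an 8x8 matrix acting on V tensor V tensor V where dim V = 2, the (2, 3, 1)-gYBE takes the following braid-like form as commonly written in the literature (the paper's conventions may differ):

```latex
(R \otimes I_2)(I_2 \otimes R)(R \otimes I_2)
\;=\;
(I_2 \otimes R)(R \otimes I_2)(I_2 \otimes R)
```

Both sides act on the 16-dimensional space V^{\otimes 4}; the ordinary Yang-Baxter equation is the special case in which R acts on V^{\otimes 2}.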

Awards won at the 2011 ISEF
Certificate of Honorable Mention - American Mathematical Society
Fourth Award of $500 - Mathematical Sciences - Presented by Intel



2011 - MA301 
Vivian H. Nguyen, Mynah Holloway, 
Starr's Mill High School, Fayetteville, GA

The human mind constantly pursues enigmas to solve, yet the most challenging ones become stale over time because of the persistent development of the human race's intellectual ability. As a result, Sudoku and crossword puzzles became too effortless to solve, so our hypothesis is that if we, the experimenters, observed different magic squares and theories, then we would be able to develop basic algorithms that facilitate the creation of magic squares of all sizes. This project explored the fundamentals of a magic square and established basic rules for the production of magic squares. First, we obtained the method for creating a simple four-by-four square and the generally accepted theories for constructing magic squares with odd dimensions and squares with dimensions that are multiples of four. Next, based on observations from these steps, we worked out a previously undeveloped algorithm for magic squares with even dimensions that are not multiples of four. From the final three compiled algorithms, we composed magic squares of successive sizes up to a square with fifty-three numbers in each row and column, and each completed magic square went into one of the three groups. Our results yielded squares that are all magic squares, because each is an arrangement of the numbers from one to n^2, where n is the number of elements in each row, column, or main diagonal, and each row, column, and main diagonal adds to n(n^2+1)/2. In conclusion, our experiment resulted in the creation of algorithms for all magic squares, and now dexterous people will be able to tackle a new, challenging puzzle.
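One of the generally accepted constructions for odd dimensions is the Siamese method; a minimal sketch (the authors' own algorithms may differ), together with a check of the magic constant n(n^2+1)/2:

```python
def siamese_magic_square(n):
    """Siamese (De la Loubere) method for odd n: place 1 in the middle of
    the top row, then repeatedly move up and to the right (wrapping around
    the edges); if the target cell is occupied, drop down one cell instead."""
    assert n % 2 == 1, "the Siamese method only works for odd n"
    square = [[0] * n for _ in range(n)]
    r, c = 0, n // 2
    for k in range(1, n * n + 1):
        square[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n   # up and to the right, wrapping
        if square[nr][nc]:
            nr, nc = (r + 1) % n, c          # occupied: drop down instead
        r, c = nr, nc
    return square

def is_magic(sq):
    """Check that every row, column, and main diagonal sums to n(n^2+1)/2."""
    n = len(sq)
    target = n * (n * n + 1) // 2
    rows = all(sum(row) == target for row in sq)
    cols = all(sum(sq[i][j] for i in range(n)) == target for j in range(n))
    diag = sum(sq[i][i] for i in range(n)) == target
    anti = sum(sq[i][n - 1 - i] for i in range(n)) == target
    return rows and cols and diag and anti

print(is_magic(siamese_magic_square(5)))  # True
```

The multiples-of-four and remaining even cases each need a different construction, mirroring the three groups of algorithms the abstract describes.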



2011 - MA302 
Aliaksandr Nekrashevich, Mikhail Khursevich, 
Middle School 41, Minsk, BELARUS

History of the problem: Initially this problem arose as an analogue of a tomography question; more precisely, it asked for the minimal number of lines with given projections from which the source points can be uniquely reconstructed. The solution was found, and it was entirely combinatorial. As a result there arose a combinatorial question about reconstruction using (k;d)-pencils. Importance of this project: This project is connected to computerized tomography and to the compression of vector graphics data. On the other hand, our solution is connected to graph theory, particularly to hypergraphs, coverings, and matroid theory.



2011 - MA303 
Ufuk Kanat, Furkan Samil Basaran, 
Private Samanyolu Science High School, Ankara, TURKEY

We started our research project with a question: if we draw a line passing through a vertex of a triangle and intersecting the opposite side, or if we draw the diagonals of a quadrilateral, when are the Euler lines of the new triangles parallel or concurrent? From this question we found nice results. We expressed the tangent of the angle between the Euler line of a triangle and a side of that triangle with trigonometric variables. We found that the Euler lines of the new triangles are parallel if and only if the angle between the side of the triangle and the line is equal to 60 or 120 degrees. We studied the concurrence of the Euler lines of the new triangles and the first triangle. We found that the Euler lines of the four triangles formed by the diagonals and sides of a quadrilateral are parallel if and only if the angle between the diagonals is equal to 60 or 120 degrees. Let the centers of the parallelograms whose vertices are the orthocenters, circumcenters, and centroids of those four triangles be H, O, and G respectively. Then H, G, and O are collinear, and the distance between H and G is twice the distance between O and G, as on the Euler line of a triangle. Let the angle between the diagonals be 60 or 120 degrees; then this line is parallel to the Euler lines of those four triangles. Finally, we found that the Euler lines of the four triangles formed by the diagonals and sides of a cyclic quadrilateral are parallel if and only if the angle between the diagonals is equal to 60 or 120 degrees; otherwise the four Euler lines are concurrent.



2011 - MA304 
Raul Eduardo Marquez-Nunez, Giovanni Santiago Feliciano, 
Centro Residencial de Oportunidades Educativas de Mayaguez, Mayaguez, PUERTO RICO

It has been established in number theory, which studies the properties of numbers, that natural numbers larger than five (5) may have one or more friendly numbers. It has been speculated that numbers have an infinite number of friendly numbers; however, this has not yet been proven. One way to prove or disprove this would be to observe whether there is a pattern of repeating friendly numbers in given intervals. If such a pattern does exist, it would seem to indicate that, as long as another interval of the established length exists, there would be another set amount of friendly numbers. To test this we created a program that finds friendly numbers and verified whether such a pattern exists. Upon finalization of the tests, no such pattern was found for any of the test numbers. However, it was found that the maximum abundancy a number can have increases linearly as the number increases. Because only a limited number of test numbers were used, it may be possible that a number, unbeknownst to us at the moment, may exhibit such a pattern and thereby indicate that it indeed has infinitely many friendly numbers. Further research on the matter would be needed by others who are better equipped and experienced to conduct a more exhaustive investigation.
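Friendly numbers are usually defined as distinct numbers sharing the same abundancy index sigma(n)/n, where sigma sums all divisors. A sketch of the kind of check such a program performs, assuming this standard definition:

```python
from fractions import Fraction

def abundancy(n):
    """Abundancy index sigma(n)/n, as an exact rational number."""
    sigma = sum(d for d in range(1, n + 1) if n % d == 0)
    return Fraction(sigma, n)

def are_friendly(a, b):
    """Two distinct numbers are friendly when their abundancy indices match."""
    return a != b and abundancy(a) == abundancy(b)

print(are_friendly(6, 28))   # True: both are perfect, so both have abundancy 2
print(are_friendly(10, 12))  # False: 9/5 versus 7/3
```

Using exact `Fraction` arithmetic rather than floating point matters here, since two abundancy indices must match exactly, not merely to rounding error.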



2011 - MA305 
Fedor Ivanovich Uvarychev, Anatoliy Alexandrovich Zaikovskiy, 
Lyceum #572, Center of Mathematical Education, Saint-Petersburg, RUSSIA

For finite algebras, as well as for algebras in general, the primary cohomology theory is the theory of Hochschild cohomology. A direct sum of groups of Hochschild cohomology is usually considered together with an action, called the cup product, which defines the structure of a graded algebra on this direct sum: the Hochschild cohomology algebra. The Hochschild cohomology algebra has been computed for many specific examples of algebras, and the technology of these calculations is being gradually developed. In this paper we compute the zeroth and first Hochschild cohomology of some classes of algebras; the zeroth is calculated together with its commutative algebra structure, and the first together with its Lie algebra structure. We also developed a new technique to calculate the center and the Lie algebra of derivations for path algebras of a quiver with relations. This technique was then applied to certain classes of algebras, namely path algebras of a quiver without relations, symmetric Nakayama algebras, and symmetric special biserial algebras (SSB-algebras). SSB-algebras have been actively investigated for derived equivalence over the last few years. A modern mathematician found many necessary and sufficient conditions for two SSB-algebras to be derived-equivalent, but the problem is still not solved. Since the Hochschild cohomology algebra, regarded as a Gerstenhaber algebra, is an invariant with respect to derived equivalence, the computations carried out here can be useful for a complete answer to the question of the derived equivalence of SSB-algebras.



2011 - MA306 
Jose Daniel Sales, Perez Leandro Matias, 
Escuela Provincial de Comercio No 7, San Pedro de Jujuy, Jujuy, ARGENTINA

There exist numbers with infinitely many non-periodic decimal digits that belong to the set of real numbers but cannot be expressed as fractions: the irrational numbers. This work aims at finding a new way of interpreting those irrational numbers through an inductive method, in order to provide an answer related to the usage of this numerical set in real life. The method involved applying geometrical concepts so as to return to the historical roots of that numerical set. In that way, geometrical representations were made to visualize the lengths that algebraic irrational numbers represent, and a ruler was created from the representations made. The diagonals of some rectangles can be measured using our measuring instrument: for instance, squares, and rectangles whose width is half their length. Conclusively, this project has let us give meaning to algebraic irrational numbers and show their practical usage, thanks to the interpretation of their operability through the measurements done with our ruler, thus proving the viability of our research in this field.
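The measurable diagonals mentioned follow directly from the Pythagorean theorem; for a square of side a and a rectangle of sides a and 2a:

```latex
d_{\text{square}} = \sqrt{a^2 + a^2} = a\sqrt{2},
\qquad
d_{\text{rect}} = \sqrt{a^2 + (2a)^2} = a\sqrt{5}
```

These are exactly the algebraic irrational lengths that such a ruler can mark off geometrically even though they have no exact decimal representation.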



2011 - MA307 
Cintia Paola Branca, Yanina Gisela Mansilla, 
Escuela de Ensenanza Media N0 318 "Antartida Argentina", Diaz, Santa Fe, ARGENTINA

Local youth culture is what motivated this work. When students listen to the song Simetría de Moebius (Catupecu Machu, 2009), they feel the need to know what "Möbius" refers to. This gives them the opportunity to deal with geometry topics in depth and to enter the field of morphology. The basic elements of topology are introduced in the math classes, mainly the model known as the "Möbius strip" (Möbius, 1790 – 1868). It is there that the questions arise: Is it possible to find morphological patterns in works of art? Is there any relationship between the Möbius strip and certain aesthetic shapes? It is known that natural shapes have functions that can be interpreted as answers to nature's requirements. For instance, there is a logic of growth, development, and evolution that results in coherent shapes appearing in nature. So certain geometric patterns, apart from their aesthetic function, can be found in some aspects of art. In this project, the Möbius model is characterized, and a comparative analysis of its properties in relation to different expressive productions is carried out, in order to investigate the possible existence of an analogous topological pattern in some artistic expression. Through the analysis of different works of art (from different branches) and the collected data, the students can identify certain morphological aspects similar to those of the Möbius model used as the starting point of the work.


2011 - MA308 
Jawaher Saleh A. Enani, Hend Khalid I Bimuhaya, 
Riyadh Najd, Riyadh, Central, SAUDI ARABIA

Introduction: The difficulty of finding the remainder when dividing a polynomial by a second-degree polynomial using long division is a problem we decided to solve efficiently. Purpose: To prove a theorem that makes the method of finding the remainder easier and less time-consuming for students. Procedures: We investigated why a person cannot find the remainder modulo a second-degree polynomial without long division. To that end, we ran trials in which f(x) and g(x) were controlled, followed by the value of r(x). Results: With this theorem we were able to find the remainder modulo a second-degree polynomial without using long division. Conclusion: The study found that our hypothesis holds, and that students can apply the method easily, quickly, and very efficiently.
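The abstract does not state the authors' theorem, but one standard shortcut of this kind interpolates the remainder from two evaluations when the quadratic divisor has distinct roots; a sketch under that assumption:

```python
# If g(x) = (x - a)(x - b) with a != b, the remainder r(x) = m*x + q of
# f(x) divided by g(x) is linear and satisfies r(a) = f(a), r(b) = f(b),
# so two evaluations replace the whole long division.

def quadratic_remainder(f, a, b):
    """Remainder of f (a Python callable) modulo (x - a)(x - b), a != b.
    Returns (m, q) with r(x) = m*x + q, found by linear interpolation."""
    fa, fb = f(a), f(b)
    m = (fa - fb) / (a - b)
    q = fa - m * a
    return m, q

f = lambda x: x**4 + 3*x**2 + 1           # example polynomial (illustrative)
m, q = quadratic_remainder(f, 1.0, 2.0)   # divisor (x-1)(x-2) = x^2 - 3x + 2
print(m, q)  # 24.0 -19.0, i.e. r(x) = 24x - 19
```

Long division confirms the example: x^4 + 3x^2 + 1 = (x^2 - 3x + 2)(x^2 + 3x + 10) + 24x - 19.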


2011 - MA309 
Clement Martinez, Arnaud Vespa, Marine Auriol
College Albert Camus, Miramas, FRANCE

Basing our work on graph theory and modern (non-Euclidean) geometry, we contemplated real-world applications. After considering other possibilities, among which were applications to neuroscience, we decided to focus on two topics: 1. Studying optimization problems in maritime transport as required by a local shipping company, taking into account the relevance of these tools for calculating and optimizing connection schedules for ships. 2. Creating the necessary tools to allow us to beat the around-the-world sailing record on a genuine ocean race simulator challenge. The choice of the optimal path in this race resembles the choice of the shortest route in a network. In order to remain relevant and to understand the subject in all its dimensions: - we combined mathematical theories (such as spherical, tropical, and discrete geometries); - we worked, as much as possible, with specialists from different areas: research, business, and sailing (skipper, navigator, naval architect, weather specialist); - we sailed on a racing multihull to measure the best adjustments in order to go fast. For the maritime route optimization problem, we created software enabling us to find the shortest route within a network of routes. For the sailing problem, we first studied databases of the trimaran's performance for each wind condition and meteorological statistics, and gathered information during an outing on a genuine racing trimaran. Using these data and mathematics, we devised a method allowing us to beat the record in 48 days and 44 minutes. 
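Shortest-route computations of the kind described are typically done with Dijkstra's algorithm; a minimal sketch, in which the port names and distances are purely illustrative and not taken from the project:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm on a dict-of-dicts graph with non-negative
    edge weights; returns (total_cost, path)."""
    queue = [(0, start, [start])]   # priority queue ordered by cost so far
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []         # goal unreachable

# Hypothetical shipping network: distances in nautical miles (illustrative).
ports = {
    "Marseille": {"Genoa": 190, "Barcelona": 185},
    "Genoa": {"Naples": 350},
    "Barcelona": {"Naples": 480},
    "Naples": {},
}
print(shortest_route(ports, "Marseille", "Naples"))
# (540, ['Marseille', 'Genoa', 'Naples'])
```

The sailing problem adds a twist the plain algorithm does not capture: edge costs depend on wind and boat performance, so the weights themselves come from the performance databases mentioned above.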

Awards won at the 2011 ISEF
Fourth Award of $500 - Mathematical Sciences - Presented by Intel


2011 - MA310 
Tole Ernstuly Tolegenov, Adepkhan Yeren, 
High School in Physics and Math, Almaty, KAZAKHSTAN

In this project, minimal rings are studied. Some classical results are presented. It is proved that any minimal Lie ring contains an infinite locally finite subring.