# Summary

Approximating a dataset using a polynomial equation is useful when conducting engineering calculations as it allows results to be quickly updated when inputs change without the need for manual lookup of the dataset. The most common method to generate a polynomial equation from a given data set is the least squares method. This article demonstrates how to generate a polynomial curve fit using the least squares method.

# Definitions

| Symbol | Definition |
|--------|------------|
| \(a\) | Polynomial coefficient |
| \(k\) | The degree of the polynomial |
| \(N\) | The number of points to be regressed |
| \(\epsilon\) | Error |

# Introduction

When presented with a data set it is often desirable to express the relationship between variables in the form of an equation. The most common method of representation is a \(k^{th}\) order polynomial which takes the form:

\[ \displaystyle y = a_kx^k + \cdots + a_1x + a_0 + \epsilon \]

The above equation is often referred to as the general polynomial regression model, with the error term \( \epsilon \) serving as a reminder that the polynomial will typically provide an estimate rather than the exact value of the dataset for any given value of \(x\).

## Polynomial Order

The maximum order of the polynomial is dictated by the number of data points used to generate it. For a set of \(N\) data points, the maximum order of the polynomial is \(k = N-1\). However, it is generally best practice to use the lowest order that accurately represents your dataset, as higher order polynomials, while passing directly through each data point, can exhibit erratic behaviour between these points due to a phenomenon known as polynomial wiggle (demonstrated below).
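Polynomial wiggle is easy to reproduce numerically. The sketch below (assuming numpy is available) fits the maximum-order polynomial through equally spaced samples of the Runge function \(1/(1+25x^2)\), a standard demonstration function that is not part of this article's worked example:

```python
import numpy as np

# 11 equally spaced samples of the Runge function 1/(1 + 25x^2)
x = np.linspace(-1.0, 1.0, 11)
y = 1.0 / (1.0 + 25.0 * x**2)

# The maximum-order (k = N - 1 = 10) polynomial passes through every point...
p = np.poly1d(np.polyfit(x, y, deg=len(x) - 1))

# ...yet oscillates severely between the outer data points
xx = np.linspace(-1.0, 1.0, 1001)
wiggle = np.max(np.abs(p(xx) - 1.0 / (1.0 + 25.0 * xx**2)))
print(wiggle)  # large: the fit strays well outside the data's own range
```

Even though the residual at every data point is essentially zero, the deviation between points is large, which is exactly why a low-order fit is usually preferred.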

# Estimating the Polynomial Coefficients

The general polynomial regression model can be developed using the method of least squares. The method of least squares aims to minimise the variance between the values estimated from the polynomial and the expected values from the dataset.

The coefficients of the polynomial regression model \( \left( a_k, a_{k-1}, \cdots, a_1 \right) \) may be determined by solving the following system of linear equations.

\[ \displaystyle \begin{bmatrix} N & \sum_{i=1}^{N} x_i & \cdots & \sum_{i=1}^{N} x_i^k \\ \sum_{i=1}^{N} x_i & \sum_{i=1}^{N} x_i^2 & \cdots & \sum_{i=1}^{N} x_i^{k+1} \\ \vdots & \vdots & \vdots & \vdots \\ \sum_{i=1}^{N} x_i^k & \sum_{i=1}^{N} x_i^{k+1} & \cdots & \sum_{i=1}^{N} x_i^{2k} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_k \\ \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{N} y_i \\ \sum_{i=1}^{N} x_i y_i \\ \vdots \\ \sum_{i=1}^{N} x_i^k y_i \\ \end{bmatrix} \]

This system of equations is derived from the polynomial residual function (a derivation may be seen in this Wolfram MathWorld article) and is presented in the standard form \( Ma = b \), which can be solved using a variety of methods.
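Assembling this system requires only power sums of the data. As a sketch (assuming numpy; the helper name `normal_equations` is my own), using the dataset from the worked example later in this article:

```python
import numpy as np

def normal_equations(x, y, k):
    """Build the least-squares system M a = b for a degree-k polynomial."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    powers = np.arange(k + 1)
    # M[r][c] = sum of x_i^(r+c); the top-left entry is N since x^0 = 1
    M = np.array([[np.sum(x**(r + c)) for c in powers] for r in powers])
    # b[r] = sum of x_i^r * y_i
    b = np.array([np.sum(x**r * y) for r in powers])
    return M, b

x = [-3, -2, -1, -0.2, 1, 3]
y = [0.9, 0.8, 0.4, 0.2, 0.1, 0]
M, b = normal_equations(x, y, 2)
```

For this dataset the routine reproduces the \(3\times3\) system used in the example below.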

# Solving for the Polynomial Coefficients

## Cramer's Rule

Cramer's rule allows you to solve the linear system of equations to find the regression coefficients using the determinants of the square matrix \(M\). Each of the coefficients \(a_i\) may be determined using the following equation:

\[ \displaystyle a_i = \frac{\det(M_i)}{\det(M)} \]

where \(M_i\) is the matrix \(M\) with the \(i^{th}\) column replaced by the column vector \(b\) (recalling that the system is presented in the form \( Ma = b \)). For example, \(M_0\) would be calculated as follows:

\[ \displaystyle M_0 = \begin{bmatrix} \sum_{i=1}^{N} y_i & \sum_{i=1}^{N} x_i & \cdots & \sum_{i=1}^{N} x_i^k \\ \sum_{i=1}^{N} x_i y_i & \sum_{i=1}^{N} x_i^2 & \cdots & \sum_{i=1}^{N} x_i^{k+1} \\ \vdots & \vdots & \vdots & \vdots \\ \sum_{i=1}^{N} x_i^k y_i & \sum_{i=1}^{N} x_i^{k+1} & \cdots & \sum_{i=1}^{N} x_i^{2k} \end{bmatrix} \]

Cramer's rule is easily performed by hand or implemented as a program, making it well suited to solving small linear systems. When solving such systems by hand it is often faster than row reduction or elimination of variables, depending on the size of the system and the experience of the practitioner.
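As a sketch of such a program (assuming numpy for the determinants; the helper name `cramer_solve` is my own), applied to the \(3\times3\) system from the worked example below:

```python
import numpy as np

def cramer_solve(M, b):
    """Solve M a = b by Cramer's rule: a_i = det(M_i) / det(M)."""
    M, b = np.asarray(M, float), np.asarray(b, float)
    det_M = np.linalg.det(M)
    a = np.empty(len(b))
    for i in range(len(b)):
        M_i = M.copy()
        M_i[:, i] = b          # replace the i-th column with b
        a[i] = np.linalg.det(M_i) / det_M
    return a

M = np.array([[6, -2.2, 24.04],
              [-2.2, 24.04, -8.008],
              [24.04, -8.008, 180.0016]])
b = np.array([2.4, -4.64, 11.808])
a = cramer_solve(M, b)  # ≈ [0.2291, -0.1628, 0.0278]
```

Note that for large systems Cramer's rule is far slower than elimination-based methods; it is attractive here because the normal-equation systems for low-order fits are small.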

### Example Application of Cramer's Rule

The following example demonstrates how to develop a \(2^{nd}\) order polynomial curve fit for the following dataset:

| \(x\) | -3 | -2 | -1 | -0.2 | 1 | 3 |
|-------|----|----|----|------|---|---|
| \(y\) | 0.9 | 0.8 | 0.4 | 0.2 | 0.1 | 0 |

This dataset has \(N=6\) points and for a \(2^{nd}\) order polynomial \( k = 2 \). As shown in the previous section, application of the least squares method provides the following linear system:

\[ \displaystyle \begin{bmatrix} 6 & -2.2 & 24.04 \\ -2.2 & 24.04 & -8.008 \\ 24.04 & -8.008 & 180.0016 \\ \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \end{bmatrix} = \begin{bmatrix} 2.4 \\ -4.64 \\ 11.808 \\ \end{bmatrix} \]

Using Cramer's rule to solve the system, we generate each of the matrices \( M_i \) by taking the matrix \(M\) and substituting the column vector \(b\) into the \(i^{th}\) column. For example, \(M_0\) and \(M_1\) would be:

\[ \displaystyle M_0 = \begin{bmatrix} 2.4 & -2.2 & 24.04 \\ -4.64 & 24.04 & -8.008 \\ 11.808 & -8.008 & 180.0016 \\ \end{bmatrix} \]

\[ \displaystyle M_1 = \begin{bmatrix} 6 & 2.4 & 24.04 \\ -2.2 & -4.64 & -8.008 \\ 24.04 & 11.808 & 180.0016 \\ \end{bmatrix} \]

Once these matrices have been formed, the determinant of each of the square matrices \(M\), \(M_0\), \(M_1\), and \(M_2\) can be calculated and utilised to determine the polynomial coefficients as follows:

\[ \displaystyle \begin{aligned} a_0 &= \frac{\det(M_0)}{\det(M)} = \frac{2671.20}{11661.27} = 0.2291 \\ a_1 &= \frac{\det(M_1)}{\det(M)} = \frac{-1898.46}{11661.27} = -0.1628 \\ a_2 &= \frac{\det(M_2)}{\det(M)} = \frac{323.76}{11661.27} = 0.0278 \end{aligned} \]

The polynomial regression of the dataset may now be formulated using these coefficients.

\[ \displaystyle y = 0.0278x^2 - 0.1628x + 0.2291 \]

Which provides an adequate fit of the data as shown in the figure below.
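The hand calculation can be cross-checked against a library routine. Assuming numpy is available, `np.polyfit` solves the same least-squares problem directly from the raw data:

```python
import numpy as np

x = [-3, -2, -1, -0.2, 1, 3]
y = [0.9, 0.8, 0.4, 0.2, 0.1, 0]

# polyfit returns coefficients highest power first: [a2, a1, a0]
a2, a1, a0 = np.polyfit(x, y, deg=2)
print(round(a2, 4), round(a1, 4), round(a0, 4))  # 0.0278 -0.1628 0.2291
```

The rounded coefficients match those obtained via Cramer's rule above.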

## LU Decomposition

LU decomposition is a method of solving linear systems; it is a modified form of Gaussian elimination that is particularly well suited to algorithmic treatment. Full coverage of LU decomposition is outside the scope of this article, but further information may be found in the references section below.
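To give a flavour of the approach, the following minimal sketch (my own illustrative code, assuming numpy) performs a Doolittle LU decomposition without pivoting, which is adequate for the well-behaved normal-equation matrices in this article, and solves the worked example's system by forward and back substitution:

```python
import numpy as np

def lu_solve(M, b):
    """Solve M a = b via Doolittle LU decomposition (no pivoting)."""
    M, b = np.asarray(M, float), np.asarray(b, float)
    n = len(b)
    L, U = np.eye(n), M.copy()
    # Gaussian elimination, recording the multipliers in L
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    # forward substitution: L c = b
    c = np.zeros(n)
    for i in range(n):
        c[i] = b[i] - L[i, :i] @ c[:i]
    # back substitution: U a = c
    a = np.zeros(n)
    for i in reversed(range(n)):
        a[i] = (c[i] - U[i, i + 1:] @ a[i + 1:]) / U[i, i]
    return a

M = np.array([[6, -2.2, 24.04],
              [-2.2, 24.04, -8.008],
              [24.04, -8.008, 180.0016]])
b = np.array([2.4, -4.64, 11.808])
a = lu_solve(M, b)  # ≈ [0.2291, -0.1628, 0.0278]
```

Production code would use a library factorisation with partial pivoting rather than this bare sketch.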

## Software methods

There are several software packages that are capable of either solving the linear system to determine the polynomial coefficients or performing regression analysis directly on the dataset to develop a suitable polynomial equation:

- MATLAB - A numerical computing environment commonly used in engineering.
- Mathematica - A computational environment used in many industries.
- GNU Octave - An open source alternative to MATLAB.
- R - An open source statistical computing package.
- Excel - A general purpose spreadsheet program and many engineers' primary tool.
- Numbers - An alternative spreadsheet program for Apple Mac and iOS platforms.

It should be noted that, with the exception of Excel and Numbers, these packages can have a steep learning curve; for infrequent use it is often more efficient to use Excel, Numbers or, if solving manually, Cramer's rule.
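As an illustration of the programmatic route, numpy's `Polynomial.fit` (broadly analogous to the `polyfit` commands in MATLAB and GNU Octave) performs the entire regression on the article's example dataset in a single call:

```python
import numpy as np
from numpy.polynomial import Polynomial

x = [-3, -2, -1, -0.2, 1, 3]
y = [0.9, 0.8, 0.4, 0.2, 0.1, 0]

# Fit a degree-2 polynomial; .convert() maps the internally scaled
# result back to the plain power basis, ordered [a0, a1, a2]
fit = Polynomial.fit(x, y, deg=2).convert()
print(fit.coef)  # ≈ [0.2291, -0.1628, 0.0278]
```

The coefficients agree with the worked example, confirming that these packages are solving the same least-squares problem described above.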