The Maclaurin Series and Machine Learning

In the very early days of computers (say the 1950s and 1960s), most people who entered the new field of “computer science” came from a background in either mathematics or electrical engineering. There’s always been a strong connection between mathematics and computer science. More specifically, with regard to machine learning, every now and then I’ll have a brief micro-discussion with colleagues about what math topics people who are new to ML should know.

For me, the answer isn’t large categories like “vector algebra”. I prefer to think in terms of very small, discrete topics. Understanding the Maclaurin Series is one mini-topic I think every person who works with ML should know.

The Maclaurin Series is a special case of the Taylor Series. Both are series expansions that can approximate a mathematical function. Put another way, in some ML scenarios, working directly with some function f(x) is very difficult, but working with the Maclaurin approximation to f(x), call the approximation P(x), is easier. The Maclaurin and Taylor series expansions pop up in several areas of ML, notably in the numerical optimization algorithms used for ML training.

The Maclaurin approximation has a beautifully symmetric definition that uses the function’s first, second, third, and so on, derivatives evaluated at x = 0, along with the factorial function.

I prefer the full form of the approximation equation, but the simplified form, which uses the facts that 0! = 1! = 1, x^0 = 1, and x^1 = x, is more common. Both forms are shown below.
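In standard notation, the full form and the simplified form are:

```latex
% Full form of the Maclaurin series approximation to f(x):
\[
P(x) = \frac{f(0)}{0!} x^0 + \frac{f'(0)}{1!} x^1 + \frac{f''(0)}{2!} x^2 + \cdots
     = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n
\]
% Simplified form, using 0! = 1! = 1, x^0 = 1, and x^1 = x:
\[
P(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \cdots
\]
```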

Here’s an example of approximating f(x) = (x+1)^(-1/2) using a second-order Maclaurin series:
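Working through the arithmetic, the first and second derivatives of f, evaluated at x = 0, supply the coefficients of the approximation:

```latex
% Derivatives of f(x) = (x+1)^(-1/2):
\[
f'(x) = -\frac{1}{2} (x+1)^{-3/2}, \qquad
f''(x) = \frac{3}{4} (x+1)^{-5/2}
\]
% Evaluated at x = 0: f(0) = 1, f'(0) = -1/2, f''(0) = 3/4, so
\[
P(x) = 1 - \frac{1}{2} x + \frac{3}{8} x^2
\]
```

For example, at x = 0.2 the true value is f(0.2) ≈ 0.9129, while the approximation gives P(0.2) = 0.9150.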

The approximation could be improved by adding more terms to the series expansion. And the approximation is only good for values close to x = 0. The Taylor Series generalizes the Maclaurin Series by using derivatives evaluated at an arbitrary value x = c rather than at x = 0.
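To make the “only good near x = 0” point concrete, here is a minimal Python sketch that compares f(x) to the second-order approximation P(x) = 1 - x/2 + (3/8)x^2 derived above, at points near and far from zero:

```python
# Compare f(x) = (x+1)^(-1/2) to its second-order Maclaurin
# approximation P(x) = 1 - x/2 + (3/8)x^2. The error should be
# tiny near x = 0 and grow as x moves away from 0.

def f(x):
    return (x + 1.0) ** -0.5

def p(x):
    return 1.0 - 0.5 * x + 0.375 * x * x

for x in [0.05, 0.1, 0.5, 1.0, 2.0]:
    print(f"x = {x:4.2f}   f(x) = {f(x):.6f}   "
          f"P(x) = {p(x):.6f}   error = {abs(f(x) - p(x)):.6f}")
```

Near zero the error is on the order of 10^-4; at x = 2.0 the approximation is off by almost a full unit.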

Is the explanation I’ve provided enough for engineers who are learning ML? It depends, but I’d say the information in this blog post is a minimal, but valuable, amount of knowledge about the Maclaurin Series approximation. Note that I did not explain where the Maclaurin Series approximation equation comes from. The derivation is very, very beautiful and would be more or less required knowledge for a math major, but it’s probably a bit of overkill for most engineers who work with ML.
