Bayesian Optimization

Marko Youngson
May 27, 2021

The world is full of black-box functions: protein design, hyperparameter tuning, supply-line optimization, and many more. What if I told you there was a way to optimize them all while still being extremely efficient? Enter Bayesian Optimization (BO), an excellent algorithm for optimizing black-box functions that are expensive to evaluate.

A black-box function is a function where you can only see the inputs and the corresponding outputs. You do not know how the inside works, meaning you have no access to the derivatives that you would need for an algorithm like gradient descent.

Before we start digging into the main steps behind BO, I would like to give a little bit of intuition on why this approach is useful and how it is beautiful in nature.

We want to optimize a function that we don't know; that is, we want to find the x values that will give us the best y value.

That is our goal, and it seems pretty impossible. How can we optimize something that we don't even know?

Generally, a good place to start is either finding out what the black box function is or finding a good approximation.

We are doing the latter in BO. Since we only care about the point with the maximum corresponding y value, it would be inefficient to approximate the entire function accurately. It is far more efficient to approximate the parts of the function where the maximum is most likely to be.

This leaves us with one of the most important concepts in BO: the exploration/exploitation trade-off. Exploration is how much we want to discover and approximate new regions of the function. In contrast, exploitation is when we use already-known information about the function to find the optimal point. We have to balance these carefully because we are trying to optimize the function as cheaply as possible.

Once we approximate our function, we then sample a specific point on that function that we think will give us the most information about the function while still being close to the maximum (Exploration/exploitation again). We will then evaluate this point in the real world, and then after repeating this process multiple times, we can arrive at the global maximum.

Outline Of Article

To make a more concrete list of the steps of BO, I will present the following:

1. Building a surrogate model of the data

1.1. What is a distribution

1.2. Gaussian Processes and Joint Distributions

1.3. The Covariance Matrix

1.4. Covariance Kernels

1.5. Back To Gaussian Processes

1.6. The Prior and Posterior Distributions

1.7. Gaussian Process Regression

1.8. Uncertainty Estimates

1.9. Conclusion of Gaussian Processes

2. Choosing a point to sample next from this surrogate model

2.1. Acquisition Functions

2.2. Expected Improvement

2.3. Probability Of Improvement

2.4. Upper Confidence Bound

2.5. Utilizing the Reparameterization Trick

2.6. Conclusion Of Acquisition Functions

3. Evaluating this point and adding to your dataset

4. Other Add-ons to Bayesian Optimization

1. Building a surrogate model of the data

The first step is to build a surrogate model of the dataset. We do this because we do not know the actual function, yet we want to approximate it as well as possible. The tool used almost universally for this in Bayesian Optimization is the Gaussian Process (GP).

Before we look more into what a GP is and how this is applied in BO, let us dive into what a Gaussian Distribution is, which will give us the fundamental concepts needed to understand Gaussian Processes.

1.1. To start, what is a Distribution?

A distribution is a mathematical function that describes the probability of occurrence of different values for a certain variable.

So, in other words, a distribution is a function that models the probability of a certain event. For example, we can have a distribution that describes the IQ of humans.

Another example could be a distribution of SAT scores.

One important similarity between these specific distributions is that they have a curved structure where there is a greater amount in the center, and then near the sides, it starts to diminish. It is possible to have other types of structures, but when the distribution has this type of curve, we can call it a Gaussian distribution, a normal distribution, or the bell curve.

Gaussian distributions have two main parameters that describe the function: the mean and the variance. The mean is the center (and average) of the distribution, and the variance is the average squared difference from the mean, which measures how spread out the values are. In a standard normal distribution, the mean is 0 and the variance is 1.

Normal Distribution
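
For reference, a one-dimensional Gaussian with mean \mu and variance \sigma^2 has the standard density

p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)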

Another important fact is that when dealing with multiple dimensions, we model a Multivariate Gaussian Distribution with a mean vector μ and a covariance matrix Σ. Instead of a bell curve in 2D, we get a surface in 3D or a higher-dimensional space. Each value in the mean vector is the average value for its dimension, and the covariance matrix tells us how all the dimensions relate to each other.

3D Gaussian Distribution
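
In symbols, a d-dimensional Gaussian x \sim \mathcal{N}(\mu, \Sigma) has the standard density

p(x) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu)\right)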

1.2 Gaussian Processes and Joint Distributions

Now that we know what the mean and covariance are, we can look at Gaussian Processes (GPs). GPs are a little different from the distributions above: instead of defining a distribution over a set of values, like the distribution over human IQ, we define a distribution over the black-box function directly. This means we end up with an infinite number of possible functions that could fit our data. Our goal is to learn, out of all those infinite functions, the one underlying function that gives us the best approximation of the real environment.

Here, you can see some of the different possible functions that could have given us our data. It is important to remember that, in reality, there is an infinite number of such functions.

Joint Distribution: Credit

So what exactly is the definition of a GP?

A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.

Let's break that definition apart. The random variables are any points on the function. To start, we will have an initial dataset of X points and their corresponding values. A joint distribution is a lot like a normal distribution, except that it involves more than one random variable. But what is the other random variable here? As mentioned before, our initial dataset contains points we have evaluated in the real environment. What about all the other points that haven't been evaluated? Those are what we call the X* points, and they are the other random variable in the joint distribution.

So how can we write this mathematically? Well, we can create a joint distribution between these X and X* points.

Joint Distribution
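
In standard notation, this joint distribution over the function values at X and X* is written

\begin{bmatrix} f(X) \\ f(X_*) \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \mu_X \\ \mu_{X_*} \end{bmatrix}, \begin{bmatrix} \Sigma_{X,X} & \Sigma_{X,X_*} \\ \Sigma_{X_*,X} & \Sigma_{X_*,X_*} \end{bmatrix} \right)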

Here we can see that our joint distribution has a mean (μ) and a covariance matrix (Σ). The covariance matrix splits into four parts because it describes the relationships between all the dimensions: X and X, X and X*, X* and X, and X* and X*.

1.3 The Covariance Matrix

The covariance matrix is arguably the most important part of Gaussian Processes. Together with the covariance kernels, it allows us to approximate the real function as well as possible. But what exactly does it look like? The covariance matrix is split up into four sub-matrices.

The first submatrix only has to do with the correlations between the known X points.

Sub-Matrix 11

The second models the correlations between these known X points and the unknown X* points. This is the second sub-matrix of the covariance matrix.

Sub-Matrix 12

This matrix can also be transposed, so the known X points are on the top and the unknown points are on the left. This is the third sub-matrix.

Sub-Matrix 21

And finally, we have the last sub-matrix that models the correlations between the unknown points.

Sub-Matrix 22

So then, when we put all four parts together, we get the whole covariance matrix (Σ).

The Covariance Matrix

Now, what is on the inside? Here we have covariance kernels that compute the different correlations between points.

The Covariance Matrix

1.4 Covariance Kernels

Let us now look at what these Covariance Kernels actually compute. Many different Kernels are used depending on the situation, so I will only go over some of the more popular and effective kernels.

RBF: The radial basis function (RBF) kernel is probably the most used kernel in machine learning. It measures the similarity between two points x and y: ||x − y|| is the Euclidean distance (L2 norm), and σ is a variance hyperparameter. In our case, x and y are pairs of our points, such as X and X, or X and X*. The hyperparameter σ is learned when training the GP, which I describe later.

RBF Kernel
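
In its standard form,

k(x, y) = \exp\left(-\frac{\lVert x - y \rVert^2}{2\sigma^2}\right)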

The Euclidean distance is the measure of distance between two points on a plane.

Credit: tutorialexample.com

The formula for the Euclidean distance is the square root of the sum of the squared differences between the two points or vectors (p, q).

Euclidean Distance
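
For two vectors p and q:

d(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}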

In our case, it is more convenient to omit the square root, which is why we square the Euclidean distance. This will give us a good estimate of how far away points are from each other on the function.

Matern Kernel: The Matern kernel is very similar to the RBF kernel; however, it has an additional hyperparameter ν that controls the smoothness of the function.

Matern Kernel
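
In the standard parameterization (as used by, e.g., scikit-learn), with length-scale hyperparameter \ell:

k(x_i, x_j) = \frac{1}{\Gamma(\nu)\, 2^{\nu-1}} \left( \frac{\sqrt{2\nu}}{\ell}\, d(x_i, x_j) \right)^{\nu} K_\nu\left( \frac{\sqrt{2\nu}}{\ell}\, d(x_i, x_j) \right)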

Here d(xi, xj) is the Euclidean distance, Kν(…) is the modified Bessel function of the second kind, and Γ is the gamma function.

An interesting fact about this kernel: when ν goes to infinity, it is equivalent to the RBF kernel, and when ν = 1/2, it equals another kernel called the absolute exponential kernel.

1.5. Back To Gaussian Processes

Now that we have a more intuitive feel for the covariance matrix and covariance kernels, let us return to the set-up of our joint distribution. When dealing with Gaussian Processes, we usually set the mean to 0, which makes our calculations much easier. This is also known as centering the data.

Joint Distribution With Mean at 0

As mentioned before, the covariance matrix is generally calculated with covariance kernels. This is what allows us to find the correct correlations between already-observed points and predicted points. So we can replace each sub-matrix with the corresponding kernel block, e.g. K(X, X) instead of Σ_{X,X}.

Joint Distribution With Kernels
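
Written out, the zero-mean joint distribution with kernel blocks is:

\begin{bmatrix} f \\ f_* \end{bmatrix} \sim \mathcal{N}\left( 0, \begin{bmatrix} K(X, X) & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{bmatrix} \right)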

Another factor we have to deal with is that data gathered from the real world is not 100% accurate; there will always be a little bit of error. To model this, we add some Gaussian noise to the observed value of each X point. After adding the noise, we write the observed values as Y instead of f(X) in the joint distribution.

Adding Gaussian Noise to X point
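
A standard way to write the noise model, with noise variance \sigma_n^2:

y_i = f(x_i) + \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(0, \sigma_n^2)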

This leads us to slightly modify our joint distribution so that the covariance block between the observed X points includes a hyperparameter for the Gaussian noise. This hyperparameter is also learned during training.

Updated Joint Distribution With Noise
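
The noisy version adds \sigma_n^2 I to the K(X, X) block:

\begin{bmatrix} y \\ f_* \end{bmatrix} \sim \mathcal{N}\left( 0, \begin{bmatrix} K(X, X) + \sigma_n^2 I & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{bmatrix} \right)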

1.6 The Prior and Posterior Distribution

We have looked at how Gaussian Processes work and how to write them mathematically with the joint distribution. Let us now focus on using our data to create two new distributions that will allow us to fit a Gaussian process to our data.

The first new distribution that we will look at is the Prior Distribution. This is the distribution that is defined before we have observed any data. You can think of the prior as our belief of what the underlying function is.

Much like the joint distribution, the Gaussian prior has a mean of zero. However, its covariance involves only the X* points, which have not yet been evaluated.

Prior Distribution
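
In symbols, the zero-mean prior over the unevaluated points is:

f_* \sim \mathcal{N}(0, K(X_*, X_*))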

This is where choosing the kernel becomes crucial because depending on which kernel you choose, it will shape the distribution differently. Here are examples of how the function can look depending on the kernel.

Credit: Distill Pub

As you can see, it is also possible to combine kernels.

The next distribution we will look at is the posterior or conditional distribution. This distribution will be created after we take into account all of the observed data.

When sampling from this new distribution, we receive predicted values for the points we do not have data for. We obtain these points' means and covariances, which are crucial in later parts of Bayesian optimization.

The mathematical notation for the Posterior Distribution is as follows.

Posterior Distribution
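
In standard form, conditioning the joint Gaussian on the observed data y gives

f_* \mid X, y, X_* \sim \mathcal{N}(\bar{\mu}, \bar{\Sigma})

\bar{\mu} = K(X_*, X)\, [K(X, X) + \sigma_n^2 I]^{-1}\, y

\bar{\Sigma} = K(X_*, X_*) - K(X_*, X)\, [K(X, X) + \sigma_n^2 I]^{-1}\, K(X, X_*)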

If you want to see the mathematical proof of the posterior distribution, you can go here.

The main intuition behind creating a conditional distribution is that we now have reduced our distribution from an infinite number of functions to just the functions that pass through the points that we have observed.

Posterior Distribution

Here the black dots would be the points that we have evaluated in the real environment.

1.7. Gaussian Process Regression

Now armed with the basic knowledge of how Gaussian Processes work and how they are written mathematically, let us see how to actually train a Gaussian Process to fit our dataset. Because this is a regression problem, we need a loss function to minimize. Once we have this loss function, we can update the parameters using gradient descent to drive the loss as low as possible.

To do this, we have to understand a new way of thinking.

Way Of Thinking

As in most regression problems, we have input points X on the function and corresponding outputs Y. What we do not know, but want to find out, is "Nature," the real environment. We assume it has some function f(X) that takes in our x point, computes an output, and then adds some noise.

The goal of Gaussian Process regression is to model the probability of Y given X. If we can do this, we can predict what x points would give us the best y values.

Our Goal

Let us now look at the loss function used when fitting the GP to our data. Because we assume there is an underlying function f that generates our data, we can marginalize over it.

After Marginalizing
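
Marginalizing out the unknown function values f:

p(y \mid x) = \int p(y \mid f, x)\, p(f \mid x)\, df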

Then after applying Bayes rule:

Bayes Rule
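
Bayes' rule, in this setting:

p(f \mid y, x) = \frac{p(y \mid f, x)\, p(f \mid x)}{p(y \mid x)}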

We arrive at the following equation.

After Applying Bayes Rule

We also maximize the log of p(y|x) because it makes it easier to compute.

It is important to notice that the two terms in the integral are both Gaussian: one has mean f and covariance σ²I (the noise), and the other has mean 0 and covariance given by the kernel K(X, X).

And integrating one Gaussian against another yields another Gaussian, so we end up with the final loss function below.

Loss Function For GP Regression
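
The standard result is the log marginal likelihood, which we maximize (or whose negative we minimize) with respect to the kernel hyperparameters:

\log p(y \mid X) = -\frac{1}{2}\, y^\top [K(X, X) + \sigma_n^2 I]^{-1}\, y - \frac{1}{2} \log \left| K(X, X) + \sigma_n^2 I \right| - \frac{n}{2} \log 2\pi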

It is important to notice that the parameters we optimize the loss with respect to live inside the kernel. So to fit the GP to our data, we apply gradient descent (or any other optimization algorithm) to this loss function. Once we have thoroughly fit our Gaussian Process, we can sample from it and receive a mean and covariance for each point on the function. This is also why the choice of kernel is so important: each kernel shapes the function differently and has different parameters.
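
To make this concrete, here is a minimal sketch of fitting a GP surrogate with scikit-learn. The data arrays are illustrative placeholders, and scikit-learn maximizes the log marginal likelihood above internally:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# A few evaluated points and their noisy observations (placeholders)
X = np.array([[0.1], [0.4], [0.9]])
y = np.array([1.2, 0.7, 1.8])

# RBF kernel plus a learned noise term; hyperparameters are fit by
# maximizing the log marginal likelihood shown above
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(X, y)

# Posterior mean and uncertainty at unevaluated X* points
X_star = np.linspace(0, 1, 100).reshape(-1, 1)
mu, sigma = gp.predict(X_star, return_std=True)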

1.8. Uncertainty Estimates

We have now gone over how Gaussian Processes work, but why are they used? The reason is that they have built-in Bayesian properties that are extremely useful, the most prominent being that they give uncertainty estimates. To illustrate how powerful this can be, let us pretend we are building a machine learning algorithm that predicts whether you should invest in a certain stock.

Another approach commonly used for this problem would be neural networks, for example RNNs or LSTMs. When finished training, both of these algorithms would output a single value: "Invest $1000 in Stock XYZ." After getting this prediction, I might feel pretty confident, yet we have no real way of knowing whether it will be correct. A Gaussian Process, on the other hand, would output a value and an uncertainty estimate: "Invest $1000 in Stock XYZ, but I am only 20% confident." As you can see, this could play a huge role in your decision.

In GPs, the covariance provides our uncertainty estimates. Because the covariance captures the relationships between points, the predictive uncertainty is smaller near already-observed points and larger far away from them. So when we model the function, the covariance is larger in regions that are far from the points we already know.

So when evaluating points that are farther away from our data, we are more uncertain of the point's actual value. Knowing the uncertainty estimates becomes crucial in the next part of BO when you are trying to find out what point you want to evaluate next.

1.9. Conclusion of Gaussian Processes

We have now seen the different parts of GPs and how they can be trained to model an underlying function. We can use this model to condition the values of the points we do not know on the values of the points we have already observed, giving us the posterior distribution. Sampling from that distribution gives us a mean and a covariance (our uncertainty estimate), which lets us continue the Bayesian Optimization process.

2. Choosing a point to sample next from this surrogate model

Once the data is fit to the Gaussian process, the next step in Bayesian optimization is choosing the next point we want to evaluate.

The whole point of BO is to optimize a function as efficiently as possible, meaning we want to evaluate as few points as possible. This means we have to be very careful about which points we pick. As mentioned before, this is an exploration/exploitation problem, and to solve it, BO uses acquisition functions.

2.1 Acquisition Functions

Acquisition functions are tools that use the predicted mean and covariance generated by the Gaussian process. In practice, three main acquisition functions are used: Expected Improvement (EI), Upper Confidence Bound (UCB), and Probability of Improvement (PI). By maximizing one of these acquisition functions (typically with gradient-based methods), we obtain the new point that we will evaluate next in the environment.

One important thing to note is that the density inside all of these acquisition functions is the posterior distribution. Remember, the posterior allows us to predict values for X points we have not evaluated, using the already-known X points.

So every time you see an expectation in the formulas below, the density it is taken under is the posterior distribution described in section 1.6.

2.2 Expected Improvement

The goal of expected improvement is to find a new probe point that is, on average, better than the best we know so far. Mathematically, the best we know so far, x*, is the already-evaluated point whose value of f (nature's function) is highest.

Best We Know So Far
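
With x_1, …, x_n the points evaluated so far:

x^* = \arg\max_{x_i \in \{x_1, \ldots, x_n\}} f(x_i)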

Therefore we can write EI as follows.

Expected Improvement Acquisition Function
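
In its standard form, with the expectation taken under the GP posterior:

\alpha_{EI}(x) = \mathbb{E}\left[ \max\left( f(x) - f(x^*),\, 0 \right) \right]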

So when taking the max of f(x) − f(x*) and 0, we always end up with either zero or a positive value: the amount by which the new X point improves on the best we know so far.

Because we always end up with zero or a positive value, we can apply the machine learning activation function ReLU in place of the max.

Relu Activation Function

The ReLU activation function outputs 0 if its input is non-positive, and the input itself otherwise. So we can rewrite EI as the following.

Final Expected Improvement Acquisition Function
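
That is:

\alpha_{EI}(x) = \mathbb{E}\left[ \mathrm{ReLU}\left( f(x) - f(x^*) \right) \right]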

2.3 Upper Confidence Bound

The next acquisition function used a lot in practice is the Upper Confidence Bound (UCB). UCB weights the covariance with a parameter and uses this to trade off exploration and exploitation. As mentioned before, exploration is how much we want to explore new parts of the function, which corresponds to the covariance: the higher the covariance at a point, the more uncertain we are about that area, so probing spots with high uncertainty counts as exploration. The mean, on the other hand, corresponds to exploitation: the higher the mean in some part of the function, the more likely the global maximum is near that point.

UCB Acquisition Function
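
A standard form, with trade-off parameter κ weighting the posterior standard deviation:

\alpha_{UCB}(x) = \mu(x) + \kappa\, \sigma(x)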

2.4 Probability Of Improvement

The final acquisition function we will look at is Probability of Improvement (PI). It is similar to EI; however, instead of tracking the average amount by which a point beats the best we know so far, we take the probability that it is better at all.

We will use indicator functions: a function that equals 0 when a condition is false and 1 when it is true.

Indicator Function
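
In symbols:

\mathbb{1}[A] = \begin{cases} 1 & A \text{ is true} \\ 0 & \text{otherwise} \end{cases}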

Specifically, the expectation of the indicator function of some event is equal to the probability of that event.

This allows us to create an acquisition function that models the probability that a point sampled from our posterior distribution is better than the best we know so far.

PI Acquisition Function
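
In standard form:

\alpha_{PI}(x) = \mathbb{P}\left( f(x) \geq f(x^*) \right) = \mathbb{E}\left[ \mathbb{1}\left( f(x) \geq f(x^*) \right) \right]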

2.5 Utilizing the Reparameterization Trick

As you probably noticed, each acquisition function takes its expectation under the posterior distribution, and that density contains the parameters θ. To find the best point for each acquisition function, we have to take gradients with respect to θ, and this can be impossible when θ sits inside the density. So we must find a way to deal with this.

First, let me show the issue and why we cannot take the gradient with respect to parameters that appear in the density. To do this, let us look at two different expectations: the first will not have the parameters in its density, and the second will.

The First Expectation
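
Reconstructed in standard form (the density p(y) here does not depend on θ):

\mathbb{E}_{p(y)}\left[ f_\theta(y) \right] = \int f_\theta(y)\, p(y)\, dy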

As you can see, our parameters θ appear only inside the expectation, not in the density. The following is what happens when you take the gradient of this function:

Taking The Gradient Of Expectation 1
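
\nabla_\theta\, \mathbb{E}_{p(y)}\left[ f_\theta(y) \right] = \int \nabla_\theta f_\theta(y)\, p(y)\, dy = \mathbb{E}_{p(y)}\left[ \nabla_\theta f_\theta(y) \right]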

As you can see, we are left with a single expectation with the gradient inside it. This is easy to compute and can be estimated using various methods like gradient descent, natural gradient, etc.

Now let's look at the second expectation.

The Second Expectation
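
Again in standard form, now with θ in the density:

\mathbb{E}_{p_\theta(y)}\left[ f(y) \right] = \int f(y)\, p_\theta(y)\, dy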

Here you can see that theta is also in the density. What happens if we take the gradient of this expectation?

Taking The Gradient Of Expectation 2
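
\nabla_\theta\, \mathbb{E}_{p_\theta(y)}\left[ f(y) \right] = \int f(y)\, \nabla_\theta\, p_\theta(y)\, dy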

Here we are left with a term that cannot be turned back into an expectation. This is because, to create an expectation, you need a density. Densities have certain properties that make them unique, and as soon as we take the gradient of a density, we no longer have a density.

To solve this problem, we make use of the following equality.

Suppose we have a random variable y sampled from a Gaussian with mean μ and covariance Σ. Then we can rewrite y as μ + Lε, where L is the Cholesky factor of Σ and ε is a random variable sampled from a standard Gaussian.
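
In symbols:

y = \mu + L\epsilon, \quad \epsilon \sim \mathcal{N}(0, I), \quad L L^\top = \Sigma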

So, utilizing this equality, we can rewrite all of our acquisition functions as follows.

Reparameterized EI
Reparameterized PI
Reparameterized UCB
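
Written out for EI and PI (UCB admits an analogous Monte Carlo form), with the expectation now over a parameter-free standard Gaussian:

\alpha_{EI}(x) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\left[ \mathrm{ReLU}\left( \mu(x) + L(x)\epsilon - f(x^*) \right) \right]

\alpha_{PI}(x) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\left[ \mathbb{1}\left( \mu(x) + L(x)\epsilon \geq f(x^*) \right) \right]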

Because each acquisition function is now an expectation under a parameter-free density, we can apply techniques like Monte Carlo estimation and obtain the gradients.
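
As a concrete sketch, here is a minimal NumPy Monte Carlo estimator of Expected Improvement via the reparameterization trick. The names mu, Sigma, and best_f are illustrative; they would come from the GP posterior at the candidate points:

import numpy as np

def monte_carlo_ei(mu, Sigma, best_f, n_samples=256):
    # Sigma = L @ L.T (Cholesky decomposition)
    L = np.linalg.cholesky(Sigma)
    # eps ~ N(0, I), so mu + L @ eps ~ N(mu, Sigma)
    eps = np.random.randn(n_samples, len(mu))
    f_samples = mu + eps @ L.T
    # ReLU(f - f*), averaged over samples, gives EI per candidate point
    improvement = np.maximum(f_samples - best_f, 0.0)
    return improvement.mean(axis=0)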

3. Evaluating This Point and Adding To Your Dataset

By maximizing the acquisition function, we obtain a point that we want to evaluate in the environment. Depending on the problem you are trying to solve, this can mean many different things. One project I have been working on is Bayesian Optimization for hyperparameter tuning in reinforcement learning: finding the optimal learning rate for Proximal Policy Optimization (an RL algorithm) in the OpenAI CartPole environment.

CartPole Environment

Each time you try a new learning rate, you record the average reward and add it to your dataset. Once the new learning rate and its corresponding value are added, you re-train the Gaussian process on the data and find the next point to sample by again maximizing an acquisition function. One of the biggest benefits of BO is that you generally do not need to repeat this process many times; you can often find the global maximum or minimum in under 10 evaluations. This is extremely beneficial compared to methods like random search, which can take hundreds of evaluations.
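
To tie the loop together, here is a minimal sketch under illustrative assumptions: the objective is a toy stand-in for an expensive evaluation (such as an RL training run), and the acquisition is maximized over a fixed candidate grid rather than by gradient ascent:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(log_lr):
    # Toy stand-in for "train PPO with this learning rate, return avg reward"
    return -(log_lr + 3.0) ** 2  # peaks at a learning rate of 1e-3

candidates = np.linspace(-6, -1, 500).reshape(-1, 1)  # log10 learning rates
X = np.array([[-5.0], [-2.0]])                        # two initial guesses
y = np.array([objective(x[0]) for x in X])

for _ in range(8):  # a handful of evaluations is often enough
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + 2.0 * sigma                 # UCB acquisition with kappa = 2
    x_next = candidates[np.argmax(ucb)]    # next point to evaluate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

best_lr = 10 ** X[np.argmax(y), 0]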

4. Other Add-Ons to Bayesian Optimization

We have now seen the basics of Bayesian optimization; however, there are many ways this algorithm can be tweaked to be more efficient or to work in high-dimensional settings. Some resources with links are provided below.

  • Latent Space BO (used when your search space has a large number of dimensions)
  • Input Warping (used for non-stationary functions)
  • Contextual Bayesian Optimization (used when you want to take in extra information that is important for the decision)
  • GPyTorch (a Python library for fitting Gaussian processes to data)
  • BoTorch (a Python library for Bayesian optimization)

If you want to see another project that I have been working on with some other students, you can check out our website: Ascentira-ai. Our product is a wearable wristband that uses biomarker data to give you a personalized optimal schedule.

TL;DR

  • Bayesian optimization is used to optimize black-box functions that are expensive to evaluate
  • Four-Step Process
  • First, fit a Gaussian process to the initial dataset
  • Second, maximize an acquisition function to get the next point that you want to evaluate in the real environment
  • Third, evaluate this point in the real environment and record the results
  • Lastly, repeat from step one until you are happy with the results
