# Ecological Dynamic Programming Optimization in Python

It’s been a while since I’ve posted anything. I’ll use the excuse that I’ve been busy, but mostly I just forget. Regardless, I’m learning how to do Dynamic Programming Optimization (DPO), which sounds more complex than it is. DPO allows us to simulate the behavior of individuals who make decisions based on current patch quality. It’s an exciting tool that forms the foundation of individual-based models, which let us assess community and ecosystem dynamics as emergent properties of individual decisions grounded in physiological principles (hence my interest). The underlying idea, as I understand it, is that individuals assess their future reproductive output prior to making a decision (I’m interested in optimal foraging, so I’ll put this in that context). We can model how individuals make decisions over the course of a lifetime, and track each individual, which then allows us to make quantitative statements about populations, communities, or ecosystems.

This all sounds complicated, and it can be difficult, so it’s easiest to jump right in with an example. Here’s the code, along with a basic explanation of what’s going on, for the very first toy example found in “Dynamic State Variable Models in Ecology” by Clark and Mangel. In the book, they give the computer code in TrueBASIC, but, in all honesty, no one I know uses that. I could use R, but we all know my feelings on that. So here’s their toy model programmed and annotated in Python.

## Background

Suppose we’re interested in the behavior of fish and how they choose to distribute themselves among three different patches. Two patches are foraging patches and one is a reproduction patch. The first thing we need to do is make an objective fitness function F. In this example, F relates body size to reproductive output. It can be thought of as what the organism believes about its final fitness given a particular body size. Let’s make it increasing, but asymptotic, with body mass:

$F = \frac{A(x-x_c)}{x-x_c+x_0}$

Here, A is a saturation constant: fitness asymptotes toward A as body size grows, so beyond a certain size fitness increases only minimally. $x_c$ is the critical body size at which mortality occurs (0), and $x_0$ is the initial body size. Let’s set $x_c=0$, $A=60$, and $x_0=0.25*x_{max}$, where $x_{max}=30$ is the individual’s maximum possible body size. You have to set a ceiling on body size; otherwise organisms grow without bound.
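As a quick sanity check on the shape of F, here’s a short sketch (using the parameter values above) showing that fitness rises quickly at small body sizes and flattens out as it approaches A = 60:

```python
import numpy as np

# terminal fitness parameters from the text
x_crit, x_max = 0, 30
acap = 60
x_0 = 0.25 * x_max  # = 7.5

x = np.arange(x_crit, x_max + 1)
F_end = acap * (x - x_crit) / (x - x_crit + x_0)

# the first unit of growth adds far more fitness than the last one
print(F_end[1] - F_end[0], F_end[-1] - F_end[-2])
```

Note that even at the maximum body size of 30, F is only 48, still well below the asymptote of 60.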

OK, so that’s what the organism “believes” about its fitness at the end of a season given a specific body mass. The question is: how does the organism forage optimally to maximize fitness at the end of its lifetime?

The obvious approach would be to simulate a foraging decision at every time step moving forward, then have the organism decide again at the new time step, and so on. This is computationally expensive, so to circumvent it we work backwards from the end of the season. This saves time because, essentially, there are far fewer paths to a known destination than to an unknown one.

So we imagine that, at the final time step $t_f$, the organism’s fitness is given by F for any given body mass.

```python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

n_bodysize = 31  # 31 body sizes, from 0-30
n_time = 20      # 20 time steps, 1-20
n_patch = 3      # 3 patches

# make a container for body size (0-30) and time (1-20)
F = np.zeros((n_bodysize, n_time))

# make the function F
x_crit = 0
x_max = 30
acap = 60
x_0 = 0.25*x_max
t_max = 20

# calculate organism fitness at the final time point
for x in range(x_crit, x_max+1):
    F[x, -1] = acap*(x-x_crit)/(x-x_crit+x_0)
```

Now that we know fitness at the final time point, we can iterate backwards through time (backward induction) to decide what the optimal strategy is at each body mass. To determine that, we need to know the expected fitness in each patch. Let’s start with the two foraging patches, Patch 1 and Patch 2. We need to know four things about each patch: (1) the mortality rate (m), (2) the probability of finding food (p), (3) the metabolic cost of visiting the patch (a), and (4) the gain in body mass if food is successfully found (y). For these two patches, let:

Patch 1: m=0.01, p=0.2, a=1, y=2

Patch 2: m=0.05, p=0.5, a=1, y=4

Right away we can see that Patch 2 is high-risk, high-reward compared to Patch 1. In each patch, we calculate the next body size given that an animal does (x') or does not (x'') find food:

$x' = x-a_i+y_i$

$x'' = x-a_i$

Those are simple equations: the next body size is the current body size, minus the metabolic cost of foraging in the patch, plus the energy gain from foraging if successful. Great. Now we can calculate the expected fitness of each patch as the weighted average of F(x') and F(x''), given the probability of finding food, all times the probability of actually surviving. For these two patches, we make a fitness function (V):

$V_i = (1 - m_i)*[p_i*F_{t+1}(x') + (1-p_i)*F_{t+1}(x'')]$
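To make equation V concrete, here’s a minimal sketch of it as a Python function (the function and argument names are mine, not from the book’s code):

```python
def patch_fitness(x, m, p, a, y, F_next, x_max=30, x_crit=0):
    """Expected fitness V of foraging in one patch at body size x.

    F_next is the array of fitness values at the next time step,
    indexed by body size. m, p, a, y are the patch's mortality rate,
    food-finding probability, metabolic cost, and food gain.
    """
    x_found = min(x - a + y, x_max)   # body size if food is found (capped at x_max)
    x_missed = max(x - a, x_crit)     # body size if no food is found
    # survive with probability (1 - m); weight the two outcomes by p and (1 - p)
    return (1 - m) * (p * F_next[int(x_found)] + (1 - p) * F_next[int(x_missed)])
```

With the terminal fitness F from above and Patch 1’s parameters (m=0.01, p=0.2, a=1, y=2), `patch_fitness(15, 0.01, 0.2, 1, 2, F_next)` gives roughly 39, the Patch 1 value used in the worked example below.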

The reproductive patch is different: no foraging occurs there (in the code below, it has its own mortality rate, m=0.02, and the same metabolic cost, a=1). If the organism is above a critical mass $x_{rep}$, it devotes all excess energy, up to a limit (c=4), to reproduction. If the organism is below the reproductive threshold and still visits the reproductive patch, it simply loses mass (unless it dies).

OK, this is all kind of complicated, so let’s step through it. We know what fitness is at the final time step because of F. So let’s step back one time step. At this penultimate time step, we go through every body mass and calculate fitness for each patch. Let’s do an example. If x=15, then we need to know fitness in Patch 1, Patch 2, and Patch 3. For Patches 1 and 2, we need to know the new body mass if foraging is successful and if it is unsuccessful.

$x' = min(15-a_1+y_1, x_{max})$

$x' = 15-1+2 = 16$

$x'' = max(15-a_1, x_{c})$

$x'' = 15-1 = 14$

The min and max functions here just make sure our organism doesn’t grow without limit and dies if metabolic cost exceeds body mass. So these are the two potential outcomes of foraging in Patch 1 given x=15, with expected fitnesses F(16) and F(14). Plug all these values into equation V for Patch 1 to get the expected fitness of Patch 1 at a body size of 15. We then take the maximum over all three patches, save the corresponding patch as the optimal foraging decision, and save that maximum as the fitness for body size 15 at this time step. At body size 15, V for Patch 1 is 39.0, for Patch 2 it is 38.7, and for Patch 3 it is 37.6, so the individual will forage in Patch 1, and the fitness for body size 15 at this time step is 39.0.
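Those three numbers are easy to verify. Here’s a short sketch that recomputes them at the penultimate time step, using the parameter values from the full code below:

```python
x_max, x_crit, x_rep = 30, 0, 4

def F(x):  # terminal fitness
    return 60 * (x - x_crit) / (x - x_crit + 0.25 * x_max)

x = 15
# Patch 1: m=0.01, p=0.2, a=1, y=2
v1 = (1 - 0.01) * (0.2 * F(min(x - 1 + 2, x_max)) + 0.8 * F(max(x - 1, x_crit)))
# Patch 2: m=0.05, p=0.5, a=1, y=4
v2 = (1 - 0.05) * (0.5 * F(min(x - 1 + 4, x_max)) + 0.5 * F(max(x - 1, x_crit)))
# Patch 3 (reproduction): m=0.02, a=1, c=4
spend = min(x - x_rep, 4)  # energy devoted to reproduction
v3 = spend + (1 - 0.02) * F(max(x - 1 - spend, x_crit))

print(round(v1, 1), round(v2, 1), round(v3, 1))  # 39.0 38.7 37.6
```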

Repeat this procedure for every possible body size, and you’ll get the fitness for every body size at the second to last time step as well as the optimal foraging patch for every body size at that time.

Then, step backwards in time and repeat the whole procedure, except now the fitness value for each body mass at the next time step doesn’t come from the equation F, but from the fitness we just calculated. So, for example, if an organism’s foraging decisions at this time step lead it to a body mass of 15 at the penultimate time step, its expected future fitness is the 39 we just calculated. Again, repeat this for every body mass, then step back again, and so on.

Here’s the full Python code for how this is done:

```python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

n_bodysize = 31  # 31 body sizes, 0 - 30
n_time = 20      # 20 time steps, 1 - 20
n_patch = 3      # number of patches

#%% CONTAINERS
# fitness for each body size (0-30) and time (1-20)
F = np.zeros((n_bodysize, n_time))
# expected fitness in each of the three patches, by body size and time
Vi = np.zeros((n_patch, n_bodysize, n_time))
# the optimal decision for each body size and time
d = np.zeros((n_bodysize, n_time))

# mortality rate in each patch
m = np.zeros(n_patch)
# probability of finding food in each patch
p = np.zeros(n_patch)
# metabolic cost of visiting each patch
a = np.zeros(n_patch)
# food gain in each patch
y = np.zeros(n_patch)
# maximum reproductive output in each patch
c = np.zeros(n_patch)

#%% CONDITIONS
x_crit = 0
x_max = 30
t_max = 20
x_rep = 4

#%% PARAMETERS
m[0] = 0.01; m[1] = 0.05; m[2] = 0.02
p[0] = 0.2; p[1] = 0.5; p[2] = 0
a[0] = 1; a[1] = 1; a[2] = 1
y[0] = 2; y[1] = 4; y[2] = 0
c[0] = 0; c[1] = 0; c[2] = 4

#%% END CONDITION
acap = 60
x_0 = 0.25*x_max

# calculate fitness for every body mass at the final time step
for x in range(x_crit, x_max+1):
    F[x, -1] = acap*(x-x_crit)/(x-x_crit+x_0)

#%% SOLVER
for t in reversed(range(t_max - 1)):  # every time step, working backward in time
    for x in range(x_crit+1, x_max+1):  # every body size
        for patch in range(n_patch):  # every patch
            if patch in [0, 1]:  # a foraging patch
                xp = x - a[patch] + y[patch]  # updated body size if food is found
                xp = min(xp, x_max)  # constraint on max size
                xpp = int(x - a[patch])  # updated body size if no food
                # expected fitness of foraging in this patch
                Vi[patch, x, t] = (1 - m[patch])*(p[patch]*F[int(xp), t+1] + (1 - p[patch])*F[xpp, t+1])
            else:  # the reproduction patch
                if x < x_rep:  # below the reproductive threshold: just lose mass
                    xp = max(x - a[patch], x_crit)
                    Vi[patch, x, t] = (1 - m[patch])*F[int(xp), t+1]
                else:
                    fitness_increment = min(x - x_rep, c[patch])  # resources devoted to reproduction
                    xp = max(x - a[patch] - fitness_increment, x_crit)  # body size minus metabolism minus reproduction
                    Vi[patch, x, t] = fitness_increment + (1 - m[patch])*F[int(xp), t+1]
        vmax = max(Vi[0, x, t], Vi[1, x, t], Vi[2, x, t])
        if vmax == Vi[0, x, t]:
            d[x, t] = 1
        elif vmax == Vi[1, x, t]:
            d[x, t] = 2
        else:
            d[x, t] = 3
        F[x, t] = vmax  # the expected fitness at this time step for this body mass
```

This model doesn’t really track individual behavior. What it does is provide an optimal decision for every time step and every body mass. So we know what, say, an individual should do if it finds itself small towards the end of its life, or large at the beginning:

```python
T, X = np.meshgrid(range(1, t_max+1), range(x_crit, x_max+1))
df = pd.DataFrame({'X': X.ravel(), 'T': T.ravel(), 'd': d.ravel()})
df = df.pivot(index='X', columns='T', values='d')
sns.heatmap(df, vmin=1, vmax=3, annot=True, cbar_kws={'ticks': [1, 2, 3]})
plt.gca().invert_yaxis()
plt.show()
```
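A natural next step (not part of the book’s toy code) is to use the decision matrix d to follow individuals forward through time: start an individual at some body size, look up its optimal patch at each step, and apply that patch’s random outcomes. Here’s a minimal sketch; the function name, the hard-coded parameters, and the idea of passing in a random number generator are my own additions:

```python
import numpy as np

def simulate_individual(d, x0, rng=None, t_max=20):
    """Follow one individual forward using the optimal decision matrix d.
    Returns (total reproductive output, time survived)."""
    m = [0.01, 0.05, 0.02]  # mortality per patch
    p = [0.2, 0.5, 0.0]     # probability of finding food
    a = [1, 1, 1]           # metabolic cost
    y = [2, 4, 0]           # food gain
    c, x_rep, x_max = 4, 4, 30
    rng = rng or np.random.default_rng()
    x, repro = x0, 0.0
    for t in range(t_max - 1):
        patch = int(d[x, t]) - 1          # d stores patches as 1, 2, 3
        if rng.random() < m[patch]:
            return repro, t               # killed by a predator
        if patch < 2:                     # a foraging patch
            gain = y[patch] if rng.random() < p[patch] else 0
            x = min(x - a[patch] + gain, x_max)
            if x <= 0:
                return repro, t           # starved
        else:                             # the reproduction patch
            spend = max(min(x - x_rep, c), 0)
            repro += spend
            x = max(x - a[patch] - spend, 0)
    return repro, t_max
```

Simulating many individuals this way and aggregating the results is exactly how decision matrices like d feed into individual-based models.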

And that’s that!! To be honest, I still haven’t wrapped my head around this fully. I wrote this blog post in part to make myself think harder about what was going on, rather than just regurgitating code from the book.

# Heaters Running!

I finally got power run to my experimental plots last week. Today, I had work to do. First, I poured 40 pounds of compost into every subplot to help with soil quality (for those of you keeping track, I have 64 subplots, which means I carried roughly 2,500 pounds of dirt around the garden all morning). Of course, I couldn’t have done it without the help of our volunteer, Aaron.

Second, I hung my heaters over my plots and FINALLY turned them on. I have waited two and a half months for this moment. (Side note: All the crap BGE gave us about needing to run new lines, put in higher amperage lines, etc. was garbage. We waited 2.5 months for them to come turn the power off and, when we were done hooking up the new electrical meter, turn it back on). I’ll repeat: My heaters are up and running!

A heater! Heating!

I half expected to plug the heaters in and have outlets trip left and right. They didn’t, and everything stayed on!

Till it rains, at least (just kidding; I sealed the poop out of these things with silicone. Literally. There’s a lot of bird poop out there.).

Even better, I went out to the garden last night to take some temperature measurements.

Also to bask in the orange glow of fulfillment. I swear it’s much less creepy than the picture indicates, but my phone camera is pretty bad at night.

The good news is, I get exactly the heating effect I was looking for, and it covers the whole plot! I was worried about it not heating enough (but I still have some leeway to crank the wattage up if necessary) and heating just a strip in the middle of the plot, but my fears have, for the moment, been allayed. I’ll probably still turn it up all the way later (or, if things get crazy, to 11).

Tomorrow, I will plant more seeds, since the last batch died, and water them daily. I bought a pound of evening primrose seeds. For those of you who’ve never seen primrose seeds, they look almost exactly like Grape-Nuts. A pound is about a cereal box worth of seeds, roughly 1.3 million. I’m only planting about 1,300 in every subplot. If I get the 20% germination rate the company claims, I’ll have more germlings in each plot than I know what to do with (not really, I know exactly what to do with them: kill most, leave a few).

Fingers crossed!

# Power!

Finally, after 2.5 months, nearing the end of the growing season (too late to get much useful done), I have power!

Unfortunately, the growing season is almost over and I can’t really plant anything anymore, and every… single… seed I planted earlier died. I planted several thousand seeds. Not one made it.

Science really sucks sometimes.

Yep, it sucks.

# Are My Experiments Doomed?

I just got back from a two week conference at NIMBioS, hosted by the University of Tennessee. It was great. However, I posted before I left that I had just put all my experiments in the ground and still didn’t have the power outlets installed. The question was: Will the electrical staff and power company finally get power running to the garden, after two months of telephone calls and begging?

Well. No. They didn’t. I put in the request for this work in mid-April, and the staff here and the electrical company BGE pretty much blew me off, so now it’s only about 60% complete. The electrician just left for a new job (which wasn’t exactly a surprise), and there’s no one around to finish the work. The growing season is blowing by pretty fast, and I’m starting to worry that the incompetence of the electrician (who pretty much stopped showing up to work for the last month) and, to the same extent, BGE is going to cost me an entire year of grad school.

My two weeks at NIMBioS and my experience in trying to get things done here have opened my eyes to a new possibility: Screw experiments. Do math.

# Evaluating Optimization Algorithms in MATLAB, Python, and R

As those of you who read my last post know, I’m at the NIMBioS-CAMBAM workshop on linking mathematical models to biological data here at UT Knoxville. Day 1 (today) was on parameter estimation and model identifiability. Specifically, we (quickly) covered maximum likelihood estimation and how to program our own estimation procedures in MATLAB.

If you’ve read this blog in the past, you know I’m an open-source kind of person. Naturally, I re-programmed all of the MATLAB examples in R. When I did, I noticed something funny: the optimization procedures in MATLAB gave different estimates than those in R. I asked a post-doc there, who seemed equally stumped but did mention that R’s optimization procedures are a little funky. So, I took the next logical step and programmed the ML optimization routine into Python, using SciPy and NumPy, just to double check.

The model is an SIR epidemiological model that predicts the number of Susceptible, Infected, and Recovered people with, in this case, cholera. It relies on four parameters: Bi, Bw, e, and k. I won’t give the model here; you’ll see the formula in the code below. When optimizing, I made sure that MATLAB, R, and Python all used Nelder-Mead algorithms and, when possible, equivalent ODE solvers (ode45 in MATLAB and R).
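Still, it helps to see the system written out. From the code below, the equations are (with S susceptible, I infected, R recovered, W a fourth compartment that, in cholera models like this one, typically represents the waterborne pathogen reservoir, and a fixed recovery rate of 0.25):

$\frac{dS}{dt} = -B_i S I - B_w S W$

$\frac{dI}{dt} = B_i S I + B_w S W - 0.25 I$

$\frac{dW}{dt} = e(I - W)$

$\frac{dR}{dt} = 0.25 I$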

I won’t post the MATLAB code here, because I didn’t write it and it’s multiple files, etc., but I’ve gone over it line by line to make sure it’s identical to my R and Python code. It is. MATLAB outputs estimates for Bi (0.2896), Bw (1.0629), e (0.0066), and k (0.0001). If you want the MATLAB files, or the data, I’ll send them to you.

In R, first import the data and the vector of observation times, tspan (see the Python code below for the actual values). Then, run the ODE solver and optimization code:

```r
## Main code for the SIR model example
library(deSolve) # load the library to solve the ODE

## Set initial parameter values
Bi <- 0.75
Bw <- 0.75
e <- 0.01
k <- 1/89193.18

## Combine parameters into a named vector
params <- c(Bi, Bw, e, k)
names(params) <- c('Bi', 'Bw', 'e', 'k')

# Function defining the ODE system
SIRode <- function(t, x, params){
  S <- x[1]
  I <- x[2]
  W <- x[3]
  R <- x[4]

  Bi <- params[1]
  Bw <- params[2]
  e <- params[3]
  k <- params[4]

  dS <- -Bi*S*I - Bw*S*W
  dI <- Bi*S*I + Bw*S*W - 0.25*I
  dW <- e*(I - W)
  dR <- 0.25*I

  list(c(dS, dI, dW, dR))
}

# Set initial conditions
I0 <- data[1]*k
R0 <- 0
S0 <- 1 - I0
W0 <- 0

initCond <- c(S0, I0, W0, R0)

# Simulate the model using our initial parameter guesses
initSim <- ode(initCond, tspan, SIRode, params, method='ode45')

plot(tspan, initSim[,3]/k, type='l')
points(tspan, data)

# Objective function (sum of squared errors) for optimization
LLode <- function(params){
  k <- params[4]

  I0 <- data[1]*k
  R0 <- 0
  S0 <- 1 - I0
  W0 <- 0

  initCond <- c(S0, I0, W0, R0)

  # Run the ODE
  odeOut <- ode(initCond, tspan, SIRode, params, method='ode45')

  # Measurement variable: infected individuals
  y <- odeOut[,3]/k

  diff <- y - data
  LL <- t(diff) %*% diff

  return(LL)
}

# Run the optimization procedure
MLresults <- optim(params, LLode, method='Nelder-Mead')

## Re-simulate the ODE at the estimated parameters
estParms <- MLresults$par

I0est <- data[1]*estParms[4]
S0est <- 1 - I0est
R0 <- 0
W0 <- 0
initCond <- c(S0est, I0est, W0, R0)

odeEstOut <- ode(initCond, tspan, SIRode, estParms, method='ode45')
estY <- odeEstOut[,3]/estParms[4]

plot(tspan, data, pch=16, xlab='Time', ylab='Number Infected')
lines(tspan, estY)
```


Running this code gives me estimates of Bi (0.30), Bw (1.05), e (0.0057), and k (0.0001).

I rounded the values, but the unrounded values are even closer to MATLAB. So far, I’m fairly satisfied.

We can run the same procedure in Python:


```python
import numpy as np
from scipy.optimize import minimize
from scipy import integrate
import pylab as py

tspan = np.array([0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, 98,
                  105, 112, 119, 126, 133, 140, 147, 154, 161])
data = np.array([113, 60, 70, 140, 385, 2900, 4600, 5400, 5300, 6350, 5350,
                 4400, 3570, 2300, 1900, 2200, 1700, 1170, 830, 750, 770,
                 520, 550, 380])

## define the ODE system (states: S, I, W, R)
def SIRode(N, t, Bi, Bw, e, k):
    return (
        -Bi*N[0]*N[1] - Bw*N[0]*N[2],
        +Bi*N[0]*N[1] + Bw*N[0]*N[2] - 0.25*N[1],
        +e*(N[1] - N[2]),
        +0.25*N[1]
    )

# Set initial parameter values
Bi = 0.75
Bw = 0.75
e = 0.01
k = 1/89193.18

# Set initial conditions (S, I, W, R)
I0 = data[0]*k
S0 = 1 - I0
N = [S0, I0, 0, 0]

# Run the ODE with the initial parameter guesses
Nt = integrate.odeint(SIRode, N, tspan, args=(Bi, Bw, e, k))

# Get the second column, corresponding to I
INt = [row[1] for row in Nt]
INt = np.divide(INt, k)

py.clf()
py.plot(tspan, data, 'o')
py.plot(tspan, INt)
py.show()

# Objective function (sum of squared errors)
def LLode(x):
    Bi, Bw, e, k = x

    I0 = data[0]*k
    S0 = 1 - I0
    N0 = [S0, I0, 0, 0]

    Nt = integrate.odeint(SIRode, N0, tspan, args=(Bi, Bw, e, k))

    INt = [row[1] for row in Nt]
    INt = np.divide(INt, k)

    difference = data - INt

    LL = np.dot(difference, difference)

    return LL

# Run the optimizer from the initial guesses
x0 = [Bi, Bw, e, k]
results = minimize(LLode, x0, method='Nelder-Mead')

print(results.x)

estParams = results.x
Bi, Bw, e, k = estParams

# Re-simulate at the estimated parameters
I0 = data[0]*k
S0 = 1 - I0
N = [S0, I0, 0, 0]

Nt = integrate.odeint(SIRode, N, tspan, args=(Bi, Bw, e, k))

INt = [row[1] for row in Nt]
INt = np.divide(INt, k)

py.clf()
py.plot(tspan, data, 'o')
py.plot(tspan, INt)
py.show()
```


Python gave me estimates of Bi (0.297), Bw (1.106), e (0.0057), and k (0.00001).

Overall, I have to say I’m pretty satisfied with the performance of both R and Python. I also didn’t find programming these sorts of optimization procedures in R or Python any more difficult than in MATLAB (discounting the fact that I’m not terribly familiar with MATLAB; then again, I’m only somewhat familiar with Python, so they’re roughly equivalent here). I initially wrote this post because I could not get R to sync up with MATLAB, so I tried it in Python and got the same results as in R. It then turned out that something was wrong with the MATLAB code; once fixed, all three matched up pretty well.

I’ll say it again, R + Python = awesome.

UPDATE:

I’ve had a couple of requests for time comparisons between MATLAB, R, and Python. I don’t have numbers for MATLAB, since I don’t know it well enough.

For Python, if I wrap the minimize call in:

```python
import time

t0 = time.time()
results = minimize(LLode, x0, method='Nelder-Mead')  # run the optimizer
t1 = time.time()
print(t1 - t0)
```


I get 3.17 seconds.

In R, if I use system.time() to time the optim() call, I get about 39 seconds. That pretty much matches my feeling that R is just laboriously slow compared with how quickly Python evaluates the function.