##### Published

July 28, 2014


The following is the presentation that I gave at the Computing in Economics and Finance (CEF) 2014 conference in Oslo.

Note that all Julia packages used in this notebook are installable directly with the package manager. The only exception is the splines package which is installed with: Pkg.clone("https://github.com/EconForge/splines.git").

# Overview of Julia

Julia is a new programming language for technical computing, created at MIT in 2012. Its main features are:

• Syntax close to MATLAB/Octave, with ideas also borrowed from Python, Ruby and Lisp
• Good performance, close to compiled languages, thanks to a just-in-time (JIT) compiler
• Free software / open source
• Designed for parallel and distributed computation
• Easily interfaceable with other languages (C, Fortran, Python)
• Multiple dispatch: ability to define function behavior across many combinations of argument types
• Large number of package extensions, lively community

# Example 1: Hodrick-Prescott filter

Here we show how to implement a Hodrick-Prescott filter in Julia. Recall that, given a timeseries $y_t$ and a smoothing parameter $\lambda$, the filtered trend $\tau_t$ is defined by: $$\min_{\tau}\left(\sum_{t = 1}^T {(y_t - \tau _t )^2 } + \lambda \sum_{t = 2}^{T - 1} {[(\tau _{t+1} - \tau _t) - (\tau _t - \tau _{t - 1} )]^2 }\right)$$

The first-order conditions of this minimization problem can be expressed by the following linear system: $$\begin{pmatrix} 1+\lambda & -2\lambda & \lambda & & & & \\ -2\lambda & 1+5\lambda & -4\lambda & \ddots & & & \\ \lambda & -4\lambda & 1+6\lambda & \ddots & \ddots & & \\ & \ddots & \ddots & \ddots & \ddots & \ddots & \\ & & \ddots & \ddots & 1+6\lambda & -4\lambda & \lambda \\ & & & \ddots & -4\lambda & 1+5\lambda & -2\lambda \\ & & & & \lambda & -2\lambda & 1+\lambda \end{pmatrix} \begin{pmatrix} \tau_1 \\ \vdots \\ \vdots \\ \tau_t \\ \vdots \\ \vdots \\ \tau_T \end{pmatrix} = \begin{pmatrix} y_1 \\ \vdots \\ \vdots \\ y_t \\ \vdots \\ \vdots \\ y_T \end{pmatrix}$$

The following function implements the Hodrick-Prescott filter by computing the filtered trend, given a timeseries and a smoothing parameter:

In [1]:
function hp_filter(y::Vector{Float64}, lambda::Float64)
    n = length(y)
    @assert n >= 4

    diag2 = lambda*ones(n-2)
    diag1 = [ -2lambda; -4lambda*ones(n-3); -2lambda ]
    diag0 = [ 1+lambda; 1+5lambda; (1+6lambda)*ones(n-4); 1+5lambda; 1+lambda ]

    D = spdiagm((diag2, diag1, diag0, diag1, diag2), (-2,-1,0,1,2))

    D\y
end

Out[1]:
hp_filter (generic function with 1 method)


The spdiagm function creates a sparse matrix from its nonzero diagonals. The left-division operator (backslash) is then used to solve the linear system.
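
A minimal standalone sketch of solving a small dense system with the backslash operator (values chosen arbitrarily):

```julia
A = [2.0 1.0; 1.0 3.0]
b = [3.0, 4.0]

x = A \ b      # solves the linear system A*x == b

A*x - b        # the residual is (approximately) zero
```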

Features to be noted:

• Vectors and matrices can be easily constructed with brackets
• Function arguments can be typed: this is optional most of the time, but improves performance
• The type Float64 represents double-precision floating-point numbers (they occupy 64 bits in memory). It is the default type when manipulating real numbers.
• There is no explicit output argument: the value of the last expression is returned as the output
• Multiplication by a numeric literal can be written implicitly, as in monomials (2lambda)
• One can construct tuples using a comma-separated list within parentheses (see the 5-tuples passed as first and second arguments of spdiagm)
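
Two of these points can be checked directly at the REPL (a quick sketch, values arbitrary):

```julia
lambda = 1600.0

2lambda                        # implicit multiplication by a literal: 3200.0
1 + 5lambda                    # mixes as expected with addition: 8001.0

offsets = (-2, -1, 0, 1, 2)    # a 5-tuple, like the one passed to spdiagm
offsets[1]                     # tuples are indexed from 1: -2
```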

Now we construct a timeseries as the simulation over 40 periods of a random walk with Gaussian innovations:

In [2]:
y = cumsum(randn(40))

Out[2]:
40-element Array{Float64,1}:
-1.68294
-1.39345
-2.45808
-0.681115
-0.287251
0.407196
1.23489
1.11527
1.33717
3.67818
⋮
2.43134
3.6512
4.72383
5.12357
4.25745
4.5049
5.14032
6.57948
7.90881


And we extract the filtered trend, with a smoothing parameter of 1600 (typical for quarterly frequency):

In [3]:
tau = hp_filter(y, 1600.)

Out[3]:
40-element Array{Float64,1}:
-0.860596
-0.633992
-0.407901
-0.183313
0.037502
0.251964
0.457287
0.650786
0.830259
0.993796
⋮
3.85658
4.08573
4.32502
4.57284
4.82785
5.08906
5.3551
5.62426
5.89468


Note the dot at the end of 1600, which makes it a floating-point literal. If the dot is omitted, the number is interpreted as an integer, and this leads to an error. This is a consequence of Julia's type system; we will return to it later.

In [4]:
hp_filter(y,1600)

no method hp_filter(Array{Float64,1}, Int64)
while loading In[4], in expression starting on line 1

Now we want to plot the result, using the PyPlot module which is based on Python's matplotlib. Note that a module needs to be loaded only once per Julia session.

In [5]:
using PyPlot

INFO: Loading help data...



Finally let's do the plot:

In [6]:
plot(1:40, y, "r", 1:40, tau, "b")
title("HP-filtered random walk")

Out[6]:
PyObject <matplotlib.text.Text object at 0xb7249d0>


# Example 2: yield-to-maturity computation

Suppose that we want to compute the yield-to-maturity (YTM) of a bond which pays a yearly coupon $C$. The principal is 100, which is due at date $T$ (expressed in years, today is date 0). The bond is priced today at $P$ (excluding accrued interest).

First consider the case when $T$ is an integer. For a given annual discount rate $\delta$, the net present value of the flow of payments is the following: $$NPV(\delta) = \frac{100}{(1+\delta)^T}+ \sum_{t=1}^{T} \frac{C}{(1+\delta)^t}$$

If $T$ is not an integer, and if $\lfloor T \rfloor$ denotes the largest integer less than $T$, then the NPV is: $$NPV(\delta) = \frac{1}{(1+\delta)^{T-\lfloor T \rfloor}} \left( \frac{100}{(1+\delta)^{\lfloor T \rfloor}}+ \sum_{t=1}^{\lfloor T \rfloor} \frac{C}{(1+\delta)^t}\right)$$

The YTM is the value $\delta^*$ that equalizes the NPV with the bond price (including accrued interest), i.e. such that: $$P + (1-T+\lfloor T \rfloor)C = NPV(\delta^*)$$

We now implement this computation in Julia. First, we load two packages that we will need for our purpose. The first is Datetime, useful for date computations, and the second is Roots, which provides a nonlinear zero finder for univariate functions via the fzero function.

In [7]:
using Datetime
using Roots


Now the function which computes the YTM, in percentage points:

In [8]:
function ytm(price::Float64, maturity::Date, coupon::Float64)

    function npv_minus_price(yield)
        # Compute the NPV sum by backward induction
        npv = 100+coupon
        d = maturity - today()
        while d >= year(1)
            npv = coupon + npv/(1+yield/100)
            d -= year(1)
        end
        # Adjustment if the maturity horizon is not an integer number of years
        npv /= (1+yield/100)^(integer(d)/365)

        # Difference between NPV and market price (with accrued interest)
        npv - (price + (1-integer(d)/365)*coupon)
    end

    fzero(npv_minus_price, coupon)
end

Out[8]:
ytm (generic function with 1 method)


The nested function npv_minus_price computes the difference between the NPV for a given yield, and the market price including accrued interest. Then the fzero function computes a zero of the npv_minus_price function, which is the YTM. The zero solver needs an initial value for starting the algorithm, and we use the coupon rate for that purpose.

Features to be noted:

• Functions can be nested. Inner functions inherit the variables of the enclosing function
• The while/end control structure, similar to MATLAB's
• Updating arithmetic operators (-=, /=), which do both a computation and a variable assignment (e.g. x /= 2 is equivalent to x = x/2)
• Type converters: year(), integer()
• Fields of a structure can be accessed with a dot syntax, as in MATLAB (e.g. r.zero)

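
A small standalone illustration of the updating operators:

```julia
x = 10.0

x /= 2    # equivalent to x = x/2, so x is now 5.0
x -= 1    # equivalent to x = x - 1, so x is now 4.0
```
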
In [9]:
ytm(101.4, date("2015-09-01"), 2.)

Out[9]:
0.8190492671015233


The YTM of a bond maturing on September 1, 2015, with a 2% coupon rate and currently priced at 101.4, is 0.82%.

# Types

## Concrete bits types

The default integer type is 32-bit or 64-bit, depending on your machine hardware:

In [10]:
typeof(1)

Out[10]:
Int64


Int is an alias for the default integer type:

In [11]:
Int

Out[11]:
Int64


The default floating point type is 64-bit (equivalent of Fortran's double precision or C's double):

In [12]:
typeof(1.0)

Out[12]:
Float64


## Abstract types

Abstract types do not correspond to actual objects in memory. They are generic categories that include (generally several) concrete types. For example, the abstract type Integer includes both 32-bit and 64-bit concrete integer types (one says that Int32 and Int64 are subtypes of Integer):

In [13]:
Int64 <: Integer

Out[13]:
true

In [14]:
Int32 <: Integer

Out[14]:
true


But 64-bit floating points are not a subtype of Integer:

In [15]:
Float64 <: Integer

Out[15]:
false


## Tuples

Tuples are made of several values enclosed within parentheses and separated by commas. They are useful when you want a function to return several values (possibly of different types).

In [16]:
tpl = (1,true,"foo")

Out[16]:
(1,true,"foo")

In [17]:
typeof(tpl)

Out[17]:
(Int64,Bool,ASCIIString)


Tuple elements are indexed with brackets, and indices start at 1:

In [18]:
tpl[2]

Out[18]:
true


Tuples can be used for multiple variable assignments:

In [19]:
(a, b, c) = tpl

Out[19]:
(1,true,"foo")
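
A handy consequence of multiple assignment is that two variables can be swapped without a temporary (a quick sketch):

```julia
a, b = 1, 2
a, b = b, a    # the right-hand side tuple is built before the assignment

(a, b)         # now (2, 1)
```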


A special case is the tuple containing only one value. Note the syntax with parentheses and a trailing comma.

In [20]:
typeof((1,))

Out[20]:
(Int64,)


## Array types, matrices

A vector of floats:

In [21]:
[1., 2., 3.]

Out[21]:
3-element Array{Float64,1}:
1.0
2.0
3.0


A vector of integers:

In [22]:
[1, 2, 3]

Out[22]:
3-element Array{Int64,1}:
1
2
3


Note that Array{T,n} is a parametric type: the parameter T determines the type of objects inside the array, and the parameter n determines the dimensionality.

Vector{T} is an alias for Array{T,1}.
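
The alias can be checked directly (a quick sketch):

```julia
Vector{Float64} === Array{Float64,1}   # true: the two names denote the same type
```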

A matrix of floats:

In [23]:
[1. 2.; 3. 4.]

Out[23]:
2x2 Array{Float64,2}:
1.0  2.0
3.0  4.0


Matrix{T} is an alias for Array{T,2}.

An uninitialized array is created in the following way (beware: elements are not guaranteed to be zero, and can be anything):

In [24]:
X = Array(Float64, 10, 5)

Out[24]:
10x5 Array{Float64,2}:
5.66871e-316  5.66872e-316  5.66874e-316  5.66875e-316  5.66877e-316
5.66871e-316  5.66872e-316  5.66873e-316  5.66875e-316  5.66877e-316
5.66871e-316  5.66873e-316  5.66874e-316  5.66875e-316  5.66879e-316
5.66871e-316  5.66873e-316  5.66874e-316  5.66875e-316  5.66879e-316
5.66871e-316  5.66873e-316  5.66874e-316  5.66879e-316  5.66879e-316
5.66871e-316  5.66873e-316  5.66874e-316  5.66875e-316  5.66879e-316
5.66872e-316  5.66874e-316  5.66874e-316  5.66875e-316  5.66879e-316
5.66871e-316  5.66873e-316  5.66874e-316  5.66876e-316  1.59572e-316
5.66872e-316  5.66874e-316  5.66875e-316  5.66876e-316  1.68837e-316
5.66872e-316  5.66874e-316  5.66875e-316  5.66876e-316  2.16946e-316


The size of the array is returned as a tuple:

In [25]:
size(X)

Out[25]:
(10,5)


Array elements are indexed with brackets, and indices start at 1:

In [26]:
X[10,2]

Out[26]:
5.6687377e-316


Multiplication and division operators on matrices perform linear algebra operations:

In [27]:
A = [ 1. 2.; 3. 4. ]
B = [ 5. 6. ]'
A * B

Out[27]:
2x1 Array{Float64,2}:
17.0
39.0

In [28]:
A \ B

Out[28]:
2x1 Array{Float64,2}:
-4.0
4.5


Element-by-element operations are available under .+, .-, .*, ./:

In [29]:
A .* A

Out[29]:
2x2 Array{Float64,2}:
1.0   4.0
9.0  16.0


It is also possible to create vectors using comprehension syntax, which is similar to the mathematical syntax for describing sets:

In [30]:
[ 2x^2 + 1 for x = 1:5 ]

Out[30]:
5-element Array{Int64,1}:
3
9
19
33
51
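
Comprehensions also extend to several dimensions; iterating over two ranges produces a matrix (a small sketch):

```julia
[ 10i + j for i = 1:2, j = 1:3 ]   # a 2x3 matrix of integers
```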


## Composite types

Composite types are records (also called structures or objects in other languages) containing several named fields. For example, one can create a Timeseries type, which is the aggregation of a start date, a frequency, and an array of data:

In [31]:
type Timeseries
    start::Date
    frequency::Integer
    data::Vector{Float64}
end

In [32]:
x = Timeseries(date("2014-06-21"), 12, [ 1., 2., 3. ])

Out[32]:
Timeseries(2014-06-21,12,[1.0,2.0,3.0])


It is possible to read and write members of the composite type using the dot operator:

In [33]:
x.data

Out[33]:
3-element Array{Float64,1}:
1.0
2.0
3.0

In [34]:
x.frequency = 4

Out[34]:
4


It is also possible to create composite types whose members cannot be modified, by replacing the keyword type with immutable. Such types are generally more efficient, especially if they are packed inside arrays.

## Type annotations

Type annotations can be used at different places:

1. in the declaration of composite types (see above)
2. for qualifying function arguments (see above)
3. to assert that something is of a given type
4. to indicate the type of a variable (only local variables in functions, not for global variables)

Note that type annotations are always optional (except for function arguments when doing multiple dispatch, see below). Type annotations can increase performance when used judiciously, see below.

Here is an example of type assertions:

In [35]:
1::Integer

Out[35]:
1

In [36]:
1.0::Integer

type: typeassert: expected Integer, got Float64
while loading In[36], in expression starting on line 1

Here is an example of typed local variables:

In [37]:
function f(x::Float64)
    y::Integer = floor(x)
    2y
end

Out[37]:
f (generic function with 1 method)

In [38]:
f(1.)

Out[38]:
2


# Functions, methods

## Functions

Functions can be constructed using three syntaxes:

• with the function keyword
• using a compact “assignment form” when the function body is simple
• as anonymous function (i.e. not associated to a function name), with the arrow syntax (->)
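
For comparison, here is the same squaring function written in the three styles (a standalone sketch, names invented):

```julia
function f1(x)
    x^2
end

f2(x) = x^2          # compact "assignment form"

f3 = x -> x^2        # anonymous function, here bound to a variable

(f1(3), f2(3), f3(3))   # all three give 9
```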

Here is an example of compact function declaration:

In [39]:
g(x) = 2x^2-3x+1

Out[39]:
g (generic function with 1 method)

In [40]:
g(5)

Out[40]:
36


Here is an example of an anonymous polynomial function, for which we search for a zero:

In [41]:
fzero(x -> 2x^2-3x, 1.)

Out[41]:
1.5


Functions are also objects of the language, and have a type. This can be checked on an anonymous function:

In [42]:
typeof(x -> x+1)

Out[42]:
Function


One interesting feature when declaring functions is the ability to declare optional arguments that are referenced by a keyword (as in R). These keyword arguments must be listed after a semicolon in the function declaration, and should be given a default value:

In [43]:
function utility(c; gamma = 2.0)
    c^(1-gamma)/(1-gamma)
end

utility(2)

Out[43]:
-0.5

In [44]:
utility(2, gamma = 3.0)

Out[44]:
-0.125


## Methods and multiple dispatch

In Julia, a given function can behave differently depending on the types of its arguments. In this case, the different implementations are called methods, and the process by which a given method is chosen depending on the argument types is called multiple dispatch. For example, the square root function has several methods:
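
As a minimal sketch of dispatch on argument types (the function name is invented for illustration):

```julia
describe(x::Number) = "a number"
describe(x::String) = "a string"

describe(1.5)     # selects the Number method: "a number"
describe("foo")   # selects the String method: "a string"
```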

In [45]:
methods(sqrt)

Out[45]:
13 methods for generic function sqrt:

Now we define addition over two timeseries, using the Timeseries type defined above. For the sake of simplicity, we require that the two timeseries have the same start date, the same frequency and the same length.

In [46]:
function +(a::Timeseries, b::Timeseries)
    if a.start != b.start
        error("Arguments do not have the same start date")
    end
    if a.frequency != b.frequency
        error("Arguments do not have the same frequency")
    end
    if length(a.data) != length(b.data)
        error("Arguments do not have the same data length")
    end
    Timeseries(a.start, a.frequency, a.data .+ b.data)
end

Out[46]:
+ (generic function with 175 methods)


Now we can add two timeseries using the most natural syntax:

In [47]:
Timeseries(today(), 12, [1., 2., 3]) + Timeseries(today(), 12, [4., 6., 9.])

Out[47]:
Timeseries(2014-06-21,12,[5.0,8.0,12.0])


Note that multiple dispatch works not only on existing operators, but also on user-defined functions.

# Performance

Julia has good performance, close to compiled languages like Fortran or C, and often much better than MATLAB. This is evidenced by micro benchmarks of several simple algorithms across several languages (including C, Fortran, Python, MATLAB, R).

The good performance of Julia comes from its use of a just-in-time (JIT) compiler: all the code that you write in Julia is invisibly compiled on-the-fly by the interpreter.

To be effective, the JIT compiler needs to determine the types of variables and function arguments. Most of the time, types are automatically inferred by Julia in an efficient way. But sometimes the compiler needs some help, and this is where type annotations become useful, when put at strategic places. A typical Julia workflow is therefore to prototype a first version of the code without type annotations; then, in a second pass, the code is optimized by adding type annotations at strategic places.

As a general rule, type annotations are useful within composite type declarations (i.e. within type and immutable blocks), in function arguments, and in some function local variables. Global variables (i.e. variables declared/assigned outside functions) should generally be avoided, or should be declared as constants (using the const keyword, like in const a = 1), because they currently cannot be typed.
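
A sketch of this rule: the global below is declared const so that the function body can be compiled with known types (names invented for illustration):

```julia
const N = 1000    # a const global: its type is known to the compiler

function sumsq()
    s = 0
    for i = 1:N   # reading N here is fast because it is const
        s += i^2
    end
    s
end

sumsq()   # 333833500
```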

For more details on optimization, the Julia manual has some performance tips.


# Exercise: Solving a rational-expectations model with value function iteration

We are going to solve a model by Aguiar and Gopinath (2006): “Defaultable debt, interest rates and the current account”, Journal of International Economics, 69(1). It is model II of the paper: a model of strategic sovereign default with a stochastic growth trend. The solution method is value function iteration, with approximation of the value functions by cubic splines.

For solving the model, we will need to integrate expectations with respect to Gaussian shocks. We will therefore use the Gauss-Hermite quadrature, which is well-suited to that case. The quadrature approximates an integral by a finite weighted sum. More precisely, the Gauss-Hermite quadrature is: $$\int_{-\infty}^{+\infty} f(x)e^{-x^2} dx \approx \sum_{i=1}^{n} w_i f(x_i)$$ where $n$ determines the precision of the approximation (the higher $n$, the more precise), the $x_i$ are the roots of the $n$-th order Hermite polynomial $H_n(x)$, and the associated weights are: $$w_i = \frac{2^{n-1} n! \sqrt{\pi}}{n^2 [H_{n-1}(x_i)]^2}$$

In turn, the Hermite polynomials are defined by the following recursive equations (where $H'$ denotes the derivative of $H$): $$H_0(x) = 1$$ $$H_n(x) = 2xH_{n-1}(x) - H'_{n-1}(x)$$

We are now going to implement the computation of the points and weights as a function of $n$. For that purpose, we will use the Polynomial package (see its documentation for details).

In [48]:
using Polynomial


Now, write a function that, given $n$, creates $H_n$. You will need the Poly function to create basic polynomials, the polyder function to compute the derivative of a polynomial. Standard addition and multiplication work on polynomials as expected. Use a for loop to do the recursion.

In [49]:
function compute_hermite_polynomial(n)
    P = Poly([1])
    const x = Poly([1; 0])
    for i = 1:n
        P = 2x*P - polyder(P)
    end
    P
end

Out[49]:
compute_hermite_polynomial (generic function with 1 method)


Verify that $H_5(x) = 32x^5-160x^3+120x$:

In [50]:
compute_hermite_polynomial(5)

Out[50]:
Poly(32x^5 - 160x^3 + 120x)


Now, write a function that, given $n$, returns a tuple consisting of the two vectors $x_i$ and $w_i$. Use the roots function to compute the roots of a polynomial, the polyval function to evaluate a polynomial, and the factorial function. You can use a comprehension to create the vector of weights. Also, you can use a type annotation to indicate to Julia that the roots of the polynomial are real, and not complex.

In [51]:
function compute_hermite_points(n)
    P = compute_hermite_polynomial(n)
    P2 = compute_hermite_polynomial(n-1)

    points::Vector{Float64} = roots(P) # We know that the roots are real

    weights = [ 2^(n-1)*factorial(n)*sqrt(pi)/n^2/polyval(P2, points[i])^2 for i = 1:n ]

    points, weights
end

Out[51]:
compute_hermite_points (generic function with 1 method)


Now, verify that, for $n=3$, one has: $(x_1,w_1) \approx (-1.22,0.29)$, $(x_2,w_2) \approx (0,1.18)$, $(x_3,w_3) \approx (1.22,0.29)$:

In [52]:
compute_hermite_points(3)

Out[52]:
([-1.22474,1.22474,0.0],[0.295409,0.295409,1.18164])
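
As a sanity check, these values match the known closed-form nodes and weights for $n=3$, and the quadrature recovers moments of the Gaussian weight exactly (a self-contained sketch using the closed-form values):

```julia
# Closed-form Gauss-Hermite nodes and weights for n = 3
x = [-sqrt(3/2), 0.0, sqrt(3/2)]
w = [sqrt(pi)/6, 2sqrt(pi)/3, sqrt(pi)/6]

# With f(x) = 1, the rule recovers the integral of e^(-x^2), which is sqrt(pi)
isapprox(sum(w), sqrt(pi))

# With f(x) = x^2, it recovers sqrt(pi)/2 (low-order polynomials are integrated exactly)
isapprox(sum(w .* x.^2), sqrt(pi)/2)
```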


## The model

We consider a country with a representative agent, producing and consuming a single homogeneous good. The production stream is exogenous and has a stochastic growth trend.

At every period, the difference between production and consumption is financed on international markets. The country therefore accumulates debt (or assets) which is short-term (one period), and can decide to default or repay at every period. If it repays, access to financial markets continues. If it defaults, it is barred from financial markets (and therefore, being in autarky, has to consume what it produces), and moreover suffers from a penalty (proportional to GDP); exclusion from financial markets can last several periods, and terminates with a given probability at each period.

International investors are risk-neutral and in perfect competition. The interest rate that the country faces is therefore equal to the international riskless rate (assumed to be constant), plus a risk premium equal to the model-consistent risk of default in the next period.

Let's now describe the model equations. Since the model has a stochastic trend, equations and variables are in detrended form. The two state variables are $y_t$ the (log of the gross) growth rate and $a_t$ the detrended assets due at date $t$.

The law of motion of $y_t$ is: $$y_t = (1-\rho_g)\mu_y +\rho_g y_{t-1} + \varepsilon_t$$ where $\varepsilon_t \sim \mathcal{N}(0, \sigma_g)$ and $\mu_y = \ln(\mu_g) - \frac{\sigma_g^2}{2(1-\rho_g^2)}$.
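
For intuition, this law of motion can be simulated directly (a sketch; the helper name and start value are invented, and the parameter values anticipate the calibration below):

```julia
function simulate_y(T, mu_y, rho_g, sigma_g)
    y = zeros(T)
    yprev = mu_y                  # start at the unconditional mean
    for t = 1:T
        # AR(1) law of motion with Gaussian innovation of std dev sigma_g
        yprev = (1-rho_g)*mu_y + rho_g*yprev + sigma_g*randn()
        y[t] = yprev
    end
    y
end

path = simulate_y(200, 0.0055, 0.17, 0.03)
```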

Detrended GDP as a function of $y_t$ is: $$Q_t = \frac{e^{y_t}}{\mu_g}$$

We now describe the value function of the country. The instantaneous utility function is: $$u(c) = \frac{c^{1-\gamma}}{1-\gamma}$$ and the discount factor is $\beta$.

$V^G(a_t, y_t)$ is the value function if the country repays, $V^B(y_t)$ is the value function under autarky (after a default), and $V(a_t,y_t)$ is the optimal choice between repayment and default, which is given by: $$V(a_t,y_t) = \max\{ V^G(a_t,y_t), V^B(y_t) \}$$

The value function under autarky depends on $\delta$ (the penalty) and $\lambda$ (the probability of redemption): $$V^B(y_t) = u\left[(1-\delta)Q_t\right] + \beta e^{y_t(1-\gamma)}\left[\lambda \mathbb{E}_t V^G(0, y_{t+1}) + (1-\lambda) \mathbb{E}_t V^B(y_{t+1})\right]$$

The value function under repayment depends on $q(y_t, a_{t+1})$, which is the price at which investors accept to buy bonds due tomorrow: $$V^G(a_t, y_t) = \max_{a_{t+1}} \left\{u\left[Q_t + a_t - a_{t+1} q(y_t, a_{t+1}) e^{y_t}\right] + \beta e^{y_t(1-\gamma)} \mathbb{E}_t V(a_{t+1}, y_{t+1})\right\}$$

Finally, the price function of bonds is: $$q(y_t, a_{t+1}) = \frac{\mathbb{P}(y_{t+1} \geq y^*(a_{t+1}) | y_t)}{1+r}$$ where $r$ is the riskless interest rate, and $y^*(a_{t+1})$ is the default threshold defined by $V^G(a_{t+1}, y^*(a_{t+1})) = V^B(y^*(a_{t+1}))$.

## Calibration

We are going to use the following (quarterly) calibration: $\gamma = 2$, $r = 0.01$, $\beta=0.8$, $\lambda=0.1$, $\delta=0.02$, $\mu_g = 1.006$, $\sigma_g = 0.03$, $\rho_g = 0.17$.

Assign each parameter to a constant variable of the same name. Also create a variable for $\mu_y$, using the relationship shown above.

In [53]:
const gam = 2.0
const r = 0.01
const beta = 0.8
const lambda = 0.1
const delta = 0.02
const mu_g = 1.006
const sigma_g = 0.03
const rho_g = 0.17
const mu_y = log(mu_g)-0.5*sigma_g^2/(1-rho_g^2)

Out[53]:
0.005518679647890379


## Utility and GDP

Using the short form syntax, create functions for computing the instantaneous utility (as a function of consumption) and GDP (as a function of $y_t$):

In [54]:
util(c) = c^(1-gam)/(1-gam)

Out[54]:
util (generic function with 1 method)

In [55]:
gdp(y) = exp(y)/mu_g

Out[55]:
gdp (generic function with 1 method)


## Grid

As explained above, the two state variables are $y_t$ and $a_t$. Since we are going to interpolate with cubic splines, we have to decide on a grid on which to make the approximation. We will use the interval $[-0.3,0]$ for $a_t$, with 15 points, and the interval $\left[\mu_y-\frac{6\sigma_g}{\sqrt{1-\rho_g^2}}, \mu_y+\frac{6\sigma_g}{\sqrt{1-\rho_g^2}}\right]$ for $y_t$, with 10 points.

Create variables min_a, max_a, ngrid_a and min_y, max_y, ngrid_y. Then create vectors grid_a and grid_y, using the linspace function.

In [56]:
const min_a = -0.3
const max_a = 0.0
const ngrid_a = 15

const min_y = mu_y - 6*sigma_g/sqrt(1-rho_g^2)
const max_y = mu_y + 6*sigma_g/sqrt(1-rho_g^2)
const ngrid_y = 10

const grid_a = linspace(min_a, max_a, ngrid_a)
const grid_y = linspace(min_y, max_y, ngrid_y)

Out[56]:
10-element Array{Float64,1}:
-0.17714
-0.136549
-0.0959584
-0.0553676
-0.0147767
0.0258141
0.0664049
0.106996
0.147587
0.188177


Now, let's give initial values for $V^G$ and $V^B$ on the grid points. We will use $V^B(y_t) = u((1-\delta)Q_t)$ and $V^G(a_t, y_t) = u(Q_t + a_t)$.

Create a vector VB_points and a matrix VG_points initialized with these values. For VB_points, you can use a comprehension. Declare both variables as constant; note that this means that the variable will always point to the same array, but this does not prevent modification of array elements.

In [57]:
const VB_points = [ util((1-delta)*gdp(y)) for y in grid_y ]

const VG_points = Array(Float64, ngrid_a, ngrid_y)

for iy = 1:ngrid_y, ia = 1:ngrid_a
    VG_points[ia, iy] = util(gdp(grid_y[iy])+grid_a[ia])
end


## Splines

To interpolate the value function between grid points, we will use cubic splines, available in the splines package.

In [58]:
using splines


The following example shows how to approximate the absolute value function by a 5-point spline over $[-1, 1]$. For 2-dimensional approximation the principle is the same: the arguments to filter_coeffs and eval_UC_spline should be vectors of 2 elements (or matrices for the point values or the coefficients).

In [59]:
# Create a small grid
mygrid = linspace(-1., 1., 5)

# Value of the function at grid points
mypoints = Float64[ abs(x) for x in mygrid ]

# Compute coefficients of the spline interpolation
mycoeffs = filter_coeffs([-1.], [1.], [5], mypoints)

# Create a function that computes the interpolation at any point, using the coefficients
interp(x) = getindex(eval_UC_spline([-1.], [1.], [5], mycoeffs, [x]), 1)

# Plot the absolute function and its spline approximation on a finer grid
myfinergrid = linspace(-1., 1., 100)
plot(myfinergrid, [ interp(x) for x in myfinergrid ], "r", myfinergrid, abs(myfinergrid), "b")

Out[59]:
2-element Array{Any,1}:
PyObject <matplotlib.lines.Line2D object at 0x387ce10>
PyObject <matplotlib.lines.Line2D object at 0x387cf10>


Now, create constant arrays VG_coeffs and VB_coeffs for storing the splines approximations, using the filter_coeffs function and the arrays VG_points and VB_points.

In [60]:
const VG_coeffs = filter_coeffs([min_a, min_y], [max_a, max_y], [ngrid_a, ngrid_y], VG_points)
const VB_coeffs = filter_coeffs([min_y], [max_y], [ngrid_y], VB_points)

Out[60]:
12-element Array{Float64,1}:
-1.27463
-1.22547
-1.17631
-1.12963
-1.08467
-1.04153
-1.0001
-0.96031
-0.922131
-0.885372
-0.850447
-0.815523


Write functions VG(a, y) and VB(y) that evaluate the spline at any point in the state space. Warning: for VG, the last argument in the call to eval_UC_spline must be [a, y]' (note the transpose, needed to create a $1\times 2$ matrix).

In [61]:
VG(a, y) = getindex(eval_UC_spline([min_a, min_y], [max_a, max_y], [ngrid_a, ngrid_y], VG_coeffs, [a, y]'), 1)

Out[61]:
VG (generic function with 1 method)

In [62]:
VB(y) = getindex(eval_UC_spline([min_y], [max_y], [ngrid_y], VB_coeffs, [ y ]), 1)

Out[62]:
VB (generic function with 1 method)


Verify that VB(0.01) is approximately -1.01631 and that VG(-0.01, 0.02) is approximately -0.9959.

In [63]:
VB(0.01)

Out[63]:
-1.016317205011133

In [64]:
VG(-0.01, 0.02)

Out[64]:
-0.9959422296759999


Finally, write the function V that is the maximum of VB and VG:

In [65]:
V(a, y) = max(VG(a, y), VB(y))

Out[65]:
V (generic function with 1 method)


## Bond price function

First, here is how to compute the probability that a variable following $\mathcal{N}(\mu_y, \sigma_g)$ is less than $2\mu_y$, using the Distributions package:

In [66]:
using Distributions

cdf(Normal(mu_y, sigma_g), 2mu_y)

Out[66]:
0.5729760090490047


Now write the bond price function q(y, a1) (where a1 denotes $a_{t+1}$). The idea is to compute tomorrow's default threshold $y^*$ (using fzero on the difference between VG and VB), and then to compute the corresponding default probability with the cdf and Normal functions.

In order to speed up the computation, you can treat special cases separately:

• if the assets are positive, the risk premium is zero
• if for a large value of $y_{t+1}$ (like 6 standard deviations above the mean), default is still preferable to repayment, then the bond price is approximately zero
• if for a small value of $y_{t+1}$ (like 6 standard deviations below the mean), repayment is still preferable to default, then the risk premium is approximately zero

function q(y, a1)
    if a1 >= 0
        return 1/(1+r)
    end

    const cond_mean = (1-rho_g)*mu_y + rho_g*y

    const width = 6.0
    const upper_y1 = cond_mean + width*sigma_g
    const lower_y1 = cond_mean - width*sigma_g

    if VG(a1, lower_y1) >= VB(lower_y1)
        return 1/(1+r)
    end
    if VG(a1, upper_y1) <= VB(upper_y1)
        return 0.0
    end

    y1_threshold = fzero(y1 -> VG(a1, y1) - VB(y1), lower_y1, upper_y1)

    Erepay1 = 1-cdf(Normal(cond_mean, sigma_g), y1_threshold)
    Erepay1/(1+r)
end

Out[67]:
q (generic function with 1 method)


## Bellman equation

We are now going to precompute points and weights for the quadrature. Define a constant containing the number of quadrature points, which we will set to 16. Then compute the Hermite points and weights using the function created previously.

In [68]:
const gauss_npoints = 16
gauss_points, gauss_weights = compute_hermite_points(gauss_npoints)

Out[68]:
([-0.273481,0.273481,-0.822951,0.822951,-1.38026,1.38026,-1.95179,-2.5462,-3.177,-3.86945,-4.68874,1.95179,2.5462,3.177,3.86945,4.68874],[0.507929,0.507929,0.280647,0.280647,0.08381,0.08381,0.0128803,0.000932284,2.71186e-5,2.32098e-7,2.65481e-10,0.0128803,0.000932284,2.71186e-5,2.32098e-7,2.65481e-10])


Now, if $x_i$ and $w_i$ are the Hermite points and weights, and given that $\varepsilon_t \sim \mathcal{N}(0, \sigma_g)$, the following approximation holds (using the change of variable $y = \sigma_g\sqrt{2}x$): $$\mathbb{E}_t f(\varepsilon_{t+1}) = \int_{-\infty}^{+\infty} \frac{1}{\sigma_g\sqrt{2\pi}}f(y)e^{-\frac{y^2}{2\sigma_g^2}}dy = \int_{-\infty}^{+\infty} \frac{1}{\sqrt{\pi}} f(\sigma_g\sqrt{2}x)e^{-x^2}dx \approx \sum_{i=1}^n \frac{w_i}{\sqrt{\pi}} f(\sigma_g\sqrt{2} x_i) = \sum_{i=1}^n w'_i f(x'_i)$$ where $w'_i=\frac{w_i}{\sqrt{\pi}}$ and $x'_i=\sigma_g\sqrt{2} x_i$.

Compute the vector of $x'_i$ and $w'_i$.

In [69]:
gauss_weights /= sqrt(pi)
gauss_points *= sqrt(2) * sigma_g

Out[69]:
16-element Array{Float64,1}:
-0.0116028
0.0116028
-0.0349149
0.0349149
-0.0585594
0.0585594
-0.0828074
-0.108026
-0.134789
-0.164167
-0.198926
0.0828074
0.108026
0.134789
0.164167
0.198926


Write the function VB_next(y) that, given $y_t$, computes $u\left[(1-\delta)Q_t\right] + \beta e^{y_t(1-\gamma)}\left[\lambda \mathbb{E}_t V^G(0, y_{t+1}) + (1-\lambda) \mathbb{E}_t V^B(y_{t+1})\right]$ (the right hand side in the Bellman equation defining $V^B$).

In [70]:
function VB_next(y)
    const cond_mean = (1-rho_g)*mu_y + rho_g*y
    EVB1 = 0.0
    EVG1 = 0.0
    for i = 1:gauss_npoints
        EVB1 += gauss_weights[i]*VB(cond_mean + gauss_points[i])
        EVG1 += gauss_weights[i]*VG(0.0, cond_mean + gauss_points[i])
    end
    util((1-delta)*gdp(y))+beta*exp(y*(1-gam))*(lambda*EVG1 + (1-lambda)*EVB1)
end

Out[70]:
VB_next (generic function with 1 method)
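The expectation step inside VB_next can be checked in isolation: for an AR(1) process $y_{t+1} = (1-\rho)\mu + \rho y_t + \varepsilon_{t+1}$, applying the quadrature sum to $f(y') = y'$ must return the conditional mean itself, since the nodes are symmetric. A minimal Python sketch (the values of `rho`, `mu` and `sigma_g` are illustrative assumptions):

```python
import numpy as np

rho, mu, sigma_g = 0.8, 0.0, 0.025   # illustrative AR(1) parameters (assumptions)
x, w = np.polynomial.hermite.hermgauss(16)
wp = w / np.sqrt(np.pi)              # w'_i = w_i/sqrt(pi)
xp = sigma_g * np.sqrt(2.0) * x      # x'_i = sigma_g*sqrt(2)*x_i

def cond_expectation(f, y):
    """E_t[f(y_{t+1})] by Gauss-Hermite quadrature, mirroring the loop in VB_next."""
    cond_mean = (1 - rho) * mu + rho * y
    return np.sum(wp * f(cond_mean + xp))

# With f the identity, the quadrature returns the conditional mean (1-rho)*mu + rho*y
y = 0.1
ev = cond_expectation(lambda z: z, y)
print(ev)   # ≈ 0.08
```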


Write VG_objective(a1, a, y) which, given $a_{t+1}$, $a_t$ and $y_t$, computes $u\left[Q_t + a_t - a_{t+1} q(y_t, a_{t+1}) e^{y_t}\right] + \beta e^{y_t(1-\gamma)} \mathbb{E}_t V(a_{t+1}, y_{t+1})$ (the maximization objective in the right hand side of the Bellman equation defining $V^G$).

In [71]:
function VG_objective(a1, a, y)
    const cond_mean = (1-rho_g)*mu_y + rho_g*y
    EV1 = 0.0
    for i = 1:gauss_npoints
        EV1 += gauss_weights[i]*V(a1, cond_mean + gauss_points[i])
    end
    util(gdp(y)+a-exp(y)*a1*q(y,a1)) + beta*exp(y*(1-gam))*EV1
end

Out[71]:
VG_objective (generic function with 1 method)


## Main loop¶

First, load the Optim package that will be used for optimization of the value function objective (see the package documentation).

In [72]:
using Optim


Here is a simple example of minimization:

In [73]:
optres = optimize(x -> 2x^2+1, -2., 2.)
optres.f_minimum

Out[73]:
1.0
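For comparison, the same bracketed univariate minimization can be sketched in Python with SciPy's bounded scalar minimizer (a Brent-type method on an interval, analogous to what Optim does here):

```python
from scipy.optimize import minimize_scalar

# Minimize 2x^2 + 1 on [-2, 2], mirroring optimize(x -> 2x^2+1, -2., 2.)
res = minimize_scalar(lambda x: 2 * x**2 + 1, bounds=(-2.0, 2.0), method="bounded")
print(res.fun)   # ≈ 1.0, attained near x = 0
```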


Create constant variables to store the maximum number of iterations (set to 1000), and the convergence criterion expressed as a tolerance (set to $10^{-6}$).

Then create an iteration counter (initialized to 1), and a variable to store the current error (initialized to $+\infty$).

In [74]:
const max_it = 1000
const convergence_tol = 1e-6

it = 1
err = Inf

Out[74]:
Inf


Create two uninitialized arrays VG_points_next and VB_points_next, of the same sizes as VG_points and VB_points respectively, that will temporarily store the values of the value functions at the grid points for the next iteration.

In [75]:
const VG_points_next = Array(Float64, ngrid_a, ngrid_y)
const VB_points_next = Array(Float64, ngrid_y)

Out[75]:
10-element Array{Float64,1}:
1.15867e-315
1.59897e-316
1.15865e-315
1.599e-316
0.0
3.24181e178
0.0
0.0
2.82994e-316
1.59569e-316


Now create the main computational loop:

• use a while construct, which iterates as long as the error is greater than the tolerance and the iteration counter does not exceed the maximum number of iterations
• in an embedded loop over the $y_t$ grid:
  • compute the value of $V^B$ at the next iteration using VB_next, and store it in VB_points_next
  • in an embedded loop over the $a_t$ grid, optimize VG_objective over $a_{t+1}$, and store the result in VG_points_next
• update the error, which is the max of the (infinity norm) distance between VG_points and VG_points_next on one hand, and VB_points and VB_points_next on the other hand
• using copy!, update VG_points and VB_points with the new values
• using copy! and filter_coeffs, update the spline coefficients
• using print, display the iteration counter and the current error
• increment the iteration counter
In [76]:
while err > convergence_tol && it <= max_it

    for iy = 1:ngrid_y
        const y = grid_y[iy]

        VB_points_next[iy] = VB_next(y)

        for ia = 1:ngrid_a
            a = grid_a[ia]

            optres = optimize(a1 -> -VG_objective(a1, a, y), min_a, max_a)

            VG_points_next[ia,iy] = -optres.f_minimum
        end
    end

    # Compute error
    err = max(maximum(abs(VB_points_next-VB_points)),
              maximum(abs(VG_points_next-VG_points)))

    # Copy
    copy!(VB_points, VB_points_next)
    copy!(VG_points, VG_points_next)

    copy!(VG_coeffs, filter_coeffs([min_a, min_y], [max_a, max_y], [ngrid_a, ngrid_y], VG_points))
    copy!(VB_coeffs, filter_coeffs([min_y], [max_y], [ngrid_y], VB_points))

    # Output
    print("Iteration ", it, ": error=", err, "\n")

    it += 1

    if it == max_it+1
        warn("Maximum iterations reached")
    end
end

Iteration 1: error=1.0041713125165022
Iteration 2: error=0.801911659390004
Iteration 3: error=0.6374699128959649
Iteration 4: error=0.5062513718398653
Iteration 5: error=0.4018791354115798
Iteration 6: error=0.31891357862109615
Iteration 7: error=0.25297866802352775
Iteration 8: error=0.20058697469548115
Iteration 9: error=0.1589638921948282
Iteration 10: error=0.1259025482336753
Iteration 11: error=0.09964777648076506
Iteration 12: error=0.07880392359461208
Iteration 13: error=0.06226118410997916
Iteration 14: error=0.04913698952575274
Iteration 15: error=0.038729494104838835
Iteration 16: error=0.030480594751913337
Iteration 17: error=0.023946490346786753
Iteration 18: error=0.018774284310604017
Iteration 19: error=0.014683389853683337
Iteration 20: error=0.011450740774860968
Iteration 21: error=0.008899026488795592
Iteration 22: error=0.006887331103732741
Iteration 23: error=0.00530368311863505
Iteration 24: error=0.004059129084348356
Iteration 25: error=0.0030830195487663303
Iteration 26: error=0.002319258588777906
Iteration 27: error=0.0017233206692859326
Iteration 28: error=0.0012598776116590216
Iteration 29: error=0.00125530439216881
Iteration 30: error=0.0013375473039021202
Iteration 31: error=0.0013732084183635251
Iteration 32: error=0.0013744389628147502
Iteration 33: error=0.0013506752063703331
Iteration 34: error=0.001309210101578806
Iteration 35: error=0.0012556490600745818
Iteration 36: error=0.0011942780897271632
Iteration 37: error=0.0011283467110727585
Iteration 38: error=0.0010602923428963962
Iteration 39: error=0.000991922615516394
Iteration 40: error=0.0009245463151428268
Iteration 41: error=0.000859104457682669
Iteration 42: error=0.0007962393156084602
Iteration 43: error=0.0007363717334483155
Iteration 44: error=0.0006797526865005565
Iteration 45: error=0.0006265056708425476
Iteration 46: error=0.0005766596133458535
Iteration 47: error=0.0005301927627110459
Iteration 48: error=0.00048696559545735596
Iteration 49: error=0.0004468969858919536
Iteration 50: error=0.00040983078257017524
Iteration 51: error=0.00037560611937337285
Iteration 52: error=0.000344055777519614
Iteration 53: error=0.00031501057207794503
Iteration 54: error=0.0002883028523150699
Iteration 55: error=0.0002637690844773388
Iteration 56: error=0.00024125166039112855
Iteration 57: error=0.00022060010275115616
Iteration 58: error=0.00020167181130581469
Iteration 59: error=0.0001843324598480578
Iteration 60: error=0.00016845613600047216
Iteration 61: error=0.00015392529127122856
Iteration 62: error=0.00014063055306312577
Iteration 63: error=0.00012847044165109622
Iteration 64: error=0.00011735102277565801
Iteration 65: error=0.00010718551997346282
Iteration 66: error=9.789390466163894e-5
Iteration 67: error=8.940247763167264e-5
Iteration 68: error=8.164345174499488e-5
Iteration 69: error=7.455454309379661e-5
Iteration 70: error=6.807857565149789e-5
Iteration 71: error=6.216310289097748e-5
Iteration 72: error=5.6760048407156205e-5
Iteration 73: error=5.182536688064232e-5
Iteration 74: error=4.731872589314179e-5
Iteration 75: error=4.32032084969336e-5
Iteration 76: error=3.9445036287943935e-5
Iteration 77: error=3.601331230562721e-5
Iteration 78: error=3.287978279686854e-5
Iteration 79: error=3.0018617207971943e-5
Iteration 80: error=2.7406205031610398e-5
Iteration 81: error=2.5020968784872366e-5
Iteration 82: error=2.2843191885613123e-5
Iteration 83: error=2.0854860476759995e-5
Iteration 84: error=1.903951823045702e-5
Iteration 85: error=1.7382133089327567e-5
Iteration 86: error=1.5868975140165276e-5
Iteration 87: error=1.4487504715887667e-5
Iteration 88: error=1.3226269899746512e-5
Iteration 89: error=1.2074812700824111e-5
Iteration 90: error=1.1023583255997949e-5
Iteration 91: error=1.0063861280329434e-5
Iteration 92: error=9.187684269384988e-6
Iteration 93: error=8.387781881502576e-6
Iteration 94: error=7.657515877390608e-6
Iteration 95: error=6.990825352382046e-6
Iteration 96: error=6.382176582775401e-6
Iteration 97: error=5.82651723135541e-6
Iteration 98: error=5.3192346003427815e-6
Iteration 99: error=4.8561173482397635e-6
Iteration 100: error=4.4333206270508185e-6
Iteration 101: error=4.047334227763599e-6
Iteration 102: error=3.6949534756303137e-6
Iteration 103: error=3.3732526283358766e-6
Iteration 104: error=3.0795606225098027e-6
Iteration 105: error=2.8114389483135938e-6
Iteration 106: error=2.5666613314712095e-6
Iteration 107: error=2.343195351528493e-6
Iteration 108: error=2.1391855096197787e-6
Iteration 109: error=1.9529378594285163e-6
Iteration 110: error=1.782905919789357e-6
Iteration 111: error=1.6276778662671632e-6
Iteration 112: error=1.485964781444693e-6
Iteration 113: error=1.3565899781298185e-6
Iteration 114: error=1.2384792071884476e-6
Iteration 115: error=1.130651762437651e-6
Iteration 116: error=1.032212317397807e-6
Iteration 117: error=9.423434876865144e-7
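The loop above is ordinary value-function iteration: apply the Bellman operator on the grid, measure the sup-norm change, copy the new values in place, and stop at the tolerance. The skeleton can be sketched in Python on a toy contraction $v \leftarrow c + \beta v$, whose fixed point $c/(1-\beta)$ is known in closed form (all parameter values below are illustrative):

```python
import numpy as np

beta, tol, max_it = 0.95, 1e-6, 1000     # illustrative discount factor and stopping rule
c = np.array([1.0, 2.0, 3.0])            # per-period payoffs on a 3-point "grid"
v = np.zeros_like(c)

it, err = 1, np.inf
while err > tol and it <= max_it:
    v_next = c + beta * v                # Bellman-style update at every grid point
    err = np.max(np.abs(v_next - v))     # sup-norm distance, as in the main loop
    v[:] = v_next                        # in-place update, playing the role of copy!
    it += 1

print(v)   # converges to c / (1 - beta) = [20, 40, 60]
```

The error shrinks geometrically at rate $\beta$, which is the same pattern visible in the iteration log above once the scheme settles down.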



Verify that for $y_t=0$, the default threshold is 21% of debt to (quarterly) GDP:

In [77]:
fzero(a -> VG(a, 0.) - VB(0.), 0.)/gdp(0.)

Out[77]:
-0.2115384019929245
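The fzero call solves the scalar value-matching condition $V^G(a, 0) = V^B(0)$ for $a$. The same root-finding pattern in Python, using SciPy's Brent solver on a toy equation (the bracket plays the role of fzero's initial guess):

```python
from scipy.optimize import brentq

# Solve f(x) = 0 for f(x) = x^2 - 2 on the bracket [0, 2]
root = brentq(lambda x: x * x - 2.0, 0.0, 2.0)
print(root)   # ≈ 1.41421356 (sqrt(2))
```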


Copyright © 2014 Sébastien Villemot <sebastien.villemot@sciencespo.fr>
