
This research work investigates the theoretical foundations and computational aspects of constructing optimal bespoke CDO structures. Because the CDO design process is evolutionary in nature, stochastic search methods that mimic natural biological evolution are applied. To search efficiently for the optimal solution, the nondominated sorting genetic algorithm II (NSGA-II) is used, which places emphasis on moving towards the true Pareto-optimal region. This is essential in real-world credit structuring problems. The algorithm also offers attractive constraint-handling features, which make it suitable for solving the constrained portfolio optimisation problem. Numerical analysis is conducted on a bespoke CDO collateral portfolio constructed from constituents of the iTraxx Europe IG S5 CDS index. For comparative purposes, the default dependence structure is modelled under both Gaussian and Clayton copula assumptions. This research concludes that CDO tranche returns at all levels of risk are higher under the Clayton copula assumption than under the sub-optimal Gaussian assumption. The results provide meaningful guidance to CDO traders seeking significant improvements in returns over standardised CDO tranches of similar rating.

Bespoke CDOs provide tailored credit solutions to market participants. They give both long-term strategic and tactical investors the ability to capitalise on views at the market, sector, and name levels. Investors can use these structures in various investment strategies to target a desired risk/return profile or hedging needs. These strategies vary from leverage and correlation strategies to macro and relative value plays [

Understanding the risk/return trade-off dynamics underlying bespoke CDO collateral portfolios is crucial to maximising the utility these instruments provide. A single-tranche deal can be put together in a relatively short period of time, aided by the development of numerous advanced pricing, risk management, and portfolio optimisation techniques.

One of the most crucial tasks in putting together a bespoke CDO is choosing the underlying credits to include in the portfolio. Investors often express preferences on individual names, and there are likely to be credit rating constraints and industry concentration limits imposed by the investors and rating agencies [

Given these various investor-defined requirements, the structurer must optimise the portfolio to achieve the best possible tranche spreads for investors. This was once a complicated task; however, the advent of faster computational pricing and portfolio optimisation algorithms now aids structurers in presenting bespoke CDOs that conform to the investment parameters.

The proper implementation of the decision steps lies in the solution of a multiobjective, multiconstrained optimisation problem, in which investors can choose an optimal structure that matches their risk/return profile. Optimal structures are defined by portfolios that lie on the Pareto frontier in the CDO tranche yield/portfolio risk plane.

Davidson [

The creation of the CDO collateral portfolio can broadly be seen in similar ways. Given a certain set of investor and/or market constraints, such as the number of underlying credits, the notional for the credits, concentration limits and the weighted average rating factor, credit structurers need to be able to construct a portfolio that is best suited to the market environment. If the portfolio does not suit the conditions, it evolves so that only those that are “fittest,” defined by having the best CDO tranche spread given the constraints, will survive. Many of the same techniques used in the natural world can be applied to this constrained portfolio optimisation problem. Evolutionary algorithms have received a lot of attention regarding their potential for solving these types of problems. They possess several characteristics that are desirable to solve real world optimisation problems up to a required level of satisfaction.

Our previous research work focused on developing a
methodology to optimise credit portfolios. The Copula Marginal Expected Tail
Loss (CMETL) model proposed by Jewan et al. [

Our research work now investigates a new approach to asset allocation in credit portfolios for determining optimal investments in bespoke CDO tranches. Due to the complexity of the problem, advanced algorithms are applied to solve the constrained multiobjective optimisation problem. The nondominated sorting genetic algorithm II (NSGA-II) proposed by Deb et al. [

NSGA-II is a popular second generation multiobjective evolutionary algorithm. This algorithm places emphasis on moving towards the true Pareto-optimal region, which is essential in real world credit structuring problems. The main features of these algorithms are the implementation of a fast nondominated sorting procedure and its ability to handle constraints without the use of penalty functions. The latter feature is essential for solving the multiobjective CDO optimisation problem.

The study uses both Gaussian and Clayton copula models to investigate the effects of different default dependence assumptions on the Pareto frontier. Various real-world cases are considered; these include the constrained long-only and concentrated credit cases. Two objectives are used to define the CDO optimisation problem. The first relates to the portfolio risk, which is measured by the expected tail loss (ETL). ETL is a convex risk measure with attractive properties for asset allocation problems. The second objective is the CDO tranche return. This objective requires a CDO valuation model. We apply an extension of the Implied Factor model proposed by Rosen and Saunders [

The breakdown of the paper is as follows. The next section briefly discusses the mechanics of bespoke CDOs and outlines the three important decision-making steps involved in the structuring process. In Section four, a robust and practical CDO valuation framework based on the application of the single-factor copula models given in Section

A bespoke CDO is a popular second-generation credit product. This standalone single-tranche transaction is referred to as bespoke because it allows the investor to customise various deal characteristics such as the collateral composition, level of subordination, tranche thickness, and credit rating. Other features, such as substitution rights, may also play an important role [

While the bespoke CDO provides great flexibility in
the transaction parameters, it is crucial that investors understand the
mechanics of the deal. A key feature of these transactions is the greater
dialogue that exists between the parties during the structuring process,
avoiding the “moral hazard” problem that existed in earlier CDO deals
[

In a typical bespoke CDO transaction, there are three main decision steps for potential investors:

A typical placement of a bespoke CDO is outlined in
Figure

Placement of mezzanine tranche in a typical bespoke CDO transaction.

In the above schematic, the investor goes long the
credit risk in a mezzanine tranche. The challenge of issuing bespoke CDOs is
the ongoing need and expense of the risk management of the tranche position.
The common method of hedging these transactions is to manage the risk like an
options book. Greeks similar to those related to options can be defined for CDO
tranches. The distribution of risk in these transactions is not perfect,
leaving dealers exposed to various first and second order risks [

Copula-based credit risk models were developed to extend the univariate credit-risk models for the individual obligors to a multivariate setting which keeps all salient features of the individual credit-risk models while incorporating a realistic dependency structure between the obligor defaults.

Copulas were first introduced by Sklar [

In credit portfolio modelling, the copula approach and factor models have become an industry-wide standard for describing the asset correlation structure. Conditioning the mutual asset correlations between portfolio entities on common external factors represents a very flexible and powerful modelling tool. This modelling approach can be understood as a combination of the copula and firm-value approaches. The factor approach is quite standard in credit risk modelling [

factor models represent an intuitive framework and allow fast calculation of the loss distribution function; and

the full correlation matrix, which represents a challenging issue in large credit portfolios, need not be fully estimated.

Credit risk models can be divided into two
mainstreams, structural models and reduced form models. In the reduced-form
methodology, the default event in these models is treated exogenously. The
central idea is to model the default counting process

We will consider a latent factor

The next two subsections provide a description of the two copula models used in the study. These two models will define the default dependence structure, which is used in a Monte Carlo simulation, to derive the credit portfolio loss distribution. These distributions are then used in pricing, portfolio and risk management.
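As an illustration of this simulation step, a minimal one-factor Gaussian copula loss simulation can be sketched as follows. All parameter values, the homogeneous portfolio, and the fixed recovery assumption are illustrative, not those calibrated in the study:

```python
import numpy as np
from scipy.stats import norm

def simulate_losses_gaussian(n_sims, n_names, p_default, rho, lgd=0.6, seed=0):
    """Portfolio loss fractions under a one-factor Gaussian copula:
    X_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i, with default when X_i
    falls below the threshold Phi^{-1}(p_default)."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p_default)
    z = rng.standard_normal((n_sims, 1))          # systematic factor
    eps = rng.standard_normal((n_sims, n_names))  # idiosyncratic shocks
    x = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps
    defaults = x < threshold                      # default indicators
    # homogeneous unit notionals: loss = LGD * fraction of names defaulted
    return lgd * defaults.mean(axis=1)

losses = simulate_losses_gaussian(50_000, 106, p_default=0.02, rho=0.3)
```

The resulting sample of loss fractions is the raw input to both the pricing and the tail-risk calculations discussed later.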

A convenient
way to take into account the default dependence structure is through the
Gaussian copula model. This has become the market standard in pricing
multiname credit products. In the firm-value approach a company will default
when its “default-like” stochastic process,

The default probability of an entity

We also have that:

Most of the existing copula models involve using the Gaussian copula which has symmetric upper and lower tails, without any tail dependence. This symmetry fails to capture the fact that firm failures occur in cascade in difficult times but not in better times. That is, the correlation between defaults increases in difficult times. The Clayton copula encapsulates this idea.

This “class” of Archimedean copulas was
first introduced by Clayton [

The asset value process in a single factor Clayton
copula is given by the following:

Using the definition of default times and the Marshall and Olkin [

When
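The Marshall-Olkin construction referenced above admits a compact sampling sketch for the Clayton copula. The gamma-frailty form below is the standard textbook construction; parameter values are illustrative:

```python
import numpy as np

def clayton_uniforms(n_sims, n_names, theta, seed=0):
    """Clayton-dependent uniforms via the Marshall-Olkin (gamma frailty)
    construction: V ~ Gamma(1/theta), E_i ~ U(0,1), and
    U_i = (1 - ln(E_i)/V)^(-1/theta)."""
    rng = np.random.default_rng(seed)
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n_sims, 1))  # frailty
    e = rng.uniform(size=(n_sims, n_names))
    return (1.0 - np.log(e) / v) ** (-1.0 / theta)

# default times follow by inverting each name's marginal default curve at U_i
u = clayton_uniforms(20_000, 10, theta=2.0)
```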

The current
market standard for pricing synthetic CDOs is the single-factor Gaussian copula
model introduced by Li
[

It is common practice to quote an implied “correlation
skew”—a different correlation which matches the price of each tranche. This
assumption is analogous to the Black-Scholes implied volatility in the options
market. Implied tranche correlations not only suffer from interpretation
problems, but they might not be unique as in the case of mezzanine tranches,
and cannot be interpolated to price bespoke tranches [

We define bespoke tranches in the following cases:

the underlying portfolio and maturity is the same as the reference portfolio, but the tranche attachment and/or detachment points are different,

the underlying portfolio is the same as the reference portfolio but the maturities are different, or

the underlying portfolio differs from the reference portfolio.

The impact of the different copula assumptions on the loss distribution is also investigated. The credit tail characteristics are analysed. This is imperative to ensure that the copula model used has the ability to capture the default dependence between the underlying credits, and not severely underestimate the potential for extreme losses. The loss analysis is performed on a homogeneous portfolio consisting of 106 constituents of the iTraxx European IG S5 CDS index.

The key idea behind CDOs is the tranching of the credit risk of the underlying portfolio. A given tranche

Let

The tranche investors need to be compensated for
bearing the default risk in the underlying credits. The holders of tranche

Monte Carlo algorithms can be divided (somewhat arbitrarily) into two categories: uniformly weighted and nonuniformly weighted algorithms. Nonuniform weights are a mechanism for improving simulation accuracy. Consider a set of

Two features of credit risk modelling pose a particular challenge for simulation-based procedures, namely:

accurate estimation of low-probability events involving large credit losses is required; and

the default dependence mechanisms described in the previous chapter do not immediately lend themselves to rare-event simulation techniques used in other settings.

In what follows, we introduce the general modelling
framework, present the algorithm and discuss some technical implementation
details. (We use a similar methodology to Rosen and Saunders [

A weighted Monte Carlo method can be used to find an
implied risk-neutral distribution of the systematic factors, assuming the
specification of a credit risk model with a specified set of parameter values.
According to Rosen and
Saunders [

The methodology to obtain the implied credit loss distribution is summarised by the following steps.

Let

The classical approach to solving constrained optimisation problems is the method of Lagrange multipliers. This approach transforms the constrained optimisation problem into an unconstrained one, thereby allowing the use of the unconstrained optimisation techniques.

Taking the
objective to be a symmetric separable convex function gives the optimal
probabilities

In this setting,

This case is put forward in Avellaneda et al. [

For the bespoke CDO pricing problem, the implied scenario probabilities must satisfy the following constraints:

the sum of all scenario probabilities must equal one;

the
probabilities should be positive:

we match the
individual CDS spreads (i.e., marginal default probabilities for each
name)

the current
market prices of standard CDO tranches are matched given by

The augmented Lagrangian method seeks the solution by replacing the original constrained problem with a sequence of unconstrained subproblems, in which the objective function is formed by the original objective of the constrained optimisation plus additional "penalty" terms. These terms are made up of constraint functions multiplied by a positive coefficient.
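A minimal sketch of this scheme is given below: a sequence of unconstrained subproblems is solved, with the multipliers updated after each one. The toy quadratic objective and single equality constraint are illustrative stand-ins for the scenario-probability problem, not the paper's actual objective:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, cons, x0, mu=10.0, n_outer=20):
    """Minimise f(x) subject to cons(x) = 0 via a sequence of
    unconstrained subproblems f(x) - lam.c(x) + (mu/2)*||c(x)||^2,
    updating the multipliers lam after each subproblem."""
    lam = np.zeros(len(cons(np.asarray(x0, dtype=float))))
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        def sub(y):
            c = cons(y)
            return f(y) - lam @ c + 0.5 * mu * (c @ c)
        x = minimize(sub, x, method="BFGS").x
        lam = lam - mu * cons(x)  # first-order multiplier update
    return x

# toy subproblem: minimise sum((x-2)^2) subject to sum(x) = 1
f = lambda x: np.sum((x - 2.0) ** 2)
cons = lambda x: np.array([np.sum(x) - 1.0])
x_opt = augmented_lagrangian(f, cons, np.zeros(3))  # each component -> 1/3
```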

For the problem at hand, the augmented Lagrangian
function is given by,

In practice, a significant proportion of computation time is not spent solving the optimisation problem itself, but rather computing the coefficients in the linear constraints. The marginal default probability constraints require the evaluation of the conditional default probability given in the previous chapter for each name under each scenario.

One has the option to only match the cumulative
implied default probability to the end of the lifetime of the bespoke CDO or
perhaps at selected times only. The advantage of dropping constraints is
twofold. Firstly it reduces the computational burden by reducing the number of
coefficients that need to be computed, thus leading to a faster pricing
algorithm. Secondly, it loosens the conditions required of the probabilities

The valuation
of CDOs depends on the portfolio loss distribution. For the pricing of a CDO or

Figure

The comparison of the implied loss density under Gaussian copula assumption: (a) shows the deviation of the model density from the implied density at the 5 year horizon, (b) displays the implied credit loss density surface up to the 5 year horizon.

The first thing to note is the typical shape of credit
loss distributions. Due to the common dependence on the factor

The Clayton copula model displays similar deviations.
This is shown in Figure

The comparison of the implied loss density under Clayton copula assumption: (a) shows the deviation of the model density from the implied density at the 5 year horizon, (b) displays the implied credit loss density surface up to the 5 year horizon.

Similar sentiments on the weakness of Gaussian copula
are shared by Li and Liang [

Figure

The comparison of implied tail probabilities under different copula assumptions.

The probabilities decrease very quickly under both the Gaussian and Clayton copula assumptions. The effect of thicker tails under the Clayton copula can easily be seen to dominate the Gaussian copula. In the pricing of the super senior tranche, the Clayton model exhibits higher expected losses due to the excess mass concentrated in the tails of the distribution; the tranche spread will therefore be higher under this model than in the Gaussian case. These deviations in the implied distribution under different distributional assumptions filter through to the resulting efficient frontiers. Due to the higher tail probabilities, the Clayton ETL efficient frontiers will differ significantly from the frontiers resulting from the Gaussian assumption. This feature is shown in the subsequent sections.

The results so far also have a practical edge for credit risk management. The likelihood of extreme credit losses is increased under the Clayton copula assumption. This is due to the lower tail dependency exhibited by this copula function.

Comparison of
uncertainty in outcomes is central to investor preferences. If the outcomes
have a probabilistic description, a wealth of concepts and techniques from
probability theory can be applied. The main objective in the following section
is to present a review of the fundamental work by Artzner et al. [

Let

Given some convex cone,

A mapping

By adding positive homogeneity to these properties, one obtains the following definition.

A convex risk
measure

Given some
confidence level

Consider a loss
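For concreteness, the ETL discussed here can be estimated from a sample of simulated losses. The empirical tail-average estimator below is one standard choice; the sample data and confidence level are illustrative:

```python
import numpy as np

def expected_tail_loss(losses, alpha=0.99):
    """ETL (also known as CVaR): the average of the losses at or beyond
    the alpha-quantile (the VaR) of the loss distribution."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()
```

By construction the ETL is never smaller than the corresponding VaR, which is what makes it sensitive to the thickness of the loss tail.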

Credit portfolio optimisation plays a critical role in determining bespoke CDO strategies for investors. One of the most crucial tasks in putting together a bespoke CDO is choosing the underlying credits to include in the portfolio. Investors often express preferences on the individual names to which they are willing to have exposure, while credit rating constraints and industry/geographical concentration limits are likely to be imposed by rating agencies and/or investors.

Given these various requirements, it is up to the credit structurer to optimise the portfolio and achieve the best possible tranche spreads for investors. In the following analysis, we focus on the asset allocation rather than credit selection strategy, which remains a primary modelling challenge for credit structurers.

Davidson [

Creating a portfolio for a CDO can broadly be seen in similar ways. Given a certain set of investor-defined constraints, structurers need to be able to construct a credit portfolio that is best suited to the market environment. Added to these are market constraints such as trade lot restrictions and the liquidity and availability of underlying credits. If the portfolio does not suit these conditions, it evolves so that only those with the best fit (highest tranche spreads) survive. Many of the same techniques used in the natural world can be applied to this structuring process.

Evolutionary computation methods are exploited to allow for a generalisation of the underlying problem structure and to solve the resulting optimisation problems numerically in a systematic way. The next section will briefly discuss some of the basic concepts of multiobjective optimisation and outline the NSGA-II algorithm used for solving the challenging CDO optimisation problem. The CDO optimisation model is then outlined before conducting a prototype experiment on the test portfolio constructed from the constituents of the iTraxx Europe IG S5 index.

Many real-world problems involve the simultaneous optimisation of several incommensurable and often competing objectives. In single-objective optimisation the solution is usually clearly defined; this does not hold for multiobjective optimisation problems. We formally define the multiobjective optimisation problem in order to define other important concepts used in this chapter. All problems are assumed to be minimisation problems unless otherwise specified. To avoid inserting the same reference every few lines, note that the terms and definitions are taken from Zitzler [

A general

For any two
vectors

For any two
vectors

A decision
vector

Let

The second
generation NSGA-II is a fast and elitist multiobjective evolutionary
algorithm. The main features are
(

The first stage of building the EA is to link the "real world" to the "EA world." This linking involves setting up a bridge between the original problem context and the problem-solving space, where evolution takes place. The objects forming possible solutions within the original problem context are referred to as a

The mapping from phenotype space to the genotype space
is termed

Many different encoding methods have been proposed and used in EA development. A few frequently applied representations are binary, integer, and real-valued. Real-valued or floating-point representation is often the most sensible way to represent a candidate solution to a problem. This approach is appropriate when the values that we want to represent as genes originate from a continuous distribution.

The solutions to the proposed CDO optimisation models are real-valued. This study opted for the real-valued encoding for the sake of operational simplicity. The genes of a chromosome are real numbers between 0 and 1, which represent the weights invested in the different CDS contracts. However, the sum of these weights might not be 1 at the initialisation stage or after genetic operations. To overcome this problem, the weights are normalised as follows:
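A minimal sketch of this normalisation step, assuming raw genes in [0, 1]:

```python
import numpy as np

def normalise_weights(genes):
    """Map raw genes in [0, 1] to portfolio weights summing to one."""
    genes = np.asarray(genes, dtype=float)
    return genes / genes.sum()

w = normalise_weights([0.5, 0.25, 0.25, 0.5])  # sums to 1 after scaling
```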

The role of the

In the bespoke CDO optimisation problem, we introduce two objectives: the CDO tranche return and the portfolio tail risk measured by ETL. Various constraints are then introduced to study the dynamics of the Pareto frontier under various conditions.

Unlike variation operators, which operate on individuals, selection operators such as parent and survivor selection work at the population level. In most EA applications, the population size is constant and does not change during the evolutionary search. The

In NSGA-I, the well-known sharing function approach
was used, which was found to maintain sustainable diversity in a population
[

Deb et al. [

To describe this approach, we first define a metric for density estimation and then present the description of the crowded-comparison operator.

The density of solutions surrounding a particular solution in the population must first be estimated by calculating the average distance of the two points on either side of it along each of the objectives.

Once the nondominated sort is complete, the crowding
distance is assigned. Individuals are selected based on rank and crowding
distance. The crowding-distance computation requires sorting the population according
to each objective function value in ascending order of magnitude. The boundary
solutions (solutions with smallest and largest function values) for each
objective function are assigned an infinite distance value. All other
intermediate solutions are assigned a distance value equal to the absolute
normalised difference in the function values of two adjacent solutions. This
calculation is continued with other objective functions. The overall
crowding-distance value is calculated as the sum of individual distance values
corresponding to each objective. Each objective function is normalised before
calculating the crowding distance. (See Deb et al. [
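The procedure just described can be sketched as follows (a direct, unoptimised implementation for a small front):

```python
import numpy as np

def crowding_distance(objectives):
    """Crowding distance: for each objective, sort the front, give the
    boundary solutions infinite distance, and add each intermediate
    solution's normalised gap between its two neighbours."""
    objectives = np.asarray(objectives, dtype=float)
    n, m = objectives.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])
        f = objectives[order, j]
        dist[order[0]] = dist[order[-1]] = np.inf  # boundary solutions
        span = f[-1] - f[0]                        # normalisation factor
        if span == 0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (f[k + 1] - f[k - 1]) / span
    return dist
```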

The complexity of this procedure is governed by the
sorting algorithm and has computational complexity of

After all population members in the set

In order to identify the solutions of the first nondominated front in the NSGA, each solution is compared with every other solution in the population to determine whether it is dominated. (We summarise the algorithm discussed in Deb et al. [

The worst case is when there are

the domination
count

the set of
solutions

In NSGA-II the complexity reduction is due to the
realisation that the body of the first inner loop (for each
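The bookkeeping described above (domination counts, dominated sets, and peeling off fronts) can be sketched as:

```python
def fast_nondominated_sort(objectives):
    """Fast nondominated sort (minimisation): record each solution's
    domination count and the set of solutions it dominates, then peel
    off the fronts one by one."""
    n = len(objectives)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and any(
        x < y for x, y in zip(a, b))
    S = [[] for _ in range(n)]   # solutions dominated by i
    counts = [0] * n             # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                S[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]  # drop the trailing empty front
```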

The selection operator determines which individuals are chosen for mating and how many offspring each selected individual produces. Once the individuals are sorted based on nondomination, with crowding distances assigned, selection is carried out using the crowded comparison operator described above. The comparison is based on

nondomination
rank

crowding
distance
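A minimal sketch of the crowded-comparison rule based on these two attributes:

```python
def crowded_compare(rank_a, dist_a, rank_b, dist_b):
    """True if solution a is preferred: the lower nondomination rank
    wins; ties are broken by the larger crowding distance."""
    if rank_a != rank_b:
        return rank_a < rank_b
    return dist_a > dist_b
```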

Mutation causes individuals to be randomly altered. These variations are mostly small and are applied to the variables of the individuals with a low probability. Offspring are mutated after being created by recombination. Mutation of real variables means that randomly created values are added to the variables with a low mutation probability. The probability of mutating a variable is inversely proportional to the number of variables (dimensions): the more dimensions an individual has, the smaller the mutation probability. Different papers have reported results for the optimal mutation rate [

The mutation step size is usually difficult to choose. The optimal step size depends on the problem considered and may even vary during the optimisation process. Small mutation steps are often successful, especially when the individual is already well adapted. However, large mutation steps can produce good results with a faster convergence rate. According to Pohlheim [

In the NSGA-II a polynomial mutation operator is used.
This operator is defined by the following:
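A common form of the polynomial mutation operator can be sketched as follows (the delta distribution below is the standard textbook form; the parameter defaults are illustrative):

```python
import numpy as np

def polynomial_mutation(x, low, high, eta=20.0, p_m=None, rng=None):
    """Polynomial mutation: each gene is perturbed with probability p_m
    (default 1/n) by a delta drawn from a polynomial distribution
    controlled by the distribution index eta, then clipped to bounds."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    p_m = 1.0 / n if p_m is None else p_m
    for i in range(n):
        if rng.uniform() < p_m:
            u = rng.uniform()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            x[i] = np.clip(x[i] + delta * (high - low), low, high)
    return x
```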

The recombination operator produces new individuals by combining the information contained in two or more parents in the mating population. This mating is done by combining the variable values of the parents. Depending on the representation of the variables, different methods must be used.

Using real-valued representation, the

Simulated binary crossover simulates the binary crossover observed in nature and is given by the following.
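A sketch of the SBX operator in its standard form (the distribution index and parents below are illustrative):

```python
import numpy as np

def sbx_crossover(p1, p2, eta=15.0, rng=None):
    """Simulated binary crossover: per gene, a spread factor beta is
    drawn from a polynomial distribution so that the children's mean
    equals the parents' mean."""
    rng = np.random.default_rng() if rng is None else rng
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    u = rng.uniform(size=p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```

The mean-preserving property (c1 + c2 equals p1 + p2 gene by gene) is what makes SBX behave like single-point binary crossover on real values.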

First, a random
parent population is created. The population is then sorted based on the
nondomination outlined above. Each individual is assigned a fitness value (or
rank) equal to its nondomination level, with 1 representing the best level.
Binary tournament selection, recombination, and mutation operators are then
applied to create an offspring population of size

Due to the introduction of elitism, the current population is compared to the previously found best nondominated solutions, so the procedure differs after the initial generation. Figure

An outline of the NSGA-II procedure.

The first step is to combine the parent population

Solutions belonging to the best nondominated set

In general, the count of solutions in all sets from

The requisite diversity among nondominated solutions
is introduced by using the crowded comparison procedure, which is used in the
tournament selection and during the population truncation phases [

Initialisation is kept simple in most EA applications: the first population is seeded with randomly generated individuals. In principle, problem-specific heuristics can be used in this step to create an initial population with higher fitness values [

We can distinguish two cases for an appropriate
termination condition. If the problem has a known optimum fitness level, then
an acceptable error bound would be a suitable condition. However, in many
multiobjective problems optimum values are not known. In this case one needs
to extend this condition with one that certainly stops the algorithm. As noted
by Eiben and Smith [

The optimal
bespoke structure can be derived by solving the following
problem:

For practical purposes, it may be crucial to include
other real-world constraints. Let

When short selling is disallowed, it can be explicitly
modelled by setting

Rating agencies presuppose that there are two broad
credit considerations that determine the risk of a CDO portfolio: collateral
diversity by industry and by reference entity, and the credit quality of each
asset in the portfolio. With respect to the latter, rating agencies use the
concept of the WARF. Each security in the CDO portfolio has a rating (actual or
implied) and each rating has a corresponding rating factor. The lower a
security's rating, the higher its corresponding rating factor. In order to
calculate the weighted average debt rating for a pool of assets, take the par
amount of each performing asset and multiply it by its respective rating
factor. Then, sum the resulting amounts for all assets in the pool and divide
this number by the sum of the par values of the performing assets [
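The WARF calculation described above reduces to a par-weighted average, which can be sketched as follows (the par amounts and rating factors below are illustrative, not an agency's actual factor table):

```python
import numpy as np

def warf(par_amounts, rating_factors):
    """Par-weighted average rating factor of the performing assets:
    sum(par_i * factor_i) / sum(par_i)."""
    par = np.asarray(par_amounts, dtype=float)
    rf = np.asarray(rating_factors, dtype=float)
    return (par * rf).sum() / par.sum()

# two equal-par assets with illustrative rating factors
w = warf([100.0, 100.0], [360.0, 610.0])  # -> 485.0
```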

Krahnen and Wilde [

The two cases investigated are as follows.

First, an examination of the Pareto frontiers for bespoke CDO structures under both Gaussian and Clayton copula assumptions is conducted. Long-only positions in the underlying CDS are allowed. The upper trading limit in any particular reference entity is set to 2.5% of the portfolio, whilst the lower limit is set to 0.5% to satisfy the cardinality constraint of having 106 credits in the collateral portfolio.

Then an investigation into the behaviour of the Pareto frontiers under an increasing upper trading limit is conducted. This allows an investigation of concentration risk and its effect on producing optimal CDO structures. The upper trading limit is increased to 5% in this case.

NSGA-II parameter settings.

| Evolutionary parameter | Value |
| --- | --- |
| Population size | |
| Number of generations | |
| Crossover probability | |
| Distribution index for crossover | |
| Distribution index for mutation | |
| Pool size | |
| Tour size | |

The parameter settings were found to be most appropriate after several runs of the algorithm. Computational time played a crucial role in the assignment of parameter values.

Bespoke CDOs are commonly preferred among investors because they can be used to execute a variety of customised investment objectives and strategies. Flexibility in choosing the reference portfolio allows investors to structure the portfolio investments so that they can be properly matched with their investment portfolios, as a separate investment strategy or as a hedging position. Most often, investors choose credit portfolios such that diversification results only from buying protection on reference entities in the underlying portfolio.

We derive the Pareto frontiers by solving problem (

Figure

The comparison of bespoke CDO Pareto frontiers under different default dependence assumptions.

The minimum tail risk portfolio can provide a

We conclude that asset allocation strategies based on the Gaussian copula will result in sub-optimal bespoke CDO structures.

We now investigate the effects of issuer concentration on the Pareto frontier. As concluded in the previous chapter, investors demand a higher premium for taking on concentration risk; combining this with the leverage effects of tranche technology results in higher tranche spreads than in the well-diversified case. The resulting frontiers are shown in Figure

The portfolio notional distribution for the concentrated portfolio case, by issuer credit rating.

Figure

The main objective of this research has been to develop and illustrate a methodology for deriving optimal bespoke CDO structures using NSGA-II technology. Until recently, the derivation of such structures was a nearly impossible task. However, with the advanced pricing and optimisation techniques applied in this research, credit structurers can provide investors with optimal structures. Investors can use this type of methodology to compare similarly rated credit structures and make an informed investment decision, especially under the current credit market conditions.

The most important finding is that the Gaussian copula allocation produces suboptimal CDO tranche investments. Better tranche returns at all levels of risk are obtained under the Clayton assumption. In the constrained long-only case, for all levels of portfolio risk, measured by ETL, the Clayton allocation will result in higher

It was also demonstrated that a significant improvement in returns over standardised CDO tranches of similar rating can be achieved using the methodology presented in this paper. The findings also show that leverage effects become more pronounced for tranches lower down the capital structure. More concentrated bespoke CDO portfolios result in higher tranche spreads, as investors demand higher premiums for taking on concentration risk. For a 29.64% level of portfolio tail risk, an excess return of 92 bps over the well-diversified case can be achieved.

Since the current analysis is restricted to single-factor copula models, one obvious extension for future studies is the added benefit that multifactor copula models may provide in producing optimal CDO structures. On the use of evolutionary technology, a future study should compare NSGA-II results to those of its rival, the second-generation strength Pareto evolutionary algorithm (SPEA-2), proposed by Zitzler et al. [