GDP, Keynesian Cross, IS-LM models

I wanted to start by doing a relatively long piece on Brexit, understanding it through a macroeconomic framework and using it as an example of a macroeconomic shock. So, first we'll have to start at the very beginning: i.e., GDP. Economists typically describe GDP as Y = C + I + G + NX, with Y being real GDP, i.e. the total output of the economy. C represents Consumption - what people in the economy spend on goods and services. The US economy is driven by consumption (as opposed to, say, the Chinese economy, which is largely investment oriented, though the Chinese are trying to change that...). We can split Consumption into two parts: essential/fixed consumption (food, shelter, etc.) and variable consumption, which is a function of Y - T, or disposable income, here represented as income (Y) minus taxes (T). The slope of the variable portion of the consumption function is a really important concept in macro: its first derivative is called the Marginal Propensity to Consume (MPC). The idea is that for every extra dollar of income, a consumer in the economy can either spend that dollar or save it. The higher an individual's (or, in aggregate, an economy's) marginal propensity to consume, the more they consume in period 0 (i.e., now) and the less they save. Lower income individuals tend to have a higher marginal propensity to consume than the wealthy. A take-away from this is that (and I'm going off theory and into politics) stimulus targeted primarily towards the wealthy is less efficacious (especially in the first period) than stimulus targeted at the poor. Which is one reason why I think trickle-down economics is a bit full of shit.
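In the standard textbook form (the notation here is mine), the consumption function looks like:

$$ C = \bar{C} + c\,(Y - T), \qquad 0 < c < 1 $$

where $\bar{C}$ is the fixed/essential piece and c is the MPC - the slope of consumption with respect to disposable income.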

I represents investment and is a function of Y as well as r, the real rate.

G is taken as exogenous, but it's really a function of Congress passing budgets and the President signing them. So, in today's Congress, it's a function of random political bullshit. As of this writing, we (being America) are definitely not the worst - that honor goes to the entire shitstorm that is the (soon to be ex?) UK and Brexit - but the US Congress certainly hasn't been helping things with its government shutdowns and its imposition of austerity in the midst of a very weak recovery a few years ago.

NX stands for Net Exports, or total goods exported less total goods imported. In the US this actually isn't a huge part of our economy. Exports are roughly 12% of GDP and we are running a trade deficit that is less than 1% of our GDP. On the other end of the spectrum, in Germany exports make up roughly 45% of GDP, and in Ireland they're >100%.

So now let's introduce the idea of the Keynesian Cross. If we can assume (big IF) that I, G, NX, etc. are all fixed for now, then we can graph C (and hence planned expenditure E(p)) as a function of Y. The basic intuition is that as income rises, so does consumption, and it rises, by definition, at the Marginal Propensity to Consume (i.e., the first derivative of C with respect to Y). An economy is in equilibrium when planned expenditure E(p) is equal to the output (Y) of the economy. You can see this below, where E(p) intersects the 45-degree line Y = Y at Y*. The economy makes what it consumes, there is no surplus or deficit, and we are all good. But it rarely works that way. What is more common is that E(p) is slightly more or slightly less than Y.

The graph above crudely represents a Keynesian cross, with planned expenditure E(p) on the vertical axis, output Y on the horizontal axis, and equilibrium at Y*, where E(p) crosses the 45-degree line.
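To make the equilibrium concrete (same notation as above, treating I, G, and NX as fixed): planned expenditure is E(p) = \bar{C} + c(Y - T) + I + G + NX, and setting E(p) = Y and solving gives

$$ Y^{*} = \frac{\bar{C} + I + G + NX - c\,T}{1 - c} $$

which is also where the familiar 1/(1 - c) multiplier comes from: the higher the MPC, the more an extra dollar of, say, G gets amplified into output.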

What I've learned from 5 years of Inflation Trading

I've spent a lot of the last five years looking at various macro analyses and graphs, figuring out ways to make money trading inflation (TIPS and inflation derivatives), and I wanted to summarize what I've learned. I'll break this down into general trading observations, and then how to make money using various techniques (Relative Value, Macro, Quantitative/Stat-Arb), with both broad observations and how they can specifically be applied to the inflation market. I'll end with some general musings about the culture of finance and its social value. It should be noted that I tend to meander in my thoughts, but I try to keep it organized by section as best I can.

1. General Trading Observations

1.1 Market Efficiency, or, What is the Point of It All

There is some debate amongst financial economists about the efficiency of markets. Efficiency in markets is basically about the extent to which information in the world is reflected in asset prices. The formal name for this is the Efficient-Market Hypothesis (EMH), and it was popularized by Eugene Fama. There are a couple of variants of this: weak-form EMH means that markets reflect all past information, semi-strong means weak form + markets reflect all new public information, and strong means that market prices reflect all information, both inside and public. At its core, for the financial market practitioner, EMH poses somewhat of an existential threat. If markets reflect all available information immediately, then one must ask oneself, in the immortal words of that fat guy from Office Space: "What would you say... ya do here?" Are we, as in that infamous WSJ study, no better than monkeys throwing darts (http://www.automaticfinances.com/monkey-stock-picking/)? If EMH is indeed accurate, then what is the point of it all? What are we doing with our lives? Did we really go to the best universities and work 80 hours a week to be highly paid monkey dart throwers?

Luckily, enter the Grossman-Stiglitz Paradox, which states that IF EMH were true, then no single agent would have sufficient incentive to acquire the information on which prices are based. So if EMH were true then why are there so many people actively working as traders? Where is the money coming from? I would say that the answer to this is a bit more complex, and you have to differentiate between the sell-side and buy-side, and also between primary markets and secondary markets.

The primary market is where the actual concrete *value to society* part of finance comes into play. It simply goes like this: Person A wants a loan to start a business, Bank A takes a risk and gives Person A money for the business. Person A uses that money to create something of value, which people love and give Person A money for, which then allows Person A to repay Bank A the money and also give Bank A some interest on top of it. Voilà. This is the magic of finance. Bank A lends Person A some money, he or she does something valuable with it, and the world is better off. America has, arguably, the world's deepest and most efficient capital markets. Jamie Dimon talks about this all the time, but he's got a point. There are a lot of genius risk-takers in America, and by and large they do not exist on the sell-side; the positive societal impact of the sell-side is to enable these people to take risks. Apple, Google, Facebook, Uber, Amazon, etc... these companies are all dominant and shaping the world, and they were all created in the US, and while our financial system isn't the only reason, it is certainly -a- reason. Uber is able to raise a ridiculous amount of money very easily and efficiently, allowing it to grow extremely quickly, and the USA is better for it. So those are primary markets.

I, however (and basically everyone who you think of as a trader), operate in the secondary markets - that is, buying and selling stuff that has already been issued and trying to profit from changes in the price. The value-to-society part here is much less certain. The more participants that are involved in a particular market, the greater the efficiency of that market - there is more price transparency, more liquidity, and thus lower transaction costs for everyone who is involved in the market. It is why you can turn $500 million into euros fairly easily in one clip, but if you tried to do that in April 2032 maturity TIPS then every TIPS dealer (myself included) would jump out the nearest window in full panic. So the value to society from the secondary market participant's point of view is simply to decrease the transaction costs for everyone else in the market. It is extremely abstract, but I would argue that, for the most part, it does have concrete value in and of itself, i.e. in general more liquidity in a market is better than less liquidity. However, this is not exactly something that assuages one's soul: I am not sure anyone is truly proud of themselves for increasing the liquidity of a market. It is not exactly like helping the sick or even creating some dumb app that applies moustaches to people's faces. After 5 years of improving the liquidity of the TIPS market, I can't really say I feel like I made the world any better. Additionally, speaking more broadly, I am not sure that the opportunity cost of so many bright minds (your author humbly excludes himself) improving market efficiency instead of solving real world problems, or even solving first-world problems (like Silicon Valley), is good for us. Market-makers make money by stepping in for accounts to buy or sell securities.

How it works in Narnia:

1. Blackrock comes up to me and wants to sell 10y TIPS Breakevens. 10y B/E (Breakevens) are currently 145 @ 146. I show them 145. They sell. I am now long 10y TIPS Breakevens.
2. Being a good market-maker, I know that a client from overseas wants to increase their exposure to US inflation, and I show them that I can sell 10y TIPS Breakevens at 146. They buy.
3. I now have no risk. I wipe my hands clean, earning 1 for my employer, and rinse and repeat. I am improving liquidity in financial markets.

How it actually works:

1. LARGE CLIENT X messages me via chat (salesperson is on the chat also) and asks what my bid is for 100mm of 10y BEI. The market is currently 145 @ 146. I show them 145. They sell. I say done, thanks. I am now long 10y TIPS Breakevens.
2. Suddenly someone hits 10y Breakevens in the screens; they go down at 145 and only 5mm trade. I angrily ask the salesperson how many other people the client showed. It was non-comp (not in competition). Sure. Salesperson asks for RCs (revenue credits - aka monopoly money for salespeople).
3. The market comes back 144 @ 145. Currently down 50K on the trade - OK, not bad, could be worse. I message a few clients and ask if they need to buy any 10y TIPS because I was just hit on "some". They all say no, but one hedge fund says that three other dealers came in and reported a seller.
4. The bid on the screen pulls, and the offer is now 144. No bid currently, and down 200K. This is typically when a trader bangs on the desk and swears, but I am more serene so I just idly stare into the void, contemplating death.
5. In an effort to support the market I bid back at 141. The market is now 141 @ 144. I'm immediately hit, I take the minimum size, and now I'm long 105mm 10y TIPS, have lost 400K, and have -increased- my risk.
6. Several clients message me and ask WTF is going on. I say there is no news coming out, I have no idea what is happening, etc. One client, the most bullish guy in the TIPS market, messages me. My mood picks up - surely this guy will come in to add to his position.
7. It is now 136 @ 139 interdealer. I tell him I can be 139 offer for 100 million. He says he wants my bid, not my offer. I tell him I can't give him a good bid. He says it's my job to do so. I show a 135 bid for another 100mm; I get hit. Now long 200mm and down $750k.
8. The 136 bid pulls, it's now 134 @ 138. I think about the eventual heat death of the universe, and how in the long run this is all irrelevant because we will all be dead.
9. Large UK real$ comes in and asks for my offer on 250mm 10y TIPS. In sheer relief I offer 136.5, 0.5bp from mid. They are done.
10. The offer pulls; the market is now 140 @ 146.
11. A random hedge fund asks me to offer 10y BEI. A vein on my forehead twitches. I take a walk and then decide to drink heavily as soon as I get out of work.

Anyways, this is a very long-winded way for me to just say the most important thing that I learned from working in financial markets: markets are not efficient in the short/medium run (from nanoseconds to a few days) because of structural differences between various players in the markets. This, in my opinion, is where you can take advantage of dislocations in a systematic way (covered in the Stat-Arb/Quantitative trading part as well as the Relative Value part). Roughly, I can categorize these forces into three main things:

1.2 Nature of the participant

(holding period, performance metric, fund goals, etc.): Michael Pettis wrote an extremely good book on financial markets (specifically, EM crises) called "The Volatility Machine" that presented a different way to think of EM crises. In the book he argued that the primary cause of emerging market crises is not the failure of international capital markets but rather the inability of local governments' liability structures (or the 'balance sheets' of these countries) to absorb the volatility of the capital flows from international markets, which makes them very fragile. I.e., if you give some random banana republic (the country, not the store) a ton of money really quickly and then pull it out a few months or years later equally quickly, things won't end well. However, the main take-away that I got from his book was not so much the take on the balance sheets of economies but his point on the nature of the various investment funds that operate in the markets, as measured by holding period / investment returns:

- broker/dealers
- hedge funds
- mutual funds / asset managers
- insurance / pension funds
- central banks

Generally, the stress/pay of the job (both strongly positively correlated) as well as the sensitivity of the fund/account to movements in prices decrease as you go down this list.

Hedge Funds: Broker/Dealers: Mutual Funds/Asset Managers: Insurance/Pension Funds: Central Banks:

1.3 Leverage/Margining

1.4 Groupthink

Anyways, to tie this back into thoughts of market efficiency: all of the actions that I spoke about above lead to inefficiencies. From funds that cannot take advantage of opportunities because of constraints in their investment mandate, to hedge funds that are over-levered and are forced to sell winning positions to meet a margin call, to a scared kid running risk at a broker-dealer who sells something at the wrong price because he or she is afraid of losing their job - these inefficiencies and frictions within the market cause prices to overshoot and deviate from their longer-run fair value.

However, there are also definitely areas in which markets are efficient, or at least extremely close to efficient. When it comes to formulaic adjustments - e.g. when TIPS have their final CPI figure as a known quantity - markets are efficient at immediately adjusting the price to that known quantity. There are other moments, e.g. an FOMC meeting, where the market may be confused at first (was this statement dovish? hawkish? what was it?) and only settles on a consensus interpretation after some time.

2. Inflation Trading: Fundamental Value, Relative Value

There are a few strategies for how to analyze and take advantage of the inefficiencies of the market. The majority of these inefficiencies are caused by the aforementioned discrepancies in motivation between various actors (hedge funds stopping out, price-insensitive/uncompensated buyers, etc.). These actors act irrationally and cause price dislocations. It should be noted that just because such a dislocation exists, it doesn't necessarily mean it will correct any time soon, and you also have to be cautious that you aren't accidentally "shorting vol." Relative value generally involves not taking directional views of up vs. down on rates.

I break apart the inflation curve into three distinct parts:

2.1: t0/front months:

This is the part of the curve where we can actually model inflation vs energy and look at the inflation rates that are embedded in the price of the TIPS (and inflation derivs). It is not exactly "relative value" but more "fundamental value", where we can actually model out the inflation rate implied by various front end TIPS.

The way to trade the very front part of the inflation curve is to actually try to model inflation. The sad truth is that, like most forecasters, inflation traders aren't that good at modeling inflation more than a few months out. This is in line with how poor we all are at economic forecasting generally, because inflation is simply too random and complicated a process to model effectively. However, we *are* pretty decent at modeling inflation in the very short term. The methodology is actually relatively simple: inflation traders, economists, etc. are very good at modeling the impact of energy on inflation. While energy is only ~8% of the CPI basket, it is something like 90% of the volatility of inflation. This is because food and energy prices are extremely volatile compared to things like apparel, health care, etc., which tend to increase in a fairly consistent way. So to model "Core CPI" we simply assume that it is a stationary process, and the best predictor of core inflation is just the average of the past trailing 3 or 6 months:
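In symbols (notation mine):

$$ \hat{\pi}^{\text{core}}_{t+1} = \frac{1}{n} \sum_{i=0}^{n-1} \pi^{\text{core}}_{t-i}, \qquad n = 3 \text{ or } 6 $$

i.e. the forecast for next month's core print is just the average of the last n monthly core prints.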

So that is your "core assumption." The rest comes from modeling the energy pass-through for inflation. Roughly 4% of the 8% energy weight in the basket is energy commodities, i.e. gasoline. The change in gasoline prices can be monitored daily via 3AGSREG Index GO in Bloomberg. You take the percentage change in gasoline prices and multiply that by the relative weight (4%) and use that as the contribution of gasoline to the monthly CPI. You can see why this is the driver behind short term deflation: gasoline can move 10% in a month, which is a 0.4%, or 40bp, contribution to CPI, while monthly core CPI has averaged roughly 0.15%-0.20% (a run rate of 1.81%-2.42% annually). Then you have the energy services part of the index, which is basically driven by electricity prices, which are highly seasonal. Because there is no (to my knowledge) effective way to get live electricity prices (Bloomberg reports them monthly, and with a two month delay), I typically use historical electricity prices to capture their seasonality. When you combine the three things (core assumption, energy commodities and energy services) you get an estimate for what the next CPI print will be. A one stdev miss on CPI is approximately 0.08 index points, so you can get a 95% confidence interval of approximately 0.32 of the index, with the main unknown being the core CPI assumption. The way to trade this part of the curve is to compare the index ratio that is implied by the TIPS or market fixing (you can do that -here-) and then back out the implied core CPI contribution that is embedded in the market fixing. If the implied core CPI contribution is too high or too low relative to your estimate then you can scale into a position accordingly. For a good metric on how to scale, check out the Kelly Criterion.
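Here is a minimal sketch of that nowcast in Python. The function, the index level, and the example inputs are all made up for illustration; it also treats the trailing core run-rate as the entire non-energy contribution (ignoring food), which is a simplification of the approach described above.

def cpi_nowcast(last_index, core_mom, gas_chg, elec_chg,
                gas_weight=0.04, elec_weight=0.04):
	"""
	Crude estimate of the next NSA CPI index level: trailing core run-rate
	plus the gasoline and electricity pass-throughs. Treating the core
	assumption as the whole non-energy contribution is a simplification.
	"""
	mom = core_mom + gas_weight * gas_chg + elec_weight * elec_chg
	return last_index * (1.0 + mom)

# Hypothetical inputs: last index at 236.5, core running 0.18%/month,
# gasoline down 10% on the month, electricity roughly flat seasonally.
print(round(cpi_nowcast(236.5, 0.0018, -0.10, 0.0), 3))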

Why are there dislocations in this sector? Hedge funds/dealers tend to be long front end TIPS, so they tend to trade at a discount to fair value. Similarly, there are instances where, e.g., clients are long Apr16 TIPS (which are linked to Jan and Feb CPI), then Jan CPI surprises to the upside, and a lot of these accounts take profits on their Apr16 positions, pushing down the implied Feb CPI price. In these instances the implied core CPI can be pushed from 0.15%-0.20% all the way to 0.00% or even negative!

2.2: 1yr-5yrs: Energy vs front end TIPS, 1y and 1y fwds.

What is this part of the curve? This is probably my favorite part of the curve, as it combines the extremely technical nature of the spot part of the inflation curve with the forecasting / embedded inflation assumptions and macro factors of the longer end. What makes it unique is that for the most part (esp. 1y-2y) you are in a time horizon where you can actually hold the trades to maturity. An undervalued part of finance is how much job security and job tenure dictate trading decisions. It is a large reason why prices get so dislocated.

How do I analyze it? The way I look at this part of the curve is to take the strip of 1yr forwards (see the sketch after the curve snapshots below for the forward mechanics). Why 1yr forwards? There are two main reasons. The first is that there is seasonality in TIPS - that is, inflation is not linear throughout a given year. There tends to be more inflation in the summer and less during winter; this is driven primarily by gasoline, but core inflation exhibits some seasonality as well. There are three series of TIPS in this sector - April, January, and July. The April series are old 5y TIPS, whereas the Jan and Jul TIPS are old 10y TIPS. So if you take the three 2017 TIPS - Jan 2017, Apr 2017 and Jul 2017 - and graph their yields, you will not get anything resembling a smooth line; it'll, in fact, be super fucked up because of the seasonals (on top of other normal dislocations). This is unlike the front end of, say, the IR swaps curve, which is constructed the normal way (forward rate expectations + term premium + convexity) and tends to give you a smoothly upward sloping curve.

TIPS curve (all fucked up): [pic of TIPS curve]
IR swap curve (nice n' smooth):
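As a sketch of the forward mechanics (with made-up levels, and stated in terms of zero-coupon inflation swap rates, which compound cleanly; actual TIPS forwards would also need the seasonal adjustment described above):

def fwd_inflation(z_short, z_long, t_short, t_long):
	"""Forward inflation rate implied between two zero-coupon inflation swap maturities."""
	ratio = (1.0 + z_long) ** t_long / (1.0 + z_short) ** t_short
	return ratio ** (1.0 / (t_long - t_short)) - 1.0

# Hypothetical levels: 1y ZC inflation at 1.20%, 2y at 1.50% -> 1y1y forward of ~1.80%
print(fwd_inflation(0.0120, 0.0150, 1, 2))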

Why are there dislocations?

2.3: 5yr-30yr: Macro Models and regression analysis.

Machine learning. Off-the-runs vs on-the-runs. This is the part of the curve where fundamental value doesn't really mean anything. Even if you could accurately model inflation 10 years out (you can't), then unless you are a special type of real money client, you won't be able to hold your 10y TIPS BEI for 10 years. So what matters for most traders of 5y-30y BEI is more about how rich or cheap BEI are relative to other risk assets. So the thing to do here is to find an appropriate model for 10y BEI relative to other risk assets such as gasoline, SPX, 10yr UST yield, 3m trailing core, etc. How do you determine which risk assets or macro variables to choose when building your penalized linear regression model? This is a classic problem in statistics and machine learning: if you keep adding variables, you increase the fit to your training data, but you run the risk of overfitting, i.e. your model does super well on the training data but then underperforms on actual real life data. The process of choosing which variables to keep is called subset selection.
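As a rough sketch of what such a fair-value model might look like - the data below is randomly generated purely so the example runs, and the column names are placeholders I'm inventing, not actual drivers or coefficients:

import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Placeholder data; in practice these would be daily histories of 10y BEI and candidate macro drivers.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
	"gasoline": rng.normal(size=n).cumsum(),
	"spx": rng.normal(size=n).cumsum(),
	"ust_10y_yield": rng.normal(size=n).cumsum(),
	"core_cpi_3m": rng.normal(size=n).cumsum(),
})
df["bei_10y"] = 0.3 * df["gasoline"] + 0.1 * df["spx"] + rng.normal(size=n)

features = ["gasoline", "spx", "ust_10y_yield", "core_cpi_3m"]
X = StandardScaler().fit_transform(df[features])   # standardize so the penalty treats drivers equally
y = df["bei_10y"]

model = LassoCV(cv=5).fit(X, y)                    # penalty strength chosen by cross-validation
df["bei_10y_fair"] = model.predict(X)
df["rich_cheap"] = df["bei_10y"] - df["bei_10y_fair"]   # positive = BEI rich vs the model
print(df[["bei_10y", "bei_10y_fair", "rich_cheap"]].tail())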

2.3.1 CASE STUDY: An aside on Machine Learning and Penalized Linear Regressions:

I'll be looking at some wine data to explore how to analyze data (both quantitatively and using pretty correlation graphs) and use multiple methods of best subset selection (forward stepwise and lasso regression) as well as how to apply methods such as cross-validation, all using Python/pandas. I'll include annotated code as well as links to the code on github for convenience.

The following section will assume a knowledge of statistics and python/pandas...sorry. I should note that I am heavily indebted to Machine Learning in Python, and Elements of Statistical Learning, both of which are freely available online textbooks.

2.3.1.1 Analyzing the Data Set

The first thing that's good to do when you get a set of data is to take a look at it - the predictors, the target, etc. - and run things like means/medians/modes, standard deviations, checks for outliers, correlation matrices, etc. In this instance we are looking at data about red wine, with 11 predictors and a "quality" score as the target. The predictors are fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates, and alcohol; the target is "quality."

There are two basic types of stats/ML problems: those in which you are trying to classify the data (e.g., in finance: Buy or Sell) and quantitative (regression) problems where your target is some real number and there is an unlimited number of possible answers. In this section I'll focus on the quantitative type, because what we are doing with the macro model for 10y BEI is coming up with some sort of "fair value" for 10y BEI given historical relationships with other risk assets.

Using the following code we will generate three graphs:

1) a normalized box-and-whiskers plot for each predictor that will allow us to easily identify the presence of outliers. In this data we can see that there are a significant number of outliers present, which will color our analysis.

2) a color-coded normalized parallel coordinates plot. This will give us an idea of how well correlated some of the predictors are with the target, and a better picture of what to expect when we start our subset selection. Highly correlated predictors, we would imagine, would be more significant in our penalized linear regression (or just our linear regression if we choose to use stepwise methods for subset selection). The methodology is to graph the values of each row (after you normalize all the data) and then color-code each row by the value of the target. Then you just look to see if any target colors stick out on any part of the normalized predictors. In this analysis we will see, for example, that dark blue (high score) values appear to also have high alcohol content, which may explain my love of Belgian beer as well. Also, less dense wines and wines with higher total SO2 have higher scores.

3) a predictor correlation matrix. The last thing that we will graph to help us visualize our data is a correlation matrix of the predictors. This will give us insight into the problem of collinearity.

You can find the code on GitHub: here

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt


target_url = ("http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv")

#pandas' CSV parsing tool, noting that the data is semicolon-delimited.
wine = pd.read_csv(target_url, header=0, sep=";")

print(wine.head())

#Now we generate some statistical summaries (count, mean, std, quartiles)...
summary = wine.describe()
print(summary)

nrows = len(wine.index)
nDataCol = len(wine.columns) - 1   #number of predictors; the last column is the target ("quality")

#Normalize each attribute to zero mean / unit standard deviation so the
#predictors are directly comparable in the plots below.
wineNormalized = (wine - wine.mean()) / wine.std()

#1) Box-and-whiskers plot of the normalized attributes to spot outliers.
plt.boxplot(wineNormalized.values)
plt.xlabel("Attribute Index")
plt.ylabel("Quartile Ranges - Normalized")
plt.show()

#2) Parallel coordinates plot: plot each row as if it were series data,
#   color-coded by its (normalized) quality score.
for i in range(nrows):
	dataRow = wineNormalized.iloc[i, :nDataCol]
	normTarget = wineNormalized.iloc[i, nDataCol]
	#squash the normalized target through a logistic so it lands in [0, 1] for the colormap
	labelColor = 1.0 / (1.0 + np.exp(-normTarget))
	dataRow.plot(color=plt.cm.RdYlBu(labelColor), alpha=0.5)

plt.xlabel("Attribute Index")
plt.ylabel("Attribute Values")
plt.show()

#3) Correlation heatmap of the predictors, to visualize collinearity.
corMat = pd.DataFrame(wine.iloc[:, :nDataCol].corr())
plt.pcolor(corMat)
plt.show()

print(corMat)

You output a box diagram that looks like the following:

Box n Whiskers plot

And you get a parallel coordinates plot that looks like this:

Parallel Coordinate Plot

And you get a correlation matrix that looks like this:

Predictors' Correlation Matrix

2.3.1.2 Penalized Linear Regression and Subset Selection

The question now becomes how many variables to use and how to best pick your predictors. I'll go over two approaches: forward stepwise regression here, and penalized (lasso) regression in the next section. The premise of subset selection is that you put a constraint on the number of predictors, then iterate through subset sizes and find the number that minimizes your out-of-sample error (here measured by RMSE, or root mean squared error). The basic pseudocode for exhaustive ("best subset") selection is as follows:

    out_of_sample_error = []
    break data into test and training sets (either odd_number_days or even_number_days depending on your preference)
    for i in range(number of columns in X):
        for each subset of X having i+1 columns:
            fit OLS
        out_of_sample_error.append(least error among subsets containing i+1 columns)
    pick the subset containing the least overall error

Now, the problem with this is that it becomes computationally very intensive very quickly, since it tries -every single- subset of predictors, which is likely an inefficient algorithm. A better way forward is to instead use the forward stepwise regression algorithm. The logic is pretty straightforward. Let's say, for example, you have 10 predictors. The forward stepwise regression algorithm first runs OLS with 0 predictors and generates the RMSE. Then it tests every single predictor, runs an OLS with 1 predictor, and finds the one that minimizes the RMSE. Then it takes that one predictor which minimized RMSE and uses the remaining predictors to find the best 2-predictor set that minimizes RMSE... etc. You iterate through all the predictors and you can graph RMSE vs number of predictors. Here is an example of how to do this with the wine data:
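A minimal sketch of that forward stepwise loop on the wine data, assuming the `wine` DataFrame loaded earlier; it uses scikit-learn's LinearRegression and a random train/test split rather than the odd/even-day split mentioned in the pseudocode:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X = wine.drop(columns="quality")
y = wine["quality"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

selected = []                  #predictors chosen so far, in order
remaining = list(X.columns)
rmse_path = []                 #best out-of-sample RMSE at each subset size

while remaining:
	best_rmse, best_col = None, None
	#try adding each remaining predictor to the current subset
	for col in remaining:
		cols = selected + [col]
		model = LinearRegression().fit(X_train[cols], y_train)
		rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test[cols])))
		if best_rmse is None or rmse < best_rmse:
			best_rmse, best_col = rmse, col
	selected.append(best_col)
	remaining.remove(best_col)
	rmse_path.append(best_rmse)
	print(len(selected), best_col, round(best_rmse, 4))

#rmse_path can now be plotted against the number of predictors to see
#where out-of-sample error bottoms out.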

While forward stepwise regression is an intuitive way of finding the optimal subset of predictors, many statisticians in fact prefer more subtle ways of constraining the predictor set using penalized linear regressions such as the lasso, which shrink coefficients toward zero rather than adding or dropping predictors one at a time.

2.3.1.3 Cross-Validation and Model Assessment
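A minimal sketch of that idea, again assuming the `wine` DataFrame from earlier: scikit-learn's LassoCV picks the penalty strength by k-fold cross-validation, and the coefficients it shrinks exactly to zero are effectively the predictors it drops.

from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

X = wine.drop(columns="quality")
y = wine["quality"]

#Lasso is scale-sensitive, so standardize the predictors first.
X_std = StandardScaler().fit_transform(X)

#Penalty strength (alpha) chosen by 10-fold cross-validation.
lasso = LassoCV(cv=10).fit(X_std, y)

print("chosen alpha:", lasso.alpha_)
for name, coef in zip(X.columns, lasso.coef_):
	#coefficients shrunk exactly to zero are effectively dropped predictors
	print("%22s: % .4f" % (name, coef))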

3.1 Macro Analysis

This is the most fun part of trading, in my opinion, and also the hardest part in which to really get any sort of edge. A lot of it is ultimately bullshit philosophizing, but it's entertaining. The crux of rates trading is figuring out what central banks are going to do, both domestically and abroad, and how that spills over into other parts of the market.

4.1 Quantitative trading / Stat-arb

What is this strategy? You know that old investing aphorism, "past performance is not reflective of future performance"? Well, the basis of quantitative trading, stat-arb, and machine learning is basically a bet that past performance IS reflective of future performance. In certain instances this makes sense, and in others it doesn't, and there are always assumptions you make and risks that you are taking. An important aspect of successful quantitative trading/stat-arb is, IMO, the question "does mean reversion make sense?" I would argue that applying a quantitative trading strategy to, for example, 5y Breakevens or EDZ7 would not make sense, as there should be no expectation of mean reversion. 5y breakevens are sensitive to energy prices (which are typically non-stationary, except for one weird period between 2010 and 2013), Fed policy, the level of nominal rates, and "risk tolerance", e.g. the level of SPX. Same with EDZ7, which is pricing in expected future Fed policy and can change rapidly in response to Fedspeak and economic data. The key point is that any financial asset that is going to constantly be sensitive to "regime change" is probably not a good candidate for mean reversion or stat-arb. It is why the most frequently cited example of stat-arb (statistical arbitrage, which is NOT arbitrage and is a fancy way of saying mean reversion trading) is pairs trading: you take two stocks in related fields and trade when the ratio of the pair goes above or below certain boundaries - for example, McDonalds and Burger King. That way you are exposed to the relative performance of McD's vs BK, which is less likely to go through a regime change (though it might - there might be a really good CEO, or McDz might unveil some killer sandwich or something).
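A toy sketch of that pairs idea - the lookback and entry threshold below are made-up illustrations, not recommended parameters:

import pandas as pd

def pairs_signal(px_a, px_b, lookback=60, entry_z=2.0):
	"""Return +1 (long A / short B), -1 (short A / long B), or 0 based on the z-score of the price ratio."""
	ratio = px_a / px_b
	zscore = (ratio - ratio.rolling(lookback).mean()) / ratio.rolling(lookback).std()
	signal = pd.Series(0, index=ratio.index)
	signal[zscore < -entry_z] = 1    #ratio looks cheap: buy A, sell B
	signal[zscore > entry_z] = -1    #ratio looks rich: sell A, buy B
	return signal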

4.1.1 Case Study: 2s5s10s vs 5y swap rate

I'll go through an example of how to implement a quantitative trading strategy, using mean reversion of the 2s5s10s IR swap butterfly vs the level of the 5y swap rate:

1) Is this series mean-reverting? To explore this we use the Hurst exponent.
2) Analyzing the data - intuitively, what does this relationship suggest?
3) The code (python).

Now that we have a good look at the data and we are reasonably confident that this series exhibits some properties of mean reversion, how do we implement this into a trading strategy that can give us signals on when to buy/sell?
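A minimal sketch of the Hurst exponent check, run on synthetic series since I'm not reproducing the swap data here. The idea: the standard deviation of the lag-k differences of a series scales roughly like k^H, so the slope of a log-log fit estimates H; H well below 0.5 suggests mean reversion, ~0.5 a random walk, and above 0.5 trending.

import numpy as np

def hurst(series, max_lag=100):
	"""Estimate the Hurst exponent from the scaling of lagged differences."""
	series = np.asarray(series, dtype=float)
	lags = np.arange(2, max_lag)
	#standard deviation of the lag-k differences for each lag k
	tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
	#slope of log(tau) vs log(lag) is the Hurst exponent
	slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
	return slope

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=5000))   #H should come out near 0.5
white_noise = rng.normal(size=5000)              #strongly mean-reverting: H well below 0.5
print("random walk:", round(hurst(random_walk), 2))
print("white noise:", round(hurst(white_noise), 2))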

5.1 Consistent dislocations and opportunities - ie, Front end TIPS cheapness / TIPS ASW / Inflation Vol

6.1 Other misc. musings on the culture of Wall Street, bank regulation, etc