Light Finance (https://lightfinance.blog/) | Wealth | Finance | Economics

Inflation Forecasting – Who does it Better: Economists or Consumers?
Wed, 01 Nov 2023


When speaking about inflation forecasting, it’s important to set expectations. Forecasting is a notoriously difficult business; if I could do it well, then I wouldn’t be writing an article about it! However, modern economic theory posits that actual inflation crucially depends on expected future inflation. Indeed, Chairman Powell often cites the Fed’s assessment of inflation expectations in both his post-FOMC press conferences and testimony before Congress.

If expectations are, in fact, an important factor in the Fed’s decision-making process, then investors must answer two important questions: 1) which measure(s) of inflation expectations to focus on and 2) what time horizon to use. The current Monetary Policy Report to the Congress indicates that policymakers regularly examine several measures of inflation expectations, including those of financial market participants, forecasts from staff economic models, the consensus of professional forecasters, and surveys of households and businesses. Time horizon is of equal importance. Over shorter periods of time, realized inflation may evolve in response to factors outside the control of monetary policy (as COVID made us acutely aware).

In this post, I will review the performance of both short and long-term forecasts from consumers and economists for predicting future realized inflation. The goal is to better understand the relative accuracy and reliability of these forecasts as a means for gauging future realized inflation and, hopefully, shed light on the implications these measures have on the direction of monetary policy.

Data

Consumer expectations are based on the Survey of Consumers published by the University of Michigan. The survey, released monthly, provides data on expected inflation over the next 12-months and 5-years, offering both a short and long run perspective of the American public. Data for 12-month expectations is available on a monthly basis beginning in 1978. Data for 5-year expectations is spotty in the early years of the survey and for my purposes will start in 1990 when consistent monthly readings become available.

Economist expectations are taken from the Cleveland Fed’s Inflation Expectations model. The Cleveland Fed has published the results for 1-, 5- and 10-year expected inflation from this model going back to 1982. The model takes a range of input variables, including:

  • Blue Chip CPI forecast
  • Current month and historical CPI
  • Short- and long-term Treasury yields
  • Survey of Professional Forecasters median year-over-year CPI inflation rate

To measure inflation, I will use CPI and Core CPI, respectively. While CPI tends to be most relevant for consumers, policy makers tend to place greater emphasis on “core” statistics which are less volatile. The change in CPI and Core CPI will be presented on a year-over-year and 5-year annualized basis in order to facilitate a direct comparison with the expectations measures cited above.

Results I: 12-month Expectations v. Realized Inflation

The below charts depict the inflation expectations of the participants in the Univ. of Michigan survey (i.e., “Univ. Mich.”) and the inflation forecast from the Cleveland Fed (i.e., “economists”) for the next 12 months against the realized year-over-year (YoY) change in CPI and Core CPI, respectively. More specifically, the YoY change in CPI/Core CPI has been lagged by 12 months to show what inflation actually ended up being in the succeeding year. Presented this way, we can clearly see what consumers/economists predicted inflation to be at a point in time and what the official result was standing at that point in time and looking 12 months into the future.
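To make the alignment concrete, here is a minimal sketch of the lag construction using pandas and a toy CPI series (the variable names are illustrative, not from the original analysis):

```python
import pandas as pd

# Toy monthly CPI level series; in practice this would be the CPI index
# from BLS/FRED.
cpi = pd.Series([100 * 1.002 ** t for t in range(48)])

# YoY inflation: the change versus 12 months earlier (known at month t).
yoy = cpi.pct_change(12)

# Realized inflation over the *next* 12 months, aligned to the forecast
# date: shift the YoY series back by 12 so the value at t is the
# inflation that materialized between t and t+12.
realized_next_12m = yoy.shift(-12)
```

Plotting `realized_next_12m` against the forecast series then compares, at each date, what was predicted with what actually happened over the following year.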

A quick scan of the plots suggests that neither economist nor consumer estimates track future inflation especially well. With respect to CPI, periods of divergence between realized and expected values dominate the plot. There does appear to be some improvement when moving from CPI to Core CPI. In particular, the mid-90’s shows a relatively high degree of alignment among the series. More recently, economists seem to have done a decent job forecasting the average level of inflation during the 2010’s; albeit with much higher volatility than Core CPI.

Other interesting features of the plots include the observation that since ~2000 consumer expectations of inflation have been consistently higher than economist estimates. Moreover, consumer expectations ran markedly higher than both realized CPI and Core CPI throughout the 2010’s. In general, consumers and economists both significantly underestimated the post-pandemic inflation surge.

To confirm the accuracy of these intuitive statements, the below plot depicts the rolling 3-year correlation between our forecast measures and realized inflation.
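A rolling correlation of this kind can be computed along the following lines (synthetic stand-in series, not the actual survey and CPI data):

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for a monthly forecast series and the realized
# (forward-aligned) inflation series from the previous charts.
rng = np.random.default_rng(0)
forecast = pd.Series(rng.normal(0.02, 0.005, 240))
realized = forecast + pd.Series(rng.normal(0, 0.01, 240))

# Rolling 3-year (36-month) correlation between forecast and outcome.
rolling_corr = forecast.rolling(36).corr(realized)
```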

While the chart shows a fair degree of correlation between CPI/Core CPI and the expectations measures during the 90’s, the salient feature is that the correlations are wide ranging and highly unstable. Indeed, the 2010’s period shows considerable degradation in the relationship across all measures, with negative correlation occurring more often than positive. This implies an element of confusion amongst both consumers and economists concerning even the direction of inflation.

To further gauge the accuracy of these forecasts, the below table shows the root mean-squared error and R2 from regressions of the forecasting variables against CPI and Core CPI, respectively. The standard deviations for CPI and Core CPI are also noted.

Table 1: Sample Statistics & Forecast Accuracy for 12-month Inflation & Inflation Expectations

                           CPI       Core CPI
Standard Deviation         .0159     .0127

                           RMSE      Adjusted-R2
Economists v. CPI          .0154     7.5%
Univ. Mich. v. CPI         .0156     5.0%
Economists v. Core CPI     .0098     39.7%
Univ. Mich. v. Core CPI    .0110     24.4%

If economist estimates and survey responses from consumers are good predictors of future inflation, then we would anticipate high adjusted-R2’s and regression RMSE’s notably below the standard deviations of CPI/Core CPI. From Table 1 we can see that neither economists nor consumers are especially good at forecasting 12-month CPI. RMSE’s are nearly the same as the standard deviation of CPI and adjusted-R2’s are very low, indicating that the regressors don’t explain much of the year-to-year variation. When we move to evaluate Core CPI, we observe significant improvement for both measures. Approximately 40% of the variation in Core CPI is explained by the economist prediction and RMSE is meaningfully lower. The statistics for the Univ. of Michigan survey are similarly improved; though, economists appear to have the edge. The lower overall variance of Core CPI seems to aid both groups in predicting 12-month forward inflation.
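For concreteness, the RMSE and adjusted-R2 statistics of Table 1 can be computed with a simple OLS sketch like the one below (the `forecast_accuracy` helper and the synthetic data are mine, not part of the original analysis):

```python
import numpy as np

def forecast_accuracy(forecast, realized):
    """RMSE and adjusted R-squared from regressing realized inflation on
    a forecast (OLS with an intercept), in the spirit of Table 1."""
    X = np.column_stack([np.ones_like(forecast), forecast])
    beta, *_ = np.linalg.lstsq(X, realized, rcond=None)
    resid = realized - X @ beta
    n, k = len(realized), 1                       # observations, regressors
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1 - (resid @ resid) / ((realized - realized.mean()) ** 2).sum()
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    return rmse, adj_r2

# Synthetic example: a noisy forecast with some predictive content.
rng = np.random.default_rng(1)
f = rng.normal(0.025, 0.01, 120)
y = 0.5 * f + rng.normal(0, 0.01, 120)
rmse, adj_r2 = forecast_accuracy(f, y)
```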

Results II: 5-Year Expectations v. Realized Inflation

As noted in the introduction, short-run inflation may be influenced by factors substantially outside of the control of monetary policy. Therefore, “short-run” factors may make near horizon forecasting difficult. However, in the long-run, inflation is primarily driven by monetary factors. Consumers and economists may have an easier time predicting long-run inflation as short-term fluctuations in the price level “even out”. To evaluate this hypothesis, we can perform a similar exercise as Part 1 for a 5-year horizon.
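The 5-year realization can be constructed in the same spirit as the 12-month series; a minimal sketch with a toy CPI series (not the actual data):

```python
import pandas as pd

# Toy monthly CPI level compounding at 2% per year.
cpi = pd.Series([100 * (1 + 0.02 / 12) ** t for t in range(120)])

# 5-year (60-month) annualized inflation rate at each date.
five_yr_annualized = (cpi / cpi.shift(60)) ** (1 / 5) - 1

# As before, align the realization back to the forecast date so the
# value at t is the annualized inflation realized between t and t+60.
realized_next_5y = five_yr_annualized.shift(-60)
```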

In this section, I will be evaluating the 5-year annualized change in inflation against the 5-year forecast from the Cleveland Fed and responses from the Michigan survey, respectively. Let’s begin with the charts:

Once again, the charts depict substantial discrepancies between the forecast variables and both headline and core inflation statistics. The Economist estimate is, at least, directionally accurate to the extent that it declined for the better part of 30 years in line with inflation. The estimate from the Michigan survey has been consistently above CPI and Core CPI and does not capture the changes in inflation well. In the charts, the last reading for the forecast variables was taken in July 2018 (hence predicting inflation for July 2023). It’s no surprise then that both measures missed the run up in inflation taking place in 2022 and 2023.

The rolling correlation plot confirms some of these informal observations. The Economist estimate has generally been only weakly correlated with CPI over the last ~15 years. The most striking result is how poorly correlated the University of Michigan survey has been with both measures, posting a deeply negative correlation for the better part of 20 years. Indeed, the rolling correlations plots are not suggestive of any kind of a stable relationship between expectations and realized inflation.

Table 2 presents the sample and forecast accuracy statistics.

Table 2: Sample Statistics & Forecast Accuracy for 5-Year Inflation & Inflation Expectations

                           CPI       Core CPI
Standard Deviation         .0061     .0048

                           RMSE      Adjusted-R2
Economists v. CPI          .0056     16.6%
Univ. Mich. v. CPI         .0061     1.1%
Economists v. Core CPI     .0042     23.6%
Univ. Mich. v. Core CPI    .0043     20.4%

For the Economist estimates, the summary statistics demonstrate only a modest ability to predict 5-year inflation. The R2 for 5-year CPI is higher than the R2 obtained from the earlier 12-month regression. This implies that the longer time horizon is moderately beneficial for economists’ structural models. The Michigan survey seemingly has no meaningful relationship with CPI, but the statistics do improve for Core CPI and are broadly in line with the results reached for Economists v. Core CPI. Ultimately, the regressions do not support the hypothesis that long-run inflation is easier to forecast. In fact, the performance deteriorates in several cases.

Concluding Remarks

Modern economic theory suggests that managing expectations is key to containing inflation. In this post, I measured the performance of economist and consumer forecasts for predicting future realized inflation over the short and long run. The results mostly suggest that neither economists nor consumers are able to forecast inflation with a high or consistent degree of accuracy. On balance, economists fare better than consumers, but their relatively superior reliability is still quite limited and confined primarily to short-run Core CPI.

There are many alternatives for measuring inflation (PCE, median CPI, “sticky” indices, etc.) and expectations (market based, business surveys, etc.) and even more when it comes to forecasting. However, the foregoing analysis is based on some of the more popular and widely publicized measures and casts considerable doubt on the enterprise.

Forecasting…it’s a tricky business!

Managed Futures & Trend Following – Inside the Black Box
Sat, 10 Dec 2022


It goes without saying that 2022 has been a difficult year across markets. Investors have had to contend with an inflationary bear market for which the traditional playbook has proven woefully inadequate. NASDAQ and high yield debt, the darlings of yesteryear, have fallen from grace with few exceptions. Treasuries, the most common hedge against stock volatility, have suffered their worst drawdown in at least the last 70 years (and it’s not close):

Times such as these provide good cause for reflection. Portfolio managers and allocators are generally charged with building diverse portfolios that balance growth and safety of capital over an intermediate- to long-term horizon. Treasuries have often filled the role of diversifier and risk-off asset. However, if Treasuries are expected to be a less effective hedge against risk assets in the future, then we can anticipate portfolio construction to look very different going forward. Among the questions PMs should be asking: do other strategies or asset classes exist that can enhance diversification and deliver consistent returns?

In this note, I’ll be making the case for managed futures as one such asset class. Managed futures are not well known to most investors, but, as we shall see, have some attractive properties especially at times of high volatility. We’ll begin by examining the performance and economic logic of managed futures as a strategy. From there, we’ll investigate the implementation of managed futures in liquid mutual funds which all investors can access. We’ll see that not all funds are created equal (surprise!). Finally, we will look at portfolio construction and quantify the impact of managed futures in the context of a well-diversified portfolio.

The Trend is Your Friend

The following quote from Dr. John Lintner (of CAPM) may provide some motivation as we undertake the study of managed futures:

“Indeed, the improvements from holding efficiently selected portfolios of managed [futures] accounts or funds are so large – and the correlations between the returns on the futures portfolios and those on the stock and bond portfolios are surprisingly low (sometimes even negative) – that the return/risk trade-offs provided by augmented portfolios consisting partly of funds invested with appropriate groups of futures managers… combined with funds invested in portfolios of stocks alone (or in mixed portfolios of stocks and bonds), clearly dominate the trade-offs available from portfolios of stocks alone (or from portfolios of stocks and bonds). Moreover, they do so by very considerable margins.

The combined portfolios of stocks (or stocks and bonds) after including judicious investments in appropriately selected sub-portfolios of investments in managed futures accounts…show substantially less risk at every possible level of expected return than portfolios of stock (or stocks and bonds) alone. This is the essence of the “potential role” of managed futures accounts (or funds) as a supplement to stock and bond portfolios suggested in the title of this paper.

Finally, all the above conclusions continue to hold when returns are measured in real as well as in nominal terms, and also when returns are adjusted for the risk-free rate on Treasury bills.”

Lintner, The Potential Role of Managed Commodity-Financial Futures Accounts (and/or Funds) in Portfolios of Stocks and Bonds (1983)

This passage provides several tantalizing clues on the possible role of managed futures in a portfolio. Namely, that such strategies improve the risk/return profile of portfolios of stocks and bonds, exhibit meaningfully low correlation to traditional assets, and improve returns on both an absolute and risk-adjusted basis. We shall evaluate each of these claims in turn.

Economic Rationale

The primary driver of returns for managed futures strategies is trend-following or momentum investing; that is, buying assets that have recently been going up and selling (i.e., shorting) assets that have recently been declining. Trend based strategies are typically applied to liquid futures contracts across a wide range of markets including equity indices, rates, commodities (energy, agricultural, and industrial), and currencies. Most investors are not exposed to commodities or FX, so from the simple perspective of traded instruments managed futures have the potential to introduce new sources of risk and return.
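As a toy illustration of the core idea (not any particular manager's model), a bare-bones time-series momentum signal might look like this:

```python
import numpy as np
import pandas as pd

def trend_signal(prices: pd.Series, lookback: int = 252) -> pd.Series:
    """Bare-bones time-series momentum: long (+1) a market whose trailing
    return over `lookback` days is positive, short (-1) otherwise. Real
    programs layer many lookbacks, dozens of markets, and volatility-based
    position sizing on top of this idea."""
    return np.sign(prices.pct_change(lookback))

# A steadily rising toy price series produces a persistent long signal.
px = pd.Series(np.linspace(100, 150, 300))
sig = trend_signal(px)
```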

Momentum investing has a rich academic history and is widely regarded as an essential factor for explaining the performance of stock portfolios (Carhart 1997). The evidence in support of trend-following is similarly robust. Pedersen, Ooi, and Hurst (2017) analyze 137 years of performance for a time-series momentum strategy and conclude that such strategies perform well across different macroeconomic environments and have a propensity to outperform during times of macro- stress.

The chart below depicts quarterly returns from Jan-1990 to April-2022 for the Barclays BTOP50 Index against returns for the MSCI World. The BTOP50 Index seeks to replicate the overall composition of the managed futures industry with regard to trading style and overall market exposure. Also included is the fitted line for a second-degree polynomial. The plot shows a distinctive “smile”; characteristic of trend-followers. This suggests that a key feature of managed futures strategies is that they tend to be “long volatility” and outperform in both extreme up and extreme down markets.
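A quadratic fit of the kind drawn in the chart can be sketched as follows (synthetic stand-in returns, not the actual index data):

```python
import numpy as np

# Synthetic stand-ins for quarterly returns: equities plus a convex,
# "smile"-shaped trend-following response (positive in both tails).
rng = np.random.default_rng(2)
msci = rng.normal(0.02, 0.08, 130)
btop = 0.5 * msci ** 2 + rng.normal(0.01, 0.02, 130)

# Second-degree polynomial fit, as drawn in the scatter plot.
coeffs = np.polyfit(msci, btop, deg=2)   # a, b, c for a*x^2 + b*x + c
fitted = np.polyval(coeffs, msci)

# A positive leading coefficient (a > 0) is the convex "long volatility"
# smile: outperformance in extreme up and extreme down markets.
```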

Set Up and Approach

For this study we’ll be considering the period from Jan-1990 to April-2022. The Barclays BTOP50 Index (henceforth the BTOP50) will serve as our benchmark for managed futures strategies. Returns and summary statistics are calculated on a monthly and quarterly basis (it will be made explicit which is being used).

In Part I we will examine the empirical facts of managed futures performance and how the strategy relates to other asset classes. Part II will investigate the implementation of managed futures in publicly traded mutual funds. Futures markets are highly liquid and trade standardized instruments on exchange. Therefore, mutual funds are an ideal vehicle for implementing and marketing a managed futures strategy. The ultimate objective of Part II is to quantify how well publicly available products live up to the managed futures ideal; as we shall see, the devil is in the details. Part III will explore the use of managed futures in a portfolio.

Part I: Stylized Facts

The table below presents the summary statistics for the BTOP50 along with indices for other key asset classes. Statistics were calculated using quarterly total return data. Confidence intervals (95%) for skew and excess kurtosis are shown in parentheses.

The table shows that over the past 32 years managed futures have, on average, produced positive returns and have exhibited approximately half the volatility of global stocks. The 95% confidence interval for skew suggests that the BTOP50 has distinctly positive skewness: unique amongst the asset classes under consideration. Even US Treasuries and the Dollar (typically considered “safe-haven” assets during risk-off periods) do not exhibit statistically significant positive skewness. The histogram below provides some visual evidence of this effect. The confidence interval for excess-kurtosis isn’t quite conclusive at the 95% level but is still suggestive of heavy tails for the BTOP50. Moreover, the Shapiro-Wilk test decisively rejects the hypothesis of normally distributed returns. Interestingly, the Shapiro-Wilk fails to reject normality for 10-year Treasury and DXY returns thereby suggesting that these series are relatively well behaved.
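For reference, the distributional statistics described above can be computed with scipy along these lines (synthetic returns as a stand-in for the actual index data):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for quarterly total returns of one index.
rng = np.random.default_rng(3)
returns = rng.normal(0.015, 0.05, 129)

skew = stats.skew(returns)
ex_kurt = stats.kurtosis(returns)        # excess kurtosis (normal = 0)

# Shapiro-Wilk test of the normality hypothesis; a small p-value
# (e.g., < .05) rejects normally distributed returns.
w_stat, p_value = stats.shapiro(returns)
```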

The following chart maps the cumulative return of the BTOP50 and comparative assets over the sample period. Several interesting observations stand out. Over the full period, managed futures (as proxied by the BTOP50 in red) are the third best performing asset class; slightly edging out Treasuries (magenta) and falling just shy of IG Corporates (aqua). Over the 1990-2010 subperiod which, of course, featured the TMT Bubble and GFC, trend-following was the top performer largely because the strategy deftly avoided both of these large drawdown events and actually posted positive returns in ’08-’09.

However, since then—and until fairly recently—strategies built to profit from price trends have struggled. Since the GFC, markets have trended less than their historical norm; which presents a certain challenge if you’re a trend-follower! Part of the explanation for underperformance during the 2010’s period may be attributable to the deluge of money that flooded the industry precisely because the performance had been so impressive (see below); a period of mean reversion was inevitable.

Another critical aspect of managed futures that can be divined from the cumulative return plot: the low correlation the strategy appears to have with the other ‘traditional’ asset classes. The following chart details the rolling 12-month correlation of the BTOP50 with the other five asset classes, respectively. The solid black line in each plot shows the average correlation over the entire observation period.

While the correlations certainly vary over time, it can be seen clearly that trend-following has structurally low correlation with the other assets. At -.0302 the correlation with equities is statistically indistinguishable from zero. Equities are often the largest source of risk in diversified portfolios, and it is often desirable to hedge this risk with assets that perform well when stocks struggle. Over the past 20+ years, Treasuries have filled this role and, up until 2022, have generally done a good job. However, 2022 has revealed significant gaps in portfolios that rely solely on bonds for downside protection. In today’s environment of high inflation, sagging growth and high volatility trend-followers have excelled. When it comes to diversification, managed futures do exceedingly well.

When Things Get Extreme

Between the smile-plot and correlation diagram we are building the case that managed futures have a very important role to play in portfolio construction. Specifically, managed futures strategies appear to produce consistently positive returns across market regimes and perform particularly well in the tails. Let’s dig a little deeper into this latter point.

The following plot contains three panels. The first panel shows the average rolling 12-month return for the Barclays BTOP50 and MSCI World over the full sample period. The MSCI World has provided an average return of ~9.75% since 1990 while the BTOP50 has returned ~5.80%. Over a sufficiently long period of time the series with the highest expected return will provably outperform all other assets. However, as the cumulative return plot demonstrates, the path to get there can (and likely will) be punctuated by periods (possibly long ones) of significant underperformance and volatility.

Now let’s examine the second and third panels. Here I have sorted the returns into deciles based on the performance of the MSCI World. The idea is to show how managed futures performed when the MSCI World did particularly well/poorly. The third panel shows the average 12-month return for the 10th (i.e., best) decile. The interpretation is as follows: during periods of “good” returns for the MSCI, what is the average “good” return for each index? The average top decile return for the MSCI World is ~34%; so very good. On the other hand, the average return for managed futures in the top decile is only about 7%. So, when equity markets really run you want to own stocks (this probably comes as no surprise!).

The second panel shows the average 12-month return for the 1st (i.e., worst) decile. The interpretation is: during periods of “bad” returns for MSCI, what is the average “bad” return? As can be seen, the average bottom decile return for the MSCI is approximately -24% while the average return for managed futures is positive 12%. This is the critical point: managed futures have a positive expectation in both up & down markets, but it is in down markets where their hedging benefits are felt most strongly; just when you need it most.
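The decile sort behind these panels can be sketched as follows (synthetic series in place of the actual rolling 12-month returns):

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for rolling 12-month returns of the two indices.
rng = np.random.default_rng(4)
msci = pd.Series(rng.normal(0.09, 0.15, 400))
btop = pd.Series(0.04 - 0.3 * msci + rng.normal(0, 0.05, 400))

# Sort observations into deciles of MSCI performance (0 = worst,
# 9 = best) and average each index within each decile.
deciles = pd.qcut(msci, 10, labels=False)
avg_by_decile = pd.DataFrame({"msci": msci, "btop": btop}).groupby(deciles).mean()
```

`avg_by_decile.loc[0]` then gives the conditional averages in the worst MSCI decile, and `avg_by_decile.loc[9]` the averages in the best.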

One final iteration on this theme, let us consider the extreme cases in addition to the simple averages elucidated above. The following chart depicts the maximum & minimum return for the top and bottom deciles of the MSCI World and the corresponding performance of the BTOP50.

Panel 2 shows the maximum return for decile 10 (i.e., 100th percentile). Essentially, this panel displays the best 12-month return for the MSCI since 1990 and how the BTOP50 performed over the same period. We can see the MSCI returned about 55% while the BTOP put up 24%. In a ripping bull market managed futures can produce solid returns, but ultimately won’t keep pace with stocks (recall that the BTOP50 has much lower volatility so this observation is not necessarily surprising).

Let’s shift our attention to Panels 1 & 3; the “bad times” for stocks. Panel 1 is the maximum return for decile 1 (i.e., the 10th percentile) of the MSCI. The least amount that the MSCI has lost over a 12-month period is approximately -13.5%. In contrast, when the MSCI was down 13.5% the BTOP50 was up 30%. Likewise, in Panel 3 we observe that the worst (i.e., 1st percentile) 12-month return for the MSCI was a merciless -47%. Over this period the BTOP50 did lose money, but it was a very manageable -2.6%.

Bringing it all together we can make two important observations. 1) trend-following has a long-run positive expected return and, moreover, a positive expected return in both Bull and Bear markets (this is the lesson of the means chart). 2) managed futures have an asymmetric return profile. While generally failing to keep pace with stocks in a Bull market the strategy is still capable of producing solid returns. However, in Bear markets, managed futures strategies have significantly outperformed stocks, producing positive returns or, at minimum, offering substantially less downside.

Part II: Mutual Fund Analysis

In the previous section we examined the empirical facts about managed futures and set the initial groundwork for understanding the potential role of trend-following in a portfolio. We concluded that trend-following has a long-run positive expected return, performs well across different market regimes, and exhibits a low to zero correlation with other common assets. In this section we will investigate the implementation of managed futures strategies in publicly traded mutual funds.

In order to achieve the structurally low correlation we observed, it is necessary for a successful managed futures program to trade across many distinct markets. If a trend-follower only trades equities, then the overall portfolio will substantially track the performance of the stock market and won’t inherit the benefits of diversification. This operational complexity makes the strategy very difficult (basically impossible) for retail and high net-worth investors to implement without professional assistance. As such, the best way to access managed futures is often through a mutual fund.

The marketing of managed futures via mutual funds is a relatively recent innovation with the first funds introduced in mid-2000’s. As such, for the analysis to follow we are limited to some degree by the availability of data. The funds I intend to investigate are as follows:

  • Guggenheim Managed Futures Strategy Fund (RYMTX)
  • AQR Managed Futures Strategy Fund (AQMIX)
  • AlphaSimplex Managed Futures Strategy Fund (ASFYX)
  • Arrow Managed Futures Strategy Fund (MFTFX)
  • Virtus FORT Trend Fund (VAPIX)

To give a reasonable view of performance over a market cycle I limited my search to funds with data back to 2010…these five were the only ones to make the cut. To give a sense of how potentially underutilized this asset class is at the retail level, the total AUM for these five funds is less than ~$5B; equivalent to a single mid-cap stock. Indeed, the Guggenheim fund has only $32M in AUM (despite Guggenheim otherwise being a gigantic asset manager).

To supplement the mutual funds, I also include data for two private funds available only to accredited investors:

  • Abbey Capital
  • Millburn

To measure the exposure of each fund I regressed the monthly returns against the indices discussed in Part 1. The results of the analysis are presented in the table below. Standard errors are corrected for heteroscedasticity and autocorrelation.
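The style of regression described here can be sketched as below; this is a simplified Newey-West implementation with toy data, not the exact specification used in the analysis (the `ols_hac` helper is mine):

```python
import numpy as np

def ols_hac(y, X, lags=6):
    """OLS coefficients with Newey-West (HAC) standard errors. X should
    include a column of ones for the intercept."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    Xu = X * u[:, None]                    # moment contributions x_t * u_t
    S = Xu.T @ Xu                          # lag-0 term of the HAC "meat"
    for l in range(1, lags + 1):
        w = 1 - l / (lags + 1)             # Bartlett kernel weight
        G = Xu[l:].T @ Xu[:-l]
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    se = np.sqrt(np.diag(XtX_inv @ S @ XtX_inv))
    return beta, se, beta / se             # coefficients, HAC s.e., t-stats

# Toy usage: regress a fund's monthly returns on an intercept plus one index.
rng = np.random.default_rng(5)
idx = rng.normal(0.005, 0.03, 150)
fund = 0.9 * idx + rng.normal(0, 0.01, 150)
X = np.column_stack([np.ones_like(idx), idx])
beta, se, tstats = ols_hac(fund, X)
```

In practice a library routine (e.g., statsmodels with a HAC covariance estimator) would be the natural choice; the hand-rolled sandwich estimator above just makes the mechanics explicit.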

The table contains a massive amount of information, so we’ll begin our analysis by considering the exposure of each fund to the BTOP50. The majority of the funds selected for study exhibit a highly statistically significant relationship to the Barclays BTOP50 as evidenced by large t-statistics and near-zero p-values. Moreover, the coefficients are uniformly positive as we expected a priori. Arrow Managed Futures (MFTFX) has the largest gross exposure with a beta of ~1.93 while Millburn has the lowest exposure (of the significant funds) with a beta of ~.84. This is good! Each of these funds claim to be a trend-follower, so they better be significantly and positively exposed to the index. The single exception is the Virtus FORT Trend Fund (VAPIX). The coefficient for the BTOP50 for VAPIX is not significantly different from zero which suggests that the fund, which purports to be a “trend” fund, has no trend-like exposure which is…curious.

In order to assess the quality of managed futures strategies in vehicles accessible to the average investor we need to rigorously understand if the funds are staying true to the managed futures mandate. This is important because if we decide to allocate to a managed futures strategy then we have a certain expectation about how it will perform. We don’t want to inadvertently invest in a fund that has covertly moved into other asset classes or added incremental Market beta risk.

With this in mind, let us now turn our attention to the MSCI World. The coefficients on the MSCI World are statistically significant for Guggenheim (RYMTX), AQR (AQMIX), Arrow (MFTFX) and Virtus FORT (VAPIX). This is potentially problematic. The coefficient of .0734 for RYMTX is small in absolute terms, so while there is evidence that this fund dabbles in equities the attendant beta risk may not be of particular concern. For AQMIX and MFTFX the betas are negative, which suggests that both funds tend to be short global stocks. This is possibly not ideal, as what we really want from a managed futures manager is a lack of correlation (neither positive nor negative) to equities. However, at least with AQR and Arrow we can be reasonably sure that if we were to invest, then we wouldn’t be adding unwanted stock market risk to our portfolio.

This brings us to VAPIX. The coefficient on the MSCI World for VAPIX is ~.61 and highly significant which implies the fund is meaningfully exposed to stocks. Indeed, if we consider the full regression output for VAPIX, we observe that the MSCI World is the sole significant exposure for the fund. Furthermore, the MSCI explains ~61% of the fund’s variance as evidenced by R2. At this point, it would seem that VAPIX is essentially an expensive substitute for a portfolio of cheap beta and cash. This is not what we want to have in a managed futures provider.

As far as the other regressors are concerned, it looks like Guggenheim (RYMTX) and AQR (AQMIX) have short Treasury exposure. AQR and AlphaSimplex are short the Dollar, and only Guggenheim is short commodities.

Zooming in on a couple of other interesting features, let us consider Abbey Capital. The regression for Abbey shows that the BTOP50 is the only significant risk factor. Moreover, the R2 for Abbey is .88; the highest amongst the candidate funds. This suggests that Abbey is a good example of a “pure” trend-following manager which we’ll want to keep in mind when we move toward portfolio construction.

Perhaps the most confounding results come from Millburn. Millburn demonstrates pronounced exposure to the BTOP50 while the other regressors are statistically insignificant; this is good. However, the R2 comes in at only .39. This indicates that a large proportion of Millburn's variance comes from sources outside of our selected variables and that the particular strategy they are running is part trend following and part something else. Millburn utilizes a highly data-driven methodology and works on an unusually short time horizon (22 days between moving from long to short). It could be that significant non-linearities exist as part of their approach that our model isn't picking up.

Part III: Portfolio Construction

Let’s recap what we discussed so far. In Part I we examined the empirical nature of managed futures strategies using the Barclays BTOP50 index as a broad proxy for performance. We demonstrated that trend following strategies have a positive expected return over time and that this return is uncorrelated or weakly correlated with traditional assets like stocks, fixed income and commodities. Furthermore, we were able to show that managed futures tend to have an asymmetric return profile and generally perform well in environments of high volatility when other assets typically struggle.

In Part II, we evaluated a set of public and private funds to assess how well managed futures are implemented in investable vehicles. We discovered that the degree of implementation differs significantly amongst managers and that we must take care during the selection process in order to avoid unwanted or redundant risk exposures.

In Part III, we’ll put all of these pieces together in a portfolio to see how managed futures impact the risk/return profile and potentially benefit investors. However, one of the issues that we need to address upfront is the lack of data. As mentioned previously, data for the majority of managed futures mutual funds only extends back to around 2010. In the world of finance, a decade of returns usually isn’t enough to achieve robust results. Therefore, for this part we’ll narrow our focus and only work with the funds with the longest return histories. For our case, the funds with the longest histories are Millburn and Abbey which begin in October 2004 and January 2002, respectively. It’s unfortunate that we are unable to go back further, but 18 years of data should provide us with a reasonable perspective from which to draw conclusions.

For this part, I constructed four portfolios:

  • Simple Benchmark
    • 50% MSCI World
    • 20% Treasuries
    • 20% Corporate Bond
    • 5% DXY
    • 5% Commodities
  • Barclays BTOP50 Benchmark
    • 40% MSCI World
    • 20% Barclays BTOP50
    • 16% Treasuries
    • 16% Corporate Bonds
    • 4% DXY
    • 4% Commodities
  • Abbey Portfolio
    • 40% MSCI World
    • 20% Abbey Capital Managed Futures Strategy
    • 16% Treasuries
    • 16% Corporate Bonds
    • 4% DXY
    • 4% Commodities
  • Millburn Portfolio
    • 40% MSCI World
    • 20% Millburn Multi-Markets Strategy
    • 16% Treasuries
    • 16% Corporate Bonds
    • 4% DXY
    • 4% Commodities

The Simple Benchmark represents a fairly typical allocation across diverse assets and serves as our baseline. To build the managed futures alternatives I added a 20% allocation to the BTOP50, Abbey Capital, and Millburn, respectively, funding it pro-rata from the other asset classes. The BTOP50 Benchmark serves as a good "industry" benchmark against which to compare the Abbey and Millburn portfolios.
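The pro-rata carve-out is mechanical enough to sketch in code. This is a hypothetical illustration (the column names and simulated returns are not real data):

```python
import numpy as np
import pandas as pd

# Baseline weights from the Simple Benchmark
simple = {"MSCI_World": 0.50, "Treasuries": 0.20, "Corporates": 0.20,
          "DXY": 0.05, "Commodities": 0.05}

def add_sleeve(weights, sleeve_name, sleeve_wt=0.20):
    """Carve out a sleeve (e.g., 20% BTOP50) pro-rata from the existing weights."""
    scaled = {k: v * (1 - sleeve_wt) for k, v in weights.items()}
    scaled[sleeve_name] = sleeve_wt
    return scaled

btop = add_sleeve(simple, "BTOP50")   # 40/16/16/4/4 plus a 20% BTOP50 sleeve

# Monthly portfolio return = weighted sum of asset returns (monthly rebalancing)
rng = np.random.default_rng(1)
returns = pd.DataFrame(rng.normal(0.004, 0.03, (120, len(btop))),
                       columns=list(btop))
port_ret = returns.mul(pd.Series(btop), axis=1).sum(axis=1)
```

Scaling the existing weights by (1 − 0.20) and assigning 20% to the sleeve reproduces the 40/16/16/4/4 splits listed above.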

The cumulative return plot and summary statistics table below detail the results.

The cumulative return plot shows significant overlap between the Benchmark, Abbey and Millburn portfolios. The differences amongst the portfolios can be more clearly discerned from the table. We observe that on a cumulative and annualized return basis Millburn slightly edges out the Simple Benchmark and Abbey portfolios; however, practically speaking, all three are materially the same. The Barclays portfolio is the laggard falling meaningfully shy of the other three alternatives.

Turning our attention to the risk-adjusted statistics, the variation amongst the portfolios begins to reveal itself. The Simple Benchmark is significantly more volatile than the other three, with a standard deviation of 8.45% v. 7.23%, 7.43% and 7.02% for Abbey, Millburn and Barclays, respectively. From the perspective of the Sharpe Ratio, Abbey appears to be the most efficient of our options, followed closely by Millburn and well ahead of the Simple Benchmark. From both an absolute and a risk-adjusted perspective, the Abbey portfolio is looking quite attractive.

What about blow-ups? We noted in Part I that managed futures strategies appear to have skewed, heavy-tailed return distributions, which may lead to downside risks not well captured by volatility and Sharpe alone. The Sortino Ratio, a variation of Sharpe that focuses on the volatility of negative returns, is highest for the Abbey portfolio. This suggests that Abbey has the lowest downside risk of the candidate portfolios and, crucially, sits ahead of the Simple Benchmark. Finally, we consider the impact managed futures strategies have on drawdown. All three managed futures alternatives have experienced smaller drawdowns than the Simple Benchmark. In the case of Abbey, the drawdown is almost 10% lower!
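For reference, the risk statistics used in this section can be computed as follows. This is a generic sketch assuming monthly returns annualized with √12 and a zero-target downside deviation for Sortino; the actual table may use slightly different conventions:

```python
import numpy as np

PERIODS = 12  # monthly data assumed

def ann_vol(r):
    """Annualized standard deviation of periodic returns."""
    return np.std(r, ddof=1) * np.sqrt(PERIODS)

def sharpe(r, rf=0.0):
    """Annualized Sharpe ratio; rf is an annual risk-free rate."""
    ex = np.asarray(r) - rf / PERIODS
    return ex.mean() / ex.std(ddof=1) * np.sqrt(PERIODS)

def sortino(r, rf=0.0):
    """Annualized Sortino ratio using downside deviation below a zero target."""
    ex = np.asarray(r) - rf / PERIODS
    dd = np.sqrt(np.mean(np.minimum(ex, 0.0) ** 2))
    return ex.mean() / dd * np.sqrt(PERIODS)

def max_drawdown(r):
    """Largest peak-to-trough decline of the cumulative wealth curve."""
    wealth = np.cumprod(1.0 + np.asarray(r))
    peak = np.maximum.accumulate(wealth)
    return (wealth / peak - 1.0).min()
```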

Taken together, the portfolio risk analysis paints a very clear and compelling picture: trend-following introduces a source of independent and uncorrelated returns. The inclusion of managed futures in a portfolio does not appear to compromise total return over time as evidenced by cumulative and annualized returns. Moreover, incorporating trend following into your asset allocation leads to lower portfolio volatility, smaller drawdowns and a general improvement in the risk-adjusted statistics. This is what we had hoped to find!

Concluding Remarks

This post has been a deep dive into managed futures and trend-following strategies. We learned that managed futures have several properties that make them attractive investments: namely, uncorrelated returns and crash-protection. We demonstrated that the implementation of the strategy can differ greatly by manager and that we must be selective when considering how to incorporate managed futures into a portfolio. Finally, we discovered that such strategies add meaningfully to traditional portfolios through lower risk and enhanced efficiency without compromising long run returns.

Hopefully, you have found this article useful for your own investing. As always, thanks for reading!

-Aric Lux.

The post Managed Futures & Trend Following – Inside the Black Box first appeared on Light Finance.

Stock Market Valuation and Impact of Inflation https://lightfinance.blog/stock-market-valuation-and-the-impact-of-inflation/#utm_source=rss&utm_medium=rss&utm_campaign=stock-market-valuation-and-the-impact-of-inflation https://lightfinance.blog/stock-market-valuation-and-the-impact-of-inflation/#comments Sun, 26 Jun 2022 20:08:11 +0000 https://lightfinance.blog/?p=1353 If 2022 has taught us anything, it is that our understanding of the inflationary process is woefully incomplete. Increasingly, it seems that the easy money era of the 2010’s created a blind spot in the market: stable inflation and ample liquidity were taken for granted. The risk of high (indeed, very high) inflation was deeply […]

The post Stock Market Valuation and Impact of Inflation first appeared on Light Finance.


If 2022 has taught us anything, it is that our understanding of the inflationary process is woefully incomplete. Increasingly, it seems that the easy money era of the 2010’s created a blind spot in the market: stable inflation and ample liquidity were taken for granted. The risk of high (indeed, very high) inflation was deeply discounted which resulted in a significant misallocation of investor capital.

In some ways, this is understandable. Inflation had been in steady decline for decades and, in the most recent one, routinely surprised to the downside. Prudence and a healthy appreciation for inflation risk consistently went unrewarded. It was against this backdrop that investors were confronted with the most significant fiscal and monetary intervention since WWII, an epic dislocation in supply chains and a war that has strangled critical commodities.

The objective of this post is to provide an analysis of inflation and market valuation. We'll find that the impact of inflation on the market is highly nonlinear, but that a reasonably tight relationship exists when inflation is at extremes.

A Puzzlingly Close Relationship

Stock prices reflect the expected value of future cash flows (I think we all agree on this one!). It follows that low earnings yields (i.e., high PE ratios) reflect some combination of low discount rates and/or high expected future earnings growth. This is essentially the story of the "disruptive tech" stocks that have been ubiquitous during the easy money era. However, various growth metrics are only weakly correlated with earnings yield fluctuations. Moreover, PE ratios have essentially no ability to predict future earnings growth. These were the classic conclusions of Shiller (2001). Indeed, analysis suggests that earnings yields are more closely associated with inflation than with real yields, nominal yields or other traditional growth metrics.

The charts and correlation matrix below hint at this relationship. The first chart depicts the year-over-year change in CPI (LHS) and the TTM earnings-price ratio (i.e., earnings yield) (RHS). While the correlation is not perfect, there is clearly a relationship: high/low inflation is associated with high/low earnings yield.

The second plot takes a slightly different view. Here I depict the normalized YoY CPI and normalized E/P ratio to give a sense of the relative relationship. The correlation is quite tight in the first half of the series and particularly so during the inflationary 1970’s and early 80’s, but has been less reliable over the past twenty years.
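The normalization behind the second plot is a simple z-score, which rescales each series to mean 0 and standard deviation 1 without changing its correlation. A quick sketch with made-up numbers:

```python
import numpy as np

def zscore(x):
    """Standardize a series to mean 0, stdev 1 so two series share one scale."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Made-up YoY CPI and earnings-yield values that co-move, for illustration only
cpi = np.array([0.02, 0.04, 0.09, 0.12, 0.06, 0.03])
ep  = np.array([0.05, 0.06, 0.09, 0.11, 0.07, 0.05])
corr = np.corrcoef(zscore(cpi), zscore(ep))[0, 1]
```

Note that z-scoring only changes the visual scale of a chart; the correlation of the standardized series equals that of the raw series.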

This relationship may seem surprising because earnings yield is supposed to be a real variable. The textbook explanation is that stocks are real assets. Higher levels of inflation should be reflected in higher nominal earnings and have a negligible impact on stock prices and valuation. However, this description is not corroborated by the data.

Further examination of the correlation matrix suggests that the connection between valuation and inflation is not easy to disentangle. Valuation appears most strongly negatively correlated with YoY CPI, but also negatively correlated with Next-12 Month CPI. This is suggestive of at least a fairly consistent relationship: high inflation is incorporated into valuation today and, should it persist, also reflected in the future. YoY CPI appears moderately negatively correlated with next-12 months real and nominal earnings growth, but next 12-months CPI is uncorrelated with either. This second observation is harder to reconcile. Finally, both YoY CPI and next 12-months CPI are moderately negatively correlated with next 12-months nominal and real returns. The correlation is stronger for next 12-months CPI particularly where real returns are concerned.

An Underexamined Phenomenon

What are the plausible explanations for this unintuitive association?

  • Investors are irrational, risk averse or some combination thereof.
  • Real returns may be correlated with expected inflation which leads to a rationally priced inflation risk premium.
  • High inflation may impact real earnings growth, which translates directly to lower valuations.

Modigliani and Cohn (1979) investigate the first explanation. Franco (in his eminence) postulates that during inflationary episodes investors commit two major errors when valuing stocks. First, they apply nominal discount rates to real-valued cash flows and, second, they fail to incorporate a (sufficiently) higher earnings growth rate into their forecasts. These two errors result in the systematic undervaluation of stocks during periods of high inflation. He suggests that valuing stocks in this way is irrational or reflects a higher level of risk aversion that gets embedded into the discount rate.

In Inflation and the stock market: Understanding the "Fed Model", authors Geert Bekaert and Eric Engstrom make the case that if recessions tend to occur during periods of high inflation, then both equity and bond risk premia will be high at the same time! This would be rational, as recessions would translate to lower nominal and real earnings and command a lower multiple. This seems like an appropriate explanation for our current environment of rupturing stock and bond markets. Interestingly, they present evidence across countries and markets which suggests that countries with a higher inflation-recession correlation tend to have a higher correlation between stock and bond yields, which is very reminiscent of the charts in the previous section.

The last explanation is probably the most digestible. A topic hotly debated of late is whether high inflation will result in margin compression. Margin compression is arguably the most intuitive mechanism by which high inflation could result in lower earnings. If firms are unable to pass through higher costs for materials, labor, etc. onto the end consumer or the consumer simply forgoes consumption because prices are too high then it is easy to see how inflation could impact the bottom line. If real earnings are lower, then lower multiples are a natural consequence.

This third explanation will be the focus of the remainder of this post.

The Impact of Inflation on Market Valuation

The inflation gauge that I elected to use for this study is CPI. The period under investigation is January 1950 to January 2021. Data comes from Bob Shiller’s website. To study the relationship to valuation I sorted CPI into 10 buckets ranging from a low of -2.09% to a high of 14.76%. I then grouped the relevant variable (CAPE, PE, etc.) for the corresponding month into one of the 10 buckets based on the YoY level of inflation during the month and, finally, computed the average value for each bucket.

(That was a bit of a mouthful, but I think the graphs will make the analysis clear.)

The plots below depict the YoY CPI v. PE and CAPE ratio, respectively. To be clear, the graphs show the average of the valuation ratio (PE and CAPE) in each of the 10 CPI groupings.
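The bucketing procedure can be sketched as follows; the simulated inputs below merely stand in for the Shiller data, and the column names are my own:

```python
import numpy as np
import pandas as pd

# Simulated stand-ins for the monthly Shiller series (values are illustrative)
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "cpi_yoy": rng.uniform(-0.0209, 0.1476, 852),  # YoY CPI over the text's range
    "cape":    rng.uniform(8, 35, 852),            # CAPE ratio
})

# Sort observations into 10 equal-width inflation buckets, then average the
# valuation metric within each bucket
df["bucket"] = pd.cut(df["cpi_yoy"], bins=10)
avg_cape = df.groupby("bucket", observed=True)["cape"].mean()
```

`pd.cut` creates the 10 CPI groupings and the groupby mean gives the per-bucket average valuation that the plots depict.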

The hump-shaped plots suggest a highly nonlinear relationship between inflation and valuation, but several features stand out. The sweet spot for peak valuations appears to occur at positive but low levels of inflation, right around 2%. Higher levels of inflation are associated with declining multiples, and a valuation cliff appears around the 5% mark. Very high inflation (>6%) is associated with substantially lower multiples, but there doesn't appear to be a meaningful difference in valuation for inflation of ~6% v. ~14%.

What about forward-looking inflation? It’s one thing if inflation is high today, but it’s quite another for it to remain high in the future. The next two plots depict YoY CPI in the next 12 months v. the PE and CAPE ratio today. The general picture is the same: peak valuations occur at low levels of inflation while low multiples are associated with high inflation. The consistency of these two sets of plots lends some credence to the claim that high inflation is generally bad for stocks.

Concluding Remarks

In this post we investigated the relationship between inflation and stock market valuation. Inflation is a confusing phenomenon and seldom in history has its process been more confounded than today as we grapple with the effects of supply-chain disruption, pandemic recovery in demand, record money growth and war. Perhaps it should not come as a surprise that inflation’s effect on asset prices is nonlinear. On balance, inflation appears negatively correlated with traditional valuation metrics with severe inflation historically associated with very low multiples. If there is a simple explanation, then perhaps it is that inflation causes unease and uncertainty amongst investors which translates to risk aversion and an unwillingness to pay high premiums for uncertain cash flows.

Hope you have enjoyed this post. Until next time, thanks for reading!

-Aric Lux.

Inflation Deep Dive: An Examination of the Underlying BEA PCE Data https://lightfinance.blog/inflation-deep-dive-an-examination-of-the-underlying-bea-data/#utm_source=rss&utm_medium=rss&utm_campaign=inflation-deep-dive-an-examination-of-the-underlying-bea-data Tue, 26 Apr 2022 04:08:12 +0000 https://lightfinance.blog/?p=1330 Introduction Inflation is perhaps the least well understood phenomenon in economics. Once said to be exclusively a monetary phenomenon, our current predicament is significantly more complicated and there is little consensus as to the root cause. Until recently the concern was that inflation would run permanently too low. The topic garnered little interest from the […]

The post Inflation Deep Dive: An Examination of the Underlying BEA PCE Data first appeared on Light Finance.


Introduction

Inflation is perhaps the least well understood phenomenon in economics. It was once said to be exclusively a monetary phenomenon, but our current predicament is significantly more complicated and there is little consensus as to its root cause. Until recently the concern was that inflation would run permanently too low. The topic garnered little interest from the public, but that has sharply reversed in recent months and inflation now ranks as the top concern amongst voters. Indeed, how to appropriately measure inflation is often cause for debate.

February saw the Personal Consumption Expenditures (PCE) price index (the Fed's preferred measure) print an astonishing 6.35% year-over-year increase, while the less volatile "core" PCE index (which excludes food and energy) rose 5.4%; both 40-year highs. This has led many to worry about the possibility of structurally higher prices going forward and the risk of inflation expectations becoming "unanchored" (though there has been some doubt as to the importance of inflation expectations for controlling the price level).

As we consider the outlook for inflation it is critical to understand which parts of the economy are currently causing the problem and what that means for the risks going forward. To untangle this riddle, I went deep (and I mean deep) into the data and examined the ~235 categories of goods and services considered in the PCE index. The goal is to uncover if inflation is broadly distributed or confined to select categories that are having an outsized impact. The methodology is loosely based on research from the Federal Reserve Bank of San Francisco.

Methodology

To begin the analysis, I classified each underlying category into one of three groups:

  • Above Trend: For categories in this group the current rate of inflation is higher than the pre-Pandemic average.
  • At Trend: Inflation for products and services in this group is broadly in line with the pre-Pandemic trend.
  • Below Trend: Prices in this category are registering inflation below the pre-Pandemic average.

To classify the categories, I ran the following regression for the period January 2010 through February 2022:

Πi,t = αi + βi·Di,t + εi,t

Where:

Πi,t = the YoY log-change in the price index for category ‘i’ in month ’t’

αi = regression intercept

Di,t = a dummy variable that takes a value of 1 beginning at the start of the Pandemic in February 2020 and 0 otherwise

βi = regression coefficient for dummy variable

εi,t = regression error term

In this setting the regression intercept, αi, represents the average rate of inflation during the pre-Pandemic period January 2010 through January 2020. The coefficient βi is the differential intercept term and gives us the change in inflation during the Pandemic period. If βi is positive and statistically significant then we can conclude that inflation for category ‘i’ is higher today than during pre-Pan; these categories are classified as “Above Trend”. Conversely, if βi is negative and statistically significant then we conclude inflation for category ‘i’ is lower today than it was pre-Pan; these are the “Below Trend” categories. Finally, if βi is not statistically significant then there is no detectable change between the pre-Pandemic and post-Pandemic periods for product ‘i’; such categories are placed in the “At Trend” group.

Inflation Deep Dive

The below table summarizes the number of categories in each group and the corresponding weight of each group in the Core PCE calculation:

The Above Trend group consists of 101 separate products & services and comprises ~56% of the weight in the Core PCE index. This suggests that over half of all spending is currently running above trend and putting substantial pressure on consumers' wallets. In contrast, only 17% of spending is currently running below the pre-Pandemic trend, which suggests that there isn't much of an offset to the rising prices in other parts of the economy. Finally, 72 categories are currently classified as At Trend, which implies that current inflation is consistent with observed price level changes pre-Pan. However, the At Trend categories only comprise ~28% of overall spending, which is not sufficient to keep prices anchored.

Core PCE can be broadly decomposed into Goods and Services. In the core PCE data there are 68 separate Goods and 147 Service categories. To get a sense of whether Goods or Services are contributing more significantly to inflation we can break down the Trend groups by classification.

The plot below depicts the percentage of all Goods and Service categories contained by each Trend bucket. Approximately 60% of all Goods and 40% of all Services are currently running at Above Trend inflation. The At Trend group is dominated by Services while the Below Trend group is evenly split. Taken together, these figures imply that Goods are primarily responsible for the acceleration in inflation that we are witnessing, but there are potential upside risks if some of the At Trend Services categories inflect higher. A key determinant for keeping Services prices anchored will be a sustained recovery in the labor force in service-related sectors (housing, transportation, food service, childcare, etc.).

To understand where the trends in inflation may be headed, I reconstructed a separate price index for the Above, At and Below Trend groupings. Even though 101 categories are currently logging Above Trend inflation it could be the case that the pace of acceleration is cooling or even rolling over which would suggest some near-term abatement in headline numbers. Conversely, Below Trend figures could be inflecting higher and moving from a net negative contribution to net positive contribution which would mean headline figures are likely to worsen.

The below chart depicts the percentage YoY change in PCE for each of the Above, At and Below price indices. The results are a bit jarring. It appears that each of the classifications are accelerating higher. The Above Trend group started to climb higher basically at the onset of the Pandemic and is currently clocking a 6% YoY change. Interestingly, the Above Trend categories showed the most subdued inflation in the pre-Pan period running at a little more than 1% YoY for almost 10 years. This makes the rapid rise all the more disturbing as it suggests potentially significant damage to the supply chains of the underlying categories.

The At Trend group saw an initial steep decline and stayed low for most of 2020 but has been resurgent in 2021 and ’22. The 4.5% change in February is now notably higher than the 1%-3% range that the index saw pre-Pan. Indeed, the limited sample size may be the only thing keeping these categories in the At Trend group. This observation lends some credence to our earlier conjecture that some Service categories in this classification may be at risk of inflecting higher.

The trajectory of the Below Trend group offers, perhaps, the most interesting results. Historically, this group recorded the highest inflation figures of the three with a pre-Pan range of ~2%-4% and considerably more volatility. At the onset of the Pandemic inflation for this group declined precipitously and spent most of 2020 and part of 2021 in negative territory. Outright deflation for this group was responsible for initially keeping a lid on inflation, but now the situation is distinctly different. Of the three classes, the Below Trend group has experienced the most dramatic snapback from approx. -1.4% in February 2021 to 3.3% one year later. Inflation in this group remains below the top end of the pre-Pan range which suggests near term upside risk as these categories continue to recover.

Having decomposed PCE into the Above, At and Below trend classes we can finally put the pieces together to determine the net impact to headline Core PCE. The following plot charts the cumulative contribution of each bucket to Core PCE. The dark blue region is the cumulative impact of categories that post-Pandemic are considered Above Trend, the dark red region shows the impact of the At Trend categories and dark green the Below Trend. Overlaid on the chart is headline Core PCE in gold.

It’s important to recall that the bucket classifications (and consequently the color scheme) are based on post-Pandemic results. Just because a category is running Above Trend today does not mean that pre-Pandemic its contribution to Core PCE was necessarily positive. Indeed, what we observe is that many categories that are today running Above Trend and contributing significantly to inflation were actually net detractors for most of the 2010’s as evidenced by the sub-zero dark blue region from ~2011-2020. Even today some categories that are considered At Trend are contributing negatively to inflation, though these categories are disappearing fast!

As of February (again, latest data) the Above Trend categories are contributing ~3.30% to Core PCE, At Trend is contributing (net) 1.20% and the Below Trend categories are contributing ~.50%. As we expected from the prior results, there are very few categories now that are acting to offset inflation (i.e., net negative).
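Mechanically, a category's contribution is its index weight times its YoY inflation, and the group totals in the chart are just sums of those products. A toy sketch with made-up weights and inflation rates:

```python
import pandas as pd

# Made-up category data: Core PCE weight, current YoY inflation, trend group
cats = pd.DataFrame({
    "weight": [0.30, 0.26, 0.28, 0.10, 0.06],
    "yoy":    [0.080, 0.050, 0.043, 0.030, 0.020],
    "group":  ["Above Trend", "Above Trend", "At Trend",
               "At Trend", "Below Trend"],
})

# Contribution = weight x inflation; group sums decompose the headline figure
cats["contribution"] = cats["weight"] * cats["yoy"]
by_group = cats.groupby("group")["contribution"].sum()
headline = cats["contribution"].sum()
```

By construction the group contributions sum exactly to the headline number, which is why the stacked regions in the chart add up to the gold Core PCE line.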

Concluding Remarks

In this post I went deep into the inflation data to get a more granular picture of where inflation is running hot and how the underlying trends are developing. We broke down the data into Goods v. Services inflation and classified the ~235 categories into separate buckets based on deviations from pre-Pandemic averages. The results indicate that across almost all categories inflation is positive and accelerating. The key near term risk appears to be At Trend categories flipping to Above Trend in the coming months as the sample size broadens and the underlying trend reveals itself. On balance the results suggest that Core PCE is likely to be higher over the next few months which will have significant implications for the direction of monetary policy.

This post gave me ample opportunity to work with the underlying PCE datasets available through NIPA. Managing this much data was certainly a bit of a beast, but also instructive as it really gave me a sense of the challenge involved in measuring each of these items monthly. If you’re wondering about the various considerations that economists make when constructing a price index (as well as the high possibility for error!) check out this EconTalk podcast with Susan Houseman on manufacturing. It’s exceptionally good and really makes you think about the implicit assumptions you often make when working with price data.

Until next time, thanks for reading!

-Aric Lux.

Beta and Sharpe Ratio of S&P 500 Stocks: March 2022 https://lightfinance.blog/beta-and-sharpe-ratio-of-sp-500-stocks-march-2022/#utm_source=rss&utm_medium=rss&utm_campaign=beta-and-sharpe-ratio-of-sp-500-stocks-march-2022 Fri, 25 Mar 2022 02:11:47 +0000 https://lightfinance.blog/?p=1301 After a rather long hiatus the sorted Beta and Sharpe Ratio for all companies listed in the S&P 500 as of March 2022 are now available. Beta and the Sharpe Ratio are calculated using 3 years of bi-weekly returns. To learn more about Sharpe and Beta check out my post about measuring risk and return! The 5 […]

The post Beta and Sharpe Ratio of S&P 500 Stocks: March 2022 first appeared on Light Finance.


After a rather long hiatus, the sorted Beta and Sharpe Ratio for all companies listed in the S&P 500 as of March 2022 are now available. Beta and the Sharpe Ratio are calculated using 3 years of bi-weekly returns. To learn more about Sharpe and Beta, check out my post about measuring risk and return!
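For the curious, beta and Sharpe from three years of bi-weekly returns (26 periods per year) can be computed roughly as follows; the return series and the 2% risk-free rate here are simulated assumptions, not the actual data behind the download:

```python
import numpy as np

# Simulated stand-ins for 3 years of bi-weekly stock and market returns
rng = np.random.default_rng(4)
periods = 26 * 3
mkt = rng.normal(0.004, 0.03, periods)
stock = 1.5 * mkt + rng.normal(0.001, 0.02, periods)   # true beta of 1.5

# Beta = Cov(stock, market) / Var(market)
beta = np.cov(stock, mkt, ddof=1)[0, 1] / np.var(mkt, ddof=1)

# Sharpe on excess returns, annualized with sqrt(26) for bi-weekly data
rf_biweekly = 0.02 / 26                                # assumed 2% annual rate
excess = stock - rf_biweekly
sharpe = excess.mean() / excess.std(ddof=1) * np.sqrt(26)
```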

The 5 companies with the highest Beta are as follows:

  • Norwegian Cruise Line Holdings Ltd. (NCLH)
  • Royal Caribbean Group (RCL)
  • Apache Corporation (APA)
  • Caesars Entertainment Inc (CZR)
  • Penn National Gaming, Inc (PENN)

After not having revisited the lists for nearly a year, the stocks in the high Beta group remain remarkably unchanged from July 2021! In some ways this is encouraging because it suggests that our Beta estimates are pretty reliable. Royal Caribbean and Norwegian continue to lag badly and remain well below their January 2020 highs. Casino operators Penn National Gaming and Caesars have continued their precipitous decline, which gives little indication of abating. The one notable change concerns Apache (APA). The price of oil has risen dramatically thus far in 2022 on the back of the conflict in Ukraine, and Apache has seen its fortunes shift as a result. APA is up a cool 42% YTD and, if oil remains elevated, the stock may continue to outperform.

The 5 companies with the lowest Beta are as follows:

  • Kroger (KR)
  • The Clorox Company (CLX)
  • Regeneron Pharmaceuticals, Inc. (REGN)
  • Coterra Energy Inc. (CTRA)
  • Gilead Sciences, Inc. (GILD)

There has been some shake-up in the low Beta cohort over the past few months. Kroger has been a beneficiary of inflation and geopolitical tension as concerns about food security have grown in recent weeks. Clorox remains a member of the list but has struggled mightily in 2022 as inflation in basic chemicals has pressured margins. Clorox guided poorly on their Q4 2021 earnings call and you can see the impact this has had on the usually stalwart blue chip. Gilead and Regeneron are new constituents to the list but have had divergent performance thus far in 2022, with Regeneron rallying strongly and Gilead lagging badly. Coterra Energy is an independent oil and gas E&P company based in the Permian basin and a new addition to the list. Interestingly, this stock has had a slightly negative Beta to the S&P over the last 3 years and, as you can see, has been a key beneficiary of the spike in oil prices.

The 5 companies with the highest Sharpe Ratio are as follows:

  • Apple Inc (AAPL)
  • Microsoft Corp (MSFT)
  • Pool Corp (POOL)
  • West Pharmaceutical Services, Inc. (WST)
  • Costco (COST)

The high-Sharpe list consists mostly of old friends. Microsoft and West Pharmaceutical Services are holdovers from the last update. Pool Corp, a leading distributor of in-ground pools and equipment, is back in the mix after a brief absence. Apple (surprisingly) is making its debut on the high-Sharpe list, while Costco, which has been hanging out in the top 10 for a while, is also making its first appearance. Notably, Enphase Energy is absent from the list after appearing for most of 2021.

The high Sharpe list has broadly struggled so far in 2022 as market volatility has challenged the mega cap names which had seemed so unassailable during the pandemic.

The 5 companies with the lowest Sharpe Ratio are:

  • Viatris Inc. (VTRS)
  • Carnival Cruise Lines (CCL)
  • Norwegian Cruise Line Holdings Ltd. (NCLH)
  • DXC Technology (DXC)
  • Las Vegas Sands Corp (LVS)

The low-Sharpe list is a mix of old and new names. Norwegian Cruise Line has the dubious honor of featuring in both the high-Beta and low-Sharpe groups. Joining Norwegian is Carnival, which is down ~70% from its 2020 peak. Las Vegas Sands, another gaming and hotel operator, is a new addition this month; the stock has struggled to gain traction after an initial bounce following the onset of the pandemic and is currently plumbing new lows. Viatris and DXC carry over from the previous update.

To download the updated Betas and Sharpe Ratios for the S&P 500 companies, click the buttons below!

Thanks for reading!

-Aric Lux.

The post Beta and Sharpe Ratio of S&P 500 Stocks: March 2022 first appeared on Light Finance.


]]>
Measuring Hedge Fund Performance with Factor Model Monte Carlo https://lightfinance.blog/measuring-hedge-fund-performance-with-factor-model-monte-carlo/#utm_source=rss&utm_medium=rss&utm_campaign=measuring-hedge-fund-performance-with-factor-model-monte-carlo Mon, 21 Mar 2022 01:34:23 +0000 https://lightfinance.blog/?p=1273 I. Introduction Due diligence for hedge funds presents a unique set of challenges for analysts and asset allocators. Funds often have significant discretion to invest across multiple asset classes and instruments. Funds may also deploy strategies of varying complexity ranging from well-known approaches such as equity long-short to more exotic schemes like capital-structure arbitrage. Investments […]

The post Measuring Hedge Fund Performance with Factor Model Monte Carlo first appeared on Light Finance.


]]>

I. Introduction

Due diligence for hedge funds presents a unique set of challenges for analysts and asset allocators. Funds often have significant discretion to invest across multiple asset classes and instruments. Funds may also deploy strategies of varying complexity ranging from well-known approaches such as equity long-short to more exotic schemes like capital-structure arbitrage. Investments might be in highly illiquid markets with holdings that are either marked to market infrequently or marked to models that are generally not made available for inspection.  Indeed, Hedge Fund Research tracks indices for at least 70 unique hedge fund strategies. The breadth of strategies available requires plan sponsors to carefully select which assets they want exposure to without inadvertently concentrating in any one source of risk.

Such dynamics introduce opacity into an already challenging process. As such, it is crucial for allocators to have a rigorous, quantitative framework for benchmarking performance and measuring risk. In this post, I’ll review the use of Factor Model Monte Carlo (FMMC) simulation as a general and robust framework for assessing the risk-return drivers of hedge fund strategies. We’ll find that FMMC has many attractive qualities in terms of implementation and offers highly accurate estimates of several commonly used risk and performance metrics. The goal is to provide plan consultants, boards, and trustees with a useful and intuitive tool to complement their hedge fund due diligence process.

II. Task and Setup

To demonstrate the efficacy of the Factor Model Monte Carlo approach, we will apply the methodology to the real-world example of Oracle Fund. The name Oracle Fund is fictitious, but the returns are very real. Oracle is an active, investable fund made available only to Qualified Purchasers (i.e., investors with a net worth in excess of $5MM), hence the anonymized name.

By way of background, Oracle is managed by a well-known investment advisor with ~$140B in AUM. Oracle is described as a long-short equity fund with the stated investment objective of providing “equity-like” returns with lower volatility by systematically investing in stocks that are:

  • Defensive: low market beta
  • High Quality: strong balance sheets and consistent cash flow generation
  • Cheap: relatively lower multiples

The strategy invests in developed market public equities with an investable universe containing ~2,300 large cap and ~2,700 small cap stocks.

For this case study we have access to monthly return data for Oracle from October 2012 through December 2021. However, for the purpose of this post and evaluating the accuracy of the factor model Monte Carlo method, we will pretend as though we only have data from January 2017 through December 2021. The purpose of using the “truncated” time series is threefold. First, by estimating our model using only half the available data we can perform “walk forward” (walk-backward?) analysis to evaluate how well our estimates match those from the out-of-sample period. Second, measuring risk-return over a single market cycle may lead us to biased conclusions. By breaking up the sample period we can see how well our results translate to other market environments. Third, in the case of evaluating a new or emerging manager, only a short history of returns may be available for analysis. In such a case, it is essential to have a reliable tool to project how that manager may have performed under different circumstances.

The below graph shows the cumulative return of Oracle since October 2012. The data to the right of the red line represents the “sample period”.

III. Model Specification and Estimation

A common technique in empirical finance is to explain changes in asset prices based on a set of common risk factors. The simplest and most well-known factor model is the Capital Asset Pricing Model (CAPM) of William Sharpe. The CAPM is specified as follows:

ri = α + βi·rm + εi

Where:

  • ri = Return of asset ‘i’
  • rm = Return of market factor ‘m’
  • α = Excess return
  • βi = Exposure to the Market Risk factor
  • εi = Idiosyncratic error term

Market, or “systematic” risk, serves as a kind of summary measure for all of the risks to which financial assets are exposed. This may include recessions, inflation, changes in interest rates, political turmoil, natural disasters, etc. Market risk is usually proxied by a large index like the S&P 500 and cannot be reduced through diversification. βi (i.e. Beta) represents an asset’s exposure to market risk. A Beta = 1 would imply that the asset is as risky as the market, Beta >1 would imply more risk than the market, while a Beta < 1 would imply less risk. εi is idiosyncratic risk and represents the portion of the return that cannot be explained by the Market Risk factor.
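To make the mechanics concrete, β can be estimated from a history of paired returns as the covariance of the asset with the market divided by the variance of the market. Below is a minimal sketch in Python; the return series are purely hypothetical and the function name is my own:

```python
def capm_beta(asset, market):
    # beta = Cov(r_i, r_m) / Var(r_m); alpha = mean(r_i) - beta * mean(r_m)
    n = len(asset)
    mean_a, mean_m = sum(asset) / n, sum(market) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset, market)) / (n - 1)
    var_m = sum((m - mean_m) ** 2 for m in market) / (n - 1)
    beta = cov / var_m
    alpha = mean_a - beta * mean_m
    return beta, alpha

# Hypothetical monthly returns: the asset moves exactly half as much
# as the market, so we expect beta = 0.5 and alpha = 0
market = [0.02, -0.01, 0.03, -0.02, 0.01]
asset = [0.5 * m for m in market]
beta, alpha = capm_beta(asset, market)
print(round(beta, 4), round(alpha, 4))  # 0.5 0.0
```

In practice beta is estimated by regressing asset returns on market returns; for a single regressor the slope reduces to exactly this covariance-over-variance ratio.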

As a long-short hedge fund, part of Oracle’s value proposition is to eliminate some (or most) of the exposure to pure Market β (otherwise, why invest in a hedge fund?). Thus, we will extend CAPM to include additional risk factors which the literature has shown to be important for explaining asset returns.

The general form of our factor model is as follows:

r = α + β1·r1 + β2·r2 + … + βk·rk + ε

All the above says is that returns (r) are explained by a set of risk factors j = 1…k, where rj is the return for factor ‘j’ and βj is the exposure. α is the intercept and ε is the idiosyncratic error.

If we can estimate the βj, then we can leverage the long history of factor returns (rj) to calculate conditional returns for Oracle Fund. Finally, if we can reasonably estimate the distribution of ε then we can build randomness into Oracle’s return series (more on this in a bit). This enables us to fully capture the variety of returns that we could observe.

The FMMC method will take place in three parts:

  • Part A: Data Acquisition, Clean Up and Processing
  • Part B: Factor Model Estimation
  • Part C: Monte Carlo Simulation

Part A: Data Acquisition

Part of the art of analysis is selecting which set of variables to include in a model. For this study, I will be examining a set of financial and economic indices aimed at capturing different investment styles and sources of risk/return. Below is the list of variables. Economic time series were obtained from the FRED Database (identifier included in parentheses), financial indices were obtained from Yahoo! Finance and hedge fund index returns come courtesy of Hedge Fund Research. HFR indices are freely available for download by the investing public on their website (you must register and create a login, but still…this level of detail for free is rather generous).

  • Term premium: Yield spread between 3-month and 10-year Treasuries (T10Y3M)
  • Credit Spread: Moody’s Baa corp. bond yield minus 10-year Treasury yield (BAA10Y)
  • 3-month T-Bill Rate (DGS3MO)
  • TED Spread: 3-Month LIBOR Minus 3-Month Treasury Yield (TEDRATE)
  • BofAML Corporate Bond Total Return Index (BAMLCC0A0CMTRIV)
  • 10-Year Inflation Expectations (T10YIE)
  • Russell 1000 Value
  • Russell 1000 Growth
  • Russell 2000
  • S&P 500
  • MSCI EAFE
  • Barclays Aggregate Bond Index
  • HFRI Equity Market Neutral
  • HFRI Fundamental Growth
  • HFRI Fundamental Value
  • HFRI Long-Short Directional
  • HFRI Quantitative Directional
  • HFRI Emerging Market China
  • HFRI Event Driven Directional
  • HFRI Event Driven – Total
  • HFRI Japan
  • HFRI Global Macro – Total
  • HFRI Relative Value – Total
  • HFRI Pan Europe

Part B: Model Estimation

Recall that for this case study we are “pretending” as though we only have returns data for Oracle from January 2017 through December 2021 (i.e., the sample period). In reality, we have data going back to October 2012. We will use the data in the sample period to calibrate the factor model and then compare the results from the simulation to the long-run risk and performance over the full period.

Model estimation has 2-steps:

  1. Estimate a Factor Model: Using the common “short” history of asset and factor returns, compute a factor model with intercept α, factor betas βj for j=1…k, and residuals ε.
  2. Estimate Error Density: Use the residuals (ε) from the factor model to fit a suitable density function from which to draw.

Estimation of the error density in Step 2 presents a challenge. One approach is to fit a parametric distribution to the data. This is a one-dimensional problem and, in theory, may be done reasonably well by using a fat-tailed or skewed distribution. However, such an approach has the potential to introduce error into the modeling process if the wrong distribution is selected. Moreover, in the case of a new manager with a short track record, there simply may not be enough data available to fit a suitable density.

To get around the need for a parametric distribution, we will instead use the empirical or “non-parametric” estimate of the probability distribution of errors obtained once we have estimated the calibrated model for the short “sample” period. This method enables us to reuse residuals that were actually observed, capture potential non-linearities in the data, and model behavior at least fairly far out into the tails.

Having charted a path forward for dealing with the error distribution, we can now proceed to estimating the factor model. I have proposed 24 candidate variables to explain the returns of Oracle, but I don’t know which ones offer the best fit. It would be bad statistics to run a 24-variable regression and see what happens; what we want is an elegant model that uses a subset of the proposed variables. A typical approach to this problem is to minimize an objective function such as the Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC). In general, it is often possible to improve the fit of a model by adding parameters, but doing so may result in overfitting. Both BIC and AIC attempt to resolve this problem by introducing a penalty term for the number of parameters in the model, with the BIC penalty being larger relative to AIC.

We’ll define the BIC as follows:

BIC = k·ln(T) - 2·ln(L)

Where:

  • k = # of model parameters including the intercept
  • T = sample size
  • L = the value of the likelihood function
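As a quick Python sketch (with made-up log-likelihood values), the BIC comparison works like this; note how the complexity penalty can favor a smaller model even when a larger one fits slightly better:

```python
import math

def bic(log_likelihood, k, T):
    # BIC = k * ln(T) - 2 * ln(L); lower is better
    return k * math.log(T) - 2.0 * log_likelihood

# Hypothetical comparison: a 9-parameter model fits a little better (higher
# log-likelihood), but the penalty still favors the 6-parameter model
small_model = bic(log_likelihood=-100.0, k=6, T=60)
large_model = bic(log_likelihood=-98.0, k=9, T=60)
print(small_model < large_model)  # True: the parsimonious model wins
```

With T = 60 monthly observations, each extra parameter costs ln(60) ≈ 4.09 points of BIC, so the larger model must improve the log-likelihood by more than ~2 points per added parameter to be preferred.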

To find the model that minimizes the BIC, I used the regsubsets function from the leaps package in R, which performs an exhaustive search over candidate models. I set the “nvmax” parameter (i.e., the maximum number of variables per model) to 15. The selection algorithm returned the following 5-factor model:

  • S&P 500 (i.e., the Market Risk Factor)
  • HFRI Equity Long Short Directional Index
  • HFRI Japan Index
  • HFRI Global Macro Index – Total
  • Term Premium

Below are the regression results. Standard errors have been adjusted for heteroscedasticity and autocorrelation.

I think it is worth noting how parsimonious the model is. We have distilled the sources of macro-financial risk for Oracle down to just 5 components. As previously stated, Oracle pursues a developed-markets directional equity long-short strategy, so we expect equity factors to feature prominently. Except for the term premium, all the other factors are equity-based. This is good! Had the model returned significant results for the Barclays Aggregate Bond Index or Emerging Markets China, then we might have cause for concern that the manager is not pursuing the strategy as advertised.

The Market risk factor (as proxied by the S&P 500) is highly statistically significant and of the sign we would expect, as Oracle has a long bias. HFRI Japan is also positive and significant, which speaks to the strategy’s global developed-markets approach. The HFRI Long-Short Directional Index is significant, as we would hope for a fund that markets itself as, you know, equity long-short! However, the sign is a little hard to interpret. I had anticipated this variable to be positive. One possible explanation for the negative coefficient is that most funds that comprise the index were on the other side of the trade over the sample period. It would be appropriate to describe Oracle’s strategy as tilted toward value (i.e., defensive stocks, low multiples, low beta). Over the last decade a popular hedge fund strategy has been long growth and short value, which was quite lucrative for some time but in 2022 has turned disastrous. If this is the case, then it makes sense for Oracle to have significant but negative exposure to this risk factor.

The adjusted-R2 of .5911 suggests that ~60% of the variability in Oracle’s return is explained by the model risk factors. The below plot illustrates the realized returns of Oracle and the fitted values from the calibration period January 2017 – December 2021:

Maximizing R2 is typically not a recommended practice which is why we used the BIC as our selection criterion. On balance, an R2 of ~.60 suggests a reasonable degree of explanatory power, but tighter would be better. This moderate fit could be because we have neglected to include important variables in our modeling process or Oracle has some “secret sauce” which is not captured by the variables. The other explanation is the potentially troublesome presence of outliers.

Influential Points and Outlier Detection

To identify outliers, we can use a combination of visual and statistical diagnostics. These methods include Cook’s Distance (Cook’s D), leverage plots, differential betas (DFBETAS), differential fits (DFFITS), and the COVRATIO (pronounced “cove” ratio). For conciseness (and because this isn’t a post about outlier detection) we’ll focus on just two measures: DFFITS and COVRATIO.

DFFITS is defined as the difference between the predicted value for a point when that point is left in during the regression estimation versus when that point is removed. DFFITS is usually presented as a studentized measure. I think the formula makes this definition more digestible:

DFFITSi = (yi − yi(i)) / (s(i)·√hii)

Where:

  • yi = prediction for point ‘i’ with point ‘i’ left in the regression
  • yi(i) = prediction for point ‘i’ with point ‘i’ removed from the regression
  • s(i) = standard error for the regression with point ‘i’ removed from the regression
  • hii = the leverage for point ‘i’ taken from the “hat” matrix

The developers of DFFITS have suggested using a critical value of 2√(p/n) to identify influential observations; where ‘p’ is the number of parameters and ‘n’ is the number of points. The calibrated model has 6 parameters (5 variables plus the intercept) and 60 points which gives us a critical value of .63.

We can plot the results of DFFITS to help visually identify influential data as seen in the below chart.

The plot shows that 5 points exceed the critical threshold of .63 with observations 38 and 39 standing out in particular. As you might have guessed, these two points correspond to February and March 2020, respectively.
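For intuition, here is a small, self-contained Python sketch of the DFFITS calculation using literal leave-one-out refits of a simple one-variable regression. The data is hypothetical, with a single gross outlier planted at the end:

```python
import math

def ols_fit(x, y):
    # Simple OLS with intercept: returns (intercept, slope)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

def dffits(x, y):
    # DFFITS_i = (yhat_i - yhat_i(i)) / (s(i) * sqrt(h_ii)), computed by
    # literal leave-one-out refits; transparent, and fine for small n
    n, p = len(x), 2                                 # p = parameters incl. intercept
    a, b = ols_fit(x, y)
    mx = sum(x) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    stats = []
    for i in range(n):
        xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        ai, bi = ols_fit(xs, ys)                     # fit with point i removed
        h_ii = 1.0 / n + (x[i] - mx) ** 2 / sxx      # leverage from the full fit
        sse = sum((yj - (ai + bi * xj)) ** 2 for xj, yj in zip(xs, ys))
        s_i = math.sqrt(sse / (n - 1 - p))           # s(i): n-1 points, p parameters
        yhat, yhat_loo = a + b * x[i], ai + bi * x[i]
        stats.append((yhat - yhat_loo) / (s_i * math.sqrt(h_ii)))
    return stats

# Hypothetical data: a clean linear trend plus one gross outlier at the end
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1, 6.0, 7.1, 20.0]
cutoff = 2 * math.sqrt(2 / len(x))                   # 2 * sqrt(p/n) = 1.0 here
flagged = [i for i, v in enumerate(dffits(x, y)) if abs(v) > cutoff]
print(flagged)  # only the planted outlier (index 7) exceeds the cutoff
```

Removing the outlier shrinks its own prediction dramatically while the leave-one-out standard error collapses, so its DFFITS value dwarfs the 2√(p/n) threshold while the clean points stay below it.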

We can use the COVRATIO to corroborate the results from DFFITS. The COVRATIO measures the ratio of the determinant of the covariance matrix with the ith observation deleted to the determinant of the covariance matrix with all observations.

Values for the COVRATIO near 1 indicate that the observation has little impact on the parameters estimates. An observation where |COVRATIO – 1| > 3p/n is indicative of an influential datapoint.

The results from the COVRATIO returned 10 potentially influential points; see plot. Taken together with the results from DFFITS we can comfortably conclude that our model is subject to the effects of outliers. If we are to build a model that will perform well out of sample, then it is critical to have robust parameter estimates that we are confident capture the return process of Oracle across different time periods. Enter robust regression.

Refining our Factor Model with Robust Regression

Having identified the outliers in our dataset, we can attempt to mitigate the impact these observations have on our parameter estimates by rerunning the factor model regression on reweighted data in a process referred to as robust regression.

Several weighting schemes have been proposed for dealing with influential data, but the two most popular are the Tukey biweight and Huber M estimators. Huber’s M estimator reweights the data based on the following calculation:

wi = 1 if |εi / si| ≤ c, and wi = c / |εi / si| otherwise, where c is a tuning constant (commonly 1.345)
Where:

  • εi = residual ‘i’
  • si = standard error for the regression with observation ‘i’ excluded.
  • wi = weight assigned to observation ‘i’

The basic intuition is that the larger the estimated residual, the smaller the weight assigned to that observation. This should consequently reduce the impact of outliers on the regression coefficients and stabilize the estimates such that we can apply them out of sample.
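The weighting rule can be sketched in a few lines of Python. The tuning constant c = 1.345 is the conventional default for Huber estimation, an assumption on my part since the post’s exact constant isn’t shown:

```python
def huber_weight(residual, scale, c=1.345):
    # Huber weighting: full weight while |residual/scale| <= c, then
    # downweight in proportion to 1 / |residual/scale|.
    # c = 1.345 gives ~95% efficiency under normal errors (standard default).
    u = residual / scale
    return 1.0 if abs(u) <= c else c / abs(u)

# Small residuals keep full weight; a gross outlier is sharply downweighted
print(huber_weight(0.5, scale=1.0))            # 1.0
print(round(huber_weight(5.0, scale=1.0), 3))  # 0.269 (= 1.345 / 5)
```

In a robust-regression routine these weights are recomputed and the weighted least-squares fit re-run until the coefficients converge (iteratively reweighted least squares).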

The below table gives the factor model using Huber estimation:

The results from the reestimated model are broadly similar to the initial OLS estimates. The coefficients for HFRI Japan and HFRI Macro exhibit the most notable change. The R2 has marginally improved from ~.59 to ~.62, but practically speaking this is the same degree of fit. Overall, OLS and Huber are communicating the same basic results.

As we press forward to the simulation results in the next section (finally!), we’ll adopt the Huber model as the factor model used to forecast risk and return for Oracle.

Part C: Simulation

The goal of this study is to accurately estimate the risk and performance profile of Oracle Fund using Monte Carlo simulation; all the better if we can do it efficiently. As noted in Part B, when it comes to Monte Carlo simulation there are two standard approaches: parametric and non-parametric. Parametric estimation requires that we fit a joint probability distribution to the history of factor returns from which to draw observations. This is a challenging problem as it requires us to estimate many (and quite likely, different) fat-tailed distributions and a suitable correlation structure. While this is theoretically achievable, it is best avoided in our case.

Instead, we can use the discrete empirical distributions from the common history of factor returns as a proxy for the true densities. We can then conduct bootstrap resampling on the factor densities, where a probability of 1/T is assigned to each of the observed factor returns for t=1…T. Similarly, we can bootstrap the empirical error distribution from the residuals we obtained from the estimated factor model in Part B. In this way we can reconstruct many alternative histories of factor returns and errors. Proceeding in this way should enable us to gaze relatively far into the tails of the risk/return profile for Oracle and better understand the overall exposure to sources of systematic and idiosyncratic risk.

For this simulation I conducted 1000 bootstrap resamples each of size 100. The steps were as follows:

  • For iteration i…1000 (i.e., column i), observation j…100 (i.e., row j) randomly sample a row of factor returns from the factor return matrix and error estimate from the empirical error distribution.
  • Plug these values into the factor model from Part B to form an estimate for Oracle’s return for the ith, jth period.

The resultant matrix of estimated returns for Oracle should appear as follows:
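The two-step sampling loop described above can be sketched in Python as follows. The alphas, betas, factor rows, and residuals below are toy values (not Oracle’s), and the function name is my own:

```python
import random

def fmmc_simulate(alpha, betas, factor_rows, residuals,
                  n_paths=1000, path_len=100, seed=42):
    # Factor Model Monte Carlo: bootstrap whole rows of the observed
    # factor-return matrix (preserving cross-factor correlation) and draw
    # errors from the empirical residual distribution, then push both
    # through the fitted model: r = alpha + sum_j beta_j * r_j + eps
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        path = []
        for _ in range(path_len):
            row = rng.choice(factor_rows)   # joint draw of one period's factors
            eps = rng.choice(residuals)     # draw from the empirical errors
            path.append(alpha + sum(b * f for b, f in zip(betas, row)) + eps)
        paths.append(path)
    return paths

# Toy example: 2 hypothetical factors and a 4-month observed history
factor_rows = [[0.010, 0.002], [-0.020, 0.001], [0.030, -0.003], [0.005, 0.000]]
residuals = [0.0010, -0.0020, 0.0015, -0.0005]
paths = fmmc_simulate(alpha=0.001, betas=[0.6, -0.3],
                      factor_rows=factor_rows, residuals=residuals,
                      n_paths=5, path_len=12)
print(len(paths), len(paths[0]))  # 5 12
```

Sampling entire rows of the factor matrix, rather than each factor independently, is what preserves the observed correlation structure across factors without having to estimate it parametrically.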

IV. Performance Analysis

Alright…after all of that hard work we have finally come to the fun part: performance! To recap our setting, recall that the full performance history for Oracle spans October 2012-December 2021. Thus far we have pretended as though we only have returns for January 2017-December 2021 (i.e., the sample period). We have used the common short history of factor returns to estimate a factor model to describe Oracle’s return process and obtained the residuals. We conducted bootstrap resampling of the short history (i.e., January 2017-December 2021) of factor returns and residuals to estimate 1000 different return paths each of size 100.

Now we’ll use these 1000 return paths to estimate risk and performance metrics and compare the results across 2 different time scales:

  • Full period: October 2012-December 2021
  • Truncated period: October 2012-December 2016

This should provide us with a fairly comprehensive basis for which to evaluate the factor model Monte Carlo (FMMC) methodology.

The table and figures below summarize the results for the FMMC, Full, and Truncated periods using four “risk” metrics and three standard performance measures.

Considering the risk measures first, the results for mean return and volatility are very encouraging: the FMMC estimates are exceptionally close to those for the Full and Truncated periods. The FMMC estimate for Value at Risk (VaR) is quite close to the Full Period VaR and slightly underestimates the Truncated Period, but overall is pretty accurate. For Expected Shortfall, FMMC undershoots the realized ES for both the Full and Truncated periods. Taken together, these results suggest that the factor model Monte Carlo method is able to describe the general risk-reward profile of Oracle quite well and provides a reasonable view into the tail behavior.

Regarding the performance metrics, the results are remarkably good. The FMMC estimates are very close to the realized results for the Sharpe, Sortino, and Calmar Ratios. Examining the distributions of each measure provides some additional insight. The distribution of the Sharpe ratio is symmetric and approximately bell-shaped, with nearly equivalent mean and median. The distribution of the Sortino Ratio is somewhat right-skewed with fat tails, and this behavior is even more pronounced for the Calmar Ratio. In both cases the median estimate is a bit to the left of the mean reported in the table. For the Sortino ratio, the median is closer to the realized values over the Full and Truncated periods, while for the Calmar ratio the median is further away. Thus, in practice it may be prudent to consider both the mean and the median to accommodate this observation.
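For readers who want to reproduce these metrics, here is a minimal Python sketch of historical VaR, Expected Shortfall, and an (unannualized) Sharpe ratio applied to a single hypothetical return path; in the study these would be computed on each of the 1000 simulated paths and then summarized:

```python
import math

def var_es(returns, level=0.95):
    # Historical (empirical) Value at Risk and Expected Shortfall at `level`,
    # both reported as positive loss figures. VaR is the tail cutoff return;
    # ES is the average return beyond that cutoff.
    tail = sorted(returns)[:max(1, int(len(returns) * (1 - level)))]
    return -tail[-1], -sum(tail) / len(tail)

def sharpe(returns, rf=0.0):
    # Sharpe ratio of a periodic return series vs. a constant risk-free rate
    n = len(returns)
    mean = sum(returns) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in returns) / (n - 1))
    return (mean - rf) / sd

# One hypothetical simulated return path
path = [0.02, 0.01, -0.03, 0.015, -0.01, 0.005, 0.0, 0.01, -0.02, 0.025]
var, es = var_es(path, level=0.90)
print(round(var, 4), round(es, 4))  # 0.03 0.03 (tail is a single point here)
print(round(sharpe(path), 4))
```

With only 10 observations the 10% tail contains a single point, so VaR and ES coincide; on the simulated paths of length 100 the two measures separate and ES probes deeper into the tail than VaR.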

V. Concluding Remarks

Hedge fund and private market investing presents a unique set of challenges for advisors and asset managers. Questions such as, “how can I confirm that what I am paying for is what I am getting?” or “how can I evaluate the risk of these investments when they may be priced infrequently or have a short return history?” require substantive answers.

In this post, I reviewed the Factor Model Monte Carlo (FMMC) framework as a means of assessing hedge fund investments. Along the way we reviewed how to appropriately select a factor model, discussed considerations for outliers and how simulation can aid us in our decision making. By applying this technique to the real-world example of Oracle Fund we were able to see how accurately FMMC is able to model the fund’s return process and provide us with actionable results.

Personally, I really like this approach because it is straightforward to understand, easy to implement and provides a wealth of information that would be either hidden or simply inaccessible to us otherwise.

I enjoyed writing this piece, so if you made it to the end, thanks for reading!

Until next time,

-Aric Lux.

The post Measuring Hedge Fund Performance with Factor Model Monte Carlo first appeared on Light Finance.


]]>
Trading the Inflation Theme https://lightfinance.blog/trading-the-inflation-theme/#utm_source=rss&utm_medium=rss&utm_campaign=trading-the-inflation-theme https://lightfinance.blog/trading-the-inflation-theme/#comments Sun, 28 Nov 2021 21:32:29 +0000 https://lightfinance.blog/?p=1199 Introduction While the holiday season has long been regarded as a time of excess, folks this year are bracing for another challenge besides annual waistline expansion: price inflation. As we gather with family and friends for the holidays in coming weeks many are predicting that this year’s turkey will be the most expensive in the […]

The post Trading the Inflation Theme first appeared on Light Finance.


]]>

Introduction

While the holiday season has long been regarded as a time of excess, folks this year are bracing for another challenge besides the annual waistline expansion: price inflation. As we gather with family and friends in the coming weeks, many are predicting that this year’s turkey will be the most expensive in the history of the holiday, with food prices having risen a remarkable 5.4% yoy.

It’s no secret that inflation has been on the rise in 2021 and with each passing day it becomes increasingly difficult to believe that it is merely “transitory”. In fact, there is little agreement amongst industry professionals as to whether inflation will remain a persistent feature in the years to come or if this period is merely an aberration in the long run deflationary trend that began in the 1980’s.

All of this has left investors unsure of how to invest in such an environment, and with little experience to draw upon.

In this article, I intend to outline the case for structurally higher inflation in coming years and how investors might consider reallocating their assets against a backdrop of rising prices and falling real rates.

The article will proceed as follows:

  • Review the history of inflation in recent decades and the case for higher inflation going forward
  • Assess the statistical evidence for how inflation impacts asset prices
  • Evaluate the performance of inflation and deflation baskets

The Secular Case for Higher Inflation

To better understand the course that inflation may take in the future, we must first consider the historical context that has led to the present state of prices and why those conditions may be due to change.

As the below chart demonstrates, inflation has been broadly receding since its peak in 1981. During the most recent decade, prices were quite stable, with inflation averaging about 2%. In fact, inflation routinely failed to reach the Fed’s stated 2% target, and the concern (as recently as 2019) seemed to be too little inflation rather than too much.

Factors responsible for the secular decline in inflation over the past 40 years include:

Central Banks

The Federal Reserve has stated that it believes an inflation target of 2% is broadly consistent with its dual mandate to maintain stable prices and maximum employment. The Fed formally adopted this target in 2012 in its inaugural “Statement on Longer-Run Goals and Monetary Policy Strategy”. Key to this policy framework was the adoption of a “symmetric” view of inflation and that the Committee would seek to mitigate “deviations” of inflation from its longer-run goal. By viewing deviations from target as critical, the Fed signaled that it would treat inflation both above and below 2% equally. This conservative approach sought to combat inflation immediately when it began to rise and is perhaps best evidenced by Powell’s infamous 2018 statement

Globalization and the Rise of China

Of all China’s exports, the most significant is arguably deflation. China’s accession to the WTO in 2001 caused a wave of offshoring globally. Rightly or wrongly, “The China Price” became a euphemism for the cost paid by onshore industries for China’s rapid ascent over the past two decades. Consumers benefited from lower retail prices, and corporations experienced vast margin expansion, but wages faced downward pressure as China’s vast population was able to undercut the price of labor. Workers’ share of the economy’s output has also declined, making prices less sensitive to the wage pressures that have historically been associated with inflationary periods.

Technology

The 2010’s were the decade of the disruptor. Technology is the great deflationary force and in the past ten years, tech, has invaded every industry. Cloud computing massively brought down the costs of data acquisition and storage. The iPhone sucked in entire stores worth of product (phones, radios, cameras, camcorders, alarm clocks, CD players, TVs…bank branches) into a device that can fit in your pocket. Software ate the world.

Demographics

Economics is downstream from demographics. Recent research has sought to quantify the link between age and inflation and found that large increases in the working age population have generally been associated with declines in inflation across developed economies when controlling for other macroeconomic factors. Critically, the development of inflation over the past several decades has tracked the age structure of the US remarkably. Employment-to-Population and Labor Force Participation both broadly increased starting in the 1960’s as boomers and women drove a significant change in the overall labor pool; increasing productivity but depressing real wages.

By their nature, secular forces are necessarily slow moving. Any policy decisions that have contributed to disinflation were made decades ago, and it is only now that we reap the consequences. With this in mind, we must consider what effect the decisions made today will have on the future course of inflation.

The secular drivers of higher future inflation are:

Central Banks

Beginning in 2019, the Fed conducted a review of its monetary policy and longer-term goals framework. As fate would have it, 2020 brought the most significant challenge to monetary policy since the Great Depression and the Fed reacted in epic fashion [Link to Ocean of Money] with purchases by the central bank totaling ~$160MM per hour in 2020 and 2021. The changes the Fed made to its policy statement are instructive. As opposed to taking a “symmetric” or deviation-based view of inflation, the Fed has stated that they will now target an “average rate” of inflation:

“In order to anchor longer-term inflation expectations at this level, the Committee seeks to achieve inflation that averages 2 percent over time, and therefore judges that, following periods when inflation has been running persistently below 2 percent, appropriate monetary policy will likely aim to achieve inflation moderately above 2 percent for some time.”

Moreover, the Fed’s comments concerning employment suggest the conduct of monetary policy will be substantially different going forward:

“The maximum level of employment is a broad-based and inclusive goal that is not directly measurable and changes over time… the Committee’s policy decisions must be informed by assessments of the shortfalls of employment from its maximum level… Committee considers a wide range of indicators in making these assessments.”

Beyond focusing narrowly on shortfalls of employment from the maximum level, the Fed will now act in such a way as to actively promote an inclusive economy whose benefits are more equally conferred upon its participants.

Taken together, these revisions suggest a Fed that is more flexible and less conservative in its approach to policy and, critically, a Fed that is more comfortable with inflation running hot.

Debt

The US federal budget deficit is expected to eclipse $2.3T in 2021, and debt-to-GDP is expected to reach 102%. Moreover, the annual deficit is expected to run between $1.2T and $1.6T from 2022-2031 and to exceed the 50-year average of 3.3% of GDP in each of those years. By 2031, the US debt-to-GDP ratio is expected to reach 113% (see here and here). Notably, these projections do not incorporate any additional stimulus from the recent $1.2T Infrastructure Investment and Jobs Act or the Biden administration’s $1.85T Build Back Better (BBB) Act. These are staggering figures to be sure, and to the degree that this spending raises the productive capacity of the US, it may yet prove quite successful. What is likely, however, is price distortion in the short run.

Inequality

Closely linked to debt is inequality. The 2010s witnessed a remarkable rise in wealth inequality, and COVID only accelerated this trend. The share of wealth controlled by the top 1% of the wealth distribution increased from ~30.8% at the beginning of 2020 to ~32.3% as of Q2 2021, all while the share of wealth controlled by the 50th-99th percentiles combined continued its secular decline.

The research covering the link between inequality and inflation is relatively scarce, but relevant studies suggest a positive correlation between the two in cross-country samples (see here and here). The logic is that politicians in countries with high inequality face incentives to choose high-inflation policies, either to favor redistribution or to inflate the asset prices of their wealthy constituents. While the motivation is unclear, the link is relatively persistent.

‘Inflation, Inequality and Social Conflict’ by Christopher Crowe, CEP Discussion Paper No. 657 (http://cep.lse.ac.uk/pubs/download/dp0657.pdf).

Part of the motivation behind the BBB Act is to promote more inclusive participation in the fruits of the economy, which dovetails nicely with the language adopted by the Fed.

Inequality and debt work in concert to explain how inflation may develop going forward. Piketty summarizes this mechanism in Capital in the Twenty-First Century (pp. 380-382). In brief, stoking inflation is a blunt but historically popular tool for reducing inequality, since it erodes the real value of debt held by investors and reduces the effective amount owed by borrowers. As debt-to-GDP climbs, the federal government has a growing interest in seeing the real value of its debt fall. This also has the ancillary benefit of increasing the nominal wealth of average citizens, whose primary store of value is their home, on which they often carry a mortgage.

Piketty repeatedly reminds us that inflation is an imprecise and imperfect tool, but one that has been used throughout history by central banks in the US, Great Britain, France, and Germany to escape the burden of public debt. Indeed, the Fed’s intention to let inflation run above target, coupled with legislation aimed at boosting social programs, is consistent with a desire to use such tools again.

Globalization

The pandemic has exposed the hidden risks and costs of maintaining global supply chains. Economic nationalism, which began with the Trump tariffs and continues under the Biden administration, demands the reshoring of critical industries (semiconductors being a chief example) and the reorientation of global value chains. This translates to higher prices across a range of industries and greater sensitivity to the cost of labor.

Demographics

One of the more surprising outcomes of the pandemic has been the reluctance of people to return to work. The Great Resignation, as it has been called, has resulted in a dramatic decline in the labor force and put stress on businesses throughout the country. Of those nearing retirement who were laid off during the pandemic, an estimated 2 million elected not to return to workaday life. The labor force participation rate has declined to levels not seen since the 1970s (the last period of significant inflation) and has been very slow to recover, which is suggestive of structural damage to the labor market. Moreover, research suggests that a large proportion of recent retirees in a country’s population is inflationary (again, see Juselius and Takats).

As we consider the case for structurally higher inflation in the coming years, it is important to recall one factor that is expected to work against this outcome: technology. Technology is deflationary by nature, and the pervasiveness of technology in daily life will remain considerable, if not accelerate. It may be the case that tech and automation come to dominate the forces mentioned above, driving productivity gains and keeping inflationary pressures at bay. This is important context as investors consider how to trade the inflation theme.

Inflationary Expectations

Having outlined the case for higher inflation in the years ahead it is time to consider the impact this will have on portfolios and investor asset allocations. In general, investors can expect:

  1. Low real yields: With nominal yields across fixed income at multi-century lows, even a modest amount of inflation will eat up annual coupons. Yields on 10-year TIPS (which proxy for real yields) currently hover around -1%, and inflation expectations are at the highest level observed since the series began in 2003. Investors are pricing in inflation, and even if these levels moderate, we can expect real yields to remain paltry.
  2. Low (negative) returns in stocks and bonds: With stocks at near-record valuations and bond durations just off all-time highs, the outlook for future returns is pretty bleak.
  3. Greater volatility in bonds and stocks: High valuations imply not just low future returns, but also more volatile returns, and high duration will greatly amplify the price impact of any adjustment in rates. Indeed, 2021 has been a difficult year for bonds, with long-dated Treasuries posting negative returns YTD through November.
  4. Real assets to outperform financial assets: Real estate and commodities are more levered to inflation than stocks and bonds and should be expected to outperform.
  5. Value and small caps to outperform growth: It’s been a while since we’ve seen value outperform growth. The dominance of growth over everything in the last decade (and the last few years in particular) can be ascribed, at least in part, to declining yields, which lower discount rates and prop up the valuations of “long duration” stocks (i.e., companies whose earnings accrue in the distant future). As these dynamics shift, investors should reallocate to value and small caps, which are more correlated with inflation and have historically been beneficiaries.

Trading Inflation

To support these bold claims and assess the likely impact inflation will have on asset prices, I’ve gathered data from 1990 through the present for 23 assets/styles intended to be broadly representative of the investable universe (see Appendix for data sources).

The analysis to follow examines rolling 6-month returns for each asset against the rolling 6-month change in the Personal Consumption Expenditures (PCE) price index, which will serve as our gauge of inflation. Because monthly changes in economic indicators can be noisy, the rolling 6-month window was selected to more broadly demonstrate how asset prices react to a general change in the price level.
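To make the methodology concrete, here is a minimal sketch of the rolling 6-month change and the correlation measure in plain Python. The series below are toy data for illustration only, not the series used in the analysis:

```python
import statistics

def rolling_6m_change(series):
    """Rolling 6-month change: value today vs. the value 6 observations ago."""
    return [(series[i] / series[i - 6]) - 1 for i in range(6, len(series))]

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy monthly levels for an asset price and the PCE index (illustrative only)
asset = [100, 101, 103, 102, 105, 107, 110, 111, 113, 112, 116, 118, 121]
pce = [100, 100.2, 100.5, 100.6, 101.0, 101.3, 101.8, 102.0, 102.4, 102.5, 103.1, 103.4, 103.9]

asset_6m = rolling_6m_change(asset)
pce_6m = rolling_6m_change(pce)
rho = correlation(asset_6m, pce_6m)
```

Each correlation reported below is this statistic computed over the full 1990-present overlap of the two 6-month-change series.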

The plot below depicts the correlation of each asset to the change in inflation:

As might be expected, commodities are highly correlated with inflation. Indeed, many inflationary episodes throughout history (the present one included) have been caused by commodity price shocks. Gold, often considered a key inflation hedge, exhibits a relatively low correlation, which is worth thinking about. For more on the drivers of gold returns see my post all about it!

Somewhat more surprising are the results for hedge funds, which rank as the second most correlated asset class. Trend-following on commodity futures is a popular quantitative hedge fund trading strategy, which may partially account for this apparent association. As a widely regarded store of wealth, real estate also ranks well, with REITs exhibiting a distinctly positive correlation on both a price and total return basis. Interestingly, while REITs appear highly correlated with inflation, US Residential Housing (i.e., US House) does not demonstrate a pronounced association and ranks low on the list.

Turning to stocks, the results are more nuanced. Broadly speaking, US stocks are correlated with inflation, but there is significant dispersion across styles. Specifically, Low PB and Low PE stocks show the greatest correlation. Price-to-book and price-to-earnings are the two most widely referenced measures of “value”, and these results together suggest that value stocks are significant inflation beneficiaries. High dividend stocks also rank well, which may indicate that in an environment not conducive to bonds/duration, investors turn to companies that consistently pay and grow dividends as an alternative to traditional fixed income. Growth and High PB stocks, on the other hand, rank poorly and bolster the thesis that inflation is a key factor for explaining the growth-value performance differential.

Finally, we consider fixed income. Long-dated Treasuries are the clear losers when inflation ticks up, with both the 10- and 30-year notes exhibiting a markedly negative correlation even when interest payments are accounted for. Short-dated Treasuries, such as the 2-year, appear unimpacted by inflation, which makes sense as they essentially substitute for cash. On the corporate side, US Investment Grade holds up okay and appears basically uncorrelated with inflation, while US High Yield is notably positive. Taken together, the corporate bond universe may serve as a suitable alternative habitat for fixed income investors.

Statistical Analysis

Moving beyond simple correlation analysis, the following table depicts the results of a regression of each asset’s return against the change in PCE. Test statistics and p-values were computed using heteroskedasticity- and autocorrelation-consistent (HAC) standard errors with 372 degrees of freedom. The legend describes how significance is color coded.

In general, the results from the correlation analysis carry over. Apart from long-dated Treasuries, the inflation betas are uniformly positive across our list of assets, suggesting that rising inflation is constructive for stocks while deflation is negative, which conforms with our a priori expectations.

For Commodities, Hedge Funds, and 30-Year Treasuries, inflation is highly statistically significant. For Commodities, approximately 70% of variance is explained by the change in PCE based on R-squared. For Hedge Funds, inflation explains about 12% of model variance, which again seems remarkably high. Meanwhile, for long-dated Treasuries, inflation explains only about 5% of overall variance, which suggests that while inflation is an important factor in determining the return on Treasuries, much is left unaccounted for.

Inflation is also highly significant for stock styles like Low PE, Low PB, High Dividend, and International, with a reasonable degree of explanatory power.

The below bar graph plots sorted inflation sensitivity as measured by Beta:

One might theorize that the impact of inflation on asset prices differs based on the level of inflation; that is, the impact is asymmetric. While mild inflation (around the Fed’s 2% target) may be constructive for risk assets, persistently high inflation may pressure firms’ margins and reduce earnings. Thus, a differential effect may exist that is not captured in the regressions we have looked at so far.

To investigate this claim we’ll now consider models of the following form:
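Reconstructed from the coefficient definitions below, the model is presumably:

```latex
r_{i,t} = B_1 + B_2\,D_t + B_3\,\Delta\mathrm{PCE}_t + B_4\left(D_t \times \Delta\mathrm{PCE}_t\right) + \varepsilon_{i,t},
\qquad D_t = \mathbf{1}\{\Delta\mathrm{PCE}_t > \operatorname{median}\,\Delta\mathrm{PCE}\}
```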

Where:

  • B1 = Intercept
  • B2,>Med = Differential Intercept for when PCE > Median PCE
  • B3 = Sensitivity of Asset ‘i’ to PCE
  • B4,>Med = Differential Beta when PCE > Median PCE

B2 and B4 represent the differential intercept and slope coefficients, respectively. This specification is convenient because it allows us to determine whether the differential impact (if one exists) is attributable to the intercept, the slope, or both. If B2 and B4 are significant, then we can infer that asset prices react differently to high inflation.

The below table presents the result for the differential regressions:

Now these are some interesting results! If we consider the intercepts, we observe that the coefficients for B_1 are not statistically significant. In our first round of regressions, the intercept was significant for commodities, Treasuries, hedge funds, and US corporate bonds; with the introduction of the differential intercept, that effect has entirely disappeared. Contrast that with the results for the differential intercept B_2, which is significant across a broad cross-section of stocks, real estate, and bonds. Except for 30-year Treasuries, the sign of B_2 is positive, which suggests that asset price returns are structurally higher when inflation runs above the median.

Examining the slope terms B_3 and B_4: after introducing the differential terms (B_2 and B_4), the coefficients for PCE (B_3) turn highly statistically significant across the board. In our prior regressions many of these same assets displayed significant results for PCE, but the coefficients were generally smaller in magnitude and of lower significance. This suggests that inflation is much more important for assets than we may have previously thought. Examining the results for the differential PCE term, B_4, we again see that the coefficients are largely significant for stocks, real estate, and bonds. Crucially (again, apart from Treasuries), the coefficients are negative. Whereas the impact of inflation as measured by PCE is significant and positive, the differential impact is negative and significant. This is indicative of a structural change in how asset prices react to inflation above the median and strongly supports the hypothesis we first proposed.

Let’s look at the fitted plots for some assets to solidify these results visually:

The “kink” visible in the plots represents the structural change in the regression that occurs when inflation is above the median. For Low PE stocks and Hedge Funds, you can see how “post-break” the regression trends downward, which suggests that while stocks still benefit from higher inflation, the expected return declines as inflation gets progressively hotter. For Growth stocks, the change is quite visible. It is sometimes thought that Growth excels in an environment of low and stable inflation, and these results lend that claim some support. In a relatively higher inflationary environment, Growth returns have historically broken lower and on average have been around 0; granted, there is substantial variability around the realized return, as the prediction interval suggests.

Performance: The Final Showdown

What has performed better over time: inflationary or deflationary assets? To whet the appetite, let’s take a look at the average return for our assets when inflation is above/below the median.

In the above plot, the teal bars depict the average return for assets when inflation is low (i.e., PCE below median) while the red bars show the average return when inflation is high (i.e., PCE above median). From a strict return perspective, when inflation is low Commodities, Gold, and International struggle while Growth, High PE/PB, and Treasuries do well. When inflation is high, assets generally seem to fare better, with REITs, Small Cap, and Value categories leading.

To compare the long run performance of inflationary and deflationary assets we’ll consider two portfolios:

  • Deflationary Portfolio (equal weighted)
    • 30-Year Treasuries
    • US Housing
    • US Growth
    • Low Beta
    • US Investment Grade
    • US High Yield
  • Inflationary Portfolio (equal weighted)
    • Commodities
    • US Value
    • Small Cap
    • REIT TR
    • International Developed
    • 2-Year Treasuries

The below plot charts the cumulative return of the Inflationary and Deflationary portfolios from February 1990 to August 2021. As the graph makes obvious, the Deflation portfolio has decisively beaten the Inflation portfolio over the past three decades.
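For readers who want to replicate the basket construction, an equal-weighted, rebalanced portfolio reduces to averaging the constituent returns each period and compounding the result. A minimal sketch (the function name is mine):

```python
def cumulative_growth(asset_return_series):
    """Growth of $1 in an equal-weighted, rebalanced portfolio.

    `asset_return_series` is a list of per-asset periodic return lists,
    all of the same length."""
    n_assets = len(asset_return_series)
    n_periods = len(asset_return_series[0])
    wealth = [1.0]
    for t in range(n_periods):
        # An equal-weighted portfolio return is the average of constituent returns
        port_ret = sum(series[t] for series in asset_return_series) / n_assets
        wealth.append(wealth[-1] * (1 + port_ret))
    return wealth

# Two toy assets over three periods (illustrative only)
growth = cumulative_growth([[0.02, -0.01, 0.03], [0.00, 0.01, -0.01]])
```

Feeding in the six monthly return series for each basket produces the cumulative return lines charted here.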

One might conjecture that the Deflation basket has outperformed because it is riskier. The below table tabulates the annualized return, volatility, and Sharpe Ratio for each portfolio. As it happens, not only has the Deflation portfolio produced higher returns, it has done so with about 60% of the risk of the Inflation portfolio. On a risk-adjusted basis, the Deflation portfolio has a Sharpe Ratio over twice that of the Inflation portfolio.

Has the Inflation basket really underperformed that significantly? It’s true that commodities performed very badly during the 2010s. Additionally, commodities are a difficult asset class to gain direct exposure to: most investors are not comfortable trading futures, and ETF-based products offer imperfect access. Let’s modify the Inflation basket slightly by removing commodities and redistributing the allocation amongst the other five assets. The following chart depicts the cumulative return of the Deflation and Modified Inflation baskets:

Removing commodities aids the absolute performance of the Inflation basket. Indeed, you can see the post-Tech Bubble period from ~2002 to 2007, when Inflation assets last enjoyed multi-year outperformance, as well as the surging performance over the last year. Let’s view the summary statistics:

Intriguingly, removing commodities makes performance worse in some respects. While the average annualized return has increased, so has volatility; the low correlation that commodities have with the other assets in the Inflation portfolio helped to bring down vol. Excluding commodities has historically left the Inflation portfolio with about twice the volatility of the Deflation portfolio and has brought the Sharpe Ratio down from 0.70 to 0.65. Even considering the slightly better absolute returns, the Inflation portfolio has markedly underperformed the Deflation portfolio, which is…disappointing.

Concluding Remarks

In this post, we have gone deep on inflation. I detailed the reasons why inflation has receded over the past several decades and laid out the case for higher inflation in the years ahead. We examined the empirical evidence for how inflation impacts major assets and how those impacts have translated to performance in the context of a portfolio. But a question remains: how best to trade the inflation theme?

It’s a good question. Even if you believe (like me) that inflation will be higher for a period of time, I don’t think it will be a permanent phenomenon. The Fed’s long-run target is still 2%, which is inconsistent with sustained, long-run outperformance of inflation-sensitive assets. So even if inflation runs hot for 1, 3, or 5 years, it is unlikely to do so for 10 or more. And as the evidence shows, Deflation assets have outperformed Inflation sensitives handsomely over time.

So how best to balance these competing views? My approach is to keep it simple. Rather than trying to time the market to capture all of the inflation beta, and risking missing out on the long-run outperformance of Growth, I think a tilt is more appropriate. Many investors have fully committed to Growth stocks in recent years, and positioning in Value, Small Caps, and Commodities remains light. Consider reintroducing these components into your portfolio to capture these secular themes, which are shaping up to be key market drivers in the years to come.

If you have made it this far, I hope you found this post informative. Until next time, thanks for reading!

-Aric Lux.

Appendix

  1. Gold (GOLDAMGBD228NLBM), Commodity (PPIACO), US Housing (CSUSHPISA), US Corp Investment Grade (BAMLCC0A0CMTRIV), and US Corp. High Yield (BAMLHYH0A0HYM2TRIV) price and total return indices were retrieved from the Federal Reserve Bank of St. Louis FRED database. Identifiers are in parentheses.
  2. High/Low Beta, High/Low PE, High/Low PB, High Momentum, and Small Cap returns were retrieved from Kenneth French’s Data Library.
  3. REIT Price and Total Return Indices were retrieved from the National Association of Real Estate Investment Trusts online data portal.
  4. US Value, US Growth and International price indices were retrieved from the MSCI online data portal.
  5. Hedge Fund total return indices were retrieved from the HFRI online data portal.
  6. Treasury yield data was obtained from the FRED database. Return indices were derived in R using the ‘treasuryTR’ package, developed and maintained by Martin Geissmann and available on CRAN.

The post Trading the Inflation Theme first appeared on Light Finance.

Who Needs Dividends and Interest? https://lightfinance.blog/who-needs-dividends-and-interest/#utm_source=rss&utm_medium=rss&utm_campaign=who-needs-dividends-and-interest Sat, 24 Jul 2021 20:30:13 +0000 https://lightfinance.blog/?p=1099 In another world, seemingly long ago (i.e., January of 1990) the yield on the 10-Year Treasury was 8.20% while the dividend yield on the S&P 500 – which varies with market valuations – stood at 3.28%. In this world, a retiree with a $1,000,000 nest egg could have invested their portfolio is 10-Year Treasury bonds […]

The post Who Needs Dividends and Interest? first appeared on Light Finance.


In another world, seemingly long ago (i.e., January 1990), the yield on the 10-Year Treasury was 8.20% while the dividend yield on the S&P 500 – which varies with market valuations – stood at 3.28%. In this world, a retiree with a $1,000,000 nest egg could have invested their portfolio in 10-Year Treasury bonds – a technically risk-free asset – and lived comfortably off interest income of $82,000 per year. An investor more comfortable with risk who chose to invest in stocks could have expected income of $32,800 per year, but would principally have relied on capital appreciation to meet their retirement goals, which necessarily involves more risk.

As the above illustration highlights, cash flow from low-risk investments was previously sufficient to support investors’ lifestyles. Increasingly, however, investors have had to rely on price appreciation to finance their needs. As of June 2021, the 10-Year Treasury yield was 1.52% while the S&P 500 dividend yield was 1.37%. As the below plot demonstrates, it is increasingly common for the S&P 500 dividend yield to exceed Treasuries. Interest and dividends at such miserly levels are unlikely to address investors’ income needs. This has left many wondering how they can increase cash flow, or what other options are available to them.

In this post, we’ll examine the issue of investor cash flow and whether it’s the appropriate paradigm for assessing your investments.

Income or Cash Flow?

There is a conceptual difference between income from a portfolio and cash flow. Income refers to cash payments received from the portfolio (i.e., interest and dividends). Cash flow refers to the capacity of your portfolio to provide you with a consistent paycheck, which may be derived partially from interest or dividends but may also come from principal.

This distinction is subtle, but important. As an investor, it’s generally more appropriate to focus on the total return of a portfolio and the sustainable withdrawal rate rather than take a myopic view of yield.

Understanding Interest and Dividends

Dividends from Stock

A common misunderstanding that I encounter with respect to dividends is that investors often view them as “free money”. Investing in dividend stocks won’t necessarily create excess shareholder wealth over time, as a stock’s price usually declines on the day that it begins to trade ex-dividend (i.e., without its dividend). The reason for this is intuitive: a company has a certain amount of cash on its balance sheet which is reflected in the value of its equity; once that cash is paid out, the equity necessarily has to decline to offset the change in assets. The below chart depicts this visually. It’s important that your dividend strategy be analyzed from a total return perspective, accounting for both income and capital appreciation.
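A toy calculation makes the ex-dividend mechanics concrete: the price drop exactly offsets the cash received, so the payment itself leaves total wealth unchanged (numbers are illustrative):

```python
# Toy illustration: the ex-dividend price drop offsets the cash paid out.
shares = 100
price_cum = 50.00  # price the day before the ex-dividend date
dividend = 1.00    # per-share cash dividend

price_ex = price_cum - dividend  # equity falls by the cash paid out
wealth_before = shares * price_cum
wealth_after = shares * price_ex + shares * dividend  # stock value + cash received
```

In practice the observed drop also reflects ordinary market movement and taxes, but the total-return point stands: the dividend is a transfer from share value to cash, not new wealth.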

Interest from Bonds

You encounter a similar issue with bonds. While bonds generally tend to be lower risk than stocks, in many ways they are also more complex and investors generally have less direct experience with them.

In order to understand bonds properly, I’m going to have to introduce a little jargon:

  • Par Value: Also called face value, this is the value of the bond at maturity. It is the final payment that a bondholder will receive and serves as the basis for calculating certain bond characteristics.
  • Term: Refers to the length of time the bond will make payments. The bond reaches maturity when the term expires.
  • Coupon: The coupon is the periodic payment that the bond promises to make and is typically quoted as a rate. Coupon payments are usually made on a semi-annual or annual basis.
  • Yield to Maturity (YTM): Yield to maturity is a difficult concept at first. The yield to maturity (expressed as a rate) represents the return an investor would expect to receive from a bond if they bought it today at prevailing market rates and reinvested the interest payments at today’s prevailing rate.
  • Discount/Premium: A discount/premium refers to a bond that is trading below/above its par value. A bond will generally trade at a discount when its coupon rate is less than the YTM. This makes sense: the periodic payment you receive from the coupon is less than the market rate, hence you are not willing to pay as much for the bond. Similarly, for bonds that trade at a premium, the coupon rate is greater than the market rate; to get the higher periodic payment, you have to pay more up front.

Confused yet? Let’s try to make these concepts more concrete with a few examples.

Example 1: Bond Trading at a Premium

Consider a bond with the following characteristics:

  • Term: 10 year
  • Coupon: 4% paid annually
  • Par Value: $1000

Suppose the prevailing market interest rate is 2%. We can use this information to calculate the price of the bond using Excel as follows:

The coupon rate for this bond is greater than the yield to maturity, hence the bond trades at a premium to its par value.

Example 2: Bond Trading at a Discount

Consider a bond with the same basic characteristics, but now suppose the prevailing market interest rate is 6%. What’s the bond’s price given these conditions?

The bond’s 4% coupon is less than the YTM, hence the bond trades at a discount to its par value.
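Both examples can be reproduced with a short present-value calculation rather than Excel. A minimal sketch (the function name is mine):

```python
def bond_price(par, coupon_rate, ytm, years):
    """Price = present value of the annual coupons plus par, discounted at the YTM."""
    coupon = par * coupon_rate
    annuity_factor = (1 - (1 + ytm) ** -years) / ytm  # PV factor for the coupon stream
    return coupon * annuity_factor + par / (1 + ytm) ** years

premium = bond_price(1000, 0.04, 0.02, 10)   # Example 1: ~$1,179.65
discount = bond_price(1000, 0.04, 0.06, 10)  # Example 2: ~$852.80
```

This matches the premium price of $1,179.65 referenced below; the same bond at a 6% market rate prices at roughly $852.80.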

So what is the point of the above discussion? The intention is to highlight that when investing in bonds, it’s important to consider more than just income. The bond will pay out the coupon rate on a regular, defined schedule, but there is significantly more to the story.

Consider the bond trading at a premium (Example 1). If you purchased this bond, then you could expect to receive $40 annually in income. Certainly you could spend the entire $40, but remember that for the privilege you paid $1,179.65 up front, and at maturity you will only get back $1,000 in par value. Take a moment to consider what this means: you paid $1,179.65 for the bond, you spend the $40 annual coupon payment, and you get $1,000 back at maturity. Now you only have $1,000 left with which to purchase a new bond; the $179.65 in premium you paid is gone. Because the coupon was greater than the YTM, the premium you initially paid went toward financing the higher payout. You are effectively spending a little principal when you spend the full coupon.

As the below table illustrates, if your objective is to preserve your capital, then you can only spend ~$22.03 of each coupon payment. The other ~$17.97 needs to be saved. Otherwise, you run the risk of eroding your capital over time!
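The spendable amount comes from amortizing the $179.65 premium over the bond’s 10-year term. Here is a sketch using straight-line amortization, which reproduces the ~$22.03 figure (a simplification; in practice the effective-interest method is typically used, and the function name is mine):

```python
def spendable_coupon(price, par, coupon, years):
    """Portion of each coupon you can spend while preserving capital,
    amortizing the purchase premium straight-line over the term."""
    premium = price - par
    set_aside = premium / years  # saved each year to replace the premium at maturity
    return coupon - set_aside

spend = spendable_coupon(1179.65, 1000, 40, 10)  # ~$22.03 spendable, ~$17.97 saved
```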

Reaching for Yield or Reaching for Risk?

Trying to satisfy income needs exclusively with interest and dividends in the face of declining yields across equity and fixed income has induced investors to move out on the risk curve. The following two charts show the results. The first graph plots the yield for different sectors of the fixed income market; notice in particular that yields across sectors and ratings are at or near all-time lows.

Another way to view the yield compression of recent years is to look at the spread to Treasuries. The picture here is perhaps somewhat better: yield spreads are not markedly different from what we have observed over the past two decades, but this also suggests there is limited upside to be gained from investing in these sectors.

It’s not just that absolute yields are lower. Reaching for yield can inadvertently lead to a suboptimal portfolio allocation and lower risk-adjusted returns. Investors may not realize that they have drifted from their long-term allocation, which can have the effect of compromising long-term goals for marginally higher current income.

Mental Accounting or Sustainable Withdrawal Rate?

Most investors seem to be more comfortable with the idea of harvesting cash from interest and dividends as they think of those payments—the income—as separate from the investment holdings that generate them—the principal. Assigning different values or preferences, based on subjective criteria, to the same amount of money without considering how it translates to achieving a goal or objective is a bias known as Mental Accounting.

Nobel Laureate Richard Thaler developed the concept of mental accounting to describe phenomena involving irrational decision making and investment behavior. At its core, the theory postulates that all money is fungible. That is to say, regardless of origin, all money is the same. While this may seem intuitive on its face, our mental bias fosters a sense that the earning power of the investments remains unchanged when we take income from our portfolio. The reality is that every dollar withdrawn from a portfolio reduces earning power to exactly the same extent. The source of the withdrawals does not usually matter. Whether a portfolio can sustain any particular withdrawal level depends on the long-term, total return expectation and not whether that return is derived primarily from interest, dividends or capital appreciation.

Do You Need Interest and Dividends?

Short answer: it depends. Interest and dividends are only one component of total return, and while they provide some clues as to the viability of an investment, they do not tell the full story. The correct framework for evaluating an investment is to consider the total return, the attendant risk and how both fit together within the context of your financial goals.

Hopefully this post has provided you with a new way to look at the cash flow from your portfolio.

Until next time, thanks for reading!

-Aric Lux.

The post Who Needs Dividends and Interest? first appeared on Light Finance.

Beta and Sharpe Ratio of S&P 500 Stocks (July 2021) https://lightfinance.blog/beta-and-sharpe-ratio-of-sp-500-stocks-july-2021/#utm_source=rss&utm_medium=rss&utm_campaign=beta-and-sharpe-ratio-of-sp-500-stocks-july-2021 Fri, 16 Jul 2021 03:10:35 +0000 https://lightfinance.blog/?p=1084 The sorted Beta and Sharpe Ratio for all companies listed in the S&P 500 as of July 2021 are now available. Beta and Sharpe are calculated using 3 years of bi-weekly returns. To learn more about Beta and the Sharpe Ratio check out my post about measuring risk and return! The 5 companies with the highest Beta are as […]

The post Beta and Sharpe Ratio of S&P 500 Stocks (July 2021) first appeared on Light Finance.


The sorted Beta and Sharpe Ratio for all companies listed in the S&P 500 as of July 2021 are now available. Beta and Sharpe are calculated using 3 years of bi-weekly returns. To learn more about Beta and the Sharpe Ratio check out my post about measuring risk and return!
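The post does not show the calculation code, but a minimal sketch of how Beta and Sharpe might be computed from bi-weekly returns could look like the following (numpy only; the 2% risk-free rate and the synthetic data are assumptions for illustration, not the values used for the actual tables):

```python
import numpy as np

PERIODS_PER_YEAR = 26   # bi-weekly observations

def beta(stock_returns, market_returns):
    """Slope of stock on market returns: cov(s, m) / var(m)."""
    s, m = np.asarray(stock_returns), np.asarray(market_returns)
    return np.cov(s, m, ddof=1)[0, 1] / np.var(m, ddof=1)

def sharpe(stock_returns, rf_annual=0.02):
    """Annualized Sharpe ratio from per-period returns."""
    s = np.asarray(stock_returns)
    excess = s - rf_annual / PERIODS_PER_YEAR   # simple de-annualization
    return np.sqrt(PERIODS_PER_YEAR) * excess.mean() / excess.std(ddof=1)

# Synthetic example: a stock that moves 1.5x the market, plus noise
rng = np.random.default_rng(0)
mkt = rng.normal(0.004, 0.02, size=78)          # ~3 years of bi-weekly returns
stk = 1.5 * mkt + rng.normal(0, 0.01, size=78)
print(round(beta(stk, mkt), 2))                  # recovers a value near 1.5
```

With real data, the two return series would be the stock's and the S&P 500's bi-weekly returns over the same 3-year window.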

The 5 companies with the highest Beta are as follows:

  • Norwegian Cruise Line Holdings Ltd. (NCLH)
  • Royal Caribbean Group (RCL)
  • Apache Corporation (APA)
  • Caesars Entertainment Inc (CZR)
  • Penn National Gaming, Inc (PENN)

The list of high Beta stocks this month remains unchanged from the last update posted in May. Norwegian, Royal Caribbean, and Apache continue to lag the market badly. Despite the broad-based reopening and resumption of travel, the cruise lines are still struggling to gain traction and return to pre-pandemic form, as evidenced by the performance of their stocks. Gaming stocks Caesars and Penn National have seen their share prices slide in recent months as the reopening trade has been called into doubt. On balance, the gaming stocks have still performed well over the past year, but recent price momentum has been fading.

The 5 companies with the lowest Beta are as follows:

  • Kroger (KR)
  • The Clorox Company (CLX)
  • Hormel Food Corporation (HRL)
  • Cabot Oil & Gas (COG)
  • The J.M. Smucker Company (SJM)

The constituents of the low Beta list remain the same as they have for months (which makes me wonder if this will ever really change). The same themes are also evident. Clorox seems to be trying to reverse its slide of the past 9 months while Kroger continues to slowly grind higher. Hormel remains range-bound while Cabot continues to languish. While not the subject of this post, it’s interesting to consider the bifurcation of performance within the oil and gas industry. As we have seen, APA and COG have continued to underperform while stocks like Exxon and Chevron have rebounded more strongly as the price of oil has risen back to $70/bbl.

The 5 companies with the highest Sharpe Ratio are as follows:

  • Procter &amp; Gamble (PG)
  • Microsoft Corp (MSFT)
  • Enphase Energy (ENPH)
  • West Pharmaceutical Services, Inc. (WST)
  • Generac Holdings Inc. (GNRC)

The plot for the High Sharpe Ratio stocks demonstrates some interesting developments. Notably, we see the resurgence of performance in growth names like Enphase Energy and Generac as the reopening trade has witnessed a fierce reversal. In May I speculated that Enphase might drop out of the list on the assumption that high inflation prints would continue to cause yields to rise…it appears the exact opposite is playing out. Generac announced this month that it would be acquiring solar inverter manufacturer Chilicon for an undisclosed amount in a bid to increase its presence in solar equipment; a move viewed favorably by the market. Also this month, we observe that Eli Lilly & Co has dropped out of the list and been replaced by blue-chip bellwether Procter & Gamble.

The 5 companies with the lowest Sharpe Ratio are:

  • Viatris Inc. (VTRS)
  • Perrigo Company plc (PRGO)
  • Schlumberger Limited (SLB)
  • DXC Technology (DXC)
  • NOV Inc. (NOV)

Like the Low Beta stocks, the list of Low Sharpe Ratio stocks remains unchanged since May. The list continues to demonstrate uniform underperformance and, in some cases, further deterioration over the past 2 months.

To download the updated Betas and Sharpe Ratios for the S&P 500 companies, click the buttons below!

Thanks for reading!

-Aric Lux.

The post Beta and Sharpe Ratio of S&P 500 Stocks (July 2021) first appeared on Light Finance.

Everything About Faber: A Critical Look at Market Timing https://lightfinance.blog/everything-about-faber-a-critical-look-at-market-timing/#utm_source=rss&utm_medium=rss&utm_campaign=everything-about-faber-a-critical-look-at-market-timing https://lightfinance.blog/everything-about-faber-a-critical-look-at-market-timing/#comments Thu, 15 Jul 2021 03:40:53 +0000 https://lightfinance.blog/?p=1059 In 2006, Meb Faber wrote a highly influential paper on tactical asset allocation and market timing. The strategy was particularly attractive in part because of its simplicity: Buy when monthly price > 10-month SMA Sell and move to cash when monthly price < 10-month SMA By applying this simple, mechanical strategy to the S&P 500 […]

The post Everything About Faber: A Critical Look at Market Timing first appeared on Light Finance.


In 2006, Meb Faber wrote a highly influential paper on tactical asset allocation and market timing. The strategy was particularly attractive in part because of its simplicity:

  • Buy when monthly price > 10-month SMA
  • Sell and move to cash when monthly price < 10-month SMA

By applying this simple, mechanical strategy to the S&P 500 going back to 1900, Faber concluded that market timing can be used to enhance the absolute and risk-adjusted returns of a portfolio. The strategy would have deftly avoided the ruinous drawdowns of the Great Depression, Tech Bubble and Financial Crisis which lends substantial credibility to the claim.

However, there are reasons to be skeptical. Market timing is prone to generating false signals which can result in substantial opportunity costs as investors move to cash in what is otherwise a secular bull market. Moreover, given how well the strategy has been publicized there is reason to suspect that investors are now keen to the idea and such inefficiencies will be difficult to exploit going forward.

In this post, I will be taking a critical eye to the Faber strategy with the objective of determining how well it has performed since the initial study was published and what its prospects are going forward. Given the near-record valuations that investors face today, understanding how this strategy can help investors play defense is particularly salient.

Strategy and Set Up

For this study I’ll be using data for the S&P 500 Total Return Index (i.e., assumes dividends are reinvested) from January 1988 through May 2021. The S&P TR was selected to align with Faber’s original paper but is arguably more realistic than using the price return series given the strategy’s long-term focus. Faber’s original specification was based on monthly price data and a 10-month simple moving average (SMA). I’ll be employing a slightly different expression of the strategy based on weekly price data and a 44-week simple moving average which allows for slightly more granularity.

My specification uses the same simple logic described by Faber:

  • Buy when the weekly price is greater than the 44-week SMA
  • Sell and stay in cash when the weekly price is less than the 44-week SMA

The initial strategy equity is set at $100,000 and is fully invested/divested when the appropriate signal is observed. Buy and sell orders are assumed to be executed at the Close on the day of the signal. While this is an obvious simplification, given that the strategy is low turnover and relies on weekly data the overall impact is likely to be small.

Strategy design and backtesting were conducted using the quantstrat family of R packages.
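The backtests here were run in R with quantstrat; purely as an illustration of the same buy/sell logic, a minimal pandas sketch on synthetic weekly closes might look like this (the function name and data are hypothetical, not the code actually used):

```python
import numpy as np
import pandas as pd

def faber_equity(weekly_close: pd.Series, sma_weeks: int = 44,
                 initial_equity: float = 100_000.0) -> pd.Series:
    """Equity curve for the long/cash SMA rule with fills at the signal close."""
    sma = weekly_close.rolling(sma_weeks).mean()
    # The signal observed at week t earns week t+1's return: equivalent
    # to filling the order at the close of the week the signal appears.
    position = (weekly_close > sma).astype(float).shift(1).fillna(0.0)
    strat_ret = position * weekly_close.pct_change().fillna(0.0)
    return initial_equity * (1.0 + strat_ret).cumprod()

# Synthetic weekly closes, for illustration only
rng = np.random.default_rng(1)
px = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.001, 0.02, 800))))
equity = faber_equity(px)
print(f"final equity: {equity.iloc[-1]:,.0f}")
```

Real-world use would substitute actual S&P 500 Total Return weekly closes for the synthetic series.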

Initial Backtest

Below is a plot of the S&P 500 Total Return Index and 44-week SMA. Several features of the plot stand out, namely the peaks in September of 2000 and August of 2007. This is possibly the single best example of the seduction of market timing: with just a little foresight you could have side stepped two of the worst drawdowns in investor memory.

However, recent experience is more cautionary. Prominent sell signals were also generated in December of 2018 and, of course, in March 2020 when COVID hit, and here the evidence is more mixed. Given that both drawdowns were sharp and short-lived, one may wonder whether timing out in this way was the correct approach. With the benefit of hindsight, we know that weathering COVID has been substantially rewarded over the past 15 months.

With these stylized observations in mind, let us present the results of the first backtest. The below series of plots provides a good snapshot of the strategy’s performance. The top plot charts the S&P 500 Total Return Index with indicators for when trades took place. The Green triangles indicate Buy orders while the Red triangles indicate, unsurprisingly, Sell orders. The blue bars in the middle chart depict the number of shares held during the period. The absence of a blue bar implies that the strategy was out of the market and sitting in idle cash. The final two charts are the equity curve and drawdown plot, respectively.

The position plots show that for ~80% of the strategy’s life it was invested in the market. The longest periods that the strategy spent out of the market occurred during the fallout of the Tech Bubble and during the Financial Crisis, as we have observed. Let’s examine the trade statistics:

quantstrat provides us with a plethora of statistics for the Faber strategy. The Net Trading P&L is $1,578,172 and the Max Drawdown is -$237,575. Let us compare these statistics to the performance of the index. Note that the first trade for the Faber strategy takes place on February 2nd, 1990; to make the comparison “apples-to-apples,” I have computed the index’s performance beginning on this date to simulate the difference between trading Faber versus holding the index.

The below figure plots the cumulative return for the S&P and strategy, respectively. The first observation that stands out is that the Faber strategy underperforms the S&P Total Return index over the analysis period. The strategy deftly exits the market in 2000, 2007 and, briefly, in 2020 as desired, but this is not enough to compensate for the false signals generated along the way. Specifically, a prominent false signal was generated in 2016 when markets stalled on fears of a “China slowdown”.

The key selling point of the Faber strategy is that it guards against large drawdowns, as depicted in the below plot. In general, we can see that the Faber strategy works as advertised. Again, in 2000, 2007 & 2020 the strategy’s drawdown is significantly lower than that of the index. However, we do observe the effects of the false signal generated in 2016, when the strategy’s drawdown was about twice that of the index.

Finally, let us examine the risk-adjusted return statistics. In general, they are quite encouraging and what we would hope to see from a timing strategy. The annualized return is lower for the Faber strategy than for the index, which is indicative of the long-term underperformance we observed, but crucially this return is achieved while taking significantly less risk. On a pure risk basis (as measured by standard deviation) the strategy takes ~29% less risk than the index. The Sharpe for Faber is ~.79 compared to the index’s ~.68; an approximately 14% improvement. The Calmar Ratio (annual return/max drawdown) is over double that of the index and is probably the most salient feature of the strategy.
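For reference, statistics like these can be derived from a weekly equity curve along the following lines (a sketch with made-up numbers; annualization conventions and the risk-free assumption vary by practitioner):

```python
import numpy as np

WEEKS_PER_YEAR = 52

def risk_stats(equity, rf_annual=0.0):
    """Annualized return, volatility, Sharpe, max drawdown and Calmar."""
    eq = np.asarray(equity, dtype=float)
    rets = eq[1:] / eq[:-1] - 1.0
    years = len(rets) / WEEKS_PER_YEAR
    ann_ret = (eq[-1] / eq[0]) ** (1.0 / years) - 1.0
    ann_vol = rets.std(ddof=1) * np.sqrt(WEEKS_PER_YEAR)
    # Max drawdown: worst peak-to-trough decline of the equity curve
    running_peak = np.maximum.accumulate(eq)
    max_dd = ((eq - running_peak) / running_peak).min()
    return {"ann_ret": ann_ret, "ann_vol": ann_vol,
            "sharpe": (ann_ret - rf_annual) / ann_vol,
            "max_dd": max_dd, "calmar": ann_ret / abs(max_dd)}

# Tiny example: two years of steady weekly growth with one 10% setback
eq = [100.0 * 1.002 ** t for t in range(104)]
eq[52:] = [v * 0.9 for v in eq[52:]]
stats = risk_stats(eq)
print(round(stats["max_dd"], 3))   # close to -0.10
```

The Calmar figure quoted above is exactly this annual-return-to-max-drawdown ratio.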

Parameter Optimization and Feature Analysis

We will now transition to parameter optimization and feature analysis.

The objectives of parameter optimization are twofold:

  1. Determine if the selected parameter is stable.
  2. Determine if another value of the parameter can improve results.

Stability is synonymous with consistency. A minor adjustment to the value of a parameter should not lead to significantly different performance. If it does, then we should treat any conclusions with caution. It could be that we just happened to select a parameter that worked well in-sample and the same value for the parameter may not work well when traded out-of-sample. Moreover, if another parameter works particularly well and also appears stable, then this would be a good thing to know, and we should investigate it further.

There are many parameters that we could optimize for (and perhaps we will in the future) based on the indicator, signal, or rule. We could imagine optimizing across order types by adding stop and trailing stop orders to the strategy or adjusting the trade size based on signal “strength”. For this study I will focus on optimizing the indicator. The Faber strategy is simple in the sense that it relies on a single indicator (the simple moving average) which has a single parameter (the length of the SMA).

Our initial backtest was conducted using a SMA length of 44 weeks. In the analysis to follow, I varied the length of the SMA from 20 to 52 weeks (stepping by 2 each time). Let’s check out the results!
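Mechanically, the sweep is just a loop over SMA lengths, rerunning the backtest and recording statistics for each. A toy sketch (synthetic data and a simplified long/cash backtest, not the quantstrat code actually used):

```python
import numpy as np
import pandas as pd

def backtest_pnl(close: pd.Series, sma_weeks: int):
    """Net P&L (on $100k) and trade count for the long/cash SMA rule."""
    sma = close.rolling(sma_weeks).mean()
    pos = (close > sma).astype(float).shift(1).fillna(0.0)
    equity = 100_000.0 * (1.0 + pos * close.pct_change().fillna(0.0)).cumprod()
    n_trades = int(pos.diff().abs().sum())   # each entry and exit counts as one
    return equity.iloc[-1] - 100_000.0, n_trades

# Synthetic weekly closes, for illustration only
rng = np.random.default_rng(7)
px = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.001, 0.02, 800))))

for n in range(20, 54, 2):   # SMA lengths 20, 22, ..., 52 weeks
    pnl, trades = backtest_pnl(px, n)
    print(f"SMA {n:2d}: P&L {pnl:12,.0f}  trades {trades:3d}")
```

Recording standard deviation, drawdown and Sharpe per run, rather than just P&L and trade count, yields the full set of plots discussed here.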

The first graph presents the number of trades v. SMA length. We can see that the number of trades placed by the strategy uniformly decreases with length. This is unsurprising as the longer the SMA the closer the strategy approximates simple Buy and Hold.

Moving on to something a little more interesting, the second plot shows strategy P&L v. SMA length. This plot suggests the existence of two stable regions: from 26-32 and from 42-50. The extreme ends of the range are instructive. An SMA length of 20 is the worst-performing parameter, which suggests that the strategy has difficulty separating signal from noise over short intervals and consequently overtrades the account. Likewise, an SMA length of 52 is quite long and starts to approximate a Buy-and-Hold strategy; hence the P&L is higher, which is consistent with the results discussed previously.

Our original choice of 44 for the SMA lies within a stable region which suggests our initial guess was rather good. However, the plot suggests that the strategy would have performed better if we traded more aggressively using a shorter SMA length.

The next two plots bring risk into consideration: plotting the standard deviation of the P&L and Sharpe Ratio, respectively. Standard deviation appears reasonably stable from 26-36 and from 40-46; similar ranges to the P&L plot. Notably the shorter SMA length region (i.e., 26-36) demonstrates lower risk than the longer region (i.e., 40-46). Meanwhile, the two regions appear to produce roughly similar Sharpe Ratios.

To gain some additional insight into the risk of the strategy we can consider drawdown which, as it turns out, is highly instructive. As can be seen in the below plot, drawdown appears to have stable regions from 26-32 and from 42-50 (in line with the P&L plot), but one is significantly more attractive than the other. Trading the strategy more aggressively with a shorter SMA results in more severe drawdowns than using a slower SMA. In fact, the drawdown of the “short SMA” region (i.e., 26-32) is almost as bad as that of the 52-week SMA, which I have postulated is close to Buy-and-Hold; that is not what we want to see from a timing strategy.

We have now established that the short SMA is potentially more profitable than the long SMA, but also results in larger drawdowns. To square these opposing conclusions, we can turn to the Profit to Drawdown Ratio (a variation of the Calmar Ratio discussed previously). The Profit to Drawdown plot is shown below. Interestingly, the two effects essentially cancel out, with the Profit to Max DD ratio roughly the same for the short SMA (i.e., 26-32) and long SMA (i.e., 40-48).

In summary, the results from the parameter optimization suggest that our initial choice of 44 for the SMA length was pretty good. 44 lies in a stable region for most of the metrics analyzed, which indicates consistency. Moreover, 44 has the attractive feature that it appears to guard against drawdowns more effectively than the short SMA variants. That being said, if you want to shoot for potentially higher profits and can tolerate larger drawdowns, then you can employ an SMA of 26-32 and the risk-adjusted statistics (Sharpe, Profit to Max DD, etc.) will support your approach. As with many things in trading and finance, it is a matter of what you’re optimizing for.

Transaction Based Simulation

To this point we have considered backtesting and parameter optimization and the results have been encouraging. The Faber strategy does not appear to deliver outperformance on an absolute basis, but it does do a good job of managing risk and protecting against market drawdowns.

However, we might reasonably wonder whether the strategy’s performance is due to skill or simply luck. To address this question more rigorously, we can employ Monte Carlo simulation. Generally speaking, the objective of a Monte Carlo study is to simulate many trading environments to better understand how a given strategy would have performed if conditions were different. This provides valuable insight into the sensitivity of a strategy to the specific, observed market conditions, the overall risk of the strategy, and how the strategy can be expected to perform going forward. While most MC simulations do this by reordering the P&L/returns to create different price paths, the result is little more than a statistical confidence interval. This is helpful to be sure, but perhaps we can do better.

For this study, I’ll be using the txnsim() function from the blotter package available in R. txnsim() is a unique simulation function that aims to capture the dynamics of the trading strategy rather than simply reordering the P&L. Specifically, txnsim() attempts to incorporate “stylized facts” about the observed trading strategy including: duration of trades, direction of trades, how long the strategy was out of the market and maximum position. Using these facts, it then constructs a more realistic distribution from which to resample. This approach has several advantages:

  • Compares the strategy more closely to random entries and exits with the same overall dynamics.
  • Creates a distribution around the trading dynamics (our true object of concern), not just the daily P&L.
  • Is well suited to modeling “skill vs. luck”

(To learn more about transaction based simulation and txnsim(), check out the presentation and accompanying blog post by the package authors.)
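txnsim() itself is part of R’s blotter package; the following Python sketch mimics the spirit of the idea only, not its actual algorithm: run-length encode the observed position series, shuffle the in-market and flat stretch lengths, and replay the market’s returns against each randomized path (all data here is synthetic).

```python
import numpy as np

def stretch_lengths(position):
    """Run-length encode a 0/1 position series into (value, length) pairs."""
    runs, start = [], 0
    for i in range(1, len(position) + 1):
        if i == len(position) or position[i] != position[start]:
            runs.append((position[start], i - start))
            start = i
    return runs

def resample_position(position, rng):
    """Shuffle in-market and flat stretch lengths separately, then reinterleave."""
    runs = stretch_lengths(position)
    longs = [length for v, length in runs if v == 1]
    flats = [length for v, length in runs if v == 0]
    rng.shuffle(longs)
    rng.shuffle(flats)
    out, li, fi, val = [], 0, 0, runs[0][0]
    while len(out) < len(position):
        if val == 1 and li < len(longs):
            out.extend([1] * longs[li]); li += 1
        elif val == 0 and fi < len(flats):
            out.extend([0] * flats[fi]); fi += 1
        val = 1 - val      # alternate between in-market and flat stretches
    return np.array(out)

rng = np.random.default_rng(3)
rets = rng.normal(0.001, 0.02, 500)            # market returns (synthetic)
pos = (rng.random(500) > 0.2).astype(int)      # observed position path (synthetic)
sim_pnl = [float(np.prod(1 + resample_position(pos, rng) * rets) - 1)
           for _ in range(100)]
# The real backtest's P&L is then compared against this distribution.
```

Each resampled path preserves the strategy’s total time in the market, which is what distinguishes this style of simulation from simply shuffling the P&L.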

Results from Strategy Simulation

For this study I will be simulating 100 different price paths for the Faber strategy to trade. Let’s see how the results stack up.

The chart below plots the equity curve of the “original” strategy (i.e., the one we have already examined), plotted in red, and the 100 possible variations, which are plotted in grey. As we can see, the equity curve of the original strategy lies in the middle of the pack of the random variants. This tells us that the strategy as initially specified is not obviously better than randomly trading the same dynamics. If the strategy’s performance were solely attributable to skill, then we would expect the red line to sit above the majority of the grey lines, which we do not see here.

However, net P&L is only one part of the equation, as we need to consider risk and risk-adjusted performance. The following three histograms depict the standard deviation, max drawdown and Sharpe Ratio of the simulation. The plots show the distribution for each statistic. The blue lines mark the mean as well as the lower and upper bounds of a 95% confidence interval. The red line marks the value from the original backtest. What we want to learn from these plots is whether the backtest value is statistically significant and whether it is the “right” kind of significance.

Starting with standard deviation, we see that the backtest line (again, in red) lies between the confidence bounds. This tells us that the backtest standard deviation is not statistically significant (i.e., we cannot verify it is lower than randomly trading the strategy) at a 95% level of confidence. However, the backtest is close to the lower bound. If we were using a lower standard for significance (say 90%) we would reach a different conclusion.

Turning now to max drawdown we can now see that the backtest value does lie outside the confidence interval. This leads us to conclude that the backtest’s max drawdown is statistically significant at 5%. Moreover, it is the “right” kind of significance because the backtest lies outside the upper bound. For max drawdown we need to remember that we are considering losses (i.e., negative values) hence the upper bound represents smaller drawdowns. Since our backtest lies outside the upper bound we can confidently say that the Faber strategy results in lower drawdowns than a random variant. If the backtest value were below the lower bound we could still claim the result was statistically significant, but that wouldn’t be a good thing!

Finally, we can consider the Sharpe Ratio. Here, again, the backtest value lies between the confidence bounds which does not allow us to claim the Faber strategy results in a higher Sharpe than the randomly traded variants.

Concluding Remarks

In this post we have done a deep dive on the Faber strategy and learned quite a lot along the way. In my opinion, the key takeaway is that the Faber strategy does work for guarding against large market drawdowns, but does so at the expense of long-run performance. The Faber strategy should not be expected to deliver long-run outperformance on an absolute basis over simple buy and hold, but it is able to achieve solid returns while taking a lot of risk (particularly, the most painful risk) off the table.

In many ways this makes intuitive sense. Volatility is often synonymous with “bad” outcomes but can be valuable if you are on the right side of the trade (option prices go up with volatility). With Faber, you are eliminating the risks and rewards from high volatility which has advantages and disadvantages. Had the COVID Crisis played out differently we might all be singing a different tune today. It’s important to remember that with market timing you need to know what you are getting yourself into and what you’re trying to achieve; it’s a difficult game after all.

Interestingly, my conclusion runs counter to that of Faber’s original paper, which should give the reader pause. It is not that I think Faber’s conclusions or calculations were wrong, but rather that they are incomplete. My hunch is that because he used market data going back to 1900, most of the strategy’s outperformance can be attributed to timing out during the Great Depression. This is not an inconsequential feature, as you would have avoided the worst economic meltdown in recorded history and, were it to happen today, you’d feel pretty good about that. If the advantage of Faber is that you avoid black swans, then you should think carefully about whether that is something you want to incorporate into your portfolio.

Final thought. In this post, I examined the most elementary implementation of the strategy: trade the 44-week moving average. It is quite possible that incorporating shorting, buying bonds rather than exiting to cash, setting stops and trailing stops or adding trade filters would have led to better results (indeed, I believe it would), but that, dear reader, is an exercise for another time.

Until next time, thanks for reading!

-Aric Lux.

The post Everything About Faber: A Critical Look at Market Timing first appeared on Light Finance.
