
Monday, August 12, 2024

books. copulas. concerning queries

Animal spirits were unleashed to the dark side, as we entered the final month of summer.  We had enjoyed a multi-year continuation of a tired economic growth regime, but concerns have lingered that this stimulus-fueled growth was starting to wane yet again; perhaps come to an end.  This was one of the concerns we recently described in our new book, copula narratives, and which I elaborate on a little further below.  How do we assess the meaning, for example, of continued reductions in monthly job growth, a variable that the central bank itself conceded has “somehow” repeatedly overstated the actual job market?  And what do we make of the recent pickup in unemployment [up 0.6 percentage points off its cyclical low], which in our book I suggest is in the initial territory of spelling outright economic doom?

 

Copulas

Copula statistics measure the joint dependency of variables, particularly when they are at their respective extreme values.  We study them for multiple reasons, and in concert with other quantitative topics such as probability theory, macro governance issues, agency dynamics, and machine learning.  What we have seen is the challenge of knowing how to calculate what should be plain probability statistics, such as the probability that we are in a recession now or will be in the near future.  We consider long-view data [an expression we’ll define later] that has persisted for at least about a century.  And even there it is tricky to understand why patterns have changed in more recent business cycles.
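To make the rank-based idea concrete, here is a minimal Python sketch of estimating upper-tail dependency between two generic series; the function name, the 95th percentile threshold, and the toy data are my own illustrative choices, not the book’s code.

```python
import numpy as np

def empirical_upper_tail_dependence(x, y, q=0.95):
    """Estimate lambda_U ~ P(V > q | U > q), where U and V are the
    percentile ranks (pseudo-observations) of x and y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    # Rank-transform each series onto [0, 1]: the empirical copula scale.
    u = np.argsort(np.argsort(x)) / (n - 1)
    v = np.argsort(np.argsort(y)) / (n - 1)
    joint = np.mean((u > q) & (v > q))    # mass jointly in the upper tail
    upper = np.mean(u > q)                # mass marginally in the upper tail
    return joint / upper if upper > 0 else float("nan")

# Toy check: strongly comonotonic pairs give an elevated estimate,
# while independent pairs give an estimate near 1 - q.
rng = np.random.default_rng(0)
z = rng.normal(size=5000)
print(empirical_upper_tail_dependence(z, z + 0.2 * rng.normal(size=5000)))
print(empirical_upper_tail_dependence(z, rng.normal(size=5000)))
```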

 

As an example, the length and frequency of economic contractions have changed a lot since records were [ex post] tabulated in the mid-19th century.  Those tabulations were also open to subjectivity, as shown by exceptions being rushed into conclusion during the already-historic 2020 coronavirus outbreak.  Still, in its extraordinary aftermath, we might imagine continued challenges in predicting recessions based on these increasingly questionable statistical characteristics.  Copula theory shows that the comonotonic nature [i.e., joint dependency of two variables in at least one shared direction: such as a stock and bond market crash at the same time] of labor and stock market data makes sense, but it is also connected to the breakage of these models through parametric model risk: that is, where there is simply a complete shift in the assumed variable relationships.

 

New York Times

Let’s take last weekend’s New York Times article [How to Cope When the Markets Panic] by Jeff Sommer, one of over a dozen times we’ve been cited in that paper [including earlier this year on cracking lottery annuities: Those Billion-Dollar Lottery ‘Jackpots’ Aren’t Even Half That Big].  The recent article looks at the idea of assessing the blind chance of a recession, based on a frequentist approach: measuring the share of past time spent in recession and -just as vital- inferring that the same statistic can be expected to continue into the future.  However, we get very large differences in results based on a couple of factors, such as which century of historic data you use for the calculation, and how much weight of evidence you give to current data already leaning in a certain direction, whether up or down.  These are analytical prerequisites that should give a human operator pause.

 

The New York Times article builds this more complicated discussion on one of its own earlier 2022 articles [Bear Markets and Recessions Happen More Often Than You Think], from arguably the most recent period that had its own somewhat-similar mini growth scare.  That article explored a 2-year horizon for a recession occurring, versus the same calculation modified to use a 1-year assumption instead.  Would the model be as accurate over time if you halved this forecasting horizon from 2 years down to 1 year?

 

Additionally, we’ll note we didn’t have a recession in the two years since.  Everyone should also spend time on model assumption analysis, trying to understand whether a roughly 40% probability at that time was therefore perhaps just a tad too high, or perhaps correct as is!  This is something for you to better understand since -either way- it is certainly within a close range of model uncertainty.  Hence this model, and its advertised errors as well, are the overall topic of this web log article.
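Below is a hedged sketch of this frequentist base-rate calculation, with a made-up two-state toy series standing in for real recession dates; the 2% entry and 12% exit hazards are assumptions for illustration only, not the Times’ calibration or ours.

```python
import numpy as np

def recession_within_horizon(indicator, h_months):
    """Share of months whose following h-month window touches a recession:
    the frequentist base rate for 'recession within h months'."""
    ind = np.asarray(indicator, dtype=bool)
    n = len(ind)
    hits = [ind[i + 1 : i + 1 + h_months].any() for i in range(n - h_months)]
    return float(np.mean(hits))

# Hypothetical two-state Markov toy series, loosely persistent like real
# business cycles (the monthly entry and exit hazards are made up).
rng = np.random.default_rng(1)
state, toy = False, []
for _ in range(1200):                       # roughly a century of months
    p_flip = 0.12 if state else 0.02        # exit vs. entry hazard
    state = (not state) if rng.random() < p_flip else state
    toy.append(state)
print("within 1 year:", recession_within_horizon(toy, 12))
print("within 2 years:", recession_within_horizon(toy, 24))
```

Halving the horizon mechanically lowers the estimated probability, and which century of data you feed in moves the answer further still.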

 

[V]VIX

Let’s discuss what happened in the markets of early August 2024.  It’s tough to imagine the entire global markets swinging around like an options-fueled meme stock.  Yet we also know market volatility -in a generic sense- does exist at times, and puts everyone about us in awe when it does.  But the multi-percentage daily breakdown of the major indices a week ago, seemingly without a longer-term warning from the market, was of particular note.

 

Here we saw it occur in real time, and everyone had to make sense of the “unusualness” of what they were sensing.  Each time we must wrestle with these two major possibilities: was it merely market volatility, which rears up every so many years, or did it have elements of a complete breakdown and extensive contagion that was going to spill over into broader economic measures?  On Monday morning, August 5, pre-market New York time to be exact, I saw the volatility index [or VIX] print in the high 60s.  This immediately stood out as the second highest spike level seen since the 2008 global financial crisis [or GFC].  And as high as it was, it almost completely collapsed to below 40 by the end of that Monday, and closed sub-30 by the end of the next day; all suggesting an odd couple-day round trip.

 

VIX looks at fear in the overall market index: the panicked rush to sell at a steep bid-ask discount.  VVIX, on the other hand, is a complementary measure of strain in the market system: it looks at the panic through the expected annualized change in the VIX.  Both are useful to see and cross-verify, in concert, to get an overall read on the legitimacy of market mood shifts.

 

This measure indeed popped as well, to nearly 200, one of the 4 highest spikes since the GFC.  Almost exactly 16 years ago, in August 2008 -a month prior to the fall of Lehman Brothers- it had leaped close to 150.  But that was without the same historical context, as the measure was first introduced only a couple of years prior and evidently is not as clean to interpolate backwards!  What was the VVIX telling us recently, and why did it not align with the ever-changing but correlated VIX?  Yes, a pop to a high level is a signal, though of course these pop-ups appear different from the levels of the GFC era.

 

[Charts: copula-style percentile rankings of VIX and VVIX daily closes, by era]

In the charts above we show these distribution rankings, copula style, and notice a clear comonotonic pattern with upper-tail dependency.  For example, we notice a severe clustering at the 100th percentile of both VIX and VVIX, and not as much at the 0th percentile, or any other corner.  Notice we had many top-most VIX readings during the GFC [chart in upper left quadrant], and many top-most VVIX readings in the coronavirus sell-off [chart in the lower right quadrant].  Additionally we’ll note, below, the times we saw the highest joint closings for VIX and VVIX [with a small screening sketch after the table].  Notice the vast majority of them clustered during March 2020?  Outside of this the events were more sporadic, and none at all during September 2008!

 

Date        VVIX  VIX
10/27/2008   135   80
5/20/2010    145   46
8/8/2011     135   48
3/9/2020     137   54
3/10/2020    139   47
3/11/2020    147   54
3/12/2020    155   75
3/13/2020    171   58
3/16/2020    208   83
3/17/2020    194   76
3/18/2020    181   76
3/19/2020    182   72
3/20/2020    187   66
3/23/2020    168   62
3/24/2020    172   62
3/25/2020    172   64
3/26/2020    170   61
3/27/2020    169   66
3/30/2020    158   57
3/31/2020    153   54
4/1/2020     157   57
4/2/2020     147   51
4/3/2020     144   47
4/6/2020     143   45
4/7/2020     134   47
4/21/2020    132   45
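For those wanting to replicate this screen, here is a minimal sketch, assuming daily closing series for both measures; the 96th percentile cutoff mirrors the discussion below, and the function name is my own.

```python
import numpy as np

def joint_extreme_dates(dates, vvix, vix, q=0.96):
    """Dates where both series close at or above their own q-th
    percentile: the upper-right corner of the copula plot."""
    vvix = np.asarray(vvix, dtype=float)
    vix = np.asarray(vix, dtype=float)
    mask = (vvix >= np.quantile(vvix, q)) & (vix >= np.quantile(vix, q))
    return [d for d, hit in zip(dates, mask) if hit]
```

Run on the full daily-close histories [not shown here], such a screen would reproduce a table like the one above, with its heavy cluster of March 2020 dates.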

 

Officially, August 5 didn’t make this cut since the VVIX closed at 173, but the VIX again “only” closed at 39 [the intra-day high of 66 is also not comparable across this series since it occurred, as noted, pre-market, where calculations were only first recorded in 2016].  But it’s close enough, and we blackened the data point on the chart above, in the lower right quadrant, where we see it sits just to the left [slightly below the 96th percentile VIX] of the top-right-most cluster of VVIX and VIX readings.  Incidentally, a joint 96th percentile event is, for the most part, a (1-.96)*(1-.96), or an extreme 0.16% event.  Or seeing a once-every-couple-years phenomenon.
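The arithmetic, spelled out (a back-of-envelope floor, since it assumes independence even though the data cluster comonotonically):

```python
# Joint upper-tail probability under an independence assumption; the true
# joint probability is higher given the comonotonic clustering above.
q = 0.96
p_joint = (1 - q) ** 2            # 0.04 * 0.04 = 0.0016, i.e., ~0.16%
days_between = 1 / p_joint        # 625 trading days between events
print(days_between / 252)         # ~2.5 years: once every couple of years
```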

 

It suggests to me that we are seeing multiple statistics concerning sudden market strains, and not all related to a recession [which we assume most times has to be dependently interconnected].  Note these levels from 2022, as well, which were high but still not that high.  Of course, at the center of this market meltdown was the weak July employment report, a weakening reading which some market participants believe may not be an isolated case.  We still need to see the upcoming readings, next released in September and October, to get a sense of overall economic direction.  As a result, there could still easily be continued panic and a revisiting of market volatility if in fact the economic readings continue to soften heading into the November election.

 

For now though, it was sad to see retail investors panic into this unknown earlier this month: novices flooding towards safety, unclear whether the information they were seeing represented reasonable changes or not.  I have long suspected AI-hyped NVDA stock was in a bubble of its own making, and due for a market meltdown.  I too had to take advantage of this period, but after a couple of very strong down days I could only count my overwhelming blessings.  On that Monday evening I closed my put options, sensing things had come in too far, too fast, and were likely due for at least some back and forth from this point forward, along with the continued uncertainty that I believe will define the markets for the coming months.  My entire portfolio is now at an all-time high, while a 60/40 passive stock and bond index is down a few percent from its July 2024 peak.

 

 

Institutional pensions, and conferences

Switching gears slightly, on the topic of copulas and the difficulty of measuring models in changing markets, we have explored this subject in the context of institutional liabilities and fiscal reserves.  My customized liability modeling work in this space is in a draft peer-review article currently under revision.  However, I did speak at the start of an important industry conference this summer, about my modeling and the ramifications for investing against institutional liabilities, particularly given the changing market, labor, and mortality landscape, and the uncertainty in newer asset classes at this point in the post-coronavirus cycle.  It was well received, and a long-time friend, SkyBridge CEO Anthony Scaramucci, also keynoted shortly afterwards at this conference [and at one point he highlighted to the audience my statistical polling strength!]

 


copula narratives

This summer I released my new book, copula narratives [copula narratives: mehta, salil].  It is 120 pages and 25 thousand words, and has been a top 25 science book release: a super-category subsuming all commercial subcategories of ethics and technology, probability, statistics, and math!  We spent years putting together the nuances of the different modeling methods, and differentiating between short-view statistics [news events that could benefit from this type of modeling analysis] and long-view models [news we should examine through this lens, and at times in fact do, if people know to do it].

 


New books, projects, credentials

In this last section we’ll note we are releasing other versions of this book, for example audio and e-book variations, both at a discount around Labor Day [along with my previous books, still top ranked in mathematics].  So please follow [Salil Mehta: books, biography, latest update] for new information on that.  Additionally I am close to completing math with mira πŸ‘ΆπŸ», a 140-page book of at-home exercises and solutions.  This will be our book aimed at pre-K through 2nd grade, or children aged 3 through 7.  It has been a multi-year project in coaching my young daughter Mira on the mechanics of simple math and the higher-level probability concepts to think about, doled out in small weekly sample problems and lessons.

 

For example, what does it mean to have a small amount at higher odds, or a large amount at lower odds?  Or how do you assess two statistics, one at a current point in time and another at a distant point in time?  What does it mean to think about insurance on multiple things at the same time [e.g., wanting a higher allowance while a parent loses a job, at the same time]?  All topics that are not covered in the closed-theoretical cases of elementary school math, though still important to see through in life.  For example, solving a division between two given numbers: as if the world is going to be that easy, and one would never have to know all the shocking things people still don’t realize they got wrong in their assumptions, for example during the 2016 election.
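As a toy illustration of that first question (the odds and prizes below are made up), expected value is the concept being built toward:

```python
# Compare a small amount at higher odds with a large amount at lower
# odds, via the expected value each choice works out to.
p_small, prize_small = 0.50, 4    # 50% chance of 4 coins
p_large, prize_large = 0.10, 15   # 10% chance of 15 coins
print(p_small * prize_small)      # 2.0 coins expected
print(p_large * prize_large)      # 1.5 coins expected, despite 15 > 4
```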

 

Some examples of statistics ideas for children were given during my STEAM [STEM] week presentation at Mira’sπŸ‘ΆπŸ» school [https://sites.google.com/site/statisticalideas/mira-school-talk], where I charged our children to think bigger: “You’re at the launching pad of your life; you’re unbound by other people’s passions.”

 


And another item we are working on is a 2025 product for children’s coding and logic development [miraπŸ‘Ά coding (from:salilstatistics) - Search].  This takes the initial steps of learning how to think through a problem and solve it programmatically, to show an answer.  Coding involves a different analytical tool kit than probability and statistical math, and this work is a step to complement the gamification of software such as MIT’s Scratch, in order to think through many more applications and ideas.
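As a hypothetical flavor of this think-then-code style (my own example, not the product’s content):

```python
# Pose a question, then let a loop answer it.
# Question: saving 1 coin on day 1, 2 coins on day 2, and so on,
# how many coins do you have after 10 days?
total = 0
for day in range(1, 11):
    total += day
print(total)  # 55, matching the hand formula 10 * 11 / 2
```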

 

Finally, we’ll note we have been completing the American fellowship actuary exams, in order to enjoy staying on the cutting edge of mathematics; this is an exam level that 94% of actuaries fail to reach, and a topic of copula narratives.  Our recent exam score was nearly the highest, to boot.  What started as a highly productive and positive year will hopefully continue that way!





salil statistics [10k+ books sold, 36m reads, 1/4m follows]
search products within a year: audios, math w miraπŸ‘Ά, and coding&logic
follow via RSS or e-mail or Amazon
