
Friday, August 15, 2014

Difficulties w/ job cut data

On the eve of each monthly government labor report, Challenger, Gray & Christmas reports on job cut announcements.  Given the recent pent-up anticipation for every monthly government jobs number, Challenger markets its report effectively, and it receives considerable attention.  The report can seem to offer comprehensive insight into the job market at U.S.-based companies.  Challenger collects anecdotal company news and assimilates it into a statistical table, even attributing geography and industry.  As a former proprietary trader, I find this systematic capturing of raw third-party news commendable, and reading their report overview is of great interest.  We will show here, though, that at a macro level the insights can mislead.  Other data sources may provide a better labor gauge.

In the past week, I called one of the leaders at Challenger, who was cooperative in scoping out their data.  Challenger prominently headlines its aggregate time series of monthly changes in announced job cuts.  Elsewhere in the report one can also contrast the monthly number of announced hirings.  The data are shown in the chart below.  Since the limited Challenger data here also have some common seasonal patterns, they were smoothed by taking a 4-quarter moving average.  This was particularly useful for the hirings data, where the most recent July figure falls into the recently busy Q3 hiring-announcement quarter.
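For readers who want to reproduce the smoothing step, here is a minimal sketch in Python, assuming a hypothetical file of monthly Challenger announcement totals (the file name and column names below are placeholders, not Challenger's actual distribution format):

```python
# Minimal sketch of the 4-quarter moving-average smoothing described above,
# assuming a hypothetical CSV with 'date' and 'job_cuts' columns.
import pandas as pd

monthly = (pd.read_csv("challenger_job_cuts.csv", parse_dates=["date"])
             .set_index("date")["job_cuts"])

# Sum the months into calendar quarters, then average across a rolling
# 4-quarter window to wash out the seasonal pattern in announcements.
quarterly = monthly.resample("Q").sum()
smoothed = quarterly.rolling(window=4).mean()

print(smoothed.tail())
```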


A rise in the number of announced firings (in red above) should be a clear signal of problems to come for the economy.  Let's look at the chart above.  If we had just awoken from a 15-year coma and were shown this chart, we would get the impression that the dot-com crash (and its initial recovery) at the start of the millennium was more catastrophically painful for employees than the recent Great Recession.  The time series, collected since 1989 (initially by a predecessor organization), also begins just ahead of the 1990-1991 recession.  The trouble here is that we at least know, ex post, that the economy rebounded by mid-1991.  Just don't expect the Challenger data to reflect that (job cuts continued to increase at that time).

This casts doubt on their more recent hiring announcement data (in green above), even though the brief time series could offer promise.  Hiring announcements should allow one to augur continued resilience for a growing economy.  But how can we assume the pattern we see there, similar to the job cut data from the early 1990s, isn't also a function of internal data collection bias?  With the government labor data, for example, refined surveying methods over time have allowed a greater portion of the economy to be sampled, and the data are balanced accordingly.  But Challenger's secondary research makes no promise to correct major sampling bias (which to a mathematician would include completeness, representativeness, trending, seasonal adjustment, etc.).  When asked about this announcement data, Challenger stated that "It’s not great" and that "It’s not a very comprehensive listing."

Let's return to the Challenger job cut announcements.  We might think that even if the levels are unreliable as a comprehensive measure, with no sampling-error estimates provided, the data could at least provide an early read on when mass firings will occur.  This should be one of the promises of looking at this aggregated, panel-organized data.  Here too the data fails to provide meaningful early detection of any coming labor softening (incidentally, the past two years of job cut data are shown as essentially flat, even though the past year of Labor Department job growth has been on a fiery streak).
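One simple way to check for such an early-warning property is a lead/lag correlation scan: shift the candidate series by various quarters and see where it correlates best with a labor benchmark.  The sketch below is an illustration only; the series names (challenger_cuts, payroll_change) are placeholders for data the reader would supply.

```python
# Illustrative lead/lag scan: does a candidate series (e.g., Challenger job
# cuts) line up with a labor benchmark when shifted forward in time?
import pandas as pd

def lead_lag_correlations(candidate: pd.Series, benchmark: pd.Series,
                          max_lag: int = 8) -> pd.Series:
    """Correlation of the candidate, shifted by k periods, against the
    benchmark.  A strong correlation at positive k suggests the candidate
    tends to lead the benchmark by roughly k periods."""
    return pd.Series({k: candidate.shift(k).corr(benchmark)
                      for k in range(-max_lag, max_lag + 1)},
                     name="correlation")

# Example usage with placeholder quarterly series:
# corr_by_lag = lead_lag_correlations(challenger_cuts, payroll_change)
# print(corr_by_lag.idxmax(), corr_by_lag.max())
```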

Two government sources for looking at layoffs are the weekly jobless claims data and the monthly mass layoffs report.  The latter also had similar geographic and industry attribution.  Its data, starting in the mid-1990s, was more reliable (in terms of layoff events) and partially reconciled to the former claims data.  More importantly, its data set led economic deterioration (and any signal from the Challenger data) by about a year.  Due to budgetary tightening, this vulnerable government collection program was among those sequestered a year ago under the Obama Administration.  This should serve as a cautionary tale for those who insist that the private sector always does things better than the government.

That leaves us with the weekly claims data, which, while not as great as the monthly mass layoffs report, does provide superior directional information, with no lag, relative to the Challenger job cut information.  Look at the chart below.


We see that the initial jobless claims (in red above) provide a sensitive read on new unemployment, while the continuing claims (in dashed orange above) provide a cumulative total of all net claims at a specific time.  Both weekly series immediately above are more stationary and already seasonally adjusted, and they are presented in a way that most easily contrasts with the top chart above.  We note that the advanced seasonal adjustment also contains some component of smoothing out larger macro cyclical aberrations.  This gives the refined data, shown immediately above, a smoother flow than the Challenger data (e.g., see the past five years on both charts above).
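For completeness, both weekly claims series are freely available from FRED (series IDs ICSA and CCSA for the seasonally adjusted initial and continued claims).  A rough sketch of pulling them and eyeballing their week-over-week variability, assuming the pandas_datareader package is installed:

```python
# Pull the seasonally adjusted weekly claims series from FRED and take a
# quick, scale-free look at how choppy each one is week over week.
from pandas_datareader import data as pdr

claims = pdr.DataReader(["ICSA", "CCSA"], "fred", start="1989-01-01")

weekly_changes = claims.pct_change().dropna()
print(weekly_changes.std())  # lower values indicate a smoother series
```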

The analysis above shows how probability and statistics can evaluate a somewhat popular data set that is regularly cited by the media.  We conclude that the value of the Challenger announcement data lies mostly in the monthly anecdotes it provides about specific companies.  While the government labor reports do document extreme cases of single-company issues or exogenous events, they are at times less thorough in this regard.  The Challenger report is far less useful in the quantitative way it is most often articulated: as a read on actual monthly changes and early turning points in the economy or markets.

We'll end this note by stating that, in the past week, over a dozen news organizations around the globe have cited the statistics research done on this web log.  This was only partially associated with the once-again-prescient probability advice, 12 days ago, that industry prognosticators of 10% corrections are generally wrong, except for the rare times they stumble upon luck.  Of special note, in the U.S., there were public acknowledgements by Forbes, Wall Street Journal's MarketWatch, Bloomberg, Big Picture, CFA Institute, Lindzon (StockTwits), and Viskanta (Abnormal Returns).  Some of these can be seen at the following link.  We should also note that the Statistics Topics book (temporarily stuck at a low price of $7) again ranked near the top 5 in a major science-math category, as well as in a popular business-economics category (see the bottom of this link).  Lastly, in the immediate future we have two quantitative risk articles being published in high-profile practitioner publications, and we also have a health and policy article in the pipeline!
