The Z Files: Leave the Rest to Me

This article is part of our The Z Files series.

We all know the expression, "Easier said than done." I can't say for sure; my guess is the phrase has been around a lot longer, but it could have emanated from the first person undertaking rest of season projections for fantasy baseball.

It seems easy enough. Adjust initial expectations by what the player has done so far and apply playing time. Easy peasy.

Well, not really.

Let's say a batter is projected for 30 homers in 600 plate appearances. After 100 trips to the dish, he's clubbed eight long balls. How many should he be expected to garner in his final 500 plate appearances?

Hopefully, you don't simply subtract projected from actual, and expect 22 more long balls. Intuitively, the expected rate should be a weighted average of his projected HR/PA and the actual mark.

The initial rate was 30/600 or .05 HR/PA.

The actual rate is 8/100 or .08 HR/PA.

If the weighted average is applied linearly, the math is

((500 x .05) + (100 x .08))/600 or .055 HR/PA.

When this is multiplied by the remaining 500 PA, the rest of season projection is 27.5 more homers, which would be rounded to 28 in most presentations. This means the new yearly expectation is 36 homers, up six from the initial projection.
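The arithmetic above fits in a few lines of Python. This is a sketch using the article's numbers (30 HR projected over 600 PA, 8 HR through the first 100 PA); the function name is mine, not anything official.

```python
# Rest-of-season homers as a linear weighted average of the projected
# and observed HR/PA rates, per the worked example in the text.
def ros_homers(proj_hr, proj_pa, actual_hr, actual_pa):
    """Blend projected and actual HR/PA, weighted by remaining vs. elapsed PA."""
    remaining_pa = proj_pa - actual_pa
    proj_rate = proj_hr / proj_pa        # 30/600 = .05 HR/PA
    actual_rate = actual_hr / actual_pa  # 8/100  = .08 HR/PA
    blended = (remaining_pa * proj_rate + actual_pa * actual_rate) / proj_pa
    return blended * remaining_pa        # expected HR over the remaining PA

print(round(ros_homers(30, 600, 8, 100), 1))  # 27.5
```

The blended .055 HR/PA times the remaining 500 PA reproduces the 27.5 figure from the text.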

Theoretically, this would be done with all skills to generate the rest of season projection. The problem is there is no basis for assuming skills change in a linear fashion. In fact, the overriding issue is there is no statistical foundation for determining the legitimate weighted average.

Long time readers of this space know what's next. There was a time I thought I discovered the holy grail within this realm by using stability points to guide the weighted averages. The notion was different skills stabilize at different rates, which in turn could be incorporated into the weighted average.

I'll spare the math (search "Russell Carleton" stabilization if you want to go down this rabbit hole). Let's use strikeout rate as an example. The stabilization point for K% was determined to be 60 PA. That is, after 60 PA, the luck to skill ratio is 50:50, hence there is a 50 percent chance the K% after 60 PA is "real". Cool, so after 60 PA, just take the actual K% and average it with the initial mark and we have the rest of season K%. Skills were found to stabilize at different rates, so just incorporate this into a spreadsheet and we have a rest of season projection engine.
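Purely as an illustration (the approach is debunked in the next paragraphs), here is how the stabilization-point blending was typically implemented. The PA / (PA + stabilization point) weighting is a common rendering of the idea, not the author's own code, and the rates below are made-up numbers.

```python
# Sketch of the stabilization-point weighting described above.
# At exactly the stabilization sample, the weight is 0.5, i.e. a
# straight average of projected and observed rates.
STAB_PA = 60  # cited stabilization point for strikeout rate

def blended_k_rate(proj_k_rate, actual_k_rate, pa):
    """Weight the observed K% by its sample relative to the stabilization point."""
    w = pa / (pa + STAB_PA)  # at pa == 60, w == 0.5
    return w * actual_k_rate + (1 - w) * proj_k_rate

print(blended_k_rate(0.20, 0.30, 60))  # 0.25, the 50/50 average
```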

It's been at least 10 years since this method has been debunked, yet some still swear by it in their analysis. They'll also cite stats at the quarter pole in a couple of days, but that's a rant for another time.

The aforementioned web search will unearth several essays explaining the flaw in this approach. In short, the luck to skill ratio over the first 60 PA is statistically 50/50, but it does not mean the next 60 PA will have the same K%. This flawed assumption drove the perceived holy grail.

All the stabilization point indicates is the player's results can be considered real (or maybe half real?) under those circumstances. The circumstances of the next 60 PA will be different (quality of competition, weather, park factors, etc.), but at least within the first 60 PA, the luck to skill ratio is 50/50.

Without stabilization points, we're back to the drawing board. Or are we? We may not know the sample at which skills stabilize, but a reasonable takeaway from the studies is that some stats stabilize earlier than others; we just don't know exactly when.

Isn't this similar to some of the pitfalls with the means by which initial projections are generated? We know a weighted average of past seasons is best, but there isn't a statistical foundation for how many seasons and the respective weights. It is done empirically, with different prognosticators landing on different results. We know regression of some luck-related stats is necessary (BABIP, HR/FB, K% from swinging strike), but we don't know the proper extent. Most often, a standard regression is chosen, with the ability to tweak it between no and full regression.
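The tweakable regression described above can be sketched as follows. The league-average BABIP of .300 and the default half-strength regression are illustrative assumptions, not the article's figures.

```python
# Regressing a luck-driven stat toward a league mean, with an
# adjustable strength between none (0) and full (1).
LEAGUE_BABIP = 0.300  # assumed league-average BABIP, for illustration

def regress(observed, league_mean=LEAGUE_BABIP, strength=0.5):
    """strength=0 keeps the observed value; strength=1 returns the league mean."""
    return observed + strength * (league_mean - observed)

print(round(regress(0.360), 3))                # 0.33, halfway back to the mean
print(round(regress(0.360, strength=1.0), 3))  # 0.3, full regression
```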

The sample needed for in-season skills to overtake projected skills varies, and can be determined via backtesting. It is a huge undertaking and still rife with variance, so the efficiency of doing such a study depends on who is doing it and its purpose. As an aside, I have not done as complete an investigation as I would like, but I am slowly working through the pertinent metrics.

Let's take things in a more practical direction and pretend the weighted averages are universally recognized and can be incorporated into a rest of season projection engine. There is still the matter of playing time. It's not just a question of accurately guesstimating how much each guy will play. There is a matter of how players with injury time baked in can play mind tricks.

It's a bit funny. One would think playing time gets easier to project in-season, and to a certain extent it does. Maybe it's more about a personal sense of the discretion encountered. When doing initial projections, there isn't as much scrutiny (or so it seems) because all one can do is look at the player's history and how you expect him to be used this season, then land on a somewhat arbitrary number. There will be some discrepancies with injury-prone and younger players, but for the most part, there isn't a ton of, "I think you're wrong."

Once the games start, and we all see what's happening, everyone becomes an expert. Often, judgments are biased towards players on our respective teams, but that's fine. I have always said each fantasy team manager should take ownership of their playing time expectations and manage their teams accordingly.

The chief issue with playing time is handling players with an established track record of missing time like Mike Trout and Clayton Kershaw. Converting projections to rankings for a draft has two elements in sync. That is, better skills and rates of production drive rankings, but missing time lowers the rank.

Using Kershaw as an example, he is projected for 23 starts, around 72 percent of a full season. Had he been projected for 32, he'd be a top-five overall pitcher, but he checked in around SP30. He's made eight starts, and the Dodgers will have played a quarter of their games before his next outing, so he's pacing for 32, with a potential of 24 left. If it's still assumed the southpaw will miss nine starts, he'll make 15 of 24, or 63 percent. Even with the same rate stats, this drops Kershaw well down in the rankings.

The question is whether to assume Kershaw will still miss nine starts, or if the fact he's made it this far unscathed warrants an adjustment. Perhaps he should be expected to start 72 percent of the remaining 24 starts, which yields 17 more efforts. This is probably the most reasonable approach, and it gives you the ability to manually adjust for players with a greater likelihood of injury.

Similar analysis can be done for Trout. Most projected him to miss around 32 games, which indicated he would play 80 percent of the season. Through Thursday night, Trout has appeared in 35 of the Angels' 38 contests. The club has 124 games left. If Trout is still expected to miss 32, he'll play in 92, or 74 percent of the remaining schedule. Or should his playing time be adjusted to 80 percent of 124, or 99 more games? The seven-game difference will significantly alter his rest of season ranking.
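The two competing playing-time assumptions reduce to a few lines of arithmetic. This sketch uses the article's figures for Trout (124 games left, 32 projected misses, an 80 percent preseason availability rate) and Kershaw (24 potential starts left, nine projected misses, 72 percent); the function names are mine.

```python
# Two ways to project remaining playing time for an injury-prone player.
def fixed_miss(remaining, expected_misses):
    """Assume the full preseason miss total still lies ahead."""
    return remaining - expected_misses

def prorated(remaining, preseason_rate):
    """Apply the preseason availability rate to what's left."""
    return round(remaining * preseason_rate)

# Trout: 124 games left, preseason call was 80 percent availability.
print(fixed_miss(124, 32))    # 92 games
print(prorated(124, 0.80))    # 99 games

# Kershaw: 24 potential starts left, 72 percent preseason rate.
print(fixed_miss(24, 9))      # 15 starts
print(prorated(24, 0.72))     # 17 starts
```

The gap between the two methods (seven games for Trout, two starts for Kershaw) is exactly what swings the rest of season rankings discussed above.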

Again, there is no right or wrong answer, though some overly rely on rest of season rankings to drive trades. Apply your own logic, seasoned with common sense. Consider the contributions of replacement players and make your best call. Maybe let your level of desperation, or that of your prospective trade partner, be part of the equation. If you're offered Trout, and have a chance to win the league if he bucks the odds and plays 150 games, maybe you accept the deal, figuring you aren't going to win with what you have. Conversely, maybe you have Trout and are approached for a deal. If your team is in good shape, and just needs steady production to win, maybe acquiring a less volatile player is the call, assuming you can still win with the steadier option. It may be too early to make decisions of this nature, but later in the season when rest of season rankings get even more wacky, it could be relevant.

As a person paid to generate rest of season projections, I need to make a call. As a fantasy manager, I care more about the next few weeks than the next few months. This is format dependent, but most of the time, the operative question isn't who do I like for the rest of the season, but rather who do I prefer for the next few weeks? This is especially true for picking up free agents and players off waivers. Granted, the lusher the available player pool, the shorter my "who do I like more?" time frame becomes.

In summary, rest of season projections are fraught with flaws, both in methodology and application. Many will argue they provide a basis for player evaluation, but that could presume too much accuracy. This is not to say they should be ignored. Just understand the shortcomings, and don't let them unduly influence your evaluation process.

ABOUT THE AUTHOR
Todd Zola
Todd has been writing about fantasy baseball since 1997. He won NL Tout Wars and Mixed LABR in 2016 and is a multi-time league winner in the National Fantasy Baseball Championship. Todd is now setting his sights even higher: The Rotowire Staff League. Lord Zola, as he's known in the industry, won the 2013 FSWA Fantasy Baseball Article of the Year award and was named the 2017 FSWA Fantasy Baseball Writer of the Year. Todd is a five-time FSWA awards finalist.