Welcome to Measuring Shadows, a blog about thoughts, games, and ethics. We will post about more serious subjects later, but first a foray into baseball batting orders.

---------------------

Whether batting orders significantly impact a baseball offense is an often-asked question, and treatments similar to mine can be found all over the web, for instance here and here. To my knowledge, though, mine is slightly more thorough than the others I've found.

My strategy is to take a possible lineup of nine players, look at the frequency with which each produces the various hit types, and simulate entire baseball games based on those frequencies, recording the number of runs the offense scores; averaging runs scored over many (~100,000) simulations gives an estimate of the strength of an offense.
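To make that concrete, here is a minimal sketch of what such a simulator might look like. Everything in it is illustrative rather than my actual model: the outcome probabilities are made up, steals, errors, and double plays are ignored, and baserunner advancement is crudely simplified (runners advance exactly as many bases as the batter, and walks only force runners).

```python
import random

# Hypothetical per-plate-appearance outcome probabilities for one batter.
# These numbers are illustrative, not real stats; they must sum to 1.
EXAMPLE_BATTER = {"out": 0.68, "walk": 0.08, "single": 0.16,
                  "double": 0.05, "triple": 0.005, "hr": 0.025}

def plate_appearance(probs, rng):
    """Sample one plate-appearance outcome from a batter's probability table."""
    r, cum = rng.random(), 0.0
    for outcome, p in probs.items():
        cum += p
        if r < cum:
            return outcome
    return "out"

def simulate_game(lineup, rng, innings=9):
    """Simulate one game's offense for a lineup (list of probability tables).

    Simplified advancement: every runner moves up exactly as many bases
    as the batter; a walk only forces runners."""
    runs, batter_idx = 0, 0
    for _ in range(innings):
        outs, bases = 0, [False, False, False]  # 1B, 2B, 3B occupied
        while outs < 3:
            outcome = plate_appearance(lineup[batter_idx % len(lineup)], rng)
            batter_idx += 1
            if outcome == "out":
                outs += 1
            elif outcome == "walk":
                if bases[0]:          # force runners ahead of the batter
                    if bases[1]:
                        if bases[2]:
                            runs += 1
                        bases[2] = True
                    bases[1] = True
                bases[0] = True
            else:
                advance = {"single": 1, "double": 2, "triple": 3, "hr": 4}[outcome]
                for base in (2, 1, 0):          # move lead runners first
                    if bases[base]:
                        bases[base] = False
                        if base + advance >= 3:  # reached home
                            runs += 1
                        else:
                            bases[base + advance] = True
                if advance >= 4:
                    runs += 1                    # batter scores on a HR
                else:
                    bases[advance - 1] = True
    return runs

def mean_runs(lineup, trials=10_000, seed=0):
    """Average runs/game over many simulated games."""
    rng = random.Random(seed)
    return sum(simulate_game(lineup, rng) for _ in range(trials)) / trials
```

With nine per-player probability tables in batting order, `mean_runs` gives the runs/game estimate used throughout this post; the real version just needs each player's actual outcome frequencies and a less naive advancement model.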

My test case has been this year's Giants lineup: Gregor Blanco, Ryan Theriot, Melky Cabrera, Buster Posey, Angel Pagan, Pablo Sandoval, Brandon Belt, Brandon Crawford, and Barry Zito (chosen as a pitcher with roughly average batting stats). In future posts I'll write about exactly what my method was, along with some extensions, but to start off with, some results:

1) An average (i.e. random) ordering of those nine players creates about 3.78 runs/game.

2) The Giants' typical batting order creates roughly 3.82 runs/game.

3) The best-performing lineups in the simulation create roughly 3.99 runs/game. In particular, the following lineup performed best in my simulation: Theriot, Sandoval, Blanco, Belt, Cabrera, Posey, Pagan, Crawford, Zito; it created 3.992 runs/game on average (over 200,000 trials).
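For readers curious how such a search might be organized: there are 9! = 362,880 orderings, few enough to enumerate. A sketch follows, with a deliberately toy scoring function (earlier slots get more plate appearances, so it weights on-base percentage by slot) standing in for the full game simulator, which would be far too slow to run 100,000+ times per ordering. The names and numbers are hypothetical.

```python
from itertools import permutations

def score(order, obp):
    """Toy stand-in for the simulator: reward putting high-OBP hitters
    in earlier slots, which get more plate appearances over a season."""
    return sum(obp[p] * (9 - slot) for slot, p in enumerate(order))

def best_order(obp):
    """Exhaustively search all orderings of the given players."""
    return max(permutations(list(obp)), key=lambda o: score(o, obp))

# Toy 4-player example (4! = 24 orderings) with made-up OBPs:
obp = {"A": 0.36, "B": 0.33, "C": 0.30, "D": 0.28}
```

Under this toy score the best order is simply descending OBP; with a real simulator the search loop is identical, just with `score` replaced by average simulated runs per game.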

Due to sample size issues there might be another lineup which performs slightly better, but I think that 3.99 runs/game is a reasonable value for the best the Giants can do. Taking the canonical value of 10 extra runs creating one win, the Giants could get roughly 2.75 more wins over a 162-game season, or roughly .017 extra winning percentage, by changing their batting order. This isn't a huge difference, but given that most winning percentages fall between .400 and .600, the difference between .550 and .567 is nontrivial.

This kind of calculation is fun in its own right, but perhaps a more important application of these simulations is in valuing players: one thing that can be done is to take a lineup composed of nine copies of a given player and see how many runs it scores. How well does this predict how "good" a player is? Which has a stronger correlation with a team's number of runs scored: the (weighted) sum of the runs created per game (using these simulations) of each of its players, or the sum of the WAR of its players? I'll look at these questions in a future post.
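The "nine clones" idea is easy to sketch. The simulator below is an even cruder stand-in than before (only outs, singles, and home runs; every runner scores on a single), purely so the example is self-contained; the point is the shape of the valuation, not the model. All probabilities are made up.

```python
import random

def clone_lineup_runs(p_single, p_hr, trials=2000, innings=9, seed=1):
    """Average runs/game for a lineup of nine identical 'clones' of a
    player whose only outcomes are single, home run, or out.
    Crude model: every runner on base scores on any hit."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        runs = 0
        for _ in range(innings):
            outs, on_base = 0, 0
            while outs < 3:
                r = rng.random()
                if r < p_single:
                    runs += on_base      # all runners score (crude)
                    on_base = 1          # batter stops at first
                elif r < p_single + p_hr:
                    runs += on_base + 1  # everyone scores, bases empty
                    on_base = 0
                else:
                    outs += 1
        total += runs
    return total / trials

# A better hitter should grade out as more valuable (made-up rates):
good = clone_lineup_runs(0.30, 0.05)
bad = clone_lineup_runs(0.22, 0.02)
```

The resulting runs/game figure is a single-number valuation of the player; comparing such figures (or their weighted team sums) against WAR is the correlation question posed above.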

--Sam