I use a tool called Typefully to send out posts simultaneously to both Twitter and Bluesky. This helps me promote content without getting sucked into the various feeds and also ensures that people who follow me on both platforms are getting the same information.
There’s a drafts section in Typefully and for about a year now I’ve been sitting on a thread discussing iterative rankings.
No idea why I never sent it.
Maybe I thought the threaded conversations thing on Twitter was kind of annoying and scrapped it. Either way, I’ve been thinking about iterative rankings a lot this week. We pushed out our first significant in-season update of the Top 100 on Wednesday over at Baseball America. I think the list is more accurate, more up to date and more useful for readers today than it was before the season started.
But I did notice some people asking: “Why?”
Why update already? It’s only been a month. What’s the point in changing the ranking so soon? Aren’t you overreacting to small samples? Aren’t you just looking for clicks?
I figured it would be fun to use today’s newsletter to write more about my obsession with iterative rankings and their benefits.1
When I started at Baseball America back in 2017, we didn’t have as many updates to our Top 100 prospects list, our team Top 30s or our draft rankings as we do now. I believe we had a preseason 100, a midseason update and then an end-of-season update on the pro side. On the amateur side, we didn’t update each month in season, and we also had fewer summer and fall updates.
I started becoming a big proponent of constant iteration on these lists as I began ranking players for work. The process of re-ranking, constantly comparing players and forcing yourself to think through their pros and cons over and over again helps crystallize your thoughts.
Later I read Superforecasting by Philip E. Tetlock and Dan Gardner. A friend in the game recommended it to me. I would guess a decent chunk of most big league front offices have read it as well. The book has nothing to do with baseball but is perhaps the most relevant book I’ve ever read for the work we try and do at BA: project baseball players.
The premise of the book is to try and understand how forecasting experts (so called “superforecasters”) are able to predict future events at an impressive clip—better than industry experts and betting markets. The authors use the results of a government-funded forecasting tournament to explore how thinking probabilistically, working with a team, keeping score, being as precise as you possibly can,2 and updating your predictions consistently as you get new information leads to surprisingly savvy predictions about the future.
Here’s one of my favorite passages from the book:
“Forecasts aren’t like lottery tickets that you buy and file away until the big draw. They are judgements that are based on available information and that should be updated in light of changing information. If new polls show a candidate has surged into a comfortable lead, you should boost the probability that the candidate will win. If a competitor unexpectedly declares bankruptcy, revise expected sales accordingly. The IARPA tournament was no different. After Bill Flack did all his difficult initial work and concluded there was a 60% chance that polonium would be detected in Yasser Arafat’s remains, he could raise or lower his forecast as often as he liked, for any reason. So he followed the news closely and updated his forecast whenever he saw good reason to do so. This is obviously important. A forecast that is updated to reflect the latest information is likely to be closer to the truth than a forecast that isn’t so informed.”
…
“Superforecasters update much more frequently, on average, than regular forecasters. That obviously matters. An updated forecast is likely to be a better-informed forecast and therefore a more accurate forecast. “When the facts change, I change my mind,” the legendary British economist John Maynard Keynes declared. “What do you do sir?” The superforecasters do likewise, and that is another big reason why they are super.”
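The kind of revision the quote describes can actually be written down. Here’s a minimal sketch (entirely my own illustration, not anything from the book): hold a forecast as a probability, and nudge it with each piece of new evidence via a likelihood ratio, the standard Bayesian odds update.

```python
# A sketch of probabilistic updating (my illustration, not the book's
# code): revise a probability when new evidence arrives, using the
# Bayesian odds update. The likelihood ratio says how much more likely
# the evidence is if the claim is true than if it's false.

def update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after seeing new evidence."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Start where Bill Flack did: a 60% chance polonium is detected.
p = 0.60
# A news item that's twice as likely under "yes" nudges it up...
p = update(p, 2.0)    # 0.75
# ...and a mildly contrary report nudges it back down.
p = update(p, 0.5)    # back to 0.60
print(f"current forecast: {p:.2f}")
```

The point isn’t the arithmetic; it’s that each new piece of information moves the number a little, in proportion to how informative it is.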
This is the thrust of my argument for iterative rankings.
By updating more, we are able to make small, incremental changes with new information that hopefully prevents over- and under-reacting. Those smaller up/down moves across five different updates throughout a season should lead to a more accurate product than if we were doing one or two updates that needed larger swings each time.
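To make that concrete, here’s a toy model (my own invention, not how BA actually builds its lists): a player’s “true” value improves steadily over a 20-week season, one forecaster nudges its estimate toward each week’s observation, and another makes a single large correction at midseason. The frequent small updates track the moving target much more closely.

```python
# Toy illustration (assumed numbers, not real rankings): compare a
# forecaster who makes small weekly updates against one who makes a
# single big midseason swing, measured by total absolute error.

def smooth(old: float, obs: float, alpha: float = 0.5) -> float:
    """Move part of the way toward the new observation."""
    return old + alpha * (obs - old)

true_values = [50 + week for week in range(20)]   # steady improvement

iterative = 50.0   # updates every week
one_shot = 50.0    # updates once, at midseason
iter_err = shot_err = 0.0

for week, truth in enumerate(true_values):
    iterative = smooth(iterative, truth)          # small weekly nudge
    if week == 10:                                # one large swing
        one_shot = truth
    iter_err += abs(iterative - truth)
    shot_err += abs(one_shot - truth)

print(f"iterative error: {iter_err:.1f}, one-shot error: {shot_err:.1f}")
```

Real player development is obviously noisier than a straight line, but the intuition holds: when the thing you’re estimating keeps moving, many small corrections beat a few big ones.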
It’s also a more “real time” process that hopefully tracks player development and regression a bit closer. Players really can change rapidly.
There’s always a bit of a lagging effect with our rankings since they are sourced and reported from the scouting industry. Iterating consistently cuts down on that and also creates a tighter feedback loop.
On top of just being a better process, we’re now (thankfully) in a world where there’s greater demand for prospect rankings and content throughout the season. People want to know where a player ranks today, whether for fantasy purposes or because they’re diehard fans of their teams. They want to be able to click on a list and know they’re getting useful, relevant information.
A list updated multiple times and just before the trade deadline is going to be a heck of a lot more useful to them than our preseason rankings.
BA Editor-In-Chief JJ Cooper went back and looked to see if this iterative process actually led to a better product. We’re pretty confident it has. If you want to see more on the nuts and bolts of this, definitely check out his column here where he shows our work. And thanks to all the BA subscribers who allow us to be so obsessive about all of this stuff. It’s not possible without you guys.
So, long story short: iterate, iterate, iterate.
Below is the work I’ve produced for Baseball America since my last newsletter:
Writing
Top 400 MLB Draft Prospects For 2025 — One step away from the first edition of the BA 500. Yes, the players begin to blur together a bit after writing up 400 scouting reports.
2025 MLB Draft: Bonus Pools, Slot Values For Each Team — I run my own personal competition (which I’m sure he knows nothing about) with Jim Callis to try and get these slot values first each year. Hat tip to him for being on them first this year. Big shoutout to all the draft nerds who love keeping track of bonus pools and slot values.
8 MLB Draft Prospects Who Could Be The No. 1 Overall Pick In 2025 — The top of the draft was muddled a few weeks ago and remains so today. Because of that, I tried to dig into each of the top players with a real shot at the first overall pick and make their 1-1 case. If anything, I think there are more players than I wrote about here who have non-zero 1-1 chances this July.
2025 MLB Draft: Baseball America Staff Draft 2.0 For Top 50 Picks — Our second effort at putting on scouting director hats and working through the draft board. I find myself targeting the strength of the class throughout: prep shortstops. Jojo Parker in particular is a name I keep making sure I come away with in these exercises. Great swing. Great name.
Podcasts
Future Projection Episode 123: Yankees & Red Sox Pitching Prospects Trending Up. We are becoming shills for the Boston system that just keeps pumping out dudes.
Future Projection Episode 124: May Top 100 Prospect Update—The Up/Down Names To Know. Ben and I talk through basically the entirety of the May update of our top 100 prospects list. I’m warming on Jac Caglianone. Let it be known!
Draft Pod: The Most Surprising Picks In Our Upcoming Staff Draft. This one might be a little dated now but you might still enjoy hearing Peter and myself try and parse the second version of the staff draft.
Draft Pod: College Arms Rising MLB Draft Boards—But Where Are The Hitters? How are you lining up Jamie Arnold, Kade Anderson, Kyson Witherspoon, Liam Doyle and Tyler Bremner? I could use some assistance.
I did have to double check to make sure I haven’t written about this topic before because it has become a bit of an obsession for me over the years—thankfully I don’t think I’ve hammered this home relentlessly in the newsletter. Perhaps I have in our BA slack channel…
This precision point is also why I am a fan of using half grades throughout the 20-80 scale. The common wisdom is to use half grades only around the midpoint (45s and 55s), because more of your sample sits in that range and you need a finer way to separate players and tools in the bulk. As you get toward the extremes on either side, where fewer players are, there’s less perceived need for half grades. Some people would tell you there’s no such thing as a 65 or a 35. I’m not one of them.