Partial, Game, Slam, and Grand Slam Statistical Breakdown
by Matthew Kidd

Bridge students are often afraid to bid their games because nine or more tricks seems like a lot when they are just getting started. To some extent adults can simply be told that there is a substantial bonus for bidding game and that game contracts are quite frequent. Younger students may benefit from other approaches. For example, local teacher Ed Koch rewarded his middle school students with two Mike and Ike candies per partnership when they made a contract, and doubled the reward if they bid and made a game. Another approach is to present a breakdown of how often each category occurs. Since I was trying to explain the breakdown to fourth graders at Ocean Air elementary school, I wanted to present the results visually rather than as a set of numbers. So I tallied the results from a collection of about 700,000 random deals that included double dummy results and generated the following pie chart.
This chart shows the highest category that either side can make. For example, if one side can make a game and the other side can do no better than a partial, the deal is assigned to the game category. It is immediately clear that games constitute nearly half of all contracts and occur more often than partials. Slams and even grand slams are also more common than might be supposed, a fact I’ve detailed in the article Slam Statistics.
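For readers who want to reproduce the tally, here is a minimal sketch of the classification in Python. It assumes the double dummy results have already been reduced to a trick count per side and denomination; that data layout (and the names below) are mine, not part of the original analysis code.

```python
# Classify one double dummy solved deal into the highest category either
# side can make. `tricks` maps side -> denomination -> double dummy tricks,
# e.g. tricks['NS']['S'] == 10. This layout is assumed for illustration.

GAME_TRICKS = {'C': 11, 'D': 11, 'H': 10, 'S': 10, 'N': 9}
RANK = ['Partial', 'Game', 'Slam', 'Grand']

def category(tricks):
    best = 'Partial'
    for side in ('NS', 'EW'):
        for denom, taken in tricks[side].items():
            if taken == 13:
                cat = 'Grand'
            elif taken == 12:
                cat = 'Slam'
            elif taken >= GAME_TRICKS[denom]:
                cat = 'Game'
            else:
                cat = 'Partial'
            if RANK.index(cat) > RANK.index(best):
                best = cat
    return best
```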
The table below shows the same results to two decimal places.
| Category | Freq (%) |
|---|---|
| Partial | 39.81 |
| Game | 46.59 |
| Slam | 10.65 |
| Grand | 2.94 |
The 46.59% game category can be broken down as 45.55% where only one side has game and 1.04% where both sides have game, overwhelmingly 4♠ vs. 4♥.
Sacrifices / Par
Arguably basing these results on double dummy makeable contracts ignores sacrifice contracts that push the bidding higher and therefore might increase the number of games and slams that should be bid to reach par. In practice, basing the tally on par contract(s) instead of makeable contracts has little impact on the breakdown of each category. In part this is because the common sacrifices of 4♠ over 4♥, and 5♣/♦ over 4♥/♠, do not change the category. Moreover, while 4♥/♠ over 4♣/♦ increases the number of games, 4♣/♦ over 3N compensates by decreasing the number of games.
Considering par contracts requires examining all four vulnerability combinations. Also, when there are multiple par sacrifice contracts, they may span categories, e.g. one a partial and the other a game. Since the penalty is identical, there is no reason to preferentially assign such cases to one category; they are shown as combined categories in the table below.
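As a sketch of this combined-category bookkeeping, assuming par contracts arrive in a simple textual form (the input format here is hypothetical, not the format my analysis program actually uses):

```python
# Assign a deal to a (possibly combined) category from its par contract(s),
# e.g. ['5Dx-1', '5Hx-1'] when either sacrifice yields the same penalty.

GAME_TRICKS = {'C': 11, 'D': 11, 'H': 10, 'S': 10, 'N': 9}
RANK = ['Partial', 'Game', 'Slam', 'Grand']

def par_category(par_contracts):
    cats = set()
    for c in par_contracts:
        level, denom = int(c[0]), c[1]
        if level == 7:
            cats.add('Grand')
        elif level == 6:
            cats.add('Slam')
        elif 6 + level >= GAME_TRICKS[denom]:
            cats.add('Game')     # e.g. 4S over 4H, or a 5-level sacrifice
        else:
            cats.add('Partial')  # e.g. a 4C/4D sacrifice over a making 4C/4D
    return ' + '.join(sorted(cats, key=RANK.index))
```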
| Category | Makeable | NV/NV | V/V | NV/V | V/NV |
|---|---|---|---|---|---|
| Partial | 39.81 | 37.48 | 40.41 | 39.81 | 39.81 |
| Partial + Game | — | 1.01 | 0.13 | 0.63 | 0.63 |
| Game | 46.59 | 47.31 | 45.25 | 45.77 | 45.76 |
| Game + Slam | — | 0.06 | 0.06 | 0.09 | 0.09 |
| Slam | 10.65 | 10.68 | 10.02 | 10.25 | 10.27 |
| Slam + Grand | — | 0.07 | 0.07 | 0.07 | 0.08 |
| Grand | 2.94 | 3.41 | 4.03 | 4.09 | 4.06 |
Symmetry requires the last two columns to be identical in theory. Since the results are based on so many hands, they are nearly identical in practice, a good sanity check. For the NV/NV case, if we apportion the 1.01% of partial/game sacrifices equally, the number of partials decreases by about 1.8 percentage points and the number of games increases by about 1.2. The number of grand slams also increases in every scenario, by over a percentage point in all but the NV/NV case.
Have you considered a grand slam sacrifice? Jeff Meckstroth, partnered with Eric Rodwell, once made a non-vulnerable grand slam sacrifice of 7♠ over a vulnerable 7♥ bid, holding ♠9xxxx, against Edgar Kaplan and Norman Kay. Though he went down ten tricks, his team picked up 7 IMPs. This result so offended Edgar Kaplan that he successfully lobbied to change the laws of bridge so that non-vulnerable doubled undertricks now go 100, 300, 500, 800, 1100, etc., i.e. 300 each after the third undertrick, instead of 200 each from the second undertrick onward. But even with today’s rules, there are still some good grand slam sacrifices, enough to make sacrificing the correct bid on 1% of deals if you are faced with superhuman opponents who accurately bid all their grand slams.
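For concreteness, a quick sketch of the two doubled non-vulnerable undertrick scales (the function and its name are mine):

```python
def doubled_nv_penalty(down, old_scale=False):
    """Penalty for going `down` tricks, doubled and non-vulnerable."""
    if old_scale:
        # Pre-change scale: 100 for the first undertrick, 200 thereafter
        return 100 + 200 * (down - 1)
    # Current scale: 100, 300, 500, then 300 per additional undertrick
    return 100 + sum(200 if n <= 3 else 300 for n in range(2, down + 1))

# Down ten: 1900 on the old scale, a profitable save against a vulnerable
# grand slam (2210); 2600 on the current scale, a clear loss.
print(doubled_nv_penalty(10, old_scale=True), doubled_nv_penalty(10))
```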
Incidentally there are a few par zero deals where neither side can make anything in any denomination. Thomas Andrews has found these occur only about once every 400,000 deals. Even the most avid players may never encounter one and the set of 700,000 deals used for the present analysis did not include one.
What do actual bridge players bid?
Do real bridge players approximate the double dummy breakdown? Certainly not for slams, as previously shown in the article Slam Statistics. But stronger players do bid more games and fewer partials. The figure below was generated using several sets of Bridgemate / BridgePad / BridgeScorer (.bws) electronic scoring files, notably a large collection from the Albany and Corvallis bridge clubs in Oregon, kindly provided by Rick Garvin. All results are for pair games. Each result is binned based on the strength of the partnership that won the contract, where strength is defined as the geometric mean of the two partners’ masterpoint totals during the month the hand was played. For any given player or partnership, masterpoint totals are only a rough approximation of skill, but averaged over many players they are a reasonable metric. These figures are generated from approximately 210,000 results from roughly 17,000 deals.
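A minimal sketch of the binning, assuming per-player masterpoint totals are already looked up; the bin edges are illustrative, chosen to match the bins named in the text:

```python
from bisect import bisect_right
from math import sqrt

EDGES = [50, 100, 200, 500, 1000, 2500, 5000]          # illustrative edges
LABELS = ['0-50', '50-100', '100-200', '200-500',
          '500-1000', '1000-2500', '2500-5000', '5000+']

def strength_bin(mp1, mp2):
    # The geometric mean keeps one very experienced partner from dominating:
    # sqrt(50 * 5000) = 500, versus an arithmetic mean of 2525.
    return LABELS[bisect_right(EDGES, sqrt(mp1 * mp2))]
```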
The wc bin contains world class results taken from data compiled by Richard Pavlicek, aggregated from the Vanderbilt, Spingold, U.S. Bridge Championship, and World Team Championship. These are all team events and therefore not directly comparable to the pair data in the earlier bins. I would prefer to use results from Platinum Pairs events but I don't have the data.
The final dd bin contains the double dummy result. It might appear that the 5000+ MP partnerships reach the double dummy percentage for game contracts bid (green curve). However, the combined game, slam, and grand slam totals (blue curve) indicate this is not the full story. These strong players are stopping in game on many slam hands and remain stuck in partials on deals where game can be made about 6% of the time. Observe that there is significant improvement in all categories as partnerships approach the 500-1000 bin, i.e. roughly life master strength. Partnerships also steadily bid more slams (magenta curve) as their skill improves. Grand slams are rarely bid; until the partnership strength reaches at least 5000 MP, it is more likely that a hand will be passed out than that a grand slam will be bid.
The world class team partnerships show strikingly different behavior, driven by the premium for bidding vulnerable games in team play. They bid 9% more games and 3% fewer partials than the double dummy result. The world class partnerships also bid more slams, and particularly more grand slams, than the 5000+ MP partnerships.
Better players are not only better at bidding games and slams, they are also better at making the contracts they bid. The next figure breaks this out for each category.
Partnerships show the steadiest improvement in making games as they get stronger. The slam curve (magenta) shows interesting behavior, with partnerships in the 500-1000 MP range performing worse than those in the 200-500 MP range. Combined with the plot above, this suggests that the 500-1000 MP partnerships are bidding more slams but do not yet have the bidding sense or technique to find the additional good ones. Note: the grand slam curve is not shown because the statistics are poor.
The world class partnerships make fewer of their game and part score contracts. But this is because they are playing teams rather than pairs and therefore bid many close games which are defended by world class defenders.
The two plots above can be combined, by multiplying the fraction of contracts bid in each category by the fraction made when bid, to show the percentage of contracts in each category that are both bid and made.
The combined impact of better bidding and card play is substantial. Beginners go down in 39% of their contracts, whereas the strong 5000+ MP players go down less than 31% of the time despite bidding more high-category contracts.
The aggressive strategy of the world class players at teams yields a substantial gain in terms of additional game, slam, and grand slam contracts that are bid and made. The world class players are not poor at playing part scores; it is just that they have bid game on many promising part scores, such that the remaining part scores are relatively crappy or outright sacrifices. In particular, whereas the three level is often the killing field in a strong matchpoint game, it is quite safe to bid three over three in team events because the risk of going -300, -500, or worse is countered by the chance of being doubled into game.
Players at regionals vs. players at clubs
Are players at ACBL regionals different from players at clubs? Somewhat. The three figures below show the same results as above but for data from three big regionals (San Diego 2011, Palm Springs 2011, and San Diego 2012). They are based on approximately 88,000 results from roughly 2,500 deals.
It is notable that the 0-50 MP partnerships bid 2.0% more games than their club counterparts and make 1.3% more of the games that they do bid. This is probably a selection bias: the 0-50 MP partnerships that are not scared to play in regionals are on average the more highly skilled members of their pool. However, it is also possible that the 0-50 partnerships face an easier field at regionals, where they can play in 199er events, whereas the upper limit of the limited games at their local club may be higher.
All skill levels bid more slams at regionals, consistently 0.85–1.44% more than the club players with the 0-50 MP partnerships showing the largest increase of 1.44%. This observation supports the selection bias hypothesis above.
The 500-1000 MP and higher regional partnerships make more of the slams they bid than their club counterparts but fewer of the games they bid. This does not seem to be the result of bidding more marginal games, inasmuch as the percentage of games bid at regionals goes down by roughly the same amount that the percentage of slams increases for these players. It is likely that better defense sets more of the close games. This is not surprising because open games at regionals are nearly stripped of players below 750 MP (and hence of most partnerships even in the 500-1000 MP range) by the proliferation of Gold Rush events, which were offered at all three regionals used to generate the data.
Why don’t strong partnerships bid more slams?
It is not obvious why strong partnerships don’t bid more slams. It is easy to generate hypotheses but not clear which is dominant. The steady increase in the number of slams bid as a function of partnership strength suggests strong players are not running into a systematic limitation. In part, I don’t think the strong partnerships work hard enough at slam bidding. Most of the players I know in the 5,000–10,000 MP range do not seem to be employing methods more sophisticated than Roman Key-Card Blackwood, Jacoby 2NT and splinters, accurate control bidding, and the extra bidding space provided by the 2/1 bidding system to better describe shape and suggest extra values conducive to slam. In particular, they do not seem to be employing a comprehensive approach within 2/1 such as that outlined by Steve Robinson in Washington Standard (1996), a book with a lot of good ideas if one can get past the fact that it is typed and thus visually unpleasant to read, the manuscript having just missed the desktop publishing revolution. Nor are they playing Kickback Roman Keycard, which requires a decent amount of partnership discussion, or full relay systems, which require a very serious amount of discussion and memorization. Players may be investing more effort into competitive bidding or improved card play, or may just be focused on accumulating masterpoints, from which partnership discussion is a short term distraction.
Good players will have a higher threshold for bidding slam. Even if a matchpoint slam really has a 50% chance, guaranteed not to fail to a ruff at trick one, a 6-0 trump split, or some other catastrophe, bidding slam puts all your eggs in the bidding basket. It may be better to stay with the field in game and hope to come out ahead through superior card play: +480 achieved via a double squeeze may be just as good as +980.
Players are also subject to field drag, much as an observer’s space near a rotating black hole is dragged. Consider the level of aggressiveness appropriate to a poker game where all players are known to be following the optimal strategy. In a tight game, more conservative than optimal, the correct level of aggressiveness is below what it would be in the optimally played game, though not as conservative as the rest of your opponents; the plan will be to steal more antes. In a loose game, the correct level of aggressiveness is above what it would be in the optimally played game, though not as aggressive as the rest of your opponents; the plan is to avoid scaring them into folding when you do bet, while contributing less to the pots you will probably lose and expecting to reap a large pot when you hold the nuts. In bridge terms, why even risk a 65% slam if none of your opponents are bidding it? You might still come out ahead by outplaying them in the field contract.
The distribution of slam probabilities may also matter. Suppose the 13% of deals on which a slam or grand slam makes double dummy arise from exactly two kinds of layouts: 8% of offensive layouts which have an 80% chance of making depending on the lay of the defensive cards, and 16.5% of offensive layouts which have a 40% chance of making. In this discrete scenario it would be correct to bid only the 8% of high probability slams, assuming they can be identified through good bidding.
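These made-up percentages are chosen to reproduce the 13% figure, because double dummy analysis tallies a slam only on the layouts where it happens to make:

0.08 × 0.80 + 0.165 × 0.40 = 0.064 + 0.066 = 0.13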
The distribution of slam probabilities is surely continuous, but what does it look like? It may seem reasonable to answer this question through a double dummy simulation where the defensive cards are permuted, say, 100 times for each of a collection of fixed, randomly generated, slam oriented offensive cards. The programming is straightforward, and Bo Haglund’s DDS program optimizes this scenario through transposition table reuse. But double dummy analysis has limitations. For example, the double dummy solver always guesses a two way finesse correctly, knows when to drop a stiff king, and finds low percentage squeezes and other endplays in preference to high percentage lines that happen to fail on the given layout of the cards. For many purposes these things have little impact on what is being calculated; for example, David Bird and Taf Anthias argue that they do not significantly affect the results presented in Winning Notrump Leads (2011) or Winning Suit Contract Leads (2012). But with only one trick to lose, I think the slam distribution calculation could be significantly thrown off by the limitations of double dummy calculations, an issue that is not addressed, for example, in Irwin Landow’s Innovative Slam Bidding (2009). Software that can analyze the single dummy percentage is required, and it doesn’t seem readily available.
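The proposed simulation itself is straightforward to sketch. Here `FULL_DECK`, `build_deal`, and `dd_tricks` are hypothetical stand-ins for whatever deck representation and double dummy solver binding (e.g. one wrapping Bo Haglund’s DDS) are available:

```python
import random

def slam_make_fraction(ns_cards, declarer, strain, trials=100):
    """Fraction of random E-W layouts on which N-S make a small slam."""
    # FULL_DECK is a hypothetical list of all 52 cards.
    rest = [c for c in FULL_DECK if c not in set(ns_cards)]
    made = 0
    for _ in range(trials):
        random.shuffle(rest)                 # permute the defensive cards
        # build_deal and dd_tricks are hypothetical wrappers around a
        # double dummy solver; only their intent matters here.
        deal = build_deal(ns_cards, east=rest[:13], west=rest[13:26])
        if dd_tricks(deal, declarer, strain) >= 12:   # 12 tricks = slam makes
            made += 1
    return made / trials
```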
Software and analysis notes
Results from the Bridgemate / BridgePad / BridgeScorer (.bws) electronic scoring files were tallied using a Perl program, levelbid.pl. This program has basic command line help but is more of a personal analysis program than something like ACBLmerge. The command line used to tally the BWS files from the three regionals named one directory of BWS files per tournament, schematically along these lines (the directory names here are illustrative):
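```sh
# Directory names are illustrative; +YYMM selects the masterpoint database.
perl levelbid.pl "San Diego 2011+1112" "Palm Springs 2011+1112" "San Diego 2012+1212"
```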
The +YYMM suffix, e.g. +1112, in each directory name is not actually part of the directory name but rather an optional specification of which YYMM masterpoint database to use for the BWS files in that directory; +1112 means the December 2011 masterpoint database. Note: the naming of BWS files from tournaments is usually not related to the date, i.e. it is not something convenient like 141203M.bws.
The levelbid.pl Perl program only runs on Windows using a 32-bit Perl interpreter. It would be possible to modify it to support Mac OS X and Linux along the lines of ACBLmerge; I just don’t feel like documenting all the details at this time.
Despite the statement above, each club’s data was processed using a single masterpoint database from the midpoint of the period covered; for example, the July masterpoint database would be used for a calendar year of data. This is a bit faster than loading a different masterpoint database for each month of results and has little impact on the binning of partnerships.
The roughly 700,000 double dummy solved deals came from the ViewDDLib software package but were originally generated by Matthew Ginsberg.