Archive for the ‘Board Games’ Category

The Optimal Play
December 4, 2009

Ok, so the other day I was playing a 5 player game of Settlers with a few friends. The game got unusually close and heated toward the end, with a 9-6-6-6-6 score. We’ll call the player with 9 points “Alex.” I was sadly one of the sixers tied for second. Alex had just taken Longest Road, which catapulted him from a safe 7 to an unsettling 9 (10 points wins the game). The general consensus was that Alex had the game on his next turn if he was allowed to keep Longest Road. Unfortunately for us, he held the Longest Road by a large margin and we determined that none of us could overcome it alone. After some discussion and plotting, however, we determined that if we all pooled our resources we could grant one of us enough to steal Longest Road (turning the score into 8-7-6-6-6). Alex’s turn had just ended. What would you do?

This is a troubling situation from every player’s perspective. Alex realizes that his best chance at victory lies in us doing nothing. And although we recognize that Alex will likely win if we don’t act, we also recognize that whoever we grant the resources to will be given an enormous advantage and will likely win themselves. This outcome is no better than Alex winning. So what is the optimum play here?

Let’s define what an “optimum play” is (which was what the ensuing argument at our game table was about). Two definitions were considered in the argument: “the play that garners the individual player the greatest chance of victory” (my definition), and “the play that creates the most ‘fun’ and generally minimizes the use of any douchebaggery.” These definitions are each very difficult to assess. How can you measure how an option affects your chance at victory? How can you measure how much fun is generated by your actions? I’ve come to expect that all players in a game should always act in a way that optimizes their odds of victory, but what does “victory” mean?

Let’s switch gears and consider a simple analogous 3-player free-for-all Starcraft game. Really, any multiplayer FFA strategy game will do for this analogy. Player 1 is currently winning, player 2 has about 70% the strength of player 1, and player 3 has about 40% the strength of player 1. Without cooperation, players 2 and 3 are essentially doomed, but if they work together they are 10% stronger than player 1. Player 2 should want cooperation the most, as he stands the best chance to come in first once player 1 is eliminated. Player 3, meanwhile, has little chance at first place, but can choose between second and third (second if he cooperates, third if he doesn’t). The question then is “what is the difference in value that player 3 puts on getting second versus third?” Interestingly, I contend this depends on the particular game and player. In Starcraft, I personally would much rather be second than third. In Settlers, I personally value all non-first places about equally (if you’re not first, you’re last).

If the trailing players place significantly different values on the various non-first positions, then we can expect them to behave in a way that maximizes their odds of attaining the highest place they can; if they care only about first place, they will behave in a way that maximizes their odds of attaining first place. If our Starcraft player 3 cares about non-first positions, then their behavior is easy to predict: they will cooperate with player 2. If they only care about first place, they will behave much more erratically. They might give up hope and just cripple a player based on personal vendettas, they might ally themselves with player 1 and backstab them when they’re weak, or they might do the same to player 2 in a desperate attempt at first place. In reality, a player probably experiences some mix of these two victory perspectives, but the mix is probably more or less unique to the specific game and player.

Now that we have defined “victory” I will invoke a concept from basic economics and claim that any “rational” player will act in a way that optimizes their odds of victory. Sure, they could do otherwise. I’ve played with players who get bored of a game and decide to help the winning player just so the game ends faster. I’ve played with players who simply don’t understand the game and make nonsense moves. I’ve played with players who just like “the pretty colors” and make their moves purely on whims. These actions, however, are not rational in the context of the game. In a 6 player game of Monopoly all players could elect to sign over all property and money to a single player and end the game instantly (which in all fairness would probably be the best way to go about playing that horrid game), but do you consider this a “game of Monopoly?” Certainly not. Any reasonable game requires a push and pull of opposing wills struggling for their individual victory. Without that you simply have madness.

So finally, optimal play: “the play that garners the individual player the greatest chance of victory.”

Ok, let’s come full circle to our Settlers game. If we assume that the players put a reasonable amount of value on non-first positions, then we can assume that they would not cooperate to dethrone first place. Why? Because the score would then be 8-7-6-6-6, which dooms the majority to third place or worse. If Alex were allowed to win, then the remaining players (6-6-6-6) would each have a much better shot at second place.

But this was not the case. Apparently, we place high value on first place and very little on even second place in Settlers. We all conspired to grant one of us (myself, as it turned out) nearly every resource available on the table in order to outbuild Alex. This is of course the expected outcome, because if Alex is allowed to take first then it definitively denies first to all other players. Even by widening Alex’s gap to victory from 1 point to 2, the majority gives itself just a little more breathing room to get to first.

Sadly we’ll never know how the game would have ended. Everyone felt extremely unhappy about this massive exploitation of the rules and we all gave up after the decision was reached. It’s unfortunate that the nature of the game induced this kind of behavior. An optimal play should never produce such a massive feeling of “brokenness.” This is a failing in the design of the game, and surprisingly I’ve seen it happen many times during my Settlers career. Tip for game designers: don’t encourage massive acts of collusion. It just makes us upset.

What’s the Score?
November 7, 2009

This past Wednesday was a day full of sports for me. In the afternoon I had my first opportunity to learn and play the sport of curling. Then, later that evening at home, I caught the closing moments of Game 6 of the 2009 World Series. All of this exposure to sports got me thinking about scoring systems, something that underlies both sports and board games. Closely linked with this concept are two of the most important mechanics in any type of game: the ending condition and the winning condition.

To avoid confusion, let’s define those terms. (Sorry to any scholars who have already done so; I might not get these precisely correct, but they will be accurate in the ways that I use them in this discussion.) An ending condition is the set of criteria by which a game ceases to be played by its players and no further adjustment to scores may take place. A separate but related concept is the sports world’s overtime, in which the ending condition is replaced by another so that further scoring may take place. A winning condition is the set of criteria by which the winning player of a game is identified. Again, there is a related concept, the tie-breaker, in which secondary scoring criteria are examined beyond the primary scoring criterion due to a tie in those primary scores.

Now, let’s take a survey of some games and these conditions:

Baseball

  • Ending: play of nine innings completed
  • Winning: more runs scored than the opposing team

Tennis

  • Ending: a player has satisfied the winning condition
  • Winning: take the majority of sets to be played (either 2 in best-of-3 or 3 in best-of-5)

Basketball

  • Ending: regulation time (48 minutes in the NBA) elapsed on the game clock
  • Winning: more points scored than the opposing team

Curling

  • Ending: play of eight ends completed
  • Winning: more points scored than the opposing team

Portal

  • Ending: play through all levels
  • Winning: satisfy the ending condition

Settlers of Catan

  • Ending: a player has satisfied the winning condition
  • Winning: accumulation of 10 victory points at one time

Power Grid

  • Ending: a player builds his/her (X)th city, where X varies by the number of players
  • Winning: ability to power the most cities

Imperial

  • Ending: a Great Power attains at least 25 Power Points, reaching the x5 region of the Power Track
  • Winning: highest sum of multiplied bond value and cash-on-hand

Puerto Rico

  • Ending: cannot refill the colonist ship OR run out of victory point chips OR a player fills his/her 12th building space
  • Winning: highest total of victory points

Monopoly

  • Ending: a critical mass gets tired of playing
  • Winning: who knows?

Okay, so the last one isn’t quite right, but I’ve never played any other type of Monopoly game and I didn’t feel like sifting through the rules.

There are many more I could name, probably some with interesting mechanics, but these will do for now. The first thing to note is that the ending condition and the winning condition are often closely linked. This has a number of advantages. Take tennis and baseball as the two opposites in this case. In tennis, the ending condition goes hand-in-hand with the winning condition. This has the neat property that a player hasn’t won until he/she guarantees that his/her opponent cannot win. In other words, a US Open finalist engaged in the championship match can still hope to win even when he’s down two sets and trailing 5-0 in the third set. Don’t get me wrong, this kind of turnaround is highly unlikely, but there is still space for this player to outscore his opponent, and so the thinking goes that the game should continue until it is literally impossible for him to win. This is a nice property that doesn’t seem valuable at first glance, but one you sorely miss when you’re playing a game without it.

This brings us to baseball, which is, in my opinion, a scoring mechanics disaster. (I pick on baseball, but most sports are guilty on this count: replace baseball and its terminology with the same pertaining to basketball, golf, football, soccer, and, yes, even curling.) The disaster comes from the frequency of the so-called “runaway leader problem” that you so often observe. This is when one player has acquired such a large lead that it is insurmountable by the opponents within the remainder of the game. Now, you complain (or should!) that many games have such a problem. The tennis example above comes into play again. While the described situation still allows for the trailing player to win, the actual possibility of this happening is about as close to zero as it is for a baseball team facing an 18-0 run deficit in the first inning. Where tennis scores some brownie points, then, is in its ability to hasten the end of the game. The tennis match continues only as long as the losing player is able to mount some sort of comeback; meanwhile, the baseball spectators are stuck waiting around for eight more innings for whatever might happen. (I’ll admit that there is a re-ignition of interest in these lop-sided scoring cases, a sort of “how high can it go?” interest.) But we see that if a game’s ending condition and winning condition are “in tune”, we can develop a better game-playing experience.

Now that I’ve preached about how important it is to link the ending and winning conditions, I’m going to turn that on its head and claim that an even better game would de-couple them … to a point. Prime examples of this come from the family of eurogames. Let’s take a 5-player Power Grid game for this analysis. The ending condition is the building of the 15th city, but the winner is the player who powers the most cities. In many games, these players might be one and the same, but the interesting part is that they need not be. This leads to all sorts of fun jockeying for position. Compare this with Settlers of Catan, which is an example of a game with closely linked ending and winning conditions. Yes, the game doesn’t drag on unnecessarily even in the runaway leader case, but its end is altogether less interesting.

So, what’s the secret recipe? As with a lot of things in life, the hybrid approach seems best: I would argue it’s having a correlation between the game’s ending condition and winning condition that gives us the best game-playing experience.

European Vs. American Style Board Games
October 29, 2009

In a previous post I touched on some differences between so-called “European style” board games and “American style” board games. There are some fascinating differences between the two genres and although I will try to be as objective as possible in this analysis, I will tell you now that I find the European style to encompass much better design choices in almost every respect.

Let’s take a few games to exemplify the genres. We will pick what are probably the two most popular classic American board games to represent “American style”: Risk and Monopoly. For “European style” we will consider Settlers of Catan and Imperial (two games featured previously on this blog).

So let us begin. One of the most prominent mechanics inherent to European games is that players are never thrown out of the game before it ends. Anyone who has played Monopoly enough has experienced the irritation of being the first player to go bankrupt. It could be hours before the game is over and you can reunite with your friends, so your best bet is to go cry alone in the corner while you wait. The same argument can be made for Risk: someone is bound to be eliminated sooner or later and stuck twiddling their thumbs while the rest of the game grinds to an end. THIS IS BAD DESIGN. The point of a game is to have fun, not alienate your friends. If 30-40% of the man-hours put into a game are spent doing nothing, then we have some serious design issues. Settlers of Catan and Imperial, on the other hand, have absolutely no mechanic for ejecting a player (in fact, if a player leaves for any reason it will seriously upset the game balance). This guarantees that all of your buddies will be hanging in to the bitter end and will all, technically, have a chance at victory. Now doesn’t that sound like more fun?

Now it’s time for a crash course in feedback systems. A feedback system is any system that can influence itself. The classic example is a thermostat system. When a room gets too hot, the thermostat senses it and controls the flow of cold air into the room. When the room gets too cold, the thermostat sends in a flow of hot air. The net result is a room that is always nice and comfortable. This particular system is a negative feedback system because when the system strays too far from the desired state (too hot or too cold) it is pushed back toward that state. As you can see, negative feedback systems are inherently very stable. The room cannot get too hot or too cold. Imagine now the analogous positive feedback system: the hotter the room gets the more hot air is pumped in, and the colder it gets the more cold air is pumped in. Obviously this makes no sense. It’s an extremely unstable system that will have you spontaneously combusting or freezing to death in a matter of minutes. Positive feedback systems are always unstable.
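To make the distinction concrete, here’s a toy numerical sketch of the thermostat analogy (the target temperature and gain are arbitrary illustration values, not anything from a real game or thermostat):

```python
# Toy sketch: the same 5-degree disturbance under negative vs. positive feedback.
def step(temp, target=70.0, gain=0.1, negative=True):
    """Advance the room temperature by one time step."""
    error = temp - target
    # Negative feedback pushes back toward the target;
    # positive feedback pushes further away from it.
    return temp - gain * error if negative else temp + gain * error

temp_neg = temp_pos = 75.0  # the room starts 5 degrees too warm
for _ in range(50):
    temp_neg = step(temp_neg, negative=True)
    temp_pos = step(temp_pos, negative=False)

print(f"negative feedback settles back toward the target: {temp_neg:.1f}")
print(f"positive feedback runs away:                      {temp_pos:.1f}")
```

With negative feedback the disturbance dies out; flip the sign and the same small nudge snowballs, which is exactly the runaway-leader dynamic discussed next.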

Ok, let’s apply the concept of feedback to our games. Monopoly is a great example of positive feedback (as is capitalism in general). The rich get richer and the poor get poorer. The problem with this in Monopoly is that whoever gets an early lead is likely to have more money to weather disaster and invest more. This leads to more money from more investments, which leads to more money, etc. If you fall behind early on, you’re pretty much screwed by the opposite reasoning. Again, the same model applies to Risk. If you get an early lead by owning a whole continent or two, you will have more units than your opponents, which you can use to capture even more territory and get even more units. THIS IS BAD DESIGN. Basically, this mechanic imposes winners and losers on the game right from the very beginning. The winner is usually going to be someone who has been winning the whole time. Hopelessness is not enjoyable.

European games, on the other hand, tend to impose negative feedback. That is, the further ahead a player gets, the harder it is for them to stay ahead. In Settlers, not only does the winning player face discrimination and even trade embargoes from other players, but they will also likely be the one consistently dealing with the robber and losing cards when 7’s are rolled. In Imperial, the better your country is, the more attractive it becomes for other players to steal it from you. In fact, the winning player is going to be the one in constant fear of losing his winning status. Negative feedback is extremely important in any game system because it keeps outcomes unpredictable and keeps all players engaged in the game.

There are many other factors at work here, but these two are the big ones: complete participation and negative feedback are the biggest identifiers of a European game over an American game. They are two obvious choices for a better design and ultimately produce better games. It’s a shame that most Americans have grown up knowing little more than Risk and Monopoly, and tend to write off board games as a viable form of entertainment as they get older. I can’t say I blame them; I mean, after all, these are goddamned terrible games.

When to Buy: the Math in Imperial
October 27, 2009

Imperial is a 2006 game from Mac Gerdts that rethinks the classic Diplomacy. A layman (or, in this case, even a true board game geek) would have difficulty telling the maps incorporated in the two games apart. What I like about Imperial is that it takes the too-interactive, too-chesslike Diplomacy, adds a realism-contributing economic aspect, and comes out with something that requires even more social and mental depth.

A critical component of the Imperial economy is “bonds”. Players seek to acquire bonds because they grant both points and influence within the game. For each of the six powers in the game, eight distinct bonds are available. Bonds feature a face value and an interest value. For instance, the lowest-valued bond is the 1-bond, so called because its interest value is $1 million. What this means is that for an initial investment (i.e. buying the bond) of $2m, you receive $1m payments each time a payout is to be had. This occurs indefinitely, so you have the possibility of making back your initial investment and more as the game goes on. When you’re rich enough, you might be able to afford the highest-priced bond, the 8-bond. As you might guess, its interest payment is $8m on a face value of $25m. Notice that the one-time interest payments as a fraction of the initial investment become smaller the higher the bond’s face value: $1m/$2m = 50% while $8m/$25m = 32%.
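As a quick sanity check on those percentages, here’s a small sketch that computes the per-payout yield of each bond. Only the $2m face value of the 1-bond and the $25m face value of the 8-bond are stated above (plus the $20m cost of the 7-bond further down); the remaining face values are filled in from my own copy of the game and should be double-checked against the rules:

```python
# Interest as a fraction of purchase price for each Imperial bond.
# Mapping of interest value -> face value (cost) in $m. Values other than
# the 1-, 7-, and 8-bonds are assumptions, not taken from the post.
face_values = {1: 2, 2: 4, 3: 6, 4: 9, 5: 12, 6: 16, 7: 20, 8: 25}

for interest, cost in face_values.items():
    print(f"{interest}-bond: pays ${interest}m on a ${cost}m investment "
          f"= {interest / cost:.0%} per payout")
```

The yield falls from 50% for the 1-bond to 32% for the 8-bond, matching the figures above.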

Bond interest payouts not only provide the income stream described above, but also the points needed to win the game. At the game’s end, you multiply the interest value of a bond by its power multiplier, a multiplier determined by how well a nation did in the game. Thus, you would like to be holding all of your high-valued bonds in the nations that did well, and low-valued bonds or, better yet, no bonds in the nations that did poorly.

However, the opportunities you have to buy bonds are limited, and the cash available at that instant is limited even more, so the question of which bond is the “right” one to buy will cross your mind several times throughout a game. In particular, because the power multipliers will not be resolved until the game is already over, it’s important to forecast a nation’s finishing position to decide how much to invest. Just as investing too little is problematic because you miss out on multiplying more of your investment by those power multipliers, investing too much is problematic because it will stunt your ability to “grow” your money as the game goes on; in other words, what you spend this turn isn’t available next turn.

With that groundwork laid, it’s time to get to some pictures. First, a chart to capture what I have been discussing and provide some math that we’ll need.

[Figure: Net payoff of bond purchase for varying multipliers.]

This chart shows the net payoff from investing in varying bonds (the vertical axis). Of course, the net payoff is conditional on the power multiplier at game end (the horizontal axis). The net payoff is defined as the multiplier points less the original bond cost (face value). Thus, we see that the 8-bond paying off with the x5 multiplier yields 8×5=40 gross points. From this, we subtract the initial amount paid to buy the bond, 25, to get a net payoff of 15, the highest offered in the game.

What should also grab your attention, though, is that you can accomplish the same thing with the 7-bond (7×5−20=15). This begs the question: why buy the 8-bond? The answer is invariably “because the 7 has already been bought”.* As mentioned previously, it’s better to hold on to your money if you can. (You can keep it to add to your score at game’s end, so it never goes to waste.) Thus, in a perfect world, we absolutely make sure we have our hands on the 7-bond in x5 countries, the 5-bond in x4 nations, the 3-bond in x3 nations, and no higher than a 3-bond in any nations that fall short of x3. Anything else is gravy as long as you’re not taking a loss (the red boxes in the chart).
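If you want to reproduce the chart’s arithmetic yourself, here is a minimal sketch. As before, the face values other than those quoted in the text are assumptions taken from my copy of the game:

```python
# Net payoff = interest x multiplier - face value (the chart's arithmetic).
# Face values other than the 1-, 7-, and 8-bonds are assumed, not from the post.
face_values = {1: 2, 2: 4, 3: 6, 4: 9, 5: 12, 6: 16, 7: 20, 8: 25}

def net_payoff(interest, multiplier):
    """Gross points (interest x multiplier) minus the bond's purchase price."""
    return interest * multiplier - face_values[interest]

for interest in face_values:
    row = [net_payoff(interest, m) for m in range(6)]  # multipliers x0 through x5
    print(f"{interest}-bond: {row}")

# The two cases worked in the text:
assert net_payoff(8, 5) == 15
assert net_payoff(7, 5) == 15
```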

Of course, the world’s not perfect and you’re never going to have your money in all the right places. There are too many other players interested in seeing that you don’t. But it can pay to come close and to know what you’re shooting for, although this will require taking some probabilities into account.

During a game in progress, you’ll only be able to estimate the game-end multipliers probabilistically. The relative probabilities of each outcome dictate which bonds to shoot for. If you think x4 is likely with an outside shot at x3, then it pays most to aim for the 5-bond or the 4-bond.

[Figure: The progression of net payoffs for each bond.]

Since the probabilities will always be changing and the math is complicated to do in-game in the first place, it helps to generalize which bonds provide the best bang for the buck. The chart above attempts to capture this. At the start of the game, it assumes the following probabilities of a nation finishing at a given multiplier:

x0   0.00%
x1   5.00%
x2  25.00%
x3  33.33%
x4  20.00%
x5  16.66%

This suffices for the beginning of the game, but as the game progresses, nations will reach higher multipliers, thereby nullifying the probability that they will finish at a previous level—they’ve already passed it! For this, we make a simplifying assumption: the probability mass of the levels already passed is redistributed proportionally among the remaining possible outcomes. An example will help illustrate this. Once we reach x2, the chance of finishing at x1 is now 0. Thus, we redistribute the original 5% chance of finishing at x1 among the remaining outcomes in proportion to their original probabilities. The x2 outcome gains an extra (25%/95%)*5% chance, x3 adds (33.33%/95%)*5%, x4 goes up (20%/95%)*5%, and x5 increases (16.66%/95%)*5%. For each bond, we multiply the payoff at each level by the probability of realizing that payoff, and we generate the expected payoff for each bond.
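Here is a rough sketch of that expected-payoff calculation. The starting probabilities are the ones listed above and the redistribution rule is the proportional one just described; the bond face values beyond those quoted in the post are, as before, assumptions:

```python
# Expected net payoff per bond, given how far a nation has already progressed.
face_values = {1: 2, 2: 4, 3: 6, 4: 9, 5: 12, 6: 16, 7: 20, 8: 25}  # partly assumed
start_probs = {0: 0.0, 1: 0.05, 2: 0.25, 3: 0.3333, 4: 0.20, 5: 0.1666}

def redistributed(probs, reached):
    """Drop multipliers already passed and reassign their probability mass
    proportionally among the outcomes that are still possible."""
    possible = {m: p for m, p in probs.items() if m >= reached}
    total = sum(possible.values())
    return {m: p / total for m, p in possible.items()}

def expected_payoff(interest, probs):
    """Probability-weighted net payoff (interest x multiplier - face value)."""
    return sum(p * (interest * m - face_values[interest]) for m, p in probs.items())

for reached in range(1, 6):
    probs = redistributed(start_probs, reached)
    best = max(face_values, key=lambda i: expected_payoff(i, probs))
    print(f"once a nation has reached x{reached}, "
          f"the best expected net payoff comes from the {best}-bond")
```

With these assumed face values, running the sketch reproduces the qualitative story of the chart: the 5-bond offers the best expected payoff early on, and the 7-bond takes over once a nation has reached the x4 region.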

As you would expect, as we become more certain of reaching a given level, the expected net payoff increases. Thus, in the chart above, the right edge of each color band shows the expected payoff assuming a given multiplier position. The dark blue is the result at the start of the game. Here we find that the 5-bond provides the best value for money, as its blue bar reaches furthest right. The 5-bond falls about even with the 6-bond once we’ve reached the green region and sit at the x3 multiplier. Once we’re at x4, the 7-bond becomes the best, and it maintains this position even as the 8-bond evens up with it in the x5 region. This is precisely what we found earlier.

Thus, we can see that the 5-bond is a very safe early bond to own in any country. It provides an optimal value in the x3 region, the optimum at x4, and a good return even at x5. However, once a country has attained x3, it’s best to focus on the 7-bond if possible. We also see the losses that can be sustained by investing in the 8-bond when it falls short of the x5 multiplier.

__________

* There are other gameplay factors at work here such as trying to take control of a nation. These effects are ignored in this analysis.

Settlers of Catan And Its Growing Popularity
October 26, 2009

Perhaps it’s just me, but it seems as though the board game Settlers of Catan has reached new heights of popularity with the masses in the last couple of years. I certainly agree that Settlers has merit and is worthy of its status as an increasingly popular and well-known family game. However, I can’t help but wonder why it has won out over many other similar European-style games to become the American household favorite.

Before I go any further, let me clarify what I mean by a “European-style” boardgame. For some reason, the boardgaming tradition seems to be much more alive in Europe than in America. In fact, Germany (where Settlers was designed and initially produced) produces more boardgames per capita than any other country. Typically these games adhere to some fundamentally different design choices than the common American-style boardgame (such as Risk or Monopoly). European-style boardgames normally do not eliminate players mid-game, they usually incorporate some form of negative feedback, and it is usually very difficult to predict exactly who will win, whereas American-style games often lack these traits. (I will cut this discussion short for now, as it probably merits its own post.)

In college I was exposed to many of these European-style boardgames in addition to Settlers, including Imperial, Power Grid, and Puerto Rico (I guarantee the average American has not heard of these games). Out of these four, Settlers is certainly the worst, in my humble opinion. So why has it become the “popular” European game here in the States? I suppose marketing could be at least partially to blame, but if we consider only the game design in our argument, what can we come up with?

Randomness. If you’ve played Settlers you are definitely guilty of praying for that 4 to come up just when you need it. And if it does, then awesome! You just garnered yourself 3 more victory points for the win. Settlers is by far the most chance-oriented game on our sample list of European boardgames (in fact, you might be able to argue that the other three have no chance elements at all). Consider our sample hit American boardgames: Risk and Monopoly. These games are almost all chance. Do Americans gravitate toward games that are more random? I suppose it’s possible. American culture is much less steeped in boardgaming, and it’s much more likely that your opponents will range greatly in skill from the completely hapless to the uber-leet. A large chance element is the great equalizer in any game. Perhaps our culture puts some special value in allowing anyone to conceivably win.

Simple rules. This is probably the real kicker. I must be a real exception to the population in that I actually enjoy learning new games. Most people seem to think of it as a chore. Consequently, simple games get a much better reception. “It’s easy grandma, you just roll the dice and then you collect your cards. Oh, and you can trade cards too.” Not that I’m an advocate for needlessly complicated mechanics (I’m a firm believer in the KISS principle), but sometimes a little complexity can be a good thing.

These two aspects of Settlers make it ideal for the newbie boardgame enthusiast to break into European-style boardgaming, but they cause the game to seriously lack content. When I win this game, I don’t feel like it was because I made great decisions, but rather because we rolled 9 threes in a row.