About the authors
Dan Eastwood is a biostatistician at the Medical College of Wisconsin who has been playing Netrek sporadically for about three years.
Zachary Uram studied physics at Carnegie Mellon University and now works in the IT field. He’s been playing Netrek nearly continuously (with only a one-year hiatus) since 1994. He has played in numerous clue games (bronco and hockey) and countless pickup games. Zach has played all the Netrek variants: bronco (classic Netrek), hockey, sturgeon, chaos, and paradise. He knew Terrance Chang and played physically (on the console!) on the very first bronco server, bronco.ece.cmu.edu. CMU Netrek teams routinely won world championship titles and included some of the greatest Netrek players of all time, such as Bert Enderton and Erik Lauer; their starbasing skills, and Erik’s general cluefulness and skill in directing a Netrek team, are legendary.
In November 2008, Dan wrote a review of Netrek, a simple online multiplayer game (and the first!) whose best dynamics far exceed the apparent simplicity of the game itself. There is an important lesson here, and Dan and Zach spent much of their Thanksgiving 2008 vacations thinking about it. This post is not meant to be a complete analysis by any means, but hopefully it will serve to introduce a number of topics we hope to discuss in greater detail somewhere down the line. There are three basic aspects of Netrek that we want to describe: Player versus Player, Team versus Team, and Resource Allocation.
Player versus Player
One-on-one, ship-to-ship combat (dogfighting) is perhaps the most basic component of Netrek. Players fight each other with phasers (phased energy pulse beams – think lasers) and photon torpedoes, trying to inflict enough damage to destroy the other ship first. If a player and his opponent kill each other in a dogfight, they are said to have “mutualed” each other. There is an aspect of Point Objective Games here, with the added complexity that shields regenerate and ships repair. This complexity aside, it comes down to a contest of skill between players, rather like a Chess match.

Chess has a rating system, Elo scoring, which describes the past performance of a player and (ideally) has some usefulness in predicting the winner of a match between two players who have never met before. Elo scoring uses a scale where a 200-point difference between player ratings is interpreted as roughly a 75% chance that the higher-rated player will win (assuming the ratings accurately represent player skill). Chess is considered a deep game, with a very great range of skill between the worst and best players, and this depth can be measured (in part) by the range in ratings. Beginning Chess players might have a rating of 800-1000 points, while World Championship players have ratings of 2500 or more (Go has even greater depth by this measure). Netrek also has considerable depth of player skill, and it would be interesting if this sort of rating system could be implemented to measure it.

The Netrek server itself has a rating system it uses to track player rankings. In order to advance from a lower rank to a higher one (the lowest rank in Netrek is Ensign and the highest is Admiral), a player must meet certain conditions:
- Minimum Defense rating
- Minimum Hours played
- DI (Damage Inflicted) rating – a rough metric based on how much bombing a player has done, how many army-carrying players they killed (dooshed), time spent in a starbase, plus how many planets they took. It is interesting to note that this metric is time dependent: the faster you inflict damage on the opposing team, the faster your DI advances. The underlying server code actually uses what amounts to a derivative (damage inflicted per unit time) to track this.
- [Overall] Rating = Offense + Bombing + Planets ratings combined.
Mathematics-based measurement of player skill in games is one of those topics we intend to discuss at greater length in future posts.
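As a concrete illustration, the Elo expected-score formula mentioned above can be sketched in a few lines of Python (a minimal sketch: the 400-point scale factor and K-factor are standard Chess conventions, not anything Netrek-specific, and the function names are our own):

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score (win probability) of player A against player B
    under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating: float, expected: float, actual: float, k: float = 32.0) -> float:
    """Update a rating after one game: actual is 1 for a win, 0 for a loss,
    and 0.5 for a draw (or a 'mutual', in Netrek terms)."""
    return rating + k * (actual - expected)

# A 200-point gap gives the higher-rated player about a 76% expected score,
# close to the 75% rule of thumb quoted above.
print(round(elo_expected(1700, 1500), 2))
```

A Netrek dogfighting ladder could, in principle, apply the same update after every one-on-one kill, treating a mutual as a draw.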
Team versus Team
This is where the game of Netrek really shines. If we could measure the skill of a Netrek team with a sort of Elo score, the way we might measure individual skill, we’re pretty sure the depth of Netrek would approach that of Chess. Tournament mode (T-mode) play in Netrek has an underlying mathematical structure that is actually fairly simple, and we want to deconstruct the games and point out where basic mathematics applies to team play.

Suppose we have the following simple game: 20 coins are laid out on a table, 10 showing heads and 10 showing tails. There are two players, one represented by heads and the other by tails. Each turn, both players choose one coin and flip it (with a 50/50 chance of either result), the object being to make all the coins show their own side. Obviously the heads-player will select a coin showing tails and flip it, hoping it comes up heads, and the tails-player will do the opposite. This game is essentially a discrete random walk: on every turn (one flip for each player) there is a 25% chance a player will gain a coin, a 25% chance they will lose a coin, and a 50% chance that things stay the same. (The game might also be stated as a Markov chain.) This is, in concept, very close to a completely random T-mode Netrek game.

There have been experiments with different types of scoring systems in Netrek clue games. One such experiment was the WNL (World Netrek League), which used continuous scoring (cscore), giving teams an incentive to take planets often: if the game was otherwise close and neither team had gained more than 2 planets over the other, the winner was determined by a final weighted score factoring in how fast each team took (and retook) planets. The first and longest-running Netrek league was the INL (International Netrek League).
Due to a decline in the player base there have not been INL games since 2003, but there have been several draft leagues, and there are still bi-monthly clue games.

A purely random game like the coin game above could go on for quite some time (from this starting position the expected length works out to about 200 turns, and individual games can run far longer). But suppose there were some element of skill involved, so that one player (let’s say heads) is a little more likely to get the desired result than the other. Now perhaps each turn there is a 35% chance the heads-player will gain one coin, a 20% chance the heads-player will lose one coin, and a 45% chance that things stay the same. Out of every 20 turns we would expect heads to gain 7 coins (35% of 20) while losing 4 (20% of 20), for a net gain of 3 coins. Since heads needs 10 net coins to win, we should expect the heads-player to win in about 67 turns, on average.

Netrek takes this same mathematical structure and adds a lot to it (players of varying skill, a variety of ships and planets, etc.). We wonder whether it is possible to measure the “team playing” skill of players in the same way Elo scoring might measure individual dogfighting skill. This would be rather complicated, and likely impractical, but it is interesting to think about.
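Both expected game lengths are easy to check with a quick Monte Carlo simulation (a minimal sketch of the coin game above as a lazy random walk; the probabilities and function name are ours):

```python
import random

def play_coin_game(p_gain: float, p_lose: float, start: int = 10, total: int = 20) -> int:
    """Play one coin game as a lazy random walk; return the number of turns
    until one side owns every coin (heads wins at `total`, tails wins at 0)."""
    coins, turns = start, 0
    while 0 < coins < total:
        r = random.random()
        if r < p_gain:
            coins += 1          # heads-player gains a coin
        elif r < p_gain + p_lose:
            coins -= 1          # heads-player loses a coin
        turns += 1              # otherwise nothing changes this turn
    return turns

random.seed(42)
games = 10_000
fair = sum(play_coin_game(0.25, 0.25) for _ in range(games)) / games
skilled = sum(play_coin_game(0.35, 0.20) for _ in range(games)) / games
print(f"fair game:    ~{fair:.0f} turns on average")     # theory: 200
print(f"skilled game: ~{skilled:.0f} turns on average")  # theory: about 67
```

Even a modest skill edge (35% versus 20%) collapses the game length by a factor of three and makes a heads victory nearly certain, which is exactly the point about unbalanced Netrek games.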
Resource Allocation
This is the final aspect we want to write about today, and the one that leads to the wonderful complexity of team play in Netrek (it will also be the least mathematical discussion). Resources in this game are not equal; the team with more planets has greater potential to gather armies and try to take planets. Having more planets than players means it is not possible to defend every planet equally. Players are not equally skilled, but have varying levels of ability at individual and team play. Finally, controlling a greater number of planets increases the distance the leading team must travel to front-line combat, leaving players out of position for longer periods of time (though starbases help).

This last part has an effect not seen in most games: the closer one team is to winning (in an otherwise balanced game), the harder it is to successfully take more planets, because the losing side can respond to defend its planets much more quickly – it tends to make the game self-balancing. In “clue” games with a fairly equal distribution of clued players on each side, an outright victory is rare, so most games are given a set time limit, and the team controlling the most planets when time runs out is declared the winner. This is entirely consistent with Netrek being a self-balancing game.

One common complaint new players make is how hard it is to actually win a T-mode game. There are many reasons for this, certainly more than we have discussed, but we would propose the following: in a really balanced game with equally skilled players on each side, it may be nearly impossible to actually win, either by Genocide (taking all planets) or Timer (holding all but three planets for ~20 minutes). Therefore, most wins probably occur in unbalanced games.
Netrek offers a very rich strategic environment in which teamwork can be leveraged and realized at a high level under constantly shifting game circumstances. Dogfighting provides a fun tactical experience, and we should remember that dogfighting is the essence of Netrek: to take a planet you must first get kills, to defend a carrier you must engage an ogger, to destroy a starbase you must ogg it, and to defend a planet you must engage a carrier and his escorts.

Space control is another aspect of Netrek that can be analyzed mathematically. There is a correlation between the control of space and the rates at which planets are lost and gained. It would be interesting to construct a phase space of the Netrek universe consisting of all known observables and watch that space evolve through time. Also, the highly fluid nature of planet takes (taking enemy planets) and their close dependence on dogfighting, space control, Clue Rating, and timing make Netrek an excellent candidate for Bayesian statistical methods. One could construct a Bayesian inference engine to generate a probability model that guides the time evolution of our Netrek phase space.

There is an interesting research paper, “Multiple Roles, Multiple Teams, Dynamic Environment: Autonomous Netrek Agents” by Marcus J. Huber and Tedd Hadley, which explores autonomous agents in Netrek. It should be noted that Netrek has bots (autonomous agents) controlled by the server; while they do not possess higher-order clueful knowledge, they have instantaneous reaction time and could be programmed to coordinate with each other in perfect unison. This would be highly effective in base ogging (ships teaming up to attack an enemy starbase). It would be nice to see a Netrek expert system with a rich decision matrix and advanced AI (artificial intelligence) heuristics matched against a team of the best Netrek players.
This echoes the man-versus-machine battles such as Garry Kasparov versus Deep Blue, the IBM chess supercomputer. This could be another area of research, so academia, take note: Netrek has much to offer.
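As a small taste of what such a Bayesian inference engine might look like, here is a minimal sketch (entirely our own toy model, not anything from the Netrek source): a Beta-Bernoulli update that refines an estimate of the probability our team holds a contested planet as takes and retakes are observed.

```python
def beta_update(alpha: float, beta: float, held: bool) -> tuple[float, float]:
    """Bayesian update of a Beta(alpha, beta) prior on the probability
    that our team holds a contested planet, after one observation."""
    return (alpha + 1, beta) if held else (alpha, beta + 1)

# Start from a uniform prior Beta(1, 1): no idea who controls the planet.
alpha, beta = 1.0, 1.0
observations = [True, True, False, True, True]  # hypothetical take/retake log
for held in observations:
    alpha, beta = beta_update(alpha, beta, held)

# Posterior mean estimate of the hold probability: 5 / 7.
print(f"P(hold) = {alpha / (alpha + beta):.2f}")
```

A real engine would track such a posterior for every planet and feed the estimates into the time evolution of the phase space described above, but the update rule itself really is this simple.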