I have a small band of people who have occasionally thrown LAN parties for various games. One of these games is AssaultCube, the free-and-open-source first-person shooter game. The LAN party is structured much like any other - I run a server on my laptop, and the three of us play.
In one sitting, we may play multiple games of AssaultCube. Each game is a combination of a map and a gamemode, and the set (and order) of games we play are determined by a “map rotation” file.
Oddly, though, the map rotation is fixed. The server ships with an example map rotation file, but it’s just a static file - if we run the server again on another day, we’ll play the exact same sequence of games.
A little bit of Googling showed that, generally, the map rotation is chosen by hand. This is fine, I guess, but there are 36 built-in maps and 10 game modes (that we play), leading to 360 possible combinations to choose from. And we’ve played over 200 games together. Doing this by hand would’ve been tedious. I’d be liable to just choose the same maps over and over again, so we’d end up playing in a rut and never exploring all of the maps.
So I wrote a program to generate mapfiles. The first iteration was to choose the map and game mode randomly. This worked fine, in the sense that we did a lot of different maps. But it turned out that a lot of maps weren’t very good - for example, ac_power is a massive map, and is terrible for games with only three people. We spent most of those games wandering around the map looking for other players.
I also set up the server to record demo files, in case I later decided to analyze them for anything. I encapsulated everything in a short script that would generate a new map rotation and start the server:
```
python3 gen-maprot.py > tmp-maprot.cfg
./bin_unix/linux_64_server -mlocalhost -rtmp-maprot.cfg -Wdemos/
```
On the path to make the map rotations more “fun”, I added a part of the script that derates a map based on the map’s filesize, assuming that the filesize is proportional to the size of the map because I didn’t have a handy parser for the map file format.
This assumption turned out to be largely false, because some large maps are sparse (like ac_power) and some small maps have lots of little details (like ac_lainio). But short of writing a map file format parser, this was the best I could do.
Another problem was that, simply by chance, we’d sometimes play a map twice in a row, and some maps came up a lot more often than others. The server had been recording demo files for every game we played, and I had a script dumping them into a sqlite database, so I had the map rotation script derate maps based on how many times we’d played them in the past. Additionally, within a single map rotation, every time a map is chosen its probability of being chosen again in that rotation is decreased. This helps prevent playing the same map twice in a row.
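A minimal sketch of that weighted draw - the 1/(1 + plays) discount and the halve-after-selection rule are illustrative choices of mine, not necessarily the exact formulas in the script:

```python
import random

def pick_maps(past_plays, n_games, decay=0.5):
    """Draw a rotation of n_games maps. Maps we've played often start
    with lower weight; each time a map is picked, its weight within
    this rotation is multiplied by `decay` so immediate repeats are rare."""
    # Start each map at weight 1/(1 + times played historically).
    weights = {m: 1.0 / (1 + plays) for m, plays in past_plays.items()}
    rotation = []
    for _ in range(n_games):
        maps = list(weights)
        chosen = random.choices(maps, weights=[weights[m] for m in maps])[0]
        rotation.append(chosen)
        weights[chosen] *= decay  # derate within this rotation
    return rotation
```

With `decay=0.0` a map can never repeat until every other map has been exhausted, which is one way to make the no-repeats rule hard rather than probabilistic.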
I had a similar system set up for game modes, except there were also some preconfigured probabilities for each game mode (for instance, we like One Shot One Kill (OSOK) but not Pistol Frenzy (PF)).
This system worked decently well, and we played it a few times, allowing me to collect a few hundred demo files.
Ultimately, the goal of all of this is to make map rotations more “fun.” But how do you tell how fun a map rotation is? It’d be bad to ask “how would you rate this game on a scale of 1 to 5?” after every map.
So I theorized that the amount of fun to be had during a game is proportional to the number of kills during the game. Then games spent mostly wandering around would be less fun than more intense games.
It might make sense to simply average, for each map, how many kills there were on that map. Then maps with a lot of kills would count as more fun than maps with fewer kills, and could be chosen with higher probability. But the number of kills depends not just on the map, but also on the game mode. For example, OSOK and LSS games are just as fun as DeathMatches, but there tend to be fewer kills. We have to take both the map and the mode into account.
Let’s give each map and each mode some coefficient, called the “kill contributor” (or something). The coefficient represents how many kills the map or the mode causes - positive numbers mean that the given map/mode is correlated with a large number of kills, and negative numbers mean that the map/mode is correlated with a small number of kills. Then let’s assume that the number of kills in a game is equal to the corresponding map coefficient added to the corresponding mode coefficient - that is, the map and the mode contribute to the “funness” of the game independently of each other. For example, if ac_douze is much more fun than ac_power and the LSS gamemode is more fun than DM, then ac_power/LSS is more fun than ac_power/DM, ac_douze/DM is more fun than ac_power/DM, and so on (ac_douze/DM could be about as fun as ac_power/LSS, depending on the exact numbers involved).
To calculate these numbers, we could do a complicated regression, or we could just approximate it iteratively. The algorithm looks roughly like:
```
foreach iteration:
    foreach sample:
        estimated_kills = map_coefficients[mapname] + mode_coefficients[mode]
        map_coefficients[mapname] -= (estimated_kills - sample_kills) * alpha
        mode_coefficients[mode] -= (estimated_kills - sample_kills) * alpha
```
Where the “samples” are each of the recorded demo files, and alpha is some learning rate parameter (roughly 0.1 worked for me).
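Fleshed out into runnable Python, that update loop might look like the following sketch (the `fit_coefficients` name and the shape of the sample tuples are mine, not from the original script):

```python
from collections import defaultdict

def fit_coefficients(samples, alpha=0.1, iterations=1000):
    """Iteratively fit per-map and per-mode kill contributions so that
    map_coeff[map] + mode_coeff[mode] approximates the observed kills.
    Each sample is a (mapname, mode, kills) tuple from one demo file."""
    map_coeff = defaultdict(float)
    mode_coeff = defaultdict(float)
    for _ in range(iterations):
        for mapname, mode, kills in samples:
            error = (map_coeff[mapname] + mode_coeff[mode]) - kills
            # Nudge both coefficients toward the observed kill count.
            map_coeff[mapname] -= error * alpha
            mode_coeff[mode] -= error * alpha
    return dict(map_coeff), dict(mode_coeff)
```

Note that only the sum map + mode is constrained, so the individual coefficients are determined only up to a constant that can be shifted between the two tables - the relative rankings are still meaningful.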
After a bit of tuning, and derating based on number of plays, this algorithm gave this ranking:
```
ac_desert,4,58541,9.002770401287599,8.598117657932994
ac_douze,12,32423,7.875942185854054,11.31797516096086
ac_snow,6,43789,7.745997909901627,9.035074151625587
ac_gothic,6,51551,6.739458507671803,8.43261885679789
ac_lainio,5,101742,5.041147436441453,6.666202257727983
ac_toxic,4,67543,4.926387079041138,5.988600542989606
ac_wasteland,8,62702,4.27837318989626,7.334427657053301
ac_scaffold,10,42027,3.9483410969519923,7.710004854018154
ac_ingress,5,58659,3.6227592326556173,5.236212594913319
ac_arctic,9,70659,3.543593324737594,6.895472447079759
ac_industrial,2,108484,3.5188844733202074,3.0760845471731875
ac_werk,3,73759,3.383294588820544,3.6951197142405867
ac_desert3,5,57420,3.083586511589839,4.538773778327826
ac_arid,8,59442,2.8585494641376576,5.589090431298128
ac_swamp,6,96092,2.5173128364456856,4.170374366588669
ac_rattrap,5,67770,2.4646268233192665,3.569053471864044
ac_outpost,5,119298,2.245255891910313,3.165585446102157
ac_elevation,4,72096,2.220075692444564,2.538837173501348
ac_cavern,5,111949,1.90591850920475,2.4564051180858195
ac_avenue,6,108479,1.8788177858537596,2.904196283566275
ac_stellar,6,153205,1.8520757926884661,2.842150233463513
ac_terros,5,120471,1.8329857963612874,2.287532630873484
ac_complex,4,61574,1.4257344147146587,0.622126363532193
ac_venison,3,209066,1.322269910721078,-0.37111658792073476
ac_iceroad,4,138695,1.3006054684283148,0.22456076147049941
ac_mines,2,118903,1.2997762362229994,-1.2344800745576388
ac_depot,6,103664,1.2610831396434978,1.1787109901830493
ac_arabian,3,136297,1.204025582064422,-0.7765685502199269
ac_keller,5,120812,1.0977771598803536,0.06870149603202544
ac_urban,7,58933,1.0972109207960719,1.0322527562576778
ac_shine,8,82610,0.782099666478667,-0.02048621260522015
ac_sunset,2,101703,0.7306565543818598,-3.7274739118338123
ac_edifice,8,96859,0.6430165325133923,-0.8679761247887673
ac_desert2,5,71740,0.6125326489765841,-2.456478241216734
ac_aqueous,7,83357,0.48192683801356845,-2.528611752368664
ac_power,7,93089,0.25513039662905296,-5.281349806675301
```
This is the output from the whole system. The first column is the map name, the second is the number of times we’ve played it, the third is the map file size, the fourth is the probability of choosing that map (percent), and the last column is the “map coefficient” from the above algorithm.
In playtesting, this algorithm generated a much better rotation. The games were much more intense, perhaps even too intense - after just an hour of playing, we were all exhausted (but at no point were we bored).
Interestingly, while the map coefficients had a positive correlation with how much we liked the map, the mode coefficients had a negative correlation with how much we liked the mode. These are the mode coefficients:
```
team lss: 4.715634515601476
lss: 6.254326629816497
team osok: 7.830816178874231
osok: 7.83720472245699
team pf: 10.304179277076216
team ktf: 10.305728099868182
ktf: 10.993493545788104
pf: 14.213536946374926
dm: 15.021018583762135
team dm: 16.433781981854427
```
This is, roughly, sorted in descending order of how much we like the game mode.
While this “maximum fun” generator worked decently well, it had two problems: (a) it was almost too intense, and (b) it was liable to play highly-ranked maps to the exclusion of others.
Up next is a “constant fun” generator, which pairs high-coefficient maps with low-coefficient modes, and vice-versa. This way we might get good map coverage, and aren’t constantly playing the same set of maps.
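One way to sketch that pairing, given the fitted coefficients - the zip-of-sorted-lists approach here is my guess at an implementation, not the final design:

```python
def constant_fun_pairs(map_coeff, mode_coeff):
    """Pair the highest-coefficient maps with the lowest-coefficient
    modes (and vice versa), so every game's map + mode total lands
    near the same value."""
    maps = sorted(map_coeff, key=map_coeff.get, reverse=True)  # descending
    modes = sorted(mode_coeff, key=mode_coeff.get)             # ascending
    # Cycle through modes if there are more maps than modes.
    return [(m, modes[i % len(modes)]) for i, m in enumerate(maps)]
```

Because every map appears exactly once in the output, this also gives full map coverage by construction.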
The code for all of this is over at https://github.com/lkolbly/assaultcube-server-scripts, and you can run it if you have Python 3. (I’m trying to move away from Python 2 in my day-to-day life.)