In the last two months, there has been a marked upswing in public attacks against people, mostly women, who have either spoken out about sexism in games and the games industry or who want to make games that are a bit different from what is available to us now. My gut response to this phenomenon has been a combination of anger and helplessness. These attacks have even bubbled up to public discourse in recent weeks, which I fear has had the depressing effect of sending us back 30 years in the eyes of the general public.
Yesterday was our first day of classes at Northeastern, so over the last week I’ve been tweaking my syllabus for Game Artificial Intelligence: adjusting assignments that didn’t work well last year, making room for discussion of concepts that needed more attention, agonizing over what to remove. It’s one of the hardest parts of teaching: identifying what is important and making time for it.
At the Procedural Content Generation workshop held with FDG this year, I had the pleasure of being on a panel about evaluation methods in PCG, alongside Julian Togelius, Staffan Bjork, and Adam Smith. As part of preparing for the panel, I quickly put together a list of questionable claims that I frequently see in PCG papers (including my own!) as a starting point for talking about evaluation: what should we be evaluating? how do we compare our work? how do we know if we are getting better at PCG? what does it mean to get “better” at it? do...