CIG18 Accepted Full Papers

Papers are ordered by submission ID. Any paper accepted for oral or poster presentation will be published as a full 8-page paper in the proceedings.

Main Track

  • Ivan Bravi, Diego Perez, Simon Lucas and Jialin Liu. Shallow decision-making analysis in General Video Game Playing
  • Zhengxing Chen, Chris Amato, Magy Seif El-Nasr, Truong Nguyen, Seth Cooper and Yizhou Sun. Q-DeckRec: a Fast Deck Recommendation System for Collectible Card Games
  • Michael Cook, Simon Colton and Azalea Raad. Inferring Design Constraints From Game Ruleset Analysis
  • Amin Babadi, Kourosh Naderi and Perttu Hämäläinen. Intelligent Middle-Level Game Control
  • Stefan Gudmundsson, Philipp Eisen, Erik Poromaa, Alex Nodet, Sami Purmonen, Richard Meurling, Bartlomiej Kozakowski and Lele Cao. Human-Like Playtesting with Deep Learning
  • Garry Greenwood, Hussein Abbass and Eleni Petraki. A Critical Analysis of Punishment in Public Goods Games
  • Makoto Ishihara, Suguru Ito, Ryota Ishii, Tomohiro Harada and Ruck Thawonmas. Monte-Carlo Tree Search for Implementation of Dynamic Difficulty Adjustment Fighting Game AIs Having Believable Behaviors
  • Ryota Ishii, Suguru Ito, Makoto Ishihara, Tomohiro Harada and Ruck Thawonmas. Monte-Carlo Tree Search Implementation of Fighting Game AIs Having Personas
  • Raluca Gaina, Simon Lucas and Diego Perez Liebana. General Win Prediction from Agent Experience
  • Shanchuan Wan and Tomoyuki Kaneko. Building Evaluation Functions for Chess and Shogi with Uniformity Regularization Networks
  • Jakub Kowalski and Andrzej Kisielewicz. Regular Language Inference for Learning Rules of Simplified Boardgames
  • Jakub Kowalski, Radoslaw Miernik, Piotr Pytlik, Maciej Pawlikowski, Krzysztof Piecuch and Jakub Sekowski. Strategic Features and Terrain Generation for Balanced Heroes of Might and Magic III Maps
  • Magnus Gedda, Mikael Zayenz Lagerkvist and Martin Butler. Monte-Carlo Methods for the Game Kingdomino
  • Daniel Ashlock and Courtney Kolthof. Evolving Number Sentence Puzzles
  • Daniel Ashlock, Eun-Youn Kim and Diego Pérez-Liébana. Toward General Mathematical Game Playing
  • Myat Aung, Valerio Bonometti, Anders Drachen, Peter Cowling, Athanasios Kokkinakis and Alex Wade. Predicting skill learning outcomes in a large, longitudinal MOBA dataset
  • Michael Nixon, Steve DiPaola and Ulysses Bernardet. An eye gaze model for controlling the display of social status in believable virtual humans
  • Rahul Dubey, Joseph Ghantous, Sushil Louis and Siming Liu. Evolutionary Multi-objective Optimization of Real-Time Strategy Micro
  • Oleksandra Keehl and Adam Smith. Monster Carlo: an MCTS-based Framework for Machine Playtesting Unity Games
  • Per-Arne Andersen, Morten Goodwin and Ole-Christoffer Granmo. Deep RTS: A Game Environment for Deep Reinforcement Learning in Real-Time Strategy Games
  • Christian Guckelsberger, Christoph Salge and Julian Togelius. New And Surprising Ways to be Mean: Adversarial NPCs with Coupled Empowerment Minimisation
  • Christoph Salge, Christian Guckelsberger, Rodrigo Canaan and Tobias Mahlmann. Accelerating Empowerment Computation with UCT Tree Search
  • Devon Sigurdson, Vadim Bulitko, William Yeoh, Sven Koenig and Carlos Hernandez. Real-Time Multi-Agent Heuristic Search in Videogame Pathfinding
  • Sam Ganzfried and Qingyun Sun. Bayesian Opponent Exploitation in Imperfect-Information Games
  • Rutger Kraaijer, Marc van Kreveld, Wouter Meulemans and Andre van Renssen. Geometry and Generation of a new Graph Planarity Game
  • Philip Rodgers, John Levine and Damien Anderson. Ensemble Decision Making in Real-time Video Games
  • Mike Preuss, Thomas Pfeiffer, Vanessa Volz and Nicolas Pflanzl. Integrated Balancing of an RTS Game: Case Study and Toolbox Refinement
  • Luiz Bernardo Martins Kummer, Júlio César Nievola and Emerson Paraiso. Applying Commitment to Churn and Remaining Players Lifetime Prediction
  • Aavaas Gajurel, Sushil J. Louis, Daniel J. Mendez and Siming Liu. Neuroevolution of real-time strategy game micro
  • Anderson R. Tavares and Luiz Chaimowicz. Tabular Reinforcement Learning in Real-Time Strategy Games via Options
  • Fernando De Mesentier Silva, Julian Togelius, Frank Lantz and Andy Nealen. Generating Novice Heuristics for Post-Flop Poker
  • André Siqueira Ruela and Karina Valdivia Delgado. Scale-free Evolutionary Level Generation
  • Hendrik Baier and Peter I. Cowling. Evolutionary MCTS for Multi-Action Adversarial Games
  • Frank Glavin and Michael Madden. Skilled Experience Catalogue: A Skill-Balancing Mechanism for Non-Player Characters using Reinforcement Learning
  • Mohammed Salem, Antonio Mora and Juan J. Merelo. The evolutionary race: improving the process of evaluating car controllers in racing simulators

SS1: Deep Learning in Games

  • Daniel Karavolos, Antonios Liapis and Georgios N. Yannakakis. Using a Surrogate Model of Gameplay for Automated Level Design
  • William Woof and Ke Chen. Learning to Play General Video-Games via an Object Embedding Network
  • Niels Justesen and Sebastian Risi. Automated Curriculum Learning by Rewarding Temporally Rare Events
  • Zuozhi Yang and Santiago Ontañón. Learning Map-Independent Evaluation Functions for Real-Time Strategy Games
  • Jack Harmer, Linus Gisslen, Henrik Holst, Joakim Bergdahl, Tom Olsson, Kristoffer Sjöö and Magnus Nordin. Imitation Learning with Concurrent Actions in 3D Games
  • Ruben Rodriguez Torrado, Philip Bontrager, Julian Togelius, Jialin Liu and Diego Perez Liebana. Deep reinforcement learning in the General Video Game AI framework

SS2: Intelligent Games for Learning

  • Sandra Kaczmarek and Sintija Petroviča. Promotion of Learning Motivation through Individualization of Learner-Game Interaction
  • Samuel Mascarenhas, Rui Prada, João Dias, Pedro A. Santos, Kam Star, Ben Hirsh, Ellis Spice and Rob Kommeren. A Virtual Agent Toolkit for Applied Game Developers
  • Maria Cutumisu. The Influence of Feedback Choice on University Students’ Revision Choices and Performance in a Digital Assessment Game
  • Gabriel Toschi de Oliveira, Hugo Henriques Pereira, Claudio Fabiano Motta Toledo, Seiji Isotani and Geiser Chalco Challco. Plot from the Stars: educational game development for teaching basic mathematical functions

SS3: Integrating IoT Technologies with Serious Games

  • Chrysanthi Tziortzioti, Irene Mavrommati and Ioannis Chatzigiannakis. Delivering Educational Scenarios using Internet of Things Data
  • Evaggelos Spyrou, Nicholas Vretos, Andrew Pomazanskyi, Stylianos Asteriadis and Helen Leligou. Exploiting IoT Technologies for Personalized Learning
  • Pavlos Kosmides, Konstantinos Demestichas, Evgenia Adamopoulou, Nikos Koutsouris, Yannis Oikonomidis and Vanessa De Luca. InLife: Combining Real Life with Serious Games using IoT

Short Papers

  • Vadim Bulitko and Kacy Doucet. Anxious Learning in Real-time Heuristic Search
  • Kun Shao, Dongbin Zhao, Nannan Li and Yuanheng Zhu. Learning Battles in ViZDoom via Deep Reinforcement Learning
  • Chiara F. Sironi and Mark H. M. Winands. Analysis of Self-adaptive Monte Carlo Tree Search in General Video Game Playing
  • Chrysoula Varia, Georgios Tsatiris, Kostas Karpouzis and Stefanos Kollias. A refined 3D dataset for the analysis of player actions in exertion games
  • Paul Bertens, Anna Guitart, Pei Pei Chen and Africa Perianez. A Machine-Learning Item Recommendation System for Video Games
  • Simon Lucas. Game AI Research with Fast Planet Wars Variants
  • Emil Gensby, Anders Harbøll Christiansen and Bo Friis Nielsen. Multi-Parametrised Matchmaking: A Framework
  • Benjamin Bell. Learning to Play Doom with Separate Action Outputs
  • Adam Streck and Thomas Wolbers. Using Discrete Time Markov Chains for Control of Idle Character Animation

Competition Papers

  • Rodrigo de Moura Canaan, Haotian Shen, Ruben Torrado, Julian Togelius, Andy Nealen and Stefan Menzel. Evolving Agents for the Hanabi 2018 CIG Competition
  • Pavan Kantharaju, Santiago Ontañón and Christopher Geib. μCCG, a CCG-based Game-Playing Agent for μRTS
  • Maciej Świechowski, Tomasz Tajmajer and Andrzej Janusz. Improving Hearthstone AI by Combining MCTS and Supervised Learning Algorithms
  • Alexander Dockhorn and Daan Apeldoorn. Forward Model Approximation for General Video Game Learning
  • Martin L.M. Rooijackers and Mark H. M. Winands. Wall Building in the Game of StarCraft with Terrain Considerations
  • Yoshina Takano, Wenwen Ouyang, Suguru Ito, Tomohiro Harada and Ruck Thawonmas. Applying Hybrid Reward Architecture to a Fighting Game AI
  • Bryan Weber. Standard Economic Models in Nonstandard Settings - StarCraft: Brood War

Vision Papers

  • Cameron Browne. Modern Techniques for Ancient Games
  • Cristina Guerrero-Romero, Simon Lucas and Diego Perez-Liebana. Using a Team of General AI Algorithms to Assist Game Design and Testing
  • Rodrigo de Moura Canaan, Stefan Menzel, Julian Togelius and Andy Nealen. Towards Game-based Metrics for Computational Co-creativity
  • Vanessa Volz, Kevin Majchrzak and Mike Preuss. A Bottom-Up Approach to Explanations for (Game) AI
  • Jichen Zhu, Antonios Liapis, Sebastian Risi, Rafael Bidarra and Michael Youngblood. Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation

Demos

  • Baek In-Chang and Kim Kyung-Joong. Web-based Interface for Data Labeling in StarCraft