Perhaps a good feature for a game-playing AI would be a chunking/theorem-proving module.

For example, in the mobile game Everlands by Hexage, on each turn each player places one piece; when you place a piece, first any adjacent pieces that can attack it do so, and then your placed piece attacks all pieces that it can. Each piece has hit points (HP), an attack rating, and a special power. For example, bears have an attack rating of 2, 4 HP, and the special power that after being wounded, their attack rating increases to 3.
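As a minimal sketch of those placement rules (the class and function names here are my own, not Everlands', and the bear's wound power is hard-coded for illustration):

```python
from dataclasses import dataclass

@dataclass
class Piece:
    name: str
    attack: int
    hp: int
    wounded: bool = False

    def take_damage(self, dmg):
        self.hp -= dmg
        self.wounded = True
        # Sketch of the bear's special power: once wounded (and still
        # alive), its attack rating rises from 2 to 3.
        if self.name == "bear" and self.hp > 0:
            self.attack = 3

def resolve_placement(placed, adjacent_enemies):
    """When a piece is placed: adjacent pieces that can attack it
    strike first, then the placed piece (if it survives) attacks
    every piece it can reach."""
    for enemy in adjacent_enemies:
        if placed.hp > 0:
            placed.take_damage(enemy.attack)
    if placed.hp > 0:
        for enemy in adjacent_enemies:
            enemy.take_damage(placed.attack)
```

Running `resolve_placement(Piece("bear", 2, 4), [Piece("hedgehog", 2, 3)])` plays out exactly the trap described below: the hedgehog's strike wounds the bear, the bear's attack rises to 3, and the hedgehog dies.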

If the enemy has a bear, and you place a unit with 3 or fewer HP such that the enemy can place the bear where your unit and the bear can attack each other, and such that the bear will survive the initial attack(s) on it, then the enemy can place the bear: your unit will attack it, the wound will raise the bear's attack rating to 3, and the bear will then attack back for 3, killing your unit.
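That theorem's preconditions can be written down as a single predicate. This is a sketch under stated assumptions: adjacency and whether the enemy actually holds a bear are taken as given flags, and the bear's stats (4 HP, attack 3 once wounded) come from the rules above:

```python
def loses_to_bear_counterplay(unit_hp, unit_attack,
                              enemy_can_place_adjacent_bear=True,
                              bear_hp=4, wounded_bear_attack=3):
    """True if the enemy can answer this placement with a bear that
    survives our unit's initial strike and then kills our unit."""
    if not enemy_can_place_adjacent_bear:
        return False
    bear_survives = bear_hp - unit_attack > 0
    return bear_survives and unit_hp <= wounded_bear_attack
```

So a 3-HP, attack-2 unit is doomed (`loses_to_bear_counterplay(3, 2)` is true), but a 5-HP unit survives the counterstrike, and an attack-4 unit kills the bear outright before its power triggers.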

By a chunking/theorem-proving module, i mean that once this sort of thing happens to the AI a few times, the AI should notice the pattern and abstract from it, proving the previous paragraph as a theorem (assuming the AI knows the rules of the game as axioms). Then, in the future, when the AI is doing lookahead, it can apply this theorem in a way analogous to 'chunking'; that is, it can save some computation time by recognizing that whenever the preconditions of the theorem are satisfied, its unit will be killed.
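One way to picture how a learned theorem plugs into lookahead: keep a store of (precondition, conclusion) lemmas and consult it before expanding a branch of the game tree. Everything here is a hypothetical sketch, not an actual algorithm from the game:

```python
# Each learned theorem is a (precondition, conclusion) pair, where the
# precondition is a predicate over (state, move).
lemmas = []

def learn(precondition, conclusion):
    lemmas.append((precondition, conclusion))

def evaluate_move(state, move, full_search):
    """During lookahead, check the learned lemmas first; if one's
    preconditions hold, return its conclusion immediately instead of
    running the expensive game-tree search for this move."""
    for pre, concl in lemmas:
        if pre(state, move):
            return concl  # e.g. "unit dies" -- prune this branch
    return full_search(state, move)
```

The payoff is exactly the chunking one: a branch whose outcome is already proven costs one predicate check instead of a subtree expansion.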

Note that the process of abstraction is similar to Bayes net d-separation; you start out with a specific situation (e.g. i placed a hedgehog in this square, then the enemy placed a bear, the hedgehog attacked the bear, the bear's attack rating increased to 3, the bear attacked the hedgehog, the hedgehog died), and then you trace how the rules of the game produced that result from this input, in the process using something like 'd-separation' to observe which parts of the input didn't affect the computation.
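The "which parts of the input didn't matter" step can be sketched as a simple dependency-tracing (taint-tracking) wrapper; this is my own illustrative construction, not something from the game or from Bayes-net literature:

```python
class Traced:
    """Wraps a value and records which named inputs it depends on, so
    that after a concrete computation we can see which inputs the
    result never touched -- those can be abstracted away when
    generalizing the episode into a theorem."""
    def __init__(self, value, deps):
        self.value = value
        self.deps = frozenset(deps)

    def combine(self, other, op):
        # The result depends on everything either operand depended on.
        return Traced(op(self.value, other.value), self.deps | other.deps)

def irrelevant_inputs(all_inputs, result):
    return set(all_inputs) - set(result.deps)
```

For instance, if the hedgehog's death was computed from its HP and the wounded bear's attack rating, but the identity of the square never entered the computation, then the square drops out of the generalized theorem.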