Our R&D engineers developed this rudimentary chess engine as part of an initiation ritual: every new hire contributes a bit of code, and the plan is to keep developing it between projects, turning downtime into learning and experimentation time in the field of reinforcement learning.

As it turns out, downtime is not that common for us. Still, this exercise let us explore ways of packaging TensorFlow AI applications into client-side packages, and we look forward to continuing with the more relevant research side of things.

Currently only two modes are functional: a brute-force minimax tree search and a heuristic-based approach.
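For context, the core of the brute-force mode is the classic minimax recursion; the sketch below is a toy illustration of that idea (not the engine's actual code), using a nested-list game tree whose leaves hold static evaluations:

```python
def minimax(node, maximizing):
    # Leaves carry a static evaluation; internal nodes list the
    # positions reachable in one move.
    if isinstance(node, int):
        return node
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# Toy tree: the maximizing player picks the branch whose worst-case
# reply is best -- here the right branch, which guarantees 4.
tree = [[3, 1], [4, 7]]
print(minimax(tree, True))  # -> 4
```

A real engine would bound the recursion with a depth limit and a heuristic evaluation of non-terminal positions, and typically add alpha-beta pruning to cut the tree down.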