Many problems in Reinforcement Learning (RL) seek an optimal policy over a large, discrete, multidimensional, yet unordered action space; examples include randomized allocation of resources, such as the placement of multiple security resources or emergency response units. A key challenge in this setting is that the underlying action space is categorical (discrete and unordered) and large, and existing RL methods do not perform well on it. Moreover, these problems require every realized action (allocation) to be valid, and this validity constraint is often difficult to express compactly in closed mathematical form. The allocation nature of the problem also favors stochastic optimal policies, when they exist.
In this work, we address these challenges by (1) applying a (state-)conditional normalizing flow to compactly represent the stochastic policy, where the compactness arises because the network produces only one sampled action and the corresponding log-probability of that action, which an actor-critic method then consumes; and (2) employing an invalid action rejection method (via a valid action oracle) to update the base policy. The action rejection is enabled by a modified policy gradient that we derive. Finally, we conduct extensive experiments showing that our approach scales better than prior methods and can enforce arbitrary state-conditional constraints on the support of the action distribution in any state.
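To make the flow-based policy concrete, below is a minimal PyTorch sketch of a state-conditional normalizing flow that returns one sampled action and its log-probability, as an actor-critic method would consume them. Everything here is illustrative: `ConditionalAffineFlow`, `FlowPolicy`, the single affine layer, and the rounding step that maps the continuous sample to a categorical action are our own assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class ConditionalAffineFlow(nn.Module):
    """One state-conditioned affine layer: a = z * exp(s(x)) + t(x).

    A minimal, hypothetical stand-in for the paper's conditional flow;
    the actual architecture is not reproduced here.
    """

    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * action_dim),
        )

    def forward(self, state, z):
        s, t = self.net(state).chunk(2, dim=-1)  # per-dim log-scale and shift
        a = z * torch.exp(s) + t
        log_det = s.sum(dim=-1)                  # log|det Jacobian| of the affine map
        return a, log_det


class FlowPolicy(nn.Module):
    """Samples one action and its log-probability, as the actor in actor-critic."""

    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.action_dim = action_dim
        self.flow = ConditionalAffineFlow(state_dim, action_dim)

    def sample(self, state):
        # Base sample z ~ N(0, I) and its log-density.
        z = torch.randn(state.shape[0], self.action_dim)
        base_logp = (-0.5 * z.pow(2) - 0.5 * torch.log(torch.tensor(2 * torch.pi))).sum(-1)
        a_cont, log_det = self.flow(state, z)
        log_prob = base_logp - log_det           # change of variables
        # Hypothetical discretization: round each coordinate to the nearest
        # category index. (The paper's mapping from the continuous sample to
        # a categorical action may differ.)
        action = a_cont.round().long()
        return action, log_prob
```

In an actor-critic loop, the returned `log_prob` would multiply an advantage estimate to form the policy-gradient loss; the compactness claim follows from the network only ever materializing one sample and one scalar log-probability, never the full categorical distribution.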
Our approach consists of two main components: (1) a conditional normalizing flow to compactly represent the stochastic policy over the large categorical action space, and (2) an invalid action rejection method to guide learning in the constrained action space.
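For component (2), the following is a hedged sketch of invalid action rejection with a truncated-policy gradient. The oracle `is_valid`, the helper names, and the specific surrogate for the gradient of log Z(s) are assumptions for illustration; the paper derives its own modified policy gradient, which is not reproduced here.

```python
import torch


def rejection_sample(policy, state, is_valid, max_tries=100):
    """Draw from the flow policy until the (hypothetical) oracle accepts.

    `is_valid` returns a boolean tensor. Rejected draws are kept so the
    truncated policy's normalizer Z(s) can be estimated from the
    acceptance rate.
    """
    logps, flags = [], []
    for _ in range(max_tries):
        action, logp = policy.sample(state)
        ok = is_valid(state, action)
        logps.append(logp)
        flags.append(ok)
        if ok.all():
            return action, logp, torch.stack(logps), torch.stack(flags)
    raise RuntimeError("no valid action found; the oracle may be too restrictive")


def truncated_pg_loss(logp_accepted, all_logps, all_flags, advantage):
    """Surrogate loss for the truncated policy pi'(a|s) = pi(a|s) / Z(s).

    Uses grad log pi'(a|s) = grad log pi(a|s) - grad log Z(s), with
    grad Z(s) = E_{a~pi}[ 1[valid(a)] * grad log pi(a|s) ] estimated from
    all draws. A plausible sketch only, not the paper's exact derivation.
    """
    z_hat = all_flags.float().mean().clamp_min(1e-6)              # acceptance rate ~ Z(s)
    log_z_surrogate = (all_flags.float() * all_logps).mean() / z_hat
    return -(advantage.detach() * (logp_accepted - log_z_surrogate)).mean()
```

The design intuition is that rejection sampling implicitly renormalizes the base policy over the valid set, so the score of the realized policy picks up a correction term from the normalizer; the sketch estimates that term from the same accepted and rejected draws the sampler already produced.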
We show empirically that our approach scales to large categorical action spaces where prior methods struggle, and that it can enforce arbitrary state-conditional validity constraints on the sampled actions.
Moreover, we show that baselines that make assumptions about the action space can perform poorly: for example, the Factored policy fails to learn the optimal policy in partially observable settings (Fig. E1(b)), and the AR policy performs poorly in constrained action spaces (Fig. E2).
@inproceedings{chen2023generative,
  title     = {Generative Modelling of Stochastic Actions with Arbitrary Constraints in Reinforcement Learning},
  author    = {Changyu Chen and Ramesha Karunasena and Thanh Hong Nguyen and Arunesh Sinha and Pradeep Varakantham},
  booktitle = {Thirty-seventh Conference on Neural Information Processing Systems},
  year      = {2023}
}