While Voice User Interfaces (VUIs) are becoming increasingly embedded in everyday life, their ability to tailor their output to individual users remains limited. Research on VUIs has explored static user models that encode general preferences; separately, dynamic models of dialogue context or short-term common ground have been used to inform natural language generation decisions. Neither approach alone is enough to equip a VUI to explain concepts dynamically. This paper highlights the need to use both, and thus to develop new interactive models of tailored explanation.
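To make the distinction concrete, the following is a minimal sketch (not drawn from the paper; all class and function names are hypothetical) of how a static user model and a dynamic common-ground model might jointly drive an explanation decision: the static model sets a baseline level of detail, while the dynamic model tracks what has already been established in the conversation.

```python
# Hypothetical sketch: combining a static user model with dynamic common ground.
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Static, long-term preferences (e.g., expertise, preferred verbosity)."""
    expertise: str = "novice"          # "novice" | "expert"
    prefers_brevity: bool = False

@dataclass
class DialogueContext:
    """Dynamic, short-term common ground built up during the conversation."""
    grounded_concepts: set = field(default_factory=set)

    def ground(self, concept: str) -> None:
        self.grounded_concepts.add(concept)

def explain(concept: str, user: UserModel, context: DialogueContext) -> str:
    """Tailor an explanation using BOTH models: the static model sets the
    baseline detail level; the dynamic model suppresses material that is
    already part of the common ground."""
    if concept in context.grounded_concepts:
        # Already grounded earlier in the dialogue: refer back briefly.
        return f"As we discussed, {concept} applies here."
    context.ground(concept)  # update the short-term common ground
    if user.expertise == "expert" or user.prefers_brevity:
        return f"{concept}: (concise technical definition)."
    return f"Let me explain {concept} step by step: (full novice-level explanation)."

# The second mention of the same concept yields a different, context-aware output.
user, ctx = UserModel(expertise="novice"), DialogueContext()
print(explain("packet loss", user, ctx))  # full explanation
print(explain("packet loss", user, ctx))  # brief back-reference
```

Using only the static model, both turns would produce the same explanation; using only the dynamic model, the system could not adapt its baseline to the user's expertise. The combination is what enables tailored, interactive explanation.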