\section{Language-based Hierarchical Reinforcement Learning}
Reinforcement Learning methods that utilise a hierarchy have recently been introduced to solve challenging tasks with pre-requisite steps \cite{Tessler:2017}. Hierarchical methods are defined by their use of sub-tasks to break a long-term goal down into shorter steps. However, because numerical reward functions are used, human supervision is required to define the completion of each sub-goal. Specifically, \cite{Tessler:2017} trained a hierarchy by defining long-term tasks as a set of connected rooms, each with a specific task and a supervised completion state. Likewise, the recent works of \cite{shu:interp-RL} and \cite{hu:nl-instr} required the states that complete each instruction to be defined; in particular, \cite{hu:nl-instr} labelled instruction completion by having a human play the game. In some text-game problems, such as CookingWorld \cite{Madotto:2020}, the authors incorporate sub-goals into the reward signal directly (i.e.\ $r=0.25$ for each completed sub-task).

The work in \cite{luketina:2019} provides a detailed survey of research to date that attempts to link Natural Language and Reinforcement Learning, noting that grounding (``i.e.\ learning the correspondence between language and environment features'') remains a significant research challenge. Natural language has recently been used to improve an agent's performance: the most recent work combines Natural Language and Reinforcement Learning to ground language via the use of instructions, but this was achieved with direct supervision \cite{shu:interp-RL, hu:nl-instr}. In each case, grounding the environment with Natural Language improves generalisability, and these methods therefore outperform approaches that do not use language.
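To make the role of supervised sub-goal completion concrete, reward shaping of the kind used in CookingWorld \cite{Madotto:2020} can be written, as an illustrative general form rather than the exact formulation of any of the cited works, as
\begin{equation}
    r_t \;=\; r^{\mathrm{env}}_t \;+\; \sum_{g \in \mathcal{G}} c_g \, \mathbb{I}\!\left[\text{sub-goal } g \text{ is first completed at time } t\right],
\end{equation}
where $r^{\mathrm{env}}_t$ is the environment reward, $\mathcal{G}$ is the set of sub-goals, $c_g$ is the bonus for sub-goal $g$ (e.g.\ $c_g = 0.25$ for every sub-task in CookingWorld), and $\mathbb{I}[\cdot]$ is an indicator of sub-goal completion. The key limitation highlighted above is that this indicator must be specified through human supervision for every sub-goal.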