In this work, we propose a novel architecture that augments the standard attentional sequence-to-sequence model in two orthogonal ways.