What’s all the hype with Transformers? Part 2: Memory for RNNs
Contents

Introduction
Recurrent Neural Networks
Towards a Solution: Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTM)

Introduction

In the previous post, we highlighted some of the issues we are faced with when attempting to process natural language