Official PyTorch implementation of RIO (Python, updated Jul 29, 2021)
Parameter optimization of a reinforcement-learning deep Q-network with a memory replay buffer, using a genetic algorithm, in the Snake game. Base code for the Snake environment from codecamp.
A replay-based continual-learning library in which models preserve past knowledge through stored exemplars or pseudo-samples. Implements Experience Replay (ER), Gradient Episodic Memory (GEM), and iCaRL, and provides modular dataset buffering, memory-selection policies, and evaluation utilities for reproducible experiments on vision and NLP tasks.
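To illustrate the buffering and memory-selection ideas these repositories share, here is a minimal sketch of a fixed-capacity replay buffer using reservoir sampling, a common memory-selection policy in Experience Replay. This is a hypothetical illustration, not code from any of the repositories above; the class and method names are assumptions.

```python
import random


class ReplayBuffer:
    """Fixed-capacity exemplar buffer using reservoir sampling.

    Reservoir sampling keeps each example seen so far in the buffer
    with equal probability, without knowing the stream length in advance.
    (Illustrative sketch; not taken from any listed repository.)
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples observed from the stream
        self.rng = random.Random(seed)

    def add(self, example):
        """Offer one streamed example to the buffer."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a stored exemplar with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size):
        """Draw a random rehearsal batch of stored exemplars."""
        return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))
```

During training, new examples are passed to `add` and each optimization step mixes a `sample` of stored exemplars into the current batch, which is the core rehearsal loop in ER-style methods.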
Continual Relation Extraction Utilizing Pretrained Large Language Models