In this talk I discuss recent work in my group exploring a curious finding: when training neural networks, periodically resetting some or all parameters can help promote better solutions. I begin with a discussion of our findings in supervised learning, where we relate this strategy to Iterated Learning, a method for promoting compositionality in emergent languages. We then show how parameter resets appear to offset a common flaw of deep reinforcement learning (RL) algorithms: a tendency to rely too heavily on early interactions and ignore useful evidence encountered later. We apply this reset mechanism to algorithms in both discrete-action (Atari 100k) and continuous-action (DeepMind Control Suite) domains and observe consistent performance improvements. I conclude with some recent findings and speculation about the underlying causes of the observed effects of parameter resets.
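For readers unfamiliar with the mechanism, the sketch below shows one minimal way a periodic parameter reset can be wired into a training loop. It is illustrative only, not the exact procedure from the talk: the network architecture, the `reset_interval` value, and the choice to reset only the final layer (and the optimizer state) are all assumptions made for the example.

```python
# A minimal sketch (assumptions noted in comments, not the talk's exact method)
# of periodic parameter resets during training, using PyTorch.
import torch
import torch.nn as nn

def reset_parameters(module: nn.Module) -> None:
    """Re-initialize every submodule that defines reset_parameters()."""
    for layer in module.modules():
        if hasattr(layer, "reset_parameters"):
            layer.reset_parameters()

# Hypothetical model and optimizer, for illustration only.
model = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 4),  # e.g. Q-values for 4 discrete actions
)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

reset_interval = 10_000  # assumed hyperparameter, not from the talk
for step in range(100_000):
    # ... one gradient update on a batch of data would go here ...
    if (step + 1) % reset_interval == 0:
        # Reset only the final layer; "some or all" parameters is the
        # design axis the talk describes. Rebuilding the optimizer also
        # clears its moment estimates, a common companion choice.
        reset_parameters(model[-1])
        optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
```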