There has been significant progress over the last few years in the theory and applications of Reinforcement Learning (RL). While RL theory and applications have a rich history going back several decades, the major recent successes stem from a successful marriage of deep learning approaches for function approximation with a reinforcement learning framework for decision-making (Deep RL). On one hand, there is now a richer understanding of Stochastic Gradient Descent (SGD) for non-convex optimization, of its role in driving training error to zero in deep neural networks, and of the generalization ability of such networks at inference time. On the other hand, there has been an explosion of research on iterative learning algorithms with strong statistical guarantees in the settings of reinforcement learning, stochastic approximation, and multi-armed bandits.

This workshop aims to bring together leading researchers from these two threads, with the goal of understanding and advancing research at their intersection. We will also explore other potential connections between deep learning and deep RL, including but not limited to: understanding generalization in deep RL and how it relates to, or differs from, generalization in deep learning; and connections between adversarial training in deep learning (e.g., Generative Adversarial Networks) and the optimization aspects of recent deep RL algorithms based on generalized moment matching in off-policy RL and imitation learning.

This workshop is fully funded by a Simons Foundation Targeted Grant to Institutes.