Automated deduction is now more than half a century old. Its theoretical foundations, i.e., the notions of soundness, completeness, decidability, and complexity, date back to the field's early days or even earlier. In the last couple of decades, however, the world has changed, driven by the massive development of powerful processors with several to many cores; large, layered, and fast memories; GPUs; machine learning; cloud computing; and huge storage. In this new world, novel infrastructure and techniques often challenge our foundations. For instance, parallel and portfolio solvers question the once revered notion of completeness in favor of a vaguer notion of practical efficiency. The time has come to assess the current status of research on deduction and its future.