We have a ton of automated testing, but it can never possibly catch everything.
I do need to get better acquainted with test-driven development though. As much of a PITA as it sounds, I think it's a good thing in the long run.
In terms of refactoring, we produce code diffs for our QA teams. It gives them an idea of test coverage per line of change. The real issue is not so much testing within the system itself, but all the external systems we connect with.
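For anyone who wants to wire up something similar themselves, here's a rough sketch using diff-cover, which maps an existing coverage report onto just the lines changed in a branch. (I'm assuming a Python stack with pytest-cov here; the original poster didn't say what tooling they use, so treat the commands as illustrative.)

    # Run the test suite and emit a Cobertura-style XML coverage report
    # (requires the pytest-cov plugin)
    pytest --cov=myapp --cov-report=xml

    # Report coverage only for lines changed relative to main,
    # and fail the build if fewer than 80% of changed lines are covered
    diff-cover coverage.xml --compare-branch=origin/main --fail-under=80

That gives QA roughly the same "coverage per line of change" view without anyone reading raw diffs by hand.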
Tests don't catch everything, but that's the nature of software development. Most of the software I write is very iterative in nature, so I'm always going back over code that's already written. Start with a simple idea, build an MVP release, repeat. This works for my current company because I'm writing software that doesn't exist elsewhere and has very few points of comparison. Users need to see something before they start to realize what other features they need.
External systems can be a pain to interface with. What you need to do is isolate those interfaces from the rest of the system so that changes on either side don't break the other side. One system that I wrote pulls data in from many other systems (hardware call center management systems to be exact - phone switches, predictive dialers, etc...) and I used a lot of techniques to isolate each system and version so that if one system's feed changes it doesn't break the whole thing. The first version wasn't that robust, but over time and many refactorings a solid design emerged.
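In case it helps anyone picture the isolation technique, here's a minimal sketch of one way to do it (the names and record shapes are made up for illustration, not the actual system): each external feed gets its own adapter, registered by feed name and version, that normalizes raw records into a single internal shape. A format change in one feed then means adding one new adapter, not touching downstream code.

    from dataclasses import dataclass

    @dataclass
    class CallRecord:
        """The one internal shape the rest of the system ever sees."""
        caller: str
        duration_secs: int

    # Registry of (feed, version) -> parser function. A feed format
    # change means registering a new version, not editing consumers.
    PARSERS = {}

    def parser(feed, version):
        def register(fn):
            PARSERS[(feed, version)] = fn
            return fn
        return register

    @parser("dialer", 1)
    def parse_dialer_v1(raw):
        return CallRecord(caller=raw["ani"], duration_secs=raw["dur"])

    @parser("dialer", 2)
    def parse_dialer_v2(raw):
        # Hypothetical v2: renamed fields, duration now in milliseconds
        return CallRecord(caller=raw["caller_id"],
                          duration_secs=raw["duration_ms"] // 1000)

    def ingest(feed, version, raw):
        try:
            return PARSERS[(feed, version)](raw)
        except KeyError:
            raise ValueError(f"no parser registered for {feed} v{version}")

The point is that ingest() and everything downstream never learn what changed in a feed; only the one versioned parser does, so a broken or updated feed can't ripple through the whole system.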