Problem
It's quite a common problem, I would think. Adding new code translates into regressions: existing test cases become obsolete. Dependencies between production code and tests are hard to track.
With JDepend you can analyse dependencies between packages, write unit tests that assert dependency rules, or integrate it with FitNesse for a nice dependency-table test. This may help if your tests live in specific packages; a sketch of the JUnit integration follows.
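Here is a minimal sketch of such a JDepend-based dependency test. The directory "build/classes" and the com.example.* package names are placeholders for your own project layout:

    import jdepend.framework.DependencyConstraint;
    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;
    import junit.framework.TestCase;

    // Asserts that package dependencies match a declared constraint.
    public class DependencyConstraintTest extends TestCase {

        public void testUiOnlyDependsOnCore() throws Exception {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // compiled classes, not sources

            DependencyConstraint constraint = new DependencyConstraint();
            JavaPackage ui = constraint.addPackage("com.example.ui");
            JavaPackage core = constraint.addPackage("com.example.core");
            ui.dependsUpon(core); // ui may use core, never the other way round

            jdepend.analyze();
            assertTrue("package dependency constraint violated",
                    jdepend.dependencyMatch(constraint));
        }
    }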
You can find dependencies down to class level using the Classycle Dependency Checker; the section "Check Classes Dependency" in its guide might help in your case. Normally a static analysis tool is pointed at either the source codebase (e.g. Project->src) or the test codebase (e.g. Project->test), but not both. In your case, however, you also want the dependencies between source and test code. So running an appropriate dependency analysis tool over the common parent of both (e.g. the project root directory) may be what you need: from the resulting dependency graph you can answer "if source class X changes, which test classes are impacted?". A hypothetical sketch of that lookup follows.
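To illustrate the idea (this is not any tool's real API), here is a sketch that takes a class-level dependency map, such as one you could export from a Classycle report over both trees, and finds the classes whose transitive dependencies include a changed class:

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical sketch: given a map from class name to the classes it
    // depends on, find every (test) class impacted by a changed class.
    public class ImpactFinder {

        private final Map<String, Set<String>> dependsOn;

        public ImpactFinder(Map<String, Set<String>> dependsOn) {
            this.dependsOn = dependsOn;
        }

        public Set<String> impactedBy(String changedClass) {
            Set<String> impacted = new HashSet<String>();
            for (String candidate : dependsOn.keySet()) {
                if (reaches(candidate, changedClass, new HashSet<String>())) {
                    impacted.add(candidate);
                }
            }
            return impacted;
        }

        // Depth-first search through the dependency map, guarding against cycles.
        private boolean reaches(String from, String target, Set<String> seen) {
            if (!seen.add(from)) {
                return false;
            }
            for (String dep : dependsOn.getOrDefault(from, Collections.<String>emptySet())) {
                if (dep.equals(target) || reaches(dep, target, seen)) {
                    return true;
                }
            }
            return false;
        }
    }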
The best tool you can find for your problem is actually not a tool, but a practice. I strongly suggest you read about Test-Driven Development (see Lasse Koskela's book) and Specification by Example (see Gojko Adzic's books; they're great).
Using these practices will fundamentally change two things:
The reason why I find this relevant to your question is that your scenario hints at the exact opposite role of the tests: people change the code and then think "oh no... now I have to go figure out what I broke in those damn tests".
From my experience, tests should not be overlooked or treated as "lower-grade code". And while my answer points to a methodology change whose results will only be visible in the long run, it might help you avoid the issue altogether in the future.
You can figure out which tests are relevant by tracking what code they touch. You can track what code they touch by using test coverage tools.
Most test coverage tools build an explicit set of the locations a running test executes; that's the point of "coverage". If you organize your test runs to execute one unit test at a time and take a snapshot of the coverage data after each, then you know for each test what code it covers.
When code is modified, you can determine the intersection of the modified code and what an individual test covers. If the intersection is non-empty, you surely need to run that test again, and likely you'll need to update it.
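One way to get such per-test snapshots, sketched here with the JaCoCo runtime agent and a JUnit 4 rule; this assumes the test JVM was started with the JaCoCo agent attached, and the coverage/ directory name is my own placeholder:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import org.jacoco.agent.rt.IAgent;
    import org.jacoco.agent.rt.RT;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;

    // Dumps a separate .exec file per test method, so each test's coverage
    // can later be intersected with the set of modified code.
    public class PerTestCoverageRule extends TestWatcher {
        @Override
        protected void finished(Description description) {
            try {
                IAgent agent = RT.getAgent();
                // Snapshot the execution data collected so far and reset it,
                // so the next test starts from a clean slate.
                byte[] data = agent.getExecutionData(true);
                new File("coverage").mkdirs();
                File out = new File("coverage", description.getClassName()
                        + "." + description.getMethodName() + ".exec");
                try (FileOutputStream stream = new FileOutputStream(out)) {
                    stream.write(data);
                }
            } catch (IOException e) {
                throw new RuntimeException("could not dump per-test coverage", e);
            }
        }
    }

Register it with @Rule in a common base test class; each test then leaves behind one .exec file that JaCoCo's analysis API or report tooling can turn into a covered-line set.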
There are several practical problems with making this work.
First, it is often hard to figure out how the test coverage tools record this position data. Second, you have to get the test harness to capture it on a per-test basis; that may be awkward to organize, and the coverage data may be clumsy to extract and store. Third, you need to compute the intersection of "code modified" with "code covered" by a test; abstractly this is largely a problem of intersecting big bit vectors (a sketch follows below). Finally, capturing "code modified" is a bit tricky because code sometimes moves: if line 100 in a file was covered and that line moves within the file, you may not get comparable position data. That will lead to false positives and false negatives.
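The intersection step itself is simple; a sketch with java.util.BitSet, one bit per source line:

    import java.util.BitSet;

    // A bit is set when the line is covered by the test (or touched by the
    // change). Rerun the test iff the two sets overlap.
    public class CoverageIntersection {

        public static boolean mustRerun(BitSet coveredByTest, BitSet modifiedLines) {
            BitSet overlap = (BitSet) coveredByTest.clone();
            overlap.and(modifiedLines);
            return !overlap.isEmpty();
        }

        public static void main(String[] args) {
            BitSet covered = new BitSet();
            covered.set(10, 20);              // test covered lines 10..19
            BitSet modified = new BitSet();
            modified.set(15);                 // the diff touched line 15
            System.out.println(mustRerun(covered, modified)); // prints true
        }
    }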
There are test coverage tools which record the coverage data in an easily captured form and can do these computations. Determining code changes is trickier; you can use diff, but moved code will confuse the issue somewhat. There are diff tools that compare code structures and identify such moves, so you can get better answers.
If you have a source code slicer, you could compute how the tested output is (backward-slice) dependent on the input; all code in that slice affects the test, obviously. The bad news is that such slicers are not easy to come by.
Some tools which might help; note that not all of them integrate with CI.
iPlasma is a great tool, an integrated platform for quality analysis of object-oriented systems.
CodePro is an Eclipse plugin which helps in detecting code and design problems (e.g. duplicated code, classes that break encapsulation, or methods placed in the wrong class). (Now acquired by Google.)
Relief is a tool visualizing the following parameters of Java projects:
- the size of a package, i.e. how many classes and interfaces it contains
- the kind of item being visualized (packages and classes are represented as boxes, interfaces and a type's fields as spheres)
- how heavily an item is used (represented as gravity, i.e. distance from the center)
- the number of dependencies (represented as depth)
Stan4j is a commercial tool that costs a couple of hundred dollars. It targets Java projects only and comes very close to Sonar (perhaps with slightly better reports, though I'm not sure). It has good Eclipse integration.
IntelliJ's built-in Dependency Analysis
From what I could understand of your problem, you need to track two OO metrics: afferent coupling (Ca) and efferent coupling (Ce). With these you can narrow down to the affected packages. You could explore the opportunity of writing an Eclipse plugin which, on every build, highlights the relevant classes based on the Ca and Ce metrics; a JDepend-based sketch follows.
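Before writing a plugin, you can already get Ca and Ce per package from JDepend's API. A minimal sketch, again assuming compiled classes in build/classes:

    import java.util.Collection;
    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;

    // Prints afferent (Ca) and efferent (Ce) coupling for every analysed package.
    public class CouplingReport {
        public static void main(String[] args) throws Exception {
            JDepend jdepend = new JDepend();
            jdepend.addDirectory("build/classes"); // placeholder output directory
            Collection<?> packages = jdepend.analyze();
            for (Object o : packages) {
                JavaPackage p = (JavaPackage) o;
                System.out.printf("%s Ca=%d Ce=%d%n",
                        p.getName(), p.afferentCoupling(), p.efferentCoupling());
            }
        }
    }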
I'm a huge fan of Sonar myself. Since you already have it running in your CI environment, that would be my suggestion. I'm not quite sure how Sonar's Dependency Cycle Matrix doesn't already do what you want, or rather what it is you want in addition to that.