I currently have about 10 tests that check that my Tetris piece doesn't move left if there is a piece in the path, or a wall. Now I will have to test the same behaviour for moving right.
Remember that your tests are pushing against your code. If you find that your tests look duplicated except for something like left/right, then maybe there is some underlying code that is duplicated between left and right. So you might want to see if you can refactor your code into a single routine and pass it a left-or-right flag.
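As a minimal sketch of that refactoring (the `Board`, `Piece`, and `try_move` names are hypothetical, not taken from the question's code), a single movement routine parameterised by direction means the "blocked by a wall or another piece" rule lives in exactly one place:

```python
# Hypothetical sketch: one movement routine parameterised by direction.

LEFT, RIGHT = -1, +1


class Board:
    def __init__(self, width, occupied=()):
        self.width = width
        self.occupied = set(occupied)   # columns already filled by settled pieces

    def is_blocked(self, column, piece_width):
        out_of_bounds = column < 0 or column + piece_width > self.width
        return out_of_bounds or any(c in self.occupied
                                    for c in range(column, column + piece_width))


class Piece:
    def __init__(self, column, width=1):
        self.column = column
        self.width = width

    def try_move(self, board, direction):
        """Shift one column left (-1) or right (+1) unless something blocks it."""
        new_column = self.column + direction
        if board.is_blocked(new_column, self.width):
            return False
        self.column = new_column
        return True
```

With that shape, a left test and a right test differ only in the `direction` argument they pass, which is exactly the kind of duplication a parametrised test absorbs well.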
Test code is like any other code and should be maintained and refactored.
This means that if you have shared logic, extract it to its own function.
Some unit test libraries such as the xUnit family have specific test fixture, setup and teardown attributes for such shared code.
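For instance, with Python's `unittest` (an xUnit-family library), the shared board-and-piece setup can live in `setUp` so each test body stays short. The `tetris` module and its `Board`/`Piece` classes are hypothetical stand-ins for the question's code:

```python
import unittest

from tetris import Board, Piece, LEFT, RIGHT  # hypothetical module


class MovementBlockedTest(unittest.TestCase):
    def setUp(self):
        # Shared fixture: runs before every test method.
        self.board = Board(width=10, occupied={0, 9})  # settled blocks at the edges
        self.piece = Piece(column=1)

    def test_move_left_blocked_by_piece(self):
        self.assertFalse(self.piece.try_move(self.board, LEFT))

    def test_move_right_into_free_space(self):
        self.assertTrue(self.piece.try_move(self.board, RIGHT))


if __name__ == "__main__":
    unittest.main()
```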
See this related question - "Why is copy paste of code dangerous?".
I have a somewhat controversial position on this one. While code duplication must be avoided as much as possible in production code, this is not so bad for test code. Production and test code differ in nature and intent:
Production code can afford some complexity so as to be understandable/maintainable. You want the code to be at the right abstraction level, and the design to be consistent. This is OK because you have tests for it, and you can make sure it works. Code duplication in production code wouldn't be a problem if you really had 100% code coverage at the logical level. That is really hard to achieve, so the rule is: avoid duplication and maximize code coverage.
Test code, on the other hand, must be as simple as possible. You must make sure that test code actually tests what it should. If tests are complicated, you might end up with bugs in the tests or the wrong tests -- and you don't have tests for the tests. So the rule is: keep it simple. If test code is duplicated, that is not such a big problem when something changes: if the change is applied in only one test, the other will fail until you fix it.
The main point I want to make is that production and test code have a different nature. Then it's always a matter of common sense, and I'm not saying you should not factor test code, etc. If you can factor something in test code and you're sure it's ok, then do it. But for test code, I would favor simplicity over elegance, while for production code, I would favor elegance over simplicity. The optimum is of course to have a simple, elegant solution :)
PS: If you really don't agree, please leave a comment.
There's nothing wrong with copy-pasting, and it's a good place to start. Indeed, it's better than starting from scratch: if you've got working code (whether tests or otherwise), copy-paste is more reliable than writing from scratch, as well as quicker.
However, that's only step 1. Step 2 is refactoring for commonality, and step 1 is only there to help you see that commonality. If you can already see it clearly without copying (sometimes it's easier to copy first and then examine, sometimes it isn't; it depends on the person doing it), then skip step 1.
I sometimes find myself writing very complex unit tests just to avoid test code duplication. I don't think that's a good idea. Any single unit test should be as simple as possible, and if you need some duplication to achieve that, let it be.
On the other hand, if your unit test runs to hundreds of lines of code, it should obviously be refactored, and that refactoring will be a simplification.
And, of course, try avoiding meaningless unit test duplication, like testing 1+1=2, 2+2=4, 3+3=6. Write data-driven tests if you really need to test the same method on different data.
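With pytest, for example, the 1+1/2+2/3+3 checks collapse into one data-driven test (a sketch; the `add` function here is just a placeholder for the method under test):

```python
import pytest


def add(a, b):
    # Placeholder for the method under test.
    return a + b


@pytest.mark.parametrize("a, b, expected", [
    (1, 1, 2),
    (2, 2, 4),
    (3, 3, 6),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```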
If you are repeating code, then you should refactor. Your situation is a common problem that is solved using parametric testing. Parametric testing, when supported by the test harness, allows you to pass multiple sets of input values as parameters. You may also want to look up fuzz testing; I have found it useful in situations such as this.
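As a sketch of the fuzzing idea, a property-based library such as Hypothesis can throw many random positions and directions at the same invariant. The `tetris` module and its `Board`/`Piece`/`LEFT`/`RIGHT` names are again hypothetical stand-ins for the question's code:

```python
from hypothesis import given, strategies as st

from tetris import Board, Piece, LEFT, RIGHT  # hypothetical module


@given(column=st.integers(min_value=0, max_value=9),
       direction=st.sampled_from([LEFT, RIGHT]))
def test_piece_never_leaves_the_board(column, direction):
    board = Board(width=10)
    piece = Piece(column=column)
    piece.try_move(board, direction)
    # Invariant: whether or not the move was allowed, the piece stays on the board.
    assert 0 <= piece.column <= board.width - piece.width
```

A property like this complements the hand-written parametric cases: the parametric test pins down specific expected outcomes, while the fuzz-style test checks that the general rule holds for inputs you did not think of.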