This question is not about what OCP is. And I am not looking for simplistic answers, either.
So, here is why I ask this. OCP was first described in the late 80s. It reflects the thinking and context of that time. The concern was that changing source code to add or modify functionality, after the code had already been tested and put into production, would end up being too risky and costly. So the idea was to avoid changing existing source files as much as possible, and only add to the codebase in the form of subclasses (extensions).
I may be wrong, but my impression is that network-based version control systems (VCS) were not widely used back then. The point is that a VCS is essential to manage source code changes.
The idea of refactoring is much more recent. The sophisticated IDEs that enable automated refactoring operations certainly did not exist back then. Even today, many developers don't use the best refactoring tools available. The point here is that such modern tools allow a developer to change literally thousands of lines of code, safely, in a few seconds.
Lastly, today the idea of automated developer testing (unit/integration tests) is widespread. There are many free and sophisticated tools that support it. But what good is creating and maintaining a large automated test suite if we never/rarely change existing code? New code, as the OCP requires, will only require new tests.
So, does the OCP really make sense today? I don't think so. Instead, I would indeed prefer to change existing code when adding new functionality, if the new functionality does not require new classes. Doing so will keep the codebase simpler, smaller, and much easier to read and understand. The risk of breaking previous functionality will be managed through a VCS, refactoring tools, and automated test suites.
OCP makes a lot of sense when you aren't the consumer of your code. If I'm writing a class, and I or my team is writing all of the classes that consume it, I agree: refactoring as things change is no huge deal at all.
If, on the other hand, I am writing an API for my customers, or I have multiple consumers in a large organization with varying interests, the OCP is critical because I can't refactor as easily.
Also, if you just refactor your class to meet everyone's needs, you'll get a bloated class as a result. If you designed the class to allow consumers to extend your class rather than modify it, you wouldn't really have this problem.
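One way to design a class so consumers extend it rather than modify it is to expose a deliberate extension hook. Here is a minimal sketch (in Java, with hypothetical class names; this illustrates the idea in the answer, not any specific code from the question):

```java
// Hypothetical example: ReportExporter is "closed" - its export logic never
// changes - but "open" because subclasses can override the header() hook.
class ReportExporter {
    final String export(String data) {
        return header() + data;   // the stable algorithm, closed for modification
    }
    // Extension point: consumers override this instead of asking us to edit the class.
    protected String header() { return "REPORT\n"; }
}

// A consumer's extension: no change to ReportExporter was needed.
class BrandedExporter extends ReportExporter {
    @Override protected String header() { return "ACME REPORT\n"; }
}
```

Each consumer gets its own variation without the base class accumulating everyone's special cases.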
I never heard of OCP being that. Maybe you are referring to something else, but the OCP I know says "A module/class must be open for extension, but closed for modification", meaning that you shouldn't modify the source code of the module to enhance it; instead, the module or object should be easy to extend.
Think of eclipse (or any other plugin based software for that matter). You don't have the source code, but anyone can write a plugin to extend the behaviour or to add another feature. You didn't modify eclipse, but you extended it.
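The Eclipse analogy can be sketched in code. A minimal plugin host (hypothetical names, Java used for illustration) stays closed for modification, while anyone can extend it by registering a new plugin:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a plugin architecture: the Host class never changes,
// third parties extend the application by supplying new Plugin implementations.
interface Plugin {
    String name();
    void run();
}

class Host {
    private final List<Plugin> plugins = new ArrayList<>();

    // The extension point: new behaviour arrives without editing Host's source.
    void register(Plugin p) { plugins.add(p); }

    List<String> installed() {
        List<String> names = new ArrayList<>();
        for (Plugin p : plugins) names.add(p.name());
        return names;
    }
}
```

You never touched the host's source code, but you extended what the application can do.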
So, yes, the Open/Closed principle is indeed very valid and quite a good idea.
UPDATE:
I see that the main conflict here is between code that is still under development and code that is already shipped and used by someone. So I went and checked with Bertrand Meyer, the author of this principle. He says:
A module is said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (its interface in the sense of information hiding). At the implementation level, closure for a module also implies that you may compile it, perhaps store it in a library, and make it available for others (its clients) to use.
So, indeed, the Open/Closed Principle refers only to stable entities that are ready to be compiled and used.
Alright, so here's my response.
I cannot testify to the historic origin of the principle, but it is still invoked frequently in modern times. I don't think it's about it being dangerous to change functioning code (though it of course is); it's about allowing you to separate out ideas.
Suppose we have a component
public class KnownFriendsFilter {
    private readonly IList&lt;Person&gt; _friends;

    public KnownFriendsFilter(IList&lt;Person&gt; friends) {
        _friends = friends;
    }

    // Keeps only the people who appear in the known-friends list.
    public IList&lt;Person&gt; GetFriends(IList&lt;Person&gt; people) {
        return people.Where(p => _friends.Contains(p)).ToList();
    }
}
Now say the algorithm in this specific component needs a slight modification - for example, you want to make sure that the initial list passed in contains only distinct people. That is a concern of the KnownFriendsFilter itself, so by all means change the class.
However there is a difference between this class and the feature it supports.
- This class is really just to filter an array of people for known friends
- The feature that it supports is to find all friends from an array of people
The difference is that the feature is concerned with function while the class is concerned with implementation. Most requests we get to change that feature will fall outside the specific responsibility of the class.
For example, let's say we want to add a blacklist of any names that begin with the letter "X" (because those people are obviously spacemen and not our friends). That supports the feature but is not really part of what this class is about; sticking it in the class would be awkward. What about when the next request comes in, and the application is now being used exclusively by misogynists so any female names must also be excluded? Now you've got to add logic into the class to decide whether a name is male or female - or at least make it know about some other class that can - and the class is growing in responsibilities and becoming very bloated. And what about cross-cutting concerns? Now we want to log whenever we filter an array of people; does that go right in there too?
It would be better to factor out an IFriendsFilter interface and wrap this class in a decorator, or re-implement it as a chain of responsibility over the IList. That way you can place each of those responsibilities into its own class that supports just that concern. If you inject dependencies, then any code that uses this class (and it's used centrally in our application) doesn't have to change at all!
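The decorator idea can be sketched briefly. The following is a hypothetical illustration (written in Java for brevity, with strings standing in for the Person type); the core filter keeps its single responsibility, and the spacemen blacklist arrives as a wrapper, not an edit:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the decorator approach described above.
interface FriendsFilter {
    List<String> getFriends(List<String> people);
}

// The core filter: its one concern is matching against known friends.
class KnownFriendsFilter implements FriendsFilter {
    private final List<String> friends;
    KnownFriendsFilter(List<String> friends) { this.friends = friends; }
    public List<String> getFriends(List<String> people) {
        return people.stream()
                     .filter(friends::contains)
                     .collect(Collectors.toList());
    }
}

// Decorator: the "no names starting with X" rule, added without touching the core class.
class NoSpacemenFilter implements FriendsFilter {
    private final FriendsFilter inner;
    NoSpacemenFilter(FriendsFilter inner) { this.inner = inner; }
    public List<String> getFriends(List<String> people) {
        return inner.getFriends(people).stream()
                    .filter(name -> !name.startsWith("X"))
                    .collect(Collectors.toList());
    }
}
```

The logging and gender-exclusion requests from above would each become one more decorator in the same chain, and the composition is wired up in one place.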
So again, the principle isn't about never changing existing code - it's about not ending up in a situation where you are faced with the decision between bloating the responsibilities of a commonly used class or having to edit every single location that uses it.
Interesting question, and based on the strict definition of the open closed principle I can see where you're coming from.
I have come to define the open/closed principle slightly differently, and I think that version of the principle should apply: the idea is to apply it far more broadly.
I like to say that all my classes (as a whole) involved in the application should be closed for modification and open for extension. So the principle is that if I need to change the behaviour and/or operation of the application, I do not in fact modify a class, but add a new one and then change the relationships to point to this new one (depending, of course, on the size of the change). If I'm following the single responsibility principle and utilising inversion of control, this should occur naturally. What then happens is that all changes come to be extensions. The system can now act both in the former way and in the new way, and changing between them = changing a relationship.
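A minimal sketch of "add a new class and change a relationship" might look like this (hypothetical names, Java used for illustration; the wiring would typically be done by an IoC container rather than by hand):

```java
// Hypothetical sketch: behaviour changes by adding a new class and
// re-pointing one relationship, never by editing Checkout itself.
interface PricingPolicy {
    int price(int baseCents);
}

class RegularPricing implements PricingPolicy {
    public int price(int baseCents) { return baseCents; }
}

// New requirement (10% discount) -> new class; the old one is untouched.
class DiscountPricing implements PricingPolicy {
    public int price(int baseCents) { return baseCents * 90 / 100; }
}

class Checkout {
    private final PricingPolicy policy;   // the injected relationship
    Checkout(PricingPolicy policy) { this.policy = policy; }
    int total(int baseCents) { return policy.price(baseCents); }
}
```

Switching the application from regular to discounted pricing means changing which PricingPolicy is injected - a one-line change in the composition root, with both behaviours still available.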
So, does the OCP really make sense today? I don't think so.
Sometimes it does:
When you've released the base class to customers and can't easily modify it on all your customers' machines (see for example "DLL Hell")
When you're a customer, who didn't write the base class yourself and aren't the one maintaining it
More generally, any situation where the base class is used by more than one team and/or for more than one project
See also Conway's Law.
The point here is that such modern tools allow a developer to change literally thousands of lines of code, safely, in a few seconds.
Which is fine if you have 'a' developer. If you are working in teams, certainly with version control, probably with branching and merging, then the ability to ensure that changes from different people tend to end up concentrated in different files is pretty vital to being able to control what's going on.
You could imagine language-specific merge/branch tools that could take three refactorings done in parallel and merge them as easily as changes to isolated files. But such tools don't exist, and if they did, I wouldn't want to rely on them.
Source: https://stackoverflow.com/questions/1416476/is-the-open-closed-principle-a-good-idea