There are a number of reasons why this is a bad idea.
First, this is a bad idea because the standard containers do not have virtual destructors. You should never use a type polymorphically if it does not have a virtual destructor: deleting a derived object through a pointer to the base is undefined behavior in that case, so you cannot guarantee that your derived class is ever cleaned up.
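To make this concrete, here is a minimal sketch of the failure (MyVector is a hypothetical name used only for illustration):

```cpp
#include <vector>

struct MyVector : std::vector<int> {
    ~MyVector() { /* release some resource the derived class owns */ }
};

int main() {
    std::vector<int>* p = new MyVector;  // using the container polymorphically
    delete p;  // undefined behavior: std::vector's destructor is not virtual,
               // so ~MyVector() is not guaranteed to run and the resource leaks
}
```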
Second, it is really bad design, for several reasons. First, you should extend the functionality of standard containers through algorithms that operate generically. The reason is simple complexity: if you have to write each algorithm as a member function of every container it applies to, then with M containers and N algorithms you must write M x N methods. If you write your algorithms generically over iterators, you write only N algorithms. So you get much more reuse.
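As a sketch of the generic approach, here is a hypothetical `contains` algorithm written once over iterators; the same template then serves vector, list, deque, and every other container (C++23 later added std::ranges::contains for exactly this):

```cpp
#include <algorithm>
#include <list>
#include <vector>

// Written once, generically, instead of once per container.
template <typename InputIt, typename T>
bool contains(InputIt first, InputIt last, const T& value) {
    return std::find(first, last, value) != last;
}

int main() {
    std::vector<int> v{1, 2, 3};
    std::list<int>   l{4, 5, 6};
    bool in_vector = contains(v.begin(), v.end(), 2);  // same algorithm,
    bool in_list   = contains(l.begin(), l.end(), 5);  // any container
    (void)in_vector; (void)in_list;
}
```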
It is also really bad design because you break encapsulation by inheriting from the container. A good rule of thumb is: if you can implement what you need using only the public interface of a type, make that new behavior external to the type. This improves encapsulation. If it is new behavior you want to implement, make it a namespace-scope function (like the algorithms). If you have a new invariant to impose, contain the container within a class.
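A minimal sketch of both options, with hypothetical names:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// New behavior: a namespace-scope function using only the public interface.
int sum(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x;
    return total;
}

// New invariant: containment. The wrapped vector is private, so the
// "no negative values" invariant cannot be bypassed.
class NonNegativeVector {
public:
    void push_back(int x) {
        if (x < 0) throw std::invalid_argument("negative value");
        data_.push_back(x);
    }
    std::size_t size() const { return data_.size(); }
private:
    std::vector<int> data_;
};
```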
Finally, in general, you should never think of inheritance as a means to extend the behavior of a class. This is one of the big, bad lies of early OOP theory that came about from unclear thinking about reuse, and it continues to be taught and promoted to this day even though there is a clear theory of why it is bad. When you use inheritance to extend behavior, you tie that extended behavior to your interface contract in a way that ties users' hands against future changes. For instance, say you have a Socket class that communicates using the TCP protocol, and you extend its behavior by deriving an SSLSocket class from Socket, implementing the higher SSL stack protocol on top of it. Now say you get a new requirement for the same protocol of communications, but over a USB line, or over telephony. You would need to cut and paste all that work into a new class that derives from a USB class, or a Telephony class. And now, if you find a bug, you have to fix it in all three places, which won't always happen, which means bugs will take longer to fix and won't always get fixed...
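As a sketch, this is what that trap looks like (names hypothetical, bodies elided):

```cpp
class Socket {                      // TCP transport
public:
    virtual ~Socket() {}
    // ... send/receive over TCP ...
};

class SSLSocket : public Socket {
    // SSL handshake, encryption, session state - all welded to the TCP base.
    // To run SSL over USB or telephony, every line of this has to be
    // duplicated in a USBSSLSocket, a TelephonySSLSocket, ...
};
```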
This is general to any inheritance hierarchy A -> B -> C -> ... When you want to reuse the behaviors you've extended in derived classes like B or C on objects that are not of base class A, you have to redesign or duplicate the implementation. This leads to very monolithic designs that are very hard to change down the road (think Microsoft's MFC, or their .NET, or - well, they make this mistake a lot). Instead, you should almost always think of extension through composition whenever possible. Inheritance should be used when you are thinking "Open/Closed Principle": abstract base classes and runtime dynamic polymorphism through derived classes, each with a full implementation. Hierarchies shouldn't be deep - almost always two levels. Only use more than two when you have different dynamic categories that go to a variety of functions that need that distinction for type safety. In those cases, use abstract bases all the way down to the leaf classes, which hold the implementation.
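To illustrate, here is a minimal composition-based sketch (all names hypothetical): the SSL logic is written once against an abstract Transport, and each hierarchy stays two levels deep, abstract base to concrete leaf.

```cpp
#include <cstddef>
#include <memory>
#include <utility>

class Transport {                   // abstract base: the Open/Closed point
public:
    virtual ~Transport() = default;
    virtual std::size_t send(const char* data, std::size_t n) = 0;
    virtual std::size_t receive(char* buffer, std::size_t n) = 0;
};

class TcpTransport : public Transport {
public:
    std::size_t send(const char*, std::size_t n) override { return n; }  // ... TCP ...
    std::size_t receive(char*, std::size_t) override { return 0; }       // ... TCP ...
};

class UsbTransport : public Transport {
public:
    std::size_t send(const char*, std::size_t n) override { return n; }  // ... USB ...
    std::size_t receive(char*, std::size_t) override { return 0; }       // ... USB ...
};

// SSL implemented once, over any transport. A bug fixed here is fixed
// for TCP, USB, telephony, and whatever comes next.
class SSLSession {
public:
    explicit SSLSession(std::unique_ptr<Transport> t) : transport_(std::move(t)) {}
    void handshake() { /* drive the SSL protocol via transport_->send/receive */ }
private:
    std::unique_ptr<Transport> transport_;
};
```

Adding a new transport now means writing one new leaf class; the SSL code never changes.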