In theory, the natural paradigm for storing XBRL in a database is OLAP, because XBRL is fundamentally about data cubes. OLAP implemented on top of a relational database is called ROLAP.
This is not a trivial problem: facts taken from a large number of taxonomies form a very large and sparse cube (for SEC filings, more than 10,000 dimensions), and creating an SQL schema requires knowing all taxonomies before any import. Whenever a new taxonomy appears, everything must be re-ETLed. This makes relational databases unsuitable as a general solution.
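The sparsity problem can be made concrete with a small sketch. Assuming two hypothetical facts filed under different taxonomies (concept and dimension names below are made up for illustration), a single relational table would need the union of all aspects as columns, leaving most cells NULL and forcing a schema change whenever a new taxonomy introduces new aspects:

```python
# Two hypothetical facts from filings under different taxonomies.
# Each fact binds only a handful of aspects, but the aspects differ.
fact_a = {"concept": "us-gaap:Assets", "entity": "0000123456",
          "period": "2023-12-31", "unit": "USD", "value": 1000000}
fact_b = {"concept": "ifrs-full:Revenue", "entity": "0000654321",
          "period": "2023-12-31", "unit": "EUR", "value": 500000,
          "dim:Segment": "Retail", "dim:Region": "Europe"}

# A single wide table needs the union of all aspects as columns; each
# fact leaves most of them NULL, and a new taxonomy forces an ALTER TABLE.
columns = sorted(set(fact_a) | set(fact_b))
rows = [[f.get(c) for c in columns] for f in (fact_a, fact_b)]
```

With thousands of taxonomies instead of two, the column set explodes into the sparse 10,000+-dimension cube described above.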
If the filings share the same taxonomy and that taxonomy is very simple (that is, it has few dimensions), it is possible to come up with an ad-hoc mapping that stores all facts in a single table with many rows, in the ROLAP sense: facts map to rows, aspects to columns. Some vendors specialize in storing non-dimensional XBRL facts, in which case traditional SQL (or "post-SQL" stores that scale with the number of rows) offerings work well.
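A minimal sketch of this single-table mapping, assuming a simple non-dimensional taxonomy where every fact carries the same few aspects (the column names here are assumptions, not a standard schema):

```python
import sqlite3

# One row per fact, one column per aspect. This only works because the
# (hypothetical) taxonomy is simple and shared by all imported filings.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE facts (
        concept TEXT NOT NULL,   -- the reported concept
        entity  TEXT NOT NULL,   -- filer identifier
        period  TEXT NOT NULL,   -- instant or duration
        unit    TEXT,            -- e.g. a currency
        value   NUMERIC          -- the fact value itself
    )""")
conn.execute("INSERT INTO facts VALUES (?, ?, ?, ?, ?)",
             ("us-gaap:Assets", "0000123456", "2023-12-31", "USD", 1000000))

# Aggregations become plain SQL over the fact rows.
total, = conn.execute(
    "SELECT SUM(value) FROM facts WHERE concept = 'us-gaap:Assets'"
).fetchone()
```

Because every aspect is a fixed column, this design scales with the number of rows, which is exactly the regime where traditional SQL engines perform well.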
Some vendors create one table per XBRL hypercube in the taxonomy, with a schema derived from the definition network and thus different for each hypercube. This can lead to a large number of tables in the database, and queries involving multiple hypercubes require many joins.
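The join cost can be illustrated with a sketch of two hypothetical hypercube tables (the hypercubes, schemas, and data below are invented for illustration, not taken from any real taxonomy):

```python
import sqlite3

# Each hypercube gets its own table, with a schema derived from its
# definition network: different dimensions, hence different columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical hypercube 1: revenue broken down by segment
    CREATE TABLE revenue_by_segment (
        entity TEXT, period TEXT, segment TEXT, value NUMERIC);
    -- Hypothetical hypercube 2: assets broken down by region
    CREATE TABLE assets_by_region (
        entity TEXT, period TEXT, region TEXT, value NUMERIC);
    INSERT INTO revenue_by_segment VALUES ('E1', '2023', 'Retail', 100);
    INSERT INTO assets_by_region  VALUES ('E1', '2023', 'Europe', 900);
""")

# Even this two-hypercube query already needs a join on the shared aspects;
# with hundreds of hypercubes, multi-cube queries multiply the joins.
rows = conn.execute("""
    SELECT r.entity, r.value, a.value
      FROM revenue_by_segment r
      JOIN assets_by_region a
        ON r.entity = a.entity AND r.period = a.period
""").fetchall()
```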
Other vendors make assumptions about the underlying XBRL structure, or about the kinds of queries their users need to run. Restricting the scope of the problem in this way makes it possible to design architectures or SQL schemas tailored to those needs.
Finally, to import large amounts of filings, it is possible to build generic mappings on top of NoSQL data stores rather than relational databases. Large numbers of facts with varying numbers of dimensions fit naturally into large collections of semi-structured documents, and networks map well to a hierarchical format.
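The document-store mapping can be sketched as follows. Each fact becomes one semi-structured document, so dimensions can vary freely from fact to fact without any schema change; a real deployment would use an actual NoSQL store, while here a plain list of dicts with a tiny `find` helper (both hypothetical) stands in for one:

```python
# Each fact is one document; the second fact carries dimensions the
# first does not, with no schema migration needed (values are made up).
facts = [
    {"concept": "us-gaap:Assets", "entity": "E1",
     "period": "2023-12-31", "value": 1000},
    {"concept": "us-gaap:Revenues", "entity": "E1", "period": "2023",
     "dimensions": {"Segment": "Retail", "Region": "Europe"},
     "value": 250},
]

def find(collection, **criteria):
    """Tiny stand-in for a document-store query by exact field match."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in criteria.items())]

matches = find(facts, concept="us-gaap:Assets")
```

A new taxonomy simply contributes documents with new fields, which is why this approach avoids the re-ETL problem that affects fixed relational schemas.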