I am trying to find a reusable way of taking a CSV file and generating an XML file from it that conforms to a specified XSD. I haven't really found a reusable approach for this.
What you have is a single "table" (the CSV file) which contains (probably) denormalized rows representing a (probably) hierarchical data model. You want to map that to an arbitrary hierarchical XML document based on the XSD.
You'll need a tool that can map grouping key columns to XML elements and specify which data columns go in which attributes/child elements. This is a fairly significant problem, unless your mappings are trivial.
Could you post some samples of the CSV and XSD? That might help get a more focused answer.
Well, I don't really have a ready-made, out-of-the-box solution for this, but maybe:
- read your CSV file with a library like FileHelpers; for this, you create a class MyDataType which describes the columns in the CSV, and you get back an array of MyDataType
- if you decorate that class with the proper XML serialization attributes, like [XmlIgnore], [XmlAttribute] and so forth, you might be able to simply serialize the resulting array of MyDataType out to XML that conforms to your XML schema
- or, if that doesn't work, you could create another class that maps to your XML requirements (generate it from the XSD you have), and define a mapping between the two types, MyDataType (from your CSV) and MyXmlDataType (for your XML), with something like AutoMapper
It's not boilerplate you can drop in as-is, but it's fairly close, and you could probably turn it into a small "framework" where you just plug in your own types (if you need to do this frequently).
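The answer above is in .NET terms; the same pipeline can be sketched in Python using only the standard library (csv for parsing, xml.etree.ElementTree for serialization). The record type and column names here are illustrative assumptions, not from the original post:

```python
import csv
import io
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

# Illustrative record type playing the role of MyDataType:
# one field per CSV column.
@dataclass
class MyDataType:
    name: str
    price: str

def read_csv(text):
    """Parse CSV text into a list of MyDataType records."""
    reader = csv.DictReader(io.StringIO(text))
    return [MyDataType(**row) for row in reader]

def to_xml(records, root_tag="items", item_tag="item"):
    """Serialize the records; each field becomes an XML attribute,
    much as [XmlAttribute] would arrange on the .NET side."""
    root = ET.Element(root_tag)
    for rec in records:
        item = ET.SubElement(root, item_tag)
        for f in fields(rec):
            item.set(f.name, getattr(rec, f.name))
    return ET.tostring(root, encoding="unicode")

csv_text = "name,price\nwidget,9.99\ngadget,3.50\n"
print(to_xml(read_csv(csv_text)))
```

The dataclass plays the same role as the attribute-decorated class: one place that describes the columns and, implicitly, the XML shape. Swapping in a different record type is the "plug in your own types" step.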
Microsoft Excel is able to export XML: http://office.microsoft.com/en-us/excel-help/export-xml-data-HP010206401.aspx
I had some problems with creating an exportable XSD format, but this is a really great tool once you've got it working.
If your XSLT engine is compliant with XSLT version 2, then the best solution is here:
This seems like something that would be easy to do, but it's not. XML Schema is a document validation language, not a document production language. It doesn't tell you how to make a new document; it tells you whether or not the document that you made is valid. Those aren't the same thing by a long shot.
For instance, it's trivial to create a complex type in XML Schema that consists of a sequence of optional choices. A foo element can have either a bar or baz child, then either a baz or bat child, then a foo, bar, or bat child. That makes for a rule that can determine that both of these elements are valid:
<foo>
    <baz/>
    <baz/>
    <bar/>
</foo>

<foo>
    <foo>
        <bar/>
    </foo>
</foo>
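A sketch of such a complex type (illustrative only, not taken from any real schema; a strict validator may even object to this shape under the unique-particle-attribution rule, since baz, bar, and bat each appear in more than one choice, which only underlines how far a validation rule is from a production rule):

```xml
<xs:complexType name="fooType">
  <xs:sequence>
    <xs:choice minOccurs="0">
      <xs:element name="bar"/>
      <xs:element name="baz"/>
    </xs:choice>
    <xs:choice minOccurs="0">
      <xs:element name="baz"/>
      <xs:element name="bat"/>
    </xs:choice>
    <xs:choice minOccurs="0">
      <xs:element name="foo" type="fooType"/>
      <xs:element name="bar"/>
      <xs:element name="bat"/>
    </xs:choice>
  </xs:sequence>
</xs:complexType>
```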
At the same time, that rule gives you pretty much zero help in determining how to take a tuple of data items and create a foo element from it.
Generally, when someone asks this question, they're looking at one or two schemas they're using which define a relatively simple document structure. It seems intuitive that it should be easy to use those schemas as input to a mapping process. It probably is. What's not easy, or even possible, is a mapping process that can take any schema as an input.
What I've done instead, in my projects, is to simplify the problem. I've built programs that use CSV and XML and support schema validation, but in these programs, the schema is an output. I've defined a simple XML metadata format, e.g.:
<item name="foo" type="string" size="10" allowNulls="true" .../>
<item name="bar" type="date" allowNulls="false" .../>
Then I can use that metadata to control XML production from CSV input, and I can also use it to produce a schema that the XML my program produces will conform to. If I change my metadata, my XML and schema changes appropriately.
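A minimal sketch of that idea in Python (the metadata entries, element names, and helper functions are illustrative assumptions; the original program isn't shown in the post). The point is that one metadata structure drives both the XML production and the schema generation:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Illustrative metadata, one entry per column, mirroring the
# <item name="..." type="..." allowNulls="..."/> format above.
METADATA = [
    {"name": "foo", "type": "string", "allowNulls": "true"},
    {"name": "bar", "type": "date", "allowNulls": "false"},
]

XS = "http://www.w3.org/2001/XMLSchema"

def csv_to_xml(text):
    """Produce XML from CSV input, driven by the metadata."""
    root = ET.Element("records")
    for row in csv.DictReader(io.StringIO(text)):
        rec = ET.SubElement(root, "record")
        for col in METADATA:
            child = ET.SubElement(rec, col["name"])
            child.text = row.get(col["name"], "")
    return ET.tostring(root, encoding="unicode")

def make_xsd():
    """Produce, from the same metadata, a schema that the XML
    emitted by csv_to_xml will conform to."""
    type_map = {"string": "xs:string", "date": "xs:date"}
    schema = ET.Element("xs:schema", {"xmlns:xs": XS})
    records = ET.SubElement(schema, "xs:element", name="records")
    ct = ET.SubElement(records, "xs:complexType")
    seq = ET.SubElement(ct, "xs:sequence")
    record = ET.SubElement(seq, "xs:element",
                           name="record", maxOccurs="unbounded")
    rct = ET.SubElement(record, "xs:complexType")
    rseq = ET.SubElement(rct, "xs:sequence")
    for col in METADATA:
        attrs = {"name": col["name"], "type": type_map[col["type"]]}
        if col["allowNulls"] == "true":
            attrs["minOccurs"] = "0"  # nullable column -> optional element
        ET.SubElement(rseq, "xs:element", attrs)
    return ET.tostring(schema, encoding="unicode")

print(csv_to_xml("foo,bar\nhello,2024-01-01\n"))
print(make_xsd())
```

Changing METADATA changes both outputs in lockstep, which is exactly the property described above: the XML and the schema can never drift apart, because neither is written by hand.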
Of course, if the schemas are genuinely an input to your process (e.g. they're provided by a third party), this won't even start to help you.