We have loads of apps where we fetch data from remote web services as JSON and then use a parser to translate that into a Core Data model.
For one of our apps I use SBJson to parse the JSON into NSDictionaries, then save them as .plist files using [dict writeToFile:saveFilePath atomically:YES]. Loading is just as simple: NSDictionary *dict = [NSDictionary dictionaryWithContentsOfFile:saveFilePath]. It's fast, efficient, and easy. No need for a database.
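A minimal sketch of that save/load round trip; the file name and the helper functions are my own, and it assumes the parsed JSON contains only property-list types (strings, numbers, arrays, dictionaries):

    #import <Foundation/Foundation.h>

    // Hypothetical location for the saved plist in the app's Documents directory.
    static NSString *SaveFilePath(void) {
        NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                              NSUserDomainMask, YES) lastObject];
        return [docs stringByAppendingPathComponent:@"feed.plist"];
    }

    static void SaveDictionary(NSDictionary *dict) {
        // writeToFile:atomically: returns NO if any value is not a property-list type.
        if (![dict writeToFile:SaveFilePath() atomically:YES]) {
            NSLog(@"Failed to write plist");
        }
    }

    static NSDictionary *LoadDictionary(void) {
        // Returns nil if the file does not exist or cannot be read.
        return [NSDictionary dictionaryWithContentsOfFile:SaveFilePath()];
    }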
JSON Framework is one. It'll turn your JSON into native NSDictionary and NSArray objects. I don't know anything about its performance on a large document like that, but lots of people use it and like it. It's not the only JSON library for iOS, but it's a popular one.
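On newer SDKs (iOS 5 and later) you can also get the same NSDictionary/NSArray output without any third-party library, using Foundation's NSJSONSerialization. A minimal sketch, where jsonData stands in for the bytes returned by your web service:

    #import <Foundation/Foundation.h>

    static id ParseJSON(NSData *jsonData) {
        NSError *error = nil;
        id parsed = [NSJSONSerialization JSONObjectWithData:jsonData
                                                    options:NSJSONReadingMutableContainers
                                                      error:&error];
        if (!parsed) {
            NSLog(@"JSON parse error: %@", error);
        }
        // The result is an NSDictionary or NSArray tree of Foundation objects,
        // just like the output of the third-party parsers mentioned above.
        return parsed;
    }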
When deciding what persistence to use, it's important to remember that Core Data is first and foremost an object graph management system. Its true function is to create the runtime model layer of Model-View-Controller apps. Persistence is actually a secondary, even optional, function of Core Data.
The major modeling/persistence concerns are the size of the data and the complexity of the data. So, the relative strengths and weaknesses of each type of persistence would break down like this:
        _______________________________
       |               |               |
    2  |               |               |
       |      SQL      |   Core Data   |  4
  s    |               |               |
  i    |_______________|_______________|
  z    |               |               |
  e    |               |               |
    1  |  Collection   |   Core Data   |  3
       |  plist/xml    |               |
       |               |               |
        -------------------------------
                  Complexity --->
To this we could add a third, lesser dimension: volatility, i.e. how often the data changes.
(1) If the size, complexity, and volatility of the data are all low, then a collection (e.g. NSArray, NSDictionary, NSSet) of serialized custom objects is the best option. Collections must be read entirely into memory, which limits their effective persistence size. They provide no complexity management, and every change requires rewriting the entire persistence file.
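For the "serialized custom object" case, the objects themselves need to adopt NSCoding so a collection of them can be archived in one shot. A minimal sketch, where the Card class, its single property, and the save path are hypothetical:

    #import <Foundation/Foundation.h>

    // Hypothetical model object that knows how to serialize itself.
    @interface Card : NSObject <NSCoding>
    @property (nonatomic, copy) NSString *title;
    @end

    @implementation Card
    - (void)encodeWithCoder:(NSCoder *)coder {
        [coder encodeObject:self.title forKey:@"title"];
    }
    - (id)initWithCoder:(NSCoder *)coder {
        if ((self = [super init])) {
            _title = [[coder decodeObjectForKey:@"title"] copy];
        }
        return self;
    }
    @end

    static void SaveAndLoadCards(NSArray *cards, NSString *savePath) {
        // The whole collection is written in one pass...
        [NSKeyedArchiver archiveRootObject:cards toFile:savePath];
        // ...and the whole collection is read back into memory in one pass.
        NSArray *restored = [NSKeyedUnarchiver unarchiveObjectWithFile:savePath];
        NSLog(@"Restored %lu cards", (unsigned long)[restored count]);
    }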
(2) If the size is very large but the complexity is low, then SQL or another database API can give superior performance. Think of an old-fashioned library card-catalog system: every card has the same structure, the cards have no relationships to one another, and the cards have no behaviors. SQL and other procedural databases are very good at processing large amounts of low-complexity information. If the data is simple, SQL can handle even highly volatile data efficiently. If the UI is equally simple, there is little overhead in integrating it into the object-oriented design of an iOS/MacOS app.
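To make the card-catalog analogy concrete, here is a minimal sketch of that flat, low-complexity style of storage using the sqlite3 C API directly; the table name, its single column, and the helper function are hypothetical:

    #import <Foundation/Foundation.h>
    #import <sqlite3.h>

    static void SaveCardTitles(NSString *dbPath, NSArray *titles) {
        sqlite3 *db = NULL;
        if (sqlite3_open([dbPath UTF8String], &db) != SQLITE_OK) return;

        // One flat table: every row has the same shape, no relationships, no behavior.
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS card (title TEXT)", NULL, NULL, NULL);

        sqlite3_stmt *stmt = NULL;
        sqlite3_prepare_v2(db, "INSERT INTO card (title) VALUES (?)", -1, &stmt, NULL);
        for (NSString *title in titles) {
            sqlite3_bind_text(stmt, 1, [title UTF8String], -1, SQLITE_TRANSIENT);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);
        sqlite3_close(db);
    }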
(3) As the data grows more complex, Core Data quickly becomes superior. The "managed" part of "managed objects" manages complexity in relationships and behaviors. With collections or SQL, you have to manage that complexity manually, and you can quickly find yourself swamped. In fact, I have seen people trying to manage complex data with SQL end up writing their own miniature Core Data stack. Needless to say, when you combine complexity with volatility, Core Data is even better, because it handles the side effects of insertions and deletions automatically.
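To make the "managed" part concrete, here is a minimal sketch of inserting two related managed objects; the Author and Book entities, their attributes, and the author relationship are hypothetical and would be defined in your model file:

    #import <CoreData/CoreData.h>

    static void InsertSampleObjects(NSManagedObjectContext *context) {
        NSManagedObject *author = [NSEntityDescription insertNewObjectForEntityForName:@"Author"
                                                                inManagedObjectContext:context];
        [author setValue:@"Kurt Vonnegut" forKey:@"name"];

        NSManagedObject *book = [NSEntityDescription insertNewObjectForEntityForName:@"Book"
                                                              inManagedObjectContext:context];
        [book setValue:@"Cat's Cradle" forKey:@"title"];

        // Set one side of the relationship; Core Data maintains the inverse for you
        // and, through delete rules, the side effects of later deletions.
        [book setValue:author forKey:@"author"];

        NSError *error = nil;
        if (![context save:&error]) {
            NSLog(@"Save failed: %@", error);
        }
    }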
(Complexity of the interface is also a concern. SQL can handle a single large, static table, but when you add hierarchies of tables that can change on the fly, SQL becomes a nightmare. Core Data, NSFetchedResultsController, and UITableViewController/delegates make it trivial.)
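A minimal sketch of that wiring, reusing the hypothetical Book entity and assuming the table view controller adopts NSFetchedResultsControllerDelegate:

    #import <CoreData/CoreData.h>

    static NSFetchedResultsController *MakeBookController(NSManagedObjectContext *context,
                                                          id<NSFetchedResultsControllerDelegate> delegate) {
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Book"];
        request.sortDescriptors = @[ [NSSortDescriptor sortDescriptorWithKey:@"title" ascending:YES] ];

        NSFetchedResultsController *frc =
            [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                managedObjectContext:context
                                                  sectionNameKeyPath:nil
                                                           cacheName:nil];
        frc.delegate = delegate;   // the table view controller receives change callbacks

        NSError *error = nil;
        [frc performFetch:&error];
        return frc;
    }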
(4) With high complexity and high size, Core Data is clearly the superior choice. Core Data is highly optimized, so increases in graph size don't bog things down as much as they do with SQL. You also get highly intelligent caching.
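For example, a fetch request can be batched so that only the rows currently needed are fully materialized; a minimal sketch, again using the hypothetical Book entity:

    #import <CoreData/CoreData.h>

    static NSArray *FetchBooks(NSManagedObjectContext *context) {
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Book"];
        request.fetchBatchSize = 50;   // only ~50 fully realized rows in memory at a time
        request.predicate = [NSPredicate predicateWithFormat:@"title BEGINSWITH %@", @"C"];

        NSError *error = nil;
        NSArray *books = [context executeFetchRequest:request error:&error];
        // `books` behaves like a normal array, but objects outside the current
        // batch are faults that Core Data fills in (and caches) on demand.
        return books;
    }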
Also, don't confuse "I understand SQL thoroughly but not Core Data" with "Core Data has a high overhead." It really doesn't. Even when Core Data isn't the cheapest way to get data in and out of persistence, its integration with the rest of the API usually produces superior results when you factor in speed of development and reliability.
In this particular case, I can't tell from the description whether you are in case (2) or case (4). It depends on the internal complexity of the data AND the complexity of the UI. You say:
I don't think I want to create a Core Data model with 100's of entities, and then use a mapper to import the JSON into it.
Do you mean actual abstract entities here, or just managed objects? Remember, entities are to managed objects what classes are to instances. If the former, then yes, Core Data will be a lot of work up front; if the latter, it won't be. You can build up very large, complex graphs with just two or three related entities.
Remember also that you can use configurations to put different entities into different stores, even if they all share a single context at runtime. That can let you put temporary info into one store, work with it just like the more persistent data, and then delete the store when you are done with it.
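A minimal sketch of that arrangement, assuming two configurations named "Permanent" and "Scratch" have been defined in the managed object model:

    #import <CoreData/CoreData.h>

    static NSPersistentStoreCoordinator *MakeCoordinator(NSManagedObjectModel *model,
                                                         NSURL *permanentURL,
                                                         NSURL *scratchURL) {
        NSPersistentStoreCoordinator *psc =
            [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

        NSError *error = nil;
        // Long-lived entities go into one store file...
        [psc addPersistentStoreWithType:NSSQLiteStoreType configuration:@"Permanent"
                                    URL:permanentURL options:nil error:&error];
        // ...temporary entities into another, whose file can simply be deleted
        // (after removing the store from the coordinator) when you are done.
        [psc addPersistentStoreWithType:NSSQLiteStoreType configuration:@"Scratch"
                                    URL:scratchURL options:nil error:&error];
        return psc;
    }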
Core Data gives you more options than might be apparent at first glance.