Question
I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store.
Due to the typical size of the data set, the processing time is not insignificant, and warrants feedback for the user.
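(Not from the original post: as a minimal sketch, deciding whether the store actually needs migrating before showing any feedback UI is usually done by comparing the current model against the store's metadata; storeURL and destinationModel below are placeholder names.)

#import <CoreData/CoreData.h>

// Hypothetical pre-flight check: read the existing store's metadata and see
// whether the current (destination) model can open it without migration.
NSError *error = nil;
NSDictionary *metadata =
    [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSSQLiteStoreType
                                                                URL:storeURL
                                                              error:&error];
BOOL needsMigration = (metadata != nil) &&
    ![destinationModel isConfiguration:nil compatibleWithStoreMetadata:metadata];
// Only when needsMigration is YES is it worth showing migration progress UI at all.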
NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, attempting to use an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same data set.
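To make the manual path concrete, here is a minimal sketch of driving a progress UI from migrationProgress. It assumes manual reference counting (as was current when this was posted); MigrationProgressObserver and the method names are mine, not code from the post.

#import <CoreData/CoreData.h>

// Hypothetical helper that runs a manual migration with an inferred mapping
// model and observes migrationProgress via KVO.
@interface MigrationProgressObserver : NSObject
@end

@implementation MigrationProgressObserver

- (BOOL)migrateStoreAtURL:(NSURL *)sourceURL
                    toURL:(NSURL *)destinationURL
              sourceModel:(NSManagedObjectModel *)sourceModel
         destinationModel:(NSManagedObjectModel *)destinationModel
                    error:(NSError **)error
{
    // Infer the mapping model at runtime, as in the "Manual inferred migration" profile below.
    NSMappingModel *mapping =
        [NSMappingModel inferredMappingModelForSourceModel:sourceModel
                                          destinationModel:destinationModel
                                                     error:error];
    if (mapping == nil) return NO;

    NSMigrationManager *manager =
        [[NSMigrationManager alloc] initWithSourceModel:sourceModel
                                       destinationModel:destinationModel];

    // migrationProgress is KVO-compliant, so observing it is what feeds the progress UI.
    [manager addObserver:self
              forKeyPath:@"migrationProgress"
                 options:NSKeyValueObservingOptionNew
                 context:NULL];

    BOOL ok = [manager migrateStoreFromURL:sourceURL
                                      type:NSSQLiteStoreType
                                   options:nil
                          withMappingModel:mapping
                          toDestinationURL:destinationURL
                           destinationType:NSSQLiteStoreType
                        destinationOptions:nil
                                     error:error];

    [manager removeObserver:self forKeyPath:@"migrationProgress"];
    [manager release]; // manual retain/release, matching the era of this question
    return ok;
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqualToString:@"migrationProgress"]) {
        // Runs from 0.0 to 1.0; forward it to the UI on the main thread in a real app.
        NSLog(@"migration progress: %f", [(NSMigrationManager *)object migrationProgress]);
    }
}

@end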
Profile results on an original iPhone (2G), Cache size: 1.785 MB on disk.
Automatic inferred lightweight migration
PROFILE: CacheManager -migrateStore
PROFILE: 0.6130 (+0.6130) models loaded
PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:
Manual inferred migration
PROFILE: CacheManager -migrateStore
PROFILE: 0.6660 (+0.6660) models loaded
PROFILE: 1.1471 (+0.4811) inferred mapping model generated
PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
PROFILE: 22.6952 (+21.1894) manual migration completed
PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:
So, with an inferred model, the manual migration takes over 5 times longer than automatic!
UPDATE: Model loading
Core Data documentation for NSPersistentStoreCoordinator "Migration Options" says:
NSInferMappingModelAutomaticallyOption
... the coordinator will attempt to infer a mapping model if none can be found.
And that is why the Xcode-built, compiled, and bundled mapping model has to be removed (or just taken out of the target) to allow an inferred, lightweight migration to happen.
It's a big inconsistency, and the lightweight migration that NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: performs provides absolutely no indication of progress while it is processing.
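For reference, a minimal sketch of that automatic lightweight call; coordinator and storeURL are assumed to exist already and are not names from the original post.

// Both options enabled: the coordinator migrates automatically and, because no
// mapping model is bundled, infers one. There is no progress callback of any kind.
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
    [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
    nil];

NSError *error = nil;
NSPersistentStore *store =
    [coordinator addPersistentStoreWithType:NSSQLiteStoreType
                              configuration:nil
                                        URL:storeURL
                                    options:options
                                      error:&error];
// The call simply blocks for the full migration time measured above, with no
// migrationProgress available to observe.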
Can anybody provide a supported way to get the migrationProgress
values during automatic migration, OR a way to configure an inferred mapping model to be as fast during manual processing as automatic?
UPDATE: Bug Report
Spoke to the engineers at WWDC and they've asked for a bug report requesting the migrationProgress
for the automatic lightweight migration processing.
I'll update again if the API is updated to add progress reporting.
Answer 1:
Currently Core Data uses a private class, NSSQLiteInPlaceMigrationManager, to perform lightweight migration. This is a subclass of NSMigrationManager, but it handles everything itself in migrateStoreFromURL:type:options:withMappingModel:toDestinationURL:destinationType:destinationOptions:error:. From the looks of it, this class is actually performing alterations directly on the SQLite store instead of pulling everything into memory as required by manual migration.
This explains why you're seeing the lightweight migration complete much faster.
Unfortunately, even if you use this knowledge of the private APIs that are being used behind the scenes, it doesn't gain you much for getting a progress indication. The migrationProgress value is never changed for NSSQLiteInPlaceMigrationManager; it's always zero. The value of currentEntityMapping also seems to remain nil.
Until Apple provides an API, it seems we're out of luck. Do you have a radar number so I can open a duplicate?
Answer 2:
What happens when you define the mapping model yourself instead of using an inferred model? Sounds like the creation of the inferred model is causing a performance hit that defining the mapping model directly and including it in your project would solve.
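As an illustration of that suggestion (a sketch only; the variable names are assumptions, not code from the answer), an explicit mapping model is looked up from the bundle and inference is used only as a fallback:

// Look up an Xcode-built .xcmappingmodel in the main bundle; fall back to
// runtime inference only if no bundled model matches the two model versions.
NSError *error = nil;
NSMappingModel *mapping =
    [NSMappingModel mappingModelFromBundles:[NSArray arrayWithObject:[NSBundle mainBundle]]
                             forSourceModel:sourceModel
                           destinationModel:destinationModel];
if (mapping == nil) {
    mapping = [NSMappingModel inferredMappingModelForSourceModel:sourceModel
                                                destinationModel:destinationModel
                                                           error:&error];
}
// Either way, the mapping is then handed to NSMigrationManager's
// migrateStoreFromURL:... call, as in the manual path sketched above.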
update
I already tried that strategy, and using a mapping model generated in Xcode results in approximately the same processing time as the inferred-at-runtime model. The only real difference is that loading the model from the bundle is slightly quicker than inferring it at runtime. Furthermore, once a mapping model is bundled in the app, the automatic migration ceases to be lightweight; I assume it is using the bundled model. Removing the mapping model from the target brings the processing time back to ~4 seconds for automatic lightweight migration.
That is certainly counter-intuitive. Is your project simple enough to post as an example of this inefficiency or do you perhaps have a test project that isolates this issue? In either situation it would be very helpful to take a look at it so that we can A) hopefully solve the mystery; or B) file it as a rather large bug with Apple as the reverse should certainly be the case.
How large is the data set you are working with?
Source: https://stackoverflow.com/questions/2535373/core-data-inferred-migration-automatic-lightweight-vs-manual