I have set up a system using the model-first approach, as it made more logical sense to me. Currently, whenever I make changes to the model, this is what I do:
- Use the Generate Database from Model feature of Entity Framework. I create a dummy database and apply the generated scripts, which first drop all my data and tables and then recreate the database from the latest SQL file produced by Entity Framework.
- Use Visual Studio's schema compare feature to generate migration scripts for my local database and for the one in production.
- Go through the scripts manually and verify them. Once that is done, I run the migration scripts on the production instances.
Question: The main problem is that this is really tedious, and since I do it from my local system, connecting to my production databases is very slow and Visual Studio sometimes crashes. Is there a cleaner, more automated approach, so that my laptop is not responsible for the database migrations on the production instances?
You can try the Database Migration Power Pack - it allows creating change scripts instead of full database scripts, but behind the scenes it follows the same procedure you do by hand. The problem is that this tool does not work with EF5.
Unfortunately, EF migrations currently don't support models created through EDMX; at the moment, migrations support only the code-first approach.
In a Schema First design I use ApexSQL Diff (quite likely very similar to Red Gate's product, perhaps a bit cheaper) - a good third-party tool is much easier to use than a VS Database Project, and its output is easy to apply with a script-application tool like RoundHousE.
A Model First approach can follow the same Schema First workflow using a Model-Schema-Diff-Schema-Model cycle; the guidelines/notes below make for a streamlined process. The schema-diff approach does not need to be tedious, slow, or excessively manual.
- The current version of the database schema is obtained by applying a sequence of database patches (DDL/DML change scripts).
- A tool (we use RoundHousE) automatically applies the scripts as needed. It records which scripts have already been applied, so running the same set of scripts again is a no-op (idempotent) - a minimal sketch of this idea follows the list.
- Diffs are done against a local database, which can be built up from all the previous change scripts in an automated fashion. This latest local build is always the diff target for the latest model changes.
- The remote/live database is never used as a diff target. The same scripts are applied later to the test (and then live) databases; since everything is done the same way, the process is repeatable on all databases.
The only "issue" is that an update that is not well thought out may lead to data that is invalid under new restrictions/constraints. Of course, this was easy to identify, fix, and re-diff before pushing to the live database.
Once a diff is committed to source control it must be applied on the branch. To "undo" a previously commit change-script requires creating a new diff applying an inverse action. There is no implicit down-version.
We have a [Hg] model branch that affectively acts as a schema lock that that must be unified against; this could be viewed as a weak point, but it has worked well with small-team development.
- A tool like Huagati DBML/EDMX is used to synchronize the schema back to the model, which is really useful when developing. This little gem really pays for itself and is part of the cycle. With it in place it's also easy to "update to a model" or make schema changes in SSMS (or whatever) and then bring them back over.
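To make the "record what has been applied" idea concrete, here is a minimal C# sketch of a script runner in the spirit of RoundHousE. It is not RoundHousE's actual implementation - the ScriptsRun table, folder layout, and command-line arguments are invented for illustration, and a real tool also handles GO batch separators, transactions, environments, and run-always scripts.

```csharp
// Minimal sketch of a RoundHousE-style runner: apply each .sql file once,
// record what has run so re-running the same scripts is a no-op.
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

class ScriptRunner
{
    static void Main(string[] args)
    {
        string connectionString = args[0]; // e.g. "Server=.;Database=MyApp;Integrated Security=True"
        string scriptFolder     = args[1]; // e.g. ".\db\up" containing 0001_xxx.sql, 0002_yyy.sql, ...

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Tracking table: this is what makes repeated runs idempotent.
            Exec(conn, @"IF OBJECT_ID('dbo.ScriptsRun') IS NULL
                         CREATE TABLE dbo.ScriptsRun(
                             ScriptName NVARCHAR(260) NOT NULL PRIMARY KEY,
                             AppliedOn  DATETIME2    NOT NULL DEFAULT SYSUTCDATETIME())");

            // Apply scripts in name order; skip anything already recorded.
            foreach (string file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f))
            {
                string name = Path.GetFileName(file);
                if (AlreadyApplied(conn, name))
                    continue;

                Exec(conn, File.ReadAllText(file)); // run the change script itself

                using (var cmd = new SqlCommand(
                    "INSERT INTO dbo.ScriptsRun(ScriptName) VALUES (@name)", conn))
                {
                    cmd.Parameters.AddWithValue("@name", name);
                    cmd.ExecuteNonQuery();
                }
                Console.WriteLine("Applied " + name);
            }
        }
    }

    static bool AlreadyApplied(SqlConnection conn, string name)
    {
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.ScriptsRun WHERE ScriptName = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            return (int)cmd.ExecuteScalar() > 0;
        }
    }

    static void Exec(SqlConnection conn, string sql)
    {
        using (var cmd = new SqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }
}
```

Run from a build or deployment server against each environment in turn, the same ordered scripts produce the same schema everywhere, and nothing depends on a developer laptop.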
The Code First migrations are "OK" (and definitely better than naught!), but I'm only using them because Azure SQL (aka SQL Database) is not supported by the advanced diff tooling, as it does not expose various sys.* information. (The diffs can be done locally as per normal, but ApexSQL Diff generates DDL/DML that is not always friendly with Azure SQL - plus, it's a chance for me to learn a slightly different approach :-)
Some advantages of Code First migrations via the Power Pack: update tasks can be performed in C# instead of being limited to DDL/DML (which can be convenient); automatic downgrades (although I question their use); no need to purchase a 3rd-party tool (which can be expensive); easier integration/deployment to Azure SQL; less tied to a specific database vendor (in theory); etc.
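For reference, a hand-written EF 5 Code First migration looks roughly like the sketch below. The entity and column names are invented, but it illustrates two of the points above: arbitrary update steps (schema changes plus SQL built in C#) inside Up(), and the Down() method that enables automatic downgrades (e.g. via Update-Database -TargetMigration).

```csharp
using System.Data.Entity.Migrations;

// Hypothetical migration: adds an Email column and backfills it.
public partial class AddCustomerEmail : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Customers", "Email", c => c.String(maxLength: 256));

        // Update tasks can be expressed right inside the migration,
        // rather than only in a generated diff script.
        Sql("UPDATE dbo.Customers SET Email = UserName + '@example.com' WHERE Email IS NULL");
    }

    public override void Down()
    {
        // The inverse of Up(); this is what makes automatic downgrades possible.
        DropColumn("dbo.Customers", "Email");
    }
}
```

Pending migrations can then be applied with Update-Database from the Package Manager Console, or with the migrate.exe command-line tool that ships in the EntityFramework NuGet package's tools folder - handy for running deployments from a build server rather than a developer laptop.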
While Code First migrations (and automation thereof) are a good step forward compared to the absolutely horrid drop-and-recreate approach, I much prefer dedicated SQL tooling when developing.
Source: https://stackoverflow.com/questions/11920491/migration-using-model-first-approach-in-entity-framework