Question:
Is it okay to run Hibernate applications configured with hbm2ddl.auto=update
to update the database schema in a production environment?
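(For context, a minimal sketch of where this setting lives; the connection URL is a placeholder, and the valid values of hibernate.hbm2ddl.auto include validate, update, create, and create-drop:)

    import org.hibernate.cfg.Configuration;

    // Sketch: the hbm2ddl.auto switch as it is typically wired up in code.
    public class HibernateSetup {
        public static void main(String[] args) {
            Configuration cfg = new Configuration()
                .setProperty("hibernate.connection.url", "jdbc:mysql://localhost/mydb") // placeholder
                .setProperty("hibernate.hbm2ddl.auto", "update"); // the option in question
        }
    }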
Answer 1:
No, it's unsafe.
Despite the best efforts of the Hibernate team, you simply cannot rely on automatic updates in production. Write your own patches, review them with a DBA, test them, then apply them manually.
Theoretically, if an hbm2ddl update worked in development, it should work in production too. But in reality, that's not always the case.
Even if it works OK, the result may be sub-optimal. DBAs are paid that much for a reason.
Answer 2:
We do it in production, albeit with an application that's not mission-critical and with no highly paid DBAs on staff. It's just one less manual process that's subject to human error - the application can detect the difference and do the right thing, plus you've presumably tested it in various development and test environments.
One caveat - in a clustered environment you may want to avoid it, because multiple application instances can come up at the same time and try to modify the schema, which could be bad. Alternatively, put in some mechanism where only one instance is allowed to update the schema.
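One hedged sketch of such a mechanism, assuming a PostgreSQL database (pg_try_advisory_lock is Postgres-specific, and the lock key 427001 is an arbitrary constant chosen here; other databases would need something like a dedicated lock table):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch: let only one cluster node run the schema update by taking a
    // PostgreSQL session-level advisory lock before enabling hbm2ddl update.
    public class SchemaUpdateGate {
        public static boolean tryAcquireSchemaLock(Connection conn) throws Exception {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT pg_try_advisory_lock(427001)")) {
                rs.next();
                return rs.getBoolean(1); // true => this node may run the schema update
            }
        }
    }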
Answer 3:
Hibernate's creators discourage doing this in a production environment in their book "Java Persistence with Hibernate":
WARNING: We've seen Hibernate users trying to use SchemaUpdate to update the schema of a production database automatically. This can quickly end in disaster and won't be allowed by your DBA.
Answer 4:
Check out LiquiBase XML for keeping a changelog of updates. I had never used it until this year, but I found that it's very easy to learn and makes DB revision control/migration/change management very foolproof. I work on a Groovy/Grails project, and Grails uses Hibernate underneath for all its ORM (called "GORM"). We use Liquibase to manage all SQL schema changes, which we do fairly often as our app evolves with new features.
Basically, you keep an XML file of changesets that you continue to add to as your application evolves. This file is kept in git (or whatever you are using) with the rest of your project. When your app is deployed, Liquibase checks its changelog table in the DB you are connecting to, so it knows what has already been applied, and then it intelligently applies whatever changesets from the file have not been applied yet. It works absolutely great in practice, and if you use it for all your schema changes, then you can be 100% confident that code you check out and deploy will always be able to connect to a fully compatible database schema.
The awesome thing is that I can take a totally blank-slate MySQL database on my laptop, fire up the app, and right away the schema is set up for me. It also makes it easy to test schema changes by applying them to a local-dev or staging DB first.
The easiest way to get started would probably be to take your existing DB and then use Liquibase to generate an initial baseline.xml file. Then in the future you can just append to it and let Liquibase take over managing schema changes.
http://www.liquibase.org/
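For illustration, this is roughly how a deployment can trigger that check-and-apply step through Liquibase's Java API (a sketch only; the changelog path db/changelog.xml and the connection details are placeholders, and API details vary between Liquibase versions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import liquibase.Liquibase;
    import liquibase.database.Database;
    import liquibase.database.DatabaseFactory;
    import liquibase.database.jvm.JdbcConnection;
    import liquibase.resource.ClassLoaderResourceAccessor;

    // Sketch: on startup, apply any changesets from the XML changelog that are
    // not yet recorded in Liquibase's DATABASECHANGELOG table.
    public class MigrateOnDeploy {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/mydb", "user", "pass")) {
                Database db = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(conn));
                Liquibase liquibase = new Liquibase(
                    "db/changelog.xml", new ClassLoaderResourceAccessor(), db);
                liquibase.update(""); // empty contexts = apply all pending changesets
            }
        }
    }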
Answer 5:
I would vote no. Hibernate doesn't seem to understand when datatypes for columns have changed. Examples (using MySQL):
- A String field mapped with @Column(length=50) produces varchar(50); changing it to @Column(length=100) leaves the column as varchar(50), it is not altered to varchar(100).
- Changing the @Temporal type (TemporalType.TIMESTAMP, TIME, or DATE) will not update the existing DB column.
There are probably other examples as well, such as pushing the length of a String column up over 255 and expecting it to convert to text, mediumtext, and so on.
Granted, I don't think there is really a way to "convert datatypes" without creating a new column, copying the data, and blowing away the old column. But the minute your database has columns which don't reflect the current Hibernate mapping, you are living very dangerously...
Flyway is a good option to deal with this problem:
http://flywaydb.org
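As a hedged sketch of how that looks with Flyway's Java API (the JDBC details are placeholders, and V2__widen_name_column.sql is a hypothetical migration file), the varchar(50) to varchar(100) change above becomes an explicit, versioned migration instead of a mapping edit:

    import org.flywaydb.core.Flyway;

    // Sketch: run versioned SQL migrations on startup. A hypothetical
    // V2__widen_name_column.sql on the classpath would contain, e.g.:
    //   ALTER TABLE customer MODIFY name VARCHAR(100);  -- MySQL syntax
    // so the column widening is applied explicitly and recorded.
    public class RunMigrations {
        public static void main(String[] args) {
            Flyway flyway = Flyway.configure()
                .dataSource("jdbc:mysql://localhost/mydb", "user", "pass") // placeholders
                .load();
            flyway.migrate(); // applies pending migrations, tracked in flyway_schema_history
        }
    }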
Answer 6:
The Hibernate team has to put the disclaimer about not using auto-updates in production out there to cover themselves when people who don't know what they are doing use it in situations where it should not be used.
Granted the situations where it should not be used greatly outnumber the ones where it's OK.
I have used it for years on lots of different projects and have never had a single issue. That's not a lame answer, and it's not cowboy coding. It's a historical fact.
A person who says "never do it in production" is thinking of a specific set of production deployments, namely the ones he is familiar with (his company, his industry, etc).
The universe of "production deployments" is vast and varied.
An experienced Hibernate developer knows exactly what DDL will result from a given mapping configuration. As long as you test and validate that what you expect ends up in the DDL (in dev, QA, staging, etc.), you are fine.
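One way to do that validation, sketched here with the standard JPA 2.1 schema-generation properties (the persistence-unit name "app" and the output file are placeholders), is to have Hibernate write the DDL it would run into a script you can review before release:

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.Persistence;

    // Sketch: dump the DDL Hibernate would generate into create.sql for review,
    // instead of letting it run against a live database.
    public class DumpDdl {
        public static void main(String[] args) {
            Map<String, Object> props = new HashMap<>();
            props.put("javax.persistence.schema-generation.scripts.action", "create");
            props.put("javax.persistence.schema-generation.scripts.create-target", "create.sql");
            Persistence.generateSchema("app", props); // "app" = your persistence unit
        }
    }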
When you are adding lots of features, auto schema updates can be a real time saver.
The list of things auto-updates won't handle is endless; some examples are data migration, adding non-nullable columns, column name changes, and so on.
Also you need to take care in clustered environments.
But then again, if you knew all this stuff, you wouldn't be asking this question. Hmm... OK, if you are asking this question, you should wait until you have lots of experience with Hibernate and auto schema updates before you think about using it in prod.
Answer 7:
As I explained in this article, it's not a good idea to use hbm2ddl.auto in production.
The only way to manage the database schema is to use incremental migration scripts because:
- the scripts will reside in VCS alongside your code base, so when you check out a branch you can recreate the whole schema from scratch
- the incremental scripts can be tested on a QA server before being applied in production
- there is no need for manual intervention, since the scripts can be run by Flyway, which reduces the possibility of human error associated with running scripts manually
Even the Hibernate User Guide advises you to avoid using the hbm2ddl tool in production environments.
Answer 8:
I wouldn't risk it because you might end up losing data that should have been preserved. hbm2ddl.auto=update is purely an easy way to keep your dev database up to date.
Answer 9:
We have been doing it in a project that has run in production for months now, and we have never had a problem so far. Keep in mind the two ingredients needed for this recipe:
- Design your object model with a backwards-compatibility approach, that is, deprecate objects and attributes rather than removing or altering them. This means that if you need to change the name of an object or attribute, leave the old one as is, add the new one, and write some kind of migration script (see the sketch after this list). If you need to change an association between objects and you are already in production, your design was wrong in the first place, so try to think of a new way of expressing the new relationship without affecting old data.
- Always back up the database prior to deployment.
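A minimal sketch of ingredient 1, with hypothetical entity and column names: keep the old attribute mapped (and deprecated) while the new one is introduced, so that auto-update only ever has to add a column:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    // Sketch: renaming "fullName" to "displayName" without dropping anything.
    // Auto-update will only ADD the new column; the old one stays around until
    // a separate migration script copies the data and, much later, drops it.
    @Entity
    public class Customer {
        @Id
        private Long id;

        /** @deprecated superseded by {@link #displayName}; kept for old data. */
        @Deprecated
        @Column(name = "full_name")
        private String fullName;

        @Column(name = "display_name")
        private String displayName;
    }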
My sense is - after reading this post - that 90% of the people taking part in this discussion are horrified at the mere thought of using automation like this in a production environment. Some throw the ball to the DBA. Take a moment, though, to consider that not all production environments provide a DBA, and not many dev teams can afford one (at least for medium-size projects). So, if we're talking about teams where everyone has to do everything, the ball is on them.
In this case, why not try to have the best of both worlds? Tools like this are here to lend a helping hand, which - with careful design and planning - can help in many situations. And believe me, administrators may initially be hard to convince, but if they know that the ball is not in their hands, they will love it.
Personally, I'd never go back to writing scripts by hand for extending any type of schema, but that's just my opinion. And after starting to adopt schema-less NoSQL databases recently, I can see that sooner rather than later all these schema-based operations will belong to the past, so you'd better start changing your perspective and looking ahead.
Answer 10:
In my case (Hibernate 3.5.2, PostgreSQL, Ubuntu), setting
hibernate.hbm2ddl.auto=update
only created new tables and added new columns to already existing tables. It neither dropped tables, nor dropped columns, nor altered columns. It can be called a safe option, but something like
hibernate.hbm2ddl.auto=create_tables add_columns
would be clearer.
Answer 11:
It's not safe, not recommended, but it's possible.
I have experience with an application that used the auto-update option in production.
Well, the main problems and risks found in this solution are:
- Deploying against the wrong database. If you make the mistake of running the application server with an old version of the application (EAR/WAR/etc.) against the wrong database, you will end up with a lot of new columns, tables, foreign keys, and errors. The same problem can occur with a simple mistake in the datasource file (copying/pasting a file and forgetting to change the database). In short, the situation can be a disaster for your database.
- The application server takes too long to start. This happens because Hibernate inspects every existing table/column/etc. each time you start the application; it needs to know what (table, column, etc.) has to be created. This problem only gets worse as the number of database tables grows.
- Database migration tools become almost impossible to use. To create database DDL or DML scripts to run with a new version, you need to anticipate what the auto-update will create after you start the application server. For example, if you need to fill a new column with some data, you have to start the application server, wait for Hibernate to create the new column, and only then run the SQL script. As you can see, database migration tools (like Flyway, Liquibase, etc.) are almost impossible to use with auto-update enabled.
- Database changes are not centralized. Since Hibernate can create tables and everything else, it's hard to track the changes made to the database in each version of the application, because most of them are made automatically.
- It encourages garbage in the database. Because of the "easy" use of auto-update, there is a chance your team will neglect to drop old columns and old tables, since Hibernate's auto-update can't do that.
- Imminent disaster. There is an ever-present risk of some disaster occurring in production (as some people mentioned in other answers). Even with an application that has been running and updated for years, I don't think it's a safe choice. I never felt safe with this option in use.
So, I do not recommend using auto-update in production.
If you really want to use auto-update in production, I recommend:
- Separate networks. Your test environment must not be able to reach the staging (homologation) environment. This helps prevent a deployment that was meant for the test environment from changing the staging database.
- Manage script order. You need to organize the scripts that run before your deployment (table structure changes, dropping tables/columns) and the scripts that run after it (filling in data for the new columns/tables).
And, unlike some other posts, I don't think enabling auto-update has anything to do with "very well paid" DBAs. DBAs have more important things to do than write SQL statements to create/change/delete tables and columns. These simple everyday tasks can be done and automated by developers and simply passed to the DBA team for review; you need neither Hibernate nor "very well paid" DBAs to write them.
Answer 12:
No, don't ever do it. Hibernate does not handle data migration. Yes, it will make your schema look correct, but it does not ensure that valuable production data is not lost in the process.
Answer 13:
Typically, enterprise applications in large organizations run with reduced privileges. The database username may not have the DDL privilege for adding columns, which hbm2ddl.auto=update requires.
Answer 14:
I agree with Vladimir. The administrators in my company would definitely not appreciate it if I even suggested such a course.
Further, creating an SQL script instead of blindly trusting Hibernate gives you the opportunity to remove fields which are no longer in use. Hibernate does not do that.
And I find that comparing the production schema with the new schema gives you even better insight into what you changed in the data model. You know, of course, because you made the changes, but now you see them all in one go. Even the ones that make you go "What the heck?!".
There are tools which can make a schema delta for you, so it isn't even hard work. And then you know exactly what's going to happen.
Answer 15:
An application's schema may evolve over time; if you have several installations, which may be at different versions, you should have some way to ensure that your application, or some kind of tool or script, is capable of migrating schema and data stepwise from one version to any following one.
Having all your persistence in Hibernate mappings (or annotations) is a very good way of keeping schema evolution under control.
You should consider that schema evolution has several aspects:
1. evolving the database schema by adding more columns and tables
2. dropping old columns, tables, and relations
3. filling new columns with defaults
Hibernate's tools are particularly important when (as in my experience) you have different versions of the same application running on many different kinds of databases.
Point 3 is very sensitive if you are using Hibernate: if you introduce a new boolean-valued or numeric property and Hibernate finds a null value in the corresponding column, it will raise an exception.
So what I would do is: do indeed use Hibernate's schema-update capability, but add alongside it some data- and schema-maintenance callback, e.g. for filling defaults, dropping no-longer-used columns, and similar. That way you get the advantages (database-independent schema updates and no duplicated coding of the changes, once in the persistence layer and once in scripts) while still covering all aspects of the operation.
So, for example, if a version update consists simply of adding a varchar-valued property (hence column) that may default to null, auto-update alone will do. Where more complexity is involved, more work will be necessary.
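A hedged sketch of such a maintenance callback (the table and column names are made up): after Hibernate's schema update has added the column, backfill the nulls before the application starts serving requests:

    import java.sql.Connection;
    import java.sql.Statement;

    // Sketch: run right after Hibernate's schema update, before the app goes
    // live. Backfills a default into a newly added boolean column so Hibernate
    // never reads a null into a primitive-typed property.
    public class FillDefaults {
        public static void fillDefaults(Connection conn) throws Exception {
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("UPDATE customer SET active = FALSE WHERE active IS NULL");
            }
        }
    }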
This assumes that the application, when updated, is capable of updating its own schema (it can be done), which also means it must have the user rights to do so on the schema. If the customer's policy prevents this (a likely Lizard Brain case), you will have to provide database-specific scripts.
Source: https://stackoverflow.com/questions/221379/hibernate-hbm2ddl-auto-update-in-production