You could try to make the migration jobs and the app independent of each other by doing the following:
- Have the migration job return successfully even when the migration itself failed, and keep a machine-consumable record somewhere of what the migration's outcome was. This could be done either explicitly (say, by writing the latest schema version into a database table) or implicitly (say, by assuming that a specific field can only exist after a successful migration). The job would return an error code only if it failed for technical reasons (such as the target database being unavailable). This way, you can run the migrations via Kubernetes Jobs and rely on their ability to eventually run to completion.
- Build the new app version such that it can work with the database in both the pre- and post-migration phases. What this means depends on your business requirements: the app could stay idle until the migration has completed successfully, or it could return different results to its clients depending on the current phase. The key point is that the app reads the migration outcome that the migration job recorded and acts accordingly instead of terminating with an error.
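To make the two points above concrete, here is a minimal sketch of the explicit variant, using SQLite for brevity (the table name `schema_info`, the target version, and the example `orders` migration are all hypothetical; in practice you would use your real database and migration tooling):

```python
import sqlite3

SCHEMA_VERSION_TARGET = 2  # version the new app expects (hypothetical)

def run_migration(conn):
    """Migration job: apply the migration and record the outcome.
    Returns normally even if the migration itself fails; a non-zero
    exit would be reserved for technical failures (e.g. DB unreachable)."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_info (version INTEGER)")
    try:
        # The actual migration step (hypothetical example).
        conn.execute("ALTER TABLE orders ADD COLUMN status TEXT")
        conn.execute("DELETE FROM schema_info")
        conn.execute("INSERT INTO schema_info (version) VALUES (?)",
                     (SCHEMA_VERSION_TARGET,))
        conn.commit()
    except sqlite3.OperationalError:
        conn.rollback()  # migration failed; recorded version stays as-is

def current_version(conn):
    """Read the machine-consumable migration outcome."""
    try:
        row = conn.execute("SELECT version FROM schema_info").fetchone()
    except sqlite3.OperationalError:
        return 1  # schema_info missing: assume pre-migration baseline
    return row[0] if row else 1

def handle_request(conn):
    """App: branch on the recorded outcome instead of crashing."""
    if current_version(conn) >= SCHEMA_VERSION_TARGET:
        return "post-migration behaviour"
    return "pre-migration behaviour"
```

The app never assumes the migration has run; it simply inspects `schema_info` on each request (or at startup) and serves whichever behaviour matches the recorded version.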
By combining these two design approaches, you should be able to develop and deploy the migration jobs and the app independently of each other without introducing any temporal coupling.
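On the Job side, a minimal manifest sketch could look like the following (the Job name and image are hypothetical); `restartPolicy: OnFailure` and `backoffLimit` are what give you the retry-until-completion behaviour described above:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration            # hypothetical name
spec:
  backoffLimit: 6               # retries for technical failures only
  template:
    spec:
      restartPolicy: OnFailure  # re-run the pod until it exits successfully
      containers:
      - name: migrate
        image: registry.example.com/db-migrate:latest  # hypothetical image
```

Because the migration container exits successfully even when the migration logically fails (recording the outcome instead), Kubernetes only retries the genuinely technical failures.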
Whether this idea is actually reasonable to implement depends on the specifics of your case, such as the complexity of your database migrations. The alternative, as you mentioned, is to simply deploy unmanaged pods into the cluster that do the migration. That requires a bit more wiring, since you will need to check the result regularly yourself and distinguish between successful and failed outcomes.