I have a database with a large number of fields that are currently NTEXT.
Having upgraded to SQL 2005, we have run some performance tests on converting these to NVARCHAR(MAX).
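For context, the conversion being tested is presumably along these lines (table and column names are made up for illustration):

    -- Change the column type in place.
    ALTER TABLE dbo.Articles ALTER COLUMN Body NVARCHAR(MAX) NULL;

    -- The ALTER alone leaves existing values stored in the old LOB format;
    -- rewriting each value converts it. This is the expensive UPDATE that
    -- the batching suggestions below aim to make manageable.
    UPDATE dbo.Articles SET Body = Body;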
Running a database test on a low-performance virtual machine is not really indicative of production performance; the heavy I/O involved will require a fast disk array, which the virtualisation will throttle.
If you can get scheduled downtime:
Take a full backup and change the recovery model to simple.
Drop the indexes (and disable any triggers) on the table being converted.
Run the conversion update in a loop with a delay, multiple times as required.
Once complete, do another backup, then change the recovery model back to what it was originally and add the old indexes back.
Remember that every index or trigger on that table causes extra disk I/O, and that the simple recovery model minimises log file I/O.
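A minimal sketch of that downtime sequence, assuming a database called MyDb, a table dbo.Articles, and a single index IX_Articles_Title (all names illustrative; backup statements omitted for brevity):

    -- Before starting: take a full backup.
    ALTER DATABASE MyDb SET RECOVERY SIMPLE;
    DROP INDEX IX_Articles_Title ON dbo.Articles;

    -- ... run the batched conversion update here, in a loop with a delay ...

    -- When finished: take another backup, then restore the original settings.
    CREATE INDEX IX_Articles_Title ON dbo.Articles (Title);
    ALTER DATABASE MyDb SET RECOVERY FULL;  -- or whatever it was originally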
You might also consider testing to see if an SSIS package might do this more efficiently.
Whatever you do, make it an automated process that can be scheduled and run during off hours. The fewer users you have trying to access the data, the faster everything will go. If at all possible, pick out the three or four most critical columns to change, take the database down for maintenance (during a normal off time), and do them in single-user mode. Once the most critical ones are done, the others can be scheduled one or two a night.
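If you do take the database down for those critical columns, single-user mode can be toggled like this (MyDb is a placeholder name):

    -- Disconnect everyone else and roll back their open transactions.
    ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- ... convert the most critical columns here ...

    -- Reopen the database for normal use.
    ALTER DATABASE MyDb SET MULTI_USER;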
How about running the update in batches, updating 1,000 rows at a time?
You would use a WHILE loop that increments a counter corresponding to the IDs of the rows to be updated in each iteration of the update query. This may not reduce the total time it takes to update all 7 million records, but it should make it much less likely that users will experience an error due to record locking.
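A rough sketch of that loop, assuming a table dbo.Articles with an integer primary key ID and a column Body that has already been altered to nvarchar(max) (names made up):

    DECLARE @BatchSize INT, @MaxId INT, @CurrentId INT;
    SET @BatchSize = 1000;
    SET @CurrentId = 0;
    SELECT @MaxId = MAX(ID) FROM dbo.Articles;

    WHILE @CurrentId <= @MaxId
    BEGIN
        -- Rewrite one ID range per iteration so locks are held only briefly.
        UPDATE dbo.Articles
        SET Body = Body
        WHERE ID > @CurrentId AND ID <= @CurrentId + @BatchSize;

        SET @CurrentId = @CurrentId + @BatchSize;
    END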
If you can't get scheduled downtime....
Create two new columns: the replacement nvarchar(max) column and a processedflag INT DEFAULT 0.
Create a nonclustered index on processedflag.
You have UPDATE TOP available to you (you want to UPDATE TOP ordered by the primary key).
Simply set processedflag to 1 during the update so that the next update will only touch rows where processedflag is still 0.
You can check @@ROWCOUNT after the update to see whether you can exit the loop.
I suggest a WAITFOR DELAY of a few seconds after each update query, to give other queries a chance to acquire locks on the table and to avoid overloading disk usage.
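Putting those steps together, a rough sketch (table and column names are made up; BodyNew is the new nvarchar(max) column and Body the existing NTEXT one):

    -- One-off setup: the two new columns and the index on the flag.
    -- NOT NULL so existing rows start at 0 rather than NULL.
    ALTER TABLE dbo.Articles ADD
        BodyNew NVARCHAR(MAX) NULL,
        processedflag INT NOT NULL DEFAULT 0;
    CREATE NONCLUSTERED INDEX IX_Articles_processedflag
        ON dbo.Articles (processedflag);

    -- Convert in small batches until no unprocessed rows remain.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (1000) dbo.Articles
        SET BodyNew = CAST(Body AS NVARCHAR(MAX)),
            processedflag = 1
        WHERE processedflag = 0;

        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to do

        WAITFOR DELAY '00:00:05';  -- let other queries in between batches
    END

Once every row has been processed, you would presumably drop the old NTEXT column and rename BodyNew into its place (for example with sp_rename).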