“Primary Filegroup is Full” in SQL Server 2008 Standard for no apparent reason

后悔当初 2020-12-08 04:18

Our database is currently at 64 GB and one of our apps started to fail with the following error:

System.Data.SqlClient.SqlException: Coul…

10 Answers
  • 2020-12-08 05:05

    I found that this happens because of the issue described in http://support.microsoft.com/kb/913399:

    SQL Server only releases all the pages that a heap table uses when the following conditions are true:

    • A deletion on this table occurs.
    • A table-level lock is being held.

    Note: A heap table is any table that is not associated with a clustered index.

    If pages are not deallocated, other objects in the database cannot reuse the pages.

    However, when you enable a row versioning-based isolation level in a SQL Server 2005 database, pages cannot be released even if a table-level lock is being held.
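
    (Not from the KB article, just my own sketch: if you want to see which tables in your database are heaps, and therefore affected, a query against the catalog views along these lines should list them; index_id = 0 in sys.indexes marks a heap.)

    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.indexes AS i ON i.object_id = t.object_id
    WHERE i.index_id = 0;  -- heaps only: no clustered index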

    Microsoft's solution: http://support.microsoft.com/kb/913399

    To work around this problem, use one of the following methods:

    • Include a TABLOCK hint in the DELETE statement if a row versioning-based isolation level is not enabled. For example, use a statement that is similar to the following:

      DELETE FROM TableName WITH (TABLOCK)

      Note: TableName represents the name of the table.

    • Use the TRUNCATE TABLE statement if you want to delete all the records in the table. For example, use a statement that is similar to the following:

      TRUNCATE TABLE TableName

    • Create a clustered index on a column of the table. For more information about how to create a clustered index on a table, see the "Creating a Clustered Index" topic in SQL Server Books Online.
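
    For the clustered-index option, a minimal sketch (TableName and the Id column are only placeholders; pick whatever column makes sense as the clustering key on your table):

    CREATE CLUSTERED INDEX IX_TableName_Id
        ON TableName (Id);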

    You'll notice at the bottom of the linked article that SQL Server 2008 is not listed under "Applies to", but I think it applies to 2008 as well.

  • 2020-12-08 05:10

    Go to the database's Properties, select Files, increase the initial size of the database, and set the primary filegroup's file to autogrow. Then restart SQL Server.

    You will be able to use the database as before.
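
    The same change can be scripted in T-SQL instead of using the Properties dialog. This is only a sketch: MyDatabase and MyDatabase_Data stand in for your database name and the logical name of its primary data file (you can look the logical name up in sys.database_files).

    USE master;

    -- grow the primary data file (the new size must be larger than the current size)
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_Data, SIZE = 80GB);

    -- let it keep growing automatically in 1 GB steps
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 1GB);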

  • 2020-12-08 05:10

    I also ran into the same problem, where the initial database size was set to 4 GB and autogrowth was set to 1 MB. The virtual encrypted TrueCrypt drive that the database was on seemed to have plenty of space.

    I changed a couple of (the above) things:

    • I turned the Windows service for SQL Server Express from automatic to manual, so only the 'regular' SQL Server is running. (Even though I am running SQL Server 2008 R2, which should allow 10 GB.)
    • I changed the autogrowth from 1 MB to 10%
    • I changed the autogrowth increment from 10% to 1000 MB
    • I defragmented the drive
    • I shrank the database (see the T-SQL sketch after this list):
      • manually: DBCC SHRINKDATABASE('...')
      • automatically: right click on the database | "Properties" | "Auto Shrink" | "Truncate log on checkpoint"
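
    For reference, the autogrowth and shrink steps can also be scripted; a sketch, assuming a database called MyDb whose data file has the logical name MyDb_Data:

    -- fixed 1000 MB growth increment instead of 1 MB / 10%
    ALTER DATABASE MyDb
        MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 1000MB);

    -- shrink the database, leaving 10 percent free space in the files
    DBCC SHRINKDATABASE ('MyDb', 10);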

    All to little avail (I could insert some more records, but soon ran into the same problem). The pagefile mentioned by Tobbi made me try a larger virtual drive. (Even though my drive should not contain any such system files, since I run without it being mounted a lot of the time.)

    • I made a new larger virtual drive with TrueCrypt

    When making this, I ran into a TrueCrypt question asking whether I am going to store files larger than 4 GB (as shown in this SuperUser question).

    • I told TrueCrypt I would store files larger than 4 GB

    After these last two steps I was doing fine, and I am assuming the last one did the trick. I think TrueCrypt otherwise chooses a FAT file system (as described here), which limits individual files to 4 GB. (So I probably did not need to enlarge the drive after all, but I did anyway.)

    This is probably a very rare border case, but maybe it is of help to somebody.

  • 2020-12-08 05:10

    Our problem was that the hard drive was down to zero space available.
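
    If you suspect the same thing, a quick way to check from a query window (a sketch; sys.master_files reports file size in 8 KB pages, and xp_fixeddrives lists free MB per drive):

    -- free space in MB on each fixed drive, as seen by SQL Server
    EXEC master..xp_fixeddrives;

    -- size of every database file, converted from 8 KB pages to MB
    SELECT DB_NAME(database_id) AS database_name,
           name AS logical_name,
           physical_name,
           size * 8 / 1024 AS size_mb
    FROM sys.master_files;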
