“Primary Filegroup is Full” in SQL Server 2008 Standard for no apparent reason

后悔当初 2020-12-08 04:18

Our database is currently at 64 GB and one of our apps started to fail with the following error:

System.Data.SqlClient.SqlException: Coul

10 Answers
  • 2020-12-08 04:48

    Anton,

    As a best practice, one shouldn't create user objects in the primary filegroup. When you have the bandwidth, create a new filegroup, move the user objects there, and leave the system objects in primary (see the sketch after the queries below).

    The following queries will help you identify the space used in each file, the top tables with the highest row counts, and whether there are any heaps. It's a good starting point for investigating this issue.

    -- Space used and available in each database file, broken down by filegroup.
    SELECT
        ds.name AS FilegroupName
        , df.name AS 'FileName'
        , physical_name AS 'PhysicalName'
        , size/128 AS 'TotalSizeinMB'
        , size/128.0 - CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int)/128.0 AS 'AvailableSpaceInMB'
        , CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int)/128.0 AS 'ActualSpaceUsedInMB'
        , (CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int)/128.0)/(size/128.0)*100. AS '%SpaceUsed'
    FROM sys.database_files df LEFT OUTER JOIN sys.data_spaces ds
        ON df.data_space_id = ds.data_space_id;
    
    -- Free space on each fixed drive, as reported by the OS.
    EXEC xp_fixeddrives;
    
    -- User tables in the PRIMARY filegroup, largest row counts first.
    SELECT t.name AS TableName,
        i.name AS IndexName,
        p.rows AS Rows
    FROM sys.filegroups fg WITH (NOLOCK) JOIN sys.database_files df WITH (NOLOCK)
        ON fg.data_space_id = df.data_space_id JOIN sys.indexes i WITH (NOLOCK)
        ON df.data_space_id = i.data_space_id JOIN sys.tables t WITH (NOLOCK)
        ON i.object_id = t.object_id JOIN sys.partitions p WITH (NOLOCK)
        ON t.object_id = p.object_id AND i.index_id = p.index_id
    WHERE fg.name = 'PRIMARY' AND t.type = 'U'
    ORDER BY p.rows DESC;
    
    -- Same query restricted to heaps (index_id = 0), i.e. tables without a clustered index.
    SELECT t.name AS TableName,
        i.name AS IndexName,
        p.rows AS Rows
    FROM sys.filegroups fg WITH (NOLOCK) JOIN sys.database_files df WITH (NOLOCK)
        ON fg.data_space_id = df.data_space_id JOIN sys.indexes i WITH (NOLOCK)
        ON df.data_space_id = i.data_space_id JOIN sys.tables t WITH (NOLOCK)
        ON i.object_id = t.object_id JOIN sys.partitions p WITH (NOLOCK)
        ON t.object_id = p.object_id AND i.index_id = p.index_id
    WHERE fg.name = 'PRIMARY' AND t.type = 'U' AND i.index_id = 0
    ORDER BY p.rows DESC;
    
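    If you do decide to move user objects off PRIMARY, the sketch below shows the general shape of it. The database name, file path, filegroup name, table and index names are all placeholders, and the sizes are just examples; adjust everything for your environment.

    -- Add a new filegroup and a data file for it (names and path are placeholders).
    ALTER DATABASE YourDatabase ADD FILEGROUP SECONDARY;
    
    ALTER DATABASE YourDatabase
    ADD FILE
    (
        NAME = YourDatabase_Secondary,
        FILENAME = 'D:\SQLData\YourDatabase_Secondary.ndf',
        SIZE = 1024MB,
        FILEGROWTH = 256MB
    )
    TO FILEGROUP SECONDARY;
    
    -- Rebuild a table's existing clustered index (name is a placeholder) with
    -- DROP_EXISTING; specifying ON SECONDARY moves the table's data onto the
    -- new filegroup.
    CREATE CLUSTERED INDEX CIX_YourBigTable
    ON dbo.YourBigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON SECONDARY;
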
  • 2020-12-08 04:54

    In my experience, this message occurs when the primary file (.mdf) has no space left for the database's metadata. The system tables live in this file and can only store their data there.

    Make some space in the file and the commands will work again.

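    If the .mdf simply has no room left, one way to make space is to grow it manually (assuming the disk itself has free space). This is only a sketch; the database name, logical file name and size below are placeholders, so look up the real logical name in sys.database_files first.

    -- Find the logical name and current size (in MB) of each file.
    SELECT name, size/128 AS SizeMB FROM sys.database_files;
    
    -- Grow the primary data file; name and size are placeholders.
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase, SIZE = 70000MB);
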
  • 2020-12-08 04:56

    Ran into the same problem, and at first defragmenting seemed to work, but only for a short while. It turned out the server the customer was using was running the Express edition, which has a licensing limit of about 10 GB per database.

    So even though the size was set to "unlimited", it wasn't.

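    If you suspect an edition limit, a quick check of what you are actually running (a sketch; EngineEdition 4 means Express, and the per-database cap is 4 GB in 2008 Express and 10 GB from 2008 R2 Express onward):

    SELECT SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('EngineEdition')  AS EngineEdition,  -- 4 = Express
           SERVERPROPERTY('ProductVersion') AS ProductVersion;
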
  • 2020-12-08 04:57

    Please check the database's file growth settings; if growth is restricted, make it unrestricted.

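    A sketch of how to check for a restricted growth cap and lift it; the database and logical file names are placeholders, and max_size is reported in 8 KB pages, with -1 meaning unrestricted:

    -- Spot files with a growth cap.
    SELECT name, max_size, growth, is_percent_growth
    FROM sys.database_files;
    
    -- Remove the cap on a restricted file (names and growth size are placeholders).
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_Data, MAXSIZE = UNLIMITED, FILEGROWTH = 256MB);
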
  • 2020-12-08 04:58

    OK, got it working. It turned out that the NTFS volume where the DB files were located had become heavily fragmented. I stopped SQL Server, defragmented the whole volume, and everything has been fine ever since.

  • 2020-12-08 05:03

    I just ran into the same problem. The reason was that the virtual memory file "pagefile.sys" was located on the same drive as the data files for our databases (the D: drive). It had doubled in size and filled the disk, but Windows wasn't picking it up, i.e. it looked like we had 80 GB free when we actually didn't.

    Restarting SQL Server didn't help; perhaps a defragmentation would have given the OS time to free up the pagefile, but we simply rebooted the server and, voilà, the pagefile had shrunk and everything worked fine.

    Interestingly, during the 30 minutes we were investigating, Windows didn't count the size of pagefile.sys at all (80 GB). After the restart, Windows did find the pagefile and included its size in the total disk usage (now 40 GB, which is still too big).

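    One way to spot this kind of discrepancy from inside SQL Server is to compare the OS-reported free space per drive with the total size of the database files on that drive (a sketch; sizes come back in MB):

    -- Free space per drive as the OS reports it.
    EXEC xp_fixeddrives;
    
    -- Total size of all data/log files per drive, in MB (size is in 8 KB pages).
    SELECT LEFT(physical_name, 1) AS Drive,
           SUM(size) / 128        AS TotalFileSizeMB
    FROM sys.master_files
    GROUP BY LEFT(physical_name, 1);
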