The shrink should be much, MUCH faster, and there's no need for any further index maintenance on your main data objects, which will have been created neatly in order, with minimal risk of future fragmentation.
If you are running out of space and your data isn't supposed to get that big, then shrink, but rebuild your indexes afterwards with appropriate fill factors that allow for typical growth. If your end goal is actually to reduce backup size, make sure you implement a comprehensive backup strategy to clear out the transaction log, and use the compression option when you back up the database.
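For illustration only (the database name, table name, backup path, and the 90% fill factor below are placeholders, not anything from the thread), a rebuild-then-compressed-backup pass might look roughly like this:

-- Placeholder names: SalesDB, dbo.Orders, and the 90% fill factor are examples only.
-- Rebuild with a fill factor that leaves room for typical growth.
ALTER INDEX ALL ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- Compressed full backup, plus a log backup so the transaction log can be reused.
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_full.bak'
WITH COMPRESSION, INIT;

BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_log.trn'
WITH COMPRESSION;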
I wouldn't recommend an auto-growth increment of 5 GB unless you expect to grow by 5 GB frequently; otherwise you could see intermittent performance problems. Your data file size should first be set to what you think is required for, say, a year, and auto-growth should be set to an increment you've tested and confirmed doesn't affect operating performance. Also, rebuilding indexes before shrinking is pointless: the shrink mangles the indexes again as it moves pages around to recover space, so they end up badly laid out anyway.
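As a rough sketch of that advice (the database name, logical file name, and the specific sizes are all made up for the example), pre-sizing the file and setting a fixed, tested growth increment could look like:

-- Placeholder database/file names and sizes; pick values from your own growth estimates.
-- Pre-size the data file for roughly a year of expected growth...
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, SIZE = 100GB);

-- ...and use a fixed growth increment you have tested, not a percentage or a tiny default.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, FILEGROWTH = 1GB);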
I don't know if this would work better than reindexing after the shrink, but another option would be to create a new, appropriately sized data file and move all the data into it.
In that case I would do a reindex first so you know what the actual data size is. One catch is that if this is the primary data file (the first file in the primary filegroup), I don't think you can empty it. You should be able to shrink it and then move the data back afterwards, which would avoid the page reversal.
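A hedged sketch of the move-the-data-out approach (all names, paths, and sizes below are invented for the example): add a correctly sized file to the same filegroup, empty the old secondary file into it, then drop the old file. As noted above, this doesn't work for the primary data file, which can't be removed.

-- Placeholder names, path, and sizes. EMPTYFILE moves this file's pages into the
-- remaining files of the same filegroup; it cannot be used to remove the primary file.
ALTER DATABASE SalesDB
ADD FILE (NAME = SalesDB_data2, FILENAME = N'E:\Data\SalesDB_data2.ndf',
          SIZE = 80GB, FILEGROWTH = 1GB);

DBCC SHRINKFILE (SalesDB_data_old, EMPTYFILE);   -- push the data out of the old secondary file

ALTER DATABASE SalesDB REMOVE FILE SalesDB_data_old;   -- drop it once it is empty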
However, if you're looking at moving to solid state, that shouldn't make a big difference anyway.

You seem to be saying this is true:
If you have a growing database (as the majority of non-static databases tend to be), this means that the database will grow again. Do you mean that the OS (or a commercial disk defragmenter, for that matter) will handle defragmentation adequately without worrying about the internal shrink function? I apologize for any confusion I may have caused with my stream-of-thought style and wording. For SQL Server, the major point I was trying to make was that physical file fragmentation is bad. That was slightly tongue-in-cheek.
The more likely case is closer to the worst case: the physical fragmentation will cause pain, the cost of the growths will cause pain, and it is all for naught when you could have right-sized the database from the beginning at a value you can grow into over time.
Your first thought about reorganizing the database may have been induced by some of the wording in SQL Server maintenance plans, which, IIRC, sort of lump an index reorganization and a shrink together into what looks like one operation.
The best way to handle this type of fragmentation is by rebuilding or reorganizing indexes, not shrinking the database. All a shrink is designed to do is reduce the free space within your database files.
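To make that concrete, a common pattern (run inside the database in question; the 5% / 30% thresholds are the usual rules of thumb, not anything mandated here, and the index/table names in the comments are placeholders) is to check sys.dm_db_index_physical_stats and then reorganize or rebuild:

-- The 5% / 30% thresholds are rule-of-thumb values, not hard limits.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
           ELSE 'leave alone'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, per index, for example:
-- ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
-- ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;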
What would webmasters do if database size is limited by their hosting service? For example: MB before shrinking and 18 MB after. They asked my opinion on shrinking. Let's assume the growth is predictable and the auto-growth settings are sensible. Do you think there would be any problem with performing a one-off shrink in this case? How about a shrink where you manually set the shrunk file size to be a little greater than one auto-growth unit, so that an auto-grow won't immediately occur?
Thanks for the comment. Feel free to shoot me an e-mail (mike at the domain of this blog works; your e-mail address looks like a fake one to prevent spam). I would say in this case a shrink sounds like an alright idea. This will prevent the file from having to auto-grow in small chunks. While you're at it, I would also suggest changing the auto-growth from the default 1 MB to something more in line with the size of the DB. Paul and Kimberly blog about this over at their site, and the link from Paul at the end of the above article is a good place to go for more.
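A minimal sketch of that one-off approach, with an invented file name and sizes chosen only for illustration:

-- Placeholder file name and sizes. One-off shrink to a target comfortably above
-- the space the data actually uses, then fix the default 1 MB growth increment.
DBCC SHRINKFILE (SmallDB_data, 256);   -- target size in MB

ALTER DATABASE SmallDB
MODIFY FILE (NAME = SmallDB_data, FILEGROWTH = 64MB);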
The goal is to avoid physical file fragmentation from lots of growths.
What about if you reindex first, then shrink the files afterwards?

Thanks for this post. I think I might add the URL to my email signature.
Cleaning up data is good, and reducing storage space is good as well. But shrinking can affect your database performance significantly if it is not done properly: shrinking will increase fragmentation and make DB operations more costly. Rebuilding indexes after a shrink is necessary to reduce fragmentation and restore performance.
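For example (names and target size are placeholders), if a shrink really is needed, make the rebuild the last step and expect it to grow the file back somewhat:

DBCC SHRINKFILE (SalesDB_data, 51200);   -- shrink to roughly 50 GB

ALTER INDEX ALL ON dbo.Orders REBUILD;   -- repeat for each significant table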
Also, the cost of growing the files again later, for example to accommodate additional records, is high.