If you decide to host SQL Server on a virtual machine in Azure (or on any other cloud provider), it is a good idea to configure the server to back up its databases to Azure Blob storage and take advantage of its security, availability, and durability.
Azure Blob storage is Microsoft’s object storage solution for the cloud. It is optimized for storing massive amounts of unstructured data, i.e. data that doesn’t adhere to a particular data model or definition, such as text or binary data. For more information on Azure Blob storage, you can read this article.
Of course, the first step is to create a storage account in Azure. Follow these steps to create one:
1- Click Create a resource on the home page.
2- Choose Storage, then Storage account.
3- Enter the basic information for your account: subscription, resource group, your preferred name for the storage account, location (also known as region), performance (choose Premium if you want SSD-backed storage for frequently accessed data), account kind (choose StorageV2 to get all the latest features), and replication (read this article to understand the differences between the options). By default the access tier is Hot; you can change it to Cool if the data won’t be accessed frequently.
The rest of the options can be left at their defaults. Just make sure you don’t disable secure transfer under the Advanced tab, otherwise SQL Server won’t be able to access the account.
4- Review and create your storage account.
5- Create a blob container.
Now that we have our storage account and blob container, it’s time to configure SQL Server to access the container. But before that, I’d like to briefly describe the blob types Azure supports. There are three: block blobs, append blobs, and page blobs.
Block blobs are for discrete storage objects such as JPEGs, log files, etc. that you’d typically view as files in your local OS.
Page blobs are for random read/write storage such as VHDs (in fact, page blobs are what Azure Virtual Machine disks use). The maximum size is 8 TB, and they are supported by both standard and Premium storage.
An append blob is composed of blocks and is optimized for append operations. When you modify an append blob, blocks are added to the end of the blob only, via the Append Block operation; updating or deleting existing blocks is not supported. Unlike a block blob, an append blob does not expose its block IDs.
SQL Server can store backups as either page blobs or block blobs, and each requires a different configuration!
Another thing worth mentioning: if you store your backups as page blobs, you won’t be able to take advantage of a great storage account feature called Lifecycle Management, and you won’t be able to change the access tier of your blobs, since both are only available for block blobs.
More information on different access tiers can be found here.
More information on Lifecycle Management feature can be found here.
Now it’s time to configure SQL Server…
I’m using SQL Server 2019 Developer edition on a Windows Server 2019 VM. I’ve restored the AdventureWorks2017 database and I’m going to use it for this demo.
First we are going to configure SQL Server to back up our database as a page blob, as it’s the easier of the two!
The first thing to do is create a credential; you can find Credentials under the Security folder.
Enter a name for the credential you are about to create. The identity is the name of your storage account, and the password is one of your storage account’s access keys, which can be found under the Access keys option.
You can use either key1 or key2.
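If you prefer a script to the UI, the same credential can be created with T-SQL. A minimal sketch, assuming a placeholder credential name, storage account name, and key:

```sql
-- Create a credential for page blob backups.
-- The credential name is arbitrary here; the identity must be the
-- storage account name, and the secret is one of its access keys.
-- <mystorageaccountname> and <access_key> are placeholders.
IF NOT EXISTS (SELECT * FROM sys.credentials WHERE name = 'AzureBackupCredential')
    CREATE CREDENTIAL [AzureBackupCredential]
    WITH IDENTITY = '<mystorageaccountname>',
         SECRET = '<access_key>';
```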
Now that the credential is created, you can go to the Management folder and create a new Maintenance Plan.
Give your plan a name, and update the step name and description by double-clicking on it. You also have the option to schedule the sub-plans individually; I’m scheduling mine to run every day at 12 AM.
From Toolbox, choose Back Up Database Task and drag it into your plan.
Double-click it to open the properties window. Choose the backup type (I’m choosing Full), the databases you want to include, and URL as the Back up to destination.
Go to the Destination tab and select the credential we created a few steps back. Enter the name of your container in the Azure storage container field. I’m leaving the other options as they are.
Click OK and save the plan. Now if you go to Jobs you’ll see the newly created backup job. Right-click it and choose Start Job at Step…
It will execute the plan and your backup will be saved to your blob container.
Note that the blob type is page blob and the access tier is blank, as access tiers aren’t supported for page blobs.
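Under the hood, the maintenance plan issues a BACKUP … TO URL statement that authenticates with the credential. A minimal sketch of the equivalent T-SQL, assuming the placeholder credential name and container URL used above:

```sql
-- Back up to Azure Blob storage as a page blob, authenticating with
-- the storage-account-key credential created earlier.
-- The storage account, container, and credential names are placeholders.
BACKUP DATABASE [AdventureWorks2017]
TO URL = N'https://<mystorageaccountname>.blob.core.windows.net/<container>/AdventureWorks2017.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     STATS = 10;
```

Using WITH CREDENTIAL (a storage-account-key credential) is what makes SQL Server write a page blob rather than a block blob.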
While it’s relatively easy to set up, in my opinion this isn’t the best option, because you can’t change the access tier or apply lifecycle management policies. For example, I’d like to keep my backups in the Hot access tier for a day, move them to Cool and keep them there for 5 days, and then move them to Archive. I’d also like to delete the backup files after 2 weeks to save on storage costs; otherwise my container will grow over time and the cost will accumulate. The better option is to configure SQL Server to store the backups as block blobs.
Storing backups as block blobs requires a different approach. For page blobs we created a credential that used the storage account name and one of the access keys as the identity and password. To store backups as block blobs, we have to use a Shared Access Signature (SAS) instead. So let’s create one!
- Go to your storage account and click Shared access signature under Settings.
- Select Blob under Allowed services.
- Select the proper permissions under Allowed permissions.
- Set an end date for the token.
- Click the Generate SAS button.
- Copy the SAS token.
Now go to SQL Server and create a new credential. This time the name must be the full URL of your blob container, which can be found under the container’s Properties.
The identity must be the literal string “SHARED ACCESS SIGNATURE” (yes, it’s hard-coded!), and the password is your SAS token without the question mark at the beginning.
SAS tokens start with ?sv=; make sure you remove the ? so the token starts with sv=.
You can also create the credential using the script below (make sure you update it with your blob storage info and SAS token):
IF NOT EXISTS (SELECT * FROM sys.credentials
               WHERE name = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>')
    CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS_TOKEN>';
Now that we have created the credential, we can create the backup using the script below:
BACKUP DATABASE [AdventureWorks2017]
TO URL = N'https://fastpacedtutorials.blob.core.windows.net/db-backup/AdventureWorks2017.bak'
WITH MAXTRANSFERSIZE = 4194304,
     BLOCKSIZE = 65536,
     CHECKSUM,
     FORMAT,
     STATS = 1;
You can see that the backup file is created in our blob container, the access tier is Hot, and the blob type is Block Blob.
Unfortunately you can’t use a maintenance plan here, as the script it generates is not compatible with SAS credentials and won’t work. Instead, you can create a SQL Agent job and configure it to execute this script on a schedule.
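Such a job can itself be scripted with the msdb stored procedures. A minimal sketch, assuming placeholder job and schedule names and a placeholder container URL (adjust the backup command to match your own):

```sql
-- Create a SQL Agent job that runs the backup-to-URL script daily at midnight.
-- Job, step, and schedule names, and the container URL, are placeholders.
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'AdventureWorks2017 backup to blob';
EXEC dbo.sp_add_jobstep
    @job_name      = N'AdventureWorks2017 backup to blob',
    @step_name     = N'Backup to URL',
    @subsystem     = N'TSQL',
    @database_name = N'master',
    @command       = N'BACKUP DATABASE [AdventureWorks2017]
TO URL = N''https://<mystorageaccountname>.blob.core.windows.net/<container>/AdventureWorks2017.bak''
WITH MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536, CHECKSUM, FORMAT, STATS = 1;';
EXEC dbo.sp_add_schedule
    @schedule_name     = N'Daily at midnight',
    @freq_type         = 4,       -- daily
    @freq_interval     = 1,       -- every 1 day
    @active_start_time = 000000;  -- 12 AM
EXEC dbo.sp_attach_schedule
    @job_name      = N'AdventureWorks2017 backup to blob',
    @schedule_name = N'Daily at midnight';
EXEC dbo.sp_add_jobserver @job_name = N'AdventureWorks2017 backup to blob';
GO
```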
Now that our backup is saved as a block blob, we can configure Lifecycle Management to move our backups between tiers and remove them after a certain number of days. Go to the storage account and click Lifecycle Management under Blob service. Click Add rule at the top to create a new rule.
Define a policy that suits your needs and click Next. On this page you can select a container, or even specific folders under that container, to be included in your policy. I’m going to include the whole container by clicking Browse and selecting the container.
Review and add the policy.
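Lifecycle policies can also be viewed and edited as JSON in the storage account. A sketch of the tiering schedule described earlier (Hot for 1 day, Cool for 5 days, Archive, delete after 2 weeks), assuming the container is named db-backup:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "tier-and-expire-backups",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 1 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 6 },
            "delete":        { "daysAfterModificationGreaterThan": 14 }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "db-backup/" ]
        }
      }
    }
  ]
}
```

Note that the day counts are cumulative since last modification, which is why Archive kicks in at day 6 (1 day Hot plus 5 days Cool).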
That’s it! Azure Storage will now move your files between access tiers and permanently delete them after the defined number of days, so you pay only for the storage you actually need.