
Understanding Microsoft Azure Storage

The Azure Storage platform is designed for Internet-scale applications and is highly reliable, available, and scalable. On average, it manages more than 40 trillion stored objects and 3.5 million requests per second. Because of this scalability, it can store very large volumes of data, and with the right allocation of resources it delivers remarkably good performance. Azure Storage is also highly durable. Remember, however, that cost is a key consideration in cloud storage: we pay for both storage capacity and transfer bandwidth based on actual usage. Data in Azure Storage is available through a REST interface, so it can be accessed from virtually any programming language.

The platform offers four types of standard storage, each suited to a different set of scenarios:

  1. Blobs
  2. Tables
  3. Queues
  4. Files

All four are exposed via REST APIs and through multiple client libraries, such as .NET, C++, Java, Node.js, and Android.

As of 2015, Azure Storage is available in 19 regions globally.


Image 1: Azure Storage availability across regions


Blob Storage

Blob storage is useful for sharing documents, images, video, and music, and for storing raw data and logs.

We can interact with Blob storage through the REST interface (PUT, GET, and DELETE).

Code sample:

// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Retrieve a reference to a container.
CloudBlobContainer container = blobClient.GetContainerReference("deepakcontainer");
// Create the container if it doesn't already exist.
container.CreateIfNotExists();
// Retrieve reference to a block blob named "deepakblob".
CloudBlockBlob blockBlob = container.GetBlockBlobReference("deepakblob");
// Create or overwrite the blob with contents from a local file.
using (var fileStream = System.IO.File.OpenRead(@"path\myfile"))
{
    blockBlob.UploadFromStream(fileStream);
}

In the code above, we first obtain a reference to the storage account, then create a blob client proxy that interacts with the Blob service. Blobs are organized into containers, so before uploading from the client to the cloud we get a container reference and create the container if it does not already exist.

There are three types of blobs: Block blobs, Append blobs, and Page blobs (disks).

Block blobs are optimized for streaming and storing cloud objects, and are a good choice for storing documents, media files, backups etc.

Append blobs are similar to block blobs, but are optimized for append operations. An append blob can be updated only by adding a new block at the end. Append blobs are a good choice for scenarios such as logging, where new data needs to be written only at the end of the blob.
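As an illustration of the append-only model, here is a minimal logging sketch using the same client library; the container name "logscontainer" and blob name "applog.txt" are illustrative assumptions, and `storageAccount` is set up as in the blob sample above.

```csharp
// Assumes the same storageAccount setup as the blob sample above.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("logscontainer");
container.CreateIfNotExists();

// Get a reference to an append blob and create it if it is new.
CloudAppendBlob appendBlob = container.GetAppendBlobReference("applog.txt");
if (!appendBlob.Exists())
{
    appendBlob.CreateOrReplace();
}

// Each call adds a new block at the end of the blob;
// existing content is never modified.
appendBlob.AppendText("Request handled at " + DateTime.UtcNow + Environment.NewLine);
```

Note that, unlike a block blob, an append blob cannot be updated in place; every write goes to the end, which is exactly what a log wants.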

Page blobs are optimized for representing IaaS disks and support random writes. The network-attached IaaS disks of an Azure virtual machine are Virtual Hard Disks (VHDs) stored as page blobs.


Table Storage

This is a massively scalable NoSQL key/value store, very useful for storing large volumes of metadata. The platform automatically load balances your table data across servers as traffic grows. Azure tables are ideal for storing structured, non-relational data.

We can use Table storage to store flexible datasets, such as user data for web applications, address books, device information, and any other type of metadata that your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account. We can access data using the OData protocol and LINQ queries with the WCF Data Services .NET libraries.

Code sample:

// Retrieve the storage account from the connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the table client.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
// Retrieve a reference to the table.
CloudTable table = tableClient.GetTableReference("deepak");
// Create the table if it doesn't exist.
table.CreateIfNotExists();
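The sample above only creates the table. As a sketch of how an entity might be inserted and read back, using the `table` reference from the sample, something like the following could be used; the `CustomerEntity` class and its property names are illustrative assumptions, not from the original article.

```csharp
// An entity is identified by its partition key and row key.
public class CustomerEntity : TableEntity
{
    public CustomerEntity() { }
    public CustomerEntity(string lastName, string firstName)
    {
        PartitionKey = lastName;
        RowKey = firstName;
    }
    public string Email { get; set; }
}

// Insert an entity into the table created above.
CustomerEntity customer = new CustomerEntity("Kumar", "Deepak") { Email = "deepak@example.com" };
table.Execute(TableOperation.Insert(customer));

// Retrieve it again by partition key and row key.
TableResult result = table.Execute(TableOperation.Retrieve<CustomerEntity>("Kumar", "Deepak"));
```

The partition key also drives the automatic load balancing: entities sharing a partition key are served together, so choosing it well matters for scale.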


Queue Storage

Queue storage is an efficient solution for reliable, low-latency, high-throughput messaging. It is typically used to decouple application components, for web role to worker role communication, and to schedule asynchronous tasks. A queue can store a large number of messages, in any format, each up to 64 KB in size. The maximum time a message can remain in the queue is seven days.

Code sample:

// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("deepakqueue");
// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();
// Create a message and add it to the queue.
CloudQueueMessage message = new CloudQueueMessage("Hello, Trigent");
queue.AddMessage(message);
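On the consuming side (for example, in a worker role), a sketch of dequeuing and deleting the message with the same client library, reusing the `queue` reference from the sample above, might look like this; the processing step is a placeholder.

```csharp
// Get the next message; it becomes invisible to other consumers
// for a visibility timeout (30 seconds by default).
CloudQueueMessage retrieved = queue.GetMessage();
if (retrieved != null)
{
    // Process the message content here.
    Console.WriteLine(retrieved.AsString);

    // Delete it so it is not picked up again when the
    // visibility timeout expires.
    queue.DeleteMessage(retrieved);
}
```

Deleting only after successful processing is what makes the decoupling reliable: if the worker crashes mid-processing, the message reappears for another consumer.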


File Storage

We can use File storage to share files between applications. It is very useful when moving on-premises applications that depend on file shares to the cloud.

It supports both REST and SMB protocol access to the same file share.

File storage can be accessed by both on-premises and cloud applications, as shown below.

Image 2: File storage components

Code sample:

// Create a CloudFileClient object for credentialed access to File storage.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
// Get a reference to the file share we created previously.
CloudFileShare share = fileClient.GetShareReference("logs");
// Ensure that the share exists.
if (share.Exists()) {
    // Get a reference to the root directory for the share.
    CloudFileDirectory rootDir = share.GetRootDirectoryReference();
    // Get a reference to the directory we created previously.
    CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
    // Ensure that the directory exists.
    if (sampleDir.Exists()) {
        // Get a reference to the file we created previously.
        CloudFile file = sampleDir.GetFileReference("Log1.txt");
        // Ensure that the file exists.
        if (file.Exists()) {
            // Write the contents of the file to the console window.
            Console.WriteLine(file.DownloadText());
        }
    }
}
Deepak Kumar Barik


Deepak Kumar Barik works as a Senior Software Engineer with Trigent Software. Having completed his MCA and MCPD, he has nearly seven years of experience in .NET technology. He has hands-on experience in C#.NET, ASP.NET, MVC, WCF, Azure, Web API, web technologies, and SQL.
