Angular Components – Best Practices (Part 1)

Angular is a popular framework for creating front-ends for web and mobile applications. Components are an important part of any Angular application.

In this blog, we look at some of the best practices for Angular’s components.

1. File Name

It’s important that we are able to easily find files within our application. To make this possible, we need to name them carefully, so that each file’s name explicitly describes its contents and makes it easy to identify.

For example, the file name ‘catalog.component.ts’ clearly indicates that this is our catalog component. This naming style, which consists of a descriptor followed by a period and then the file’s type, is a recommended practice according to the Angular style guide.

Similarly, the CSS file should be named ‘catalog.component.css’, making it clear that these are the styles for the catalog component. We can do the same thing for the template and call it ‘catalog.component.html’.

2. Prefixing Component Selectors

There are multiple ways to write code, but when we follow some fundamental rules in our code-writing practices and organize our files and folders correctly, it becomes much simpler to locate and identify code quickly, and to reuse it.

We have a couple of components in our example that have selectors that we use in our HTML.

For example, this nav-bar.component has a nav-bar selector. It is a good practice to add a prefix to these selectors that matches the feature area where they are used. This component is in our core module, which is an app-level module, so we will prefix it with wb, for whitebeards.

Prefixing component selectors in this manner avoids conflicts when we import a module containing a component whose selector would otherwise collide with one of our component names.

This component resides in our shared module, which is also an app-wide module, so we will prefix this one with wb as well. We do not have any components with selectors in our feature areas, i.e. the catalog or user features.

Prefixes are usually two to four characters, to keep them short and avoid distracting from the actual component name. In fact, prefixes can be whatever you want; just be sure to choose one that represents the feature area the component belongs to, and add it whenever you declare a selector.
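As a sketch, the metadata for the prefixed nav-bar component might look like this (a plain object is shown here so the example stands alone; in a real application this object is passed to Angular’s @Component decorator imported from ‘@angular/core’):

```typescript
// Hypothetical metadata for the nav-bar component, with the 'wb' prefix on
// its selector; in a real app this object is the argument to @Component(...).
const navBarComponentMetadata = {
  selector: 'wb-nav-bar',                   // 'wb' = whitebeards, our app-wide prefix
  templateUrl: './nav-bar.component.html',
};

// The component is then used in templates as <wb-nav-bar></wb-nav-bar>.
```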

3. Using separate CSS and template files

The Angular style guide recommends that if a template or its CSS has more than three lines, you should extract it into its own file. So we will start by creating a sign-in.component.css file and copying the styles out into it.
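Sketched as component metadata (again a plain object so the snippet stands alone; in a real app it is passed to @Component from ‘@angular/core’), the extracted files are then referenced like this:

```typescript
// Hypothetical sign-in component metadata after extracting the template and
// styles into their own files, per the style guide's three-line rule.
const signInComponentMetadata = {
  templateUrl: './sign-in.component.html',  // extracted template
  styleUrls: ['./sign-in.component.css'],   // extracted styles
};
```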

4. Decorating input and output properties

To declare input and output properties on components, you can list them as inputs in your component metadata, but it is recommended that you use the @Input and @Output decorators instead. The same syntax exists for output properties, and the same rule applies there. Decorating our input and output properties makes it more obvious which properties are used for input and output, and simplifies the renaming process.
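Here is a minimal, self-contained sketch of how decorated input and output properties read. The Input/Output stand-ins below are defined locally so the snippet runs without Angular installed; in a real component you would import @Input, @Output, and EventEmitter from ‘@angular/core’ and use the @ decorator syntax directly:

```typescript
// Stand-ins for Angular's @Input()/@Output() property decorators, defined
// locally so the sketch is self-contained. A property decorator is just a
// function that receives the class prototype and the property name.
const inputProps: string[] = [];
const outputProps: string[] = [];

function Input() {
  return (_target: object, key: string) => { inputProps.push(key); };
}
function Output() {
  return (_target: object, key: string) => { outputProps.push(key); };
}

class EventEmitter<T> {
  emit(value: T): void { console.log('emitted:', value); }
}

// Hypothetical star-rating component: `rating` flows in, `ratingClicked` flows out.
class StarRatingComponent {
  rating = 0;                                  // in Angular: @Input() rating: number;
  ratingClicked = new EventEmitter<string>();  // in Angular: @Output() ratingClicked = ...
}

// Applying the decorators manually is equivalent to the @ syntax:
Input()(StarRatingComponent.prototype, 'rating');
Output()(StarRatingComponent.prototype, 'ratingClicked');
```

Reading the class, it is immediately obvious which properties are inputs and which are outputs, and renaming `rating` touches only the property itself.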

Finally, it is also simply less code to write.

Don’t miss my next blog, ‘Angular Components Best Practices (Part 2)’, where I will share some more best practices for Angular components.

Angular 2 – Getting Started

Why Angular?

  1. Expressive HTML: Angular makes HTML more expressive. Expressive HTML is simply a different way of seeing, interpreting, and authoring markup: it is HTML with features such as if conditions, for loops, and local variables.
  2. Powerful data binding: Angular has powerful data binding. We can easily display fields from our data model, track changes, and process updates from the user.
  3. Modular by design: Angular promotes modularity by design, making it easier to create and reuse content.
  4. Built-in back-end integration: Angular has built-in support for communication with back-end services.

Why Angular 2? 

Angular 2 is built for speed: it has faster initial loads and improved rendering times. It is modern, embracing advanced features of the latest JavaScript standards, such as classes and modules. Angular 2 also has a simplified API, with fewer directives and simpler binding, which enhances productivity and improves the day-to-day workflow.

An Angular 2 application is a set of components, together with services that provide functionality across those components.

What is an Angular 2 Component?

Each component comprises a template (the HTML for the user interface), a class (the code associated with the view, i.e. its properties and methods), and metadata (additional information about the component, defined with a decorator).
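To make the three parts concrete, here is a sketch of a hypothetical AppComponent (the metadata is shown as a plain object so the snippet stands alone; in a real app it is passed to the @Component decorator from ‘@angular/core’):

```typescript
// Metadata: additional information about the component, including its template.
const appComponentMetadata = {
  selector: 'pm-app',                             // hypothetical selector
  template: '<div><h1>{{pageTitle}}</h1></div>',  // the view's HTML
};

// Class: the code associated with the view (its properties and methods).
class AppComponent {
  pageTitle = 'Acme Product Management';
}
```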

What are Angular modules?

Angular modules help us organize our application. Every Angular application has at least one Angular module.

There are two types of Angular module: the root Angular module and feature Angular modules.

Since Angular 2 is a JavaScript framework, we can use it with any language that compiles to JavaScript. The most common language choices for Angular 2 are the ES5 version of JavaScript, ES2015, TypeScript, and Dart.

What is TypeScript?

TypeScript is an open-source language and a superset of JavaScript. One of the benefits of TypeScript is strong typing, which essentially means that everything has a data type. The Angular team itself takes advantage of these benefits, using TypeScript to build Angular 2. TypeScript type definition files describe the different aspects of a library.
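A small, hypothetical illustration of strong typing: every declaration below has a data type, and the compiler rejects any assignment that violates it.

```typescript
// Each variable and parameter carries an explicit data type.
let productName: string = 'Hammer';
let price: number = 12.5;

function applyDiscount(amount: number, percent: number): number {
  return amount - amount * (percent / 100);
}

const discounted: number = applyDiscount(price, 10);
// productName = 42;  // compile-time error: number is not assignable to string
```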

When setting up our environment for an Angular 2 application, we need Node and its package manager, npm, to install and manage the required packages.

Given below are some of the files which we need to set up and configure for an Angular 2 application.

TypeScript configuration file (tsconfig.json):

This specifies the TypeScript compile options and other settings. The TypeScript compiler, tsc, reads this file and transpiles our TypeScript code to ES5 code. The sourceMap option, if enabled, generates map files, which assist with debugging. The emitDecoratorMetadata and experimentalDecorators options provide support for decorators; both must be set to true, otherwise the Angular application will not compile. The noImplicitAny option defines whether all our variables must be strongly typed or not.

We can configure this file, as per requirement:
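A representative tsconfig.json with the options discussed above (the values shown are illustrative; adjust them to your project’s needs):

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "noImplicitAny": false
  }
}
```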

  • TypeScript definitions file (typings.json): This file contains a list of type definitions for node module libraries. We specify core-js, which brings ES2015 capabilities to ES5 browsers. Node is used to develop and build the application, and acts like a server.
  • npm package file (package.json): This is the most important file for successful setup and development-time execution. It describes basic information about the application.

  • main.ts: This bootstraps our root application module. We are using dynamic bootstrapping and the JIT compiler, which means the Angular compiler compiles the application in the browser and launches it starting with the root application module, AppModule.
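The standard quickstart main.ts is only a few lines (this assumes the Angular packages are installed and an AppModule exists at the path shown):

```typescript
// main.ts: dynamically bootstrap the root module with the JIT compiler.
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

platformBrowserDynamic().bootstrapModule(AppModule);
```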

What is Azure Active Directory?

Azure Active Directory (Azure AD) is Microsoft’s multi-tenant cloud-based directory and identity management service.

It provides a comprehensive solution that addresses identity and access management requirements for on-premises and cloud applications, including Office 365 and a world of non-Microsoft SaaS applications.

To enhance your Azure Active Directory, you can add paid capabilities using the Azure Active Directory Basic, Premium P1, and Premium P2 editions. Azure Active Directory paid editions are built on top of your existing free directory, providing enterprise-class capabilities spanning self-service, enhanced monitoring, security reporting, Multi-Factor Authentication (MFA), and secure access for your mobile workforce.

Azure AD lets you focus on building your application by making it fast and simple to integrate with a world class identity management solution used by millions of organizations around the world.


Azure Active Directory plays a major role in the Azure cloud.

Benefits of improving the management of the Identity life-cycle include:

  • Reduced cost and time to integrate new users
  • Maximized investment in existing on-premises identities by extending them to the cloud
  • Reduced time for new users to access corporate resources
  • Reduced management overhead for provisioning processes
  • Improved security by ensuring access to systems can be controlled centrally
  • Consistent application of security policies
  • Reduced time to integrate acquired companies
  • Reduced business interruptions
  • Reduced exposure to outdated credentials
  • Reduced time and cost to make applications accessible from the internet
  • Increased capacity of IT to develop core application features
  • Increased security and auditing
  • Increased flexibility by delegating specific administration tasks

Azure AD also includes a full suite of identity management capabilities: multi-factor authentication (MFA), a security system that requires more than one method of authentication from independent categories of credentials to verify the user’s identity for a login or other transaction; device registration; self-service password management; self-service group management; privileged account management; role-based access control; application usage monitoring; rich auditing; and security monitoring and alerting. These capabilities can help secure cloud-based applications, streamline IT processes, cut costs, and help ensure that corporate compliance goals are met.

How reliable is Azure AD?

The multi-tenant, geo-distributed, high availability design of Azure AD means that you can rely on it for your most critical business needs. Running out of 28 data centers around the world with automated fail-over, you’ll have the comfort of knowing that Azure AD is highly reliable and that even if a data center goes down, copies of your directory data are live in at least two more regionally dispersed data centers and available for instant access.

Types of applications supported by Azure Active Directory:

Azure AD supports five primary application scenarios.

Through the admin role in Azure, the features below can be automated in Azure Active Directory:

  1. Users and groups: This is the most powerful automation capability.
  2. Enterprise applications: Specifies which users or groups can access a SaaS application.
  3. Audit logs: Provides information on all user activity.
  4. Single sign-on: You can configure Azure AD for single sign-on with over 2,000 applications.
  5. Password reset: Self-service password reset without calling the help desk; we can specify which users can reset their passwords.
  6. Azure AD Connect: Used to integrate your Azure AD with your Windows Server AD or another directory on your network.
  7. Sign-ins: Shows which users signed in successfully to an application.


When an organization moves to the cloud, new scenarios are enabled and new solutions become available to solve the organization’s problems. Identity and access management is one of the biggest concerns when integrating on-premises and cloud-based resources. Digital identities are at the core of all IT-related services because they control how people, devices, applications, and services access a variety of resources within and outside of the organization.

The paid editions add enterprise capabilities such as:

  1. Enterprise SLA of 99.9%
  2. Advanced security reports and alerts
  3. Company branding (Azure Active Directory provides this capability by allowing you to customize the appearance of web pages such as the sign-in page with your company logo and custom color schemes)
  4. Group-based licensing and application access
  5. Self-service password reset and group management
  6. Multi-Factor authentication

Azure Multi-Factor Authentication (MFA) is Microsoft’s two-step verification solution. Azure MFA helps safeguard access to data and applications while meeting user demand for a simple sign-in process. It delivers strong authentication via a range of verification methods, including phone call, text message, or mobile app verification.

Understanding Microsoft Azure Storage

Before you understand Microsoft Azure’s storage capabilities, here’s a primer on Microsoft’s multi-tenant cloud-based directory and identity management service, Azure Active Directory.

Azure’s cloud storage platform is designed for Internet-scale applications. It is highly reliable, available, and scalable: on average it manages more than 40 trillion stored objects and 3.5 million requests per second. As a result of this scalability, it is possible to store very large volumes of data, and if you combine this with the right capacity allocation, you can achieve remarkably good performance. In my opinion, Azure storage is especially durable. Remember, however, that cost is key to cloud storage: we pay for both storage and transfer bandwidth on the basis of actual usage. Data in Microsoft Azure storage is available via a REST interface, so we can access it from any programming language.

The Microsoft Azure platform offers four types of standard storage, which address different scenarios:

  1. Blobs
  2. Tables
  3. Queues
  4. Files

All of them are exposed via REST APIs and through multiple client libraries, including .NET, C++, Java, Node.js, Android, etc.

According to 2015 data, Azure storage is available in 19 different regions globally.


Image1: Azure storage available in different regions


Blob Storage

Blob storage is useful for sharing documents, images, video, and music, and for storing raw data/logs.

We can interact with Blob storage through its REST interface (PUT, GET, and DELETE).

Code sample:

// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Retrieve a reference to a container.
CloudBlobContainer container = blobClient.GetContainerReference("deepakcontainer");
// Create the container if it doesn't already exist.
container.CreateIfNotExists();
// Retrieve reference to a blob named "deepakblob".
CloudBlockBlob blockBlob = container.GetBlockBlobReference("deepakblob");
// Create or overwrite the blob with contents from a local file.
using (var fileStream = System.IO.File.OpenRead(@"path\myfile"))
{
    blockBlob.UploadFromStream(fileStream);
}

In this code, we reference a storage account and create a blob client proxy, which interacts with the blob objects; we can then upload from the client to the cloud. We need a container to organize the blobs, and if the container does not exist we create it the first time we use it.

There are three types of blobs: Block blobs, Append blobs, and Page blobs (disks).

Block blobs are optimized for streaming and storing cloud objects, and are a good choice for storing documents, media files, backups etc.

Append blobs are similar to block blobs, but are optimized for append operations. An append blob can be updated only by adding a new block at the end. Append blobs are a good choice for scenarios such as logging, where new data needs to be written only at the end of the blob.

Page blobs are optimized for representing IaaS disks and supporting random writes. An Azure virtual machine’s network-attached IaaS disk is a virtual hard disk (VHD) stored as a page blob.


Table Storage

This is a massively scalable NoSQL key/value store, very useful for storing large volumes of metadata. The platform automatically load-balances your tables as you allocate more resources, so it is very scalable. Azure tables are ideal for storing structured, non-relational data.

We can use Table storage to store flexible datasets, such as user data for web applications, address books, device information, and any other type of metadata that your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account. We can access the data using the OData protocol and LINQ queries with the WCF Data Services .NET libraries.

Code sample:

// Retrieve the storage account from the connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the table client.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
// Retrieve a reference to the table.
CloudTable table = tableClient.GetTableReference("deepak");
// Create the table if it doesn't exist.
table.CreateIfNotExists();


Queue Storage

Queue storage is an efficient solution for reliable, low-latency, high-throughput messaging. It is typically used to decouple components, for example for web role to worker role communication, and to schedule asynchronous tasks. It stores a large number of messages, in any format, of up to 64 KB each. The maximum time that a message can remain in the queue is seven days.

Code sample:

// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("deepakqueue");
// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();
// Create a message and add it to the queue.
CloudQueueMessage message = new CloudQueueMessage("Hello, Trigent");
queue.AddMessage(message);


File Storage

We can use File storage to share files, and it is very useful when moving on-premises applications to the cloud.

It supports both REST and SMB protocol access to the same file share.

File storage contains the following components: the storage account, file shares, directories, and files.

Code sample:

// Create a CloudFileClient object for credentialed access to File storage.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
// Get a reference to the file share we created previously.
CloudFileShare share = fileClient.GetShareReference("logs");
// Ensure that the share exists.
if (share.Exists())
{
    // Get a reference to the root directory for the share.
    CloudFileDirectory rootDir = share.GetRootDirectoryReference();
    // Get a reference to the directory we created previously.
    CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
    // Ensure that the directory exists.
    if (sampleDir.Exists())
    {
        // Get a reference to the file we created previously.
        CloudFile file = sampleDir.GetFileReference("Log1.txt");
        // Ensure that the file exists.
        if (file.Exists())
        {
            // Write the contents of the file to the console window.
            Console.WriteLine(file.DownloadText());
        }
    }
}
Speak to our cloud experts to learn what Microsoft Azure can do for your organization.