Microsoft Dynamics CRM – Processes

Microsoft Dynamics CRM is Customer Relationship Management software developed by Microsoft Corporation. It is used to manage a company's relationships with customers and to organize, automate, and synchronize sales, marketing, customer service, and technical support. Users can customize Dynamics CRM without writing code, which helps reduce costs and increase a company's profit.

Processes in Dynamics CRM help users focus on their work instead of remembering to perform a set of manual steps.

There are four types of processes, each designed for a specific purpose in Dynamics CRM.

In this blog, we will discuss two types of processes and their respective uses.

Business Process Flow

A Business Process Flow guides users through a process using stages and steps so they can complete their tasks effectively. A business process flow is divided into a set of stages, and each stage contains at least one step (a field from the entity the stage is based on). We can customize the flow to suit our requirements by adding or removing steps, changing their order, or adding new entities to the process. Each business process flow can contain no more than 30 stages, each stage can contain no more than 30 steps, and a 'multi-entity' process can span a maximum of five entities.

Steps to Create a Business Process Flow in Dynamics CRM

To start with, we need the System Administrator security role or equivalent permissions to create a business process flow.

  • Go to Settings –> Processes –> on the action toolbar, click the "New" button. The process dialog box opens and prompts for the required fields.
    • Enter a process name (it does not need to be unique, but it should be meaningful).
    • Select Business Process Flow in the Category list (the category cannot be changed later).
    • Select the entity you want to base the process on in the Entity list, and then choose OK.
  • The business process flow is now created with a predefined stage.

 

  • Add stages using the Stage component if your users will progress from one business stage to another in the process.
  • Select a category for the stage. The category (such as Qualify or Develop) appears as a chevron in the process bar (Fig. 3).
  • Add steps to the stage using the fields listed in the drop-down list (Entity Fields).
  • Add a workflow to a stage from the Components tab, or add it to the Global Workflow item in the designer.
  • Click Validate, then save the process as a draft if you want to keep working on it, or click "Activate" to make it available to your team.

Note: To edit a business process flow, go to Settings –> Processes –> Business Process Flows, and then select the business process flow you want to edit.

Work Flow

Workflows in Dynamics CRM are used to run automated processes without a user interface, and they can optionally require input from the user. Workflows can be triggered by specific conditions or started manually by users.

Workflow Process Properties:

Process Name: Enter a process name (it does not need to be unique, but it should be meaningful).

Activate As: You can select Process template to create an advanced starting point for other workflows. If you choose this option, the workflow is not applied after you activate it; instead, it becomes available for selection in the Create Process dialog when you select Type: New process from an existing template (select from list).

Process templates are convenient when you have a number of similar workflow processes and want to define them without duplicating the same logic.

Entity: Each workflow process must be set to a single entity. You can't change the entity after the workflow process is created.

Category: Select Workflow in the Category list (the category cannot be changed later).

Available to Run: Specifies how the workflow can be run. The following options are supported:

  1. Run this workflow in the background (recommended) – This check box reflects the option you selected when you created the workflow. It is disabled here, but you can change it from the Actions menu by choosing either Convert to a real-time workflow or Convert to a background workflow.
  2. As an on-demand process – Choose this option if you want to allow users to run the workflow from the Run Workflow command.
  3. As a child process – Choose this option if you want to allow the workflow to be started from another workflow.

Options for Automatic Processes:

Scope: If we select Organization, the workflow logic can be applied to any record in the organization (the default value is User). Otherwise, the workflow is applied only to the subset of records that fall within the selected scope.

Start When: Here we can specify when a workflow should start automatically. You can configure a real-time workflow to run before certain events; this is a very powerful capability because the workflow can stop the action before it occurs. The options are: 1. record is created, 2. record status changes, 3. record is assigned, 4. record field value changes, and 5. record is deleted.

Workflow Job Retention:

This option automatically deletes completed workflow jobs after execution finishes, to save disk space.

Workflow Steps:

We can create a set of steps, organized into logical stages, which the workflow follows based on the actions we select. For example, the images below show how to configure sending an email to the customer.

Steps:

  • Add Step –> select "Send Email" (Fig. 3.a).
  • Next, a new step is added. Enter its name as "Send email to Customer" and click Set Properties (Fig. 3.b).

  • In the next window, configure the email by filling in fields such as From, To, Subject, and Body (we can set values from the record itself or from related records accessible through the entity's N:1 (many-to-one) relationships) (Fig. 3.c).

Finally, in the workflow process, click "Activate" to make it available in CRM.

MapReduce – Distributing Your Processing Power

MapReduce (MR) is one of the core features of the Hadoop ecosystem and works together with YARN (Yet Another Resource Negotiator). It is an out-of-the-box solution built into Hadoop for distributing the processing of data across the nodes of a cluster. MR splits the data into multiple partitions and processes each partition in parallel. The map phase transforms each input record into key-value pairs, and the reduce phase joins and aggregates those pairs into the final result. MR also handles unexpected problems, such as a Hadoop node shutting down or becoming slow, to ensure the job completes effectively.

Understanding a Problem and Addressing It with a MapReduce Solution

The example below uses the data of a hypothetical department store that holds its customers' information.

Assuming this store is a big store like Walmart or Tesco, we know the data will be huge. The store management wants a count of customers by employment type for some analysis. The old-school approach would be to dump this data into a SQL database and run a SELECT COUNT query grouped on the 'employment' column. For a large data set, this is a slow operation.


This problem can be converted into a MapReduce problem. The first step is to map the data to key-value pairs that give some insight: the key will be the employment type and the value will be a count of 1. The Mapper reads each line of the data file and emits a key-value pair, and the same key can appear repeatedly. Based on the data above, the Mapper converts the data into:

Management ->1

Technician -> 1

Entrepreneur->1

Blue-collar->1

Unknown -> 1

Management->1

So on and so forth

This mapper output is distributed across the cluster. After the map phase, the key-value pairs are shuffled and sorted automatically, so the mapper output shown above becomes:

Blue collar -> 1,

Entrepreneur->1,

Management ->1,1,

Technician->1,

Unknown->1

Now the Reducer reads each key and sums its values.

So the data will become:

Blue collar -> 1,

Entrepreneur->1,

Management ->2,

Technician->1,

Unknown->1

N.B.: We are just focusing on the first few rows of the data table shown above. On an actual implementation the counts will vary.

So, in a nutshell, the Mapper and Reducer have done the following:

Input data -> Map to key-value pairs (Mapper) -> Shuffle and sort -> Process the mapped data (Reducer)

Now let us write a small PHP program that performs this Mapper and Reducer job. The Hadoop ecosystem is built on an open-source stack, so Hadoop Streaming can work with most programming languages, such as Python, Perl, Java, and PHP.

Let us assume that the data file is saved on our server as CustomerData.txt and that it is comma-delimited.
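For illustration, assume each line is one customer record whose second field is the employment type. A hypothetical line (the exact columns are an assumption for this example, not part of the original data set) might look like:

58,management,married,tertiary,no,2143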

Create a new directory on the HDFS partition

hadoop fs -mkdir customerData

Copy the data file to the hdfs directory

hadoop fs -copyFromLocal CustomerData.txt customerData/CustomerData.txt

Mapper.php

<?php
// Mapper: read customer records from STDIN and emit "employment<TAB>1" for each line
while ($line = fgets(STDIN)) {
    $line = trim($line);
    // split the comma-delimited record into fields
    $explodedArray = explode(",", $line);
    // the second column holds the employment type
    $employment = $explodedArray[1];
    printf("%s\t%d\n", $employment, 1);
}
?>

Reducer.php

<?php

// Reducer: read "employment<TAB>count" pairs from STDIN and sum the counts per key
$employmentCountArray = array();
while ($line = fgets(STDIN)) {
    $line = trim($line);
    // split line into key and count
    list($employment, $count) = explode("\t", $line);
    if (!array_key_exists($employment, $employmentCountArray)) {
        $employmentCountArray[$employment] = $count;
    } else {
        $employmentCountArray[$employment] = $employmentCountArray[$employment] + $count;
    }
}

// emit the aggregated count for each employment type
foreach ($employmentCountArray as $employment => $count) {
    echo $employment . "->" . $count . "\n";
}
unset($employmentCountArray);
?>
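Before submitting the job to the cluster, the two scripts can be sanity-checked locally by simulating Hadoop's shuffle-and-sort phase with the Unix sort command (this assumes php is on the PATH and CustomerData.txt is in the current directory):

cat CustomerData.txt | php Mapper.php | sort | php Reducer.php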

On a Hadoop cluster this will be executed as:

hadoop jar <<path to>>hadoop-streaming-<version>.jar \
-mapper "Mapper.php" \
-reducer "Reducer.php" \
-input "customerData/CustomerData.txt" \
-output "customerData/CustomerCount.txt"
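If Mapper.php and Reducer.php exist only on the machine submitting the job, Hadoop Streaming's -file option (deprecated in favor of the generic -files option in newer releases) can be appended to the command above to ship the scripts to the task nodes, for example:

-file "Mapper.php" -file "Reducer.php"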

The output can be viewed using the hadoop fs -cat command.
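Hadoop Streaming treats the -output value as a directory and writes the results into part files inside it, so the counts can be viewed with something like:

hadoop fs -cat customerData/CustomerCount.txt/part-00000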

For more information on MapReduce and its usage, kindly refer to: https://hadoop.apache.org/docs/r2.8.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html

Digital Business Transformation: Where Are You on the Journey?

Digital leaders are companies that manage to harness the power of digital information and technologies to improve business performance. Having said this, the fact is digital business transformation (DBT) is happening on a scale and at a speed that managers find both threatening and promising. However, it is important for managers to be aware of digital technologies that have the power to enable and transform their businesses.

More people connect to the internet today through mobile devices such as smartphones and tablets than via fixed devices like PCs. As a consequence, many companies are pursuing a mobile-first strategy, whereby application development is targeted first at mobile devices and then later modified for computers and other fixed devices.

Many  traditional  systems  and  processes  are proprietary, meaning that the underlying data and insights cannot easily be shared. Digital platforms are non-proprietary systems that can facilitate the sharing of data, applications and insights across different parts of an organization. For example, programmer improvements to the Linux operating system code and applications are freely available in digital libraries on the internet.

Social media applications like Facebook, LinkedIn and Twitter allow for a two-way flow of information and communication between an organization and its key internal and external stakeholders. They can also be used as learning tools to monitor industry trends, customer sentiment and competitor moves.

Four-Step Digital Transformation Journey

The first challenge is for business leaders to create a sharing culture in which people are encouraged to express and use their internal and external knowledge for the company's benefit.

The second challenge is to develop and promote a mindset of curiosity, fostering people's appetite to better understand what they know and, more importantly, what they don't know, and link this knowledge to decision-making and business benefits.

The third challenge is to cascade an information-oriented culture throughout the company, and beyond, to customers and partners, to co-create value and innovation through smarter use of digital tools and real-time information.

The fourth challenge is to selectively prioritize emerging business areas that leverage digital tools and technologies, while still seeking to optimize areas that are challenged by these changes.

Where are you on the journey?

Many managers in companies with strong non-digital capabilities struggle with the challenges of "going digital."

Trigent Software Inc., Boston, MA, a trusted pioneer in the market for over 23 years, has helped numerous companies in the United States transform their assets and transition to new digital areas. Are you ready to transform, too?

Hadoop Distributed File System – An Overview

Hadoop Distributed File System (HDFS) is the file system on which Hadoop stores its data. It is the underlying technology that stores data in a distributed manner across the cluster. It gives applications fast access to the data for mining and analysis, and users can be assured that data saved on HDFS is protected against corruption.

HDFS is usually used for storing and reading large files. These large files can be a continuous stream of data from a web server, vehicle GPS data, or even a patient's pulse data. Such files can easily be stored across multiple machines in a distributed manner, and HDFS helps us achieve this. HDFS decomposes a large file into blocks (128 MB by default), which also allows processing of the file to be split across multiple nodes or servers, so each server processes a small portion of the file in parallel. The blocks are spread across the nodes in the cluster, and HDFS keeps replicas of each block on other nodes, so if a block on one server is corrupted, HDFS can quickly recover it from a replica with minimal data loss.


In a generic HDFS architecture there is a NameNode and there are DataNodes. The NameNode keeps track of the blocks each file is split into and the mapping that identifies which node holds which block, so a client knows where to read each chunk of the file from. The NameNode also maintains an edit log for audit purposes. The DataNodes are where the actual file blocks are stored; whenever a file is requested, the DataNodes return the file content. The DataNodes talk to each other and stay in continuous sync, replicating file blocks in real time.

To read a file in HDFS, a message is sent to the NameNode, which replies with the DataNode and block information. The client application can then connect to those specific DataNodes, which return the requested file blocks. There are client libraries in Python, Java, and other programming languages that do this job.
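The same read path can also be exercised over HDFS's WebHDFS REST interface from almost any language, PHP included. Below is a minimal sketch, assuming WebHDFS is enabled on the cluster; the NameNode host, the port (50070 is the Hadoop 2.x default for the NameNode web UI), the file path, and the user name are all placeholders to replace with your own values:

<?php
// Ask the NameNode to open the file; it redirects the request to a DataNode
// holding the blocks, and file_get_contents follows the redirect automatically.
$namenode = "namenode.example.com:50070"; // hypothetical host and default port
$path     = "/user/hadoop/customerData/CustomerData.txt";
$user     = "hadoop";                     // hypothetical HDFS user

$url = "http://$namenode/webhdfs/v1$path?op=OPEN&user.name=$user";
echo file_get_contents($url);
?>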

To write a file to HDFS, a message is first sent to the NameNode to create a new entry for that file. The client application then sends the data to a single DataNode, which replicates it to the other DataNodes in real time. Once the file is stored, an acknowledgment goes back to the NameNode via the client application so that the NameNode can record the file's blocks and the DataNodes they are stored on.

The NameNode is a very important component of the HDFS architecture. Using the NameNode's edit log, which holds the filesystem metadata, we can rebuild the NameNode if needed. We can also run a secondary NameNode, which maintains a merged copy of the edit log.

HDFS Federation allows multiple namespace volumes: it supports multiple namespaces in the cluster to improve scalability and isolation.

HDFS can be used through UI tools such as Apache Ambari, through the command line interface, or through client libraries for Java, Python, PHP, and so on.

HDFS command line example:

The HDFS command line example here assumes that you have a Hadoop cluster set up on Linux and that you are connected to a node using PuTTY.

All HDFS commands use the prefix hadoop fs -; below are some common examples:

  1. List the files on the hadoop cluster : hadoop fs -ls
  2. Create a new directory : hadoop fs -mkdir hadoop-trigent
  3. Copy a file from local file system to HDFS : hadoop fs -copyFromLocal <<filename>> <<HDFS filename>>
  4. Copy a file from HDFS to the local file system: hadoop fs -copyToLocal <<HDFS filename>> <<local file name>>
  5. Remove a file : hadoop fs -rm <<filename>>
  6. Remove a directory : hadoop fs -rmdir hadoop-trigent
  7. To see the commands available : hadoop fs

For more information on the HDFS command line, please refer to: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html

Create Your Own PHP Extensions

Introduction to Extensions

Extensions are pre-compiled code libraries that enable specific functions to be used in your PHP code. An extension may be either a PHP extension or a Zend extension. We can see the loaded extensions in the php.ini file.

; syntax:

; extension=modulename.extension

; For example, on Windows:

; extension=msql.dll

; under UNIX:

; extension=msql.so

Why we need to create Extensions

  • To create our own efficient, high-performance PHP code that adds missing features to the language.
  • When there are limitations in PHP that prevent us from calling a particular library or making OS-specific calls.
  • When we have our own set of intelligent business logic that we want to sell and execute but do not want to be viewable by others.
  • When we want to make PHP software behave in a superior manner.

How to create an Extension

The following system requirements need to be in place to create your own PHP extensions:

  • Text editor
  • PHP
  • Source code file and
  • C compiler.

Required files:

config.m4: The first required file; it stores the basic configuration data used by PHP to compile your custom extension.

PHP_ARG_ENABLE(my_code,

[Whether to enable the "my_code" extension],

[--enable-my_code   Enable "my_code" extension support])

if test $PHP_MY_CODE != "no"; then

PHP_SUBST(MY_CODE_SHARED_LIBADD)

dnl PHP_NEW_EXTENSION: the 1st argument declares the module, the 2nd lists the files to compile, and $ext_shared is the counterpart of PHP_SUBST()

PHP_NEW_EXTENSION(my_code, my_code.c, $ext_shared)

fi

In this case, my_code.c is the source code file and its content is:

#ifdef HAVE_CONFIG_H

#include "config.h"

#endif

#include "php.h"

#define PHP_MY_CODE_VERSION "1.0"

#define PHP_MY_CODE_EXTNAME "my_code"

extern zend_module_entry my_code_module_entry;

#define phpext_my_code_ptr &my_code_module_entry

// declaration of a custom my_code_function()

PHP_FUNCTION(my_code_function);

// list of custom PHP functions provided by this extension

// set {NULL, NULL, NULL} as the last record to mark the end of list

static zend_function_entry my_code_functions[] = {

PHP_FE(my_code_function, NULL)

{NULL, NULL, NULL}

};

// the following code creates an entry for the module and registers it with Zend.

zend_module_entry my_code_module_entry = {

#if ZEND_MODULE_API_NO >= 20010901

STANDARD_MODULE_HEADER,

#endif

PHP_MY_CODE_EXTNAME,

my_code_functions,

NULL, // name of the MINIT function or NULL if not applicable

NULL, // name of the MSHUTDOWN function or NULL if not applicable

NULL, // name of the RINIT function or NULL if not applicable

NULL, // name of the RSHUTDOWN function or NULL if not applicable

NULL, // name of the MINFO function or NULL if not applicable

#if ZEND_MODULE_API_NO >= 20010901

PHP_MY_CODE_VERSION, // version of the extension

#endif

STANDARD_MODULE_PROPERTIES

};
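Two more pieces are usually needed before the extension can be built; they are not shown in the listing above, so treat the following as a hedged sketch. First, when compiled as a shared module the extension has to hand its module entry to the engine, and the declared my_code_function() needs a body (the two-argument RETURN_STRING used here is the PHP 5 style; PHP 7 drops the second argument):

#ifdef COMPILE_DL_MY_CODE

// expose the module entry when the extension is built as a shared (.so) module

ZEND_GET_MODULE(my_code)

#endif

PHP_FUNCTION(my_code_function)

{

// return a constant string just to prove the extension is loaded;

// a real extension would implement its business logic here

RETURN_STRING("Hello from the my_code extension", 1);

}

Second, with config.m4 and my_code.c in place, the extension is typically compiled with the standard PHP build tools:

phpize

./configure --enable-my_code

make

sudo make install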

Add to php.ini file

The last thing you need to do is to add the following line to your php.ini to load your extension on PHP startup:

extension=my_code.so

Testing the Extension

You can test your PHP extension by typing the following command:

  1. $ php -r "echo my_code_function();"

What is Azure Active Directory?

Azure Active Directory (Azure AD) is Microsoft’s multi-tenant cloud-based directory and identity management service.

Azure™ Active Directory® (Azure AD) provides a comprehensive solution that addresses identity and access management requirements for on-premises and cloud applications, including Office 365 and a world of non-Microsoft SaaS applications.

To enhance your Azure Active Directory, you can add paid capabilities using the Azure Active Directory Basic, Premium P1, and Premium P2 editions. Azure Active Directory paid editions are built on top of your existing free directory, providing enterprise-class capabilities spanning self-service, enhanced monitoring, security reporting, Multi-Factor Authentication (MFA), and secure access for your mobile workforce.

Azure AD lets you focus on building your application by making it fast and simple to integrate with a world class identity management solution used by millions of organizations around the world.


Azure Active Directory plays a major role in the Azure cloud.

Benefits of improving the management of the Identity life-cycle include:

  • Reduced cost and time to integrate new users
  • Maximize investments of existing on-premises identities by extending them to the cloud
  • Reduced time for new users to access corporate resources
  • Reduced management overhead for the provisioning process
  • Improved security by ensuring access to systems can be controlled centrally
  • Consistent application of security policies
  • Reduced time to integrate acquired companies
  • Reduced business interruptions
  • Reduced exposure to outdated credentials
  • Reduced time and cost to make applications accessible from the internet
  • Increased capacity of IT to develop core application features
  • Increased security and auditing
  • Increased flexibility by delegating specific administration tasks

Azure AD also includes a full suite of identity management capabilities, including multi-factor authentication, device registration, self-service password management, self-service group management, privileged account management, role-based access control, application usage monitoring, rich auditing, security monitoring and alerting, and a lot more. Multi-factor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user's identity for a login or other transaction. These capabilities can help secure cloud-based applications, streamline IT processes, cut costs, and help ensure that corporate compliance goals are met.

How reliable is Azure AD?

The multi-tenant, geo-distributed, high availability design of Azure AD means that you can rely on it for your most critical business needs. Running out of 28 data centers around the world with automated fail-over, you’ll have the comfort of knowing that Azure AD is highly reliable and that even if a data center goes down, copies of your directory data are live in at least two more regionally dispersed data centers and available for instant access.

Types of applications supported by Azure Active Directory:

Azure AD supports five primary application scenarios. In addition, through the admin role in the Azure portal, the following features can be managed and automated in Azure Active Directory:

  1. Users and groups: This is the most powerful automation capability.
  2. Enterprise applications: Control which users or groups can access a SaaS application.
  3. Audit logs: Provide information on all user activity.
  4. Single sign-on: You can configure Azure AD with over 2,000 applications for single sign-on.
  5. Password reset: Self-service password reset without calling the help desk. We can specify which users can reset their passwords.
  6. Azure AD Connect: Used to integrate Azure AD with your Windows Server AD or another directory on your network.
  7. Sign-ins: Show which users signed in successfully to which applications.

Conclusion

When an organization moves to the cloud, new scenarios are enabled and new solutions become available to solve the organization's problems. Identity and access management is one of the biggest concerns when integrating on-premises and cloud-based resources. Digital identities are at the core of all IT-related services because they control how people, devices, applications, and services access a variety of resources within and outside of the organization. The paid Azure AD editions build on this foundation with capabilities such as:

  1. Enterprise SLA of 99.9%
  2. Advanced security reports and alerts
  3. Company branding (Azure Active Directory provides this capability by allowing you to customize the appearance of sign-in and access panel pages with your company logo and custom color schemes)
  4. Group-based licensing and application access
  5. Self-service password reset and group management
  6. Multi-Factor authentication

Azure Multi-Factor Authentication (MFA) is Microsoft’s two-step verification solution. Azure MFA helps safeguard access to data and applications while meeting user demand for a simple sign-in process. It delivers strong authentication via a range of verification methods, including phone call, text message, or mobile app verification.

Why Businesses Cannot Afford Software Glitches

In October last year, when a denial-of-service (DoS) cyber attack on a major DNS provider made many internet platforms and services unavailable, people realized how hopelessly reliant they have become on the internet. However, today businesses have a lot more to worry about as they watch and read about software system glitches. Very recently, some Starbucks stores were closed due to a computer system outage caused by POS glitches, leaving customers in caffeine-withdrawal throes and probably leaving a strong dent in Starbucks' revenue figures.

In another example, improved technologies in the banking sector have failed to stem the rising tide of fraud in the US, according to a study by analytic software firm FICO. Instead of hiding glitches, more businesses are ready to talk about their issues. For example, explaining a 12% drop in same-store quarterly sales, Rent-A-Center’s CEO Robert Davis said, “Following the implementation of our new point-of-sale system, we experienced system performance issues and outages that resulted in a larger than expected negative impact on core sales. While we expect it to take several quarters to fully recover from the impact to the core portfolio, system performance has improved dramatically and we have started to see early indicators of collections improvement.”


Similar to the cases mentioned above, software failure can have very serious consequences for businesses which rely on their software systems to keep their businesses up and running. It can stop production, interrupt processes and ultimately lead to financial loss. While we must acknowledge the fact that end-to-end software systems are vital to organizations, their advantage comes with the risk factor. Risk management is therefore key to avoiding software glitches. Research indicates that the number one cause of software failure is human error in application development or programming. With the prevalence of human error, it’s unavoidable that some software will deploy with bugs and errors that slip through the cracks during development. Business leaders may not have control over the source code or the development process, but it is possible to take some steps to prevent software malfunction — and to identify potential problems before they can cause interruptions in  day-to-day business. Talk to us to know more.

Introducing Apache Ambari

Apache Ambari is a web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters. Ambari provides Restful APIs and a web-based management interface.

Ambari started as a sub-project of Hadoop but has now become a fully-fledged tool used by many Hadoop developers and administrators. The latest version available is Ambari 2.5.1

System requirements for Ambari:

Ambari can be installed only on UNIX based servers as of now. You would also need the following add-ons / packages.

  • xCode
  • JDK 7.0 (older versions of Ambari can be compiled with JDK 6.0; future versions of Ambari, 3.0 and above, will require JDK 8.0)
  • Python and Python setup tools
  • rpmbuild
  • gcc-c++ package
  • nodejs

Running the Ambari server:

Download the tarball and install it on the server.

Type command ambari-server setup. This will initialize the Ambari server.

To start, stop, restart, or check the status of the Ambari server, use the following command:

ambari-server start/stop/restart/status

To log in to the Ambari server, open the URL http://<<your-ambari-server>>:8080/. The default username and password are both admin.

Changing the default port 8080 on the Ambari server:

To change the port of the Ambari server, open the following file:

/etc/ambari-server/conf/ambari.properties

Search for the line that starts with client.api.port; it will look like this:

client.api.port = 8080

Change it to:

#client.api.port = 8080

client.api.port = 8090

(The old line is left commented out as a backup in case Ambari is not happy with the new port.)

Save the file and then restart the Ambari server.

sudo ambari-server stop

sudo ambari-server start

If you have a web server such as Apache running, you can also use ProxyPass and ProxyPassReverse to forward all requests on a particular URL to the Ambari port, as sketched below.
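A minimal sketch of such a reverse-proxy configuration, assuming Apache's mod_proxy and mod_proxy_http modules are enabled and that /ambari/ is the URL path you want to expose (both the path and the port are examples, not fixed values):

ProxyPass /ambari/ http://localhost:8080/

ProxyPassReverse /ambari/ http://localhost:8080/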

Deploying Ambari client on the clusters:

  • Download and install the Ambari agent rpm on the clusters.
  • Edit the file /etc/ambari-agent/conf/ambari-agent.ini and update the IP address / hostname of the Ambari server in it.
  • Then start the Ambari agent using the command ambari-agent start.

Ports used by Ambari:

Ambari uses the following default ports:

  • 8080 – for the Ambari web interface
  • 8440 – for connections from Ambari agents to the Ambari server
  • 8441 – for registration and heartbeats from Ambari agents to the Ambari server

When an Ambari host does not connect to the Ambari server, there are some basic checks you can perform:

  • If there is a firewall between the Ambari host and the server, check that ports 8440 and 8441 are allowed through the firewall.
  • Check iptables for rules pertaining to the Ambari ports.
  • Disable SELinux on both the server and the client and check whether the host can connect to the server.
  • Check the log at /var/log/ambari-agent/ambari-agent.log for error messages.
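A few commands that can help with the checks above, run as root on the agent host (the port numbers are Ambari's defaults, and setenforce only disables SELinux until the next reboot):

iptables -L -n | grep -E '8440|8441'

sestatus

setenforce 0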

Further information on Ambari is available on: https://ambari.apache.org/

Digital Transformation and its Impact on the Retail Industry

In the last few years there has been a radical change in the use of technology, and to improve performance, retail enterprises have begun re-examining their existing business models to strategize and add digital-inclusive innovations. While several digital innovations are changing the pace of the retail industry, most of them are centered around 'engaging customers' and 'upgrading operations'.

Retailers are also investing in the right infrastructure and technologies, such as chatbots, cloud artificial intelligence, and recommendation engines, that enhance the sales capabilities of in-store associates and delight customers. Personalization, for example, plays a pivotal role in enhancing the customer experience, but it does not stop there: digitalization offers added perks such as data intelligence and streamlined processes, which in turn help reduce costs and improve efficiency across the transactional cycle. By adopting digitalization, retailers will see cost savings and operational efficiency across all areas of their business, from sourcing to pricing strategies, inventory planning, employee training, and finally customer engagement.

Given below is an outline of the challenges and opportunities of digital transformation across the retail consumer journey, contrasting the digitally challenged state with the digitally transformed one:

Engagement

Digitally challenged:

  • One-way communication
  • Shoppers know as much as salespeople
  • Customers are driven by loyalty and brand awareness
  • TV advertisements and banners used as the medium to promote products

Digitally transformed:

  • Mobile devices with intelligent mobile apps that deliver a personalized shopping experience
  • Transparent dynamic pricing
  • Continuous availability – cloud
  • Big data and analytics

Product Delivery

Digitally challenged:

  • Ambiguity with regard to customer preference

Digitally transformed:

  • Product journey traceability

Purchase

Digitally challenged:

  • Limited payment options
  • Return and exchange policy dependent on trust and loyalty

Digitally transformed:

  • Alternate payment mechanisms
  • Better exchange and return policies

Post Purchase

Digitally challenged:

  • Servicing sporadic and undependable
  • Low user influence

Digitally transformed:

  • Self-service
  • 360-degree customer engagement

Digital transformation should be strategically designed to help in communication, to evangelize, and to bring innovation into the business. There should be a change management strategy in place. Finally, retailers must build partnerships so they can leverage the strength of their partners to expedite transformation.

Technology Trends Shaping the Mortgage Industry in 2017

The mortgage industry threw its doors open to the latest technological and digital innovations a few years ago. This has resulted in a sea-change in the way in which the business as a whole has responded.

Technology is ubiquitous. With every passing year, we are witnessing many impressive innovations in mortgage services technology. Each of these transformations serves to make customers' choices that much easier.

The mortgage industry today is commanded by lead aggregators who are active online. Their job is to gather bits of borrower information, before passing individual borrowers to select lenders.

A few major trends that are shaping the technological shifts in the industry this year are:

Mortgage sans Paperwork

The moment one thinks of the word “Mortgage” the image that instantly springs to mind is that of paperwork! Yes, be it disclosures or closing, endless rounds of information and signatures have to be entered on reams of paper. But with the latest technological strides, banks are better positioned to understand the advantages of going paperless.

While tons of paper will be available for other uses, customer relationship management teams can build strong bonds with their clientele. Customers can examine their closing and disclosure package in the online format; and subsequently imprint their e-signature. In all of this, establishing robust cooperation online between internal and external stakeholders in the mortgage loan process will be the next big challenge to overcome.


Thrust is on Mobility

Enterprises that do not design their operations for mobility cannot survive in today’s competitive environment. Mobiles and other handheld devices are in many ways the zenith of our current technological achievements. The reason is simple: globally, more and more people use mobiles to gather information about products and services, in comparison to traditional desktops.

This is precisely why one can find dozens of mortgage mobile apps now. The function of each of these apps is varied. While some assist users in tracking the latest mortgage rates, other apps take customers through the entire mortgage origination process. These are indeed welcome additions and scores of customers have derived the benefits by utilizing such apps.

With all the evidence on the ground, clearly, mobile is the way forward for the mortgage industry. It will streamline processes and enable customers to gain a comprehensive view of their requirements.

The onus is on lenders to stay focused in adopting the myriad technologies that have come to the fore today. Customers will be more than happy to manage their finance and accounting needs effectively with the aid of technology. There is no escaping the ultimate digitalization of the mortgage process.

Trigent Software has been revolutionary in helping many mortgage companies in the US digitize their assets for greater ROI.

Introduction to the Bot Framework

The Bot Framework is a platform for building, connecting, testing, and deploying powerful and intelligent bots. The types of bots that you can build with the Bot Framework are also commonly called chat-bots.

Inside Microsoft's Bot Framework

Microsoft's Bot Framework is designed to help you build and deploy chat-based bots across a range of services, including non-Microsoft platforms and open web and SMS gateways, with minimal coding and with tools for delivering cross-platform conversations from one bot implementation. Like much of Microsoft's recent development tooling, the Bot Framework is intended to be cross-platform and cloud-based, building on Azure services and on the company's machine-learning-powered Cognitive Services APIs.

At the heart of the Bot Framework are two SDKs, one for use with .Net and one building on the open source cross-platform JavaScript-based Node.js. There’s also a set of RESTful APIs for building your own code in your choice of languages. Once built and tested, bots can be registered in any of the supported channels (with their own user names and passwords), before being listed in Microsoft’s Bot Directory.

The Microsoft Bot Framework team is working on some new features. The roadmap for the Bot Framework includes:

  • Integration across the Azure ecosystem
  • Platform as a service for bots
  • Azure Bot Service V2 announced

What is a bot?

Think of a bot as an app that users interact with in a conversational way. Bots can communicate conversationally with text, cards, or speech. A bot may be as simple as basic pattern matching with a response, or it may be a sophisticated weaving of artificial intelligence techniques with complex conversational state tracking and integration to existing business services.

The Bot Framework enables you to build bots that support different types of interactions with users. You can design conversations in your bot to be freeform. Your bot can also have more guided interactions where it provides the user choices or actions. The conversation can use simple text strings or more complex rich cards that contain text, images, and action buttons. And you can add natural language interactions, which let your users interact with your bots in a natural and expressive way.

Bots use text, speech, or cards for their conversations and can also implement artificial intelligence to engage in complex and deep conversations with humans.

Microsoft Bot Framework is a set of APIs for building intelligent bots using .NET/C#, Node.js, and REST.

Let’s look at an example of a bot that schedules salon appointments. The bot understands the user’s intent, presents appointment options using action buttons, displays the user’s selection when they tap an appointment, and then sends a thumbnail card that contains the appointment’s specifics.

Bots are rapidly becoming an integral part of digital experiences. They are becoming as essential as a website or a mobile experience for users to interact with a service or application.

Why use the Bot Framework?

Developers writing bots all face the same problems: bots require basic I/O, they must have language and dialog skills, and they must connect to users, preferably in any conversation experience and language the user chooses. The Bot Framework provides powerful tools and features to help solve these problems.

Channels

The Bot Framework supports several popular channels for connecting your bots and people. Users can start conversations with your bot on any channel that you’ve configured your bot to work with, including email, Facebook, Skype, Slack, and SMS.

Build smart bots

You can take advantage of Microsoft Cognitive Services to add smart features like natural language understanding, image recognition, speech, and more.

References:

https://docs.microsoft.com/en-us/bot-framework/overview-introduction-bot-framework

GitHub Repository Hosting Service

What is GitHub?

GitHub is a code hosting platform for version control and collaboration. It lets you and others work together on projects from anywhere.

GitHub essentials include repositories, branches, commits, and pull requests.

Step 1: Install Git and create a GitHub account

Step 2: Create a Local git repository

$ git clone https://github.com/adhiyamaans/Reposit.git

Cloning into ‘Reposit’…

remote: Counting objects: 3, done.

remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0

Unpacking objects: 100% (3/3), done.

git clone copies the repository into a newly created directory and creates remote-tracking branches for each branch in the cloned repository.

cd ~/MyProject

'cd' stands for Change Directory and is a navigational command. We just made a directory, and now we want to switch over to that directory and go inside it. Once we type this command, we are inside MyProject.

mkdir ~/MyProject:

$ mkdir AdhiRepo

mkdir is short for Make Directory. It is not actually a Git command, but a general navigational command from the time before visual computer interfaces.

Step 3: Create a Branch

$ git branch

* master

Working with multiple collaborators and want to make changes on your own?

This command will let you build a new branch, or timeline of commits, of changes and file additions that are completely your own. Your title goes after the command.

If you wanted a new branch called "test_branch," you'd type git branch test_branch.

$ git branch test_branch

$ git branch

* master

test_branch

GIT CHECKOUT:

$ git checkout test_branch

Switched to branch ‘test_branch’

This literally allows you to "check out" a branch that you are not currently on. It is a navigational command that lets you move to the branch you want to work on.

You can use this command as git checkout master to switch to the master branch, or git checkout test_branch to switch to another branch.

$ git branch

master

* test_branch

Step 4: Check Modified and Newly Added Files

ADD Files:

touch Adhi.txt

touch creates an empty file. Whatever you write after it is the name of the file created inside the folder.

GIT STATUS:

$ git status

On branch test_branch

Changes to be committed:

(use “git reset HEAD <file>…” to unstage)

new file:   AdhiRepo/Adhi.txt

Check the status of your repository.

See which files are inside it, what changes still need to be committed, and which branch of the repository you are currently working on.

GIT ADD:

$ git add *

This does not permanently record new files in your repository. Instead, it stages them, bringing new and changed files to Git's attention.

After you add files, they are included in Git's 'snapshots' (commits) of the repository.

$ git status

On branch test_branch

Changes to be committed:

(use “git reset HEAD <file>…” to unstage)

new file:   AdhiRepo/Adhi.txt

Step 5. Merge and commit changes:

GIT MERGE:

When you’re done working on a branch, you can merge your changes back to the master branch, which is visible to all collaborators.

Running git merge test_branch from the master branch would take all the changes you made on the "test_branch" branch and add them to master.

GIT COMMIT:

$ git commit -a -m “new files added”

[test_branch 3390bc7] new files added

1 file changed, 0 insertions(+), 0 deletions(-)

create mode 100644 AdhiRepo/prabha.txt.

This is Git's most important command. After you make any set of changes, you run it to take a "snapshot" of the repository. Usually it goes git commit -m "Message here".

The -m indicates that the following section of the command should be read as a message.

Step 6. Merge your Pull Request

The Pull Request API allows you to list, view, edit, create, and even merge pull requests.

On GitHub, use the base branch drop-down menu to select the branch you'd like to merge your changes into, then use the compare branch drop-down menu to choose the topic branch you made your changes in. Type a title and description for your pull request, and click Create pull request.
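The same pull request can also be opened programmatically through the Pull Request API mentioned above. A rough sketch with curl (the repository matches the clone used earlier, and <YOUR_TOKEN> stands for a personal access token you would generate yourself):

curl -X POST https://api.github.com/repos/adhiyamaans/Reposit/pulls -H "Authorization: token <YOUR_TOKEN>" -d '{"title":"new files added","head":"test_branch","base":"master"}'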

Step 7: Push a branch to GitHub

$ git push origin test_branch

Counting objects: 4, done.

Delta compression using up to 4 threads.

Compressing objects: 100% (2/2), done.

Writing objects: 100% (4/4), 324 bytes | 0 bytes/s, done.

Total 4 (delta 0), reused 0 (delta 0)

To https://github.com/adhiyamaans/Reposit.git

* [new branch]      test_branch -> test_branch.

If you're working on your local computer and want your commits to be visible on GitHub as well, you 'push' the changes up to GitHub with this command.

Reference

https://guides.github.com/activities/hello-world/

3 Steps to Create your Own Font Icon from Photoshop Icon

In the history of the Web, designers have tried a variety of methods to use icons and images on websites. These include, to name a few: importing bitmap image files (usually transparent PNGs), using vector SVG files, or even using web fonts containing symbols instead of the typical characters of the alphabet. Using non-standard fonts has become the standard in web development in recent years, leading to the rise of icon fonts.

Advantages of Font Icons

Displaying icons through fonts is no different from using alternative fonts for headers, titles, and paragraphs. Implementing icons as a web font has many advantages:

  • Icons are scalable without loss in definition
  • There is no need to worry about Retina displays
  • It is easy to apply CSS properties without editing the icon itself (color, gradient, shadows, etc.)
  • You can use the same icon in different sizes and colors to save time and space
  • Better page speed performance (i.e. fewer http requests)
  • Icon fonts load faster than background images or inline SVGs

Now, let’s create a font icon…

Creating Icons

In order to create a font icon, the icon has to be in the SVG (Scalable Vector Graphics) format. You can download SVG icons from various sites like iconfinder.com, but here I will show you how to create an icon in Photoshop, convert it to SVG format, and, as a final step, convert the SVG to a font.

Step 1.

First, we need to create a nice icon in Photoshop and then save it as a PNG.

Step 2.

After saving the image, you have to convert the PNG image to an SVG image. There are a lot of online image converter tools for this, among them http://picsvg.com/.

Upload your image:

Choose proper filters and download the converted SVG icon.

Step 3

The conversion to SVG is done. Now we have to convert that SVG to a font by using http://fontello.com/ or https://icomoon.io/. To demonstrate this, I am using IcoMoon here.

Import your saved SVG icon into IcoMoon by clicking the "Import Icons" button.

You can arrange icons, enter metadata, change the grid size, and so on, using the hamburger menu on the right side:

If you want to edit, move, or delete the icon, you can do it by choosing the button which is highlighted in the following snapshot:

After all the editing is completed, click 'Generate Font':

Rename all the icons and define a Unicode character for each (optional):

Download the generated files:

That's it! The icon font package you have just generated contains various font formats (for cross-browser compatibility) and demo files, including HTML and CSS.

3 Compelling Reasons Why CIOs Resist Digital Transformation

A decade ago, businesses focused on data mining, search technologies, and virtual collaboration.  Many of them did not have a mobile strategy and social media was hardly leveraged to advance business goals.

Fast forward to 2017 and most technologists talk about digital transformation, artificial intelligence, the Internet of Things, and machine learning. However, if digital transformation is gaining mass popularity, why is there some resistance to its overall acceptance as the new normal? A March 2017 study by Point Source titled 'Executing Digital Transformation' found that although companies planned to spend $1.2 trillion on digital transformation in 2017, less than half (44 percent) of IT decision makers are extremely confident in their organization's ability to achieve the vision. According to the report, many of the roadblocks relate to organizational structure and culture. The fear of the unknown is also rooted in the ambiguity behind digital transformation.

Here are the top 3 reasons why CIOs resist digital transformation:

The ‘mostly unknown’ factor

Technology is used to create and install solutions for businesses. In the case of digital transformation, businesses have to start by examining their existing processes, looking at where improvements can be made, and identifying weak links and dependencies in the system. However, with regard to existing processes, if everything is in place and working, why do we have to invest time, energy, and money in cross-examination? What can the value-add of this transformation be? Without a clear idea about ROI, the need to change is met with resistance. IDC's 2017 predictions for digital transformation and for CIOs confirm that only 40% of CIOs will lead the digital transformation of the enterprise by 2018.

“Working fine” processes and systems

Those companies that have succeeded in creating a digital value proposition have a clear view of how they will exceed customers' digital requirements. They also have realistic time-frames and budgets, and this helps them visualize the outcome. Digital transformation can help with the gaps that arise from unmet customer expectations related to technology and from processes that delay the achievement of goals. All this requires a futuristic vision instead of a preference for going along with what is already working. Resistance to change is the biggest hurdle in the way of accepting innovation, and this resistance is stronger if existing processes and systems are, at an overall level, achieving business goals.

'Where to begin with' data

All organizations have tons of data. The challenge for many is the unstructured nature of their data. Some of them have siloed systems containing bits and pieces of data. In the absence of a centralized information warehouse, there is no clear idea of what they want to accomplish with this data.

Summary

The challenges are more in the nature of spring cleaning, modernizing, and following a structured, process-oriented way to run the business. Whether we call this digital transformation or anything else, we need to be able to look beyond the challenges at the advantages, knowing that our competitors have probably already begun their digital transformation journey. As February 2017 research from McKinsey succinctly shows, companies that get digital transformation right win market share, and those that don't actually see a negative ROI on their investments.

Digital Darwinism

  • 75% of all Fortune 500 companies will not be there in 10 years.
  • 80 billion objects will be connected starting 2020.
  • The amount of digital data doubles every 2 years. 
  • Your company social network will allow everyone to process 20 times more information than emails.

Just as the industrial revolution of the 18th and 19th centuries changed our way of life with machines, digital transformation is set to disrupt today’s organizations beyond recognition. Previously successful models will be rendered obsolete; indeed, businesses that have grown up with rigid, structured legacy technologies built around predictable, repeatable steps may find themselves under threat from “digital Darwinism”. Nevertheless, for those organizations that can visualize the possibilities for data-driven innovation and creativity, there are market-defining opportunities up for grabs.

In the realm of buzzwords, “digital transformation” earns high marks for simultaneously conveying everything and nearly nothing at all. Beyond the buzz, what is digital transformation, also known as DX?

What’s driving digital transformation?

There are three major forces providing the incentive for digital business, the first of which is changing user demand. Major brands have effectively “trained” customers to use complex products to live simpler, more convenient and connected lives. Employees, too, are using digital innovations to determine where, when, how and why they work.

The competitive landscape is changing, too. Size, which was once a distinct advantage, is now becoming something of a liability, as technology levels the playing field for businesses of any size to disrupt traditional industries (think Über, Netflix, and Airbnb). Bricks and mortar stores are closing and high streets face desertification as omnichannel retailing becomes commonplace. Businesses that cannot gain a single view of their customer are vulnerable to aggressive new “born-digital” entrants who can muscle in quickly with a customer-centric approach based on service and simplicity.

And of course, technology continues to evolve at a frantic pace. Having overcome its initial hype, the cloud is now proving its worth as a growth engine for business, supporting innovation without dramatic rip-and-replace. Mobility is fast becoming an enabler of user interaction with things, data, people and places, not just a bunch of apps. The falling cost of sensors and low-power networking technologies is bringing the Internet of Things within reach of a burgeoning number of organizations, which are now challenged to manage the explosion of data generated by connected devices and convert it into real-time actionable insights. Open-source software, once seen as relatively niche, is now a major component of the next generation of high-performance computing clusters needed for big data analysis and applications, as well as providing the foundation for enterprise-grade mobile management and development.

The barriers to becoming digital-first

So what’s stopping businesses from placing their bets on a digital future? A combined legacy of organizational silos, aging application estates and outdated technology management processes are common practical constraints. However, laggards are also hampered by a legacy mindset: cultural resistance to change, a lack of imagination and appetite for risk-taking, and a shortage of digital leaders to spearhead change from the top. As Professor Jerry Wind of The Wharton School, University of Pennsylvania shrewdly observes, “a successful business is the hardest organization to change”. However, the time is fast approaching when ‘business as usual’ will almost certainly no longer be good enough. Digital transformation is how good companies become great!

Digital is a strategy, not a tactic

Rather than tinker around the edges by deploying individual technologies with a narrow, operational focus, organizations must develop the digital fluency to articulate the strategic value of technology to the future of the business – reflecting the demands of customers, employees, and supply chain partners – and work backward. Digital transformation should, in practice, be nothing more than the continuous business improvement that any established organization should be doing as a matter of course to stay ahead of the game. The rewards go beyond mere survival: those that succeed typically exhibit higher revenue growth and profitability than their industry peers.

Digital transformation is a strategic endeavor with a long horizon. Reinforce the long-term by celebrating short-term wins.

In summary, becoming a digital business requires:

  • Shifting from a legacy mindset to "digital by default" ways of working and thinking
  • An end to the generations-old silo mentality, with data integration at the heart of any initiative
  • A strategy and program that touches every function of the organization
  • Cultivating innovation through experimentation and learning
  • Considered use of technology to improve the experience of employees, customers, suppliers, partners and stakeholders, and new business models that exploit digitized assets

We at Trigent help software vendors use technology to radically improve performance and reach: from enhanced social experiences, transformed business operations, and energized collaborative communities on the cloud and mobile, to data-driven real-time process optimization and cross-enterprise analytics.

We would love to hear from you and help you make the most of your digital investment.

This blog was originally published on LinkedIn: https://www.linkedin.com/pulse/digital-darwinism-abishek-bhat?published=t