5 Essential Technologies to get your Distributed Enterprise Future Ready

The global pandemic of 2020 permanently changed how we work. What is easily missed is how much the workplace itself has changed too. Whether and how much work will return to the office (versus work from home) is a matter for a different blog. Here, we want to talk about how the workplace has changed forever, especially for the distributed enterprise.

To be fair, the distributed enterprise is not a new concept at all. From an IT and networking perspective, distributed enterprises have long been moving away from centralized IT infrastructure toward connected islands, in pursuit of convenience, better networking and efficiency, faster access, and localized control.

In a borderless economy, where businesses fiercely compete for resources and market share, it is fair to expect enterprises to re-architect systems and networks to maximize customer benefits and increase employee flexibility.

How will you make the Distributed Enterprise future-ready? 

Here are our top 5 tech picks: 

Computing is moving to an Edge near you!

Powered by a significant increase in computing power and ubiquitous bandwidth, it is suddenly possible to move computing to where it needs to be, rather than where it has always been. Be it in a user's hands, at a retail PoS (point of sale) counter, on a manufacturing assembly line, or within a CSP (cellular service provider) network, the edge now sits closest to the point of action. This brings advantages such as faster response times, localized approvals that save a round trip to the central server, increased privacy and security, and reduced cloud costs.

Daihen, a Japanese manufacturer of industrial electronics equipment, realized that their Osaka plant could not handle data from dozens of sensors. The data was being processed remotely by a cloud server, and response time was slow. The solution came in the form of an Intelligent Edge solution from FogHorn that runs complex machine learning models on highly constrained devices. The results were almost immediate: improved speed, higher accuracy, and a drop in defect count, all of which encouraged them to increase investment in the Edge in the following year.

The Cloud is going hybrid

While the transition from on-premises servers to the cloud has been a work in progress, the cloud itself has metamorphosed entirely. The distributed enterprise can now have a mix of on-premises, public, and private cloud, as dictated by business needs.

The enterprise must be careful not to get locked into a hyperscaler vendor's vision of the future and should keep options open to realize its own. Since many of these paths are still evolving, the enterprise needs to engage deeply and tread carefully while committing to future road maps.

The CSP (cellular service provider) is also evolving and is now a major cloud provider with advanced capabilities in Service Edge and SD-WAN (software-defined Wide Area Network).  CSPs have blurred the line between enterprise and carrier cloud with their offerings.  With upcoming 5G rollouts, massive IoT networks, mmWave, and network slicing requirements, their cloud and edge capabilities will be of an entirely different scale.  Enterprises will need to understand how to best harness offerings from each vendor without compromising their requirements.

Enterprise software has also disaggregated: monolithic applications are being split into microservices (delivered via containers), where code, debuggers, utilities, and algorithms live inside the container and control is routed back to the parent code block as required. Containers make decoupling applications convenient by abstracting them from the runtime environment, so they can be deployed agnostic of the target environment. These smaller services can be highly efficient and lend themselves to high scalability, but not without loading DevOps teams with the additional pressure of housekeeping.

When Ducati, the Italian motorcycle manufacturer, undertook a data center modernization project, they expected gains, but the outcome went beyond what was planned. The hybrid cloud dramatically changed their perspective of what is possible: the data awareness, speed, and minimal footprint across departments brought a level of productivity that had not been planned for.

Hyper Automation

Hyperautomation is currently an emerging trend but is likely to become mainstream, as many of the required elements are already falling into place. It refers to the coming together of systems, processes, software, and networking to automate most known processes with zero-touch human intervention, resulting in ‘robotic process automation’ sequences.

This advanced state of automation will require stable AI and ML modules and the integration of IT and OT (operational technology in the IIoT world), where machines make decisions and keep routine systems running. Real-time monitoring and analytics are logged for a supervisor to check and intervene if necessary.

Understanding documents through OCR (optical character recognition), emails using NLP (natural language processing), and enhancing automation using AI / ML data flows are increasingly common.  Banking and healthcare have seen successful deployments of OCR and NLP.

Data will soon be everywhere, but how about Security?

Security will become such an essential parameter for business success that the CISO (Chief Information Security Officer) might well become, if not already, the most important executive in the economy. With billions of IoT devices going online, from airplane tires to connected cars and the heating systems at offices and production lines, the opportunities for a security breach have multiplied. And every incident dilutes human trust, delaying further progress and slowing down growth.

With geopolitical scenarios worsening and cyber warfare now a reality, there is no telling when these forces will start impacting enterprise system security. This is an ongoing new reality, almost as real as the pandemic.

Quantum computing, the emerging innovation in high-speed computing, will be a threat too. It is believed that data from current-day breaches is being harvested and stored for analysis and targeting once quantum computing power becomes available, because today's computers could take years to decipher it. Some cyber experts believe the advanced planning and methods of cybercriminals are years ahead of the capabilities of corporate IT security teams, and that is a cause for worry.

AIOps

When Gartner coined the term AIOps, it meant Artificial Intelligence for IT Operations (originally, Algorithmic IT Operations). They referred to a “method of combining big data and machine learning to automate IT operations and processes, including event correlation, anomaly detection, and causality determination.”

AIOps is a set of methods and practices that makes rapid processing of vast volumes of operational data possible; the data then feeds an ML engine that predicts issues. AIOps will very much be a requirement for DevOps teams as they try to keep up with data and problems across hybrid environments, supporting agile processes on ever-changing platforms and across networked silos.

US infrastructure provider Ensono supports the mission-critical processes of many top enterprises. As its volumes started growing, it became important for Ensono to invest in AIOps to ensure its ability to monitor client hardware and software would not be compromised. Investing in TrueSight AIOps helped Ensono decrease its trouble tickets from over 10,000 to a few hundred per month. That is the power of AIOps.

In conclusion, remember that no one has a crystal ball. But going by current technology trends in the distributed enterprise, some things are clear: growth, chaos, and churn lie ahead. It helps to have a trusted team of consulting experts on your side to learn from, seek advice from, and leverage for their experience.

At Trigent, our domain experts have delivered solutions, charted digitization route maps, and provided distributed enterprise workflow design and architecture for future growth to global leaders in every sector. 

We are happy to share our learnings.  Do give us a call.  Drop us a line.  We are listening.

Control Tower in Logistics – Optimizing operational cost with end-to-end supply chain visibility

The competitive business landscape and ever-changing needs of customers are reshaping traditional supply chains today. With globalization and organizations looking to extend their geographical scope for lower-cost sourcing options and emerging markets, the complexity of supply chains has increased. The increase in outsourcing makes effective collaboration with partners imperative for efficient supply chain operations. 

In addition to these complexities, organizations have continuous pressure to improve their profit margins and increase revenue. Supply chain executives are often under enormous pressure to cater to the needs of their customers while optimizing their supply chain operations cost-efficiently. These critical business challenges drive the need to create solid end-to-end capabilities for supply chain visibility.

Supply chain visibility is the most vital enabler for managing businesses both within the organizational boundaries and across the boundaries. Visibility across processes, right from the receipt of an order to its delivery, provides the flexibility, speed, and reliability to gain a competitive advantage in the form of well-controlled supply chain functions.

Supply chain control towers embody the leading principles of supply chain visibility to address this need. A control tower integrates data from multiple sources to provide end-to-end visibility across the supply chain, improve resiliency, and respond faster to unplanned events. It helps organizations understand, prioritize, and resolve critical issues in real time.

Current state and phases in supply chain visibility

Supply chain visibility, in short, is the process by which organizations capture data and interconnect it to retrieve vital supply chain execution information. It provides a comprehensive view for tracking information, material, and cost by monitoring the main dimensions of a global supply chain, such as inventory positions, shipment status, and real-time order movements, so that decisions can be made based on facts.

Many logistics organizations have implemented or are in the process of adopting solutions for supply chain visibility. However, they reflect different phases of maturity. The maturity level is identified by the associated processes, skills, and tools involved.

Leading practices for supply chain visibility

A successful solution for supply chain visibility is deployed around five main principles for a holistic view of the inbound and outbound operations.

Understanding Control tower

Control Towers are cross-divisional organizations with integrated “information hubs” to provide supply chain visibility. These hubs gather and distribute information, allowing people trained to handle these capabilities to identify and act on risks/opportunities more quickly.

A control tower provides end-to-end visibility to all the participants involved in supply chain logistics. These may include manufacturers, distribution centers/warehouses, logistics providers, shippers, carriers, 3PLs, 4PLs, and end customers at the stores.


A control tower helps capture and correlate relevant supply chain data from all entities involved in the operation, be it freight details, inventory positions, or transportation parameters. It supports real-time update capabilities with the help of the latest technologies like IoT and predictive analytics to monitor possible supply chain disruptions in goods movement or material shortages.  In short, control towers help implement a centralized decision-making system that focuses solely on fulfilling the end user’s needs. 

The importance of control tower in logistics

A control tower provides round-the-clock visibility, enabling real-time feedback to customers through video, voice, or text. In short, a control tower in logistics operations offers greater reassurance and efficiency, irrespective of time zones, office hours, or holidays.

A control tower implementation is managed by a team of supply chain experts who monitor the movement of goods throughout the supply chain. The freight movement data is then collected and analyzed to ensure that the essential service requirements are met. This data can be used to prevent potential disruptions or take corrective action.

A team of highly experienced professionals then makes decisions based on the real-time information obtained to ensure that all service commitments are met and that customers remain happy and satisfied. The customer can follow up on special requirements concerning their cargo, such as temperature control, time constraints, or relevant security/customs clearances.

Achieve end-to-end supply chain visibility and optimize operational costs with control tower solutions from Trigent. Contact us now!

Key benefits of control tower in logistics

Control towers are pivotal for effective supply chain management. They help manage unpredictable, potentially disruptive events in supply chain operations and enable better planning, decision-making, proactive event management, improvement in the performance of supply chain partners, and sophisticated supply chain analytics.

Some of the benefits of a control tower in streamlining dynamic management of the supply chain are as follows:

  • Enhancing logistics operations
    • A control tower platform can be configured to help manufacturers gain better insights into retaining supplies and raw materials.
    • Help carriers enhance their ability to fulfill orders quickly for customers. 
    • Reduce Inventory 
    • Speed up detection and reaction times
  • Achieve end-to-end visibility
    • Control tower systems provide details on freight movement to multiple stakeholders involved in the logistics world. 
    • Correlate data across siloed systems to provide actionable insights and manage exceptions
  • Improve service levels such as total cycle time and on-time delivery
    • Better insights and accurate information help companies improve their delivery rates. An optimized control tower helps them achieve this critical goal. 
  • Reducing costs
    • Every business looks to optimize its profits, and in most cases, this is achieved by reducing costs of operation and goods.

Implementation approach

The need for a quick response is more significant than ever before. Organizations need actionable recommendations derived from intelligent strategic inputs to respond quickly and effectively to mitigate risks and unforeseen circumstances. Control towers plan, monitor, measure, and control logistics in real-time to deliver compelling, essential strategic capabilities and cost efficiencies.

Here are some pointers to ensure the successful implementation of control towers in your organization:

  • Ensure standardization

To fully realize the benefits of a control tower, it is necessary to harmonize all the processes that it is mandated to control. Hence the first step of implementation should be to define integration standards among all the actors involved in the operation, i.e., manufacturers, assembly lines, warehouses, logistics, delivery stores, and customers.

  • Central oversight, local execution

Maintaining a balance between central oversight and local coordination for execution is crucial in bringing in the required business knowledge for building robust solutions for day-to-day operations. It enables the field to use the insights and act based on ground realities.

  • Multifunctional involvement

Establish a successful control tower implementation by including representatives from all relevant functions. They should also be given a clear idea of the individual benefits and of how the system works across functions.

  • Pragmatism

Ensure a “feasibility-first” approach over the theoretical best practices to help deliver a control tower solution that fulfills all the preset objectives within a reasonable time frame. 

  • Knowing when to stop

It is always essential to keep an eye on the returns and avoid any adoption efforts that provide a low return. 

Break free from visibility challenges with control tower solutions from Trigent 

The recent pandemic and disruptions induced by the current digital transformation wave have put logistics organizations under immense pressure to perform. The highly experienced team at Trigent provides comprehensive and customized solutions to ensure end-to-end visibility while streamlining your supply chain operations.

End-to-end visibility and significant cost reduction for our customers have made Control Tower solutions a critical service in our offerings. Book a consultation with us to know more.

SDK vs API – All you need to know to make an informed decision

Building software in the current world requires high-speed development to meet ever-changing business needs. Products and services are delivered incrementally in Agile mode. 

To meet speed and quality requirements a development team will need to identify the following:

  1. Development tools and frameworks that ensure standardization.
  2. Ready-made solutions that can be integrated directly or customized to serve their needs.

Modern development teams need to make a choice, SDK vs. API, to meet these challenges. Instead of wasting time and resources on researching, developing, and testing, teams can use a plethora of APIs and SDKs with extensive community support.

SDK vs API Examples

An SDK is a full-fledged installable library, while an API is a service exposed by a third party or another service that your application communicates with. Both take away the development effort for a module or feature that you might not have ready. Depending on the scenario, a developer or team will need either an SDK or just an API; making an informed decision on when to use one over the other is crucial to successful software development.

To understand this, let us take an example in which we want to build a native health tracking app. The app will have the following features:

  1. Social authentication through Google or Facebook accounts.
  2. Location tracking to figure out distance covered from point A to B as per the user’s activity. It could be cycling or walking.
  3. BMI calculator.
  4. Diet options.

The list can continue, but we do not want to digress from our main intent of understanding SDKs and APIs.

The first thing to consider while building a native mobile app is that there needs to be an Android and an iOS version to serve the majority of users. Whether one should go in for a native or a hybrid app or build the 2 variants using a Cross-Platform approach requires a separate discussion in itself. The starting point for it could be the skills available in-house.

Android app and social authentication implementation

For our scope, let's just consider the Android app. The official language for building Android apps is Java. Kotlin has also become an official language for Android development and is heavily promoted by Google. C and C++ run natively on the phone. Then there is Lua, which is not supported natively and requires an additional SDK. You can even use C#, depending on your team's core competency; this will require either Xamarin with Visual Studio or Unity.

We are going to choose Java here.

The best way for a Java developer to get started is to install Android Studio, an IDE that automatically downloads the Android SDK and emulator. The Android SDK is a complete set of development, debugging, testing, and build tools, APIs, and documentation. Using the SDK, you can generate APKs that can be deployed to different Android-supported devices. The developer just focuses on the language of their choice, based on what the SDK supports, and uses standard code and frameworks to get the app up and running.

The next feature to be built is single sign-on into the app using a social account. Both Google and Facebook provide client- and server-side SDKs that hide the complexity of the actual implementation and enable the integration through popular languages. The developer simply rides on the authentication provided by Facebook or Google. Additionally, the user grants the app permission to access information or perform operations on either platform based on our needs. In our case, we will use the Android SDKs provided by Facebook and Google.

To sum up, the Android SDK enables the following:

  1. Enables development of the Android app using a language of our choice, Java.
  2. Provides APIs to access location, UI, camera and other native features. 
  3. Enables localization of the app for different languages through the SDK’s framework if required.
  4. The Java code is compiled into an Android application package (APK) along with the required libraries from the SDK.

Hence, for our health tracking app, we can use the Android SDK, together with the Facebook and Google SDKs, for social authentication.

Unsure of which SDK Framework to use? Send in your requirement and we will be happy to assist you!

SDK vs API – Location Tracking Functionality

One of the key features of the app we are trying to build here is to figure out the distance walked or cycled by the user. We could take the route of a custom implementation, spending days or weeks coming up with an algorithm, implementing it, and finally testing it. A better approach would be to use an out-of-the-box solution such as Google Maps and save on SDLC time and effort. Google provides both SDKs and APIs for Maps and distance calculation. In our case, we do not really need the entire Google Maps SDK; we can use just the relevant APIs, such as the Distance Matrix API, which gives the travel distance and time between one or more origins and destinations.

Let's consider the JavaScript implementation of the Distance Matrix API. The endpoint provided looks like this:

https://maps.googleapis.com/maps/api/distancematrix/outputFormat?parameters

Based on the above URL, we can glean that an API comprises the following:

  1. Protocol – SOAP, REST, or GraphQL. In our case it is REST. SOAP is the oldest mode of interaction, with heavy schemas and payloads. REST is an architectural style relying on HTTP's GET, POST, PUT, and DELETE operations. GraphQL is a query language promoted by Facebook that solves REST's under-fetching and over-fetching problems.
  2. URL – as provided by the service provider.
  3. Request parameters – some are mandatory, some optional. Any service exposing APIs will document the parameters and their structure. In our case, for instance, destinations and origins are required parameters, while mode (bicycling or walking) is optional.
  4. API key – a unique key that identifies our application to the service for authentication and authorization.
  5. Response – the output is either JSON or XML.
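To make this concrete, here is a minimal TypeScript sketch of calling the Distance Matrix API over REST. The API key and the origin/destination values are placeholders, not values from the original post.

```typescript
// Minimal sketch: calling the Google Distance Matrix API over REST.
// "YOUR_API_KEY" and the origin/destination values are placeholders.
const params = new URLSearchParams({
  origins: "Bangalore",
  destinations: "Mysore",
  mode: "bicycling",        // optional parameter
  key: "YOUR_API_KEY",      // authenticates and authorizes our app
});

const url = `https://maps.googleapis.com/maps/api/distancematrix/json?${params}`;

async function getDistance(): Promise<void> {
  const response = await fetch(url);      // plain HTTP GET request
  const data = await response.json();     // response arrives as JSON (XML is also available)
  // Each element holds the distance and duration for one origin/destination pair
  console.log(data.rows[0].elements[0].distance, data.rows[0].elements[0].duration);
}

getDistance();
```

The same request could be issued with Axios or any other HTTP client; the point is that only the protocol, URL, parameters, and key matter, not any installed library.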

What is the API advantage here?

An API (Application Programming Interface) enables easy and seamless data transfer between a client application and the server providing the service. There is no installation required, unlike an SDK. The API logic is completely abstracted by the service provider from the client. APIs contribute to a loosely coupled, flexible architecture. Since the API code lies on the server, it’s maintained by the provider. Because of this dependency, we need to ensure that we choose a reliable provider and also keep an eye out for newer versions.

Hence, for our health tracking app, we can use the Google Maps Distance Matrix API for location tracking.

BMI calculator and diet options implementation

The BMI calculator could be a custom implementation, an API, or an SDK. If it is not readily available as an API or SDK, and it is required across a number of health services or products the organization wants to provide, it would be best to build it once and expose it as an API for current and future use.

Diet options clearly are a custom implementation in our scenario.

Difference between SDK and API

API – An API is used to provide a feature by running on a third-party system in a request-response mode.
SDK – An SDK provides all the tools, libraries, APIs, and documentation necessary to build the application or feature.

API – APIs run on separate servers (internal or third party) and hence have a continued dependency on the service for reliable operation.
SDK – SDKs typically run in the same environment as the application and hence have no such dependency; they do, however, use the processing power of the environment the application runs in.

API – Using an API just requires a SOAP/REST/GraphQL call to the server endpoint, with request parameters defined as per the API documentation.
SDK – SDKs are available in the languages supported by the provider, chosen mostly by what can run in the target environment and by the language's popularity; Java, NodeJS, Python, Go, and PHP are the usual languages popular with the developer community.

API – No installation is required.
SDK – Installation is required, which makes SDKs bulkier; any upgrades need to be handled at our end. Some SDKs also allow customization as per our needs.

API – Error handling is left to the application, based on what the server returns.
SDK – SDKs lean on the language's error-handling mechanism in addition to what the server platform returns, so errors are handled more effectively.

API – Examples: Maps APIs, Payment APIs, and the AdMob API provided by Google.
SDK – Examples: the Java SDK, the Android SDK, and Facebook's single sign-on SDK.

In a scenario where just a few APIs are required from the entire stack provided by an SDK, and these APIs can be run independently, it is better to opt for the APIs alone.

While SDKs are a superset of APIs, used appropriately, they both have many advantages over custom development. 

Advantages of API and SDK

  1. Fast and easy adoption – A few lines of code and your feature is ready. The developer can focus on the core business functionality of the application instead of re-inventing the wheel or working on something outside the team's core area of expertise.
  2. Saves time and effort – Ready to use and can be plugged in directly, thereby shortening the development cycle.
  3. Language – SDKs usually support all the popular languages that an implementation needs. For APIs, you just have to ensure the communication protocol and parameters meet the provider's requirements.
  4. Support – APIs and SDKs encapsulate best practices, provide robustness, and have community support.
  5. Documentation – APIs and SDKs come with good documentation for developers to understand and use. No expertise is required beyond knowing the language to implement in.
  6. Updates – Newer features keep getting added through new versions, which the developer only needs to adopt when required. Backward compatibility is mostly handled by the provider.

Disadvantages of using APIs and SDKs

To summarize, whether it's an API or an SDK, the main things to watch out for are known bugs, limitations, the ongoing dependency on the provider, and cost. It is better to follow the reviews of the community before making a selection.

Trigent provides a number of ready-to-use SDKs and APIs for many domains such as mobile app development, SCM workflows, logistics, and AR/VR development services, enabling you to focus on your core expertise and saving you a lot of time and effort in your development cycles. To know more, please contact us.

How to build and monitor a Telegram bot with AWS Lambda & SNS – A comprehensive guide

There was a time when only a tech-savvy person could understand and build a bot; now bots are everywhere. Building a bot is no longer a complex process, and a Telegram bot is one of the easiest to build. At their core, bots are third-party applications that run inside Telegram and can publish messages to a Telegram group.

Telegram bots can be used to enrich chats by integrating content from external services. They can also be used for sending you customized notifications/news, alerts, weather forecasts and so on.  Telegram bots can also be used to accept payment from other Telegram users. 

This blog explains the complete process of building a Telegram bot and wiring it up to AWS monitoring. The AWS services used are CloudWatch, SNS, and Lambda; the messaging service is Telegram. CloudWatch alerts are delivered to a Telegram group, which anyone with a Telegram account can join to receive them. The functional flow is as given below:

Amazon Simple Notification Service (SNS) is a web service that allows you to publish messages (here, from CloudWatch alarms) and deliver them immediately to subscribers; in our case the subscriber is a Lambda function, which gets triggered and pushes the messages to the Telegram bot.

To deliver the notifications to a Telegram chat, you cannot simply integrate the SNS topic with the Telegram Bot API through an HTTP/S endpoint. Instead, you have to create a simple Lambda function that calls the Bot API and forwards the notifications to a Telegram chat. The procedure is detailed below.

Forwarding SNS Notifications to Telegram Chat

To kickstart this procedure, you need to first create a Telegram bot. Bots are nothing but Telegram accounts operated by software instead of people. In our case, the Telegram bot will be operated by a Lambda function, which sends notifications to the Telegram chat on the bot's behalf. This communication is unidirectional: the bot sends messages to you, but it does not process any messages it receives from you.

The SNS notifications can be forwarded to a Telegram chat by following the below steps:

  1. Create a new Telegram bot.
    • In the Telegram app, type and search for @BotFather. Next, press the Start button (alternatively, you may send the /start command). Once this is done, send the /newbot command and follow the few easy steps to create a new Telegram bot. The BotFather will generate an authorization token for the new bot. This token is a string which resembles something like 123456789:ABCD1234efgh5678-IJKLM. This is required for sending requests to the Telegram Bot API.
    • In the Telegram app, search for the name of the bot that you created. Then, press the Start button (you may also send the /start command). Write any text message to chat with your bot; for example, write ‘Hello’.
    • Now, execute a Bot API call to retrieve the ID of your chat with the bot. In the command below, replace <token> with the value of the authorization token you received from the BotFather.
      curl 'https://api.telegram.org/bot<token>/getUpdates' | python -m json.tool
      The output will include your chat ID.
  2. Go to https://console.aws.amazon.com/sns/home to open Amazon SNS Console. Create a new SNS topic at the AWS region of your choice.
  3. Go to https://console.aws.amazon.com/lambda/home and open the Lambda Management Console. Switch to the same AWS region where you created your SNS topic, and create a new Lambda function with a basic execution IAM role (permission to write its logs to CloudWatch). 
  4. The following function will execute the sendMessage method of Telegram Bot API and help forward the SNS messages (notifications) to a Telegram chat.

Sample code in Python:

import json
import os
import logging

# Note: newer Lambda runtimes may no longer vendor requests inside botocore;
# in that case, bundle the 'requests' library with your deployment package or use urllib instead.
from botocore.vendored import requests

# Initializing a logger and setting it to INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Reading environment variables and generating the Telegram Bot API URL
TOKEN = os.environ['TOKEN']
USER_ID = os.environ['USER_ID']  # the chat ID retrieved in Step 1
TELEGRAM_URL = "https://api.telegram.org/bot{}/sendMessage".format(TOKEN)

# Helper function to prettify the message if it's in JSON
def process_message(input):
    try:
        # Parsing the JSON string
        raw_json = json.loads(input)
        # Outputting as JSON with indents
        output = json.dumps(raw_json, indent=4)
    except Exception:
        output = input
    return output

# Main Lambda handler
def lambda_handler(event, context):
    # Logging the event for debugging
    logger.info("event=")
    logger.info(json.dumps(event))

    # Basic exception handling. If anything goes wrong, the exception is raised and logged.
    try:
        # Reading the "Message" field from the SNS record
        message = process_message(event['Records'][0]['Sns']['Message'])

        # Payload to be sent via POST to the Telegram Bot API
        payload = {
            "text": message.encode("utf8"),
            "chat_id": USER_ID
        }

        # Posting the payload to the Telegram Bot API
        requests.post(TELEGRAM_URL, payload)
    except Exception as e:
        raise e

5. Configure the function: Memory (MB): 128 MB; Timeout: 5 sec.
Environment variables: set the USER_ID and TOKEN environment variables of your Lambda function (USER_ID is the chat ID and TOKEN is the bot token from Step 1).

6. Publish the new version of your Lambda function. Copy the function ARN (along with the version suffix) from the top of the page.

7. Open the SNS topic in the Amazon SNS Console. Using the ARN from the previous step, create a new subscription with the AWS Lambda protocol.

8. Open your SNS topic in the Amazon SNS Console and publish a test message.
The message will be delivered to your Telegram chat with your bot.
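If you prefer to publish the test message programmatically rather than from the console, here is a minimal sketch using the AWS SDK for JavaScript v3; the region and topic ARN below are placeholders and must be replaced with your own values.

```typescript
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

// Placeholders: substitute your own region and topic ARN
const client = new SNSClient({ region: "us-east-2" });
const topicArn = "arn:aws:sns:us-east-2:123456789012:my-alerts-topic";

async function publishTestMessage(): Promise<void> {
  // SNS invokes the subscribed Lambda, which forwards the message to Telegram
  const result = await client.send(
    new PublishCommand({
      TopicArn: topicArn,
      Subject: "Test notification",
      Message: JSON.stringify({ test: "Hello from SNS" }),
    })
  );
  console.log("Published message id:", result.MessageId);
}

publishTestMessage();
```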

Once the above configuration is done, choose the metric to monitor and create a CloudWatch alarm associated with it, with the SNS topic as the alarm action. Henceforth, whenever the alarm condition occurs, an alert notification is sent to the Telegram bot, which in turn cascades it to the group.

Here’s a sample notification for reference:

{
    "AlarmName": "High CPU on Test Server",
    "AlarmDescription": "Created from EC2 Console",
    "AWSAccountId": "525477889965",
    "NewStateValue": "ALARM",
    "NewStateReason": "Threshold Crossed: 1 datapoint [99.6666666666667 (15/02/20 09:21:00)] was greater than or equal to the threshold (80.0).",
    "StateChangeTime": "2020-02-15T09:22:34.928+0000",
    "Region": "US East (Ohio)",
    "OldStateValue": "OK",
    "Trigger": {
        "MetricName": "CPUUtilization",
        "Namespace": "AWS/EC2",
        "StatisticType": "Statistic",
        "Statistic": "AVERAGE",
        "Unit": null,
        "Dimensions": [
            {
                "value": "i-0e8f79e0801648253",
                "name": "InstanceId"
            }
        ],
        "Period": 60,
        "EvaluationPeriods": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "Threshold": 80.0,
        "TreatMissingData": "",
        "EvaluateLowSampleCountPercentile": ""
    }
}

Note: Ensure that the SNS topic is given permission to invoke the Lambda function so that messages published to the topic actually reach it.

This concludes the CloudWatch monitoring notification setup. The same arrangement can be used for CloudTrail logs that monitor API calls as well.

Transportation and Logistics Go Places with RPA at the Helm

Tedious, repetitive tasks can put quite a drain on your time, especially when you would rather spend it on more meaningful activities. Take emails, for instance: you cannot do without them, you cannot ignore them, and quite a few will require you to prioritize and take action.

Sifting through all that information to pull only the necessary data into the operating system for crafting a response can be overwhelming, especially when you would rather focus on important activities such as building relationships with customers or planning for business growth. Thankfully, after a successful run across industries including financial services, healthcare, and hospitality, Robotic Process Automation (RPA) has now made its debut in transportation and logistics.

RPA bots are easy to use and you can integrate them with your existing technology infrastructure even if the systems they work with do not integrate with one another. The fact that the global robotic process automation market size is expected to touch $13.74 billion by 2028 at a CAGR of 32.8% over the forecast period only makes it evident how eager enterprises are to adopt RPA.

Enterprises have always been on the lookout for ways and means to monitor costs and resources. RPA offers them just that, making its way across business departments and processes reducing human error, and amplifying throughput.

Some organizations have been hesitant to adopt RPA because they weren’t sure if their scale could support this technology. The capabilities that RPA brings along are however helping them realize its value and potential. No matter which industry we speak about, transportation and logistics form an integral part of their supply chain. Any improvement in business processes thus has a positive impact on all others.

It’s time we delved deeper into the benefits and use cases that make RPA the smartest solution out there for streamlining processes in transportation and logistics.

The RPA benefits

RPA offers several benefits when you put it at the helm of business processes. Jaguar Freight recently announced its decision to engage RPA Labs to automate its document processes.

Speaking about its decision, Simon Kaye, President, and CEO of Jaguar Freight elaborated, “We recently partnered with RPA Labs, who does a tremendous job automating a lot of the heavy lifting within our organization. They helped us in two areas – one is taking a lot of raw data from client documentation, commercial invoices, and packing lists, and populating that automatically in our system, where previously there was a fair amount of data entry, which caused a lot of errors and delays.”
Not just big enterprises, but even startups are now eagerly embracing the power of RPA to streamline their operations.

Some of the top benefits of leveraging RPA solutions include:

  • Time – Automation has always saved enterprises a lot of time, but RPA tools streamline tasks helping them further bring down the process cycle time significantly.
  • Accuracy – Due to the absence of manual intervention, RPA ensures high accuracy. Tasks performed are usually error-free and in the rare event that an error occurs, it can be found and fixed easily. This is possible because RPA-driven processes are recorded and easily retrieved.
  • Productivity – Higher accuracy ensures better work management. It helps enterprises align processes with their business goals ensuring productivity is at an all-time high.
  • Revenue – With reduced process cycle times and increased accuracy and productivity, enterprises are able to devote their time to grow their business and increase revenue.

To take a closer look at the different processes that benefit from RPA and understand how RPA plays a role in enhancing organizational efficiencies, let’s look at its applications.

Order processing and tracking

The one area that involves endless manual data entries and can improve significantly is order processing and tracking. It’s not just tedious and time-consuming but also very resource-intensive. Manual errors can prove to be extremely costly at this stage. RPA enables organizations to process orders efficiently. PRO numbers of shipments are picked up from a carrier’s website automatically via bots and loads are closed out in no time.

Tracking continues with the help of IoT sensors even after orders are processed and shipped. IoT sensors also ensure that products can be traced based on their last known location in case they get misplaced during transit. The rationale is to keep both employees and customers in the loop so that the status of shipments is known to all concerned at all times.

The RPA tool also sends out updates in the form of emails at regular intervals. This feature comes in handy when the transit period is too long. Customers also get plenty of time to schedule pick-up times based on the location of the product.

Inventory management

Another important task that comes under the domain of RPA in supply chain and logistics is that of inventory monitoring. After all, supply needs to be aligned with the demand for products and the expectations can be met only when you know exactly how many products are left and when new shipments are going to be needed.

RPA tools look into this aspect and send a notification to concerned employees about the number of products remaining and even order new products as required. Supply and demand planning is possible only when you are able to analyze diverse data from suppliers, customers, distributors, and your workforce. RPA can gather, store, and analyze data to help you tide over these challenges and maintain a steady supply.

Invoice management

Like order processing, invoice management also involves entering and processing a huge amount of data. With RPA tools, you can substantially reduce the stress of going through invoice documents and ensure error-free processing. In a typical business scenario in transport and logistics, orders are received, processed, and shipped in large numbers every day.

While it took days in the pre-RPA era to process invoices, RPA ensures that invoices are processed quickly and accurately, extracting only pertinent information to enable automatic payments. This helps businesses reduce the average handling time by 89% with 100% accuracy and achieve a resource utilization of 36%.

Report generation

You need reports for just about everything; be it for processing payments, gathering customer feedback, or managing shipments. When it comes to transportation, report generation assumes a whole new level especially when you are tracking movements from city to city, port to port. Often, it can get tiresome and challenging.

RPA helps you manage all your report-related chores with ease thanks to its ability to screen information. Minus the human intervention, RPA-generated reports are highly accurate. Modern enterprises combine the capabilities of RPA with Artificial Intelligence to generate precise reports and even make sense of them to offer actionable insights.

Communication and customer satisfaction

In a sector as busy and extensive as transportation, communication is the key to better relations and customer satisfaction. Customers need timely updates and the fact that multiple vendors and partners are divided by distance and time zones can sometimes pose challenges in communication. This is where RPA tools such as chatbots and auto-responders come into play.

They communicate, interact, and answer customer queries. They also push notifications as often as required to inform concerned authorities about order status or shipment delays or other related matters. This in turn ensures a high level of customer satisfaction. Given the stiff competition, it is the only way customers are going to keep coming back for more.

While old customers are happy to hang around, new customers will look forward to a long association thanks to RPA-enabled services. The best part about RPA tools is that they allow you to link information across stages and processes to have the right information necessary for providing efficient customer service and 24X7 support.

Take your business to new heights with Trigent

Trigent, with its highly experienced team of technology experts, is helping enterprises improve process cycle times and create new opportunities for increasing revenue. We can help you too with the right RPA tools and solutions to enhance process efficiencies and create better customer experiences.


Allow us to help you manage your workflows and add value with RPA. Call us today to book a business consultation.

Apple’s ARKit: Unique features delivering an immersive experience

Augmented Reality (AR) has emerged as a new communication medium that builds on the motion tracking capabilities of a wide range of processing devices. In this article, we will discuss the unique features of Apple's iOS ARKit platform that enable an immersive experience for users.

Features of iOS ARKit

AR isn't only about joining computer data with human senses. It is a lot more, thanks to the features listed below:

  • Location anchors for creating and updating content at specific points on the map
  • 3D views of amenities with real-time 3D rendering, complete with animations and textures
  • Optical or video see-through technologies to accomplish the augmentation
  • A depth camera for a secure facial recognition system; Face ID unlocks 30% faster, and those applications launch twice as quickly in iOS 13
  • Motion capture: tracking the movement of people and applying the same body motion to a virtual character
  • AR is experienced live and in real time, not pre-recorded; merely compositing recorded footage with computer graphics does not count as AR, which amalgamates the real and the virtual as it happens
  • Light estimation to help blend the virtual and real worlds
  • ARKit produces information in meter scale, so a 3D virtual item anchored at a point stays fixed to that point in 3D space
  • Augmented reality application development will see a total change with ARKit and recent iOS development features

A little more about AR and the ARKit

The concept of AR dates back to the 1950s, while the term was coined in 1990 by Boeing researcher Tom Caudell. AR's ability to augment human senses fuelled its increased usage in many applications.

After the launch of Google Glass, tech titans like Microsoft, Niantic, Sony, and Apple took up the initiative to leverage AR in new ways. Apple’s ARKit harnesses its library to offer features like collaborative sessions, mapping of physical 3D space, multiple face tracking, stable motion tracking, etc.

Now is the time to build a digitally driven experiential future with this booming platform of ARKit. Let’s join hands to inspire creative thinking that fuels tomorrow’s innovations.

AR has demonstrated a clear return on investment while offering businesses the means and ways to connect and converse with their customers. At Trigent, we help you create immersive experiences that are intuitive and data-rich while putting your customer needs at the core of every initiative. It’s time you embraced the many possibilities AR has to offer to unlock moments of delight for your customers. Allow us to help you push the standards a little higher.

Call us today for a consultation.


Trigent is Clutch.co’s 2019 #1 AngularJS Developer

With almost 25 years of experience, Trigent has touched the lives of hundreds of companies through digital transformation. We have been able to set an industry standard by keeping ahead of emerging technologies and understanding how to utilize a range of IT services. Specifically, our knowledge of AngularJS has caught the attention of Clutch.co, as we were recently ranked #1 in their 2019 leading developers report and in their Leaders Matrix for AngularJS developers.

Headquartered in Washington D.C., Clutch is a B2B site that rates and reviews agencies in various industries. Their goal is to unite businesses with the ideal firm to resolve a precise need. They rank hundreds of agencies based on a unique methodology that evaluates expertise, portfolio of work, and ability to deliver top-notch products for their clients. Their analysts spoke directly with our clients to assess these areas. Based on their research, we were ranked #1 in both their annual listing and Leaders Matrix out of over 1,700 firms.

Beyond holding this highly coveted spot, our success is also shared on Clutch's sister sites: The Manifest and Visual Objects. The Manifest publishes state-of-tech news and how-to guides for businesses, assisting them in simplifying their hunt for solutions providers. You can find us listed here among other ECM companies and Java developers. Likewise, Visual Objects is a platform that displays portfolios from custom software developers to creative agencies alike so firms can envision what a future project might look like.

Without our clients, we wouldn’t have been ranked #1 by Clutch! We’d like to thank them, as well as Clutch, for taking the time to review our company. Our team looks forward to enabling even more businesses in overcoming limits and becoming future-ready.

Learn React.JS in 10 Minutes

ReactJS, developed by Facebook, is a popular JavaScript library used extensively for building user interfaces. It is a view-based library that renders HTML. It supports one-way binding, which essentially means there is no mechanism for the HTML to change the components directly; the HTML can only raise events that the components respond to.

In ReactJS, ‘state’ is the interface between back-end data and the UI elements in the front end. State helps keep the data of different components in sync, ensuring that each state update re-renders all relevant components. To simplify, state is the medium of communication between different components.

In this blog, I will touch upon state management, automatic back-end data refresh (no F5) and retrieving data from API and rendering the same.

Pre-requisite – Please ensure that you have installed Node.js (latest package from https://nodejs.org/en/).

To create your first ReactJS project, run the following command in a directory of your choice. This will create a new project under the firstreactjsapp directory.

npm init react-app firstreactjsapp
Navigate to the newly created folder, and you will see public and src directories.

To update the content, go to the App.js file and replace the App function with the following code:

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <h1>First ReactJS Application</h1>
      </header>
    </div>
  );
}
Run the ReactJS application using the following command and you will see the output below (Chrome browser preferred):
npm start

Interacting with an API: we will use Axios, a popular, promise-based HTTP client that sports an easy-to-use API and can be used in both the browser and Node.js.

npm i axios

Accessing the API: let's use some of the freely available online APIs from https://jsonplaceholder.typicode.com.

Creating the ToDo list grid

Copy the following code and paste it into App.js:

import React from 'react';
import axios from 'axios'; // Reference axios for accessing APIs

export default class PersonList extends React.Component {
  state = {
    todos: [],
    onetime: true
  }

  fetchNews(me) {
    if (me.state.onetime) { // load data only once from the back end
      axios.get('https://jsonplaceholder.typicode.com/todos') // retrieving from the back end
        .then(res => {
          let todos = res.data.slice(0, 10); // let's fetch only 10 records
          // Toggle the status of the first record on a background interval,
          // without refreshing the page or pressing F5
          todos[0].completed = !todos[0].completed;
          me.setState({ todos });
          me.setState({ onetime: false }); // changing state
        })
    } else {
      let todos = me.state.todos;
      todos[0].completed = !todos[0].completed;
      me.setState({ todos });
    }
  }

  componentDidMount() {
    if (this.state.onetime) {
      this.fetchNews(this);
    }
    this.interval = setInterval(this.fetchNews, 500, this); // refresh every 500 ms
  }

  render() { // rendering
    return (
      <table border="1">
        <thead>
          <tr><th>Id</th><th>UserID</th><th>Title</th><th>Status</th></tr>
        </thead>
        <tbody>
          {this.state.todos.map(todo =>
            <tr key={todo.id}>
              <td><a href="/">{todo.id}</a></td>
              <td>{todo.userId}</td>
              <td>{todo.title}</td>
              <td>{todo.completed ? 'Done' : 'Pending'}</td>
            </tr>
          )}
        </tbody>
      </table>
    )
  }
}
Run the React application using the following command and you will see the output below (Chrome browser preferred):
npm start

Did you find this blog useful? Send your comments to info@trigent.com.

Angular Components – Best Practices (Part 1)

Angular is a popular framework for creating front-ends for web and mobile applications. Components are an important part of any Angular application.

In this blog, we look at some of the best practices for Angular’s components.

1. File Name

It’s important that we are able to easily find files within our application. To make this possible, we need to name them carefully and ensure that the file’s name describes its contents explicitly to make it easy to identify.

For example, the file name ‘catalog.component.ts’ clearly indicates that this is our catalog component. This naming style, which consists of a descriptor followed by a period and then its type, is a recommended practice according to the Angular style guide.

Similarly, for the CSS file, the same should be named ‘catalog.component.css’. So, using ‘catalog.component.css’ makes it clear that these are the styles for the catalog component. We can do the same thing for the template and call this ‘catalog.component.html’.

2. Prefixing Component Selectors

There are multiple ways to write code, but when we follow some fundamental rules in our code-writing practices and organize the file and folder structure correctly, it simplifies locating code, identifying it quickly, and reusing it.

We have a couple of components in our example that have selectors we use in our HTML.

For example, this nav-bar.component has a nav-bar selector. It is a good practice to add a prefix to these selectors that match the feature area where these selectors can be used. This component is in our core module, which is sort of an app-level module. Hence, we will prefix that with wb, for whitebeards.

Prefixing component selectors in this manner avoids conflicts when we import a module whose component selectors would otherwise clash with our component names.

Another component resides in our shared module, which is also an app-wide module, so we will prefix that one with wb as well. We do not have any components with selectors in our feature areas, i.e., the catalog or user features.

Prefixes are usually two to four characters to keep them short and avoid distracting from the actual component name. In fact, prefixes can be whatever you want; just be sure to choose something that represents the feature area the component belongs to, and add the prefix whenever you create a selector.
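As an illustration, here is a minimal sketch of a prefixed selector; the class name and inline template are hypothetical, but the 'wb' prefix follows the convention described above.

```typescript
import { Component } from '@angular/core';

@Component({
  // The 'wb-' prefix marks this as an app-level (whitebeards) component and
  // avoids clashes with selectors imported from other modules.
  selector: 'wb-nav-bar',
  template: '<nav class="nav-bar">...</nav>'
})
export class NavBarComponent {}
```

In a template, the component is then used as `<wb-nav-bar></wb-nav-bar>`.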

3. Using separate CSS and template files

The Angular style guide recommends that if our template or CSS has more than three lines, we should extract it into its own file. So we will start by creating a sign-in.component.css file and moving the styles out of the component into that separate CSS file; the same applies to the template.
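A minimal sketch of what the component metadata looks like after extraction; the file names follow the sign-in example above, and the selector prefix is the one chosen earlier.

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'wb-sign-in',
  // Template and styles extracted into their own files
  templateUrl: './sign-in.component.html',
  styleUrls: ['./sign-in.component.css']
})
export class SignInComponent {}
```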

4. Decorating input and output properties

Input and output properties can be declared in two ways: by listing them in the inputs/outputs arrays of the component metadata, or by using the @Input() and @Output() decorators; the decorator approach is recommended. The same choice exists for output properties, and the same rule applies. Decorating our input and output properties makes it more obvious which properties are used for input and output, and it simplifies renaming.

Finally, it is also simply less code to write.
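A minimal sketch of the decorator approach; the property and event names below are illustrative only.

```typescript
import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'wb-catalog-item',
  template: '<button (click)="addToCart.emit(productId)">Add to cart</button>'
})
export class CatalogItemComponent {
  // The decorators make it obvious which properties flow into and out of the component
  @Input() productId!: number;
  @Output() addToCart = new EventEmitter<number>();
}
```

Compared with listing names in the metadata's inputs/outputs arrays, the decorators sit right next to the property declarations, so renaming a property touches only one place.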

Don't miss my next blog, ‘Angular Components – Best Practices (Part 2)’, where I will share some more best practices for Angular's components.

Handling Combo Box Selection Change in ViewModel (WPF MVVM)

Windows Presentation Foundation (WPF) offers various controls, and one of the basic ones is the combo box.

The combo box has various events such as DropDownOpened, DropDownClosed, SelectionChanged, GotFocus, etc.

In this blog, we will see how to handle the SelectionChanged event of a combo box that sits inside a grid, using the Model-View-ViewModel (MVVM) pattern.

Create Model (Person):

Define a class Person as shown below.

Create View Model:

Create a view model named MainWindowViewModel.cs

Create View:

Create a view named MainWindow.xaml.

In the above code, ‘Cities’ is defined in the view model and is not part of the Person class, so the ItemsSource is defined as ItemsSource="{Binding Path=DataContext.Cities, RelativeSource={RelativeSource FindAncestor, AncestorType=UserControl}}", which takes the data from the view model's Cities property.

We need to import the namespace xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity".

View Code behind:

Create view model field and instantiate the view model in the constructor of View Code behind file.

MainWindow.xaml.cs

When we run the application, the grid is bound to the person details and the city combo box is bound to the cities list. Now we can change the city for the respective person, and the change will be handled by the CityChangeCommand in the view model class.

In this manner we can handle the combo box selection change event using MVVM pattern in WPF.

CRUD Operations on Amazon S3 Using PHP AWS SDK

What Is Amazon S3?

Amazon Simple Storage Service is popularly known as S3. It is a storage service for the Internet. It has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere over the Internet.

Store Data in Buckets:

Data is stored in a container called a bucket. You can upload as many objects as you like into an Amazon S3 bucket, and each object can contain up to 5 TB of data. For example, if an object named photos/puppy.jpg is stored in the johnsmith bucket, it can be addressed using the URL https://johnsmith.s3.amazonaws.com/photos/puppy.jpg.

Download Data:

Download your data any time you like, or allow others to do the same.
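
As a quick sketch (using the $s3 client and BUCKET constant that we will set up in connection.php and config.php later in this post; the object key and local path are illustrative placeholders), an object can be downloaded straight to disk with getObject and its SaveAs option:

<?php
 require_once 'connection.php'; // creates the $s3 client and loads config.php

 // Download photos/puppy.jpg from the bucket and save it to a local file.
 // The key and the local path are placeholders; the local folder must exist.
 $s3->getObject([
 'Bucket' => BUCKET,
 'Key' => 'photos/puppy.jpg',
 'SaveAs' => 'files/puppy.jpg',
 ]);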


Permissions:

Grant or deny access to others who want to upload data to or download data from your Amazon S3 bucket.

Here I will be using the PHP AWS SDK on a Windows machine to upload, download, and delete data from S3.

To access any AWS service, you first need an AWS account.

Please log in to Amazon AWS and install/configure the AWS SDK on your local system.

AWS accepts only HTTPS requests, so install a dummy SSL certificate in your local WAMP/XAMPP.

Creating a Bucket Using CreateBucket:

To upload your data (photos, videos, documents, etc.), you first create a bucket in one of the AWS Regions. By default, you can create up to 100 buckets in each of your AWS accounts.

Steps to Create a Bucket:

Go to aws.amazon.com and login using your credentials.

Go to Services -> Storage -> S3

Click on create bucket button.

1. Provide Name of Bucket & Region

  1. Bucket name – storing-data
  2. Region – US East (N. Virginia)

2. Set Properties on the Bucket by Enabling/Disabling

  1. Enable versioning
  2. Server access logging
  3. Default encryption AES-256 (Advanced Encryption Standard)

Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

3. Set Permissions

  1. Manage public permissions – Do not grant public read access to this bucket (Recommended)
  2. Manage system permissions – Do not grant Amazon S3 Log Delivery group write access to this bucket

Finally, click on Create bucket.
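
The console steps above can also be done programmatically. Here is a minimal sketch using the SDK's createBucket operation (the key and secret are placeholders, as in the config.php shown later); since the region chosen above is US East (N. Virginia), no location constraint is needed:

<?php
 use Aws\S3\S3Client;
 require_once 'vendor/autoload.php';

 // Placeholder credentials; use your own Access Key ID and Secret Access Key.
 $s3 = S3Client::factory(['key' => 'YOUR_KEY', 'secret' => 'YOUR_SECRET']);

 // Create the bucket used in this post (us-east-1 needs no LocationConstraint).
 $s3->createBucket(['Bucket' => 'storing-data']);

 // Wait until the bucket is available before using it.
 $s3->waitUntil('BucketExists', ['Bucket' => 'storing-data']);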

How Do You Get an Access Key for Amazon S3?

An access key ID and secret access key are used to authenticate requests to your Amazon Web Services (AWS) account from code. An AWS secret access key cannot be retrieved after it has been created: once lost, it cannot be recovered, and a new access key must be created.

Follow the steps below to create a new secret access key for an AWS account:

  1. Sign in to the AWS Management Console and open the IAM console.
  2. In the navigation pane, choose Users.
  3. Add a checkmark next to the name of the desired user, and then choose User Actions from the top.
    Note: The selected user must have read and write access to the AWS S3 bucket.

Click on Manage Access Keys:

Manage Access Keys
  • Click on Create Access Key.

Note: Each AWS user can have only two access keys at a time. If a secret access key is lost, one of the existing access keys must be deleted and a new one created.

Manage Access Keys
  • Click on Show User Security Credentials.
  • Copy and paste the Access Key ID and Secret Access Key values, or click on Download Credentials to download the credentials as a CSV file.

Code to Upload, Retrieve and Delete an Object in a Bucket:

Let us now write the code to upload, retrieve, and delete an object in a bucket. WAMP or XAMPP can be used; I have used XAMPP.

Create a project folder named s3. This is how the folder structure should look:

The AWS SDK resides inside the vendor folder.



Define the S3 configuration in C:\xampp\htdocs\s3\config.php

<?php
 define("KEY", 'AKIAJFWC3NTWAFASA');
 define("SECRET", 'MgAduMo4gBCXM+kVr/ZADwefdsFASDASD');
 define("BUCKET", storing-data');
 ?>

Use the AWS services provided by the AWS SDK in C:\xampp\htdocs\s3\connection.php

<?php
 use Aws\S3\S3Client;
 require_once 'vendor/autoload.php';
 require_once 'config.php';
 $config =['key' => KEY, 'secret' => SECRET, 'bucket' => BUCKET];
 $s3 = S3Client::factory($config);
 ?>

Upload files using C:\xampp\htdocs\s3\start.php

<?php ?>
 <html>
 <head>
 <title>Upload Data</title>
 <script type="text/javascript" src="js/jquery.js"></script>
 </head>
 <body>
 <h3>Upload the files</h3>
 <form name="upload" action="upload.php" method="post" enctype="multipart/form-data">
 <input type="file" name="uploaduser" id="uploaduser" />
 <input type="submit" name="submit" value="upload"/>
 </form>
 <h3>List of items uploaded</h3>
 <?php include_once 'listing.php'; ?>
 </body>
 </html>

Upload action page to capture and move the object: C:\xampp\htdocs\s3\upload.php

<?php
 require_once 'connection.php';
 if(isset($_FILES['uploaduser'])){
 $files = $_FILES['uploaduser'];
 $name = $files['name'];
 $tmpName = $files['tmp_name'];
 $size = $files['size'];
 $extension = explode('.', $files['name']);
 $extension = strtolower(end($extension));
 $key = md5(uniqid());
 $tmp_file_name = "{$key}.{$extension}";
 $tmp_file_path = "files/{$tmp_file_name}";
 move_uploaded_file($tmpName, $tmp_file_path);
 try{
 $s3->putObject([
 'Bucket' => $config['bucket'],
 'Key' => "uploads-azeez/{$name}",
 'Body' => fopen($tmp_file_path, 'rb'),
 'ACL' => 'public-read'
 ]);
 //remove the file from local folder
 unlink($tmp_file_path);
 } catch (Aws\S3\Exception\S3Exception $ex){
 die("Error uploading the file to S3");
 }
 header("Location: start.php");
 exit();
 }else if(isset($_POST) && !empty($_POST)){
 $name = $_POST['key'];
 // Delete an object from the bucket.
 $s3->deleteObject([
 'Bucket' => $config['bucket'],
 'Key' => "$name"
 ]);
 }

List the uploaded objects using C:\xampp\htdocs\s3\listing.php

<?php
 require_once 'connection.php';
 $objects = $s3->getIterator('ListObjects', ['Bucket' => $config['bucket'], 'Prefix' => 'uploads-azeez/']
 );
 ?>
 <html>
 <head>
 <title>Listing Bucket data</title>
 <style>
 table, th, td {
 border: 1px solid black;
 border-collapse: collapse;
 }
 </style>
 <script type="text/javascript" src="js/jquery.js"></script>
 <script>
 function ConfirmDelete(key)
 {
 var x = confirm("Are you sure you want to delete?");
 if (x) {
 $.ajax({
 url: 'upload.php',
 type: "POST",
 data: {'key': key},
 success: function(response) {
 console.log(response);
 window.location.reload();
 },
 error: function(jqXHR, textStatus, errorThrown) {
 console.log(textStatus, errorThrown);
 }
 });
 // event.preventDefault();
 } else
 return false;
 }
 </script>
 </head>
 <body>
 <table >
 <thead>
 <tr>
 <td>File Name</td>
 <td>Download Link</td>
 <td>Delete</td>
 </tr>
 </thead>
 <tbody>
 <?php foreach ($objects as $object): ?>
 <tr>
 <td><?php echo $object['Key']; ?></td>
 <td><a href="<?php echo $s3->getObjectUrl(BUCKET, $object['Key']); ?>" download="<?php echo $object['Key']; ?>">Download</a></td>
 <td><a href="" name="delete" onclick='ConfirmDelete("<?php echo $object['Key']; ?>")'>Delete</a></td>
 </tr>
 <?php endforeach; ?>
 </tbody>
 </table>
 </body>
 </html>

Finally, this is how the application looks:


Amazon AWS S3

Upload Larger Files Using Multi-part:

You can upload large files to Amazon S3 in multiple parts. You must use a multipart upload for files larger than 5 GB. The AWS SDK for PHP exposes the MultipartUploader class that simplifies multipart uploads.

The upload method of the MultipartUploader class is best used for a simple multipart upload.

<?php
 require 'vendor/autoload.php';
 use Aws\Exception\MultipartUploadException;
 use Aws\S3\MultipartUploader;
 use Aws\S3\S3Client;
 require_once 'config.php'; // defines the BUCKET constant
 $bucket = BUCKET;
 $keyname = $_POST['key'];
 $s3 = new S3Client([
 'version' => 'latest',
 'region' => 'us-east-1'
 ]);
 // Prepare the upload parameters.
 $uploader = new MultipartUploader($s3, '/path/to/large/file.zip', [
 'bucket' => $bucket,
 'key' => $keyname
 ]);
 // Perform the upload.
 try {
 $result = $uploader->upload();
 echo "Upload complete: {$result['ObjectURL']}" . PHP_EOL;
 } catch (MultipartUploadException $e) {
 echo $e->getMessage() . PHP_EOL;
 }

Multiple Authentication System Using Guards in Laravel

Guards:

A guard is a way of supplying the logic used to identify authenticated users. Laravel provides different guards, such as session and token guards. The session guard maintains the user's state across requests using cookies, while the token guard authenticates the user by checking a valid token on every request.
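
As a small illustrative sketch (not part of the setup steps that follow), once a custom guard such as 'admin' is defined in config/auth.php, it can be addressed explicitly through the Auth facade, while calls without an explicit guard keep using the default:

<?php
 use Illuminate\Support\Facades\Auth;

 // Check the current request against the 'admin' guard from config/auth.php.
 if (Auth::guard('admin')->check()) {
 $admin = Auth::guard('admin')->user(); // the authenticated Admins model
 }

 // Calls without guard() use the default guard (the 'web' session guard).
 $user = Auth::user();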

Providers:

Basically, the provider is responsible for retrieving the user's information from back-end storage. If the guard requires that the user be validated against back-end storage, the logic for retrieving that user goes into the authentication provider. Laravel ships with two default authentication providers: database and Eloquent. The database provider handles the straightforward retrieval of user credentials from back-end storage, while the Eloquent provider does the same through Laravel's ORM abstraction layer.


Process to set up Laravel auth:

  1. Create a blank database and link it to the application.
  2. Run the below commands in the command prompt:
php artisan make:auth

and

php artisan migrate

This will scaffold the entire authentication system.

  3. Create the admin model and migration (you can create multiple models and migrations).
php artisan make:model Models/Admins -m

This will create a migration file from the admins model.

Copy & Paste the bellow code in /app/Models/Admins.php

<?php
 namespace App\Models;
 use Illuminate\Foundation\Auth\User as Authenticatable;
 class Admins extends Authenticatable
 {
 protected $guard = 'admin';
 /**
 * The attributes that are mass assignable.
 *
 * @var array
 */
 protected $fillable = [
 'firstname', 'midname', 'lastname', 'email', 'address', 'password',
 ];
 /**
 * The attributes that should be hidden for arrays.
 *
 * @var array
 */
 protected $hidden = [
 'password', 'remember_token',
 ];
 }

And also Copy & Paste bellow hilighted code in Admins Migration file (Path: databasemigrations_create_admins_table.php)

<?php
 use Illuminate\Support\Facades\Schema;
 use Illuminate\Database\Schema\Blueprint;
 use Illuminate\Database\Migrations\Migration;
 class CreateAdminsTable extends Migration
 {
 /**
 * Run the migrations.
 *
 * @return void
 */
 public function up()
 {
 Schema::create('admins', function (Blueprint $table) {
 $table->increments('id');
 $table->string('firstname');
 $table->string('midname');
 $table->string('lastname');
 $table->string('email')->unique();
 $table->string('address')->nullable();
 $table->string('password');
 $table->rememberToken();
 $table->timestamps();
 });
 }
 /**
 * Reverse the migrations.
 *
 * @return void
 */
 public function down()
 {
 Schema::dropIfExists('admins');
 }
 }

Then run the “php artisan migrate” command.

  4. In the config/auth.php file, set up the custom guard and provider for admins:
'guards' => [
 'web' => [
 'driver' => 'session',
 'provider' => 'users',
 ],
 'api' => [
 'driver' => 'token',
 'provider' => 'users',
 ],
 'admin' => [
 'driver' => 'session',
 'provider' => 'admins',
 ],
 'admin-api' => [
 'driver' => 'token',
 'provider' => 'admins',
 ],
 ],
 'providers' => [
 'users' => [
 'driver' => 'eloquent',
 'model' => AppUser::class,
 ],
 'admins' => [
 'driver' => 'eloquent',
 'model' => AppModelsAdmin::class,
 ],
 ],
  5. Create the AdminLoginController

php artisan make:controller Auth/AdminLoginController

This command will create an AdminLoginController file; then copy & paste the below code.

<?php
 namespace App\Http\Controllers\Auth;
 use Illuminate\Http\Request;
 use App\Http\Controllers\Controller;
 use Auth;
 use Route;
 class AdminLoginController extends Controller
 {
 public function __construct()
 {
 $this->middleware('guest:admin', ['except' => ['logout']]);
 }
 public function showLoginForm()
 {
 return view('auth.admin_login');
 }
 public function login(Request $request)
 {
 // Validate the form data
 $this->validate($request, [
 'email' => 'required|email',
 'password' => 'required|min:6'
 ]);
 // Attempt to log the user in
 if (Auth::guard('admin')->attempt(['email' => $request->email, 'password' => $request->password], $request->remember)) {
 // if successful, then redirect to their intended location
 return redirect()->intended(route('admin.dashboard'));
 }
 // if unsuccessful, then redirect back to the login with the form data
 return redirect()->back()->withInput($request->only('email', 'remember'));
 }
 public function logout()
 {
 Auth::guard('admin')->logout();
 return redirect('/admin');
 }
 }
  6. Create the AdminController

php artisan make:controller AdminController

This command will create an AdminController file; then copy & paste the below code.

<?php
 namespace App\Http\Controllers;
 use Illuminate\Http\Request;
 class AdminController extends Controller
 {
 /**
 * Create a new controller instance.
 *
 * @return void
 */
 public function __construct()
 {
 $this->middleware('auth:admin');
 }
 /**
 * show dashboard.
 *
 * @return \Illuminate\Http\Response
 */
 public function index()
 {
 return view('admin');
 }
 }
  7. Create the Admin Login Page

Copy & Paste bellow code for creating admin login page

@extends('layouts.app')
 @section('content')
 <div class="container">
 <div class="row justify-content-center">
 <div class="col-md-8">
 <div class="card">
 <div class="card-header">{{ __('Admin Login') }}</div>
 <div class="card-body">
 <form method="POST" action="{{ route('admin.login.submit') }}">
 @csrf
 <div class="form-group row">
 <label for="email" class="col-sm-4 col-form-label text-md-right">{{ __('E-Mail Address') }}</label>
 <div class="col-md-6">
 <input id="email" type="email" class="form-control{{ $errors->has('email') ? ' is-invalid' : '' }}" name="email" value="{{ old('email') }}" required autofocus>
 @if ($errors->has('email'))
 <span class="invalid-feedback">
 <strong>{{ $errors->first('email') }}</strong>
 </span>
 @endif
 </div>
 </div>
 <div class="form-group row">
 <label for="password" class="col-md-4 col-form-label text-md-right">{{ __('Password') }}</label>
 <div class="col-md-6">
 <input id="password" type="password" class="form-control{{ $errors->has('password') ? ' is-invalid' : '' }}" name="password" required>
 @if ($errors->has('password'))
 <span class="invalid-feedback">
 <strong>{{ $errors->first('password') }}</strong>
 </span>
 @endif
 </div>
 </div>
 <div class="form-group row">
 <div class="col-md-6 offset-md-4">
 <div class="checkbox">
 <label>
 <input type="checkbox" name="remember" {{ old('remember') ? 'checked' : '' }}> {{ __('Remember Me') }}
 </label>
 </div>
 </div>
 </div>
 <div class="form-group row mb-0">
 <div class="col-md-8 offset-md-4">
 <button type="submit" class="btn btn-primary">
 {{ __('Login') }}
 </button>
 <a class="btn btn-link" href="{{ route('password.request') }}">
 {{ __('Forgot Your Password?') }}
 </a>
 </div>
 </div>
 </form>
 </div>
 </div>
 </div>
 </div>
 </div>
 @endsection
  1. Copy & Paste bellow code in Route file (Path: routesweb.php)
Route::prefix('admin')->group(function() {
 Route::get('/login',
 'Auth\AdminLoginController@showLoginForm')->name('admin.login');
 Route::post('/login', 'Auth\AdminLoginController@login')->name('admin.login.submit');
 Route::get('logout/', 'Auth\AdminLoginController@logout')->name('admin.logout');
 Route::get('/', 'AdminController@index')->name('admin.dashboard');
 });

Admin Login URL: http://localhost/admin/login

User Login URL: http://localhost/login

SQS Messaging Service in AWS

AWS SQS (Simple Queue Service), as the name indicates, is a fully managed message queuing service that receives and sends messages between software systems; it offers both standard queues and FIFO (First In First Out) queues. It is generally used in distributed computing. AWS SQS is a secure, durable, scalable, and reliable service, and AWS provides SDKs in various languages to access it.

In this blog, I will use the PHP AWS SDK to send, receive, and delete messages from SQS.

Given below are the steps to be followed:

Please log in to your AWS account and install/configure the AWS SDK on your local system. I am assuming that an AWS account is already available.

AWS accepts only HTTPS requests, so install a dummy SSL certificate in your local WAMP/XAMPP.

After installing the AWS SDK and SSL certificate, SQS needs to be configured.

  1. Go to the AWS console, choose your preferred region, and open ‘Simple Queue Service’. Click on ‘Create New Queue’.
  2. Enter the name of the queue, for example, php-demo-queue.
  3. Set Default Visibility Timeout to 5 minutes. This option makes a message invisible for 5 minutes once it has been picked up for processing from the queue. The maximum is 12 hours.
  4. Set Message Retention Period to 14 days. This option keeps messages available in the queue for a maximum of two weeks if we do not delete them.
  5. Set Maximum Message Size to 256 KB. A message must not exceed 256 KB.
  6. Set Delivery Delay to 0. This tells the queue to make a message visible as soon as it arrives. If you do not want messages to be visible instantly, the Delivery Delay can be set to up to 15 minutes.
  7. Set Receive Message Wait Time to 0 and click on Create Queue.

Note 1: Once the queue is created, these attributes can still be edited from the console, rather than having to change them via API calls.

Note 2: To create a FIFO queue, the queue name has to end with the .fifo suffix, for example php-demo-queue.fifo.
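
The queue can also be created from code. Below is a minimal sketch using the SDK's createQueue() method with the standard SQS attribute names that correspond to the console settings above (credentials are assumed to come from your environment or shared AWS configuration):

<?php
 require 'vendor/autoload.php';

 $sqsClient = new Aws\Sqs\SqsClient([
 'region' => 'ap-south-1', // use your preferred region
 'version' => 'latest',
 ]);

 $result = $sqsClient->createQueue([
 'QueueName' => 'php-demo-queue',
 'Attributes' => [
 'VisibilityTimeout' => '300', // 5 minutes
 'MessageRetentionPeriod' => '1209600', // 14 days
 'MaximumMessageSize' => '262144', // 256 KB
 'DelaySeconds' => '0',
 'ReceiveMessageWaitTimeSeconds' => '0',
 ],
 ]);
 echo $result->get('QueueUrl');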


After the queue is created, it is time to add/edit permissions.

1) Select the queue name to add permissions.

2) Click Permissions tab and click on Add a Permission button.

3) Set Effect to Allow, Principal to Everybody, and Actions to All SQS Actions, then click on Save Changes. (This wide-open permission is convenient for a demo; restrict the principal for production queues.)

List of Methods available in AWS SQS

  1. changeMessageVisibility()
  2. changeMessageVisibilityBatch()
  3. createQueue()
  4. deleteMessage()
  5. deleteMessageBatch()
  6. deleteQueue()
  7. getQueueAttributes()
  8. getQueueUrl()
  9. listDeadLetterSourceQueues()
  10. listQueues()
  11. purgeQueue()
  12. receiveMessage()
  13. removePermission()
  14. sendMessage()
  15. sendMessageBatch()
  16. setQueueAttributes()

Example 1: Get the URL of the queue, get the queue attributes, send a message, receive the message, and delete it from the queue. This example is about a film ticket booking system: the user provides his information and seating choice, the main server receives and stores the info and takes the payment, and the booking details are then sent to a dedicated server that generates a QR code, messages it to the user, and updates the DB.

$config = [
 'region' => 'ap-south-1',
 'version' => 'latest',
 'credentials' => [
 'key' => AWS_ACCESS_KEY_ID,
 'secret' => AWS_SECRET_ACCESS_KEY,
 ]
 ];

 try {
 $sqsClient = new Aws\Sqs\SqsClient($config);
 $stdUrl = $sqsClient->getQueueUrl(array('QueueName' => "test-std-queue"));
 $queueUrl = $stdUrl->get('QueueUrl');
 $queueAttributes = $sqsClient->getQueueAttributes(['QueueUrl' => $queueUrl, 'AttributeNames' => ['All']]);
 $attributes = $queueAttributes->get('Attributes');
 $message = [
 'id' => uniqid(),
 'cust_name' => 'Demo User',
 'cust_email' => 'testemail@test.com',
 'cust_phone' => '987654321',
 'cust_seating' => ['A1','A2','A3'],
 'theatre_id' => 500,
 'amount_paid' => 1000,
 'discount' => 100
 ];
 $messageResult = $sqsClient->sendMessage(['QueueUrl' => $queueUrl, 'MessageBody' => json_encode($message)]);
 $receiveMessages = $sqsClient->receiveMessage(['QueueUrl' => $queueUrl, 'AttributeNames' => ['All']]);
 $msg = $receiveMessages->get('Messages');
 $receiptHandle = $msg[0]['ReceiptHandle'];
 $sqsClient->deleteMessage(array(
 'QueueUrl' => $queueUrl,
 'ReceiptHandle' => $receiptHandle,
 ));
 } catch (Aws\Exception\AwsException $ex) {
 echo $ex->getMessage();
 } catch (Exception $ex) {
 echo $ex->getMessage();
 }

Note 1: All methods will return appropriate messages and status codes.

One can store the responses for future investigations.

Note 2: The body of the message can be in any format, for example JSON, XML, plain text, or paths to files or images, as long as it does not exceed 256 KB.

Dead Letter Queue: If a message is received X number of times without being processed successfully, it is moved to a Dead Letter Queue. For example, if a message has been received by the consumer over 50 times and still not processed successfully, it is sent to the Dead Letter Queue.

To configure a Dead Letter Queue, we need to create another queue as described in the steps above, for example test-std-queue-dlq.

Then add this queue in the Dead Letter Queue settings (redrive policy) of test-std-queue, as sketched below.
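
The same can be done through the API with setQueueAttributes() and a redrive policy. Here is a minimal sketch (queue names taken from the example above, maxReceiveCount set to 50; $sqsClient is the client created in the example above):

<?php
 // Look up the ARN of the dead letter queue.
 $dlqUrl = $sqsClient->getQueueUrl(['QueueName' => 'test-std-queue-dlq'])->get('QueueUrl');
 $dlqAttributes = $sqsClient->getQueueAttributes([
 'QueueUrl' => $dlqUrl,
 'AttributeNames' => ['QueueArn'],
 ])->get('Attributes');

 // Attach it to the source queue via a redrive policy.
 $queueUrl = $sqsClient->getQueueUrl(['QueueName' => 'test-std-queue'])->get('QueueUrl');
 $sqsClient->setQueueAttributes([
 'QueueUrl' => $queueUrl,
 'Attributes' => [
 'RedrivePolicy' => json_encode([
 'deadLetterTargetArn' => $dlqAttributes['QueueArn'],
 'maxReceiveCount' => 50,
 ]),
 ],
 ]);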

An Introduction to Apache Kafka

What is Kafka?

Kafka is an open-source distributed streaming platform from the Apache Software Foundation, used for building real-time data pipelines. It is a publish-subscribe messaging system.

Kafka has the ability to auto-balance consumers and replicates data, enhancing reliability. Kafka offers high throughput for producing and consuming data, even at high data volumes, with stable performance. Kafka is a distributed system, so it can scale easily and quickly, giving it great scalability. Kafka relies on the principle of zero-copy, using the OS kernel to transfer data, and persists messages in a distributed commit log, which makes it durable. Its high throughput, built-in partitioning, replication, and fault tolerance make it a good solution for large-scale message processing applications.


Kafka was originally developed at LinkedIn and was open-sourced in 2011.

Kafka has the following capabilities:

  • It can be used to publish and subscribe to streams of records, like an enterprise messaging system, but with far greater speed and volume capabilities than traditional JMS systems.
  • Kafka can be used for storing streams of records in fault-tolerant storage.
  • It can be used for processing streams of records as they occur.

Kafka use cases:

  • Complex event processing (e.g. as part of an IoT system),
  • Building real-time data platform for event streaming,
  • Building intelligent applications for fraud detection, cross-selling, and predictive maintenance,
  • Real-time analytics (user activity tracking), and stream processing,
  • Ingesting data into Spark or Hadoop (both real-time pipelines and batch pipelines) and log aggregation.
  • Building a real-time streaming ETL pipeline.

Kafka can work with Spark Streaming, Flume, Storm, HBase, Flink, and Spark for real-time ingesting, analysis, and processing of streaming data.

Terms:

Kafka stores data as records, each consisting of a key, a value, and a timestamp, which come from many producers. The records are partitioned and stored in different partitions within different topics. Each partition is an ordered, immutable sequence of records. The records in a partition are each assigned a sequential ID called the offset, which uniquely identifies each record within the partition.

Adding another dimension, a consumer group can have one or more consumers, which read the messages from the topic's partitions.

A Kafka cluster runs with one or more Kafka brokers (servers / nodes), and partitions can be distributed across the cluster nodes.

Distribution:

Kafka partitions are distributed over the Kafka cluster. Each partition has one leader broker/server, and the remaining brokers act as follower brokers, so every server in the cluster handles a share of the requests and data. The leader handles all reads and writes for its partition, while the followers passively replicate the data from the leader, keeping the load well balanced within the cluster. If the leader broker fails, one of the followers is elected as the new leader. The replication factor is configurable per topic.

The Kafka cluster manages its brokers with the help of a connected ZooKeeper ensemble, which provides coordination services for the distributed system over the network.

Kafka Cluster Architecture:

The diagram shows topics configured to use three partitions; the ID of each replica is the same as the ID of the broker that hosts it.

Producers:

Producers publish data to the appropriate topics and are responsible for choosing the topic and the partition within it. A producer sends data as records, each containing a key and value pair, which are converted to byte arrays with the help of a key serializer and a value serializer. By default, the partitioner chooses the partition number by hashing the key or, when there is no key, in a round-robin fashion. Producers support various approaches for sending data to the server, as sketched below.
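
Kafka clients exist for most languages. As a purely illustrative sketch in PHP (assuming the php-rdkafka extension is installed; the broker address, topic name, key, and payload are placeholders), a producer looks roughly like this:

<?php
 // Minimal producer sketch using the php-rdkafka extension (an assumption;
 // broker address and topic name are placeholders).
 $conf = new RdKafka\Conf();
 $conf->set('metadata.broker.list', 'localhost:9092');

 $producer = new RdKafka\Producer($conf);
 $topic = $producer->newTopic('user-activity');

 // RD_KAFKA_PARTITION_UA lets the default partitioner pick the partition
 // (by key hash when a key is given, otherwise round-robin).
 $topic->produce(RD_KAFKA_PARTITION_UA, 0, json_encode(['event' => 'login']), 'user-42');

 // Wait for outstanding messages to be delivered before exiting.
 $producer->flush(10000);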

Consumers:

Consumers read and process the data from the appropriate topics within the Kafka cluster. Consumers are labeled with a consumer group name; multiple consumers that share the same group name form a consumer group. Kafka delivers each record of a topic to a single consumer instance within each consumer group. If every consumer instance has a different group name, records are delivered to all of the instances. Each consumer instance can run in a different process or on a different machine.
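
Continuing the same illustrative php-rdkafka sketch (broker address, group id, and topic name are placeholders), a high-level consumer that joins a consumer group and reads from the topic could look roughly like this:

<?php
 // Minimal consumer sketch using the php-rdkafka extension (an assumption).
 $conf = new RdKafka\Conf();
 $conf->set('metadata.broker.list', 'localhost:9092');
 $conf->set('group.id', 'activity-readers'); // consumers sharing this id form a group

 $consumer = new RdKafka\KafkaConsumer($conf);
 $consumer->subscribe(['user-activity']);

 while (true) {
 $message = $consumer->consume(120 * 1000); // wait up to 120 s for a record
 if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
 echo $message->payload . PHP_EOL; // process the record
 } elseif ($message->err !== RD_KAFKA_RESP_ERR__TIMED_OUT
 && $message->err !== RD_KAFKA_RESP_ERR__PARTITION_EOF) {
 throw new Exception($message->errstr(), $message->err);
 }
 }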

Conclusion:

Kafka provides a highly scalable abstraction for distributed systems and for various kinds of real-time processing. Apache Kafka features in the architectures of several leading applications such as Twitter, LinkedIn, Netflix, Uber, Yelp, and eBay.

I have, in this blog, covered some basic information, use cases, and terms. In my next blog, I will write in detail about Kafka Producer, Partitioner, Serializer, Deserializer and Consumer Group.

Angular 2 – Getting Started

Why Angular?

  1. Expressive HTML: Angular makes HTML more expressive. Expressive HTML is simply a different way of seeing, interpreting, and authoring markup; it is HTML with features such as if conditions, for loops, and local variables.
  2. Powerful data binding: Angular has powerful data binding. We can easily display fields from the data model, track changes, and process updates from the user.
  3. Modular by design: Angular promotes a modular design, making it easier to create and reuse content.
  4. Built-in back-end integration: Angular has built-in support for communication with back-end services.

Why Angular 2? 

Angular 2 is built for speed: it has faster initial loads and improved rendering times. Angular 2 is modern, embracing current JavaScript standards such as classes, modules, and new operators. Angular 2 has a simplified API, with fewer directives and simpler binding, and it enhances productivity in the day-to-day workflow.

An Angular 2 application is a set of components, plus services that provide functionality across those components.

What is an Angular 2 Component?

Each component comprises a template (the HTML for the user interface), a class (the code associated with that view), and metadata that provides additional information about the component to Angular.

What are Angular modules?

Angular modules help us organize our application. Every Angular application has at least one Angular module.

There are two types of Angular modules: the root Angular module and feature Angular modules.

Since Angular is a JavaScript library, we can use it with any language that compiles to JavaScript. The most common language choices for Angular 2 are the ES5 version of JavaScript, ES2015, TypeScript, and Dart.

What is TypeScript?

TypeScript is an open-source language and a superset of JavaScript. One of the benefits of TypeScript is strong typing, which essentially means that everything has a data type. The Angular team itself takes advantage of these benefits, using TypeScript to build Angular 2. TypeScript type definition files (*.d.ts) describe the types exposed by a JavaScript library.

When setting up our environment for an Angular 2 application, we need Node.js and its package manager, npm.

Given below are some of the files which we need to set up and configure for an Angular 2 application.

TypeScript configuration file (tsconfig.json):

This specifies the TypeScript compiler options and other settings. The TypeScript compiler (tsc) reads this file and uses it to transpile our TypeScript code to ES5 code. The sourceMap option, if enabled, generates map files, which assist with debugging. The emitDecoratorMetadata and experimentalDecorators options enable decorator support; these must be set to true, as otherwise the Angular application will not compile. The noImplicitAny option defines whether all our variables must be strongly typed or not.

We can configure this file as per our requirements:

  • TypeScript Definitions File (typings.json): This file contains a list of type definitions for the node module libraries we use. We are specifying “core-js”, which brings ES2015 capabilities to ES5 browsers. Node is used to develop and build the application and also serves it during development.
  • npm Package File (package.json): This is the most important file for a successful setup and for development-time execution. It describes the basic information of the application and its dependencies.

  • main.ts: This file bootstraps our root application module. We are using dynamic bootstrapping and the JIT compiler, which means the Angular compiler compiles the application in the browser and launches it starting from the root application module, “AppModule”.