Advent of Cloud Native Applications

Cloud native applications are defined as applications that are scalable and reliable by construction. The difference from non-cloud native applications is that the latter are scalable by requirement rather than by construct: construct implies that the design of the application takes care of scalability and reliability by default. In any application there are two major classes of failure: failures due to code and failures due to performance. Cloud native applications can detect run-time failures and apply mitigations on their own. They are usually container packaged, microservices oriented and dynamically orchestrated.

Application Containers: Technically the applications are container based, which enables deployment across different flavours of operating systems. The biggest benefits of containers are faster deployment, portability and cost efficiency. Containers are just processes running in your system. Unlike a VM, which provides hardware virtualization, a container provides operating-system-level virtualization by abstracting the “user space”. Containers require fewer system resources as they do not need full operating system images, which makes them a good fit for modern application development styles such as microservices.

Container-as-a-service use cases are picking up in the cloud, and many cloud-based applications are built using container as a service. Monolithic applications are decomposed into microservices using container images.

Microservices: Microservices is an architectural style in which autonomous, independently deployable services collaborate to form a broader system or application. The benefits of microservices include the following.

  1. Applications are decomposed into smaller services, and each service can be owned by a smaller team.
  2. A good microservice is one that can be thrown away and rewritten if necessary.
  3. Microservices offer upgrade flexibility: one service can run one version while another service runs a different version.
  4. Microservices are loosely coupled and scalable.
  5. Microservices improve deployment frequency, as the services are lightweight and can be deployed very frequently.
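
To make the list above concrete, here is a minimal sketch of a single-purpose microservice in Python. The service name, rate table and URL scheme are illustrative inventions, not from any real system: the point is that the business logic is one small, stateless function, and the HTTP wiring around it is thin enough that the whole service could be rewritten or thrown away independently.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative fixed rate table for this single-purpose service.
RATES = {("USD", "EUR"): 0.9, ("EUR", "USD"): 1.1}

def convert(amount: float, src: str, dst: str) -> float:
    """Pure, stateless core of the microservice -- easy to test and replace."""
    if src == dst:
        return amount
    return round(amount * RATES[(src, dst)], 2)

class Handler(BaseHTTPRequestHandler):
    """Thin HTTP layer; another team could swap this out without touching convert()."""
    def do_GET(self):
        # Expect a path like /convert/USD/EUR/100
        _, _, src, dst, amount = self.path.split("/")
        body = json.dumps({"result": convert(float(amount), src, dst)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8080):
    """Run the service standalone -- one small deployable unit."""
    HTTPServer(("", port), Handler).serve_forever()
```

Because the core logic is a pure function, the service can be versioned, tested and redeployed on its own schedule, which is exactly the loose coupling the list describes.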

Services Orchestration:

Orchestration is the process of getting all the (infrastructural) components lined up to deliver your digital service to your customers. All moving parts in an IT environment are part of orchestration: from a code change to production, everything is orchestrated. Orchestration is the heartbeat of every iteration in your delivery lifecycle. There are four steps in the IT services part of orchestration.

  1. Provision the infrastructure.
  2. Code and commit changes.
  3. Build and test the service.
  4. Deploy and run the service.
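
The four steps above can be sketched as a tiny automated pipeline. The step names and lambda bodies below are illustrative stand-ins for real provisioning, CI and deployment tooling; the shape of the code is what matters: ordered steps, each of which can halt the run on failure.

```python
from typing import Callable, List, Tuple

def run_pipeline(steps: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run orchestration steps in order, stopping at the first failure."""
    completed = []
    for name, step in steps:
        if not step():          # each step reports success or failure
            break
        completed.append(name)
    return completed

# Illustrative stand-ins for the four orchestration steps.
pipeline = [
    ("provision", lambda: True),   # 1. provision the infrastructure
    ("commit",    lambda: True),   # 2. code and commit changes
    ("build",     lambda: True),   # 3. build and test the service
    ("deploy",    lambda: True),   # 4. deploy and run the service
]
```

In a real toolchain each lambda would be replaced by a call to infrastructure-as-code, version control, CI and deployment systems, but the ordering and fail-fast behaviour stay the same.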

Cloud native applications help with all of the above steps in an automated way.

To conclude, a cloud native application development approach is critical to utilizing the cloud well. Many legacy applications are being migrated as cloud native applications to improve their scalability and availability.

AI adoption increases with Cloud Machine Learning as a service

The basic ingredient required for AI is a successful machine learning model. However, to create and run a model it is important to have the right infrastructure capacity, good domain knowledge and a large amount of data. A machine learning model is a software entity created from algorithms and training data, and its success depends on getting the right training data with precisely tuned algorithms. Running a model requires huge infrastructure with GPU/TPU/CPU capacity, and this makes most enterprises wary of investing in AI. The capex model for AI infrastructure calls for a very large investment; with the technology also developing at a fast pace, enterprises find it difficult to keep changing the infrastructure.
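
The paragraph above frames a model as "algorithms plus training data". As a purely didactic sketch (no cloud service involved, and far below the scale that needs GPU/TPU capacity), here is a one-variable linear model fitted by gradient descent in plain Python:

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training data generated from the line y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
```

The same ingredients (an algorithm, training data, and tuning such as the learning rate) scale up to the large models that MLaaS platforms host, which is where the infrastructure question becomes decisive.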

Essentially, for businesses to take advantage of AI, they must make a huge capacity investment. However, the cloud comes to the rescue by offering infrastructure capacity with a lower initial investment and pay-as-you-use models. In this blog, let us look at the offerings of four major cloud service providers for machine learning as a service (MLaaS): Amazon, Microsoft Azure, Google Cloud (GCP) and IBM Watson.


Amazon Machine Learning

SageMaker is Amazon's machine learning framework, with built-in models and algorithms for classification, regression, multi-class classification, k-means clustering and so on. SageMaker helps to create models quickly with advanced algorithms. Apart from this, Amazon also offers huge infrastructure on demand as well as serverless processing, enabling models to run on the most optimized infrastructure. Amazon SageMaker also provides hooks into open-source tools such as Google's TensorFlow, Keras and Facebook's PyTorch. A complete MLOps chain (the equivalent of DevOps for machine learning) is offered by Amazon.

Azure Machine Learning Platform

Services from Azure Machine Learning can be described two-fold: Azure Machine Learning Studio and the Bot service. Azure ML Studio provides graphical drag-and-drop creation of machine learning workflows, covering data exploration, pre-processing, choosing methods, and validating modelling results. The main benefit of using Azure is the variety of algorithms available to play with. The Studio supports around 100 methods that address classification (binary and multiclass), anomaly detection, regression, recommendation, and text analysis. It is worth mentioning that the platform has one clustering algorithm (K-means).
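
Since K-means is the one clustering algorithm called out above, here is a bare-bones one-dimensional K-means sketch in plain Python. It is illustrative only (the hosted studios wrap far more robust, multi-dimensional implementations), and the initialisation scheme assumes k is at least 2:

```python
def kmeans_1d(points, k, iterations=20):
    """Naive one-dimensional K-means (assumes k >= 2)."""
    pts = sorted(points)
    # Spread the initial centroids across the data range.
    centroids = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in pts:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

centroids = kmeans_1d([1, 2, 3, 10, 11, 12], k=2)
```

The two alternating steps (assign points, recompute means) are the whole algorithm; everything a studio adds on top is initialisation strategy, distance metrics and scale.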

Azure serves different kinds of customers, namely data scientists, data engineers, data analysts and so on. Azure's approach is to provide an end-to-end platform for all of them, and the product includes model management tools, Python packages and workbench tools.

Google Machine Learning Services

Google, being an AI-first company, offers a variety of AI tools for developers, enterprise operations, data scientists and others. Google recently started AutoML, which requires no programming to develop a machine learning model. Google has also been a great contributor to open source, releasing TensorFlow, BERT, AutoML and more. Among all the service providers Google has made the largest contribution to open source, and this in turn has improved the adoption of Google tools. Today TensorFlow is the most widely used development tool amongst developers; with libraries available from multiple open sources, it is one of the more popular frameworks for developing applications on the cloud. Google Cloud also offers a high-end computing environment with TPU processors along with robust data security, making it one of the most versatile platforms for development and deployment. Many cloud-native deployments are possible on Google Cloud Platform.

IBM Watson

IBM Watson, one of the earliest and most widely used machine learning platforms, has been in existence for some time. It offers a set of services for newcomers as well as experienced practitioners. Separately, IBM offers a deep neural network training workflow with a flow-editor interface similar to the one used in Azure ML Studio.


Machine Learning Services offered by the cloud providers

  1. Speech-to-text and translation services
  2. Image classification
  3. Text classification
  4. Speech classification
  5. Facial detection
  6. Facial analysis
  7. Celebrity recognition
  8. Written text recognition
  9. Video analysis, etc.

In conclusion, many cloud service providers have recognized that business transformation can be brought about by AI technology, and they provide machine learning as a service so that enterprises can use readily available models. This helps enterprises in the areas of prediction, personalization, natural language processing, optimization, and anomaly detection. Businesses want a competitive edge, and AI plays a key role; to enable AI quickly, cloud is the way to go.

Cloud Optimization – The Necessary Conundrum

The cloud migration services market was valued at USD 119.13 billion in 2019 and is expected to reach USD 448.34 billion by 2025, at a CAGR of 28.89% over the forecast period 2020-2025. Over the past decade, cloud computing adoption has risen, owing to increasing investments from small and medium enterprises. Yet about 21 billion dollars of cloud spend is wasted per year. Enterprises approach the cloud in three ways: some are public cloud-only companies, some are private cloud-only, while others use a combination of private and public cloud called the hybrid cloud. There is a world of difference between datacentre cost optimization and cloud optimization. Cloud optimization is a lever that CFOs need to use to make sure optimal utilization is achieved. In the datacentre era, the budget was predetermined; the public cloud, however, is something you pay for as you use, and it therefore gives you leverage to control the cost.

1. Visibility to the IT Environment
Unlike the datacentre, where budgeting is a one-time exercise at the start of the financial year, cost optimization in the cloud is a continuous exercise. This continuous exercise gives the opportunity to reduce IT costs. The first requirement is a tool that gives complete visibility into IT resource usage on a continuous basis and its impact on the billing. All cloud service providers give this visibility, and to that extent the usage of IT resources can be tracked. Real-time spend visualization is a must.

2. Migration with ROI
When you perform the migration from on-premises to the cloud, have a framework for ROI based on current IT system usage and future business requirements. While the cloud avoids capex costs, the operational costs should be optimized for the best possible ROI. All cloud service providers have different sets of products. The objective is to map every task in the datacentre to the cloud; the way the task is done can change, though the outcome has to be the same.

3. Look for Idle resources in your cloud.
The cloud has multiple resources such as server instances, load balancers and containers. Turn off anything that is not utilized: unused instances make up a big portion of cloud cost. All cloud service providers give a resource utilization report, and any resource not utilized should be turned off.
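
A sweep for idle resources can be as simple as filtering a utilization report. The report format below is made up for illustration (real providers expose this data through their billing and monitoring APIs), but the filtering logic is the whole idea:

```python
# Hypothetical utilization report rows: (resource name, avg CPU %, attached?).
report = [
    ("web-server-1",   63.0, True),
    ("old-test-vm",     0.4, True),
    ("idle-lb",         0.0, False),   # load balancer with nothing behind it
    ("batch-worker-2", 41.5, True),
]

def find_idle(report, cpu_threshold=1.0):
    """Flag resources that are unattached or nearly idle as turn-off candidates."""
    return [name for name, cpu, attached in report
            if not attached or cpu < cpu_threshold]

idle = find_idle(report)
```

Running such a sweep on a schedule, and acting on the result, turns the "turn off what you don't use" advice into a repeatable process instead of a one-time clean-up.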

4. Try and use serverless resources.
Unlike in datacentres, many applications do not require exclusive resources, specifically CPU and RAM. All cloud service providers give serverless options. This avoids provisioning time, and you pay only for the period the program uses the serverless infrastructure. AWS offers Lambda and GCP offers Cloud Functions, which help in running scheduled CRON jobs.
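
A serverless function is just a handler the platform invokes on an event. This Lambda-style sketch of a scheduled clean-up job uses an invented event shape (not a real AWS payload) and no AWS SDK, so the emphasis is on the shape of the code: no server to provision, just a function that runs when triggered.

```python
import json

def handler(event, context=None):
    """Lambda-style entry point for a scheduled (CRON-triggered) clean-up job.
    The event shape here is illustrative, not a real AWS payload."""
    items = event.get("items", [])
    expired = [i for i in items if i.get("expired")]
    # A real function would delete the expired items from a data store here.
    return {
        "statusCode": 200,
        "body": json.dumps({"cleaned": len(expired)}),
    }

result = handler({"items": [{"id": 1, "expired": True},
                            {"id": 2, "expired": False}]})
```

You are billed only while the handler runs, which is exactly why this style suits intermittent jobs better than a dedicated, always-on instance.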

5. Reserve capacity usage
Many cloud service providers give deep discounts when you buy instances after planning capacity. These go by different names, such as Reserved Instances in AWS. Avoid on-demand buying of instances, as it costs much more, and plan for reserved instances instead. Note that unutilized reserved instances are another reason why cloud spend goes up.
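
The arithmetic behind reservations is simple enough to sketch. The hourly rates below are illustrative placeholders, not actual AWS price-list numbers:

```python
HOURS_PER_MONTH = 730  # common billing approximation (365 * 24 / 12)

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """Monthly cost of one instance running continuously at the given rate."""
    return round(hourly_rate * hours, 2)

# Illustrative rates for the same instance size (not real price-list numbers).
on_demand = monthly_cost(0.10)   # $0.10/hour on demand
reserved = monthly_cost(0.06)    # $0.06/hour with a 1-year reservation
savings_pct = round(100 * (on_demand - reserved) / on_demand)
```

The same arithmetic also shows the trap mentioned above: a reserved instance that sits unused still costs its full reserved rate, so capacity planning has to come first.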

6. Understand the discount policy of each cloud service provider.
Every cloud service provider is different, and no two instances are equal. Understand the discount policy of each service provider. As an example, GCP gives deep discounts on sustained usage of VMs: the more you use, the more discount you get.
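
Sustained-use pricing of the kind mentioned above is tiered: successive portions of the month are billed at lower multipliers. The tier multipliers and rates below are illustrative (check the provider's current pricing pages for real figures), but the shape of the computation is representative:

```python
# Illustrative tiered sustained-use scheme: each successive quarter of the
# month's usage is billed at a lower rate multiplier.
TIERS = [1.00, 0.80, 0.60, 0.40]

def sustained_use_cost(base_hourly, hours, hours_in_month=730):
    """Cost with tiered sustained-use discounts (multipliers are illustrative)."""
    quarter = hours_in_month / 4
    cost = 0.0
    remaining = hours
    for multiplier in TIERS:
        in_tier = min(remaining, quarter)
        cost += in_tier * base_hourly * multiplier
        remaining -= in_tier
        if remaining <= 0:
            break
    return round(cost, 2)

full_month = sustained_use_cost(0.10, 730)   # VM runs the whole month
half_month = sustained_use_cost(0.10, 365)   # VM runs half the month
```

Under these illustrative tiers a full-month VM pays an effective 70% of the list rate, which is why understanding each provider's discount mechanics directly changes what an "equal" instance actually costs.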

7. Look for Product Specific Free Tier Usage
GCP has approximately 90 products and AWS has 169 products, so it is difficult to understand each one. The free tiers associated with a cloud service provider's products should be understood and utilized completely.

8. Use DevOps and Automation
If you are managing a large infrastructure, the DevOps toolchain must be used to maximize operational efficiency and reduce resource usage. A DevOps culture should be brought in along with cloud migration to take maximum advantage of cloud resources.

9. Cloud-Native application development
Development organizations need to switch over to cloud-native development to take advantage of the flexibility the cloud offers for containers and container deployments. Using containers can further optimize license and migration costs.

10. Use Multiple Cloud Service Providers
Avoid vendor lock-in with cloud service providers. Have a minimum of two vendors, and ensure that applications are written so workloads can be migrated from one vendor to another. (GCP, for example, offers live migration of VMs.) Use the best of each cloud service provider's world to manage costs better.

To conclude, cloud cost management is an engineering problem, not a finance or operations problem. A strong engineering team as part of operations will enable more savings for cloud operations.

Data References:

https://info.flexera.com/SLO-CM-REPORT-State-of-the-Cloud-2020

Cloud DevOps: Ensuring Business, Tech and Security go hand in Hand

DevOps is an area where development and operations are intertwined as a single organization. Cloud DevOps is a newer development, arising from the need for agile development, automated deployment, and faster time to scale. DevOps on premises is different from cloud DevOps: cloud DevOps requires both cloud expertise and DevOps knowledge to master. DevOps practices differ from cloud to cloud, and they hold great promise when the awareness to handle DevOps in the cloud is there.

Requirements of Cloud DevOps

Cloud Expertise: Cloud is still considered a new technology, although the concept has been around for more than a decade. The tools required for DevOps, from agile tracking of development and continuous integration of new builds to continuous delivery of code to production and site reliability engineering (monitoring the availability, performance and fault management of infrastructure and applications), are different for different cloud service providers. A cloud DevOps engineer needs knowledge of the complete cloud DevOps toolchain, specifically optimized for the chosen cloud service provider.

Cloud Costing Model: Awareness of the cloud costing model is a must. The number of products from a cloud service provider is daunting; as an example, AWS has 169 products whereas GCP has 90. Many costs are hidden in nature and must be discovered along the way. Therefore, the right experts are necessary to make sure cloud costs are optimized as far as possible.

Scaling: One facet of DevOps is automation, and the requirement for automation varies across cloud service providers. As an example, with AWS a lot of third-party service providers are available to automate operations, whereas in GCP many operations are automated by default. Standardization and automation are necessary to scale operations. Cloud-native development has become the order of the day, and many open-source tools are used to scale deployment speed. DevOps as code should be used to scale the pipelines.

Security and Compliance: Code security remains an important aspect of developing on the cloud. Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) are necessary in the cloud, and security and compliance scale through automation: a SAST check should run automatically with every code check-in, and a DAST check with every build. Security is a continuous service, and public cloud service providers are enabling DevSecOps as a new practice. Application-level security checks are now reaching the levels many security professionals have been asking for. The goal of the DevSecOps practice is to introduce security earlier in the SDLC and to make business, tech, and security work together.
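
As a toy illustration of "SAST on every check-in", here is a regex-based scan for hardcoded secrets that could run as a commit hook. Real SAST tools do far deeper analysis than pattern matching, and the patterns and sample source here are illustrative inventions:

```python
import re

# Illustrative patterns a pre-commit SAST-style check might look for.
SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"]\w+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"]\w+['\"]", re.IGNORECASE),
]

def scan_source(source: str):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = 'db_host = "example"\npassword = "hunter2"\n'
findings = scan_source(sample)
```

Wired into the check-in path (failing the commit or build when findings is non-empty), even a simple check like this shifts security left in exactly the way the DevSecOps practice intends.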

AI in the DevOps Chain: DevOps throws off a lot of data, and it is important to have complete visibility of the entire DevOps chain. One can feed this data into AIOps and get important inferences for actionable intelligence; data on DevOps is important for optimizing the complete process. Public cloud service providers are starting to combine DevOps with AIOps. Many AI applications require DevOps by default as well, since AI development is highly iterative. While AI can help with DevOps data, the DevOps practice in AI can help with more actionable intelligence in anomaly detection, prediction, and natural language processing. All AI applications will have a DevOps approach, and the cloud's AI/ML tools can be used as part of the DevOps toolchain for optimization.

Conclusion

While the DevOps practice itself has delivered faster productivity for enterprises that set up CI and CD chains, it is important to understand the cloud DevOps chain and use it effectively for business purposes. The migration from on-prem DevOps to cloud DevOps should be carefully calibrated for maximum benefit at minimal cost.

Data References:

https://www.reportsanddata.com/report-detail/devops-market
https://dzone.com/articles/devops-trends-to-watch-for-in-2020

Serverless – The New Option Of Reducing The IT Infrastructure Cost

The word serverless does not mean applications can run without a server; every application requires CPU and memory to run. Rather, serverless enables applications to share resources in an optimal manner: serverless applications run in stateless, ephemeral containers managed by a third party. Serverless was first launched by AWS in 2014 with AWS Lambda. There are three aspects to serverless, namely application/services, infrastructure and architecture. Let us look at each of them.

Why Serverless?

There are three fundamental reasons to go serverless as listed below.

1. Lower Operational Cost: This means fewer servers, fewer people to manage servers, and a division of labour.

2. Faster Time to Value: Usually applications or services require servers to be provisioned. With serverless, there are zero servers to provision.

3. Focus on core value: Serverless means outsourcing our architecture and focusing on the core value.

Perspectives of Serverless:

1. Application/services perspective: Serverless means lightweight, event-based microservices such as Google Cloud Functions, which are small single-purpose functions that respond to events without a server to be managed at any point in time. Effectively, any lightweight function that is not dependent on a dedicated server can run on a serverless architecture.

2. Infrastructure for Serverless: The infrastructure for serverless is totally managed by the vendor; AWS Lambda, for example, provides the serverless infrastructure. Scaling is done automatically and is triggered by events.

3. Architecture: The architecture is usually a stateless, event-driven function that uses an API gateway as its trigger. An example of a stateless function in a website is the addition of an item to a cart.
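
The cart example can be made concrete: a stateless function receives the current cart in the request, returns the new cart in the response, and remembers nothing between calls. The data shapes below are illustrative:

```python
def add_to_cart(cart: dict, item: str, qty: int = 1) -> dict:
    """Stateless cart update: all state arrives in the request and leaves
    in the response; the function itself keeps no state between calls."""
    new_cart = dict(cart)  # never mutate the caller's state
    new_cart[item] = new_cart.get(item, 0) + qty
    return new_cart

# Each call stands alone; the client (or a backing store) carries the cart.
cart = add_to_cart({}, "book")
cart = add_to_cart(cart, "pen", qty=3)
```

Because no state lives inside the function, the platform can run any invocation on any instance and scale to zero between requests, which is precisely what makes the function serverless-friendly.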

 

Serverless Offerings

Serverless offerings come from both public and private cloud service providers. AWS offers the Lambda service for serverless mode; AWS Lambda is very popular, and workloads have shifted to it where it fits the purpose. Not every service can run serverless, but any workload that is focused on a single purpose and uses compute independently is a candidate. Like AWS, Microsoft Azure offers serverless compute as well, and Google Cloud provides serverless options to develop and deploy APIs in the form of microservices. Serverless provides a new way of running an application, as FaaS (Function as a Service).

 

Disadvantages of Serverless

1. Cold Starts: Cold starts can take quite a lot of time, anywhere from 200 ms to 600 ms.

2. Parallel Requests: Parallelism inside the code is limited; a function instance typically handles one request at a time, so parallelism can be an issue.

3. Coding Language: The platform must support your language of choice, and runtime support varies by provider; Node.js, for example, is supported everywhere, while other languages may not be. Serverless is best suited for background jobs, API calls, batch jobs and the like.

4. Hidden Costs: The right job must be chosen for serverless, as some cloud service providers charge based on the number of requests and the usage of the API gateway, even though the CPU and RAM cost is lower because it is shared.

5. Code Maintenance: Maintenance effort tends to be higher on a serverless architecture.

The transformation to serverless is worth doing, considering the huge cost savings available due to shared CPU and RAM cost. At the same time, the right applications must be chosen to run serverless.
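
The hidden-cost point above can be estimated up front. The unit prices in this sketch are illustrative placeholders, not any provider's actual price list; the point is that serverless bills have three separate components (per-request, compute time, and gateway) that should be added up before migrating a workload:

```python
def serverless_monthly_cost(requests, avg_ms, memory_gb,
                            price_per_million=0.20,         # illustrative rate
                            price_per_gb_second=0.0000167,  # illustrative rate
                            gateway_per_million=1.00):      # illustrative rate
    """Estimate monthly serverless spend: per-request + compute + gateway."""
    request_cost = requests / 1_000_000 * price_per_million
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    compute_cost = gb_seconds * price_per_gb_second
    gateway_cost = requests / 1_000_000 * gateway_per_million
    return round(request_cost + compute_cost + gateway_cost, 2)

# 10M requests/month, 200 ms average duration, 512 MB of memory.
cost = serverless_monthly_cost(requests=10_000_000, avg_ms=200, memory_gb=0.5)
```

For bursty, single-purpose workloads this total is usually far below an always-on instance; for very high, steady request volumes the per-request and gateway components can dominate, which is exactly when serverless stops being the right choice.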

Data References:

https://www.marketsandmarkets.com/Market-Reports/serverless-architecture-market-64917099.html