AI adoption increases with Cloud Machine Learning as a service

The basic ingredient required for AI is a successful machine learning model. However, building and running a model calls for the right infrastructure capacity, good domain knowledge, and a large amount of data. A machine learning model is a software entity created from algorithms and training data, and its success depends on getting the right training data and carefully tuned algorithms. Running models requires substantial infrastructure with CPU, GPU, or TPU capacity, and this makes most enterprises wary of investing in AI. The capex model for AI infrastructure calls for a very large upfront investment, and with the technology developing at a fast pace, enterprises find it difficult to keep refreshing that infrastructure.

Essentially, for businesses to take advantage of AI on their own, they would need a huge capacity investment. The cloud comes to the rescue by offering infrastructure capacity with a smaller initial investment and pay-as-you-use models. In this blog, let us look at the machine learning as a service (MLaaS) offerings of four major cloud service providers: Amazon (AWS), Microsoft Azure, Google Cloud (GCP) and IBM Watson.


Amazon Machine Learning

SageMaker is Amazon's machine learning framework, with built-in algorithms for classification, regression, multi-class classification, k-means clustering and more, which help create models quickly. Apart from this, Amazon also offers large on-demand infrastructure as well as serverless processing, enabling models to run on the most optimized infrastructure. Amazon SageMaker also integrates with popular frameworks such as TensorFlow (from Google), the open-source Keras, and PyTorch (from Facebook). Amazon offers a complete MLOps capability (the equivalent of DevOps for machine learning code).
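As an illustration of how a built-in SageMaker algorithm can be trained from the Python SDK, here is a minimal sketch. The IAM role ARN, S3 paths and hyperparameters are placeholders, and a real job would point at prepared training data in S3.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder IAM role

# Resolve the registry image for the built-in XGBoost algorithm
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/models/",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# The channel below points at placeholder training data in S3
estimator.fit({"train": "s3://example-bucket/train/"})
```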

Azure Machine Learning Platform

Services from Azure Machine Learning can be described in two parts: Azure Machine Learning Studio and the Bot Service. Azure ML Studio provides graphical drag-and-drop creation of machine learning workflows, covering data exploration, pre-processing, choice of methods, and validation of modelling results. The main benefit of using Azure is the variety of algorithms available to work with: the Studio supports around 100 methods that address classification (binary and multiclass), anomaly detection, regression, recommendation, and text analysis. It is worth mentioning that the platform has one clustering algorithm (K-means).

Azure serves different kinds of customers, namely data scientists, data engineers, data analysts, and so on. Azure’s approach is to provide an end-to-end platform for all types of customers, and the product includes model management tools, Python packages and workbench tools.
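For teams working outside the Studio's drag-and-drop canvas, the Azure ML Python SDK exposes the same platform programmatically. The sketch below is illustrative only: it assumes a workspace config file downloaded from the portal, a compute cluster named cpu-cluster, and a local train.py script, none of which exist outside this example.

```python
from azureml.core import Workspace, Experiment, ScriptRunConfig

# Load the workspace from a config.json exported from the Azure portal
ws = Workspace.from_config()

# Group runs under a named experiment
experiment = Experiment(workspace=ws, name="demo-classification")

# Submit a local training script to a (hypothetical) compute cluster
config = ScriptRunConfig(
    source_directory="./src",
    script="train.py",
    compute_target="cpu-cluster",
)
run = experiment.submit(config)
run.wait_for_completion(show_output=True)
```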

Google Machine Learning Services

Google, being an AI-first company, offers a variety of AI tools for developers, enterprise operations, data scientists and others. Google recently launched AutoML, which requires no programming to develop a machine learning model. Google has been a great contributor to open source, with releases such as BERT, TensorFlow and AutoML, and among all the service providers it has made the largest contribution to open source, which in turn has improved the adoption of Google tools. Today TensorFlow is the most widely used development framework among developers, and the range of libraries available around it makes it one of the more popular choices for developing applications on the cloud. Google Cloud also offers a high-end computing environment with TPU processors along with robust data security, making it one of the most versatile platforms for development and deployment. Many cloud-native deployments are possible on Google Cloud Platform.
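To show why TensorFlow is so widely used, here is a small, self-contained Keras classifier trained on synthetic data. The data and network shape are made up purely for illustration, and the same code runs unchanged on a laptop, a GCP VM, or a TPU-backed environment.

```python
import tensorflow as tf

# Synthetic data: 1,000 samples with 20 features and a binary label
x = tf.random.normal((1000, 20))
y = tf.cast(tf.random.uniform((1000,)) > 0.5, tf.int32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32)
```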

IBM Watson

IBM Watson, one of the earliest and most widely used machine learning platforms, has been in existence for some time. It offers a set of services for newcomers as well as experienced practitioners. Separately, IBM offers a deep neural network training workflow with a flow-editor interface similar to the one used in Azure ML Studio.


Machine learning services offered by the cloud providers (a small example of invoking one such managed service follows the list):

  1. Speech and text services, translation services
  2. Image classification
  3. Text classification
  4. Speech classification
  5. Facial detection
  6. Facial analysis
  7. Celebrity recognition
  8. Written text recognition
  9. Video analysis, etc.
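To give a flavour of how these managed services are consumed, the sketch below calls Google Cloud Vision for image labelling. The image path is hypothetical, and the client assumes application credentials are already configured in the environment.

```python
# pip install google-cloud-vision
from google.cloud import vision

def label_image(path: str):
    """Return (label, confidence) pairs for an image using the managed Vision API."""
    client = vision.ImageAnnotatorClient()  # relies on GOOGLE_APPLICATION_CREDENTIALS
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, label.score) for label in response.label_annotations]

print(label_image("photo.jpg"))  # hypothetical local image
```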

In conclusion, many cloud service providers have recognized that business transformation can be brought about by the use of AI technology, and they provide machine learning as a service so that enterprises can use readily available models. This helps enterprises in the areas of prediction, personalization, natural language processing, optimization, and anomaly detection. Businesses want a competitive edge, and AI plays a key role; to enable AI quickly, the cloud is the way to go.

Cloud Optimization – The Necessary Conundrum

The cloud migration services market was valued at USD 119.13 billion in 2019 and is expected to reach USD 448.34 billion by 2025, at a CAGR of 28.89% over the forecast period 2020-2025. Over the past decade, cloud computing adoption has been rising, owing to increasing investments from small and medium enterprises. Yet about 21 billion dollars of cloud spend is wasted every year. Enterprises approach the cloud in three ways: some are public cloud-only companies, some are private-only, while others use a combination of private and public cloud called the hybrid cloud. There is a world of difference between data centre cost optimization and cloud optimization. Cloud optimization is a lever that CFOs need to pull to make sure optimal utilization is achieved. In the data centre days, the budget was predetermined; the public cloud, however, is something you pay for as you use it, and that gives you the leverage to control the cost.

1. Visibility into the IT Environment
Unlike the data centre, where budgeting is a one-time exercise at the start of the financial year, cost optimization in the cloud is a continuous exercise, and that continuous exercise creates the opportunity to reduce IT costs. The first requirement is a tool that gives complete visibility into IT resource usage on a continuous basis and its impact on the bill. All cloud service providers give this visibility, and to that extent the usage of IT resources can be tracked. Real-time spend visualization is a must.
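As a minimal sketch of programmatic spend visibility, assuming AWS credentials with Cost Explorer access, the snippet below pulls daily cost per service via boto3; the date range is a placeholder.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-01-31"},  # placeholder period
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for day in response["ResultsByTime"]:
    for group in day["Groups"]:
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(day["TimePeriod"]["Start"], group["Keys"][0], amount)
```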

2. Migration with ROI
When you perform the migration from the on-premises data centre to the cloud, have a framework for ROI based on current IT system usage and future business requirements. While the cloud avoids capex costs, the operational costs should be optimized for the best ROI possible. All cloud service providers have different sets of products. The objective is to map every task done in the data centre to an equivalent on the cloud; the way the task is done may change, but the outcome has to be the same.

3. Look for idle resources in your cloud
The cloud has multiple kinds of resources: server instances, idle load balancers, idle containers and so on. Turn off anything that is not utilized, because unused instances make up a big portion of cloud cost. All cloud service providers give a resource utilization report, and any resource not being utilized should be turned off.
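A hedged sketch of how such a report can be automated on AWS: list running EC2 instances and flag any whose average CPU over the last two weeks is below 5% as candidates to stop. The threshold and look-back window are arbitrary choices, not provider recommendations.

```python
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = datetime.utcnow()
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg_cpu = sum(p["Average"] for p in points) / len(points) if points else 0.0
        if avg_cpu < 5.0:
            print(f"{instance_id}: average CPU {avg_cpu:.1f}% over 14 days - candidate to stop")
```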

4. Try and use serverless resources
Unlike in data centres, many applications do not require exclusively reserved resources, specifically CPU and RAM. All cloud service providers give serverless options. This avoids provisioning time, and you pay only for the period the program actually uses the serverless infrastructure. AWS offers Lambda and GCP offers Cloud Functions, which help in running scheduled CRON-style jobs.
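For illustration, a minimal AWS Lambda handler for a scheduled clean-up job might look like the sketch below; the bucket name and prefix are hypothetical, and the function is billed only for the time it actually runs.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Invoked on a schedule (e.g. EventBridge); deletes temporary objects."""
    listing = s3.list_objects_v2(Bucket="example-temp-bucket", Prefix="tmp/")
    objects = listing.get("Contents", [])
    for obj in objects:
        s3.delete_object(Bucket="example-temp-bucket", Key=obj["Key"])
    return {"deleted": len(objects)}
```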

5. Reserve capacity usage
Many cloud service providers give deep discounts when we buy instances after planning the capacity. These go by different names, such as Reserved Instances in AWS. Avoid on-demand purchase of instances where usage is predictable, as it costs much more, and plan for reserved instances instead. Unutilized reserved instances are another reason why cloud spend goes up.

6. Understand the discount policy of each cloud service provider
Every cloud service provider is different, and no two instances are equal. Understand the discount policy of each service provider. As an example, GCP gives deep discounts for sustained usage of VMs: the more we use, the more discount we get from GCP.
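To make the "more usage, more discount" point concrete, here is a toy calculation of a sustained-use style discount. The tier boundaries and rates below are illustrative only and are not any provider's actual price list.

```python
def sustained_use_cost(hours_used: float, full_month_hours: float = 730.0,
                       hourly_rate: float = 0.05) -> float:
    """Toy tiered pricing: each successive quarter of the month is billed at a lower rate."""
    # (fraction of month, multiplier on the base rate) - illustrative values only
    tiers = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]
    remaining, cost = hours_used, 0.0
    for fraction, multiplier in tiers:
        tier_hours = min(remaining, fraction * full_month_hours)
        cost += tier_hours * hourly_rate * multiplier
        remaining -= tier_hours
        if remaining <= 0:
            break
    return cost

discounted = sustained_use_cost(730)   # VM running the whole month
flat = 730 * 0.05                      # same usage at the undiscounted rate
print(f"discounted: ${discounted:.2f}, flat: ${flat:.2f}, saving: {100 * (1 - discounted / flat):.0f}%")
```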

7. Look for Product-Specific Free Tier Usage
GCP has approximately 90 products and AWS has 169. It is difficult to understand each product, but the free tiers associated with a cloud service provider's products should be utilized fully.

8. Use DevOps and Automation
If you are managing a large infrastructure, the DevOps tool chain must be used to maximize operational efficiency and reduce the usage of resources. A DevOps culture should be brought in along with cloud migration to take maximum advantage of cloud resources.

9. Cloud-Native Application Development
Development organizations need to switch over to cloud-native development to take advantage of the flexibility the cloud offers for containers and container deployments. Using containers can further optimize licence and migration costs.

10. Use Multiple Cloud Service Providers
Avoid vendor lock-in with cloud service providers. Have a minimum of two vendors and ensure that applications are written so that workloads can be migrated from one vendor to another. GCP, for example, offers live migration. Use the best of each cloud service provider's world to manage costs better.

To conclude, cloud cost management is an engineering problem, not a finance or operations problem. A strong engineering team working as part of operations will enable more savings in cloud operations.

Data References:

https://info.flexera.com/SLO-CM-REPORT-State-of-the-Cloud-2020

Cloud DevOps: Ensuring Business, Tech and Security Go Hand in Hand

DevOps is an approach in which Development and Operations are intertwined as a single organization. Cloud DevOps is a newer development area, the need for which arose from agile development, automated deployment, and faster time to scale. DevOps on premises is different from cloud DevOps, because cloud DevOps requires both cloud expertise and DevOps knowledge to master. DevOps practices differ across clouds and hold great promise when there is awareness of how to handle DevOps in the cloud.

Requirements of Cloud DevOps

Cloud Expertise: The cloud is still considered a new technology, although the concept has been around for more than a decade. The tools required for DevOps, from agile tracking of development, continuous integration with new builds, and continuous delivery of code to production, to Site Reliability Engineering (monitoring the availability, performance, and fault management of infrastructure and applications), differ across cloud service providers. A cloud DevOps engineer needs knowledge of the complete cloud DevOps tool chain specifically optimized for the chosen cloud service provider.

Cloud Costing Model: Awareness of the cloud costing model is a must. The number of products from a cloud service provider is daunting; as an example, AWS has 169 products whereas GCP has 90. Many costs are hidden in nature and have to be discovered along the way. Therefore, the right experts are necessary to make sure cloud costs are optimized to the best of one's ability.

Scaling: One facet of DevOps is automation, and the requirement for automation varies across cloud service providers. As an example, with AWS a lot of third-party service providers are available to automate operations, whereas in GCP many operations are automated by default. Standardization and automation are necessary to scale the operations. Cloud-native development has become the order of the day, and many open-source tools are used to scale deployment speed. Pipelines should be defined as code so that they, too, can scale.

Security and Compliance: Code security is still an important aspect of developing code on the cloud. Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) are necessary in the cloud, and scaling security and compliance happens mostly through automation: a SAST check should run automatically with every code check-in, and a DAST check should run automatically with every build. Security is a continuous service, and public cloud service providers are enabling DevSecOps as a new practice. Application-level security checks are now reaching the levels many security professionals have been asking for. The goal of the DevSecOps practice is to introduce security earlier in the SDLC, and its objective is to make business, tech, and security work together.
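One way to automate the SAST step on every check-in is sketched below using the open-source Bandit scanner for Python code. The src directory and the tool choice are assumptions; any SAST tool can be wired into a pre-commit hook or CI stage in the same way.

```python
import subprocess
import sys

def run_sast(source_dir: str = "src") -> int:
    """Run Bandit recursively over the source tree; a non-zero exit code means findings."""
    result = subprocess.run(
        ["bandit", "-r", source_dir],  # requires `pip install bandit`
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    # Fail the pipeline (or the commit hook) when the scan reports issues
    sys.exit(run_sast())
```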

AI in the DevOps Chain: DevOps throws off a lot of data, and it is important to have complete visibility of the entire DevOps chain. One can feed this data into AIOps and derive important inferences for actionable intelligence; data on DevOps is important for optimizing the complete process. Public cloud service providers are starting to combine DevOps with AIOps. Many AI applications require DevOps by default as well, since AI development is highly iterative. While AI can help make sense of DevOps data, a DevOps practice around AI in turn delivers actionable intelligence in anomaly detection, prediction, and natural language processing. All AI applications will eventually take a DevOps approach, and the AI/ML tools the cloud offers can be used as part of the DevOps tool chain for optimization.
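As a small illustration of the AIOps idea, the sketch below flags anomalous build durations from a CI pipeline with an Isolation Forest. The durations are made-up sample data, and the contamination rate is a guess that would be tuned in practice.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Build durations in seconds collected from a CI pipeline (hypothetical data)
durations = np.array([[310], [295], [305], [300], [900], [315], [290], [1200], [305]])

model = IsolationForest(contamination=0.2, random_state=42).fit(durations)
labels = model.predict(durations)  # -1 marks an anomaly, 1 marks normal

for value, label in zip(durations.ravel(), labels):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"build duration {value:>5}s -> {status}")
```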

Conclusion

While the DevOps practice itself has delivered faster productivity, with enterprises setting up CI and CD chains, it is important to understand the cloud DevOps chain and use it effectively for business purposes. The migration from on-prem DevOps to cloud DevOps should be carefully calibrated for maximum benefit at minimal cost.

Data References:

https://www.reportsanddata.com/report-detail/devops-market
https://dzone.com/articles/devops-trends-to-watch-for-in-2020

Serverless – The New Option for Reducing IT Infrastructure Cost

The word serverless does not mean applications can run without a server; every application needs CPU and memory, because a running program is a process in execution. Rather, serverless enables applications to share the available resources in an optimal manner. Serverless implies applications written as stateless, ephemeral containers managed by a third party. Serverless was first popularized by AWS in 2014 with the launch of AWS Lambda. There are three aspects to serverless: applications/services, infrastructure, and architecture. Let us look at each of these aspects.

Why Serverless?

There are three fundamental reasons to go serverless as listed below.

1. Lower Operational Cost: This means fewer servers, fewer people to manage servers, and a clear division of labour.

2. Faster Time to Value: Usually applications or services require servers to be provisioned; with serverless, there is nothing to provision.

3. Focus on Core Value: Serverless means outsourcing part of the architecture and focusing on the core value of the application.

Perspectives of Serverless:

1. Application/services perspective: Serverless means lightweight, event-based microservices such as Google Cloud Functions. Cloud Functions are lightweight, event-driven, single-purpose functions that respond to events without a server having to be managed at any point in time. Effectively, any lightweight function that does not depend on a dedicated server can run on a serverless architecture.

2. Infrastructure for serverless: The infrastructure for serverless is totally managed by the vendor; AWS Lambda, for example, provides the serverless infrastructure. Scaling is done automatically and is triggered by events.

3. Architecture: The architecture is usually a stateless function, event-driven, and uses an API gateway as the trigger. An example of a stateless function on a website is the addition of an item to a cart (a small sketch of this follows).
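Here is a hedged sketch of that cart example as a stateless, HTTP-triggered function in the Google Cloud Functions style. The Firestore collection names are hypothetical; the point is that all state lives outside the function, so any instance can serve any request.

```python
from google.cloud import firestore

db = firestore.Client()  # assumes application default credentials are configured

def add_to_cart(request):
    """HTTP entry point; `request` is the Flask request object the platform passes in."""
    payload = request.get_json(silent=True) or {}
    cart_id = payload.get("cart_id")
    item = payload.get("item")  # expected to be a dict of item fields
    if not cart_id or not item:
        return ("cart_id and item are required", 400)
    # Persist the item under the cart document; the function itself holds no state
    db.collection("carts").document(cart_id).collection("items").add(item)
    return ({"status": "added", "cart_id": cart_id}, 200)
```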

 

Serverless Offerings

Serverless offerings come from both public and private cloud service providers. AWS offers the Lambda service for serverless computing; AWS Lambda is very popular, and workloads have shifted to Lambda where it fits the purpose. Not every service can run serverless, but whenever a workload does a single, focused job and uses compute independently, serverless becomes an option worth using. Like AWS, Microsoft Azure offers serverless compute as well, and Google Cloud provides Cloud Functions to develop and deploy APIs in the form of microservices. Serverless provides a new way of running an application as FaaS (Function as a Service).

 

Disadvantages of Serverless

1. Cold Starts: Cold starts can sometimes take quite a lot of time, anywhere from 200 ms to 600 ms.

2. Parallel Requests: Parallel requests are not handled within a single function instance, so parallelism inside the code is an issue.

3. Coding Language: The platform runtime must support the language the application is written in; Node.js has had the broadest serverless support, and not every language is equally well supported on every platform. Serverless is best suited for background jobs, API calls, batch jobs, etc.

4. Hidden Costs: The right job must be chosen for serverless, as some cloud service providers charge based on the number of requests and the usage of the API gateway, even though the cost of CPU and RAM is lower because it is shared.

5. Code Maintenance: Maintenance effort is higher on a serverless architecture.
The transformation to serverless is worth doing considering the huge cost savings available from shared CPU and RAM. At the same time, the right application must be chosen to run serverless.

Data References:

https://www.marketsandmarkets.com/Market-Reports/serverless-architecture-market-64917099.html

Next Generation Cloud Adoption: Distributed Cloud

Cloud computing is an evolving discipline, and newer innovations in cloud management keep coming to fruition. What started out as 'highly available storage space' is now integrated into every function of business. The cloud opens possibilities for customers to gain benefits and be agile with their workloads. By shifting to the cloud they leverage its economics, such as elasticity, pace of innovation and better uptimes, across everything from cloud-based scheduling and cloud-based applications to cloud-based data backup and DR. Practically everything now comes prefixed with 'cloud-based' to ensure business as usual continues uninterrupted. However, there is still a pinch of resistance and hesitation in organizations when deciding to go entirely to a public cloud model.

Some prefer private cloud, or are at most willing to adopt hybrid cloud. A private cloud is designed so that it is owned and controlled by the customer and operated by the service provider's teams or the customer's own technology team; in a hybrid cloud, the public cloud provider manages its set of cloud offerings.

Hybrid cloud was introduced to further the 'best of both worlds' objective for businesses that were not keen on completely abandoning their legacy systems in favour of a fully cloud-based IT infrastructure. It provided a sort of 'safety net', the requirement for which was triggered mostly by data security concerns. Distributed cloud does all this and more.

Distributed cloud is cloud technology's newest offering. Gartner identified distributed cloud as one of the top 10 trends of 2020, and the hype around it does not seem to be slowing down; by the look of things, it will continue well into 2021. Distributed cloud basically leverages the public cloud to interconnect IT infrastructure irrespective of physical or geographical location.

Gartner describes Distributed Cloud as “the distribution of public cloud services to different physical locations, while the operation, governance, updates and evolution of the services are the responsibility of the originating public cloud provider.”

Let's consider the scenario where a business maintains some data on-site, some on private or public cloud, and some in edge environments. Maintaining all these complex IT environments requires a degree of overhead and maintenance. There is also the issue of all of them being physically apart, not to mention delay and latency concerns. What a distributed cloud arrangement brings to the table is the ability to extend public cloud capabilities to these complex systems and manage all of a business's spread-out IT infrastructure.

Cloud computing involving distributed cloud utilizes so-called 'substations', as coined by Gartner. These tactically located substations act as shared, pseudo-availability zones with networking, computing and storage capabilities.

Hybrid Digital Infrastructure Management vs Distributed Cloud

In a way, distributed cloud management makes up for everything HDIM falls short of. This type of cloud management does not rely on a unified approach to IT infrastructure management; it rather focuses on usage consistency, customization and, most importantly, governance.

Firstly, distributed cloud raises the bar in terms of the networking capabilities of IT infrastructure clusters. Inter-communication among IT clusters, whether on-premises, on public platforms or in edge environments, is a striking feature of distributed cloud. This ensures users have consistency across the board while utilizing the IT infrastructure. Distributed cloud also reduces the chance of network failure owing to the presence of substations, something that was not possible in a hybrid cloud arrangement.

This uniformity in usage does not hinder customization in Distributed Cloud Systems. Personalization based on the pertinent requirements of a particular location is possible while using distributed cloud. This drives value for the customer as well as the system administrator.

DevOps efficiency while deploying high-value services is also augmented by distributed cloud. It gives users freedom of choice when deciding their preferred cloud clusters and locations. Integrating with public cloud features allows distributed cloud to bring innovations such as AI/ML-based automation capabilities to all IT environments.

Source: O’Reilly- Cloud Adoption in 2020

Another key characteristic of distributed cloud is its ease of governance. If any new policy is introduced at the on-site level, it is reflected on all cloud-based and edge systems as well. Data security is thus maintainable across the whole IT infrastructure, ensuring the same level of security in all IT environments regardless of whether they are cloud-based or on-site. This removes the security concerns posed by hybrid cloud.

Unifying Public Cloud and IT Infrastructure

To put it in the simplest terms, distributed cloud can bring the unique competencies offered by the public cloud to all IT infrastructure and make the experience of using cloud-based and non-cloud-based infrastructure less challenging, not to mention the reduction in cost. All this drastically reduces delays in service delivery and makes the customer-business interaction a delightful encounter.

Source: IDC 2020

But with the unification come issues such as troubleshooting complexities, due to the increased interaction between cloud and on-site environments. Data replicated across all these environments also has to be tracked and secured, so although the level of security is the same across all platforms, the intricacies of maintaining it may increase. Another factor to consider is the cost of deployment: although operational costs may drop, the resources required to deploy such distributed systems may shoot up.

Is this truly ‘The Best of Both Worlds’?

HDIM is constantly described as such, but distributed cloud systems may be the new 'best of both worlds' scenario, seeing higher adoption among businesses that require more customized offerings without compromising on security. But distributed cloud is not as tried and tested as HDIM and may only look good on paper; the deployment costs mentioned earlier may depreciate ROI. Only time will tell. But once perfected, distributed cloud systems are projected to be the future of cloud-based IT infrastructure management.

Head, Automation Practice

Data References:

https://www.oreilly.com/radar/cloud-adoption-in-2020/

https://www.idc.com/getdoc.jsp?containerId=US46796120/