Cloud Computing
What are some of the most effective ways to use cloud computing to achieve business goals?


Cloud computing has been credited with increasing competitiveness through cost reduction, greater flexibility, elasticity, and optimal resource utilization. Here are a few situations where cloud computing is used to enhance an organization's ability to achieve its business goals.

1. Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS)

Infrastructure-as-a-Service (IaaS) delivers fundamental compute, network, and storage resources to consumers on-demand, over the internet, and on a pay-as-you-go basis. Using existing infrastructure on a pay-per-use basis is an obvious choice for companies that want to avoid the cost of acquiring, managing, and maintaining their own IT infrastructure.

Platform-as-a-Service (PaaS) provides customers a complete platform—hardware, software, and infrastructure—for developing, running, and managing applications without the cost, complexity, and inflexibility of building and maintaining that platform on-premises. Organizations may turn to PaaS for the same reasons they look to IaaS, while also seeking to increase development speed by deploying applications on a ready-to-use platform.
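To make the pay-per-use model concrete, here is a minimal sketch of how an IaaS provisioning API is typically consumed. The endpoint, request fields, and token below are hypothetical, not any particular vendor's interface:

```python
# A minimal sketch of on-demand IaaS provisioning against a hypothetical
# provider REST API; endpoint names, fields, and the token are illustrative.
import requests

API = "https://api.example-cloud.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def provision_server(cpus: int, memory_gb: int, region: str) -> str:
    """Request a virtual server and return its ID; billing starts on creation."""
    resp = requests.post(
        f"{API}/servers",
        headers=HEADERS,
        json={"cpus": cpus, "memory_gb": memory_gb, "region": region},
    )
    resp.raise_for_status()
    return resp.json()["server_id"]

def release_server(server_id: str) -> None:
    """Release the server when it is no longer needed; billing stops here."""
    requests.delete(f"{API}/servers/{server_id}", headers=HEADERS).raise_for_status()

# Pay only while the resource exists: provision, use, release.
server_id = provision_server(cpus=4, memory_gb=16, region="us-south")
# ... run workloads ...
release_server(server_id)
```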

2. Hybrid cloud and multicloud

Hybrid cloud is a computing environment that connects a company’s on-premises private cloud services and third-party public cloud into a single, flexible infrastructure for running the organization’s applications and workloads. This mix of public and private cloud resources gives an organization the flexibility to select the optimal cloud for each application or workload and to move workloads freely between the two clouds as circumstances change, so technical and business objectives are fulfilled more effectively and cost-efficiently than they could be with public or private cloud alone.

The video "Hybrid Cloud Explained" provides a more in-depth discussion of the computing environment:

 

Multicloud takes things a step further and allows you to use two or more clouds from different cloud providers. This can be any mix of Infrastructure, Platform, or Software as a Service (IaaS, PaaS, or SaaS). With multicloud, you can decide which workload is best suited to which cloud based on your unique requirements, and you are also able to avoid vendor lock-in.

To learn more about how these options compare, see "Distributed Cloud vs. Hybrid Cloud vs. Multicloud vs. Edge Computing."

3. Test and development

One of the best scenarios for using cloud is a test and development environment. Traditionally, this entails securing a budget and setting up the environment with physical assets, significant manpower, and time. Then comes the installation and configuration of your platform. All of this can extend the time it takes to complete a project and stretch your milestones.

With cloud computing, readily available environments tailored to your needs are at your fingertips. This often includes, but is not limited to, automated provisioning of physical and virtualized resources.

4. Big data analytics

One of the benefits of cloud computing is the ability to use big data analytics to tap into vast quantities of both structured and unstructured data and extract business value from them.

Retailers and suppliers are now extracting information derived from consumers’ buying patterns to target their advertising and marketing campaigns to a particular segment of the population. Social networking platforms are now providing the basis for analytics on behavioral patterns that organizations are using to derive meaningful information.
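As a toy illustration of the buying-pattern analysis described above, the following sketch uses pandas to derive a crude targeting segment per customer. The data and column names are invented for illustration:

```python
# A toy sketch of segmenting customers by buying patterns with pandas;
# the data, column names, and segmentation rule are illustrative.
import pandas as pd

purchases = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "category":    ["sports", "sports", "home", "sports", "home"],
    "amount":      [40.0, 55.0, 120.0, 10.0, 80.0],
})

# Aggregate spend per customer and category, then pick each customer's
# dominant category as a crude targeting segment.
spend = purchases.groupby(["customer_id", "category"])["amount"].sum()
segments = spend.groupby(level="customer_id").idxmax().apply(lambda ix: ix[1])
print(segments)  # customer_id -> dominant category, e.g. 1 -> "sports"
```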

5. Cloud storage

Cloud storage lets you store your files and then access and retrieve them from any web-enabled interface. The web services interfaces are usually simple. At any time and place, you have high availability, speed, scalability, and security for your environment. In this scenario, organizations pay only for the amount of cloud storage they actually consume, without the worry of overseeing the daily maintenance of the storage infrastructure.

There is also the possibility to store the data either on- or off-premises depending on regulatory compliance requirements. Data is stored in virtualized pools of storage hosted by a third party based on the customer’s requirements.
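As an illustration of how simple those web services interfaces usually are, here is a minimal sketch that stores and retrieves a file through an S3-compatible object storage API using boto3. The endpoint, credentials, and bucket name are placeholders for whatever your provider issues:

```python
# A minimal sketch of cloud object storage through an S3-compatible API
# (boto3); endpoint, credentials, and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-cloud.com",  # provider-specific endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store: one HTTP call, accessible afterwards from any web-enabled client.
s3.put_object(Bucket="my-bucket", Key="reports/q3.pdf", Body=open("q3.pdf", "rb"))

# Retrieve: you pay for the storage consumed, not for running the infrastructure.
data = s3.get_object(Bucket="my-bucket", Key="reports/q3.pdf")["Body"].read()
```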

6. Disaster recovery

Yet another benefit of using cloud is the cost-effectiveness of a disaster recovery (DR) solution that provides faster recovery from a mesh of different physical locations, at a much lower cost than a traditional DR site with fixed assets and rigid procedures.

7. Data backup

Backing up data has always been a complex and time-consuming operation. This included maintaining a set of tapes or drives, manually collecting them, and dispatching them to a backup facility, with all the inherent problems that might occur between the originating and backup sites. This way of ensuring a backup is not immune to problems (such as running out of backup media), and restore operations, which require loading the backup devices, are slow and prone to malfunctions and human error.

Cloud-based backup, while not a panacea, is certainly a far cry from what it used to be. You can now automatically dispatch data to any location across the wire with the assurance that security, availability, and capacity are not issues.
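Here is a sketch of what automatically dispatching data across the wire can look like, reusing the S3-compatible client from the storage example above. The key scheme and retention window are illustrative assumptions:

```python
# A sketch of automated cloud backup: archive a directory, upload it under a
# timestamped key, and prune old copies. Names and retention are illustrative.
import io
import tarfile
from datetime import datetime, timedelta, timezone

def backup(s3, bucket: str, source_dir: str, keep_days: int = 30) -> None:
    # Pack the source directory into an in-memory tar.gz archive.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(source_dir)
    buf.seek(0)

    # Dispatch the archive under a timestamped key; no tapes, no couriers.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    s3.put_object(Bucket=bucket, Key=f"backups/{stamp}.tar.gz", Body=buf)

    # Prune backups older than the retention window.
    cutoff = datetime.now(timezone.utc) - timedelta(days=keep_days)
    listing = s3.list_objects_v2(Bucket=bucket, Prefix="backups/")
    for obj in listing.get("Contents", []):
        if obj["LastModified"] < cutoff:
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
```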

While the above list of cloud computing uses is not exhaustive, it certainly gives an incentive to choose the cloud over more traditional alternatives to increase IT infrastructure flexibility and to leverage big data analytics and mobile computing.

===================================================

Learn more about cloud basics

  • What is Cloud Computing?


Cloud computing transforms IT infrastructure into a utility: It lets you ‘plug into’ infrastructure via the internet, and use computing resources without installing and maintaining them on-premises.

Table of Contents

  • What is cloud computing?
  • SaaS (Software-as-a-Service)
  • PaaS (Platform-as-a-Service)
  • IaaS (Infrastructure-as-a-Service)
  • Serverless computing 
  • Public cloud
  • Private cloud
  • Hybrid cloud
  • Multicloud and hybrid multicloud
  • Cloud security
  • Cloud use cases
  • IBM Cloud

What is cloud computing?

Cloud computing is on-demand access, via the internet, to computing resources—applications, servers (physical servers and virtual servers), data storage, development tools, networking capabilities, and more—hosted at a remote data center managed by a cloud services provider (or CSP). The CSP makes these resources available for a monthly subscription fee or bills them according to usage.

Compared to traditional on-premises IT, and depending on the cloud services you select, cloud computing helps do the following:

  • Lower IT costs: Cloud lets you offload some or most of the costs and effort of purchasing, installing, configuring, and managing your own on-premises infrastructure. 
  • Improve agility and time-to-value: With cloud, your organization can start using enterprise applications in minutes, instead of waiting weeks or months for IT to respond to a request, purchase and configure supporting hardware, and install software. Cloud also lets you empower certain users—specifically developers and data scientists—to help themselves to software and support infrastructure.
  • Scale more easily and cost-effectively: Cloud provides elasticity—instead of purchasing excess capacity that sits unused during slow periods, you can scale capacity up and down in response to spikes and dips in traffic. You can also take advantage of your cloud provider’s global network to spread your applications closer to users around the world.

The term ‘cloud computing’ also refers to the technology that makes cloud work. This includes some form of virtualized IT infrastructure—servers, operating system software, networking, and other infrastructure that’s abstracted, using special software, so that it can be pooled and divided irrespective of physical hardware boundaries. For example, a single hardware server can be divided into multiple virtual servers.
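As a toy illustration of that pooling-and-dividing idea (not how a real hypervisor works internally), consider carving one physical host's resources into several virtual servers; the figures below are invented:

```python
# A toy illustration of dividing a physical server's resources among virtual
# servers. Real hypervisors do this with CPU scheduling and memory management,
# not simple arithmetic; the figures are invented.
host = {"vcpus": 64, "memory_gb": 512}
vm_requests = [
    {"name": "web-vm",   "vcpus": 8,  "memory_gb": 32},
    {"name": "db-vm",    "vcpus": 16, "memory_gb": 128},
    {"name": "batch-vm", "vcpus": 32, "memory_gb": 256},
]

placed = []
for vm in vm_requests:
    # Admit the VM only if the host still has capacity for it.
    if vm["vcpus"] <= host["vcpus"] and vm["memory_gb"] <= host["memory_gb"]:
        host["vcpus"] -= vm["vcpus"]
        host["memory_gb"] -= vm["memory_gb"]
        placed.append(vm["name"])

print(placed, "remaining:", host)  # all three VMs share one physical server
```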

Virtualization enables cloud providers to make maximum use of their data center resources. Not surprisingly, many corporations have adopted the cloud delivery model for their on-premises infrastructure so they can realize maximum utilization and cost savings vs. traditional IT infrastructure and offer the same self-service and agility to their end-users.

If you use a computer or mobile device at home or at work, you almost certainly use some form of cloud computing every day, whether it’s a cloud application like Google Gmail or Salesforce, streaming media like Netflix, or cloud file storage like Dropbox. According to a recent survey, 92% of organizations use cloud today, and most of them plan to use it more within the next year.

CLOUD COMPUTING SERVICES

SaaS (Software-as-a-Service)

SaaS—also known as cloud-based software or cloud applications—is application software that’s hosted in the cloud and that you access and use via a web browser, a dedicated desktop client, or an API that integrates with your desktop or mobile operating system. In most cases, SaaS users pay a monthly or annual subscription fee; some may offer ‘pay-as-you-go’ pricing based on your actual usage.

In addition to the cost savings, time-to-value, and scalability benefits of cloud, SaaS offers the following:

  • Automatic upgrades: With SaaS, you take advantage of new features as soon as the provider adds them, without having to orchestrate an on-premises upgrade.
  • Protection from data loss: Because your application data is in the cloud, with the application, you don’t lose data if your device crashes or breaks.

SaaS is the primary delivery model for most commercial software today—there are hundreds of thousands of SaaS solutions available, from focused industry and departmental applications to powerful enterprise database and AI (artificial intelligence) software.

PaaS (Platform-as-a-Service)

PaaS provides software developers with an on-demand platform—hardware, a complete software stack, infrastructure, and even development tools—for developing, running, and managing applications without the cost, complexity, and inflexibility of maintaining that platform on-premises.

With PaaS, the cloud provider hosts everything—servers, networks, storage, operating system software, middleware, databases—at their data center. Developers simply pick from a menu to ‘spin up’ servers and environments they need to run, build, test, deploy, maintain, update, and scale applications.

Today, PaaS is often built around containers, a virtualized compute model one step removed from virtual servers. Containers virtualize the operating system, enabling developers to package the application with only the operating system services it needs to run on any platform, without modification and without the need for middleware.

Red Hat OpenShift is a popular PaaS built around Docker containers and Kubernetes, an open source container orchestration solution that automates deployment, scaling, load balancing, and more for container-based applications.
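As a small example of the container model described above, the following sketch runs an application in a container via the Docker SDK for Python. It assumes a local Docker daemon and uses a public image purely for illustration:

```python
# A minimal sketch of running an application as a container using the Docker
# SDK for Python (pip install docker); assumes a local Docker daemon.
import docker

client = docker.from_env()

# The image bundles the app with only the OS services it needs, so the same
# artifact runs unmodified on a laptop, in a data center, or on a PaaS.
container = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    detach=True,
)
container.wait()                  # block until the process exits
print(container.logs().decode())  # -> hello from a container
container.remove()
```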

IaaS (Infrastructure-as-a-Service)

IaaS provides on-demand access to fundamental computing resources—physical and virtual servers, networking, and storage—over the internet on a pay-as-you-go basis. IaaS enables end users to scale and shrink resources on an as-needed basis, reducing the need for high, up-front capital expenditures or unnecessary on-premises or ‘owned’ infrastructure and for overbuying resources to accommodate periodic spikes in usage.

In contrast to SaaS and PaaS (and even newer computing models such as containers and serverless), IaaS provides users with the lowest-level control of computing resources in the cloud.

IaaS was the most popular cloud computing model when it emerged in the early 2010s. While it remains a leading cloud model for many types of workloads, use of SaaS and PaaS is growing at a much faster rate.

Serverless computing 

Serverless computing (also called simply serverless) is a cloud computing model that offloads all the backend infrastructure management tasks—provisioning, scaling, scheduling, patching—to the cloud provider, freeing developers to focus all their time and effort on the code and business logic specific to their applications.

What's more, serverless runs application code on a per-request basis only and scales the supporting infrastructure up and down automatically in response to the number of requests. With serverless, customers pay only for the resources being used when the application is running—they never pay for idle capacity. 

FaaS, or Function-as-a-Service, is often confused with serverless computing when, in fact, it's a subset of serverless. FaaS allows developers to execute portions of application code (called functions) in response to specific events. Everything besides the code—physical hardware, virtual machine operating system, and web server software management—is provisioned automatically by the cloud service provider in real-time as the code executes and is spun back down once the execution completes. Billing starts when execution starts and stops when execution stops.
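Here is a minimal sketch of what such a function looks like. The handler signature follows AWS Lambda's Python convention; other FaaS platforms differ only in detail:

```python
# A minimal sketch of a FaaS function in the common handler style. The
# platform provisions everything else, runs this code per event, and bills
# only for execution time.
import json

def handler(event, context):
    # 'event' carries the trigger payload, e.g. an HTTP request or queue message.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally you can simulate an invocation; in the cloud, the platform calls it.
print(handler({"name": "cloud"}, context=None))
```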

[Diagram: the types of cloud computing and who is responsible for managing each element.]

TYPES OF CLOUD COMPUTING

Public cloud

Public cloud is a type of cloud computing in which a cloud service provider makes computing resources—anything from SaaS applications, to individual virtual machines (VMs), to bare metal computing hardware, to complete enterprise-grade infrastructures and development platforms—available to users over the public internet. These resources might be accessible for free, or access might be sold according to subscription-based or pay-per-usage pricing models.

The public cloud provider owns, manages, and assumes all responsibility for the data centers, hardware, and infrastructure on which its customers’ workloads run, and it typically provides high-bandwidth network connectivity to ensure high performance and rapid access to applications and data. 

Public cloud is a multi-tenant environment—the cloud provider's data center infrastructure is shared by all public cloud customers. In the leading public clouds—Amazon Web Services (AWS), Google Cloud, IBM Cloud, Microsoft Azure, and Oracle Cloud—those customers can number in the millions.

The global market for public cloud computing has grown rapidly over the past few years, and analysts forecast that this trend will continue; industry analyst Gartner predicts that worldwide public cloud revenues will exceed $330 billion by the end of 2022.

Many enterprises are moving portions of their computing infrastructure to the public cloud because public cloud services are elastic and readily scalable, flexibly adjusting to meet changing workload demands. Others are attracted by the promise of greater efficiency and fewer wasted resources since customers pay only for what they use. Still others seek to reduce spending on hardware and on-premises infrastructures.

Private cloud

Private cloud is a cloud environment in which all cloud infrastructure and computing resources are dedicated to, and accessible by, one customer only. Private cloud combines many of the benefits of cloud computing—including elasticity, scalability, and ease of service delivery—with the access control, security, and resource customization of on-premises infrastructure.

A private cloud is typically hosted on-premises in the customer's data center. But a private cloud can also be hosted on an independent cloud provider’s infrastructure or built on rented infrastructure housed in an offsite data center.

Many companies choose private cloud over public cloud because private cloud is an easier way (or the only way) to meet their regulatory compliance requirements. Others choose private cloud because their workloads deal with confidential documents, intellectual property, personally identifiable information (PII), medical records, financial data, or other sensitive data.

By building private cloud architecture according to cloud native principles, an organization gives itself the flexibility to move workloads easily to public cloud or to run them within a hybrid cloud environment (see below) whenever it’s ready.

Hybrid cloud

Hybrid cloud is just what it sounds like—a combination of public and private cloud environments. Specifically, and ideally, a hybrid cloud connects an organization's private cloud services and public clouds into a single, flexible infrastructure for running the organization’s applications and workloads.

The goal of hybrid cloud is to establish a mix of public and private cloud resources—with a level of orchestration between them—that gives an organization the flexibility to choose the optimal cloud for each application or workload and to move workloads freely between the two clouds as circumstances change. This enables the organization to meet its technical and business objectives more effectively and cost-efficiently than it could with public or private cloud alone.

Multicloud and hybrid multicloud

Multicloud is the use of two or more clouds from two or more different cloud providers. Having a multicloud environment can be as simple as using email SaaS from one vendor and image-editing SaaS from another. But when enterprises talk about multicloud, they're typically talking about using multiple cloud services—including SaaS, PaaS, and IaaS services—from two or more of the leading public cloud providers. In one survey, 85% of organizations reported using multicloud environments.

Hybrid multicloud is the use of two or more public clouds together with a private cloud environment. 

Organizations choose multicloud to avoid vendor lock-in, to have more services to choose from, and to access more innovation. But the more clouds you use—each with its own set of management tools, data transmission rates, and security protocols—the more difficult it can be to manage your environment. Multicloud management platforms provide visibility across multiple provider clouds through a central dashboard where development teams can see their projects and deployments, operations teams can keep an eye on clusters and nodes, and the cybersecurity staff can monitor for threats.

[Diagram: the four types of cloud computing.]

Cloud security

Traditionally, security concerns have been the primary obstacle for organizations considering cloud services, particularly public cloud services. In response to demand, however, the security offered by cloud service providers is steadily outstripping on-premises security solutions.

According to security software provider McAfee, today, 52% of companies experience better security in the cloud than on-premises. And Gartner has predicted that by this year (2020), infrastructure as a service (IaaS) cloud workloads will experience 60% fewer security incidents than those in traditional data centers.

Nevertheless, maintaining cloud security demands different procedures and employee skillsets than in legacy IT environments. Some cloud security best practices include the following:

  • Shared responsibility for security: Generally, the cloud provider is responsible for securing cloud infrastructure and the customer is responsible for protecting its data within the cloud—but it's also important to clearly define data ownership between private and public third parties.
  • Data encryption: Data should be encrypted at rest, in transit, and in use. Customers need to maintain full control over security keys and hardware security modules (a minimal encryption sketch follows this list).
  • User identity and access management: Customer and IT teams need full understanding of and visibility into network, device, application, and data access.
  • Collaborative management: Proper communication and clear, understandable processes between IT, operations, and security teams will ensure seamless cloud integrations that are secure and sustainable.
  • Security and compliance monitoring: This begins with understanding all regulatory compliance standards applicable to your industry and setting up active monitoring of all connected systems and cloud-based services to maintain visibility of all data exchanges between public, private, and hybrid cloud environments.
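To illustrate the data-encryption practice above, here is a minimal sketch using the Python cryptography package. In production the key would be held in a key management service or hardware security module under the customer's control, never in application code:

```python
# A minimal sketch of encrypting data at rest using the 'cryptography'
# package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # customer-controlled secret; store it in a KMS/HSM
f = Fernet(key)

token = f.encrypt(b"account: 42, balance: 1000")   # ciphertext safe to store
plaintext = f.decrypt(token)                        # requires the key
assert plaintext == b"account: 42, balance: 1000"
```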

Cloud use cases

With 25% of organizations planning to move all their applications to cloud within the next year, it would seem that cloud computing use cases are limitless. But even for companies not planning a wholesale shift to the cloud, certain initiatives and cloud computing are a match made in IT heaven.

Disaster recovery and business continuity have always been a natural fit for cloud because cloud provides cost-effective redundancy to protect data against system failures, as well as the physical distance required to recover data and applications in the event of a local outage or disaster. All of the major public cloud providers offer Disaster-Recovery-as-a-Service (DRaaS).

Anything that involves storing and processing huge volumes of data at high speeds—and requires more storage and computing capacity than most organizations can or want to purchase and deploy on-premises—is a target for cloud computing. Examples include:

  • Big data analytics
  • Internet of Things (IoT)
  • Artificial intelligence—particularly machine learning and deep learning applications

For development teams adopting Agile or DevOps (or DevSecOps) to streamline development, cloud offers the on-demand end-user self-service that keeps operations tasks—such as spinning up development and test servers—from becoming development bottlenecks. 

IBM Cloud

IBM Cloud offers the most open and secure public cloud platform for business, a next-generation hybrid multicloud platform, advanced data and AI capabilities, and deep enterprise expertise across 20 industries. IBM Cloud hybrid cloud solutions deliver flexibility and portability for both applications and data. Linux®, Kubernetes, and containers support this hybrid cloud stack and combine with Red Hat® OpenShift® to create a common platform connecting on-premises and cloud resources.

Learn how IBM Cloud solutions can help your organization with the following:

  • Modernize existing applications
  • Build and scale cloud native applications
  • Migrate existing on-premises workloads to the cloud
  • Speed software and services delivery with DevOps
  • Integrate applications and data across multiple clouds
  • Accelerate your journey to artificial intelligence
  • Leverage 5G and edge computing

 

===============================================================

How does cloud computing work?

I was challenged to describe how cloud computing works in 500 words. It was such an interesting challenge that I had to take it.

First, you have to know what cloud computing is to understand the advantages of this new way of providing computing resources in the cloud. Second, you have to understand the different types of cloud offerings, including infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and business process as a service (BPaaS). Each service is built on top of the one below it.

Now, how does it work? The Cloud Computing Reference Architecture (CCRA) is a great place to start. I don’t mean that the CCRA is the Holy Grail and should always be fully applied, but it gives you material to design your own solution and understand the architecture. You can find some questions and answers in the article “What is CCRA?”

The CCRA defines multiple components, and each component fulfills a given functionality.


The first building block is the infrastructure where the cloud will be implemented. Some people assume that this environment should be virtualized, but cloud is a way to request resources on-demand; if you can provide those resources on bare metal, then why not? The infrastructure will support the different types of cloud (IaaS, PaaS, SaaS, BPaaS).

(Related: Cloud computing basics)

To provide these services you will need Operational Support Services (OSS), which are in charge of deploying the requested service, and Business Support Services (BSS), mainly used to validate the request and create the invoice for the requested services. Almost any metric can be used to create the invoice (for example, number of users, number of CPUs, memory, usage hours per month). It is very flexible and depends on the service provider.
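As a toy sketch of the BSS metering role just described, the following rates each usage metric and totals the invoice for a billing period; the metrics and unit prices are invented for illustration:

```python
# A toy sketch of BSS-style metering and invoicing; metrics and prices are
# illustrative, not any provider's rate card.
PRICE = {"vcpu_hours": 0.04, "memory_gb_hours": 0.005, "users": 2.00}

def invoice(usage: dict) -> float:
    """Sum metered usage times unit price for the billing period."""
    return sum(PRICE[metric] * quantity for metric, quantity in usage.items())

# Two vCPUs and 8 GB of memory running all month (~730 hours), five users.
monthly_usage = {"vcpu_hours": 2 * 730, "memory_gb_hours": 8 * 730, "users": 5}
print(f"amount due: ${invoice(monthly_usage):.2f}")  # 58.40 + 29.20 + 10.00 = 97.60
```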

A cloud computing environment will also need to provide interfaces and tools for the service creators and users. This is the role of the Cloud Service Creator and Cloud Service Consumer components.

Now, let’s see how it works in reality.

Generally, you log in to a portal (enterprise or public) and order your services through the Cloud Service Consumer. Such a service has been created by the cloud service provider and can be a simple virtual machine (VM) based on an image, some network components, an application service such as a web app environment, or a managed service such as MongoDB. It depends on the provider and the type of resources and services.

The cloud provider will validate your request through the BSS and, if the validation is okay (credit card, contract), provision the request through the OSS.

You will receive, in one way or another, the credentials to access your requested services, and you will usually receive a monthly invoice for your consumption.

 

===============================================================

Cloud computing basics

Cloud has often been used as a metaphor for the Internet in network diagrams. Cloud computing is a new IT delivery model accessed over the network (Internet or intranet). It was definitely not formed in one day by a “Big Bang.” This revolutionary style of computing emerged from evolutionary changes, maturity, development, and advancements in technology over the last 50 years. Readers may be interested in my blog post on the evolution of cloud computing.

In this post, I will present the very essentials, attributes, differentiators and benefits of cloud computing that a beginner needs to know.

From a plethora of cloud definitions online, I prefer to use the definition by the National Institute of Standards and Technology (NIST). According to NIST:

“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

The model demonstrates five essential characteristics, three service models, and four deployment models of cloud.

The five characteristics that define cloud computing are:

1. On-demand self-service. This means provisioning or de-provisioning computing resources as needed in an automated fashion without human intervention. An analogy to this is electricity as a utility where a consumer can turn on or off a switch on-demand to use as much electricity as required.

2. Ubiquitous network access. This means that computing facilities can be accessed from anywhere over the network using any sort of thin or thick client (for example, smartphones, tablets, laptops, personal computers and so on).

3. Resource pooling. This means that computing resources are pooled to meet the demand of the consumers so that resources (physical or virtual) can be dynamically assigned, reassigned or de-allocated as per the requirement. Generally the consumers are not aware of the exact location of computing resources. However, they may be able to specify location (country, city, region and the like) for their need. For example, I as a consumer might want to host my services with a cloud provider that has cloud data centers within the boundaries of Australia.

4. Rapid elasticity. Cloud computing provides an illusion of infinite computing resources to the users. In cloud models, resources can be elastically provisioned or released according to demand. For example, my cloud-based online services should be able to handle a sudden peak in traffic demand by expanding the resources elastically. When the peak subsides, unnecessary resources can be released automatically.

5. Measured service. This means that consumers only pay for the computing resources they have used. This concept is similar to utilities like water or electricity.
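A toy sketch tying rapid elasticity to measured service: compute how many instances a given traffic level needs, and release the rest when the peak subsides. The capacity figures are illustrative:

```python
# A toy autoscaling sketch: size capacity to demand (rapid elasticity) and
# stop paying for released instances (measured service). Figures are invented.
import math

CAPACITY_PER_INSTANCE = 500   # requests/second one instance can serve
MIN_INSTANCES = 2             # baseline kept for availability

def desired_instances(requests_per_second: float) -> int:
    return max(MIN_INSTANCES, math.ceil(requests_per_second / CAPACITY_PER_INSTANCE))

for load in (300, 4200, 800):            # quiet, peak, post-peak
    print(load, "->", desired_instances(load), "instances")
# Scaling down after the peak releases resources; under measured service,
# the meter stops for them immediately.
```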

Three main service models of cloud computing are:

1. Software as a service (SaaS). Applications hosted by a provider on a cloud infrastructure are accessed from thin or thick clients over the network or through a program interface (for example, web services). Examples are Google Docs, IBM SmartCloud Docs, IBM SmartCloud Meetings, Salesforce.com’s CRM application and so on.

(Related: Top five advantages of software as a service (SaaS))

2. Platform as a service (PaaS). Providers deliver not only infrastructure but also middleware (databases, messaging engines and so on) and solution stacks for application build, development, and deployment. IBM SmartCloud Application Services and Google App Engine are two examples of PaaS.

3. Infrastructure as a service (IaaS). It is the delivery of computing infrastructure as a service. IBM SmartCloud Enterprise+, SoftLayer cloud and Amazon EC2 are some examples of IaaS.

(Related: Three key advantages of using SoftLayer for cloud deployment)

There are other services emanating from these main services. Storage as a service (STaaS) and communications as a service (CaaS) are two such variants.

Now let’s look at the cloud deployment models.

Public cloud. This is where computing resources provided by a cloud provider are used by different organizations through the public Internet on a pay-as-you-go (PAYG) model. Cloud providers ensure some sort of separation between the resources used by different organizations; this is known as multitenancy.

Private cloud. This is where cloud infrastructure is solely owned by an organization and maintained either by this organization or a third party and can be located on site or off-site. Computing resources are behind the corporate firewall.

Community cloud. Here, cloud infrastructure is owned and shared by multiple organizations with a shared concern.

Hybrid cloud. It is the combination of any type of cloud model mentioned above connected by standardized or proprietary technology.

Companies stopped generating their own electricity using steam engines and dynamos as the electric grid provided a better means of getting electricity. Similarly, organizations these days can rely on the cloud service providers to get computing resources on-demand and in an automated fashion. Organizations only pay for the resources they have used and relinquish unnecessary resources using a self-service portal. This eliminates the need for expensive capital investment. However, cloud is not just about lowering the cost.

The benefits of cloud computing are agility, scalability and economies of scale, sustainability, reliability, faster time to market and developing prototypes with increased efficiency. You can look at the series of my blog posts highlighting the benefits of cloud computing for more information.

Cloud computing has created a reverberation in the IT landscape. The cloud adoption rate is increasing dramatically. It is high time for organizations to consider cloud computing for the turbulent business challenges ahead. What are your thoughts? Share them in the comments below or connect with me on Twitter.

==================================================================


What is Platform-as-a-Service (PaaS)?

    An introduction to PaaS, a cloud-based computing model that allows development teams to build, test, deploy, manage, update, and scale applications faster and more cost-effectively.

    Table of Contents

    • What is PaaS (Platform-as-a-Service)?
    • Benefits
    • PaaS, IaaS, and SaaS
    • Use cases
    • AIPaaS
    • Open source PaaS and Kubernetes
    • Cloud native and PaaS
    • PaaS and IBM Cloud

    What is PaaS (Platform-as-a-Service)?

    PaaS, or Platform-as-a-Service, is a cloud computing model that provides customers a complete platform—hardware, software, and infrastructure—for developing, running, and managing applications without the cost, complexity, and inflexibility of building and maintaining that platform on-premises.

    The PaaS provider hosts everything—servers, networks, storage, operating system software, databases—at their data center; the customer uses it all for a monthly fee based on usage and can purchase more resources on-demand as needed. In this way, PaaS lets your development teams build, test, deploy, maintain, update, and scale applications (and innovate in response to market opportunities and threats) much more quickly and less expensively than they could if you had to build out and manage your own on-premises platform.

    See our video "PaaS Explained" for a closer look at the model:

    Benefits

    The following are some specific advantages your organization can realize from utilizing PaaS:

    • Faster time to market: With PaaS, there’s no need to purchase and install the hardware and software you’ll use to build and maintain your application development platform and no need for development teams to wait while you do this. You simply tap into the cloud service provider’s PaaS resources and begin developing immediately.
    • Faster, easier, less-risky adoption of a wider range of resources: PaaS platforms typically include access to a greater variety of choices up and down the application development stack—operating systems, middleware, and databases, and tools such as code libraries and app components—than you can affordably or practically maintain on-premises. It also lets you test new operating systems, languages, and tools without risk—that is, without having to invest in the infrastructure required to run them.
    • Easy, cost-effective scalability: If an application developed and hosted on-premises starts getting more traffic, you’ll need to purchase more computing, storage, and even network hardware to meet the demand, which you may not be able to do quickly enough and which can be wasteful (since you typically purchase more than you need). With PaaS, you can scale on-demand by purchasing just the amount of additional capacity you need.
    • Lower costs: Because there’s no infrastructure to build, your upfront costs are lower. Costs are also lower and more predictable because most PaaS providers charge customers based on usage.

    PaaS, IaaS, and SaaS

    IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), and SaaS (Software-as-a-Service) are the three most common models of cloud services, and it’s not uncommon for an organization to use all three. However, there is often confusion among the three and about what’s included with each:

    • With IaaS, your cloud provider offers access to ‘raw’ computing resources, such as servers, storage, and networking, but you’re responsible for the platform and application software.
    • With PaaS, your provider delivers and manages the entire platform infrastructure; you are abstracted from the lower-level details of the environment, and you use the platform to develop and deploy your applications.
    • SaaS is software you use via the cloud, as if it were installed on your computer (and parts of it may, in fact, be installed on your computer). SaaS applications reside on the cloud network, and users can store and analyze data and collaborate on projects through the application.

    Use cases

    There are many use cases for PaaS, including the following popular application-based ones:

    • API development and management: You can use PaaS to develop, run, manage, and secure application programming interfaces (APIs) and microservices.
    • Internet of Things (IoT): PaaS can support the broad range of application environments, programming languages, and tools used for IoT deployments.
    • Business analytics/intelligence: PaaS tools allow you to analyze your data to find business insights that enable more informed business decisions and predictions.

    In the end, the most common use case for PaaS is strategic—namely, to build, test, deploy, run, enhance, and scale applications more rapidly and cost-effectively. For example, Australia-based UBank wanted to innovate by creating a simpler, smarter customer experience without hiring additional development personnel. Using PaaS, UBank was able to create RoboChat, a chatbot that helps customers complete their online home loan applications, in just two months from concept to production.

    Customers who use RoboChat have a 15% higher home loan completion rate than those who don’t. And UBank continues to use PaaS to pilot new ideas; if a project doesn’t work out, UBank can quickly scrap it and move on to a new project.

    AIPaaS

    For a few years now, cloud service providers have offered Artificial-Intelligence-as-a-Service (AIaaS), which lets you ‘tap into’ individual AI capabilities without investing in the supporting infrastructure required to run them or the in-house expertise required to operate, manage, and maintain them.

    But the leading cloud service providers have introduced comprehensive AI-Platform-as-a-Service (AIPaaS) offerings that include a platform for delivering AI-enriched applications. Typically, AIPaaS includes infrastructure and data storage hardened to provide the computing power and voluminous storage AI requires, pretrained machine learning models you can use as-is or customize, and APIs for integrating specific AI capabilities (such as facial recognition or text-to-speech conversion) into your application. 

    Open source PaaS and Kubernetes

    An open source PaaS allows developers and users to contribute and share source code and extensions. Cloud Foundry and OpenShift are two popular open source PaaS platforms.

    Cloud Foundry allows you to deploy and run apps on your own computing infrastructure or by using a PaaS deployed by a commercial Cloud Foundry cloud provider. A broad vendor community contributes to and supports Cloud Foundry. OpenShift is Red Hat’s cloud computing PaaS offering. OpenShift is built around Docker containers orchestrated and managed by Kubernetes on a Red Hat Enterprise Linux foundation.

    There is often confusion about whether Kubernetes is a PaaS. Kubernetes is an open source container orchestration tool that is critical to managing cloud applications. It provides some features common to PaaS offerings, such as deployment, scaling, load balancing, logging, and monitoring, but it is not a traditional, all-inclusive PaaS.
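    As a small example of the scaling feature mentioned above, this sketch uses the official Kubernetes Python client; it assumes a reachable cluster in your kubeconfig and a Deployment named "web", both of which are illustrative.

    ```python
    # A minimal sketch of driving Kubernetes scaling from the official Python
    # client (pip install kubernetes); cluster and Deployment name are assumed.
    from kubernetes import client, config

    config.load_kube_config()     # reads credentials, e.g. from ~/.kube/config
    apps = client.AppsV1Api()

    # Ask the orchestrator for five replicas; Kubernetes handles placement,
    # restarts, and load balancing across them.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )
    ```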

    For more information on Kubernetes, see the video “Kubernetes Explained”:

    Cloud native and PaaS

    PaaS is experiencing strong growth. Gartner predicts the PaaS market will have doubled in size between 2018 and 2022 and that it will become the prevailing platform delivery model moving forward.

    At the same time, “cloud native” apps are becoming pervasive. Cloud native refers to an application designed to run in a cloud environment that automatically reflects the key elements of cloud (agility, scalability, etc.).

    Cloud native is less about where an application resides and more about how it is built and deployed, but cloud native apps generally follow a microservices architecture and make use of such cloud technologies as containers. Because cloud native applications typically include new and modern methods of application development, using a PaaS model makes developing cloud native apps much easier, and allows you to innovate much faster.

    See the following video for a deeper dive on cloud native applications:

    PaaS and IBM Cloud

    IBM provides a rich and scalable PaaS for developing cloud native applications from scratch or modernizing existing applications to benefit from the flexibility and scalability of the cloud. Services include IBM Cloud Kubernetes Service, a fully-managed container orchestration solution; Red Hat OpenShift on IBM Cloud; IBM Cloudant and IBM Cloud Databases for PostgreSQL; and much more.

    IBM Cloud Foundry is IBM’s version of the open source PaaS for building, testing, deploying, and scaling applications. And the IBM Watson platform lets you deploy AI applications wherever your data resides—on any cloud or on your own private cloud platform.

    To get started with PaaS on IBM Cloud, sign up for an IBMid and create your IBM Cloud account.

=====================================================

What is Infrastructure-as-a-Service (IaaS)?

 


An introduction to IaaS (Infrastructure-as-a-Service), its components, advantages, pricing, and how it relates to PaaS, SaaS, BMaaS, containers, and serverless.

Table of Contents

  • What is IaaS (Infrastructure-as-a-Service)?
  • IaaS platform and architecture
  • BMaaS vs. IaaS
  • Data centers, availability zones, and regions
  • Virtual Private Cloud and IaaS
  • Pricing
  • Advantages
  • Typical use cases
  • IaaS vs. PaaS vs. SaaS
  • IaaS vs. containers vs. serverless
  • IaaS and IBM Cloud

What is IaaS (Infrastructure-as-a-Service)?

Infrastructure-as-a-Service, commonly referred to as simply “IaaS,” is a form of cloud computing that delivers fundamental compute, network, and storage resources to consumers on-demand, over the internet, and on a pay-as-you-go basis. IaaS enables end users to scale and shrink resources on an as-needed basis, reducing the need for high, up-front capital expenditures or unnecessary “owned” infrastructure, especially in the case of “spiky” workloads. In contrast to PaaS and SaaS (and even newer computing models like containers and serverless), IaaS provides the lowest-level control of resources in the cloud.

IaaS emerged as a popular computing model in the early 2010s, and since that time, it has become the standard abstraction model for many types of workloads. However, with the advent of new technologies, such as containers and serverless, and the related rise of the microservices application pattern, IaaS remains foundational but is in a more crowded field than ever.

In the following video, Bradley Knapp breaks down the basics of IaaS:

IaaS platform and architecture

IaaS is made up of a collection of physical and virtualized resources that provide consumers with the basic building blocks needed to run applications and workloads in the cloud.

  • Physical data centers: IaaS providers will manage large data centers, typically around the world, that contain the physical machines required to power the various layers of abstraction on top of them and that are made available to end users over the web. In most IaaS models, end users do not interact directly with the physical infrastructure, but it is provided as a service to them.
  • Compute: IaaS is typically understood as virtualized compute resources, so for the purposes of this article, we will define IaaS compute as a virtual machine. Providers manage the hypervisors and end users can then programmatically provision virtual “instances” with desired amounts of compute and memory (and sometimes storage). Most providers offer both CPUs and GPUs for different types of workloads. Cloud compute also typically comes paired with supporting services like auto scaling and load balancing that provide the scale and performance characteristics that make cloud desirable in the first place.
  • Network: Networking in the cloud is a form of software-defined networking in which traditional networking hardware, such as routers and switches, is made available programmatically, typically through APIs. More advanced networking use cases involve the construction of multi-zone regions and virtual private clouds, both of which are discussed in more detail later.
  • Storage: The three primary types of cloud storage are block storage, file storage, and object storage. Block and file storage are common in traditional data centers but can often struggle with the scale, performance, and distributed characteristics of cloud. Of the three, object storage has thus become the most common mode of storage in the cloud, given that it is highly distributed (and thus resilient), it leverages commodity hardware, data can be accessed easily over HTTP, and scale is essentially limitless, with performance scaling linearly as the cluster grows.

BMaaS vs. IaaS

Bare-metal-as-a-Service (BMaaS) provides an even lower level of control than traditional IaaS. In a BMaaS environment, resources are still provisioned on-demand, made available over the internet, and billed on a pay-as-you-go basis (typically in monthly or hourly increments).

Unlike traditional IaaS, BMaaS does not provide end users with already virtualized compute, network, and storage; instead, it gives direct access to the underlying hardware. This level of access offers end users almost total control of their hardware specs. Given the hardware is neither virtualized nor supporting multiple virtual machines, it also offers end users the greatest amount of potential performance, something of significant value for use cases like HPC and GPU computing, high-performance databases, analytics workloads, and more.

For end users familiar with operating in traditional data centers, BMaaS environments will also feel the most familiar and may best map to the architecture patterns of existing workloads.

However, these advantages can also come at the expense of the benefits of traditional IaaS, namely the ability to rapidly provision and horizontally scale resources by simply making copies of instances and load balancing across them.

When it comes to BMaaS vs. IaaS, one model is not superior to the other—it’s all about what model best supports the specific use case or workload.


Data centers, availability zones, and regions

To promote greater availability and resiliency of resources, most cloud providers today offer a hierarchy around how workloads map to physical and virtual infrastructure as well as geography.

As an example, IBM Cloud has availability zones and regions. These two terms are defined as follows:

  • IBM Cloud Region: A region is a geographically and physically separate group of one or more availability zones with independent electrical and network infrastructures isolated from other regions. Regions are designed to remove shared single points of failure with other regions and guarantee low inter-zone latency within the region.
  • IBM Cloud Availability Zone: An availability zone is a logically and physically isolated location within an IBM Cloud Region with independent power, cooling, and network infrastructures isolated from other zones. This strengthens fault tolerance by avoiding single points of failure between zones while also guaranteeing high bandwidth and low inter-zone latency within a region (a sketch of spreading replicas across zones follows this list).
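Here is the toy sketch referenced above: spreading replicas round-robin across zones so that a single zone failure cannot take out every copy. The zone names are illustrative:

```python
# A toy sketch of zone-aware placement: distribute replicas round-robin
# across a region's availability zones. Zone names are illustrative.
from itertools import cycle

zones = ["us-south-1", "us-south-2", "us-south-3"]  # one region, three zones

def place_replicas(n: int) -> dict:
    placement = {z: [] for z in zones}
    for i, zone in zip(range(n), cycle(zones)):
        placement[zone].append(f"replica-{i}")
    return placement

# Five replicas land in three zones; losing one zone leaves copies running.
print(place_replicas(5))
```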

Virtual Private Cloud and IaaS

For many end users, particularly companies with sensitive data or strict compliance requirements, additional security and privacy within a public cloud is desirable. A virtual private cloud (VPC) can be a way of creating additional isolation of cloud infrastructure resources without sacrificing speed, scale, or functionality.

VPCs enable end users to create a private network for a single tenant in a public cloud. They give users control of subnet creation, IP address range selection, virtual firewalls, security groups, network ACLs, site-to-site virtual private networks (VPNs), and load balancing.
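As a minimal illustration of the subnet and IP-range control just described, this sketch uses only Python's standard ipaddress module; the address ranges are illustrative:

```python
# A minimal sketch of VPC-style address planning with the standard library:
# pick a private CIDR for the VPC and carve per-tier subnets from it.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")      # the VPC's private range

# Carve /24 subnets (256 addresses each) for different tiers or zones.
subnets = list(vpc.subnets(new_prefix=24))
web, app, db = subnets[0], subnets[1], subnets[2]
print(web, app, db)   # 10.0.0.0/24 10.0.1.0/24 10.0.2.0/24

# A membership check of the kind a virtual firewall or ACL rule performs.
print(ipaddress.ip_address("10.0.1.17") in app)    # True
```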

In the following video, Ryan Sumner explains VPCs in more detail:

Pricing

IaaS is typically priced on a consumption basis, meaning users are only charged for what they use. Over time, the pricing models of cloud infrastructure have come to span many different levels of granularity (a rough comparison follows the list below):

  • Subscriptions and reserved instances: Many providers offer discounts off the sticker price for clients willing to commit to longer contract terms, typically around one to three years.  
  • Monthly billing: Monthly billing models are most common in the BMaaS market, where physical infrastructure typically implies steady state workloads without spiky characteristics.
  • By the hour/second: This is the most common granularity for traditional cloud infrastructure; end users are charged only for what they use.
  • Transient/spot: Some providers will offer up unused capacity at a discount via transient/spot instances, but those instances can be reclaimed if the capacity is needed.
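Here is the rough comparison referenced above, applying illustrative (not any provider's) rates to a single workload:

```python
# A rough comparison of the pricing granularities above for one workload;
# all rates and discounts are illustrative.
HOURS_PER_MONTH = 730
on_demand_rate = 0.10                    # $/hour, billed by the hour
reserved_discount = 0.35                 # e.g. a 1-year commitment
spot_discount = 0.70                     # reclaimable transient capacity

steady = HOURS_PER_MONTH * on_demand_rate          # run all month: $73.00
reserved = steady * (1 - reserved_discount)        # commit up front: $47.45
spot = steady * (1 - spot_discount)                # $21.90, may be interrupted
bursty = 200 * on_demand_rate                      # run 200 h, pay $20.00

print(f"on-demand ${steady:.2f}, reserved ${reserved:.2f}, "
      f"spot ${spot:.2f}, bursty on-demand ${bursty:.2f}")
```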

Advantages

Taken together, there are many reasons why someone would see cloud infrastructure as a potential fit:

  • Pay-as-you-go: Unlike traditional IT, IaaS does not require any upfront capital expenditures, and end users are only billed for what they use.
  • Speed: With IaaS, users can provision small or vast amounts of resources in a matter of minutes, testing new ideas quickly or scaling proven ones even quicker.
  • Availability: Through things like multizone regions, the availability and resiliency of cloud applications can exceed traditional approaches.
  • Scale: With seemingly limitless capacity and the ability to scale resources either automatically or with some supervision, it’s simple to go from one instance of an application or workload to many.
  • Latency and performance: Given the broad geographic footprint of most IaaS providers, it’s easy to put apps and services closer to your users, reducing latency and improving performance.

Typical use cases

IaaS represents general purpose compute resources and is thus capable of supporting use cases of all types. Today, IaaS is most commonly used for dev and test environments, customer-facing websites and web applications, data storage, analytics and data warehousing workloads, and backup and recovery, particularly for on-premises workloads. IaaS is also a good fit for deploying and running common business software and applications, such as SAP.

And while IaaS is capable of supporting a very diverse array of workloads, as we will explore in later sections, there are emerging compute models that might be better positioned to support certain types of workloads or application architectures, such as microservices.

IaaS vs. PaaS vs. SaaS

The easiest and most common way of understanding the distinction between the coarse-grained “-aaS” categories of IaaS, PaaS, and SaaS is to look at which elements of the stack are managed by the vendor and which are managed by the end user.

In a traditional IT setting, it is up to the end user to manage the whole stack end-to-end, from the physical hardware for servers and networking, up through virtualization, operating systems, middleware, and so on.

IaaS, PaaS, and SaaS each offer a progressive layer of abstraction after that. IaaS abstracts away the physical compute, network, storage, and the technology needed to virtualize those resources. PaaS goes a step further and abstracts away the management of the operating system, middleware, and runtime. SaaS provides the entire end-user application as-a-Service, abstracting away the entire rest of the stack.

[Diagram: differences between IaaS, PaaS, and SaaS.]

Learn more about the differences between IaaS, PaaS, and SaaS.

IaaS vs. containers vs. serverless

More recently, the discussion around cloud workloads has become increasingly dominated by containers and serverless. In many ways, IaaS was a step in the journey to the platonic ideal of cloud.

IaaS does offer end users much more granularity to pay for what they use, but they rarely pay only for what they use. Even virtual servers often involve long-running processes and less than perfect capacity utilization.

IaaS abstracts away many low-level components so developers can focus on business logic that differentiates the business, but it does still require end users to manage operating systems, middleware, and runtimes.

IaaS is often more resource and financially efficient than traditional compute, but spinning up a VM can still be somewhat time-consuming, and each VM brings with it overhead in the form of operating systems.

This model of IT was capable of supporting almost anything from a workload perspective but had room for evolution when it came to certain underlying philosophies and values that make cloud, cloud.

Containers and serverless are the two newer cloud models that are challenging the traditional IaaS model for supremacy around certain classes of cloud native applications and workloads.

In some cases, the container has begun replacing the VM as the standard unit of process or service deployment, with orchestration tools like Kubernetes governing the entire ecosystem of clusters.

Serverless goes the furthest of any model, abstracting away nearly everything but the business logic, scaling perfectly with demand, and really delivering on the promise of paying only for what you use.

As the world moves more toward microservices architectures—where applications are decomposed into small component parts that are deployed independently, manage their own data, and communicate via API—containers and serverless approaches will only become more common.

Today, traditional IaaS is by far the most mature compute model in cloud and controls the vast majority of market share in this space, but containers and serverless are technologies to watch and to begin employing opportunistically where it makes sense.

IaaS and IBM Cloud

IBM offers a full-stack cloud platform that includes a full IaaS layer of virtualized compute, network, and storage. Additionally, and unique within the industry, IBM Cloud also offers BMaaS for users that want additional control over the underlying hardware.

IBM is also committed to delivering solutions for cloud-native applications and workloads, which, in addition to IaaS, include IBM Cloud Kubernetes Service and IBM Cloud Functions for serverless applications.

To get started with cloud IaaS, create an IBM Cloud account and provision your first virtual server.

 

======================================================================

What is Hybrid Cloud?

Learn how to support a full range of applications and workloads using a mix of private and public cloud services.

Table of Contents

  • What is hybrid cloud?
  • Private cloud vs. public cloud vs. hybrid cloud
  • Benefits of hybrid cloud
  • Common use cases of hybrid cloud
  • Hybrid cloud architecture
  • Monocloud vs. multicloud
  • Hybrid cloud strategy
  • Hybrid cloud and IBM Cloud

What is hybrid cloud?

Hybrid cloud is a computing environment that connects a company’s on-premises private cloud services and third-party public cloud into a single, flexible infrastructure for running the organization’s applications and workloads.

The principle behind hybrid cloud is that its mix of public and private cloud resources—with a level of orchestration between them—gives an organization the flexibility to choose the optimal cloud for each application or workload (and to move workloads freely between the two clouds as circumstances change). This enables the organization to meet its technical and business objectives more effectively and cost-efficiently than it could with public or private cloud alone.

The benefits of hybrid cloud are easier to understand once you know more about the capabilities, limitations, and uses of private and public clouds.

Private cloud vs. public cloud vs. hybrid cloud

Private cloud

In the private cloud model, cloud infrastructure and resources are deployed on-premises and owned and managed by the organization.

Private cloud requires a large upfront capital expense for equipment and software, a lengthy deployment, and in-house IT expertise to manage and maintain the infrastructure. It’s also expensive and time-consuming to scale capacity (because you have to purchase, provision, and deploy new hardware) and add capabilities (because you have to purchase and install new software). But private cloud provides maximum control over the computing environment and data, which is especially important—or even mandatory—if your company deals with highly sensitive data or is subject to strict industry or governmental regulations.

Public cloud

In the public cloud model, a company consumes compute, network, storage, and application resources as services that are delivered by a cloud provider over the Internet.

The cloud provider owns, manages, provisions, and maintains the infrastructure and essentially rents it out to customers, either for a periodic subscription charge or fees based on usage.

Public cloud offers significant cost savings because the provider bears all the capital, operations, and maintenance expenses. It makes scalability as easy as requesting more capacity, and it lets your company’s IT staff focus more on revenue-driving activities and innovation and less on “keeping the lights on.”

In public cloud's multi-tenant environments, your workloads are subject to the performance, compliance, and security of the cloud provider’s infrastructure. Virtual Private Cloud (VPC) capabilities, however, give you greater control over your public cloud environment, including its security settings and access controls. VPCs combine the scalability of a public cloud with the security of a private cloud.

Hybrid cloud

The hybrid cloud model represents the best of both worlds. You can run sensitive, highly regulated, and mission-critical applications, or workloads with reasonably constant performance and capacity requirements, on private cloud infrastructure. You can run less-sensitive, more-dynamic, or even temporary workloads (such as development and test environments for a new application) on the public cloud.

With the proper integration and orchestration between the two, you can leverage both clouds, when needed, for the same workload. For example, you can leverage additional public cloud capacity to accommodate a spike in demand for a private cloud application (this is known as “cloud bursting”).
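
The decision logic behind bursting can be very simple. The sketch below is illustrative only, with hypothetical endpoints and threshold: route work to the private cloud until utilization crosses a limit, then overflow to public cloud capacity.

    # Illustrative cloud-bursting routing decision (all names hypothetical)
    PRIVATE_ENDPOINT = "https://apps.private.example.com"
    PUBLIC_ENDPOINT = "https://apps.public-cloud.example.com"
    BURST_THRESHOLD = 0.80  # burst once private utilization exceeds 80%

    def choose_endpoint(private_utilization):
        """Return the cloud that should serve the next request."""
        if private_utilization < BURST_THRESHOLD:
            return PRIVATE_ENDPOINT
        # private capacity is saturated: overflow to the public cloud
        return PUBLIC_ENDPOINT

    print(choose_endpoint(0.55))  # -> private cloud
    print(choose_endpoint(0.92))  # -> public cloud (burst)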

Benefits of hybrid cloud

If you’ve read this far, you’ve likely concluded that the flexibility and division of labor enabled by hybrid cloud can offer significant benefits to almost any organization in several areas, including the following:

Security and compliance

Hybrid cloud lets your organization deploy highly regulated or otherwise sensitive workloads in private cloud, while still being able to deploy less-sensitive workloads to public cloud services.

Scalability and resilience

You can’t always predict when workload traffic will spike, and even when you can predict spikes, you can’t always afford to purchase additional private cloud capacity for those spikes only. Hybrid cloud lets you scale up quickly, inexpensively, and even automatically using public cloud infrastructure and then scale back down when the surge subsides—all without impacting the other workloads running on your private cloud.

Resource optimization and cost saving

Hybrid cloud gives your IT more options and flexibility for deploying workloads in a way that makes the best use of your on-premises investments and your overall infrastructure budget. It also allows you to change that deployment in response to changing workloads or new opportunities.

For example, hybrid cloud lets you do any of the following:

  • Establish a cost-optimal division of labor for workloads—say, maintain workloads with known capacity and performance requirements on private cloud and migrate more variable workloads and applications to public cloud resources.
  • Quickly ‘spin up’ a development and test environment using pay-as-you-go public cloud resources, without impacting on-premises infrastructure.
  • Rapidly adopt or switch to emerging or state-of-the-art tools that can streamline your development, improve your products and services, or give you a competitive edge.

For a visual dive into hybrid cloud and the benefits it offers, watch “Hybrid Cloud Explained”:

Common use cases of hybrid cloud

Unless your organization was born on the cloud, you have a range of applications and workloads spread across private cloud, public cloud, and traditional IT environments that represent a range of opportunities for optimization via a hybrid cloud approach. Some increasingly common hybrid cloud use cases that might be relevant to your business include the following:

  • SaaS integration: Through hybrid integration, organizations are connecting Software-as-a-Service (SaaS) applications, available via public cloud, to their existing public cloud, private cloud, and traditional IT applications to deliver new solutions and innovate faster.
  • Data and AI integration: Organizations are creating richer and more personal experiences by combining new data sources on the public cloud—such as weather, social, IoT, CRM, and ERP—with existing data and analytics, machine learning and AI capabilities.
  • Enhancing legacy apps: 80% of applications are still on-premises, but many enterprises are using public cloud services to upgrade the user experience of those applications and deploy them globally to new devices, even as they incrementally modernize core business systems.
  • VMware migration: More and more organizations are “lifting and shifting” their on-premises virtualized workloads to public cloud without conversion or modification to dramatically reduce their on-premises data center footprint and position themselves to scale as needed without added capital expense.

Hybrid cloud architecture

Gartner defines two common types of hybrid cloud platforms: hybrid monocloud and hybrid multicloud.

Hybrid monocloud

Hybrid monocloud is hybrid cloud with one cloud provider—essentially an extension of a single public cloud provider’s software and hardware stack to the customer’s on-premises environment so that the exact same stack runs in both locations. The two environments are tethered together to form a single hybrid environment, managed from the public cloud with the same tools used to manage the public cloud provider’s infrastructure.

Hybrid multicloud

Hybrid multicloud is an open standards-based stack that can be deployed on any public cloud infrastructure. That means across multiple providers as well as on premises. As with hybrid monocloud, the environments are tethered together to form a single hybrid environment, but management can be done on- or off-premises and across multiple providers, using a common set of management tools chosen by the customer.

Hybrid multicloud architecture gives an organization the flexibility to move workloads from vendor to vendor and environment to environment as needed and to swap out cloud services and vendors for any reason.

A variant of hybrid multicloud called composite multicloud makes the flexibility even more granular—it uses a mix of microservices and cloud environments to distribute single applications across multiple providers and lets you move application components across cloud services and vendors as needed.

Monocloud vs. multicloud

Pros and cons exist for both approaches. Hybrid monocloud may be better if you’re confident that you can meet your application needs with a single vendor’s stack, if you can’t justify the cost and management effort of working with multiple cloud vendors, or if you’re taking your first step from on-premises to hybrid.

But the flexibility of hybrid multicloud makes it almost inevitable for most organizations. In a recent Gartner survey, 81% of respondents reported working with two or more cloud vendors.

Hear more from Daryl Plummer, VP, Distinguished Analyst, Chief of Research and Chief Gartner Fellow, on how enterprises are realizing an agile and responsive hybrid cloud architecture in this webcast (link resides outside IBM) featuring Gartner.

For a deeper dive on hybrid cloud architecture, see Sai Vennam's four-video series, starting with "Hybrid Cloud Architecture: Introduction":

Hybrid cloud strategy

Important considerations for your hybrid cloud strategy include the following:

  • Use of open standards-based architectures
  • Secure integration across cloud apps and data on- and off-premises
  • Management of mixed clouds and providers across hybrid environments
  • Automation of DevOps across providers and hybrid environments
  • Movement of data and files between clouds, on- and off-premises, and across multicloud environments
  • Understanding security responsibilities

Let’s look at each in more detail.

Cloud open standards

Open standards, as the name implies, are documented standards open to the public for use by anyone. Typically, the purpose of open standards is to allow for consistency and repeatability in approach. They are most often developed in collaboration by people who are invested in achieving the same outcomes.

In the case of hybrid cloud, open standards can help support interoperability, integration, and management. Some examples of open standards that support hybrid cloud include Kubernetes, Istio, OpenStack, and Cloud Foundry.

Hybrid cloud integration

Integration across applications and data—in the cloud and on- and off-premises—is an important component of any hybrid cloud strategy. Whether connecting applications from multiple Software-as-a-Service (SaaS) providers, moving parts of applications to microservices, or integrating with legacy applications, integration is key to ensuring the components of the hybrid ecosystem work together quickly and reliably.

To keep up with the pace of innovation, organizations need to be able to support a high volume of integration requests. While traditional integration styles and approaches are still important, more modern styles—such as API lifecycle management and event-driven architecture—are critical components of today’s integration ecosystem.

Modern integration requires speed, flexibility, security, and scale, and in recent years, businesses have started rethinking their approach to integration in order to drive speed and efficiency while lowering costs.

Decentralized teams using agile methods, microservices-aligned architectures, and the introduction of hybrid integration platforms are reshaping the way enterprises approach hybrid integration. Download the Agile Integration eBook to learn more about how businesses are thinking about integration modernization.

Hybrid cloud management

Management is another important component of a hybrid cloud strategy. Management includes, but is not limited to, provisioning, scaling, and monitoring across environments.

In a hybrid monocloud environment, management is relatively straightforward because with a single vendor, you can use the same tools to manage or provision across the infrastructure.

In a hybrid multicloud environment encompassing multiple cloud vendors, it is more of a challenge to manage consistently.

Kubernetes, the most popular container orchestration system, is an open source technology that works with many container engines. It can help with management tasks like scaling containerized apps, rolling out new versions of apps, and providing monitoring, logging, and debugging.

Differences in the specific Kubernetes implementations offered by cloud vendors can complicate management across environments, but open source solutions like Red Hat OpenShift (link resides outside IBM) can simplify matters by enabling orchestration and provisioning across different cloud environments, standardizing the entire environment so it can be managed as a single stack.
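
Because every conformant Kubernetes cluster exposes the same API, routine management tasks can be scripted once and run against any environment. Here is a minimal sketch using the official Kubernetes Python client; the deployment name and namespace are placeholders, and the script assumes your kubeconfig already points at the target cluster.

    # Scale a deployment with the official Kubernetes Python client.
    # The same script works against any conformant cluster; the
    # deployment name and namespace below are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # read cluster credentials from ~/.kube/config
    apps = client.AppsV1Api()

    # one of the management tasks mentioned above: scaling a containerized app
    apps.patch_namespaced_deployment_scale(
        name="web-frontend",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )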

DevOps and automation

At its core, DevOps is focused on automating development and delivery tasks and standardizing environments across the lifecycle of applications. One of the primary advantages of hybrid cloud is the flexibility to use the best fit environment to support individual workload requirements. DevOps methodology and tools like Red Hat OpenShift and Ansible help ensure a consistent approach and automation across hybrid environments and infrastructures, which is especially helpful in multicloud scenarios.

To learn more, check out the video “What is DevOps?”:

Hybrid cloud storage

Cloud storage allows you to save data and files in an off-site location accessible via the public Internet or a dedicated private network connection. Data that you transfer off-site for storage becomes the responsibility of a third-party cloud provider. The provider hosts, secures, manages, and maintains the servers and associated infrastructure and ensures you have access to the data whenever you need it.

A hybrid cloud storage model combines elements of private and public clouds, giving organizations a choice of which data to store in which cloud. For instance, highly regulated data subject to strict archiving and replication requirements is usually more suited to a private cloud environment, whereas less-sensitive data (such as email that doesn’t contain business secrets) can be stored in the public cloud. Some organizations use hybrid clouds to supplement their internal storage networks with public cloud storage.

Hybrid cloud security

Enterprises worry that moving applications, services, and data beyond their firewalls to the cloud exposes them to greater risk. In fact, security vulnerability is often cited as a leading barrier to cloud adoption.

Hybrid cloud adds complexity to security management because it requires management across multiple platforms, often without transparency or visibility into what is being managed where. Businesses often misunderstand where the responsibility lies for ensuring security, believing the cloud provider bears sole responsibility.

The following provides a basis for a sound hybrid cloud security strategy:

  • Insist on a “shared responsibility” approach: Although the business is ultimately responsible for securing its data, services, and applications, it's important for businesses to choose vendors that view security as a shared responsibility. Choose cloud providers that incorporate security into their platforms, offer tools and partners that make security management easier, and work with customers to implement best practices.
  • Use tools and processes designed for the cloud: Automation and secure DevOps practices help security professionals automate system checks and tests into deployments. Removing human error from the workflow helps simplify development and deployment.
  • Manage access: Identity and access management (IAM) frameworks help protect valuable assets from getting into the wrong hands. Policies should promote the concept of least-privileged access so that users only have access to the resources they absolutely require for their roles.
  • Ensure visibility and define ownership: Management systems should help enterprises monitor and manage across multiple cloud platforms. Internal security teams should know who is responsible for specific assets and data and have robust communications plans in place so nothing is overlooked.

Hybrid cloud and IBM Cloud

IBM Cloud supports the most complete range of hybrid cloud use cases—not just IBM public cloud and private clouds, but across multicloud environments.

IBM Hybrid Cloud solutions can ensure maximum flexibility and portability of your new and existing applications. Create an integrated environment that embraces public and private cloud platforms and offers supporting technologies for integration and multicloud management.

Hybrid cloud customers using IBM Cloud Satellite can receive better integration, monitoring and management when building and running their applications. Satellite offers a consistent, fully managed set of core application services that run across all cloud environments including on-premises. With Satellite, you can accelerate your journey to the cloud while reducing fragmented visibility and management complexity, as its dashboard provides you with control over configuration and application deployment.

 

 

======================================================================

  • What is Multicloud?

In this guide, learn how a multicloud strategy can improve efficiency, control costs, and provide access to new technologies.

Table of Contents

  • What is multicloud?
  • Multicloud versus hybrid cloud
  • Pros and cons of multicloud
  • Multicloud use cases
  • Multicloud architecture
  • Facing challenges
  • Security challenges associated with multicloud
  • Strategy for multicloud
  • Key technologies of multicloud
  • Multicloud and IBM

What is multicloud?

Multicloud is the use of two or more clouds from different cloud providers. This can be any mix of Infrastructure, Platform, or Software as a Service (IaaS, PaaS, or SaaS). For example, you may consume email as a service from one vendor, customer relationship management (CRM) from another, and Infrastructure as a Service (IaaS) from yet another.

Currently, most organizations — 85 percent, according to one survey — use multicloud environments. You might choose multicloud to address specific business requirements; you might also choose it to avoid the limitations of a single-vendor cloud strategy. For example, if you standardize on a single cloud vendor or approach for all of your IT services, you might find it difficult later to switch to a different vendor that offers a better platform for application development and more competitive prices. And if the vendor you’re locked into has an outage, it will affect your whole environment.

With multicloud, you can decide which workload is best suited to which cloud based on your unique requirements. Different mission-critical workloads (such as an inventory application for a retailer or distributor, a medical records repository for a healthcare provider, or a CAD solution for an engineering firm) have their own requirements for performance, data location, scalability, and compliance, and certain vendors’ clouds will meet these requirements better than others.

Multicloud versus hybrid cloud

Multicloud and hybrid cloud are distinct but often complementary models. Hybrid cloud describes a variety of cloud types. In a hybrid cloud environment, an organization uses a blend of public and private clouds, on or off premises, to meet its IT needs. The goal of the hybrid cloud is to get everything working together, with varying degrees of communication and data sharing to best run a company’s daily operations.

Multicloud refers to a variety of cloud providers. Each cloud may reside in its own silo, but that doesn’t prevent it from interacting with other services in a hybrid environment. In fact, most organizations use multicloud as part of their hybrid strategies.

A common hybrid/multicloud use case: your website and its load balancing solution run on public cloud IaaS, while the website connects to a user database and an inventory system on a private cloud on premises to meet security and regulatory requirements.

Pros and cons of multicloud

Speaking generally, the chief advantage of multicloud is the flexibility to quickly adopt the best technologies for any task. The chief drawback is the complexity that comes with managing many different technologies from many different vendors.

Pros

Multicloud’s inherent flexibility offers a number of benefits, including risk mitigation, optimization, and ready access to the services you need.

Multicloud helps mitigate risk in two ways: by limiting exposure from a single vendor approach and by preventing vendor lock-in. In a multicloud environment, if a particular provider’s cloud experiences downtime, the outage will affect only one vendor’s service. If your hosted email is down for a few hours, services from other providers, such as your website or software development platform, can still run.

Multicloud also lets you choose the service that best suits your needs. One service might offer extra functionality or employ a security protocol that makes it easier to meet your compliance requirements. Or, all things being equal in security and functionality, you might choose the provider with the best price.

Another significant multicloud benefit is access to technology. For example, if you lack the budget to deploy an analytics solution on premises, you can access it as a cloud service without the up-front capital expense. This also means you can get the service up and running more quickly, accelerating your time to value.

Similarly, when you have the freedom to choose any provider for any solution, you can access new, innovative technologies more quickly than you might be able to from a single vendor’s catalog, and you can combine services from multiple providers to create applications that offer unique competitive advantage.

Cons

The more clouds you use — each with its own set of management tools, data transmission rates, and security protocols — the more difficult it can be to manage your environment.

You may lack visibility into entire applications and their dependencies. Even if some cloud providers offer monitoring functions, you may have to switch between dashboards with different APIs (interface rules and procedures) and authentication protocols to view the information you need. Cloud providers each have their own procedures to migrate, access, and export data, potentially creating serious management headaches for administrators.

Multicloud management platforms address these issues by providing visibility across multiple provider clouds and making it possible to manage your applications and services in a consistent, simplified way. Through a central dashboard, development teams can see their projects and deployments, operations teams can keep an eye on clusters and nodes, and the cybersecurity staff can monitor for threats. You might also consider the adoption of microservices architecture so that you can source cloud services from any mix of providers and combine them into a single application.

Multicloud use cases

The number of multicloud use cases is expanding quickly. Multicloud can help you meet a wide range of business goals.

For instance, you may choose to develop and test applications on multi-tenant public cloud infrastructure to speed access to compute and storage resources and optimize costs. But you may choose to deploy your applications on a dedicated cloud environment from another vendor that offers more compelling security and compliance features, or on a bare metal environment that meets specific performance requirements.

For data storage, you may choose one vendor for data that is frequently in transit and a different vendor for data archiving because the costs may vary significantly with data in motion versus data at rest. Or, you may want the freedom to move your data off a given cloud vendor in response to new regulations or other unforeseen events.

Multicloud architecture

When you develop a multicloud strategy, architecture is a central consideration. Architecture decisions you make today will have repercussions far into the future. Careful planning and vision are required to avoid architecture that may eventually work against you by constraining your ability to scale, make changes and upgrades, and adopt new technologies.

When designing your multicloud architecture, consider factors such as where data resides, who has access to it, and from where. If certain applications are spread across different clouds, take into account the API formats and encodings for each cloud and how you can create a seamless experience for IT administrators and users alike.

You should also account for the geographic spread of your applications, databases, and web services to make it easy to access and manage your data regardless of location. You’ll also want to consider how far data travels and create a flow with the lowest possible latency.

Facing challenges

While multicloud environments help modernize IT environments, making them more agile and flexible, they also create challenges because of the differences between cloud providers. For instance, you have to address ownership boundaries — where do your management and security responsibilities end and where do those of the cloud providers begin?

  • Integration: Some cloud services may operate seamlessly out of the box, but many are bound to require some level of integration, especially if you are linking them to other resources within your IT environment, such as a website or database. For the environment to operate optimally, you will have to address differences between each cloud in areas such as APIs, containerization, features, functions, and security protocols.
  • Portability and interoperability: Are you able to migrate components to more than one cloud without having to make major modifications in each system? Once components are moved to a cloud, you also may face challenges of interoperability with your on-premises systems.
  • Latency: Where data resides, its proximity to users, and the distances it has to travel all contribute to latency issues. If users experience delays in accessing applications and databases as a result of latency, productivity may suffer, and that would be counterproductive to a multicloud approach that is supposed to deliver benefits like agility, flexibility, and efficiency.
  • Privacy regulations: Regulations sometimes require you to use security controls like encryption when transmitting and storing data. Regulations also may restrict where you can archive personal data (such as medical, financial, and human resources records), so you need to know where the cloud infrastructure is located and whether it complies with relevant data-handling laws.

Security challenges associated with multicloud

One of the biggest challenges you’re likely to face with a multicloud environment is security. Cloud providers have appropriate security controls and tools in place to protect their services, but it’s up to you to implement proper protocols and solutions to secure data when it sits in your on-premises environment and when it travels back and forth to the cloud.

Your multicloud security plan needs to include authentication policies to ensure users access only the cloud-based resources they need for their jobs. And since cloud services give users access from any device, you also have to secure the mobile devices your employees use to connect to the services.

Each multicloud environment is different, so some level of security customization is usually necessary. Whatever your customization requirements, visibility into the entire multicloud infrastructure is critical, enabling you to monitor the environment at all times to ensure that data is accessed properly, security vulnerabilities are addressed, and cyberattacks are prevented.

Strategy for multicloud

As you build and expand your multicloud environment, it’s wise to set strategy to maximize benefits and prevent complexity. It’s easy to lose control of a multicloud environment without a proper management strategy. Currently, fewer than half of organizations with a multicloud environment have a management strategy and only 38 percent have the necessary procedures and tools in place.

Setting a strategy starts with deciding which workload belongs in which cloud so you can achieve optimal data resiliency. Resiliency refers to how you handle and back up data to ensure business continuity in case of data loss.

Part of your strategy should cover how to manage APIs to achieve interoperability between multiple clouds and on-premises systems. Typically, cloud services come with API lifecycle solutions that include centralized management and flexible deployment options for multiclouds and on-premises environments. But getting everything to work together will require some configuration expertise.

Your multicloud strategy should cover the migration of on-premises services to the cloud and any modifications you have to make so they can run in a cloud environment. It should specify rules and best practices for building, testing, and running applications that will interact with your cloud services. Lastly, the strategy should cover security controls, practices, and solutions that ensure a safe multicloud environment.

Key technologies of multicloud

Multicloud containers

In a multicloud environment, the use of software containers solves portability issues and accelerates application deployment.

A container is a lightweight, executable unit of software that packages application code together with all the libraries and other dependencies it needs to run. By packaging together application code, libraries, environment variables, other software binaries, and configuration files, a container guarantees that it has everything needed to run the application out of the box, regardless of the operating environment in which the container runs.

The consistent application of open standards across clouds makes containerization ideal for moving applications within a multicloud infrastructure. Since the application can run in any environment, you can take its container from an on-premises environment and place it on any public cloud infrastructure for the purpose of cloud bursting, a process that allows you to scale up when you run out of capacity. In another scenario, if you need to run an application in different places across a multicloud environment, containerization enables you to do so with efficiency and consistency.
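
As a sketch of that portability, the Docker SDK for Python can pull and run the same image on any host with a container engine; the nginx image and port mapping here are illustrative only.

    # Run the same container image on any host with a container engine.
    # Assumes the Docker SDK for Python ('pip install docker') and a
    # running Docker daemon; image and ports are illustrative.
    import docker

    engine = docker.from_env()
    container = engine.containers.run(
        "nginx:latest",          # the image bundles the app and its dependencies
        detach=True,             # run in the background
        ports={"80/tcp": 8080},  # expose container port 80 on host port 8080
    )
    print(container.short_id)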

According to a recent study, 57 percent of surveyed organizations are using containers today (link resides outside IBM). To deploy and run multicloud efficiently, enterprises will look to adopt management solutions that leverage open standards like Kubernetes to give them full visibility and control across environments in a consistent and repeatable way, regardless of the vendor or infrastructure they choose.

Multicloud storage

To get the most value out of your multicloud environment, you need a data storage strategy. You can run your storage infrastructure on premises, in the cloud, or in a combination of both, depending on your specific needs.

Cloud storage adds flexibility and scalability, but data privacy or archiving regulations may limit what types of data you can store in the cloud. Privacy laws differ between states, countries, and regions, and, in some cases, specify where data can be saved. For instance, the European Union’s General Data Protection Regulation (GDPR) places severe restrictions on how to handle and store data of EU data subjects outside the region’s borders, so many companies simply opt to keep the data within member countries.

Other considerations regarding multicloud storage revolve around management of stored data. You may have multiple storage locations to keep the data as close to users as possible but, of course, using multiple sites adds complexity. Thankfully, management solutions are available that bring consistency and order to cloud storage no matter how geographically dispersed your storage network is or how many clouds it uses.

Multicloud automation

As IT environments expand across geographic zones and multiple clouds, getting everything to work together efficiently is a priority. Automating management of a multicloud environment eliminates manual tasks and with them the chance of human error, improving efficiency and operational consistency while freeing up staff for strategic work.

Multicloud monitoring

While multicloud offers plentiful benefits, as already discussed, it can create silos and added complexity, making it difficult to monitor your entire IT environment. Even when a cloud provider offers monitoring, the capability is limited to that provider’s cloud, which means other parts of your cloud environment stay in the dark from an administrative standpoint.

To address the issue of visibility, vendors have started introducing monitoring tools that give you a comprehensive view of your multicloud environment. You should select a management tool as early as possible in the process of implementing a multicloud environment. Trying to manage a multicloud without full visibility is likely to result in performance issues that will only get more severe the longer they are allowed to persist, and poor performance can discourage customers from doing business with your organization.

To help you choose a tool that best suits your needs, Gartner has assembled a set of criteria (link resides outside IBM) to evaluate monitoring solutions.

Multicloud and VMware

One way that organizations can gain visibility into their multicloud environments is by leveraging VMware’s multicloud solutions. From a central console, you get a unified view of the health, performance, and security of all your applications and services wherever they are located within the multicloud infrastructure.

When leveraging VMware solutions, you can accelerate software development through the use of containers that make it possible to run applications seamlessly in different environments. Within the VMware environment, you also can leverage microservices, which enable quick changes to applications, and Kubernetes, which automates application deployment and management.

Some organizations are using VMware multicloud solutions in conjunction with a cloud provider to develop and manage containerized applications in a customized multicloud infrastructure. This approach makes it possible to scale the environment on demand and manage it from the centralized VMware console.

Multicloud and IBM

To help prepare companies for a multicloud future, IBM offers a host of multicloud solutions and services, including the IBM Cloud Pak for Multicloud Management. Enterprises can use IBM Multicloud Manager to deploy, run and monitor their Kubernetes container clusters in multicloud environments.

IBM supports multicloud strategies for application development, migration, modernization, and management with a range of cloud migration and integration technologies, services, and consulting offerings.

IBM Cloud Satellite allows you to deploy and maintain applications across multicloud environments consistently with more global visibility and less management complexity. Satellite offers a fully managed set of core application services that run across all cloud environments including on-premises, edge and public cloud. You have the flexibility to build and run applications in the cloud environments where they make sense to you, with faster time to market, scalability and reliability.

 

====================================================================

  • What is Cloud Storage?

An introduction to the important aspects of cloud storage, including how it works, its benefits, and the different types of cloud storage that are available.

Table of Contents

  • What is cloud storage?
  • How does it work?
  • Pros and cons
  • Examples
  • Cloud storage for business
  • Security
  • Backup
  • Servers
  • Open source
  • Pricing
  • Examples
  • Cloud storage and IBM

What is cloud storage?

Cloud storage allows you to save data and files in an off-site location that you access either through the public internet or a dedicated private network connection. Data that you transfer off-site for storage becomes the responsibility of a third-party cloud provider. The provider hosts, secures, manages, and maintains the servers and associated infrastructure and ensures you have access to the data whenever you need it.

Cloud storage delivers a cost-effective, scalable alternative to storing files on on-premise hard drives or storage networks. Computer hard drives can only store a finite amount of data. When users run out of storage, they need to transfer files to an external storage device. Traditionally, organizations built and maintained storage area networks (SANs) to archive data and files. SANs are expensive to maintain, however, because as stored data grows, companies have to invest in adding servers and infrastructure to accommodate increased demand.

Cloud storage services provide elasticity, which means you can scale capacity as your data volumes increase or dial down capacity if necessary. By storing data in a cloud, your organization saves by paying for storage technology and capacity as a service, rather than investing in the capital costs of building and maintaining in-house storage networks. You pay for only the capacity you use. While your costs might increase over time to account for higher data volumes, you don’t have to overprovision storage networks in anticipation of increased data volume.

How does it work?

Like on-premise storage networks, cloud storage uses servers to save data; however, the data is sent to servers at an off-site location. Most of the servers you use are virtual machines hosted on a physical server. As your storage needs increase, the provider creates new virtual servers to meet demand.

For more information on virtual machines, see “Virtual Machines: A Complete Guide.”

Typically, you connect to the storage cloud either through the internet or a dedicated private connection, using a web portal, website, or a mobile app. The server with which you connect forwards your data to a pool of servers located in one or more data centers, depending on the size of the cloud provider’s operation.

As part of the service, providers typically store the same data on multiple machines for redundancy. This way, if a server is taken down for maintenance or suffers an outage, you can still access your data.

Cloud storage is available in private, public and hybrid clouds.

  • Public storage clouds: In this model, you connect over the internet to a storage cloud that’s maintained by a cloud provider and used by other companies. Providers typically make services accessible from just about any device, including smartphones and desktops, and let you scale up and down as needed.
  • Private cloud storage: Private cloud storage setups typically replicate the cloud model, but they reside within your network, leveraging a physical server to create instances of virtual servers to increase capacity. You can choose to take full control of an on-premise private cloud or engage a cloud storage provider to build a dedicated private cloud that you can access with a private connection. Organizations that might prefer private cloud storage include banks or retail companies due to the private nature of the data they process and store.
  • Hybrid cloud storage: This model combines elements of private and public clouds, giving organizations a choice of which data to store in which cloud. For instance, highly regulated data subject to strict archiving and replication requirements is usually more suited to a private cloud environment, whereas less sensitive data (such as email that doesn’t contain business secrets) can be stored in the public cloud. Some organizations use hybrid clouds to supplement their internal storage networks with public cloud storage.

Pros and cons

As with any other cloud-based technology, cloud storage offers some distinct advantages. But it also raises some concerns for companies, primarily over security and administrative control.

Pros

The pros of cloud storage include the following:

  • Off-site management: Your cloud provider assumes responsibility for maintaining and protecting the stored data. This frees your staff from tasks associated with storage, such as procurement, installation, administration, and maintenance. As such, your staff can focus on other priorities.
  • Quick implementation: Using a cloud service accelerates the process of setting up and adding to your storage capabilities. With cloud storage, you can provision the service and start using it within hours or days, depending on how much capacity is involved.
  • Cost-effective: As mentioned, you pay for the capacity you use. This allows your organization to treat cloud storage costs as an ongoing operating expense instead of a capital expense with the associated upfront investments and tax implications.
  • Scalability: Growth constraints are one of the most severe limitations of on-premise storage. With cloud storage, you can scale up as much as you need. Capacity is virtually unlimited.
  • Business continuity: Storing data offsite supports business continuity in the event that a natural disaster or terrorist attack cuts access to your premises.

Cons

Cloud storage cons include the following:

  • Security: Security concerns are common with cloud-based services. Cloud storage providers try to secure their infrastructure with up-to-date technologies and practices, but occasional breaches have occurred, creating discomfort with users.
  • Administrative control: Being able to view your data, access it, and move it at will is another common concern with cloud resources. Offloading maintenance and management to a third party offers advantages but also can limit your control over your data.
  • Latency: Delays in data transmission to and from the cloud can occur as a result of traffic congestion, especially when you use shared public internet connections. However, companies can minimize latency by increasing connection bandwidth.
  • Regulatory compliance: Certain industries, such as healthcare and finance, have to comply with strict data privacy and archival regulations, which may prevent companies from using cloud storage for certain types of files, such as medical and investment records. If you can, choose a cloud storage provider that supports compliance with any industry regulations impacting your business.

Examples

There are three main types of cloud storage: block, file, and object. Each comes with its set of advantages:

Block storage

Traditionally employed in SANs, block storage is also common in cloud storage environments. In this storage model, data is organized into large volumes called “blocks,” each of which acts as a separate hard drive. Cloud storage providers use blocks to split large amounts of data among multiple storage nodes. Block storage resources provide better performance over a network thanks to low I/O latency (the time it takes to complete a read or write operation between the system and client) and are especially suited to large databases and applications.

Used in the cloud, block storage scales easily to support the growth of your organization’s databases and applications. Block storage would be useful if your website captures large amounts of visitor data that needs to be stored.

File storage

The file storage method saves data in the hierarchical file and folder structure with which most of us are familiar. The data retains its format, whether residing in the storage system or in the client where it originates, and the hierarchy makes it easier and more intuitive to find and retrieve files when needed. File storage is commonly used for development platforms, home directories, and repositories for video, audio, and other files.

In the video “Block Storage vs. File Storage,” Amy Blea compares and contrasts these two cloud storage options:


Object storage

Object storage differs from file and block storage in that it manages data as objects. Each object includes the data in a file, its associated metadata, and an identifier. Object storage stores data in the format it arrives in and makes it possible to customize metadata in ways that make the data easier to access and analyze. Instead of being organized in files or folder hierarchies, objects are kept in repositories that deliver virtually unlimited scalability. Since there is no filing hierarchy and the metadata is customizable, object storage allows you to optimize storage resources in a cost-effective way.
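
Because most object storage services, including IBM Cloud Object Storage, expose an S3-compatible API, a short sketch with the widely used boto3 library shows the model in practice: data, customizable metadata, and a key that serves as the identifier. The bucket, key, endpoint, and credentials are placeholders.

    # Store and retrieve an object, with custom metadata, through an
    # S3-compatible API. Bucket, key, and endpoint are placeholders,
    # and credentials are assumed to be configured in the environment.
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://s3.example-cloud.com")

    s3.put_object(
        Bucket="analytics-archive",
        Key="reports/2021/q3.csv",            # the object's identifier
        Body=b"region,revenue\nemea,1200\n",  # the data itself
        Metadata={"department": "finance", "retention": "7y"},  # custom metadata
    )

    obj = s3.get_object(Bucket="analytics-archive", Key="reports/2021/q3.csv")
    print(obj["Metadata"])  # the metadata travels with the object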

Check out the “What is Object Storage?” overview video to hear major use cases and benefits of object storage:



Cloud storage for business

A variety of cloud storage services is available for just about every kind of business—anything from sole proprietorships to large enterprises.

If you run a small business, cloud storage could make sense, particularly if you don’t have the resources or skills to manage storage yourself. Cloud storage can also help with budget planning by making storage costs predictable, and it gives you the ability to scale as the business grows.

If you work at a larger enterprise (e.g., a manufacturing company, financial services, or a retail chain with dozens of locations), you might need to transfer hundreds of gigabytes of data for storage on a regular basis. In these cases, you should work with an established cloud storage provider that can handle your volumes. In some cases, you may be able to negotiate custom deals with providers to get the best value.

Security

Cloud storage security is a serious concern, especially if your organization handles sensitive data like credit card information and medical records. You want assurances that your data is protected from cyber threats with the most up-to-date methods available. You will want layered security solutions that include endpoint protection, content and email filtering, and threat analysis, as well as best practices such as regular updates and patches. And you need well-defined access and authentication policies.

Most cloud storage providers offer baseline security measures that include access control, user authentication, and data encryption. Ensuring these measures are in place is especially important when the data in question involves confidential business files, personnel records, and intellectual property. Data subject to regulatory compliance may require added protection, so you need to check that your provider of choice complies with all applicable regulations.

Whenever data travels, it is vulnerable to security risks. You share the responsibility for securing data headed for a storage cloud. Companies can minimize risks by encrypting data in motion and using dedicated private connections (instead of the public internet) to connect with the cloud storage provider.

Backup

Data backup is as important as security. Businesses need to back up their data so they can access copies of files and applications—and prevent interruptions to business—if data is lost due to cyberattack, natural disaster, or human error.

Cloud-based data backup and recovery services have been popular from the early days of cloud-based solutions. Much like cloud storage itself, you access the service through the public internet or a private connection. Cloud backup and recovery services free organizations from the tasks involved in regularly replicating critical business data to make it readily available should you ever need it in the wake of data loss caused by a natural disaster, cyberattack, or unintentional user error.

Cloud backup offers the same advantages to businesses as storage—cost-effectiveness, scalability, and easy access. One of the most attractive features of cloud backup is automation. Asking users to continually back up their own data produces mixed results since some users always put it off or forget to do it. This creates a situation where data loss is inevitable. With automated backups, you can decide how often to back up your data, be it daily, hourly or whenever new data is introduced to your network.
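
A scheduled backup loop can be as small as the sketch below, which uses the third-party schedule library; run_backup is a hypothetical stand-in for whatever replication or upload call your backup tooling provides.

    # Nightly automated backup using the 'schedule' library
    # ('pip install schedule'); run_backup is a hypothetical stand-in
    # for your backup tool's replication or upload call.
    import time
    import schedule

    def run_backup():
        print("Replicating new data to the backup cloud...")

    schedule.every().day.at("02:00").do(run_backup)  # daily, during off-peak hours

    while True:
        schedule.run_pending()
        time.sleep(60)  # check the schedule once a minute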

Backing up data off-premise in a cloud offers an added advantage: distance. A building struck by a natural disaster, terror attack, or some other calamity could lose its on-premise backup systems, making it impossible to recover lost data. Off-premise backup provides insurance against such an event.

Servers

Cloud storage servers are virtual servers—software-defined servers that emulate physical servers. A physical server can host multiple virtual servers, making it easier to provide cloud-based storage solutions to multiple customers. The use of virtual servers boosts efficiency because physical servers otherwise typically operate below capacity, which means some of their processing power is wasted.

This approach is what enables cloud storage providers to offer pay-as-you-go cloud storage, and to charge only for the storage capacity you consume. When your cloud storage servers are about to reach capacity, the cloud provider spins up another server to add capacity—or makes it possible for you to spin up an additional virtual machine on your own.

“Virtualization: A Complete Guide” offers a complete overview of virtualization and virtual servers.

Open source

If you have the expertise to build your own virtual cloud servers, one of the options available to you is open source cloud storage. Open source means the software used in the service is available to users and developers to study, inspect, change and distribute.

Open source cloud storage is typically associated with Linux and other open source platforms that provide the option to build your own storage server. Advantages of this approach include control over administrative tasks and security.

Cost-effectiveness is another plus. While cloud-based storage providers give you virtually unlimited capacity, it comes at a price. The more storage capacity you use, the higher the price gets. With open source, you can continue to scale capacity as long as you have the coding and engineering expertise to develop and maintain a storage cloud.

Different open source cloud storage providers offer varying levels of functionality, so you should compare features before deciding which service to use. Some of the functions available from open source cloud storage services include the following:

  • Syncing files between devices in multiple locations
  • Two-factor authentication
  • Auditing tools
  • Data transfer encryption
  • Password-protected sharing

Pricing

As mentioned, cloud storage helps companies cut costs by eliminating in-house storage infrastructure. But cloud storage pricing models vary. Some cloud storage providers charge a monthly cost per gigabyte, while others charge flat fees based on stored capacity. Fees vary widely; you may pay $1.99 or $10 for 100 GB of storage monthly, depending on the provider you choose. Additional fees for transferring data from your network to the storage cloud are usually included in the overall service price.

Providers may charge additional fees on top of the basic cost of storage and data transfer. For instance, you may incur an extra fee every time you access data in the cloud to make changes or deletions, or to move data from one place to another. The more of these actions you perform on a monthly basis, the higher your costs will be. Even if the provider includes some base level of activity in the overall price, you will incur extra charges if you exceed the allowable limit.

Providers may also factor the number of users accessing the data, how often users access data, and how far the data has to travel into their charges. They may charge differently based on the types of data stored and whether the data requires added levels of security for privacy purposes and regulatory compliance.
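
Because several meters accumulate at once, it is worth modeling your expected bill before choosing a provider. All of the rates in this sketch are invented for illustration.

    # Back-of-the-envelope monthly cost model (all rates hypothetical)
    stored_gb = 500
    per_gb_rate = 0.02             # $ per GB-month of stored capacity
    requests = 1_200_000           # reads, writes, and deletes this month
    included_requests = 1_000_000  # activity included in the base price
    per_request_fee = 0.000004     # $ per request beyond the allowance

    overage = max(0, requests - included_requests)
    monthly_cost = stored_gb * per_gb_rate + overage * per_request_fee
    print("Estimated monthly bill: $%.2f" % monthly_cost)  # -> $10.80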

Examples

Cloud storage services are available from dozens of providers to suit all needs, from those of individual users to multinational organizations with thousands of locations. For instance, you can store emails and passwords in the cloud, as well as files like spreadsheets and Word documents for sharing and collaborating with other users. This capability makes it easier for users to work together on a project, which explains why file transfer and sharing are among the most common uses of cloud storage services.

Some services provide file management and syncing, ensuring that versions of the same files in multiple locations are updated whenever someone changes them. With file management capability, you can organize documents, spreadsheets, and other files as you see fit and make them accessible to other users. Cloud storage services also can handle media files, such as video and audio, as well as large volumes of database records that would otherwise take up too much room inside your network.

Whatever your storage needs, you should have no trouble finding a cloud storage service to deliver the capacity and functionality you need.

Cloud storage and IBM

IBM Cloud Storage offers a comprehensive suite of cloud storage services, including out-of-the-box solutions, components to create your own storage solution, and standalone and secondary storage.

Benefits of IBM Cloud solutions include:

  • Global reach
  • Scalability
  • Flexibility
  • Simplicity

You also can take advantage of IBM’s automated data backup and recovery system, which is managed through the IBM Cloud Backup WebCC browser utility. The system allows you to securely back up data in one or more IBM cloud data centers around the world.

Storage software is predicted to overtake storage hardware by 2020, by which time it will need to manage 40 zettabytes (40 sextillion bytes) of data. Check out IBM’s report “Hybrid storage for the hybrid cloud.”

 

====================================================================

  • Disaster Recovery: An Introduction

An overview of the process of disaster recovery planning and some guidance on whether Disaster-Recovery-as-a-Service (DRaaS) is the right choice for protecting your business.

Table of Contents

  • What is disaster recovery?
  • Business continuity planning
  • Disaster recovery planning
  • Disaster Recovery-as-a-Service (DRaaS)
  • Cloud DR
  • Disaster recovery and IBM

What is disaster recovery?

Disaster recovery (DR) consists of IT technologies and best practices designed to prevent or minimize data loss and business disruption resulting from catastrophic events—everything from equipment failures and localized power outages to cyberattacks, civil emergencies, criminal or military attacks, and natural disasters.

Many businesses—especially small- and mid-sized organizations—neglect to develop a reliable, practicable disaster recovery plan. Without such a plan, they have little protection from the impact of significantly disruptive events.

Infrastructure failure can cost as much as $100,000 per hour, and critical application failure costs can range from $500,000 to $1 million per hour. Many businesses cannot recover from such losses. More than 40% of small businesses will not re-open after experiencing a disaster, and among those that do, an additional 25% will fail within the first year after the crisis. Disaster recovery planning can dramatically reduce these risks.

Disaster recovery planning involves strategizing, planning, deploying appropriate technology, and continuous testing. Maintaining backups of your data is a critical component of disaster recovery planning, but a backup and recovery process alone does not constitute a full disaster recovery plan.

Disaster recovery also involves ensuring that adequate storage and compute is available to maintain robust failover and failback procedures. Failover is the process of offloading workloads to backup systems so that production processes and end-user experiences are disrupted as little as possible. Failback involves switching back to the original primary systems.
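
At its simplest, failover is a health check plus a switch of traffic to a standby system, and failback is the reverse once the primary recovers. The sketch below uses the requests library with hypothetical endpoints to show the shape of that logic; in production, failover is usually handled by load balancers or DNS rather than application code.

    # Minimal failover decision: serve from the standby system when the
    # primary fails its health check. Endpoints are hypothetical; real
    # failover is usually driven by load balancers or DNS.
    import requests

    PRIMARY = "https://primary.example.com"
    STANDBY = "https://standby.example.com"

    def active_endpoint():
        try:
            if requests.get(PRIMARY + "/health", timeout=2).status_code == 200:
                return PRIMARY
        except requests.RequestException:
            pass  # primary unreachable: fail over
        return STANDBY  # failback occurs once the primary passes checks again

    print(active_endpoint())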

Business continuity planning

Business continuity planning creates systems and processes to ensure that all areas of your enterprise will be able to maintain essential operations or be able to resume them as quickly as possible in the event of a crisis or emergency. Disaster recovery planning is the subset of business continuity planning that focuses on recovering IT infrastructure and systems.

Disaster recovery planning

Business impact analysis

The creation of a comprehensive disaster recovery plan begins with business impact analysis. When performing this analysis, you’ll create a series of detailed disaster scenarios that can then be used to predict the size and scope of the losses you’d incur if certain business processes were disrupted. What if your customer service call center was destroyed by fire, for instance? Or an earthquake struck your headquarters?

This will allow you to identify the areas and functions of the business that are the most critical and enable you to determine how much downtime each of these critical functions could tolerate. With this information in hand, you can begin to create a plan for how the most critical operations could be maintained in various scenarios.

IT disaster recovery planning should follow from and support business continuity planning. If, for instance, your business continuity plan calls for customer service representatives to work from home in the aftermath of a call center fire, what types of hardware, software, and IT resources would need to be available to support that plan?

Risk analysis

Assessing the likelihood and potential consequences of the risks your business faces is also an essential component of disaster recovery planning. As cyberattacks and ransomware become more prevalent, it’s critical to understand the general cybersecurity risks that all enterprises confront today as well as the risks that are specific to your industry and geographical location.

For a variety of scenarios, including natural disasters, equipment failure, insider threats, sabotage, and employee errors, you’ll want to evaluate your risks and consider the overall impact on your business. Ask yourself the following questions:

  • What financial losses due to missed sales opportunities or disruptions to revenue-generating activities would you incur?
  • What kinds of damage would your brand’s reputation undergo? How would customer satisfaction be impacted?
  • How would employee productivity be impacted? How many labor hours might be lost?
  • What risks might the incident pose to human health or safety?
  • Would progress towards any business initiatives or goals be impacted? How?

Prioritizing applications

Not all workloads are equally critical to your business’s ability to maintain operations, and downtime is far more tolerable for some applications than for others. Separate your systems and applications into three tiers, depending on how long you could tolerate having them down and how serious the consequences of data loss would be (a brief sketch of this classification follows the list below).

  1. Mission-critical: Applications whose functioning is essential to your business’s survival.
  2. Important: Applications for which you could tolerate relatively short periods of downtime.
  3. Non-essential: Applications you could temporarily replace with manual processes or do without.
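
One lightweight way to make this classification actionable is to record each application’s tier alongside the recovery objectives that tier implies. In the sketch below, the application names and the per-tier RTO/RPO targets are illustrative assumptions, not recommendations:

    from dataclasses import dataclass

    @dataclass
    class TierPolicy:
        name: str
        rto_minutes: int   # target maximum downtime
        rpo_minutes: int   # target maximum data loss

    # Example targets -- assumptions to be replaced with your own analysis.
    TIERS = {
        1: TierPolicy("Mission-critical", rto_minutes=15, rpo_minutes=5),
        2: TierPolicy("Important", rto_minutes=240, rpo_minutes=60),
        3: TierPolicy("Non-essential", rto_minutes=1440, rpo_minutes=1440),
    }

    # Hypothetical application inventory.
    applications = {"order-processing": 1, "payroll": 2, "internal-wiki": 3}

    for app, tier in applications.items():
        policy = TIERS[tier]
        print(f"{app}: Tier {tier} ({policy.name}) -> "
              f"RTO {policy.rto_minutes} min, RPO {policy.rpo_minutes} min")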

Documenting dependencies

The next step in disaster recovery planning is creating a complete inventory of your hardware and software assets. It’s essential to understand critical application interdependencies at this stage. If one software application goes down, which others will be affected?
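
That question lends itself to a simple graph traversal. In this hedged sketch, the dependency map is invented for illustration; in practice, it would come from your configuration management database or service catalog:

    from collections import deque

    # Hypothetical map: each application -> the applications that depend on it.
    dependents = {
        "database": ["inventory-service", "billing"],
        "inventory-service": ["storefront"],
        "billing": ["storefront", "reporting"],
    }

    def blast_radius(failed_app):
        """Return every application affected, directly or transitively,
        if failed_app goes down."""
        affected, queue = set(), deque([failed_app])
        while queue:
            for downstream in dependents.get(queue.popleft(), []):
                if downstream not in affected:
                    affected.add(downstream)
                    queue.append(downstream)
        return affected

    print(blast_radius("database"))
    # {'inventory-service', 'billing', 'storefront', 'reporting'}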

Designing resiliency—and disaster recovery models—into systems as they are initially built is the best way to manage application interdependencies. It’s all too common in today’s microservices-based architectures to discover processes that can’t be initiated when other systems or processes are down, and vice versa. This is a challenging situation to recover from, and it’s vital to uncover such problems when you have time to develop alternate plans for your systems and processes—before an actual disaster strikes.

Establishing recovery time objectives, recovery point objectives, and recovery consistency objectives

By considering your risk and business impact analyses, you should be able to establish objectives for how long it should take to bring systems back up, how much data you could stand to lose, and how much data corruption or deviation you could tolerate.

Your recovery time objective (RTO) is the maximum amount of time it should take to restore application or system functioning following a service disruption.

Your recovery point objective (RPO) is the maximum age of the data that must be recovered in order for your business to resume regular operations. For some businesses, losing even a few minutes’ worth of data can be catastrophic, while those in other industries may be able to tolerate longer windows.

A recovery consistency objective (RCO) is established in the service-level agreement (SLA) for continuous data protection services. It is a metric for business data integrity across complex application environments: it specifies how many inconsistent entries in the business data of recovered processes or systems are tolerable in a disaster recovery situation.
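
To make these objectives concrete, the following sketch (with illustrative numbers, not recommendations) checks whether a backup schedule can satisfy an RPO and whether a rehearsed recovery time satisfies an RTO:

    def meets_rpo(backup_interval_minutes, rpo_minutes):
        """A backup taken every N minutes can lose up to N minutes of data,
        so the interval must not exceed the RPO."""
        return backup_interval_minutes <= rpo_minutes

    def meets_rto(measured_recovery_minutes, rto_minutes):
        """The recovery time observed in DR testing must not exceed the RTO."""
        return measured_recovery_minutes <= rto_minutes

    # Illustrative targets and measurements.
    print(meets_rpo(backup_interval_minutes=60, rpo_minutes=15))
    # False: hourly backups miss a 15-minute RPO
    print(meets_rto(measured_recovery_minutes=45, rto_minutes=60))
    # True: a 45-minute recovery meets a 60-minute RTO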

Regulatory compliance issues

All disaster recovery software and solutions that your enterprise has established must satisfy any data protection and security requirements you’re mandated to adhere to. This means that all data backup and failover systems must be designed to meet the same standards for ensuring data confidentiality and integrity as your primary systems.

At the same time, several regulatory standards stipulate that all businesses must maintain disaster recovery and/or business continuity plans. The Sarbanes-Oxley Act (SOX), for instance, requires all publicly held firms in the U.S. to maintain copies of all business records for a minimum of five years. Failure to comply with this regulation (including neglecting to establish and test appropriate data backup systems) can result in significant financial penalties for companies and even jail time for their leaders.

Choosing technologies

Backups serve as the foundation upon which any solid disaster recovery plan is built. In the past, most enterprises relied on tape and spinning disks (HDD) for backups, maintaining multiple copies of their data and storing at least one at an offsite location.

In today’s always-on, digitally transforming world, tape backups in offsite repositories often cannot achieve the RTOs necessary to maintain business-critical operations. Architecting your own disaster recovery solution involves replicating many of the capabilities of your production environment and will require you to incur costs for support staff, administration, facilities, and infrastructure. For this reason, many organizations are turning to cloud-based backup solutions or full-scale Disaster-Recovery-as-a-Service (DRaaS) providers.

Choosing recovery site locations

Building your own disaster recovery data center involves balancing several competing objectives. On the one hand, a copy of your data should be stored somewhere that’s geographically distant enough from your headquarters or office locations that it won’t be affected by the same seismic events, environmental threats, or other hazards as your main site. On the other hand, backups stored offsite always take longer to restore from than those located on-premises at the primary site, and network latency can be even greater across longer distances.

Continuous testing and review

Simply put, if your disaster recovery plan has not been tested, it cannot be relied upon. All employees with relevant responsibilities should participate in the disaster recovery test exercise, which may include maintaining operations from the failover site for a period of time.

If performing comprehensive disaster recovery testing is beyond your budget or capabilities, you can schedule a “tabletop exercise” walkthrough of the test procedures instead. Be aware, though, that this kind of testing is less likely than a full test to reveal anomalies or weaknesses in your DR procedures, especially the presence of previously undiscovered application interdependencies.

As your hardware and software assets change over time, your disaster recovery plan must be updated to reflect them, so review and revise the plan on a regular schedule.

Disaster Recovery-as-a-Service (DRaaS)

Disaster-Recovery-as-a-Service (DRaaS) is one of the most popular and fastest-growing managed IT service offerings available today. Your vendor will document RTOs and RPOs in a service-level agreement (SLA) that outlines your downtime limits and application recovery expectations.

DRaaS vendors typically provide cloud-based failover environments. This model offers significant cost savings compared with maintaining redundant dedicated hardware resources in your own data center. Contracts are available in which you pay a fee for maintaining failover capabilities plus the per-use costs of the resources consumed in a disaster recovery situation. Your vendor will typically assume all responsibility for configuring and maintaining the failover environment.

Disaster recovery service offerings differ from vendor to vendor. Some vendors define their offering as a comprehensive, all-in-one solution, while others offer piecemeal services ranging from single application restoration to full data center replication in the cloud. Some offerings may include disaster recovery planning or testing services, while others will charge an additional consulting fee for these offerings.

Be sure that any enterprise software applications you rely on are supported, as are any public cloud providers that you’re working with. You’ll also want to ensure that application performance is satisfactory in the failover environment, and that the failover and failback procedures have been well tested.

Cloud DR

If you have already built an on-premises disaster recovery (DR) solution, it can be challenging to evaluate the costs and benefits of maintaining it versus moving to a monthly DRaaS subscription instead.

Most on-premises DR solutions will incur costs for hardware, power, labor for maintenance and administration, software, and network connectivity. In addition to the upfront capital expenditures involved in the initial setup of your DR environment, you’ll need to budget for regular software upgrades. Because your DR solution must remain compatible with your primary production environment, you’ll want to ensure that your DR solution has the same software versions. Depending upon the specifics of your licensing agreement, this might effectively double your software costs.
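
One way to frame the comparison is simply to total the recurring cost categories on each side. Every figure below is a placeholder to be replaced with your own quotes, not a benchmark:

    # Illustrative annual costs in USD -- placeholders, not benchmarks.
    on_prem_dr = {
        "hardware_amortization": 50_000,
        "software_licenses": 40_000,   # may double if DR mirrors production licensing
        "power_and_facilities": 12_000,
        "admin_labor": 60_000,
        "network_connectivity": 8_000,
    }
    draas = {
        "subscription": 48_000,
        "usage_during_tests_and_events": 5_000,   # per-use resource costs
    }

    print(f"On-premises DR: ${sum(on_prem_dr.values()):,} per year")
    print(f"DRaaS:          ${sum(draas.values()):,} per year")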

Not only can moving to a DRaaS subscription reduce your hardware and software expenditures, but it can also lower your labor costs by shifting the burden of maintaining the failover site to the vendor.

If you’re considering third-party DRaaS solutions, you’ll want to make sure that the vendor has the capacity for cross-regional multi-site backups. If a significant weather event like a hurricane impacted your primary office location, would the failover site be far enough away to remain unaffected by the storm? Also, would the vendor have adequate capacity to meet the combined needs of all its customers in your area if many were impacted at the same time? You’re trusting your DRaaS vendor to meet RTOs and RPOs in times of crisis, so look for a service provider with a strong reputation for reliability.

Disaster recovery and IBM

Disaster recovery solutions based in the IBM Cloud are resilient and reliable. You can provision a failover site in any of the more than 60 data centers located in six regions and 18 global availability zones, for low latency and to meet geographically specific business requirements.

IBM disaster recovery solutions deliver enterprise-grade business continuity capabilities for on-premises, public, private, and hybrid cloud deployments. Solutions are designed to support on-premises to IBM Cloud, IBM Cloud to IBM Cloud, and third-party cloud provider to IBM Cloud disaster recovery architectures. IBM also offers business continuity consulting to help you anticipate and plan for a wide range of threats, risks, and potential business disruptions.

In addition, IBM has partnered with Zerto to introduce Zerto on IBM Cloud, a simple, scalable disaster recovery solution that installs seamlessly into the VMware vSphere environment and offers RPOs of seconds and RTOs of minutes for all virtual machines and workloads.

 

=====================================================================

Backup and Disaster Recovery Explained

Learn the basics of backup and disaster recovery so you can formulate effective plans that minimize downtime.

Table of Contents

  • What are backup and disaster recovery?
  • The importance of planning
  • Key terms
  • Prioritize workloads
  • Evaluate deployment options
  • Technologies
  • Don’t wait for disaster

This guide will help you:

  • Recognize the difference between backup and disaster recovery, and understand key concepts that are critical for developing effective strategies
  • Evaluate multiple cloud and on-premises deployment options to find the right fit for your organization
  • Identify the best technologies for achieving your backup and disaster recovery goals

Understanding the essentials of backup and disaster recovery is critical for minimizing the impact of unplanned downtime on your business. Across industries, organizations recognize that downtime can quickly result in lost revenue. Unfortunately, natural disasters, human error, security breaches and ransomware attacks can all jeopardize the availability of IT resources. Any downtime can derail customer interactions, sap employee productivity, destroy data and halt business processes.

Differentiating backup from disaster recovery, defining key terms, and evaluating various deployment options and technologies can help you develop effective strategies for avoiding the consequences of downtime.

What are backup and disaster recovery?

There’s an important distinction between backup and disaster recovery. Backup is the process of making an extra copy (or multiple copies) of data. You back up data to protect it. You might need to restore backup data if you encounter an accidental deletion, database corruption, or problem with a software upgrade.

Disaster recovery, on the other hand, refers to the plan and processes for quickly reestablishing access to applications, data, and IT resources after an outage. That plan might involve switching over to a redundant set of servers and storage systems until your primary data center is functional again.

Some organizations mistake backup for disaster recovery. But as they may discover after a serious outage, simply having copies of data doesn’t mean you can keep your business running. To ensure business continuity, you need a robust, tested disaster recovery plan.

The importance of planning

Your organization cannot afford to neglect backup or disaster recovery. If it takes hours to retrieve lost data after an accidental deletion, your employees or partners will sit idle, unable to complete business-critical processes that rely on your technology. And if it takes days to bring your business back online after a disaster, you stand to permanently lose customers. Given the amount of time and money you could lose in both cases, investments in backup and disaster recovery are completely justified.

Key terms

Understanding a few essential terms can help shape your strategic decisions and enable you to better evaluate backup and disaster recovery solutions.

  • Recovery time objective (RTO) is the targeted amount of time within which normal business operations must be restored after an outage. As you set your RTO, you’ll need to consider how much time you’re willing to lose—and the impact that time will have on your bottom line. The RTO can vary greatly from one type of business to another. For example, if a public library loses its catalog system, it can likely continue to function manually for a few days while the systems are restored. But if a major online retailer loses its inventory system, even 10 minutes of downtime—and the associated loss in revenue—would be unacceptable.
  • Recovery point objective (RPO) refers to the amount of data you can afford to lose in a disaster. You might need to copy data to a remote data center continuously so that an outage will not result in any data loss. Or you might decide that losing five minutes or one hour of data would be acceptable.
  • Failover is the disaster recovery process of automatically offloading tasks to backup systems in a way that is seamless to users. You might fail over from your primary data center to a secondary site, with redundant systems that are ready to take over immediately.
  • Failback is the disaster recovery process of switching back to the original systems. Once the disaster has passed and your primary data center is back up and running, you should be able to fail back seamlessly as well.
  • Restore is the process of transferring backup data to your primary system or data center. The restore process is generally considered part of backup rather than disaster recovery.

One last term might be helpful as you consider alternatives for managing your disaster recovery processes and your disaster recovery environment:

  • Disaster recovery as a service (DRaaS) is a managed approach to disaster recovery. A third party hosts and manages the infrastructure used for disaster recovery. Some DRaaS offerings might provide tools to manage the disaster recovery processes or enable organizations to have those processes managed for them.

Prioritize workloads

Once you understand the key concepts, it’s time to apply them to your workloads. Many organizations have multiple RTOs and RPOs that reflect the importance of each workload to their business.

For a major bank, the online banking system might be a critical workload—the bank needs to minimize time and data loss. However, the bank’s employee time-tracking application is less important. In the event of a disaster, the bank could allow that application to be down for several hours or even a day without having a major negative impact on the business. Defining workloads as Tier 1, Tier 2, or Tier 3 can help provide a framework for your disaster recovery plan.

Evaluate deployment options

The next step in designing a disaster recovery plan is to evaluate deployment options. Do you need to keep some disaster recovery functions or backup data on premises? Would you benefit from a public cloud or hybrid cloud approach?

Cloud

Cloud-based backup and disaster recovery solutions are becoming increasingly popular among organizations of all sizes. Many cloud solutions provide the infrastructure for storing data and, in some cases, the tools for managing backup and disaster recovery processes.

By selecting a cloud-based backup or disaster recovery offering, you can avoid the large capital investment for infrastructure as well as the costs of managing the environment. In addition, you gain rapid scalability plus the geographic distance necessary to keep data safe in the event of a regional disaster.

Cloud-based backup and disaster recovery solutions can support both on-premises and cloud-based production environments. You might decide, for example, to store only backed up or replicated data in the cloud while keeping your production environment in your own data center. With this hybrid approach, you still gain the advantages of scalability and geographic distance without having to move your production environment. In a cloud-to-cloud model, both production and disaster recovery are located in the cloud, although at different sites to ensure enough physical separation.
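
As a hedged illustration of shipping backups offsite, many cloud object storage services expose an S3-compatible API, so a backup job can be as simple as the sketch below. The endpoint, credentials, bucket name, and file paths are all placeholders:

    import boto3  # S3-compatible client library

    # Endpoint, credentials, and bucket are placeholders for your provider's values.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example-cloud.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Upload a local backup archive to the offsite bucket.
    s3.upload_file("/backups/db-2024-01-15.tar.gz", "dr-backups",
                   "db/db-2024-01-15.tar.gz")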

On-premises

In some cases, keeping certain backup or disaster recovery processes on-premises can help you retrieve data and recover IT services rapidly. Retaining some sensitive data on premises might also seem appealing if you need to comply with strict data privacy or data sovereignty regulations.

For disaster recovery, a plan that relies wholly on an on-premises environment would be challenging: if a natural disaster or power outage strikes, your entire data center—with both primary and secondary systems—would be affected. That’s why most disaster recovery strategies employ a secondary site that is some distance away from the primary data center. You might locate that site across town, across the country, or across the globe, depending on how you decide to balance factors such as performance, regulatory compliance, and physical accessibility to the secondary site.

Technologies

Depending on which deployment options you choose, you might have several alternatives for the types of technologies and processes you employ for backup and for disaster recovery.

Traditional tape

Despite having been around for decades, traditional magnetic tape storage can still play a role in your backup plan. With a tape solution, you can store a large amount of data reliably and cost-effectively.

While tape can be effective for backup, it is not usually employed for disaster recovery, which requires the faster access time of disk-based storage. Also, if you need to physically retrieve a tape from an offsite vault, you could lose several hours or even days of availability.

Snapshot-based replication

A snapshot-based backup captures the current state of an application or disk at a moment in time. By writing only the changed data since the last snapshot, this method can help protect data while conserving storage space.

Snapshot-based replication can be used for backup or disaster recovery. Of course, your data is only as complete as your most recent snapshot. If you take snapshots every hour, you must be willing to lose an hour’s worth of data.
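
As a toy illustration of the idea (not a production implementation), each snapshot below records only the blocks that changed since the previous one, and a restore replays the snapshots in order:

    def take_snapshot(current_blocks, previous_blocks):
        """Record only the blocks that changed since the last snapshot."""
        return {addr: data for addr, data in current_blocks.items()
                if previous_blocks.get(addr) != data}

    def restore(snapshots):
        """Rebuild state by replaying snapshots from oldest to newest."""
        state = {}
        for snap in snapshots:
            state.update(snap)
        return state

    # Illustrative block maps (block address -> contents).
    disk_t0 = {0: "boot", 1: "alpha", 2: "beta"}
    disk_t1 = {0: "boot", 1: "alpha-v2", 2: "beta"}   # only block 1 changed

    snapshots = [take_snapshot(disk_t0, {}), take_snapshot(disk_t1, disk_t0)]
    print(snapshots[1])        # {1: 'alpha-v2'} -- only the delta is stored
    print(restore(snapshots))  # {0: 'boot', 1: 'alpha-v2', 2: 'beta'}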

Continuous replication

Many organizations are moving toward continuous replication for disaster recovery as well as for backup. With this method, the latest copy of a disk or application is continuously replicated to another location or the cloud, minimizing downtime and providing more granular recovery points.
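
The contrast with snapshots is that every change is shipped as soon as it happens. In this hedged sketch, a queue stands in for the change feed (such as a database log) and a background thread plays the role of the replication link:

    import queue
    import threading

    change_log = queue.Queue()   # stands in for a database log or block-change feed
    replica = {}                 # the continuously updated copy at the DR site

    def ship_changes():
        """Apply each change to the replica as soon as it is produced, so the
        copy lags the primary by at most the writes still in flight."""
        while True:
            key, value = change_log.get()
            replica[key] = value
            change_log.task_done()

    threading.Thread(target=ship_changes, daemon=True).start()

    # Writes on the primary are replicated as they happen.
    for i in range(3):
        change_log.put((f"record-{i}", f"value-{i}"))

    change_log.join()  # wait until the replica has caught up
    print(replica)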

Don’t wait for disaster

For most organizations, backup and disaster recovery strategies are absolutely critical to maintain the health of the business. IBM Cloud Disaster Recovery Solutions can help you evaluate and update your strategies, which can help you control complexity and cost. Additionally, IBM Cloud Object Storage offers a scalable and secure destination for backing up your critical data.

Whatever you do, don’t wait to assess your strategies. Backup and disaster recovery plans can help only if they are designed, deployed and tested long before they are needed.

 

=====================================================================

Top 7 Most Common Uses of Cloud Computing
