Stop your public-cloud AI projects from dripping you dry

Last year, Andreessen Horowitz published a provocative blog post entitled “The Cost of Cloud, a Trillion Dollar Paradox.” In it, the venture capital firm argued that out-of-control cloud spending is resulting in public companies leaving billions of dollars in potential market capitalization on the table. An alternative, the firm suggests, is to recalibrate cloud resources into a hybrid model. Such a model can boost a company’s bottom line and free capital to focus on new products and growth. 

Whether enterprises follow this guidance remains to be seen, but one thing we know for sure is that CIOs are demanding more agility and performance from their supporting infrastructure. That’s especially so as they look to use sophisticated, compute-intensive artificial intelligence/machine learning (AI/ML) applications to improve their ability to make real-time, data-driven decisions.

To this end, the public cloud has been foundational in helping to usher AI into the mainstream. But the factors that made the public cloud an ideal testing ground for AI (elastic pricing and the ease of flexing capacity up or down, among others) are actually preventing AI from realizing its full potential.

Here are some considerations for organizations looking to optimize the benefits of AI in their environments.

For AI, the cloud is not one-size-fits-all

Data is the lifeblood of the modern enterprise, the fuel that generates AI insights. And because many AI workloads must constantly ingest large and growing volumes of data, it’s imperative that infrastructure can support these requirements in a cost-effective and high-performance way.

When deciding how to best tackle AI at scale, IT leaders need to consider a variety of factors. The first is whether colocation, public cloud or a hybrid mix is best suited to meet the unique needs of modern AI applications. 

While the public cloud has been invaluable in bringing AI to market, it doesn’t come without its share of challenges. These include:

  • Vendor lock-in: Most cloud-based services pose some risk of lock-in. However, some cloud-based AI services available today are highly platform-specific, each sporting its own particular nuances and distinct partner-related integrations. As a result, many organizations tend to consolidate their AI workloads with a single vendor. That makes it difficult for them to switch vendors in the future without incurring significant costs.
  • Elastic Pricing: The ability to pay only for what you use is what makes the public cloud such an appealing option for businesses, especially those hoping to reduce their CapEx spending. And consuming a public cloud service by the drip often makes good economic sense in the short term. But organizations with limited visibility into their cloud utilization all too often find that they are consuming it by the bucket. At that point it becomes a tax that stifles innovation.
  • Egress Fees: With cloud data transfers, customers don’t pay to send data into the cloud. Getting that data back out, however, incurs egress fees, which can quickly add up (a rough arithmetic sketch follows this list). For instance, disaster recovery systems are often distributed across geographic regions so they remain resilient in the event of a disruption, which means data must be continually duplicated across availability zones or to other platforms. As a result, IT leaders are coming to understand that, at a certain point, the more data that’s pushed into the public cloud, the more likely they are to be painted into a financial corner.
  • Data Sovereignty: The sensitivity and locality of the data are other crucial factors in determining which cloud provider is the most appropriate fit. In addition, as a raft of new state-mandated data privacy regulations goes into effect, it will be important to ensure that all data used for AI in public cloud environments complies with the prevailing rules.
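
To make the egress point concrete, here is a back-of-the-envelope sketch of how per-gigabyte egress charges compound as outbound data volume grows. The per-GB rate, starting volume and growth rate are illustrative assumptions, not published pricing from any provider.

```python
# Back-of-the-envelope estimate of how per-GB egress fees compound as
# outbound data volume grows. All rates and volumes are hypothetical
# examples, not actual provider pricing.

EGRESS_RATE_PER_GB = 0.09   # assumed $/GB transferred out of the cloud
MONTHLY_GROWTH = 0.10       # assumed 10% month-over-month volume growth

def egress_cost(gb_transferred: float, rate: float = EGRESS_RATE_PER_GB) -> float:
    """Monthly egress charge for a given outbound transfer volume."""
    return gb_transferred * rate

def yearly_projection(initial_gb: float, growth: float = MONTHLY_GROWTH) -> float:
    """Total egress spend over 12 months if outbound volume keeps growing."""
    total, volume = 0.0, initial_gb
    for _ in range(12):
        total += egress_cost(volume)
        volume *= 1 + growth
    return total

if __name__ == "__main__":
    # e.g., replicating 20 TB a month to another region for disaster recovery
    start_gb = 20_000
    print(f"Month 1 egress: ${egress_cost(start_gb):,.0f}")
    print(f"12-month total: ${yearly_projection(start_gb):,.0f}")
```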

Three questions to ask before moving AI to the cloud

The economies of scale that public cloud providers bring to the table have made the public cloud a natural proving ground for today’s most demanding enterprise AI projects. That said, before going all-in, IT leaders should consider the following three questions to determine whether it is indeed their best option.

At what point does the public cloud stop making economic sense?

Public cloud offerings such as AWS and Azure let users quickly and cheaply scale their AI workloads, since they pay only for what they use. However, these costs are not always predictable, especially because data-intensive workloads tend to mushroom in volume as they ingest more and more data from different sources to train and refine AI models. While “paying by the drip” is easier, faster and cheaper at small scale, it doesn’t take long for those drips to accumulate into buckets, pushing the workload into a more expensive pricing tier.

You can mitigate the cost of these buckets by committing to long-term contracts with volume discounts, but the economics of these multi-year contracts still rarely pencil out. The rise of AI Compute-as-a-Service outside the public cloud provides options for those who want the convenience and cost predictability of an OpEx consumption model with the reliability of dedicated infrastructure.
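
One way to pressure-test whether a multi-year commitment or dedicated capacity pencils out is a simple break-even comparison like the sketch below. The hourly rate, discount and flat monthly fee are illustrative assumptions rather than quotes from any vendor; the point is where the options cross over, not the exact figures.

```python
# Rough break-even comparison between on-demand GPU hours, a committed-use
# discount, and flat-rate dedicated capacity. All prices are illustrative
# assumptions chosen for the arithmetic, not real vendor rates.

ON_DEMAND_RATE = 3.00       # assumed $/GPU-hour, pay-as-you-go
COMMITTED_DISCOUNT = 0.40   # assumed 40% discount for a multi-year commitment
DEDICATED_MONTHLY = 9_000   # assumed flat monthly fee for dedicated capacity

def monthly_cost(gpu_hours: float) -> dict:
    """Estimated monthly spend under each consumption model."""
    return {
        "on_demand": gpu_hours * ON_DEMAND_RATE,
        "committed": gpu_hours * ON_DEMAND_RATE * (1 - COMMITTED_DISCOUNT),
        "dedicated": DEDICATED_MONTHLY,
    }

if __name__ == "__main__":
    for hours in (1_000, 3_000, 6_000):
        costs = monthly_cost(hours)
        cheapest = min(costs, key=costs.get)
        summary = ", ".join(f"{k}: ${v:,.0f}" for k, v in costs.items())
        print(f"{hours:>5} GPU-hours/month -> {summary} (cheapest: {cheapest})")
```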

Should all AI workloads be treated the same way?

It’s important to remember that AI isn’t a zero-sum game: there’s often room for both the public cloud and dedicated infrastructure, or something in between (hybrid). Start by looking at the attributes of your applications and data, and invest the time upfront in understanding the specific technology requirements of the individual workloads in your environment and the desired business outcomes for each. Then seek out an architectural model that lets you match each stage of your AI development journey to the IT resource delivery model that fits it.
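
As a thought exercise, a workload-by-workload assessment might start with something as simple as the sketch below, which maps a few attributes of each workload to a candidate delivery model. The attributes, thresholds and recommendations are invented for illustration; a real assessment would use your own criteria and weightings.

```python
# Toy decision aid: map a workload's attributes to a candidate delivery
# model. Attributes, thresholds and recommendations are illustrative
# assumptions, not a prescriptive framework.

def recommend_model(workload: dict) -> str:
    """Suggest public cloud, hybrid, or dedicated/colocation for a workload."""
    if workload.get("bursty", False) and workload["monthly_data_tb"] < 5:
        return "public cloud"            # spiky demand, modest data: elasticity wins
    if workload.get("data_residency_constraints", False) or \
            workload["monthly_data_tb"] > 50:
        return "dedicated / colocation"  # sovereignty or heavy, steady data volumes
    return "hybrid"                      # mixed profile: split by development stage

if __name__ == "__main__":
    workloads = {
        "model experimentation": {"bursty": True, "monthly_data_tb": 1},
        "production training": {"bursty": False, "monthly_data_tb": 80},
        "batch inference": {"bursty": False, "monthly_data_tb": 10},
    }
    for name, attrs in workloads.items():
        print(f"{name}: {recommend_model(attrs)}")
```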

Which cloud model will enable you to deploy AI at scale?

In the land of AI model training, fresh data must be fed into the compute stack regularly to improve the predictive capabilities of the applications those models support. As such, the proximity of compute and data repositories has increasingly become an important selection criterion. Of course, not all workloads require dedicated, persistent, high-bandwidth connectivity. But for those that do, undue network latency can severely hamper their potential. Beyond performance, a growing number of data privacy regulations dictate how and where certain data can be accessed and processed. These regulations should also be part of the cloud-model decision.
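
To see why proximity matters, the sketch below estimates how long it would take to move a training dataset to the compute stack over links of different effective bandwidths. The dataset size and link speeds are illustrative assumptions, and the math ignores protocol overhead and contention.

```python
# Rough estimate of how long it takes to feed a training dataset to the
# compute stack over links of varying effective bandwidth. Figures are
# illustrative assumptions, not benchmarks.

def transfer_hours(dataset_gb: float, bandwidth_gbps: float) -> float:
    """Hours to move dataset_gb over a link sustaining bandwidth_gbps."""
    gigabits = dataset_gb * 8            # gigabytes -> gigabits
    seconds = gigabits / bandwidth_gbps  # assumes ideal, sustained throughput
    return seconds / 3600

if __name__ == "__main__":
    dataset_gb = 10_000  # e.g., a 10 TB training corpus refreshed regularly
    for gbps in (1, 10, 100):
        print(f"{gbps:>3} Gbps link: {transfer_hours(dataset_gb, gbps):6.1f} hours")
```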

The public cloud has been essential in bringing AI into the mainstream. But that doesn’t mean it makes sense for every AI application to run in the public cloud. Investing the time and resources at the outset of your AI project to determine the right cloud model will go a long way towards hedging against AI project failure.

Holland Barry is SVP and field CTO at Cyxtera.

