Exclusive: Amazon AWS aims to outshine Microsoft with Gen AI offerings at Re:Invent

Amazon AWS, the cloud computing giant, has been perceived as playing catch-up with its rivals Microsoft Azure and Google Cloud in the emerging and exciting field of generative AI.

But this week, at its annual AWS Re:Invent conference, Amazon plans to showcase its ambitious vision for generative AI, and how it can help enterprises build innovative and differentiated applications using a variety of models and data sources.

In an interview with VentureBeat on Monday, Swami Sivasubramanian, Amazon AWS’s vice president of Data and AI, who oversees all AWS database, analytics, machine learning and generative AI services, gave a preview of what to expect from his keynote on Wednesday morning and AWS CEO Adam Selipsky’s keynote on Tuesday morning.

The main theme around generative AI, he said, is that enterprises want to have the flexibility and choice to work with different models from different providers, rather than being locked into a single vendor or platform. However, he added, the models themselves may not be enough to provide a competitive edge, as they may become commoditized over time. Therefore, the key differentiator for businesses will be their own proprietary data, and how they can integrate it with the models to create unique applications.

To support this vision, Sivasubramanian said Amazon is emphasizing two things at Re:Invent: a wide range of generative AI models that customers can access through its Bedrock service, and more seamless data management tools that customers can use to build and deploy their own generative AI applications.

He said his keynote will cover the “inherent symbiotic relationship” between data and generative AI, and how generative AI can not only benefit from data, but also enhance and improve databases and data systems in return.

Here are some of the highlights that Sivasubramanian hinted at for Re:Invent, which comes just two weeks after Microsoft showed it was going all-in on Gen AI at its competing Ignite conference:

Bedrock apps in less than a minute: AWS’s Bedrock, unveiled in April, is a fully managed service that gives customers access to foundation generative AI models through an API. Sivasubramanian said Bedrock is being made even easier to use, and that he will feature customer stories showing how quickly applications can be built on it, with some examples taking less than a minute. Customers such as Booking.com, Intuit, LexisNexis, and Bridgewater Associates are among those using Bedrock to create impactful applications.
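
As a rough illustration of what building on Bedrock looks like in code, here is a minimal sketch using the boto3 bedrock-runtime client to call a hosted model; the model ID, prompt, and parameters are illustrative rather than drawn from any customer example in the keynote.

```python
# Minimal sketch: invoking a foundation model through Bedrock's runtime API with boto3.
# The model ID and request body below are illustrative; request formats vary by model provider.
import json

import boto3

# Assumes AWS credentials and region are already configured in the environment.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize our Q3 support tickets in three bullet points.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",   # any model enabled for the account in Bedrock
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```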

More LLM choice: Through Bedrock, Amazon already gives enterprise customers access to models such as its own Titan family of pretrained foundation models, as well as third-party foundation models like AI21’s Jurassic, Anthropic’s Claude, Meta’s Llama 2, and Stability AI’s Stable Diffusion. But expect to see more action here, including more about Amazon’s partnership with OpenAI competitor Anthropic, following Amazon’s significant investment in that company in September. “We’ll continue to invest deeply in model choice in a big way,” Sivasubramanian said.
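
For a sense of what that choice looks like from the API side, the short sketch below lists the foundation models available to an account through the boto3 bedrock client; the output will vary by account and region.

```python
# Minimal sketch: enumerating the foundation models an account can reach through Bedrock.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Print provider and model ID for each available foundation model.
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["providerName"], model["modelId"])
```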

Vector database expansions: Another area where generative AI models can make a difference is vector databases, which enable semantic search across unstructured data such as images, text, and video. By storing the embeddings that generative AI models produce, vector databases can return the data most semantically similar to a given query, rather than relying on keywords or metadata. In July, Amazon launched a vector database capability, Vector Engine, for its OpenSearch Serverless offering, in preview. Sivasubramanian said Vector Engine has seen “amazing traction” since its launch, and hinted that it may soon become generally available. He also suggested that Amazon may extend vector search capabilities to other databases in its portfolio. “You’ll see us making this a lot easier and better as part of Bedrock, but also in many other areas,” he said.
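
As a hypothetical sketch of how such a semantic search is expressed, the snippet below issues a k-NN query through the opensearch-py client; the index name, field name, and placeholder embedding are illustrative, and in practice the query vector would come from an embedding model.

```python
# Hypothetical sketch: semantic search with OpenSearch's k-NN query, using opensearch-py.
# Assumes an index named "product-docs" with a knn_vector field "embedding"; names are illustrative,
# and the query vector would normally be produced by an embedding model.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # endpoint/auth config omitted

query_embedding = [0.12, -0.03, 0.44]  # placeholder; real embeddings have hundreds of dimensions

results = client.search(
    index="product-docs",
    body={
        "size": 5,
        "query": {
            "knn": {
                "embedding": {"vector": query_embedding, "k": 5}
            }
        },
    },
)

# Show the closest matches by similarity score.
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```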

Gen AI applications: Sivasubramanian also hinted at some announcements related to the application layer of the enterprise generative AI stack. He mentioned examples of applications that are already available and integrated with generative AI models, such as Amazon QuickSight, a serverless business intelligence service that lets customers create and share interactive dashboards and reports, and Amazon HealthScribe, which automatically generates clinical notes by analyzing patient-clinician conversations. He said these applications are designed to be easy and accessible for users who may not have any knowledge of or experience with generative AI or coding.

Zero ETL: A key challenge for enterprises with complex data needs is integrating data from different sources and formats without going through the cumbersome and costly process of extract, transform, and load (ETL). This process involves moving data from one database to another, often requiring data conversion and transformation. To avoid this friction, some cloud providers are developing “fabric” technologies, which use open and standard formats for data exchange and interoperability. Microsoft has been touting its Fabric initiative, and some analysts say it has an edge over Amazon and Google here. But Sivasubramanian said Amazon has always tried to give developers choices for databases, and is continuing to invest in its zero-ETL vision, which it began implementing last year by integrating some of its own databases, such as Aurora and Redshift. Enterprises also want to store and query vector data alongside their other business data in those same databases. “You’ll continue to see us improve these services,” he said, citing the recent addition of vector search support to Amazon’s Aurora MySQL, a cloud-based relational database. “You’ll see us make more progress on zero ETL in a big and meaningful way.”
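
To make the zero-ETL idea concrete, the sketch below assumes an Aurora-to-Redshift integration has already replicated a transactional table into the warehouse, where it can be queried with ordinary SQL via the Redshift Data API, with no pipeline in between; the workgroup, database, and table names are hypothetical.

```python
# Hypothetical sketch: querying Aurora data that a zero-ETL integration has replicated into
# Redshift, via the Redshift Data API. Workgroup, database, and table names are placeholders.
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

response = redshift_data.execute_statement(
    WorkgroupName="analytics-wg",   # Redshift Serverless workgroup (placeholder)
    Database="dev",
    Sql="SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id LIMIT 10;",
)

# The call is asynchronous: this prints the statement ID, and results are
# retrieved later with get_statement_result.
print(response["Id"])
```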

Secure generative AI customization, with data staying in the customer’s own cloud: During Selipsky’s and Sivasubramanian’s keynotes, some AWS customers will share how they are customizing generative AI models with Bedrock, further training or fine-tuning them to suit their specific needs and domains. They are doing so without compromising data security and privacy, because their data remains within their own virtual private cloud (VPC), a secure and isolated section of the AWS cloud. Sivasubramanian said this is “a big differentiator” that sets AWS apart from other cloud providers.
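
A minimal sketch of what such a customization job might look like, assuming the boto3 bedrock client’s create_model_customization_job call and its vpcConfig option; every name, ARN, S3 URI, and network ID below is a placeholder, not something announced at the event.

```python
# Minimal sketch, assuming boto3's bedrock create_model_customization_job call:
# fine-tuning a base model on training data that stays in the customer's own S3 bucket,
# with job traffic kept inside the customer's VPC. All identifiers are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="support-summarizer-finetune",
    customModelName="support-summarizer-v1",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-private-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-private-bucket/output/"},
    hyperParameters={"epochCount": "2"},
    # Assumption: pinning the job to the customer's VPC via subnet/security-group IDs.
    vpcConfig={
        "subnetIds": ["subnet-0abc1234"],
        "securityGroupIds": ["sg-0def5678"],
    },
)
```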

Generative AI chip innovations: Lastly, Amazon has been developing its own silicon solutions to power generative AI. Sivasubramanian said AWS will provide some updates on the performance and adoption of its Nitro hypervisor and its Graviton family of chips, which are designed to offer high performance and low cost for cloud computing. He will also talk about its Trainium and Inferentia chips, which are specialized for generative AI training and inference, respectively.


