Cloud-Based Generative AI Services

ChatGPT, built on a Foundation Model, made its debut in 2022 and revolutionized AI solutions for consumer and business applications. Now, with Foundation Models integrated into the offerings of public cloud providers, organizations of all types can harness the power of Generative AI in their own solutions.

What Are Foundation Models?

What sets Generative AI apart from traditional AI techniques is its use of Foundation Models (FMs).

AI models have been around for quite some time, and they have proven to be incredibly valuable in creating intelligent technology solutions. These models have been trained for specific tasks like language translation, image classification, and answering queries based on a particular knowledge base.

What Sets Foundation Models Apart?

Foundation Models may appear similar to these traditional models, but they are different beneath the surface. What sets FMs apart is their training on extensive datasets consisting of billions of source documents, their use of billions of parameters, and their applicability to a diverse range of use cases.

A Foundation Model is trained using a vast amount of input data to generate the most probable response to a given input. Although the FM does not possess a true understanding of the meaning behind the input, its extensive training data enables it to produce sensible responses to a wide range of knowledge domains.

However, the cost of training and operating Foundation Models can be prohibitively high, making it difficult for most enterprises to afford the necessary technical, computational, and financial resources to implement these advanced AI technologies in-house.

Cloud-Based Foundation Models

Thankfully, the extensive range of applications for Foundation Models makes public cloud providers the perfect hosts for these cutting-edge AI solutions. Sharing a large, versatile Foundation Model in a multi-tenant service is an invaluable asset for organizations seeking to leverage the power of Generative AI.

Cloud-based services offer a wide range of foundation models. The leading public cloud providers have unveiled their groundbreaking Generative AI services powered by Foundation Models. These cutting-edge services, offered as Platform-as-a-Service solutions, seamlessly integrate into a wide range of customer-created applications, including chatbots, websites, and custom software.

The three major cloud providers, listed below in alphabetical order, all offer cutting-edge Generative AI services powered by Foundation Models. While some capabilities are still in preview at the time of writing, these preview features are rapidly progressing towards global availability.

Amazon Bedrock

  • Bedrock is a Platform-as-a-Service (PaaS) offering designed from the ground up as a dedicated Generative AI service hosted on AWS infrastructure. It provides access to a diverse range of models from companies such as AI21 Labs, Anthropic, and Meta, alongside Amazon's own Titan models.
  • These models offer capabilities including text-to-text generation, image generation, document summarization, and embeddings. Bedrock also lets users fine-tune the models with their own data for a personalized and optimized AI experience (see the sketch below).
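 
To make this concrete, here is a minimal sketch of invoking a Bedrock-hosted model from Python with the boto3 SDK. The region, model ID (amazon.titan-text-express-v1), and request fields are illustrative assumptions; the models available depend on what you have enabled in your AWS account.

  # Minimal sketch: invoking a text model through Amazon Bedrock with boto3.
  # The model ID, region, and request fields are assumptions; adjust them to
  # match the models enabled in your account.
  import json
  import boto3

  bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

  request_body = json.dumps({
      "inputText": "Summarize the benefits of cloud-hosted foundation models.",
      "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
  })

  response = bedrock.invoke_model(
      modelId="amazon.titan-text-express-v1",  # assumed Titan text model ID
      contentType="application/json",
      accept="application/json",
      body=request_body,
  )

  result = json.loads(response["body"].read())
  print(result["results"][0]["outputText"])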

Google Vertex AI

  • Part of Google's comprehensive Vertex AI PaaS platform, this offering is powered by Google's proprietary PaLM foundation model. It boasts an array of capabilities, including support for text-to-text, text-to-image, document summarization, embeddings, and more. Additionally, it allows for tuning with customer-provided data, enabling a personalized and tailored AI experience.
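 
As a rough illustration, the snippet below shows how a text prompt might be sent to the PaLM-based text-bison model through the Vertex AI Python SDK. The project ID, location, and model name are assumptions and will vary with your environment.

  # Minimal sketch: generating text with a PaLM model via the Vertex AI SDK.
  # Project ID, location, and model name are assumptions.
  import vertexai
  from vertexai.language_models import TextGenerationModel

  vertexai.init(project="my-gcp-project", location="us-central1")

  model = TextGenerationModel.from_pretrained("text-bison")
  response = model.predict(
      "Summarize the benefits of cloud-hosted foundation models.",
      temperature=0.5,
      max_output_tokens=256,
  )
  print(response.text)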

Microsoft Azure OpenAI Service

  • Under the Microsoft Azure AI Services umbrella, Microsoft has developed the Azure OpenAI Service, a dedicated Generative AI service that hosts foundation models created by OpenAI, its strategic partner and the company behind products like ChatGPT and DALL-E.
  • This service offers a variety of OpenAI models, including GPT-3, GPT-4, and Codex models, providing capabilities such as text-to-text, text-to-image (DALL-E), embeddings, and more. Additionally, the Microsoft Azure OpenAI Service supports tuning with customer-provided data, allowing for personalized and tailored AI experiences.
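 
For illustration, here is a minimal sketch of calling a chat deployment through the Azure OpenAI Service with the openai Python library (v1+). The endpoint, API version, and deployment name are assumptions; in Azure you address your own named deployment rather than the raw model name.

  # Minimal sketch: calling an Azure OpenAI chat deployment with the openai
  # Python library. Endpoint, API version, and deployment name are assumptions.
  import os
  from openai import AzureOpenAI

  client = AzureOpenAI(
      azure_endpoint="https://my-resource.openai.azure.com",
      api_key=os.environ["AZURE_OPENAI_API_KEY"],
      api_version="2024-02-01",
  )

  completion = client.chat.completions.create(
      model="my-gpt-4-deployment",  # your deployment name, not the model name
      messages=[
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Summarize the benefits of cloud-hosted foundation models."},
      ],
  )
  print(completion.choices[0].message.content)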

Choosing the Optimal Cloud Service for Your Situation

Given the multitude of similar services offered by the leading cloud providers, selecting the ideal platform can be challenging. As with any tech decision, the right choice ultimately depends on the specific circumstances and requirements at hand.

Which Models Are Most Suitable for Your Needs?

When you review the attributes of each cloud service, you will see that many of the features share similarities, such as support for tuning with customer-provided data, text-to-text generation, and a PaaS deployment model.

The distinguishing factor among these cloud services lies in the foundation models they offer. While certain models, like Meta Llama 2, can be found on multiple platforms, others, such as OpenAI models and AWS Titan, are exclusive to a single cloud. 

Although foundation models are created with a general purpose in mind, they can also excel in specific areas such as conversation, Q&A, or even code generation. Certain models may be better suited for particular use cases than others. By conducting thorough prototyping and evaluation, you can find a model from a specific cloud provider that truly stands out and aligns perfectly with your unique requirements.
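 
One lightweight way to run that kind of evaluation is to wrap each provider's SDK call behind a common function signature, send the same test prompts to every candidate, and compare the outputs side by side. The sketch below assumes hypothetical call_bedrock, call_vertex, and call_azure wrappers you would implement with snippets like the ones above.

  # Minimal sketch: comparing candidate foundation models on the same prompts.
  # The call_* functions are hypothetical wrappers around each provider's SDK.
  from typing import Callable, Dict, List

  def evaluate_models(
      prompts: List[str],
      models: Dict[str, Callable[[str], str]],
  ) -> Dict[str, List[str]]:
      """Run every prompt through every candidate model and collect the outputs."""
      results: Dict[str, List[str]] = {name: [] for name in models}
      for prompt in prompts:
          for name, generate in models.items():
              results[name].append(generate(prompt))
      return results

  # Example usage (once the hypothetical wrappers are implemented):
  # outputs = evaluate_models(
  #     prompts=["Summarize this support ticket: ...", "Draft a reply to: ..."],
  #     models={"bedrock": call_bedrock, "vertex": call_vertex, "azure": call_azure},
  # )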

Where Are Your Data and Applications Hosted?

When considering public cloud solutions, it is crucial to take into account the specific customer-provided data that will be used to fine-tune the Foundation Model, as well as the applications that will use the model's output.

When training an AI service in one public cloud using data from a different public cloud, organizations may face potential challenges such as increased data egress costs and higher latency. While the effectiveness of the solution remains crucial, it's important to consider that building cross-cloud applications generally introduces additional complexity and can lead to higher costs.

What Are Your Integration Plans?

For each of the major cloud service providers, the foundation model is surrounded by services that add value to your existing or planned applications. You may also find that the application or service you're planning integrates better with one public cloud platform than another.

For example, one cloud service may better support the format of your tuning data, while another may have a Foundation Model resource in a data center closer to your target audience.

Key Features of Generative AI

Generative AI has captured the attention of numerous organizations, offering exciting prospects to enhance applications and boost employee productivity in their daily tasks.

Generative AI is so captivating because Foundation Models can:

  1. Be trained once and then applied to a diverse range of AI use cases.
  2. Generate natural language, and structure and condense information, making it more accessible and comprehensible.
  3. Be tuned with customer-provided data to expand their language and knowledge search capabilities.

Generative AI Is Available for All Organizations

With Generative AI services offered by public cloud providers, organizations of all sizes can now easily integrate this cutting-edge technology into their knowledge management systems. By spreading the high cost of training and deployment across multiple customers, these services have become cost-effective, making them accessible to a wide range of organizations.

Rob Kerr is VP, Artificial Intelligence at DesignMind. 

Learn about our AI and Data Science solutions, including AI Strategy and Roadmaps, Machine Learning and AI Model Development, and Large Language Models (LLM).