Microsoft partnered with OpenAI for Generative AI technology, and the partnership has been a tremendous success for both companies. Google recently released its own Generative AI model for developers and also announced partnerships with Generative AI startups like Anthropic.
Today, Amazon’s AWS entered the Generative AI race with several new services.
- Amazon Bedrock is a new service that makes foundation models from AI21 Labs, Anthropic, Stability AI, and Amazon accessible via an API. Bedrock offers a range of models for text and images, including Amazon's Titan models. Customers can also privately customize models with their own data, and easily integrate and deploy them into their applications using AWS tools and capabilities.
- Amazon Titan models: Amazon announced two Titan models. The first is a generative LLM for tasks such as summarization, text generation (for example, creating a blog post), classification, open-ended Q&A, and information extraction. The second is an embeddings LLM that translates text inputs (words, phrases, or possibly larger units of text) into numerical representations (known as embeddings) that capture the semantic meaning of the text.
- Amazon announced the general availability of new, network-optimized Trn1n instances, which offer 1,600 Gbps of network bandwidth and are designed to deliver 20% higher performance than Trn1 for large, network-intensive models.
- Amazon announced the general availability of Inf2 instances powered by AWS Inferentia2, which are optimized specifically for large-scale generative AI applications with models containing hundreds of billions of parameters.
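To make the "accessible via an API" point concrete, here is a minimal sketch of what calling a Bedrock text model could look like from Python. The model ID and request fields below are illustrative assumptions, not confirmed Bedrock identifiers; the exact schema comes from the Bedrock documentation.

```python
import json

# Hypothetical model ID for illustration only; real IDs come from
# the Bedrock documentation.
MODEL_ID = "amazon.titan-text-example"

def build_text_request(prompt: str, max_tokens: int = 256) -> str:
    """Serialize an assumed text-generation request body as JSON."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

body = build_text_request("Write a short blog post about generative AI on AWS.")
# With the AWS SDK for Python (boto3), the request would then be sent
# roughly like this (sketch, not executed here):
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId=MODEL_ID, body=body,
#                                  contentType="application/json",
#                                  accept="application/json")
print(body)
```

The key design point of Bedrock is that swapping providers (AI21 Labs, Anthropic, Stability AI, Amazon) is largely a matter of changing the model ID and request body, while the API call itself stays the same.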
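The embeddings idea behind the second Titan model can be illustrated with a toy example. The vectors and the cosine-similarity helper below are invented for illustration (real embeddings have hundreds or thousands of dimensions and come from the model itself); the point is only that semantically related texts map to nearby vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (made up for this sketch): related
# words get similar vectors, unrelated words do not.
emb = {
    "cat":     [0.90, 0.10, 0.00, 0.20],
    "kitten":  [0.85, 0.15, 0.05, 0.25],
    "invoice": [0.00, 0.90, 0.80, 0.10],
}

print(cosine_similarity(emb["cat"], emb["kitten"]))   # close to 1.0
print(cosine_similarity(emb["cat"], emb["invoice"]))  # much lower
```

This nearest-neighbor property is what makes embeddings useful for search, clustering, and recommendation: you embed a query, then rank documents by similarity to it.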
AI and ML have been a focus for Amazon for over 20 years, and many of the capabilities customers use with Amazon are driven by ML.