
Atlas AI agent language model library

Early adopters

The features described in this section are currently available to early adopters only and are subject to change.

This article describes the language models available in Atlas AI. Model availability depends on your preferred cloud vendor and the location of your Cognite Data Fusion project.

Anthropic

Claude 3.5 Sonnet is Anthropic's most capable model, offering advanced reasoning and high accuracy. It performs well across a wide range of tasks, making it well suited for complex, accuracy-sensitive applications.

  • Cloud availability: Amazon
  • Speed: Medium
  • Quality: High
  • Context length: Long

Claude 3 Opus provides high accuracy and strong performance across a broad range of evaluations. It excels at complex tasks requiring advanced reasoning and precision, making it well suited for sophisticated AI applications.

  • Cloud availability: Amazon
  • Speed: Medium
  • Quality: High
  • Context length: Long

Claude 3 Haiku is designed for near-instant responses to simple queries, enabling interactions that feel natural and human-like. It's well suited for applications that require fast, efficient processing.

  • Cloud availability: Amazon
  • Speed: High
  • Quality: Medium
  • Context length: Long

Google

Gemini 1.5 Pro builds on its predecessor with improved performance and a larger context window. It excels at complex reasoning and visual information processing, and delivers high-quality results for demanding AI applications.

  • Cloud availability: Google
  • Speed: Medium
  • Quality: High
  • Context length: Long

Gemini 1.5 Flash is designed for speed and efficiency in multimodal tasks, including visual understanding, classification, summarization, and content creation from various media inputs. It's ideal for high-volume, latency-sensitive applications like chat assistants and on-demand content generation.

  • Cloud availability: Google
  • Speed: High
  • Quality: Medium
  • Context length: Long
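
For illustration, the sketch below shows a minimal multimodal call to Gemini 1.5 Flash that combines a text instruction with an image. It uses Google's standalone google-generativeai Python SDK rather than the Atlas AI interface, and the API key, model identifier, and file path are placeholders, not values taken from this article.

    # Minimal sketch: multimodal summarization with Gemini 1.5 Flash
    # (standalone google-generativeai SDK; not the Atlas AI interface).
    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key="YOUR_API_KEY")            # placeholder credential
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model identifier

    # A single request can mix text and image parts.
    response = model.generate_content([
        "Summarize the key trend in this chart in two sentences.",
        Image.open("chart.png"),                       # placeholder image path
    ])
    print(response.text)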

OpenAI

GPT-4o excels in generating structured outputs and handling multimodal inputs, making it ideal for applications requiring precise data extraction and analysis over text and images.

  • Cloud availability: Amazon, Microsoft, Google
  • Speed: Medium
  • Quality: High
  • Context length: Long
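
To make "structured outputs" concrete, the sketch below asks GPT-4o to return JSON that conforms to a fixed schema. This is a hedged example using OpenAI's own Python SDK directly, not the Atlas AI interface; the schema, field names, and example sentence are invented for illustration.

    # Minimal sketch: schema-constrained extraction with GPT-4o
    # (OpenAI Python SDK; not the Atlas AI interface).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Extract the equipment fields from the user's text."},
            {"role": "user", "content": "Pump P-101 was installed in 2016 and runs at 1450 rpm."},
        ],
        # Structured outputs: the reply must validate against this JSON schema.
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "equipment",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "tag": {"type": "string"},
                        "install_year": {"type": "integer"},
                        "speed_rpm": {"type": "integer"},
                    },
                    "required": ["tag", "install_year", "speed_rpm"],
                    "additionalProperties": False,
                },
            },
        },
    )
    print(completion.choices[0].message.content)  # JSON string matching the schema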

GPT-4o Mini offers the capabilities of GPT-4o in a smaller model size, with vision support and a 16K token output window. It is suitable for applications requiring efficient processing and structured outputs over text and images.

  • Cloud availability: Amazon, Microsoft, Google
  • Speed: High
  • Quality: Medium
  • Context length: Long

GPT-4 Turbo extends the capabilities of GPT-4 with a 128K token context window, allowing it to process and generate responses based on extensive inputs. It is ideal for applications requiring analysis and generation over large documents or datasets.

  • Cloud availability: Amazon, Microsoft, Google
  • Speed: Medium
  • Quality: High
  • Context length: Long

GPT-4 offers enhanced reasoning abilities with an 8K token context window. It excels in generating high-quality content, understanding complex instructions, and performing advanced reasoning tasks, making it suitable for sophisticated applications.

  • Cloud availability: Amazon, Microsoft, Google
  • Speed: Medium
  • Quality: High
  • Context length: Short

GPT-3.5 Turbo is optimized for chat and completion tasks with a 4K token context window. It excels at generating natural language responses, content moderation, and basic reasoning tasks.

  • Cloud availability: Amazon, Microsoft, Google
  • Speed: High
  • Quality: Medium
  • Context length: Short

Mistral AI

Mistral Large offers advanced reasoning, extensive knowledge, and robust coding capabilities. It excels in precise instruction following and sophisticated text transformations.

  • Cloud availability: Amazon
  • Speed: Low
  • Quality: High
  • Context length: Medium

Mixtral 8x7B is a sparse mixture-of-experts model that balances model size and inference speed, making it well suited for efficient text generation tasks. It delivers performance competitive with larger models at faster inference times.

  • Cloud availability: Amazon
  • Speed: High
  • Quality: Medium
  • Context length: Medium

Meta

Llama 3.1 70B is an advanced multilingual model optimized for high-performance tasks. It excels at content creation, conversational AI, and language understanding, making it a good fit for enterprise solutions.

  • Cloud availability: Amazon
  • Speed: Medium
  • Quality: High
  • Context length: Short

Llama 3.1 8B is a compact multilingual model optimized for dialogue and general-purpose tasks. Its efficient performance makes it suitable for applications with limited computational resources.

  • Cloud availability: Amazon
  • Speed: High
  • Quality: Medium
  • Context length: Short