
NVIDIA Triton Inference Server Organization

NVIDIA Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs.

This top-level GitHub organization hosts repositories for officially supported backends, including TensorRT, TensorFlow, PyTorch, Python, ONNX Runtime, and OpenVINO; a minimal sketch of how a model selects its backend follows the list below. The organization also hosts several popular Triton tools, including:

  • Model Analyzer: A tool to analyze the runtime performance of a model and provide an optimized model configuration for Triton Inference Server.

  • Model Navigator: A tool that automates moving a model from its source format to an optimal format and configuration for deployment on Triton Inference Server.
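
Both tools operate on Triton's model repository layout: each model lives in its own directory with a config.pbtxt that names the backend serving it and declares its input and output tensors. The sketch below follows the layout documented in the server repository; the model name, tensor names, shapes, and backend choice are placeholders chosen for illustration.

```
model_repository/
└── my_model/                # placeholder model name
    ├── config.pbtxt
    └── 1/                   # numeric version directory
        └── model.onnx       # model file consumed by the chosen backend

# config.pbtxt — selects the backend and declares the I/O tensors
name: "my_model"
backend: "onnxruntime"       # e.g. tensorrt, pytorch, python, openvino
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```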

Getting Started

To learn about NVIDIA Triton Inference Server, refer to the Triton developer page and read our Quickstart Guide. Official Triton Docker containers are available from NVIDIA NGC.
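
As a concrete starting point, the Quickstart Guide launches Triton from the NGC container along these lines; `<xx.yy>` stands for a release tag, and the model-repository path is a placeholder for your own.

```bash
# Pull an official Triton container from NVIDIA NGC (<xx.yy> is a release tag)
docker pull nvcr.io/nvidia/tritonserver:<xx.yy>-py3

# Serve a local model repository; 8000/8001/8002 are the HTTP, gRPC, and
# metrics ports respectively
docker run --gpus=all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /full/path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:<xx.yy>-py3 \
  tritonserver --model-repository=/models
```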

Product Documentation

User documentation on Triton features, APIs, and architecture is located in the server documents on GitHub. A table of contents for the user documentation is located in the server README file.

Release notes, the support matrix, and license information are available in the NVIDIA Triton Inference Server Documentation.

Examples

Specific end-to-end examples for popular models, such as ResNet, BERT, and DLRM, are located on the NVIDIA Deep Learning Examples page on GitHub. Additional generic examples can be found in the server documents.
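
To complement those, a minimal client request against a running server looks roughly like the sketch below. It uses the tritonclient Python package (installable as tritonclient[http]); the model name, tensor names, shape, and data type are placeholders that must match the deployed model's configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server on the default local HTTP endpoint.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a request; "INPUT0", the shape, and FP32 are placeholders that
# must match the model's config.pbtxt.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# Run inference against a hypothetical model named "my_model" and read
# back the output tensor.
result = client.infer(model_name="my_model", inputs=[infer_input])
print(result.as_numpy("OUTPUT0"))
```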

FAQ

For technical questions about Triton Inference Server, please consult the Triton FAQ Guide. Information about future support and updates for Triton can be found in the Dynamo FAQ Guide.

Feedback

Share feedback or ask questions about NVIDIA Triton Inference Server by filing a GitHub issue.

Pinned Repositories

  1. server: The Triton Inference Server provides an optimized cloud and edge inferencing solution.

  2. core: The core library and APIs implementing the Triton Inference Server.

  3. backend: Common source, scripts, and utilities for creating Triton backends.

  4. client: Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.

  5. model_analyzer: Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of Triton Inference Server models.

  6. model_navigator: Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs.

Repositories

Other repositories in the organization include:

  • TensorRT-LLM (forked from NVIDIA/TensorRT-LLM): TensorRT-LLM provides users with an easy-to-use Python API to define large language models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also contains components to create Python and C++ runtimes that orchestrate inference execution in a performant way.

  • tensorrtllm_backend: The Triton TensorRT-LLM backend.

  • python_backend: A Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python (see the sketch after this list).

  • triton_cli: Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by the Triton Inference Server.

  • pytorch_backend: The Triton backend for PyTorch TorchScript models.

  • onnxruntime_backend: The Triton backend for ONNX Runtime.
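
To give a feel for the python_backend programming model, the sketch below follows the documented model.py structure (a TritonPythonModel class whose execute method returns one response per request); the tensor names and the doubling logic are placeholders, not part of the backend's API.

```python
# model.py — a minimal python_backend sketch; "INPUT0"/"OUTPUT0" and the
# doubling logic are placeholders chosen for illustration.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Called once at model load; args carries metadata such as the
        # serialized model configuration under args["model_config"].
        pass

    def execute(self, requests):
        # Called with a batch of requests; must return one response per request.
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out = pb_utils.Tensor("OUTPUT0", in0.as_numpy().astype(np.float32) * 2.0)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```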