The recommended way to build and use Triton Inference Server is with Docker images. The first step in using Triton to serve your models is to place one or more models into a model repository.
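As a minimal sketch, a model repository is a directory with one subdirectory per model; each model directory holds a `config.pbtxt` configuration file and at least one numbered version subdirectory containing the model file itself. The model name `densenet_onnx` below is a hypothetical example:

```
model_repository/
└── densenet_onnx/          # one directory per model
    ├── config.pbtxt        # model configuration
    └── 1/                  # version 1 of the model
        └── model.onnx      # the model file (format depends on backend)
```

With the repository in place, the server can be launched from the Triton container on NGC. The release tag `<xx.yy>` and the host path are placeholders to substitute for your setup:

```shell
# Pull a Triton release image (replace <xx.yy> with a release, e.g. 24.01)
docker pull nvcr.io/nvidia/tritonserver:<xx.yy>-py3

# Run Triton, mounting the host model repository into the container;
# ports 8000/8001/8002 expose HTTP, gRPC, and metrics endpoints
docker run --gpus=all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /full/path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:<xx.yy>-py3 \
  tritonserver --model-repository=/models
```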