
TensorFlow Serving download for Linux

Download the TensorFlow Serving Linux app for free and run it online in Ubuntu, Fedora, or Debian

This is the Linux app named TensorFlow Serving, whose latest release can be downloaded as 2.13.1.zip. It can be run online through OnWorks, the free hosting provider for workstations.

Download and run this app named TensorFlow Serving online with OnWorks for free.

Follow these instructions to run this app:

- 1. Download this application to your PC.

- 2. Go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.

- 3. Upload this application to that file manager.

- 4. Start the OnWorks Linux online, Windows online, or macOS online emulator from this website.

- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username that you want.

- 6. Download the application, install it, and run it.

SCREENSHOTS



TensorFlow Serving


DESCRIPTION

TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It deals with the inference aspect of machine learning, taking models after training and managing their lifetimes, providing clients with versioned access via a high-performance, reference-counted lookup table. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data. The easiest and most straightforward way of using TensorFlow Serving is with Docker images. We highly recommend this route unless you have specific needs that are not addressed by running in a container. To serve a TensorFlow model, simply export a SavedModel from your TensorFlow program. SavedModel is a language-neutral, recoverable, hermetic serialization format that enables higher-level systems and tools to produce, consume, and transform TensorFlow models.
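As a concrete illustration of the export step, here is a minimal, hedged sketch in Python. The toy model and the export path /models/my_model/1 are placeholders, not part of this page; TensorFlow Serving expects a numeric version subdirectory under the model's base path:

    import tensorflow as tf

    # Hypothetical toy model; any trained TensorFlow/Keras model is exported
    # the same way.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(optimizer="sgd", loss="mse")

    # Export in the SavedModel format that TensorFlow Serving loads directly.
    # "1" is the model version; writing "/models/my_model/2" later lets the
    # server pick up the new version without any client-side changes.
    tf.saved_model.save(model, "/models/my_model/1")

Once exported, the directory can be mounted into the standard tensorflow/serving Docker image (for example, docker run -p 8501:8501 -v /models/my_model:/models/my_model -e MODEL_NAME=my_model tensorflow/serving) to expose the model over HTTP and gRPC.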



Features

  • Can serve multiple models, or multiple versions of the same model simultaneously
  • Exposes both gRPC and HTTP inference endpoints (see the HTTP request sketch after this list)
  • Allows deployment of new model versions without changing any client code
  • Supports canarying new versions and A/B testing experimental models
  • Adds minimal latency to inference time due to efficient, low-overhead implementation
  • Features a scheduler that groups individual inference requests into batches for joint execution on GPU, with configurable latency controls
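To make the HTTP endpoint concrete, here is a hedged sketch of a REST prediction request against a locally running server. The model name my_model and the input shape follow the hypothetical export above; 8501 is the server's default REST port:

    import json
    import requests

    # POST to TensorFlow Serving's REST predict endpoint:
    # /v1/models/<model_name>:predict (a /versions/<n> segment can be
    # added to pin a specific model version).
    payload = {"instances": [[1.0], [2.0]]}
    resp = requests.post(
        "http://localhost:8501/v1/models/my_model:predict",
        data=json.dumps(payload),
    )
    print(resp.json())  # e.g. {"predictions": [[...], [...]]}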


Programming Language

C++


Categories

Machine Learning

This application can also be fetched from https://sourceforge.net/projects/tensorflow-serving.mirror/. It has been hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.

