This is the Linux app named LocalAI, whose latest release can be downloaded as v1.30.0.zip. It can be run online through OnWorks, a free hosting provider for workstations.
Download and run this app named LocalAI online for free with OnWorks.
Follow these steps to run this app:
- 1. Download this application to your PC.
- 2. Go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username of your choice.
- 3. Upload the application in that file manager.
- 4. Start the OnWorks Linux online, Windows online, or macOS online emulator from this website.
- 5. From the OnWorks Linux OS you have just started, go to our file manager https://www.onworks.net/myfiles.php?username=XXXXX with the username of your choice.
- 6. Download the application, install it, and run it.
SCREENSHOTS
LocalAI
DESCRIPTION
LocalAI is a self-hosted, community-driven, local OpenAI-compatible API: a free, open-source, drop-in replacement REST API that follows the OpenAI API specifications for local inferencing. It lets you run LLMs (and more) locally or on-prem on consumer-grade hardware; no GPU is required. It supports multiple model families compatible with the ggml format, and runs ggml, GPTQ, ONNX, and TF-compatible models such as llama, gpt4all, rwkv, whisper, vicuna, koala, gpt4all-j, cerebras, falcon, dolly, starcoder, and many others.
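Because the API follows the OpenAI specification, you can exercise it with any plain HTTP client once LocalAI is running. Below is a minimal sketch in Go (the project's own language); the port (8080, LocalAI's default) and the model name "ggml-gpt4all-j" are assumptions and should be adjusted to match the model files in your local setup.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: LocalAI is listening on its default port 8080 and a
	// model named "ggml-gpt4all-j" is present in its models directory.
	body := []byte(`{
		"model": "ggml-gpt4all-j",
		"messages": [{"role": "user", "content": "How are you?"}],
		"temperature": 0.7
	}`)

	// POST to the OpenAI-compatible chat completions endpoint.
	resp, err := http.Post(
		"http://localhost:8080/v1/chat/completions",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // raw OpenAI-style JSON response
}
```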
Features
- Local, OpenAI drop-in alternative REST API (see the sketch after this list)
- NO GPU required
- Supports multiple models
- Once loaded the first time, it keeps models loaded in memory for faster inference
- Doesn't shell out, but uses C++ bindings for faster inference and better performance
- You own your data
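As a sketch of the drop-in behavior noted in the first feature above, an existing OpenAI client library can simply be pointed at LocalAI's base URL instead of api.openai.com. The example below assumes the community Go client github.com/sashabaranov/go-openai and the same hypothetical model name as before; LocalAI does not require an API key by default, so a placeholder token is used.

```go
package main

import (
	"context"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Assumption: LocalAI on localhost:8080 with model "ggml-gpt4all-j".
	// The token is a placeholder; LocalAI ignores it by default.
	cfg := openai.DefaultConfig("not-needed")
	cfg.BaseURL = "http://localhost:8080/v1" // repoint the client at LocalAI
	client := openai.NewClientWithConfig(cfg)

	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: "ggml-gpt4all-j",
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "Hello!"},
			},
		},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```

No other application code has to change, which is what makes LocalAI a drop-in replacement: your data and inference stay on your own machine.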
Programming Language
Go
This application can also be fetched from https://sourceforge.net/projects/localai.mirror/. It is hosted on OnWorks so that it can be run online in the easiest way from one of our free operating systems.