LM Gate
v0.2.0

Run AI Locally. Securely. In Minutes.

LM Gate adds enterprise-grade security to your LLM server — deployed with a single docker compose command.

The Problem
"A SentinelOne & Censys investigation found 175,000 Ollama hosts across 130 countries running without any authentication."

LM Gate makes sure yours isn't one of them.

Up and Running in Minutes

Two deployment options, same simple steps.

Standalone

Already have an LLM server?

LM Gate sits in front of your existing Ollama or LLM backend and adds security.

1. Configure: Fill in a config (.env) file with your settings.
2. Deploy: docker compose -f docker/docker-compose.standalone.yml up -d
3. Use: Open the dashboard, log in, change your password, and you're ready to go.
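As a sketch of what step 1's config file might contain — the variable names here are illustrative placeholders, not LM Gate's documented keys, so check the project's own docs for the real ones:

```
# Illustrative .env sketch only; the keys LM Gate actually expects may differ.
LMGATE_BACKEND_URL=http://localhost:11434   # your existing Ollama/LLM server
LMGATE_PORT=8080                            # port the gate listens on
LMGATE_ADMIN_EMAIL=admin@example.com        # initial dashboard login
```

Whatever the real keys are, the idea is the same: point the gate at your backend, pick a listen port, and seed the first admin account.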

Omnigate

Starting fresh?

Ollama and LM Gate bundled together in a single container. Everything you need in one go.

1. Configure: Fill in a config (.env) file with your settings.
2. Deploy: docker compose -f docker/docker-compose.omni.nvidia.yml up -d
3. Use: Open the dashboard, log in, change your password, and you're ready to go.

That's it. No cloud account. No subscription. Fully yours.

LM Gate Dashboard

Works with

Ollama · llama.cpp · LM Studio · OpenAI API compatible
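Because the gate fronts OpenAI-compatible backends, any standard OpenAI-style client should work once pointed at it. A minimal sketch, assuming a deployment at `http://localhost:8080` with an API key issued from the dashboard (the address, key, and model name are placeholders, not defaults the project documents):

```python
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat completion request.

    base_url, api_key, and model are placeholders: substitute whatever
    host, port, key, and model your own deployment exposes.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        # The gate authenticates every request, so the key is mandatory.
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)


# Assumed address, key, and model; sending needs a running deployment.
req = build_chat_request("http://localhost:8080", "YOUR_API_KEY", "llama3", "Hello!")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The point is that no gate-specific client is needed: the same request shape works against Ollama, llama.cpp, or LM Studio behind the gate.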

Why LM Gate?

Secure by Default

Authentication and access controls protect your LLM server the moment it's deployed.

Built-In Security

Login, multi-factor authentication, and access controls included out of the box.

One Command Setup

A single docker compose up command installs everything you need. No complicated configuration.

Team Ready

Share your LLM server with your team. Everyone gets their own account with their own permissions.

Stay in Control

Set usage limits, see who's using what, and keep an audit trail of every request.

Hardware Flexible

Runs on CPU, NVIDIA, AMD, or Intel GPUs. Pick the setup that matches your hardware.

Ready to Try It?

LM Gate is free and open source under the Apache 2.0 license.

View on GitHub