HammerLock AI

HammerLock AI is a Desktop App

HammerLock AI runs locally on your machine so your data never leaves your device. Download the app to get started.

Get HammerLock AI

Choose your platform

macOS

Native desktop app for Mac. Drag to Applications and launch.

Enter email above to unlock

Windows

Full desktop installer for Windows. Run the setup wizard and launch.

Enter email above to unlock

Linux

Available as AppImage (universal) or .deb package for Debian/Ubuntu.

Enter email above to unlock
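
Once you have the Linux file, the usual install steps look like this. The filenames below are placeholders, not the actual release names; substitute whatever your download is called.

```shell
# Placeholder filename -- use your downloaded file's actual name.
APPIMAGE="HammerLock-AI.AppImage"

if [ -f "$APPIMAGE" ]; then
  chmod +x "$APPIMAGE"   # an AppImage only needs the execute bit; no install step
  "./$APPIMAGE"
fi

# Debian/Ubuntu .deb route (apt resolves dependencies automatically):
# sudo apt install ./hammerlock-ai.deb
```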

Web Console

Access HammerLock AI from any browser on your local network while the desktop app is running.

Open localhost:3100 after launching the app
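
While the desktop app is running, you can confirm the console is actually serving on that port from a terminal. This is a generic reachability check, not a HammerLock command:

```shell
# Check whether anything is answering on the web console port (3100, per above).
CONSOLE_URL="http://localhost:3100"

if command -v curl >/dev/null 2>&1; then
  if curl -fsS -o /dev/null "$CONSOLE_URL"; then
    echo "Console is up at $CONSOLE_URL"
  else
    echo "Console not reachable -- is the desktop app running?"
  fi
fi
```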

Mobile (PWA)

Open HammerLock AI on your phone and add to home screen for an app-like experience.

Same network as desktop · Add to Home Screen

First Launch

1. Install & open HammerLock AI

macOS: open the .dmg and drag HammerLock AI to Applications, then double-click to launch. Windows: run the setup wizard. Linux: run the AppImage or install the .deb package.

2. Create your encryption password

This password encrypts everything locally. We never see it.

3. Install Ollama (free)

Ollama is the AI engine that runs models on your machine. Download it from ollama.com, then pull a model.

4. Pull a model & start chatting

Run 'ollama pull llama3.1' in your terminal. HammerLock detects it automatically. You're ready.
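
The terminal side of steps 3 and 4 can be sketched as follows. `ollama pull` and `ollama list` are standard Ollama CLI commands; the model name is the recommended default from the table further down.

```shell
# Sketch of steps 3-4 -- assumes Ollama was installed from ollama.com.
MODEL="llama3.1"   # any model from the recommended table also works

if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"   # one-time download of the model weights
  ollama list            # confirm the model is present; HammerLock detects it from here
else
  echo "Ollama not found -- install it from https://ollama.com first"
fi
```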

Power Your AI Locally

HammerLock AI is the interface. Ollama is the engine that runs AI models on your machine. You need both — the app does not include a model. Install Ollama, pull a model, and everything runs 100% offline.

🦙 Ollama

Free, open-source local AI engine. Runs models on your hardware with one command. Required for local AI.

Download Ollama at ollama.com · macOS, Windows, Linux

🧠 Or Use Cloud API Keys

Prefer cloud models? Bring your own API keys from OpenAI, Anthropic, Google, Groq, Mistral, or DeepSeek. No Ollama needed.

BYOK — your keys, your spend, your choice
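
If you want to sanity-check a key before adding it in Settings, a plain request against the provider's API works. The OpenAI model-listing endpoint below is just one example (other providers have analogous routes) and is not part of HammerLock itself:

```shell
# Optional BYOK sanity check -- assumes OPENAI_API_KEY is exported in your shell.
ENDPOINT="https://api.openai.com/v1/models"

if [ -n "${OPENAI_API_KEY:-}" ] && command -v curl >/dev/null 2>&1; then
  # A valid key returns HTTP 200; 401 means the key is wrong or revoked.
  curl -s -o /dev/null -w "%{http_code}\n" \
       -H "Authorization: Bearer $OPENAI_API_KEY" \
       "$ENDPOINT"
fi
```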

Which Setup Is Right for You?

Your setup depends on your plan. Here's a quick guide:

🆓 Free Plan

You need Ollama. The free plan runs entirely on local models. Install Ollama, pull a model (see table below), and you're set. No API keys, no cloud, no cost.

ollama pull llama3.1

🔑 Core ($15 one-time)

Bring Your Own Keys (BYOK). Core unlocks agents, vault, personas, and export. For AI, you provide your own API keys from OpenAI, Anthropic, etc. Or use Ollama for free local AI.

Go to Settings → API Keys in the app

Pro ($29/mo)

Just type and go. Pro includes 1,000 monthly cloud AI credits — GPT-4o, Claude, Gemini, and more are built in. No API keys needed. Optionally add Ollama for unlimited free local AI, or your own keys for unlimited cloud.

✓ Cloud AI included · ✓ Web search · ✓ Voice · ✓ PDF · ✓ Reports

Recommended Local Models

After installing Ollama, open a terminal and pull one of these models. Each one runs entirely on your hardware.

Model          | Best For                       | RAM Needed | Terminal Command
LLaMA 3.1 8B   | Best all-rounder (recommended) | 16 GB      | ollama pull llama3.1
Mistral 7B     | Fast & efficient               | 16 GB      | ollama pull mistral
Phi-3 Mini     | Low-resource machines          | 8 GB       | ollama pull phi3
Gemma 2        | Instruction following          | 16 GB      | ollama pull gemma2
Mixtral 8x7B   | Near-GPT-4 quality             | 32 GB      | ollama pull mixtral
LLaMA 3.1 70B  | Maximum capability             | 64 GB      | ollama pull llama3.1:70b
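
Before pulling a large model, it helps to check your machine against the RAM column. This is a generic OS check, shown here as a convenience sketch:

```shell
# Print installed RAM in GB to match against the "RAM Needed" column above.
case "$(uname -s)" in
  Darwin) RAM_GB=$(( $(sysctl -n hw.memsize) / 1073741824 )) ;;             # bytes -> GB
  Linux)  RAM_GB=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1048576 )) ;;  # kB -> GB
  *)      RAM_GB=0 ;;
esac
echo "Installed RAM: ${RAM_GB} GB"
```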

All models are free and open source. Download once — runs offline forever.
Browse all models at ollama.com/library →