LlamaFS uses large language models to automatically rename, sort, and structure your files based on content and context. It runs locally or via Groq, keeping your data private while turning a chaotic downloads folder into an organized library in seconds.

Every computer user knows the feeling: a sprawling Downloads folder, a desktop littered with PDFs, screenshots, and random zip files, and the constant mental overhead of trying to remember where you saved that one important receipt. Traditional file managers give you a view, but they don't understand what's inside the files, nor do they help you keep the structure tidy as you work.
Enter LlamaFS. Built on top of Llama 3, it watches your filesystem, learns from your actions, and instantly proposes (or applies) a sensible hierarchy—renaming files, creating date-based folders, and even extracting semantic tags from images and audio. All of this happens in under 500 ms per operation, thanks to smart caching and a lightweight Python + Electron stack.
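That propose-then-apply flow can be sketched roughly as follows. This is an illustrative sketch, not LlamaFS's actual code: the `Proposal` class and `propose` function are hypothetical names, standing in for the step where an LLM's suggested name and category are turned into a destination path that the user can approve or reject.

```python
# Hypothetical sketch of a propose-then-apply step (names are illustrative,
# not LlamaFS's real API). Upstream, an LLM would summarize the file and
# suggest a descriptive name and category; here we only build the proposed
# destination, which the UI would show before any move is applied.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Proposal:
    src: Path  # current location
    dst: Path  # suggested location, pending user approval


def propose(src: Path, suggested_name: str, category: str, root: Path) -> Proposal:
    """Build a destination under root/<category>/<suggested_name>,
    preserving the original file extension."""
    dst = root / category / (suggested_name + src.suffix)
    return Proposal(src=src, dst=dst)


# Example: an LLM might summarize "IMG_1234.jpg" as a 2022 tax receipt.
p = propose(Path("Downloads/IMG_1234.jpg"), "2022-tax-receipt",
            "Finance/Receipts", Path("Organized"))
```

The actual move (`p.src.rename(p.dst)`) would only run after the user confirms the proposal in the Electron preview.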
| Feature | What LlamaFS Does | Why It Matters |
|---|---|---|
| Content-aware naming | Uses Llama 3 to generate descriptive filenames (e.g., "2023-04-15_Tax-Return_2022.pdf"). | Eliminates vague "IMG_1234.jpg" clutter and improves searchability. |
| Semantic folder creation | Detects patterns (dates, project names, media types) and builds nested directories automatically. | Gives you a logical hierarchy without manual planning. |
| Multimodal support | Leverages Moondream for image summarization and Whisper for audio transcription. | Handles screenshots, scanned receipts, voice memos, and more. |
| Smart caching | Only rewrites the index for the minimal filesystem diff required. | Keeps CPU and I/O usage low, even on large libraries. |
| Incognito mode (Ollama) | Switches LLM inference to a local Ollama server. | Guarantees that sensitive files never leave your machine. |
| Interactive preview | Electron UI shows the proposed tree before any changes are applied. | Lets you approve, tweak, or reject suggestions safely. |
| Batch & Watch modes | Run a one-off organization job or enable continuous monitoring. | Fits both occasional clean-ups and daily workflow automation. |
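The "smart caching" row deserves a closer look. One plausible way to reindex only the minimal diff (this is an assumed mechanism, not the project's actual implementation) is to fingerprint each file's metadata and skip the expensive LLM pass for anything whose fingerprint is unchanged:

```python
# Sketch of a minimal-diff cache (assumed behavior, not LlamaFS's actual
# code): fingerprint each file by name, size, and mtime, and only re-run
# the expensive LLM analysis for files whose fingerprint changed.
import hashlib


def fingerprint(name: str, size: int, mtime: float) -> str:
    """Cheap content-independent fingerprint of a file's metadata."""
    return hashlib.sha256(f"{name}:{size}:{mtime}".encode()).hexdigest()


def changed_files(current: dict, cache: dict) -> list:
    """Return only the files whose fingerprint differs from the cache."""
    return [name for name, fp in current.items() if cache.get(name) != fp]


cache = {"a.pdf": fingerprint("a.pdf", 10, 1.0)}
current = {
    "a.pdf": fingerprint("a.pdf", 10, 1.0),   # unchanged -> skipped
    "b.jpg": fingerprint("b.jpg", 5, 2.0),    # new -> analyzed
}
```

Only `b.jpg` would reach the LLM on the second pass, which is how per-operation latency can stay low even on large libraries.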
Below is a quick path from zero to a running LlamaFS instance on any modern OS.
```bash
# 1. Clone the repository
git clone https://github.com/iyaja/llama-fs.git
cd llama-fs

# 2. Set up the Python environment
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
pip install -r requirements.txt

# 3. Install and build the Electron front-end
npm install
npm run build  # compiles the UI

# 4. Run LlamaFS in your preferred mode
# Batch mode – organize a folder once
python src/main.py --mode batch --path ~/Downloads

# Watch mode – start the daemon
python src/main.py --mode watch --path ~/Desktop
```
Tip: To enable incognito mode, start Ollama locally (`ollama serve`) and launch LlamaFS with `--incognito`. All LLM calls will be routed to the local model, guaranteeing zero data leaves your device.
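Under the hood, incognito mode implies talking to Ollama's local HTTP API instead of a hosted service. The sketch below shows what such a call could look like; the endpoint and payload fields follow Ollama's documented `/api/generate` route, while the wrapper functions (`local_rename_prompt`, `ask_ollama`) are illustrative names, not LlamaFS's actual code.

```python
# Hedged sketch of local inference via Ollama's /api/generate endpoint.
# The wrapper names are ours; the endpoint and JSON fields ("model",
# "prompt", "stream", "response") are from Ollama's documented API.
import json
import urllib.request


def local_rename_prompt(filename: str, summary: str) -> dict:
    """Build a request payload asking a local Llama 3 model for a filename."""
    return {
        "model": "llama3",
        "prompt": (
            f"Suggest a short, descriptive filename for a file currently "
            f"named '{filename}' with contents: {summary}"
        ),
        "stream": False,  # return one JSON object instead of a token stream
    }


def ask_ollama(payload: dict, host: str = "http://localhost:11434") -> str:
    """Send the payload to a locally running `ollama serve` instance."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


payload = local_rename_prompt("IMG_1234.jpg", "a scanned 2022 tax receipt")
```

Because `host` defaults to `localhost:11434`, the request never traverses the network, which is the whole point of incognito mode.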
| Feature | LlamaFS (Open-Source) | Hazel (macOS) | File Juggler (Windows) | Adobe Bridge (Cross-platform) |
|---|---|---|---|---|
| AI-driven naming & sorting | ✅ Llama 3 (cloud) / Ollama (local) | ❌ Rule-based only | ❌ Rule-based only | ❌ Manual tagging |
| Multimodal (image/audio) analysis | ✅ Moondream & Whisper | ❌ | ❌ | ❌ |
| Incognito / local inference | ✅ Ollama integration | ❌ (cloud services) | ❌ | ❌ |
| Batch & continuous watch modes | ✅ Both | ✅ Only batch | ✅ Only batch | ❌ |
| Cross-platform UI | ✅ Windows / macOS / Linux | ❌ macOS only | ❌ Windows only | ✅ (but heavy) |
| Cost | Free (MIT-style) | $39-$79 per license | $39 per license | Free (Adobe ID) but tied to Creative Cloud |
| Extensibility | ✅ Full source access, Python plugins | ❌ Closed source | ❌ Closed source | ✅ Scripting (limited) |
| Privacy | ✅ Data stays on-device (incognito) | ❌ Cloud-dependent | ❌ Cloud-dependent | ❌ Cloud-dependent (Adobe ID) |
Bottom line: If you need true AI-powered organization without surrendering your data to a SaaS vendor, LlamaFS stands out as an open-source option that offers both cloud and on-device LLM inference.
Ready to turn your chaotic file system into a smart, self-organizing library?
LlamaFS proves that you don't need a pricey commercial tool to get intelligent file management. With zero vendor lock-in, full privacy control, and a vibrant open-source community, it's the future-ready solution for anyone who wants their files to work for them—not the other way around.