Alireza Chegini presents a comprehensive tutorial for developers on how to set up and run generative AI agents locally with LM Studio and n8n, highlighting installation, workflow creation, and free experimentation.

Run Local Generative AI Agents for Free with LM Studio and n8n

Author: Alireza Chegini (AI Skills for Your Career)

Overview

This video demonstrates how to use open-source generative AI models on your own computer—entirely free and without reliance on paid APIs or subscriptions. It’s intended for developers, automation enthusiasts, and anyone wishing to prototype or learn about AI agents and chat workflows in a hands-on way.

Key Topics

  • Why choose local AI tools vs. paid cloud APIs
  • How to install LM Studio on Windows, Mac, or Linux
  • Setting up and running local language models (e.g., LLaMA, Mistral)
  • Using LM Studio’s interface and testing model outputs
  • Installing and running n8n via Docker
  • Creating chat agent workflows in n8n
  • Connecting LM Studio to n8n using API integrations
  • Adding conversational memory to your agent
  • Tips for further experimentation and learning

Step-by-Step Guide

1. LM Studio Installation

  • Download LM Studio from: https://lmstudio.ai
  • Supported on Windows, Mac, and Linux
  • Walkthrough of installation and interface basics

2. Download & Run Local Models

  • Example setup with LLaMA (other models such as Mistral also supported)
  • How to search for and download models from online repositories (such as Hugging Face) and run them locally
  • Test the generative capabilities inside LM Studio

3. n8n Workflow Automation

  • Install n8n using Docker for ease of setup: https://n8n.io
  • Overview of workflow builder
  • Create a simple chat agent workflow
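The Docker setup above can be sketched as a couple of commands; the image name and data volume below follow n8n's documented Docker quick-start, but check https://n8n.io for the current invocation before relying on it.

```shell
# Create a named volume so workflows survive container restarts,
# then run n8n, exposing its default web UI port (5678).
docker volume create n8n_data
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

Once the container is up, the workflow builder is available at http://localhost:5678 in your browser.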

4. Integrate LM Studio and n8n

  • Connect LM Studio to n8n via local API endpoints
  • Send prompts/receive completions through workflow nodes
  • Test the end-to-end chatbot loop
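To see what the integration exchanges under the hood, here is a minimal Python sketch of the same request an n8n workflow node would send. It assumes LM Studio's local server is running on its default port (1234) with its OpenAI-compatible chat completions endpoint; the model name is a placeholder, since LM Studio generally serves whichever model is currently loaded.

```python
import json
import urllib.request

# LM Studio's local server exposes an OpenAI-compatible API,
# by default at http://localhost:1234/v1 (adjust if you changed the port).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_payload(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat completion request body.

    "local-model" is just a placeholder name; LM Studio typically
    answers with whatever model is loaded in the server.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def ask(prompt: str) -> str:
    """Send a prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With LM Studio's server started, calling `ask("Say hello in one sentence.")` returns the model's completion; in n8n, an HTTP Request node pointed at the same URL with the same JSON body plays the role of `ask`.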

5. Add Memory to Your Agent

  • Enable conversational memory to help your agent track context across exchanges
  • Techniques for storing and retrieving previous messages or states
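As a sketch of the idea (not n8n's actual memory node), the simplest technique is a rolling window: keep the last few user/assistant turns and prepend them to each new request so the model sees recent context without the prompt growing unboundedly.

```python
from collections import deque


class ConversationMemory:
    """A minimal rolling-window chat memory.

    Keeps the last `max_turns` user/assistant message pairs; older
    messages are evicted automatically. This mirrors the window-buffer
    style of memory, expressed in plain Python for illustration.
    """

    def __init__(self, max_turns: int = 5):
        # Each turn is two messages, so cap the deque at 2 * max_turns.
        self.messages = deque(maxlen=2 * max_turns)

    def add(self, role: str, content: str) -> None:
        """Record one message (role is "user" or "assistant")."""
        self.messages.append({"role": role, "content": content})

    def as_prompt(self, new_user_message: str) -> list:
        """Return the message list to send: history plus the new message."""
        return list(self.messages) + [
            {"role": "user", "content": new_user_message}
        ]
```

After each exchange you would `add` both the user's message and the model's reply, then pass `as_prompt(...)` as the `messages` field of the next request.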

6. Experiment Freely

  • No cloud subscription or token costs
  • Easily try out AI art, agents, and chat-based solutions locally
  • Ideal for proof-of-concept builds and personal exploration

Chapters

  • 00:00 Intro – Why go local?
  • 01:08 Install LM Studio
  • 04:00 LM Studio interface overview
  • 06:30 Download & run a local model (LLaMA)
  • 09:00 Test model inside LM Studio
  • 12:30 Install/run n8n with Docker
  • 14:00 Create your first workflow
  • 16:30 Connect n8n to LM Studio
  • 19:00 Test the agent
  • 20:30 Add memory
  • 23:00 Final thoughts

Conclusion

By following this guide, you can build, test, and iterate on generative AI agents and workflows locally, supporting creative experimentation and learning without financial barriers.