Session Handoff Protocol: Solving AI Agent Continuity in Complex Projects

Published: September 26, 2025

The Challenge: Lost Context Between AI Sessions

When working on complex software projects with AI agents, one of the biggest productivity killers is context loss between sessions. You spend hours making progress on implementing API endpoints, fixing test failures, or writing documentation, only to have the next AI agent iteration start from scratch or misunderstand where you left off. This problem becomes especially acute in large codebases like my Invoico platform, where I’m implementing comprehensive API customer flows with 90+ tests across multiple feature areas, including webhooks, monitoring, security, and billing systems.

How I Use chmod to Stop AI Agents from Cheating on My Tests

The Problem: My AI Assistant Was Cheating on Its Tests

I’ve been using AI agents to help me build features for my invoice management platform, and I’m a big fan of Test-Driven Development (TDD). But I ran into a problem pretty quickly: my AI agents were a little too clever for their own good. Instead of, you know, actually implementing the features to make the tests pass, my AI assistant started taking shortcuts.

Building a Data Model Validation System for Large-Scale AI Agent Projects

The follow-up to my user flow validation system: How I keep my data models in sync with business requirements using chunked processing for AI agents with limited memory.

The Problem: When Your Data Models Can’t Keep Up

In my previous post, I figured out how to fix incomplete user flow documentation. But that led me to a whole new problem: data model drift. As my user flows got better and my validation scores went up, I noticed my Django data models were falling behind.

Building a Robust User Story and Flow Validation System for AI Agents

How I built a validation system that guarantees complete documentation and automates quality control.

The Problem: When Incomplete Documentation Grinds Everything to a Halt

If you’ve worked with AI agents and complex workflows, you’ve probably felt this pain: incomplete documentation slows down development, creates confusing requirements, and leads to a ton of rework. When your user stories are missing clear flows, or your flows aren’t properly documented, everything just stops.

Django + Lit + Vite: Template Setup and Hot-Reload Workflow

This post walks you through how I set up a hypothetical Django app called acmefront to work with the Lit JavaScript framework and Vite. I’ll also show you a simple, framework-agnostic hot-reload workflow that makes development a breeze. Just a heads-up: all the names and paths I’m using here are just for illustration.

Step 1: Wiring Up the App, Template, URL, and View

First things first, let’s get the basic Django pieces in place.
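To make that concrete, here’s a rough sketch of the kind of wiring Step 1 is about, reusing the illustrative acmefront name from the post (the view and template names here are my own placeholders, not necessarily the ones in the full walkthrough):

# acmefront/views.py
from django.shortcuts import render

def index(request):
    # Serve the template that will host the Lit + Vite frontend
    return render(request, "acmefront/index.html")

# acmefront/urls.py
from django.urls import path
from . import views

urlpatterns = [
    path("", views.index, name="acmefront-index"),
]

From there it’s just a matter of pulling acmefront.urls into the project’s urls.py with include().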

Building Smart Web Scrapers with Local LLMs

How I Use Local LLMs to Build Smarter Web Scrapers

This is a guide to how I use locally-run Large Language Models (LLMs) to build web scrapers that are way more reliable than the old-school, CSS selector-based ones. If you’ve ever built a web scraper, you know the pain of brittle CSS selectors. You spend forever getting them just right, and then the website changes its HTML, and your scraper is toast.
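The core pattern is simple: instead of hand-tuned selectors, you hand the page text to a locally-running model and ask it for structured output. Here’s a minimal sketch of that idea using Ollama’s HTTP API (the model name, prompt, and field names are my own examples, not necessarily what the full guide uses):

import json
import requests

def extract_job_fields(page_text):
    # Ask a local model (via Ollama's HTTP API) to pull out the fields we care about
    prompt = (
        "Extract the job title, company name, and location from the page text below. "
        "Respond with a JSON object with the keys: title, company, location.\n\n"
        + page_text
    )
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1",   # any model you have pulled locally
            "prompt": prompt,
            "format": "json",      # ask Ollama to constrain the output to valid JSON
            "stream": False,
        },
        timeout=120,
    )
    response.raise_for_status()
    # The model's reply comes back as a JSON string in the "response" field
    return json.loads(response.json()["response"])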

Tips for Building Apps with CursorAI

My Favorite Tips for Building Apps with CursorAI

I just finished building a complete Ruby on Rails app with CursorAI, and I wanted to share some tips that made the process so much faster. Here are a few things I learned along the way that will help you get the most out of Cursor.

1. Keep a Changelog with .cursorrules

It’s a good idea to have Cursor maintain a changelog for you.

Finding a job with Python and Selenium Part 2

In Part 1 of our job-finding series, we found a job board that was easy to scrape and saved the data to a local file. Now, it’s time to take that data and put it to work. In this post, we’ll walk through how to load our scraped data into a database and run some basic analysis on it.

Setting Up the Database

First things first, we need a place to store our data. We’ll use SQLite to create a database with a jobs table that has the following columns (plus an auto-incrementing id and a couple of timestamp columns for bookkeeping):

  • title: The job title
  • companyname: The name of the company that posted the job
  • location: Where the job is located
  • date: The date the job was posted
  • link: A link to the job post
  • description: The full job description
  • hasapplied: A flag to track whether we’ve applied for the job
import sqlite3
import os

# Database file path
db_file = 'jobs.db'

# SQL to create the jobs table
create_jobs_table_sql = '''
CREATE TABLE IF NOT EXISTS jobs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT NOT NULL,
    companyname TEXT NOT NULL,
    location TEXT,
    date TEXT,
    link TEXT,
    description TEXT,
    hasapplied INTEGER DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create an index on companyname for faster lookups
CREATE INDEX IF NOT EXISTS idx_companyname ON jobs(companyname);

-- Create an index on hasapplied for faster filtering
CREATE INDEX IF NOT EXISTS idx_hasapplied ON jobs(hasapplied);
'''

def create_database():
    # Check if database file already exists
    db_exists = os.path.exists(db_file)

    # Connect to the database (this will create it if it doesn't exist)
    conn = sqlite3.connect(db_file)
    cursor = conn.cursor()

    # Create the jobs table
    cursor.executescript(create_jobs_table_sql)

    # Commit the changes and close the connection
    conn.commit()
    conn.close()

    if db_exists:
        print(f"Connected to existing database: {db_file}")
    else:
        print(f"Created new database: {db_file}")
    print("Jobs table initialized successfully.")

if __name__ == "__main__":
    create_database()
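
With the table in place, loading the data we scraped in Part 1 is just a matter of inserting rows. Here’s a minimal sketch, assuming the Part 1 output is a JSON list of dicts whose keys match the columns above (the filename is a placeholder; point it at wherever you saved your data):

import json
import sqlite3

db_file = 'jobs.db'

def load_jobs(json_file='scraped_jobs.json'):
    # Read the jobs we scraped and saved in Part 1
    with open(json_file) as f:
        jobs = json.load(f)

    conn = sqlite3.connect(db_file)
    cursor = conn.cursor()

    # Insert each job; title and companyname are required by the schema,
    # so fall back to a placeholder if they're missing
    for job in jobs:
        cursor.execute(
            '''INSERT INTO jobs (title, companyname, location, date, link, description)
               VALUES (?, ?, ?, ?, ?, ?)''',
            (
                job.get('title', 'Unknown'),
                job.get('companyname', 'Unknown'),
                job.get('location'),
                job.get('date'),
                job.get('link'),
                job.get('description'),
            ),
        )

    conn.commit()
    conn.close()
    print(f"Loaded {len(jobs)} jobs into {db_file}")

if __name__ == "__main__":
    load_jobs()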

Finding a job with Python and Selenium

Finding a job can be a real grind. In this post, I’m going to show you how to automate a big chunk of that work using Python and Selenium. We’ll dive into how to find job boards that are easy to scrape, save that data, and then analyze it to pinpoint the jobs you’re most qualified for.

We’ll even explore how to use free AI tools like Ollama and LM Studio to filter jobs based on your skillset and extract keywords to make searching easier. And to top it all off, we’ll build a neat little web app with React and Flask to browse all the jobs we’ve scraped.

The Tools of the Trade

Python

I’m a big fan of Python for this kind of stuff. It’s simple, it’s effective, and it gets the job done. If you’re new to Python, you can learn how to get it set up here: Installing Python.

Selenium

While Python has some built-in tools for grabbing content from web pages, a lot of job boards use JavaScript to render their content. That’s where Selenium comes in. It lets us spin up a real browser session and control it with our code, so we can get to all that juicy, dynamically-rendered content.
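
To give you an idea of what that looks like in practice, here’s a minimal sketch of a headless Selenium session (the URL is a placeholder, and this assumes a recent Selenium release with Chrome installed):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run Chrome without opening a window

driver = webdriver.Chrome(options=options)
try:
    # Placeholder URL: point this at the job board you want to scrape
    driver.get("https://example.com/jobs")
    # page_source now holds the fully rendered HTML, JavaScript and all
    html = driver.page_source
    print(f"Grabbed {len(html)} characters of rendered HTML")
finally:
    driver.quit()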

SQLite

We’re going to be dealing with a lot of data, so Excel just isn’t going to cut it. I’ve got a single database table with about a week’s worth of job posts that’s already over 100MB. SQLite is perfect for this. It’s lightweight, and it’ll let us run the advanced queries we’ll need to analyze and sort through all the jobs we find.
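
To give you a taste, here’s the kind of query I mean, using the jobs table from the Part 2 excerpt above (a quick sketch; tweak it to whatever you want to analyze):

import sqlite3

conn = sqlite3.connect('jobs.db')
cursor = conn.cursor()

# Which companies have the most postings we haven't applied to yet?
cursor.execute('''
    SELECT companyname, COUNT(*) AS open_jobs
    FROM jobs
    WHERE hasapplied = 0
    GROUP BY companyname
    ORDER BY open_jobs DESC
    LIMIT 10
''')

for companyname, open_jobs in cursor.fetchall():
    print(f"{companyname}: {open_jobs}")

conn.close()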