Track keywords, profiles, and posts on LinkedIn. Collect every person who likes or comments. 1,000 leads for about $6. Built with Claude Code.
Pick a topic like "outbound", "AI", or "GTM". The tool searches LinkedIn for posts mentioning that keyword and collects every person liking and commenting on them.
Add a LinkedIn profile URL. The tool checks their new posts and collects the people interacting with them. Track competitors, creators, or your own team.
Add a post link. The tool keeps checking that post and collects new people who like or comment as they come in. Great for Thought Leader Ads.
Paste a post link and get the full list right now. No waiting, no schedule. One-time pull of everyone who engaged.
Keyword, profile, or post URL → Schedule or trigger manually → Fetch posts, get engagers → Skip people you already have → Webhook, CSV, or Clay
Frontend dashboard hosted on Netlify
Postgres database + auth + realtime
Background processor on Railway (Docker)
LinkedIn post search, profile posts, metadata
Extract commenters + likers from posts
Send leads to Clay, CRM, or anywhere
```
 Dashboard (React)          Database (Supabase)           Worker (Python on Railway)
┌─────────────────┐        ┌─────────────────────┐       ┌─────────────────────┐
│                 │        │                     │       │                     │
│ Create monitor  │──────> │ monitors table      │       │ Polls every 5s      │
│ Queue job       │──────> │ queue_jobs table    │ <──── │ Picks up "queued"   │
│ View results    │ <──────│ runs table          │ <──── │ Writes results      │
│ Approve large   │──────> │ seen_engagers       │ <──── │ Deduplicates        │
│                 │        │                     │       │                     │
└─────────────────┘        └─────────────────────┘       └──────────┬──────────┘
                                                                    │
                                                         ┌──────────┴──────────┐
                                                         │  External APIs      │
                                                         │  Limadata           │
                                                         │  Apify              │
                                                         │  Webhook delivery   │
                                                         └─────────────────────┘
```
Create the frontend, set up the database, configure the worker, and get API access for LinkedIn data extraction.
```shell
# Create the project
npm create vite@latest social-engager -- --template react-ts
cd social-engager

# Install dependencies
npm install @supabase/supabase-js tailwindcss postcss autoprefixer
npm install lucide-react react-router-dom

# Init Tailwind
npx tailwindcss init -p
```
```sql
-- Core tables
CREATE TABLE profiles (
  id UUID REFERENCES auth.users PRIMARY KEY,
  email TEXT,
  full_name TEXT,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE projects (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  name TEXT NOT NULL,
  created_by UUID REFERENCES profiles(id),
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE monitors (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  project_id UUID REFERENCES projects(id),
  name TEXT NOT NULL,
  mode TEXT NOT NULL,         -- 'keyword' | 'profile' | 'posts' | 'direct'
  input TEXT NOT NULL,        -- keyword string, profile URL, or post URLs
  webhook_url TEXT,           -- where to send leads
  schedule TEXT,              -- 'daily' | 'weekly' | 'monthly' | null
  is_active BOOLEAN DEFAULT true,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE queue_jobs (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  monitor_id UUID REFERENCES monitors(id),
  project_id UUID REFERENCES projects(id),
  status TEXT DEFAULT 'queued',  -- queued | running | completed | failed | awaiting_approval
  payload JSONB,
  result JSONB,
  error TEXT,
  started_at TIMESTAMPTZ,
  completed_at TIMESTAMPTZ,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE runs (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  monitor_id UUID REFERENCES monitors(id),
  project_id UUID REFERENCES projects(id),
  job_id UUID REFERENCES queue_jobs(id),
  leads_found INTEGER DEFAULT 0,
  new_leads INTEGER DEFAULT 0,
  cost DECIMAL(10,4) DEFAULT 0,
  posts_scraped INTEGER DEFAULT 0,
  details JSONB,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE seen_engagers (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  monitor_id UUID REFERENCES monitors(id),
  post_url TEXT NOT NULL,
  engager_profile_url TEXT NOT NULL,
  first_seen_at TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE(monitor_id, post_url, engager_profile_url)
);
```
```
social-engager/
│
├── src/                           ← React frontend
│   ├── App.tsx                    ← Routes + auth gating
│   ├── pages/
│   │   ├── Dashboard.tsx          ← Monitor management
│   │   ├── Queue.tsx              ← Job queue + approval flow
│   │   ├── History.tsx            ← Run execution history
│   │   ├── Trigger.tsx            ← Manual job triggering
│   │   └── Login.tsx              ← Supabase auth
│   ├── components/
│   │   ├── MonitorCard.tsx        ← Display monitor config + stats
│   │   ├── MonitorForm.tsx        ← Create/edit monitors
│   │   ├── UsageBanner.tsx        ← Cost + usage tracking
│   │   └── Layout.tsx             ← Navigation shell
│   ├── hooks/
│   │   ├── useMonitors.ts         ← CRUD for monitors
│   │   ├── useQueue.ts            ← Job enqueueing + approval
│   │   ├── useRuns.ts             ← Execution history queries
│   │   ├── useUsageStats.ts       ← Cost tracking per project
│   │   └── useJobNotifications.ts ← Realtime status via Supabase
│   └── lib/
│       ├── supabase.ts            ← Supabase client init
│       ├── types.ts               ← TypeScript interfaces
│       └── costs.ts               ← Cost calculation helpers
│
├── backend/                       ← Python worker
│   ├── worker.py                  ← Main job processor (runs 24/7)
│   ├── Dockerfile                 ← Python 3.11-slim for Railway
│   └── requirements.txt           ← supabase, requests, python-dotenv
│
└── supabase/                      ← Database schema + migrations
    ├── setup.sql                  ← Full production schema
    └── migrations/                ← Incremental changes
```
A React frontend with monitor management, a job queue with approval flow, execution history, and realtime status updates via Supabase.
Select mode (keyword, profile, posts, direct). Enter the input (keyword string, LinkedIn profile URL, or post URLs). Optionally set a webhook URL and schedule (daily, weekly, monthly).
Display each monitor with its mode, input, schedule, and stats (total leads found, last run date, cost to date). Toggle active/inactive.
Run any monitor on demand. Creates a queue_job with status "queued" and the monitor's config as payload.
List all jobs with status badges: queued (waiting), running (in progress), completed (done), failed (error), awaiting_approval (needs confirmation).
When estimated leads exceed 1,000, the worker sets status to "awaiting_approval". The dashboard shows the estimated count and cost. User clicks "Approve" to re-queue the job.
Subscribe to Supabase realtime on the queue_jobs table. Status changes appear live without refreshing.
```ts
// Supabase realtime subscription for job status
const channel = supabase
  .channel('job-updates')
  .on('postgres_changes', {
    event: 'UPDATE',
    schema: 'public',
    table: 'queue_jobs',
    filter: `project_id=eq.${projectId}`,
  }, (payload) => {
    updateJobInState(payload.new);
  })
  .subscribe();
```
A Python script that runs 24/7 on Railway. It polls Supabase for queued jobs, calls the LinkedIn APIs, deduplicates results, and delivers leads.
```python
# worker.py - main loop
import os
import time

from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

while True:
    # Pick up the oldest queued job
    job = supabase.table("queue_jobs") \
        .select("*") \
        .eq("status", "queued") \
        .order("created_at") \
        .limit(1) \
        .execute()

    if job.data:
        process_job(job.data[0])
    else:
        time.sleep(5)
```
Search LinkedIn for posts matching the keyword. Run two parallel searches (by relevance and by recency) to get a broader set. Take the top posts by engagement count. Scrape commenters + likers from each post.
Fetch the profile's recent posts via Limadata. For each post, scrape commenters + likers with Apify. Useful for tracking competitors or your own team's content.
Take the post URLs from the monitor config. Check if engagement count has changed since last run (smart skip). If new engagement detected, scrape and deduplicate against seen_engagers. Only return NEW people.
Take the post URL, scrape all commenters + likers in one shot. No dedup tracking, no repeat logic. One-time full pull.
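The keyword mode's merge-and-rank step can be sketched as pure logic. A minimal sketch, assuming each search result is a dict with `url` and `engagement_count` fields (the function name and fields are illustrative, not from the codebase):

```python
def top_posts_by_engagement(relevance_results, recency_results, limit=20):
    """Merge two search result lists, dedupe by post URL, keep the most engaged posts."""
    merged = {}
    for post in relevance_results + recency_results:
        merged[post["url"]] = post  # URL is the key; duplicates collapse
    ranked = sorted(merged.values(), key=lambda p: p["engagement_count"], reverse=True)
    return ranked[:limit]
```

Running two searches and deduping by URL widens coverage without scraping the same post twice.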
```python
def process_job(job):
    # 1. Mark job as running
    update_status(job["id"], "running")

    # 2. Get post URLs based on mode
    if job["mode"] == "keyword":
        posts = search_posts_by_keyword(job["input"])
    elif job["mode"] == "profile":
        posts = get_profile_posts(job["input"])
    else:
        posts = job["input"]  # direct post URLs

    # 3. Check approval threshold
    estimated = estimate_engagers(posts)
    if estimated > 1000 and not job.get("large_scrape_approved"):
        update_status(job["id"], "awaiting_approval")
        return

    # 4. Scrape engagers from each post
    #    (shown sequentially here; cap at 3 concurrent when parallelizing)
    all_engagers = []
    for post in posts:
        commenters = scrape_commenters(post["url"])
        likers = scrape_likers(post["url"])
        all_engagers.extend(commenters + likers)

    # 5. Deduplicate against seen_engagers
    new_leads = deduplicate(job["monitor_id"], all_engagers)

    # 6. Deliver via webhook
    if job["webhook_url"] and new_leads:
        send_to_webhook(job["webhook_url"], new_leads)

    # 7. Record results
    create_run(job, leads_found=len(all_engagers), new_leads=len(new_leads))
    update_status(job["id"], "completed")
```
```python
def deduplicate(monitor_id, engagers):
    new_leads = []
    for engager in engagers:
        # Check if we've seen this person on this monitor before
        # (dedup is per person, not per post, so the same lead on a
        # second post is still skipped)
        existing = supabase.table("seen_engagers") \
            .select("id") \
            .eq("monitor_id", monitor_id) \
            .eq("engager_profile_url", engager["profile_url"]) \
            .execute()

        if not existing.data:
            # New lead - record it and add to output
            supabase.table("seen_engagers").insert({
                "monitor_id": monitor_id,
                "post_url": engager["post_url"],
                "engager_profile_url": engager["profile_url"],
            }).execute()
            new_leads.append(engager)
    return new_leads
```
Prevent runaway costs, handle stuck jobs, and protect against edge cases. These are the things that break in production.
Track API calls and leads scraped. Cost = (engagers x cost per engager) + (API calls x cost per call). Store on each run record.
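That formula is a one-liner; a sketch with illustrative rates (the actual per-engager and per-call prices depend on your Limadata/Apify plans):

```python
def run_cost(engagers, api_calls, cost_per_engager=0.004, cost_per_call=0.01):
    """Cost = engagers x per-engager rate + API calls x per-call rate.
    Rates here are placeholders, not real pricing."""
    return round(engagers * cost_per_engager + api_calls * cost_per_call, 4)
```

Store the result in the `cost` column of each `runs` row so usage queries stay cheap.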
Set a monthly cap (e.g., 5,000 leads). Query the runs table for the current billing cycle. Block new jobs when approaching the limit. Show a warning banner at 80%.
Hard limit per individual job (e.g., 5,000 leads). Anything above 1,000 requires the approval flow before running.
Set a 10-minute hard timeout per job. If the worker is still running after 10 minutes, mark the job as failed with the error logged.
On startup, scan for jobs stuck in "running" for more than 15 minutes. Reset them to "queued" so they get retried.
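The stuck-job sweep is a timestamp comparison. A minimal sketch, assuming each job row carries a timezone-aware `started_at`:

```python
from datetime import datetime, timedelta, timezone

def jobs_to_reset(running_jobs, max_age_minutes=15, now=None):
    """Return IDs of jobs stuck in 'running' longer than the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(minutes=max_age_minutes)
    return [j["id"] for j in running_jobs if j["started_at"] < cutoff]
```

Run this once at worker startup, then flip the returned IDs back to "queued" so the main loop retries them.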
If Apify returns 0 results, mark the job as failed instead of completed. Something went wrong - don't record it as a successful run.
Cap concurrent Apify actor calls to 3. More than that and you hit rate limits or timeouts. Use a semaphore or simple counter.
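One way to enforce the cap is a thread pool sized to 3, which bounds in-flight scrapes without manual semaphore bookkeeping. A sketch with a caller-supplied scrape function (`scrape_fn` is a stand-in for whatever wraps the Apify call):

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_all(post_urls, scrape_fn, max_concurrent=3):
    """Fan out scrapes with at most max_concurrent running at once."""
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        # map preserves input order and blocks until all scrapes finish
        return list(pool.map(scrape_fn, post_urls))
```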
Before scraping a tracked post again, check if the engagement count changed since last run. If not, skip it - no new people to find.
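The skip decision compares each post's current engagement count to the count stored last run. A sketch (field names are illustrative; a post never seen before always rescrapes):

```python
def posts_needing_rescrape(tracked_posts, last_counts):
    """Keep only posts whose engagement count moved since the last run."""
    return [
        p for p in tracked_posts
        if p["engagement_count"] != last_counts.get(p["url"])
    ]
```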
Configure Railway (or your host) to auto-restart on failure. Set retry limits (e.g., 10 restarts) before alerting you.
Frontend to Netlify, worker to Railway, database already on Supabase. Three services, each deployed independently.
Run npm run build. This creates a dist/ folder with the static site.
Drag the dist/ folder to Netlify, or connect your GitHub repo for automatic deploys on every push.
Set VITE_SUPABASE_URL and VITE_SUPABASE_ANON_KEY in Netlify's environment variable settings.
Python 3.11-slim base image. Copy requirements.txt and worker.py. Install dependencies. Set the entrypoint to run worker.py.
Connect your GitHub repo to Railway. It detects the Dockerfile and deploys automatically. Set environment variables in Railway's dashboard.
Set the service to auto-restart on failure with a retry limit. The worker should run continuously.
```dockerfile
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY worker.py .
CMD ["python", "worker.py"]
```