Building Blog Header Art with Claude Skills

I got tired of manually creating header images for blog posts. Each one needed to match the site’s indigo/purple color scheme, work on dark backgrounds, and look consistently styled. So I built a Claude Code skill that generates them automatically.

The result: type /header-image, and get a custom header image that’s already integrated into your post. Here’s how it works.

What Are Claude Skills?

Claude Code (claude.ai/code) lets you extend its capabilities by creating “skills”: specialized tools that live in your codebase under .claude/skills/. Think of them as reusable AI workflows that understand your project’s specific needs.

A skill is just a directory with:

  • SKILL.md - Documentation and guidelines for Claude to follow
  • tools/ - Scripts, utilities, or any executable code
  • Optional README for humans

When you invoke a skill, Claude reads the documentation and knows exactly how to use the tools to accomplish the task. It’s like giving Claude a specialized playbook for your project.
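
Concretely, the header-image skill described in this post ends up laid out roughly like this (the script file name is illustrative):

.claude/
  skills/
    header-image/
      SKILL.md               # style guidelines for Claude to follow
      tools/
        generate_header.py   # the generation script (name is illustrative)
  commands/
    header-image.md          # the slash command described later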

The Problem: Inconsistent Header Images

My blog posts needed header images that:

  • Used the site’s color palette (indigo/purple #7C3AED, navy #1E293B, white)
  • Had transparent backgrounds to float over the dark theme
  • Followed a consistent geometric, Bauhaus-inspired style
  • Visually represented different content categories (AI/ML, software, startup, recipe, cycling)

Doing this manually meant opening Figma (or whatever tool was handy), trying to match the aesthetic, exporting, and removing the background. Every. Single. Time.

So mostly I just didn’t do it; most posts here have no images at all.

Building the Skill

Part 1: Style Guidelines

The heart of the skill is SKILL.md, which defines the visual aesthetic Claude should follow:

### Core Aesthetic
- **Style**: Abstract conceptual illustrations with clean geometric forms
- **Mood**: Sophisticated, technical, approachable, slightly futuristic
- **Technique**: Minimal line art with selective use of color, emphasis on negative space

### Color Palette
**Primary Colors (MUST use these):**
- Indigo/Purple: `hsl(262, 86%, 56%)` in light mode, `hsl(262, 90%, 62%)` in dark mode
- Deep Navy/Slate: `hsl(220, 40%, 20%)` for shapes
- Soft Cream/White: `hsl(40, 30%, 96%)` for highlights

This gives Claude clear constraints when generating image prompts. It also includes category-specific guidelines:

### AI/ML & Data Science Posts
- Abstract representations of neural networks, data flows, or algorithms
- Geometric patterns suggesting computational processes
- Interconnected nodes, layered structures, or matrix-like grids
- Avoid cliché "brain with circuits" imagery

Part 2: The Generation Script

The actual image generation happens in a Python script using uv with inline dependencies (PEP 723):

#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "openai>=1.0.0",
#     "requests>=2.31.0",
#     "pillow>=10.0.0",
#     "numpy>=1.24.0",
# ]
# ///

This means no manual pip install; uv resolves and installs the dependencies automatically the first time the script runs.

The script does three things:

1. Generate a detailed prompt based on the post content and category

# Category-specific content guidance (abridged here; the full mapping mirrors SKILL.md)
category_styles = {
    "ai-ml": "Interconnected nodes, layered structures, or matrix-like grids",
    "general": "Clean geometric forms with generous negative space",
}

def generate_prompt(title: str, description: str, category: str) -> str:
    base_style = """
    Flat vector art illustration on pure bright green (#00FF00) background
    (chroma key green screen). Hard-edged geometric shapes with NO shadows,
    NO gradients, NO blur, NO soft edges.
    """

    # Fall back to the general style if the category isn't recognized
    category_style = category_styles.get(category, category_styles["general"])

    return f"""
    Subject: Visual metaphor for "{title}" - {description}
    Style: {base_style}
    Content approach: {category_style}
    """

2. Generate the image with DALL-E 3

import os

from openai import OpenAI

# Read the API key from the environment rather than hard-coding it
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1792x1024",   # wide format suited to blog headers
    quality="hd",
    n=1,
)
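
DALL-E returns a URL rather than raw image bytes, so the script then downloads the result for post-processing. A minimal sketch using the requests and Pillow dependencies already declared above:

from io import BytesIO

import requests
from PIL import Image

# Download the generated image and load it as an RGB Pillow image
image_url = response.data[0].url
img = Image.open(BytesIO(requests.get(image_url, timeout=60).content)).convert("RGB")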

3. Remove the green screen background

Here’s where it gets interesting. DALL-E can’t generate transparent backgrounds directly, so I prompt it to use a bright green (#00FF00) chroma key background, then remove it programmatically.

The first attempt used simple RGB thresholding, but that left green/teal artifacts around edges. The fix was converting to HSV color space and using Gaussian blur for smooth feathering:

import numpy as np
from PIL import Image, ImageFilter

# Helper: RGB array -> (hue in degrees, saturation 0-1, value 0-1), via Pillow's HSV mode
def rgb_to_hsv(arr):
    hsv = np.asarray(Image.fromarray(arr, "RGB").convert("HSV")).astype(float)
    return hsv[..., 0] * 360 / 255, hsv[..., 1] / 255, hsv[..., 2] / 255

# Convert RGB to HSV for better color detection
img_array = np.asarray(img)
hue, saturation, value = rgb_to_hsv(img_array)

# Detect green (hue 60-180°, saturation >0.2, value >0.2)
green_mask = (hue >= 60) & (hue <= 180) & (saturation > 0.2) & (value > 0.2)

# Create alpha channel: opaque everywhere except the detected green background
alpha = np.ones(green_mask.shape, dtype=np.uint8) * 255
alpha[green_mask] = 0

# Apply Gaussian blur for smooth, feathered edges
alpha_img = Image.fromarray(alpha, mode='L')
alpha_img = alpha_img.filter(ImageFilter.GaussianBlur(radius=1.5))

HSV color space makes it much easier to isolate specific colors, and the Gaussian blur prevents harsh edges.
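
The last step, not shown above, is to attach that feathered alpha channel to the original image and write out a transparent PNG. Roughly, with an illustrative output path:

# Merge the feathered alpha into the image and save a transparent PNG
rgba = img.convert("RGBA")
rgba.putalpha(alpha_img)
rgba.save("static/images/blog-headers/example-post.png")  # illustrative path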

Part 3: The Slash Command

To make this easy to use, I created a slash command at .claude/commands/header-image.md:

---
description: Generate a blog header image for a post
---

When the user invokes this command, you should:

1. Ask the user for:
   - Post title (or detect from currently open file)
   - Brief description of the post content
   - Category (ai-ml, software, startup, recipe, cycling, or general)

2. If a blog post is currently open in the IDE, automatically extract:
   - Title from the front matter
   - Description from the front matter
   - Infer category from tags or content

3. Generate the image using the Python script with uv

4. Show the user the generated prompt and the saved file path

5. **Automatically incorporate the image into the post**:
   - Update the front matter's `images:` field
   - Add Hugo figure shortcode at the top of the post content

Now I can just type /header-image in Claude Code, and it walks me through the whole process.

How It Works in Practice

Here’s a real example. I’m writing a post about deep research systems, so I run:

/header-image

Claude asks me for details:

  • Title: “Deep Research Systems: Architectural Differences That Matter”
  • Description: “Comparing ReAct, test-time compute scaling, and hybrid reasoning architectures”
  • Category: ai-ml

Behind the scenes, Claude:

  1. Generates a detailed DALL-E prompt incorporating the style guidelines and AI/ML category aesthetics
  2. Calls the Python script to generate the image with chroma key background
  3. Removes the green screen using HSV detection
  4. Saves to static/images/blog-headers/2025-12-15-deep-research-systems-architectural-differences-that-matter.png
  5. Updates the post’s front matter:

images:
- /images/blog-headers/2025-12-15-deep-research-systems-architectural-differences-that-matter.png

  6. Adds the Hugo figure shortcode at the top of the post, with the title as the alt text:

{{< figure src="/images/blog-headers/2025-12-15-deep-research-systems-architectural-differences-that-matter.png" alt="Deep Research Systems: Architectural Differences That Matter" >}}

Total time: about 30 seconds. And the image perfectly matches my site’s aesthetic.

Why This Approach Works

1. Constraints breed consistency

By defining strict color palettes and style guidelines in SKILL.md, every generated image follows the same aesthetic. DALL-E is powerful, but without constraints it produces wildly different styles.

2. Chroma key is more reliable than AI background removal

I initially tried using rembg (AI-powered background removal), but it was slow and left artifacts. Prompting DALL-E to use a specific green background and removing it programmatically is faster and more predictable.

3. HSV color space matters

RGB thresholding (if green > red and green > blue) catches the obvious green pixels but struggles with variations. HSV lets you define a range of “green-ish” colors by hue (60-180°) while filtering out dark or desaturated pixels.
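
For comparison, the naive RGB check looks something like this; it flags any pixel where green merely dominates, which misses darker or desaturated greens and clips into teal-ish edge pixels:

# Roughly the naive RGB threshold: green simply dominates the other channels
r, g, b = img_array[..., 0], img_array[..., 1], img_array[..., 2]
naive_green_mask = (g > r) & (g > b)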

4. Automation reduces friction

The slash command handles everything: prompt generation, image creation, background removal, and post integration. No context switching between tools.

What I’d Change

The generated images are good but not perfect. DALL-E sometimes adds subtle gradients despite the “NO gradients” instruction. A tighter feedback loop, maybe generating multiple options and picking the best, would help.

The chroma key approach also means I’m limited to flat, vector-style art. You can’t easily remove backgrounds from photorealistic images this way. But that constraint actually helps maintain consistency.

File sizes are large (1-2.5MB). Adding automatic image optimization with optipng or similar would be a good next step.
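
One low-effort option would be a Pillow re-encode pass before (or instead of) a proper optipng step, something like:

from PIL import Image

path = "static/images/blog-headers/example-post.png"  # illustrative path
Image.open(path).save(path, optimize=True)             # Pillow's built-in PNG optimization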

It’s a simple setup, but it’s easy to use and the results look good, so I’ll keep using it and improving it gradually.

Conclusion

Claude skills turn repetitive creative tasks into automated workflows. The key is giving Claude clear constraints and good tools to work with.

For blog header images, that meant:

  • Detailed style guidelines that define the aesthetic
  • A Python script using chroma key and HSV color detection
  • A slash command that handles the full workflow

Now generating header images takes 30 seconds instead of 30 minutes. And they’re more consistent than I’d achieve manually.

If you’re doing any repetitive work with AI, whether it’s image generation, content formatting, or code scaffolding, consider building a skill. The upfront investment pays off fast.
