Engineering · 2026-04-09 · 11 min read

How We Detect Competitor Metadata Changes Overnight

Your competitor changes their subtitle at 3 PM. By 8 AM the next morning, you get an email connecting their metadata change to your ranking drop. Here's the cron-powered detection pipeline behind it.

The problem: you find out two weeks too late

A competitor changes their app subtitle from "Simple Expense Tracker" to "AI Budget Planner & Expense Tracker". That single metadata change can shift keyword rankings across your entire category. They start ranking for "AI budget", "budget planner", and "expense tracker" all at once. Your app drops three positions for "expense tracker" and you don't notice until your weekly download numbers look off.

By the time you spot it — maybe two weeks later, maybe never — the damage is done. They've consolidated their new rankings and you're playing catch-up. The App Store rewards early movers. If you had known the day it happened, you could have adjusted your own keywords within hours.

This is the problem we set out to solve: detect competitor metadata changes overnight and connect those changes to your own ranking shifts automatically.

The architecture: snapshots and diffs

The concept is simple. Every day, take a snapshot of each competitor's public metadata. Compare today's snapshot to yesterday's. If anything changed, record it. Then cross-reference those changes with your own keyword ranking history to see if there's a connection.

The implementation has four parts: the data collector, the diff engine, the cross-diagnosis system, and the notification pipeline. Each runs as a separate stage in our daily cron sequence.

Stage 1: Daily competitor snapshots

Every day at 19:00 UTC (4 AM KST), a Vercel Cron job triggers the competitor sync. For each competitor you've added in the Competitors tab, we hit the iTunes Search API and record a full metadata snapshot:

  • App name — the display name on the App Store.
  • Subtitle — the short text below the name. This is heavily indexed for search.
  • Description — the full description text. Less weight for search, but still relevant.
  • Rating — current average rating and total rating count.
  • Price — free, paid, or subscription pricing.
  • Version — current version number and release date.
  • Screenshots and icon URL — to detect visual rebrand changes.

Each snapshot is stored in a CompetitorSnapshot table with a timestamp. We keep every snapshot — no overwriting. This means you can look back and see exactly what a competitor's metadata looked like on any given day.

The iTunes Search API is free and doesn't require authentication. But it has rate limits, so we batch requests with a small delay between each. For most users tracking 3–10 competitors, the entire sync completes in under a minute.
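The collector can be sketched in a few lines. This is a minimal illustration, not Apsity's actual code: the `CompetitorSnapshot` shape, the 250 ms delay, and the `collectSnapshots` function name are assumptions, and the subtitle field is left for a separate fetch since the standard lookup response doesn't include it.

```typescript
interface CompetitorSnapshot {
  appId: number;
  name: string;
  subtitle: string | null;
  description: string;
  rating: number;
  ratingCount: number;
  price: number;
  version: string;
  capturedAt: string; // ISO timestamp
}

// The iTunes lookup endpoint accepts comma-separated IDs and a country code.
function lookupUrl(appIds: number[], country = "us"): string {
  return `https://itunes.apple.com/lookup?id=${appIds.join(",")}&country=${country}`;
}

async function collectSnapshots(appIds: number[]): Promise<CompetitorSnapshot[]> {
  const snapshots: CompetitorSnapshot[] = [];
  for (const id of appIds) {
    const res = await fetch(lookupUrl([id]));
    const { results } = (await res.json()) as { results: any[] };
    if (results.length === 0) continue; // delisted app: handled as an edge case
    const app = results[0];
    snapshots.push({
      appId: app.trackId,
      name: app.trackName,
      subtitle: null, // not in the standard lookup response; fetched separately
      description: app.description,
      rating: app.averageUserRating,
      ratingCount: app.userRatingCount,
      price: app.price,
      version: app.version,
      capturedAt: new Date().toISOString(),
    });
    await new Promise((r) => setTimeout(r, 250)); // small delay between requests
  }
  return snapshots;
}
```

Each returned snapshot is then written as a new row, never overwriting a previous day's record.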

Stage 2: The diff engine

After collecting today's snapshots, the next step compares each one to the previous day's snapshot for the same competitor. This is a field-by-field comparison:

For text fields (name, subtitle, description), we do an exact string comparison. Even a single character change — a comma added, a word capitalized differently — gets flagged. For numeric fields (rating, rating count, price), we compare values directly.

When a difference is detected, we create a MetaChange record with:

  • The competitor app ID and name.
  • Which field changed (e.g., "subtitle").
  • The old value and the new value.
  • The timestamp of detection.

We also compute a "change significance" score. A subtitle change is more significant than a description change because subtitles are more heavily weighted in App Store search. A name change is the most significant — apps rarely change their name unless they're pivoting their positioning entirely.
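The diff itself is a straightforward exact comparison per field. In this sketch the field weights are invented for illustration (the real significance scoring may differ), but the ordering matches the text: name above subtitle, subtitle above description.

```typescript
interface MetaChange {
  field: string;
  oldValue: string;
  newValue: string;
  significance: number;
}

// Heavier weights for fields that carry more weight in App Store search.
// These values are illustrative assumptions.
const FIELD_WEIGHTS: Record<string, number> = {
  name: 1.0,
  subtitle: 0.8,
  description: 0.3,
};

function diffSnapshots(
  prev: Record<string, string>,
  curr: Record<string, string>
): MetaChange[] {
  const changes: MetaChange[] = [];
  for (const field of Object.keys(FIELD_WEIGHTS)) {
    // Exact string comparison: even a single-character edit is flagged.
    if (prev[field] !== curr[field]) {
      changes.push({
        field,
        oldValue: prev[field],
        newValue: curr[field],
        significance: FIELD_WEIGHTS[field],
      });
    }
  }
  return changes;
}
```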

Stage 3: Cross-diagnosis with your rankings

This is where it gets interesting. Detecting that a competitor changed their subtitle is useful. Detecting that a competitor changed their subtitle and your keyword ranking dropped at the same time is actionable.

The cross-diagnosis system runs after both the competitor diff and the keyword ranking collection are complete. It looks for correlations:

  1. Retrieve recent ranking drops. Any keyword where your position worsened by 5 or more spots in the last 3 days.
  2. Retrieve recent competitor changes. Any metadata change detected in the same 3-day window.
  3. Check keyword overlap. If the competitor's new subtitle contains a term that matches one of your dropping keywords, that's a correlation.
  4. Generate a RANK_DROP_DIAGNOSIS insight. The AI receives the ranking data and competitor change data together and produces an explanation.
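The overlap check in step 3 reduces to a substring match between dropping keywords and the competitor's new metadata. A minimal sketch, where the thresholds (5 positions, matched within the same window) come from the text and the data shapes are assumptions:

```typescript
interface RankDrop { keyword: string; oldRank: number; newRank: number; }
interface CompetitorChange { competitor: string; field: string; newValue: string; }
interface Correlation { drop: RankDrop; change: CompetitorChange; }

function findCorrelations(
  drops: RankDrop[],
  changes: CompetitorChange[]
): Correlation[] {
  const correlations: Correlation[] = [];
  for (const drop of drops) {
    if (drop.newRank - drop.oldRank < 5) continue; // only drops of 5+ positions
    for (const change of changes) {
      // Does the competitor's new metadata contain the dropping keyword?
      if (change.newValue.toLowerCase().includes(drop.keyword.toLowerCase())) {
        correlations.push({ drop, change });
      }
    }
  }
  return correlations;
}
```

Each correlation then goes to the AI stage with the surrounding ranking history for a written explanation.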

The resulting insight might read: "Your ranking for 'expense tracker' dropped from #8 to #14 over the last 3 days. During the same period, Competitor X changed their subtitle to include 'expense tracker'. This likely increased competition for the term."

This insight gets a Correlation confidence badge — not a Fact. We can't prove causation. App Store rankings depend on many factors: download velocity, review sentiment, Apple's algorithm updates, and more. But the temporal correlation is strong enough to be worth investigating.

Stage 4: Notifications

The final stage delivers the findings to you. Apsity sends alerts through two channels:

In-app notifications. When you open Apsity, the Overview tab shows insight cards with any rank drop diagnoses at the top. Each card shows the affected keyword, the ranking change, and the competitor action that may have caused it.

Email alerts. At 08:00 KST the next morning, an email summary goes out. It includes competitor metadata changes, significant ranking drops, and any cross-diagnosis findings. The email looks something like:

Competitor Alert — April 7, 2026

BudgetPal changed subtitle:
  Old: "Simple Budget Tracker"
  New: "AI Budget Planner & Expense Tracker"

Your ranking impact:
  "expense tracker": #8 → #14 (▼6 positions, last 3 days)
  "budget planner": #22 → #31 (▼9 positions, last 3 days)

The email gives you enough context to decide whether to act. If the drops are on keywords you care about, you can re-run the AI keyword optimizer to generate a new 100-character set that accounts for the competitive shift. If the drops are on low-priority terms, you can ignore them.

Why overnight detection matters

App Store keyword rankings react quickly to metadata changes. When a competitor adds a popular term to their subtitle, Apple's algorithm can start ranking them for it within 24–48 hours. If you wait a week to notice, they've already accumulated download momentum for that keyword.

Overnight detection means you're never more than ~24 hours behind. The competitor changes metadata during your Tuesday. The cron job detects it early Wednesday morning. You get the email at 8 AM Wednesday. By Wednesday afternoon, you've updated your own keywords. Total reaction time: less than a day.

Compared to the "notice it two weeks later by accident" approach, this is a significant advantage — especially in competitive categories where multiple apps are actively optimizing their ASO.

Technical details: the cron sequence

Apsity runs 7 cron jobs daily. The competitor-related jobs are part of a sequential chain to ensure data dependencies are met:

  1. 03:00 KST — Revenue sync (App Store Connect API).
  2. 03:15 KST — Keyword ranking collection (iTunes Search API).
  3. 04:00 KST — Competitor snapshot collection (iTunes Search API).
  4. 04:30 KST — AI insight generation, including cross-diagnosis (Claude Sonnet).
  5. 08:00 KST — Email alerts dispatched.
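On Vercel, a schedule like this lives in `vercel.json` as cron expressions in UTC (KST is UTC+9, so 03:00 KST is 18:00 UTC the previous day). The route paths below are illustrative, not Apsity's actual endpoints:

```json
{
  "crons": [
    { "path": "/api/cron/revenue-sync",         "schedule": "0 18 * * *" },
    { "path": "/api/cron/keyword-rankings",     "schedule": "15 18 * * *" },
    { "path": "/api/cron/competitor-snapshots", "schedule": "0 19 * * *" },
    { "path": "/api/cron/insights",             "schedule": "30 19 * * *" },
    { "path": "/api/cron/email-alerts",         "schedule": "0 23 * * *" }
  ]
}
```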

Each job uses the after() pattern on Vercel: the cron endpoint returns 200 immediately, then processes the work in the background. This avoids Vercel's function timeout limit. If a stage fails, it logs the error and the next stage still runs — partial data is better than no data.

The AI insight stage is the most resource-intensive. It sends structured data to Claude Sonnet — ranking history, competitor diffs, keyword difficulty scores — and receives formatted insight objects. Each insight includes a type tag (like RANK_DROP_DIAGNOSIS or COMPETITOR_META_CHANGE), a confidence level, and a human-readable explanation.

Edge cases we handle

Building reliable change detection means handling messy real-world scenarios:

  • iTunes API temporary errors. Sometimes the API returns incomplete data or times out. We retry once with a 5-second delay. If it fails again, we skip that competitor for the day and note the gap. No false "everything changed" alerts.
  • Localized metadata. Competitor apps may have different names/subtitles in different locales. We snapshot the locale matching your primary market. If you track keywords in "US", we snapshot the US store version.
  • Rating count fluctuations. Apple sometimes revises rating counts slightly (removing spam reviews, for example). Small fluctuations (less than 2%) are filtered out to avoid noise.
  • New competitor added. When you add a new competitor, there's no previous snapshot to diff against. The first snapshot is treated as the baseline — no changes are reported until the second day.
  • Competitor app removed from store. If the iTunes API returns a 404 for a tracked competitor, we flag it as "Delisted" and send a one-time notification. The competitor remains in your list in case it comes back.
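Two of these guards are small enough to sketch. The 5-second retry delay and the 2% threshold come from the text; the function names and return conventions are assumptions:

```typescript
// Retry once after a delay; if both attempts fail, return null so the
// caller skips this competitor for the day instead of raising a false alert.
async function fetchWithRetry<T>(
  fn: () => Promise<T>,
  delayMs = 5000
): Promise<T | null> {
  try {
    return await fn();
  } catch {
    await new Promise((r) => setTimeout(r, delayMs));
    try {
      return await fn();
    } catch {
      return null; // note the gap; no "everything changed" noise
    }
  }
}

// Ignore rating-count changes below 2%: Apple revises counts slightly,
// for example when it removes spam reviews.
function isSignificantCountChange(oldCount: number, newCount: number): boolean {
  if (oldCount === 0) return newCount > 0;
  return Math.abs(newCount - oldCount) / oldCount >= 0.02;
}
```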

What this doesn't do

We want to be transparent about limitations: the system detects what changed and correlates it with when your rankings moved, but it cannot prove causation. App Store rankings are influenced by dozens of factors — download velocity, user engagement, Apple's algorithm updates, seasonal trends, and more. A competitor changing their subtitle on the same day your ranking drops is suspicious, but it could be coincidental.

That's why every cross-diagnosis insight gets a Correlation badge, not a Fact badge. We present the evidence and let you decide whether to act. The value isn't in certainty — it's in awareness. Knowing that a competitor made a move and your rankings shifted is infinitely better than not knowing at all.

The overnight loop

The full system forms a closed loop: competitors change metadata → cron detects it → diff engine records it → cross-diagnosis connects it to your rankings → AI generates an explanation → you get an email. Six stages, fully automated, running every night while you sleep.

The cron handles the watching. The AI handles the analysis. You handle the decision. That's the division of labor we designed for. An indie developer shouldn't be manually checking competitor App Store pages every day. That's exactly the kind of tedious, repetitive work that automation was built for.

Try Apsity for free

Track rankings, revenue, and competitors. Set up in 2 minutes.

Get Started Free