Beyond Public Search: 15 New MCP Tools to Pull Your Apsity Data Into Claude
Apsity's MCP server used to be public App Store search only. It now exposes 19 tools — 15 new — that let Claude read your own revenue, downloads, reviews, keyword rankings, competitor metadata changes, AI insights, and plan usage. Read-only by design. Here's what changed and why it matters.
The MCP gap we ignored for too long
When we shipped Apsity's MCP server, it gave Claude four tools — all of them flavors of public App Store search. You could ask Claude to scan the Top 50 for "meditation" in five countries and brainstorm gaps, which is useful for market research. But the part Apsity actually does — analyzing your own apps — was nowhere in the protocol.
That meant the typical AI workflow looked like: open Apsity → squint at a chart → copy a few numbers into Claude → ask a question. You were the API. The data Apsity already collects, normalizes, and stores was sitting one tab away from the assistant that wanted to reason about it.
This release fixes that. The MCP server now exposes 19 tools (4 existing + 15 new), and the new ones are all about pulling your private Apsity data into Claude — revenue, downloads, reviews, keyword rankings, competitor moves, AI insights, plan usage. Read-only by design.
What's new (15 tools, grouped)
Your apps and money
- `list_my_apps` — every app you've connected, with App Store id and growth stage. This is the entry point for everything else.
- `get_revenue` — daily revenue + downloads + active subscribers, normalized to USD, for one app or all apps. Subscription revenue uses the latest-day snapshot the same way the dashboard does, so totals don't double-count.
- `get_downloads` — pure install counts only (excludes redownloads, updates, IAP, subscriptions). Useful when the question is "is anyone new finding us?", not "how much did we earn?"
- `get_country_breakdown` — top 30 countries by downloads, with USD revenue per country. Surfaces the "where should we localize next?" signal.
- `revenue_compare` — two equal-length windows compared. Default is last 7 days vs the prior 7; you get deltas and percentage changes for downloads and revenue.
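Under the hood, `revenue_compare` is just two equal-window sums plus deltas. A minimal sketch of that logic in Python — the `daily` data shape and the function signature are my own illustration, not Apsity's actual implementation:

```python
from datetime import date, timedelta

def revenue_compare(daily, end=None, window_days=7):
    """Compare two equal-length windows of daily metrics.

    `daily` maps ISO date strings to (downloads, revenue_usd) tuples --
    a hypothetical shape; the real tool reads from Apsity's store.
    """
    end = end or date.today()

    def window_total(start, days):
        dl, rev = 0, 0.0
        for i in range(days):
            d, r = daily.get((start + timedelta(days=i)).isoformat(), (0, 0.0))
            dl += d
            rev += r
        return dl, rev

    cur_start = end - timedelta(days=window_days)        # last 7 days
    prev_start = cur_start - timedelta(days=window_days)  # the 7 before that
    cur = window_total(cur_start, window_days)
    prev = window_total(prev_start, window_days)

    def pct(new, old):
        # Percentage change, None when the prior window was zero.
        return None if old == 0 else round(100 * (new - old) / old, 1)

    return {
        "downloads_delta": cur[0] - prev[0],
        "downloads_pct": pct(cur[0], prev[0]),
        "revenue_delta": round(cur[1] - prev[1], 2),
        "revenue_pct": pct(cur[1], prev[1]),
    }
```

The equal-length constraint is what makes the percentages honest: comparing a 7-day window to a 9-day one would bake a ~30% "change" into the baseline.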
Reviews
- `get_reviews` — recent reviews with rating, body, sentiment label, and aggregate stats. Filterable by rating or sentiment. Body is truncated to ~400 chars so Claude doesn't burn context on padding.
- `summarize_reviews` — curated buckets (most recent negative, positive, and lowest-rated) plus sentiment totals. Cheaper than fetching `get_reviews` three times with different filters. No AI call inside the tool — Claude does the synthesis itself from the curated set.
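The truncation and bucket curation are both simple; a sketch of what they might look like, with field names (`date`, `sentiment`, `rating`) assumed rather than taken from Apsity's schema:

```python
def truncate_body(body, limit=400):
    """Trim a review body so long reviews don't burn Claude's context."""
    return body if len(body) <= limit else body[:limit].rstrip() + "…"

def curate_buckets(reviews, n=5):
    """Pick summarize_reviews-style curated sets: most recent negatives,
    most recent positives, and the lowest-rated overall."""
    by_recent = sorted(reviews, key=lambda r: r["date"], reverse=True)
    return {
        "recent_negative": [r for r in by_recent if r["sentiment"] == "negative"][:n],
        "recent_positive": [r for r in by_recent if r["sentiment"] == "positive"][:n],
        "lowest_rated": sorted(reviews, key=lambda r: r["rating"])[:n],
    }
```

Three small curated lists beat one big dump: the model sees the extremes it needs for theme extraction without paying for a hundred lukewarm three-star reviews.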
Keywords (yours)
- `get_keyword_rankings` — every keyword tracked under an app, with the last several rank snapshots so the trend (rising, falling, missing from the top 200) is obvious.
- `list_my_keywords` — compact inventory across all apps. Use this to ask Claude "what am I even tracking right now?"
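The rising/falling/missing read is a one-line comparison over those snapshots. A sketch of how a client could classify them — the threshold and labels are my choices, not Apsity's:

```python
def classify_trend(snapshots, threshold=5):
    """Classify a keyword's trajectory from oldest-to-newest rank snapshots.

    None means the app wasn't found in the top 200 on that day.
    """
    if not snapshots or snapshots[-1] is None:
        return "missing"
    ranked = [r for r in snapshots if r is not None]
    if len(ranked) < 2:
        return "new"
    delta = ranked[0] - ranked[-1]  # positive = climbed (rank numbers shrink)
    if delta >= threshold:
        return "rising"
    if delta <= -threshold:
        return "falling"
    return "stable"
```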
Competitors
- `list_my_competitors` — your tracked rivals with developer, rating, version, genre.
- `get_competitor_rankings` — competitor keyword ranks grouped by (keyword, country), with a short history each.
- `get_meta_changes` — the change log of competitor metadata over time. Apsity snapshots competitor names, subtitles, descriptions, icons, and versions daily and records diffs. This tool is the one I'm most excited about: it's data competitors' own analytics dashboards don't have, but yours does.
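The diff step itself is trivial — the value is the daily snapshot history behind it. A sketch of the comparison, with field names assumed to match the list above:

```python
TRACKED_FIELDS = ("name", "subtitle", "description", "icon_url", "version")

def diff_snapshots(yesterday, today, fields=TRACKED_FIELDS):
    """Record which competitor metadata fields changed between two daily
    snapshots, get_meta_changes-style. Snapshot dicts are illustrative."""
    changes = []
    for field in fields:
        old, new = yesterday.get(field), today.get(field)
        if old != new:
            changes.append({"field": field, "from": old, "to": new})
    return changes
```

Run daily per competitor and append to a log, and "who rewrote their subtitle last week?" becomes a filter instead of a memory test.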
Subscriptions, AI insights, and quota
- `get_subscription_trend` — daily active subscriber count series. Useful only for apps that sell auto-renewable subscriptions; it will tell you politely if your apps don't have any.
- `get_growth_insights` — the AI insights that Apsity's nightly crons already produce (rank-drop diagnoses, hidden markets, keyword optimization, revenue anomalies). This tool reads cached insights; it doesn't trigger new analysis.
- `get_plan_usage` — your plan tier, plan limits, and current consumption (apps, keywords, competitors, today's keyword searches, watchlist size). Helpful when a tool returns a quota error and you want to know exactly where you stand.
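The quota arithmetic `get_plan_usage` hands back is the obvious one: pair each limit with consumption and flag what's exhausted. A sketch with illustrative resource keys (the real response shape may differ):

```python
def usage_report(plan_limits, consumption):
    """Summarize plan headroom per resource, get_plan_usage-style."""
    report = {}
    for resource, limit in plan_limits.items():
        used = consumption.get(resource, 0)
        report[resource] = {
            "used": used,
            "limit": limit,
            "remaining": max(limit - used, 0),
            "at_limit": used >= limit,  # explains the quota error you just hit
        }
    return report
```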
Workflows that finally make sense
Monday morning weekly review
One prompt instead of five tabs:
“Use apsity to pull last week's revenue and downloads for all my apps and compare them to the prior week with revenue_compare. Flag any app with a >20% drop and dig into the country breakdown.”

Claude picks the right tools in sequence, normalizes the numbers, and reads back a Monday-morning summary. You never opened the dashboard.
Review triage before a release
“Run apsity:summarize_reviews for the past 30 days. Group the negative themes, then propose 3 product changes that would address the largest cluster.”
The tool returns 5 recent negatives, 5 positives, 5 lowest-rated — enough signal for Claude to identify themes without paying for 100-review context.
Competitor watch
“Use apsity:get_meta_changes for the last 14 days. For each rival that changed subtitle or screenshots, summarize what they shifted and whether it aligns with a feature push.”
This is the killer one. Most teams don't notice when a competitor quietly rewrites their subtitle to chase a new keyword. Apsity already detects it; the MCP tool puts that diff directly in Claude's context.
Keyword health check
“Run apsity:get_keyword_rankings. List keywords whose rank dropped 5+ positions in the last week, then use apsity:keyword_search to propose alternatives.”
This is exactly the loop ASO consultants charge for. Now it's a single prompt.
Read-only by design
Every new tool is read-only. There is intentionally no `add_keyword`, no `add_competitor`, no `delete_app`. Two reasons:
- Hallucination risk. Letting an LLM register tracking targets based on a casual sentence is a way to fill your keyword quota with garbage by Tuesday afternoon. Setup actions belong in the dashboard where you can review before committing.
- Blast radius if a key leaks. If an API key ends up somewhere public, read-only access means at worst someone sees your stats. Write access would mean someone can pollute your tracking, fill quotas, or trigger notification alerts.
We'll consider write tools later if a real workflow requires them. For now, the friction of clicking "Add" in the dashboard is a feature, not a bug.
How auth works
Same model as before. Issue a key in Settings → MCP API Keys, paste it into your client config (Claude Desktop, Cursor, Claude Code, anything MCP-capable), and the server scopes every tool call to your account. The Free plan still gets a 401, because MCP is a Starter and Pro feature.
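Config shapes differ a little between clients, but most MCP-capable clients take an entry along these lines. Everything below is illustrative — the actual endpoint URL, entry name, and whether your client wants a `url`-plus-`headers` entry or a `command` wrapper come from the connection snippets at /docs/mcp:

```json
{
  "mcpServers": {
    "apsity": {
      "url": "https://YOUR-APSITY-MCP-ENDPOINT",
      "headers": {
        "Authorization": "Bearer YOUR_MCP_API_KEY"
      }
    }
  }
}
```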
Every tool that touches user data starts with the same check: the Bearer token is looked up in the ApiKey table, the resolved user id is attached to the call, and the data query filters by userId. Trying to pass another user's app id returns "App not found or not owned" — there's no scenario where one customer's key can read another's data.
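That per-tool guard boils down to a few lines. The names below follow the description (ApiKey lookup, userId filter, the "not found or not owned" message); the data structures standing in for the database are a sketch:

```python
class AuthError(Exception):
    pass

def authorize(bearer_token, api_keys):
    """Resolve a Bearer token to a user id, or reject the call.
    `api_keys` stands in for the ApiKey table: {token: user_id}."""
    user_id = api_keys.get(bearer_token)
    if user_id is None:
        raise AuthError("Invalid API key")
    return user_id

def get_app_for_user(app_id, user_id, apps):
    """Every data query filters by owner. A foreign app id is deliberately
    indistinguishable from a missing one -- same error either way."""
    app = apps.get(app_id)
    if app is None or app["user_id"] != user_id:
        raise AuthError("App not found or not owned")
    return app
```

Collapsing "doesn't exist" and "isn't yours" into one error also avoids leaking which app ids exist in other accounts.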
What this changes
Before: Apsity was a place you opened. Now: Apsity is a thing Claude already knows. The honest framing of this release isn't "we added 15 tools", it's "your AI assistant just stopped being blind to your own data."
Open /docs/mcp for the full tool reference and connection snippets. If you're already connected, the new tools appear the next time you start a fresh conversation — no config change needed.