API-First AI Monitoring: Build Custom Dashboards with Real-Time AI Search Data
Your marketing stack already has a BI tool, a Slack workspace, and a dozen internal dashboards. The last thing you need is another standalone platform with its own login. Here's how to pipe AI search visibility data directly into the tools your team already uses.
Why API-First Matters for AI Monitoring
Most AI visibility tools give you a dashboard and expect you to check it. That model breaks down fast. Your SEO data lives in Looker. Your alerts go to Slack. Your content calendar runs in Notion or Asana. If AI search data sits in a silo, nobody looks at it until something goes wrong.
An API-first architecture means every data point — audit results, AI Readiness Scores, citation rates, competitor mentions — is available programmatically. You choose where it goes and how it gets there. Build a custom Slack bot that pings your team when a competitor overtakes you in ChatGPT. Feed visibility scores into your Tableau dashboard alongside organic traffic. Trigger content workflows automatically when scores drop.
This isn't a nice-to-have. For teams that already run on data pipelines, an API-first monitoring tool is the difference between AI search data that gets acted on and data that gets ignored. See our enterprise monitoring guide for the broader organizational framework.
What you can build with the Foglift API
1. Slack bots that alert on visibility drops or competitor gains
2. BI dashboards (Looker, Metabase, Tableau) with live AI search data
3. Automated content workflows that trigger when AI Readiness Scores fall
4. CI/CD checks that validate AI visibility before shipping content changes
5. Custom reporting for stakeholders who never log in to another tool
Foglift API Overview
The Foglift API is organized around two core resource types: scans (individual website audits) and GEO queries (AI search visibility checks across ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews). All endpoints return JSON, use standard HTTP methods, and authenticate via Bearer tokens. Full reference at /docs.
Authentication
Every request requires an API key passed in the Authorization header. Generate keys in your dashboard under Settings → API Keys. Keys are scoped per project, so multi-brand organizations can issue separate keys per team.
curl https://foglift.io/api/v1/scans \
-H "Authorization: Bearer fl_live_your_api_key" \
-H "Content-Type: application/json"

Scan Endpoints
Run a Website Audit and retrieve results programmatically. The audit analyzes SEO health, schema markup, AI-readiness signals, and page performance.
# Start a new scan
curl -X POST https://foglift.io/api/v1/scans \
-H "Authorization: Bearer fl_live_your_api_key" \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com", "depth": "full"}'
# Response
{
"id": "scan_8xKp2mNq",
"status": "processing",
"url": "https://example.com",
"created_at": "2026-03-22T14:30:00Z",
"estimated_completion": "2026-03-22T14:31:30Z"
}
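Because scans process asynchronously, client code typically polls the scan until its status changes before reading results. Here is a sketch of that loop; the terminal "complete" status name and the 30-poll cap are assumptions (the docs only show "processing"), and fetchFn is injectable so the loop can be tested without a network.

```javascript
// Poll a scan until it leaves "processing". fetchFn is injectable for
// testing; the terminal status name ("complete") is an assumption.
async function waitForScan(scanId, apiKey, fetchFn = fetch, intervalMs = 5000) {
  for (let attempt = 0; attempt < 30; attempt++) {
    const res = await fetchFn(`https://foglift.io/api/v1/scans/${scanId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const scan = await res.json();
    if (scan.status !== "processing") return scan; // finished (or failed)
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Scan ${scanId} still processing after 30 polls`);
}
```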
# Retrieve audit results
curl https://foglift.io/api/v1/scans/scan_8xKp2mNq \
-H "Authorization: Bearer fl_live_your_api_key"

GEO Query Endpoints
Check how your brand appears across AI search engines. Submit a prompt, specify which engines to query, and get back structured visibility data including citations, sentiment, position, and competitor mentions.
# Run a GEO visibility check
curl -X POST https://foglift.io/api/v1/geo/query \
-H "Authorization: Bearer fl_live_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"prompt": "Best website optimization tools for SaaS companies",
"brand": "YourBrand",
"engines": ["chatgpt", "perplexity", "claude", "gemini"],
"include_competitors": true
}'
# Response
{
"id": "geo_4rTv9wXz",
"prompt": "Best website optimization tools for SaaS companies",
"results": [
{
"engine": "chatgpt",
"cited": true,
"position": 2,
"sentiment": "positive",
"competitors_mentioned": ["Ahrefs", "Semrush", "Moz"],
"response_snippet": "...YourBrand offers real-time AI search monitoring..."
}
]
}

Batch Operations
For teams monitoring large prompt libraries, the batch endpoint lets you submit up to 100 prompts in a single request. Results are delivered via webhook or polling.
curl -X POST https://foglift.io/api/v1/geo/batch \
-H "Authorization: Bearer fl_live_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"brand": "YourBrand",
"prompts": [
"Best CRM for startups",
"Top project management tools 2026",
"YourBrand vs CompetitorX"
],
"engines": ["chatgpt", "perplexity", "claude"],
"webhook_url": "https://your-app.com/webhooks/foglift"
}'

Example: Building a Slack Bot for AI Visibility Alerts
One of the most common integrations is a Slack bot that notifies your team when AI visibility changes. Here's a complete example using Foglift webhooks and a small Node.js server.
Step 1: Register a Webhook
Tell Foglift where to send alert events. You can register webhooks via the API or in the dashboard under Settings → Webhooks.
curl -X POST https://foglift.io/api/v1/webhooks \
-H "Authorization: Bearer fl_live_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"url": "https://your-server.com/webhooks/foglift",
"events": ["visibility.drop", "competitor.surge", "scan.complete"],
"filters": {
"min_score_change": 10,
"brands": ["YourBrand"]
}
}'

Step 2: Handle the Webhook and Post to Slack
A minimal Express server that receives Foglift webhook events and forwards them to a Slack channel via an incoming webhook URL:
import express from "express";
const app = express();
app.use(express.json());
const SLACK_WEBHOOK = process.env.SLACK_WEBHOOK_URL;
app.post("/webhooks/foglift", async (req, res) => {
const { event, data } = req.body;
// Verify webhook signature (recommended)
// const isValid = verifySignature(req);
let message = "";
if (event === "visibility.drop") {
message = [
":chart_with_downwards_trend: *AI Visibility Drop Detected*",
`Brand: ${data.brand}`,
`Engine: ${data.engine}`,
`Prompt: "${data.prompt}"`,
`Score: ${data.previous_score} → ${data.current_score}`,
`<https://foglift.io/dashboard/geo/${data.query_id}|View details>`,
].join("\n");
}
if (event === "competitor.surge") {
message = [
":warning: *Competitor Visibility Surge*",
`Competitor: ${data.competitor}`,
`Engine: ${data.engine}`,
`Share of voice: +${data.sov_change}%`,
`Your brand impact: ${data.your_brand_change}`,
].join("\n");
}
if (message) {
await fetch(SLACK_WEBHOOK, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ text: message }),
});
}
res.status(200).json({ received: true });
});
app.listen(3001);

That's it. Your team now gets real-time Slack alerts whenever a monitored prompt shows a significant visibility change. No need to check a separate dashboard.
Connecting AI Search Data to Your BI Dashboard
AI search visibility belongs in the same dashboard where your team already tracks organic traffic, paid media performance, and conversion rates. Here's how to connect Foglift data to the most common BI tools.
Looker / Looker Studio
Create a custom data source that pulls from the Foglift API on a schedule. Use the /api/v1/geo/history endpoint to get time-series data for trend visualization.
// Fetch 30-day visibility history for Looker ingestion
const response = await fetch(
"https://foglift.io/api/v1/geo/history?" +
new URLSearchParams({
brand: "YourBrand",
period: "30d",
engines: "chatgpt,perplexity,claude",
metrics: "citation_rate,sentiment,sov",
format: "csv", // CSV for easy Looker import
}),
{
headers: {
Authorization: "Bearer fl_live_your_api_key",
},
}
);
const csvData = await response.text();
// Upload to your data warehouse or Looker source

Metabase
Metabase works best against a live database rather than a REST feed, so the simplest approach is writing the Foglift data to your Postgres or BigQuery warehouse on a nightly cron, then pointing Metabase at the table. This gives you full SQL access to AI visibility data alongside every other metric you track.
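As a sketch of that sync step, the transform below flattens a GEO query response (shaped like the example earlier in this post) into rows plus a parameterized INSERT suitable for node-postgres. The ai_visibility table and its column names are hypothetical, not a Foglift schema.

```javascript
// Flatten one GEO query response into warehouse rows.
// Table and column names below are illustrative only.
function toWarehouseRows(geoResult) {
  return geoResult.results.map((r) => ({
    query_id: geoResult.id,
    prompt: geoResult.prompt,
    engine: r.engine,
    cited: r.cited,
    position: r.position ?? null,
    sentiment: r.sentiment,
    synced_at: new Date().toISOString(),
  }));
}

// Build a single multi-row parameterized INSERT ($1, $2, ...) from the rows.
function toInsertStatement(rows) {
  const cols = ["query_id", "prompt", "engine", "cited", "position", "sentiment", "synced_at"];
  const placeholders = rows
    .map((_, i) => `(${cols.map((_, j) => `$${i * cols.length + j + 1}`).join(", ")})`)
    .join(", ");
  const values = rows.flatMap((row) => cols.map((c) => row[c]));
  return {
    text: `INSERT INTO ai_visibility (${cols.join(", ")}) VALUES ${placeholders}`,
    values,
  };
}
```

Run the statement once per night with your Postgres client of choice, then point Metabase at the table.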
Tableau
Use Tableau's Web Data Connector or its REST API integration to pull Foglift data directly. The JSON response format maps cleanly to Tableau's data model. Build cross-source dashboards that correlate AI citation rate with organic traffic, pipeline stage, or content publish dates to identify what drives AI visibility.
Common BI integrations
- Data warehouse sync: Nightly pull to Postgres, BigQuery, or Snowflake via cron + API
- Looker Studio: Custom data connector with CSV or JSON export
- Tableau: Web Data Connector or direct REST integration
- Metabase: SQL queries against synced warehouse tables
- Power BI: REST API connector with scheduled refresh
Webhook-Driven Workflows
Webhooks let you build automated responses to AI visibility changes. Instead of polling the API on a schedule, register a webhook and Foglift pushes events to your endpoint in real time.
Trigger Content Updates When Scores Drop
The highest-value webhook workflow is automatic content flagging. When an AI Readiness Score drops below a threshold on a critical prompt, the webhook fires and your system creates a ticket in Jira, Linear, or Asana with the affected prompt, the current AI response, and a link to the content that needs updating.
// Webhook handler: auto-create Linear ticket on visibility drop
app.post("/webhooks/foglift", async (req, res) => {
const { event, data } = req.body;
if (event === "visibility.drop" && data.score_change <= -15) {
await fetch("https://api.linear.app/graphql", {
method: "POST",
headers: {
Authorization: `Bearer ${process.env.LINEAR_API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
query: `mutation {
issueCreate(input: {
teamId: "${process.env.LINEAR_TEAM_ID}",
title: "AI visibility drop: ${data.prompt}",
description: "Citation rate dropped ${data.score_change}% on ${data.engine}.\n\nPrompt: ${data.prompt}\nPrevious score: ${data.previous_score}\nCurrent score: ${data.current_score}\n\n[View in Foglift](https://foglift.io/dashboard/geo/${data.query_id})",
priority: 2
}) { issue { id url } }
}`
}),
});
}
res.status(200).json({ ok: true });
});

Available Webhook Events
| Event | Fires When | Use Case |
|---|---|---|
| visibility.drop | Citation rate or AI Readiness Score decreases beyond threshold | Content remediation tickets |
| visibility.gain | Scores improve past threshold | Win tracking, team notifications |
| competitor.surge | Competitor gains significant share of voice | Competitive alerts |
| scan.complete | Website audit finishes processing | Pipeline triggers, report generation |
| sentiment.shift | AI engine sentiment changes from positive to negative | Brand safety, PR alerts |
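One way to wire these up is a small dispatcher keyed on the event names in the table. The handler bodies below are placeholders you would replace with Slack, Linear, or PagerDuty calls; the only assumption is the webhook payload's { event, data } shape shown earlier.

```javascript
// Route Foglift webhook events to handlers. Bodies are placeholders;
// swap in real Slack / Linear / PagerDuty calls for each event type.
const handlers = {
  "visibility.drop": (d) => `remediate: ${d.prompt}`,
  "visibility.gain": (d) => `celebrate: ${d.prompt}`,
  "competitor.surge": (d) => `alert: ${d.competitor}`,
  "scan.complete": (d) => `report: ${d.id}`,
  "sentiment.shift": (d) => `escalate: ${d.engine}`,
};

function dispatch(event, data) {
  const handler = handlers[event];
  if (!handler) return null; // ignore unknown events rather than erroring
  return handler(data);
}
```

Ignoring unknown events (instead of throwing) keeps your endpoint forward-compatible if new event types are added.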
CLI for Developer Workflows
Not every workflow needs a webhook or a dashboard. Sometimes you just want to check your AI visibility from the terminal. The Foglift CLI (foglift-scan) wraps the API in a developer-friendly interface that works in scripts, CI/CD pipelines, and interactive sessions.
CLI Quick Start
# Install
npm install -g foglift-scan
# Authenticate (env var; generate a key at foglift.io/dashboard/settings)
export FOGLIFT_API_KEY=fl_live_your_api_key
# Scan a single URL (no API key required for the basic scan)
foglift scan https://example.com
foglift scan https://example.com --json --threshold=80
# Run an AI visibility check across engines
foglift scan ai-check --prompt "best project management tools" \
--models chatgpt,perplexity,claude,gemini \
--domain yourbrand.com
# Pull recent visibility results and sentiment
foglift scan results --days 7 --model chatgpt --json > results.json
foglift scan sentiment --days 30 --json
# Manage tracked prompts
foglift scan prompts list
foglift scan prompts add "best tools to rank in ChatGPT"
# Scan history for a URL
foglift scan history https://example.com --json

CI/CD Integration
Add an AEO scan to your deployment pipeline so every push is graded against the same eight extraction signals the AI engines care about. The --threshold=N flag makes the command exit with code 1 if the overall score drops below N, giving you a clean pipeline gate.
# GitHub Actions example
- name: AEO scan post-deploy
env:
FOGLIFT_API_KEY: ${{ secrets.FOGLIFT_API_KEY }}
run: |
npm install -g foglift-scan
# Fail the build if AEO score drops below 80
foglift scan https://example.com --threshold=80
# Optional: capture structured results for reporting
foglift scan https://example.com --json > aeo.json
foglift scan ai-check --prompt "best tools in your category" \
--models chatgpt,perplexity --domain example.com --json > visibility.json

Rate Limits, Authentication, and Best Practices
Rate Limits
Every API response includes rate limit headers so your integration can handle throttling gracefully:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 47
X-RateLimit-Reset: 1742662800

| Plan | Requests / Minute | Requests / Day | Batch Size |
|---|---|---|---|
| Free | 10 | 100 | 5 prompts |
| Pro | 60 | 5,000 | 100 prompts |
| Enterprise | Custom | Custom | 500 prompts |
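A small helper can turn those headers into a pause before the next request. This is a sketch built on the header names shown above; the only assumption is a Headers-like object with a .get() method, which both fetch responses and plain Maps satisfy.

```javascript
// Decide how long to pause before the next request, based on the
// X-RateLimit-* headers shown above. Returns milliseconds to wait.
function throttleDelayMs(headers, nowEpochSec = Math.floor(Date.now() / 1000)) {
  const remaining = Number(headers.get("X-RateLimit-Remaining"));
  if (remaining > 0) return 0; // budget left; no wait needed
  const resetEpochSec = Number(headers.get("X-RateLimit-Reset"));
  return Math.max(0, (resetEpochSec - nowEpochSec) * 1000);
}
```

Call it after each response and sleep for the returned duration before issuing the next request.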
Authentication Best Practices
- Use project-scoped keys: Create separate API keys for each integration (Slack bot, BI sync, CI pipeline). If one key is compromised, revoke it without disrupting other integrations.
- Store keys securely: Use environment variables or a secrets manager. Never commit API keys to version control.
- Rotate regularly: Set a quarterly rotation schedule. The API supports multiple active keys per project to enable zero-downtime rotation.
- Verify webhook signatures: Every webhook payload includes an X-Foglift-Signature header. Validate it to confirm the request came from Foglift.
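Here is a verification sketch using Node's crypto module. The X-Foglift-Signature header name comes from the list above, but the HMAC-SHA256-over-raw-body scheme is an assumption to check against the webhook documentation; many webhook providers use a variant of it.

```javascript
import crypto from "node:crypto";

// Recompute the HMAC of the raw request body with your webhook secret
// and compare it to the X-Foglift-Signature header in constant time.
function verifyFogliftSignature(rawBody, signatureHeader, secret) {
  const expected = crypto
    .createHmac("sha256", secret)
    .update(rawBody)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader ?? "");
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

Note that the check must run against the raw body bytes, so capture them before any JSON body parser rewrites the request.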
Error Handling
The API uses standard HTTP status codes. All error responses include a machine-readable code field and a human-readable message:
// 429 Too Many Requests
{
"error": {
"code": "rate_limit_exceeded",
"message": "Rate limit exceeded. Retry after 23 seconds.",
"retry_after": 23
}
}
// Recommended: exponential backoff with jitter
async function apiCallWithRetry(url, options, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
const res = await fetch(url, options);
if (res.status === 429) {
const retryAfter = Number(res.headers.get("Retry-After")) || 2 ** i; // header is seconds; fall back to exponential backoff
const jitter = Math.random() * 1000;
await new Promise((r) => setTimeout(r, retryAfter * 1000 + jitter));
continue;
}
return res;
}
throw new Error("Max retries exceeded");
}

Pagination
List endpoints use cursor-based pagination. Each response includes a next_cursor field. Pass it as a query parameter to fetch the next page. This approach is stable even when new data is added between requests.
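In application code, the cursor loop looks like the sketch below. fetchFn is injectable for testing, and the data array field on list responses is an assumption (the docs above only confirm next_cursor).

```javascript
// Fetch every page from a cursor-paginated Foglift list endpoint.
// Assumes each page has a `data` array and a `next_cursor` field
// (null on the last page); fetchFn is injectable for tests.
async function fetchAllQueries(apiKey, fetchFn = fetch) {
  const items = [];
  let cursor = null;
  do {
    const params = new URLSearchParams({ limit: "50" });
    if (cursor) params.set("cursor", cursor);
    const res = await fetchFn(`https://foglift.io/api/v1/geo/queries?${params}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const page = await res.json();
    items.push(...page.data);
    cursor = page.next_cursor;
  } while (cursor);
  return items;
}
```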
# First page
curl "https://foglift.io/api/v1/geo/queries?limit=50" \
-H "Authorization: Bearer fl_live_your_api_key"
# Next page (use next_cursor from previous response)
curl "https://foglift.io/api/v1/geo/queries?limit=50&cursor=eyJpZCI6MTIzfQ" \
-H "Authorization: Bearer fl_live_your_api_key"

Frequently Asked Questions
Can I pull AI search visibility data into my existing BI dashboard?
Yes. The REST API returns all audit results, AI Readiness Scores, and competitive data as JSON. Connect it to Looker, Metabase, Tableau, Power BI, or any tool that supports REST data sources. Schedule nightly pulls or use webhooks for real-time updates. The API uses standard pagination and filtering, so integration follows the same patterns you use for any other data source.
How do I set up real-time alerts when my AI visibility drops?
Register a webhook endpoint, define alert rules (citation rate drops below a threshold or a competitor gains significant share of voice), and Foglift sends a POST request with the full event payload. Route these to Slack, PagerDuty, Microsoft Teams, or any HTTP endpoint. See the webhook documentation for setup instructions.
Does Foglift have a CLI for developer workflows?
Yes. Install the CLI via npm install -g foglift-scan and run AEO scans, AI visibility checks, and prompt management directly from the terminal. Authenticate by exporting FOGLIFT_API_KEY. The --threshold=N flag returns exit code 1 when the overall score drops below N, which makes it easy to gate CI/CD pipelines, cron jobs, and scripted workflows on AEO quality.
What are the API rate limits?
Free accounts get 10 requests per minute and 100 per day. Pro accounts get 60 per minute and 5,000 per day. Enterprise accounts get custom limits with dedicated rate allocations. All responses include X-RateLimit headers so your integration can handle throttling gracefully. See pricing for plan details.
Try the API — Free AI Brand Check
See your brand's AI search visibility in 30 seconds. Then explore the API to build custom dashboards, Slack alerts, and automated workflows with your real data.
Fundamentals: Learn about GEO (Generative Engine Optimization) and AEO (Answer Engine Optimization) — the two frameworks for optimizing your content for AI search engines.
Related reading
AI Brand Monitoring Guide
Complete guide to monitoring what AI engines say about your brand.
GEO Monitoring
Step-by-step guide to tracking brand visibility across all AI search engines.
AI Search Optimization for SaaS
How SaaS companies optimize for AI search engines and drive pipeline.
Enterprise AI Search Monitoring
How large brands monitor AI visibility at scale across teams and markets.