AI Visibility: ✓ check completed — level L1
AI Capability: ✓ check completed — level L1
L1 Basic Accessibility (5/6)

— AI crawling allowed: Major AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) are permitted to access your site.
— Page content directly readable: Main content is visible in the HTML source, not only rendered after JavaScript executes.
— Clear title and description: The page has a clear title and meta description, helping AI quickly identify the topic.
— Reasonable response time: The page responds quickly enough to avoid AI crawl failures or timeouts.
— HTTPS secured: The site uses a valid HTTPS certificate.
— Content is not gated: Core content isn't blocked by login walls, membership gates, or paywalls.
L2 Content Comprehensibility (2/6)

— Has structured data: Uses Schema.org / JSON-LD to help AI understand page content more accurately.
— Has social sharing info: Open Graph tags provide supplementary title and summary information.
— Clear canonical address: A canonical URL tells search engines and AI which version of the URL is authoritative.
— Clear heading hierarchy: The page has a clear H1 and uses H2/H3 headings to organize content logically.
— Language declared correctly: The HTML lang attribute is set, helping AI identify the page language.
— Substantial content: The page has meaningful text content, not just a few sentences of boilerplate.
L3 Discoverability (2/6)

— Provides a sitemap: An accessible XML sitemap helps AI and search engines discover your pages.
— Sitemap stays updated: The sitemap includes recent pages and isn't neglected over time.
— Clear internal linking: Key content pages are easily reachable from the homepage and main pages.
— Clean, readable URLs: Page URLs clearly reflect the content topic rather than being cryptic parameter strings.
— Provides llms.txt: A /llms.txt file proactively tells LLMs which content is most worth paying attention to.
— Consistent canonical setup: The canonical tag points to the current page's standard address, avoiding duplicate-page confusion.
L4 Trust & Authority (3/6)

— Organization info is clear: Structured data includes basic info like the company/organization name, website, and logo.
— About and contact info visible: Both users and AI can easily find your contact or about page.
— Content source is clear: Pages attribute content to an author, team, or organization.
— Publication dates are clear: Pages include publish or update dates, helping assess content freshness.
— Legal info is complete: The site has essential pages like a privacy policy and terms of service.
— Proper security configuration: Basic security response headers are set, reflecting site maintenance quality.
L5 AI-Optimized (0/6)

— Has FAQ / HowTo / Q&A structure: Page content is structured so AI can directly extract answers.
— Has breadcrumb structure: Breadcrumbs help AI understand the page's position and hierarchy within the site.
— Has review information: Products, services, or content include structured Review/Rating data.
— Supports multiple languages: Multilingual pages have clear corresponding relationships, such as hreflang tags.
— Richer structured data: Uses multiple effective Schema.org types, not just one.
— Clear content block structure: Pages contain FAQs, tables, lists, definitions, etc., making it easy for AI to extract and summarize.

AI Readiness Report


Executive Summary

The website qq.com has a foundational level of AI readiness, allowing basic discovery and access. Its key strengths are a crawlable foundation and the presence of an MCP server for agent integration. However, significant gaps in content structure, discoverability, and structured APIs severely limit its potential for AI recommendation and autonomous agent use.

AI Visibility — L1

AI systems can find the site and access its core content, but poor content structure, missing schema data, and weak internal linking make it difficult for AI to deeply understand, trust, and confidently recommend the site's information to users.

AI Capability — L1

While the site is accessible to agents and offers some structured interaction via an MCP server, it lacks a public API, comprehensive documentation, and standardized agent interfaces, preventing reliable programmatic use and automation.

With both tracks at Level 1 of 5, the site is missing out on being surfaced as a trusted source in AI search results and cannot be effectively integrated into automated workflows or agent-driven services.

Top Issues

CRITICAL: Main Content Not Visible in HTML Source (visibility · L1 · developer)

Why: AI crawlers like GPTBot often do not execute JavaScript. If core content is only rendered client-side, the AI will see an empty or incomplete page, making the site invisible.

Impact: AI systems cannot discover or understand your content, leading to zero AI-driven traffic, recommendations, or answers.

Fix: Implement Server-Side Rendering (SSR) or Static Site Generation (SSG) for key content pages. Ensure the primary text and links are present in the initial HTML response before any JavaScript runs.

HIGH: Lacks Semantic HTML Structure (capability · L1 · developer)

Why: Semantic tags (like <header>, <main>, <article>) provide explicit meaning about page sections, helping AI agents and accessibility tools parse and navigate content accurately.

Impact: AI agents struggle to identify the main content, navigation, and structure, reducing their ability to interact with or summarize your site effectively.

Fix: Refactor page templates to replace generic <div> containers with appropriate semantic HTML5 elements. Structure should include <header>, <nav>, <main>, <article>/<section>, and <footer>.
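A minimal template sketch of the structure described above. The element names come from the fix itself; the lang value and all page text are placeholder assumptions, not values taken from the audited site:

```html
<!DOCTYPE html>
<html lang="zh-CN"> <!-- lang value is an assumption; set it to the page's actual language -->
<head>
  <meta charset="utf-8">
  <title>Page title</title>
</head>
<body>
  <header>
    <nav><!-- site-wide navigation links --></nav>
  </header>
  <main>
    <article>
      <h1>Article headline</h1>
      <section><!-- body copy --></section>
    </article>
  </main>
  <footer><!-- contact, legal, and organization info --></footer>
</body>
</html>
```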

HIGH: Missing Structured Data for AI Understanding (capability · L1 · developer)

Why: JSON-LD structured data is a primary way for AI systems to understand the entities, topics, and purpose of a page with high confidence.

Impact: Without structured data, AI may misinterpret your content, fail to feature it in rich answers, or not trust it as a source for factual queries.

Fix: Add JSON-LD <script type="application/ld+json"> blocks to page templates. Start with core types: 'WebSite' for the site itself and 'Organization' for your company details.
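A sketch of such a block combining the two core types in one @graph. All names and URLs below are placeholders, not the audited site's real details:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebSite",
      "name": "Example Site",
      "url": "https://www.example.com/"
    },
    {
      "@type": "Organization",
      "name": "Example Co.",
      "url": "https://www.example.com/",
      "logo": "https://www.example.com/logo.png"
    }
  ]
}
</script>
```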

HIGH: No Schema.org Markup for Content Clarity (visibility · L2 · developer)

Why: Schema.org markup explicitly labels content types (e.g., Article, Product) and their properties, making it far easier for AI to parse and trust page information.

Impact: Content is less likely to be accurately extracted and cited by AI assistants, missing opportunities for featured snippets and authoritative answers.

Fix: Implement JSON-LD structured data on key content pages. For a news or blog site, use 'Article' schema with headline, author, and datePublished fields.
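An 'Article' block with the fields named above might look like this; the headline, author, date, and URL are illustrative placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Person", "name": "Author Name" },
  "datePublished": "2024-01-01",
  "mainEntityOfPage": "https://www.example.com/example-article"
}
</script>
```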

CRITICAL: Missing or Restrictive robots.txt File (capability · L2 · devops)

Why: The robots.txt file is the first thing AI crawlers check. If missing or blocking major AI crawlers, your site will not be indexed.

Impact: AI systems like ChatGPT and Perplexity may be explicitly blocked from crawling your site, preventing any AI visibility.

Fix: Ensure a /robots.txt file exists and is accessible. Add explicit allowances for AI crawlers: 'User-agent: GPTBot', 'User-agent: ClaudeBot', 'User-agent: PerplexityBot', 'User-agent: Google-Extended' with 'Allow: /' directives.
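Put together, the directives above look like this (the Sitemap URL is a placeholder assumption):

```text
# Allow major AI crawlers (user-agent names from the fix above)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```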

Quick Wins

Missing or Restrictive robots.txt File — Ensure a /robots.txt file exists and is accessible. Add explicit allowances for AI crawlers: 'User-agent: GPTBot', 'User-agent: ClaudeBot', 'User-agent: PerplexityBot', 'User-agent: Google-Extended' with 'Allow: /' directives. (devops)
No Organization Schema for Trust — Add a JSON-LD block for 'Organization' to your homepage and site-wide footer. Include required properties: '@type': 'Organization', 'name', 'url', and 'logo'. (developer)
Missing LLMs.txt File — Create and publish a plain-text file at /llms.txt. Format it as Markdown: start with '# Site Name', a one-line tagline in a blockquote '>', then sections such as '## Key Pages' or '## Documentation' with bulleted links to important content and brief descriptions. Do not use robots.txt syntax; see llmstxt.org for the spec. (content)
Missing Canonical URL Tags — Add a <link rel="canonical" href="..."/> tag to the <head> of every page. The href should point to the preferred, clean URL for that content. (developer)
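A sketch of an /llms.txt file following the Markdown format described in the quick wins above; every name, URL, and section here is a placeholder:

```markdown
# Example Site

> One-line description of what the site offers.

## Key Pages

- [About](https://www.example.com/about): who we are and how to reach us
- [News](https://www.example.com/news): latest articles and announcements

## Documentation

- [Help Center](https://www.example.com/help): product guides and FAQs
```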

30-Day Roadmap

Week 1: Quick Wins

— Ensure a /robots.txt file exists and is accessible. Add explicit 'Allow: /' directives for AI crawlers: User-agent: GPTBot, User-agent: ClaudeBot, User-agent: PerplexityBot, User-agent: Google-Extended.

— Create and publish a /llms.txt file. Structure it as a Markdown document with a title, description, and sections for key content types. List important URLs with brief descriptions.

— Add a JSON-LD block for 'Organization' schema to the homepage and site-wide footer. Include required properties: '@type', 'name', 'url', and 'logo'.

— Add a <link rel="canonical" href="..."/> tag to the <head> of every page, pointing to the preferred, clean URL for that content.

Capability L1 → L2, Visibility L1 → L2

Week 2: Foundation

— Refactor page templates to replace generic <div> containers with appropriate semantic HTML5 elements. Implement a structure including <header>, <nav>, <main>, <article>/<section>, and <footer>.

— Add a site-wide JSON-LD <script type="application/ld+json"> block for the 'WebSite' schema to the homepage, including properties like 'name', 'url', and 'description'.

Capability L1 → L2, Visibility L2 → L3

Weeks 3-4: Advanced

— Implement Server-Side Rendering (SSR) or Static Site Generation (SSG) for key content pages (e.g., homepage, major articles). Ensure primary text and links are present in the initial HTML response before any JavaScript runs.

— Implement JSON-LD structured data for the 'Article' schema on key content pages (e.g., news articles, blog posts). Include fields: 'headline', 'author', 'datePublished', and 'mainEntityOfPage'.

Visibility L2 → L3, Capability L2 → L3

By addressing foundational visibility and capability issues, the site can realistically achieve AI Visibility Level 3 and AI Capability Level 3 within 30 days, making core content accessible and understandable to AI crawlers and agents.

Embed badges

AI Visibility — markdown:

[![ReadyforAI](https://readyforai.dev/badge/qq.com)](https://readyforai.dev/websites/qq.com)

AI Capability — markdown:

[![ReadyforAI](https://readyforai.dev/badge/qq.com?track=capability)](https://readyforai.dev/websites/qq.com)