AI Readiness Report
Executive Summary
The website qq.com has a foundational level of AI readiness, allowing basic discovery and access. Its key strengths are a crawlable foundation and the presence of an MCP server for agent integration. However, significant gaps in content structure, discoverability, and structured APIs severely limit its potential for AI recommendation and autonomous agent use.
AI Visibility — L1
AI systems can find the site and access its core content, but poor content structure, missing schema data, and weak internal linking make it difficult for AI to deeply understand, trust, and confidently recommend the site's information to users.
AI Capability — L1
While the site is accessible to agents and offers some structured interaction via an MCP server, it lacks a public API, comprehensive documentation, and standardized agent interfaces, preventing reliable programmatic use and automation.
At Level 1 of 5 on both axes, the site is missing out on being surfaced as a trusted source in AI search results and cannot be effectively integrated into automated workflows or agent-driven services.
Top Issues
Issue: Core content rendered only client-side
Why: AI crawlers like GPTBot often do not execute JavaScript. If core content is only rendered client-side, the AI will see an empty or incomplete page, making the site effectively invisible.
Impact: AI systems cannot discover or understand your content, leading to zero AI-driven traffic, recommendations, or answers.
Fix: Implement Server-Side Rendering (SSR) or Static Site Generation (SSG) for key content pages. Ensure the primary text and links are present in the initial HTML response before any JavaScript runs.
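A quick way to verify the fix is to parse the raw HTML response (what a non-JavaScript crawler receives) and check whether the expected text is present. A minimal sketch using only the standard library; the two page strings are illustrative, not taken from qq.com:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Server-rendered page: the headline is in the initial HTML.
ssr_page = "<html><body><main><h1>Breaking news</h1><p>Full story text.</p></main></body></html>"
# Client-rendered page: only an empty mount point until JavaScript runs.
csr_page = '<html><body><div id="root"></div><script src="app.js"></script></body></html>'

print("Breaking news" in visible_text(ssr_page))  # True
print("Breaking news" in visible_text(csr_page))  # False
```

Running the same check against a real URL (fetch the page without executing scripts, then search for a known headline) tells you whether AI crawlers can see your content.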
Issue: Missing semantic HTML structure
Why: Semantic tags (like <header>, <main>, and <article>) give page sections explicit meaning, helping AI agents and accessibility tools parse and navigate content accurately.
Impact: AI agents struggle to identify the main content, navigation, and structure, reducing their ability to interact with or summarize your site effectively.
Fix: Refactor page templates to replace generic <div> containers with appropriate semantic HTML5 elements. Structure should include <header>, <nav>, <main>, <article>/<section>, and <footer>.
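The target structure for a typical article page might look like the following skeleton (element choice follows the fix above; the content placeholders are illustrative):

```html
<body>
  <header>
    <nav><!-- primary site navigation --></nav>
  </header>
  <main>
    <article>
      <h1>Article headline</h1>
      <section><!-- article body --></section>
    </article>
  </main>
  <footer><!-- site-wide footer --></footer>
</body>
```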
Issue: No site-level structured data
Why: JSON-LD structured data is a primary way for AI systems to understand the entities, topics, and purpose of a page with high confidence.
Impact: Without structured data, AI may misinterpret your content, fail to feature it in rich answers, or not trust it as a source for factual queries.
Fix: Add JSON-LD <script type="application/ld+json"> blocks to page templates. Start with core types: 'WebSite' for the site itself and 'Organization' for your company details.
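A minimal 'WebSite' block, placed in the <head> of the homepage, might look like this (all values are placeholders to be replaced with the site's real details):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "Example Site",
  "url": "https://www.example.com/",
  "description": "Short description of what the site offers."
}
</script>
```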
Issue: No content-type markup on key pages
Why: Schema.org markup explicitly labels content types (e.g., Article, Product) and their properties, making it far easier for AI to parse and trust page information.
Impact: Content is less likely to be accurately extracted and cited by AI assistants, missing opportunities for featured snippets and authoritative answers.
Fix: Implement JSON-LD structured data on key content pages. For a news or blog site, use 'Article' schema with headline, author, and datePublished fields.
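An 'Article' block with the fields named above might look like this (headline, author, date, and URL are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article headline",
  "author": { "@type": "Person", "name": "Author Name" },
  "datePublished": "2024-01-01",
  "mainEntityOfPage": "https://www.example.com/news/example-article"
}
</script>
```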
Issue: robots.txt missing or blocking AI crawlers
Why: The robots.txt file is the first thing AI crawlers check. If the file is missing or blocks major AI crawlers, your site will not be indexed by them.
Impact: AI systems like ChatGPT and Perplexity may be explicitly blocked from crawling your site, preventing any AI visibility.
Fix: Ensure a /robots.txt file exists and is accessible. Add explicit allowances for AI crawlers: 'User-agent: GPTBot', 'User-agent: ClaudeBot', 'User-agent: PerplexityBot', 'User-agent: Google-Extended' with 'Allow: /' directives.
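The resulting /robots.txt could contain entries like these (one User-agent/Allow pair per crawler, as listed in the fix above):

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```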
30-Day Roadmap
Week 1: Quick Wins
— Ensure a /robots.txt file exists and is accessible. Add explicit 'Allow: /' directives for AI crawlers: User-agent: GPTBot, User-agent: ClaudeBot, User-agent: PerplexityBot, User-agent: Google-Extended.
— Create and publish a /llms.txt file. Structure it as a Markdown document with a title, description, and sections for key content types. List important URLs with brief descriptions.
— Add a JSON-LD block for 'Organization' schema to the homepage and site-wide footer. Include required properties: '@type', 'name', 'url', and 'logo'.
— Add a <link rel="canonical" href="..."/> tag to the <head> of every page, pointing to the preferred, clean URL for that content.
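Taken together, the canonical-tag and Organization-schema items above produce a <head> fragment like this (the URLs and names are placeholders):

```html
<head>
  <link rel="canonical" href="https://www.example.com/current-page/" />
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Company",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png"
  }
  </script>
</head>
```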
Visibility L1 → L2, Capability L1 → L2
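The /llms.txt file from the Week 1 list above is a plain Markdown document at the site root; llms.txt is an emerging convention, and a minimal version (titles and URLs are illustrative) might look like:

```markdown
# Example Site

> One-sentence description of what the site offers.

## Key Pages

- [News](https://www.example.com/news/): Latest articles and coverage
- [About](https://www.example.com/about/): Company background and contact details
```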
Week 2: Foundation
— Refactor page templates to replace generic <div> containers with appropriate semantic HTML5 elements. Implement a structure including <header>, <nav>, <main>, <article>/<section>, and <footer>.
— Add a site-wide JSON-LD <script type="application/ld+json"> block for the 'WebSite' schema to the homepage, including properties like 'name', 'url', and 'description'.
Visibility L2 → L3, Capability holds at L2
Weeks 3-4: Advanced
— Implement Server-Side Rendering (SSR) or Static Site Generation (SSG) for key content pages (e.g., homepage, major articles). Ensure primary text and links are present in the initial HTML response before any JavaScript runs.
— Implement JSON-LD structured data for the 'Article' schema on key content pages (e.g., news articles, blog posts). Include fields: 'headline', 'author', 'datePublished', and 'mainEntityOfPage'.
Capability L2 → L3, Visibility holds at L3
By addressing foundational visibility and capability issues, the site can realistically achieve AI Visibility Level 3 and AI Capability Level 3 within 30 days, making core content accessible and understandable to AI crawlers and agents.
Embed Badges
AI Visibility — markdown:
[AI Visibility](https://readyforai.dev/websites/qq.com)
AI Capability — markdown:
[AI Capability](https://readyforai.dev/websites/qq.com)