Levels are cumulative — you must pass L1 before reaching L2, L2 before L3, and so on.
AI Readiness Report
Executive Summary
OpenCode.ai has a solid foundation for AI discoverability but lacks the structured data and APIs needed for advanced AI interaction. The site is generally accessible to AI crawlers and has good basic content structure, but it misses key trust signals and programmatic capabilities. This limits its potential for AI-driven recommendations and automated agent use.
AI Visibility — L3
AI systems can reliably find and crawl the site, thanks to a permissive robots.txt, fast loading, and a clear sitemap. However, the lack of structured data (Schema.org, Open Graph) and of trust elements like author attribution and publish dates reduces its authority and the likelihood that AI assistants will recommend it.
AI Capability — L2
The site is readable by AI agents and provides good documentation, but it lacks any programmatic API for structured interaction. Without a defined REST/GraphQL API, authentication methods, or agent integration protocols, AI agents cannot perform actions or retrieve data beyond basic content scraping.
A Visibility score of 3/5 means AI can find the site but may not fully understand or confidently recommend it. A Capability score of 2/5 means AI agents can read content but cannot programmatically interact with services, missing opportunities for automation and integration.
Top Issues
Issue 1: Missing page titles and meta descriptions
Why: This is the foundational level of AI visibility. Without a clear title and meta description, AI crawlers cannot quickly identify the page's topic and purpose, making it less likely to be indexed or recommended.
Impact: AI systems may ignore or misrepresent your site, drastically reducing organic discovery and traffic from AI-powered search and chat.
Fix: For each page, ensure the HTML <title> tag is unique and descriptive (e.g., 'OpenCode AI - AI-Powered Code Analysis'). Add a <meta name="description"> tag with a concise, keyword-rich summary of the page content.
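In the page <head>, that fix might look like the following sketch; the title reuses the example above, and the description is a placeholder to be written per page:

```html
<head>
  <title>OpenCode AI - AI-Powered Code Analysis</title>
  <!-- Placeholder copy; write a unique, keyword-rich summary for each page. -->
  <meta name="description" content="Concise summary of what this page covers.">
</head>
```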
Issue 2: No structured data for core entities
Why: Structured data is the primary way to make your content's meaning explicit to AI systems. Without it, AI must guess at entities and relationships, leading to inaccurate understanding and poor extraction.
Impact: AI agents cannot reliably understand or use your site's core offerings (e.g., products, services, articles), blocking basic AI-driven interactions and integrations.
Fix: Add JSON-LD <script type="application/ld+json"> blocks to key pages. Start with a 'WebSite' schema for the homepage and an 'Organization' schema with your company name, URL, and logo. For content pages, use appropriate types like 'Article' or 'SoftwareApplication'.
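A minimal homepage block combining both types could look like this sketch; the logo URL is an assumption, not a known asset on opencode.ai:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebSite",
      "name": "OpenCode AI",
      "url": "https://opencode.ai"
    },
    {
      "@type": "Organization",
      "name": "OpenCode AI",
      "url": "https://opencode.ai",
      "logo": "https://opencode.ai/logo.png"
    }
  ]
}
</script>
```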
Issue 3: No public API for programmatic access
Why: AI agents and other automated systems interact with websites via APIs. Without a well-defined REST or GraphQL API, agents cannot programmatically retrieve data or perform actions, limiting the site to human-only use.
Impact: Prevents any form of AI automation, integration, or agent-based workflow, locking you out of the emerging ecosystem of AI-powered tools and services.
Fix: Design and implement a public API. Start by identifying core data entities (e.g., projects, documentation) and create GET endpoints that return structured JSON. Document the endpoints, request/response formats, and authentication (if any).
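As a sketch of what such an endpoint could return, the snippet below builds the JSON body for a hypothetical `GET /api/v1/projects` route; the path, entity fields, and sample data are all assumptions, not an existing OpenCode.ai API:

```python
import json

# Hypothetical sample data; real entities would come from the site's backend.
PROJECTS = [
    {"id": "opencode-cli", "name": "OpenCode CLI", "url": "https://opencode.ai/docs"},
]

def list_projects() -> str:
    """Build the JSON body a hypothetical GET /api/v1/projects could return.

    Wrapping the array in a top-level object leaves room to add pagination
    and metadata fields later without breaking existing clients.
    """
    return json.dumps({"count": len(PROJECTS), "data": PROJECTS})
```

Serving a shape like this from any framework, and documenting it, gives agents a stable, machine-readable contract to build against.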
Issue 4: Missing Schema.org markup on content pages
Why: Schema.org markup is a key signal for AI to understand page content accurately. It provides a standardized vocabulary that AI systems are trained to recognize and trust.
Impact: Reduces the accuracy with which AI summarizes or cites your content, leading to missed opportunities for being featured in AI-generated answers and recommendations.
Fix: Implement JSON-LD structured data on all major content pages. Use types relevant to your content (e.g., 'Article' for blog posts, 'SoftwareApplication' for product pages). Include key properties like 'headline', 'description', 'author', and 'datePublished'.
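For a blog post, the markup could look like the following; every value here is a placeholder to be filled from the actual post:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example post title",
  "description": "One-sentence summary of the post.",
  "author": { "@type": "Person", "name": "Author Name" },
  "datePublished": "2025-01-15"
}
</script>
```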
Issue 5: No llms.txt file
Why: An llms.txt file proactively guides LLMs to the most important content on your site, improving the quality and relevance of how your site is represented in AI responses.
Impact: Without guidance, AI may focus on less important pages (like legal disclaimers) or miss key sections, leading to poor representation and lower user trust in AI-sourced information about your brand.
Fix: Create a plain text file at the root: `/llms.txt`. Format it in Markdown. Start with '# OpenCode AI', followed by a tagline in a blockquote '>'. Then add sections like '## Documentation', '## API Reference', '## Blog' with bulleted links to the most important pages. Do not use robots.txt syntax. See llmstxt.org for the spec.
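Following the llmstxt.org format, a starting point might look like this; the tagline, section names, and link targets are placeholders to be replaced with real pages:

```markdown
# OpenCode AI

> One-line tagline describing what OpenCode AI does.

## Documentation

- [Getting Started](https://opencode.ai/docs): overview and setup guide

## Blog

- [Latest posts](https://opencode.ai/blog): announcements and deep dives
```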
30-Day Roadmap
Week 1: Quick Wins
— Implement unique and descriptive HTML <title> tags and <meta name="description"> tags for all pages.
— Create and deploy the `/llms.txt` file at the root, formatted in Markdown with a clear site structure.
— Add the four essential Open Graph meta tags (og:title, og:description, og:image, og:url) to the <head> of each page.
— Add a JSON-LD block with an 'Organization' schema type to the homepage, including name, url, and logo.
Visibility L1 → L3, Capability L1 → L2
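The four Open Graph tags from the Week 1 list can be sketched as follows; the image path and copy are placeholders, not OpenCode.ai's actual assets:

```html
<!-- Placeholder values; use each page's real title, summary, image, and URL. -->
<meta property="og:title" content="OpenCode AI - AI-Powered Code Analysis">
<meta property="og:description" content="Concise summary of the page.">
<meta property="og:image" content="https://opencode.ai/og-image.png">
<meta property="og:url" content="https://opencode.ai/">
```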
Week 2: Foundation
— Add a 'WebSite' JSON-LD schema to the homepage, linking to the site's main properties.
— Implement JSON-LD structured data on all major content pages (e.g., blog, product) using appropriate types like 'Article' or 'SoftwareApplication' with key properties (headline, description, author, datePublished).
Visibility L3 → L4, Capability L2 → L3
Weeks 3-4: Advanced
— Design and implement a public API: identify core data entities (e.g., projects, documentation) and create GET endpoints returning structured JSON.
— Document the API endpoints, request/response formats, and any authentication requirements.
Capability L3 → L4
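If the API takes shape along these lines, a minimal OpenAPI description makes the Weeks 3-4 documentation step concrete; the endpoint and fields below are hypothetical placeholders:

```yaml
openapi: 3.1.0
info:
  title: OpenCode.ai Public API (hypothetical)
  version: 0.1.0
paths:
  /api/v1/projects:
    get:
      summary: List projects
      responses:
        "200":
          description: Object wrapping a JSON array of projects
          content:
            application/json:
              schema:
                type: object
                properties:
                  count:
                    type: integer
                  data:
                    type: array
                    items:
                      type: object
```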
After 30 days, the site's AI Visibility Level should reach 4/5 and AI Capability Level should reach 4/5, establishing a strong foundation for AI discoverability and programmatic interaction.