Category: Digital Marketing

  • Future-Proofing Your Content Strategy with llms.txt

    Search is evolving—and fast. With the rise of generative AI and large language models (LLMs), how your content is found, interpreted, and used is shifting from traditional keyword-based search engines to conversational AI platforms. In this new era, visibility isn’t just about ranking #1 on Google—it’s about being the source LLMs cite, summarize, or paraphrase in their responses. That’s where llms.txt comes in.

    What Is llms.txt?

    The llms.txt file is a new standard being proposed as a way for website owners to communicate how their content should be accessed and used by large language models like ChatGPT, Google Gemini, Claude, and others. It’s a simple text file placed at the root of your domain, similar to robots.txt, but with a focus on LLMs rather than search engine crawlers.

    For example:

    https://bradbartell.dev/llms.txt

    This file lets you:

    • Allow or disallow LLMs from training on or referencing your content
    • Specify conditions for use (like attribution or licensing terms)
    • Signal your openness to AI systems in a transparent, machine-readable way

    How Is llms.txt Different from robots.txt?

    While both llms.txt and robots.txt are used to guide automated systems, they serve different purposes:

    Feature            | robots.txt                                 | llms.txt
    Primary Audience   | Web crawlers (e.g., Googlebot, Bingbot)    | Large language models (e.g., ChatGPT, Gemini)
    Focus              | Search indexing and crawling behavior      | AI training and content usage
    Syntax             | Standard directives like Disallow, Allow   | Emerging conventions for AI content governance
    Current Adoption   | Widely implemented and recognized          | Still emerging, but gaining attention

    robots.txt tells search engine crawlers which parts of your site they may crawl. llms.txt goes a step further by addressing whether your content can be used in training datasets or real-time generative answers.

    Why It Matters for the Future of SEO and AI Search

    As AI becomes the front door to more digital experiences, how LLMs interpret and use your content will define your visibility. This includes:

    • Whether your content is cited in AI-generated summaries
    • How accurate or up-to-date AI answers are when referring to your site
    • The ability to control or monetize the use of your original content

    By proactively adding llms.txt, you demonstrate digital maturity and readiness to engage with AI systems on your terms.

    How to Implement llms.txt

    1. Create a plain text file named llms.txt.
    2. Add directives or policy notes, such as:
    User-Agent: *
    Allow: /
    Attribution: required
    Licensing: CC-BY-NC
    Contact: ai@yoursite.com
    3. Upload it to the root of your domain (e.g., https://yoursite.com/llms.txt).
    4. Monitor adoption and adjust policies as standards evolve.
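
    Once the file is uploaded, it helps to confirm it is live and parses the way you expect. Below is a minimal Python sketch that fetches /llms.txt and reads the simple key: value directives from step 2. Since llms.txt is still an emerging convention, treat the directive names as illustrative rather than a fixed spec, and yoursite.com as a placeholder.

    import urllib.request

    def fetch_llms_txt(domain: str) -> dict:
        """Fetch /llms.txt from a domain and parse simple 'Key: value' lines."""
        url = f"https://{domain}/llms.txt"
        with urllib.request.urlopen(url, timeout=10) as response:
            text = response.read().decode("utf-8")

        directives = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            if ":" in line:
                key, value = line.split(":", 1)
                directives[key.strip()] = value.strip()
        return directives

    if __name__ == "__main__":
        # Example: verify the policies published on your own domain.
        print(fetch_llms_txt("yoursite.com"))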

    Conclusion: Stay Ahead of the Curve

    The introduction of llms.txt is more than a technical tweak—it’s a strategic move. As more AI models crawl, synthesize, and present content, your site’s policies should keep pace. By embracing llms.txt, you’re not just protecting your content—you’re positioning your brand to thrive in the next wave of search and discovery.

  • How I address SEO for Existing Sites

    If your site has been around for a while but isn’t ranking as well as it should, you’re not alone. Many brands build up content, make design changes, or launch new features over time — but without a consistent SEO strategy, it’s easy for visibility to stagnate. The good news? A site with history has data — and that gives you a huge advantage.

    Here’s how I approach revitalizing SEO on an existing site using a combination of technical audits, content optimization, and ongoing strategy — with tools like Screaming Frog and SEMrush leading the way.

    Run a Full Crawl with Screaming Frog

    Screaming Frog is my go-to tool to uncover the technical health of a website. I use it to crawl the entire site and surface:

    • Missing or duplicate title tags and meta descriptions
    • Broken internal or outbound links
    • Incorrect canonical tags or redirect chains
    • Pages with low word count or thin content
    • Improper use of H1s and heading structures
    • Orphan pages that aren’t linked to internally
    • Image issues like missing alt text or large file sizes

    This crawl gives a full picture of what’s going on under the hood. From here, I build a prioritized fix list — starting with technical blockers that prevent pages from being indexed or crawled properly.
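
    As a concrete example of turning that crawl into a fix list, the Python sketch below filters a Screaming Frog export with pandas. It assumes you exported the Internal report to a CSV (here called internal_html.csv) and that the column names match a typical export; adjust them to whatever your version of the tool produces.

    import pandas as pd

    # Load a Screaming Frog "Internal" export. Column names below are assumptions
    # based on a typical export; rename them to match your file.
    crawl = pd.read_csv("internal_html.csv")

    issues = {
        "non_200": crawl[crawl["Status Code"] != 200],
        "missing_titles": crawl[crawl["Title 1"].isna()],
        "duplicate_titles": crawl[crawl.duplicated("Title 1", keep=False) & crawl["Title 1"].notna()],
        "missing_meta_desc": crawl[crawl["Meta Description 1"].isna()],
        "thin_content": crawl[crawl["Word Count"] < 300],
    }

    # Print a prioritized summary: technical blockers first, then content gaps.
    for name in ["non_200", "missing_titles", "duplicate_titles", "missing_meta_desc", "thin_content"]:
        print(f"{name}: {len(issues[name])} pages")
        print(issues[name]["Address"].head(10).to_string(index=False))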

    Audit Keyword Performance with SEMrush

    SEMrush is where the strategy gets sharp. It helps me understand how the site is currently performing in search — and more importantly, where the missed opportunities are. I use it to:

    • Identify keywords where the site is ranking on page 2 or 3
    • Find high-volume queries where the site has impressions but low click-through
    • Discover new long-tail keywords that align with existing content
    • Analyze competitors to see which terms they’re winning that we aren’t
    • Review backlink profiles and identify toxic links that might need disavowing

    From this data, I create a content plan: which pages need refreshed content, which keywords need stronger internal linking, and what new pages should be created.
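
    For the page 2 and 3 opportunities specifically, I often just filter an exported positions report. A minimal Python sketch, assuming a SEMrush organic positions CSV with Keyword, Position, Search Volume, and URL columns (headers vary by export, so treat them and the volume threshold as placeholders):

    import pandas as pd

    positions = pd.read_csv("semrush_organic_positions.csv")

    # Keywords ranking in positions 11-30 with meaningful volume are usually
    # the fastest wins for refreshed content and stronger internal links.
    striking_distance = positions[
        positions["Position"].between(11, 30) & (positions["Search Volume"] >= 100)
    ].sort_values("Search Volume", ascending=False)

    print(striking_distance[["Keyword", "Position", "Search Volume", "URL"]].head(20))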

    Optimize Existing Content for Quick Wins

    Before launching anything new, I look for quick wins in the existing content. These are typically:

    • Pages ranking in positions 5–20 for target terms
    • Blog posts with outdated information
    • Product or service pages with weak CTAs or vague copy
    • Pages with solid traffic but poor engagement (high bounce, low time on page)

    I improve on-page SEO by adjusting headlines, tightening content to align with search intent, improving meta tags, and adding internal links to and from high-priority pages.

    Address Technical SEO Gaps

    After content, it’s back to the code. I revisit the Screaming Frog data and combine it with insights from Google Search Console to:

    • Fix crawl errors and reduce redirect chains
    • Optimize sitemap and robots.txt files
    • Improve page speed and Core Web Vitals using Lighthouse
    • Add or improve structured data (Product, Article, FAQ, etc.)
    • Ensure canonical and hreflang tags are set properly

    Search engines favor sites that are technically sound. Cleaning this up gives your content a much stronger chance to rise in rankings.
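
    For the redirect-chain item in particular, a short script makes the cleanup list obvious. The Python sketch below follows each hop manually with the requests library and flags URLs that take more than one redirect to resolve; the URLs at the bottom are placeholders for whatever your crawl surfaced.

    import requests
    from urllib.parse import urljoin

    def redirect_chain(url: str, max_hops: int = 10) -> list[str]:
        """Follow redirects one hop at a time and return the full chain of URLs."""
        chain = [url]
        for _ in range(max_hops):
            response = requests.get(chain[-1], allow_redirects=False, timeout=10)
            if response.status_code in (301, 302, 307, 308):
                # Location may be relative, so resolve it against the current URL.
                chain.append(urljoin(chain[-1], response.headers["Location"]))
            else:
                break
        return chain

    # Placeholder URLs -- swap in the redirecting URLs from your crawl.
    for url in ["https://yoursite.com/old-page", "https://yoursite.com/blog/legacy-post"]:
        chain = redirect_chain(url)
        if len(chain) > 2:  # more than one hop between the original and final URL
            print(" -> ".join(chain))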

    Create Supporting Content Around Priority Terms

    Once the foundation is solid, it’s time to build momentum. I use SEMrush to identify related queries, questions, and subtopics around key themes. Then I create supporting content — blog posts, FAQ pages, resource hubs — that:

    • Strengthen topical authority
    • Increase internal linking opportunities
    • Capture additional long-tail keywords
    • Drive users deeper into the site experience

    This “hub and spoke” model reinforces relevance and builds a strong SEO network around high-converting pages.

    Monitor, Adjust, and Repeat

    SEO isn’t one-and-done. After implementing changes, I use Google Search Console, SEMrush, and analytics tools to monitor:

    • Changes in ranking and click-through
    • Traffic patterns to key landing pages
    • Engagement metrics like bounce rate and time on page
    • Site health and crawlability over time

    From here, I keep iterating — updating older content, targeting new terms, and keeping the technical side clean as the site evolves.
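
    One lightweight way to see whether a round of changes actually moved the needle is to diff two Google Search Console performance exports taken before and after the work. A Python sketch, assuming two Queries CSV exports that both contain Query, Position, and CTR columns (exact headers depend on how you export):

    import pandas as pd

    before = pd.read_csv("gsc_queries_before.csv")
    after = pd.read_csv("gsc_queries_after.csv")

    merged = before.merge(after, on="Query", suffixes=("_before", "_after"))
    # Positive values mean the query moved up, since a lower position number is better.
    merged["position_change"] = merged["Position_before"] - merged["Position_after"]

    movers = merged.sort_values("position_change", ascending=False)
    print(movers[["Query", "Position_before", "Position_after", "CTR_before", "CTR_after"]].head(20))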

  • How I Approach Search Engine Optimization

    Search Engine Optimization (SEO) isn’t just about showing up in search results — it’s about being understood. Modern SEO is built into the code itself, starting with how content is structured, how pages are marked up, and how a site performs across devices. One of the most important aspects is making your content not only easy for humans to read, but also optimized for search engine crawlers.

    Start with Solid Meta Data

    The fundamentals matter. Every page should have clean, well-structured meta data to help search engines understand its content. I make sure to:

    • Set canonical tags to avoid duplicate content issues and ensure search engines index the right version of a page.
    • Add alternate hreflang tags for multilingual sites to help direct users to the correct language or regional version.
    • Write concise and clear title and meta description tags that reflect the page’s value to the user and improve click-through rates from search results.

    And no, I don’t focus on stuffing keyword meta tags — search engines haven’t used them in years. Instead, I focus on writing useful, well-structured content that aligns with real user intent.
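
    To keep those fundamentals honest across a site, I like a quick audit script. The Python sketch below pulls a page and reports its title, canonical, hreflang, and meta description using requests and BeautifulSoup; it is a starting point rather than a full validator, and yoursite.com is a placeholder.

    import requests
    from bs4 import BeautifulSoup

    def audit_head(url: str) -> dict:
        """Report the core meta data signals for a single page."""
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

        canonical = soup.find("link", rel="canonical")
        hreflangs = soup.find_all("link", rel="alternate", hreflang=True)
        description = soup.find("meta", attrs={"name": "description"})

        return {
            "title": soup.title.string.strip() if soup.title and soup.title.string else None,
            "canonical": canonical.get("href") if canonical else None,
            "hreflang": {link["hreflang"]: link.get("href") for link in hreflangs},
            "meta_description": description.get("content") if description else None,
        }

    print(audit_head("https://yoursite.com/"))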

    Enhance Discoverability with Structured Data

    To help search engines go beyond just reading — to actually understanding — I add structured data using JSON-LD. This semantic markup allows content to appear in rich results like:

    • Product listings with pricing and availability
    • Product ratings that can appear as star ratings in search results
    • Articles with publish dates and authors
    • FAQs, breadcrumbs, and even local business info

    Structured data improves visibility in Google’s search features and helps expose content to the right audiences. It’s one of the best ways to speak directly to search engine robots and clarify what your content is about.
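
    As an illustration, here is how a Product block with a rating might be assembled and embedded as JSON-LD. A minimal Python sketch built on the schema.org vocabulary; the product values are placeholders that would normally come from your catalog or CMS.

    import json

    # Placeholder product data -- in practice this comes from your catalog/CMS.
    product_jsonld = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Example Widget",
        "description": "A sturdy example widget for demonstration purposes.",
        "image": "https://yoursite.com/images/example-widget.jpg",
        "offers": {
            "@type": "Offer",
            "price": "19.99",
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.6",
            "reviewCount": "87",
        },
    }

    # Drop the output into the page head (or body) as a JSON-LD script tag.
    script_tag = f'<script type="application/ld+json">{json.dumps(product_jsonld, indent=2)}</script>'
    print(script_tag)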

    Optimize the Share Experience with Open Graph Tags

    Sharing isn’t just about social reach — it’s also a signal of relevance and trust. I implement Open Graph and Twitter Card meta tags so that links shared on platforms like Facebook and Twitter look great and provide value at a glance. This includes:

    • Customizing preview images
    • Writing optimized share titles and descriptions
    • Ensuring Twitter cards render correctly

    When users share your page, it should look polished, professional, and enticing — because a shared link that drives traffic is still a win.
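
    A small helper can keep those share tags consistent across templates. A Python sketch using a hypothetical build_share_tags helper (not a specific library) that emits Open Graph and Twitter Card meta tags; the example values at the bottom are placeholders.

    from html import escape

    def build_share_tags(title: str, description: str, image_url: str, page_url: str) -> str:
        """Return Open Graph and Twitter Card meta tags for a page (hypothetical helper)."""
        tags = {
            "og:title": title,
            "og:description": description,
            "og:image": image_url,
            "og:url": page_url,
            "og:type": "website",
            "twitter:card": "summary_large_image",
            "twitter:title": title,
            "twitter:description": description,
            "twitter:image": image_url,
        }
        lines = []
        for key, value in tags.items():
            # Open Graph tags conventionally use property=, Twitter Card tags use name=.
            attr = "property" if key.startswith("og:") else "name"
            lines.append(f'<meta {attr}="{key}" content="{escape(value)}">')
        return "\n".join(lines)

    print(build_share_tags(
        "Future-Proofing Your Content Strategy",
        "How llms.txt fits into modern SEO.",
        "https://yoursite.com/images/share-card.jpg",
        "https://yoursite.com/blog/llms-txt",
    ))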

    Analyze Core Web Vitals & Lighthouse Scores

    Search engines reward good user experience, and that means your site needs to perform. I use Lighthouse to regularly audit pages for:

    • Largest Contentful Paint (LCP)
    • Cumulative Layout Shift (CLS)
    • Interaction to Next Paint (INP), which replaced First Input Delay (FID) as the responsiveness metric

    From there, I dig into the code to make improvements — whether that’s optimizing images, reducing JavaScript, deferring unused assets, or cleaning up render-blocking resources.

    A fast, smooth site isn’t just better for SEO. It’s better for users, and that’s what search engines want to see.
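
    When I want those numbers outside of DevTools, I pull them programmatically. A Python sketch against the public PageSpeed Insights v5 endpoint, which runs Lighthouse on demand; the audit IDs and response fields reflect typical responses and should be verified against the live payload. Note that INP is a field metric, so the lab run reports Total Blocking Time as its closest proxy.

    import requests

    PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

    def core_web_vitals(url: str) -> dict:
        """Pull lab performance metrics for a URL from the PageSpeed Insights API."""
        response = requests.get(
            PSI_ENDPOINT, params={"url": url, "category": "performance"}, timeout=60
        )
        response.raise_for_status()
        audits = response.json()["lighthouseResult"]["audits"]

        # Audit IDs assumed from typical Lighthouse output; confirm against the live response.
        return {
            "LCP": audits["largest-contentful-paint"]["displayValue"],
            "CLS": audits["cumulative-layout-shift"]["displayValue"],
            "TBT": audits["total-blocking-time"]["displayValue"],  # lab proxy for responsiveness
        }

    print(core_web_vitals("https://yoursite.com/"))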

    Fine-Tune Based on Google Search Insights

    Google Search Console is one of the most underrated tools in an SEO toolkit. I regularly review performance reports to:

    • Identify search terms where pages are ranking on the second or third page
    • Fine-tune content, headings, or internal links to push those terms toward page one
    • Spot content gaps or underperforming pages that could be reworked or expanded

    This data-driven iteration ensures ongoing optimization beyond the initial launch.
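
    That review can be scripted against a Search Console performance export as well. A Python sketch, assuming a Queries CSV with Query, Position, Impressions, and CTR columns and assuming CTR is exported as a percentage string like "1.2%"; the thresholds are just a starting point.

    import pandas as pd

    queries = pd.read_csv("gsc_performance_queries.csv")
    # Assumes CTR is exported as a percentage string; convert it to a float for filtering.
    queries["CTR"] = queries["CTR"].str.rstrip("%").astype(float)

    # Bucket 1: terms on page 2-3 that refreshed copy and internal links can push up.
    striking_distance = queries[queries["Position"].between(11, 30)]

    # Bucket 2: terms already on page 1 whose titles and descriptions are not earning the click.
    weak_snippets = queries[(queries["Position"] <= 10) & (queries["CTR"] < 2.0)]

    for label, bucket in [("Striking distance", striking_distance), ("Weak snippets", weak_snippets)]:
        print(label)
        print(
            bucket.sort_values("Impressions", ascending=False)
            [["Query", "Position", "Impressions", "CTR"]]
            .head(10)
        )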

    TLDR

    Good SEO is about more than just keywords and links. It’s about creating a site that is valuable, discoverable, fast, and shareable — all built on a solid technical foundation. My approach combines technical SEO best practices, thoughtful UX, and real user data to help sites perform better today and stay competitive tomorrow.