Author: Brad Bartell

  • The True Cost of LLMs — and How to Build Smarter with Ollama + Supabase

    Over the past few years, the cost of training large language models (LLMs) has skyrocketed. Models like GPT-4 are estimated to cost $20M–$100M+ just to train once, with projections of $1B per run by 2027. Even “smaller” foundation models like GPT-3 required roughly $4.6M in compute.

    That’s out of reach for nearly every company. But the good news? You don’t need to train a new LLM from scratch to harness AI in your business. Instead, you can run existing models locally and pair them with a vector database to bring in your company’s knowledge.

    This approach — Retrieval Augmented Generation (RAG) — is how many startups and internal tools are building practical, affordable AI systems today.


    Training vs. Using LLMs

    • Training from scratch
      • Requires thousands of GPUs, months of compute, and millions of dollars.
      • Only feasible for major labs (OpenAI, Anthropic, DeepMind, etc.).
    • Running + fine-tuning existing models
      • Can be done on commodity cloud servers — or even a laptop.
      • Cost can drop from millions to just hundreds or thousands of dollars.

    The trick: instead of teaching a model everything, let it “look things up” in your own database of knowledge.


    Ollama: Running LLMs Locally

    Ollama makes it easy to run open-source LLMs on your own hardware.

    • It supports models like LLaMA, Mistral, and Gemma.
    • You can run it on a laptop (Mac, Windows, or Linux) or in a Docker container. I like to run it in Docker on my machine; it’s the easiest way to control costs while building and testing.
    • It exposes a simple API, so developers can wire it into their own applications.

    Instead of paying per token to OpenAI or Anthropic, you run the models yourself, with predictable costs.

    Bash
    # Example: pull and run LLaMA 3.2 with Ollama
    ollama pull llama3.2
    ollama run llama3.2
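
    Once the model is running, your application can call it over Ollama’s local HTTP API (it listens on port 11434 by default). Here’s a minimal sketch in JavaScript; the prompt and model name are just examples:

    JavaScript
    // Ask a locally running Ollama model a question (assumes llama3.2 has been pulled)
    async function askLocalModel(prompt) {
      const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ model: 'llama3.2', prompt, stream: false }),
      });
      const data = await res.json();
      return data.response; // the generated text
    }

    askLocalModel('Summarize our refund policy in one sentence.')
      .then(console.log)
      .catch(console.error);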

    Supabase: Your Vector Database

    When you add RAG into the mix, you need somewhere to store embeddings of your documents. That’s where Supabase comes in:

    • Supabase is a Postgres-based platform with the pgvector extension built in.
    • You can store text embeddings (numerical representations of text meaning).
    • With SQL or RPC calls, you can run similarity searches (<->) to fetch the most relevant chunks of data.

    For example, embedding your FAQs:

    SQL
    -- Enable pgvector and create a table for document embeddings
    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE documents (
      id bigserial PRIMARY KEY,
      content text,
      embedding vector(1536)
    );

    -- Search for the most relevant documents
    -- ($1 is the embedding of the user's query, supplied by your application)
    SELECT content
    FROM documents
    ORDER BY embedding <-> $1
    LIMIT 5;

    This gives your LLM the ability to retrieve your data before generating answers.

    RAG in Action: The Flow

    1. User asks a question → “What’s our refund policy?”
    2. System embeds the query using nomic-embed-text or OpenAI embeddings.
    3. Supabase vector search finds the closest matching policy docs.
    4. Ollama LLM uses both the question + retrieved context to generate a grounded answer.

    Result: Instead of the model hallucinating, it answers confidently with your company’s real data.
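
    In code, the whole flow fits in a short script. Here’s a hedged sketch in JavaScript, assuming a Supabase RPC function named match_documents that wraps the similarity query above (the function name, column names, and prompt are illustrative, not a finished implementation):

    JavaScript
    import { createClient } from '@supabase/supabase-js';

    const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

    // 1. Embed the user's question with a local embedding model via Ollama
    async function embed(text) {
      const res = await fetch('http://localhost:11434/api/embeddings', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }),
      });
      return (await res.json()).embedding;
    }

    async function answer(question) {
      // 2. Vector search in Supabase for the closest matching chunks
      const { data: docs, error } = await supabase.rpc('match_documents', {
        query_embedding: await embed(question),
        match_count: 5,
      });
      if (error) throw error;

      // 3. Generate a grounded answer using the retrieved context
      const context = docs.map((d) => d.content).join('\n---\n');
      const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          model: 'llama3.2',
          prompt: `Answer using only this context:\n${context}\n\nQuestion: ${question}`,
          stream: false,
        }),
      });
      return (await res.json()).response;
    }

    answer("What's our refund policy?").then(console.log);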

    Cost Reality Check

    • Training GPT-4: $50M+
    • Running Ollama with a 7B–13B parameter model: a few hundred dollars per month in compute (or free if local).
    • Using Supabase for vector search: low monthly costs, scales with usage.

    For most businesses, this approach is 95% cheaper and far faster to implement.

    Final Thoughts

    Building your own GPT-4 is impossible for most organizations. But by combining:

    • Ollama (local LLM runtime)
    • Supabase + pgvector (semantic search layer)
    • RAG pipelines

    …you can get the power of custom AI at a fraction of the cost.

    The future isn’t about every company training billion-dollar models — it’s about smart teams leveraging open-source LLMs and vector databases to make AI truly useful inside their workflows.

    Interested in this for your company? Feel free to reach out on LinkedIn and I’ll use my experience doing this for Modere and one of my freelance clients to build one for you.

  • How Social Media Companies Can Use AI Without Losing Human Control

    AI is changing the way businesses work — and social media is no exception. Agencies are under constant pressure to deliver content faster, track trends in real time, and respond to audiences across multiple platforms. But in all the hype around automation, one principle often gets lost: AI should never replace human creativity and judgment.

    Instead, think of AI as a digital assistant that handles the repetitive, data-heavy tasks and gives you a head start on creative thinking. Humans remain in control, making the final decisions, ensuring brand safety, and applying the nuance that no algorithm can capture.

    Here are three practical ways social media companies can use AI to enhance their work — while always keeping people in the driver’s seat.


    1. Creative Campaign Ideation

    The challenge:

    Brainstorming campaign ideas is a cornerstone of social media marketing, but it can be time-consuming. Teams can spend hours trying to crack the “big idea,” only to end up circling the same concepts.

    How AI helps:

    AI can dramatically speed up the ideation process by:

    • Generating dozens of campaign angles from a single prompt.
    • Suggesting different creative formats (short-form video, Instagram carousels, LinkedIn thought pieces).
    • Tailoring ideas to audience segments (teen lifestyle, small business owners, B2B decision-makers, etc.).

    Where humans come in:

    The team takes these raw AI-generated ideas and applies strategy, creativity, and brand voice. Humans filter out what won’t resonate, refine what has potential, and ensure that the concepts align with client goals. AI provides volume and variety — humans provide vision.


    2. Social Listening & Insight Generation

    The challenge:

    Audiences move fast, and conversations can shift overnight. Agencies need to understand what’s trending, how competitors are positioning themselves, and where opportunities exist — but manually monitoring these signals across multiple platforms can eat up entire days.

    How AI helps:

    AI-powered monitoring tools can:

    • Track mentions, hashtags, and brand sentiment at scale.
    • Spot emerging trends before they hit the mainstream.
    • Highlight unusual spikes in competitor activity or audience engagement.

    Where humans come in:

    AI surfaces the noise; humans decide what matters. A strategist interprets the signals, applies market context, and recommends how to act. For example, AI might detect a sudden surge in conversation around sustainable fashion. A human marketer decides if it’s worth jumping in, how to align it with brand values, and whether it’s appropriate for the campaign calendar.


    3. Customer Engagement & Support

    The challenge:

    On social media, audiences expect instant responses — but agencies can’t realistically have humans available 24/7 to handle every comment, DM, or inquiry.

    How AI helps:

    AI chatbots and response assistants can:

    • Handle routine questions (“What are your hours?” “Where can I buy this?”).
    • Direct users to helpful resources or FAQs.
    • Flag urgent or sensitive conversations for human follow-up.

    Where humans come in:

    Community managers review and step in for anything that requires empathy, nuance, or strategic decision-making. When conversations escalate — such as customer complaints, influencer inquiries, or brand reputation issues — only a human can respond with the judgment and care needed. AI handles scale; humans handle relationships.


    Why Balance Matters

    AI is powerful, but it’s not perfect. Left unchecked, it can generate off-brand content, misinterpret conversations, or mishandle sensitive interactions. That’s why the best social media companies use AI as an assistant, not a replacement.

    By letting AI take on the repetitive tasks — brainstorming raw ideas, monitoring chatter, drafting first responses — agencies free up their human teams to do what they do best: create compelling campaigns, build client trust, and foster authentic connections with audiences.

  • Git Branching for Teams: A Workflow That Supports Collaboration and Quality

    When multiple developers are working on the same codebase, things can get messy—fast. Code conflicts, broken features, and unclear ownership can bring progress to a halt. That’s why adopting a clear Git branching strategy isn’t optional—it’s essential.

    In this post, I’ll walk through a branching model we’ve used successfully in team environments, including how we structure collaboration, manage quality assurance (QA), and protect the master branch from untested changes.

    The Core Branching Strategy

    At the heart of this workflow are three persistent branches:

    • master (or main): This is the production-ready branch. Nothing gets merged here unless it has been tested and approved.
    • release branch: The integration branch for all feature and fix work. This is where the latest, stable in-progress work lives for that release date.
    • feature/* or bugfix/*: Individual branches created by developers to isolate work.
    • hotfix/*: the branch we hope never to need, but one we want to know how to use properly when a production emergency hits.

    Feature Branch Workflow

    Each developer works in their own branch, named for the feature or ticket. This ensures:

    • Work can be reviewed independently
    • Code changes are isolated
    • Developers don’t step on each other’s toes


    When a feature is complete, the branch is pushed to the remote and opened as a pull request (PR) into the release branch.

    QA Before Release

    As work lands in the release branch, QA tests each developer’s changes. This can be done on local machines by building the release branch, or on a dedicated testing/QA server. If issues are found, the developer fixes their feature and merges the fix back into the release branch.

    Releasing the Branch

    Once QA is complete and the code has been deployed to the production servers, the release branch is merged into master. The key is that master always matches what is running in production. That way, if a bug slips past all the testing and surfaces in prod, we can deal with it using a hotfix branch.

    Hotfix

    Even with all the testing in the world, you will still find issues in prod, and some are too serious to wait for the next planned release; that’s where the hotfix branch comes in. A hotfix branch is always forked from master/main. When the fix is ready to be tested, the testing server is built from the hotfix branch; after QA signs off, it is merged back into master and into any release branches still in progress.

    Guidelines for a Smooth Team Workflow

    • Enforce code reviews on all PRs
    • Use GitHub Actions or similar CI to run automated tests on every PR
    • Protect master with branch protection rules (e.g., require PR approval, passing tests)
    • Tag releases for traceability and rollbacks
    • Name branches clearly (e.g., feature/signup-modal, bugfix/missing-footer)

    By introducing a branching strategy that separates feature development, QA testing, and production releases, teams can move faster and safer. Everyone knows where to work, where to test, and what’s going to production—no surprises, no broken code in master.

    This approach has worked well for me across teams of developers, QA testers, and product owners—and it will scale as your team grows too.

  • Homelab V2

    After a recent return of some hardware I lent out, I found myself with an extra PC that wasn’t doing much. Rather than letting it sit idle, I turned it into something much more valuable: a full-fledged homelab, powered by Proxmox VE.

    This machine now runs essential self-hosted tools that handle everything from ad-blocking to smart home automation. If you’ve got spare hardware and a bit of curiosity, building your own homelab is one of the most rewarding ways to take control of your digital life.

    I do have an older server blade, but it draws far more power than this simple PC.


    Why Proxmox Was the Right Choice

    Proxmox VE is a free and powerful virtualization platform. I chose it because:

    • It provides an intuitive web UI for managing VMs and containers
    • It supports LXC and full KVM virtualization
    • It runs reliably on consumer-grade hardware
    • It’s perfect for layering services and isolating workloads

    I considered Docker-only and bare-metal installs, but Proxmox gave me more flexibility and backup/snapshot options out of the box.


    What I’m Hosting on Proxmox

    1. Pi-hole

    My first container was Pi-hole, which now acts as a network-wide ad blocker. It filters DNS requests to eliminate ads, trackers, and malicious domains.

    • Why I use it: It cleans up browsing across all devices and gives me full visibility into what’s talking to the internet.

    2. Home Assistant

    Home Assistant is the brain of my smart home. It automates everything from lights to temperature to security alerts.

    • Why I use it: Total control of smart devices, privacy-first automation, and better reliability than any cloud service.
    • Because the system is local most controls will still work even if my internet goes down.
    • Eventually I’m looking at replacing my Google Assistants with Home Assistant Voice for voice control.

    3. Mealie.io

    Mealie is a meal planning and recipe manager that looks and works great.

    • Why I use it: Organizing recipes and planning meals used to be chaotic—now it’s centralized and simple.
    • I was going to build a Puppeteer application for scraping recipes, but Mealie already does that.

    4. Firefly III

    Firefly III handles all my personal finance tracking.

    • Why I use it: It replaces third-party budgeting tools while keeping all financial data local.

    Bonus Setup Details

    • Backups: Proxmox handles weekly snapshots.
    • DNS Management: Pi-hole doubles as my internal DNS, resolving custom hostnames for all services.

    Lessons Learned

    • Old hardware still has value: Don’t toss that old PC—it’s a homelab waiting to happen.
    • Self-hosting is empowering: From blocking ads to budgeting and home automation, these tools make my digital life smoother and more private.
    • Proxmox keeps it clean: Isolated containers and VMs make experimentation low-risk and organized.

    Next on the Roadmap

    • Add Vaultwarden for password management (currently using 1Password)
    • Set up Uptime Kuma for service monitoring
    • Build a custom dashboard with Grafana for home metrics

    Conclusion: From Dust Collector to Digital Core

    That spare PC I nearly gave away is now the backbone of a secure, private, and highly functional digital home. With Proxmox and a handful of open-source tools, I’ve reclaimed my data, enhanced my network, and made life at home just a little bit smarter.

    If you’ve got unused hardware sitting around, give it a new purpose. A homelab doesn’t just teach you—it serves you every single day.

  • Why JSON-LD is Critical for Modern SEO: A Real-World Example from Modere

    If you want your brand to stand out on Google, it’s no longer enough to simply rank high—you need to look great too. One of the most powerful (yet often overlooked) tools for enhancing your search appearance is JSON-LD structured data. During my time at Modere, I saw firsthand how implementing JSON-LD, particularly for product reviews, helped us turn basic listings into rich, eye-catching results. Here’s why JSON-LD matters—and how we used it to drive more visibility and credibility.

    What is JSON-LD?

    JSON-LD (JavaScript Object Notation for Linked Data) is a way to add machine-readable metadata to your web pages without disrupting the user experience. It tells Google—and other search engines—important information about your content: products, reviews, organization info, FAQs, and more.

    Unlike older methods like Microdata or RDFa, JSON-LD is simple to implement and doesn’t require nesting tags within your HTML. Instead, it sits cleanly in the <head> (or sometimes <body>) of your page as a standalone block of code.

    The Opportunity at Modere:

    At Modere, we were already gathering thousands of authentic product reviews through Yotpo, a leading customer review platform. However, while these reviews were visible on the page, they weren’t showing up in Google’s search results as rich snippets (the star ratings you often see under a product link).

    Without structured data, Google had no way to easily associate our reviews with our products—which meant we were missing out on valuable trust signals and click-through opportunities.

    How We Solved It: Adding JSON-LD to Our Next.js Website

    Working within a Next.js framework, we developed a process to dynamically inject JSON-LD into our product pages based on real Yotpo review data. Here’s how we approached it:

    • Pulled Yotpo data: On page load (or during server-side generation), we accessed the latest review counts and average ratings via Yotpo’s API.
    • Generated JSON-LD: For each product page, we created a JSON-LD schema following Google’s Product schema guidelines, including fields like name, description, aggregateRating, and review.
    • Injected it into the page: Using Next.js’ <Head> component, we embedded the JSON-LD inside a <script type="application/ld+json"> tag.

    Here’s a simple version of what we added:

    import Head from 'next/head';
    
    export default function ProductPage({ product, yotpoReviews }) {
      const jsonLd = {
        "@context": "https://schema.org/",
        "@type": "Product",
        "name": product.name,
        "description": product.description,
        "aggregateRating": {
          "@type": "AggregateRating",
          "ratingValue": yotpoReviews.average_score,
          "reviewCount": yotpoReviews.total_reviews
        }
      };
    
      return (
        <>
          <Head>
            <script
              type="application/ld+json"
              dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
            />
          </Head>
          {/* Rest of product page */}
        </>
      );
    }

    The Results:

    After Google re-crawled our pages:

    • Many Modere products started displaying review star ratings directly in search results.
    • Click-through rates improved, particularly on competitive products.
    • Customer trust increased before even landing on the site—because seeing those stars makes a strong first impression.

    Why This Matters for Your Website:

    Adding JSON-LD structured data isn’t just a “nice to have”—it’s becoming a necessity if you want your site to:

    • Earn rich snippets (stars, pricing, availability, FAQ dropdowns)
    • Improve click-through rates (CTR) from search results
    • Provide better context for AI models and voice search systems
    • Future-proof your SEO against evolving search engine expectations

    If you’re running a modern site—whether it’s built on Next.js, WordPress, Shopify, or anything else—you should be leveraging JSON-LD. It’s one of the highest-ROI, lowest-friction ways to boost your search appearance and show customers (and Google) that your content deserves attention.

    At Modere, this simple but strategic addition helped us bridge the gap between customer experiences and search engine visibility—and it’s something I recommend to every brand serious about their digital presence.

  • Share API For Consultant Marketing Pages

    When I was working with Arbonne, we faced a unique challenge:

    How could we empower Consultants to easily share marketing pages that looked better, performed better, and still properly linked back to their e-commerce stores for attribution?

    To make the sharing experience effortless, we leveraged the Web Share API.

    With a single click, Consultants could open their device’s native share options and automatically send a personalized link — no copying and pasting, no extra steps.

    Each Consultant had a unique identifier we called their Consultant Business Name (CBN). Traditionally, the CBN was added as part of the subdomain to route traffic to their personalized shopping sites. However, due to limitations in our tech stack, we had to host these new marketing pages on a separate server — one that didn’t inherently recognize the CBN structure.

    To solve this, we used a cookie-based approach:

    • When a visitor landed on a marketing page through a shared link, the query string appended the CBN (e.g., ?cbn=JohnDoe).
    • JavaScript then read that query string and set a cookie storing the CBN.
    • As users browsed the site, JavaScript dynamically updated shopping URLs to include the correct CBN — ensuring purchases still attributed to the right Consultant.
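
    Here’s a simplified sketch of that attribution logic (the cookie name, lifetime, and the "shop." link selector are illustrative assumptions, not the exact production code):

    JavaScript
    // 1. Persist the CBN from the shared link's query string (e.g., ?cbn=JohnDoe)
    const params = new URLSearchParams(window.location.search);
    const cbnFromUrl = params.get('cbn');
    if (cbnFromUrl) {
      document.cookie = `cbn=${encodeURIComponent(cbnFromUrl)}; path=/; max-age=${60 * 60 * 24 * 30}`;
    }

    // 2. Rewrite shopping links so purchases attribute to the right Consultant
    const cbn = document.cookie.match(/(?:^|; )cbn=([^;]*)/)?.[1];
    if (cbn) {
      document.querySelectorAll('a[href*="shop."]').forEach((link) => {
        const url = new URL(link.href);
        url.searchParams.set('cbn', decodeURIComponent(cbn));
        link.href = url.toString();
      });
    }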

    The key was automation:

    Rather than training Consultants to manually add query parameters, we simply taught them to use the Share button — which handled all the logic behind the scenes.

    Here’s a simplified version of the sharing logic:

    JavaScript
    // Read the stored CBN cookie (set from the ?cbn= query string as shown above)
    const cbn = document.cookie.match(/(?:^|; )cbn=([^;]*)/)?.[1];

    if (navigator.share && cbn) {
      // Append the CBN without clobbering any existing query parameters
      const shareUrl = new URL(window.location.href);
      shareUrl.searchParams.set('cbn', cbn);

      navigator.share({
        title: 'Check out Arbonne!',
        text: 'Discover amazing products through my Arbonne Consultant store!',
        url: shareUrl.toString()
      }).then(() => {
        console.log('Thanks for sharing!');
      }).catch(console.error);
    }

    Beyond just improving the sharing process, these marketing pages offered major SEO advantages over the legacy e-commerce platform:

    • Custom page titles and meta descriptions
    • Open Graph tags for better social media previews
    • Faster load times with clean, lightweight HTML
    • More targeted keyword optimization around product categories and promotions

    The result?

    These pages didn’t just perform better for direct links — they also started ranking organically in search engines, driving new discovery traffic and expanding the reach of each Consultant’s business.

  • Future-Proofing Your Content Strategy with llms.txt

    Search is evolving—and fast. With the rise of generative AI and large language models (LLMs), how your content is found, interpreted, and used is shifting from traditional keyword-based search engines to conversational AI platforms. In this new era, visibility isn’t just about ranking #1 on Google—it’s about being the source LLMs cite, summarize, or paraphrase in their responses. That’s where llms.txt comes in.

    What Is llms.txt?

    The llms.txt file is a new standard being proposed as a way for website owners to communicate how their content should be accessed and used by large language models like ChatGPT, Google Gemini, Claude, and others. It’s a simple text file placed at the root of your domain, similar to robots.txt, but with a focus on LLMs rather than search engine crawlers.

    For example:

    https://bradbartell.dev/llms.txt

    This file lets you:

    • Allow or disallow LLMs from training on or referencing your content
    • Specify conditions for use (like attribution or licensing terms)
    • Signal your openness to AI systems in a transparent, machine-readable way

    How Is llms.txt Different from robots.txt?

    While both llms.txt and robots.txt are used to guide automated systems, they serve different purposes:

    • Primary audience: robots.txt targets web crawlers (e.g., Googlebot, Bingbot); llms.txt targets large language models (e.g., ChatGPT, Gemini).
    • Focus: robots.txt governs search indexing and crawling behavior; llms.txt addresses AI training and content usage.
    • Syntax: robots.txt uses standard directives like Disallow and Allow; llms.txt relies on emerging conventions for AI content governance.
    • Current adoption: robots.txt is widely implemented and recognized; llms.txt is still emerging but gaining attention.

    robots.txt tells search engines whether to index pages. llms.txt goes a step further by addressing whether your content can be used in training datasets or real-time generative answers.

    Why It Matters for the Future of SEO and AI Search

    As AI becomes the front door to more digital experiences, how LLMs interpret and use your content will define your visibility. This includes:

    • Whether your content is cited in AI-generated summaries
    • How accurate or up-to-date AI answers are when referring to your site
    • The ability to control or monetize the use of your original content

    By proactively adding llms.txt, you demonstrate digital maturity and readiness to engage with AI systems on your terms.

    How to Implement llms.txt

    1. Create a plain text file named llms.txt.
    2. Add directives or policy notes, such as:

       User-Agent: *
       Allow: /
       Attribution: required
       Licensing: CC-BY-NC
       Contact: ai@yoursite.com

    3. Upload it to the root of your domain (e.g., https://yoursite.com/llms.txt).
    4. Monitor adoption and adjust policies as standards evolve.

    Conclusion: Stay Ahead of the Curve

    The introduction of llms.txt is more than a technical tweak—it’s a strategic move. As more AI models crawl, synthesize, and present content, your site’s policies should keep pace. By embracing llms.txt, you’re not just protecting your content—you’re positioning your brand to thrive in the next wave of search and discovery.

  • My Approach to Leadership in Digital Teams

    Leadership isn’t just about managing people—it’s about unlocking potential. Over the years, I’ve led cross-functional teams in marketing, development, and UX. Whether I’m mentoring junior developers or collaborating with senior stakeholders, my goal is always the same: to build environments where innovation, accountability, and growth thrive.

    People First

    I believe the best results come from teams that feel supported and heard. That’s why I prioritize clear communication, one-on-one check-ins, and creating space for every voice at the table. I take time to understand each team member’s strengths, goals, and learning style, so I can tailor my leadership to help them grow.

    Empowerment Through Trust

    Micromanagement stifles creativity. I trust my team to own their work, make decisions, and try new ideas. I’m there to provide context, remove roadblocks, and offer guidance—but I believe in giving people the autonomy to experiment and grow.

    Cross-Functional Collaboration

    Having worked across development, UX, and marketing, I’ve seen how silos slow teams down. I encourage collaboration between departments by translating technical jargon for non-technical teams and ensuring business goals are clearly understood on all sides. This creates alignment and accelerates delivery.

    Feedback as Fuel

    I see feedback as a two-way street. I regularly ask my team for feedback on how I can support them better, and I give feedback that’s direct, actionable, and kind. The goal is to build a culture of continuous improvement—where learning from mistakes is encouraged and celebrated.

    Leading Through Change

    The digital space moves fast, and I thrive in environments where change is the only constant. Whether it’s shifting marketing strategies, adopting new tech stacks, or navigating organizational pivots, I stay adaptable and keep my team focused on the big picture.

    At the heart of my leadership philosophy is a simple belief: when you invest in people, results follow. I’ve seen firsthand how great leadership can transform a project—and a career. And I’ll keep showing up every day to lead with purpose, empathy, and a relentless drive to help teams win together.

  • How I address SEO for Existing Sites

    If your site has been around for a while but isn’t ranking as well as it should, you’re not alone. Many brands build up content, make design changes, or launch new features over time — but without a consistent SEO strategy, it’s easy for visibility to stagnate. The good news? A site with history has data — and that gives you a huge advantage.

    Here’s how I approach revitalizing SEO on an existing site using a combination of technical audits, content optimization, and ongoing strategy — with tools like Screaming Frog and SEMrush leading the way.

    Run a Full Crawl with Screaming Frog

    Screaming Frog is my go-to tool to uncover the technical health of a website. I use it to crawl the entire site and surface:

    • Missing or duplicate title tags and meta descriptions
    • Broken internal or outbound links
    • Incorrect canonical tags or redirect chains
    • Pages with low word count or thin content
    • Improper use of H1s and heading structures
    • Orphan pages that aren’t linked to internally
    • Image issues like missing alt text or large file sizes

    This crawl gives a full picture of what’s going on under the hood. From here, I build a prioritized fix list — starting with technical blockers that prevent pages from being indexed or crawled properly.

    Audit Keyword Performance with SEMrush

    SEMrush is where the strategy gets sharp. It helps me understand how the site is currently performing in search — and more importantly, where the missed opportunities are. I use it to:

    • Identify keywords where the site is ranking on page 2 or 3
    • Find high-volume queries where the site has impressions but low click-through
    • Discover new long-tail keywords that align with existing content
    • Analyze competitors to see which terms they’re winning that we aren’t
    • Review backlink profiles and identify toxic links that might need disavowing

    From this data, I create a content plan: which pages need refreshed content, which keywords need stronger internal linking, and what new pages should be created.

    Optimize Existing Content for Quick Wins

    Before launching anything new, I look for quick wins in the existing content. These are typically:

    • Pages ranking in positions 5–20 for target terms
    • Blog posts with outdated information
    • Product or service pages with weak CTAs or vague copy
    • Pages with solid traffic but poor engagement (high bounce, low time on page)

    I improve on-page SEO by adjusting headlines, tightening content to align with search intent, improving meta tags, and adding internal links to and from high-priority pages.

    Address Technical SEO Gaps

    After content, it’s back to the code. I revisit the Screaming Frog data and combine it with insights from Google Search Console to:

    • Fix crawl errors and reduce redirect chains
    • Optimize sitemap and robots.txt files
    • Improve page speed and Core Web Vitals using Lighthouse
    • Add or improve structured data (Product, Article, FAQ, etc.)
    • Ensure canonical and hreflang tags are set properly

    Search engines favor sites that are technically sound. Cleaning this up gives your content a much stronger chance to rise in rankings.
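
    To give one concrete sketch of the structured data piece, here’s what a minimal FAQ schema can look like (schema.org’s FAQPage type; the question and answer text are placeholders):

    JavaScript
    // A minimal FAQPage schema, injected into the page as a <script type="application/ld+json"> tag
    const faqJsonLd = {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What is your return policy?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Unused items can be returned within 30 days for a full refund."
          }
        }
      ]
    };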

    Create Supporting Content Around Priority Terms

    Once the foundation is solid, it’s time to build momentum. I use SEMrush to identify related queries, questions, and subtopics around key themes. Then I create supporting content — blog posts, FAQ pages, resource hubs — that:

    • Strengthen topical authority
    • Increase internal linking opportunities
    • Capture additional long-tail keywords
    • Drive users deeper into the site experience

    This “hub and spoke” model reinforces relevance and builds a strong SEO network around high-converting pages.

    Monitor, Adjust, and Repeat

    SEO isn’t one-and-done. After implementing changes, I use Google Search Console, SEMrush, and analytics tools to monitor:

    • Changes in ranking and click-through
    • Traffic patterns to key landing pages
    • Engagement metrics like bounce rate and time on page
    • Site health and crawlability over time

    From here, I keep iterating — updating older content, targeting new terms, and keeping the technical side clean as the site evolves.

  • How I Approach Search Engine Optimization

    Search Engine Optimization (SEO) isn’t just about showing up in search results — it’s about being understood. Modern SEO is built into the code itself, starting with how content is structured, how pages are marked up, and how a site performs across devices. One of the most important aspects is making your content not only easy for humans to read, but also optimized for search engine crawlers.

    Start with Solid Meta Data

    The fundamentals matter. Every page should have clean, well-structured meta data to help search engines understand its content. I make sure to:

    • Set canonical tags to avoid duplicate content issues and ensure search engines index the right version of a page.
    • Add alternate hreflang tags for multilingual sites to help direct users to the correct language or regional version.
    • Write concise and clear title and meta description tags that reflect the page’s value to the user and improve click-through rates from search results.

    And no, I don’t focus on stuffing keyword meta tags — search engines haven’t used them in years. Instead, I focus on writing useful, well-structured content that aligns with real user intent.
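
    Here’s a minimal sketch of what that looks like in a Next.js page head (the site, URLs, and copy are placeholders for illustration):

    JavaScript
    import Head from 'next/head';

    export default function AboutPage() {
      return (
        <>
          <Head>
            <title>About Us | Example Brand</title>
            <meta
              name="description"
              content="Learn how Example Brand helps customers simplify their daily routine."
            />
            {/* Canonical URL so search engines index the right version of this page */}
            <link rel="canonical" href="https://example.com/about" />
            {/* Alternate hreflang tags pointing to language/region versions */}
            <link rel="alternate" hrefLang="en" href="https://example.com/about" />
            <link rel="alternate" hrefLang="fr" href="https://example.com/fr/about" />
          </Head>
          {/* Rest of page */}
        </>
      );
    }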

    Enhance Discoverability with Structured Data

    To help search engines go beyond just reading — to actually understanding — I add structured data using JSON-LD. This semantic markup allows content to appear in rich results like:

    • Product listings with pricing and availability
    • Product ratings so Google will show ratings in search results
    • Articles with publish dates and authors
    • FAQs, breadcrumbs, and even local business info

    Structured data improves visibility in Google’s search features and helps expose content to the right audiences. It’s one of the best ways to speak directly to search engine robots and clarify what your content is about.

    Optimize the Share Experience with Open Graph Tags

    Sharing isn’t just about social reach — it’s also a signal of relevance and trust. I implement Open Graph meta tags for platforms like Facebook and Twitter to ensure that shared links look great and provide value at a glance. This includes:

    • Customizing preview images
    • Writing optimized share titles and descriptions
    • Ensuring Twitter cards render correctly

    When users share your page, it should look polished, professional, and enticing — because a shared link that drives traffic is still a win.
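
    A hedged example of those tags, continuing the Next.js <Head> sketch above (all values are placeholders):

    JavaScript
    <Head>
      {/* Open Graph tags for rich link previews on Facebook, LinkedIn, and others */}
      <meta property="og:title" content="Clean Mint Toothpaste | Example Brand" />
      <meta property="og:description" content="A fresher take on your morning routine." />
      <meta property="og:image" content="https://example.com/images/clean-mint-og.jpg" />
      <meta property="og:url" content="https://example.com/products/clean-mint" />
      <meta property="og:type" content="website" />
      {/* Twitter card settings so the large preview image renders correctly */}
      <meta name="twitter:card" content="summary_large_image" />
      <meta name="twitter:title" content="Clean Mint Toothpaste | Example Brand" />
    </Head>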

    Analyze Core Web Vitals & Lighthouse Scores

    Search engines reward good user experience, and that means your site needs to perform. I use Lighthouse to regularly audit pages for:

    • Largest Contentful Paint (LCP)
    • Cumulative Layout Shift (CLS)
    • Interaction to Next Paint (INP), which replaced First Input Delay (FID)

    From there, I dig into the code to make improvements — whether that’s optimizing images, reducing JavaScript, deferring unused assets, or cleaning up render-blocking resources.

    A fast, smooth site isn’t just better for SEO. It’s better for users, and that’s what search engines want to see.

    Fine-Tune Based on Google Search Insights

    Google Search Console is one of the most underrated tools in an SEO toolkit. I regularly review performance reports to:

    • Identify search terms where pages are ranking on the second or third page
    • Fine-tune content, headings, or internal links to push those terms toward page one
    • Spot content gaps or underperforming pages that could be reworked or expanded

    This data-driven iteration ensures ongoing optimization beyond the initial launch.

    TLDR

    Good SEO is about more than just keywords and links. It’s about creating a site that is valuable, discoverable, fast, and shareable — all built on a solid technical foundation. My approach combines technical SEO best practices, thoughtful UX, and real user data to help sites perform better today and stay competitive tomorrow.