Category: Web Development

  • How to Reignite a Company’s Digital Culture

    How to Reignite a Company’s Digital Culture

    When I look back at the companies that really clicked digitally, it wasn’t because they had the flashiest website or the biggest tech budget. It was because they had a culture that believed in what digital could do. Teams that wanted to experiment. Leaders who didn’t just approve the work, they understood it.

    Reigniting that kind of culture isn’t about hiring a few more developers or launching a new CMS. It’s about rebuilding curiosity, trust, and alignment around what digital means for the business.

    I’ve seen both sides: the well-oiled digital teams where ideas flow across departments, and the ones where “digital” feels like a side project rather than a strategy. Here’s what I’ve learned about bringing it back to life.


    1. Start with Alignment, Not Tools

    When I join a company, the first thing I ask isn’t “What stack are we using?” It’s about understanding the lay of the land and finding the silos.

    In my experience, reigniting digital culture starts with unifying groups. Marketing knows the message. IT knows the systems. UX knows the friction. When those teams start solving problems together instead of tossing tickets over the wall, everything changes.


    2. Create a Safe Space to Experiment

    Digital innovation doesn’t come from perfection; it comes from play.

    When I was at Arbonne, we gave teams the ability to experiment with blue sky ideas. That freedom sparked new workflows, better collaboration, and a few ideas that grew into full-scale projects.

    The trick is to make experimentation part of the job — not an afterthought. Host digital demo days or what we called the “Innovation Summit”. Let teams share what they’ve built. Celebrate creative risk-taking the same way you celebrate a successful launch.


    3. Build Human Connections First

    One of the most underrated parts of digital transformation is simply helping people talk to each other.

    At one company, I started scheduling what I jokingly called “very important meetings.” These were optional get-togethers where people from IT and the creative teams would play Nintendo Switch games (mostly Super Smash Bros.) over our lunch break.

    We even made a custom Smash trophy for the winner to hold on to.

    On paper, those meetings didn’t produce a single line of code or a new campaign asset. But they completely changed how those teams worked together. Suddenly, designers weren’t afraid to walk over to a developer’s desk to ask a question. Developers could actually name people in creative instead of just saying, “Talk to the PM.”

    That small bit of shared fun turned into real collaboration — and that’s where the best digital work starts.


    4. Bridge the Gap Between Business and Builders

    One of the fastest ways to kill digital momentum is to separate the people making decisions from the people writing code.

    If leadership only sees metrics and not the process, they don’t understand what’s possible. If developers only get directives and not context, they stop caring about impact.

    The best digital cultures have regular, transparent conversations between these worlds. Developers present not just what they built, but why. Leaders ask not just for features, but for understanding. It builds mutual respect, and that’s where innovation thrives.


    5. Invest in People, Not Platforms

    There’s always going to be a new shiny tool. But tools don’t fix cultural issues, people do.

    Invest in training, mentoring, and giving teams the time to grow their skills. Build processes that support creativity instead of drowning it in approvals.

    One of my favorite exercises is asking each team member what they wish the company understood better about their work. You’d be amazed how much insight that uncovers. It’s not just about digital, it’s about empathy.


    6. Keep Momentum Visible

    Once things start clicking, make progress visible. Share wins. Publish small success stories internally.

    When teams see digital success celebrated across the company, they start looking for ways to contribute. It builds pride, momentum, and a sense that everyone has a role in the bigger digital story.


    Final Thoughts

    Reigniting a company’s culture isn’t about chasing trends or deploying another platform. It’s about restoring the energy that makes teams want to build, learn, and innovate together.

    The tools will always change (just look at the React space), but the mindset is what lasts.

  • Relearning Shopify (And Remembering That Arbonne Proof of Concept)

    Relearning Shopify (And Remembering That Arbonne Proof of Concept)

    I’ve been spending some time back in Shopify lately. Being on the job hunt, I’ve been seeing Shopify in job descriptions and wanted to revisit the system. It’s wild how much has changed and how much still feels familiar.

    Back around 2018, I built a proof of concept for Arbonne to see if we could use Shopify as our primary sales system. The goal was to test whether Shopify could handle a key part of the business model: tracking consultant sales attribution. We needed a way for each purchase to tie back to the consultant who referred the customer. In the MLM space, if you can’t do that, the system, no matter how shiny, will never work.

    At the time, Shopify didn’t have built-in multi-tier referral tracking or complex custom logic for that kind of networked sales structure. But what we could do was attach a custom value to the cart. I built a setup where the consultant’s business name was passed into the checkout as a custom cart attribute, effectively “tagging” each order. It worked, at least well enough to show it was technically possible and to give the CIO another option to bring to the board.
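    Revisiting it now, the same trick is still straightforward with Shopify’s AJAX Cart API. Here’s a minimal sketch of cart tagging; the attribute name and helper functions are illustrative, not what we actually shipped at Arbonne:

```javascript
// Build the payload for Shopify's AJAX Cart API (/cart/update.js).
// Cart attributes ride along with the order, so downstream systems
// can read the consultant attribution off each purchase.
function buildCartAttributePayload(consultantBusinessName) {
  return {
    attributes: {
      // Attribute name is illustrative; use whatever your
      // fulfillment/commission system expects to see on the order.
      'Consultant Business Name': consultantBusinessName,
    },
  };
}

// Send it to the storefront cart. Browser-only: call this from
// theme or storefront JavaScript while a cart exists.
async function tagCartWithConsultant(consultantBusinessName) {
  const res = await fetch('/cart/update.js', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildCartAttributePayload(consultantBusinessName)),
  });
  return res.json(); // Shopify responds with the updated cart object
}
```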

    That proof of concept never went live at scale, but it taught me a lot about Shopify’s flexibility and where its boundaries were. Coming back to the platform now, I’m impressed with how many of those limitations have been addressed. The app ecosystem is deeper, checkout extensibility has matured, the APIs are far more capable, and the documentation is much easier to read.

    It’s been fun to rediscover Shopify through a modern lens—seeing what’s new, what’s improved, and what old tricks still hold up. Sometimes revisiting an old system is like catching up with an old friend: you recognize the foundation, but you’re surprised by how much they’ve grown.

  • Testing Chrome’s Prompt API: On-Device AI for the Web

    Testing Chrome’s Prompt API: On-Device AI for the Web

    The web just took another step toward local, browser-based intelligence — and I’m here for it.

    I recently signed up for the Prompt API trial in Chrome, and I’ve been diving into what it could mean for the future of interactive web experiences. For those not familiar, the Prompt API gives developers access to on-device language models directly through the browser. That means we can build AI-powered tools without relying on external APIs or sending data to the cloud — it all runs locally, right where the user is.
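    For a sense of what that looks like in code, here’s roughly the shape of my first experiments. The API is still in trial and may change before it ships, so treat the LanguageModel global and its availability()/create()/prompt() methods as the surface I’m seeing today, not a stable contract:

```javascript
// Sketch of using Chrome's on-device Prompt API (trial-era shape).
// Everything is feature-detected so the page degrades gracefully
// in browsers that don't expose the API.
async function summarizeLocally(text) {
  if (typeof LanguageModel === 'undefined') {
    return null; // No Prompt API here; fall back to a server or skip.
  }
  // The on-device model may still be downloading on first use.
  const availability = await LanguageModel.availability();
  if (availability === 'unavailable') return null;

  const session = await LanguageModel.create();
  // Runs entirely on-device: no API key, no network round trip.
  const answer = await session.prompt(`Summarize in one sentence: ${text}`);
  session.destroy(); // Free the model resources when done.
  return answer;
}
```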

    Why I’m Excited

    For years, we’ve watched AI integrations live mostly on servers — calling APIs, managing tokens, and handling latency. The idea that we can now tap into a browser-level AI model opens up an entirely new playground for building responsive, privacy-friendly, and highly personalized web apps.

    It’s still early, but the possibilities are endless.

    What I’m Experimenting With

    As part of the trial, I’m already thinking through a few prototype projects that could take advantage of this local AI layer:

    • Product Comparison Tool
      Imagine selecting two products on a site and instantly getting a human-like breakdown of pros, cons, and suitability — all processed right in your browser.
    • Smart Shopping Cart
      A cart that helps you make smarter decisions. Instead of just holding products, it could suggest alternatives, bundle recommendations, or tell you if you’re missing something commonly paired with your picks.
    • AI-Powered FAQ System
      Instead of static questions and answers, the FAQ could understand context. Users could ask natural questions, and the browser would generate helpful, brand-specific answers based on local content.

    The Future of On-Device Intelligence

    This feels like a major shift — not just for performance, but for privacy and accessibility. You don’t need an API key, a backend pipeline, or even a connection to OpenAI or Gemini. Everything happens in the browser, using the user’s device capabilities.

    That opens doors for lighter, faster, and more compliant AI experiences, especially in industries where data sensitivity matters.

    If you’re a developer, I recommend checking out the Prompt API GitHub repo and the official Chrome developer docs.

    I’ll be sharing updates as I build out my first demos — so stay tuned for more experiments soon on bradbartell.dev.

  • Thoughts for my next role

    Lead / Decision Maker

    As a developer / technical person, it really pains me to see money wasted on programs that add little to no value.

    In a previous position I watched a big name come in and absolutely mess up a product that was pretty much done and just needed a branded skin. A VP saw the demo and brought it back to square one. The product only ended up gaining one feature I can recall from the process, and it was a very minor one. The reset wasted 6-8 months; in that time we could have worked on just the skin and styling, gotten the product out, and gotten the best feedback possible: feedback from real users!

    My desire is to be in the early decision-making processes, to avoid wasting time and money and to improve system adoption by making sure people use what the company has already built or paid for.

    Business Partners

    I want to work with teammates, not silos. What drives me crazy is silos, and people operating outside their roles in major projects (minor decisions that have little to no impact don’t bother me).

    Teammates know who is strong in which topics. I’m a developer; if you ask me to design something I can do it, but it’s not my strength. If the company has a designer, I’m going to talk with them and build the best version of their vision with the resources I have available. I would expect the same from the people I work with: if a coworker is thinking about how our systems fit together, I would want them to talk to me about it rather than tie us to a third party that does what one of our systems is already capable of.

    I want to avoid silos because they lead to ineffective work. When people don’t have a clue what others are working on, they duplicate effort or end up redoing things after realizing that what they did originally won’t work.

    In a previous position I was in our Marketing department and transferred over to IT. Having spent years with my feet in both worlds, I had connections in both departments, but hardly anyone else could name people on the other side. So I started a little group that met to play games over lunch, with people from both departments. People started to get along and would chat outside the group, so the two departments began to understand who did what. This saved time on projects, since people could talk directly with whoever handled a specific thing.

    People Leader

    One of the things I find a lot of value in is being a people leader. I actually miss looking through applicants’ resumes and portfolio sites to find people who are absolute gems. I miss guiding people and working with my team to create opportunities for growth.

    When I hired people in the past, I didn’t expect them to stay forever. I told them upfront that I wanted a certain amount of time with them, and asked what their goals were so I could find projects to help them advance. I gave one person an entire site translation for a new market; another owned the front-end code for a client relationship management platform; with a third, I created a hybrid role that empowered them to work on native apps.

    I also loved the old Google practice of giving people paid time to work on their own ideas. Not everything out of those projects related directly to the company, but the morale boost was well worth the time. At one point we were having weekly meetings with the CTO/CIO about the passion projects people were working on. I loved that it gave people who wouldn’t typically have a chance at those meetings an easy way to talk about their code, and it helped them feel comfortable speaking to high-level leaders.

  • The True Cost of LLMs — and How to Build Smarter with Ollama + Supabase

    Over the past few years, the cost of training large language models (LLMs) has skyrocketed. Models like GPT-4 are estimated to cost $20M–$100M+ just to train once, with projections of $1B per run by 2027. Even “smaller” foundation models like GPT-3 required roughly $4.6M in compute.

    That’s out of reach for nearly every company. But the good news? You don’t need to train a new LLM from scratch to harness AI in your business. Instead, you can run existing models locally and pair them with a vector database to bring in your company’s knowledge.

    This approach — Retrieval Augmented Generation (RAG) — is how many startups and internal tools are building practical, affordable AI systems today.


    Training vs. Using LLMs

    • Training from scratch
      • Requires thousands of GPUs, months of compute, and millions of dollars.
      • Only feasible for major labs (OpenAI, Anthropic, DeepMind, etc.).
    • Running + fine-tuning existing models
      • Can be done on commodity cloud servers — or even a laptop.
      • Cost can drop from millions to just hundreds or thousands of dollars.

    The trick: instead of teaching a model everything, let it “look things up” in your own database of knowledge.


    Ollama: Running LLMs Locally

    Ollama makes it easy to run open-source LLMs on your own hardware.

    • It supports models like LLaMA, Mistral, and Gemma.
    • You can run it on a laptop (Mac/Windows/Linux) or in a Docker container. I like to run it in Docker on my machine; it’s the easiest way to control costs while building and testing.
    • Developers can expose endpoints to applications with a simple API.

    Instead of paying per token to OpenAI or Anthropic, you run the models yourself, with predictable costs.

    Bash
    # Example: pull and run LLaMA 3.2 with Ollama
    ollama pull llama3.2
    ollama run llama3.2
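    Once the model is running, Ollama serves a small JSON API on localhost:11434. A minimal non-streaming call from JavaScript looks roughly like this; the /api/generate endpoint and stream: false option come from Ollama’s documented API, while the helper names are mine:

```javascript
// Build the request options for Ollama's /api/generate endpoint.
// stream: false asks for one JSON object instead of a token stream.
function buildGenerateRequest(model, prompt) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: false }),
  };
}

// Ask the locally running model a question (11434 is Ollama's default port).
async function askLocalModel(prompt) {
  const res = await fetch(
    'http://localhost:11434/api/generate',
    buildGenerateRequest('llama3.2', prompt)
  );
  const data = await res.json();
  return data.response; // The model's full reply as a string
}
```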

    Supabase: Your Vector Database

    When you add RAG into the mix, you need somewhere to store embeddings of your documents. That’s where Supabase comes in:

    • Supabase is a Postgres-based platform with built-in pgvector extension.
    • You can store text embeddings (numerical representations of text meaning).
    • With SQL or RPC calls, you can run similarity searches (<->) to fetch the most relevant chunks of data.

    For example, embedding your FAQs:

    SQL
    CREATE TABLE documents (
      id bigserial PRIMARY KEY,
      content text,
      embedding vector(1536) -- 1536 matches OpenAI embeddings; nomic-embed-text outputs 768
    );
    
    -- Fetch the documents most similar to a query embedding, passed in as $1
    SELECT content
    FROM documents
    ORDER BY embedding <-> $1
    LIMIT 5;

    This gives your LLM the ability to retrieve your data before generating answers.

    RAG in Action: The Flow

    1. User asks a question → “What’s our refund policy?”
    2. System embeds the query using nomic-embed-text (in Ollama) or OpenAI embeddings.
    3. Supabase vector search finds the closest matching policy docs.
    4. Ollama LLM uses both the question + retrieved context to generate a grounded answer.

    Result: Instead of the model hallucinating, it answers confidently with your company’s real data.
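    Wired together, the whole flow is only a few calls. This sketch assumes a Supabase RPC named match_documents wrapping the pgvector search shown earlier; that RPC name and its arguments are my convention, not a Supabase built-in:

```javascript
// RAG pipeline sketch: embed the question, fetch similar docs from
// Supabase, then generate a grounded answer with Ollama.

// Step 2: embed the user's question via Ollama's embeddings endpoint.
async function embedQuery(question) {
  const res = await fetch('http://localhost:11434/api/embeddings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'nomic-embed-text', prompt: question }),
  });
  return (await res.json()).embedding; // Array of floats
}

// Kept pure so the prompt template is easy to test and tweak.
function buildGroundedPrompt(question, docs) {
  const context = docs.map((d) => `- ${d.content}`).join('\n');
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}

// Steps 3-4: similarity search in Supabase, then a grounded completion.
// `supabase` is a client from @supabase/supabase-js.
async function answerWithRag(supabase, question) {
  const embedding = await embedQuery(question);
  const { data: docs } = await supabase.rpc('match_documents', {
    query_embedding: embedding, // assumed RPC parameter names
    match_count: 5,
  });
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.2',
      prompt: buildGroundedPrompt(question, docs),
      stream: false,
    }),
  });
  return (await res.json()).response;
}
```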

    Cost Reality Check

    • Training GPT-4: $50M+
    • Running Ollama with a 7B–13B parameter model: a few hundred dollars per month in compute (or free if local).
    • Using Supabase for vector search: low monthly costs, scales with usage.

    For most businesses, this approach is 95% cheaper and far faster to implement.

    Final Thoughts

    Building your own GPT-4 is impossible for most organizations. But by combining:

    • Ollama (local LLM runtime)
    • Supabase + pgvector (semantic search layer)
    • RAG pipelines

    …you can get the power of custom AI at a fraction of the cost.

    The future isn’t about every company training billion-dollar models — it’s about smart teams leveraging open-source LLMs and vector databases to make AI truly useful inside their workflows.

    Interested in this for your company? Feel free to reach out on LinkedIn and I’ll use my experience doing this for Modere and one of my freelance clients to build one for you.

  • How Social Media Companies Can Use AI Without Losing Human Control

    AI is changing the way businesses work — and social media is no exception. Agencies are under constant pressure to deliver content faster, track trends in real time, and respond to audiences across multiple platforms. But in all the hype around automation, one principle often gets lost: AI should never replace human creativity and judgment.

    Instead, think of AI as a digital assistant that handles the repetitive, data-heavy tasks and gives you a head start on creative thinking. Humans remain in control, making the final decisions, ensuring brand safety, and applying the nuance that no algorithm can capture.

    Here are three practical ways social media companies can use AI to enhance their work — while always keeping people in the driver’s seat.


    1. Creative Campaign Ideation

    The challenge:

    Brainstorming campaign ideas is a cornerstone of social media marketing, but it can be time-consuming. Teams can spend hours trying to crack the “big idea,” only to end up circling the same concepts.

    How AI helps:

    AI can dramatically speed up the ideation process by:

    • Generating dozens of campaign angles from a single prompt.
    • Suggesting different creative formats (short-form video, Instagram carousels, LinkedIn thought pieces).
    • Tailoring ideas to audience segments (teen lifestyle, small business owners, B2B decision-makers, etc.).

    Where humans come in:

    The team takes these raw AI-generated ideas and applies strategy, creativity, and brand voice. Humans filter out what won’t resonate, refine what has potential, and ensure that the concepts align with client goals. AI provides volume and variety — humans provide vision.


    2. Social Listening & Insight Generation

    The challenge:

    Audiences move fast, and conversations can shift overnight. Agencies need to understand what’s trending, how competitors are positioning themselves, and where opportunities exist — but manually monitoring these signals across multiple platforms can eat up entire days.

    How AI helps:

    AI-powered monitoring tools can:

    • Track mentions, hashtags, and brand sentiment at scale.
    • Spot emerging trends before they hit the mainstream.
    • Highlight unusual spikes in competitor activity or audience engagement.

    Where humans come in:

    AI surfaces the noise; humans decide what matters. A strategist interprets the signals, applies market context, and recommends how to act. For example, AI might detect a sudden surge in conversation around sustainable fashion. A human marketer decides if it’s worth jumping in, how to align it with brand values, and whether it’s appropriate for the campaign calendar.


    3. Customer Engagement & Support

    The challenge:

    On social media, audiences expect instant responses — but agencies can’t realistically have humans available 24/7 to handle every comment, DM, or inquiry.

    How AI helps:

    AI chatbots and response assistants can:

    • Handle routine questions (“What are your hours?” “Where can I buy this?”).
    • Direct users to helpful resources or FAQs.
    • Flag urgent or sensitive conversations for human follow-up.

    Where humans come in:

    Community managers review and step in for anything that requires empathy, nuance, or strategic decision-making. When conversations escalate — such as customer complaints, influencer inquiries, or brand reputation issues — only a human can respond with the judgment and care needed. AI handles scale; humans handle relationships.


    Why Balance Matters

    AI is powerful, but it’s not perfect. Left unchecked, it can generate off-brand content, misinterpret conversations, or mishandle sensitive interactions. That’s why the best social media companies use AI as an assistant, not a replacement.

    By letting AI take on the repetitive tasks — brainstorming raw ideas, monitoring chatter, drafting first responses — agencies free up their human teams to do what they do best: create compelling campaigns, build client trust, and foster authentic connections with audiences.

  • Git Branching for Teams: A Workflow That Supports Collaboration and Quality

    When multiple developers are working on the same codebase, things can get messy—fast. Code conflicts, broken features, and unclear ownership can bring progress to a halt. That’s why adopting a clear Git branching strategy isn’t optional—it’s essential.

    In this post, I’ll walk through a branching model we’ve used successfully in team environments, including how we structure collaboration, manage quality assurance (QA), and protect the master branch from untested changes.

    The Core Branching Strategy

    At the heart of this workflow are a few core branch types:

    • master (or main): This is the production-ready branch. Nothing gets merged here unless it has been tested and approved.
    • release branch: The integration branch for all feature and fix work. This is where the latest, stable in-progress work lives for that release date.
    • feature/* or bugfix/*: Individual branches created by developers to isolate work.
    • hotfix/*: the branch we hope never to use, but need to know how to use properly when an emergency hits.

    Feature Branch Workflow

    Each developer works in their own branch, named for the feature or ticket. This ensures:

    • Work can be reviewed independently
    • Code changes are isolated
    • Developers don’t step on each other’s toes


    When a feature is complete, the branch is pushed to the remote and opened as a pull request (PR) into the release branch.

    QA Before Release

    As work lands in the release branch, QA tests each developer’s changes. This can be done on local machines by building the release branch, or on a dedicated testing/QA server. If issues are found, the developer fixes their feature and merges it back into the release branch.

    Releasing the Branch

    Once the QA work is done and the code has been pushed to the production servers, the release branch is merged into the master branch. The key is to always have the master branch match the production server. That way, when a bug slips past all the testing and surfaces in prod, we can deal with it using a hotfix branch.

    Hotfix

    Even with all the testing in the world, you will find issues in prod, and some may be too serious to wait for the next planned release; in comes the hotfix branch. The hotfix branch is always forked from the master/main branch. When the code is ready to be tested, the testing server gets built from the hotfix branch; after QA has signed off, it is merged back into master and into any applicable release branches that may be in progress.

    Guidelines for a Smooth Team Workflow

    • Enforce code reviews on all PRs
    • Use GitHub Actions or similar CI to run automated tests on every PR
    • Protect master with branch protection rules (e.g., require PR approval, passing tests)
    • Tag releases for traceability and rollbacks
    • Name branches clearly (e.g., feature/signup-modal, bugfix/missing-footer)

    By introducing a branching strategy that separates feature development, QA testing, and production releases, teams can move faster and safer. Everyone knows where to work, where to test, and what’s going to production—no surprises, no broken code in master.

    This approach has worked well for me across teams of developers, QA testers, and product owners—and it will scale as your team grows too.

  • Why JSON-LD is Critical for Modern SEO: A Real-World Example from Modere

    If you want your brand to stand out on Google, it’s no longer enough to simply rank high—you need to look great too. One of the most powerful (yet often overlooked) tools for enhancing your search appearance is JSON-LD structured data. During my time at Modere, I saw firsthand how implementing JSON-LD, particularly for product reviews, helped us turn basic listings into rich, eye-catching results. Here’s why JSON-LD matters—and how we used it to drive more visibility and credibility.

    What is JSON-LD?

    JSON-LD (JavaScript Object Notation for Linked Data) is a way to add machine-readable metadata to your web pages without disrupting the user experience. It tells Google—and other search engines—important information about your content: products, reviews, organization info, FAQs, and more.

    Unlike older methods like Microdata or RDFa, JSON-LD is simple to implement and doesn’t require nesting tags within your HTML. Instead, it sits cleanly in the <head> (or sometimes <body>) of your page as a standalone block of code.

    The Opportunity at Modere:

    At Modere, we were already gathering thousands of authentic product reviews through Yotpo, a leading customer review platform. However, while these reviews were visible on the page, they weren’t showing up in Google’s search results as rich snippets (the star ratings you often see under a product link).

    Without structured data, Google had no way to easily associate our reviews with our products—which meant we were missing out on valuable trust signals and click-through opportunities.

    How We Solved It: Adding JSON-LD to Our Next.js Website

    Working within a Next.js framework, we developed a process to dynamically inject JSON-LD into our product pages based on real Yotpo review data. Here’s how we approached it:

    • Pulled Yotpo data: On page load (or during server-side generation), we accessed the latest review counts and average ratings via Yotpo’s API.
    • Generated JSON-LD: For each product page, we created a JSON-LD schema following Google’s Product schema guidelines, including fields like name, description, aggregateRating, and review.
    • Injected it into the page: Using Next.js’ <Head> component, we embedded the JSON-LD inside a <script type="application/ld+json"> tag.

    Here’s a simple version of what we added:

    import Head from 'next/head';
    
    export default function ProductPage({ product, yotpoReviews }) {
      const jsonLd = {
        "@context": "https://schema.org/",
        "@type": "Product",
        "name": product.name,
        "description": product.description,
        "aggregateRating": {
          "@type": "AggregateRating",
          "ratingValue": yotpoReviews.average_score,
          "reviewCount": yotpoReviews.total_reviews
        }
      };
    
      return (
        <>
          <Head>
            <script
              type="application/ld+json"
              dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
            />
          </Head>
          {/* Rest of product page */}
        </>
      );
    }

    The Results:

    After Google re-crawled our pages:

    • Many Modere products started displaying review star ratings directly in search results.
    • Click-through rates improved, particularly on competitive products.
    • Customer trust increased before even landing on the site—because seeing those stars makes a strong first impression.

    Why This Matters for Your Website:

    Adding JSON-LD structured data isn’t just a “nice to have”—it’s becoming a necessity if you want your site to:

    • Earn rich snippets (stars, pricing, availability, FAQ dropdowns)
    • Improve click-through rates (CTR) from search results
    • Provide better context for AI models and voice search systems
    • Future-proof your SEO against evolving search engine expectations

    If you’re running a modern site—whether it’s built on Next.js, WordPress, Shopify, or anything else—you should be leveraging JSON-LD. It’s one of the highest-ROI, lowest-friction ways to boost your search appearance and show customers (and Google) that your content deserves attention.

    At Modere, this simple but strategic addition helped us bridge the gap between customer experiences and search engine visibility—and it’s something I recommend to every brand serious about their digital presence.

  • Share API For Consultant Marketing Pages

    When I was working with Arbonne, we faced a unique challenge:

    How could we empower Consultants to easily share marketing pages that looked better, performed better, and still properly linked back to their e-commerce stores for attribution?

    To make the sharing experience effortless, we leveraged the Web Share API.

    With a single click, Consultants could open their device’s native share options and automatically send a personalized link — no copying and pasting, no extra steps.

    Each Consultant had a unique identifier we called their Consultant Business Name (CBN). Traditionally, the CBN was added as part of the subdomain to route traffic to their personalized shopping sites. However, due to limitations in our tech stack, we had to host these new marketing pages on a separate server — one that didn’t inherently recognize the CBN structure.

    To solve this, we used a cookie-based approach:

    • When a visitor landed on a marketing page through a shared link, the link carried the CBN in its query string (e.g., ?cbn=JohnDoe).
    • JavaScript then read that query string and set a cookie storing the CBN.
    • As users browsed the site, JavaScript dynamically updated shopping URLs to include the correct CBN — ensuring purchases still attributed to the right Consultant.
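    The landing-page side of that logic was only a few lines. Here’s a sketch; the cookie name and lifetime are illustrative, not Arbonne’s actual values:

```javascript
// Read the CBN off the shared link's query string.
function getCbnFromSearch(search) {
  return new URLSearchParams(search).get('cbn');
}

// Persist it so later page views can still attribute the visit.
// Guarded so the snippet is safe outside a browser too.
function storeCbn(cbn) {
  if (typeof document !== 'undefined' && cbn) {
    // 30-day cookie; name and lifetime are illustrative.
    document.cookie = `cbn=${encodeURIComponent(cbn)}; max-age=${60 * 60 * 24 * 30}; path=/`;
  }
}

// Rewrite a shopping URL so the purchase attributes to the Consultant.
function withCbn(url, cbn) {
  const u = new URL(url);
  u.searchParams.set('cbn', cbn);
  return u.toString();
}
```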

    The key was automation:

    Rather than training Consultants to manually add query parameters, we simply taught them to use the Share button — which handled all the logic behind the scenes.

    Here’s a simplified version of the sharing logic:

    JavaScript
    if (navigator.share) {
      // Build the link with the URL API so an existing query string
      // isn't broken by naive string concatenation.
      const shareUrl = new URL(window.location.href);
      shareUrl.searchParams.set('cbn', cookievalue);
      navigator.share({
        title: 'Check out Arbonne!',
        text: 'Discover amazing products through my Arbonne Consultant store!',
        url: shareUrl.toString()
      }).then(() => {
        console.log('Thanks for sharing!');
      }).catch(console.error);
    }

    Beyond just improving the sharing process, these marketing pages offered major SEO advantages over the legacy e-commerce platform:

    • Custom page titles and meta descriptions
    • Open Graph tags for better social media previews
    • Faster load times with clean, lightweight HTML
    • More targeted keyword optimization around product categories and promotions

    The result?

    These pages didn’t just perform better for direct links — they also started ranking organically in search engines, driving new discovery traffic and expanding the reach of each Consultant’s business.

  • Home Lab

    Needing to update my skills, and being really interested in an ESXi server after my boss talked to me about his setup for a side project we worked on, I built a home lab. Mine isn’t anything fancy, but it lets me build and experiment with whatever I want, with little need to keep reinstalling operating systems.

    What hardware did I use?

    For most people there are three options: build from scratch, reuse parts from a previous upgrade, or buy a ready-made setup. For anyone who builds their own computers, I would recommend building a server from old parts, but I had actually just given mine away to a neighbor kid who wanted to get into PC gaming. So I could either buy parts and build a new PC or buy a prebuilt. In my case I knew this would be a server only, so I didn’t want to build a typical PC.

    I opted to use an old server; there are a ton of them on Amazon and similar sites. This got me a really good price on the hardware, and it felt good to keep some chips from going to the dump.

    Software for the server

    Once you find the hardware you want to use, you’ll need to pick the software you want to run. Depending on what you plan to use the server for, you can go with a Linux distro or Windows Server. What I did was use ESXi, a bare-metal hypervisor that runs multiple virtual machines without needing a full operating system underneath them. ESXi is stripped down to take minimal resources.

    What I’m using it for

    Currently I’m using my server to run Home Assistant to automate and monitor my home. It’s not completely set up for everything yet, but I’m able to monitor the key systems I want to watch. Home Assistant handles a lot of the basic IoT integrations so you can focus on creating hooks where one thing happens after another. I’m working on building logic so I can see when my solar system is overproducing electricity for the grid, so I can run high-draw devices when the power costs me nothing.

    My other use for this server is to mess around with passion projects. After finishing this post I’ll be working on learning more about Linux. I truly believe that to really understand a system you have to poke around, sometimes better known as FAFO.

    With ESXi I can take snapshots of a server before messing around, so if I really mess a server up I can just roll it back.