Building My Digital Twin

I've been meaning to build a personal site for a while. Not a portfolio template. Not a Linktree. Something I actually own, deployed on infrastructure I control, that represents who I am right now.

Here's the honest version of how it happened.

Why bother

The honest answer is that I wanted a home base on the internet. Somewhere that isn't a platform, isn't controlled by an algorithm, and won't disappear when a company pivots. I graduated from Bucknell, I'm working at CURE Auto Insurance, and I wanted a place to think in public while I figure out what comes next.

I also want this site to be something I look back on in 10, 15, 20 years. A snapshot of how I was thinking at a given moment. I want to read these posts someday and smile, or laugh at myself, or just remember who I was. If nothing else, it's a way to track how my thinking evolves over time.

Choosing the stack

I kept it intentionally simple and stuck with what I knew: Next.js with TypeScript, MDX for the content, and Vercel for hosting.

No database. No CMS. No auth. The content lives in the repo as .mdx files, and the entire content system is a single TypeScript file that scans the filesystem. That's it.

I'd always wanted to build a personal site and had been tinkering with this stack on and off, so when I decided to actually do it, the tech choices were already made. The real question was how to build it.

How I actually built it

Here's the part most people leave out: I didn't write the code. I designed the site, made every content and design decision, and orchestrated AI to write the actual code. Every line of TypeScript, every component, every CSS utility. AI wrote it. I directed it.

The notebook

It started with a physical notebook. I sketched out what I wanted. The layout, the sections, how the navigation should work. I knew I wanted something clean and easy to read. Nothing crazy. I like leerob's website and used that as a reference point for the minimalist, content-focused vibe.

Discovering PAI

Around the same time, I watched a YouTube video from Daniel Miessler and fell down the rabbit hole. I found his PAI (Personal AI) architecture, a system for building a personal AI infrastructure on top of Claude Code. I've been using it for about a month now and it's changed how I think about building with AI.

PAI gave me a personal AI assistant with structured skills for different tasks. I'm still figuring out what to name it, so if you have suggestions, I'm all ears. Instead of just prompting Claude and hoping for the best, I had access to specialized capabilities: the Council, BeCreative, and RedTeam.

I used all three during the design process. The Council debated whether to go full digital garden vs. traditional blog (we landed on a hybrid). BeCreative explored wild ideas for the homepage. RedTeam tore apart my initial plans and pointed out what would break.

Feature by feature, PR by PR

With the design locked in, I built the site feature by feature. Each feature got its own branch, its own PR, and its own AI-powered code review. Here's the actual build timeline from the repo:

  1. Project scaffold. Planning docs and Next.js initialization.
  2. Design foundation. Colors, typography, MDX styling.
  3. Navigation + footer. Site-wide components.
  4. Home page. Typewriter hero, blog scaffold, greeting.
  5. About page. The honest personal story.
  6. Blog system. Index page, content scanner, first post.
  7. Links page. Social profiles and contact.
  8. SEO polish. Sitemap, robots, OG images, custom 404.
  9. Deploy. Vercel Analytics, production config.
  10. Content restructure. Migrated to next-mdx-remote with remark-gfm.
  11. Interactive bookshelf. The library page.
  12. 3D bookshelf redesign. Immersive book visualization.

12 pull requests. Every one reviewed by Greptile before merging.

Greptile as the quality gate

This is the part of the workflow I'm most proud of. Since I wasn't writing the code myself, I needed something to catch issues I wouldn't spot by reading diffs. Greptile reviews every PR automatically and scores it.

My rule: nothing merges to main unless Greptile gives it a 5/5.

And Greptile actually caught real problems. On PR #4 (the home page), it flagged four issues in one review. Three of them stood out:

The timezone bug:

"greeting uses server time, not user's local timezone — user in California will see 'good evening' at 2pm PST if server is in UTC"

I had a time-aware greeting on the homepage ("good morning", "good evening"). Worked perfectly in dev, wrong in production. The fix was extracting the greeting into a client component so it uses the browser's timezone instead of Vercel's UTC server.
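
Roughly what the fix looks like. This is a sketch, not the exact component from my repo; the Greeting name and the hour cutoffs are my stand-ins:

"use client";

import { useEffect, useState } from "react";

// runs in the browser, so getHours() reflects the visitor's clock, not the server's
export function Greeting() {
  const [greeting, setGreeting] = useState("hello");

  useEffect(() => {
    const hour = new Date().getHours();
    if (hour < 12) setGreeting("good morning");
    else if (hour < 18) setGreeting("good afternoon");
    else setGreeting("good evening");
  }, []);

  return <span>{greeting}</span>;
}

Starting from a neutral "hello" also sidesteps a hydration mismatch, since the server-rendered HTML and the first client render stay identical.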

The sorting inversion:

"sorting logic is inverted — when b.date > a.date, returning 1 places a after b, giving you oldest-first instead of newest-first"

Classic. My first blog post sorter showed the oldest posts first. Off-by-sign energy.
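
I don't have the exact buggy line in front of me, but the difference comes down to argument order in the comparator. Something like this (posts is hypothetical here; the real comparator lives in lib/content.ts, shown below):

// ascending by date string: oldest post first
posts.sort((a, b) => a.date.localeCompare(b.date));

// descending by date string: newest post first, which is what the blog index wants
posts.sort((a, b) => b.date.localeCompare(a.date));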

The frontmatter parser bug:

"splitting on all colons truncates values containing colons — use indexOf to split on first colon only"

If any frontmatter value contained a colon (like a URL or a time), the parser would chop it off. Subtle bug that would have been painful later.
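
The fix is small once you see it: split on the first colon only. A sketch, with parseLine as my own name for the helper:

// keep everything after the first colon, so URLs and timestamps survive intact
function parseLine(line: string): [string, string] | null {
  const sep = line.indexOf(":");
  if (sep === -1) return null;
  return [line.slice(0, sep).trim(), line.slice(sep + 1).trim()];
}

// parseLine("demo: https://example.com/post") → ["demo", "https://example.com/post"]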

On PR #12 (the interactive bookshelf), Greptile caught a missing data-book attribute that would have broken the entire focus restoration flow, and flagged duplicated utility functions that needed extraction.
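
To make that concrete: focus restoration only works if the originating element can be found again after the detail view closes. This is my guess at the pattern, not the repo's exact code; restoreFocus and the slug lookup are stand-ins:

// return focus to the book that opened the detail view; if the rendered element
// is missing its data-book attribute, the query returns null and focus is lost
function restoreFocus(slug: string) {
  const el = document.querySelector<HTMLElement>(`[data-book="${slug}"]`);
  el?.focus();
}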

These aren't nitpicks. These are real bugs that would have shipped to production.

The code

Here's what some of the interesting parts look like.

The typewriter effect

The homepage has rotating taglines that type and delete themselves. It's a state machine with four phases:

"use client";

import { useEffect, useRef, useState } from "react";

const taglines = [
  "building things on the internet",
  "figuring it out as I go",
  "nerd in training",
  "runner, wrestler, builder",
  "chasing the next PR",
  "shipping my digital twin",
];

export function TypewriterRotator() {
  const [display, setDisplay] = useState("");
  const phase = useRef<"typing" | "pausing" | "deleting" | "waiting">("typing");
  const index = useRef(0);
  const charPos = useRef(0);

  useEffect(() => {
    let timeout: ReturnType<typeof setTimeout>;
    function step() {
      const current = taglines[index.current];
      switch (phase.current) {
        case "typing":
          charPos.current++;
          setDisplay(current.slice(0, charPos.current));
          if (charPos.current >= current.length) {
            phase.current = "pausing";
            timeout = setTimeout(step, 2000);
          } else {
            timeout = setTimeout(step, 60 + Math.random() * 40);
          }
          break;
        // ... delete, wait, cycle to next
      }
    }
    timeout = setTimeout(step, 500);
    return () => clearTimeout(timeout);
  }, []);

  return (
    <span className="text-secondary">
      {display}
      <span className="animate-pulse text-accent">|</span>
    </span>
  );
}

The random delay between keystrokes (60 + Math.random() * 40) gives it a human-like typing feel instead of robotic constant speed.

The content system

The entire blog engine is a single file, lib/content.ts. It scans a directory for .mdx files, parses frontmatter, and returns sorted metadata:

import fs from "fs";
import path from "path";

// ContentType, ContentMeta, and parseFrontmatter are defined elsewhere in lib/content.ts
export function getContentByType(type: ContentType): ContentMeta[] {
  const dir = path.join(process.cwd(), "content", type);
  if (!fs.existsSync(dir)) return [];

  const files = fs.readdirSync(dir).filter((f) => f.endsWith(".mdx"));
  const items: ContentMeta[] = [];

  for (const file of files) {
    const raw = fs.readFileSync(path.join(dir, file), "utf-8");
    const { metadata } = parseFrontmatter(raw);
    const slug = file.replace(/\.mdx$/, "");

    items.push({
      title: metadata.title as string,
      date: metadata.date as string,
      description: metadata.description as string,
      tags: metadata.tags as string[],
      slug,
      type,
    });
  }

  return items.sort((a, b) => b.date.localeCompare(a.date));
}

No database. No API. Just the filesystem. Adding a new blog post means creating a new .mdx file in content/blog/ and it shows up automatically.
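
In practice a new post just needs frontmatter with the fields the scanner reads (these values are invented, and the exact tag syntax depends on the custom parser):

---
title: Building My Digital Twin
date: 2025-01-15
description: How I designed the site and had AI write the code
tags: [ai, nextjs, building-in-public]
---

Post body goes here, in MDX.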

What I learned

Designing is harder than coding. When AI writes the code, the bottleneck shifts to knowing what you want. My notebook sketches and design iterations took longer than the actual implementation. That's the right bottleneck to have.

AI code review is non-negotiable. I wouldn't have caught the timezone bug, the sorting inversion, or the frontmatter parser issue by reading diffs. Greptile paid for itself on PR #4 alone.

Build in public, even when it's messy. This post is the honest version. I didn't write the code and I'm not pretending I did. But I designed every pixel, chose every word, and made every decision about what this site should be. The AI was the tool. The vision was mine.

Ship, then iterate. Version one was 9 PRs. Then I restructured the content system. Then I redesigned the bookshelf twice. The site is never done, and that's the point.

What's next

This site is a living thing, and I have a running list of things I want to add over time.

The whole thing is open source at github.com/MaxHarar/PersonalWebsite. If you want to see how it's built, every PR is there with Greptile's reviews attached.

But for now, it's shipped. And shipping beats planning every time.