A person sitting at a desk across from a blank screen, a small card between them

You’ve used AI dozens of times. Written emails with it, brainstormed strategy, talked through decisions you weren’t sure about. You know what it’s capable of when you’re in a good session.

Then you open a new one. You’re working on something you’ve done a hundred times before, something AI should know about you by now. Your role, your industry, how you think, what good looks like.

But it doesn’t know any of that. It gives you something technically fine and completely generic. Something that could’ve been written for anyone, by anyone. You read it and think: this isn’t even close.

So you start over. You explain who you are and how you like things done. You rebuild the context from scratch, knowing full well you’ll do it again tomorrow.

If this sounds familiar, you’ve been living in 50 First Dates.

Lucy Whitmore wakes up every morning thinking it’s October 13th. A car accident wiped out her ability to form new long-term memories. Henry Roth meets her at a café, they have this wonderful connection, and the next morning he walks in, she looks up, and she has no idea who he is.

Imagine having to win over the girl of your dreams every friggin’ day.

That’s what most people’s relationship with AI looks like. Every session starts fresh. Your knowledge, your way of thinking, your standards for what “good” means. All of it lives and dies in the conversation. You type it or copy and paste it every time. You’re hand-carrying yourself into every interaction, with no architecture underneath.

And then you wonder why the output feels like it was written for a stranger.

It was. The AI doesn’t know you.

But here’s the thing about 50 First Dates: it’s not a tragedy. Henry doesn’t give up. He doesn’t decide Lucy isn’t worth it because she forgets. He builds a video tape. Every morning, it tells her what happened to her and catches her up on what she’s missed, so she wakes up with context instead of a blank slate. And then something unexpected happens. Lucy starts journaling on her own. She begins building her own record, her own continuity, filling in the gaps from a place that feels like it comes from her. Eventually they build an entire life together. One that works even though her memory resets every night, because the system around her doesn’t.

That’s exactly what’s possible with AI. Not through better prompting. Through building context into the system, so the AI knows you before you say a word. And once you give it that foundation, something interesting happens: the AI starts developing its own ways of remembering you too.

This is what I’ve been building for the past year. And once you cross the first threshold, you can’t go back.

Do You Have Any Idea Who I Am?

“Do you have any idea who I am?” “No.” “No. That sucks.”

I see two completely different realities playing out with AI right now. On one side, people building things that genuinely weren’t possible two years ago. My company uses AI every day. I build with it. I think with it. On the other side, smart people I know personally who’ve tried it, gotten mediocre output, and walked away thinking AI is overhyped.

Both experiences are real. The difference isn’t the AI.

The difference is context. One group is typing who they are and how they think into every single session from scratch. The other group figured out that you don’t have to do that. That there’s a way to give AI something to work with before you even open your mouth.

But most people don’t know that second option exists. They hit a wall and decide AI peaked on day one. They never find out that the issue isn’t the technology. It’s that every session starts with amnesia, and nobody told them there’s something they can build to fix it.

In the movie, it takes Henry a few days to even realize what’s happening with Lucy. He thinks she’s lost interest. He thinks he did something wrong. The breakthrough isn’t a better opening line. It’s understanding the actual problem.

Same with AI. The breakthrough isn’t a better prompt. It’s realizing the whole setup needs to change.

Empty screen on the left with an arrow pointing to a screen full of context documents on the right

That Notebook Used to Have a Lot More About Me in It

In the movie, Henry figures out that if he records a video tape for Lucy that tells her what happened to her and catches her up on everything she’s missed, she wakes up with context instead of a blank slate. Over time they add more to it, but the first version is just the basics: here’s your situation, here’s what you’ve missed, here’s where things stand.

That’s the move that changed everything for me with AI.

There wasn’t a dramatic moment. It was a hunch. I’d been using AI for years, getting good results when I put in the work, losing all of it by the next session. At some point I started wondering: what if I just… wrote some of this down? Not for me. For the AI.

Now, I use Claude Code, Anthropic’s agentic coding tool that runs Claude in your terminal. When you initialize it in a project folder, it generates a CLAUDE.md file, standing instructions it reads at the start of every session. But that file alone doesn’t solve the problem.

What I did was create a context folder and start adding files about me. Who I am, what I value, how I think. I had the AI write to those files as it learned things about me, documenting who I was in a way it could read back later. Then in CLAUDE.md, I wrote a simple rule: I have these context files. Anytime you’re working with me on something, check if there’s a relevant context file and read it.
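Concretely, that first version can be as simple as a folder and one standing instruction. The file names and wording below are illustrative, not the exact files I use:

```markdown
<!-- context/about-me.md  — role, industry, what I care about -->
<!-- context/voice.md     — how I write, what “good” looks like -->

<!-- CLAUDE.md, at the project root: -->
# Working with me

I keep files about myself in the `context/` folder.
Anytime you’re working with me on something, check whether
a relevant context file exists there and read it before you
start. When you learn something new about me, write it back
to the matching file.
```

That’s the whole mechanism: plain files, plus one rule telling the AI to read them and keep them current.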

That was the first version. Honestly, it was kind of a dumb, hopeful, wishful-thinking system (the AI equivalent of leaving a note on the fridge). It didn’t always pull the right files. It wasn’t always reliable. But it was the first step. Giving AI something to work with beyond whatever I typed into the prompt.

And then every session suddenly felt like the same session.

That’s the best way I can describe it. Before, every conversation started from zero. After, it felt like picking up where you left off. AI had continuity. It knew my standards. It wrote in my voice without being told. It remembered what mattered to me, because I’d written it down in a place it could find.

Once you’ve had that experience, generic sessions feel broken. You can’t go back.

The thing that surprised me is how low the barrier actually is. You’re not writing code. You’re not configuring some complex system. You’re putting notes in a file. That’s it. The real barrier isn’t difficulty. It’s knowing this is even possible. Most people don’t. They’re still re-introducing themselves every session because nobody told them there’s a place to put the tape.

Once AI has that foundation, once it knows who you are and how you work, it starts building its own context. It creates memory files. It notices patterns and records them. It starts filling in the notebook on its own, the same way Lucy started journaling once Henry gave her a reason to. You give it the video tape. It starts writing the journal.

That’s where it shifts from “I have to teach AI about me every time” to “AI is learning about me on its own.”

Good Morning, Mrs. Roth. Would You Like to Meet Your Daughter?

Context files were the tape. But I wanted more than a tape.

Those early context files relied on hope. I wrote rules in CLAUDE.md that said “read these files when they’re relevant,” and sometimes AI did. Sometimes it didn’t. It was better than starting from scratch, but I was still dependent on AI choosing to follow the instructions. There was no guarantee.

So I started building more structure. Not just files AI could read, but systems that enforced the rules whether AI felt like following them or not. Hooks that fire automatically. If AI tries to do something without checking the right context first, the hook catches it. Databases that store information so I’m not burning the entire conversation window on backstory. Agents that run tasks on their own, following structured pipelines I’ve defined. Automation that works while I sleep.
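For the hooks specifically, the structure lives in Claude Code’s settings. This is a hedged sketch, not a reference: the hook configuration has looked roughly like this as of this writing, the exact schema may differ by version, and `check-context.sh` is a hypothetical script that verifies the right context was loaded before an edit is allowed:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./check-context.sh" }
        ]
      }
    ]
  }
}
```

Because the hook fires before the tool runs, a blocking exit from the script can stop the action and feed a message back to the AI. That’s what turns “please read my context files” from a hope into a rule.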

Some of that structure came in the form of iOS and macOS apps I built for myself. Ways to engage with my projects, my content, the things I’m working on every day. I built them because I could, because AI already knew what was important to me and what I needed. None of this is a product I’m releasing. It’s personal infrastructure. Tools for thinking better. Leading better. Creating more.

I’m not an engineer. I learned all of this by experimenting. Playing with it. Watching YouTube videos from people further along than me. Having long conversations with my COO, who was building his own deterministic AI rules and thinking through how to make AI more reliable on the engineering side.

I took those principles and translated them into my own world. The world of someone who writes, leads a team, and thinks about strategy, not someone who ships code.

Here’s what I keep coming back to. Each piece of understanding about how AI works unlocks the next thing you can build. I learned about context files, so I built them. That taught me about hooks, so I built those. Hooks taught me about agents, and now I have pipelines. You can’t build what I’ve built today with what I knew a year ago. You wouldn’t even know these tools existed. The understanding and the architecture grow together. One feeds the other.

And when it’s working, AI should make you an exponentially better version of yourself. Not a different person. A more capable version of who you already are. If you use AI to do things you couldn’t do before but lose what makes you distinctive in the process, you’ve traded amplification for replacement. That’s not the point.

Think about what Henry actually pulled off. He built an entire life with someone who can’t remember him. Not by hoping she’d magically recover. By building context, infrastructure, systems, and daily practices that gave her everything she needed to act like she remembered, to live like she remembered, even though she didn’t. That’s how I use AI today. Every system I’ve built, every context file, every hook and rule and pipeline. It’s all the answer to one question: how do you create a meaningful, productive relationship with something that has no memory?


People tell me AI is changing too fast to invest in. That the model you’re using today will be obsolete in six months. And they’re right. It will be. But that misses the point.

You’re not investing in a model. You’re investing in principles that work regardless of which model you’re using. The idea that AI should know who you are before you start talking, that your expertise should persist between sessions. Those principles adapt to any model, any system. The specific tools I use are built for Claude Code, but the thinking behind them carries forward. Claude today, whatever comes next tomorrow. The investment compounds instead of resetting.

That’s the difference between someone who’s on their 50th first date with AI and someone who’s built a life with it.

You don’t need to be an engineer. You don’t need to understand how the models work under the hood. You just need to know this is possible. And then start.

Reading this might get you to the edge. But building something for yourself, putting your own expertise into a system that remembers it for you, that’s the part that actually changes things.

If you’re ready…

Good morning, Lucy.