Tag: technology

  • Backfilling a Project’s Hidden Knowledge Is Finally Possible

    Backfilling a Project’s Hidden Knowledge Is Finally Possible

    Sometimes you join a project and realize you don’t know the best practices there. It’s a new project, either new in general or new to you because you’ve joined a new team or started helping on a different codebase. This has happened to me quite a few times.

    When that happens, a few things can occur.
    First, the documentation might be up to date, which is great. But from time to time the documentation might not be aligned with the latest standards. This can happen for more mature projects, or when a project is in the very early stages and best practices are still evolving.

    In those moments I ask myself, “What are the best practices?”
    I either try to follow the framework’s recommended practices or, if I have more experience with a specific framework, I mix the framework’s best practices with my own learnings, experience, and judgment.

    Recently I realized we can actually backfill that information, thanks to AI.
    You may have heard me mention AI before; that’s because I use it daily for product work and coding, and I think it offers many opportunities to change how we work. No surprise, then, that I’m bringing it up again here.

    Where do a project’s best practices live?

    Whenever we interact on a pull request and comment that something is not quite right or should be done differently, we are implicitly documenting an expectation, a behavior.

    A PR comment might say, “You did it this way, but the way we prefer to approach this for this project or our goals is different.” That information lives in the pull request, but it is sometimes not translated into the formal documentation. For many reasons (bandwidth, speed, whatever), we might not move those comments into the docs.

    That information, though, can be used to document the latest best practices on a project, and there are two main ways to do this.
    The first is through deep research. The second is through MCPs and CLI tooling.

    Deep research

    Deep research is a way for AI to think more deeply about a topic, search widely, and create a detailed document based on what you ask.

    If your project is open source or your repository is publicly available, this is especially powerful: you can ask the tool to review the repo, check comments from code owners in the last 50, 100, or 200 pull requests, and use that information as the source to gather the latest best practices for that codebase.

    This can produce documentation of the current recommended approach to development and the things to pay attention to (please don’t trust the output without checking it, but it’s a good starting point).
    Most tools call this “deep research”: Perplexity, ChatGPT, Claude, and Gemini all support it, and it’s mostly a matter of finding the right prompt.
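
    For instance, a prompt along these lines is a reasonable starting point (the repository placeholder and the PR count are yours to adjust):

```
Review the repository at github.com/<org>/<repo>. Go through the review
comments left by code owners on the last 100 merged pull requests.
Extract the recurring conventions and preferences from those comments,
and write a best-practices document for this codebase, citing the PRs
each practice comes from.
```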

    Deep research might take minutes (even 20 minutes sometimes), but it’s so powerful for this use case.

    MCPs and the CLI

    The other approach is to use MCPs and command-line tools to gather information about a private project.

    MCP stands for Model Context Protocol, and it’s a way for AI tooling to interact with external systems to retrieve information.

    For example, the MCP for GitHub would allow you to query pull requests and comments. You can do something similar if you’re running AI locally with Claude Code or OpenAI Codex by asking it to use the GitHub CLI to gather PR information and comments.
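
    If you go the CLI route, the gathering step might look something like this minimal sketch (the repo name is a placeholder, and you’d tweak which fields you pull for the model to read):

```shell
#!/usr/bin/env bash
# Sketch: collect reviewer comments from recent PRs via the GitHub CLI.
# "acme/widgets" is a placeholder repo; you need `gh auth login` first.
REPO="acme/widgets"

if command -v gh >/dev/null 2>&1 && gh auth status >/dev/null 2>&1; then
  # Numbers of the last 100 merged PRs.
  gh pr list --repo "$REPO" --state merged --limit 100 \
    --json number --jq '.[].number' > pr_numbers.txt

  # Review comments (author + body) for each PR, gathered into one file.
  while read -r pr; do
    gh api "repos/$REPO/pulls/$pr/comments" \
      --jq '.[] | "\(.user.login): \(.body)"'
  done < pr_numbers.txt > pr_comments.txt
else
  echo "skipping: gh CLI not installed or not authenticated"
fi
```

    You can then hand pr_comments.txt to the model and ask it to distill the recurring feedback into a best-practices document.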

    In this case you prompt the tool to analyze the PRs, but you must craft the prompt slightly differently depending on whether you use an MCP integration or CLI commands to fetch the data.

    Once the research is finished, you have a baseline that tells you what’s currently used, what’s considered best, and how to approach implementation.

    I personally find this very helpful when approaching new codebases.
    Even if you have multiple contributors, you can use this method to identify different points of view and get everyone on the same page.

    A last note: AI tooling is changing quickly, so if you revisit this content later, check the current best practices and available tools.

  • Beyond Self-Learning: How AI Adapts to You

    Beyond Self-Learning: How AI Adapts to You

    One of the reasons I’m particularly excited about AI is how it can transform learning. But first, let’s understand what AI can do in this context.

    One of the powers of AI, for example, is being language-agnostic. You can have content that’s in English, hand it over to AI, and then ask questions about this content in a different language.
    Let’s say you ask questions in Italian; you’ll get answers about that English content in Italian. This is part of how AI works and, to me, one of its most useful qualities.

    What’s even more interesting is that now we’re starting to see AI avatars, AI voices, and interactive AI models.
    How does that apply to learning?

    When you want to learn something (self-learn, specifically), you might go to YouTube, search Google, or go to a dedicated course site, and it can take a while to find the right approach, because not every approach works for everyone.

    For example, you might learn better through videos, while others might prefer to learn specific topics related to design through PDFs and books.

    But you might not always get the format you want. Sometimes you’re lucky enough to have a writer or video maker who creates content the way you like. However, as you probably learned during your school years, you don’t get to choose your teacher. You might choose the school or some of the subjects, but more often than not, the teacher is a matter of luck.

    In this regard, I personally see a big shift in how AI can transform personalized learning, beyond what self-learning is today.
    Right now, you check out different courses, videos, and websites to learn something—whether it’s woodworking, programming, design, or video making. Over time, since AI can ingest content from any language and of any type, it can create material relevant to that information in a different language or style.

    Here’s an example of how I learned something through an unusual approach: Over time, I accumulated a lot of knowledge about backpack fabrics because I like backpacks, I enjoy traveling, and I like optimizing things. Gradually, I gained knowledge about fabrics.
    How did I gain that knowledge?
    By spending time in forums, reading Reddit, and so on. I realized that this sort of slow, ongoing consumption worked for me as a slow learning curve. I didn’t even know I was learning.

    I began to wonder if I could learn something new in that same format.
    So, I asked Claude to create Twitter/X threads on certain topics.
    I started by trying to learn about large language models. Every day, I would receive 20 tweets about large language models.

    Claude making up Twitter/X threads on LLMs

    Now, obviously, hallucinations are a problem, so be mindful of trusting AI entirely. But the point I’m making is that once you know what works for you in terms of learning, you can adapt and use AI to learn new skills in the way that suits you best.

    Do you need a video? In the future, you could ask AI to create a video course for you. You could have content written as an exchange between two people in podcast form—NotebookLM is already doing this. You could structure it as a Twitter thread, as I mentioned. All these opportunities are right in front of us.

    What’s being asked of you now is to start understanding the way you personally learn. Once you have deeper introspection about how you learn best, how you understand things, and how you get excited about learning, you can apply that style to any topic you want and get a personalized learning experience.

    Not all the tools are at this stage yet. If we think about creating a video course, we might still be a bit behind compared to anything that a chat interface can create. But we’re not that far off. So, keep this in mind when you begin learning something new—there may be opportunities for you to learn better and faster.

  • Onboarding AI vs Onboarding Humans

    Onboarding AI vs Onboarding Humans

    There’s one thing I realized when Cursor and other AI-integrated code development systems came out. Some of them allow you to set rules to better help the AI navigate your codebase. For example, you can tell the AI how to use the codebase, where to find the code related to a specific feature, how to structure that code, what the intention behind it was, and if there is any unusual behavior in the code.

    By behavior, I mean cases where the code might be structured in a certain way but still have edge cases that don’t quite make sense or differ from what you would expect just by looking at the folder structure.

    I’ve started to see some tweets about how this is great and I agree. It’s great.

    What I realized is that this is exactly the kind of information you share when onboarding a new team member.

    Usually, you either have some documentation in place or, after some time working together, you end up having specific onboarding sessions. During these sessions, you explain things like, “This is why we have this ‘views’ folder in two different paths — because XYZ.” There’s a reason for that structure even if, at first look, it doesn’t make sense.

    The unwritten rules like: “If you want to work in the first ‘views’ folder, that’s because you want to create a component for a view. On the other hand, if you want to create something in the other ‘views’ folder, that’s likely because you want to define the actual view.”
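
    To make that concrete, a hypothetical layout matching the example could look like:

```
src/
  components/
    views/   # building blocks used inside a view
  app/
    views/   # the actual top-level views
```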

    Setting aside whether my example is confusing or not, this is exactly the kind of information we also try to include in tools like Cursor rules when developing. It’s curious: we’re trying so hard now to make things clear for an AI, whereas we weren’t trying this hard before for humans.
    It’s funny in some ways, but it’s also what I think we should have been doing from the start.

    We should maintain the code and accompanying documentation so that someone can work on the code as quickly as possible.
    It’s interesting, at least in this moment in time, how the needs of AI are similar to the needs of a human in terms of understanding the code and knowing all the quirks and strange logic that might have been implemented.

    The challenge is still the same: how do we keep the mental model or internal structure of a project up to date so that it makes AI faster and keeps the codebase readable by humans?
    This would help both humans and AI move faster.

    The challenge, at least in the short term, is how to make maintaining this documentation and structure easier.
    For me, looking back at all the coding I’ve done in my life, maintenance has always been the most difficult part. You’re tempted to just ship and move the project forward because that’s easy. But you also want to keep the documentation up to date and accurate; it’s a fine balance where you tend to make tradeoffs.

    My personal take is that we should integrate updating the structure or documentation into the change flow. So, whenever you do a big or small refactoring, AI should update the internal documentation of the features you changed. That way, it remains clear, current, and useful.

    Different tools will handle this differently, and I expect this article to be outdated in less than a year, given how fast AI is moving.

    While writing this article I discovered that Devin.ai includes a kind of internal scratchpad it uses to understand how your code works, plus (thinking about humans again) their deepwiki.com, to ensure humans can still navigate the complexities of a codebase.
    If you’re using Cursor or Windsurf, .cursorrules files are your friend: you can add prompts in them to keep either a scratchpad (like Devin’s) or the docs (which, in this case, should live in the same codebase) updated at every PR.
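
    As a sketch, such a rule could read something like this (the doc path is hypothetical, and the exact file name depends on your tool):

```
# .cursorrules (hypothetical excerpt)
After any refactoring, big or small, update docs/architecture.md so it
reflects the folders and features you changed.
Before opening a PR, check that docs/ still matches the code and list
any documentation files that need updating in the PR description.
```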

    It’s interesting to see how this will play out in the future. I would suggest everyone keep this in mind because maintaining accurate documentation and structure benefits everyone working with your products.