Sometimes you join a project and realize you don’t know its best practices. The project might be brand new, or simply new to you because you’ve joined a different team or started helping on another codebase. This has happened to me quite a few times.
When that happens, a few things can occur.
Sometimes the documentation is up to date, which is great. But from time to time it isn’t aligned with the latest standards. This can happen on more mature projects, or when a project is in its very early stages and best practices are still evolving.
In those moments I ask myself, “What are the best practices?”
I either try to follow the framework’s recommended practices or, if I have more experience with a specific framework, I mix its best practices with my own learnings, experience, and judgment.
Recently I realized we can actually backfill that information, thanks to AI.
You may have heard me mention AI before; that’s because I use it daily for product work and coding, and I think it offers many opportunities to change how we work. No surprise, then, that I’m bringing it up again here.
Where do a project’s best practices live?
Whenever we interact on a pull request and comment that something is not quite right or should be done differently, we are implicitly documenting an expectation, a behavior.
A PR comment might say, “You did it this way, but the way we prefer to approach this for this project or our goals is different.” That information lives in the pull request, but it is sometimes not translated into the formal documentation. For many reasons (bandwidth, speed, shifting priorities), we might not move those comments into the docs.
That information, though, can be used to document the latest best practices on a project, and there are two main ways to do this.
The first is through deep research. The second is through MCPs and CLI tooling.
Deep research
Deep research is a way for AI to think more deeply about a topic, search widely, and create a detailed document based on what you ask.
If your project is open source or your repository is publicly available, this is especially powerful: you can ask the tool to review the repo, check comments from code owners in the last 50, 100, or 200 pull requests, and use that information as the source to gather the latest best practices for that codebase.
This can produce documentation of the current recommended approach to development and the things to pay attention to (please don’t trust the output without checking it, but it’s a good starting point).
Most tools call this “deep research”: Perplexity, ChatGPT, Claude, and Gemini all support it, and it’s mostly a matter of finding the right prompt.
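For example, a starting prompt might look something like this (the repository and the number of PRs are placeholders to adapt to your project):

“Review the repository at github.com/your-org/your-repo. Go through the review comments left by code owners on the last 100 merged pull requests, identify the recurring feedback, and summarize it into a document of current best practices for this codebase, grouped by topic (naming, testing, architecture, error handling).”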
Deep research can take several minutes (sometimes 20 or more), but it’s powerful for this use case.
MCPs and the CLI
The other approach is to use MCPs and command-line tools to gather information about a private project.
MCP stands for Model Context Protocol, and it’s a way for AI tooling to interact with external systems to retrieve information.
For example, the GitHub MCP server lets you query pull requests and comments. You can do something similar if you’re running AI locally with Claude Code or OpenAI Codex by asking it to use the GitHub CLI to gather PR information and comments.
In this case you prompt the tool to analyze the PRs, but you must craft the prompt slightly differently depending on whether you use an MCP integration or CLI commands to fetch the data.
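To make the CLI path concrete, here is a minimal Python sketch of the kind of data gathering involved. It shells out to the GitHub CLI (gh), which must be installed and authenticated; the repository name is a placeholder, and in practice you’d usually just ask Claude Code or Codex to run the equivalent gh commands itself.

```python
# Minimal sketch: collect review comments from recent merged PRs
# using the GitHub CLI. Assumes `gh` is installed and authenticated.
import json
import subprocess

REPO = "your-org/your-repo"  # placeholder repository


def gh(*args: str) -> str:
    """Run a gh command and return its stdout."""
    result = subprocess.run(
        ["gh", *args], capture_output=True, text=True, check=True
    )
    return result.stdout


# List the numbers of the last 100 merged PRs.
prs = json.loads(
    gh("pr", "list", "--repo", REPO, "--state", "merged",
       "--limit", "100", "--json", "number")
)

# Fetch the review comments left on each PR and print them.
for pr in prs:
    comments = json.loads(
        gh("api", f"repos/{REPO}/pulls/{pr['number']}/comments")
    )
    for comment in comments:
        print(f"PR #{pr['number']} {comment['user']['login']}: {comment['body']}")
```

Once the comments are collected, you can feed them to the model as context and ask it to distill the recurring feedback into a best-practices document.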
Once the research is finished, you have a baseline that tells you what’s currently used, what’s considered best, and how to approach implementation.
I personally find this very helpful when approaching new codebases.
Even if you have multiple contributors, you can use this method to identify different points of view and align everyone on the same page.
A last note: AI tooling is changing quickly, so if you revisit this content later, check the current best practices and available tools.