comchangs 5 hours ago [-]
Tried it on a few repos — the file-tier prioritization is a nice touch. Skipping lockfiles and generated code saves a lot of noise.
Would be interesting to see this integrated into CI — auto-generate updated diagrams on each PR so architecture docs never go stale.
uwais12 3 hours ago [-]
Thanks for trying it out! CI integration is definitely on the roadmap. The idea is a GitHub Action that runs the analysis on each PR and comments with an architecture diff showing what changed. Would be great for catching accidental coupling or layer violations before they get merged. Haven't started building it yet but it's on the plan.
kent8192 5 hours ago [-]
It works well for me, and I've applied it to my project!
I'd like to connect this tool to my Claude Code setup via an MCP server or a plugin.
Could I do that the same way as with Deepwiki?
uwais12 3 hours ago [-]
Glad it worked well on your project! MCP integration is a really interesting idea actually. Right now there's no plugin or MCP server for it but the API is pretty straightforward - you could hit the /api/analyze endpoint to trigger an analysis and /api/chat to ask questions about the results. Building a proper MCP server that exposes the architecture data and chat as tools would be a cool next step. Going to look into this, thanks for the suggestion.
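To make the "pretty straightforward" API concrete, here's a minimal sketch of what calling it might look like. The endpoint paths (/api/analyze, /api/chat) are from the comment above; the field names (repo_url, analysis_id, question) and base URL are guesses, not a documented schema.

```python
# Hypothetical request builders for the two endpoints mentioned above.
# Field names are illustrative assumptions, not the tool's real schema.

def analyze_request(repo_url: str) -> dict:
    """Build a request description for POST /api/analyze."""
    return {"path": "/api/analyze", "body": {"repo_url": repo_url}}

def chat_request(analysis_id: str, question: str) -> dict:
    """Build a request description for POST /api/chat."""
    return {
        "path": "/api/chat",
        "body": {"analysis_id": analysis_id, "question": question},
    }
```

An MCP server would essentially wrap these two calls as tools so Claude Code could invoke them directly.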
spbuilds 16 hours ago [-]
Interesting approach. How does it handle monorepos or repos with unconventional structure? The multi-pass analysis sounds nice, but I'd be curious how consistent the output is across runs: if you run it twice on the same repo, do you get basically the same diagram?
glimglob 15 hours ago [-]
How does it handle messy massive codebases?
uwais12 12 hours ago [-]
It uses a file tier system to prioritize what to analyze. Entry points, configs, and core source files get fetched fully. Tests and utilities get partial treatment. Generated code, lockfiles, and assets get skipped entirely. So even for large repos it focuses on the stuff that actually matters for understanding architecture.
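Roughly, the tier logic described above could look like this. The tier names match the description (full / partial / skip), but the specific file patterns here are illustrative guesses, not the tool's actual rules:

```python
from pathlib import PurePosixPath

# Sketch of a file-tier classifier; patterns are illustrative assumptions.
SKIP_NAMES = {"package-lock.json", "yarn.lock", "Cargo.lock", "poetry.lock"}
SKIP_DIRS = {"node_modules", "dist", "build", "vendor"}
ENTRY_NAMES = {"main.py", "index.ts", "app.py", "server.ts"}
CONFIG_SUFFIXES = {".toml", ".yaml", ".yml", ".ini"}
SOURCE_SUFFIXES = {".py", ".ts", ".go", ".rs", ".java"}

def tier(path: str) -> str:
    """Return 'full', 'partial', or 'skip' for a repo-relative path."""
    p = PurePosixPath(path)
    if p.name in SKIP_NAMES or any(d in p.parts for d in SKIP_DIRS):
        return "skip"      # lockfiles, generated code, vendored deps
    if "tests" in p.parts or p.name.startswith("test_"):
        return "partial"   # tests get summarized, not fetched fully
    if p.name in ENTRY_NAMES or p.suffix in CONFIG_SUFFIXES:
        return "full"      # entry points and configs
    if p.suffix in SOURCE_SUFFIXES:
        return "full"      # core source files
    return "skip"          # assets and everything else
```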
For really massive repos (100K+ files) the analysis runs in a resumable pipeline - each of the 5 passes saves results to the database, so if the serverless function times out it picks up where it left off on the next connection. Embeddings for chat are also done incrementally in batches of 50 chunks.
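The resumable idea above can be sketched in a few lines, assuming some key-value store for checkpoints. The pass names and the store are made up for illustration; the real tool persists results to its own database:

```python
# Minimal sketch of a resumable multi-pass pipeline. If a pass raises
# (e.g. a serverless timeout), completed passes stay checkpointed and
# the next run skips straight to the unfinished one.
PASSES = ["structure", "dependencies", "layers", "entrypoints", "summary"]

def run_pipeline(store: dict, run_pass) -> None:
    """Run each pass once, skipping passes already checkpointed."""
    for name in PASSES:
        if name in store:            # completed on a previous invocation
            continue
        store[name] = run_pass(name)  # may raise; earlier progress is kept

def batches(chunks: list, size: int = 50):
    """Yield fixed-size batches, e.g. for incremental embedding."""
    for i in range(0, len(chunks), size):
        yield chunks[i:i + size]
```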
That said, messy codebases are honestly where it's most useful. Clean well-documented repos don't need a tool like this. The ones with zero docs and 500 files with no clear structure are where it saves the most time.