Is Sourcegraph Cody actually better than Copilot for navigating a large codebase?
I work on a large enterprise monorepo with millions of lines of code across dozens of services, and getting useful AI assistance in this environment has been frustrating. GitHub Copilot is good at generating code within a single file, but it has very limited awareness of how the rest of the codebase is structured, which means its suggestions frequently do not align with our patterns, conventions, or the specific way we have implemented similar things elsewhere in the system.
Sourcegraph Cody has been mentioned to me as a tool specifically designed to give AI assistance deep awareness of an entire codebase rather than just the current file. Sourcegraph already indexes our whole codebase for search, so I understand there may be a natural integration there, but I want to know whether the AI layer on top of that search capability actually produces meaningfully better suggestions for large-codebase work.
Has anyone used Cody in a genuinely large enterprise codebase and found it useful for things like understanding how a particular pattern is implemented elsewhere in the system, navigating unfamiliar parts of the code, or generating code that correctly follows the conventions of the surrounding context? Those are the specific failure modes I hit with Copilot most often, and I want to know whether Cody actually solves them.