Documentation Request: Clarify limitations of code examples
When using context7 to verify import patterns, I discovered that the documentation can be misleading for smaller libraries with minimal docs.
Case study - https://github.com/cytomining/copairs/ (very new, minimal docs)
Context7 showed 29 snippets, all using this pattern:

```python
from copairs import map
map.mean_average_precision()
```
Based on this, I incorrectly concluded that direct imports were invalid:

```python
from copairs.map import mean_average_precision  # Marked as "wrong"
```
However, checking the source code revealed that these direct imports ARE valid: the names are exported in `__init__.py`.
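To illustrate why both import styles can coexist: when a package's `__init__.py` re-exports names from a submodule, the namespace-style and direct imports resolve to the same objects. This is a minimal sketch using a hypothetical package `pkglib` built in a temp directory (not copairs itself):

```python
import pathlib
import sys
import tempfile

# Build a tiny hypothetical package whose __init__.py re-exports a
# function from a submodule, mirroring the pattern described above.
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "pkglib"
pkg.mkdir()
(pkg / "metrics.py").write_text("def score():\n    return 42\n")
(pkg / "__init__.py").write_text(
    "from . import metrics\n"          # enables: from pkglib import metrics
    "from .metrics import score\n"     # enables: from pkglib.metrics import score
)
sys.path.insert(0, str(root))

# Both import styles now work and refer to the same function:
from pkglib import metrics            # namespace-style access
from pkglib.metrics import score      # direct import, equally valid

assert metrics.score is score
print(metrics.score(), score())
```

Because docs snippets for a library typically show only one of these styles, seeing 29 examples of the namespace style says nothing about whether the direct style is invalid.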
While context7 provides a "Trust Score" (copairs had 6.0 vs scikit-learn's 8.5), it's not clear what this score indicates about documentation completeness.
Context7's documentation doesn't make it clear that:
- Examples show recommended patterns, not ALL valid patterns
- Tutorial-style docs may not reflect the complete API surface
- Smaller libraries may only have usage examples, not API references
- Lower trust scores might indicate incomplete API coverage
Should the context7 documentation include a warning like:
"Code examples demonstrate common usage patterns and may not show all valid import methods or API variations. For definitive API compatibility, always verify against the library's source code."
or something like that?
--
(I used Claude Opus to draft this issue, then edited it)
Very good feedback, I will think about this.
We have released our docs website at https://context7.com/docs and tried to explain these limitations. Please let us know if you think there could be improvements, or you can always create a PR. Best!