feat: Add support for vLLMs and custom URL support
Extend the Language Model component to support language models served by a vLLM OpenAI-compatible server. [1] The component uses langchain's ChatOpenAI to access the model.
The component accepts a custom model name, URL, and API key. The user is responsible for providing this information correctly for the model to work properly.
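For context, talking to a vLLM OpenAI-compatible server through langchain's ChatOpenAI typically looks like the sketch below. This is a minimal illustration, not the component's code; the URL, model name, and API key are placeholders the user would supply.

```python
# Minimal sketch: point langchain's ChatOpenAI at a vLLM OpenAI-compatible server.
# The base_url, model name, and api_key below are placeholders, not values from this PR.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whatever model the vLLM server is serving
    base_url="http://localhost:8000/v1",       # the server's OpenAI-compatible endpoint
    api_key="EMPTY",                           # vLLM accepts any token unless auth is configured
)

print(llm.invoke("Hello from vLLM").content)
```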
Summary by CodeRabbit
New Features
- Added vLLM as a supported provider for language models
- Introduced base URL configuration field for vLLM models
- Updated model selection UI for vLLM with text input
- Made API key optional for vLLM provider
[!IMPORTANT]
Review skipped
Auto incremental reviews are disabled on this repository.
Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command. You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.
Walkthrough
This PR adds vLLM as a supported language model provider by introducing frontend icon components and extending the backend configuration to handle vLLM initialization via base URL and optional API key, with model name as a text input rather than a dropdown selection.
Changes
| Cohort / File(s) | Summary |
|---|---|
| Frontend vLLM Icon Components: src/frontend/src/icons/vLLM/index.tsx, src/frontend/src/icons/vLLM/vLLM.jsx | Created new VLLMIcon React component wrapping SvgVLLM with forwardRef support, and added SvgVLLM SVG component rendering an inline icon with dark mode color adaptation. |
| Frontend Icon Registry: src/frontend/src/icons/eagerIconImports.ts | Added VLLMIcon import and registered it in the eagerIconsMapping to enable eager loading. |
| Backend Language Model Provider: src/lfx/src/lfx/components/models/language_model.py | Extended LanguageModelComponent to support the vLLM provider with a base_url input, changed model_name from a dropdown to a text input for vLLM, added ChatOpenAI initialization with base_url validation, and modified the UI configuration conditionally for vLLM (see the sketch after this table). |
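As a rough illustration of the conditional UI behavior described in the backend row above, the logic might look like the following. The function signature and the build_config keys are assumptions for illustration, not the component's actual schema.

```python
# Hypothetical sketch of the provider-dependent UI configuration described above.
# Key names ("input_type", "show", "required") are illustrative assumptions.
def update_build_config(build_config: dict, field_value: str, field_name: str) -> dict:
    if field_name == "provider" and field_value == "vLLM":
        build_config["model_name"]["input_type"] = "text"  # free-form model name instead of a dropdown
        build_config["base_url"]["show"] = True             # expose the server URL field
        build_config["api_key"]["required"] = False         # vLLM servers may not require a key
    return build_config
```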
Sequence Diagram
sequenceDiagram
participant User
participant UI as Language Model UI
participant Backend
participant ChatOpenAI
User->>UI: Select vLLM provider
UI->>Backend: update_build_config(provider="vLLM")
Note over Backend: Convert model_name to text input<br/>Set base_url visibility<br/>Mark API key optional
Backend-->>UI: Updated configuration
User->>UI: Enter base_url & model_name
User->>UI: (Optional) Enter API key
User->>UI: Build model
UI->>Backend: build_model(provider="vLLM", base_url, model_name, api_key?)
alt base_url present
Backend->>ChatOpenAI: Initialize with base_url & api_key
ChatOpenAI-->>Backend: ChatOpenAI client
Backend-->>UI: Model ready
else base_url missing
Backend-->>UI: ValueError: base_url required
end
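The error branch in the diagram corresponds to a guard along these lines; this is a hypothetical sketch assuming the field names above, not the component's actual code.

```python
# Hypothetical sketch of the vLLM build path with base_url validation.
from langchain_openai import ChatOpenAI


def build_vllm_model(base_url: str, model_name: str, api_key: str | None) -> ChatOpenAI:
    """Return a ChatOpenAI client pointed at a vLLM OpenAI-compatible server."""
    if not base_url:
        raise ValueError("base_url is required when using the vLLM provider")
    return ChatOpenAI(
        model=model_name,
        base_url=base_url,
        api_key=api_key or "EMPTY",  # key is optional for vLLM; any token works unless auth is enabled
    )
```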
Estimated code review effort
🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
- langflow-ai/langflow#9460: Both PRs add new icon components to the frontend and update src/frontend/src/icons/eagerIconImports.ts to register the icons.
Suggested labels
enhancement, lgtm
Suggested reviewers
- edwinjosechittilappilly
- ogabrielluiz
- erichare
Pre-merge checks and finishing touches
❌ Failed checks (1 error, 4 warnings)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Test Coverage For New Implementations | ❌ Error | The PR adds significant new functionality for vLLM provider support in the LanguageModelComponent, including a new vLLM icon component in the frontend. However, the test file has not been updated to include tests for the new vLLM functionality. The existing test suite contains 13 test methods covering the OpenAI, Anthropic, and Google providers, with tests for model creation, build configuration updates, missing API key validation, and live API calls. Yet there are zero references to vLLM in the tests, and no corresponding tests for the new vLLM provider path in the build_model() method, the vLLM branch in the update_build_config() method, or base_url validation. Additionally, no frontend tests exist for the newly added VLLMIcon component or verify that it is properly included in the eagerIconsMapping. | Add comprehensive tests for vLLM functionality to src/backend/tests/unit/components/models/test_language_model_component.py: (1) test_update_build_config_vllm to verify model_name switches to MessageTextInput and base_url becomes visible when vLLM is selected, (2) test_build_vllm_provider to verify ChatOpenAI instantiation with base_url and optional API key, (3) test_build_missing_base_url_vllm to verify ValueError when base_url is missing, and (4) tests confirming UI elements restore correctly when switching back to other providers. Additionally, add frontend tests to verify the VLLMIcon component renders properly and is included in the eagerIconsMapping as documented in the PR summary. A sketch of one such test follows this table. |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage. |
| Test Quality And Coverage | ⚠️ Warning | | |
| Test File Naming And Structure | ⚠️ Warning | The PR introduces significant new backend functionality (vLLM provider support with base_url validation, conditional UI logic for model_name switching across providers) and frontend icon components, but fails to follow the repository's established testing patterns. The existing test file at ./src/backend/tests/unit/components/models/test_language_model_component.py demonstrates clear pytest conventions with descriptive test function names and proper organization covering both positive and error scenarios for other providers, yet it was not updated to include vLLM test cases. Additionally, no new test files were created for the vLLM icon component in the frontend, despite the repository having established Playwright test infrastructure at ./src/frontend/tests/. This lack of test coverage violates the custom check requirements for proper test file naming (test_*.py for backend), structure, and comprehensive scenario coverage. | Update the existing ./src/backend/tests/unit/components/models/test_language_model_component.py file to add test methods following the established naming convention (e.g., test_update_build_config_vllm, test_vllm_model_creation, test_build_model_vllm_missing_base_url) that verify vLLM provider behavior, including base_url validation errors, ChatOpenAI instantiation with correct kwargs, and UI config switching. Create a new frontend test file at ./src/frontend/src/icons/vLLM/__tests__/VLLMIcon.test.tsx using Playwright to test component rendering, ref forwarding, and dark mode prop handling. Ensure all test functions have descriptive names explaining what is being tested, and cover both positive and negative scenarios as demonstrated in the existing provider tests. |
| Excessive Mock Usage Warning | ⚠️ Warning | The existing test file uses appropriate mocking patterns and is not characterized by excessive mock usage. Tests instantiate real component objects, verify real type instances (ChatOpenAI, ChatAnthropic, ChatGoogleGenerativeAI), and rely on real integrations where feasible. However, the PR introduces vLLM provider support but the test file contains zero test cases for vLLM functionality: no vLLM provider configuration tests, no vLLM model creation tests, and no vLLM error handling tests. Additionally, the update_build_config tests do not validate the configuration reset behavior mentioned in review comments, specifically that switching between providers properly restores base_url.advanced flag and API key requirements. This gap represents incomplete test coverage for new feature functionality rather than excessive mocking, but it fails the check's intent to ensure new features are tested appropriately. | Add missing test coverage for vLLM functionality, including: (1) test_update_build_config_vllm validating that model_name switches to text input, base_url is exposed, and api_key becomes optional when selecting vLLM; (2) test_vllm_model_creation verifying ChatOpenAI instantiation with correct parameters for vLLM; (3) test_build_model_vllm_missing_base_url ensuring proper error handling; (4) enhanced tests for all provider transitions (OpenAI→vLLM, vLLM→Anthropic, etc.) to verify configuration resets documented in review comments; and (5) optionally add parametrized tests to verify base_url.advanced and api_key.required flags are correctly updated across all provider switches. The existing test patterns are sound and should be followed. |
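As referenced in the failed test-coverage check above, one of the suggested negative tests could be sketched as follows. The import path, the constructor/.set() usage, and the field names are assumptions drawn from this PR's summary, not verified against the repository.

```python
# Hypothetical pytest sketch for the missing-base_url case; names and APIs are assumptions.
import pytest

from lfx.components.models.language_model import LanguageModelComponent


def test_build_model_vllm_missing_base_url():
    component = LanguageModelComponent()
    component.set(provider="vLLM", model_name="some-model", base_url="", api_key="")
    with pytest.raises(ValueError):
        component.build_model()
```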
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title Check | ✅ Passed | The pull request title "feat: Add support for vLLMs and custom URL support" directly and accurately reflects the main changes in the changeset. The primary modifications extend the Language Model component to support vLLM providers by introducing vLLM as a new supported provider option and adding a base_url input field for custom URLs, along with supporting icon components for UI presentation. The title is concise, specific, and clearly communicates the key functionality being added without vagueness or misleading information. A teammate reviewing the commit history would understand that this change introduces vLLM provider support with custom URL configuration. |
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Comment @coderabbitai help to get the list of available commands and usage tips.
⚠️ Component index needs to be updated
Please run the following command locally and commit the changes:
make build_component_index
Or alternatively:
LFX_DEV=1 uv run python scripts/build_component_index.py
Then commit and push the updated src/lfx/src/lfx/_assets/component_index.json file.
/retest
@erichare @rodrigosnader Can you help us understand why the pipeline is failing?
I already ran the make build_component_index script, but no changes were applied to my local branch...
@jpramos123 the failing pipeline shouldn't block the merging of this - I think we just need an approving review.
There are enough frontend changes that it would be good to have multiple eyes on this: @ogabrielluiz @Cristhianzl @lucaseduoli and @edwinjosechittilappilly might be good candidates!
@ogabrielluiz @Cristhianzl @lucaseduoli and @edwinjosechittilappilly Hey guys, are you able to review this PR?
Thank you!
@jpramos123 I think the CI wants you to update the component index with
make build_component_index
Did you update with this?
This results in a different component_index.json as it uses LFX_DEV=1 I think.
The other CI failures seem to be timeouts/aborts due to runtime :-/
(subject to increase timeouts in CI or just retry :-( )
> @jpramos123 I think the CI wants you to update the component index with make build_component_index
> Did you update with this? This results in a different component_index.json as it uses LFX_DEV=1 I think. The other CI failures seem to be timeouts/aborts due to runtime :-/ (subject to increase timeouts in CI or just retry :-( )
Hey @schuellerf, yes, always use this make command to refresh the component_index.json.
Looks like the conflict happens because it always changes after a new commit is merged into master.
> @jpramos123 I think the CI wants you to update the component index with make build_component_index
> Did you update with this? This results in a different component_index.json as it uses LFX_DEV=1 I think. The other CI failures seem to be timeouts/aborts due to runtime :-/ (subject to increase timeouts in CI or just retry :-( )
>
> Hey @schuellerf, yes, always use this make command to refresh the component_index.json. Looks like the conflict happens because it always changes after a new commit is merged into master.
Thanks! That's a good explanation, so we don't always need to update this, I guess!
/retest