ml-commons
ml-commons provides a set of common machine learning algorithms, e.g. k-means or linear regression, to help developers build ML-related features within OpenSearch.
**What is the bug?** I registered a rerank model, using the same example as in the docs, via the _register API. This returns a task_id, then I use the below API to...
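For context, the flow the report describes is the ML Commons register-then-poll pattern: `_register` returns a `task_id`, and the task API is polled until it reports the `model_id`. A minimal sketch of the request shapes, assuming a local cluster; the model name and task_id below are placeholders, and `TEXT_SIMILARITY` is the function name rerank (cross-encoder) models use — verify the exact fields against the docs for your version:

```python
import json

BASE = "http://localhost:9200"  # placeholder cluster endpoint

# Step 1: register the model. The response carries a task_id, not a model_id.
register_url = f"{BASE}/_plugins/_ml/models/_register"
register_body = {
    "name": "my-rerank-model",           # placeholder model name
    "version": "1.0.0",
    "model_format": "TORCH_SCRIPT",
    "function_name": "TEXT_SIMILARITY",  # function name used by rerank models
}

# Step 2: poll the task API with the returned task_id; once the task state is
# COMPLETED, the response includes the model_id to use for deploy/predict.
task_id = "example-task-id"              # placeholder value from step 1
task_url = f"{BASE}/_plugins/_ml/tasks/{task_id}"

print(register_url)
print(task_url)
print(json.dumps(register_body, indent=2))
```

The bug reports in this thread typically hinge on what that task-status response contains, so checking the task API output is the first diagnostic step.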
## Background: With the growing impact of OpenSearch ml-commons, we have observed that some people are proposing to contribute model artifacts to the community. Meanwhile, some features should come up with...
The integration test failed at distribution level for component ml-commons.
Version: 1.3.15
Distribution: zip
Architecture: x64
Platform: windows
Please check the logs: https://build.ci.opensearch.org/job/integ-test/7957/display/redirect
* Test-report manifest: https://ci.opensearch.org/ci/dbc/integ-test/1.3.15/9514/windows/x64/zip/test-results/7957/integ-test/test-report.yml

_Note: Steps to reproduce, additional logs and...
### Description
[Describe what this change achieves]
### Issues Resolved
[List any issues this PR will resolve]
### Check List
- [ ] New functionality includes testing.
- [ ]...
For clusters in a corporate setting, internet access is often restricted by an egress firewall. However, the ML Commons plugin needs internet access to download dependencies, even when using a...
I'm running OpenSearch 2.11 in a Docker container with no internet access, and after deploying a model / registering a model group I can see:
[2024-02-28T12:00:54,931][WARN ][a.d.h.z.HfModelZoo ] [...] Failed to...
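A workaround commonly discussed for air-gapped clusters like this one is to host the model artifact on an internal server inside the firewall and register it by URL, instead of relying on the pretrained-model path that reaches out to Hugging Face. A hedged sketch of such a register request body — the internal host, hash placeholder, and model details are illustrative only; the URL-based register API exists in ML Commons, but check the required fields for your version:

```python
import json

# Register body pointing at an internally hosted artifact (all values are
# placeholders; the hash must match the hosted zip for registration to succeed).
body = {
    "name": "huggingface/sentence-transformers/all-MiniLM-L6-v2",
    "version": "1.0.1",
    "model_format": "TORCH_SCRIPT",
    "model_content_hash_value": "<sha256-of-artifact>",  # placeholder
    "url": "https://artifacts.internal.example/all-MiniLM-L6-v2.zip",
    "model_config": {
        "model_type": "bert",
        "embedding_dimension": 384,
        "framework_type": "sentence_transformers",
    },
}
print(json.dumps(body, indent=2))
```

This keeps all downloads inside the egress firewall, at the cost of maintaining the artifact mirror yourself.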
**What is the bug?** The BWC rolling-upgrade tests are failing in neural-search because of recent [changes](https://github.com/opensearch-project/ml-commons/commit/bab9439e17c98429f4bd9f1ac853c1db20ef3219#diff-c7e3f7d91a1e68b0bee8c7397ec92bf7238296ac7ad8e61195a94998e4929fef) in ML Commons. So the scenario is: when the BWC version is...
Most model services have throttling limits; for example, [Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/quotas.html). With such limits, it takes a long time to ingest a large amount of data. One way is to use batch inference...
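Besides batch inference, a client-side mitigation for these quotas is to pace requests so they stay under the service's rate limit. A minimal token-bucket sketch (the rate and burst values are placeholders, not anything prescribed by ml-commons or Bedrock):

```python
import time

class RateLimiter:
    """Token-bucket limiter: allow at most `rate` calls per second,
    with short bursts up to `burst` calls."""

    def __init__(self, rate: float, burst: int = 1):
        self.rate = rate            # tokens refilled per second
        self.capacity = burst       # maximum stored tokens
        self.tokens = float(burst)  # start with a full bucket
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until one token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            # Sleep just long enough for the missing fraction of a token.
            time.sleep((1.0 - self.tokens) / self.rate)

# Usage sketch: stay under a hypothetical 100 requests/second quota while
# ingesting a corpus document by document.
limiter = RateLimiter(rate=100.0, burst=10)
# for doc in corpus:
#     limiter.acquire()
#     call_model_service(doc)   # hypothetical per-document inference call
```

This trades throughput for predictability: ingestion never exceeds the quota, so the service stops returning throttling errors that would otherwise force retries.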