
Collective Mind (CM) is a small, modular, cross-platform and decentralized workflow automation framework with a human-friendly interface and reusable automation recipes to make it easier to build, run...

Results: 124 ck issues

Mark most CM scripts with a "prototype": True flag unless they have a code-quality standard, proper tests and documentation.
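As a minimal sketch of how such a flag could be consumed, assuming script metadata lives in the script's `_cm.json`; `load_script_meta` and `check_prototype` are hypothetical helpers, not existing CM APIs:

```python
import json
import warnings
from pathlib import Path

def load_script_meta(script_dir: str) -> dict:
    """Load a CM script's metadata file (_cm.json) as a plain dict."""
    return json.loads(Path(script_dir, "_cm.json").read_text())

def check_prototype(meta: dict) -> None:
    """Warn the user when a script is still marked as a prototype."""
    if meta.get("prototype", False):
        warnings.warn(
            f"CM script '{meta.get('alias', '?')}' is marked as a prototype: "
            "it may lack tests and documentation."
        )
```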

Some feedback from our users and MLCommons: convert all print statements into logger calls with different levels of verbosity.
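To illustrate the request, a minimal sketch using Python's standard logging module; the logger name "cm", the `setup_logging` helper and the verbosity mapping are assumptions, not CM's current API:

```python
import logging

logger = logging.getLogger("cm")

def setup_logging(verbosity: int = 0) -> None:
    """Map a numeric verbosity to a logging level (0=WARNING, 1=INFO, 2+=DEBUG)."""
    level = {0: logging.WARNING, 1: logging.INFO}.get(verbosity, logging.DEBUG)
    logging.basicConfig(level=level, format="%(levelname)s: %(message)s")

# Before: print('Downloading the model ...')
# After:
setup_logging(verbosity=1)
logger.info("Downloading the model ...")
logger.debug("Resolved command: %s", 'cm run script "get dlrm src"')
```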

Ran with `cm run script "app mlperf reference inference _dlrm _cpu" --env.CM_RERUN`. From the attached log file, I saw it pulling dlrm via `cm run script "get dlrm src"`, but when...

When trying to run bert-99 or rnnt inference, it always fails with: /root/cm/bin/python3 -m pip install "/opt/nvmitten-0.1.3-cp38-cp38-linux_x86_64.whl" WARNING: Requirement '/opt/nvmitten-0.1.3-cp38-cp38-linux_x86_64.whl' looks like a filename, but the file does not exist ERROR:...

We got feedback from CM users that when some tool/library fails while wrapped by a CM script, it can be useful to provide a link to a repository of... (see the sketch after this entry)

enhancement
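One way such a hint could be surfaced, as a hedged sketch: `TOOL_REPOS` and `run_wrapped` are hypothetical names, and the onnxruntime URL is only an example of the kind of link the error message could carry.

```python
import subprocess

# Hypothetical mapping from a wrapped tool to its upstream issue tracker.
TOOL_REPOS = {
    "onnxruntime": "https://github.com/microsoft/onnxruntime/issues",
}

def run_wrapped(tool: str, cmd: list[str]) -> None:
    """Run an external command and, on failure, point the user at the tool's repository."""
    try:
        subprocess.run(cmd, check=True)
    except subprocess.CalledProcessError as exc:
        repo = TOOL_REPOS.get(tool)
        hint = f" See {repo} for known issues." if repo else ""
        raise RuntimeError(
            f"'{tool}' failed with exit code {exc.returncode}.{hint}"
        ) from exc
```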

When I run the command `cm run script --tags=generate-run-cmds,inference,_find-performance,_all-scenarios --model=bert-99 --implementation=reference --device=cuda --backend=onnxruntime --category=edge --division=open --quiet`, there are some warnings and I don't know whether they matter: 2024-03-23 16:39:29.057780772 [W:onnxruntime:, graph.cc:3593...

When I run `cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=onnxruntime --device=cuda --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open`, I get: default-reference-gpu-onnxruntime-v1.17.1-default_config +---------+----------+----------+--------+-----------------+---------------------------------+ | Model | Scenario | Accuracy |...

I want to reproduce nvidia-bert (https://github.com/mlcommons/ck/blob/master/docs/mlperf/inference/bert/README_nvidia.md#build-nvidia-docker-container-from-31-inference-round). When I run `cm docker script --tags=build,nvidia,inference,server`, I encounter some problems: => ERROR [10/12] RUN cm pull repo mlcommons@ck 104.6s ------ > [10/12] RUN...

When I run the command cm run script "app mlperf reference inference _bert-99 _offline _onnxruntime _cuda _fp32" Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.10/importlib/metadata/__init__.py", line...