
[ReadtheDocs] Possible Improvements on Nano Section


Possible Improvements for ReadtheDocs Nano Section

1. Major Issues

1.1 BigDL-Nano TensorFlow Inference Overview

For this page, most of the content seems to have been copied directly from the "BigDL-Nano PyTorch Inference Overview" page. However, Nano PyTorch currently supports runtime acceleration, which is not yet available for Nano TensorFlow. Copying sentences that involve the concept of runtime acceleration from PyTorch Inference to TensorFlow Inference could confuse readers. Such sentences include (but are not limited to):

Currently, performance accelerations are achieved by integrating extra runtimes as inference backend engines or using quantization methods on full-precision trained models to reduce computation during inference.

To use INC as your quantization engine, you can choose accelerator as None or 'onnxruntime'. Otherwise, accelerator='openvino' means using OpenVINO POT to do quantization.

  • [ ] It would be better if we could revise this section to introduce the runtime accelerator in a reasonable way, e.g. as an upcoming feature, as a convention shared with PyTorch of exposing the accelerator parameter on the quantize function (see the sketch below), etc.
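
For reference, the PyTorch-side convention in question looks roughly like this (a minimal sketch based on the sentences quoted above; the exact quantize signature is an assumption, and model / calib_dataloader are placeholders):

```python
from bigdl.nano.pytorch import Trainer

# Per the quoted docs: accelerator=None or 'onnxruntime' selects INC as the
# quantization engine, while accelerator='openvino' selects OpenVINO POT.
# `model` and `calib_dataloader` are placeholders assumed to be defined.
q_model = Trainer.quantize(model,
                           accelerator='openvino',
                           calib_dataloader=calib_dataloader)
```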

2. Minor Issues

2.1 Nano User Guide

  • [ ] Install: it seems that the environment creation command should be conda create -n env python=3.7 instead of conda create -n env.

2.2 Windows User Guide

  • [ ] Install WSL2: it would be better to include a link to the "WSL manual installation steps" for users on older builds.

  • [ ] Create a BigDL-Nano env: same problem as above; it should be conda create -n bigdl-nano python=3.7 instead of conda create -n bigdl-nano.

2.3 BigDL-Nano PyTorch Training Overview

  • [ ] Best Known Configurations: it is not clear whether the environment variables are set by source bigdl-nano-init directly, or whether the command only generates them and users need to set them themselves. Could we make this clearer? (A quick way to check the difference is sketched below.)
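
For what it's worth, a quick way to check which behavior the docs mean (a sketch; the specific variable names are assumptions based on common Intel tuning knobs rather than confirmed output of bigdl-nano-init):

```python
import os

# If `source bigdl-nano-init` exports the variables directly, any process
# launched from the same shell should already see them set:
for var in ("OMP_NUM_THREADS", "KMP_AFFINITY", "KMP_BLOCKTIME", "LD_PRELOAD"):
    print(f"{var}={os.environ.get(var)}")
```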

  • [ ] BigDL-Nano PyTorch Trainer: the code from bigdl.nano.pytorch import Trainer seems to be duplicated here.

  • [ ] Multi-instance Training: it is a little abrupt to mention "PyTorch’s DDP API" here. The logic would be clearer if we added a sentence like: Although it is common to use PyTorch’s Distributed Data Parallel (DDP) API, it is a little cumbersome and error-prone... (see the sketch after this item).
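
For context, a minimal sketch of what the docs describe for multi-instance training (num_processes is the parameter named on that page; the rest of the call is assumed, with model and train_loader as placeholders):

```python
from bigdl.nano.pytorch import Trainer

# Instead of wiring up torch.distributed / DDP by hand, the Nano Trainer is
# documented to launch several training instances itself.
trainer = Trainer(max_epochs=10, num_processes=4)
trainer.fit(model, train_loader)
```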

2.4 BigDL-Nano PyTorch Inference Overview

  • [ ] BigDL-Nano PyTorch Inference Overview: it would be better to give a brief introduction to ONNXRuntime and OpenVINO here, or at least include links to them.

  • [ ] Quantization: "Intel Neural Compressor (INC) and Post-training Optimization Tools (POT) from OpenVINO toolkit are enabled as options." <= in this sentence, it seems like INC is also from OpenVINO toolkit. It would be better to rearrange this sentence. And it could be clearer to add link for INC and POT. image

  • [ ] Quantization: "In the meantime, runtime acceleration is also included directly in the quantization pipeline when using accelerator='onnxruntime'/'openvino' so you don’t have to run Trainer.trace before quantization." => the logic here is a little bit strange. Would it be better to add something like "If you would like to use runtime acceleration together with quantization, runtime acceleration is included directly in the quantization pipeline...” image

  • [ ] Quantization using Intel Neural Compressor: it would be more user-friendly to include the installation command for ONNXRuntime acceleration (pip install onnx onnxruntime) directly here, instead of redirecting readers elsewhere.

  • [ ] Quantization with Accuracy Control: the parameters precision, approach, and method, which are used in the example code on that page, are not explained here. Is there a specific reason? (A sketch of how these parameters fit together follows below.)
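
For reference, a sketch of a quantize call using these parameters (the method value and the accuracy_criterion format are assumptions borrowed from INC conventions, not confirmed from the page; model and calib_dataloader are placeholders):

```python
from bigdl.nano.pytorch import Trainer

q_model = Trainer.quantize(
    model,
    precision='int8',      # target precision of the quantized model
    approach='static',     # post-training static (vs. dynamic) quantization
    method='fx',           # backend-specific quantization method (assumed value)
    calib_dataloader=calib_dataloader,
    # tolerate up to a 1% relative accuracy drop (format assumed from INC)
    accuracy_criterion={'relative': 0.01, 'higher_is_better': True},
)
```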

2.5 BigDL-Nano TensorFlow Training Overview

3. General Issues

  • [ ] Not all the subsection titles are numbered. For example, Nano User Guide is numbered, while BigDL-Nano PyTorch Training Overview is not. It would be better to keep this consistent.
  • [ ] There is no hierarchy navigation (or navigation back) for Nano tutorials. In any of the Nano tutorials, readers can only navigate back to the home/landing page, instead of the tutorial index page.
  • [ ] For the tutorials, it would be better to state the assumed prerequisites for being able to follow them, so that readers have a better idea of what to expect from these tutorials, as the PyTorch Tutorials do.

Oscilloscope98, Aug 08 '22 04:08