zhpacer

4 issues by zhpacer

### Discussed in https://github.com/hpcaitech/ColossalAI/discussions/5393

Originally posted by **zhpacer** on February 20, 2024.

CUDA: Build cuda_11.8.r11.8/compiler.31833905_0

Following https://github.com/hpcaitech/ColossalAI/tree/main/applications/Colossal-LLaMA-2, I get this error:

> utils/flash_attention_patch.py", line 22, in
> from colossalai.accelerator import get_accelerator
> ModuleNotFoundError: No module named 'colossalai.accelerator'
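
This import fails when the installed ColossalAI release predates the `colossalai.accelerator` module. A minimal diagnostic sketch, assuming a pip-installed package (the remediation in the comment is a suggestion, not an official fix):

```python
# Minimal diagnostic sketch: check whether the installed ColossalAI release
# ships the accelerator module that the Colossal-LLaMA-2 scripts expect.
# Assumption: colossalai was installed via pip, and the module was added in
# a release newer than the one producing this error.
import importlib.metadata

print("colossalai version:", importlib.metadata.version("colossalai"))

try:
    from colossalai.accelerator import get_accelerator
    print("colossalai.accelerator is available:", get_accelerator())
except ModuleNotFoundError:
    print("Installed release predates colossalai.accelerator; "
          "upgrading (pip install -U colossalai) or installing from "
          "source should resolve the import error.")
```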

### 🐛 Describe the bug

Following https://github.com/hpcaitech/ColossalAI/tree/main/applications/Colossal-LLaMA-2, but I get an error:

> Flash-attention enabled successfully
> Model params: 6.28 B
> Booster init max device memory: 38593.54 MB
> Booster init...
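
For context on the log above, a minimal sketch of how a peak-device-memory figure like "Booster init max device memory" can be reproduced on a single GPU (an assumption that the figure corresponds to peak allocated CUDA memory; this is not ColossalAI's own implementation):

```python
# Minimal sketch (assumption, not ColossalAI's own code): report peak CUDA
# memory in MB, comparable to the "max device memory" line in the log above.
import torch

def max_device_memory_mb() -> float:
    """Peak CUDA memory allocated on the current device, in megabytes."""
    return torch.cuda.max_memory_allocated() / 1024**2

torch.cuda.reset_peak_memory_stats()
x = torch.randn(4096, 4096, device="cuda")  # stand-in for model/booster init
print(f"Booster init max device memory: {max_device_memory_mb():.2f} MB")
```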


The error is: ![image](https://cloud.githubusercontent.com/assets/8143488/7909054/22940a88-087c-11e5-8f80-fd821e996e2e.png)

Is there a training script to pre-train a LLaMA-7B model on GPUs such as the A100? The current examples are based on TPUs, and I don't know whether there are any differences. Thanks.
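
As a rough illustration of the main launch-side difference the question asks about, here is a minimal sketch (hypothetical, not one of the official example scripts): on A100s the process group is initialized with the NCCL backend and launched via `torchrun`, whereas TPU examples go through an XLA backend instead.

```python
# Minimal sketch (hypothetical, not an official example script): on A100
# GPUs the distributed setup uses the NCCL backend; launch with
#   torchrun --nproc_per_node=<num_gpus> this_script.py
import os

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")   # standard backend for NVIDIA GPUs
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun per process
torch.cuda.set_device(local_rank)

print(f"rank {dist.get_rank()}/{dist.get_world_size()} on cuda:{local_rank}")
# ... build the model, wrap it in DistributedDataParallel, run training ...
dist.destroy_process_group()
```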