Rahul Huilgol

16 issues

Is it possible to self-host this without having to use AWS?
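Assuming "this" refers to smdebug (the library the rest of these issues concern), its hooks can run outside SageMaker by constructing them manually and writing to a local `out_dir`. A minimal PyTorch sketch, with placeholder paths and intervals:

```python
# Minimal sketch, assuming smdebug's public PyTorch API; no AWS services involved.
import smdebug.pytorch as smd
from smdebug.trials import create_trial

hook = smd.Hook(
    out_dir="/tmp/smdebug_run",  # local directory, placeholder path
    save_config=smd.SaveConfig(save_interval=100),
)
# hook.register_module(model)  # attach to your torch.nn.Module before training

trial = create_trial("/tmp/smdebug_run")  # read saved tensors back from local disk
print(trial.tensor_names())
```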

**Describe the bug**

```
---------------------------------------------------------------------------
ClientError                               Traceback (most recent call last)
in
     50     disable_profiler=True,
     51     base_job_name=base_job_name,
---> 52     **kwargs
     53 )

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/sagemaker/pytorch/estimator.py in __init__(self, entry_point, framework_version, py_version, source_dir, hyperparameters, image_uri, ...
```

bug
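For context, a sketch of the kind of estimator construction the traceback above points at (SageMaker Python SDK v2; role, versions, instance type, and job name are placeholders, not the reporter's values):

```python
# Hypothetical repro sketch; all values are placeholders, not from the original report.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder IAM role
    framework_version="1.6.0",
    py_version="py3",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    disable_profiler=True,        # as in the traceback above
    base_job_name="pytorch-job",  # as in the traceback above
)
```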

```
from smdebug.tensorflow import SaveConfig
from tests.tensorflow.keras.test_keras import train_model
# create_trial_fast_refresh is a helper defined in the repo's test suite

train_model(
    out_dir,
    save_all=True,
    use_tf_keras=True,
    save_config=SaveConfig(save_steps=[0, 10]),
    eager=False,
    steps=["train", "eval", "predict", "train"],
)
print(create_trial_fast_refresh(out_dir).tensor_names(step=10))
```

Noticed this for TF2; see `helper_keras_gradtape` and `helper_keras_fit` in tensorflow2/test_keras.py.

```
def test_save_shapes(out_dir, hook=None):
    hook_created = False
    if hook is None:
        hook_created = True
        global_reduce_config = ReductionConfig(save_raw_tensor=True)
        global_save_config = SaveConfig(save_steps=[0, 1])
        hook = t_hook(
            out_dir=out_dir,
            save_config=global_save_config,
            include_collections=[
                "weights",
                "biases",
                "gradients",
                ...
```

Instead of histograms, save tensors as scalar summaries for the chosen reduction

So that people don't get confused by the different ways of doing the same thing

This is a use case especially relevant to the Keras `model_to_estimator` flow.
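A minimal sketch of the requested behavior using plain TensorFlow summary calls (tag names and paths are illustrative, not the repo's implementation):

```python
# Illustrative only: write the chosen reduction (here, the mean) as a scalar
# summary instead of a histogram, so TensorBoard renders one curve per tensor.
import tensorflow as tf

writer = tf.summary.create_file_writer("/tmp/tb_logs")  # placeholder path
with writer.as_default():
    for step in range(3):
        tensor = tf.random.normal([100])  # stands in for a saved weight/gradient
        tf.summary.scalar("weights/dense/mean", tf.reduce_mean(tensor), step=step)
```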

**Describe the bug**
When a model is created with the `low_cpu_mem_usage` flag in HF, DeepSpeed predictions are wrong, while HF predictions are correct.

**To Reproduce**

```
import torch
from transformers import pipeline
...
```

bug
inference
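A hedged sketch of the comparison being described (model name, prompt, and DeepSpeed kwargs are placeholders; `deepspeed.init_inference` assumes a CUDA device):

```python
# Placeholder repro sketch, not the reporter's exact script.
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2", low_cpu_mem_usage=True)
inputs = tok("Try without sampling the data.", return_tensors="pt")

hf_out = model.generate(**inputs, max_new_tokens=20)  # correct, per the report

engine = deepspeed.init_inference(model, dtype=torch.float16)
ds_out = engine.module.generate(**inputs.to("cuda"), max_new_tokens=20)  # wrong, per the report
```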

**Describe the bug**

```
hf_output [{'generated_text': 'Try without sampling the data.\n\nA:\n\nYou can use the following code to get the data from the database.\n$sql = "SELECT * FROM `table`";\n$result = mysqli_query($conn,'}]
...
```

bug
inference