Izak Marais
> In tfx 1.13 we [introduced](https://github.com/tensorflow/tfx-bsl/commit/ce2e12b4a78ecced09e0e5e30e34d310656166d7) a new batching mode that tries to deserialize data in batches of ~100 MB. It can be enabled with the [tfxio_use_byte_size_batching](https://github.com/tensorflow/tfx-bsl/blob/b566c3970bc36d94fbf27e989df510d1f81e7ce0/tfx_bsl/coders/batch_util.py#L35) flag. Could you try...
I added the flag via the Transform component's Beam pipeline args:

```python
tfx.components.Transform().with_beam_pipeline_args(["--tfxio_use_byte_size_batching"])
```

In a test using the local TFX runner I could confirm that the flag value of True is...
The [above suggestion](https://github.com/tensorflow/tfx/issues/5777#issuecomment-19925096980) did not work. I see we also set `tf.config.experimental.set_memory_growth(device, True)`. Could that have interfered with this suggested fix (or with [the previous](https://github.com/tensorflow/tfx/issues/5777#issuecomment-1640564951) `use_byte_size_batching` fix)? Applied to the Transform component...
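For context, the memory-growth setting mentioned above is typically applied like this (a minimal sketch of the standard TensorFlow pattern; the device loop is an assumption, not our exact setup):

```python
import tensorflow as tf

# Sketch (assumed setup, not our exact code): enable memory growth so
# TensorFlow allocates GPU memory incrementally instead of reserving it all.
gpus = tf.config.list_physical_devices("GPU")
for device in gpus:
    # Must be called before any GPU is initialized, or TF raises RuntimeError.
    tf.config.experimental.set_memory_growth(device, True)
```

On a CPU-only machine the device list is empty and the loop is a no-op, so the snippet is safe to run anywhere.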
Using `pip-compile` 6.10.0 and specifying the `--resolver=backtracking` option resolves the above error.
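For reference, the invocation looks roughly like this (versions as above; the output filename is an assumption):

```shell
pip install "pip-tools==6.10.0"
pip-compile --resolver=backtracking requirements-dev.in --output-file requirements-dev.txt
```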
pip-compile has started taking extremely long for me (I killed it after more than an hour).

- Python version: 3.7.10
- pip version: 23.0
- pip-compile version: 6.10.0

requirements-dev.in:

```
tfx[kfp]==1.12.0
tensorflow_text
tensorflow_recommenders
scann
tqdm
...
```