models.batch_get swallows rate cap exception and returns empty generator.
I was looking into how best to wrap my own backoff/retries around this, as it seemed like I wasn't getting any retries. Digging into it, it appears that the exception is swallowed and you get an empty generator back.
This took a while to track down, because the exception I was actually seeing when the read cap was exceeded was:
result = {AttributeError} 'str' object has no attribute 'values'
args = {tuple} <class 'tuple'>: ("'str' object has no attribute 'values'",)
which was raised when I tried to call next() on the returned generator.
Here's my sample code. The model (truncated):

class Article(Model):
    class Meta(object):
        table_name = table_name()
        region = table_region()
And the use case where it fails:

read_values = orm.batch_get(hash_keys, *args, **kwargs)
try:
    # Force generator evaluation to populate a list
    read_values_list = list(read_values)
except Exception as ex:
    logger.error('caught error: {}'.format(ex))
    raise
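For reference, this is the shape of the retry wrapper I was trying to build (a minimal sketch; the function name, `retries`, and `base_delay` are my own placeholders, and it assumes the throttling error actually surfaces as an exception while the generator is consumed, which is exactly the behaviour that seems to be missing here):

```python
import time


def batch_get_with_retries(model_cls, keys, retries=3, base_delay=0.5):
    """Evaluate batch_get eagerly so any throttling error surfaces
    here, then back off and retry. Assumes the error propagates
    during iteration of the returned generator."""
    for attempt in range(retries):
        try:
            # list() forces the generator, so an exception raised
            # while paging should propagate from this call
            return list(model_cls.batch_get(keys))
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

With the current behaviour this wrapper never retries, because `list()` simply returns an empty list instead of raising.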
Here's the expanded read_values generator object returned when there's no data because of a read cap violation:
result = {generator} <generator object batch_get at 0x7fe01c6be830>
gi_code = {code} <code object batch_get at 0x7fe006b50780, file "/usr/local/lib/python3.5/dist-packages/pynamodb/models.py", line 242>
co_argcount = {int} 4
co_cellvars = {tuple} <class 'tuple'>: ()
co_code = {bytes} b't\x00\x00|\x01\x00\x83\x01\x00}\x01\x00|\x00\x00j\x01\x00\x83\x00\x00j\x02\x00}\x04\x00|\x00\x00j\x01\x00\x83\x00\x00j\x03\x00}\x05\x00g\x00\x00}\x06\x00x\xf2\x00|\x01\x00r$\x01t\x04\x00|\x06\x00\x83\x01\x00t\x05\x00k\x02\x00r\xad\x00x_\x00|\x06\x00r\xac
co_consts = {tuple} <class 'tuple'>: ('\n BatchGetItem for this model\n\n :param items: Should be a list of hash keys to retrieve, or a list of\n tuples if range keys are used.\n ', 'consistent_read', 'attributes_to_get', 0, 1, None)
co_filename = {str} '/usr/local/lib/python3.5/dist-packages/pynamodb/models.py'
co_firstlineno = {int} 242
co_flags = {int} 99
co_freevars = {tuple} <class 'tuple'>: ()
co_kwonlyargcount = {int} 0
co_lnotab = {bytes} b'\x00\x08\x0c\x01\x0f\x01\x0f\x01\x06\x01\t\x01\x12\x01\t\x01\x06\x01\x06\x01\x06\x01\x0f\x02\r\x01\x12\x01\x06\x01\t\x02\n\x01\x0c\x01\x06\x01 \x01\x06\x01\x06\x01\x10\x03\x13\x01\x06\x01\x11\x03\t\x01\x06\x01\x06\x01\x06\x01\x0f\x02\r\x01\x12\x01\x06\x0
co_name = {str} 'batch_get'
co_names = {tuple} <class 'tuple'>: ('list', '_get_meta_data', 'hash_keyname', 'range_keyname', 'len', 'BATCH_GET_PAGE_LIMIT', '_batch_get_page', 'from_raw_data', 'pop', '_serialize_keys', 'append')
co_nlocals = {int} 13
co_stacksize = {int} 6
co_varnames = {tuple} <class 'tuple'>: ('cls', 'items', 'consistent_read', 'attributes_to_get', 'hash_keyname', 'range_keyname', 'keys_to_get', 'page', 'unprocessed_keys', 'batch_item', 'item', 'hash_key', 'range_key')
gi_frame = {NoneType} None
gi_running = {bool} False
gi_yieldfrom = {NoneType} None
Basically, the fact that gi_frame is None means the generator has already finished; there's nothing left in it.
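For anyone else debugging this: gi_frame dropping to None is indeed how CPython marks a finished generator. A quick standalone check (nothing PynamoDB-specific):

```python
def numbers():
    yield 1


g = numbers()
# Not started yet: the generator still has a frame attached
print(g.gi_frame is None)   # False
list(g)                     # exhaust the generator
# Finished generators release their frame
print(g.gi_frame is None)   # True
```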
The evidence that this is read-cap related: it doesn't happen when I over-provision the read capacity on the table, and it starts happening again when I scale the capacity back down to a small value.
There's no visible exception raised from models.batch_get() itself. Instead, the exception only appears when trying to get the first element from the (empty) generator.
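This is a consequence of generator laziness: calling a generator function never executes its body, so any exception inside it can only surface when the generator is iterated. A minimal illustration:

```python
def broken():
    raise RuntimeError("raised lazily")
    yield  # the yield makes this a generator function


gen = broken()        # no exception here -- the body hasn't run yet
try:
    next(gen)         # the body runs now, so the error finally appears
except RuntimeError as ex:
    print(ex)
```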
I expect I'm doing something wrong, as I can't see a clean way to detect this error short of looking at the generator's internal gi_frame attribute, which seems plain wrong.
Related to https://github.com/pynamodb/PynamoDB/issues/1093