"Expected delimiter: \" error in tom.generate_measure_descriptions()
**Describe the bug**
I am trying to run `generate_measure_descriptions` for a list of semantic models published in my workspace. For some it works as expected, but for many of them I get the error message below:
" FabricHTTPException Traceback (most recent call last) Cell In[199], line 16 14 print(row.Dataset_Name) 15 with connect_semantic_model(dataset=row.Dataset_Id, workspace=row.Workspace_Id, readonly=readonly) as tom: ---> 16 measureDescriptions = tom.generate_measure_descriptions()
File ~/cluster-env/trident_env/lib/python3.11/site-packages/sempy_labs/tom/_model.py:4749, in TOMWrapper.generate_measure_descriptions(self, measure_name, max_batch_size) 4743 response = requests.post( 4744 f"{prefix}/explore/v202304/nl2nl/completions", 4745 headers=headers, 4746 json=payload, 4747 ) 4748 if response.status_code != 200: -> 4749 raise FabricHTTPException(response) 4751 for item in response.json().get("modelItems", []): 4752 ms_name = item["urn"]
FabricHTTPException: 500 Internal Server Error for url: https://wabi-west-us-e-primary-redirect.analysis.windows.net/explore/v202304/nl2nl/completions Error: {"code":"JsonReaderException","message":"Unterminated string. Expected delimiter: ". Path '[4].description', line 25, position 150."} Headers: {'Cache-Control': 'no-store, must-revalidate, no-cache', 'Pragma': 'no-cache', 'Transfer-Encoding': 'chunked', 'Content-Type': 'application/json; charset=utf-8', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains, max-age=31536000; includeSubDomains', 'X-Frame-Options': 'deny, deny', 'X-Content-Type-Options': 'nosniff, nosniff', 'RequestId': 'e42325c4-9fd0-40db-a1ec-eb7bb3ef636c', 'Access-Control-Expose-Headers': 'RequestId', 'Date': 'Tue, 04 Nov 2025 15:21:03 GMT'} "
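For context on what that server-side `JsonReaderException` means: "Unterminated string. Expected delimiter" is the error a JSON parser raises when a string value ends before its closing quote. A minimal sketch below reproduces the same failure mode with Python's standard `json` module; the payload fragment is hypothetical and only illustrates why an improperly escaped or truncated `description` could break parsing at a path like `[4].description`.

```python
import json

# Hypothetical payload fragment where a description string is cut off
# before its closing quote, mimicking the server's JsonReaderException.
broken = '[{"urn": "m1", "description": "Sums sales'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print(f"parse failed: {e.msg}")

# Serializing with json.dumps escapes quotes and preserves trailing spaces,
# so a measure name or description containing such characters stays valid:
safe = json.dumps({"description": 'Uses "quotes" and a trailing space '})
assert json.loads(safe)["description"] == 'Uses "quotes" and a trailing space '
```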
Would you be able to give me an example of the DAX of a measure which fails?
Hey @m-kovalsky, the first time I executed the function I noticed a measure with a space at the end of its name, like [US - Total Single Serve Switch ]. I removed the trailing space and ran the statement below:
```python
with connect_semantic_model(dataset=row.Dataset_Id, workspace=row.Workspace_Id, readonly=readonly) as tom:
    measures = list(tom.all_measures())
    for measure in measures:
        print(measure.Name)
        measureDescriptions = tom.generate_measure_descriptions(measure_name=measure.Name)
```
With this statement I could go measure by measure and check the names. After the rename it processed all measures in the model successfully, so I thought "Okay, my issue has been fixed by that simple change to the measure's name." But after running the function for the entire model as below, the error still happens:
```python
with connect_semantic_model(dataset=row.Dataset_Id, workspace=row.Workspace_Id, readonly=readonly) as tom:
    measureDescriptions = tom.generate_measure_descriptions()
```
In summary: if I run the function measure by measure, it processes successfully; if I run it for the whole model, it fails.
Would you try setting the `max_batch_size` parameter to 1 and see what happens?
Setting `max_batch_size` to 1 worked. Can you share more details on how this parameter affects the logic of the function?
So far I had been considering only one of my semantic models, the one generating the reported error. After processing that one successfully, I ran the function for a few other semantic models in the same workspace and got a different error message for a different semantic model:
```
FabricHTTPException: 429 for url: https://wabi-west-us-e-primary-redirect.analysis.windows.net/explore/v202304/nl2nl/completions
Headers: {'Cache-Control': 'no-store, must-revalidate, no-cache', 'Pragma': 'no-cache', 'Content-Length': '0', 'Content-Type': 'text/plain; charset=utf-8', 'Retry-After': '59', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'X-Frame-Options': 'deny', 'X-Content-Type-Options': 'nosniff', 'RequestId': 'ebb2544f-4c96-4ef3-88e4-bbc252644d17', 'Date': 'Thu, 06 Nov 2025 18:55:40 GMT'}
```
Please let me know if I should raise a separate issue for this new error message or if we can keep tracking it here.
The batch size parameter controls how many measures get sent to the API in a single call. I've set the default to 5. This is a quasi-internal API and isn't fully supported. Please share the DAX and the name of the measure that generates the issue so I can look into potential causes.
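To make the batching behavior concrete, here is a minimal sketch of how a `max_batch_size`-style parameter typically splits work into API calls. The `chunk` helper and the measure names are illustrative, not the library's internals.

```python
def chunk(items, max_batch_size):
    """Yield successive batches of at most max_batch_size items."""
    for i in range(0, len(items), max_batch_size):
        yield items[i : i + max_batch_size]

# Hypothetical measure list; one API call is made per batch.
measures = ["Total Sales", "Total Cost", "Margin %", "Units", "Avg Price", "YoY Growth"]

# With the default of 5, six measures become two calls: 5 + 1.
print([len(b) for b in chunk(measures, 5)])  # → [5, 1]

# With max_batch_size=1, each measure is sent alone, which isolates any
# single measure whose name or description breaks the combined payload.
assert [len(b) for b in chunk(measures, 1)] == [1] * 6
```

This also explains why the per-measure loop succeeded while the whole-model call failed: one problematic measure in a batch can invalidate the entire batch's payload.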
So here are the tests I ran after your input:
Test #1 - Running the function on the complete model (`max_batch_size=1`)

```python
with connect_semantic_model(dataset=row.Dataset_Id, workspace=row.Workspace_Id, readonly=readonly) as tom:
    measureDescriptions = tom.generate_measure_descriptions(max_batch_size=1)
```

Result:

```
FabricHTTPException: 429 for url: https://wabi-west-us-e-primary-redirect.analysis.windows.net/explore/v202304/nl2nl/completions
Headers: {'Cache-Control': 'no-store, must-revalidate, no-cache', 'Pragma': 'no-cache', 'Content-Length': '0', 'Content-Type': 'text/plain; charset=utf-8', 'Retry-After': '60', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'X-Frame-Options': 'deny', 'X-Content-Type-Options': 'nosniff', 'RequestId': '55e0d921-e961-499c-8091-dd5cc1ff5433', 'Date': 'Thu, 06 Nov 2025 21:13:07 GMT'}
```
Test #2 - Running the function in a loop, measure by measure

```python
with connect_semantic_model(dataset=row.Dataset_Id, workspace=row.Workspace_Id, readonly=readonly) as tom:
    measures = list(tom.all_measures())
    for measure in measures:
        print(measure.Name + "|")
        measureDescriptions = tom.generate_measure_descriptions(measure_name=measure.Name)
```
Result: It executes successfully.
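Since the 429 responses above include a `Retry-After` header, one workaround until the library handles throttling itself is to wrap the call in a generic retry helper. This is a sketch, not part of sempy_labs; the exception check and the `Retry-After` extraction would need to be adapted to `FabricHTTPException`'s actual shape.

```python
import time

def call_with_retry(fn, max_retries=3, get_retry_after=None):
    """Retry fn when it raises an exception that looks like an HTTP 429.

    get_retry_after, if given, maps the exception to a wait time in seconds
    (e.g. parsed from the Retry-After header); otherwise exponential backoff
    (1s, 2s, 4s, ...) is used. Generic sketch, not a sempy_labs API.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as e:
            if "429" not in str(e) or attempt == max_retries:
                raise
            wait = get_retry_after(e) if get_retry_after else 2 ** attempt
            time.sleep(wait)

# Hypothetical usage inside the connect_semantic_model block:
# call_with_retry(lambda: tom.generate_measure_descriptions(max_batch_size=1))
```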