
a bytes-like object is required, not 'str'

Open sarim-sikander-turing opened this issue 2 years ago • 11 comments

There is an issue in analytics.py.

Traceback (most recent call last):
  File ".\api.py", line 60, in <module>
    async_data.append(LineItem.async_stats_job_data(account, url=result.url))
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\analytics.py", line 115, in async_stats_job_data
    response = Request(account.client, 'get', resource.path, domain=domain,
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\http.py", line 70, in perform
    raise Error.from_response(response)
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\error.py", line 45, in from_response
    return ERRORS[response.code](response)
  File "D:\Office\twitter ads\env\lib\site-packages\twitter_ads\error.py", line 13, in __init__
    if response.body and 'errors' in response.body:
TypeError: a bytes-like object is required, not 'str'

This error is caused when running the analytics.py file. Please fix this issue.
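For context, the line shown at the bottom of the traceback (`if response.body and 'errors' in response.body:`) raises this exact TypeError whenever `response.body` arrives as `bytes`, because Python 3 forbids `str in bytes` membership tests. Here is a minimal sketch of the problem, plus a hedged local patch (it assumes the body is UTF-8, which may not hold for every response):

```python
# Minimal reproduction of the TypeError seen in twitter_ads/error.py:
# a `str in bytes` membership test is illegal in Python 3.
body = b'{"errors": [{"code": "FORBIDDEN"}]}'  # response.body as raw bytes

try:
    'errors' in body  # effectively what error.py line 13 does
except TypeError as exc:
    print(exc)  # a bytes-like object is required, not 'str'

# Hedged local patch: decode bytes before the membership test.
# (UTF-8 is an assumption here, not something the library guarantees.)
text = body.decode('utf-8') if isinstance(body, bytes) else body
print('errors' in text)  # True
```

Note this only unmasks the real error; as discussed below, the underlying request is still failing for some other reason.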

sarim-sikander-turing avatar May 01 '22 16:05 sarim-sikander-turing

Has anyone been able to find a solution for this? It appears to still be an issue with the latest version.

pep4eto1211 avatar Jul 27 '22 07:07 pep4eto1211

Still getting the same error today.

rafabandoni avatar Aug 01 '22 13:08 rafabandoni

So, apparently the 30-second sleep is too short. I got a proper URL returned by increasing the sleep time to 300 seconds. Idk how this will impact cloud usage yet, so I'll run a few more tests and update this thread if I discover anything else.

rafabandoni avatar Aug 01 '22 14:08 rafabandoni

Is there a more reliable way to find out if a report is ready or not? The sleep method looks way too "hacky" for me.

pep4eto1211 avatar Aug 02 '22 08:08 pep4eto1211

Is there a more reliable way to find out if a report is ready or not? The sleep method looks way too "hacky" for me.

Idk :( These were the changes I made to get the proper data from the url after the sleep:

import gzip
import json
import requests

async_data = []
for result in async_stats_job_results:
    print(result)
    url = result.url
    # async_data.append(LineItem.async_stats_job_data(account, url=result.url))
    response = requests.get(url)
    data = gzip.decompress(response.content)
    async_data.append(json.loads(data.decode('utf-8')))

rafabandoni avatar Aug 02 '22 15:08 rafabandoni

Just posting an update here as it seems none of the developers are actually looking at these issues. The error we are seeing does not indicate the actual fault. Rather, it is the library failing while trying to report the real error: the code that is supposed to surface the error is itself crashing, and this TypeError is its error. That means any underlying failure could have triggered it.

As a workaround, I found that you can print the response's HTTP status code here: https://github.com/twitterdev/twitter-python-ads-sdk/blob/a3dd5819341e77aa469d0b4b3399f0bcd028c80c/twitter_ads/http.py#L69 by reading the response.code property. This might at least point you in the right direction. Unfortunately I was unable to find a way to also print the response body, as it is a bytes object and I don't know what the encoding is.

In addition: my error was 403, which is extremely weird, considering I was able to paste the result's URL into a browser and download the generated report just fine. Download even works in incognito mode, making me think that the file download requires no authentication whatsoever (also seen in @rafabandoni's comment). Overall the code quality of this library is extremely low, and the only reason I keep using it is that it handles the cumbersome auth.

@rafabandoni I actually found a way to check whether the report generation is complete. Using the job ID, you can retrieve the job via the async_stats_job_result function. This object has a status property:

{
    "id": "*****",
    "status": "SUCCESS",
    "url": "*****",
    "created_at": "2022-08-10T10:39:25Z",
    "expires_at": "2022-08-12T10:39:38Z",
    "updated_at": "2022-08-10T10:39:38Z",
    "start_time": "2022-06-21T04:00:00Z",
    "end_time": "2022-06-22T04:00:00Z",
    "entity": "CAMPAIGN",
    "entity_ids": [
        "*****"
    ],
    "placement": "ALL_ON_TWITTER",
    "granularity": "DAY",
    "metric_groups": [
        "ENGAGEMENT",
        "BILLING"
    ]
}

A simple while loop with a short wait and a periodic status check should be a better approach than sleeping for an arbitrary amount of time.
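To illustrate, here is a minimal polling sketch. The names are hypothetical: `fetch_result` stands in for whatever call retrieves the job (e.g. a small wrapper around `LineItem.async_stats_job_result` from the thread above), and the poll interval and timeout are arbitrary choices:

```python
import time

def wait_for_report(fetch_result, poll_seconds=15, timeout_seconds=600):
    """Poll until the async stats job reports SUCCESS, instead of a fixed sleep.

    `fetch_result` is any zero-argument callable returning an object with a
    `status` attribute (and a `url` attribute once the job has succeeded).
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        result = fetch_result()
        if result.status == 'SUCCESS':
            return result  # result.url now points at the gzipped report
        time.sleep(poll_seconds)
    raise TimeoutError('async stats job did not finish in time')
```

The timeout guards against jobs that never reach SUCCESS (e.g. a failed job), which a bare while-loop would spin on forever.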

These were the changes I made to get the proper data from the url after the sleep:

Thanks for this- I'll make use of it.

pep4eto1211 avatar Aug 10 '22 12:08 pep4eto1211

hey all - thank you for the thoughtful discussion. given bandwidth constraints we haven't been able to look into this issue as soon as we'd have liked, and since it seems to affect a lot of users i'd like to dedicate some time towards a fix.

@pep4eto1211 I'd love to hear your thoughts on what we can be doing better in terms of code quality and feature sets that are missing.

generally speaking when it comes to fetching analytics we've got a general algorithm available in our documentation page which outlines how to use the status field to determine when the files are ready.

tushdante avatar Aug 11 '22 18:08 tushdante

Hey all, I got the same issue as well. Any updates on this?

ttarom avatar Dec 29 '22 14:12 ttarom

Hey all, I got the same issue as well. Any updates on this?

Nothing on my side. We finished the project using the workaround mentioned above, and I haven't worked with the Twitter API since.

rafabandoni avatar Dec 29 '22 14:12 rafabandoni

Still getting this error.

oleks-ufo avatar Aug 03 '23 12:08 oleks-ufo

How is this still not fixed? Soon it'll be 2 years since the first issue.

brunopini avatar Feb 06 '24 15:02 brunopini