Error: HTTP failure: 403
Hi there,
So I've been using and loving tuber, until just now, when all of a sudden my search attempts are coming back with an "Error: HTTP failure: 403" message. I haven't changed a thing: same API key, same secret. But now yt_search with any term in it simply fails.
I don't see any troubleshooting mechanisms in the package, and the documentation you link to on the YouTube Developer site doesn't yield any additional insights either. I logged into my YouTube account to see if it had been closed, and it hadn't. I checked the Google APIs dashboard to see if my access had been restricted or something, and it doesn't appear to have been.
Any ideas what could be going on? And more importantly, how to fix it?
Thanks!
Additional detail:
The plot thickens. When I use the YouTube API Explorer and issue the following request:
GET https://www.googleapis.com/youtube/v3/search?part=snippet&q=React&key={YOUR_API_KEY}
I get back a valid list of results. This suggests that the 403 insufficient permission error is bogus (or at least not entirely accurate). So the next step was to try the same request from R. It still works (using httr directly). So...?
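For reference, the direct httr call looked roughly like this; treat it as a sketch rather than my exact code, with the key value as a placeholder:

library(httr)

# key-only request mirroring the API Explorer call above
resp <- GET("https://www.googleapis.com/youtube/v3/search",
            query = list(part = "snippet", q = "React", key = "MY_API_KEY"))
status_code(resp)                     # returns 200 here, not 403
search_results <- content(resp, as = "parsed")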
The one thing that occurs to me is that this request only asks for my "key", rather than the key and my secret. So I wonder if it's a credentials issue. Is tuber using the correct API permissions? (I have several established in my API account.)
From the YouTube API updates page it doesn't look like they've officially made any changes which should break tuber functionality, so... I'm still at a loss.
I hope you have ideas of things I can try to figure out what's going on and hopefully resolve the issue.
Thanks!
Dear @abeburnett: Let me explore this and get back to you asap. If the trouble is in tuber, as you suspect it is, apologies for that.
Actually, I'm completely mystified as to the cause. I don't see anything obviously wrong in what I'm doing, the tuber code, or how the credentials are set up... And like I said, the last time I ran the code, a couple of weeks ago, it worked great. So, totally mystified at the moment!
Don't waste your time on it.
I have some ways set up to detect which portion is broken, but I need to take care of a few things before I investigate. Hopefully nothing urgent is being seriously delayed at your end.
Copy that, good to know! I'm using it in a system I'm developing, but if you could look into it sometime this week that would be soon enough for me. Thank you for being so responsive!
Well, this is just strange. So I tried going directly to the API, and it worked. Here's what worked:
require(curl)
require(jsonlite)

API_key <- 'MY_KEY'
Base_URL <- 'https://www.googleapis.com/youtube/v3'
YT_Service <- c('search?part=snippet&q=%s&type=%s&key=%s',                        # search API
                'videos?part=snippet,statistics&id=%s&key=%s',                    # video stats
                'subscriptions?part=snippet,contentDetails&channelId=%s&key=%s')  # subscriptions API

# form URL
url <- paste0(Base_URL, "/", sprintf(YT_Service[1], 'React', 'video', API_key))

# returns five search results
result <- fromJSON(txt = url)

# grab a random video id from the results for testing
url <- paste0(Base_URL, '/', sprintf(YT_Service[2], '0bYETPZXi7w', API_key))

# get stats for video id
result <- fromJSON(txt = url)
So... this seems to indicate that I don't have restricted API access or anything like that. But this approach also uses a different kind of credential than tuber requires (an API key alone, rather than a key plus secret).
Until some weeks ago the package worked well, but now even the simplest function, like yt_search(term = "soccer"), returns Error: HTTP failure: 401. Is this an internal problem of the package, or did YouTube change the API permissions? Thank you in advance.
That's what I ran into. Ultimately I opted to just go direct (drop the package) and use jsonlite and curl... particularly because the author of this package hasn't been super responsive, and it's more trouble than it's worth to get familiar enough with the code base to fix it.
@abeburnett great! Could you please share this alternative code with me? I'm a newbie with R.
It's actually already posted above!
Oh sorry, I didn't catch that! However, when I try to run your code, the console returns this error:
Error in (function (classes, fdef, mtable) : unable to find an inherited method for function ‘fromJSON’ for signature ‘"missing", "missing"’
I'm sure that all the YouTube API keys are enabled.
Sounds like you're missing jsonlite, perhaps. Make sure you have it installed, and that you're loading it (library(jsonlite)). Also ensure that you've put your API key in the code wherever it's required, as designated by "MY_KEY" and "API_key".
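For completeness, the setup I mean looks roughly like this, with your own key substituted in:

install.packages("jsonlite")   # only needed once
install.packages("curl")       # only needed once
library(curl)
library(jsonlite)
API_key <- "MY_KEY"            # replace with your actual YouTube API key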
If you have loaded the package correctly, you shouldn't be seeing errors relating to jsonlite etc. (tuber depends on these packages).
But if you see a 403 error, here are some details:
- A 403 error is an access forbidden error: https://developers.google.com/youtube/v3/docs/errors
- You are trying to access something that you aren't allowed to, I would think. Part of the answer is in changing the 'scope' of yt_oauth(). Function here: https://github.com/soodoku/tuber/blob/master/R/yt_oauth.R
Try writing ?yt_oauth in R after library(tuber)
Ping if issues.
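For instance, something along these lines; check ?yt_oauth for the exact scope values your installed version accepts, since the "ssl" value below is only an illustration:

library(tuber)
# re-authenticate with an explicit scope (see ?yt_oauth for valid values)
yt_oauth(app_id = "MY_APP_ID",
         app_secret = "MY_APP_SECRET",
         scope = "ssl")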
For the record, I found I got 403 errors if something was wrong with my query, i.e., if it had funky characters, etc. It doesn't make sense that the response is 403, but it's just one of those things.
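If you want to rule that out, one easy thing to try is encoding the term before handing it to tuber; a rough sketch, assuming unescaped characters are the culprit (URLencode() is base R):

library(tuber)
term <- "funky & weird: characters?"
yt_search(term = URLencode(term, reserved = TRUE))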
Thanks @abeburnett! Can you share such a query? I will investigate whether I can send queries with funky characters in them, as YT should support that.
I'm getting the same error. If I assign the result of yt_oauth() to an object and view that object, I get the API info of my rtweet app (http://rtweet.info/):
p <- yt_oauth(app_id = "[MY-TUBER-APP-ID]", app_secret = "[MY-APP-SECRET-KEY]")
p
OK, I just wrote this and it solved the problem:
yt_oauth(app_id = "[MY-TUBER-APP-ID]", app_secret = "[MY-APP-SECRET-KEY]", token = "")
@meneos thanks, this solved a problem I was having too! @soodoku, it sounds like this might be a different problem to the one that started this thread, but I'm happy either way ;-) Is yt_oauth() defaulting to tokens even when they aren't applicable? (Still learning here too, so sorry if that's a mile off.)
Hi, I'm getting the same error when I try to get comments from a list of videos that I created using yt_search. yt_search worked fine and created a list of 590 observations, but I'm hitting 'HTTP failure 403' when running the second piece of code to get the comments from the videos. Could anybody please help? Here is the code:
res <- yt_search("search term", published_after = "2017-1-01T00:00:00Z", published_before = "2018-1-01T00:00:00Z")
comments <- lapply(as.character(res$video_id), function(x) {
  get_comment_threads(c(video_id = x), max_results = 21)
})
Update: I think it was because I hit the limit, since I got results after reducing the number of videos.
I've found a solution. That error is related to the "liveBroadcastContent" column in your yt_search result: just filter it for the "none" value and everything will work fine. Another thing that stops your request is a video with disabled comments, and this is the main problem.
Hi! What do you mean about 'a video with disabled comments'?
Hello! Imagine that you have a YouTube channel. You can decide whether to allow viewers to comment on specific videos, so you can turn comments on or off.
If some video in a playlist has comments disabled, your request will be stopped with no results returned.
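A rough sketch of both workarounds together, reusing the get_comment_threads call from above (the tryCatch wrapper is just one way to skip videos with comments disabled):

library(tuber)

res <- yt_search("search term")

# keep only regular videos; live broadcasts trigger the 403
res <- res[res$liveBroadcastContent == "none", ]

comments <- lapply(as.character(res$video_id), function(x) {
  tryCatch(
    get_comment_threads(c(video_id = x), max_results = 21),
    error = function(e) NULL   # skip videos where comments are disabled
  )
})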
Hi!
Has this issue ever been solved? I can't even run a basic search function using the tuber package without getting the 403 error. I'm wondering if this is related to the tuber package or to YouTube API updates.
yt_search(term = "Elizabeth Warren", max_results = 5, published_after = "2018-12-31T00:00:00Z")
Thanks!
I am having the same problem!
I tried this on different videos; here are my results. If an author has only replies (no top-level comment on the video), the behavior depends on the number of replies: if the author has no more than 5 replies, none of them are scraped, but with more than 5 replies, some comments are scraped. And if an author has both their own comments and replies, more comments are scraped than in the second case I described above.
I've tried several ways to fix this error, but I can't. I have already enabled all the YouTube APIs, given the token argument the value "", and changed the client ID and secret.
I worked with this package several months ago and didn't have any issues. I have no idea why this is happening.
Has anyone had any luck with this?
Just installed tuber and successfully got past the OAuth consent screen. Ran the demo get_stats(video_id = "N708P-A45D0") and got a valid result. Ran the demo yt_search("Barack Obama", max_results = 10) and got error 403. Then ran the same demo query again and got error 403. Dead in the water.
Try removing the .auth file and going through auth again.
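Roughly like this, assuming the cached token is the default .httr-oauth file in your working directory:

# delete the cached token, then authenticate again
if (file.exists(".httr-oauth")) file.remove(".httr-oauth")
library(tuber)
yt_oauth(app_id = "MY_APP_ID", app_secret = "MY_APP_SECRET")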
I have removed and renewed the auth file several times now, reset my app secret on my API credentials page, and made sure everything was in working order, and I am still getting error 403. The yt_oauth() function works fine and says "Authentication complete." yet nothing can be queried.
Has anyone looked into fixing this yet?
I've been trying to use tuber for some months now, and I keep getting the same problem (Error: HTTP failure: 403). Does anyone know an alternative way to get YouTube data using R?