Auto-generate links for URLs in news and comments
Hi,
I have recently implemented auto-generated hyperlinks for URLs in news items.
This was done in these two commits:
- News list: https://github.com/EncontrosDigitais/bootcamp/commit/6681653f2d9910d72189932d4c3aaa323f1d2112
- News comments: https://github.com/EncontrosDigitais/bootcamp/commit/dc9e31997feeefddec928de31d49f2c605ca1c8c
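The general effect is similar to what Django's built-in urlize helper produces; roughly like this (a sketch of the idea, not the commits themselves):

```python
from django.utils.html import urlize

# urlize wraps URLs found in plain text in <a> tags; the commits above
# achieve a similar effect when rendering news items and comments.
text = "Check https://github.com/EncontrosDigitais/bootcamp and www.djangoproject.com"
print(urlize(text, nofollow=True))
# Both the full URL and the bare www. domain become clickable links.
```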
If there is interest, I could create a pull request.
Cheers, Carlos
Hello again,
I have improved on this suggestion by extracting the first URL from the text and fetching its meta information, the way Facebook or Twitter does:

This was the commit that implemented it: https://github.com/EncontrosDigitais/bootcamp/commit/a74f15ba696dea8eee66c68fb4c5ae73b0a9b00e
The file mainly responsible is metadatareader.py:
https://github.com/EncontrosDigitais/bootcamp/blob/master/bootcamp/news/metadatareader.py
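The core idea is roughly the following (a minimal sketch using requests and BeautifulSoup, not the actual contents of metadatareader.py):

```python
import requests
from bs4 import BeautifulSoup

def get_metadata(url):
    """Fetch a page and pull the OpenGraph tags that link previews rely on."""
    html = requests.get(url, timeout=5).text
    soup = BeautifulSoup(html, "html.parser")
    metadata = {}
    for prop in ("og:title", "og:description", "og:image"):
        tag = soup.find("meta", property=prop)
        if tag and tag.get("content"):
            metadata[prop] = tag["content"]
    return metadata
```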
If you like this idea, I can merge it with the previous suggestion and make the pull request.
My only doubt is about adding this new requirement in this file; I am not sure whether this is the right way: https://github.com/EncontrosDigitais/bootcamp/commit/a74f15ba696dea8eee66c68fb4c5ae73b0a9b00e#diff-7fe11226cff646f5d9f35faa76217059
This is actually awesome @Vamoss. I have been thinking about implementing this for some time now, but I haven't had the time to do it myself. I appreciate that you took the time to show it to me. Yes, please, I will gladly receive your PR for this.
Great to hear you like it.
I started learning Django because of Bootcamp, so I don't know much yet. Can you please check whether this requirement was added in the correct place: https://github.com/EncontrosDigitais/bootcamp/commit/a74f15ba696dea8eee66c68fb4c5ae73b0a9b00e#diff-7fe11226cff646f5d9f35faa76217059
Yes @Vamoss, it is in the right place. Can you replace the URL you use to describe the package with the URL of its repository?
Hi @sebastian-code, BeautifulSoup has no official GitHub/Git repository; there are only some outdated copies, so I think it is better to link to the official website. I just made the pull request: https://github.com/vitorfs/bootcamp/pull/219
Here is a screenshot:

The only feature not implemented yet is caching the metatag image: image links from servers like Facebook and Instagram expire after a few hours. I think it is simple to implement; I would love it if someone could do it :)
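One possible starting point (cache_metatag_image is a hypothetical helper assuming Django's default storage backend; none of this is in the repository yet):

```python
import requests
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

def cache_metatag_image(url, name):
    """Download the remote og:image once and keep a local copy, so the
    preview survives after the remote link expires."""
    try:
        response = requests.get(url, timeout=5)
        response.raise_for_status()
    except requests.RequestException:
        return None  # keep the news item usable even if the image is gone
    return default_storage.save("news_images/" + name, ContentFile(response.content))
```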
Hi @sebastian-code, amazing work improving the code!
Just one small fix: news items throw an error if there are no URLs in them. This should fix it: https://github.com/Vamoss/bootcamp/commit/6d991dd400de1b1ce18261df3757a5dacf9a6762
Before making a pull request, I would like to discuss two topics:
- By replacing `subprocess` with `requests` we lose the ability to force a timeout on servers that never respond. I chose to use `subprocess` because I noticed that a lot of sites never respond, so I think this is critical: Bootcamp can now easily hang in an infinite loop.
- By simplifying the regex, we lose the ability to find URLs like `google.com` (typed without the rigor of `http://`). I think this is important because a lot of users don't pay attention to that. I did a lot of research, and found that extracting a URL from a text is not a simple task (see the sketch below).
Considering the complexity of those two subjects, I would recommend rolling back to the original suggestion, or finding a better way to solve those issues.
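To illustrate the second point, a pattern permissive enough to also catch bare domains could look like this (a sketch only; the commit linked earlier uses a more thorough expression):

```python
import re

# Matches both "http(s)://..." URLs and bare domains like "google.com".
URL_PATTERN = re.compile(
    r"(?:https?://|ftp://)?"   # optional schema
    r"(?:[\w-]+\.)+[a-z]{2,}"  # domain such as vamoss.com.br
    r"(?:/\S*)?",              # optional path/query
    re.IGNORECASE,
)

def get_urls(text):
    """Return all URL-looking substrings found in the text."""
    return URL_PATTERN.findall(text)

print(get_urls("check google.com and https://vamoss.com.br/blog"))
# ['google.com', 'https://vamoss.com.br/blog']
```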
Now I see the value of using a really complex regex like the one you gave, @Vamoss; I'll recover it then. About the subprocess method: that is a bad idea, as it exposes any installation to unnecessary risks and also makes it more difficult to use Bootcamp in different environments. Besides, you can assign a timeout to the requests call. I'll add the same 5-second timeframe to the call.
I know requests has a native timeout parameter, but please consider this note from the documentation:
> timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds). If no timeout is specified explicitly, requests do not time out.

https://requests.readthedocs.io/en/latest/user/quickstart/#timeouts
During development I found some cases where a website returned the first byte (avoiding the requests timeout) and then took 30 seconds to fully return the content. Another approach with requests is to use eventlet:
https://stackoverflow.com/a/13592680/1177566
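For reference, a total deadline can also be enforced with plain requests by streaming the response in chunks (a sketch under that assumption, not something from the commits):

```python
import time
import requests

def get_with_deadline(url, deadline=5.0):
    """Abort once the total download time exceeds the deadline, even if the
    server keeps trickling bytes and never triggers the per-read timeout."""
    start = time.monotonic()
    response = requests.get(url, timeout=deadline, stream=True)
    body = b""
    for chunk in response.iter_content(chunk_size=8192):
        body += chunk
        if time.monotonic() - start > deadline:
            response.close()
            raise TimeoutError(url + " exceeded the total deadline")
    return body
```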
@Vamoss I see your point. I think this can be solved by implementing the proper exception workflow from requests when declaring the timeout. I was thinking of something along the lines of:
```python
import requests

try:
    # url is the address extracted from the news text
    r = requests.get(url, timeout=3)
    r.raise_for_status()
except requests.exceptions.HTTPError as errh:
    print("Http Error:", errh)
except requests.exceptions.ConnectionError as errc:
    print("Error Connecting:", errc)
except requests.exceptions.Timeout as errt:
    print("Timeout Error:", errt)
except requests.exceptions.RequestException as err:
    print("Oops: Something Else", err)
```
But thinking about what the documentation says, perhaps your solution with eventlet would be a safer approach. The implementation in your reference looks weird, though; what do you think of something along the lines of:
```python
import eventlet
eventlet.monkey_patch()  # patch blocking sockets so the timeout can interrupt them
import requests

with eventlet.Timeout(10):
    requests.get("http://someverytrickyURL.com")
```
By the way @Vamoss, I could not reproduce a timeout situation; do you have a URL I can work with for testing?
Great! There is hope :)
Yes, that is hard to simulate. I don't remember exactly which site I was testing in that case, but I suspect it was https://reddit.com, or my personal website https://vamoss.com.br (my server can be very slow sometimes).
This site may help (I tested some pages and not all of them were actually slow): http://internetsupervision.com/scripts/urlcheck/report.aspx?reportid=slowest
Nonetheless, I think you have a good strategy; I would be happy to see it in action.
@Vamoss I'm trying to revert the regex, but when I tested it with the specific case you mention (I tried vamoss.com.br) it throws a MissingSchema error. The log actually mentions detecting the right pattern: Invalid URL 'vamoss.com.br': No schema supplied. Perhaps you meant http://vamoss.com.br?
Do you know anything about this? My regex is really bad.
It happens because requests requires a schema (http:// or https://) in the URL; this didn't happen with subprocess because curl fixed that automatically.
This code works for me:
```python
def fetch_metadata(text):
    """Consolidated workflow to recover the metadata of the page at the
    first URL found in a given text block.

    :param text: block of text of any length
    """
    urls = get_urls(text)
    if len(urls) > 0:
        url = urls[0]
        # Prepend a schema when the user typed a bare domain, since
        # requests refuses URLs without one.
        if not url.lower().startswith(("http://", "https://", "ftp://")):
            url = "http://" + url
        return get_metadata(url)
    return None
```
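For example (relying on the get_urls and get_metadata helpers from metadatareader.py):

```python
metadata = fetch_metadata("Have a look at vamoss.com.br when you can")
# The bare domain is normalized to "http://vamoss.com.br" before fetching.
```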
No problem @Vamoss, I fixed it using a different approach. Thanks for the help, though.
I think we can close this issue @sebastian-code