
Asynchronous browser#get?

frankandrobot opened this issue 8 years ago • 4 comments

It would be nice if the browsers had an asynchronous version of get, so that several pages could be loaded at once. As a workaround, can I use a custom loader? What format does a Document need to be in to work with scala-scraper?

frankandrobot avatar Jan 28 '17 20:01 frankandrobot

Hi @frankandrobot! It's a good point, and the Browser implementations are probably not thread-safe. I'll look into improving both the browser implementations and the API later.

Nevertheless, you can easily load several pages in parallel by just wrapping the operations in a Future, like:

import net.ruippeixotog.scalascraper.browser.JsoupBrowser
import net.ruippeixotog.scalascraper.model.Document

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

val doc: Future[Document] = Future { JsoupBrowser().get("http://example.com") }
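To load several pages at once, the same pattern extends with Future.sequence. A minimal sketch, using a hypothetical fetch function as a stand-in for JsoupBrowser().get so it runs without the library:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.concurrent.{Await, Future}

object ParallelFetch {
  // Stand-in for JsoupBrowser().get(url); in real code this would return a Document.
  def fetch(url: String): String = s"<html>$url</html>"

  // Start one Future per URL, then collapse them into a single Future of all results.
  def fetchAll(urls: Seq[String]): Future[Seq[String]] =
    Future.sequence(urls.map(url => Future(fetch(url))))

  def main(args: Array[String]): Unit = {
    val pages = Await.result(fetchAll(Seq("http://a.example", "http://b.example")), 10.seconds)
    assert(pages.size == 2)
  }
}
```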

You can implement a custom loader that returns a Document; you really just need to implement its interface (the location, root and toHtml methods). You'll probably have less work if you use jsoup or HtmlUnit for parsing the HTML, though, as scala-scraper already provides Document instances wrapping the models of those libraries.
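As an illustration only, here is the shape of those three methods on a simplified, hypothetical stand-in for the Document trait (in the real trait, root returns scala-scraper's Element type, not a String):

```scala
// Simplified, hypothetical stand-in for scala-scraper's Document trait;
// the real `root` returns an Element rather than a String.
trait SimpleDocument {
  def location: String // the URL the document was loaded from
  def root: String     // the root element of the parsed document
  def toHtml: String   // the document serialized back to HTML
}

// A trivial in-memory implementation, e.g. for tests or a custom loader.
final case class InMemoryDocument(location: String, html: String) extends SimpleDocument {
  def root: String = html
  def toHtml: String = html
}
```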

ruippeixotog avatar Jan 29 '17 15:01 ruippeixotog

Ah yes, wrap it in a future. That works for now. Thanks!

frankandrobot avatar Jan 30 '17 17:01 frankandrobot

In some common situations in which there are a large number of concurrent jobs, simply wrapping .get inside a Future seems problematic, as each Future would take a spot in the execution context's thread pool. Is there any plan for building an async version? If not, I can get started and make a pull request.
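One common mitigation while no async version exists is to run the blocking .get calls on a dedicated, bounded thread pool, keeping the default execution context free for non-blocking work. A sketch, with a Thread.sleep stand-in for the blocking page load:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object BlockingPool {
  // Dedicated pool for blocking I/O: at most 16 page loads run at once,
  // and the default global pool stays available for CPU-bound work.
  implicit val blockingEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(16))

  // Stand-in for a blocking page load such as JsoupBrowser().get(url).
  def blockingFetch(url: String): String = { Thread.sleep(10); s"loaded:$url" }

  def fetchAsync(url: String): Future[String] = Future(blockingFetch(url))
}
```

With a fixed pool, extra jobs queue inside the executor rather than starving the global pool; the blocking still happens, but on threads reserved for it.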

polymorpher avatar Aug 30 '17 19:08 polymorpher

I'm planning on working this weekend on extending the Browser interface to support async browser implementations, and on providing a naive factory method that adapts a non-async Browser by wrapping its calls in futures. Once I do that, a pull request with a proper async implementation would be very helpful :)
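The naive factory described here could look roughly like the sketch below; AsyncFetcher and fromSync are hypothetical names illustrating the wrapping idea, not the actual API that was later added to scala-scraper:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical async interface: like a Browser's get, but non-blocking.
trait AsyncFetcher[D] {
  def get(url: String): Future[D]
}

object AsyncFetcher {
  // Naive factory: adapts a synchronous get into the async interface by
  // shifting each call onto the given ExecutionContext.
  def fromSync[D](syncGet: String => D)(implicit ec: ExecutionContext): AsyncFetcher[D] =
    new AsyncFetcher[D] {
      def get(url: String): Future[D] = Future(syncGet(url))
    }
}
```

A proper async implementation would instead use a non-blocking HTTP client underneath, so no thread is parked per in-flight request.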

ruippeixotog avatar Aug 30 '17 23:08 ruippeixotog