
Tweet URL Extraction: All Twitter Shortlinks

Open ianmilligan1 opened this issue 9 years ago • 13 comments

Right now, our script for URL extraction is as follows:

import org.warcbase.spark.matchbox._
import org.warcbase.spark.matchbox.TweetUtils._
import org.warcbase.spark.rdd.RecordRDD._

val tweets = RecordLoader.loadTweets(
  "/mnt/vol1/data_sets/elxn42/ruest-white/elxn42-tweets-combined-deduplicated-unshortened-fixed.json",
  sc)

val r = tweets
  .flatMap(tweet => """http://[^ ]+""".r.findAllIn(tweet.text).toList)
  .countItems()
  .saveAsTextFile("/home/i2millig/tweet-test/tweet-urls-test.txt")

By grabbing URLs out of the tweets' text field, we just get results like:

(http://…,49033)
(http://t…,48066)
(http://t.…,45610)
(http://t.c…,42470)
(http://t.co…,38145)
(http://t.co/…,32723)
(http://t.co/pbFMYFZpQC,2902)
(http://t.co/lTTkYPlGX0,2823)
(http://t.co/mn2pyBGZmj,1964)
(http://t.co/rriRvt6DyI,1964)
(ad nauseam)
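Part of the noise comes from Twitter eliding long URLs in the display text (fragments like `http://t.…` with a trailing ellipsis), which the bare `http://[^ ]+` pattern happily matches. A stricter pattern that only keeps complete t.co links would at least drop the elided fragments — a minimal sketch in Python, with a made-up sample string:

```python
import re

# t.co path codes are alphanumeric, so requiring at least one word character
# after "t.co/" filters out elided fragments like "http://t.…" that appear
# in truncated tweet text. (The ellipsis character is not matched by \w.)
TCO = re.compile(r"https?://t\.co/\w+")

sample = "Read this http://t.co/pbFMYFZpQC and also http://t.… more"
print(TCO.findall(sample))  # only the complete shortlink survives
```

This only cleans up the counts, of course — the surviving links are still shortened.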

This is not very useful – so what's the best path? In the past, @ruebot and I have used unshorten.py in twarc.

ianmilligan1 avatar Apr 06 '16 16:04 ianmilligan1

...which in turn uses https://github.com/edsu/unshrtn

We could incorporate that. Or we could create a method in warcbase that does the same thing, or maybe there's an existing Java library for unshortening that we could just pull in.

ruebot avatar Apr 06 '16 16:04 ruebot

Do we have a file which has the mapping from short urls to the full URLs? If so, I can show you how to join in the data...

lintool avatar Apr 07 '16 20:04 lintool

@lintool can you clarify what you mean by "a file that has the mapping from short urls to the full URLs"?

ruebot avatar Apr 07 '16 20:04 ruebot

...or, is this what you're looking for? https://github.com/edsu/unshrtn/blob/master/unshrtn.coffee

ruebot avatar Apr 07 '16 20:04 ruebot

File that has:

http://t.co/pbFMYFZpQC http://foo.bar.com/
http://t.co/pg3SFzLc http://foo.bar.com/
...

lintool avatar Apr 07 '16 20:04 lintool

Oh, https://github.com/edsu/twarc/blob/master/utils/unshorten.py#L37-L53 puts it back in the dataset with a new entry.

ruebot avatar Apr 07 '16 20:04 ruebot

If I understand correctly what it's doing, that's absolutely terrible. That's the digital equivalent of going through a paper archive with a black magic marker, crossing out historical place names and replacing them with their modern names. Would you do that to a paper archive? No! So don't do it to a digital archive.

The correct way to do this is to have a separate file that has the mapping (per above), and join in the unshortened form during processing.

EDIT: okay, it adds a new field to the JSON, which isn't as bad as I thought. The analogy would be going through a paper archive and putting a post-it note next to every instance of a historical place name, with its modern name written on the note.

lintool avatar Apr 07 '16 23:04 lintool

You don't do it on the preservation/master version of the dataset; you always cat it out to a new file (stdout by default). It only reads the preservation/master version of the dataset.
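The read-the-master, annotate, write-a-new-copy flow can be sketched like this — this is not twarc's actual code, just a simplified illustration with a stubbed-out resolve() standing in for a real unshortening lookup, and with the JSON field names chosen here for illustration:

```python
import json
import sys

def resolve(url):
    # Stand-in for a real unshortening step (e.g. following HTTP redirects,
    # or calling out to the unshrtn service). Hard-coded for this sketch.
    mapping = {"http://t.co/pbFMYFZpQC": "http://foo.bar.com/"}
    return mapping.get(url, url)

def enrich(line):
    tweet = json.loads(line)
    # Annotate rather than overwrite: the original URL entry stays in
    # place, and the resolved form goes into a new field beside it.
    for url in tweet.get("entities", {}).get("urls", []):
        url["unshortened_url"] = resolve(url["expanded_url"])
    return json.dumps(tweet)

if __name__ == "__main__":
    for line in sys.stdin:   # read the master copy from stdin...
        print(enrich(line))  # ...write the enriched copy to stdout
```

The master file is only ever read; the enriched copy lands wherever stdout is redirected.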

ruebot avatar Apr 08 '16 14:04 ruebot

If that's the case, it's a waste of space. You still just want

short long
short long
...

lintool avatar Apr 09 '16 00:04 lintool

Would the output be:

short, count, long, count
http://t.co/pbFMYFZpQC, 12, http://foo.bar.com/, 123

ruebot avatar Apr 09 '16 01:04 ruebot

You wouldn't even need the count. If you just had short/long pairs, you could process the original archival JSON and join in the long form as needed.
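That join could look like this — a minimal Python sketch with made-up mapping-file contents; in the Spark pipeline above it would be a join against an RDD of (short, long) pairs rather than an in-memory dict:

```python
import io

# Hypothetical two-column mapping file, as described above:
# short URL, whitespace, long URL.
mapping_file = io.StringIO(
    "http://t.co/pbFMYFZpQC http://foo.bar.com/\n"
    "http://t.co/pg3SFzLc http://foo.bar.com/\n"
)

# Load the mapping once...
short_to_long = dict(line.split() for line in mapping_file if line.strip())

# ...then join it in during processing, leaving unmapped URLs as-is
# and the archival data untouched.
extracted = ["http://t.co/pbFMYFZpQC", "http://t.co/unmapped123"]
resolved = [short_to_long.get(url, url) for url in extracted]
print(resolved)
```

Keeping the mapping outside the dataset means the resolution step is repeatable and the archival JSON never changes.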

lintool avatar Apr 09 '16 01:04 lintool

Just re-opening this. Did we reach any agreement here?

ianmilligan1 avatar Apr 28 '16 23:04 ianmilligan1

Do we have a way to generate a file that has the following?

short-url full-url
short-url full-url
...

lintool avatar Apr 29 '16 19:04 lintool