
[PGA] Pointer redeclared during import "unsafe" (borges-indexer on Windows)

Open · sakalouski opened this issue on Sep 05 '18 · 13 comments

[screenshot of the build failure: Pointer redeclared during import "unsafe"]

I have tried under admin rights as well - no luck.

sakalouski · Sep 05 '18

Not sure if borges-indexer build is supported on Windows. Tagging @erizocosmico

vmarkovtsev · Sep 05 '18

Thanks for such a rapid reply! Not a problem, I'll launch it on Linux then :)

sakalouski · Sep 05 '18

Looks like a dependency might have problems building on Windows. To be honest, we never tried it on Windows, as we don't currently support Windows as an OS for that tool.

erizocosmico · Sep 05 '18

@vmarkovtsev I have installed pga and downloaded some repositories with the .siva extension.

Still, a couple of points in the docs you provide are unclear to me:

  1. How do I query these .siva files from PySpark?
  2. How do I get the detailed metadata mentioned in engine, but without Docker? (I'm working on a cluster, so no root access.)

Generally, I do not see the logic behind suggesting these three tools in the PublicGitArchive README (pga, multitool and borges-indexer). Possibly it is unclear only to me.

To conclude: I want access to detailed metadata and timestamped source code. What do I do to get it without Docker? :)

Thank you

sakalouski · Sep 12 '18

@sakalouski Your choice is jgit-spark-connector. It is a JAR that plugs into Spark. If your programming language is Python, it is as easy as:

from pyspark.sql import SparkSession
from sourced.engine import Engine

# Start a local Spark session, fetching the engine JAR from Maven Central.
session = SparkSession.builder.master("local[*]").appName("test") \
    .config("spark.jars.packages", "tech.sourced:engine:0.7.0").getOrCreate()
# Point the engine at the directory holding the .siva files and list the repositories.
engine = Engine(session, "/path/to/siva")
print(engine.repositories.collect())
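
A hedged follow-up sketch of going further with the same engine object, assuming the tech.sourced:engine 0.7.0 Python API (the exact chain and column names are assumptions and may differ between versions): it walks from repositories through their HEAD references to commits.

# Hedged sketch, assuming the 0.7.0 API surface:
# repositories -> references -> HEAD refs only -> commits.
head_commits = engine.repositories.references.head_ref.commits
head_commits.select("repository_id", "hash", "message").show(5, truncate=False)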

vmarkovtsev · Sep 13 '18

@vmarkovtsev Thank you for the information!

sakalouski · Sep 13 '18

BTW not sure what you mean by

detailed metadata and timestamped source code

In my understanding, "detailed metadata" means the commit message, author, and date, and "timestamped source code" is git blame. So your other option is to run Hercules over the siva files without using Spark. If you really need git blame, you will have to hack some Go code using the Hercules framework.
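
If it helps, here is a minimal hedged sketch of shelling out to Hercules from Python. It assumes the hercules binary is installed and on PATH and uses a checked-out repository path; flag names vary between Hercules versions, and reading siva files directly may require additional flags.

# Hedged sketch: run the Hercules CLI from Python. Assumes `hercules`
# is on PATH; --burndown is one of its analyses, and flags vary by version.
import subprocess

result = subprocess.run(
    ["hercules", "--burndown", "/path/to/repo"],
    stdout=subprocess.PIPE, check=True,
)
print(result.stdout.decode())  # Hercules writes its analysis results to stdout (YAML by default)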

vmarkovtsev · Sep 13 '18

@vmarkovtsev Regarding the metadata, I mean exactly what you mentioned. Thanks for the hints, I'm having a look!

sakalouski · Sep 13 '18

Oh, and there is another option: a SQL interface to the underlying repos via gitbase.
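
Since gitbase speaks the MySQL wire protocol, a hedged sketch of querying it from Python could look like the following; the host, credentials, database name, and column names are assumptions that depend on your deployment and gitbase version.

# Hedged sketch: gitbase speaks the MySQL wire protocol, so any MySQL
# client library works. Host, user, database, and column names below
# are assumptions; adjust them for your gitbase deployment.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306, user="root", db="gitbase")
with conn.cursor() as cur:
    cur.execute("SELECT repository_id, commit_hash, commit_message FROM commits LIMIT 5")
    for row in cur.fetchall():
        print(row)
conn.close()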

vmarkovtsev · Sep 13 '18

@vmarkovtsev There are so many tools, but all of them are somehow different :/ So, if I want to make all the data processable in PySpark, would you suggest using Hercules or gitbase? As I see it, gitbase mentions Apache Spark integration; does that mean I could configure it to run Spark SQL?

sakalouski · Sep 13 '18

As far as I know, the gitbase+Spark integration is not ready yet, but yep, that is the goal.

So the only way to run PySpark over siva files at the moment is through jgit-spark-connector (formerly "engine").

vmarkovtsev · Sep 13 '18

@vmarkovtsev Thank you, Vadim, you have saved me lots of time!

sakalouski · Sep 13 '18

@sakalouski We'll keep you posted when the Spark SQL integration is ready; it is at the top of our priority list. cc @vcoisne

eiso · Sep 28 '18