
Getting errors while executing example

Gaurang033 opened this issue on Feb 6, 2018 · 0 comments

Hi, I am getting a few errors while executing the example.

I have modified run-test-locally.sh, as it was taking the wrong parameters:


```bash
#!/usr/bin/env bash

if [[ $# -ne 2 ]]; then
  echo "Usage: $0 <test-case-folder> <path-to-hive-site.xml>"
  exit 1
fi

JAR_PATH=~/code/Beetest/target/jars
TEST_CASE=$1
CONFIG=$2

echo "JAR location: $JAR_PATH"
echo "Test Case Location: $TEST_CASE"
echo "Config Location: $CONFIG"

USE_MINI_CLUSTER=TRUE
DELETE_TEST_DIR_ON_EXIT=FALSE

# Build the classpath from every jar under the current directory and JAR_PATH
CP=$(find "$(pwd)" "$JAR_PATH" -name "*.jar" | tr "\n" ":")

# Run the Beetest executor; filter out noisy MetricsSystemImpl log lines
java -cp "$CP"                          \
  -Dhadoop.root.logger=ERROR,console    \
  com.spotify.beetest.TestQueryExecutor \
  ${TEST_CASE} ${CONFIG} ${USE_MINI_CLUSTER} ${DELETE_TEST_DIR_ON_EXIT} \
  2>&1 | grep -v MetricsSystemImpl
```

And when I execute it with the following arguments:

```
./run.sh artist-count-ddl local-config/hive-site.xml
JAR location: /home/gaurang.shah/code/Beetest/target/jars
Test Case Location: artist-count-ddl
Config Location: local-config/hive-site.xml
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/gaurang.shah/code/Beetest/target/jars/Beetest-1.0-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/gaurang.shah/code/Beetest/target/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Formatting using clusterid: testClusterID
18/02/06 18:27:06 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
Feb 06, 2018 6:27:24 PM com.spotify.beetest.TestQueryExecutor run
INFO: Generated query filename: /tmp/beetest-test-1456079680-query.hql
Feb 06, 2018 6:27:24 PM com.spotify.beetest.TestQueryExecutor run
INFO: Generated query content:
CREATE DATABASE IF NOT EXISTS beetest;
USE beetest;
DROP TABLE IF EXISTS stream;
CREATE TABLE stream(artist STRING, song STRING, user STRING, ts TIMESTAMP)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
LOAD DATA LOCAL INPATH 'artist-count-ddl/stream.txt' INTO TABLE stream;
DROP TABLE IF EXISTS output_1456079680;
CREATE TABLE output_1456079680
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY '|'
MAP KEYS TERMINATED BY '$'
LOCATION '/tmp/beetest-test-1456079680-output_1456079680' AS
  SELECT artist, COUNT(*) AS cnt
    FROM stream
GROUP BY artist
ORDER BY cnt DESC
   LIMIT 2;

Feb 06, 2018 6:27:24 PM com.spotify.beetest.TestQueryExecutor run
INFO: Missing variables file
Feb 06, 2018 6:27:24 PM com.spotify.beetest.TestQueryExecutor getTestCaseCommand
INFO: CONFIG BEING USED IS: /tmp/beetest-test-1456079680/MiniDFSClusterConfig/local-config
OK
OK
18/02/06 18:27:25 ERROR metadata.Hive: Table stream not found: beetest.stream table not found
OK
OK
Loading data to table beetest.stream
Table beetest.stream stats: [numFiles=1, totalSize=296]
OK
18/02/06 18:27:26 ERROR metadata.Hive: Table output_1456079680 not found: beetest.output_1456079680 table not found
OK
Query ID = gaurang.shah_20180206182726_9ab74ee1-5ec9-44a3-b4b9-07ca2850f7a8
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
18/02/06 18:27:26 ERROR mr.ExecDriver: local
Job running in-process (local Hadoop)
2018-02-06 18:27:27,916 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_local574033818_0001
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
18/02/06 18:27:28 ERROR mr.ExecDriver: local
Job running in-process (local Hadoop)
2018-02-06 18:27:29,783 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_local1593158376_0002
Moving data to: /tmp/beetest-test-1456079680-output_1456079680
Table beetest.output_1456079680 stats: [numFiles=0, numRows=2, totalSize=0, rawDataSize=17]
MapReduce Jobs Launched:
Stage-Stage-1:  HDFS Read: 592 HDFS Write: 592 SUCCESS
Stage-Stage-2:  HDFS Read: 592 HDFS Write: 692 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Feb 06, 2018 6:27:30 PM com.spotify.beetest.TestQueryExecutor run
INFO: Asserting: artist-count-ddl/expected.txt and /tmp/beetest-test-1456079680/outputDirs/000000_0
```
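
For context (inferred from the log above, not from Beetest's docs): the test-case folder appears to supply both the input data loaded into the `stream` table and the expected output that the final `Asserting` step compares against. A minimal sketch of what `artist-count-ddl` seems to contain; only `stream.txt` and `expected.txt` are confirmed by the log, and any query/DDL file it may also hold is an assumption:

```bash
# Inferred test-case folder layout; stream.txt and expected.txt appear in the
# log (LOAD DATA LOCAL INPATH ... / INFO: Asserting ...), anything else is assumed.
ls artist-count-ddl
# stream.txt     tab-separated rows loaded into the beetest.stream table
# expected.txt   expected query output, asserted against outputDirs/000000_0
```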
