convexhull operation throws a NullPointerException
Hi,
I am trying to run the convex hull operation in SpatialHadoop, and it fails with a NullPointerException in the reduce phase. Can you please help me?
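For context, points.grid is a grid-indexed points file. The sketch below shows a typical preparation and invocation sequence; the index step (and the raw input path /user/hduser/points) is assumed here as the usual way such a file is built, while the convexhull line matches the command in the log:

    # assumed preparation step: build a grid index over the raw points file (hypothetical input path)
    bin/shadoop index /user/hduser/points /user/hduser/points.grid sindex:grid shape:point

    # the failing operation, as shown at the top of the log below
    bin/shadoop convexhull /user/hduser/points.grid /user/hduser/convexhull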
The full stack trace is:
bin/shadoop convexhull /user/hduser/points.grid /user/hduser/convexhull
14/07/28 05:14:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/07/28 05:14:41 WARN snappy.LoadSnappy: Snappy native library not loaded
14/07/28 05:14:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/07/28 05:14:41 INFO operations.ConvexHull: Processing 1 out of 1 partition
14/07/28 05:14:41 INFO mapred.FileInputFormat: Spatial filter function matched with 1 cells
14/07/28 05:14:42 INFO mapred.JobClient: Running job: job_201407230401_0013
14/07/28 05:14:43 INFO mapred.JobClient: map 0% reduce 0%
14/07/28 05:14:54 INFO mapred.JobClient: map 100% reduce 0%
14/07/28 05:15:03 INFO mapred.JobClient: map 100% reduce 33%
14/07/28 05:15:06 INFO mapred.JobClient: Task Id : attempt_201407230401_0013_r_000000_0, Status : FAILED
java.lang.NullPointerException
	at edu.umn.cs.spatialHadoop.core.GridRecordWriter.closeCell(GridRecordWriter.java:357)
	at edu.umn.cs.spatialHadoop.core.GridRecordWriter.close(GridRecordWriter.java:436)
	at edu.umn.cs.spatialHadoop.mapred.GridRecordWriter2.close(GridRecordWriter2.java:39)
	at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:467)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:535)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
attempt_201407230401_0013_r_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201407230401_0013_r_000000_0: log4j:WARN Please initialize the log4j system properly.
14/07/28 05:15:07 INFO mapred.JobClient: map 100% reduce 0%
14/07/28 05:15:16 INFO mapred.JobClient: map 100% reduce 33%
14/07/28 05:15:18 INFO mapred.JobClient: Task Id : attempt_201407230401_0013_r_000000_1, Status : FAILED
java.lang.NullPointerException
	at edu.umn.cs.spatialHadoop.core.GridRecordWriter.closeCell(GridRecordWriter.java:357)
	at edu.umn.cs.spatialHadoop.core.GridRecordWriter.close(GridRecordWriter.java:436)
	at edu.umn.cs.spatialHadoop.mapred.GridRecordWriter2.close(GridRecordWriter2.java:39)
	at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:467)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:535)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
attempt_201407230401_0013_r_000000_1: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201407230401_0013_r_000000_1: log4j:WARN Please initialize the log4j system properly.
14/07/28 05:15:19 INFO mapred.JobClient: map 100% reduce 0%
14/07/28 05:15:28 INFO mapred.JobClient: map 100% reduce 33%
14/07/28 05:15:32 INFO mapred.JobClient: Task Id : attempt_201407230401_0013_r_000000_2, Status : FAILED
java.lang.NullPointerException
	at edu.umn.cs.spatialHadoop.core.GridRecordWriter.closeCell(GridRecordWriter.java:357)
	at edu.umn.cs.spatialHadoop.core.GridRecordWriter.close(GridRecordWriter.java:436)
	at edu.umn.cs.spatialHadoop.mapred.GridRecordWriter2.close(GridRecordWriter2.java:39)
	at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:467)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:535)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
attempt_201407230401_0013_r_000000_2: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201407230401_0013_r_000000_2: log4j:WARN Please initialize the log4j system properly.
14/07/28 05:15:33 INFO mapred.JobClient: map 100% reduce 0%
14/07/28 05:15:41 INFO mapred.JobClient: map 100% reduce 33%
14/07/28 05:15:46 INFO mapred.JobClient: map 100% reduce 0%
14/07/28 05:15:48 INFO mapred.JobClient: Job complete: job_201407230401_0013
14/07/28 05:15:48 INFO mapred.JobClient: Counters: 25
14/07/28 05:15:48 INFO mapred.JobClient:   Job Counters
14/07/28 05:15:48 INFO mapred.JobClient:     Launched reduce tasks=4
14/07/28 05:15:48 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=12354
14/07/28 05:15:48 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/28 05:15:48 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/07/28 05:15:48 INFO mapred.JobClient:     Launched map tasks=1
14/07/28 05:15:48 INFO mapred.JobClient:     Data-local map tasks=1
14/07/28 05:15:48 INFO mapred.JobClient:     Failed reduce tasks=1
14/07/28 05:15:48 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=51587
14/07/28 05:15:48 INFO mapred.JobClient:   File Input Format Counters
14/07/28 05:15:48 INFO mapred.JobClient:     Bytes Read=10502261
14/07/28 05:15:48 INFO mapred.JobClient:   FileSystemCounters
14/07/28 05:15:48 INFO mapred.JobClient:     FILE_BYTES_READ=930
14/07/28 05:15:48 INFO mapred.JobClient:     HDFS_BYTES_READ=10502381
14/07/28 05:15:48 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=59744
14/07/28 05:15:48 INFO mapred.JobClient:   Map-Reduce Framework
14/07/28 05:15:48 INFO mapred.JobClient:     Map output materialized bytes=924
14/07/28 05:15:48 INFO mapred.JobClient:     Map input records=288781
14/07/28 05:15:48 INFO mapred.JobClient:     Spilled Records=102
14/07/28 05:15:48 INFO mapred.JobClient:     Map output bytes=4620496
14/07/28 05:15:48 INFO mapred.JobClient:     Total committed heap usage (bytes)=157810688
14/07/28 05:15:48 INFO mapred.JobClient:     CPU time spent (ms)=2120
14/07/28 05:15:48 INFO mapred.JobClient:     Map input bytes=10965940
14/07/28 05:15:48 INFO mapred.JobClient:     SPLIT_RAW_BYTES=120
14/07/28 05:15:48 INFO mapred.JobClient:     Combine input records=288781
14/07/28 05:15:48 INFO mapred.JobClient:     Combine output records=51
14/07/28 05:15:48 INFO mapred.JobClient:     Physical memory (bytes) snapshot=196898816
14/07/28 05:15:48 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=759816192
14/07/28 05:15:48 INFO mapred.JobClient:     Map output records=288781
14/07/28 05:15:48 INFO mapred.JobClient: Job Failed: # of failed Reduce Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201407230401_0013_r_000000
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
	at edu.umn.cs.spatialHadoop.operations.ConvexHull.convexHullMapReduce(ConvexHull.java:295)
	at edu.umn.cs.spatialHadoop.operations.ConvexHull.main(ConvexHull.java:333)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at edu.umn.cs.spatialHadoop.operations.Main.main(Main.java:97)