
Open solution to the Mapping Challenge :earth_americas:

Results: 55 open issues in open-solution-mapping-challenge

Running `python main.py prepare_masks` and then `python main.py train --pipeline_name unet_weighted`, I got the error `FileNotFoundError: [Errno 2] File b'data/meta/metadata.csv' does not exist: b'data/meta/metadata.csv'`. Please tell me how to solve...
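That error usually means the metadata CSV has not been generated yet, so training has nothing to read. As a minimal sketch (the path is taken from the error message; the exact preparation command that builds the file is an assumption about your setup), a pre-flight check could look like:

```python
from pathlib import Path

# Path copied from the error message; adjust if your data directory differs.
META_CSV = Path("data/meta/metadata.csv")

def check_metadata(path: Path = META_CSV) -> bool:
    """Return True if the metadata file the training step expects exists."""
    if not path.is_file():
        # The preparation step that writes this CSV must be run first.
        print(f"{path} is missing -- generate it before running train.")
        return False
    return True
```

Running this before `train` turns a mid-pipeline crash into an early, readable message.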

When running locally in pure Python with `python main.py -- train_evaluate_predict --pipeline_name unet --chunk_size 5000`, the following error occurs; any help? > neptune: Executing in Offline Mode. neptune: Executing in...

Excellent results can be achieved using the Developers' fully-trained model (found here: https://app.neptune.ml/neptune-ml/Mapping-Challenge/files) to predict on the test_images set provided as part of the mapping challenge. However, over the past...

Good day, I would like to train on my own map images. However, I noticed that the input image size is 300 × 300, following the crowdAI (AIcrowd) Mapping Challenge...
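Since the pipeline expects 300 × 300 inputs, larger custom map images are typically split into tiles of that size before training. A minimal pure-Python sketch of the tiling arithmetic (the edge-handling strategy is an assumption, and it assumes the image is at least one tile wide and tall):

```python
def tile_origins(width: int, height: int, tile: int = 300):
    """Yield (x, y) top-left corners of tile x tile crops covering the image.

    Tiles are laid out on a regular grid; the last column/row is shifted
    inward so every crop stays inside the image, which means edge tiles
    may overlap their neighbours slightly.
    """
    xs = list(range(0, max(width - tile, 0) + 1, tile))
    ys = list(range(0, max(height - tile, 0) + 1, tile))
    # Make sure the right and bottom edges are covered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    for y in ys:
        for x in xs:
            yield (x, y)
```

For example, a 650 × 300 image yields crops at x = 0, 300, and 350, with the last two overlapping by 250 pixels.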

Hello, I am interested in the loss function combined with size and distance, but I have no idea how it works. Can you give me more information (related paper, reference, and so...
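The distance part of such a weighted loss is usually in the spirit of the original U-Net paper, which weights each pixel by its distance to the two nearest object borders so that narrow gaps between buildings cost more to misclassify. A toy sketch of that per-pixel weight (the constants here are the paper's illustrative values, not necessarily this repo's):

```python
import math

def pixel_weight(d1: float, d2: float, w_class: float = 1.0,
                 w0: float = 10.0, sigma: float = 5.0) -> float:
    """U-Net-style distance weight:

        w = w_class + w0 * exp(-(d1 + d2)^2 / (2 * sigma^2))

    d1, d2 are a pixel's distances to the two nearest object borders,
    so pixels squeezed between two buildings get the largest weight.
    w0 and sigma are illustrative defaults, not the repo's tuned values.
    """
    return w_class + w0 * math.exp(-((d1 + d2) ** 2) / (2 * sigma ** 2))
```

The resulting weight map multiplies the per-pixel cross-entropy; a size term would analogously up-weight pixels belonging to small connected components.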

Has anyone uploaded pretrained models of the same? I don't have multiple GPUs to train, so it's taking a lot of time to train!

How would you go about training this data on a larger/more diverse data set? I've looked into some open source data from SpaceNet, and was wondering if I could use...

Running `train` crashes when the pipelines are collating results from running `transform` on the entire dataset.

bug

Sorry, I cannot find the 50000-example subset of the dataset. Should it be downloaded online, or generated from the full dataset?
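Such subsets are usually generated locally by sampling image ids from the full annotation file rather than downloaded separately. A minimal sketch, assuming you already have the full list of image ids (the helper name and seed are illustrative):

```python
import random

def sample_subset(image_ids, k=50_000, seed=1234):
    """Draw a reproducible k-element subset of image ids.

    `image_ids` would typically come from the full COCO-style annotation
    file; fixing the seed makes the subset reproducible across runs.
    """
    rng = random.Random(seed)
    return rng.sample(list(image_ids), k)
```

The sampled ids can then be used to filter the annotations down to a smaller training set.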

Hi, I'm sorry to bother you. I hit some problems while reproducing the experiment; the error is: > reader.py, line 178, in update_raw data = self.stream.read(size) UnicodeDecodeError: 'gbk'...
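A `UnicodeDecodeError: 'gbk'` usually means a UTF-8 file is being opened with the operating system's default codec (GBK on Chinese-locale Windows). A minimal workaround sketch, assuming the file in question is UTF-8 encoded, is to pass the encoding explicitly wherever the file is opened:

```python
def read_text_utf8(path: str) -> str:
    """Read a text file as UTF-8 regardless of the OS default encoding.

    On Chinese-locale Windows, open() defaults to the 'gbk' codec, which
    raises UnicodeDecodeError on UTF-8 data; passing encoding explicitly
    (optionally with errors='replace' for stray bytes) avoids that.
    """
    with open(path, encoding="utf-8", errors="replace") as fh:
        return fh.read()
```

The same `encoding="utf-8"` argument can be added to the failing `open()` call inside the library code if you cannot change the calling site.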