Leaderboard¶
Tasks¶
There are three tasks within the MillionTrees package.
Official¶
The first task is to build the best global detector for individual trees given a set of training and test data. Datasets are split randomly, so the training and test splits contain imagery from the same localities. This is consistent with how most applied users engage with models: fine-tuning backbone models with sample data from a desired locality.
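For example, loading the official split might look like the following. This is a minimal sketch assuming a WILDS-style API consistent with the submission example further down this page; the import paths and the "TreeBoxes" dataset name are assumptions, so check the package documentation for the exact interface.

    # Minimal sketch of the official-split workflow. Import paths and the
    # dataset name are assumptions; consult the MillionTrees docs.
    from milliontrees import get_dataset
    from milliontrees.common.data_loaders import get_train_loader

    dataset = get_dataset("TreeBoxes", download=True)
    train_dataset = dataset.get_subset("train")  # official random split
    train_loader = get_train_loader("standard", train_dataset, batch_size=16)

    for metadata, images, targets in train_loader:
        # Fine-tune your backbone detector here; the model, loss,
        # and optimizer are up to you.
        ...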
Zero-shot¶
The second task is to create a zero-shot detection system that generalizes across geography and acquisition conditions. Selected datasets are held out from training completely and used for evaluation in new conditions. This is a challenging task with no local training data.
Cross-geometry¶
Off-the-shelf tools often limit users to a single annotation type: there are ‘point’ models, ‘box’ models, and ‘polygon’ models. To create truly global models for biological inference, we need models that can use all available data, not just one annotation geometry. In particular, polygon annotations are very time-consuming to create, but are often desirable for downstream use cases. We opted against tasks that predict simpler geometries from polygon sources, for example polygons to points, as this is an unrealistic, or at least very uncommon, downstream use case. A sketch of how the annotation geometries relate follows the task list below.
Boxes to Polygons¶
All box sources are used for training, and models are evaluated on all polygon sources. There is no local data from the test localities in train.
Points to Polygons¶
All point sources are used for training, and models are evaluated on all polygon sources.
Points to Boxes¶
All point sources are used for training, and models are evaluated on all box sources.
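The conversions between geometries are themselves simple; the benchmark's challenge is learning from the weaker source geometry. As an illustration, here is a minimal sketch using shapely (an assumption for illustration; shapely is not part of the benchmark API) of how the three geometries relate: a box reduces to a centroid point, and a polygon reduces to its bounding box.

    # Illustrative geometry relationships using shapely (an assumption;
    # not part of the MillionTrees API).
    from shapely.geometry import Polygon, box

    # A box annotation given as (xmin, ymin, xmax, ymax)
    tree_box = box(10, 10, 30, 40)
    point = tree_box.centroid               # box -> point (weakest geometry)
    print(point.x, point.y)                 # 20.0 25.0

    # A polygon crown annotation
    crown = Polygon([(0, 0), (4, 0), (5, 3), (2, 6), (0, 4)])
    xmin, ymin, xmax, ymax = crown.bounds   # polygon -> box
    print(xmin, ymin, xmax, ymax)           # 0.0 0.0 5.0 6.0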
Submissions¶
Submit to the leaderboard¶
Once you have trained a model and evaluated its performance, you can submit your results to the MillionTrees leaderboard. Here’s how:
Create a public repository with your code and model training scripts. Make sure to include:
Clear instructions for reproducing your results
Requirements file listing all dependencies
Training configuration files/parameters
Code for data preprocessing and augmentation
Model architecture definition
Evaluation code
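For example, a minimal requirements file might look like the following (illustrative only; pin the exact versions you used):

    # requirements.txt -- illustrative; pin the versions you actually used
    torch>=2.0
    torchvision
    milliontrees
    numpy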
Generate predictions on the test split:
    test_dataset = dataset.get_subset("test")  # Use test split
    test_loader = get_eval_loader("standard", test_dataset, batch_size=16)
    predictions = []
    for metadata, images, _ in test_loader:
        pred = model(images)
        predictions.append(pred)
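If your model is a PyTorch module, it is usually worth running inference in eval mode with gradients disabled. A minimal variant of the loop above, assuming a torch model (the benchmark itself is framework-agnostic):

    # Optional inference hygiene, assuming a PyTorch model.
    import torch

    model.eval()
    predictions = []
    with torch.no_grad():
        for metadata, images, _ in test_loader:
            pred = model(images)
            predictions.append(pred)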
Save visual examples of your model’s predictions:
    # Save a few example predictions
    dataset.visualize_predictions(
        predictions[:5],
        save_dir="prediction_examples/"
    )
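Before opening a pull request, compute your performance metrics on the test split. A hedged sketch, assuming a WILDS-style dataset.eval interface; the exact method name and arguments in MillionTrees may differ, so check the package docs:

    # Hypothetical metric computation in the style of WILDS benchmarks;
    # verify the actual evaluation API in the MillionTrees documentation.
    results, results_str = dataset.eval(predictions, test_dataset)
    print(results_str)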
Submit a pull request to the MillionTrees repository with:
Link to your code repository
Model description and approach
Performance metrics on test set
Example prediction visualizations
Instructions for reproducing results
Official¶
Name | Citation | Official Split
--- | --- | ---
Zero-shot¶
Name | Citation | Official Split
--- | --- | ---