Private user   •   6 months ago

Expectations around published release (binary/other artifacts)

Hi! What are the expectations for the actual published release, i.e. how should we run inference on the model? Are any of the formats below preferred, or is there something I missed in the rules?

- A standalone binary executable compiled for Linux arm64 (aarch64)?
- A Docker image, with weights downloadable from, say, Hugging Face or anywhere else?
- A .pte PyTorch ExecuTorch file with an inference script?
- A .pt2 PyTorch exported model?

In each of the cases above, should any preprocessing steps be provided as instructions, or should they be baked into the model itself as a layer?
Or are we expected to develop a demo mobile application showcasing the model's features on its own?

A lot of questions re deployment and inference!


  •   •   6 months ago

    Entries should be capable of showing off your work entirely on their own; this means that any application code and instructions should be included in your submission.

    The exact format of your submission isn't specified, as long as it includes everything we would need to successfully run it. So you can provide a fully packaged APK, or an arm64 binary with instructions to run it, or source code with instructions to build and run it, etc. We will make every attempt to get your submission working according to your directions, but the simpler you can make them the better.
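As a concrete sketch of the "source or binary with instructions" route, a self-contained submission might be laid out like this (every name here, `run.sh`, `weights/`, `README.md`, is a placeholder of my own, not a contest requirement):

```shell
# Hypothetical self-contained submission layout; all names are placeholders.
mkdir -p submission/weights

# A README with the exact commands a reviewer should run.
cat > submission/README.md <<'EOF'
# How to run
1. chmod +x run.sh
2. ./run.sh --weights weights/model.bin --input sample.jpg
EOF

# Entry-point script standing in for the real inference launcher.
printf '#!/bin/sh\necho "inference placeholder"\n' > submission/run.sh
chmod +x submission/run.sh

ls submission
```

The point, per the answer above, is that everything a reviewer needs (code, weights or a way to fetch them, and step-by-step instructions) lives inside the one artifact, and the fewer steps the better.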

  • Private user   •   6 months ago

    Thanks for the clarification!

Comments are closed.