Using EvalAI: an open-source platform similar to Kaggle, but without a kernel fork feature, which is a bit of a pain
Posted by bonelee
With the official code at https://github.com/Cloud-CV/EvalAI I was never able to import the YAML configuration to host a competition (i.e. create a challenge on EvalAI using https://github.com/Cloud-CV/EvalAI-Starters), until I switched to a third-party fork: https://github.com/live-wire/EvalAI.
What follows is a short walkthrough of basic usage, starting with a small packaging sketch for the challenge configuration and the project's own introduction.
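A challenge is created by uploading a zip of its configuration (the YAML file plus the evaluation script). The snippet below is a small hypothetical helper that packs a Starters-style directory into such a zip; the directory layout and file names are assumptions based on the EvalAI-Starters repository, so adjust them to whatever your fork actually contains.

```python
# Hypothetical packaging helper: zips a Starters-style challenge directory
# (challenge_config.yaml, evaluation_script/, annotations/, ...) so the
# archive can be uploaded when creating a challenge. The layout is an
# assumption -- adapt it to your own fork of EvalAI-Starters.
import os
import zipfile


def build_challenge_zip(config_dir, out_path="challenge_config.zip"):
    """Walk config_dir and pack every file into a single zip archive."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(config_dir):
            for name in files:
                full_path = os.path.join(root, name)
                zf.write(full_path, arcname=os.path.relpath(full_path, config_dir))
    return out_path


if __name__ == "__main__":
    print(build_challenge_zip("EvalAI-Starters"))
```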
A question we’re often asked is: Doesn’t Kaggle already do this? The central differences are:
- Custom Evaluation Protocols and Phases: We have designed a versatile backend framework that can support user-defined evaluation metrics, multiple evaluation phases, and private and public leaderboards. (A sketch of such a user-defined evaluation script follows this list.)
- Faster Evaluation: The backend evaluation pipeline is engineered so that submissions can be evaluated in parallel using multiple cores on multiple machines via map-reduce frameworks, offering a significant performance boost over similar web AI-challenge platforms.
- Portability: Since the platform is open source, users have the freedom to host challenges on their own private servers rather than having to depend explicitly on cloud services such as AWS, Azure, etc.
- Easy Hosting: Hosting a challenge is streamlined. One can create a challenge on EvalAI using the intuitive UI (work in progress) or using a zip configuration file.
- Centralized Leaderboard: Whether challenge organizers host their challenge on EvalAI or on a forked version of EvalAI, they can send the results to the main EvalAI server. This helps build a centralized platform that keeps track of different challenges.
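To make the first point concrete, here is a rough sketch of the kind of user-defined evaluation script a challenge host supplies, loosely modelled on the EvalAI-Starters template; the function signature, phase codenames and result keys are assumptions taken from that template, not a verified API.

```python
# Sketch of a host-supplied evaluation script in the spirit of the
# EvalAI-Starters template. The signature, phase codenames and result keys
# are assumptions -- check the starter repository for the exact format.
import json


def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    """Score a participant's submission against the ground truth and
    return per-split metrics for the leaderboard."""
    with open(test_annotation_file) as f:
        ground_truth = json.load(f)  # e.g. {"question_id": "answer", ...}
    with open(user_submission_file) as f:
        predictions = json.load(f)

    # Toy metric: exact-match accuracy over the annotated items.
    correct = sum(1 for key, answer in ground_truth.items()
                  if predictions.get(key) == answer)
    accuracy = 100.0 * correct / max(len(ground_truth), 1)

    split = "dev_split" if phase_codename == "dev" else "test_split"
    return {"result": [{split: {"Accuracy": accuracy}}]}
```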
Goal
Our ultimate goal is to build a centralized platform to host, participate in, and collaborate on AI challenges organized around the globe, and we hope to help benchmark progress in AI.
Performance comparison
Some background: Last year, the Visual Question Answering (VQA) Challenge 2016 was hosted on another platform, and on average an evaluation would take ~10 minutes. EvalAI hosted this year's VQA Challenge 2017. The dataset for the VQA Challenge 2017 is twice as large; despite this, we've found that our parallelized backend only takes ~130 seconds to evaluate on the whole VQA 2.0 test set.
Installation Instructions
Setting up EvalAI on your local machine is straightforward. You can set up EvalAI using one of two methods:
Using Docker

You can also use Docker Compose to run all the components of EvalAI together. The steps are:

- Get the source code onto your machine via git:

  `git clone https://github.com/Cloud-CV/EvalAI.git evalai && cd evalai`

- Use your postgres username and password for the USER and PASSWORD fields in the settings/dev.py file (a sketch of that block follows these steps).

- Build and run the Docker containers. This might take a while. You should then be able to access EvalAI at localhost:8888:

  `docker-compose up --build`
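For reference, the credentials block in settings/dev.py usually looks like the standard Django DATABASES dictionary sketched below; only USER and PASSWORD are the fields the steps above ask you to change, and the other values are assumptions to check against your checkout.

```python
# Hypothetical excerpt from settings/dev.py. Only USER and PASSWORD need to
# match your local postgres setup; NAME, HOST and PORT below are assumptions
# based on a standard Django + PostgreSQL development configuration.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "evalai",
        "USER": "postgres",      # your postgres username
        "PASSWORD": "postgres",  # your postgres password
        "HOST": "localhost",
        "PORT": 5432,
    }
}
```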
Using a Virtual Environment

- Install Python 2.7.10 or above, git, PostgreSQL version >= 10.1, ElasticMQ (Amazon SQS is used in production) and virtualenv on your machine, if you don't have them already. If you are having trouble with PostgreSQL on Windows, check this link: postgresqlhelp.

- Get the source code on your machine via git:

  `git clone https://github.com/Cloud-CV/EvalAI.git evalai`

- Create a python virtual environment and install the python dependencies:

  ```
  cd evalai
  virtualenv venv
  source venv/bin/activate  # run this command every time before working on the project
  pip install -r requirements/dev.txt
  ```

- Create an empty postgres database:

  `sudo -i -u (username) createdb evalai`

- Change the PostgreSQL credentials in settings/dev.py and run the migrations. Use your postgres username and password for the USER and PASSWORD fields in dev.py (the settings sketch after the Docker steps above applies here as well). After changing the credentials, run the migrations with:

  `python manage.py migrate --settings=settings.dev`

- Seed the database with some fake data to work with:

  `python manage.py seed --settings=settings.dev`

  This command also creates a superuser (admin), a host user and a participant user with the following credentials:

  - SUPERUSER: username `admin`, password `password`
  - HOST USER: username `host`, password `password`
  - PARTICIPANT USER: username `participant`, password `password`

- That's it. Now you can run the development server at http://127.0.0.1:8000 (for serving the backend):

  `python manage.py runserver --settings=settings.dev`

- Make sure that node (>=7.x.x), npm (>=5.x.x) and bower (>=1.8.x) are installed globally on your machine, then install the npm and bower dependencies by running:

  ```
  npm install
  bower install
  ```

  If you are running npm install behind a proxy server, use `npm config set proxy http://proxy:port`.

- Now connect to the dev server at http://127.0.0.1:8888 (for serving the frontend):

  `gulp dev:runserver`

- That's it. Open a web browser and hit the URL http://127.0.0.1:8888.

- (Optional) If you want to see the whole pipeline in action, install the ElasticMQ queue service and start the worker in a new terminal window using the following command, which consumes the submissions made for every challenge:

  `python scripts/workers/submission_worker.py`
Note: so that newly added accounts can log in directly and join a team without verifying their email, I edited `accounts/permissions.py` (the change simply short-circuits the email-verification check):

```python
from allauth.account.models import EmailAddress
from rest_framework import permissions


class HasVerifiedEmail(permissions.BasePermission):
    """
    Permission class for if the user has verified the email or not
    """

    message = "Please verify your email first!"

    def has_permission(self, request, view):
        if request.user.is_anonymous:
            return True
        else:
            print("*******************email verify removed!!!!")
            return True  # added: always allow, skipping the verification check below
            if EmailAddress.objects.filter(user=request.user, verified=True).exists():
                return True
            else:
                return False
```
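If you would rather not patch the permission class, a lighter-touch alternative (just a sketch, assuming django-allauth's EmailAddress model as imported above) is to mark existing accounts' email addresses as verified from the Django shell:

```python
# Run inside: python manage.py shell --settings=settings.dev
# Marks every stored email address as verified so HasVerifiedEmail passes
# without modifying accounts/permissions.py.
from allauth.account.models import EmailAddress

EmailAddress.objects.update(verified=True)
```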
Run it with Docker:

`docker-compose up --build`

Then comes a long wait while all the dependencies and Docker images are pulled and installed. Finally, open localhost:8888 and you are done.