Welcome to Ianvs! Ianvs aims to test the performance of distributed synergy AI solutions following recognized standards, in order to facilitate more efficient and effective development. This quick start helps you test your algorithm on Ianvs with a simple example of semantic segmentation based on lifelong learning. It reduces the manual procedures to a few steps so that you can build and start your distributed synergy AI solution development within minutes.
Before using Ianvs, you should have the device ready:
- One machine is all you need: a laptop or a virtual machine is sufficient, and a cluster is not necessary
- 2 CPUs or more
- 4GB+ free memory, depending on the algorithm and simulation settings
- 10GB+ free disk space
- Internet connection for GitHub, pip, etc.
- Python 3.6+ installed
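A minimal way to sanity-check these requirements on Linux (standard utilities; adjust for your distribution):
python3 --version # expect 3.6+; this guide uses 3.9
nproc # expect 2 CPUs or more
free -h # expect 4GB+ free memory
df -h . # expect 10GB+ free disk space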
In this example, we are using the Linux platform with Python 3.9. If you are using Windows, most steps should still apply, but a few commands and package requirements might differ.
First, we download the code of Ianvs. Assuming that we are using /ianvs as the workspace, Ianvs can be cloned with Git:
mkdir /ianvs
cd /ianvs # You can use another preferred path
mkdir project
cd project
git clone https://github.com/kubeedge/ianvs.git
Then, we install the third-party dependencies for Ianvs.
sudo apt-get update
sudo apt-get install libgl1-mesa-glx -y
python -m pip install --upgrade pip
cd ianvs
python -m pip install ./examples/resources/third_party/*
python -m pip install -r requirements.txt
We are now ready to install Ianvs.
python setup.py install
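To confirm the installation succeeded, you can check that the ianvs command-line entry point is on your PATH (a quick smoke test; -h assumes the standard help flag is available):
which ianvs
ianvs -h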
Datasets and models can be large. To avoid an oversized Ianvs repository on GitHub, the Ianvs code base does not include the original datasets, so developers do not need to download unnecessary datasets for a quick start.
mkdir /data
cd /data
mkdir datasets
cd datasets
Download the dataset from https://kubeedge-ianvs.github.io/download.html and place it in this directory.
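For example, if the downloaded archive is named dataset.zip (a hypothetical name; use the actual filename served by the download page), it can be unpacked in place:
unzip dataset.zip -d /data/datasets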
The URL address of this dataset should then be filled in the configuration file testenv.yaml. In this quick start, we have done that for you; interested readers can refer to testenv.yaml for more details.
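If you would like to double-check the configured dataset paths, you can grep the test environment config (the exact location of testenv.yaml under the example directory may vary between Ianvs versions):
grep -n "url" /ianvs/project/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testenv/testenv.yaml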
The related algorithm is also ready in this quick start.
export PYTHONPATH=$PYTHONPATH:/ianvs/project/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/RFNet
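The export above only lasts for the current shell session; to make it persistent, you can append it to your shell profile (optional, assuming bash):
echo 'export PYTHONPATH=$PYTHONPATH:/ianvs/project/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/RFNet' >> ~/.bashrc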
The URL address of this algorithm should then be filled in the configuration file algorithm.yaml. In this quick start, we have done that for you; interested readers can refer to algorithm.yaml for more details.
If you want to run the cloud-edge collaboration process based on a large vision model, follow the steps below to additionally install the large vision model. If you only want to run the basic lifelong learning process, you can skip them.
In this example, we use the Segment Anything Model (SAM) as the cloud large vision model, so we need to install SAM with the following instructions:
cd /ianvs/project
git clone https://github.com/facebookresearch/segment-anything.git
cd segment-anything
python -m pip install -e .
Then, we need to download the pretrained SAM model:
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
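As a quick smoke test, you can try building the model from the downloaded checkpoint (run from the segment-anything directory; loading the ViT-H weights needs a few GB of memory):
python -c "from segment_anything import sam_model_registry; sam_model_registry['vit_h'](checkpoint='sam_vit_h_4b8939.pth'); print('SAM checkpoint loaded')"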
In order to save the inference results, we need to install mmcv and mmdetection with the following instructions:
python -m pip install https://download.openmmlab.com/mmcv/dist/cu118/torch2.0.0/mmcv-2.0.0-cp39-cp39-manylinux1_x86_64.whl
cd /ianvs/project
git clone https://github.com/hsj576/mmdetection.git
cd mmdetection
python -m pip install -v -e .
P.S. mmcv depends heavily on the versions of PyTorch and CUDA installed. The installation of mmcv should follow this link.
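After installation, you can verify that both packages import cleanly and print their versions:
python -c "import mmcv, mmdet; print(mmcv.__version__, mmdet.__version__)"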
In case your computer cannot run the SAM model, we have prepared a cache of all the SAM inference results on the Cloud-Robotics dataset. You can download the cache from this link and put the cache file in "/ianvs/project/":
cp cache.pickle /ianvs/project
By using the cache, you can simulate edge-cloud joint inference without installing the SAM model.
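To confirm the cache file deserializes correctly, a minimal check (assuming it is a standard pickle file):
python -c "import pickle; cache = pickle.load(open('/ianvs/project/cache.pickle', 'rb')); print(type(cache))"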
Besides that, we also provide a pretrained RFNet model at this link, which you can use if you don't want to train the RFNet model from scratch. This step is optional:
cd /ianvs/project
mkdir pretrain
cp pretrain_model.pth /ianvs/project/pretrain
Then, in /ianvs/project/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/RFNet/utils/args.py, set:
self.resume = '/ianvs/project/pretrain/pretrain_model.pth'
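Equivalently, a one-line edit from the shell (assuming self.resume is assigned on a single line in args.py):
sed -i "s|self.resume = .*|self.resume = '/ianvs/project/pretrain/pretrain_model.pth'|" /ianvs/project/ianvs/examples/robot/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/RFNet/utils/args.py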
We are now ready to run Ianvs for benchmarking.
To run the basic lifelong learning process:
cd /ianvs/project/ianvs
ianvs -f examples/robot/lifelong_learning_bench/semantic-segmentation/benchmarkingjob-simple.yaml
Finally, the user can check the benchmarking results on the console and in the output path (e.g., /ianvs/lifelong_learning_bench/workspace) defined in the benchmarking config file (e.g., benchmarkingjob.yaml). In this quick start, we have done all the configuration for you; interested readers can refer to benchmarkingjob.yaml for more details.
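You can also browse the output directory directly to inspect the per-run artifacts (the exact layout depends on the benchmarking config):
ls /ianvs/lifelong_learning_bench/workspace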
The final output might look like this:
rank | algorithm | accuracy | BWT | FWT | paradigm | basemodel | task_definition | task_allocation | basemodel-learning_rate | basemodel-epochs | task_definition-origins | task_allocation-origins | time | url |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | rfnet_lifelong_learning | 0.2970033189775575 | 0.04239649121511442 | 0.02299711942108413 | lifelonglearning | BaseModel | TaskDefinitionByOrigin | TaskAllocationByOrigin | 0.0001 | 1 | ['front', 'garden'] | ['front', 'garden'] | 2023-05-24 15:07:57 | /ianvs/lifelong_learning_bench/robot-workspace-bwt/benchmarkingjob/rfnet_lifelong_learning/efdc47a2-f9fb-11ed-8f8b-0242ac110007 |
To run the large vision model based cloud-edge collaboration process:
cd /ianvs/project/ianvs
ianvs -f examples/robot/lifelong_learning_bench/semantic-segmentation/benchmarkingjob-sam.yaml
Finally, the user can check the benchmarking results on the console and in the output path (e.g., /ianvs/lifelong_learning_bench/workspace) defined in the benchmarking config file (e.g., benchmarkingjob.yaml). In this quick start, we have done all the configuration for you; interested readers can refer to benchmarkingjob.yaml for more details.
The final output might look like this:
rank | algorithm | accuracy | Task_Avg_Acc | paradigm | basemodel | task_definition | task_allocation | unseen_sample_recognition | basemodel-learning_rate | basemodel-epochs | task_definition-origins | task_allocation-origins | unseen_sample_recognition-threhold | time | url |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | sam_rfnet_lifelong_learning | 0.7052917006987501 | 0.6258875117354328 | lifelonglearning | BaseModel | TaskDefinitionByOrigin | TaskAllocationByOrigin | HardSampleMining | 0.0001 | 1 | ['front', 'garden'] | ['front', 'garden'] | 0.95 | 2023-08-24 12:43:19 | /ianvs/sam_bench/robot-workspace/benchmarkingjob/sam_rfnet_lifelong_learning/9465c47a-4235-11ee-8519-ec2a724ccd3e |
This ends the quick start experiment.
If any problems occur, you can refer to the issue page on GitHub for help, and you are also welcome to raise new issues.
Enjoy your journey on Ianvs!