
MIM: MIM Installs OpenMMLab Packages

MIM provides a unified interface for launching and installing OpenMMLab projects and their extensions, and managing the OpenMMLab model zoo.

Major Features

  • Package Management

You can use MIM to manage OpenMMLab codebases and install or uninstall them conveniently.

  • Model Management

You can use MIM to manage the OpenMMLab model zoo, e.g., download checkpoints by name or search for checkpoints that meet specific criteria.

  • Unified Entrypoint for Scripts

You can execute any script provided by any OpenMMLab codebase with unified commands. Training, testing and inference become easier than ever. Besides, you can use the gridsearch command for vanilla hyper-parameter search (see the sketch below).
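
As a taste of how these pieces compose, here is a minimal end-to-end sketch that uses only the Python API calls documented in the command sections below:

  from mim import install, download, train

  # Install a codebase; mmcv is pulled in automatically if it is missing.
  install('mmcls')

  # Fetch a checkpoint (and its config) by name into the current directory.
  download('mmcls', ['resnet18_8xb16_cifar10'], dest_dir='.')

  # Train on a single GPU; extra flags are forwarded to mmcls's own script.
  train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=1,
        other_args='--work-dir tmp')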

Changelog

v0.1.1 was released on 13/6/2021.

Customization

You can use a .mimrc file for customization. Currently, we support customizing the default values of each sub-command. Please refer to customization.md for details.

Build custom projects with MIM

We provide some examples of how to build custom projects based on OpenMMLab codebases and MIM in MIM-Example. Users can focus on developing new components without worrying about copying code and scripts from existing codebases; MIM helps integrate and run the new project.

Installation

Please refer to installation.md for installation.

Command

1. install

  • command
  # install the latest version of mmcv-full
  > mim install mmcv-full  # wheel
  # install 1.3.1
  > mim install mmcv-full==1.3.1
  # install master branch
  > mim install mmcv-full -f https://github.com/open-mmlab/mmcv.git

  # install the latest version of mmcls
  > mim install mmcls
  # install 0.11.0
  > mim install mmcls==0.11.0
  # install master branch
  > mim install mmcls -f https://github.com/open-mmlab/mmclassification.git
  # install local repo
  > git clone https://github.com/open-mmlab/mmclassification.git
  > cd mmclassification
  > mim install .

  # install extension based on OpenMMLab
  > mim install mmcls-project -f https://github.com/xxx/mmcls-project.git
  • api
  from mim import install

  # install the latest version of mmcv-full
  install('mmcv-full')

  # install mmcls; mmcv will be installed automatically if it is missing
  install('mmcls')

  # install from the master branch or a pinned version via a find URL
  install('mmcv-full', find_url='https://github.com/open-mmlab/mmcv.git')
  install('mmcv-full==1.3.1', find_url='https://github.com/open-mmlab/mmcv.git')

  # install extension based on OpenMMLab
  install('mmcls-project', find_url='https://github.com/xxx/mmcls-project.git')

2. uninstall

  • command
  # uninstall mmcv
  > mim uninstall mmcv-full

  # uninstall mmcls
  > mim uninstall mmcls
  • api
  from mim import uninstall

  # uninstall mmcv
  uninstall('mmcv-full')

  # uninstall mmcls
  uninstall('mmcls')

3. list

  • command
  # list installed OpenMMLab packages
  > mim list
  # also list packages that are available for installation
  > mim list --all
  • api
  from mim import list_package

  # list installed packages
  list_package()
  # also list packages that are available for installation
  list_package(True)

4. search

  • command
  # show models of the locally installed mmcls
  > mim search mmcls
  # show models of a specific version on the remote (need not be installed)
  > mim search mmcls==0.11.0 --remote
  # look up a config by name
  > mim search mmcls --config resnet18_8xb16_cifar10
  # search models whose name contains "resnet"
  > mim search mmcls --model resnet
  # search models trained on CIFAR-10
  > mim search mmcls --dataset cifar-10
  # list the fields that can be queried
  > mim search mmcls --valid-field
  # filter by conditions; ',' and ' ' both act as AND
  > mim search mmcls --condition 'batch_size>45,epochs>100'
  > mim search mmcls --condition 'batch_size>45 epochs>100'
  > mim search mmcls --condition '128<batch_size<=256'
  # sort results by the given fields
  > mim search mmcls --sort batch_size epochs
  # show only the given fields / hide the given fields
  > mim search mmcls --field epochs batch_size weight
  > mim search mmcls --exclude-field weight paper
  • api
  from mim import get_model_info

  get_model_info('mmcls')
  get_model_info('mmcls==0.11.0', local=False)
  get_model_info('mmcls', models=['resnet'])
  get_model_info('mmcls', training_datasets=['cifar-10'])
  get_model_info('mmcls', filter_conditions='batch_size>45,epochs>100')
  get_model_info('mmcls', filter_conditions='batch_size>45 epochs>100')
  get_model_info('mmcls', filter_conditions='128<batch_size<=256')
  get_model_info('mmcls', sorted_fields=['batch_size', 'epochs'])
  get_model_info('mmcls', shown_fields=['epochs', 'batch_size', 'weight'])
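
As the parallel examples suggest, the keyword arguments mirror the CLI flags: --remote corresponds to local=False, --model to models, --dataset to training_datasets, --condition to filter_conditions, --sort to sorted_fields, and --field to shown_fields.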

5. download

  • command
  > mim download mmcls --config resnet18_8xb16_cifar10
  > mim download mmcls --config resnet18_8xb16_cifar10 --dest .
  • api
  from mim import download

  download('mmcls', ['resnet18_8xb16_cifar10'])
  download('mmcls', ['resnet18_8xb16_cifar10'], dest_dir='.')

6. train

  • command
  # Train models on a single server with CPU by setting `gpus` to 0 and
  # `launcher` to 'none' (if applicable). The training script of the
  # corresponding codebase will fail if it doesn't support CPU training.
  > mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0
  # Train models on a single server with one GPU
  > mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1
  # Train models on a single server with 4 GPUs and pytorch distributed
  > mim train mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 4 \
      --launcher pytorch
  # Train models on a slurm HPC with one 8-GPU node
  > mim train mmcls resnet101_b16x8_cifar10.py --launcher slurm --gpus 8 \
      --gpus-per-node 8 --partition partition_name --work-dir tmp
  # Print help messages of sub-command train
  > mim train -h
  # Print help messages of sub-command train and the training script of mmcls
  > mim train mmcls -h
  • api
  from mim import train

  train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=0,
        other_args='--work-dir tmp')
  train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=1,
        other_args='--work-dir tmp')
  train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=4,
        launcher='pytorch', other_args='--work-dir tmp')
  train(repo='mmcls', config='resnet18_8xb16_cifar10.py', gpus=8,
        launcher='slurm', gpus_per_node=8, partition='partition_name',
        other_args='--work-dir tmp')
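
In both forms, anything meant for the codebase's own training script (e.g. --work-dir above) travels in other_args and is forwarded verbatim, which is also why `mim train mmcls -h` prints that script's options in addition to MIM's.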

7. test

  • command
  # Test models on a single server with 1 GPU, report accuracy
  > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 1 --metrics accuracy
  # Test models on a single server with 1 GPU, save predictions
  > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 1 --out tmp.pkl
  # Test models on a single server with 4 GPUs, pytorch distributed,
  # report accuracy
  > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 4 --launcher pytorch --metrics accuracy
  # Test models on a slurm HPC with one 8-GPU node, report accuracy
  > mim test mmcls resnet101_b16x8_cifar10.py --checkpoint \
      tmp/epoch_3.pth --gpus 8 --metrics accuracy --partition \
      partition_name --gpus-per-node 8 --launcher slurm
  # Print help messages of sub-command test
  > mim test -h
  # Print help messages of sub-command test and the testing script of mmcls
  > mim test mmcls -h
  • api
  from mim import test

  test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=1, other_args='--metrics accuracy')
  test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=1, other_args='--out tmp.pkl')
  test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=4, launcher='pytorch',
       other_args='--metrics accuracy')
  test(repo='mmcls', config='resnet101_b16x8_cifar10.py',
       checkpoint='tmp/epoch_3.pth', gpus=8, partition='partition_name',
       launcher='slurm', gpus_per_node=8, other_args='--metrics accuracy')

8. run

  • command
  # Get the FLOPs of a model
  > mim run mmcls get_flops resnet101_b16x8_cifar10.py
  # Publish a model
  > mim run mmcls publish_model input.pth output.pth
  # Train models on a slurm HPC with one GPU
  > srun -p partition --gres=gpu:1 mim run mmcls train \
      resnet101_b16x8_cifar10.py --work-dir tmp
  # Test models on a slurm HPC with one GPU, report accuracy
  > srun -p partition --gres=gpu:1 mim run mmcls test \
      resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy
  # Print help messages of sub-command run
  > mim run -h
  # Print help messages of sub-command run and list all available scripts
  # in codebase mmcls
  > mim run mmcls -h
  # Print help messages of sub-command run and the help message of the
  # training script in mmcls
  > mim run mmcls train -h
  • api
  from mim import run

  run(repo='mmcls', command='get_flops',
      other_args='resnet101_b16x8_cifar10.py')
  run(repo='mmcls', command='publish_model',
      other_args='input.pth output.pth')
  run(repo='mmcls', command='train',
      other_args='resnet101_b16x8_cifar10.py --work-dir tmp')
  run(repo='mmcls', command='test',
      other_args='resnet101_b16x8_cifar10.py tmp/epoch_3.pth --metrics accuracy')
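
Unlike the train and test sub-commands, run does not allocate GPUs or manage launchers itself, which is why the slurm examples above wrap it in srun; it simply dispatches to whatever scripts the codebase ships, and `mim run mmcls -h` lists them.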

9. gridsearch

  • command
  # Parameter search on a single server with CPU by setting `gpus` to 0 and
  # `launcher` to 'none' (if applicable). The training script of the
  # corresponding codebase will fail if it doesn't support CPU training.
  > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 0 \
      --search-args '--optimizer.lr 1e-2 1e-3'
  # Parameter search on a single server with one GPU, search learning
  # rate
  > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
      --search-args '--optimizer.lr 1e-2 1e-3'
  # Parameter search on a single server with one GPU, search
  # weight_decay
  > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
      --search-args '--optimizer.weight_decay 1e-3 1e-4'
  # Parameter search on a single server with one GPU, search learning
  # rate and weight_decay
  > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 1 \
      --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 \
      1e-4'
  # Parameter search on a slurm HPC with one 8-GPU node, search learning
  # rate and weight_decay
  > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
      --partition partition_name --gpus-per-node 8 --launcher slurm \
      --search-args '--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay 1e-3 \
      1e-4'
  # Parameter search on a slurm HPC with one 8-GPU node, search learning
  # rate and weight_decay, max parallel jobs is 2
  > mim gridsearch mmcls resnet101_b16x8_cifar10.py --work-dir tmp --gpus 8 \
      --partition partition_name --gpus-per-node 8 --launcher slurm \
      --max-jobs 2 --search-args '--optimizer.lr 1e-2 1e-3 \
      --optimizer.weight_decay 1e-3 1e-4'
  # Print the help message of sub-command gridsearch
  > mim gridsearch -h
  # Print the help message of sub-command gridsearch and the help message
  # of the training script of codebase mmcls
  > mim gridsearch mmcls -h
  • api
  from mim import gridsearch

  gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=0,
             search_args='--optimizer.lr 1e-2 1e-3',
             other_args='--work-dir tmp')
  gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
             search_args='--optimizer.lr 1e-2 1e-3',
             other_args='--work-dir tmp')
  gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
             search_args='--optimizer.weight_decay 1e-3 1e-4',
             other_args='--work-dir tmp')
  gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=1,
             search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                         ' 1e-3 1e-4',
             other_args='--work-dir tmp')
  gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
             partition='partition_name', gpus_per_node=8, launcher='slurm',
             search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                         ' 1e-3 1e-4',
             other_args='--work-dir tmp')
  gridsearch(repo='mmcls', config='resnet101_b16x8_cifar10.py', gpus=8,
             partition='partition_name', gpus_per_node=8, launcher='slurm',
             max_workers=2,
             search_args='--optimizer.lr 1e-2 1e-3 --optimizer.weight_decay'
                         ' 1e-3 1e-4',
             other_args='--work-dir tmp')
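
Each argument inside --search-args (search_args in the API) lists its candidate values, and gridsearch launches one job per combination: the combined examples above sweep 2 learning rates × 2 weight decays = 4 training runs, while --max-jobs (max_workers in the API) caps how many of them run in parallel.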

Contributing

We appreciate all contributions that improve MIM. Please refer to CONTRIBUTING.md for the contributing guidelines.

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM installs OpenMMLab packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMDeploy: OpenMMLab model deployment framework.