MIVisionX is a comprehensive set of computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit. AMD MIVisionX also delivers a highly optimized open-source implementation of the Khronos OpenVX™ and OpenVX™ Extensions.


MIVisionX Python ML Model Validation Tool

The MIVisionX ML Model Validation Tool uses pre-trained ONNX / NNEF / Caffe models to analyze, summarize, and validate model performance.

Pre-trained models in ONNX, NNEF, & Caffe formats are supported by MIVisionX. The app first converts the pre-trained model to AMD's Neural Net Intermediate Representation (NNIR). Once the model has been translated into AMD NNIR (AMD's internal open format), the optimizer traverses the NNIR and applies various optimizations that allow the model to be deployed onto the target hardware most efficiently. Finally, the AMD NNIR is converted into OpenVX C code, which is compiled and wrapped with a Python API to run on any targeted hardware.
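The conversion flow above can be sketched with the MIVisionX model compiler scripts. This is a hypothetical walkthrough, not part of this tool's workflow: the install path, folder names, and batch size shown here are assumptions, and script locations may differ between MIVisionX releases.

```shell
# Assumed model compiler location for a default ROCm install; adjust as needed
MODEL_COMPILER=/opt/rocm/libexec/mivisionx/model_compiler/python

# Step 1: convert a pre-trained model (ONNX here) into AMD NNIR
python $MODEL_COMPILER/onnx_to_nnir.py model.onnx nnir-model

# Step 2: apply NNIR-level optimizations (here, updating the batch size)
python $MODEL_COMPILER/nnir_update.py --batch-size 64 nnir-model nnir-model-opt

# Step 3: generate OpenVX C code from the optimized NNIR
python $MODEL_COMPILER/nnir_to_openvx.py nnir-model-opt openvx-model
```

Caffe and NNEF models follow the same flow through their corresponding conversion scripts. The validation tool drives these steps for you; the sketch only illustrates what happens under the hood.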

Analyzer Index


Prerequisites

	pip install pyqtgraph
	export PATH=$PATH:/opt/rocm/bin
	export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib

NOTE: To get the best performance from the validation toolkit, use the RPP develop branch.

sudo apt-get install libomp-dev
export OMP_NUM_THREADS=<number of threads to use>

Use MIVisionX Docker

MIVisionX provides developers with docker images for Ubuntu 16.04, Ubuntu 18.04, CentOS 7.5, & CentOS 7.6. Using these docker images, developers can quickly prototype and build applications without being locked into a single system setup or losing valuable time figuring out the dependencies of the underlying software.

Docker with display option

sudo docker pull mivisionx/ubuntu-18.04:latest
xhost +local:root
sudo docker run -it --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host --env DISPLAY=unix$DISPLAY --privileged --volume $XAUTH:/root/.Xauthority --volume /tmp/.X11-unix/:/tmp/.X11-unix mivisionx/ubuntu-18.04:latest
export PATH=$PATH:/opt/rocm/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib
runvx /opt/rocm/share/mivisionx/samples/gdf/canny.gdf


Command Line Interface (CLI)

usage: python mivisionx_validation_tool.py [-h]
                                           --model_format MODEL_FORMAT
                                           --model_name MODEL_NAME
                                           --model MODEL
                                           --model_input_dims MODEL_INPUT_DIMS
                                           --model_output_dims MODEL_OUTPUT_DIMS
                                           --model_batch_size MODEL_BATCH_SIZE
                                           --rocal_mode ROCAL_MODE
                                           --label LABEL
                                           --output_dir OUTPUT_DIR
                                           --image_dir IMAGE_DIR
                                           [--image_val IMAGE_VAL]
                                           [--hierarchy HIERARCHY]
                                           [--add ADD]
                                           [--multiply MULTIPLY]
                                           [--fp16 FP16]
                                           [--replace REPLACE]
                                           [--verbose VERBOSE]

Usage help

  -h, --help            show this help message and exit
  --model_format        pre-trained model format, options:caffe/onnx/nnef [required]
  --model_name          model name                                        [required]
  --model               pre_trained model file/folder                     [required]
  --model_input_dims    c,h,w - channel,height,width                      [required]
  --model_output_dims   c,h,w - channel,height,width                      [required]
  --model_batch_size    n - batch size                                    [required]
  --rocal_mode          rocal mode (1/2/3)                                [required]
  --label               labels text file                                  [required]
  --output_dir          output dir to store ADAT results                  [required]
  --image_dir           image directory for analysis                      [required]
  --image_val           image list with ground truth                      [optional]
  --hierarchy           AMD proprietary hierarchical file                 [optional]
  --add                 input preprocessing factor      [optional - default:[0,0,0]]
  --multiply            input preprocessing factor      [optional - default:[1,1,1]]
  --fp16                quantize model to FP16               [optional - default:no]
  --replace             replace/overwrite model              [optional - default:no]
  --verbose             verbose                              [optional - default:no]
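As a concrete illustration of the required options, a hypothetical invocation might look like the following. The model, label, output, and image paths are placeholders, and the 1,000-class 224x224 dimensions are only illustrative; substitute your own model's values.

```shell
python mivisionx_validation_tool.py \
    --model_format caffe \
    --model_name vgg16 \
    --model VGG_ILSVRC_16_layers.caffemodel \
    --model_input_dims 3,224,224 \
    --model_output_dims 1000,1,1 \
    --model_batch_size 64 \
    --rocal_mode 1 \
    --label labels.txt \
    --output_dir outputFolder \
    --image_dir imageFolder
```

The optional --add and --multiply preprocessing factors can be appended when the model expects normalized inputs; judging by the defaults shown above, they take per-channel bracketed lists.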

Graphical User Interface (GUI)

usage: python mivisionx_validation_tool.py

Supported Pre-Trained Model Formats

Caffe, NNEF, & ONNX pre-trained model formats are supported.

Sample 1 - Using Pre-Trained ONNX Model

Run SqueezeNet on sample images

	cd && mkdir sample-1 && cd sample-1
	git clone https://github.com/kiritigowda/MIVisionX-validation-tool.git
	wget https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
	tar -xvf squeezenet.tar.gz

Note: pre-trained model - squeezenet/model.onnx
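With the SqueezeNet model downloaded above, a hypothetical validation run could look like this. The label file and image directory are placeholders you must supply yourself, and the input/output dimensions assume the 1,000-class ImageNet model.

```shell
cd ~/sample-1/MIVisionX-validation-tool
python mivisionx_validation_tool.py \
    --model_format onnx \
    --model_name squeezenet \
    --model ../squeezenet/model.onnx \
    --model_input_dims 3,224,224 \
    --model_output_dims 1000,1,1 \
    --model_batch_size 64 \
    --rocal_mode 1 \
    --label ../labels.txt \
    --output_dir ../output \
    --image_dir ../images
```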

Sample 2 - Using Pre-Trained Caffe Model

Run VGG 16 on sample images

	cd && mkdir sample-2 && cd sample-2
	git clone https://github.com/kiritigowda/MIVisionX-validation-tool.git
	wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel

Sample 3 - Using Pre-Trained NNEF Model

Run VGG 16 on sample images

	cd && mkdir sample-3 && cd sample-3
	git clone https://github.com/kiritigowda/MIVisionX-validation-tool.git
	mkdir ~/sample-3/vgg16
	cd ~/sample-3/vgg16
	wget https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.onnx.nnef.tgz
	tar -xvf vgg16.onnx.nnef.tgz