Continuing from the previous article: the CPU of the server I was using may not support certain operations, and since RKNN has no open-source SDK I had no idea how to debug it, so I simply rented a different server.
First, set up the environment; the steps don't need repeating here.
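As a quick sanity check that the toolkit is usable (a minimal sketch; the exact version string depends on the wheel you installed), constructing an RKNN object is enough:
# Hypothetical sanity check: confirm rknn-toolkit2 is importable
from rknn.api import RKNN

rknn = RKNN(verbose=False)  # creating the object succeeds only if the toolkit is installed
print('rknn-toolkit2 is available')
rknn.release()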
Model export
First, the export script:
# filename: onnx2rknn.py
import numpy as np
from rknn.api import RKNN

if __name__ == '__main__':
    # Target deployment platform
    platform = 'rv1106'
    # Input image size used during training
    Width = 28
    Height = 28
    # Change this to the path of your own model
    MODEL_PATH = '/home/ljl/mnist/mnist_cnn_model.onnx'
    # Path of the exported model
    RKNN_MODEL_PATH = '/home/ljl/mnist/mnist_cnn_model.rknn'

    # Create the RKNN object and print detailed log information to the screen
    rknn = RKNN(verbose=True)

    # Model configuration
    # mean_values: mean of the input image pixels
    # std_values: standard deviation of the input image pixels
    # target_platform: target deployment platform
    # This model was trained with single-channel input images
    rknn.config(mean_values=[0], std_values=[255], target_platform=platform)

    # Load the model
    print('--> Loading model')
    ret = rknn.load_onnx(MODEL_PATH)
    if ret != 0:
        print('load model failed!')
        exit(ret)
    print('done')

    # Build the RKNN model
    print('--> Building model')
    # do_quantization: whether to quantize the model. Defaults to True
    ret = rknn.build(do_quantization=True, dataset="./data.txt")
    if ret != 0:
        print('build model failed.')
        exit(ret)
    print('done')

    # Export the model
    ret = rknn.export_rknn(RKNN_MODEL_PATH)

    # Release the RKNN object
    rknn.release()
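For reference, with mean_values=[0] and std_values=[255] the toolkit normalizes every input pixel as (x - 0) / 255, i.e. it maps the 0-255 grayscale range into 0-1, which matches the transforms.ToTensor() scaling used during training. A tiny sketch of the equivalent NumPy preprocessing (done internally by the toolkit, not something you need to run yourself):
# Equivalent of the configured preprocessing: (pixel - mean) / std
import numpy as np
image = np.zeros((28, 28), dtype=np.uint8)  # placeholder 28x28 grayscale input
normalized = (image.astype(np.float32) - 0.0) / 255.0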
Next, put the test images into a folder and write their paths into the data.txt file.
First we need to grab some MNIST images for testing.
Here is a script that exports them:
# filename: generate_data.py
import torch
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
import cv2
import os
import numpy as np

# Test set
test_set = datasets.MNIST('dataset/', train=False, transform=transforms.ToTensor(), download=True)
test_loader = DataLoader(dataset=test_set, batch_size=1, shuffle=True)

def mnist_save_png():
    for data, i in test_loader:
        with torch.no_grad():
            image = data.squeeze().numpy()  # Remove unnecessary transpose
            # Optional: If you need to move channel dimension to the last position
            # image = np.transpose(image, (1, 2, 0))
            image = cv2.GaussianBlur(image, (9, 9), 0)
            # image *= 255 # Scale image to 0-255 range
            index = i.numpy()[0]
            if not os.path.exists('./mnist_image/'):
                os.mkdir('./mnist_image/')
            # Save each digit only once
            if not os.path.exists('./mnist_image/' + str(index) + '.png'):
                cv2.imwrite('./mnist_image/' + str(index) + '.png', image)

if __name__ == '__main__':
    mnist_save_png()
The exported images have a resolution of 28 × 28, and the code corresponds to the earlier articles in this series.
Only 10 images are saved here; if you want more images for testing, modify the code.
Once everything is ready, just run the script.
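data.txt itself is just a plain-text list of the calibration image paths, one per line. A minimal sketch for generating it from the mnist_image folder produced above (the helper script name here is made up; adjust the folder path if yours differs):
# filename: make_dataset_list.py  (hypothetical helper, not from the original post)
import os

image_dir = './mnist_image/'
with open('data.txt', 'w') as f:
    for name in sorted(os.listdir(image_dir)):
        if name.endswith('.png'):
            f.write(os.path.join(image_dir, name) + '\n')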
RKNN feels at least a bit easier to use than the Allwinner offerings (which are closed source and don't even allow personal use), even though RKNN itself is not open source either.
The output printed during model conversion:
I rknn-toolkit2 version: 2.0.0b0+9bab5682
--> Loading model
I It is recommended onnx opset 19, but your onnx model opset is 17!
I Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export for successful convert!
I Loading : 100%|██████████████████████████████████████████████████| 5/5 [00:00<00:00, 24966.10it/s]
done
--> Building model
D base_optimize ...
D base_optimize done.
D
D fold_constant ...
D fold_constant done.
D
D correct_ops ...
D correct_ops done.
D
D fuse_ops ...
W build: Can not find 'idx' to insert, default insert to 0!
D fuse_ops results:
D replace_reshape_gemm_by_conv: remove node = ['/Reshape', '/fc1/Gemm'], add node = ['/fc1/Gemm_2conv', '/fc1/Gemm_2conv_reshape']
D swap_reshape_relu: remove node = ['/fc1/Gemm_2conv_reshape', '/Relu'], add node = ['/Relu', '/fc1/Gemm_2conv_reshape']
D convert_gemm_by_conv: remove node = ['/fc2/Gemm'], add node = ['/fc2/Gemm_2conv_reshape1', '/fc2/Gemm_2conv', '/fc2/Gemm_2conv_reshape2']
D fuse_two_reshape: remove node = ['/fc1/Gemm_2conv_reshape']
D remove_invalid_reshape: remove node = ['/fc2/Gemm_2conv_reshape1']
D fold_constant ...
D fold_constant done.
D fuse_ops done.
D
D sparse_weight ...
D sparse_weight done.
D
I GraphPreparing : 100%|████████████████████████████████████████████| 4/4 [00:00<00:00, 5403.29it/s]
I Quantizating : 100%|███████████████████████████████████████████████| 4/4 [00:00<00:00, 283.34it/s]
D
D quant_optimizer ...
D quant_optimizer results:
D adjust_relu: ['/Relu']
D quant_optimizer done.
D
W build: The default input dtype of 'onnx::Reshape_0' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of '15' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I RKNN: [00:09:32.440] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0
I RKNN: librknnc version: 2.0.0b0 (35a6907d79@2024-03-24T02:34:11)
D RKNN: [00:09:32.440] RKNN is invoked
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTileFcBatchFuse
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTileFcBatchFuse
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [00:09:32.442] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [00:09:32.442] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [00:09:32.443] >>>>>> start: OpEmit
D RKNN: [00:09:32.443] <<<<<<<< end: OpEmit
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNLayoutMatchPass
I RKNN: [00:09:32.443] AppointLayout: t->setNativeLayout(64), tname:[/fc1/Gemm_output_0_new]
I RKNN: [00:09:32.443] AppointLayout: t->setNativeLayout(64), tname:[15_conv]
I RKNN: [00:09:32.443] AppointLayout: t->setNativeLayout(0), tname:[15]
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [00:09:32.443] >>>>>> start: OpEmit
D RKNN: [00:09:32.443] finish initComputeZoneMap
D RKNN: [00:09:32.443] <<<<<<<< end: OpEmit
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNProfileAnalysisPass
D RKNN: [00:09:32.443] node: Reshape:/fc2/Gemm_2conv_reshape2, Target: NPU
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNProfileAnalysisPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNOperatorIdGenPass
D RKNN: [00:09:32.443] <<<<<<<< end: rknn::RKNNOperatorIdGenPass
D RKNN: [00:09:32.443] >>>>>> start: rknn::RKNNWeightTransposePass
W RKNN: [00:09:32.444] Warning: Tensor /fc2/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
W RKNN: [00:09:32.444] Warning: Tensor /fc2/Gemm_2conv_reshape2_shape need paramter qtype, type is set to float16 by default!
D RKNN: [00:09:32.444] <<<<<<<< end: rknn::RKNNWeightTransposePass
D RKNN: [00:09:32.444] >>>>>> start: rknn::RKNNCPUWeightTransposePass
D RKNN: [00:09:32.444] <<<<<<<< end: rknn::RKNNCPUWeightTransposePass
D RKNN: [00:09:32.444] >>>>>> start: rknn::RKNNModelBuildPass
D RKNN: [00:09:32.446] <<<<<<<< end: rknn::RKNNModelBuildPass
D RKNN: [00:09:32.446] >>>>>> start: rknn::RKNNModelRegCmdbuildPass
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] Network Layer Information Table
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] ID OpType DataType Target InputShape OutputShape Cycles(DDR/NPU/Total) RW(KB) FullName
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] 0 InputOperator INT8 CPU \ (1,1,28,28) 0/0/0 0 InputOperator:onnx::Reshape_0
D RKNN: [00:09:32.446] 1 ConvRelu INT8 NPU (1,1,28,28),(50,1,28,28),(50) (1,50,1,1) 6585/12544/12544 39 Conv:/fc1/Gemm_2conv
D RKNN: [00:09:32.446] 2 Conv INT8 NPU (1,50,1,1),(10,50,1,1),(10) (1,10,1,1) 138/64/138 0 Conv:/fc2/Gemm_2conv
D RKNN: [00:09:32.446] 3 Reshape INT8 NPU (1,10,1,1),(2) (1,10) 7/0/7 0 Reshape:/fc2/Gemm_2conv_reshape2
D RKNN: [00:09:32.446] 4 OutputOperator INT8 CPU (1,10) \ 0/0/0 0 OutputOperator:15
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] <<<<<<<< end: rknn::RKNNModelRegCmdbuildPass
D RKNN: [00:09:32.446] >>>>>> start: rknn::RKNNFlatcModelBuildPass
D RKNN: [00:09:32.446] Export Mini RKNN model to /tmp/tmpkbgrb68z/check.rknn
D RKNN: [00:09:32.446] >>>>>> end: rknn::RKNNFlatcModelBuildPass
D RKNN: [00:09:32.446] >>>>>> start: rknn::RKNNMemStatisticsPass
D RKNN: [00:09:32.446] ------------------------------------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] Feature Tensor Information Table
D RKNN: [00:09:32.446] --------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] ID User Tensor DataType DataFormat OrigShape NativeShape | [Start End) Size
D RKNN: [00:09:32.446] --------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] 1 ConvRelu onnx::Reshape_0 INT8 NC1HWC2 (1,1,28,28) (1,1,28,28,1) | 0x00027500 0x00027880 0x00000380
D RKNN: [00:09:32.446] 2 Conv /fc1/Gemm_output_0_new INT8 NC1HWC2 (1,50,1,1) (1,4,1,1,16) | 0x00027880 0x000278c0 0x00000040
D RKNN: [00:09:32.446] 3 Reshape 15_conv INT8 NC1HWC2 (1,10,1,1) (1,1,1,1,16) | 0x00027500 0x00027510 0x00000010
D RKNN: [00:09:32.446] 4 OutputOperator 15 INT8 UNDEFINED (1,10) (1,10) | 0x00027580 0x000275c0 0x00000040
D RKNN: [00:09:32.446] --------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] -----------------------------------------------------------------------------------------------------
D RKNN: [00:09:32.446] Const Tensor Information Table
D RKNN: [00:09:32.446] -------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] ID User Tensor DataType OrigShape | [Start End) Size
D RKNN: [00:09:32.446] -------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] 1 ConvRelu fc1.weight INT8 (50,1,28,28) | 0x00000000 0x00026480 0x00026480
D RKNN: [00:09:32.446] 1 ConvRelu fc1.bias INT32 (50) | 0x00026480 0x00026680 0x00000200
D RKNN: [00:09:32.446] 2 Conv fc2.weight INT8 (10,50,1,1) | 0x00026680 0x00026900 0x00000280
D RKNN: [00:09:32.446] 2 Conv fc2.bias INT32 (10) | 0x00026900 0x00026980 0x00000080
D RKNN: [00:09:32.446] 3 Reshape /fc2/Gemm_2conv_reshape2_shape INT64 (2) | 0x00026980*0x000269c0 0x00000040
D RKNN: [00:09:32.446] -------------------------------------------------------------------+---------------------------------
D RKNN: [00:09:32.446] ----------------------------------------
D RKNN: [00:09:32.446] Total Internal Memory Size: 0.9375KB
D RKNN: [00:09:32.446] Total Weight Memory Size: 154.438KB
D RKNN: [00:09:32.446] ----------------------------------------
D RKNN: [00:09:32.446] <<<<<<<< end: rknn::RKNNMemStatisticsPass
I rknn buiding done.
done
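Note the two build warnings above: after quantization, the default input dtype of 'onnx::Reshape_0' and the default output dtype of '15' are changed from float32 to int8, which has to be taken into account when the model is later driven through the Runtime API on the board. The layer table also shows that the two fully connected layers (Gemm) have been converted into convolutions so they can run on the NPU.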
Model verification
We need to check that the model's output is correct.
It seems the simulator does not support loading an exported .rknn model for verification.
The error log:
The official demo tutorial also uses load_onnx: https://wiki.luckfox.com/zh/Luckfox-Pico/Luckfox-Pico-RKNN-Test
So we can in fact verify the model at the same time as exporting the RKNN model.
The script is as follows:
# filename: rknn_mnist_test.py
import numpy as np
import cv2
from rknn.api import RKNN

# Model conversion parameters
MODEL_PATH = '/root/test/my_model.onnx'       # Path to the ONNX model
RKNN_MODEL_PATH = '/root/test/my_model.rknn'  # Path to save the RKNN model

# Model inference parameters
input_size = (28, 28)   # Define the input size (same as your model's input)
data_file = 'data.txt'  # Path to the data file (containing image paths)

rknn = RKNN(verbose=True)  # Create RKNN object with verbose logging
rknn.config(mean_values=[0], std_values=[255], target_platform='rv1106')  # Set configuration parameters

ret = rknn.load_onnx(MODEL_PATH)
if ret != 0:
    print('Load ONNX model failed!')
    exit(ret)
print('done')

print('--> Building RKNN model')
ret = rknn.build(do_quantization=True, dataset="./data.txt")
if ret != 0:
    print('Build model failed.')
    exit(ret)
print('done')

# Model export (optional): export the RKNN model
ret = rknn.export_rknn(RKNN_MODEL_PATH)

# Model inference
print('--> Performing inference on data')
rknn.init_runtime()  # Initialize the RKNN runtime (simulator)
with open(data_file, 'r') as f:
    lines = f.readlines()

for line in lines:
    # Get the image path
    image_path = line.strip()
    # Read the image
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Preprocess the image
    image = image.astype(np.float32)
    # The image read here has shape (28, 28); add batch and channel dimensions
    image = np.expand_dims(image, axis=[0, 1])
    # Run inference: (1, 1, 28, 28) corresponds to NCHW (batch, channel, height, width)
    outputs = rknn.inference([image], data_format='nchw')
    print(f"Inference Output: {outputs}")
    # Check inference results
    if outputs is not None:
        predicted_label = np.argmax(outputs)
        print(f"Image: {image_path}")
        print(f"Predicted label: {predicted_label}")
    else:
        print(f"Inference failed for image: {image_path}")

# Release RKNN resources
rknn.release()
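To cross-check the conversion, one option is to run one of the saved test images through the original ONNX model with onnxruntime and compare the predicted label with the RKNN simulator result. A hedged sketch, assuming onnxruntime is installed and normalizing to 0-1 the way training did (the image path below is just an example from the mnist_image folder):
# Hypothetical cross-check against the original ONNX model (not part of the original post)
import onnxruntime as ort
import numpy as np
import cv2

session = ort.InferenceSession('/root/test/my_model.onnx')
input_name = session.get_inputs()[0].name

image = cv2.imread('./mnist_image/7.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
image = image.reshape(1, 1, 28, 28)  # NCHW: batch, channel, height, width

onnx_out = session.run(None, {input_name: image})[0]
print('ONNX predicted label:', int(np.argmax(onnx_out)))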
The printed result:
As you can see, the inference results are correct: the simulator returns a list containing a single (1, 10) array of class scores, and np.argmax picks the predicted digit, which matches the digit in each image's file name.
The MNIST image files used for testing.
You can also just install a virtual machine on your own computer; a Linux server is not strictly required.