Why can't I use conv2d with kernel=(3,3) after separable_conv2d?



  • Hi. I have a network based on MobileNet v1. When I compile it with kendryte-model-compiler, it reports an error.

    Note the slim.conv2d call at scope flower_point[1] in the code below: with a conv2d kernel size of (1, 1) the network compiles (default: stride=1),
    but with kernel size (3, 3) the model-compiler reports:

    ValueError: run fix_dw_with_strde2 failed. can not delay left_pooling over current layer, current layer conv_kernel_size:3, pool_type_size_stride:None
    

    I found the code that raises the error:

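    # (It seems this pass rewrites a stride-2 depthwise conv as stride 1 plus a delayed
    # 2x2 'left' pooling, and that pooling can only be pushed past a following 1x1 conv.)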
    if not (conv_kernel_size == 1 and pool_type_size_stride is None):
        print(input_shape, conv_shape, output_shape, pool_type_size_stride)
        raise ValueError(
            'run fix_dw_with_strde2 failed. ' +
            'can not delay left_pooling over current layer, ' +
            'current layer conv_kernel_size:{}, pool_type_size_stride:{}'
            .format(conv_kernel_size, pool_type_size_stride)
        )
    

    Printing input_shape, conv_shape, output_shape, and pool_type_size_stride gives:

    [5, 4, 5, 512] [5, 4, 5, 256] [5, 4, 5, 256] None
    

    Please help ~

    The code:

    def mobilenet_conv(images: tf.Tensor, num_classes: int, depth_multiplier: float, is_training: bool):
        flower_point = ['Conv2d_0_depthwise', 'Conv2d_0_pointwise', 'Conv2d_1', 'Final']
        with slim.arg_scope(mobilenet_v1_arg_scope(is_training=is_training)):
            nets, endpoints = mobilenet_v1_base(images, depth_multiplier=depth_multiplier)
    
        # add the new layer
        with tf.variable_scope('Flowers'):
            with slim.arg_scope([slim.batch_norm], is_training=is_training,  center=True, scale=True, decay=0.9997, epsilon=0.001):
                with slim.arg_scope([slim.conv2d, slim.separable_conv2d], padding='SAME', normalizer_fn=slim.batch_norm, activation_fn=None,):
                    if depth_multiplier == 1.0 or depth_multiplier == 0.75:
                        # nets= [?,8,10,1024]
                        nets = slim.conv2d(nets, 512, (3, 3), padding='SAME')
                        nets = slim.batch_norm(nets)
                    else:
                        pass
                    # (?, 8, 10, 512) ===> (?, 4, 5, 512)
                    nets = slim.separable_conv2d(nets, None, (3, 3), stride=(2, 2), scope=flower_point[0])
                    nets = tf.nn.relu6(nets, name=flower_point[0]+'/relu6')
                    endpoints[flower_point[0]] = nets
                    # ! (?, 4, 5, 512) ===> (?, 4, 5, 256): is it that a (3, 3) conv can't be added after the separable conv?
                    nets = slim.conv2d(nets, 256, (1, 1), scope=flower_point[1])
                    # nets = slim.conv2d(nets, 256, (3, 3), scope=flower_point[1])
                    
                    nets = tf.nn.relu6(nets, name=flower_point[1]+'/relu6')
                    endpoints[flower_point[1]] = nets
                    # nets = (?, 4, 5, 256)
                    nets = slim.conv2d(nets, 128, (3, 3),  scope=flower_point[2])
                    nets = tf.nn.relu6(nets, name=flower_point[2]+'/relu6')
                    endpoints[flower_point[2]] = nets
                    # nets = (?, 4, 5, 128)
                    nets = slim.conv2d(nets, 5, (3, 3), normalizer_fn=None, activation_fn=None, scope=flower_point[3])
                    endpoints[flower_point[3]] = nets
                    # nets = (?, 4, 5, 5)
                    # tf.contrib.layers.softmax(nets)
        return nets, endpoints
    


  • @nathan It seems one of my intermediate feature maps was larger than 2 MB, and that was the source of the problem. I reduced the size of the network and now it converts to a kmodel correctly.


  • @sni Please check whether the input size is an even number. If it is, you can post your network graph here.



  • @jujuede I'm facing the same issue:
    "Fatal: Dimensions must be equal."

    How did you manage to solve it?

    I'm using Keras 2.2.4, adding ZeroPadding2D(((1, 1), (1, 1))) before each DepthwiseConv2D (stride=2), and setting the padding of each DepthwiseConv2D layer to "valid".
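
    A minimal sketch of that setup (the input size is illustrative, not from my actual model):

    from keras.layers import Input, ZeroPadding2D, DepthwiseConv2D

    inputs = Input(shape=(240, 320, 3))  # input size is just an example
    # Pad one pixel on every side, then run the stride-2 depthwise conv with
    # 'valid' padding so Keras adds no further padding of its own.
    x = ZeroPadding2D(padding=((1, 1), (1, 1)))(inputs)
    x = DepthwiseConv2D(kernel_size=(3, 3), strides=(2, 2), padding='valid')(x)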

    Thank you for your help.



  • If you are using a (depthwise) conv2d with kernel=3x3 and stride=2, you should add a space_to_batch_nd before the layer to pad manually, and set the layer's padding to valid,
    because TensorFlow's padding method differs from the K210's; Caffe and most other frameworks don't have this problem.
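
    For example, a minimal sketch of this workaround in TF 1.x slim (the helper name and shapes are illustrative):

    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    def dw_conv3x3_stride2_for_k210(nets):
        # With block_shape=[1, 1], space_to_batch_nd performs pure spatial
        # padding: one pixel on each side of height and width.
        nets = tf.space_to_batch_nd(nets, block_shape=[1, 1], paddings=[[1, 1], [1, 1]])
        # Run the stride-2 depthwise conv with VALID padding so TensorFlow
        # adds no (possibly asymmetric) padding of its own.
        return slim.separable_conv2d(nets, None, (3, 3), stride=2, padding='VALID')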



  • @latyas Hi, I tried nncase, but it raises "Fatal: Dimensions must be equal." (This model uses conv2d with kernel=(3,3).)

    I put 5 pictures in dataset/flowers. I don't know which dimensions are unequal.

    My command:

    cd ~/Documents/kendryte-model-compiler/ && \
    /home/zqh/Documents/nncase/src/NnCase.Cli/bin/Debug/netcoreapp3.0/ncc \
                                    -i tflite -o k210code \
                                    --dataset dataset/flowers \
                                    --postprocess n1to1 \
                                    pb_files/mobilenet_v1_0.5_240_320_frozen.tflite build/model.c
    

    Can you suggest some solutions?

    The full output:

    ➜  mobilenet_flowers git:(master) ✗ make nncase_convert
    cd ~/Documents/kendryte-model-compiler/ && \
    /home/zqh/Documents/nncase/src/NnCase.Cli/bin/Debug/netcoreapp3.0/ncc \
                                    -i tflite -o k210code \
                                    --dataset dataset/flowers \
                                    --postprocess n1to1 \
                                    pb_files/mobilenet_v1_0.5_240_320_frozen.tflite build/model.c
    Fatal: Dimensions must be equal.
    Makefile:26: recipe for target 'nncase_convert' failed
    make: *** [nncase_convert] Error 255
    


  • @latyas OK, I will try it.
    My computer only has Ubuntu installed; do you have a guide on how to compile this?


  • Try nncase.