WiMi Developed a Novel Image Classification System Based on a Continuous Multi-Scale Feature Learning Network

BEIJING, Aug. 21, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it has developed a novel image classification system based on a continuous multi-scale feature learning network, built on carefully designed pre-processing and modeling architectures. The model benefits from multi-scale feature extraction and continuous feature learning, and by using feature maps with different receptive fields it achieves better speed and accuracy than existing methods.

WiMi's continuous multi-scale feature learning network employs a continuous feature learning approach that uses feature maps with different receptive fields to achieve faster training and inference and higher accuracy. The system consists of three main stages: data pre-processing, data learning, and inference. In the data pre-processing stage, the dataset images are represented as tensors, which makes computation during training easier and more efficient. In the data learning stage, useful image features are extracted using a model based on continuous multi-scale feature learning. In the inference stage, once the trained model from the second stage has been obtained, new images can be classified with it.
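The three stages above can be sketched as a minimal pipeline. This is an illustrative outline only, not WiMi's implementation: the function names are hypothetical, and a per-channel mean stands in for the learned multi-scale features.

```python
import numpy as np

def preprocess(image):
    # Stage 1: represent the image as a float tensor scaled to [0, 1].
    return image.astype(np.float32) / 255.0

def extract_features(tensor):
    # Stage 2 placeholder: mean intensity per channel stands in for
    # the features a continuous multi-scale network would learn.
    return tensor.mean(axis=(0, 1))

def classify(features, class_prototypes):
    # Stage 3: assign the class whose prototype is nearest in feature space.
    distances = [np.linalg.norm(features - p) for p in class_prototypes]
    return int(np.argmin(distances))

# Toy usage: two classes ("dark" = 0, "bright" = 1) for 8x8 RGB images.
prototypes = [np.full(3, 0.2), np.full(3, 0.8)]
bright = np.full((8, 8, 3), 230, dtype=np.uint8)
label = classify(extract_features(preprocess(bright)), prototypes)
```

In a real system the placeholder feature extractor would be replaced by the trained network, but the stage boundaries stay the same.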

In the data pre-processing stage, the dataset images are represented as tensors for subsequent computation and processing. Pre-processing includes normalization, scaling, and cropping of the images. This step makes the data easier and more efficient to handle during training, improves the accuracy and reliability of subsequent processing, and ensures that the input data is correctly formatted so the model can recognize and learn from it.
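As a hedged sketch of the pre-processing steps named above (cropping, scaling pixel values, normalization), the snippet below uses numpy; the crop size and normalization constants are illustrative assumptions, not values from the announcement.

```python
import numpy as np

def center_crop(img, size):
    # Crop a (size x size) window from the centre of the image.
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def preprocess(img, size=4, mean=0.5, std=0.25):
    # 1. crop, 2. scale pixel values to [0, 1], 3. normalize.
    img = center_crop(img, size).astype(np.float32) / 255.0
    return (img - mean) / std

image = np.full((6, 6, 3), 255, dtype=np.uint8)  # all-white test image
tensor = preprocess(image)                        # shape (4, 4, 3)
```

Framework pipelines (e.g. torchvision transforms) perform the same steps; the point is that every image reaches the model as a consistently shaped, consistently scaled tensor.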

In the data learning stage, the network uses a continuous multi-scale feature learning method to extract useful features from images. The basic idea is to decompose the image into different scales and then extract the corresponding features at each scale. Image information at different scales carries different feature content: in a low-resolution image, for example, fine detail is blurred, but the global structure and contour information are still well preserved. Multi-scale feature extraction therefore improves the robustness and generalization ability of the model.
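The decomposition into scales can be illustrated with a simple image pyramid. This is a generic sketch of multi-scale decomposition (2x2 average pooling), not the Company's specific architecture:

```python
import numpy as np

def downsample(img):
    # Halve the resolution by 2x2 average pooling.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def image_pyramid(img, levels=3):
    # Decompose the image into progressively coarser scales.
    scales = [img]
    for _ in range(levels - 1):
        scales.append(downsample(scales[-1]))
    return scales

img = np.arange(64, dtype=np.float32).reshape(8, 8)
pyramid = image_pyramid(img, levels=3)  # shapes (8, 8), (4, 4), (2, 2)
```

Note that average pooling preserves the global statistics (e.g. the mean) at every level while progressively discarding fine detail, which is exactly the trade-off the paragraph describes.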

Specifically, the network architecture consists of a series of feature extraction modules and feature fusion modules. Each feature extraction module employs convolutional layers, pooling layers, and activation functions to extract feature maps at different scales. The feature fusion module combines these feature maps into a more comprehensive and representative feature representation: it concatenates the feature maps of different scales and then fuses them through additional convolutional layers and activation functions. The advantage of this approach is that it avoids information loss and makes full use of the feature information at every scale.
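A minimal sketch of the fusion step, assuming nearest-neighbour upsampling to align resolutions before concatenation (the announcement does not specify the alignment method, and the convolutional fusion layers that would follow are omitted):

```python
import numpy as np

def upsample(fmap, factor):
    # Nearest-neighbour upsampling to restore spatial resolution.
    return fmap.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse(feature_maps):
    # Bring every scale up to the finest resolution, then concatenate
    # along a channel axis so no scale's information is discarded.
    target = feature_maps[0].shape[0]
    aligned = [upsample(f, target // f.shape[0]) for f in feature_maps]
    return np.concatenate([f[..., None] for f in aligned], axis=-1)

fine = np.ones((4, 4))          # fine-scale feature map
coarse = np.full((2, 2), 5.0)   # coarse-scale feature map
fused = fuse([fine, coarse])    # shape (4, 4, 2)
```

Concatenation (rather than, say, summation) keeps each scale's features intact, leaving it to subsequent learned layers to weight them.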

In the inference stage, WiMi can use the trained model to classify new images: a test image is fed into the model, and its class is determined from the model's prediction. To improve accuracy and generalization, data augmentation operations such as random rotation, cropping, and flipping can be applied to input images during testing to simulate a wider range of image variations. Different techniques and algorithms can also be used at this step to optimize the model's accuracy and efficiency, for example Convolutional Neural Networks (CNNs) to extract image features or Recurrent Neural Networks (RNNs) to process sequence data.
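The test-time augmentation idea described here can be sketched as follows. The classifier is a deliberately trivial stand-in (a weighted pixel sum per class); only the averaging-over-views pattern reflects the text:

```python
import numpy as np

def predict(img, weights):
    # Stand-in classifier: per-class score = weighted pixel sum.
    return np.array([np.sum(img * w) for w in weights])

def tta_predict(img, weights):
    # Average predictions over flipped/rotated views of the image,
    # then pick the class with the highest mean score.
    views = [img, np.fliplr(img), np.flipud(img), np.rot90(img)]
    scores = np.mean([predict(v, weights) for v in views], axis=0)
    return int(np.argmax(scores))

img = np.arange(9, dtype=np.float32).reshape(3, 3) / 8.0
weights = [np.ones((3, 3)), np.full((3, 3), 0.5)]  # two toy classes
label = tta_predict(img, weights)
```

Averaging over augmented views makes the prediction less sensitive to the orientation or framing of any single input, at the cost of running the model once per view.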

The strength of this network is its ability to process images at different scales simultaneously and extract useful features from them. This makes it adaptable to different application scenarios and delivers better results in terms of efficient computation and generalization on small-scale images. The method is also efficient and lightweight, learning useful features while avoiding underfitting. In a series of experiments, the method achieved significant improvements in accuracy and efficiency: compared with existing state-of-the-art efficient networks, it is comparable in accuracy but better optimized for efficiency and speed, and it achieves the best trade-off between accuracy and efficiency.

Moreover, the image recognition and classification method of this learning network has broad market value and significance. First, it can be widely used in computer vision applications such as autonomous driving, security monitoring, medical diagnosis, and smart homes. In autonomous driving, for example, vehicles need to quickly and accurately recognize road signs, moving vehicles, and pedestrians. In medical diagnosis, continuous multi-scale feature learning networks can assist doctors in automatically recognizing lesions and disease markers, improving diagnostic accuracy and efficiency. In smart homes, the technology can be used to develop smart door locks and other smart home devices for a more intelligent lifestyle.

Second, this system from WiMi addresses some of the industry pain points and difficulties in computer vision. Traditional computer vision often relies on multiple hand-designed features and algorithms to recognize objects in images. These hand-designed features and algorithms are typically not generic: when applied to new datasets they must be redesigned and optimized, wasting considerable time and resources. In contrast, image recognition and classification methods using continuous multi-scale feature learning networks automatically extract features from images at multiple scales, avoiding the tedious, poorly generalizing process of manual design.

In addition, this image recognition and classification method addresses the problems of limited data and limited computational resources in computer vision. With limited data, traditional deep learning models are prone to overfitting, whereas continuous multi-scale feature learning networks make better use of the available data and thus avoid it. Likewise, where traditional deep learning models require large computational resources for training and inference, the lightweight design of continuous multi-scale feature learning networks reduces resource consumption while preserving accuracy.

WiMi's continuous multi-scale feature learning network for image recognition and classification has important market value and significance. It addresses industry pain points and difficulties in image classification tasks and provides a better solution for applying image classification technology. In the future, this model network is expected to be applied in more fields, enabling more efficient, accurate, and intelligent image classification.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.