WiMi is Researching Machine Learning-Based Multi-Focus Image Fusion

BEIJING, Oct. 18, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that its R&D team is working on machine learning-based multi-focus image fusion technology, which utilizes deep learning algorithms to process and analyze input images for more accurate and realistic fusion results.

The machine learning-based multi-focus image fusion researched by WiMi requires multiple processing and analysis steps to produce the final fused image. These steps must account for factors such as the application scenario, data quality, and model design in order to achieve good results and performance.

Data pre-processing: The multiple input images undergo pre-processing operations such as denoising, alignment, and depth estimation to improve the accuracy and effectiveness of subsequent processing.
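As an illustrative sketch only (not WiMi's implementation), the denoising and alignment operations above can be approximated in a few lines of numpy. The function names, the mean-filter denoiser, and the phase-correlation alignment are assumptions chosen for demonstration; a production pipeline would typically use library routines or learned models.

```python
import numpy as np

def denoise(img, k=3):
    """Mean-filter denoising: a simple stand-in for real denoisers."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_shift(ref, moving):
    """Estimate the integer (row, col) translation between two images
    via phase correlation, as a minimal alignment step."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap-around: peaks past the midpoint encode negative shifts.
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, ref.shape))
```

Aligning the sources before fusion matters because per-pixel fusion rules assume corresponding pixels depict the same scene point.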

Feature extraction: The pre-processed images are fed into a deep learning model, such as a convolutional neural network (CNN), which extracts and abstracts features to obtain a feature vector for each pixel. These feature vectors encode richer semantic information and higher-level features, improving the accuracy and effectiveness of subsequent processing.
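A single convolution layer is the core building block of the CNN feature extraction described above. The toy layer below (hand-crafted kernels standing in for learned filters, with a ReLU non-linearity) produces a small feature vector per pixel; all names here are illustrative assumptions, not part of WiMi's system.

```python
import numpy as np

def conv2d(img, kernels):
    """One conv layer with 'same' padding: returns a per-pixel feature
    vector with one channel per kernel, followed by a ReLU."""
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    feats = np.zeros((h, w, len(kernels)))
    for c, kern in enumerate(kernels):
        for dy in range(k):
            for dx in range(k):
                feats[:, :, c] += kern[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.maximum(feats, 0.0)  # ReLU non-linearity

# Hand-crafted kernels in place of learned filters:
KERNELS = np.array([
    [[0, -1, 0], [-1, 4, -1], [0, -1, 0]],   # Laplacian: responds to detail
    [[1, 1, 1], [1, 1, 1], [1, 1, 1]],       # box filter: local brightness
], dtype=float)
```

A real CNN stacks many such layers and learns the kernel weights from data, which is what gives the extracted features their semantic content.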

Model selection and training: An appropriate machine learning model is selected based on the application scenario and requirements, then trained and tuned on training data to obtain the best fusion results. Candidate models include classification models, regression models, and generative adversarial networks (GANs); the specific choice depends on the application scenario and requirements.
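As a toy illustration of the classification route (the model choice, feature layout, and hyperparameters below are assumptions for demonstration, not WiMi's method), a per-pixel logistic classifier can be trained by gradient descent to predict which source image is in focus at each pixel:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_focus_classifier(x, y, lr=0.5, epochs=200):
    """Logistic regression on per-pixel features.

    x: (n, d) feature vectors; y: (n,) labels, 1.0 if source A is the
    in-focus image at that pixel. Returns learned weights and bias.
    """
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(x @ w + b)
        grad_w = x.T @ (p - y) / len(y)  # gradient of cross-entropy loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

A GAN-based variant would instead train a generator to synthesize the fused image directly, at the cost of a much heavier training procedure.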

Fusion output: The trained model is applied to the image data to classify or regress each pixel, producing the final fused image. The fusion rule can take different forms, such as weighted averaging, probabilistic statistics, or least squares.
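The weighted-average fusion rule mentioned above can be sketched in numpy. The local-variance sharpness score used here is a hand-crafted stand-in for the per-pixel scores a trained model would produce; both function names are illustrative assumptions.

```python
import numpy as np

def focus_measure(img, k=7):
    """Local variance in a k-by-k window as a per-pixel sharpness score."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    s = np.zeros((h, w))
    s2 = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            win = padded[dy:dy + h, dx:dx + w]
            s += win
            s2 += win ** 2
    n = k * k
    return s2 / n - (s / n) ** 2  # E[x^2] - E[x]^2

def fuse(img_a, img_b, eps=1e-12):
    """Weighted-average fusion: each source weighted by its sharpness."""
    wa, wb = focus_measure(img_a), focus_measure(img_b)
    return (wa * img_a + wb * img_b) / (wa + wb + eps)
```

At each pixel the sharper source dominates the average, so in-focus regions from both inputs survive into the fused result.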

The steps of machine learning-based multi-focus image fusion are not strictly sequential; they may interact or overlap. For example, applying a CNN for feature extraction may require operations such as data augmentation and batch normalization, and model training may require hyperparameter tuning and regularization. In addition, given constraints on computational resources and time, the specific implementation of each step may vary with the application scenario.
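For reference, the batch normalization and data augmentation operations mentioned above can each be written in a few lines of numpy. This is illustrative only; real pipelines use framework-provided layers, and the augmentation choices here are assumptions.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis: normalize each feature to
    zero mean and unit variance, then apply scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def augment(img, rng):
    """Simple data augmentation: random horizontal flip plus brightness jitter."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return img * rng.uniform(0.9, 1.1)
```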

The machine learning-based multi-focus image fusion technology researched by WiMi improves substantially on traditional methods in several respects. It increases both the speed and accuracy of image processing and handles more complex and diverse image data, offering strong adaptability, strong generalization ability, fast processing, and high accuracy, and thereby providing better image processing solutions for a range of fields.

Traditional multi-focus image fusion typically relies on pixel-level fusion rules that lack any understanding or analysis of image content. In contrast, the machine learning-based approach adapts to the content and characteristics of the input images, yielding more accurate and realistic fusion results. It can also handle image data captured under different scenes, lighting conditions, devices, and shooting parameters, demonstrating strong generalization across complex and diverse inputs. Moreover, deep learning models such as CNNs support efficient parallel computation, allowing large volumes of image data to be processed in a short time, and their accuracy can be further improved through model training and tuning.

With the continuous development of deep learning algorithms, demand for image analysis and processing keeps growing, and machine learning-based multi-focus image fusion will attract wider attention and application under this trend. On the one hand, as deep learning algorithms are further optimized, the technology can process images faster and more accurately, better meeting the needs of various fields. On the other hand, as computing resources and computing power continue to grow, it can process large-scale image data more efficiently and extend to new scenarios in fields such as medicine, machine vision, and intelligent security, giving it broad application prospects and commercial value.

Future development directions for machine learning-based multi-focus image fusion include multi-modal fusion, model optimization, algorithm extension, and broader applications. WiMi will continue to improve the multi-modal fusion capability and model performance of its technology and broaden its scope of application, promoting the use of machine learning-based multi-focus image fusion in real-world scenarios.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic AR SDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.