
WiMi Developed Digital Twin Modeling Technology Based on Multiple Data Sources

BEIJING, Oct. 6, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it developed digital twin modelling technology based on multiple data sources to build more comprehensive, accurate, and reliable digital twin models.

This technology integrates data from different sources into a unified model. In digital twin modelling, multi-source integration helps obtain more comprehensive and accurate data, thus improving the precision and reliability of the digital twin model.

The key modules of the integrated digital twin modelling system based on multiple data sources include data acquisition and pre-processing, data integration and consolidation, model development and training, model deployment and real-time updating, and visualization and analysis. These modules are interdependent, interact with one another, and collectively constitute the key aspects of the integrated digital twin modelling technology.

First, the system collects data from multiple data sources and pre-processes and cleans it to ensure data quality and consistency, including data cleansing, data conversion, and data merging. The data from the different sources is then integrated into a unified data model; this may require operations such as data mapping, data transformation, and data integration so that data from different sources can be effectively correlated and analyzed. Next, the digital twin model is developed: an appropriate modelling algorithm is selected, the structure and parameters of the model are defined, and the integrated data is used to train, optimize, and validate the model. The trained model is then deployed to a real-time environment, where it receives and processes data from the different sources in real time; this may involve model deployment, real-time data transmission, and real-time model updating so that the digital twin reflects real-world changes as they happen. Finally, the visualization and analysis module presents and analyzes the results of the digital twin model so that users can understand and act on its output, providing visualization tools and analytical algorithms to support interpretation and decision-making.
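
To make the collect-clean-integrate-train-update flow concrete, here is a minimal, illustrative Python sketch. The data sources, column names, and the simple linear surrogate model are hypothetical stand-ins chosen for brevity; they are not WiMi's actual implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Two synthetic "data sources" sharing a timestamp key -- stand-ins for,
# say, a sensor feed and an operations database.
ts = pd.date_range("2023-01-01", periods=200, freq="min")
sensors = pd.DataFrame({"timestamp": ts,
                        "temperature": 20 + np.random.randn(200),
                        "vibration": np.random.rand(200)})
ops_log = pd.DataFrame({"timestamp": ts,
                        "load": np.random.rand(200) * 100})

# Pre-processing and integration: clean each source, then merge on the shared key.
unified = sensors.dropna().merge(ops_log.dropna(), on="timestamp", how="inner")

# Model development and training: fit a simple surrogate of the asset's
# behaviour (the target here is a toy quantity derived from the inputs).
unified["output"] = 0.7 * unified["load"] + 5.0 * unified["vibration"]
features = unified[["temperature", "vibration", "load"]]
twin = LinearRegression().fit(features, unified["output"])

# Real-time updating step: score newly integrated records as they arrive.
print(twin.predict(features.tail(1)))
```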

As more data sources become available and integration requirements grow more complex, future digital twin modelling techniques may need to handle multi-modal data, including images, sound, and video. Multi-source integration must be able to process and analyze such multi-modal data to model and predict real-world behaviour more fully. Future digital twin modelling technologies are also likely to become more automated and intelligent: by combining machine learning, artificial intelligence, and automation, the data integration and modelling process can be automated to improve model accuracy and efficiency. There will also be greater focus on real-time data processing and real-time model updating to reflect changes in the real world more accurately, as well as on cross-domain applications and integration between domains to achieve a more comprehensive and holistic digital twin. All of these are future trends in digital twin modelling technology based on multiple data sources.

The rapid development of big data, cloud computing, the Internet of Things and other technologies has significantly improved data acquisition, storage and processing capabilities, which provides the technical basis and support for the realization of the digital twin modelling technology with multiple data sources. The digital twin modelling technology with multiple data sources researched by WiMi has a wide range of application prospects in many fields, such as industrial Internet, smart city, virtual reality and so on. With the continuous progress of data acquisition and processing technology, as well as the increasing demand for intelligent and sustainable development, this technology will be further developed and innovated.

About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements
This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Source: WiMi Hologram Cloud Inc.

IBC 2023: TVU Networks Introduces Native 4K Support Across Its Entire Live Cloud Video Production EcoSystem

Newest platform enhancement provides seamless cloud-based video production, playout, and distribution in 4K resolution for broadcasters, social media producers, and sports organizations

CUPERTINO, Calif., Sept. 27, 2023 /PRNewswire/ — TVU Networks, a pioneer in cloud-based workflow solutions for live content production and distribution, today announced that its end-to-end TVU Cloud Ecosystem for live video production now supports 4K natively, encompassing collaboration, playout and distribution as well as AI-based recording, indexing, and search. The new 4K TVU Cloud Ecosystem has been demonstrated live at the IBC Conference.

Users can now harness the advantages of the cloud to ingest, process, record, and output in 4K easily and cost-effectively on the TVU platform across all solutions:

TVU Producer – With TVU Producer, users can natively ingest 4K live video to the cloud from virtually any IP video source or VOD content and fully produce the content in 4K, including source switching, graphics overlays, and custom PIP configurations, to name a few capabilities. The finished production can then be simultaneously output in 4K to broadcast, OTT, social media, and many other types of content distribution channels.

TVU Partyline – TVU Partyline enables all members of a remote production environment using TVU 4K devices – technical crews, producers, talent, and guests – to communicate seamlessly in a virtual environment. Users also have access to professional production tools and can collaborate remotely in real time with high-quality video, perfectly synchronized audio and video, and mix-minus audio feedback. With a simple shared URL, participants can join Partyline to watch all program feeds live and interact, discuss, control, and participate in a production with undetectable latency. Collaboration within Partyline is made possible through the use of TVU’s exclusive Real Time Interactive Layer (RTIL).

TVU Search – TVU Search provides AI-driven live video content ingest and discovery, all in the cloud. It features content search capabilities based on advanced AI algorithms and automation, allowing users to locate live or archival feeds or clips for immediate playout, download, or sharing quickly and easily. TVU Search offers 24/7 ingest, recording, and AI-based metadata creation indexed to the timecode of 4K content, as well as playout or downloading of archived 4K content.

TVU Channel – TVU Channel upends traditional playout. It doesn’t operate on customary infrastructure which can be both expensive and complicated. Instead, TVU Channel can be deployed in minutes to schedule, manage and output one or hundreds of live content channels including OTT, websites, apps and social media all in 4K if desired. TVU Channel supports LIVE and pre-recorded/pre-programmed content. Its intuitive user interface design borrows from familiar web-based calendars making it quick to get started and easy to use. TVU Channel supports dynamic ad insertion with SCTE decoration and can be set to operate continuously 24/7 with no downtime.

TVU Remote Commentator – Professional, high-quality sports and event commentary can be delivered from anywhere with TVU Remote Commentator. On-air talent can call the action from their homes, hotel rooms, offices – wherever there’s an internet connection – using a simple browser-based interface without needing to be at the venue. 4K support is now available for high quality, low-latency commentary that’s 100% in sync with the live program. 

TVU Grid – TVU Grid delivers highly scalable point-to-point and point-to-multipoint switching, routing and distribution of live video over IP. It’s used by news agencies and media organizations around the world to share and exchange reliable live broadcast quality video feeds with virtually no latency. TVU Grid can now provide the point-to-point and point-to-multipoint distribution of 4K content using commodity internet.

“We are seeing a growing demand for live 4K production, particularly within sports, entertainment, corporate, and major events. The pivotal consideration of applying 4K in these specific industries revolves around cost effectiveness and flexibility, as the cost and complexity of a traditional 4K production environment is generally not practical,” said Paul Shen, CEO, TVU Networks. “The TVU Ecosystem provides a comprehensive array of tools covering the entire spectrum from content acquisition and production to delivery. This empowers the creation of premium live 4K content tailored for various platforms, including live streaming, virtual reality, pop-up channels, and more with a strong emphasis on simplicity and affordability.”

WiMi Developed a Mask R-CNN-Based CSO and Reference Point Intelligent Extraction Technique

BEIJING, Aug. 18, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it has developed a Mask R-CNN-based technique for intelligently extracting CSOs (feature space objects) and their reference points, bringing a breakthrough in the field of high-resolution image processing and matching. The technique utilizes the latest advances in deep learning and computer vision to provide an efficient and accurate solution for automatic image matching and target localization.

High-resolution image processing and matching have been an important research direction in the field of computer vision, but automatic matching has been facing great challenges due to local deformations in images and differences in lighting conditions. Previous methods are often limited by computational complexity and dependence on local features, making it difficult to achieve accurate results. WiMi’s technique can be used to extract CSOs and their reference points on images. With this method, the CSOs can be acquired automatically and provide accurate localization information for the subsequent image matching process.

WiMi’s R&D team successfully solved this challenge by introducing the Mask R-CNN model, a model extension based on Faster R-CNN commonly used for target detection and instance segmentation. The model is unique in that it can simultaneously predict the bounding box, category, mask and key points of a target, providing comprehensive information for image processing tasks.

In this new technique, WiMi first utilizes a large amount of high-resolution remote sensing image data for training the Mask R-CNN model. Through training, the model is able to learn the features of different target instances in the image and accurately predict their bounding boxes, categories, masks and key points. Based on the trained Mask R-CNN model, the technical team further proposes the concept of CSO and the reference point method. CSO refers to target instances with distinctive features, which can be intelligently filtered out by setting thresholds or rules. Reference points, on the other hand, are extracted from CSOs by a mask predictor and a key point predictor, which are used to locate important feature points of target instances.

The technical implementation logic is as follows:

Data preparation: first, a dataset of high-resolution remote sensing images for training and evaluation needs to be prepared. The dataset should contain images with different target types and deformation levels.

Model training: the Mask R-CNN model is trained using the prepared dataset. The goal of training is to enable the model to accurately predict the bounding boxes, categories, masks and key points of the targets.

CSO and reference point extraction: with the trained Mask R-CNN model, CSOs and reference points can be extracted intelligently from an input high-resolution remote sensing image. CSOs are feature space objects, i.e., target instances with distinctive features, which can be filtered out by setting thresholds or rules. The mask predictor and key point predictor of the Mask R-CNN model are then used to extract a mask and key points for each CSO: the mask predictor generates a binary mask that accurately segments the target instance, while the key point predictor predicts the key point coordinates used to locate the instance's important feature points (a minimal code sketch of this extraction step is given after the application step below).

Application of CSOs and reference points: the extracted CSOs and reference points can be used for a variety of applications, such as high-resolution remote sensing image matching. Depending on the specific application scenario, image matching or other related tasks can be realized based on the location and features of CSOs.
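
As a rough illustration of the extraction step above, the sketch below uses the pre-trained Mask R-CNN from torchvision as a stand-in for WiMi's trained model. The score threshold plays the role of the CSO filtering rule, and the mask centroid is used as a simple stand-in for a reference point; WiMi's actual predictors, thresholds, and training data are not public, so all of these choices are assumptions.

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Pre-trained Mask R-CNN from torchvision stands in for WiMi's trained model
# (requires torchvision >= 0.13 for the `weights` argument).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def extract_csos(image, score_threshold=0.8, mask_threshold=0.5):
    """Keep only high-confidence instances ("CSOs") and return a simple
    reference point (the mask centroid) for each one."""
    with torch.no_grad():
        output = model([to_tensor(image)])[0]
    csos = []
    for box, label, score, mask in zip(output["boxes"], output["labels"],
                                       output["scores"], output["masks"]):
        if score < score_threshold:          # the "distinctive feature" rule
            continue
        binary = mask[0] > mask_threshold    # binary mask for this instance
        ys, xs = torch.nonzero(binary, as_tuple=True)
        if len(xs) == 0:
            continue
        centroid = (xs.float().mean().item(), ys.float().mean().item())
        csos.append({"box": box.tolist(),
                     "label": int(label),
                     "reference_point": centroid})
    return csos

# Usage: csos = extract_csos(Image.open("scene.png").convert("RGB"))
```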

The breakthrough of this technique is that it not only efficiently extracts CSOs and reference points, but also accurately describes the shape and location of target instances. This makes the automatic matching of high-resolution images more accurate and reliable, providing a reliable foundation for subsequent image processing tasks.

The technique brings many important applications and advantages to the field of high-resolution image processing and matching. It can be widely used in the field of remote sensing image processing, such as urban planning, environmental monitoring and resource management, etc. It can help to automatically extract the features of urban buildings, road networks and natural environments, and provide accurate data support for urban planning and resource management. In addition, this technology can also be applied in the fields of security monitoring, traffic management and military reconnaissance, etc. It can help to automatically extract key targets in the monitoring screen and accurately locate them, so as to improve the efficiency and accuracy of security monitoring. In traffic management, the technology can help identify traffic signs, vehicles and pedestrians, providing reliable data support for traffic flow monitoring and intelligent transportation systems.

This technique has achieved remarkable results in related fields and has been widely noticed and recognized. Currently, the technology has been successfully applied to several practical projects with impressive results. For future development, WiMi will continue to strengthen its technology development and innovation, and continuously improve the performance and effect of Mask R-CNN-based CSO and its reference point and intelligent extraction technology. At the same time, the company will actively expand the application areas of the technology, and work with partners from various industries to promote the development of high-resolution image processing and matching technology, and contribute to the progress and development of society.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Source: WiMi Hologram Cloud Inc.

WiMi Developed an HMD-based Control System for Humanoid Robots Controlled by BCI

BEIJING, July 21, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced its development of a humanoid robot control system based on a head-mounted display (HMD) controlled by a brain-computer interface (BCI). The system is driven by steady-state visual evoked potentials (SSVEP) and allows interaction with the environment and with humans. The technology provides real-time feedback through the robot’s embedded camera, which integrates stimulus feedback into the HMD display.

WiMi’s researchers took control of the robot and tested its performance by using this new interaction in an experiment. Testers were asked to navigate the robot to a specific location to perform a task. This test was based on visual SLAM feedback, which provides navigation instructions through images of the environment captured by a camera.

Steady-state visual evoked potentials (SSVEP) are used as control signals: the user's EEG signals are captured by an EEG acquisition device, and real-time feedback from the robot is shown on a head-mounted display. For navigation, the researchers fitted the robot with an embedded camera and combined its real-time feed with the SSVEP stimuli on the head-mounted display to form the interaction loop. A visual SLAM (Simultaneous Localization and Mapping) algorithm is used to implement the navigation instructions.

The control system of the head-mounted display BCI consists of several components: signal acquisition device, head-mounted display, humanoid robot, embedded camera, control algorithm, etc. The steps and results of WiMi’s BCI control system implementation are as follows:

EEG signal acquisition and processing:

The WiMi researchers first used an EEG signal interaction platform to capture and process the user's EEG signals. The platform consists of a multi-channel amplifier, electrode caps, and data acquisition software that captures and stores the user's EEG signals in real time. The EEG signals are processed by extracting the SSVEP components through frequency analysis: light flashes of a specific frequency are presented to the user, the user's EEG signal resonates at that frequency, and the control signal can then be extracted.
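
For illustration, a minimal single-channel version of this frequency-analysis step might look like the sketch below. The sampling rate, the flicker frequencies, and the mapping from frequency to robot command are invented for the example and are not WiMi's published parameters.

```python
import numpy as np

FS = 250.0                                             # EEG sampling rate (Hz), assumed
STIM_FREQS = {10.0: "turn_left", 12.0: "turn_right", 15.0: "forward"}  # flicker -> command

def classify_ssvep(eeg_window, fs=FS, freqs=STIM_FREQS):
    """Pick the flicker frequency with the strongest spectral power in a
    single-channel EEG window and return the associated command."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    bins = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    power = {}
    for f in freqs:
        band = (bins > f - 0.3) & (bins < f + 0.3)     # narrow band around each flicker rate
        power[f] = spectrum[band].sum()                # (harmonics could also be included)
    best = max(power, key=power.get)
    return freqs[best]

# Example: a 2-second synthetic window dominated by a 12 Hz response.
t = np.arange(0, 2.0, 1.0 / FS)
window = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.randn(len(t))
print(classify_ssvep(window))                          # -> "turn_right" (typically)
```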

The researchers used an EEG signal acquisition device to capture the user’s EEG signals, a head-mounted display to show real-time feedback from the robot, and an embedded camera to provide navigation instructions. The control algorithm uses a visual SLAM algorithm to implement the navigation instructions, which models the environment and implements the navigation instructions from images of the environment captured by the robot’s camera.

Head-mounted display applications:

To provide feedback on the control signals, the researchers used a head-mounted display that shows the robot's real-time images and status on a built-in display. The head-mounted display uses high-resolution display technology to provide a more realistic experience of the virtual environment.

In addition, to better provide feedback on control signals, the researchers combined the display with the robot’s built-in camera. With the robot’s camera capturing images of the environment and displaying them on the head-mounted display, users can visualize the robot’s state and environmental information more intuitively. Specifically, the user is required to gaze at a specific frequency of light stimuli on the head-mounted display, which causes the user’s brain to emit a specific SSVEP signal. Once the signal is captured and processed, a control algorithm can determine the user’s intent based on the frequency and amplitude of the signal, such as moving the robot to turn left or right. The embedded camera captures the viewpoint of the robot in real-time, providing an image of the environment as well as feedback on the current position of the robot.

Navigation implementation:

Using the SLAM algorithm, an image of the robot’s environment is captured by the robot’s built-in camera and converted into a model of the robot’s environment. The algorithm is also able to estimate the robot’s position and provide navigation instructions to the user. The user can control the direction of the robot’s movement through SSVEP signals, while the head-mounted display shows the robot’s status and environmental information to provide more intuitive feedback to the user.

In addition, the researchers conducted an experimental evaluation of the control system to assess its performance in terms of control accuracy and interaction experience. The experimental results show that the control system can provide precise control signals as well as an immersive interactive experience, providing users with a novel way to control their robots.

Experimental evaluation:

To evaluate the performance of the system, the researchers conducted a series of experiments. The experiments included a user performing a robot navigation task with the system and a control experiment using a traditional remote control for the same task. The results showed that users were able to perform navigation tasks more accurately and had a better interaction experience when using the system.

The researchers conducted experiments to evaluate the control system, including its performance in terms of control accuracy and interaction experience. In the experiment, participants were required to control the robot to walk to a specific location and complete a task in a simulated environment. The experimental results show that the control system can provide precise control signals, with an average control accuracy of 98.1%. Users also rated the interaction experience highly and considered it a very natural and intuitive way of interacting.

WiMi’s head-mounted display (HMD) for robot control via a brain-computer interface (BCI) demonstrates a new way of interaction, providing a more natural and intuitive way of control. Experimental results show that the control system can provide precise control signals. This control method has the potential to be used in many scenarios that require precise control, such as medical, educational, and entertainment fields. Subsequently, the control system can be combined with other sensor technologies, such as voice and gesture, to provide more diverse control. This navigation-assisted scheme provides users with a novel interaction method, which can improve the robot’s operation efficiency and interaction experience.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Source: WiMi Hologram Cloud Inc.

WiMi Proposes A Vehicular Networks-based Consensus Algorithm to Improve Data Security And Response Speed

BEIJING, July 14, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced the proposal of a custom vehicular networks-based consensus algorithm (VBCA) to secure data across the network. The algorithm uses a blockchain to maintain a secure pool of confirmed information exchanged in the network. Based on a consortium chain, the solution optimizes transmission efficiency, reduces average data exchange latency, and increases the volume of confirmed data exchanges in a decentralized manner, without compromising data integrity or security.

In recent years, autonomous vehicles (AVs) have attracted significant attention as an evolving technology for intelligent transportation systems (ITS). These vehicles usually have various onboard resources, such as sensors, radar, cameras, storage devices, event recorders, etc. These devices perform different operations, such as object detection, congestion monitoring, pathfinding, etc. Self-driving vehicles will capture large amounts of data for analysis and make real-time intelligent decisions based on surrounding events. AVs equipped with sensors can capture gigabytes of data, which needs to be processed using complex machine-learning algorithms to infer logical outcomes. For communication efficiency, storage, and high-end processing, 5G and 6G technologies and roadside units (RSUs) connected to a Mobile Edge Computing (MEC) server can be used to receive all the data sent by the vehicle, where the MEC server runs machine learning techniques to generate useful predictions.

WiMi’s VBCA solution, a lightweight decentralized ledger system, allows easy integration of recent blocks with existing P2P networks. It aims to provide a network for physical layer vehicles and devices that can share information efficiently and reliably. The solution reduces communication latency by combining blockchain with P2P networks, enabling the network to combine all active blocks into a lightweight blockchain. The scheme uses a hierarchical architecture to achieve an efficient consensus mechanism. Fixed nodes are responsible for attaching blocks to the blockchain, and all fixed nodes store copies of the blockchain. The scheme design estimates the number of active and inactive blocks in the network and maintains only active blocks instead of full blocks to improve the lightweight property.

In WiMi’s VBCA scheme, the system architecture nodes are divided into two node types: fixed nodes and mobile nodes. Fixed nodes are RSUs that provide geographic coverage for specific areas on the map by connecting to high-power edge servers and are interconnected via backhaul links. Mobile nodes use their sensors to capture event data and send it to the nearest fixed node. Through P2P networks, vehicles can use DSRC to connect more reliably to nearby RSUs, thereby reducing communication latency.

The consensus algorithm runs on the edge server and appends the verified protocol information to the blockchain stored on the edge server. Since vehicles are equipped with different types of sensors, e.g., self-driving cars can be equipped with cameras, radar, etc., the edge server will receive a large amount of data. Based on the collected data, various statistical and machine-learning tools can be applied to train models that generate multiple predictions for different applications. For example, a predictive learning-based approach can predict the expected load on numerous parts of the traffic network during a specific time window. The prediction information can be stored in a separate blockchain and shared among all fixed nodes through which vehicles can query the information. The system is built based on a network of vehicles and edge servers for managing traffic data and predicting traffic flow. The system works in five layers: application layer, contract layer, consensus layer, network layer, and data layer. The application layer provides the user interface that allows end users (vehicles) to perform general input/output operations. The contract layer verifies the authentication of vehicles and fixed nodes and deploys smart contracts. The consensus layer uses custom consensus algorithms to establish trust between nodes in the network. The network layer connects all nodes in a hybrid P2P fashion, with each node using a discovery protocol to find the nearest neighboring RSUs to establish links and exchange messages. The data layer manages the protocols and blocks in the ledger, using tools such as hash functions, timestamps, and Merkle trees to ensure data integrity and security.
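
The data-layer primitives mentioned above (hash functions, timestamps, Merkle trees) can be illustrated with a short, generic sketch. It is not the VBCA implementation, just a conventional way a fixed node might assemble confirmed messages into a block; the message fields are invented for the example.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(tx_hashes):
    """Fold a list of transaction hashes into a single Merkle root."""
    if not tx_hashes:
        return sha256(b"")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:                         # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def make_block(prev_hash, confirmed_messages):
    """Assemble a block that a fixed node (RSU) could append to the ledger."""
    tx_hashes = [sha256(json.dumps(m, sort_keys=True).encode())
                 for m in confirmed_messages]
    header = {"prev_hash": prev_hash,
              "merkle_root": merkle_root(tx_hashes),
              "timestamp": time.time()}
    header["block_hash"] = sha256(json.dumps(header, sort_keys=True).encode())
    return {"header": header, "messages": confirmed_messages}

# Usage: block = make_block("0" * 64, [{"vehicle": "V42", "event": "congestion", "rsu": "R7"}])
```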

Furthermore, smart contracts ensure decentralization in the framework by allowing multiple fixed nodes to attach blocks to the ledger. This algorithmic design significantly increases throughput and the number of blocks created by each node while ensuring decentralization. In addition, transaction latency is reduced by separating the protocol confirmation and block creation processes.

WiMi has several technologies used in autonomous driving, intelligent transportation systems, and smart cars. With the development of autonomous driving and intelligent transportation systems, vehicular networks are becoming increasingly important. The global market size for autonomous driving and intelligent transportation systems is expanding and is expected to grow in the coming years. This provides a broad market space for WiMi’s VBCA technology. In addition, with the development and popularization of 5G and 6G technologies, vehicular network data transmission speed and stability will be further improved, further promoting the application and development of consensus algorithm technology based on vehicular networks.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Source: WiMi Hologram Cloud Inc.

WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS

BEIJING, July 12, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced a blockchain distributed storage framework that presents a pooling algorithm and inverse process for IPFS-based DAPP schemes. The combination of DAPP with IPFS allows for a lightweight layout. By putting the static resources, data storage layer, distributed database, communication layer, and other parts of DAPP into the IPFS network, the data storage pressure on the chain can be effectively reduced, and the performance and availability of DAPP can be improved. In the implementation process, the IPFS network can be operated through the IPFS API to realize the interaction and communication between DAPP and IPFS.

The IPFS-based DAPP solution proposes a solution for data storage on the blockchain. It can store files and put unique and permanently available IPFS addresses into blockchain transactions. In this way, there is no need to put the space-hogging data on the blockchain. On the other hand, IPFS can also assist various blockchain networks in transferring information and files, thus increasing the scalability of the blockchain. Based on these advantages of IPFS, WiMi designed the main elements of the distributed pool, including the distributed storage phase and the invocation evidence phase.

The invocation evidence phase recovers the original data by reversing the distributed pooling operation. After the distributed storage phase is complete, the corresponding data hash address can easily be queried from the database. The distributed nature of IPFS makes the address hash uniquely trusted, so, given trusted nodes, the original data can be recovered by the inverse distributed pooling operation once the node addresses are confirmed to be correct.
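
A minimal sketch of the store-then-anchor pattern described above, using the public HTTP API of a locally running IPFS daemon (assumed at 127.0.0.1:5001) and a simple in-memory list standing in for the blockchain ledger. The endpoint names follow the standard IPFS API; the file name and transaction fields are illustrative, and this is not WiMi's pooling algorithm itself.

```python
import hashlib
import json
import time

import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"   # local IPFS daemon assumed to be running

def store_off_chain(path):
    """Add a file to IPFS and return its content identifier (hash)."""
    with open(path, "rb") as f:
        resp = requests.post(f"{IPFS_API}/add", files={"file": f})
    resp.raise_for_status()
    return resp.json()["Hash"]

def anchor_on_chain(cid, ledger):
    """Record only the small IPFS address in a (mock) blockchain transaction."""
    tx = {"ipfs_cid": cid, "timestamp": time.time()}
    tx["tx_hash"] = hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()
    ledger.append(tx)
    return tx

def fetch_off_chain(cid):
    """Recover the original content from IPFS using the on-chain address."""
    resp = requests.post(f"{IPFS_API}/cat", params={"arg": cid})
    resp.raise_for_status()
    return resp.content

# Usage (hypothetical file name):
# ledger = []
# tx = anchor_on_chain(store_off_chain("model_assets.bin"), ledger)
# data = fetch_off_chain(tx["ipfs_cid"])
```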

WiMi's DAPP solution effectively solves the problem of insufficient storage space and reduces on-chain storage costs with the help of IPFS. By moving data into the IPFS network, the data kept on the blockchain becomes smaller, so the storage space required by individual nodes is reduced. The scheme also greatly improves fault tolerance, since data stored on any node can be identified and retrieved. In addition, the stored content can be recovered losslessly; even if some information is lost, the remainder can be retrieved in proportion to the information retained, which supports strong application capabilities.

With the rapid development of blockchain technology in recent years, DAPPs are also developing rapidly and will become a leading force in the application market. WiMi's solution has significant market advantages and opportunities:

1. Reduced data storage cost: IPFS can be used as the distributed storage layer of DAPP, and the data will be stored in the IPFS network in a decentralized manner, which can reduce data storage costs and improve the availability and stability of DAPP.

2. Improved data access speed: Static resources in DAPP can be stored in the IPFS network and located through their IPFS hash. This improves data access speed and reduces the on-chain data storage pressure of the DAPP.

3. Improved data privacy: IPFS uses distributed storage, which allows data to be stored on multiple nodes, enhancing data privacy and reducing the risk of data leakage.

4. Better development experience: Using IPFS allows developers to focus more on developing business logic during the development of DAPP without paying too much attention to the implementation and maintenance of infrastructure, thus improving development efficiency and development experience.

With the popularity of decentralized applications and the development of blockchain technology, WiMi’s technical solution can bring a better experience and services to various participants in the blockchain ecosystem (such as users, developers, enterprises, etc.). At the same time, it also provides a better technical basis for developing new types of decentralized applications. WiMi’s lightweight advantage can bring many benefits and market opportunities, including reducing data storage costs, increasing data access speed, improving data privacy, providing a better development experience, and opening up new market opportunities.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Source: WiMi Hologram Cloud Inc.

WiMi to Work on Convolutional Neural Network-Based Image Enhancement Algorithms

BEIJING, March 8, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it is working on image enhancement algorithms based on CNN (convolutional neural network). CNNs have had significant achievements in many fields, such as computer vision and natural language processing. Applying convolutional neural networks to image enhancement has obvious advantages and can solve challenges in different environments.

The essence of a CNN is to map the input image into a new representation through multiple data transformations and dimensionality reductions. A CNN consists mainly of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer.

The convolutional layer performs a convolution operation on the input image or on the output features of the previous layer, computing the inner product of the convolution kernel with the corresponding region of the input image or feature map to extract the relevant image feature maps. The pooling layer reduces the number of parameters and the computational effort in the network by reducing the dimensionality of the activation feature maps, maintaining the scale invariance of the features and reducing overfitting to a certain extent; it downsamples local regions of the image, cutting the amount of data to compute while retaining the most informative values. After several convolution and pooling operations, the network classifies the features through the fully connected layer, using the one-dimensional activation vector obtained by flattening the three-dimensional activation feature map as the fully connected layer's input.
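
As a minimal illustration of how convolutional layers can be assembled for enhancement (rather than classification), the PyTorch sketch below predicts a residual correction that is added back to the input image. This is a generic pattern, not WiMi's algorithm, and it deliberately omits pooling and fully connected layers because enhancement networks usually need to preserve the full image resolution.

```python
import torch
import torch.nn as nn

class EnhanceCNN(nn.Module):
    """Tiny convolutional network that predicts a residual correction which is
    added back to the input image -- a common image-enhancement pattern."""
    def __init__(self, channels=3, features=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual connection: the network only has to learn the correction.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

model = EnhanceCNN()
enhanced = model(torch.rand(1, 3, 128, 128))   # dummy degraded image in [0, 1]
```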

WiMi's CNN-based image enhancement algorithms have substantial advantages in both extracting image feature information and representing features. CNNs share weights, perform convolutional calculations, and have powerful feature learning and mapping capabilities. They also suppress noise while preserving image detail, are highly invariant to image displacement, scaling, and other deformations, and produce better reconstructed image quality.

CNNs can learn complex hierarchical features of images and accomplish complex image recognition tasks. At the same time, CNN-based feature extraction can understand a picture’s deep semantic feature information. This enables it to capture the contextual content of an image well and to train and learn the input image repeatedly, ultimately obtaining the best image enhancement effect to meet the requirements of the human visual system for images.

Currently, image enhancement algorithms based on CNN are widely used in security, medicine, and ecology. In the era of rapid global information development, world knowledge is increasingly dependent on the explosive transmission of information. Most people still know the world mainly through their eyes. Therefore, images are not only a carrier of human visual information but also an essential medium for disseminating information. To obtain practical information from images quickly, the demand for image quality is increasing, the need for image enhancement will continue to grow, and the field of application of image enhancement technology will be further expanded.

In the future, WiMi’s CNN-based image enhancement algorithm will strive for better progress and greater breakthroughs in visual effects, contrast ratio, and signal-to-noise ratio, laying a solid technical foundation for it to play a more significant role in more industrial fields.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI), whose commercial operations began in 2015, is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement, except as required under applicable laws.

View original content: https://www.prnewswire.com/news-releases/wimi-to-work-on-convolutional-neural-network-based-image-enhancement-algorithms-301765561.html

Source: WiMi Hologram Cloud Inc.

WiMi Hologram Cloud Launches WIMI-MR System to Explore AI Real-Time Holographic Display

BEIJING, Feb. 24, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it has developed its WIMI-MR system, which enables users to edit and display holographic AR content and create their own customized visual effects. The WIMI-MR R&D team is currently exploring CGH (Computer-Generated Holograms) technology that allows artificial intelligence to quickly generate holograms and display them in real time.

Whereas a traditional photograph presents an actual physical image, a hologram contains information about the recorded object's size, shape, brightness, and contrast. Holograms can also deliver 3D scenes with a continuous sense of depth, and they have a profound impact on VR and AR, human-computer interaction, education, and training. CGH enables 3D projection at wide spatial angles through numerical simulation of diffraction and interference: a computer uses specific algorithms to generate a digital hologram that reproduces the light field. Since light waves can be described by parameters such as phase and amplitude, the computer solves for the phase or amplitude of the light to produce a digital hologram, which is then fed into an optical modulation device called an SLM (Spatial Light Modulator). The SLM modulates the phase or amplitude of the light (equivalent to putting a zoomable lens and a screen into the SLM) and is then illuminated with coherent light to create a refreshable light field, resulting in a dynamic holographic 3D image that can be changed freely.
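
One widely used way to compute the phase pattern for a phase-only SLM is the Gerchberg-Saxton iteration, sketched below with NumPy under the simplifying assumption that propagation to the image plane can be modelled by a single FFT. This classic algorithm is shown only for illustration; WiMi's CGH and acceleration algorithms are not disclosed.

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50):
    """Compute a phase-only hologram whose far field reproduces target_intensity,
    modelling propagation to the image plane with a single FFT."""
    target_amp = np.sqrt(target_intensity)
    # Start from a random phase at the SLM (hologram) plane.
    field = np.exp(1j * 2 * np.pi * np.random.rand(*target_amp.shape))
    for _ in range(iterations):
        image_field = np.fft.fft2(field)                                # propagate to image plane
        image_field = target_amp * np.exp(1j * np.angle(image_field))  # impose target amplitude
        field = np.fft.ifft2(image_field)                               # propagate back to SLM
        field = np.exp(1j * np.angle(field))                            # phase-only constraint
    return np.angle(field)                                              # phase pattern for the SLM

# Example: 256x256 target with a bright square on a dark background.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
phase_pattern = gerchberg_saxton(target)
```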

The application of CGH in AR systems allows the user to focus naturally on the content displayed across multiple depth planes. This advantage addresses many of the shortcomings of current AR devices, allowing users to interact with AR objects more easily at short distances. In addition, it can further improve user comfort by solving the problem of VAC (Vergence-Accommodation Conflict), a widespread concern in AR wearable device design.

At present, CGH is in its infancy and faces many challenges. First, it is computationally intensive and requires high computing power. Second, current SLMs have low resolution and small size, so overall imaging quality still needs improvement. WiMi's R&D team is still exploring CGH acceleration algorithms to compute holograms faster, as well as visual tracking technology to realize holographic displays with different depths of field on the SLM.

CGH displays are considered a transformative technology, with applications in fields ranging from VR to 3D printing, where the new technology can help immerse AR viewers in a more realistic landscape while eliminating eye strain and other side effects associated with prolonged viewing. With the development and application of CGH technology, the WIMI-MR system will enable future applications in a wide range of systems, including direct vision, VR, AR, and in-car HUD displays, further contributing to the growth of WiMi’s business.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI), whose commercial operations began in 2015, is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement, except as required under applicable laws.

View original content: https://www.prnewswire.com/news-releases/wimi-hologram-cloud-launches-wimi-mr-system-to-explore-ai-real-time-holographic-display-301755255.html

Source: WiMi Hologram Cloud Inc.

WiMi Hologram Cloud Develops A Digital Content Compression and Processing System for Web 3.0

BEIJING, Feb. 21, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it has been continuously optimizing its digital content compression technology. It has launched a digital content compression and processing system to accommodate Web 3.0’s high bit-rate transmission requirements.

Compression is the reduction of the amount of data needed to represent digital content. WiMi’s digital content compression and processing system mainly deals with four kinds of redundancy: coding redundancy, spatial redundancy, temporal redundancy, and redundancy of irrelevant information.

The redundancy in digital content data mainly manifests as coding redundancy, which arises when the code words used in the digital content are longer than the optimal, entropy-based coding; spatial redundancy, caused by the correlation between adjacent pixels in the digital content; temporal redundancy, caused by the correlation between different frames in a digital content sequence; and spectral redundancy, caused by the correlation between different color or spectral bands. Because the sheer volume of digital content data makes it very difficult to store, transmit, and process, the application of WiMi’s system is essential for the more efficient, intelligent, and realistic environment required by Web 3.0.

Coding redundancy exists when the code words used are longer than the optimal code, that is, longer than the minimum average length given by the entropy of the data, a concept borrowed from information theory and applied to digital content processing. WiMi therefore optimizes codes intelligently, comparing them against particular algorithms and reorganizing inefficient codes to bring the average code length closer to the entropy and reduce the redundancy.
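
As a rough, generic illustration of coding redundancy (not WiMi’s actual optimization), the sketch below compares the average length of a fixed 8-bit code with the Shannon entropy of an assumed, heavily skewed byte stream; the gap between the two numbers is the redundancy that an entropy coder could remove. The data and function name are hypothetical.

    import numpy as np

    def coding_redundancy(symbols, fixed_bits=8):
        """Difference between a fixed-length code and the Shannon entropy
        of the observed symbol distribution (illustrative sketch only)."""
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        entropy = -np.sum(p * np.log2(p))   # optimal average bits per symbol
        return fixed_bits - entropy         # redundancy in bits per symbol

    # Assumed example data: a heavily skewed byte stream.
    rng = np.random.default_rng(0)
    probs = np.r_[0.9, np.full(255, 0.1 / 255)]
    stream = rng.choice(256, size=10_000, p=probs)
    print(f"coding redundancy ~ {coding_redundancy(stream):.2f} bits/symbol")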

Spatial redundancy arises from correlations between neighboring pixels in digital content and is the most significant type of redundancy in digital content images. There is often a spatial correlation between the colors of sampled points on the surface of the same scene, with adjacent points taking on similar or identical values. Different data sets can have roughly the same histogram and entropy, and therefore roughly the same compression ratio. Any pixel in an image can be reasonably predicted from its neighboring pixel values, and these correlations are the basis of inter-pixel redundancy. To reduce inter-pixel redundancy, the two-dimensional array of pixels can be transformed into a more efficient representation. This type of transformation, known as mapping, takes the original image data, transforms it into a dataset from which the image can be reconstructed, and then merges it. The system automatically identifies and integrates this structure, significantly reducing the amount of digital content data attributable to spatial redundancy and removing the excess data footprint.
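
The mapping idea can be illustrated with a minimal, generic sketch that does not reflect WiMi’s specific implementation: each pixel is predicted from its left neighbor, only the residual is stored, and the original image is reconstructed exactly by cumulative summation. The gradient test image and helper names are assumptions for the example.

    import numpy as np

    def to_residuals(image):
        """Left-neighbor prediction: keep the first column, store differences."""
        residual = image.astype(np.int16)
        residual[:, 1:] -= image[:, :-1].astype(np.int16)
        return residual

    def from_residuals(residual):
        """Exact reconstruction by cumulative summation along each row."""
        return np.cumsum(residual, axis=1).astype(np.uint8)

    # Assumed example: a smooth gradient image with strong spatial correlation.
    img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
    res = to_residuals(img)
    assert np.array_equal(from_residuals(res), img)   # the mapping is invertible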

Temporal redundancy, in close analogy to spatial redundancy, arises from the inter-pixel correlation between adjacent frames in digital content data. The system can insert successive frames of digital content into a matrix of frame structures, linking the frames along a four-dimensional array: the first two dimensions are the rows and columns, the third dimension is the monochrome image channel, and the fourth is the frame index in the image sequence. Temporal redundancy applies not only to the image data of digital content but also to data such as speech data, control data, and operational and informational data, all of which can be integrated on the same theoretical basis.
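
As a hedged illustration of this four-dimensional layout (a toy example, not the actual system), the snippet below stacks frames into a (rows, columns, channel, frame) array and stores only frame-to-frame differences, so that unchanged regions collapse to zeros; the frame size, frame count, and moving region are assumptions for the example.

    import numpy as np

    # Assumed toy sequence shaped (rows, columns, channel, frames), per the 4-D layout above.
    rng = np.random.default_rng(1)
    first = rng.integers(0, 256, (120, 160, 1, 1), dtype=np.int16)
    frames = np.repeat(first, 30, axis=3)
    frames[40:60, 50:70, :, 10:] += 5            # a small region that changes mid-sequence

    # Temporal prediction: keep frame 0, then store only frame-to-frame differences.
    diffs = np.diff(frames, axis=3)
    print("non-zero samples in raw frames :", np.count_nonzero(frames))
    print("non-zero samples in differences:", np.count_nonzero(diffs))

    # Exact reconstruction: first frame plus the cumulative differences.
    rebuilt = np.concatenate([frames[..., :1],
                              frames[..., :1] + np.cumsum(diffs, axis=3)], axis=3)
    assert np.array_equal(rebuilt, frames)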

Unlike coding and spatial redundancy, redundancy from irrelevant information is handled by exploiting biases or insensitivities in human vision and perception. For example, the human eye is insensitive to high-frequency information in color, so irreversible quantized compression can be applied to it.
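
A minimal, generic sketch of this kind of perceptual, lossy step, assuming a YCbCr input and not tied to WiMi’s method: the luma channel is kept intact while the chroma channels are subsampled and coarsely quantized, the classic irreversible operation behind formats such as JPEG. The subsampling factor and quantization step are illustrative assumptions.

    import numpy as np

    def subsample_and_quantize_chroma(ycbcr, factor=2, step=16):
        """Keep luma (Y) intact; downsample and coarsely quantize Cb/Cr,
        exploiting the eye's lower sensitivity to color detail (lossy)."""
        luma = ycbcr[..., 0]
        chroma = ycbcr[..., 1:][::factor, ::factor, :]   # 4:2:0-style subsampling
        chroma_q = (chroma // step) * step               # coarse, irreversible quantization
        return luma, chroma_q

    # Assumed example input: a random 64x64 YCbCr image.
    img = np.random.default_rng(2).integers(0, 256, (64, 64, 3), dtype=np.uint8)
    luma, chroma_q = subsample_and_quantize_chroma(img)
    print(luma.shape, chroma_q.shape)   # (64, 64) (32, 32, 2)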

WiMi’s digital content compression and processing system is built on the basic principles of a lossless compression framework. The size of digital content data is, in effect, information plus data redundancy; once the fundamental problem of data redundancy is addressed, transmission speed can be significantly improved. WiMi is also continuously optimizing its holographic digital content compression and processing system and has previously introduced a parallel compression scheme based on multi-tasking packages that considerably reduces processing time and improves performance. WiMi will continue to improve the system’s intelligent processing capabilities and project management performance to provide better services to customers in the Web 3.0 era.
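
The parallel scheme itself is not publicly documented; as a rough sketch of the general idea only, the snippet below splits a byte stream into chunks and compresses them concurrently with Python’s standard zlib and a process pool, trading a small loss in compression ratio for shorter wall-clock time. The chunk size, worker count, and function names are illustrative assumptions.

    import zlib
    from concurrent.futures import ProcessPoolExecutor

    def compress_chunked(data: bytes, chunk_size: int = 1 << 20, workers: int = 4) -> list:
        """Compress independent chunks in parallel; each chunk is self-contained,
        so the blocks can also be decompressed or streamed independently."""
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(zlib.compress, chunks))

    def decompress_chunked(blocks) -> bytes:
        return b"".join(zlib.decompress(b) for b in blocks)

    if __name__ == "__main__":
        payload = b"web 3.0 digital content " * 500_000
        blocks = compress_chunked(payload)
        assert decompress_chunked(blocks) == payload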

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ:WIMI), whose commercial operations began in 2015, is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement, except as required under applicable laws.

View original content: https://www.prnewswire.com/news-releases/wimi-hologram-cloud-develops-a-digital-content-compression-and-processing-system-for-web-3-0–301751526.html

Source: WiMi Hologram Cloud Inc.

FORMULA E AND TATA COMMUNICATIONS ANNOUNCE MULTI-YEAR COLLABORATION

Tata Communications becomes Official Broadcast Distribution Provider to the ABB FIA Formula E World Championship

Multi-year strategic relationship supports Formula E’s innovative new remote broadcast production setup, reducing the environmental impact associated with live TV coverage of major international sports events

TV viewers around the world tuned in live to the first-ever Formula E race in India last Saturday, the 2023 Greenko Hyderabad E-Prix

MUMBAI, India and LONDON, Feb. 17, 2023 /PRNewswire/ — Formula E and Tata Communications announced a strategic multi-year relationship with the global commtech company becoming the Official Broadcast Distribution Provider to the ABB FIA Formula E World Championship.

Formula E and Tata Communications announce multi-year collaboration. L-R Mr. Jamie Reigle, CEO, Formula E, Mr. Amur Lakshminarayanan, CEO, Tata Communications

The new agreement will see Tata Communications deliver high-definition, high-resolution and high-speed live broadcast content to viewers around the world as part of Formula E’s new remote broadcast production of live races, reducing the environmental impact typical of major live international sports events on TV.

Tata Communications’ technologically advanced, software-defined media edge platform will deliver more than 160 live video and audio signals from Formula E races across continents within milliseconds, using 26 media edge locations across North America, Europe and Asia.

The new super-fast race broadcast distribution will be supported by Tata Communications’ specially trained experts, providing round-the-clock global end-to-end managed services at all 16 races this season. Tata Communications and Formula E are also working together to further enrich experiences for motorsport fans with innovation and efficiency.

Tata Communications made history with Formula E as the ABB FIA Formula E World Championship held a race in India for the first time. Viewers around the world followed the action live as 22 drivers from 11 teams, including Mahindra Racing, Jaguar TCS Racing, Maserati MSG Racing and NEOM McLaren Formula E Team, competed in the 2023 Greenko Hyderabad E-Prix.

Jamie Reigle, CEO, Formula E, said:

“Formula E is an intense tour given its on-the-go nature. Tata Communications’ support over the years has made state-of-the-art remote production possible, with real-time TV signal transmissions from the race venues to our broadcast centre in London and finally to the audience’s screens. This reduces logistical challenges, drives cost efficiencies, offers travel flexibility for our employees, especially women, and cuts emissions.”

A.S. Lakshminarayanan, MD and CEO, Tata Communications, said:

“There are 85 cameras capturing the event for more than 400 million people watching all over the world. Being able to facilitate that truly speaks to the power of the internet that we have been able to leverage, with our dedicated media cloud and edge computing capabilities. Apart from our long-standing partnership with the FIA, we extend these services to multiple major sporting leagues across the world.”

Note to Editors: 

  • Tata Communications’ sustainable remote production solution will transmit and transfer live racing action from over 85 camera feeds and audio channels at each racetrack to Formula E’s central remote production hub in the UK.
  • The repackaged feeds are distributed to global rights-holding broadcasters and digital platforms leveraging Tata Communications’ global edge infrastructure.
  • Tata Communications’ media edge cloud enables very-low-latency video processing from any venue using first-mile internet, while processing and distributing the video signals to any platform globally with high availability.
  • Offered as a fully managed service, the availability of edge capability at the venue allows businesses to add a wide variety of digital services, such as high-performance data tunnels over a secure connection that help with real-time data enrichment for a better viewer experience.

Forward-looking and cautionary statements

Certain words and statements in this release concerning Tata Communications and its prospects, and other statements, including those relating to Tata Communications’ expected financial position, business strategy, the future development of Tata Communications’ operations, and the general economy in India, are forward-looking statements. Such statements involve known and unknown risks, uncertainties and other factors, including financial, regulatory and environmental, as well as those relating to industry growth and trend projections, which may cause actual results, performance or achievements of Tata Communications, or industry results, to differ materially from those expressed or implied by such forward-looking statements. The important factors that could cause actual results, performance or achievements to differ materially from such forward-looking statements include, among others, failure to increase the volume of traffic on Tata Communications’ network; failure to develop new products and services that meet customer demands and generate acceptable margins; failure to successfully complete commercial testing of new technology and information systems to support new products and services, including voice transmission services; failure to stabilize or reduce the rate of price compression on certain of the company’s communications services; failure to integrate strategic acquisitions and changes in government policies or regulations of India and, in particular, changes relating to the administration of Tata Communications’ industry; and, in general, the economic, business and credit conditions in India. Additional factors that could cause actual results, performance or achievements to differ materially from such forward-looking statements, many of which are not in Tata Communications’ control, include, but are not limited to, those risk factors discussed in Tata Communications Limited’s Annual Reports.

The Annual Reports of Tata Communications Limited are available at www.tatacommunications.com. Tata Communications is under no obligation to, and expressly disclaims any obligation to, update or alter its forward-looking statements.

© 2023 Tata Communications Ltd. All rights reserved. TATA COMMUNICATIONS and TATA are trademarks or registered trademarks of Tata Sons Private Limited in India and certain countries.