Tag Archives: Machine Learning

Sustainability for the AI Future Needs to Start with Data Centres

This article was based on an interview with Mr Chee Meng Tan, Area Sales Director, Commercial Building Services at Grundfos

As the world accelerates into the AI-driven future, the demand for data processing power is growing exponentially. The backbone of this digital revolution—data centres—is crucial in enabling advancements in Artificial Intelligence (AI), the Internet of Things (IoT), cloud computing, and more. However, this progress comes at a significant environmental cost. Data centres are notorious for their immense energy consumption and water usage, making them prime candidates for sustainability efforts. In an era of heightened environmental consciousness, achieving sustainability in data centres is not just an option but an imperative.

Grundfos UAE Data Centre
Source: Grundfos

Chee Meng Tan, Area Sales Director, Commercial Building Services at Grundfos, highlights this challenge succinctly: “The efficiency of both IT hardware and cooling has been improved, and there’s a shift away from small, inefficient enterprise data centres towards more efficient cloud and hyperscale data centres.” The future of sustainability in AI must begin with reevaluating how data centres manage energy, cooling, and water resources.

Malaysia’s Emergence as a Key Market for Data Centers

In the Asia Pacific region, and more specifically Southeast Asia, Malaysia has quickly emerged as a key hub for data centre investments. With its strategic geographic location, relatively stable political environment, and competitive energy prices, Malaysia has attracted significant interest from global technology giants. Companies like Google, Microsoft, Amazon Web Services (AWS), and Meta have all made substantial investments in the region to support their growing data infrastructure needs.

AI-generated image of Malaysia Digital (Microsoft Copilot)

Google has set its sights on expanding its cloud operations in Malaysia as part of a broader strategy to strengthen its Southeast Asian presence, investing over USD 2 billion. Similarly, Microsoft announced plans to invest USD 2.2 billion in new data centre regions in Malaysia, signalling the country’s rising importance in the global cloud ecosystem. Meta, too, is making moves to leverage Malaysia’s infrastructure for its massive data demands, driven by the increasing reliance on cloud services and social media across the globe. Malaysia’s emergence as a key player is underscored most clearly by AWS’s USD 6.2 billion investment to develop and deploy its data centres in the country.

Malaysia’s government has also been keen to position the country as a leader in digital infrastructure. The recent launch of the Malaysia Digital Economy Blueprint (MyDIGITAL) sets the stage for Malaysia to become a regional digital economy leader by 2030. Part of this initiative involves accelerating the development of data centres and semiconductor fabrication, positioning Malaysia as a critical player in the Southeast Asian data economy.

However, this rapid growth brings challenges. As Tan points out, Southeast Asia’s unique climate, characterized by high temperatures and humidity, exacerbates the energy and cooling demands of data centres. “Cooling alone accounts for 35 to 40% of energy consumption in data centres in Southeast Asia—up to 10% more than the global average,” he explains.

The Energy and Water Footprint of Data Centers

Data centres are power-hungry operations. According to the International Energy Agency, the electricity demand of data centres is expected to double by 2026, driven by the rising adoption of AI and other data-intensive technologies. Cooling systems alone account for about 40% of the total energy consumption in these facilities, making it clear that any advancements in energy efficiency must begin with improvements to these systems.

Water usage is another critical challenge. Tan notes that “data centres are estimated to use more than a billion litres of water per day – equivalent to about 400 Olympic-sized swimming pools.” This consumption is expected to increase rapidly as the demand for computing power rises. The water-energy nexus—where water is required to generate energy, and energy is required to circulate water—places a double burden on data centres striving to become more sustainable.

Grundfos: Innovating for Sustainability in Data Centers

Grundfos, a global leader in pump solutions and water management, has been at the forefront of sustainability efforts within the data centre industry. By leveraging over 75 years of experience in water solutions, Grundfos is not just selling pumps; they are working with data centre designers to optimize cooling systems based on specific needs. As Tan explains, “We don’t just sell pump solutions, but work with data centre designers to optimize the design of the cooling system based on the specific needs of each data centre.”

Grundfos Hydro MPC E and Hydro Multi E water boosting systems with IP55 rated MGE motors (Source: Grundfos)

Grundfos provides tailored solutions that help data centres reduce their energy and water footprints. For instance, ensuring that pumps are properly sized for each system is critical to minimizing energy waste. “It may sound basic, but many building operators don’t get it right,” Tan points out. The right-sizing of pumps can prevent significant energy wastage and ensure that the cooling system operates efficiently.

In addition to sizing, the motors used in these pumps play a pivotal role in energy efficiency. Tan mentions that many data centres are still using outdated IE3 motors, even though more energy-efficient alternatives, such as IE5 motors, are available. “Based on our calculation, an IE5 motor can achieve 5% in energy savings as compared to an IE3 motor for 10MW data centres,” he says. These savings might seem small at first glance, but when considering the scale of energy usage in data centres, they add up to significant reductions in both energy consumption and operational costs.
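
To put that 5% figure in rough perspective, here is a back-of-the-envelope calculation in Python. The 10MW facility size, the roughly 40% cooling share and the 5% motor saving come from the article; the assumptions that the facility runs at full load year-round and that the saving applies across the whole cooling load are simplifications for illustration only.

```python
# Rough, illustrative arithmetic only; load profile and scope of the saving are assumptions.
facility_load_mw = 10        # 10 MW data centre, assumed to run at this load year-round
cooling_share = 0.40         # cooling ~40% of total energy use (per the article)
motor_saving = 0.05          # IE5 vs IE3 saving quoted by Grundfos

hours_per_year = 8760
cooling_energy_mwh = facility_load_mw * cooling_share * hours_per_year
saved_mwh = cooling_energy_mwh * motor_saving

print(f"Cooling energy: {cooling_energy_mwh:,.0f} MWh/year")
print(f"Potential saving: {saved_mwh:,.0f} MWh/year")
```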

AI and Machine Learning in Cooling Systems

As AI continues to develop, it is not only driving the demand for data centres but also providing the tools to make them more sustainable. One of the most exciting advancements in data centre cooling is the use of AI and machine learning to achieve real-time, on-demand cooling. Tan points to a groundbreaking example from Google: “Google reported using its DeepMind AI to reduce the electricity demand of their data centre cooling systems by 40%.”

An artist’s illustration of artificial intelligence (AI), inspired by neural networks used in deep learning and created by Novoto Studio as part of the Visualising AI project (Photo by Google DeepMind)

By harnessing AI to monitor temperature and cooling requirements, data centres can dynamically adjust their cooling systems to reduce energy consumption. Machine learning algorithms can analyze vast amounts of operational data to predict when and where cooling is needed, allowing the system to respond with precision, thereby optimizing energy use.
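
As a rough illustration of what such predictive cooling can look like in practice, the sketch below trains a simple regression model on historical telemetry and forecasts the next interval’s cooling load. The file name, column names, and choice of model are hypothetical assumptions for the example, not a description of Google’s DeepMind system or any Grundfos product.

```python
# Minimal sketch of predictive cooling: learn cooling load from past telemetry,
# then forecast the next interval so the plant can adjust setpoints in advance.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

telemetry = pd.read_csv("rack_telemetry.csv")   # hypothetical export of sensor logs
features = telemetry[["it_load_kw", "inlet_temp_c", "humidity_pct", "hour_of_day"]]
target = telemetry["cooling_load_kw"]

model = GradientBoostingRegressor().fit(features, target)

# Forecast cooling demand for the most recent interval's conditions.
latest = features.tail(1)
print("Forecast cooling load (kW):", model.predict(latest)[0])
```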

This shift toward intelligent cooling systems represents a major leap forward in the quest for sustainable data centres. AI-driven solutions not only reduce energy consumption but also extend the lifespan of equipment by preventing overheating and reducing the strain on infrastructure.

Modular and Prefabricated Solutions for Efficiency

Beyond AI, Grundfos is also pioneering modular and prefabricated solutions designed to enhance energy efficiency in data centres. Tan explains, “While not specific to data centres, the Delta Modular Systems we have developed offer various modularized standard solutions to the building services industry.” These systems are designed to optimize both the pump module’s structural design and control operations, bringing significant energy savings while reducing construction time and environmental impact.

Modular systems, particularly in cooling, are gaining traction because they allow for more tailored, needs-based cooling. Instead of relying on a centralized system, smaller cooling units can be deployed across server racks, each regulating its performance based on the needs of the corresponding rack. This not only minimizes energy usage but also ensures that each part of the data centre is cooled efficiently, without overburdening any single system.

Water Efficiency: A Key Focus for the Future

As global water security becomes an increasing concern, data centres are under pressure to reduce their water consumption. Grundfos is actively exploring alternative water sources and technologies that allow for more sustainable water management in data centres. “New technologies are being explored, such as using non-potable alternative water sources like rainwater harvesting or recycled water,” Tan explains. However, these systems require significant energy to treat the water and ensure its compatibility with the equipment used in data centres.

A holistic approach to sustainability, therefore, must include both energy and water efficiency. By integrating renewable water sources and improving the energy efficiency of water management systems, data centres can reduce their environmental impact and enhance their resilience in the face of water scarcity.

The Importance of Power Usage Effectiveness (PUE)

One of the key metrics used to measure the energy efficiency of data centres is Power Usage Effectiveness (PUE). PUE is a ratio that compares the total energy used by the data centre to the energy used by the computing equipment itself. An ideal PUE is 1.0, meaning that all energy consumed by the facility is being used for computing, with no excess energy wasted on overhead functions like cooling.
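
For readers who want to see the metric concretely, here is a minimal worked example with made-up numbers (not figures from the article):

```python
# PUE = total facility energy / IT equipment energy. A value of 1.0 would mean
# zero overhead; the hypothetical numbers below give a PUE of 1.5.
total_facility_energy_kwh = 1_500_000   # IT load plus cooling, lighting, power losses
it_equipment_energy_kwh = 1_000_000     # servers, storage and network gear only

pue = total_facility_energy_kwh / it_equipment_energy_kwh
print(f"PUE = {pue:.2f}")  # -> PUE = 1.50, i.e. 0.5 kWh of overhead per kWh of compute
```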

AI-Generated image of AI integration into Data Center Cooling systems by Microsoft Copilot

“PUE is a crucial metric for data centres,” Tan explains. “At Grundfos, our solutions are designed to achieve optimal energy efficiency, thereby reducing a data centre’s PUE.” Grundfos achieves this through three main strategies: providing reliable and efficient pumps, using digital technologies to optimize energy consumption, and offering prefabricated and modular solutions that shorten construction times and reduce environmental impact.

By focusing on reducing PUE, data centres can ensure that their energy usage is aligned with sustainability goals, while also cutting down on operational costs.

Renewable Energy Integration: The Next Step in Sustainability

Pairing energy-efficient cooling with renewable energy sources represents the next frontier for sustainable data centres. “The incorporation of renewable energy sources like solar panels or geothermal energy across operations can enable data centres to reduce reliance on fossil fuels and minimize their carbon footprint,” Tan says.

While renewable energy is often associated with variability—depending on factors like weather conditions—intelligent data insights and analytics can help overcome these challenges. By integrating renewable energy sources with advanced grid management systems, data centres can create a more resilient, sustainable energy supply that does not compromise performance.

The Road Ahead Should Be AI and Sustainability in Tandem

Looking to the future, AI is poised to play a central role in advancing sustainability efforts within the data centre industry. As Tan observes, “The rapid expansion of the AI market, which is projected to grow at a staggering annual rate of 37.3% between 2023 and 2030, is a primary driver of the rising demand for data centres.” This growth creates both opportunities and challenges: while data centres will need to expand to meet AI’s growing demands, they will also need to adopt AI-driven technologies to improve their sustainability.

In the next decade, data centres will increasingly rely on AI to optimize energy usage, predict maintenance needs, and enhance operational efficiency. These advancements will be critical in helping the industry meet global sustainability targets, such as those outlined in the Global Cooling Pledge, which aims to reduce cooling-related emissions by 68% by 2050.

Sustainable AI Starts with Data Centers

As the digital revolution continues to unfold, data centres will remain at the heart of technological progress. However, the environmental impact of these facilities cannot be ignored. As Chee Meng Tan from Grundfos emphasizes, “With a more robust approach to sustainability that considers impact across multiple touchpoints, data centres can demonstrate a strengthened commitment to the cause, which sharpens its competitive edge within the industry.”

By integrating energy-efficient technologies, adopting AI-driven cooling solutions, and exploring alternative water sources, data centres can drastically reduce their environmental footprint. The road to a sustainable AI future starts with the choices we make today, and it begins at the data centre.


This article was based on an interview with Mr Chee Meng Tan, Area Sales Director, Commercial Building Services at Grundfos


Chee Meng Tan
Area Sales Director, Commercial Building Services, Grundfos

Chee Meng Tan is currently the Area Sales Director for Grundfos’ Commercial Building Services (CBS) division, responsible for growing CBS’s presence and leadership in Southeast Asia. Apart from opening and developing regional markets and operations, his role involves the strategic formulation and operative implementation of regional sales concepts.

Chee Meng joined Grundfos Singapore in 1995 as an Application Engineer and has taken on different roles and responsibilities over the last 29 years. Prior to his current role, he held various positions within Grundfos, including General Manager of Grundfos Alldos (Shanghai) Water Treatment Co., Ltd, Regional Business Director for the Industry segment in Asia Pacific, and Business Director for the Water Utility segment in Asia Pacific.

Chee Meng holds a Diploma in Electrical Engineering as well as Management Studies.

Strengthening Core Infrastructure with Proactive Measures and Monitoring

This article was based on an interview with Mr Ramon Pinero, General Manager of BlackBerry AtHoc, and Mr Jonathan Jackson, Senior Director of Strategic Accounts at BlackBerry.

As digital ecosystems grow more complex, the need for robust cybersecurity measures becomes critical. BlackBerry, having transitioned from mobile devices to cybersecurity, is now one of the foremost companies leading the charge in safeguarding critical infrastructure, supply chains, and digital assets through emerging technologies like artificial intelligence (AI) and machine learning (ML). With cyberattacks growing in sophistication, BlackBerry’s focus on prevention and vigilance sets a new standard in cyber resilience.

The Financial Impact of Cyberattacks

The cost of a cyberattack is staggering. A 2024 study from BlackBerry revealed that the average cost of a single data breach has risen to USD 4.45 million. For industries such as healthcare, finance, and energy—where attacks can disrupt critical services—the cost can soar even higher. Beyond immediate financial losses, cyberattacks can damage reputations, erode customer trust, and lead to legal penalties or regulatory fines.

Woman in a Beige Coat Writing on a Glass Panel Using a Whiteboard Marker
Photo by Nataliya Vaitkevich

In particular, ransomware attacks—one of the most prevalent threats—cost companies over USD 1.85 million per incident when factoring in downtime, recovery, and ransom payments. These costs are unsustainable for many organizations, especially smaller businesses. In the words of Ramon Pinero, General Manager of BlackBerry AtHoc, “If we can prevent attacks from happening, it means that organizations don’t have to spend time and resources recovering from incidents.”

The Rise in Cyber Threats: A Call for Vigilance

BlackBerry’s Global Threat Intelligence Report highlights a surge in cyberattacks. Between April and June 2024, the company prevented 3.7 million attacks, a 53% increase from the previous quarter. Over 800,000 of these attacks targeted critical infrastructure, including the financial sector, energy grids, and healthcare. This increase undoubtedly underlines the importance of continuous vigilance across all sectors.

Prevention Through Proactive AI Solutions

Prevention should be at the core of any company’s, organization’s or city’s strategy for dealing with malicious attacks, and it is correspondingly at the core of BlackBerry’s approach to empowering customers. By embedding AI into its cybersecurity solutions, the company focuses on stopping cyberattacks before they cause damage. The acquisition of Cylance allowed BlackBerry to incorporate predictive AI technology into its security systems, enabling it to proactively prevent cyberattacks instead of merely reacting to them.

AI-Generated Image by Copilot of an AI Chip embedded in a system

BlackBerry’s AI-powered defence mechanisms are particularly effective in securing critical infrastructure and supply chains. According to the Global Threat Intelligence Report, the company intercepts over 11,500 unique malware hashes daily, highlighting the pace at which new cyber threats emerge.

Supply Chain Security: A Critical Weakness

One of the most significant risks today lies in the vulnerability of software supply chains. BlackBerry’s June 2024 survey on supply chain cybersecurity reveals that more than 75% of software supply chains experienced cyberattacks within the past year. These attacks often target smaller, less secure suppliers as entry points to larger organizations, causing a cascade of damage.

Of concern is that 74% of these attacks originated from third-party vendors or suppliers that organizations were either unaware of or failed to monitor. As BlackBerry’s Vice President of Product Security Christine Gadsby noted, “How a company monitors and manages cybersecurity in their software supply chain has to rely on more than just trust.” In this landscape, prevention demands heightened visibility, continuous monitoring, and regular audits of suppliers’ security postures.

The Role of Managed Detection and Response (MDR)

For organizations without dedicated cybersecurity teams, BlackBerry’s Cylance Managed Detection and Response (MDR) provides critical support. The service offers real-time monitoring and rapid response to emerging threats, ensuring that businesses, especially smaller ones, are not left vulnerable.

According to Jonathan Jackson, Senior Director of Strategic Accounts at BlackBerry, “BlackBerry offers MDR because not every organization can afford a fully staffed cybersecurity team.” MDR enables companies to utilize sophisticated threat detection tools and professional analysis, allowing them to stay ahead of cybersecurity risks even without an extensive cybersecurity team of their own.

Zero Trust for Critical Infrastructure

A zero-trust approach has become essential for protecting critical infrastructure. BlackBerry’s security solutions implement zero-trust architectures, where no device, user, or system is inherently trusted. This model is crucial for sectors like healthcare, finance, and energy, where the stakes are high, and breaches could lead to widespread disruptions.

Image by Pete Linforth from Pixabay

“Zero trust is especially important for critical infrastructure because of the types of data and assets involved,” Jackson emphasized. By enforcing strict authentication protocols and continuous monitoring, BlackBerry’s solutions protect critical systems from both internal and external threats.

Resilience for Smart Cities and IoT

As cities become smarter, integrating Internet of Things (IoT) devices into their infrastructure, they become prime targets for cyberattacks. With IoT traffic systems, smart utilities, and public services forming the backbone of modern cities, a single cyberattack could cripple entire urban centres.

Companies like BlackBerry will play a critical role in safeguarding these environments, ensuring that IoT systems are secured and cyber-resilient. “As you have more connected systems, you increase the attack surface,” Jackson explains. AI-driven threat detection and real-time monitoring are vital to ensuring that smart cities remain functional and resilient in the face of cyber threats.

Cyber Resilience Through Prevention

As the cyber threat landscape evolves, prevention and vigilance remain the twin pillars of a robust cybersecurity strategy. Whether protecting supply chains, critical infrastructure, or smart cities, solutions like BlackBerry’s Cylance AI offer a blueprint for building resilience in the face of increasingly sophisticated cyberattacks.

AI-Generated Image by Copilot of a cyber-resilient Kuala Lumpur

By staying proactive, monitoring vulnerabilities, and implementing zero-trust architectures, organisations can prevent breaches before they occur. As BlackBerry’s research shows, the key to true cyber resilience lies in constant vigilance and a commitment to prevention—because in today’s world, one breach could mean the difference between success and disaster.


This article was written based on an interview session with Mr Ramon Pinero, General Manager of BlackBerry AtHoc, and Mr Jonathan Jackson, Senior Director of Strategic Accounts at BlackBerry.


Ramon Pinero
General Manager BlackBerry AtHoc

Ramon Pinero is the General Manager of BlackBerry AtHoc, where he oversees all aspects of the critical event management business. With more than 20 years of experience in crisis/emergency management (CEM) technologies, Ramon brings a deep understanding of both product development and customer success.

With a passion for technology and deep roots in emergency response, Ramon continues to advance BlackBerry AtHoc’s position as the market leader. He is focused on driving strategy, fostering innovation, and enabling AtHoc’s technology to make an even greater impact—helping more organizations increase their resiliency and save lives through fast, accurate communications before, during, and after critical situations.


Jonathan Jackson
Senior Director, Strategic Technical Sales APAC at BlackBerry

Jonathan Jackson (JJ) is the Senior Director, Strategic Technical Sales APAC for BlackBerry. With over 20 years of experience, he helps organisations manage their cyber risk, leveraging best-of-breed solutions to stop cyberattacks. JJ is a staunch advocate of protecting data and privacy and is a frequent spokesperson on cyber threat intelligence in Australia and across APAC.

Developing and Enhancing Cyber Resilience in Core Infrastructure

This article was based on an interview with Mr Ramon Pinero, General Manager of BlackBerry AtHoc, and Mr Jonathan Jackson, Senior Director of Strategic Accounts at BlackBerry.

As we increasingly depend on digital systems for everyday operations, the security of our critical infrastructure faces unprecedented challenges. Cybersecurity, once a consideration largely confined to the tech sector, now plays a central role in safeguarding industries like transportation, healthcare, energy, and smart cities. BlackBerry, a company once synonymous with mobile devices, has shifted focus to become a key player in cybersecurity, integrating emerging technologies such as artificial intelligence (AI), machine learning, and the Internet of Things (IoT) to bolster infrastructure resilience.

Pivoting from Mobile Leader to Cybersecurity Powerhouse

BlackBerry’s journey from a hardware pioneer to a cybersecurity leader was driven by recognizing a shift in the global digital landscape. Cybersecurity has evolved into an essential growth market in an interconnected world with myriad endpoints—from smartphones to autonomous vehicles and IoT sensors.

Photo by Huy Phan

BlackBerry recognized this early and made a strategic decision to focus on software and services. “We saw an opportunity to make a strategic decision to focus on software and services in cybersecurity as a growth market,” noted Jonathan Jackson, Senior Director of Strategic Accounts at BlackBerry. This shift positioned the company to lead in two critical areas: cybersecurity and embedded systems, which are crucial in protecting modern infrastructure.

AI and Machine Learning: BlackBerry’s New Arsenal

At the core of BlackBerry’s transformation is its acquisition of Cylance, an AI-driven cybersecurity firm. The integration of Cylance’s AI and machine learning capabilities has empowered BlackBerry to prevent and respond to emerging threats. The nature of AI allows for predictive security, meaning that BlackBerry’s systems can anticipate and stop attacks before they materialize.

“Cylance AI is embedded across the full security stack of BlackBerry’s portfolio today,” explained Jackson. This approach is crucial in a world where cyberattacks evolve constantly, with AI even being used by malicious actors to generate never-before-seen threats. By using AI to fight AI, BlackBerry ensures it remains a step ahead of cybercriminals, leveraging technologies like deep learning to predict and prevent threats to critical systems.

Emergence of Smart Cities and the Need to Build Cyber Resilience

As smart cities emerge worldwide, they bring with them a host of new security challenges. The interconnected nature of smart city infrastructure—where traffic systems, public services, and energy grids communicate in real time—expands the potential attack surface for cybercriminals. BlackBerry’s embedded systems, designed to secure IoT devices and smart city infrastructure, are vital in protecting these increasingly complex environments.

Photo by Pixabay

BlackBerry has already made significant strides in securing automotive systems, with their technology being used by 24 of the top 25 electric vehicle manufacturers. This same technology, paired with AI and machine learning, is essential in smart cities, where the attack surface is ever-growing. “As you have more connected systems, you increase the attack surface of that system,” the company pointed out. Ensuring that these systems are secure is vital to maintaining the resilience of smart cities.

As the attack surface increases, the question of cybersecurity incidents changes from an “if” to a “when”. This is where policies such as Zero Trust can help. However, smart cities will also benefit from the implementation of threat intelligence and managed detection and response (MDR) solutions.

Zero Trust and Critical Infrastructure

One of the cornerstones of BlackBerry’s approach to cybersecurity resilience is the zero-trust framework. As applied to critical infrastructure—whether it’s power grids, transportation systems, or hospitals—this model ensures that no entity within the system is inherently trusted. Instead, all systems, devices, and users must continuously authenticate their identity to access sensitive data or systems.

This level of scrutiny is essential in industries where the stakes are highest. Ramon Pinero, General Manager of BlackBerry AtHoc, emphasizes, “Zero trust is important especially for critical infrastructure because of the types of data and assets that are in critical infrastructure.” In an era where cyberattacks can have catastrophic real-world consequences, securing every aspect of infrastructure is paramount.

Collaborating to Bridge the Skills Gap

The challenges of securing critical infrastructure are further compounded by a global skills shortage in cybersecurity. BlackBerry’s collaboration with the Malaysian government exemplifies how partnerships can help address this gap. Through the Cybersecurity Center of Excellence in Cyberjaya, BlackBerry is training the next generation of cybersecurity professionals, with a focus on AI, smart cities, and IoT security.

The initiative aims to develop local talent capable of addressing emerging cyber threats, ensuring Malaysia’s infrastructure is both secure and future-ready. As more countries invest in smart city technologies, this collaboration model could serve as a blueprint for other regions looking to build cybersecurity resilience into their infrastructure.

The Power of Prevention

While responding to cyberattacks is critical, BlackBerry strongly emphasizes prevention. Their Managed Detection and Response (MDR) services, powered by Cylance’s AI, help organizations that may not have in-house security teams by offering continuous threat monitoring and proactive security measures. With the ability to predict and prevent attacks, BlackBerry’s MDR solutions ensure that businesses and critical infrastructure providers can focus on operations without worrying about constant cybersecurity threats.

AI-Generated image by Copilot of a smart city that is secured digitally

In their approach, prevention is key. “If we can prevent attacks from happening, it means that organizations don’t have to spend time and resources recovering from incidents,” Ramon further emphasized. This proactive mindset is essential in today’s world, where the consequences of a successful cyberattack on critical infrastructure can be disastrous.

A Cyber-Resilient Future

As emerging technologies like AI, machine learning, and IoT continue to shape the future of infrastructure, ensuring that these systems are secure is more important than ever. BlackBerry’s strategic pivot from hardware to cybersecurity positions it as a leader in building resilient infrastructure, whether it’s in smart cities, healthcare, or automotive industries.

By focusing on prevention, embedding AI into their solutions, and addressing the cybersecurity skills gap, BlackBerry is helping organizations and governments alike secure the digital systems that power our world. In doing so, they ensure that critical infrastructure remains resilient in the face of an ever-evolving threat landscape.


This article was written based on an interview session with Mr Ramon Pinero, General Manager of BlackBerry AtHoc, and Mr Jonathan Jackson, Senior Director of Strategic Accounts at BlackBerry.


Ramon Pinero
General Manager BlackBerry AtHoc

Ramon Pinero is the General Manager of BlackBerry AtHoc, where he oversees all aspects of the critical event management business. With more than 20 years of experience in crisis/emergency management (CEM) technologies, Ramon brings a deep understanding of both product development and customer success.

With a passion for technology and deep roots in emergency response, Ramon continues to advance BlackBerry AtHoc’s position as the market leader. He is focused on driving strategy, fostering innovation, and enabling AtHoc’s technology to make an even greater impact—helping more organizations increase their resiliency and save lives through fast, accurate communications before, during, and after critical situations.


Jonathan Jackson
Senior Director, Strategic Technical Sales APAC at BlackBerry

Jonathan Jackson (JJ) is the Senior Director, Strategic Technical Sales APAC for BlackBerry. With over 20 years of experience, he helps organisations manage their cyber risk, leveraging best-of-breed solutions to stop cyberattacks. JJ is a staunch advocate of protecting data and privacy and is a frequent spokesperson on cyber threat intelligence in Australia and across APAC.

Sustainability Cannot Exist Without Innovation, & Vice Versa – Here’s Why

With just six years remaining to achieve the United Nations Sustainable Development Goals (SDGs) by 2030, the Asia Pacific region faces a pressing and formidable challenge.

The recently released 2030 Asia Pacific SDG Progress Report by the Economic and Social Commission for Asia and the Pacific (ESCAP) paints a stark picture, revealing that at the midpoint, the region has made less than 15% of the necessary progress towards the SDGs. The report also predicts that if current trends persist, it will take an estimated 42 years for the region to achieve the 2030 agenda, falling significantly short of reaching 90% of the 118 measurable SDG targets.

Woman Using Laptop Computer With VR Headset
Photo by ThisIsEngineering on Pexels

This sobering analysis underscores the urgent need to multiply efforts and accelerate progress. To address this challenge, corporates in the Asia Pacific & Japan (APJ) region must adopt an innovation mindset and place sustainability at the forefront of the business agenda. In fact, sustainability can also be a powerful driver of innovation, propelling companies forward on the path to success in today’s digital era.

Sustainable innovation is not limited to short-term gains but creates long-term value for both businesses and the planet. The Dell Technologies Innovation Index, which polled 6,600 business leaders across 45+ countries, reveals that more than one-third of companies (35%) in Malaysia – the same percentage as the APJ region – consider climate change as an accelerator of innovation.

From above of blue crumpled plastic bottle thrown on green park lawn on sunny summer day
Photo by Karolina Kaboompics on Pexels

Additionally, the research shows that momentum for sustainability innovation is steadily growing in our region. Half of the companies (50%) in Malaysia are actively reducing their overall IT carbon footprint, recognising the critical role of technology in addressing environmental challenges. Furthermore, 37% of businesses in Malaysia (40% in APJ) are turning to technology to gain greater visibility into their carbon impact, enabling them to make data-driven decisions for sustainability.

Sustainability is also being prioritised by the Government, which has – for the first time – set SDG indicator targets and finalised nine accelerator initiatives to achieve the SDGs in the country. This is to ensure a more effective implementation of the SDGs towards the country’s 2030 Agenda for Sustainable Development (Agenda 2030).

Innovating for sustainability, sustainably

In today’s economic climate, innovation has never been more important for organisations to stay ahead of the curve and build resilience. While sustainability evidently drives innovation forward, businesses also have a responsibility to ensure that innovation is carried out efficiently and with minimal environmental impact.

For one, IT decision-makers (ITDMs) in APJ can leverage innovative technologies such as edge computing, artificial intelligence/machine learning (AI/ML), and as-a-service (aaS) models to manage energy consumption effectively, improve energy efficiency, and act upon data insights to drive sustainability. Encouragingly, the Dell Technologies Innovation Index also found that more than half (57%) of companies in Malaysia are already progressing in this space, embracing technology as a powerful tool for sustainable practices.

Close-Up Photo of Delivery Robots
Photo by Kindel Media on Pexels

For example, innovative consumption models such as aaS or on-demand solutions promote sustainable resource utilisation by aligning technology consumption with actual needs – therefore reducing waste and optimising resource allocation. Businesses that embrace these flexible consumption models can not only reduce their environmental impact but also benefit from increased efficiency and cost savings.

Additionally, as digital transformation and the consumption of technology become more widespread, the greening of data centres has become crucial. As businesses rely more heavily on data centres, optimising their energy consumption becomes paramount. Currently, 48% of businesses in Malaysia are actively exploring methods to reduce energy use in their data centres.

By investing in energy-efficient infrastructure and adopting best practices, organisations can lead the way in sustainable data management, setting a positive example for the industry.[i]

Scientist Checking Crops in Laboratory
Photo by ThisIsEngineering on Pexels

While technology can help drive efficiencies, there comes a day when these devices eventually reach their end of life. It is therefore equally critical that businesses take active steps and work with the right partners to retire and recycle their end-of-life IT equipment, in order to minimise electronic waste and foster a circular economy. Dell Technologies’ Asset Recovery Services, for example, helps businesses with the proper disposal and recycling of IT assets to reduce the environmental footprint of the technology industry. Notably, the practice is not new in Malaysia and many are already engaged in initiatives to retire and recycle IT equipment responsibly.

It is also encouraging to note that the government has launched a National Circular Economy Council (NCEC) to unite stakeholders to accelerate the transition of waste management from a linear economy to a more holistically circular one.[ii] The NCEC will focus on matters related to policies, laws, implementation of related strategies and action plans, and the commitment and collaboration between the government and the private sectors.

Sustainable innovation: A win-win for businesses and the planet

The benefits of sustainable innovation are two-fold, generating value for both our environment and the bottom line. By integrating sustainability into their innovation agenda, companies can reduce environmental impact, enhance resilience, and improve operational efficiency. Furthermore, embracing sustainable practices has become a critical consideration for businesses to not only attract customers and investors but also to engage current and future employees.

As the Asia Pacific region continues its pursuit of the SDGs, collaboration and collective action are essential. While sustainable innovation can and should be driven at the company level, governments, businesses and individuals must also come together to drive meaningful impact. Partnerships between the public and private sectors can facilitate knowledge sharing, resource mobilisation, and the development of innovative solutions to address pressing sustainability challenges. Cross-industry collaborations can foster innovation and create synergies that accelerate progress towards the SDGs.

With less than a decade to go, our region now stands at a critical juncture – where sustainable innovation can lead the way towards achieving the UN SDGs by 2030. Despite the challenges highlighted in the 2030 Asia Pacific SDG Progress Report, the growing momentum for sustainability innovation is encouraging. Businesses in APJ should continue to embrace sustainable practices and leverage cutting-edge technologies to make significant contributions to sustainable development.


[i] https://www.nst.com.my/business/2023/10/963188/riding-data-centre-wave
[ii] https://www.nst.com.my/news/nation/2023/09/952091/national-circular-economic-council-set-handle-solid-waste

Scientists Just Used AI to Discover New Genes Linked to Heart Disease

Heart disease remains the leading cause of death globally, impacting millions of lives each year. While significant progress has been made in understanding the risk factors associated with heart disease, the precise genetic underpinnings of this complex condition have remained largely elusive. However, a recent breakthrough involving artificial intelligence (AI) offers a glimpse into a future where personalized medicine for heart disease becomes a reality.

Photo by Robina Weermeijer on Unsplash

Heart disease is a multifaceted condition influenced by a combination of genetic and environmental factors. Traditional methods for identifying genes associated with disease often relied on genome-wide association studies (GWAS). These studies compare the genetic makeup of individuals with and without a particular disease, searching for variations (single nucleotide polymorphisms or SNPs) that occur more frequently in the diseased population. While GWAS have identified numerous SNPs linked to heart disease, many of these variants exert a relatively weak effect, making it challenging to pinpoint the specific genes responsible and develop targeted therapies.

Researchers Use a Machine Learning Model to Gain Deeper Insights

Researchers at Icahn School of Medicine at Mount Sinai are pioneering the use of a novel AI tool to unlock the secrets hidden within our genes. This tool, called a machine learning-based marker (MLBM), takes a more sophisticated approach compared to traditional GWAS. Instead of simply analyzing individual SNPs, the MLBM leverages machine learning algorithms to identify complex patterns across hundreds of genetic variants. Imagine sifting through a vast library of books, searching not just for individual words but for nuanced patterns and connections between sentences and paragraphs. The MLBM operates in a similar fashion, analyzing the interplay between numerous genetic variations to identify those that collectively contribute to an increased risk of heart disease.
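
To make the idea concrete, the toy sketch below scores disease risk from many weak genetic signals at once, which is the general pattern an MLBM follows. The synthetic genotype data and the generic classifier are illustrative assumptions only; this is not the Mount Sinai pipeline.

```python
# Toy sketch of a machine-learning-based marker (MLBM): combine many weak
# genetic signals (SNP genotypes coded 0/1/2) into one learned risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_people, n_variants = 5000, 300
genotypes = rng.integers(0, 3, size=(n_people, n_variants))   # allele counts per variant
true_weights = rng.normal(0, 0.05, n_variants)                 # many tiny effects
risk = genotypes @ true_weights
disease = (risk + rng.normal(0, 1, n_people)) > np.quantile(risk, 0.9)

X_train, X_test, y_train, y_test = train_test_split(genotypes, disease, random_state=0)
mlbm = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, mlbm.predict_proba(X_test)[:, 1]))
```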

The MLBM’s ability to identify complex patterns within genetic data has yielded significant results. The research team used the MLBM to analyze electronic health records and genetic data from over 600,000 individuals. This analysis revealed not only common SNPs associated with heart disease but also a set of rare coding variants within 17 previously unknown genes. These rare variants, while individually occurring in a smaller proportion of the population, may exert a more significant impact on heart disease risk. Imagine finding a single, critical clue hidden amongst a mountain of seemingly unrelated information. The MLBM’s ability to identify these rare yet impactful genetic variations holds immense potential for uncovering new pathways involved in heart disease development.

Photo by digitale.de on Unsplash

The identification of these novel genes opens doors for the development of more targeted therapies for heart disease. By understanding the specific genetic mutations contributing to an individual’s risk, doctors can potentially tailor treatment plans to address the underlying cause rather than simply manage symptoms. Imagine a future where preventive measures and medications can be personalized based on a person’s unique genetic makeup, potentially preventing heart disease altogether.

New Technologies Changing Medical Research

The success of the MLBM in uncovering new genetic variants for heart disease signifies a paradigm shift in our approach to medical research. AI has the potential to revolutionize the way we diagnose, treat, and ultimately prevent a wide range of diseases. By harnessing the power of AI to analyze complex biological data, researchers can gain a deeper understanding of the intricate dance between genes and disease. This newfound knowledge can pave the way for the development of personalized medicine, offering a future where healthcare becomes more proactive and effective in combating life-threatening conditions like heart disease.

What Might the Next Decade Bring for Computing?

New technologies can take many forms. Often, they come from generally straightforward, incremental product advances over the course of years; think the Complementary Metal-Oxide-Semiconductor (CMOS) process shrinks that underpinned many of the advances in computing over the past decades. Not easy, but relatively predictable from a high-level enough view.

Other shifts are less straightforward to predict. Even if a technology is not completely novel, it may require the right conditions and advances to come together so it can flourish in the mainstream. Both server virtualization and containerization fall into this category.

What’s next? Someone once said that predictions are hard, especially about the future. But here are some areas that Red Hat has been keeping an eye on and that you should likely have on your radar as well. This is hardly a comprehensive list and it may include some surprises, but it is a combination of both early-stage and more fleshed-out developments on the horizon. The first few are macro trends that pervade many different aspects of computing. Others are more specific to hardware and software computing infrastructure.

Artificial intelligence/machine learning (AI/ML)

On the one hand, AI/ML belongs on any list about where computing is headed. Whether coding tools, self-tuning infrastructure, or improved observability of systems, AI/ML is clearly a critical part of the computing landscape going forward.

What’s harder to predict is exactly which forms and applications of AI will deliver compelling business value; many will prove interesting only in narrow domains, and some will likely remain merely almost good enough over a lengthy time horizon.

elderly man thinking while looking at a chessboard
Photo by Pavel Danilyuk on Pexels.com

Much of the success of AI to date has rested on training deep neural networks (NNs) of increasing size (as measured by the number of weights and parameters) on increasingly large datasets using backpropagation, and supported by the right sort of fast hardware optimized for linear algebra operations—graphics processing units (GPUs) in particular. Large Language Models (LLMs) are one prominent, relatively recent example.

There have been many clear wins, but AI has struggled with more generalized systems that interface with an unconstrained physical world—as in the case of autonomous driving, for example. There are also regulatory and legal concerns relating to explainability, bias and even overall economic impact. Some experts also wonder whether broad gaps in our collective understanding of the many areas covered by cognitive science that lie outside the direct focus of machine learning may (or may not) need to be filled for AI to handle many types of applications.

What’s certain is that we will be surprised.

Automation

In a sense, automation is a class of application to which AI brings more sophisticated capabilities. Red Hat Ansible Lightspeed with IBM watsonx Code Assistant, for example, is a recent generative AI service designed by and for Ansible automators, operators and developers.

Automation is increasingly necessary because hardware and software stacks are getting more complex. What’s less obvious is how improved observability tooling and AI-powered automation tools that make use of that more granular data will play out in detail.

At the least, it will lead us to think about questions such as: Where are the big wins in dynamic automated system tuning that will most improve IT infrastructure efficiency? What’s the scope of the automated environment? How much autonomy will we be prepared to give to the automation, and what circuit breakers and fallbacks will be considered best practice?

Over time, we’ve reduced manual human intervention in processes such as CI/CD pipelines. But we’ve done so in the context of evolving best practices in concert with the increased automation.

Security

Security is a broad and deep topic (and one of deep concern across the industry). It encompasses zero trust, software supply chains, digital sovereignty and yes, AI—both as a defensive tool and an offensive weapon. But one particular topic is worth highlighting here.

Confidential computing is a security technology that protects data in use, meaning that it is protected while it is being processed. This is in contrast to traditional encryption technologies, which protect data at rest (when it is stored) and data in transit (when it is being transmitted over a network).

woman in black hoodie holding a bank card
Photo by Tima Miroshnichenko on Pexels.com

Confidential computing works by using a variety of techniques to isolate data within a protected environment, such as a trusted execution environment (TEE) or a secure enclave. It’s of particular interest when running sensitive workloads in an environment over which you don’t have full control, such as a public cloud. It’s relatively new technology but is consistent with an overall trend towards more security controls, not fewer.

RISC-V

While there are examples of open hardware designs, such as the Open Compute Project, it would be hard to make the case for there having been a successful open processor relevant to server hardware.

However, major silicon vendors and cloud providers are exploring and adopting the RISC-V free-to-license and open processor instruction set architecture (ISA). It follows a different approach from past open processor efforts. For one thing, it was open source from the beginning and is not tied to any single vendor. For another, it was designed to be extensible and implementation-agnostic. It allows for the development of new embedded technologies implemented upon FPGAs as well as the manufacture of microcontrollers, microprocessors and specialized data processing units (DPUs).

Its impact is more nascent in the server space, but it has been gaining momentum. The architecture has also seen considerable standardization work to balance the flexibility of extensions with the fragmentation they can bring. RISC-V profiles are a set of standardized subsets of the RISC-V ISA, designed so that hardware implementers and software developers can meet at an interface built around a set of extensions, with a bounded amount of flexibility, that supports well-defined categories of systems and applications.

Platform software

Perhaps one of the most intriguing questions is what happens at the lower levels of the server infrastructure software stack—roughly the operating system on a single shared memory server and the software that orchestrates workloads across many of these servers connected over a network.

It is probably easiest to start with what is unlikely to change in fundamental ways over the next decade. Linux has been around for more than 30 years; Unix more than 50, with many basic concepts dating to Multics about ten years prior.

close up view of system hacking
Photo by Tima Miroshnichenko on Pexels.com

That is a long time in the computer business. But it also argues for the overall soundness and adaptability of the basic approach taken by most modern operating systems—and the ability to evolve Linux when changes have been needed. That adaptation will continue, for example by reducing overheads through selectively offloading workloads to FPGAs and other specialized devices, including those at the edge. There are also opportunities to reduce transition overheads for performance-critical applications; the Unikernel Linux project—a joint effort involving professors, PhD students and engineers at the Boston University-based Red Hat Collaboratory—demonstrates one direction such optimizations could take.

More speculative is the form that collections of computing resources might take and how they will be managed. Over the past few decades, these resources primarily took the form of masses of x86 servers. Some specialized hardware is used for networking, storage and other functions, but CMOS process shrinks meant that for the most part, it was easier, cheaper and faster to just wait for the next x86 generation than to buy some unproven specialized design.

However, with performance gains associated with general-purpose process shrinks decelerating—and maybe even petering out at some point—specialized hardware that more efficiently meets the needs of specific workload types starts to look more attractive. The use of GPUs for ML workloads is probably the most obvious example, but is not the only one.

The challenge is that developers are mostly not increasing in number or skill. Better development tools can help to some degree, but it will also become more important to abstract away the complexity of more specialized and more diverse hardware.

What might this look like? A new abstraction/virtualization layer? An evolution of Kubernetes to better understand hardware and cloud differences, the relationship between components and how to intelligently match relatively generic code to the most appropriate hardware or cloud? Or will we see something else that introduces completely new concepts?

Wrap up

What we can say about these predictions is that they’re probably a mixed bag. Some promising technologies may fizzle a bit. Others will bring major and generally unexpected changes in their wake, and something may pop onto the field at a time and from a place where we least expect it.

Accelerating AI-driven Outcomes with Powerful Supercomputing Solutions

This article is contributed by Mak Chin Wah, Country Manager, Malaysia and General Manager, Telecoms Systems Business, South Asia, Dell Technologies

As artificial intelligence (AI) technology continues to evolve and grow in capability, it is becoming an ever-larger presence in every aspect of our lives. One needs to look no further than voice assistants, navigation apps like Waze, or rideshare apps such as Grab, which Malaysians are familiar with.

robot pointing on a wall
Photo by Tara Winstead on Pexels.com

From machine learning and deep learning algorithms that automate manufacturing, natural language processing, video analytics and more, to the use of digital twins that virtually simulate, predict and inform decisions based on real-world conditions, AI helps solve critical modern-life challenges to benefit humanity. In fact, we have digital twin technology to thank for assisting in the bioengineering of vaccines to fight COVID-19.

AI is changing not only what we do but also how we do it — faster and more efficiently.

Advancing Human Progress

For companies like Dell Technologies who are committed to advancing human progress, AI will play a big part in developing solutions to the pressing issues of the 21st century. The 2020s, in particular, are ushering in a fully data-driven period in which AI will assist organisations and industries of all sizes to accelerate intelligent outcomes.

woman holding tablet computer
Photo by Roberto Nickson on Pexels.com

Organisations can harness their AI endeavours through high-performance computing (HPC) infrastructure solutions that reduce risk, improve processing speed and deliver deeper insights. By extracting value through AI from the massive amounts of data generated across the entire IT landscape — from the core to the cloud — businesses can better tackle challenges and make discoveries to advance large-scale, global progress.

Continuing to Innovate

Through transformative innovation, customers can derive the insights needed to change the course of discovery. For example, Dell Technologies equipped Monash University Malaysia with top-of-the-line HPC and AI solutions[i] to help accelerate the university’s research and development computing capabilities at its Sunway City campus in Selangor. The solution aims to enhance and accelerate the university’s computation capabilities in solving complex problems across its significant research portfolio.

Financial services, life sciences and oil and gas exploration are just a few of the other computation-intensive applications where enhanced servers will make a difference in achieving meaningful results, for humankind and the planet.

At the heart of AI technology are essential building blocks and solutions that power these activities. For example, Dell’s existing line of PowerEdge servers has already contributed to transformational, life‑changing projects, and will continue to power human progress in this generation and the next.


The most demanding AI projects require servers that offer distinct advantages: purpose-built to deliver higher performance and even more powerful supercomputing results, yet engineered for the coming generation to support the real-time processing requirements and challenges of AI applications with ease.

In addition to helping deploy more secure and better-managed infrastructure for complex AI operations at mind-boggling modelling speeds, these transformative servers will help meet organisations’ biggest concerns in productivity, efficiency and sustainability, while cutting costs and conserving energy.

Transforming Business and Life

While organisations are in different stages with respect to their adoption of AI, the transformational impact on business and life itself can no longer be ignored. Human progress will depend on the ability of AI to make communication easier, personalise content delivery, advance medical research/diagnosis/treatments, track potential pandemics, revolutionise education and implement digital manufacturing. In Malaysia, while AI is progressively being recognised as the new general-purpose technology that will bring about revolutionary economic transformation similar to the Industrial Revolution, adoption of Industry 4.0 remains sluggish with only 15% to 20% of businesses having really embraced it. On the other hand, the government is taking this emerging technology seriously, having set out frameworks for the incorporation of AI by numerous sectors of the economy. These comprise the Malaysia Artificial Intelligence Roadmap 2021-2025 (AI-Rmap) and the Malaysian Digital Economy Blueprint (MDEB), spearheaded by the MyDIGITAL Corporation and the Economic Planning Unit.

Moving Forward

With servers and HPC at the heart of AI, modern infrastructure needs to match the unique requirements of increasingly complex and widely distributed workloads. Regardless of where a business is on the AI journey, the key to optimising outcomes is having the right infrastructure in place, ready to scale seamlessly as the business grows and positioned to take on the unexpected, unknown challenges of the future. Doing so requires the expertise – or a trusted partner that has it – to help at every stage, from planning through to implementation, making the smart server decisions that will unlock the organisation’s data capital and support AI efforts that move human progress forward.


[i] Based on “Dell Technologies helps Monash University Malaysia enhance its R&D capabilities with HPC and AI solutions” Media Alert, November 2022

Adobe Firefly, the Next-Generation AI Made for Creative Use

AI (Artificial Intelligence) generated graphics are not a new thing. These days you have tools like OpenArt and Hotpot, where you can simply type in keywords for the image you want and let the engine generate art for your use. Even before AI-generated graphics, though, the use of AI within the creative industry was nothing new. NVIDIA has used its own AI engine to write an entire symphony and even to create 3D environments using its ray-tracing engines. Adobe, too, has something it calls Sensei. The AI tool is implemented across its creative suite to understand and recognise objects better, fill in details more naturally where needed, and even edit videos, images or text quickly and efficiently. Now, they have Firefly.

Firefly is not a new AI system separate from Adobe’s Sensei. It is part of the larger Sensei generative AI family, alongside technologies like Neural Filters, Content Aware Fill, Attribution AI and Liquid Mode that are implemented across several Adobe platforms. Unlike those platform-specific implementations, though, Adobe is looking to put Firefly to work across its Creative Cloud, Document Cloud, Experience Cloud and even Adobe Express platforms.

So, what is Adobe Firefly? We hear you ask. It is, in essence, Adobe’s take on what a creative generative AI should be. Adobe is not limiting Firefly to just image generation, modification and correction. It is designed to let content creators of any kind create even more without needing to spend hundreds of hours learning a new skill. All they need to do is adopt Firefly into their workflow, and they will be able to produce content they have never been able to create before, be it images, audio, vectors, text, videos or even 3D materials. You can get different content every time with Adobe Firefly too; the possibilities, according to Adobe, are endless.

What makes Adobe’s Firefly so powerful is the entirety of Adobe’s experience and data behind it. Adobe Stock’s images and assets alone are a huge library for the AI to draw on. The implementation can also make use of openly licensed assets and public domain content when generating its output. The tool, in this case, is designed to prevent IP infringement and help users avoid future litigation.

Adobe Firefly Cover
Source: Adobe

As Firefly launches in its beta state, it will only be available as an image and text generation tool for Adobe Express, Adobe Experience Manager, Adobe Photoshop and Adobe Illustrator. Adobe plans to bring Firefly to the rest of its platforms where relevant in the future. The company is also pushing for more open standards in asset verification, which will eventually include proper categorisation and tagging of AI-generated content. Adobe also plans to make the Firefly ecosystem a more open one, with APIs that let users and customers integrate the tool with their existing workflows. For more information on Adobe’s latest generative AI, you can visit their website.

Edge Computing Benefits and Use Cases

From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed and analyzed. 

At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end-users. Where data has traditionally lived in the data centre or cloud, there are benefits and innovations that can be realized by processing the data these devices generate closer to where it is produced.

This is where edge computing comes in.

4 benefits of edge computing

As the number of computing devices has grown, our networks simply haven’t kept pace with the demand, causing applications to be slower and/or more expensive to host centrally.

Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible.
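
To make the bandwidth point concrete, here is a minimal back-of-the-envelope sketch in Python; the camera count, per-stream bitrate and the fraction of footage worth forwarding are all illustrative assumptions, not figures from Red Hat:

```python
# Back-of-the-envelope comparison: ship every video stream to a central cloud
# versus filtering at the edge first. All numbers are illustrative assumptions.
CAMERAS = 200                # assumed number of video sources at one site
BITRATE_MBPS = 4             # assumed per-camera stream bitrate in Mbit/s
INTERESTING_FRACTION = 0.05  # assumed share of footage worth sending upstream

centralised_mbps = CAMERAS * BITRATE_MBPS
edge_filtered_mbps = centralised_mbps * INTERESTING_FRACTION

print(f"Centralised upload:   {centralised_mbps:.0f} Mbit/s sustained")
print(f"Edge-filtered upload: {edge_filtered_mbps:.0f} Mbit/s sustained")
# With these assumptions the uplink requirement drops from 800 Mbit/s to 40 Mbit/s,
# the kind of saving that makes previously impractical applications viable.
```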

1. Improve performance

When applications and data are hosted in centralised data centres and accessed via the internet, speed and performance can suffer from slow network connections. Moving things out to the edge reduces network-related performance and availability issues, although it does not eliminate them entirely.

2. Place applications where they make the most sense

By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations that may have intermittent connectivity, including geographically remote offices and on vehicles such as ships, trains and aeroplanes.

hands gb5632839e 1280
Source: Pixabay

3. Simplify meeting regulatory and compliance requirements

Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in data centres or the cloud.

With edge computing, however, data can be collected, stored, processed, managed and even scrubbed in place, making it much easier to meet different locales’ regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from a video before being sent back to the data centre.
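
As a rough illustration of that idea, the sketch below blurs detected faces in each video frame on the edge device before anything is forwarded; it assumes OpenCV (cv2) is installed, uses its bundled Haar cascade face detector, and the input path is a placeholder rather than a real camera pipeline:

```python
# Minimal sketch: scrub faces from video frames at the edge before upload.
# Assumes OpenCV (opencv-python) is installed; the input path is a placeholder.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture("camera_feed.mp4")  # placeholder local video source
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Blur each detected face region in place so no PII leaves the site.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(frame[y:y + h, x:x + w], (51, 51), 0)
    # In a real deployment the scrubbed frame would now be forwarded to the
    # data centre; here it is simply discarded after processing.
capture.release()
```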

4. Enable AI/ML applications

Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.

But AI/ML applications often require processing, analyzing and responding to enormous quantities of data which can’t reasonably be achieved with centralized processing due to network latency and bandwidth issues. Edge computing allows AI/ML applications to be deployed close to where data is collected so analytical results can be obtained in near real-time.
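
A minimal sketch of that pattern follows: readings are scored locally against a simple statistical baseline (standing in for a trained model), and only anomalous events would cross the network. The sensor source and thresholds are assumptions for illustration:

```python
# Minimal sketch of edge-side inference: score sensor readings locally and
# only forward anomalies upstream. The sensor and thresholds are assumptions.
import random
import statistics

def read_sensor() -> float:
    # Placeholder for a real IIoT sensor read (e.g. a temperature value).
    return random.gauss(70.0, 2.0)

# Establish a local baseline from an initial window of readings.
window = [read_sensor() for _ in range(100)]
mean, stdev = statistics.mean(window), statistics.stdev(window)

for _ in range(1_000):
    value = read_sensor()
    if abs(value - mean) > 3 * stdev:
        # Only this small event record would be sent to the data centre,
        # rather than the full raw sensor stream.
        print(f"ALERT: anomalous reading {value:.2f}")
```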

3 Edge Computing Scenarios

Red Hat focuses on three general edge computing scenarios, although these often overlap in each unique edge implementation.

1. Enterprise edge

Enterprise edge scenarios feature an enterprise data store at the core, in a data centre or as a cloud service. The enterprise edge allows organizations to extend their application services to remote locations.

nasa Q1p7bh3SHj8 unsplash
Photo by NASA on Unsplash

Chain retailers are increasingly using an enterprise edge strategy to offer new services, improve in-store experiences and keep operations running smoothly. Individual stores aren’t equipped with large amounts of computing power, so it makes sense to centralize data storage while extending a uniform app environment out to each store.

2. Operations edge

Operations edge scenarios concern industrial edge devices, with significant involvement from operational technology (OT) teams. The operations edge is a place to gather, process and act on data on-site.

Operations edge computing is helping some manufacturers harness artificial intelligence and machine learning (AI/ML) to solve operational and business efficiency issues through real-time analysis of data provided by Industrial Internet of Things (IIoT) sensors on the factory floor.

3. Provider edge

Provider edge scenarios involve both building out networks and offering services delivered with them, as in the case of a telecommunications company. The service provider edge supports reliability, low latency and high performance with computing environments close to customers and devices.

Service providers such as Verizon are updating their networks to be more efficient and reduce latency as 5G networks spread around the world. Many of these changes are invisible to mobile users, but allow providers to add more capacity quickly while reducing costs.

3 edge computing examples

Red Hat has worked with a number of organizations to develop edge computing solutions across a variety of industries, including healthcare, space and city management.

1. Healthcare

Clinical decision-making is being transformed through intelligent healthcare analytics enabled by edge computing. By processing real-time data from medical sensors and wearable devices, AI/ML systems are aiding in the early detection of a variety of conditions, such as sepsis and skin cancers.

cdc p33DqVXhWvs unsplash
Photo by CDC on Unsplash

2. Space

NASA has begun adopting edge computing to process data close to where it’s generated in space rather than sending it back to Earth, which can take minutes to days to arrive.

As an example, mission specialists on the International Space Station (ISS) are studying microbial DNA. Transmitting that data to Earth for analysis would take weeks, so they’re experimenting with doing those analyses onboard the ISS, speeding “time to insight” from months to minutes.

3. Smart cities

City governments are beginning to experiment with edge computing as well, incorporating emerging technologies such as the Internet of Things (IoT) along with AI/ML to quickly identify and remediate problems impacting public safety, citizen satisfaction and environmental sustainability.

Red Hat’s approach to edge computing

Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability and manageability.

Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match a combination of hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations. The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.

Google Looks to “MUM” to Enhance Search

Google has been working on creating a better, more unified experience with their bread and butter – search. The tech giant is looking for more contextually relevant search as they move forward. To do this, they are turning to MUM, the Multitask Unified Model, to bring more relevance to search results.


The new Multitask Unified Model (MUM) allows Google’s search algorithm to understand multiple forms of input. It can draw context from text, speech, images and even video. This, in turn, allows the search engine to return more contextually relevant results. It will also allow the search engine to understand searches in a more natural language and make sense of more complex searches. When they first announced MUM, the new enhancement could understand over 75 languages. MUM is much more powerful than the existing algorithm.

Contextual Search is the New Normal

Search On Lens Desktop

Barely two months after the announcement, Google has begun implementing MUM in some of its most-used apps and features. In the coming months, Google Search will be undergoing a major overhaul. The company is creating a new, more visual search experience, so users will be seeing more images and graphics in search results. Thanks to MUM, you will also be able to refine and broaden searches with a single click, zooming in to finer details such as specific techniques, or zooming out to get a broader picture of your search. In their announcement, Google used the example of acrylic painting: from the search results, they were able to zoom in to specific techniques commonly used in acrylic painting or get a broader picture of how the art form started.


The search engine uses data such as language and even user behaviour, in addition to context, to recommend broadening or narrowing searches. Google is applying this to YouTube as well, and hopes to expand the search context to include topics mentioned in YouTube videos later this year. Contextual and multitask search is also making its way to Google Lens, which will be able to make sense of both visual and text data at the same time, and to Chrome. Don’t expect the new Lens experience too soon, though; the rollout is expected in 2022 after internal testing.


Context is also making search more “shoppable”. Google is allowing users to zoom in to specifics when searching. For instance, if you’re searching for fashion apparel, you will be able to narrow your search based on design and colour, or use the context of the original item to search for something else entirely. In addition, Google’s Shopping Graph will allow users to narrow searches with an “in stock” filter. This particular enhancement will be available in select countries only.

Expanding Search to Make A Positive Impact

Google isn’t just focusing on MUM for its own benefit. The company has been busy bringing its technology to create change too. It’s working on expanding contextual data as well as A.I. implementation in addressing environmental and social issues. While this is nothing new, some of the new improvements could impact us more directly than ever.

Environmental Insights for Greener Cities

One of the biggest things that could make a huge impact is Google’s Environmental Insights. While this isn’t brand new, the company is looking to make the feature more readily available to cities to help them be greener. Environmental Insights Explorer will allow municipalities and city councils to make decisions based on data from A.I. and Google’s Earth Engine.

Search On Tree Canopy Insights

With this data, cities and municipalities will be able to visualise tree density within their jurisdictions and plan where to add trees and greenery. This will help tremendously in lowering city temperatures and will also support carbon-neutrality efforts. The feature is expanding to over 100 cities, including Yokohama and Sydney, this year.

Dealing with Natural Disasters with Actionable Insights

Google Maps will be getting more actionable insights when it comes to natural disasters. Of course, Google being an American company, its first feature is naturally more relevant to the U.S. California and other areas have been hit by wildfires of increasing severity in recent years. Other regions, such as Australia, Canada and parts of the African continent, are also experiencing increasingly deadly wildfires. It is increasingly apparent that the public needs data on these wildfires.

Search On Wildfire Mapping

As such, Google Maps will be getting a layer that allows users to see the boundaries of active wildfires. These boundaries are updated every 15 minutes, allowing users to avoid affected areas. The data will also help authorities coordinate evacuations and manage the response. Google is also running a similar pilot for flash flooding in India.

Simplifying Addresses

Google is expanding and simplifying one of its largest social projects – Plus Codes. The project, which was announced just under a year ago, is becoming more accessible through a new app called Address Maker. Address Maker builds on Plus Codes but gives users and organisations a simpler way to create new addresses, allowing governments and NGOs to create addresses at scale more easily.
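
Plus Codes themselves are an open standard, so the core encoding step can be illustrated with Google’s open-source openlocationcode Python package; the coordinates below are an arbitrary example, not taken from Address Maker itself:

```python
# Minimal sketch: derive a Plus Code from latitude/longitude using Google's
# open-source openlocationcode package (pip install openlocationcode).
from openlocationcode import openlocationcode as olc

latitude, longitude = 3.1390, 101.6869  # arbitrary example coordinates (Kuala Lumpur)

code = olc.encode(latitude, longitude)  # the full global Plus Code string
area = olc.decode(code)                 # the small bounding box that code represents

print(f"Plus Code: {code}")
print(f"Code covers ~{area.latitudeHi - area.latitudeLo:.6f} degrees of latitude")
```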