
Cyberattackers are Using the Cloud too – Are Malaysian Enterprises Prepared?

Cloud technology has been an integral component in paving the way for organizations across industries to undergo digital transformation. Globally, 50% of organizations are adopting a cloud-native approach to support both employees and customers, and the number of connected devices is expected to climb to 55.9 billion by 2025.

In Malaysia, we’ve also seen swift progress in cloud adoption – with the most recent milestone being the upgrade of the Malaysian Government’s Public Sector Data Centre (PDSA) into a hybrid cloud service called MyGovCloud. The pace of cloud adoption is expected to accelerate following the government’s decision to provide conditional approval to Microsoft, Google, Amazon, and Telekom Malaysia to build and manage hyperscale data centres and cloud services in Malaysia.

With cloud-based systems becoming a key component of organizations’ operations and infrastructures, malicious actors have been turning to the cloud, taking advantage of weaknesses in cloud security to perform various malicious activities — leading to new complexity regarding effective attack surface risk management. 

Why Malaysian Businesses Need Better Risk Management

The shift to the cloud and the dramatic increase in connectivity give malicious actors new and often unmanaged attack vectors to target.

Photo by Soumil Kumar on Pexels.com

As revealed in Trend Micro’s semi-yearly Cyber Risk Index (CRI) report, 67% of organizations in Malaysia report they are likely to be breached in the next 12 months – indicating a dire need for local organizations to be better prepared in managing cyber risks.

To better reduce the risk of cyberattacks, enterprises must first understand how cyberattackers are exploiting the cloud for their own benefit and bridge security gaps by proactively anticipating data breaches.

One of the most common ways organizations leave themselves vulnerable to attack is through cloud misconfigurations. While misconfigurations might seem straightforward and avoidable, they are the most significant risk to cloud environments – making up 65 to 70% of all security challenges in the cloud. This is especially true for organizations that were pushed to migrate to the cloud quickly once remote work became the new norm.

Photo by Pixabay on Pexels.com

Malicious actors are also turning to low-effort but high-impact attack strategies to gain access to cloud applications and services. On top of exploiting new vulnerabilities in an enterprise’s network, cyberattackers will persistently exploit known vulnerabilities from past years, as many enterprises still lack full visibility into environments that are left unpatched.

How Malaysian Businesses can Stay Prepared

Since criminals can execute their attacks more effectively, they can also target a larger number of organizations, potentially leading to an increase in overall attacks. Organizations now have much less time to detect and respond to these incidents, and this will be compounded as the business model of cybercriminals matures further.

With that in mind, enterprises must strengthen their security posture foundations to defend against evolving cyberthreats. Key cybersecurity strategies to adopt include:

Automating everything

We live in a world where skills shortages and commercial demands have combined to expose organizations to escalating levels of cyber risk. In the cloud, this exposure leads to misconfigurations and the risk of knock-on data breaches, as well as unpatched assets exposed to the latest exploits. The bad news is that cybercriminals and nation states are getting better at scanning for systems that are vulnerable in this way.

Better digital attack surface management starts with the right tooling. Solutions such as Trend Micro Cloud One enable and automate platform-agnostic cloud security administration and cloud threat detection and response, which can help security teams improve the efficiency of threat investigation and response, as well as reduce the risk of a security breach.

Empowering employees with resources and tools to ensure cloud operational excellence  

Many enterprises are already well on their way in the world of cloud, with more and more security teams using cloud infrastructure services and developing cloud-native applications. However, this can often be a steep learning curve for cloud architects and developers – leaving gaps in protection, compliance, and visibility.

Photo by cottonbro on Pexels.com

To improve the situation, organizations need to provide resources to employees to ensure that the cloud service configurations adhere to industry best practices and compliance standards. One such way is to use tools that automatically scan cloud services against best practices, relieving teams from having to manually check for misconfigurations.
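To make the idea concrete, here is a minimal sketch of such an automated check, written against the AWS SDK for Python (boto3): it flags any S3 bucket whose public access block is missing or incomplete. Commercial tools run hundreds of checks like this across services; the bucket names and credentials here are whatever your environment provides.

```python
# Minimal sketch: flag S3 buckets that lack a full public-access block.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_locked_down(name: str) -> bool:
    """Return True only if all four public-access-block settings are enabled."""
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
    except ClientError:
        # No public-access-block configuration at all counts as a finding.
        return False
    return all(cfg.values())

for bucket in s3.list_buckets()["Buckets"]:
    if not bucket_is_locked_down(bucket["Name"]):
        print(f"[FINDING] {bucket['Name']}: public access block missing or incomplete")
```

Run on a schedule or on every infrastructure change, a report like this turns a manual audit into a continuous control.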

Adopt a Shared Responsibility Model

Clouds aren’t inherently secure or insecure; they’re as secure as you make them. Instead of asking “who is more secure – AWS, Azure, or Google Cloud?” ask “what have I done to make all of my clouds as secure as I need them to be?”

Security in the cloud works according to the Shared Responsibility Model, which dictates who is responsible for each operational task in the cloud; security is simply a subset of those tasks. Security self-service for the cloud is fully here in all its forms, and understanding this model is critical to success in the cloud.

While increased cloud adoption allows organizations to be more agile, scalable, and cost-efficient, the benefits of using cloud services and technologies are no longer just reaped by legitimate companies, but also cybercriminals who keep up with the trend. As criminals accelerate attacks and expand their capabilities, businesses must adopt a solid cybersecurity strategy to stay a step ahead.

Automation in an App-centric, Hybrid Cloud World

The past few years have shown that enterprises want their applications, data, and resources located wherever it makes the most sense for their business and operating models, which means that automation needs to be able to execute anywhere. Automation across platforms and environments needs a common mechanism and an automation-as-code approach, supported by communities of practice and even automation architects or committees to help define and deliver on the strategy.

Per a recent IDC Market Forecast – Worldwide IT Automation and Configuration Management Software Forecast, 2021–2025[i] – “state-of-the-art system management software tools will be needed to keep up with increasing operational complexity, particularly in organizations that cannot add headcount to keep up with requirements.” Managing this overall complexity is no easy feat. As IT and business needs continue to evolve, it’s no longer an issue of “if” organizations turn to automation, but “which” automation tool they choose.

Photo by Negative Space on Pexels.com

This is where the power of open source technology excels; per the same IDC study, “open source–driven innovation helped fuel the growth of newer players and technologies.” With a community-based, consistent approach to automation, subject matter experts write the integrations and share them with other teams, building internal communities of practice that can adapt to changes and new deployments, allowing enterprises to get to the cloud at an accelerated pace.

This is how Red Hat, through Red Hat Ansible Automation Platform, approaches automation, delivering tailored innovation for individual platforms combined with a standard, cross-framework language. With the continued shift to consuming public cloud services and resources, the key is to have a platform that allows you to harness the same skills, language and taxonomy that your teams have been using to drive efficiency and savings in on-premises implementations. This approach enables enterprises to achieve what they want, where they want to, in clouds like Amazon Web Services and Microsoft Azure.

Endorsing agility at the edge

We know that enterprises and their needs do not end with cloud automation. Assets at the edge are now just as important as, and arguably even more difficult to manage than, those in the data center. Edge computing is critical to business, making automation at the edge non-negotiable. Making all of your existing processes and components available through a tool like Ansible Automation Platform turns edge management from a complex, multi-person task into one where common components and workflows are used for management and integration.
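As an illustration of what that looks like in practice, the sketch below drives a single shared playbook across an inventory of edge devices using ansible-runner, the open source Python interface to Ansible. The playbook name, inventory path, and directory layout are illustrative assumptions, not a prescribed structure.

```python
# Minimal sketch: run one common playbook against an inventory of edge sites.
# Assumes the ansible-runner package is installed (pip install ansible-runner)
# and that ./edge follows the standard runner layout, with project/update.yml
# and inventory/hosts listing the edge devices (all names are placeholders).
import ansible_runner

result = ansible_runner.run(
    private_data_dir="./edge",    # runner working directory
    playbook="update.yml",        # the same workflow for every site
    inventory="inventory/hosts",  # edge devices, grouped by location
)

print(f"status: {result.status}, return code: {result.rc}")
for event in result.events:
    # Each event describes one task result on one host.
    if event.get("event") == "runner_on_failed":
        data = event.get("event_data", {})
        print(f"FAILED on {data.get('host')}: {data.get('task')}")
```

The point is less the specific library than the pattern: one definition of the change, executed identically at every site, with failures surfaced centrally.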

Photo by cottonbro on Pexels.com

Ansible automation becomes the connective tissue in an IT organization, bridging applications and their dependent infrastructure, and maintaining technology at the edge. IT staff can rely on automation to roll out new services at the edge to meet customer needs with speed, scale, and consistency.

Connecting it all through automation

We often refer to Ansible Automation Platform as the glue between people, process and technology. Automation allows for greater emphasis on strengthening the whole system, rather than just the sum of its parts. The benefits automation can bring aren’t always simple to achieve, but the right framework makes it less challenging. When there’s success at a high level, new ways of working become reality, along with resiliency and adaptability. This formula is precisely what organizations need as they face new challenges to drive modernization and transformation.


[i] IDC Market Forecast, Worldwide IT Automation and Configuration Management Software Forecast, 2021–2025, doc #US47434321, February 2021.

Edge Computing Benefits and Use Cases

From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed and analyzed. 

At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end-users. Where data has traditionally lived in the data centre or cloud, there are benefits and innovations that can be realized by processing the data these devices generate closer to where it is produced.

This is where edge computing comes in.

4 benefits of edge computing

As the number of computing devices has grown, our networks simply haven’t kept pace with the demand, causing applications to be slower and/or more expensive to host centrally.

Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible.
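A rough back-of-the-envelope calculation shows why. All of the figures below (camera count, bitrates, uplink size, latency) are illustrative assumptions, but the shape of the result is general: backhauling raw data quickly saturates a site’s uplink, while processing at the edge ships only results.

```python
# Back-of-the-envelope: backhaul raw video to the cloud vs. process at the edge.
# All figures are illustrative assumptions.
cameras = 20
bitrate_mbps = 8       # per camera, compressed video
uplink_mbps = 100      # the site's total uplink
round_trip_ms = 80     # site <-> cloud network latency

raw_backhaul_mbps = cameras * bitrate_mbps
print(f"Raw backhaul needed: {raw_backhaul_mbps} Mbps "
      f"({raw_backhaul_mbps / uplink_mbps:.0%} of the uplink)")

# At the edge, only small detection events leave the site.
events_per_s = 5       # detections per second, all cameras combined
event_size_kb = 2      # small JSON payload per detection
edge_backhaul_mbps = events_per_s * event_size_kb * 8 / 1000
print(f"Edge backhaul needed: {edge_backhaul_mbps:.2f} Mbps")

# Local decisions also skip the round trip entirely.
print(f"A cloud decision adds at least {round_trip_ms} ms per action; "
      f"an edge decision adds ~0 ms of network delay")
```

At these assumed numbers, the raw feeds would need 160 Mbps, more than the entire uplink, while the edge-processed version needs well under 1 Mbps.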

1. Improve performance

When applications and data are hosted on centralized data centres and accessed via the internet, speed and performance can suffer from slow network connections. By moving things out to the edge, network-related performance and availability issues are reduced, although not entirely eliminated.

2. Place applications where they make the most sense

By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations that may have intermittent connectivity, including geographically remote offices and on vehicles such as ships, trains and aeroplanes.

Source: Pixabay

3. Simplify meeting regulatory and compliance requirements

Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in data centres or the cloud.

With edge computing, however, data can be collected, stored, processed, managed and even scrubbed in place, making it much easier to meet different locales’ regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from a video before it is sent back to the data centre.
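As a sketch of that face-scrubbing example, the snippet below uses OpenCV’s bundled Haar-cascade face detector to blur faces in each frame before the video ever leaves the site. A production system would use a stronger detector and also handle audio and metadata; the file names are placeholders.

```python
# Minimal sketch: blur faces at the edge before video is shipped upstream.
# Assumes opencv-python is installed; file names are placeholders.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("camera_feed.mp4")
out = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if out is None:
        h, w = frame.shape[:2]
        out = cv2.VideoWriter("scrubbed.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), 25.0, (w, h))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in detector.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy blur.
        roi = frame[y:y+fh, x:x+fw]
        frame[y:y+fh, x:x+fw] = cv2.GaussianBlur(roi, (51, 51), 0)
    out.write(frame)

cap.release()
if out is not None:
    out.release()
```

Only the scrubbed output ever needs to cross the network, which is precisely the compliance benefit described above.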

4. Enable AI/ML applications

Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.

But AI/ML applications often require processing, analyzing and responding to enormous quantities of data which can’t reasonably be achieved with centralized processing due to network latency and bandwidth issues. Edge computing allows AI/ML applications to be deployed close to where data is collected so analytical results can be obtained in near real-time.

3 edge computing scenarios

Red Hat focuses on three general edge computing scenarios, although these often overlap in each unique edge implementation.

1. Enterprise edge

Enterprise edge scenarios feature an enterprise data store at the core, in a data centre or as a cloud service. The enterprise edge allows organizations to extend their application services to remote locations.

Photo by NASA on Unsplash

Chain retailers are increasingly using an enterprise edge strategy to offer new services, improve in-store experiences and keep operations running smoothly. Individual stores aren’t equipped with large amounts of computing power, so it makes sense to centralize data storage while extending a uniform app environment out to each store.

2. Operations edge

Operations edge scenarios concern industrial edge devices, with significant involvement from operational technology (OT) teams. The operations edge is a place to gather, process and act on data on-site.

Operations edge computing is helping some manufacturers harness artificial intelligence and machine learning (AI/ML) to solve operational and business efficiency issues through real-time analysis of data provided by Industrial Internet of Things (IIoT) sensors on the factory floor.

3. Provider edge

Provider edge scenarios involve both building out networks and offering services delivered with them, as in the case of a telecommunications company. The service provider edge supports reliability, low latency and high performance with computing environments close to customers and devices.

Service providers such as Verizon are updating their networks to be more efficient and reduce latency as 5G networks spread around the world. Many of these changes are invisible to mobile users, but allow providers to add more capacity quickly while reducing costs.

3 edge computing examples

Red Hat has worked with a number of organizations to develop edge computing solutions across a variety of industries, including healthcare, space and city management.

1. Healthcare

Clinical decision-making is being transformed through intelligent healthcare analytics enabled by edge computing. By processing real-time data from medical sensors and wearable devices, AI/ML systems are aiding in the early detection of a variety of conditions, such as sepsis and skin cancers.

Photo by CDC on Unsplash

2. Space

NASA has begun adopting edge computing to process data close to where it’s generated in space rather than sending it back to Earth, which can take minutes to days to arrive.

As an example, mission specialists on the International Space Station (ISS) are studying microbial DNA. Transmitting that data to Earth for analysis would take weeks, so they’re experimenting with doing those analyses onboard the ISS, speeding “time to insight” from months to minutes.

3. Smart cities

City governments are beginning to experiment with edge computing as well, incorporating emerging technologies such as the Internet of Things (IoT) along with AI/ML to quickly identify and remediate problems impacting public safety, citizen satisfaction and environmental sustainability.

Red Hat’s approach to edge computing

Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability and manageability.

Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match a combination of hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations. The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.

Compatibility, Sound Preference, or Location: Which Samsung Soundbar is the one for you?

When it comes to selecting a sound system, there are several variables to consider, depending on your goals and needs: how you spend your leisure time, whether you’re watching movies or playing video games, and even how compatible the system is with the rest of your entertainment setup. For some, the visual may be the most important factor, while for others, the sound quality is what makes their entertainment come alive.

Curious to know which is most suitable for you? Here are Samsung’s recommendations on which soundbar you should go for based on your needs:

HW-A550 A-Series Soundbar
  • For gamers: If you own a gaming console and have been looking for an immersive sound experience, consider the Q-Series Soundbar with Dolby Atmos and DTS:X. With the ultimate 3D sound coming at you from every direction, you’ll feel like you’ve stepped into your TV and are experiencing the game in first person. The Q800A Soundbar comes with three channels, one subwoofer channel, and two up-firing channels – which ultimately means that the sound moves around you based on the action on your screen. Not only does it bring your games to life, it also takes playing music to a whole other level: with the ‘Tap Sound’ feature, simply tap your phone on the soundbar and it will recognize your device and play the song you’re currently playing on your phone.
  • Seamless compatibility with other devices: If you’re looking for an all-rounder soundbar that can jive with all of your devices, you can consider the S-Series All-in-One Soundbar with Acoustic Beam and built-in Bixby Voice Assistant. You won’t have to worry about where you’re placing it in your house as it’s designed to fill the room with immersive sound and improved audio quality with its dual-sided horn speakers and Samsung’s Acoustic Beam® technology.
Q800A Soundbar with Dolby Atmos and DTS:X
  • All about the bass: For music lovers, bass is what makes a sound system great. If blasting music and dancing to your heart’s content is your thing, go for the A-Series Soundbar with Dolby Digital 5.1 and DTS Virtual:X for immersive surround sound simulation. Imagine rewatching one of your favourite live concerts with Powerful Bass Boost from the soundbar’s very own wireless subwoofer. The A-Series Soundbar can also connect to two different mobile devices simultaneously, allowing you and your friends to switch between your favourite playlists at any given time.

2022 and Beyond – Technologies that will Change the Dialogue

We are living in a do-anything-from-anywhere economy enabled by an exponentially expanding data ecosystem. It’s estimated that 65% of global GDP will be digital next year (2022). This influx of data presents both opportunities and challenges. After all, success in our digital present and future relies on our ability to secure and maintain increasingly complex IT systems. Here I’ll examine both near-term and long-term predictions that address the way the IT industry will deliver the platforms and capabilities to harness this data to transform our experiences at work, at home and in the classroom.

What to look for in 2022:  

The Edge discussion will separate into two focus areas: edge platforms that provide a stable pool of secure capacity for diverse edge ecosystems, and software-defined edge workloads/software stacks that extend application and data systems into real-world environments. This approach to edge, where we separate the edge platforms from the edge workloads, is critical: if each edge workload creates its own dedicated platform, we will see a proliferation of edge infrastructure and unmanageable infrastructure sprawl.

Photo by cottonbro on Pexels.com

Imagine an edge environment where you deploy an edge platform that presents compute, storage, I/O and other foundational IT capacities in a stable, secure, and operationally simple way. As you extend various public and private cloud data and applications pipelines to the edge along with local IoT and data management edges, they can be delivered as software-defined packages leveraging that common edge platform of IT capacity. This means that your edge workloads can evolve and change at software speed because the underlying platform is a common pool of stable capacity.

We are already seeing this shift today. Dell Technologies currently offers edge platforms for all the major cloud stacks, using common hardware and delivery mechanisms. As we move into 2022, we expect these platforms to become more capable and pervasive. We are already seeing most edge workloads – and even most public cloud edge architectures – shift to software-defined architectures using containerisation and assuming the standard availability of capabilities such as Kubernetes as the dial tone. This combination of modern edge platforms and software-defined edge systems will become the dominant way to build and deploy edge systems in the multi-cloud world.
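To illustrate the “workload as a software-defined package” idea, here is a minimal sketch that uses the official Kubernetes Python client to roll the same containerised workload out to several edge clusters, treating each as an instance of the common platform. The context names, namespace, and image are illustrative assumptions.

```python
# Minimal sketch: push one software-defined workload to many edge clusters.
# Assumes the kubernetes package is installed and kubeconfig defines one
# context per edge site; context and image names are placeholders.
from kubernetes import client, config

EDGE_CONTEXTS = ["edge-site-a", "edge-site-b", "edge-site-c"]

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="sensor-gateway"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "sensor-gateway"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "sensor-gateway"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="gateway",
                    image="registry.example.com/sensor-gateway:1.4.2"),
            ]),
        ),
    ),
)

for ctx in EDGE_CONTEXTS:
    # The same declarative workload lands on each edge platform instance.
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"deployed sensor-gateway to {ctx}")
```

Because the workload definition is just data, it can evolve at software speed while the underlying platform stays a stable pool of capacity.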

The opening of the private mobility ecosystem will accelerate, with more cloud and IT industries involved on the path to 5G. Enterprise use of 5G is still early; in fact, today 5G is not significantly different or better than WiFi in most enterprise use cases. This will change in 2022 as more modern, capable versions of 5G become available to enterprises. We will see higher-performance, more scalable 5G along with new 5G features such as Ultra-Reliable Low-Latency Communications (URLLC) and Massive Machine-Type Communications (mMTC), with the dialogue increasingly driven by players beyond traditional telecoms (think: open-source ecosystems, infrastructure companies, non-traditional telecom players).

Photo by Miguel Á. Padriñán on Pexels.com

More importantly, we expect the ecosystem delivering new and more capable private mobility to expand to include IT providers such as Dell Technologies, as well as public cloud providers and even new open-source ecosystems focused on accelerating the open 5G ecosystem.

Edge will become the new battleground for data management as data management becomes a new class of workload. The data management ecosystem needs an edge. The modern data management industry began its journey on public clouds processing and analysing non-real-time centralised data. As the digital transformation of the world accelerates, it has become clear that most of the data in the world will be created and acted on outside of centralised data centers. We expect that the entire data management ecosystem will become very active in developing and utilising edge IT capacity as the ingress and egress of their data pipelines but will also utilise edges to remotely process and digest data.

As the data management ecosystem extends to the edge this will dramatically increase the number of edge workloads and overall edge demand. This correlates to our first prediction on edge platforms as we expect these data management edges to be modern software-defined offerings. Data management and the edge will increasingly converge and reinforce each other. IT infrastructure companies, like Dell Technologies, have the unique opportunity to provide the orchestration layer for edge and multi-cloud by delivering an edge data management strategy.

The security industry is now moving from discussion of emerging security concerns to a bias toward action. Enterprises and governments are facing threats of greater sophistication and impact on revenue and services. At the same time, the attack surface that hackers can exploit is growing based on the accelerated trend in remote work and digital transformation. As a result, the security industry is responding with greater automation and integration. The industry is also pivoting from automated detection to prevention and response with a focus on applying AI and machine learning to speed remediation. This is evidenced by industry initiatives like SOAR (Security Orchestration, Automation and Response), CSPM (Cloud Security Posture Management) and XDR (Extended Detection and Response). Most importantly, we are seeing new efforts such as the Open Source Security Foundation (OpenSSF) under the Linux Foundation ramp up the coordination and active involvement of the IT, telecom and semiconductor industries.

Photo by Tima Miroshnichenko on Pexels.com

Across all four of these areas – edge, private mobility, data management and security – there is a clear need for a broad ecosystem where both public cloud and traditional infrastructure are integrated. We are now clearly in a multi-cloud, distributed world where the big challenges can no longer be solved by a single data center, cloud, system or technology.

What to look for beyond 2022:

Quantum Computing – Hybrid quantum/classical compute will take center stage, providing greater access to quantum. In 2022 we expect two major industry consensuses to emerge. First, the industry will converge on the view that the inevitable topology of a quantum system is a hybrid quantum computer in which the quantum hardware, or quantum processing units (QPUs), are specialised compute systems that look like accelerators and focus on specific quantum mathematics and functions. The QPUs will be surrounded by conventional compute systems to pre-process the data, run the overall process and even interpret the output of the QPUs.

Early real-world quantum systems are all following this hybrid quantum model, and we see a clear path where the collaboration of classical and quantum compute is inevitable. The second major consensus is that quantum simulation using conventional computing will be the most cost-effective and accessible way to get quantum systems into the hands of our universities, data science teams and researchers. In fact, Dell and IBM have already announced significant work in making quantum simulation available to the world.
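To give a feel for what “quantum simulation using conventional computing” means, here is a tiny state-vector simulation of a single qubit in plain NumPy: apply a Hadamard gate to |0⟩, then sample measurements. This is a toy example for intuition, not a claim about any vendor’s simulator.

```python
# Tiny state-vector simulation: one qubit, a Hadamard gate, 10,000 measurements.
import numpy as np

ket0 = np.array([1.0, 0.0])                   # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0             # (|0> + |1>) / sqrt(2), an equal superposition
probs = np.abs(state) ** 2   # Born rule: measurement probabilities [0.5, 0.5]

rng = np.random.default_rng(seed=7)
samples = rng.choice([0, 1], size=10_000, p=probs)
print(f"P(0) ~ {np.mean(samples == 0):.3f}, P(1) ~ {np.mean(samples == 1):.3f}")

# An n-qubit simulator generalises this: the state vector holds 2**n amplitudes,
# which is why classical simulation gets expensive quickly but remains an
# accessible on-ramp for small systems.
```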

Automotive – The automotive ecosystem will rapidly shift focus from a mechanical ecosystem to a data and compute industry. The automotive industry is transforming at several levels. We are seeing a shift from internal combustion engines to electrified vehicles, resulting in radical simplification of the physical supply chain. We are also seeing a significant expansion of software and compute content within our automobiles via ADAS and autonomous vehicle efforts. Finally, we are seeing the automotive industry become a data-driven industry for everything from entertainment to safety to major disruptions such as Car-as-a-Service and automated delivery.

All of this says that the automotive and transportation industries are beginning a rapid transition to be driven by software, compute and data. We have seen this in other industries such as telecom and retail and in every case the result is increased consumption of IT technology. Dell is actively engaged with most of the world’s major automotive companies in their early efforts, and we expect 2022 to continue their evolution towards digital transformation and deep interaction with IT ecosystems. 

Photo by Jonas Leupe on Unsplash

Digital Twins – Digital twins will become easier to create and consume as the technology is more clearly defined with dedicated tools. While gaining in awareness, digital twins are still a nascent technology with few real examples in production. Over the next several years, we’ll see digital twins become easier to create and consume as we define standardised frameworks, solutions and platforms. Making digital twin ideas more accessible will enable enterprises to provide enhanced analytics and predictive models to accelerate digital transformation efforts. Accelerated standardisation and availability of solutions and frameworks will bring deployment and investment costs down and make digital twin adoption more mainstream. Digital twins will be a core driver of digital transformation 3.0, combining measured and modelled/simulated worlds for direct business value across industry verticals.
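The closing idea, combining measured and modelled worlds, can be shown in miniature. Below is a toy digital twin of a heated tank: a simple physics model predicts the temperature, and each incoming sensor reading nudges the twin back toward reality. Every constant here is an illustrative assumption.

```python
# Toy digital twin: a modelled tank temperature corrected by live measurements.
# All constants are illustrative assumptions.

class TankTwin:
    def __init__(self, temp_c: float = 20.0):
        self.temp_c = temp_c  # the twin's modelled state

    def step(self, heater_kw: float, dt_s: float) -> None:
        """Advance the physics model: heat added minus losses to ambient."""
        heating = 0.5 * heater_kw * dt_s              # degrees gained
        cooling = 0.01 * (self.temp_c - 20.0) * dt_s  # losses to 20 C ambient
        self.temp_c += heating - cooling

    def correct(self, measured_c: float, gain: float = 0.3) -> None:
        """Blend in a sensor reading so the twin keeps tracking the real asset."""
        self.temp_c += gain * (measured_c - self.temp_c)

twin = TankTwin()
for measured in [21.0, 23.5, 26.2, 28.8]:  # readings arriving once per second
    twin.step(heater_kw=5.0, dt_s=1.0)     # predict from the model...
    twin.correct(measured)                 # ...then correct against reality
    print(f"twin estimate: {twin.temp_c:.1f} C (sensor read {measured} C)")
```

Real digital twins replace this toy model with high-fidelity simulation, but the predict-then-correct loop is the same; that is what lets a twin run analytics and predictions that stay anchored to the physical asset.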

As a technology optimist, I increasingly see a world where humans and technology work together to deliver impactful outcomes at an unprecedented speed. These near-term and long-term perspectives are based on the strides we’re making today. If we see even incremental improvement, there is enormous opportunity to positively transform the way we work, live and learn and 2022 will be another year of accelerated technology innovation and adoption.

Six Edge Computing Trends to Watch in 2022

While many aspects of edge computing are not new, the overall picture continues to evolve quickly. For example, “edge computing” encompasses the distributed retail store branch systems that have been around for decades. The term has also swallowed all manner of local factory floor and telecommunications provider computing systems, albeit in a more connected and less proprietary fashion than was the historical norm.

However, even if we see echoes of older architectures in certain edge computing deployments, we also see developing edge trends that are genuinely new or at least quite different from what existed previously. These trends are helping IT and business leaders solve problems in industries ranging from telco to automotive, for example, as both sensor data and machine learning data proliferate.

Edge computing trends that should be on your radar

Here, edge experts explore six trends that IT and business leaders should focus on in 2022:

1. Edge workloads get fatter

One big change we are seeing is that there is more computing and more storage out on the edge. Decentralized systems have often existed more to reduce reliance on network links than to perform tasks that could not practically be done in a central location, assuming reasonably reliable communications. But that is changing.

Photo by Brett Sayles on Pexels.com

Almost by definition, IoT has always involved collecting data. However, what used to be a trickle has now turned into a flood as the data required for machine learning (ML) applications flows in from a multitude of sensors. And even though training models are often developed in a centralized data centre, the ongoing application of those models is usually pushed out to the edge of the network. This limits network bandwidth requirements and allows for rapid local action, such as shutting down a machine in response to anomalous sensor readings. The goal is to deliver insights and take action at the moment they’re needed.
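A minimal sketch of that last step, acting locally instead of waiting on a round trip to the data centre, might look like the following. The sensor read-out, the 4-sigma threshold, and the shutdown hook are all illustrative assumptions, not a reference design.

```python
# Minimal sketch: act locally on anomalous sensor readings at the edge.
# read_vibration() and shut_down_machine() stand in for real device hooks.
from collections import deque
from statistics import mean, stdev

WINDOW = 120                 # recent samples kept as the rolling baseline
history = deque(maxlen=WINDOW)

def is_anomalous(value: float) -> bool:
    """Flag readings more than 4 standard deviations from the recent mean."""
    if len(history) < 30:    # wait until a baseline exists
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(value - mu) > 4 * sigma

def monitor(read_vibration, shut_down_machine):
    while True:
        value = read_vibration()        # e.g. mm/s from an IIoT sensor
        if is_anomalous(value):
            shut_down_machine()         # local action, no cloud round trip
            break
        history.append(value)
```

A deployed system would apply the centrally trained ML model rather than a simple threshold, but the control flow, detect locally and act immediately, is the point.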

2. RISC-V gains ground

Of course, workloads that are both data- and compute-intensive need hardware on which to run. The specifics vary depending upon the application and the tradeoffs required between performance, power, cost, and so forth. Traditionally the choice has usually come down to either something custom, ARM, or x86. None are fully open, although ARM and x86 have developed a large ecosystem of supporting hardware and software over time, largely driven by the lead processor component designers.

But RISC-V is a new and intriguing open instruction set architecture (ISA).

Why intriguing? Here’s how Red Hat Global Emerging Technology Evangelist Yan Fisher puts it: “The unique aspect of RISC-V is that its design process and the specification are truly open. The design reflects the community’s decisions based on collective experience and research.”

This open approach, and an active ecosystem to go along with it, is already helping to drive RISC-V design wins across a broad range of industries. Calista Redmond, CEO of RISC-V International, observes that: “With the shift to edge computing, we are seeing a massive investment in RISC-V across the ecosystem, from multinational companies like Alibaba, Andes Technology, and NXP to startups like SiFive, Esperanto Technologies, and GreenWaves Technologies designing innovative edge-AI RISC-V solutions.”

3. Virtual Radio Access Networks (vRAN) become an increasingly important edge use case

A radio access network is responsible for enabling and connecting devices such as smartphones or internet of things (IoT) devices to a mobile network. As part of 5G deployments, carriers are shifting to a more flexible vRAN approach whereby the high-level logical RAN components are disaggregated by decoupling hardware and software, as well as using cloud technology for automated deployment and scaling and workload placement.

Photo by Z z on Pexels.com

Hanen Garcia, Red Hat Telco Solutions Manager, and Ishu Verma, Red Hat Emerging Technology Evangelist, note that “One study indicates deployment of virtual RAN (vRAN)/Open RAN (oRAN) solutions realize network TCO savings of up to 44% compared to traditional distributed/centralized RAN configurations.” They add that: “Through this modernization, communications service providers (CSPs) can simplify network operations and improve flexibility, availability, and efficiency—all while serving an increasing number of use cases. Cloud-native and container-based RAN solutions provide lower costs, improved ease of upgrades and modifications, ability to scale horizontally, and with less vendor lock-in than proprietary or VM-based solutions.”

4. Scale drives operational approaches

Many aspects of an edge-computing architecture can be different from one that’s implemented solely within the walls of a data centre. Devices and computers may have weak physical security and no IT staff on-site. Network connectivity may be unreliable. Good bandwidth and low latencies aren’t a given. But many of the most pressing challenges relate to scale; there may be thousands (or more) network endpoints.

Kris Murphy, Senior Principal Software Engineer at Red Hat, identifies four primary steps you must take in order to deal with scale: “Standardize ruthlessly, minimize operational ‘surface area,’ pull whenever possible over push, and automate the small things.”

For example, she recommends doing transactional, which is to say atomic, updates so that a system can’t end up only partially updated and therefore in an ill-defined state. When updating, she also argues that it’s a good practice for endpoints to pull updates because “egress connectivity is more likely available.” One should also take care to limit peak loads by not doing all updates at the same time.
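A sketch of two of those practices together, pull-based updates applied atomically, is below. The jitter spreads pull times so thousands of endpoints don’t hit the server at once, and os.replace makes the final switch atomic, so a crash mid-update can never leave a half-written artifact in place. The URL and paths are placeholders.

```python
# Sketch: jittered pull + atomic apply on a fleet endpoint.
# The update URL and target path are placeholders.
import os
import random
import time
import urllib.request

UPDATE_URL = "https://updates.example.com/bundle.tar.gz"
TARGET = "/var/lib/app/bundle.tar.gz"

def pull_update():
    # Spread load: each endpoint waits a random slice of a one-hour window.
    time.sleep(random.uniform(0, 3600))

    # Download to a temporary file next to the target (same filesystem),
    # so the final rename below cannot cross a filesystem boundary.
    tmp = TARGET + ".part"
    urllib.request.urlretrieve(UPDATE_URL, tmp)

    # os.replace is atomic on POSIX: the target is always either the old
    # bundle or the complete new one, never a partial write.
    os.replace(tmp, TARGET)

pull_update()
```

A real agent would also verify a signature or checksum before the rename, which preserves the same atomic property: nothing unverified ever becomes the live artifact.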

5. Edge computing needs attestation

With resources at the edge tight, capabilities that require little to no local resources are the pragmatic options to consider. Furthermore, any approach needs to be highly scalable; otherwise, the uses and benefits become extremely limited. One option that stands out is the Keylime project. “Technologies like Keylime, which can verify that computing devices boot up and remain in a trusted state of operation at scale, should be considered for broad deployment, especially for resource-constrained environments,” as described by Ben Fischer, Red Hat Emerging Technology Evangelist.

Photo by RoonZ.nl on Unsplash

Keylime provides remote boot and runtime attestation using the Integrity Measurement Architecture (IMA) and leverages Trusted Platform Modules (TPMs), which are common on most laptop, desktop, and server motherboards. If no hardware TPM is available, a virtual TPM (vTPM) can be loaded to provide the requisite TPM functionality. Boot and runtime attestation is a means to verify that the edge device boots to a known trusted state and maintains that state while running. In other words, if something unexpected happens, such as a rogue process, the expected state changes; this is reflected in the measurements, and the edge device is taken offline because it has entered an untrusted state. The device can then be investigated, remediated, and put back into service in a trusted state.

6. Confidential Computing becomes more important at the edge

Security at the edge requires broad preparation. The availability of resources such as network connectivity, electricity, staff, equipment, and functionality varies widely, but is far less than what would be available in a data centre. These limited resources limit the capabilities for ensuring availability and security. Besides encrypting local storage and connections to more centralized systems, confidential computing offers the ability to encrypt data while it is in use by the edge computing device.

This protects both the data being processed and the software processing the data from being captured or manipulated. Fischer argues that “confidential computing on edge computing devices will become a foundational security technology for computing at the edge, due to the limited edge resources.”

According to the Confidential Computing Consortium’s (CCC) report by the Everest Group, Confidential Computing – The Next Frontier in Data Security, “Confidential computing in a distributed edge network can also help realize new efficiencies without affecting data or IP privacy by building a secure foundation to scale analytics at the edge without compromising data security.” Additionally, confidential computing “ensures only authorized commands and code are executed by edge and IoT devices. Use of confidential computing at the IoT and edge devices and back end helps control critical infrastructure by preventing tampering with code or data being communicated across interfaces.”

Confidential computing applications at the edge range from autonomous vehicles to collecting sensitive information.

Diverse applications across industries

The diversity of these edge computing trends reflects both the diversity and scale of edge workloads. There are some common threads – multiple physical footprints, the use of cloud-native and container technologies, an increasing use of machine learning. However, telco applications often have little in common with industrial IoT use cases, which in turn differ from those in the automotive industry. But whatever industry you look at, you’ll find interesting things happening at the edge in 2022.

The Cloud and the Opportunity Ahead

A lot of what we do now is underpinned by the cloud, and “cloud” has increasingly become a tech buzzword. There are many reasons there is buzz around the cloud, and I will expand on some of them here.

Photo by Redd Angelo on StockSnap

Cloud democratises access to the kind of computing power that was previously only accessible to large corporations with deep pockets. What used to require a $100 million investment can now be achieved on the cloud for as little as $26 a year. And, by not spending time and resources on traditional IT infrastructure, companies using the cloud can build faster, better, and cheaper – in more sustainable ways. Cloud is flexible, agile, scalable, and has the potential to impact all industries in ways that were unimaginable just a few years ago across healthcare, finance, agriculture, education, and sustainability, to name a few. And as the demand for cloud computing grows, so does the demand for cloud-skilled workers. It has been predicted that there will be a significant skills gap by 2025 unless more is done to train, retrain, and upskill the region’s workforce.

Driving digital transformation and harnessing data

In today’s digital economy, it’s hard to find an industry that doesn’t use cloud applications. From accelerating medical research, improving crop yields in developing economies, and driving sustainability, to tracking bush fires, the cloud is changing the way we live, work, and play. Digital transformation is both an agent of change and a facilitator of it, and some of the biggest disruptions have been in the banking sector as we change the way we bank. There are more than 50 digital banks across Asia, with more on the way, helping drive financial inclusion in developing countries using the cloud. Today’s digital bank customers have high expectations for convenience, enhanced user experience, and personalisation, and access to the cloud has enabled these banks to innovate to meet these demands quickly and at low cost.

The pandemic has accelerated disruption and cloud adoption, and the volume of data produced as industries move to the cloud is growing rapidly. This data holds the potential for insights that can inform business strategies and is a resource that can’t be ignored. While some businesses are already leveraging data to drive decisions, gain competitive advantage, and fuel the next generation of innovation and success, more will do so in the coming year as business leaders start to understand the potential that cloud computing presents.

Photo by Lukas from Pexels

Data and analytics will become this decade’s priorities, and we must be ready with the necessary tools, skills, and expertise to tap into this resource to deliver efficiency and unlock experimentation. For many organisations, data is their most valuable asset, and we are helping them move data to the cloud, modernise applications, build next-generation secure data platforms, and build data lakes to collect real-time data. And, using Machine Learning (ML) algorithms, these organisations can gain real-time actionable insights, results, and predictions to improve decision making.

The digital skills gap

The rapid evolution of cloud technology and widespread adoption of cloud computing will require a workforce that has the right data and cloud skills, and across Asia, the supply of digitally skilled workers is nowhere near the demand. COVID-19 accelerated the adoption of cloud tech which meant the skills gap widened as the global talent landscape transformed. Digital workers in Asia today know they will need advanced digital skills – almost half believe cloud computing skills will be required in their jobs within just four years.

Photo by ThisIsEngineering from Pexels

Broadening the skills base of workers globally is vital for economic growth, resiliency, and prosperity, and the social implications of failing to act include rising income disparity and more unemployment. Since COVID-19, there has been mass labour market displacement, with job losses predicted to far exceed those of the Global Financial Crisis and unemployment forecast to reach its highest level since the Great Depression. With this in mind, governments around the world are implementing national policies on skilling and laying the building blocks for reforms, but more needs to be done by the private sector. Employers need to help current workers upskill, educational institutes need to adopt curricula that provide relevant skills, and workers across all fields need to seize the opportunity to learn new digital skills.

AWS is invested in the future

AWS is committed to a dynamic and entrepreneurial IT sector and supporting economic growth globally, and we hope to build resilience into the digitally skilled workforce and help bridge the skills gap. Globally, we are committed to helping 29 million people grow their technical skills with free cloud computing training by 2025. We have made over 500 free, on-demand courses available online, with many available in local languages such as Bahasa Indonesia, Japanese, Korean, and Simplified and Traditional Chinese, as well as interactive labs and virtual day-long training sessions through AWS Training and Certification. We are also working with educational institutes around the region to develop programmes that provide students with relevant, in-demand cloud tech skills.

The world’s workforce needs a sustainable future, and Amazon is committed to helping provide this by making more than 91 renewable energy investments around the world and committing to Amazon’s Climate Pledge to be a net-zero carbon business by 2040, 10 years ahead of the Paris Agreement, and to be on 100% renewable energy by 2025.

The cloud has the power to do a lot of good, but we must be prepared to harness that power with a skilled workforce that can meet the challenge to innovate at exponential speed. As the world emerges from the COVID-19 pandemic with new ways of operating, working, and living being adopted, cloud will remain at the forefront of our digital lives.

Keeping Up with the Pace of Innovation with the Cloud

When I was a young boy growing up in Jersey in the British Channel Islands, I’d turn on the grainy TV to warm up so I could watch sports with my father and brother. FORMULA 1 racing was the most exciting sport for us, even though the cars often sped by faster than the camera operators and the technology of the day could follow.

Now, racing is covered in a far richer and more engaging way, especially since F1 launched F1 Insights powered by AWS in 2018, bringing data analytics as a live feed to my screens. Watching on my phone in Singapore, I love the real-time Car Performance Scores, which include thousands of data points streamed every second from every car on the track, giving me a much better understanding of where my favorite car ranks in the field – and what’s driving its performance.

Photo by zhang kaiyv on Pexels.com

It’s exactly this type of real-time information that businesses need to understand their performance, so they can make decisions rapidly and keep up with market changes. During the pandemic, we have learned that speed matters, whether you’re a digital native or a more traditional organization. As all businesses faced social distancing measures, those who survived the pandemic adopted new ways to do business, and they adapted fast using the cloud.

Some moved faster than others. Some enterprises with legacy systems seem resigned to moving slowly. Even today, I often hear comments like, “It’s just the nature of our size and heritage.”

We must debunk that myth. Speed is not preordained by heritage. Speed is a choice that any organization can make if it is prepared to harness the cloud. As a recent McKinsey article put it: “For CEOs, cloud adoption is not just an engine for revenue growth and efficiency. The cloud’s speed, scale, innovation, and productivity benefits are essential to the pursuit of broader digital business opportunities, now and well into the future.”

Culture Change

Many organizations can look for ways to change their culture and embrace speed, creating an environment that values urgency. In a culture designed for speed, people are actively encouraged to experiment and are rewarded for it. Flipping a switch won’t suddenly deliver speed, though; companies have to build muscle as they learn how to innovate at pace, all the time.

Amazon has been around for nearly 27 years, and to this day we maintain what we call a “Day 1” culture – approaching everything we do with the entrepreneurial spirit of being on the first day of your organization. We do this by giving our teams autonomy, on the understanding that they operate within the guardrails of our culture.

Photo by fauxels on Pexels.com

We believe the more we can equip people to make high judgment decisions at all levels, the better off we, and our customers, are. We encourage employees to make high-velocity, high-quality decisions by setting the vision and context for teams. Since Amazon was founded in 1994, we’ve consistently operated based on three big ideas that every employee knows. The first is to obsess over customers. This is cemented in our mission statement to be “earth’s most customer-centric company.” The second is that if we focus on the customer it will force us to innovate – to look at new ways of solving problems on behalf of our customers. The third is to be stubborn in sustaining our long-term vision while being flexible in how we get there.

As Jeff Bezos explains, “In a traditional corporate hierarchy, a junior executive comes up with a new idea that they want to try. They have to convince their boss, their boss’s boss, their boss’s boss’s boss and so on – any ‘no’ in that chain can kill the whole idea.” Systems and processes that identify, validate, and approve new ideas from within the business are invaluable in democratizing company-wide idea exploration and driving experimentation in business as usual.

For example, at Amazon, we make it easy for those closest to our customers to raise ideas for speedy review. Imagine a time-wasting process or one that results in a poor customer experience. People complain about it regularly, but they know that it can be so hard to implement change that it’s not worth the effort. The problem is put in the “too hard” basket and no one says anything. Now, imagine actually rewarding teams for suggesting a fix. Imagine if the process was fast and painless and resulted in change. How many great ideas would happen every week?

Thinking Big and Acting Small

Thinking big is the hallmark of innovation. But, as we look to move quickly and embrace greater experimentation, we should also look to de-risk the process. This means recognizing that the most powerful innovations often come through simplification. One small, seemingly insignificant cost or time-saving can drive enormous benefits for both companies and their customers when applied at scale. Thinking big also means starting big ideas with very small, reversible experiments. At Amazon, we look for “two-way doors.” If an experiment fails (as they often do), we can back out of the decision rather than being committed to moving ahead through a “one-way door,” which can be expensive and difficult to undo. This way, you learn quickly with very low stakes.

Photo by Burst on StockSnap

A great example of innovative thinking in the face of legacy technology is FashionValet. As the modest fashion brand grew, its multi-environment hybrid technology infrastructure was unable to keep up with demand during product launches. In 2019, FashionValet went all-in on AWS to optimize processes and meet growing demand. With Auto Scaling Groups and RDS Aurora features, FashionValet can now run 10x more servers during product launches to meet demand, then scale down automatically with no downtime. Using this technology, FashionValet has also accelerated their product development timeline by 200% and reduced their infrastructure management costs by 75%.
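A rough sketch of the scaling mechanics behind a launch like this, using boto3 against an existing Auto Scaling group, could look like the following. The group name and capacities are illustrative, and real setups usually prefer scheduled actions or target-tracking policies over manual calls.

```python
# Sketch: scale an existing Auto Scaling group up for a launch event.
# Group name and capacities are illustrative; the group's MaxSize must
# already allow the peak value.
import boto3

autoscaling = boto3.client("autoscaling")

def scale_for_launch(group: str, baseline: int, multiplier: int = 10) -> None:
    """Raise desired capacity ahead of a traffic spike."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group,
        DesiredCapacity=baseline * multiplier,
        HonorCooldown=False,  # act immediately for the launch window
    )

def scale_back(group: str, baseline: int) -> None:
    """Return to normal capacity once the spike subsides."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group,
        DesiredCapacity=baseline,
    )

scale_for_launch("web-fleet", baseline=4)  # e.g. 4 -> 40 instances for the launch
# ... launch traffic subsides ...
scale_back("web-fleet", baseline=4)
```

The elasticity is the business point: capacity that expands tenfold for an afternoon and then disappears, billed only while it runs.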

Companies don’t have to bet their business on innovation, but they shouldn’t let legacy thinking hold them back. By actively empowering teams, clearing the path to “Yes,” and using small experiments, companies can build capability to promote high-velocity decisions – helping them operate at the speed of F1.

5G, Industry, & Collaboration at the Edge

Edge computing gives life to the transformative use cases that businesses are dreaming up today and brings real-time decision making to last-mile locales. That can mean a far-flung factory, a train roaring down the tracks, someone’s connected home, a car speeding down the highway, or even space. Who thought we’d be running Kubernetes in space?

This shows that edge computing can transform the way we live, and we are doing it right now.

Why Collaboration Is Critical

Edge technologies are blending the digital and physical worlds in a new way, and that combination is resonating at a human level. This human resonance might sound like an aspirational achievement, but it is already here. A great example is when we used AR/VR to improve safety on the factory floor.

Continued collaboration, however, is necessary to keep enabling breakthrough successes. Across industries and organizations, we are all highly dependent on one another. Thinking about the telecommunications and industrial sectors, in particular, there is a mutually supportive, symbiotic relationship between these industries—5G development cannot be successful without industrial use cases, which, in turn, are based on telco technologies.

Photo by Startup Stock Photos on Pexels.com

However, numerous challenges remain: reducing network complexity, maintaining security, improving agility, and ensuring a vibrant ecosystem. The only way to address and solve these challenges is by tapping into the collective wisdom of the community.

With open source, we can unify and empower communities on a broad scale. The open-source ecosystem brings people together to focus on a common problem to solve with software. That shared purpose can turn isolated efforts into collective ones so that changes are industry-wide and reflect a wide range of needs and values.

The collaboration that open source makes possible continues to ignite tremendous change and alter our future in so many ways, making it the innovation engine for industries.

If we collaborate on 5G and edge in this manner, nascent technologies could become exciting common foundations in the same way that Linux and Kubernetes have. When we work together, the only limit to these possibilities is our imagination.

From Maps to Apps and Much More

Do you remember having to use a paper-based map to figure out driving directions?  Flash forward to today: Look at the applications we take for granted on our phones or in our homes that allow us to change our driving route in real-time to avoid traffic, or to monitor and grant access to our front doors—to the point that these have shaped how we interact with our environments and each other. Yet not too long ago, many of these things were unimaginable. We barely had cloud technology, we were in the transition from 3G to 4G, and smartphones were new.

Photo by cottonbro on Pexels.com

But there was important work being done by lots of people who were improving upon the core technologies. The convergence of three technology trends, as it turns out, unlocked a hugely disruptive opportunity: a cloud-native, mobile-device-enabled transportation service that picked you up wherever you were and took you wherever you wanted to go.

This opportunity was only possible because each trend built on the others to create a truly novel offering. Without any one of them, the ride-sharing apps of the world would not have been the same, or as disruptive. Imagine yourself scrambling to find a WiFi hotspot on the street corner, whipping out your laptop outside a restaurant while standing in the rain, or starting your business by first constructing a massive data centre. The convergence of smartphones, 4G networks, and cloud computing enabled a new world.

Today we are creating the next set of technologies that will become the things so embedded in our lives and so indispensable to our daily habits that we will wonder how we ever got by without them. Are you ready to be wearing clothes with sensors in them that tell you how healthy you are?

The possibilities with edge technologies are equally as exciting. It starts with the marriage of the digital world with the physical world. Adding in pervasive connectivity—leveraging a common 5G and edge platform—we can transform how operational technologies interact with the physical world and that changes everything.

The Future Is Now

We are creating this new world that is hard to imagine, yet it is not so foreign because we have seen how this story has played out before. Expect these new technologies to have profound implications for humanity—in our daily lives, how we interact with one another, and the social fabric of our world.

Photo by Alex Knight on Pexels.com

All of that cannot happen without collaboration.

We have only to look at how open source has empowered collaboration and how working together has helped people across organizations and industries build more robust, shared platforms more quickly and differentiate on top of them—with apps and capabilities built on the foundation of Kubernetes and Linux, for example.

Vigilance is Crucial for Businesses in Dealing with Modern Malware

In just the first four months of 2021, Trend Micro’s research team detected 113,010 ransomware threats in Malaysia. Since the first detected case of ransomware infection globally in 2005[1], ransomware has evolved into what is often termed modern ransomware, which is even more targeted and malicious in nature.

The recent attack on enterprise technology firm Kaseya[2], where hackers demanded US$70 million (RM290.92 million) worth of bitcoin in return for stolen data, is a stark reminder of the sweeping damage and disruption that modern ransomware is capable of. 

Photo by Sora Shimazaki on Pexels.com

Traditionally, ransomware attacks were conducted through “click-on-the-link” lures – spam emails or links leading to compromised websites – typically aimed at a random list of victims to collect a moderate pay-out.

Today, threat actors have evolved their strategies to inflict greater damage on a company’s reputation and potentially collect larger pay-outs from high-profile victims. This is what is becoming known as a “double-extortion” strategy in modern ransomware attacks. According to Trend Micro’s research[3], criminals take these steps to personalize the attacks:

  1. Organize alternative access to a victim’s network such as through a supply chain attack;
  2. Determine the most valuable assets and processes that could potentially yield the highest possible ransom amount for each victim;
  3. Take control of valuable assets, recovery procedures, and backups;
  4. Steal and threaten to expose confidential data.

In Malaysia, Trend Micro found that the industries most targeted by ransomware are government, healthcare, and manufacturing[4]. As these sectors continue to drive economic growth in the country, a multi-layered cybersecurity defence is clearly necessary. These enterprises will need such a defence to protect their networks and business-critical data against the ever-evolving ransomware landscape.

Photo by Tima Miroshnichenko on Pexels.com

To keep up with the ever-evolving ransomware landscape, the three most important must-dos for Malaysian organizations are:

  • Maintain IT hygiene factors: Security teams should ensure that proactive countermeasures, such as monitoring features, backups, and security skills training, are in place to enable early detection. Alongside that, everyone in an organization should also have the latest security updates and patches installed.
  • Work with the right security partners: Start by clearly defining the needs and priorities around enterprise security in an organization. Then, collaborate with a security vendor that aligns with these priorities to create a solid security response playbook to be used on an ongoing basis.
  • Have visibility over all security layers: For security teams to detect suspicious activity early on and respond to cyberattacks more quickly, organizations should utilize tools such as Trend Micro Vision One, which collects and automatically correlates data across email, endpoints, servers, cloud workloads, and networks. By putting the right technologies in place, enterprises can also help reduce the alert fatigue commonly faced by security operations centers (SOCs), 54% of which report being overwhelmed by alerts[5].

In today’s world of constant attacks, cybersecurity should be a top priority for everyone across the entire organization; and not just be the sole responsibility of the security team. While an organization can eventually recover its data or financial resources post-attack, the loss of trust among customers and partners will be a difficult challenge to remedy. All stakeholders must collaborate, invest in proper resources, and take proactive steps to transform workplace culture and best practices in order to stop pernicious ransomware threats at the door. 


[1] Trend Micro, Ransomware. https://www.trendmicro.com/vinfo/us/security/definition/ransomware

[2] Trend Micro, IT Management Platform Kaseya Hit With Sodinokibi/REvil Ransomware Attack, 4 July 2021. https://www.trendmicro.com/en_my/research/21/g/it-management-platform-kaseya-hit-with-sodinokibi-revil-ransomwa.html

[3] Trend Micro, Modern Ransomware’s Double Extortion Tactics, 8 June 2021. https://www.trendmicro.com/vinfo/gb/security/news/cybercrime-and-digital-threats/modern-ransomwares-double-extortion-tactics-and-how-to-protect-enterprises-against-them

[4] Trend Micro, Trend Micro 2020 Annual Cybersecurity Report, 23 February 2021. https://www.trendmicro.com/vinfo/us/security/research-and-analysis/threat-reports/roundup/a-constant-state-of-flux-trend-micro-2020-annual-cybersecurity-report

[5] Trend Micro, 70% Of SOC Teams Emotionally Overwhelmed By Security Alert Volume, 25 May 2021. https://newsroom.trendmicro.com/2021-05-25-70-Of-SOC-Teams-Emotionally-Overwhelmed-By-Security-Alert-Volume