How would the principles of open source (permissive licenses, transparent training data and weights and, perhaps most of all, the ability to contribute to an open source model) impact the resulting project?
Open models do exist from many of the most notable players in AI, but they aren’t open source or they impose certain restrictions…and that’s a challenge. To create models that really work for specific enterprise use cases, technology organizations need to understand the full scope of a model – how it was trained, what it was trained on, who contributed to it and so on – before they even think about fine-tuning it with their own internal data.
At Red Hat Summit 2023, we introduced Red Hat OpenShift AI, providing the foundation for running AI models at scale. It is a powerful, scalable and optimized platform for AI workloads, but it is not focused on delivering actual models. Today, we’ve made it clear that Red Hat’s strategy doesn’t stop at providing the backbone for AI-enabled applications – we want to bring the power of community and open source to the models themselves.
In collaboration with IBM Research, we’re open sourcing several models for both language and code assistance. But what makes this even more exciting is InstructLab – a new open source project that allows individuals to enhance a model through a simple user interface. Think of it as being able to contribute to an LLM in the same way you would open a pull request against any other open source project.
Instead of forking an LLM, which creates a dead-end that no one else can contribute to, InstructLab enables anyone around the world to add knowledge and skills. These contributions can then be incorporated into future releases of the model. Put simply…you don’t need to be a data scientist to contribute to InstructLab. Domain and subject matter experts (and data scientists too) can use InstructLab to make contributions that benefit everyone. I cannot overstate how powerful this is – both for the community and enterprises!
RHEL AI combines the critical components of the world’s leading enterprise Linux platform (in the form of the newly-announced image mode for Red Hat Enterprise Linux), open source-licensed Granite models and a supported, lifecycled distribution of the InstructLab project. InstructLab further extends the role of open source in AI, making working with or contributing to the underlying open source model as easy as contributing to any other community project.
AI innovation should not be limited to organizations that can afford massive GPU farms or brigades of data scientists. Everyone, from developers to IT operations teams to lines of business, needs the capacity to contribute to AI in some way, in a manner of their choosing. That’s the beauty of InstructLab and the potential of RHEL AI – it brings the accessibility of open source to the often-closed world of AI.
This is where Red Hat’s AI product strategy is going. Our history embodies our philosophy. We enabled the power of open source for Linux, Kubernetes and hybrid cloud computing for the enterprise.
Now, we’re doing the same for AI. Everyone can benefit from AI, so everyone should be able to access and contribute to it. Let’s do it in the open.
New technologies can take many forms. Often, they come from generally straightforward, incremental product advances over the course of years; think of the Complementary Metal-Oxide-Semiconductor (CMOS) process shrinks that underpinned many of the advances in computing over the past decades. Not easy, but relatively predictable when viewed from a high enough level.
Other shifts are less straightforward to predict. Even if a technology is not completely novel, it may require the right conditions and advances to come together so it can flourish in the mainstream. Both server virtualization and containerization fall into this category.
What’s next? Someone once said that predictions are hard, especially about the future. But here are some areas that Red Hat has been keeping an eye on and that you should likely have on your radar as well. This is hardly a comprehensive list and it may include some surprises, but it is a combination of both early-stage and more fleshed-out developments on the horizon. The first few are macro trends that pervade many different aspects of computing. Others are more specific to hardware and software computing infrastructure.
Artificial intelligence/machine learning (AI/ML)
On the one hand, AI/ML belongs on any list about where computing is headed. Whether coding tools, self-tuning infrastructure, or improved observability of systems, AI/ML is clearly a critical part of the computing landscape going forward.
What’s harder to predict is exactly which forms and applications of AI will deliver compelling business value. Many will prove interesting only in narrow domains, and others will likely remain almost good enough over a lengthy time horizon.
Much of the success of AI to date has rested on training deep neural networks (NNs) of increasing size (as measured by the number of weights and parameters) on increasingly large datasets using backpropagation, and supported by the right sort of fast hardware optimized for linear algebra operations—graphics processing units (GPUs) in particular. Large Language Models (LLMs) are one prominent, relatively recent example.
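The training loop described above can be sketched in miniature. This toy example fits a one-parameter linear model rather than a deep network, but it shows the same mechanics that backpropagation scales up to billions of weights: a forward pass, a loss, gradients and an update step (which GPUs accelerate as linear algebra).

```python
# Toy illustration of the training loop behind modern AI: forward pass,
# loss, gradients, update. Backpropagation applies the same idea through
# many layers; this is a deliberately tiny, deterministic version.

# Fit y = 2x + 1 by gradient descent on mean squared error.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # tiny training set
w, b = 0.0, 0.0        # the model's "weights"
lr = 0.01              # learning rate

for epoch in range(2000):
    dw = db = 0.0
    for x, y in data:
        pred = w * x + b               # forward pass
        err = pred - y                 # prediction error
        dw += 2 * err * x / len(data)  # gradient of MSE w.r.t. w
        db += 2 * err / len(data)      # gradient of MSE w.r.t. b
    w -= lr * dw                       # gradient-descent update
    b -= lr * db

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```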
There have been many clear wins, but AI has struggled with more generalized systems that interface with an unconstrained physical world – as in the case of autonomous driving, for example. There are also regulatory and legal concerns relating to explainability, bias and even overall economic impact. Some experts also wonder whether filling broad gaps in our collective understanding of cognitive science, in areas that lie outside the direct focus of machine learning, may (or may not) be needed before AI can handle many types of applications.
What’s certain is that we will be surprised.
Automation
In a sense, automation is a class of application to which AI brings more sophisticated capabilities. Red Hat Ansible Lightspeed with IBM watsonx Code Assistant, for example, is a recent generative AI service designed by and for Ansible automators, operators and developers.
Automation is increasingly necessary because hardware and software stacks are getting more complex. What’s less obvious is how improved observability tooling, and AI-powered automation tools that make use of that more granular data, will play out in detail.
At the least, it will lead us to think about questions such as: Where are the big wins in dynamic automated system tuning that will most improve IT infrastructure efficiency? What’s the scope of the automated environment? How much autonomy will we be prepared to give to the automation, and what circuit breakers and fallbacks will be considered best practice?
Over time, we’ve reduced manual human intervention in processes such as CI/CD pipelines. But we’ve done so in the context of evolving best practices in concert with the increased automation.
Security
Security is a broad and deep topic (and one of deep concern across the industry). It encompasses zero trust, software supply chains, digital sovereignty and yes, AI—both as a defensive tool and an offensive weapon. But one particular topic is worth highlighting here.
Confidential computing is a security technology that protects data in use, meaning that it is protected while it is being processed. This is in contrast to traditional encryption technologies, which protect data at rest (when it is stored) and data in transit (when it is being transmitted over a network).
Confidential computing works by using a variety of techniques to isolate data within a protected environment, such as a trusted execution environment (TEE) or a secure enclave. It’s of particular interest when running sensitive workloads in an environment over which you don’t have full control, such as a public cloud. It’s relatively new technology but is consistent with an overall trend towards more security controls, not fewer.
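To make the three data states concrete, here is a toy Python sketch. The XOR "cipher" is illustrative only, not real cryptography; the point is to show where the gap sits that a TEE closes:

```python
# Toy sketch (NOT real cryptography) of the three data states:
# at rest, in transit, and in use.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
record = b"patient-id:4711"

at_rest = xor_cipher(record, key)  # encrypted before being written to disk
in_transit = at_rest               # stays encrypted on the wire (TLS in practice)

# Data *in use*: to process it, conventional systems decrypt into RAM,
# where the plaintext is visible to the OS, hypervisor or cloud operator.
in_use = xor_cipher(in_transit, key)
assert in_use == record

# Confidential computing keeps this decrypted working copy inside a TEE or
# secure enclave, so even a privileged host cannot read it while it runs.
```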
RISC-V
While there are examples of open hardware designs, such as the Open Compute Project, it would be hard to point to a successful open processor relevant to server hardware.
However, major silicon vendors and cloud providers are exploring and adopting the RISC-V free-to-license and open processor instruction set architecture (ISA). It follows a different approach from past open processor efforts. For one thing, it was open source from the beginning and is not tied to any single vendor. For another, it was designed to be extensible and implementation-agnostic. It allows for the development of new embedded technologies implemented upon FPGAs as well as the manufacture of microcontrollers, microprocessors and specialized data processing units (DPUs).
Its impact is more nascent in the server space, but it has been gaining momentum. The architecture has also seen considerable standardization work to balance the flexibility of extensions with the fragmentation they can bring. RISC-V profiles are standardized subsets of the RISC-V ISA that give hardware implementers and software developers a common interface: a defined set of extensions, with a bounded amount of flexibility, supporting well-defined categories of systems and applications.
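To make the base-plus-extensions idea concrete, here is a hedged Python sketch that parses simple RISC-V ISA naming strings such as rv64gc. Real ISA strings also carry version numbers and multi-letter Z*/S* extensions (and 'g' additionally implies Zicsr/Zifencei), all of which this toy parser ignores.

```python
# Toy parser for simple RISC-V ISA naming strings, to illustrate the
# "base ISA + extensions" structure. Single-letter extensions only.

EXTENSIONS = {
    "i": "base integer", "m": "multiply/divide", "a": "atomics",
    "f": "single-float", "d": "double-float", "c": "compressed",
}

def parse_isa(isa: str):
    """Split e.g. 'rv64imafdc' into (XLEN, list of extension names)."""
    assert isa.startswith("rv")
    xlen = int(isa[2:4])  # 32 or 64 (toy: ignores rv128)
    # 'g' is shorthand for IMAFD (plus Zicsr/Zifencei, ignored here).
    letters = isa[4:].replace("g", "imafd")
    return xlen, [EXTENSIONS[ch] for ch in letters]

print(parse_isa("rv64gc"))
```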
Platform software
Perhaps one of the most intriguing questions is what happens at the lower levels of the server infrastructure software stack—roughly the operating system on a single shared memory server and the software that orchestrates workloads across many of these servers connected over a network.
It is probably easiest to start with what is unlikely to change in fundamental ways over the next decade. Linux has been around for more than 30 years; Unix more than 50, with many basic concepts dating to Multics about ten years prior.
That is a long time in the computer business. But it also argues for the overall soundness and adaptability of the basic approach taken by most modern operating systems – and the ability to evolve Linux when changes have been needed. That adaptation will continue, for instance by reducing overheads through selectively offloading work to FPGAs and other devices, including edge servers. There are also opportunities to reduce transition overheads for performance-critical applications; the Unikernel Linux project – a joint effort involving professors, PhD students and engineers at the Boston University-based Red Hat Collaboratory – demonstrates one direction such optimizations could take.
More speculative is the form that collections of computing resources might take and how they will be managed. Over the past few decades, these resources primarily took the form of masses of x86 servers. Some specialized hardware is used for networking, storage and other functions, but CMOS process shrinks meant that for the most part, it was easier, cheaper and faster to just wait for the next x86 generation than to buy some unproven specialized design.
However, with performance gains associated with general-purpose process shrinks decelerating—and maybe even petering out at some point—specialized hardware that more efficiently meets the needs of specific workload types starts to look more attractive. The use of GPUs for ML workloads is probably the most obvious example, but is not the only one.
The challenge is that the number of developers, and the depth of their specialized skills, is not growing at the same pace. Better development tools can help to some degree, but it will also become more important to abstract away the complexity of more specialized and more diverse hardware.
What might this look like? A new abstraction/virtualization layer? An evolution of Kubernetes to better understand hardware and cloud differences, the relationship between components and how to intelligently match relatively generic code to the most appropriate hardware or cloud? Or will we see something else that introduces completely new concepts?
Wrap up
What we can say about these predictions is that they’re probably a mixed bag. Some promising technologies may fizzle a bit. Others will bring major and generally unexpected changes in their wake, and something may appear at a time and from a place where we least expect it.
The Heavy Reading 2022 5G Network Strategies Operator Survey provides insight into how 5G networks may evolve as operators and the wider mobile ecosystem continue to invest in 5G technology. The article will discuss some of the findings for 5G and edge computing, and conclude with a perspective centred around 5G security.
Drivers for 5G edge deployments
Current edge deployments are being driven by the healthcare, financial services and manufacturing industries. Heavy Reading says the next largest growth segment will be the media and entertainment sector, with 66% of respondents indicating they would deploy 5G edge services to these verticals in the next two years.
As the compiled data illustrates, the initial edge focus for service providers is to lower costs and increase performance. From a financial perspective, the main driver cited by 63% of those surveyed was to reduce bandwidth use and cost, followed by better support for vertical industry applications (46%) and differentiated services versus the competition (43%).
Two key criteria for edge deployments by smaller operators (less than $5bn in annual revenue) were improved resilience and application performance. Respondents cited that both these criteria had the effect of lowering costs and increasing customer satisfaction as service level agreements (SLAs) would be easier to fulfil.
Larger operators focus on differentiated services and applications that would create new revenue. The higher significance compared with smaller operators (68% versus 28%) might be rooted in the need to compete not only with other telco service providers but also with hyperscalers. This is an interesting observation, considering that some service providers are looking to partner with hyperscalers to overcome challenges with edge deployment.
Edge deployment options
Even though a variety of different deployment options for the edge can be utilized, the most favored is a hybrid public/private telco cloud infrastructure, with 33% of respondents preferring this choice. This finding is not surprising, as it offers service providers a good balance of ownership, control and reach.
As Heavy Reading points out, service providers’ cultural reluctance to partner with hyperscalers is now diminishing, primarily due to the speed at which hyperscalers can roll out edge deployments.
Deployments at the very edge of the network, on-premises, are also an option chosen by some service providers and seem to be targeted at private 5G opportunities. Multi-access edge computing (MEC) is seen as a key enabler for private 5G, with private 5G for mining being a key segment for US tier 1 service providers.
The use of container-based technology at the edge
Linux containers allow the packaging of software with the files necessary to run it, while sharing access to the operating system and other infrastructure resources. This configuration makes it easier for service providers to move the containerized component between environments (development, test, production), and even between clouds, while retaining full functionality. Containers offer the potential for increased efficiency, resiliency and agility that can boost innovation and help create differentiation.
However, utilization of container-based technology remains a challenge for many service providers in the context of edge deployments. The survey confirms this complexity in the relatively slow pace of transition to containers, with almost half of respondents reporting that less than 25% of their edge workloads are containerized today. Adoption is forecast to accelerate in the coming years, however, as over 50% of respondents expect 51% or more of their workloads to be containerized by 2025.
Other complexities with edge deployments
Cost and complexity of infrastructure is cited as the main barrier to current edge deployments (55% of respondents). Integration and compatibility between ecosystem components also scores high (49%). To address the integration and compatibility challenge, Red Hat maintains strong collaboration with partners focused on innovation for service provider networks.
Through our testbed facilities we can enable the development, testing and deployment of partner network functions (virtual network function and cloud-native network function) for accelerated adoption and mitigation of risk. We continuously validate network functions to ensure they’ll work reliably with our product offerings.
Additionally, Red Hat has developed numerous partner blueprints and reference architectures to allow service providers to deploy pre-integrated components from different vendors. Through our extensive portfolio, we provide a common and consistent cloud-native platform, accompanied by necessary functional components, orchestration and integration services from our partners for full operational readiness.
5G security concerns and strategy
Security takes on even greater importance in 5G networks, primarily due to a more distributed network architecture, more capable devices and a larger attack surface. The survey indicates a number of infrastructure capabilities that are important to service providers in a security context, including the use of trusted hardware and identity and access management. In terms of securing the 5G edge, trusted hardware is considered a critical component for device endpoints.
Reinforcing the earlier points around container-based technology, container orchestration security and continuous image security scanning and vulnerability analysis also score highly. Trusted hardware and continuous image scanning and vulnerability analysis are also the top two priorities for service providers’ 5G edge security strategies, and both rank highly as important capabilities for securing endpoints.
Zero-trust deployment and provisioning is also called out as an important factor. Zero-trust scores relatively highly in terms of consistent infrastructure provisioning for physical and virtual network functions (48%) and encryption of data in motion (46%).
While the majority of service providers say they are confident their 5G security strategy is robust, there is concern outside of the US related to maturity and the ability to scale. These concerns are centered around the internal resources and related skill sets needed to effectively implement a security strategy that includes ever-changing risks, compliance requirements, tools and architectural modifications.
Closing remarks
The edge expands opportunity, and migrating toward it to capture new services and revenue, as well as network efficiencies, is a critical direction for service providers. With demand increasing and application use cases difficult to predict, technologies must be able to adapt continually to avoid inflexibility.
Service providers must implement security strategies and processes using different capabilities to effectively mitigate security risks. And these strategies and processes must be adapted over time as technologies, threats and needs evolve. Centralized identity management and access control is key for cloud-centric security approaches, using the principle of least privilege to provide users with only the access they need.
Last year, Red Hat shared our plan to evolve our global Telecommunications, Media and Entertainment (TME) organization to better suit the needs of our partners and customers. Since then, we’ve been connecting and building within our ecosystem to deliver solutions that answer our customers’ biggest needs, one of which is helping navigate the global shift in the way services are delivered across both the TME industry and society as a whole.
Industry-leading partners and connected organizations are working with the telco ecosystem to build on each other’s innovations in new ways, accelerating the pace of industry change with a focus on frictionless customer journeys. For example, service providers are helping banks meet customers’ demands for real-time digital services like hyper-personalization, real-time fraud detection and next-gen connectivity – while also giving the unbanked access to financial services. From mobile banking and payments to connected vehicles, public safety monitoring, private 5G and more, service providers are fundamental in delivering the many technologies that are driving a completely new landscape for improved societies and global transformation.
How Cloud Independence Can Drive Change
However, this does not happen overnight. Service providers are rethinking their cloud approach by transitioning to a hybrid and multi-cloud environment to help them become more flexible, agile, scalable and competitive in a constantly evolving market. In a TM Forum Themes Report, sponsored by Red Hat, we found that this pivot can lead a service provider to decide which hyper-scale cloud provider meets their needs best.
This leads to future-looking questions, such as:
Which workloads fit which clouds?
Which cloud-native solutions have the flexibility and functionality at the scale my organization requires?
Can I balance these benefits against the risks of reduced customer choice, disparate cloud silos, increased costs and limited flexibility?
To help mitigate this risk, we found that service providers are working to maintain cloud and container independence – especially if they want to remain competitive as these new technologies begin rapidly rolling out. This TM Forum Themes Report explains this need for independence, highlighting how service providers are increasingly taking a hybrid multi-cloud approach to maintain supplier diversity while expanding their own telco cloud (operator-as-a-platform) skills and technologies.
Customers at Transformation’s Epicentre
Underpinning these efforts are 5G networks that provide innovative ways for service providers to monetize their investments. We see this in areas like enterprise multi-access edge computing (MEC), open and virtualized RAN, 5G core and more, with real-world successes from our customers including Bharti Airtel, Verizon and VodafoneZiggo.
Red Hat can help service providers successfully compete with new services and business models, boost revenues and meet rising customer expectations by providing strategic expertise and a rich portfolio of products and services for their hybrid cloud deployments. We provide the flexibility for their projects across this vast landscape, from proofs-of-concept to production environments, helping providers select what works best for their own specific needs.
In addition to this shift, we’re excited to see service providers taking advantage of cloud services managed by third-party experts like Red Hat, including Red Hat OpenShift Service on AWS (ROSA) and Azure Red Hat OpenShift (ARO). This helps organizations offload the underlying infrastructure work and focus on their core business, providing additional flexibility and driving tangible business benefits.
We are also seeing Red Hat customers increase artificial intelligence (AI) deployments, or provide AI-as-a-Service, over the past year, from Turkcell AI to NTT East (in Japanese). It is clear that practical deployments of AI – from new consumer apps and social engagement to enterprise B2B apps and AI at the edge – are making a significant impact by enhancing customer experiences, driving greater business efficiencies and creating new revenue streams.
The Partner Ecosystem is Expanding
In order to deliver these customer-centric solutions, Red Hat is working with Ericsson, a leading provider of 5G software and hardware, to lower the barriers to 5G adoption and build an open platform for 5G connectivity and innovation. We are doing this through active collaboration across Ericsson’s portfolio, including packet core, IP Multimedia Subsystem (IMS) and operations support systems (OSS), as well as Cloud RAN in Ericsson’s Open Lab – a space for fast and interactive co-creation of innovative solutions with communications service providers and ecosystem partners.
Things do not stop there – other software providers such as Baicells, Casa Systems, MATRIXX Software, Mavenir, Nokia, Rakuten Symphony and Samsung work closely with Red Hat to modernize 5G and RAN workloads across the open hybrid cloud. Additionally, with Dell Technologies, Hewlett Packard Enterprise, Intel and Lenovo, we are able to build full-stack hardware and software solutions on top of a reliable infrastructure to support customer deployments from the data center to the edge.
Put simply, edge computing is computing that takes place at or near the physical location of either the user or the source of the data being processed, such as a device or sensor.
By placing computing services closer to these locations, users benefit from faster, more reliable services and organizations benefit from the flexibility and agility of the open hybrid cloud.
Challenges in Edge Computing
With the proliferation of devices and services at edge sites, however, there is an increasing amount to manage outside the sphere of traditional operations. Platforms are being extended well beyond the data centre, devices are multiplying and spreading across vast areas, and on-demand applications and services are running in significantly different and distant locations.
This evolving IT landscape is posing new challenges for organizations, including:
Ensuring they have the skills to address evolving edge infrastructure requirements.
Building capabilities that can react with minimal human interaction in a more secure and trusted way.
Effectively scaling at the edge with an ever-increasing number of devices and endpoints to consider.
Of course, while there are difficult challenges to overcome, many of them can be mitigated with edge automation.
Benefits of Edge Automation
Automating operations at the edge can reduce much of the complexity that comes from extending hybrid cloud infrastructure, so you are better able to take advantage of the benefits edge computing provides.
Edge automation can help your organization:
Increase scalability by applying configurations more consistently across your infrastructure and managing edge devices more efficiently.
Boost agility by adapting to changing customer demands and using edge resources only as needed.
Focus on remote operational security and safety by running updates, patches and required maintenance automatically without sending a technician to the site.
Reduce downtime by simplifying network management and reducing the chance of human error.
Improve efficiency by increasing performance with automated analysis, monitoring and alerting.
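The scalability and consistency benefits above boil down to a desired-state loop. The sketch below shows the idempotent reconcile pattern that automation tools implement at scale; the device names and settings are hypothetical.

```python
# Minimal desired-state sketch: each edge device is compared against one
# declared configuration, and only drifted settings are changed.

DESIRED = {"firmware": "2.4.1", "telemetry": "enabled", "log_level": "warn"}

fleet = {
    "sensor-001": {"firmware": "2.4.1", "telemetry": "enabled", "log_level": "debug"},
    "sensor-002": {"firmware": "2.3.0", "telemetry": "disabled", "log_level": "warn"},
}

def reconcile(device: dict, desired: dict) -> list:
    """Apply only the settings that drifted; return what was changed."""
    changes = []
    for setting, value in desired.items():
        if device.get(setting) != value:
            device[setting] = value            # in practice: push to the device
            changes.append(setting)
    return changes

for name, config in fleet.items():
    changed = reconcile(config, DESIRED)
    print(name, "updated:", changed or "nothing (already compliant)")
```

Because the loop only touches drifted settings, running it repeatedly is safe: a second pass changes nothing, which is what makes fleet-wide automation predictable.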
7 Examples of Edge Automation
Here are some industry-specific use cases and examples demonstrating edge automation’s value.
1. Transportation industry
By automating complex manual device configuration processes, transportation companies can efficiently deploy software and application updates to trains, aeroplanes and other moving vehicles with significantly less human intervention. This can save time and help eliminate manual configuration errors, freeing teams to work on more strategic, innovative and valuable projects.
Compared to a manual approach, automating device installation and management is generally safer and more reliable.
2. Retail
Establishing a new retail store and getting its digital services online can be complex, involving configuration management of networked devices, configuration auditing and setting up computing resources across the retail facility. And once a store is set up and open to the public, the IT focus shifts from speed and scale to consistency and reliability.
Edge automation gives retail stores the ability to stand up and maintain new devices more quickly and consistently while reducing manual configuration and update errors.
3. Industry 4.0
From oil and gas refineries to smart factories to supply chains, Industry 4.0 is seeing the integration of technologies such as the internet of things (IoT), cloud computing, analytics and artificial intelligence/machine learning (AI/ML) into industrial production facilities and across operations.
One example of the value of edge automation in Industry 4.0 can be found on the manufacturing floor. There, supported by computer vision algorithms, edge automation can help detect defects in manufactured components on the assembly line. It can also help improve the safety of factory operations by identifying and alerting on hazardous conditions or unpermitted actions.
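As a hedged illustration of the assembly-line case, the sketch below flags components whose measurements fall outside tolerance. Real deployments use trained vision models on camera frames; the part names, readings and threshold here are invented.

```python
# Toy defect check: flag parts that deviate from the reference dimension
# by more than the allowed tolerance.

TOLERANCE = 0.05   # ±5% deviation allowed
REFERENCE = 10.0   # nominal component size in mm

readings = {"part-A1": 10.02, "part-A2": 10.61, "part-A3": 9.97, "part-A4": 9.31}

defects = [
    part for part, size in readings.items()
    if abs(size - REFERENCE) / REFERENCE > TOLERANCE
]
print(defects)  # → ['part-A2', 'part-A4']
```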
4. Telecommunications, media and entertainment
The advantages edge automation can provide to service providers are numerous and include clear improvements to customer experience.
For example, edge automation can turn the data edge devices produce into valuable insights that can be used to improve customer experience, such as automatically resolving connectivity issues.
The delivery of new services can also be streamlined with edge automation. Service providers can send a device to a customer’s home or office that they can simply plug in and run, without the need for a technician on site. Automating service delivery not only improves the customer experience, it creates a more efficient network maintenance process, with the potential of reducing costs.
5. Financial services and insurance
Customers are demanding more personalized financial services and tools that can be accessed from virtually anywhere, including from customers’ mobile devices.
For example, if a bank launches a self-service tool to help their customers find the right offering — such as a new insurance package, a mortgage, or a credit card — edge automation can help that bank scale the new service while also automatically meeting strict industry security standards without impacting the customer experience.
Edge automation can help provide the speed and access that customers want, with the reliability and scalability that financial service providers need.
6. Smart cities
To improve services while increasing efficiency, many municipalities are incorporating edge technologies such as IoT and AI/ML to monitor and respond to issues affecting public safety, citizen satisfaction and environmental sustainability.
Early smart city projects were constrained by the technology of the time, but the rollout of 5G networks (and new communications technologies still to come) not only increases data speeds but also makes it possible to connect more devices. To scale capabilities more effectively, smart cities need to automate edge operations, including data collection, processing, monitoring and alerting.
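The monitoring-and-alerting piece can be sketched as a rolling-average check at the edge site, so a single noisy reading does not trigger an alert. The sensor values and threshold below are invented for illustration.

```python
# Toy edge alerting: average the last few sensor readings and alert only
# on sustained threshold breaches, filtering out transient spikes.
from collections import deque

WINDOW, THRESHOLD = 3, 80.0   # averaging window and alert level (e.g. dB)

def alerts(stream):
    """Yield each index at which the rolling average exceeds the threshold."""
    window = deque(maxlen=WINDOW)
    for i, value in enumerate(stream):
        window.append(value)
        if len(window) == WINDOW and sum(window) / WINDOW > THRESHOLD:
            yield i

noise_levels = [62, 95, 64, 70, 88, 90, 86, 71]
print(list(alerts(noise_levels)))  # → [5, 6, 7]
```

Note that the isolated spike at index 1 never alerts; only the sustained run starting at index 5 does.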
7. Healthcare
Healthcare has long been moving away from hospitals toward remote care options such as outpatient centres, clinics and freestanding emergency rooms, and technologies have evolved and proliferated to support these new environments. Clinical decision-making can also be improved and personalized based on patient data generated from wearables and a variety of other medical devices.
Using automation, edge computing and analytics, clinicians can efficiently convert this flood of new data into valuable insights to help improve patient outcomes while delivering both financial and operational value.
Red Hat Edge
Modern compute platforms powered by Red Hat Edge can help organizations extend their open hybrid cloud to the edge. Red Hat Edge represents Red Hat’s collective drive to integrate edge computing across the open hybrid cloud. Red Hat’s large and growing ecosystem of partners and open methodologies give organizations the flexibility they need to build platforms that can respond to rapidly changing market conditions and create differentiated offerings.
The past few years have shown that enterprises want their applications, data, and resources located wherever it makes the most sense for their business and operating models, which means that automation needs to be able to execute anywhere. Automation across platforms and environments needs a common mechanism and an automation-as-code approach, supported by communities of practice and even automation architects or committees to help define and deliver on the strategy.
Per a recent IDC Market Forecast, Worldwide IT Automation and Configuration Management Software Forecast, 2021–2025[i], “state-of-the-art system management software tools will be needed to keep up with increasing operational complexity, particularly in organizations that cannot add headcount to keep up with requirements.” Managing this overall complexity is no easy feat. As IT and business needs continue to evolve, it’s no longer a question of “if” organizations turn to automation, but “which” automation tool they choose.
This is where the power of open source technology excels; per the same IDC study, “open source–driven innovation helped fuel the growth of newer players and technologies.” With a community-based, consistent approach to automation, subject matter experts write the integrations and share them with other teams, building internal communities of practice that can adapt to change and allowing enterprises to get to the cloud at an accelerated pace.
This is how Red Hat, through Red Hat Ansible Automation Platform, approaches automation, delivering tailored innovation for individual platforms combined with a standard, cross-framework language. With the continued shift to consuming public cloud services and resources, the key is to have a platform that allows you to harness the same skills, language and taxonomy that your teams have been using to drive efficiency and savings in on-premises implementations. This approach enables enterprises to achieve what they want, where they want to, in clouds like Amazon Web Services and Microsoft Azure.
Endorsing agility at the edge
We know that enterprises and their needs do not end with cloud automation. Assets at the edge are now just as important as, and arguably even more difficult to manage than, those in the data center. Edge computing is critical to business, making automating at the edge non-negotiable. Making all of your existing processes and group components available using a tool like Ansible Automation Platform moves edge management from a complex, multi-person task to one where common components and workflows are used with Ansible for management and integration.
Ansible automation becomes the connective tissue in an IT organization, bridging applications and their dependent infrastructure, and maintaining technology at the edge. IT staff can rely on automation to roll out new services at the edge to meet customer needs with speed, scale, and consistency.
Connecting it all through automation
We often refer to Ansible Automation Platform as the glue between people, process and technology. Automation allows for greater emphasis on strengthening the whole system, rather than just the sum of its parts. The benefits automation can bring aren’t always simple to achieve, but the right framework makes it less challenging. When there’s success at a high level, new ways of working become reality, along with resiliency and adaptability. This formula is precisely what organizations need as they face new challenges to drive modernization and transformation.
[i] IDC Market Forecast, Worldwide IT Automation and Configuration Management Software Forecast, 2021–2025, doc #US47434321, February 2021.
While many aspects of edge computing are not new, the overall picture continues to evolve quickly. For example, “edge computing” encompasses the distributed retail store branch systems that have been around for decades. The term has also swallowed all manner of local factory floor and telecommunications provider computing systems, albeit in a more connected and less proprietary fashion than was the historical norm.
However, even if we see echoes of older architectures in certain edge computing deployments, we also see developing edge trends that are genuinely new or at least quite different from what existed previously. These trends are helping IT and business leaders solve problems in industries ranging from telco to automotive, for example, as both sensor data and machine learning data proliferates.
Edge computing trends that should be on your radar
Here, edge experts explore six trends that IT and business leaders should focus on in 2022:
1. Edge workloads get fatter
One big change we are seeing is that there is more computing and more storage out on the edge. Decentralized systems have often existed more to reduce reliance on network links than to perform tasks that could not practically be done centrally, assuming reasonably reliable communications. But that is changing.
Almost by definition, IoT has always involved collecting data. However, what could once be a trickle has now turned into a flood as the data required for machine learning (ML) applications flows in from a multitude of sensors. And even though models are typically trained in a centralized data centre, the ongoing application of those models is usually pushed out to the edge of the network. This limits network bandwidth requirements and allows for rapid local action, such as shutting down a machine in response to anomalous sensor readings. The goal is to deliver insights and take action at the moment they’re needed.
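As a hedged sketch of that pattern (the function, thresholds, and readings here are illustrative, not from any particular product), an edge device might apply a centrally produced rule locally so it can act without a round trip to the data centre:

```python
# Hypothetical edge-side check: the model (here reduced to a simple
# threshold rule) is produced centrally, but evaluated locally so the
# device can shut a machine down immediately on anomalous readings.

def should_shut_down(readings, threshold=90.0, window=3):
    """True if the last `window` sensor readings all exceed `threshold`."""
    recent = readings[-window:]
    return len(recent) == window and all(r > threshold for r in recent)

# A single spike alone does not trip the shutdown...
print(should_shut_down([71.2, 93.5, 72.0, 70.8]))  # False
# ...but a sustained anomaly triggers immediate local action.
print(should_shut_down([72.0, 91.1, 94.7, 92.3]))  # True
```

Only the final decision runs at the edge; only the readings worth escalating need to cross the network.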
2. RISC-V gains ground
Of course, workloads that are both data- and compute-intensive need hardware on which to run. The specifics vary depending upon the application and the tradeoffs required between performance, power, cost, and so forth. Traditionally, the choice has come down to either something custom, ARM, or x86. None are fully open, although ARM and x86 have developed a large ecosystem of supporting hardware and software over time, largely driven by the lead processor component designers.
But RISC-V is a new and intriguing open instruction set architecture.
Why intriguing? Here’s how Red Hat Global Emerging Technology Evangelist Yan Fisher puts it: “The unique aspect of RISC-V is that its design process and the specification are truly open. The design reflects the community’s decisions based on collective experience and research.”
This open approach, and an active ecosystem to go along with it, is already helping to drive RISC-V design wins across a broad range of industries. Calista Redmond, CEO of RISC-V International, observes that: “With the shift to edge computing, we are seeing a massive investment in RISC-V across the ecosystem, from multinational companies like Alibaba, Andes Technology, and NXP to startups like SiFive, Esperanto Technologies, and GreenWaves Technologies designing innovative edge-AI RISC-V solutions.”
3. Virtual Radio Access Networks (vRAN) become an increasingly important edge use case
A radio access network is responsible for enabling and connecting devices such as smartphones or internet of things (IoT) devices to a mobile network. As part of 5G deployments, carriers are shifting to a more flexible vRAN approach whereby the high-level logical RAN components are disaggregated by decoupling hardware and software, as well as using cloud technology for automated deployment and scaling and workload placement.
Hanen Garcia, Red Hat Telco Solutions Manager, and Ishu Verma, Red Hat Emerging Technology Evangelist, note that “One study indicates deployment of virtual RAN (vRAN)/Open RAN (oRAN) solutions realize network TCO savings of up to 44% compared to traditional distributed/centralized RAN configurations.” They add that: “Through this modernization, communications service providers (CSPs) can simplify network operations and improve flexibility, availability, and efficiency—all while serving an increasing number of use cases. Cloud-native and container-based RAN solutions provide lower costs, improved ease of upgrades and modifications, ability to scale horizontally, and with less vendor lock-in than proprietary or VM-based solutions.”
4. Scale drives operational approaches
Many aspects of an edge-computing architecture can be different from one that’s implemented solely within the walls of a data centre. Devices and computers may have weak physical security and no IT staff on-site. Network connectivity may be unreliable. Good bandwidth and low latencies aren’t a given. But many of the most pressing challenges relate to scale; there may be thousands (or more) network endpoints.
Kris Murphy, Senior Principal Software Engineer at Red Hat, identifies four primary steps you must take in order to deal with scale: “Standardize ruthlessly, minimize operational ‘surface area,’ pull whenever possible over push, and automate the small things.”
For example, she recommends doing transactional, which is to say atomic, updates so that a system can’t end up only partially updated and therefore in an ill-defined state. When updating, she also argues that it’s a good practice for endpoints to pull updates because “egress connectivity is more likely available.” One should also take care to limit peak loads by not doing all updates at the same time.
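To illustrate the “transactional, which is to say atomic” part of that advice, here is a minimal Python sketch (the paths and function names are hypothetical): write the new artifact to a temporary file, then rename it into place, so a crash mid-update leaves either the old version or the new one, never a half-written state.

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Write `data` to `path` atomically: all-or-nothing, never partial."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure the bytes are on disk before the swap
        os.replace(tmp, path)     # atomic rename into place (POSIX guarantee)
    except BaseException:
        os.unlink(tmp)            # clean up the partial temp file
        raise

atomic_write("edge-config.bin", b"version-2")
```

The pull-based endpoints Murphy describes could apply fetched updates this way, and adding per-device jitter to each fetch time spreads peak load across the fleet instead of concentrating it.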
5. Edge computing needs attestation
With resources at the edge tight, capabilities that require little to no local resources are the pragmatic options to consider. Furthermore, any approach needs to be highly scalable; otherwise, its uses and benefits become extremely limited. One option that stands out is the Keylime project. As Ben Fischer, Red Hat Emerging Technology Evangelist, describes it: “Technologies like Keylime, which can verify that computing devices boot up and remain in a trusted state of operation at scale should be considered for broad deployment, especially for resource-constrained environments.”
Keylime provides remote boot and runtime attestation using Integrity Measurement Architecture (IMA) and leverages Trusted Platform Modules (TPMs), which are common to most laptop, desktop, and server motherboards. If no hardware TPM is available, a virtual TPM (vTPM) can be loaded to provide the requisite functionality. Boot and runtime attestation is a means to verify that the edge device boots to a known trusted state and maintains that state while running. In other words, if something unexpected happens, such as a rogue process, the measured state diverges from the expected one; the mismatch shows up in the measurement, and the edge device is taken offline because it has entered an untrusted state. The device can then be investigated, remediated and put back into service in a trusted state.
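The core idea can be reduced to a toy comparison (this illustrates the concept only; it is not Keylime's actual API, and the component names and values are made up): a verifier holds a "golden" set of expected measurements and distrusts any device whose reported measurements diverge.

```python
import hashlib

# Hypothetical golden measurements a verifier would hold for a device class.
GOLDEN = {
    "bootloader": hashlib.sha256(b"bootloader-v1").hexdigest(),
    "kernel": hashlib.sha256(b"kernel-v1").hexdigest(),
}

def is_trusted(reported: dict) -> bool:
    """True only if every measured component matches its golden value."""
    return reported == GOLDEN

clean = dict(GOLDEN)
tampered = dict(GOLDEN, kernel=hashlib.sha256(b"rogue-process").hexdigest())

print(is_trusted(clean))     # True  -> device stays in service
print(is_trusted(tampered))  # False -> device taken offline for remediation
```

In a real deployment the measurements come from the TPM's signed quote rather than a plain dictionary, which is what makes the comparison trustworthy even on a compromised device.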
6. Confidential Computing becomes more important at the edge
Security at the edge requires broad preparation. The availability of resources such as network connectivity, electricity, staff, equipment and functionality varies widely, but is far less than what would be available in a data centre. These limited resources constrain the capabilities for ensuring availability and security. Besides encrypting local storage and connections to more centralized systems, confidential computing offers the ability to encrypt data while it is in use by the edge computing device.
This protects both the data being processed and the software processing the data from being captured or manipulated. Fischer argues that “confidential computing on edge computing devices will become a foundational security technology for computing at the edge, due to the limited edge resources.”
According to the Confidential Computing Consortium’s (CCC) report by the Everest group, Confidential Computing – The Next Frontier in Data Security, “Confidential computing in a distributed edge network can also help realize new efficiencies without affecting data or IP privacy by building a secure foundation to scale analytics at the edge without compromising data security.” Additionally, confidential computing “ensures only authorized commands and code are executed by edge and IoT devices. Use of confidential computing at the IoT and edge devices and back end helps control critical infrastructure by preventing tampering with code or data being communicated across interfaces.”
Confidential computing applications at the edge range from autonomous vehicles to collecting sensitive information.
Diverse applications across industries
The diversity of these edge computing trends reflects both the diversity and scale of edge workloads. There are some common threads – multiple physical footprints, the use of cloud-native and container technologies, an increasing use of machine learning. However, telco applications often have little in common with industrial IoT use cases, which in turn differ from those in the automotive industry. But whatever industry you look at, you’ll find interesting things happening at the edge in 2022.
Edge computing gives life to the transformative use cases that businesses are dreaming up today and brings real-time decision making to last-mile locales. This can include a far-flung factory, a train roaring down the tracks, someone’s connected home, a car speeding down the highway, or even space. Who thought we’d be running Kubernetes in space?
This shows that edge computing can transform the way we live, and we are doing it right now.
Why Collaboration Is Critical
Edge technologies are blending the digital and physical worlds in a new way, and that combination is resonating at a human level. This human resonance might sound like an aspirational achievement, but it is already here. A great example is when we used AR/VR to improve safety on the factory floor.
Continued collaboration, however, is necessary to keep enabling breakthrough successes. Across industries and organizations, we are all highly dependent on one another. Thinking about the telecommunications and industrial sectors, in particular, there is a mutually supportive, symbiotic relationship between these industries—5G development cannot be successful without industrial use cases, which, in turn, are based on telco technologies.
However, numerous challenges remain: reducing network complexity, maintaining security, improving agility, and ensuring a vibrant ecosystem. The only way to address and solve them is by tapping into the collective wisdom of the community.
With open source, we can unify and empower communities on a broad scale. The open source ecosystem brings people together to focus on a common problem to solve with software. That shared purpose can turn isolated efforts into collective ones so that changes are industry-wide and reflect a wide range of needs and values.
The collaboration that open source makes possible continues to ignite tremendous change and alter our future in so many ways, making it the innovation engine for industries.
If we collaborate on 5G and edge in this manner, nascent technologies could become exciting common foundations in the same way that Linux and Kubernetes have because when we work together, the only limit to these possibilities is our imagination.
From Maps to Apps and Much More
Do you remember having to use a paper-based map to figure out driving directions? Flash forward to today: Look at the applications we take for granted on our phones or in our homes that allow us to change our driving route in real-time to avoid traffic, or to monitor and grant access to our front doors—to the point that these have shaped how we interact with our environments and each other. Yet not too long ago, many of these things were unimaginable. We barely had cloud technology, we were in the transition from 3G to 4G, and smartphones were new.
But there was important work being done by lots of people who were improving upon the core technologies. The convergence of three technology trends, as it turns out, unlocked a hugely disruptive opportunity: a cloud-native, mobile-device-enabled transportation service that picked you up wherever you were and took you wherever you wanted to go.
This opportunity was only possible because each trend built on the others to create a truly novel offering. Without one of these trends, the applications from the ride-sharing apps of the world would not have been the same or as disruptive. Imagine yourself scrambling to find a WiFi hotspot on the street corner, whipping out your laptop outside a restaurant while standing in the rain, or starting your business by first constructing a massive data centre. The convergence of smartphones, 4G networks, and cloud computing has enabled a new world.
Today we are creating the next set of technologies that will become the things so embedded in our lives and so indispensable to our daily habits that we will wonder how we ever got by without them. Are you ready to be wearing clothes with sensors in them that tell you how healthy you are?
The possibilities with edge technologies are equally as exciting. It starts with the marriage of the digital world with the physical world. Adding in pervasive connectivity—leveraging a common 5G and edge platform—we can transform how operational technologies interact with the physical world and that changes everything.
The Future Is Now
We are creating this new world that is hard to imagine, yet it is not so foreign because we have seen how this story has played out before. Expect these new technologies to have profound implications for humanity—in our daily lives, how we interact with one another, and the social fabric of our world.
All of that cannot happen without collaboration.
We have only to look at how open source has empowered collaboration and how working together has helped people across organizations and industries build more robust, shared platforms more quickly and differentiate on top of them—with apps and capabilities built on the foundation of Kubernetes and Linux, for example.
2020’s gone and it won’t be missed. For all of the chaos, confusion and change the previous year brought, it helped illuminate a critical facet of Red Hat, our associates, our partners, our customers and our communities. It showed that we are resilient. Not only did we weather it as a company, we helped those around us stand firm through the storm. That’s something to be proud of, and I know that as CEO of Red Hat, I’m thankful at how we as a business, as a pillar of the open source community and as a global organization kept a steady hand throughout.
Red Hat was born out of community. It’s at the center of everything we do. When faced with uncertainty and when we see others in need, that’s when we pull together and show our mettle. Throughout the past year, Red Hatters showed a tremendous capacity for fortitude and humanity. When I first took over the role of CEO, I made the comment that I wanted every Red Hatter who was here at that point to still be here in a year. And I think we’ve held true to that.
At the time, that conversation centered on finding work-life balance when the lines became blurred. Without taking care of our personal lives and mental health, we’re not able to meet the needs of our customers. As associates became school teachers and caretakers, dealt with drastically reduced social interactions and grieved the loss of normalcy, they still served customers and helped them be successful. We didn’t just hunker down and wait for the storm to pass; we still moved forward and made ourselves available to help others.
No time to slow down
While the COVID-19 pandemic stalled many industries, the software industry raced forward. Technologies like cloud computing and automation became more important than ever. They are now firmly in the category of must-have, instead of nice-to-have. As a company, we turned our attention to products and services that our customers need to support remote work, expand digital services, scale to meet demand, become more resilient and keep innovating. I attribute our ability to continue to show strong growth throughout the year to this strategy and I’m so proud of the team for keeping the momentum going.
With our biggest announcements last year, you’ll no doubt sense a theme – making sure that our customers can develop and deploy any app, anywhere. They want the choice and flexibility to use the innovations and technologies on a platform that makes sense for the job at hand, and we’re making sure they can do just that. Red Hat OpenShift is the industry’s leading enterprise Kubernetes platform and highlights a future where containers and virtualization, managed consistently across the open hybrid cloud, are helping customers maintain operations while still bringing new products and services to market faster.
We introduced Red Hat Advanced Cluster Management for Kubernetes, a new management solution designed to help organizations exert more consistent control over their Kubernetes clusters across the hybrid cloud — from bare-metal to major public cloud providers and everything in between.
Once they can deploy anywhere, they need to be able to bring those mixed workloads together, and that’s where OpenShift Virtualization comes in. An integrated component of Red Hat OpenShift, it gives customers the ability to manage traditional workloads alongside cloud-native services, letting them prepare for the future while retaining existing investments. This helps to break down technology silos that can slow innovation and impact the customer experience.
For those wanting an increased level of support from us, OpenShift Dedicated is a fully managed service of Red Hat OpenShift on AWS, Google Cloud Platform and Microsoft Azure. We continue to enhance and refine the capabilities of this managed offering, providing an option for organizations looking to reduce the operational complexity of infrastructure management, but still get all the benefits of enterprise Kubernetes. This enables their IT teams to focus on building and scaling the next-generation of applications, rather than keeping infrastructure lit up.
One of the benefits of open source is our close connection to the innovation born in open source communities, where new ideas and concepts emerge and incubate. This is a direct link to IT’s future, enabling us to more readily see trends as they evolve. It’s this connection that enabled us to push the envelope in open hybrid cloud computing, and it’s now providing our launchpad for the next wave: edge computing. Edge brings its own challenges for administrators and developers alike, so we’ve delivered new capabilities for Red Hat Enterprise Linux and Red Hat OpenShift to help bring edge computing into hybrid cloud deployments.
Coming together
The channel is what made Red Hat. Without our partner ecosystem, Red Hat would be a very different company. We have been successful because of our independence and our work across a broad spectrum of cloud and service providers, including Amazon, Google, IBM and Microsoft. As the saying goes: “actions speak louder than words.” Our neutrality is something that can’t change and you can see it in some of the moves we made this year.
Red Hat and Microsoft have been working to co-develop hybrid cloud solutions for years, which ultimately led to Azure Red Hat OpenShift, the industry’s first jointly-engineered, managed and supported OpenShift service on a leading public cloud. This year we continued our drive as a leading enterprise Kubernetes service on the public cloud with Azure Red Hat OpenShift on OpenShift 4, bringing the power of Kubernetes Operators to Azure along with the flexibility of Red Hat Enterprise Linux CoreOS.
As I’ve said, open source is about choice and about meeting customers where they are, on whichever cloud platform they prefer. With that in mind, we continued our work across the public cloud with Red Hat OpenShift Service on AWS, a jointly-managed and jointly-supported enterprise Kubernetes service on AWS. Red Hat OpenShift is now the common Kubernetes denominator on two of the world’s largest clouds but, most importantly, it’s now easier for our customers to consume OpenShift where it makes most sense for them without sacrificing operational flexibility or service levels.
We’re also seeing the promise of our acquisition by IBM come to fruition, as we scale and work together for powerful world-spanning solutions. Schlumberger represents one of these moments. Through this collaboration with IBM, the initiative will support Schlumberger’s business and give its associates global access to its leading exploration and production cloud-based environment and cognitive applications using IBM’s hybrid cloud technology, built on Red Hat OpenShift.
On the horizon
Just a month in and we’ve already set the tone for the year. All roads, whether it’s through edge computing, serverless or Kubernetes, lead to open hybrid cloud. That’s what we’ve worked to build and where our focus continues to be. We’ve been talking about it for nearly a decade because it’s not just another trend; it’s an enterprise imperative. It’s through the hybrid cloud that we help our customers solve dynamic challenges and keep Red Hat in innovation’s vanguard.
We announced our intent to acquire StackRox, a leader and innovator in Kubernetes-native security. Once the transaction closes, this move will allow us to enhance security for cloud-native workloads by expanding and refining the Kubernetes-native controls already present in OpenShift while shifting security into the container build and CI/CD phases.
Having a seamless integration between our sales and services strategy and our technology vision is critical to our success, and it calls for the right leader. For nearly a decade, Arun Oberoi has led the team and transformed our go-to-market approach to match our expanding open hybrid cloud portfolio, through strategic acquisitions and new alliances. He will retire later this year, and Larry Stack will step into the role of executive vice president of Global Sales and Services. What I appreciate most about Larry is that he embraces the Red Hat culture and always keeps the customer as the focus. There is a huge opportunity in front of us as we keep scaling, and Larry’s strong experience and strategic thinking are going to help us capitalize on it.
Just because we made it out of 2020 doesn’t mean we’re back to business as usual. The pandemic is still impacting the world and organizations are still feeling the effects. The challenges aren’t going away, but we’ve shown resilience, and that needs to be a trait that we keep as we move through the year. While 2021 holds many unknowns, one thing that is not unknown is our path forward.
The COVID-19 global situation is a big one. It changed so many things for us. It changed the way we live, and definitely the way we work and stay productive. Through it all, tech companies all over the world are becoming the backbones of plenty of organisations. The ‘work from home’ trend is booming in a way never before seen.
For a company like Red Hat, transforming from an office-based workspace to a home-based one is not necessarily complex. It may not be as straightforward as they make it seem, though. There are still plenty of considerations for them and their clients when it comes to quarantine procedures and working procedures.
What happens to their clients? What happens to the ongoing projects with the Open Innovation Labs? What happens to Red Hat globally? What do we do, as companies in this age?
In this Tech & Tonic special, we sit down with Eric Quah of Red Hat to find out a little bit more about their efforts to normalise during the COVID-19 crisis. Of course, we want to know how we can work a little more normally at this time as well. If we learnt a thing or two in this session, we believe you will too.