Tag Archives: Machine Learning

What Might the Next Decade Bring for Computing?

New technologies can take many forms. Often, they come from generally straightforward, incremental product advances over the course of years; think the Complementary Metal-Oxide-Semiconductor (CMOS) process shrinks that underpinned many of the advances in computing over the past decades. Not easy, but relatively predictable from a sufficiently high-level view.

Other shifts are less straightforward to predict. Even if a technology is not completely novel, it may require the right conditions and advances to come together so it can flourish in the mainstream. Both server virtualization and containerization fall into this category.

What’s next? Someone once said that predictions are hard, especially about the future. But here are some areas that Red Hat has been keeping an eye on and that you should likely have on your radar as well. This is hardly a comprehensive list and it may include some surprises, but it is a combination of both early stage and more fleshed-out developments on the horizon. The first few are macro trends that pervade many different aspects of computing. Others are more specific to hardware and software computing infrastructure.

Artificial intelligence/machine learning (AI/ML)

On the one hand, AI/ML belongs on any list about where computing is headed. Whether coding tools, self-tuning infrastructure, or improved observability of systems, AI/ML is clearly a critical part of the computing landscape going forward.

What’s harder to predict is exactly which forms and applications of AI will deliver compelling business value. Many will be compelling only in narrow domains, and some will likely remain almost good enough over a lengthy time horizon.

Much of the success of AI to date has rested on training deep neural networks (NNs) of increasing size (as measured by the number of weights and parameters) on increasingly large datasets using backpropagation, and supported by the right sort of fast hardware optimized for linear algebra operations—graphics processing units (GPUs) in particular. Large Language Models (LLMs) are one prominent, relatively recent example.
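The training loop described above can be sketched in miniature. The following is a toy illustration, not any production system: a single sigmoid neuron fit to four made-up data points by gradient descent. Deep networks apply the same idea, backpropagating gradients, across millions or billions of weights, which is what makes GPU-class hardware for linear algebra so important.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: (input, class label). Inputs below ~1.5 are class 0.
data = [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 1)]
w, b = 0.0, 0.0  # the trainable parameters ("weights")

for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass: prediction
        grad = p - y             # gradient of log-loss w.r.t. the logit
        w -= 0.1 * grad * x      # backward pass: nudge parameters downhill
        b -= 0.1 * grad
```

After training, the neuron separates the two classes; scaling that loop up in width, depth and data volume is, at heart, what the large training runs behind LLMs do.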

There have been many clear wins, but AI has struggled with more generalized systems that interface with an unconstrained physical world—as in the case of autonomous driving, for example. There are also regulatory and legal concerns relating to explainability, bias and even overall economic impact. Some experts also wonder whether progress in the many areas of cognitive science that lie outside the direct focus of machine learning may (or may not) be needed for AI to handle many types of applications.

What’s certain is that we will be surprised.

Automation

In a sense, automation is a class of application to which AI brings more sophisticated capabilities. Red Hat Ansible Lightspeed with IBM watsonx Code Assistant, for example, is a recent generative AI service designed by and for Ansible automators, operators and developers.

Automation is increasingly necessary because hardware and software stacks are getting more complex. What’s less obvious is how improved observability tooling, and AI-powered automation tools that make use of that more granular data, will play out in detail.

At the least, it will lead us to think about questions such as: Where are the big wins in dynamic automated system tuning that will most improve IT infrastructure efficiency? What’s the scope of the automated environment? How much autonomy will we be prepared to give to the automation, and what circuit breakers and fallbacks will be considered best practice?
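One concrete shape a "circuit breaker" for automation can take is sketched below. This is an illustrative toy, with invented names and thresholds, not a reference to any particular product: if automated remediation fails repeatedly, the breaker opens and the system falls back to human escalation rather than retrying indefinitely.

```python
# Minimal circuit-breaker sketch for automated remediation (illustrative).
class RemediationBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def run(self, action, fallback):
        if self.failures >= self.max_failures:
            return fallback()        # breaker open: hand off to humans
        try:
            result = action()
            self.failures = 0        # success resets the breaker
            return result
        except Exception:
            self.failures += 1       # count the failure, report nothing done
            return None

breaker = RemediationBreaker(max_failures=2)

def flaky_restart():                 # hypothetical remediation that keeps failing
    raise RuntimeError("service failed to restart")

def page_oncall():                   # hypothetical fallback
    return "escalated to on-call engineer"

outcomes = [breaker.run(flaky_restart, page_oncall) for _ in range(3)]
```

The two failed attempts trip the breaker, so the third call escalates instead of retrying—one simple answer to the "how much autonomy" question above.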

Over time, we’ve reduced manual human intervention in processes such as CI/CD pipelines. But we’ve done so in the context of evolving best practices in concert with the increased automation.

Security

Security is a broad and deep topic (and one of deep concern across the industry). It encompasses zero trust, software supply chains, digital sovereignty and yes, AI—both as a defensive tool and an offensive weapon. But one particular topic is worth highlighting here.

Confidential computing is a security technology that protects data in use, meaning that it is protected while it is being processed. This is in contrast to traditional encryption technologies, which protect data at rest (when it is stored) and data in transit (when it is being transmitted over a network).

Confidential computing works by using a variety of techniques to isolate data within a protected environment, such as a trusted execution environment (TEE) or a secure enclave. It’s of particular interest when running sensitive workloads in an environment over which you don’t have full control, such as a public cloud. It’s relatively new technology but is consistent with an overall trend towards more security controls, not fewer.
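The gap that confidential computing closes can be shown with a toy sketch. A keyed XOR stands in for real encryption here (do not use XOR in practice): data can be stored and transmitted in encrypted form, but a conventional server must decrypt it into ordinary memory to compute on it. That in-use plaintext window is exactly what a TEE or secure enclave shields from the host.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher' used only to illustrate the at-rest/in-use split."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)
record = b"salary=90000"

at_rest = xor_cipher(record, key)   # protected while stored on disk
in_transit = at_rest                # protected while moving over the network

# To process the record, a conventional system must expose plaintext in memory,
# where the host OS (or a compromised hypervisor) could observe it:
in_use = xor_cipher(in_transit, key)
assert in_use == record
```

Confidential computing keeps that last step inside hardware-isolated memory, so even the cloud provider operating the machine cannot read it.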

RISC-V

While there are examples of open hardware designs, such as the Open Compute Project, it would be hard to point to a successful open processor relevant to server hardware.

However, major silicon vendors and cloud providers are exploring and adopting the RISC-V free-to-license and open processor instruction set architecture (ISA). It follows a different approach from past open processor efforts. For one thing, it was open source from the beginning and is not tied to any single vendor. For another, it was designed to be extensible and implementation-agnostic. It allows for the development of new embedded technologies implemented upon FPGAs as well as the manufacture of microcontrollers, microprocessors and specialized data processing units (DPUs).

Its impact is more nascent in the server space, but it has been gaining momentum. The architecture has also seen considerable standardization work to balance the flexibility of extensions against the fragmentation they can bring. RISC-V profiles are standardized subsets of the RISC-V ISA: each gives hardware implementers and software developers a common interface built around a defined set of extensions, with a bounded amount of flexibility, to support well-defined categories of systems and applications.
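What a profile buys can be sketched as a simple set check: a profile names a mandatory extension set that software may assume, and an implementation complies if it provides at least that set. The extension list below is a simplified example for illustration, not the contents of any official RVA profile.

```python
# Illustrative sketch: profile compliance as a subset test.
# The mandatory set here is an invented example, not an official profile.
PROFILE_MANDATORY = {
    "example-application-profile": {"M", "A", "F", "D", "C", "Zicsr"},
}

def satisfies(profile: str, implemented: set) -> bool:
    """True if the implementation provides every mandatory extension."""
    return PROFILE_MANDATORY[profile] <= implemented

# Hypothetical cores:
app_core = {"M", "A", "F", "D", "C", "Zicsr", "V"}  # extras beyond the profile are fine
microcontroller = {"M", "C"}                        # too small for this profile
```

Software targeting the profile can then run on any compliant core, which is how profiles contain the fragmentation that unconstrained extensions would otherwise invite.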

Platform software

Perhaps one of the most intriguing questions is what happens at the lower levels of the server infrastructure software stack—roughly the operating system on a single shared memory server and the software that orchestrates workloads across many of these servers connected over a network.

It is probably easiest to start with what is unlikely to change in fundamental ways over the next decade. Linux has been around for more than 30 years; Unix more than 50, with many basic concepts dating to Multics about ten years prior.

That is a long time in the computer business. But it also argues for the overall soundness and adaptability of the basic approach taken by most modern operating systems—and the ability to evolve Linux when changes have been needed. That adaptation will continue, for example through reducing overheads by selectively offloading workloads to FPGAs and other devices, such as edge servers. There are also opportunities to reduce transition overheads for performance-critical applications; the Unikernel Linux project—a joint effort involving professors, PhD students and engineers at the Boston University-based Red Hat Collaboratory—demonstrates one direction such optimizations could take.

More speculative is the form that collections of computing resources might take and how they will be managed. Over the past few decades, these resources primarily took the form of masses of x86 servers. Some specialized hardware is used for networking, storage and other functions, but CMOS process shrinks meant that for the most part, it was easier, cheaper and faster to just wait for the next x86 generation than to buy some unproven specialized design.

However, with performance gains associated with general-purpose process shrinks decelerating—and maybe even petering out at some point—specialized hardware that more efficiently meets the needs of specific workload types starts to look more attractive. The use of GPUs for ML workloads is probably the most obvious example, but is not the only one.

The challenge is that developers are mostly not increasing in number or skill. Better development tools can help to some degree, but it will also become more important to abstract away the complexity of more specialized and more diverse hardware.

What might this look like? A new abstraction/virtualization layer? An evolution of Kubernetes to better understand hardware and cloud differences, the relationship between components and how to intelligently match relatively generic code to the most appropriate hardware or cloud? Or will we see something else that introduces completely new concepts?
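One possible answer can be sketched as a toy placement function that matches a workload's declared needs to node capabilities. Kubernetes does something broadly similar today with node selectors and device plugins; all names and numbers below are illustrative, not any real scheduler's API.

```python
# Toy hardware-aware placement sketch (illustrative names and values).
nodes = [
    {"name": "x86-general", "features": {"x86_64"}, "free_mem_gb": 64},
    {"name": "gpu-node", "features": {"x86_64", "gpu"}, "free_mem_gb": 32},
    {"name": "edge-arm", "features": {"arm64"}, "free_mem_gb": 4},
]

def place(workload):
    """Return the name of a node that satisfies the workload, or None."""
    candidates = [
        n for n in nodes
        if workload["needs"] <= n["features"]       # every needed feature present
        and n["free_mem_gb"] >= workload["mem_gb"]  # enough free memory
    ]
    if not candidates:
        return None
    # Prefer the node with the fewest unused special features, so generic
    # work does not occupy scarce accelerators.
    return min(candidates, key=lambda n: len(n["features"] - workload["needs"]))["name"]

training_job = {"needs": {"gpu"}, "mem_gb": 16}
web_app = {"needs": {"x86_64"}, "mem_gb": 8}
```

The interesting open question is how much of this matching a future platform could do automatically, from relatively generic code, rather than from hand-written requirement lists like these.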

Wrap up

What we can say about these predictions is that they’re probably a mixed bag. Some promising technologies may fizzle a bit. Others will bring major and generally unexpected changes in their wake, and something may pop onto the field at a time and from a place where we least expect it.

Accelerating AI-driven outcomes with Powerful Supercomputing Solutions

This article is contributed by Mak Chin Wah, Country Manager, Malaysia and General Manager, Telecoms Systems Business, South Asia, Dell Technologies

As artificial intelligence (AI) technology continues to evolve and grow in capability, it is becoming a presence in every aspect of our lives. One needs to look no further than voice assistants, navigation apps like Waze, or rideshare apps such as Grab, which Malaysians are familiar with.

From machine learning and deep learning algorithms that automate manufacturing, natural language processing, video analytics and more, to the use of digital twins that virtually simulate, predict and inform decisions based on real-world conditions, AI helps solve critical modern-life challenges to benefit humanity. In fact, we have digital twin technology to thank for assisting in the bioengineering of vaccines to fight COVID-19.

AI is changing not only what we do but also how we do it — faster and more efficiently.

Advancing Human Progress

For companies like Dell Technologies that are committed to advancing human progress, AI will play a big part in developing solutions to the pressing issues of the 21st century. The 2020s, in particular, are ushering in a fully data-driven period in which AI will assist organisations and industries of all sizes to accelerate intelligent outcomes.

Organisations can harness their AI endeavours through high-performance computing (HPC) infrastructure solutions that reduce risk, improve processing speed and deliver deeper insights. By extracting value through AI from the massive amounts of data generated across the entire IT landscape — from the core to the cloud — businesses can better tackle challenges and make discoveries to advance large-scale, global progress.

Continuing to Innovate

Through transformative innovation, customers can derive the insights needed to change the course of discovery. For example, Dell Technologies equipped Monash University Malaysia with top-of-the-line HPC and AI solutions[i] to help accelerate the university’s research and development computing capabilities at its Sunway City campus in Selangor, enhancing its ability to solve complex problems across its significant research portfolio.

Financial services, life sciences and oil and gas exploration are just a few of the other computation-intensive applications where enhanced servers will make a difference in achieving meaningful results, for humankind and the planet.

At the heart of AI technology are essential building blocks and solutions that power these activities. For example, Dell’s existing line of PowerEdge servers has already contributed to transformational, life‑changing projects, and will continue to power human progress in this generation and the next.

The most demanding AI projects require servers that offer distinct advantages: purpose-built to deliver higher performance and even more powerful supercomputing results, yet engineered for the coming generation to support the real-time processing requirements and challenges of AI applications with ease.

In addition to helping deploy more secure and better-managed infrastructure for complex AI operations at mind-boggling modelling speeds, these transformative servers will help meet organisations’ biggest concerns in productivity, efficiency and sustainability, while cutting costs and conserving energy.

Transforming Business and Life

While organisations are in different stages with respect to their adoption of AI, the transformational impact on business and life itself can no longer be ignored. Human progress will depend on the ability of AI to make communication easier, personalise content delivery, advance medical research, diagnosis and treatment, track potential pandemics, revolutionise education and implement digital manufacturing.

In Malaysia, AI is progressively being recognised as a new general-purpose technology that will bring about economic transformation on the scale of the Industrial Revolution, yet adoption of Industry 4.0 remains sluggish, with only 15% to 20% of businesses having truly embraced it. The government, on the other hand, is taking this emerging technology seriously, having set out frameworks for the incorporation of AI by numerous sectors of the economy. These comprise the Malaysia Artificial Intelligence Roadmap 2021-2025 (AI-Rmap) and the Malaysian Digital Economy Blueprint (MDEB), spearheaded by the MyDIGITAL Corporation and the Economic Planning Unit.

Moving Forward

With servers and HPC at the heart of AI, modern infrastructure needs to match the unique requirements of increasingly complex and widely distributed workloads. Regardless of where a business is on the AI journey, the key to optimising outcomes is having the right infrastructure in place, ready to seamlessly scale as the business grows and positioned to take on the unexpected, unknown challenges of the future. To do that requires having the expertise – or a trusted partner that does – to help at any and every stage, from planning through to implementation, to make smart server decisions that will unlock the organisation’s data capital and support AI efforts to move human progress forward.


[i] Based on “Dell Technologies helps Monash University Malaysia enhance its R&D capabilities with HPC and AI solutions”, media alert, November 2022.

Adobe Firefly, the Next-Generation AI Made for Creative Use

AI (Artificial Intelligence) generated graphics are not a new thing. You have things like OpenArt and Hotpot these days where you can just type in keywords for the image you want and let the engine generate art for your use. Even before AI-generated graphics, though, the implementation of AI within the creative industry was nothing new. NVIDIA has used its own AI engine to write an entire symphony, and even to create 3D environments using its ray-tracing engines. Adobe, too, has something they call Sensei. The AI tool is implemented across their creative suite to understand and recognise objects better, fill in details where needed more naturally, and even edit videos, images, or texts quickly and efficiently. Now, they have Firefly.

Firefly is not a separate AI system from Adobe’s Sensei. It is part of a larger Sensei generative AI effort, alongside technologies like Neural Filters, Content-Aware Fill, Attribution AI and Liquid Mode implemented across several Adobe platforms. Unlike those platform-specific implementations, though, Adobe is looking to put Firefly to work across a number of platforms spanning their Creative Cloud, Document Cloud, Experience Cloud, and even their Adobe Express platform.

So, what is Adobe Firefly, we hear you ask? It is, essentially, Adobe’s take on what a creative generative AI should be. They are not limiting Firefly to just image generation, modification and correction. It is designed to let content creators of any sort create even more without needing to spend hundreds of hours learning a new skill. All they need to do is adopt Firefly into their workflow, and they will get content they have never been able to create before, be it images, audio, vectors, text, videos or even 3D materials. You can have different content every time, too; the possibilities, according to Adobe, are endless.

What makes Adobe’s Firefly so powerful is the entirety of Adobe’s experience and database behind it. Adobe Stock’s images and assets alone form a huge library for the AI implementation to draw on. The implementation can also use openly licensed assets and public-domain content when generating output. This, in turn, helps prevent IP infringement and should help users avoid future litigation.

As Firefly launches in its beta state, it will only be available as an image and text generation tool for Adobe Express, Adobe Experience Manager, Adobe Photoshop, and Adobe Illustrator. Adobe plans to bring Firefly to the rest of its platforms, where relevant, in the future. They are also pushing for more open standards in asset verification, which will eventually include proper categorisation and tagging of AI-generated content. Adobe also plans to make the Firefly ecosystem more open, with APIs for users and customers to integrate the tool into their existing workflows. For more information on Adobe’s latest generative AI, you can visit their website.

Edge Computing Benefits and Use Cases

From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed and analyzed. 

At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end-users. Where data has traditionally lived in the data centre or cloud, there are benefits and innovations that can be realized by processing the data these devices generate closer to where it is produced.

This is where edge computing comes in.

4 benefits of edge computing

As the number of computing devices has grown, our networks simply haven’t kept pace with the demand, causing applications to be slower and/or more expensive to host centrally.

Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible.

1. Improve performance

When applications and data are hosted on centralized data centres and accessed via the internet, speed and performance can suffer from slow network connections. By moving things out to the edge, network-related performance and availability issues are reduced, although not entirely eliminated.

2. Place applications where they make the most sense

By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations that may have intermittent connectivity, including geographically remote offices and on vehicles such as ships, trains and aeroplanes.

3. Simplify meeting regulatory and compliance requirements

Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in data centres or the cloud.

With edge computing, however, data can be collected, stored, processed, managed and even scrubbed in place, making it much easier to meet different locales’ regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from a video before being sent back to the data centre.
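The scrub-in-place pattern can be sketched with a simplified text analogue. The article's video example would involve vision models; the sketch below shows the same idea with stdlib tools only, redacting PII from a telemetry record at the edge before it is forwarded to the central data centre. The regexes are deliberately basic and illustrative, not production-grade PII detection.

```python
import re

# Illustrative PII patterns (simplified; real deployments need far more care).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3,4}[-.\s]\d{4}\b")

def scrub(record: str) -> str:
    """Redact PII in place at the edge, before forwarding upstream."""
    record = EMAIL.sub("[EMAIL]", record)
    record = PHONE.sub("[PHONE]", record)
    return record

event = "checkout by jane.doe@example.com, callback 555-0100-1234"
clean = scrub(event)
```

Only the scrubbed record ever leaves the site, which is what makes locale-specific residency and privacy rules easier to satisfy.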

4. Enable AI/ML applications

Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.

But AI/ML applications often require processing, analyzing and responding to enormous quantities of data which can’t reasonably be achieved with centralized processing due to network latency and bandwidth issues. Edge computing allows AI/ML applications to be deployed close to where data is collected so analytical results can be obtained in near real-time.
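A back-of-envelope latency budget makes the argument concrete. The figures below are illustrative assumptions, not measurements from any deployment: the model runs in the same time either way, but the network round trip dominates when inference happens in a distant cloud region.

```python
# Illustrative latency budget: cloud inference vs. edge inference.
inference_ms = 30       # assumed model execution time
cloud_rtt_ms = 120      # assumed WAN round trip to a distant cloud region
edge_rtt_ms = 5         # assumed local hop to an on-site edge server

cloud_response_ms = cloud_rtt_ms + inference_ms   # network cost dominates
edge_response_ms = edge_rtt_ms + inference_ms     # close to raw inference time
```

Under these assumptions the edge response is several times faster, and the gap widens further once bandwidth limits on shipping raw sensor data are counted.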

3 Edge Computing Scenarios

Red Hat focuses on three general edge computing scenarios, although these often overlap in each unique edge implementation.

1. Enterprise edge

Enterprise edge scenarios feature an enterprise data store at the core, in a data centre or as a cloud service. The enterprise edge allows organizations to extend their application services to remote locations.

Chain retailers are increasingly using an enterprise edge strategy to offer new services, improve in-store experiences and keep operations running smoothly. Individual stores aren’t equipped with large amounts of computing power, so it makes sense to centralize data storage while extending a uniform app environment out to each store.

2. Operations edge

Operations edge scenarios concern industrial edge devices, with significant involvement from operational technology (OT) teams. The operations edge is a place to gather, process and act on data on-site.

Operations edge computing is helping some manufacturers harness artificial intelligence and machine learning (AI/ML) to solve operational and business efficiency issues through real-time analysis of data provided by Industrial Internet of Things (IIoT) sensors on the factory floor.

3. Provider edge

Provider edge scenarios involve both building out networks and offering services delivered with them, as in the case of a telecommunications company. The service provider edge supports reliability, low latency and high performance with computing environments close to customers and devices.

Service providers such as Verizon are updating their networks to be more efficient and reduce latency as 5G networks spread around the world. Many of these changes are invisible to mobile users, but allow providers to add more capacity quickly while reducing costs.

3 edge computing examples

Red Hat has worked with a number of organizations to develop edge computing solutions across a variety of industries, including healthcare, space and city management.

1. Healthcare

Clinical decision-making is being transformed through intelligent healthcare analytics enabled by edge computing. By processing real-time data from medical sensors and wearable devices, AI/ML systems are aiding in the early detection of a variety of conditions, such as sepsis and skin cancers.

2. Space

NASA has begun adopting edge computing to process data close to where it’s generated in space rather than sending it back to Earth, which can take minutes to days to arrive.

As an example, mission specialists on the International Space Station (ISS) are studying microbial DNA. Transmitting that data to Earth for analysis would take weeks, so they’re experimenting with doing those analyses onboard the ISS, speeding “time to insight” from months to minutes.

3. Smart cities

City governments are beginning to experiment with edge computing as well, incorporating emerging technologies such as the Internet of Things (IoT) along with AI/ML to quickly identify and remediate problems impacting public safety, citizen satisfaction and environmental sustainability.

Red Hat’s approach to edge computing

Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability and manageability.

Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match a combination of hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations.

The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.

Google Looks to “MUM” to Enhance Search

Google has been working on creating a better, more unified experience with its bread and butter – search. The tech giant is looking for more contextually relevant search as it moves forward. To do this, it is turning to MUM, the Multitask Unified Model, to bring more relevance to search results.

The new Multitask Unified Model (MUM) allows Google’s search algorithm to understand multiple forms of input. It can draw context from text, speech, images and even video. This, in turn, allows the search engine to return more contextually relevant results. It will also allow the search engine to understand searches in a more natural language and make sense of more complex searches. When they first announced MUM, the new enhancement could understand over 75 languages. MUM is much more powerful than the existing algorithm.

Contextual Search is the New Normal

Barely two months after the announcement, Google has begun implementing MUM into some of its most used apps and features. In the coming months, Google Search will undergo a major overhaul. The company is creating a new, more visual search experience: users will see more images and graphics in search results. Thanks to MUM, you will also be able to refine and broaden searches with a single click, zooming into finer details such as specific techniques, or pulling back for a broader picture of your topic. In their announcement, Google used the example of acrylic painting: from the search results, they were able to zoom in to specific techniques commonly used in acrylic painting, or get a broader picture of how the art form started.

The search engine uses data such as language and even user behaviour, in addition to context, to recommend broadening or narrowing searches. Google is applying this to YouTube as well, hoping to expand search context to include topics mentioned in YouTube videos later this year. Contextual and multitask search is also making its way to Google Lens, which will be able to make sense of visual and text data at the same time, and to Chrome. Don’t expect the new Lens experience too soon, though: the rollout is expected in 2022, after internal testing.

Context is also making search more “shoppable”. Google is allowing users to zoom in to specifics when searching. For instance, if you’re searching for fashion apparel, you will be able to narrow your search based on design and colour, or use the context of the original search to look for something else completely. In addition, Google’s Shopping Graph will allow users to narrow searches with an “in stock” filter as well. This particular enhancement will be available in select countries only.

Expanding Search to Make A Positive Impact

Google isn’t just focusing on MUM for its own benefit. The company has been busy bringing its technology to create change too. It’s working on expanding contextual data as well as A.I. implementation in addressing environmental and social issues. While this is nothing new, some of the new improvements could impact us more directly than ever.

Environmental Insights for Greener Cities

One of the biggest things that could make a huge impact is Google’s Environmental Insights. While this isn’t brand new, the company is looking to make the feature more readily available to cities to help them be greener. Environmental Insights Explorer will allow municipalities and city councils to make decisions based on data from A.I. and Google Earth Engine.

With this data, cities and municipalities will be able to visualise tree density within their jurisdictions and plan for trees and greenery. This data will help tremendously in lowering the temperatures of cities. It will also help with carbon neutrality. The feature will be expanding to over 100 cities including Yokohama and Sydney this year.

Dealing with Natural Disasters with Actionable Insights

Google Maps will be getting more actionable insights when it comes to natural disasters. Being an American company, Google’s first feature is, naturally, most relevant to the U.S. California and other areas have been hit by wildfires of increasing severity in past years, and other countries such as Australia and Canada, as well as parts of the African continent, are also experiencing increasingly deadly wildfires. It has become ever more apparent that wildfire data needs to be available to the public.

As such, Google Maps will be getting a layer that allows users to see the boundaries of active wildfires. These boundaries are updated every 15 minutes, allowing users to avoid affected areas. The data will also help authorities coordinate evacuations and manage situations on the ground. Google is also piloting a similar feature for flash flooding in India.

Simplifying Addresses

Google is expanding and simplifying one of its largest social projects – Plus Codes. The project, announced just under a year ago, is becoming more accessible with Address Maker. The new app builds on Plus Codes but gives users and organisations a simplified way to create new addresses, making it easier for governments and NGOs to create addresses at scale.
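The digit-by-digit scheme behind a Plus Code is compact enough to sketch. The following is a simplified pure-Python encoder based on the published Open Location Code algorithm: latitude and longitude are shifted positive and written in base 20, one digit pair per precision level, with a ‘+’ inserted after the eighth digit. It omits input validation, padding and code shortening.

```python
# Simplified Open Location Code (Plus Code) encoder, for illustration only.
ALPHABET = "23456789CFGHJMPQRVWX"  # the 20-character Plus Code digit set

def encode(lat: float, lng: float) -> str:
    lat_v, lng_v = lat + 90.0, lng + 180.0   # shift into positive ranges
    code, res = "", 20.0                     # first digit pair covers 20 degrees
    for _ in range(5):                       # five pairs -> a 10-digit code
        la, lo = int(lat_v // res), int(lng_v // res)
        code += ALPHABET[la] + ALPHABET[lo]
        lat_v -= la * res
        lng_v -= lo * res
        res /= 20.0                          # each pair is 20x more precise
        if len(code) == 8:
            code += "+"                      # the '+' after eight digits
    return code
```

Because the code is just a positional encoding of coordinates, any place on Earth gets a short, shareable address without needing a street grid, which is what makes the scheme useful for Address Maker's target communities.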

A Necessity to Optimise & Leverage The Cloud – Lessons From Carsome and 500 Startups

Startups have become the norm nowadays. They’ve become a hallmark not just of the tech industry but of a thriving economy. When it comes down to it, though, the startup arena can also be one of the most brutal, unforgiving arenas any founder or individual can find themselves in. The world has its eyes on Southeast Asia – Malaysia included – as its startup ecosystem teeters on the verge of another boom. The start-up arena has become one of the largest spaces for investment in the region, attracting some USD$1.48 billion in Q1 2021 alone, according to CB Insights, with a significant 40.6% of that investment driven by early-stage deals.

So, the big question is: what do we do with this data? We’ve heard tonnes of startup stories – so we’re offering a slightly different perspective. Let’s talk about the tech. Yes, not every startup is an app or tech-related. However, with today’s rapidly changing needs and challenges, it has become even more important for startups to be able to adapt and react accordingly – in a word, to be AGILE. It’s a term we’ve heard or read countless times, but now it could be the difference between survival and disappearing into the ether.

Fail Efficiently, Innovate Quickly

Like a wise woman once sang – “Let’s start at the very beginning. A very good place to start…”. The world as we know it has changed over the past few decades. In fact, it’s changed in just the past few years! The cost of starting a startup fell from around USD$5 million in 1999 to just over USD$50,000 in 2010, and it continues to decline.

The biggest difference? The Cloud. Cloud computing has significantly reduced the capital needed to start an enterprise, and it will continue to do so. Companies like Amazon Web Services (AWS) are enabling agility and cost-efficiency. They allow startups to take off with no upfront costs but, most importantly, they encourage startups to experiment and fail fast – allowing them to move on quickly to their next approach. Each failure lets startups learn, optimise and eventually succeed.

“The great thing about startups is the ability to start small and learn as you go. So long as you get the foundations right – such as ensuring you are secure by design from the outset – it won’t matter so much if you make the odd misstep along the way, because the consequences will be small.”

Digbijoy Shukla, Business Development Lead, Startup Business Development ASEAN, AWS

These flexibilities are key because, for startups, success hinges on how fast they can present and prove their concept. The ability to provision and decommission servers and other technological resources quickly and efficiently helps startups further optimise and conserve resources. With this inherent efficiency built in, it falls to startups and their management to take advantage of the tools at their fingertips to enhance their offering, evolve their approach and embrace the insights they are privy to.

Source: Adobe Stock

The Right Cloud Computing Partners can determine the Success of Startups

The ability to fail fast and experiment comes secondary to the tools any startup has at its disposal. Cloud computing continues to be a necessity simply because of its robust offerings. Going digital is no longer about swapping typewriters for desktops; it’s about a set of tools that lets you create, adapt and react to ensure the company is meeting its clients’ and customers’ needs.


“It’s critical to align yourself with the right partners and support as early as possible. Folks like 500 Startups and AWS aren’t here to be new and trendy, we’ve been part of the core ecosystem infrastructure since the early days.”

Khailee Ng, Managing Partner, 500 Startups

Choosing the right cloud, then, is an essential part of a startup’s success. It’s like choosing the right business partner: you need someone who believes in your vision and complements your skills with the correct tools. With the number of cloud providers continually increasing, startups have to make a choice based on the needs and skill level of their organisation.

In our session with AWS, Khailee Ng, Managing Partner at 500 Startups, stressed that getting the right partner can be akin to getting that first investment. Programmes like AWS Activate enable startups to keep experimenting and functioning while upskilling and adapting, creating a simultaneous process in which founders, staff and enablers continually interact and improve. In fact, such programmes provide startups with more than an infusion of credits for experimentation and setting up; they provide a platform for startups to learn and implement the knowledge relevant to their success. AWS also provides technical support, which lets non-technical founders benefit as well.

Scale, Pivot and React with Actionable Insights from the Cloud

Being on the Cloud is not only about cost or efficiency. It’s also about the amount of data that becomes available from experimentation and even day-to-day usage of services and products. The data and insights it yields will invariably shape the direction in which the startup can grow. In fact, utilised properly, this data can even reveal new niches and services that grow the startup’s user base and open new markets.


“In the initial six months, we were a car listings site. We pivoted the business in 2016, based on the data. We then extended our sales online, with customer benefits such as a five-day money-back guarantee. Our (sales) pickup rate became much stronger, as we saw the same level of sales (as what we experienced) before the lockdowns. It’s really all about navigating successfully through this crisis.”

Eric Cheng, Co-Founder and CEO of Carsome, an integrated car e-commerce platform
Source: Adobe Stock

Take, for instance, Malaysian-born startup Carsome, which started as a platform for searching for second-hand cars. Based on insights derived from the data its users generated, the company pivoted to complement its pre-existing service, expanding into the sale and purchase of these vehicles. The data highlighted a niche Carsome could occupy that, more importantly, complemented its existing product. With these insights, the company was able to adapt quickly and develop an offering that enhanced its product and led to exponential growth. It continues to use this data to improve its service and keep users happy.

Of course, the Cloud doesn’t just provide actionable insights and agility. It’s also about offloading mundane tasks by leveraging offerings like AWS SageMaker. Using AI and machine learning to take over tasks that can and should be automated lets startups focus their workforce on the more pertinent tasks that differentiate them. Focusing on what is important is what eventually allows startups to scale. This doesn’t mean that vital tasks are offloaded; rather, startups maximise efficiency and optimise their workforce, allowing them to flourish.

The Cloud Is Not the Future, It is Now

We keep hearing that the Cloud is the future. In truth, startups and companies that fail to adopt and adapt are bound to be held back by their own inefficiencies and hesitations. It is crucial to realise that the Cloud is now – not the future; at least, not anymore. Leveraging the Cloud and its many tools is a pivotal skill that startups need to develop. In fact, it would not be unfounded to say it is a skill all organisations should already be developing.

We are at a stage where technology has already permeated every aspect of our lives, from our entertainment to our work and even our day-to-day routines. Why, then, are we hesitant to adopt it at scale to increase our own efficiency and productivity? Why are we hesitant to put technology that is already available to use to increase profitability?

Startups cannot wait any longer to adopt cloud computing. In fact, without the proper Cloud and the willingness to learn how to use it, they are setting themselves up for failure. You don’t need to be a rocket scientist to put technology to work for you these days.

Cloud, 5G, Machine Learning & Space: Digital Trends Shaping the Future

The world is arguably never going to be the same after the COVID-19 pandemic. That sentiment rings true in many aspects and sectors even now, a year on. The effects of the pandemic have pushed our normal into a digital shift, with more companies accelerating their digital transformation journeys – some further along than others. The adoption of these technologies has created waves and trends that seem to be influencing everything in our lives.

In a nutshell, these trends are going to change the way we approach a whole myriad of things, from the way we work to the way we shop. We’re seeing businesses like your regular mom-and-pop shops adopt cloud technologies to help spur growth, while digital-native businesses and companies do the same to adapt to ever-changing circumstances. The adoption of technology – cloud technology in particular – is building resilience in businesses like never before.

Our interview with the Lead Technologist for the Asia Pacific Region at Amazon Web Services (AWS), Mr Olivier Klein, sheds even more light on the trends that have and continue to emerge as businesses continue to navigate the pandemic and digitisation continues.

The Cloud Will Be Everywhere

As we see more and more businesses adopt technologies, a growing number of large, medium and small businesses will turn to cloud computing to stay competitive. In fact, businesses will be adopting cloud computing not only for agility but due to increasing expectations that will come from their customers. However, when referring to “The Cloud”, we are not only talking about things like machine learning, high performance computing, IoT and artificial intelligence (AI); we’re also talking about the simple things like data analytics and using digital channels.

Photo by PhotoMIX Company from Pexels

Digitisation journeys are creating expectations for businesses to be agile and adaptable. Businesses with humble beginnings, like Malaysia’s TF Value-Mart, have been able to scale thanks to their willingness to modernise and migrate to the cloud. Their adoption of cloud technologies has created a more secure digital environment for the business and augmented its speed and scalability, allowing it to grow from a single mom-and-pop store in Bentong in 1998 to over 37 outlets today.

The demand for cloud solutions is increasing, and there’s no denying it. Even businesses like AWS have had to expand to accommodate the growing demand for digital infrastructure and services. The company has scaled from 4 regions in its first 5 years to 13 regions today, with six more on the way – four of them in Asia Pacific: Jakarta, Hyderabad, Osaka and Melbourne.

Edge Computing Spurred by 5G & Work From Anywhere

In fact, according to Mr Klein, AWS sees the next push in Cloud Computing coming from the ASEAN region. This will, primarily, be spurred by the region’s adoption of 5G technologies. Countries like Japan and Singapore are already leading the way with Malaysia and other countries close behind. The emergence of 5G technologies is creating a new demand for technologies that allow businesses to have a more hybrid approach to their utilisation of Cloud technologies.


As companies continue to scale and innovate, a growing demand is emerging for lower latencies. While 5G allows low-latency connections, some businesses are beginning to require access to scalable cloud technologies on premises, with data security and low-latency computing the primary drivers behind this demand. Businesses are innovating faster than ever before and need some of their workloads to run closer to home, with faster results. As a result, we see a growing need for services like AWS Outposts, which lets businesses bring cloud services on premises – and with the recent announcements at AWS re:Invent, Outposts are becoming even more accessible.

Edge computing is also part and parcel of cloud computing as the way we work continues to change. With most businesses forced to work remotely during the pandemic, the trend seems to be sticking: companies are beginning to adopt work-from-anywhere policies that allow for more employee flexibility and increased productivity. Not all workloads, however, have been able to follow workers wherever they go. With the adoption of 5G, that is changing: businesses will be able to use services like AWS Wavelength to deliver low-latency connections to cloud services, empowering work-from-anywhere policies.

The same rings true for education. The growth in the adoption of remote learning will continue. Services like Zoom and BlueJeans have become integral tools for educators to reach their students, and their roles will keep expanding as educational institutions recognise the increased importance of remote learning.

Machine Learning is The Way

As edge computing and the Cloud become the norm, so too will machine learning. Machine learning is enabling companies to adopt new approaches and adapt to changing circumstances, and its adoption has set new customer expectations that will continue to spur uptake. In fact, Mr Klein tells us that businesses will be adopting machine learning not only for automation but also to provide better customer experiences – and a growing number of their customers will come to expect it.

Machine learning’s prevalence is going to grow in the coming years – that’s a given. Customers and users have already had their experiences augmented by AI and machine learning, and this continues to shape expectations of what a user experience should be. Take, for instance, services like Netflix, which have long used machine learning and AI to recommend and surface content to their users; newer streaming services that lack such integrations are seen as subpar and criticised by users.
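Recommendation features of this kind are often built on user-to-user similarity. As a rough illustration only – the titles, ratings and threshold below are invented, and production systems like Netflix’s are far more sophisticated – a minimal collaborative-filtering sketch might look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors (0 = unrated)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(target, others, titles):
    """Suggest titles the most similar viewer rated 4+ that the target hasn't watched."""
    best = max(others, key=lambda u: cosine_similarity(target, u))
    return [title for title, mine, theirs in zip(titles, target, best)
            if mine == 0 and theirs >= 4]

# Toy catalogue and 1-5 star ratings (0 means "not watched")
titles = ["Drama A", "Sci-fi B", "Comedy C", "Thriller D"]
viewer = [5, 4, 0, 0]
others = [[5, 5, 1, 4],   # similar taste to `viewer`
          [1, 1, 5, 2]]   # dissimilar taste

print(recommend(viewer, others, titles))   # ['Thriller D']
```

The sketch picks the historical viewer whose ratings most resemble the target’s and surfaces the unwatched titles that viewer rated highly – the basic intuition behind the personalised rows users now take for granted.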

Photo by Lenny Kuhne on Unsplash

Aside from user experiences, businesses are growing accustomed to using machine learning for decision-making insights and for automating business operations. It has also enabled companies to innovate more readily. These conveniences will be among the largest factors in machine learning’s increasing prevalence, alongside the adoption and development of autonomous vehicles and other augmented solutions.

Companies like Moderna have been utilising machine learning to help create and innovate in their arena. They have benefitted from adopting machine learning in their labs and manufacturing processes. This has also allowed them to develop their mRNA vaccines which are currently being deployed to combat COVID-19.

To Infinity & Beyond

The growing adoption of digital and cloud solutions is also spurring a new wave of technologies that give businesses deeper insights. These technologies let businesses tap insights gained from satellite imaging: data such as ground imaging and even ocean imaging can be turned into actionable insights. Use cases are beginning to emerge from businesses involved in logistics, search and rescue, and even retail.

Photo by NASA on Unsplash

However, the cost of building and launching a satellite makes no sense for an individual business. We already have thousands of satellites in orbit, and it makes far more sense to use them to gain these insights. AWS is already introducing AWS Ground Station – a fully managed service that gives businesses access to satellites to collect and downlink data, which can then be processed in the AWS Cloud.

These trends are simply a glance into an increasingly digitised and connected world where possibilities seem to be endless. Businesses are at the cusp of an age that will see them flourish if they are agile and willing to adopt new technologies and approaches that are, at this time, novel and unexplored.

Acer Expands to Healthcare with a Focus on AI-Assisted Diagnostics

Acer has been busy in the recent past expanding its portfolio to become a more well-rounded tech and lifestyle company. In recent years, the company has introduced the Predator Shot, an energy drink targeted at gamers; the Predator Gaming Chair, a collaborative effort with OSIM; and even a brand new brand – Acerpure. The company isn’t stopping there, though: it is expanding into the healthcare segment, and it’s happening really soon.

In an interview session with the media, President of Acer Pan Asia Pacific Operations, Mr Andrew Hou, unwittingly revealed that the company would be exploring opportunities in healthcare in the near future. Upon further investigation, we found that Acer has already set up a new subsidiary, Acer Healthcare. The company is listed in the Tracxn database as founded in 2019, and Acer has also set up an official website for Acer Healthcare.

Source: Channel News Asia / Mr Andrew Hou, President of Acer Pan Pacific Operations

It looks like Acer is looking to leverage its prowess in data and technology to help narrow the gap between technology and medicine. Acer Healthcare appears to be focusing on AI-powered devices for diagnosis and patient monitoring – a field that has been growing over the past few years as multiple startups and companies explore new technologies to diagnose patients more accurately.

Acer Healthcare has already released a product called VeriSee DR, an AI-assisted solution for diagnosing diabetic retinopathy – a condition that affects close to 130 million people worldwide. VeriSee DR uses AI to analyse pictures of patients’ ocular fundus (the interior of the eye) for signs of diabetic retinopathy. According to the company’s website, the technology achieves 95% sensitivity and 90% specificity for diagnosis. In fact, Acer Healthcare has ongoing clinical trials with VeriSee DR and has published research on it in multiple medical journals.
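Sensitivity and specificity describe two different error rates, and it helps to see how they are computed. Here is a minimal sketch with hypothetical screening counts chosen purely to reproduce the reported percentages – these are not Acer’s actual trial numbers:

```python
def sensitivity(tp, fn):
    """True-positive rate: share of diseased eyes the model correctly flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of healthy eyes the model correctly clears."""
    return tn / (tn + fp)

# Hypothetical results for 1,000 fundus images (illustrative only)
tp, fn = 190, 10    # 200 eyes with diabetic retinopathy
tn, fp = 720, 80    # 800 eyes without it

print(f"sensitivity: {sensitivity(tp, fn):.0%}")   # sensitivity: 95%
print(f"specificity: {specificity(tn, fp):.0%}")   # specificity: 90%
```

High sensitivity matters most for a screening tool like this: it is the fraction of genuinely affected patients the system does not miss, while specificity controls how many healthy patients are needlessly referred on.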

Photo by Ksenia Chernaya on Pexels.com

In addition to VeriSee DR, Acer Healthcare appears focused on researching and developing new AI-based diagnostic technologies. Of note are a few currently listed research projects, including diagnosing heart arrhythmia by applying AI to data collected through continual monitoring with an Acer Leap Ware wearable, and diagnosing renal impairment through retinal fundus imaging. While the company’s focus does seem to be diagnostics, it is also working on technologies for medical records and referrals.

The Future of Health Lies in Technology But We’re Not Ready According to the Philips Future Health Index

It goes without saying that technology is seeping into every aspect of our lives, and Philips found this to be true of the medical field as well. In fact, technology is becoming so ubiquitous that the Future Health Index (FHI) indicates the field of medicine, broadly speaking, simply isn’t ready. Philips’ yearly survey of younger medical professionals had particularly interesting findings this round, given that it was commissioned in the early months of the COVID-19 pandemic.

Younger Doctors Want Technology – It is the Key to Value-Based Healthcare

Now in its fifth year, the Future Health Index found, among other things, that younger doctors are open to adopting technologies that take over the mundane, repetitive tasks of medicine. Nearly one in three doctors saw benefits in adopting technologies such as artificial intelligence, automation and telehealth in the day-to-day practice of medicine, while 76% cited the adoption of technology as helping to decrease the stresses of medical practice – one of the main worries for frontliners in the current pandemic.

However, the findings of Philips’ FHI show that competencies central to a digital healthcare system are lacking in basic medical training – most notably data competencies among younger medical professionals. In the FHI, about 47% of respondents reported being left in the lurch when it came to key data skills, citing gaps in data analysis and interpretation. Another notable gap was the management of data privacy, one of society’s growing concerns.

Photo by Günter Valda on Unsplash

These particular findings highlight a robust issue that should be tackled in academia as well as with continuing medical education. Only 54% of doctors in Asia Pacific reported receiving training to address the legislative issues pertaining to data privacy while only 51% were receiving training in handling data.

These competencies are key to the current shift towards value-based healthcare – a healthcare model that treats patient outcomes as a key factor in determining the value of care. While awareness of the term is good in the Asia Pacific region (82%), drilling further found that an alarmingly low 4% knew what it entailed in full; the majority of doctors surveyed knew it only by name.

While that may be a concern, the integration of technology into everyday healthcare and patient care is key in a value-based system. Only when doctors can access, interpret and analyse the data coming from adopted technologies can they truly assess the quality of healthcare. An appreciation of technology’s role in reducing mundane workloads also needs to become more pervasive.

Technology in Improving Healthcare

Technology plays a vital role in creating a more efficient and effective standard of health. In their FHI, Philips found that a majority of younger doctors are advocates of adopting newer technologies. They see value in adopting the right technologies in creating a better standard of care.

However, in countries like Malaysia, these doctors face problems even with the simple automation of administrative tasks. That said, medical practice is being revolutionised as technologies that once seemed far-fetched become reality. As personalised healthcare comes to the forefront, an increasing number of doctors across the Asia Pacific region see the benefits of applying artificial intelligence in the field: 74% of those surveyed saw opportunities to offer more personalised care, while 79% believed AI would help with more accurate diagnoses.

Photo by National Cancer Institute on Unsplash

For AI to be effective, though, data needs to be made readily available, and here the medical industry faces a conundrum – should more effective and personalised healthcare come at the expense of data privacy? The conundrum can be addressed by anonymising patient data to allow ready access. However, with multiple data silos created by multiple software platforms, doctors are hard-pressed to extract any actionable insights.

Interoperability is becoming a hurdle as hospitals and even clinics begin adopting new technologies that are not speaking to each other. This lack of interoperability creates data silos which doctors have to manually import and analyse. With a more cohesive digital architecture, doctors will be able to access a more holistic view of patient data and outcomes; and with the state of AI and machine learning now, they will be able to get even more insights to tough cases.

Technology isn’t just for the betterment of patient care, the FHI has also found that younger doctors report being less stressed at work when technologies are adopted effectively. The psychological benefits of reduced stress on the doctors will undoubtedly benefit patient care in the long run.

Looking to the Future & What the Medical Field can Learn from the Digitisation of Other Industries

Younger doctors are the key to the field of medicine progressing into the future. Given their willingness to learn, it comes as no surprise that these doctors are spearheading the charge to adopt and learn new skills to remedy the emerging skills gap. It now falls to academia to address the needs of the nascent class of doctors leaving its institutions for a field of medical practice that is at once familiar and different.

What remains is for the medical industry to look to others who have a head start in dealing with the issues they are facing now. New technologies being adopted such as Kubernetes and the cloud could see the medical industry getting a quantum leap when it comes to patient care and medical breakthroughs.

Photo by Bofu Shaw on Unsplash

There is no better proof of the benefits of adopting the right technology than the state of vaccines for COVID-19. In a matter of months, multiple vaccine candidates have been developed. Some candidates such as the mRNA vaccine are revolutionary approaches which were made possible with the augmentation of human ingenuity with the insights derived from machine learning and AI.

In addition to technologies, their adoption needs a fundamental change in attitudes and values in the industry as well. Younger Doctors are already aware of these attitudes with an increasing number looking to autonomy in their practices. They also look to workspaces which are collaborative and have access to the latest medical equipment. However, more importantly, they look to a culture that supports work-life balance.

As with any industry, a majority of the attitudes will need a top-down approach; spearheaded by veteran doctors and administrators in hospitals and practices. It goes without saying that the agility needed to adapt and adopt new technologies and approaches must be spearheaded by leadership. They will also need to look into empowering younger doctors to be bold in their approaches and use of new technologies.

We’re in the Golden Age of Machine Learning, Tomorrow it will be Ubiquitous – Four Things We Need to Do Now

Today, thanks in large part to the cloud, actions such as communicating over text or transferring funds digitally are so commonplace, we hardly even think about how incredible these processes are; as we enter the golden age of machine learning, we can expect a similar boom of benefits that previously seemed impossible.

Machine learning is already helping companies make better and faster decisions. In healthcare, predictive models created with machine learning are accelerating research and the discovery of new drugs and treatment regimens. In other industries, it’s helping remote villages of Southeast Africa gain access to financial services, and matching individuals experiencing homelessness with housing.

While the short term applications are encouraging, machine learning could potentially have an even greater impact on our society. In the future, machine learning will be intertwined and under the hood of almost every application, business process, and end-user experience. However, before this technology becomes so ubiquitous that it’s almost boring, there are four key barriers to adoption we need to clear first.

Democratizing machine learning

The only way that machine learning will truly scale is if we as an industry make it easier for everyone – regardless of their skill level or resources – to be able to incorporate this sophisticated technology into applications and business processes.

Photo by cottonbro on Pexels.com

To achieve this, companies should take advantage of tools that have intelligence built directly into applications, so their entire organization can benefit. For instance, 123RF, a homegrown stock photography portal, aims to make design smarter, faster and easier for users. To do so, it relies on Amazon Athena, Amazon Kinesis and AWS Lambda for data pipeline processing, while its newer product, Designs.ai Videomaker, uses Amazon Polly to create voice-overs in more than 10 different languages. With AWS, 123RF has maintained the flexibility to scale its infrastructure, shortened product development cycles, and is looking to incorporate other services to support its machine learning and AI research.
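For a sense of what a Polly-backed voice-over integration involves, here is a minimal sketch. The helper function, the default voice and the output format are illustrative assumptions, not 123RF’s actual configuration; the commented-out boto3 call shows where the real request would go once AWS credentials are configured.

```python
def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble keyword arguments for Amazon Polly's synthesize_speech call."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

request = build_polly_request("Welcome to our product tour.")
print(request["OutputFormat"])   # mp3

# With AWS credentials configured, the actual call would look like:
#   import boto3
#   polly = boto3.client("polly")
#   audio = polly.synthesize_speech(**request)["AudioStream"].read()
```

Swapping the `voice_id` is all it takes to generate the same narration in another of Polly’s supported voices and languages, which is what makes multi-language voice-overs cheap to produce at scale.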

As processes go from being manual to automatic, workers are free to innovate and invent, and companies are empowered to be proactive instead of reactive. And as this technology becomes more intuitive and accessible, it can be applied to nearly every problem imaginable – from the toughest challenges in the IT department to the biggest environmental issues in the world.

Upskilling workers

According to the World Economic Forum, the growth of AI could create 58 million net new jobs in the next few years. However, research suggests there are currently only 300,000 AI engineers worldwide, and AI-related job postings outnumber job searches three to one, with the divergence widening. Given this significant gap, organizations need to recognize that they simply aren’t going to be able to hire all the data scientists they need as they continue to implement machine learning in their work. Moreover, this pace of innovation will open doors and ultimately create jobs we can’t even begin to imagine today.

That’s why companies and institutions in the region – Asia Pacific University, DBS, Halodoc and others – are finding innovative ways to encourage and nurture young talent to gain machine learning skills in fun, interactive, hands-on ways, such as the AWS DeepRacer League. It’s critical that organizations not only direct their efforts towards training their existing workforce in machine learning skills, but also invest in training programs that develop these important skills in the workforce of tomorrow.

Instilling trust in products

With anything new, people are often of two minds: either an emerging technology is a panacea and global savior, or it is a destructive force with cataclysmic tendencies. The reality, more often than not, is nuanced and somewhere in the middle. These disparate perspectives can be reconciled with information, transparency, and trust.

Photo by Arseny Togulev on Unsplash

As a first step, leaders in the industry need to help companies and communities learn about machine learning, how it works, where it can be applied, ways to use it responsibly, and understand what it is not.

Second, in order to gain faith in machine learning products, they need to be built by diverse groups of people across gender, race, age, national origin, sexual orientation, disability, culture, and education. We will all benefit from individuals who bring varying backgrounds, ideas, and points of view to inventing new machine learning products.

Third, machine learning services should be rigorously tested, measuring accuracy against third-party benchmarks. Benchmarks should be established by academia as well as governments, and be applied to any machine learning-based service, creating a rubric for reliable results as well as contextualizing results for specific use cases.
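At its simplest, benchmark testing of this kind means scoring a service’s predictions against a gold-labelled dataset that the service’s builders did not create. A minimal sketch – the labels and predictions below are invented for illustration:

```python
def accuracy(predictions, labels):
    """Fraction of benchmark examples a service labels correctly."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Invented gold labels from a third-party benchmark vs. a service's output
gold        = ["cat", "dog", "dog", "cat", "bird"]
predictions = ["cat", "dog", "cat", "cat", "bird"]

print(f"benchmark accuracy: {accuracy(predictions, gold):.0%}")   # benchmark accuracy: 80%
```

The value of an independent benchmark is precisely that the rubric – the gold labels and the scoring rule – is fixed by a third party, so competing services can be compared on equal terms.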

Regulation of machine learning

Finally, as a society, we need to agree on what parameters should be put in place governing how and when machine learning can be used. With any new technology, there has to be a balance in protecting civil rights while also allowing for continued innovation and practical application of the technology.

Photo by Sora Shimazaki on Pexels.com

Any organization working with machine learning technology should be engaging customers, researchers, academics, and others to best determine the benefits of its machine learning technology with the potential risks. And they should be in active conversation with policymakers, supporting legislation, and creating their own guidelines for the responsible use of machine learning technology. Transparency, open dialogue, and constant evaluation must always be prioritized to ensure that machine learning is applied appropriately and is continuously enhanced.

What’s next

Through machine learning we’ve already accomplished so much, and yet, it’s still day one (and we haven’t even had a cup of coffee yet!). If we’re using machine learning to help endangered orangutans, just imagine how it could be used to help save and preserve our oceans and marine life. If we’re using this technology to create digital snapshots of the planet’s forests in real-time, imagine how it could be used to predict and prevent forest fires. If machine learning can be used to help connect small-holder farmers to the people and resources they need to achieve their economic potential, imagine how it could help end world hunger.

To achieve this reality, we as an industry have a lot of work ahead of us. I’m incredibly optimistic that machine learning will help us solve some of the world’s toughest challenges and create amazing end-user experiences we’ve never even dreamt of. Before we know it, machine learning will be as familiar as reaching for our phones.