This article is contributed by Mak Chin Wah, Country Manager, Malaysia and General Manager, Telecoms Systems Business, South Asia, Dell Technologies
As artificial intelligence (AI) technology continues to evolve and grow in capability, it is becoming a presence in every aspect of our lives. One needs to look no further than voice assistants, navigation apps like Waze, or rideshare apps such as Grab, all of which Malaysians are familiar with.
From machine learning and deep learning algorithms that automate manufacturing, natural language processing, video analytics and more, to the use of digital twins that virtually simulate, predict and inform decisions based on real-world conditions, AI helps solve critical modern-life challenges to benefit humanity. In fact, we have digital twin technology to thank for assisting in the bioengineering of vaccines to fight COVID-19.
AI is changing not only what we do but also how we do it — faster and more efficiently.
Advancing Human Progress
For companies like Dell Technologies that are committed to advancing human progress, AI will play a big part in developing solutions to the pressing issues of the 21st century. The 2020s, in particular, are ushering in a fully data-driven period in which AI will help organisations and industries of all sizes accelerate intelligent outcomes.
Organisations can harness their AI endeavours through high-performance computing (HPC) infrastructure solutions that reduce risk, improve processing speed and deliver deeper insights. By extracting value through AI from the massive amounts of data generated across the entire IT landscape — from the core to the cloud — businesses can better tackle challenges and make discoveries to advance large-scale, global progress.
Continuing to Innovate
Through transformative innovation, customers can derive the insights needed to change the course of discovery. For example, Dell Technologies equipped Monash University Malaysia with top-of-the-line HPC and AI solutions[i] to help accelerate the university’s research and development computing capabilities at its Sunway City campus in Selangor. The solution aims to enhance and accelerate the university’s computation capabilities in solving complex problems across its significant research portfolio.
Financial services, life sciences and oil and gas exploration are just a few of the other computation-intensive applications where enhanced servers will make a difference in achieving meaningful results, for humankind and the planet.
At the heart of AI technology are essential building blocks and solutions that power these activities. For example, Dell’s existing line of PowerEdge servers has already contributed to transformational, life‑changing projects, and will continue to power human progress in this generation and the next.
The most demanding AI projects require servers that offer distinct advantages: purpose-built to deliver higher performance and even more powerful supercomputing results, yet engineered for the coming generation to support the real-time processing requirements and challenges of AI applications with ease.
In addition to helping deploy more secure and better-managed infrastructure for complex AI operations at mind-boggling modelling speeds, these transformative servers will help address organisations’ biggest concerns around productivity, efficiency and sustainability, while cutting costs and conserving energy.
Transforming Business and Life
While organisations are at different stages in their adoption of AI, its transformational impact on business and life itself can no longer be ignored. Human progress will depend on the ability of AI to make communication easier, personalise content delivery, advance medical research, diagnosis and treatment, track potential pandemics, revolutionise education and enable digital manufacturing. In Malaysia, AI is progressively being recognised as the new general-purpose technology that will bring about economic transformation on the scale of the Industrial Revolution, yet adoption of Industry 4.0 remains sluggish, with only 15% to 20% of businesses having truly embraced it. On the other hand, the government is taking this emerging technology seriously, having set out frameworks for the incorporation of AI across numerous sectors of the economy. These include the Malaysia Artificial Intelligence Roadmap 2021-2025 (AI-Rmap) and the Malaysian Digital Economy Blueprint (MDEB), spearheaded by MyDIGITAL Corporation and the Economic Planning Unit.
Moving Forward
With servers and HPC at the heart of AI, modern infrastructure needs to match the unique requirements of increasingly complex and widely distributed workloads. Regardless of where a business is on the AI journey, the key to optimising outcomes is having the right infrastructure in place, ready to seamlessly scale as the business grows and positioned to take on the unexpected, unknown challenges of the future. To do that requires having the expertise – or a trusted partner that does – to help at any and every stage, from planning through to implementation, to make smart server decisions that will unlock the organisation’s data capital and support AI efforts to move human progress forward.
[i] Based on the “Dell Technologies helps Monash University Malaysia enhance its R&D capabilities with HPC and AI solutions” media alert, November 2022.
AI (artificial intelligence)-generated graphics are not a new thing. These days you have tools like OpenArt and Hotpot, where you can simply type in keywords for the image you want and let the engine generate art for your use. Even before AI-generated graphics, though, the use of AI within the creative industry was nothing new. NVIDIA has used its own AI engine to write an entire symphony, and even to create 3D environments using its ray-tracing engines. Adobe, too, has something it calls Sensei. The AI tool is implemented across its creative suite to understand and recognise objects better, fill in details more naturally where needed, and even edit videos, images or text quickly and efficiently. Now, they have Firefly.
Firefly is not a new AI system separate from Adobe’s Sensei. It is part of the larger Adobe Sensei generative AI family, alongside technologies like Neural Filters, Content Aware Fill, Attribution AI and Liquid Mode implemented across several Adobe platforms. Unlike those platform-specific implementations, though, Adobe is looking to put Firefly to work across a number of platforms spanning its Creative Cloud, Document Cloud, Experience Cloud and even Adobe Express.
So, what is Adobe Firefly? We hear you ask. It is, essentially, Adobe’s take on what a creative generative AI should be. Adobe is not limiting Firefly to just image generation, modification and correction. It is designed to let any kind of content creator create even more without needing to spend hundreds of hours learning a new skill. All they need to do is adopt Firefly in their workflow, and they will be able to produce content they have never been able to create before, be it images, audio, vectors, text, videos or even 3D materials. You can get different content every time with Adobe Firefly too; the possibilities, according to Adobe, are endless.
What makes Firefly so powerful is the entirety of Adobe’s experience and database behind it. Adobe Stock’s images and assets alone form a huge library for the AI to draw on. The implementation can also use openly licensed assets and public domain content when generating its output. The tool is designed to prevent IP infringement and help users avoid future litigation.
At launch in its beta state, Firefly will only be available as an image and text generation tool for Adobe Express, Adobe Experience Manager, Adobe Photoshop and Adobe Illustrator. Adobe plans to bring Firefly to the rest of its platforms where relevant in the future. The company is also pushing for more open standards in asset verification, which will eventually include proper categorisation and tagging of AI-generated content. Adobe also plans to make the Firefly ecosystem more open, with APIs that let users and customers integrate the tool into their existing workflows. For more information on Adobe’s latest generative AI, you can visit their website.
How will we connect with colleagues in five to ten years’ time? Will we all be interacting with holograms? Fully immersed in virtual worlds? Or will the reality be much closer to how most of us work from our laptops today?
Virtual worlds and immersive experiences could offer exciting new ways to connect with others – and our content. And with people more dispersed and working patterns more personalised than before, how we collaborate and get things done has never been more important.
It’s my team’s role to dig into future trends and technologies, experiment with solutions and reimagine experiences. Though immersive environments will play a role in the future of work, face-to-face meetings, instant messages, collaboration tools and video calls aren’t going anywhere. That’s why we’re focusing on the user experience and homing in on the everyday micro-moments that could be disruptive as we potentially bounce between physical, digital and virtual worlds in the future.
We’re asking questions like: How will people interact at the intersections of these worlds? What tools will people need to move between these locations seamlessly? What if people don’t want to wear a headset and dive into a virtual world for 8 hours a day – would they be excluded from future projects or collaboration opportunities?
Intelligent, familiar tools for future interactions
Using Concept Nyx’s ability to deliver compute all around, powered at the edge, we have been exploring how familiar devices and peripherals could be paired with Artificial Intelligence (AI) to work together as an ecosystem to deliver easily accessible and immersive experiences beyond gaming.
Our labs are packed with curated immersive demonstrations and concepts that help us test and explore how Dell could help people move between various spaces and tasks intuitively in the future. From fully immersive Virtual Reality (VR) builds to Extended Reality (XR) experiences featuring displays and other tools that remove the need for a VR headset, these environments have helped us evolve concepts like the Concept Nyx Companion. A lightweight tablet-style device that could be viewed and accessed in VR and XR environments, the Companion could be a consistent tool throughout all these spaces and could keep a user’s content in one place as they move between spaces and tasks. No more taking photos of whiteboards or copying notes to upload to a different space – users could just screenshot their project space and/or easily copy content for sharing across screens.
Together with the Concept Nyx Stylus, you could input notes by voice or via pen, drag + drop them into digital and virtual collaboration spaces, and even use voice activation for AI image creation – perfect for those of us who aren’t artists! All of these tools could also be used seamlessly alongside the Concept Nyx Spatial Input in a future desktop environment with a keyboard and mouse, and possibly 3D displays too. We’ve been looking at creative ways to connect these traditional tools for a clutter-free space, and we’ve also been thinking about intuitive gestures for interacting with content – for example, using the tip of the Stylus for writing and the top of the Stylus for interacting with onscreen content, or using the Spatial Input as a dial for a 360° view or for zooming in on details.
We’ve even been thinking through how people might show up in future digital and virtual spaces. We’ve all been on video calls where we need to step away for a moment to answer the door or tend to a pet or child off-camera. Instead of leaving a blank screen, empty seat, or static 2015 headshot, imagine with a wave of your hand, you could stay present as an intelligent avatar while you step away or stay off camera completely. To explore this, we’ve been experimenting with gestures and movement tracking and building on our imaging technology and video conferencing expertise to create the Concept Nyx Spatial camera, which when paired with AI software, could learn a user’s expressions and mannerisms to deliver a more authentic representation of them for future interactions.
Advancing the Concept Nyx Ecosystem
From infrastructure to devices, Dell is at the centre of present and future workplaces and is focused on developing the tools that will be needed to navigate these spaces. Right now, this means bringing tools to market like a new generation of UltraSharp conferencing monitors and intelligent webcams with motion-activated controls and presence detection, and building on technologies like storage, 5G, multi-cloud and edge that provide the advanced connectivity and infrastructure to allow organisations to shape how they work. In the future, productivity tools will be connected and intelligent enough to seamlessly move from experience to experience and task to task, helping to break down barriers and redefine how colleagues connect with one another.
My team continues to explore the future of compelling, immersive experiences in both work and play. Concepts play a huge role in allowing our designers, engineers, and strategists to test and tweak devices and solutions to inform future experience roadmaps. We’re excited to keep you updated on our journey!
From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed and analyzed.
At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end-users. Where data has traditionally lived in the data centre or cloud, there are benefits and innovations that can be realized by processing the data these devices generate closer to where it is produced.
This is where edge computing comes in.
4 benefits of edge computing
As the number of computing devices has grown, our networks simply haven’t kept pace with the demand, causing applications to be slower and/or more expensive to host centrally.
Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible.
1. Improve performance
When applications and data are hosted on centralized data centres and accessed via the internet, speed and performance can suffer from slow network connections. By moving things out to the edge, network-related performance and availability issues are reduced, although not entirely eliminated.
2. Place applications where they make the most sense
By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations with intermittent connectivity, including geographically remote offices and vehicles such as ships, trains and aeroplanes.
3. Simplify meeting regulatory and compliance requirements
Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in data centres or the cloud.
With edge computing, however, data can be collected, stored, processed, managed and even scrubbed in place, making it much easier to meet different locales’ regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from a video before being sent back to the data centre.
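To make that scrubbing idea concrete, here is a minimal sketch of in-place PII removal, assuming Python with OpenCV; the file names and detector parameters are illustrative assumptions, not anything from Red Hat. Faces are detected and blurred on the edge device, so only the scrubbed footage ever travels over the network.

```python
# pip install opencv-python
import cv2

# OpenCV ships a pre-trained Haar cascade for frontal faces.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def scrub_frame(frame):
    """Blur every detected face so PII never leaves the edge site."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# Scrub a local recording before anything is sent to the data centre.
reader = cv2.VideoCapture("camera_feed.mp4")   # hypothetical local file
writer = None
while True:
    ok, frame = reader.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("scrubbed.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))
    writer.write(scrub_frame(frame))
reader.release()
if writer:
    writer.release()
```

In this sketch, only `scrubbed.mp4` would be uploaded; the raw footage stays on-site.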
4. Enable AI/ML applications
Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.
But AI/ML applications often require processing, analyzing and responding to enormous quantities of data which can’t reasonably be achieved with centralized processing due to network latency and bandwidth issues. Edge computing allows AI/ML applications to be deployed close to where data is collected so analytical results can be obtained in near real-time.
3 edge computing scenarios
Red Hat focuses on three general edge computing scenarios, although these often overlap in each unique edge implementation.
1. Enterprise edge
Enterprise edge scenarios feature an enterprise data store at the core, in a data centre or as a cloud service. The enterprise edge allows organizations to extend their application services to remote locations.
Chain retailers are increasingly using an enterprise edge strategy to offer new services, improve in-store experiences and keep operations running smoothly. Individual stores aren’t equipped with large amounts of computing power, so it makes sense to centralize data storage while extending a uniform app environment out to each store.
2. Operations edge
Operations edge scenarios concern industrial edge devices, with significant involvement from operational technology (OT) teams. The operations edge is a place to gather, process and act on data on-site.
Operations edge computing is helping some manufacturers harness artificial intelligence and machine learning (AI/ML) to solve operational and business efficiency issues through real-time analysis of data provided by Industrial Internet of Things (IIoT) sensors on the factory floor.
3. Provider edge
Provider edge scenarios involve both building out networks and offering services delivered with them, as in the case of a telecommunications company. The service provider edge supports reliability, low latency and high performance with computing environments close to customers and devices.
Service providers such as Verizon are updating their networks to be more efficient and reduce latency as 5G networks spread around the world. Many of these changes are invisible to mobile users, but allow providers to add more capacity quickly while reducing costs.
3 edge computing examples
Red Hat has worked with a number of organizations to develop edge computing solutions across a variety of industries, including healthcare, space and city management.
1. Healthcare
Clinical decision-making is being transformed through intelligent healthcare analytics enabled by edge computing. By processing real-time data from medical sensors and wearable devices, AI/ML systems are aiding in the early detection of a variety of conditions, such as sepsis and skin cancers.
2. Space
NASA has begun adopting edge computing to process data close to where it’s generated in space rather than sending it back to Earth, which can take minutes to days to arrive.
As an example, mission specialists on the International Space Station (ISS) are studying microbial DNA. Transmitting that data to Earth for analysis would take weeks, so they’re experimenting with doing those analyses onboard the ISS, speeding “time to insight” from months to minutes.
3. Smart cities
City governments are beginning to experiment with edge computing as well, incorporating emerging technologies such as the Internet of Things (IoT) along with AI/ML to quickly identify and remediate problems impacting public safety, citizen satisfaction and environmental sustainability.
Red Hat’s approach to edge computing
Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability and manageability.
Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations. The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.
Xiaomi’s quest to be the king of the Internet of Things (IoT) is no secret. The company has more than one subsidiary working on its IoT products. To date, we’ve seen IoT products branded as Mi, Soocas and even Dreame. ROIDMI is yet another brand that works on IoT, particularly cordless vacuums. Its laser focus on the niche seems to have worked in its favour, as its line-up of cordless vacuums is one of the more popular options on platforms like Shopee and Lazada.
That said, robot vacuums are no revolution when it comes to cleaning. They’ve been available on the market for quite a while now, but they’ve always had their quirks. ROIDMI’s EVE Plus looks to address many of these quirks with some interesting approaches and a smart implementation of AI technology. These small innovations make for one of the easier, more hands-off cleaning experiences we’ve had with a robot vacuum.
The ROIDMI Experience
The ROIDMI experience isn’t a plug-and-forget one; it comes with a host of “prep work” and setup that you’ll have to undertake at the beginning, which lends itself to a more automated experience later on. Of course, this is in no way a deal-breaker when it comes to the overall experience.
Being an IoT device, the robot vacuum requires some setup. However, the process is pretty straightforward, simple and very app-centric. The EVE Plus Robot Vacuum itself doesn’t come with many interactive components. Most of the interactions and settings are done through the app. This actually makes setup a breeze. However, the ecosystem itself can be a little quirky as it isn’t as integrated as you would think.
When you initially unbox and set up your EVE Plus Robot Vacuum, you’ll need to make sure that you remove the plastic and Styrofoam pieces placed to protect moving parts during shipping. The manual says the vacuum can be integrated into either the Mi Home app or the ROIDMI app. However, this particular model isn’t listed in the Mi Home app; instead, you will need to use the ROIDMI app to set it up.
Setup was very simple and quick. All you have to do is plug in the base, place the EVE Plus in the cradle and power it on. Once you do, you just tap the add product option in the app, denoted by a “+” on the top right. The app then automatically looks for the local Wi-Fi network being broadcast by the vacuum and proceeds to program the vacuum’s Wi-Fi settings. To be frank, that’s all the setup required. After this, everything else is automated and done by the vacuum itself during its first cleaning run.
App Design & Usability
The ROIDMI app is a simple, well-designed app. Unlike a lot of other IoT apps, it cuts to the chase and immediately lets you set up and manage your products after you sign in. Its simplicity and straightforward design are among its best features; the no-frills layout lets you get things done without fumbling and digging for functions.
After your initial set-up of registering and logging in, you’ll be greeted by a screen with a list of your appliances. Each appliance can be set up and monitored through the app. The main screen shows you pertinent information such as the battery level, active time, and area that the vacuum has cleaned previously.
Tapping further into the app brings up more detailed information. In the case of the EVE Plus Robot Vacuum, you’ll be able to see a map of the space it’s in and the cleaning path it took on the previous session. The app also gives you quick access to its cleaning modes, map customisations and the recharge and clean options. You can also customise how much water it dispenses when mopping, and even the suction power of the vacuum.
Designed for Real Living Spaces
While the app is the core of their user experience, the ROIDMI EVE Plus Robot Vacuum itself comes packed with hardware and design that makes using it a more seamless experience.
Let’s start off with the overall design of the vacuum. The ROIDMI EVE Plus Robot Vacuum is designed to manoeuvre through real living spaces. While it shares a similar design with many of the robot vacuums available, it is short enough to fit under most furniture in a room, and its circular design gives it the manoeuvrability to get out of tight situations with minimal intervention.
ROIDMI has also struck a balance between the size of the vacuum and the size of its internal tank. The tank is large enough that the vacuum doesn’t need to make multiple trips back to the docking station to be emptied, even in larger rooms, but small enough that the robot vacuum can still fit into most nooks and crannies of a space. It also doesn’t have many parts which click into place; all the components of the robot vacuum are held securely either with screws or by a secure locking mechanism.
The vacuum’s movement depends on two rather large plastic wheels, which function similarly to those on the hoverboards we’ve seen in the market. This design choice allows the robot vacuum to find its way through tough spaces. It also allows it to move over ledges and objects up to about 2cm in height, so if you have a table with a base that runs along the ground, or cables running across a room, it’ll be able to move over them. For cables, though, if they aren’t fastened to the ground securely, you might end up with the electrical items connected to them being pulled over.
The ROIDMI EVE Plus has a small, elevated component on the top housing the LIDAR sensor. This gives it a 360° field of view, allowing it to map and detect obstacles more quickly and accurately; in fact, it managed to map the room it was in even during setup. The sensor also allows the robot vacuum to gauge the height of furniture so it doesn’t get stuck underneath. This is complemented by sensors on the side and bumpers on the front to help with movement and manoeuvring. There are only three physical buttons on the EVE Plus: the power button, the home button and a button that acts as a quick clean command.
The base station or dock also has a minimal design. It’s a relatively small unit with a single touch screen for status monitoring and a space for the EVE Plus to come home to. The main 3-litre dust bag is accessible through the top, and a HEPA filter prevents odours from escaping. This also means you won’t be emptying the bag too often. ROIDMI highlights that the base’s more compact design apparently helps minimise noise while dust is being emptied.
Dealing with the Dust in a Smart Way with Some Quirks
To be really frank, I’ve never really understood the allure of robot vacuums, even after reviewing earlier models ages ago. In fact, they always seemed like more hassle than they were worth. However, the Xiaomi ROIDMI EVE Plus robot vacuum did a good job of convincing me otherwise.
The AI that comes programmed into the EVE Plus makes it one of the simplest, most seamless robot vacuum experiences I’ve had to date. It can intelligently detect the height of furniture and even detect slopes and ledges, and it was able to avoid getting stuck most of the time thanks to this. Even when it did get stuck, you simply had to place it immediately beside the trouble spot and it would avoid the area thereafter.
The way the EVE Plus cleans is also different from other robot vacuums. It intelligently partitions large areas into smaller rooms. This wasn’t immediately apparent when I was observing the vacuum itself, but when I glanced at the app, the map was sectioned into multiple smaller areas. Using this mapping and guidance, it optimises its route to clean the area efficiently. It’s also the only robot vacuum I’ve seen with a unique Y-shaped cleaning pattern that allows it to clean more effectively. If you’re like me, you’ll also turn on the 2X clean feature, which makes the EVE Plus do a second run when cleaning. Its ability to mop spaces with water is also a welcome feature, although mopping is limited to a 250m² area as the water tank on the robot vacuum is small.
However, the ROIDMI EVE Plus is not without its quirks. During our review period, the vacuum spontaneously lost its mapping data. This isn’t a major issue, as it is able to rebuild the data pretty quickly. The robot vacuum is also a little quirky when it comes to carpets and rugs: it’s able to handle thicker carpets but tends to wrestle with rugs.
It also communicates through the app, which is an added advantage – if your phone doesn’t put the app to sleep. The app never requests permission to run in the background either, so when you launch the ROIDMI app, it tends to spam you with notifications. It also cries for help with a voice prompt when it’s stuck.
Of Raised Slopes & Tassels – ROIDMI EVE Plus Kryptonite
If the ROIDMI EVE Plus were Supergirl, tassels and slopes would be its kryptonite. The robot vacuum seems to enjoy wrestling (and losing) with tassels, so rugs or carpets with tassels are things you may want to remove when using the EVE Plus. In fact, I had to cut the tassels off a floor mat because the EVE Plus had a bout with them and couldn’t break free. The other thing the EVE Plus has trouble with is raised slopes and platforms. This is particularly apparent if you use a stand fan in your room: if the fan has a base that is slightly sloped, the EVE Plus will try to run over the slope and eventually get stuck.
This was irritating at first. However, you can easily prevent this by creating no-fly zones on the map through the ROIDMI app.
Not Just About Removing Dust – It Zaps Bacteria with Activated Oxygen
Earlier we mentioned the HEPA filter that helps prevent odours from escaping. This is actually part of a larger disinfection system integrated into the base station of the EVE Plus. When dust is emptied from the robot vacuum into the main 3-litre bag, it is bombarded with activated, or ionised, oxygen. A little science refresher here: activated or ionised oxygen is a charged molecule that readily destabilises cellular and intracellular structures. Using this, ROIDMI has created a solution that kills 99.99% of bacteria, or so the company claims.
This technology is also responsible for the odourless storage of dust in the base station, as the ionised oxygen helps neutralise bad odours. Working together with the HEPA filter integrated into the docking station, it minimises the harmful particles and allergens that escape.
A simplified, smart robot vacuum that handles small to medium spaces, with a user experience that changed the mind of a non-believer
It’s very rare for a piece of technology to make me reconsider my initial experiences and change my mind. However, the ROIDMI EVE Plus robot vacuum did just that. It provided a seamless, simplified experience which convinced me that there is a time and place for smart cleaning devices. In my case, with a busy day-to-day life and older parents at home, the robot vacuum gave us a way to keep our most used spaces clean and dust-free without sacrificing our time.
The features of the EVE Plus are what made the difference. Its simple app and set-it-and-forget-it experience allowed me to get things done without needing to worry about the robot vacuum while it ran a cleaning cycle. If the vacuum had poorer manoeuvrability or got stuck regularly, this review would have been very different. The fact that it was able to handle a busy space without much hassle was a welcome surprise.
Google has been working on creating a better, more unified experience with its bread and butter – search. The tech giant is aiming for more contextually relevant search as it moves forward. To do this, it is turning to MUM, the Multitask Unified Model, to bring more relevance to search results.
MUM allows Google’s search algorithm to understand multiple forms of input. It can draw context from text, speech, images and even video, which in turn allows the search engine to return more contextually relevant results. It also allows the engine to understand searches phrased in more natural language and to make sense of more complex queries. When first announced, MUM could already understand over 75 languages, and it is much more powerful than the existing algorithm.
Contextual Search is the New Normal
Barely two months after the announcement, Google has begun implementing MUM in some of its most used apps and features. In the coming months, Google Search will be undergoing a major overhaul as the company creates a new, more visual search experience. Users will see more images and graphics in search results, and thanks to MUM, they will also be able to refine or broaden searches with a single click – zooming in on finer details such as specific techniques, or pulling back for a broader picture of the topic. In its announcement, Google used the example of acrylic painting: from the search results, users could zoom in to specific techniques commonly used in acrylic painting or get a broader picture of how the art form started.
The search engine uses data such as language and even user behaviour, in addition to context, to recommend broadening or narrowing searches. Google is even applying this to YouTube: it hopes to expand search context to include topics mentioned in YouTube videos later this year. Contextual, multitask search is also making its way to Google Lens, which will be able to make sense of visual and text data at the same time, and to Chrome. Don’t expect the new Lens experience too soon, though; the rollout is expected in 2022 after internal testing.
Context is also making search more “shoppable”. Google is allowing users to zoom in to specifics when searching. For instance, if you’re searching for fashion apparel, you will be able to narrow your search based on design and colour, or use the context of the original item to search for something else entirely. In addition, Google’s Shopping Graph will allow users to narrow searches with an “in stock” filter, although this particular enhancement will be available in select countries only.
Expanding Search to Make A Positive Impact
Google isn’t just focusing on MUM for its own benefit. The company has been busy putting its technology to work for change too, expanding contextual data and AI implementations to address environmental and social issues. While this is nothing new, some of the new improvements could impact us more directly than ever.
Environmental Insights for Greener Cities
One of the biggest things that could make a huge impact is Google’s Environmental Insights. While this isn’t brand new, the company is looking to make the feature more readily available to cities to help them be greener. Environmental Insights Explorer will allow municipalities and city councils to make decisions based on data from AI and Google Earth Engine.
With this data, cities and municipalities will be able to visualise tree density within their jurisdictions and plan for trees and greenery. This data will help tremendously in lowering city temperatures and will also help with carbon neutrality. The feature will be expanding to over 100 cities, including Yokohama and Sydney, this year.
Dealing with Natural Disasters with Actionable Insights
Google Maps will be getting more actionable insights when it comes to natural disasters. Of course, Google being an American company, its first feature is naturally more relevant to the US, where California and other areas have been hit by wildfires of increasing severity in recent years. Other countries such as Australia, Canada and parts of the African continent are also experiencing increasingly deadly wildfires, and it has become ever more apparent that the public needs data on them.
As such, Google Maps will be getting a layer that allows users to see the boundaries of active wildfires. These boundaries are updated every 15 minutes, allowing users to avoid affected areas, and the data will also help authorities coordinate evacuations and manage the situation on the ground. Google is also piloting similar insights for flash flooding in India.
Simplifying Addresses
Google is expanding and simplifying one of its largest social projects – Plus Codes. The project, announced just under a year ago, is becoming more accessible through a new app called Address Maker. Address Maker builds on Plus Codes but gives users and organisations a simplified way to create new addresses, letting governments and NGOs generate addresses at scale more easily.
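For a sense of how Plus Codes work under the hood, here is a small sketch using Google’s open-source Open Location Code reference library for Python. The coordinates are illustrative, and the library shown is our assumption of the simplest way to experiment with the standard; it is separate from the Address Maker app itself.

```python
# pip install openlocationcode  (Google's reference implementation)
from openlocationcode import openlocationcode as olc

# Encode a latitude/longitude (roughly central Kuala Lumpur) into a
# Plus Code: a short string that can stand in for a street address.
code = olc.encode(3.1390, 101.6869)
print(code)  # prints a 10-character Plus Code for this spot

# Decode it back into the small rectangle of ground it names.
area = olc.decode(code)
print(area.latitudeCenter, area.longitudeCenter)
```

Because any smartphone can compute a code like this offline, an address can be created for a home that has never had one, which is the core idea behind Address Maker.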
Startups have become the norm nowadays. They’ve become a hallmark not just of the tech industry but of a thriving economy. When it comes down to it, though, the startup arena can also be one of the most brutal, unforgiving arenas any founder can find themselves in. The world has its eyes on Southeast Asia – Malaysia included – as its startup ecosystem teeters on the verge of another boom. The startup arena has become one of the largest spaces for investment in the region, attracting some USD$1.48 billion in Q1 2021 alone, according to CB Insights, with a significant 40.6% of this investment driven by early-stage deals.
So, the big question is: what do we do with this data? We’ve heard tonnes of startup stories, so we’re offering a slightly different perspective. Let’s talk about the tech. Yes, not every startup is an app or tech-related. However, with today’s rapidly changing needs and challenges, it has become even more important for startups to be able to adapt and react accordingly – in a word, to be AGILE. Again, it’s a term we’ve heard or read countless times, but it’s become even more important now that startups live up to it; it could be the difference between survival and disappearing into the ether.
Fail Efficiently, Innovate Quickly
As a wise woman once sang, “Let’s start at the very beginning. A very good place to start…”. The world as we know it has changed over the past few decades; in fact, it’s changed in the past few years! The cost of launching a startup fell from around USD$5 million in 1999 to just over USD$50,000 in 2010, and it continues to decline.
The biggest difference? The Cloud. Cloud computing has significantly reduced the capital needed to start up an enterprise, and it will continue to do so. Companies like Amazon Web Services (AWS) are enabling agility and cost-efficiency. They allow startups to take off with no upfront costs but, most importantly, they encourage startups to experiment and fail fast, letting them move forward to innovate on their next approach. Each failure allows startups to learn, optimise and eventually succeed.
“The great thing about startups is the ability to start small and learn as you go. So long as you get the foundations right – such as ensuring you are secure by design from the outset – it won’t matter so much if you make the odd misstep along the way, because the consequences will be small.”
Digbijoy Shukla, Business Development Lead, Startup Business Development ASEAN, AWS
These flexibilities are key for startups because, it goes without saying, the road to their success depends on how fast they are able to present and prove their concept. The ability to provision and decommission servers and technological resources quickly and efficiently helps these startups further optimise and conserve resources. With this inherent efficiency built in, it falls to startups and their management to take advantage of the tools at their fingertips to enhance their offering, evolve their approach and embrace the insights they are privy to.
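As a concrete illustration of that provision-and-tear-down loop, here is a minimal sketch using AWS’s boto3 SDK for Python. The AMI ID and region are placeholders, not recommendations from AWS, and credentials are assumed to be configured locally.

```python
# pip install boto3  -- assumes AWS credentials are already configured
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")

# Provision a small instance for an experiment. The image ID below is
# a placeholder; substitute a real AMI from your region.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print("Experiment running on", instance_id)

# ...and decommission it the moment the experiment ends, so a failed
# idea costs minutes of compute rather than upfront capital.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The same few lines can be wrapped in scripts or infrastructure-as-code templates, which is what makes failing fast cheap in practice.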
The Right Cloud Computing Partners Can Determine the Success of Startups
The ability to fail fast and experiment comes second to the tools a startup has at its disposal. Cloud computing continues to be a necessity simply because of its robust offerings. Going digital is no longer about swapping typewriters for desktops; it’s about a set of tools that allow you to create, adapt and react to ensure that the company is meeting its clients’ and customers’ needs.
“It’s critical to align yourself with the right partners and support as early as possible. Folks like 500 Startups and AWS aren’t here to be new and trendy, we’ve been part of the core ecosystem infrastructure since the early days.”
Khailee Ng, Managing Partner, 500 Startups
Choosing the right cloud, then, is an essential part of a startup’s success. It’s like choosing the right business partner: you need someone who believes in your vision and complements your skills with the correct tools. With the number of cloud providers continually increasing, startups are forced to make a choice based on the needs and skill level of their organisation.
In our session with AWS, Khailee Ng, Managing Partner at 500 Startups, stressed that getting the right partner can be akin to getting that first investment. Programmes like AWS Activate enable startups to continue experimenting and functioning while upskilling and adapting, creating a simultaneous process in which founders, staff and enablers are continually interacting and improving. In fact, AWS Activate provides startups not just with an infusion of credits for experimentation and setting up, but also a platform to learn and implement the knowledge relevant to their success. AWS also provides technical support, which allows non-technical founders to benefit as well.
Scale, Pivot and React with Actionable Insights from the Cloud
Being on the Cloud is not always about cost or efficiency. It’s about the amount of data that will be available from the experimentation and even day to day usage of services and products. The data and insights that it gives will invariably determine the direction in which the startup can grow. In fact, if utilised properly, this data can even provide insights into new niches and services that can grow the startup’s user base and open new markets.
“In the initial six months, we were a car listings site. We pivoted the business in 2016, based on the data. We then extended our sales online, with customer benefits such as five days money back guarantee. Our (sales) pickup rate became much stronger, as we saw the same level of sales (as what we experienced) before the lockdowns. It’s really all about navigating successfully through this crisis.”
Eric Cheng, Co-Founder and CEO of Carsome, an integrated car e-commerce platform
Take, for instance, Malaysian-born startup Carsome, which started as a platform for searching for second-hand cars. The company ended up pivoting to complement its pre-existing service, expanding into the sale and purchase of these vehicles based on insights derived from the data its users generated. Those insights highlighted a niche the company could occupy that, more importantly, complemented its existing product. With them, Carsome was quickly able to adapt, react and develop an offering that enhanced its product and led to exponential growth, and it continues to use this data to enhance its service and ensure user happiness.
Of course, the Cloud doesn’t just provide actionable insights and agility. It’s also about offloading mundane tasks and leveraging offerings like AWS SageMaker. Implementing AI and machine learning to take over tasks that can and should be automated allows startups to focus their workforce on more pertinent tasks that differentiate them further. Focusing on what is important will allow startups to eventually scale. This doesn’t mean that vital tasks are offloaded, but it does mean that startups can maximise efficiency and optimise their workforce, allowing them to flourish.
The Cloud Is Not the Future, It is Now
We keep hearing that the Cloud is the future. In truth, startups and companies that fail to adopt and adapt are bound to be held back by their own inefficiencies and stigmas. It is crucial that we realise the Cloud is now; it’s not the future, at least not anymore. Leveraging the Cloud and its many tools is a pivotal skill that startups need to develop. In fact, it would not be unfounded to say it is a skill all organisations should already be developing.
We are at a stage where technology has already permeated every aspect of our lives, from our entertainment to our work and even our day-to-day routines. Why then are we hesitant to adopt it at scale to increase our own efficiency and productivity? Why are we hesitant to put technology that is already available to use to increase profitability?
Startups can no longer afford to wait to adopt cloud computing. In fact, without the proper Cloud and the willingness to learn how to use it, they are setting themselves up for failure. You don’t need to be a rocket scientist to put technology to work for you these days.
The world is arguably never going to be the same after the COVID-19 pandemic. The sentiment rings true in many aspects and sectors even now, a year on. The effects of the pandemic have pushed our normal into a digital shift, with more companies accelerating their digital transformation journeys, some further along than others. The adoption of technology has created waves and trends that seem to be influencing everything in our lives.
In a nutshell, these trends are going to change the way we approach a whole myriad of things, from the way we work to the way we shop. We’re seeing businesses like your regular mom-and-pop shops adopt cloud technologies to help spur growth, while digital-native businesses and companies are doing the same to adapt to ever-changing circumstances. The adoption of technology, and cloud technology in particular, is building resilience in businesses like never before.
Our interview with the Lead Technologist for the Asia Pacific Region at Amazon Web Services (AWS), Mr Olivier Klein, sheds even more light on the trends that have and continue to emerge as businesses continue to navigate the pandemic and digitisation continues.
The Cloud Will Be Everywhere
As we see more and more businesses adopt technologies, a growing number of large, medium and small businesses will turn to cloud computing to stay competitive. In fact, businesses will be adopting cloud computing not only for agility but due to increasing expectations that will come from their customers. However, when referring to “The Cloud”, we are not only talking about things like machine learning, high performance computing, IoT and artificial intelligence (AI); we’re also talking about the simple things like data analytics and using digital channels.
Digitisation journeys are creating expectations for businesses to be agile and adaptable. Businesses with humble beginnings, like Malaysia’s TF Value-Mart, have been able to scale thanks to their willingness to modernise and migrate to the cloud. Their adoption of cloud technologies has created a more secure digital environment for their business and augmented their speed and scalability, allowing them to grow from a single mom-and-pop store in Bentong in 1998 to over 37 outlets today.
The demand for cloud solutions is increasing and there’s no denying it. Even businesses like AWS have had to expand to accommodate the growing demand for digital infrastructure and services. The company has scaled from 4 regions in its first 5 years to 13 regions today, with six more on the way, four of them in Asia Pacific: Jakarta, Hyderabad, Osaka and Melbourne.
Edge Computing Spurred by 5G & Work From Anywhere
In fact, according to Mr Klein, AWS sees the next push in Cloud Computing coming from the ASEAN region. This will, primarily, be spurred by the region’s adoption of 5G technologies. Countries like Japan and Singapore are already leading the way with Malaysia and other countries close behind. The emergence of 5G technologies is creating a new demand for technologies that allow businesses to have a more hybrid approach to their utilisation of Cloud technologies.
As companies continue to scale and innovate, a growing demand is emerging for lower latencies. While 5G allows low-latency connections, some businesses are beginning to require access to scalable cloud technologies on premises, with data security and low-latency computing the primary drivers behind this demand. Businesses are innovating faster than ever before and need some of their workloads to complete more quickly, with faster results. As a result, we see a growing need for services like AWS Outposts, which lets businesses bring cloud services on premises; and with AWS’s recent announcements at re:Invent, Outposts is becoming even more accessible.
Edge computing is also part and parcel of cloud computing as the way we work continues to change. With most businesses forced to work remotely during the pandemic, the trend seems to be sticking; companies are beginning to adopt work-from-anywhere policies, which allow for more employee flexibility and increased productivity. That said, not all workloads have been able to follow workers wherever they go. With the adoption of 5G, that is changing: businesses will be able to adopt services like AWS Wavelength to enable low-latency connections to cloud services, empowering work-from-anywhere policies.
The same rings true when it comes to education. The growth in the adoption of remote learning will continue. Services like Zoom and BlueJeans have become integral tools for educators to reach their students and will see their roles expand as educational institutions recognise the increased importance of remote learning.
Machine Learning is The Way
As edge computing and the Cloud become the norm, so too will machine learning. Machine learning is enabling companies to adopt new approaches and adapt to changing circumstances, and its adoption has set new customer expectations that will, in turn, continue to spur further adoption. In fact, Mr Klein tells us that businesses will be adopting machine learning not only for automation but also to provide better customer experiences, and a growing number of their customers will come to expect it.
Machine learning’s prevalence is going to grow in the coming years; that’s a given. Customers and users have already had their experiences augmented by AI and machine learning, which has created, and continues to create, expectations of what user experiences should be. Take, for instance, Netflix, which has been using machine learning and AI to recommend and surface content to its users; newer streaming services that lack these integrations are seen as subpar and criticised by users.
Aside from user experiences, businesses are getting more accustomed to using machine learning for insights in decision making and for automating business operations. It has also enabled companies to innovate more readily. These conveniences will be among the biggest factors in machine learning’s growing prevalence, alongside increased adoption driven by the development of autonomous vehicles and other augmented solutions.
Companies like Moderna have been utilising machine learning to help create and innovate in their arena. They have benefitted from adopting machine learning in their labs and manufacturing processes. This has also allowed them to develop their mRNA vaccines which are currently being deployed to combat COVID-19.
To Infinity & Beyond
The growing adoption of digital and cloud solutions is also spurring a new wave of technologies that give businesses deeper insights, including insights gained from satellite imaging. Data such as ground imaging and even ocean imaging can be used to generate actionable insights, with use cases beginning to emerge from businesses involved in logistics, search and rescue and even retail.
However, the cost of building and putting a satellite into orbit is prohibitive for most businesses. That said, we already have thousands of satellites in orbit, and it makes more sense to use those to gain these insights. AWS has already introduced AWS Ground Station, a fully managed service that gives businesses access to satellites to collect and downlink data, which can then be processed in the AWS Cloud.
These trends are simply a glance into an increasingly digitised and connected world where possibilities seem to be endless. Businesses are at the cusp of an age that will see them flourish if they are agile and willing to adopt new technologies and approaches that are, at this time, novel and unexplored.
Acer has been busy in the recent past expanding its portfolio to become a more well-rounded tech and lifestyle company. In recent years, it has introduced the Predator Shot, an energy drink targeted at gamers; the Predator Gaming Chair, a collaborative effort with OSIM; and even a brand-new brand, Acerpure. The company isn’t stopping there, though. It looks like it is expanding into the healthcare segment, and it’s happening really soon.
In an interview session with the media, President of Acer Pan Asia Pacific Operations, Mr Andrew Hou, unwittingly revealed that the company would be exploring opportunities in healthcare in the near future. Upon further investigation, we found that Acer has already set up a new subsidiary, Acer Healthcare. The company is listed in the Tracxn database, noted as having been founded in 2019, and Acer has also set up an official website for Acer Healthcare.
It looks like Acer is looking to leverage its prowess in dealing with data and technology to help bridge the narrowing gap between technology and medicine. Acer Healthcare appears to be looking into AI-powered devices to help with diagnosis and patient monitoring, a field that has been growing in the past few years, with multiple startups and companies exploring opportunities and new technologies to better diagnose patients.
Acer Healthcare has already released a product called VeriSee DR, an AI-assisted solution for diagnosing diabetic retinopathy, a condition that affects close to 130 million people worldwide. Using VeriSee DR, the condition can be diagnosed by using AI to analyse pictures of a patient’s ocular fundus (the interior of the eye) for signs of diabetic retinopathy. According to the company’s website, the technology achieves 95% sensitivity and 90% specificity in diagnosis. In fact, Acer Healthcare has ongoing clinical trials with VeriSee DR and has published research on it in multiple medical journals.
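To put those two figures in context: sensitivity is the share of true cases a tool flags, and specificity is the share of healthy cases it correctly clears. Here is a quick illustration in Python using our own made-up screening numbers, not data from Acer’s trials.

```python
def sensitivity(tp, fn):
    # Of all eyes that truly have diabetic retinopathy, how many were flagged?
    return tp / (tp + fn)

def specificity(tn, fp):
    # Of all healthy eyes, how many were correctly cleared?
    return tn / (tn + fp)

# Hypothetical screening of 1,000 fundus images, 200 of which truly show
# retinopathy: a 95%-sensitive, 90%-specific tool would flag about 190 of
# the 200 true cases (missing 10) and clear about 720 of the 800 healthy
# eyes (with 80 false alarms).
print(sensitivity(tp=190, fn=10))   # 0.95
print(specificity(tn=720, fp=80))   # 0.9
```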
In addition to VeriSee DR, Acer Healthcare appears to be focusing on research and development of new AI-based diagnostic technologies. Of note are a few currently listed research projects, including the diagnosis of heart arrhythmia through AI analysis of data continually collected by an Acer Leap Ware wearable device, and the diagnosis of renal impairment through retinal fundus imaging. While the company’s focus does seem to be on diagnostics, it is also working on technologies for medical records and referrals.
With the world moving to embrace the work from anywhere culture, it is becoming increasingly imperative that we have capable setups both at work and at home. A third of the workforce is expected to continue splitting their time between home and office even after the pandemic. HP is looking to provide an option with their new HP EliteOne 800 All-in-One (AiO).
The new AiO from HP is being touted as a virtual conferencing powerhouse for both home and office. It comes in two sizes: 23.8-inch and 27-inch. Both versions come with an integrated pop-up webcam, which not only lets you connect with colleagues and loved ones but can also be put away when not in use, helping maintain your privacy. HP is also offering an optional dual-facing 5-megapixel camera that brings additional features, including intelligent face tracking, which follows the user’s face so they can move about during video calls without going out of frame. It also dynamically adapts to lighting for optimal video quality.
The HP EliteOne 800 AiO comes with AI Noise Reduction, which helps minimise background sounds. The integrated AI reduces not only outbound noise but inbound noise as well, helping ensure you’re heard on the other end even if things get a little noisy. This is paired with HP’s run-quiet design, which maximises airflow and keeps the system running cool and quiet. Your data is also kept secure when you walk away from the PC thanks to HP’s Presence Aware. The AiO is powered by the latest Intel Core processors, paired with high-performance memory to fit your needs.
Together with the AiO, HP also announced a new line-up of EliteDesk desktops which are configurable to meet the needs of any workspace. The EliteDesk 800 comes in three form factors: the Desktop Mini, Small Form Factor and Tower. The Desktop Mini and Small Form Factor are designed to minimise the desktop’s physical footprint while keeping it cool and packed with power, while the Tower is built for ultimate expandability.
Pricing & Availability
No official pricing has been announced by HP for the AiO or desktops just yet. However, it is expected that they will be available worldwide in May.