Tag Archives: Artificial Intelligence

Vimigo Announces Development of New AI-Integrated HR Mobile App

Vimigo

After successfully securing RM2.25 million through equity crowdfunding, Vimigo has revealed its new plans to develop an AI-powered HR mobile app and expand across the Asian market. In Malaysia, the company is known for its human resource (HR) performance reward system for startups.

With the funds raised, Vimigo plans to develop an AI-powered HR mobile app. This app will leverage AI to suggest SMART (specific, measurable, attainable, realistic, and time-bound) goals for each employee’s key performance metrics.
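As a rough illustration of what suggesting SMART goals implies in practice, here is a minimal sketch (entirely hypothetical, not Vimigo's actual implementation) of validating that a generated goal suggestion carries all five SMART attributes before it is shown to an employee:

```python
# Hypothetical sketch: checking that an AI-suggested goal carries all five
# SMART attributes. Illustrative only; not Vimigo's actual implementation.

SMART_FIELDS = ("specific", "measurable", "attainable", "realistic", "time_bound")

def missing_smart_fields(goal: dict) -> list:
    """Return the SMART attributes a goal suggestion is missing or left blank."""
    return [field for field in SMART_FIELDS if not goal.get(field)]

goal = {
    "specific": "Increase monthly qualified sales leads",
    "measurable": "From 40 to 60 leads per month",
    "attainable": "Based on last quarter's 15% growth",
    "realistic": "Within the current marketing budget",
    # "time_bound" deliberately omitted to show the check at work
}

print(missing_smart_fields(goal))  # → ['time_bound']
```

A real system would pair a check like this with the model that drafts the goal text, regenerating or flagging suggestions that come back incomplete.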

The app will also incorporate a gamified performance rewards system, which aims to create tangible progression milestones to motivate employees in a similar way to levelling up in a game. The app is expected to be released on iOS, Android, and web by the end of 2023.

In addition to app development, the company has set its sights on market expansion to several Asian countries, including Taiwan, Singapore, Thailand, the Philippines, Indonesia, and Vietnam by 2025.

Alibaba Cloud Announces New AI Image Generation Model, Tongyi Wanxiang

Alibaba Tongyi Wanxiang

Alibaba Cloud has introduced its latest AI image generation model, Tongyi Wanxiang (‘Wanxiang’ means ‘tens of thousands of images’). This model is designed to help businesses improve their creativity by generating high-quality images across different art styles.

Text-to-image generation

The company states that Tongyi Wanxiang can generate detailed images in response to text prompts in both Chinese and English, in styles including watercolours, oil paintings, sketches, and animations. It can also transform existing images into new ones with similar styles, and apply style transfers to create visually appealing compositions.

Tongyi Wanxiang is powered by Alibaba Cloud’s advanced technologies and leverages multilingual materials for enhanced training. This AI image generation model optimizes the high-resolution diffusion process to strike a balance between composition accuracy, detail sharpness, and clean backgrounds.

Alibaba Cloud developed it using Composer, the company’s proprietary large model that offers greater control over the final image output, including spatial layout and colour palette.

ModelScopeGPT Launched for Sophisticated AI Tasks

The company has also launched ModelScopeGPT, a versatile framework that utilizes Large Language Models (LLMs) to accomplish complex AI tasks across language, vision, and speech domains. ModelScopeGPT connects with an extensive array of domain-specific expert models, allowing businesses and developers to access and execute the best-suited models for sophisticated AI tasks. The platform is available for free and offers a rich Model-as-a-Service (MaaS) ecosystem.

Copilot-ing the Future of Work with Generative AI

“Artificial Intelligence (AI) is revolutionising the way we work.” This phrase is undoubtedly something we’ve become familiar with over the past few years. However, we’ve yet to see the impact of AI outside manufacturing and data science – that is, until now. With generative AI taking centre stage thanks to services like OpenAI’s ChatGPT, that phrase couldn’t be more relevant. AI is taking the leap from automation to contextual intelligence, which will benefit more people across more industries.

bionic hand and human hand finger pointing
Photo by cottonbro studio on Pexels.com

Since the pandemic, we’ve seen a revolution in the digitization of work. A large portion of workers – like ourselves – still find that working remotely makes them more productive and cuts time wasted on commutes. In fact, Microsoft’s 2021 and 2022 Work Trend Index drew sharp focus on the subject. This year, the conversation is turning towards generative AI and its role in leapfrogging work to the next level.

A Digital Leap with A Heavy Digital Debt

Speaking of the digitization of work, the recent digital leap – while a long time coming – has resulted in workers accumulating digital debt. What exactly is this? It’s that backlog of emails, that information dump, the work chats and even those meetings and their minutes that continue to pile up even as we work through them. While the digital leap we just experienced has been amazing for work and interpersonal communication, we’re finding it harder to cope with the sheer volume of information and communication we generate.

people on a video call
Photo by Anna Shvets on Pexels.com

In fact, Microsoft’s Work Trend Index found that 2 full work days are spent simply dealing with emails and inefficient meetings. This is exacerbated by the fact that we’re in 3 times more meetings at work since the pandemic. Leadership in organisations has also taken notice, observing that workers are spending too much time with their noses to the grindstone and not enough time innovating. This has created an increase in activity but a lag in results. 77% of Malaysian respondents in Microsoft’s survey noted that they don’t have enough time and energy to get their work done. This isn’t surprising given that we’re dealing with a continually growing digital debt.



“As work evolves with AI, leaders and employees alike are looking at how technology can help them be more productive in their workplace. With AI, there is now an opportunity for us to reimagine the way we work and collaborate in the workplace of the future.”

K Raman, Managing Director of Microsoft Malaysia


AI and, in particular, generative AI like ChatGPT and Microsoft’s Copilot are part of the solution to this digital debt. They will help workers and leaders discern signal from the immense volume of noise. Relevant information can be distilled from data both internally and on the open web in seconds, saving hours of work. What’s more, AI like Copilot will help create more effective meetings and, eventually, a leap in innovation as menial tasks are offloaded to these tools. In tandem, leaders and employees may shift to a more asynchronous form of communication, allowing for more effective communication and meetings overall.

AI Isn’t Here to Replace, It’s Here to Augment

With the increased adoption of AI at work, many of us are feeling the pinch of possibly being replaced by a soulless tool; 62% of the respondents to Microsoft’s Work Trend Index in Malaysia share this concern. What we can expect to see in the coming years, however, is the integration of AI into our work to lessen repetitive tasks. That may well mean that some of our job roles will change.


We’re already seeing a large portion of workers and leaders willing to offload and delegate work to AI to lessen workloads – a whopping 84% of respondents indicated as much. These tasks can be administrative, analytical or even creative. While it is concerning that even creative tasks can be offloaded to AI, the evolution of work this spurs will, hopefully, drive more innovative work and boost productivity. This is where most Malaysian managers find the most value in AI.

Introducing Microsoft 365 Copilot | Your Copilot for Work

What will develop instead is an AI-employee alliance in which work is completed through a complementary pairing of AI insight and employee ingenuity. This will require employees to have an aptitude for using AI. While it’s still early days when it comes to discerning AI-specific skills in the workforce, Microsoft has identified some key competencies that will be crucial for workers – and 90% of leaders already anticipate this need.

A Need for AI Skills to Empower a Better Copilot

That said, employees will need to develop core competencies that empower them to leverage AI as an integral productivity tool. Digital skills remain a pain point for employers, and 76% of Malaysians already feel ill-equipped when it comes to AI skills.


The need to familiarise yourself with AI tools like ChatGPT cannot be overemphasised at this point. It’s no longer a question of “if” but of “when” generative AI will make its way into your workflow. Knowing how to prompt generative AI tools to get the right outcome will be one of the key competencies of the AI revolution. It will also be crucial to understand that these AI tools are simply that – tools to get work done. They are copilots, not autopilots.
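One prompting habit that transfers across tools is structuring a prompt with an explicit role, context, task and output format. The sketch below is illustrative only; the field names are a common convention, not any vendor's required format:

```python
# Illustrative sketch of a structured prompt template for a generative AI
# tool such as ChatGPT or Copilot. The structure (role/context/task/format)
# is a common prompting convention, not a vendor-specific requirement.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Compose a prompt stating who the AI should act as, what it needs
    to know, what to do, and how to present the result."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as: {output_format}"
    )

prompt = build_prompt(
    role="an executive assistant",
    context="a 45-minute meeting transcript about Q3 budget planning",
    task="summarise the key decisions and list action items with owners",
    output_format="a bullet list of no more than 8 points",
)
print(prompt)
```

Being explicit about all four parts tends to produce far more usable output than a bare one-line question, whichever tool the prompt is pasted into.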


Source: Facebook

“There is a need for a skilled workforce to reap the benefits of AI-powered technology and solutions. Human-AI collaboration is going to be critical as we shift from AI on autopilot to AI as our copilot. The most pressing opportunity and responsibility for every leader is to understand how to leverage AI to remove the drudgery of work, unleash creativity, and build AI aptitude,”

K Raman, Managing Director of Microsoft Malaysia


Industry leaders like Microsoft are already incorporating AI such as GPT and DALL-E into their offerings. With the recent announcement of Windows getting Bing’s GPT-enabled assistant, it comes as no surprise that Microsoft is also integrating its aptly named ‘Copilot’ into more products. In fact, these integrations are already being tested by some 600 enterprise customers, including the likes of Chevron and Goodyear. Products like Microsoft 365 and Microsoft Viva will benefit most from this integration.

Next generation AI in Power Platform is changing how you develop low-code solutions

Copilot is already incorporated into products like Outlook, OneNote, Viva Learning, Whiteboard and PowerPoint. In Whiteboard, Copilot will enable more creative and effective meetings and brainstorms on Microsoft Teams: you will be able to prompt it to generate and organise ideas, create designs and even summarise Whiteboard content. In OneNote, Copilot will use prompts to draft plans, generate ideas, create lists and organise information for easier reference. In Outlook and Viva Learning, natural language will be used to empower better writing through tips and suggestions, and to customise learning journeys and design learning paths for desired outcomes. PowerPoint will be getting a DALL-E infusion, allowing images to be customised to complete presentations.

As is already evident, the AI revolution is picking up steam. AI will quickly spur an evolution of work that puts its earlier role in automation to shame, finding a space nestled in the day-to-day workings of many industries. Workers like us will need to adopt, adapt and integrate generative AI in a way, and at a scale, that has not been seen before to accomplish more with less hassle.

[Google I/O 2023] Google Bard – What is That?

After Google I/O 2023 last week, you might have noticed your Android smartphone pushing a notification to you: a prompt to try Google’s updated Bard. Most of you on Google’s email platform, Gmail, might also have received an email asking you to try Bard. If you follow AI (artificial intelligence) news, you might already know of Google’s Bard alongside OpenAI’s ChatGPT. To everyone else, it might sound entirely foreign.

In simple terms, Google Bard is Google’s version of ChatGPT. While ChatGPT is developed by OpenAI, Bard is completely Google’s. Before jumping to the conclusion that they are the same thing, though, keep in mind that ChatGPT and Bard are two separate platforms altogether. Both are categorised as generative AI, but they differ from one another.

Unlike ChatGPT, which has existed for some time and is in its fourth iteration, Google Bard is fresh out of the oven; two months out of the oven, to be fair. Like ChatGPT, Google Bard was launched as an experiment. Also like ChatGPT, the technology behind Google Bard is not exactly new.

What is Google Bard?

Source: Google

As mentioned, Google Bard is a generative, creative AI by Google. Rather than overcomplicating the explanation, Google’s FAQ says that Bard is technically based on LaMDA (Language Model for Dialogue Applications), Google’s very own language model built for conversational purposes. When we say conversational, we do not mean it will be exactly like a conversation with a human being, but LaMDA aims to get close.

To be fair, Google’s conversational AI is not something you have never seen before; you see it in Google Assistant whenever you call out “Hey, Google,” or “Okay, Google”. You can even have the clever Assistant get you a booking at a restaurant by making the call itself, instead of you calling the restaurant yourself. In a demo a few years ago, Google’s Assistant sounded so natural that the person on the other end of the line could not even tell they were speaking to an artificial caller. This proves that LaMDA works and has a place in the world, and our many uses of Google Assistant, even with Google Nest systems, are proof enough that conversational AI has plenty of applications in the current world.

Bard is not just a conversationalist, though; it is more than that – a generative AI of sorts. It still has its roots in LaMDA, but it is now a lot more. It is made as a collaborative tool: you can use it to generate ideas, tabulate and make sense of data, plan things, design tools and workflows, collate your calendars, and even learn.

According to Google, Bard is made to create original content at the request and behest of individual users, meaning results can differ from one person to another. Because it is Google, any request or question you pose to Bard might prompt it to look into hundreds or thousands of sources and draw conclusions, or present results in a way that does not infringe copyright or amount to plagiarism. Where it does take content from another source, Bard will acknowledge and cite its sources. Google Bard is not built to write your college essay, though; it is built to be a collaborator that manages your work and your life, making things somewhat more seamless than just Googling them. They do have a ‘Google It’ button for you to make full use of Google’s search engine, though.

It is not a 100% solution for your own research and use case, though. Google has stressed that Bard is an experiment – an opportunity for its AI engines to learn even more, at an accelerated pace, with public input and use. Google Bard is meant to be iterated on, which means its current form will not be final. Google also mentions that Bard, in its current form, will not be 100% accurate at all times; hence the ‘Google It’ button. While it is freely available, Google says that Bard is not meant to be used commercially or for advertising purposes at this time.

Why Bard?

Source: Google

The entire existence of Bard could be read as a sharp response to OpenAI’s ChatGPT. The success of that AI platform has all but forced Google to quickly introduce its own AI tool to the public. If Google is to be believed, it could offer the most powerful AI tool for the masses.

At the recent Google I/O 2023, Google officially embraced Bard and announced that it has moved Bard to PaLM 2, an improved language model that extends Bard’s capabilities beyond the conversational focus of the LaMDA model. PaLM 2 gives Bard the ability to code and program. It also allows Bard to solve more complex mathematical problems and work through more complex reasoning, which lets Bard make better decisions over time.

As of Google I/O 2023, Google has opened the Bard experiment to more than 180 countries, and it is now available in Japanese and Korean. Google plans to open the experiment to more regions and make Bard available in about 40 languages. On top of more languages and regions, where the older Bard was mostly conversational via text, the improvements announced at Google I/O 2023 add some visual flavour to your conversations. Google has integrated Google Lens into Bard, allowing you to scan photos of your things at home and let Bard come up with whatever captions you might want. You can even add photo references to a Bard-generated itinerary when you travel.

But it is not just surface updates for Google Bard. At Google I/O 2023, Google announced that Bard will not be a tool isolated from other systems. Google is adding an “export” button for collaboration purposes, such as exporting and running code in Python. You can directly copy email responses into Gmail or Google Docs, if you want. If you want more out of Bard, you can even expect Adobe Firefly integration in the future for even more powerful generative tools, like complete poster designs based on Google’s and Adobe’s combined algorithms. Google has also announced that it is working with more partners like Kayak, OpenTable, ZipRecruiter, Instacart, Wolfram and Khan Academy to integrate Bard into their services and products.

Where OpenAI lets you plug its API in anywhere and get it working with minor tweaks, Google is not looking to do just that. Google is offering deep integration with its partners to create even more – to become an even more powerful tool in your toolkit for the future. It looks to open up more opportunities and applications for the average user through deeper, more curated collaborations with partner brands. While that may not be the best approach for everyone, it is a way forward for more integrated services and solutions that serve individuals and businesses better. It even allows partnering companies to understand their users and customers better in some cases.

Accelerating AI-driven outcomes with Powerful Super Computing Solutions

This article is contributed by Mak Chin Wah, Country Manager, Malaysia and General Manager, Telecoms Systems Business, South Asia, Dell Technologies

As artificial intelligence (AI) technology continues to evolve and grows in capability, it’s becoming a growing presence in every aspect of our lives. One needs to look no further than voice assistants, navigation like Waze, or rideshare apps such as Grab, which Malaysians are familiar with.

robot pointing on a wall
Photo by Tara Winstead on Pexels.com

From machine learning and deep learning algorithms that automate manufacturing, natural language processing, video analytics and more, to the use of digital twins that virtually simulate, predict and inform decisions based on real-world conditions, AI helps solve critical modern-life challenges to benefit humanity. In fact, we have digital twin technology to thank for assisting in the bioengineering of vaccines to fight COVID-19.

AI is changing not only what we do but also how we do it — faster and more efficiently.

Advancing Human Progress

For companies like Dell Technologies that are committed to advancing human progress, AI will play a big part in developing solutions to the pressing issues of the 21st century. The 2020s, in particular, are ushering in a fully data-driven period in which AI will help organisations and industries of all sizes accelerate intelligent outcomes.

woman holding tablet computer
Photo by Roberto Nickson on Pexels.com

Organisations can harness their AI endeavours through high-performance computing (HPC) infrastructure solutions that reduce risk, improve processing speed and deliver deeper insights. By extracting value through AI from the massive amounts of data generated across the entire IT landscape — from the core to the cloud — businesses can better tackle challenges and make discoveries to advance large-scale, global progress.

Continuing to Innovate

Through transformative innovation, customers can derive the insights needed to change the course of discovery. For example, Dell Technologies equipped Monash University Malaysia with top-of-the-line HPC and AI solutions[i] to accelerate the university’s research and development computing capabilities at its Sunway City campus in Selangor, helping it solve complex problems across its significant research portfolio.

Financial services, life sciences and oil and gas exploration are just a few of the other computation-intensive applications where enhanced servers will make a difference in achieving meaningful results, for humankind and the planet.

At the heart of AI technology are essential building blocks and solutions that power these activities. For example, Dell’s existing line of PowerEdge servers has already contributed to transformational, life‑changing projects, and will continue to power human progress in this generation and the next.


The most demanding AI projects require servers that offer distinct advantages – specifically built to deliver higher performance and even more powerful supercomputing results, and yet engineered for the coming generation to support the real-time processing requirements and challenges of AI applications with ease.

In addition to helping deploy more secure and better-managed infrastructure for complex AI operations at mind-boggling modelling speeds, these transformative servers will help meet organisations’ biggest concerns in productivity, efficiency and sustainability, while cutting costs and conserving energy.

Transforming Business and Life

While organisations are at different stages in their adoption of AI, its transformational impact on business and life itself can no longer be ignored. Human progress will depend on the ability of AI to make communication easier, personalise content delivery, advance medical research, diagnosis and treatment, track potential pandemics, revolutionise education and implement digital manufacturing.

In Malaysia, AI is progressively being recognised as the new general-purpose technology that will bring about economic transformation on the scale of the Industrial Revolution, yet adoption of Industry 4.0 remains sluggish, with only 15% to 20% of businesses having truly embraced it. The government, on the other hand, is taking this emerging technology seriously, having set out frameworks for the incorporation of AI across numerous sectors of the economy. These include the Malaysia Artificial Intelligence Roadmap 2021-2025 (AI-Rmap) and the Malaysian Digital Economy Blueprint (MDEB), spearheaded by the MyDIGITAL Corporation and the Economic Planning Unit.

Moving Forward

With servers and HPC at the heart of AI, modern infrastructure needs to match the unique requirements of increasingly complex and widely distributed workloads. Regardless of where a business is on the AI journey, the key to optimising outcomes is having the right infrastructure in place, ready to seamlessly scale as the business grows and positioned to take on the unexpected, unknown challenges of the future. To do that requires having the expertise – or a trusted partner that does – to help at any and every stage, from planning through to implementation, to make smart server decisions that will unlock the organisation’s data capital and support AI efforts to move human progress forward.


[i] Based on Dell Technologies helps Monash University Malaysia enhance its R&D capabilities with HPC and AI solutions Media Alert, November 2022

Adobe Firefly, the Next-Generation AI Made for Creative Use

AI (Artificial Intelligence)-generated graphics are not a new thing. There are tools like OpenArt and Hotpot these days where you can simply type in keywords for the image you want and let the engine generate art for your use. Even before AI-generated graphics, though, the implementation of AI within the creative industry was nothing new. NVIDIA has used its own AI engine to write an entire symphony, and even to create 3D environments using its ray tracing engines. Adobe, too, has something it calls Sensei, an AI tool implemented across its creative suite to understand and recognise objects better, fill in details more naturally where needed, and edit videos, images or text quickly and efficiently. Now, Adobe has Firefly.

Firefly is not a new AI system separate from Adobe’s Sensei. It is part of the larger Sensei generative AI, alongside technologies like Neural Filters, Content Aware Fill, Attribution AI and Liquid Mode implemented across several Adobe platforms. Unlike those platform-specific implementations, though, Adobe is looking to put Firefly to work across a number of platforms in its Creative Cloud, Document Cloud, Experience Cloud and even Adobe Express.

So, what is Adobe Firefly, we hear you ask? It is, technically, Adobe’s take on what a creative generative AI should be. Adobe is not limiting Firefly to just image generation, modification and correction. It is designed to let any content creator create even more without needing to spend hundreds of hours learning a new skill. All they need to do is adopt Firefly into their workflow, and they will be able to produce content they never could before, be it images, audio, vectors, text, videos or even 3D materials. You can get different content every time with Firefly, too; the possibilities, according to Adobe, are endless.

What makes Firefly so powerful is the entirety of Adobe’s experience and database behind it. Adobe Stock’s images and assets alone are a huge library for the AI to draw on, and the implementation can also use openly licensed assets and public-domain content when generating its output. The tool, in this case, will prevent IP infringement and help you avoid plenty of future litigation.

Adobe Firefly Cover
Source: Adobe

As Firefly launches in its beta state, it will only be available as an image and text generation tool for Adobe Express, Adobe Experience Manager, Adobe Photoshop, and Adobe Illustrator. Adobe plans to bring Firefly to the rest of its platforms where relevant. It is also pushing for more open standards in asset verification, which will eventually include proper categorisation and tagging of AI-generated content. Adobe also plans to make the Firefly ecosystem more open, with APIs for users and customers to integrate the tool into their existing workflows. For more information on Adobe’s latest generative AI, you can visit their website.

Concept Nyx and Explorations for the Future of Connection

How will we connect with colleagues in five to ten years’ time? Will we all be interacting with holograms? Fully immersed in virtual worlds? Or will the reality be much closer to how most of us work from our laptops today?

Virtual worlds and immersive experiences could offer exciting new ways to connect with others – and our content. And with people more dispersed and working patterns more personalised than before, how we collaborate and get things done has never been more important.

Dell Concept Nyx Ecosystem 4

It’s my team’s role to dig into future trends and technologies, experiment with solutions and reimagine experiences. Though immersive environments will play a role in the future of work, face-to-face meetings, instant messages, collaboration tools, and video calls aren’t going anywhere. That’s why we’re focusing on the user experience and homing in on everyday micro-moments that could be disruptive as we potentially bounce between physical, digital and virtual worlds in the future.

We’re asking questions like: How will people interact at the intersections of these worlds? What tools will people need to move between these locations seamlessly? What if people don’t want to wear a headset and dive into a virtual world for 8 hours a day – would they be excluded from future projects or collaboration opportunities?

Intelligent, familiar tools for future interactions

Using Concept Nyx’s ability to deliver compute all around, powered at the edge, we have been exploring how familiar devices and peripherals could be paired with Artificial Intelligence (AI) to work together as an ecosystem to deliver easily accessible and immersive experiences beyond gaming.

Dell Concept Nyx Ecosystem 1

Our labs are packed with curated immersive demonstrations and concepts to help us test and explore how Dell could help people move between various spaces and tasks intuitively in the future. From fully immersive Virtual Reality (VR) builds to Mixed Reality (XR) experiences featuring displays and other tools that remove the need for a VR headset, these environments have helped us evolve concepts like the Concept Nyx Companion. As a lightweight tablet-style device that could be viewed and accessed in VR and XR environments, the concept could be a consistent tool throughout all these spaces and could ensure a user’s content is in one place as they move between spaces and tasks. No more taking photos of whiteboards or copying notes to be uploaded to a different space – users could just screenshot their project space and/or easily copy content for sharing across screens.

Together with the Concept Nyx Stylus, you could input notes by voice or via pen, and drag + drop them into digital and virtual collaboration spaces, and even use the voice activation for AI image creation – perfect for non-aspiring artists! All these tools could also seamlessly be used alongside the Concept Nyx Spatial Input in a future desktop environment with a keyboard and mouse, and possibly 3D displays too. We’ve been looking at creative ways to connect these traditional tools for a clutter-free space, and we’ve also been thinking about intuitive gestures for interacting with content – for example, using the tip of the Stylus for writing and the top of the Stylus for interacting with onscreen content or using the Spatial Input as a dial for a 360 view or for zooming in on details.

Dell Concept Nyx Ecosystem 2

We’ve even been thinking through how people might show up in future digital and virtual spaces. We’ve all been on video calls where we need to step away for a moment to answer the door or tend to a pet or child off-camera. Instead of leaving a blank screen, empty seat, or static 2015 headshot, imagine with a wave of your hand, you could stay present as an intelligent avatar while you step away or stay off camera completely. To explore this, we’ve been experimenting with gestures and movement tracking and building on our imaging technology and video conferencing expertise to create the Concept Nyx Spatial camera, which when paired with AI software, could learn a user’s expressions and mannerisms to deliver a more authentic representation of them for future interactions.

Advancing the Concept Nyx Ecosystem

From infrastructure to devices, Dell is at the centre of present and future workplaces and is focused on developing the tools that will be needed to navigate these spaces. Right now, this means bringing tools to market like a new generation of UltraSharp conferencing monitors and intelligent webcams with motion-activated controls and presence detection, and building on technologies like storage, 5G, multi-cloud and edge that provide the advanced connectivity and infrastructure to allow organisations to shape how they work. In the future, productivity tools will be connected and intelligent enough to seamlessly move from experience to experience and task to task, helping to break down barriers and redefine how colleagues connect with one another.  

Dell Concept Nyx Ecosystem 3

My team continues to explore the future of compelling, immersive experiences in both work and play. Concepts play a huge role in allowing our designers, engineers, and strategists to test and tweak devices and solutions to inform future experience roadmaps. We’re excited to keep you updated on our journey!

Edge Computing Benefits and Use Cases

From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed and analyzed. 

At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end-users. Where data has traditionally lived in the data centre or cloud, there are benefits and innovations that can be realized by processing the data these devices generate closer to where it is produced.

This is where edge computing comes in.

4 benefits of edge computing

As the number of computing devices has grown, our networks simply haven’t kept pace with the demand, causing applications to be slower and/or more expensive to host centrally.

Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible.

1. Improve performance

When applications and data are hosted on centralized data centres and accessed via the internet, speed and performance can suffer from slow network connections. By moving things out to the edge, network-related performance and availability issues are reduced, although not entirely eliminated.

2. Place applications where they make the most sense

By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations that may have intermittent connectivity, including geographically remote offices and on vehicles such as ships, trains and aeroplanes.

hands gb5632839e 1280
Source: Pixabay

3. Simplify meeting regulatory and compliance requirements

Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in data centres or the cloud.

With edge computing, however, data can be collected, stored, processed, managed and even scrubbed in place, making it much easier to meet different locales’ regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or blur faces in a video before it is sent back to the data centre.
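
To make the scrubbing idea concrete, here is a minimal, illustrative sketch of an edge-side filter that removes PII from telemetry records before they leave the site. The field names and pseudonymisation scheme are our own assumptions for illustration, not any specific Red Hat implementation:

```python
# Hypothetical edge-side scrubber: drop or mask PII fields before
# records are forwarded to the central data centre.
import hashlib

PII_FIELDS = {"name", "email", "face_crop"}  # fields never sent upstream

def scrub(record: dict) -> dict:
    # Keep only non-PII fields for the upstream copy.
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    # Replace the identity with a one-way pseudonym so records can
    # still be correlated centrally without revealing who they belong to.
    if "user_id" in record:
        digest = hashlib.sha256(str(record["user_id"]).encode()).hexdigest()
        clean["user_id"] = digest[:12]
    return clean

event = {"user_id": 42, "name": "Alice", "email": "a@example.com", "temp_c": 21.5}
print(scrub(event))  # PII removed, user_id pseudonymised, telemetry kept
```

Because the hash is one-way, the central system can group records by the pseudonym without ever receiving the underlying identity, which is the property many data-residency rules require.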

4. Enable AI/ML applications

Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.

But AI/ML applications often require processing, analyzing and responding to enormous quantities of data which can’t reasonably be achieved with centralized processing due to network latency and bandwidth issues. Edge computing allows AI/ML applications to be deployed close to where data is collected so analytical results can be obtained in near real-time.
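
As an illustration of why that locality matters, even a simple statistical anomaly detector (a hypothetical example, not a specific Red Hat product) can score each sensor reading locally on an edge node, where shipping every reading to a central cloud would add a network round trip to every decision:

```python
# Minimal sketch of edge-side scoring: flag a sensor reading as
# anomalous when it deviates strongly from the recent rolling mean.
from collections import deque

WINDOW = 50  # number of recent readings to keep

def make_detector(threshold: float = 3.0):
    history = deque(maxlen=WINDOW)

    def score(reading: float) -> bool:
        """Return True if reading is more than `threshold` standard
        deviations from the mean of recent readings."""
        if len(history) >= 10:  # need a minimal baseline first
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5 or 1.0  # avoid division issues on flat data
            anomalous = abs(reading - mean) > threshold * std
        else:
            anomalous = False
        history.append(reading)
        return anomalous

    return score

detect = make_detector()
for value in [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 19.8, 20.0, 20.2, 20.1, 45.0]:
    if detect(value):
        print("anomaly:", value)
```

A real deployment would run a trained model rather than a rolling z-score, but the shape is the same: the decision loop lives next to the sensor, and only alerts or summaries travel back to the core.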

3 Edge Computing Scenarios

Red Hat focuses on three general edge computing scenarios, although these often overlap in each unique edge implementation.

1. Enterprise edge

Enterprise edge scenarios feature an enterprise data store at the core, in a data centre or as a cloud service. The enterprise edge allows organizations to extend their application services to remote locations.

nasa Q1p7bh3SHj8 unsplash
Photo by NASA on Unsplash

Chain retailers are increasingly using an enterprise edge strategy to offer new services, improve in-store experiences and keep operations running smoothly. Individual stores aren’t equipped with large amounts of computing power, so it makes sense to centralize data storage while extending a uniform app environment out to each store.

2. Operations edge

Operations edge scenarios concern industrial edge devices, with significant involvement from operational technology (OT) teams. The operations edge is a place to gather, process and act on data on-site.

Operations edge computing is helping some manufacturers harness artificial intelligence and machine learning (AI/ML) to solve operational and business efficiency issues through real-time analysis of data provided by Industrial Internet of Things (IIoT) sensors on the factory floor.

3. Provider edge

Provider edge scenarios involve both building out networks and offering services delivered with them, as in the case of a telecommunications company. The service provider edge supports reliability, low latency and high performance with computing environments close to customers and devices.

Service providers such as Verizon are updating their networks to be more efficient and reduce latency as 5G networks spread around the world. Many of these changes are invisible to mobile users, but allow providers to add more capacity quickly while reducing costs.

3 edge computing examples

Red Hat has worked with a number of organizations to develop edge computing solutions across a variety of industries, including healthcare, space and city management.

1. Healthcare

Clinical decision-making is being transformed through intelligent healthcare analytics enabled by edge computing. By processing real-time data from medical sensors and wearable devices, AI/ML systems are aiding in the early detection of a variety of conditions, such as sepsis and skin cancers.

cdc p33DqVXhWvs unsplash
Photo by CDC on Unsplash

2. Space

NASA has begun adopting edge computing to process data close to where it’s generated in space rather than sending it back to Earth, which can take minutes to days to arrive.

As an example, mission specialists on the International Space Station (ISS) are studying microbial DNA. Transmitting that data to Earth for analysis would take weeks, so they’re experimenting with doing those analyses onboard the ISS, speeding “time to insight” from months to minutes.

3. Smart cities

City governments are beginning to experiment with edge computing as well, incorporating emerging technologies such as the Internet of Things (IoT) along with AI/ML to quickly identify and remediate problems impacting public safety, citizen satisfaction and environmental sustainability.

Red Hat’s approach to edge computing

Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability and manageability.

Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations. The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.

Xiaomi ROIDMI EVE Plus Robot Vacuum Review: Keeping up with the Dust Bunnies in a Smart Way

Xiaomi’s quest to be the king of the Internet of Things (IoT) is no secret. The company has more than one subsidiary working on its IoT products; to date, we’ve seen IoT products branded as Mi, Soocas and even Dreame. ROIDMI is yet another brand working on IoT, particularly cordless vacuums. Its laser focus on the niche seems to have worked in its favour, as its line-up of cordless vacuums is one of the more popular options on platforms like Shopee and Lazada.

Xiaomi ROIDMI EVE Plus 002

That said, robot vacuums are no revolution when it comes to cleaning. They’ve been available on the market for quite a while now, but they’ve always had their quirks. ROIDMI’s EVE Plus looks to address many of these quirks with some interesting approaches and a smart implementation of AI technology. These small innovations make for one of the easier, more hands-off cleaning experiences we’ve had with a robot vacuum.

The ROIDMI Experience

The ROIDMI experience isn’t simply plug and forget; it comes with a host of “prep work” and setup that you’ll have to undertake at the beginning, which pays off in a more automated experience later on. Of course, it is in no way a deal-breaker when it comes to the overall experience.

Being an IoT device, the robot vacuum requires some setup. However, the process is pretty straightforward, simple and very app-centric. The EVE Plus Robot Vacuum itself doesn’t come with many interactive components. Most of the interactions and settings are done through the app. This actually makes setup a breeze. However, the ecosystem itself can be a little quirky as it isn’t as integrated as you would think.

When you first unbox and set up your EVE Plus Robot Vacuum, make sure to remove the plastic and Styrofoam pieces placed to prevent damage to moving parts during shipping. The manual says the vacuum can be integrated into either the Mi Home app or the ROIDMI app. However, this particular model isn’t listed in the Mi Home app; instead, you will need to use the ROIDMI app to set it up.

Setup was very simple and quick. All you have to do is plug in the base, place the EVE Plus in the cradle and power it on. Then tap the add product option in the app, denoted by a “+” on the top right. The app will automatically look for the local Wi-Fi network broadcast by the vacuum and program the vacuum’s Wi-Fi settings. To be frank, that’s all the setup that is required; everything else is automated and done by the vacuum itself during its first cleaning.

App Design & Usability

The ROIDMI app is a simple, well-designed app. Unlike a lot of other IoT apps, it cuts to the chase and immediately lets you set up and manage your products after you sign in. The simplicity and straightforward design are some of its best features; the no-frills layout lets you get things done without fumbling and digging for functions.

  • Xiaomi ROIDMI App 006
  • Xiaomi ROIDMI App 001
  • Xiaomi ROIDMI App 002
  • Xiaomi ROIDMI App 015
  • Xiaomi ROIDMI App 016
  • Xiaomi ROIDMI App 004
  • Xiaomi ROIDMI App 010
  • Xiaomi ROIDMI App 011
  • Xiaomi ROIDMI App 012
  • Xiaomi ROIDMI App 013

After your initial set-up of registering and logging in, you’ll be greeted by a screen with a list of your appliances. Each appliance can be set up and monitored through the app. The main screen shows you pertinent information such as the battery level, active time, and area that the vacuum has cleaned previously.

Clicking further into the app brings up more detailed information. In the case of the EVE Plus Robot Vacuum, you’ll be able to see a map of the space it’s in and the cleaning path it took on the previous cleaning session. The app also gives you quick access to its cleaning modes, map customisations, and the recharge and clean options. You can also customise how much water it dispenses when mopping and even the suction power of the vacuum.

Designed for Real Living Spaces

While the app is the core of their user experience, the ROIDMI EVE Plus Robot Vacuum itself comes packed with hardware and design that makes using it a more seamless experience.

Let’s start off with the overall design of the vacuum. The ROIDMI EVE Plus Robot Vacuum is designed to manoeuvre through real living spaces. While it shares a similar design with many of the robot vacuums available, it is short enough to fit under most furniture in a room, and its circular shape gives it the manoeuvrability to get out of tight situations with minimal intervention.

ROIDMI has also struck a balance between the size of the vacuum and the size of the internal tank. It is large enough that the vacuum doesn’t need to make multiple trips back to the docking station to be emptied, even in larger rooms, but small enough that the robot vacuum can still fit into most nooks and crannies of a space. It also doesn’t have many parts that simply click into place; all the components of the robot vacuum are held securely, either with screws or by a locking mechanism.

The vacuum’s movement relies on two rather large plastic wheels, which function similarly to those on the hoverboards we’ve seen in the market. This design allows the robot vacuum to find its way through tough spaces and to climb over ledges and objects up to about 2cm in height. So, if you have a table with a stand that runs along the ground, or cables running across a room, it’ll be able to move over them. However, if cables aren’t fastened to the ground securely, you might end up with the electrical items connected to them toppling over.

The ROIDMI EVE Plus has a small, elevated component on top that houses the LIDAR sensor. This gives it a 360° field of view, allowing it to map and detect obstacles more quickly and accurately; in fact, it managed to map the room it was in during setup. The sensor also lets the robot vacuum gauge the height of furniture so it doesn’t get stuck underneath. This is complemented by sensors on the sides and bumpers on the front to help with movement and manoeuvring. There are only three physical buttons on the EVE Plus: the power button, the home button and a button that acts as a quick clean command.

The base station, or dock, also has a minimal design. It’s a relatively small unit with a single touch screen for status monitoring and a space for the EVE Plus to come home to. The main 3-litre dust bag is accessible through the top, and a HEPA filter prevents odours from escaping. This also means that you won’t be emptying the bag too often. ROIDMI highlights that the base’s compact design helps minimise noise while dust is being emptied.

Dealing with the Dust in a Smart Way with Some Quirks

To be really frank, I’ve never really understood the allure of robot vacuums even after reviewing earlier models ages ago. In fact, they always seemed like more hassle than they are worth. However, the Xiaomi ROIDMI EVE Plus robot vacuum did a good job of convincing me otherwise.

The AI programmed into the EVE Plus makes it one of the simplest, most seamless robot vacuum experiences I’ve had to date. It can intelligently detect the height of furniture and even detect slopes and ledges, which helped it avoid getting stuck most of the time. Even when it did get stuck, you simply had to place it immediately beside the trouble spot and it would avoid that area from then on.

The way the EVE Plus cleans is also different from other robot vacuums. It intelligently partitions large areas into smaller rooms. This wasn’t immediately apparent when I was observing the vacuum itself, but when I glanced at the app, the map was sectioned into multiple smaller areas. Using this mapping and guidance, it optimises its route to clean the area efficiently. It’s also the only robot vacuum I’ve seen with a unique Y-shaped cleaning pattern that allows it to clean more effectively. If you’re like me, you’ll also turn on the 2X clean feature, which makes the EVE Plus do a second run when cleaning. Its ability to mop spaces with water is also a welcome feature, although mopping is limited to a 250m² space as the water tank on the robot vacuum is small.

Xiaomi ROIDMI EVE Plus 013

However, the ROIDMI EVE Plus is not without its quirks. During our review period, the vacuum spontaneously lost its mapping data. This isn’t a major issue, as it is able to rebuild the data pretty quickly. The robot vacuum is also a little quirky when it comes to carpets and rugs: it’s able to handle thicker carpets but tends to wrestle with rugs.

It also communicates through the app, which is an added advantage, provided your phone doesn’t put the app to sleep. The app never requests permission to run in the background, so when you launch the ROIDMI app, it tends to spam you with a backlog of notifications. It also cries for help with a voice prompt when it’s stuck.

Of Raised Slopes & Tassels – ROIDMI Eve Plus Kryptonite

If the ROIDMI EVE Plus were Supergirl, tassels and slopes would be its kryptonite. The robot vacuum seems to enjoy wrestling (and losing) with tassels, so rugs or carpets with tassels are things you may want to remove when using the EVE Plus. In fact, I had to cut the tassels off a floor mat because the EVE Plus had a bout with them and couldn’t break free. The other thing the EVE Plus has trouble with is raised slopes and platforms. This is particularly apparent if you use a stand fan in your room: if the fan’s base is slightly sloped, the EVE Plus will try to run over the slope and eventually get stuck.

This was irritating at first. However, you can easily prevent this by creating no-fly zones on the map through the ROIDMI app.

Not Just About Removing Dust – It Zaps Bacteria with Activated Oxygen   

Earlier we mentioned the HEPA filter that helps prevent odours from escaping. This is actually part of a larger disinfection system integrated into the base station of the EVE Plus. When dust is emptied from the robot vacuum into the main 3-litre bag, it is bombarded with activated, or ionised, oxygen. A little bit of a science refresher here: activated or ionised oxygen is a charged molecule that readily destabilises cellular and intracellular structures. Using this, ROIDMI has created a solution that it claims kills 99.99% of bacteria.

Xiaomi ROIDMI EVE Plus 005

This technology is also responsible for the odourless storage of dust in the base station, as the ionised oxygen helps neutralise bad odours. Working together with the HEPA filter integrated into the docking station, it minimises the harmful particles and allergens that escape.

A Simplified, Smart Robot Vacuum that Handles Small to Medium Spaces and Changed the Mind of a Non-Believer

It’s very rare for a piece of technology to make me reconsider my initial experiences and change my mind, but the ROIDMI EVE Plus robot vacuum did just that. It provided a seamless, simplified experience which convinced me that there is a time and place for smart cleaning devices. In my case, with a busy day-to-day life and older parents at home, the robot vacuum gave us a way to keep our most used spaces clean and dust-free without sacrificing time.

The features of the EVE Plus are what made the difference. Its simple app and set-it-and-forget-it experience let me get things done without needing to worry about the robot vacuum while it runs a cleaning cycle. If the vacuum had poorer manoeuvrability or got stuck regularly, this review would have been very different. The fact that it was able to handle a busy space without much hassle was a welcome surprise.

Google Looks to “MUM” to Enhance Search

Google has been working on creating a better, more unified experience with its bread and butter: search. The tech giant is aiming for more contextually relevant search as it moves forward. To do this, it is turning to MUM, the Multitask Unified Model, to bring more relevance to search results.

search on.max 1000x1000 1

The new Multitask Unified Model (MUM) allows Google’s search algorithm to understand multiple forms of input, drawing context from text, speech, images and even video. This, in turn, allows the search engine to return more contextually relevant results. It also lets the search engine understand queries phrased in more natural language and make sense of more complex searches. When MUM was first announced, it could already understand over 75 languages, and Google says it is much more powerful than the existing algorithm.

Contextual Search is the New Normal

Search On Lens Desktop

Barely two months after the announcement, Google has begun implementing MUM in some of its most used apps and features. In the coming months, Google Search will undergo a major overhaul as the company builds a new, more visual search experience. Users will see more images and graphics in search results, and thanks to MUM, you will be able to refine or broaden searches with a single click, zooming in on finer details such as specific techniques or pulling back for a broader picture of your topic. In their announcement, Google used the example of acrylic painting: with the search results, they were able to zoom in on specific techniques commonly used in acrylic painting or get a broader picture of how the art form started.

  • Googel SearchON 001
  • Googel SearchON 002
  • Googel SearchON 003
  • Googel SearchON 004

The search engine uses data such as language and even user behaviour, in addition to context, to recommend broadening or narrowing searches. Google is even applying this to YouTube, with hopes of expanding search context to include topics mentioned in YouTube videos later this year. Contextual, multitask search is also making its way to Google Lens, which will be able to make sense of visual and text data at the same time, as well as to Chrome. Don’t expect the new Lens experience too soon, though: the rollout is expected in 2022 after internal testing.

Googel SearchON 006

Context is also making search more “shoppable”. Google is allowing users to zoom in on specifics when searching. For instance, if you’re searching for fashion apparel, you will be able to narrow your search by design and colour, or use the context of the original item to search for something else entirely. In addition, Google’s Shopping Graph will allow users to narrow searches with an “in stock” filter as well. This particular enhancement will be available in select countries only.

Expanding Search to Make A Positive Impact

Google isn’t just focusing on MUM for its own benefit. The company has been busy applying its technology to create change too, expanding contextual data and A.I. implementation to address environmental and social issues. While this is nothing new, some of the new improvements could impact us more directly than ever.

Environmental Insights for Greener Cities

One of the biggest things that could make a huge impact is Google’s Environmental Insights. While this isn’t brand new, the company is looking to make the feature more readily available to cities to help them become greener. Environmental Insights Explorer will allow municipalities and city councils to make decisions based on data from A.I. and Google Earth Engine.

Search On Tree Canopy Insights

With this data, cities and municipalities will be able to visualise tree density within their jurisdictions and plan where to add trees and greenery. This will help tremendously in lowering city temperatures, and it will also help with carbon neutrality. The feature is expanding to over 100 cities, including Yokohama and Sydney, this year.

Dealing with Natural Disasters with Actionable Insights

Google Maps will be getting more actionable insights when it comes to natural disasters. Being an American company, Google’s first feature is, naturally, more relevant to the U.S. California and other areas have been hit by wildfires of increasing severity in recent years, and countries such as Australia and Canada, as well as parts of the African continent, are also experiencing increasingly deadly wildfires. It’s increasingly apparent that the public needs data on these wildfires.

Search On Wildfire Mapping

As such, Google Maps will be getting a layer that lets users see the boundaries of active wildfires. These boundaries are updated every 15 minutes, allowing users to avoid affected areas. The data will also help authorities coordinate evacuations and manage situations on the ground. Google is also running a similar pilot for flash flooding in India.

Simplifying Addresses

Google is expanding and simplifying one of its largest social projects: Plus Codes. The project, which was announced just under a year ago, is becoming more accessible with Address Maker. The new app builds on Plus Codes, giving users and organisations a simplified way to create new addresses and letting governments and NGOs generate addresses at scale more easily.