Tag Archives: Machine Learning

Maxis Becomes First Malaysian Telco Accredited as AWS Advanced Consulting Partner

Maxis is one of the few telecommunications companies in Malaysia to have embraced the cloud. The company embarked on its journey to become a one-stop connectivity and infrastructure provider for Malaysia back in 2019 with an early partnership with Amazon Web Services (AWS), currently the world's most widely used cloud platform. Today, Maxis announced that it has achieved accreditation as an AWS Advanced Consulting Partner, making it the only telecommunications company in Malaysia to have done so. This solidifies its claim to being one of the best-equipped converged solutions providers in the country.

The new accreditation certifies that Maxis is equipped to provide its customers and partners with the technical support and know-how to migrate their businesses to the cloud and sustain them there. To achieve it, Maxis had to demonstrate sustained competency, with a workforce trained and certified by AWS across the many services the platform provides, including its Machine Learning and Artificial Intelligence components.

In addition to this, Maxis is now also offering AWS Direct Connect. AWS Direct Connect gives customers a dedicated network connection to AWS through one of the many AWS Direct Connect locations, using industry-standard 802.1Q VLANs. Customers can also partition the connection into multiple virtual interfaces, easing access to resources in the AWS public and private clouds while maintaining network separation.
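To make the partitioning idea concrete, here is a small sketch of the request body a customer might build for Direct Connect's CreatePrivateVirtualInterface API (for example via boto3). The connection ID, ASN and interface names are hypothetical; only the parameter shape reflects the public API, and two different 802.1Q VLAN tags keep the traffic on the two virtual interfaces separated.

```python
# Hypothetical sketch: building CreatePrivateVirtualInterface request bodies
# for a single dedicated Direct Connect connection. IDs and ASN are made up.

def private_vif_params(connection_id, vlan, asn, vif_name):
    """Describe one private virtual interface on a given 802.1Q VLAN tag."""
    return {
        "connectionId": connection_id,
        "newPrivateVirtualInterface": {
            "virtualInterfaceName": vif_name,
            "vlan": vlan,              # 802.1Q tag separating this interface
            "asn": asn,                # BGP ASN for the customer side
            "addressFamily": "ipv4",
        },
    }

# Two VLANs on the same physical connection keep traffic separated:
vpc_vif = private_vif_params("dxcon-example", vlan=101, asn=65000, vif_name="to-vpc")
dc_vif = private_vif_params("dxcon-example", vlan=102, asn=65000, vif_name="to-datacentre")
```

With boto3, each dict could then be passed to `boto3.client("directconnect").create_private_virtual_interface(**params)`.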

The new accreditation comes on the heels of key acquisitions that have bolstered Maxis's position as one of the best-equipped telecommunications companies in Malaysia for empowering businesses in their digitalisation journey. The company has also been certified in the AWS Public Sector Partner program, with over 300 Maxis employees accredited after undergoing comprehensive training by AWS.

YouTube Tests AI to Create Chapters in YouTube Videos

Earlier this year, YouTube brought a few updates to its platform, including a new feature called Chapters. Chapters lets creators divide a video into separate sections using timestamps, so viewers can jump straight to the parts that interest them most by clicking timestamps in the description instead of scrubbing through the seek bar. That said, not all uploaded videos come with chapters, since identifying timestamps manually is a cumbersome task. To make things a little easier, YouTube has been testing an AI model that can divide videos into chapters automatically.

To do this, YouTube is deploying AI that goes through a video and identifies certain visual markers, which are then used as reference points to break the video into chapters. The algorithm also recognises certain text-based cues in a video to do the same. According to YouTube, the main purpose of the experiment is to create easy jumping-on and jumping-off points, making it easier for viewers to navigate through videos and quickly reach the part they want.
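YouTube hasn't published its model, but the idea of using visual markers as chapter boundaries can be illustrated with a classic, much simpler technique: flagging large changes between consecutive frames. In this toy sketch each "frame" is reduced to a single mean-brightness number, and a big jump suggests a scene change.

```python
# Illustrative sketch only (not YouTube's actual model): detect chapter
# boundaries by flagging large visual jumps between consecutive frames.

def chapter_starts(frames, threshold=30):
    """Return frame indices where a big visual change suggests a new chapter."""
    starts = [0]  # a video always begins with its first chapter
    for i in range(1, len(frames)):
        if abs(frames[i] - frames[i - 1]) > threshold:
            starts.append(i)
    return starts

# A dark intro, a bright middle section, then a dark outro:
frames = [10, 12, 11, 90, 88, 91, 15, 14]
print(chapter_starts(frames))  # [0, 3, 6]
```

A production system would of course work on full image features (and text cues) rather than one brightness value per frame, but the boundary-detection idea is the same.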

The new feature is currently being tested on a small group of videos, and YouTube is allowing creators to opt out of the experiment. It is also encouraging uploaders to provide feedback on how the feature could be improved.

MIT Researchers Develop AI Model that Accurately Identifies Asymptomatic COVID-19 Carriers

The COVID-19 pandemic doesn't seem to be going away anytime soon. The virus continues to spread and has a devastating effect in areas where outbreaks occur. Since the early days of the pandemic, there have been reports of asymptomatic carriers: people able to spread the virus without showing any outwardly recognisable signs of infection. This makes them one of the largest unsolved problems of the current pandemic, as they are less likely to seek testing and, in turn, be diagnosed and treated.

However, that may be about to change. A group of researchers at the Massachusetts Institute of Technology (MIT) has developed an AI model that can accurately identify asymptomatic carriers based on the way they cough. The model correctly identified 98.5% of coughs from confirmed COVID-19 patients and 100% of coughs from asymptomatic carriers.

Using A.I. to Identify Unique Markers in Coughs

The team at MIT, consisting of Jordi Laguarta, Ferran Hueto, and Brian Subirana, developed the model on a neural network called ResNet50, a type of network that discerns differences and similarities in data. Until now, ResNet50 has been used primarily for visual tasks; the MIT team has applied it to identifying markers in the way people cough.

Photo by cottonbro on Pexels.com

Their model was initially developed to help detect early signs of Alzheimer's, which can present in the way people cough. These include the person's emotional state, changes in lung and respiratory performance, and vocal cord strength, all known markers for someone who could be experiencing early-onset Alzheimer's.

Using these three criteria, three independent machine learning algorithms were trained and then layered on top of each other. The team also added an algorithm for muscular degeneration on top of the model. In tandem, these machine learning layers made it possible for the team to detect and identify samples from Alzheimer's patients.
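The layering idea can be sketched in miniature. The real MIT system stacks trained neural networks; in this toy version each "model" is a hand-written stand-in that scores one biomarker of a cough sample, and a final layer combines the scores into a single decision. All names, weights and the cutoff are invented for illustration.

```python
# Toy illustration of layering independently trained models. Each stand-in
# "model" scores one biomarker; a final layer combines the scores.

def sentiment_score(sample):   # emotional-state marker (stand-in)
    return sample["sentiment"]

def lung_score(sample):        # lung/respiratory performance (stand-in)
    return sample["lung"]

def vocal_score(sample):       # vocal cord strength (stand-in)
    return sample["vocal"]

def combined_decision(sample, weights=(0.3, 0.4, 0.3), cutoff=0.5):
    """Weighted combination of the three biomarker layers."""
    scores = (sentiment_score(sample), lung_score(sample), vocal_score(sample))
    total = sum(w * s for w, s in zip(weights, scores))
    return "positive" if total >= cutoff else "negative"

sample = {"sentiment": 0.8, "lung": 0.7, "vocal": 0.6}
print(combined_decision(sample))  # positive: 0.24 + 0.28 + 0.18 = 0.70
```

The appeal of this layered design is that each biomarker model can be trained, validated and improved independently before the combining layer is fitted.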

Detecting the Indiscernible

In April, the team looked into applying the AI model to identifying COVID-19 patients. To do this, they set up a website where people could record a series of coughs with their mobile phone or any other web-enabled device. In addition to their recordings, participants filled out a survey covering their symptoms, COVID-19 status, and method of diagnosis. Other factors such as native language, geographical location, and gender were also collected. To date, the team has collected over 70,000 recordings, amounting to about 200,000 forced cough samples; according to Brian Subirana, this is the largest known cough dataset collected so far.

Image by allinonemovie from Pixabay

The model confirms the long-known fact that COVID-19 affects respiratory function. It also draws similarities between this temporary respiratory degeneration and the neurodegeneration present in Alzheimer's patients, and shows that the disease has sub-clinical presentations in asymptomatic individuals. The AI algorithm is able to detect individuals with these presentations, providing a much-needed boost to their potential diagnosis.

More significantly, the team has developed a method by which pre-screening can be done to help curb the spread of COVID-19, and their research could be the foundation of future diagnosis of sub-clinical presentations of disease. That said, Brian Subirana highlights that the tool's strength lies in its ability to differentiate the coughs of asymptomatic carriers from those of healthy individuals; he stresses that it is not meant to be used as a definitive test for COVID-19.

[next@Acer] SigridWave Bridges the Language Barrier in eSports

Acer’s Planet9 was launched a year ago as the company’s commitment to the growing eSports scene. The platform allows aspiring professional gamers to hone their skills and collaborate. The vision for this next-gen platform is to provide a “training arena” where pros, semi-pros and enthusiasts can improve their game.

“Planet 9 is a community-oriented platform designed to give gamers everywhere a chance to interact and learn from each other. It is intended to be a social platform that caters to multiple audiences: those looking to improve are introduced to similarly skilled teammates and opponents, likewise, those just looking to chat and enjoy themselves can meet other casual players…”

Andrew Chuang, AVP, Esports Services, IT Products Business, Acer Inc.
Source: Acer

Planet9 was designed to bring different eSports communities together in one place, and a major part of the platform is effectively managing and integrating these communities. The platform helps users to find teammates based on a variety of factors such as game type, skill level and time zone. It also gathers and records a wide variety of data such as score, pathing, kill-death ratio and death location. This provides coaches and managers information they can use to help guide their players.

This year, Acer is bringing cutting edge AI to Planet9, its next-generation eSports platform, in the form of the SigridWave In-Game Live AI Translator. SigridWave has been specially designed to handle gaming terminology and jargon. It leverages deep learning technologies to bridge language barriers allowing gamers to communicate no matter where they are from. This is an important step in enhancing the gaming experience.

When SigridWave is deployed, it will utilise Automatic Speech Recognition (ASR) technology to recognise speech from gamers and convert it into strings of text, much as smartphones do when you use a virtual assistant. The text is then translated using Neural Machine Translation (NMT) technology. The NMT deployed by SigridWave has so far been trained on over 10 million bilingual sentence pairs, allowing it to recognise game-specific language and jargon such as “ADS” or “camping”, giving it context awareness. In-game overlays will be supported for League of Legends at launch in late 2020 or early 2021, with support for additional titles to follow.
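The two-stage pipeline described above can be sketched with stubs. Real ASR and NMT are large neural models; here both stages are stand-ins so the data flow (audio, to text, to translated text) is visible, and a small jargon glossary keeps terms like "ADS" from being mistranslated. The word lists and the toy English-to-Spanish dictionary are invented for illustration.

```python
# Simplified sketch of an ASR -> NMT pipeline with jargon awareness.
# Both stages are stubs standing in for neural models.

JARGON = {"ADS", "camping", "gank"}  # gaming terms passed through untranslated

TOY_DICTIONARY = {"enemy": "enemigo", "behind": "detrás", "is": "está"}  # en -> es stub

def asr(audio_tokens):
    """Stand-in for Automatic Speech Recognition: audio frames -> text."""
    return " ".join(audio_tokens)

def nmt(text):
    """Stand-in for Neural Machine Translation, jargon-aware."""
    out = []
    for word in text.split():
        if word in JARGON:
            out.append(word)                      # keep gaming jargon as-is
        else:
            out.append(TOY_DICTIONARY.get(word, word))
    return " ".join(out)

print(nmt(asr(["enemy", "is", "camping"])))  # "enemigo está camping"
```

The glossary step mirrors why training on bilingual gaming sentence pairs matters: without that context, a general-purpose translator would mangle terms like "camping".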

The new technology has the potential to take competitive and professional gaming to a whole new level. Together with SigridWave, Acer also unveiled Clubs and Tournaments: two new features that will help players collaborate and compete regularly to up their game. These join a slew of new features designed to enhance competitive play and facilitate communication between brands and players.

Four Steps to Accelerate Your Machine Learning Journey

This is the golden age of machine learning (ML). Once considered peripheral, ML technology is becoming a core part of businesses around the world, regardless of industry. The International Data Corporation (IDC) estimates that spending on artificial intelligence (AI) and other cognitive technologies will exceed $50 billion by 2021.

Locally, 25% of organizations say they are setting aside at least 10% of their budget for technology, which includes investments in big data analytics (64%), cloud computing (57%), machine learning and artificial intelligence (33%), and robotic process automation (27%), based on the Malaysian Institute of Accountants’ “MIA-ACCA Business Outlook Report 2020”.[1] As more companies become aware of the importance of ML, they should work towards getting it in motion as quickly and effectively as possible.

Photo by fauxels on Pexels.com

At Amazon, we have been on our own ML journey for more than two decades – applying it to areas like personalization, supply chain management, and forecasting systems for our fulfillment process. Today, there is not a single business function at Amazon that is not made better through machine learning.

Whether your company is just getting started or in the middle of your first implementation, here are the four steps you should take to have a successful machine learning journey.  

Get Your Data in Order

When it comes to adopting machine learning, data is often cited as the number one challenge. We found that more than 50% of the time spent building ML models goes into data wrangling, data cleanup, and pre-processing. Prioritise investing in a strong data strategy to avoid spending excessive time and resources on data cleanup and management.

Photo by bongkarn thanyakij on Pexels.com

When starting out, the three most important questions to ask are:

  • What data is available today?
  • What data can be made available?
  • A year from now, what data will we wish we had started collecting today?

In order to determine what data is available today, you will need to overcome data hugging – the tendency for teams to gatekeep data they work with most closely. Breaking down silos between teams for a more expansive view of the data landscape while still maintaining data governance is crucial for long-term success.

Additionally, identify what data actually matters as part of your machine learning approach. Think about best ways to store data and invest early in the data processing tools for de-identification and/or anonymization, if needed.
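One common de-identification step is replacing direct identifiers with salted one-way hashes before data enters the ML pipeline, so records can still be joined on a pseudonym without exposing the underlying name. The sketch below uses Python's standard hashlib; the record fields and salt are invented, and a real deployment would also manage the salt as a secret.

```python
# Minimal de-identification sketch: salted one-way hashing of identifiers.

import hashlib

def pseudonymize(value, salt):
    """Deterministic pseudonym: same input + salt always yields the same token."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:12]

record = {"name": "Alice Tan", "purchase": "laptop"}
clean = {
    "user": pseudonymize(record["name"], salt="s3cret"),  # joinable, not reversible
    "purchase": record["purchase"],
}
print(clean["user"] == pseudonymize("Alice Tan", "s3cret"))  # True
```

Because the mapping is deterministic per salt, analysts can still link a user's records across datasets, which is exactly the property needed for ML features without raw identifiers.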

Identify the Right Business Problems

When evaluating what and how to apply ML, focus on assessing the problem across three dimensions: data readiness, business impact, and machine learning applicability.

Balancing speed with business value is key. Instead of trying to embark on a three-year ML project, focus on a handful of critical business use cases that could be solved in the upcoming six to 10 months. Start by identifying places where you already have a lot of untapped data and evaluate if machine learning brings benefits. Avoid picking a problem that is flashy but has unclear business value, as it will end up becoming a one-off experiment.

Champion a Culture of Machine Learning

In order to scale, you need to champion a culture of machine learning. At its core, ML is experimentation. Therefore, it is imperative that your organization embraces failure and takes a long-term view of what is possible.

Photo by Alex Knight on Pexels.com

Businesses also need to combine a blend of technical and domain experts to work backward from the customer problem. Assembling the right group of people also helps eliminate the cultural barrier to adoption with a quicker buy-in from the business.

Similarly, leaders should constantly find ways to simplify the process of ML adoption for their developers. Since building ML infrastructures at scale is a time and labor-intensive process, leaders should encourage their teams to use tools that cover the entire ML workflow to build, train, and deploy these models efficiently.

For instance, 123RF, a homegrown stock photography portal, aims to make design smarter, faster, and easier for users. To do so, it relies on Amazon Athena, Amazon Kinesis, and AWS Lambda for data pipeline processing. Its newer products, like Designs.ai Videomaker, use Amazon Polly to create voice-overs in more than 10 languages. With AWS, 123RF has maintained flexibility in scaling its infrastructure, shortened product development cycles, and is looking to incorporate other services to support its machine learning and AI research.

Develop Your Team

Developing your team is essential to fostering a successful machine learning culture. Rather than spending resources recruiting new talent in a competitive market, home in on developing your company’s internal talent through robust training programs.

Photo by fauxels on Pexels.com

Years ago, Amazon created an in-house Machine Learning University (MLU) to help its own developers sharpen their ML skills or equip neophytes with tools to get started. We made the same machine learning courses available to all developers through AWS’s Training and Certification offering.

DBS Bank, a Singaporean multinational bank, employed a different approach. It is collaborating with AWS to train its employees to program their own ML-powered AWS DeepRacer autonomous 1/18th scale car, and race among themselves at the DBS x AWS DeepRacer League. Through this initiative, it aims to train at least 3,000 employees to be conversant in AI and ML by year end.


[1] MIA (Malaysian Institute of Accountants) and ACCA (Association of Chartered Certified Accountants), Business Outlook Report 2020, 2020

The Art of Enabling the Disabled

Artificial Intelligence and Machine Learning (AI and ML) technologies have come a long way since their inception. Who would have thought that we would have working computer-based assistants that can manage our schedules? Who would have thought that we could use these assistants to manage our homes? These technologies are even being used to help diagnose cancer patients, something impossible without doctors just five years ago.

Amazon Web Services (AWS) is at the forefront of AI and ML technology. As one of the world’s largest technology innovators, it is naturally at an advantage in feeding enough data to the technology to accelerate its development. And as one of the largest technology firms the world has ever seen, it is also well placed to bring AI and ML to places and applications we may never have imagined.

Linguistics is one segment that has benefitted greatly from today’s technologies. Language, if you think about it, is also one of the most complex things that we humans can create and understand. Its context and interpretation can be affected by plenty of things too: area, culture, community, heritage, and even lineage.

For example, there are differences between the French spoken in France and in Canada, and even subtle differences between the French of France, Monaco, and Switzerland. The most common language of all, English, differs in spelling and context across Britain, the Americas, and Australia. The English spoken today is also distinct from the language spoken 50 years ago.

The Pollexy Project

Language technology has progressed through years and years of data being fed into it, allowing us to communicate with global communities with more ease than peeling an orange. AWS has taken it a little further than that, though, going beyond spoken and written languages: through something called AWS DeepLens, developers have built translation algorithms for sign languages.

While that technology might sound as simple as gesture control, it is plenty more than that. Yes, it is technically gesture control and recognition, but it is far larger and more complex than a solution for end-point devices. The trick is to teach the algorithm to recognise all the available sign words and even individual letters. AWS DeepLens community projects have so far learnt to recognise most of the American Sign Language alphabet.

But the technology also goes beyond recognising letters to understanding proper words, with the algorithm in Amazon Alexa. It is not just about communicating with your friends anymore; it is about using the platform as a home assistant, a customer service tool, a command centre, and a user-defined PC experience that mimics voice control and command. Instead of voice, though, it’s all in the gestures.

Making Amazon Alexa respond to Sign Language using AI

The tool they use is called Amazon Transcribe. It works just like any transcription app you can find in the market, currently supporting over 31 languages with more being added over time. It even supports ASL as a component to create text from sign language.
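For developers, a transcription job on Amazon Transcribe is started with a small request. The sketch below builds the request body for the StartTranscriptionJob API (as exposed through boto3); the job name and S3 path are hypothetical, and only the parameter shape reflects the real API.

```python
# Hypothetical sketch of an Amazon Transcribe StartTranscriptionJob request.
# The bucket path and job name are made up.

def transcription_job_request(job_name, s3_uri, language="en-US", media_format="mp3"):
    """Build the request body for a basic transcription job."""
    return {
        "TranscriptionJobName": job_name,
        "LanguageCode": language,        # one of the supported language codes
        "MediaFormat": media_format,
        "Media": {"MediaFileUri": s3_uri},
    }

req = transcription_job_request("demo-job", "s3://example-bucket/interview.mp3")
# With boto3: boto3.client("transcribe").start_transcription_job(**req)
```

The job runs asynchronously; the finished transcript is later fetched from the URI returned by GetTranscriptionJob.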

Simple communication is just the beginning for the technology, though. AI and ML still have a long way to go, even in the medical field. But like the human race, the technology gets better every day. If you really think about it, the technology is not that new in the first place: we embarked on the journey of machine-built assistants when we started developing computers to help us with simple and complex mathematical problems.

It is just that the simple mathematical problem solver has become something much bigger today. Who would have thought that we would let computers fly a commercial airplane? Who would have thought that cars could drive themselves? Who would have thought that we could hire a private translator without spending any money or time? You just have to look in your pocket.

Machine Learning in Sports: A Paradigm Shift in Progress

Sports, data analytics and machine learning. Three words you would never expect in the same sentence, right? Well, what if we told you they already are, in sports teams the world over. That’s right, we’re already seeing data analytics and machine learning in sports, with some teams adopting them as early as 15 years ago. You’d be surprised how advanced things have gotten; we’re even seeing companies use Amazon Web Services (AWS) to help process and store the data.

In sports such as F1, American football and even rugby, more and more decisions are being made with the probabilities and numbers generated by machine learning taken into consideration. In fact, one of the sports most adept at using data is Formula 1: teams generate up to 600GB of data per lap from the 200 to 300 sensors in each car. In the American NFL (National Football League), each player is analysed on over 100 data points. These data points drive the plays we, as fans, cheer for when we watch the athletes play.
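With hundreds of sensors streaming readings every lap, raw telemetry is typically reduced to per-sensor summaries before analysts look at it. A minimal sketch, with invented sensor names and values:

```python
# Illustrative only: collapse raw {sensor: [samples]} telemetry into
# per-sensor min/max/mean summaries, the first step before deeper analysis.

from statistics import mean

def summarise_lap(readings):
    """Reduce raw per-lap sensor samples to simple summary statistics."""
    return {
        sensor: {"min": min(vals), "max": max(vals), "mean": round(mean(vals), 1)}
        for sensor, vals in readings.items()
    }

lap = {
    "tyre_temp_fl": [82.0, 85.0, 88.0],   # front-left tyre temperature, °C
    "brake_pressure": [0.0, 0.9, 0.4],    # normalised brake input
}
print(summarise_lap(lap)["tyre_temp_fl"]["mean"])  # 85.0
```

Real pipelines do this aggregation in the cloud at much higher volume, but the principle, summarise first, analyse second, is the same.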

Dilemma: Where to store the data? How to capitalise on it?

When it comes to dealing with the data generated by these sports, the first dilemma is where to store it. Amazon Web Services has a slew of container and data lake services, such as Amazon S3 storage, that these teams are already using. However, just keeping the data in the cloud isn’t enough; teams need to run through and analyse the data for it to be truly useful. That’s where machine learning comes in.

While it might seem like a brand-new paradigm, it has been happening behind the scenes for quite a while. Teams in F1, the NFL and even rugby have been collecting and analysing data to help players perform better, drivers drive better and engineers optimise their technology further. There are companies, such as Pro Football Focus, that process and analyse the data in real time. At AWS re:Invent, Cris Collinsworth, CEO and Co-Founder of Pro Football Focus, said that analysis which used to take coaches around two to three days is now done in far less time, giving coaches more time to strategise and tweak their plays to help their teams win.

Photo by Chris Peeters from Pexels

The data collected during F1 races doesn’t just go to the cloud for storage. Analysts on the ground are constantly looking at it to make critical decisions for that extra edge; it plays a big role in teams’ pit-stop and undercut strategies during a race. Engineers also use the data for car design and tweaking between races. F1 has a pretty good head start compared to other sports, having used data analytics for over a decade, not only for performance but also to create new regulations that affect the whole sport and the welfare of the drivers.

Machine Learning in Capitalising on Collected Data

“We don’t do magic. We use technology to make decisions.”

Rob Smedley, Expert Technical Consultant, Formula 1

With the advent of machine learning in the past few years, the work of analysing the data has been made even easier. Using services like Amazon SageMaker, companies and teams are able to take advantage of the numerous data points in real time. Machine learning algorithms can churn out predictions and probabilities based on the collected data near instantaneously.
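As a flavour of the kind of output such a model emits, here is a toy win-probability function. The features, weights and formula are invented for illustration; a production model hosted on Amazon SageMaker would learn its weights from historical data rather than have them hard-coded.

```python
# Toy real-time prediction: a logistic win-probability estimate where a
# score lead counts for more as the remaining time shrinks. Weights invented.

import math

def win_probability(score_margin, time_left_frac, weights=(0.3, 1.5)):
    """Return P(win) in (0, 1) from the current margin and fraction of time left."""
    w_margin, w_time = weights
    x = w_margin * score_margin * (1.0 + w_time * (1.0 - time_left_frac))
    return 1.0 / (1.0 + math.exp(-x))

# A 7-point lead at half time vs. the same lead near the final whistle:
print(round(win_probability(7, 0.5), 2))
print(round(win_probability(7, 0.05), 2))
```

The useful property for coaches is the comparison: the same lead is worth more certainty later in the game, which is the kind of signal that feeds game-time decisions.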

Image by Gerd Altmann from Pixabay

That said, the data generated by the machine learning algorithms is only half the picture. It informs the coaches and players of not only the probabilities and possibilities but also what could be done to help give the teams an edge over the competition. The decision making process on the pitch or track is no longer only a question of gut instinct, it’s about tempering and guiding the gut instinct with mathematics.

“The teams that are really embracing the new approach are going to win the championships”

Cris Collinsworth, Co-founder and CEO, Pro Football Focus, and Broadcaster for NBC Sports Sunday Night Football

We are at the crossroads of a change in sports paradigms. Coaches are beginning to accept the data processed by machine learning algorithms as a guide for their game-time decisions. The game is changing based on how teams use and optimise machine learning to get the edge they need come game time.

Creating New Fan Experiences

That said, machine learning isn’t just providing an edge during game time. It’s also being used to create new fan experiences. Watching sports can be a pretty mundane experience for some, but with machine learning and data analytics, broadcasters can create new experiences that keep fans more engaged.

Image by Gerd Altmann from Pixabay

In the United States, broadcasters have been experimenting with data lakes and machine learning to enhance the sports viewing experience. This isn’t restricted to F1, the NFL, the NBA or MLB; it’s across the board. Broadcasters are using machine learning to create overlays and explanations that help fans better understand the sport. With the amount of data at their fingertips, shoutcasters and commentators can see plays before they happen, or even suggest plays that would have led to a better outcome. These nuggets of information are opening up the sports world to new audiences while creating a more engaging experience for long-time viewers and fans.

Given the amount of data being collected, it also comes as no surprise that broadcasters and even teams are looking into giving fans a better experience via a second screen. They are looking at what information would make sense and enhance the experience for viewers. Raw data isn’t the answer, but data processed by machine learning algorithms can give fans better understanding and appreciation, and, they expect, engage a whole new type of viewer.

“You still need the human element”

Rob Smedley, Expert Technical Consultant, Formula 1

With all the emphasis on machine learning and data analytics, it might seem that sports will be reduced to 1s and 0s. However, as Rob Smedley highlighted, artificial intelligence and machine learning can never replace the driver or player. The thing that makes sports engaging is the human element: how athletes push the boundaries of human performance, and how we use the data to improve not only the game but also other aspects of human life.

Combining AI and Humans in the New Decade

*This article was contributed by Ravi Saraogi, Co-Founder and President of Uniphore, APAC*

2020 marks the transition into the great unknown. With the emergence of new possibilities and challenges ahead of us, successful organisations must be quick to identify and take advantage of opportunities through the power of emerging technologies. Specific to the customer service industry, brands that utilise Conversational Artificial Intelligence (AI) technologies will improve business operations and customer experiences.

It is estimated that about 70% of organizations will integrate AI to assist employee productivity by 2021[1], meeting the high demand for faster, more relevant and more holistic service from today’s customers. More often than not, customers are frustrated when broken customer service systems and poorly equipped agents don’t understand their requests. To fix this, businesses must move away from a siloed experience and approach service holistically.

Photo by mahdis mousavi on Unsplash

As for the adoption of AI in Malaysian businesses, a survey conducted in 2018 revealed that only 26% of companies in Malaysia have actually begun integrating AI into their operations. The low adoption rate is attributed to two key barriers: organizational culture around AI and limited employee skill sets.[2] Thus, the time is now for businesses to blend the capabilities of people and AI, and better understand conversations in real time, to stay ahead of the race.

New Power to Customer Voice

With today’s technological capabilities, it’s about time we start hearing what customers really want. Customers today are time-poor, distracted and empowered by a wealth of products and services to choose from; instant gratification is their modus operandi. With other factors like price point and product quality being on par, superior customer service remains challenging and is often the deal breaker. In a competitive landscape, customers demand a seamless experience when interacting with a brand.

That said, poor customer experiences are no longer difficult to resolve, thanks to machine learning, AI and automation. AI is now helping brands truly listen to the voice of the customer and understand their needs, in order to quickly resolve queries, deepen engagement, and deliver superior customer experience at scale.

Making Headway with Conversational Service Automation

Minister of Communications and Multimedia Gobind Singh Deo emphasises Malaysia’s potential in the development of AI in both the public and private sectors, and the importance of ensuring that the local government and industries capitalise on the opportunities at hand.[3]

The use of AI is becoming more prevalent in the customer service industry as conversations become more complex. There is a small window of opportunity for brands to deliver personalised customer service, particularly when your engagement happens across diverse channels. Being equipped with an understanding of context, sentiment, behaviour and real intent, and being able to act on such insights in real-time becomes even more crucial.

Photo by Joseph Pearson on Unsplash

Conversational Service Automation is about enabling front office automation in contact centres. Consider this scenario: A customer starts a conversation with a chatbot for quick self-service. The bot is able to provide some quick and valuable updates based on the customer’s previous interactions. If the conversation gets more complex, the voice bot politely hands the call to a human agent via a live transfer. The agent is assisted through real-time analytics and chat transcripts to be able to make the next best offer which the customer gladly accepts.
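The hand-off at the heart of that scenario is a routing decision. Here is a bare-bones sketch: the bot handles what it is confident about and escalates to a human agent, with conversation context attached, when its confidence drops. The threshold and field names are arbitrary stand-ins, not any vendor's API.

```python
# Minimal sketch of bot-to-human hand-off in a contact centre flow.
# Confidence threshold and context sizes are arbitrary illustrations.

def route_query(query, bot_confidence, history):
    """Decide who handles the query and what context travels with it."""
    if bot_confidence >= 0.7:
        return {"handler": "bot", "context": history[-3:]}     # quick self-service
    return {"handler": "human_agent", "context": history}       # live transfer, full transcript

history = ["asked about balance", "asked about last payment", "disputes a charge"]
print(route_query("I want to dispute this charge", 0.35, history)["handler"])  # human_agent
```

Passing the transcript along with the transfer is what lets the human agent pick up mid-conversation instead of making the customer repeat themselves.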

This automation, backed by real-time analytics, is continuously self-learning, listening to conversations across channels in real time and converting them into actionable insights. The result is a win-win: businesses can reduce the workload on call centre agents, improve the accuracy of information, and deliver greater customer satisfaction.

Getting Ahead of the Race with Voice and AI

We are in the midst of a customer experience transformation and conversational AI technology is leading this change. There is a positive acceptance from both businesses and customers to adopt newer conversational AI technologies. This is driven by the try-before-you-buy and pay-as-you-go models offered, which enterprises find appealing and less risky. Brands can take smaller bets, test-and-learn and then scale up.

Automation has successfully allowed computers to respond to context within queries, monitor customer behaviour and improve overall customer service. Moreover, contact centre agents can now receive real-time alerts and recommendations for upselling and cross-selling. The time is now for companies to leverage conversational AI to deliver a quantum leap in customer service, in an industry that is full of potential. Brands that embrace conversational service automation will be the ones that stay ahead of the competition and thrive in the new decade.


[1] https://www.gartner.com/en/newsroom/press-releases/2019-01-24-gartner-predicts-70-percent-of-organizations-will-int

[2] https://www.malaymail.com/news/malaysia/2019/09/12/gobind-malaysia-well-positioned-in-se-asia-for-ai-research-and-development/1789773

[3] https://www.stuff.tv/my/news/malaysian-companies-needs-build-ai-culture-it-too-late-microsoft

Be A Maestro with AWS DeepComposer

You would think that when it comes to making compositions and music, you'd need a really good ear and knowledge of the arts. Not so with Amazon Web Services' new AI (Artificial Intelligence) service focused on creating musical pieces with a keyboard! DeepComposer is the latest in a series of Machine Learning focused services that AWS has introduced since its announcement of DeepLens at re:Invent 2017.

The new music-based AI is a 32-key, two-octave keyboard which will allow developers to familiarise themselves with Generative AI. The simple application of Generative AI in DeepComposer takes short riffs and generates full compositions.

A brief diagram explaining how AWS’s DeepComposer works. (Source: AWS)

The DeepComposer generative AI will be able to layer and generate songs based on pre-trained models or even user-defined models. The pre-trained models generate music using algorithms developed by training the AI on large musical data sets. User-defined models give users finer control of the generative AI: users will be able to define multiple parameters, including the Architecture and the Discriminator. The latter allows the AI to distinguish between genres and judge the overall composition.
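The generator/discriminator interplay described above can be illustrated with a deliberately tiny toy. This is not DeepComposer's actual architecture (which uses GANs trained on large musical data sets); it is only a sketch of the loop in which a generator proposes notes and a discriminator scores how well the result fits a target style, here approximated by membership in one musical scale.

```python
# Toy generator/discriminator loop -- an illustration of the idea only,
# not AWS DeepComposer's real GAN. The scale, note range and step count
# are arbitrary choices for the sketch.

import random

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes standing in for a "genre"

def discriminator(melody):
    """Fraction of notes in the target scale (higher = more plausible)."""
    return sum(1 for n in melody if n % 12 in C_MAJOR) / len(melody)

def generator_step(melody, rng):
    """Propose a random single-note change; keep it only if the
    discriminator rates the result at least as plausible."""
    candidate = list(melody)
    candidate[rng.randrange(len(candidate))] = rng.randrange(60, 72)
    return candidate if discriminator(candidate) >= discriminator(melody) else melody

rng = random.Random(0)
melody = [61, 63, 66, 68]          # a short riff, mostly out of scale
for _ in range(200):
    melody = generator_step(melody, rng)
# after many steps the melody's discriminator score tends toward 1.0
```

In a real GAN both sides are trained neural networks and the discriminator also improves over time; here it is fixed purely to keep the feedback loop visible.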

Announcing AWS DeepComposer with Dr. Matt Wood, feat. Jonathan Coulton

Being a machine learning model, DeepComposer continually learns to identify music types. The AI will improve over time as it learns and generates more music based on the models and riffs. It will also be able to generate music which mimics a defined model. As Amazon's release puts it, "you have to train as a counterfeiting expert in order to become a great counterfeiter".

DeepComposer isn’t just linked to the physical keyboard. It also has a digital keyboard interface which allows users to compose on the go. Using this approach, AWS is hoping that Generative AI models are made more approachable for those looking to explore their applications.

The new feature is currently available for preview on AWS at the DeepComposer website. Also on the website is an FAQ to address some of the questions that new users may have.

Going Digital Isn’t Just About Technology; It’s About Changing Mindsets

The world is abuzz with a massive change in the way companies work. This change is spurred by the introduction of many technologies which have revolutionised and fundamentally changed how things are done. Perhaps the biggest observable change so far is that start-ups have become the new normal. The simple reason behind this is that there has been a fundamental change in paradigm when it comes to product development and the time it takes for an industry-wide disruption to occur. What once took decades is now happening at a near daily pace. The reality of disruption today is that you don't have to be a large corporation to disrupt, nor do you have to be a digital native. You simply have to be able to impact the way things are done and fundamentally change a preset mindset.

Being Digital Simply Means Adopting A New Mindset

Looking back at disruptors such as Grab or Uber, this statement couldn't be more true. Even in our sit-down with Mr Santanu Dutt, Chief Technology Officer and Head of Technology (ASEAN) at Amazon Web Services, this point was stressed. The world has changed from an industry-first paradigm to one where customers are placed front and centre. Development starts with the identification of a gap in services, or a new way of offering a service that delivers a better customer experience. From there, companies need to address the constantly changing demands of the customer with quick iterations. The harsh reality is that, when it comes to competing in Industry 4.0, companies are now vying for a very limited commodity: customer attention. The days in which customers have a sense of loyalty are quickly fading. Instead, they look to new experiences and features which make their lives easier.

Santanu Dutt, Chief Technology Officer and Head of Technology (ASEAN) at Amazon Web Services (AWS)

So the big question is: How can companies have a competitive edge in this marketplace? As Mr Santanu put it, "Being digital is also largely a cultural change. Yes, it is about technology but [also] a cultural change of a company to have their product and services digitally [and] expand their reach." He stresses that the fundamental cultural change is for companies and their employees to understand the needs of their customers, listen to their feedback and iterate quickly to address them. In fact, in recent years, we've seen companies die because of this. One of the best examples on an international scale is Blockbuster and other video rental services. With the advent of fast broadband internet, their customers started expecting videos and movies to be immediately available for on-demand viewing. The only company to capitalise on this fundamental change was Netflix, which transformed from an overnight DVD and Blu-ray courier and rental service into a platform that allowed users to stream video on demand. This was, of course, followed closely by Amazon Prime Video and others. Another example is Grab, which started off as an app to make hailing a taxi easier and safer. Today, it is Southeast Asia's largest ride-hailing application and e-wallet.

Learning and Unlearning to Compete in Industry 4.0

There is a misconception that going digital simply means adopting new technologies to streamline processes. Truth be told, going digital entails more than that; it involves learning new approaches and technologies, and unlearning the old approaches which are holding the company back. In adopting new technologies such as Amazon Web Services (AWS) cloud-based services, companies cannot simply take a "lift and shift" approach, where they move their pre-existing architecture onto platforms such as AWS as-is. Instead, SMEs need to learn the new technologies and implement them in a way that maximises their potential; in essence, unlearning the old and optimising essential processes and architecture using new technologies such as Machine Learning (ML) and Data Lakes.

Image by TeroVesalainen from Pixabay

To be agile and effective, SMEs must look for the approach most suited to their needs. Certain industries may not permit the complete migration of on-premises infrastructure to one that is purely cloud-based. In cases such as these, Mr Santanu says there is no harm in keeping core services on premises while permitted peripheral services are moved to the cloud. This approach allows SMEs to benefit from an agile workflow whilst staying in line with regulations. When it comes to regulated industries, certification is essential. This is why SMEs looking to take advantage of Industry 4.0 should look to partners who share the burden of obtaining industry certifications. Companies such as AWS share this burden with their clients and ensure that any certification necessary for the relevant industries is met on a regular basis.

With these worries aside, SMEs can focus on learning new approaches such as implementing DevOps in a leaner, more efficient manner. This will, over time, lead to better processes which allow for greater profits while minimizing cost. With partners such as AWS, SMEs can focus on servicing their clients while leaving infrastructure maintenance to their partner.

Planning For Scale from the Beginning

To keep up with the demands of the rapidly changing landscape of Industry 4.0, companies need the foresight to plan for scale from the get-go. While AWS acknowledges that know-how and skill sets may remain a gap in the near future, the company is working with universities to train the future workforce. In Malaysia alone, AWS is training over 100,000 students who will soon enter the workforce equipped with the skills and knowledge required to take advantage of cloud computing.

Image by Gerd Altmann from Pixabay

That said, companies have to be able to scale dynamically. As businesses continue to grow rapidly thanks to the internet, they need to be ready, from the beginning, to cope with scale. At a moment's notice, they may be required to go from thousands to hundreds of thousands of transactions, which is only achievable when infrastructure can scale accordingly. With cloud computing platforms such as AWS, SMEs need not worry about acquiring new infrastructure; instead, they can accommodate the increased scale with simple automation.
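The automation in question is, at its core, a capacity calculation the platform repeats for you: measure load, work out how many instances it needs, and clamp the answer to the fleet's limits. A hypothetical sketch of that scale-out logic follows; the capacity figure, limits and function names are illustrative assumptions, not an actual AWS Auto Scaling configuration.

```python
# Hypothetical threshold-based scale-out calculation, of the kind a
# cloud platform automates. All numbers here are illustrative.

import math

CAPACITY_PER_INSTANCE = 1_000   # transactions/minute one instance can serve
MIN_INSTANCES, MAX_INSTANCES = 2, 200

def desired_instances(transactions_per_minute: int) -> int:
    """Instances needed for the current load, clamped to fleet limits."""
    needed = math.ceil(transactions_per_minute / CAPACITY_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))
```

A jump from 3,500 to 350,000 transactions per minute changes only the number this function returns; the business logic running on each instance stays untouched, which is precisely why the acquisition of physical infrastructure stops being the bottleneck.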

Malaysia is already moving towards Industry 4.0 with a push from the government as well as industry. More importantly, SMEs need to learn to iterate, at scale, to accommodate the needs and demands of their customers. That said, it is still early days in Malaysia. The change in mindset needed for the country and its industries to fully appreciate and benefit from the potential of Industry 4.0 is still in its growing stages. Mr Santanu stresses that with the passage of time and the willingness of Malaysian SMEs to adopt new technologies and approaches, there is no doubt that the country will be able to reap the many benefits of Industry 4.0.