Earlier this year, YouTube brought a number of updates to its platform, including a new feature called Chapters. Chapters lets creators divide a video into sections using timestamps, so viewers can jump straight to the parts that interest them most by clicking timestamps in the description instead of scrubbing through the seek bar. That being said, not all uploaded videos come with chapters, as creating them involves the cumbersome task of manually identifying timestamps. To make things a little easier, YouTube has been testing an AI model which can divide videos into chapters on the fly.
To do this, YouTube is deploying AI that goes through a video and identifies certain visual markers. These markers are then used as reference points to break the video into chapters. The algorithm also recognises certain text-based cues in a video to do the same. According to YouTube, the main purpose of the experiment is to create easy entry and exit points for viewers, making it easier to navigate through videos and quickly jump to the relevant part they desire.
The new feature is currently being tested by YouTube on a small group of videos. Needless to say, YouTube is allowing creators to opt out of the experiment. It is also encouraging uploaders to provide feedback on how the feature could be improved.
The COVID-19 pandemic doesn’t seem to be going away anytime soon. The virus continues to spread drastically and has a devastating effect in areas where outbreaks have occurred. Since the early days of the pandemic, there have been reports of asymptomatic carriers: individuals who are able to spread the virus without showing any outwardly recognisable signs of infection. This makes them one of the largest unsolved problems of the current COVID-19 pandemic, as this group of individuals is less likely to seek testing and, in turn, be diagnosed and treated.
However, that may be about to change. A group of researchers at the Massachusetts Institute of Technology (MIT) has developed an AI model that can accurately identify asymptomatic carriers based on the way they cough. In testing, the model correctly identified 98.5% of coughs from confirmed COVID-19 patients and 100% of coughs from asymptomatic carriers.
Using A.I. to Identify Unique Markers in Coughs
The team at MIT, consisting of Jordi Laguarta, Ferran Hueto, and Brian Subirana, developed the model on a neural network called ResNet50. ResNet50 is a type of neural network that is able to discern differences and similarities in data. Until now, ResNet50 has been used primarily for visual recognition; the team at MIT has applied it to identifying markers in the way people cough.
Their model was initially developed to help detect early signs of Alzheimer’s, which can present in the way people cough. These signs include the person’s emotional state, changes in lung and respiratory performance, and vocal cord strength, all of which are known markers for someone who could be experiencing early-onset Alzheimer’s.
Using these three criteria, three independent machine learning algorithms were trained and then layered on top of one another. The team also added an algorithm for muscular degradation on top of the model. In tandem, these machine learning layers made it possible for the team to detect and identify samples from Alzheimer’s patients.
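The layering described here resembles a classic stacking ensemble: several specialised models each score the same input, and a final layer combines their outputs into one decision. The snippet below is a minimal pure-Python sketch of that idea, not MIT's actual code; the three scorer functions, their feature names, and the simple averaging combiner are illustrative stand-ins for the trained ResNet50 layers.

```python
# Minimal stacking sketch: three independent "biomarker" scorers whose
# outputs are combined by a final decision layer. Each scorer stands in
# for a trained model (e.g. a ResNet50 run on cough spectrograms).

def vocal_cord_score(features):
    # Placeholder for a trained model's probability output.
    return min(1.0, features.get("vocal_weakness", 0.0))

def sentiment_score(features):
    return min(1.0, features.get("distress", 0.0))

def lung_performance_score(features):
    return min(1.0, features.get("respiratory_decline", 0.0))

def combined_prediction(features, threshold=0.5):
    """Average the three layer outputs and apply a decision threshold."""
    scores = [
        vocal_cord_score(features),
        sentiment_score(features),
        lung_performance_score(features),
    ]
    probability = sum(scores) / len(scores)
    return probability, probability >= threshold

# Example: a sample with strong markers across all three layers.
prob, positive = combined_prediction(
    {"vocal_weakness": 0.9, "distress": 0.8, "respiratory_decline": 0.85}
)
```

In a real stacked model, the combiner would itself be trained rather than a fixed average, which is what lets the layers weigh each biomarker's reliability.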
Detecting the Indiscernible
In April, the team looked into applying the AI model to identifying COVID-19 patients. To do this, they established a website where people could record a series of coughs with their mobile phone or any other web-enabled device. In addition to their submissions, participants had to fill out a survey covering their symptoms, COVID-19 status, and method of diagnosis. Other factors such as native language, geographical location, and gender were also collected. To date, they have collected over 70,000 recordings, amounting to about 200,000 forced cough samples, which Brian Subirana says is the largest known cough dataset collected so far.
The model confirms the long-known fact that COVID-19 does affect respiratory function. It also draws similarities between this temporary respiratory degradation and the neurodegeneration present in Alzheimer’s patients. More importantly, it shows that there are sub-clinical presentations of the disease in asymptomatic individuals, and that the AI algorithm is able to detect and identify individuals with these presentations, providing a much-needed boost to their potential diagnosis.
More significantly, the team has developed a method by which pre-screening can be done to help curb the spread of COVID-19. What’s more, their research could be the foundation of future diagnosis of sub-clinical presentations of disease. That said, Brian Subirana highlights that the strength of the tool lies in its ability to distinguish the coughs of asymptomatic carriers from those of healthy individuals. He also stresses that it is not meant to be used as a definitive test for COVID-19.
Acer’s Planet9 was launched a year ago as the company’s commitment to the growing eSports scene. The platform allows aspiring professional gamers to hone their skills and collaborate. The vision for this next-gen platform is to provide a “training arena” where pros, semi-pros and enthusiasts can improve their game.
“Planet 9 is a community-oriented platform designed to give gamers everywhere a chance to interact and learn from each other. It is intended to be a social platform that caters to multiple audiences: those looking to improve are introduced to similarly skilled teammates and opponents, likewise, those just looking to chat and enjoy themselves can meet other casual players…”
Andrew Chuang, AVP, Esports Services, IT Products Business, Acer Inc.
Source: Acer
Planet9 was designed to bring different eSports communities together in one place, and a major part of the platform is effectively managing and integrating these communities. The platform helps users to find teammates based on a variety of factors such as game type, skill level and time zone. It also gathers and records a wide variety of data such as score, pathing, kill-death ratio and death location. This provides coaches and managers information they can use to help guide their players.
This year, Acer is bringing cutting-edge AI to Planet9, its next-generation eSports platform, in the form of the SigridWave In-Game Live AI Translator. SigridWave has been specially designed to handle gaming terminology and jargon. It leverages deep learning technologies to bridge language barriers, allowing gamers to communicate no matter where they are from. This is an important step in enhancing the gaming experience.
When SigridWave is deployed, it will utilise Automatic Speech Recognition (ASR) technology to recognise speech from gamers and convert it into strings of text, much as smartphones do when you use a virtual assistant. This text is then translated using Neural Machine Translation (NMT) technology. The NMT model deployed by SigridWave has so far been trained on over 10 million bilingual sentence pairs, allowing it to recognise game-specific language and jargon such as “ADS” or “camping”, giving it context awareness. In-game overlays will be supported for League of Legends at launch in late 2020 or early 2021, with support for additional titles to follow.
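Conceptually, that pipeline chains two stages: speech-to-text, then text-to-text translation with a jargon-aware pass. The sketch below illustrates the flow in plain Python; it is not Acer's implementation, and `fake_asr`, `fake_nmt`, and the glossary are hypothetical stand-ins for the real ASR and NMT models.

```python
# Hypothetical two-stage pipeline: ASR produces text, NMT translates it.
# Gaming jargon is protected from literal translation via a glossary pass.

JARGON = {"ADS", "camping", "gank"}  # terms kept verbatim across languages

def fake_asr(audio_frames):
    """Stand-in for a speech recogniser: pretend the frames decode to words."""
    return " ".join(audio_frames)  # in reality, an acoustic + language model

def fake_nmt(sentence, target_lang):
    """Stand-in for a translation model: tag each non-jargon word."""
    out = []
    for word in sentence.split():
        if word in JARGON:
            out.append(word)                      # jargon passes through untranslated
        else:
            out.append(f"{word}[{target_lang}]")  # placeholder "translation"
    return " ".join(out)

def translate_voice(audio_frames, target_lang="ko"):
    text = fake_asr(audio_frames)
    return fake_nmt(text, target_lang)

result = translate_voice(["enemy", "is", "camping", "mid"], target_lang="ko")
```

The glossary step is why training on bilingual sentence pairs from gaming contexts matters: a generic translator would render "camping" literally, losing its in-game meaning.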
The new technology has the potential to take competitive and professional gaming to a whole new level. Together with SigridWave, Acer also unveiled Clubs and Tournaments, two new features that will help players collaborate and compete regularly to up their game. These join a slew of new features designed to enhance competitive play and facilitate communication between brands and players.
This is the golden age of machine learning (ML). Once considered peripheral, ML technology is becoming a core part of businesses around the world, regardless of the industry. By 2021, the International Data Corporation (IDC) estimates that spending on artificial intelligence (AI) and other cognitive technologies will exceed $50 billion.
Locally, 25% of organizations say they are setting aside at least 10% of their budget for technology, which includes investments in big data analytics (64%), cloud computing (57%), machine learning and artificial intelligence (33%), and robotic process automation (27%), based on the Malaysian Institute of Accountants’ “MIA-ACCA Business Outlook Report 2020”. [1] As more companies gain awareness of the importance of ML, they should work towards getting it in motion as quickly and effectively as possible.
At Amazon, we have been on our own ML journey for more than two decades – applying it to areas like personalization, supply chain management, and forecasting systems for our fulfillment process. Today, there is not a single business function at Amazon that is not made better through machine learning.
Whether your company is just getting started or in the middle of your first implementation, here are the four steps you should take to have a successful machine learning journey.
Get Your Data in Order
When it comes to adopting machine learning, data is often cited as the number one challenge. We found that more than 50% of time spent in building ML models can be spent in data wrangling, data cleanup, and pre-processing stages. Therefore, prioritize investing in the establishment of a strong data strategy to avoid spending excessive time and resources on data cleanup and management.
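As a rough illustration of where that wrangling time goes, the snippet below uses pandas for a few of the most common cleanup steps: duplicates, inconsistent labels, and missing values. The column names and rules are assumptions made up for the example, not a prescription.

```python
import pandas as pd

# Toy dataset with the usual problems: duplicated rows, inconsistent
# label casing, and missing values.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "region": ["north", "north", "North", None, "south"],
    "spend": [120.0, 120.0, 80.0, None, 45.0],
})

clean = (
    raw.drop_duplicates()                                 # remove exact repeats
       .assign(region=lambda d: d["region"].str.lower())  # normalise labels
       .dropna(subset=["region"])                         # drop rows missing a region
       .fillna({"spend": 0.0})                            # impute missing spend
       .reset_index(drop=True)
)
```

Even this toy example shows why the cleanup stage dominates: each rule encodes a business decision (is a missing region droppable? is missing spend really zero?) that someone has to make before any model sees the data.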
When starting out, the three most important questions to ask are:
What data is available today?
What data can be made available?
A year from now, what data will we wish we had started collecting today?
In order to determine what data is available today, you will need to overcome data hugging – the tendency for teams to gatekeep data they work with most closely. Breaking down silos between teams for a more expansive view of the data landscape while still maintaining data governance is crucial for long-term success.
Additionally, identify what data actually matters to your machine learning approach. Think about the best ways to store data, and invest early in data processing tools for de-identification and/or anonymization, if needed.
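One common de-identification step is replacing direct identifiers with keyed pseudonyms, so records remain joinable across datasets without exposing the raw values. The sketch below uses only Python's standard library; the field names and the secret key are illustrative assumptions.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # illustrative; keep real keys in a secrets manager

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so datasets remain
    joinable, but the original value cannot be recovered without the key.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"email": "jane@example.com", "spend": 120.0}
safe_record = {**record, "email": pseudonymize(record["email"])}
```

Using a keyed HMAC rather than a plain hash matters: without the key, an attacker could rebuild the mapping by hashing a list of known email addresses.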
Identify the Right Business Problems
When evaluating what and how to apply ML, focus on assessing the problem across three dimensions: data readiness, business impact, and machine learning applicability.
Balancing speed with business value is key. Instead of trying to embark on a three-year ML project, focus on a handful of critical business use cases that could be solved in the upcoming six to 10 months. Start by identifying places where you already have a lot of untapped data and evaluate if machine learning brings benefits. Avoid picking a problem that is flashy but has unclear business value, as it will end up becoming a one-off experiment.
Champion a Culture of Machine Learning
In order to scale, you need to champion a culture of machine learning. At its core, ML is experimentation. Therefore, it is imperative that your organization embrace failures and take a long-term view of what is possible.
Businesses also need to combine a blend of technical and domain experts to work backward from the customer problem. Assembling the right group of people also helps eliminate the cultural barrier to adoption with a quicker buy-in from the business.
Similarly, leaders should constantly find ways to simplify the process of ML adoption for their developers. Since building ML infrastructures at scale is a time and labor-intensive process, leaders should encourage their teams to use tools that cover the entire ML workflow to build, train, and deploy these models efficiently.
For instance, 123RF, a homegrown stock photography portal, aims to make design smarter, faster, and easier for users. To do so, it relies on Amazon Athena, Amazon Kinesis, and AWS Lambda for data pipeline processing. Its newer products, like Designs.ai Videomaker, use Amazon Polly to create voice-overs in more than 10 different languages. With AWS, 123RF has maintained flexibility in scaling its infrastructure, shortened product development cycles, and is looking to incorporate other services to support its machine learning and AI research.
Develop Your Team
Developing your team is essential to fostering a successful machine learning culture. Rather than spending resources to recruit new talent in a competitive market, home in on developing your company’s internal talent through robust training programs.
Years ago, Amazon created an in-house Machine Learning University (MLU) to help its own developers sharpen their ML skills or equip neophytes with tools to get started. We made the same machine learning courses available to all developers through AWS’s Training and Certification offering.
DBS Bank, a Singaporean multinational bank, employed a different approach. It is collaborating with AWS to train its employees to program their own ML-powered AWS DeepRacer autonomous 1/18th scale car, and race among themselves at the DBS x AWS DeepRacer League. Through this initiative, it aims to train at least 3,000 employees to be conversant in AI and ML by year end.
[1] MIA (Malaysian Institute of Accountants) and ACCA (Association of Chartered Certified Accountants), Business Outlook Report 2020, 2020
Hot on the heels of the iPhone 12 launch, Google has sent out invites for a live stream event called Search On, to be held on October 15th at 12PM PT/3PM ET (3AM GMT+8). Google is looking to highlight how the company is applying the power of AI to help people understand the world better.
Source: Google
The event was announced through a brief tweet that revealed little about it. However, given the company’s focus, you can expect the search giant to update the world, particularly developers, on new features, services, and products from Google. More interestingly, Google is expected to share how it is updating Search.
The event will be one of a series that has taken the place of I/O this year. These events have, in the past, each had a singular focus, such as Assistant or Maps, so we expect the search giant to focus its keynote on the advancements it will be introducing to Search.
‘Samsung AI Forum 2020’ explores the future of artificial intelligence (A.I.). This interesting forum highlights the future of AI and serves as a platform to exchange ideas, research, and insights. It will be held on Samsung’s YouTube channel over two days, from the 2nd to the 3rd of November 2020. The most exciting part is that the forum gathers experts from various industries in a discussion on the future of A.I.
As you know, Samsung is one of the largest technology companies in the world and delivers transformative ideas, making some of the best-selling and most highly acclaimed electronics around. Samsung makes nearly every sort of electronics, including televisions, smartphones, tablets, digital appliances, network systems, LED solutions, and memory.
The first day of the forum, on the 2nd of November, will be hosted by the Samsung Advanced Institute of Technology (SAIT). Dr. Kinam Kim, Vice Chairman and CEO of Device Solutions at Samsung Electronics, will deliver the opening remarks. There will be no shortage of presentations by the world’s most renowned A.I. experts on “AI Technologies for Changes in the Real World.”
Many professionals will hold sharing sessions on day one, most notably Professor Yoshua Bengio, winner of the 2018 Turing Award (often described as the “Nobel Prize” of computing), who will co-chair the forum. On the first day of the event, the “Researcher of the Year” award will also be presented, along with a US$30,000 prize.
Day 2, themed “Human-Centered AI”, will see Dr. Sebastian Seung, President and Head of Samsung Research, engage with A.I. experts as they deliver speeches and share their insights. Professor Christopher Manning, a prominent expert in the field, will also present on the current status and future of the Natural Language Processing (NLP) required for human-centered AI. He has been working with Samsung on Q&A and dialogue modelling as part of its overall NLP technology development.
Samsung’s AI Forum, as mentioned earlier, will be held from the 2nd of November 2020 exclusively on the company’s YouTube page, which also means it will be absolutely free to watch and ‘attend’. You may want to register and look for more information on the Forum’s website too.
When it comes to networking, there is a myriad of considerations that go into securing, deploying, and even managing the network. This is further exacerbated in large networks, where the traditional security perimeter is disintegrating under BYOD (Bring Your Own Device) and remote work cultures. Corporations and even homes are left with a huge gap that they have had to fill with multiple solutions which sometimes just don’t coalesce.
Aruba has been hard at work developing a solution that will help companies be forward-thinking while keeping their security in check. Their new ESP (Edge Services Platform) allows companies to adopt policies such as BYOD without compromising network security, and without dedicating large human and financial resources to managing and administering network access. ESP essentially empowers the network with an AI-driven sixth sense that provides actionable insights for network administrators while taking the bulk of menial tasks off their to-do lists.
The Aruba ESP framework essentially consists of three principal components: AI Ops, a unified infrastructure and Zero Trust Network Security. These components work in tandem to deliver increased network reliability and security. With a cohesive approach, Aruba has managed to build an offering that is able to be implemented at scale and even with smaller businesses. In fact, ESP is able to be deployed according to client needs over a period of time.
The AI Ops component of ESP helps identify, segment, and remediate network issues. With Aruba’s implementation of AI Ops, the platform is able to analyse and segment the network to isolate and protect company assets while allowing employees and guests to connect with their own devices. It also proactively monitors the network for security risks such as infected devices or probable attackers to prevent downtime. Even if there is downtime, AI Ops allows Aruba’s ESP to automatically heal and repair the network, which can, in the best scenarios, negate downtime entirely.
The Edge Services Platform is also a turnkey solution for corporations, allowing them to consolidate their networking on one unified platform. Running on the already proven Aruba Central network management solution, ESP provides administrators with a cloud-native way to manage everything from switching and Wi-Fi to SD-WAN across their campus network. The single, unified interface also gives them a one-stop platform to identify and deal with potential networking issues which may arise. This, together with the analytics and insights from AI Ops, simplifies the process of identifying, isolating, and fixing network issues. What’s more, Aruba’s ESP is brand agnostic, allowing devices and services from other vendors to be seamlessly integrated into the network.
ESP adopts a Zero Trust approach to network security. However, it doesn’t just segment the network; instead, it uses built-in, role-based access technology to enable Dynamic Segmentation. This simply means that the platform is able to identify and isolate devices dynamically as they enter the network, using an AI model trained to identify certain parameters and automatically assign or isolate devices to help prevent potential security risks or breaches. This approach allows companies to be forward-looking while keeping their assets and data safe from intrusion, empowering remote work and BYOD policies which have been shown to increase productivity.
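In spirit, dynamic segmentation classifies each device as it joins and derives its network segment from the assigned role. The snippet below is a deliberately simplified, rule-based sketch of that idea; real deployments use trained models and Aruba's policy engine, and the roles, device attributes, and VLAN numbers here are invented for illustration.

```python
# Toy dynamic segmentation: classify a device from its observed
# attributes, then map the role to a network segment (VLAN).

ROLE_TO_VLAN = {
    "corporate": 10,   # managed company assets
    "byod": 20,        # employee personal devices
    "guest": 30,       # visitors
    "quarantine": 99,  # anything suspicious gets isolated
}

def classify_device(device):
    """Stand-in for the trained classifier described above."""
    if device.get("flagged_malware"):
        return "quarantine"          # isolation always wins
    if device.get("domain_joined"):
        return "corporate"
    if device.get("authenticated_user"):
        return "byod"
    return "guest"

def assign_segment(device):
    role = classify_device(device)
    return role, ROLE_TO_VLAN[role]

# A personal device with an authenticated employee lands on the BYOD VLAN.
role, vlan = assign_segment({"authenticated_user": "alice", "domain_joined": False})
```

The key design point is that the quarantine rule is checked first: a compromised corporate asset must be isolated, not granted its usual privileged segment.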
Source: Aruba
Aruba’s ESP heavily leverages telemetry and insights derived from the company’s many years of providing networking solutions and hardware to deliver an ever-evolving, rapidly adapting solution that can be deployed according to the needs and constraints of its customers. That said, ESP isn’t reliant solely on Aruba’s data and telemetry; it evolves with the company, learning from the data and telemetry natively derived from the organisation and its policies. Aruba ESP will be available for current platforms, including Amazon Web Services.
Artificial intelligence and machine learning (AI and ML) technologies have come a long way since their inception. Who would have thought that we would have working computer-based assistants that can do things like manage our schedules? Who would have thought that we could even use these assistants to manage our homes? These technologies can now even be used to help diagnose cancer patients, something that would have been impossible without doctors even five years ago.
Amazon Web Services (AWS) is at the forefront of AI and ML technology. As one of the world’s largest technology innovators, it is naturally at an advantage in feeding the technology enough data to accelerate its development. Being one of the largest technology firms in existence, it is also well placed to bring AI and ML to places and applications we may never have imagined.
Linguistics is one area that has benefitted greatly from today’s technologies. Language, if you think about it, is also one of the most complex things that we humans create and understand. Its context and interpretation can be affected by plenty of things: area, culture, community, heritage, and even lineage.
For example, there are differences between the French spoken in France and in Canada. There are even subtle differences between the French spoken in France and in Monaco, or even Switzerland. The most common language of all, English, has differences in spelling and context between Britain, the Americas, and Australia. The English spoken today is also a distinct form of the language spoken 50 years ago.
Progress in language technology has come through years and years of feeding data into these systems, allowing us to communicate with global communities more easily than ever. AWS has taken it a little further than that, though, going beyond spoken or written languages. Through something called AWS DeepLens, developers have built translation algorithms for sign languages.
While that technology might sound as simple as gesture control, it is plenty more than that. Yes, it is technically gesture recognition, but it is far larger and more complex than just a solution for end-point devices. The trick is to teach the algorithm to recognise all the available sign words and even individual letters. AWS DeepLens community projects have so far learnt to recognise most of the letters of the American Sign Language alphabet.
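At its simplest, recognising a static sign letter can be framed as classifying a vector of hand-landmark coordinates. The sketch below shows a nearest-neighbour version of that idea in pure Python; it is a conceptual illustration only, not the DeepLens projects' actual approach (those use trained deep networks on camera frames), and the landmark vectors are made up.

```python
import math

# Toy "training" set: one made-up landmark vector per ASL letter.
# Real systems extract dozens of hand keypoints from camera frames.
TEMPLATES = {
    "A": [0.1, 0.2, 0.1, 0.3],
    "B": [0.9, 0.8, 0.9, 0.7],
    "L": [0.5, 0.1, 0.9, 0.2],
}

def distance(a, b):
    """Euclidean distance between two landmark vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_sign(landmarks):
    """Return the letter whose template is closest to the observed landmarks."""
    return min(TEMPLATES, key=lambda letter: distance(TEMPLATES[letter], landmarks))

letter = classify_sign([0.12, 0.18, 0.11, 0.29])
```

The hard part in practice is everything before this step: reliably extracting comparable landmark vectors from video, across lighting, hand sizes, and camera angles, which is where the deep networks come in.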
But the technology also goes beyond recognising letters to understanding proper words, using the algorithm in Amazon Alexa. It is not just about communicating with your friends anymore; it is about using the platform as a home assistant, a customer service tool, a command centre, and a user-defined PC experience that mimics voice control and command. Instead of voice, though, it’s all in the gestures.
The tool they use is called Amazon Transcribe. It works like other transcription apps on the market and currently supports up to 31 languages, with more being added over time. Here, it even serves as a component in creating text from sign language.
Simple communication is just the beginning for the technology, though. AI and ML still have a long way to go, even in the medical field, but just like the human race, the technology gets better every day. If you really think about it, the technology is not that new in the first place: we embarked on the journey of building machine-defined assistants the moment we started developing computers to help us with simple and complex mathematical problems.
It is just that the simple mathematical problem solver has become something much bigger today. Who would have thought that we would let computers fly a commercial airplane? Who would have thought that cars could drive themselves? Who would have thought that we could hire a private translator without spending any money or time? You just have to look in your pocket.