Hot on the heels of the iPhone 12 launch, Google has sent out invites for a live stream event called Search On, happening on October 15th at 12PM PT / 3PM ET (3AM GMT+8/MST). Google is looking to highlight how the company is applying the power of AI to help people understand the world better.
The event was announced through a brief tweet that revealed little about the virtual event. However, given the company’s focus, you can expect the search giant to update the world, particularly developers, on new features, services, and products from Google. More interestingly, Google is expected to share how it is evolving Search itself.
The event will be one of a series that has taken the place of I/O this year. These events have, in the past, each had a singular focus, covering Assistant, Maps, and so on; so we are expecting the search giant to centre its keynote on the advancements it will be introducing to Search.
‘Samsung AI Forum 2020’ explores the future of Artificial Intelligence (A.I.)! This interesting forum highlights the future of AI and serves as a platform to exchange ideas, research, and insights. It will be held via Samsung’s YouTube channel over two days, from the 2nd of November 2020 to the 3rd of November 2020. The most exciting part is that the forum gathers experts from various industries for a discussion on the future of A.I.
As you know, Samsung is one of the largest technology companies in the world and delivers transformative ideas to the world. They make some of the best-selling and most highly acclaimed electronics in the world too. Samsung technically makes nearly every sort of electronics, including televisions, smartphones, tablets, digital appliances, network systems, LED solutions, and memory.
The first day of the forum, on the 2nd of November, will be hosted by the Samsung Advanced Institute of Technology (SAIT). Dr. Kinam Kim, Vice Chairman and CEO of Device Solutions at Samsung Electronics, will deliver the opening remarks. There will be no shortage of presentations by the world’s most renowned A.I. experts on “AI Technologies for Changes in the Real World.”
Many of the professionals will hold sharing sessions on day 1. Most notably, Professor Yoshua Bengio, winner of the 2018 Turing Award (often described as the “Nobel Prize” of computing), will be co-chairing the forum. On the first day of the event, the “Researcher of the Year” award will also be presented to the winner, together with a US$30,000 prize.
Day 2, themed “Human-Centered AI”, will see Dr. Sebastian Seung, President and Head of Samsung Research, engage with A.I. experts as they deliver speeches and share their different insights. Professor Christopher Manning, a prominent expert in the field, will also present the current status and future of the Natural Language Processing (NLP) required for human-centered AI. He has been working with Samsung on question answering and dialogue modelling as part of the overall development of NLP technologies.
Samsung’s AI Forum, as mentioned earlier, kicks off on the 2nd of November 2020. It will be held exclusively on the company’s YouTube channel, which also means that it will be absolutely free to watch and ‘attend’. You might want to register and look for more information on the forum on Samsung’s website too.
When it comes to networking, there’s a myriad of considerations that go into securing, deploying, and even managing the network. This is further exacerbated in large networks, where the traditional security perimeter is being dissolved by BYOD (Bring Your Own Device) policies and even remote work cultures. Corporations and even homes are left with a huge gap that they have had to fill with multiple solutions which sometimes just don’t coalesce.
Aruba has been hard at work developing a solution that helps companies be forward-thinking while keeping their security in check. Their new ESP (Edge Services Platform) allows companies to adopt policies such as BYOD without compromising their network security, and without having to dedicate large human and financial resources to managing and administering network access. ESP essentially empowers the network with an AI-driven sixth sense that provides actionable insights for network administrators while taking the bulk of menial tasks off their to-do lists.
The Aruba ESP framework essentially consists of three principal components: AI Ops, a unified infrastructure, and Zero Trust Network Security. These components work in tandem to deliver increased network reliability and security. With this cohesive approach, Aruba has managed to build an offering that can be implemented at scale and even by smaller businesses. In fact, ESP can be deployed incrementally over time according to client needs.
The AI Ops component of ESP helps identify, segment, and remediate network issues. With Aruba’s implementation of AI Ops, ESP is able to analyse and segment the network to isolate and protect company assets while allowing employees and guests to access the network with their own devices. It also proactively monitors the network for security risks such as infected devices or even probable attackers to prevent downtime. Even when downtime does occur, AI Ops allows Aruba’s ESP to automatically heal and repair the network, which will, in the best scenarios, negate it entirely.
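Conceptually, this kind of AI Ops loop boils down to detecting anomalous behaviour, quarantining the offender, and attempting an automated fix. The Python sketch below is purely illustrative and not Aruba’s actual implementation; the device fields and the traffic threshold are assumptions made for the example.

# Illustrative monitor-quarantine-heal loop in the spirit of AI Ops.
# NOT Aruba's implementation; fields and threshold are invented.

SUSPICIOUS_MBPS = 500  # assumed threshold for "infected device" traffic

def quarantine(device):
    # Move the device into an isolated network segment.
    device["segment"] = "quarantine"
    print(f"Isolated {device['mac']} after a traffic spike")

def heal(device):
    # Attempt an automated remediation before alerting an admin.
    print(f"Re-provisioning {device['mac']} with a clean policy")

def monitor(devices):
    for device in devices:
        if device["egress_mbps"] > SUSPICIOUS_MBPS:
            quarantine(device)
            heal(device)

if __name__ == "__main__":
    monitor([
        {"mac": "aa:bb:cc:dd:ee:01", "egress_mbps": 42, "segment": "staff"},
        {"mac": "aa:bb:cc:dd:ee:02", "egress_mbps": 900, "segment": "staff"},
    ])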
The Edge Services Platform is also a turnkey solution for corporations, allowing them to consolidate their networking on one unified platform. Running on Aruba’s already proven Aruba Central network management solution, ESP provides administrators with a cloud-native way to manage everything from switching to Wi-Fi and SD-WAN across their campus network. The single, unified interface also gives them a one-stop platform to identify and deal with potential networking issues which may arise. This, together with the analytics and insights from AI Ops, simplifies the process of identifying, isolating, and fixing network issues. What’s more, Aruba’s ESP is brand agnostic, allowing devices and services from other vendors to be seamlessly integrated into the network.
ESP adopts a Zero Trust approach to network security. However, it doesn’t just segment the network. Instead, it uses built-in, role-based access technology that enables Dynamic Segmentation. This simply means that the platform is able to identify and isolate devices dynamically as they enter the network. It uses an AI model that has been trained to identify certain parameters and automatically assign or isolate devices to help prevent potential security risks or breaches. This approach allows companies to be forward-looking while keeping their assets and data safe from intrusion, empowering remote work and BYOD policies which have been proven to increase productivity.
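To make the idea of role-based dynamic segmentation concrete, here is a minimal sketch in Python. The roles, device attributes, and classification rules are hypothetical stand-ins for the trained model Aruba describes, not its actual logic.

# Minimal sketch of role-based dynamic segmentation.
# Roles, attributes, and rules are hypothetical, not Aruba's model.

ROLE_SEGMENTS = {
    "corporate": "vlan-10",   # managed company assets
    "byod": "vlan-20",        # employee personal devices
    "guest": "vlan-30",       # visitors, internet-only access
    "unknown": "quarantine",  # anything that cannot be classified
}

def classify(device: dict) -> str:
    """Assign a role from device attributes (stand-in for the AI model)."""
    if device.get("managed"):
        return "corporate"
    if device.get("user_authenticated"):
        return "byod"
    if device.get("guest_portal"):
        return "guest"
    return "unknown"

def on_device_join(device: dict) -> str:
    """Called as each device enters the network; returns its segment."""
    role = classify(device)
    segment = ROLE_SEGMENTS[role]
    print(f"{device['mac']} -> role={role}, segment={segment}")
    return segment

on_device_join({"mac": "aa:bb:cc:dd:ee:03", "user_authenticated": True})
on_device_join({"mac": "aa:bb:cc:dd:ee:04"})  # unclassified -> quarantine

The key design point is that the segment is decided per device at join time rather than being statically tied to a port or SSID, which is what lets BYOD and guest devices coexist on the same physical network without touching protected assets.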
Aruba’s ESP heavily leverages telemetry and insights derived from the company’s many years of providing networking solutions and hardware to deliver an ever-evolving, rapidly adapting solution that can be deployed according to the needs and constraints of their customers. That said, ESP isn’t just reliant on Aruba’s data and telemetry; it evolves with the company and learns from the data and telemetry natively derived from the organisation and its policies. Aruba ESP will be available on current platforms, including Amazon Web Services.
Artificial Intelligence and Machine Learning (AI and ML) technologies have come a long way since their inception. Who would have thought that we would have working computer-based assistants that can do things like manage our schedules? Who would have thought that we could even use these assistants to manage our homes? These technologies can even be used to help diagnose cancer patients, something that was impossible without doctors even five years ago.
Amazon Web Services (AWS) is at the forefront of AI and ML technology. As one of the world’s largest technology innovators, they naturally have the advantage of being able to feed enough data into the technology to accelerate its development. Because they are also one of the largest technology firms the world has ever seen, they have a further advantage in placing AI and ML in places and applications we may never have imagined.
Language is one segment that has benefitted greatly from today’s technologies. Language, if you think about it, is also one of the most complex things that we humans can create and understand. Its context and interpretation can be affected by plenty of things too. Language is shaped by region, culture, community, heritage, and even lineage.
For example, there are differences between the French spoken in France and in Canada. There are even subtle differences between the French spoken in France and in Monaco, or even Switzerland. The most common language of all, English, has differences even in spelling and context across Britain, the Americas, and even Australia. The English spoken today is also a distinct form of the language that was spoken 50 years ago.
Language technology has progressed through years and years of feeding all this data into it. That has allowed us to communicate with global communities with more ease than peeling an orange. AWS has taken it a little further than that, though. They have gone beyond spoken or written languages. Through something called AWS DeepLens, they have developed translation algorithms for sign languages.
While that technology might sound as simple as gesture controls, it is plenty more than that. Yes, it is technically gesture control and recognition. But it is far larger and more complex than just a solution for end-point devices. The trick is to teach the underlying algorithm to recognise all the available sign words and even individual letters. The AWS DeepLens community projects so far have learnt to recognise most of the letters of the American Sign Language alphabet.
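In outline, such a project is an image-classification loop over camera frames. The sketch below uses OpenCV and a generic Keras model as stand-ins; the model file “asl_alphabet.h5”, its input size, and its label ordering are assumptions for illustration, not the actual DeepLens community project code.

# Illustrative ASL-alphabet recognition loop, in the spirit of the
# DeepLens community projects. Model file and labels are assumed.
import cv2
import numpy as np
from tensorflow import keras

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # A-Z
model = keras.models.load_model("asl_alphabet.h5")  # hypothetical model

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalise the frame to the model's assumed input shape.
    x = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    # Overlay the predicted letter on the live feed.
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()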
But the technology also goes beyond just recognising letters to understanding proper words, with the algorithm in Amazon Alexa. It is not just about communicating with your friends anymore. It is about using the platform as a home assistant tool, a customer service tool, a command centre, and a user-defined PC experience that mimics voice command and control for us. Instead of using voice, though, it’s all in the gestures.
The tool they use is called Amazon Transcribe. It works just like any transcription app you can find in the market. It currently supports up to 31 languages, with more being added over time. It even supports ASL as a component to create text from sign language.
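For the spoken-language side, Amazon Transcribe is driven through a straightforward API. Here is a minimal boto3 sketch, assuming an audio file already uploaded to an S3 bucket you control; the bucket, file name, and job name are placeholders.

# Minimal Amazon Transcribe job via boto3. Bucket, file, and job
# names are placeholders; AWS credentials are assumed to be
# configured in the environment.
import time
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="demo-job",                         # placeholder
    Media={"MediaFileUri": "s3://my-bucket/interview.mp3"},  # placeholder
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Poll until the job finishes, then print the transcript URL.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="demo-job")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(5)

if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])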
Simple communication is just the beginning for the technology, though. AI and ML still have a long way to go, even in the medical field. Just like the human race, though, the technology gets better every day. If you really think about it, the technology is not that new in the first place. We embarked on the journey of having machine-built and machine-defined assistants when we started developing computers to help us with simple and complex mathematical problems.
It is just that the simple mathematical problem solver has become something much bigger today. Who would have thought that we would let computers fly a commercial airplane? Who would have thought that cars could drive themselves today? Who would have thought that we could hire a private translator without spending any money or time? You just have to look into your pocket.