Tag Archives: Generative AI

Email-based Phishing Attacks Up 464% in H1 2023

Acronis Mid-Year Cyberthreats Report

Acronis has published its Mid-Year Cyberthreats Report, revealing concerning trends in the cybersecurity landscape. The report highlights a 464% increase in email-based phishing attacks in the first half of 2023 compared to the previous year.

Cybercriminals are increasingly using generative artificial intelligence (AI) systems like ChatGPT to craft malicious content and conduct cyberattacks. Acronis states that ransomware remains a dominant threat to small and medium-sized businesses, mainly because attackers are leveraging AI-created malware to evade detection by traditional antivirus software.

The cyberattack landscape is evolving

Acronis Mid-Year Cyberthreats Report: Top 5 Trends

In the report, Acronis also emphasizes the increasing sophistication of cyberattacks. These attacks utilize AI and existing ransomware code to penetrate victims’ systems and extract valuable data, making detection more challenging. Cybercriminals use public AI models to find source code vulnerabilities and develop attacks, including deepfakes.

Additionally, the study shows that phishing is the primary method cybercriminals use to steal login credentials. The use of large language model-based AI platforms has enabled cybercriminals to create, automate, and scale new attacks more efficiently. The report reveals a growing number of data stealers who exploit stolen credentials to gain unauthorized access to sensitive information.

Breaches demonstrate major security concerns


Acronis points out some major security concerns that contribute to successful breaches, including a lack of strong security solutions to detect zero-day vulnerabilities, delayed updates of vulnerable software, and inadequate protection for Linux servers. Moreover, some organizations fail to follow proper data backup protocols, which can lead to severe consequences during attacks.

Acronis encourages companies to take a proactive stance on cyber protection. A comprehensive cybersecurity posture requires a multi-layered solution that combines various security measures, including anti-malware, email security, vulnerability assessments, backup capabilities and more. The report also outlines steps that companies can take to strengthen their cyber protection.


If you are interested in reading the full Acronis Mid-Year Cyberthreats Report 2023, click here.

Meta Empowers Businesses to Leverage AI & Insights for Business Messaging

There’s no denying that businesses that fail to engage with their customers are doomed to stagnate and eventually die. In fact, Meta reports that over 1 billion people regularly engage with businesses on its platforms. This figure isn’t specific to any single industry either; it spans more than 55% of industries.

Meta Business Messaging 2
Source: Meta Malaysia

Meta continues to innovate on its platforms to allow businesses to leverage them to drive business objectives. Platforms like WhatsApp, Instagram and Facebook remain some of the most valuable touchpoints for businesses, as they bring a mix of familiarity and proximity to both sides. They also allow businesses to build a persona and personality that better relates to their target audiences. Recognising these factors, Meta has continued to innovate so that businesses can leverage its platforms and the latest technologies that complement them.

Leveraging AI to Ensure Platform Safety and Innovate to Empower Businesses

The latest to join the suite of tools is artificial intelligence. That’s not to say that Meta hasn’t used AI before; in fact, Facebook integrated AI into its news feed back in 2006. However, with the surge of interest in generative AI, it is quickly becoming apparent that we are indeed in the era of AI 2.0.

Meta Business Messaging 1
Source: Meta Malaysia

Using these new advances in AI technology, Meta has quickly adapted to address newer trends and drive better results with less data. This comes in the wake of a growing number of regions and countries tightening data privacy and security rules. The incorporation of machine learning algorithms and newer AI 2.0 advancements has led to 82% of hate speech on platforms like Instagram and Facebook being removed through automated means.

Meta is also implementing new algorithms designed to use less data while delivering comparable or better results for businesses. To date, these algorithms have delivered a 20% increase in conversions for businesses leveraging them. With these algorithms working in the background, it falls to businesses to leverage them to drive business outcomes.

Business Messaging & Continuing the Customer Journey on Meta Platforms

As AI continues to become a deeply integrated factor for business continuity, we have to know and use the tools – paid or otherwise – that will not only allow for better outcomes but also help create a better customer experience.

Meta Business Messaging 3
Source: Meta Malaysia

Meta’s Business Suite and Ads Manager are continually being updated with tools that integrate AI technology to drive better business outcomes. One such tool is Meta’s Creative+ option which appears when you post content to your page. This feature allows you to test up to 4 different creatives to determine which delivers the best results.

Using features like this, businesses are able to extend their reach while keeping costs down. It also allows businesses to leverage the familiarity of the platforms to drive customer loyalty through business messaging. This comes in addition to AI-assisted product discovery and, more broadly, AI-determined audiences for better conversions. AI can also leverage behavioural data to optimise touchpoints based on customer behaviour.

This data can also be used to create chatbots that allow businesses to interact with customers more effectively. These chatbots can be built to suit the unique needs of businesses while still allowing for the flexibility for humans to jump in at any time.

One of the most important things to pay attention to is the trends that are emerging and continually shifting. These trends play a significant role in determining the combination of tools that will fit business needs. More importantly, it will also help determine the best approach for success on Meta’s platforms.


Meta shared a case study in which McDonald’s Malaysia leveraged the growing number of users spending more time consuming video content on Facebook and Instagram as the driving force behind its recruitment campaign. Using Reels on Facebook and Instagram, the company was able to communicate the experience of being an employee at a McDonald’s outlet. Of course, the Reels naturally embellished the experience with some fictional elements to generate interest and convey the business’s policies. This cornerstone content allowed McDonald’s to communicate directly with its target audience: Gen Z.

This falls in line with Meta’s own data, which shows that more than 50% of time spent on Facebook and Instagram goes to consuming video content, including long-form videos, Reels and even Stories. In fact, Reels may be the best touchpoint of all, with over 200 billion plays per day.

Meta’s Just Getting Started with AI 2.0 and Businesses Need to Start Leveraging It Now

This is only the tip of the iceberg of how AI 2.0 will impact our world when it comes to creating consumer journeys, continuing business messaging and even creating content. Meta has already announced AI efforts like LLaMA, which will no doubt factor into new tools coming to its platforms in the future.

This will also mean businesses tackling scams head-on, hand in hand with regulators and companies like Meta. Meta is already working on identity verification, which will become more widely available to users as the year progresses. The company has yet to announce the same verification measures for businesses, but we have it on good authority that they will be coming soon.

NVIDIA & Snowflake Storm the AI Scene With Custom Generative AI

NVIDIA & Snowflake partner up to allow companies to deploy custom Generative AI applications.

NVIDIA and Snowflake have joined forces to enable businesses of all sizes to build custom generative AI applications using their proprietary data. This collaboration allows companies to combine NVIDIA’s AI technology with Snowflake’s Data Cloud platform to make better decisions and develop their own generative AI models.

NVIDIA offers the NeMo platform that enables companies to create, customize, and deploy large language models (LLMs) for AI applications like chatbots and intelligent search. With NeMo Guardrails software, developers can ensure that their applications align with business-specific requirements.
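For a sense of how such guardrails are applied in practice, here is a minimal sketch using the open-source nemoguardrails Python package. The model choice and the example rail below are illustrative assumptions on our part, not details from the NVIDIA-Snowflake announcement, and the exact API may differ between package versions.

```python
# Minimal NeMo Guardrails sketch: constrain a chatbot to business-specific topics.
# Assumes the nemoguardrails package is installed and OPENAI_API_KEY is set;
# the model name and the rail below are illustrative, not a production setup.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask off topic
  "What do you think about politics?"

define bot refuse off topic
  "I can only help with questions about our products and services."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

# Build the rails configuration from the inline YAML and Colang definitions above.
config = RailsConfig.from_content(yaml_content=yaml_content, colang_content=colang_content)
rails = LLMRails(config)

# Off-topic questions are steered to the refusal defined in the flow.
response = rails.generate(messages=[{"role": "user", "content": "What do you think about politics?"}])
print(response["content"])
```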

On the other hand, Snowflake provides a secure and efficient Data Cloud platform for businesses to store, manage, and analyze large amounts of data both internally and externally. This service also caters to various verticals like advertising, media and entertainment, financial services, retail, and more.

The collaboration between NVIDIA and Snowflake will allow companies to gather their proprietary data and customize generative AI applications for business-specific use cases. It also enables companies to maintain data governance without moving their data to a different platform.

Industries such as healthcare, retail, and financial services can benefit from this collaboration. For example, healthcare insurance models can provide detailed information about covered procedures. Financial services models can offer personalized loan options based on individual situations and needs.

Alibaba Cloud Announces New AI Image Generation Model, Tongyi Wanxiang

Alibaba Tongyi Wanxiang

Alibaba Cloud has introduced its latest AI image generation model, Tongyi Wanxiang (‘Wanxiang’ means ‘tens of thousands of images’). This model is designed to help businesses improve their creativity by generating high-quality images across different art styles.

Text-to-image generation

The company states that Tongyi Wanxiang is able to generate detailed images in response to text prompts in both Chinese and English, across styles that include watercolours, oil paintings, sketches, animations and more. It can also transform existing images into new ones with similar styles and apply style transfers to create visually appealing compositions.

Tongyi Wanxiang is powered by Alibaba Cloud’s advanced technologies and leverages multilingual materials for enhanced training. This AI image generation model optimizes the high-resolution diffusion process to strike a balance between composition accuracy, detail sharpness, and clean backgrounds.

Alibaba Cloud developed it using Composer, the company’s proprietary large model that offers greater control over the final image output, including spatial layout and colour palette.

ModelScopeGPT Launched for Sophisticated AI Tasks

The company has also launched ModelScopeGPT, a versatile framework that utilizes Large Language Models (LLMs) to accomplish complex AI tasks across language, vision, and speech domains. ModelScopeGPT connects with an extensive array of domain-specific expert models, allowing businesses and developers to access and execute the best-suited models for sophisticated AI tasks. The platform is available for free and offers a rich Model-as-a-Service (MaaS) ecosystem.
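To give a sense of how the Model-as-a-Service pattern works, here is a rough sketch using ModelScope’s open-source Python SDK, following its public quick-start examples. The task and model ID are illustrative assumptions, and this is not the ModelScopeGPT agent itself; it simply shows how a hosted expert model is pulled from the hub and run.

```python
# A hedged sketch of ModelScope's Model-as-a-Service access pattern:
# pipeline() downloads a hosted model for a given task and runs it locally.
# The model ID below follows ModelScope's public quick-start examples and is
# shown as an assumption; browse modelscope.cn for the models actually available.
from modelscope.pipelines import pipeline

word_segmentation = pipeline(
    "word-segmentation",
    model="damo/nlp_structbert_word-segmentation_chinese-base",
)

# Segment a short Chinese sentence ("The weather is nice today, good for an outing").
result = word_segmentation("今天天气不错，适合出去游玩")
print(result)
```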

Copilot-ing the Future of Work with Generative AI

“Artificial Intelligence (AI) is revolutionising the way we work.” This phrase is undoubtedly something we’ve become very familiar with over the past few years. However, we’ve yet to see the impact of AI outside manufacturing and data science – that is, until now. With generative AI taking centre stage thanks to services like OpenAI’s ChatGPT, that phrase couldn’t be more relevant. AI is taking the leap from automation to contextual intelligence, which will benefit more people across more industries.

bionic hand and human hand finger pointing
Photo by cottonbro studio on Pexels.com

In the time since the pandemic, we’ve seen a revolution in the digitization of work. A large portion of workers – like ourselves – are still finding themselves working remotely to be more productive and reduce time wasted in commutes. In fact, Microsoft’s 2021 and 2022 Work Trend Index drew sharp focus on the subject. This year, the conversation is turning towards Generative AI and its role in leapfrogging work to the next level.

A Digital Leap with A Heavy Digital Debt

Speaking of the digitization of work, the recent digital leap – while a long time coming – has resulted in workers accumulating digital debt. What exactly is this? It’s that backlog of emails, that information dump, the work chats and even those meetings and their minutes that continue to pile up even as we work through them. While the digital leap we just experienced has been amazing for work and interpersonal communication, we’re finding it harder to cope with the sheer volume of information and communication we generate.

people on a video call
Photo by Anna Shvets on Pexels.com

In fact, Microsoft’s Work Trend Index found that two full work days are spent simply dealing with emails and inefficient meetings. This is exacerbated by the fact that we’re in three times more meetings at work than before the pandemic. Leadership in organisations has also taken notice, observing that workers are spending too much time with their noses to the grindstone and not enough time innovating. This has created an increase in productivity but a lag in results. 77% of Malaysian respondents in Microsoft’s survey noted that they don’t have enough time and energy to get their work done. This isn’t surprising given that we’re dealing with a continually growing digital debt.



“As work evolves with AI, leaders and employees alike are looking at how technology can help them be more productive in their workplace. With AI, there is now an opportunity for us to reimagine the way we work and collaborate in the workplace of the future.”

K Raman, Managing Director of Microsoft Malaysia


AI and, in particular, generative AI like ChatGPT and Microsoft’s Copilot are part of the solution to this digital debt. They will help workers and leaders discern signals from the immense volume of noise. Relevant information can be distilled from internal data and the open web in seconds, saving hours of work. What’s more, AI like Copilot will help create more effective meetings and, eventually, a leap in innovation as menial tasks are offloaded to these tools. In tandem with this, leaders and employees may shift to a more asynchronous form of communication, allowing for more effective communication and meetings overall.

AI Isn’t Here to Replace, It’s Here to Augment

With the increased adoption of AI at work, many of us are feeling the pinch of possibly being replaced by a soulless tool. 62% of the respondents to Microsoft’s Work Trend Index in Malaysia share this concern. While the concern is understandable, what we expect to see in the coming years is the integration of AI into our work to lessen repetitive tasks. Perhaps this will mean that some of our job roles will change.


We’re already seeing a large portion of workers and leaders willing to offload and delegate work to AI to lessen workloads; a whopping 84% of respondents indicated as much. These tasks can be administrative, analytical or even creative. While it is concerning that even creative tasks can be offloaded to AI, the evolution of work this spurs will, hopefully, lead to more innovative work and boost productivity. This is where most Malaysian managers find the most value in AI.

Introducing Microsoft 365 Copilot | Your Copilot for Work

What will develop instead is an AI-employee alliance, in which work is completed through a complementary pairing of AI insight and employee ingenuity. This will require employees to have an aptitude for using AI. While it’s still early days when it comes to discerning AI-specific skills in the workforce, Microsoft has identified some key competencies that will be crucial for workers, and 90% of leaders already anticipate this need.

A Need for AI Skills to Empower a Better Copilot

That said, employees will need to develop core competencies that empower them to leverage AI as an integral tool for productivity. While digital skills remain a pain point for employers, 76% of Malaysians already feel they are ill-equipped when it comes to AI skills.


The need to use and familiarise yourself with AI tools like ChatGPT cannot be overemphasized at this point. It’s no longer a question of “if” but one of “when” generative AI will make its way into your workflow. Getting to know how to prompt Generative AI tools to get the right outcome will be one of the key competencies of the AI revolution. However, it will also be crucial to understand that these AI tools are simply that – tools to get the work done. They are copilots, not autopilots when it comes to getting work done.



“There is a need for a skilled workforce to reap the benefits of AI-powered technology and solutions. Human-AI collaboration is going to be critical as we shift from AI on autopilot to AI as our copilot. The most pressing opportunity and responsibility for every leader is to understand how to leverage AI to remove the drudgery of work, unleash creativity, and build AI aptitude,”

K Raman, Managing Director of Microsoft Malaysia


Industry leaders like Microsoft are already incorporating AI such as GPT and DALL-E into their offerings. With the recent announcement of Windows getting Bing’s GPT-enabled assistant, it comes as no surprise that Microsoft is also integrating its aptly named Copilot into more products. In fact, these integrations are already being tested by some 600 enterprise customers, including the likes of Chevron and Goodyear. Products like Microsoft 365 and Microsoft Viva will benefit most from this integration.

Next generation AI in Power Platform is changing how you develop low-code solutions

Copilot is already incorporated into products like Outlook, OneNote, Viva Learning, Whiteboard and PowerPoint. In Whiteboard, Copilot will enable more creative and effective meetings and brainstorms on Microsoft Teams: you will be able to prompt Copilot to generate and organise ideas, create designs and even summarise Whiteboard content. In OneNote, Copilot can use prompts to draft plans, generate ideas, create lists and organise information for easier reference. In Outlook, natural language will empower better writing through tips and suggestions, while in Viva Learning it will customise learning journeys and design learning paths for desired outcomes. PowerPoint will be getting a DALL-E infusion that allows images to be customised to complete presentations.

As is already evident, the AI revolution is picking up steam. AI is quickly going to spur an evolution of work which will put its role in automation to shame. It will find a space nestled in the day-to-day workings of many industries. Workers like us will need to adopt, adapt and integrate generative AI in a way and at a scale that has not been seen before to accomplish more with less hassle.

[Google I/O 2023] Google Bard – What is That?

After Google I/O 2023 last week, you might have noticed your Android smartphone pushing a notification to you: a prompt to try Google’s updated Bard. Most of you on Google’s email platform (Gmail) might also have received an email asking you to try Bard today. If you follow AI (artificial intelligence) news, you might already be familiar with Google’s Bard alongside OpenAI’s ChatGPT. To everyone else, it might sound completely foreign.

In simple terms, Google Bard is Google’s version of ChatGPT. While ChatGPT is developed by OpenAI, Bard is completely Google’s. Keep in mind, though, that ChatGPT and Bard are two separate platforms altogether before jumping to the conclusion that they are the same thing. They are both categorised as generative AI, but they are different from one another.

Unlike ChatGPT, which has existed for some time and is in its fourth iteration, Google Bard is fresh out of the oven; two months out of the oven, to be precise. Like ChatGPT, Google Bard was launched as an experiment. Also like ChatGPT, the technology behind Google Bard is not exactly new.

What is Google Bard?


As mentioned, Google Bard is a generative and creative AI from Google. Without overcomplicating the explanation, Google’s FAQ says that Bard is technically based on LaMDA (Language Model for Dialogue Applications), Google’s very own language model built for conversational purposes. When we say conversational, we do not mean that it will be exactly like a regular conversation with a human being, but LaMDA aims to get close.

To be fair, Google’s conversational AI is not something you haven’t seen before: you see it in Google Assistant whenever you call out “Hey, Google” or “Okay, Google”. You can even have Google Assistant book a restaurant for you, making the call and completing the reservation instead of you calling the restaurant yourself. In a demo a few years ago, Google’s voice assistant sounded so natural that the person on the other end of the line could not even tell they were speaking to an artificial caller. This shows that conversational AI works and has a place in the world, and our many uses of Google Assistant, even with Google Nest systems, are proof enough that it has plenty of applications today.

Bard is not just a conversationalist, though. It is more than that: a generative AI of sorts. It still has its roots in LaMDA, but it has grown well beyond them. It is made as a collaborative tool; you can use it to generate ideas, tabulate and make sense of data, plan things, design tools and processes, collate your calendars, and even treat it as a learning tool.

According to Google, Bard is made to create original content at the request and behest of individual users, meaning that its results can differ from one person to another. Because it is Google, any request or question you pose to Bard might prompt it to look into hundreds or thousands of sources, draw conclusions, and present results in a way that does not infringe copyright or amount to plagiarism. In cases where it does draw on content from another source, Bard will acknowledge and cite its sources. Google Bard is not built to write your college essay, though; it is built to be a collaborator that manages your work and your life, making things more seamless than just Googling them. There is, however, a ‘Google It’ button for you to make full use of Google’s search engine.

It is not a 100% solution for your own research and use cases, though. Google has stressed that Bard is an experiment: an opportunity for its AI engines to learn at an accelerated pace with public input and use. Google Bard is meant to be iterated upon, which also means that its current form will not be its final one. Google also notes that Bard, in its current form, will not be 100% accurate at all times; hence the ‘Google It’ button. While it is free to use, Google says that Bard is not meant to be used commercially or for advertising purposes at this time.

Why Bard?


The entire existence of Bard could be read as a sharp response to OpenAI’s ChatGPT. The success of that platform has, in a sense, forced Google to quickly introduce its own AI tool for public use. If Google is to be believed, it could offer the most powerful AI tool for the masses.

At Google I/O 2023, Google officially embraced Bard and announced that it has moved Bard to PaLM 2, an improved language model that gives Bard more capabilities than the purely conversational LaMDA model. PaLM 2 now gives Bard the ability to code and program. It also allows Bard to solve more complex mathematical problems and work through more complex reasoning, which should help it make better decisions over time.

As of Google I/O 2023, Google has opened the Bard experiment to more than 180 countries, and it is now available in Japanese and Korean. Google plans to open the experiment to more regions and make Bard available in about 40 languages. On top of more languages and regions, where the older Bard was mostly conversational via text, the improvements announced at Google I/O 2023 add some visual flavour to your conversations. Google has integrated Google Lens into Bard, allowing you to scan photos of things at home and let Bard come up with whatever captions you might want. You can even add photo references to a Bard-generated itinerary when you travel.

But it is not just surface updates for Google Bard. At Google I/O 2023, Google announced that Bard is no longer a tool isolated from other systems. Bard now has an “export” button for collaboration purposes, such as exporting and running code in Python. You can copy email responses directly into Gmail or Google Docs if you want. If you want more out of Bard, you can even expect Adobe Firefly integration in the future for even more powerful generative tools, such as complete poster designs based on Google’s and Adobe’s combined algorithms. Google has also announced that it is working with more partners like Kayak, OpenTable, ZipRecruiter, Instacart, Wolfram and Khan Academy to integrate Bard into their services and products.

Where OpenAI allows you to plug its API in anywhere and get it working with minor tweaks, Google is not looking to do just that. Google is offering deep integration with its partners to create even more, becoming an even more powerful tool in your toolkit for the future. It looks to open up more opportunities and applications for the average user through deeper and more curated collaborations with partnering brands. While that may not necessarily be the best approach for everyone, it is a way forward for more integrated services and solutions that serve individuals and businesses better. It even allows partnering companies to understand their users and customers better in some cases.

Adobe Firefly, the Next-Generation AI Made for Creative Use

AI (Artificial Intelligence)-generated graphics are not a new thing. These days you have services like OpenArt and Hotpot, where you can simply type in keywords for the image you want and let the engine generate art for your use. Even before AI-generated graphics, though, the implementation of AI within the creative industry was nothing new. NVIDIA has used its own AI engine to write an entire symphony and even to create 3D environments using its ray-tracing engines. Adobe, too, has something it calls Sensei. The AI tool is implemented across its creative suite to understand and recognise objects better, fill in details more naturally where needed, and even edit videos, images or text quickly and efficiently. Now, Adobe has Firefly.

Firefly is not a new AI system separate from Adobe’s Sensei. Firefly is part of the larger Adobe Sensei generative AI, alongside technologies like Neural Filters, Content-Aware Fill, Attribution AI and Liquid Mode implemented across several Adobe platforms. Unlike those platform-specific implementations, though, Adobe is looking to put Firefly to work across a number of platforms spanning its Creative Cloud, Document Cloud, Experience Cloud and even Adobe Express.

So, what is Adobe Firefly? We hear you ask. It is technically Adobe’s take on what a creative generative AI should be. Adobe is not limiting Firefly to just image generation, modification and correction. It is designed to allow any sort of content creator to create even more without needing to spend hundreds of hours learning a new skill. All they need to do is adopt Firefly into their workflow, and they will be able to produce content they have never been able to create before, be it images, audio, vectors, text, videos or even 3D materials. You can get different content every time with Adobe Firefly too; the possibilities, according to Adobe, are endless.

What makes Adobe’s Firefly so powerful is the entirety of Adobe’s experience and database behind it. Adobe Stock’s images and assets alone are a huge library for the AI to draw on. The implementation can also make use of openly licensed assets and public-domain content when generating its output. The tool, in this case, is meant to prevent IP infringement and help you avoid plenty of future litigation.

Adobe Firefly Cover
Source: Adobe

As Firefly launches in its beta state, it will only be available as an image and text generation tool for Adobe Express, Adobe Experience Manager, Adobe Photoshop and Adobe Illustrator. Adobe plans to bring Firefly to the rest of its platforms where relevant in the future. The company is also pushing for more open standards in asset verification, which will eventually include proper categorisation and tagging of AI-generated content. Adobe also plans to make the Firefly ecosystem a more open one, with APIs that allow users and customers to integrate the tool into their existing workflows. For more information on Adobe’s latest generative AI, you can visit its website.

Be A Maestro with AWS DeepComposer

You would think that when it comes to composing music, you would need a really good ear and knowledge of the arts. Not so much with Amazon Web Services’ new AI (Artificial Intelligence) service focused on creating musical pieces with a keyboard! DeepComposer is the latest in a series of machine learning-focused services that AWS has introduced since its announcement of DeepLens at re:Invent 2017.

The new music-focused AI comes with a 32-key, two-octave keyboard that allows developers to familiarise themselves with generative AI. The simple application of generative AI in DeepComposer takes short riffs and generates full compositions.

A brief diagram explaining how AWS’s DeepComposer works. (Source: AWS)

The DeepComposer generative AI is able to layer and generate songs based on pre-trained models or even user-defined models. The pre-trained models generate music based on algorithms developed by training the AI on large musical data sets. User-defined models give users better control of the generative AI: users can define multiple parameters, including the architecture and the discriminator. The latter allows the AI to distinguish between genres and determine the overall composition.

Announcing AWS DeepComposer with Dr. Matt Wood, feat. Jonathan Coulton

Being a machine learning model, DeepComposer continually learns to identify music types. The AI will improve with time as it learns and generates more music based on the models and riffs. It will also be able to generate music that mimics a defined model. As Amazon’s release puts it, “you have to train as a counterfeiting expert in order to become a great counterfeiter”.
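That analogy comes from the generative adversarial network (GAN) approach DeepComposer launched with: a generator learns to produce convincing music while a discriminator learns to tell real data from generated data. The PyTorch sketch below is a generic toy illustration of that generator-discriminator loop, not AWS’s actual DeepComposer model; all dimensions and data here are made up.

```python
# Toy GAN training loop (conceptual only, not DeepComposer's real architecture).
import torch
import torch.nn as nn

NOISE_DIM, SEQ_DIM = 16, 32   # illustrative sizes, not DeepComposer's

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, SEQ_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(SEQ_DIM, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_batch = torch.randn(8, SEQ_DIM)  # stand-in for a batch of real musical features

for step in range(200):
    # 1. Train the discriminator: score real sequences as 1, generated ones as 0.
    fake_batch = generator(torch.randn(8, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(8, 1)) + \
             bce(discriminator(fake_batch), torch.zeros(8, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator (the "counterfeiter") to fool the discriminator.
    fake_batch = generator(torch.randn(8, NOISE_DIM))
    g_loss = bce(discriminator(fake_batch), torch.ones(8, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In DeepComposer, the same tug-of-war plays out over musical data rather than random vectors, which is why more training data and time tend to yield more convincing compositions.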

DeepComposer isn’t tied only to the physical keyboard; it also has a digital keyboard interface that allows users to compose on the go. With this approach, AWS hopes to make generative AI models more approachable for those looking to explore their applications.

The new service is currently available for preview at the DeepComposer section of the AWS website, which also hosts an FAQ addressing some of the questions new users may have.