Category Archives: Business

[Google I/O 2023] Google Bard – What is That?

After Google I/O 2023 last week, you might have noticed your Android smartphone pushing a notification to you, prompting you to try Google’s updated Bard. Most of you on Google’s email platform (Gmail) might also have received an email asking you to try Bard today. If you follow AI (artificial intelligence) news, you might already be familiar with Google’s Bard alongside OpenAI’s ChatGPT. To everyone else, it might sound like a foreign concept.

In simple terms, Google Bard is really Google’s version of ChatGPT. While ChatGPT is developed by OpenAI, Bard is completely Google’s. You want to keep in mind, though, that ChatGPT and Bard are two separate platforms altogether before jumping to the conclusion that they are the same thing. They are both categorised as generative AI, but they are quite different from one another.

Unlike ChatGPT, which has existed for some time and is in its fourth iteration, Google Bard is fresh out of the oven; two months out of the oven, to be fair. Like ChatGPT, Google Bard was launched as an experiment. Also like ChatGPT, the technology behind Google Bard is not exactly new.

What is Google Bard?

Source: Google

As mentioned, Google Bard is a generative and creative AI by Google. Rather than overcomplicating the explanation, Google’s FAQ says that Bard is technically based on LaMDA (Language Model for Dialogue Applications), Google’s very own language model written for conversational purposes. When we say conversational, we do not mean that it will be exactly like a regular conversation with a human being, but LaMDA aims to get close.

To be fair, Google’s conversational AI is not something you have not seen before; you see it in Google Assistant whenever you call out “Hey, Google” or “Okay, Google”. You can even have Google’s clever Assistant get you a booking at a restaurant by making the call and getting the booking done, instead of you calling the restaurant yourself. In their demo a few years ago, Google’s voice assistant sounded so natural that the person on the other end of the line could not even tell that they were speaking to an artificial person. This proves that LaMDA works and has a place in the world. Our many uses of Google Assistant, even with Google Nest systems, are proof enough that conversational AI has many uses in the current world.

Bard is not just a conversationalist though. It is more than that, a generative AI of sorts. It still has its roots in LaMDA, but it is a lot more than that now. It is made as a collaborative tool: you can use it to generate ideas, tabulate and make sense of data, plan things, design tools and processes, collate your calendars, and even learn.

According to Google, Bard is made to create original content at the request and behest of individual users, meaning results can differ from one person to another. Because it is Google, any request or question you pose to Bard might prompt it to look into hundreds or thousands of sources, draw conclusions, and present results in a way that does not infringe copyright or plagiarism laws. In cases where it does take content from another source, Bard will acknowledge and cite its sources. Google Bard is not built to write your college essay though; it is built to be a collaborator that manages your work and your life, making things more seamless than just Googling them. They do have a ‘Google It’ button for you to make full use of Google’s search engine though.

It is not a 100% solution for your own research and use case though. Google has mentioned and stressed that Google Bard is an experiment. It is an opportunity for their AI engines to learn even more at an accelerated pace with public input and use. Google Bard is meant to be iterated on, which also means that the current form of Google Bard will not be the final one. They also mention that Google Bard, in its current form, will not be 100% accurate at all times; hence, the ‘Google It’ button on Bard. While it is free to use, Google also says that Bard is not meant to be used commercially or for advertising purposes at this time.

Why Bard?

Source: Google

The entire existence of Bard could be read as a sharp response to OpenAI’s ChatGPT. The success of that AI platform has more or less forced Google to quickly introduce their own AI tool to the public. If they are to be believed, Google could offer the most powerful AI tool for the masses.

At the recent Google I/O 2023, Google officially embraced Bard and announced that they have moved it to PaLM 2, an improved language model that expands Bard’s capabilities beyond the purely conversational focus of the LaMDA model. PaLM 2 gives Bard the ability to code and program. It also allows Bard to solve even more complex mathematical problems and work through more complex reasoning, which should let Bard make better decisions over time.

As of Google I/O 2023, Google has opened the Bard experiment to more than 180 countries, and it is now available in Japanese and Korean. As things go, Google is planning to open the experiment to more regions and make Bard available in about 40 languages. On top of more languages and regions, where the older Google Bard was mostly just conversational via text, the new improvements at Google I/O 2023 add some visual flavour to your conversations with Bard. They have integrated Google Lens into Bard, allowing you to scan photos of your things at home and let Bard come up with whatever captions you might want. You can even add photo references to your Google Bard generated itinerary when you travel.

But it is not just surface updates for Google Bard. At Google I/O 2023, they announced that Bard is no longer a tool isolated from other systems. Google is giving Bard an “export” button for collaboration purposes, in the form of exporting and running code in Python. You can directly copy email responses into Gmail or Google Docs, if you want. If you want more out of Bard, you can even expect Adobe Firefly integration in the coming future for even more powerful generative tools, like complete poster designs based on Google’s and Adobe’s combined algorithms. They have also announced that they are working with more partners like Kayak, OpenTable, ZipRecruiter, Instacart, Wolfram and Khan Academy to get Google Bard integrated into their services and products.

In this case, where OpenAI is allowing you to plug its API in anywhere and get it working with minor tweaks, Google is not looking to just do that. Google is offering deep integration with their partners to create even more, to become an even more powerful tool in your toolkit for the future. They look to open up even more opportunities and applications for the average user with deeper and more curated collaborations with partnering brands. While that may not necessarily be the best approach for some, it is a way forward for more integrated services and solutions that serve individuals and businesses better. It even allows partnering companies to understand their users and customers better in some cases.

Apple and Google Agree on Something Again – AirTags Need Better Standards for Improved Privacy

Apple and Google hardly agree on many things when it comes to their consumer offerings. When we say that they hardly agree, of course we do not mean that they are always at each other’s throats on every single issue. They offer two wildly different products that arrive at the same solution most of the time. Take Android and iOS for example, both highly successful smartphone platforms that offer an app ecosystem, smart integrations, and even machine learning based digital assistants. Both platforms look vastly different and function even more differently in the hands of consumers though. There is a common denominator for both Google’s and Apple’s offerings – privacy and security.

In this case though, while Apple and Google share the same concern over privacy and security, their approaches can be quite different. Android’s privacy and security layer has a slightly different depth compared to Apple’s. The Apple App Store and Google Play Store ensure that app developers comply with certain practices and regulations to stay listed, but both Apple and Google offer slightly different guidelines for their app marketplaces. Still, if developers want their app listed on both stores, their apps naturally must comply with both Apple’s and Google’s guidelines. Not so for location-tracking devices so far though.

Apple introduced a clever Bluetooth-based location-tracking tool we now know as AirTags. While AirTags were intended as a sort of keychain tool to keep track of your things at home, or as a reminder not to leave things in your favourite café, the reality is a little different. A few weeks after AirTags were introduced, there were reports of the tiny pucks being used for stalking. To be fair, while AirTags were the centre of attention in many of these cases, Apple’s solution was not the only one being used in privacy invasion cases. Solutions from manufacturers like Samsung got involved shortly after they were introduced. Thanks to the sophistication of the AirTags, though, offenders preferred Apple’s solution.

Over the years, Apple has introduced new measures as stopgap solutions to ensure that users are not tracked by other individuals without their consent. One of those solutions is a notification, via the Find My app on iOS, when an AirTag that your iPhone does not recognize stays in your proximity. But this is only a solution for AirTags; what about the others in the field? This is where Google comes in.

Google does not make their own Bluetooth-based location-tracking tool, but their partnering manufacturers do. Players like Samsung, Tile, and a few others make tracking devices that can easily pair with both Android and iOS devices. That also means there needs to be a standardized specification to ensure that all these trackers are as safe as one another to use. Yes, that is a beneficial thing for us, the users.

Google and Apple’s partnership on standardizing Bluetooth-based location-tracking tools is a big step forward for this segment of the industry. In one way, it allows other players in the industry to catch up to what Apple has done with their AirTags. It ensures that industry players comply with a certain standard in making these little tracking devices, meaning there will be standardized parts produced by one or more manufacturers, creating economies of scale and making the technology a lot more accessible. Standardized parts not only ensure that the industry can be policed to higher standards, but they also offer plenty more compatibility for users. It could allow Android users to use an AirTag to track their keys, for example, and vice versa.
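The kind of unwanted-tracker alert described earlier boils down to a simple heuristic: if an unfamiliar tracker keeps showing up near you across multiple Bluetooth scans within a time window, warn the user. Here is a minimal illustrative sketch of that idea in Python; the class name, thresholds, and logic are our own assumptions for demonstration, not anything from Apple’s or Google’s actual draft specification:

```python
from dataclasses import dataclass, field

@dataclass
class TrackerAlertHeuristic:
    """Toy sketch: flag an unfamiliar Bluetooth tracker that keeps
    appearing near the user. Thresholds are made up for illustration."""
    min_sightings: int = 3           # scans it must appear in before alerting
    window_seconds: int = 3600       # rolling window for those sightings
    known_ids: set = field(default_factory=set)    # the user's own trackers
    sightings: dict = field(default_factory=dict)  # tracker_id -> timestamps

    def observe(self, tracker_id: str, timestamp: float) -> bool:
        """Record one scan result; return True if an alert should fire."""
        if tracker_id in self.known_ids:
            return False  # the user's own tracker is never suspicious
        times = self.sightings.setdefault(tracker_id, [])
        times.append(timestamp)
        # keep only sightings that fall inside the rolling window
        recent = [t for t in times if timestamp - t <= self.window_seconds]
        self.sightings[tracker_id] = recent
        return len(recent) >= self.min_sightings
```

A standardized specification would pin down exactly these kinds of parameters (how long, how close, how often) so that every manufacturer’s tracker triggers the same alerts on both platforms.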

For now, standardized specifications for these trackers are not yet a reality. Google and Apple have submitted their draft proposal to the relevant authorities, which means you can only expect to see some sort of result in the coming few months. Samsung, Tile, Chipolo, eufy Security, and Pebblebee have expressed their support for the proposal, which is a good sign. Google and Apple expect to have some sort of production guideline and implementation by the end of 2023, with support for both iOS and Android in the same timeline.

Google Meet Now Supports Full HD 1080p Video Calls

Google has just updated the Google Meet app, and it is now better than ever. You can now make video calls in Full HD instead of just 720p HD resolution. Those Full HD, 1440p, and 4K webcams for your video conferences are now starting to make sense. There are some caveats though.

The Full HD capability update for Google Meet applies not just to the app on your smartphone or your PC; it also works when you access Google Meet via the web. By default, it is set to ‘off’, so you do need to turn it on to activate the feature for your calls. You also need a Full HD or higher resolution camera connected to your PC or device for it to work. Unfortunately, you can also only use it in a one-on-one call, meaning your group calls will still be at 720p at the maximum.

With Full HD 1080p resolution though, bandwidth requirements will be higher than ever before for Google Meet calls. Where bandwidth is an issue, Google Meet will default to 720p resolution at the maximum. Of course, if you feel like your feed is choppy, you can turn off the Full HD 1080p option yourself. Google Meet will also inform you about the feature before it puts you into a supported call.
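The behaviour described above amounts to capping the send resolution by the feature toggle, the camera, the call type, and available bandwidth. A toy sketch of that decision logic follows; the function name and the bandwidth threshold are illustrative assumptions on our part, not Google’s actual implementation:

```python
def select_resolution(camera_max_p: int, bandwidth_kbps: int,
                      one_on_one: bool, fhd_enabled: bool) -> int:
    """Pick a send resolution (in lines, e.g. 1080) for a Meet-style call.

    Illustrative only: 1080p requires the feature toggle to be on, a
    1080p-capable camera, a one-on-one call, and enough bandwidth;
    anything else falls back to 720p (or the camera's maximum, if lower).
    """
    BANDWIDTH_FOR_1080P_KBPS = 3000  # assumed threshold, not Google's figure
    if (fhd_enabled and one_on_one and camera_max_p >= 1080
            and bandwidth_kbps >= BANDWIDTH_FOR_1080P_KBPS):
        return 1080
    return min(camera_max_p, 720)
```

Every condition in the sketch maps to one of the caveats in the article: the ‘off’ default, the camera requirement, the one-on-one limit, and the bandwidth fallback.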

Google Meet FHD
Source: Google

Here is the thing though: the function is not available to everyone using Google Meet. If you are using Google Meet for free for personal video calls, you are out of luck for now. The feature will only be available to users of Google Workspace Business Standard, Business Plus, Enterprise Starter, Enterprise Standard, Enterprise Plus, Teaching and Learning Upgrade, Education Plus, Enterprise Essentials, and Frontline. It is also available to Google One subscribers with 2TB or more storage space on supported devices. We are hoping that Google will make the Full HD 1080p feature available to more users in the future. For now, if you are not a user of any of Google’s listed services, you are out of luck. For more information on the latest Google Meet update, you can visit their website.

AMD Launches Most Powerful Radeon PRO W7000 Series GPUs for the Professionals

NVIDIA has been blazing the trail when it comes to GPU technology. To be fair, they are still the leaders in consumer-level GPUs, with the very powerful NVIDIA GeForce RTX 4090 leading the charge. In the professional workspace though, the story is a little different.

In the professional space, GPU requirements are a little different. The GPUs are not rendering polygons for a gaming environment; they are asked for a lot more than loading maps and characters. Professionals need GPUs for 3D world creation in game development, fluid dynamics in the automotive and aerospace industries, and more. Users in this space need specific things from their GPU, and they will pick the best GPU for their specific needs.

In many cases, plenty of professionals rely on AMD Radeon PRO GPUs. Previously, AMD offered the Radeon PRO W6000 series professional GPUs; the movie Terminator: Dark Fate was made with those. Now there is a new series, a much more powerful one compared to the GPUs that made Terminator: Dark Fate. They call it the AMD Radeon PRO W7000 series.

Within the series, they introduced two GPUs, the Radeon PRO W7900 and Radeon PRO W7800. In their own right, these are AMD’s most powerful workstation GPUs to date. For professionals, these could be the most powerful GPUs they have gotten their hands on so far.

AMD Radeon PRO W7800

Source: AMD

For starters, the AMD Radeon PRO W7800 brings with it the technology packed into the AMD Radeon 7000 series GPUs – the RDNA 3 architecture. RDNA 3 has proven to be plenty more powerful and efficient than the previous RDNA 2 architecture. Of course, it is also a beefier GPU than before for even more performance in professional workloads. The Radeon PRO W7800 packs 70 compute units, 10 more than the Radeon PRO W6800 it replaces. The compute units are based on TSMC’s 5nm process too, making the GPU more efficient than before at 260W TBP.

The GDDR6 RAM remains the same as the previous GPU at 32GB. At 64MB of cache though, the AMD Radeon PRO W7800 packs less cache than before. Still, at least the RAM is faster, at 576GB/s transfer speeds over the 512GB/s of the W6800 GPU, for even better real-world performance.

The GPU is fitted with the latest DisplayPort 2.1 technology to take advantage of the most advanced displays you can find in the industry. The Radeon PRO W7800 also finally comes with AV1 encoding and decoding capabilities for even better video and audio editing workflows. But it is not just DisplayPort 2.1 and AV1 that make the Radeon PRO W7000 series GPUs process work faster than before. They also pack a new AI engine that is supposed to be twice as powerful as before, as well as AMD’s second-generation raytracing engine. While raytracing is still NVIDIA’s forte, you can be sure that AMD is not just sitting still.

The new Radeon PRO W7000 series is also designed to work with AMD’s latest and greatest Threadripper processors. With Infinity Cache technology, the GPU and CPU can share cache to ensure there is less in the way of bottlenecks. At the same time, with the new AMD Software: PRO Edition, AMD ensures that the Radeon PRO GPUs are more reliable than ever.

AMD Radeon PRO W7900

Source: AMD

If the AMD Radeon PRO W7800 does not cut it, there is an even more powerful workstation GPU now. The AMD Radeon PRO W7900, as they call it, packs even more punch than the already powerful Radeon PRO W7800. It has 26 more RDNA 3 compute units, at 96 compared to the W7800’s 70. It is even more powerful, capable of 61 TFLOPS of single-precision calculations compared to 45 TFLOPS. It has 48GB of GDDR6 RAM with even more bandwidth at 864GB/s. Of course, with all this increase in power, the power consumption tends to be higher too, at 295W TBP.
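A quick back-of-envelope calculation using the figures quoted above puts the gap between the two cards in perspective:

```python
# Back-of-envelope comparison of the two Radeon PRO W7000 cards,
# using only the figures quoted in the article.
w7800 = {"compute_units": 70, "fp32_tflops": 45, "tbp_watts": 260}
w7900 = {"compute_units": 96, "fp32_tflops": 61, "tbp_watts": 295}

cu_uplift = w7900["compute_units"] / w7800["compute_units"] - 1
tflops_uplift = w7900["fp32_tflops"] / w7800["fp32_tflops"] - 1
power_increase = w7900["tbp_watts"] / w7800["tbp_watts"] - 1

print(f"Compute units:   +{cu_uplift:.0%}")      # roughly +37%
print(f"FP32 throughput: +{tflops_uplift:.0%}")  # roughly +36%
print(f"Board power:     +{power_increase:.0%}") # roughly +13%
```

In other words, the W7900 delivers around a third more raw compute for only about an eighth more board power, which is where the efficiency of the 5nm RDNA 3 design shows.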

They say that the AMD Radeon PRO W7900 workstation GPU is so powerful that you can publish or render your work in the background while working on something else at the same time, without either task slowing down. With more power as well, the AMD Radeon PRO W7900 works much faster than the W7800 in all sorts of workflows, in theory at least. If you wish, both the AMD Radeon PRO W7800 and Radeon PRO W7900 are also AMD Remote Workstation capable, meaning you can work off any laptop anywhere in the world using the power of the Radeon PRO GPUs remotely.

Price and Availability

Where the AMD Radeon GPUs might be a clever choice for professionals is the price. The AMD Radeon PRO W7900 and Radeon PRO W7800 are available for US$ 3,999 (MYR *) and US$ 2,499 (MYR *) respectively. They will be available to users in the coming months of 2023. For more information on the AMD Radeon PRO W7900 and Radeon PRO W7800, you can visit their website.

Collaborate, Meet & Work from Anywhere with AI-imbued Webex from Cisco

“Things will never be the same again” is one of the most common sayings we’ve been hearing since we started trying to live our normal lives again. To be honest, that statement couldn’t be more true. The ways in which we work, communicate and interact have changed drastically since the pandemic. One of the things that has changed most drastically is how we work. Many of us are now working completely remotely while others are working in hybrid environments. Businesses have been forced to adapt to new “normals” that are here to stay. It’s become even more imperative for companies to have the correct tools at their disposal; one of which is Cisco’s Webex Suite.

Cinematic Meetings

Webex itself is Cisco’s answer to Google Meet, Microsoft Teams and Zoom. It’s a video conferencing platform that enables collaboration, and it’s steadily been getting more bells and whistles as Cisco continues its development. The latest feature coming to Webex is a purpose-built AI that is poised to change how we work and collaborate even further. Webex’s new AI is bringing more optimisations to over 10 million users, the bulk of which allow better collaboration through bandwidth savings, video clarity and automation.

Better Clarity, Privacy and a New Cinematic Experience with Webex AI

One of the more significant updates is the implementation of Webex’s Super Resolution which allows users to have crystal clear video even with lower internet bandwidths. It doesn’t just stop there though, Webex Super Resolution is able to upscale and enhance images from lower-resolution cameras to deliver high-definition video. It can even intelligently “relight” your image for the best clarity. Under harsh lighting environments, the AI will underexpose the image to compensate for the lighting and provide clearer video; while under dim lighting, the AI compensates with higher exposure and brightness. Even if you’re stepping away, Webex AI will automatically blur your video, mute your mic and put up a “be right back” message. These settings are magically removed when you’re back in front of the camera.

https://blog.webex.com/wp-content/uploads/2023/03/brb.mp4

It’s not all just about the Webex app either. Cisco is also imbuing Cisco Room OS with AI to enhance video features on Cisco Collaboration devices. These devices will be able to provide cinematic meeting experiences. What exactly is a “cinematic meeting experience”? Well, imagine a meeting where the camera intelligently zooms in on and follows the person speaking. Cisco is enabling this with voice and facial recognition technology. They’re even taking it a step further by ensuring the presenter is always in the frame and at the best angle.

IT admins can take video conferencing and company privacy even further by creating virtual boundaries for collaboration spaces in the office. Employees jumping on a Webex call in designated meeting zones will be viewed in a more condensed frame. This framing removes any blank space, keeping it out of view. What’s more, only individuals within the boundary will be included in the meeting. This is particularly important in open-space offices or if you’re working in a busy space.

Revolutionising Customer Support Experiences with Webex Connect

Webex is also one of the most used platforms when it comes to customer service, so it comes as no surprise that Cisco has also zoomed in on the call centre to improve and revolutionise the experience there. Through its platform – Webex Connect – Cisco is bringing even more features to help businesses address customer needs and meet their expectations.

Cisco is starting at the very beginning when it comes to revolutionising these experiences. Webex Connect allows businesses to orchestrate and automate end-to-end customer journeys with its low-code flow builder. Using this capability, businesses can automate basic functions like validation. Put simply, if a customer calls in for an email or phone validation, Webex Connect can, with the correct flow configured, provide the validation code without the need to pass the inquiry to an agent.
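The validation step described above can be illustrated with a small sketch: generate a one-time code, hand it to a delivery channel, and verify it without involving a human agent. Everything here, including the class and method names, is our own assumption for illustration and has nothing to do with the actual Webex Connect API:

```python
import secrets

class ValidationFlow:
    """Toy sketch of an automated one-time-code validation step, the kind
    a low-code customer-journey flow might orchestrate. Not a real API."""

    def __init__(self):
        self._pending = {}  # contact -> expected code, single-use

    def request_code(self, contact: str) -> str:
        """Generate a 6-digit one-time code for a contact."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        self._pending[contact] = code
        return code  # a real flow would send this via SMS or email

    def verify(self, contact: str, code: str) -> bool:
        """Check the code; each code is valid exactly once."""
        expected = self._pending.pop(contact, None)
        return expected is not None and secrets.compare_digest(expected, code)
```

The point of the low-code builder is that a business analyst wires up this generate-deliver-verify loop visually instead of writing it, freeing agents for inquiries that actually need a human.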

Speaking of agents, Webex Connect is also getting the AI treatment, and it takes things to a new level. The platform will be able to coach agents in real time. With Agent Answers, human agents will get insights and knowledge base articles surfaced as they interact with the customer. Agent Answers will also be continually improved with customer interaction data that is constantly fed to it, including self-service and automated interactions.

In addition, agents can also be provided with AI-powered chat summaries. These summaries eliminate the need for agents to go through lengthy chat histories to serve customers better. These summaries will include previously recommended solutions and a history of the issues reported. These insights will be provided in an easily digested and understood fashion to allow agents to react more efficiently.

Actionable Insights for Business Development

Webex Connect isn’t just about the agent either. Business analysts will be able to get valuable insights from the platform too. Using Topic Analysis, Webex Contact Center will surface reasons why customers are calling in. It will aggregate call transcript data and model trends in an easy-to-understand form. Using this data, businesses will be able to react and adapt to address customer needs better.

This feature isn’t just a one-off thing either. Thanks to the nature of AI, Topic Analysis will continually improve and get smarter with time. It will be able to learn and improve while businesses adapt more proactively.

Continually Improving Throughout 2023

Cisco’s Webex platform will be continually improving with more AI integrations and features throughout 2023. The features mentioned in this article will be making their way to Webex in the near future.

Malaysia’s First SSD, NEUBE, Enters the High-Performance SSD Market with Server On & PHISON Backing

Malaysia has always had a complicated history with science and technology. While the country itself has never been seen at the forefront of these industries, its citizens are deeply entrenched in milestone achievements within them. One such milestone is the creation and mass production of the USB flash drive. Dato’ Pua Khein-Seng, a Malaysian and one of the pioneers of the single-chip USB flash drive, is looking to change that in the near future.

[Source: Server On & PHISON] Robert Wu, ServerOn Co-founder; Chan Wone-Hoe, ServerOn Co-founder; Dato Pua Khein-Seng, CEO of Phison Electronics Corp; Albert Kang, Phison Head of Business Development officiating the launch of NEUBE.

Dato’ Pua Khein-Seng is looking to bring back some of the success he’s had overseas, particularly in Taiwan, to Malaysia. How? He’s embarking on creating Malaysia’s first enterprise SSD, and we’re not talking about just the fabrication and assembly. He envisions an SSD that is designed, engineered and fabricated in Malaysia. To achieve this, Dato’ Pua is partnering with ServerOn Sdn Bhd, a Malaysian distributor of enterprise and data center-grade hardware.

The partnership will see ServerOn partner with PHISON Electronics Corporation, a Taiwan-based company specialising in storage technologies. As CEO of PHISON, Dato’ Pua is looking to train Malaysian talent to eventually design and create SSDs. The new startup which will be incorporated as part of this partnership will recruit and create job opportunities for Malaysians to be involved in the design and creation of cutting-edge storage technologies.

[Source: Server On & PHISON] Neube SSDs will be available in Q2 2023.

As a first step, ServerOn and PHISON are introducing the first-ever truly Malaysian SSD with NEUBE. Designed and engineered in Malaysia, NEUBE SSDs cater to high-demand workloads and data centres. NEUBE SSDs will be available in popular form factors including 2.5-inch, U.2, M.2 and E1.S. ServerOn will offer these SSDs as an option in their products when they come to market later in Q2 2023. Other retailers can also be part of this developing story by getting in touch with ServerOn.

“NEUBE’s SSDs are built with high-quality materials and advanced error correction technology, which can help prevent data loss and system crashes. This can lead to improved system stability and reduce downtime for the client’s business,” said Dato’ Pua Khein-Seng, CEO of Phison.

[Source: Server On & PHISON] Dato Pua Khein-Seng addressing the media at the MoU signing and launch of NEUBE

That said, Dato’ Pua sees this as only the first step in creating a truly Malaysian product. While Server On and PHISON will collaborate further, he sees government policies and market demand as the biggest hurdles for NEUBE to be successful globally. He stressed, “The only way forward for Malaysia to become a leader in the technology space is for the government to empower the local startups and industry with the right policies. They should also be the pioneers in implementing these policies and adopting Malaysian products such as the newly launched NEUBE SSDs.”

Sony MDR-MV1 Open-Back Headphones Come to Set the New Standard for Monitoring Headphones

Sony is a world-famous brand when it comes to audio gear. They make all sorts of audio solutions for all kinds of uses. They have the WH-1000X series of headphones for consumer-level high-end noise-cancelling headphones. They make the MDR-Z series headphones for audiophiles. If you prefer in-ear earphones, there is the WF-1000X series for consumers looking for the best truly wireless listening experience. You also have the IER series earphones if you prefer a wired audiophile solution. They do not stop at headphones though; they make vinyl players, portable media players, speakers, home theaters, recorders, and even microphones. They are also some of the biggest names in audio-specific production work.

For years, the benchmark for studio and production monitoring has been Sony’s MDR-7506 over-ear headphones. In fact, Sony’s MDR series monitoring headphones have been setting the standard in studio-level monitoring equipment for more than three decades. Now, there is a new one – the MDR-MV1.


Unlike the MDR-7506, the MDR-MV1 offers an open-back design. That means you can expect a more natural and cleaner audio response from the headphones, an advantage over closed-back designs. Open-back headphones also usually offer more accurate sound reproduction with a wider soundstage, allowing for better and more accurate mixes. There is the small problem of ambient noise though, since there is nothing stopping noise from outside from coming in.

Apart from that accuracy, the MDR-MV1 also offers spatial sound capabilities. For Sony, that means their 360 Reality Audio spatial sound algorithm, though you can technically use the headphones to mix for Apple’s Spatial Audio and other surround sound implementations. With Hi-Res Audio compatibility, the MDR-MV1 offers a broad depth of monitoring capability, allowing you to mix all kinds of music accurately. The headphones offer a frequency response from 5Hz all the way to 80kHz, which is more than wide enough for all kinds of sounds, even if you sit in a foley studio instead of a music recording studio.

Unlike older Sony MDR monitoring headphones, the MDR-MV1 offers a detachable AUX cable. You can technically use other similar cables, but why would you want to when Sony offers a highly durable, high-quality cable with machined connectors? They also re-engineered their earpads with softer materials that are also lightweight, so that you can work for much longer without taking off the headphones or hurting your neck and head.

Sony C-80


Alongside the MDR-MV1 open-back monitoring headphones, Sony also launched a new microphone made mostly for home recording and podcasts, the Sony C-80.

Sony’s C-80 is not the first microphone Sony has made. They have the C-100 and C-800G, both aimed mostly at recording studios. The C-80 is made more for prosumers and hobbyists looking to have professional-grade gear at home.

The C-80 offers the best of both the C-100 and C-800G microphones. The capsule is derived from the C-100, while the shock-proof two-part metallic body is derived from the C-800G. They also innovated with something they call “Noise Elimination Construction”, which prevents the mic from picking up noise from the body’s own vibrations, offering a much cleaner, almost noise-free sound while recording.

The C-80, like the mics that came before it, is made mostly for vocal recordings. That does not mean that you cannot use it for anything else; you can technically use it to record instruments like guitars. It is also a condenser mic, so you want to make sure you have a mixer or audio interface that supplies 48V phantom power through its XLR port.

Price and Availability

The Sony MDR-MV1 will be available in May 2023, alongside the C-80. There are no colour options here, so personalizing your headphones and mics will have to come down to your own efforts. The MDR-MV1 will set you back MYR 1,690, which is a fair bit more than the older MDR-7506. It is supposed to offer a lot more in terms of monitoring and mixing capabilities though. The C-80 will retail at MYR 2,190. More on Sony’s latest MDR-MV1 can be found on their website.

WhatsApp Groups Are Even Better Now with Even More Controls

Last year WhatsApp tested and later launched their new Communities feature for all WhatsApp users. If you are not familiar with Communities, it is a powerful management tool sitting within WhatsApp that lets you manage all your groups in one place. As the admin of multiple groups, you can create a Community, a group of groups, to manage and access them from a single place. As an admin, you can make a community-wide broadcast message, create targeted messages for select groups, and even moderate chats within the group. Of course, WhatsApp will not just stop there.

WhatsApp has added an important privacy control for group admins. Group admins can now decide whether a person gets to join the group or not. For Communities users, the admin can choose whether a particular individual can join any of the groups within the community. Of course, the feature is not limited to Communities users. Admins can also still send invite links to people they want in the group, and if they no longer want a particular person in a group, they can still remove that member.

That is not to say that ordinary members within a community get no added features or benefits. WhatsApp has added a way to find other members who are in the same groups as you, and you can also use this function to rediscover groups through these common contacts, in case you have forgotten which groups are which. This matters, since the dependency on WhatsApp and on Groups within Communities is now much bigger than before.

You might not see the function rolling out to your app today. WhatsApp says that they are rolling the update out to users in stages over the coming weeks. If you are interested in getting the best out of your Groups, you want to keep checking your app updates regularly. To know more about the new functions WhatsApp is adding to the app, you can always head over to their blog.

Adobe Firefly, the Next-Generation AI Made for Creative Use

AI (Artificial Intelligence) generated graphics are not a new thing. You have platforms like OpenArt and Hotpot these days where you can just type in keywords for the image you want and let the engine generate art for your use. Even before AI-generated graphics, though, the implementation of AI within the creative industry was nothing new. NVIDIA has used their own AI engine to write an entire symphony, and even to create 3D environments using their ray tracing engines. Adobe, too, has something they call Sensei. The AI tool is implemented across their creative suite to understand and recognise objects better, fill in details more naturally where needed, and even edit videos, images, or text quickly and efficiently. Now, they have Firefly.

Firefly is not a separate AI system from Adobe’s Sensei. Firefly is part of the larger Sensei generative AI family, alongside technologies like Neural Filters, Content Aware Fill, Attribution AI, and Liquid Mode implemented across several Adobe platforms. Unlike those platform-specific implementations, though, Adobe is looking to put Firefly to work across their Creative Cloud, Document Cloud, Experience Cloud, and even their Adobe Express platforms.

So, what is Adobe Firefly? We hear you ask. It is technically Adobe’s take on what a creative generative AI should be. They are not limiting Firefly to just image generation, modification, and correction. It is designed to let any sort of content creator create more without needing to spend hundreds of hours learning a new skill. All they need to do is adopt Firefly into their workflow, and they will be able to produce content they have never been able to create before, be it images, audio, vectors, text, videos, or even 3D materials. You can get different content every time with Adobe Firefly too; the possibilities, according to Adobe, are endless.

What makes Adobe’s Firefly so powerful is the entirety of Adobe’s experience and database behind it. Adobe’s Stock images and assets are obviously a huge library for the AI to draw on. The implementation can also use openly licensed assets and public domain content in generating its output. The tool, in this case, is meant to prevent IP infringement and help you avoid future litigation.

Adobe Firefly Cover
Source: Adobe

As Firefly launches in its beta state, it will only be available as an image and text generation tool for Adobe Express, Adobe Experience Manager, Adobe Photoshop, and Adobe Illustrator. Adobe plans to bring Firefly to the rest of their platforms, where relevant, in future. They are also pushing for more open standards in asset verification, which will eventually include proper categorisation and tagging of AI-generated content. Adobe is also planning to make the Firefly ecosystem a more open one, with APIs for users and customers to integrate the tool into their existing workflows. For more information on Adobe’s latest generative AI, you can visit their website.

Google Glass Bites the Dust – Support Officially Ending in September 2023, Sales Have Ceased

Google Glass made its debut in 2013. Back then, Google Glass made headlines everywhere, and the idea that everyone in the world would eventually own some type of augmented reality (A.R.) headgear was not in any way ridiculous. That conversation died soon after, though. The reality (no pun intended) was that a pair of A.R. glasses from Google at the time would set you back US$ 1,500, or converted to local currency at the time, about MYR 5,000 or thereabouts. That kind of money for a pair of clunky glasses you need to keep charging every few hours is the kind of luxury most of the world cannot afford or does not need. Add to that the fact that Google Assistant and A.R. functionality at the time were in their infancy, crude at best; why would you pay that much money for a pair of glasses?

The original Google Glass stayed on sale for about two years, until 2015. No sales numbers were quoted within that time frame. In that time, Google also produced a new type of Google Glass. This time, they realised that the A.R. smart glasses market was not something they wanted to sell to end-users. Instead, they saw more potential use cases in the enterprise market. Hence, Google developed, supported, and sold the Google Glass Enterprise Edition from 2015 onward. Then in 2023, well, today, they stopped selling the kit entirely and announced that they will stop supporting it in September 2023.

Through its life, the A.R. project by Google was adopted mostly in the construction and medical fields. Google updated the Google Glass Enterprise Edition once, in 2019, and from then on the Google Glass Enterprise Edition 2 replaced the first iteration.

Google has not announced any replacement for the Google Glass Enterprise Edition 2, and it does not look like Google will be announcing a replacement for the A.R. goggles anytime soon. That does not mean that Google has given up on the idea of A.R. completely.

In 2020, Google made an acquisition that confirms their continued commitment to A.R.: they acquired North, a smart glasses maker. Since then, the Mountain View giant has reportedly been working on a smart A.R. wearable that resembles ski goggles, codenamed Project Iris. There has been little update on the project’s progress since then, though.

It is also unlikely that Google will scrap the project, since their competitors are working on the same thing. Apple and Meta (formerly known as Facebook) have been working on their own A.R. and Virtual Reality (V.R.) headsets for some time now, and are reportedly looking to bring their versions to market in the future. Microsoft also has a mixed reality department of its own and has produced working mixed reality hardware, but that hardware has not been sold to end-consumers, for good reason.

A.R. is still very much something you can look forward to becoming the norm in the future. For now, though, with Google shelving their most promising mixed reality project temporarily, that future looks a little further away than we might like to think. You can find out more about Google’s Glass project on their website.