Tag Archives: Data Privacy

Mozilla Enhances Online Privacy with Firefox Relay Integration

Navigating the online world while safeguarding personal information has become increasingly challenging. Mozilla’s response – Firefox Relay – has just received an upgrade that promises enhanced privacy protection in the digital realm.

In a bid to fortify online privacy, Mozilla has seamlessly integrated the Firefox Relay feature into the Firefox experience itself. Formerly an add-on, Relay acts as a protective barrier for users’ email addresses, shielding them from unsolicited marketing emails and potential data breaches. With this integration, users can conveniently utilize the Relay feature to safeguard their personal email addresses while navigating the web.

Meet Firefox Relay, a privacy-first and free product that hides your real email address

Acquiring this enhanced layer of online privacy is simple. Users need only possess a Firefox account to access the built-in Relay feature, available for free within the browser. Mozilla aims to make this feature available to millions of users over the upcoming weeks.

This integration lines up with Mozilla’s commitment to elevating the user experience by prioritizing privacy. The company introduced Firefox Relay functionalities earlier this year, allowing users to seamlessly interact with the tool through their toolbar, generate fresh email masks, and reuse existing ones. Notably, Relay ensures protection against web trackers, a significant step in preserving sensitive data.

For those unacquainted with Firefox Relay, it offers practical solutions for various online scenarios. Users can generate temporary email aliases to safeguard personal emails, benefiting from the service’s email filtering that eliminates trackers before forwarding messages to the primary email ID. Whether it’s crafting transient emails or maintaining confidentiality on public platforms, Firefox Relay empowers users to embrace a more secure online experience, now more conveniently accessible than ever.
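For readers curious about how email masking works under the hood, the sketch below illustrates the general idea: a relay service hands out disposable aliases, maps them to your real address, and scrubs obvious trackers before forwarding. This is only a rough illustration, not Mozilla's actual implementation; the domain, function names and tracker pattern are assumptions made for the example.

```python
import secrets

# Hypothetical sketch of an email-masking relay. NOT Mozilla's code;
# the domain, data structures and tracker filter are illustrative assumptions.

ALIAS_DOMAIN = "mask.example.com"   # placeholder relay domain
alias_table = {}                    # alias -> real address

def create_alias(real_address: str) -> str:
    """Generate a random, disposable alias that forwards to the real address."""
    alias = f"{secrets.token_hex(6)}@{ALIAS_DOMAIN}"
    alias_table[alias] = real_address
    return alias

def forward(alias: str, message_html: str):
    """Forward a message to the real address, stripping a common tracker pattern."""
    real = alias_table.get(alias)
    if real is None:
        return None  # unknown or deleted alias: the mail is simply dropped
    # Naive tracker removal: strip 1x1 tracking-pixel attributes before forwarding.
    cleaned = message_html.replace('width="1" height="1"', "")
    return real, cleaned

mask = create_alias("me@example.com")
print(mask)  # e.g. 3f9a1c2b4d5e@mask.example.com
```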

#JomJagaPrivasi with Meta’s Privacy Cafe

The world is quickly changing. In many ways, the landscape of social media and how we are online has changed drastically since social media became a mainstay. More of us are concerned with our data and how companies are handling it. We have become ever more critical of our own privacy when we are online. One of the largest social media companies we deal with on a regular basis has to be Meta. With over 3.7 billion people engaging with Instagram, WhatsApp, Oculus and Facebook, it’s become even more imperative that users are aware of the steps to protect their privacy and data.

Privacy Cafe – Sit, Eat and Explore Privacy and Data Protection

To that end, Meta kicked off the week-long campaign with a unique “Privacy Cafe” where users of its many platforms could learn about the many tools available to them to protect their privacy and data. The event, which took place from 28 to 30 October at The Farm Craft in Bangsar South, saw the neighbourhood cafe undergo a top-to-toe transformation into Meta’s Privacy Cafe. The cafe featured interactive quizzes and AR-enabled activities that not only educated the public about the tools but encouraged them to activate them and take control of their online privacy.

The Privacy Cafe featured an immersive Instagram AR Filter developed in collaboration with Florian Sebatier and an interactive digital experience called “Your Home”. “Your Home” allowed users to become interior decorators and create a home experience that they were comfortable with. The furnishings in the “Your Home” experience were analogues for the many controls available on Facebook and worked to demystify the whole concept of online privacy and data protection.

Of course, to help convey the message, Meta also enlisted local social media stars like Ceddy Ang, Tan Yuki, Gajendrabalan Chandra, Adam Muzam, Ori Yuanwei, Amira Sachie and Jessica Chaw to share and relate their experiences with privacy on Meta’s platforms.

Continuing the Conversation Online

In addition to the on-ground event, Meta Malaysia also held Facebook Live sessions featuring open discussions with the public about its privacy tools and controls across Instagram, Facebook and WhatsApp. With the increasing awareness about privacy and data protection in Malaysia, Meta is looking to be one of the places where users can safely interact and be social online.

They’ve also put together a Privacy Fact Sheet covering their efforts, including end-to-end encryption on all their platforms, two-factor authentication, and increased accountability for third-party apps accessing accounts on their platforms.

While the on-ground activation is over, Meta is looking to continue the conversation online. The “Your Home” and Instagram AR experiences remain accessible online for users to get to know the controls and safety measures available to them.

533 Million Facebook Users’ Data Resurfaces Online from 106 Countries

Facebook seems to be having a rough time of it recently. The company initially faced massive backlash over its data-sharing policies between its popular messaging app, WhatsApp, and the larger company. Now, it looks like old wounds are reopening, as data from a breach that happened in 2019 has surfaced on hacking forums.

The breach involves over half a billion users from over 100 countries, with data such as phone numbers, email addresses and even birth dates exposed. Malaysia is among the countries affected, with over 11 million users compromised. The breach was first reported by Business Insider, which also verified the data in the leak by testing password reset requests. A spokesperson for Facebook has confirmed the data breach, and also confirmed that it occurred due to a vulnerability which was identified and patched back in 2019.

https://twitter.com/UnderTheBreach/status/1378314424239460352

While the data is 2 years old, the fact that it is readily available online at this point is worrying. Data like birthdates, phone numbers and emails can be used to socially engineer scams. In fact, with phone numbers leaked and readily available, the likelihood of scams over SMS and phone calls is heightened.

Acronis Vice President of Cyber Protection Research, Candid Wuest, advises that, in light of the leak, “There is now a higher risk of SMS spam, but also password reset attacks and attacks against other services that use SMS for MFA are now more likely. Users should therefore change from SMS-based MFA service where possible for critical accounts.”
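For users acting on that advice, authenticator apps generate time-based one-time passwords (TOTP) locally on the device, so no code ever travels over SMS. The snippet below is a minimal sketch of the standard TOTP algorithm (RFC 6238); the shared secret shown is a made-up example, and in practice a vetted authenticator app or library should be used rather than hand-rolled code.

```python
import base64, hashlib, hmac, struct, time

# Minimal TOTP (RFC 6238) sketch: an app-based second factor that does not
# depend on SMS delivery. The secret below is a made-up example value.

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # time-based moving factor
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret, 6-digit code
```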

The fact that the leaker has made the data readily available for free can be puzzling. However, according to Wuest, “As the leaked data does not contain any passwords or payment card details it is of less value to attackers. Furthermore, at least two-thirds of the data was already available from previous leaks. It is not uncommon to see such data sets being made available for free, as they would not yield much profit on underground sites. Such large data sets tend to not stay private for very long anyway.”

The new leak brings into the spotlight the amount of personal data we have available online, especially on social media. It also brings into question Facebook’s privacy policies, which govern and protect data stored on its service. What’s even more worrying is that Facebook wasn’t the one notifying users; instead, the leak was reported by Twitter user Alon Gal, who has since been examining and verifying the leaked data. Facebook has only confirmed the occurrence of the breach and has not even notified the users who were affected.

Google Comes Under Fire For Chrome’s Incognito Mode

For many Google users, browsing in Chrome’s Incognito Mode has become something of a habit, especially when we don’t want anyone to trace our browsing behaviour. However, have you ever considered that certain parties might still be able to trace your activities on social media or the internet, even in Incognito Mode?

Recently, Google has come under fire over allegations that Chrome’s Incognito Mode is misleading and deceptive. A group of users in California alleged that Incognito Mode does not actually keep their browsing activity private, which contradicts the very purpose of the mode: privacy. The group has filed a lawsuit against Google.

Koh, the US District Judge in San Jose presiding over the case, has also questioned what data is being collected and why Google is collecting user data and activity in a way that big data companies can sell to advertisers or others.

Wooden gavel on a table in a courtroom. Photo by Sora Shimazaki on Pexels.com

Google’s lawyers have sought to dismiss the case, arguing that Google followed its terms and applicable regulations in protecting users’ data. The plaintiffs, however, contend that Incognito Mode is just another tactic for Google to observe and collect user data.

Google argues that it is very hard for users to prevent it from collecting their data, and that users should assume their internet and social media activities are often traced by someone else. Even though Google says there is no record of what you are doing while using Incognito Mode, the company is still able to access your activity. From a different perspective, users might see Google’s access as a way to protect them from bugs and other dangers lurking on websites.

As a Google user or online consumer, you have to understand that no matter what, tracing your activities on the internet is not all that difficult. Hence, users must always be aware of the security risks of each channel they are using to prevent leaking personal data.   

WhatsApp’s New Policies Are Coming into Effect, Here’s What Will Happen If You Haven’t Accepted

WhatsApp was in the middle of a media firestorm in early January thanks to the announcement of a controversial change to their Data Sharing Policy. The company, which was acquired by Facebook back in 2014, is requiring users to accept the new terms to continue using the application. The app will now be sharing a slew of data including your phone contact lists, app logs, diagnostic data, and status messages with its parent company. WhatsApp did send a prompt to users to accept the changes to continue using the app, however, it seems like there is a vocal portion of their users who have opted to look at its competitors: Signal and Telegram.

With the deadline for users to accept the new agreement looming, the company has published a new FAQ regarding the issue. The effective date of the new agreement has been extended to May 15, 2021, from the initial February 8, 2021. They have also indicated what would happen if users are still reluctant to accept their new terms – and to be frank, they seem to be coercing users into the new terms.

If you still don’t accept the new policy, you’ll essentially lose access to your messages and data in the app. The new FAQ states that WhatsApp will not delete your account. Instead, it will limit the capabilities of the application. In fact, you’ll be relegated to calls and notifications “for a short time”. The company hasn’t clarified what it means by “a short time” but during this period, users will not be able to access any of their messages.

While this is a less than ideal way of handling its faux pas, the company seems adamant about pushing ahead with the new changes, even at the risk of losing part of its user base. We even had Acronis’ Chief Information Security Officer, Kevin Reed, weigh in on the issue in our Tech & Tonic Podcast. In the immediate fallout from the abrupt and poorly managed announcement, users have been flocking to competitors, with Telegram and Signal at the forefront.

[Podcast] Tech & Tonic Special: Sit-Down with Alex Tan of HID Global about Biometrics and Security

In this special, we sat down with Mr. Alex Tan, the Director of Sales for the ASEAN Region for Physical Access Control Systems at HID Global. We had a conversation about some of the emerging trends in the industry on the use of biometrics and security. One of the biggest issues we discussed was the use of biometrics and its implications for personal data privacy and security with the emergence of legislation such as the GDPR (General Data Protection Regulation) in the European Union and Malaysia’s own PDPA (Personal Data Protection Act). The issue brought us to discussing how personal technological devices can be a security risk in how they handle data collected by facial recognition and fingerprint reading technologies.

Mr. Alex Tan heads the strategic development and organisational growth of HID Global’s Physical Access Control business within the ASEAN Region. He has been in the security access control industry for about 19 years and, prior to this, headed sales and enterprise solutions at another leading access control manufacturer.

HID Global has over two decades of expertise in the security industry. The American company has its roots in radio frequency identification technologies and has over the years become a recognised brand when it comes to premises access and security as well as biometric technologies. You may recognise the company’s logo from the many devices and solutions it provides to buildings and businesses across the world. It may even be responsible for keeping you safe in your apartment building!

Take a listen to the podcast and let us know if you still have any unanswered questions when it comes to biometrics and personal data protection in the comments down below.

Is Privacy Our Sole Concern With Contact Tracing Technology?

This week the Guardian reported an alleged ‘standoff’ between NHSX (the digital innovation arm of the NHS) and tech giants Google and Apple regarding the deployment of contact tracing technology aimed at curbing the spread of the Covid-19 virus. The debate centres on two predominant issues: first, the base technology to be used and, second, how the data will be stored.

Sidestepping the first issue which sees Google and Apple aiming to implement their feature directly on a device’s operating system while the NHSX version requires a downloadable dedicated application, this article will focus on the issue of privacy arising from the second issue.

In essence, Apple and Google have insisted that if there is to be any collaboration between the NHSX and them for the purposes of contact tracing the storage of all data will have to be decentralised. The NHSX, on the other hand, is pushing for centralised storage of data.

What’s the difference?

Before deciding on one system or another, it’s best to understand the basics of the distinction between these systems.

A centralised system has a single storage point and controller of the data collected. The central controller of the data may grant access to other users but remains ultimately responsible for the system as a whole. A centralized system is relatively easy to set up and can be developed quickly. Such a system is very useful where continuous modifications to the parameters of the system are expected or where the use of the data needs to be adapted for different purposes.

In contrast, a decentralised system has multiple controllers of data all of whom collect and store copies of the data on their respective systems. This system allows for quicker access to data and less risk of downtime as a fault with one controller will not necessarily affect the others.

A third form, known as a distributed system, has no single central owner at all and instead gives collective ownership and control to each user on the network; it is unlikely to be used by either party.
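To make the decentralised idea more concrete, here is a rough sketch of how on-device matching can work, loosely inspired by the Apple/Google exposure-notification approach: phones broadcast rotating random identifiers, keep what they hear locally, and check themselves against identifiers published for confirmed cases. The class names, identifier sizes and absence of any cryptography are simplifications assumed for illustration, not the actual protocol.

```python
import secrets
from dataclasses import dataclass, field

# Simplified sketch of decentralised exposure notification. Identifier
# rotation, key derivation and all cryptography are omitted on purpose.

@dataclass
class Phone:
    seen: set = field(default_factory=set)        # identifiers heard nearby
    own_ids: list = field(default_factory=list)   # identifiers this phone broadcast

    def broadcast(self) -> str:
        """Emit a fresh random identifier (rotated regularly in practice)."""
        rid = secrets.token_hex(8)
        self.own_ids.append(rid)
        return rid

    def observe(self, rid: str) -> None:
        """Record an identifier heard over Bluetooth; it never leaves the device."""
        self.seen.add(rid)

    def check_exposure(self, published_positive_ids: set) -> bool:
        """Matching happens locally; no contact graph is uploaded anywhere."""
        return bool(self.seen & published_positive_ids)

alice, bob = Phone(), Phone()
bob.observe(alice.broadcast())        # the two phones come into proximity

# Alice tests positive and consents to publishing her broadcast identifiers.
published = set(alice.own_ids)
print(bob.check_exposure(published))  # True: Bob is notified on his own device
```

In a centralised design, by contrast, the observed identifiers would be uploaded to a single authority, which performs the matching and can adjust its parameters centrally.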

Each system has its advantages and disadvantages and to make a decision between a centralised and a decentralised system the NHS and the tech giants will need to take into consideration a range of issues including:-

  1. The overall effectiveness of the technology;
  2. The adaptability of the system to the shifting demands of research;
  3. The cost of deployment and maintenance;
  4. Whether or not the system is a security risk for the user;
  5. Whether there are compliance concerns.

Why is a decentralised system so important?

Google and Apple have been clear that the reason for a proposed decentralised system is to avoid the risk of mass government surveillance, now or in the future. This is a genuine concern as the data being collected will be directly related to a user’s location and medical history. Although not free from criticism, this position is the preferred option and has been supported by academics and numerous civil rights groups including the Electronic Frontier Foundation and the American Civil Liberties Union.

Still, the European position is split, with seven governments supporting the project known as Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT), which proposes a centralised repository of data, while a growing following backs Decentralised Privacy-Preserving Proximity Tracing (DP-3T), which advocates a decentralised system.

The NHS itself may not be intent on surveillance; however, being publicly funded, it draws immediate speculation about its government links. In addition, both the NHS and the UK government have a poor record of handling large-scale IT projects, such as the failed £11bn National Programme for IT, scrapped in 2011, and the plans for a paperless NHS by 2018, which never took off.

What about the NHS position?

Unfortunately, the focus on privacy risks, coupled with the NHS’s bad track record in technology projects, has detracted from the core issue at hand: what does the NHS need right now to curb the spread of the Covid-19 virus?

Ross Anderson, an advisor to the NHS on its contact tracing application, highlighted the problem with a decentralised system:-

…on the systems front, decentralised systems are all very nice in theory but are a complete pain in practice as they’re too hard to update. We’re still using Internet infrastructure from 30 years ago (BGP, DNS, SMTP…) because it’s just too hard to change… Relying on cryptography tends to make things even more complex, fragile and hard to change. In the pandemic, the public health folks may have to tweak all sorts of parameters weekly or even daily. You can’t do that with apps on 169 different types of phone and with peer-to-peer communications.

(https://www.lightbluetouchpaper.org/2020/04/12/contact-tracing-in-the-real-world/)

The Covid-19 virus took approximately two months to infect 100,000 UK residents, and the spread has shown few signs of slowing. Time is critical in this situation and, correspondingly, flexibility in adapting to the constantly changing nature of the infection is a necessity. Decentralised systems do not allow for rapid evolution.

In addition, we should consider that, unlike centralised systems, decentralised systems are often unencrypted. While trying to prevent a government from carrying out surveillance, the Google and Apple system may inadvertently open itself up to more security problems than expected. In fact, they have themselves admitted this risk, stating that nothing is “unhackable”.

As a second consideration, the API that Google and Apple will release will likely have strict limitations on the type of data that may be collected. For example, the NHS would not be able to gather a list of every person a user has been in contact with based on user proximity. Instead, it will have to rely on a more manual version of contact tracing: sending every phone in the system a list of other phones that have been reported as contagious, and asking the user whether they have “seen this user”. Such a system relies heavily on user verification, which is often incorrect or simply disregarded.

Key location data, which could be used for developing population flow maps and anticipating the further spread of the virus, will likely not be made available under Google and Apple’s current proposal. It is also important to note that data from contact tracing could be used beyond the scope of curbing the spread of the virus, e.g. for decisions on directing the flow of emergency aid, development of temporary healthcare facilities, and deployment of healthcare equipment and personnel.

What has been going on elsewhere?

In contrast to the UK’s situation, Asian jurisdictions, which generally have less stringent data protection regulations, have taken remarkably different approaches from Europe.

Hong Kong, for example, introduced the mandatory use of an electronic wristband connected to a smartphone application to enforce quarantine for arrivals from overseas. Arrivals who refuse to comply are denied entry.

South Korea won praise for both tracking and publishing data relating to affected persons’ travel routes and affected areas, with the data collected through the government’s application as well as numerous independent applications. Residents also receive numerous location-based emergency messages and are not allowed to opt out of this function.

China’s measures, which have come under considerable question, involve a private-sector collaboration through the Alipay Health Code. Citizens are given a ‘traffic light’ status that determines the restrictions imposed on them. Although the exact basis for determining a person’s status is not known, the status has widespread application, including restricting access to certain public facilities and payment systems.

Privacy concerns of these measures aside, all these countries have seen a considerable reduction in the spread of the Covid-19 virus. While it would be premature to suggest that this is solely attributable to the contact tracing measures implemented, there is no doubt that the quick and extensive deployment of the technology has contributed to the battle against the virus’s spread, which begs the question:

Is privacy getting in the way?

In 1890, Brandeis and Warren, pioneers of modern-day privacy, wrote:-

…To determine in advance of experience the exact line at which the dignity and convenience of the individual must yield to the demands of the public welfare or of private justice would be a difficult task…

The UK, and indeed Europe, are at this juncture and need to decide on the cost of the compromise as the death toll and infection rate continue to increase. History reminds us that the greatest privacy and surveillance violations occurred when the world was focused on a raging war, and it is at times like these that we must be most vigilant about our rights.

Tech & Tonic Episode 2 feat. Vernon Chan – Pre-Order, Cash, or Installment? Oh, Use Protection Everyone!


In episode two of Tech & Tonic, we have a featured guest who is none other than Vernon Chan! He joins us to talk about everything from the state of the local tech market to security concerns with all the technologies we have become so dependent on. Also, do listen to the end to find out why Vernon decided to delete his Facebook account!

A little more about Vernon, he hails from a creative background that spans over 20 years, and is the founder of boutique creative hotshop – scratchdisk creative. He’s an avid blogger/writer, Twitter-addict, foodie, technology and gadget enthusiast, animal lover, Steve Jobs groupie and petrolhead.

As mentioned in the title, we start the podcast with what we think of pre-orders and how they are used to gauge market dynamics, such as which phones or colours could sell well. What are the pros and cons of getting a phone on pre-order, paying for it in cash, or getting it on an instalment plan with a telco?

The second notable topic is data privacy. Why is data privacy important? How does it affect us and what can we do about it? Government policies should also be enforced on companies that collect and sell data, to ensure citizens are protected.

Most online services require the user to opt in before they can be used, mostly in exchange for convenience. How much are we willing to give up? How much is the government doing to protect us?

What can you do to protect yourself and your data online? Some tips and tricks from @vernieman himself on the measures he personally takes to protect himself even when he’s traveling around the world! 

Samsung Find My Mobile Notification was a Data Breach

Update (26 February 2020): Samsung has reached out to SamMobile to clarify that the data breach wasn’t related to the Find My Mobile notification. Instead, the data breach was an isolated incident which occurred on the UK Samsung website. According to the report, only 150 customers were affected in the data leak.

Last week, tech news was rife with reports of Samsung users getting a strange, unexplained prompt from their Find My Mobile app. If you’re still 1dering why you got it, it appears that the issue may be a lot bigger than Samsung initially admitted to.

Here’s a little recap of what exactly happened: Samsung devices across the world started receiving a strange notification from the Find My Mobile app. The notification simply read “1,1”. There was no explanation or reason behind it.

Reports also surfaced that devices such as the Galaxy XCover were receiving the notification. What’s even more alarming is that users who had already deactivated the Find My Mobile app were still receiving it. Deactivated applications are applications that have essentially been turned off, as they cannot be uninstalled without altering the phone’s software. This, and the fact that the notification appeared on devices spanning the whole range of Android-powered Galaxy devices, including the new Galaxy Z Flip, as reported by renowned tech journalist Michael Fisher, makes things very worrying.

Having received the notification, some users promptly decided to reset their passwords. However, when they tried to access their Samsung account pages, they were greeted either by information that wasn’t theirs or by a blank screen. Keep in mind, a Samsung account is tied to every Galaxy device; in fact, setting up a Samsung account is part of the phone’s setup process. The account is also tied to the Samsung Pay service, Samsung’s payment gateway, which stores credit card and debit card information for paying at merchants.

Samsung initially brushed off the issue, saying that the notification was sent out unintentionally during an internal test. However, the company has since owned up to a data breach. In a statement to UK-based news portal The Register, Samsung’s spokesperson said,

“A technical error resulted in a small number of users being able to access the details of another user. As soon as we became aware of the incident, we removed the ability to log in to the store on our website until the issue was fixed. We will be contacting those affected by the issue with further details.”

While the company has yet to reveal what “a small number” means, we can expect that the number is significant. In my own experience, four out of five friends using Samsung devices received the notification, so it would be safe to assume that the issue is relatively widespread.

Of greater concern is how the app was able to send out a notification at all. This indicates that the app was still running in the background and points to the app having more functionality than it should. It also raises the question of what functionality Android allows disabled built-in apps to retain.