1. Subpar Home Assistant Devices
The first consumer-grade smart home assistant was the Amazon Echo speaker, released in 2014. Since then, various mainstream brands have released products, such as Google Home and Apple’s HomePod.
In a way, home assistant technology is a revolution in AI and one of the many applications of big data and machine learning. Still, the privacy aspect of using a home assistant device is debatable, and the arguments continue between privacy enthusiasts and the companies powering the devices.
But one thing’s for sure: the big-name home assistant speakers on the market are heavily secured. After all, Amazon, Google, and Apple are unlikely to sell loyal customers a product with weak security.
But not all brands have the same appreciation for user security.
It’s estimated that almost half of homes will have a smart home assistant by 2022. But not everyone is ready to spend over $100 on a fancy home assistant from Google or Amazon.
With everyone wanting to hop aboard the smart speaker trend, lower-priced, lower-quality products are finding a spot in the marketplace. Unfortunately, like many IoT devices, these smart speakers have few built-in security measures to keep your home network safe from intruders.
2. Unreliable Facial Recognition Software
Facial recognition software has come a long way over the past decade. It has introduced benefits such as passwordless logins and has even helped authorities find missing people.
Under ideal circumstances (which is where most facial recognition systems are tested), near-perfect accuracy of 99.9 percent is possible. In trials, facial recognition apps are fed quality images with uniform lighting and clear angles of the face. But that’s not how real-world pictures are taken.
Accuracy drops drastically in bad lighting. The same applies to everyday face coverings and changes such as heavy makeup, facial hair, glasses, face piercings, and medical masks.
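To see why degraded input matters, here is a minimal sketch of how most face recognition pipelines decide a match: each face is reduced to an embedding vector, and two faces count as the same person when the distance between their embeddings falls below a threshold. The vectors, threshold value, and function names below are illustrative assumptions, not any vendor’s actual system; the point is that bad lighting or occlusion shifts the embedding and pushes the distance past the threshold, so a genuine user gets rejected.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two embeddings: 0 means identical direction."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.4) -> bool:
    """Declare two faces the same person when their embeddings are close enough."""
    return cosine_distance(emb_a, emb_b) < threshold

# Toy 3-dimensional embeddings (real systems use 128+ dimensions).
enrolled         = np.array([1.0, 0.0, 0.0])   # captured under ideal lighting
clean_capture    = np.array([0.98, 0.1, 0.0])  # same person, good conditions
degraded_capture = np.array([0.5, 0.8, 0.3])   # same person, bad lighting/occlusion

print(is_match(enrolled, clean_capture))     # True: clean capture matches
print(is_match(enrolled, degraded_capture))  # False: same person, rejected
```

Loosening the threshold to accept the degraded capture would also let more impostors through, which is exactly the trade-off low-grade software tends to get wrong.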
While facial recognition software can still be used in less-than-ideal conditions, the belief that it has become infallible is concerning, especially when low-grade software is used to track individuals or investigate crimes.
3. Insecure Autonomous and Semi-autonomous Vehicles
Cybersecurity in autonomous and semi-autonomous cars is no laughing matter. Unlike with personal devices, an insecure car system won’t just cost you personal information and data; it can also cost you your physical safety.
Self-driving cars are still far from being the mainstream means of getting around, but they’re being used in relatively large numbers in various cities worldwide.
Autonomous and semi-autonomous vehicles are almost always connected to the internet. They’re constantly sending out metric readings and data from sensors positioned all over the car to a centralized cloud environment for analysis.
And while automotive manufacturers are doing their best to ensure the vehicles’ security, no online or offline system is 100 percent secure, as proven by the countless hacks of major corporations worldwide.
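That constant stream of sensor data is a natural target for tampering in transit. As a hedged illustration (not any manufacturer’s actual protocol), a basic defense is to attach a message authentication code to each reading so the cloud side can detect modified telemetry; the key, field names, and helper functions below are invented for the example:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"vehicle-123-provisioned-secret"  # hypothetical per-vehicle key

def sign_reading(reading: dict, key: bytes = SHARED_KEY) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can detect tampered telemetry."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "tag": tag}

def verify_reading(message: dict, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"speed_kmh": 62.5, "lidar_ok": True, "ts": 1700000000})
print(verify_reading(msg))           # True: untouched message verifies
msg["payload"]["speed_kmh"] = 250.0  # an attacker alters the reading in transit
print(verify_reading(msg))           # False: tampering is detected
```

Integrity checks like this only help if the key stays secret and the whole stack around them is sound, which is precisely why no connected-car system can be called 100 percent secure.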
4. Deepfakes Becoming Mainstream
Deepfakes started out as one of the wonders of modern technology. Producing even a short deepfake video of one or more people required massive amounts of visual data and a powerful computer.
Before, you had to be a prominent individual, like a politician or a celebrity, for anyone to bother making a deepfake of you to spread misinformation and ruin your reputation.
But that’s no longer the case.
With the current technology available to most online users, anyone can make a deepfake of anyone. And they no longer need hundreds of photographs and videos from numerous angles. In fact, a handful of social media profile pictures and a short video clip of a person are now enough to create a convincing deepfake of them.
Another issue that arises as deepfakes become mainstream is how they interact with facial recognition software. A recent study at Sungkyunkwan University in South Korea found that even well-regarded facial recognition software was in danger of being fooled by deepfake samples.
5. A Normalized Lack of Privacy
Privacy wasn’t declared a human right by the United Nations in 1948 for no reason. Privacy is the cornerstone of free speech, self-expression, autonomy, and the average person’s ability to live in peace and maintain their dignity.
Still, privacy is one of the least protected rights globally, and many people don’t seem to care much about it. Surveys show that 13 percent of internet users worldwide are willing to give up their personal information in exchange for free access to online content and services.
Over the past few years, there have been multiple attempts to enforce privacy legally, such as the GDPR in Europe and state-based laws such as the California Consumer Privacy Act (CCPA). But instead of prohibiting commercial organizations from collecting personal user information, these regulations only require them to get your permission.
That’s the reason for the massive increase in pop-ups asking you to accept cookies on almost every website you visit nowadays. But this has produced a variation of notification fatigue known as privacy fatigue: users now blindly agree to every cookie and data request without stopping to check what data the website is asking for.
Technology trends with negative consequences may seem out of your control. Unfortunately, you can’t escape them without abandoning technology altogether and living off the grid, and even then, certain tech is hard to avoid.
Even if you can’t stop what’s happening, knowing about it lets you prepare for the worst. And in some areas, you can make your opinion on a specific matter known through where you spend your money and time online.