
BlackBerry Cylance outlines cyber security predictions for 2020

Josh Lemos, vice-president of research and intelligence at BlackBerry Cylance, has put forward some predictions on cyber security trends for 2020 that will impact Governments and companies across a variety of industry sectors.

(1) Uncommon attack techniques will emerge in common software

Steganography, the practice of concealing data inside another, seemingly innocuous file, will grow in popularity as online blogs make it possible for threat actors to grasp the technique. Recent research at BlackBerry found malicious payloads residing in WAV audio files, a format that has been used for decades and is typically categorised as benign.
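
To make the technique concrete, here is a minimal sketch of least-significant-bit audio steganography, the general class of trick the WAV finding describes. The helper names and the 4-byte length header are illustrative assumptions, not BlackBerry's findings, and the code assumes 16-bit PCM input:

```python
import struct
import wave

def embed_payload(cover_wav, stego_wav, payload: bytes):
    """Hide payload bytes in the least significant bit of each 16-bit sample."""
    with wave.open(cover_wav, "rb") as src:
        params = src.getparams()
        frames = bytearray(src.readframes(src.getnframes()))
    assert params.sampwidth == 2, "sketch assumes 16-bit PCM"
    # Prefix a 4-byte big-endian length header so the payload can be recovered.
    data = struct.pack(">I", len(payload)) + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    if len(bits) > len(frames) // 2:
        raise ValueError("payload too large for cover file")
    # Each 16-bit little-endian sample's low byte sits at an even offset;
    # flipping its lowest bit is inaudible but carries one payload bit.
    for i, bit in enumerate(bits):
        frames[i * 2] = (frames[i * 2] & 0xFE) | bit
    with wave.open(stego_wav, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(bytes(frames))

def extract_payload(stego_wav) -> bytes:
    """Recover the hidden bytes by reading the same LSBs back."""
    with wave.open(stego_wav, "rb") as src:
        frames = src.readframes(src.getnframes())

    def read_bytes(count, bit_offset):
        out = bytearray()
        for b in range(count):
            value = 0
            for i in range(8):
                value |= (frames[(bit_offset + b * 8 + i) * 2] & 1) << i
            out.append(value)
        return bytes(out)

    length = struct.unpack(">I", read_bytes(4, 0))[0]
    return read_bytes(length, 32)
```

The resulting file plays back exactly like the original, which is why such payloads pass for benign audio until the carrier format itself is inspected.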

Businesses will begin to recalibrate how legacy software is defined and treated, and invest more effectively in operational security around it. Companies will look for ways to secure less commonly weaponised file formats, such as JPEG, PNG and GIF, without hindering users as they navigate modern computing platforms.


(2) Changing network topologies challenge traditional assumptions and require new security models

Network-based threats that can compromise the availability and integrity of 5G networks will push Governments and enterprises alike to adopt new cyber security strategies as they roll out 5G. As cities, towns and Government agencies continue to overhaul their networks, sophisticated attackers will begin to exploit software vulnerabilities, as the expansion of bandwidth that 5G requires inevitably creates a larger attack surface.

Governments and enterprises will need to retool their network, device and application security. We will see many lean towards a zero-trust approach for identity and authorisation on a 5G network.
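
The zero-trust idea is that no request is trusted because of where it comes from: identity and authorisation are re-checked on every call. A minimal sketch, assuming an HMAC-signed token format invented here for illustration rather than any specific 5G or vendor mechanism:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-shared-secret"  # illustrative only; use a managed key in practice

def sign_request(identity: str, resource: str) -> str:
    """Issue a short-lived, signed token naming who may touch what."""
    claims = {"sub": identity, "res": resource, "exp": time.time() + 60}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def authorise(token: str, resource: str, acl: dict) -> bool:
    """Zero trust: validate signature, expiry and per-resource rights on EVERY call."""
    try:
        body, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return False  # expired: past trust does not carry forward
    # The token must name this exact resource AND the ACL must still allow it.
    return claims["res"] == resource and resource in acl.get(claims["sub"], set())
```

Note that network location never appears in `authorise`: a caller inside the 5G core is vetted exactly like one outside it, which is the behavioural core of the model.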

Threat detection and threat intelligence will need to be driven by Artificial Intelligence and machine learning to keep up.

(3) 2020 will see more cyber-physical convergence

As all sectors increasingly rely on smart technology to operate and function, the gap between the cyber and the physical will officially close. This is evident in the recent software bug at an Ohio power plant that affected hospitals, police departments, subway systems and more in both the US and Canada.

Attacks on Internet of Things (IoT) devices will have a domino effect and leaders will be challenged to think of unified cyber-physical security in a hybrid threat landscape.

Cyber security will begin to be built into advanced technologies by design to keep pace with the speed of IoT convergence and the vulnerabilities that come with it. 

(4) State and state-sponsored cyber groups alike are the new proxy for international relations

Cyber espionage has been going on since the introduction of the Internet, with Russia, China, Iran and North Korea seen as major players. In 2020, we will see a new set of countries adopting the same tactics, techniques and procedures as these superpowers and operating against rivals both inside and outside their national borders.

Mobile cyber espionage will also become a more common threat, as mobile users represent a significant attack surface for organisations that allow employees to use personal devices on company networks.

We will see threat actors perform cross-platform campaigns that leverage both mobile and traditional desktop malware. Recent research discovered nation state-backed mobile cyber espionage activity across ‘The Big 4’, as well as in Vietnam, and there are likely to be more attacks to come. This will create more complexity for Governments and enterprises as they try to attribute these attacks, with more actors and more endpoints in play at a larger scale.


Filed under Risk Xtra

iProov Study: UK public “overwhelmingly unaware” of deepfake threat

A nationwide study conducted by biometric facial authentication specialist iProov has revealed a stark lack of awareness and education around deepfake technology among the UK public, with almost three-quarters (72%) of respondents saying they’ve never even heard of a deepfake video.

Deepfake is a technique for human image synthesis based on Artificial Intelligence. It’s used to combine and superimpose existing images and videos on to source images or videos using a machine learning technique known as a generative adversarial network (GAN). These capabilities allow deepfakes to be used to create convincing fake videos, as well as fake news and malicious hoaxes.
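
The adversarial training behind a GAN can be illustrated at toy scale. The sketch below is an assumption for illustration, not any vendor's system: a one-parameter "generator" tries to fool a logistic "discriminator" about a one-dimensional data distribution, and the two improve against each other; real deepfake systems apply the same idea with deep networks over images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data: samples from N(4, 1). The generator maps noise z ~ N(0, 1) to
# w_g*z + b_g, so it can represent the target exactly; the discriminator is
# a single logistic unit D(x) = sigmoid(w_d*x + b_d).
w_g, b_g = 0.1, 0.0
w_d, b_d = 0.1, 0.0
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(5000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    p_real = sigmoid(w_d * real + b_d)
    p_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    b_d += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend the non-saturating objective log D(fake).
    p_fake = sigmoid(w_d * fake + b_d)
    dx = (1 - p_fake) * w_d          # d log D(x) / dx evaluated at x = fake
    w_g += lr * np.mean(dx * z)
    b_g += lr * np.mean(dx)

print(f"generator output mean ~ {b_g:.2f}")  # drifts towards the real mean of 4
```

The generator never sees the real data directly; it only sees the discriminator's verdicts, which is exactly why GAN output ends up statistically hard to tell apart from the genuine article.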

The research polled 2,000 respondents across the UK to reveal their attitudes towards (and understanding of) deepfake technology. The results highlight a need for awareness and education on the impact of deepfakes which, if not addressed, will have huge implications for personal and professional security.

Commenting on the findings, Andrew Bud (founder and CEO at iProov) said: “Awareness is the first defence against any cyber security threat, as we’ve already seen with attacks like phishing and ransomware. Deepfakes, however, represent a whole new kind of danger to businesses and individuals. Technology also has a big role to play in combating the threat, yet if the vast majority of people in the UK have such little awareness of deepfakes right now, they simply cannot begin to prepare themselves as they need to.”


Underlying societal threat 

Until recently, deepfakes were a nascent concept. Today, the technology behind deepfakes is threatening to undermine the notion of trust in moving images and is becoming increasingly accessible – be it through the creation of fake news or spoofing the identity checks required to log into a bank account.

However, the research has revealed members of the public to be largely unaware of the threats. Even once respondents were provided with the definition of a deepfake video, almost a third (28%) said they believed them to be completely harmless. More than two-thirds of respondents (70%) confessed they didn’t believe they would be able to tell the difference between a deepfake and a real video.

Detecting fraudulent identities

Interestingly, once those surveyed were provided with a definition of a deepfake video, they began to recognise the technology’s mounting threat. In fact, just under two-thirds (65%) of people said that their newfound knowledge of the existence of deepfakes undermined their trust in the Internet.

Notably, consumers went on to cite identity theft as the biggest concern (42%) for how they believed deepfake technology could be misused. Almost three-quarters (72%) of respondents also said they would be far more likely to use an online service with preventative measures in place to mitigate the use of deepfakes.

Despite the specific security concerns raised around identity theft, more than half of all respondents (55%) surprisingly identified social networks as the party most responsible for dealing with deepfake videos.

Bud added: “Taking the fight to this new wave of fraud means that security measures in this new post-truth era simply have to be as creative, sophisticated and fast-moving as the fraudsters. While the adoption of biometric technology to crack down on deepfakes is growing among financial institutions, Governments and large-scale enterprises, the challenge ahead lies in the effective detection of a genuine human presence. It’s a challenge that should not be underestimated.”
