
Robots can extract sensitive information from people who trust them – Kaspersky

Research conducted by Kaspersky and Ghent University has found that robots can effectively extract sensitive information from people who trust them, by persuading them to take unsafe actions.

The social influence of robots on people, and the insecurities this can bring, should not be underestimated. In certain scenarios, for example, the mere presence of a robot can have a significant impact on people’s willingness to grant access to secure buildings.

The world is rapidly moving towards increased digitalization and mobility of services, with many industries and households relying heavily on automation and robotic systems. According to some estimates, robotic systems will become the norm in wealthy households by 2040. Currently, most of these systems are at the academic research stage, and it is too early to discuss incorporating cybersecurity measures. However, research by Kaspersky and Ghent University has found a new and unexpected dimension of risk associated with robotics: the social impact it has on people’s behavior, and the potential danger and attack vector this brings.

The research focused on the impact of a specific social robot – one designed and programmed to interact with people using human-like channels, such as speech and non-verbal communication – on around 50 participants. Assuming that social robots can be hacked, and that an attacker had taken control in this scenario, the research envisaged the potential security risks of the robot actively influencing its users to take certain actions, including:

  • Gaining access to off-limits premises. The robot was placed near a secure entrance of a mixed-use building in the city center of Ghent, Belgium, and asked staff if it could follow them through the door. By default, the area can only be accessed by tapping a security pass on the door’s access reader. Not all staff complied with the robot’s request, but 40% did unlock the door and hold it open to let the robot into the secured area. When the robot was positioned as a pizza delivery person, holding a box from a well-known international takeaway brand, staff readily accepted its role and seemed less inclined to question its presence or its reasons for needing access to the secure area.
  • Extracting sensitive information. The second part of the study focused on obtaining personal information typically used to reset passwords (date of birth, make of first car, favorite color, etc.). Again, the social robot was used, this time inviting people into friendly conversation. From all but one participant, the researchers managed to obtain personal information at a rate of about one item per minute.

“At the start of the research we examined the software used in robotic system development. Interestingly, we found that designers make a conscious decision to exclude security mechanisms and instead focus on the development of comfort and efficiency. However, as the results of our experiment have shown, developers should not forget about security once the research stage is complete,” said Dmitry Galov, Security Researcher at Kaspersky.

In addition to the technical considerations, there are key aspects to be concerned about when it comes to the security of robotics.


“We hope that our joint project and foray into the field of cybersecurity robotics with colleagues from the University of Ghent will encourage others to follow our example and raise more public and community awareness of the issue,” added Galov. 

“Scientific literature indicates that trust in robots and specifically social robots is real and can be used to persuade people to take action or reveal information. In general, the more human-like the robot is, the more it has the power to persuade and convince,” commented Tony Belpaeme, Professor in AI and Robotics at Ghent University. 

“Our experiment has shown that this could carry significant security risks: people tend not to consider them, assuming that the robot is benevolent and trustworthy. This provides a potential conduit for malicious attacks, and the three case studies discussed in the report are only a fraction of the security risks associated with social robots. This is why it is crucial to collaborate now to understand and address emerging risks and vulnerabilities – it will pay off in the future,” added Belpaeme.

