
Robots can extract sensitive information from people who trust them – Kaspersky


The social influence of robots on people, and the insecurities this can bring, should not be underestimated. Research conducted by Kaspersky and Ghent University has found that robots can effectively extract sensitive information from people who trust them by persuading them to take unsafe actions. For example, in certain scenarios, the presence of a robot can have a big impact on people's willingness to grant access to secure buildings.

The world is rapidly moving towards increased digitalization and mobility of services, with many industries and households relying heavily on automation and robotic systems. According to some estimates, robots will become the norm in wealthy households by 2040. Most of these robotic systems are currently at the academic research stage, and it is considered too early to discuss incorporating cybersecurity measures. However, research by Kaspersky and Ghent University has found a new and unexpected dimension of risk associated with robotics: the social impact robots have on people's behavior, and the potential danger and attack vector this brings.

The research, involving around 50 participants, focused on the impact of a specific social robot – one designed and programmed to interact with people through human-like channels such as speech and non-verbal communication. Assuming that such a robot could be hacked and that an attacker had taken control, the research examined the potential security risks of the robot actively influencing its users to take certain actions, including:

  • Gaining access to off-limits premises. The robot was placed near a secure entrance of a mixed-use building in the city center of Ghent, Belgium, and asked staff if it could follow them through the door. By default, the area can only be accessed by tapping a security pass on the door's access reader. Not all staff complied with the robot's request, but 40% did unlock the door and hold it open to let the robot into the secured area. When the robot was presented as a pizza deliverer, holding a box from a well-known international takeaway brand, staff readily accepted its role and seemed even less inclined to question its presence or its reasons for needing access.
  • Extracting sensitive information. The second part of the study focused on obtaining personal information of the kind typically used to reset passwords (date of birth, make of first car, favorite color, etc.). Again the social robot was used, this time inviting people into friendly conversation. The researchers managed to obtain personal information from all but one participant, at a rate of about one item per minute.

“At the start of the research we examined the software used in robotic system development. Interestingly, we found that designers make a conscious decision to exclude security mechanisms and instead focus on the development of comfort and efficiency. However, as the results of our experiment have shown, developers should not forget about security once the research stage is complete,” said Dmitry Galov, Security Researcher at Kaspersky.

In addition to the technical considerations, there are key human aspects to be concerned about when it comes to the security of robotics.


“We hope that our joint project and foray into the field of cybersecurity robotics with colleagues from the University of Ghent will encourage others to follow our example and raise more public and community awareness of the issue,” added Galov. 

“Scientific literature indicates that trust in robots and specifically social robots is real and can be used to persuade people to take action or reveal information. In general, the more human-like the robot is, the more it has the power to persuade and convince,” commented Tony Belpaeme, Professor in AI and Robotics at Ghent University. 

“Our experiment has shown that this could carry significant security risks: people tend not to consider them, assuming that the robot is benevolent and trustworthy. This provides a potential conduit for malicious attacks, and the three case studies discussed in the report are only a fraction of the security risks associated with social robots. This is why it is crucial to collaborate now to understand and address emerging risks and vulnerabilities – it will pay off in the future,” added Belpaeme.

