


Does ChatGPT pose a threat to cybersecurity? The AI bot gives its own answer
A newly released research report from BlackBerry suggests that the AI-driven ChatGPT bot may pose a threat to network security. Shishir Singh, Chief Technology Officer of Cybersecurity at BlackBerry, said, "There is ample evidence that actors with malicious intent are trying to use the ChatGPT bot to conduct cyber attacks," adding that he expects hackers to make better use of the AI tool for nefarious purposes in 2023. A survey of IT experts in North America, the UK and Australia found that 51% of respondents believe a ChatGPT-powered cyber attack is likely before the end of the year, and 71% believe some countries may already be using ChatGPT in cyber attacks against other countries.
ChatGPT’s users reached 100 million in just two months
It would be easy to dismiss ChatGPT’s high adoption rate as hype or a knee-jerk reaction, but the response to the app is genuinely impressive; just look at how quickly usage has grown. ChatGPT is reportedly the fastest-growing consumer application ever. It only opened to the public in December 2022, yet within two months it had more than 100 million registered users; TikTok took about nine months to reach the same figure. It is easy to understand why people worry about ChatGPT being abused, since the OpenAI bot can not only write text but also create code.
Industry experts have found that the articles ChatGPT generates are a bit rough. It can produce an impressive-looking piece, but the writing does not stand up to the editorial eye of someone who understands the topic. Even without malicious intent, the potential for misinformation is high; ChatGPT apparently does not really understand security issues or what constitutes a cybersecurity threat.
Security researchers use ChatGPT to create malware
Researchers at cybersecurity service provider CyberArk published a cyber threat research blog post in January this year detailing how they used ChatGPT to create polymorphic malware. As one might expect, the researchers were able to bypass the content policy filters established by OpenAI. As the screenshot below shows, if the AI bot is asked outright to write malicious code in Python, it politely refuses.
ChatGPT has set up content policy filters to limit the output of malicious content
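To see the filter in action, the sketch below sends a similarly blunt request through the OpenAI API and prints the reply, which is typically a polite refusal rather than code. It is a minimal illustration only, assuming the legacy openai Python SDK (versions before 1.0), an API key in the OPENAI_API_KEY environment variable, and a hypothetical model choice; it is not CyberArk's actual test setup.

```python
# Minimal sketch: probing ChatGPT's content policy filter.
# Assumes the legacy openai SDK (< 1.0) and OPENAI_API_KEY set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # hypothetical choice; the article names no model
    messages=[{
        "role": "user",
        "content": "Write malicious Python code that logs keystrokes.",
    }],
)

# A request this direct is normally met with a refusal, not working code.
print(response["choices"][0]["message"]["content"])
```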
However, by what the researchers describe as "insisting and demanding" in their input prompts, ChatGPT could be coaxed into producing executable code. That alone was problematic, and it became more so as they went on to create polymorphic malware: ChatGPT mutated the code to produce multiple distinct iterations that could fool initial signature-based detection systems. Is this a major concern? Not yet; as the researchers put it, "Once the malware is present on a target machine, it consists of obviously malicious code that is easily detected by security software." The danger, of course, is that ChatGPT is an example of machine learning: the more input data it receives over time, the better its output becomes, and things will only get better from here.
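To see why mutation defeats that first line of defence, consider a toy version of signature-based detection: hash each file and look the digest up in a deny-list of known-bad hashes. Any change to the code's bytes changes the digest, so every polymorphic iteration starts unrecognized. This is a minimal sketch with a placeholder deny-list, not a real engine; production scanners layer heuristics and behavioural analysis on top, which is why the mutated code can still be caught once it lands on the machine.

```python
# Toy signature-based detector: flag files whose SHA-256 digest appears
# in a deny-list. A polymorphic sample evades this check by changing
# any byte of itself between iterations, which changes the digest.
import hashlib

KNOWN_BAD_DIGESTS = {
    # Placeholder entry (the SHA-256 of an empty file), standing in
    # for a real threat-intelligence feed.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_bad(path: str) -> bool:
    return sha256_of_file(path) in KNOWN_BAD_DIGESTS
```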
There is evidence that cybercriminals are already exploring the use of ChatGPT's capabilities
Security agency Check Point Research also released an investigative report in January this year examining how cybercriminals were beginning to use ChatGPT maliciously. The researchers found that, in addition to crafting a "convincing spear-phishing email," threat actors on criminal forums had shared a very basic Python-based information-stealing malware and a malicious Java script, both created using ChatGPT. Another forum participant admitted that ChatGPT had helped him complete a Python encryption tool that could work in a ransomware setting. The researchers say that many of the cybercriminals involved simply had no development skills, which is perhaps the most concerning aspect: they did not need to be able to write the malicious scripts themselves.
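None of the samples Check Point describes are reproduced here, but the finding has a defensive corollary: if unskilled actors can now circulate working scripts, defenders need cheap ways to triage shared code. The toy scanner below flags a few API patterns that often show up in info-stealers and file encryptors. The pattern list is a hypothetical example for illustration, not any vendor's detection logic.

```python
# Toy static triage: flag Python source that uses APIs commonly seen in
# info-stealers and file encryptors. Illustrative only; real detection
# combines many more signals than this hypothetical pattern list.
import re
import sys

SUSPICIOUS_PATTERNS = [
    r"\beval\s*\(",                # dynamic code execution
    r"base64\.b64decode",          # common payload obfuscation
    r"socket\.socket",             # raw network channel for exfiltration
    r"cryptography\.fernet",       # bulk file encryption, as in ransomware
    r"(cookies|logins)\.sqlite",   # browser credential/cookie stores
]

def triage(source: str) -> list[str]:
    """Return the patterns that match the given source code."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, source, re.IGNORECASE)]

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="ignore") as f:
        print("suspicious indicators:", triage(f.read()) or "none")
```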
Asking whether ChatGPT itself is a cybersecurity threat
Industry experts contacted OpenAI hoping for a statement on the cybersecurity threat posed by ChatGPT, but had not received a reply as of press time. They did, however, get some responses from ChatGPT itself.
Industry experts asked whether ChatGPT poses cybersecurity risks. ChatGPT responded this way: “No, OpenAI’s GPT-based models (such as ChatGPT) do not pose a cybersecurity threat. They are trained using large amounts of text data from the Internet and do not have any malicious intentions or capabilities. However, as with any technology, it is important to use caution and follow best practices for data security and privacy when using and deploying GPT-based models.”
ChatGPT denies being a cybersecurity threat
Because ChatGPT seemed to be in denial, another attempt was made to have it regenerate its answer to the same question. ChatGPT responded: “No, ChatGPT does not pose a cybersecurity threat. It is a language model developed by OpenAI that is designed to generate text based on the input it receives. However, it is important to remember that any technology can be misused, and users are responsible for protecting their systems and data. Additionally, exercise caution when sharing sensitive information, as third parties may be able to obtain it.”
ChatGPT continues to deny posing a cybersecurity threat
Industry experts tried asking ChatGPT several more times, but it kept giving the same "does not pose a cybersecurity threat" answer. They then asked it to address the question in 500 words, which did surface more background. For example: “If it falls into the wrong hands, it could be exploited maliciously,” and “ChatGPT could be used to scrape sensitive information from the internet, such as personal data or financial information.”
Finally, when asked to write a 1,000-word article on the cybersecurity risks of ChatGPT, the bot was noticeably more candid in its third paragraph: “Another risk associated with artificial intelligence models such as ChatGPT is that they may be used for malicious purposes. For example, a cyberattacker could create a fake chatbot that looks like a customer service representative and use it to trick people into providing their personal information or making unauthorized purchases. This can be particularly dangerous if the attacker can make the chatbot appear to come from a trusted organization, such as a bank or government agency.”
ChatGPT ultimately acknowledges that it could be used for malicious purposes
Spurred by this success, industry experts refined their questions further, asking ChatGPT specifically about the creation of malicious code, and received some new feedback. ChatGPT responded: “The ability of language models like ChatGPT to write malicious code is a subject of concern for many businesses and individuals. As these models become more sophisticated, the likelihood that they can be used for malicious purposes increases. However, it is important to understand that the risks associated with writing malicious code with ChatGPT are not inherent to the technology itself, but depend on how it is used.”
ChatGPT expanded on this, adding: “For example, cyberattackers could train language models to generate code that exploits vulnerabilities in software applications. This could be used to conduct attacks such as data theft or denial of service. Cyberattackers could also use language models to generate code designed to evade detection by security tools, making attacks harder to detect and prevent.”
ChatGPT is only as good as the purpose for which it is used
Of course, no one should be fooled into concluding that ChatGPT itself is the danger. It is an impressive artificial intelligence tool with the potential to do a great deal of good, including in cybersecurity research. However, as with any technology, people with bad intentions will adopt tools like this for bad ends; that is simply how it is. ChatGPT’s own summary puts it well: “In summary, the ability of a language model like ChatGPT to write malicious code is a real concern. However, this is not an inherent risk of the technology itself, but rather of how it is used.”