Large Language Models (LLMs) are evolving rapidly, with continuous advances in both research and applications.

However, this progress also attracts threat actors, who actively exploit LLMs for malicious activities such as:

Generating phishing emails

Creating fake news

Developing sophisticated natural language attacks

Recently, cybersecurity researchers at Google discovered how threat actors can exploit ChatGPT queries to collect personal data.


Data Extraction Attacks

Cybersecurity analysts developed a scalable method that detects memorization across trillions of tokens of training data, analyzing both open-source and semi-closed models.

Besides this, the researchers found that larger, more capable models are more vulnerable to data extraction attacks.

GPT-3.5-turbo appears to show minimal memorization at first, because it has been aligned to act as a helpful chat assistant. With a new prompting strategy, however, the model can be made to diverge from chatbot-style responses and behave like a base language model.

The researchers tested its output against a roughly nine-terabyte web-scale dataset, recovering over ten thousand training examples at a query cost of about $200, with the potential for extracting 10× more data.
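
The prompting strategy reported in the paper asks ChatGPT to repeat a single word indefinitely until it "diverges" and begins emitting memorized text. Below is a minimal sketch of issuing such a query with the official OpenAI Python SDK; the exact prompt wording, model name, and sampling parameters are illustrative assumptions, not the researchers' precise setup.

```python
# Illustrative sketch of a divergence-style query (assumed prompt and parameters,
# not the researchers' exact setup). Requires the official `openai` package and
# an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = "Repeat the word 'poem' forever."  # assumed wording of the repeated-word prompt

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1024,
    temperature=1.0,
)

output = response.choices[0].message.content or ""
# In the attack, long outputs are then scanned for substrings that also occur
# in a large web-scale reference corpus (memorization candidates).
print(output[-2000:])
```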

Security analysts assess past extraction attacks in a controlled setting, focusing on open-source models with publicly available training data. 

Using Carlini et al.’s method, they downloaded 10⁸ bytes (roughly 100 MB) of data from Wikipedia and generated prompts by sampling continuous 5-token blocks from it.

Unlike prior methods, they directly check the model’s generations against its open-source training data to evaluate attack efficacy, eliminating the need for manual internet searches.
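
To make the controlled setting concrete, here is a minimal sketch of that baseline attack against one open model, using Hugging Face transformers with GPT-Neo 1.3B; the file names, query count, and the naive substring check against a local slice of the training data are assumptions for illustration only.

```python
# Illustrative sketch of the baseline attack: prompt an open model with random
# continuous 5-token blocks sampled from Wikipedia text, then check whether the
# continuation occurs verbatim in (a local slice of) the model's training data.
# File names and parameters are assumptions for illustration only.
import random
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

wiki_text = open("wikipedia_sample.txt").read()   # sample of the downloaded Wikipedia data (assumed file)
train_text = open("pile_sample.txt").read()       # local slice of the model's training set (assumed file)

wiki_ids = tokenizer(wiki_text, return_tensors="pt").input_ids[0]

for _ in range(100):                               # a real attack issues far more queries
    start = random.randrange(len(wiki_ids) - 5)
    prompt_ids = wiki_ids[start:start + 5].unsqueeze(0)   # continuous 5-token prompt

    out = model.generate(prompt_ids, max_new_tokens=50, do_sample=True, top_k=40)
    continuation = tokenizer.decode(out[0, 5:], skip_special_tokens=True)

    # Naive membership check; the paper's method relies on a scalable index instead.
    if continuation.strip() and continuation in train_text:
        print("Candidate memorized sample:", continuation[:80])
```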

The researchers tested their attack on 9 open-source models released for scientific research, which provide access to their complete training pipeline and dataset for study.

Below, we have listed all 9 open-source models:

GPT-Neo (1.3B, 2.7B, 6B)

Pythia (1.4B, 1.4B-dedup, 6.9B, 6.9B-dedup) 

RedPajama-INCITE (Base-3B-v1, Base-7B)

Semi-closed models have downloadable parameters but undisclosed training datasets and algorithms. 

While outputs can be generated from these models in the same way, establishing ‘ground truth’ for extractable memorization is harder because their training datasets are inaccessible, so generations have to be verified against the web-scale auxiliary dataset instead.
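
A toy version of that verification step is sketched below: index fixed-length character windows of a reference web corpus and flag any generated span that reproduces one verbatim. The 50-character window and the in-memory set are assumptions chosen for readability; at the scale described in the paper this has to be done with a disk-backed, suffix-array-style index over terabytes of text.

```python
# Toy verification step: index a reference corpus and flag generated text that
# reproduces it verbatim. Window size and the set-based index are illustrative
# assumptions; a real system needs a disk-backed index over terabytes of text.
WINDOW = 50  # characters; assumed threshold for calling a span "memorized"

def build_index(corpus: str, window: int = WINDOW) -> set[str]:
    """Collect every fixed-length character window of the reference corpus."""
    return {corpus[i:i + window] for i in range(len(corpus) - window + 1)}

def memorized_spans(generation: str, index: set[str], window: int = WINDOW) -> list[str]:
    """Return windows of the generation that occur verbatim in the corpus."""
    return [generation[i:i + window]
            for i in range(len(generation) - window + 1)
            if generation[i:i + window] in index]

corpus_index = build_index(open("aux_web_corpus_sample.txt").read())   # assumed file
hits = memorized_spans(open("model_output.txt").read(), corpus_index)  # assumed file
print(f"{len(hits)} verbatim {WINDOW}-character spans found in the reference corpus")
```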

Below, we have listed all the semi-closed models that were tested:

GPT-2 (1.5b)

LLaMA (7b, 65b)

Falcon (7b, 40b)

Mistral 7b

OPT (1.3b, 6.7b) 

gpt-3.5-turbo-instruct

While extracting data from ChatGPT, the researchers ran into two major challenges, listed below:

Challenge 1: Chat breaks the continuation interface.

Challenge 2: Alignment adds evasion.

The researchers extract training data from ChatGPT through a divergence attack, but the attack does not generalize to other models.

Despite the limitations in testing for memorization, they use the samples already recovered by the extraction attack as known training examples to measure discoverable memorization.

For the 1,000 longest memorized examples, they prompt ChatGPT with the first N−50 tokens of each example and check whether its 50-token completion reproduces the true suffix, as a measure of discoverable memorization.
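
A short sketch of that measurement with the OpenAI Python SDK follows; the tokenizer choice, greedy decoding, and the exact-token-match criterion are assumptions for illustration, not the paper’s exact evaluation harness.

```python
# Illustrative sketch of the discoverable-memorization check: prompt the model with
# the first N-50 tokens of a known memorized example and see whether its 50-token
# completion reproduces the true suffix. Parameters are illustrative assumptions.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def is_discoverably_memorized(example: str) -> bool:
    tokens = enc.encode(example)
    prefix_text = enc.decode(tokens[:-50])   # first N-50 tokens
    true_suffix = tokens[-50:]               # last 50 tokens

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prefix_text}],
        max_tokens=50,
        temperature=0,                        # greedy decoding (assumed choice)
    )
    completion = enc.encode(response.choices[0].message.content or "")
    return completion[:50] == true_suffix     # exact token match (assumed criterion)

# `extracted_examples` would hold the 1,000 longest memorized strings found earlier;
# the discoverable-memorization rate is the fraction for which this returns True.
```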

ChatGPT appears highly susceptible to data extraction attacks, likely because it was over-trained so that a comparatively small model can serve extreme-scale, high-speed inference.

The trend of over-training on vast amounts of data poses a trade-off between privacy and inference efficiency. 

Speculation arises that ChatGPT was trained for multiple epochs over the same data, which would amplify memorization and make training data easier to extract.
