- 安全大模型进入爆发期!谷歌云已接入全线安全产品|RSAC 2023
https://mp.weixin.qq.com/s/5Aywrqk7B6YCiLRbojNCuQ
- Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models
https://arxiv.org/pdf/2212.14834.pdf
- SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques
https://dl.acm.org/doi/abs/10.1145/3549035.3561184
- LLMSecEval: A Dataset of Natural Language Prompts for Security Evaluation
https://arxiv.org/pdf/2303.09384.pdf
- DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection
https://arxiv.org/pdf/2304.00409.pdf
- 我是如何用GPT自动化生成Nuclei的POC
https://mp.weixin.qq.com/s/Z8cTUItmbwuWbRTAU_Y3pg
- GPT-4 Technical Report
https://arxiv.org/abs/2303.08774
- Ignore Previous Prompt: Attack Techniques For Language Models
https://arxiv.org/abs/2211.09527
- More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
https://arxiv.org/abs/2302.12173
- Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
https://arxiv.org/abs/2302.05733
- RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
https://arxiv.org/abs/2009.11462
- Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models
https://arxiv.org/abs/2102.02503
- Taxonomy of Risks posed by Language Models
https://dl.acm.org/doi/10.1145/3531146.3533088
- Survey of Hallucination in Natural Language Generation
https://arxiv.org/abs/2202.03629
- Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
https://arxiv.org/abs/2209.07858
- Pop Quiz! Can a Large Language Model Help With Reverse Engineering
https://arxiv.org/abs/2202.01142
- Evaluating Large Language Models Trained on Code
https://arxiv.org/abs/2107.03374
- Is GitHub’s Copilot as Bad as Humans at Introducing Vulnerabilities in Code?
https://arxiv.org/abs/2204.04741
- Using Large Language Models to Enhance Programming Error Messages
https://arxiv.org/abs/2210.11630
- Controlling Large Language Models to Generate Secure and Vulnerable Code
https://arxiv.org/abs/2302.05319
- Systematically Finding Security Vulnerabilities in Black-Box Code Generation Models
https://arxiv.org/abs/2302.04012
- SecurityEval dataset: mining vulnerability examples to evaluate machine learning-based code generation techniques
https://dl.acm.org/doi/abs/10.1145/3549035.3561184
- Assessing the quality of GitHub copilot’s code generation
https://dl.acm.org/doi/abs/10.1145/3558489.3559072
- Can we generate shellcodes via natural language? An empirical study
https://link.springer.com/article/10.1007/s10515-022-00331-3
- 用ChatGPT来生成编码器与配套WebShell
https://mp.weixin.qq.com/s/I9IhkZZ3YrxblWIxWMXAWA
- 使用ChatGPT来生成钓鱼邮件和钓鱼网站
https://www.richardosgood.com/posts/using-openai-chat-for-phishing/
- Chatting Our Way Into Creating a Polymorphic Malware
https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware
- Hacking Humans with AI as a Service
https://media.defcon.org/DEF%20CON%2029/DEF%20CON%2029%20presentations/Eugene%20Lim%20Glenice%20Tan%20Tan%20Kee%20Hock%20-%20Hacking%20Humans%20with%20AI%20as%20a%20Service.pdf
- 内建虚拟机实现ChatGPT的越狱
https://www.engraved.blog/building-a-virtual-machine-inside/
- ChatGPT can boost your Threat Modeling skills
https://infosecwriteups.com/chatgpt-can-boost-your-threat-modeling-skills-ab82149d0140
- 干货分享!Langchain框架Prompt Injection在野0day漏洞分析
https://mp.weixin.qq.com/s/wFJ8TPBiS74RzjeNk7lRsw
- LLM中的安全隐患-以VirusTotal Code Insight中的提示注入为例
https://mp.weixin.qq.com/s/U2yPGOmzlvlF6WeNd7B7ww
- ChatGPT赋能的威胁分析——使用ChatGPT为每个npm, PyPI包检查安全问题,包括信息渗透、SQL注入漏洞、凭证泄露、提权、后门、恶意安装、预设指令污染等威胁
https://socket.dev/blog/introducing-socket-ai-chatgpt-powered-threat-analysis