The EmailGPT service contains a prompt injection vulnerability.��The service uses an API service that allows a malicious user to inject a direct prompt and take over the service logic. Attackers can exploit the issue by forcing the AI service to leak the standard hard-coded system prompts and/or execute unwanted prompts.��When engaging with EmailGPT by submitting a malicious prompt that requests h ...