# Troubleshooting
Common issues and solutions for running the AI DLP Proxy.
> **WARNING**
> Never share the CA private key file (`mitmproxy-ca.pem`) generated by the proxy. Anyone with this key can intercept and decrypt your traffic.
## SSL/TLS Issues
### "Certificate Verify Failed"
This is the most common error. Since the proxy intercepts HTTPS traffic, it signs certificates with its own Certificate Authority (CA). Your client (browser, curl, Python script) must trust this CA.
**Solutions:**

1. **Install the CA Certificate:**
   - The certificate is located at `~/.mitmproxy/mitmproxy-ca-cert.pem` (created after the first run).
   - macOS: Double-click the `.pem` file, add it to the "System" keychain, and set "Trust" to "Always Trust".
   - Linux (Ubuntu/Debian):

     ```bash
     sudo cp ~/.mitmproxy/mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy.crt
     sudo update-ca-certificates
     ```
2. **Bypass Verification (Dev Only):**
   - cURL: use `-k` or `--insecure`:

     ```bash
     curl -k -x http://localhost:8080 ...
     ```

   - Python (Requests):

     ```python
     requests.post(..., verify=False)
     ```

   - OpenAI Python SDK:

     ```python
     import httpx
     client = OpenAI(http_client=httpx.Client(verify=False))
     ```
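A middle ground between installing the CA system-wide and disabling verification entirely is to point individual clients at the mitmproxy CA via environment variables: `REQUESTS_CA_BUNDLE` is honored by the Python `requests` library and `SSL_CERT_FILE` by `httpx` and other OpenSSL-based tools. The path below assumes the default mitmproxy location:

```shell
# Trust the proxy's CA instead of turning verification off.
# Assumes the default mitmproxy certificate path (~/.mitmproxy/).
export REQUESTS_CA_BUNDLE="$HOME/.mitmproxy/mitmproxy-ca-cert.pem"
export SSL_CERT_FILE="$HOME/.mitmproxy/mitmproxy-ca-cert.pem"
```

Unset these variables when you stop routing traffic through the proxy; otherwise direct TLS connections will fail to verify against the mitmproxy-only bundle.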
## Common Errors
### "Address already in use"
**Error:** `OSError: [Errno 48] Address already in use`
**Cause:** The configured port (default 8080 or 9090) is occupied by another process.
**Fix:**
- Check who is using the port: `lsof -i :8080`
- Kill the process or change the port in `config.yaml`.
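The same check can be scripted, e.g. in a pre-start health probe. A minimal standard-library sketch (the `port_is_free` helper name is ours, not part of the proxy):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if no process is accepting connections on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 only when something is already listening.
        return s.connect_ex((host, port)) != 0

if __name__ == "__main__":
    print("8080 free:", port_is_free(8080))
```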
### "Model not found"
**Error:** `OSError: [E050] Can't find model 'en_core_web_lg'`
**Cause:** The spaCy model is not installed in the environment.
**Fix:**

```bash
python -m spacy download en_core_web_lg
# Or for the faster model:
python -m spacy download en_core_web_sm
```

## High Latency

**Symptom:** Requests take > 500 ms.
**Fix:**
> **PERFORMANCE**
> Ensure you are using the `en_core_web_sm` model in production if strict accuracy is not the primary concern.
- Update `config.yaml`:

  ```yaml
  dlp:
    nlp_model: "en_core_web_sm"
  ```

- Use `static_terms_file` for high-frequency keywords to bypass ML analysis.
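The idea behind the static-terms shortcut can be sketched as a plain substring pre-filter that runs before any NLP model is touched. This is an illustrative sketch, not the proxy's actual implementation; the function names and the one-keyword-per-line file format are assumptions:

```python
def load_static_terms(path: str) -> set[str]:
    """Assumed format: one keyword per line; blanks and '#' comments ignored."""
    with open(path, encoding="utf-8") as f:
        return {
            line.strip().lower()
            for line in f
            if line.strip() and not line.lstrip().startswith("#")
        }

def static_term_hits(text: str, terms: set[str]) -> set[str]:
    """Return the static terms found in text.

    A non-empty result lets the proxy block the request immediately,
    skipping the slower spaCy/ML analysis for that payload.
    """
    lowered = text.lower()
    return {term for term in terms if term in lowered}
```

Only payloads with no static-term matches then need to reach the ML pipeline, which is where the latency savings come from.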