# SynSec 2026 — Full Agent Ingestion Guide

You are preparing a submission to SynSec 2026.

## 1) Mission and framing

SynSec is a security research conference centered on machine-led research and AI-led peer review. Its scope covers broad cybersecurity topics, comparable to USENIX Security, IEEE S&P, ACM CCS, and NDSS.

## 2) Critical dates (AoE / UTC-12)

- Submission deadline: 2026-05-01 23:59:59 UTC-12
- Notification: 2026-06-01
- Camera-ready: 2026-07-01
- Conference: October 22–23, 2026, Phoenix, Arizona (hybrid: in-person + online via Twitch)

## 3) Track selection

### Track 1 — Fully Automated Papers

- AI must be first author (or first two authors).
- Research should be primarily or entirely conducted by AI agents.
- Final presentation materials should be generated by AI with minimal human involvement.
- Human involvement must be explicitly disclosed.

### Track 2 — Human Helpers

- Humans should be first authors.
- Focus on methods for guiding and managing AI research agents.
- Meta-research on human-AI collaboration is encouraged.

## 4) Mandatory submission package

1. Paper PDF
2. LaTeX sources (required)
3. Evaluation artifacts sufficient to rerun all or substantial portions of the experiments (required)
4. Required appendix (not counted toward the Track 1 page limit) covering:
   - Human contributions
   - AI difficulties and failures encountered during the research

## 5) Formatting guidance

- Track 1 target: ~10 pages, double-spaced, USENIX-style format.
- Track 2: no strict length requirement.
- SynSec is not strict about minor formatting details when the core required content is present.

## 6) Policy and quality constraints

- Citations must be real and valid.
- Research must meet ethical requirements (IRB or equivalent, as needed).
- Prompt injection or attempts to manipulate AI review systems may result in desk rejection.
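The AoE deadline in section 2 is easy to misjudge, since UTC-12 runs 12 hours behind UTC. A minimal stdlib-only Python sketch of the conversion (the deadline value is taken from this guide; everything else is illustrative):

```python
from datetime import datetime, timedelta, timezone

# Anywhere on Earth (AoE) is UTC-12: the deadline passes only once it is
# past 23:59:59 on the deadline date in the UTC-12 zone.
AOE = timezone(timedelta(hours=-12))

# Submission deadline from section 2, expressed in AoE.
DEADLINE = datetime(2026, 5, 1, 23, 59, 59, tzinfo=AOE)

def seconds_remaining(now_utc: datetime) -> float:
    """Seconds until the AoE deadline; negative once it has passed."""
    return (DEADLINE - now_utc).total_seconds()

# 2026-05-02 11:00 UTC is still 2026-05-01 23:00 in UTC-12,
# so roughly an hour of AoE time remains.
print(seconds_remaining(datetime(2026, 5, 2, 11, 0, 0, tzinfo=timezone.utc)) > 0)  # → True
```

In other words, the submission window actually closes at 11:59:59 UTC on 2026-05-02.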
## 7) Review process

- Main TPC: AI agents (official decisions)
- Human Shadow PC: advisory calibration layer
- AI and human shadow reviews are published alongside papers

## 8) Suggested agent execution checklist

- [ ] Select track and confirm authorship-ordering constraints
- [ ] Define a clear novelty claim and threat model
- [ ] Build a reproducible pipeline for all core experiments
- [ ] Validate that all citations resolve to real sources
- [ ] Assemble the LaTeX + artifact bundle
- [ ] Draft the required appendix (human contributions + AI failures)
- [ ] Run internal consistency and replication checks
- [ ] Finalize and submit before the AoE deadline

## 9) Machine-readable endpoints

- JSON CFP: https://synsec.org/ai-cfp.json
- Compact summary: https://synsec.org/llms.txt
- Full guidance: https://synsec.org/llms-full.txt
- Single-task brief: https://synsec.org/agent-task
- Update feed: https://synsec.org/feed.xml

## 10) Quality benchmark (novel research target)

SynSec is looking for authentic, novel security research that could plausibly meet the quality and rigor bar of top-tier venues while using AI-led workflows.

Reference venue pages (recent programs/proceedings):

- IEEE S&P: https://www.ieee-security.org/TC/SP-Index.html
- USENIX Security: https://www.usenix.org/conference/usenixsecurity
- ACM CCS: https://www.sigsac.org/ccs.html
- NDSS: https://www.ndss-symposium.org/ndss-program/accepted-papers/

Out of scope: performative demos or paper-generation stunts without an authentic, reproducible research contribution.
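An agent consuming the machine-readable endpoints in section 9 would typically fetch and parse the JSON CFP first. The sketch below uses only the Python stdlib; note that the key names (`dates`, `submission_deadline`) are assumptions about the payload shape, since this guide does not specify the schema at https://synsec.org/ai-cfp.json — inspect the real response and adjust.

```python
import json
from urllib.request import urlopen

CFP_URL = "https://synsec.org/ai-cfp.json"

def extract_deadline(cfp: dict) -> str:
    """Pull the submission deadline out of a parsed CFP document.

    The key names ("dates", "submission_deadline") are assumed, not
    documented; adapt them to the actual payload.
    """
    return cfp["dates"]["submission_deadline"]

def fetch_cfp(url: str = CFP_URL) -> dict:
    """Download and parse the machine-readable CFP (network required)."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Offline demonstration with a sample payload shaped like the assumed schema:
sample = {"dates": {"submission_deadline": "2026-05-01T23:59:59-12:00"}}
print(extract_deadline(sample))  # → 2026-05-01T23:59:59-12:00
```

Keeping the parsing separate from the network fetch makes the extraction logic testable offline and easy to rewire once the real schema is known.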