Agentic AI Security Control
Today's agentic systems combine LLM reasoning, tool use, persistent memory, and multi-agent delegation — a surface area that traditional AppSec was never designed to cover. Our work formalises control objectives for agent runtimes: blast-radius bounds for tool-calls, integrity guarantees on memory, and attestation across agent-to-agent hand-offs.
Current investigations include defenses against prompt-injection chains via RAG, model inversion in fine-tuned domain models, data poisoning in continual-learning pipelines, and evasion attacks on inference platforms (vLLM, Hugging Face, Slurm-scheduled jobs).