Active research · 2026

Toward AI systems that are secure, frugal, and governable as they evolve.

Vigilio is an independent research practice studying the control plane of agentic AI: how to constrain autonomous agents under adversarial conditions, how experience-driven memory can replace brute-force token spend, and how meta-control frameworks can keep self-modifying systems aligned over time.

Founded
2025 · Taiwan
Practice
Independent research lab
Focus
Agentic AI security · Evolutionary control
Lineage
UC Berkeley · OWASP · Armorize / Proofpoint
Research Areas

Three threads of work, one underlying question:
how do we keep increasingly autonomous AI systems safe, efficient, and answerable?

RA · 01

Agentic AI Security Control

Threat Modeling · Adversarial ML · Tool-use Containment

Today's agentic systems combine LLM reasoning, tool use, persistent memory, and multi-agent delegation — a surface area that traditional AppSec was never designed to cover. Our work formalises control objectives for agent runtimes: blast-radius bounds for tool-calls, integrity guarantees on memory, and attestation across agent-to-agent hand-offs.
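As an illustration of what a blast-radius bound can look like in practice, here is a minimal sketch of a gate a runtime might place in front of tool dispatch. The names (`ToolCall`, `ToolPolicy`, `admit`) are hypothetical, not a published Vigilio interface.

```python
# Minimal sketch: bound the blast radius of agent tool calls before dispatch.
# ToolCall, ToolPolicy, and admit() are illustrative names, not a real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolCall:
    tool: str        # e.g. "http_get", "db_write"
    target: str      # resource the call touches
    mutating: bool   # whether the call changes external state


@dataclass
class ToolPolicy:
    allowed_tools: frozenset     # closed set of callable tools
    writable_prefixes: tuple     # where mutations are allowed to land
    max_mutations: int           # per-episode blast-radius budget
    mutations_used: int = 0

    def admit(self, call: ToolCall) -> bool:
        """Admit a call only if it stays inside the declared blast radius."""
        if call.tool not in self.allowed_tools:
            return False
        if call.mutating:
            if self.mutations_used >= self.max_mutations:
                return False
            if not call.target.startswith(self.writable_prefixes):
                return False
            self.mutations_used += 1
        return True


policy = ToolPolicy(
    allowed_tools=frozenset({"http_get", "db_write"}),
    writable_prefixes=("scratch/",),
    max_mutations=3,
)
assert policy.admit(ToolCall("http_get", "https://example.com", mutating=False))
assert not policy.admit(ToolCall("db_write", "prod/users", mutating=True))
```

The point of the sketch is that the bound is enforced before dispatch and is stateful: the mutation budget is spent per episode, so a single successful injection cannot be converted into unbounded writes.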

Current investigations include defenses against prompt-injection chains delivered through RAG, model inversion in fine-tuned domain models, data poisoning in continual-learning pipelines, and evasion attacks on inference platforms (vLLM, Hugging Face, Slurm-scheduled jobs).

RA · 02

Experience-Driven Token Economy

Memory Architectures · Inference Cost · Continual Learning

LLM cost today scales with context length; intelligence does not. We study how to replace recomputation with accumulated experience: structured episodic memory, distilled procedural skills, and verifiable retrieval that lets an agent answer a recurring class of questions without re-paying for the reasoning each time.
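A minimal sketch of the reuse loop, assuming a hypothetical `EpisodicMemory` keyed by a normalised task signature; retrieval verification, which the real problem hinges on, is reduced to a stored flag here:

```python
# Sketch of experience-driven reuse: serve recurring question classes from
# verified episodic memory instead of re-running the reasoning chain.
# EpisodicMemory and answer() are illustrative names, not a real system.
import hashlib


class EpisodicMemory:
    def __init__(self):
        self._store = {}  # task signature -> (answer, verified)

    @staticmethod
    def signature(task: str) -> str:
        """Collapse a task into its recurrence class (trivially, here)."""
        return hashlib.sha256(task.strip().lower().encode()).hexdigest()

    def recall(self, task):
        entry = self._store.get(self.signature(task))
        return entry[0] if entry and entry[1] else None

    def record(self, task, answer, verified):
        self._store[self.signature(task)] = (answer, verified)


def answer(task, memory, llm_call):
    cached = memory.recall(task)
    if cached is not None:
        return cached, 0                         # zero marginal token spend
    result, tokens = llm_call(task)
    memory.record(task, result, verified=True)   # verification elided
    return result, tokens


memory = EpisodicMemory()
fake_llm = lambda task: (f"answer to: {task}", 1200)  # stand-in model call
_, first_cost = answer("What ports does the proxy expose?", memory, fake_llm)
_, second_cost = answer("what ports does the proxy expose? ", memory, fake_llm)
assert (first_cost, second_cost) == (1200, 0)   # the second decision is free
```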

The goal is a measurable reduction in tokens-per-decision on production workloads, without sacrificing factuality — a prerequisite for any agentic system that must run continuously inside an enterprise budget.
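To make the metric concrete, a worked example with hypothetical numbers (not measured results):

```python
# tokens-per-decision = total tokens spent / decisions served.
# Illustrative arithmetic only; no production figures are implied.
decisions = 100
baseline = 120_000 / decisions            # every decision pays full reasoning
with_reuse = (120_000 * 0.3) / decisions  # 70% served from memory at ~0 tokens
assert (baseline, with_reuse) == (1200.0, 360.0)
```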

RA · 03

Meta-Control of AI System Evolution

Governance · Self-Modifying Systems · Alignment

Modern AI stacks rewrite themselves: weights are updated, prompts are rewritten by other prompts, agents spawn agents. Single-layer policy is insufficient. We are developing a meta-control approach that treats the AI system itself as the object of governance — explicit invariants on the trajectory of change, with audit trails that survive across model swaps and platform migrations.
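A minimal sketch of the shape of such a framework, with hypothetical invariants and a hash-chained audit trail; this illustrates the idea, not the framework itself:

```python
# Sketch: proposed system changes are checked against explicit invariants,
# then appended to a hash-chained audit trail that outlives any one model
# or platform. violates() and AuditTrail are illustrative names only.
import hashlib
import json
import time


def violates(change: dict) -> list:
    """Invariants on the trajectory of change, not on a single state."""
    problems = []
    if change.get("disables_logging"):
        problems.append("audit continuity must be preserved")
    if change.get("privilege_delta", 0) > 0 and not change.get("approved_by"):
        problems.append("privilege escalation requires named approval")
    return problems


class AuditTrail:
    def __init__(self):
        self.entries = []

    def append(self, change: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps(change, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append(
            {"ts": time.time(), "change": change, "prev": prev, "hash": digest}
        )
        return digest


trail = AuditTrail()
proposal = {"kind": "model_swap", "from": "m-v1", "to": "m-v2",
            "privilege_delta": 0}
if not violates(proposal):
    trail.append(proposal)  # the model swap itself becomes an audited event
```

Because each entry commits to its predecessor's hash, a trail like this can be carried across model swaps and platform migrations and still be checked for tampering after the fact.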

The framing draws on classical control theory, software supply-chain integrity, and threat modeling adapted for systems whose own behaviour is the deployment artefact.

Lab

From theory to
operational control.

On-premises LLM security governance, from policy authoring to multi-node deployment — built for enterprises where data never leaves the perimeter.

The Vigilio DLP Policy Studio is our first reference implementation: a control plane that translates AI threat models into enforceable, auditable rules across distributed inference nodes — without requiring data to touch an external API. A minimal lifecycle sketch follows the feature summaries below.

Policy Authoring

Natural-language rule input with structured YAML output. Conflict detection before deployment.

Policy Authoring UI
Multi-Node Deploy

Push governance rules to distributed proxy nodes in one operation. Per-node rollback with full git-style history.

Multi-Node Deploy UI
Audit & Compliance

Immutable audit log with prompt hashes, user attribution, and node-level action records. Export-ready for ISMS review.

Audit & Compliance UI
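The lifecycle sketch referenced above compresses the three capabilities into one illustration: a YAML-shaped rule, a pre-deployment conflict check, per-node versioned deploys with rollback, and a prompt-hashed audit record. Names and schema are assumptions for the sketch, not the Policy Studio's actual formats.

```python
# Illustrative policy lifecycle, not the Policy Studio schema.
import hashlib

rule_a = {"id": "dlp-001", "match": "ssn", "action": "block"}
rule_b = {"id": "dlp-002", "match": "ssn", "action": "allow"}


def conflicts(rules):
    """Flag rules whose match overlaps but whose actions disagree."""
    first_action, found = {}, []
    for r in rules:
        if r["match"] in first_action and first_action[r["match"]] != r["action"]:
            found.append((r["id"], r["match"]))
        first_action.setdefault(r["match"], r["action"])
    return found


class Node:
    """One distributed proxy node with a git-style linear rule history."""
    def __init__(self, name):
        self.name, self.history = name, []

    def deploy(self, rules):
        self.history.append(list(rules))

    def rollback(self):
        if len(self.history) > 1:
            self.history.pop()       # per-node rollback to the prior version
        return self.history[-1]


def audit_record(user, prompt, node, action):
    """Immutable-log entry: user attribution, node, action, prompt hash."""
    return {"user": user, "node": node, "action": action,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest()}


assert conflicts([rule_a, rule_b]) == [("dlp-002", "ssn")]  # caught pre-deploy
node = Node("proxy-01")
node.deploy([rule_a])
node.deploy([rule_a, rule_b])
assert node.rollback() == [rule_a]
```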
Live UI · GitHub repo releasing soon
Approach

Research practice over product roadmap.

We work in the gap between security research and ML systems engineering — a place where empirical evidence is rare and deadlines are hostile to rigour. Our method is intentionally slower, with a small set of commitments.

P · 01

Threat models before tooling.

Every artefact begins with an explicit, falsifiable threat model. We do not endorse a defense we have not first tried to break ourselves.

P · 02

Independence over scale.

Findings are published without vendor pressure. When work touches a specific platform, we say so.

P · 03

End-to-end, not slide-deep.

Architecture matters because attackers compose primitives across layers. Our reviews follow data, models, and identity from training corpus to inference response.

Selected Notes & Disclosures

Working notes, technical disclosures, and prior art.

People

Lab of one, work of many years.

2025 — present
Head of AI Research & Product — Vigilio
Building novel intelligence-modelling math and algorithms for future AI; designing secure AI infrastructure for LLM-enabled workflows; conducting rigorous adversarial evaluation of frontier models and reporting on distillation provenance.
2014 — 2024
Principal Quantitative Researcher — MFT/HFT venture
Mathematics, physics, and in-house deep-learning quantitative models. Architected a low-latency trading stack from scratch (>100K LOC), including risk-aversion and data-poisoning prevention models.
2012 — 2013
Tech Lead — Yahoo, Inc. (Global Media BU)
Represented the BU at weekly risk councils; led security training that cut vulnerability MTTR by 10×. Yahoo Global Spot Award.
2006 — 2012
CTO & SVP, Product Management — Armorize Technologies (acq. Proofpoint)
Conceived HackAlert™, the first cloud malware-detection SaaS; pivoted the company, secured board approval, and grew revenue from $0 to $5M+ ARR across Taiwan, the U.S., and Europe. Lectured for the National Security Bureau, Ministry of Foreign Affairs, and major operators.
2006
Visiting Scholar, EECS — UC Berkeley (J-1, US Gov.)
Affiliations, Service & Training
Founding Vice Chair, OWASP Taiwan Chapter · UC Berkeley EECS, Visiting Scholar (2006) · DEMO Conference Alumni (2009) · Stanford Machine Learning (A+) · EPFL, Functional Programming in Scala (Distinction) · FLOLAC '10, NTU · Certified Scrum Master
Contact

Considered correspondence is welcomed.

For research collaborations, technical reviews, advisory engagements, or responsible-disclosure conversations, please reach out. We reply with intent.