
@dewittn
Developer & AI Systems Builder
AI Fluency Score
7.8/10
Assessed 3/30/2026
Velocity
Separated from my family by death and adoption during El Salvador's civil war, I was reunited with them at the age of 16. I'm writing a novel and producing a documentary film about my life as one of El Salvador's disappeared children. The book has driven my most technically ambitious AI work, including the editorial infrastructure, the provenance system, and the writing acceleration detailed below.
Professor Claude is backed by a purpose-built MCP server (~5,600 lines of Python) with hybrid vector + keyword semantic search across past conversations, professor feedback turn-pairs, and development notes. Session snapshots give the professor cross-session continuity. The system includes a tamper-evident audit log built on a SQLite hash chain with git-anchored verification. Every conversation is imported and cryptographically verified, so there's no retroactive editing. I built it so I can show literary agents and publishers exactly how I used AI on the book and prove that all of my conversations were editorial in nature. The book is entirely written by me.
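The core of a SQLite hash chain can be sketched in a few lines. This is a minimal illustration of the general technique, not the actual system: each row's hash covers the previous row's hash plus the new entry, so editing any earlier row breaks verification for everything after it. All names here (table, columns, functions) are hypothetical.

```python
import hashlib
import sqlite3

def init_log(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS audit_log ("
        "id INTEGER PRIMARY KEY, entry TEXT NOT NULL, hash TEXT NOT NULL)"
    )

def append_entry(conn, entry):
    # Chain the new entry to the most recent hash (or a fixed genesis value).
    row = conn.execute(
        "SELECT hash FROM audit_log ORDER BY id DESC LIMIT 1"
    ).fetchone()
    prev_hash = row[0] if row else "GENESIS"
    digest = hashlib.sha256((prev_hash + entry).encode()).hexdigest()
    conn.execute("INSERT INTO audit_log (entry, hash) VALUES (?, ?)", (entry, digest))
    return digest

def verify_chain(conn):
    # Recompute every hash in order; any retroactive edit shows up as a mismatch.
    prev_hash = "GENESIS"
    for entry, digest in conn.execute("SELECT entry, hash FROM audit_log ORDER BY id"):
        if digest != hashlib.sha256((prev_hash + entry).encode()).hexdigest():
            return False
        prev_hash = digest
    return True

conn = sqlite3.connect(":memory:")
init_log(conn)
append_entry(conn, "conversation-001 imported")
append_entry(conn, "conversation-002 imported")
```

Anchoring the latest hash in a git commit, as the system described here does, extends the same guarantee outside the database: the chain can't be silently regenerated without the anchor disagreeing.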
Context engineering practice applied across multiple project domains (career archive, novel manuscript, project planning). I designed startup protocols that control what agents load and when, a four-level content hierarchy (always loaded, on trigger, on demand, discoverable), document authority rules for resolving source conflicts, and a two-tier insight system for managing context budget. Shared patterns live in a reusable skill while each domain's context stays intentionally separated. Mixing career positioning with novel craft feedback in the same workspace would dilute both.
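The four-level hierarchy described above could be modeled roughly like this. The level names follow the text; the document paths, triggers, and function names are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class LoadLevel(Enum):
    ALWAYS = 1        # loaded at every session start
    ON_TRIGGER = 2    # loaded when a matching trigger fires
    ON_DEMAND = 3     # loaded only on explicit request
    DISCOVERABLE = 4  # indexed for search, never auto-loaded

@dataclass
class ContextDoc:
    path: str
    level: LoadLevel
    triggers: frozenset = field(default_factory=frozenset)

def startup_context(docs, fired_triggers=frozenset()):
    """Select what the agent loads at startup, protecting the context budget."""
    return [
        d for d in docs
        if d.level is LoadLevel.ALWAYS
        or (d.level is LoadLevel.ON_TRIGGER and fired_triggers & d.triggers)
    ]

docs = [
    ContextDoc("core/voice.md", LoadLevel.ALWAYS),
    ContextDoc("novel/chapter-notes.md", LoadLevel.ON_TRIGGER, frozenset({"novel"})),
    ContextDoc("career/archive.md", LoadLevel.DISCOVERABLE),
]
loaded = startup_context(docs, fired_triggers=frozenset({"novel"}))
```

The point of the tiering is that most documents never reach the model at all: only ALWAYS and triggered documents consume startup context, while everything else waits behind a request or a search.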
Agentic job search pipeline with Docker containerization, multi-layer prompt injection protection, and isolated subagents. A strategy agent determines search keywords through multi-turn self-correction, queries three ATS platforms, and evaluates listings against a binary scoring framework I designed. Prompts are externalized as markdown files so I can iterate on specs without touching the Python code. The pipeline supports both cloud and local inference.
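Two of the ideas above, prompts externalized as markdown and a binary (all-or-nothing) scoring framework, can be sketched as follows. The criteria, field names, and directory layout are hypothetical stand-ins, not the pipeline's actual spec.

```python
from pathlib import Path

def load_prompt(name, prompts_dir="prompts"):
    """Prompts live as markdown files, so specs change without touching code."""
    return Path(prompts_dir, f"{name}.md").read_text()

# Binary scoring: each criterion is pass/fail, with no partial credit.
CRITERIA = {
    "remote_ok": lambda job: job.get("remote", False),
    "uses_python": lambda job: "python" in job.get("stack", "").lower(),
    "senior_level": lambda job: job.get("level") in ("senior", "staff"),
}

def score(job):
    """A listing passes only if every criterion passes."""
    return all(check(job) for check in CRITERIA.values())

job = {"remote": True, "stack": "Python, Postgres", "level": "senior"}
```

Keeping the criteria as named lambdas in one table makes the framework easy to audit and extend, and a failing listing can be explained by listing which checks returned False.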
Feature planning methodology for working with Claude Code. I write feature definitions with acceptance criteria, run them through a planning phase, and hand the spec to the agent to build. I used this system to build itself, defining, planning, and implementing 16+ features through the pipeline.
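The shape of a feature definition moving through that define, plan, build pipeline might look something like this; the fields and the readiness rule are illustrative assumptions, not the actual methodology.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureSpec:
    name: str
    definition: str
    acceptance_criteria: list = field(default_factory=list)
    plan: str = ""

    def ready_to_build(self):
        # Hand the spec to the agent only once it has criteria and a plan.
        return bool(self.acceptance_criteria) and bool(self.plan)

spec = FeatureSpec(
    name="session-snapshots",
    definition="Persist a session summary for cross-session continuity.",
    acceptance_criteria=[
        "Snapshot written on session end",
        "Snapshot loaded at next startup",
    ],
)
spec.plan = "1. Hook session end. 2. Write snapshot. 3. Load at startup."
```

The acceptance criteria double as the checklist the agent's output is judged against, which is what makes the handoff to Claude Code a spec rather than a conversation.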
I delegate execution and keep judgment. For business writing, Claude drafts and I rewrite to my standard. For code, I write specs and hand them to Claude Code to build. For my novel, all the writing is mine. Claude is an editorial partner who gives me real-time feedback on craft, but the words on the page come from me. That's a line I won't compromise on.
I've tested local models (LM Studio, Ollama) against cloud APIs for quality, cost, and multi-turn tool use reliability. Running evals locally taught me that multi-turn tool use is a capability cliff for smaller models. I also use Claude's dispatch feature for recurring agent delegation, including a scouting report for the sports team I coach that autonomously pulls opponent records from the web.
Generated 4/2/2026