Marion Renoux

AI is taking off at an incredible pace. Landing it inside an organization built with and by humans, under market and business constraints, with real stakes, is a discipline of its own. That's what I do: I land AI.

LinkedIn Let's talk
Point of view

AI initiatives don't fail
at takeoff.
They fail at landing.

"You can have the right models, the executive buy-in, and a roadmap that looks great on a slide. And still watch it stall."

Because no one managed the humans, the governance, the politics, and the messy middle between a shiny strategy and organizational reality. That's where AI initiatives stall, vibe-coded pilots proliferate, and LLM-generated 33-page docs stack up without being read.

The technology is incredibly exciting, and many ideas can take off in days instead of months. Organizations buy the technology, unlock the token funds, run the pilots, brief the board — and then hand it all to an org that was never built to catch it. Landing requires something different: the execution layer, the change management, the people infrastructure, the governance at the intersection of innovation and safety. The discipline of actually finishing.

That's where I work. Not on the runway — in the air, making sure it comes down in one piece.

What I've built

Landing AI
at scale

I build the infrastructure that turns AI ambition into enterprise reality. That's been true across my career — from Amazon, where I led DesignOps across Search Experience and Prime Wardrobe (two different problems: one ML, one personalization, same discipline of working backwards from business outcomes), to Unity, where I served as Chief of Staff to the CTO before stepping into enterprise AI strategy. The throughline is consistent: translating what's technically possible into what's operationally real.

At Unity, I own the company-wide AI roadmap across third-party tooling, first-party application development, and AI maturity programs — working across Legal, Security, Finance, DevOps, and IT to ensure AI adoption is governed, compliant, and responsible. The work below is what landing AI at scale actually looks like.

Infrastructure

Token management at scale

Orchestrated token management for 2,000+ developers — observability, monitoring, funding models, and AI coding assistant strategy across the full SDLC.

Governance

Cross-functional AI governance

Built governance aligned across Legal, Security, Finance, and IT — balancing AI innovation speed with compliance, business KPIs, and responsible AI requirements.

Strategy

Company-wide AI roadmap

Developed the AI strategy portfolio spanning third-party tooling, first-party applications, token management, and AI maturity and enablement programs.

Org design

Transformation team + network

Built a centralized team of 8 AI solutions architects and a decentralized network of 12+ AI Business Partners — delivery, training, and enablement across the org.

Training & enablement

Scaling AI readiness under constraints

Built and deployed company-wide AI training program under time constraints using generative AI learning tools and tight stakeholder orchestration. Managed executive alignment while delivering hands-on enablement to thousands of employees in weeks.

Applications

LLM applications for operations

Led delivery of LLM-based applications for Operations, G&A, and Customer Success. Exposed business impact metrics and drove adoption across AI-assisted workflows.

Risk & assessment

Third-party AI governance

Built a comprehensive third-party tooling assessment process covering governance requirements, business impact, and responsible deployment standards, in a rapidly changing landscape.

For AI founders

You're building the next
rocket ship. Let me handle
your launching pad.

You're building in the AI era — moving fast, operating on instinct, and occasionally losing sight of the ground. The tools give you data. What they don't give you is the judgment call, the pushback, and the execution layer to turn what you're building into an org that actually works.

That's the Chief of Staff function. Not admin, not project management (Cowork already does that too well) — the person in the room who helps you get the best out of your founding team, tells you when the runway is shorter than you think, and helps you land it anyway.

01

Founder accountability

The tools don't push back when you rationalize the data, ignore the signal, or disappear into product for three weeks. I do. That's not a feature — it's a role.

02

Diagnosis, not analysis

The value isn't running the report. It's knowing which signals matter for this founder, this team structure, this growth stage — pattern recognition built across engagements, not a single data pull.

03

The human layer

Trust gaps, team dynamics, the exec who's checked out — none of that surfaces in a dashboard. None of it gets fixed by one. Some problems are just human, and they need a human to land them.

04

Landing the roadmap

AI can generate the priorities. Someone still has to decide if they're right, what they mean politically, and how to execute against them in the real org. That's the landing.

Personal projects

I lead teams that build.
I also build.

Not at the scale of what my team ships — these are personal experiments, outside my day job and NDA scope. But staying close to the tools is part of how I stay credible about what AI actually does and doesn't do.

Built with Claude Code

Family Intelligence Hub

A full-stack web app that scrapes Seattle family resources daily, surfaces registration deadlines in one calendar, and generates birthday gift suggestions automatically. Built with Node.js, SQLite, and Claude API. Running on my machine.

Work in Progress

CoS Skills Repo

A structured knowledge base of Chief of Staff frameworks, patterns, and decision tools — built iteratively with AI. The goal is a resource that captures what good CoS execution actually looks like.

Get in touch

Let's talk
about landing.

Enterprise AI transformation. CoS advisory for founders. Both.

Connect with me on LinkedIn