THE PLATFORM
The platform we ship on.
Google's purpose-built agent platform: four pillars, fourteen components across them. Below: what each pillar covers, what we ship on it, and what your team owns when we hand off.
The four pillars
BUILD
Design and ship the agent.
The Build pillar is where the agent gets authored — the prompts, the tools, the model choice, the eval set. Zinch designs and ships the agent inside your repo against the Agent Development Kit (ADK) v1.0 — code your engineers can read end-to-end, not a no-code canvas a vendor operates around. Agent Studio anchors the prompt-and-tool iteration; Agent Garden and Model Garden are how we pick the reference architecture and the model that actually fits the workflow. Every Build engagement closes with a runnable agent in your Git org and an eval set running on every commit, so the next conversation is about production, not about whether the thing works.
Components
- Agent Development Kit (ADK) v1.0: Every engagement ships an ADK Python repo into your Git org. Your engineers read every prompt, every tool, every callback — no proprietary middle layer, no vendor canvas.
- Agent Studio: We use Agent Studio for the prompt-and-tool iteration loop with the workflow operator in the room — the same surface your team uses after handoff.
- Agent Garden: Reference architectures from Agent Garden are the starting point for the build, not the finish line — we adapt the pattern to the workflow your team actually runs.
- Model Garden: Model selection through Model Garden is part of the engineering work, not a footnote — the model that fits the workflow is named in the architecture brief, with eval evidence behind the choice.
What we do
- Author the agent in ADK Python against your repo and your secrets.
- Iterate prompts and tools in Agent Studio with the workflow operator beside us.
- Hand off a runnable agent and an eval set running on every commit.
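What "code your engineers can read end-to-end" looks like in practice: in ADK, tools are typically plain Python functions whose names, type hints, and docstrings are what the model reads when deciding to call them. A minimal sketch of one tool from a hypothetical shipped repo — the function name, parameters, and order-lookup workflow are invented for illustration, not taken from a real engagement:

```python
# Hypothetical ADK-style tool: a plain Python function the agent can call.
# The docstring and type hints are the tool's contract with the model.

def lookup_order_status(order_id: str) -> dict:
    """Return the current status of an order by its ID.

    Args:
        order_id: The customer-facing order identifier, e.g. "ORD-1042".

    Returns:
        A dict with 'order_id', 'status', and 'last_updated' keys.
    """
    # Stubbed lookup; a real tool would query the order system of record.
    fake_db = {
        "ORD-1042": {"status": "shipped", "last_updated": "2025-06-01"},
    }
    record = fake_db.get(order_id)
    if record is None:
        return {"order_id": order_id, "status": "not_found", "last_updated": None}
    return {"order_id": order_id, **record}
```

Registering it with an agent is roughly `Agent(..., tools=[lookup_order_status])` in the ADK Python API — which is the point: the whole tool surface is ordinary code your engineers can diff and review.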

SCALE
Run it across teams and tenants.
The Scale pillar is where the agent stops being one team's project and becomes a service the rest of the organization can call. Zinch operates the agent across teams and tenants on Agent Runtime — autoscale, retry policies, structured tracing, the runtime engineering that takes a working prototype to a production peer. Agent2Agent (A2A) v1.2 is how multi-agent systems talk; Memory Bank holds the shared per-user state across the conversation. The Scale engagement leaves you with a platform your team operates against, not a one-off agent that lives in one engineer's notebook.
Components
- Agent Runtime: Agents land on Agent Runtime with autoscale, retries, and structured tracing into your existing observability stack — no Zinch-hosted runtime in the middle.
- Agent2Agent (A2A) protocol v1.2: Multi-agent systems talk over A2A v1.2. Boundaries between agents are a contract your team can read, not a private protocol we own.
- Memory Bank: Per-user and per-session state lives in Memory Bank, registered against your Identity layer — agents share context across the conversation without leaking it across tenants.
What we do
- Deploy the agent to Agent Runtime in your Google Cloud project, in the region you nominate.
- Wire A2A v1.2 between agents when the workflow needs more than one.
- Stand up Memory Bank against your Identity layer and your tenant boundary.
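The "contract your team can read" in A2A is concrete: agents advertise themselves through an agent card, a small JSON document served at a well-known path that peers fetch to discover skills. A hedged sketch of what a card for a hypothetical claims-triage agent might contain — the field names follow the public A2A spec as we understand it; the agent name, URL, and skill are invented:

```python
import json

# Hypothetical A2A agent card: the JSON document an agent serves (by
# convention at /.well-known/agent.json) so peer agents can discover it.
agent_card = {
    "name": "claims-triage",
    "description": "Routes incoming claims to the right queue.",
    "url": "https://agents.example.com/claims-triage",  # hypothetical endpoint
    "version": "1.2.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "triage-claim",
            "name": "Triage a claim",
            "description": "Classify a claim and return the target queue.",
        }
    ],
}

# The card is plain JSON — reviewable in the same PR as the agent code.
card_json = json.dumps(agent_card, indent=2)
```

Because the boundary is a readable document like this, adding a second agent to the workflow is a contract change your team reviews, not a vendor negotiation.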

GOVERN
Make security and audit a default.
The Govern pillar is where the agent earns the right to handle real data. Zinch ships every agent with the audit envelope from day one — no retrofit, no governance bolt-on once the agent is already in production. Agent Registry is the source of truth for what is running and who owns it. Model Armor enforces policy at the gateway, not after the fact. Identity-aware controls keep the agent inside the tenant boundary the workflow lives behind. The Govern engagement leaves you with an agent your security team and your auditor can read on the same page as your engineers.
Components
- Agent Registry: Every agent we ship is registered in Agent Registry from sprint one — owners, dependencies, policy attachments, and the audit envelope are all the same record.
- Model Armor: Model Armor enforces your policy at the gateway, not after the call returns. PHI, PII, and prompt-injection rules ship with the agent, configured against the policy library your team already runs.
- Gateway: We register agents behind the Gateway with rate limits, circuit breakers, and the audit log routed into your SIEM — the same controls your team's other production services already live behind.
- Identity: Identity-aware controls scope the agent to the tenant, the user, and the role — the agent never sees data the calling user could not have seen on their own.
What we do
- Register every agent in Agent Registry on the first day of the build, not after the production cutover.
- Configure Model Armor policies against your existing policy library, with the eval set covering the policy edges.
- Wire Gateway and Identity against your SSO and your audit pipeline so the agent is a peer of your other production services.
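The identity guarantee above — the agent never sees data the calling user could not — reduces to a pre-query check that every tool call carries the caller's scope and the data layer refuses to widen it. A minimal illustrative sketch; the tenant model, record store, and function names are invented, not Google's Identity API:

```python
# Illustrative sketch of identity-aware scoping: the agent inherits the
# calling user's tenant and cannot read outside it.

class TenantAccessError(Exception):
    """Raised when a read crosses the tenant boundary."""

# Hypothetical records, keyed by tenant.
RECORDS = {
    "tenant-a": {"doc-1": "Q2 renewal terms"},
    "tenant-b": {"doc-9": "pricing addendum"},
}

def read_document(caller_tenant: str, tenant: str, doc_id: str) -> str:
    """Read a document only if the caller belongs to the owning tenant."""
    if caller_tenant != tenant:
        # The agent carries the caller's scope; it cannot escalate it.
        raise TenantAccessError(f"{caller_tenant} cannot read from {tenant}")
    return RECORDS[tenant][doc_id]
```

The design choice this sketch encodes is the one in the bullet list: the check runs before the data is fetched, so there is no window in which the agent holds out-of-scope data.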

OPTIMIZE
Measure, evaluate, improve.
The Optimize pillar answers whether the agent is actually working in production. Zinch measures, evaluates, and improves the agent against the metric the workflow owner cares about — not against a vanity benchmark. Eval harnesses run in CI from sprint one, against a sampled corpus of the real traffic. Observability traces every step into the same stack your team already watches. Agent Analytics dashboards give the operator a live view of queue depth, exception rate, and decision drift. The Optimize engagement closes the loop the agent runs inside, so the next iteration is grounded in evidence the team produced together.
Components
- Evals: Eval harnesses run in CI from sprint one. The sampled-month corpus and the policy-edge cases are part of the build, not a follow-up project.
- Observability: Structured tracing routes every agent step into the observability stack your team already operates — Cloud Trace, your existing log sink, or both.
- Agent Analytics: Agent Analytics dashboards land with the agent, configured against the operator metric (queue depth, exception rate, decision drift) and read by the workflow owner, not just the engineer.
What we do
- Stand up the eval harness against a sampled-month corpus before the agent ships.
- Wire Observability into your existing trace and log sinks so every decision is inspectable.
- Configure Agent Analytics dashboards against the operator metric the workflow owner reads.
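An eval harness "running on every commit" can be as plain as an assertion-based check over the sampled corpus that fails the CI build when the pass rate drops. A minimal sketch — the agent stub, the cases, and the 90% threshold are invented for illustration:

```python
# Minimal CI eval harness sketch: score the agent over a sampled corpus
# and fail the build if the pass rate drops below the agreed threshold.

def agent(prompt: str) -> str:
    """Stand-in for the real agent call; returns a routing decision."""
    return "billing" if "invoice" in prompt.lower() else "support"

# (prompt, expected routing) pairs drawn from the sampled corpus.
EVAL_SET = [
    ("Where is my invoice for May?", "billing"),
    ("The app crashes on login.", "support"),
    ("Invoice total looks wrong.", "billing"),
]

def run_evals(threshold: float = 0.9) -> float:
    """Return the pass rate; raise (failing CI) if it is below threshold."""
    passed = sum(agent(p) == expected for p, expected in EVAL_SET)
    rate = passed / len(EVAL_SET)
    assert rate >= threshold, f"eval pass rate {rate:.0%} below {threshold:.0%}"
    return rate
```

In CI this runs on every commit, so a prompt or model change that regresses the operator metric is caught before it ships, not discovered on the dashboard.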

One workflow. One outcome. Code your team owns.
Ship the first agent in two weeks. See where it leads.
Code
Lives in your Git org, owned from commit one.
Governance
Model Armor and Agent Registry on day one.
Speed
Two weeks to a runnable pilot. Eight to production.
Not ready to talk? Take the 4-min readiness assessment