Member of Technical Staff - Foundation Model Architecture & AI Infrastructure

Vinci4D.ai

Software Engineering, Other Engineering, IT, Data Science
Palo Alto, CA, USA
USD 100K–220K / year
Posted on Feb 24, 2026

Location

Palo Alto HQ

Employment Type

Full time

Location Type

Hybrid

Department

Engineering

Compensation

  • $100K – $220K

Vinci | Full-Time | Remote / Hybrid

The Mission

At Vinci, we are building the operator intelligence infrastructure that modern hardware programs rely on daily. We have already proven that a single foundation model works out of the box across industries on realistic production workloads.

  • Trained on 45TB+ of structured physics data

  • Running billion-voxel inference in production

  • Deployed inside Tier-1 semiconductor and hardware environments

  • Operating across multiple physical scales and operator regimes

This is not a research prototype. This is production infrastructure. Now we are scaling deployment at industrial magnitude:

  • Increase simulation throughput by two orders of magnitude

  • Move from billion-voxel to trillion-voxel domains

  • Expand operator coverage across nonlinear regimes

  • Support global, multi-entity deployment across Tier-1 ecosystems

Our ambition is not to become a frontier AI lab. Our ambition is to become the default operator intelligence layer that hardware companies run on.

The Operator Frontier

Today, our unified model already operates across a subset of partial differential equations in real industrial environments. The next phase is expanding that unified architecture across operators, including:

  • Maxwell’s equations

  • Elasticity

  • Plasticity

  • Navier–Stokes

  • Nonlinear constitutive systems

  • Coupled multiphysics interactions

We are not building separate models per equation. We are evolving a single operator foundation model that generalizes across industries, physical scales, and conditioning regimes, and that scales in deployment volume.

What You Will Own

This role is about AI architecture and systems engineering, not low-level GPU kernel work. You will help define and scale the core operator intelligence layer.

Evolve the Foundation Architecture

  • Design and refine transformer variants for structured spatial domains

  • Explore sparse and locality-aware attention mechanisms

  • Build hierarchical attention across multi-resolution fields

  • Develop graph-transformer systems for multi-entity interactions

  • Improve modeling depth across nonlinear operator regimes

This is architectural ownership.
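As a toy illustration of the locality-aware attention mentioned above, here is a minimal NumPy sketch of single-head attention restricted to a local window on a 1-D token sequence. The function name, the identity Q/K/V projections, and the 1-D layout are all illustrative assumptions, not Vinci's implementation; a real spatial model would use 3-D neighborhoods and learned projections.

```python
import numpy as np

def local_attention(x, window):
    """Single-head attention where each token attends only to tokens
    within `window` positions of itself (a 1-D locality mask)."""
    n, d = x.shape
    q, k, v = x, x, x  # identity projections keep the sketch minimal
    scores = q @ k.T / np.sqrt(d)
    # Locality mask: positions farther apart than `window` are excluded.
    idx = np.arange(n)
    scores[np.abs(idx[:, None] - idx[None, :]) > window] = -np.inf
    # Row-wise softmax; masked entries contribute zero weight.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v
```

The point of the sketch is the cost structure: with a fixed window, each token touches O(window) neighbors instead of O(n), which is what makes attention on large structured spatial domains tractable at all.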

Scale Training & Continuous Learning

  • Expand distributed training beyond 45TB-scale datasets

  • Improve generalization across heterogeneous operator distributions

  • Design scalable data and curriculum strategies

  • Maintain reproducibility and determinism across distributed systems

  • Build feedback loops from deployed production environments

The system must grow in capability without fragmenting in design.
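One small ingredient of the reproducibility requirement above can be sketched as deriving every worker's data order from a single global seed, then fingerprinting the resulting schedule so reruns can be byte-compared. The function names and seed scheme here are hypothetical, not Vinci's APIs.

```python
import hashlib
import random

def worker_batch_order(global_seed: int, rank: int, num_samples: int):
    """Derive a per-rank RNG from one global seed so every rerun of a
    distributed job shuffles its shard of data identically."""
    rng = random.Random(f"{global_seed}:{rank}")
    order = list(range(num_samples))
    rng.shuffle(order)
    return order

def run_checksum(order):
    """Fingerprint a schedule so reruns can be verified byte-for-byte."""
    return hashlib.sha256(",".join(map(str, order)).encode()).hexdigest()
```

Checksumming schedules (and, by extension, gradients and weights) is what turns "it looks the same" into a testable determinism guarantee across distributed runs.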

Architect Trillion-Scale Inference

Billion-voxel inference runs today. You will help design systems that:

  • Scale to trillion-voxel domains

  • Use sparse and hierarchical computation effectively

  • Balance memory, compute, and communication

  • Maintain production-grade stability and determinism

Throughput and reliability matter equally.
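A back-of-envelope calculation shows why sparse and hierarchical computation is unavoidable at trillion-voxel scale: even aggressive cubic patching leaves sequence lengths far beyond dense attention. The patch sizes below are illustrative, not Vinci's actual configuration.

```python
def tokens_after_patching(voxels: int, patch_edge: int) -> int:
    """Token count when each cubic patch of patch_edge**3 voxels
    becomes one token (ignoring boundary remainders)."""
    return voxels // patch_edge ** 3

trillion = 10 ** 12
for edge in (8, 16, 32):
    print(f"patch {edge}^3 -> {tokens_after_patching(trillion, edge):,} tokens")
```

Even at a 32^3 patch, roughly 3 x 10^7 tokens remain, so dense pairwise attention would need on the order of 10^15 score entries; locality and hierarchy are what prune that to something memory, compute, and communication budgets can carry.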

Ship at Industrial Scale

Our models already run inside Tier-1 hardware programs. You will:

  • Ship expanded operator capabilities into production

  • Increase simulations per day by 100×

  • Support global, multi-entity deployment

  • Maintain robustness under diverse industrial workloads

Success is measured by adoption, throughput, and reliability, not leaderboard metrics.

What We’re Looking For

Deep experience in:

  • Large-scale foundation model architecture

  • Transformer variants (sparse, hierarchical, graph-based)

  • Distributed training systems

  • Production ML system design

  • Scaling structured datasets

  • Writing clean, maintainable, high-quality code

You think in terms of:

  • Architectural generalization

  • Stability under nonlinear regimes

  • Communication vs computation tradeoffs

  • Deterministic distributed execution

  • Designing systems that become durable infrastructure

You’ve built AI systems that run in production, not just experiments.

Engineering Expectations

  • Strong software engineering fundamentals

  • Clean abstractions and scalable code design

  • Experience with modern ML stacks (e.g., PyTorch and distributed training ecosystems)

  • Strong CI, regression testing, and validation discipline

  • Comfort evolving core model infrastructure

This role is about building infrastructure that lasts.

Why Vinci

  • Single model already deployed across industries

  • 45TB+ structured training data

  • Billion-voxel inference in production

  • Tier-1 customers operating on real hardware workflows

  • High ownership at Series A stage

  • Opportunity to define a foundational abstraction layer early

We are building something that hardware companies will depend on daily. If you want to define and scale the operator intelligence layer that industry runs on, this role was built for you.