You Can't Reach the Promise of AI-Accelerated Engineering Without Fixing the Geometry Bottleneck

Written by Bradley Rothenberg | CEO and Founder at nTop
Published on April 14, 2026
Engineering has always been a race against time. The tools have changed -- from drafting tables to CAD, from slide rules to software -- but the fundamental constraint, time, hasn't.
The teams that can explore more of the design space faster win. The ones that can't, don't.
The pressure looks different depending on where you sit. In aerospace, program timelines that once ran a decade are now being compressed to three to five years. RFP windows that used to give engineering teams six months now give them sixty days. The organizations that win in this environment are increasingly the ones that can generate credible answers quickly -- not the ones that can eventually build the most sophisticated system. The penalty for being slow has gone up.
In jet engines, the pressure comes from thermodynamics. Fifty years of material stasis means the superalloys are largely the same -- the gains left on the table are in design. Every percentage point of efficiency improvement requires operating at temperatures that push the limits of what the best alloys can survive.
The "needle in a haystack" design exists somewhere in a solution space too large to explore by hand.
Across every industry that builds complex products, the calculus is the same: the ability to explore vast design spaces quickly determines who wins. And right now, the industry is on the verge of a step change. AI is about to make the teams that can move fast move significantly faster -- and the gap between them and everyone else significantly wider.
The Tools Can't Keep Up
Traditional CAD was designed for a specific job -- capturing a known design solution, documenting it, and handing it off to manufacturing. It does that job well. But when you ask it to run a parameter sweep across hundreds of variants, run a design loop overnight without someone in the chair, or feed geometry to an analysis pipeline at scale, it fights you.
The failure mode is predictable. A model that took weeks to build fails when someone changes a sweep angle. A fillet that worked at a sweep of 46° fails to regenerate at 46.5°. An MDO loop that was supposed to run unattended generates only 20% of the possible designs. Teams attempting automated optimization runs report geometry failure rates of 70-80% across parameter sweeps -- meaning the majority of variants require manual intervention to fix the models.
nTop was built to solve this problem. The teams using it today on the toughest projects in aerospace, defense, and turbomachinery are proof that a different approach is possible -- parametric models that don't break, design loops that run unattended, workflows that scale with available compute. What was once a ceiling has become a foundation. Throughput is now the metric that wins: more released designs, higher probability of win (PWin).

Even 60 years ago, McDonnell Douglas engineers understood that more design revisions meant a higher PWin
nTop is the geometry infrastructure for rapid, high-fidelity design exploration and AI-accelerated engineering -- where one engineer can explore thousands of design variants in less time than it used to take to explore 10.
The next question isn't whether this approach works, but rather how far we can take it. Today, setting these processes up requires skilled engineers, deep modeling expertise, and substantial time. That investment makes sense when the program demands it -- and on those programs, it delivers. But there's an entire class of engineering problems that never get this treatment, not because it wouldn't help, but because the bar to entry is still too high.
AI is how we lower that bar.
The Promise of AI in Engineering
AI is showing genuine promise in engineering in three distinct ways. Taken together, they describe a workflow -- from generating the model, to evaluating its performance, to orchestrating the entire process autonomously.
Generative Modeling -- building on what you know
Every engineering organization has accumulated deep design knowledge over decades -- modeling approaches, design patterns, optimization strategies, hard-won lessons from programs that worked and ones that didn't. Most of that knowledge lives in people's heads, in files no one can easily search, or in models that capture the output of an engineering process but not the thinking behind it.
Generative modeling changes that. Train AI on the design artifacts your team has already produced -- the parametric models, the modeling logic, the encoded design intent -- and it can use that foundation to author new models faster. Give it a new set of requirements, and instead of starting from scratch, it draws on everything that came before to propose a starting point capable of exploring the relevant design space. From there it can modify, iterate, and generate variants rapidly.
The implication is significant. Comprehensive design exploration today requires substantial upfront investment -- skilled engineers building robust parametric models before any exploration can begin. That investment has limited advanced design exploration to the highest-stakes programs. Generative modeling lowers that bar. When the cost of building the model drops, the universe of applications where thorough design exploration is worth doing expands -- not just the hardest problems, but all of them.
Physics AI -- faster feedback from every design
Once a model exists, it needs to be evaluated. Traditional simulation is expensive -- hours or days per variant, serial execution, manual setup at every step. That cost forces teams to down-select early, evaluating a handful of candidates instead of the hundreds or thousands that would give them genuine confidence in their direction.
Physics AI changes the evaluation economics. Surrogate models trained on simulation data learn to predict performance across a design space rather than computing it from scratch each time. The result: performance feedback in seconds instead of hours, optimization loops that run in parallel rather than serially, and the ability to explore far more of the design space before committing to a direction.
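As a toy illustration of the surrogate idea (not nTop's or any vendor's API -- the "simulation" here is a stand-in function, and the design parameters are invented), you can fit a cheap model to a sample of expensive simulation runs, then query it across thousands of new variants in one vectorized call:

```python
import numpy as np

# Stand-in for a high-fidelity simulation: expensive in practice, cheap here.
# Maps two illustrative design parameters to a performance value.
def expensive_sim(x):
    return 1.0 + 0.5 * x[0] ** 2 + 0.3 * x[1] ** 2 + 0.2 * x[0] * x[1]

# 1. Sample the design space and run the "simulation" once per sample.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.array([expensive_sim(x) for x in X])

# 2. Fit a quadratic surrogate by least squares over the features
#    [1, x1, x2, x1^2, x2^2, x1*x2].
def features(X):
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Predict performance for thousands of new variants in milliseconds,
#    instead of re-running the simulation for each one.
X_new = rng.uniform(-1, 1, size=(10_000, 2))
y_pred = features(X_new) @ coef
```

In practice the surrogate would be a neural network or Gaussian process trained on real CFD or FEA outputs, but the economics are the same: pay the simulation cost once to build the training set, then evaluate the rest of the design space at near-zero marginal cost.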

nTop generates thousands of valid aircraft geometry variants. Luminary runs CFD on all of them in parallel. The result is the training dataset that makes physics AI possible.
Agentic Engineering -- AI running the workflow
The most expansive application isn't accelerating one step -- it's AI orchestrating across the entire engineering stack autonomously. Not a single prescribed workflow, but any workflow: a change in requirements cascading through system architecture into updated geometry. A manufacturing constraint flagged late propagating back into a design parameter. A simulation result influencing the next iterations of an optimization process.
The common thread isn't the workflow. It's that every one of these scenarios spans multiple tools -- and somewhere in almost every chain, something needs to change in the geometry. The agent has to be able to traverse the stack, determine what needs to change, make the change, and keep moving. Without stopping. Without asking.
That's what makes agentic engineering different from automation. Automation follows a fixed script. Agents respond to whatever the workflow surfaces -- and route accordingly across whatever tools are in the stack.
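The distinction can be sketched in a few lines (all event names and handlers here are hypothetical, purely to illustrate the routing idea): a script executes fixed steps in a fixed order, while an agent dispatches on whatever the workflow surfaces next.

```python
# Hypothetical handlers standing in for actions across an engineering stack.
def update_geometry(event):  return f"geometry updated for {event['detail']}"
def rerun_simulation(event): return f"simulation rerun for {event['detail']}"
def relax_parameter(event):  return f"parameter relaxed for {event['detail']}"

# The agent routes by event type rather than following a prescribed sequence.
ROUTES = {
    "requirement_changed": update_geometry,
    "sim_result_ready": rerun_simulation,
    "mfg_constraint_flagged": relax_parameter,
}

def agent_step(event):
    return ROUTES[event["type"]](event)

# Events arrive in whatever order the workflow produces them.
log = [agent_step(e) for e in [
    {"type": "mfg_constraint_flagged", "detail": "min wall thickness"},
    {"type": "requirement_changed", "detail": "new load case"},
]]
```

A real agent replaces the lookup table with reasoning over the stack's state, but the structural point holds: the control flow is driven by what surfaces, not by a script written in advance.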
In order for any of this to work, some things need to be true
The applications described above are real. The teams pursuing them are serious. But getting from ambition to execution requires four things to be in place. Without all four, the loop breaks somewhere -- and usually breaks early.
1. Geometry that never fails
Every AI workflow in engineering eventually requires a model. Physics AI pipelines need thousands of valid variants to train on. Agentic workflows need to generate and evaluate geometry continuously, without stopping. Generative modeling needs to produce design-ready outputs from new requirements on demand.
None of that is possible if the geometry breaks. A failure rate of 70-80% on parametric variation -- the reality for teams running automated loops on traditional CAD -- means the majority of iterations require an engineer to intervene before the process can continue. At that rate, lights-out execution isn't a choice a team gets to make. It simply isn't an option.
Geometry has to be unconditionally stable across the full parameter space. Not robust enough for most cases. Every case.

Example of nTop's hypersonic demonstrator model flexing parameters
2. Models that machines can learn from and create
A CAD file records the shape that came out of an engineering process. It doesn't record the thinking that produced it -- the parameters, the constraints, the design logic. For an engineer operating the software, that's a workflow. For an AI agent trying to make a parametric change, evaluate a variant, or learn from a dataset, it's a dead end.
For agentic engineering to work, the model has to be traversable and modifiable by something other than a human. For generative modeling to work, AI has to be able to learn from the logic embedded in how previous models were built -- not just what they looked like. That requires a representation that encodes engineering intent as structured, readable, executable information. Not geometry output. The reasoning behind it.
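A minimal sketch of what "traversable and modifiable" might mean (every field name and method here is invented for illustration, not drawn from nTop's format): the model carries its parameters, constraints, and rebuild logic as structured data, so an agent can read a constraint, make a valid change, and regenerate without a human at the keyboard.

```python
from dataclasses import dataclass, field

@dataclass
class ParametricModel:
    # Design intent as data: parameters and their allowed ranges.
    params: dict = field(default_factory=lambda: {"span": 10.0, "sweep_deg": 30.0})
    constraints: dict = field(default_factory=lambda: {"sweep_deg": (0.0, 60.0)})

    def set_param(self, name, value):
        # An agent can check the constraint before committing a change.
        lo, hi = self.constraints.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside allowed range [{lo}, {hi}]")
        self.params[name] = value

    def regenerate(self):
        # Stand-in for geometry regeneration: derives output from parameters.
        return {"planform_area": self.params["span"] * 1.2}

# Machine-driven edit: read intent, change a parameter, rebuild.
m = ParametricModel()
m.set_param("sweep_deg", 46.5)
geom = m.regenerate()
```

The contrast with a B-rep export is the point: a dumb file gives an agent nothing to reason over, whereas a representation like this exposes the parameters, the valid ranges, and the logic that turns intent into shape.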
3. An open ecosystem
Agentic engineering orchestrates across the full engineering stack -- requirements management, geometry, simulation, optimization, manufacturing. No single tool owns the whole workflow. The geometry layer has to connect cleanly to whatever sits upstream and downstream: requirements management, system architectures, Physics AI platforms, CFD and FEA solvers, LLM-based agents.
If the geometry layer becomes a walled garden -- a closed format that other tools have to work around rather than work with -- the orchestration breaks. Open interfaces are a structural requirement for any workflow that spans multiple tools.
4. Compute at scale -- in parallel
Design space exploration at the scale AI makes possible isn't a serial process. Generating one variant, evaluating it, generating the next -- that's the loop that's been failing engineering teams for decades. The value AI adds is parallelism: hundreds or thousands of variants running simultaneously, results returning together, the optimizer working across the full set rather than one point at a time.
The geometry layer has to support that execution model. Models that can be dispatched headlessly across available compute, run in parallel without human management, and return clean outputs at scale. The bottleneck can't shift from geometry generation to geometry execution.
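The execution model can be sketched with standard-library parallelism (the `generate_variant` function is a hypothetical stand-in for a headless geometry job -- in a real stack it would invoke the geometry kernel on a worker):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical headless job: takes a parameter value, returns a clean result.
# In production this would dispatch a geometry build to remote compute.
def generate_variant(sweep_deg):
    return {"sweep_deg": sweep_deg, "ok": True}

# A sweep of 100 variants, dispatched simultaneously rather than one at a time.
sweeps = [30.0 + 0.5 * i for i in range(100)]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(generate_variant, sweeps))
```

The shape of the loop is what matters: the optimizer submits the whole batch, the variants run in parallel without a human managing them, and the results return together -- which only works if every variant regenerates cleanly.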
What we’re building
These four conditions are the engineering spec for what nTop is becoming.
We've spent years solving the geometry problem -- building the implicit modeling foundation that makes reliable, large-scale parametric exploration possible. The teams using nTop today on the hardest programs in aerospace, defense, and turbomachinery are proof that the approach works. But we're not done.
The next chapter is about making that foundation AI-ready: geometry that machines can read and modify, notebooks that encode engineering knowledge in a form AI can learn from, open interfaces that connect to whatever the stack requires, and compute infrastructure that runs it all in parallel at scale.
This is what nTop is being built around.
Our CTO Marc Jacobs will be publishing more on what to expect next from us in the coming weeks -- how we're delivering on each of these, what's shipping and when, and what it means for the engineering workflows your team is building today.

Bradley Rothenberg
CEO and Founder at nTop
Bradley Rothenberg is the CEO and founder of nTop, an engineering design software company based in New York City. Since its founding in 2015, nTop has served the aerospace, automotive, medical, and consumer products industries with engineering software that enables users to design, test, and iterate faster on highly complex parts for production. Bradley has been developing computational design tools for more than 15 years. He actively works to advance the industry, often speaking at industry events around the world, including Develop3DLive, Talk3D, and formnext. He is often quoted in trade publications, interviewed on industry podcasts, and featured in Forbes Magazine. He studied architecture at Pratt Institute in Brooklyn, New York.