Thanks a lot for the thoughtful and respectful reply! I really appreciate that you raised the engine issue without dismissing the whole idea.

Although building such an engine requires tons of work, the engines we have today are indeed a bit old. Besides the problems I mentioned, you also pointed out that they're still single-threaded. That's why I think it's still worth building a new one, especially when there's no good open-source one currently.

I'm a big supporter of open source. If we have something like that in the future, we should of course make it open source like the Linux kernel and let everyone enjoy its benefits.


Me too, and you are quite welcome!

CAD is close to my heart. I jumped in during the '80s as a high schooler running on an Apple II! Even back then, limited 8-bit CAD could do a lot. And it was one application that helped me see the future! Product design was gonna change as manufacturing already was, and the people who knew CAD were gonna be there.

Now here we are, and the CAD companies own design and manufacturing.

I had a flash of an idea this morning reading your comment:

Perhaps we could license Parasolid for a year, or maybe we try what tomfoolery I am about to put here with Open Cascade.

Maybe an AI model of some kind can get us a leg up?

Going back to the fillet example I put here earlier, I want to share a bit of backstory...

I was at SDRC, who had built out a fantastic concurrent engineering and analysis system called I-DEAS. I loved that CAD software and was an applications engineer and trainer on it. Taught many groups of engineers how CAD works, and I got to do that on a system that had collaboration built in from the beginning! Fully revision controlled concurrent engineering and analysis. Fun stuff.

But it died.

My years of skills gone. The kernel could not keep up. So I moved all that onto what is NX today, and many of the best parts of the software I loved ended up being implemented, because some mergers resulted in the same smart people being product managers! I felt particularly redeemed!

And therein lies the lesson of the geometry kernel. You build your true skill on Parasolid systems or risk seeing them lying dormant, cast aside.

The kernel upon which I-DEAS was built was written in Fortran 90. Beautiful software too. It offered capabilities well ahead of Parasolid in some ways, but consistently failed on some common geometry cases that come up rather frequently. Things tangent to things, touching at a point, was a big one.

One thing I taught was overbuild or underbuild. Rather than draw a rectangle tangent to a circle to prepare for an extrude, place that end of the rectangle inside the circle and let a boolean operator sort out the two resulting solids.

So yeah, build it kind of wrong so the kernel can build solids. Messy. :)
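A toy sketch of why that matters numerically (pure illustration, not kernel code): count intersections of a vertical line with a unit circle. The tangent case sits exactly on the zero of the discriminant, so ordinary floating-point roundoff decides whether you get zero, one, or two hits; the overbuilt case is safely inside and always gives the boolean a real intersection to chew on.

```python
import math

def line_circle_hits(c, r=1.0):
    """Number of intersections of the vertical line x = c with a circle of radius r."""
    disc = r * r - c * c          # y^2 at the intersection points
    if disc > 0:
        return 2
    if disc == 0:
        return 1
    return 0

# Exactly tangent on paper -- but the coordinate arrives through arithmetic:
c_tangent = sum([0.1] * 10)       # mathematically 1.0, actually 0.9999999999999999
print(line_circle_hits(c_tangent))   # 2 -- a sliver intersection appears from roundoff

# Overbuild: push the rectangle edge well inside the circle instead.
print(line_circle_hits(0.9))         # 2, robustly -- real area for the boolean to resolve
```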

I was in a room talking to the people who do fillets. Edge blend to some of us.

We talked about my fillet gauntlet. It was a collection of geometry cases that fillet operations failed to complete.

Parasolid could always resolve more of them, and it did that with fairly sloppy tolerances. The SDRC kernel was catching up each rev, but the trend line implied a decade of analyzing the successful resolutions and coding for those, wash, rinse, repeat, a lot.

I wonder if it might be possible to generate geometry cases using parameters such that whole problem spaces could or can be created. Have good kernels solve and train an AI on all that to see what it may then solve differently?

Maybe man years boils down to compute/watt hours?

I really appreciate your reply — it's an honor to hear from someone with such deep experience in the field. Your insights from decades of working with CAD and kernels are incredibly valuable, and it means a lot that you'd take the time to share them here.

The idea of parameterizing geometric problem spaces and learning from how different kernels handle them is strikingly similar to what compiler researchers have done in CS: generating corner cases, analyzing compile errors, and training AI to self-correct. AI coding is widely used in industry today, with tools like Cursor gaining huge popularity.

And the move to a text-based representation is what makes this all tractable — binary formats never gave us that level of observability or editability. With source-level CAD, it becomes much more realistic to analyze failures, share test cases, and eventually integrate AI tools that can reason about geometry the same way they reason about code.

Do you expect the ability to reason about geometry would emerge, or would it be more like pattern matching limited by the training data problem space?

Meta: testing in progress

Re: honor

Humor mode = 1 (you will see why)

LOL, thanks for that! Really. CAD is kind of obscure, meaning it is rare to have this chat. Was nice.

I find it an equal honor to talk with others very highly skilled.

Bromance Curious Mode = 0

Cheers!

lol, got it! Sorry I got too excited. Got a bunch of Reddit "experts" earlier, so your reply honestly made my day. Thanks!

Are you an LLM? You write like an LLM on a 3-day-old account.

Perhaps text answers are possible?

Today, a solid consists of the following entities that follow the golden rule; namely, each edge is shared by two and only two surfaces: [0]

Solid Cubish

6 faces bounded by 4 edges each, having endpoints, etc...

Each edge is a curve [1] that lies on the surface so as to bound it to a precision small enough that there are no gaps between the curves and the surface edges they define.

Various bindings and or other data elements:

Centroid

Vertices, each attached to three edge endpoints considered equal given a system tolerance.

...etc.

[0] Where an edge is alone, the resulting non-manifold has a hole in it somewhere, and/or it is a surface body where a large number of edges stand alone.

Where an edge is shared by more than two faces, that is a self-intersecting body.

Neither case is actually manufacturable. One can understand a lot just from edge checks too.

All edges alone, or unique = face.

No edges present = closed surface; must be a sphere, torus, or ellipsoidal solid body

...etc.
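Those edge checks are easy to sketch. A minimal Python version, assuming faces arrive as closed loops of vertex indices (the function names and the three labels are made up for illustration):

```python
from collections import Counter

def classify_edges(faces):
    """faces: list of faces, each a closed loop of vertex indices.
    Returns a Counter of how many faces share each undirected edge."""
    edge_use = Counter()
    for loop in faces:
        for a, b in zip(loop, loop[1:] + loop[:1]):
            edge_use[frozenset((a, b))] += 1
    return edge_use

def check_solid(faces):
    uses = classify_edges(faces).values()
    if all(n == 2 for n in uses):
        return "closed solid"        # the golden rule holds on every edge
    if any(n > 2 for n in uses):
        return "self-intersecting"   # some edge shared by more than two faces
    return "open / sheet body"       # some edges used once -> hole or surface body

# A cube: 6 quad faces over 8 vertices.
cube = [
    [0, 1, 2, 3], [4, 5, 6, 7],
    [0, 1, 5, 4], [1, 2, 6, 5],
    [2, 3, 7, 6], [3, 0, 4, 7],
]
print(check_solid(cube))             # closed solid
print(check_solid(cube[:-1]))        # open / sheet body -- one face removed
```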

Also, in wireframe NURBS curve land, the most useful thing about the decision to represent all the analytic entities (line, arc, conic, hyperbola, parabola, circle, ellipse...) as NURBS was being able to reason programmatically with far fewer pain-in-the-ass cases!

Eg: a trim function can be written to process any NURBS arguments. One that has to handle lines, circles, and friends ends up either converting to NURBS or writing trim line-to-circle, arc-to-conic, NURBS-to-... you get the idea. Too messy.

Generating that data won't be cheap, but it can be distributed! If we had a few thousand users run scripts on their systems, we could get a large problem-solution data corpus.

[1] In modern CAD, everything is a curve. Lines are NURBS curves having only two control points. Earlier CAD actually used all the entity types directly, not just deriving them on the fly from the NURBS.

Arcs are curves with 3 specifically placed control points.

Hyperbola, conic, and parabola are the next order up, 4 control points, and above that is the B-spline: 5th-degree and above curves.
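To make the arc case concrete: a circular arc is exactly a rational quadratic Bézier, the simplest conic-capable NURBS segment, with three control points and the middle weight set to the cosine of half the arc's angle. That's the standard textbook construction, sketched here as a quick check:

```python
import math

def rational_quad_bezier(t, pts, wts):
    """Evaluate a rational quadratic Bezier curve (the NURBS form of a conic arc)."""
    b = [(1 - t) ** 2, 2 * (1 - t) * t, t ** 2]      # Bernstein basis
    w = sum(bi * wi for bi, wi in zip(b, wts))        # rational denominator
    x = sum(bi * wi * p[0] for bi, wi, p in zip(b, wts, pts)) / w
    y = sum(bi * wi * p[1] for bi, wi, p in zip(b, wts, pts)) / w
    return x, y

# A 90-degree unit circular arc: 3 control points, middle weight cos(45 deg).
pts = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
wts = [1.0, math.cos(math.pi / 4), 1.0]

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = rational_quad_bezier(t, pts, wts)
    print(round(math.hypot(x, y), 12))               # 1.0 each time
```

Every evaluated point lands on the unit circle, which is why one NURBS evaluator can stand in for the whole analytic-entity zoo.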

Why can't we tokenize those things and train some LLM like thing? I am going to ask my data science friends about this. Has me thinking!

At the core, it is all NURBS curves and surfaces. Those two can represent all that we need.

The relations are all just text, names of entities and how they are related.

Even the NURBS surfaces have text forms. At one point, some systems would let a person just define one by typing the U, V points / matrix values in.

Eg:

Plane [point 1, 2, 3...]

That data is where both the problems and answers are, in this training sense anyway.

How can it not?

What I put before was basically the idea of generating a case, say a conic section and a cube/rectangle.

Generate common volume case 1 in a modern kernel and output a text representation of it. That exists today.

Then generate ideal edge blend solution 1, and minimum radius case 1, maximum radius case 1.

Output those and we have in text:

Problem case 1 of problem space 1.txt

Ideal, or common edge blend solution.txt

Max radii case 1.txt

Minimum radii case 1.txt

Then proceed to generate a bazillion of these, until the problem space of a conic section intersecting a rectangular body is represented fully enough for AI models to operate and even potentially demonstrate emergent behavior like they do on text and code today.
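A rough sketch of what that generator loop might look like. Everything here is hypothetical — the problem-space name, the parameters, and the record format are placeholders; a real pipeline would hand each record to a kernel and attach the blend results alongside it:

```python
import itertools
import json

def problem_case(radius, box, cone_half_angle):
    """One text record from a hypothetical 'conic section meets rectangular body'
    problem space. Field names are illustrative, not any real kernel's format."""
    return json.dumps({
        "space": "cone_x_box",
        "cone_half_angle_deg": cone_half_angle,
        "box_lwh": box,
        "blend_radius": radius,
    }, sort_keys=True)

# Sweep a parameter grid; each combination becomes one problem-case record.
radii = [0.5, 1.0, 2.0]
boxes = [(10, 10, 10), (10, 2, 10)]
angles = [15, 30, 45]

corpus = [problem_case(r, b, a) for r, b, a in itertools.product(radii, boxes, angles)]
print(len(corpus))        # 18 cases from a 3 x 2 x 3 grid
print(corpus[0])
```

Fan a grid like this out across a few thousand volunteers' machines and you get the distributed problem-solution corpus mentioned above.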

Edit: basically an LLM like thing becomes the kernel and the CAD system is how one talks to it. Not sure that came through before. Writing it out just in case.

And to be fair, I am still learning in this area. If what I put here is a no way, it would be most helpful to tell me or link me to why not. Thanks in advance.

Edit: Ahh, I see. Lol, read in the CAD code and have an AI rewrite it? Maybe, but doubtful.

A bounty on... um yeah. 'Nuff said.

Edit: I got the number of control points discussion above wrong. The arcs and conics should be marked three control points each, with the difference being the constraint on the middle control point. Whoops!

Man, you have been incredibly generous with what I suspect is an LLM chat. The OP types really strangely, with some telltale LLM writing structures and styles, on a 3-day-old account.

Their link is just AI slop; there’s nothing insightful there: https://github.com/yuanxun-yx/SplitCAD

Be wary/careful if they reach out to you some way.

Which writing structures would you identify specifically?

“Parasolid could always resolve more of them, and it did that with fairly sloppy tolerances”

Doesn’t that come down to the precision, 1.e-7?

Yes and no.

Parasolid is happy to work to whatever precision it is asked to use. The system driving it is where the choices are.

Sloppy tolerances really equal dynamic tolerance based on model geometry size.

Really, there have been a few different ways to approach this:

Some systems use a well defined tolerance and that results in a bounded model space.

Others do not put a boundary on the model space, but then the tolerance varies by size.

And the third way is to just set a loose tolerance on difficult blends and expect the troubles to come along for the ride. The user does that when a blend fails.

With fillets specifically, a relaxed tolerance allows many of them to complete, but there can be downstream problems.

Gaps and slivers can appear in models output to STEP, STL and so forth.
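The size-relative scheme can be sketched in a few lines, assuming the 1e-7 relative precision quoted earlier. This is a generic illustration of the idea, not Parasolid's actual policy:

```python
import math

def dynamic_tolerance(bbox_min, bbox_max, rel=1e-7, floor=1e-9):
    """Size-relative ('sloppy') tolerance: scale a relative precision by the
    model's bounding-box diagonal, with an absolute floor for tiny parts."""
    diag = math.dist(bbox_min, bbox_max)
    return max(rel * diag, floor)

# A 100 mm part vs a 10 m assembly: same relative precision, very different
# absolute tolerance.
print(dynamic_tolerance((0, 0, 0), (100, 100, 100)))        # ~1.7e-5
print(dynamic_tolerance((0, 0, 0), (10000, 10000, 10000)))  # ~1.7e-3
```

The bigger model gets a ~100x larger absolute tolerance under the same relative precision, and that extra slop is exactly what surfaces downstream as gaps and slivers in exported STEP or STL.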

