This was not really true. Those terminals were often extremely expensive compared to "off the shelf" PCs of the time. They required decent CPUs and memory (this was before hardware acceleration), as well as (for the time) decent networking hardware to drive them at scale for larger institutions. On top of that, they were usually connected to, and had to drive, very high-resolution monitors, which weren't cheap either (anecdotally, the first time I "surfed" the web was at my mom's work at a telco lab on HP-UX machines in ~1993-94; when we later got the internet at home, I hated doing it on the 800x600 monitor we had).
As you alluded to, what it did provide was a central way to administer software and reduce licensing costs; pre-2000 that software was almost all commercial, and companies were loath to buy multiple copies of commercial compilers/MATLAB/etc. (and the software vendors took a while to update their licensing models, too). In those days sysadmins often did things by hand, and user management alone was far easier on a central mainframe/server. It also allowed some vendor interoperability, as you mentioned.
"Dumb" text terminals were also the way that things were already done, so they just continued on that model with a GUI until the benefits of local computing became so much more economical. In large orgs, this usually was around when windows started to become prevalent over the course of the 1990s (sometimes with X-servers or terminal clients on windows to replace the physical terminals).
Your X server (e.g. an X terminal) could display applications running on a variety of other vendors' hardware and operating systems. The specification enabled this interoperability. Apps running on SunOS could display on VAX workstations, and vice versa (as long as you had TCP/IP installed!).
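To make that concrete: from the application's point of view, the client/server split is just an environment variable. The program connects to whatever X server DISPLAY names, over TCP/IP if that's a remote machine. Here's a minimal sketch in Python; the hostname "xterm1" and the xclock client are only illustrative, and the remote server has to permit the connection (xhost/xauth).

    import os
    import subprocess

    # Point the client at a remote X server, e.g. an X terminal named "xterm1"
    # (hypothetical hostname), display 0. The app runs here; it draws there.
    env = dict(os.environ, DISPLAY="xterm1:0")

    # Launch any X client; xclock is just a stand-in for a real application.
    subprocess.Popen(["xclock"], env=env)

In the 90s setup being described, that DISPLAY would point back at the X terminal on the user's desk while the compiler or MATLAB session ran on the shared server in the machine room.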
The advantage X terminals had was that they were relatively inexpensive to buy and operate. Most did not require much management; however, the CPU cost moved into the computer room, and you needed a pretty fast network to get reasonable performance.