So if the grid wants to be at 400 kV, and achieving 400 kV under particular generation conditions requires 1500 MVAr of reactive power absorption by the grid (I made up that number), and the grid operator is relying on 220 kV conventional generators to collectively have 1000 MVAr of absorption available under those conditions, then something needs to communicate that need to those generators so that they actually absorb those 1000 MVAr. And if the OLTCs fool the control algorithm into causing those generators to absorb only 400 MVAr, then there's a mismatch, and that mismatch doesn't go away just because the OLTCs are supposed to be slow.
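To make the bookkeeping concrete, here's the same made-up arithmetic as a toy script (the 500 MVAr of "other sources" is an extra invented placeholder so the planned numbers balance on paper):

    # Toy reactive-power bookkeeping with the made-up numbers above.
    # All values in MVAr of absorption.
    required_absorption = 1500       # what the grid needs to hold 400 kV
    other_sources = 500              # invented: shunt reactors, other units, etc.
    expected_from_generators = 1000  # what the operator counts on from 220 kV units
    actual_from_generators = 400     # what they deliver once OLTCs hide the overvoltage

    planned_deficit = required_absorption - other_sources - expected_from_generators
    actual_deficit = required_absorption - other_sources - actual_from_generators
    print(f"planned deficit: {planned_deficit} MVAr")  # 0: balances on paper
    print(f"actual deficit:  {actual_deficit} MVAr")   # 600: shows up as further voltage rise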
If, as the writeup seems to suggest, the grid design also requires the OLTCs to operate quickly under large voltage fluctuations because the secondary side cannot tolerate the same fractional voltage swing that the primary side is specified to tolerate, then I would not want to be the person signing off on the grid being stable. (Writing the simulator could be fun, though!) Maybe the idea is that, if the primary voltage is stable at 10% above nominal, then the OLTCs are intended to settle at a position that holds the secondary at 5% above nominal, and that in turn is intended to result in the correct amount of reactive power absorption?
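If someone did write that simulator, the core loop of the +10%/+5% hypothesis might look something like this toy sketch (tap step size, deadband, and the droop constant are all invented):

    # Toy sketch: primary stuck at +10%, OLTC stepping the secondary toward a
    # +5% target, generator absorption following a simple voltage droop on the
    # secondary. Watch the absorption fall as the taps move.
    v_primary = 1.10          # per-unit, held constant for this toy
    tap_ratio = 1.00          # ideal OLTC ratio, stepped in 1.25% increments
    tap_step = 0.0125
    v_target, deadband = 1.05, 0.01
    droop_mvar_per_pu = 2000  # made-up: MVAr absorbed per pu of secondary overvoltage

    for t in range(20):       # each iteration = one OLTC time-delay interval
        v_secondary = v_primary / tap_ratio
        q_absorbed = droop_mvar_per_pu * max(v_secondary - 1.0, 0.0)
        print(f"t={t:2d} tap={tap_ratio:.4f} Vsec={v_secondary:.4f} Q={q_absorbed:6.0f} MVAr")
        if v_secondary > v_target + deadband:
            tap_ratio += tap_step     # tap up to lower the secondary voltage
        elif v_secondary < v_target - deadband:
            tap_ratio -= tap_step
        else:
            break                     # inside the deadband: OLTC stops moving

Even in this crude version, the generator ends up absorbing about half of what it did before the taps moved, which is the mismatch described above.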
If I were designing this thing from scratch, I would want an actual communication channel by which facilities that can adjust their reactive power can be commanded to do so independently of the voltage at the point at which they’re connected. And I would want a carefully considered decentralized algorithm to use these controls which, as a first pass, would take input from the primary side at the relevant substations. And then I would want to extend a similar protocol to most or all of the little solar generators at customer sites (not to mention the larger solar facilities that don’t dynamically control reactive power at all in Spain) because they, collectively, can quickly supply or absorb large amounts of reactive power on demand. (Large facilities would use fiber. Small facilities would use digital signals over the power lines or, maybe, grudgingly, the Internet. We really don’t want a situation where the grid cannot start up without customer sites having Internet access.)
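Roughly the kind of message and first-pass allocation I'm imagining (field names and the proportional split are illustrative, not any real protocol):

    # An explicit Q-setpoint command, decoupled from the local voltage
    # measurement, plus a deliberately naive first-pass dispatch.
    from dataclasses import dataclass

    @dataclass
    class QSetpointCommand:
        facility_id: str
        q_mvar: float         # signed: negative = absorb, positive = inject
        valid_seconds: float  # fall back to local droop control when this expires
        sequence: int         # monotonically increasing, to reject stale commands

    def allocate(total_q_mvar, capabilities):
        """Split a system-level Q request among facilities in proportion to
        each one's available capability (MVAr)."""
        total_cap = sum(capabilities.values())
        return {
            fid: QSetpointCommand(fid, total_q_mvar * cap / total_cap,
                                  valid_seconds=10.0, sequence=0)
            for fid, cap in capabilities.items()
        }

    # e.g. asking three made-up facilities for 1000 MVAr of absorption:
    for cmd in allocate(-1000, {"plant_a": 300, "plant_b": 500, "solar_c": 200}).values():
        print(cmd)

The expiry field is the part I'd care most about: if the communication channel dies, every facility should degrade gracefully back to local voltage-based control rather than holding a stale setpoint.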
Or I would dream of a grid that’s primarily DC with AC islands where the DC portions don’t care about reactive power or frequency at all and merely need to control voltage and power flow.
See Part C here (e.g., SOs having the ability to control generator setpoints): https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:...
And page 21 of the original long PDF mentions the use of Operating Procedure 7.4 (Dynamic Voltage Control), and it very vaguely mentions programming of the RRTT (which seems to include the 7.4 schedule) both the day before the failure and the day of the failure, but I didn't see anything about the operator reprogramming the RRTT during the failure to control voltage.
It seems to me (and this is not any sort of control theory analysis) that, if the grid voltage is too high (within the specified range, but high enough that tap changers must operate to avoid disconnecting generators) and additional reactive power absorption is needed, then the grid ought to react by operating the tap changers (because it's necessary) and by somehow instructing the generators to absorb additional reactive power despite the operation of the tap changers. And I see plenty of discussion about the tap changers in the big PDF, as well as plenty of discussion of data acquired via SCADA links, but I don't see anything about adjusting the reactive power schedules to compensate for the operation of the tap changers, or about the use of any sort of real-time SCADA control to adjust reactive power.
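The compensation I'd expect somewhere in that loop is roughly this (my guess at the logic, not anything from the report; the function, the droop model, and all constants are invented):

    # When an OLTC tap change hides part of the primary-side overvoltage from a
    # generator's AVR, bump that generator's reactive-power schedule by the
    # absorption the tap change would otherwise shed.
    def compensated_q_setpoint(scheduled_q_mvar, tap_ratio_before, tap_ratio_after,
                               v_primary_pu, droop_mvar_per_pu):
        # Secondary voltage the AVR saw before and after the tap change.
        v_sec_before = v_primary_pu / tap_ratio_before
        v_sec_after = v_primary_pu / tap_ratio_after
        # Absorption the droop would shed now that the AVR sees a lower voltage.
        lost_absorption = droop_mvar_per_pu * (v_sec_before - v_sec_after)
        # Keep the generator absorbing what the schedule intended
        # (more negative = absorb more).
        return scheduled_q_mvar - lost_absorption

    # e.g. a tap step from 1.0375 to 1.05 with the primary at 1.10 pu:
    print(compensated_q_setpoint(-150, 1.0375, 1.05, 1.10, 2000))  # about -175 MVAr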