The main deciding factors were the process and the frequency with which each product was released or upgraded by us or our customers.
The on-prem installs had the longest delay because once a release was out there it was harder for us to address issues. Some customers also had a change freeze in place once things had been approved, which was a pain to deal with if we needed to patch something for them.
Products that had a shorter release or update cycle (e.g. the mobile app) had a shorter delay (but still a delay) because any issue could be addressed faster.
The services that were hosted by us had the shortest delay, on the order of days to weeks.
There were obviously exceptions in both directions but we tried to avoid them.
Prioritisation wasn't really an issue - a lot of dependencies were bumped on internal builds first, so we had more time to test and verify before committing to them once they met our stability rules.
Other factors that influenced us:
- Blast radius - a buggy dependency in our desktop/server applications had more chance to cause damage than in our hosted web application, so dependency upgrades rolled out a little slower there.
- Language (more like ergonomics of the language) - updating our C++ deps was a lot more cumbersome than our JS deps.
The harder part, as is often the case, wasn't technical - it was convincing customers to take the new version and getting time with their IT teams to manage the rollout. It got easier over time, but the bureaucracy at some of the clients was slow to change, so I suspect they still face some issues.
In the JVM ecosystem it's quite common to have Dependabot or Renovate automatically create PRs for dependency upgrades within a few hours of a release. If it's manual, it's highly irregular and depends on the company.
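For reference, a minimal sketch of what that setup looks like with Dependabot on a Gradle-based JVM project (the daily interval and root directory here are just illustrative choices, not something prescribed):

```yaml
# .github/dependabot.yml - minimal sketch for a Gradle-based JVM project
version: 2
updates:
  - package-ecosystem: "gradle"   # "maven" is also supported for JVM builds
    directory: "/"                # where the build files live
    schedule:
      interval: "daily"           # check for new releases once a day
```

With that in place, new releases of your dependencies show up as PRs against your default branch, and CI decides whether the bump is safe to merge.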