Zstd does reach LZMA compression ratios at its high levels, but compression time also slows down to LZMA territory. That was clearly planned from the start, so the same format covers both high-speed online use and slow offline compression (unlike, say, brotli). The official cap on levels can likewise be explained by the lack of further gains on most inputs in the developers' tests.
Distribution packages contain binary and mixed data, which may be less compressible. For text and mostly-text inputs, I suppose some old-style LZ-based tools can still produce an archive roughly 5% smaller (and still unpack fast); other compression algorithms can certainly squeeze it much further, but they have symmetric time requirements (decompression roughly as slow as compression). I was worried about the latter kind being introduced as a replacement.
bzip2 is a pig that has no place being in the same sentence as lzo and gzip. Its niche was maximum compression no matter the speed, but it hasn't been relevant even there for a long time.
Yet tools still need to support bzip2 because bzip2 archives are still out there and are still being produced. So we can't get rid of libbz2 anytime soon - same for liblzma.
You should use --ultra and a compression level of 19 or higher. E.g. Arch used 20; higher levels do exist, but at that point they were already yielding less than a 1% improvement.
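For reference, here is a minimal sketch of driving one of those high levels through the libzstd API instead of the CLI (the buffer and the choice of level 20 are just illustrative; on the command line that level would require --ultra):

    /* build with: cc zstd_hi.c -lzstd */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zstd.h>

    int main(void) {
        const char *src = "example payload, stand-in for package data";
        size_t srcSize = strlen(src);

        int level = 20;                     /* high level; clamp to what this build supports */
        if (level > ZSTD_maxCLevel())
            level = ZSTD_maxCLevel();

        size_t bound = ZSTD_compressBound(srcSize);   /* worst-case output size */
        void *dst = malloc(bound);
        if (!dst) return 1;

        size_t written = ZSTD_compress(dst, bound, src, srcSize, level);
        if (ZSTD_isError(written)) {
            fprintf(stderr, "compression failed: %s\n", ZSTD_getErrorName(written));
            free(dst);
            return 1;
        }
        printf("%zu -> %zu bytes at level %d\n", srcSize, written, level);
        free(dst);
        return 0;
    }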
It does beat xz for these tasks. Just not with the default settings, as those are indeed tuned for the lzo-to-gzip/bzip2 range.