Tuna-Fish:
The problem was that on x86, existing code already mostly used instructions belonging to the fast subset, while on 68k it didn't. So while you could evolve the instruction set towards a saner design (and the 68060 was well on the way there), only new code would benefit from it.
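To make "the fast subset" concrete, here's the classic x86 example as a sketch (exact cycle counts vary by implementation, so take it as illustrative only): on the Pentium, the microcoded ENTER instruction was much slower than the simple sequence compilers had long been emitting anyway.

    ; CISC-era stack-frame setup, microcoded and slow on the Pentium:
    enter   16, 0          ; allocate a frame with 16 bytes of locals

    ; the equivalent "fast subset" sequence compilers actually emitted:
    push    ebp            ; save the caller's frame pointer
    mov     ebp, esp       ; establish the new frame
    sub     esp, 16        ; allocate 16 bytes of locals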
And of course that's exactly what they did with ColdFire - rounding off the inconvenient corners of the ISA to produce CPUs with lower power requirements that could run at higher clock speeds.
They did it with the 68030 before ColdFire. They discarded a number of things (e.g. addressing modes) that seemed like good ideas for the <=68020 but didn't end up being used in practice.
On the m68k, the "CISC-y-ness" is in the many, many addressing modes, whereas x86, in that particular aspect of the architecture, has always been rather "RISC-y" (read: rather limited compared to other CISC architectures, including the m68k).
The core instruction set of the m68k, as far as the ALU/FPU is concerned, is simple enough. But converting the addressing modes into "RISC building blocks" (μops, or whatever term you like) is harder.
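As an illustration, take one of the 68020's memory-indirect modes. The decomposition below is a hand-written sketch, not any real core's actual μop stream, and the temporaries t0-t2 are invented:

    ; a single 68020 instruction using the memory-indirect
    ; pre-indexed mode with an outer displacement:
    move.l  ([a0,d0.l*4],8),d1

    ; ...which works out to four RISC-style steps, two of them loads:
    t0 = a0 + (d0 << 2)    ; scale the index and add the base register
    t1 = load.l [t0]       ; first memory access: fetch the pointer
    t2 = t1 + 8            ; add the outer displacement
    d1 = load.l [t2]       ; second memory access: fetch the operand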
> only new code would benefit from it.
Not only new code. Old code written in a high-level language would benefit too, if the compiler were updated and the code recompiled.
Not everyone has the luxury of access to the source, etc.; people want their existing binaries to run faster.
Also, if you need to recompile to get a performance boost, why not recompile for a cleaner modern architecture? You can always use an emulator for legacy code, if it isn't going to run fast on a modern CPU either way...
Or, sometimes, binary patched.
This. Don't underestimate the amount of M68K code written in assembly language.