It is stated in the datasheet that "The flash block is designed to deliver 1 wait-state (WS) access time at 48 MHz and with 0 WS access time at 24 MHz". So, what is really the MIPS difference between 24MHz and 48MHz?
This heavily depends on how your code is structured. The datasheet mentions a "flash read accelerator", which I think is a kind of cache. This would mean that heavily localized code (like a small loop) executes at full speed (0 wait states), while code spread out over memory incurs 1 wait state (when run at a 48 MHz clock).
Note that the datasheet doesn't talk about MIPS performance. But I did find some references that mention 0.9 MIPS/MHz, which would result in about 43 MIPS at 48 MHz.
That is difficult to answer: since the ARM core has an instruction cache with zero (0) wait cycles, much of the execution speed depends on the actual layout of the generated code and on cache flushes caused by conditional jumps. A Dhrystone benchmark could tell the difference...
If you're interested, here's a link: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0273a/index.html
This seems to indicate, logically, given how loosely IPS is defined, that MIPS and memory performance have a weak relationship -
Instructions per second (IPS) is a measure of a computer's processor speed. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads typically lead to significantly lower IPS values. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in MIPS calculations. Because of these problems, researchers created standardized tests such as SPECint to attempt to measure the real effective performance in commonly used applications, and raw IPS has fallen into disuse.