I have some CPU-intensive processes (db backups & reorg especially).

I'm trying to gauge how much of my CPU "slice" I'm actually using, so I know the appropriate amount to purchase for the best cost/performance/predictability ratio.

Based on experimentation, my first theory is that the number at the top left represents the percentage of the overall physical server (all 4 cores), and the numbers in the %CPU column down below are percentages of my allocated slice.
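
To be concrete about what I mean by the top-left figure: I'm assuming it's total busy time across all cores divided by total time. Here's a rough Python sketch of that interpretation (reading /proc/stat on Linux); this is my assumption about the calculation, not necessarily what top actually does:

```python
# Sketch of my interpretation of the top-left figure: busy time across
# all cores as a fraction of total time, computed from /proc/stat.
# This is my assumption, not necessarily what top actually reports.
import time

def cpu_totals():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # aggregate "cpu" line
    values = list(map(int, fields))
    idle = values[3] + values[4]           # idle + iowait
    return sum(values), idle

def overall_cpu_percent(interval=1.0):
    total1, idle1 = cpu_totals()
    time.sleep(interval)
    total2, idle2 = cpu_totals()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    print(f"overall CPU: {overall_cpu_percent():.1f}%")
```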

But the theory seems a bit flawed... On an 8GB Cloud Server, I'm not able to push the CPU utilization (the top number) above 25%, even with the other numbers adding up to more than 100%.
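
For reference, the load I used while watching top was along these lines (a minimal sketch only; my real workload is the db backup/reorg jobs, and the worker count of 4 is just matched to the 4 physical cores):

```python
# Rough sketch of the CPU-burn load used while watching top.
# Four workers only to match the 4 physical cores mentioned above;
# the real load was db backup/reorg jobs, not this loop.
import multiprocessing

def burn():
    # Pure CPU work: spin on arithmetic so one core stays pegged.
    x = 0
    while True:
        x = (x * 31 + 7) % 1000003

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=burn) for _ in range(4)]
    for w in workers:
        w.start()
    # Runs until interrupted (Ctrl-C).
    for w in workers:
        w.join()
```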

So, how does this really work?

BTW, just as an FYI / anecdotal benchmark: a single-threaded, CPU-intensive process runs slightly faster on a 1GB Cloud Server than on an AMD 64 X2 5000+.
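
In case it matters, that benchmark was something like this (a sketch only; the hashing loop and iteration count are placeholders for the real single-threaded workload):

```python
# Minimal sketch of the kind of single-threaded, CPU-bound benchmark
# behind the anecdote above; the hashing loop and iteration count are
# placeholders, not the actual workload.
import hashlib
import time

def benchmark(iterations=5_000_000):
    start = time.time()
    data = b"benchmark"
    for _ in range(iterations):
        data = hashlib.sha256(data).digest()
    return time.time() - start

if __name__ == "__main__":
    print(f"elapsed: {benchmark():.2f}s")
```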