You could just Google the words "Why multi-core processors" and read the answers.
The two big problems were memory latency and heat generation/power consumption. (Power consumption and heat generation go hand in hand.) CPU speeds and front-side bus speeds were increasing at a much greater rate than memory speeds. (They still are.)

To increase memory bandwidth, approaches like double data rate (DDR) memory - transferring data on both the rising and falling edges of the clock - and DDR2 - running the I/O bus at twice the memory clock while still transferring on both edges - were used. Another technique was dual-channel memory, which reads from two memory modules in parallel, doubling the effective bus width. The newer Intel Core i7s use triple-channel memory. In theory, dual and triple channel should double and triple the memory bandwidth, respectively. In practice, the gains were more like single-digit percentages.
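If you want to see where those "double and triple" figures come from, the theoretical peak is just transfer rate times bus width times number of channels. A rough back-of-the-envelope sketch (the clock speeds and the 64-bit bus width below are generic illustrative numbers, not any particular part's datasheet):

```python
# Rough theoretical peak memory bandwidth:
#   bandwidth = transfers_per_second * bytes_per_transfer * channels
# All figures below are illustrative, not taken from a specific datasheet.

def peak_bandwidth_gbs(io_clock_mhz, transfers_per_clock, bus_width_bits, channels):
    """Theoretical peak bandwidth in GB/s."""
    transfers_per_sec = io_clock_mhz * 1e6 * transfers_per_clock
    bytes_per_transfer = bus_width_bits / 8 * channels
    return transfers_per_sec * bytes_per_transfer / 1e9

# Plain single-data-rate SDRAM at 133 MHz, 64-bit bus, one channel
print(peak_bandwidth_gbs(133, 1, 64, 1))   # ~1.1 GB/s
# DDR-400: 200 MHz clock, 2 transfers per clock
print(peak_bandwidth_gbs(200, 2, 64, 1))   # ~3.2 GB/s
# DDR2-800 dual-channel: 400 MHz I/O clock, 2 transfers per clock, 2 channels
print(peak_bandwidth_gbs(400, 2, 64, 2))   # ~12.8 GB/s
# DDR3-1066 triple-channel (Core i7 style): 533 MHz I/O clock, 3 channels
print(peak_bandwidth_gbs(533, 2, 64, 3))   # ~25.6 GB/s
```

Those are peak numbers; real workloads rarely keep every channel busy, which is part of why the measured gains fall so far short of the theory.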
While these approaches helped, CPU speed was doubling two or three times faster than memory speed. That left a very fast processor starved for data to work on; the gap between requesting data and getting it back is the "memory latency" problem. By using two (or more) cores at lower clock speeds, each core wastes fewer clock cycles waiting for data, and the cores can overlap their memory requests. Efficiency therefore increases.
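To put numbers on that: a DRAM round trip takes a roughly fixed amount of wall-clock time, so the faster the core clock, the more cycles get burned waiting on the same access. A quick sketch, assuming a ~60 ns round-trip latency purely for illustration:

```python
# How many core clock cycles a fixed memory access time costs.
# The 60 ns round-trip latency is an assumed figure for illustration.
MEMORY_LATENCY_NS = 60

for clock_ghz in (1.5, 2.0, 3.4):
    stalled_cycles = MEMORY_LATENCY_NS * clock_ghz  # ns * (cycles per ns)
    print(f"{clock_ghz} GHz core: ~{stalled_cycles:.0f} cycles stalled per miss")

# 1.5 GHz -> ~90 cycles, 3.4 GHz -> ~204 cycles: the faster the single core,
# the more potential work it throws away waiting on the same DRAM.
```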
The other problem - heat generation and power consumption - comes from the nature of increasing the clock frequency. Doubling the CPU speed did nearly the same to the power requirements. We had single-core CPUs drawing 125 W. More power created more heat. It also required beefier power supplies, which in turn generated even more heat. Intel's 3.4 GHz Extreme Edition - one of the last of the single-core line - idled at about 65 °C. That's just too high for an air-cooled system. (We see dual-core and quad-core CPUs reaching this level again, but with all cores running in the 3.0 GHz+ range.)
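The usual first-order model behind this is dynamic power ≈ C × V² × f (switched capacitance times voltage squared times frequency). A small sketch, with made-up but plausible capacitance and voltage values, of why two slower cores beat one faster, hotter one on power:

```python
# First-order dynamic power model: P ~ C * V^2 * f
# The capacitance and voltage values are made-up illustrative numbers.

def dynamic_power_watts(cap_farads, volts, freq_hz):
    return cap_farads * volts ** 2 * freq_hz

C = 30e-9  # effective switched capacitance (illustrative)

base = dynamic_power_watts(C, 1.2, 2.0e9)             # ~86 W at 2.0 GHz, 1.2 V

# Doubling frequency alone doubles dynamic power...
print(dynamic_power_watts(C, 1.2, 4.0e9) / base)       # 2.0x
# ...but reaching that frequency usually needs a higher core voltage,
# and power grows with the *square* of voltage.
print(dynamic_power_watts(C, 1.4, 4.0e9) / base)        # ~2.7x
# Two cores at the original clock and voltage roughly double the power
# for (ideally) double the throughput, without the voltage-squared penalty.
print(2 * dynamic_power_watts(C, 1.2, 2.0e9) / base)   # 2.0x
```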
There were other things, but those are the two major factors I heard repeatedly.