In the old days, a processor was just a processor: one CPU on one chip. Over time they got smaller and faster.
Eventually the engineering problems of making these things smaller and faster began to mount. And so a different strategy was employed: parallel processing.
Parallel processing simply means speeding up a computing machine not by using a faster processor, but by using multiple processors and dividing the incoming processing work amongst them.
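To make that concrete, here is a minimal sketch in Python of dividing incoming work among workers. The `square` function is a made-up stand-in for a real unit of work; `Pool(processes=2)` plays the role of two cores.

```python
from multiprocessing import Pool

def square(n):
    # Stand-in for one unit of processing work.
    return n * n

if __name__ == "__main__":
    jobs = list(range(8))
    # Two worker processes, like two cores: the pool divides
    # the incoming jobs among them.
    with Pool(processes=2) as pool:
        results = pool.map(square, jobs)
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The caller just hands over the whole job list; the pool does the splitting and the gathering.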
This is a hardware architecture strategy that was first employed in large research machines, requiring special purpose operating systems and lots of custom programming.
The processor manufacturers exploited the idea by putting multiple CPU cores onto a single chip. The chip doesn't divide the incoming work up all by itself, though: the operating system schedules programs and threads across the cores, and the software itself has to be written to use more than one thread to benefit. Some old single-threaded software will just run on one of the cores, and you will get no benefit from the other core.
But anyway, no, you do not automatically get a 2x speedup from having 2 cores. In practice it's something like a 1.5x speedup, very roughly, depending on the nature of the workload.
That's because of a couple of things. 1) Workloads are idiosyncratic. Some divide naturally into parts; some problems are genuinely difficult to split up. And the low-level code that splits the work for these processors isn't going to spend much time on that question, so the division may not be optimal in every case. 2) Even when the split is optimal, the splitting itself takes time, and so does coordinating the pieces afterward. That's overhead, and it subtracts from your ideal 2x rate even if nothing else does.
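The overhead point has a classic formalization, Amdahl's law: if only part of a job can be split across cores, the serial remainder caps the speedup. A small sketch (the 80% figure below is just an illustrative assumption, not a measurement):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only parallel_fraction
    of the work can be spread across the given number of cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# If 80% of a job parallelizes perfectly, 2 cores give about
# 1.67x, not 2x -- and splitting overhead eats into even that.
print(round(amdahl_speedup(0.8, 2), 2))  # 1.67
```

Notice that with 100% parallelizable work the formula does give exactly 2x on 2 cores; the gap comes entirely from the serial fraction and the overhead.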
Nowadays you can get consumer processors with 8 cores on them, and AMD has pushed desktop chips up to 16. So there you go.