Well, this is sort of how it works:
In the old days, power was quoted as continuous, both channels driven, under worst-case mains conditions. It is also often quoted for the full bandwidth, but in this case it is not tested by running the amp at an arbitrary frequency within that bandwidth (say 20Hz - 20kHz) for 24 hours; instead it is done using a wideband noise source with a known long-term RMS value. For normal testing, a single frequency (often 1kHz or 315Hz) is used into a nominal load, and with low mains voltage. The amps are expected to perform that way for 1 hour, in a standard-temperature room, under the recommended placement conditions (this is what is written in the manual, things like a level surface with no top ventilation holes obstructed and X inches of free space on all sides). Keep in mind that mains voltage used to be specced as 110V US, 220V EU, 240V UK - while now it is 115V or 230V, hence US and EU standard amps run at higher mains, and also produce higher than declared power.
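To put a rough number on that last point: for an amp with an unregulated linear supply clipping at the rails, output power scales roughly with the square of the mains voltage. This is a back-of-the-envelope sketch under that assumption (the function name and the square-law rule are illustrative, not from any standard):

```python
# Rough sketch: assumes clipping power scales with the square of the mains
# voltage (unregulated linear supply, rails tracking the mains). Real amps
# deviate from this, so treat the result as a ballpark only.
def power_at_mains(rated_power_w, rated_mains_v, actual_mains_v):
    """Estimate clipping power when the mains differs from the rating voltage."""
    return rated_power_w * (actual_mains_v / rated_mains_v) ** 2

# An amp rated 100 W at the old 110 V US standard, run on today's 115 V mains:
print(round(power_at_mains(100, 110, 115), 1))  # -> 109.3
```

So the voltage bump from 110V to 115V (or 220V to 230V) alone is worth roughly 9% "free" power over the old declaration.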
Somewhere during the decline after the golden years, 'peak power' declarations became more and more popular. In one way this makes sense, as music has a very peaky waveform envelope, i.e. the ratio of peak to RMS power is very high. Unfortunately, because manufacturing costs and especially transport costs were continually creeping higher, this rating soon started being used to cut costs. Large electrolytics became cheaper, smaller and especially lighter than large transformers. It should be noted that normal EI transformer lamination technology has not progressed much since those days, and although transformers can be somewhat smaller and lighter using optimized stack sizes, most 80s amps already have way too small transformers, and today's multichannel receivers are downright ridiculous. This, however, is not true for C-core, R-core and toroidal transformers - these are indeed about half the size for the same power - but they were also much more expensive to make back then. Somewhere during that time declarations also started omitting the 'both channels driven' part, which immediately halves the required power supply - it proved to be a big hit with the accountants, and as we know, most of the buying public could not care less. Further, the peak power rating is observed only over a very short window. As far as I remember it is about 10ms, measured with a tone burst at 1kHz. The problem is that 10ms fits only a single cycle at 100Hz, already well short of the promised 20Hz lower cut-off frequency, where a single cycle takes 50ms.
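The burst-length point is easy to check with basic arithmetic: the number of full cycles in a burst is just duration times frequency. A quick sketch (the function name is my own, purely illustrative):

```python
# Why a 10 ms tone burst says nothing about low-frequency power delivery:
# count how many full signal cycles fit inside the burst window.
def cycles_in_burst(burst_s, freq_hz):
    """Number of full cycles of a sine at freq_hz that fit in burst_s seconds."""
    return burst_s * freq_hz

for f_hz in (1000, 100, 20):
    print(f"{f_hz:4d} Hz: {cycles_in_burst(0.010, f_hz):.1f} cycles in 10 ms")
```

At 1kHz the burst holds ten full cycles, at 100Hz exactly one, and at the advertised 20Hz corner only a fifth of a cycle - the supply caps never get drained the way sustained bass would drain them.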
With the advent of multichannel, things became even more absurd. At best, consumer-grade multichannel amps are tested using two-channel methodology, which means they satisfy the declaration with only two out of five or more channels driven. Today this is VERY easy to see: receivers declared at 5x100W have transformers smaller than stereo amps with the same WPC declaration. In many cases, since 'both channels driven' no longer makes sense for five channels and is simply dropped, the declaration reads 5x100W, say, but measured one channel at a time. As one would expect, if the rating is measured using tone bursts, the situation is even worse.
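A crude sizing sketch shows why the transformers give the game away. Assuming a typical class AB efficiency of around 60% (that figure, the sizing rule, and the function name are all illustrative assumptions, not a standard):

```python
# Crude estimate of the power supply capacity a rating implies.
# Assumes ~60% amplifier efficiency (ballpark class AB figure) and ignores
# standby loads, headroom, and rectifier losses - illustration only.
def supply_va_needed(watts_per_channel, channels_driven, efficiency=0.6):
    """Very rough continuous supply capacity implied by the rating."""
    return watts_per_channel * channels_driven / efficiency

honest = supply_va_needed(100, 5)     # all five channels driven at rated power
marketing = supply_va_needed(100, 1)  # one channel at a time
print(round(honest), round(marketing))  # -> 833 167
```

A receiver that honestly delivered 5x100W continuously would need a transformer in the 800VA class; rating one channel at a time lets the accountants fit one a fifth that size, which is exactly what you see when you open the lid.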
Finally, computer audio. Here we have the famous PMPO. Simply put, this has ABSOLUTELY NO CONNECTION WITH ANY REAL POWER. In other words, you can pretty much declare anything you want for PMPO power, since no-one has ever defined what it actually means. All attempts to make some sense of it are completely futile, a fool's errand. Why? Simply put: if a speaker is connected directly to a power supply (omitting all possible losses from an amp), a wimpy 12V 1A supply pushes 3A through a 4 ohm speaker, for the first ms or so - and that is a whole 36W. You get no more, no matter how short the pulse is. When the box says 100W or 1000W or anything over this 36W, it is downright lying, and if it says anything between 12W and 36W it is seriously stretching the truth.
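The 36W ceiling in that argument is just Ohm's law: with the speaker wired straight across the supply, current is V/R and power is V²/R, and no amplifier can beat that. A one-liner makes it concrete (function name is mine, for illustration):

```python
# Hard upper bound from the post's argument: speaker wired directly across
# the supply rails, zero amplifier losses. P = V^2 / R, I = V / R.
def max_pulse_power_w(supply_v, load_ohm):
    """Absolute ceiling on instantaneous power into the load."""
    return supply_v ** 2 / load_ohm

print(max_pulse_power_w(12, 4))  # -> 36.0
```

So for the typical 12V wall-wart feeding these computer speakers, 36W into 4 ohms is the physical ceiling however the pulse is shaped, and any three- or four-digit PMPO figure on the box is pure fiction.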