Everything I'm going to post here is an oversimplification of complex realities... but for the sake of understanding:
FLOPS is just a measure of how much processing a system can do. FLOPS stands for "floating point operations per second." You can think of a single floating point operation as one addition, subtraction, multiplication, or division of a pair of numbers. A 1 FLOPS computer could do one of these operations every second. A 1 teraFLOPS computer could do 1,000,000,000,000 (1x10^12) of these operations every second.
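To make that arithmetic concrete, here's a rough Python sketch (my own illustration, not a rigorous benchmark) that times a batch of multiplications and estimates the FLOPS it achieved. Interpreted Python carries huge per-operation overhead, so the number will land far below what the hardware can really do, but the math is the same:

```python
import time

# Time a big batch of floating point multiplications, then divide
# the operation count by the elapsed time to estimate FLOPS.
n = 10_000_000
x = 1.0
start = time.perf_counter()
for _ in range(n):
    x = x * 1.0000001  # one floating point multiplication per iteration
elapsed = time.perf_counter() - start

flops = n / elapsed
print(f"~{flops:,.0f} FLOPS (about {flops / 1e6:.1f} megaFLOPS)")
```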
At the dawn of computing, computers usually had a single processor called the Central Processing Unit (or CPU), so it was easy to compare performance between computers. Now that computers have many processing units (multiple CPUs with multiple processing cores, and multiple graphics processing units (GPUs) with many, many, many cores), it is sometimes easier to compare performance by adding up all of the FLOPS these processing cores can calculate.
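To see how that adding up works, here's a sketch with made-up spec-sheet numbers (every core count, clock speed, and FLOPs-per-cycle figure below is hypothetical, picked just to show the arithmetic). Theoretical peak FLOPS for one chip is cores x clock speed x FLOPs per cycle, and the system total is just the sum:

```python
# Hypothetical spec-sheet numbers, purely for illustration.
cpu_cores = 8
cpu_clock_hz = 3.5e9       # 3.5 GHz
cpu_flops_per_cycle = 16   # e.g. wide SIMD units doing fused multiply-adds

gpu_cores = 2304
gpu_clock_hz = 1.8e9       # 1.8 GHz
gpu_flops_per_cycle = 2    # a fused multiply-add counts as 2 operations

# Peak FLOPS per chip = cores x clock x FLOPs per cycle.
cpu_peak = cpu_cores * cpu_clock_hz * cpu_flops_per_cycle
gpu_peak = gpu_cores * gpu_clock_hz * gpu_flops_per_cycle

print(f"CPU peak:     {cpu_peak / 1e12:.2f} teraFLOPS")
print(f"GPU peak:     {gpu_peak / 1e12:.2f} teraFLOPS")
print(f"System total: {(cpu_peak + gpu_peak) / 1e12:.2f} teraFLOPS")
```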
That's why you might see Microsoft and Sony comparing FLOPS on their latest consoles. The end user doesn't really care how many cores the CPUs and GPUs in these systems have, or at what speed (in hertz) they're running. All the user cares about is performance.
In case you're interested, the term "supercomputer" originally described a computing system that could execute 1 gigaFLOPS (or 1x10^9 operations per second). In 1999, Apple advertised the Power Mac G4 as the first "personal supercomputer" because it could process 4 gigaFLOPS. The G4 chip had an accelerated floating point co-processor (which Apple called the "Velocity Engine") that pushed it past the 1 gigaFLOPS barrier. Applications had to be written specifically to take advantage of this co-processor.
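As a loose modern analogy (I'm using NumPy here, not the actual Velocity Engine), the idea of rewriting code to target a vector unit still applies today: the same multiplication runs far faster when whole arrays are handed to SIMD-aware native routines instead of being processed one element at a time:

```python
import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Generic path: multiply element by element in the interpreter.
start = time.perf_counter()
slow = [a[i] * b[i] for i in range(len(a))]
t_scalar = time.perf_counter() - start

# Vector path: one call that hands both arrays to optimized native code.
start = time.perf_counter()
fast = a * b
t_vector = time.perf_counter() - start

print(f"element-by-element: {t_scalar:.3f} s")
print(f"vectorized:         {t_vector:.4f} s")
```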
The fastest supercomputer cluster these days is capable of over 93,000,000,000,000,000 (93x10^15) FLOPS, or roughly 93 petaFLOPS.