This model represents the relationship between the time needed to compute a problem and the number of processor cores dedicated to it. Many computing problems, such as mapping DNA strands or running certain simulations, require more computing power than any single computer can provide, so the work is instead distributed among a number of processors, all running in parallel.
The model is controlled primarily by the two sliders on the graph: work load and data load. The work load slider determines the total number of computations needed to finish the job, while the data load slider determines the total amount of data that the processors must hold and/or transfer in order to complete it.
For any level of work and data, the total time required for a set of parallel processors to complete the job is the sum of the time the processors spend doing the actual work and the time they spend transferring data about the work among themselves. The results are displayed on the graph.
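As a rough illustration, this decomposition of total time into work time and transfer time can be written as a small function. This is a hedged sketch, not the model's actual code: the names `work_load`, `data_load`, and `num_processors`, and the assumption that the work divides evenly while transfer cost grows with the number of processors, are illustrative assumptions.

```python
def total_time(work_load: float, data_load: float, num_processors: int) -> float:
    """Estimated total time for a parallel job (illustrative sketch).

    Assumptions (not taken from the model itself):
    - the work divides evenly, so compute time is work_load / num_processors
    - transfer time grows in proportion to data_load and to num_processors
    """
    compute_time = work_load / num_processors    # time spent doing the actual work
    transfer_time = data_load * num_processors   # time spent moving data between processors
    return compute_time + transfer_time
```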
Set the sliders to whatever values you wish; the table and graph will update automatically. Examine, in particular, how the shapes of the lines on the graph change as the data and work loads change.
The time required to process the actual job varies inversely with the number of processors, while the time required to transfer data varies directly with both the data load and the number of processors. The total time therefore falls at first, as the job is divided among more processors, until the extra time spent transferring pieces of the job between processors outweighs the benefit of the additional processing power.
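Under the same assumed functional form as the sketch above, T(p) = work/p + data*p, the short sweep below shows how total time first falls and then rises as processors are added, with the minimum near p = sqrt(work/data). Both the functional form and the constant of proportionality (taken here as 1) are assumptions used only to illustrate the trade-off described above.

```python
import math

def sweep(work_load: float, data_load: float, max_processors: int = 64) -> None:
    """Print total time for each processor count, assuming T(p) = work/p + data*p."""
    best_p, best_t = None, float("inf")
    for p in range(1, max_processors + 1):
        t = work_load / p + data_load * p    # assumed compute + transfer time
        if t < best_t:
            best_p, best_t = p, t
        print(f"{p:3d} processors -> total time {t:8.2f}")
    # Under these assumptions the analytic optimum lies near sqrt(work/data).
    print(f"best observed: {best_p} processors "
          f"(analytic estimate: {math.sqrt(work_load / data_load):.1f})")

# Example values for the two sliders; any positive numbers show the same U-shaped curve.
sweep(work_load=1000, data_load=2)
```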