Guest post by John Raynor
Question: Why are Science Communicators like Thermostats?
Answer: Because they are both regulators
Science communicators regulate the exchange of scientific information within a community. A thermostat regulates the temperature within an oven.
We want to affect a community, a network, a society or an audience; the thermostat affects an oven. We have means to do it: media, dialogue processes, social networks or presentations; the oven has a heating element. We have a plan about what we want to share and the outcomes we desire; the oven has a dial to set the desired temperature. We measure or evaluate the actual outcomes and effects upon our community; the oven has a thermometer. We close the feedback loop by comparing the actual outcomes with the desired outcomes; the thermostat compares the actual temperature with the desired temperature. Based on this comparison we modify our inputs so that eventually the desired and actual outcomes match, just as happens with the oven. The science communicator manages the communication system while the thermostat regulates the oven. Both are examples of the kind of regulation studied in the broad interdisciplinary field known as cybernetics.
Cybernetics is the study of systems that involve some form of regulation. It is closely related to the study of control systems in engineering. What I aim to do here is to develop a control systems model of science communication.
Control systems may be described as being either open-loop or closed-loop depending on whether or not feedback is employed. Feedback is where you monitor (sample or evaluate) the outcomes of some process or activity within a system and use this information to modify the inputs to the system. A very early example of closed-loop control was the centrifugal (flyball) governor that James Watt adapted to regulate the amount of steam delivered to a steam engine, and so control its speed.
As a starting point, here’s a block diagram of a closed-loop thermostat controller for our oven.
The heating element (gas or electric) heats the oven, making its temperature rise. A thermometer measures (evaluates) the actual temperature, which is compared with the temperature that we have set. If the actual temperature is less than the required temperature, the heating element turns on; otherwise it turns off. Information about what is happening at the output of the system feeds back to the input to determine what needs to happen. Once the temperature reaches its set point, the feedback should keep the actual temperature stable near the set point by turning the heating element on and off as required.
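The on/off logic just described can be sketched in a few lines of code. This is a toy simulation, not a real oven model: the set point, heating rate and heat-loss coefficient are all invented numbers chosen to make the behaviour visible.

```python
# A minimal sketch of on/off (bang-bang) thermostat control: turn the
# element on when the measured temperature is below the set point, off
# otherwise, and let the feedback loop hold the oven near the set point.
# All numbers are illustrative.

def simulate_oven(set_point=180.0, steps=300, dt=1.0):
    """Simulate a toy oven under simple on/off control."""
    temp = 20.0          # starting (room) temperature, deg C
    ambient = 20.0
    heat_rate = 2.0      # deg C per step added while the element is on
    loss_coeff = 0.01    # fraction of excess heat lost to the room per step
    history = []
    for _ in range(steps):
        element_on = temp < set_point               # the comparison step
        if element_on:
            temp += heat_rate * dt                  # heating element input
        temp -= loss_coeff * (temp - ambient) * dt  # heat loss to the room
        history.append(temp)
    return history

temps = simulate_oven()
print(round(temps[-1], 1))  # ends up oscillating close to the set point
```

Run for long enough, the trace climbs from room temperature and then chatters in a narrow band around 180, which is exactly the stable behaviour described above.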
If we produce a sudden disturbance to the oven temperature by, say, opening the oven door, then the system takes some time to recover its original temperature. This time is known as the "response time" or "lag time" of the system, and its value depends on all of the components in the system. A second important time is the "reaction time" of the oven to the inputs. For a stable system, the response time must be fast enough to keep up with the reaction time of the oven. If it is not, major problems can occur. For example, imagine a small oven with a very powerful heating element, so that the oven heats very quickly, but suppose that the measurement and comparison processes are very slow. The actual temperature could then go way above the set point before the rest of the system catches up and turns off the element. If the delay is too long, you may be doing exactly the wrong thing at the inputs for the current state of the oven, with catastrophic results: the system is unstable.
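The fast-oven-slow-feedback failure mode can be made concrete by feeding the controller a stale temperature reading. In this sketch (same invented oven model as before, with a more powerful element) the controller sees a reading that is `delay` steps old; with a long delay the element stays on well past the set point and the temperature swings wildly.

```python
# A sketch of the "slow measurement" failure mode: the controller acts on
# a temperature reading that is `delay` steps old. All numbers invented.

from collections import deque

def simulate_delayed(set_point=180.0, delay=0, steps=400, dt=1.0):
    temp, ambient = 20.0, 20.0
    heat_rate, loss_coeff = 4.0, 0.01   # a small oven, a powerful element
    readings = deque([temp] * (delay + 1), maxlen=delay + 1)
    history = []
    for _ in range(steps):
        measured = readings[0]           # oldest reading = stale feedback
        if measured < set_point:
            temp += heat_rate * dt
        temp -= loss_coeff * (temp - ambient) * dt
        readings.append(temp)
        history.append(temp)
    return history

prompt_overshoot = max(simulate_delayed(delay=0)) - 180.0
slow_overshoot = max(simulate_delayed(delay=40)) - 180.0
print(prompt_overshoot < slow_overshoot)  # stale feedback overshoots far more
```

With prompt feedback the overshoot is a few degrees; with a 40-step measurement lag the element keeps heating long after the set point is passed and the temperature soars before the loop can react, then swings back below it: the unstable operation described above.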
Based on this example we can construct a generalised model of a closed-loop control system, to which I have added two additional components. The "disturbance" is some external factor that disturbs the thing we wish to control; opening the oven door in the previous example was a disturbance. There is also uncertainty, noise or random effects, particularly with respect to the measurement or evaluation of the outputs, which can interfere with the performance of the system.
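Both added components can be bolted onto the toy oven loop: a one-off disturbance (the door opened partway through, dumping heat) and random noise on the feedback path. Again, every quantity is illustrative.

```python
# The generalised closed loop with its two extra components made explicit:
# a disturbance (the oven door opened once) and measurement noise on the
# feedback path. All numbers are invented for illustration.

import random

def closed_loop(set_point=180.0, steps=600, door_open_at=300):
    random.seed(0)                       # reproducible noise for the demo
    temp, ambient = 20.0, 20.0
    heat_rate, loss_coeff, noise = 2.0, 0.01, 1.0
    history = []
    for t in range(steps):
        if t == door_open_at:
            temp -= 60.0                 # disturbance: the door is opened
        measured = temp + random.gauss(0.0, noise)  # noisy evaluation
        if measured < set_point:         # compare, then act on the input
            temp += heat_rate
        temp -= loss_coeff * (temp - ambient)
        history.append(temp)
    return history

temps = closed_loop()
print(round(temps[300], 1), round(temps[599], 1))  # just after the door opens vs later
```

Despite the sudden drop and the noisy readings, the loop pulls the temperature back to the set point within the system's response time; the noise merely widens the band in which it settles.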
In many ways, we can think of science communication as a closed-loop feedback system with the science communicator being a critical element within the system.
Here is an attempt to describe science communication as a feedback system. The boxes reflect the classic communication questions: why (communicate)?, what?, how?, to/among whom?, and with what effect? The green boxes show the role of the science communicator.
Let us suppose that an organization has a need for science communication for a particular purpose; this addresses the question: why communicate? The answer leads to further questions: what do we wish to communicate, and what are our desired outcomes? Next, identifying the society, community, network, actors or audience among whom the communication is to take place allows us to determine how the communication is to occur. As a result of the communication, there will be a number of outputs, outcomes, effects or attitude changes that we need to measure or evaluate. This information provides the feedback that lets us compare the actual outcomes with the desired outcomes. From this comparison, we can act to modify the desired outcomes, the communication processes, or both. For a stable system, after several iterations we might hope that the agreement between the desired and actual outcomes would progressively improve. If we adopt this control systems thinking, the fundamental role of evaluation becomes abundantly clear.
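The evaluate-compare-modify cycle just described can be rendered as a purely notional numeric loop. To be clear, this is a caricature: the "outcome" is reduced to a single invented number, and each iteration is assumed to close a fixed fraction of the gap between desired and actual outcomes.

```python
# A notional rendering of the science communication feedback loop.
# Every quantity here is invented for illustration; real outcomes are
# not one-dimensional, and real iterations are not this tidy.

def communication_cycles(desired=100.0, actual=20.0, learning_rate=0.3, cycles=10):
    """Return the desired-vs-actual gap after each evaluate-compare-modify cycle."""
    gaps = []
    for _ in range(cycles):
        gap = desired - actual         # compare actual with desired outcomes
        actual += learning_rate * gap  # modify the inputs / processes
        gaps.append(abs(desired - actual))
    return gaps

gaps = communication_cycles()
print([round(g, 1) for g in gaps])  # the gap shrinks with each iteration
```

The point of the sketch is the shape of the curve, not the numbers: for a stable loop the gap between desired and actual outcomes shrinks with every iteration, which is the progressive improvement hoped for above — and it only happens because the evaluation step feeds each cycle.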
Unexpected disturbances to the community can stress the system and if the response time of the system as a whole is much longer than the time scale of changes in the community then significant difficulties can arise. In addition, uncertainties or noise can also degrade the performance of the system.
As an example, here is a notional model of a communication system for dealing with natural disasters, such as bushfires or the spread of a flu epidemic, drawn as a control system.
I would like to suggest that when we think of these systems in this way, it becomes very clear that major problems will arise if there are major disruptions to the feedback path. In addition, if the time scale for major changes within the community due to the fire is much shorter than the slow overall response time of the system, then the actions being taken at a particular time may be quite inappropriate to the current situation. Future changes to the system might then focus on making it more robust, so that it continues to operate in difficult conditions, and more agile, so that its response time is short enough to keep up with what is happening on the ground.
For traditional control systems, techniques exist for speeding up the response time without compromising stability. I find it interesting to speculate that some of these ideas might be adapted to speed up the response time of time-critical science communication systems.
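One classic technique of this kind, sketched here on the same invented oven model, is to replace on/off switching with proportional control: drive the input at a strength proportional to the current error, so it tapers smoothly as the target is approached instead of slamming between extremes.

```python
# A toy comparison of on/off control with proportional control on the
# same oven model. All numbers are illustrative.

def run(controller, steps=300):
    temp, ambient, loss_coeff = 20.0, 20.0, 0.01
    history = []
    for _ in range(steps):
        temp += controller(180.0 - temp)        # heating power from the error
        temp -= loss_coeff * (temp - ambient)   # heat loss to the room
        history.append(temp)
    return history

bang_bang = run(lambda err: 4.0 if err > 0 else 0.0)          # on/off
proportional = run(lambda err: max(0.0, min(4.0, 0.5 * err))) # tapered

def swing(h):
    """Size of the late-time oscillation around the set point."""
    return max(h[250:]) - min(h[250:])

print(swing(bang_bang) > swing(proportional))  # proportional settles smoothly
```

The proportional loop reaches the set point about as fast but then sits still instead of chattering. Plain proportional control does leave a small steady-state offset (this toy oven settles a few degrees low), which is why practical controllers add integral and derivative terms, the I and D of the classic PID controller.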