Computing can be defined as a process that turns inputs into outputs by following a set of instructions or rules. If we adopt such a flexible definition, we could arguably state that living systems compute. Bacterial cells continuously sense their environment and respond to changes by using their internal machinery to process information. Important efforts in synthetic biology deal with the rational modification of such machinery in order to engineer new-to-nature information-processing capabilities, what we refer to as biocomputations. However, not all of the natural information-processing abilities of living organisms are equally well understood. Because of this, standardisation efforts are badly needed, not only for defining standard parts, units or protocols, but also for formalising natural models of information processing.
Computer science has developed a number of models of computation that describe inputs, outputs and the algorithmic rules that turn the former into the latter. These models are theoretical definitions for solving information-related problems, and their physical implementation is well understood. The device you are using to read this text (unless you printed it!) is the result of implementing a theoretical model of computation. The description of these models is heavily standardised: each model describes its features unambiguously. For example, the definition of the Turing machine, or of Boolean logic, does not change depending on who talks about it. Electronic engineering, developed alongside computer science, enables the physical implementation of such models. If a model requires digital processing of information, we can use digital hardware to minimise the semantic gap that separates theory from implementation.
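As a toy illustration of this unambiguity (a hypothetical sketch, not code from any publication), Boolean logic can be written down once and behaves identically in any correct implementation; the NOR function alone suffices to build every other gate.

```python
# A minimal sketch of an idealised model of computation: Boolean logic.
# NOR is functionally complete: any Boolean circuit can be built from it.
def NOR(a: bool, b: bool) -> bool:
    """Idealised NOR gate: true only when both inputs are false."""
    return not (a or b)

# Derived gates, built purely by composing NOR:
def NOT(a: bool) -> bool:
    return NOR(a, a)

def OR(a: bool, b: bool) -> bool:
    return NOT(NOR(a, b))

def AND(a: bool, b: bool) -> bool:
    return NOR(NOT(a), NOT(b))

# The model is unambiguous: these truth tables hold for any correct
# implementation, regardless of the physical substrate.
for a in (False, True):
    for b in (False, True):
        print(a, b, NOR(a, b), AND(a, b), OR(a, b))
```

Whether the gates are realised in silicon or, in principle, in cells, the model itself stays the same; only the faithfulness of the implementation varies.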
Living matter, however, does not provide a physical substrate with which to implement existing models of computation without such a semantic gap. We have already seen this in synthetic biology with the implementation of Boolean logic circuits. Logic gates, such as NOR and NOT, have been implemented in cells using regulatory networks. The fundamental concept that enables combinatorial logic computations in cells is the conceptualisation of gene expression events as on/off processes. This is intuitive to us, since we often talk about gene expression in terms of whether "it happens or not". By concatenating a number of on/off steps we can implement logic gates that perform pre-defined computations. But the functioning of these gates differs substantially from the theoretical behaviour of the device. That is, the semantic gap that separates theory from implementation is considerable. Of course, this gap is not sufficient to argue that a cell is not computing. It is computing a function, just not as well as other existing implementations do.
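The gap can be sketched with a standard textbook model of gene regulation: a Hill function for repression (the function names and parameter values below are illustrative assumptions, not taken from the publication). Where the Boolean model promises clean 0/1 outputs, the genetic gate produces graded, analogue levels.

```python
# Sketch: a genetic NOR gate modelled with Hill-function repression,
# compared against the ideal Boolean NOR. All parameters are illustrative.
def repression(x: float, K: float = 1.0, n: float = 2.0) -> float:
    """Steady-state output of a promoter repressed by total input level x."""
    return 1.0 / (1.0 + (x / K) ** n)

def genetic_nor(a: float, b: float) -> float:
    """Both inputs repress the same promoter: output is high
    only when both input levels are low."""
    return repression(a + b)

def ideal_nor(a: float, b: float) -> float:
    """The theoretical device: a crisp Boolean NOR on thresholded inputs."""
    return 1.0 if (a < 0.5 and b < 0.5) else 0.0

# The semantic gap: the genetic gate yields graded outputs (e.g. 0.5, 0.2)
# rather than the clean 0/1 levels the Boolean model specifies.
for a, b in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    print(f"inputs ({a}, {b}): ideal={ideal_nor(a, b):.1f}  "
          f"genetic={genetic_nor(a, b):.2f}")
```

The cell still computes NOR, in the sense that high-vs-low output tracks the truth table, but the intermediate values show why the implementation departs from the theoretical device.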
In a recent publication* we described a number of information-processing features of living cells that differ from electronics. These features may help us describe radically new models of computation that are better suited to the physical matter we use to implement computations. We propose the notion of cellular supremacy to focus attention on domains in which biocomputing may offer superior performance over traditional computers. Achieving such cellular supremacy will require standardisation efforts aimed at generating rigorous descriptions of information processing inside living organisms.
*Read more: https://www.nature.com/articles/s41467-019-13232-z
Text by: Angel Goñi Moreno (School of Computing of Newcastle University, UK), December 2019