Universal Nonlinear Spiking Neural P Systems with Delays and Weights on Synapses

Nonlinear spiking neural P systems (NSNP systems) are a new type of computation model in which the state of neurons is represented by real numbers and nonlinear spiking rules handle neuron firing. In this work, to improve computing performance, weights and delays are introduced into the NSNP system, and universal nonlinear spiking neural P systems with delays and weights on synapses (NSNP-DW) are proposed. Weights are treated as multiplicative constants by which the number of spikes is scaled when transiting across synapses, and delays account for the speed at which the synapses between neurons transmit information. As a distributed parallel computing model, the Turing universality of the NSNP-DW system as a number generating and accepting device is proven; 47 and 43 neurons are sufficient for constructing two small universal NSNP-DW systems. An NSNP-DW system solving the Subset Sum problem is also presented in this work.


Introduction
Membrane computing (MC) is a representative of a new type of computing, abstracted from the phenomenon of signal transmission between cells in animals. It was proposed by Gheorghe Păun in 1998 and published in 2000 [1]. As a new type of natural computing, membrane computing has abundant model support [2][3][4] and is widely used in real life [5,6]. The distributed computing model is named the membrane system or P system. So far, P systems are mainly divided into three categories: cell-like P systems [7,8], tissue-like P systems [9,10], and neural-like P systems. Spiking neural P (SNP) systems [11], axon P systems [12], and dendrite P systems [13] are widely studied types of neural-like P systems. Research on SNP systems has been abundant for more than ten years. Similar to spiking neural networks (SNN), in an SNP system neurons are activated and spikes are transmitted to other neurons along synapses. SNP systems encode information through the spikes in neurons. Intuitively, an SNP system is represented by a directed graph, where the nodes represent neurons and the neurons are connected by arcs representing synapses. Neurons contain spikes and rules; two kinds of rules are used in SNP systems: firing rules E/a^r ⟶ a and forgetting rules a^s ⟶ λ. Generating, accepting, and functional computing are the three working modes of SNP systems. It is worth mentioning that, by introducing neuron division, budding, or dissolution, SNP systems have been proven able to solve some computationally hard problems, such as the SAT and Subset Sum problems [14,15]. SNP systems can also solve many practical problems, such as fault diagnosis of power systems [16][17][18], image processing [19,20], and combinatorial optimization [21]. Based on these innovative works, more and more scholars have paid attention to SNP systems. Many variants of SNP systems have been proposed, and their computing power has been proven.
The emergence of the SNP system itself was influenced by spike signals observed in biological phenomena. From the perspective of biological facts, Păun et al. considered another important component related to brain function, astrocytes, which implicitly control the number of spikes along the axon. The functional application of astrocytes in the SNP model was first introduced in 2007 [22]. In 2012, Pan et al. formally proposed SNP systems with astrocytes [23]. More discussions of this type of system have followed, such as [24,25]. In such systems, astrocytes influence computation by controlling the transmission of spikes on synapses. Considering the difference in the number of synaptic connections between neurons, Pan et al. [26] proposed spiking neural P systems with weighted synapses. Then, considering the mutual annihilation between spikes, Pan and Păun [27] used a pair of antispikes (a, ā) to define SNP systems with antispikes. In the initial SNP system, there was a unique synapse connection between two neurons; based on biological facts, Peng et al. [28] proposed that the synapse of each neuron can have an indefinite number of channels connected to subsequent neurons, and the SNP system with multiple channels was verified. Later, motivated by the phenomenon that neurons carry positive and negative ions, polarized SNP systems were studied [29,30]. A further neurophysiological observation is that the speed of information transmission differs across synapses; accordingly, Song et al. [31] proposed spiking neural P systems with delays on synapses (SNP-DS) in 2020.
On the other hand, driven by the ideas of mathematics and computing science, many variants of SNP systems make the SNP computing model with a parallel mechanism more powerful, for example, a complete arithmetic calculator constructed from SNP systems [32], SNP systems with random application of rules by introducing probability [33], homogeneous SNP systems with structural plasticity by adding and deleting connections between neurons [34], SNP systems with colored spikes by the idea of colored Petri nets [35], SNP systems with communication on request by using the parallel-cooperating grammar system to communicate on request [36], SNP systems with learning functions used to identify letters and symbols [37], and numerical SNP systems inspired by the numerical nature of the numerical P (NP) systems [38].
For the SNP systems and variants mentioned above, the number of spikes in neurons is in integer form. Motivated by this, nonlinear spiking neural P (NSNP) systems were investigated by Peng et al. [39]. In this system, the nonnegative integer state of neurons is replaced by real numbers, and the numbers of spikes consumed and generated are given by nonlinear functions. However, NSNP systems need 117 and 164 neurons to construct Turing universal systems as a function computing device and a number generator, respectively. For a new heuristic computing model such as this one, computing power is an important measure; as a computational model that trades space for time, the computing power of NSNP systems needs to be further improved.
In this work, for the purpose of greater computing power of NSNP systems, inspired by [26,31], a novel P system, the nonlinear spiking neural P system with delays and weights on synapses, abbreviated NSNP-DW, is proposed. In NSNP-DW systems, owing to several factors such as dendrite length, the spikes emitted from the same firing neuron reach different postsynaptic neurons at different times; therefore, delays on synapses are introduced into NSNP systems. Also, the number of dendrites between a firing neuron and its postsynaptic neurons differs: although the spikes emitted from one neuron are the same, postsynaptic neurons with different numbers of dendrites to the firing neuron receive different amounts of spikes. Therefore, weights on synapses are added to NSNP systems, treated as multiplicative constants by which the number of spikes varies when transiting across synapses. Both of these elements enhance the computing power of the system. Different from NSNP systems, the main novelties of this work are as follows: (i) We propose a novel P system, universal nonlinear spiking neural P systems with delays and weights on synapses, that is closer to biological neurons. (ii) In the NSNP systems of [39], integer spikes are replaced by nonlinear functions; in the proposed NSNP-DW system, we adopt more applicable nonlinear functions, also common in neural networks and machine learning, to abstract the complex responses generated by spikes. (iii) We prove the computational power of the new P system; in addition, 47 and 43 neurons suffice for simulating function computation and number generation, respectively. (iv) Computational efficiency is also an important measure for a P system, usually assessed by whether an NP-complete problem can be solved in feasible time.
We demonstrate the computational efficiency of the new P system by solving a typical NP-complete problem, the Subset Sum problem, in polynomial time. The ability to solve computationally hard problems so quickly is one of the attractions of P systems, and it makes the NSNP-DW system another powerful tool for NP-complete problems. The remainder of the paper is organized as follows: Section 2 briefly recalls the relevant content on register machines. Section 3 proposes NSNP-DW systems and gives two intuitive examples. The computing power of NSNP-DW systems, equivalent to that of the Turing machine, is proven in Section 4. Section 5 verifies the universality of the NSNP-DW system using fewer neurons. The uniform solution of the Subset Sum problem using the NSNP-DW system is given in Section 6. Conclusions and follow-up research on the NSNP-DW system are presented in the last section.

Prerequisites
The elements of the formal language theory underlying SNP systems can be consulted in [11,26]. Here, we only introduce the notions used in this work, such as the construction of register machines, their execution instructions, and the universal register machine.
We show that the proposed NSNP-DW system as a number generating and accepting device is equivalent to the Turing machine. The family of all sets of Turing computable natural numbers, denoted NRE, is the family of length sets of recursively enumerable languages. NRE can be characterized by a register machine M of the form M = (m, H, l_0, l_h, I), where m is the number of registers, H denotes the instruction label set, l_0 is the starting label, l_h is the label of the halting instruction HALT, and I denotes the instruction set. Each instruction in I has one of the following three forms: (1) l_i: (ADD(r), l_j, l_k) (add 1 to register r and then nondeterministically execute the instruction with label l_j or l_k). (2) l_i: (SUB(r), l_j, l_k) (if register r is nonzero, then subtract 1 from it and go to the instruction with label l_j; otherwise, go to the instruction with label l_k). (3) l_h: HALT (the halting instruction).
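A register machine of this form can be sketched as a minimal interpreter (Python; the function name, label scheme, and the three-instruction example program are illustrative assumptions, not taken from the paper):

```python
# Minimal interpreter for the three instruction forms above.
# instructions: dict label -> ("ADD", r, lj, lk) | ("SUB", r, lj, lk) | ("HALT",)
def run_register_machine(instructions, m, start="l0", choose=min):
    regs = [0] * m                  # all m registers start empty
    label = start
    while True:
        instr = instructions[label]
        if instr[0] == "HALT":
            return regs
        op, r, lj, lk = instr
        if op == "ADD":
            regs[r] += 1
            label = choose(lj, lk)  # nondeterministic branch; fixed chooser here
        else:                       # SUB
            if regs[r] > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk

# Illustrative program: increment register 0 three times, then halt.
prog = {
    "l0": ("ADD", 0, "l1", "l1"),
    "l1": ("ADD", 0, "l2", "l2"),
    "l2": ("ADD", 0, "lh", "lh"),
    "lh": ("HALT",),
}
```

Running `run_register_machine(prog, 2)` leaves 3 in register 0, matching the generating mode in which the result is read from the first register.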
By imitating the register machine, the universal NSNP-DW system is verified. When all the registers are empty, the calculation starts from l_0, and the instructions in I are continuously executed until the halting instruction l_h. The number set generated by the register machine M is defined as N_gen. N_acc is the number set accepted by M; in the accepting mode, the ADD instruction l_i: (ADD(r), l_j, l_k) is deterministic and can be expressed as l_i: (ADD(r), l_j).
A computable function f: N^k ⟶ N (k a natural number) can be calculated by a register machine. If the equation ψ_x(y) = M_u(g(x), y) is satisfied, where x and y are nonnegative integers and g(x) is a recursive function, then the register machine M_u is universal. The universal register machine M_u′ simulated by the NSNP-DW system is shown in Figure 1, consisting of 9 registers, 14 SUB instructions, 10 ADD instructions, and one HALT instruction.

NSNP-DW Systems
In the following, we provide the definition of the NSNP-DW system and related semantic explanations. An example shows the operation of the system more clearly.

Definition.
The structure of the proposed NSNP-DW system with degree m ≥ 1 is

Π = (O, σ_1, σ_2, . . . , σ_m, syn, in, out),

where O = {a} is the singleton alphabet (a is called a spike); σ_i = (x_i, R_i), 1 ≤ i ≤ m, are neurons, in which x_i is the initial value of spikes contained in neuron σ_i, indicating the initial state of neuron σ_i, and R_i is the finite set of spiking rules of the following forms: (a) spiking/firing rules E|a^p(x_i) ⟶ a^q(x_i), where E is the firing condition, p(x_i) is a linear or nonlinear function, q(x_i) is a nonlinear function, and p(x_i) ≥ q(x_i) ≥ 0; (b) forgetting rules E|a^p(x_i) ⟶ λ, for a linear or nonlinear function p(x_i); syn is the set of synapses, each (i, j, n) ∈ syn denoting a synapse from σ_i to σ_j with weight n and an associated delay D_syn(i, j) ≥ 0; and in and out indicate the input and output neurons, respectively.

The NSNP-DW system can be visualized by a directed graph G_Π with nodes and arcs, where nodes are neurons and arcs represent synapse connections. In the original SNP systems, the number of spikes is described by an integer; in NSNP-DW systems, besides integers, it can also be described by nonlinear functions. For the rules E|a^p(x_i) ⟶ a^q(x_i) and E|a^p(x_i) ⟶ λ in the NSNP-DW system, the firing condition E has two forms. (1) It is a regular expression, such as a(a^3)^+, that must exactly "cover" the contents of the neuron; if E is exactly a^p(x_i), then E can be omitted. (2) It is a threshold for enabling the rule, given as a positive real number. To distinguish this case, we write the rules as T|a^p(x_i) ⟶ a^q(x_i) and T|a^p(x_i) ⟶ λ, T ∈ R+. Both types of rules are used in this paper. We assume that x_i(t) is the spike value of neuron σ_i at step t. A firing rule T|a^p(x_i) ⟶ a^q(x_i) is applicable only when x_i(t) ≥ p(x_i) and x_i(t) ≥ T. Intuitively speaking, when the firing condition is met, the neuron fires, consuming p(x_i) spikes (if the remaining x_i(t) − p(x_i) spikes can no longer enable any rule, they disappear from the neuron) and sending out q(x_i) spikes. A forgetting rule T|a^p(x_i) ⟶ λ (i.e., q(x_i) ≡ 0) means that spikes with value p(x_i) are consumed but no spike is generated in the neuron to which the rule belongs.
There will inevitably be multiple rules (two or more) in a neuron; assume two rules with thresholds T_1 and T_2, respectively. (i) When T_1 ≠ T_2 and both rules can be executed, the maximum threshold strategy is applied: for instance, if T_1 > T_2, the rule with threshold T_1 takes precedence, and the rule with threshold T_2 is not used. Forgetting rules are no exception to this strategy. (ii) When T_1 = T_2 = T, the nondeterministic rule selection strategy is enabled; that is, the rules with thresholds T_1 and T_2 need to be discussed separately.
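The two strategies can be sketched as follows (a hypothetical `select_rule` helper; encoding each rule as a `(T, p, q)` triple is an assumption for illustration):

```python
import random

# rules: list of (T, p, q): threshold T, consumption function p, and
# production function q (q is None for a forgetting rule).
def select_rule(x, rules, chooser=random.choice):
    # a rule is enabled when the neuron's value reaches both p(x) and T
    enabled = [r for r in rules if x >= r[0] and x >= r[1](x)]
    if not enabled:
        return None
    t_max = max(r[0] for r in enabled)            # maximum threshold strategy
    best = [r for r in enabled if r[0] == t_max]
    # equal thresholds: nondeterministic selection among the tied rules
    return best[0] if len(best) == 1 else chooser(best)
```

With thresholds 2 and 1 both enabled, the threshold-2 rule is always chosen; with tied thresholds, the chooser resolves the nondeterminism.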
In addition, for the NSNP-DW system, weights and delays are attached to synapses. For any (i, j, n) ∈ syn, n is the weight on the synapse: if spikes with value q(x_i) are emitted by neuron σ_i, neuron σ_j receives n × q(x_i) spikes. The delay on synapse (i, j) is represented by D_syn(i, j), given as a number of time steps. If there is a delay between neurons σ_i and σ_j and a spiking rule T|a^p(x_i) ⟶ a^q(x_i) of neuron σ_i is enabled at step t, the spikes with value p(x_i) are consumed from σ_i at step t + 1, and neuron σ_j receives the spikes with value q(x_i) from neuron σ_i at step t + D_syn(i, j) + 1. Therefore, the spikes are owned by σ_j after D_syn(i, j) moments.
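These delivery semantics can be sketched as an event schedule (the `schedule_spikes` helper and its data layout are illustrative assumptions):

```python
from collections import defaultdict

def schedule_spikes(firings, synapses):
    """firings: {i: (t, q)} with neuron i emitting q spikes at step t;
    synapses: {(i, j): (n, d)} with weight n and delay d.
    Returns {(j, arrival_step): spikes_received}."""
    arrivals = defaultdict(float)
    for (i, j), (n, d) in synapses.items():
        if i in firings:
            t, q = firings[i]
            # sigma_j receives n * q spikes at step t + d + 1
            arrivals[(j, t + d + 1)] += n * q
    return dict(arrivals)
```

With the weights and delays of Example 1 below (weight 2 on synapse (1, 2), delay 1 on (1, 3)), a single spike emitted by neuron 1 at step 0 arrives as two spikes at neuron 2 at step 1 and one spike at neuron 3 at step 2.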
Besides, the consumption function p(x_i) and the production function q(x_i) can be selected from the following common activation functions in neural networks. In the NSNP-DW system, neurons work in parallel, while the rules within each neuron are used sequentially; that is, only one rule is allowed to be employed, nondeterministically, in a calculation step.
Assuming m neurons and letting x_i(t) be the number of spikes of the i-th (1 ≤ i ≤ m) neuron, the configuration (state) of the system Π at step t can be expressed as (x_1(t), . . . , x_m(t)), and the initial configuration is X_0 = (x_1(0), . . . , x_m(0)). By executing spiking rules, the passage from configuration X_1 to configuration X_2, denoted by X_1 ⇒ X_2, is defined as a transition of system Π, and the sequence obtained by such transitions is defined as a computation. Each computation is associated with a spike train, a binary sequence of 0s and 1s: a step at which the output neuron emits a spike to the environment corresponds to 1; otherwise, it corresponds to 0. In this study, the time interval between the steps at which the output neuron emits spikes to the environment is used as the calculation result.
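Reading a result off such a spike train can be sketched as follows (the helper name is illustrative):

```python
def result_from_spike_train(train):
    """train: binary spike train; 1 marks a step at which the output
    neuron emits a spike. The result is the interval between the first
    two emissions."""
    ones = [t for t, bit in enumerate(train) if bit == 1]
    assert len(ones) >= 2, "the output neuron must spike at least twice"
    return ones[1] - ones[0]
```

For instance, the train 010001 encodes the number 4.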
For an NSNP-DW system with at most m neurons and at most 2 rules in each neuron, we use N_gen SNP^2_m to denote the family of all natural number sets generated by NSNP-DW systems and N_acc SNP^2_m the family of all natural number sets accepted by them. When the number of neurons is unbounded, m is replaced by *.

Two Illustrative Examples
Example 1. A simple example of the NSNP-DW system is given in Figure 2, containing three neurons labeled 1, 2, and 3. The weight between neuron σ_1 and neuron σ_2 is 2. The delay between neurons σ_1 and σ_3 is D_syn(1, 3) = 1, denoted by t = 1. It is assumed here that p(x) and q(x) take forms (5) and (4) of the functions above. At step t, neuron σ_1 receives two spikes from the environment, its state is x = 2, and rule 2|a^x ⟶ a^p(x)/2 is applied. Neuron σ_1 consumes two spikes and sends one spike to each of neurons σ_2 and σ_3 (because p(x)/2 = 1). At step t + 1, neuron σ_2 receives two spikes due to weight 2. At this moment, neuron σ_3 has not yet received a spike because of D_syn(1, 3) = 1. At step t + 2, neuron σ_3 receives two spikes, one from neuron σ_1 and one from neuron σ_2 (because q(x) = 1).
There are two rules in neuron σ_3 that both meet the firing conditions. Subject to the maximum threshold strategy, rule 2|a^x ⟶ a^q(x) is used and emits one spike to the environment (for q(x) = 1). This example is complete, and the results of each step are presented in Table 1.

Example 2. This example takes (2) and (3) of the above functions as the consumption and generation functions. We define Π_k as the system for generating natural numbers; as shown in Figure 3, each neuron initially has one spike.
In the first step, neuron σ_3 uses rule 1|a^p(x) ⟶ a^q(x)/2 to emit 1/2 spike to the environment. At the same time, neurons σ_1 and σ_2 also fire by applying rule 1|a^p(x) ⟶ a^q(x)/2, sending a spike to each other (because of weight 2), while neuron σ_1 and neuron σ_2 send one spike (because of weight 2) and 1/2 spike to neuron σ_3, respectively. In the next step, neurons σ_1 and σ_2 can repeat their initial actions any number of times, and the 3/2 spikes in neuron σ_3 are always removed. Once neuron σ_2 executes the forgetting rule 1|a^p(x) ⟶ λ at step t, neuron σ_2 stops emitting spikes, and neuron σ_1 sends the last spike to neuron σ_3. At step t + 1, neuron σ_3 has one spike and uses rule 1|a^p(x) ⟶ a^q(x)/2 to send 1/2 spike to the environment again. In this way, the time interval (t + 1) − 1 = t between the spikes transmitted to the environment is the natural number generated by the system Π_k.
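The interval arithmetic of Example 2 can be mirrored in a small sketch (the helper only encodes the timing argument above, not the full rule semantics; the name is illustrative):

```python
def example2_generated_number(stop_step):
    """stop_step: the step t at which neuron sigma_2 applies its
    forgetting rule. sigma_3 emits its first 1/2 spike in step 1 and
    its second at step t + 1, so the interval equals the generated
    number."""
    first_emission = 1
    second_emission = stop_step + 1
    return second_emission - first_emission
```

Since sigma_2 may delay its forgetting rule arbitrarily, every natural number is generated by some computation of Π_k.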

Computational Power
In the nonlinear spiking rule of the NSNP-DW system, we choose two of the aforementioned functions (as the consumption and generation functions) to verify the Turing universality of the system and its computing power. The following functions p(x) and q(x) are considered. Thus, the state of neuron σ_i can be recorded by a nonlinear equation:

x_i(t + 1) = x_i(t) − p(x_i(t)) + y_i(t),

where x_i(t) and x_i(t + 1) are the states of the neuron at steps t and t + 1, respectively, p(x_i(t)) represents the consumption value, and y_i(t) is the reception value. In this part, we are committed to showing that the NSNP-DW system is Turing universal as a number generating device and as a number accepting device. Given that the register machine M can generate and accept any set in NRE, the NSNP-DW system is proved to be universal through simulating M. In order to facilitate understanding, we assume that the number n in register r is represented by 2n spikes in neuron σ_r. Neurons σ_li, σ_lj, and σ_lk receive two spikes when the corresponding instructions are activated.
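The state update above can be written directly (a minimal sketch; using ReLU for the consumption function p is an assumption for illustration, since the paper selects p and q from its own list of activation functions):

```python
def step_state(x, y, p=lambda v: max(v, 0.0)):
    """x: state x_i(t); y: reception value y_i(t); p: consumption
    function. Implements x_i(t+1) = x_i(t) - p(x_i(t)) + y_i(t)."""
    return x - p(x) + y
```

With ReLU consumption, a firing neuron empties its positive state and keeps only what it receives.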

NSNP-DW Systems as Number Generating Device
Theorem 1. N gen SNP 2 * = NRE.

Proof. N gen SNP 2 * ⊆ NRE holds by the Church-Turing thesis; only N gen SNP 2 * ⊇ NRE needs to be proved. In the number generating mode, M = (m, H, l_0, l_h, I) is the required register machine, and the number generating device includes ADD, SUB, and FIN modules. M generates a number n in the following way: initially, all registers are empty, and the simulation starts from instruction l_0, continues with the required labeled instructions, and stops at instruction l_h. According to the instructions, the number n stored in the first register is the result computed by M. In the ADD and SUB modules, neuron σ_li receives two spikes and runs according to instruction l_i: (OP(r), l_j, l_k) (OP is an ADD or SUB operation). Neuron σ_lj or σ_lk receives two spikes nondeterministically, and the corresponding instruction l_j or l_k is activated. In the FIN module, neuron σ_out sends two spikes to the environment at an interval.
ADD module (shown in Figure 4)-simulating an ADD instruction l i : (ADD(r), l j , l k ).
Neuron σ_li will receive two spikes from the environment. After running the ADD module, spikes will be sent to neuron σ_lj or σ_lk nondeterministically to simulate instruction l_j or l_k. When two spikes are added to neuron σ_r, the corresponding register r is increased by 1.
In detail, neuron σ_li fires at step t, and rule 2|a^x ⟶ a^p(x)/2 is executed, sending one spike to each of neurons σ_c1, σ_c2, σ_c3, and σ_r. At the next moment, neuron σ_c1 receives one spike. Since rule 1|a^x ⟶ a^p(x) and rule 1|a^x ⟶ λ have the same threshold, one of the two rules is executed nondeterministically: (i) At step t + 1, neuron σ_c1 fires by using rule 1|a^x ⟶ a^p(x), and one spike will be sent to neurons σ_c2 and σ_c3, respectively. At the next step, both neurons σ_c2 and σ_c3 contain two spikes, one from neuron σ_li and one from neuron σ_c1. At step t + 3, neuron σ_c3 fires by using rule 2|a^x ⟶ λ, so that two spikes are removed. Rule 2|a^x ⟶ a^p(x)/2 in neuron σ_c2 is applied simultaneously, consuming two spikes and emitting one to neurons σ_c4 and σ_lj. At the next step, neuron σ_c4 becomes active by executing rule 1|a^x ⟶ a^p(x), emitting one spike to each of neurons σ_r and σ_lj. In this way, the second spike is received by σ_r, which increases the value of register r by 1. Neuron σ_lj receives a total of two spikes successively, and then instruction l_j starts to be simulated. (ii) At step t + 1, neuron σ_c1 fires by using rule 1|a^x ⟶ λ, which causes a spike to be removed. At step t + 2, neurons σ_c2 and σ_c3 each receive one spike, and the one in neuron σ_c2 soon no longer exists because of 1|a^x ⟶ λ. The one received in neuron σ_c3 is sent to neurons σ_c4 and σ_lk through rule 1|a^x ⟶ a^p(x). Then, at the next moment, neuron σ_c4 transmits one spike, which is received by σ_r and σ_lk. So neurons σ_r and σ_lk have each received two spikes at step t + 4, indicating that register r is increased by 1 and l_k is activated. Therefore, instruction l_i: (ADD(r), l_j, l_k) is simulated correctly. Considering the two different rules, Table 2 shows the number of spikes in all neurons at each moment.
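The two nondeterministic branches of the ADD module can be summarized schematically (an illustrative sketch of the case analysis above, not a full rule engine):

```python
def add_module_outcome(c1_fires):
    """c1_fires: True if sigma_c1 applies 1|a^x -> a^p(x) at step t+1,
    False if it applies the forgetting rule 1|a^x -> lambda."""
    r_new_spikes = 1          # first spike reaches sigma_r at step t+1
    r_new_spikes += 1         # a second spike always arrives via sigma_c4
    activated = "lj" if c1_fires else "lk"
    return r_new_spikes, activated
```

In both branches register r gains one unit (two spikes), while the fired/forgotten choice in sigma_c1 decides whether l_j or l_k is activated.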
SUB Module (shown in Figure 5)-simulating a SUB instruction l i : (SUB(r), l j , l k ).
Two spikes are received by neuron σ_li. If register r is not empty, then two spikes are sent to neuron σ_lj, and the corresponding instruction l_j is executed. If the value in register r is zero, then two spikes are sent to neuron σ_lk, and the instruction with label l_k is executed. The detailed simulation process is as follows.
Neuron σ_li fires at step t, and rule 2|a^x ⟶ a^p(x)/2 is applied to emit one spike. At step t + 1, neurons σ_r, σ_lj, and σ_lk each receive one spike from neuron σ_li. Next, there are two cases according to the value of spikes in neuron σ_r: (i) At step t + 1, if 2n + 1 (n ≥ 1) spikes are contained in σ_r (the value of the corresponding register r is n), then rule 3|a^x ⟶ a^q(x) is applicable. At the next step, neuron σ_c1 holds three spikes, one from neuron σ_li and two from neuron σ_r. Based on the maximum threshold strategy, rule 2|a^x ⟶ λ is used to consume these three spikes. At the same step, neuron σ_lj receives its second spike, and then instruction l_j is simulated by the system Π. (ii) At step t + 1, if neuron σ_r contains only one spike (the value of the corresponding register r is 0), then the spike is removed by rule 1|a^x ⟶ λ. At step t + 2, a spike from neuron σ_li is in neuron σ_c1. The firing of neuron σ_c1 by rule 1|a^x ⟶ a^p(x) causes neuron σ_lk to gain a spike. At the next step, neuron σ_lk has received a total of two spikes, and then instruction l_k, not l_j, is simulated by the system Π.
So the SUB module simulates instruction l_i: (SUB(r), l_j, l_k) correctly. The simulated numerical changes are presented in Table 3.
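At the level of register values, the SUB module's case split can be sketched as follows (illustrative):

```python
def sub_module_outcome(n):
    """n: value of register r when sigma_li fires.
    Returns (new register value, activated instruction label)."""
    if n >= 1:         # sigma_r holds 2n+1 spikes; rule 3|a^x -> a^q(x) fires
        return n - 1, "lj"
    return 0, "lk"     # the lone spike is forgotten; register stays 0
```

The zero test is exactly what distinguishes SUB from ADD and gives the register machine its universality.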
FIN module (shown in Figure 6) - outputting the result of the computation. At step t, neuron σ_lh fires by running rule 2|a^x ⟶ a^p(x)/2, transmitting a spike to σ_1. Originally, neuron σ_1 contains 2(n − 1) (n ≥ 2) spikes, and after receiving one spike, rule 3|a^2 ⟶ a can be used first because of the maximum threshold strategy. Then neurons σ_c1 and σ_out each have a spike from neuron σ_1. The first spike is sent by output neuron σ_out to the environment through 1|a^x ⟶ a^p(x) at step t + 3. Besides, neuron σ_out receives one spike each from neuron σ_1 and neuron σ_c1, causing them to be forgotten by 2|a^x ⟶ λ. Since two spikes are consumed in neuron σ_1 at each step, a spike is continuously given to σ_c1 and σ_out. As a result, both spikes in neuron σ_out from t + 2 to t + n are forgotten by 2|a^x ⟶ λ. Until step t + n + 1, only one spike is kept in σ_1, and then the generated one is emitted by 1|a^x ⟶ a^p(x). At step t + n + 2, neuron σ_out still accepts two spikes, which are forgotten instantly. At step t + n + 3, the last spike is received by neuron σ_out from σ_c1 and sent to the environment through rule 1|a^x ⟶ a^p(x). The time interval between the spikes emitted to the environment is the number computed by the system Π. In short, the numerical result computed through the system Π is (t + n + 3) − (t + 3) = n. Taking the generated number n = 4 as an example, the simulated numerical changes of the output module are reflected in Table 4. Through the above discussion, we can see that the system Π accurately simulates the register machine M, so the theorem holds.
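The FIN module's timing argument reduces to simple interval arithmetic (a bookkeeping sketch; names are illustrative):

```python
def fin_module_interval(t, n):
    """First environment spike at step t + 3, last at step t + n + 3,
    so the interval equals the number n stored in register 1."""
    first, last = t + 3, t + n + 3
    return last - first
```

The interval is independent of the step t at which σ_lh fires, so the output encodes only the register content.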

NSNP-DW Systems as Number Accepting Device
Theorem 2. N acc SNP 2 * = NRE.

Proof. In the number accepting mode, the number in the first register is n (the others are empty), and the calculation starts from l_0; when the calculation stops, the number n is accepted. Similar to Theorem 1, we only need to verify N acc SNP 2 * ⊇ NRE. The constructed NSNP-DW system as a number accepting device includes an INPUT module, a deterministic ADD module, and a SUB module. Figure 7 shows the INPUT module.
Suppose that the first spike is received by neuron σ_in at step t; the firing of σ_in gives a spike to neurons σ_c1, σ_c2, and σ_c3 through rule 1|a^x ⟶ a^p(x). Then, neuron σ_c1 fires and outputs one spike to neuron σ_l0, while neurons σ_c2 and σ_c3 fire by using rule 1|a^x ⟶ a^p(x). At step t + 2, neurons σ_c2 and σ_c3 send one spike to each other, and in particular neuron σ_1 receives two spikes from neuron σ_c2; this continues until neuron σ_in receives the second spike. Thus, neuron σ_1 receives 2n spikes from step t + 2 to step t + n + 1.
At step t + n + 1, neuron σ_in obtains a spike again and reacts using rule 1|a^x ⟶ a^p(x), sending one spike to neurons σ_c1, σ_c2, and σ_c3 again. At step t + n + 2, σ_c2 and σ_c3 each possess two spikes, and rule 2|a^x ⟶ λ is applicable, so the spikes are eliminated. Simultaneously, neuron σ_c1 fires for the second time, executing rule 1|a^x ⟶ a^q(x) to give neuron σ_l0 a spike, whereupon instruction l_0 is activated. Figure 8 displays the simulation of the deterministic ADD instruction l_i: (ADD(r), l_j). Neuron σ_li accepts two spikes, consuming them and sending two spikes to neuron σ_lj through rule 2|a^x ⟶ a^p(x); instruction l_j is simulated. Neuron σ_r gains two spikes, indicating that register r is increased by 1. The SUB module of the accepting mode is the same as in Figure 5.

Theorem 3.
There is a small universal NSNP-DW system possessing 47 neurons for computing functions.
Proof. For the simulation of the register machine M_u′, the designed NSNP-DW system includes INPUT, ADD, SUB, and OUTPUT modules. The general design is visualized in Figure 9. As originally assumed, the value n in register r corresponds to 2n spikes in neuron σ_r. Assume that all neurons are empty in the initial configuration. Figure 10 shows the designed INPUT module. σ_in is the input neuron, reading a spike train 10^(g(x)−1)10^(y−1)1. Finally, 2g(x) and 2y spikes are contained by neurons σ_1 and σ_2, respectively.
As before, a time step (or step) represents the execution time of one rule, and we use it to mark the moment a rule is executed. At step t_1, the first spike from the environment is received by neuron σ_in, and rule 1|a^x ⟶ a^p(x) is applicable, sending one spike to each of neurons σ_c1, σ_c2, σ_c3, σ_c4, σ_c5, and σ_c6. At step t_1 + 1, neurons σ_c3, σ_c4, σ_c5, and σ_c6 have not yet received spikes due to one moment of delay; neuron σ_c1 and neuron σ_c2 each receive one spike but do not fire. At the next step, neurons σ_c3, σ_c4, σ_c5, and σ_c6 receive one spike from neuron σ_in, but neurons σ_c5 and σ_c6 do not fire. Neurons σ_c3 and σ_c4 fire by employing rule 1|a^x ⟶ a^p(x), sending one spike to each other and both sending one spike to σ_1, so neuron σ_1 owns two spikes at step t_1 + 3. In this way, neuron σ_1 continues to receive two spikes per step until step t_1 + g(x) + 2, when σ_1 has a total of 2g(x) spikes.
At step t_2 (actually t_2 = t_1 + g(x) + 2), neuron σ_in fires a second time and applies rule 1|a^x ⟶ a^p(x) to send one spike to each of neurons σ_c1, σ_c2, σ_c3, σ_c4, σ_c5, and σ_c6. Neurons σ_c1 and σ_c2 each have two spikes at step t_2 + 1. Because of the one-step delay, neurons σ_c3, σ_c4, σ_c5, and σ_c6 receive one spike from neuron σ_in at step t_2 + 2. At this step, neurons σ_c3 and σ_c4 each have two spikes, and executing rule 2|a^x ⟶ λ causes them to be removed. In contrast, neurons σ_c5 and σ_c6 each have two spikes and stay active. Two spikes are sent to each other by neurons σ_c5 and σ_c6, and two are given to σ_2 by neuron σ_c5, so neuron σ_2 contains two spikes at step t_2 + 3. Neuron σ_2 accepts two spikes from neuron σ_c5 continuously until step t_2 + y + 3, at which point neuron σ_2 holds 2y spikes in total.
At step t_3 (actually t_3 = t_2 + y + 3), neuron σ_in fires a third time; one spike is consumed, and one is sent to each of neurons σ_c1, σ_c2, σ_c3, σ_c4, σ_c5, and σ_c6 by rule 1|a^x ⟶ a^p(x). At step t_3 + 1, neuron σ_c1, with three spikes, fires, and rule 3|a^x ⟶ a^2q(x) is used to consume three spikes and send two spikes to neuron σ_l0. When neuron σ_l0 receives the two spikes, the instruction l_0 is simulated. Neuron σ_c2 also fires at step t_3 + 1 and sends one spike to neurons σ_c3 and σ_c4 through rule 3|a^x ⟶ a^q(x). At step t_3 + 2, neuron σ_c3 receives spikes from neuron σ_in and neuron σ_c2, and forgetting rule 2|a^x ⟶ λ is applied to remove the two spikes. The same is true for neuron σ_c4. At the same step, neurons σ_c5 and σ_c6 each receive a third spike from neuron σ_in, so they have three spikes and remain inactive; the forgetting rule 3|a^x ⟶ λ is used to remove the three spikes.
In order to illustrate the correctness of the INPUT module, assuming g(x) = 4 and y = 3, the change in the number of spikes at each step can be clearly seen in Table 5.
In addition, the ADD and SUB modules are the same as in Figures 8 and 5. The design and simulation process of the OUTPUT module is the same as in Figure 6, except that register 1 is replaced by register 8 (shown in Figure 11). This is because, when the small universal NSNP-DW system is used as a function computing device, after each instruction simulation, register 8 finally contains the number n (the neuron σ_8 contains 2n spikes). The result n is output through the OUTPUT module.
From the discussion above, the NSNP-DW system as a function computing device can accurately simulate the register machine M_u′ by using 57 neurons: (i) 7 neurons in the INPUT module, (ii) 2 neurons in the OUTPUT module, (iii) 1 neuron for each SUB instruction, 14 in total, (iv) 9 neurons for 9 registers, and (v) 25 neurons for 25 instructions. The register machine M_u′ is simulated by the NSNP-DW system, and each instruction l_i of M_u′ is regarded as a neuron. However, some instructions are consecutive. By exploring the relationships between the instructions of M_u′, the corresponding modules can be combined, instructions can be omitted, and the number of neurons is thereby reduced. A detailed introduction to the initial register machine and its instructions can be found in [40]. For the NSNP-DW system, module combination is mainly divided into three categories: module ADD-ADD, module ADD-SUB, and module SUB-ADD (including modules SUB-ADD-1 and SUB-ADD-2). The working process of modules ADD-SUB and SUB-ADD is closely related to that of module SUB. The working principle is expressed by the structure diagrams. Readers interested in a description of the principle can refer to [11,26,41].
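The neuron bookkeeping for Theorem 3 can be tallied directly (the dictionary keys are just labels for the five counts listed above):

```python
modules = {
    "INPUT": 7,
    "OUTPUT": 2,
    "SUB instructions": 14,
    "registers": 9,
    "instruction labels": 25,
}
total = sum(modules.values())   # 57 neurons before optimization
omitted = 10                    # neurons removed by module combination
small_universal = total - omitted
```

Subtracting the 10 neurons saved by the module combinations gives the 47-neuron system of Theorem 3.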
These are two deterministic ADD instructions. The simulation of each instruction is the same as in Figure 8. The module design, before the instruction l_21 is omitted, is shown in Figure 12.
Taking the omission of l_6 as an example: neuron σ_{l5} sends spikes to σ_5 and σ_{l6}, and then neuron σ_{l6} performs the simulation of instruction l_6. Here, neuron σ_{l6} can be omitted, and neuron σ_{l5} replaces σ_{l6} to directly simulate the SUB instruction; Figure 14 shows that this is possible.

Module SUB-ADD.
For an instruction l_i: (SUB(r), l_j, l_k), when the value of register r ≠ 0, it is decreased by 1 and instruction l_j is executed; otherwise, the instruction labeled l_k is activated. Therefore, considering that consecutive SUB-ADD instructions take two forms, module SUB-ADD is divided into modules SUB-ADD-1 and SUB-ADD-2.
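The SUB instruction being simulated can be stated compactly in code. This is a minimal sketch of the register-machine semantics only (the hypothetical `regs` array stands in for the machine's registers), not of the NSNP-DW module that simulates it:

```python
# Register-machine SUB instruction l_i: (SUB(r), l_j, l_k) -- minimal sketch:
# if register r is nonzero, it is decremented and control moves to l_j;
# otherwise control moves to l_k. `regs` is a hypothetical register array.

def sub_instruction(regs, r, l_j, l_k):
    """Execute SUB(r) and return the label of the next instruction."""
    if regs[r] != 0:
        regs[r] -= 1   # register nonzero: subtract 1 ...
        return l_j     # ... and continue with instruction l_j
    return l_k         # register zero: continue with instruction l_k

regs = [0, 2, 0]
print(sub_instruction(regs, 1, "l_j", "l_k"), regs[1])  # l_j 1
print(sub_instruction(regs, 2, "l_j", "l_k"), regs[2])  # l_k 0
```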
Module SUB-ADD-2: This is the case when l_j is activated and l_k is reserved. There are six pairs of instructions that can be combined in pairs. It is found through observation that the following ADD instruction happens to occupy the first execution position of the previous SUB instruction. Each pair of SUB-ADD instruction combinations can then be written as l_i: (SUB(r_1), l_j, l_k) and l_j: (ADD(r_2), l_g). When register r_1 ≠ r_2, the two instructions can share one neuron σ_{lj}. In this way, six neurons in total, σ_{l1}, σ_{l5}, σ_{l7}, σ_{l9}, σ_{l19}, and σ_{l23}, can be saved; the combination is illustrated in Figure 16. Through the above instruction combination (called "code optimization" in [41]), 10 neurons, σ_{l21}, σ_{l6}, σ_{l10}, σ_{l20}, σ_{l1}, σ_{l5}, σ_{l7}, σ_{l9}, σ_{l16}, and σ_{l23}, can be omitted. In the end, 47 neurons are enough to obtain a small universal NSNP-DW system for computing functions.

(Table 5: The results of spikes in the INPUT module when g(x) = 4 and y = 3.)
(Figure 11: OUTPUT module.)
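The neuron tally reported above can be checked directly: the components of the direct simulation sum to 57 neurons, and the code optimization removes 10 of them, leaving 47.

```python
# Checking the neuron counts reported for the function-computing system:
# the direct simulation of M'_u uses 57 neurons, and the "code optimization"
# (module combination) removes 10 instruction neurons, leaving 47.

counts = {
    "INPUT module": 7,
    "OUTPUT module": 2,
    "SUB instructions": 14,   # 1 extra neuron per SUB instruction
    "registers": 9,           # 1 neuron per register
    "instructions": 25,       # 1 neuron per instruction label
}
total = sum(counts.values())
omitted = 10  # neurons saved by combining ADD-ADD, ADD-SUB, SUB-ADD modules
print(total, total - omitted)  # 57 47
```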

Small Universal NSNP-DW Systems as Number Generator.
In the simulation of the number generator, the INPUT module can be combined with the OUTPUT module, as shown in Figure 17. In the constructed INPUT-OUTPUT module, instruction l_h is removed, and register 8 is no longer needed. The spike train that the input neuron receives from the environment is 10^{g(x)−1}1, neuron σ_1 is loaded with 2g(x) spikes, and neuron σ_2 receives 2n spikes nondeterministically. Neuron σ_out fires twice in succession, and the time interval n between the two spikes is the numerical result generated. The simulation of NSNP-DW systems as a number generator uses 43 neurons in total. There is thus a small universal NSNP-DW system possessing 43 neurons as a number generator.
The specific simulation will not be introduced in detail.
We use an example to analyze the feasibility of the number generator simulation; assuming g(x) = 2 and n = 2, the results of each step are shown in Table 6.

Discussion.
Theorem 3 gives a Turing universal NSNP-DW system with fewer neurons as a function computing device. To verify the computing power of the NSNP-DW system more intuitively, Table 7 compares the number of computing units of this variant and related systems. According to Table 7, NSNP systems, SNP systems, SNP-DS systems, and recurrent neural networks use 117, 67, 56, and 886 neurons, respectively, to achieve Turing universality for computing functions, while NSNP-DW systems require only 47 neurons. Besides, according to Table 8, 121 neurons are saved in the simulation of the number generator. In short, NSNP-DW systems use fewer neurons than these systems, and the computational power of the NSNP system has been effectively improved.

A Uniform Solution to Subset Sum Problem
The Subset Sum problem is a typical NP-complete problem, proposed in [43]. We use the NSNP-DW system to give a uniform solution to the Subset Sum problem in a nondeterministic operating mode.
The Swish function and the LReLU function are taken as the spike consumption function and the spike generation function, respectively, in this problem.
The spike consumption function is ϕ(x) = Swish(x) = x · sigmoid(x), and the spike generation function is c(x) = LReLU(x) = x for x > 0 and αx for x ≤ 0.

The sense in which the solution is uniform is that the system only "recognizes" the number n when solving the problem; the instance of the problem must be introduced into the system. Neurons σ_{g_{i,3}} (1 ≤ i ≤ n) are the input neurons of the system, and the positive integers v_i (1 ≤ i ≤ n) of the instance are encoded into them. At the beginning of the computation, 3(v_1 − 1) spikes (a^{3(v_1−1)}) enter neuron σ_{g_{1,3}}, 3(v_2 − 1) spikes (a^{3(v_2−1)}) enter neuron σ_{g_{2,3}}, ..., and 3(v_n − 1) spikes (a^{3(v_n−1)}) enter neuron σ_{g_{n,3}}. In the initial configuration (state) of the system, every neuron is empty except the neurons σ_i (1 ≤ i ≤ n), which contain four spikes each.
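The two nonlinear functions can be stated concretely. This is a sketch under the definitions above; the leak slope α is not fixed numerically in the text, so the value 0.01 here is purely illustrative.

```python
import math

# The two nonlinear functions used in the Subset Sum construction:
# phi(x) = Swish(x) = x * sigmoid(x) as the spike consumption function, and
# c(x) = LReLU(x) as the spike generation function. The leak slope alpha is
# not specified in the text; 0.01 is an illustrative placeholder.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def swish(x: float) -> float:          # spike consumption function phi(x)
    return x * sigmoid(x)

def lrelu(x: float, alpha: float = 0.01) -> float:  # generation function c(x)
    return x if x > 0 else alpha * x

print(swish(4))                      # 4 * sigmoid(4) spikes consumed when x = 4
print(lrelu(4), lrelu(3), lrelu(2))  # c(4)=4, c(3)=3, c(2)=2, as in the proof
```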
In the first computation step, both rules in neuron σ_i are equally likely to be applied (because they have the same threshold). The nondeterministic use of these two rules means that the system solves the Subset Sum problem in a nondeterministic mode of operation, and it also corresponds to whether the integer v_i is placed in the subset B. In the following, we carry out a complete derivation.
Proof. Neuron σ_i initially has four spikes. At step 1, if rule 4|a^{ϕ(x)} ⟶ a^{(3/4)c(x)} is selected, then neuron σ_i consumes 4 · sigmoid(4) spikes and sends three spikes to each of neurons σ_{g_{i,1}} and σ_{g_{i,2}} (because c(x) = 4). At step 2, neuron σ_{g_{i,1}} forgets its three spikes by rule 3|a^{ϕ(x)} ⟶ λ. At the same time, neuron σ_{g_{i,2}} fires by its rule and sends three spikes (because c(x) = 3). At the end of this step, neurons σ_{g_{i,1}} and σ_{g_{i,2}} are back in their original states.

At step 3, neuron σ_{g_{i,3}} holds a total of 3v_i − 1 spikes and fires. Rule a^2(a^3)^+|a^3 ⟶ a^3 is used first, sending three spikes to each of neurons σ_{g_{i,4}} and σ_out. This process continues for v_i − 1 steps, until rule a^2(a^3)^+|a^3 ⟶ a^3 can no longer be activated. Neurons σ_{g_{i,4}} and σ_out still cannot become active after receiving 3k (k ∈ N) spikes. When only two spikes are left in neuron σ_{g_{i,3}}, rule a^2 ⟶ a^2 is used, and finally two spikes are sent to each of neurons σ_{g_{i,4}} and σ_out. After possessing 3k + 2 spikes, neuron σ_{g_{i,4}} fires and emits one spike to each of neurons σ_out and σ_h. In the next step, neuron σ_out still cannot fire, because it takes 3k + 1 spikes to be activated. Conversely, neuron σ_h, having received n spikes, is activated by rule a^n ⟶ a and sends one spike to neuron σ_out. In this way, the output neuron σ_out can fire and emit spikes to the environment.

If neuron σ_i initially uses rule 4|a^{ϕ(x)} ⟶ a^{(1/2)c(x)} instead, 4 · sigmoid(4) spikes are consumed and two spikes are sent to each of neurons σ_{g_{i,1}} and σ_{g_{i,2}} (because c(x) = 4). In the second step, the two spikes received by neuron σ_{g_{i,2}} are removed by rule 2|a^{ϕ(x)} ⟶ λ, while neuron σ_{g_{i,1}} uses rule 2|a^{ϕ(x)} ⟶ a^{(1/2)c(x)} and sends one spike to neuron σ_h (because c(x) = 2). Before neuron σ_h fires, neuron σ_out remains inactive.
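The two nondeterministic rules of neuron σ_i can be checked numerically under the function definitions above: with x = 4 spikes, rule 4|a^{ϕ(x)} ⟶ a^{(3/4)c(x)} emits (3/4) · c(4) = 3 spikes, while rule 4|a^{ϕ(x)} ⟶ a^{(1/2)c(x)} emits (1/2) · c(4) = 2 spikes. This sketch assumes c = LReLU and ϕ = Swish as stated in the construction; the helper names are ours.

```python
import math

# Numerically checking the two nondeterministic rules of neuron sigma_i
# (x = 4 spikes initially). phi(x) = x * sigmoid(x) spikes are consumed;
# production is (3/4)*c(x) or (1/2)*c(x) with c = LReLU, so c(4) = 4.
# Assumes the function definitions from the construction; names are ours.

def sigmoid(x): return 1.0 / (1.0 + math.exp(-x))
def phi(x):     return x * sigmoid(x)             # Swish: spikes consumed
def c(x):       return x if x > 0 else 0.01 * x   # LReLU (illustrative alpha)

x = 4
consumed = phi(x)                  # 4 * sigmoid(4) spikes consumed
rule_1_emits = (3 / 4) * c(x)      # -> 3 spikes: v_i is put into subset B
rule_2_emits = (1 / 2) * c(x)      # -> 2 spikes: v_i is left out of B
print(rule_1_emits, rule_2_emits)  # 3.0 2.0
```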
After neuron σ_h receives a total of n spikes from the neurons σ_{g_{i,1}} (1 ≤ i ≤ n), it fires and sends a spike to neuron σ_out. In the next step, neuron σ_out contains only one spike, so it neither fires nor emits spikes into the environment.
At this point, the process of uniformly solving the Subset Sum problem is complete. Clearly, the system requires a total of 5n + 2 neurons. After the system halts, if there are exactly S spikes in the environment, the answer to the problem is "yes," meaning that there is a subset B ⊆ V such that Σ_{b∈B} b = S holds; otherwise, the answer is "no." This shows that the NSNP-DW system solving the Subset Sum problem is complete.
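The nondeterministic computation in effect explores every subset B ⊆ V, one per computation branch. The same decision predicate can be checked by a deterministic brute-force enumeration (this is a check of the predicate only, not a simulation of the P system):

```python
from itertools import combinations

# Deterministic brute-force check of the Subset Sum predicate that the
# nondeterministic NSNP-DW system decides: does some B, a subset of V,
# satisfy sum(B) == S? Each branch of the system corresponds to one B.

def subset_sum(v, s):
    return any(sum(b) == s
               for r in range(len(v) + 1)
               for b in combinations(v, r))

print(subset_sum([3, 5, 7], 12))  # True: B = {5, 7}
print(subset_sum([3, 5, 7], 11))  # False
```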
In the computation process, the neurons compute in parallel, while the rules within each neuron are applied sequentially. Through computation and reasoning, it can be seen that neurons σ_i, σ_{g_{i,1}}, σ_{g_{i,2}}, σ_{g_{i,4}}, and σ_h each fire once, and the neurons σ_{g_{i,3}} fire Σ_{i=1}^{n} v_i times in total. After all other neurons stop computing, neuron σ_out can fire at most Σ_{i=1}^{n} v_i times. Therefore, the system needs Σ_{i=1}^{n} 2v_i + 5 steps to complete the computation. In addition, we choose nonlinear functions as the spike consumption and generation functions, which is closer to reality and reflects the significance of nonlinear functions in the NSNP-DW system.

Conclusions and Further Work
The nonlinear spiking neural P (NSNP) systems are variants of spiking neural P (SNP) systems in which nonlinear functions are used flexibly. We focus on the computing power of NSNP systems in this work. Two mechanisms, delays and weights, are introduced, and nonlinear spiking neural P systems with delays and weights on synapses (NSNP-DW) are proposed. An explicit example is given to visually demonstrate the operation of the NSNP-DW system. Through a series of simulations, 47 and 43 neurons are shown to be sufficient for constructing small universal NSNP-DW systems as a function computing device and as a number generator, respectively. Compared with the NSNP systems [39], the NSNP-DW system uses 70 fewer neurons as a function computing device and 121 fewer neurons as a number generator. Finally, a uniform solution to the Subset Sum problem is obtained efficiently using the NSNP-DW system.

For further work, the NSNP-DW system, as a distributed parallel computing model, can be combined with clustering algorithms to improve algorithm efficiency. As far as the computational power of the NSNP-DW system is concerned, we aim to prove that the 47 and 43 neurons used for simulating function computation and number generation, respectively, are minimal. In particular, in NSNP systems the number of spikes breaks through the integer limit and is governed by nonlinear functions; in view of this breakthrough, we can try to link NSNP-DW systems with neural networks to open up further interesting research.

Data Availability
No datasets were used in this article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.