Perceptron Implementation of Triple-Valued Logic Operations

1 Perceptron Implementation of Triple-Valued Logic Operations
Reporter: Changbing Tang
Advisors: Fangyue Chen, Xiang Li
Adaptive Networks and Control Laboratory, Department of Electronic Engineering, Fudan University

2 Outline
• Introduction
• Basic concepts of MVLFs
• Perceptron implementation of the MVLFs
  A. DNA-like learning algorithm of the n–kVLF
  B. Inverse offset-level method
  C. Realization of “XOR” Operation
  D. Realization of the Half-Adder
Reference: C. B. Tang, F. Y. Chen, and X. Li, “Perceptron Implementation of Triple-Valued Logic Operations,” IEEE Trans. Circuits Syst. II, vol. 58, no. 9, Sep. 2011.

3 Introduction Multiple-valued logic (MVL) has attracted considerable attention in many fields, ranging from artificial neural networks (ANNs) to circuit design techniques.

4 How to implement these MVLFs?
Previous works:
• ANNs have become a powerful tool for implementing multiple-valued functions.
• Circuit design techniques have been applied to perform the basic MVL operations.
Our method:
• DNA-like learning algorithm
• Perceptron
• Inverse offset-level method

5 Basic concepts of MVLF
An n-input k-valued logic function (n–kVLF) is a map $F: \{0, 1, \dots, k-1\}^n \to \{0, 1, \dots, k-1\}$, $F(x) = v$. (1)
Let $p = \sum_{i=1}^{n} x_i \cdot k^{n-i}$ be the decimal code of the input window $x$. The map (1) can then be rewritten as $F(x^{(p)}) = v_p$, $p = 0, 1, \dots, k^n - 1$. (2)
Such a map generates the output symbol tape $[v_0, v_1, \dots, v_{k^n-1}]$, consisting of $k^n$ symbols. Conversely, the tape $[v_0, v_1, \dots, v_{k^n-1}]$ completely determines an n–kVLF.
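To make the decimal coding $p = \sum_{i=1}^{n} x_i \cdot k^{n-i}$ and the tape representation concrete, here is a minimal Python sketch (not code from the paper); the example tape is the ternary “XOR” tape that appears on a later slide.

```python
from itertools import product

def decimal_code(x, k):
    """Decimal code p = sum_i x_i * k^(n-i) of an input window x = (x_1, ..., x_n)."""
    n = len(x)
    return sum(xi * k ** (n - 1 - i) for i, xi in enumerate(x))

def windows(n, k):
    """All k^n input windows, listed in increasing order of their decimal code p."""
    return list(product(range(k), repeat=n))

# A 2-3VLF is completely determined by a tape of 3^2 = 9 output symbols.
k, n = 3, 2
tape = [0, 1, 2, 1, 1, 1, 2, 1, 0]          # ternary "XOR" tape from a later slide
F = {x: tape[decimal_code(x, k)] for x in windows(n, k)}
print(F[(1, 2)])                             # v_5 = 1, since p = 1*3 + 2 = 5
```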

6 An example for a 2–3VLF
The $3^2 = 9$ input windows, listed in order of their decimal code $p$, are (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2). Each window is assigned one of the 3 output symbols of the output tape, so there are $3^{(3^2)}$ distinct 2–3VLFs.

7 Perceptron Implementation
The ANN implementation of an n–kVLF is equivalent to designing a perceptron, i.e., finding a weight vector $W = (w_1, w_2, \dots, w_n)^T \in \mathbb{R}^n$ and a threshold value $\theta$ such that $v_p = f\left( \sum_{i=1}^{n} w_i \cdot x_i^{(p)} - \theta \right)$. (3)
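The slides do not show the explicit form of the output function $f$; a common choice for a k-valued perceptron is a monotone staircase (multi-threshold) function that quantizes the offset level into $\{0, 1, \dots, k-1\}$. The sketch below assumes that form, with made-up weights and thresholds rather than values from the paper.

```python
import numpy as np

def staircase(sigma_bar, thresholds):
    """Monotone multi-threshold activation: counts how many thresholds sigma_bar reaches."""
    return int(np.sum(np.asarray(sigma_bar) >= np.asarray(thresholds)))

def perceptron_output(x, w, theta, thresholds):
    """Eq. (3): v_p = f(sum_i w_i * x_i^(p) - theta), with f the staircase above."""
    sigma_bar = float(np.dot(w, x)) - theta
    return staircase(sigma_bar, thresholds)

# Illustrative 2-3VLF perceptron (weights, theta, thresholds are arbitrary examples).
w, theta, thresholds = np.array([1.0, 1.0]), 0.0, [1.5, 2.5]
tape = [perceptron_output((x1, x2), w, theta, thresholds)
        for x1 in range(3) for x2 in range(3)]
print(tape)   # the output tape this particular perceptron realizes
```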

8 Some definitions
Excitative value: $\sigma_p = W^T \cdot x^{(p)} = \sum_{i=1}^{n} w_i \cdot x_i^{(p)}$. Offset level: $\bar{\sigma}_p = \sum_{i=1}^{n} w_i \cdot x_i^{(p)} - \theta$.
Excitative sequence: $\{\sigma_p\}_{p=0}^{k^n-1}$. Offset-level sequence: $\{\bar{\sigma}_p\}_{p=0}^{k^n-1}$.
The transitions of the output symbol tape $[v_0, v_1, \dots, v_{k^n-1}]$ are then examined along the sequence $\{\sigma_p\}_{p=0}^{k^n-1}$.
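As a small illustration of these definitions (with made-up weights, not values from the paper), the following sketch computes the excitative sequence $\{\sigma_p\}$ and the offset-level sequence $\{\bar{\sigma}_p\}$ over all $k^n$ input windows, indexed by the decimal code $p$.

```python
from itertools import product

def sequences(w, theta, k):
    """Excitative sequence {sigma_p} and offset-level sequence {sigma_bar_p}."""
    n = len(w)
    sigma, sigma_bar = [], []
    for x in product(range(k), repeat=n):            # windows in order of p
        s = sum(wi * xi for wi, xi in zip(w, x))     # sigma_p = W^T x^(p)
        sigma.append(s)
        sigma_bar.append(s - theta)                  # offset level sigma_bar_p
    return sigma, sigma_bar

sigma, sigma_bar = sequences(w=[2.0, 1.0], theta=1.0, k=3)   # illustrative values
print(sigma)       # [0.0, 1.0, 2.0, 2.0, 3.0, 4.0, 4.0, 5.0, 6.0]
print(sigma_bar)   # the same sequence shifted down by theta
```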

9 Perceptron implementation of the MVLFs --- DNA-like learning algorithm of the n–kVLF
In [1], the concept of a DNA-like sequence was introduced, which is similar to the DNA sequence in biological systems. In this paper, the concept is extended to the n–kVLF. [1] F. Y. Chen, G. R. Chen, G. L. He, X. B. Xu, and Q. B. He, “Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations,” IEEE Trans. Neural Netw., vol. 20, no. 10, pp. 1645–1658, Oct. 2009.

10 An example for k=3

11 DNA-like learning algorithm of n–kVLF
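The algorithm itself appears on the slide as a figure and is not reproduced in this transcript. As a rough sketch of the kind of test such a learning procedure can rely on (my own illustration, not necessarily the paper’s exact DNA-like criterion), the function below checks whether a candidate weight/threshold pair can realize a given output tape with a staircase activation using at most $t$ thresholds: windows with equal offset level must share an output symbol, and the outputs, sorted by offset level, may change value at most $t$ times.

```python
def realizable_with_t_thresholds(sigma_bar, tape, t):
    """Check realizability by a staircase with at most t thresholds, assuming both the
    thresholds and the output value on each interval can be chosen freely."""
    pairs = sorted(zip(sigma_bar, tape))
    # Equal offset levels must be mapped to the same output symbol.
    for (s1, v1), (s2, v2) in zip(pairs, pairs[1:]):
        if s1 == s2 and v1 != v2:
            return False
    # Count output transitions along the sorted offset levels.
    changes = sum(v1 != v2 for (_, v1), (_, v2) in zip(pairs, pairs[1:]))
    return changes <= t

# Example: offset levels from made-up weights w = (1, 3), theta = 0, vs. the XOR tape.
sigma_bar = [x1 * 1 + x2 * 3 for x1 in range(3) for x2 in range(3)]
print(realizable_with_t_thresholds(sigma_bar, [0, 1, 2, 1, 1, 1, 2, 1, 0], t=6))  # True
```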

12 Perceptron implementation of the MVLFs --- Inverse offset-level method
Take a 2–3VLF as an example. Its input windows are $(x_1, x_2)$ with $x_1, x_2 \in \{0, 1, 2\}$, and its output is $v = f(w_1 x_1 + w_2 x_2 - \theta)$. (4) Let $\bar{\sigma}_p = w_1 x_1 + w_2 x_2 - \theta$, where $p = 3x_1 + x_2$ is the decimal code of the window.
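The slides do not reproduce the steps of the inverse offset-level method itself. Purely as an illustration of the relationship it works with (my own sketch, with made-up weights), the snippet below groups the offset levels $\bar{\sigma}_p$ of the nine windows by the output symbol they are required to produce; a workable choice of $(w_1, w_2, \theta)$ must keep these groups separable by the activation’s thresholds.

```python
from itertools import product

tape = [0, 1, 2, 1, 1, 1, 2, 1, 0]            # target outputs (ternary "XOR" tape)

def offset_levels(w1, w2, theta, k=3):
    """sigma_bar_p = w1*x1 + w2*x2 - theta, listed in order of p = 3*x1 + x2."""
    return [w1 * x1 + w2 * x2 - theta for x1, x2 in product(range(k), repeat=2)]

levels = offset_levels(w1=1.0, w2=1.0, theta=0.0)     # made-up weights
groups = {v: [s for s, t in zip(levels, tape) if t == v] for v in sorted(set(tape))}
print(groups)   # with these weights the groups overlap, so (1, 1, 0) would not work;
                # the method instead determines weights for which they can be separated
```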

13

14 Perceptron implementation of the MVLFs
---Realization of “XOR” Operation
“XOR” is represented as $x_1 \otimes x_2 = (\bar{x}_1 \wedge x_2) \vee (x_1 \wedge \bar{x}_2)$.
The output tape of the “XOR” operation is [0, 1, 2, 1, 1, 1, 2, 1, 0].
Traditional method vs. our method: only one perceptron is needed.
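Reading $\wedge$ as min, $\vee$ as max, and the overbar as the ternary complement $\bar{x} = 2 - x$ (the usual MVL interpretation; the slide itself does not restate these definitions), the formula reproduces the tape exactly:

```python
def ternary_xor(x1, x2, k=3):
    """x1 XOR x2 = (not(x1) AND x2) OR (x1 AND not(x2)), with AND=min, OR=max, not(x)=(k-1)-x."""
    return max(min((k - 1) - x1, x2), min(x1, (k - 1) - x2))

tape = [ternary_xor(x1, x2) for x1 in range(3) for x2 in range(3)]
print(tape)   # [0, 1, 2, 1, 1, 1, 2, 1, 0] -- matches the output tape on the slide
```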

15 Perceptron implementation of the 2–3VLF
--- Realization of the Half-Adder
The half-adder is a well-known function in digital electronics: given the two inputs $x_1$ and $x_2$, it generates the two outputs “SUM” and “CARRY”.
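The slide only names the two outputs. For a ternary half-adder the natural definitions are SUM $= (x_1 + x_2) \bmod 3$ and CARRY $= \lfloor (x_1 + x_2)/3 \rfloor$; this is an assumption here, since the slide does not spell them out. A short sketch of the resulting output tapes:

```python
def half_adder(x1, x2, k=3):
    """Ternary half-adder: SUM is the base-3 digit of x1 + x2, CARRY is the overflow."""
    s = x1 + x2
    return s % k, s // k                     # (SUM, CARRY)

sum_tape   = [half_adder(x1, x2)[0] for x1 in range(3) for x2 in range(3)]
carry_tape = [half_adder(x1, x2)[1] for x1 in range(3) for x2 in range(3)]
print(sum_tape)     # [0, 1, 2, 1, 2, 0, 2, 0, 1]
print(carry_tape)   # [0, 0, 0, 0, 0, 1, 0, 1, 1]
```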

16 Our method
SUM: (realization shown on the original slide)
CARRY: (realization shown on the original slide)

17 Thank You!

