Supplementary Examples to Techniques




6 August 2003

Remote Viewing Location: TRW/Northrop Grumman


Carissa Astudillo (carissa.astudillo@ngc.com)
Dean Baluch (dean.baluch@ngc.com)
Jose Tiscareño (jose.tiscareno@ngc.com)

Abstract

Throughout the class textbook, Techniques in Advanced Switching Theory, the author, Dr. James Ellison, provides examples to explain algorithms and other difficult course concepts. The purpose of this paper is to aid the novice reader of advanced switching theory by providing step-by-step explanations and by clarifying some of the more difficult concepts in the book with detailed examples. This work focuses on Chapters 1 through 6, which the authors of this paper divided among themselves.



References

  1. James Ellison. Techniques in Advanced Switching Theory.

  2. James Ellison. EE552 class notes.



Chapter 1

Examples prepared and written by Carissa Astudillo

Example: Simple Non-Disjunctive Decomposition

Given the function f(t,u,v,x,y), let us find a simple non-disjunctive decomposition of the form h(t,u,v,g(v,x,y)).


Let f be the function defined by the K-map in Figure 2.

The first thing we want to do is assign the X-, Y-, and Z-variables. The X-variables correspond to the variables in the inner function g(v,x,y), the Y-variables correspond to the overlapping or shared variables in the function, and the Z-variables correspond to the variables in the outer function h(t,u,v,g(v,x,y)). In this example, the X-variables are (x,y), the Y-variable is v, and the Z-variables are (t,u). Figure 1 shows a general example of how to construct a K-map for any simple non-disjunctive decomposition, where the Y-variables correspond to the shared (overlapping) variables in our function.

Figure 1: X,Y,Z Variable K-map assignments


Figure 2 represents the K-map corresponding to the specific example above. Since v is the only shared (overlapping) variable, we divide the K-map into two groups, one for each value, v=0 and v=1. Note that in general we would divide the K-map into 2^m groups, where m is the number of shared variables. We can see that for each group (v=0 and v=1), there are two distinct columns in the K-map (in order to have a simple non-disjunctive decomposition, each group must contain at most two distinct columns). Next, the two distinct columns are arbitrarily labeled k and l as shown in Figure 2.

Figure 2: K-map for our function f(t,u,v,x,y)

Now let us first compute g(v,x,y). To do this, we look at the two distinct columns for each group (v=0 and v=1) from the K-map in Figure 2 and assign g = 0 for our “k columns” and g = 1 for our “l columns”. Figure 3 shows the values of g for each column in xy and the corresponding K-map. From this we compute g(v,x,y) = y + vx.

Figure 3: Computation of g(v,x,y)


Next, let us compute h(t,u,v,g(v,x,y)) = h(t,u,v,g). To do this we construct another K-map with (t,u) as our rows, v as our columns, and two groups for g=0 and g=1. Looking at Figure 2 and Figure 3, we recall that the “k columns” were assigned g=0 and the “l columns” were assigned g=1. Using this information, we fill in the K-map for h(t,u,v,g) as shown in Figure 4. From this we compute the simple non-disjunctive decomposition, h, for the function f:

h(t,u,v,g) = h(t,u,v,g(v,x,y)) = f(t,u,v,x,y).



Figure 4: K-map of h(t,u,v,g)
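The column-multiplicity test used above can also be checked mechanically. The following sketch is our own illustration, not from the textbook, and example_f is a hypothetical stand-in, since the actual f is defined only by the K-map in Figure 2: for each value of the shared variable v it collects the distinct columns of the decomposition chart and verifies that there are at most two.

```python
# Minimal sketch (not from the textbook): testing whether f(t,u,v,x,y) admits a
# simple non-disjunctive decomposition h(t,u,v,g(v,x,y)).  example_f is a
# hypothetical placeholder for the f defined by Figure 2.

from itertools import product

def example_f(t, u, v, x, y):
    # Placeholder only; substitute the function from the K-map in Figure 2.
    return (y | (v & x)) ^ (t & u)

def simple_decomposition_exists(f):
    """For each value of the shared variable v, build the decomposition chart:
    rows indexed by the free variables (t,u), columns by the bound variables
    (x,y).  A simple non-disjunctive decomposition exists iff every v-group
    has at most two distinct columns."""
    for v in (0, 1):
        columns = {tuple(f(t, u, v, x, y) for t, u in product((0, 1), repeat=2))
                   for x, y in product((0, 1), repeat=2)}
        if len(columns) > 2:
            return False
    return True

print(simple_decomposition_exists(example_f))
```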



Example: Logical Quantifier

Given the function f(a,b,c,d) defined by the K-map shown in Figure 5, let us find a solution to the quantified expression.


Figure 5: K-map for function f(a,b,c,d)


First, recall that in general we have defined the following,



Using these definitions, and applying them to our specific example, we can solve for the quantified expression. Let us do this in parts.

First, let us find g.

Next, let us set up the quantified expression and expand it in steps:

  • using our definition of g,

  • using the definition of the quantifier,

  • setting g equal to its Boolean equivalent, and

  • solving using Shannon Decomposition.
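The quantifier identities used in these steps are easy to state in code. The sketch below is our own illustration with a hypothetical f(a,b,c,d) (the actual function is given only by the K-map in Figure 5); it computes the existential and universal quantifiers as the OR and AND of the two Shannon cofactors, which is the identity used in the Shannon Decomposition step above.

```python
# Minimal sketch (hypothetical f, not the book's Figure 5 function): quantifiers
# computed from Shannon cofactors.
#   exists x . f = f|x=0 OR  f|x=1
#   forall x . f = f|x=0 AND f|x=1

def exists(f, i):
    """Quantify out the i-th argument: OR of the two cofactors."""
    return lambda *a: f(*a[:i], 0, *a[i:]) | f(*a[:i], 1, *a[i:])

def forall(f, i):
    """Quantify out the i-th argument: AND of the two cofactors."""
    return lambda *a: f(*a[:i], 0, *a[i:]) & f(*a[:i], 1, *a[i:])

# Hypothetical stand-in for the f(a,b,c,d) of Figure 5.
f = lambda a, b, c, d: (a & b) | (c & (1 - d))

g = exists(f, 3)          # g(a,b,c) = exists d . f(a,b,c,d)
print([g(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)])
```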


Example: Constructing a Reduced Ordered Binary Decision Diagram (ROBDD)

Let us work through an example of building an ROBDD by recursively applying Shannon’s Decomposition Theorem to each node.


Given the function f(t,u,v,w); the K-map for this function is shown in Figure 6.


Figure 6: K-map for function f(t,u,v,w)


Note that the variable ordering is t, u, v, w. First we begin with node t and, using Shannon's Decomposition Theorem, we find the two cofactors f|t=0 and f|t=1 as shown in Figure 7.


Figure 7: Initial ROBDD for function f(t,u,v,w)

Next we again use Shannon's Decomposition for the next node (which is u in our example) and solve for the four cofactors f|t=0,u=0, f|t=0,u=1, f|t=1,u=0, and f|t=1,u=1.

Note that when we solve for these cofactors, in some cases we can simplify our expression, which helps in eliminating nodes from our diagram. If our function for f equals 1 or 0, then the branch for that node goes directly to either the “1-leaf” or “0-leaf”, respectively. Also, if our function for f equals a single variable (as in this example), then we can go directly to that node as shown in Figure 8.


Figure 8: Expanded ROBDD for function f(t,u,v,w)

Again, we apply Shannon's Decomposition on the remaining nodes in our BDD, eliminating nodes where possible, as shown in Figure 9.


Figure 9: Final ROBDD for function f(t,u,v,w)
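The same recursive procedure can be written compactly in code. The sketch below is a generic illustration (example_f is a hypothetical stand-in, since the real f(t,u,v,w) is given only by the K-map in Figure 6); it applies Shannon's Decomposition level by level and uses the two reduction rules: skip a node whose branches are equal, and share structurally identical nodes through a unique table.

```python
# Minimal sketch of ROBDD construction by recursive Shannon decomposition with
# the two reduction rules.  example_f is a hypothetical stand-in for Figure 6.

VARS = ("t", "u", "v", "w")        # variable ordering t < u < v < w
unique = {}                        # (var, low, high) -> shared node

def mk(var, low, high):
    if low == high:                # rule 1: redundant test, skip the node
        return low
    return unique.setdefault((var, low, high), (var, low, high))   # rule 2: share

def build(f, level=0, args=()):
    if level == len(VARS):         # all variables assigned: constant leaf 0 or 1
        return f(*args)
    low  = build(f, level + 1, args + (0,))    # cofactor with this variable = 0
    high = build(f, level + 1, args + (1,))    # cofactor with this variable = 1
    return mk(VARS[level], low, high)

example_f = lambda t, u, v, w: (t & u) | (v & w)
root = build(example_f)
print(root, "-", len(unique), "internal nodes")
```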


Chapter 2

Examples prepared and written by Jose Tiscareño

Example: Dealing with infinite expressions of H


The polynomial transform H = <1,1,1,1,0,1,0,1,1,0,0,1,0,0,0>0 can be written as an infinite sum by applying the following procedure to H.

powers:  45 44 43 42 41 40 39 38 37 36 35 34 33 32 31
         30 29 28 27 26 25 24 23 22 21 20 19 18 17 16
         15 14 13 12 11 10  9  8  7  6  5  4  3  2  1

H = <1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0>0 (1)
First of all, we write the polynomial expression with powers from 1 to 15, 16 to 30, and 31 to 45. We then get the following expression:
…+D(D44+D43+D42+D41+D39+D37+D36+D33)

+D(D29+D28+D27+D26+D24+D22+D21+D18)

+D(D14+D13+D12+D11+D9+D7+D6+D3) (2)
We then factor out a D3 factor from every quantity making up the polynomial expression in (2).
…+D4(D41+D40+D39+D38+D36+D34+D33+D30)

+D4(D26+D25+D24+D23+D21+D19+D18+D15)

+D4(D11+D10+D9+D8+D6+D4+D3+1) (3)
Then we rewrite the first and second quantities of the polynomial expression in terms of the last quantity.
…+D4 D30(D11+D10+D9+D8+D6+D4+D3+1)

+D4 D15(D11+D10+D9+D8+D6+D4+D3+1)

+D4 D0(D11+D10+D9+D8+D6+D4+D3+1) (4)
With polynomial expression (4) we factor out a D4 (D11+D10+D9+D8+D6+D4+D3+1) factor from every quantity in the expression.

We then get the following expression:


D4 (…+D30+D15+D0)(D11+D10+D9+D8+D6+D4+D3+1) (5)
Expression (5) can be simplified as:
D4 (D11+D10+D9+D8+D6+D4+D3+1) · Σ (D15)K, summed over K = 0 to ∞. (6)
Then we use the identity (x+1) · Σ xK = 1 (with the sum over K = 0 to ∞ and arithmetic mod 2) to get Σ (D15)K = 1/(D15+1), which is substituted into expression (6) to get the following expression.
H = D4 (D11+D10+D9+D8+D6+D4+D3+1) / (D15+1) (7).

Since H is the impulse response of the feedback system, the input is X = 1 and

H = D4/(D4+D3+1) (8),

the transfer function of the system. The final step is to equate (8) and (7).
D4/(D4+D3+1) = D4 (D11+D10+D9+D8+D6+D4+D3+1) /(D15+1)
Eliminate D4 and cross multiply to get the following expression.
(D11+D10+D9+D8+D6+D4+D3+1) (D4+D3+1) = (D15+1)
Then we verify that both quantities are equal. Multiplying (D11+D10+D9+D8+D6+D4+D3+1)(D4+D3+1) gives the following partial products:
D15+D14+D11

D14+D13+D10

D13+D12+D9

D12+D11+D8

D10+D9+D6

D8+D7+D4

D7+D6+D3

D4+D3+1


Remember that you can cancel Dx terms that appear in pairs, since we are performing mod 2 operations. The final result of this multiplication is (D15+1), which agrees with the expression (D11+D10+D9+D8+D6+D4+D3+1)(D4+D3+1) = (D15+1).
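The mod-2 bookkeeping in this multiplication is mechanical, so it is easy to double-check. The short sketch below (our own check, not part of the text) represents each polynomial as a set of exponents and lets the symmetric difference do the cancelling; it confirms that the product is D15+1.

```python
# Minimal sketch: GF(2) polynomial multiplication with sets of exponents.
# A term cancels whenever it appears an even number of times.

def gf2_mul(p, q):
    result = set()
    for a in p:
        for b in q:
            result ^= {a + b}      # symmetric difference adds or cancels the term
    return result

p = {11, 10, 9, 8, 6, 4, 3, 0}     # D^11+D^10+D^9+D^8+D^6+D^4+D^3+1
q = {4, 3, 0}                      # D^4+D^3+1
print(sorted(gf2_mul(p, q), reverse=True))   # [15, 0], i.e. D^15 + 1
```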

Example: Feedback System implementation of H.

Let us say we want a feedback system that implements the following transfer function

H=D3/(D5+D2+D+1) (1).

First of all, we can draw either a single-string or a two-string representation of this transfer function. Before we start drawing either representation, note that the numerator and denominator of the transfer function tell us a lot of information that makes either drawing easy. For example, the numerator gives the location of the output in a two-string representation and the location of the input in a single-string representation. The other clue the transfer function gives us is that the denominator's largest power equals the maximum number of delay elements in either representation. The next step is to draw the following diagrams.


Notice that in a single-string representation the delay elements are numbered from right to left, starting at 0 and ending at 5, while in a two-string representation the opposite numbering is used. The next step is to apply the clues provided by the transfer function to both of these representations.



Notice that in the single-string representation we had to tap off at delay-element locations 0, 1, 2, and 5 to get the D5+D2+D+1 in the denominator. In the two-string representation a similar task was performed to get the transfer function representation shown below.

The final step in this problem is to check that both the single-string and the two-string representations yield our transfer function H = D3/(D5+D2+D+1). To do this, we start by setting X=1 in both representations of the transfer function, as noted in the following figure.
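As a behavioural cross-check of either drawing (a sketch of our own, not the single-string or two-string schematic itself), the transfer function can be turned into a mod-2 recurrence: Y(D)(1+D+D2+D5) = D3·X(D) gives y[n] = y[n-1] xor y[n-2] xor y[n-5] xor x[n-3]. Feeding the impulse X=1 then produces the impulse response of H directly.

```python
# Minimal sketch: simulate H = D^3/(D^5+D^2+D+1) as a GF(2) recurrence and
# print the first bits of its impulse response (input X = 1, an impulse).

def impulse_response(n_bits=20):
    x = [1] + [0] * (n_bits - 1)                   # impulse input
    y = []
    get = lambda seq, k: seq[k] if k >= 0 else 0   # indices before time 0 are 0
    for n in range(n_bits):
        y.append(get(y, n - 1) ^ get(y, n - 2) ^ get(y, n - 5) ^ get(x, n - 3))
    return y

print(impulse_response())
```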



Chapter 3

Examples prepared and written by Dean Baluch

Example: Application of the SKILL Algorithm


The SKILL algorithm generates a Minimal Cover from a compatibility pair chart. In cases with few states, it is easier to work through the chart by hand with an exhaustive search, but for cases with many states it is useful to have an algorithm that runs in linear time in the best case.

Consider the compatibility pair chart shown in Figure 10. X's represent incompatible states, and gray boxes represent compatible states.


Figure 10: Input to SKILL Algorithm

The first step of the SKILL algorithm is to set up the table, and arbitrarily pick the first state.

As shown, the table is set up with columns k, Sk, L′, and L. Or, in plain English: the state, its compatible states, the current possible MCs, and all current MCs once the decision for L′ has been made. State 8 is chosen arbitrarily.

The next iteration of the algorithm considers state 7. It has no compatible states in its column, making itself the only possible MC at this point. The addition of state 7 is shown below:


On to state 6, the compatible states are 6 and 7. The possible MCs at this phase are 6 and 67. Looking above at the L, 7 already appears, so 67 can replace it. The k=6 line is shown below.



State 5 has 2 compatible states in its column: 6 and 7. Since 6 is also compatible with 7, the possible MCs for this phase are 5 and 567. 567 replaces 67.


State 4 has 1 compatible state: 6. Similar to state 6, the possible MCs at this phase are 4 and 46. Since 4 is not compatible with 5 or 7, 46 has to be added to the list as its own MC, shown below.


State 3 has 3 compatible states in its column: 5, 6, and 7. But 567 is already an MC, so it is extended to 3567, which is added to the list below:


State 2 also has 3 compatible states in its column, this time 3, 4, and 7. The possible MCs are 2, 24, and 237 after inspecting 4 and 7’s compatible states.


The final state, 1, has one compatible state: 4. Similar to states 4 and 6, the possible MCs for this phase are 1 and 14. 14 is added to the list. The final table at the conclusion of the SKILL algorithm is shown below in Table 1.


Table 1: Result of SKILL Algorithm

Applying the row reduction to the state table yields the reduced state table in Figure 11.


Figure 11: State Table after SKILL
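The iteration described above, processing states from highest to lowest while keeping only maximal compatibles, can be sketched generically. The pair chart below is reconstructed from the compatibilities quoted in this walkthrough (an assumption standing in for Figure 10), and the sketch reproduces the MCs derived above.

```python
# Minimal sketch of the SKILL-style pass: states are processed from highest to
# lowest and L always holds the current maximal compatibles.  The pair chart is
# reconstructed from the compatibilities quoted above, not copied from Figure 10.

compatible = {frozenset(p) for p in [(6, 7), (5, 6), (5, 7), (4, 6), (3, 5),
                                     (3, 6), (3, 7), (2, 3), (2, 4), (2, 7), (1, 4)]}

def skill(n_states):
    L = set()                                        # current maximal compatibles
    for k in range(n_states, 0, -1):                 # k = n, n-1, ..., 1
        Sk = {j for j in range(k + 1, n_states + 1)
              if frozenset((k, j)) in compatible}    # higher states compatible with k
        candidates = {frozenset({k})} | {frozenset({k}) | (C & Sk) for C in L}
        merged = L | candidates
        # keep only sets not strictly contained in another (maximality)
        L = {c for c in merged if not any(c < d for d in merged)}
    return sorted(sorted(c) for c in L)

print(skill(8))   # [[1, 4], [2, 3, 7], [2, 4], [3, 5, 6, 7], [4, 6], [8]]
```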


Example: Application of the Bargain Hunter's Algorithm


The Bargain Hunter’s algorithm further reduces the set of compatible classes. It works to find those compatibility classes that make up the prime cover, where each remaining class is called a “prime compatible”. This example starts with the MCs and state table in Figure 12.

Figure 12: State Table at Start of Bargain Hunter's Algorithm


The first step is to order all of the MCs and their subsets in decreasing cardinality. 3567 is the only MC with 4 elements, so it is first. MCs or subsets of equal cardinality are ordered arbitrarily; one may decide to give an MC priority over a subset when both have the same cardinality.

The columns of the Bargain Hunter's algorithm start from the left with the current compatibility class (MC or subset). The next-state transitions come next, followed by the class set. The class set consists of any next states with cardinality greater than 1 that are not subsets of the current compatibility class. If no such next states exist, this entry is null.

In working through the Bargain Hunter’s Algorithm, there are three heuristics to determine all the PCs.


  • All MCs are PCs

  • No subset of a PC with an empty class set is a PC

  • A singleton is a PC if and only if its state is not in a PC with an empty class set.

The first row works out as follows:


The next-state transitions follow those from Figure 12. The class set is empty as 367 is the only next-state with cardinality greater than 1 and it is also a subset of the current compatibility class 3567. Since the CS is empty, and 3567 is already an MC, 3567 is a PC, and placed in the PC column. 237 has a class set of 36, and it is also already an MC, therefore making it a PC:



In the third row, 356 is a subset of 3567, which is already a PC. The heart of the algorithm states that 356 is eliminated since the class set for 3567 is empty, yet the class set for 356 is 36. The bargain hunter will take 3567 for nothing rather than pay anything at all for 356. This also applies to 357, another subset of 3567. So far, the table looks like this:

In the fifth row, 367 is the first subset to have an empty class set. The second heuristic states that this subset is not a PC, since it is a subset of another CC (3567) whose class set is also empty. This also applies to 567, a subset of 3567 with an empty class set. The updated table:

In the seventh, eighth, and ninth rows are MCs 14 and 24. They are automatically added to the PC column, giving the updated table:

35, 36, 37, 56, 57, and 67 are familiar cases. They are all subsets of 3567, an MC with an empty class set:

23 has an empty class set. It is also a subset of 237, a PC with a class set of 36. 23 is added to the PC column because its class set (empty) is smaller than 237's class set. The addition of this row is shown below:

The next row contains 27. 27 is a subset of 237 and has the same class set of 36. It is clearly a better deal to obtain 237 for the cost of 36 than merely 27 for the same cost, so 27 is not a PC. Singletons 1 through 7 follow. Each of these is already contained in a PC with an empty class set; according to the third heuristic, they are excluded from the PC column:

The final CC is 8, a singleton that is not contained in any PCs with empty class set. 8 is added to the PC column to complete the table and the algorithm. The final table is shown in Table 2:
Table 2: Final Result of Bargain Hunter's Algorithm
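The dominance test at the core of the algorithm can also be stated compactly: a compatibility class is dropped exactly when some strictly larger compatible exists whose class set is no more demanding. The sketch below uses only the class sets stated explicitly in this walkthrough (everything else is omitted), so it illustrates the rule rather than giving a full implementation.

```python
# Minimal sketch of the "bargain" test: class c is dominated (not prime) when a
# strictly larger compatible c2 has a class set contained in c's class set.
# Only class sets quoted in the walkthrough above are included here.

class_set = {
    frozenset({3, 5, 6, 7}): set(),                    # empty class set
    frozenset({2, 3, 7}):    {frozenset({3, 6})},      # class set 36
    frozenset({3, 5, 6}):    {frozenset({3, 6})},
    frozenset({2, 3}):       set(),
    frozenset({2, 7}):       {frozenset({3, 6})},
}

def is_prime(c):
    return not any(c < c2 and class_set[c2] <= class_set[c] for c2 in class_set)

for c in sorted(class_set, key=len, reverse=True):
    print(sorted(c), "prime" if is_prime(c) else "dominated")
# 3567, 237, and 23 come out prime; 356 and 27 are dominated, as in Table 2.
```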



Chapter 4

Examples prepared and written by Carissa Astudillo

Example: Normalization Algorithm of an SOC machine

Let us work through an example of normalizing an SOC machine.


First recall the definition of a few important terms:
SOC (Single Output Change) – In an SOC machine, we begin in a stable state and given a single input change (only one input bit is changing), the output changes at most once.

Note that a stable state is an entry whose next state is the same as its own row (for example, in Figure 13, row 3, column 11, “3” is a stable state since the next state in row 3, column 11 equals 3, our row number).

An example of an SOC machine is shown in Figure 13.


Beginning with any stable state and given a single input change, the output changes at most once.

Example A:

1,10 -> 11 (transitions using red dotted arrows)

Starting at row 1, column 10, we move to row 1, column 11; this takes us to row 2 (column 11), which takes us to row 3 (column 11). We are now in a stable state, so we stop (until our next input, that is). With this transition we see that the output went from 1->1->1->1.


Example B:

1,10 -> 00 (transitions using solid blue arrows)

(Note that this is the only other case we can look at for row one since we must begin with a stable state and have only a single input change.)

Beginning with row 1, column 10, we move to row 1 column 00 which takes us to row 2, column 00, then to row 3, column 00, and finally to row 4, column 00 where we stop since we are now at a stable state. Looking at our output sequence, we see it went from 1->1->1->0->0.


Both examples had at most one output change, which makes this an SOC machine. If we look at all other cases, we will see that the output changes at most once.





Figure 13: SOC Machine
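Checking the SOC property amounts to exactly the kind of trace done in Examples A and B: start from a stable entry, change one input, follow unstable next states until the machine settles, and count output changes. The sketch below illustrates this on a tiny hypothetical flow table (Figure 13 itself is not reproduced), with a one-bit input so that the only single input change is the other input value.

```python
# Minimal sketch of the SOC check on a tiny hypothetical flow table
# (not Figure 13).  flow[(row, input)] = (next_state, output); an entry is
# stable when its next state equals its own row.

flow = {
    (1, "0"): (2, 0), (1, "1"): (1, 1),
    (2, "0"): (2, 0), (2, "1"): (1, 1),
}
INPUTS = ("0", "1")

def output_changes(start_row, old_input, new_input):
    """From the stable state (start_row, old_input), apply the input change and
    follow next states until a stable state; count the output changes seen."""
    changes, prev_out, row = 0, flow[(start_row, old_input)][1], start_row
    while True:
        next_row, out = flow[(row, new_input)]
        changes += out != prev_out
        prev_out = out
        if next_row == row:                      # settled in a stable state
            return changes
        row = next_row

def is_soc():
    stable = [(r, i) for (r, i), (nr, _) in flow.items() if nr == r]
    return all(output_changes(r, i, j) <= 1
               for r, i in stable for j in INPUTS if j != i)

print(is_soc())   # True for this toy table
```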

Normal Mode Machine – A normal mode machine is an SOC machine in which for every state, given a single input change, the next state is a stable state.


Now let us work through an example of the normalization of an SOC machine. Given the SOC machine in Figure 14, let us use the normalization algorithm to construct a Normal Mode machine.

Figure 14: SOC Machine


Normalization Algorithm:

  1. First, check the machine column by column. In this example, start with column 00, then 01, 11, and finally 10.




  2. Beginning with the first column, 00, we proceed row by row until we reach a stable state. In this example, let us begin by checking row 1, column 00.

1,00:0 -> 3,00:0 -> 5,00:0 -> 4,00:0

This implies that we begin in row 1, column 00, with an output of 0; we then move to row 3, column 00, with an output of 0; next we move to row 5, column 00, with an output of 0; and finally we move to a stable state in row 4, column 00, with an output of 0. Since row 4, column 00, is a stable state, we stop. So now we change row 1, column 00 from 3,0 to 4,0 (so that our next state will be a stable state). Notice that in this case our output stays the same since there were no output changes.
2,00:0

Next we look at row 2, column 00. This state is a “don’t care” so we just leave it as is.


3,00:0 -> 5,00:0 -> 4,00:0

Next we check row 3, column 00, with an output of 0. Here we follow the same method as in row 1, column 00. We move from row 3, column 00, with an output of 0, to row 5, column 00, with an output of 0, and finally we move to a stable state in row 4, column 00, with an output of 0. Again, since row 4, column 00, is a stable state, we stop. So now we change row 3, column 00 from 5,0 to 4,0.


4,00:0

Next we look at row 4, column 00. Since this state is a stable state we leave it as is.


5,00:0 -> 4,00:0

Lastly for column 00, we look at row 5. The next state for this state is a stable state (which is what we want for a normal mode machine), so again we don’t need to make any changes.


Figure 15 shows the algorithm applied for column 00 (changes in red).

Figure 15: Normalization algorithm applied to column 00


Next we apply the algorithm to the remaining columns 01, 11, and 10, using the same method as for column 00.


1,01:1

Row 1, column 01 remains the same since it is a stable state.


2,01:1

Row 2, column 01 remains the same since it is a stable state.


3,01:1

Row 3, column 01 remains the same since its next state is a stable state.


4,01:0 -> 1,01:1

In this case, moving from row 4 column 01 to row 1 column 01, notice that the output changes from 0 to 1. So here row 4, column 01 changes from 1,0 to 1,1.


5,01:0 -> 3,01:1 ->2,01:1

In this case, we move from row 5, column 01, with an output of 0, to row 3, column 01, with an output of 1, and finally we move to a stable state in row 2, column 01 with an output of 1. So row 5, column 01 changes from 3,0 to 2,1.


We use this same method to normalize all remaining columns. The final normal mode table is shown in Figure 16 (changes to the original SOC machine are marked in red).

Figure 16: Final Normalized Machine
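The column-by-column procedure can be summarised in a few lines of code: every unstable entry is redirected to the stable state its chain eventually reaches, and its output becomes the output of that stable entry. The sketch below (our own restatement of the steps above) uses only column 00 of Figure 14, with exactly the entries quoted in the walkthrough.

```python
# Minimal sketch of normalization for one column.  column[row] = (next_state,
# output), or None for a don't care; an entry is stable when next_state == row.
# These are the column-00 entries quoted in the walkthrough above.

column_00 = {1: (3, 0), 2: None, 3: (5, 0), 4: (4, 0), 5: (4, 0)}

def normalize_column(col):
    def settle(row):
        while col[row][0] != row:          # follow the chain of next states
            row = col[row][0]
        return row
    normalized = {}
    for row, entry in col.items():
        if entry is None:                  # don't care: leave as is
            normalized[row] = None
        else:
            stable = settle(entry[0])
            normalized[row] = (stable, col[stable][1])   # jump straight to it
    return normalized

print(normalize_column(column_00))
# {1: (4, 0), 2: None, 3: (4, 0), 4: (4, 0), 5: (4, 0)}
```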


Example: Primitivization of an SOC machine

First recall the definition of a Primitive Mode machine:


Primitive Mode Machine – A Primitive Mode machine is an SOC machine such that there is exactly one stable state in each row.
Now let us work through an example of the primitivization of an SOC machine. Given the same SOC machine as our previous example (shown again in Figure 17), let us use the primitivization algorithm to construct a Primitive Mode machine.

Figure 17: SOC Machine



Figure 18: Normalized SOC Machine


Primitivization Algorithm:

  1. The first thing we need to do is to check whether the SOC machine is in Normal Mode. If not, we must normalize the machine. The normalized machine (for this example) is shown in Figure 18.




  2. Next, check the machine row by row. In this example, begin with row 1, then 2, 3, 4, and finally 5.




  3. Beginning with the first row, we proceed column by column within the row to find a stable state. Once a stable state is reached, we revise the row so that only transient next states (next states that are not stable) within a single input change of our stable state remain, and all other next states in the row are re-defined as “don't cares”. If there are multiple stable states in a row, we create a new row for each additional stable state. In this example, let us begin by checking row 1, column 00.

1,00 -> 1,01 (transition using red solid arrow in Figure 19)

This implies that we begin in row 1, column 00, then move to row 1, column 01, where we encounter a stable state, so we stop. Next we check the next states within a single input change of our stable state, which are row 1, column 00 and row 1, column 11 (blue dotted arrows in Figure 19). Neither of these is a stable state, so we leave them as they are, and according to our algorithm we re-define row 1, column 10 as a “don't care” since it is neither within a single input change of the stable state nor a stable state. Figure 19 shows the algorithm applied to row 1 (changes in red).

Figure 19: Primitivization applied to first row on Normal Mode Table


2,00 -> 2,01 (transition using red solid arrow in Figure 20)

Next we look at row 2. Again beginning with column 00, we move to column 01, where we encounter a stable state and stop. Using the same method as before, we check the next states in row 2 that are within a single input change of the stable state, which are row 2, column 00 and row 2, column 11 (blue dotted arrows in Figure 20). Neither of these is a stable state, so they remain unchanged, and we re-define row 2, column 10 as a “don't care” since it is neither within a single input change of the stable state nor a stable state. Figure 20 shows the algorithm applied to rows 1 and 2 (changes in red).


Figure 20: Primitivization algorithm applied to row 2 of the Normal Mode Table


3,00 -> 3,01 -> 3,11 -> 3,10

Next we look at row 3. Again beginning with column 00, we move to column 01, then to column 11, and finally to column 10. Notice that row 3 contains no stable states, therefore we do not include this row in our Primitivized Flow Table.


4,00

Next we look at row 4. Beginning with column 00, we immediately stop at row 4, column 00, since we are in a stable state. Using the same method as before we check the next states in row 4 that are within a single input change of the stable state. These states are row 4, column 01 and row 4, column 10. Neither of these states are stable states, so they remain unchanged and we re-define row 4 column 11 to be a “don’t care” since it is neither a single input change nor is it a stable state. Figure 21 shows the algorithm applied up to row 4.


Figure 21: Primitivization algorithm applied up to row 4 of Normal Mode Table


5,00 -> 5,01 -> 5,11

Lastly we look at row 5. This last row is a little tricky because we have two stable states in this row, and as we defined earlier a Primitive Mode Machine should have exactly one stable state per row. We revise this row in two steps:


First, beginning with column 00, we move to column 01, and then to column 11, where we stop since we encounter the first stable state (for this row). Using the same method as for the previous rows, we check the next states in row 5 that are within a single input change of the stable state in row 5, column 11. These states are row 5, column 01 and row 5, column 10. Row 5, column 01 is not a stable state, so it remains unchanged. However, notice in this case that row 5, column 10 is a stable state. Therefore, we must create a new row in our Primitive Mode Table, which we label 5'. Now, since we have taken care of the stable state in row 5, column 10 by creating the new row 5', we re-define row 5, column 10 as a “don't care”. Row 5, column 00 is also re-defined as a “don't care”, since it is neither within a single input change of the stable state in row 5, column 11 nor a stable state.
Second, let us populate our new row 5'. To populate row 5', we let row 5, column 10 be our stable state and use row 5 to identify single input changes (as shown by the green solid arrow in Figure 22). First let us check the next states that are within a single input change of the stable state. These states are row 5, column 00 and row 5, column 11. Row 5, column 00 is not a stable state, so it remains unchanged. However, row 5, column 11 is a stable state. Since we already accommodated this stable state in row 5, it is re-defined as a “don't care” in row 5', column 11. Row 5', column 01 becomes a “don't care” since it is neither within a single input change of the stable state nor a stable state. Figure 22 shows the Final Primitivized Mode Table.



Figure 22: Final Primitivized Mode Table
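The per-row rule applied above can be sketched in code: for every stable state in a normalized row, emit one row that keeps only the stable entry itself and the transient entries whose input columns differ from the stable column in exactly one bit; everything else becomes a don't care, and a row with no stable state is dropped. The row used below is a hypothetical stand-in with two stable states, shaped like row 5 of this example.

```python
# Minimal sketch of the primitivization rule for a single row of a normalized
# flow table.  row[col] = (next_state, output) or None; an entry is stable when
# next_state == row_id.  The data below is hypothetical, not Figure 18.

COLS = ("00", "01", "11", "10")

def single_input_change(a, b):
    return sum(x != y for x, y in zip(a, b)) == 1

def primitivize_row(row_id, row):
    stable_cols = [c for c in COLS if row[c] and row[c][0] == row_id]
    new_rows = []
    for i, sc in enumerate(stable_cols):          # one output row per stable state
        label = str(row_id) + "'" * i             # 5, 5', 5'', ...
        kept = {}
        for c in COLS:
            entry = row[c]
            keep = (c == sc) or (entry is not None
                                 and single_input_change(c, sc)
                                 and entry[0] != row_id)
            kept[c] = entry if keep else None     # everything else: don't care
        new_rows.append((label, kept))
    return new_rows                               # empty if the row has no stable state

row5 = {"00": (4, 0), "01": (2, 1), "11": (5, 1), "10": (5, 1)}   # hypothetical
for label, cols in primitivize_row(5, row5):
    print(label, cols)
```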

