Lab Manual On Soft Computing (IT-802) : Ms. Neha Sexana
EXPERIMENT: 1

#include<iostream>
using namespace std;

int main()
{
    float x, b, w, net, out;
    cout << "enter the input X = ";
    cin >> x;
    cout << "enter the bias b = ";
    cin >> b;
    cout << "enter the weight W = ";
    cin >> w;
    // Net input of the neuron
    net = (w * x + b);
    cout << "net = " << net << endl;
    // Ramp (saturating linear) activation function
    if (net < 0)
        out = 0;
    else if (net >= 0 && net <= 1)
        out = net;
    else
        out = 1;
    cout << "output = " << out << endl;
    return 0;
}
OUTPUT
When net < 0 :-
When net > 1 :-
EXPERIMENT: 2
1. Comparison Layer -> It works on the 2/3 rule (see the sketch after the table
below). This layer receives the binary input vector X and initially passes it
through unchanged to become the vector C.
For the first iteration C = X; in later iterations the binary vector R
produced by the Recognition layer is fed back. Each neuron in the comparison
layer receives three binary inputs:
a. The component Xi from the input vector X.
b. The feedback signal Pj, i.e., the weighted sum of the recognition layer outputs.
c. The input from the gain signal Gain1 (G1).
OR of X components    OR of R components    G1
        0                     0              0
        1                     0              1
        1                     1              0
        0                     1              0
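
A minimal C++ sketch of the 2/3 rule (the function name twoOfThree and the test
values are illustrative assumptions, not from the manual): a comparison-layer
neuron outputs 1 when at least two of its three binary inputs are 1.

#include<iostream>
using namespace std;

// 2/3 rule: the neuron fires when at least two of its
// three binary inputs (Xi, Pj, G1) are 1
int twoOfThree(int xi, int pj, int g1)
{
    return (xi + pj + g1) >= 2 ? 1 : 0;
}

int main()
{
    cout << twoOfThree(1, 0, 1) << endl;   // input and gain agree: prints 1
    cout << twoOfThree(0, 0, 1) << endl;   // only the gain is 1: prints 0
    return 0;
}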
2. Recognition Layer -> The recognition layer serves to classify the input vector.
Each recognition layer neuron has an associated weight vector Bj; only the neuron
whose weight vector best matches the input vector fires, and all other neurons are
inhibited.
The weights in the recognition layer form a stored pattern
for a category of input vectors; these weights are real numbers.
The binary version of the same pattern is stored in the
corresponding set of weights in the comparison layer.
b. GAIN1 (G1) -> Like GAIN2, the output of GAIN1 is 1 if any component of the
binary input vector X is 1.
But if any component of R is 1, then G1 is forced to become 0.
c. Reset Signal -> The reset module measures the similarity between the vectors
X and C. Generally, this similarity is the ratio of the number of 1's in the
vector C to the number of 1's in the vector X.
If this ratio is below the vigilance parameter, a reset signal is
issued, inhibiting the firing neuron of the recognition layer.
The vigilance parameter p is kept close to 1 for accuracy; a typical
value is 0.9. The gain and reset computations are sketched below.
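
A minimal C++ sketch of the gain signals and the vigilance test described above,
assuming binary vectors stored as int arrays (the helper names orOf and
similarity, and the example vectors, are illustrative assumptions):

#include<iostream>
using namespace std;

// Logical OR over all components of a binary vector
int orOf(const int v[], int n)
{
    for (int i = 0; i < n; i++)
        if (v[i] == 1) return 1;
    return 0;
}

// Similarity = (number of 1's in C) / (number of 1's in X)
float similarity(const int x[], const int c[], int n)
{
    int onesX = 0, onesC = 0;
    for (int i = 0; i < n; i++) { onesX += x[i]; onesC += c[i]; }
    return onesX == 0 ? 0.0f : (float)onesC / onesX;
}

int main()
{
    const int n = 4;
    int X[n] = {1, 0, 1, 1};   // input vector
    int R[n] = {0, 0, 0, 0};   // recognition layer output (first pass)
    int C[n] = {1, 0, 1, 1};   // comparison layer output

    int G2 = orOf(X, n);                             // 1 if X has any 1
    int G1 = (G2 == 1 && orOf(R, n) == 0) ? 1 : 0;   // forced to 0 when R fires

    float rho = 0.9f;                                // vigilance parameter
    bool reset = similarity(X, C, n) < rho;          // reset if match is too poor

    cout << "G1=" << G1 << " G2=" << G2 << " reset=" << reset << endl;
    return 0;
}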
EXPERIMENT: 3
#include<iostream>
#include<cmath>
using namespace std;

int main()
{
    float c, s1, s2, n1, n2, e;
    float w10, b10, w20, b20, w11, b11, w21, b21;
    float p, t, a0, a1, a2;
    cout << "enter the input weight/bias of first n/w = ";
    cin >> w10 >> b10;
    cout << "enter the input weight/bias of second n/w = ";
    cin >> w20 >> b20;
    cout << "enter the learning coefficient of n/w c = ";
    cin >> c;
    cout << "enter the input p and the target t = ";
    cin >> p >> t;
    /* Step 1: Propagation of the signal through the n/w */
    a0 = p;               // input to the first layer
    n1 = w10 * a0 + b10;
    a1 = tanh(n1);        // output of the first layer
    n2 = w20 * a1 + b20;
    a2 = tanh(n2);        // output of the second layer
    e = (t - a2);         // error
    /* Step 2: Back propagation of sensitivities */
    s2 = -2 * (1 - a2 * a2) * e;
    s1 = (1 - a1 * a1) * w20 * s2;
    /* Step 3: Updating of weights and biases */
    w21 = w20 - (c * s2 * a1);
    w11 = w10 - (c * s1 * a0);
    b21 = b20 - (c * s2);
    b11 = b10 - (c * s1);
    cout << "The updated weight of first n/w w11 = " << w11;
    cout << "\n" << "The updated weight of second n/w w21 = " << w21;
    cout << "\n" << "The updated bias of first n/w b11 = " << b11;
    cout << "\n" << "The updated bias of second n/w b21 = " << b21 << endl;
    return 0;
}
OUTPUT
EXPERIMENT: 4
It consists of one input layer, one Kohonen layer and one Grossberg
layer.
All the units of the input layer are fully interconnected by weights to the
units of the Kohonen layer.
Similarly, all the units of the Kohonen layer are fully interconnected by
weights to the Grossberg layer.
It works in two modes.
3. It is also useful for rapid prototyping of systems. Where the greater accuracy
of Back Propagation makes it the method of choice for the final version but a
quick approximation is needed first, CPN is more useful than Back
Propagation (a forward-pass sketch follows below).
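
A minimal sketch of one forward pass through such a network, assuming small
fixed layer sizes and illustrative weight values (the names kohonenW and
grossbergW and the dimensions are assumptions, not from the manual): the
Kohonen layer picks a single winner, and the Grossberg layer then outputs that
winner's outgoing weights.

#include<iostream>
using namespace std;

int main()
{
    const int IN = 2, KOH = 3, OUT = 2;
    float x[IN] = {0.9f, 0.1f};                  // input vector
    float kohonenW[KOH][IN] = {{0.8f, 0.2f},     // input -> Kohonen weights
                               {0.1f, 0.9f},
                               {0.5f, 0.5f}};
    float grossbergW[OUT][KOH] = {{1.0f, 0.0f, 0.5f},   // Kohonen -> Grossberg
                                  {0.0f, 1.0f, 0.5f}};

    // Kohonen layer: winner-take-all on the net inputs
    int winner = 0;
    float best = -1e9f;
    for (int j = 0; j < KOH; j++) {
        float net = 0;
        for (int i = 0; i < IN; i++)
            net += kohonenW[j][i] * x[i];
        if (net > best) { best = net; winner = j; }
    }

    // Grossberg layer: only the winner (activation 1) contributes,
    // so the output is the winner's column of weights
    for (int k = 0; k < OUT; k++)
        cout << "y[" << k << "] = " << grossbergW[k][winner] << endl;
    return 0;
}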
EXPERIMENT: 5
1. It shows simplicity.
2. Ease of operation.
3. Minimal requirements.
4. Global perspective.
5. It does not guarantee global-minimum solutions, but it finds acceptably good
solutions acceptably quickly.
1. BEGIN
2. Create initial population ;
3. Compute fitness of each individual ;
4. WHILE NOT finished DO
5. BEGIN
6. Select individuals from the old generation for mating ;
7. Create offspring by applying crossover or mutation to the selected individuals ;
8. Compute fitness of the new individuals ;
9. Kill old individuals to make room for the new chromosomes and insert the
offspring into the new generation ;
10. IF population has converged
11. THEN finished = TRUE ;
12. END
13. END
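
A minimal C++ sketch of this loop, assuming a toy fitness function (the count of
1-bits in an 8-bit chromosome), tournament selection, one-point crossover and
bit-flip mutation; the population size, rates and convergence test are
illustrative choices, not from the manual.

#include<iostream>
#include<cstdlib>
#include<ctime>
using namespace std;

const int POP = 20, BITS = 8, GENS = 100;

int fitness(unsigned char c)            // count of 1-bits in the chromosome
{
    int f = 0;
    for (int b = 0; b < BITS; b++) f += (c >> b) & 1;
    return f;
}

unsigned char tournament(unsigned char p[])   // the fitter of two random picks
{
    unsigned char a = p[rand() % POP], b = p[rand() % POP];
    return fitness(a) >= fitness(b) ? a : b;
}

int main()
{
    srand(time(0));
    unsigned char pop[POP], next[POP];
    for (int i = 0; i < POP; i++) pop[i] = rand() % 256;   // initial population

    for (int g = 0; g < GENS; g++) {
        for (int i = 0; i < POP; i++) {
            unsigned char mom = tournament(pop), dad = tournament(pop);
            int cut = 1 + rand() % (BITS - 1);             // one-point crossover
            unsigned char mask = (1 << cut) - 1;
            unsigned char child = (mom & mask) | (dad & ~mask);
            if (rand() % 100 < 5)                          // 5% bit-flip mutation
                child ^= 1 << (rand() % BITS);
            next[i] = child;            // offspring replace the old generation
        }
        for (int i = 0; i < POP; i++) pop[i] = next[i];

        int best = 0;                   // convergence test: all-ones chromosome
        for (int i = 0; i < POP; i++)
            if (fitness(pop[i]) > best) best = fitness(pop[i]);
        if (best == BITS) {
            cout << "converged in generation " << g << endl;
            break;
        }
    }
    return 0;
}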
EXPERIMENT: 6
EXPERIMENT: 7
FUZZY LOGIC :-
In earlier days, crisp logic was used to handle problems with binary
values, i.e., 0 and 1. Crisp logic is also known as traditional, conventional
or binary logic.
Crisp logic is two-valued: a statement is either true or false. It is based on
reasoning which is exact and fixed, the logic of the completely true and the
completely false.
We define completely true as one (1) and completely false as zero
(0), as illustrated in the sketch below.
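
By contrast, fuzzy logic allows membership degrees anywhere between 0 and 1.
A minimal C++ sketch contrasting a crisp membership test with a fuzzy
triangular membership function (the function names and the "warm temperature"
example are illustrative assumptions, not from the manual):

#include<iostream>
using namespace std;

// Crisp membership: a temperature either is or is not "warm"
int crispWarm(float t)
{
    return (t >= 20 && t <= 30) ? 1 : 0;
}

// Fuzzy membership: triangular function peaking at 25,
// giving a degree of truth between 0 and 1
float fuzzyWarm(float t)
{
    if (t <= 15 || t >= 35) return 0;
    if (t <= 25) return (t - 15) / 10;   // rising edge
    return (35 - t) / 10;                // falling edge
}

int main()
{
    cout << "t=19: crisp=" << crispWarm(19)
         << " fuzzy=" << fuzzyWarm(19) << endl;   // crisp 0, fuzzy 0.4
    cout << "t=25: crisp=" << crispWarm(25)
         << " fuzzy=" << fuzzyWarm(25) << endl;   // crisp 1, fuzzy 1.0
    return 0;
}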
EXPERIMENT: 7
Algorithm
Start with a randomly chosen weight vector w0 ;
Let k = 1 ;
WHILE there exist input vectors that are misclassified by w(k-1) DO
Let i(j) be a misclassified input vector ;
Let x(k) = class(i(j)) * i(j), implying that w(k-1) . x(k) < 0 ;
Update the weight vector to w(k) = w(k-1) + n * x(k) ;
Increment k ;
END WHILE ;
Program

#include<iostream>
using namespace std;

int main()
{
    int in[3], d, w[3], a = 0, i;
    for (i = 0; i < 3; i++)
    {
        cout << "\n initialize the weight vector w" << i << " = ";
        cin >> w[i];
    }
    for (i = 0; i < 3; i++)
    {
        cout << "\n enter the input vector i" << i << " = ";
        cin >> in[i];
    }
    cout << "\n enter the desired output = ";
    cin >> d;
    int ans = 1;
    while (ans == 1)
    {
        // Actual output: weighted sum of the inputs
        for (a = 0, i = 0; i < 3; i++)
            a = a + w[i] * in[i];
        cout << "\n desired output is " << d;
        cout << "\n actual output is " << a;
        int e = d - a;   // error
        cout << "\n error is " << e;
        cout << "\n press 1 to adjust weights, else 0 : ";
        cin >> ans;
        // Move each weight one step against the sign of the error
        if (e < 0)
            for (i = 0; i < 3; i++)
                w[i] = w[i] - 1;
        else if (e > 0)
            for (i = 0; i < 3; i++)
                w[i] = w[i] + 1;
    }
    return 0;
}
OUTPUT:
EXPERIMENT: 8
#include<iostream>
using namespace std;

int main()
{
    float x, w, t, net, dw, a, al;
    int i;
    cout << "consider a single neuron perceptron with a single i/p" << endl;
    cout << "enter the input x = ";
    cin >> x;
    cout << "enter the target t = ";
    cin >> t;
    cout << "enter the initial weight w = ";
    cin >> w;
    cout << "enter the learning coefficient = ";
    cin >> al;
    for (i = 0; i < 10; i++)
    {
        net = x * w;                 // net input of the neuron
        if (net < 0)                 // hard-limit activation
            a = 0;
        else
            a = 1;
        dw = al * (t - a) * x;       // perceptron learning rule
        w = w + dw;                  // weight adjustment
        cout << "iteration " << i + 1 << " : output a = " << a
             << ", change in weight dw = " << dw
             << ", adjusted weight w = " << w << endl;
    }
    return 0;
}
OUTPUT:
EXPERIMENT: 9
#include<iostream>
using namespace std;

int main()
{
    float input[3], weight[3], d, a, del;
    int i;
    for (i = 0; i < 3; i++)
    {
        cout << "\n initialize weight vector " << i << "\t";
        cin >> weight[i];
    }
    for (i = 0; i < 3; i++)
    {
        cout << "\n enter input vector " << i << "\t";
        cin >> input[i];
    }
    cout << "\n enter the desired output\t";
    cin >> d;
    do
    {
        // Actual output for the current weights
        a = 0;
        for (i = 0; i < 3; i++)
            a = a + weight[i] * input[i];
        del = d - a;   // delta = desired - actual
        // Adjust each weight by one step of its input,
        // against the sign of the error
        if (del < 0)
            for (i = 0; i < 3; i++)
                weight[i] = weight[i] - input[i];
        else if (del > 0)
            for (i = 0; i < 3; i++)
                weight[i] = weight[i] + input[i];
        cout << "\n value of delta is " << del;
        cout << "\n weights have been adjusted";
    } while (del != 0);
    cout << "\n output is correct";
    return 0;
}
OUTPUT
EXPERIMENT: 10
#include<iostream>
#include<cmath>
using namespace std;

struct input
{
    float val, top;       // input value and target output
    float out, w0, w1;    // actual output and weights after the update
};

int main()
{
    int i;
    float delta, corr, coeff = 0.1, w0, w1, aop;
    input s[3];
    cout << "\n Enter the i/p value and target o/p" << "\t";
    for (i = 0; i < 3; i++)
        cin >> s[i].val >> s[i].top;
    i = 0;
    do
    {
        if (i == 0)
        {
            w0 = -1.0;            // initial bias weight
            w1 = -0.3;            // initial input weight
        }
        else
        {
            w0 = s[i - 1].w0;     // carry weights over from the previous sample
            w1 = s[i - 1].w1;
        }
        aop = w0 + (w1 * s[i].val);          // activation
        s[i].out = 1 / (1 + exp(-aop));      // sigmoid output
        delta = (s[i].top - s[i].out) * s[i].out * (1 - s[i].out);
        corr = coeff * delta;                // weight correction
        s[i].w0 = w0 + corr;
        s[i].w1 = w1 + corr * s[i].val;
        i++;
    } while (i != 3);
    cout << "VALUE" << "\t" << "Target" << "\t" << "Actual"
         << "\t" << "w0" << "\t" << "w1" << '\n';
    for (i = 0; i < 3; i++)
    {
        cout << s[i].val << "\t" << s[i].top << "\t" << s[i].out
             << "\t" << s[i].w0 << "\t" << s[i].w1;
        cout << "\n";
    }
    return 0;
}
OUTPUT
EXPERIMENT: 11
cin.ignore(2);
return 0;
}//end main
OUTPUT