
INTRODUCTION

An ITS is simple in operation. A set of questions to be given to the student is recorded in a database, and each question is graded with a difficulty level by its author. A student can then be given an appropriate question, depending upon its difficulty. However, in order for students to progress in their learning, it is necessary to pitch questions so that they sufficiently challenge the student without being impossibly difficult. Therefore, it is necessary to track the student's change in ability as they progress through the learning material. Similarly, the system is able to regrade questions in the question database. For example, a question may have been graded by its author as relatively easy, yet it may transpire that populations of students actually find it difficult. This will be borne out by most students who should have performed well on the question actually performing poorly. Such a situation negates the pedagogy stated above. Our system is able to determine statistically that a question has been misgraded and to remedy the situation.

AUTOTUTORS
STORAGE REQUIREMENTS (KNOWLEDGE REPRESENTATION)
For the purposes of design and conceptualization, ITSs are described in terms of four major components. The DOMAIN KNOWLEDGE stores, manipulates and reasons with knowledge of the domain being taught. The PEDAGOGICAL MODULE provides information about the teaching strategy to be used with a specific student. The STUDENT MODEL stores and analyzes information about the student's current state of knowledge and personal characteristics. The INTERFACE handles the form of communication between the ITS and the student. One of the most important features an ITS should provide is the capability to adapt its behavior to the specific traits of the student. A human teacher bases pedagogical decisions on information about the student's learning performance obtained during instruction, as well as on observation of the student's problem solutions. The pedagogical module of an ITS uses the information collected by the student model during the student's interaction with the ITS, based on the actions he or she performs. From this point of view, the student model is analogous to an educational test instrument that attempts to measure student characteristics.

LEARNING ALGORITHM FOR MAPPING INTO NEURAL NETWORKS


Construct an AND-OR dependency graph based on the rules. Each node in the AND-OR dependency graph becomes a unit in the neural network. Insert additional units for OR nodes.

Set the biases of each AND unit and the weights coming into the AND unit such that the unit will become activated only when all of its inputs are true. Set the biases of each OR unit and the weights coming into the OR unit such that the unit will become activated when at least one of its inputs is true. Add links with low weights between otherwise unconnected nodes in adjacent layers of the network to allow learning over the long run.
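A minimal sketch of this rule-to-unit mapping for single AND and OR nodes. The step activation, the unit weight value and the bias formulas below are illustrative assumptions in the style of such rule-to-network mappings, not the paper's exact construction:

```python
def step(net):
    """Hard threshold: the unit activates when its net input is positive."""
    return 1 if net > 0 else 0

def and_unit(inputs, w=1.0):
    """Activates only when ALL inputs are true (1): bias sits just below n*w."""
    bias = -(len(inputs) - 0.5) * w
    return step(sum(w * x for x in inputs) + bias)

def or_unit(inputs, w=1.0):
    """Activates when AT LEAST ONE input is true (1): bias sits just below w."""
    bias = -0.5 * w
    return step(sum(w * x for x in inputs) + bias)
```

In a trainable network the hard step would be replaced by a steep sigmoid, and the low-weight links mentioned above would then be added between the remaining unconnected units in adjacent layers.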

Figure 1: Mapping Rules into the Score Page Network. The first input unit will be set to 1 for the current content stored in the network; similarly, the other input unit will be set only if the next current word satisfies the practical considerations of the BPN.

Figure 2: Estimating the Values of the Links. Artificial neural networks resemble the human brain in the following two ways: an ANN acquires knowledge through learning, and an ANN's knowledge is stored within inter-neuron connection strengths known as synaptic weights. The true power and advantage of ANNs lie in their ability to represent both linear and non-linear relationships and in their ability to learn these relationships directly from the data. Linear models are simply inadequate for modeling data that contains non-linear characteristics. This algorithm consists of two passes.

Forward Pass:
For each pattern, activation is propagated from the input to the output. Here, the input-to-hidden activations and the output are computed.

Backward Pass:
The error is calculated at the output and then propagated backwards through the network to estimate the contribution to the error from each unit. Each weight value is changed by a small amount so as to reduce the total error.

To begin, data is collected from the students by asking questions, and a network structure suitable to the problem is defined. The network structure used here was a Feed-Forward Network, which is interlayer-connected, i.e., the neurons in one layer are connected to the neurons in the adjacent layer. The neurons in the first layer send their output to the neurons in the second layer, but they do not receive any input back from the neurons in the second layer. Node properties are defined after the network structure. Here, Autotutor uses the sigmoidal function given by F(x) = 1/(1 + e^(-x)), where the output lies between 0 and 1.
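A minimal sketch of this sigmoidal activation (written with the conventional negative exponent, so that large positive inputs map near 1 and large negative inputs near 0):

```python
import math

def sigmoid(x):
    """Logistic squashing function F(x) = 1 / (1 + e^(-x)); output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```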

Then the training algorithm was selected; the algorithm used here was the Back-Propagation Algorithm, which follows a supervised learning rule: the training data consist of many pairs of input/output training patterns, so learning benefits from the assistance of a teacher. As each new training pattern is presented, the weights are updated. The supervised learning rule used here was Delta learning, where the weights are modified in proportion to the difference between the target and the actual output. This learning rule was used since it converges reliably on problems of this kind and its accuracy improves with training.
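The Delta learning rule for a single sigmoid unit can be sketched as follows. The function names, the learning rate and the example pattern are illustrative assumptions:

```python
import math

def sigmoid(x):
    """Logistic activation, output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def delta_update(weights, inputs, target, lr=0.5):
    """One delta-rule step: move each weight along (target - output)."""
    net = sum(w * x for w, x in zip(weights, inputs))
    out = sigmoid(net)
    # Error term for a sigmoid unit: (t - O) * O * (1 - O)
    delta = (target - out) * out * (1.0 - out)
    return [w + lr * delta * x for w, x in zip(weights, inputs)]
```

A single update moves the unit's output closer to the target; repeated presentations of the training pairs drive the error down.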

The Autotutor can be enhanced through the following implementations:

TRAINING THE AUTOTUTOR BY BPN


Give the questions to the students, collect the data from them, and store it in sub-files. Apply the input vector Xp = (Xp1, Xp2, ..., XpN) to the input units. Calculate the net input value to each hidden-layer unit: net = Σm (W1m · Xpm).

Figure 3: Training the Network. Calculate the outputs from the hidden layer, h = F(net), then calculate the net input value to each output unit: net = Σj (W2j · hpj).

Move to the output layer: calculate the net input value to each unit and then its output, O = F(net). Calculate the error term for each output unit: d = (t - O) · O · (1 - O), where t is the target and O the output. Calculate the error term for each hidden unit: e = h · (1 - h) · d · W2. Note that the error terms on the hidden units are calculated before the weights connected to the output-layer units have been updated. Update the weights on the output layer: W2_new = W2_old + ΔW2, where ΔW2(t) = n · d · h + x · ΔW2(t-1). Update the weights on the hidden layer: W1_new = W1_old + ΔW1, where ΔW1(t) = n · e · Xpm + x · ΔW1(t-1). Here n is the learning rate (n < 1) and x is the momentum factor.

By choosing the weight values randomly, find the input to the hidden layer and the output, then find the error between target and output. Now update (adjust) the weights and train on the dataset to minimize the error.
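The two passes and the momentum-based weight updates described above can be sketched end-to-end as follows. The network size, learning rate, momentum and the AND-gate training set are illustrative assumptions, not the paper's actual data:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_bpn(patterns, n_hidden=2, lr=0.5, momentum=0.5, epochs=500, seed=1):
    """Minimal 2-layer back-propagation with momentum (illustrative sketch)."""
    rng = random.Random(seed)
    n_in = len(patterns[0][0])
    # Random initial weights: hidden layer W1, single-output layer W2
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    dW1 = [[0.0] * n_in for _ in range(n_hidden)]
    dW2 = [0.0] * n_hidden
    for _ in range(epochs):
        for x, t in patterns:
            # Forward pass: input -> hidden -> output
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
            o = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
            # Backward pass: output error term, then hidden error terms,
            # computed BEFORE the output-layer weights are updated
            d_o = (t - o) * o * (1.0 - o)
            d_h = [hi * (1.0 - hi) * d_o * W2[j] for j, hi in enumerate(h)]
            # Weight updates with momentum
            for j in range(n_hidden):
                dW2[j] = lr * d_o * h[j] + momentum * dW2[j]
                W2[j] += dW2[j]
                for i in range(n_in):
                    dW1[j][i] = lr * d_h[j] * x[i] + momentum * dW1[j][i]
                    W1[j][i] += dW1[j][i]
    return W1, W2

def predict(W1, W2, x):
    """Forward pass through the trained two-layer network."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * hi for w, hi in zip(W2, h)))
```

For example, training on the four input/target pairs of an AND gate reduces the total squared error relative to the randomly initialized network.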

NEURAL MODEL FOR AUTOTUTOR

Figure 4: Neural Model

The questions in the desired subject are put into Autotutor. Both the possible questions and answers are stored in the database. The questions given as input to Autotutor are matched against the trained questions in the database, and their corresponding answers are displayed to the students. By applying data from the sub-files to the input vector, the net input is calculated using an adder function (i.e., the net value). The net is processed by an activation function to produce the neuron's output. Here, F is called a squashing function; Autotutor uses the sigmoid function (meaning S-shaped): F(net) = out = 1/(1 + e^(-net)). Then the desired output is subtracted from the actual output to find the error, which is propagated backwards to minimize the error by adjusting the weights. Thus, the neural model was constructed by Autotutor.

TESTING THE AUTOTUTOR


For testing the BPN, the program prompts the user for the following inputs.

Step 1: Learner asks a question (or presents a problem).

Step 2: Tutor answers the question (or begins to solve the problem).

Step 3: Tutor gives short immediate feedback on the quality of the answer (or solution).

Step 4: The tutor and learner collaboratively improve the quality of the answer.

Step 5: The tutor assesses the learner's understanding of the answer.

As with training, the program normalizes the input and output parameters of the test signals prior to presentation to the network. The result file contains rows of data, where each row contains the non-normalized vector output from each node in the output layer. The next series of data points corresponds to the absolute difference between the non-normalized vector output of the network and the actual target output vectors. The final value represents the sum of the error taken across the output nodes of the network. For each entry in the list of extraction candidates, Autotutor first binds the variables to their candidate values. Then Autotutor performs a forward propagation on the trained Score Page network and outputs the network's score for the test document based on the candidate bindings. If the output value of the network is greater than the threshold defined during the training step, Autotutor records the bindings as an extraction; otherwise, these bindings are discarded.
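The accept/discard step can be sketched as follows. The scoring function and threshold value here are illustrative stand-ins for the trained Score Page network and its trained threshold:

```python
def filter_extractions(candidates, score_fn, threshold=0.5):
    """Keep only candidate bindings whose network score exceeds the threshold.

    score_fn stands in for a forward propagation on the trained network.
    """
    extractions = []
    for bindings in candidates:
        score = score_fn(bindings)
        if score > threshold:
            extractions.append((bindings, score))
        # bindings scoring at or below the threshold are discarded
    return extractions
```

For example, with a toy scoring function that rewards more bound slots, only the high-scoring candidate survives the threshold.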

Figure 5: Testing a Trained Network.

IMPLEMENTATION
This program implements the natural language processing network. It is used to detect structure by propagating error backwards through the network. The program learns to predict the best solution among various inputs. In this implementation, it has been shown that neural networks can be used to tackle, to a certain extent, a problem which at first glance appears not quite suitable for neural networks. The BPN introduced in this paper for error minimization can be implemented in O(n log n) flops per step and requires O(n) memory allocations; thus it is very efficient for large-scale computation. In this paper, we have demonstrated the feasibility of implementing large-data classifiers following an approximation by means of a sigmoidal function. The solution for a given problem improves as the number of training samples increases. Autotutor has been tested on nearly 200 students in a computer literacy course. The tutoring was provided as extra credit in the course at a point in time after the students had allegedly read the relevant chapters and attended a lecture in the course. So, Autotutor gave students an opportunity for additional study of the material.

MAIN RESULTS
For illustration, consider one of the experiments conducted on the Intelligent Tutoring System. Autotutor was tested on 56 students in a computer literacy course. The students received extra credit for participating in the experiment. Each student had one of the topics in neural networks (rules, architectures, applications) assigned to one of three conditions, using a suitable counterbalancing scheme: Autotutor (the student uses Autotutor to study one of the topics), Reread (the student re-reads a chapter for a topic), and a no-read Control (the student does not re-study a topic). A repeated-measures design was used so that we could evaluate aptitude x treatment interactions; that is, we could assess whether Autotutor is relatively effective for some categories of learners but not others (such as high versus low performers overall). On average, students took 58 minutes to use Autotutor, somewhat less than the 75 minutes assigned in the Reread condition. There were three outcome measures. There was a sample of test-bank questions in an N-alternative multiple-choice format. There was a sample of deep multiple-choice questions, one question for each of the 56 topics, that tapped causal inferences and reasoning. And finally, there was a cloze test that had 4 critical words deleted from the ideal answer for each topic; the students filled in these blanks with answers. The proportion of correct responses served as the metric of performance. These outcome measures were also combined into a composite score. There were significant differences in the composite scores among the three conditions, with means of .63, .58, and .56 in the Autotutor, Reread, and Control conditions, respectively. Planned comparisons showed the following pattern: Autotutor > Reread = Control. These results support the conclusion that Autotutor had a significant impact on learning gains. Our research revealed that Autotutor is almost as good as an expert in computer literacy at evaluating the quality of contributions in the tutorial dialog.

CONCLUSIONS
Autotutors are an effective and economically viable means of assisting students in overcoming learning difficulties. However, their widespread use has been inhibited by the high development costs associated with building such systems. The only way forward is to develop ITS architectures with reusable and interoperable components. This paper describes a neural network architecture for the development of Autotutors for the procedural and object-oriented programming paradigms. Finally, a number of studies will be conducted to acquire the knowledge needed by the domain and pedagogical models, e.g., studies of the errors made by students, and studies determining the relationship between a student's learning style and pedagogical preferences. Thus, the learner's performance is effectively evaluated with respect to more specific neural units.

