CONTENTS

ABSTRACT
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
1. INTRODUCTION
   1.1 PROJECT DESCRIPTION
2. SYSTEM STUDY
3. SYSTEM SPECIFICATION
4. LANGUAGE SPECIFICATION
5. SYSTEM DESIGN
6. SYSTEM TESTING
7. SYSTEM IMPLEMENTATION
8. CONCLUSION
BIBLIOGRAPHY
APPENDIX
   I. SCREEN SHOTS
   II. SAMPLE CODING
LIST OF FIGURES

FIGURE NAME
.NET FRAMEWORK
INTEROPERABILITY
WEB CONTROLS
o Personalization is based on user behaviour and individual interests (customization vs. personalization).
o User profile data provide information about the users of a Web site. A user profile contains demographic information (such as name, age, country, marital status, education, interests, etc.) for each user of a Web site, as well as information about the user's interests and preferences.
Scope of the Project:-
As the size of the Web increases along with the number of users, it is essential for website owners to understand their customers better so that they can provide better service and enhance the quality of the website. To achieve this they depend on the web access log files, which can be mined to extract interesting patterns so that user behaviour can be understood. Access of web pages according to period of time (e.g. daily, monthly, yearly) is registered. This project presents an overview of web usage mining and also provides a survey of the pattern-extraction algorithms used for web usage mining. It is a comprehensive access-log analysis tool: it allows you to keep track of activity on your site by month, week, day and hour; to monitor total hits, total visitors and total successful page views; and to keep track of your most popular pages. This work has two actors, Administrator and User. The Administrator extracts interesting patterns from the pre-processed web logs and can get general statistics such as the number of hits and the number of visitors. The User can log in, view the desired information, and search for what they need.
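The hit, visitor and popular-page statistics described above can be sketched in a few lines of C#. This is an illustrative sketch, not the project's code: it assumes a simplified common-log-format line, and the `LogStats` type and field layout are our own.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class LogStats
{
    // Parse one simplified common-log-format line:
    //   ip - - [date] "GET /page HTTP/1.1" status
    public static (string Ip, string Page) ParseLine(string line)
    {
        string ip = line.Split(' ')[0];
        int open = line.IndexOf('"');
        int close = line.IndexOf('"', open + 1);
        string request = line.Substring(open + 1, close - open - 1); // e.g. GET /home HTTP/1.1
        return (ip, request.Split(' ')[1]);
    }

    // Total hits, distinct visitors (by IP) and the most requested page.
    public static (int Hits, int Visitors, string TopPage) Summarize(IEnumerable<string> lines)
    {
        var records = lines.Select(ParseLine).ToList();
        string top = records.GroupBy(r => r.Page)
                            .OrderByDescending(g => g.Count())
                            .First().Key;
        return (records.Count, records.Select(r => r.Ip).Distinct().Count(), top);
    }

    static void Main()
    {
        var summary = Summarize(new[]
        {
            "1.2.3.4 - - [01/Jan/2014] \"GET /home HTTP/1.1\" 200",
            "5.6.7.8 - - [01/Jan/2014] \"GET /home HTTP/1.1\" 200"
        });
        Console.WriteLine("hits=" + summary.Hits + " visitors=" + summary.Visitors + " top=" + summary.TopPage);
    }
}
```

For a handful of sample lines from two IPs, `Summarize` returns the total hits, the distinct visitor count, and the most requested page.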
2. SYSTEM STUDY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate users about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.
Existing System:-
A good and effective Customer Relationship Management (CRM) system needs a clear understanding of customer requirements. The management should have up-to-date knowledge of the customers' needs, and should act accordingly. Gathering information about web user (customer) profiles and analysing those profiles is not easy for large data. For example, the Nokia site is designed to let customers search for exact cell phone models based on their interests; by analysing all the user profiles, the company can learn which cell phone model is the most wanted. Analysing web user profiles and making decisions and predictions is difficult. The accessed data may not be stored at the server side [1]. Many web sites provide a web user interface to their customers/users, but they may not collect all the accessed information, because they feel it is not important.
Proposed System:-
The proposed system provides collection and mining of the accessed data, which can improve the company's growth; the mining techniques also improve the quality of the user interface. The proposed model consists of the following activities:
o Collecting all the Web-accessed data on a web server and preparing the collected information as a data set (pre-pruning process).
o Creating per-user profiles and making decisions based on the user profiles.
o Showing the decisions to the company on the website.
o Collecting Web data such as activities/click streams recorded in Web server logs.
o Preprocessing the Web data, such as filtering requests and identifying unique regions.
o Interpretation/evaluation of the discovered profiles.
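The per-user profile step above can be sketched as a grouping over click-stream records. The `(UserId, Page)` record shape and the `Profiles` class are our illustration, not the project's actual data model.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Profiles
{
    // Group click records by user, then count page visits per user:
    // the result maps user -> (page -> visit count).
    public static Dictionary<string, Dictionary<string, int>> Build(
        IEnumerable<(string UserId, string Page)> clicks)
    {
        return clicks
            .GroupBy(c => c.UserId)
            .ToDictionary(
                g => g.Key,
                g => g.GroupBy(c => c.Page)
                      .ToDictionary(p => p.Key, p => p.Count()));
    }

    static void Main()
    {
        var profiles = Build(new[] { ("u1", "/home"), ("u1", "/faq"), ("u2", "/home") });
        foreach (var user in profiles)
            Console.WriteLine(user.Key + " visited " + user.Value.Count + " distinct pages");
    }
}
```

A profile of this shape is what the later decision step would consume, e.g. to find each user's most visited pages.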
3. SYSTEM SPECIFICATION

Software Requirements:
Operating System :- Windows 7
Front End :-
Coding Language :- C#
Backend :- SQL Server

Hardware Requirements:
System :
Hard disk :
Mouse :
RAM : 2 GB (minimum)
Keyboard :
4. LANGUAGE SPECIFICATION
4.1 FEATURES OF .NET:-
MANAGED CODE
Managed code is code that targets .NET and contains certain extra information - metadata - to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
MANAGED DATA
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications - data that doesn't get garbage collected but instead is looked after by unmanaged code.
COMMON TYPE SYSTEM
The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.
LANGUAGES SUPPORTED BY .NET
The .NET Framework provides new versions of Microsoft's old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes, and multi-threading.
Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET.
Managed Extensions for C++ and attributed programming are just
some of the enhancements made to the C++ language. Managed Extensions
simplify the task of migrating existing C++ applications to the new .NET
Framework.
C# is Microsoft's new language. It's a C-style language that is essentially C++ for Rapid Application Development. Unlike other languages, its specification is just the grammar of the language; it has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
Other languages for which .NET compilers are available include
FORTRAN
COBOL
Eiffel
GARBAGE COLLECTION
Garbage Collection is another new feature in C#.NET. The .NET
Framework monitors allocated resources, such as objects and variables. In
addition, the .NET Framework automatically releases memory for reuse by
destroying objects that are no longer in use.
In C#.NET, the garbage collector checks for the objects that are not currently in
use by applications. When the garbage collector comes across an object that is
marked for garbage collection, it releases the memory occupied by the object.
OVERLOADING
Overloading is another feature in C#. Overloading enables us to define
multiple procedures with the same name, where each procedure has a different
set of arguments. Besides using overloading for procedures, we can use it for
constructors and properties in a class.
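A minimal sketch of overloading (the `Printer` class and its `Describe` methods are hypothetical, not from the project):

```csharp
using System;

static class Printer
{
    // Three methods share one name; the compiler selects by argument types
    // and argument count.
    public static string Describe(int value) { return "int: " + value; }
    public static string Describe(string value) { return "string: " + value; }
    public static string Describe(int value, int times) { return "int: " + value + " (x" + times + ")"; }

    static void Main()
    {
        Console.WriteLine(Describe(3));
        Console.WriteLine(Describe("three"));
        Console.WriteLine(Describe(3, 2));
    }
}
```

Each call site resolves to a different overload purely from its signature; no runtime dispatch is involved.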
MULTITHREADING:
C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
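A minimal multithreading sketch, assuming nothing beyond `System.Threading` (the `Threads` class is our illustration): two threads run concurrently and `Join` waits for both to finish.

```csharp
using System;
using System.Threading;

static class Threads
{
    // Two worker threads run concurrently; Join blocks until each finishes,
    // so both assignments are visible when we return.
    public static int RunTwo()
    {
        int a = 0, b = 0;
        var t1 = new Thread(() => a = 1);
        var t2 = new Thread(() => b = 2);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        return a + b;
    }

    static void Main()
    {
        Console.WriteLine("sum = " + RunTwo());
    }
}
```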
STRUCTURED EXCEPTION HANDLING
C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use try-catch-finally statements to create exception handlers.
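The try-catch-finally flow can be sketched as follows (the `SafeDivide` class is hypothetical):

```csharp
using System;

static class SafeDivide
{
    // try runs the risky statement, catch handles the specific failure,
    // and finally runs whether or not an exception was thrown.
    public static string Divide(int a, int b)
    {
        string log = "";
        try
        {
            log += "result=" + (a / b);
        }
        catch (DivideByZeroException)
        {
            log += "error=divide by zero";
        }
        finally
        {
            log += ";done";
        }
        return log;
    }

    static void Main()
    {
        Console.WriteLine(Divide(6, 3));
        Console.WriteLine(Divide(1, 0));
    }
}
```

The `finally` clause appends `;done` on both the success and the failure path, which is the point of the construct: cleanup code runs regardless of outcome.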
TABLE:
A database is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two types,
1. Design View
2. Datasheet View
DESIGN VIEW
To build or modify the structure of a table we work in the table design view.
We can specify what kind of data will be hold.
DATASHEET VIEW
To add, edit or analyse the data itself we work in the table's datasheet view mode.
QUERY:
A query is a question asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (which can be edited) or a snapshot (which cannot be edited). Each time we run the query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
Literature survey:-

Information Extraction:-
Information Extraction (IE) is the name given to any process which
selectively structures and combines data which is found, explicitly stated or
implied, in one or more texts. The final output of the extraction process varies; in
every case, however, it can be transformed so as to populate some type of database.
Information analysts working long term on specific tasks already carry out
information extraction manually with the express goal of database creation.
One reason for interest in IE is its role in evaluating, and comparing,
different Natural Language Processing technologies. Unlike other NLP
technologies, MT for example, the evaluation process is concrete and can be
performed automatically. This, plus the fact that a successful extraction system has
immediate applications, has encouraged research funders to support both
evaluations of and research into IE. It seems at the moment that this funding will
continue and will bring about the existence of working systems. Applications of IE
are still scarce. A few well known examples exist and other classified systems may
also be in operation. It is certainly not true that the level of the technology is such
that it is easy to build systems for new tasks, or that the levels of performance are
sufficiently high for use in fully automatic systems. The effect on long term
research on NLP is debatable and this is considered in the final section which
speculates on future directions in IE. We begin our examination of IE by
considering a specific example from the Fourth Message Understanding
Conference (MUC-4 DARPA 92) evaluation. An examination of the prognosis for this relatively new, and as yet unproven, language technology follows, together with a brief history of how IE has evolved. The related problems of
evaluation methodology and task definition are examined. The current methods
used for building IE extraction systems are outlined. The term IE can be applied to
a range of tasks, and we consider three generic applications.
Visual Web Information Extraction with Lixto*:We present new techniques for supervised wrapper generation and
automated web information extraction, and a system called Lixto implementing
these techniques. Our system can generate wrappers which translate relevant pieces
of HTML pages into XML. Lixto, of which a working prototype has been
implemented, assists the user to semi-automatically create wrapper programs by
providing a fully visual and interactive user interface. In this convenient user interface very expressive extraction programs can be created. Internally, this functionality is reflected by the new logic-based declarative language Elog. Users never have to deal with Elog and even familiarity with HTML is not required. Lixto can be used to create an "XML-Companion" for an HTML web page. From the example page in Figure 1 we would like to extract book tuples, where each tuple consists of the title, the set of authors, the (optional) list price, and other attributes.

Extracting Structured Data from Web Pages:-
Many web sites contain large sets of pages generated using a common
template or layout. For example, Amazon lays out the author, title, comments, etc.
in the same way in all its book pages. The values used to generate the pages (e.g.,
the author, title,...) typically come from a database. In this paper, we study the
problem of automatically extracting the database values from such template
generated web pages without any learning examples or other similar human input.
We formally define a template, and propose a model that describes how values are
encoded into pages using a template. We present an algorithm that takes, as input, a
set of template-generated pages, deduces the unknown template used to generate
the pages, and extracts, as output, the values encoded in the pages. Experimental
evaluation on a large number of real input page collections indicates that our
algorithm correctly extracts data in most cases.
Web Object Retrieval:The primary function of current Web search engines is essentially relevance
ranking at the document level. However, myriad structured information about realworld objects is embedded in static Web pages and online Web databases.
Document-level information retrieval can unfortunately lead to highly inaccurate
relevance ranking in answering object-oriented queries. In this paper, we propose a
paradigm shift to enable searching at the object level. In traditional information
retrieval models, documents are taken as the retrieval units and the content of a
document is considered reliable. However, this reliability assumption is no longer
valid in the object retrieval context when multiple copies of information about the
same object typically exist. These copies may be inconsistent because of diversity
of Web site qualities and the limited performance of current information extraction techniques.
(Figure: system modules and flow - Authentication; Extracting Structured Data from Web Pages; Information Sources from webpage HTML code extraction; Wrapper process HTML element to XML element from webpage; Natural Language Processing extraction from webpage; actors User and Webpage; Information Extraction; Information Retrieval from Web; NLP Processing.)
The system extracts structured data from the World Wide Web (WWW). The World Wide Web contains huge amounts of data; however, we cannot benefit much from the large number of raw web pages unless the information within them is extracted accurately and organized well. Therefore, information extraction (IE) plays an important role in Web knowledge discovery and management.
The solution is thus to use wrapper technology to extract the relevant information from HTML documents and translate it into XML, which can easily be queried or further processed. Based on a new method of identifying and extracting relevant parts of HTML documents and translating them to XML format, we designed and implemented an efficient wrapper generator which is particularly well suited to building HTML/XML wrappers and introduces new ideas and programming language concepts for wrapper generation.
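In the spirit of the wrapper approach described above, a toy C# wrapper can pull relevant parts out of HTML with a regular expression and emit them as XML. The `TinyWrapper` name, the `<h2>` tag choice and the `<items>` output schema are our illustration only; a real wrapper language such as Elog is far more expressive.

```csharp
using System;
using System.Text.RegularExpressions;
using System.Xml.Linq;

static class TinyWrapper
{
    // Extract every <h2>…</h2> heading and wrap each one in an <item> element.
    public static string ToXml(string html)
    {
        var root = new XElement("items");
        foreach (Match m in Regex.Matches(html, "<h2>(.*?)</h2>", RegexOptions.Singleline))
            root.Add(new XElement("item", m.Groups[1].Value));
        return root.ToString(SaveOptions.DisableFormatting);
    }

    static void Main()
    {
        Console.WriteLine(ToXml("<h2>A</h2><p>ignored</p><h2>B</h2>"));
    }
}
```

The resulting XML can then be queried or further processed, which is exactly the benefit the wrapper approach aims for.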
Natural Language Processing (NLP) is a field of computer science and linguistics concerned with the interactions between computers and human (natural) languages.
In theory, natural-language processing is a very attractive method of human-computer interaction. Natural-language understanding is sometimes referred to as an AI-complete problem, because natural-language recognition seems to require
extensive knowledge about the outside world and the ability to manipulate it.
NLP has significant overlap with the field of computational linguistics, and is
often considered a sub-field of artificial intelligence.
Technique used or Algorithm:-
We use the HCRF algorithm, which is based on the VIPS approach. HCRF organizes the segmentation of the page hierarchically to form a tree structure and conducts inference on the vision tree to tag each vision node (vision block) with a label. The first algorithm is the original HCRF and extended Semi-CRF framework; we name it the Basic HCRF and extended Semi-CRF (BHS) algorithm.
5. SYSTEM DESIGN

(Diagram: webpage, information extraction, NewClass, HTML transform XML, label extraction.)

Class Diagram:-
(Classes: webpage, Information Extraction, HTML Transform XML, Labelling extraction, user.)

Sequence Diagram:-
(Lifelines: user, webpage, information extraction, HTML transform XML, labelling extraction, information retrieval from Web. Messages: create user; information extraction from webpage; transform HTML into XML; labelling extraction from XML; information retrieval from Web.)

Collaborative Diagram:-
(Collaboration between user, webpage and information retrieval from Web, with the same interactions as the sequence diagram.)

State Diagram:-
(States: user → webpage → information extraction → HTML transform XML → labelling extraction → retrieval of information from webpage.)

Activity Diagram:-
(Activities: user → webpage → information extraction → HTML transform XML → labelling extraction → retrieval of information from webpage.)

Component Diagram:-
(Components: webpage, user, information extraction, HTML transform XML, labelling extraction, retrieval of information from webpage.)

Object Diagram:-
(Objects: User, Webpage, Information extraction, Labeling extraction, HTML Transform XML, Retrieval of information from webpage.)

System Architecture:-
(Flow: User → Authentication → Webpage structure storage → Information Extraction → HTML Transform XML → Labeling extraction → Retrieval of information from webpage.)

E-R Diagram:-
(Entities: User and Webpage structure, with Authentication and storage; relationships: information extraction, HTML transforms XML, label extraction, retrieval of information from webpage.)
6. SYSTEM TESTING
Testing is carried out to uncover errors and to ensure that defined inputs produce actual results that agree with the required results. Testing is done using the two common steps, unit testing and integration testing. In this project, system testing is made as follows:
The procedure-level testing is made first. By giving improper inputs, the errors that occur are noted and eliminated. This is the final step in the system life cycle. Here we implement the tested, error-free system in a real-life environment and make the necessary changes, so that it runs in an online fashion. System maintenance is done every month or year based on company policies, and the system is checked for errors such as runtime errors and long-run errors, and for other maintenance tasks such as table verification and reports.
6.1. UNIT TESTING
Unit testing focuses verification efforts on the smallest unit of software design, the module. This is known as module testing. The modules are tested separately.
This testing is carried out during programming stage itself. In these testing steps,
each module is found to be working satisfactorily as regard to the expected output
from the module.
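Module-level testing of this kind can be sketched with plain assertions. The `IsValidUserName` module function below is hypothetical, used only to show the shape of a unit test:

```csharp
using System;

static class UnitTestSketch
{
    // Hypothetical module function under test.
    public static bool IsValidUserName(string name)
    {
        return !string.IsNullOrEmpty(name) && name.Length <= 20;
    }

    static void Check(bool condition, string what)
    {
        if (!condition) throw new Exception("FAILED: " + what);
    }

    static void Main()
    {
        // Each check exercises the module in isolation with a known input.
        Check(IsValidUserName("alice"), "accepts a normal name");
        Check(!IsValidUserName(""), "rejects an empty name");
        Check(!IsValidUserName(new string('x', 21)), "rejects an over-long name");
        Console.WriteLine("all module tests passed");
    }
}
```

Each assertion exercises one expected behaviour of the module against a known input, which is exactly the per-module verification described above.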
6.2. INTEGRATION TESTING
Integration testing is a systematic technique for constructing tests to uncover errors associated with the interfaces. In the project, all the modules are combined and then the entire program is tested as a whole. In the integration testing step, all the errors uncovered are corrected before the next testing step.
7. SYSTEM IMPLEMENTATION
Implementation is the stage of the project when the theoretical design is turned into a working system. Thus it can be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective.
The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve changeover, and evaluation of changeover methods.
Implementation is the process of converting a new system design into
operation. It is the phase that focuses on user training, site preparation and file
conversion for installing a candidate system. The important factor that should be
considered here is that the conversion should not disrupt the functioning of the
organization.
Application:-
The system is used to understand web pages by considering information extraction from the web page structure. The systematic data extraction is useful for collecting the information needed to understand the webpage.
Conclusion:-
This application has attempted to provide an up-to-date survey of the rapidly
growing area of Web Usage mining. With the growth of Web-based applications,
specifically electronic commerce, there is significant interest in analyzing Web
usage data to better understand Web usage, and apply the knowledge to better serve
users. This has led to a number of commercial offerings for doing such analysis.
However, Web Usage mining raises some hard scientific questions that must be
answered before robust tools can be developed. For Web usage mining, the session
dissimilarity measure is not a distance metric, and dealing with relational data is
impractical given the huge size of the data sets. Therefore, evolutionary techniques
which can deal with ill-defined features and non-differentiable similarity measures
are suitable. Evolutionary techniques can handle a vast array of subjective, even
non-metric dissimilarities, making them suitable for many applications in data and
Web mining. Moreover, they are meaningful only within well-defined distinct profiles/contexts (context-sensitive) as opposed to all or none of the data (context-blind). Today's web sites are a source of an exploding amount of click-stream data that can put the scalability of any data mining technique into question. Moreover, the Web access patterns on a web site are very dynamic, due not only to the dynamics of Web site content and structure, but also to changes in the users' interests, and thus their navigation patterns. The access patterns can be observed to change depending on the time of day, day of week, and according to seasonal patterns or other events in the world.
Reference or Bibliography:-
[1] J. Cowie and W. Lehnert, "Information Extraction," Comm. ACM, vol. 39, no. 1, pp. 80-91, 1996.
[2] C. Cardie, "Empirical Methods in Information Extraction," AI Magazine, vol. 18, no. 4, pp. 65-80, 1997.
[3] R. Baumgartner, S. Flesca, and G. Gottlob, "Visual Web Information Extraction with Lixto," Proc. Conf. Very Large Data Bases (VLDB), pp. 119-128, 2001.
[4] A. Arasu and H. Garcia-Molina, "Extracting Structured Data from Web Pages," Proc. ACM SIGMOD, pp. 337-348, 2003.
[5] D.W. Embley, Y.S. Jiang, and Y.-K. Ng, "Record-Boundary Discovery in Web Documents," Proc. ACM SIGMOD, pp. 467-478, 1999.
Screen shot:
Home page:
Login form:
Details Register:-
Feedback Form:
FAQ Form:-
Home code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace Loopunderstanding
{
    public partial class Home : Form
    {
        public Home()
        {
            InitializeComponent();
        }

        // Open the login form and hide the home page.
        private void button2_Click(object sender, EventArgs e)
        {
            Form1 fm = new Form1();
            fm.Show();
            this.Hide();
        }

        // Confirm and close the application.
        private void button5_Click(object sender, EventArgs e)
        {
            MessageBox.Show("Do You Want to Close this Application");
            Application.Exit();
        }

        private void button3_Click(object sender, EventArgs e)
        {
            MessageBox.Show("Please Login ");
        }
    }
}
Login code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;

namespace Loopunderstanding
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            SqlConnection con = new SqlConnection("data source=SPIRO40\\SQLEXPRESS;Initial catalog=web;integrated security=true");
            // Parameterized query: concatenating the textbox values directly
            // into the SQL string would be open to SQL injection.
            SqlCommand cmd = new SqlCommand("select * from login1 where uname=@uname and pwd=@pwd", con);
            cmd.Parameters.AddWithValue("@uname", textBox1.Text);
            cmd.Parameters.AddWithValue("@pwd", textBox2.Text);
            con.Open();
            SqlDataReader dr = cmd.ExecuteReader();
            if (dr.Read())
            {
                MessageBox.Show("Login SuccessFully");
                windows fm = new windows();
                fm.Show();
                this.Hide();
            }
            else
            {
                // invalid credentials: no action taken
            }
            con.Close();
        }

        // Back to the home page.
        private void button2_Click(object sender, EventArgs e)
        {
            // Application.Exit();
            Home pg = new Home();
            pg.Show();
            this.Hide();
        }

        private void linkLabel1_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
        {
        }
    }
}
Registrations code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;

namespace Loopunderstanding
{
    public partial class register : Form
    {
        public register()
        {
            InitializeComponent();
        }

        SqlConnection con = new SqlConnection("data source=SPIRO40\\SQLEXPRESS;Initial Catalog=web;integrated security=true");

        private void button1_Click(object sender, EventArgs e)
        {
            // The password (textBox2) and its confirmation (textBox3) must
            // match, and no field may be left empty.
            if (textBox2.Text == textBox3.Text)
            {
                if (textBox1.Text != "" && textBox2.Text != "" && textBox3.Text != "" &&
                    textBox4.Text != "" && textBox5.Text != "" && textBox6.Text != "")
                {
                    con.Open();
                    // Note: concatenating user input into SQL like this is open
                    // to SQL injection; parameterized commands are safer.
                    SqlCommand cmd = new SqlCommand("insert into register1 values('" + textBox1.Text + "','" + textBox2.Text + "','" + textBox3.Text + "','" + textBox4.Text + "','" + textBox5.Text + "','" + textBox6.Text + "')", con);
                    SqlCommand cmd1 = new SqlCommand("insert into login1 values('" + textBox1.Text + "','" + textBox2.Text + "')", con);
                    cmd.ExecuteNonQuery();
                    cmd1.ExecuteNonQuery();
                    MessageBox.Show("Your Details Registered");
                    Form1 fm = new Form1();
                    fm.Show();
                    this.Hide();
                    con.Close();
                }
                else
                {
                    MessageBox.Show("please fill the entire values");
                }
            }
            else
            {
                MessageBox.Show("Please Enter the Correct Password");
            }
        }

        private void button2_Click(object sender, EventArgs e)
        {
            // Application.Exit();
            Form1 pg = new Form1();
            pg.Show();
            this.Hide();
        }
    }
}
Website Search:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Net;
using System.Xml.Linq;
using System.Xml;
using System.IO;

namespace Loopunderstanding
{
    public partial class windows : Form
    {
        public windows()
        {
            InitializeComponent();
        }

        // Fetch the page and show its HTML source.
        private void button1_Click(object sender, EventArgs e)
        {
            string url = "http://" + textBox1.Text;
            webBrowser1.Url = new Uri(url);
            HttpWebRequest myWebRequest = (HttpWebRequest)HttpWebRequest.Create(url);
            myWebRequest.Method = "GET";
            // make request for web page
            HttpWebResponse myWebResponse = (HttpWebResponse)myWebRequest.GetResponse();
            StreamReader myWebSource = new StreamReader(myWebResponse.GetResponseStream());
            textBox2.Text = myWebSource.ReadToEnd();
            myWebResponse.Close();
        }

        private void button2_Click(object sender, EventArgs e)
        {
            string path = "C:\\Documents and Settings\\admin\\Desktop\\ITDDM08 FULL\\CODING\\loop understanding\\Loopunderstanding\\webpage_understanding.xml";
            // create the reader filestream (fs)
            FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            // Create the xml document
            System.Xml.XmlDocument CXML = new System.Xml.XmlDocument();
            // Load the xml document
            CXML.Load(fs);
            // Close the fs filestream
            fs.Close();
            XmlElement childNode = CXML.CreateElement("Website");
            XmlNode root = CXML.DocumentElement;
            // (remainder of this handler is missing from the source listing)
        }
    }
}
Feedback code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;

namespace Loopunderstanding
{
    public partial class Feedback : Form
    {
        public Feedback()
        {
            InitializeComponent();
        }

        // Store the feedback entry.
        private void button1_Click(object sender, EventArgs e)
        {
            SqlConnection con = new SqlConnection("Data Source=IFRAME3-PC\\SQLEXPRESS;Initial Catalog=web;Integrated Security=True");
            con.Open();
            SqlCommand cmd = new SqlCommand("insert into feedback values('" + textBox1.Text + "','" + textBox2.Text + "','" + textBox3.Text + "')", con);
            cmd.ExecuteNonQuery();
            con.Close();
            MessageBox.Show("Your feedback was submitted successfully. Thank you");
        }

        private void linkLabel1_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
        {
            Application.Exit();
        }
    }
}
FAQ Code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.IO;

namespace Loopunderstanding
{
    public partial class FAQ : Form
    {
        public FAQ()
        {
            InitializeComponent();
        }

        // Open the feedback form.
        private void linkLabel1_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
        {
            Feedback fb = new Feedback();
            fb.Show();
            this.Hide();
        }

        private void linkLabel2_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
        {
            Application.Exit();
        }

        // Show the answer text for the selected question (the answers are
        // read from text files on a network share).
        private void linkLabel12_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
        {
            label12.Text = File.ReadAllText("\\\\Iframe2-pc\\d\\2013 - 2014\\Own Concept\\Dotnet\\Diploma\\KPC\\Abarna Sampath\\ITDDM08\\ITDDM08 FULL\\CODING\\loop understanding\\Loopunderstanding\\faq\\kind.txt");
        }

        private void linkLabel10_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
        {
            label10.Text = File.ReadAllText("\\\\Iframe2-pc\\d\\2013 - 2014\\Own Concept\\Dotnet\\Diploma\\KPC\\Abarna Sampath\\ITDDM08\\ITDDM08 FULL\\CODING\\loop understanding\\Loopunderstanding\\faq\\uses webpge.txt");
        }
    }
}