The recent years have witnessed the birth and explosive growth of the Web. This exponential growth has made the Web a huge source of information, wherein finding a document without an efficient search engine is unimaginable. Web crawling has thus become an important aspect of Web search, on which the performance of search engines strongly depends. Focused Web crawlers try to concentrate the crawling process on topic-relevant Web documents; such topic-oriented crawlers are widely used in domain-specific Web search portals and personalized search tools. This paper designs a decentralized learning automata-based focused Web crawler. Taking advantage of learning automata, the proposed crawler learns the most relevant URLs and the promising paths leading to the target on-topic documents, and it can effectively adapt its configuration to Web dynamics. This crawler is expected to have a higher precision rate because it constructs a small Web graph of only on-topic documents. Based on the Martingale theorem, the convergence of the proposed algorithm is proved. To show the performance of the proposed crawler, extensive simulation experiments are conducted. The obtained results show the superiority of the proposed crawler over several existing methods in terms of precision, recall, and running time, and the t-test is used to verify the statistical significance of the precision results.

We begin with a study of finite automata and the languages they can define (the so-called "regular languages"). Topics include deterministic and nondeterministic automata, regular expressions, and the equivalence of these language-defining mechanisms. We also look at closure properties of the regular languages, e.g., the fact that the union of two regular languages is also a regular language, and decision properties, e.g., the fact that there is an algorithm to tell whether or not the languages defined by two finite automata are the same. Finally, we see the pumping lemma for regular languages, a way of proving that certain languages are not regular. Our second topic is context-free grammars and their languages. We learn about parse trees and follow a pattern similar to that for finite automata: closure properties, decision properties, and a pumping lemma for context-free languages. We also introduce the pushdown automaton, whose nondeterministic version is equivalent in language-defining power to context-free grammars. Next, we introduce the Turing machine, a kind of automaton that can define all the languages that can reasonably be said to be definable by any sort of computing device (the so-called "recursively enumerable languages"). We shall learn how "problems" (mathematical questions) can be expressed as languages, which lets us define a problem to be "decidable" if its language can be defined by a Turing machine and "undecidable" if not. We shall see some basic undecidable problems; for example, it is undecidable whether the intersection of two context-free languages is empty. Last, we look at the theory of intractable problems. We meet the NP-complete problems, a large class of problems that, while decidable, almost certainly have no algorithm that runs in time less than some exponential function of the size of their input. This class includes many of the hard combinatorial problems that have been assumed for decades or even centuries to require exponential time, and we learn that either none or all of these problems have polynomial-time algorithms. A common example of an NP-complete problem is SAT, the question of whether a Boolean expression has a truth assignment to its variables that makes the expression true.
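The closure and decision properties of regular languages mentioned above both rest on the same idea: running two deterministic finite automata in lockstep (the product construction). Below is a minimal sketch, not from the text; the dictionary-based DFA representation (`start`, `accept`, `delta`) is an illustrative encoding chosen here, not a standard API.

```python
# Sketch of the product construction: simulate two DFAs in lockstep on
# the same input, and accept when EITHER component accepts. This
# recognizes the union of the two languages, showing it is regular.
# (The DFA encoding used here is an assumption for illustration.)

def union_dfa_accepts(dfa1, dfa2, word):
    """Simulate the product DFA of dfa1 and dfa2 on `word`."""
    s1, s2 = dfa1['start'], dfa2['start']
    for symbol in word:
        # Both component automata step on the same input symbol.
        s1 = dfa1['delta'][(s1, symbol)]
        s2 = dfa2['delta'][(s2, symbol)]
    # Accept if either component accepts: the union of the languages.
    return s1 in dfa1['accept'] or s2 in dfa2['accept']

# DFA for binary strings with an even number of 0s.
even_zeros = {
    'start': 'e', 'accept': {'e'},
    'delta': {('e', '0'): 'o', ('e', '1'): 'e',
              ('o', '0'): 'e', ('o', '1'): 'o'},
}

# DFA for binary strings ending in 1.
ends_in_one = {
    'start': 'a', 'accept': {'b'},
    'delta': {('a', '0'): 'a', ('a', '1'): 'b',
              ('b', '0'): 'a', ('b', '1'): 'b'},
}
```

The same product construction underlies the equivalence-testing algorithm: accepting exactly when the two components disagree yields an automaton for the symmetric difference of the languages, which is empty if and only if the two automata define the same language.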
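The SAT problem described above can be made concrete with a brute-force sketch: try every truth assignment to the variables. This runs in exponential time in the number of variables, matching the intuition that NP-complete problems seem to require exponential time; the predicate-based formula interface here is an illustrative stand-in, not a standard representation.

```python
# Brute-force SAT sketch (illustrative): enumerate all 2^n truth
# assignments to n variables and test the formula on each one.
from itertools import product

def brute_force_sat(variables, formula):
    """Return a satisfying assignment (dict) or None if unsatisfiable.

    `formula` is any predicate over an assignment dict; this is a
    stand-in for a real CNF representation.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (x OR y) AND (NOT x OR NOT y): satisfiable, e.g. by x != y.
sat = brute_force_sat(
    ['x', 'y'],
    lambda a: (a['x'] or a['y']) and (not a['x'] or not a['y']))

# x AND NOT x: unsatisfiable, so the search returns None.
unsat = brute_force_sat(['x'], lambda a: a['x'] and not a['x'])
```

No known algorithm does fundamentally better than this exponential enumeration in the worst case, and a polynomial-time algorithm for SAT would yield one for every NP-complete problem.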