Sanjeev Kumar Singh

Work place: Dept. of Mathematics, Galgotias University, Gr. Noida, India

E-mail: sksingh8@gmail.com


Research Interests: Computational Engineering, Computational Mathematics, Solid Modeling, Information Systems, Data Mining, Information Retrieval, Data Structures, Mathematics

Biography

Dr. Sanjeev Kumar Singh is working as an Assistant Professor in the Department of Mathematics at Galgotias University, Gr. Noida, India. He earned his M.Sc. and Ph.D. degrees with a major in Mathematics and a minor in Computer Science from G.B. Pant University of Agriculture and Technology, Pantnagar, Uttarakhand, India. Before that, he completed his B.Sc. (Physics, Mathematics & Computer Science) at Lucknow University, Lucknow.

He has more than nine years of teaching and research experience. Besides organizing three national conferences, he has published several research papers in reputed international and national journals and has presented his work at several national and international conferences and workshops. His areas of interest include Mathematical Modeling, Differential Geometry, Computational Mathematics, Data Mining, and Information Retrieval.

Author Articles
A Novel Web Page Change Detection Approach using Sql Server

By Md. Abu Kausar V. S. Dhaka Sanjeev Kumar Singh

DOI: https://doi.org/10.5815/ijmecs.2015.09.05, Pub. Date: 8 Sep. 2015

The WWW is very dynamic in nature and is basically used for the exchange of information: the content of web pages is continuously added, changed, or deleted, which makes it very challenging for web crawlers to keep their copies refreshed to the current version. Since a web page frequently changes its content, it becomes essential to develop an effective system that can detect such changes efficiently, in the lowest browsing time. In this paper we compare the hash value of the old web page with the hash value of the new web page; if the hash value differs, the web page content has changed. Changes in a web page can thus be detected by computing its hash value, which is unique to its content.
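The comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's SQL Server implementation: the function names and the choice of SHA-256 are assumptions for demonstration, and a real system would store the old digest in a database rather than keep both page versions.

```python
import hashlib

def page_hash(content: str) -> str:
    # Digest of the page content; SHA-256 is assumed here for illustration.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def has_changed(old_content: str, new_content: str) -> bool:
    # A page is considered changed whenever its stored hash and fresh hash differ.
    return page_hash(old_content) != page_hash(new_content)

print(has_changed("<html>v1</html>", "<html>v1</html>"))  # False
print(has_changed("<html>v1</html>", "<html>v2</html>"))  # True
```

Comparing two short digests is far cheaper than comparing full page bodies, which is why the crawler can check many pages in little browsing time.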

[...] Read more.
Implementation of Parallel Web Crawler through .NET Technology

By Md. Abu Kausar V. S. Dhaka Sanjeev Kumar Singh

DOI: https://doi.org/10.5815/ijmecs.2014.08.07, Pub. Date: 8 Aug. 2014

The WWW is growing at a very fast rate, and the data or information present on the web changes very frequently. As the web is very dynamic, it becomes difficult to get related and fresh information. In this paper we design and develop a web crawler that uses multiple HTTP connections for crawling the web, with multiple threads used to implement the multiple connections. The whole downloading time can be reduced with the help of these threads. The proposed approach is implemented in VB.NET with multithreading to crawl web pages in parallel, and the crawled data is stored in a central database (SQL Server). Duplicate records are detected through a stored procedure, which is pre-compiled and therefore returns results very fast. The proposed architecture is fast and allows many crawlers to crawl the data in parallel.
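The thread-per-connection idea can be illustrated with a short sketch. The paper's implementation is in VB.NET with SQL Server; the Python below is only an assumed stand-in, and `fetch` is a hypothetical stub standing in for a real HTTP GET so the example stays self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for an HTTP GET; a real crawler would open a connection
    # with urllib or a similar client here.
    return f"<html>content of {url}</html>"

def crawl(urls, workers=4):
    # Each worker thread drives its own HTTP connection, so pages
    # download in parallel and total wall-clock time drops.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(fetch, urls)))

pages = crawl(["http://example.com/a", "http://example.com/b"])
```

In the paper's architecture, each crawled page would then be handed to the central database, where a pre-compiled stored procedure rejects duplicates before insertion.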

[...] Read more.
Web Crawler Based on Mobile Agent and Java Aglets

By Md. Abu Kausar V. S. Dhaka Sanjeev Kumar Singh

DOI: https://doi.org/10.5815/ijitcs.2013.10.09, Pub. Date: 8 Sep. 2013

With the huge growth of the Internet, many web pages are available online. Search engines use web crawlers to collect these web pages from the World Wide Web for the purpose of storage and indexing. Basically, a web crawler is a program that finds information on the World Wide Web in a systematic and automated manner. Network load can be further reduced by using mobile agents.
The proposed approach uses mobile agents to crawl the pages. A mobile agent is not bound to the system in which it starts execution; it has the unique ability to transfer itself from one system in a network to another. The main advantage of a web crawler based on mobile agents is that the analysis part of the crawling process is done locally at the remote site rather than back at the crawler. This drastically reduces network load and traffic, which can improve the performance and efficiency of the whole crawling process.
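The benefit of analysing pages at the remote site can be sketched as follows. This is a toy illustration of the idea only, not the paper's Java Aglets system: the class and field names are assumptions, and the "remote site" is simulated by a plain dictionary.

```python
class CrawlAgent:
    """Toy mobile-agent sketch: the agent runs where the pages live,
    so only a compact summary travels back over the network."""

    def analyse(self, pages: dict) -> list:
        # Full page bodies never cross the network; only per-page
        # metadata (URL and size here) is returned to the crawler.
        return [{"url": url, "size": len(body)} for url, body in pages.items()]

# Simulated remote site holding its own pages locally.
site = {"http://example.com/a": "<html>alpha</html>",
        "http://example.com/b": "<html>beta</html>"}

summary = CrawlAgent().analyse(site)
```

Shipping the summary instead of every page body is what cuts the network traffic the abstract refers to; the heavier the pages, the larger the saving.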

[...] Read more.
Other Articles