Tag: spider (14 results)
  1. PHP Functions for detecting Spiders and Small Screen Devices
    1887 total visits
    Two PHP functions that allow a developer to test for spiders and for small screen devices. They can be used for user-agent cloaking, delivering lightweight content to PDAs and cell phones, or solving any other problem that requires serving different content based on the user agent.
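    In practice this kind of test usually reduces to matching the User-Agent header against a list of known bot signatures. A minimal sketch of the idea, assuming an illustrative signature list rather than the script's actual one:

    <?php
    // Hypothetical sketch: substring match against known bot signatures.
    // The list below is illustrative, not the script's actual list.
    function is_spider(string $userAgent): bool
    {
        $botSignatures = ['googlebot', 'bingbot', 'slurp', 'crawler', 'spider'];
        $ua = strtolower($userAgent);
        foreach ($botSignatures as $signature) {
            if (strpos($ua, $signature) !== false) {
                return true;   // a known signature appears in the UA string
            }
        }
        return false;
    }

    if (is_spider($_SERVER['HTTP_USER_AGENT'] ?? '')) {
        echo 'Serving crawler-friendly content';
    }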
  2. Mini spiders and bots class
    1915 total visits
    With the Mini spiders and bots class, requests can be sent to various types of servers in order to perform different actions. Key features of the class:
    - Get the correct spelling of a text phrase using Google search spelling suggestions
    - Get exchange rates from the Banca di Italia site
    - Get weather forecasts from Google
    - Get shorter URLs using TinyURL
    - Get the geographic ...
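    The TinyURL feature, for example, can be implemented against TinyURL's public api-create.php endpoint, which returns the shortened URL as plain text; how this class actually wraps it is an assumption:

    <?php
    // Sketch of URL shortening via TinyURL's api-create.php endpoint;
    // the function name is illustrative, not the class's API.
    function tinyurl(string $longUrl): ?string
    {
        $endpoint = 'https://tinyurl.com/api-create.php?url=' . urlencode($longUrl);
        $short = @file_get_contents($endpoint);   // requires allow_url_fopen
        return $short === false ? null : $short;
    }

    echo tinyurl('https://example.com/some/very/long/path?with=parameters');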
  3. Bot recognizer and dispatcher
    2139 total visits
    With Bot recognizer and dispatcher, the IP address of the computer or the user agent of the browser currently accessing a Web service can be checked to determine whether it is known to be used by search engine robots or by malicious Web crawlers. The class can call different callback functions depending on the type of crawler that was identified. ...
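    The dispatch idea can be sketched as a map from recognized crawler types to callbacks; the type names and recognition rules below are assumptions:

    <?php
    // Hypothetical dispatch table keyed by crawler type.
    $handlers = [
        'search_engine' => function () { readfile('crawler_page.html'); },
        'malicious'     => function () { http_response_code(403); },
        'browser'       => function () { readfile('index.html'); },
    ];

    // Toy recognizer: real implementations also check known IP ranges.
    function recognize(string $ip, string $ua): string
    {
        if (stripos($ua, 'googlebot') !== false) return 'search_engine';
        if (stripos($ua, 'nikto') !== false)     return 'malicious';
        return 'browser';
    }

    $type = recognize($_SERVER['REMOTE_ADDR'], $_SERVER['HTTP_USER_AGENT'] ?? '');
    $handlers[$type]();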
  4. Spider website
    2096 total visits
    Spider website can retrieve a page of a site and follow all of its links recursively to retrieve all the site's URLs. The crawling can be restricted to URLs with a particular extension. It can also avoid accessing pages listed in the site's robots.txt file, or pages set with the noindex or nofollow meta tags. Requirements: PHP 5.0 or higher
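    The meta-tag check it describes can be done by inspecting each fetched page's robots meta element; a sketch using PHP's DOMDocument (the surrounding crawl loop is assumed):

    <?php
    // Returns false when a robots meta tag asks crawlers to stay away.
    function robots_meta_allows(string $html): bool
    {
        $doc = new DOMDocument();
        @$doc->loadHTML($html);   // @ silences warnings on messy HTML
        foreach ($doc->getElementsByTagName('meta') as $meta) {
            if (strtolower($meta->getAttribute('name')) === 'robots') {
                $content = strtolower($meta->getAttribute('content'));
                if (strpos($content, 'noindex') !== false ||
                    strpos($content, 'nofollow') !== false) {
                    return false;
                }
            }
        }
        return true;
    }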
  5. Link Searcher
    1715 total visits
    Link Searcher retrieves a given Web page and searches for links contained in it. The new links that are found are added to a queue to be crawled later, thus implementing recursive searching up to a given depth limit. Link Searcher supports regular expressions. Requirements: PHP 4.0 or higher
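    The queue-plus-depth-limit pattern it describes looks roughly like this; extract_links() is a placeholder for whatever link parser is plugged in:

    <?php
    // Breadth-first crawl with a visited set and a depth limit (sketch).
    function crawl(string $startUrl, int $maxDepth): array
    {
        $queue = [[$startUrl, 0]];
        $seen  = [$startUrl => true];
        $found = [];
        while ($queue) {
            [$url, $depth] = array_shift($queue);
            $found[] = $url;
            if ($depth >= $maxDepth) continue;   // depth limit reached
            $html = @file_get_contents($url);
            if ($html === false) continue;
            foreach (extract_links($html) as $link) {   // placeholder parser
                if (!isset($seen[$link])) {
                    $seen[$link] = true;
                    $queue[] = [$link, $depth + 1];
                }
            }
        }
        return $found;
    }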
  6. Automap
    2358 total visits
    Automap is able to crawl a website by following the links found on the pages, starting from a given URL. It can build a sitemap XML document with all the URLs of the pages that were found. The number of crawled links, the allowed page extensions and the disallowed link URL parameters are configurable. Requirements: PHP 5 or higher
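    Sitemap documents follow the standard sitemaps.org schema; a sketch of that generation step using PHP's XMLWriter (how Automap itself writes the file is an assumption):

    <?php
    // Build a minimal urlset document from a list of crawled URLs.
    function build_sitemap(array $urls): string
    {
        $w = new XMLWriter();
        $w->openMemory();
        $w->startDocument('1.0', 'UTF-8');
        $w->startElement('urlset');
        $w->writeAttribute('xmlns', 'http://www.sitemaps.org/schemas/sitemap/0.9');
        foreach ($urls as $url) {
            $w->startElement('url');
            $w->writeElement('loc', $url);
            $w->endElement();
        }
        $w->endElement();
        $w->endDocument();
        return $w->outputMemory();
    }

    file_put_contents('sitemap.xml', build_sitemap(['https://example.com/']));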
  7. Free PHP RSS to Content
    1648 total visits
    Free PHP RSS to Content builds HTML from the items it retrieves from multiple RSS feeds. It accepts up to 5 feeds, combines them, shuffles the items, and inserts your preset number of listings from the feeds into your HTML for spiders to crawl. You can use Yahoo news feeds or any other newsfeed made up of multiple search ...
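    The combine-and-shuffle step can be sketched with SimpleXML; the feed URLs and the item limit below are illustrative:

    <?php
    // Merge items from several RSS 2.0 feeds, shuffle, print a sample.
    $feeds = [
        'https://news.yahoo.com/rss/',
        'https://example.com/feed.xml',   // placeholder feed
    ];
    $items = [];
    foreach ($feeds as $feedUrl) {
        $rss = @simplexml_load_file($feedUrl);
        if ($rss === false) continue;
        foreach ($rss->channel->item as $item) {
            $items[] = ['title' => (string)$item->title,
                        'link'  => (string)$item->link];
        }
    }
    shuffle($items);
    foreach (array_slice($items, 0, 10) as $item) {
        printf("<a href=\"%s\">%s</a><br>\n",
            htmlspecialchars($item['link']), htmlspecialchars($item['title']));
    }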
  8. Spider Engine
    2131 total visits
    Spider Engine can retrieve one or more pages from Web sites. The URLs of the pages may follow a numeric pattern. The HTML pages are parsed for configurable patterns. The data found in the pages is passed to a separate function for custom processing that can be implemented by a sub-class. Requirements: PHP 4.0 or higher
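    The numeric-pattern retrieval and the sub-class hook might look like this; the class and method names are assumptions, not the script's actual API:

    <?php
    // Sketch: fetch page1.html ... pageN.html and hand matches to process().
    abstract class SpiderEngineSketch
    {
        public function run(string $urlPattern, int $from, int $to, string $regex): void
        {
            for ($i = $from; $i <= $to; $i++) {
                $html = @file_get_contents(sprintf($urlPattern, $i));
                if ($html !== false && preg_match_all($regex, $html, $m)) {
                    $this->process($m[1]);   // first capture group
                }
            }
        }

        // Sub-classes implement custom handling of the extracted data.
        abstract protected function process(array $matches): void;
    }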
  9.
    2039 total visits
    Regular expressions are used to retrieve URLs, and all links detected on a page are followed.
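    A common form of that regular expression pulls href attribute values; the exact pattern this script uses is not shown, so this one is illustrative:

    <?php
    // Extract href values from an HTML string with a regular expression.
    function extract_links_regex(string $html): array
    {
        preg_match_all('/href\s*=\s*["\']([^"\']+)["\']/i', $html, $matches);
        return array_unique($matches[1]);
    }

    print_r(extract_links_regex('<a href="https://example.com/">Example</a>'));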
  10. Spider Class
    2314 total visits
    Spider Class can retrieve Web pages and parse them to extract the list of their links, in order to continue crawling all linked pages. The pages may be retrieved iteratively until a given limit of pages or link depth is reached. Spider Class makes it possible to set regular expressions for both link definitions and content matches, changeable at every depth.
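    The changeable-at-every-depth idea suggests a configuration keyed by crawl depth; the structure below is an assumption about how such rules could be organized, not the class's actual format:

    <?php
    // Hypothetical per-depth rules: one link pattern and one content
    // pattern for each crawl depth.
    $rules = [
        0 => ['link'    => '/href="(\/category\/[^"]+)"/',
              'content' => '/<h1>(.*?)<\/h1>/s'],
        1 => ['link'    => '/href="(\/item\/[^"]+)"/',
              'content' => '/<p class="price">(.*?)<\/p>/s'],
    ];

    // Fall back to the deepest configured rule beyond the last depth.
    function patterns_for(int $depth, array $rules): array
    {
        return $rules[min($depth, max(array_keys($rules)))];
    }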
  11.
    1994 total visits
    This small, easy-to-install RSS script will display any RSS feeds on your website. It also randomizes the order in which they are displayed, which can encourage search engine spiders to revisit your site more often.
  12. robots.spider(tm)
    1459 total visits
    robots.spider(tm) will read a robots.txt file and compare it with the pages on the site. Search engines spider the site, and if there are no references to a directory or file, it will not be indexed; a robots.txt file therefore should not refer to otherwise unreferenced content, because doing so lets other visitors identify areas of your site that you don't want indexed. The ...
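    The first half of that comparison is just parsing the Disallow lines out of robots.txt; gathering the site's actual pages to compare against is left as an assumed step:

    <?php
    // Collect Disallow paths from a robots.txt file (sketch).
    function disallowed_paths(string $robotsTxt): array
    {
        $paths = [];
        foreach (explode("\n", $robotsTxt) as $line) {
            if (preg_match('/^\s*Disallow:\s*(\S+)/i', $line, $m)) {
                $paths[] = $m[1];
            }
        }
        return $paths;
    }

    print_r(disallowed_paths(file_get_contents('robots.txt')));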
  13. Fetch Link Class
    1783 total visits
    Fetch Link Class uses text parsing, rather than regular expressions, to find and extract every link in a Web page. The class is suitable for developers who want to build spider and crawler programs.
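    A regex-free extractor in the same spirit can be built on PHP's DOMDocument; the class's own text parser is presumably different:

    <?php
    // Extract links by walking the parsed DOM instead of using regexes.
    function extract_links_dom(string $html): array
    {
        $doc = new DOMDocument();
        @$doc->loadHTML($html);
        $links = [];
        foreach ($doc->getElementsByTagName('a') as $a) {
            $href = $a->getAttribute('href');
            if ($href !== '') {
                $links[] = $href;
            }
        }
        return array_unique($links);
    }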
  14. Statizier
    1816 total visits
    Statizier provides a simple way to hide a site's dynamic nature from Web spiders so that they can index it. Almost all Web spiders (such as Google's) cannot index dynamic sites due to the parameters present in almost all links on the site. Statizier, along with a simple modification to your Apache configuration file, allows you to hide all the parameters and dynamic links ...
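    The idea is that a spider sees a static-looking path such as /article/42/lang/en instead of index.php?article=42&lang=en, and the script unpacks the path back into parameters; the Apache rewrite that routes such paths to the script is the configuration change mentioned above. A sketch of the PHP side (the URL scheme is an assumption):

    <?php
    // Rebuild query parameters from a static-looking path in PATH_INFO.
    // /article/42/lang/en  =>  ['article' => '42', 'lang' => 'en']
    $segments = array_values(array_filter(explode('/', $_SERVER['PATH_INFO'] ?? '')));
    $params = [];
    for ($i = 0; $i + 1 < count($segments); $i += 2) {
        $params[$segments[$i]] = $segments[$i + 1];
    }
    print_r($params);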