Posts

Showing posts from November, 2017

5 ways to backup

5 Tips to Backup and Restore Database in PostgreSQL
Written by Rahul K.
Tags: backup, database, pg_dump, PostgreSQL, psql, restore

The PostgreSQL database server provides the pg_dump and psql utilities for backing up and restoring databases. This article describes various ways to use the pg_dump command to back up a database. You will also learn how to restore a database from backup.

Backup and Restore Database in PostgreSQL

Below are some connection options you can use to connect to a remote or authenticated server with all of the commands given in this article:

-d, --dbname=DBNAME    database name
-h, --host=HOSTNAME    database server hostname or IP
-p, --port=PORT        database server port number (default: 5432)
-U, --username=NAME    connect as specified database user
-W, --password         force password prompt
--role=ROLENAME        do SET ROLE before dump

1. Backup and Restore Single Database

Backup a single database in PostgreSQL. Replace your actual database name w...
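The connection options above can also be assembled programmatically when backups are automated. A minimal sketch in Python, assuming a database named `mydb` and default local connection settings (all names here are illustrative, not from the original post):

```python
# Hypothetical example: building pg_dump / psql command lines for a
# single-database backup and restore. Database name, host, and file
# names are illustrative assumptions.
import subprocess

def build_pg_dump_cmd(dbname, host="localhost", port=5432,
                      user="postgres", outfile="backup.sql"):
    """Assemble a pg_dump command line for a plain-SQL single-database dump."""
    return ["pg_dump", "-h", host, "-p", str(port),
            "-U", user, "-f", outfile, dbname]

def build_restore_cmd(dbname, infile="backup.sql", host="localhost",
                      port=5432, user="postgres"):
    """Assemble the matching psql command to restore the plain-SQL dump."""
    return ["psql", "-h", host, "-p", str(port),
            "-U", user, "-d", dbname, "-f", infile]

if __name__ == "__main__":
    # Runs pg_dump; will prompt for a password unless ~/.pgpass is configured.
    subprocess.run(build_pg_dump_cmd("mydb"), check=True)
```

Note that plain-SQL dumps restore via psql, while custom-format dumps (`pg_dump -Fc`) restore via pg_restore instead.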

psql command link

http://www.postgresqltutorial.com/psql-commands/

Amazon Products Learning link

https://amazon-run.qwiklab.com/focuses/3459

BeanStalk

Beanstalk is a simple, fast work queue. Its interface is generic, but it was originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously.

Run It

First, run beanstalkd on one or more machines. There is no configuration file and only a handful of command-line options.

$ ./beanstalkd -l 10.0.1.5 -p 11300

This starts up beanstalkd listening on address 10.0.1.5, port 11300. For more information on how to run beanstalkd as a background service in production, see the adm directory.

Use It

Here's an example in Ruby; see the client libraries to find your favorite language. First, have one process put a job into the queue:

beanstalk = Beanstalk::Pool.new(['10.0.1.5:11300'])
beanstalk.put('hello')

Then start another process to take jobs out of the queue and run them:

beanstalk = Beanstalk::Pool.new(['10.0.1.5:11300'])
loop do
  job = beanstalk.reserve
  pu...

First Web Crawler

Develop your first web crawler in Python Scrapy

The scraping series would not be complete without discussing Scrapy. In this post I am going to write a web crawler that will scrape data from OLX's Electronics & Appliances items. Before I get into the code, how about a brief intro to Scrapy itself?

What is Scrapy? From Wikipedia:

Scrapy (/ˈskreɪpi/ SKRAY-pee) is a free and open-source web crawling framework written in Python. Originally designed for web scraping, it can also be used to extract data using APIs or as a general-purpose web crawler. It is currently maintained by Scrapinghub Ltd., a web scraping development and services company.

In short, it is a web crawling framework that has done all the heavy lifting needed to write a crawler. What those things are, I will explore further below. Read on!

Creating a Project

Scrapy introduces the idea of a project with multiple crawlers, or spiders, in a single project. This concept is ...

Search and Download YouTube Videos

Search and download YouTube videos using Python

The following Python module allows users to search YouTube videos and download all the videos from the different playlists found within the search. Currently, it is able to search for playlists or collections of videos and download individual videos from each of the playlists. For example, searching for "Top English KTV" will scan all the song playlists found in the search results and collect each song's web link from every playlist to be downloaded locally. Users can choose to download in either video or audio format.

The script makes use of the Python Pattern module for URL requests and DOM object processing. For the actual downloading of videos, it utilizes Pafy. Pafy is a very comprehensive Python module that allows downloading in both video and audio formats. There are other features of Pafy that are not used in this module. The following are the main flow o...
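The Pafy side of the download step can be sketched as below. This is a hedged illustration, not the post's actual module: the function names are hypothetical, the URL would come from the playlist-scanning step, and Pafy (plus a backend such as youtube-dl) must be installed:

```python
# Hypothetical helpers illustrating Pafy's download API.
# Function names and the folder argument are assumptions for this sketch.

def download_audio(url, folder="."):
    """Download the best available audio stream for a single video URL."""
    import pafy  # imported lazily so the module loads without pafy installed
    video = pafy.new(url)
    stream = video.getbestaudio()
    return stream.download(filepath=folder)

def download_video(url, folder="."):
    """Download the best available video stream instead."""
    import pafy
    video = pafy.new(url)
    return video.getbest().download(filepath=folder)
```

The choice between `getbestaudio()` and `getbest()` is what gives the user the audio-format / video-format option the post describes.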

Python Scrapy Details

1. Overview of Scrapy

Scrapy is a Python framework for large-scale web scraping. It gives you all the tools you need to efficiently extract data from websites, process it as you want, and store it in your preferred structure and format. As diverse as the internet is, there is no "one size fits all" approach to extracting data from websites. Many a time, ad hoc approaches are taken, and if you start writing code for every little task you perform, you will eventually end up creating your own scraping framework. Scrapy is that framework. With Scrapy you don't need to reinvent the wheel.

Note: There are no specific prerequisites for this article; a basic knowledge of HTML and CSS is preferred. If you still think you need a refresher, do a quick read of this article.

2. Write your first Web Scraping code with Scrapy

We will first quickly take a look at how to set up your system for web scraping and then see how we can build a simple web scraping system for extracting data...