Getting Structured Data from the Internet: Running Web Crawlers/Scrapers on a Big Data Production Scale

€84,50 EUR
SKU: 9781484265758
Author: Jay M. Patel
Format: Paperback
Language: English
Utilize web scraping at scale to quickly get unlimited amounts of free data available on the web into a structured format. This book teaches you to use Python scripts to crawl through websites at scale, scrape data from HTML and JavaScript-enabled pages, and convert it into structured formats such as CSV, Excel, or JSON, or load it into a SQL database of your choice.
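As a taste of the HTML-to-structured-data workflow described above, here is a minimal sketch using only the Python standard library (the book itself works with lxml and BeautifulSoup); the sample HTML snippet is hypothetical.

```python
# Minimal sketch: parse HTML and emit structured CSV, using only the
# standard library. The sample HTML below is a made-up illustration.
import csv
import io
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (href, anchor text) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []      # rows of [href, anchor text]
        self._href = None
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
    def handle_data(self, data):
        if self._href is not None and data.strip():
            self.links.append([self._href, data.strip()])
            self._href = None

html = ('<p>See <a href="https://example.com/a">Page A</a> '
        'and <a href="https://example.com/b">Page B</a>.</p>')
parser = LinkExtractor()
parser.feed(html)

# Convert the scraped rows into CSV, one of the structured formats mentioned.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["url", "text"])
writer.writerows(parser.links)
print(buf.getvalue())
```

Real pages are messier than this snippet, which is why the book reaches for dedicated parsers and, for JavaScript-heavy pages, browser automation with Selenium.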

This book goes beyond the basics of web scraping and covers advanced topics such as natural language processing (NLP) and text analytics to extract names of people, places, email addresses, contact details, and more from a page at production scale, using distributed big data techniques on an Amazon Web Services (AWS)-based cloud infrastructure. It also covers developing a robust data processing and ingestion pipeline on the Common Crawl corpus, a petabyte-scale web crawl dataset publicly available on AWS's Registry of Open Data.
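A toy illustration of the extraction step mentioned above: pulling email addresses out of raw page text with a regular expression. The pattern and sample text are simplified assumptions; the book tackles this kind of extraction at production scale with proper NLP tooling.

```python
# Toy version of entity extraction from page text: a simple regex for
# email addresses. The pattern is deliberately simplified and the
# sample text is hypothetical.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

page_text = "Contact sales@example.com or support@example.org for details."
emails = EMAIL_RE.findall(page_text)
print(emails)  # ['sales@example.com', 'support@example.org']
```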

Getting Structured Data from the Internet also includes a step-by-step tutorial on deploying your own crawlers using a production web scraping framework (such as Scrapy) and dealing with real-world issues (such as breaking CAPTCHAs, proxy IP rotation, and more). Code used in the book is provided to help you understand the concepts in practice and write your own web crawler to power your business ideas.


What You Will Learn

  • Understand web scraping, its applications and uses, and how to avoid web scraping altogether by hitting publicly available REST API endpoints to get data directly
  • Develop a web scraper and crawler from scratch using the lxml and BeautifulSoup libraries, and learn about scraping from JavaScript-enabled pages using Selenium
  • Use AWS-based cloud computing with EC2, S3, Athena, SQS, and SNS to analyze, extract, and store useful insights from crawled pages
  • Use SQL on PostgreSQL running on Amazon Relational Database Service (RDS) and on SQLite using SQLAlchemy
  • Review scikit-learn, Gensim, and spaCy to perform NLP tasks on scraped web pages, such as named entity recognition, topic clustering (k-means, agglomerative clustering), topic modeling (LDA, NMF, LSI), topic classification (naive Bayes, gradient boosting classifier), and text similarity (cosine distance-based nearest neighbors)
  • Handle web archival file formats and explore Common Crawl open data on AWS
  • Illustrate practical applications for web crawl data by building a similar-websites tool and a technology profiler similar to builtwith.com
  • Write scripts to create a backlinks database on a web scale similar to Ahrefs.com, Moz.com, Majestic.com, etc., for search engine optimization (SEO), competitor research, and determining website domain authority and ranking
  • Use web crawl data to build a news sentiment analysis system or an alternative financial analysis system covering stock market trading signals
  • Write a production-ready crawler in Python using the Scrapy framework and deal with practical workarounds for CAPTCHAs, IP rotation, and more
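The text-similarity item in the list above can be sketched in a few lines with bag-of-words counts and only the standard library (the book uses scikit-learn, Gensim, and spaCy for this); the sample sentences are made up.

```python
# Back-of-the-envelope cosine similarity on word-count vectors, using
# only the standard library. Sample documents are hypothetical.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts on bag-of-words count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

doc1 = "web scraping extracts structured data"
doc2 = "web crawling collects structured data"
doc3 = "stock market trading signals"
print(cosine_similarity(doc1, doc2))  # high: shared vocabulary
print(cosine_similarity(doc1, doc3))  # 0.0: no words in common
```

A nearest-neighbors similarity search, as mentioned in the bullet, amounts to computing this score between a query document and every candidate and keeping the top matches; production systems use TF-IDF weighting and vectorized math instead of raw counts.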


Who This Book Is For

The primary audience is data analysts and scientists with little to no exposure to real-world data-processing challenges. The secondary audience is experienced software developers doing web-heavy data processing who need a primer. The tertiary audience is business owners and startup founders who need to know more about implementation to better direct their technical teams.



Author: Jay M. Patel
Publisher: Apress
Published: 11/13/2020
Pages: 397
Binding Type: Paperback
Weight: 1.60lbs
Size: 10.00h x 7.00w x 0.86d
ISBN: 9781484265758

About the Author
Jay M. Patel is a software developer with over 10 years of experience in data mining, web crawling/scraping, machine learning, and natural language processing (NLP) projects. He is a co-founder and principal data scientist of Specrom Analytics, providing content, email, social marketing, and social listening products and services using web crawling/scraping and advanced text mining.

Jay worked at the US Environmental Protection Agency (EPA) for five years, where he designed workflows to crawl and extract useful insights from hundreds of thousands of documents that were part of regulatory filings from companies. He also led one of the first research teams within the agency to use Apache Spark-based workflows for cheminformatics and bioinformatics applications such as chemical similarity and quantitative structure-activity relationships. He developed recurrent neural networks and more advanced LSTM models in TensorFlow for chemical SMILES generation.

Jay graduated with a bachelor's degree in engineering from the Institute of Chemical Technology, University of Mumbai, India, and a Master of Science degree from the University of Georgia, USA. Jay serves as an editor of a publication titled Web Data Extraction and also blogs about personal projects, open source packages, and experiences as a startup founder on his personal site, jaympatel.com.


Returns Policy

You may return most new, unopened items within 30 days of delivery for a full refund. We'll also pay the return shipping costs if the return is a result of our error (you received an incorrect or defective item, etc.).

You should expect to receive your refund within four weeks of giving your package to the return shipper; however, in many cases you will receive a refund more quickly. This time period includes the transit time for us to receive your return from the shipper (5 to 10 business days), the time it takes us to process your return once we receive it (3 to 5 business days), and the time it takes your bank to process our refund request (5 to 10 business days).

If you need to return an item, simply log in to your account, view the order using the "Complete Orders" link under the My Account menu, and click the Return Item(s) button. We'll notify you via e-mail of your refund once we've received and processed the returned item.

Shipping

We can ship to virtually any address in the world. Note that there are restrictions on some products, and some products cannot be shipped to international destinations.

When you place an order, we will estimate shipping and delivery dates for you based on the availability of your items and the shipping options you choose. Depending on the shipping provider you choose, shipping date estimates may appear on the shipping quotes page.

Please also note that the shipping rates for many items we sell are weight-based. The weight of any such item can be found on its detail page. To reflect the policies of the shipping companies we use, all weights will be rounded up to the next full pound.