Snowplow (software)

Snowplow is a marketing and product analytics platform.

Description

According to the official website, Snowplow does three things:

  • Identifies website users and tracks how they engage with a website or web application (a minimal tracking sketch follows this list);
  • Stores users' behavioral data in a scalable "event data warehouse" you control: in Amazon S3 and (optionally) Amazon Redshift or Postgres;
  • Enables analysis of that behavioral data with a wide range of tools, from big data tools (e.g. Hive, Pig, Mahout) running on EMR to more traditional tools (e.g. Tableau, R, Looker, Chartio).
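
For illustration, the sketch below sends a single page-view event to a collector over HTTP. The collector hostname is a placeholder, and the /i endpoint and parameter names are assumptions based on the Snowplow Tracker Protocol; use the official trackers in practice.

  # Minimal page-view tracking sketch (illustrative only).
  import urllib.parse
  import urllib.request

  COLLECTOR = "collector.example.com"  # hypothetical collector host

  def track_page_view(page_url, page_title, app_id):
      params = urllib.parse.urlencode({
          "e": "pv",           # event type: page view
          "url": page_url,     # page URL
          "page": page_title,  # page title
          "aid": app_id,       # application id
          "p": "web",          # platform
      })
      # The collector logs this request; the rest of the pipeline
      # parses those logs.
      urllib.request.urlopen(f"http://{COLLECTOR}/i?{params}")

  track_page_view("http://example.com/", "Home", "my-site")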

Core concepts

Snowplow is built around the following core concepts (a sketch of a self-describing event follows the list):

  • Events
  • Dictionaries and schemas
  • Contexts
  • Iglu
  • Stages in the Snowplow data pipeline
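
Several of these concepts meet in Snowplow's self-describing JSON: each event or context pairs an Iglu schema URI with its data, and Iglu resolves that URI to a schema for validation. A minimal sketch, with a hypothetical vendor (com.acme) and hypothetical fields:

  # A self-describing event: schema URI plus data.
  event = {
      "schema": "iglu:com.acme/ad_click/jsonschema/1-0-0",
      "data": {"clickId": "abc123", "bannerId": "top-banner"},
  }

  # A context attaches extra information (here, a hypothetical user
  # context) to any event, using the same schema-plus-data shape.
  context = {
      "schema": "iglu:com.acme/user/jsonschema/1-0-0",
      "data": {"plan": "premium", "signedUp": "2016-08-22"},
  }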

Setting up Snowplow

Setting up Snowplow consists of four steps (a toy end-to-end sketch follows the list):

  1. Set up a collector;
  2. Set up a tracker or webhook;
  3. Set up Enrich;
  4. Set up alternative data stores.
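
To show how the four pieces relate, here is a toy, in-process sketch of the pipeline; every function is a hypothetical stand-in for a separate Snowplow component, not the real implementation:

  from datetime import datetime, timezone

  raw_log = []    # stands in for the collector's log files
  warehouse = []  # stands in for S3 / Redshift / Postgres

  def collect(event):
      # Step 1: the collector receives events and logs them.
      raw_log.append(event)

  def track(page_url):
      # Step 2: a tracker (or webhook) sends events to the collector.
      collect({"e": "pv", "url": page_url})

  def enrich(event):
      # Step 3: enrichment cleans the event and adds derived fields.
      return {**event, "collector_tstamp": datetime.now(timezone.utc).isoformat()}

  def store():
      # Step 4: load the enriched events into a data store.
      warehouse.extend(enrich(e) for e in raw_log)

  track("http://example.com/")
  store()
  print(warehouse)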

Iglu repository

An Iglu repository acts as a store of data schemas (currently JSON Schemas only).

Hosting JSON Schemas in an Iglu repository allows you to use those schemas in Iglu-capable systems such as Snowplow.
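
As a sketch of what such a schema looks like: an Iglu-hosted schema is a JSON Schema carrying a "self" block that identifies its vendor, name, format and version; this is what an Iglu URI such as iglu:com.acme/ad_click/jsonschema/1-0-0 resolves to. The vendor and fields below are hypothetical:

  # Sketch of a JSON Schema as stored in an Iglu repository.
  ad_click_schema = {
      "$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
      "self": {
          "vendor": "com.acme",      # hypothetical vendor
          "name": "ad_click",        # hypothetical event name
          "format": "jsonschema",
          "version": "1-0-0",
      },
      "type": "object",
      "properties": {
          "clickId": {"type": "string"},
          "bannerId": {"type": "string"},
      },
      "required": ["clickId"],
      "additionalProperties": False,
  }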

Enrich applications

A Snowplow Enrich application processes data from a Snowplow Collector and stores the enriched data in a persistent database.

There are currently two Enrichment processes available for setup:

  • EmrEtlRunner: an application that parses logs from a Collector and stores enriched events in S3
  • Stream Enrich: a Scala application that reads Thrift events from a Kinesis stream and writes enriched events back to a Kinesis stream

EmrEtlRunner

Snowplow EmrEtlRunner is an application that parses the log files generated by your Snowplow collector and:

  • Cleans up the data into a format that is easier to parse and analyse;
  • Enriches the data (e.g. infers the visitor's location from their IP address and infers search engine keywords from the referrer query string), as sketched below;
  • Stores the cleaned, enriched data in S3.
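
EmrEtlRunner itself runs these steps as jobs on Amazon EMR; the sketch below only illustrates the two enrichments named above, with a hypothetical lookup table standing in for a real GeoIP database:

  from urllib.parse import urlparse, parse_qs

  # Hypothetical IP-to-location table; real pipelines use a GeoIP database.
  GEO = {"203.0.113.7": "Sydney, AU"}

  def enrich(event):
      enriched = dict(event)  # keep the cleaned raw fields
      # Infer the visitor's location from their IP address.
      enriched["geo"] = GEO.get(event.get("ip"), "unknown")
      # Infer search keywords from the referrer's query string.
      ref = urlparse(event.get("refr", ""))
      if "google." in ref.netloc:
          enriched["keywords"] = parse_qs(ref.query).get("q", [""])[0]
      return enriched

  print(enrich({
      "ip": "203.0.113.7",
      "refr": "http://www.google.com/search?q=snowplow+analytics",
  }))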

See:

  • emr-etl-runner: https://github.com/snowplow/snowplow/tree/master/3-enrich/emr-etl-runner
  • Setting up EmrEtlRunner: https://github.com/snowplow/snowplow/wiki/setting-up-EmrEtlRunner

Discourse forums

See:

http://discourse.snowplowanalytics.com/users/karl_jones/activity

See also

External links