ETL Pipeline with AWS Elastic MapReduce (EMR)

This project shows the steps for analyzing data for a hypothetical startup that collects songs and user activity from its new music streaming app.

The startup's analytics team is particularly interested in understanding what songs users are listening to. Currently, they don't have an easy way to query their data, which resides in a directory of JSON logs of user activity on the app and a directory of JSON metadata on the songs in their catalog.

Song Dataset

The first dataset is a subset of real data from the Million Song Dataset. Each file is in JSON format and contains metadata about a song and the artist of that song. The files are partitioned by the first three letters of each song's track ID. For example, here are filepaths to two files in this dataset.

song_data/A/B/C/TRABCEI128F424C983.json
song_data/A/A/B/TRAABJL12903CDCF1A.json

And below is an example of what a single song file, TRAABJL12903CDCF1A.json, looks like.

{
    "num_songs": 1,
    "artist_id": "ARJIE2Y1187B994AB7",
    "artist_latitude": null,
    "artist_longitude": null,
    "artist_location": "",
    "artist_name": "Line Renaud",
    "song_id": "ABCPIRU12A6D4FA1E1",
    "title": "Der Kleine Dompfaff",
    "duration": 152.92036,
    "year": 0
}
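
Below is a minimal PySpark sketch for reading these song files with an explicit schema. The bucket name is a placeholder, and the wildcard pattern covers the three-letter partition folders.

from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, IntegerType)

spark = SparkSession.builder.appName("sparkify-etl").getOrCreate()

# Explicit schema matching the song JSON shown above
song_schema = StructType([
    StructField("num_songs", IntegerType()),
    StructField("artist_id", StringType()),
    StructField("artist_latitude", DoubleType()),
    StructField("artist_longitude", DoubleType()),
    StructField("artist_location", StringType()),
    StructField("artist_name", StringType()),
    StructField("song_id", StringType()),
    StructField("title", StringType()),
    StructField("duration", DoubleType()),
    StructField("year", IntegerType()),
])

# "<bucket>" is a placeholder; the wildcards walk the A/B/C partition folders
song_df = spark.read.json("s3a://<bucket>/song_data/*/*/*/*.json",
                          schema=song_schema)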

Log Dataset

The second dataset consists of log files in JSON format generated by an event simulator based on the songs in the dataset above. These simulate activity logs from a music streaming app based on specified configurations.

The log files in the dataset you'll be working with are partitioned by year and month. For example, here are filepaths to two files in this dataset.

log_data/2018/11/2018-11-12-events.json
log_data/2018/11/2018-11-13-events.json
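
A corresponding sketch for reading the log files (placeholder bucket name again); only NextSong page events represent song plays, so the rest are filtered out here for later use.

from pyspark.sql import functions as F

# Log files are nested two partition levels deep (year/month)
log_df = spark.read.json("s3a://<bucket>/log_data/*/*/*.json")

# Keep only song-play events
log_df = log_df.filter(F.col("page") == "NextSong")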

Schema for Song Play Analysis

Using the song and log datasets, you'll need to create a star schema optimized for queries on song play analysis. This includes the following tables.

(Diagram: star database schema)

Fact Tables

songplays - records in the log data associated with song plays, i.e. records with page = NextSong

  • songplay_id (INT) PRIMARY KEY: ID of each user song play
  • start_time (DATE) NOT NULL: Timestamp of beginning of user activity
  • user_id (INT) NOT NULL: ID of user
  • level (TEXT): User level {free | paid}
  • song_id (TEXT) NOT NULL: ID of Song played
  • artist_id (TEXT) NOT NULL: ID of Artist of the song played
  • session_id (INT): ID of the user Session
  • location (TEXT): User location
  • user_agent (TEXT): Agent used by user to access the platform
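
As a rough sketch (reusing song_df and log_df from the snippets above, and assuming the raw log field names ts, song, artist, length, userId, sessionId and userAgent), the fact table can be assembled with a join on song title, artist name and duration:

from pyspark.sql import functions as F

# ts in the logs is epoch milliseconds; convert it to a timestamp
plays = log_df.withColumn("start_time",
                          (F.col("ts") / 1000).cast("timestamp"))

songplays_table = (
    plays.join(song_df,
               (plays.song == song_df.title) &
               (plays.artist == song_df.artist_name) &
               (plays.length == song_df.duration),
               "left")
         .select(F.monotonically_increasing_id().alias("songplay_id"),
                 "start_time",
                 F.col("userId").cast("int").alias("user_id"),
                 "level",
                 "song_id",
                 "artist_id",
                 F.col("sessionId").alias("session_id"),
                 "location",
                 F.col("userAgent").alias("user_agent"))
)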

Dimension Tables

users - users in the app

  • user_id (INT) PRIMARY KEY: ID of user
  • first_name (TEXT) NOT NULL: First name of user
  • last_name (TEXT) NOT NULL: Last name of user
  • gender (TEXT): Gender of user {M | F}
  • level (TEXT): User level {free | paid}
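
A possible extraction of the users table from log_df, assuming the raw log field names userId, firstName and lastName:

users_table = (
    log_df.select(F.col("userId").cast("int").alias("user_id"),
                  F.col("firstName").alias("first_name"),
                  F.col("lastName").alias("last_name"),
                  "gender",
                  "level")
          .where(F.col("userId") != "")
          .dropDuplicates(["user_id"])
)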

songs - songs in music database

  • song_id (TEXT) PRIMARY KEY: ID of Song
  • title (TEXT) NOT NULL: Title of Song
  • artist_id (TEXT) NOT NULL: ID of song Artist
  • year (INT): Year of song release
  • duration (FLOAT) NOT NULL: Song duration in seconds
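
The songs table can be selected directly from song_df, for example:

songs_table = (
    song_df.select("song_id", "title", "artist_id", "year", "duration")
           .dropDuplicates(["song_id"])
)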

artists - artists in music database

  • artist_id (TEXT) PRIMARY KEY: ID of Artist
  • name (TEXT) NOT NULL: Name of Artist
  • location (TEXT): Name of Artist city
  • latitude (FLOAT): Latitude location of artist
  • longitude (FLOAT): Longitude location of artist
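
Likewise for artists, renaming the raw song-file columns to match the schema above:

artists_table = (
    song_df.select("artist_id",
                   F.col("artist_name").alias("name"),
                   F.col("artist_location").alias("location"),
                   F.col("artist_latitude").alias("latitude"),
                   F.col("artist_longitude").alias("longitude"))
           .dropDuplicates(["artist_id"])
)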

time - timestamps of records in songplays broken down into specific units

  • start_time (DATE) PRIMARY KEY: Timestamp of row
  • hour (INT): Hour associated to start_time
  • day (INT): Day associated to start_time
  • week (INT): Week of year associated to start_time
  • month (INT): Month associated to start_time
  • year (INT): Year associated to start_time
  • weekday (TEXT): Name of week day associated to start_time
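
One way to derive the time table from the start_time column created in the fact-table sketch above (the plays dataframe):

time_table = (
    plays.select("start_time")
         .dropDuplicates()
         .withColumn("hour", F.hour("start_time"))
         .withColumn("day", F.dayofmonth("start_time"))
         .withColumn("week", F.weekofyear("start_time"))
         .withColumn("month", F.month("start_time"))
         .withColumn("year", F.year("start_time"))
         .withColumn("weekday", F.date_format("start_time", "EEEE"))
)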

ETL Pipeline

DataLakeS3-EMR.ipynb shows the steps performed on an EMR cluster to create the data lake.

etl.py consolidates the main steps from the Jupyter notebook mentioned above into a standalone script that builds the data lake on S3 with Spark.

The pipeline performs the following steps (a sketch of the flow follows below):

  • Load the AWS credentials from dl.cfg
  • Load the raw JSON data (song and log files) from the S3 bucket
  • Use Spark to generate the fact and dimension tables from the raw data
  • Write the processed tables back to an S3 bucket in Parquet (columnar binary) format
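
A minimal sketch of how these steps might hang together in etl.py. The dl.cfg section and key names, the bucket paths and the partitioning columns are illustrative assumptions, not the script's confirmed contents.

import configparser
import os
from pyspark.sql import SparkSession

# Load AWS credentials from dl.cfg (section and key names assumed here)
config = configparser.ConfigParser()
config.read("dl.cfg")
os.environ["AWS_ACCESS_KEY_ID"] = config["AWS"]["AWS_ACCESS_KEY_ID"]
os.environ["AWS_SECRET_ACCESS_KEY"] = config["AWS"]["AWS_SECRET_ACCESS_KEY"]

spark = SparkSession.builder.appName("sparkify-etl").getOrCreate()

input_path = "s3a://<input-bucket>/"    # placeholder
output_path = "s3a://<output-bucket>/"  # placeholder

# ... read song_data and log_data, then build the five tables as sketched above ...

# Write the processed tables back to S3 as Parquet; partitioning songs by
# year/artist_id and time by year/month is a common choice for this layout
songs_table.write.mode("overwrite") \
    .partitionBy("year", "artist_id") \
    .parquet(output_path + "songs/")

time_table.write.mode("overwrite") \
    .partitionBy("year", "month") \
    .parquet(output_path + "time/")

# ... and similarly for users, artists and songplays ...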