CCA175 Exam Prep - Database Creation - Storing Metadata in Hive Metastore

Published: 01 October 2020
on channel: Verulam Blue

CCA175 Exam Prep - Database Creation - Storing Metadata in Hive Metastore| Prep for Cloudera’s Spark & Hadoop Developer Exam

Udemy discount code for channel subscribers:

CCA 175 Practice Tests (With Spark 2.4 Hadoop Cluster VM):

https://www.udemy.com/course/cca175-p...

CCA 175 Exam Prep Questions Part A ETL Focus (With Spark 2.4 Hadoop Cluster VM):

https://www.udemy.com/course/cca-175-...

CCA 175 Exam Prep Questions Part B Data Analysis Focus (With Spark 2.4 Hadoop Cluster VM):

https://www.udemy.com/course/cca175-e...


In this video we’ll use CSV data to create a database with its metadata stored in the Hive Metastore. This is useful for CCA175 exam preparation because it lets us practise data analysis on large data sets that are close in size to those found in the exam.

Music: https://www.bensound.com

kaggle - MovieLens 20M Dataset: https://www.kaggle.com/grouplens/movi...

/// Code snippets
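The Spark snippets below are meant to be run in spark-shell, which already provides a `spark` session with Hive support enabled on a Cloudera cluster. If you build the session yourself instead, a minimal sketch (the app name is arbitrary):

```scala
import org.apache.spark.sql.SparkSession

// enableHiveSupport() is what makes saveAsTable persist table metadata
// in the Hive Metastore rather than Spark's default in-memory catalog
val spark = SparkSession.builder()
  .appName("movie-db-demo")
  .enableHiveSupport()
  .getOrCreate()
```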

///////////// Creating directories in HDFS for CSV data sets

hdfs dfs -mkdir -p /verulam_blue_demo/genome-scores/
hdfs dfs -mkdir -p /verulam_blue_demo/genome-tags/
hdfs dfs -mkdir -p /verulam_blue_demo/link/
hdfs dfs -mkdir -p /verulam_blue_demo/movie/
hdfs dfs -mkdir -p /verulam_blue_demo/rating/
hdfs dfs -mkdir -p /verulam_blue_demo/tag/

///////////// Copying data from LFS to HDFS

hdfs dfs -put /home/verulam-blue/Downloads/archive/genome_scores.csv /verulam_blue_demo/genome-scores/
hdfs dfs -put /home/verulam-blue/Downloads/archive/genome_tags.csv /verulam_blue_demo/genome-tags/
hdfs dfs -put /home/verulam-blue/Downloads/archive/link.csv /verulam_blue_demo/link/
hdfs dfs -put /home/verulam-blue/Downloads/archive/movie.csv /verulam_blue_demo/movie/
hdfs dfs -put /home/verulam-blue/Downloads/archive/rating.csv /verulam_blue_demo/rating/
hdfs dfs -put /home/verulam-blue/Downloads/archive/tag.csv /verulam_blue_demo/tag/
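Before defining tables on top of these directories, it's worth confirming the files actually landed in HDFS. One way to do this from the same spark-shell session is via the Hadoop FileSystem API (directory names as above):

```scala
import org.apache.hadoop.fs.{FileSystem, Path}

// Reuse the Hadoop configuration already attached to the Spark session
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)

// Print the size and path of every file uploaded under the demo root
Seq("genome-scores", "genome-tags", "link", "movie", "rating", "tag").foreach { dir =>
  fs.listStatus(new Path(s"/verulam_blue_demo/$dir/"))
    .foreach(f => println(f"${f.getLen}%12d bytes  ${f.getPath}"))
}
```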

///////////// Create a Database

spark.sql("CREATE DATABASE demo_movie_db")

///////////// Read CSV data into a DataFrame | Writing out DataFrames to Tables

// Table setup: “genome_scores”

spark.sql("DROP TABLE IF EXISTS demo_movie_db.genome_scores")

val df_genome_scores = spark.read.format("csv")
.option("header", "true")
.schema("movieId int, tagId int, relevance float")
.load("/verulam_blue_demo/genome-scores/")

df_genome_scores.write.format("parquet")
.mode("overwrite")
.option("compression", "gzip")
.saveAsTable("demo_movie_db.genome_scores")

spark.sql("SELECT * FROM demo_movie_db.genome_scores").show(5)

// Table setup: “genome_tags”

spark.sql("DROP TABLE IF EXISTS demo_movie_db.genome_tags")

val df_genome_tags = spark.read.format("csv")
.option("header", "true")
.schema("tagId int, tag string")
.load("/verulam_blue_demo/genome-tags/")

df_genome_tags.write.format("parquet")
.mode("overwrite")
.option("compression", "gzip")
.saveAsTable("demo_movie_db.genome_tags")

spark.sql("SELECT * FROM demo_movie_db.genome_tags").show(5)

// Table setup: “link”

spark.sql("DROP TABLE IF EXISTS demo_movie_db.link")

val df_link = spark.read.format("csv")
.option("header", "true")
.schema("movieId int, imdbId int, tmdbId int")
.load("/verulam_blue_demo/link/")

df_link.write.format("parquet")
.mode("overwrite")
.option("compression", "gzip")
.saveAsTable("demo_movie_db.link")

spark.sql("SELECT * FROM demo_movie_db.link").show(5)

// Table setup: “movie”

spark.sql("DROP TABLE IF EXISTS demo_movie_db.movie")

val df_movie = spark.read.format("csv")
.option("header", "true")
.schema("movieId int, title string, genres string")
.load("/verulam_blue_demo/movie/")

df_movie.write.format("parquet")
.mode("overwrite")
.option("compression", "gzip")
.saveAsTable("demo_movie_db.movie")

spark.sql("SELECT * FROM demo_movie_db.movie").show(5)


// Table setup: “rating”

spark.sql("DROP TABLE IF EXISTS demo_movie_db.rating")

val df_rating = spark.read.format("csv")
.option("header", "true")
.schema("userId int, movieId int, rating float, timestamp timestamp")
.load("/verulam_blue_demo/rating/")

df_rating.write.format("parquet")
.mode("overwrite")
.option("compression", "gzip")
.saveAsTable("demo_movie_db.rating")

spark.sql("SELECT * FROM demo_movie_db.rating").show(5)


// Table setup: “tag”

spark.sql("DROP TABLE IF EXISTS demo_movie_db.tag")

val df_tag = spark.read.format("csv")
.option("header", "true")
.schema("userId int, movieId int, tag string, timestamp timestamp")
.load("/verulam_blue_demo/tag/")

df_tag.write.format("parquet")
.mode("overwrite")
.option("compression", "gzip")
.saveAsTable("demo_movie_db.tag")

spark.sql("SELECT * FROM demo_movie_db.tag").show(5)


// Final check & confirm

spark.sql("USE demo_movie_db")

spark.sql("SHOW TABLES")