This course introduces Hadoop and Spark clusters to newcomers. We will cover the basic concepts of the MapReduce programming model, the major components of a Hadoop cluster, and how to get started with Hadoop both on your own computer and with computing resources at TACC. We will also introduce the Spark programming model, how Spark can be used in conjunction with a Hadoop cluster, and different ways to use Hadoop and Spark for your analysis. During the course, participants can explore a Hadoop cluster and work through prepared exercises. Since this is an introductory course, no particular programming background is needed to attend, but working knowledge of the Linux operating system is required. Participants who are new to Hadoop, Spark, or TACC resources are strongly encouraged to attend.
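To give a flavor of the MapReduce model mentioned above, here is a toy word-count sketch in plain Python (an illustration of the map/shuffle/reduce phases only, not Hadoop or Spark itself; all function names are hypothetical):

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as a Hadoop cluster does
    # between the map and reduce stages
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the partial counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 2
```

In a real Hadoop or Spark job the map and reduce functions run in parallel across many machines, and the framework handles the shuffle; the course exercises explore this on an actual cluster.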