(Ubuntu) Install and use Docker

The system used in this article is Ubuntu

1. Docker installation

  • Open a terminal on the Ubuntu host

If apt reports that a resource is locked while you are installing a package, wait for the other package-manager process to finish or restart the system and try again (a quick check is sketched below)
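A quick way to see which process is holding the lock before restarting (this assumes the lsof tool is installed; the lock file path below is the standard one on recent Ubuntu releases):
sudo lsof /var/lib/dpkg/lock-frontend   # shows the PID of the process holding the apt/dpkg lock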

  • Install the required component:sudo apt install curl

1. The image download is fairly large, so make sure you have a stable network connection
2. The --mirror Aliyun option tells the installation script to use Alibaba Cloud's mirror

  • Download and run the installation script:sudo curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

  • Check version information after downloading:docker -v
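As an optional sanity check (not part of the original steps), you can confirm that the Docker daemon is running before pulling any images; these are standard Docker and systemd commands:
sudo systemctl status docker   # the service should be active (running)
sudo docker info               # prints daemon details if the installation succeeded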

2. Create a container

  • Pull the image:sudo docker pull centos:7

  • Create and run the container (the -p options publish the service ports listed below to the host):
sudo docker run -itd --privileged --name singleNode -h singleNode \
-p 2222:22 \
-p 3306:3306 \
-p 50070:50070 \
-p 8088:8088 \
-p 8080:8080 \
-p 10000:10000 \
-p 60010:60010 \
-p 9092:9092 \
centos:7 /usr/sbin/init

  • Enter the container:sudo docker exec -it singleNode /bin/bash

  • You are now inside the container (an optional check from the host is sketched below)
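An optional check from the Ubuntu host, using standard Docker commands and assuming the container was named singleNode as above:
sudo docker ps --filter "name=singleNode"   # the container should show STATUS Up
sudo docker port singleNode                 # lists the port mappings declared with -p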

3. Set up the base environment in the container

  • Install basic components
yum clean all
yum -y install unzip bzip2-devel vim bashname
  • Set up passwordless SSH login (a quick verification is sketched after this step)
yum install -y openssh openssh-server openssh-clients openssl openssl-devel 
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P '' 
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

systemctl start sshd
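A minimal verification of the passwordless login, assuming sshd started successfully; the chmod line is not in the original but is commonly needed if the key files were created with loose permissions:
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
ssh -o StrictHostKeyChecking=no localhost hostname   # should print the hostname without asking for a password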
  • Set the time zone
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

  • If there is a firewall, you need to close it:
systemctl stop firewalld
systemctl disable firewalld
  • Create a folder:

mkdir -p /opt/install
  • Exit the container:exit
  • On the Ubuntu host, download the big data component packages into a directory named software (here /home/zy/software)
  • Copy the component packages from the host into the container
sudo docker cp /home/zy/software/ singleNode:/opt/
  • Re-enter the container
sudo docker exec -it singleNode /bin/bash
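Back inside the container, an optional check that the packages arrived (the /opt/software path follows from the docker cp command above):
ls /opt/software   # should list the downloaded component archives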

Install MySQL

  • Enter the directory containing the installation packages
cd /opt/software
  • Extract the package (the bundle is a plain tar archive, so no z flag is needed)
tar xvf MySQL-5.5.40-1.linux2.6.x86_64.rpm-bundle.tar -C /opt/install
  • Install dependencies
yum -y install libaio perl
  • Install the server and the client

cd /opt/install

rpm -ivh MySQL-server-5.5.40-1.linux2.6.x86_64.rpm

rpm -ivh MySQL-client-5.5.40-1.linux2.6.x86_64.rpm 
  • Start and configure MySQL

systemctl start mysql

/usr/bin/mysqladmin -u root password 'root'

mysql -uroot -proot 

> update mysql.user set host='%' where host='localhost';
> delete from mysql.user where host<>'%' or user='';
> flush privileges;

quit
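An optional sanity check that the remote-access change took effect, using the standard mysql client flags:
systemctl status mysql                                       # the service should be active
mysql -uroot -proot -e "SELECT host, user FROM mysql.user;"  # the host column should now show %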

Install the JDK

  • Extract the package
tar zxvf /opt/software/jdk-8u171-linux-x64.tar.gz -C /opt/install/
  • Create a soft link

ln -s /opt/install/jdk1.8.0_171 /opt/install/java
  • Configure the environment variables:
vi /etc/profile

export JAVA_HOME=/opt/install/java
export PATH=$JAVA_HOME/bin:$PATH
  • Apply the configuration:
source /etc/profile
  • View Java version:
java -version
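An optional check (not in the original) that the java on the PATH really resolves through the soft link created above, using standard shell and coreutils commands:
command -v java                     # should print /opt/install/java/bin/java
readlink -f "$(command -v java)"    # should resolve into the jdk1.8.0_171 directory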

Install Hadoop

  • Extract the package
tar zxvf /opt/software/hadoop-2.6.0-cdh5.14.2.tar_2.gz -C /opt/install/
  • Create a soft link
ln -s /opt/install/hadoop-2.6.0-cdh5.14.2 /opt/install/hadoop
  • Configure core-site.xml (the configuration files are under /opt/install/hadoop/etc/hadoop)
cd /opt/install/hadoop/etc/hadoop
vi core-site.xml
-------------------------------------------
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://singleNode:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/install/hadoop/data/tmp</value>
  </property>
</configuration>
-------------------------------------------
  • Configure hdfs-site.xml
vi hdfs-site.xml
-------------------------------------------
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
-------------------------------------------
  • Set up mapred-site.xml (Hadoop only reads mapred-site.xml, so create it from the template first)
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
-------------------------------------------
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>singleNode:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>singleNode:19888</value>
  </property>
</configuration>
-------------------------------------------
  • Setting yarn-site.xml
vi yarn-site.xml
-------------------------------------------
<configuration>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>singleNode</value>
	</property>
	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.log-aggregation.retain-seconds</name>
		<value>604800</value>
	</property>
</configuration>
-------------------------------------------
  • Set up hadoop-env.sh
vi hadoop-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------
  • Set up mapred-env.sh
vi mapred-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------
  • Set up yarn-env.sh
vi yarn-env.sh
-------------------------------------------
export JAVA_HOME=/opt/install/java
-------------------------------------------
  • Prepare the slaves file (a sketch follows after this step) and add the Hadoop environment variables to /etc/profile:
vi /etc/profile
export HADOOP_HOME=/opt/install/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
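The slaves step above is listed without its command in the original. A minimal sketch, assuming the default Hadoop 2.6.0 (CDH 5.14.2) layout in which etc/hadoop/slaves names the DataNode hosts; for this single-node container it only needs to contain singleNode:
echo "singleNode" > /opt/install/hadoop/etc/hadoop/slaves
source /etc/profile   # apply the exports added above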
  • Format HDFS
hdfs namenode -format
  • Start the Hadoop service
start-all.sh
  • View the HDFS web UI from a browser on the host:

192.168.**.**:50070
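Before moving on to Hive, an optional check that the daemons came up (jps ships with the JDK, dfsadmin with Hadoop); on this single-node layout you would expect NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager:
jps                     # lists the running Java daemons
hdfs dfsadmin -report   # should report one live datanode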


Install Hive

  • Extract the package
tar zxvf /opt/software/hive-1.1.0-cdh5.14.2.tar.gz -C /opt/install/
  • Create a soft link
ln -s /opt/install/hive-1.1.0-cdh5.14.2 /opt/install/hive
  • Go to the configuration directory:

cd /opt/install/hive/conf/
  • Edit hive-site.xml
vi hive-site.xml
-------------------------------------------
<configuration>
	<property>
		<name>hive.metastore.warehouse.dir</name>
		<value>/home/hadoop/hive/warehouse</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionURL</name>
		<value>jdbc:mysql://singleNode:3306/hive?createDatabaseIfNotExist=true</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionDriverName</name>
		<value>com.mysql.jdbc.Driver</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionUserName</name>
		<value>root</value>
	</property>
	<property>
		<name>javax.jdo.option.ConnectionPassword</name>
		<value>root</value>
	</property>
	<property>
		<name>hive.exec.scratchdir</name>
		<value>/home/hadoop/hive/data/hive-${user.name}</value>
		<description>Scratch space for Hive jobs</description>
	</property>

	<property>
		<name>hive.exec.local.scratchdir</name>
		<value>/home/hadoop/hive/data/${user.name}</value>
		<description>Local scratch space for Hive jobs</description>
	</property>
</configuration>
-------------------------------------------
  • Create hive-env.sh from the template (Hive does not read the .template file) and edit it
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
-------------------------------------------
HADOOP_HOME=/opt/install/hadoop
-------------------------------------------
  • Add the MySQL JDBC driver
cp /opt/software/mysql-connector-java-5.1.31.jar /opt/install/hive/lib/
  • Add environment variables
vi /etc/profile

export HIVE_HOME=/opt/install/hive
export PATH=$HIVE_HOME/bin:$PATH
  • Start the service
nohup hive --service metastore &
nohup hive --service hiveserver2 &
  • Check the running processes:jps
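An optional check that HiveServer2 is reachable, using the beeline client bundled with Hive (this assumes HiveServer2's default NONE authentication; -n just passes the user name):
jps | grep RunJar                                                      # the metastore and hiveserver2 each run as a RunJar process
beeline -u jdbc:hive2://singleNode:10000 -n root -e "show databases;"  # should list at least the default database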

Install Sqoop

  • Extract the package
tar zxvf /opt/software/sqoop-1.4.6-cdh5.14.2.tar.gz -C /opt/install/
  • Create a soft link
ln -s /opt/install/sqoop-1.4.6-cdh5.14.2 /opt/install/sqoop
  • Create sqoop-env.sh from the template and edit it
cd /opt/install/sqoop/conf/
cp sqoop-env-template.sh sqoop-env.sh
vi sqoop-env.sh
-------------------------------------------
#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/opt/install/hadoop

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/opt/install/hadoop

#Set the path to where bin/hive is available
export HIVE_HOME=/opt/install/hive
-------------------------------------------
  • Add the dependency packages
cp /opt/software/mysql-connector-java-5.1.31.jar /opt/install/sqoop/lib/
cp /opt/software/java-json.jar /opt/install/sqoop/lib/
  • Add environment variables
vi /etc/profile

export SQOOP_HOME=/opt/install/sqoop
export PATH=$SQOOP_HOME/bin:$PATH
  • View version
sqoop version
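As a final, optional end-to-end check that Sqoop can reach the MySQL instance configured earlier (list-databases is a standard Sqoop tool; the root/root credentials are the account set up above):
sqoop list-databases --connect jdbc:mysql://singleNode:3306 --username root --password root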
