How to quickly configure CentOS HDFS
Deploying the Hadoop Distributed File System (HDFS) on a CentOS system takes several steps; the following guide walks through the configuration process in stand-alone mode. Full cluster deployment is more complex.
1. Java environment configuration
First, make sure Java is installed on the system. Install OpenJDK 8 using the following command:
yum install -y java-1.8.0-openjdk-devel
Configure Java environment variables:
echo "export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk" >> /etc/profile echo "export PATH=$JAVA_HOME/bin:$PATH" >> /etc/profile source /etc/profile java -version
2. SSH password-free login settings
Hadoop's control scripts use SSH to start and stop daemons, so SSH password-free login is required even on a single node.
- Generate an SSH key pair:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
- Copy the public key to all nodes. In a stand-alone setup there are no other nodes, but the local machine still needs password-free SSH access to itself, as shown below:
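A minimal sketch: append the public key to the local authorized_keys file, then verify that SSH to localhost no longer prompts for a password.

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost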
3. Hadoop download and decompression
Download the Hadoop distribution from the Apache Hadoop official website and extract it to the target directory (note that older releases may only be available from archive.apache.org):
wget https://downloads.apache.org/hadoop/core/hadoop-3.1.3/hadoop-3.1.3.tar.gz
tar -zxvf hadoop-3.1.3.tar.gz
mv hadoop-3.1.3 /opt/hadoop
4. Hadoop environment variable configuration
Edit the /etc/profile file, add the following environment variables, and reload it:
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
source /etc/profile
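Hadoop's own scripts do not always inherit JAVA_HOME from the login shell, so it is common to also set it explicitly in hadoop-env.sh. A minimal sketch, assuming the OpenJDK path from step 1:

echo 'export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk' >> /opt/hadoop/etc/hadoop/hadoop-env.sh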
5. Hadoop configuration file modification
core-site.xml
Edit /opt/hadoop/etc/hadoop/core-site.xml and add the following (replace 192.168.1.1 with your host IP):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.1:9000</value>
  </property>
</configuration>
hdfs-site.xml
Edit /opt/hadoop/etc/hadoop/hdfs-site.xml and add the following:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop/hdfs/datanode</value>
  </property>
</configuration>
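The paths given for dfs.namenode.name.dir and dfs.datanode.data.dir should exist before the NameNode is formatted; creating them up front avoids permission surprises. A minimal sketch:

mkdir -p /opt/hadoop/hdfs/namenode /opt/hadoop/hdfs/datanode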
6. NameNode formatting
Format the NameNode (this only needs to be done once; reformatting an existing NameNode destroys its metadata):
/opt/hadoop/bin/hdfs namenode -format
7. HDFS startup
Start the HDFS services:
/opt/hadoop/sbin/start-dfs.sh
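Note that Hadoop 3.x refuses to start daemons as root unless the operating users are declared. If you are running everything as root, as this guide's paths suggest, a hedged sketch is to declare them in hadoop-env.sh (a dedicated non-root user is the safer choice in production):

echo 'export HDFS_NAMENODE_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh
echo 'export HDFS_DATANODE_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh
echo 'export HDFS_SECONDARYNAMENODE_USER=root' >> /opt/hadoop/etc/hadoop/hadoop-env.sh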
8. HDFS status verification
Check that the HDFS daemons are running:
jps
You should see the NameNode, DataNode, and SecondaryNameNode processes. A quick functional check is shown below.
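For example, creating and listing a directory exercises the read/write path, and dfsadmin -report summarizes the state of the file system:

/opt/hadoop/bin/hdfs dfs -mkdir /test
/opt/hadoop/bin/hdfs dfs -ls /
/opt/hadoop/bin/hdfs dfsadmin -report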
9. HDFS Web UI Access
Visit http://192.168.1.1:9870 (replace 192.168.1.1 with your host IP) to view the HDFS web interface. Note that Hadoop 3.x serves the NameNode web UI on port 9870; the old port 50070 only applies to Hadoop 2.x.
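If the page is unreachable from another machine, the CentOS firewall is a likely cause. Assuming firewalld is in use (the CentOS 7 default), the port can be opened like this:

firewall-cmd --permanent --add-port=9870/tcp
firewall-cmd --reload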
This guide covers stand-alone HDFS configuration only. Multi-node cluster deployment requires additional configuration, such as a Secondary NameNode on a separate host and ZooKeeper for high availability, and the configuration files on all nodes must be kept consistent.