Centralized Log Server with ELK Stack on Ubuntu
A centralized log server helps you collect, process, and analyze logs from multiple systems in one place. This tutorial covers setting up an ELK (Elasticsearch, Logstash, Kibana) stack for log management on Ubuntu.
Prerequisites
- Ubuntu Server 20.04 LTS or newer
- Minimum 4GB RAM (8GB recommended for production)
- 20GB+ of free disk space
- Root or sudo privileges
- Java (OpenJDK 11; installed in the first step below)
- Basic knowledge of Linux command line
1 Install Java Runtime
Elasticsearch requires Java. Install OpenJDK:
sudo apt update
sudo apt install -y openjdk-11-jdk
Verify Java installation:
java -version
2 Install Elasticsearch
Import the Elasticsearch GPG key:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Add the Elasticsearch repository:
sudo apt install -y apt-transport-https
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
Install Elasticsearch:
sudo apt update
sudo apt install -y elasticsearch
3 Configure Elasticsearch
Edit the Elasticsearch configuration file:
sudo nano /etc/elasticsearch/elasticsearch.yml
Update the following settings:
# Set the cluster name
cluster.name: my-log-server
# Set the node name
node.name: ${HOSTNAME}
# Set the network host
network.host: 0.0.0.0
# Set the HTTP port
http.port: 9200
# Set the initial master nodes
cluster.initial_master_nodes: ["${HOSTNAME}"]
Note: For production use, set network.host to a specific IP address instead of 0.0.0.0 and configure proper security settings.
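If this machine will be the only Elasticsearch node, a simpler option is to declare a single-node deployment instead of listing initial master nodes. In that case, omit cluster.initial_master_nodes and add:
# Single-node alternative: skips master election entirely
discovery.type: single-node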
4 Start and Enable Elasticsearch
Start the Elasticsearch service and enable it to start on boot:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Check the status of Elasticsearch:
sudo systemctl status elasticsearch
Verify Elasticsearch is running by sending a request to the HTTP port:
curl -X GET "localhost:9200"
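A healthy node answers with a JSON document roughly like the following (your node name, UUIDs, and exact version will differ; some fields are omitted here):
{
  "name" : "your-hostname",
  "cluster_name" : "my-log-server",
  "version" : {
    "number" : "7.17.9",
    ...
  },
  "tagline" : "You Know, for Search"
}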
1 Install Logstash
Install Logstash from the Elastic repository:
sudo apt install -y logstash
2 Configure Logstash
Create a Logstash pipeline configuration that accepts logs from Filebeat (Beats protocol) as well as plain syslog over TCP/UDP:
sudo nano /etc/logstash/conf.d/syslog.conf
Add the following configuration:
input {
  # Receive logs shipped by Filebeat agents (Beats protocol)
  beats {
    port => 5044
  }
  # Receive plain syslog messages, e.g. forwarded by rsyslog
  # (5514 is used because Logstash cannot bind privileged port 514)
  tcp {
    port => 5514
    type => syslog
  }
  udp {
    port => 5514
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
  # Console output is handy while testing; remove it once the pipeline works
  stdout { codec => rubydebug }
}
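Before starting the service, you can ask Logstash to validate the pipeline syntax (the JVM start-up makes this take a little while):
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t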
3 Start and Enable Logstash
Start the Logstash service and enable it to start on boot:
sudo systemctl enable logstash
sudo systemctl start logstash
Check the status of Logstash:
sudo systemctl status logstash
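If the service fails or keeps restarting, the journal and the Logstash log file (by default /var/log/logstash/logstash-plain.log for package installs) usually show the cause:
sudo journalctl -u logstash --no-pager -n 50
sudo tail -f /var/log/logstash/logstash-plain.log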
1 Install Kibana
Install Kibana from the Elastic repository:
sudo apt install -y kibana
2 Configure Kibana
Edit the Kibana configuration file:
sudo nano /etc/kibana/kibana.yml
Update the following settings:
# Server port
server.port: 5601
# Server host
server.host: "0.0.0.0"
# Elasticsearch connection
elasticsearch.hosts: ["http://localhost:9200"]
# Kibana index
kibana.index: ".kibana"
Note: For production use, set server.host to a specific IP address instead of 0.0.0.0 and configure proper security settings.
3 Start and Enable Kibana
Start the Kibana service and enable it to start on boot:
sudo systemctl enable kibana
sudo systemctl start kibana
Check the status of Kibana:
sudo systemctl status kibana
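You can also check Kibana's health from the command line with its status API, which returns JSON describing the overall state of the server and its plugins:
curl -s http://localhost:5601/api/status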
4 Access Kibana Web Interface
Open your web browser and navigate to:
http://your_server_ip:5601
Follow the Kibana setup wizard to complete the configuration.
1 Install Filebeat on Client Systems
On each client system you want to monitor, install Filebeat:
# On Ubuntu/Debian
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.9-amd64.deb
sudo dpkg -i filebeat-7.17.9-amd64.deb
# On CentOS/RHEL
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.9-x86_64.rpm
sudo rpm -vi filebeat-7.17.9-x86_64.rpm
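On Ubuntu/Debian clients you can instead add the same Elastic GPG key and apt repository used on the server (see the Elasticsearch section) and install Filebeat with apt, which simplifies later upgrades. In either case, keep the Filebeat version aligned with the version of your Elasticsearch/Logstash stack:
sudo apt update
sudo apt install -y filebeat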
2 Configure Filebeat
Edit the Filebeat configuration file:
sudo nano /etc/filebeat/filebeat.yml
Update the following settings:
# Enable logging
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

# Configure log input
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
      - /var/log/syslog
      - /var/log/auth.log

# Configure output to Logstash
output.logstash:
  hosts: ["your_log_server_ip:5044"]
Replace your_log_server_ip with the IP address of your log server. Also make sure the default output.elasticsearch section in filebeat.yml is commented out, since Filebeat allows only one output to be enabled at a time.
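Filebeat can check its own configuration and its connection to Logstash before you start the service; both subcommands are part of the standard filebeat binary:
sudo filebeat test config
sudo filebeat test output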
3 Start and Enable Filebeat
Start the Filebeat service and enable it to start on boot:
sudo systemctl enable filebeat
sudo systemctl start filebeat
Check the status of Filebeat:
sudo systemctl status filebeat
5 Configure Firewall
Allow the necessary ports through the firewall. Allow SSH first so that enabling UFW does not lock you out of the server:
sudo ufw allow OpenSSH
sudo ufw allow 5044/tcp  # Logstash Beats input (Filebeat)
sudo ufw allow 5514/tcp  # Logstash syslog input (TCP)
sudo ufw allow 5514/udp  # Logstash syslog input (UDP)
sudo ufw allow 5601/tcp  # Kibana
sudo ufw allow 9200/tcp  # Elasticsearch (only if clients query the API remotely)
sudo ufw enable
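To confirm the rules took effect, list the active UFW configuration:
sudo ufw status verbose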
6 Test the Log Server
Send a test syslog-style message to the Logstash TCP syslog input:
echo "Test log message from $(hostname)" | nc -q0 localhost 5514
Check if the log appears in Elasticsearch:
curl -X GET "localhost:9200/syslog-*/_search?q=message:test&pretty"
7 Create Kibana Index Pattern
In the Kibana web interface:
- Go to "Management" → "Stack Management"
- Click "Index Patterns"
- Click "Create index pattern"
- Enter "syslog-*" as the pattern
- Select "@timestamp" as the time field
- Click "Create index pattern"
Now you can explore your logs in the "Discover" section of Kibana.
8 Advanced Configuration
For production environments, consider these security enhancements:
# In /etc/elasticsearch/elasticsearch.yml: enable Elasticsearch security features
xpack.security.enabled: true

# In /etc/kibana/kibana.yml: enable HTTPS for the Kibana web interface
server.ssl.enabled: true
server.ssl.certificate: /path/to/your/certificate.crt
server.ssl.key: /path/to/your/private.key

# In /etc/kibana/kibana.yml: credentials Kibana uses to connect to Elasticsearch
elasticsearch.username: "kibana_system"
elasticsearch.password: "your_password"
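After enabling xpack.security.enabled and restarting Elasticsearch, the built-in users (including the kibana_system user referenced above) need passwords. On the 7.x series this can be done interactively with the bundled tool:
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive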
Set up log rotation for Elasticsearch and Logstash:
sudo nano /etc/logrotate.d/elasticsearch
Add the following configuration:
/var/log/elasticsearch/*.log {
daily
rotate 7
copytruncate
compress
delaycompress
missingok
notifempty
create 644 elasticsearch elasticsearch
}
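You can dry-run the new rule to check its syntax without actually rotating anything (-d makes logrotate print what it would do):
sudo logrotate -d /etc/logrotate.d/elasticsearch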