How to Install TheHive on Linux

Admin
Posted: January 20, 2025

TheHive is an open-source Security Incident Response Platform (SIRP) that acts as a central case-management hub for security operations center (SOC) teams. It is designed to help security teams manage and automate incident response, threat intelligence, and security operations.

We will install Rocky Linux 8.8 together. For a bare-metal installation, you will need to write the ISO image to a USB drive and boot the machine from it. If you are using a virtualization environment, presenting the ISO file as the boot media is enough.

Steps to Install TheHive on Linux

Installation media: download the Rocky Linux 8.8 ISO from the official Rocky Linux download page.

As we discussed above under the “System Resource” heading, we provided 4 CPUs and 8 GB of RAM.

For the hard disk, 20 GB is enough for a test environment, and that is what we used.

If your virtualization environment cannot detect which operating system the image contains, you can proceed by choosing Linux; if it cannot determine the distribution, choose Red Hat 8.

When we start the machine, the boot menu greets us with a countdown timer. If we let the timer run out, the “Test this media & install Rocky Linux 8.8” option runs and checks the integrity of the ISO file before starting the installation. Since we were sure of our ISO file, we proceeded with the “Install Rocky Linux 8.8” option. (If you downloaded the ISO for the first time, did not check its hash values after downloading, or moved it around on your computer/network, we do not recommend skipping the test mode.)
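If you want to verify the ISO manually before skipping the test mode, you can compare its SHA-256 hash with the checksum published on the download page; the file name below is an example and may differ from the one you downloaded:

# prints the SHA-256 hash of the ISO for comparison with the published checksum
sha256sum Rocky-8.8-x86_64-dvd1.iso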

After a while, we are greeted by the screen on the left below. We leave the installation language as English, proceed by pressing button 1, and arrive at the screen on the right.

On this screen, we click on button 2 and see the disk configuration screen on the left below. By pressing button 3 twice on this screen, we accept the default disk configuration.

This time we select “Network & Host Name” from the options. On the screen that appears, we enter our system's host name in field 4 (we used thehive-test; you can do the same) and apply the change by clicking button 5. Finally, by clicking button 6, we enable our system's network card so that it connects to the network and obtains an IP address. Once this is done, we return to the main menu by clicking the “Done” button in the upper left corner.

Now it's almost over. This time, in the main menu, we scroll down a little, click the “Create User” option, and are greeted by the screen on the left below.

We enter a username and password, check the “Make this user administrator” option below the username field, and click the “Done” button in the upper left corner. If the password we chose conflicts with the password policy (for example, if it is not complex enough), a warning appears in the area marked with (!); if we do not want to change it, we can return to the main menu by clicking the “Done” button again. Now all we have to do is click the “Begin Installation” button in the lower right corner and get ourselves some coffee.

When the installation is complete, we click the “Reboot System” button in the lower right corner. The system restarts automatically and we are presented with the following screen:

We now have a working operating system, but first we need to bring it up to date.

To do this, we first log in with the username and password we created during the installation.

We install updates with the sudo dnf update command. When you are asked to confirm the packages to be installed, simply press Y and then Enter.
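For reference, the update step looks like this:

# add -y to skip the interactive confirmation prompt
sudo dnf update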

After completing the updates, we can move on to installing the docker environment.

For installation, we first need to add the docker repo to our operating system:

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Afterwards, we can start installing the docker engine and other packages we will need:

sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin


After the installation is completed, we set the docker service to run automatically at system startup:

sudo systemctl enable docker

           

Then we run the docker service:

sudo systemctl start docker
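If you prefer, the enable and start steps above can be combined into a single command; this is equivalent to running the two commands separately:

sudo systemctl enable --now docker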

Then we check if everything is OK:

sudo systemctl status docker

             

If everything is OK, we should see “active (running)”, as in the screenshot below:

Now, the next step is to add ourselves to the docker group:

sudo usermod -aG docker $(whoami)

                   

Finally, we end our session with the logout command and log in again so that the new group membership takes effect.
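As an optional sanity check (not part of the original steps), once you are logged back in you should be able to talk to the Docker daemon without sudo:

docker ps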

TheHive Docker

Now you can copy our “docker-compose.yml” file that we configured for TheHive:


version: "3.8"
services:

  nginx:
    container_name: nginx
    hostname: nginx
    image: nginx:1.19.5
    ports:
      - 80:80
      - 443:443
      - 4443:4443
    networks:
      - proxy
    volumes:
      - ./vol/nginx:/etc/nginx/conf.d
      - ./vol/ssl:/etc/ssl
      - ./vol/nginx-logs:/var/log/nginx
    restart: on-failure

  cassandra:
    container_name: cassandra
    image: cassandra:3.11
    restart: unless-stopped
    hostname: cassandra
    environment:
      - MAX_HEAP_SIZE=1G
      - HEAP_NEWSIZE=1G
      - CASSANDRA_CLUSTER_NAME=thp
    volumes:
      - ./vol/cassandra/data:/var/lib/cassandra/data
    networks:
      - backend

  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:7.11.1
    environment:
      - http.host=0.0.0.0
      - discovery.type=single-node
      - cluster.name=hive
      - script.allowed_types=inline
      - thread_pool.search.queue_size=100000
      - thread_pool.write.queue_size=10000
      - gateway.recover_after_nodes=1
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms256m -Xmx256m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./vol/elasticsearch/data:/usr/share/elasticsearch/data
      - ./vol/elasticsearch/logs:/usr/share/elasticsearch/logs
    networks:
      - backend

  thehive:
    container_name: thehive
    image: 'thehiveproject/thehive4:latest'
    restart: unless-stopped
    depends_on:
      - cassandra
    ports:
      - '0.0.0.0:9000:9000'
    volumes:
      - ./vol/thehive/application.conf:/etc/thehive/application.conf
      - ./vol/thehive/data:/opt/thp/thehive/data
      - ./vol/thehive/index:/opt/thp/thehive/index
    command:
      --cortex-port 9001
      --cortex-keys ${CORTEX_KEY}
    networks:
      - proxy
      - backend

  cortex:
    container_name: cortex
    image: thehiveproject/cortex:latest
    depends_on:
      - elasticsearch
    networks:
      - proxy
      - backend
    command:
      --job-directory ${JOB_DIRECTORY}
    environment:
      - 'JOB_DIRECTORY=${JOB_DIRECTORY}'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - '${JOB_DIRECTORY}:${JOB_DIRECTORY}'


networks:
  backend:
  proxy:
    external: true

                   

You will need to go back to the system we installed and save the above content to a file named “docker-compose.yml”. You can use an editor such as vi or nano for this. We won't go into detail about these in this lesson, but if you don't know how or don't remember, this is a great time to revisit the relevant lessons.
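For example, with nano (vi works just as well):

nano docker-compose.yml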

In the next step, we need to create a file named .env in which we define the API key Cortex will use to communicate with TheHive, as well as Cortex's job directory (the “.” at the beginning of the file name is important):

CORTEX_KEY=123456
JOB_DIRECTORY=/opt/cortex/jobs

                   

We save the .env file and exit.
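If you prefer to create the file directly from the shell instead of an editor, a heredoc does the same job (123456 is the placeholder value from above; we will replace it with a real key later):

cat > .env << 'EOF'
CORTEX_KEY=123456
JOB_DIRECTORY=/opt/cortex/jobs
EOF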

In our config file, you see five service definitions: Nginx, Cassandra, Elasticsearch, TheHive, and Cortex. If you examine the file carefully, you will also see a number of volume mappings. For these to work, we need to create a directory named vol in the directory where we saved “docker-compose.yml”:

mkdir vol

                   

Those of you who looked more carefully may have noticed that we also added an nginx service, although we never mentioned it before. It is in fact possible to run TheHive without Nginx, but TheHive's and Cortex's web interfaces do not support SSL/TLS on their own. To provide that support, we will use Nginx as a reverse proxy.

[thehive-user@thehive-test ~]$ mkdir vol/ssl vol/nginx
[thehive-user@thehive-test ~]$ touch vol/nginx/ssl.conf
[thehive-user@thehive-test ~]$ touch vol/nginx/thehive.conf
[thehive-user@thehive-test ~]$ touch vol/nginx/cortex.conf

                   

After creating the files, it's time to fill in their contents. You need to save the following contents by writing them into the relevant files:

ssl.conf:

ssl_certificate /etc/ssl/nginx-crt.crt;
ssl_certificate_key /etc/ssl/nginx-key.key;


 

thehive.conf:


server {
  listen 443 ssl;
  server_name thehive-01.localhost;
  proxy_connect_timeout   600;
  proxy_send_timeout      600;
  proxy_read_timeout      600;
  send_timeout            600;
  client_max_body_size    2G;
  proxy_buffering off;
  client_header_buffer_size 8k;

  location / {
    add_header              Strict-Transport-Security "max-age=31536000; includeSubDomains";
    proxy_pass            http://thehive:9000;
    proxy_http_version      1.1;
  }
}


 

cortex.conf:


server {
  listen 4443 ssl;
  server_name cortex-01.localhost;

  proxy_connect_timeout   600;
  proxy_send_timeout      600;
  proxy_read_timeout      600;
  send_timeout            600;
  client_max_body_size    2G;
  proxy_buffering off;
  client_header_buffer_size 8k;

  location / {
    add_header              Strict-Transport-Security "max-age=31536000; includeSubDomains";
    proxy_pass            http://cortex:9001;
    proxy_http_version      1.1;
  }
}

Now we need to create a self-signed certificate in the ssl directory we defined for nginx by running the following command:

[thehive-user@thehive-test ~]$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout vol/ssl/nginx-key.key -out vol/ssl/nginx-crt.crt

While creating the certificate, openssl will ask you for some information; we filled it in our own way, and you can fill it in as you wish. The information you enter does not affect the result at this stage.
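If you would rather skip the interactive questions altogether, the same certificate can be generated non-interactively by passing a subject on the command line; the CN value below is just an example and, as noted above, does not affect the result:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout vol/ssl/nginx-key.key -out vol/ssl/nginx-crt.crt \
  -subj "/CN=thehive-01.localhost"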

If everything is OK, your current directory should look like the screenshot below:

Now we are ready to start our services, just run the following commands:

[thehive-user@thehive-test ~]$ docker network create proxy
[thehive-user@thehive-test ~]$ docker compose up -d

On the first run, Docker downloads the images, so this stage will take some time; we recommend waiting patiently.

After the installation is finished, when you run the docker compose ps command, you should see an output similar to the following:
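If any of the containers shows up as restarting or exited instead of running, its logs are the first place to look; these are standard Docker Compose commands using the service names from our compose file:

docker compose logs cassandra
docker compose logs thehive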

There is one hurdle we need to overcome to access the web interfaces of TheHive and Cortex. On the Rocky Linux 8.8 system we installed, the firewall is enabled by default and will reject requests to ports 443 (TheHive) and 4443 (Cortex). To allow requests to these ports, we need to run the following commands:

[thehive-user@thehive-test ~]$ sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
[thehive-user@thehive-test ~]$ sudo firewall-cmd --zone=public --add-port=4443/tcp --permanent
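Since the rules above were added with --permanent, they only take effect on the running firewall after a reload:

sudo firewall-cmd --reload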

Now that everything is ready, we can open a browser and reach TheHive on port 443 (https) of our virtual machine and Cortex on port 4443 (https). (Since we are using a self-signed certificate, the browser will show a certificate warning; accept it and proceed.)
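If you want to confirm from the command line first, you can test from the VM itself; curl's -k flag accepts the self-signed certificate (replace localhost with the VM's IP address when testing from another machine):

# any HTTP response here means the reverse proxy is answering
curl -k -I https://localhost/
curl -k -I https://localhost:4443/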

You can log in using the default username admin@thehive.local and the default password secret.

On the Cortex side, the process is a little different. It first presents a screen for setting up the administrator account, where you need to choose a username and password that you will not forget.

Afterwards, it will take us to a screen where we can log in with the username and password we specified:

Our next job is creating the API key required for TheHive and Cortex to work together.

Since Cortex supports multi-tenancy just like TheHive, let's first define an organization for our tests and then create an API key for that organization.

First, we click the Organizations link in field (1). Then, in area (2), we click the +Add Organization link. After giving our organization a Name and a Display Name in the Create Organization window that opens, we finish by clicking the Save button.

Now we will click on our organization and create not one but two users with the +Add User button.

One of these users will be the API User, which will allow our organization in TheHive to communicate with our organization in Cortex, and the other will be the user with whom we will do our work at the organizational level in Cortex.

We give the read and analyze roles to the API user. For our own user, we give the read, analyze, and orgAdmin roles:

After adding both users, we generate a key for the API user by clicking the Create API Key button, and set a password for our own user with the New password button.

You will need to make a note of the API key we created, because we will need to enter it in the CORTEX_KEY= line of the .env file we created earlier.
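After the edit, the .env file should look something like this (the value shown is a placeholder for the key you copied from Cortex):

CORTEX_KEY=<api-key-copied-from-cortex>
JOB_DIRECTORY=/opt/cortex/jobs

With the key in place, we restart the TheHive container from the console so that it picks up the new value: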

docker compose up -d thehive

                   

In the popup that opens, you should see OK in the Cortex section (3).

Next up is creating an organization in TheHive. As we said at the beginning, TheHive supports multi-tenancy and lets you manage many different organizations from a single interface. We click the “Add Organization” button to create the organization we will use in the rest of our work, give it a name, and click “OK”.

Afterwards, the newly created organization will be displayed in the "Organizations" list:

As you can see in the screenshot, we created the “LetsDefend” organization. Now it's time to add users to it. When we click the “LetsDefend” organization in the list, we land on a screen where we can manage that organization, and in the upper left corner we see the “+Create New User” option. We click it and create our user as follows. (We create our user with the orgAdmin profile.)

Conclusion

After adding the user, when we return to the Users list, we see a “Create Password” button next to the user we created. All we need to do is click this button and set a password that we will not forget.

Finally, let's log out from the admin account and log in with this new account.
