[Guide] Running Unimus Core in a Docker container

Hello fellow Tik fans. I wanted to share a guide to a sensible and useful use-case for a Docker container on a router. Containers on routers can be quite useful in the right use-cases - just please remember that just because you can run something directly on the router doesn't mean you should. For some things it's a good fit, for others, not so much.

For a Unimus Core, this makes sense, as it allows for local polling of network devices, and you don't have to deploy / maintain a separate server / VM at remote locations. It's also lightweight enough to run directly on the router. For those unfamiliar with Unimus, the Core serves as a remote proxy / agent for managing devices not directly reachable by the Unimus Server. With RouterOS' container support, you can deploy Unimus Core directly on an edge router servicing a network.

Hopefully this guide can give you an intro to how containers work on RouterOS in general; you can use this with any other container (like Pi-hole) with minimal modifications.

The Setup
This is the system the remote Core deployment was tested on:

[Image: test topology diagram]

Starting from the right, the Unimus Server is installed on Ubuntu Server 22.04 running on a Raspberry Pi 2. It is connected via static IP to the HQ router, a MikroTik RouterBOARD. The HQ router does source NAT for the Unimus Server, translating its private source IP to the WAN interface's public IP. This allows the Unimus Server to reach resources outside the LAN.

The HQ router is also configured for destination NAT, a.k.a. port forwarding, directing incoming TCP 5509 and TCP 8085 traffic to the Unimus Server. TCP 5509 allows the inbound remote Core connection. TCP 8085 is not strictly required for this demonstration; it's open simply for remote access to the HTTP GUI.
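On RouterOS this maps to the following NAT rules on the HQ router (taken from the config export attached at the end of this post; 10.2.3.4 is the simulated public IP, 172.31.254.2 the Unimus Server):
/ip/firewall/nat/
# translate LAN sources to the WAN (public) IP for outbound traffic
add action=masquerade chain=srcnat out-interface=ether1
# forward incoming Core (5509) and GUI (8085) traffic to the Unimus Server
add action=dst-nat chain=dstnat dst-address=10.2.3.4 dst-port=5509,8085 protocol=tcp to-addresses=172.31.254.2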

The left side represents a remote network. The Branch router, a MikroTik RB5009UG+S+, is our edge router; it supports containers and will run Unimus Core in one. Connected on the LAN side is the managed device, another MikroTik RouterBOARD - the Branch switch.

Our Unimus Core container will have its own virtual ethernet interface ('veth') to use for communication with the outside. Although this veth could be added to the local bridge connecting to the Branch switch, it makes more sense security-wise to add it to a separate 'containers' bridge. This way any container traffic goes through the routing engine and firewall, where it can be subject to policies.
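For example, you could pin the container down with forward-chain rules. This is only a sketch using this guide's addressing (10.1.1.0/24 for containers, 10.8.9.0/24 for the branch LAN, 10.2.3.4 for the Unimus Server), not part of the reference configs below:
/ip/firewall/filter/
# let the Core reach the Unimus Server...
add chain=forward src-address=10.1.1.0/24 dst-address=10.2.3.4 protocol=tcp dst-port=5509 action=accept comment="Core -> Unimus Server"
# ...and the managed devices on the branch LAN
add chain=forward src-address=10.1.1.0/24 dst-address=10.8.9.0/24 action=accept comment="Core -> managed devices"
# drop anything else the container originates
add chain=forward src-address=10.1.1.0/24 action=drop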

The Branch router, which is likely already source-NATting traffic for the whole branch network, needs to also SNAT the container subnet to allow outbound communication to the Unimus Server.

The edge routers' WAN ports are reachable via an 'internet' simulated by a local network. This is sufficient for testing purposes, as it simulates the lack of direct connectivity between the Unimus Server and the remote Core.

Configuration
The focus is on the Unimus Server, the Branch router and the Unimus Core container configuration.

Unimus Server
Begin by navigating to Zones and hitting 'Add new Zone'. This Zone will represent our remote location. Enter the Zone name, ID and remote core connection method, and hit 'Confirm' to proceed.

[Image: 'Add new Zone' dialog]

Next, retrieve the remote Core access key by hitting 'Show' and save it for later. It will be used to establish the connection to the Unimus Server.

[Image: Zone access key]

Branch router
Before attempting to run any containers, let's take care of the prerequisites. RouterOS version 7.5 or higher is needed to run containers, so update as necessary. The container package is compatible with the arm, arm64 and x86 architectures.
[Image: checking the RouterOS version]
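If you're not on 7.5+ yet, the upgrade can be done straight from the CLI (this assumes the router already has internet access and uses the default update channel):
# check what the latest available release is
/system/package/update/check-for-updates
# download and install it (the router reboots to apply)
/system/package/update/install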

  • secure the router as there be dragons on the internet
    Take care of basic security: configure a strong password and restrict management access using a firewall policy (a minimal sketch follows this list).
  • get the container package and install it
    Visit mikrotik.com/download and get the 'Extra packages' archive for your architecture. Extract the contents, upload the container package to your router and reboot. After the reboot, verify the currently installed packages:
    [Image: installed packages list]
  • enable device-mode containers
    You need physical access to the router for this part due to security implications: after issuing the command below, press the reset button to confirm the change.
    /system/device-mode/update container=yes
  • configure container interface
    The container needs a virtual interface to access the network. First create a bridge for the container and assign it an IP address:
    /interface/bridge/add name=containers
    /ip/address/add address=10.1.1.1/24 interface=containers
    Then create a veth1 interface with the IP address that Unimus Core will use for communication with the Unimus Server, and add the interface to the newly created bridge:
    /interface/veth/add name=veth1 address=10.1.1.2/24 gateway=10.1.1.1
    /interface/bridge/port add bridge=containers interface=veth1
  • configure NAT
    Source NAT is needed for outbound communication. You want connections originating from the container subnet translated to an IP address reachable from the outside:
    /ip/firewall/nat/
    add action=masquerade chain=srcnat src-address=10.1.1.0/24 out-interface=ether1
  • use an external drive (optional)
    To avoid cluttering your platform storage, it is recommended to use a USB stick or an external hard drive for container images and volumes. It needs to be formatted with an ext3 or ext4 filesystem (see the sketch after this list):
    [Image: external drive formatting]
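A minimal sketch of the hardening and drive-formatting steps mentioned above. The 'local' bridge and 'usb1' slot names come from this guide's setup, so adjust them to yours, and note the format command erases the drive:
# set a strong admin password (interactive prompt)
/password
/ip/firewall/filter/
# keep established sessions working, then refuse new connections to the router from anywhere but the LAN
add chain=input connection-state=established,related action=accept
add chain=input in-interface=!local connection-state=new action=drop comment="management only from LAN"
# format the external drive with ext4 (RouterOS v7 /disk menu)
/disk/format-drive usb1 file-system=ext4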

Unimus Core
For Unimus Core you need to specify where to reach the Unimus Server via container environment variables, pull the Unimus Core container image and run it.
  • define environment variables
    Variables are defined in key-value pairs. These are needed to point Unimus Core to the Unimus Server and to input the access key you got earlier. Additionally, you can set the timezone and memory constraints for Java, and there is an option to define the volumes' mount points for data persistence. Details at https://github.com/crocandr/docker-unimus-core.
    /container/envs/
    add key=UNIMUS_SERVER_ADDRESS name=unimuscore_envs value=10.2.3.4
    add key=UNIMUS_SERVER_PORT name=unimuscore_envs value=5509
    add key=UNIMUS_SERVER_ACCESS_KEY name=unimuscore_envs value=\
        "v3ry_crypto;much_s3cr3t;W0w.."
    add key=TZ name=unimuscore_envs value=Europe/Budapest
    add key=XMX name=unimuscore_envs value=256M
    add key=XMS name=unimuscore_envs value=128M
    

    /container/mounts/
    add dst=/etc/unimus-core name=unimuscore_config src=/usb1-part1/config
    
  • add container image
    You can pull the latest Unimus Core container image straight from Docker Hub at https://registry-1.docker.io. You could also import one from a PC (via docker pull/save) or build your own. The remote Core needs to be the same version as the embedded Core on the Unimus Server to avoid compatibility issues between versions, so just make sure you grab a matching version.
    /container/config/set
    registry-url=https://registry-1.docker.io tmpdir=usb1-part1/pull
    
    /container/add
    remote-image=croc/unimus-core-arm64:latest interface=veth1 root-dir=usb1-part1/unimuscore mounts=unimuscore_config envlist=unimuscore_envs logging=yes
    • tmpdir specifies where to save the pulled image
    • root-dir specifies where to extract the image
    • mounts specifies mount points for volumes, ensuring data persistence if the container is removed or replaced
    • envlist specifies the environment variables defined above
    • logging is enabled for troubleshooting
    After extraction, the container should go to "stopped" status.
    [Image: container in 'stopped' status]
  • run it!
    All is set to start the remote Unimus Core (a quick way to verify it's running follows this list):
    /container/start 0
  • configure run on boot (optional)
    It might come in handy to configure the container to start on RouterOS boot, to add some automation in case the Branch router gets rebooted for any reason.
    /container/set start-on-boot=yes 0
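Once started, you can verify the container from the RouterOS CLI (the container number may differ from 0 on your system):
# list containers and their status (expect status=running)
/container/print
# with logging=yes, the container's console output goes to the RouterOS log
/log/print where topics~"container"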

Aftermath
Assuming you have set it all up and everything went as planned, you should see the remote Core come online in Unimus:

[Image: remote Core online in Unimus]

Adding the test device (Branch switch) under the remote Core Zone (BO1) prompts a discovery, which results in a success:

[Image: successful device discovery]

Troubleshooting
Most common issues relate to Unimus Server connectivity. Here's a checklist of items to verify (a few helper commands follow the list):
  • Unimus Server is UP
    Double-check your Unimus Server is up and running. Access it via browser at http(s)://<YourServerIP>:8085/
  • Firewall policy
    Verify whether there's a security rule allowing the connection from outside your network.
  • Check NAT
    A destination NAT rule is necessary for the Core connection traffic. The destination address of incoming remote Core traffic needs to be translated to the Unimus Server's IP address. TCP port 5509 is used by default.
  • Check variables
    Our Unimus Core container uses environment variables to establish the connection to the server. Make sure the values in the key-value pairs reflect your setup:

    UNIMUS_SERVER_ADDRESS is the IP address where the Unimus Server is reachable (the public address, before destination NAT)
    UNIMUS_SERVER_PORT is the TCP port number (default 5509) on which Unimus listens for remote core messages
    UNIMUS_SERVER_ACCESS_KEY is the long string generated when you create a new Remote Core Zone

    Enabled container logs make troubleshooting easier.
    Log of a misconfigured Core connection port:
    [Image: container log - wrong port]
    Log of wrong access key:
    [Image: container log - wrong access key]
  • Check versions
    For the Unimus Server to accept the remote Core connection, both need to run the same version. The Unimus Server log file will reveal this issue:
    [Image: Unimus Server log - version mismatch]
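A few RouterOS commands that help with the checks above; the environment list name, server IP and port match this guide's setup, so substitute your own:
# confirm the env values the Core container was configured with
/container/envs/print where name=unimuscore_envs
# on the HQ router: the dst-nat rule's packet counters should increase as the Core connects
/ip/firewall/nat/print stats
# from the Branch router: test TCP reachability of the server port
/system/telnet 10.2.3.4 5509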

Attaching config exports used in this setup:
# HQ router
/interface bridge
add name=local
/interface bridge port
add bridge=local interface=ether2
/ip address
add address=172.31.254.1/24 interface=local network=172.31.254.0
add address=10.2.3.4/24 comment=internet interface=ether1 network=10.2.3.0
/ip dhcp-server network
add address=172.31.254.0/24 dns-server=172.31.254.1 gateway=172.31.254.1
/ip dns
set servers=10.2.3.254 allow-remote-requests=yes
/ip firewall nat
add action=masquerade chain=srcnat out-interface=ether1
add action=dst-nat chain=dstnat dst-address=10.2.3.4 dst-port=5509,8085 protocol=tcp to-addresses=172.31.254.2
/ip route
add distance=1 gateway=10.2.3.254
/system clock
set time-zone-name=Europe/Bratislava
/system identity
set name=HQ

# Branch router
/interface bridge
add name=containers
add name=local
/interface veth
add address=10.1.1.2/24 gateway=10.1.1.1 name=veth1
/container mounts
add dst=/etc/unimus-core name=unimuscore_config src=/usb1-part1/config
/container
add envlist=unimuscore_envs interface=veth1 logging=yes
/container config
set registry-url=https://registry-1.docker.io tmpdir=usb1-part1/pull
/container envs
add key=UNIMUS_SERVER_ADDRESS name=unimuscore_envs value=10.2.3.4
add key=UNIMUS_SERVER_PORT name=unimuscore_envs value=5509
add key=UNIMUS_SERVER_ACCESS_KEY name=unimuscore_envs value="secret key"
add key=TZ name=unimuscore_envs value=Europe/Budapest
add key=XMX name=unimuscore_envs value=256M
add key=XMS name=unimuscore_envs value=128M
/interface bridge port
add bridge=local interface=ether2
add bridge=containers interface=veth1
/ip address
add address=10.8.9.10/24 interface=local network=10.8.9.0
add address=10.5.6.7/24 comment="internet" interface=ether1 network=10.5.6.0
add address=10.1.1.1/24 interface=containers network=10.1.1.0
/ip dns
set servers=10.5.6.254 allow-remote-requests=yes
/ip firewall nat
add action=masquerade chain=srcnat src-address=10.1.1.0/24 out-interface=ether1
/ip route
add disabled=no dst-address=0.0.0.0/0 gateway=10.5.6.254 routing-table=main
/system clock
set time-zone-name=Europe/Bratislava
/system identity
set name="Branch router"

I hope this guide can serve as a reference for deploying Unimus Core (or any other) containers on RouterOS! The original article, from which a lot of this is taken, is also available on the Unimus Blog; if you like, you can find it here.
