Categories: Dell

Dell DSU on Debian 12 bookworm

I recently deployed Proxmox on an older Dell T320 and I wanted to use Dell DSU to update the server’s firmware.

Unfortunately, Debian is not a supported OS and you really have to jump through a lot of hoops to get it working.

First, you need to download the package from Dell and run it.

This will install the package but you will still be missing some libraries.

apt install -y libgpgme11-dev libicu-dev

Once those packages are installed, if you just run “dsu”, you will get this message:

Could not download the catalog. Please check the configuration

To get it to actually use a catalog, use:

dsu --source-type=REPOSITORY

Unfortunately, on my old server it just said “no applicable updates”, so I had to download the SUU ISO and update from that:

mount SUU-LIN64_22.11.200.513.ISO /mnt
dsu --source-type=REPOSITORY --source-location=/mnt/repository
Categories: Docker, linux, Security, Windows Server

Setting up RDPGW in docker

I was looking into setting up an instance of RDPGW for a client.

The client currently has a number of Windows Server VMs in a workgroup, which they connect to over port 3389 directly exposed to the internet; this is obviously a massive security issue.
Additionally, the RDP port is often blocked on corporate networks.

The better option would be to set up an Active Directory domain and an RDP Gateway server, but as a quick alternative, while the client decides whether the cost is worth it, I set up an RDP gateway using the open-source project RDPGW.

The project itself is very interesting but the documentation is unfortunately seriously lacking and it took me quite a bit of effort to get everything set up, so I wanted to share the results in case it helps someone.

I didn’t find a way to get RDPGW to work behind a reverse proxy, so unfortunately this setup needs two public IP addresses if you intend to host Keycloak on port 443.

Setup

This is the docker-compose.yml file:

version: "3.9"
services:
  postgres:
    container_name: db
    image: "postgres:14.4"
    restart: always
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres" ]
      timeout: 45s
      interval: 10s
      retries: 10
    volumes:
      - ./postgres_data:/var/lib/postgresql/data
      #- ./sql:/docker-entrypoint-initdb.d/:ro # turn it on, if you need run init DB
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: keycloak
      POSTGRES_HOST: postgres
    networks:
      - pgsql

  keycloak:
    container_name: keycloak
    image: quay.io/keycloak/keycloak
    command: ['start', '--proxy', "edge"]
    restart: always
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      JAVA_OPTS_APPEND: -Dkeycloak.profile.feature.upload_scripts=enabled
      KC_DB_PASSWORD: postgres
      KC_DB_URL: "jdbc:postgresql://postgres/keycloak"
      KC_DB_USERNAME: postgres
      KC_DB: postgres
      KC_HEALTH_ENABLED: 'true'
      KC_HTTP_ENABLED: 'true'
      KC_METRICS_ENABLED: 'true'
      KC_HOSTNAME_STRICT_HTTPS: 'true'
      KC_HOSTNAME: rdgateway-keycloak.example.com
      PROXY_ADDRESS_FORWARDING: 'true'
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: password
    healthcheck:
      test: ["CMD-SHELL", "exec 3<>/dev/tcp/127.0.0.1/8080;echo -e \"GET /health/ready HTTP/1.1\r\nhost: http://localhost\r\nConnection: close\r\n\r\n\" >&3;grep \"HTTP/1.1 200 OK\" <&3"]
      interval: 10s
      retries: 10
      start_period: 20s
      timeout: 10s
    ports:
      - "8080:8080"
      - "8787:8787" # debug port
    networks:
      - pgsql
      - keycloak

  rdpgw:
    image: bolkedebruin/rdpgw:latest
    restart: always
    container_name: rdpgw
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./rdpgw.yaml:/opt/rdpgw/rdpgw.yaml
    depends_on:
      keycloak:
        condition: service_healthy
    networks:
      - keycloak

networks:
  pgsql:
    driver: bridge
  keycloak:
    driver: bridge

You need to create a postgres_data directory next to the compose file (or use a volume).

This is the rdpgw.yaml file (look at the documentation for information on the secrets):

Server:
 Tls: auto
 GatewayAddress: rdgateway.example.com
 Port: 443
 Hosts:
  - rdshost-1.com:3389
  - rdshost-2.com:3389
 RoundRobin: any
 SessionKey: changeme
 SessionEncryptionKey: changeme
Authentication:
 - openid
OpenId:
 ProviderUrl: https://rdgateway-keycloak.example.com/realms/rdpgw
 ClientId: rdpgw
 ClientSecret: 01cd304c-6f43-4480-9479-618eb6fd578f
Client:
 UsernameTemplate: "{{ username }}"
 NetworkAutoDetect: 1
 BandwidthAutoDetect: 1
 ConnectionType: 6
 SplitUserDomain: true
Security:
  PAATokenSigningKey: changeme
  UserTokenEncryptionKey: changeme
  VerifyClientIp: false
Caps:
 TokenAuth: true
 IdleTimeout: 10
 EnablePrinter: true
 EnablePort: true
 EnablePnp: true
 EnableDrive: true
 EnableClipboard: true

The Hosts section lists all the hosts the gateway allows access to.
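
All the key and secret values above are placeholders (“changeme”). The RDPGW documentation describes the exact requirements, but if I remember correctly they expect 32-character random strings, which you can generate with openssl:

# 16 random bytes printed as hex = a 32-character string
openssl rand -hex 16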

Configuration

After all the files have been created in your chosen directory, you can run

docker compose up -d

After some time it will bring up Postgres and then Keycloak. RDPGW will not come up before we’ve created the new realm in Keycloak.

You will need to handle the reverse proxying of Keycloak yourself; a full walkthrough is outside the scope of this article.
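
For what it’s worth, here is a minimal HAProxy sketch of what that proxying could look like (the certificate path and backend address are assumptions; since Keycloak is started with --proxy edge, the proxy terminates TLS and must pass the X-Forwarded-* headers):

frontend keycloak-in
    bind :443 ssl crt /etc/haproxy/certs/rdgateway-keycloak.example.com.pem # bind this on the second public IP
    mode http
    option forwardfor                                 # adds X-Forwarded-For for Keycloak
    http-request set-header X-Forwarded-Proto https   # tells Keycloak the original scheme
    default_backend keycloak

backend keycloak
    mode http
    server keycloak 192.168.1.10:8080 check           # the Docker host publishing Keycloak on 8080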

Log in to Keycloak with the admin user you’ve defined in the compose file and, on the top left, create a new realm.

There, you can import the realm configuration from here.

It will create a realm named “rdpgw” with a single user named “admin@rdpgw” and password “admin” (which you should change promptly).

Once this is done, the RDPGW container should come up.

You should be able to browse to https://rdgateway.example.com/connect and log in with the user “admin@rdpgw”.
It will download a .rdp file, which you should be able to open to connect to the first host in the list.

If you want to connect to a specific host, you should use https://rdgateway.example.com/connect?host=destination-host-url1.com

Categories: Backup, Docker, Zabbix

Monitoring Synology Active Backup for Business and Snapshot Replication with Zabbix

Active Backup for Business

We use Synology Active Backup for Business as a backup solution for our Hyper-V hypervisors.

It works really well, but Synology hasn’t exposed any way to monitor it from SNMP or anything else.

I have seen many people asking how to monitor it, and the best solution I have seen requires connecting to the NAS over SSH as root and running a script that reads the ABB SQLite database.

I noticed that in ABB there is a configuration option to ship logs to a syslog server, which is easily monitored using Zabbix.

The log forwarding configuration in Active Backup for Business

Using a syslog-ng server in a Docker container, I set up everything to receive the logs:

---
version: "3.7"
services:
  syslog-ng:
    image: linuxserver/syslog-ng
    environment:
      - PUID=1000
      - PGID=4
      - TZ=Europe/Brussels
    volumes:
      - /srv/syslog-ng/config:/config
      - /srv/syslog-ng/log:/var/log
    ports:
      - 514:5514/udp
      - 601:6601/tcp
      - 6514:6514/tcp

The syslog-ng configuration file is very simple and to the point:

#############################################################################
# Default syslog-ng.conf file which collects all local logs into a
# single file called /var/log/messages tailored to container usage.

@version: 3.35
@include "scl.conf"

source s_local {
  internal();
};

source s_network_tcp {
  syslog(transport(tcp) port(6601));
};

source s_network_udp {
  syslog(transport(udp) port(5514));
};

destination d_local {
  file("/var/log/messages" perm(0640));
  file("/var/log/messages-kv.log" template("$ISODATE $HOST $(format-welf --scope all-nv-pairs)\n") frac-digits(3) perm(0640));
};

log {
  source(s_local);
  source(s_network_tcp);
  source(s_network_udp);
  destination(d_local);
};
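
To check that the container actually receives messages before pointing the NAS at it, you can fire a test message from any Linux box (the hostname is a placeholder):

# send an RFC 5424 test message over UDP to the published port 514
logger -n syslog-host.example.com -P 514 -d --rfc5424 "test message from $(hostname)"
# it should show up in the files mapped to /srv/syslog-ng/log on the host
tail /srv/syslog-ng/log/messages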

The logs arrive in real time:

syslog-ng logs

Snapshot Replication

In the Snapshot Replication app, there is no option to configure log shipping.

However, in the background, the app uses the same mechanism as ABB for log management: an instance of syslog-ng on the Synology NAS.
The logs end up in /var/log/synolog/synodr.log (it seems that Snapshot Replication is called Disaster Recovery internally at Synology).

We just need to push a configuration file into the syslog-ng patterndb.d folder (/etc/syslog-ng/patterndb.d/snapreplication.conf), with the appropriate permissions (owner “system”, group “log”, mode 644), containing the following (replace {{ HOSTNAME }} with the hostname of your syslog-ng destination server):

@version: 3.34

source snapshot_replication {
    file("/var/log/synolog/synodr.log");
};
destination docker {
        network(
                "{{ HOSTNAME }}"
                port(514)
                transport(udp)
                ip-protocol(4)

                log_fifo_size(50000)
                keep_alive(no)
        );
};

log {
    source(snapshot_replication); destination(docker);
};
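
A quick way to push the file and set those permissions from another machine looks like this (the NAS hostname and root SSH access are assumptions):

scp snapreplication.conf root@nas.example.com:/etc/syslog-ng/patterndb.d/snapreplication.conf
ssh root@nas.example.com "chown system:log /etc/syslog-ng/patterndb.d/snapreplication.conf && chmod 644 /etc/syslog-ng/patterndb.d/snapreplication.conf"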

You just need to restart the syslog-ng service on the NAS and you’re good to go:

systemctl restart syslog-ng

Logs will then start to appear in the syslog-ng Docker container.

Zabbix

You can use the templates here to monitor the logfile using Zabbix.

It uses Zabbix active checks, so you need to make sure those are set up correctly.
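
In practice that means the agent on the syslog-ng host needs ServerActive pointing at your Zabbix server and a Hostname matching the host configured in the frontend; a quick sanity check (the config path depends on your agent version):

grep -E '^(ServerActive|Hostname)=' /etc/zabbix/zabbix_agentd.conf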

Categories: Active Directory, Microsoft 365

Restart failed Microsoft 365 mailbox migration

We had a client where a single mailbox migration kept failing, no matter what we did.

I was tired of having to restart this migration manually, so I whipped up a tiny script that would do it for me.

$username = "exchange365admin@tenant.onmicrosoft.com"
$password = ConvertTo-SecureString 'password' -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

Connect-ExchangeOnline -credential $credential 

$migrationusers = Get-MigrationUser -ResultSize Unlimited | Where-Object {$_.Status -eq "Failed"}

$migrationusers | Start-MigrationUser

Disconnect-ExchangeOnline -Confirm:$false

I set up a scheduled task to run it regularly (every 15 minutes in my case) and a few days later, the mailbox was up and running in Microsoft 365.
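
For reference, registering such a task from an elevated prompt can look like this (the task name and script path are placeholders):

schtasks /create /tn "Restart-FailedMigration" /sc minute /mo 15 /ru SYSTEM ^
  /tr "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Restart-MigrationUser.ps1"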

Categories: Active Directory, Azure AD, Microsoft 365, Windows Server

Import a Microsoft 365 user into on-premises Active Directory

This solves a frequent situation: you have an existing Microsoft 365 tenant with users already on it, and you want to set up Azure AD Connect synchronization with an on-premises Active Directory.

This script will read the AAD user’s attributes, create a local user with matching attributes and set an mS-DS-ConsistencyGuid attribute that allows Azure AD Connect to match and synchronize the two users.

It is important to note that since we can’t know the AAD user’s password, a new one will need to be set through the script and will replicate to AAD.

<# 
.SYNOPSIS
    Synchronizes a remote Azure AD user with a local Active Directory user that it creates

.DESCRIPTION 
    This script will get all attributes from Office 365 and add them to a locally created user.
    It will then generate and add an mS-DS-ConsistencyGuid that will allow Azure AD Connect to sync the remote and local user.
    The user should be created in a non-synced OU and then moved over to a synchronized OU to allow for checks.
	
 
.NOTES 
    A global admin is probably not required but at this stage I'm not sure what permissions are actually needed.

.COMPONENT 
    Requires module AzureAD and MSOnline


.LINK 
    https://blog.edzilla.info/?p=88
 
.Parameter usersContainer 
    OU where the user will be created

.Parameter userUPN 
    The UPN of the users we want to import

.Parameter userPassword 
    The password we want to use: the password cannot be imported from AAD, so a new one must be set on import.
#>
Param (
	[string] $usersContainer = "OU=Isotoit,OU=Building,OU=No-365,OU=Users,OU=XLG,DC=XLG,DC=grp",
	[string] $userUPN = "toto@toto.com",
	[string] $userPassword = 'Qow65345@@'

)

$UserCredential = Get-Credential
 

#Install Module
Install-Module -Name AzureAD
Install-Module MSOnline
 
#Connect to Office 365 Azure AD
Connect-AzureAD -Credential $UserCredential
Connect-MsolService -Credential $UserCredential

#Add UPN suffixes to local ad
$azureDomains = Get-AzureADDomain
$localForest = (get-addomain).Forest
 
ForEach($domain in $azureDomains)
{
    if($domain.Name -notlike "*onmicrosoft.com")
    {
        Set-ADForest -Identity $localForest -UPNSuffixes @{Add=($domain).Name}
    }
}
 

 
function convert-guid {
  param(
    [Parameter(Mandatory=$true)]
    [GUID]$guidtoconvert
  )

  $bytearray = $guidtoconvert.ToByteArray()
  $immutableID = [System.Convert]::ToBase64String($bytearray)
  $immutableID
}


 
function Add-LocalADObject
{
    ForEach($object in $input)
    {
        write-host $object.ObjectType $object.DisplayName
 
        if($object.UserPrincipalName -like "*onmicrosoft.com")
        {
            write-host "Skipping object " $object.DisplayName "because it does not have a custom logon domain"
            continue
        }
 
        if($object.ObjectType -eq "User")
        {
            $userName, $upn = $object.UserPrincipalName.split('@')
            $upn = "@"+$upn
            New-ADUser -SamAccountName $userName -UserPrincipalName $object.UserPrincipalName -Name $object.DisplayName -DisplayName $object.DisplayName -Path $usersContainer -AccountPassword (ConvertTo-SecureString $userPassword -AsPlainText -Force) -Enabled $True -PassThru
            $filter = "CN=" + $object.DisplayName
        }
 
        $localADObject = Get-ADObject -Filter "UserPrincipalName -eq '$($userUPN)'"
        #Update attributes based on Azure contact
        if($object.GivenName -ne $null){ Set-ADObject $localADObject -Add @{givenName=$object.GivenName}  }
        if($object.Surname -ne $null){ Set-ADObject $localADObject -Add @{sn=$object.Surname}  }
        if($object.Mail -ne $null){ Set-ADObject $localADObject -Add @{mail=$object.Mail}  }
        if($object.StreetAddress -ne $null){ Set-ADObject $localADObject -Add @{streetAddress=$object.StreetAddress} }
        if($object.PostalCode -ne $null){ Set-ADObject $localADObject -Add @{postalCode=$object.PostalCode} }
        if($object.City -ne $null){ Set-ADObject $localADObject -Add @{l=$object.City} }
        if($object.State -ne $null){ Set-ADObject $localADObject -Add @{st=$object.State} }
        if($object.PhysicalDeliveryOfficeName -ne $null){ Set-ADObject $localADObject -Add @{physicalDeliveryOfficeName=$object.PhysicalDeliveryOfficeName} }
        if($object.TelephoneNumber -ne $null){ Set-ADObject $localADObject -Add @{telephoneNumber=$object.TelephoneNumber} }
        if($object.FacsimileTelephoneNumber -ne $null){ Set-ADObject $localADObject -Add @{facsimileTelephoneNumber=$object.FacsimileTelephoneNumber} }
        if($object.Mobile -ne $null){ Set-ADObject $localADObject -Add @{mobile=$object.Mobile} }
        if($object.JobTitle -ne $null){ Set-ADObject $localADObject -Add @{title=$object.JobTitle} }
        if($object.Department -ne $null){ Set-ADObject $localADObject -Add @{department=$object.Department} }
        if($object.CompanyName -ne $null){ Set-ADObject $localADObject -Add @{company=$object.CompanyName} }
		if($object.Country -ne $null){ Set-ADObject $localADObject -Add @{c="BE"} }
		if($object.MailNickName -ne $null){ Set-ADObject $localADObject -Add @{MailNickName=$object.MailNickName} }
		if($object.PreferredLanguage -ne $null){ Set-ADObject $localADObject -Add @{PreferredLanguage=$object.PreferredLanguage} }
		Set-ADObject $localADObject -Add @{msExchPoliciesExcluded="{26491cfc-9e50-4857-861b-0cb8df22b5d7}"}
 
        #get proxy addresses
        if($object.ProxyAddresses -ne $null)
        {
            ForEach($proxyAddress in $object.ProxyAddresses)
            {
                Set-ADObject -identity $localADObject -Add @{ProxyAddresses=$proxyAddress}
            }
        }
     }
}
 
#Add users to local AD
if (!(Get-ADUser -Filter "userPrincipalName -like '$userUPN'"))
{
	Get-AzureADUser -ObjectId $userUPN | Add-LocalADObject
}
$user = Get-ADUser -Filter "userPrincipalName -like '$userUPN'"
$immutableID = convert-guid -guidtoconvert $user.objectguid
Set-MsolUser -UserPrincipalName $userUPN -ImmutableId $immutableID
$decodedII = [system.convert]::frombase64string($ImmutableId)
$decode = [GUID]$decodedii
$ImmutableId = $decode
$user | Set-ADUser -Replace @{'mS-DS-ConsistencyGuid'=$($ImmutableId)}

Disconnect-AzureAD
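
Assuming the script is saved as Import-AADUser.ps1 (the file name, OU and values below are placeholders), you run it once per user:

.\Import-AADUser.ps1 -usersContainer "OU=Staff,DC=contoso,DC=com" `
    -userUPN "jane.doe@contoso.com" `
    -userPassword 'SomeTemporaryP@ssw0rd'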

Categories: linux, Windows Server

Reverse proxy for Microsoft Exchange or RDS Gateway

When you have a bunch of applications all needing port 443 and a single public IP, you must use a reverse proxy.

For example, you have a Terminal Services broker and an Exchange server which both need to use port 443.

You can set up an HAProxy instance that will proxy the requests to the appropriate server. This is the configuration we use.
It uses SNI to find the requested hostname and directs you to the appropriate server.

######## Default values for all entries till next defaults section
defaults
option dontlognull # Do not log connections with no requests
option redispatch # Try another server in case of connection failure
option contstats # Enable continuous traffic statistics updates
retries 3 # Try to connect up to 3 times in case of failure
timeout connect 5s # 5 seconds max to connect or to stay in queue
timeout http-keep-alive 1s # 1 second max for the client to post next request
timeout http-request 15s # 15 seconds max for the client to send a request
timeout queue 30s # 30 seconds max queued on load balancer
timeout tarpit 1m # tarpit hold time
backlog 10000 # Size of SYN backlog queue

balance roundrobin #alctl: load balancing algorithm
mode tcp #alctl: protocol analyser
option tcplog #alctl: log format
log global #alctl: log activation
timeout client 10800s #alctl: client inactivity timeout
timeout server 10800s #alctl: server inactivity timeout
default-server inter 3s rise 2 fall 3 #alctl: default check parameters

global
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-server-options no-sslv3
ssl-default-server-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
tune.ssl.default-dh-param 2048
log         stdout format raw  local0  info
# turn on stats unix socket
stats socket /var/run/haproxy.stat

listen stats
mode http
log global
bind :9000

maxconn 10

timeout queue 100s

stats enable
stats hide-version
stats refresh 30s
stats show-node
stats auth admin:password
stats uri /haproxy?stats

frontend https-in
bind :::443 v4v6 alpn h2,http/1.1 ssl crt /usr/local/etc/haproxy/certs/
log global
option httplog
mode http
http-response set-header Strict-Transport-Security max-age=31540000

use_backend mail.domain.com if { ssl_fc_sni -i mail.domain.com }
use_backend mail.domain.com if { ssl_fc_sni -i autodiscover.domain.com }
use_backend rds.domain.com if { ssl_fc_sni -i rds.domain.com }
use_backend website.domain.com if { ssl_fc_sni -i website.domain.com }

default_backend mail.domain.com

backend mail.domain.com
mode http
server exchange exchange_server.local:443 ssl verify none maxconn 10000 check #alctl: server exchange configuration.

backend rds.domain.com
mode http
server rds rds_server.local:443 ssl verify none maxconn 10000 check #alctl: server rds configuration.

backend website.domain.com
mode http
server website website_server.local:443 ssl verify none maxconn 10000 check #alctl: server website configuration.
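
Before (re)loading HAProxy, it’s worth validating the configuration file (adjust the path to wherever yours lives):

haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg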

Categories: Docker, linux, Security

Traefik 2 and Nextcloud

I use Traefik and Nextcloud in Docker, and it took me a few tries to get to the point where Nextcloud would not complain about configuration issues.

Here are the Traefik labels I ended up putting on the Nextcloud container:

- "traefik.enable=true"
- "traefik.docker.network=webgateway"
- "traefik.http.routers.nextcloud.middlewares=nextcloud,nextcloud_redirect"
- "traefik.http.routers.nextcloud.rule=Host(`nextcloud.fqdn.com`)"
- "traefik.http.services.nextcloud.loadbalancer.server.port=80"
- "traefik.http.routers.nextcloud.entrypoints=websecure"
- "traefik.http.routers.nextcloud.tls.certresolver=mydnschallenge"
- "traefik.http.middlewares.nextcloud.headers.customFrameOptionsValue=ALLOW-FROM https://fqdn.com"
- "traefik.http.middlewares.nextcloud.headers.contentSecurityPolicy=frame-ancestors 'self' fqdn.com *.fqdn.com"
- "traefik.http.middlewares.nextcloud.headers.stsSeconds=155520011"
- "traefik.http.middlewares.nextcloud.headers.stsIncludeSubdomains=true"
- "traefik.http.middlewares.nextcloud.headers.stsPreload=true"
- "traefik.http.middlewares.nextcloud.headers.referrerPolicy=same-origin"
- "traefik.http.middlewares.nextcloud.headers.customFrameOptionsValue=SAMEORIGIN"
- "traefik.http.middlewares.nextcloud_redirect.redirectregex.regex=/.well-known/(card|cal)dav"
- "traefik.http.middlewares.nextcloud_redirect.redirectregex.replacement=/remote.php/dav/"

This answers a question that’s been asked a few times on this post: how to configure HSTS on Traefik 2.

Categories: Docker

Rexray S3fs docker plugin and Scaleway

I have been looking far and wide for a solution for Docker Swarm persistent volumes shared among multiple hosts, and nothing seemed to be easily deployable, reasonably fast and reliable.

GlusterFS is slow as hell on every piece of hardware I’ve tried to run it on, and Ceph seems very complicated to set up for a simple swarm.

Now I’ve begun experimenting with the Rexray S3FS Docker plugin and it seems quite interesting.

The only issue I’ve had is that I’m using Scaleway for hosting and I could not find a way to make it work with their S3 implementation.

I was missing a crucial piece of the puzzle: you need to specify the S3FS_REGION option, which is not mentioned in the Scaleway docs, so the command line looks like this:

docker plugin install --grant-all-permissions rexray/s3fs:latest \
S3FS_ACCESSKEY=XXXXXXXXXX \
S3FS_SECRETKEY=YYYYYYYYYYYYY \
S3FS_ENDPOINT=https://s3.fr-par.scw.cloud \
S3FS_REGION=fr-par \
S3FS_OPTIONS="allow_other,nonempty,use_path_request_style,url=https://s3.fr-par.scw.cloud"
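
Once the plugin is installed on every node, you just create volumes with that driver (the names here are examples); as far as I can tell each volume corresponds to a bucket on the S3 endpoint:

docker volume create --driver rexray/s3fs my-swarm-data
docker run --rm -v my-swarm-data:/data alpine ls /data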

Also, it seems that the plugin tries to delete existing buckets, so I’ve created a separate account for docker S3 volumes, just to be safe.

Categories: Docker, home assistant, home automation

Home assistant and docker swarm

I’ve been trying to get Home Assistant working on swarm for a few months now, but the thing that was preventing me from moving to swarm was Home Assistant’s requirement to use host networking on the container.
I had tried many things, but I finally got everything to work with the macvlan driver. I configured the Home Assistant service with two networks:

  • One macvlan network, giving it an IP address in my iot network
  • One overlay network, giving traefik access to homeassistant

The macvlan network is configured as follows. I had to create a local config-only macvlan network on each member of the swarm, each with its own IP range that swarm is free to assign addresses from:

1st node:
docker network create --config-only --subnet=192.168.2.0/24 --gateway=192.168.2.1 -o parent=eth0 --ip-range 192.168.2.232/29 macvlan_local
2nd node:
docker network create --config-only --subnet=192.168.2.0/24 --gateway=192.168.2.1 -o parent=eth0 --ip-range 192.168.2.240/29 macvlan_local
3rd node:
docker network create --config-only --subnet=192.168.2.0/24 --gateway=192.168.2.1 -o parent=eth0 --ip-range 192.168.2.248/29 macvlan_local

Then I create a swarm scoped network:

docker network create -d macvlan --scope swarm --config-from macvlan_local macvlan_swarm
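
You can verify that the swarm-scoped network was created and picked up the local config with:

docker network inspect macvlan_swarm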

Then I can deploy the whole stack; here is the docker-compose file:
(the wait-for-it config is a script that prevents Home Assistant from starting before MQTT; it’s not necessary but quite useful)

version: '3.7'

configs:
  wait-for-it:
    file: /srv/docker/homeassistant/wait-for-it/wait-for-it.sh

services:
  homeassistant:
    image: homeassistant/armhf-homeassistant:0.86.4
    configs:
      - source: wait-for-it
        target: /wait-for-it.sh
        mode: 0755
    networks:
      - webgateway
      - macvlan_swarm
    command: ["/wait-for-it.sh", "rasper:1883", "--", "python", "-m", "homeassistant", "--config", "/config"]
    volumes:
      - /srv/docker/homeassistant/config:/config
      - /etc/localtime:/etc/localtime:ro
    deploy:
      labels:
        - "traefik.backend=hassio"
        - "traefik.frontend.rule=Host:myhomeassistanthostname"
        - "traefik.port=8123"
        - "traefik.enable=true"
        - "traefik.docker.network=webgateway"
        - "traefik.default.protocol=http"

  mosquitto:
    image: eclipse-mosquitto
    volumes:
      - /srv/docker/mosquitto/config:/mosquitto/config
      - /etc/localtime:/etc/localtime:ro
      - /srv/docker/mosquitto/data/mosquitto-data:/mosquitto/data
      - /srv/docker/mosquitto/data/mosquitto-log:/mosquitto/log
    ports:
      - "1883:1883"
      - "9001:9001"

  nodered:
    image: nodered/node-red-docker:rpi-v8
    deploy:
      labels:
        - "traefik.backend=nodered"
        - "traefik.frontend.rule=Host:mynoderedhostname"
        - "traefik.port=1880"
        - "traefik.enable=true"
        - "traefik.docker.network=webgateway"
        - "traefik.default.protocol=http"
    networks:
      - webgateway
    volumes:
      - /srv/docker/nodered:/data
    environment:
      - TZ=Europe/Brussels


volumes:
  mosquitto-data:
  mosquitto-log:

networks:
  webgateway:
    external: true
  macvlan_swarm:
    external: true
  hostnet:
    external: true
    name: host
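
With the networks in place and the compose file saved, the stack deploys with a single command (the stack name is up to you):

docker stack deploy -c docker-compose.yml homeassistant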

Categories: home assistant, home automation

Home assistant and Unifi Mpower pro

I’ve had a Unifi mPower Pro for a while and I thought it was destined to die forgotten in a closet since UBNT has abandoned it… They are not improving the software anymore, it’s not really installable at this point, and there are massive security issues with the software controller…

But then I found out that someone had made an MQTT client for this power strip!

I’ve never really worked with MQTT before, but it’s actually quite easy.

First, I installed the software on the power strip after connecting it to my IoT-only wireless network (which doesn’t have internet access; after all, this hardware gets no software updates and I don’t want it ending up in some botnet).

Then I configured it like this:

mqtthost=<ip of my mqtt broker>
topic=homeassistant/mpower1
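
Before wiring anything into Home Assistant, it’s handy to watch what the strip actually publishes with mosquitto_sub (the broker address is a placeholder):

# dump every topic the power strip publishes under its base topic
mosquitto_sub -h 192.168.2.10 -t 'homeassistant/mpower1/#' -v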

The power strip publishes a lot of information about the state of the plugs to MQTT.
What I was interested in was the ability to turn the plugs on and off, and to use port1 to monitor our washing machine and alert me when it is finished (when the power usage drops below some value).

To do that, I defined some sensors to get the power usage of each port, a binary sensor that turns on when the power usage on port1 goes above a fixed value, and some switches to turn everything on or off.

So in my configuration.yaml, I have:

sensor:
  - platform: mqtt
    name: "mpower port1 power"
    unit_of_measurement: 'watts'
    state_topic: "homeassistant/mpower1/port1/power"
  - platform: mqtt
    name: "mpower port2 power"
    unit_of_measurement: 'watts'
    state_topic: "homeassistant/mpower1/port2/power"
  - platform: mqtt
    name: "mpower port3 power"
    unit_of_measurement: 'watts'
    state_topic: "homeassistant/mpower1/port3/power"
  - platform: mqtt
    name: "mpower port4 power"
    unit_of_measurement: 'watts'
    state_topic: "homeassistant/mpower1/port4/power"
  - platform: mqtt
    name: "mpower port5 power"
    unit_of_measurement: 'watts'
    state_topic: "homeassistant/mpower1/port5/power"
  - platform: mqtt
    name: "mpower port6 power"
    unit_of_measurement: 'watts'
    state_topic: "homeassistant/mpower1/port6/power"

binary_sensor:
  - platform: template
    sensors:
      washer_state:
        friendly_name: "washer state"
        value_template: >-
          {{ states('sensor.mpower_port1_power')|float > 2 }}

switch:
  - platform: mqtt
    state_topic: "homeassistant/mpower1/port1/relay"
    name: "mpower port1 switch"
    command_topic: "homeassistant/mpower1/port1/relay/set"
    payload_on: 1
    payload_off: 0
    availability_topic: 'homeassistant/mpower1/port1/lock/$settable'
    payload_available: 'true'
    payload_not_available: 'false'
  - platform: mqtt
    state_topic: "homeassistant/mpower1/port2/relay"
    name: "mpower port2 switch"
    command_topic: "homeassistant/mpower1/port2/relay/set"
    payload_on: 1
    payload_off: 0
    availability_topic: 'homeassistant/mpower1/port2/lock/$settable'
    payload_available: 'true'
    payload_not_available: 'false'
  - platform: mqtt
    state_topic: "homeassistant/mpower1/port3/relay"
    name: "mpower port3 switch"
    command_topic: "homeassistant/mpower1/port3/relay/set"
    payload_on: 1
    payload_off: 0
    availability_topic: 'homeassistant/mpower1/port3/lock/$settable'
    payload_available: 'true'
    payload_not_available: 'false'
  - platform: mqtt
    state_topic: "homeassistant/mpower1/port4/relay"
    name: "mpower port4 switch"
    command_topic: "homeassistant/mpower1/port4/relay/set"
    payload_on: 1
    payload_off: 0
    availability_topic: 'homeassistant/mpower1/port4/lock/$settable'
    payload_available: 'true'
    payload_not_available: 'false'
  - platform: mqtt
    state_topic: "homeassistant/mpower1/port5/relay"
    name: "mpower port5 switch"
    command_topic: "homeassistant/mpower1/port5/relay/set"
    payload_on: 1
    payload_off: 0
    availability_topic: 'homeassistant/mpower1/port5/lock/$settable'
    payload_available: 'true'
    payload_not_available: 'false'
  - platform: mqtt
    state_topic: "homeassistant/mpower1/port6/relay"
    name: "mpower port6 switch"
    command_topic: "homeassistant/mpower1/port6/relay/set"
    payload_on: 1
    payload_off: 0
    availability_topic: 'homeassistant/mpower1/port6/lock/$settable'
    payload_available: 'true'
    payload_not_available: 'false'

And in my automations.yaml I have:

- id: '1525140123407'
  alias: alert if washer stops
  trigger:
    - platform: state
      entity_id:
        - binary_sensor.washer_state
      from: 'on'
      to: 'off'
  action:
    service: telegram_bot.send_message
    data:
      message: 'washing over'
      target: 123456789

- id: '1525140123408'
  alias: alert if washing starts
  trigger:
    - platform: state
      entity_id:
        - binary_sensor.washer_state
      from: 'off'
      to: 'on'
  action:
    service: telegram_bot.send_message
    data:
      message: 'washing starts'
      target: 123456789