- Log into the UniFi Controller web UI
- Go to Settings
- Select Routing & Firewall
- Select Firewall
- Select Groups
- Hit "Create new Group"
- Enter all the DNS servers you want to be allowed on the local LAN (e.g., mine are 10.0.1.1 - gateway, 10.0.1.14 - pi-hole)
- Name this "Allowed DNS Servers"
- Hit OK
- SSH into the Gateway - NOT the CloudKey (username/password is whatever you set up)
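For example (the gateway address and username here are just placeholders - use whatever you configured):

ssh admin@10.0.1.1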
- Do this: 'mca-ctrl -t dump-cfg > config.txt'
- Edit the new file: 'vi config.txt'
- Look for the entry with this description field:
"description": "customized-Allowed DNS Servers"
- Write down / copy aside the key associated with it (mine is: 5d50c3764fd01c0ad01a6938). This is the Group ID for your group.
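If you'd rather not hunt through the dump in vi, grep can surface it; the Group ID usually sits a line or two above the description, so something like this should work (adjust the -B offset to taste):

grep -B 3 'customized-Allowed DNS Servers' config.txt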
- Now you need your 'interfaces' - meaning all your VLANs and such.
- The way to find your interfaces is to SSH into the gateway and issue:
show interfaces
Output:
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface    IP Address        S/L  Description
---------    ----------        ---  -----------
eth0         XX.X.XXX.XXX/22   u/u  WAN
eth1         10.0.1.1/24       u/u  LAN
eth1.2       10.0.2.1/24       u/u
eth1.80      10.0.80.1/24      u/u
eth1.90      10.0.90.1/24      u/u
eth1.100     10.0.100.1/24     u/u
eth2         -                 A/D
eth3         -                 A/D
eth4         -                 A/D
eth5         -                 u/D
eth6         -                 u/D
eth7         -                 u/D
eth8         -                 u/D
lo           127.0.0.1/8       u/u
             ::1/128
- Note down eth1, eth1.2, ... eth1.100 - each active VLAN interface you care about doing this to (all of them?)
- Either open up your config.json on the CloudKey or learn how to edit/make one here: https://help.ubnt.com/hc/en-us/articles/215458888-UniFi-USG-Advanced-Configuration
- Copy this template for each of your VLANs/interfaces above into the nat/rule section (note: as several commenters found below, the nat block must live inside a top-level "service" node in config.json)
{
  "nat": {
    "rule": {
      "1": {
        "description": "Redirect DNS requests",
        "destination": {
          "group": {
            "address-group": "!YOUR_GROUP_ID_FOR_DNS_SERVERS_HERE"
          },
          "port": "53"
        },
        "inbound-interface": "YOUR_UNIX_INTERFACE_HERE (eg: eth1 or eth1.90)",
        "inside-address": {
          "address": "YOUR_IP_FOR_DNS_SERVER_HERE (eg: 10.0.1.14)"
        },
        "log": "enable",
        "protocol": "tcp_udp",
        "type": "destination"
      },
      "5001": {
        "description": "Translate DNS to Internal",
        "destination": {
          "address": "YOUR_IP_FOR_DNS_SERVER_HERE (eg: 10.0.1.14)",
          "port": "53"
        },
        "log": "disable",
        "outbound-interface": "YOUR_UNIX_INTERFACE_HERE (eg: eth1 or eth1.90)",
        "protocol": "tcp_udp",
        "type": "masquerade"
      }
    }
  }
}
- Validate the JSON using the tool of your choice
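One quick option that catches trailing commas and similar slip-ups is Python's built-in validator:

python3 -m json.tool config.json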
- Go back to the UniFi Controller web app
- Go to the Devices tab
- Select your USG
- Hit Settings on it
- Scroll down and find "Force Provision"
- Pray and Profit
- A great way to verify this is: 'dig @1.1.1.1 redis.siliconspirit.net', where the address I'm looking up doesn't exist in public DNS (just my local DNS)
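If the redirect is working, that query never actually reaches 1.1.1.1 - your local DNS server answers it. Illustrative output (the answer address here is made up):

$ dig @1.1.1.1 redis.siliconspirit.net +short
10.0.1.27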
The same config should work if you update the port from 53 (DNS) to whatever Ooma's port is! If Ooma uses multiple ports, you should be able to (I think) just remove the port entry, and it becomes a full redirect.
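For what it's worth, a sketch of that "no port" variant - same rule shape as the template with the destination port omitted, all IDs and addresses placeholders, untested:

"2": {
  "description": "Redirect all traffic for the group",
  "destination": {
    "group": {
      "address-group": "!YOUR_GROUP_ID_HERE"
    }
  },
  "inbound-interface": "eth1",
  "inside-address": {
    "address": "YOUR_INTERNAL_TARGET_IP"
  },
  "protocol": "tcp_udp",
  "type": "destination"
}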
@terafin thanks, I was looking for a good explanation of how to get this done with limited networking knowledge.
A quick question about copying the template for each VLAN: I have several VLANs - do I need to copy the whole template as you posted it and change the VLAN entry accordingly? Do I need to change the rule number from 1 to 2 or anything?
SOLVED ERROR:
It seems a NAT rule has to live inside a service node, so adding "service": { at the top and a closing } all the way at the end fixes this error:
{
  "service": {
    "nat": {
      "rule": {
        [...]
}
Testing DNS with nslookup shows it's working! Thanks.
Does anyone know why I'm getting this error with the JSON:
{ "nat":{ "rule":{ "1":{ "description":"Redirect DNS requests", "destination":{ "group":{ "address-group":"!61d406dfbxxxxxx90907b16" }, "port":"53" }, "inbound-interface":"eth1.30", "inside-address":{ "address":"192.168.1.x" }, "log":"enable", "protocol":"tcp_udp", "type":"destination" }, "5001":{ "description":"Translate DNS to Internal", "destination":{ "address":"192.168.1.x", "port":"53" }, "log":"disable", "outbound-interface":"eth1.30", "protocol":"tcp_udp", "type":"masquerade" } } } }
Error:
mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 description Redirect DNS requests: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 5001 log disable: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 inside-address address 192.168.1.x: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 5001 destination address 192.168.1.x: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 log enable: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 5001 description Translate DNS to Internal: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 5001 outbound-interface eth1.30: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 destination group address-group !61d406dfbxxxxxx90907b16: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 protocol tcp_udp: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 type destination: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 destination port 53: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 1 inbound-interface eth1.30: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 5001 destination port 53: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 5001 type masquerade: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [seterr] nat rule 5001 protocol tcp_udp: The specified configuration node is not valid
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [delete] failure: 0 success: 1
Jan 4 15:05:19 USG mcad: mcad[3884]: mca-edgemax._edgemax_parse_set_commit_save_results(): [set] failure: 1 success: 1
Jan 4 15:05:20 USG mcad: mcad[3884]: ace_reporter.reporter_handle_response(): edgemax apply config failed (error code: 2)
Thanks for this write-up! I am successfully running this now against my AdGuard server(s). Since I run more than one AdGuard server on my network for redundancy, is there a way to have it send queries to a group instead of just one DNS IP?
AFAIK, you can add two addresses if you separate them with a dash.
@mostlychris did you find a way to forward to a group/list of servers rather than just one?
@korkmazk I used your idea to make it work, but how do I check it with nslookup? What would be the right response? I want to force all traffic through my DNS servers even if clients have them hardcoded.
{
  "service": {
    "nat": {
      "rule": {
        "1": {
          "description": "Redirect DNS requests",
          "destination": {
            "group": {
              "address-group": "!62fbdc0cb2b1xxxxx576e1c"
            },
            "port": "53"
          },
          "inbound-interface": "eth1",
          "inside-address": {
            "address": "10.0.1.5;10.0.1.6;10.0.1.10"
          },
          "log": "enable",
          "protocol": "tcp_udp",
          "type": "destination"
        },
        "5001": {
          "description": "Translate DNS to Internal",
          "destination": {
            "address": "10.0.1.5;10.0.1.6;10.0.1.10",
            "port": "53"
          },
          "log": "disable",
          "outbound-interface": "eth1",
          "protocol": "tcp_udp",
          "type": "masquerade"
        }
      }
    }
  }
}
I have syslog shipped to my Synology, so I saw a hit on the rule from a Google Home device. Then I checked with nslookup, forcing it to use 8.8.8.8 to resolve an internal address - and it resolved, so I knew the redirect was working and my Pi-hole (where that internal address is defined) was answering. Otherwise the internal address would not have resolved, since 8.8.8.8 doesn't know the internal IP of that host.
So an nslookup of an internal address should resolve to an internal IP even while you force nslookup to use an external DNS server.
I'm no network expert; there might be an easier way, but this worked for me.
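In other words, something like this (hostname and answer address are hypothetical) should return your private record even though the query was "sent" to Google:

$ nslookup internal-host.lan 8.8.8.8
Server:   8.8.8.8
Address:  8.8.8.8#53

Name:     internal-host.lan
Address:  10.0.1.50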
I had to disable my config because Pi-hole started answering with 'refused' - my UniFi was spamming the Pi-hole to bits...
Under customized-Allowed DNS Servers I added my 3 Pi-hole IP addresses (not the gateway).
This is my config:
{
  "service": {
    "nat": {
      "rule": {
        "1": {
          "description": "Redirect DNS requests",
          "destination": {
            "group": {
              "address-group": "!62fbdc0cb2b13c0007576e1c"
            },
            "port": "53"
          },
          "inbound-interface": "eth1",
          "inside-address": {
            "address": "10.0.1.10"
          },
          "log": "enable",
          "protocol": "tcp_udp",
          "type": "destination"
        },
        "5001": {
          "description": "Translate DNS to Internal",
          "destination": {
            "address": "10.0.1.10",
            "port": "53"
          },
          "log": "disable",
          "outbound-interface": "eth1",
          "protocol": "tcp_udp",
          "type": "masquerade"
        }
      }
    }
  }
}
Ideas?
What might help: go to Pi-hole Settings > DNS > Interface settings and check the "Permit all origins" option.
Make sure you read the warning!
The NAT masquerade rule is unnecessary. To prevent redirecting your DNS server's own requests back to itself, you can simply add a "source" exclusion on the rule for the subnet the DNS server lives on (you also don't need an address group). Note that config.json doesn't accept comments; they're only here for explanation.
{
"service": {
"nat": {
"rule": {
"1": {
"description": "Redirect DNS requests",
"destination": {
"port": "53"
},
"source": {
// Don't send DNS traffic back to the server
"address": "!YOUR_IP_FOR_DNS_SERVER_HERE/32"
},
"inbound-interface": "eth1",
"inside-address": {
"address": "YOUR_IP_FOR_DNS_SERVER_HERE",
"port": "53"
},
"log": "enable",
"protocol": "tcp_udp",
"type": "destination"
},
// Other subnets do not need the source rule
"2": {
"description": "Redirect DNS requests",
"destination": {
"port": "53"
},
"inbound-interface": "eth1.2",
"inside-address": {
"address": "YOUR_IP_FOR_DNS_SERVER_HERE",
"port": "53"
},
"log": "enable",
"protocol": "tcp_udp",
"type": "destination"
}
}
}
}
}
Can a port group be used to capture anything that's not going out on port 53?
@terafin Any tips for a config.json that redirects private-IP packets destined for 208.83.244.20 to 208.83.246.20 instead, and performs the reverse translation when the replies come back?
I'm trying to DNAT 208.83.244.20 to 208.83.246.20 - a hardcoded IP in my Ooma that I need to redirect to the new Ooma server.
"5009": {
  "description": "Ooma forward",
  "destination": {
    "address": "208.83.244.20"
  },
  "inbound-interface": "eth1",
  "inside-address": {
    "address": "208.83.246.20"
  },
  "protocol": "tcp",
  "type": "destination"
},