Dynamic Multipoint VPN (DMVPN)
This material is work-in-progress ... Do not follow instructions here until this notice is removed.
The Alpine Linux about page (http://alpinelinux.org/about), under "Why the Name Alpine?", states:
The first open-source implementation of Cisco's DMVPN, called OpenNHRP, was written for Alpine Linux.
The aim of this document is therefore to be the reference Linux DMVPN setup, including all the networking services needed by the clients that will use the DMVPN (DNS, DHCP, firewall, etc.).
Terminology
NBMA: Non-Broadcast Multi-Access network as described in RFC 2332
Hub: the Next Hop Server (NHS) performing the Next Hop Resolution Protocol service within the NBMA cloud.
Spoke: the Next Hop Resolution Protocol Client (NHC) which initiates NHRP requests of various types in order to obtain access to the NHRP service.
Spoke Node
A local spoke node network supports multiple ISP connections along with redundant layer 2 switches. At least one 802.1q-capable switch is required; a second is optional for redundancy. (Diagram of the typical spoke node network.)
Alpine Setup
We will set up the network interfaces as follows:
bond0.3 = Management (not implemented below yet)
bond0.8 = LAN
bond0.64 = DMZ
bond0.80 = Voice (not implemented below yet)
bond0.96 = Internet Access Only (no access to the DMVPN network)(not implemented below yet)
bond0.256 = ISP1
bond0.257 = ISP2
Boot Alpine in diskless mode and run setup-alpine
You will be prompted something like this... | Suggestion on what you could enter... |
---|---|
Select keyboard layout [none]: | Type an appropriate layout for you |
Select variant: | Type an appropriate layout for you (if prompted) |
Enter system hostname (short form, e.g. 'foo') [localhost]: | Enter the hostname, e.g. vpnc |
Available interfaces are: eth0 | Enter bond0.8 |
Available bond slaves are: eth0 eth1 | eth0 eth1 |
IP address for bond0? (or 'dhcp', 'none', '?') [dhcp]: | Type 'none' (the bond itself carries no address; only its VLAN subinterfaces do) |
IP address for bond0.8? (or 'dhcp', 'none', '?') [dhcp]: | Enter the IP address of your LAN interface, e.g. 10.1.0.1 |
Netmask? [255.255.255.0]: | Press Enter confirming '255.255.255.0' or type another appropriate subnet mask |
Gateway? (or 'none') [none]: | Press Enter confirming 'none' |
Do you want to do any manual network configuration? [no] | yes |
(an editor opens /etc/network/interfaces) | Make a copy of the bond0.8 configuration for the bond0.64, bond0.256 and bond0.257 (optional) interfaces. Don't forget to add a gateway and a metric value for the ISP interfaces when multiple gateways are set; see the sample /etc/network/interfaces below this table. Save and close the file (:wq) |
DNS domain name? (e.g. 'bar.com') []: | Enter the domain name of your intranet, e.g. example.net |
DNS nameserver(s)? []: | 8.8.8.8 8.8.4.4 (we will change them later) |
Changing password for root | Enter a secure password for the console |
Retype password: | Retype the above password |
Which timezone are you in? ('?' for list) [UTC]: | Press Enter confirming 'UTC' |
HTTP/FTP proxy URL? (e.g. 'http://proxy:8080', or 'none') [none] | Press Enter confirming 'none' |
Enter mirror number (1-9) or URL to add (or r/f/e/done) [f]: | Select a mirror close to you and press Enter |
Which SSH server? ('openssh', 'dropbear' or 'none') [openssh]: | Press Enter confirming 'openssh' |
Which NTP client to run? ('openntpd', 'chrony' or 'none') [chrony]: | Press Enter confirming 'chrony' |
Which disk(s) would you like to use? (or '?' for help or 'none') [none]: | Press Enter confirming 'none' |
Enter where to store configs ('floppy', 'usb' or 'none') [usb]: | Press Enter confirming 'usb' |
Enter apk cache directory (or '?' or 'none') [/media/usb/cache]: | Press Enter confirming '/media/usb/cache' |
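For reference, the resulting /etc/network/interfaces on a spoke might look roughly like the sketch below. All addresses, netmasks, gateways and metrics are examples only; adjust them to your LAN, DMZ and ISP assignments (bond0.3, bond0.80 and bond0.96 are omitted since they are not implemented below). The bond-slaves option is the usual ifupdown way to declare the bonded NICs; verify the exact option name against your Alpine release's bonding documentation.
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
	# eth0 and eth1 enslaved to bond0 (no address on the bond itself)
	bond-slaves eth0 eth1

auto bond0.8
iface bond0.8 inet static
	# LAN
	address 10.1.0.1
	netmask 255.255.255.0

auto bond0.64
iface bond0.64 inet static
	# DMZ (example address)
	address 10.1.64.1
	netmask 255.255.255.0

auto bond0.256
iface bond0.256 inet static
	# ISP1 (example address), with gateway and metric as noted in the table above
	address 198.51.100.2
	netmask 255.255.255.248
	gateway 198.51.100.1
	metric 1

auto bond0.257
iface bond0.257 inet static
	# ISP2 (example address)
	address 203.0.113.2
	netmask 255.255.255.248
	gateway 203.0.113.1
	metric 2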
Bonding
Update the bonding configuration:
echo bonding mode=balance-tlb miimon=100 updelay=500 >> /etc/modules
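Once the bond is up you can confirm that the bonding driver loaded with the requested mode and that both slaves are healthy; the kernel exposes this under /proc/net/bonding:
# show bonding mode, MII status and slave state
cat /proc/net/bonding/bond0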
Recursive DNS
apk add unbound
With your favorite editor, open /etc/unbound/unbound.conf and add the following configuration. The stub-zone stanzas are only needed if you have domains that should resolve only inside your network:
server:
	verbosity: 1
	interface: 10.1.0.1
	do-ip4: yes
	do-ip6: no
	do-udp: yes
	do-tcp: yes
	do-daemonize: yes
	access-control: 10.1.0.0/16 allow
	access-control: 127.0.0.0/8 allow
	do-not-query-localhost: no
	root-hints: "/etc/unbound/named.cache"

forward-zone:
	name: "location1.example.net"
	forward-addr: 10.1.0.2

stub-zone:
	name: "example.net"
	stub-addr: 10.1.0.2

stub-zone:
	name: "example2.net"
	stub-addr: 10.1.0.2

python:

remote-control:
	control-enable: no
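Before starting the daemon you can check the file for syntax errors with unbound-checkconf, which is installed with unbound:
unbound-checkconf /etc/unbound/unbound.conf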
Fetch the latest copy of root hints:
wget http://ftp.internic.net/domain/named.cache -O /etc/unbound/named.cache
/etc/init.d/unbound start
rc-update add unbound
echo nameserver 10.1.0.1 > /etc/resolv.conf
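A quick sanity check that recursion works through the new resolver (BusyBox provides nslookup, so no extra package is needed):
nslookup alpinelinux.org 10.1.0.1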
Local DNS Zone
If you have a DNS zone that is resolvable only inside your network, you will need a second IP address on the LAN interface and can use NSD to host the zone.
First, add the following to the end of the bond0.8 stanza in /etc/network/interfaces:
up ip addr add 10.1.0.2/24 dev bond0.8
Then, install nsd:
apk add nsd
Create /etc/nsd/nsd.conf:
server:
	ip-address: 10.1.0.2
	port: 53
	server-count: 1
	ip4-only: yes
	hide-version: yes
	identity: ""
	zonesdir: "/etc/nsd"

zone:
	name: location1.example.net
	zonefile: location1.example.net.zone
Create zonefile in /etc/nsd/location1.example.net.zone:
;## location1.example.net authoritative zone
$ORIGIN location1.example.net.
$TTL 86400
@	IN	SOA	ns1.location1.example.net. webmaster.location1.example.net. (
			2013081901	; serial
			28800		; refresh
			7200		; retry
			86400		; expire
			86400		; min TTL
			)
	NS	ns1.location1.example.net.
	MX	10 mail.location1.example.net.
ns1	IN	A	10.1.0.2
mail	IN	A	10.1.0.4
Check configuration then start:
nsd-checkconf /etc/nsd/nsd.conf
nsdc rebuild
/etc/init.d/nsd start
rc-update add nsd
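Verify that the authoritative zone answers on the second address (again using BusyBox nslookup):
nslookup ns1.location1.example.net 10.1.0.2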
GRE Tunnel
With your favorite editor, open /etc/network/interfaces and add the following:
auto gre1
iface gre1 inet static
	pre-up ip tunnel add $IFACE mode gre ttl 64 tos inherit key 12.34.56.78 || true
	address 172.16.1.1
	netmask 255.255.0.0
	post-down ip tunnel del $IFACE || true
Save and close the file.
ifup gre1
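You can confirm the multipoint tunnel parameters and address before moving on:
ip tunnel show gre1
ip addr show dev gre1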
IPsec
apk add ipsec-tools
With your favorite editor, open /etc/ipsec.conf and change the content to the following:
spdflush;
spdadd 0.0.0.0/0 0.0.0.0/0 gre -P out ipsec esp/transport//require;
spdadd 0.0.0.0/0 0.0.0.0/0 gre -P in ipsec esp/transport//require;
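These policies are loaded into the kernel SPD with setkey from ipsec-tools. If your init scripts do not load /etc/ipsec.conf automatically, you can load and inspect it manually:
setkey -f /etc/ipsec.conf
setkey -DP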
With your favorite editor, open /etc/racoon/racoon.conf and change the content to the following:
remote anonymous {
	exchange_mode main;
	lifetime time 2 hour;
	certificate_type x509 "/etc/racoon/cert.pem" "/etc/racoon/key.pem";
	ca_type x509 "/etc/racoon/ca.pem";
	my_identifier asn1dn;
	nat_traversal on;
	script "/etc/opennhrp/racoon-ph1dead.sh" phase1_dead;
	dpd_delay 120;

	proposal {
		encryption_algorithm aes 256;
		hash_algorithm sha1;
		authentication_method rsasig;
		dh_group modp4096;
	}
	proposal {
		encryption_algorithm aes 256;
		hash_algorithm sha1;
		authentication_method rsasig;
		dh_group 2;
	}
}

sainfo anonymous {
	pfs_group 2;
	lifetime time 2 hour;
	encryption_algorithm aes 256;
	authentication_algorithm hmac_sha1;
	compression_algorithm deflate;
}
Save and close the file.
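The configuration references a certificate, key and CA under /etc/racoon/ that this document does not create. Below is a minimal sketch using openssl, assuming a simple private CA whose key (ca.key) is kept somewhere safe and used to sign each hub and spoke certificate; the file names match the paths in racoon.conf above, and the subject names are examples:
# create the private CA (keep ca.key offline)
openssl req -new -x509 -days 3650 -nodes -subj "/CN=DMVPN CA" -keyout ca.key -out /etc/racoon/ca.pem
# create the spoke key and certificate request
openssl req -new -nodes -subj "/CN=vpnc.example.net" -keyout /etc/racoon/key.pem -out vpnc.csr
# sign the request with the CA
openssl x509 -req -days 1825 -in vpnc.csr -CA /etc/racoon/ca.pem -CAkey ca.key -CAcreateserial -out /etc/racoon/cert.pem
chmod 600 /etc/racoon/key.pem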
/etc/init.d/racoon start
Next Hop Resolution Protocol (NHRP)
apk add opennhrp
With your favorite editor, open /etc/opennhrp/opennhrp.conf and change the content to the following:
interface gre1
	dynamic-map 172.16.0.0/16 hub.example.com
	shortcut
	redirect
	non-caching

interface bond0.8
	shortcut-destination

interface bond0.64
	shortcut-destination
With your favorite editor, open /etc/opennhrp/opennhrp-script and change the content to the following:
#!/bin/sh

MYAS=65001

case $1 in
interface-up)
	echo "Interface $NHRP_INTERFACE is up"
	if [ "$NHRP_INTERFACE" = "gre1" ]; then
		ip route flush proto 42 dev $NHRP_INTERFACE
		ip neigh flush dev $NHRP_INTERFACE
		vtysh -d bgpd \
			-c "configure terminal" \
			-c "router bgp $MYAS" \
			-c "no neighbor core" \
			-c "neighbor core peer-group"
	fi
	;;
peer-register)
	;;
peer-up)
	if [ -n "$NHRP_DESTMTU" ]; then
		ARGS=`ip route get $NHRP_DESTNBMA from $NHRP_SRCNBMA | head -1`
		ip route add $ARGS proto 42 mtu $NHRP_DESTMTU
	fi
	echo "Create link from $NHRP_SRCADDR ($NHRP_SRCNBMA) to $NHRP_DESTADDR ($NHRP_DESTNBMA)"
	racoonctl establish-sa -w isakmp inet $NHRP_SRCNBMA $NHRP_DESTNBMA || exit 1
	racoonctl establish-sa -w esp inet $NHRP_SRCNBMA $NHRP_DESTNBMA gre || exit 1
	;;
peer-down)
	echo "Delete link from $NHRP_SRCADDR ($NHRP_SRCNBMA) to $NHRP_DESTADDR ($NHRP_DESTNBMA)"
	racoonctl delete-sa isakmp inet $NHRP_SRCNBMA $NHRP_DESTNBMA
	ip route del $NHRP_DESTNBMA src $NHRP_SRCNBMA proto 42
	;;
nhs-up)
	echo "NHS UP $NHRP_DESTADDR"
	(
		flock -x 200
		vtysh -d bgpd \
			-c "configure terminal" \
			-c "router bgp $MYAS" \
			-c "neighbor $NHRP_DESTADDR remote-as 65000" \
			-c "neighbor $NHRP_DESTADDR peer-group core" \
			-c "exit" \
			-c "exit" \
			-c "clear bgp $NHRP_DESTADDR"
	) 200>/var/lock/opennhrp-script.lock
	;;
nhs-down)
	(
		flock -x 200
		vtysh -d bgpd \
			-c "configure terminal" \
			-c "router bgp $MYAS" \
			-c "no neighbor $NHRP_DESTADDR"
	) 200>/var/lock/opennhrp-script.lock
	;;
route-up)
	echo "Route $NHRP_DESTADDR/$NHRP_DESTPREFIX is up"
	ip route replace $NHRP_DESTADDR/$NHRP_DESTPREFIX proto 42 via $NHRP_NEXTHOP dev $NHRP_INTERFACE
	ip route flush cache
	;;
route-down)
	echo "Route $NHRP_DESTADDR/$NHRP_DESTPREFIX is down"
	ip route del $NHRP_DESTADDR/$NHRP_DESTPREFIX proto 42
	ip route flush cache
	;;
esac

exit 0
Save and close the file. Make it executable:
chmod +x /etc/opennhrp/opennhrp-script
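The script is only ever invoked by OpenNHRP, so a quick shell syntax check before relying on it is cheap insurance:
sh -n /etc/opennhrp/opennhrp-script && echo "syntax OK"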
BGP
apk add quagga
touch /etc/quagga/zebra.conf
With your favorite editor, open /etc/quagga/bgpd.conf and change the content to the following:
password strongpassword
enable password strongpassword
log syslog
access-list 1 remark Command line access authorized IP
access-list 1 permit 127.0.0.1
line vty
 access-class 1
hostname vpnc.example.net
router bgp 65001
 bgp router-id 172.16.1.1
 network 10.1.0.0/16
 neighbor %HUB_GRE_IP% remote-as 65000
 neighbor %HUB_GRE_IP% remote-as 65000
 ...
Add a neighbor %HUB_GRE_IP% remote-as 65000 line for each hub host you have in your NBMA cloud, replacing %HUB_GRE_IP% with that hub's GRE tunnel address.
Save and close the file.
/etc/init.d/bgpd start
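With racoon, the NHRP configuration and bgpd in place, OpenNHRP itself can be started (assuming the opennhrp package installs an OpenRC service of the same name) and the result inspected with the tools already used above:
/etc/init.d/opennhrp start
rc-update add opennhrp
# peer cache as seen by OpenNHRP
opennhrpctl show
# BGP sessions towards the hubs
vtysh -d bgpd -c "show ip bgp summary"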
Firewall
apk add awall
With your favorite editor, edit the following files and set their contents as follows:
/etc/awall/optional/params.json
{ "description": "params", "variable": { "B_IF" = "bond0.8", "C_IF" = "bond0.64", "E_IF" = "bond0.256", "E_IF2" = "bond0.257" } }
/etc/awall/optional/internet-host.json
{ "description": "Internet host", "import": "params", "zone": { "E": { "iface": "$E_IF" }, "E2": { "iface": "$E_IF2" } }, "filter": [ { "in": [ "E", "E2 ], "service": "ping", "action": "accept", "flow-limit": { "count": 10, "interval": 6 } }, { "in": [ "E", "E2" ], "out": "_fw", "service": "ssh", "action": "accept", "conn-limit": { "count": 3, "interval": 60 } }, { "in": "_fw", "out": [ "E", "E2" ], "service": [ "dns", "http", "ntp" ], "action": "accept" }, { "in": "_fw", "service": [ "ping", "ssh" ], "action": "accept" } ] }
/etc/awall/optional/mark.json
{ "description": "Mark traffic based on ISP", "import": [ "params", "internet-host" ], "route-track": [ { "in": "E", "mark": 1 }, { "in": "E2", "mark": 2 } ] }
/etc/awall/optional/dmvpn.json
{ "description": "DMVPN router", "import": "internet-host", "variable": { "A_ADDR": [ "10.0.0.0/8", "172.16.0.0/16" ], "A_IF": "gre1" }, "zone": { "A": { "addr": "$A_ADDR", "iface": "$A_IF" } }, "filter": [ { "in": [ "E", "E2" ], "out": "_fw", "service": "ipsec", "action": "accept" }, { "in": "_fw", "out": [ "E", "E2" ], "service": "ipsec", "action": "accept" }, { "in": [ "E", "E2" ], "out": "_fw", "ipsec": "in", "service": "gre", "action": "accept" }, { "in": "_fw", "out": [ "E", "E2" ], "ipsec": "out", "service": "gre", "action": "accept" }, { "in": "_fw", "out": "A", "service": "bgp", "action": "accept" }, { "in": "A", "out": "_fw", "service": "bgp", "action": "accept"}, { "out": [ "E", "E2" ], "dest": "$A_ADDR", "action": "reject" } ] }
/etc/awall/optional/vpnc.json
{ "description": "VPNc", "import": [ "params", "internet-host", "dmvpn" ], "zone": { "B": { "iface": "$B_IF" }, "C": { "iface": "$C_IF" } }, "policy": [ { "in": "A", "action": "accept" }, { "in": "B", "out": "A", "action": "accept" }, { "in": "C", "out": [ "A", "E" ], "action": "accept" }, { "in": [ "E", "E2" ], "action": "drop" } { "in": "_fw", "out": "A", "action": "accept" } ], "snat": [ { "out": [ "E", "E2" ] } ], "filter": [ { "in": "A", "out": "_fw", "service": [ "ping", "ssh", "http", "https" ], "action": "accept" }, { "in": [ "B", "C" ], "out": "_fw", "service": [ "dns", "ntp", "http", "https", "ssh" ], "action": "accept" }, { "in": "_fw", "out": [ "B", "C" ], "service": [ "dns", "ntp" ], "action": "accept" }, { "in": [ "A", "B", "C" ], "out": "_fw", "proto": "icmp", "action": "accept" } ] }
ISP Failover
apk add pingu
Configure pingu to monitor the bond0.256 and bond0.257 interfaces in /etc/pingu/pingu.conf. Add the hosts to monitor for ISP failover, bind them to the primary ISP, and set the ping timeout to 4 seconds:
timeout 4
required 2
retry 11

interface bond0.256 {
	# route-table must correspond with mark in /etc/awall/optional/mark.json
	route-table 1
	# the rule-priority must be a higher number than the priority in /etc/shorewall/route_rules <-- FIXME
	rule-priority 20000
}

interface bond0.257 {
	# route-table must correspond with mark in /etc/awall/optional/mark.json
	route-table 2
	rule-priority 20000
}

# ping google dns via ISP1
host 8.8.8.8 {
	interval 60
	bind-interface bond0.256
}

# ping opendns via ISP1
host 208.67.222.222 {
	interval 60
	bind-interface bond0.256
}
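Start pingu and enable it at boot (assuming the pingu package installs an OpenRC service named pingu):
/etc/init.d/pingu start
rc-update add pingu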
Now, if both hosts stop responding to pings, ISP1 will be considered down and all gateways via bond0.256 will be removed from the main route table. Note that the gateway will not be removed from route table '1'; this is so we can keep pinging via bond0.256 and detect when the ISP is back online. When the ISP starts working again, the gateways are added back to the main route table.