Dynamic Multipoint VPN (DMVPN)
This material is work-in-progress ... Do not follow instructions here until this notice is removed.
http://alpinelinux.org/about under Why the Name Alpine? states:
The first open-source implementation of Cisco's DMVPN, called OpenNHRP, was written for Alpine Linux.
The aim of this document is therefore to be the reference DMVPN setup, including all the networking services needed by the clients that will use the DMVPN (DNS, DHCP, firewall, etc.).
Terminology
NBMA: Non-Broadcast Multi-Access network as described in RFC 2332
Hub: the Next Hop Server (NHS) performing the Next Hop Resolution Protocol service within the NBMA cloud.
Spoke: the Next Hop Resolution Protocol Client (NHC) which initiates NHRP requests of various types in order to obtain access to the NHRP service.
Spoke Node
Alpine Setup
We will set up the network interfaces as follows:
bond0.1 = LAN
bond0.2 = DMZ
bond0.10 = ISP1
bond0.11 = ISP2
Boot Alpine in diskless mode and run setup-alpine
You will be prompted something like this... | Suggestion on what you could enter...
---|---
Select keyboard layout [none]: | Type an appropriate layout for you
Select variant: | Type an appropriate layout for you (if prompted)
Enter system hostname (short form, e.g. 'foo') [localhost]: | Enter the hostname, e.g. vpnc
Available interfaces are: eth0 | Enter bond0.1
Available bond slaves are: eth0 eth1 | eth0 eth1
IP address for bond0? (or 'dhcp', 'none', '?') [dhcp]: | Type 'none' (the bond itself gets no address; the VLAN interfaces do)
IP address for bond0.1? (or 'dhcp', 'none', '?') [dhcp]: | Enter the IP address of your LAN interface, e.g. 10.1.0.1
Netmask? [255.255.255.0]: | Press Enter confirming '255.255.255.0', or type another appropriate subnet mask
Gateway? (or 'none') [none]: | Press Enter confirming 'none'
Do you want to do any manual network configuration? [no] | yes
Make a copy of the bond0.1 configuration for the bond0.2, bond0.10 and bond0.11 (optional) interfaces; see the sketch after this table. Don't forget to add a gateway and a metric value for the ISP interfaces when multiple gateways are set. Save and close the file (:wq) |
DNS domain name? (e.g. 'bar.com') []: | Enter the domain name of your intranet, e.g. example.net
DNS nameserver(s)? []: | 8.8.8.8 8.8.4.4 (we will change them later)
Changing password for root | Enter a secure password for the console
Retype password: | Retype the above password
Which timezone are you in? ('?' for list) [UTC]: | Press Enter confirming 'UTC'
HTTP/FTP proxy URL? (e.g. 'http://proxy:8080', or 'none') [none] | Press Enter confirming 'none'
Enter mirror number (1-9) or URL to add (or r/f/e/done) [f]: | Select a mirror close to you and press Enter
Which SSH server? ('openssh', 'dropbear' or 'none') [openssh]: | Press Enter confirming 'openssh'
Which NTP client to run? ('openntpd', 'chrony' or 'none') [chrony]: | Press Enter confirming 'chrony'
Which disk(s) would you like to use? (or '?' for help or 'none') [none]: | Press Enter confirming 'none'
Enter where to store configs ('floppy', 'usb' or 'none') [usb]: | Press Enter confirming 'usb'
Enter apk cache directory (or '?' or 'none') [/media/usb/cache]: | Press Enter confirming '/media/usb/cache'
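For the manual network configuration step in the table above, the resulting /etc/network/interfaces could end up looking roughly like the sketch below. All addresses, gateways and metrics here are placeholders for illustration; substitute the values for your own LAN, DMZ and ISPs:

auto bond0.1
iface bond0.1 inet static
    # LAN
    address 10.1.0.1
    netmask 255.255.255.0

auto bond0.2
iface bond0.2 inet static
    # DMZ (example address)
    address 10.2.0.1
    netmask 255.255.255.0

auto bond0.10
iface bond0.10 inet static
    # ISP1 (example address); gateway and metric matter when both ISPs are configured
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    metric 1

auto bond0.11
iface bond0.11 inet static
    # ISP2 (example address)
    address 198.51.100.10
    netmask 255.255.255.0
    gateway 198.51.100.1
    metric 2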
Bonding
Update the bonding configuration:
echo bonding mode=balance-tlb miimon=100 updelay=500 >> /etc/modules
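After the bonding module has been loaded with these options (e.g. after a reboot), you can check that the mode, MII monitoring interval and active slaves were picked up via the kernel's status file:

# show bonding mode, MII status and slave state (the path exists once bond0 is up)
cat /proc/net/bonding/bond0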
DNS
apk add unbound
With your favorite editor open /etc/unbound/unbound.conf
and add the following configuration. If you have a domain that you want unbound to resolve but that is internal to your network only, uncomment the stub-zone stanza and replace the stub-addr with the appropriate DNS server:
server:
    verbosity: 1
    interface: 10.1.0.1
    do-ip4: yes
    do-ip6: no
    do-udp: yes
    do-tcp: yes
    do-daemonize: yes
    access-control: 10.1.0.0/16 allow
    access-control: 127.0.0.0/8 allow
    do-not-query-localhost: no
    root-hints: "/etc/unbound/named.cache"

#stub-zone:
#    name: "example.net"
#    stub-addr: 10.1.0.10

python:

remote-control:
    control-enable: no
Fetch the latest copy of root hints:
wget http://ftp.internic.net/domain/named.cache -O /etc/unbound/named.cache
/etc/init.d/unbound start
echo nameserver 10.1.0.1 > /etc/resolv.conf
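To confirm that unbound is answering on the LAN address, a quick lookup through it should now succeed (nslookup here is the BusyBox applet; any DNS client will do):

# resolve a public name via the local resolver on 10.1.0.1
nslookup alpinelinux.org 10.1.0.1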
GRE Tunnel
With your favorite editor open /etc/network/interfaces
and add the following:
auto gre1
iface gre1 inet static
    pre-up ip tunnel add $IFACE mode gre ttl 64 tos inherit key 12.34.56.78 || true
    address 172.16.1.1
    netmask 255.255.0.0
    post-down ip tunnel del $IFACE || true
Save and close the file.
ifup gre1
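If the tunnel came up correctly, the interface, its key and the address assigned above should be visible:

# show the tunnel parameters (mode, key, ttl) and the configured address
ip tunnel show gre1
ip addr show gre1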
IPSEC
apk add ipsec-tools
With your favorite editor open /etc/ipsec.conf
and change the content to the following:
spdflush;
spdadd 0.0.0.0/0 0.0.0.0/0 gre -P out ipsec esp/transport//require;
spdadd 0.0.0.0/0 0.0.0.0/0 gre -P in ipsec esp/transport//require;
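These SPD entries tell the kernel to require ESP transport-mode protection for all GRE traffic in both directions. Depending on how your init scripts are set up, you may need to load them by hand with setkey from ipsec-tools:

# load the policies from the file above, then dump the SPD to verify
setkey -f /etc/ipsec.conf
setkey -DP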
With your favorite editor open /etc/racoon/racoon.conf
and change the content to the following:
remote anonymous {
    exchange_mode main;
    lifetime time 2 hour;
    certificate_type x509 "/etc/racoon/cert.pem" "/etc/racoon/key.pem";
    ca_type x509 "/etc/racoon/ca.pem";
    my_identifier asn1dn;
    nat_traversal on;
    script "/etc/opennhrp/racoon-ph1dead.sh" phase1_dead;
    dpd_delay 120;

    proposal {
        encryption_algorithm aes 256;
        hash_algorithm sha1;
        authentication_method rsasig;
        dh_group modp4096;
    }
    proposal {
        encryption_algorithm aes 256;
        hash_algorithm sha1;
        authentication_method rsasig;
        dh_group 2;
    }
}

sainfo anonymous {
    pfs_group 2;
    lifetime time 2 hour;
    encryption_algorithm aes 256;
    authentication_algorithm hmac_sha1;
    compression_algorithm deflate;
}
Save and close the file.
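racoon.conf above expects an X.509 certificate, private key and CA certificate under /etc/racoon/, which this guide does not generate for you. Purely as an illustration (a throwaway private CA created with openssl; in a real deployment you would use your existing PKI, and the CA key would not live on the spoke), the files could be produced roughly like this:

# create a private CA certificate and key
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout /etc/racoon/ca-key.pem -out /etc/racoon/ca.pem -subj "/CN=Example DMVPN CA"
# create this spoke's key and certificate request
openssl req -new -newkey rsa:2048 -nodes \
    -keyout /etc/racoon/key.pem -out /tmp/vpnc.csr -subj "/CN=vpnc.example.net"
# sign the request with the CA
openssl x509 -req -days 3650 -in /tmp/vpnc.csr \
    -CA /etc/racoon/ca.pem -CAkey /etc/racoon/ca-key.pem -CAcreateserial -out /etc/racoon/cert.pem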
/etc/init.d/racoon start
Next Hop Resolution Protocol (NHRP)
apk add opennhrp
With your favorite editor open /etc/opennhrp/opennhrp.conf
and change the content to the following:
interface gre1
    dynamic-map 172.16.0.0/16 hub.example.com
    shortcut
    redirect
    non-caching

interface bond0.1
    shortcut-destination

interface bond0.2
    shortcut-destination
With your favorite editor open /etc/opennhrp/opennhrp-script
and change the content to the following:
#!/bin/sh

case $1 in
interface-up)
    ip route flush proto 42 dev $NHRP_INTERFACE
    ip neigh flush dev $NHRP_INTERFACE
    ;;
peer-register)
    ;;
peer-up)
    if [ -n "$NHRP_DESTMTU" ]; then
        ARGS=`ip route get $NHRP_DESTNBMA from $NHRP_SRCNBMA | head -1`
        ip route add $ARGS proto 42 mtu $NHRP_DESTMTU
    fi
    echo "Create link from $NHRP_SRCADDR ($NHRP_SRCNBMA) to $NHRP_DESTADDR ($NHRP_DESTNBMA)"
    racoonctl establish-sa -w isakmp inet $NHRP_SRCNBMA $NHRP_DESTNBMA || exit 1
    racoonctl establish-sa -w esp inet $NHRP_SRCNBMA $NHRP_DESTNBMA gre || exit 1
    vtysh -d bgpd -c "clear bgp $NHRP_DESTADDR" 2>/dev/null || true
    ;;
peer-down)
    echo "Delete link from $NHRP_SRCADDR ($NHRP_SRCNBMA) to $NHRP_DESTADDR ($NHRP_DESTNBMA)"
    if [ "$NHRP_PEER_DOWN_REASON" != "lower-down" ]; then
        racoonctl delete-sa isakmp inet $NHRP_SRCNBMA $NHRP_DESTNBMA
    fi
    ip route del $NHRP_DESTNBMA src $NHRP_SRCNBMA proto 42
    ;;
route-up)
    echo "Route $NHRP_DESTADDR/$NHRP_DESTPREFIX is up"
    ip route replace $NHRP_DESTADDR/$NHRP_DESTPREFIX proto 42 via $NHRP_NEXTHOP dev $NHRP_INTERFACE
    ip route flush cache
    ;;
route-down)
    echo "Route $NHRP_DESTADDR/$NHRP_DESTPREFIX is down"
    ip route del $NHRP_DESTADDR/$NHRP_DESTPREFIX proto 42
    ip route flush cache
    ;;
esac

exit 0
Save and close the file. Make it executable:
chmod +x /etc/opennhrp/opennhrp-script
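opennhrp is then started like the other services (assuming the Alpine package ships an OpenRC service script). Once it is running and a registration with the hub has gone through, opennhrpctl can list the learned peers:

/etc/init.d/opennhrp start
# show NHRP peers and their NBMA addresses
opennhrpctl show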
BGP
apk add quagga
touch /etc/quagga/zebra.conf
With your favorite editor open /etc/quagga/bgpd.conf
and change the content to the following:
password strongpassword
enable password strongpassword
log syslog
access-list 1 remark Command line access authorized IP
access-list 1 permit 127.0.0.1
line vty
 access-class 1
hostname vpnc.example.net
router bgp 65001
 bgp router-id 172.16.1.1
 network 10.1.0.0/16
 neighbor %HUB_GRE_IP% remote-as 65000
 neighbor %HUB_GRE_IP% remote-as 65000
 ...
Add a neighbor %HUB_GRE_IP% remote-as 65000 line for each hub host you have in your NBMA cloud.
Save and close the file.
/etc/init.d/bgpd start
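You will typically also want zebra running so that routes learned over BGP are installed into the kernel routing table (assuming the quagga package installs the usual OpenRC scripts), and you can then inspect the sessions from vtysh:

/etc/init.d/zebra start
# summary of configured neighbors and received prefixes
vtysh -c "show ip bgp summary"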
Firewall
apk add awall
With your favorite editor, edit the following files and set their contents as follows:
/etc/awall/optional/params.json
{ "description": "params", "variable": { "B_IF" = "bond0.1", "C_IF" = "bond0.2", "E_IF" = "bond0.10", "E_IF2" = "bond0.11" } }
/etc/awall/optional/internet-host.json
{ "description": "Internet host", "import": "params", "zone": { "E": { "iface": "$E_IF" }, "E2": { "iface": "$E_IF2" } }, "filter": [ { "in": [ "E", "E2 ], "service": "ping", "action": "accept", "flow-limit": { "count": 10, "interval": 6 } }, { "in": [ "E", "E2" ], "out": "_fw", "service": "ssh", "action": "accept", "conn-limit": { "count": 3, "interval": 60 } }, { "in": "_fw", "out": [ "E", "E2" ], "service": [ "dns", "http", "ntp" ], "action": "accept" }, { "in": "_fw", "service": [ "ping", "ssh" ], "action": "accept" } ] }
/etc/awall/optional/mark.json
{ "description": "Mark traffic based on ISP", "import": [ "params", "internet-host" ], "route-track": [ { "in": "E", "mark": 1 }, { "in": "E2", "mark": 2 } ] }
/etc/awall/optional/dmvpn.json
{ "description": "DMVPN router", "import": "internet-host", "variable": { "A_ADDR": [ "10.0.0.0/8", "172.16.0.0/16" ], "A_IF": "gre1" }, "zone": { "A": { "addr": "$A_ADDR", "iface": "$A_IF" } }, "filter": [ { "in": [ "E", "E2" ], "out": "_fw", "service": "ipsec", "action": "accept" }, { "in": "_fw", "out": [ "E", "E2" ], "service": "ipsec", "action": "accept" }, { "in": [ "E", "E2" ], "out": "_fw", "ipsec": "in", "service": "gre", "action": "accept" }, { "in": "_fw", "out": [ "E", "E2" ], "ipsec": "out", "service": "gre", "action": "accept" }, { "in": "_fw", "out": "A", "service": "bgp", "action": "accept" }, { "in": "A", "out": "_fw", "service": "bgp", "action": "accept"}, { "out": [ "E", "E2" ], "dest": "$A_ADDR", "action": "reject" } ] }
/etc/awall/optional/vpnc.json
{ "description": "VPNc", "import": [ "params", "internet-host", "dmvpn" ], "zone": { "B": { "iface": "$B_IF" }, "C": { "iface": "$C_IF" } }, "policy": [ { "in": "A", "action": "accept" }, { "in": "B", "out": "A", "action": "accept" }, { "in": "C", "out": [ "A", "E" ], "action": "accept" }, { "in": [ "E", "E2" ], "action": "drop" } { "in": "_fw", "out": "A", "action": "accept" } ], "snat": [ { "out": [ "E", "E2" ] } ], "filter": [ { "in": "A", "out": "_fw", "service": [ "ping", "ssh", "http", "https" ], "action": "accept" }, { "in": [ "B", "C" ], "out": "_fw", "service": [ "dns", "ntp", "http", "https", "ssh" ], "action": "accept" }, { "in": "_fw", "out": [ "B", "C" ], "service": [ "dns", "ntp" ], "action": "accept" }, { "in": [ "A", "B", "C" ], "out": "_fw", "proto": "icmp", "action": "accept" } ] }
ISP Failover
apk add pingu
Configure pingu in /etc/pingu/pingu.conf to monitor the bond0.10 and bond0.11 interfaces. Add the hosts to ping for ISP failover detection, bind them to the primary ISP, and set the ping timeout to 2.5 seconds:
interface bond0.10 {
    # route-table must correspond with NUMBER column in /etc/shorewall/providers
    route-table 1
    # the rule-priority must be a higher number than the priority in /etc/shorewall/route_rules
    rule-priority 20000
}

interface bond0.11 {
    # route-table must correspond with NUMBER column in /etc/shorewall/providers
    route-table 2
    rule-priority 20000
}

# Ping responses that take more than 2.5 seconds are considered lost.
timeout 2.5

# ping google dns via ISP1
host 8.8.8.8 {
    interval 60
    bind-interface bond0.10
}

# ping opendns via ISP1
host 208.67.222.222 {
    interval 60
    bind-interface bond0.10
}
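Save the file and start pingu (assuming the package ships an OpenRC service script), adding it to the default runlevel so it comes back after a reboot:

/etc/init.d/pingu start
rc-update add pingu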
Now, if both hosts stop responding to pings, ISP1 will be considered down and all gateways via bond0.10 will be removed from the main routing table. Note that the gateway will not be removed from routing table '1'; this lets pingu keep pinging via bond0.10 so it can detect when the ISP comes back online. When the ISP starts working again, the gateways are added back to the main routing table.