Dynamic Multipoint VPN (DMVPN) Phase 3 with Quagga NHRPd

From Alpine Linux
Revision as of 09:34, 29 October 2015 by Fcolista (talk | contribs)

THIS DOC IS STILL A DRAFT

Overview

This is a follow-up to the well-known DMVPN document [1]: opennhrp has since been rewritten as a Quagga plugin [1], which supports interoperability with Cisco's new FlexVPN and with strongSwan.

This NHRP implementation still has some limitations (multicast is not supported yet, so you need to use BGP rather than OSPF), but it is usable in a production environment.

Note: This document assumes that all Alpine installations run in diskless mode and that the configuration is saved on a USB key.


This How-To will show you how to configure a DMVPN solution with these key items:

1. VPN setup with strongSwan, using a PSK for authentication (the same PSK between all of the spokes and the hub);

2. DMVPN setup with quagga-nhrpd;

3. iBGP used to announce the LAN subnets;

4. Awall rules to allow NHRP shortcuts between spokes.


The goal is to let the private networks behind the spoke nodes and the hub communicate with each other over dynamically created VPNs. Routes are learned via BGP, and the IPsec VPN is authenticated via PSK.

The logical setup is configured as shown:


Terminology

NBMA
Non-Broadcast Multi-Access network, as described in RFC 2332.
Hub
The Next Hop Server (NHS), performing the Next Hop Resolution Protocol service within the NBMA cloud.
Spoke
The Next Hop Resolution Protocol Client (NHC), which initiates NHRP requests of various types in order to obtain access to the NHRP service.


Hardware

To support the VIA PadLock engine, enable its modules:

echo -e "padlock-aes\npadlock-sha" >> /etc/modules


Alpine Installation

Follow the instructions on http://wiki.alpinelinux.org/wiki/Create_a_Bootable_USB about how to create a bootable USB.


Spoke Nodes

Spoke Node 1

Networking

We're going to set up spoke node 1 as follows:


Host     Interface  Description  Subnet
Spoke 1  eth0       Internet     DHCP
         eth1       LAN          192.168.10.0/24
         gre1       Tunnel       172.16.1.1
Spoke 2  eth0       Internet     DHCP
         eth1       LAN          192.168.20.0/24
         gre1       Tunnel       172.16.2.1
Spoke 3  eth0       Internet     90.100.150.200
         eth1       LAN          192.168.30.0/24
         gre1       Tunnel       172.16.3.1


With your favorite editor open /etc/network/interfaces and add interfaces:

Contents of /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
	address 192.168.10.1
	netmask 255.255.255.0


SSH

Disable password authentication and DNS reverse lookup:

sed -i "s/.PasswordAuthentication yes/PasswordAuthentication no/" /etc/ssh/sshd_config
sed -i "s/.UseDNS yes/UseDNS no/" /etc/ssh/sshd_config


Restart ssh:

/etc/init.d/sshd restart

GRE Tunnel

With your favorite editor open /etc/network/interfaces and add the following:

Contents of /etc/network/interfaces

auto gre1
iface gre1 inet static
	pre-up ip tunnel add gre1 mode gre key 42 ttl 64 dev eth0 || true
	address 172.16.1.1
	netmask 255.255.255.255
	post-down ip tunnel del $IFACE || true

Bring up the new gre1 interface:

ifup gre1
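As a quick sanity check (not part of the original setup), you can verify with iproute2 that the tunnel device came up with the expected parameters and address:

```shell
# Show the tunnel parameters (mode gre, key 42, ttl 64, underlying device eth0)
ip tunnel show gre1

# Confirm the /32 tunnel address (172.16.1.1 on spoke 1) is assigned
ip addr show dev gre1
```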

IPSEC

Install package(s):

apk add strongswan

Contents of /etc/swanctl/swanctl.conf

connections {
	dmvpn {
		version = 2
		pull = no
		mobike = no
		dpd_delay = 15
		dpd_timeout = 30
		fragmentation = yes
		unique = replace
		rekey_time = 4h
		reauth_time = 13h
		proposals = aes256-sha512-ecp384
		local {
			auth = psk
			id = spoke1
		}
		remote {
			auth = psk
		}
		children {
			dmvpn {
				esp_proposals = aes256-sha512-ecp384
				local_ts = dynamic[gre]
				remote_ts = dynamic[gre]
				inactivity = 90m
				rekey_time = 100m
				mode = transport
				dpd_action = clear
				reqid = 1
			}
		}
	}
}


Contents of /etc/ipsec.secrets

# /etc/ipsec.secrets - strongSwan IPsec secrets file
%any : PSK "cisco12345678987654321"

Start service(s):

/etc/init.d/charon start
rc-update add charon
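If the connection does not show up after starting charon, you may need to load the swanctl configuration into the daemon explicitly (assuming your init script does not already do so):

```shell
# Load connections, credentials and pools from /etc/swanctl/swanctl.conf
swanctl --load-all

# List the loaded connection definitions; "dmvpn" should appear
swanctl --list-conns
```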


Routing

This section will configure the routing protocol suite quagga patched with NHRP support.


apk add quagga-nhrp
touch /etc/quagga/zebra.conf /etc/quagga/bgpd.conf /etc/quagga/nhrpd.conf



Fix permissions:

chown -R quagga:quagga /etc/quagga



Start all the daemons:

/etc/init.d/zebra start
/etc/init.d/bgpd start
/etc/init.d/nhrpd start

Configure it to start from boot:

rc-update add zebra
rc-update add bgpd
rc-update add nhrpd

Now we're going to configure it with the vtysh CLI:

vtysh
configure terminal
log syslog
debug nhrp common
router bgp 65000
 bgp router-id 172.16.1.1
 network 192.168.10.0/24
 neighbor spokes-ibgp peer-group
 neighbor spokes-ibgp remote-as 65000
 neighbor spokes-ibgp ebgp-multihop 1
 neighbor spokes-ibgp disable-connected-check
 neighbor spokes-ibgp advertisement-interval 1
 neighbor spokes-ibgp next-hop-self
 neighbor spokes-ibgp soft-reconfiguration inbound
 neighbor 172.16.0.1 peer-group spokes-ibgp
exit
nhrp nflog-group 1
interface gre1
 ip nhrp network-id 1
 ip nhrp nhs dynamic nbma 50.60.70.80
 ip nhrp registration no-unique
 ip nhrp shortcut
 ipv6 nd suppress-ra
 no link-detect
 tunnel protection vici profile dmvpn
 tunnel source eth0
exit
write mem
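Once the configuration is saved, you can check non-interactively that the iBGP session to the hub establishes and that NHRP registers with the NHS. The exact show commands below are as found in the quagga-nhrp fork; adjust them if your build differs:

```shell
# iBGP neighbor 172.16.0.1 (the hub) should reach state Established
vtysh -c "show ip bgp summary"

# The NHS entry should show a successful registration to the hub's NBMA address
vtysh -c "show ip nhrp nhs"

# Dynamically resolved peers appear here once shortcuts are built
vtysh -c "show ip nhrp cache"
```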


Hub Node

We will document only what changes from the Spoke node setup.


Networking

The NHS (Hub) has the following settings:


Host  Interface  Description  Subnet
Hub   eth0       Internet     50.60.70.80
      eth1       LAN          192.168.1.0/24


With your favorite editor open /etc/network/interfaces and add interfaces:

Contents of /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
	address 50.60.70.80
	netmask 255.255.255.0
	gateway 50.60.70.1

auto eth1
iface eth1 inet static
	address 192.168.1.1
	netmask 255.255.255.0


GRE Tunnel

With your favorite editor open /etc/network/interfaces and add the following:

Contents of /etc/network/interfaces

auto gre1
iface gre1 inet static
	pre-up ip tunnel add gre1 mode gre key 42 ttl 64 dev eth0 || true
	address 172.16.0.1
	netmask 255.255.255.255
	post-down ip tunnel del $IFACE || true

Bring up the new gre1 interface:

ifup gre1


Routing

Again, routing is configured directly with vtysh

vtysh
configure terminal
log syslog
debug nhrp common
router bgp 65000
 bgp router-id 172.16.0.1
 bgp deterministic-med
 network 172.16.0.0/16
 redistribute nhrp
 neighbor spokes-ibgp peer-group
 neighbor spokes-ibgp remote-as 65000
 neighbor spokes-ibgp ebgp-multihop 1
 neighbor spokes-ibgp disable-connected-check
 neighbor spokes-ibgp route-reflector-client
 neighbor spokes-ibgp next-hop-self all
 neighbor spokes-ibgp advertisement-interval 1
 neighbor spokes-ibgp soft-reconfiguration inbound
 neighbor 172.16.1.1 peer-group spokes-ibgp
exit
interface gre1
 ip nhrp network-id 1
 ip nhrp nhs dynamic nbma 50.60.70.80
 ip nhrp registration no-unique
 ip nhrp shortcut
 ipv6 nd suppress-ra
 no link-detect
 tunnel protection vici profile dmvpn
 tunnel source eth0
exit
write mem


Add a neighbor %Spoke_GRE_IP% peer-group spokes-ibgp line for each spoke node you have. For instance, to add a spoke node whose gre1 address is 172.16.3.1:

vtysh
conf t
router bgp 65000
 neighbor 172.16.3.1 peer-group spokes-ibgp
exit
write mem


Awall

Unlike DMVPN Phase 2, in Phase 3 the HUB is the default gateway for all the spokes; the spokes then become able to communicate with each other directly by means of NHRP redirects.

(For a good explanation of the differences between Phase 1, Phase 2 and Phase 3 DMVPN, see http://blog.ine.com/2008/12/23/dmvpn-phase-3/).

This is implemented by sending NHRP traffic indication notifications, driven by iptables' NFLOG target.
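For illustration, the packet-log rule that awall will generate below is roughly equivalent to a plain iptables NFLOG rule like the following (a sketch only; awall manages the real ruleset, and the group number must match "nhrp nflog-group 1" in the nhrpd configuration):

```shell
# Copy the first 128 bytes of spoke-to-spoke packets hairpinning through
# the hub's gre1 interface to netlink log group 1, where nhrpd listens
# and answers with NHRP traffic indications
iptables -A FORWARD -i gre1 -o gre1 \
         -j NFLOG --nflog-group 1 --nflog-range 128
```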

This is the complete firewall configuration for the HUB, using Alpine Firewall Framework, Awall [2].


With your favorite editor open /etc/awall/optional/zones.json:

Contents of /etc/awall/optional/zones.json

{
	"description": "Zones - zone definition for management",

	"variable": {
		"SUBNETS": [ "192.168.0.0/16", "172.16.0.0/16" ]
	},

	"zone": {
		"DMVPN": { "addr": "$SUBNETS" }
	}
}


Now, create /etc/awall/optional/inet.json:

Contents of /etc/awall/optional/inet.json

{
	"description": "Internet - Host Management (rate limited)",

	"zone": {
		"INET": { "iface": "eth0" }
	},

	"policy": [
		{ "in": "INET", "action": "drop" }
	],

	"filter": [
		{ "in": "INET", "out": "_fw", "service": "ping", "action": "accept",
		  "flow-limit": { "count": 10, "interval": 6 } },
		{ "in": "INET", "out": "_fw", "service": "ssh", "action": "accept",
		  "conn-limit": { "count": 3, "interval": 60 } },
		{ "in": "_fw", "out": "INET", "service": [ "dns", "http", "ntp" ], "action": "accept" },
		{ "in": "_fw", "service": [ "ping", "ssh" ], "action": "accept" }
	]
}

Now, the DMVPN rule:

Contents of /etc/awall/optional/dmvpn.json

{
	"description": "DMVPN specific rules",

	"import": [ "inet", "zones" ],

	"variable": { "HUB": true },

	"policy": [
		{ "in": "DMVPN", "out": "DMVPN", "action": "accept" }
	],

	"zone": {
		"DMVPN": { "iface": "gre1", "addr": "$SUBNETS", "route-back": "$HUB" }
	},

	"filter": [
		{ "in": "INET", "out": "_fw", "service": "ipsec", "action": "accept" },
		{ "in": "_fw", "out": "INET", "service": "ipsec", "action": "accept" },
		{ "in": "INET", "out": "_fw", "ipsec": "in", "service": "gre", "action": "accept" },
		{ "in": "_fw", "out": "INET", "ipsec": "out", "service": "gre", "action": "accept" },
		{ "in": "_fw", "out": "DMVPN", "service": "bgp", "action": "accept" },
		{ "in": "DMVPN", "out": "_fw", "service": "bgp", "action": "accept" },
		{ "out": "INET", "dest": "$SUBNETS", "action": "reject" }
	]
}

Management interface allowed traffic:


Contents of /etc/awall/optional/management.json

{
	"description": "Host Management (ssh, https, ping)",

	"import": [ "zones" ],

	"policy": [
		{ "in": "DMVPN", "out": "_fw", "action": "reject" }
	],

	"filter": [
		{ "in": "DMVPN", "out": "_fw", "service": [ "ping", "ssh", "https", "bgp" ], "action": "accept" },
		{ "in": "_fw", "out": "DMVPN", "service": [ "ping", "ssh", "http", "http-alt", "https", "dns", "ntp" ], "action": "accept" }
	]
}

NHRP redirects rule:

Contents of /etc/awall/optional/vpnredirect.json

{
	"description": "NHRP Traffic Indication Probe",

	"log": {
		"dmvpn": {
			"mode": "nflog",
			"group": 1,
			"range": 128,
			"limit": {
				"count": 6,
				"interval": 60,
				"mask": {
					"inet": { "src": 16, "dest": 16 },
					"inet6": { "src": 48, "dest": 48 }
				}
			}
		}
	},

	"packet-log": [
		{ "in": "DMVPN", "out": "DMVPN", "log": "dmvpn" }
	]
}


Enable awall rules:

awall enable zones
awall enable inet
awall enable dmvpn
awall enable management
awall enable vpnredirect

Apply awall rules:

awall activate -f
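To double-check what awall is about to install (an optional verification step, assuming a recent awall version), you can list the policy state and render the ruleset without activating it:

```shell
# List known policies and whether each one is enabled
awall list

# Translate the enabled policies into iptables rule files for inspection,
# writing them to a scratch directory instead of activating them
awall translate -o /tmp/awall-preview
```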


IPSEC

Install package(s):

apk add strongswan

Contents of /etc/swanctl/swanctl.conf

connections {
	dmvpn {
		version = 2
		pull = no
		mobike = no
		dpd_delay = 15
		dpd_timeout = 30
		fragmentation = yes
		unique = replace
		rekey_time = 4h
		reauth_time = 13h
		proposals = aes256-sha512-ecp384
		local {
			auth = psk
			id = hub
		}
		remote {
			auth = psk
		}
		children {
			dmvpn {
				esp_proposals = aes256-sha512-ecp384
				local_ts = dynamic[gre]
				remote_ts = dynamic[gre]
				inactivity = 90m
				rekey_time = 100m
				mode = transport
				dpd_action = clear
				reqid = 1
			}
		}
	}
}


Contents of /etc/ipsec.secrets

# /etc/ipsec.secrets - strongSwan IPsec secrets file
%any : PSK "cisco12345678987654321"

Start service(s):

/etc/init.d/charon start
rc-update add charon


Now, test that it works. In this example, spoke 1 connects to spoke 3, which announces its subnet 192.168.30.0/24 via iBGP; spoke 3's gre1 address is 172.16.3.1 and its public IP address is 90.100.150.200.


The first traffic goes through the HUB.

spoke1:~/root# traceroute -n 192.168.30.1
traceroute to 192.168.30.1 (192.168.30.1), 30 hops max, 38 byte packets
 1  172.16.0.1   0.664 ms  0.461 ms  0.457 ms
 2  192.168.30.1 0.907 ms  0.776 ms  0.771 ms

Then, once the VPN is created, the traffic goes directly to the spoke node.

spoke1:~/root# traceroute -n 192.168.30.1
traceroute to 192.168.30.1 (192.168.30.1), 30 hops max, 38 byte packets
 1  192.168.30.1 0.456 ms  0.385 ms  0.357 ms
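The shortcut is also visible in the kernel routing table (an optional check, using the addresses of this example): once the shortcut is built, the route to the remote LAN resolves via the peer's gre1 address instead of the hub's.

```shell
# Before the shortcut the next hop is the hub (172.16.0.1);
# afterwards it should be spoke 3's tunnel address (172.16.3.1)
ip route get 192.168.30.1
```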


With ipsec statusall you can see all the VPNs created:

spoke1:~/root# ipsec statusall
Status of IKE charon daemon (strongSwan 5.3.2, Linux 3.18.20-1-grsec, i686):
  uptime: 9 days, since Aug 28 14:22:27 2015
  worker threads: 11 of 16 idle, 5/0/0/0 working, job queue: 0/0/0/0, scheduled: 28
  loaded plugins: charon random nonce x509 revocation constraints pubkey pkcs1 pkcs7 pkcs8 pkcs12 pgp dnskey sshkey pem openssl fips-prf gmp xcbc cmac curl sqlite attr kernel-netlink resolve socket-default farp stroke vici updown eap-identity eap-sim eap-aka eap-aka-3gpp2 eap-simaka-pseudonym eap-simaka-reauth eap-md5 eap-mschapv2 eap-radius eap-tls xauth-generic xauth-eap dhcp unity
Listening IP addresses:
  192.168.10.1
  172.17.50.1
  172.16.1.1
Connections:
       dmvpn:  %any...%any  IKEv2, dpddelay=15s
       dmvpn:   local:  [spoke1] uses pre-shared key authentication
       dmvpn:   remote: uses pre-shared key authentication
       dmvpn:   child:  dynamic[gre] === dynamic[gre] TRANSPORT, dpdaction=clear
Security Associations (3 up, 0 connecting):
       dmvpn[121]: ESTABLISHED 4 seconds ago, 172.17.50.1[spoke1]...90.100.150.200[spoke3]
       dmvpn[121]: IKEv2 SPIs: c770729967ea636c_i 0de8ffedbe32f21c_r*, rekeying in 3 hours, pre-shared key reauthentication in 12 hours
       dmvpn[121]: IKE proposal: AES_CBC_256/HMAC_SHA2_512_256/PRF_HMAC_SHA2_512/ECP_384
       dmvpn{187}:  INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: c132e6c3_i c49ae122_o
       dmvpn{187}:  AES_CBC_256/HMAC_SHA2_512_256, 469 bytes_i (6 pkts, 2s ago), 326 bytes_o (6 pkts, 2s ago), rekeying in 90 minutes
       dmvpn{187}:   172.17.50.1/32[gre] === 90.100.150.200/32[gre]
       dmvpn[120]: ESTABLISHED 8 seconds ago, 172.17.50.1[spoke1]...90.100.150.200[spoke3]
       dmvpn[120]: IKEv2 SPIs: 46f81c8ec9a4b753_i* f768298b31ebe4da_r, rekeying in 3 hours, pre-shared key reauthentication in 11 hours
       dmvpn[120]: IKE proposal: AES_CBC_256/HMAC_SHA2_512_256/PRF_HMAC_SHA2_512/ECP_384
       dmvpn{186}:  INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: cad2c1c9_i cd5a287c_o
       dmvpn{186}:  AES_CBC_256/HMAC_SHA2_512_256, 74 bytes_i (1 pkt, 2s ago), 46 bytes_o (1 pkt, 2s ago), rekeying in 91 minutes
       dmvpn{186}:   172.17.50.1/32[gre] === 90.100.150.200/32[gre]
       dmvpn[119]: ESTABLISHED 2 hours ago, 172.17.50.1[spoke1]...50.60.70.80[hub]
       dmvpn[119]: IKEv2 SPIs: 0e999ad802ced9cc_i* 6eaa469463601437_r, rekeying in 84 minutes, pre-shared key reauthentication in 8 hours
       dmvpn[119]: IKE proposal: AES_CBC_256/HMAC_SHA2_512_256/PRF_HMAC_SHA2_512/ECP_384
       dmvpn{185}:  INSTALLED, TRANSPORT, reqid 1, ESP in UDP SPIs: c84d6035_i cb72cd30_o
       dmvpn{185}:  AES_CBC_256/HMAC_SHA2_512_256, 35764 bytes_i (473 pkts, 0s ago), 38266 bytes_o (384 pkts, 0s ago), rekeying in 46 minutes
       dmvpn{185}:   172.17.50.1/32[gre] === 50.60.70.80/32[gre]