OpenStack SDN With OVN (Part 1) - Build and Install

27 Nov 2016 | SDN
" B5 P1 i5 R- c' ?8 X- H  `9 _Vanilla openstack networking has many functional, performance and scaling limitations. Projects like L2 population, local ARP responder, L2 Gateway and DVR were conceived to address those issues. However good a job these projects do, they still remain a collection of separate projects, each with its own limitations, configuration options and sets of dependencies. That led to an effort outside of OpenStack to develop a special-purpose OVS-only SDN controller that would address those issues in a centralised and consistent manner. This post will be about one such SDN controller, coming directly from the people responsible for OpenvSwitch, Open Virtual Network (OVN).+ Y" J( w/ V; X2 P. \, G3 e
% [5 f% o4 p7 q* E3 W
OVN quick introduction
OVN is a distributed SDN controller implementing virtual networks with the help of OVS. Even though it is positioned as a CMS-independent controller, the main use case is still OpenStack. OVN was designed to address the following limitations of vanilla OpenStack networking:

Security groups could not be implemented directly on OVS ports and therefore required a dedicated Linux bridge between the VM and the OVS integration bridge.
Routing and DHCP agents required dedicated network namespaces.
NAT was implemented using a combination of network namespaces, iptables and proxy-ARP.

OVN implements security groups, distributed virtual routing, NAT and a distributed DHCP server all inside a single OVS bridge. This dramatically improves performance by reducing the amount of inter-process packet handling and ensures that all flows can benefit from kernel fast-path switching.
At a high level, OVN consists of three main components:

OVN ML2 plugin - performs the translation between the Neutron data model and the OVN logical data model stored in the Northbound DB.
OVN northd - the brains of OVN; translates the high-level networking abstractions (logical switches, routers and ports) into logical flows. These logical flows are not yet OpenFlow flows, but they are similar in concept and a very powerful abstraction. All translated information is stored in the Southbound DB.
OVN controllers - located on each compute node; receive identical copies of the logical flows (a centralised network view) and exchange logical-port-to-overlay-IP binding information via the central Southbound DB. This information is used to translate the logical flows into OpenFlow flows, which are then programmed into the local OVS instance.
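Both databases can be inspected directly with the ovn-nbctl and ovn-sbctl utilities that ship with OVN. A quick sketch of what to run on the node hosting the databases (the output obviously depends on the deployment):

```shell
# Desired state, as written by the CMS (e.g. the Neutron ML2 plugin):
ovn-nbctl show

# Translated state, as produced by ovn-northd:
ovn-sbctl lflow-list   # logical flows, grouped per logical datapath
ovn-sbctl show         # chassis (hypervisor) registrations and encap IPs
```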

If you want to learn more about OVN architecture and use cases, the OpenStack OVN page has an excellent collection of resources for further reading.
OpenStack installation
I’ll use RDO packstack to help me build an OpenStack lab with one controller and two compute nodes on CentOS 7. I’ll use the master trunk to deploy the latest OpenStack Ocata packages. This is required since, at the time of writing (Nov 2016), some of the OVN features were not available in OpenStack Newton.

cd /etc/yum.repos.d/
wget http://trunk.rdoproject.org/centos7/delorean-deps.repo
wget https://trunk.rdoproject.org/centos7-master/current/delorean.repo
On the controller node, generate a sample answer file and modify the settings to match the IPs of the individual nodes. Optionally, you can disable some of the unused components, like Nagios and Ceilometer, similar to how I did it in my earlier post.

yum install -y openstack-packstack crudini
packstack --gen-answer-file=/root/packstack.answer
crudini --set --existing /root/packstack.answer general CONFIG_COMPUTE_HOSTS 169.254.0.12,169.254.0.13
crudini --set --existing /root/packstack.answer general CONFIG_CONTROLLER_HOST 169.254.0.11
crudini --set --existing /root/packstack.answer general CONFIG_NETWORK_HOSTS 169.254.0.11
packstack --answer-file=/root/packstack.answer
After the last step we should have a working 3-node OpenStack lab, similar to the one depicted below. If you want to learn how to automate this process, refer to my older posts about the OpenStack and underlay Leaf-Spine fabric build using Chef.

OVN Build
OVN can be built directly from the OVS source code. Instead of building and installing OVS on each of the OpenStack nodes individually, I’ll build a set of RPMs on the controller and use them to install and upgrade the OVS/OVN components on the remaining nodes.
Part of the OVN build process includes building an OVS kernel module. In order to be able to use the kmod RPM on all nodes, we need to make sure all nodes run the same version of the Linux kernel. The easiest way is to fetch the latest updates from the CentOS repos and reboot the nodes. This should result in the same kernel version on all nodes, which can be checked with the uname -r command.
yum -y update kernel
reboot
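To confirm the kernels actually match before building the kmod RPM, a quick loop over the nodes helps (a sketch; the hostnames are assumptions based on this lab):

```shell
# Compare kernel versions across all lab nodes; they must be identical
# for the single openvswitch-kmod RPM to work everywhere.
for node in controller compute-2 compute-3; do
  printf '%s: ' "$node"
  ssh "$node" uname -r
done
```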
The official OVS installation procedure for CentOS 7 is pretty accurate and requires only a few modifications to account for packages missing from the minimal CentOS image I’ve used as a base OS.
yum install rpm-build autoconf automake libtool systemd-units openssl openssl-devel python python-twisted-core python-zope-interface python-six desktop-file-utils groff graphviz procps-ng libcap-ng libcap-ng-devel
yum install selinux-policy-devel kernel-devel-`uname -r` git

git clone https://github.com/openvswitch/ovs.git && cd ovs
./boot.sh
./configure
make rpm-fedora RPMBUILD_OPT="--without check"
make rpm-fedora-kmod
At the end of the process we should have a set of RPMs inside the ovs/rpm/rpmbuild/RPMS/ directory.
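For reference, the freshly built packages end up in an arch-specific subdirectory; the exact version strings depend on the commit you checked out (the 2.6.90 names below match the ones used during installation later in this post):

```shell
ls ovs/rpm/rpmbuild/RPMS/x86_64/
# openvswitch-2.6.90-1.el7.centos.x86_64.rpm
# openvswitch-kmod-2.6.90-1.el7.centos.x86_64.rpm
# openvswitch-ovn-common-2.6.90-1.el7.centos.x86_64.rpm
# openvswitch-ovn-central-2.6.90-1.el7.centos.x86_64.rpm
# openvswitch-ovn-host-2.6.90-1.el7.centos.x86_64.rpm
```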

OVN Install
Before we can begin installing OVN, we need to prepare the existing OpenStack environment by disabling and removing the legacy Neutron Open vSwitch agents. Since OVN natively implements L2 and L3 forwarding, DHCP and NAT, we won’t need the L3 and DHCP agents on any of the compute nodes. The network node that used to provide North-South connectivity will no longer be needed.

OpenStack preparation
First, we need to make sure all compute nodes have a bridge that provides access to external provider networks. In my case, I’ll move the eth1 interface under the OVS bridge br-ex on all compute nodes.

DEVICE=eth1
NAME=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
The IP address needs to be moved to the br-ex interface. The example below is for compute node #2:

ONBOOT=yes
DEFROUTE=yes
IPADDR=169.254.0.12
PREFIX=24
GATEWAY=169.254.0.1
DNS1=8.8.8.8
DEVICE=br-ex
NAME=br-ex
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge
At the same time, the OVS configuration on the network/controller node will need to be completely wiped. Once that’s done, we can remove the Neutron OVS package from all nodes.
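The wipe itself is not spelled out above; a minimal sketch for the network/controller node, assuming the standard packstack bridge names, could look like this:

```shell
# Stop the legacy Neutron agents, then remove the packstack-created bridges.
systemctl stop neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent
for br in br-int br-tun; do
  ovs-vsctl --if-exists del-br "$br"
done
```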
yum remove openstack-neutron-openvswitch
OVS packages installation
Now everything is ready for the OVN installation. The first step is to install the kernel module and upgrade the existing OVS package. A reboot may be needed in order for the correct kernel module to be loaded.
rpm -i openvswitch-kmod-2.6.90-1.el7.centos.x86_64.rpm
rpm -U openvswitch-2.6.90-1.el7.centos.x86_64.rpm
reboot
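After the reboot it is worth verifying that the module from our kmod RPM, and not the distribution's in-tree one, got loaded. A sketch (the exact module path is an assumption and may differ on your system):

```shell
lsmod | grep openvswitch
# The kmod RPM installs under /lib/modules/<kernel>/extra/, so the
# filename reported below should point there rather than .../kernel/net/:
modinfo openvswitch | grep '^filename'
```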
Now we can install OVN. The controllers will be running the ovn-northd process, which can be installed as follows:
rpm -i openvswitch-ovn-common-*.x86_64.rpm
rpm -i openvswitch-ovn-central-*.x86_64.rpm
systemctl start ovn-northd
The following packages install the ovn-controller on all compute nodes:

rpm -i openvswitch-ovn-common-*.x86_64.rpm
rpm -i openvswitch-ovn-host-*.x86_64.rpm
systemctl start ovn-controller
The last thing is to install the OVN ML2 plugin, a Python library that allows the Neutron server to talk to the OVN Northbound database.

yum install python-networking-ovn
OVN Configuration
Now that we have all the required packages in place, it’s time to reconfigure Neutron to start using OVN instead of the default openvswitch plugin. The procedure is described in the official Neutron integration guide. At the end, once we’ve restarted ovn-northd on the controller and ovn-controller on the compute nodes, we should see the following output on the controller node:
$ ovs-sbctl show
Chassis "d03bdd51-e687-4078-aa54-0ff8007db0b5"
    hostname: "compute-3"
    Encap geneve
        ip: "10.0.0.4"
        options: {csum="true"}
    Encap vxlan
        ip: "10.0.0.4"
        options: {csum="true"}
Chassis "b89b8683-7c74-43df-8ac6-1d57ddefec77"
    hostname: "compute-2"
    Encap vxlan
        ip: "10.0.0.2"
        options: {csum="true"}
    Encap geneve
        ip: "10.0.0.2"
        options: {csum="true"}
This means that all instances of the distributed OVN controller located on each compute node have successfully registered with the Southbound OVSDB and provided information about their physical overlay addresses and supported encapsulation types.
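For reference, the Neutron-side changes from the integration guide boil down to swapping the ML2 mechanism driver and pointing Neutron at the OVN databases. A sketch using crudini, with this lab's controller IP and the default OVN ports (6641 for NB, 6642 for SB) as assumptions:

```shell
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers ovn
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers local,flat,vlan,geneve
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ovn ovn_nb_connection tcp:169.254.0.11:6641
crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ovn ovn_sb_connection tcp:169.254.0.11:6642
systemctl restart neutron-server
```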

(Optional) Automating everything with Chef
At this point in time there’s no way to automate an OVN deployment with Packstack (TripleO already has OVN integration templates). For those who want to bypass the manual build process, I have created a new Chef cookbook automating all the steps described above. This cookbook assumes that the OpenStack environment has been built as described in my earlier post. Optionally, you can automate the build of the underlay network as well by following my other post. Once you’ve got both OpenStack and the underlay built, you can use the following commands to build, install and configure OVN:
git clone https://github.com/networkop/chef-unl-os.git
cd chef-unl-os
chef-client -z -E lab ovn.rb
Test topology setup
Now we should be able to create a test topology with two tenant subnets and an external network interconnected by a virtual router.

neutron net-create NET-RED
neutron net-create NET-BLUE
neutron subnet-create --name SUB-BLUE NET-BLUE 10.0.0.0/24
neutron subnet-create --name SUB-RED NET-RED 20.0.0.0/24
neutron net-create NET-EXT --provider:network_type flat \
                           --provider:physical_network extnet \
                           --router:external --shared
neutron subnet-create --name SUB-EXT --enable_dhcp=False \
                      --allocation-pool=start=169.254.0.50,end=169.254.0.99 \
                      --gateway=169.254.0.1 NET-EXT 169.254.0.0/24
neutron router-create R1
neutron router-interface-add R1 SUB-BLUE
neutron router-interface-add R1 SUB-RED
neutron router-gateway-set R1 NET-EXT
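Since the ML2 plugin mirrors every Neutron object into the Northbound DB, the result can be checked straight away on the controller (a sketch; the plugin prefixes the Neutron resource names, so expect OVN-generated identifiers rather than the bare names used above):

```shell
ovn-nbctl ls-list   # one logical switch per Neutron network
ovn-nbctl lr-list   # the logical router created for R1
ovn-nbctl show      # ports, router interfaces and their MAC/IP bindings
```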
When we attach a few test VMs to each subnet, we should be able to successfully ping between the VMs, assuming the security groups are set up to allow ICMP/ND.
curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | glance \
  image-create --name='IMG-CIRROS' \
  --visibility=public \
  --container-format=bare \
  --disk-format=qcow2

nova aggregate-create AGG-RED AZ-RED
nova aggregate-create AGG-BLUE AZ-BLUE
nova aggregate-add-host AGG-BLUE compute-2
nova aggregate-add-host AGG-RED compute-3

nova boot --flavor m1.tiny --image 'IMG-CIRROS' \
  --nic net-name=NET-BLUE \
  --availability-zone AZ-BLUE \
  VM1

nova boot --flavor m1.tiny --image 'IMG-CIRROS' \
  --nic net-name=NET-RED \
  --availability-zone AZ-RED \
  VM2

nova boot --flavor m1.tiny --image 'IMG-CIRROS' \
  --nic net-name=NET-BLUE \
  --availability-zone AZ-RED \
  VM3

openstack floating ip create NET-EXT
openstack server add floating ip VM3 169.254.0.53
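The security-group caveat from above can be handled with a couple of rules on the project's default group before booting the VMs; a sketch, along with a quick North-South check against the floating IP assigned earlier:

```shell
# Allow ICMP (and SSH, for console-less debugging) in the default group.
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default

# North-South connectivity: ping VM3's floating IP from outside the cloud.
ping -c 3 169.254.0.53
```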
In the next post we will use the above virtual topology to explore the dataplane packet flow inside an OVN-managed Open vSwitch and how OVN uses the new encapsulation protocol, Geneve, to optimise egress forwarding lookups on remote compute nodes.