Controller nodes
Each controller node runs the Open vSwitch (OVS) service (including dependent services such as ovsdb-server) and ovn-northd. Only a single instance of the ovsdb-server and ovn-northd services can operate in a deployment. However, deployment tools can implement active/passive high-availability using a management tool that monitors service health and automatically starts these services on another node after failure of the primary node. See the Frequently Asked Questions for more information.
Install the ovn-central and openvswitch packages (RHEL/Fedora).

Install the ovn-central and openvswitch-common packages (Ubuntu/Debian).

Start the OVS service. The central OVS service starts the ovsdb-server service that manages OVN databases.
Using the systemd unit:

systemctl start openvswitch (RHEL/Fedora)
systemctl start openvswitch-switch (Ubuntu/Debian)

Configure the ovsdb-server component. By default, the ovsdb-server service only permits local access to databases via Unix socket. However, OVN services on compute nodes require access to these databases.
Permit remote database access.

ovn-nbctl set-connection ptcp:6641:0.0.0.0 -- \
    set connection . inactivity_probe=60000
ovn-sbctl set-connection ptcp:6642:0.0.0.0 -- \
    set connection . inactivity_probe=60000

If using the VTEP functionality:

ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:0.0.0.0

Replace 0.0.0.0 with the IP address of the management network interface on the controller node to avoid listening on all interfaces.
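For example, with a hypothetical management address of 192.168.24.10 on the controller node, the connection commands would look like this configuration sketch (the address is an assumption; substitute your own):

```shell
# 192.168.24.10 is a placeholder management IP; substitute your own.
ovn-nbctl set-connection ptcp:6641:192.168.24.10 -- \
    set connection . inactivity_probe=60000
ovn-sbctl set-connection ptcp:6642:192.168.24.10 -- \
    set connection . inactivity_probe=60000
```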

Note

Permit remote access to the following TCP ports:

6640 (OVS) to VTEP gateways, if you use VTEPs.
6642 (SBDB) to hosts running neutron-server, to gateway nodes that run ovn-controller, and to compute node services such as ovn-controller and ovn-metadata-agent.
6641 (NBDB) to hosts running neutron-server.
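On distributions that use firewalld, opening these ports might look like the following sketch. This is a configuration fragment, not a tested recipe; the zone name public is an assumption, and in production you would restrict sources to the specific hosts listed above:

```shell
# Assumed zone "public"; restrict sources to known hosts in production.
firewall-cmd --zone=public --add-port=6641/tcp --permanent   # NBDB
firewall-cmd --zone=public --add-port=6642/tcp --permanent   # SBDB
firewall-cmd --zone=public --add-port=6640/tcp --permanent   # OVS, only if using VTEPs
firewall-cmd --reload
```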

Start the ovn-northd service.
Using the systemd unit:
systemctl start ovn-northd
Configure the Networking server component. The Networking service implements OVN as an ML2 driver. Edit the /etc/neutron/neutron.conf file:

Enable the ML2 core plug-in.

[DEFAULT]
...
core_plugin = ml2
Enable the OVN layer-3 service.

[DEFAULT]
...
service_plugins = ovn-router
Configure the ML2 plug-in. Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:

Configure the OVN mechanism driver, network type drivers, self-service (tenant) network types, and enable the port security extension.

[ml2]
...
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve
extension_drivers = port_security
overlay_ip_version = 4
Note

To enable VLAN self-service networks, make sure that OVN version 2.11 (or higher) is used, then add vlan to the tenant_network_types option. The first network type in the list becomes the default self-service network type.

To use IPv6 for all overlay (tunnel) network endpoints, set the overlay_ip_version option to 6.
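Putting those notes together, a hedged sketch of the relevant options (assuming OVN 2.11 or higher and IPv4 tunnel endpoints) could be:

```ini
[ml2]
# geneve listed first, so it remains the default self-service network type
tenant_network_types = geneve,vlan
overlay_ip_version = 4
```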
Configure the Geneve ID range and maximum header size. The IP version overhead (20 bytes for IPv4 (default) or 40 bytes for IPv6) is added to the maximum header size based on the ML2 overlay_ip_version option.

[ml2_type_geneve]
...
vni_ranges = 1:65536
max_header_size = 38
Note

The Networking service uses the vni_ranges option to allocate network segments. However, OVN ignores the actual values. Thus, the ID range only determines the quantity of Geneve networks in the environment. For example, a range of 5001:6000 defines a maximum of 1000 Geneve networks. On the other hand, these values are still relevant in the Neutron context, so 1:1000 and 5001:6000 are not simply interchangeable.
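The quantity follows from simple range arithmetic:

```shell
# Number of allocatable Geneve networks = upper bound - lower bound + 1
echo $(( 6000 - 5001 + 1 ))   # the 5001:6000 example: prints 1000
echo $(( 65536 - 1 + 1 ))     # the 1:65536 range above: prints 65536
```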

Warning

The default for max_header_size, 30, is too low for OVN. OVN requires at least 38.
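As a sanity check, with max_header_size = 38 and IPv4 endpoints, the instance-visible MTU on a 1500-byte physical network works out to 1442 bytes, which matches the eth0 MTU shown in the instance output near the end of this document:

```shell
# Overlay overhead = max_header_size (38) + IPv4 header (20)
echo $(( 1500 - 38 - 20 ))   # prints 1442
```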
Optionally, enable support for VXLAN type networks. Because of the limited space in the VXLAN VNI to carry the information OVN needs to identify a packet, the portion of the header that contains the segmentation ID is reduced to 12 bits, which allows a maximum of 4096 networks. The same limitation applies to the number of ports in each network, which are also identified with a 12-bit chunk of the header, limiting them to 4096 ports per network. Please check [1] for more information.

[ml2]
...
type_drivers = geneve,vxlan

[ml2_type_vxlan]
vni_ranges = 1001:11000
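The 12-bit fields translate directly into the maximum counts mentioned above:

```shell
# 12 bits for the network ID, and 12 bits for the port ID within each network
echo $(( 1 << 12 ))   # prints 4096
```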
Optionally, enable support for VLAN provider and self-service networks on one or more physical networks. If you specify only the physical network, only administrative (privileged) users can manage VLAN networks. Additionally specifying a VLAN ID range for a physical network enables regular (non-privileged) users to manage VLAN networks. The Networking service allocates the VLAN ID for each self-service network using the VLAN ID range for the physical network.

[ml2_type_vlan]
...
network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID

Replace PHYSICAL_NETWORK with the physical network name and optionally define the minimum and maximum VLAN IDs. Use a comma to separate each physical network.

For example, to enable support for administrative VLAN networks on the physnet1 network and self-service VLAN networks on the physnet2 network using VLAN IDs 1001 to 2000:

network_vlan_ranges = physnet1,physnet2:1001:2000
Enable security groups.

[securitygroup]
...
enable_security_group = true
Note

The firewall_driver option under [securitygroup] is ignored since the OVN ML2 driver itself handles security groups.

Configure OVS database access and the L3 scheduler.

[ovn]
...
ovn_nb_connection = tcp:IP_ADDRESS:6641
ovn_sb_connection = tcp:IP_ADDRESS:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER

Note

Replace IP_ADDRESS with the IP address of the controller node that runs the ovsdb-server service. Replace OVN_L3_SCHEDULER with leastloaded if you want the scheduler to select a compute node with the least number of gateway ports, or chance if you want the scheduler to randomly select a compute node from the available list of compute nodes.
& I' z) B' L4 s
; l6 Z0 m% F, }; b/ DSet ovn-cms-options with enable-chassis-as-gw in Open_vSwitch table’s external_ids column. Then if this chassis has proper bridge mappings, it will be selected for scheduling gateway routers.
8 P& U4 o0 z7 m0 C4 S
/ p$ e. B! U5 |. G1 A% {5 v' J- u$ w8 _ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw. Z1 X. ~/ D, e% L$ Y# L" d. U! {
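The bridge mappings mentioned above live in the same external_ids column. A configuration sketch, where the physical network name physnet1 and the bridge br-ex are placeholder names that must match your provider network setup:

```shell
# physnet1 and br-ex are placeholders; match your provider network and bridge.
ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-ex
```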
Start, or restart, the neutron-server service.

Using the systemd unit:

systemctl start neutron-server
Network nodes

Deployments using OVN native layer-3 and DHCP services do not require conventional network nodes because connectivity to external networks (including VTEP gateways) and routing occurs on compute nodes.

Compute nodes

Each compute node runs the OVS and ovn-controller services. The ovn-controller service replaces the conventional OVS layer-2 agent.

Install the ovn-host, openvswitch and neutron-ovn-metadata-agent packages (RHEL/Fedora).

Install the ovn-host, openvswitch-switch and neutron-ovn-metadata-agent packages (Ubuntu/Debian).

Start the OVS service.

Using the systemd unit:

systemctl start openvswitch (RHEL/Fedora)
systemctl start openvswitch-switch (Ubuntu/Debian)

Configure the OVS service.

Use OVS databases on the controller node.

ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642

Replace IP_ADDRESS with the IP address of the controller node that runs the ovsdb-server service.

Enable one or more overlay network protocols. At a minimum, OVN requires enabling the geneve protocol. Deployments using VTEP gateways should also enable the vxlan protocol.

ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan

Note

Deployments without VTEP gateways can safely enable both protocols.

Configure the overlay network local endpoint IP address.

ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS

Replace IP_ADDRESS with the IP address of the overlay network interface on the compute node.

Start the ovn-controller and neutron-ovn-metadata-agent services.

Using the systemd unit:

systemctl start ovn-controller neutron-ovn-metadata-agent

Verify operation
Each compute node should contain an ovn-controller instance.

ovn-sbctl show
<output>

Deployment steps
Download the quickstart.sh script with curl:

curl -O https://raw.githubusercontent.co ... aster/quickstart.sh

Install the necessary dependencies by running:

bash quickstart.sh --install-deps

Clone the tripleo-quickstart and neutron repositories:

git clone https://opendev.org/openstack/tripleo-quickstart
git clone https://opendev.org/openstack/neutron
Once you’re done, run quickstart as follows (3 controller HA + 1 compute):

# Exporting the tags is a workaround until the bug
# https://bugs.launchpad.net/tripleo/+bug/1737602 is resolved
export ansible_tags="untagged,provision,environment,libvirt,\
undercloud-scripts,undercloud-inventory,overcloud-scripts,\
undercloud-setup,undercloud-install,undercloud-post-install,\
overcloud-prep-config"

bash ./quickstart.sh --tags $ansible_tags --teardown all \
    --release master-tripleo-ci \
    --nodes tripleo-quickstart/config/nodes/3ctlr_1comp.yml \
    --config neutron/tools/tripleo/ovn.yml \
    $VIRTHOST

Note

When deploying directly on localhost use the loopback address 127.0.0.2 as your $VIRTHOST. The loopback address 127.0.0.1 is reserved by ansible. Also make sure that 127.0.0.2 is accessible via public keys:

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Note

You can adjust RAM/VCPUs if you want by editing config/nodes/3ctlr_1comp.yml before running the above command. If you have enough memory, stick to the defaults. We recommend using 8GB of RAM for the controller nodes.

When quickstart has finished you will have 5 VMs ready to be used: 1 for the undercloud (TripleO’s node to deploy your openstack from), 3 VMs for controller nodes and 1 VM for the compute node.

Log in to the undercloud:

ssh -F ~/.quickstart/ssh.config.ansible undercloud

Prepare overcloud container images:

./overcloud-prep-containers.sh

Run inside the undercloud:

./overcloud-deploy.sh
Grab a coffee; this may take around 1 hour (depending on your hardware).

If anything goes wrong, go to IRC on OFTC, and ask on #oooq.
Description of the environment
Once deployed, inside the undercloud root directory two files are present: stackrc and overcloudrc, which will let you connect to the APIs of the undercloud (managing the openstack node), and to the overcloud (where your instances would live).

We can find out the existing controllers/computes this way:

source stackrc
openstack server list -c Name -c Networks -c Flavor
+-------------------------+------------------------+--------------+
| Name                    | Networks               | Flavor       |
+-------------------------+------------------------+--------------+
| overcloud-controller-1  | ctlplane=192.168.24.16 | oooq_control |
| overcloud-controller-0  | ctlplane=192.168.24.14 | oooq_control |
| overcloud-controller-2  | ctlplane=192.168.24.12 | oooq_control |
| overcloud-novacompute-0 | ctlplane=192.168.24.13 | oooq_compute |
+-------------------------+------------------------+--------------+
Network architecture of the environment

[Figure: TripleO Quickstart single NIC with vlans]
Connecting to one of the nodes via ssh

We can connect to the IP address in the openstack server list we showed before.

ssh heat-admin@192.168.24.16
Last login: Wed Feb 21 14:11:40 2018 from 192.168.24.1

ps fax | grep ovn-controller
20422 ? S<s 30:40 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/openvswitch/ovn-controller.log --pidfile=/var/run/openvswitch/ovn-controller.pid --detach

sudo ovs-vsctl show
bb413f44-b74f-4678-8d68-a2c6de725c73
    Bridge br-ex
        fail_mode: standalone
        ...
        Port "patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"
            Interface "patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"
                type: patch
                options: {peer="patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"}
        Port "eth0"
            Interface "eth0"
        ...
    Bridge br-int
        fail_mode: secure
        Port "ovn-c8b85a-0"
            Interface "ovn-c8b85a-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.17"}
        Port "ovn-b5643d-0"
            Interface "ovn-b5643d-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.14"}
        Port "ovn-14d60a-0"
            Interface "ovn-14d60a-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.12"}
        Port "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"
            Interface "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"
                type: patch
                options: {peer="patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"}
        Port br-int
            Interface br-int
                type: internal
Initial resource creation

Well, now you have a virtual cloud with 3 controllers in HA, and one compute node, but no instances or routers running. We can give it a try and create a few resources:

Initial resources we can create

You can use the following script to create the resources.
ssh -F ~/.quickstart/ssh.config.ansible undercloud

source ~/overcloudrc

curl http://download.cirros-cloud.net ... 5.1-x86_64-disk.img \
    > cirros-0.5.1-x86_64-disk.img
openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img \
    --disk-format qcow2 --container-format bare --public

openstack network create public --provider-physical-network datacentre \
    --provider-network-type vlan \
    --provider-segment 10 \
    --external --share

openstack subnet create --network public public --subnet-range 10.0.0.0/24 \
    --allocation-pool start=10.0.0.20,end=10.0.0.250 \
    --dns-nameserver 8.8.8.8 --gateway 10.0.0.1 \
    --no-dhcp

openstack network create private
openstack subnet create --network private private \
    --subnet-range 192.168.99.0/24
openstack router create router1

openstack router set --external-gateway public router1
openstack router add subnet router1 private

openstack security group create test
openstack security group rule create --ingress --protocol tcp \
    --dst-port 22 test
openstack security group rule create --ingress --protocol icmp test
openstack security group rule create --egress test

openstack flavor create m1.tiny --disk 1 --vcpus 1 --ram 64

PRIV_NET=$(openstack network show private -c id -f value)

openstack server create --flavor m1.tiny --image cirros \
    --nic net-id=$PRIV_NET --security-group test \
    --wait cirros

openstack floating ip create --floating-ip-address 10.0.0.130 public
openstack server add floating ip cirros 10.0.0.130

Note
You can now log in to the instance if you want. In a CirrOS >0.4.0 image, the login account is cirros. The password is gocubsgo.

ssh cirros@10.0.0.130
cirros@10.0.0.130's password:

ip a | grep eth0 -A 10
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:85:b4:66 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.5/24 brd 192.168.99.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe85:b466/64 scope link
       valid_lft forever preferred_lft forever

ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=63 time=2.145 ms
64 bytes from 10.0.0.1: seq=1 ttl=63 time=1.025 ms
64 bytes from 10.0.0.1: seq=2 ttl=63 time=0.836 ms
^C
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.836/1.335/2.145 ms

ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=52 time=3.943 ms
64 bytes from 8.8.8.8: seq=1 ttl=52 time=4.519 ms
64 bytes from 8.8.8.8: seq=2 ttl=52 time=3.778 ms

curl http://169.254.169.254/2009-04-04/meta-data/instance-id
i-00000002
+ H* l/ [6 `3 G* a7 b+ ]: e: L% q
/ @* Q8 l$ w4 J( X9 T# E- z2 h
7 c9 B6 R# \8 F v y& J& r |
|