From YouTube: Episode 17: The CNI Specification (Part 2)
Description
Following up on Episode 16 where we explored the initial CNI plugins specification, let's learn how to build a CNI from scratch with Michael Zappa.
In this episode you'll learn about:
- How to build a CNI manually
- How the CNI specification and cnitool work
- Visualizing the overall architecture of CNI networking plugins
A
Go live... oh, we are live. Okay, so hi everyone, welcome to this week's livestream. Today we have Mike and Matt joining us, and Amin joining us this week, and we're going to be talking about the CNI.
B
Yeah, totally, thank you. Right now I'm an engineering manager at Stateless, where we're trying to make networking easy. I have a background in networking, systems, DevOps, and software engineering; I've been doing this since I was 10. The CNI has been a focus of mine for about a year or so now, so I'm quite pleased to be talking here.
A
Thanks, Mike. We also have Matt joining us today, so would you like to talk a little bit about yourself?
C
A
B
B
A quick CNI refresher: the commands, inputs, and outputs. We're going to break it down with a quick little diagram of what we're going to try to build, a simple control plane node and a worker node. We're going to go through the code workflow from kubectl all the way down to the specific plugins, also tying in containerd and CRI-O, and then we're gonna jump into a few demos where we kind of just manually set up Kubernetes networking, just some manual steps.
B
Just so we see some of the moving parts that most people don't get to see. Then we're going to just use a quick little shell script as an example. Jay mentioned that he wanted to see cnitool, so I brought that in as a quick example; unfortunately he's not here. And then, if there's enough time, what I'm going to do is try to replace some of the static routes with a more dynamic approach, using BGP. That's only if we have enough time; I'm not certain how this will go, it may run over.
B
Sweet deal, so let's go for it. We're gonna just break this down really quick. I'm gonna use Miro instead of Google Slides. This will probably be a refresher for most people; I think everybody has more than enough experience with the CNI at the high level. Can everybody read this, or should I zoom in one more?

Sweet deal, so let me know, I can just zoom in and I'll scroll a little bit if need be. For the most part, we all know that the CNI is effectively a specification to set up and tear down the network and verify it, so the CNI plugins all have various inputs and outputs. The high-level ones you'll see are our environment variables, which you can break down: CNI_NETNS, which is actually a path to the specific network namespace.
B
In
this
case
the
actual
spec
doesn't
actually
specify.
You
know
that
you
need
to
use
network
name
spaces,
which
is
a
little
confusing
because
it
says
cni
net
nest.
However,
they
you
know,
use
the
term
isolation
domain,
the
next
one
being
the
cni
container
id,
which
you
know,
is
a
source
of
confusion
for
some
people,
because
in
kubernetes
you
can
have
multiple
containers
inside
of
a
pod,
so
the
container
id-
and
this
this
point
would
be
actually
the
infrastructure
pod.
I
mean
infrastructure
container
id,
and
then
we
have
our
most
famous
one.
B
That
most
people
know
is
the
cni
command
which
we'll
just
you
know,
hit
on
that
a
little
bit
later,
but
we
do
have
our
add
dell
check
conversion
and
possibly
possibly
some
more
when
it
comes
to
cni2o,
then
we
have
our
arguments
and
our.
If
you
know
our
interface
name,
which
is
generally
eth0
and
cnipath,
which
can
point
to
you,
know
your
binary
location
generally
or
up
c9
bin
and
you
can
actually
pass
that's.
Actually,
you
can
pass
multiple
locations.
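For reference, the inputs just described come together roughly like the sketch below when a runtime (or a person) invokes a plugin by hand; the container ID, namespace path, network name, and subnet are illustrative assumptions, not values from the demo:

```bash
# Minimal sketch of invoking a CNI plugin directly: parameters go in as
# environment variables, the network configuration goes in on stdin.
# Container ID, netns path, network name, and subnet are illustrative assumptions.
CNI_COMMAND=ADD \
CNI_CONTAINERID=example-container-id \
CNI_NETNS=/var/run/netns/example-ns \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": { "type": "host-local", "subnet": "10.240.0.0/24" }
}
EOF
```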
B
Now we have ADD, which I generally explain as setting the network up, or, going back to the CNI specification, we're adding the container to the network. DELETE obviously does the opposite: it removes the network, cleans everything up, or removes the container from the network. In a case that will show up above, we're simply deleting the network namespace, which detaches it from that Linux bridge.

The next big piece of the puzzle here is what we call our network configuration, and there are actually two types of configuration: there's the network configuration, which is passed in via standard in, and then there's another one called the runtime configuration, which is passed in via the container runtime, which could be containerd, CRI-O, or cri-dockerd. We're gonna be using this configuration a lot, we'll see it like five or six more times, but I wanted to drive home that this is effectively what we're going to be using, so keep it in mind.
B
One thing to note is that type references, through the CNI_PATH, which here will be /opt/cni/bin, an actual executable in that directory. In this network configuration we have bridge and host-local, so one for establishing a bridge network and the other one for assigning IP addresses. Generally, out of the container networking plugins repo we get some out-of-the-box default plugins: bandwidth, bridge, dhcp, firewall, host-device, and more. I threw in vxlan; it doesn't exist yet, it's just something I'm working on.
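A network configuration along those lines, dropped into /etc/cni/net.d as a file, looks roughly like this sketch; the file name, network name, bridge name, and subnet are assumptions for illustration:

```bash
# Sketch of a bridge + host-local network configuration file.
# File name, network name, and subnet are illustrative assumptions.
cat <<'EOF' | sudo tee /etc/cni/net.d/10-examplenet.conf
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.240.0.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
EOF
```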
B
These are all ones that you can usually go pull down; they come down with, I think, kubeadm, and you install them.

I've already mentioned that we actually have a few container runtimes that support the CNI. This one, cri-dockerd, is coming out of Mirantis; since Kubernetes has pulled the dockershim out as of 1.24, they have gone and created cri-dockerd, which also supports the CNI.
B
Oh, here's one important thing: all plugins are actually executed in the root network namespace, and they have information passed to them to give them knowledge of the specific network namespace, via the CNI_NETNS environment variable, which is sometimes a source of confusion; some people don't know where these are actually executed, but they're just a simple executable. One thing also to know is that there's been some confusion around the git tagged versions and the CNI spec versions: they do not equal each other.

So if you are going to specify version 1.0.1 in your network configuration, that will not work and you'll have a ton of fun. The git tag versions are generally associated with libcni and the releasable plugins.
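One way to see the distinction is to ask a plugin binary which spec versions it supports with the VERSION command; a quick sketch, assuming the bridge plugin is installed in /opt/cni/bin:

```bash
# The VERSION command reports the CNI spec versions a plugin supports,
# which is separate from the plugin release's git tag.
echo '{"cniVersion":"0.4.0"}' | CNI_COMMAND=VERSION /opt/cni/bin/bridge
```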
B
And
one
thing
to
note
is
that
let's
zoom
in
one
thing,
I
did
mention
that
there's
you
know
two
types
of
configuration:
the
runtime
configuration
and
the
networking
configuration.
But
what
comes
first,
you
know
if
I
specify
two
values.
You
know
one
in
the
runtime
configuration
and
the
other
and
the
network
configuration
which
actually
takes
precedence.
The
runtime
will
override
anything
in
a
network
configuration
the
network.
Configuration
is
generally
meant
to
be
static
and
the
runtime
configuration
is
built
to
be
passed
in
through
the
container
runtime.
So
a
little
bit
more
static.
B
An
example
could
be.
You
know
your
port
mapping
and
your
runtime
configuration
actually
establishing
the
ip
tables
rules.
There's
more
for
ipam
one.
The
bandwidth
plugin
actually
takes
a
bandwidth
settings
from
your
container
runtime
container
d.
I
even
like
attached
a
little
code
sample,
which
you
can
actually
find
this
in
the
sandbox
run.
Sandbox
underscore
run
dot,
go
so
we're
actually
building
up
the
values
that
we
passed
in,
I'm
going
to
give
a
value
a
little.
You
know
visual
of
that
later.
B
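To make the runtime configuration idea concrete, this is roughly what a chained plugin like portmap sees on stdin once the runtime has injected the port-mapping capability args; the ports and addresses are illustrative assumptions:

```bash
# Sketch of the stdin a runtime hands the portmap plugin after merging in
# capability args (runtimeConfig). Ports and addresses are illustrative.
cat <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "portmap",
  "runtimeConfig": {
    "portMappings": [
      { "hostPort": 8080, "containerPort": 80, "protocol": "tcp" }
    ]
  },
  "prevResult": {
    "ips": [ { "version": "4", "address": "10.240.0.5/24" } ]
  }
}
EOF
```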
D
Mike, I have one question here. When the portmap is being used, is it used for NodePorts, or why should I use portmap as a CNI plugin?
B
So right now, when it comes to the port mapping, I'm not actually sure if that does the NodePort stuff in Kubernetes. I believe that's more for if you were to use something like Podman, so there's actually a little bit of confusion there; I took a little bit of time to dive into that.

When it comes to CRI-O, it actually implements the host port code in the container runtime itself, and I believe containerd uses the portmap plugin to do that, which actually lays down the iptables rules. Now, there's some confusion there when it comes to kube-proxy, and someone else may be more of an expert on that; I would be kind of curious where that lands, because I know there's a NodePort chain.
B
So this might be a duplicated piece. In the realm of Kubernetes, portmap may not be used, and kube-proxy may take over that role. So there are some ambiguous things about who's actually doing what; I don't think this is actually used in the realm of Kubernetes.

I know if you were to go into the /etc/cni/net.d directory, depending on whether you're using Podman, I think it's the 87-prefixed configuration, or if you look at the default containerd configuration, you'll actually see that they enable a portmap, which they would use via probably crictl or what's called nerdctl. So it's probably not used in the realm of Kubernetes, but that would actually probably be my next one.
B
You know, a presentation diving really deep into kube-proxy. But for the most part I don't think it's used in Kubernetes; if I'm wrong there, please let me know, I'll buy you a beer. Did I answer you, Amin?
B
So this is what we're gonna build today. We effectively have two nodes, a control plane and a worker node, and they're joined at layer two, just with a switch. Let me zoom in, sorry guys, I forget that you might not be able to read this. This is effectively the Flannel setup using the host-gw backend; it's the most simplistic network plugin that you can probably use. It's not the most feature-rich, it doesn't have network policy, so probably not the most useful.

It's effectively connecting our isolation domain, a network namespace in this case, to the host and establishing connectivity there. When we start talking pod-to-pod networking, that's a Kubernetes construct, not a CNI construct. What I mean by that is, in pod-to-pod networking I should be able to reach from this network namespace over here to the network namespace running on node 2; however, they're on two separate machines and two different networks, so what we would actually have to do is establish routing between them.
B
So in this case we actually have to add a route to point to the other machine. Generally, that's not done in the CNI plugins; that's done outside of them, in more of an agent or DaemonSet in the Kubernetes world. You'll never really see the CNI plugin dropping routes in the root network namespace; generally that happens inside the network namespace itself, just because these are kind of moving targets and a little bit more dynamic.

So hopefully that kind of ironed out where the roles and responsibilities lie. Obviously in Kubernetes every pod has an IP address, and that's required for the pod-to-pod networking; it wouldn't work otherwise. The nice thing about using the CNI is that you don't actually need to do this setup yourself, and you don't need to rebuild the kubelet or your container runtime to support it. As an example, Calico actually doesn't use a Linux bridge.
B
They use a veth from the root network namespace into the network namespace itself, and you can actually do whatever you want. This may start the unholy battle, but with the CNI you can make very opinionated setups where, on the left side, we actually have a network namespace attached to a Linux bridge that's participating.

For the most part, what we're going to be doing to complete our demos is utilizing these two network configurations. I highlight the subnet because, if you're going to go ahead and create your own network plugin, something you probably need to deal with is your subnetting, because the last thing you want is IP conflicts on your network; that may cause some grief for a lot of people.
B
So
it's
just
one
thing
that
you
probably
want
to
keep
in
mind.
The
other
piece
that
I
mentioned
earlier
is
establishing
you
know
your
routes
on
your
actual
nodes,
to
establish
your
pod
to
pod
networking.
So
in
kubernetes
you
probably
want
to
make
sure
your
subnets
are
not
overlapping
and
that
you
have
appropriate
routing
to
get
your
other
machines
assuming
you're.
You
know
layer
three
at
that
or
layer.
Two
depends
on
your
network.
A
B
Oh, that depends. As an example, Flannel itself doesn't make use of any of the community plugins that are usually in /opt/cni/bin, like bridge and host-local; I don't believe Flannel makes use of those. Calico definitely does not; actually, I believe, unless they're dynamically calling these plugins under the covers, Calico handles everything, and in /opt/cni/bin you'll have two executables in that directory, calico and then calico-ipam.

So I think that's a very "it depends" question. You could search around; KubeVirt, I believe, actually makes use of it. I know that's not a CNI plugin, but it makes use of the bridge plugin, the community-maintained one. Did that answer you, Jay?
B
Sweet deal, so I'll see if I got that answered. We're going to start with a pretty in-depth workflow, and I'll zoom in. What we're going to be talking about, effectively, is the CNI ADD, the CNI DELETE, and the CNI CHECK, and we're going to go from kubectl all the way down to the specific CNI plugins. In this case we're going to be making use of our bridge and host-local plugins again, and we're going to go through all the specific exported methods, with the exception of a few non-exported methods.

Hopefully this starts to show a pattern of how everything is laid out. I don't have this in specific swim lanes, where I would put the kubelet, the CRI, the specific container runtime, the CNI code, and then the specific plugins, but I'll get working on that and commit it to the CNI repo, since everybody's kind of curious about what happens. Generally, when you see these workflows, you just see the kubelet...
B
...the CRI method, and then you go down to the specific CNI plugins, but there are a few moving pieces in between that I just wanted to capture and make people aware of. I know people have mentioned go-cni and more, and probably ocicni if you're CRI-O people. But let's go for this, and by the way, I'm sorry, this is just Linux.

I could not get the Windows side documented yet, so if you want to help me document the Windows side, especially if you're a runhcs person, I'd love to talk to you. So, cool, we're gonna start our little journey. We're a user, we're going to kubectl apply a specific pod, which obviously goes through the kube-apiserver and more, but once it gets down to the kubelet, the kubelet says: hey, I notice I need to actually have a pod scheduled.
B
We actually hit SyncPod, and I have the code up above in this same frame, but here we're actually going to say: hey, I know that I have a pod that I need to schedule, I'm going to go ahead and execute RunPodSandbox, which effectively says I need to go do something. But one piece of the puzzle that seems to be overlooked is Status.

The Status CRI method is quite important: if you have nothing in your /etc/cni/net.d directory, or whatever directory you're using, your node will actually report NotReady. So if you do kubectl get nodes, it'll say not ready, and pods actually won't schedule to that node. So the CRI method that's implemented through our container runtimes, CRI-O or containerd, is actually...
B
...looking for a few pieces of data. One: do you actually have files in the /etc/cni/net.d directory? If not, it reports not ready. And if it has multiple configurations, it's going to pick the first one in alphabetical order. So if you have a 1-prefixed configuration versus 10, 12, and 87, the 1 is going to go first here, so that's something to keep in mind. We'll just assume that we have now dropped a file in that directory and it's in the proper format.
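A quick way to see both sides of that on a node, assuming the default containerd paths (the node name is a placeholder):

```bash
# If this directory is empty, Status reports the network as not ready and the
# node shows NotReady; with several files, the alphabetically first one wins.
ls /etc/cni/net.d/
kubectl get nodes
kubectl describe node <node-name> | grep -A3 "Ready"   # inspect the Ready condition
```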
B
The next piece is kind of creating the network namespace, and there are a lot of questions around the ordering of operations here, because the dockershim had, not a different way, but it wasn't the most straightforward way. In the new container runtimes, without the world of dockershim, containerd is actually going in and creating the network namespace, and then it's creating our sandbox container, or what we call in the realm of Kubernetes the infrastructure container.

Most people know it as the pod container, and then the rest of the containers are spun up. But before all that happens, we're actually going to call, depending on your runtime, in go-cni we go through Setup, and Setup actually goes and calls into the libcni code, which is our AddNetworkList. But if you're actually in the CRI-O world, you have a different method.
B
There you have SetUpPodWithContext, which, once it's hit, calls our AddNetworkList. Immediately after, in the RunPodSandbox method, they actually call GetPodNetworkStatusWithContext, which calls libcni's CheckNetworkList and verifies the network is actually set up the right way; if not, it fails. Right now go-cni does not do that; I'm actually implementing a similar feature. But we're going to continue on with what happens after AddNetworkList.

The one method I said we're going to talk about really quick: AddNetworkList actually calls each plugin in a loop, which then calls our addNetwork, which is our non-exported method. You can actually start to see how the plugins are executed with this line, which we dive into a little bit later in the next block: we take our plugin path, our network configuration, all of our arguments that I talked about earlier, and our runtime config, and execute the plugin. In this case, since it's ADD, and ADD actually has a JSON result, we're going to execute, in libcni, ExecPluginWithResult, which in turn calls execPlugin and executes our plugin in the flow.
B
As a reminder, we just see the same network configuration below; there's nothing too special. In the case here we're calling the bridge plugin, which inside has an ipam key that calls host-local to assign the pod an IP address, and potentially create the Linux bridge, which in this case is going to be cni0, and assign it an IP address. And we'll go through the failure condition: if any of these pieces, or anything in this flow, fails, the pod setup fails, and that'll be anything with a non-zero exit code.

So if your plugin does happen to fail, please ensure that it actually exits non-zero, so you get a nice little message; these messages are visible through kubectl, crictl, nerdctl, and more, so it gives you a little bit of a breadcrumb to start looking at why something failed. And one more piece that's really important is the result, which not too many people have dived into.
B
Maybe some people really do dive deep into this, but there are some really key pieces in here that are used, such as the interfaces and IPs. All of this is written to disk, and I have a picture here on the right that shows everything the CHECK command would actually use to verify that the network is still in its desired state. But the one piece I think is rather important is the IP address, because if this IP address is wrong for whatever reason, your pod metadata is wrong.

Thus your endpoints are wrong, and then, if you have a Service and you're directing your traffic to the wrong endpoint IP address, you're gonna have some fun, because that's simply not gonna work. So that's just one of the impacts of this being wrong in any form.
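For reference, the ADD result being described, the JSON a plugin prints on stdout and that gets cached to disk, looks roughly like this sketch; the interface names and addresses are illustrative assumptions:

```bash
# Sketch of a CNI ADD result (spec 0.4.0 shape). Names and addresses are
# illustrative assumptions; "interface": 2 points at the eth0 entry below.
cat <<'EOF'
{
  "cniVersion": "0.4.0",
  "interfaces": [
    { "name": "cni0" },
    { "name": "veth1a2b3c4d" },
    { "name": "eth0", "sandbox": "/var/run/netns/example-ns" }
  ],
  "ips": [
    {
      "version": "4",
      "address": "10.240.0.5/24",
      "gateway": "10.240.0.1",
      "interface": 2
    }
  ],
  "routes": [ { "dst": "0.0.0.0/0" } ],
  "dns": {}
}
EOF
```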
B
Also, libcni caches the result, keyed on the network name, which is part of the configuration list, the network configuration, the container ID, and an interface name. Up above I've given an example where we have /var/lib/cni, the containerd-net network name, the pause container ID, and then eth0. More specifically, we can start to see all the values that are being passed in via the container runtime, one being the CNI_ARGS and the other one being the capability args, which, whether you're using Kubernetes or something else, are passed in regardless; they just have different use cases.

So that covered CNI ADD. Do we have any questions regarding ADD? I think this is probably more information than most people want.
D
Yeah, so just to understand this part: when this output of the CNI plugin is returned, it returns to the CNI code inside the runtime? When the CNI plugin returns, it comes back to the runtime, to the CNI implementation of the runtime, and the runtime needs to understand these fields, like IPs and interfaces, and that's abstracted by the go-cni lib or whatever?
B
Yeah, so I think I actually have this right here. This is in containerd, so we actually capture, let's see, from Setup we have the results. What they're actually grabbing for the most part from the results, when it comes into containerd, is the IP addresses; that's what they really care about here. That's where it is reported back up to Kubernetes; if you did kubectl get pod with -o wide, this is where that IP address is coming from.
A
B
Sweet deal, cool. So that is generally where that one piece of information, when it comes to the CNI result, comes from. It literally just looks like we're grabbing the IP address and reporting that up higher through the CRI to the kubelet. The next piece of the puzzle, obviously, is walking through DELETE, which is effectively the same flow, just calling different methods. In this case we're calling kubectl delete and going through our KillPod and our StopPodSandbox, and continuing on; actually, we're gonna go here.
B
However, the Status command, or Status method, from the CRI: if you were to, say, have a pod running and you actually went and replaced or deleted the contents of /etc/cni/net.d, or made permission changes to your /opt/cni/bin, it could actually report back to the kubelet, and those pods may not terminate the right way. Eventually those will be rescheduled, but they'll be forever stuck in a Terminating state until you resolve the issue with your node being marked NotReady, or whatever specific change you had made.
B
So it's one thing to keep in mind that you can actually break your stuff, and then teardown will absolutely fail, which gets a little annoying; I believe a big network plugin just kind of caused that. From here we're going, from either containerd or CRI-O, through our StopPodSandbox and continuing on, obviously for CRI-O that's via ocicni, and we're tearing down the pod sandbox, going through remove, which effectively calls our DelNetworkList, delNetwork, and exec plugin.

That's ExecPluginWithoutResult: there is no JSON output when it comes to deletion. Then we actually have the exec of the plugin, which goes into the specifics here. When it comes to deleting, the container runtime being responsible for deleting the network namespace actually makes life a little bit easier.
B
However,
there
comes
with
some
catches
like
the
host.
Local
plug-in
must
reclaim
that
ip
address
or
it'll
fail
and
the
same
with
the
bridge.
But
luckily,
when
you
tear
down
the
network,
namespace
sliding
back
to
the
diagram,
it
tears
down
everything
in
there.
Obviously
so
you
don't
need
to
go
through
serially
delete
each
zero,
which
will
entail
delete
the
v
pipe
and
more
so
that
makes
life
quite
easy.
B
So
if
we
exit
code
zero,
everything
goes
moves
forward,
then
that
pod
is
actually
successfully
teared
down
now,
obviously,
the
opposite.
What
I
mentioned
earlier,
if
you
were
to
just,
go
and
delete
your
contents
of
etsy
c9netd
you'll,
get
an
error
similar
to
you
know
your
cni
plug-in,
not
initialized,
and
luckily
you
know,
through
your
coupe
ctl
nerd,
ctl
cry
ctl
or
whatever
cl
cli
you're
using
it
should
report
something
back
to
that.
Sometimes
it
gets
a
little
buried
in
the
noise,
but
it
sometimes
is
helpful
there
to.
B
And one thing that I've noticed, and I know it's more on the Windows side, so unfortunately I can't talk too much about that, is the CHECK command, which right now in both container runtimes isn't particularly implemented. CRI-O, like I mentioned earlier, effectively just calls CHECK immediately after ADD; however, there's no way to actually say, hey, I want to go run, you know, nerdctl against my container ID and do a check, or toggle it on in some specific way, or even say, hey, if my CHECK command fails, I want to reschedule this pod, or I want to try to reconcile the state to make it accurate.

That is some behavior that I'm trying to implement in containerd, but it's pretty far from ready; I broke something already. The one thing is that they all go through the exact same flow: your CheckNetworkList method, to your checkNetwork, to ExecPluginWithoutResult, and that could have a result, it's just not really defined there.
B
Obviously, we call the plugin-specific CHECK command, and exit zero means we're fine, while a non-zero exit code could mean that the state of that network namespace, or the isolation domain, has changed, which in turn could be bad. I have an issue where my MAC addresses change sometimes, so that's kind of catastrophic in certain ways. So, something to look forward to in the future. And I'm sorry, I skipped over questions for the stop flow.

We could do stop and check questions if we have any, and if not, we can just hop into some really quick manual demos.
B
Just purely manually: we're going to do one with a shell script, and in the shell script we'll just do it very statically. On our control plane node we're gonna use cnitool, not to create a pod, but a network namespace that can participate in the pod networking, and then we're gonna just drop in these two network configurations manually and move on, and I will be adding the routes manually.
D
Yeah, just one thing: Jay has one good question.
B
So I wanted the whole CNI spec implemented. Right now you generally just have ADD and CHECK, I mean ADD and DELETE, sorry about that, and not CHECK. I thought it'd be useful for us, because we do have a case where we always want to verify our network is still set up, that someone hasn't gone in with root access and deleted it. We want to be able to verify our tc rules as well, because there was an issue that we had found.

So I thought it would be quite useful to work on the CNI CHECK. There's actually a bigger one that I want to work on when CNI 2.0 comes out, and hopefully it does.
B
Hopefully what lands is UPDATE, because we have business use cases for updating the network without tearing down the pod, which actually has a lot of cool, interesting problems, so that will be a fun one and I would love to get it implemented. Andrea, yep, that's where I started digging into the same one, but that was on the Windows side, I believe.

The one thing that I want to do, yup, is get with Microsoft and dive into it more, because when it comes to Windows networking it's a little undocumented and it doesn't actually seem to follow the normal convention. The network setup seems to happen in the high-level container runtimes, but for Windows it looks like it's using the runc fork, runhcs, and that's where it seems to make all those calls to the Host Network Service, which I want to fully document.
B
That's awesome, yeah, totally, because I definitely want to help you guys on the Windows side get a little bit more visibility into what's going on. They did just publish their latest documentation, which I can send a link to; a Microsoft guy, I think his name was Danny, sent it over to me. So there are some efforts moving; I know Microsoft is making big headway when it comes to container networking.

So I think in the future we'll all be able to answer those questions and fix those problems with the race conditions for you guys. Cool, hopefully I got your question answered, Jay. All righty, let's just kick this off really quick. I might run out of time, so I might fly through these. What we're gonna do is demonstrate, like right now as an example... and oh, Amin, can everybody read this, or do I need to zoom in a little bit?
B
The terminal is fine? Okay, cool. So right now our nodes are marked NotReady and, as I explained earlier with containerd, and I'm using containerd by the way in this setup, this directory is actually empty, so we need to go ahead and resolve that for both our nodes. If I were to just go kubectl apply, and I can send out these files, wrong directory, demo one, I'm going to spin up two pods.

So we're just going to go ahead and resolve this. We're going to move our network configuration, the one I showed you guys quite a lot, into place, and it's just templated, because I'm gonna manually populate the subnet with my handy dandy script. So why don't we go ahead and do that and get things spun up. There are no secrets here, I'm just copying this thing into here. So let's go ahead and run it; all right, it should have worked, if it didn't... all right, cool.
B
So all the pods on our control plane node will actually have this subnet, and we'll see if things start kicking off. All right, cool. And by the way, I have constraints on these pods to schedule on one node or the other. In this case we're now running, because the node is marked Ready, but node two is still stuck, so we're just gonna go ahead and fix that really quick, setup two, and it should be pretty quick. Where's my get nodes... all right, come on.

So that /etc/cni/net.d directory: there's a file watch on that directory, so if you went ahead and deleted that directory, that'll cause some problems. And also, why that took a little bit: there's a sync loop up in the kubelet that's also checking, I believe it's every 10 seconds or every 30 seconds, so if you missed that train, then you get to wait the 30 seconds.
B
So in this case probably all of our pods are running, as it shows. What we had to do manually, obviously, was copy the network configuration into that location, but also ensure that all of our binaries are in place in /opt/cni/bin. A lot of plugins, like Multus or others, drop those in via a DaemonSet.

So there are additional things you've got to think of when creating your own network plugin, like dropping the files in the appropriate locations and applying the right subnets so you don't have overlap. In this case we have two /24s, obviously two different ranges, and from node one...
B
...I'm gonna just verify my networking. But there's a problem, and I mentioned it earlier and kind of gave it away, oops: there's no routing established between these, because we did this manually, and most network plugins are actually dropping these routes in one way or another, depending on your setup.

So if we want to actually establish our pod-to-pod routing, we've gotta actually add the routes, so we're gonna go ahead and do that: via 192..., let's see, .12, dev eth1. Sweet deal. So we've just done it on one node; we obviously would need to do the reverse.
B
So if I wanted our worker node, in this case node two, to be able to ping or reach the pods on node one, we have to do the same exact steps here, if I can type the right way, and these are just our egress ports, so dev eth1. Cool. So why don't we just verify that we can actually reach... all right, cool. So, real fast: we kind of manually established our pod-to-pod networking, and we dropped in our network configuration.
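The static routes being added by hand amount to roughly the following host-gw style entries; the pod subnets and node addresses here are assumptions based on the demo's ranges:

```bash
# Sketch of the manual host-gw style pod-to-pod routes.
# Pod subnets and node IPs are assumptions based on the demo's ranges.
# On node 1: reach node 2's pod subnet via node 2's address.
sudo ip route add 10.241.0.0/24 via 192.168.50.12 dev eth1
# On node 2: reach node 1's pod subnet via node 1's address.
sudo ip route add 10.240.0.0/24 via 192.168.50.11 dev eth1
```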
B
Well, if I actually look at it, we can actually see our result, which we get via the container runtime; the runtime gets this but is not responsible for it, this is actually part of libcni. So we can see all the information that was passed down to libcni, and this has been cached. Now, if I were to execute the CHECK command, it's gonna go and say: hey, is all this still accurate? If not, ideally it blows it away.

So there we go. That is the most basic setup, and this will keep working. I'm not certain how host-local, the IPAM plugin, handles restarts; we could find out, but not here. But that is the most simplistic way of going about, I mean, establishing your own quote-unquote network plugin.
B
I did create another one. This one goes through like wildfire, and if I wanted to actually execute it, we're gonna manually go through all the steps. This is actually what the bridge plugin is doing: we're creating a veth pair, we're moving it into the pod's network namespace, and we're gonna sort of take the role of the IPAM and assign an IP address; in this one I just used 10.240.0.12/24.

We're going to create the bridge if it doesn't exist, in this case it already does; we'll set the bridge to up; we'll set the veth that's in the host network namespace to up; we'll do the same to eth0; and then we'll actually attach the veth in the root network namespace to the cni0 bridge, we'll assign an IP address, and then we'll exec in and assign a default route. If you don't assign a default route, you can only reach IP addresses that are in that /24.
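A rough sketch of those manual steps, in the order just described; the namespace name, interface names, and addresses are illustrative assumptions:

```bash
# Sketch of doing the bridge plugin's job by hand, as described above.
# Namespace, interface names, and addresses are illustrative assumptions.
sudo ip netns add demo-ns                                   # stand-in for the pod's netns
sudo ip link add veth-host type veth peer name veth-pod     # create the veth pair
sudo ip link set veth-pod netns demo-ns                     # move one end into the netns
sudo ip netns exec demo-ns ip link set veth-pod name eth0   # rename it to eth0
sudo ip link add cni0 type bridge 2>/dev/null || true       # create the bridge if missing
sudo ip link set cni0 up
sudo ip link set veth-host up
sudo ip netns exec demo-ns ip link set eth0 up
sudo ip link set veth-host master cni0                      # attach the host end to cni0
sudo ip netns exec demo-ns ip addr add 10.240.0.12/24 dev eth0   # play IPAM by hand
sudo ip netns exec demo-ns ip route add default via 10.240.0.1   # default route
```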
B
So if you were to try to reach, say, 10.240.1.1, that would not work, because you do not have a route entry for it. And then obviously we have our output. I think that one's not the most exciting; I think we've already hit on this one, it's pretty straightforward what it's actually doing. The one thing I want to hit on is, I want to utilize cnitool, and cnitool is something that the CNI maintainers... what the hell's going on here?

Well, that's funny. Stand by, my demo has broken, so that's fun.
B
Okay, so we're back in action, I don't know what happened there. We'll go ahead and go into here; thankfully that recovered. So cnitool gives you the quick ability to set up your network, tear down your network, or verify your network. In this case we're going to use the same exact network configuration that we've been using, the same /24.

We're going to execute this on node one, and I've just made some really quick helper scripts. You'll notice that I have the ip netns add cni1234, and that's because obviously the CNI is not responsible for creating the network namespace, so if you want one, we actually need to create it ourselves. Part of the inputs are: cnitool, the command, and our network name.
B
If you want to know where that came from, the network name is right up here, and then our network namespace; since we're using iproute2, by convention it is in /var/run/netns/cni1234. We're just gonna go ahead and execute that. So right now we have executed this, it has created the network namespace, and it's given us all the results that we need, conforming to the CNI specification and more; we got the IP address.
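The cnitool run amounts to roughly the following; the network name, netns path, and directories follow the conventions just described, with the exact values as assumptions:

```bash
# Sketch of driving a network with cnitool. Network name and netns path are
# assumptions following the conventions described above.
sudo ip netns add cni1234                                # cnitool does not create the netns
export NETCONFPATH=/etc/cni/net.d                        # where the network configuration lives
export CNI_PATH=/opt/cni/bin                             # where the plugin binaries live
sudo -E cnitool add examplenet /var/run/netns/cni1234    # ADD: prints the JSON result
sudo -E cnitool check examplenet /var/run/netns/cni1234  # CHECK: exit 0 if still as expected
sudo -E cnitool del examplenet /var/run/netns/cni1234    # DEL: tears down, reclaims the IP
```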
B
So if this were the case where the container runtime is involved, it would pass this up higher to Kubernetes and say: hey, this is the IP address, assign it to my pod metadata. And now, if you have that wrapped in a Deployment and some Services, the endpoints will be populated the right way; like I said earlier, if that's wrong, then you're going to have some really interesting problems. Now, if we wanted to actually execute CHECK: since that exit code was 0, that means this network was actually stable.
B
But if I were to go and start hacking on the network namespace, we'd have a completely different result, and the same thing for DELETE. So we've now gone through and deleted; our network namespace is torn down, everything is cleaned up, it reclaimed the IP address, and we can verify that it reclaimed the IP address because we don't see 10.240.0.5 in this directory, which is /var/lib/cni/networks plus the network name, by convention.
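host-local keeps its leases as plain files on disk, so the verification being done here is roughly the following (the network name is an assumption carried over from the earlier sketches):

```bash
# host-local writes one file per leased address under its data directory,
# keyed by network name.
ls /var/lib/cni/networks/examplenet/
# After a successful DEL, the file named after the released IP is gone.
```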
B
They're all Go right now, but you can make them in any language you want. Actually, I did just come across a project the other day where someone had written a few plugins using Rust, which I thought was pretty cool. I don't see too many plugins that are not Golang; I think it's because all the tooling and the libraries behind it are quite nice and featureful, so you don't really need to jump through hoops.

So I think, if you go the Rust route, someone has made a netlink package that's getting pretty big, but I'm not too certain what everybody else is doing.
B
All right, so instead of doing static routes, which is quite manual... in the case where, say, node one were to go down, in the networking world these routes are static, so we'd still be pointing at, oops, an IP address that we can see is down. So we can start making our changes to our network a little bit more dynamic, and I'm just gonna go ahead and delete these.

What's going on... sweet, sorry about that, and I'll do the same over here; I just want to make sure these routes are gone. So we're going to do this with BGP in six minutes. I don't know if I'm gonna get there, but we'll see.
B
All right, cool, let's see if I can get this. What I just did is launch an FRR DaemonSet, which has BGP enabled in it. I don't want to go too deep into FRR too much, but it is a nice routing suite that has support for OSPF, BGP, RIP, and IS-IS.

Oh, come on, come on, internet. So while this is creating, you'll actually notice that the IP address is the node IP address, because these are running in the root network namespace, because we actually need to make changes to the routing table, which requires being in the root network namespace.
B
With
five
minutes
to
go,
do
you
have
any
I
mean,
do
you
have
any
questions
while
we're
waiting
on
this
to
slowly
pull
sweet,
all
right,
sweet
it?
Just
I
just
finished
so.
B
There's nothing on this one: configure terminal, and we're gonna use router bgp for each BGP instance. We're going to go with eBGP, which is exterior BGP, external BGP; there's also iBGP, but I don't want to deal with route reflectors and more. So first things first, we need to actually establish a peering session with them, and I believe we're on this one, so this IP address is here, and this is going to get confusing really quick. We have established this session; I believe this one is advertised; we need to advertise this network. Cool. All right, so end, show running configuration, cool, that's 65000, so we're gonna be pointing at that over there.
B
Completely
unconfigured
terminal
router,
bgp
65001
we're
going
to
appear
with
our
other
side
with
our
nodes,
192168
5012
remote
as
65
000,
will
advertise
our
network,
our
pod
network,
on
this
node.
Let's
see
if
this
is
actually
going
to
show
all
right.
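The vtysh session on this node amounts to roughly the following; the AS numbers and peer address come from the demo, while the advertised prefix is assumed to be this node's pod subnet:

```bash
# Sketch of the FRR eBGP setup on the control plane node (AS 65001),
# peering with the worker (AS 65000). The advertised prefix is an assumption.
sudo vtysh <<'EOF'
configure terminal
router bgp 65001
 neighbor 192.168.50.12 remote-as 65000
 address-family ipv4 unicast
  network 10.240.0.0/24
 exit-address-family
end
EOF
# Verify the session and the learned routes:
sudo vtysh -c 'show bgp summary'
ip route    # the peer's pod subnet should now show up as a BGP-learned route
```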
B
That
actually
looks
like
this.
Bgp
session
has
been
successfully
appeared
and
I
actually
you
know,
learned
the
route
for
the
other
network
via
bgp,
with
the
big
b
10
241
0
has
now
been
added
to
the
routing
table
of
our
masternode
and
we
can
actually
go
ahead
and
ping
our
pods.
So
if
we've
just
done
our
pod
network
layer
three
using
fr,
so
let's
just
verify
that
I'm
not
a
liar
cool,
we're
actually
pinging
the
other
node,
so
we
use
pgp
there,
so
we're
actually
learning
routes
that
way.
B
So
that
was
it
for
my
all,
my
demos,
I'm
glad
I
didn't
do
demo
two
or
it
would
have
ran
out
of
time.
Is
there
any
any
questions
that
was
really
fast?
Sorry,
guys.
A
That's really cool to see how everything works here, and I don't see any questions in the comments... oh, here's one.
A
B
Yeah, I mean, it's such a building block, because if you saw in here, when we go into one of these, show running configuration, there's an alternative way where, instead of putting in a neighbor with an IP address, I can actually say peer with anything on the other side of a specific interface, and that's called BGP unnumbered, and that's for peer-to-peer connections, which is really cool. I actually really like that, because I can say: hey, any neighbor that's on the other side of eth1, automatically peer, so we can in a way simplify.
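The unnumbered variant he describes looks roughly like this in FRR, peering over an interface instead of a configured address; the interface name is an assumption:

```bash
# Sketch of BGP unnumbered in FRR: peer with whatever is on the far side of eth1.
sudo vtysh <<'EOF'
configure terminal
router bgp 65001
 neighbor eth1 interface remote-as external
end
EOF
```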
B
I'll try for next week. Oh boy, these things are fun, but, holy crap, I was really thinking that this would go an hour and a half; I just talked really fast and skipped over a bunch of things to fit in the time, but I can certainly prep more content. Yeah, and Amin, if you want more, I'm certainly able to do it, or bring in other people that will go gaga for VXLAN.

Anything more, sir, or is it time to depart?
A
Yeah, I think it's pretty awesome, and folks have really fun comments there. But we're running out of time, so probably we can think more about the BGP stuff next week, maybe.

Okay, thanks everyone for watching, and we appreciate your time. See you next week, bye.