From YouTube: Antrea LIVE: Episode 15 (Routable pods and Kubebuilder)
Description
Come hang out w/ Scott and Xinqi! We are gonna talk about routable pods with NSX-T as well as building a controller with Kubebuilder! Then, we're gonna take a look at a real controller, the "load balancer operator for Kubernetes".
A
So Scott, would you like to give us a short introduction about yourself?
B
Yeah, so hey everyone, I'm Scott Rosenberg. I'm the practice leader for cloud and automation at Terasky, a global VMware partner, as well as a partner of many other companies. I lead the Tanzu portfolio internally, and I love Antrea and love Tanzu.
A
Yeah, thanks Scott, really nice to have you here. So, would you like to share your screen and go ahead and talk about that routable pods feature with us?
B
Yeah, for sure. I think one of the really cool things is, you know, these new integrations that we're starting to see more and more with NSX and Antrea. And so, tell me if you can see my screen.
B
Perfect, okay. So one of the, you know, really cool things that we have: typically, when we deal with Kubernetes, if I were to go here and do, like, a kubectx to pc-demo-cls02, and then did here just a kubectl get pods -A, let's say, -o wide...
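The two commands from the demo, reconstructed (the context name is specific to this demo environment, so treat it as a placeholder):

```shell
# Switch the kubectl context to the demo cluster
kubectx pc-demo-cls02

# List all pods in every namespace, with their IPs and the nodes they run on
kubectl get pods -A -o wide
```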
B
One of the things that we can see is that our pods are actually running with IPs in this, like, internal range that's basically being SNAT-ed when we go out. And this is an overlay network using VXLAN technologies, based off of OVS within Antrea. And this is great, and this works for us, and this is typically what we would want to use, because we don't necessarily need our pods to be routable within the environment. They can go out through the node's IP address.
B
That's the standard traffic flow that we get, and internally everything is routed, but yeah.
B
Exactly. Other pods can communicate with it within the same cluster, but anything outside of the cluster can't communicate with this IP address. It needs to go through an ingress, or a Service of type LoadBalancer, or a NodePort, or some other way to access it. It's not directly accessible from outside. And the other thing that becomes difficult is that when this pod tries to access, let's say, a database sitting outside of Kubernetes, in the typical configuration that traffic is going to go out with the node's IP address.
B
It gets SNAT-ed out, so it's source-NAT-ed out to the network, and when I look at the access logs, for example, of the application I'm connecting to, I'm actually connecting with the IP address of the node, so there is no way of differentiating which pod this is coming from. And when we're in a purely cloud-native world, that's perfectly fine, but a lot of enterprise organizations and a lot of large organizations still have these large enterprise firewalls, and there that doesn't necessarily fly. Not everyone is the biggest fan of network policies.
B
They want to manage things in a traditional way, through a traditional firewall, and this can be a bit difficult. As well, there are certain types of applications that don't traverse NAT very well, so NAT-ing doesn't work well in those specific applications. And there's actually a great solution for this, which is Antrea together with NSX-T. And Jay, same as me: I am also a huge fan of network policies, especially Antrea network policies, but you know, there are people that aren't. What can you do?
B
We can't fix the world in one day, so we need to have a solution, and the solution for this is really a great integration that we have in Tanzu. It's built in, but we can also do this completely in the open source. Today I'm doing it on Tanzu Community Edition, so a completely open-source distribution. And so what we can actually do is make these pods routable.
B
We can actually get routable IPs for all of our pods, and just to show what that looks like: if I went here and did a kubectx to the routable cluster, and did now a kubectl get pods -o wide -A, let's say... all right, let's take a random pod here.
A
I'm sorry to interrupt, Scott, but just to, like, make sure everyone can see what we are saying. So, yeah.
A
Okay, so maybe he can rejoin through this YouTube link to see if it works for him, and I think we are good to continue.
B
Yeah, so if we look here, for example, I've got a random pod here that's running the Aqua Starboard operator, and if I just came and pinged this IP, it's actually accessible from my own computer. And this is a really powerful thing, that I can actually access that pod directly. And what I could even do beyond that: if I just decided to run here some, let's say, jump box using an Ubuntu SSH container, which will take a second to come up here.
B
Exactly. So this is completely accessible from anywhere. So if I did now a kubectl exec into this pod, right, so into the jump box pod...
B
It's under the auth log, I think, and then the SSH info: we can see here "accepted password for root from 10.101.2.9", the IP address that that pod received. If I go back out, we can see that that's the address here: kubectl get pods -o wide, and we can see 10.101.2.9. So for any application that's sitting outside of my environment, I now also see which pod this is coming from.
B
I actually get the IP address, because I'm going out with that IP, and this is something that enterprise organizations, large companies, really need in a lot of cases, and this is a really strong capability. Now, the other thing that this allows, as we showed: traffic also comes into the cluster, and also goes out from the cluster, with this IP address. And the way this is set up is actually really easy, and what we'll do is I'll actually create...
B
...one of these environments live to show you. So if I open up my NSX environment here, and let's say I'm gonna go here: basically, what we need to do is... we have this idea in NSX of multi-tiered routing. So we have this idea of a tier-0 and then a tier-1 router, and then under that we have segments, which are like port groups in vSphere. So what I'm gonna do is just create a new tier-1 router; in this case, let's call it live-demo-routable-pods-tier1.
B
Why not. And I'll select here where I'm connecting this to; the tier-0 is where I've configured BGP up to my physical routers in the environment. I don't need to select that, that's fine. And for route advertisement, we need to select all static routes and all connected segments as well. I also just always like to select everything, because why not. And we can save that, and that's what we need to do here from a tier-1 perspective. So once we've configured this for our cluster, we would then go and actually...
B
So next, what we'll do is actually create our network itself, right? This is gonna be our node network that our nodes are gonna be sitting on. So in this case I'm gonna call it tkg-routablecluster02, and I'm going to connect this here to my live demo tier-1 router, select the overlay transport zone here, and I'm going to give this a new address range. Let's use... let's do 10.100.181.1/24. So this is our nodes.
B
This is not our pods yet; this is just the nodes themselves that are going to be sitting on this network. And I'm going to set up a DHCP server here, because TKG and Cluster API require DHCP for our nodes to get IP addresses. So what I'm going to do here is set this to 10.100.181.2.
B
And we're going to create a DHCP profile, just call this live-demo, and we're just going to give it the CIDR block of 10.100.181.1/24, which is great. And once we have that set up, we just have to select which cluster, save that, and so here we have our DHCP address, and I just need to give it the range that I want.
B
So once I've created this, I've set up my network with, you know, all the configurations that I actually need. We're going to save that, and I'm going to make one more change here, just because of how my environment is set up; I'm going to change a few of these settings. These are not needed, I just do them in my environment here because it's a lab environment. And now that I have my network set up, I can actually go and create my cluster.
B
So if we take a look at what that looks like: I have here a tce-demo-routable-cls-02-redacted file, redacted because I don't want to show you my passwords. And basically, this is a standard Tanzu cluster configuration file, and one of the really cool things happens here. So we're setting, again, the cluster name, our control plane endpoint, everything that we would in a standard Tanzu cluster, and the magic comes here on lines 26 until 33.
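Reconstructed from the demo, the relevant block of the cluster configuration file looks roughly like this. The variable names follow TKG-style cluster configs and the values are placeholders, so verify the exact keys against your Tanzu/TCE version:

```yaml
# Standard cluster settings
CLUSTER_NAME: routable-cls02
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.100.181.10

# The "magic" lines: Antrea as the CNI, with NSX-T pod routing enabled
CNI: antrea
ANTREA_TRACEFLOW: "true"
NSXT_POD_ROUTING_ENABLED: "true"
NSXT_MANAGER_HOST: nsxt-manager.example.com
NSXT_ROUTER_PATH: /infra/tier-1s/live-demo-routable-pods-tier1
NSXT_USERNAME: admin
NSXT_PASSWORD: "<redacted>"
NSXT_ALLOW_UNVERIFIED_SSL: "true"   # lab with self-signed certificates
CLUSTER_CIDR: 10.103.0.0/16         # the routable pod CIDR
```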
B
So I'm telling it to use Antrea as the CNI, because, you know, Jay, they put an old version of Calico in here, so we prefer to use Antrea. And we're going to enable Traceflow, and we're going to set NSX-T pod routing to true. So I'm going to say...
B
Yes, I want my pods to be routable, and this is going to use an NSX integration that we have. I'm going to give it the host of my NSX-T manager, I'm going to tell it which tier-1 router it's supposed to use, and I'm going to give it the password and user. This could also be certificate-based authentication, whatever you would like, and I'm in a lab, so it's self-signed certificates. Beyond that, that's all we need to do here. So once we actually went and configured this out, we have our cluster.
B
All we need to do is a "tanzu cluster create -f" and give it the actual file. So here it's going to be the non-redacted file, so tce-demo-routable-cls02.yaml, and this will now create it in the back end. So this is just validating the configuration, and it's going to start creating our cluster for us now, while this runs in the back end.
A
By the way, would you like to explain more about the two parameters we were talking about? Like, around lines 27 and 28, I think. These two parameters are the ones that enable this routable...
B
Right, this NSXT_POD_ROUTING_ENABLED, and then we need to give it some parameters, which are the lines right under it. But if we actually look here at what this is actually doing (and, as people know, I'm a big Carvel fan), I can go into some ytt here. So if we go into .config/tanzu/tkg/providers/ytt, 02_addons, cni, antrea, and I go into the antrea file: so this is what Tanzu uses when it creates a cluster.
B
It's using these files, which are basically extracted from the CLI, and these are the files that configure our environment. And if I were to look here at this antrea addon data-values YAML file, one of the things that we can see happening here is that, within my configuration for Antrea, if data values NSXT_POD_ROUTING_ENABLED... so if we set that to true in our file, then what we're actually setting is the traffic encapsulation mode to noEncap. By default we're going to get encapsulation for our pods, so they're going to be VXLAN-encapsulated and they will not be, you know, routable pods.
B
By default they will actually be encapsulated. And we're setting noSNAT to true, so we're saying: don't set any SNAT rules when a pod is doing egress traffic out to the world. And those two settings are basically what we need to configure from an Antrea perspective in order to get what we need. Now, the other thing that's needed is that AntreaProxy is configured to true; so, basically, setting AntreaProxy to true. That's, again, part of this integration: we need AntreaProxy as well, to handle our traffic for us. So...
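The ytt logic described above can be sketched roughly like this. This is a simplified paraphrase of the antrea add-on template, not the literal file; the data-values key name and exact structure vary by release:

```yaml
#@ if data.values.NSXT_POD_ROUTING_ENABLED:
trafficEncapMode: noEncap   #! pods use the routable underlay, no VXLAN encapsulation
noSNAT: true                #! do not SNAT pod egress traffic
#@ else:
trafficEncapMode: encap     #! default: VXLAN-encapsulated, non-routable pods
noSNAT: false
#@ end
featureGates:
  AntreaProxy: true         #! required by this integration
```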
A
...just one parameter to configure, because they don't really need to, like, look into the Antrea parameters to see which ones they need to set up. They just need one, exactly.
B
Yeah, one parameter, and it actually does more than these, right? So this is part one; this is the configuration of Antrea. But then there's also the configuration of NSX-T, which is not done through Antrea. It's actually done through another component in our cluster called the vSphere CPI, or cloud provider interface, which interacts with vSphere for us from within the cluster. And if we look at...
B
Yeah, the CPI configures vSphere and also configures NSX. So if we look at what this looks like... and let's see if I remember which file this is in, because I do not know. I think it's in the addon data-values... yeah, I think so. Let's see here, am I right? Yes. So one of the things that is configured here as well is within our cloud provider interface, right? In any cloud...
B
Today we have a cloud provider interface. These used to be the in-tree cloud providers, as they were called, and now those are being removed from Kubernetes over the next few releases, and we're moving to the out-of-tree CPI, the cloud provider interface, and CSI, the container storage interface. And so here, what we're doing is actually taking all of those values that we had set...
B
Pod routing enabled, true, gets configured here, and then all of the values that we gave: what was my router path, which tier-1 router in NSX-T, username, password, host, the cluster CIDR, so that routable CIDR that I'm going to be running with. All of these things are configured for me automatically, right? So it's really easy, the way that Tanzu brings this out, because I don't need to know exactly how to configure the CPI.
B
Now, what this actually looks like from a UI perspective, if we go back to, you know, what things look like in the UI, is actually how our topology in NSX-T works. So if I took a look here at, for example, the cluster that I already have, this tkg-routable-pods-tier1 that I had created in advance: well, the way that routing works within NSX-T is that we have this idea of a segment, which is a port group in vSphere, and connected to that we have all of these VMs, right?
B
The segment is connected to what's called a tier-1 gateway, or a tier-1 router, or whatever other name VMware decided to give it in the next version, I don't know; it was routers, now it's gateways. But this tier-1 gateway is basically the connection between one or more segments and what we call the tier-0 gateway, and the tier-0 gateway is where we basically configure our connection out to the external world. So this is where, on my tier-0 gateway itself...
B
If I were to go here, what we can actually see is that on this tier-0 gateway I have BGP enabled, and I'm actually configuring BGP against some of my physical routers in my lab here, right? So that's what we're actually doing here, and that's what's really strong: we have this multi-tiered approach to routing within NSX-T that allows us to get to really complex and really strong network isolation and segmentation capabilities, right? So now, this has already been created, our cluster is almost done being deployed, and it actually just finished.
B
So our cluster is ready. So let's go and do a "tanzu cluster kubeconfig get --admin" and just give it the cluster name here.
B
This is the new one that we just created. Yes... one second, "tanzu cluster list", what's the actual name here? Oh yeah, the name was different than the file; that's why that can't work. And let's actually give it the cluster's name when I try to get its kubeconfig. Great, I have it, and now I can do a kubectx to routable-cls02, and I can do a kubectl get pods -A -o wide, and we've got an IP here.
B
This is... that's an errored-out pod, so I wouldn't take that one. Let's take the Pinniped concierge in this case, and hoping that the demo gods are with me... there we go. We have access to this pod. Now, how does this actually work? And the answer is that it's actually a really strong integration that happens. One of the things Antrea does, in terms of how it overall, you know, allocates the IP addresses, was mentioned...
B
...I think a few weeks ago on the show: we have this idea of the Node IPAM controller within Kubernetes. We give a cluster CIDR to the entire cluster, in my case 10.103.0.0/16, so a class B-sized network, and what ends up happening with the Node IPAM controller is that when any node joins the cluster, by default a /24, so a class C-sized subnet from that larger cluster subnet, is allocated per node for pods, and that way IP allocations happen at the per-node level.
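To make the allocation concrete, here is a small sketch of how a /16 cluster CIDR is carved into per-node /24s. This is an illustration of the scheme, not the actual kube-controller-manager code:

```python
import ipaddress

def node_pod_cidrs(cluster_cidr: str, node_mask: int = 24):
    """Return the per-node pod CIDRs carved out of the cluster CIDR,
    the way the Node IPAM controller hands out one subnet per node."""
    network = ipaddress.ip_network(cluster_cidr)
    return list(network.subnets(new_prefix=node_mask))

# With the demo's 10.103.0.0/16 cluster CIDR, each joining node
# gets the next free /24 for its pods:
cidrs = node_pod_cidrs("10.103.0.0/16")
print(cidrs[0])    # 10.103.0.0/24
print(cidrs[2])    # 10.103.2.0/24, the pod CIDR seen on the node in the demo
print(len(cidrs))  # 256 /24 subnets available in a /16
```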
B
If that were to be exhausted, a new CIDR would be allocated to that node as well. And so what's happening here is that, for each of our nodes, if I went into a kubectl get nodes, let's say I take this one, for example, and do a kubectl describe node on that one and actually look here, we can see my pod CIDR here is 10.103.2.0/24, and the IP address of this node is .181.103. Well, let's actually look at that from NSX and see how that actually works now.
A
So, didn't you want to mention that noEncap mode works with any Kubernetes cloud provider that supports adding the nodes' pod CIDRs to the underlay routing table, not just NSX-T? For example, it works on AWS, Azure, and GCE.
A
Thanks, Xinqi, for mentioning that. And Scott said that... let's give him a few minutes to rejoin. So let's see who is watching our show today: Jinjin is here, Jay is here, and hello, Antonian, and hello, Matt... Matthew, I hope I pronounced your name correctly. Okay... oh, Scott is here, hi.
B
Sorry, the internet just crashed in my office, so I'm connected to my hotspot right now. The fun of networking when you're not networking. So, exactly, so, yeah.
B
Yeah, I'm trying to see if I can connect to my network again, because I'm not sure that I can, in which case it'll be a bit difficult to share, because it's all sitting locally within my lab. So, one second, let's see, let's see if there's any way for me to connect... because I think the internet just crashed in my office, meaning my entire lab crashed as well. Yeah, no, the lab is completely down right now, so unfortunately I can't continue to show that. But basically, what happened was...
B
...on our tier-1 router, so on our actual network, we got a static route for each of our nodes, pointing directly to that node's pod CIDR. And that's where the real strength here comes in, because we get this automated integration here really easily, all from literally setting four values in a YAML file to create a cluster. And that's, you know, a really strong capability.
B
So yeah, I think one of the really cool parts here with Tanzu is just, like, how these things are going forward, and how this actually works together with Antrea and with NSX to really make things just awesome and powerful.
A
Yeah, I agree. These are amazing features that we have in Tanzu, and customers can enjoy them by using Tanzu. So yeah, thanks, Scott, for this great demo. And I think that... does anyone have any questions before we move to the second part of the show?
A
Okay, okay. So now I will go ahead and share something about Kubebuilder, which we used to create the controller. So, let me share my screen here.
A
Okay, so in episode 13 we introduced AVI to our audience. AVI is a load balancer; in Tanzu, we use it to provide the LoadBalancer type of Service for customers in a Tanzu cluster. So if you want to learn more about how AVI works and what it is, you can, like, take a look at episode 13 on our channel; Bhushan gave a great demo and introduction about it.
A
So the topic here is how we deploy AKO in our workload cluster. And if we want to talk about that, there's one thing that we must mention: we have something called the AKO operator installed in the management cluster, and that operator manages the lifecycle of AKO.
A
For example, if a workload cluster is created, the AKO operator will create AKO in that cluster. So that means we need to, like, take a closer look at what the AKO operator is. So, before we start talking about the AKO operator, let's take a look at what a general operator looks like. We initiate the operator with Kubebuilder, and I think it's a really easy-to-use framework for initiating your controller.
A
So now we are going to create a new controller from scratch, and after we take a look at this sample controller, we will go ahead and look at what a real controller, which is the AKO operator, looks like. So let's start, let's create a project here.
A
So before we started, I had already installed Kubebuilder. So if you haven't done that, that is the first step, and after that we can create this project.
A
Desktop... kubebuilder-demo.
A
Oh, by the way, thanks, Ricardo, for inspiring me last Thursday: maybe, if you are interested in Kubebuilder, it may be a good start, from Kubebuilder, to tell people more about the AKO operator. And then we initialize this controller. Maybe we can use the same domain, like we did last Thursday, and then the repo is...
A
Let's call it person. And now I'll create a custom resource, and the controller, yes. And now what we are doing is defining a new kind of resource in the Kubernetes cluster, which is called Person, and we can define what the Person object looks like: what is the spec?
A
What are the parameters we can have in the spec, later, in the configuration file. So: a new API, generated manifests, okay. So let's, let's take a look at what Kubebuilder created for us.
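The scaffolding steps just described look roughly like this. The domain, repo, and group names here are illustrative, since the actual values used on stream weren't fully audible:

```shell
# Scaffold a new Kubebuilder project (domain and repo are examples)
kubebuilder init --domain example.io --repo example.io/person-operator

# Scaffold a new API: a Person kind, generating both the
# custom resource definition and its controller
kubebuilder create api --group demo --version v1alpha1 --kind Person
```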
A
So here we can see that this is everything that Kubebuilder built for us. So after you initiate your project and create an API, we can see here that these are the... so where's, where's the CRD of it?
A
I think we have, like... maybe it's not created yet. Okay, so we have... so this is the most important function in a controller, which is Reconcile, and it's all automatically created for us. And what you need to do is just enter the logic of how you want to reconcile this resource. And we can also see here an example of how you can create a resource in a Kubernetes cluster, and it's called Person, because I gave it that name before. So let's see what's the next step in the documentation.
A
Oh yeah, it's here, okay. So CRD is the custom resource definition. It means... so this is the custom resource that I just created, because before, I ran the Kubebuilder create api. So this is the one that I created, and this file is the definition of this resource, which is called... CRD is short for custom resource definition.
A
So we need to, like, deploy this CRD into the cluster, to tell the cluster that we have this kind of resource. So later, when people do a kubectl apply of some YAML file with this kind defined in it, the cluster can know that this is the resource we want to create.
A
So now that... oh yeah, an age, maybe, and, let's see, 30 maybe, and a phone number.
A
And then maybe we can also have an address.
A
So now, let's... oh, let's see... oh, right, so before we do that: so this is the spec. So this is what I want to define in a Person, but we need to, like, define the object here. So what I have here...
A
And actually, you can have some validators here. For example, if we want to say that the phone number of a person must be, like... sorry, wait a second.
So
we
can
have
so
cooper
builder.
Has
this
kind
of
annotation
here
it's
a
special
annotation
useful
for
group
builder
that
you
can.
A
You can say that this string has a maximum length of 10, and if we want to, like, make sure there are exactly 10 characters in the string, we can also set the minimum to 10. So this means that this phone number must be, like, 10 characters; the length of this string must be 10. If they only put, like, 7 or 12 characters there, the validation will fail and this object won't be created.
A
So here we add some of that, and this validation can be added to all the, like, fields we have here. So it's just an example. And in the status part, instead of the spec, we can also add something like this.
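The validation markers described above look roughly like this in the generated types file. This is a sketch: the field names match the demo's Person example, and in a real Kubebuilder project the file would also contain the scaffolded metav1 boilerplate around this struct:

```go
// PersonSpec defines the desired state of Person.
type PersonSpec struct {
	Name string `json:"name,omitempty"`

	// +kubebuilder:validation:Maximum=30
	Age int `json:"age,omitempty"`

	// The phone number must be exactly 10 characters long:
	// both bounds are set to 10, so shorter or longer strings
	// fail validation and the object is rejected by the API server.
	// +kubebuilder:validation:MinLength=10
	// +kubebuilder:validation:MaxLength=10
	PhoneNumber string `json:"phoneNumber,omitempty"`

	Address string `json:"address,omitempty"`
}
```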
B
Let's see here, I just found it. Yes... let's see here, I'm sending you, in the chat here, in the back you can see the link. So if you look, it's for CRD validation; it's basically the, oh...
B
On the search, if you just type... like, there, yeah, it's 562; on the left side, in the drop-down, 562, and then here we get the list of all of the different types of validations that we could do.
A
After we do that, there are several steps that we can do to make it run in a cluster, but after that we can run, like, "make run" to make it run locally. So...
A
Yeah, right. So if we go back to this quick-start documentation, you can see that the CRD is configured, because we changed the parameters here, and if we go back to the documentation, we can see that after we create the project, we create the API, and then we change the...
A
I have some logic that I created previously; I'll just copy this part over here. So...
A
What I'm trying to do here is that I declare a Person object, and... so here is how the controller works: the controller will watch this kind of object in the cluster.
A
So, for example, here it will watch the Person objects, and if there are any changes to this kind of object, like if a new Person object is being created, or people want to delete or update some object which already exists in the cluster, the controller will catch that change, in the request.
A
So each change to an object in the cluster will become a request, and it will be queued; it will be the parameter of the Reconcile function, and from it you can get which object is being changed and which namespace this object belongs to.
A
So if we have the function here, it means that it will get the object that has triggered this Reconcile function, and we will get that object into this Person variable, and then here I just, like, print out the number, the phone number, of that person. So that is what I'm doing here. And in order to add some logs here, we need something like this: so I need to add a logger here, and then in the main function...
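Putting those pieces together, the Reconcile logic described above looks roughly like this. This is a sketch against the controller-runtime API, assuming the scaffolded PersonReconciler type; the module path in the import is hypothetical:

```go
package controllers

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log"

	demov1alpha1 "example.io/person-operator/api/v1alpha1"
)

func (r *PersonReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	// req carries the namespace/name of the object whose change
	// triggered this reconcile; fetch the current object state.
	var person demov1alpha1.Person
	if err := r.Get(ctx, req.NamespacedName, &person); err != nil {
		if apierrors.IsNotFound(err) {
			// The object was deleted; nothing to do.
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	// The demo's logic simply logs a field from the spec.
	logger.Info("reconciling Person", "phoneNumber", person.Spec.PhoneNumber)
	return ctrl.Result{}, nil
}
```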
A
So what we will do now is that, after we have this basic Reconcile function and we have the custom resource defined, we can run the controller in the cluster. So if we go back to the documentation, there are two ways to run it. If we just use "make run", it will run locally. But if we, like...
A
...push this to a registry and deploy it to the cluster, it will run in a real cluster. And after that, if you want to create an object named Person, you can use the sample YAML file in here to create it, and this object will be created in the cluster.
A
So this is how we... so this is a real controller. The structure of it is very similar to what we have here, and so, if you look at it here, this is the custom resource we define in this controller, which is called AKODeploymentConfig.
A
So in this one, we can see that it's exactly the same, it's exactly the same idea as this one.
A
So this is everything that we define in this resource, and then the reconcile logic of it is in the controllers folder, and...
A
So in this function, we define reconcileNormal and reconcileDelete, which means that, when an object changes and it triggers this controller to reconcile, it matters which phase this object is in. For example, if we create or update an object, it will trigger the reconcileNormal function, but if the object is being deleted, it will trigger the reconcileDelete function.
A
It will reconcile AVI and reconcile the cluster. So what is in reconcileNormal... I can go into that, no worries, I have it open here, so we can, like, track all the functions here. So in reconcileNormal, we can see that we create several of these specific objects in order to run AKO in the cluster, and we also have something new to run in the workload cluster.
A
So this one is the most important one, because AKO is converted into an add-on package in Tanzu. So when you, like... if you want to deploy AKO in the cluster, that is the main purpose that I mentioned before.
B
So this is creating... this is basically reconciling the configuration of a cluster against an AKO configuration, and then generating a Carvel, basically, image package secret, to run an app, an add-on, in the workload cluster. That's how, like, AVI gets deployed in the workload cluster automatically, right from the management cluster. Okay.
A
So, for folks who don't already know much about the add-on logic in Tanzu: in TCE, the Tanzu Community Edition, we have all the add-on packages that people can install here, and there's one for load balancer and ingress; this one is AKO, and you can see that at...
A
You can see the add-on templates; this is the add-on templates. So the base file is just some YAML file which is fixed, and we have the overlay here to define how we want to override the base file with the parameters that people passed in. So this is how we, like, generate the YAML file for each specific cluster, and then ytt does this overlay thing for us, and then, after that, we have the add-on secret, which contains all the other parameters.
A
Here, the add-on templates and the add-on secret will work together to generate all the YAML files we need to create that. So, with... so if we create the add-on secret here, the addon-manager and the kapp-controller in the cluster will deploy this AKO for us automatically.
A
So, first, in the Tanzu CLI, if you enable AVI, which means that you want to install AKO in the cluster, the Tanzu CLI will first install the AKO operator and an ADC, which is an AKODeploymentConfig, and then the AKO operator will create the AKO add-on secret, and the template is stored in TCE. So the addon-manager and kapp-controller will combine these two together to deploy AKO in the workload cluster.
A
And if we take a closer look at this AKODeploymentConfig, which is the custom resource we define in the AKO operator, it contains all kinds of, like, parameters here which are used to deploy AKO, and also there's a cluster selector here, defined in the AKODeploymentConfig, which means that if you create an AKODeploymentConfig in the cluster with the cluster selector, all the workload clusters that can be selected by this AKODeploymentConfig will have AKO deployed in them.
A
So, for example, in my management cluster I'd create two AKODeploymentConfigs, each of them with different labels, and then I create three workload clusters; these two have the same label, and this one has a different one. So that means this AKODeploymentConfig is responsible for configuring the AKO in these two clusters, and this AKODeploymentConfig only controls the AKO in this workload cluster.
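A label-selector setup like the one just described can be sketched as an AKODeploymentConfig manifest. The field names here approximate the AKO operator's CRD and the values are placeholders, so consult the ako-operator project for the real schema:

```yaml
apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: ako-for-team-a
spec:
  controller: avi-controller.example.com
  cloudName: Default-Cloud
  serviceEngineGroup: Default-Group
  # Only workload clusters whose labels match this selector get
  # AKO deployed and managed by this AKODeploymentConfig.
  clusterSelector:
    matchLabels:
      team: team-a
```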
A
So after we create this AKODeploymentConfig, the AKO operator in the management cluster will catch this change and create the add-on secret, you know, in each workload cluster that it controls, so these two get created together. So the only thing that the user needs to do is create this object, and the AKO operator, kapp-controller, and addon-manager take care of the rest: they will deploy the AKO in the workload cluster. And also, if we change this AKODeploymentConfig...
A
...these two AKOs will pick up the change, but this one will stay the same; only if you change this one will this AKO pick up the change. So that is why we have this AKO operator. This is one of the benefits of using this AKODeploymentConfig to control the AKOs you have: it means that you don't need to control each cluster one by one. You can, like, control a group of them, which makes the configuration much easier.
B
There's also something else that's really great about it: it's in your management cluster. You know, when we have a large Tanzu deployment and you have lots of workload clusters, you may be giving them to different teams; having this in the management cluster, which is really owned by operations, means that even if someone were to make a change in the workload cluster, because we're using kapp-controller, because we're using this operator model with reconciliation, they can break their AVI for five minutes...
B
Exactly. So it's a really strong way of separating concerns and making sure that we're secure, while still giving people clusters that they can manage.
A
Yeah, right. So this is how we have that, and this is how we, like, deploy the AKO in the workload cluster. And I think we are about time to finish; it's already been, like, one hour. So thanks, everyone, for watching, and thanks, Scott, for sharing that routable pods feature with us, which is very cool.
B
Yeah, definitely, and sorry it cut out in the middle, but it was great to be here. And, you know, in the last show I got to show the other integration with NSX, for those that didn't see it; back in the last session, with Jay, we went over the integration of Antrea and network policies in NSX-T, and it's really cool to see these tools really coming together, and seeing where it's going with Tanzu. So yeah, this was awesome. Thank you, Jinji, for having me.