From YouTube: Antrea LIVE: Episode 10 (Tanzu in the wild: production networking topologies and templates)
Description
Come join Scott (vrabbi) to dig into real-world customer scenarios around Antrea and Tanzu, and his legendary TKGM customizations suite!
- Antrea with Tanzu 1.5, and NSX-T integration plans
- Egg nog
- vrabbi/tkgm-customizations
- packaging... everything into Carvel on customer sites
C: Hey guys, so I'm Scott Rosenberg. I work at TeraSky, based in Israel, and do a lot of stuff in the Kubernetes world, especially around Tanzu.
A: Are you all like the masters?
C: Exactly. You know, no — we're a services... a systems integration company based in Israel, but also working in the States and in Europe. We're partners of VMware, Amazon, Google and a bunch of others, and really working hard in the cloud native world, public cloud and on-premise. I'm the practice leader for cloud and automation at TeraSky, and leading the Tanzu portfolio from our side as well.
C: Yeah, it's like the fourth from the top.
A: So Scott is collaborating with us upstream on this, so this is going to be a cool one. One of the things I'm really excited about is the fact that, at VMware, we're now developing Tanzu with people in the community like Scott, which is really cool. We're doing that for Windows, but I think more of that's going to be happening.
A: There are a hundred links in here. We had the feature gate stuff that we were going to go over, yeah.
A: Let me show people that. So there are these new Tanzu feature gates — Shimon added these.
C: Yeah, it's a pretty cool feature, because it allows us to add beta features and alpha features behind feature gates, so that they can come out in Tanzu distributions and really just allow things to ship quicker than they would otherwise.
C: So this is a really cool feature: just through the CLI you'll be able to enable a bunch of feature gates for new, cutting-edge technology.
A: This flag is irrelevant, so you don't have this weird... So Scott showed me this yesterday, and I thought it was really cool in two ways. One: somebody outside of VMware was showing me a new feature in Tanzu that I didn't fully understand yet, so I was like, wow.
A: But then the other thing that's genuinely cool is that you've seen this upstream, where there are feature gates and people get really confused about when to enable them and when to disable them, and there are these constants that sort of follow you around forever. So this is a really nice convention that folks came up with. I don't know who came up with this convention, but Shimon implemented it.
A: So I guess he gets the points, I don't know. So then we're looking at pausing reconciliation controllers. Now this is an interesting one — Scott was showing me this yesterday.
A: The way he swapped CNIs out in Tanzu is that there's a pause annotation that you can put on a kapp instance — on a kapp-controller-managed, reconciled application — and once you put that pause in there, you can manually go delete all the resources and then recreate them in some other way.
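As a rough sketch of what that looks like on a kapp-controller App CR (the App name here is illustrative; the relevant field is `spec.paused`):

```yaml
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: antrea            # illustrative: the addon App managed by Tanzu
  namespace: tkg-system
spec:
  paused: true            # kapp-controller stops reconciling; resources can now be deleted/replaced manually
  # ...the rest of the App spec stays exactly as generated...
```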
A: I don't think a lot of people know about that. It was pretty intuitive to me, because at one point someone on our team actually had to do that. But yeah, that's really interesting — maybe once you fire up your terminal you can just show us that really quickly. And then you've got all these links in here, and I feel like you've got so much stuff going on that maybe we should just jump into it, right? So yeah — why not? Why don't you just...
C: So yeah, in general — there's one thing with Antrea here that's actually really cool: NSX-T 3.2 just came out, and there's an actually really cool integration that can be done between Antrea and NSX, which is really fun to take a look at.
A: Gotta interrupt you really quick and say hi to people. Mourinho is here — he's my friend from VMware, but he left. Mourinho, I just got back from the gym, you can be really proud of me. Pera is here, and he's one of our new CAPI friends. Miss you too, Mourinho — you'd be really proud: I did like a two-and-a-half-hour workout today and a 24-hour fast. Okay, Robert Clisterius — where are you from, Robert? You're new.
C: Right, the general idea, I think, is we're just going to kind of hack around and see how we customize Tanzu, and some of the stuff, especially around networking, that we've been doing. And to go to the point that Jay had mentioned about pausing reconciliation of apps and all of that — that actually came in use here for this demo.
C: Basically, I created a cluster where the new integration between Antrea and NSX-T actually requires a higher version of Antrea than comes in the current released version of Tanzu. So I actually paused the reconciliation of Antrea and then installed my own — the upstream version 1.3 of Antrea — into this cluster.
C: And just to take a look at one of the cool things we actually deployed here: the interworking operator is how this works. It's basically a controller that runs within your cluster, which you deploy after having Antrea, and it registers the cluster against NSX. Once it's registered, it actually becomes really cool, because if I go to the inventory now of just my standard NSX, within Containers I get all my namespaces, I can see any pods that exist here and all their labels.
C: As we know, naming is the hardest part about any project in the world. But in general, the really cool thing here is that we get all of our pods and all of our data — services, pods, everything — coming up into NSX here, and from this point we can actually fully manage our clusters. I only have one connected right now, but if I went, just for example, into the Yelb namespace, we can see right now...
C: I don't have any network policies or any pods here, but if I just came back to here and did a kubectl...
C: No, Yelb is just a nice demo application for Kubernetes. And if I just did a get services here...
C: What's cool is that I get all my pods now automatically and I can see their status, and I could go and see everything — but I also get network policies now, and we can actually view, for example, standard Kubernetes network policies from here. It's view-only right now, though, okay.
C: What is really strong is that we can go here — in NSX we have this idea of a distributed firewall. The distributed firewall is basically: we use it for whatever we need; we can create different firewall rules, for, like, proxy networks or default layer 3. And one of the cool things is that till now it's only been the distributed firewall.
A: One second — Robert's making a pet store joke. So Robert, I'll tell you something: you know the original K8s pet store app in Kubernetes? I was the one that wrote that. It was because I think Pet Clinic is the best demo app ever — and I don't think so — and then we made a synthetic data generator called the Big Pet Store data generator, and it used Markov models to simulate pet store transactions where people would walk within...
C: Okay, yeah! No, so what's really cool here, though, is we've got this idea now that within NSX I can actually go and create my own firewall rules, just like I could for any of my virtual machines.
C: So you can create groups here. If I just call this "test", for example, I can set up all these criteria of how something comes in. I could say the namespace name is Yelb, let's say, and also the pod tag — which in NSX is just the name for a label — and I could say, let's say, frontend. So we have: the Kubernetes tier is frontend, and now any pod that has this label — because they automatically come up to NSX — will show up here.
C: So with this test group — I can create all of my groups, and I can create any of my firewall policies based off of these different groups. So we can see here: Yelb frontend. I can do the same thing with the backend, and then apply all of my policies.
C: So if I came here and did a kubectl get clusternetworkpolicies.crd.antrea.io, we can actually see that this was created through NSX, and we actually get a network policy in Kubernetes to manage all of our applications.
A: I didn't know that, I think.
C: Like the cluster network policies — and I actually showed this one — what we'll see is that they're actually using Antrea groups in the appliedTo and the from. So if I did a kubectl get — what are they called? I think cluster groups, all right, yeah — clustergroups.crd.antrea.io, and went and looked at this one...
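Roughly, the pair of objects being described looks like this on the Antrea side (the names and selectors here are made up for illustration; check the Antrea CRD docs for the API versions your release ships):

```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: ClusterGroup
metadata:
  name: yelb-frontend               # illustrative group, like the "test" group created in the NSX UI
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: yelb
  podSelector:
    matchLabels:
      tier: frontend
---
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: allow-frontend              # illustrative policy referencing the group
spec:
  priority: 10
  tier: application
  appliedTo:
    - group: yelb-frontend          # Antrea group in appliedTo, as mentioned above
  ingress:
    - action: Allow
      from:
        - group: yelb-backend       # another illustrative ClusterGroup
```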
C: What we can actually see is that we get everything that I defined within the UI there, just through groups, and whatever gets automatically tied in here. Basically, we can both manage them locally in Kubernetes and manage them from NSX-T, which gives us all these really strong capabilities now. And I've linked in the docs the official documentation from VMware for how to install this.
C: I just like to not automate stuff, exactly. But if we look here — this was just pushed up today — basically, you just add the four YAML files from here into the Tanzu directory, and you just need to run this script before, which registers...
C: ...against NSX, and it will print out the additional values that you need to add to your TKG config file, with all of the different values.
C: Exactly — automation for free. So basically, it requires using a certificate-based principal identity to talk to NSX. What we do here is: we just generate some certificates with openssl, then run an API call against NSX to create a principal identity, then base64-encode them and just print out — basically, hey, just add this to your cluster config and deploy it. And this will deploy upstream Antrea 1.3 and then install the interworking controller automatically into your environment.
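A minimal sketch of the certificate step (file names and the CN are made up; the NSX API call is shown only as a comment, since it needs a live NSX Manager):

```shell
# Generate a self-signed key/cert pair to use as the NSX principal identity
openssl req -newkey rsa:2048 -x509 -nodes -days 365 \
  -subj "/CN=antrea-cluster1" \
  -keyout /tmp/antrea-nsx.key -out /tmp/antrea-nsx.crt

# The script would then POST the cert to NSX Manager to create the principal
# identity, roughly like this (illustrative, not executed here):
#   curl -k -u 'admin:PASSWORD' -X POST \
#     "https://<nsx-manager>/api/v1/trust-management/principal-identities/with-certificate" ...

# Base64-encode the pair for pasting into the TKG cluster config values
base64 -w0 /tmp/antrea-nsx.crt > /tmp/antrea-nsx.crt.b64
base64 -w0 /tmp/antrea-nsx.key > /tmp/antrea-nsx.key.b64
```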
C: ...for you as well. So yeah, out of the box you can get it — you just have to run this script before, and then it automatically connects up to NSX, which is actually a really cool thing. I just generated this today; I've created about seven clusters this way, and it works well. And no, Ariel, I do not use Microsoft Word as an IDE.
C: I am not a crazy person on YouTube. Hopefully I am on YouTube, but... so, Robert says...
C: Right, yeah. I think this Antrea–NSX is actually a really cool integration. I think it shows a real change in the CNI world, where we'll be able to get truly good integration between our legacy apps and what exists in the container world — and bringing the two together here is actually a really nice capability. The last piece of the integration they also added is that we can run Traceflows with Antrea directly through the UI.
C: So this is connected to all of your clusters that are running Antrea, and you can now go and say: okay, I want to do this from the Yelb namespace, let's say from the UI to the app server, and I can trace this right now, and it's going to tell me exactly whether it passed or not. And if I actually go and edit this and change this to port, let's say, 443 — which shouldn't be open...
C: I guess it is — but it does work; my network policies... I deleted the network policies. But we can basically just go and completely mess around here, set up different network policies, then run the Traceflows from here and see which network policy it's falling through — a very simple UI, in the same place you would do it for any of your virtual machines.
C: We normally — wow — exactly, it's so much easier, because it's also connected to all of your clusters, right? So now I only have one, but you get a drop-down list and just select the cluster that you want to test — and you get the drop-downs, like the Octant plugin that exists as well.
C: Yeah, that would be awesome to see — that would be really cool, especially with the Antrea integration with NSX for routable pods in Tanzu. Adding that together with this, to get the routable pods, then the connection between clusters could be really fun to see the traffic flow.
C: It's just the two UUIDs at the end that are always a mess, and here we just get the full drop-down list, and you can do it by node, because it has all this data — no matter which namespace. So it just becomes really easy to play around with here and actually just get our Traceflows. So I think this is a really cool idea that they have this year.
C: Yeah, so — Calico Enterprise exists out there, there is some stuff, but I think that in the end the really nice thing here is that it's integrating into a system that already exists in a lot of enterprises. Calico Enterprise has a great UI, but it works for its applications: it works for Calico, it's its own ecosystem. Versus here, we're really integrating with a whole separate stack but getting a unified single pane of glass, kind of, which is cool. But anyways...
C: I thought that was just kind of cool to show. But, you know, in general this repo...
C: Yeah, so in general, once the packaging APIs came out for Carvel, I was actually just really interested in them and decided I would start playing around. One of the things that was getting difficult as a systems integrator was: you come to a customer, they're new to Kubernetes, everyone's heard of Helm, and now for all of the installations that happen, you say: great, for the Tanzu packages...
C
You've
got
to
use
the
tanzu
package,
install
and
we're
using
carvel,
but
then
go
ahead
and
install
let's
say,
cube
apps
and
there
you're
going
to
use
helm,
charts
or
it's
just
all
these
different
packaging
mechanisms,
and
it
was
starting
to
confuse
a
lot
of
people
and
when
I
looked
into
the
packaging
apis
of
carvel,
one
of
the
things
I
saw
is
that
there's
actually
support
for
help.
C
So
I
took
just
a
little
side
project
and
I
built
a
tool
actually
that
I
linked
here,
which
is
a
helm
to
carvel
conversion
tool
that,
basically,
you
can
give
it
any
helm
repository
and
it
will
convert
them
into
carvel
packages,
including
air
gap,
support,
because
one
of
the
things
that
carvel
is
really
strong
with
is
that
it
can
do
full
air
gapping.
It
can
pull
down
all
of
them.
C: Exactly. So if I take a look here, this is just a command, right? I run helm-to-package — which is the alias for this program — and give it a repo name and the repo URL that I want to pull from, and I told it the number of chart versions: I want to get the last two chart versions.
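The invocation shape, as described, is roughly the following — note the flag names here are paraphrased from the demo and should be treated as hypothetical; the repo's README has the exact syntax:

```
helm-to-package \
  <repo-name> \
  <repo-url> \
  --chart-versions 2     # hypothetical flag: "the last two chart versions"
```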
C: Yeah, so in general, what I wanted to do is build a consistent user experience. So if we look at Tanzu — tanzu cluster list, let's say — I get my clusters here, and the way that Tanzu works is that we also have this idea of packages that come from Tanzu.
C: Yeah, you can do it like this, if you just do the arrow pointing towards the source. And then if we did a tanzu package available list right now, one of the things we'll be able to see is that in this cluster I've got all of these different packages that are available to me — some of these come out of the box from VMware, and some of them are custom, yeah.
C: Just this one command that I'm running right now, right? So what this is going to do — this is actually taking the Bitnami Helm charts. I gave it a list of charts, because I don't want the whole thing, just because this is a demo, but if you don't pass in a chart-list file path, it's just the entire Helm repository. And what this is actually doing...
C
Is
it's
pulling
from
charts
bitnami
the
last
two
versions
of
the
three
charts
that
I
mentioned
in
there,
and
this
is
actually
going
to
push
up
to
my
registry
right
now,
packages
that
it's
generating
in
the
back
end
yeah
from
these
helm,
charts,
and
these
are
fully
air
gapped.
C
So
when
it
pulls
up
it's
going
to
pull
in
we'll
see
in
a
second,
we
can
see
it's
actually
using
k,
build
in
the
back
end
from
carvel
as
well
to
basically
come
and
understand
all
the
image
references
and
a
make
them
out
into
sha's.
But
once
I
go
and
copy
this
image
package,
which
is
the
package
repository
to
any
image
registry
in
the
world,
it
will
actually
import
all
of
the
container
images
as
well
as
all
the
manifests
in
a
single
repository
and
change.
C
All
of
the
references
for
me
at
deployment
time
to
point
to
my
local
registry.
So
this
becomes
really
a
strong
capability
and
we
can
see
this
is
just
pulling
down
everything
right
now.
We've
already
done
apache,
it's
doing
cube
apps
now
and
the
nice
thing
with
cubex
is:
it's
actually
got
two
different
levels
of
nested,
helm,
charts
and
this
tool
goes
through
all
of
the
nested
charts
as
well
and
pulls
them
all
in
so
so
now.
C: And if I just run this output command of tanzu package repository add, it just adds the repository, and if I were to do now—
A: I'm gonna interrupt you in a second, just warning you.
C: Oh yeah, no problem.
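The command being run has roughly this shape (repository name, registry URL and namespace are placeholders):

```
tanzu package repository add custom-repo \
  --url registry.example.com/packages/custom-repo:latest \
  --namespace tanzu-package-repo-global
```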
C: If I look at the package repositories, we can see it's not reconciled yet — it'll take a few minutes to reconcile, just like adding an app repository. But once this reconciles into my cluster, what I'll be able to do... All of these packages are actually always suffixed with, you know, kind of like a fully qualified domain name, and the fully qualified domain name I created here was antrealive — so we'll be able to do that.
C: So, managing them is actually not that difficult — it's actually really easy. In the end, it can be as easy or as complex as you want it to be. If it's just converting over Helm charts and then hosting them locally, it's as easy as running this tool and pushing it into your registry, and you're good.
C
If
you
want
to
write
your
own
custom
automation,
so
you
want
to
do
things
like
ytt
and
k
build,
and
all
of
that
once
you
have
those
manifests,
it's
a
matter
of
a
few
seconds
and
you
can
make
that
into
a
package
repository.
So
it's
really
easy
once
you
decide
what
you
want
to
build,
but
if
you
have
the
base
templates,
whether
that
be
ytt
or
helm,
charts,
it's
a
really
easy
process,
and
I'm
actually
in
the
middle
of
writing
some
documentation
on
that
that'll.
Make
it
much
easier.
C
Even
you
know,
to
add
into
my
repository.
C: A repository, in the end, is an OCI bundle — basically, a bundle that's located within a container registry. You give it the URL to that specific package repository and tell it in which namespace the package repository is supposed to be created. That's all you need to do, yeah.
C: We can actually see now that I've got two versions of it — I've got 1226 and 1227, which are actually mapped to the versions of the Helm chart itself. And if I went through the tanzu command and did a get of this package slash the version, and said: hey, give me the schema of what values I can pass into this — because in Helm I've got a values file, so how can I know what values to pass in?
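The command being described here has this shape (package name and version are placeholders):

```
tanzu package available get grafana.example.com/7.5.7 --values-schema
```

This prints the configurable keys, their types and defaults for that package version — the equivalent of reading a Helm chart's values.yaml.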
C: ...the v3 schema wizard from Israel — and I actually took all the descriptions from the fields within the packages; the defaults, the types, everything is set out here. So you can actually take any Helm chart and convert it here into a package with full documentation.
C: I'm a believer. But yeah, no — and I think it's really cool. It's just a little side project that I built; we released it as open source here at TeraSky, and it's a really fun project just to mess around with, because...
C
Exactly
so,
it
originally
had
been
here
within
my
own
repository,
and
then
I
released
it
under
the
company
and
the
old
version
that
was
here
is
how
I
got
you
know
whatever
there
were
a
bunch
of
stars
because
of
this,
and
then
I
just
decided
that
it
deserved
to
be
on
its
own.
So
I
separated
it
out
from
the
repo
but
yeah.
C: We're just starting, also, so there's going to be a lot more coming soon. So that's really Carvel packages, right? But the other cool thing when it comes to Carvel packages is that they're used in Tanzu, and sometimes they aren't necessarily exactly what we need. Sometimes we may need some more knobs than are exposed to us, and things like that — and using Carvel we can actually also extend that, using overlays.
C
So
I
also
have
within
this
repo
this
tkg
extension
modifications
where
just
as
an
example
here
you
know,
prometheus
that
comes
in
tanzu
doesn't
have
tanos
within
it
and
that's
great
for
a
lot
of
environments.
But
we
had
a
specific
use
case
by
a
customer
that
wanted
to
ship
all
their
prometheuses
to
thanos
for
long-term
storage
and
then
from
there
up
to
grafana.
C
So
basically,
what
we
did
is
created
just
a
simple
overlay
file
to
add
thanos
into
the
package
from
prometheus,
and
then
we
actually
can.
We
just
wrote
a
simple
install
script
that
installs
the
package
and
then
it
just
annotates
that
package
with
the
secret
that
the
overlay
is
in
and
then
it
will
automatically
reconcile
that
package
and
add
the
thanos
sidecar
right.
So
it's
small
things
like
that
or
even
like
in
the
extensions
harbor
that
comes
in
tanzu,
doesn't
come
with
chart
museum.
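A sketch of the annotate-the-package-with-an-overlay-secret pattern (names, namespace and version constraint are illustrative; the annotation key is the Carvel `ext.packaging.carvel.dev/ytt-paths-from-secret-name` extension):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: thanos-sidecar-overlay          # illustrative overlay secret
  namespace: tanzu-packages
stringData:
  add-thanos.yaml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind": "Deployment"})
    ---
    # ...ytt patch adding the Thanos sidecar container would go here...
---
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: prometheus
  namespace: tanzu-packages
  annotations:
    # tells kapp-controller to run the ytt paths from this secret over the package
    ext.packaging.carvel.dev/ytt-paths-from-secret-name.0: thanos-sidecar-overlay
spec:
  packageRef:
    refName: prometheus.tanzu.vmware.com
    versionSelection:
      constraints: ">=2.27.0"            # illustrative
```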
C: Right, so a good example of that is actually kapp-controller, because kapp-controller is really growing a lot — it's growing very fast — and it's very hard for downstream projects to always stay up to date with these things. One of the difficult things is that certain projects within VMware... so kapp-controller...
A
Is
basically
a
canned
operator
right
like
it
allows
you
to
basically
build
an
operator
for
anything
that
does
most
of
the
things
that
an
operator
might
do,
or
a
lot
of
the
things
that
an
operator
would
do
without
you
having
to
write
anything
exactly
right?
It's
awesome!
That's
how
all
the
packages
are
working.
We've
gone.
C: ...a great session with Dimitri that you did, yeah. But if we look at the general idea: kapp-controller is actually reconciled from the management cluster into all workload clusters, and Tanzu RabbitMQ, for example, came out with a new version, 1.2, which has some awesome new features — but its installation requires kapp-controller 0.28 or above, and TKG 1.4 comes with 0.23.
C
So,
basically,
what
you
can
do
in
that
case
is,
if
you
go
to
your
management
cluster,
and
if
you
look
here
in
cube,
ctl
get
app
in
the
namespace
of
your
cluster.
You
can
see
that
you
actually
get
one,
which
is
the
name
of
the
cluster
minuscap
controller,
and
if
I
actually
went
and
took
this
cluster,
let's
say
and
edited
this
app
one
of
the
things
that
I
can
do
on
any
app
cr
is
just
under
spec.
I
can
just
add
paused
true
and
once
I've
added
that.
C: Once I've done that, kapp-controller will not try to reconcile my application anymore. And now, if I go into that cluster — if I did a kubectx back to antrealive-cls02 — I can just do a get, you know, -n tkg-system, and then just go and delete the deployment — kubectl delete deployment -n tkg-system kapp-controller — and kapp-controller is deleted. And if I went to...
C
Exactly
right,
so
nothing
is
going
to
actually
go
and
replace
this
anymore,
and
now
I
could
go
and
deploy
my
own
cap
controller.
I
could
deploy
cap
controller
through
the
open
source
and
to
play
whatever
version
I
needed.
So
this
is
a
really
strong
capability.
C: Right — but yeah, so this is like a really cool capability that we have when we're dealing with, like, Antrea, right? And some of the other really cool things, just to mess around with here...
C
If
we
went
into
the
tkgm
customizations
that
I
just
pulled
down,
if
I
go
into
tkg
customization,
this
was
something
I
was
showing
jay
a
few
weeks
ago
with
some
of
this
stuff
in
the
custom
ytt
overlays
that
I've
been
building,
because
really
when
we
talk
about
how
to
you
know,
modify
tanzu
right
or
any
of
these
extension
points,
we
can
add
our
own
package
repositories.
C
We
can,
you
know,
pause
applications,
but
one
of
the
things
that
we've
experienced
from
a
lot
of
our
customers.
They
say
when
I
deploy
a
cluster
on
one
hand.
I
want
to
give
self-service
to
my
end
user
right.
I
want
the
developer
to
be
able
to
say,
hey,
give
me
a
cluster,
but
when
a
cluster
gets
deployed
we
want
it
to
have
certain
things
already
installed
in
it.
We
want
to
make
sure
that
it
has
prometheus
and
grafana.
C
We
want
to
make
sure
that
by
just
installing
the
cluster,
maybe
I
get
the
tanzu
postgres
operator
installed
automatically
or
tanzu
build
service
or
application
platform
right.
C
So
one
of
the
things
that
I
added
into
here
is
actually
this
installation
directory,
and
these
are
just
some
ytt
files
that
if
we
look
at
the
readme
it'll
be
easier
to
see
here,
but
if
we
actually
looked
at
like
any
of
these
things,
let's
say
you
know:
if
we
look
at
this,
basically
all
you
need
to
do
is
copy
these
files
into
you
know
this
folder.
I
should
update
that
dot.
C: ...config slash tanzu, to the new path. But basically, once you put all of these YAML files into this path...
C: ...dot config, yeah, exactly. And if we look here, we've added all these new data values now: enable Tanzu software auto-install. Do we want it to install cert-manager, Contour, external-dns, Prometheus, Grafana, Fluent Bit, Tanzu Build Service, Tanzu Postgres, Tanzu RabbitMQ, Tanzu MySQL and Velero? It can install and configure all of this automatically for me.
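A sketch of what such a data-values file might look like — the exact key names live in Scott's repo and are paraphrased here, so treat them as hypothetical:

```yaml
#@data/values
---
enable_tanzu_software_auto_install: true    # hypothetical key names
auto_install:
  cert_manager: true
  contour: true
  external_dns: false
  prometheus: true
  grafana: true
  fluent_bit: false
  tanzu_build_service: false
  tanzu_postgres: true
  tanzu_rabbitmq: false
  tanzu_mysql: false
  velero: true
```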
A
So
so
robert
has
a
question
so
robert
yeah
he
uses
that
paused
annotation
and
that
paused
annotation
present
prevents
reconciliation
of
those
of
that
local
cap
instance
exactly
and
so
that
local
cap
instance,
even
though
tana's
tanzania
management
cluster
isn't
going
to
say,
like
you
know
that
the
desired
state
of
the
workload
cluster
is
different.
That
local
cap
instance
is
not
going
to
implement
that,
because
it's
going
to
see
that
and
it's
going
to
pause
it.
I
think
that's
how
it
works.
C: Exactly. So if I go back there, just to show this again — if I go back and did a k get app, and it was antrealive-cls02... and actually, instead of get, let's just...
C
Yes,
this
is
what
that
did
yeah
this
disabled
cap
in
a
workload
cluster.
So
one
of
the
cool
things
that
people
don't
know
about
an
app
cr-
and
this
is
a
really
strong
capability-
is
that
the
app
cr
is
not
just
local
within
a
cluster,
and
if
you
have
a
secret
with
a
cube
config
in
it
for
another
cluster,
you
can
just
add
and
inspect
this
cluster
cube,
config
secret
ref
and
just
give
it
that
cube
config,
and
it
will
reconcile
this
application
from
the
management
cluster
in
the
workload
cluster
yeah.
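The remote-cluster targeting lives under `spec.cluster` on the App CR; a sketch (App name, namespace, secret name and bundle image are illustrative):

```yaml
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: kapp-controller-remote        # illustrative: runs in the management cluster
  namespace: workload-cluster-ns
spec:
  cluster:
    kubeconfigSecretRef:
      name: workload-cluster-kubeconfig   # secret holding the target cluster's kubeconfig
      key: value
  fetch:
    - imgpkgBundle:
        image: registry.example.com/bundles/kapp-controller:v0.28.0   # illustrative
  template:
    - ytt: {}
  deploy:
    - kapp: {}
```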
C: You put kapp-controller in the workload cluster so that you can then manage, from within the cluster, Antrea, or CSI, CNI, CPI, AKO — all of that stuff, right. But technically, anyone who wanted to use this in TCE, let's say, or wanted to use this in their own distribution, could install kapp-controller in a single management cluster and actually manage all their workload clusters with kapp. That's the capability that kapp gives us.
C: Right — and here we added this paused: true. So what this is saying is that the App in the management cluster that's actually deploying kapp-controller into the workload cluster should not reconcile. That's how this is working, basically.
C: It's on a per-cluster basis — so it's basically per cluster. You can run a kubectl patch instead, so if you want to do this...
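As a one-liner, the patch version of the `paused: true` edit looks roughly like this (cluster name and namespace are placeholders):

```
kubectl patch app <cluster-name>-kapp-controller \
  -n <cluster-namespace> \
  --type merge -p '{"spec":{"paused":true}}'
```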
C: And exactly — what you would do could be done with a job. You could create a ClusterResourceSet that would run a job that would just patch it for you automatically.
C: ...into providers, and then ytt, and went into here — and then it would be under... where would kapp-controller be?
C: I'd have to look into this a bit deeper, but it could actually be possible, if the manifests are actually here and not part of the back-end code, but are actually part of, like, the kapp-controller overlay.
C: If I did a cluster — I have to actually change kubectx into the management cluster to do this — and then did a k get clusterresourceset and took a look here, you can see them for each cluster. We actually don't have one for kapp, because kapp just gets deployed automatically, and then...
C: ...Golang code, exactly — but the manifest, I believe, would actually exist here. Actually, it should be in the 02 add-ons, and if I went into kapp-controller, and if we looked here and cat that...
C: Yeah, so let's see — so here we're creating a secret... a secret... No, so this is the old method, of a ClusterResourceSet. If I looked at the kapp-controller lib...
C: You're on the kapp-controller? No, this! No — this is the deployment of kapp-controller itself, but that would be where you would be able to change that. No, that's the deployment of kapp-controller in the workload cluster — not the kapp-controller App that gets created in the management cluster. You also have the App itself, right?
A
Should
we
draw,
let
me
let
me
sh,
let
me
share
for
a
second
and
just
yeah
just
so
I
could
show
people
what
we're
talking
about,
because
let
me
see
here,
I
think,
we've
gone
through
this
before
yeah,
then
I'm
gonna
give
it
back
to
you
all
right.
So.
A
Meanwhile,
you
can
hack
around
and
see
if
this
even
works,
yeah,
I'm
trying
to
see
if
I
can
find
this
yeah,
so
all
right
so
I'll
share
just
for
us
just
for
a
second
okay.
So-
and
we
did
this
into
the
dimitri
show,
as
you
mentioned
right,
but
right
so
in
general,
what
scott's
going
back
and
forth
and
looking
at
the
way
things
work
right
now
is
that
you
have
a
you,
have
a
management
cluster,
and
why
is
myro
like
spamming
me
on
christmas?
A: ...which then reconciles the kapp instances on the workload cluster — Scott already talked about that. And what Scott's looking at right now is the idea of: is there a way that this itself — since this is actually created using a ClusterResourceSet, right, exactly — when I make a workload cluster, is that also created by a ClusterResourceSet? When I create one, is kapp also put in via CRS on the workload cluster, or is it put in through kapp?
A: Yeah, so it does not.
A: Possibly move — like, we're not 100% sure where we're going to go with this. But right now, the way it works is that kapp-controller... I mean, it's okay to do kapp-controller on the host port, that makes sense. But then the tricky thing is that there's a weird thing we do that's an optimization, which is that we install the CNI as a CRS and then we adopt it into kapp, and that's... that's...
C: Yeah, yeah. It's a pretty complex idea, you know, but it works really well — it's actually really fun.
C: Yeah, so I think there are just two other kind of cool things in this sphere. One of them — and I like to bring this up because, you know, obviously kube-proxy is the best part of Kubernetes...
C: You know, there's no need for a next-gen tool there.
C
Why
would
you
ever
want
to
like
rebuild
it
exactly,
but
so
in
the
basically?
What
we
have
here
is
we
actually
had
you
know,
cube
proxy
by
default
is
used
in
ip
tables,
and
the
cube
proxy
with
ip
tables
has
some
performance
limitations.
Let's
say,
and
we
had
a
customer
that
had
very
large
scale
that
wanted
to
run
cube
proxy
in
ipvs
mode
and
cluster
api
actually
doesn't
enable
that
it's
not
the
tanzu.
C
Doesn't
it's
that
cluster
api
doesn't,
and
so
basically,
what
we
built
here
is
a
hack
that
actually
enables
ipvs
as
well,
so
a
customer
can
just
add
enable
ipvs
on
their
cluster
and
they
get
ipvs
mode
instead
of
ip
tables
and
the
way
that
works
is
pretty
ugly,
but
we
basically
run
an
overlay
here
on
the
cube,
adam
config
templates
and
add
in
the
different
mod
pro
commands
in
order
to
you
know
just
enable
everything
that's
needed
for
ipvs,
because
the
default
templates
don't
have
that.
So
you
know.
C
We overlay-append here, also on the KubeadmControlPlane, and then what we do is create a file; we just call it /tmp/generate-kube-proxy.sh. These commands are run in a section called preKubeadmCommands: all of the files for running kubeadm are already on the cluster, but before it runs kubeadm init or kubeadm join, it runs these commands.
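A rough sketch of the kind of ytt overlay being described here. The resource match, field names, and script path are illustrative, not the exact contents of the tkgm-customizations repo:

```yaml
#@ load("@ytt:overlay", "overlay")

#! Drop a helper script onto each node and run it before `kubeadm init`/`join`.
#@overlay/match by=overlay.subset({"kind": "KubeadmConfigTemplate"}), expects="1+"
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      files:
        #@overlay/append
        - path: /tmp/generate-kube-proxy.sh
          permissions: "0755"
          content: |
            #!/bin/sh
            # ...script body goes here...
      #@overlay/match missing_ok=True
      preKubeadmCommands:
        #@overlay/append
        - /tmp/generate-kube-proxy.sh
```

The same pattern is applied to the KubeadmControlPlane object so control-plane nodes get the script too.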
C
What we have it do is run this script, which we actually create as part of the cloud-init spec. It goes over all the files in the /tmp directory, which is where the files get placed by the KubeadmControlPlane or the KubeadmConfigTemplate, finds the kubeadm config file that was generated for us by Cluster API, reads that file, and appends to the end of it another YAML document: a KubeProxyConfiguration with mode set to ipvs. By doing this, when kubeadm is initiated by the Cluster API cloud-init script, IPVS mode is selected automatically, because we've manipulated the file before it's executed, and so we get full IPVS.
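That append-before-kubeadm trick can be sketched as a self-contained shell demo. This is not the actual script from the repo; the directory layout and the ClusterConfiguration match are assumptions:

```shell
# Stand-in for the directory cloud-init writes the kubeadm config into.
KUBEADM_DIR="$(mktemp -d)"

# Simulate the config file that Cluster API would have generated for us.
cat > "$KUBEADM_DIR/kubeadm.yaml" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: demo
EOF

# Find the generated kubeadm config and append another YAML document
# selecting IPVS mode, before kubeadm ever reads the file.
for f in "$KUBEADM_DIR"/*; do
  if grep -q "kind: ClusterConfiguration" "$f" 2>/dev/null; then
    cat >> "$f" <<'EOF'
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
  fi
done

grep "mode: ipvs" "$KUBEADM_DIR/kubeadm.yaml"   # -> mode: ipvs
```

On a real node the loop would scan the directory the bootstrap provider writes into, and kubeadm would then consume the modified file as-is.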
So I did that, and the reason I've also done this: if we look here at the cluster config changes, I've also added support for Cilium, so you can install Cilium on Tanzu. There's an install-Cilium option, and an install-Cilium-CLI option.
C
The Cilium CLI part also works with these preKubeadmCommands: it just pulls down the Cilium CLI, checks the SHAs, and then moves it into our local bin, so that if you needed to debug Cilium, for example (and I did the same thing for Calico), you can debug Calico or debug Cilium.
C
You need the special CLI, really. And sometimes, when you have networking issues (which is why you're debugging in the first place), it's kind of hard to download a binary onto a host that has networking issues. So I just install it automatically on every node when it comes up, and then any time you SSH into that node, or open up a console to that VM, you can just run the cilium command; it's already installed onto all the nodes.
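Pulling the CLI onto every node this way would look roughly like the following preKubeadmCommands fragment. The commands mirror the upstream cilium-cli install instructions; the exact commands and version pinning in tkgm-customizations may differ:

```yaml
preKubeadmCommands:
  - curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
  - sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
  - tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
  - rm cilium-linux-amd64.tar.gz cilium-linux-amd64.tar.gz.sha256sum
```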
C
That is, if you decided to install Cilium. And with Calico we also did this for the install, so that we could do things like BGP support and Typha and some of the cool things Calico can do.
C
So the sequencing is that those files get created by cloud-init, right. If we actually take a look here, let me bring up the environment; basically the way it works is, let's see, I do:
C
antrea-live-cls02. So let's say I wanted to take a look at what exists in this cluster, and just do --show-conditions all. All right, awesome. Oh, and sorry, what is it, show group members? That's what I forgot to do. Awesome. So once I have this, this is basically just a nice view, in the Tanzu CLI, that comes from clusterctl.
C
It gives me the status of all of my objects, right: we have our cluster, then we have the KubeadmControlPlane and the machines under it, and the same thing goes for the workers, and we get all this stuff. That's all cool. Now, when it comes to the bootstrapping, we actually have a few other tools, a few other CRSs.
C
We can actually take a look at it. If I do a kubectl get secret, and let's see here, antrea, yep, that gets pulled down. And if I looked here, right, if we wanted to see just what this looks like, just as the example, I said: hey, control plane, I want to look at the control-plane node.
C
This is the cloud-init script, right. So this is everyone that's using the kubeadm control plane provider today. I mean, there's work going on upstream to support Ignition as well in the kubeadm provider, but currently it's just cloud-init. So what happens?
C
We get this cloud config, and basically what happens is it adds all these write-files entries that get processed for us. This is where we're creating the kubelet manifests, all the certificates are being passed in, and we're also creating that kubeadm YAML file and all the files that we really need, and we have the init configuration in it. And after those files are created on the system, nothing has been run yet.
C
It writes those configurations out, yeah. There's also the ability to have postKubeadmCommands; the bootstrap provider adds those after the kubeadm init command, yeah. So it orders them accordingly: the files are created, and then it runs the commands. That's basically the ordering that happens here automatically for us, yeah.
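The ordering being described, write_files first and then runcmd, has roughly this shape in the rendered cloud-config (abridged; the paths shown are illustrative):

```yaml
#cloud-config
write_files:                              # 1. everything lands on disk first
  - path: /etc/kubernetes/pki/ca.crt
    content: "..."
  - path: /tmp/kubeadm.yaml               # the generated kubeadm config
    content: "..."
  - path: /tmp/generate-kube-proxy.sh     # anything added via `files`
    content: "..."
runcmd:                                   # 2. then commands run, in order
  - /tmp/generate-kube-proxy.sh           # preKubeadmCommands
  - kubeadm init --config /tmp/kubeadm.yaml
  # postKubeadmCommands would follow here
```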
C
Yeah, exactly. So here, just as an example of what does exist out of the box: this part is adding into /etc/hosts the hostname that was received, pointing at 127.0.0.1. And the reason this is so important: there was actually an issue upstream in Cluster API where someone said that their nodes were coming back up after being shut down; he powered them back on, and they were coming back up without an IP address.
C
Well, without an external IP: the nodes were working, but the external IP wasn't returning. And when we debugged the issue, what it seemed to be was that it was basically trying to run health checks against the node name, because that's the way Kubernetes works; the kubelet was running against its own node name. But because his manifest didn't add the hostname into /etc/hosts alongside localhost, it wasn't able to reach itself: that name doesn't exist in DNS, and it was failing.
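The out-of-the-box fix being described, mapping the node's own hostname to loopback, can be sketched like this. HOSTS_FILE stands in for /etc/hosts so the demo doesn't touch the real file:

```shell
HOSTS_FILE="$(mktemp)"    # stand-in for /etc/hosts

# Map the received hostname to loopback so the kubelet can always
# resolve and reach itself by name, even with no external DNS entry.
entry="127.0.0.1 $(hostname)"
grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"

cat "$HOSTS_FILE"
```

In the real cloud-init this is just an extra line appended to /etc/hosts during bootstrap.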
C
It's fine, it's registered and everything is fine, but without being able to get that internal or external address, you know, it can break a lot of the integrations that happen on top of Kubernetes on the node.
C
Exactly, right. So that's just another example of these cases where we really can automate most of these fixes, because Cluster API is so pluggable. And TKGm specifically happens to be the only distribution that exists, from any vendor out there, that is a supported Cluster API distribution giving you full access to the raw Cluster API manifests. That's unlike other products from VMware, and from other providers as well.
C
You know, whether it's to just test things outside of Antrea and NSX-T, or we can do things like Calico with Typha and install an up-to-date version of Calico. You know, it's always fun. I like this.
C
...in an easy way, out of the box, from day zero. Because we do have the new capability of node pools in 1.4, but that's a day-two action, it's one by one, and it's a manual task; it's not fully baked in yet, it's kind of more of a veneer over the MachineDeployment, exactly. And so what we did here is we added this other vSphere overlay, and what we do here is basically a bunch of loops through ytt, so we allow every value of a virtual machine spec...
C
So, everything: if we looked at the default values here, right, we've got vSphere worker num CPUs and disk GiB and memory megabytes and datacenter and datastore, and all these things are possible for our single machine deployment. But what if I want them to be different for each machine deployment?
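The per-MachineDeployment looping could look something like this ytt sketch. The value names (ADDITIONAL_MD_COUNT and friends) and the getattr fallback are illustrative of the approach, not the repo's exact template:

```yaml
#@ load("@ytt:data", "data")

#@ for i in range(1, data.values.ADDITIONAL_MD_COUNT + 1):
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: #@ "{}-md-{}".format(data.values.CLUSTER_NAME, i)
spec:
  #! Per-index value if it was set, otherwise fall back to the md-0 value.
  replicas: #@ getattr(data.values, "WORKER_MACHINE_COUNT_" + str(i), data.values.WORKER_MACHINE_COUNT_0)
  #! ...the rest of the spec (template, infrastructureRef, and the matching
  #! VSphereMachineTemplate) is stamped out the same way, once per index...
#@ end
```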
A
So Matt, this is a cool one. I know y'all over at Synopsys are talking about multiple clusters for multiple people, and Scott has sort of solved that with tkgm-customizations on Tanzu: he's got a recipe he uses that will just roll out, you know, 50 clusters for people, and he just indexes stuff.
C
For example: if we look at the autoscaler code in Tanzu, if you set enable-autoscaler but didn't set AUTOSCALER_MIN_SIZE_0 (or the max size) to whatever, it'll take the machine count. Now, just because someone is adding another MachineDeployment to their cluster, that doesn't mean I want them to have to repeat every variable. So the nice thing in ytt is that we can actually come and say, okay, for the cluster-autoscaler annotation, basically say: okay.
C
If there is an autoscaler min size at that index, AUTOSCALER_MIN_SIZE_1, 2, 3, 4, whichever machine deployment they're creating here, awesome. If not, use WORKER_MACHINE_COUNT_1, 2, 3, 4, whatever it is. And if that doesn't exist, go and use WORKER_MACHINE_COUNT_0, because that needed to exist for machine deployment zero.
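That precedence (per-MD autoscaler min size, else that MD's worker count, else the md-0 worker count) is easiest to see spelled out. An illustrative shell version; the real logic lives in the ytt templates:

```shell
# For machine deployment i: prefer AUTOSCALER_MIN_SIZE_<i>,
# else WORKER_MACHINE_COUNT_<i>, else WORKER_MACHINE_COUNT_0.
min_size_for_md() {
  i="$1"
  eval "v=\${AUTOSCALER_MIN_SIZE_${i}:-}"
  [ -n "$v" ] && { echo "$v"; return; }
  eval "v=\${WORKER_MACHINE_COUNT_${i}:-}"
  [ -n "$v" ] && { echo "$v"; return; }
  echo "$WORKER_MACHINE_COUNT_0"
}

WORKER_MACHINE_COUNT_0=3
AUTOSCALER_MIN_SIZE_1=2
WORKER_MACHINE_COUNT_2=5

min_size_for_md 1   # -> 2 (explicit min size wins)
min_size_for_md 2   # -> 5 (falls back to that MD's worker count)
min_size_for_md 3   # -> 3 (falls back to the md-0 worker count)
```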
C
So this way we can actually play around, and we can add in all these customizations and just have it in a for loop, where they just set an additional-MD count saying how many machine deployments they want, and automatically everything around cluster-autoscaler, the number of replicas, everything, is handled. And then we also added, because it doesn't exist in TKG right now, the ability to add node labels.
C
So what if I wanted a specific label on my worker nodes when they came up? This becomes really important when I have multiple machine deployments, because if I have two sizes, one has GPUs and the other doesn't, one is large and the other is small, I want to be able to target them accordingly.
C
So here we're just creating an overlay on the KubeadmConfigTemplate and adding node labels automatically, and if they didn't give me a custom node label, I'm creating a node label of cluster-api machine-deployment with the MachineDeployment name.
C
They can either give me one, or I'm going to create one anyway, so that there's a way to distinguish where within the cluster this node came from. But they can offer their own labels, and we can work with those as well.
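The label ends up on the kubelet via kubeadm's nodeRegistration. A fragment of what the overlaid KubeadmConfigTemplate might render to; the label key and name shown are illustrative:

```yaml
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            # defaulted to the machine deployment name unless the user supplied labels
            node-labels: "machine-deployment=my-cluster-md-1"
```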
C
Important for developers, right? Exactly. Anyway, yeah.
A
This is the longest show we've done, it's probably got the most content in it, and it's definitely the most practical, right? So I feel like we should do a follow-up, Scott, because I feel like you're just getting started here.
C
Yeah, more than happy to. I'd love to come back and hang out some more. Huge thanks to TerraSky for letting me.
C
Yeah, and really just a call-out to everyone: if anyone has questions on any of this stuff, or is trying it and getting stuck, feel free to reach out to me on all the social media channels and whatnot. I'm "vrabbi" everywhere; that's my handle on the Kubernetes Slack, the VMware {code} Slack, everywhere. So really, feel free to reach out and to raise issues on the repo.
A
Yeah, and we have a little bit of a Carvel show, but we didn't go deep into the Carvel tooling that configures Antrea specifically: how that's plumbed from TCE into Framework, and the lifecycle of that. A member of my team here at VMware sort of owns all that, and I'm sure he'd be happy to walk folks through it. So, yeah.
C
Yeah, so Robert is actually from another company, ITQ, based in Holland. He's another Tanzu expert, and I hang out with him a lot; he's pretty awesome and knows a lot of stuff in the Tanzu world. So yeah, okay, this is cool.
A
All right, so yeah, we'll do an Antrea deeper dive: a deeper dive into the Carvel side, how we package it, how we ship it in Framework, and how we're going to move towards shipping it upstream soon. Scott.