A: All right, welcome everyone to the Kubernetes SIG Cluster Lifecycle, Cluster API Provider Azure office hours, March 31st, 2022.
A: We are abiding by the CNCF code of conduct, and please use the raise-hand feature if you have questions so we can keep this meeting nice and orderly. I'm sharing the agenda; now is a great time to add your name to the attendee list if you like, and to add topics, because we have plenty of time to discuss them. We have one thing on the agenda so far today, which is that Jack is going to talk to us about the official cloud-provider-azure helm chart.
B: So while that's happening, I'll give a little bit of background on what we're talking about. The main thing that folks might not be too familiar with is the concept of an out-of-tree cloud provider. The backstory is that traditionally in Kubernetes, the sort of big three or four cloud providers (Google, Amazon, Azure) were included in the Kubernetes code base itself and link-compiled into the main bits of controller-manager running on your cluster, and so it depended on how you invoked your controller-manager runtime.
B: When you started up your control plane node VM, you would either be running in an Azure context, or an AWS context, or a Google context, and that was sort of it. So there were cloud-provider-specific configurations that you could pair with your controller manager running in that context, usually via a file on the control plane node's file system.
B: Over time it was determined that, to increase flexibility and to execute the correct separation of concerns, it would be better for the cloud provider runtimes to be distinct from the Kubernetes bits, the controller-manager runtime. So a multi-year effort, literally probably six or seven years at this point, has been underway to do that across all of the cloud provider vendors in the ecosystem, including Azure.
B: So for the last several years the cloud-provider-azure maintainers have been doing lots of hard work to do that, to allow folks to run the cloud provider as a distinct component, and they cut 1.0 about a year ago or so. What this means in practice is that when you are bootstrapping your cluster, you're going to have a distinct set of pods running on the cluster that are responsible for doing Azure work, as distinct from running inside the controller-manager runtime itself. This introduces...
B: Cool, so this introduces some interesting challenges for Cluster API workflows, because we now have to bolt on separate Kubernetes resource specifications as part of the cluster, because arguably the cloud provider runtime is a sort of foundational, intrinsic part of your cluster definition.
B: It's not something that you can do after the fact, so it is sort of comparable to the way Cluster API has dealt with CNI in the past, where it's a separate-but-equal kind of thing. It's a part of the cluster, but it is not handled by a Cluster API primitive. And so the way that we have referenced how you can do this in the cloud-provider-azure repo is similar to how we referenced how you can install Calico.
B: So we keep a standard Calico reference as a static Kubernetes resource spec, expressed in a ClusterResourceSet. Feel free to ask questions if you don't understand what a ClusterResourceSet is; it's essentially a thin Cluster API wrapper on top of a sort of arbitrary static Kubernetes manifest specification. But this allows you to define a ClusterResourceSet in your cluster template and get all of those static Kubernetes resource specs delivered to your cluster at cluster creation time.
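For readers who haven't seen one, a minimal ClusterResourceSet looks roughly like the sketch below; the names, label, and namespace are illustrative, not taken from the actual CAPZ template:

```yaml
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: calico-addon            # illustrative name
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: calico               # any workload cluster carrying this label gets the resources
  resources:
    - name: calico-manifests    # ConfigMap holding the static Calico YAML
      kind: ConfigMap
```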
B: Folks may not have already done that, and it's not obvious how to do it for people who are just sort of hopping into the Cluster API ecosystem through the Azure front door. So it adds an additional level of friction for building clusters that are actually functional, that actually have CNI and, in the out-of-tree cloud provider scenario, are running the out-of-tree cloud-provider-azure bits on the cluster. But to go back to the original point of this digression.
B: The external cloud provider stuff has been in the CAPZ repo for a long time, probably a year or so, expressed as ClusterResourceSets in a reference template, and we test that as an optional test on PRs and on regular (daily or every-four-hour) testgrid jobs, to make sure that the out-of-tree cloud provider bits that we are aware work continue to work in our sort of standard cluster definition scenario.
A: I have a lot of questions, but let's go to Ashitosh first, because he has raised his hand.
B: Right, no, it's not something that you toggle on and off. It is something that you do want to maintain independently of Kubernetes itself. So that's an important distinction: when you're running your cluster using the out-of-tree cloud provider, you're using bits that are... can you folks see an ugly old go.mod change set?
B: I bet it is. Cool, so yeah, the cloud provider bits, when you're running in an external context, move forward independently of Kubernetes. So when there are bugs in the Azure part of the surface area, they're almost always patched before a new version of Kubernetes, or the appropriate version of Kubernetes, is released, so you're able to consume fixes to various bugs in the cloud provider area much more quickly if you're running out-of-tree, or external.
B: So to answer your question, the key aspect of how we test the out-of-tree cloud provider, if this is your question, is this: this is an open PR that I have that demonstrates what eventually I'm going to get to with Helm. So we are in our reference template here that we use to test the out-of-tree cloud provider, and you can see all this negative differential here: we're removing all of the ClusterResourceSets.
B: So these are all of the Kubernetes-layer specifications that we use to install the out-of-tree cloud provider as distinct components. This is how we test it right now: the template itself, during cluster creation time, includes a ClusterResourceSet which is going to put this static pod on the cluster, running the Azure cloud controller manager. This is what you get when you run out-of-tree, instead of having it be a part of the controller manager.
B: Oh, resume share. Okay, so sorry, there you go. Remember what I was babbling about 45 seconds ago? This is what I was scrolling through.
B: So sorry about that. This is the reference template that we use to test, and we are removing all these external provider bits in order to use a standard, canonical reference of cloud provider that works on Cluster API, that works on CAPZ, independent of this ClusterResourceSet. So that's sort of what I'm going to demonstrate. But again, the reason why this is undesirable comes down to two main reasons. One is that the folks who maintain these reference templates have to continually move these bits, this specification, forward. You can see what we do right now: there's literally a hard-coded version of this cloud controller manager piece in there, so to keep this fresh we have to independently move this forward, you know, every week or so; they release fairly often. And then secondarily, the ClusterResourceSet is not a sort of native, built-in CRD that you just get with Cluster API.
B: You have to explicitly opt into using this, which we as CAPZ maintainers are familiar with, and we do in our test CI. So when we build kind management clusters, we definitely enable the feature flag that allows us to use ClusterResourceSets. But it adds additional friction for users who don't really need to know about this kind of thing, and we'll basically hand them a template that won't work on their management cluster.
D: Well, so one more question that I wanted to understand: when we do run this test, how do we instruct the kubelets, or the responsible components, to be able to talk to the external cloud provider? I know for a fact that by default, until you specify whether you want to use the in-tree cloud provider or the external cloud provider... are we using some flag to achieve that?
B: Yeah, so... I'm not fully... I don't have the full expertise to understand.
B: So they have separate bits running on each node, communicating in an Azure context back to the mothership, so to speak. The way that you instruct the control plane that you're running in a... I wonder if it would be easier to just switch to VS Code, yeah.
A: Oh, I was just going to say I know Cecile had some things she wanted to add there, so maybe before we switch away we should add that... no, actually, Jack was just going to go ahead.
B: There's a simple flag. The same flag that you've historically used to say "I'm running controller manager in an Azure mode" or "I'm running controller manager in a Google mode" has been overloaded since the out-of-tree effort commenced, and there's a new value called "external".
B: Found it, there we go, yeah.
B: And so this is then dispatched... this is the kubeadm spec; this just gets dispatched under the hood to the controller-manager runtime, so it knows, basically: I am not going to do any cloud provider operations, I'm running in an external mode. And I don't know exactly how the plumbing of the Azure cloud controller manager works; there are probably some simple labels or flags that are used to know...
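For context, this is roughly the shape of that wiring in a Cluster API kubeadm-based control plane. The fragment below is a sketch of the general pattern, with an illustrative name, not the exact CAPZ reference template:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane     # illustrative name
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          cloud-provider: external   # defer cloud calls to the external CCM
      controllerManager:
        extraArgs:
          cloud-provider: external   # kube-controller-manager skips its cloud loops
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external   # kubelet leaves nodes tainted until the CCM initializes them
```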
D: ...to do that work. I mean, the only reason I want to ask this question is that I was trying to actually upgrade 1.22 to 1.23 in the management cluster, and there are some CSI migration flags, like the Azure Disk CSI migration, that have been enabled by default, and then I just wanted to use the external cloud provider, but then the upgrade fails. So I'm trying to validate and understand.
B: I'm glad you brought up the CSI driver. So yeah, that's also, I believe (is that still right?), something that we deliver as a ClusterResourceSet. So that's another candidate, if we're able to reach consensus on this solution of using Helm as a reference delivery mechanism to get these super-important cluster bits, which aren't actually part of the cluster definition, in at bootstrap time.
B: If we can define a workflow that does that conveniently, then that should greatly... I don't want to say "ease", because it's not actually easier, but it's better. When you go to upgrade your cluster, you will upgrade your cluster independently of CSI, independently of CNI, independently of out-of-tree. The more likely scenario that will empower you is: you can upgrade your CSI without upgrading your cluster.
E: Yeah, we're actually not installing the out-of-tree CSI at all right now, which we need to start doing, and I think in terms of timeline it's actually a little more time-sensitive than cloud provider, because cloud provider was delayed; it's been pushed back many times, but it was pushed back to 1.26.
A: I did have one other question while you're preparing your VS Code share; we'll see if folks want to answer. I don't know if you want to take it, Megan... you asked about a reference architecture for deploying CAPZ in air-gapped environments.
H: So if I were to deploy CAPZ in an internet-restricted environment, I wanted to know if there's a reference architecture that I can look at, so that I can have more details about where the management cluster should be. Should it be in the private subnet, or should it be in the public subnet? So...
E: Yeah, I don't have the answer you're looking for; there probably is no reference architecture or anything like that right now. I'm personally not aware of anyone who has used CAPZ in air-gapped or internet-restricted environments so far, but I know that it will be a use case for many customers.
E: So it's something that I think we want to look at together as a group, for sure, in the upcoming period. That being said, I'm not sure I understood exactly what you meant by private subnet or public subnet; subnets aren't public or private in Azure. There are, you know, security rules which allow traffic, outbound and inbound, in and out of the virtual machines that are in your VNet, and that can be restricted.
E: So you can restrict the outbound traffic such that your VNet, or your VMs, can't reach the public internet, and I think the only thing that would limit in a Cluster API scenario is what images it can pull and what binaries can be installed, which means that you'd have to have an OS image which has all the required dependencies pre-baked into the image, or pre-installed, or something that makes it so that you don't have to download anything when the cluster first comes up.
E: I think one thing we need to start doing is testing that scenario. That would be the first step: having a test that basically just cuts off outbound internet access and makes sure that the cluster can be created properly.
F: Yeah, as far as where your management cluster goes, that really comes down to what your requirements are. The other side of it is...
F: You could probably have a registry available from that private network, so you could put images over there. Normally, in cases like this, we would have virtual machine images that you would probably want to load into a shared image gallery inside that private network, so that you can access those images. There's a handful of stuff that needs to get built out there, and if somebody's passionate about it in the community, we would love the help in figuring out how to document this, so that at least we can say, hey, here are all the pieces that fit together, and you're going to have to kind of figure out what best fits your use case, but we can...
A: Okay, Josh, I think you maybe wanted to weigh in on that, and then I think Jack is ready to show us some stuff.
I: Sorry, Josh here; I wanted to speak briefly, a little bit more to the context: Megan and I are developing a lot of the end-to-end tests for CAPZ as well, and have been working on air gap for CAPZ, and so we were looking to see if there was, like, a validated architecture around air gap. The working design right now is essentially a non-restricted management cluster that is paired with a virtual network with a restricted...
I: ...you know, closed subnet, and then kind of going from there. But it sounds like we're kind of free to design this, so I don't think there's a problem there; we just wanted to make sure that we're following, you know, best practices, if there was anything written out.
F: Yeah, so "git status", and it's going to be one of the confusions.
B: By the way, have we done a demo on how to run E2E locally? It's really convenient when doing tests. Have you done that, James?
C: We didn't get to doing it last time; we were going to do like a follow-up, I think.
B: Okay, cool. Thank you for your patience, everyone.
B: So while this is happening, I'll talk through what this is supposed to demonstrate and call out some of the benefits of doing this with Helm. And while I do that, let me get a couple of... let's see, do I have a cloud provider...
B: I referred earlier to the flexibility of being able to manage your cloud-provider-azure bits independently from your Kubernetes bits, but that also presents challenges. One of the challenges, historically, is that the out-of-tree project was managed using a distinct release versioning semantic compared to Kubernetes.
B: So this project maintains a separate release channel per Kubernetes version, and this is clarified here in this matrix. So, for example, for Kubernetes versions running 1.20, the cloud provider bits to run are on the 0.7 release channel; for 1.21, cloud provider went 1.0, and so the bits to run are 1.0; for 1.22 it's 1.1; for 1.23 it's 1.23. So that's kind of crazy confusing, and it's actually really good that this leap has happened, because this will be more predictable going forward.
B: It's going to be a little bit less confusing. But even with the fact that these have been rationalized, where we have a sort of corresponding major.minor between Kubernetes and cloud provider, it's still challenging to pick a known working version of the cloud-provider-azure bits generally, because it's highly dependent on the version of Kubernetes you're running. So Helm allows us to deal with that.
B: When you define a Helm chart, you can introspect at runtime the version of Kubernetes that the Helm chart is being installed upon, and based on that introspection you can choose the right cloud provider version to run on your cluster. So it allows for really convenient gestures for users: you do a helm install, refer to the chart, and it's going to automatically pick the right bits for you.
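A chart can do this with Helm's built-in `.Capabilities` object. The template fragment below is only a sketch of the idea; the `versionMap` values key, the `imageTag` fallback, and the file name are hypothetical, not the actual chart source:

```yaml
# templates/cloud-controller-manager.yaml (sketch)
{{- /* look at the minor version of the cluster the chart is being installed on */ -}}
{{- $minor := .Capabilities.KubeVersion.Minor | trimSuffix "+" }}
{{- /* pick a matching cloud-provider-azure tag, falling back to a pinned default */ -}}
{{- $tag := index .Values.versionMap $minor | default .Values.cloudControllerManager.imageTag }}
image: mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:{{ $tag }}
```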
B: So earlier I demonstrated on that diff that we have been maintaining a static version, 1.1.5. If we go over here, we can see that that correlates to 1.22. So what that means is that we don't really have any way to test cloud provider bits across the set of supported Kubernetes versions; we can only test this template, which only works on 1.22, and that's not obviously communicated.
B: Is doing this... that looks better, all right. So for folks who don't understand Tilt, I'll try to talk my way through this a little bit. I'm using this handy-dandy Tilt front end to build new clusters using the reference templates that we maintain in the CAPZ repo. So I just click the magical button to create an external-cloud-provider cluster, a new cluster running that reference template, and I'm going to see if I can follow progress here... not yet.
B: Quickly: what I'm going to do, the easiest way to demonstrate this, is to build a bunch of these on different versions of Kubernetes, to see that the Helm chart that we use will choose the right version of cloud-provider-azure for us. So I'm going to do that by building a cluster. You can see right now that my Tilt config (and again, this Tilt thing I'm using right here) is configured with this version of Kubernetes, which means that, I think, all of our reference templates, or anyway most of our reference templates, are being informed by this configuration to know which version of Kubernetes to build.
B: So when I change this and I rebuild a new cluster, it will build with a new version of Kubernetes, and then the idea is to demonstrate a common Helm gesture that will install different versions of the cloud provider bits, because it will be able to introspect at runtime what version of Kubernetes is running. All right, so that looks like it's done, so I think I'm just going to go ahead. Let me take a note of which one this is; 42 seconds, all right. So I'm just going to change this right now.
B: Yeah, sure, so this is similar to any kind of convenient IDE that has a file watcher. Basically it's watching a set of files, and when those files change it's, you know, sort of re-scaffolding its thing. So similar to that, yeah.
B: That's right. So when I updated this file and hit save, Tilt observed that the file changed and it rebuilt itself according to this new changed configuration. So the idea is that the second cluster that I build will be built with 1.23.5.
B: For posterity, from oldest to newest it's going to be 1.22, 1.23, 1.21. That doesn't make any sense, but we'll try to remember it.
B: And when this goes... updating... okay, cool. And so now I will refresh this again; that's pretty convenient! So that's a nice little demo of how cool Tilt is, so shout out to all the folks who've been maintaining Tilt for the past couple of years on CAPZ, and also the maintainers of Tilt themselves. All right, so now I can move along a little bit. We've got clusters here, so let's see if we have a kubeconfig secret for the first one, and then...
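Cluster API stores each workload cluster's kubeconfig as a Secret in the management cluster, so there are a couple of equivalent ways to grab it; the cluster name below is made up for illustration:

```bash
# preferred: let clusterctl assemble the kubeconfig for the workload cluster
clusterctl get kubeconfig external-cloud-provider-23418 > azure-cluster.kubeconfig

# equivalent: read the <cluster-name>-kubeconfig secret directly
kubectl get secret external-cloud-provider-23418-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > azure-cluster.kubeconfig
```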
A: We can subtly see a tiny bit of what's scrolling, which is entertaining. I think you shared several bits of your... either several windows or your whole desktop, not sure. Yeah, you're not...
B: Okay, 1.22: that's what we want to see. Okay, so unfortunately we have to wait a few minutes; we've got 23 minutes left, I think we're going to be okay here. All right, so let's remind ourselves what we're doing here. I am running on this branch here, which has removed all of the ClusterResourceSet definitions. So, in Tilt...
B: I was creating this column list... this column is a convenient representation of all of the cluster definitions in... it's actually in... no, it is in this folder, templates. So it strips off the prefix "cluster-template" and it strips off the suffix ".yaml" and presents them as a cluster definition called "external-cloud-provider" here. And so I have been building clusters with this spec right here that we're looking at. So that's what we're doing, but compared to main we've removed all the ClusterResourceSet stuff.
B: So what I should expect to see on this cluster is that there are no cloud provider bits running on it. Maybe someone on the call can help get, like, the most definitive confirmation of that, but once we have a controller-manager pod I can hop into it and it will...
B: Not quite there yet; kubeadm is doing its thing. Actually, this is it; it's just that there's so much semantic information there. Okay, so I'm going to look at this.
B: Because I've been doing this a lot... but this in particular is the main sort of tell, and then this is semantic suffix information that we append to the pod that gives it sort of cluster-identifying information.
B: I'm not going to even ask; I'm just going to assert that the font is too small.
B: Okay, I think we are ready here for some action. Okay, cool. So can anyone give me a hint as to the correct... there is definitely a status transition that has not happened, because the cloud provider bits haven't been installed in this cluster. What's the quickest way to, like, get that on this cluster?
B: Let's see if that's true... maybe, you know, I can introspect, yeah.
E: I wouldn't expect no result; I would expect an empty... actually, I'm not sure. Oh no, you're right, it would be empty.
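A simple way to confirm there's nothing cloud-provider-related on the freshly built cluster yet; the kubeconfig path is illustrative:

```bash
# before the helm install, no cloud-controller-manager or cloud-node-manager pods should exist
kubectl --kubeconfig ./azure-cluster.kubeconfig -n kube-system get pods \
  | grep -i cloud || echo "no cloud provider pods yet"
```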
B: Thank you, James. Okay, cool. So yeah, we're running a cluster that, from a certain point of view, is running, it's up, but it can't really do anything terribly interesting, because it doesn't have a cloud provider context at all. So at this phase, and again this is a 1.22 cluster, what we're going to do is demonstrate the helm install; not that you can't do that on your own time. Is that it?
B: Okay, so if you're not familiar with Helm, I'm going to just sort of skip through this really quickly; we can have a follow-up office hours on Helm in particular.
B: But what I'm doing here is referring to what is now the definitive Helm chart and repo for the cloud-provider-azure bits. The cloud-provider-azure project is going to maintain its own repo with this canonical URI, and this is the name of the chart that delivers all the bits. And I am setting the cluster name to this, because the Helm chart isn't aware of the cluster name, as you can imagine, so that's one bit of info that we have to pass into the Helm chart.
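As an approximation of what that invocation looks like: the repo URL and value key below follow the cloud-provider-azure documentation as I understand it, so treat the exact flags and keys as something to verify against the chart's own README; the release name, kubeconfig path, and cluster name are illustrative:

```bash
# install the cloud-provider-azure chart into the workload cluster
helm install cloud-provider-azure \
  --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo \
  cloud-provider-azure \
  --kubeconfig ./azure-cluster.kubeconfig \
  --set infra.clusterName=my-workload-cluster  # the chart can't discover the cluster name itself
```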
B: I promise I'll fix that, folks. So we have now just deployed our Helm chart; let's see what we have. All right: we've got a pending cloud controller manager, and the cloud node manager was super quick. So let's do some introspection: cloud controller manager, okay, get...
B: All right, let's go up to the top here, and we are running 1.1.11. Let's check and see if that's what we want to be doing, and that is correct; that is cool. So again, that was not passed in here; there was nothing that said "make sure to give me the 1.1.11 release". We simply installed the chart; the chart introspected the cluster, saw that it was 1.22, and the chart is implemented so that it has a dictionary that will look up the right cloud-provider-azure version to deliver for that version of Kubernetes.
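One way to double-check which version the chart resolved to is to list the images of the cloud pods it created; the kubeconfig path is illustrative and the `grep` just filters for the cloud-controller-manager and cloud-node-manager pods:

```bash
# show each kube-system pod name alongside the image it is running
kubectl --kubeconfig ./azure-cluster.kubeconfig -n kube-system get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' \
  | grep cloud-
```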
A: Yeah, that's really helpful. We do have a question in the chat wondering if we're recommending this to users or only for tests.
B: Great question. What you are seeing is the primordial process of something that will eventually be the way to do this for users, I think. At least that's the idea; that's what I'm advocating here.
L: Yeah, so just to follow up on that: I understand the benefits of using Helm, but I'm wondering about a scenario where a user would have, like, hundreds of clusters. I thought CRS would be a good fit in that case, because you can just tag a CRS and then it's going to install the cloud provider for all the clusters, but in the case of using Helm we need to, like, do it for each and every cluster.
B: That's a great question. So there are discussions happening; there's a proposal by Fabrizio to do something like this as a kind of native Cluster API add-on, and if that were to have sort of first-class Helm support, then we could probably lift and shift this as-is into that new native set of capabilities, to allow, like, a one-click solution, because you're correct that this does break that for actual user flows.
B: But I would like to point out that with that convenience comes something that's very dangerous, which is that the ClusterResourceSet is a sort of fire-and-forget Kubernetes resource applier: it's not going to be able to maintain that over time, and so every time they upgrade their cluster, or edit the version of Kubernetes, or make some other change, they'll have to redo it.
E: Yeah, I basically said that I was going to bring up the same proposal, but I think this is more like the crawl, and then the proposal would get us to the run. This is more of an intermediary step, because once we get users relying on CRS there's no way out; it's kind of a dead end, and there's not a good long-term support solution.
E: It's going to be deprecated, so if we get all these users relying on it for cloud provider, because it's the only way to get the external cloud provider, I think we're putting our users in a bad place. But if we get them to a place where they have to do a bit more manual steps for now, but at least there's a solution that's going to be completely portable to this new add-on manager based on Helm, I think it would be a better transition story. What do you think, Cheyenne?
L: Yeah, so I'm kind of conflicted on this, Jack, mainly because we are now adding one more dependency for the user, but I also understand that at the point where a user is set on using CRS, it becomes difficult over time.
L: The add-ons proposal is a step in the right direction.
A: Okay, great. Ashitosh, did you have something you wanted to add there?
D: Just a quick question. I had one more question, but it's already been asked. So, the add-on manager proposal, maybe I'll have to give it a read, but just to understand at a high level: are we saying that Helm would be the only way, if at all we proceed with this approach? Like, you know, could somebody use a different package manager or something like that, that manages the lifecycle of, you know, cloud provider or CSI stuff? And then, so...
E: Yeah, absolutely, no, it wouldn't be the only way. I mean, you should take a look at the proposal, but basically the proposal proposes that, instead of having built-in Cluster API add-on management and reinventing what's already out there in package managers, we just allow for integration with external package or add-on manager tools. And so Helm is just used as a target for the design, and as an example, and I think it's probably the way that we would go for the first prototype, but it could be...
B: Cool. So the final thing I wanted to show on this cluster is that the Helm chart has delivered us a solution for Windows as well. So once we add a Windows pool or machine to this cluster, we'll be cooking with gas. This is actually... I feel like this should be demoed across the ecosystem of Kubernetes more often: having a daemon set that's, like, ready to go once a particular node selection is triggered.
B: ...is the right way to do this, and so I'm really excited that this has Windows support baked in. So basically, when you install your cloud provider Helm chart, it doesn't care about Linux or Windows; it primes the cluster for both. Okay, cool. So the last thing I will do, let me get away from this, is go through my quick demo, now that I'm confident this is probably going to work, actually. All right, so from oldest... so this is the one I've looked at.
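The mechanism behind that Windows readiness is just an OS-scoped node selector on the node-manager DaemonSet. A stripped-down sketch of the relevant shape, with illustrative names and an illustrative image tag rather than the chart's literal output, looks like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-node-manager-windows        # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-node-manager-windows
  template:
    metadata:
      labels:
        k8s-app: cloud-node-manager-windows
    spec:
      nodeSelector:
        kubernetes.io/os: windows          # pods are scheduled only once Windows nodes exist
      containers:
        - name: cloud-node-manager
          image: mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.1.11  # tag illustrative
```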
B: That... and sorry for this message; I guess the sticky bits on my kube directory... oh no, I don't get the message anymore, I'm confused. Okay, cool, 1.23.5, so let's helm install.
A: A question... okay, one question in the chat about whether we're able to run with no replicas, and this person was pointing out that we might have been talking about doing something similar for AAD Pod Identity.
B: So the question is: is it, like, operationally normative to run a daemon set with no replicas? The answer is yes. That might be an opinion, but that's my answer; it's my opinion.
B: The conditions in which a daemon set would run with zero replicas are, essentially, if the node selector configuration yields no nodes on the cluster. The reason I think that's operationally normative is because the presence of certain types of nodes on a cluster is not static, so over time it could change. So having a daemon set there, ready to do its thing when the proper nodes arrive, and then ready to kind of move into standby mode when those nodes are no longer there...
B: Cool. And again, before I do this, let me just validate that... actually, let me validate that this node is uninitialized. That's the right way to do this demo.
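When the kubelet runs with `--cloud-provider=external`, new nodes come up with the `node.cloudprovider.kubernetes.io/uninitialized` taint until a cloud controller manager initializes them, so one quick way to validate that state, with an illustrative kubeconfig path, is:

```bash
# show each node alongside its taint keys; the uninitialized taint should disappear
# once the azure cloud-controller-manager has processed the node
kubectl --kubeconfig ./azure-cluster.kubeconfig get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
```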
B: So that would be... it would be totally independent; there is no automatic flow there. And I know that this seems inconvenient, but maybe this is for a different discussion.
E: I think the proposal that addresses that is not that particular proposal, actually, but the other one that was brought up at the office hours yesterday, about having runtime hooks, like pre/post-upgrade hooks, so that you can say, like, "post my upgrade, do this particular operation", like "upgrade my add-on", and things like that.
B: Okay, so in the final 30 seconds I'll just mention that this demo is intentionally extremely manual and tedious, in order to sort of maximize the amount of information communicated, hopefully. But there is a PR in CAPZ right now, the one I was demonstrating...
B: ...that does this automatically. It essentially uses the Helm SDK to do this in the flow of the end-to-end, so it was a little tedious to get this working, but once we get it working, it will just work, of course, because that's how computers are. But in this optional test there's a tiny, tiny little marker that shows the difference, that only I'd probably understand: we are outputting the results of the helm install in here, and so that's the sort of signature that, during cluster create, we inject a helm install to bootstrap the external cloud provider stuff before it then runs the remaining tests.
B: So it really is, again, at the cost of some complexity, because we're now using the Helm ecosystem instead of ClusterResourceSets, which allows us to... oh wow, look at that, we're live-streaming James's comments, cool.
B: In the time left, I'm happy to answer any questions, or I'll be online for the day, so you can hit me up in chat on Slack, I should say.
A: Fantastic, thank you so much, Jack. This was very informative, and I imagine people will find you on Kubernetes Slack, on the cluster-api-azure channel, if they have additional questions. And then we will see folks in two weeks, I believe April 14th; we'll be doing the office hours again, so you can go in the doc and sign up if you want to do demos yourself, or discuss something, or add your questions. Thank you so much.