From YouTube: 20200212 - Cluster API Office Hours
A: I will do my best to call on you when I see it. We'd also ask that you please add your name to the attendee list, and if you have any agenda items, feel free to add them here, and we'll get started. One thing we like to do is say hi and welcome to new attendees. So if this is your first time and you feel like introducing yourself, I'll give you all a minute or so for that, and if you don't feel like doing it, that's cool too and I'll just say welcome.
C: Hi everyone. I'm going to show a quick demo about this issue: adding an enhancement to apply add-ons after creating clusters. This is about making it easier to apply plugins such as CSI and CNI plugins after cluster creation; right now I'm calling it post-apply. Basically, the use case we're thinking of is that users or operators may have a bunch of add-ons that they want to apply when they bring up their clusters, so if they provide those add-ons in the cluster spec, we want to apply them once the cluster is ready.
C: ...that I brought up to create the cluster. So let me show you the spec changes for this enhancement. For example, if a user has a calico add-on, currently I add them as secrets: if you have an add-on, you create a secret containing it and provide the secret's name here, and we try to apply it after the cluster is ready. For example, let me show you: this calico add-on is one of the required add-ons.
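To make the shape of that concrete, here is a minimal sketch of what such a spec addition might look like, assuming a hypothetical field that references add-on Secrets by name; all of the type and field names below are illustrative, not the actual proposal:

```go
package v1alpha3

// ClusterAddonsSpec is a hypothetical spec addition for the post-apply
// add-on enhancement discussed in the demo; names are illustrative only.
type ClusterAddonsSpec struct {
	// PostApplyAddons lists Secrets (in the cluster's namespace) whose data
	// contains an add-on manifest, e.g. a Calico or CSI manifest, to be
	// applied once the workload cluster is ready.
	PostApplyAddons []AddonSecretReference `json:"postApplyAddons,omitempty"`
}

// AddonSecretReference names the Secret that holds the add-on manifest.
type AddonSecretReference struct {
	Name string `json:"name"`
}
```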
C: So we see that at every reconciliation the controller tries to reapply the missing or wrongly formatted secrets: one of them was missing the add-on entry and the other one was a missing add-on. But it never tries to reapply the one that was already successfully applied, which was the calico add-on, because after successfully applying that add-on I added an annotation to the cluster, which we can see here; it's the application timestamp. So in the controller we check whether there's an annotation for that add-on, and if there is, we don't apply it. If there's no annotation, we keep trying to apply the add-on. That's the demo section. There are some design considerations; for example, there's discussion about whether the add-on type should be a Secret or a ConfigMap, and what the key name of the data we want to apply should be. Currently it's addon.yaml, and right now I'm only applying add-ons once.
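A minimal sketch of the apply-once logic described in the demo, assuming a hypothetical annotation key prefix and helper names (the key used in the actual demo may differ):

```go
package addons

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// Hypothetical annotation prefix; the demo records one annotation per add-on.
const appliedAnnotationPrefix = "addons.cluster.x-k8s.io/"

// shouldApply returns true only if the cluster has no annotation recording a
// successful application of this add-on, so missing or failed add-ons keep
// being retried while already-applied ones are skipped.
func shouldApply(cluster *clusterv1.Cluster, addonName string) bool {
	_, applied := cluster.Annotations[appliedAnnotationPrefix+addonName]
	return !applied
}

// markApplied stores the application timestamp on the cluster so later
// reconciliations skip this add-on.
func markApplied(cluster *clusterv1.Cluster, addonName string) {
	if cluster.Annotations == nil {
		cluster.Annotations = map[string]string{}
	}
	cluster.Annotations[appliedAnnotationPrefix+addonName] = metav1.Now().Format(time.RFC3339)
}
```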
D: That makes a ton of sense. The thing this reminds me a lot of is the work we've been doing on sort of managed add-ons or cluster add-ons. There's obviously a difference, in that here we're applying to a different cluster, but I do wonder whether we could think of a model like that. Like, today we would have a Calico in the basic model.
D: We'd have a Calico CRD, which would expand to a manifest that gets applied to the same cluster, and now we could have something like a remote Calico. So I think there are a bunch of things to untangle there, like: should the add-ons be inline in the cluster object, or would it be more convenient, or more Kubernetes-like, to split them into separate objects with their own status, for example? And then there's the question of sourcing the YAML.
D: There are also other suggestions happening in that cluster add-ons group about where we can source it from: there's a suggestion from the OpenShift folks to put it in a container image as a distribution mechanism, and I think other people talked about git, and HTTPS is a common one as well. So perhaps if we were to work on that together, or make it work in the same way, then we could benefit from those synergies.
D: There is a huge difference in that it's cross-cluster, and we don't know whether you'd want to run all operators this way or all add-ons this way. I mean, we should probably scope this to bootstrapping, but it does feel like they don't necessarily have to be entirely different mechanisms. I don't know.
D: Yeah, I just recognize that it's a bootstrapping use case, and it's not that you must do the same thing, but some of the same problems will certainly occur. Like, with add-ons in the old bash add-on manager, there was a debate about how often you reapply the YAML, and it's the same argument or debate here, right? Should we apply it once?
F: ...was saying, and I know that in the original use case, I think it reads like a simple facility that gives me a mechanism but doesn't force a change to the data model. I haven't thought deeply about this particular proposal, but yeah, it makes me a little uncomfortable to see this in the cluster type.
D: I want to say that sometimes when we say "please don't do it in core", it's because we're not sure it belongs, even though it's critical functionality. I think that's not the case here. I think this is very much something we all agree we want; it's just that we wonder whether it can be better expressed in something that happens to be a separate object, and I think it's because it's so powerful that we want to keep it separate.
D: It's not that we're not committing to something, and hopefully the use case can also be served by the same code, fingers crossed.
H: Yes, I just caught up on this, so a big plus-one to feature flags and the experimental directory. I would like to see this even behind a flag in the main.go file, so that yes, the controller lives somewhere else in terms of the code base, but then we have a promotion process: if we do feel like this should be in core, then we default it on, and if not, we can pull it back out very easily. But yeah, plus-one to experimentation.
A: For example, if you look at the API change guidelines for Kubernetes, they say that you can't introduce a new API version and make it the storage version in etcd at the same time. We are currently doing that with v1alpha3, so we may need to roll that storage version change back. But anyway, Jordan has suggested February 25th, 26th, or 28th, sometime during the work day, and those are Eastern Standard Time. I will be sending out a calendar invite when we agree on a date and a time.
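For context on the storage-version point above: with kubebuilder, the version persisted to etcd is the one carrying the `+kubebuilder:storageversion` marker, and the guideline is to introduce the new served version first and only move that marker in a later release. A sketch with illustrative types, not the actual Cluster API files:

```go
// api/v1alpha2/cluster_types.go (illustrative)
package v1alpha2

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +kubebuilder:object:root=true
// +kubebuilder:storageversion
// Keeping the marker here means etcd keeps storing objects as v1alpha2; a new
// v1alpha3 type would be served alongside it, and the marker would only move
// to v1alpha3 in a following release, per the API change guidelines.
type Cluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}
```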
A: So would it make sense for me to send out a Doodle poll, and if you're interested in attending we can get some feedback on that before I just send out an invite? I see at least one or two plus-ones, so I will take that action item and send a Doodle for times. Okay, I lost my participants panel, hold on... okay, cool. Mic over to you on the autoscaler.
I: So I've been doing some work internally looking at cluster autoscaling, and this reference item that I put here came up in a discussion I was having with a colleague. I guess this is work from last year about perhaps adding some sort of integration to Cluster API so that we can take advantage of the various providers; various providers have different features for autoscaling functionality and whatnot. So I was curious to ask the group here, before I start getting really deep into this: does anybody have any background on this beyond the issues linked here? Is this something the group is still looking into? Are people already working on this and maybe it just hasn't been surfaced? I'm kind of curious about some of these things.
J: Just to give a little bit more context: we've gone through this a few times in the past, and what we keep coming back to is that right now, while we could get things working with the cluster autoscaler, for example the PR that you mentioned against the cluster autoscaler repo, the integration is kind of different from other integrations for the autoscaler, where they assume that their scaling operations are, in their sense, atomic.
J: However, with Cluster API it's a little bit different, in that they're interacting with MachineDeployment or MachineSet resources that have replicas. So the current integration is attempting to allow the cluster autoscaler to mark a specific machine for deletion and then do the scaling operation in a separate step, and eventually reconcile between the two.
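A rough sketch of the two-step flow being described, assuming the delete-machine annotation that MachineSet scale-down prioritizes and a controller-runtime client; the names here illustrate the idea rather than the exact autoscaler provider code:

```go
package autoscaling

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// scaleDownByOne sketches the two-step flow: annotate one Machine so the
// MachineSet prefers it when removing capacity, then shrink the replica
// count. The two writes are separate, which is the non-atomic behaviour
// being discussed.
func scaleDownByOne(ctx context.Context, c client.Client, machine *clusterv1.Machine, ms *clusterv1.MachineSet) error {
	if machine.Annotations == nil {
		machine.Annotations = map[string]string{}
	}
	// Assumed annotation key; MachineSet scale-down prioritizes machines
	// carrying it.
	machine.Annotations["cluster.x-k8s.io/delete-machine"] = "yes"
	if err := c.Update(ctx, machine); err != nil {
		return err
	}

	// Shrinking the MachineSet happens as a separate write.
	replicas := *ms.Spec.Replicas - 1
	ms.Spec.Replicas = &replicas
	return c.Update(ctx, ms)
}
```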
J: One of the high-level ideas there is that the logic the cluster autoscaler uses to determine which machine should be deleted is something that could potentially be exposed as a library that we could consume within Cluster API. At that point, all the autoscaler needs to do is adjust the replicas count, and the same behavior would come into play. However, that model has some issues with the current design of the cluster autoscaler.
B: We sat on a call with the autoscaler folks, and basically the agreement we came to was: yeah, they're willing to integrate that patch, as long as we document the things that are different on the Cluster API side right now versus other providers. Then we brought the output of that meeting to this group, talked about it a little, and I think the consensus we came to was: let's try to get that thing merged, even though it's against v1alpha1, and then we can all iterate on it together.
I: Yeah, I think there's definitely interest in seeing how we can leverage some of these features coming out of the providers for autoscaling, and as that relates to the autoscaling work we've done internally, we'd like to align more closely with what's going on upstream. So as I've been looking at the internal stuff, I'm looking back towards this upstream work to see.
B: Yeah, we take an upstream-first kind of approach. So we have this thing, it works, it's against v1alpha1, and we don't want to keep it to ourselves rather than contribute it to Kubernetes as a whole. Coming at it from project management, it's just: we have this thing we created, and having it upstream as well is better for us. That's pretty much it.
A: Okay. I think, if we don't have one already, we should get an issue added to Cluster API to at least say we need to figure out our cluster autoscaler plan, and we can rally there, plus whatever autoscaler issue there is. And maybe I could ask that whoever's interested in this work self-organize and try to either rally on the issue or set up a time for another Zoom chat, and see if we can get this moving forward.
I: That sounds good to me. I'm probably going to try to follow up with Michael afterwards just to get a little more info on what's going on on the autoscaler SIG side, so maybe I'll start going to some of those meetings too, just to see if we can bridge this, because it sounds like that's where the holdup is right now: getting that work included in what they're doing.
H: All right, great. The first one, given I've got five minutes to go: I just had a meeting with the controller-runtime folks today, and we're probably also doing a Zoom, so if anyone wants to join and see controller-runtime, feel free; it was posted in the kubebuilder channel upstream. That means we'll be able to get this release wrapped up probably by the end of the week. I just wanted to give a heads up to all infrastructure and bootstrap providers that this is a breaking change and you'll need to update dependents as well. Documentation will be added to the v1alpha2-to-v1alpha3 migration document, which is now also served in the book on master; you can go to the v1alpha2-to-v1alpha3 section and see all the changes. Questions? I also...
H: Yes, so the issue is actually only for the conversion webhook. The conversion webhook can only point at a single place, and the Service the API server has to call for conversion is actually specified in the CRD. Now, we use kubebuilder to generate our own config folder, and this is actually kind of the problem.
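For reference, a sketch of the piece of the CRD that ties conversion to a single Service, expressed with the apiextensions/v1 Go types; the service name, namespace, and path below are illustrative:

```go
package config

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// conversionStanza shows why a CRD can only point at one conversion webhook
// Service: the reference is a single namespace/name pair baked into the CRD
// itself, so two installations in different namespaces fight over it.
func conversionStanza() *apiextensionsv1.CustomResourceConversion {
	path := "/convert"
	return &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "capi-webhook-system", // the single place the API server calls
					Name:      "capi-webhook-service",
					Path:      &path,
				},
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
}
```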
H: This requires that the generation actually changes the namespace, but changing the namespace also changes the namespace in the CRD that the conversion webhook needs in order to work, and this will cause a lot of issues. For example, if you install a new version, which one is actually running my conversion webhooks? The issue covers parts of this, but we found a couple of different solutions to this problem.
H: One could be that we don't support multi-tenancy at all, so we only run one Cluster API component, one CAPA component, one CAPG, etc., which has other implications I would like to discuss with the community. The other solution, which is linked here, #2279, is to separate out the webhooks — all the conversion, but also defaulting and validation — into a separate namespace.
H: This required extensive changes to the config directory. A few cons here: all the infrastructure, bootstrap, and control plane providers will have to apply these changes to be compatible with our convention, and the changes are quite invasive. So while I was working on these changes I was thinking, I really don't like this solution, because it kind of diverges from what kubebuilder does, and that means we will have to maintain it.
A: I think I'll start by just trying to summarize a little more clearly why this is a problem, based on the way that we produce our artifacts. When we bundle a release for Cluster API core, and at least the way we've been bundling CAPA and some of the other infrastructure providers, we ship a single container image that runs all the controllers for that particular thing. So Cluster API has one container image and it's got all of our controllers in it.
A: The AWS provider has one image with all the AWS controllers in it, and we ship a single ball of YAML that has all the custom resource definitions that a provider needs, plus namespaces, deployments, RBAC rules, and so on. Where this becomes an issue is exactly what Vince said.
A
So
we
are
at
the
mercy
of
kubernetes
api
machinery
where
there
is
not
a
true
multi
tenant
view
of
custom
resource
definitions
themselves,
and
we
need
to
figure
out
what
we
can
do
here.
So
does
anybody
have
any
questions
on
what
we're
talking
about?
What
like
what
are
the
actual
semantics
and
how
this
is
working
or
does
anybody
need
any
clarity
on
anything.
D: I suspect it's a similar question, along the lines of Vince's ask, which is: what is the use case for running this? Is it that I would run two versions of CAPA, or that I would run two instances of CAPA at the same, probably the same, version, and certainly the same schema version, but with different credentials, for example?
A: They may want to test out a new version of CAPA in one of those namespaces — maybe it's a testing or dummy namespace. Obviously the CRDs, if there were changes, would apply to all of them, and the conversion webhooks, when there are multiple versions at play, would apply to all of them. But maybe there's a bug fix, or maybe there's a new API field that is completely optional behavior, and we want to test it out and see if it works.
A: That would be one use case for having different versions of CAPA running at the same time in the same management cluster. But I would say definitely the primary use case that we've seen is that CAPA in particular only easily supports one set of credentials per process, so we use unique namespaces and unique CAPA deployments, one for each AWS account or set of permissions — sorry, set of credentials.
G: Yeah, I just wanted to comment that we absolutely run, I want to say, 60 different instances of CAPA, but the way we do that right now is with the pivot, or the old pivot workflow: the clusters are self-managing. I mention that mostly because it neatly avoids the issue of multi-tenant CRDs, though obviously there are some trade-offs we accept in doing that. But I would suggest there's a space for perhaps a hierarchy of clusters, like a bootstrap cluster that bootstraps the testing cluster.
H: We have considered that, but I'd say it's definitely outside the scope of v1alpha3, which is two weeks away, and it's kind of a big change for providers to do today. That said, it could be added on even after v1alpha3, maybe in a point version. There is also a PR open in one of the providers that shows there is a path forward — not the best way to go about it, but the possibility is there as well.
A: You could choose not to do it if you just wanted to say, I'm done supporting v1alpha2, this is a completely brand-new installation and I'm not doing an upgrade from v1alpha2 to v1alpha3. But we're trying to solve the problem where we want to be able to support upgrading an existing installation to a newer API version.
D: Just to throw something out, the vision that occurs to me is that one approach could be to effectively get stricter about, or separate out, the API version from the code version. So we'd have a separate controller, a separate binary, that runs our webhooks and lives in a well-known namespace, whether that's kube-system or something else, and the rule would essentially be: you must run the max version. Although we don't change the API that often, so it could be a little bit relaxed.
D: I know it's not perfect, because it's a little scary if upgrading means upgrading a shared component, but I think that is sort of what's going to happen anyway when you upgrade the CRD, so maybe it's acceptable. And that could just be a packaging thing as well, right? It could be that we have a way to turn off a controller and just leave the hooks.
H: Yeah, that's exactly what it does: it separates not only the namespace but also runs another manager for the webhooks only, and there's a flag for this which enables the webhooks and turns everything else off. Apparently this also doesn't require RBAC permissions, which is kind of nice; the API server makes the calls to the Service. So yeah, that's exactly what it does, so I would urge you to take a look and provide some comments there, Daniel.
F: This sounds like the webhook issue is kind of a subset of the general versioning problem, right? If you're running multiple clusters and you're starting multiple providers, or multiple instances of a provider, and you want to move to a different version of a provider, that is the more general problem.
A: It has gotten harder, yes. It was easier in v1alpha2, when there were no conversion webhooks and there was just one API version, because as long as we said that any API changes we needed to make for v1alpha2 were optional, non-destructive, and fully compatible going forward, then you could deploy your controllers independently, rev the CRDs whenever things changed, and in theory everybody would be happy. But now we do have multiple API versions and a single place to do the conversions, and that's where this gets to be problematic.
A
We
have
a.
We
have
a
PR
for
this
one.
Yes,
so
we'll
go
ahead.
This
is
about
just
moving
where
we
do
some
defaulting.
So,
given
that
we
have
this
working
or
inflate
I
think
we
can
take
off
and
put
active
on
here.
Folks,
if
you
are
just
as
a
reminder,
if
you
are
working
on
a
pluralist
for
an
issue,
please
mark
the
issue
active,
because
that
indicates
that
someone
is
working
on
it
and
it's
helpful
for
just
keeping
track
of
things.
J: The big prerequisites that have been mostly disruptive to the other work have landed; that's mainly the etcd health check work and some refactoring changes to the reconciliation loop. Now that those have landed, I know Daniel is working on both the scale-up improvements and the scale-down functionality, and I'm currently working on rebasing the upgrade work that I had started on top of the latest changes and getting back to the point where I can implement that. So PRs for both the scale-up/scale-down and the upgrade should be appearing soon for review.

Awesome.