A
Okay, so everyone should see my screen now. Yeah, so this is the Cluster API office hours on 21st September. We're adhering to the CNCF code of conduct, so please be nice to each other, and please use the raise-hand feature when you want to say something. If you have anything that you want to talk about, just add it to the agenda, and yeah, you can also add yourself to the attendance if you want to. If you don't have access to the document, you can join the SIG Cluster Lifecycle Google group and you get access automatically. Okay, so, first point as usual: does anyone want to talk about open proposals?
C
I am looking for more reviews. I got an LGTM from Fabrizio, so thank you for that review, and I also got another couple of approvals from another two folks. So I'm just looking for more people to look into this and see if we can move it forward, if there is interest.
A
Okay, good. So obviously we'll come back to that soon. Okay, so, coming back to the next point: if someone is new and would like to introduce themselves, please just unmute.
E
Yeah, hey everybody, I'll just quickly introduce myself. Although I think I've been on the list for a while, I think this is the first meeting I've joined; at least my colleague Deepak has been here a couple of times. I wanted to join quickly today to get opinions and some help on a particular pull request we're working on inside of Image Builder. Over there, there's a particular section we want to add for our own provider on the Packer side.
E
Unfortunately, we don't have enough community membership to set our own owners in there just yet, so I was hoping it wouldn't be an issue to maybe just use the upstream CAPI owners for that section. I figured this meeting would be the best place to raise that, hopefully run it by everybody and see if that's okay. I imagine the folks that are specified over there are probably also members here, and maybe some of them are on the call today.
A
So just to confirm: essentially you would like the Cluster API maintainers to approve your PRs, right? So let's ask if that's okay for now.
A
So one thing that definitely helps is that all the Cluster API maintainers are already owners in the Image Builder repository, so I guess we don't have any kind of global approval problem. But it's just that you want to add a new OWNERS file, and you're asking if we should add this group so that they can approve your PRs for you?
E
Yeah, exactly. I think just by default those maintainers are specified in the OWNERS file in the directory above us, right? So I think we would probably just omit the OWNERS file in our directory, so it would default to those. As long as that's okay and you guys feel that group is okay, then that's probably the best way for us to handle it.
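(For reference, a minimal sketch of the defaulting behaviour being described, assuming a hypothetical helper name and provider path; the real resolution is performed by the Kubernetes OWNERS/Prow tooling, not by code like this.)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// nearestOwnersFile walks upward from dir until it finds an OWNERS file,
// mimicking how approvals default to the closest ancestor directory's owners
// when a subtree ships no OWNERS file of its own. Illustrative only.
func nearestOwnersFile(dir string) (string, bool) {
	for {
		candidate := filepath.Join(dir, "OWNERS")
		if _, err := os.Stat(candidate); err == nil {
			return candidate, true
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached the root without finding one
			return "", false
		}
		dir = parent
	}
}

func main() {
	// Hypothetical path for a provider-specific Packer section in Image Builder.
	if path, ok := nearestOwnersFile("images/capi/packer/nutanix"); ok {
		fmt.Println("approvals resolve against:", path)
	} else {
		fmt.Println("no OWNERS file found up the tree")
	}
}
```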
B
Yeah, I think this is a good solution short term, but in general it's good to have specific owners for each provider, just because some of the files in there are very provider-specific, and it's good that if something goes wrong in those files we know who to reach out to. What I've seen before for another provider is that they also didn't have any members, and they went ahead and got org membership for the specific maintainers of this project.
E
No, thank you, I appreciate that. Yeah, that's exactly what we need. Based on what looked like the requirements that were there, that's definitely what we wanted to do. In fact, I think Deepak submitted the PR with a new alias for Nutanix maintainers, and then we found out after the fact that, unfortunately, we didn't meet the criteria for membership at the time, and we want to move forward with it as quickly as possible.
E
Ultimately we absolutely want to do that and maintain it, so any help we could get on sponsorship to get added is absolutely welcome. That's definitely what we want to do, so I appreciate the offers.
A
Perfect, yep. So we can definitely, I would say, approve this PR if you need it now, and then over the next few weeks, yeah, definitely a lot of people can sponsor you. It just sometimes takes like one to three weeks to get the org membership issue through and the PR merged.
E
Awesome, thank you. I'll reach out, maybe via Slack or something, as we need help.
A
Okay, next one. Yep, hey.
G
I am Puja, I'm a developer at AWS working on EKS (Elastic Kubernetes Service) Anywhere, and I recently led the bare metal support for EKS Anywhere. I work with Guillermo and Joey, who are also here on this call today. As mentioned, since we're working on bare metal, we wanted to resurface some talks about in-place upgrades, because that is becoming very important for anybody running Kubernetes on bare metal, especially as our customers are ramping up. I wanted to see what prior art is there; I know there was an intern that started looking at it, but I haven't seen many updates. So if somebody can enlighten us a little bit on what prior art is there and what other work has been done or is being talked about, I would certainly appreciate it.
H
Thanks. Yeah, I'm wondering if you can talk about what exactly you mean by in-place upgrades.
G
So basically, Cluster API today only allows for a rolling update strategy, which means you would deploy new nodes when trying to upgrade and then scale down the existing nodes. For bare metal that is sort of heavy-handed, because it would require spare nodes to be sitting around, pretty much as many spare nodes as, if you had external etcd, at least one for the control plane and then one for every worker node group config.
G
So, you know, for bare metal, allowing for whatever the max-unavailable value is, if sort of a drain, upgrade, rejoin-the-cluster model is adopted, that would be very, very helpful.
H
So that's actually... Both the KubeadmControlPlane controller and MachineDeployments let you achieve the sort of delete-first strategy, if that's an option, so you don't need spares. But then, of course, you don't have all of your workloads scheduled during the upgrade; there's some portion that is not running.
H
Yeah
yeah
yeah,
if
you
want
we
can
we
can
sync,
you
know,
maybe
in
the
slack
Channel.
G
Yeah, that would be wonderful. I don't want to take up too much time here, but I don't know if you want to just briefly summarize what you just said, and then we'll move on and catch up on Slack.
H
So there's an update strategy implemented both in the KubeadmControlPlane provider, which is what orchestrates control plane upgrades, and similarly there's a strategy implemented in the MachineDeployment controller. It's not the default, but with some configuration it allows you to set the max unavailable, or sorry, it allows you to delete a machine first and then create the new machine. So then you can perform upgrades on a fixed set of machines, without needing spares.
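(A minimal sketch of the delete-first configuration being described, assuming the v1beta1 Go types from sigs.k8s.io/cluster-api; the object and cluster names are placeholders. Setting maxSurge to 0 and maxUnavailable to 1 makes the MachineDeployment controller remove an old machine before creating its replacement, so a fixed pool of bare metal hosts can be rolled without spares; KubeadmControlPlane exposes the same idea via spec.rolloutStrategy.rollingUpdate.maxSurge: 0.)

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

func main() {
	// Delete-first rollout: never surge above the desired replica count and
	// allow one machine to be unavailable at a time, so the upgrade reuses
	// the existing machines instead of requiring spare hosts.
	maxSurge := intstr.FromInt(0)
	maxUnavailable := intstr.FromInt(1)

	md := clusterv1.MachineDeployment{
		ObjectMeta: metav1.ObjectMeta{Name: "bare-metal-workers"}, // placeholder name
		Spec: clusterv1.MachineDeploymentSpec{
			ClusterName: "example", // placeholder cluster
			Strategy: &clusterv1.MachineDeploymentStrategy{
				Type: clusterv1.RollingUpdateMachineDeploymentStrategyType,
				RollingUpdate: &clusterv1.MachineRollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
		},
	}

	// Print the resulting (partial) manifest for illustration.
	out, err := yaml.Marshal(md)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```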
G
But it's still a replace strategy then; that means that for bare metal, it kind of means reprovisioning a machine with an OS.
H
That's actually, no, that actually depends on your infrastructure provider. So we have these in-place upgrades without reprovisioning the machine completely. Of course, that means that from Cluster API's point of view it's a new machine, but the operating system is not completely reprovisioned, so there is some risk there.
H
You
know
that
that
something
you
know
it's
not
not
quite
as
let's
say
clean
as
as
updating
or
sorry
is
replacing
a
you
know
with
a
fresh
machine
image,
but
that's
actually
possible
and
that's
that's
in
the
responsibility
of
the
infrastructure
provider.
So
yeah
I'm
I'm
happy
to
talk
about
that.
Okay,.
G
Yeah, I would love to, so I'll hit up the Slack channel with whoever is interested, and you obviously. Perfect, thank you.
I
Hey, thanks, Stefan. Yeah, just to reinforce what Daniel just said: today you have that delete strategy available, and depending on your bare metal implementation, that might mean creating a new instance or not. And then for a more sophisticated in-place implementation, there has been a lot of interest in the community around this topic recently, and we started a doc that I think was just linked in the channel.
I
So I guess the action item, or the way to move forward with this, would be to just share feedback on the doc, so we can flesh it out, put together use cases, and, you know, start to think about the different implementations that we might eventually have in CAPI.
D
So there is an issue about adding support for more kinds of providers in clusterctl, and the background here is that we are using clusterctl basically to install all the components that go into a management cluster. We are doing this because clusterctl offers a set of capabilities that are really useful: it manages reading artifacts from repositories, and it does some manipulation on these artifacts, like changing namespaces and substituting labels on objects.
D
So
we
can
basically
delete
upgrade
later
and
it's
also
another
feature
that
has
the
capital
offers,
which
is
very
interesting
in
terms
of
a
way
to
define
a
compability,
a
compatibility
Matrix
between
the
component
that
we
are
deploying
and
by
using
the
metadata
file.
So
this
is
the
kind
of
background,
and
also
in
the
past,
we
we
already
extended
the
definition
of
providers
that
originally
was
infrastructure
provider.
But
as
of
today,
we
already
have
a
car
provider.
We
are
a
bootstrap
Contra
plane.
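(For context, a rough sketch of the compatibility matrix idea just mentioned: each provider's clusterctl metadata file maps its release series to the Cluster API contract they implement. The struct below is only an illustrative mirror of that shape, written as a minimal Go example; the real type lives in clusterctl's own API package.)

```go
package main

import "fmt"

// releaseSeries mirrors one entry of a provider's metadata.yaml: a provider
// minor release series and the Cluster API contract it implements.
type releaseSeries struct {
	Major    int
	Minor    int
	Contract string // e.g. "v1beta1"
}

// providerMetadata is an illustrative stand-in for the metadata file clusterctl
// reads when deciding whether a provider version is compatible.
type providerMetadata struct {
	ReleaseSeries []releaseSeries
}

// contractFor returns the contract implemented by a given release series,
// or false if the series is not listed in the metadata.
func (m providerMetadata) contractFor(major, minor int) (string, bool) {
	for _, rs := range m.ReleaseSeries {
		if rs.Major == major && rs.Minor == minor {
			return rs.Contract, true
		}
	}
	return "", false
}

func main() {
	md := providerMetadata{ReleaseSeries: []releaseSeries{
		{Major: 1, Minor: 4, Contract: "v1beta1"},
		{Major: 1, Minor: 5, Contract: "v1beta1"},
	}}
	if contract, ok := md.contractFor(1, 5); ok {
		fmt.Println("v1.5.x releases implement contract", contract)
	}
}
```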
D
So my question for the community is: is there agreement about continuing down this path and sort of extending the definition of provider to basically everything that provides a functionality to a Cluster API management cluster? Why am I asking this? Because if there is agreement to go down this path, it will be super easy to add support for installing, for example, the IPAM providers or the Runtime SDK extension providers, which are projects that already exist today.
D
Going down this path, we can provide a nice UX, etc., but before doing that, I think that, as a community, we have to agree that we basically extend the definition of provider. Just to make it a little bit more concrete, I have opened a work-in-progress PR where I'm proposing basically some changes to the Cluster API glossary to...
F
I think so. I guess there are two cases. The first one is for Runtime SDKs: I think it's definitely needed to have some sort of vehicle to ease the installation, especially if you want to drive adoption and make it easy,
F
You know, to actually write extensions. And the second point is for IPAM: I think it makes sense also, in the sense that we already have today providers for IPAM that live in different repos. So there's already a bunch of these, and providers cannot just deal with the combinatorial explosion, especially since, sorry, especially since we don't support flavors in the infrastructure-components YAML.
F
So you can only have one infrastructure-components file. So I actually like having a way to install optional providers; it definitely makes sense, because it avoids providers having to bundle all of the extensions at once, so that the user can actually select whichever they want.
D
That's a good one! So, as of today, we don't really have guards against this: if you think of infrastructure providers, they all deploy resources in the same API group, but there is a sort of convention that they should basically prefix their own CRDs with something that avoids collisions. So...
D
What is interesting, and could help, is that at the end one of the things that clusterctl offers is that it allows providers to define their own metadata. We can extend this metadata to also include this kind of concept: "I'm compatible with" or "interoperable with" that thing. It is not there today, but having metadata is what you need to solve this problem, so basically it enables us to solve this problem in the future, right?
F
So I think I mostly agree with Fabrizio, in the sense that dependency management, or sorry, not dependency but compatibility management between providers, is definitely a thing, but I don't think that we should lump it under this proposal. It makes sense that, if you have a management cluster sitting on vSphere, you cannot just go ahead and install CAPA, because it likely won't work. So there are some discussions, I guess, that we're going to have down the road, especially once we have multiple providers and the number of providers continues to increase. But what I think might fall under this, so for IPAM, is: given a version of an IPAM provider, you likely want to check the compatibility between CAPI and the API version, in the sense that I believe there's a resource that lives within the provider, so yeah, depending on which version of CAPI you have.
D
No, this is one interesting point, because, let me say, if we go down the path to extend the provider definition in clusterctl, again we can rely on metadata to handle this compatibility matrix between the thing that we are installing and Cluster API. Now, the only criterion that there is, is the contract, but yet, even today, the contract basically is the same as the API version and the same as the OpenAPI spec that we are using at installation time. It covers almost everything, but yeah.
F
Yeah, I mean, if you already have this, then yeah, I think as long as we're able to achieve contract checking even with the existing mechanism, then we're perfectly fine.
J
Yeah
and
I
think
you've
seen
you
kind
of
brought
up
a
case.
That
was
that
when
I
was
reading,
the
document
here
was
kind
of
occurring
to
me,
which
is
like
yeah
like
the
ipam
provider.
Right,
it's
like
it
would
be
to
me
it
would
be
like
kind
of
an
interesting
situation
or
probably
wouldn't
want
to
do
it
to
install
like
two
ipam
providers
or
whatever
right
like,
but
like
a
user,
might
make
that
mistake
and
then
cause
like
some
sort
of
problem.
J
So
it's
like,
if
you,
if
you
knew
you
had
a
component
that
was
like
okay.
Well,
there
probably
shouldn't
be
multiples
of
this
component,
and
it
covers
some
specific
area.
Then
then
yeah
like
I,
totally
agree
with
what
you're
saying
having
the
metadata.
That
can
say.
Well,
there's
only
supposed
to
be
one
of
these.
K
Don't underestimate big companies, especially telcos, for example. I'm not sure, but I think we do have multiple different IPAM providers, depending on location, data center, team, etc., that you could in theory hook up with. So I wouldn't make those hard constraints, at least, because I can see use cases where people maybe want to use multiple IPAM providers, or, just as a very basic example, if you want to transition from one to the other, you might be required to have two of them.
F
Yeah, to be clear, the "multiple" part that I'm talking about is about the providers, as in the infrastructure providers. I think for IPAM there are some cases, in the sense that some companies, or some providers, are actually carving out CIDRs and, you know, giving them to various locations, and you might actually see different solutions depending on the ownership of where the cluster gets provisioned. So I don't think that restriction on the number of providers you could install makes sense; I think we're already not restricted from an infrastructure point of view, so I don't think it makes sense to restrict it for IPAM. But yeah, the multi-provider stuff is something that we need to discuss, because today there are some combinations that actually aren't working and that we're allowing.
K
Yeah,
so
maybe
it
would
be
better
to
just
so.
One
thing
that
I
liked
from
before
sorry
is
is
basically
that
different
providers
can
state
that
they
they
are
compatible.
So
basically,
if
you
have
a
list
and
say
okay
I'm
compatible
with
this
this
and
that,
then
you
can
check
all
of
the
providers.
You
want
to
install
against
each
other
and
then
display
warnings
for
for
the
providers
that
didn't
explicitly
state
that
they
are
compatible
with
each
other
and
then
maybe.
K
So I wouldn't warn about that, but just make it a little bit easier for newcomers or for beginners not to install too few providers to get a functioning setup, because if you miss one of the essential ones, then you won't be able to deploy anything. But limiting in any way, or complaining in any way, about having too many providers isn't a good idea in my opinion; you can have multiple infrastructure providers easily.
K
There
might
be
use
cases
for
multiple
provisioning
or
or
a
control
plane
and
bootstrap
providers
Etc,
and
that's
also
one
one
good
thing
about
copy
in
general.
Is
that
it's
so
modular
and
that
you
can
just
add,
remove
swap
components
as
you
like
and
combine
them,
even
at
least
in
some
cases,
so
yeah
or
maybe
even
add
in
a
second
when
it
comes
to
combining,
maybe
a
second
list
that
you
can
add
to
your
metadata.
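(A minimal sketch of the idea floated here, with entirely hypothetical type and field names; nothing like this exists in clusterctl or the provider metadata today. Each provider could optionally list the providers it has been validated against, and the installer could emit soft warnings, rather than hard errors, for combinations that neither side declared.)

```go
package main

import "fmt"

// providerMetadata is a hypothetical extension of a provider's metadata:
// an optional list of other providers it is known to work with.
type providerMetadata struct {
	Name           string
	CompatibleWith []string
}

// compatibilityWarnings cross-checks the providers selected for installation
// and returns soft warnings (not errors) for pairs that never declared each
// other, matching the "warn, don't block" suggestion above.
func compatibilityWarnings(selected []providerMetadata) []string {
	declared := map[string]map[string]bool{}
	for _, p := range selected {
		declared[p.Name] = map[string]bool{}
		for _, c := range p.CompatibleWith {
			declared[p.Name][c] = true
		}
	}
	var warnings []string
	for i, a := range selected {
		for _, b := range selected[i+1:] {
			if !declared[a.Name][b.Name] && !declared[b.Name][a.Name] {
				warnings = append(warnings,
					fmt.Sprintf("%s and %s have not declared compatibility with each other", a.Name, b.Name))
			}
		}
	}
	return warnings
}

func main() {
	providers := []providerMetadata{
		{Name: "cluster-api", CompatibleWith: []string{"ipam-in-cluster"}},
		{Name: "ipam-in-cluster"},
		{Name: "infrastructure-vsphere"},
	}
	for _, w := range compatibilityWarnings(providers) {
		fmt.Println("warning:", w)
	}
}
```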
D
If
I
remember
well
make
sure
that
you
have
at
least
a
core
provider
a
booster
up
and
a
plane,
and-
and
so
there
are
already
some
check-
I-
don't
remember
them
on
top
of
my
mind,
but
yeah
I
think
that
if
I
got
it
right
all
the
comments,
it
seems
that
the
idea
kind
of
resonant
to
basically
treat
everything
as
a
provider
I
will
let
the
the
pr
open
so
and
wait
for
a
couple
of
days.
D
So
we
can
eventually
continue
discussion
there,
and
it
seems
also
that
an
interesting
topic
that
we
have
open
on
to
open
on
the
table
is
even
though,
to
a
standout
metadata.
In
order
to
expand
this
notion
of
compatibility
not
only
on
company,
about
compatibility
from
the
provider
to
Cluster
API,
but
also
between
providers
yeah,
we
have
to
open
the
discussion.
I
see,
I
think
that
they
could
be
two-steps
of
the
same
Journey,
but
yeah
thanks
for
the
feedback
again.