A
If anybody wants to participate when somebody else is speaking, please raise your hand and I'll call on you in turn. So we should see the agenda, I think. First of all, we want to invite anybody who's new to this meeting, or just wants to introduce themselves to the community, to speak up, if you'd like to introduce yourself.

B
C
Hey, I'm Tiba Ernestberger, I work for Walmart. I'm a tech lead on the internal Kubernetes offering, and we're trying to modernize our platform using CAPI, so we're starting to participate in the community.

A
Okay, sorry about that. Okay, anybody else like to introduce themselves?

A
Okay, I think we don't have any open proposals, so we can skip through those unless somebody has contrary information. So, onto the agenda. I think the first thing on the agenda is Stefan, so take that away, Stefan.

D
You can just click on the first link, that's enough. I hope I didn't write more than one page, so it should be okay. Okay, so first topic: I opened an issue to propose a plan to get rid of the v1alpha3 and v1alpha4 APIs. So, motivation.

D
So we had a few releases in the past which were actually using v1alpha3 and v1alpha4 as their main API version; the last ones were 0.3 and 0.4. Both of them have been out of support since at least April 2022, so almost a year. So I think now is the time to plan how we actually get rid of v1alpha3 and v1alpha4, and the proposal would be the following.

D
So for the upcoming release, which is like in two or three months, we wouldn't really change anything apart from announcing that we're going to remove them. Then, in the next release, with 1.5, we would stop serving them. That essentially means that if your client talks to the API server, you're not able to get, create, update, etc. with the old API versions anymore, but the API server itself can still read the old versions.

D
So that means clients will 100% notice if they're still using the old versions; they have to start using the new versions. And as soon as they use a new version, the API server automatically migrates any objects still stored with the old versions in etcd. It can still do that because the versions still exist in the CRD, they're just not served by the API. And then with 1.6 we would remove them entirely: essentially every piece of code that we have, every test that we have, which is using...

D
...v1alpha3 or v1alpha4 would be just gone, with one disclaimer: we would see after 1.5 what the user feedback is, like "hey, we need more time" or whatever, and potentially delay this last step. The nice thing is that once we make the changes in 1.5, if a user still needs more time, they can simply revert the change on their side by going into the CRDs and just changing served back to true. So that's the only actual change in any of our release artifacts, and why not, yeah?

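To make that revert concrete: a minimal sketch, assuming the standard apiextensions clientset, of flipping served back to true on a deprecated CRD version after a release stops serving it. The CRD name matches Cluster API's Cluster CRD; the version choice and kubeconfig handling are illustrative.

```go
package main

import (
	"context"
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config for the management cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crds := cs.ApiextensionsV1().CustomResourceDefinitions()

	crd, err := crds.Get(context.TODO(), "clusters.cluster.x-k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for i := range crd.Spec.Versions {
		// Re-enable serving for the deprecated version only; the storage
		// version stays on v1beta1.
		if crd.Spec.Versions[i].Name == "v1alpha4" {
			crd.Spec.Versions[i].Served = true
		}
	}
	if _, err := crds.Update(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("v1alpha4 is served again")
}
```
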
D
So that's essentially the proposal, and I laid out some tasks, if we want to go with that timeline, of what we have to do at which point in time. The most important thing, I think, is that we have two prerequisites. We currently have some issues where our controllers are not... so we have a bunch of places where we are referencing our resources, like the MachineDeployment referencing the bootstrap config or the infrastructure machine, etc.

D
The expected behavior is that if one of them has a reference with either v1alpha3 or v1alpha4, our controllers automatically upgrade it to v1beta1 just by looking at the objects and rewriting them. But that isn't the case everywhere. So we have one issue to introduce that behavior in MachinePool, and we have one issue to go through all resources, look into ownerRefs, and make sure the owners are up to date.

D
Otherwise, essentially, things will start to fail as soon as we get into 1.5. The idea is that we get those issues fixed with 1.4, so that everyone had one version of Cluster API running where all references were upgraded automatically, and then we can start with actually changing things. Okay, I've talked enough. Questions? Concerns?

E
Does that time frame correspond with just general Kubernetes deprecation schedules? Is this going to catch anybody off guard by happening too fast?

D
Yeah, so I think what is important to say is that v1alpha3 and v1alpha4 — or the corresponding releases which were using them — have been end of life for a year, and for at least a year or two we have already been on v1beta1. We didn't really have a policy on how to get rid of API versions in Cluster API, as far as I can tell, so there was definitely a very long time, and I think, by the way that we want to do it...

D
...it's still, I mean: with 1.4 nothing changes, and with 1.5 people can easily revert back, so they have at least six months where they don't actually have to change that much at all. And even after 1.5, depending on feedback, we would even wait longer to get rid of them. So I hope that's fine. I don't want to wait a few more years, but I don't really know what the upstream policies would say to our plan.

A
F
So we are putting a lot of, let me say, additional care and love into making sure that no one breaks. So, first of all, thank you, Stefan, for laying out such a detailed plan. We are going to advertise this in the release notes, and probably in a mail to the mailing list, and then, basically, two years after deprecation, I think that it is okay. Now is the time.

A
D
I think the best answer that we have is that clusterctl doesn't block it in any way, but we're also not really testing it, or have a policy of "hey, you can skip-level however many minor versions you want and it will still work."

D
G
D
Yep, that would definitely help. But, I mean, one thing is if they miss 1.4.0 with that; but if they pick up 1.5, they can still flip served back to true in 1.5.

D
G
D
G
Yeah, I think if we communicate clearly in the docs what's happening here, that sounds good to me. Thank you.

H
Yeah, I actually was going to ask about this line that says the API server will still be able to read and convert old API versions, but now I'm realizing it's under the 1.5 section — so starting at 1.6 that won't be true anymore, right?

H
Okay, yeah, I think I share Florence's concern, because we do say everywhere that you can upgrade. As far as I know, you can upgrade from any old API version to v1beta1, and there are no restrictions on doing it minor per minor, so you could theoretically upgrade from v1alpha4 to 1.6, and that would be valid. So I think we might want to add in some upgrade validation or something that enforces that, so at least we don't let users get into this bad place.

D
So, something essentially on the clusterctl side, right? Yeah, yeah, I think that's fair — I mean, why not. Yeah, I'll make a note. I have to take a closer look, but it sounds like a good idea.

F
H
F
D
...test all permutations — but not if you have multiple versions which have the same contract — and then, just, okay.

H
D
A
D
F
D
Yeah, definitely. It's just that if you have the test, and if our expectation is that it always stays green, then we need another plan to get rid of those API versions. That's why, I think. But yeah, let's continue on the issue, I think.

D
Yep. So, context — by the way, these are all more or less about core Cluster API, but I guess all of them have some sort of impact on providers, if providers are aligning with what we're doing. So, context: core CAPI is currently running tests for every release, starting from 0.3 until 1.3, and...

E
D
...today the currently supported ones are 1.2 and 1.3, so we have a lot of tests which are testing unsupported releases. The plan here is to get rid of some of those jobs to focus on the ones that we actually support and the ones that we actually want to maintain. Yeah, the proposed policy: essentially, we keep all jobs for, of course, all supported branches, and for an unsupported release we would keep them for one cycle more. So, concrete example.

D
If we applied the policy to what we have today, we would test 1.3 and 1.2, because they're supported; we would keep the 1.1 tests for one more cycle, just as some sort of emergency fallback mechanism if we ever have to do a patch release. For example, Kubernetes had a version that was already kind of out of support, but then they wanted to do the registry migration, so they did one last patch release. That kind of thing. So that's why I'm suggesting...

D
...let's keep it for one more cycle, but everything else we drop. So 1.1 would be kept until 1.4, and everything that is older than that we would just remove, so 1.0, 0.4, 0.3 would go away. And going forward, essentially, with each new release we drop the old tests.

A
D
Okay, so, similar — maybe you can click the link in the first line, "add support for additional Kubernetes versions". I don't know who has looked at our versions document, but essentially, if you look at it, you'll see that our support matrix is blowing up. It's just getting more and more and more and more, which is basically caused by us never really dropping support for old stuff.

D
So, looking at that matrix, I think we need a policy which allows us to, I mean, have a certain system with which we add support for new Kubernetes versions, but also get rid of support for old Kubernetes releases. As of today, I think nine Kubernetes minor versions is what main, or 1.3, is currently supporting. And, yeah, impact: essentially we have a lot of maintenance, and we need a lot of resources to run all those tests.

D
I mean, we don't run all tests on all versions, but we at least run one upgrade test per version, so we're testing one upgrade from 1.18 to 1.19, 1.19 to 1.20, etc., and we're trying to keep them all stable. So maintenance is an issue, infrastructure cost is an issue, and with that support range we have the problem that we can't really rely on new-ish Kubernetes features.

D
So if we have to support six or seven versions of the management cluster, and we want to use a new feature in Kubernetes, we have to wait like two, three or four years to be able to use it in Cluster API, so that it works on all our supported...

D
...Kubernetes versions, which is a nightmare. So the proposal here would be that when we create a new Cluster API minor release — the .0 release — at the time of this release we would support four minor versions for the management cluster and six for workload clusters.

D
A concrete example: that policy for 1.4 would mean that with 1.4 we would support management cluster versions 1.23 until 1.26, and workload cluster versions 1.21 to 1.26. And after 1.4.0 is released, when Kubernetes 1.27 comes out, we implement support for it on main and then cherry-pick it into 1.4, but not into 1.3. A bit of detail on why: I think, in general, we want to support a sort of wide range of Kubernetes versions so that users have some time to migrate, and there are always reasons not to upgrade, etc., etc.

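As a rough sketch of the support-window arithmetic implied here — the offsets are inferred from the 1.23–1.26 / 1.21–1.26 example above, not from any published policy:

```go
package main

import "fmt"

// supportedRanges sketches the proposed support windows: given the newest
// Kubernetes minor a Cluster API minor release supports at its .0 release,
// management clusters cover the last 4 minors and workload clusters the
// last 6 (offsets inferred from the example in the meeting).
func supportedRanges(newestMinor int) (mgmtOldest, workloadOldest int) {
	return newestMinor - 3, newestMinor - 5
}

func main() {
	mgmt, workload := supportedRanges(26)
	fmt.Printf("management: 1.%d-1.26, workload: 1.%d-1.26\n", mgmt, workload)
	// Output: management: 1.23-1.26, workload: 1.21-1.26
}
```
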
D
But I think we have to set some boundaries; we can't just support everything. The reason why we picked four as the initial proposal for the management cluster is that it's roughly the number of supported Kubernetes releases. If you look at kubernetes.io/releases, you will see that they currently have a list of four releases that they support.

D
Yeah, you can click on it. So, I mean, they have like three releases per year and support for a year, but it seems to come out to three or four of these that are actually still supported, and even if it's just three, having one as a buffer is maybe not too bad. So that would mean, essentially, Cluster API can run on all Kubernetes versions that are currently still supported by Kubernetes.

D
Can you go back to the issue? And then, for the workload cluster, the idea is essentially to have a bit more — whatever the definition of "a bit" is — so that the end users, who are actually running their workloads on the workload clusters, have more time to migrate. Also, it's relatively cheap for us to support more workload cluster versions, because essentially the coupling is not that tight. I mean, all our controllers are running on the management cluster, so we highly depend on whatever features we have there.

D
The things we do on the workload cluster are way less, so it's just cheaper to support more versions there. So we can do it, and I think it's where we want to loosen this. And then, about the backport thing: the reason why we want the backport is that when a new Kubernetes series comes out, we want to backport support for it to one release.

D
So the reason is, we want to make sure that users can use the new Kubernetes version with the next Cluster API patch release, so they don't have to wait, worst case, four months to use Kubernetes 1.27 — they can just do it with 1.4.1 or 1.4.2 or something. But we don't want to backport it into older releases, one, to reduce maintenance effort, since that also needs test coverage.

D
Of course it also needs work, and the further you go back in versions, the more things change, so it's more effort to backport. And I think it's also good to incentivize users to upgrade to the latest Cluster API versions, so there's some sort of effect of "hey, I want to use the latest Kubernetes stuff — okay, then I have to go to the latest Cluster API minor version". Yep, that's it.

E
D
E
G
What does it practically mean if we drop support for a workload cluster version? Like, what happens to existing clusters when the version is upgraded on the management cluster? What would be the impact on the workload clusters in this case?

D
Okay, I will start with what we actually change in Cluster API. In my opinion — I mean, it's up for discussion anyway, and I think I didn't even write it here — the first thing we do is drop the corresponding test coverage. So if we don't support 1.28 anymore, we drop the 1.28 to 1.29 upgrade test. Then we look into our controllers and audit, essentially, where we have special handling for specific Kubernetes versions.

D
So, for example, if you look into the bootstrap provider, we have code where we're saying: oh, if this cluster is running, I don't know, Kubernetes 1.15, then please render the kubeadm config file with the v1beta1 kubeadm API. We have stuff like that in a few places — it's not too much, but it's in a few places — and also some other upgrade logic and stuff like that.

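A minimal sketch of the kind of version gate being described; the cutoffs and the helper's shape are illustrative, not Cluster API's actual code:

```go
package bootstrap

import "github.com/blang/semver/v4"

// kubeadmAPIVersion picks which kubeadm config API to render for a given
// Kubernetes version. The cutoffs here are illustrative; the real switch
// in the bootstrap provider differs in detail.
func kubeadmAPIVersion(k8sVersion semver.Version) string {
	switch {
	case k8sVersion.LT(semver.MustParse("1.15.0")):
		return "kubeadm.k8s.io/v1beta1"
	case k8sVersion.LT(semver.MustParse("1.22.0")):
		return "kubeadm.k8s.io/v1beta2"
	default:
		return "kubeadm.k8s.io/v1beta3"
	}
}
```
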
D
So, essentially, when we drop support for a version, we would get rid of that logic and the corresponding test coverage. Which then also means: if you upgrade to a new Cluster API version while you have workload cluster versions that are not supported anymore — I mean, first of all, nobody knows what happens, but I can tell you...

D
...what would happen is that new nodes are not able to join, because it would render the config for a version that is not supported anymore. But I think what we should do, and that's probably what we're aiming at, is put in some safeguards, probably everywhere, so that when you run clusterctl upgrade, or when you, I don't know, change the KCP version, all that kind of stuff, you get feedback about which versions are actually supported — so that you see it before it just breaks somewhere, right?

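A minimal sketch of what such a safeguard could look like — the function and the supported range are assumptions, not existing Cluster API code:

```go
package webhooks

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// Illustrative bounds; a real implementation would derive these from the
// release's support policy.
var (
	minSupported = semver.MustParse("1.21.0")
	maxSupported = semver.MustParse("1.26.99")
)

// validateWorkloadVersion fails fast when a requested Kubernetes version
// falls outside the supported range, instead of letting bootstrap break
// later in an opaque way.
func validateWorkloadVersion(v string) error {
	parsed, err := semver.ParseTolerant(v) // tolerates a leading "v"
	if err != nil {
		return fmt.Errorf("cannot parse Kubernetes version %q: %w", v, err)
	}
	if parsed.LT(minSupported) || parsed.GT(maxSupported) {
		return fmt.Errorf("kubernetes %s is outside the supported range [%s, %s]",
			v, minSupported, maxSupported)
	}
	return nil
}
```
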
G
Yeah, I think it's important that we keep kind of static stability for the clusters. Maybe features would break — like, I don't know, upgrading, or adding new nodes, or stuff like that — but at least I think we should not risk breaking existing clusters.

G
I mean, in the worst case we could just really pause all the reconciliation — completely stop it — if we detect this is an operation that is not supported anymore.

D
I mean, that would definitely be a good point, so that we have, ideally, some checks in clusterctl etc. which tell you before it happens. But if you already got into the situation where you're running an unsupported version, then at least nothing breaks, because we just throw errors and don't break things in a weird way. I think that's something that we can generally implement.

H
That also means they accept the risk that comes with lack of support. So I think I like the idea of reducing costs and maintenance for Cluster API as a project, and really the cost of supporting more versions is running test jobs and documentation, right? Correct me if I'm missing anything, but I think that's mostly it.

H
So what would you think about a compromise where we say we are officially testing and supporting these sets of versions, and by default we will not let you use the other, older versions, but there is some sort of opt-in — an "I know what I'm doing, let me do this" flag, or some way to specify it — where a user can say: I want to use an older version, I have an image for it, and I don't care that Cluster API is not testing it, let me use it.

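A minimal sketch of that escape hatch, assuming an annotation-based opt-in; the annotation name is hypothetical:

```go
package webhooks

// Hypothetical annotation a user could set to bypass the version
// safeguard at their own risk; not an existing Cluster API annotation.
const unsafeSkipVersionCheckAnnotation = "cluster.x-k8s.io/unsafe-skip-version-check"

// shouldEnforceVersionCheck returns false when the user has explicitly
// opted out of the safeguard on the object's annotations.
func shouldEnforceVersionCheck(annotations map[string]string) bool {
	_, optedOut := annotations[unsafeSkipVersionCheckAnnotation]
	return !optedOut
}
```
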
D
I mean, what we can definitely do is not break stuff if nothing changes. But we would definitely 100% know that if you try to bootstrap a new node with that Kubernetes version, it will simply never come up, because the bootstrap provider doesn't work anymore for that version. I think for things where we just don't know and don't test, an opt-in is fine; not for the other cases.

D
F
Yeah, so, first of all, I want to underline that the cost is not only running tests; sometimes it's also maintaining them. I'll make an example: recently Kubernetes did the move to the new registry, and this move was retroactive to all the versions, so basically we had to chase down all the tests to make them work with the new registry — a change that happened in the middle of a release. So sometimes this, let me say, brings in additional maintenance.

F
I kind of like the idea of an unsafe, "I know what I'm doing" flag. Just adding that some of these limitations are official: the registry change applied also to some end-of-life versions — they basically went back even more than one release — and it impacted us. As of today, we have all these tests which are failing due to this, and we don't want to fix them because they are running on older releases.

F
So, let me say, even for stable Kubernetes versions, the ecosystem around them changes — the CNCF test infrastructure, whatever — so a little bit of maintenance is there.

F
I like the idea of "unsafe". We just have to think a little bit more about the restrictions, because in some cases the restrictions are really provider-specific. So we know that an old version cannot work with KCP or the kubeadm bootstrap provider; we do not know whether the same limitation applies to other bootstrap providers. So yeah, we have to think a little bit about this, but if we can make the safeguards, it would be nice. We already have an issue about this.

A
I think we shouldn't allow that, in fact, if we're going to publish a document like this. I mean, I'd love to hear a justified use case for "I want to stay on Kubernetes 1.23 for the next three years, but I insist on moving forward with Cluster API at its sort of fast-moving cadence." That doesn't really make sense to me, but maybe I'm not understanding it.

I
And then the other thing I want to bring up is that the only potential issue I can think of, off the top of my head, as a result of adding this Kubernetes version validation into CAPI bootstrapping, is that we need to make sure we accommodate arbitrary builds of Kubernetes. There are lots of builds out in the ecosystem that might not have version-identifying information, so CAPI might not be able to tell which version of Kubernetes is running if it's an arbitrary build with image tags that aren't easily identifiable.

D
I think I have an answer for that one. I think I know what you mean, because there is stuff like — in certain places you can put things like "ci/whatever" — but I'm pretty sure, no, I know, that we are definitely able to parse the versions that we're getting in KCP and in the kubeadm bootstrap provider, because we literally have a switch case: if your Kubernetes version is this, use kubeadm v1beta1; if your Kubernetes version is that, use etc., et cetera.

D
And if it wouldn't match there, the machine would never come up, so...

I
D
I think our controllers already have that switch case in them, so they already rely on having a version that they can understand and say is higher or lower than some other Kubernetes version. I think with those CI versions you mean — I think they're just not put into the fields where it would become a problem; they're just stored somewhere else. I think kubeadm just stores them in some other place.

D
I know — I remember the last time we looked into this — so I think it's just not, I don't know, in the spec version field or something. I think we don't have to deal with those versions there; we just have them in other places, so I think we should be safe on that. And also, by the way, we have downstream versions with build identifiers and I don't know what, and all that works, so I think it should be okay.

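For illustration, a sketch of the tolerant parsing being described, assuming the blang/semver library; the downstream version string is made up:

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

func main() {
	// ParseTolerant accepts a leading "v"; build metadata after "+"
	// (common in downstream builds) is parsed but ignored for ordering.
	v, err := semver.ParseTolerant("v1.24.7+vmware.1")
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Major, v.Minor, v.Patch)         // 1 24 7
	fmt.Println(v.GTE(semver.MustParse("1.24.0"))) // true
}
```
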
D
J
Yeah, I just wanted to add, regarding what Cecile was proposing: if it would be possible — and maybe this is too much effort — to distinguish between unsupported versions and incompatible versions. Right? It's one thing to drop something from the tests — hence it's not supported, because we're not verifying that it works for a specific version — versus saying, hey, it's not going to work.

J
We already know it's not going to work on that version. And regarding what Jack said about a use case: I don't think that we should, you know, promote that anyone should stay on 1.23 for the next three years, but I'll give you a case. In EKS Distro we support four Kubernetes versions at a time, which, you know, would fit very well with this proposal — except the latest version that we support is not always the latest version that Kubernetes has offered. So, right now...

J
D
I just think, from my side — yeah, as probably said — with the numbers we came up with, I think we're coming from a very similar place to you.

A
D
The calculation we did was essentially: we looked at when you have to release your distribution to still be Kubernetes certified, because you don't have that much time — I think it's, let's say, roughly something between 9 and 12 months that you can be behind the Kubernetes release. So if 1.24.0 is released, nine months later you can still release it as well.

D
So if you take something like that, and then add on top of that how many versions you want to support, and then a lot of buffer, you roughly end up with the results here, and that should usually work out, yeah. But I think it definitely makes sense that everyone takes a look at that and thinks through the same things for whatever they're doing, and let's see if you need more, or, I don't know.

I
Oh sorry, I really quickly wanted to answer the question. You know, the idea that you would do your own self-testing to self-validate a version of CAPI that works with your version of Kubernetes doesn't really convince me that you have met the bar of the official CAPI testing, which has undergone the rigor of, you know, an entire development cycle running in periodic jobs and testgrid, with an entire development staff looking into those and supporting those tests.

I
So I'm just not convinced that you want to build a product, a critical production platform, that's running a version of CAPI that has never been developed against an older version of Kubernetes. So I'm plus one on: let's really not allow that if we can. It'll help our users, to the extent that CAPI is tightly coupled to Kubernetes. This is the...

F
It is healthy to keep, basically, the ball rolling, and if we take in a new thing that we support, we have to drop an old one; otherwise it becomes unsustainable, unless people step in. So it seems to me that the first two points are kind of uncontroversial — it seems that everyone is in agreement — and the one about Kubernetes versions is a little bit more debated, so I'm kind of tempted to say okay...

F
D
I think I agree with the first two. I'm just not sure if the third one is something where we can just say "hey, let's wait three weeks and then, if nobody objects...", because I think we first probably need a bit more of a plan about the safeguards and the exceptions, etc., and then, once we have that, we can say: okay, let's give it a bit more time to get a consensus on that, right?

B
D
We don't have a plan for those safeguards and that opt-out thing — you know, whatever we want to do there — but I think for the first two, that sounds good.

H
So maybe we should — I think those are really great summaries that you wrote, Stefan, thanks for doing that — maybe we should share those widely, asynchronously, for people who didn't have a chance to attend the meeting: mailing lists, everywhere. Just to give a period of discussion for people to bring up use cases we haven't thought about, or voice support for this.

D
Yeah, I think that's fair, I can send around a mail. And I think the only thing that we have a bit of a deadline for is the getting rid of v1alpha3 and v1alpha4 — but we don't really; I mean, we have to do the prerequisite fixes anyway, but we're not blocked on those, and with the release notes we have time until we have to write those for 1.4.0.

D
That's fine, so we don't need a decision in the next two weeks. And deprecating all the API types also on the Go level — I think that's also non-controversial; we just never thought about it, I guess. So yeah, I'll send a mail around with what I wrote up, and then let's see what we get back, and then we can talk next week or the week after about the feedback we got, and set some deadlines depending on the feedback, or something else.

A
F
Let me share. So, let's start with this. Okay, so this is the demo about the things that we discussed — that I discussed with some folks at KubeCon. The idea is: okay, we would like to treat clusters as cattle. We have a nice story to create a cluster easily and to upgrade it easily. What is missing is, okay...

F
...when we want to delete a cluster, what do we do with the workloads running on this cluster? How do we move them to another one? Thinking about this, I came up with a solution that, I know, is super early stage, but I would like to share it with the community, so maybe someone who is interested can help bring it to a better level of maturity. So...

F
...I have two clusters. One is called "legacy", which has a control plane and three workers, and on this cluster I have some workloads running, which are spread across the three workers, okay. Now, what I want to do is basically start decommissioning this cluster and have my workloads land on the other one. So the idea is: okay, let's see how this works, and then I will discuss a little bit how I managed to do it. And I would like to do this in a Cluster API way. So, in...

F
...what I would like to happen is that I simply scale down this cluster and things happen like I want: workloads move, exactly like when I roll out new machines. And this is what I'm doing: I'm scaling down the cluster where the workloads actually are, and, as you see, some of them get terminated, and the nice thing is that the replacements start getting scheduled where there is capacity — they start to get scheduled on the new cluster. And so, if I — just to keep it fast...

F
...what is happening is that in Cluster API, what we have is a Cluster, which is an abstraction — this is just a piece of YAML, basically an API — and this abstraction basically controls a couple of things: a piece of infrastructure (a load balancer, or whatever), and then the Kubernetes nodes and the Kubernetes cluster which is hosted on this infrastructure. Okay, in our mind we are used to thinking of this as one unique thing, because usually, when we create a cluster, we get infrastructure and the Kubernetes cluster.

F
When you create another cluster, you get a new set of infrastructure and a new Kubernetes cluster — this is what usually happens. What I did in the PoC that I just showed you is that, when I create the new cluster, I get new pieces of infrastructure, but basically I join the existing Kubernetes cluster.

F
So I have two Cluster API clusters, but one Kubernetes cluster. Once I have this, basically, when I drain a node, workloads get moved automatically — by kubectl drain, and by the scheduler — to the new cluster. So the idea here is that I only wire up the Kubernetes cluster, and then I leave all the hard work of moving workloads to the scheduler, like we do for any rollout. And, yeah, it kind of works.

F
The changes needed to make this happen are not big, but, yeah, I am well aware that there are a lot of things still to be said, especially around the networking that is required — all the networking requirements that we need to get in shape for the two clusters to speak to each other properly. And I'll stop here for questions.

I
F
Cool. How I managed it: basically, when I created the new cluster, I have an annotation on the new one that says, basically, join the old one.

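A minimal sketch of that wiring as described; the annotation name and helper are hypothetical PoC pieces, not part of Cluster API:

```go
package controllers

// Hypothetical annotation used by the PoC: a new Cluster that carries it
// joins the referenced existing Kubernetes cluster instead of
// bootstrapping its own control plane.
const joinClusterAnnotation = "poc.x-k8s.io/join-existing-cluster"

// joinTarget returns the name of the cluster whose Kubernetes control
// plane the new Cluster should join, if the annotation is set.
func joinTarget(annotations map[string]string) (string, bool) {
	target, ok := annotations[joinClusterAnnotation]
	return target, ok
}
```
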
I
Could we use, just for the purposes of conversation, a concept like a "cluster group"? So maybe, if you annotate clusters together and refer them to a single cluster group, you get opted into this behavior automatically.

F
So the goal of the story is not to have a permanent entity — the cluster group is just a temporary situation, right?

F
Right — I dispose of the old one and then I keep going. To make an example: if I want to migrate from a legacy cluster — a cluster without ClusterClass — to a cluster with ClusterClass, this could be a way to go about it. Or if, for any reason, I want to recreate a new cluster, this could be a way to go about it. But yeah, there are many details to be figured out around it.

I
Right, so now, basically, scale out — I guess scale in, really; "scale in" is overloaded. If you give a cluster a particular annotation, you essentially supercharge that cluster so that it overloads the scale-in operation. So when you scale in, those workloads are removed from those nodes, but they're no longer removed from operation entirely — they're actually migrated to another cluster.

F
I
When we tell a cluster: when you scale in, instead of just deleting nodes and deleting all the workloads, actually copy the workload specs and move them over to a different cluster — are you annotating both clusters, so there's like a...

F
So when you delete a node, the scheduler just seeks space, finds space in the new capacity, and the workload just gets scheduled there — like when you do a rollout and you delete a machine and create a new one. So it's really at the Kubernetes layer; it's super transparent. What we are doing is exactly the same thing that we do when we do a rollout.

F
A cluster with 100, and then I create the 101 — I can tell it: oh, migrate to this new one.

H
F
Yeah, yes, that's the tricky part, exactly. At the end, they are the same Kubernetes cluster, so hopefully they share the same version — that will make things, let me say, less bumpy — and there should be networking between all the machines from the old and the new clusters.

F
We have to figure out how this networking can work in different cloud providers, etc., etc. But the idea is that, in an ideal world where you can make the right network connection between the new and the old cluster, at the Kubernetes layer everything basically works. For some time you have two load balancers in front of the same cluster; at a certain point you get rid of the old one, and everything works with the new Cluster API infrastructure.

H
F
Why not? The migration happens basically at the Kubernetes layer, so it basically doesn't enter into any consideration of what is running on the cluster. It even moves persistent volumes, whatever — like when you do an upgrade. So the idea is that it's really like a rollout; the difference is that we are rolling out between machines that live in different clusters — in different Cluster API clusters. So yeah, at some point, I kind of like the simplicity of the idea: we are just using what is already there and works.

K
Okay, I do think the use case of the versions would be great to transit — if there's some way we could think about making that work, because I think it would help a lot of people jumping, you know, from older versions of Kubernetes to newer ones.

F
A
H
I'll take like five seconds. I just wanted to point out that the reviewers and maintainers PR — like, new owners — merged, which is super awesome. So congratulations to everyone who is now an owner, and thank you for driving all of this, Britsu.

E
Yeah, so I was working on this PR for emitting Kubernetes events when certain situations change, and I was just wondering — well, assuming that this PR gets merged...

E
...if I should also work on an aspect that provides guidance to the different providers for when it would be recommended for them to emit events as well, or if we should just leave that to whatever the individual providers decide is best. I wasn't sure how much guidance CAPI provides out to the individual providers for things like that.

A
E
That's for some of the kubeadm events — so some kubeadm ones are in there, and I think that we should expand on some of the kubeadm ones, since that's built more into the core CAPI side. But if we were to expand on those and standardize some of that, I would think that we would want — "want" in quotes — some of that standardized, kind of in quotes, across some of the other providers as well, and then provide guidance off of that suggestion.

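For context, a minimal sketch of emitting such an event through client-go's EventRecorder, the standard mechanism controllers use; the reason and message here are invented, not the PR's actual events:

```go
package controllers

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// emitUpgradeEvent attaches a timestamped event to the Cluster object,
// visible via `kubectl describe cluster`.
func emitUpgradeEvent(recorder record.EventRecorder, cluster *clusterv1.Cluster, from, to string) {
	recorder.Eventf(cluster, corev1.EventTypeNormal, "ControlPlaneUpgrading",
		"Upgrading control plane from %s to %s", from, to)
}
```
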
H
Yeah, I'd plus-one a recommendation. I think it's always nice when we have provider consistency, and Cluster API kind of shows the way for providers. That doesn't mean we have to force them to do it or make it part of the contract, but we could put it in the provider migration guide, which we have for each release, as a heads up: we did this for CAPI, you might want to do the same.

A
Yeah, I think we had a similar structure with some of the logging work that we've done over the last six months or a year. Okay, so we're at the top of the hour. I don't see any provider updates, but we're going to miss the managed Kubernetes group update — the agenda doc is there, which, Jack, I assume has the details.