A
Hello everyone, this is the Kubernetes SIG Cluster Lifecycle Cluster API office hours. Today is the 24th of May. A few notes before starting: this meeting abides by the CNCF Code of Conduct, so please be kind to each other. If you want to speak, you can use the raise hand feature of Zoom, which is under Reactions, and we have a meeting agenda document which I'm pasting in chat. We have a section for today's meeting, so please start adding your name to the attendee list so we can keep track of the participants, and if you have topics for discussion, feel free to add them to the agenda. With that, we can kick off today's meeting. As usual, at the beginning of the meeting we give some room to new attendees to introduce themselves, so if you're new to the meeting and want to say hi to everyone, now is the time.
C
Yeah, so basically controller-runtime 0.15 was released yesterday, for everyone in providers etc. who is about to bump. There are a lot of release notes, and there are probably a bunch of changes that we have to make, but most of them should be very straightforward.
C
I also added something to our migration notes in Cluster API, in the PR where I bump controller-runtime: basically just some "here's a pitfall or two", and I also highlighted the most important things that changed. The plan is to merge this PR very soon.
C
I already have a few LGTMs. I know of one or two people who still want to review, but I would assume that it merges maybe tomorrow, probably this week, just FYI. I'm not sure if there's anything else to mention; basically there's a bunch of information in the v0.15 migration document.
A
Maybe you want to comment on that. I think most of the changes are straightforward, but two changes may require a bit of attention. One is the change of the behavior of the fake client.
C
Maybe you can jump to that section and grep for "fake client". Basically, it's not possible anymore to create a new fake client and pass in an object which has a deletion timestamp but not a finalizer, and the one I'm talking about is that one here, yeah. So basically the fake client will just blow up if you give it an object with a deletion timestamp but no finalizer, because that's something that doesn't make sense: usually that object is just garbage collected immediately.
C
So that's one change you have to make: whenever you're passing in something which is basically deleting, you either have to pass it in with both the deletion timestamp and the finalizer, or you have to pass it in without the deletion timestamp and then delete it afterwards. But if you don't have a finalizer, the delete call, if I remember correctly, will just remove the object immediately.
C
So what you probably want is creating the object with deletion timestamp and finalizer. The other thing is about setting a deletion timestamp: that applies to the initial creation of the fake client. Once that fake client is created and exists and you do create calls, the deletion timestamp of every object that you pass in is just ignored, to match upstream behavior, yeah.
C
So that's roughly it; more details in the PR. I think there was also a small change around update and patch calls: basically, when you do update or patch calls which result in an object which has a deletion timestamp but no finalizer, the object is removed from the fake client. It's just gone afterwards.
C
Usually tests are failing pretty hard with that change. And then there was another change, which is first-class support for subresources: basically, the fake client is now able to handle a status subresource, let's call it like that, per default for all core resources.
C
It knows which resources have a status and which don't have a status, and if you have CRDs, then you can tell the fake client when you create it that, I don't know, the Cluster CRD in Cluster API has a status subresource. The consequence of that is that the update and patch calls will work accordingly.
C
So if you do an update or a patch via the normal client and you try to patch the status, that won't work; you have to use the status subresource client, as usual. You will probably also notice that, because tests will just fail. Yeah, maybe let's leave it at that.
A
Yeah. First of all, thank you very much. This PR, and all the work in preparation in controller-runtime, is really valuable work. The changes are, I think, a good improvement, and they unblock a lot of current and also future work in controller-runtime that possibly will lead us to have better multi-cluster support, and possibly also a patch helper from controller-runtime. But let me say, this is the initial work, and a lot of work.
A
I pointed out the two changes in the fake client (thank you very much, Stefan, for doing so) because, in reviewing the PR, those are the most surprising; everything else is more or less refactoring. Those changes led to some fixes in the tests and required a little bit of work. Stefan?
C
Yeah, so the good thing about this controller-runtime bump is that, I would say, for 99% of the things that you have to do, you either end up with a compiler error, a panic, or a failed unit test. There is very, very little which could be missed implicitly while your tests are still green. But I would definitely recommend reading at least once through the controller-runtime migration notes.
D
Hi, so this is a follow-up from last week's discussion about the MachineSet preflight checks. I just want to add that, based on the discussion that we had last week, we have now decided to put all the MachineSet preflight checks behind a feature gate.
D
This feature gate will be alpha and therefore off by default, which means there won't be any changes to any of the existing behavior of how MachineSets or MachineDeployments work. The users can decide if they want those additional checks by just enabling the feature flag, and even after you enable the feature flag, you can still skip the preflight checks on a per-cluster basis using the annotation, so that still stays the same.
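For reference, enabling this would look roughly like the following sketch. The gate and annotation names here are assumptions for illustration; check the Cluster API release documentation for the exact names:

```yaml
# Hypothetical sketch (names are assumptions, verify against the docs):
#
# 1. Enable the alpha feature gate on the Cluster API controller manager:
#      --feature-gates=MachineSetPreflightChecks=true
#
# 2. Even with the gate enabled, checks can still be skipped per cluster
#    via an annotation, e.g.:
#      metadata:
#        annotations:
#          machineset.cluster.x-k8s.io/skip-preflight-checks: "..."
```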
A
I just want to add that we did some research about the kubeadm preflight check that we discussed last week. One note is that we need to implement the check no matter what we change in kubeadm.
A
Even if we make the change in kubeadm, we need to implement the preflight check in Cluster API, because in Cluster API we also support all the older versions of Kubernetes and kubeadm, and so a change in kubeadm will take effect only from a certain Kubernetes version moving forward. That means that we still need the preflight check. Great, so thank you, Yuvaraj, for keeping this effort moving. Let's move on. Cecile?
E
Hello, so yeah, I just wanted to ask: I've been answering a few questions on Slack lately from users asking, is there a Helm chart for CAPI? This has come up at least three or four times, I think, in the recent two months.
E
Has anyone ever considered this? Has there been any previous discussion on it? I know there are several Helm charts out there that folks are using internally or maintaining; I found a few just Googling online.
E
Has there been any thought into having an official CAPI Helm chart to install Cluster API? I know that there's the Cluster API Operator, but that's a bit different. Just wondering: is there a good reason for not doing it, or is it just a matter of someone taking it on, doing the work, and maintaining it?
C
Just two points, I think. One thing which makes it a little bit more complicated is that we have some logic in clusterctl upgrade. If I remember correctly, we hit the last such edge cases one or two years ago, I think, but I think it's important that providers upgrade in the same order.
C
Otherwise you can end up in a situation where essentially your upgrade is deadlocked, because the providers can't start because, I don't know, conversion webhooks are not working. I think it mostly happens if you go from one API version to another, but we can search for that somewhere in PR discussions. So that was one thing which makes it a bit more tricky, and I think the other question is basically, just in general:
C
Is it intended only for core CAPI, and if not, which providers would we include, or is it an extensible thing? I think that's just something to answer in general if you want to have an official one.
E
Yeah, that's a good point about the upgrades; I think I can look a bit into that. For the provider thing, I think it would be up to each provider, just like each provider maintains their own infrastructure components. I would expect the Helm chart to be attached to the repo, so the CAPI repo would only have the CAPI core, the kubeadm bootstrap provider, the kubeadm control plane provider, and CAPD, and then other providers, if they want, can optionally maintain a Helm chart. But it's not something that CAPI would be able to do on its own.
F
Yeah, I think one of the reasons for the CAPI Operator was that whole upgrade scenario, and the CAPI Operator now does have a Helm chart. So people could deploy it via Helm, via the operator, and still have, you know, the correct upgrade paths, just as an option.
G
Yeah, I was just going to say, I mean, definitely there's kind of a split between the core management cluster CAPI stuff, which obviously the operator helps a lot with, versus the actual definitions for the providers, right? So, I mean, it feels like almost two separate things.
D
Yeah, just wanted to say that in clusterctl we also have some logic on CRD migration, when providers are upgrading CRDs and, like, dropping old versions. I wonder how that would work with Helm charts.
E
Yeah, thanks a lot. So I'm hearing mostly some concern around upgrades, the technical details of how we handle the upgrade ordering and the CRD migration. I'd like to ask the question a different way: is there any user on this call who is installing CAPI not via clusterctl? Like, does anyone have experiences to share using either the operator or their own Helm chart, or anything that you want to share about your own experience installing CAPI and upgrading it?
H
We are installing at the moment via a Helm chart that we kind of hacked together from the kustomize output, to install some flavor of stuff on top using ASO or ACK, depending on the cloud provider. Though we are concerned about the upgrade process a little bit as well, and are looking at the operator, though we're not quite sure about the state of the operator or how stable it is for actual production use from an upgrade perspective.
H
We don't have a good story for that. The plan kind of was to run through the upgrade steps in a non-production environment with the clusterctl pre-flight steps, see what it says, and then make the decisions based upon that; have it be more human before just blindly doing an upgrade. But I am all in favor of a Helm chart, just personally, because we do Helm for everything versus kustomize.
H
You know, just as things get rendered. And, to step on this point here in chat: having to maintain two separate ones would be a huge pain, and I think things would get missed.
A
There is an ask, and I see it, for a better story with regards to Cluster API and GitOps, with two dimensions, because people are installing and managing Cluster API itself with GitOps, and installing and managing clusters with GitOps. So definitely, I agree that there is the need.
A
I share the same concern from Stefan and the others, because basically clusterctl now is handling, is making sure, that upgrades work. For instance, to make a simple example: we are stopping to serve v1alpha3.
A
But, as I said, my biggest concern is that, okay, assuming that we get this, we need a team that maintains it, and not only a team, but also making sure that we have end-to-end tests over it, like we do for everything else, so we keep pretty high quality and good coverage. I think that's the bar for me.
E
Yeah, no, that sounds great. I guess one last thing from what's going on in chat right now: folks are saying that one of the main goals of the CAPI Operator is to help with GitOps, and maybe we should recommend that as the official way to deploy CAPI with Helm. Is there any reason why that wouldn't suffice, I guess, for the folks that do have Helm chart use cases? If you want to go first.
F
That's one of the reasons, and it just needs a bit more help with it. There are only a few people on it; it has been picked up more recently. It had a little bit of a low period of contributions, but people have picked it up again. It just needs a bit more help to get it a bit more production ready.
C
Yeah, so I think one option for how this could work is essentially saying: okay, in the end the clusterctl code is always the thing which deploys and upgrades Cluster API, and if I remember correctly, the operators are just using clusterctl as a library, or at least at some point it was working like that. I mean, that's one way to go, and I think the other one is essentially slightly reverse engineering
C
whatever magic clusterctl is doing for upgrades. So, I mean, one thing that we definitely know about, because clusterctl does it, is the CRD migration thing, which is something that is relevant in edge cases, but the other thing is also the ordering constraints. I don't know if there's much more; probably a bit, but I'm not sure. I mean, maybe it's not that much. So what I'm wondering, essentially:
C
Can we get to a point where it is really as simple as deploying the Cluster API providers and that sort of stuff just via a YAML file? And if you update, you just update the YAML file, and it doesn't matter in which order you do it. Because I think, if you can get to that point, you have a lot more options than today.
A
Yeah, I think that's a good suggestion; we can aim for it. It falls into making the GitOps story better. Just to give a little bit of what I remember: the only thing that clusterctl does on top is enforcing consistency of versions, so you cannot pick up versions that are not compatible with one another, and then it adds some magic on top, like, I don't know, image overrides, stuff like that, which is useful, but nothing that cannot be done with any templating engine.
A
So I think that the biggest value is version compatibility, but this is something that people can decide to take on themselves. And there is the ordering, which I guess was decided on because of problems that could happen, and the CRD migration, which is an annoying problem.
C
Okay, I think the CRD migration thing, for example, is probably not a big blocker, because, if I remember correctly, in Helm you can run certain stuff at certain points in time. So basically you could have some sort of job which just runs clusterctl as a container, and then clusterctl could have a subcommand which does the CRD migration, or something like that. So I think that's probably relatively easy if all it's really doing is the CRD migration.
C
If people want to make that part of a Helm chart deployment, the ordering is probably a bit harder, especially with the ordering constraints: you need ordering between the various Helm charts if every provider has its own Helm chart. So that's probably a bit tricky. I know it only fails in edge cases, but then it fails pretty hard.
C
So that's probably why nobody really complains about it, from all the people who are using Helm charts today.
A
What we can consider as a follow-up for this one, Cecile, is just a discussion; we can aim together at, I don't know, a document, to engage with the operator folks and then eventually write a proposal. Seems good. Okay.
F
Yeah, just a quick one: it's the openSUSE Conference starting on Friday, so if anyone's going, hey, it'd be good to meet up. There is a tutorial slash workshop on Cluster API on the Saturday, so if you know anyone that's interested in Cluster API, feel free to send them to that session. It will be streamed live as well, but it's in the evening, European time, so not so good, but yeah.
F
If anyone fancies it, that's that one; it's about an hour long as well. And the second one I just thought of is: Whitney and Viktor run this show where they do a stream called Choose Your Own Adventure, where they take a part of a path to production, and they have two or three technologies pitch
F
you know, why they think their technology is the best or their project is the best, and it's all based on the CNCF projects. They are looking for someone to represent CAPI for the cluster provisioning part of this journey in this stream. I'm not sure if I can do the timing, so if anyone would like to do it, I'm sure they would be very welcoming of anyone volunteering.
A
Thank you for this PSA. So if someone is around for the openSUSE Conference... just a question: is it online or in person?
F
It's in person, in Nuremberg, and online via Jitsi; you don't have to register, you can just attend the sessions. Most of the sessions are very Linuxy, less about containers, but there is some container stuff. Okay.
A
So if someone is around for the openSUSE Conference, reach out to Richard; otherwise you can follow the CAPI workshop online, and if someone wants to represent CAPI in the Choose Your Own Adventure online stream, please reach out to the hosts. Thank you, Richard, for the PSA.
B
Hey, just wanted a quick shout out: we released 0.9.0 earlier this week, just a bunch of small little updates to kind of help some internal folks with some different cert issues, and adding some metrics and things like that. So nothing major, but just wanted to let the community know we're still kicking.
A
Thank you, congrats, and great to see the Cluster API provider for Oracle moving forward.
A
Do you want to... I don't know if Killian is in the meeting, but...
I
Yeah, I can do so. Yeah, as of the 1.4 release, so the most recent minor release, we had a pretty flaky CI; I think everybody was kind of unhappy with it. So we've been doing these sessions, and a few people on the calls join them (I just posted it on the public channel), where we do a deep dive on one or another kind of flake, and we've got a good few fixes out of them.
I
So far we just merged another fix which we're hoping will fix one of our biggest flakes, but yeah, we're working away on that at the moment. Hopefully we'll see things get a bit greener, but we've also added some flaky tests in the meantime, so it's give and take. But yeah, please join the sessions; they tend to be kind of informal, and if you're in and around, we just post the Zoom link in the upstream channel, and, as always, any help that anybody can offer is welcome.
I
We post issues with the flaky-test tag, I think, in the CAPI repo, so you can follow along, and if anyone has any insight you'd be very welcome to help out.
A
So, first of all, thank you for all the work on chasing flakes, and second, I invite everyone who has some time and interest to follow these sessions. And maybe, Killian, we can also add the links to the sessions to the meeting agenda when they happen,
A
so people can find them and they are a little bit more visible. But they are all linked in the SIG Cluster Lifecycle YouTube playlist, and they are great sessions and a big opportunity to learn how to debug CAPI, how to run the tests locally, etc., which is, I think, a nice skill to have, because, as far as I know, many providers are using the Cluster API test framework, and so we are basically using the same things and debugging the same things. So what we see in CAPI can be applied to providers as well.