A: We'll kick things off with our regular pre-meeting reminders: we are a CNCF-sponsored project and we adhere to the CNCF code of conduct, so please be kind to one another. If you'd like to speak, we welcome all thoughts and feedback; please raise your hands, using the raise-hand feature in Zoom, first. And the first thing we're going to do is welcome any new attendees, so just generally welcome, and then more specifically.
A: Well, I don't see any raised hands, so I'll just say: welcome, if anyone listening is new. Let's go through the agenda: open proposal readouts is first, which is right here. Again, we'll just let folks raise hands; we don't have to hear from any of these, but if anyone who has some information on any of these open proposals wishes to speak now... I see Jonathan raising his hand; go ahead, Jonathan.
B: Yeah, just giving a quick update on the add-on orchestration proposal: I'm going through and addressing the last few unresolved comments and making some final edits, so hopefully we should be able to make a final review pass through it soon. And thanks to everybody who's been helping me review the doc; I know it's pretty long.
A: So, moving down to discussion topics, I'm just going to go in order here. Stefan, you're up with Kubernetes 1.25 support.
C: The bulk of our tests are now running against 1.25, and we have new upgrade tests for 1.24 to 1.25 and for 1.25 to the latest commit of 1.26, and the same for the third one. And yeah, just thanks for all the contributions, and especially a shout-out to Oscar, who did most of the work.
C: 1.25.1, yeah; I think that's it for that point. I would also just include the next one, because it's super related: there were two releases today, 1.2.2 and 1.1.6.
C: 1.2.2 has the cherry-picks to also work with 1.25, so we didn't pick all of those dependency bumps, but we cherry-picked everything else that we need to manage 1.25 clusters. Yep, that's it.
A: Okay, sounds great. One quick clarifying question, for me at least: the 1.25 and 1.19 changes, and all the other changes, those are in the 1.3 milestone, I would imagine, for CAPI?

C: Yes.
A: Okay, and then back to you, Fabrizio; you're the next discussion topic.
E: Yeah, so there is this PR where Oscar volunteered to become a reviewer for the doc in our incas API. First of all, I want to thank Oscar for volunteering and for the work done so far; Stefan just talked about the work on the release, but yeah, he is doing a lot of good stuff.
E: I'm proposing lazy consensus on the PR by end of week; there are already a couple of LGTMs.
F: Yep, I'm just sharing the link to the recording of the meeting we had on Monday. Thanks again to everyone who showed up; we had a really nice discussion, and I think it kind of solidified some ideas about what we're going to do with the cluster autoscaler, at least in the near term. So if you're curious, or you missed the session, there's the recording. Thanks again.
A: Oh no, I wasn't going to hang up on us; go ahead. A username that I won't attempt to say out loud is raising their hand.
G: Sorry, no problem; it's Fercot here. I just wanted to ask about the Go version: I just checked, and in release 1.2 we are using Go 1.17, while in main it's Go 1.19, and I was curious whether it was intentional to skip Go 1.18, or am I missing something?
C: I can answer that. Essentially, when we did the bump for 1.2, we had a choice to either switch to 1.18 or stay on 1.17.
C: We were changing everything else anyway, so everything is compiled with Go 1.18 already; the only thing that is 1.17 is the version in the go module file. The idea was to avoid forcing every consumer of Cluster API to use Go 1.18.
C: That's why we kept Go at 1.17 there, but as far as I know it otherwise doesn't make a difference. For main we picked Go 1.19, because Kubernetes itself was already enforcing 1.19, and controller-runtime as well, so we figured there's no reason to try to keep an older version there, because those dependencies should already enforce that 1.19 has to be used.
C: So if you're a consumer of Cluster API, of our dependencies or APIs, you can pick 1.18 or 1.17 with Cluster API 1.2; it's totally up to you. And a lot of providers are using 1.18, by the way, so if you're using their APIs you have to use 1.18 anyway, as far as I understand how that stuff works.
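The setup Stefan describes, building with a newer toolchain while keeping an older minimum in the module file, comes down to the `go` directive in `go.mod`. A minimal sketch (the module path is real; the fragment is otherwise illustrative):

```go
// go.mod (illustrative fragment)
module sigs.k8s.io/cluster-api

// The go directive declares the minimum Go language version required of
// consumers, not the toolchain used to build releases. Keeping it at 1.17
// means importers of the Cluster API 1.2 modules can still build with
// Go 1.17, even though the release binaries are compiled with Go 1.18.
go 1.17
```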
F: This is just kind of a question, I guess a follow-up on the last topic that Fercot brought up: should we advise providers to try to get to 1.19 in their releases? I mean, I realize we're not placing any sort of hard requirements, but is it generally good practice for providers to try and stay up to date with the main repository's Go version?
H: Yeah, at least from my experience, using CAPI as a reference so that there's one single Go version across the board is definitely good. Unless there's a very specific need to stay on a given version, the reasonable thing seems to be to align on one.
F: Okay, cool; thanks, Justine. You know, I think about Kubemark: it doesn't have a super high velocity, but it might be nice just to keep it in line with Cluster API in terms of the Go version. So I appreciate the advice here, thank you.
C: Just one additional comment: I agree with what Justine said, and I think in general it's just good to be on the newest possible Go version, because Go versions are only supported for half a year or a year, there are always optimizations, and it's just good to stay up to date with all that stuff.
A: Yeah, the only caveat I would add is not getting too far ahead, since sometimes upstream Kubernetes can be a little bit behind. But if that's the case and you want to go further, that's your chance to lean in and help upstream Kubernetes, because, generally speaking, it should be a routine update once at least one patch drops in a minor release of Go, which usually happens pretty quickly after the .0 release.
A: Yeah, I would actually just say, like the cluster autoscaler discussion: for folks interested in generics, let's spin off a set of meetings about that, because I think that is a super interesting thing. It's going to require a lot of experimentation, but will potentially have really great benefits.
A: Yeah, and again I would encourage folks who are interested in this to engage with SIG Architecture and the various sort of meta upstream SIGs, especially if you want to influence the outcome and want to see generics adopted more quickly.
A: It probably isn't Cluster API's place to implement generics faster than the upstream Kubernetes SIGs are recommending, so if you want that change, go directly to the SIG; that's sort of the authoritative point of advocacy. Go ahead, Mike.
C: I'd just mention that controller-runtime already started to use them, but yeah, as Mike said, I think that document is just there to make sure that generics are not adopted too fast, just in case they had to roll back big things, and if they have libraries that have to work across Kubernetes versions, then they have to be careful; but that doesn't seem to be an issue for us. And yeah, I wouldn't just use generics to use generics; I think that's probably the most important part about the new language feature.
A: Okay, looks like all this digression has resulted in a new discussion topic from Deepak. So, Deepak, you're up now.
D: Yeah, hi, I'm Deepak from Nutanix. I just wanted to understand: we are almost close to releasing a Nutanix infrastructure provider GA, and we have integrated a lot of our own tests in the e2e framework, as well as pulled some of the specs from the Cluster API repo and used them to run with our provider in our repo's tests.
D: So I wanted to understand, in general: is there any specific list of tests, pulled from Cluster API, which every provider should run besides their own? For example, we do run the conformance tests, and we do run node drain timeout and cluster upgrade, but there are a bunch of things which will keep getting added upstream as well, so I'm trying to understand how we measure what needs to be done with our provider.
H: Yeah, I think it varies very much across the other providers. Usually it's either you run and import pretty much all of the tests that you find in CAPI, or, like some others, you pretty much run tests for cases where you've seen flakes or issues in those specific scenarios. But in general, I think it might be good to actually define guidelines that say what a provider should run in terms of tests.
H: At least the full suite ensures that we have it basically running against real infrastructure, because that might catch some things that we might not be able to catch with CAPD, for example.
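The reuse pattern described here, where a provider imports the shared CAPI e2e specs and runs them against its own infrastructure, can be sketched roughly as follows. This is a simplified, standard-library-only stand-in; the real specs and their input structs live in the Cluster API test framework, and all names below are illustrative:

```go
package main

import "fmt"

// QuickStartInput is an illustrative stand-in for the input structs the
// shared e2e specs take: the provider supplies its own configuration, and
// the spec body stays provider-agnostic.
type QuickStartInput struct {
	InfrastructureProvider string
	KubernetesVersion      string
}

// QuickStartSpec stands in for a shared spec that a provider repo imports
// and runs unchanged against real infrastructure.
func QuickStartSpec(input QuickStartInput) string {
	return fmt.Sprintf("quick-start against %s with Kubernetes %s",
		input.InfrastructureProvider, input.KubernetesVersion)
}

func main() {
	// A provider's e2e suite wires the shared spec to its own provider name.
	fmt.Println(QuickStartSpec(QuickStartInput{
		InfrastructureProvider: "nutanix",
		KubernetesVersion:      "v1.25.0",
	}))
}
```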
I: Yeah, I just wanted to share this video, now on YouTube, where I try to explain a little bit about Prometheus and kube-state-metrics, with an example of Cluster API and the new configuration we added to the repo; there are also some small dashboards shown there. So if anyone is interested and wants to take a look at it, or wants to get started with it, it's pretty high-level and basic, and it also explains a little bit about the Prometheus side. Yeah, maybe best to share the link. That's it.
A: Okay, let's move down to provider updates. Chris, you want to talk about CAPC?
A: I don't have this on the list, so I'll actually live-stream my chat and talk about it. In CAPC we have released 1.5.0, which is great; there's a whole bunch of stuff, and I'll add this sort of async. So 1.5.0 is live, and that's been tested against CAPI 1.2.
D: Yeah, so we have almost 21 test cases of our own, and in general that also involves the conformance tests and the others, which are all passing, and we will be announcing GA soon as well. So, just a heads-up.