A
The address is in the chat, and if you want, feel free to add a topic to the agenda. In order to write to the agenda, you need to subscribe to the sig-cluster-lifecycle mailing list, and if you want to speak, please use the raise-hand feature. Last but not least, please remember to add your name to the attendee list, so we can keep track of everyone participating in the meeting. So, as usual when we start our meeting, first of all we welcome new participants.
A
Okay, it doesn't seem that we have new participants today. So let's move on. We don't have open proposals, if I remember well — let me check the open proposals. Okay, we currently don't have open proposals, so let's keep it going. Discussion topics: so, I have a... Stefan?
C
Feel free to speak on open proposals. I don't know if you want to bring it up, or if you already did in the previous meeting: the change, the PR that we have — I think for the KCP proposal, for the remediation. If you wanna.
A
Okay, so I have a couple of PSAs. First of all, I have moved the old meeting notes to a separate document, as we do at the beginning of every year. The document is linked here, but also, all the past meeting documents are linked at the top of the notes document. The second one: yesterday we released Cluster API v1.3.2 and v1.2.9.
B
Yeah, I just wanted to add: the release team did great work getting these releases out, and thanks also to [inaudible] and Stefan for helping with the KCP fix that was close to the release, and for getting those testgrids green so that we could cut the release in time. I also just want to add: please go check out the release notes, because one of the breaking changes added in these patch releases is that upgrades to certain Kubernetes versions are now blocked when using KCP.
B
The versions are mentioned in the release notes, so please go take a look. You can also find the linked PR there for more details and more context on why these upgrades are blocked. The idea, basically, is that for Kubernetes versions that are still using the old Kubernetes registry, upgrades to these versions are blocked, because we essentially want to encourage people to move to the newer patch versions of Kubernetes that are using the newer registry.
A
Basically, this was the point discussed by Stefan. The issue happens only if you are using the upstream registry — the default registry that Kubernetes uses — which was changed in-between minor releases in Kubernetes due to a call for action to reduce the infrastructure cost that the project was facing.
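For context, the registry move referenced here is the Kubernetes project's switch from the old `k8s.gcr.io` registry to `registry.k8s.io`. As an illustration only (not something shown in the meeting), a KubeadmControlPlane can pin the image repository explicitly; the names and version below are placeholders:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane      # placeholder name
spec:
  replicas: 3
  version: v1.25.6                    # example: a patch version published on the new registry
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate     # placeholder infrastructure template
      name: my-cluster-control-plane
  kubeadmConfigSpec:
    clusterConfiguration:
      imageRepository: registry.k8s.io   # the new upstream registry
```

Pinning `imageRepository` is one way to make sure kubeadm pulls control plane images from the new registry; check the release notes mentioned above for which versions are affected.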
A
Okay. So, if there are no more comments on the patch releases, I have one last topic that I would like to discuss with the community. I have a PR where I'm proposing a small set of amendments to the KCP proposal. The tl;dr is that in this proposal we describe how we do a remediation when a control plane node fails.
A
Currently, we support remediation only when the control plane has more than three replicas, or when the control plane has one replica and is basically upgrading — so these are two fairly limited scenarios. With this amendment I'm proposing a change that will make it possible to remediate a failure happening while we are provisioning the cluster.
A
So basically we are expanding the set of use cases that KCP remediation supports. Another thing that I figured out while designing these changes is that there could be scenarios where we do not want remediation to continuously retry. To make an example: I'm provisioning a cluster and I have an error creating the first control plane, because of quota, or because there is an error in the cloud-init or in the kubeadm command, or whatever.
A
In the current situation, what will happen is that Cluster API will continue to retry. This could generate cost if you are running Cluster API on a cloud infrastructure. So what I'm also introducing in this design change is an opt-in control of how many retries KCP remediation will do. For instance, you can define to retry five times, and then, after trying to create the same machine over five times, it fails.
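Since this was still an open proposal at the time of the meeting, the exact API was not final. A hypothetical sketch of what an opt-in retry limit on KCP could look like (field names are illustrative, taken from the discussion, not a confirmed API):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane   # placeholder name
spec:
  replicas: 3
  version: v1.26.0
  # Hypothetical opt-in remediation controls as described above;
  # the actual field names were still under review in the proposal PR.
  remediationStrategy:
    maxRetry: 5        # give up after five failed attempts for the same machine
    retryPeriod: 5m    # wait between consecutive retries
```

The intent, per the discussion, is that once the limit is exceeded KCP stops creating replacement machines instead of retrying forever and accumulating infrastructure cost.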
D
Yeah, just a quick question around the retry behavior that you just stated. Like you said, throttling, for example, is one potential issue of retrying, if I remember correctly. I think there are two different scenarios here, right? There is the one scenario where, for example, we directly get throttled by the infrastructure provider, because the API call — the synchronous one — essentially fails. In this one, I guess we don't want to change any behavior, right?
A
No. Let me say: KCP remediation relies on what the MachineHealthCheck determines to be a failure, and this is not changing. So, basically, the retry behavior we are taking care of is how many new machines we create after the first one fails.
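For reference, remediation is driven by a MachineHealthCheck, which declares when a machine counts as failed. A minimal sketch of one targeting control plane machines (names, labels, and thresholds below are illustrative, not from the meeting):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: my-cluster-control-plane-mhc   # placeholder name
spec:
  clusterName: my-cluster
  selector:
    matchLabels:
      cluster.x-k8s.io/control-plane: ""   # match the cluster's control plane machines
  nodeStartupTimeout: 10m                  # how long a node may take to join
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: 5m
    - type: Ready
      status: "False"
      timeout: 5m
```

The point made above is that the health check's definition of "failed" stays the same; the proposal only changes how many replacement machines KCP will create once a machine is marked unhealthy.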
A
I see, okay — makes sense. Yeah, but we can discuss this on the PR. And hopefully — because I understand it's kind of complicated, which is why I'm doing the PR — eventually I can do a demo as soon as I have something working.
E
Hello everyone, how are you? So this is a quick one: we are forming a new feature group to discuss alternative communication patterns — specifically, looking at alternatives to the current pattern where the management cluster initiates the connections to child clusters. That's driven by issue 6520 originally, but then some more use cases have started to fall out of that.
E
So if you're interested, please comment on the 6902 PR, and then, when we get a group of people that are interested, we can send out a Doodle to arrange a time to meet and things like that. But yeah, hit that issue and register your interest. Thanks.
A
My main point is that it is great to have small groups to tackle some issues, but please always take care to report back frequently to the main audience, so the team will be up to date and the group will not do too much work without getting feedback early.
A
Okay, thank you very much — great to see this moving. Then we have Edwin and Christian.
F
Yeah, so we opened a PR introducing a proposal to add a nameservers field to the IPAddress CRD for IPAM. The motivation is, I guess, that IPAM is moving away from using DHCP for our clusters.
F
So any feedback would be appreciated before we submit this as a pull request.
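The nameservers field was only a proposal at this point. A hypothetical sketch of what the amended IPAddress resource might look like — the `nameservers` field below is the proposed addition, not an existing API, and all names are placeholders:

```yaml
apiVersion: ipam.cluster.x-k8s.io/v1alpha1
kind: IPAddress
metadata:
  name: my-machine-ip      # placeholder name
spec:
  address: 10.0.0.10
  prefix: 24
  gateway: 10.0.0.1
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: InClusterIPPool  # placeholder pool kind
    name: my-pool
  # Proposed addition discussed here (hypothetical shape):
  nameservers:
    - 10.0.0.2
    - 10.0.0.3
```

The idea would be to let machines consuming a statically assigned address also learn DNS servers without DHCP; where the field should live was exactly the open question in the discussion that follows.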
A
Okay, in that case, in my opinion it will be better to just amend the proposal. You basically have to update the last-updated date, and usually we add a new line at the end, in the implementation history. So we keep proposals as living documents that reflect the state of the system, instead of piling up small changes.
C
I'm definitely not an expert on the network side, but just from an IP address management perspective it sounds a little bit strange to add a name server to an IP address. I mean, I think it may definitely make sense if you look at it from the angle that you want to replace DHCP.
C
What definitely makes sense is to get feedback from Jacob — maybe; I don't know what you already have. But I think it would definitely make sense to open an issue in the Cluster API repository.
C
Maybe leave a link to the proposal and to the current document, and then we can just see all the people who might have an opinion and see what they think. And maybe also — I mean, I don't know either if it fits there or not — maybe there are also other alternatives where that could be placed and would work.
A
Thank you, Stefan, this is a good suggestion. So yeah, let's maybe have a discussion on an issue, with a simple statement of the problem, and it will be easier to loop people in. And yeah, we can use the existing PR that you have as a reference, but the issue will facilitate discussion.
A
Okay, let's move on. Jonathan?
G
If you give me a second, I can also just show the changes really quickly.
G
Yeah, so this is the main change here: you can toggle the dark theme, you can change the export format between YAML and JSON, and you can also set the background refresh period, or turn it off.
G
So this is what the dark theme looks like.
G
Here should be the new images, so you can have amd64, arm, arm64, ppc and s390x.
G
And yeah, that's it for the visualizer update.
A
This looks really, really nice — I see a lot of good comments in the chat. Okay, I'll go back to sharing.
A
Yeah, awesome work. Just a reminder that if you are using the Tilt development environment to work with Cluster API providers, it is super easy to get the visualizer up — it's just a flag in your Tilt config file. And yeah.
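A sketch of what that flag might look like in the `tilt-settings.yaml` at the root of the cluster-api repo. This assumes the visualizer is enabled through the `deploy_observability` list — verify the exact flag name against the repo's Tilt documentation before relying on it:

```yaml
# tilt-settings.yaml (placeholder values throughout)
default_registry: gcr.io/my-project   # placeholder container registry
enable_providers:
  - docker
  - kubeadm-bootstrap
  - kubeadm-control-plane
deploy_observability:
  - visualizer   # assumed flag name; check the repo's Tilt docs
```

With a setting like this, `tilt up` deploys the visualizer alongside the providers so you can inspect your clusters while developing.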
H
A real quick mention that 1.7.0 of CAPZ is planned for release today. It's going to be a really, really big release — so big it's hard to summarize — so come hang out with us on the CAPZ Slack if you'd like details. And then, finally, I wanted to mention: after we cut 1.7.0, we plan to land a graduation PR which moves the managed Kubernetes AKS solution out of experimental, which we will then bake during the upcoming development cycle and ship with 1.8.0.
G
Yes, just a quick update about that. Since everyone's back for the new year, I've opened the PR to get a kubernetes-sigs repo created, to migrate my prototype into there. I've set up all the copyright stuff and all the files they asked for, and we just need to wait to get it created. I've been in touch with — I think it's Bob and Lubomir — about that, and hopefully we should be able to get that soon.
A
Okay, thank you very much. So I saw Stefan adding a topic to the agenda. Is it okay, Stefan, if we go back to this topic after finishing the agenda?
A
So let's continue: after provider updates we have the feature group updates. Just a note — we recently introduced the feature groups, which are basically working groups composed of different folks that try to address something or help Cluster API, or the entire ecosystem, make progress in some areas. So let's get the first update, from the managed Kubernetes working group, from Jake.
H
Yeah, just real quick: I wanted to paste the link to the recording of today's quick discussion. The main outcome of that was a sort of boilerplate template document that will capture any proposal that comes out of this feature group. So, as you can see — let's ship that proposal right now; it's the most uncontroversial it'll ever be.
A
Okay, that's it — thank you for the update, and also thank you for the effort in keeping track of the work of the working group in a way that everyone else can review asynchronously, according to their own time, and so on, etc.
C
Yeah, sorry, can you open this? Sure — I just want to give a quick update for interested parties. So, essentially, the work is done so that we support 1.26 in Cluster API. This means Cluster API can run on 1.26 clusters, and we can create 1.26 management clusters — sorry, workload clusters — etc. That's the first half of the issue. The second half is not fully done, but we only need the first part to be able to manage and run on 1.26 clusters.
C
What might be interesting: we merged it, I think, a few days ago or last week on main, and we merged it this week on 1.3 and on 1.2, together with corresponding test coverage. What might also be interesting: we didn't have to change anything in our implementation.
C
So, technically, we will only support 1.26 in our upcoming patch releases in February. But given that we didn't change anything, I would expect it to work already today.
C
So if providers are using one of the latest patch releases from yesterday, and you want to set up test pipelines or something for 1.26, I would expect it to just work. And of course we also have the new pipelines, which are already testing the upgrade from 1.26 to whatever 1.27 is in CI at the moment. To clarify: the last part of the issue is only bumping controller-runtime and controller-tools, but of course we will only do this on the main branch, and there are PRs open since today. Yeah, that's it.
A
Thank you very much for the update, and I also want to give kudos to [inaudible] — sorry, I won't try to pronounce the name, because I will get it wrong for sure — who did a lot of work with regards to the Kubernetes bump. So thank you very much; great, great work. Okay, so if I'm not wrong, we are at the end of our agenda.
A
Okay, great folks — let's have some time back today. Have a nice week, and see you next Wednesday.