A
So hello, everyone, and happy new year. This is the Cluster API office hours meeting, and in this meeting we use an etiquette: you have to use the raise hand feature of Zoom if you want to speak. We are also following the CNCF code of conduct, so please be nice to each other. And we have a meeting agenda, which I'm sharing.
A
The remaining failures are due to a machine in CI which is not working well. We raised the problem to the on-call people from test-infra, and hopefully, when they fix the machine, everything will get back to normal. Basically, the problem is: if your job gets scheduled on the machine that is having problems, your job will fail. So you can retest, and there is a chance that the job gets scheduled on another machine. Questions? Stefan, do you want to add something?
B
I think you captured everything. It's just that one individual node is broken for some reason. Our jobs are scheduled with a high probability on that node; if we hit it, we get errors, and if not, everything is fine. So just retrigger the tests, and if it fails a few times in a row, retest a few hours later.
A
Okay, thank you. Next PSA: we are moving the code under /controllers to internal. This is part of an effort that we agreed on last year, and a bunch of PRs related to this effort have already merged, moving under internal all the experimental controllers, CAPD, KCP, and CABPK. Now we are moving the Cluster API core controllers under internal. We are trying to make this as fast as possible, in order to not cause too much annoyance to the open PRs, and this is a good period to do these changes. Questions, comments?
A
Okay, last PSA from me. Last year in October, if I remember correctly, Robert, from VMware, proposed the idea to start an effort to have a Cluster API character join Phippy and Friends in the CNCF. If we want to make this happen, and to make it happen before KubeCon this year, we basically have to get things started, and what we need is community approval on three things.
A
The look of this mascot: we kind of agree that it should be similar to the Cluster API logo with the turtles, or the CNCF plushies, which are the three turtles. And the last point is how much visibility to give it. For this point, I think the answer is kind of a given: as much visibility as possible on all the media, iterating with the community.
D
Just a question: was there some kind of document, or some place, for name ideas that was going around?
A
But you may ask: do you want to make a poll on the names, or something similar?
D
No, I was just curious; I didn't remember the name alternatives.
A
No,
the
the
there
was
not
not
such
a
document.
If
we
want
to
do
this-
and
I
asked
you
to
to
give
your
opinion,
we
need
to
get
this
done
kind
of
quickly.
So
personally,
I
find
with
the
proposed
name,
and
I
I'm
fine
with
great
lighting,
but
let's
see
if
everyone
agrees
or
not.
E
Do we have a deadline by which we should submit the name?
A
Okay, so here is my proposal: let's give a deadline for choosing the name, say until the end of this weekend, and if no objections arise in the channel, we can move forward.
D
Yeah, maybe we can just post in Slack and get some reactions and, you know, other proposals there.
D
Oh yes, thank you, sorry. So I just wanted to see if anyone had any ideas; I see someone already commented, actually. So, in CAPZ we're trying to support ClusterClass. In order to do that, we need to add AzureClusterTemplates, but it turns out AzureClusterTemplates can't share the exact same template spec as AzureClusters, because AzureClusters have some spec fields that are per-cluster specific.
D
For example, there's a resource group name, which is an Azure thing: your cluster lives in a resource group, and it has to be different per cluster. So, because we can't share the whole spec, we had two options, the first of which is to duplicate the fields and have two different CRDs.
D
But before doing that, we're trying to see if there's any other pattern that exists out there in the CRD world for sharing common types between CRDs, and we haven't really found anything so far. So right now we're going with naming it with a "Common" suffix. But I don't know if that's the best way to handle this, and I was wondering if anyone had any suggestions or ideas about this.
E
Yasin, and then Dane. Yeah, so we have an example for this in CAPV: if I recall correctly, we have the VirtualMachineCloneSpec, which is pretty much shared between VSphereMachine and the CRD that represents a VM in vSphere, so you can take a look at that. Aside from that, there aren't many of these cases that I'm aware of in the ecosystem, unfortunately.
F
I agree, this has been a large pain point. We've used different combinations. Sometimes we'll use unstructured, because we know the subset of fields that we need to inspect, and we just kind of rely on API stability to not have to change those very often. In other cases we've copied entire type definitions into packages within the projects themselves, to avoid creating inter-controller dependencies, which has been really awful.
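The unstructured-plus-known-subset approach can be mimicked with the standard library alone: decode only the field paths you depend on into a narrow local struct and let everything else be ignored. The kind and field names below are made up for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// replicaView mirrors only the subset of a foreign resource's schema that
// this controller inspects; all other fields are ignored on decode, so we
// rely on the stability of just these paths, not on the whole type.
type replicaView struct {
	Spec struct {
		Replicas *int32 `json:"replicas"`
	} `json:"spec"`
}

// replicasOf extracts spec.replicas from a raw object without importing the
// owning project's type definitions.
func replicasOf(raw []byte) (int32, error) {
	var v replicaView
	if err := json.Unmarshal(raw, &v); err != nil {
		return 0, err
	}
	if v.Spec.Replicas == nil {
		return 0, fmt.Errorf("spec.replicas not set")
	}
	return *v.Spec.Replicas, nil
}

func main() {
	raw := []byte(`{"apiVersion":"infrastructure.example/v1","kind":"FooMachinePool",
		"spec":{"replicas":3,"someOtherField":"ignored"}}`)
	n, err := replicasOf(raw)
	fmt.Println(n, err) // 3 <nil>
}
```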
F
I've thought for a long time that it really makes sense to have the types and the clients broken out into some kind of separate package, so that they can be imported by many things, but that's definitely not been the trend.
B
Just want to say what I wrote there: we had a similar case, but I guess kind of in reverse, when we talked about ClusterClass. We discussed struct names there, and those are just specific to ClusterClass; we just didn't want to name them without any kind of prefix, because then you don't know to which API they belong. We essentially just picked names which make sense, without any kind of hard pattern with a prefix or suffix or anything.
B
We also looked at the core Kubernetes types, and the only thing I could find there is that when they have some structs which are really specific to some resources, then they have something like an "Endpoint" prefix, so EndpointPort and not just Port. And I guess, if they had a Port which is generally applicable to multiple resources, they would just call it Port without any kind of prefix or suffix. But that's just one data point; I'm not aware of anything else.
G
Happy new year; just catching up on this discussion. For the naming issue, if it was a "Common" field or suffix or prefix, I would drop it, honestly. The closest example with embedding types would be the kubebuilder declarative pattern.
G
I don't know if there are any new updates there, and I think this repo has been still for a while, but pretty much what they wanted to do is to embed some versioned types into structs, so that you always know what fields will be there. Because if you just inline those fields, then you can just unmarshal them with JSON, without doing any further munging with the types.
G
This is something that we could probably adopt for our patterns: instead of using unstructured, we could just use one of these structs, where by contract you have to embed it instead of declaring the fields yourself. But that's still an API change. To your question, though, I would say that if you do have a type that has to be shared, just put it in the API version, but without any prefix or suffix; just make it explicit what it is. That's at least what I've done in the past.
G
I don't, but what I'm saying is that it doesn't matter: you could just take the kubebuilder pattern approach, or you could call it Meta, or you could just call it something else.
G
Yeah, so the closest thing that I found: you declare your parent type, it has "Common" in the name, and you have a CommonSpec and a CommonStatus.
D
That's what we're doing, yeah, but we put it as a suffix. Well, do you have a reference to the kubebuilder pattern?
G
Yeah.
A
Thanks, CeCe, for raising this. I think that, even if it is not a strong recommendation, as I understood it, it would be nice to document how we are solving this problem in our API guidelines, as a suggestion or a reference implementation.
H
Yeah, hi everyone. I just wanted to spread the word: just before the holidays we published a small app called cluster-api-state-metrics.
A
Thank you. Personally, I'm definitely plus one. I don't know if you are aware: recently we merged something in Cluster API for having Prometheus and Grafana in an easy setup in our test environment with Tilt, and it would be great if, sooner or later, we could also get some default dashboards with metrics, stuff like that, to reason about. So I'm definitely interested in seeing a demo. I don't know if there are other opinions.
G
From my side, I think the only thing I wanted to say is: happy new year, everybody. It seems like we have a lot of stuff to do this year, so it's really great to be aware of this as a group. As always, I'm really excited to prepare the roadmap.
G
We have the GitHub issues open, and if we're going to check those boxes and start to work on some of them, we should probably finalize it a little bit more. Also, I think we're wrapping up the 1.1 release this month as well.
F
Yes, I didn't get time to add it to the agenda, but I have a pull request open to add the externally managed replica count to the experimental MachinePool type, and I just kind of wanted to raise that, give it a little bump in the meeting here, and see if there were any impediments to that getting into 1.1.
G
If I remember correctly, this is a field that is not necessarily used within Cluster API, but it's used to inform other things outside, right? I guess, actually, I'll probably defer to CeCe, because you probably know more about MachinePool in general.
C
Yeah, I don't really have anything to add. I made some comments on the PR, but I think that speaks for itself, I guess.
G
Then yeah, we can just look at the PR. I personally just find it a little bit weird that we're saying there's a replicas field, and then there's this other field that says it's externally managed. And I guess, if this is a temporary thing, I would actually maybe say...
F
I agree, it's definitely different, and I mean it could probably go on the infrastructure provider resources instead.
F
But I think that may cause some confusion, right? Because if you're making the infrastructure manage the replicas from there, and someone decides to kubectl scale a MachinePool, for instance, that's going to be overwritten by the infrastructure provider, which is definitely going to be awkward. Anyway, I saw David raised his hand, so let's let David talk.
I
So I was concerned about it being temporary as well, and it also jumps out, you know, not having any kind of behavior in the controller itself. So those two things kind of led me down the same path as what Vincent said: an annotation seems like a reasonable thing. I'm just curious why not an annotation there, as opposed to a field?
F
It could be. I don't think it's going to be a temporary field. You know, when I look at things like what we've started to look at with Spotinst and some other things, there are certain use cases where the replica count will never be managed there. And there have been a few others from the community, too, that are in that boat.
F
It's also largely the current operating state of most production Kubernetes clusters today. Maybe eventually that changes, but I don't foresee a world in which all Cluster API clusters will always want to have their replica count managed via that path. That's why I think a field does make sense, because I think it would be long-lived, potentially forever.
G
The weird thing about this field, though, is that it's not really doing anything, right? It's just informing the user, or the user informing itself. Would it be better to just not have the replicas number set? I mean, even if you do scale... Actually, let me back up a little: it just feels like this should be a status field. The system should report that this pool is managed externally, and then inform Cluster API.
G
That is, this is something that's externally managed, and we should potentially block any scale requests as well, rather than having something in spec which you can always change from true to false or, you know, whatever else, while you can still change the replicas too. That's why it kind of creates confusion: we're not really blocking anything with this field.
F
The thing is, something needs to signal to the infrastructure which mode it's in, right? Am I reading this replicas count, or am I writing to it? What's my source, what's my destination? This is essentially reversing the reconciliation flow of replicas in the infrastructure provider.
D
Thanks. I was just going to say that I think the other issue with this is that there is also no way to know if the infrastructure provider that's backing it will actually support this field or do anything with it. So it could lead to a bad UX where you're setting it, expecting something to happen, but it's being totally ignored and nothing's happening.
So, to go back to what Vince was saying: I think that functionally it makes sense, but I think the UX is weird by having it in this place right now, because it's not being acted upon. I don't know what Vince was going to say, maybe it's related to that, but yeah.
G
Yeah, you actually just answered what I was about to say, which is that it does feel like something very specific to the infrastructure provider. So if there is a way for that controller to understand that it's externally managed (you mentioned it should update the replica field), maybe there is a contract here that we could write, where we have a status field that informs the MachinePool controller.
G
It says: I'm externally managed, and this is the current replica count that I see. Then the MachinePool controller will see those fields and update itself, and also, maybe, I guess, reject updates, or just make it nil and say this is managed externally, so you cannot set it in spec, but you can set it in status. So maybe we have a replicas number also in the status field of MachinePool instead, which just informs the users: this is the current number.
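The status-driven contract floated here could look roughly like the sketch below. All type and field names are hypothetical, not the actual Cluster API types; it only illustrates the direction of data flow:

```go
package main

import "fmt"

// MachinePoolSpec sketches the user-facing side: replicas is a pointer so
// that "unset" is distinguishable from an explicit count.
type MachinePoolSpec struct {
	Replicas *int32
}

// InfraPoolStatus sketches what an infrastructure provider could report
// back: the mode it operates in, plus the count it currently observes.
type InfraPoolStatus struct {
	ExternallyManaged bool
	Replicas          int32
}

type MachinePoolStatus struct {
	Replicas int32
}

// reconcileReplicas mirrors the proposed flow: when the infrastructure
// declares the pool externally managed, the MachinePool controller copies
// the observed count into its own status instead of driving it from spec.
func reconcileReplicas(spec MachinePoolSpec, infra InfraPoolStatus, status *MachinePoolStatus) {
	if infra.ExternallyManaged {
		status.Replicas = infra.Replicas // infra provider is the source of truth
		return
	}
	if spec.Replicas != nil {
		status.Replicas = *spec.Replicas // user intent in spec is the source
	}
}

func main() {
	var st MachinePoolStatus
	reconcileReplicas(MachinePoolSpec{}, InfraPoolStatus{ExternallyManaged: true, Replicas: 4}, &st)
	fmt.Println(st.Replicas) // 4
}
```

A validating webhook could additionally reject writes to spec.replicas while the pool reports itself externally managed, which is the blocking behavior discussed above.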
G
Yeah, I would probably be more in favor of that; it just seems cleaner. And then, if we do find enough commonality, we can write a contract that brings it back.
D
The other thing to think about, maybe, is: is there ever going to be other types of things that we want to delegate management of for MachinePools? Because that was one of the constructs at the beginning: with MachinePools you can delegate some stuff to the cloud providers in some cases. So I don't know if it would be worth thinking about whether there is a way that field could become more generic.
D
Or, is there something else that might be worth being externally managed, other than replicas? Maybe OS images and things like that. Just something to think about.
F
Yeah, there are things like that, especially as we start looking at, for instance, MachinePool Machines. There's already logic in CAPA, when a change needs to roll the machines, to use Amazon's instance refresh, which means the actual rollout mechanism is delegated to the cloud provider. So yes, there are definitely mechanisms in other areas that we would want to delegate externally.
F
So yeah, that's a good point. Regarding setting MachinePool replicas to nil: I think the problem we ran into there is that it would be a pretty breaking change. There are several places that end up defaulting it to one if you set it to nil, or that check if it's nil and then treat it as one, which is kind of interesting. So I mean, we could probably clean that up, but that might have some odd implications.
A
My other comment is that the other constructs, like MachineDeployment, are using a replica count equal to nil to signal that this is managed by something else. So, I know that they are different, but from a user point of view it would be really nice if we had the same pattern across them.
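The clash between the two readings of a nil replica count can be shown in a few lines; the helper names are made up:

```go
package main

import "fmt"

// defaultReplicas reproduces the defaulting pitfall mentioned above: several
// code paths treat a nil replicas as 1, so nil silently stops signaling
// anything at all.
func defaultReplicas(r *int32) int32 {
	if r == nil {
		return 1
	}
	return *r
}

// externallyManaged is the competing interpretation of the very same nil,
// the way MachineDeployment signals that someone else owns this count.
func externallyManaged(r *int32) bool { return r == nil }

func main() {
	var unset *int32
	fmt.Println(defaultReplicas(unset), externallyManaged(unset)) // 1 true: both meanings fire at once
	three := int32(3)
	fmt.Println(defaultReplicas(&three), externallyManaged(&three)) // 3 false
}
```

Reusing nil as the external-management signal therefore means hunting down every place that silently defaults it, which is the breaking-change concern raised above.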
F
I was not aware of that; that's very good to know. I'll revisit what I ran into, because that was actually my first attempt, and I think it was just simply that I had a lot of places in the code I needed to touch. But that's probably okay if it makes it consistent; I'll revisit that. Thank you.