From YouTube: 20190607 - Cluster API - Backlog Grooming
A: Hopefully folks can see my screen. So we sent out this list to discuss a while ago, four days ago, about the overall priorities of what we're definitely going to commit to.
So the way this typically rolls is that priorities are all relative, based upon commitment, right? If you're going to commit to something, it can always get bumped up in priority, because that means you're going to try to deliver it within a given milestone. So these are the priorities as we currently see them.
A: Folks are going to see this as a commitment, and if other folks want to bump other things up or help other efforts, you know, more power to you. Like the certificate storage story: bumping that up probably would be a good thing, but it's just a matter of how many resources we have to do that work.
C: I mean, we do need to have some aspect of secure, sorry, certificates for remote node references, most likely. I think, basically, if you look at what CAPA does right now, where the certificates are not stored securely, we could make minimal changes to at least the data model for the kubeadm certs and still store them insecurely; not consider that story done, but achieve remote node references. I'm not saying we should do new work and design it poorly from a security standpoint.
A: But you know, these are all relative, right? This is kind of the way we currently see it, so we can always bump things around if folks are going to commit to them. It's a SWAG at first, and then as we iterate and work through it we'll shuffle the priorities, but this is the first-pass filter.
A: So what I'll do next is create a milestone; well, the milestone's already there, but we're going to give it a date, assign a bunch of labels to it, and give it some relative priorities. And for people who want to work on something: go ahead and raise your hand and we'll shuffle the work around.
A: Yeah, typically the way it's done is that before we go into planning we have some basic set of, more or less, features or user stories. But you can always open an issue at any time and we can always triage and address it, so it's meant to be continual, right? This isn't a one-time pass. So if something comes up and we're like, hey, that's a great idea, we should try to get this done.
A: You know, it's usually a backlog, for lack of a better term, because it's the source of truth, right? It should be what people are looking at to define things, which means we should be arguing, for some features, about whether or not they make sense. So that's the point of discussion: file at any time, and for the next couple of months I'm going to try to help moderate a little more, to try and get us into a cadence and assist in the cycle.
A: That way it's like other projects, right? I run several subprojects in the community, and the way I typically run it is that every single meeting we go through and anything that's new gets triaged, that's part of the process, and then we talk about other issues that we want to address. Okay.
C: I think 1.15 should be out in about a week, approximately. So if you add three months to that, you go from mid-June to mid-September, and then six weeks after mid-September would be essentially November 1st. So if we want to do six weeks after 1.16, that's approximately November 1st. But as Vince mentioned, I don't want to necessarily force the group to wait until six weeks after 1.16 if we have something that's ready before. Well...
A: Why don't we at least get this triaged first, see how we're doing in a couple of weeks or a month, and if we need to, we can cut a little early and then get on to a regular rhythm. But for now we can just set the arbitrary date, yeah, throw the dart at the date board and see what sticks. Sounds good.
A: Got it. Well, I've had this for a while now; I don't think we've had the group grooming in a while. You guys have been working straight through. Yeah, I've been behind; I haven't documented how to implement provider-specific clusterctl. So, and it's my opinion, take it with a grain of salt: I kinda hate the way clusterctl works.
C: So I traced through that kube-openapi reference to an issue or pull request, and it points to some other things as well. It seemed like there was a possibility that in a newer version of Kubernetes than when this issue was originally opened it might be fixed, so somebody needs to go through the reproducer flow and see if it's still an issue.
C: This one is: if you have FUBAR'd YAML in your custom resource around metadata, the shared informer will try to do a list and it can't parse all the metadata. So its hands are up in the air, and anything trying to use the informer, for MachineSets in this case, is just broken. So the controller is just, like, a hundred percent broken, and I did a little bit of digging.
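A minimal sketch of the failure mode being described, assuming a MachineSet manifest whose metadata carries a type-invalid value (the specific bad field here is illustrative):

```yaml
# Illustrative only: ObjectMeta labels must be string -> string.
# A non-string value like the one below fails decoding, and because
# the shared informer's List decodes every object of the type, one
# bad object can break every consumer of that informer's cache.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineSet
metadata:
  name: broken-machineset
  labels:
    cluster.k8s.io/cluster-name: 42   # invalid: integer, not a string
spec:
  replicas: 1
```

In the case discussed, the MachineSet controller relies on that informer, so a single malformed object leaves the controller fully broken rather than degrading for just that object.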
C: So I posted in Slack saying, hey, do we still need this? And somebody said, yeah, let's keep it open. And then, like you were saying, Michael, somebody else said, hey, we've been working on some Packet stuff. But again, there's not going to be any code that goes into this repository that has anything to do with that.
A: I have a question, though: should we open an issue? Does somebody want to open an issue that we can list, to have a jump point for people to say they've created a provider, and here's the link to it? So if they create a Packet provider, they should be able to PR the main repo, at least for that entry in the README.
A: I'll leave it open with Help Wanted on it, so that when we talk about this stuff on Wednesday we can be like, hey, there's a bunch of these extra things that we could potentially get to. I'm not going to assign the milestone to them yet, unless folks think we can leave this Help Wanted bucket open for this cycle and then just move everything at the end, I think.
C: So we would like to get rid of any RawExtension fields, which is where this issue arises. If we move to having no RawExtensions, then you can do validating webhooks and so on, along with OpenAPI schema validation. So I think, you know, we could just close this. Or if everybody is on the same page, we could keep it open and say we're going to use it to track getting rid of the RawExtensions.
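For illustration, assuming a typed field replaces the RawExtension, a CRD could then declare schema validation along these lines (a sketch against the v1beta1 CRD API; the validated field is illustrative, since a RawExtension's contents are opaque to the apiserver and cannot be validated this way):

```yaml
# Hypothetical sketch: with typed fields instead of RawExtension,
# the apiserver can enforce structure via openAPIV3Schema.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: machines.cluster.k8s.io
spec:
  group: cluster.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Machine
    plural: machines
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            providerID:
              type: string   # rejected at admission time if not a string
```

A validating webhook can then layer semantic checks on top of this structural validation, which is the combination the speaker is pointing at.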
G: Yes, for sure. So for release automation, there is a script for Cluster API Provider AWS that we use to do most of the things. The one big sticking point, like you mentioned, is waiting for the GCR, for someone to push to the GCR buckets. They're working with k8s infra right now to get the image promoter process in place.
A: Already did that one. "Support adopting a cluster from a set of existing machines": so this is the weird one. I think you could hijack this with the updated bootstrapping; you should be able to do this within your own bootstrap controller, and you'd have to hijack it on your own. But I don't know if we want to promote this idea, because we've made several statements that we don't want to adopt existing machines.
E: He said he was stepping away for a little bit. As far as I can tell, this has to do with the fact that today we don't know when a machine is actually ready or not, and we kind of have a hack in place for clusterctl, by setting an annotation (or the provider does). But at the same time, I don't see any point in just tracking this, because we have to track the machine state itself separately. So I imagine...
A: It's okay for us to close these and open new ones, especially around testing automation. So if folks see gaps in testing, I would just say open new issues. Which reminds me: Vince, can you log an issue to update our templates and make them a little bit more verbose? I have some good templates that we can reference. Yes.
A: Chuck is intimately familiar with this, where one person is the issue logger and we have the group walk through the code, and as we do that we start to open a bunch of issues along the way. Perhaps next week sometime, definitely not earlier, I'll do a full code walkthrough to identify key gaps, common things that don't have issues filed for them. Maybe we could do that the week after next.
A: Do we want to start adding sort of disruption-budget fields to things like Machines and MachineSets? Yeah, I don't know. If we do it, it'd probably be just for Machines. I don't know if you can do it there, versus having stepwise modifiers on controllers the way it's done in k/k for every other controller. Because the blast radius, especially if we eventually get to ASGs, the blast radius is super high. I guess I don't...
A
So
it's
just
a
matter
of
like
imagine,
you're
doing
an
update
or
you're
deploying
or
deleting
something
you
don't
want
to
like
just
blast
everything
you
want
to
do
it
especially
an
upgrade
is
the
most
common
scenario.
If
you
have
an
upgrade,
you
want
to
do
it
in
a
piecemeal
fashion
and
if
you
had
a
disruption
budget
that
you
could
set,
that
would
enable
you
the
ability
to
do
it
like
five
at
a
time.
So
that
way
does
if
something's
really
awkward.
It
prevents
you
from
accidentally
blasting
your
foot
off
and
we.
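As a hypothetical sketch of the "five at a time" idea (this is not an agreed API; it borrows the rolling-update knobs that Deployments use, applied to a MachineDeployment):

```yaml
# Hypothetical sketch: bound the blast radius of an upgrade by capping
# how many machines can be disrupted at once. Field shape mirrors the
# rolling-update strategy used elsewhere in Kubernetes.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: workers
spec:
  replicas: 20
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 5   # replace at most five machines at a time
      maxSurge: 1         # allow one extra machine during the roll
```

A per-Machine budget, as floated in the discussion, would be new surface area; the point of the sketch is only that a bounded step size keeps a bad upgrade from taking out the whole set at once.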
A: I think what we should do is just document it; this might just be a documentation issue, then: if you're going to have multiple machines, do them in deployments, don't do them as raw Machines. But I do know that a lot of people today still do things in weird ways, right? A person can do non-deployments and be doing this today, so where's the documentation?
C: I think the idea, like the first couple of things: there are some CAPA (AWS)-specific tasks, there's getting console logs for a machine, and maybe there's a suite of kubectl plugins that would fall under a Cluster API umbrella. So I like the idea of trying to standardize some of the functionality that people will want to use where the need spans providers.
D: Well, it kind of depends on, you know, where the pieces fall, what we end up doing with the machine controller. We obviously need a place that makes sense to do this. I think the place that makes most sense is in the machine controller as it is today, but on the machine-controller side of the actuator interface.
D: If we end up doing, like, an infrastructure provisioner controller, then, we talked the other day briefly about having that boilerplate SDK or something that somebody could fork, and this would live in that boilerplate. So it's functionality we provide, but it would probably need to exist in the actual provider implementation still.
A: Alright, so I'm going to take a long-term view of clusterctl. I will take that for my one thing I can actually work on, the one and only thing; I will not do anything other than that. Alright, I'll put on my cat-wrangler hat and try and make sure that the cats go in one kind of direction.
D: I don't know if you support, like, a single-master deployment here; that would obviously be helpful in that situation. But on the other hand, if you have three masters, that's not that helpful, because, hopefully for a good reason, somebody wanted to delete that machine, and currently that's not achievable, and it just still has that creation timestamp. So if for some reason the host gets recycled or whatever sometime later, then it gets...
E: We just need to be sure that, as we solve this, we do it in a safe way. In the case of most clusters deployed with clusterctl, you have the controller running on the initial control plane instance, and you want to prevent that one from obviously being deleted, because if you do that you lose control of the cluster. So all...