From YouTube: k8s 1.16 - Week 8 - Release Team Meeting 20190819
Description
Release details: http://bit.ly/k8s116
A
This meeting I will be asking for a volunteer to take notes, so think about that for a second. I also want to remind everybody of our code of conduct; briefly speaking, the rule is to treat everybody with excellence and be kind to each other. Without further ado, the meeting notes should be in the calendar link, and we can begin, taking it from the top. Hi, everyone.
B
This is Eddie; I'll be taking the section for the enhancements. I'm one of the shadows for enhancements; Kendrick couldn't make it today. This should be really quick: the status is green. We have 40 enhancements slated for the 1.16 release: 19 alpha, 11 beta, and 10 stable, and all 40 of those have KEPs identified and being updated. There was also the issue that we have with sig instrumentation.
A
C
That's the nature of it. So, first off, happy Monday everyone; hope you are all having a wonderful day, or are about to have a wonderful day. So, CI signal. Just a heads up: it's gonna keep going like this. It's gonna break, it's gonna be fixed, it's gonna break, it's gonna be fixed. The only hope we have is that it stays green enough to be able to push the 1.16 release whenever the time comes. So let me cover some general things just to start off: the project board.
C
If you want to stay up to date, or if you are bored and looking for some place to contribute, please check out the project board; you'll find some really cool and interesting issues. Leading into the general announcements: throughout last week, and the week before that, we had a couple CI jobs that were failing due to timeouts. I would say timeouts are a tricky thing in Kubernetes.
C
Sometimes it can just mean that it really is a timeout: you know, things just take longer to run than they had taken in the past. Other times it can mean that something is terribly broken, and somebody who has more context on the things that are being run in a job needs to look into it. After a lot of discussion, and a lot of people looking into the different jobs,
C
we came to the conclusion that a lot of the CI blocking jobs are just really full: in some of them we have hundreds of tests running. The requirement right now is that all the jobs have to keep running; I forget the exact numbers, but I think it's something like every three hours, and most of the jobs cannot take more.
C
They cannot take longer than eight hours to run. This is also related to a previous issue about implementing release-blocking job criteria. For this past couple of weeks, the issue was mostly focused on writing the documentation: having a clear set of criteria. You can find that criteria in the sig-release repo; there's a markdown file for it.
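The timing criteria described above (jobs must run roughly every three hours and finish within a stated limit) can be sketched as a small check. This is an illustrative sketch only; the function name and default thresholds are assumptions, and the real criteria live in the sig-release repo's markdown file:

```python
# Hypothetical sketch of the release-blocking job timing criteria discussed
# above: a blocking job must run frequently enough AND finish within its time
# limit. Thresholds are illustrative, not the actual sig-release policy.

def meets_blocking_criteria(run_interval_hours: float,
                            duration_hours: float,
                            max_interval_hours: float = 3.0,
                            max_duration_hours: float = 8.0) -> bool:
    """Return True if a CI job satisfies both timing criteria."""
    return (run_interval_hours <= max_interval_hours
            and duration_hours <= max_duration_hours)

# A job that runs every 2h and takes 1h qualifies; a 9h run does not.
print(meets_blocking_criteria(2, 1))   # True
print(meets_blocking_criteria(2, 9))   # False
```

A job can fail the criteria either way: by running too rarely or by running too long, which is why both limits are checked together.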
C
Now, thanks to the folks leading this effort, in big part, and also to sig storage, because they were the ones that made us analyze that this was something we needed to do. So, first off, we have a new Velodrome dashboard; please look in the docs, there's a nice page linking to Velodrome. If you have never used Velodrome, then this is also a great opportunity for you to go and check it out.
C
So, using that information, we are hoping to actually start cleaning up the types of tests that are in the release-blocking jobs, to adjust it so that we only have tests that will flag and tell us not only whether Kubernetes in general is healthy, but whether the features that really matter, the features that are supposed to be release-blocking, are passing or failing. And there are a lot of issues.
C
Okay, if you go to the CI signal issue, 347, you'll see some updates on what people are doing to clean up the release-blocking dashboards. So cleanup, that's one thing. Now, moving on to testgrid for the release team this week: it's another one of those nice weeks. Master-blocking: it seems that there are no persistent failures at this time.
C
There's one of the jobs that is failing, but it just needs more time; it just needs more time for testgrid to actually realize that nothing is wrong in that job. So, master-blocking: all good. Master-informing: there have been a couple flakes in the scale performance job; that's one of the sig scalability jobs, and the five-thousand-worker-node jobs really are crucial, and they can be release-blocking.
C
We'll contact them this week; if anything shows up, we will send messages to the sig-release channel to see if anybody else can help in any way. Other than that, the other failures showing up in master-informing are out of the deb and rpm package test build, and Stephen, I saw you wave; I don't know if you want to say something about it.
D
So, the first piece of that, just going back a second, with regards to blocking and informing criteria: I dropped a link. Josh Berkus has done some major work in documenting the blocking and informing criteria for release jobs; that is sig-release pull 75, so check that out. If you have time to review that doc, that'd be greatly appreciated. I want to make sure that there are eyes on that, and that these are decisions we're making as a group and not just a few people, so yeah.
D
So, the deb and RPM failures: basically, the way the script was written, it was making the assumption that I'm running this from my laptop, and these things I want to run in Docker on my laptop, right, so I don't have to deal with that stuff. What has essentially happened is that once they run in CI, it's trying to start Docker, which means that you require Docker-in-Docker for CI, which we don't really need to do for this job in particular.
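The fix being described, running the packaging step directly when already inside a container and only wrapping it in Docker on a laptop, can be sketched roughly as below. The `/.dockerenv` and cgroup checks are common container-detection heuristics, and the command and image names are hypothetical, not the real script's logic:

```python
# Sketch of the deb/rpm job fix discussed above: only wrap the build in Docker
# when NOT already inside a container (e.g. on a laptop). The detection
# heuristics and command names are illustrative assumptions.
import os

def running_in_container() -> bool:
    """Best-effort container detection (heuristic, not authoritative)."""
    if os.path.exists("/.dockerenv"):
        return True
    try:
        with open("/proc/1/cgroup") as f:
            return any(k in f.read() for k in ("docker", "kubepods"))
    except OSError:
        return False

def build_command(pkg_target: str) -> list:
    """Run the packaging target directly in CI; wrap it in docker locally."""
    if running_in_container():
        return ["make", pkg_target]
    # "build-image" is a placeholder name for a hypothetical builder image.
    return ["docker", "run", "--rm", "build-image", "make", pkg_target]
```

This avoids requiring Docker-in-Docker in CI while keeping the convenient Docker wrapper for local runs.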
C
Sorry, just getting back on my train of thought. So, just to summarize again: master-blocking, all good. Master-informing: we need to get in contact with sig scalability about one of the jobs flaking. The other big thing is that we need to contact OpenStack again; their job has gone stale. It seems there's only one run of data, and that was executed more than two weeks ago, so something is happening.
D
Yes, I would say reach out to Chris Hoge (hogepodge); he should have some context on that. I believe what he said in a cloud provider meeting is that the infrastructure that they use for the OpenStack jobs is potentially going away. So, yeah, I don't know what the status of that is, but that was maybe a week and a half ago, so check with him and see where that's at. That's why those jobs are going stale: those jobs don't run all the time. Okay.
C
Absolutely, thanks; we'll keep that in mind. So those are the big things from us. For master-informing and release miscellaneous, I just stopped, because I think this corresponds to the release engineering people, and they are really on top of things. That said, something showed up already: the release unit job has been failing a lot, and a couple more issues popped up. Since I just saw them Friday, I don't really have a lot of context.
D
Yeah, so the reason those are failing was because we switched the unit-test job for kubernetes/release to use the make target instead of the go test command, or whatever it was; whatever command it was using previously is no longer used. We switched to a unit make target, which does the testing for both the shell scripts that are in kubernetes/release as well as the Go code that's in there.
D
So the reason that it was failing was because it didn't have, I think, jq or something like that, right. So those links link to fixes from Hannes, where we switched to an image, kubekins, again, that has all the required dependencies to run those jobs. From those links you'll also see an issue opened, which someone is taking up, to look at the way that we do images, period, right: how do we build, test, release, and update images for release tooling?
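The failure mode above, a job breaking mid-run because its image was missing a tool like jq, is the kind of thing a small preflight check makes explicit. The tool list and wording below are illustrative, not the actual kubernetes/release scripts:

```python
# Sketch of a preflight check for the failure described above: the unit job
# broke because the CI image lacked jq. Failing fast with a clear message
# beats a confusing error halfway through the run. Tool list is illustrative.
import shutil

REQUIRED_TOOLS = ["jq", "git", "make"]  # hypothetical requirements

def missing_tools(tools) -> list:
    """Return the subset of tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = missing_tools(REQUIRED_TOOLS)
if missing:
    print("preflight: missing required tools:", ", ".join(missing))
```

Switching to an image that bundles these dependencies, as described above, is the other half of the same fix: either the image guarantees the tools or the script verifies them.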
D
Part of that investigation will be: okay, as a first step, can we not use kubekins? It has more junk in it than it needs to for our purposes, right, and we want as slim an image as possible for the release tooling. The second part of that is defining policy around how we do this in general. Like, test-infra has an images subdirectory that has a set of images that we use across the board in Kubernetes.
D
We also have our own images that we maintain in kubernetes/release that need to be looked at. So part of that will be figuring out: do we put them in test-infra? Because the test-infra ones also have scripts and magic that happen that allow them to auto-bump, right, auto-bump based on different jobs. So we need to determine: do we want to do that, or do we want to build something that's similar to that?
D
Or do we want to keep them all in-house, right in the release repo? So, you know, that should all be figured out once we build a smaller image; I think doing discovery on building a smaller image for release engineering will inform some of that process. So, active work. But if you see the unit tests for k/release fail again, let us know. They shouldn't fail again; or, not ever, but they shouldn't fail again for the time being. Okay.
C
Someone is actively taking care of them, or at least following them, and there is an effort to completely remove the cluster directory, which is the part of Kubernetes that this job depends on. But you know, that's a whole other problem right now. There are a couple other interesting things that I wanted to bring up, and that we were going to investigate: a couple of the upgrade jobs.
C
The cluster-lifecycle and scale ones, which are failing like this. And you know, generally there are a lot of things that are funny about these tests. For one, they're not really maintained, and they rely on the cluster directory, which is something that has been deprecated for ages.
C
They do still provide some useful information, though. So we, the CI signal team, were planning to check up on these failures and report back if they actually are something actionable. And with that, that's all in my update. Yeah, anyone have any questions, concerns, comments?
D
So, a concern maybe, or a comment or something: I have concerns about burning time on anything that's now orphaned, right. Part of the reason that I moved all that stuff to orphaned is so that you folks can ignore them, right. The idea is that, if you feel something should be tracked by sig release, then make it work; if it doesn't, orphaned is pretty much the staging place for it to be removed.
F
Hi everyone, so it's time for yet another bug triage update, and our status for this week is again yellow; more about this later. Regarding queries: we have 80 open issues, which is one less than we had this time last week. We got to the PRs; pinging helped quite a bit, I think, and we are down to 51, which is 7 less than we had this time last week. And regarding all open issues we have in the repository,
F
we are at 93, which is about where we were last week. Now, apologies from my side, because for the last two weeks I have been a little bit less responsive; I had a cold and some problems, so I may have been a little bit slower on some PRs, and pinging may have started a little bit later than it should have, but I'm doing my best to catch up with everything.
F
And today we have started pinging issues, mostly those that have not been updated since the beginning of this cycle, and there are about 50 out of 80 total issues that should be pinged as soon as possible. Regarding PRs, we will ping the rest of them, I think about 20 or so, this Thursday, one week before code freeze starts; this is one of the upcoming milestones for us. And next Thursday,
F
we are starting code freeze, so you will see us pinging some of the stuff that should not be there, that has not merged and that doesn't have a good sign of merging soon. So yeah, there is that. It's not great that we have such a big number of open issues; that's the reason we are yellow, but I hope we are going to get there. And that's it regarding my update. Are there any questions for bug triage?
A
Okay, so let us know, especially from the lead team, if you need help reaching out on any of those issues; it can be a lot of work to trawl through them and ping people. So, especially if it looks like it's a lot to keep up with, and especially if you've been struggling health-wise, please, by all means, feel free to reach out for help. Okay.
G
This is one of the docs shadows; I'll be giving the update today. We're currently sitting at yellow. We have 17 enhancements that are fully merged, have placeholder PRs, or do not need docs, so that means we have about 23 still outstanding, and we'll be going through today, tomorrow, and the rest of this week, just tracking everybody down and making sure placeholder PRs are in by the 23rd.
A
Right, sounds good. Thank you, Seth. Any questions, comments? Nice. Release notes, hi.
D
I want release notes to be aware that there's some code-fencing nonsense going on. Basically, the way that we look for the release note in code right now, it will pull in any code block from the PR description, which means if the code block is not annotated with release-note or release-notes, it'll still pull it in, right. So there were issues with the release notes update because of that, right.
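The code-fencing problem described above can be shown with two regexes: a naive extractor grabs any fenced block from a PR description, while the intended behavior anchors on blocks whose opening fence is tagged `release-note`. The regexes and PR body are illustrative, not the real release-notes tool:

```python
# Sketch of the code-fencing bug described above. NAIVE pulls in ANY fenced
# block from a PR description; TAGGED only matches fences annotated with
# "release-note". Both patterns are illustrative assumptions.
import re

FENCE = "`" * 3  # a literal triple-backtick fence

NAIVE = re.compile(FENCE + r"[^\n]*\n(.*?)" + FENCE, re.DOTALL)
TAGGED = re.compile(FENCE + r"release-note\n(.*?)" + FENCE, re.DOTALL)

PR_BODY = (
    "Some description.\n\n"
    + FENCE + "go\n"
    + 'fmt.Println("unrelated")\n'
    + FENCE + "\n\n"
    + FENCE + "release-note\n"
    + "Fixed a scheduling bug.\n"
    + FENCE + "\n"
)

print(NAIVE.findall(PR_BODY))   # grabs BOTH blocks -- the bug
print(TAGGED.findall(PR_BODY))  # only the intended release note
```

Anchoring on the `release-note` info string is what keeps unrelated code snippets in PR descriptions out of the generated notes.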
A
Right, sorry, I was just following the notes. Cool. Communications, yo.
J
Good morning everybody, happy Monday. Today we are reporting back yellow, mainly because we were waiting for some feedback from some of the SIGs; we're still working on that and might need some help on that front, just with engaging some people, so I'll reach out after the meeting on that front. We do need to schedule a meeting, probably this week or sometime next week, maybe, with the release leads, docs, and release notes, and kind of go over things.
A
Good; let us know with a short list of the SIGs that need extra help. Absolutely.
A
That would be awesome. Thank you, Taylor. Any questions?
A
Alright, that sounds excellent. Thanks, Yang. Next up, we have comments and concerns from the emeritus leads. I see a couple of bullet points; go ahead, Josh.
E
We're about to go into burndown. Burndown is an excellent time for your shadows to try out recording for your section. Not only is this good training and involvement for the shadows, but given that we're gonna have, I don't know, whatever it is, 12 or 13 burndown meetings, it takes some of the load off you, because your section, your role, is required to check in every burndown meeting.
D
Yes, I put a few big bullet points there. So, you know, as always, I want to make sure that people feel comfortable and enabled to come talk to us about things that are going on and happening. So again, just saying: if you feel like you need to talk, if you want to talk about how the release team is going, your path for succession, and all that good stuff, please feel free to ping myself or Josh, or talk to your lead, or talk to your release team lead shadows, or your release team lead.
D
This is a really good time for shadows who are considering succeeding the role: maybe mention that to your lead, and talk about the things that you could be improving on, or the things that you're doing really well, whatever it is. This is also a really good time for the section leads to start looking at who might succeed you, right: who's been doing a great job, who has not been active. You know, every cycle,
D
make sure that, you know, you're the lead for your role, but you're not doing it by yourself; so give people the opportunity to join you in that process. And again, one more time: if you need to talk, know that Josh and I are there; know that you have multiple people to reach out to. Cool.
A
Yeah, any questions for our wise and esteemed emeritus leads? I want to repeat what Stephen said: please feel free to reach out to them in private on Slack. I have done so personally, and I can report that it always goes well. So please, I encourage everyone to avail yourselves of their knowledge, and also, thank you so much for being there. All right, cool. So I don't think Lachlan is here today, so the next section:
A
Let me remind everybody again, this has come up: this is the week that burndown meetings start. Wednesday will be our very next meeting, same time, and it'll be Monday, Wednesday, Friday for the next few weeks. And we have upcoming milestones: tomorrow we will be cutting the beta release, and on Friday we have the docs PR deadline. We're aiming toward the 29th for code freeze. Any corrections to this that I should know of? Fabulous. There are a few open discussion items.
A
Anyway, while I figure that out, let's talk about the removal of slow tests; we kind of touched on this already. This is an action item where we need to ask the SIGs if they really need all of their tests, especially those that run slow. I think Tim and I have sort of volunteered to talk to the SIGs and get a little bit more clarity. Are there any other points on that?
D
Not really, outside of: anyone should feel free to do this, but obviously it's the job of CI signal, so, CI signal, if you have free moments, free cycles, I think knocking out slow tests is a big win. I know that there has been some discussion on the kubectl tests, some of the upgrade/downgrade stuff, as well as, I think, Michelle is working on breaking out the sig storage tests, trying to figure out what boundaries to break them along. You had mentioned that there is a test timeout.
D
The test timeout is, I think, eight hours, but the criteria for some of those jobs, I think, is four hours; correct me if I'm wrong, Josh. So they're already exceeding criteria, regardless of whether or not they're timing out. So we're trying to figure out whether we break those along the boundaries of in-tree versus CSI tests. There are a bunch of interesting things happening; I'm not tracking all of them, but that PR, that link, is the right link to check out. Yep.
A
Yep, that's pretty much it on that one. Milestone applier?
D
Right, so I am trying to dig up the PR. Essentially, Nikita, I believe it was, pretty sure it was, wrote this cool plugin for Prow that allows you to automatically milestone a PR after it merges (I'm not sure if it does issues, but a PR), right. So what that means is we don't have to go looking around trying to figure out what milestone something landed in, because it will automatically happen now.
D
So this PR that is in the docs (I'm linking to the docs now, and also in Zoom) shows you what's required to bump that stuff, and, moving forward, that'll be part of the process. So you don't have to worry about going back and making sure things are milestoned; it'll automatically happen. It'll be part of the release engineering process moving forward.
K
Awesome. I have a question: is there a scenario where an enhancement might be merged after? So, sometimes we label something as an exception if it hasn't got the merged PR, the merged enhancement, by enhancement freeze, but then we sort of say that if you merge it by another deadline, that's fine too. Could that cause trouble?
D
If something is already milestoned for a specific milestone, 1.16, and it merges after, it won't apply the 1.17 milestone; it'll still apply the 1.16 milestone. It won't override any milestones that are already set; it'll only apply milestones to PRs that do not have them.
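The never-override behavior just described can be sketched in a few lines. Plain dicts stand in for the real Prow plugin's GitHub objects, and the milestone names are illustrative:

```python
# Sketch of the milestone-applier behavior described above: after a PR merges,
# apply the current milestone only if the PR has none; never override one that
# was already set. Data shapes are illustrative, not the real Prow plugin API.

CURRENT_MILESTONE = "v1.17"  # assumed "current" milestone for illustration

def apply_milestone(pr: dict, current: str = CURRENT_MILESTONE) -> dict:
    """Milestone a merged PR, leaving any pre-set milestone untouched."""
    if pr.get("merged") and not pr.get("milestone"):
        pr["milestone"] = current
    return pr

# A PR already milestoned for v1.16 keeps it even if it merges later:
print(apply_milestone({"merged": True, "milestone": "v1.16"}))
# An unmilestoned merged PR picks up the current milestone:
print(apply_milestone({"merged": True, "milestone": None}))
```

This is why an exception that merges late still lands in the milestone it was pre-assigned: the applier only fills in missing milestones.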
A
Thank you. Let's see, anything that we wanted to revisit? Anything we parked for later?
D
Essentially, what that file does is it specifies a set of external dependencies, their versions, and a refPath, which is a combination of a file that references that dependency as well as a regex to find exactly where it references that dependency, right. CI runs a hack/verify-external-dependencies script, something like that, and that will kick off,
D
I think it's a command, verify-dependencies, verify-dependencies.go, right. verify-dependencies.go will take a look at that YAML and decide: hey, are these versions out of sync? Did you not bump something, if you decided to bump an external dependency? That's part of what we need to do there. So we have some of the automation in place to at least validate that the versions are the right versions for external dependencies.
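The verification just described, a spec listing each dependency's expected version plus refPaths (file and regex) saying where that version appears, can be sketched as follows. The spec shape, file contents, and versions here are illustrative assumptions, not the actual kubernetes/release data:

```python
# Sketch of the external-dependency check described above: walk each
# dependency's refPaths, apply the regex to the referenced file, and flag any
# version that is out of sync with the declared one. All data is illustrative.
import re

SPEC = [
    {"name": "golang", "version": "1.12.9",
     "refPaths": [{"path": "Dockerfile", "match": r"golang:(\d+\.\d+\.\d+)"}]},
]

FILES = {"Dockerfile": "FROM golang:1.12.8\n"}  # stale reference on purpose

def out_of_sync(spec=SPEC, files=FILES) -> list:
    """Return (name, path, found_version) for every mismatched reference."""
    bad = []
    for dep in spec:
        for ref in dep["refPaths"]:
            for found in re.findall(ref["match"], files[ref["path"]]):
                if found != dep["version"]:
                    bad.append((dep["name"], ref["path"], found))
    return bad

print(out_of_sync())  # [('golang', 'Dockerfile', '1.12.8')]
```

This is the shape of check that lets CI "yell at you" when a version bump misses one of the places the dependency is referenced.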
D
What we need to double back and do, which I have on my plate, is to write a policy for what this stuff should be, right, so we have a good policy in place. Christoph and a few others have contributed to the one for bumping dependencies in Kubernetes, right: for bumping a Go dependency, there are explicit scripts that you run to get that done. What we don't have is the analog for external dependencies, right. So that will live somewhere; that is probably, I
D
don't know, contributors devel, release, something like that, to let you know exactly how to do external dependency bumps as well, right. But essentially, right now, CI should yell at you and say: hey, you tried to bump this version, but you didn't get all the things, right. So that's a first step; more things to do, but we have gotten things done there.
M
To finish my thought: I don't know how much of this umbrella issue is particularly 1.16-related. We haven't marked it as milestone 1.16 and release team, but operationally this has been around, predating really the ramp-up of release engineering. This discussion, I think, would largely be one for the release engineering meeting, but specifically for 1.16, this does impact the release notes; so, Stephen calling out that YAML file, that should be a place where release notes can get the current list of expected things. But I think,
M
from that perspective, we could milestone this and, assuming that part is completed, remove area/release-team; I could see that flowing into the release team handbook for the notes, and checking at the end what we published for those. But then the other option here would be to go ahead and close this issue as complete, and for any subsequent next steps, open up smaller issues instead of just this large umbrella.
A
All right, well, it sounds like we've gotten a lot of really good work done, and most of what's left is both a communication and an organization thing, as well as some things to finish. Whoops, sorry about that. The last thing on in-progress is priority labels for issues related to CI failures.
A
Actually, you know what, I can check it out outside of this meeting; we don't have to do that in front of everybody. Okay, very good. I don't think I'll do it; I'll spare everyone any backlog grooming today. So, any final thoughts and comments? It's starting to heat up; this has been the longest meeting so far. I think we got through a lot of good things, though; I don't feel like it was a waste of time.