From YouTube: Kubernetes 1.13 Release Team Meeting 20181022
A
Okay, hello everyone, welcome to yet another 1.13 release team meeting. The usual disclaimer: we are being recorded right now, so please be mindful of what you say. For those who just joined, I've dropped the link to the release doc and the meeting minutes in the chat. Please add your name if you've not done so already. Let's get started with announcements. I was just reading the issue Aaron opened regarding the GitHub incident that's affecting Prow labels this morning. Aaron?
B
If you see something weird, please say something and tie it back to that issue, because we'd like to make sure that Prow and Tide and all of our automation are resilient to outages like this, and so it was a good time for us to just collect and document all of the things that happened around this time. Yeah.
C
And this will affect more than just labels. It'll affect pretty much all of the automation, and it's pretty likely that there could be some unexpected stuff if they're actually replaying webhooks all the way back from yesterday. Given that not a lot happened last night, it probably won't affect too many PRs. Hopefully. Okay.
A
Moving
on,
as
for
announcements,
first
of
all,
thanks
a
lot
for
folks
for
being
flexible
and
from
for
attending
the
meeting
at
10:00
a.m.
today,
I
got
at
least
multiple
folks
pinging
me
saying
the
11:00
a.m.
time
slot
didn't
work,
so
we
moved
it
back
titanium.
Hopefully
this
works
and
ask
for
announcements,
for
if
you
have
not
heard
this
already,
we
had
our
alpha
cat
last
week.
He
did
have
a
few
issues,
but
we
work
past
that
we
have
an
alpha
slated
for
actually
Wednesday.
A
I will shout it out, maybe in a day or two, and pull in more folks to add historical context where applicable, and then we'll see where it makes sense to put this log. That can wait until closer to release time, but I will get started and try to put in as much info as I can as we proceed along. As for dates, we have the enhancement freeze coming up tomorrow, so that's the first milestone that we are hitting for this release.
D
So that is the first milestone to hit. We do have 37 issues currently in the spreadsheet right now: 25 I feel very confident about, and 12 I put as at risk. I'm going to be working with the shadows a little bit this afternoon to go and ping all of these at-risk ones. Once again, the only reason they're at risk is because we have gone through most of these particular issues and asked to have some sort of KEP or tracking mechanism built in.
B
Nothing super major. Like I said, pretty much people kicked things out ahead of time; there was nothing that SIG Architecture felt needed to be kicked out now. The way I did this was, just as a human being, I added an extra column to the feature spreadsheet, called something like "Aaron clusters" or "UX clusters" or something, ignoring the SIGs and all that stuff, just to try and decompose it.
B
If I had to describe the Kubernetes feature to a group of people in a few seconds to a minute, what would I tell them? That's just what that column was about; maybe it'll help us come release time. Server-side apply is a major, major, major change that's been going on in a feature branch for a while. They're not sure if they will make it to alpha, but the idea is: if they don't make it, they stay in their feature branch and that's totally fine.
B
If they do make it, we'll have a better idea of the feasibility of that prior to code freeze. Sounds good to me. Another thing was: there's a lot of storage work that could be interpreted as though Kubernetes CSI is going to GA. That is not the case. It sounds more like the CSI spec going GA, independent of the Kubernetes project.
B
Moving the in-tree persistent volume drivers to out-of-tree persistent volume drivers — so when it comes time to message what's happening here, it's really just that the out-of-tree CSI drivers are going GA, but there's still a lot of onboarding left for CSI. And then the other kind of tricky one was Windows containers going GA, or support for Windows containers going GA, which really bumps into a lot of assumptions that are being made about conformance tests right now. There are a lot of differences between Windows and Linux-ish operating systems, including things like the way file permissions work and the way environment variables work with mixed case, and so we're kind of hashing out what we're going to do about that. I'm not entirely sure that we have an answer today, but I would expect there to be some ongoing discussion about that in the conformance working group meeting happening on Wednesday and in future SIG Architecture meetings. That's it for me. Okay.
B
We may, but I would rather that blocking decision come from SIG Architecture. I believe they should have the authority to say what the gates are for a feature to go from beta to GA. I think one of the compromises being tossed around was the idea that the Windows container feature itself — like, a cluster that had mixed operating systems — might not be fully conformant, but if you disabled the Windows containers, maybe that would be.
E
So I'm not entirely up to date, because I was on an airplane yesterday and delivering a training this morning, but I had caught up recently, more or less. This has been consistent with how things have been for 1.13 so far, which is that we don't have a lot of new test failures, but we continue to have test failures that don't go away.
E
So one of my priorities this week is going to be to get SIG Apps to please close out the DaemonSet thing — it's time to stop discussing what they want to do and actually implement something. My concern is that the upgrade tests that are having DaemonSet failures may suddenly start showing other failures when the DaemonSet failures go away. Beyond that, I haven't seen a lot of new high-priority issues. Oh, one thing, actually:
E
I think the test is just written in a way that is timing-dependent. If you follow me, then it can fail because of timing and not because the test actually failed. But, you know, I don't understand the horizontal pod autoscaler well enough to fix it myself. I kind of get the sense that SIG Autoscaling is not very well staffed right now, because they just have not been very responsive through this whole cycle, but I'll attend their next SIG meeting to see what I can do about follow-up, or one of the shadows will.
E
It does bring up one thing that I should discuss with SIG Scalability for the next release cycle, which is: it would be lovely to have some scalability tests that also report times as well as success/failure, just because Kubernetes could have slowed everything down by, say, 6%, which would not be considered a failure, but it's something we ought to know. Yeah.
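As a rough illustration of the kind of timing report being asked for here, the sketch below compares latency percentiles between two sets of runs and reports a relative slowdown that a plain pass/fail limit would never surface. All names and numbers in it are made up for illustration; nothing here comes from an actual scalability job.

```python
# Minimal sketch: compare latency percentiles between two releases and report a
# relative slowdown that the plain pass/fail limit would not surface.
# All numbers and names below are made up for illustration.
from statistics import quantiles

def p99(samples):
    """Return the 99th percentile of a list of latency samples (seconds)."""
    return quantiles(samples, n=100)[98]

def compare(baseline, candidate, limit_s=1.0):
    base_p99, cand_p99 = p99(baseline), p99(candidate)
    passed = cand_p99 <= limit_s                 # the existing pass/fail criterion
    slowdown = (cand_p99 - base_p99) / base_p99  # relative change we also want to report
    return passed, slowdown

if __name__ == "__main__":
    v112 = [0.40, 0.42, 0.41, 0.43, 0.45, 0.44, 0.46, 0.42, 0.43, 0.47]
    v113 = [0.43, 0.45, 0.44, 0.46, 0.48, 0.47, 0.49, 0.45, 0.46, 0.50]
    ok, slowdown = compare(v112, v113)
    print(f"pass={ok} p99 slowdown={slowdown:+.1%}")  # e.g. pass=True, roughly +6% slower
```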
E
Well, I mean, they have a limit set, right, and that is our pass/fail criteria, and I'm not going to second-guess them about changing it. But even if we don't do that, if we say, hey, we consider Kubernetes passing, but it will slow down response times by 4% for some users, that's the sort of thing we want to have in the release notes. Yep.
B
So I linked a thing in the doc. I will share the screen just so we're all looking at the same thing. It is this pull request where I add release blocking job criteria. It's based off this issue, which I can't see because of Zoom — maybe there's a keyboard shortcut for that, whatever. Yes, that is it. This was opened a year ago; Jason and I put together a Google Doc, and I then put in sort of what the TL;DR is for that. Anyway, now I have gone through and added that to our release blocking jobs markdown doc.
B
So if we view it, it looks like this. I'm trying to spell out the following criteria: the ideal release blocking job provisions a cluster via open source software instead of a hosted service — this would disqualify things like GKE, EKS, and AKS from being on the release blocking dashboard. It runs in 120 minutes or less, and it runs at least every three hours.
B
If we got aggressive with it, that may mean we're waiting up to six hours to get our sort of go/no-go signal, and that means we have anywhere from two to four decision points a day. I'd love to drop these numbers back, and I talk about how we tend to drive these toward stricter thresholds over time. But again, these are numbers that I found based on surveying the jobs.
B
Today, there are a couple of things here that are aspirational, like: it's owned by a SIG that is responsible for addressing the failures, it fails for no more than 10 runs in a row, and it has a documented reason for its inclusion in the release blocking suite. So I call out the fact that all of these things are aspirational by putting in TODOs asking questions like:
B
So if that's a possibility, I really think some of these metrics should be percentiles, not maxes or medians. And as for making SIGs responsible for addressing these criteria, I would like to represent that both by having contact info as part of the job description, as well as having an alert address configured for the test. Oh my.
B
And then a documented reason for including it in the suite — I kind of feel like that could possibly be represented in the description, but honestly I'd love to be able to link to pull requests where we've had discussion about this, where we've made our decision to include it or not include it because of A, B, or C. None of these have that right now, so we'd sort of have to say, well, this is all kind of grandfathered in, and we're going to go through and kind of decide
B
whether or not this is necessary. I also introduced the idea of moving a couple of jobs off to what I'm calling a release informing dashboard. These jobs will still have the capacity to block the release, but because they require so much more human interpretation, we're going to move them off to their own dashboard, and whether or not we ultimately decide to pay attention to them is kind of a reading-the-tea-leaves, last-minute decision. This is basically me trying to represent the state of reality, because the scale and the serial jobs both take a really, really long time.
B
Finally, I just had a point about whether or not we wanted to do this given the new work about splitting up the upgrade jobs into parallel and serial — we may want to consider that as well; I'd like to bikeshed on that down the line. If you notice, right now all of the previous releases have two sets of dashboards: they have a blocking dashboard and they have an all dashboard.
B
All
dashboard
looks
to
me
like
it
contains
all
of
the
upgrade
tests
and
all
of
the
skew
tests,
and
generally
those
dashboards
tend
to
stay
red
all
the
time,
because
nobody
troubleshoots
upgrade
tests
for
older
releases.
What
we
have
for
master
is
a
whole
bunch,
more
people
here,
look
at
all
of
us,
and
so,
as
a
result,
we
have
more
tests
and
a
couple
more
dashboards.
B
It's
unclear
to
me
whether
the
upgrade
upgrade
optional,
cube,
CTL,
skew
and
misc
dashboards
all
actually
are
equivalent
to
an
all
dashboard,
but
I
know
for
purposes
of
troubleshooting
right
now.
It
makes
sense
to
kind
of
carve
those
up
into
their
own
things,
since,
as
you
as
we've
noticed
like
josh,
is
paying
extra
attention
to
about
grade
tests
right
now
and
justin
is
actually
fixing
the
upgrade
test.
So
maybe
we
can
talk
about
reorganizing
us
down
the
line,
but
yeah.
B
I
think
if
we
can
collectively
agree
to
start
with
this
and
I
can
show
you
can
kind
of
see
some
of
the
like
some
of
the
numbers
percentage
of
failures,
regardless
of
commit,
are
things
you
can
actually
get
by
reading.
The
summary
string
of
all
the
jobs
in
test
grid,
so
I
see
a
strong
+1
for
this.
For
the
seei
signal,
yeah.
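For reference, the per-job summary strings mentioned here can be pulled programmatically. The minimal sketch below assumes TestGrid's dashboard summary endpoint (testgrid.k8s.io/&lt;dashboard&gt;/summary) returns JSON keyed by tab name with an overall status and a human-readable status string; the exact field names and the example dashboard name are assumptions, not something confirmed in this meeting.

```python
# Rough sketch of pulling the per-job summary strings from TestGrid.
# Assumes testgrid.k8s.io/<dashboard>/summary returns JSON keyed by tab name with
# "overall_status" and a human-readable "status" string; field names may differ.
import json
import urllib.request

DASHBOARD = "sig-release-master-blocking"  # example dashboard name

def fetch_summary(dashboard):
    url = f"https://testgrid.k8s.io/{dashboard}/summary"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for tab, info in fetch_summary(DASHBOARD).items():
        # e.g. "some-e2e-job: PASSING - 98 of 100 (98.0%) recent columns passed"
        print(f"{tab}: {info.get('overall_status')} - {info.get('status')}")
```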
A
Yeah, this is like a huge +1 from me as well. If nothing else, this is going to at least help us ensure that the release blocking dashboard is robust enough, and if something fails there, we are definitely a no-go; and then the master blocking and the informing ones we can interpret and deal with on a case-by-case basis. And on the upgrades, I completely agree.
B
I'm going to have some internal conversations to make sure the idea of taking all the GKE jobs off of master-blocking is acceptable. I've heard that it's totally acceptable from those in the trenches; I want to make sure I communicate that up the stack. And I feel like we should also make it clear to Scalability that just because we're moving them off of the dashboard that has the word "blocking" to one that has the word "informing" does not mean we are in any way removing their ability to block. Yes.
A
Yeah
I
think
for
what
it
was
with
the
113.
We
anyway
end
up
looking
at
the
informing
or
just
because
we
don't
want
to
lose
the
signal,
but
on,
but
it's
yeah.
Eventually,
it's
a
bigger
conversation
as
to
how
much
attention
wanna
drop
to
its
back
and
we
say:
okay,
these
tests
have
been
failing
for
the
X
days
or
X
runs
so
no
no
response
from
the
owners
hands.
You
know.
A
Thanks a lot, Aaron. So yeah, in general I think I'm feeling good about CI signal at this point. Josh, you can agree or disagree, but it looks like we are getting good attention from owners for the issues that we file. I know the floodgates are going to open later this week and next week once coding kind of ramps up, but if we get the DaemonSet and maybe the horizontal autoscaling issues kind of wrangled ahead of that, I think we might go into active coding in a much better state. But yeah.
A
Let's
move
on
then
to
buck
triage
Nikko.
First
of
all,
before
Nikko
status,
update
a
huge
thanks
to
Nico
for
automating
the
the
buck
triage,
dashboard.
I
know
he's
been
working
on
this
for
the
last
few
weeks,
trying
to
pull
all
the
issues
down
the
corresponding
labels
and
status
and
also
working
through
feature
requests
for
making
this
happen
for
CI
signal
as
well.
Thanks
a
lot
Nico
and
yeah.
Would
you.
A
Yeah, I was looking at the spreadsheet itself. I noticed that some of the issues don't have priorities filled out, so yeah, my question was: could we be pinging the owners to make sure they fill in the priorities for those, and also make sure that the PRs that are linked to these issues have the same labels?
C
Well, besides GitHub having all of its problems today, I don't think we have much to comment on yet beyond what Aaron has already said. As far as the label requirements go, we already have the functionality for saying that we are going to require some type of label, like requiring the kind label or the sig label. We haven't used it for priority yet, so all I need to do is just create a new needs-priority label, but requiring priority across the board for PRs shouldn't be a problem. Yeah.
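The enforcement itself would live in the Prow configuration, but as a stand-in illustration of the same check, here is a small sketch that audits open PRs for a missing priority/* label using the public GitHub API. The repository name and the audit approach are illustrative assumptions; this is not the mechanism Prow or Tide would actually use.

```python
# Quick audit sketch (not the Prow mechanism itself): list open PRs in a repo
# that carry no priority/* label, via the public GitHub API.
# Paging and auth are omitted for brevity; unauthenticated calls are rate-limited.
import json
import urllib.request

def open_prs_missing_priority(repo="kubernetes/kubernetes", per_page=50):
    url = f"https://api.github.com/repos/{repo}/pulls?state=open&per_page={per_page}"
    with urllib.request.urlopen(url) as resp:
        prs = json.load(resp)
    for pr in prs:
        labels = {label["name"] for label in pr.get("labels", [])}
        if not any(name.startswith("priority/") for name in labels):
            yield pr["number"], pr["title"]

if __name__ == "__main__":
    for number, title in open_prs_missing_priority():
        print(f"#{number}: {title}")
```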
A
I wanted to just get others' feedback and opinions about formalizing this for the code thaw period itself. If we're going to make sure that the PRs that go in have priority/important-soon, then does that mean — is there a reason that we don't want to put this in code? If anybody has any historical context, it would be good to know.
A
I thought it was in the recording.
A
The other one was regarding the retro follow-up on adding a priority queue for PRs during the code thaw, so that we can get critical PRs in first during the code thaw period when we have a backlog. I was looking at the recording, and it looks like we finally landed on not doing this, versus making the tests less flaky, and ensuring that we go by it on a case-by-case basis.
A
Well,
if
there's
a
critical
PR
and
it's
backed
up,
then
we
you
directly
look
into
cherry-picking
it
into
the
release,
rather
than
waiting
for
a
seal
and
on
master
now
that
we
have
some
more
time
to
think
about
it.
I
just
wanted
to
know
if
there
was
any
other
options,
if
Cole
or
others
have
had
chance
to
think
about
adding
a
priority
or
if
we
want
to
stake
with
that
planned
for
1:30.
C
So we've avoided having priority in Tide for some design reasons. That being said, it seems like it's a feature that would be pretty useful. It's not on my radar for completing next quarter, but I could raise that and potentially do it if that's considered something that is really needed.
A
My only fear was, again: we just have like two days of code thaw this release, and considering that we just have a week to get things out after that, if there's a possibility and if it's not too much technical work — not a big amount of work — I thought it'd be good to have that in. Again, I'm open to any discussions for or against.
A
I don't know if there are cases where master and the branch deviate quite considerably, such that it might not be safe to cherry-pick things directly into the release without putting them in master first. I don't know if we've hit cases like that, where our backup option might not be that helpful.
A
So next up, docs. I know Tim updated the status; he's not able to join us today, so he has his updates there. He was going to update the issue — he has a pull request to update the handbook to say that all docs PRs should now go to the k/website repo, to the new branch in the k/website repo — and he was also going to follow up offline on running the API and CLI reference generation tooling during release cuts.
A
Another
discussion
that
we
had
offline
that
I
wanted
to
update
to
the
forum
here
was
dogs
requirement
for
out
of
three
enhancements.
We
know
AWS
has
about
three
enhancements
that
are
coming
in
113,
for
which
they
wanted
some
kind
of
inclusion
in
dogs.
So
we
we
had
a
discussion
with
Mike
and
Stefan
and
Tim
offline,
and
there
are
three
types
of
talks
that
we
could
probably
put
in
for
enhancement
and
four
out
of
tree.
We
decided
that
we
do.
A
The
blog
post
will
include
it
as
needed
in
the
Cates
blog
post
and
Mike
could
manually
add,
call
attention
to
these
work
in
the
release
notes,
but
we
would
not
be
adding
it
to
the
official
Kate's
dogs
as
a
peer
to
the
King
website
itself.
So
we'll
do
one
and
two,
but
not
three,
the
one
that's
listed
in
the
notes
there
so
just
wanted
to
let
the
group
know
of
that,
and
if
anybody
has
any
opinions
on
that,
please
let
me
know.
H
Not really, other than, you know, it feels strange for there to be zero release notes so far, but it's still pretty early, I guess. Yeah.
H
I've only been around for the last two releases, and in my experience release notes start pretty quickly and they definitely speed up towards the end of the release. But in my opinion it's pretty strange for there to be no commits with any release notes this late in the process. Okay — that's what I would have said if you had asked me three weeks ago.
A
We've
been
they've
been
bringing
the
enhancements
issues
for
more
PR
less
than
dogs.
Maybe
we
can
go
back
and
ping
them,
but
the
release
notes
as
well.
If
you
want
to
go
ahead
and
do
that
for
the
announcements
or
I
can
help
wrangle,
some
notes
as
well,
but
I
know
this
is
pretty
but
just
wanted
to
get
meet.
Your
expectations
are
in
line
with
what
the
calendar
says.
B
Who would write it — like, Mike? Is it SIGs? I don't know if that's how release notes work anymore. It used to be, earlier when I was more involved in the release process, that there were humans who did manual editing and tried to consolidate things into themes across the entire release. But when I say scatter-gather, what I'm talking about is asking every single SIG to talk about their themes, and so the release notes are like a bunch of SIGs talking about a bunch of things. Yeah.
A
I
know
we
do
that.
It's
the
end,
at
least
in
the
last
two
releases.
We've
done
it
pretty
much
at
the--.
It's
the
tail
end.
Yes,
this
is
again
one
of
those
things
is
how
have
you
done
this?
What
is
the
right?
What's
the
right
balance
to
like
strike
and
win
okay?
This
is
one
more
item
for
us
to
follow
on
and
follow
up
with
you
Mike
and
we
hopefully
have
a
better
answer:
cool
that
oh
yeah,
the
last
one
that
was
on
the
list
was
patch
release.
A
I forget the name of the person who was requesting a patch release for this — yeah, they were requesting a patch release for 1.8. Given that it's almost out of the three-release support cycle, I just wanted to know: do we end up pinging the old patch manager? We need some ownership slash action item to move this forward. So how have we done this in the past — do we just go ahead and ping the old patch manager for this?
B
I would think the old patch manager might get cranky that, like, this is beyond our release window and we don't do this anymore — which is not the friendliest answer to our end user, and at that point I'm not sure what to say. Maybe I would bring it back to Caleb, if he's the person who's sort of driving release packaging efforts in general.
B
So there's that. But we have made an exception to this in the past, where there was a critical security vulnerability and the product security team managed to wrangle whoever to cut a release for something that was outside of our support window. So it's possible — and this apparently sounds like the last release of 1.8 was broken, or is it all releases of 1.8 that are broken? I don't know. It's not a great, friendly user experience.