From YouTube: Kubernetes SIG Node 20190709
C
I didn't understand that — what's the real concern there? I thought the original proposal was just that when we configure a liveness probe there's normally no problem, but the probe could fail a few times at the beginning because some applications take a longer time to start up. So we want an initial failure threshold that is different from the normal failure threshold. That's the whole proposal. So now the concern is about how the kubelet handles the startup phase, and that that won't work?
B
Okay, but basically having the startup probe lets you do the same thing as an initial failure threshold number on the liveness probe, except that it allows more flexibility if people find other usages for it, and it also appeared clearer to Tim, because you have one probe for each concern: you get a clear separation of concerns and also a single threshold — I mean a unique threshold for each probe, regardless of the state of the container.
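To make the probe-per-concern idea concrete, here is a minimal sketch of what such a container spec could look like, assuming the proposed startup probe surfaces as a startupProbe field next to livenessProbe with its own failure threshold (field and type names follow the proposal being discussed, not a settled API):

```go
// Sketch only: a container whose liveness probe keeps a small failureThreshold
// while a separate startup probe tolerates a slow start. Field names follow
// the proposal under discussion and are assumptions, not a final API.
package probesketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func slowStartingContainer() corev1.Container {
	check := corev1.ProbeHandler{ // named corev1.Handler in older API versions
		HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
	}
	return corev1.Container{
		Name:  "app",
		Image: "example.com/app:latest", // hypothetical image
		// Normal liveness behaviour once the container is up.
		LivenessProbe: &corev1.Probe{
			ProbeHandler:     check,
			PeriodSeconds:    10,
			FailureThreshold: 3,
		},
		// Startup-only probe with its own, larger threshold, instead of an
		// "initial failure threshold" bolted onto the liveness probe.
		StartupProbe: &corev1.Probe{
			ProbeHandler:     check,
			PeriodSeconds:    10,
			FailureThreshold: 30,
		},
	}
}
```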
B
Yeah, if we need more explanation it's probably better to speak to Tim directly. I think I agreed to make this change just one week before code freeze, because in the end the result was the same for my problem, and if it would go this way, then okay — he has more experience than me, so I trusted him.
C
So thanks for raising the issue there, and I also think we should follow up next week. Is that okay? Because it looks like a lot of people didn't spend the time to learn all of the context. Some people know the original proposal, and some don't even understand what the original proposal was and didn't follow the history. So thanks for going through the history and explaining what's going on, and then we can follow up offline. Yes.
C
So can I re-share at least one part of my comment, because of the use cases listed there: if it's just because the registry is down, then actually, no matter whether it's containerd or CRI-O or Docker, the runtimes today already have a way to configure a registry mirror. So if the registry-down problem is the only case this is trying to address, then I think there are other ways to do it. This proposal might complicate our API, though I do see that some other reasons could justify it.
A
On that note, I think it'd also be useful for Mrunal and Seth to give an update on some of the stuff that's also happening in CRI-O, where you can support repository mirroring, or also registry mirroring by SHA. So there are other ways this is being tackled, and maybe the runtime folks can give an update on where the runtime is.
H
It is per repository: you can create a mirror of just that repository, not the entire registry. If you want to, say, put your control-plane images or some critical images somewhere and mirror them to support your deployment, you can, and the implementation is flexible, so you can use any regular expression to convert one repository path to another.
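As a purely illustrative sketch of the "regular expression to convert one repository path to another" idea — the real configuration lives in the runtime's registries configuration, and the mirror host below is made up:

```go
// Illustrative only: rewriting an image reference for a single repository to a
// mirror, the way a per-repository mirror rule might, using a regular expression.
package mirrorsketch

import "regexp"

// controlPlaneMirror matches control-plane images and rewrites them to a
// hypothetical internal mirror; everything else falls through unchanged.
var controlPlaneMirror = regexp.MustCompile(`^k8s\.gcr\.io/(kube-[a-z-]+)(.*)$`)

func rewriteRepo(image string) string {
	if controlPlaneMirror.MatchString(image) {
		return controlPlaneMirror.ReplaceAllString(image, "mirror.example.internal/control-plane/$1$2")
	}
	return image
}
```

For example, rewriteRepo("k8s.gcr.io/kube-apiserver:v1.15.0") would yield "mirror.example.internal/control-plane/kube-apiserver:v1.15.0", while unrelated images are left untouched.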
H
Another concern that came up while implementing that feature is what to do with pull secrets. One way is that if the kubelet is made aware of this mirroring, then the kubelet can specify the pull secret directly for a particular mirror. Right now we are passing the entire set of pull secrets down in the config JSON, which is not ideal, but it's something to keep in mind when we are addressing this feature.
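A hypothetical sketch of the per-mirror pull-secret idea raised here — if the kubelet knew about the mirror, it could hand over only the secret that matches, rather than every pull secret. All hosts and secret names below are invented for illustration; this is not an existing kubelet or CRI mechanism:

```go
// Hypothetical sketch: select one pull secret per mirror instead of passing
// the full set of pull secrets to the runtime.
package pullsecretsketch

import "strings"

// mirrorPullSecrets maps a mirror registry host to the pull secret to use for
// it (made-up example data).
var mirrorPullSecrets = map[string]string{
	"mirror.example.internal": "mirror-pull-secret",
	"registry.example.com":    "upstream-pull-secret",
}

// secretForImage returns the single secret relevant to an image reference.
func secretForImage(imageRef string) (string, bool) {
	for host, secret := range mirrorPullSecrets {
		if strings.HasPrefix(imageRef, host+"/") {
			return secret, true
		}
	}
	return "", false
}
```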
C
Good point. I was just trying to add a comment there, and I can see some other potential use cases. One of those — the second use case — needs to be spelled out in the design doc; she had added some of it, but it didn't really make that clear. So it sounds like it's not just addressing the registry-down case.
C
If you also want a fallback kind of thing — for example for CI/CD, if one image build is wrong, they can fall back to a previous version — that's a different thing. But at least the use cases she needs should be stated clearly, because right now this reads like it's only about the registry-down situation, and I think the registries and the container runtime implementations today already have a way to fix that problem.
I
Hi, this is Vinay. Yeah, I did updates last night — I pushed updates from the previous discussions. I think the main changes were to remove preemption from the pod conditions and from the notification, which was Derek's comment, and to remove the container restart in case of failure — it will just keep retrying. There were also a couple of questions that were raised, and I don't have a good answer for one of them. That one was: if the kubelet were to restart, then we currently don't have a way to figure out the allocated resources.
I
Since in this design we are using ResourcesAllocated and storing it in the pod status, if that is lost then we don't have a good way to get the current values. We can potentially get the limits by making changes to the CRI API — have it return the limits that are currently set on the container, since we set the limits when we update container resources or create a container — but that works for the guaranteed QoS class; for burstable...
I
...there is no good way to know what the requests were. One of the suggestions — I think it was mentioned that they should be in the status — but if we keep it in the status, then we cannot reliably get the request information. So one of the solutions I had was to potentially have this ResourcesAllocated as part of the pod spec and store it there, controlled via a subresource.
I
So I just wanted to discuss that and see. We brought up this idea earlier, back when the scheduler was in the loop and would do this, and to me it seems like a reasonable way to persist it: there are the desired resources, and there are the currently allocated resources. So if the kubelet were to restart, it can pick the allocated values up from the spec and generate the pod condition without having to keep any state. The pod condition is mainly for informing the initiating actor.
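A hypothetical sketch of the two placements being weighed for the allocated resources — in the container status (as in the current draft) versus in the pod spec behind a subresource. Every field name here is an assumption taken from the discussion, not an accepted API:

```go
// Hypothetical sketch of the two options discussed for persisting allocated
// resources; field names are assumptions from the discussion, not a final API.
package allocsketch

import corev1 "k8s.io/api/core/v1"

// Option A: keep ResourcesAllocated in the container status, written by the
// kubelet. If the status is lost, the kubelet has no source to recover it from.
type containerStatusSketch struct {
	Name               string
	ResourcesAllocated corev1.ResourceList
}

// Option B: record the allocated values in the pod spec, updated by the
// kubelet through a dedicated subresource, so a restarting kubelet can read
// them back and rebuild the resize pod condition without node-local state.
type podSpecSketch struct {
	Containers         []corev1.Container
	ResourcesAllocated map[string]corev1.ResourceList // keyed by container name (assumption)
}
```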
A
So I haven't had a chance to catch up on the latest updates since the last discussion, so I don't know if we're going to be able to have a reasoned response right now. Unless — I don't know — does anyone have comments? Or do we want to just give us all a chance to catch up on what went in over the last week, and then, yeah.
I
Yeah, we can pick this up next week. I know I updated it last night, so it didn't really give people a chance to look into it. The main intention today was to summarize the changes that I pushed last night, give an overview of what they are, and then see next week whether this brings us closer to where we want to be.
I
On whether we can drop it — I looked into removing it. I think the main reason to continue to have it is that it can give the initiating actor quick feedback on whether the resize is possible or not. And since we are moving the retry and resize-restart policies out to the VPA, it would be able to evict the pod if the policy allows it to do that.
I
The policy is there per resource — it is the default, I thought. The discussion was that we had different resize policies for CPU and memory, and I believe your comment was that we should have a uniform resize policy. There are two different policies. One is: if you cannot resize in place, then the policy on the initiating actor — in this case the VPA — will guide it: "I am OK with rescheduling this pod to a different node as long as the PDB is respected," or "with this pod I will take my chances; I'll continue to run sub-optimally." Those are the two policies that we initially had in this KEP, but they are out of the KEP now. Then there is the per-container policy — per container, per resource type — where, let's say, if I'm updating memory, I would want it to restart the container. That is there; that's the resize policy that we have here. So one is the restart policy,
I
and one is the resize policy — or, calling it a retry policy; sorry, the retry policy has been moved out to the VPA object: whether you want to retry, or how you want to retry. Do you want to keep retrying, or do you want to evict and then get it running with the new resource requirements on another node? When an in-place update is possible, the user can select: OK, I am updating only CPU and I want to do this without restarting the container.
I
Or, for this container, if I update memory, I want to restart the container — I can tolerate restarts — for example a Java application, which potentially cannot use the extra memory even if it has been increased unless you restart it with a new -Xmx flag; whereas if it's, say, a C container, it may not need that. So that policy is there.
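A hypothetical sketch of that per-container, per-resource restart-on-resize policy (type names and values are assumptions based on this discussion, not the merged API):

```go
// Hypothetical sketch of a per-resource resize/restart policy; names and
// values are assumptions from the discussion, not a final API.
package resizepolicysketch

import corev1 "k8s.io/api/core/v1"

type restartRequirement string

const (
	noRestartNeeded  restartRequirement = "NotRequired"      // apply the new value in place
	restartContainer restartRequirement = "RestartContainer" // restart to pick up the new value
)

type resizePolicy struct {
	Resource corev1.ResourceName
	Policy   restartRequirement
}

// Example from the discussion: CPU is resized live, while a JVM-style
// container only benefits from more memory after a restart (e.g. new -Xmx).
var examplePolicies = []resizePolicy{
	{Resource: corev1.ResourceCPU, Policy: noRestartNeeded},
	{Resource: corev1.ResourceMemory, Policy: restartContainer},
}
```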
The pod condition part of it is the one that you mentioned — whether we should even have it. That's the one I felt is still useful to have, and I looked into whether there is any state required to do this. As far as I can see,
I
if we move ResourcesAllocated out into the pod spec, it's absolutely stateless: when the kubelet restarts it'll just look at the difference — OK, currently I cannot do this, so it will be marked failed; or yes, I can increase the size and give you what you want, so I'm going to mark it in progress — and then eventually, when it has converged and the UpdateContainerResources CRI API has succeeded, it will say: OK, I'm going to set this to success. That information, I believe, is very useful for the initiating actor, and I'd like to keep that.
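A hypothetical sketch of that single resize pod condition moving from in-progress to success; the condition type and reason strings are invented for illustration, not the proposal's exact names:

```go
// Hypothetical sketch of a single resize pod condition driven by the kubelet;
// the condition type and reasons are assumptions, not a final API.
package conditionsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markResizeInProgress is set while the kubelet is still converging;
// markResizeSucceeded once UpdateContainerResources has succeeded, so the
// initiating actor (e.g. VPA) gets feedback without guessing at timeouts.
func markResizeInProgress(pod *corev1.Pod) {
	setCondition(pod, "PodResizing", corev1.ConditionTrue, "InProgress")
}

func markResizeSucceeded(pod *corev1.Pod) {
	setCondition(pod, "PodResizing", corev1.ConditionFalse, "Success")
}

func setCondition(pod *corev1.Pod, t corev1.PodConditionType, status corev1.ConditionStatus, reason string) {
	cond := corev1.PodCondition{
		Type:               t,
		Status:             status,
		Reason:             reason,
		LastTransitionTime: metav1.Now(),
	}
	for i, c := range pod.Status.Conditions {
		if c.Type == t {
			pod.Status.Conditions[i] = cond
			return
		}
	}
	pod.Status.Conditions = append(pod.Status.Conditions, cond)
}
```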
I
Without that we can still do it; the only way the initiating actor would know is to pick some arbitrary timeout, and it may be that the kubelet is actually doing it — it is able to resize — but it's in a loop trying to, let's say, reduce memory, and then the arbitrary timeout hits and now you're evicting the pod even though the resize is in progress. So for that reason I felt that the pod condition is still useful.
I
Yeah, I guess we can just think it over and see if we really want it out. I don't know of any particular corner cases — you mentioned that there are some potential corner cases; I tried to think through what could break here and couldn't come up with any. So if there is any major reason not to have it, then yeah, of course we can.
I
I actually implemented the earlier version in a POC that I did with the scheduler in the loop. I implemented that with, in fact, two pod conditions, instead of the one we have now, and it seemed to work well. But in that previous implementation the POC was keeping state in the pod condition, and there was a problem; we reviewed that and then got rid of it — collapsed it into a single pod condition. To summarize, I think the main changes here are:
I
The preemption was one of the pod conditions that we had in there before, and Derek had concerns that the user should not really know how they're affecting which pods are being preempted. The second was to have sections for the kubelet, API server, and scheduler interactions — I described those in detail. I still have the preemption in the end-to-end flow, but I mentioned that it is something that can be implemented as phase two.
I
Some of the other things were calling out that if it's a static CPU manager policy, then we allow only integral CPU updates — those kinds of things. So I think, let's take time to look through this update, but focus mainly on these two things. One is: do we really need to get rid of the pod condition? Because I think it's useful for the initiating actor. And the second, more important thing is to see whether we can have this ResourcesAllocated, which is currently in the container status, moved to the pod spec.
I
Unless there is a good way to, you know, keep state on the node, I am hesitant to do that. I know that today, let's say the kubelet restarts: it sees what pods are running, it sees what the API server says should be running, and then if there are new pods the API server has given it, it starts them up, and if there are pods on the node that shouldn't be there, it removes them. So in that sense it kind of keeps state on the node. But I don't know if it's feasible to just add one more list of: OK, container one has request and limit X and Y, container two has request and limit W and P — something like that. If we keep that information, then it's node-local information, but I'm hesitant to put any state on the node
I
unless you really can't do without it. Today, burstable pods are the main concern here. For guaranteed we can query the limits, and we know that request equals limits, so if you haven't updated the limits you can get that information back; but for burstable pods we don't know what the currently allocated request value is.
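A small sketch of the recovery gap described here, assuming the kubelet can read the live limit back from the runtime: for a Guaranteed pod the request is recoverable from the limit, for Burstable it is not.

```go
// Sketch of the recovery gap: Guaranteed requests can be derived from the
// live limits, Burstable requests cannot, which is why they need persisting.
package qossketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// recoverRequest assumes liveLimit was read back from the runtime (e.g. via a
// CRI call, as discussed) and stands in for the current limit on the container.
func recoverRequest(qos corev1.PodQOSClass, liveLimit resource.Quantity) (resource.Quantity, bool) {
	if qos == corev1.PodQOSGuaranteed {
		// Guaranteed: request == limit by definition, so the limit is enough.
		return liveLimit, true
	}
	// Burstable: the allocated request is not derivable from the limit.
	return resource.Quantity{}, false
}
```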
A
The question is — I thought the spec became what was desired and the status was what was achieved, and then the question was how often the kubelet had to post back the resources allocated. Like on resizing down, would it keep updating that value as it pushes pressure on the resource to induce reclaim, or does it only write back that value once it has converged on the spec? But I'd have to read the proposal again with the latest updates, yeah.
A
If that was the case, couldn't we only set ResourcesAllocated once the spec has converged? Do we need to actually checkpoint, or even publish the intermediate value? Because I would expect the loop in the kubelet to just keep trying to reconcile, and once we see that it's been set as desired, I think we'd be fine — but maybe I'm mistaken about something and I have to reread this.
I
We can post it once it's completely done. The issue is on downsizing: updating the request part of it is a no-brainer — you're reducing the resource requirement, so that should be immediate. However, the concern is with setting the limits to a lower value. That's where we may sit in a loop for a little while trying to get the memory limit lowered to the desired value, and I don't really see a reason to report every step. If, let's say, we were at 5 gig and we want to get to 3 gig, and we got to 4.5, 4.1, 4.3, 3.9 — I don't see what benefit we get by telling the user "OK, we are now at this value" every so often. All the user cares about is that the resize is happening, and in this case the VPA doesn't need to evict the pod for a reduction; all it cares about is: we have allocated X, we are using Y, which is much less than X, we want to give back that capacity, and that's our desired state. If it takes one second, great.
C
I only commented on this when the proposal first came out, so I didn't follow all of it, but I do see why they want to have another state in the middle — the actually-allocated state — because in Borg we have the node agent do a checkpoint, so you don't have that problem there. But in Borg, also, the node agent checkpointing and all those kinds of things make evolving Borg much harder. I just want to say that.
C
That's why we try very hard not to checkpoint in the node agent until we are clear about a lot of the state, like the API and all those kinds of things. Even today I wouldn't say we really do a good job of clearly knowing the API and all those kinds of things. But in Borg we do checkpoint like that — checkpoint what you are actually allocated. Okay.
I
It doesn't hurt to do that; I just feel that it's traffic you're sending to the API server that's really not helpful for anyone at this point. If it is useful even for the kubelet to know where it really was at some point, then sure. The way I see it, if they're increasing memory, it should just pass; reducing or increasing CPU should succeed.
I
Increasing memory should also succeed. Decreasing memory is where we might have issues, where we have to massage the pod to get a container to where we desire it to be, and if we were to restart and lose the information about where we were before restarting, we start from the top and then converge back very quickly to where we were — say we're going from five to three gig and we were having problems around three point five.
C
But actually, each component — because Kubernetes especially is an extensible model, a lot of the plugins can get it wrong. I'll just give you one example, like the volume tracking: during that initial stage the API changed and it caused production issues, because we said, oh, since we don't have a clear node API yet, at the early stage each of the edge components — the storage plugins — could look after it themselves.
C
If it's necessary, each, say, CSI binary could checkpoint on its own, so we couldn't force them to keep that API consistent; still, they did checkpoint, and then they couldn't evolve it. The same thing happened with the device tracking. So there's a production issue there, because they checkpointed some data and then later they wanted to add some new stuff, and they forgot about the backward-compatibility things, so they messed up the API and they caused production issues.
C
So there are a lot of things out of our control, because it's extensible. A lot of people just say "this is extensible, that's cool," but if you use it, the complexity is also added to our system, and the API level is not promised to stay consistent, right? There's the API for storage, the API for the network plugin, the API for the device plugin. Some of it came to the SIG to review, some didn't, and as they evolve they don't always come back and ask for approval.
C
So in those cases, at that stage, we decided at the node level that until we finalize all those APIs and extensible points we would not do this checkpointing, and we had to leave it to each plugin to do it at their own level — they decided to checkpoint on their own — but that still causes trouble.
I
I think the main thing we need to figure out at this point is whether we can push this out into the spec and use a subresource to have very tight control — the kubelet only updates this field — and the pod condition I'd like to keep. I don't know what else I can do to convince you, but I think we have both comments from you guys.
I
Why don't we all take a look at this updated KEP? In my mind these are the two main outstanding concerns, and the bigger one is to see how we can get this reliably from a source of truth. One option is to add that information locally on the node, similar to what the kubelet is doing today: if it is restarted, it has to discover the pods that are currently there, so it might as well discover this information about the resource allocations.
I
As for the requests that are currently set — for some reason I don't want to add that kind of information to local state; I prefer this to have one source of truth. We've seen this come up: if the API server provides you this information from the status, then you have to say, OK, we just trust that the local one is correct because we update it in the end. That's fine; both ways will work, it's a question of which one is better. I agree that this is part of the status, because that's what it should be.
K
Ah, great — I was worried my mic wasn't working. So yeah, just to give folks some context around this: for some reason I can't edit the agenda, so I wasn't able to add those links in, but we're here from SIG Instrumentation, and for the 1.14 release I introduced this change, namely updating some of the labels on the cAdvisor metrics to match the instrumentation guidelines, so we could do easier joins with things like what's exported from kube-state-metrics and the like.
K
So the intention is to deprecate the old label names, which are container_name and pod_name. Now, this will break anybody who has written a bunch of Prometheus queries using the old labels, so we've included both sets of labels for at least two releases — both sets were available in 1.14 and 1.15 — and basically we just wanted to reach out to SIG Node and ask if you are okay with us removing those in 1.16, or if you want us to wait until 1.17.
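For reference, a sketch of the two renames being discussed (old cAdvisor label names on the left, instrumentation-guideline names on the right), with a before/after query shown in the comments; the metric and label values in the comments are illustrative only:

```go
// Sketch of the label rename this change covers, as described in the meeting.
package labelsketch

// Old cAdvisor label names -> names that match the instrumentation guidelines
// (and what kube-state-metrics already exports), so joins become possible.
var renamedLabels = map[string]string{
	"pod_name":       "pod",
	"container_name": "container",
}

// A Prometheus query written against the old labels, e.g.
//   container_memory_working_set_bytes{pod_name="web-0", container_name="app"}
// would need to become
//   container_memory_working_set_bytes{pod="web-0", container="app"}
// once the deprecated labels are removed.
```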
K
There was an action-required note in the 1.14 release when we added the change to the new labels. We can include another release note saying: no, really, they're gone now, you must change. But I'm not sure if you have any opinions on which release we should target for that. I'm basically waiting to add the PR to remove the old deprecated labels now.
K
We just basically wanted to, you know, make folks aware of this, ensure they had a chance to read over the KEP and be aware of the changes, and then, if you're okay with us going ahead with this in 1.16, I'll go ahead and submit the PR. If not, then we can try to communicate better how to make those changes for 1.17.
L
On whether that's going to change when your controller is updated — that shouldn't be the case, because of what has happened: if you look at the PR that Elena has linked, the labels are basically still present, they have just been renamed, so you have duplicate sets of labels and label values. So if you have migrated across from 1.14 to the new label values, there should be several versions which have both.
K
Yeah, there shouldn't be any interruption. The issue at this point is that we're duplicating this data in two places, so we're sending a bunch of extra bits over the wire somewhat unnecessarily. So it'd be great, now that we've had the transitional releases for people to update all their queries to the new labels, if we can turn off the old labels.
K
Is that okay — next week? Yes, next week sounds great. And yeah, that's specifically why we're bringing this here: from a code perspective it seemed like it would be fine to do in this release, but we want to make sure it's communicated and that folks have switched over. Certainly internally at my company we're aware of the change and we've made the changes, and I've seen a number of PRs actually reference my PR and say they've made the changes to switch the labels over. So, okay. So we...