From YouTube: Kubernetes SIG Node 20221011
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
B
Okay, so the recording is in progress. Hello everybody, it's October 11th, 2022, and this is the SIG Node weekly meeting. Hi everybody, we have great participation today.
B
We will start with the regular update on our PR status. As you know, we just exited the KEP freeze, and a lot of KEPs were approved for this release, so we will start working on them. It's a good time to start development and look at active PRs. This week we had many PRs opened and closed, like somebody was experimenting, and there were a lot of very small PRs that were more suggestions rather than actual improvements, but we also closed and merged regular PRs.
B
So good work, everybody. Approvals are a little bit down from last week, but not by much. Maybe during the development stage of a release we can merge more PRs earlier; if we merge them earlier, we will find issues earlier. So please go ahead and try to review something. With that, we go to the first topic: Matthias talking about the working group.
C
Hello, everybody. Yeah, so we discussed it last week, I think, and we missed the deadline for the 1.26 enhancements freeze, so as suggested we should create the working group and go for the full solution.
B
Thank you, Matthias. Yeah, I think what practice showed is that if we only start poking people around the KEP freeze date, during the KEP finalization period, then we aren't able to complete those discussions.
B
All discussions typically hit the wall of "what is our long-term solution," and the sidecar proposal keeps creeping in scope into multiple areas, like security: let's run sidecar containers with different security policies, maybe a different runtime class. These kinds of concerns creep into the conversation, and since we don't have clear answers on all of them, we typically stall and miss the release.
B
We may need to schedule separate meetings for the working group. I will send a doodle and we'll see who wants to participate and what time the doodle gives us. I think initially we can meet a little more often, and then we can slow down and get into document-writing mode.
A
Good. Last week a few of us got together to discuss what we can do for 1.26. Did you summarize what we talked about? Can you summarize it here? Because that was kind of a sudden request, and I hope to share it with SIG Node here, to make sure everyone is on the same page, right?
B
Right, yeah. There is a bit of a summary of the sidecar problem in the document that Tamachi posted, but to continue on that and expand a little: generally, sidecars started with a proposal a few years back, and that proposal was rejected with a suggestion to go in two directions. The first direction was to propose maybe a smaller step towards sidecars that would be at least less controversial, and the second direction was to define a long-term strategy for sidecars.
B
From the smaller-steps direction we went into the keystone and terminate-pod solutions. Neither of them was taken. Keystone wasn't taken because it's much better to do sidecars than keystone: it's pretty much the same idea, but we get more benefits from having sidecars than keystones. And terminate-pod wasn't taken because, even though we can come up with scenarios where terminate-pod is needed, we cannot clearly articulate and confirm that those scenarios would actually be used by customers.
B
So we don't want to create something that enables scenarios we just dreamed up; we want some real confirmation from customers. And the second work stream is defining the long-term strategy.
B
I think we concentrated too much on defining lifecycle policies and lifecycle ordering, and a little too little on what other scope creep can happen, and I think that was a real concern from many people: what else will sidecars bring in, and can we contain it?
B
Let's say we agreed on the lifecycle aspect of it, but then how do we handle the security aspects, or other requirements, like the dreams of some community members about running sidecars even on different machines, or running them in some different scope? All these considerations weren't taken care of, and I think we need to do it this way. Also, summarizing some of the discussions:
B
We had this proposal to have a DAG, a directed graph of dependencies, to some extent like systemd. But over the course of the discussions we realized that a DAG, even though it may enable more complex scenarios, we cannot come up with any, and a DAG would make simple scenarios way more difficult to implement, sometimes even impossible. So I think the overall agreement among many people right now is that a DAG is not a direction we want to go.
B
We want to go with some kind of stages or phases approach, where we define some types of containers and some special needs. So at least we got out of the DAG problem. I also think the agreement right now is that we want to solve the problem of containers starting before init containers; if not in the first iteration, we need to make sure that we have a way to extend sidecars into that scenario.
A
All the requests around the sidecar KEP dig into other things, so I also spent time commenting on that one, to at least represent what my concerns are with the kind of proposal. I mean, it's not that any of those issues means a proposal is wrong or right; I just basically want to see: here's the problem we need to attack, and the reason we couldn't make progress in the past is that people didn't explore and answer
A
those concerns. I can't represent Derek directly, but on some of those I believe Derek and I share the same concerns. So that's why, if people want to help and are interested in this working group, please take a look at that one, so you have the background context.
A
So please help and join. Another thing I want to mention: after the meeting last week, Mrunal actually reached out to me about this. We didn't use a working group before, because to go that route I think you first need an answer for how long it is going to run. But we do agree: we need at least a bi-weekly group, or a subteam under the SIG, discussing this, and to start as early as possible to make progress.
A
So let's make it this time; with 1.26 it was always last-minute and we couldn't make progress. One of the things I mentioned to Mrunal is that this one is actually bigger than the in-place pod update, right? To me, honestly, it is, because we are changing the whole pod lifecycle, which could be a potential risk, and we haven't spent much time on that. Think about the in-place pod vertical scaling: how many rounds we went back and forth on it.
A
So the working group is reasonable to me, and we need it to make progress. But for the working group, we first need to think about what the criteria are, and then what the process is: for example, reporting back to the SIG monthly, bi-weekly, or however often. Those kinds of things we need to settle, right? And also, we don't want to have the working group forever.
A
This is why, when people hear about a working group, they're a little bit concerned and want something like: the working group ends by a certain date. This is brainstorming, but what I propose, I'm thinking, is by the end of 1.27, or maybe by the 1.27 code freeze deadline at least, right? Then we can talk about an extension or whatever.
B
I think the end of the year sounds like a good time frame, maybe a little bit further, to the 1.27 KEP freeze, yeah.
A
And also the cadence: maybe bi-weekly. When Mrunal and I discussed it, we thought bi-weekly is maybe an okay cadence, but I'm totally okay with whatever you folks decide, weekly or bi-weekly; maybe weekly earlier on. That's the main thing Mrunal and I discussed last week; I promised him to bring it to SIG Node today, but I saw the proposal message already, so I didn't put it here. Yeah.
B
Okay, any other comments on sidecars?
B
Okay, we'll wait for the doodle then, and we'll go from there. Renee, are you here, or do you want to...
D
Yeah, so the backport to 1.6.9 has community support, so I can check with the containerd community and see when we can release that. I think that is the only blocker, yeah; that is the only blocker right now for the main PR.
B
Okay, if there are no more changes, then let's go to Michael's topic: standardization of the OOM kill reason.
E
Yes, that's the topic. Essentially just a heads-up, because I think some people in SIG Node might be interested. Originally this started from the KEP; I mean, the KEP is done, but now I will continue the implementation of adding pod conditions to pods that are failing; this is the retriable and non-retriable pod failures for Jobs KEP. In this work I rely on the OOMKilled reason being set by the container runtime, and it turns out that containerd and CRI-O do set this, but there is no standard. One of the concerns of the reviewers was that, you know, this may break at some point. It also turned out that the current Kubernetes master already relies on this, and I was thinking:
E
It might be a good idea to standardize this, to enforce that the implementations set it and don't set it differently in the future, so that we don't break. I created an issue to start the discussion and also raised a PR where I suggest the simplest solution: just to document it.
E
So that's the first part. There is another part of this issue, which is to convey more information somehow. The first part is about freezing the status quo, and the second part would be about conveying more information, which could be useful at least for the KEP: to convey whether the OOM kill was due to exceeding the container limits, or the system running out of global memory.
E
It would also be good to start some discussion about that. But this is more involved, because it will probably also require some implementation on the CRI implementations' side, so I think I will not start the implementation of the second part myself; I'm just raising it as a nice-to-have that would be good to have from my perspective, but it's a little bit outside of my expertise. For the first part I raised the PR, so yeah, you are welcome to maybe jump in and help. So yeah, that's it.
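[Editor's note: since there is no CRI-level contract yet, the dependency described here boils down to matching the literal reason string that containerd and CRI-O happen to set today. A minimal sketch of that kind of check, for illustration only; this is not the actual KEP implementation, and the helper name is made up:]

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isOOMKilled reports whether any container in the pod terminated with the
// "OOMKilled" reason. The string itself is the convention under discussion:
// it is set by containerd and CRI-O, but not (yet) required by CRI.
func isOOMKilled(status corev1.PodStatus) bool {
	for _, cs := range status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && t.Reason == "OOMKilled" {
			return true
		}
	}
	return false
}

func main() {
	pod := corev1.PodStatus{ContainerStatuses: []corev1.ContainerStatus{{
		State: corev1.ContainerState{Terminated: &corev1.ContainerStateTerminated{
			ExitCode: 137, // SIGKILL exit code typically seen on OOM kills
			Reason:   "OOMKilled",
		}},
	}}}
	fmt.Println(isOOMKilled(pod)) // true
}
```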
A
Let's discuss this one, actually. What I'd propose for the long term is for SIG Node to try a separate route: having a userspace OOM killer. I've been looking forward to that for more than 10 years. One of the first things I think about is: if we got an OOM kill, was it due to the cgroup kill, rather than the global kill? For the cgroup kill, actually, I thought we had already converged:
A
between the container runtimes and Kubernetes, maybe right after we got CRI and all those container runtimes involved, with that version. That's the first thing we actually agreed on and standardized. But for the system OOM, actually, yeah, we do have a problem, because unless the CRI container runtime gives us this information, a lot of times we have to work it out ourselves, because the kernel didn't kill the entire cgroup; the kernel kills a single process, right?
A
So that's why we then have to do the sweeping of the processes. This is where a userspace OOM system comes in; even back when I worked on Borg, we requested this from the Linux kernel community. So I'm looking forward to this kind of thing, and hopefully we can converge and standardize on it as a community, not just us but the Linux kernel community all together. Then we can do a better job, yeah.
B
Michael, I have a question: will this standardization block you from implementing what you have in the KEP, so that the job controller will not retry on OOM kill, or is it just a nice-to-have improvement?
E
No, it doesn't block me, because what I'm going to do is rely on the current signal that is conveyed to kubelet by the container runtime. As I said, the OOMKilled reason is set, and it's not unprecedented to rely on that, because currently in the Kubernetes code you can already find this sort of informal dependency on it, which is not nice because it can break, but technically the status quo is that this is what the implementations do.
B
The question I have is: if the job status relies on the retry status, how easy is it today for customers to understand what was OOM killed? Say a customer complains that this KEP doesn't work: a retry didn't happen, or a retry happened when it shouldn't have.
B
Is there a clear way to troubleshoot, and are there any improvements we need to make in this part? I know that historically it wasn't very easy for people to understand that something was OOM killed; there were some issues. Do you see any issues today, or is it clear? How easy will it be to troubleshoot your KEP once customers start using it?
E
Well, it's a little bit of a tough question to answer, because it's about predicting the future, but what can go wrong? Okay, so:
E
This feature that I'm developing in this KEP is twofold. First, for Jobs, customers can configure policies for handling pod failures depending on the end state of the pod. That's the big picture, what we extend for the Jobs themselves. But then, how do we declare the pod end state? To declare the pod end state in a good and easy-to-track way,
E
we use pod conditions. The pod conditions themselves are a sort of attempt to unify the pod end state depending on the reason for the failure. We wouldn't even need the pod conditions, technically; we could just hack the job controller to retry or not retry based on the killed reason.
E
Currently, the pod conditions are actually the result; the ResourceExhausted condition is an attempt to make it clear to the user, so that the user doesn't look at the lower-level thing but at the pod conditions. And we add the pod conditions in other scenarios as well, like preemption and other failures.
E
Let's say for the OOM kill we would add the ResourceExhausted condition, so the user checks whether the expected condition is there or not. If it's not there, then okay, you need more knowledge: is the reason on one of the containers set to OOMKilled, or something that suggests it's out of memory? Let's say there is another runtime that sets it differently; then the user could compare whether the pod condition is set. Assume the scenario where the user doesn't see the pod condition but does see a reason that suggests it was OOM killed: then there would be a mismatch. I don't know if that answers your question. So the feature is twofold: first the standardization, and second the job controller, and you would need to check both levels of the stack to see where the problem is. But essentially, I think, once the user pinpoints that there is a problem with a pod, they would just check the YAML of the pod, inspect manually what was happening, maybe check the events: the standard troubleshooting techniques, logs.
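[Editor's note: to make the two levels concrete, here is a hedged sketch of a Job-level rule matching such a pod condition, using the batch/v1 Go types; the ResourceExhausted condition name is the one discussed in this meeting, not a final API name:]

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	policy := batchv1.PodFailurePolicy{
		Rules: []batchv1.PodFailurePolicyRule{{
			// FailJob: treat this failure as non-retriable and fail the whole Job.
			Action: batchv1.PodFailurePolicyActionFailJob,
			OnPodConditions: []batchv1.PodFailurePolicyOnPodConditionsPattern{{
				// Condition name as discussed here; hypothetical, not final.
				Type:   corev1.PodConditionType("ResourceExhausted"),
				Status: corev1.ConditionTrue,
			}},
		}},
	}
	fmt.Printf("%+v\n", policy)
}
```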
A
This is why I say: for the cgroup kill we can give you that information, but for the system OOM kill you won't get it, as I said earlier; that's the caveat I keep raising. Also, when Kubernetes acts faster and detects that the pod will be OOM killed, and kills it based on the eviction policy, then we give you a signal too. But if the kernel acts first and the system OOM occurs, then we cannot; this is a long-standing issue. This is why I think a userspace OOM killer can help here.
A
In the past we did a sweep of the processes to figure out which process had been killed and which cgroup it belonged to, and then gave you that information; that is what I did in the early code. That's why we requested this from the kernel a lot of times. I don't think today's stack has that yet; something similar, a userspace OOM killer, could be implemented,
A
that kind of thing, but it's not there today yet. That said, I still like this KEP, because it depends on the existing mechanisms we already know and on the holistic pipeline we have for those things, so it is still really valuable. I just want to give an early warning: because you are relying only on this mechanism, you won't get every single OOM kill.
H
Yeah, I just had a quick question and comment about this effort. I was trying to understand: how, practically, are we planning to standardize this? Because it's just a string, right, something a container runtime sets. It's very clear today how we make CRI changes: we have the repo, we make a proto change, etc., and everybody implements it. But for something like this, which is just a string that somebody sets, how do we standardize it, practically speaking?
H
Is it something we document and then have some conformance tests for, or how do we actually do this?
H
Yeah, the first thing: maybe we document it and have some node conformance tests to test, I don't know, both containerd and CRI-O, for example, as a starting step. But I'm not sure if we have some precedent for this type of CRI standardization that's not an actual proto change, right? I'm not sure.
B
Logging is a perfect example; I thought of it, I was trying to come up with one in my brain. Log ordering, of course, is also not standardized, not documented clearly, but yeah, it's something that Kubernetes does do in a very specific way. But maybe logging is a bad example.
H
Cool, yeah. I had one other comment slightly related to this, regarding OOM kills. One of the things I've seen that's kind of weird today is: if you launch a container that has multiple processes, like PID 1 and then multiple other processes launched as subprocesses, and those subprocesses get OOM killed, sometimes, I think, the container runtime only looks at whether PID 1 within the container is being killed. So if some of the other processes are killed, the container runtime will not detect that, and...
H
cgroup v2, I think, actually has some mechanism where the kill can be for the whole cgroup, but we haven't actually enabled that yet. So maybe that's something we can look forward to doing.
G
I think for cgroup v2 the behavior is probably different, if I remember correctly: as long as one process is killed in the cgroup, you will see that OOM message, even if it's not PID 1.
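[Editor's note: the cgroup v2 mechanism mentioned above is the memory.oom.group control file. A minimal sketch of turning it on for a cgroup; the path is hypothetical, and this is not kubelet or runtime code:]

```go
package main

import (
	"os"
	"path/filepath"
)

// enableGroupOOMKill writes "1" to memory.oom.group so the kernel OOM killer
// kills all tasks in the cgroup together when any of them hits the memory
// limit, instead of picking a single process.
func enableGroupOOMKill(cgroupPath string) error {
	f := filepath.Join(cgroupPath, "memory.oom.group")
	return os.WriteFile(f, []byte("1"), 0o644)
}

func main() {
	// Example path, assuming cgroup v2 mounted at /sys/fs/cgroup.
	if err := enableGroupOOMKill("/sys/fs/cgroup/mypod"); err != nil {
		panic(err)
	}
}
```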
A
Yeah, in theory. That's one of the top issues, I mean not the first, that we fixed in early Kubernetes development; we worked together with Docker, and at that time it was Docker, not containerd yet. What we didn't solve is the system OOM problem, because there the kernel kind of randomly picks a process. Of course it's not random: they have an order and pick the best one to release memory; that's their heuristic. So we didn't fix that problem.
A
But if you reach your cgroup-local limit, it actually kills, and we basically bring the whole container down, right? Unless there's a regression in the container runtime, but that's...
E
I just looked briefly at the proto, and there are also some other fields, let's say the info map for pod status, in the sandbox status response; the field itself is a map of string to string.
E
But the documentation of the field says that the key can be an arbitrary string and the value should be in JSON format, so it puts a constraint on the value through documentation. And I think there are more cases like that, so maybe it's sensible to just restrict this by documentation.
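[Editor's note: as an illustration of that existing documentation-only constraint, the info field is a plain map of string to string, and only the field comment requires the values to be JSON. A small sketch of a value that satisfies it; the key and payload here are made up:]

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The CRI proto comment only says: arbitrary string keys, JSON values.
	payload, err := json.Marshal(map[string]any{"pid": 4242, "sandboxState": "ready"})
	if err != nil {
		panic(err)
	}
	info := map[string]string{"runtimeDetail": string(payload)} // hypothetical key
	fmt.Println(info)
}
```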
B
Yeah, and it goes into the next topic, if nobody has any more comments on OOM kills: when container runtimes adopt this standard, it may break some backward compatibility, because somebody may have taken a dependency on certain strings. And that leads into the topic of containerd 1.6 going LTS.
B
If you haven't heard about it: containerd is one of the runtimes you can use, and a release has typically been supported for a year or so, and before, we always had a clear mapping of containerd versions to actively developed Kubernetes versions. Sometimes one of the Kubernetes versions would go out of support while the containerd version was still around.
B
This time there is a suggestion to introduce LTS: 1.6 will be alive and in small-scale development until 2025, and clearly by 2025 the test matrix of containerd versions versus actively supported Kubernetes versions will be different. Today we have a list of supported containerd versions for each Kubernetes version, and this list will now be fluid; it won't be a one-to-one mapping, or even a one-to-three mapping kind of thing. So yeah, if you haven't heard about it, please take a look.
B
I mean, we need to understand what containerd will say about which Kubernetes versions it supports. I suggested in one of the discussions that we may start saying that containerd, as of a given patch version of the LTS, supports this set of Kubernetes versions, and after that patch version it supports newer Kubernetes versions. But it's not written in the PR yet, so maybe that's something we can specify going forward.
G
Sergey, one question about this, and sorry about not following it closely: I saw that there will be some major changes to containerd core services in 1.7. I just discussed it with Raven offline, and I saw something like: one is potentially the sandbox API, based on my understanding, and another is adding image pull progress via the transfer service. That's at least what I saw for those two, and I feel they are quite related to SIG Node's work, right?
G
Do we have a list of the major changes they plan to make, and is SIG Node aware of those changes? Because, yeah, maybe it's only me; sorry if that's the case. But if it's not, maybe it's useful to introduce what they are going to do in 1.7 at SIG Node, so that everyone knows they're going to have a big change. It's good for the community overall.
B
Yeah, it's a good point. We don't actually discuss it here; I know some people closely watch and are part of both communities, so we have this knowledge floating back and forth, but it hasn't officially been presented at this meeting. And to that point, 1.7 indeed introduces the sandbox APIs; it will change a lot of APIs, and 2.0 is even bigger, because it will also remove all the deprecated APIs. Take how we configure mirrors, for example registry mirrors:
B
Today we have it in the containerd config, and going forward it moves to a separate folder with files for each registry that needs to be configured. So it's quite a big change that we are also looking to adopt in 1.6, like earlier, but it will require some changes; there are some discussions about it in issues and in SIG Node, but yeah, it's major.
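[Editor's note: for context on that change, a rough sketch of the two styles; the registry and mirror names are just examples, not an exact migration guide:]

```toml
# Old style: mirrors inline in /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://mirror.example.com"]

# New style: point the registry config at a directory...
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"

# ...with one file per registry, e.g. /etc/containerd/certs.d/docker.io/hosts.toml
# server = "https://registry-1.docker.io"
# [host."https://mirror.example.com"]
#   capabilities = ["pull", "resolve"]
```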
B
Is your request to bring these issues here and have more discussion, like knowledge-exchange sessions, or...?
G
It's just knowledge sharing, I mean, because they're going to LTS, and they mentioned there will be huge changes in 1.7. I just asked because, for SIG Node, there are a lot of those changes. If we already know, that's good; yeah, I will go and read up on them.
A
Thanks for bringing this one up; at least I didn't know, and now that you bring it up I vaguely remember you mentioned it to me a while back. So can we have an action item here: at least understand what is deprecated and what needs to be brought back to SIG Node, right? If we depend on some deprecated API, we need to find the alternative. I believe we might not have those dependencies, but we need to be clear.
B
There is a list of improvements arriving in 1.7, and I pasted one example of the configs that are going away. I think Ben highlighted the issue that it's being configured with a single file right now, and it will be a bigger change for them to switch to multiple config files to configure registries.
B
So yeah, it's interesting if you're interested in this kind of deprecation; some of them are highlighted already in issues.
B
Maybe it's a good idea to have a topic for next week. Or do you even want to just highlight all the big API changes that 1.7 is working on? You could paste it all.
D
Right, yeah, I can check with the community and see if we have any list of breaking changes or deprecations, but yeah, I'll try to come up with something.
B
I think it's worth discussing on this LTS PR, Mark, to understand what the level of acceptance will be. I mean, right now it says that we will even backport some small features to LTS branches, but if we're talking about major changes, it may be worth confirming, yeah.
B
From the Kubernetes standpoint, we also need to understand which versions of containerd we want to support. If at some point we just say containerd is still actively supported in LTS, but Kubernetes doesn't support it because we miss some APIs, or we don't support it for Windows, it may be strange from the community perspective; it may not be very well received.