From YouTube: Kubernetes SIG Node 20200825
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: I'm here representing Windows and Microsoft in SIG Node. I'm just getting back from parental leave, so I've only been in these meetings on and off before now, but I should be pretty active here going forward. Just a little bit on what SIG Windows is trying to do: in 1.18 and 1.19 we've really been pushing to get containerd stable and usable as a container runtime on Windows nodes. I think we're making progress on that and promoting it to beta in 1.19.
A: There have been surprisingly few changes needed in Kubernetes to enable that, which has been great. Most of the changes have been in the Windows operating system or in containerd itself.
A: Sure, yeah. Dawn and I talked, and I'm going to be working on a roadmap document highlighting where SIG Windows wants to invest in the compute space for Kubernetes, and I'll share it out with the team once it's ready.
B: Yeah, in the last couple of meetings we talked about this, and we feel we don't really understand the Windows-specific pieces and how the Windows container side ties the two together. So some of the PRs can even be controversial, both in design and in implementation.
B: So now, with Mark's support and help, the plan is to review once he finishes his document; we can review it at the SIG Node meeting and then start to identify owners, like an approver and a reviewer from our side. Some of this work also cuts across SIG Network and SIG Node, and maybe SIG Storage too. That way we can coordinate and collaborate better and provide support from both SIGs.
A: I can give a very high-level view of the two things. The two big work items we're trying to push forward in SIG Windows are, first, getting a lot of the container features on Windows to parity with where they are on Linux, without going into too much detail.
A: containerd is really the only path forward for that. There's a subsystem in Windows called the Host Compute Service, which is responsible for most of what runc normally takes care of on Linux. Docker was built around an older version of the Windows API for interacting with it, and it's very slow and drawn out to get any changes into that.
A: There's an updated API, sometimes referred to as HCSv2, in the containerd code path. It's much easier to get changes into, and I'd say that API was much more designed for container orchestrators managing large numbers of containers, whereas the v1 API was designed around starting single containers.
A: And that's why we've been making such a big push to move to containerd: it introduces a lot more flexibility in how Windows can start and manage containers.
A: The other area SIG Windows is investing in is Hyper-V isolated containers. That's essentially where containers on Windows are started in a very lightweight VM, referred to as a utility VM or UVM, which lets Windows provide much better isolation between containers and also much more granular resource controls for container workloads.
C: It might be of interest at some stage to sync up with us on that, to see what we've done there, because a lot of the work we've been doing has probably been feeling things out in the dark, finding where the issues are and trying to get it to work. So your input on the Windows OS side would be quite handy.

A: Sure, sure.
D: I'm getting a weird rendering right now; are folks able to see the doc? Yes? Okay, cool. So, this is in the spirit of seeing things through to completion.
D: I think folks are aware that we're trying to roll out some new policies in SIG Architecture around avoiding permanent beta for REST APIs. That covers things like RuntimeClass or PodDisruptionBudget and that type of thing, and there's already project momentum around getting REST resources out of beta status: either removing them via deprecation, or motivating the community to see them through to general availability.
D: In that spirit, I wanted to take an audit, not necessarily of the REST APIs, since the kubelet doesn't directly serve those, but of the kubelet features we have in flight in this SIG, and take a step back and ask: is there something we can be doing as a community here to help get new volunteers to grow the work and see it to completion, where the original advocates might have moved on to new activities?
D: Or is this a good time for us to step back and ask, where a feature has stalled, whether it's worth continuing? The doc is a little out of date even from when I wrote it, because when I first counted there were about 24 features.
D: I think it's probably about 30 if I recount the rows, and it doesn't cover some other things that are not captured in the kube_features.go file, where we have all of our feature gates. So this definitely does not cover the state of something like CRI going from alpha status to some newer status.
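For reference, each of the gates being audited lives in kube_features.go and can be toggled per component; a minimal sketch of what opting in or out looks like through the kubelet component config (real field and gate names, illustrative values):

```yaml
# Sketch: toggling feature gates via the kubelet's component config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  AppArmor: true               # beta (on by default) since 1.4
  DynamicKubeletConfig: false  # explicitly opting out of a beta gate
```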
D: Nor does an earlier iteration of it cover, say, the work around cgroups v2, where that's basically just detecting the state of the host rather than requiring a user to enable a feature flag. Either way, this felt like a good place to start, so I sent this table out to the SIG Node Google group.
D: Anybody who is a member of that group should have edit rights on the doc. What I thought I would do is quickly go through some of the features. Most of the comments here, I think, came from my own initial assumptions about what we could or couldn't do around each feature, and obviously feedback and volunteers are really desired. Some of them are kind of embarrassing when you look at them: we've had a beta feature for AppArmor since 1.4. I don't know if Sasha's on here, but what I'm looking to see for AppArmor in particular is what we could do in 1.20 to move it forward, and, in that vein, the same for each of these feature rows.
D: If someone wanted to volunteer to help move something forward: some of these features predated KEPs, so there isn't a clean document to go update and edit to actually move them along. We might want to restate what the state of the feature is in a proper KEP and then what gaps remain to actually move to GA. I'm not actually aware of a design doc for AppArmor, so hopefully, Sasha, if you're there, you can help enrich that; that would be great. Next, Dynamic Kubelet Config.
D: I know this was a long-standing effort in SIG Node for a while. I'm not aware of any actual adopters in the wild, and I don't think we have the traction in the SIG to see it forward. Maybe, in retrospect, the lack of adoption is a sign that it didn't meet providers' and users' needs appropriately.
D: So my rough assumption here, and we touched on this a little in SIG Architecture a few weeks back when debating some topics in SIG Auth, was to remove the feature. Removing it, because it is beta, would require deprecation, but if there isn't disagreement, I'd like to deprecate it in 1.20. Again, feedback welcome.
D: Moving down the list, we have some alpha features that are very much just sitting there. One that came from a colleague of mine at Red Hat is around host user namespace defaulting. Personally I'd love to eliminate this feature and just move forward on a better, enriched user namespace design, given that the current state of pods differs from 1.5. I'm not aware of anybody using it, but in general it seemed like a good thing to do from a cleanup standpoint.
D: Device plugins. I don't know if Renaud is here, but this is one of those areas where I'm very much aware of almost everyone using these things in practice, yet we haven't moved forward on the plan in the last ten releases to take the next step to GA. Renaud, if you're there, do you want to talk about the current state of the container device interface work and how that...
E: ...works? Yeah, Renaud is not on the call. So far the focus has been on the lower-level plumbing. CDI, I think of as a JSON spec that describes modifications to your runtime spec, and the intention is to make it work across container runtimes as long as you're using OCI underneath.
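As a rough illustration of that idea (the CDI schema was still evolving at the time, so treat the field names as approximate), a CDI file is vendor-supplied JSON naming devices and the edits the runtime should apply to the OCI spec; all vendor names and paths below are hypothetical:

```json
{
  "cdiVersion": "0.3.0",
  "kind": "vendor.example.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [ { "path": "/dev/examplegpu0" } ],
        "env": [ "EXAMPLE_GPU=0" ],
        "mounts": [
          { "hostPath": "/opt/examplegpu/lib", "containerPath": "/usr/lib/examplegpu" }
        ]
      }
    }
  ]
}
```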
F: Okay, and I would add to that. Besides the low-level things we're discussing in the CDI working group, there are several things that could potentially change in the device plugin interface, because when CDI is implemented a few parameters will change in how things are passed down. Beyond that, there have been several issues in the community about extending the device plugin APIs, the Allocate call being one of them, among a few others, raised by the vendors who actually use device plugins.
B: Yes, that's my understanding as well. With the device plugin API there are also long-standing issues we haven't addressed, and people want the API expanded further. At the same time we also feel that certain parts of the API may be polluted and cause problems for support. So we need to settle those problems, since we want to fit this into the flow. I also want to add something about AppArmor.
B: Actually, we could narrow the scope down from the original GA plan. We already have different pieces, for example at the DaemonSet level, and there are more things to improve, but since AppArmor has been adopted in production for so long and people have started using it, we could cut it down from the original GA scope and promote that to GA. As for Dynamic Kubelet Config, we definitely had plans for that one initially.
B: We wanted to have a controller implementation, but we never implemented the controller and never settled on how to implement a cluster-level controller for Dynamic Kubelet Config. Also, I believe all the vendors involved have their own implementations for managing those configurations. So I'm personally totally okay with removing that feature, but do we need to, given that we lack adoption data for it?
B: From direct messaging, we haven't heard of anyone using it so far. So we basically need to figure out how to make a broader public announcement, give it some time, and then remove it to simplify our code base; it's a pretty complicated feature. Related to that config work, I do want to note that we moved off the kubelet flags and onto the kubelet config, the component config, and that's the first component config actually being graduated.
D: So that's the progress. The component config change is a big change, and it's definitely not something we want to get rid of; it's a good thing. Hopefully my comment wasn't misconstrued in that regard; I love the kubelet config file. So I guess we could have a dialogue on each one of these if we wanted. Some of them, I feel, are less controversial; some of them sit with SIG Node a little bit, with maybe a mirror in another SIG.
D: I don't know that I want to go through every one right now, but the general thing I was looking to do was maybe spur some focused contribution from new members who might want to figure out a way to engage, or just have the dialogue here about which things we started that we may want to give up.
D: So maybe I'll focus on a few. The QoS reserved one: this was a feature that tried to induce memory pressure in lower QoS tiers so that pods in higher QoS tiers couldn't have memory taken from them. I've been talking about it a bit, including a little with Giuseppe.
D: On the path this was taking, maybe we could do something better with cgroups v2. So in general I wanted to look at removing it. I think Seth did that work originally.
D: I don't know, Seth, if you're too upset about that. CPU manager: maybe pause on this one for a little bit. For some of these features, like CPU manager and topology manager, I think on the path to general availability we held ourselves to maybe a higher bar than needed, and in some cases, like Dawn, you said around AppArmor...
D: It might be good for us to... my daughter's crying... it might be good to maybe reset our expectations. For example, if topology manager doesn't align huge pages yet, that doesn't necessarily prevent us from moving the feature to GA. It would just mean that when we increment the feature with a new capability, that capability comes in under a new feature gate, right? So for some of these, that's what I was thinking.
D: We might want to take the posture of saying, hey, there's really no reason why topology manager in its present capability couldn't move to GA, and as we increment it, we do so under a new feature gate, because having something perpetually in beta also gives a bad signal, versus incremental capabilities going through alpha and beta.
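For context, both managers are opt-in through the kubelet today; a minimal sketch of enabling them via the component config (real field names, illustrative values):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  TopologyManager: true                    # beta gate at the time of this meeting
cpuManagerPolicy: static                   # pin exclusive CPUs for Guaranteed pods
topologyManagerPolicy: single-numa-node    # align CPU/device assignments to one NUMA node
```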
D: We have this flag to set a dynamic CFS quota period, and I felt like the history of it predates the kernel bug around CFS quota being widely understood. Given the work going forward on cgroups v2 enablement and such, I wanted to see if we could think through ways of reducing these.
D: I don't know if anyone on the call is aware, but my perspective is that the kernel bug around CFS quota enforcement may have skewed our thinking toward this feature. So I'm curious: the thing is still in alpha, which means we could drop it. Does anyone have a good reason to think that, on a going-forward basis, we need to keep allowing the period to be dynamically updated?
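For concreteness, the flag in question is the kubelet's CFS quota period behind the CustomCPUCFSQuotaPeriod alpha gate; a sketch of a user opting in:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CustomCPUCFSQuotaPeriod: true  # alpha gate
cpuCFSQuotaPeriod: 10ms          # default is 100ms; shorter periods trade throttling for overhead
```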
B: I think that even though it's alpha, some people have already started using this feature, and even in our production, users have asked whether we can enable it; I'm talking about Google GKE. We didn't end up enabling it, I just want to say, because it is so hard to use. But I do know that, even as alpha, this feature is being used in some situations.
B: We can deprecate it, but at the same time not everyone has upgraded to the new kernel, and I've also heard the new kernel has some other problems around that.
D: Yeah, I'm very accommodating there, right? If people are using it, we don't want to break them. I don't know if it's just my Zoom or internet that's lagging, but I don't want to give the wrong perception. If people are using this, or have data showing it's still needed, I wasn't sure; right now it just lacks an owner really moving it forward.
H: Yeah, sorry, one thing to add on that specifically. I think you're right that it's mostly attributed to that kernel bug, where it was necessary to override the period. But I've still heard of some people using it: especially when you set limits, people hit throttling issues, and I've seen people lower the period from the default, I think 100ms, to something smaller to work around some of those issues. I can look closer, but I believe it's still being used, at least in some rare cases.
D: Okay, even after they had an updated kernel? To me, I haven't seen data showing that. Yeah, okay. And even for this feature, there are probably other ways we could have approached it: rather than a flag, the kubelet could have picked up a default period setting on the node, and the node could have had a system-wide setting. Either way, that's good feedback, so maybe we can get some of it collected on the doc. That would be good, and I might reach out...
D: I think it was Zalando who originally wanted this. On to some of the other ones: sysctls. I'm not aware of any actual issues with this in production. I think we had a general desire to better delineate safe versus unsafe sysctls, but I don't really think that's necessary to complete before moving this to GA, given how widely used it is and how the boundary between safe and unsafe seems to blur.
D: This seems like a feature that's heavily used, in my experience, and I haven't heard much that would hold it back; maybe we could use some help from whoever wants to move it forward. As for the pod PID limiting, or any of the PID limiting ones I have here, this was another example of something that went beta and then sat there for a year.
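For reference, both knobs under discussion are small. Safe sysctls are set through the pod securityContext (unsafe ones additionally need a kubelet allowlist), while pod PID limits are the kubelet-side podPidsLimit setting in the KubeletConfiguration. A minimal pod sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    - name: net.ipv4.tcp_syncookies  # in the safe set, no allowlist needed
      value: "1"
  containers:
  - name: app
    image: nginx
```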
D: One thing that intrigued me was ProcMountType. This was a feature that came in in 1.12, but to my knowledge it hasn't really had any use cases identified that rated high enough to push it to beta status. When I see a feature sitting in alpha for two years, I kind of wonder: did it lack a clear use case?
E: I think for that one, when we do the user namespace work, we may immediately know whether this is useful for it or not, and maybe we can tie it to another user namespace flag if needed.
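For reference, the ProcMountType feature discussed above surfaces as a single pod field; a sketch of opting out of the default masked /proc (requires the ProcMountType feature gate; the image name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: unmasked-proc-example
spec:
  containers:
  - name: builder
    image: example.com/builder:latest  # hypothetical image
    securityContext:
      procMount: Unmasked  # default is "Default", which masks paths under /proc
```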
D: Okay, and then trying to think of other highlights here. The huge pages work got moved to beta in 1.18 for emptyDir usage. I know that at least at Red Hat we're using this, and it's used in a lot of performance-sensitive environments, so I'm actually not aware of a reason why we wouldn't move that to GA.
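For reference, that emptyDir usage looks roughly like this: the pod requests hugepages as a resource and backs an emptyDir volume with them (illustrative sizes; the image name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: example.com/app:latest  # hypothetical image
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi  # memory (or cpu) must be requested alongside hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```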
D: But if others, say Intel or others in the community who have worked in this space, have been looking at it and have any blockers, it would be great to raise them. RuntimeClass I thought was on the agenda afterwards, so we can talk about that separately, but in general...
So
direct
for
this
one,
the
set
hostname
as
fkdn
like
I
can
take
ownership,
like
I
say,
pushing
that
feature
through
okay.
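For context, setHostnameAsFQDN is a single pod-spec boolean, alpha as of 1.19; roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-example
spec:
  hostname: app-1
  subdomain: my-subdomain   # paired with a matching headless Service for DNS
  setHostnameAsFQDN: true   # hostname(1) returns the pod's FQDN instead of the short name
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```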
I: Is usage a requirement for promoting this? We're hoping to move it to GA directly in 1.20, but I'm uncertain how that works and what metrics I would have to provide.
D: Yeah, so I'm not aware of anything going alpha to GA in one step; you have to go through the alpha, beta, GA progression, and with beta you might be on by default. And when I talk about usage, I don't really mean formal metrics; it's more...
D: ...that we, as a community of engineers, may have tried it out and found rough edges. Any individual usage, or usage you might see on behalf of users you're representing, is all we're looking for: pain points and that type of thing. For that one, though, I think that would be great. So maybe I'll conclude here: if anyone wants to help shepherd forward a feature that might not yet be in that state...
D: ...please make yourself known; that would be good. Then maybe in next week's meeting we can coalesce on what we want to target in 1.20. I think it's good if we can get to 1.21 and see our feature debt load not just increasing but getting to completion, and maybe we can find a balance between things we already started that we want to move forward versus new things we want to add in 1.20 or 1.21.
D: So if folks want to identify new features here that they want to talk through, feel free to add them. I just thought it was useful and healthy to take a summary of where we stand, and with that I will stop sharing.
B: I just want to add one thing about going through alpha and beta to GA. A lot of people, unless they're in really serious production pain, don't try alpha features; they will try beta features. That's where the use-case feedback comes from, and we try to avoid the situation where, right after we promote to beta, the API requires a change, or the design requires a change, or our implementation turns out not to support the general use cases and instead supports one special case.
B: Then we would have to redo the whole thing. That's why we normally go through the alpha-beta-GA progression that was just shared here. But case by case, a certain feature may not initially carry huge contention or a lot of arguments, and if in the end the implementation is really straightforward and there isn't much churn, maybe it can go directly to GA.
D: Okay, cool. A lot of times it's just going back and seeing whether we had sufficient testing. Some of these things predate the KEP process, before SIG Release pushed you to show tests before a feature could go to beta. It's very likely we have a testing gap on some of them, so just getting that audited, whether we have sufficient tests and that type of thing, would be a really good step.
J: Yeah, I think the most controversial thing about this feature specifically was the decision we made about longer names, how we proceed with the longer FQDN names. If that decision was okay and it's working for everybody, that also gives one more signal. I think that was the most controversial part of it. Okay, cool.
D: Yeah, I was just trying to put it out there as: hey, here's an example of a really small PR where we can go and make a change for the better. It's possible that for one of these list items, when people review it, they'll see it was already tested, there's not a lot of controversy, and we can just move forward. So I was more trying to role-model with that one.
B: Cool. Sergey, do you want to talk about the RuntimeClass GA plan?
J: Yeah, I want to follow up on RuntimeClass and the comment from Derek that we need more feedback. We use it at Google, so I think it's proven for us that it's working, and we would like to get it to GA. I wonder whether we have other users, and whether anybody in the community can speak up and tell us if there are problems. If there is a process I need to follow, a KEP that people can give feedback on, or any other process, I would be happy to push it through and make it to GA.
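For readers following along, RuntimeClass at this point is a small API (node.k8s.io/v1beta1) that maps a name to a handler configured in the CRI runtime, and pods opt in by name; a sketch using a gVisor-style handler:

```yaml
apiVersion: node.k8s.io/v1beta1  # the API version current as of this meeting
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                   # must match a handler configured in containerd or CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example
spec:
  runtimeClassName: gvisor       # run this pod's containers under the runsc handler
  containers:
  - name: app
    image: nginx
```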
A: Hi, this is Mark again. From the Windows side, we're getting to the point where we're adding more testing around the Hyper-V isolated containers.
A: There were some KEPs authored, probably two or three releases ago, with plans to add, or potentially add, some more fields to RuntimeClass for Windows support, and I think we're planning on using RuntimeClass pretty heavily. But as previously discussed, we just need to add new API fields behind feature gates; that shouldn't stop anything from going to GA.
B: Yeah, actually, Windows support was one of the use cases when we initially discussed RuntimeClass; there are other things too. I was just wondering: do we want to make that a blocker for GA, or basically narrow the scope, move forward, and then extend RuntimeClass with the new feature later behind a new feature gate? It sounds like that's maybe an acceptable approach for a lot of people; anyway, it's open for discussion here.
D: Yeah, for RuntimeClass, what I was trying to get at in my comment there, and I'm still not clear on it, I have to go and audit what we had, is that RuntimeClass is one of those APIs that is now on the timer, right, to get out of perma-beta status. So it's a good one to focus on.
D: The issue I'd want to be cautious on is what it means to test it in conformance, because it's very coupled to the configuration of the container runtime itself. So maybe we could just do a review of what your approach to conformance testing around it would be. That would be good, because it's basically the thing that lets you be creative, and I don't want to be too draconian about how we treat conformance for it.
E: I'll put a link to that repo in the SIG Node document. The second use case where we are thinking of using it is for enabling user namespaces. In CRI-O we are in the process of merging user namespace support, and what we intend to do is use it just for doing our builds without privileges. We want to gate this behind runtime classes instead of just enabling it by default, so we can use our SCC, which is like pod security policy, to control which pods can use this capability in CRI-O.
D: Yeah, so at least for those two use cases, Sergey, it's a useful tool to have in the community to meet these needs. From our side at Red Hat, I don't feel any major objection, other than wanting to revisit how the conformance test would work, because you basically need a no-op test:
D: can I create runtime classes, and does the pod have the runtime class? But you can't have anything that tests the behavior of the pod afterwards in a conformance test, in my opinion. And if it goes to GA, it should probably loosely be covered under conformance. So that would be my only major concern.
B: Thank you. I also want to raise another one; maybe it's not really a blocker for this GA, but it definitely has a quite tight dependency: pod overhead. Originally it was the same project, and then we cautiously decoupled the two features. But do we want to promote pod overhead to GA together with this, or keep them decoupled? Are we hoping to make pod overhead GA first and then RuntimeClass, which initially had a lot of dependency on it?
D: Yes. For myself, I would be fine with RuntimeClass going to GA before pod overhead, especially if we found ourselves running up to the wire on the deprecation time windows SIG Architecture is pushing through, just because there are a lot of use cases RuntimeClass meets independent of pod overhead, right? At least for us, I guess in the Kata case we could benefit from pod overhead, but RuntimeClass is also about being able to expose particular runtime behaviors that don't come with overhead.
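For reference, the coupling exists because pod overhead hangs off the same RuntimeClass object; a sketch with illustrative values:

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
overhead:
  podFixed:           # charged to the pod at scheduling time and in cgroup sizing
    memory: "120Mi"
    cpu: "250m"
```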
B: Okay, that's good, because I saw that Kata maybe has more dependency on pod overhead. For us, gVisor doesn't have that big a dependency. Pod overhead has been part of the project since day one, because once we had the infra container we'd been talking about this, but we never really found the resources to fund that project; it exists mostly because of the Kata-related overhead.
B: If Red Hat is the heavy user of Kata and this isn't a blocker, I think maybe we can remove one of those dependencies there. Yeah, definitely.
J: Yeah, so RuntimeClass, yes. And on the work Renaud was doing: you promised a link, can you put it into the meeting notes?
B: Thanks, Sergey, for volunteering to take this on. The next item, I think, is your topic too; you just want to remind everyone, yeah?
J: I just wanted to remind folks, in case you weren't on kubernetes-dev and didn't receive the notification: we will have a push for 1.20, and there is a specific ordering that the release team is recommending. I just want to make sure that, Derek and Dawn, I think you are the only two here who have permission to assign milestones for SIG Node, at least. You will need to allocate some time next week and track very closely which PRs go in and which you postpone. Originally I thought we'd need to clean up the current milestones, but it seems the release team will do that early next week when they cut 1.19: they will remove all the milestones from all the PRs, so we can start from scratch.
D: This policy came out of SIG Architecture, so it's not rooted in SIG Node, just for awareness. What we were basically observing was that features would go to beta status and then many of the engineers would just move on. So there's a strong desire to see things through, especially for things that are beta but not on by default.
D
Like
there's
a
lot
of
those
and
then
there's
some
apis
that
are
just
kind
of
in
beta
that
are
kind
of
stuck
like
pod
security
policy
is
one
I
would
I'll
pick
on
there,
and
so
anyway,
the
sig
architecture,
api
kep.
I
can
link
to
in
the
notes,
if
you
hadn't
seen
it,
but
there
was
a
policy
change
in
119.
That
basically
said
if
an
api
was
in
beta
for
more
than
three
releases,
then
these
it
either
immediately
gets
deprecated
and
then
has
deprecated
for
something
like
six
releases.
D: ...afterwards, so users don't just break; or you move it to GA; or you move it to a new version of beta, like beta v2, beta v3. The idea was just to make sure folks stay engaged on their work and see it through to completion, essentially.

J: Okay, cool, awesome. Thanks, Derek, I appreciate it.
D: Is there an update on how the testing meetings are going? I know I didn't get to go this week, but I thought there was a meeting.
J: Yeah, we have been meeting, and right now one question we're trying to resolve is how to identify the tests in SIG Node that are critical enough, and provide a single-pane-of-glass view of whether SIG Node is healthy or not. Right now there are many dashboards with many tests, and some of them light up like a Christmas tree, randomly green and red. So we want to make sure there is a tab that contains all the tests we most care about. Eventually we want all the tests to be in that category, because we really do care about all of them, but from the current status it will be a gradual process. So we're identifying that, and we're looking into specific areas like containerd. I think that's the biggest one that is very questionable: what coverage do we have for it, and how do we fix all the tests? So that's another area. It's going well. For people on this call: if you're interested in helping with this CI and end-to-end testing, please join the meeting on Monday. It's a good beginning of the week for everybody, very refreshing.
B: ...and on how we want to re-categorize the SIG Node tests: some of that is already done and some is not finished. We can also share that with the new node test group, along with some background and context. I just put it here, and we'll put that link in our meeting notes as well.
D: So, just on the follow-ups from last time: I was looking through the agendas, and I did tag the sidecar container KEP update, which basically captured the learnings on the gaps from the initial KEP. I did merge that, and I want to thank the authors for doing it. We are still not yet clear on a path forward for this, even with that KEP merged.
D: It could be that the users I'd been talking with were skewing my thinking when reading the KEP, but one thing I'm curious about, and I don't know if we have the right contacts in this call: a lot of the original motivation, and still the motivation, for the KEP was around service mesh use cases. Given the prominence of that, I wanted to really understand, if the kubelet were to take on this capability...
D: ...would a service mesh actually recommend this as the default usage pattern? I'm openly skeptical of that, and I'm curious whether others are engaged with those communities and could inform my skepticism. The KEP alluded to two paths: one is the CNI approach and the other is the sidecar approach.
D: I know at Red Hat we use the CNI approach. I'm not on the service mesh team here at Red Hat, so I can't speak too deeply and I'll follow up with them. But a lot of the users I talk to would seem to object to dynamic sidecar injection in general; the same users who say "I don't want to let you exec into a pod" would be the same users here.
D: We'd still be deferring the problem to the user, and you'd have two ways of doing the same thing, which is almost like having two ways of deploying devices or two ways of doing monitoring; and here in SIG Node we'd feel both sides of the pain. So maybe, as a closing appeal, someone could help me, so I'm not alone in figuring this out: if we took on this capability, would it actually be the only way things are done?
D: I'm just very skeptical of that, and given its prominence in the KEP, I feel like I want a more definitive statement of "yes, this is what we would always do", and I suspect that might not be the case. Does anyone know?
J: Yeah, Dawn and I had a meeting with the Istio team at Google, and before I comment on that, I'm curious: is Red Hat using the CNI approach in the sense that CNI completely replaces all the calls, so there is no injection at all? Because at Google there is a CNI approach where the CNI plugin helps change the iptables, so we don't need an init container, but we still need to inject some executables, some proxies, in the same way.
D: I need to follow up myself, honestly, but we do deploy the CNI approach by default, and at least at Red Hat we always run Multus, so we have more than one CNI, and the privilege benefits that were enumerated were there. That's an action for me to follow up on. I was just curious about anybody else; the way the KEP described it, it was an either-or choice:
D: if I had sidecars I would never use CNI, and vice versa. So that was my feeling before just merging it; now I was asking whether anybody had a general pulse on how things are, but that's work I'll follow up on.
D: The other issue I felt, and this is a recurring theme, is that the use cases around sidecar containers are a lot of the time really about bundling the infrastructure that supports my app into the pod itself, rather than depending on the infrastructure operator to provide a sandbox to operate in and focusing my pod on the business logic of my app. That also felt like a recurring tension, because a lot of the same users I've been spending time talking to are the types who would balk at that pattern and instead want locked-down nodes with log forwarding, or all those other use cases that were called out for sidecars. That could have been skewing my thinking, and my thinking is not necessarily always the right thinking, but it felt like service mesh was the only major use case that was really ringing through. So anyway.
J: Maybe you're talking with bigger corporations, where there is a dedicated IT support team that can do all these infrastructure changes for the development team. If you talk with smaller companies, you'll probably see more of a pattern where developers want to have more control.
D: Yeah, you can see it both ways, and that's why I recognized that maybe who I was talking to is the issue. But it did feel like a lot of us all talk to big users, and those are the same users you need to keep happy with the project, and you don't want to provide workarounds for the very things they're trying to lock down.
D: I was largely looking to see whether there were use cases outside of service mesh, not oriented around things like log forwarding, where there wasn't necessarily another path forward that many users could point to. Anyway, that said, we did merge the KEP and we continue to work through it, but it is a complicated topic.
B: Derek, just as Sergey said, we talked to the Istio folks, and I believe it doesn't just depend on us. Even if we end up with the same CNI-style approach, they still require today's sidecar container, and they actually hope we can expand the sidecar container approach further; there are some other use cases it doesn't handle. But I agree with you.
D: It's like, if you talk to Tim Hockin, your colleague, right, he'd say, "I can't give a controller arbitrary exec privileges in my pod; I just won't do it." And I know he's probably talking to the same security-conscious users I talk to. But then why those same users would allow dynamic injection of containers into their pod is crazy to me; it's the same problem.
B: I also want to add one thing, in response to what Sergey just mentioned: we need to figure out who our customer is, who the user is. I believe what Derek described represents at least the top use cases here, and the second use case is obviously convenience for the developer. I think there are other ways to solve those problems; and sometimes, arbitrarily, developers want to have full control over their entire working environment.
B: I have questions about whether Kubernetes should fully support that functionality, whether that's the right approach for Kubernetes. I think this kind of question was raised a long time ago, when we first started Kubernetes, because Kubernetes is trying to help the developer.
B: In a test scenario, developers can totally control their environment to test on a node. But if it's a production use case where they want to deploy the application, then they definitely rely on the cluster admin to create a workable cluster and a group of worker nodes for them, so they don't need full control over those nodes or root access to them. That's the production use case. I think we should separate those; otherwise we cannot move forward.
B: This is also roughly the response I would give to Tim Hockin and the like; these are separate cases, right? For me as a developer, if I'm building a web application, I can have a container runtime on a single node, maybe, to do a quick test. But when I want to go to staging and then roll out carefully, that's a different case. They're different cases, so we need to separate those things; otherwise we cannot make progress here.
D: Yeah, either way, I'll follow up a little closer on what we're doing on the Red Hat side. It just felt like dynamically injecting containers in one KEP, while looking at another KEP saying arbitrary execs are bad, made me hypocritical in my own mind. So anyway, thanks for the follow-up from you guys on the Google side for what you saw there, and I'll take a look at what we're doing.
B: Maybe it would be helpful for us to talk to each other directly: the service mesh folks on both sides, plus you and me and a couple of people interested in this topic. We could have a dedicated meeting just on this topic and sort it out; that would help us make a quicker decision and move forward.