From YouTube: 20230209 SIG Architecture Community Meeting

A
Welcome to the SIG Architecture community meeting of 9 February 2023. I'm your host. As always, the meeting is recorded so everybody can view it later, and keep in mind that we adhere to the CNCF code of conduct while in this meeting, which means: be excellent to each other. All right, let me see if I can pop out the agenda here. Shoot, I can't share my screen if multiple people are sharing. All right, first item up: we have Arvind in the meeting.

A
You can go first.

C
Hey folks, my name is Arvind. I work at Red Hat, and I'm here to talk about the node log query feature. To give you some history behind this:

C
The KEP was opened a couple of years ago and was accepted, and I've been working through the implementation for the last couple of years. This is a feature that was first present in OpenShift, where it was added by Clayton. When I started working with SIG Windows, the biggest problem SIG Windows has, or what we have in the Windows world, is that a Windows node will come up, it'll come up as Ready, and then a customer will try to bring up a Windows pod, it'll fail, and they have no idea why it failed.
C
But
looking
at
the
Pod,
you
know
status
and
typically,
what
happens
at
that
point?
Is
you
ask
the
user
to
give
us
the
cubelet
logs,
the
runtime
logs
and
there's
always
a
lot
of
back
and
forth
about
how
the
user
can
go
about
getting
it?
And
so
this?
When
I,
when
I
mentioned
to
Sig
windows
that
hey
we?
This
is
the
way
we
do
this.
In
openshift
there
was
a
a
big
push
to
Upstream
it.
C
So
I
decided
okay
I'm
going
to
Upstream
this
feature,
so
we
have
had
a
lot
of
back
and
forth
in
the
last
couple
of
years
and
it
finally
reached.

C
Okay, so we had a lot of back and forth, and I think the question that Jordan finally asked in the last round of reviews on the implementation PR is whether we should be expanding the kubelet's ability to expose the whole host's logs.

C
The expectation, I feel, is in some sense that what we're trying to figure out is: is the cluster admin equivalent to a node admin, in some sense? And then, what is the way to move forward with this project? Do we say, okay, this is something we don't want to add as a feature that's disabled by default and needs to be enabled, or is that okay to do? I think that's what we want to discuss here.
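
For context, here is a minimal sketch of what the feature under discussion looks like from the client side, assuming the KEP's design of adding query parameters to the kubelet's existing /logs endpoint; the node names and query values are illustrative, and the exact parameters were still under review at the time of this meeting:

```shell
# Fetch kubelet service logs from a node through the API server's
# node proxy, using the proposed query parameter (illustrative).
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"

# On a Windows node, the same endpoint would consult the Windows
# Event Log rather than journald (service name illustrative).
kubectl get --raw "/api/v1/nodes/win-node-1.example/proxy/logs/?query=containerd"
```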

D
Yeah, I first want to apologize for bringing this up at this point. Mostly I was trying to make sense of conversations going on in two different parts of the project. One part is where security folks are saying, oh, this defaults to exposing host logs; we shouldn't do that, we should stop exposing people's host stuff and recommend they not enable this.

E
Thanks for bringing this today. I saw it on the agenda and thought it would come up, and I know that I had a lot to say in the early stages of the review of this. I feel a little bit like that Jeff Goldblum line: we spent a lot of time thinking about whether we could, not whether we should. I think the question of how much host access we should expose is a really apt question.

E
When you put the question as "is a cluster admin necessarily a node admin?", my alarms immediately went off. I don't think that is a safe assumption, so I think that's a really good question. I just wanted to say I agree with the premise of the question.

F
To raise my hand, this is Derek. We've talked about this in SIG Node as well, and I guess how I looked at this is that there's no actual elevation of privilege here; it is just convenience. If we had a project posture that said cluster admin is not a node admin, then we have a whole host of things that contradict that today.
F
You
could
deploy
a
privilege,
pod
and
and
read
anything
on
that
node
today,
I
mean
this
is
how
cnis
often
deploy
this
is
how
existing
log
forwarders
often
deploy,
and
the
part
that
struck
me,
as
maybe
unique
in
the
windows
situation,
was
that
if,
if
you
couldn't
deploy
your
your
log
forwarder
having
something
to
fall
back
to
wasn't
the
worst,
you
know
possible
idea,
so
I'm
hard-pressed
to
ask
or
I'm
hard-pressed
to
see
how
a
cluster
admin
today
is
not
a
node
admin
and
I
I
kind
of
look
at
this
more
as
how
much
additional
surface
space
do
you
want
to
have
the
cable
you
know
give
ease
of
use
around
or
not,
but
I
I
think
it'd
be
inaccurate
to
to
say
it's
a
privilege
distinction.
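
To make that existing escape hatch concrete, here is a minimal sketch of the kind of privileged pod being described, which can already read node logs in most clusters today; the node name, image, and log paths are all illustrative:

```shell
# A privileged pod with a hostPath mount can already read node logs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: node-log-reader
spec:
  nodeName: node-1.example          # pin to the node being debugged
  containers:
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: varlog
      mountPath: /host/var/log
      readOnly: true
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
EOF

# Browse whatever logs the node keeps under /var/log.
kubectl exec node-log-reader -- ls /host/var/log
```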

E
Putting on my day-job hat: there are implementations of clusters where users don't have access to nodes, and even cluster owners don't get to SSH into nodes. I don't know if there are implementation details hidden on those nodes that would be dangerous for a user to see. I don't suspect so, but I don't think it's safe to assume that the one thing is necessarily the other thing.

F
Which, to me, mattered when evaluating this feature. And to be clear, even wearing my OpenShift hat: there are plenty of users who disable the ability to do exec, attach, or logs generally, and I'm aware of security-conscious customers of ours that do that.

F
Typically, that's in a hardened production environment. Whereas this just really felt like: how do we get users past the getting-started inertia challenges of, in this case, as is my understanding (Arvind, tell me if I'm wrong), just being successful with Windows nodes? In that spirit, I thought that if the feature was disableable, the concerns you raise would be fine. I think the nuance, which I appreciate you raising, Jordan, was that if you can disable it, it's not going to be a part of conformance.

F
That's not a fair way to evaluate this feature, in my view, because that's pretty much true today unless you do a lot of additional security hardening, and even then I don't know how you fully stop a cluster admin from deploying privileged pods on nodes.

D
Yeah, I like the argument about convenience. Just because you could run a privileged pod and then do host mounts and, I guess, anything like that, that doesn't seem like a good reason to build sort of arbitrary things into the kubelet. That would equally be a reasonable argument for making the kubelet install systemd units, or install Windows services, or do any other node-adminy type of thing.

F
My only comment in this discussion is not about whether the feature goes forward or not. I think it's perfectly okay to say no, it doesn't go forward; but I don't want us to establish a policy that says it shouldn't go forward because it's a privilege escalation. I think that would be inaccurate and would not set a helpful precedent.
F
If
it's
a,
we
just,
don't
think
that
this
should
be
included
or
because
we
don't
think
it'd
be
on
an
entrepreneur
into
clusters-
it's
not
in
conformance,
but
there
are
other
avenues
that
people
could
choose
to
deploy
an
agent
in
their
node
prep
to
to
surface
these
logs
too
I.
Just
from
my
perspective,
I
want
to
make
sure
that
we
don't
give
the
wrong
rationale
for
for
what
we
do.
Yeah.

D
I agree. I don't think it's an escalation issue; I think it's a scoping issue. You could deploy a pod to do this already in most clusters, though not some locked-down ones. But the question is: this level of access...

D
...does it make sense to build into the kubelet's functionality? There was a comment I had later about how the kubelet is already exposing host logs, so exposing host logs coming from more places is not worse.

D
I'm just trying, like I said, to figure out how we can be coherent as a project. If we're saying we don't recommend you turn on these things and expose these things, it feels weird to expand them at the same time. I don't know.

G
I think I was called on, yep. So I guess I was going to ask: I don't have a good answer to having a Windows node that just can't run pods at all. I don't really understand how a system shows up as Ready but can't do the one thing it's designed to do. But I'll just ignore that, because I don't understand anything about Windows nodes. To me, "won't be enabled by default, won't be part of conformance, plus...

G
...you could do this with other means" sort of sounds like an add-on, right? Like something that you kubectl apply into your environment, should you want to. So I could imagine this entire feature being built with a kubectl plug-in, a DaemonSet, and an aggregated API server, if you want exactly the semantics that are basically defined in the KEP.
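
As a rough illustration of that add-on shape, a DaemonSet could put a small log-reading agent on every node. The agent image here is hypothetical, and a complete add-on would also need a service or aggregated API server in front of it:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
      - name: agent
        image: example.com/node-log-agent:latest   # hypothetical image
        volumeMounts:
        - name: varlog
          mountPath: /host/var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
EOF
```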

E
I agree with the scoping issue: is this Kubernetes' problem, or is it the node's problem to expose an API here? The one thing that makes me lean a little bit towards "yeah, we could probably tolerate this if it's disabled" is convenience. We already provide a conduit from a user's command line, kubectl, all the way up through whatever mechanism they need to get into their cluster, and all the way back down to the nodes, with some amount of authn and authz built in. That's pretty convenient.
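
That conduit is the node "proxy" subresource; the kubelet already serves its host's /var/log tree under /logs, so a suitably authorized admin can do this today (node name illustrative):

```shell
# kubectl -> API server -> kubelet, with API server authn/authz in front.
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/"
```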

D
That seems like it would be more coherent with what we do for other things. Giving you a way to get pod logs through Kubernetes is sensible; giving you a way to get the logs for the API server through Kubernetes seems sensible.

H
So yeah, I can add a couple of comments: first, a little bit of history, and, I think, maybe an answer to the question of why Windows nodes that are Ready can't run Windows pods. One of the big motivations for this feature came pretty early, when we were shifting from Docker to containerd on Windows.

H
There were a lot of issues in the containerd implementation, and we would see a node be able to run pods fine and then all of a sudden it would stop. Almost all of those issues that we're aware of have been fixed, and things are a lot more mature now. I think this KEP was originally proposed around 1.20.

H
So it's very hard to diagnose an issue with starting a container without often having to look into these Windows logs, and unfortunately it's really hard to figure out up front which Windows logs you need; they go all over the place. There's a lot of tracking through the various different Windows components, which all log to different places. And especially: we have the ability to run Windows pods as, say, an Active Directory identity.

H
If you do that, you need a completely different means to debug, and you have to look at a completely different set of Windows logs in order to see why your potential Active Directory logon is failing. So that was part of the reason why it was determined to just let them view any of the system logs that they need; it's very hard to narrow down exactly which logs you could need to diagnose some of these issues.

E
This is sort of to answer both your point, Mark, and Arvind's: at what point do we say this isn't Kubernetes' problem, this is Windows' problem? Windows should have a good remote log-viewing experience, or whoever prepared your Windows nodes should have put an agent on there that lets you access those logs scattered all over the place. That's not a Kubernetes problem; that's a somebody-else problem, an ecosystem problem. Where's the line for that?

B
It sounds like, in general, it would help if we provided some best practices and approaches for the community to consolidate on for setting up nodes. I know that for a lot of people we've left it up to the ecosystem, and everyone's kind of doing their own thing.

B
Is there a way that the Kubernetes project can make a harder recommendation for some decent defaults, not inside the Kubernetes code base, but in the Kubernetes Windows node processes, so that there's more consolidation and the way in which we do Kubernetes together on Windows is solidified? So that among all of these many approaches to doing things, there's not just "a best way"; there's the Kubernetes-suggested way.

H
You know: install your container runtime, which is containerd these days, then get the normal processes that need to run on the node running, get the kubelet and kube-proxy up, get your CNI configured, and there you go. So it's like: do we want to make this more complicated for people, or easier, since this is a quality-of-life improvement?
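
A hedged sketch of those node-prep steps, Linux-flavored for brevity; exact packages, unit names, and manifests vary by distro and installer, and the manifest file names here are illustrative:

```shell
# Container runtime and node agent as system services.
systemctl enable --now containerd
systemctl enable --now kubelet

# kube-proxy and a CNI plugin, commonly applied as cluster add-ons.
kubectl apply -f kube-proxy-daemonset.yaml
kubectl apply -f cni-plugin.yaml
```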

E
I mean, obviously we want to make things easier for people, but we also can't take responsibility for everything under the sun, right? We're talking a lot about Windows, but I don't think it's just Windows. Windows is more acute than Linux in this regard, but I don't think the problem is fundamentally different. To take Linux as an example: to prepare a node, you pick a Linux distribution, and that distribution has its own config and options and installed packages. Did you choose to install SSH, or not install SSH?

D
Yeah, I would also ask whether this is primarily focused at bootstrap, bring-up time, or whether it's an ongoing, day-two-operations kind of thing. Because if it's bootstrap time, it seems like it could be way more reasonable for the thing that is configuring and running the kubelet (it's clearly doing things on the node) to include, as part of that process, a smoke test of "can I run a container?"

D
Does the thing I just set up, with the kubelet and containerd and CNI and whatever else, actually work? A smoke test at the end that says "did I run a container?", and if I didn't, gathers all the logs from these three things, whether it's Windows events or systemd or whatever the setup is doing, and ships them off, or says "I failed here." At bootstrap time, that's a more reasonable thing to do. If this is day-two-operations focused, that gets harder, because you may not have a persistent presence.
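
Here is a minimal sketch of that bootstrap smoke test, assuming a Linux node with crictl available; the config file names are illustrative:

```shell
# Try to start one container via the CRI; on failure, bundle the logs
# from the components just configured and flag the bring-up as failed.
if ! crictl run container-config.json pod-sandbox-config.json; then
  journalctl -u kubelet -u containerd > /tmp/node-bringup-logs.txt
  echo "bootstrap smoke test failed; logs at /tmp/node-bringup-logs.txt" >&2
  exit 1
fi
```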

C
Yeah, I just want to answer John's question: this is not just during bootstrap. We get issues like "hey, my node has been running for a week and now I'm not able to bring up containers," at which point we again ask for kubelet logs and runtime logs. So it does happen outside of bootstrap on the Windows side.

A
Arvind, was this discussion helpful? Does it help you to move it forward?

C
I still don't have clarity on whether I should go back and, you know, add more code and do the things that Jordan was asking me to do, or whether this is still a "no, we don't want to add this feature." I'm sorry, folks; I know I'm putting people on the spot, but it's been two years, so we need some "do we do this or don't we" sort of decision, right? I don't know if everybody agrees, but that's where I'm coming from at this point.

F
Yeah, so to echo that: I'm sympathetic to Arvind getting some closure here.

F
And I'm sympathetic to users who want to make it easier to debug systems.

F
Honestly, I would be fine if this was a SIG Windows sub-project, and it was "here's a practice and a plug-in" to do the type of thing that Mo described earlier.

E
To be clear, neither am I saying no; I don't have a strong feeling either way. I actually really understand this use case, and I don't want to downplay the convenience or importance of having this very fundamental thing be available. I just want to make sure we don't sign up for more than we think we are.

D
From my perspective, like I said, what this is doing is not worse than what the kubelet is already doing in terms of exposing /var/log, philosophically at least, if it stays at that level of exposure. I think it was a mistake to default that on. So if this stays at the same level of exposure, where it's just in the kubelet API, we don't add special API surface for it in kube-apiserver, and it defaults off...

D
...then on the kubectl side, making the client experience nicer could be done with a kubectl plug-in that handles fiddling with the proxy endpoints and turns flags into query parameters. Maybe that's a way forward where we don't commit to making this a first-class thing, but it's there as an affordance for platforms that want to expose it.
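
A sketch of that kind of plug-in: kubectl discovers any executable named kubectl-&lt;name&gt; on PATH as a subcommand, so a thin wrapper is enough. The plug-in name and query parameter here are illustrative:

```shell
#!/usr/bin/env bash
# Save as "kubectl-nodelogs" on PATH; run as: kubectl nodelogs NODE SERVICE
# Turns the positional arguments into the raw node-proxy logs query.
node="$1"
service="$2"
kubectl get --raw "/api/v1/nodes/${node}/proxy/logs/?query=${service}"
```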

F
Then you have resources that are exposed for scheduling regular workload pods. There have been a number of requests in SIG Node to allow pods to go under the system-reserved bucket, separate from the normal allocatable bucket, and typically those are use cases around node preparation by the ecosystem, things like "I want to run my GPU driver under system-reserved and not with the rest of the workload."

F
So I could see a trend that could go either way on this: we have more and more high-privilege components that want to deploy as pods on nodes to do driver-like things. But in that spirit, treating this like one of those plugins as well is fine by me.

C
Yeah, that is definitely one route. We've been (Mark, you can correct me if I'm wrong) sort of bouncing back and forth within SIG Windows itself. We still have to figure out a way to get this all to work without having to deploy, you know, a pod on the node, because that kind of defeats the purpose. That's where I would be; that's my stance on it.

F
And I guess, ignoring individual product concerns, the only other thought I had on this is that today, the set of API resources that are servable by the kubelet is fixed. There have been other requests to make the set of things that can be served from the kubelet endpoint extendable. One example in my head that we could have pursued this way is the Pod Resources API.

F
That tells you things that have been scheduled by the kubelet on that node; maybe that could have been done out of tree, or that type of thing. But maybe my question to Jordan and Tim would be: if folks wanted to add a plug-in system to the kubelet, to serve different endpoints from that kubelet, would we reject that in the SIG or from an architecture perspective, or is that a palatable path forward?

F
We've had a number of requests from random folks wanting to extend random parts of either the kubelet admission chain or the kubelet serving path, and we've never really had a discussion as a broader community about whether that's a good or a bad idea. But that would be the type of thing where, Arvind, if your point was "well, we still have to figure out how to get the wiring in place," I could see a counter-argument coming forward that says: hey, let the kubelet have an extension mechanism that says, for this path, serve this.

E
Yeah, I don't have any philosophical objection to that, and that's sort of what I was getting at; that's what I took out of what Jordan had said, whether he meant it or not. I don't know the details of it, but you being the closest of all of us to the kubelet, if you don't think it's off-the-bat crazy, then I'm willing to look at it.

F
It would have the advantage of making us not have to answer every question, right? But we haven't given it deep thought. Honestly, I think as a community we could do more to get energy focused on cleaning up the kubelet serving path, which, I think, we technically still treat internally in the kubelet as not quite GA. That was just one thought: could we kickstart a project to explore the space and maybe get some energy into it?

D
Yeah, if we keep this scoped under the kubelet proxy. The pipe that goes from the API server to the kubelet is the node proxy; this is where this started, and I take responsibility for being the one that said "this isn't consistent, let's try to do a logs subresource," so I'm sorry. If we say we have a pipe from kubectl through the API server to the kubelet through proxy, then platforms that don't give node access also don't give node proxy access; they just say...

D
..."you can't use that; the node is our responsibility." Kubelet endpoints are not well documented; they're not structured the same way the API server endpoints are, and there's been talk in the past of making that consistent, or better defined, or putting guarantees around it. So dropping more endpoints under there, so that kubectl plugins could use the pipe that already exists from kubectl through the API server to the node, on platforms that allow it: that doesn't seem bad, as long as it doesn't...

D
No, no; and sorry, I didn't mean to pick on anyone. The same goes for GKE or any other platform: if they have specific log collectors with specific read capabilities registered on nodes, and to debug a node on that platform you have to know what you're looking for, that seems plausible. Yeah. I actually have to switch to mobile, so I have to drop.

D
But if we keep this contained to the kubelet for now, I think the scope is a lot smaller and a lot easier to reason about, and to rationalize with a future direction of node-serving extension type things.

E
Agreed. My primary concern is that it's something providers can disable if they need to, and that we don't enshrine too much of the details, so that we could choose to implement it in some other way, like this more generic plug-in.

D
I think so. I think enshrining this in a first-class API is probably not correct.

D
And yeah, Diego asked about conformance: nothing under the kubelet API surface is in conformance, I think. Yeah, everything under the kubelet API surface is explicitly disclaimed from conformance.
C
Okay,
all
right
I
I
can
go
back
and
and
reject
this
PR
to
not
make
it
an
API
and
just
try
and
directly
as
in
go
back
to
I.
Think
revision,
number
I,
don't
know
a
few
revisions
back
when
this
was
not
an
API
or
something
like
that
and
try
and
make
it
work
that
way.
Okay,
all
right,
I
can
I
can
take
a
look
at
that.

C
Thank you, folks. And thank you, Tim, thank you, Jordan, for all the time you spent helping me out with this. I'm going to bother you more; sorry about that.

E
Thank you for being responsive and flexible and not hating on us.
A
All
right,
Stephen,
I'm
gonna,
give
her
next
topic
over
to
you
automatic
chair,
picking
where
they
want
to
Denmark
the
client,
AP
IP
endpoints.

I
This needs to be backported to 1.24 and 1.25, and the PR just needs to be merged. But the problem is that the automation process for the conformance board takes its input from Sonobuoy. So the problem is, Sonobuoy needs to be doing its update and publishing the, sorry, the list of tests, and then that needs to be used by the end users.

I
I'm not sure, personally, about this cutoff date of February 10th, but the problem is that it's just not a fast fix, and there needs to be a better way of coordinating with the Sonobuoy team to understand the publishing process better. I put a comment in the Sonobuoy Kubernetes channel, and I hadn't had any feedback as of yesterday.

B
Quick note on the conformance process: submissions to cncf/k8s-conformance require that all submissions include the logs of their runs. Most of the time those logs are submitted from the tool called Sonobuoy, and the release the tests come from is the .0 release; we don't use later tags, because conformance is not meant to change. So when we're backporting stuff like this, we need to find a mechanism for when stuff is cherry-picked and put into a new release.
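
For reference, the submission flow being described usually looks something like this with Sonobuoy; the results tarball is then attached to a cncf/k8s-conformance pull request:

```shell
# Run the certified-conformance test suite against the cluster; the
# conformance test image is matched to the cluster's release.
sonobuoy run --mode=certified-conformance

# Once it finishes, download the results tarball and summarize it.
sonobuoy retrieve
sonobuoy results <results-tarball>.tar.gz
```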

I
Yeah, so there was a... if you look at the... yeah, sorry, if you go a little bit further down.

I
They are linked to the code there. Yeah, sorry, I found it somewhere else. There is a list of tags, and they're publishing an updated list on each patch release of Kubernetes, and it has it removed in 1.26. But the problem is, there's nothing to stop the change getting merged now; that's already getting accepted, but it won't come through until Sonobuoy updates their process.
F
So,
okay,
if
I
ask
a
question,
just
make
sure
I
called
everything.
So
if
the
project
was
to
merge
the
conformance
pr
updates,
basically,
there's
no
process
in
place
today
to
ensure
that,
even
if
a
new
release
was
published
from
Cube
that
that
would
be
picked
up
in
Sona
boy
and
then
even
if
Sona
boy
was
to
pick
it
up,
there's
no
process
in
place
to
have
it
pick
a
version
that
was
not
from
a
DOT
Zero
release
of
cube
is
that
right.

I
But then the problem is that the end user would need to be on that patch release of Kubernetes to be able to submit that version of the test results, and if it's an end provider and they can't upgrade their clusters straight away for any particular reason, then they're in a bit of a bind. I've tried to get some feedback from somebody on the Sonobuoy team, and there's been no sort of feedback so far.

I
So that particular PR can be merged; there's no real blocker from the project, the conformance project. The problem is getting Sonobuoy to recognize that update.

I
John had asked for some feedback on that particular one.

F
Okay. So I guess, since John is on the call today: it seems like nothing will go wrong by merging it, but it is insufficient to get the end-to-end steps in place, so this plus additional steps will need to occur. And it sounds like, yeah, the poster of the PR, Dan, unfortunately, is not here today.

F
We will, at least I'll make sure to, try to get him engaged on the Sonobuoy next steps. But thanks.

A
Thanks, Stephen. The next two points are from myself. The first one is...

D
Before we move on from this one, I just wanted to clarify: it sounded like you were saying the things that need to happen are backporting the change to the conformance tests, cutting a tag, Sonobuoy updating off of that tag, and the person running the tests having a cluster at least on that version. Like, those are the three areas that need to happen.

D
Someone who wants to certify as conformant would be motivated to update to the patch version that drops that test from conformance. People who aren't affected can continue to, sort of, over-submit more tests on older patch versions, and that's fine; they'll still be conformant. And people who are impacted by the test that is actually problematic would be motivated to update. That seems reasonable.

A
All right: removing endpoints from the ineligible list. As most of you would know, we created a list of ineligible endpoints over the last three years, where we stored things that were debated as to whether they should be eligible or not. We're almost done: there are three endpoints left to get to 100% of the things that were open for testing. Now we're revisiting the ineligible list, and Jordan already did a very good review of it for us. Thank you very much.

A
Thanks, Jordan. This brings back about 40 or 50 endpoints. There's a link to the mailing list, and there's also a spreadsheet inside the mailing list thread.

A
If you want to go there and review, we'll appreciate as many eyes and comments on that as possible. I'll be going through it next week and will start to create an umbrella issue to bring the issues to the front, and then we can proceed. Those marked there in green: that's actually four that we brought back this week, and conformance tests have been written for them. So we plan to clean out this list and leave in it only the things that were really agreed upon to be ineligible.

A
All right, thank you very much for all the input. Then: we shared some questions with SIG Node in Slack. There are three endpoints that belong to SIG Node...

A
...that are still to be tested, and there are some questions (Stephen can give some more clarity on exactly what we're concerned about) as to whether we should test these these days or not. If not, let's skip around them; if we have to, let's get them tested as soon as possible. So we'll appreciate some feedback on that.

I
Patrick made a comment on a particular PR that I've listed in the SIG message, saying that because port-forwarding was a debug feature, it shouldn't be part of conformance. The thought I was having was that it's more of a debug endpoint that an end user uses, particularly to debug their workloads, not an under-the-hood, cluster-admin kind of thing. So it's just about confirming whether those endpoints really should be part of conformance or not.

E
It being KEP freeze week, I've been reading a lot of KEPs, and for giggles I went through and just started finding other KEPs that sounded interesting that I hadn't read before, and I noticed something.

E
Two points does not a trend make, but it is more than one: I've noticed at least two distinct KEPs this week that are related to fixing security issues. In this case, both of them are relatively niche, sort of corner cases, that I have a hard time imagining people are doing or using on purpose; but fixing them would both be breaking changes, and in both cases I found myself cornered into having to say that the correct fix is to add an API to opt into security. And I hate that.

E
In one of these cases, detection is definitely possible; in the other, I don't think we can know.

E
So the quick summary: one of them is a weird behavior of OCI and runtimes that reads group information from the container image even when we shouldn't, so you can end up running your pod with extra groups assigned; we could detect that and report that it had happened. The other one is that hostPath mounts that are mounted read-only aren't recursively read-only, and any nested mounts underneath them are potentially read-write. I can't imagine what people are really doing with that on purpose.
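
To illustrate the second issue, here is a minimal sketch (all names and paths illustrative): the readOnly flag applies to the hostPath mount itself, but not to mounts nested beneath it on the host.

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ro-hostpath-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true        # read-only for /data itself...
  volumes:
  - name: data
    hostPath:
      path: /data           # suppose the host has a mount at /data/nested
EOF

# ...but the nested host mount can still be writable inside the pod.
kubectl exec ro-hostpath-demo -- sh -c 'touch /data/nested/f && echo writable'
```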

E
I was also inspired by a Git change recently, where Git made a breaking change in their command-line APIs to disable an insecure operation and force you to opt into insecure mode, and as someone who was on the receiving end of that break, I was pretty mad. So, you know, I used that to inform my position that having us not do breaking changes is important. But I wanted to put it out there and discuss it.

D
That's one possibility that wouldn't require users to be aware, and it could maybe be rolled out gradually: new clusters get this; on existing clusters, try to do detection; if detection looks good, then turn it on. That type of pattern.

E
For at least one of those, we talked about maybe adding admission control that lets the cluster administrator choose to make that breaking change, without us doing it by default. It all feels very complicated for something that is really a very niche case, but I don't want to be the person who wakes up on Tuesday and finds that the thing that worked on Monday doesn't work anymore.

E
Anyway, I'm not seeking an answer here, but I wanted to plant this seed and let people think about it; and, you know, if you have more concrete suggestions, I'd love to hear them. I imagine that at some point we will come up with something that is actually a real, major security issue, and we will face the same decision, and it would be nice to have at least thought about it before then.

A
Thanks, Tim. That leaves us with two minutes left. Derek, I don't know if you want a moment to comment; otherwise, we'll swing around to you.