From YouTube: Kubernetes SIG-Windows 20211026
A
Hello everybody, and welcome to the October 26, 2021 iteration of the Kubernetes SIG-Windows community meeting. As always, these meetings are recorded, so please be sure to adhere to the CNCF code of conduct. Well, let's jump in. First of all, announcements: code freeze is about three weeks away. I believe that's November 18th for 1.23.
A
So yeah, let's just keep working on getting in all the work that we want. I've seen a lot more action, both reviews and submissions of PRs, which is good; it always helps to get those in early. If anybody has any concerns about functionality not making it in, or about not getting enough PR traction, please reach out in Slack and we'll try to address that as soon as possible.
B
I'd add that we merged HostProcess container support in containerd and in Cluster API for Azure last week. So if you've been holding off on trying out the HostProcess stuff, it's a little bit easier now with Cluster API for Azure, so give it a try. I'll get a link and drop it in there.
A
Thanks, James. Yeah, we'll get a link as a reference for people who want to experiment and build on top of that. That was a pretty big PR too, so thank you. Okay, let's go into this first item, which spilled over from last week: Aravind and Brandon. I think I saw Brandon on the call, if you want to take it from here.
D
Yeah, can you hear me now? Yes? Right, so, following up from last week regarding the projected volume documentation: I think we did go back and forth a little bit on Slack on this. The key bit is, I will open up a docs PR to document this, and I think it should go in the projected volumes section in the docs, is what I'm guessing, and maybe we should link to that from somewhere on the Windows side, where we talk about the storage options available.
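For reference, a minimal projected-volume manifest of the kind this documentation would cover might look like the following sketch (the pod, ConfigMap, and Secret names are illustrative, not from the meeting):

```yaml
# Hypothetical example: a pod mounting a projected volume that combines
# a service account token, a ConfigMap, and a Secret into one directory.
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo          # illustrative name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.6
      volumeMounts:
        - name: all-in-one
          mountPath: /projected-volume
          readOnly: true
  volumes:
    - name: all-in-one
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600
          - configMap:
              name: my-config   # assumed to exist
          - secret:
              name: my-secret   # assumed to exist
```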
D
So
that's
my
plan,
but
the
key
bit
that
I
think
we
wanted
to
discuss
this
time
around
is
in
openshift.
We
have
a
workaround
for
if
a
user
decides
to
use
both
run
as
user
and
run
as
username,
the
pod
will
not
come
up,
and
in
openshift
we
have
an
admission
controller
for
pods.
That
actually
add
both
of
them,
which
causes
windows
pods
to
just
not
come
up
right,
so
there
potentially
could
be
other
clusters
that
have
this
issue,
which
is
why
we
wanted
to
just
discuss.
Do
we
just
document?
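As a concrete illustration of the failing combination being described, here is a sketch of a pod spec that sets both the Linux-only runAsUser and the Windows-only runAsUserName (pod name and image are illustrative):

```yaml
# Hypothetical pod spec setting both identity fields; on a Windows node
# the kubelet rejects this combination and the pod never reaches Running.
apiVersion: v1
kind: Pod
metadata:
  name: conflicting-identity
spec:
  securityContext:
    runAsUser: 1000                   # Linux-only numeric UID
    windowsOptions:
      runAsUserName: "ContainerUser"  # Windows-only user name
  containers:
    - name: app
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022
```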
A
So I have a question around that. Assuming that all of the work to add the OS field to the PodSpec lands, goes alpha in 1.23, and hopefully beta in 1.24 — would it be?
D
I mean, somebody could try and fake it and then come up with weird security issues, and that's why we were like: okay, we cannot guess whether the pod is for Windows. But yeah, the minute the OS field goes in — and I'm assuming at that point — we will change it at admission control.
A
Yeah, because I'm pretty sure that if we try to upstream any changes with that, that question is going to be asked, and then the next question is going to be: how hard is it going to be to remove the check upstream once the OS field is there? And my feeling is that in general folks, especially Jordan Liggitt, prefer not to put in things that they know are going to be removed or obsolete right away.
A
So I think this might be good — this is clearly founded in OpenShift and OpenShift-specific, but is anybody on the call familiar with this, or running into this issue? Are there more use cases or scenarios that are getting impacted by this? That would maybe present a stronger case for upstreaming a fix, temporarily, for this.
E
Even if you make the change in OpenShift, what if someone manually provides those names during pod admission time? How should we handle that scenario? Shouldn't we have a stronger validation at the API stage?
D
So, I think we can do that only after your PR goes in, where you are clearly identifying whether this pod's OS is set or not. Today, if we try to do this, they're again going to push back, saying: how do you know this is a Windows pod or not? So yes, Ravi, once your PodSpec change goes through, sure, we can add API validation.
A
Yeah, so I think there are a couple of things in place. One is — so here's the actual logic that happens in the kubelet for this; here's some of it. I know that there's one place where, if both runAsUser and runAsUserName are set, the pod doesn't go to running. This might not be the only one. So there are a couple of descriptive behaviors: one is that these checks happen at the kubelet, and once we can identify them —
A
Yes, I think that they should happen at admission time. I think it's much clearer for users to figure out what happened: if that check happens at API admission time, you get better errors and everything. Today — I mean, it might be possible to have that API admission check without the OS field and just check to see if both of those are set, and error out and say: we can't make a determination, but we know this is invalid. But, yeah.
E
Yeah, so either in the documentation, if we can clearly call out that this will be rejected in future releases, I think that would be good.
D
So, Ravi, in documentation, the minute you use the word "future," they're going to reject that. They don't like to mention anything that is conjecture. Yeah, sure, we know that it'll happen, right? But in docs you can't mention what's going to happen in the future; you have to state what the current state is.
A
Yeah, I was wondering if we could point to the KEP in the docs and say this is going to address it, and not have any future-looking statements — just say this is the plan to address it. We already have a KEP that's been approved for alpha. I don't know if that will be approved or not, or if they want to just limit it to current behavior, like you mentioned.
A
Yeah, so I'm on the fence. I don't know if we want to try and change the behavior upstream today or not. Potentially we can start a thread — and I think Jordan Liggitt is probably the person that we should talk to in Slack. I wonder if we should add to the thread in the SIG-Windows Slack channel and see what he thinks here.
B
I think changing the behavior now and then rejecting it in the future is not the best route to go. I think: document it, leave the behavior as is, and then reject it in the future. And I don't see any backwards-compatibility issue with rejecting it earlier in the cycle; it's just a better experience overall.
A
When we document it, it might be helpful to include the exact error message that the kubelet spits out too, so if people search for that, hopefully they can hit it. I think we should be clear and say: if you see this error, these are the things to check — and have that error string, and then just say, make sure that you're not setting both of these; these fields are essentially mutually exclusive.
A
I think that there are a couple of different ways that you can trigger that. If you have runAsNonRoot set, it will fail here, and this shows up as an event, and kubectl describe pod will show this.
A
We can take that offline to move the agenda forward, but yeah, let's see if we can figure out all of the different failure scenarios and possibly update the kubelet code too, to emit an event or something that would show up in that case instead of the pod just getting stuck. That might be good.
F
So one thing I'll note on the documentation front: from the MSDN side, we do have a little bit of documentation on the different user types in containers, and we do call out the permissions and the issues that could arise from using something like ContainerAdministrator with a mounted volume — being able to add public keys and such. I'll link that here. But in terms of extended documentation —
F
I think my plan is to document more clearly the capabilities of the user accounts in containers and call out this issue specifically: if you're mounting volumes with Windows containers, you run these potential risks. But if there's anything else you think we should add, I'm open to it.
A
No, well, I think that's good for now. Also, I think Brandon and a couple of the folks internally at Microsoft were talking, and we were kind of thinking it might be a good idea to use a portion of one of the upcoming KubeCons to really deep-dive into the differences between ContainerUser and ContainerAdministrator.
A
Okay, next, I see the item "runAsUser should have been strings in the Pod API." Just didn't have time, yeah.
A
Yeah, I think if there's ever a Pod v2 spec, we can — I think we all know a lot more now about all the nuances of the different security descriptors for Windows and the shortcomings of just having a string and everything, so yeah, I agree with that. Okay, next agenda item: there's a PR review request. I've actually been following this PR with a couple of other folks. Do you want to go ahead?
C
Yeah, this is heferny on GitHub. Basically, this PR fixes the issue where externalTrafficPolicy: Local was kind of broken on Google Cloud.
C
But in order to do that, we need to expose two flags. So the new version has two additional flags saying whether or not we do this forwarding, and what the endpoint name for the traffic health check is. So yeah, this is a new one.
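For context, the service-side configuration whose Windows kube-proxy handling this PR addresses looks roughly like the following (the service name and ports are illustrative):

```yaml
# Illustrative Service using externalTrafficPolicy: Local, the mode the
# PR under discussion fixes for Windows nodes on Google Cloud. With
# Local, traffic is only delivered to endpoints on the receiving node,
# and the cloud load balancer probes a per-node health-check port.
apiVersion: v1
kind: Service
metadata:
  name: win-webapp            # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: win-webapp
  ports:
    - port: 80
      targetPort: 8080
```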
C
Originally I added this to the agenda because it wasn't getting enough attention, but right now I see a lot of people have already jumped on it, so I really appreciate that. That was just a brief update on what this PR is about.
A
Yeah, well, before all of these updates I was going to comment that, unfortunately, this PR needs some reviewers from a lot of different areas, and we've been trying to get the attention of some of those reviewers. Jay, are you here? This might be something that you might be able to help with too, since a lot more of the networking folks are people you meet with regularly. So, I don't know.
A
Okay, yeah. So what usually happens after we have the regular community meeting is — I usually go over to SIG-Node, but people usually pick some topics that they want to pair on, and Jay usually leads that, and then you just go through them. It can be code reviews, it could be working on a feature, or it could be just answering questions. So that might be good, if you have some time to stay for a few minutes afterwards.
A
The next item is maybe a little bit related to the first one: there are some conversations in Slack specifically about how to handle this runAsNonRoot field in the API validation for the new OS field. I think we're kind of reaching consensus with Jordan there. I had some concerns, but we can continue the discussion. Ravi, do you want to give some background and context, or I can?
E
Yeah, so just to make sure that everyone understands: runAsNonRoot is a field that exists in the pod security context as well as the container security context.
E
But for this particular field there is some confusion — should we have it or should we not have it? Because in the case of Windows we have runAsUserName, where the root user is equivalent to ContainerAdministrator, and we are actually validating based on the ContainerAdministrator username, not on a UID. In the case of Linux, UID equal to 0 maps to the root user, and we have that UID to make sure that it is not root.
E
We do not have that in the case of Windows; that's where the confusion is. So I was initially thinking that we would make this runAsNonRoot field Linux-specific, and we can use runAsUserName to make sure that the validation for the administrator happens.
A
Yeah, and I think to add to some confusion: on the Windows page it's documented that runAsNonRoot does not do anything for Windows, but in fact it does. A lot of that behavior is here, and outside of this call. The behavior today is that when the kubelet tries to start a container on Windows, if runAsNonRoot is set — runAsNonRoot is supposed to prevent anybody from running containers as root on Windows.
A
What that maps to on Windows — what has been determined, and there are some conversations here — is that any container running as ContainerAdministrator is treated as root. So if you set runAsNonRoot to true, it will check to make sure that runAsUserName is not set to ContainerAdministrator, and, if runAsUserName is not set, it checks to make sure that the default user in the container image is also not ContainerAdministrator.
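Spelled out as a manifest, the check being described — runAsNonRoot combined with a Windows username — looks like this sketch (pod name and image are illustrative):

```yaml
# With runAsNonRoot: true on a Windows node, the kubelet verifies that
# runAsUserName (or, if it is unset, the container image's default user)
# is not ContainerAdministrator before starting the container.
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-windows
spec:
  securityContext:
    runAsNonRoot: true
    windowsOptions:
      runAsUserName: "ContainerUser"  # anything but ContainerAdministrator
  containers:
    - name: app
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022
```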
A
And if it is, it errors out and fails to start the container in the kubelet. So my concerns are that people may be using this either knowing that it works for Windows, or just unintentionally.
A
They stumbled on it when they were trying to play with it for Linux, and they may be relying on that functionality, especially with other policy and admission engines. So — is anything fundamentally wrong about that logic? I don't think anything's fundamentally wrong about that logic.
A
One of the big discrepancies now is that it's not documented, so that is an action item, and one of the pieces of feedback on the PR that's linked in that conversation. But the other — oh yeah, so I think that's a good question. I didn't see anything fundamentally wrong about that logic, except that, one, it's not documented, and two —
A
I think that there is a lot of room for improvement for Microsoft to really highlight the differences between ContainerAdministrator and other users, and what that allows. In the document that Brandon linked, it starts to talk about that, and it does say that if you run the container as ContainerAdministrator and then allow hostPath mounts, the container is accessing any of the volumes there as if it was in the administrators group on the machine.
E
Yeah, I think my concern is more along the lines of tying the field called runAsNonRoot to ContainerAdministrator. If there were no association between those two fields, then I think I would have been okay with it, because there is nothing like a root user in the case of Windows.
E
What are we trying to tell the user, effectively, who's submitting pod specs? To me a lot of it is sort of baggage that we have been carrying around, because when the API was introduced, Windows was not in the mix.
E
Linux has runAsNonRoot and all that kind of stuff. So, in the Windows options of the security context, if you have something like runAsUserName and you are able to tie it to the ContainerAdministrator user, that should be good enough, instead of having another field in a layer above which does not know anything about the user that is going to come in. That is where the logic threw me off a bit; that's where I was really confused when I started working on this.
A
Yeah, and I think that was some of the comments that Jordan had on that thread as well: on Linux we can check to see if your user ID is greater than zero and do this, but for Windows it's a lot more complicated, and we —
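For comparison, the Linux-side check just mentioned is purely numeric — a sketch (pod name and UID are illustrative):

```yaml
# On Linux, runAsNonRoot can be validated against a numeric UID:
# runAsUser: 0 would conflict with runAsNonRoot: true, while any
# non-zero UID satisfies it. Windows has no equivalent numeric ID,
# which is why the Windows check falls back to comparing usernames.
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-linux
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000   # non-zero UID, so the check passes
  containers:
    - name: app
      image: registry.k8s.io/pause:3.6
```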
A
For built-in user accounts in containers, there's ContainerAdministrator and ContainerUser. ContainerAdministrator is the much more privileged account, and then there's ContainerUser. But there's also the ability to create other users in the container and then run as those, and there's also gMSA and its domain accounts.
E
Yeah, so I think if we just have runAsUser or runAsUserName mapping to ContainerAdministrator, that would have been good enough for us. If I understood correctly, we can achieve the same effect with the API provided, is what I'm thinking.
A
Well, my concern there is that the check that happens in the kubelet checks two sources of information. It checks the pod spec that's coming in, and if nothing is specified on the pod spec, then it goes and queries the manifest for the container image and looks to make sure that the container image doesn't have a default user of ContainerAdministrator.
A
I think that that's an important check, because if somebody publishes a container image that does some things and runs its workload as ContainerAdministrator, and nobody sets runAsUserName in the pod spec, then we could bypass checks done at API admission time, which wouldn't check that container layer.
A
So my thought was that we either needed to make sure that runAsUserName was set and then make sure it wasn't set to ContainerAdministrator, or check that — and that's what we would need to do.
A
Not on the host — so you could do that and still not be able to access host resources. What you could do is make a container, add a user "foo," add it to the administrators group in the container, and then that user foo would be able to do things like toggle services inside the container and manipulate files inside the container.
F
It's a bit of a complicated topic. In general I'd say no — yeah, this is — I think it's worth a further, longer discussion. Okay.
F
In general, in order to create something that's privileged within the container, you're probably already using the ContainerAdministrator account anyway. There's not a lot you can do with ContainerUser; it's a very restricted account. So in most cases, for the scenarios you're worried about, you need ContainerAdministrator to do that anyway.
A
Well, yes, that is a good concern — a valid concern.
A
Yeah, my thought is: if we don't support runAsNonRoot, then we would need to explicitly have runAsUserName set for all pods that would potentially want to enforce the user for policy. And I think that, even though it doesn't quite map to the actual implementation of how runAsNonRoot was set up —
A
It could map to the spirit behind it, and my feeling is it's easier for people who want to enforce policy to check whether a bool field is set than it is — and less burdensome for the workloads — to make sure that a field that is optional today is set on all workloads and not set to a specific value. Those are my concerns. I'm more than happy to talk about it with other people, but —
F
I think in general, from the Microsoft side, where we can make the behavior between Linux and Windows consistent is what we would prefer, and I think the runAsNonRoot field is nice to have as a blanket policy for ensuring that containers only run with low-privileged accounts — especially thinking about the future, if we decide that we want to introduce more low-privileged accounts for Windows containers or limit the capabilities of ContainerAdministrator.
F
Yeah, but you know, ContainerAdministrator is still a highly privileged account; there are things you can do with it that could interfere with the host, and I think having the runAsNonRoot field used as a way to enforce the non-usage of that would be helpful.
E
Yeah, I think there is also this argument of using the runAsNonRoot field's side effect as a way to say that ContainerAdministrator is the root user. I am not comfortable with using that field as a side effect to ensure that ContainerAdministrator is the root user. I'm fine with it; I just want to understand the rationale behind continuing it for a long time.
E
It's also about the usage of the field: people are using that field, whereas we can get the same intent from checking runAsUserName against ContainerAdministrator.
E
Yeah, and are you saying that that will continue forever, or can we, in the future, enforce this particular field in the pod spec and make sure that we do not need to use runAsNonRoot?
F
No, I think that's reasonable, and in the future we do want to move away from the need to use ContainerAdministrator by default in some images. So it would be helpful, I think, just to be able to set a blanket policy of "don't use ContainerAdministrator" in the future.
A
All right, I need to drop to SIG-Node, so I'm going to stop the recording and hand it over to Jay. We can continue conversations — I'd like to.
A
We can also link to a Slack chat if people want to continue some conversations here, but I think it was a good discussion. I think it really highlights that we need to document what ContainerAdministrator is and what differences it has, and all of that. So thank you, everybody, for joining — there were some good discussions here. Hope to see most of you, or all of you, next week. Bye.