From YouTube: Kubernetes SIG Windows 20211207
A: Welcome everyone, I've started recording. This is the 7th of December and the SIG Windows weekly meeting. This is a CNCF project and we do follow the CNCF code of conduct, so please be nice to everyone, and if you have any questions you can reach out to myself or Mark or any of the other leads. With that, we can go ahead and get started. We have a pretty packed agenda today, I guess making up for the last couple of weeks. Are there any announcements that we wanted to make before we got started? I think the release is today, so 1.23 is coming out today.
B: Anything else anybody wanted to say? Yeah, I think I need to double-check on the date, but I believe there's a contributor summit, a virtual contributor summit, on the 13th of this month to, kind of, you know, celebrate the release and just have some fun and meet more contributors virtually. If anybody is available, I highly recommend joining that.
A: Oh okay, I don't see Muzz here today.
D: Yep, I'm here. So one of the things I've been asked about a few times and just wanted to give a quick update on: I've been in communication with Mirantis. The 20.10.11 release for the Docker runtime from Mirantis is supposedly coming sometime mid-December.
D: That has a number of important fixes that have been brought up with us. I think there was a CVE and some other permissions issues with, you know, starting the daemon. So if you have any questions about that, let me know and I can try to get you in contact with Mirantis. Any immediate questions there? Or I guess we can move on to the next thing.
B: No questions, but I did want to highlight, just as a public service announcement, that the 20.10.8 version for Windows does have an issue where, if you reboot the machine, the Docker daemon fails to start. There are some file permission issues with the panic.log file, which gets marked as or created as read-only, and so on a reboot you need to delete that file if you want to adopt 20.10.8.
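For reference, the workaround described above boils down to removing the read-only panic.log before restarting the Docker service. A minimal Go sketch, assuming the default data root C:\ProgramData\docker (an assumption; adjust the path if the daemon uses a custom data-root):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed default location of the Docker daemon's panic.log on Windows;
	// adjust if your daemon is configured with a different data-root.
	const panicLog = `C:\ProgramData\docker\panic.log`

	// Clear the read-only attribute first, otherwise the delete can fail on Windows.
	if err := os.Chmod(panicLog, 0o666); err != nil && !os.IsNotExist(err) {
		fmt.Fprintln(os.Stderr, "could not clear read-only attribute:", err)
	}
	// Remove the file so the daemon can start cleanly after the reboot.
	if err := os.Remove(panicLog); err != nil && !os.IsNotExist(err) {
		fmt.Fprintln(os.Stderr, "failed to remove panic.log:", err)
		os.Exit(1)
	}
	fmt.Println("panic.log removed (or already absent); restart the Docker service.")
}
```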
D: We don't have anything right now, but it is very much at the top of the list for us. That is something we're considering and trying to find a solution for.
D: No, no, this is kind of just an internal discussion we're having. Oh, but yeah, it would be good to have a public issue on it. If I'm not mistaken, I believe... I don't know if one guy is accepting new submissions for new APIs, so I think that's part of what our discussion is.
A: Great. I don't see David here. I think he wanted to just bring awareness to this particular issue with LoadBalancer type services and the local external traffic policy. I don't see him here, so I don't know if anybody has any information on this. Otherwise, if he joins a little bit later, we can have him discuss it.
F: No, go ahead. That's exactly what David, I think, wanted to ask or bring awareness to: that we have seen this issue. I don't think it's fully resolved; we're talking with the Azure CNI networking team, but if others have... First of all, awareness: be aware that there is an issue that we have confirmed, and then, if anybody else has any, you know, suggestions or something, reach out to David, as he's working through a solution.
C: ...the board for SIG Windows; for networking itself, there's the SIG Windows networking tab.
A: Cool, all right. Brandon, you've got host process containers next steps, yeah.
D: Sure. So with the documentation and everything being completed for 1.23, we're kind of looking forward to what's coming up next with host process containers before we GA. There are a couple of things that we're investigating and thinking about right now, which are support for custom users and some file system enhancements that we're planning for 1.24 and 1.25 before GA of host process containers. What I wanted to ask the folks here, and it doesn't need to be an answer right now: is there anything with host process containers that we might be missing, that you're looking for, you know, before we GA the feature? So if there's anything that you'd like to see in host process containers, please let us know, because this is the time to start planning for that.
B: I also think, in addition to just feature requests or, you know, functionality changes, I'd also like to open it up and say: are there any kind of projects or components that weren't possible previously on Windows because of the lack of privileged container support that people would like to see working with host process containers? Those are usually really good use cases to make sure that we have all the functionality we need. I know James opened up a PR to get windows_exporter, the Prometheus Windows node exporter, working, and we've got some samples in Cluster API and in sig-windows-tools for running CNIs and kube-proxy. I've got a proof of concept of kured, the Kubernetes Reboot Daemon, open, but if there are any other, you know, projects that people would really like to see working with this, please either contribute or comment. We'd like to make sure that we can get as much parity with Linux components as possible here.
K: Yeah, my name is Mauricio, from SIG Storage. Yeah, you're right. Okay, we actually would not need CSI Proxy, but we are now in a stage where we can... or, Mark has actually submitted a PR where the CSI Proxy binary is running in a privileged container, in a host process container. So that's something in between where we are right now and where we want to be.
K: Yes, yes. But, well, if you're more interested in why we are thinking about not supporting this in CSI Proxy: it's because, as soon as we start pushing an image, because we will now have a new image for CSI Proxy, then that's also something that we will have to support long term. And this month I'm also going to be working on creating a doc on using host process containers in the CSI drivers. That's our objective.
G: That's really interesting. So is that ready to go, the CSI Proxy privileged thing? Is it mostly working... can it run as a privileged pod?
B: Running it... so just running the CSI Proxy binary in a host process container is mostly working, but, as Mauricio mentioned, we're figuring out a good... well, people are figuring out a good support story long term. And so, in order to run the actual components as host process containers and not need the proxy, that would require some more work in all the existing components. So this is like a stopgap.
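For context on what running a binary "in a host process container" looks like, a host process pod is declared through the Windows options on the pod security context. A minimal sketch using the Kubernetes Go API types; the pod name, image, and SYSTEM account are illustrative assumptions, not the actual manifest from the PR mentioned above:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostProcessPod sketches a pod that runs a single host process container.
func hostProcessPod() *corev1.Pod {
	hostProcess := true
	// Host process pods currently run as one of the well-known Windows service
	// accounts; custom users are planned for a later release (see discussion above).
	user := `NT AUTHORITY\SYSTEM`

	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "csi-proxy-hostprocess"}, // illustrative name
		Spec: corev1.PodSpec{
			// Host process pods must share the host's network namespace.
			HostNetwork: true,
			SecurityContext: &corev1.PodSecurityContext{
				WindowsOptions: &corev1.WindowsSecurityContextOptions{
					HostProcess:   &hostProcess,
					RunAsUserName: &user,
				},
			},
			Containers: []corev1.Container{{
				Name:  "csi-proxy",
				Image: "example.registry.local/csi-proxy:v1", // placeholder image
			}},
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
		},
	}
}

func main() {
	fmt.Println("sketched host process pod:", hostProcessPod().Name)
}
```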
B: People are more than welcome to try it out and validate their scenarios with it, but it sounds like this might not be kind of the official... like an officially supported path.
J: We might end up going the host process route for the container for it as a stopgap, like you were talking about, before we can have full host process support. So yeah, CNI is the big one for us, that and kube-proxy, those are the two, and... those are the three that we're really going to be rolling with this process.
M: This is Aaron here from Red Hat; I work closely with Arvind. One question I have is, you know: Palo Alto basically released a note some time back saying that process isolation containers are not secure, and they had found a way to escape from the container to the host and get access to, you know, keys and such. The question I have is: is that still the case with process isolation containers? Has there been any change around it? And does this expose host process containers as well, or is HPC shielded from this vulnerability?
D: Yes... so no, this is kind of a totally separate technology for the most part, because host process containers are literally processes running on the host. So you will have, for the most part, full access to the host, depending on what user you're running as in the host process container. And since we're limiting to well-known service accounts right now, the capabilities and permissions are known, and when we add custom users in the future you'll be able to restrict that further. So this is a privileged container. This is not the same level of isolation or security that Windows Server containers provide. And that vulnerability that was in that article, also, you know, has been fixed in Windows Server containers, so that's not an existing issue anymore. And, you know, Windows Server containers are using server silos and they have isolation from the host, so they're totally different scenarios.
M: So that's actually good news to my ears, because a lot of customers have been blocked by this issue on the Windows container side. So, two things. First, if you could send me a link to the fix for Windows containers for this Unit 42 article, that'd be great. (Sure.) Awesome, thank you. And second, if I understand this right, you said host process containers are less isolated and run as privileged, so, you know, have a lower level of protection than Windows containers, or is it the other way around?
D: No, their goal is that they're privileged, right. They're for specific use cases, to be able to configure privileged access to the host so that you can do, you know, whatever administrative tasks require privileges on the host, and so that you can limit the capabilities and privileges of the regular Windows Server containers. So you can use this as a privileged workload to do specific access to the host, and limit your Windows Server containers to less privileged accounts.
M: Got it, thank you.
B: So one thing I know about that too is that, for host process containers, the ability to restrict their usage is built into the new pod security standards and the pod security policy replacement that was built in, which I believe went beta in 1.23 along with host process containers. So if you're concerned about that, host process containers now have all the same ways to restrict scheduling them, based on, you know, all the different ways that pod security policies worked, with that built-in pod security policy replacement. So please read up on that if you're concerned about privileged access to the host. I believe that host process containers are forbidden in the baseline and restricted policies by default.
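For reference, the built-in replacement mentioned here is Pod Security Admission, which is driven by namespace labels; a namespace that enforces the baseline (or restricted) level rejects hostProcess pods along with other privileged pods. A minimal client-go sketch, with the kubeconfig path and namespace name as illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path resolution is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns := "windows-workloads" // illustrative namespace
	namespace, err := client.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Enforce the baseline Pod Security Standard on this namespace; hostProcess
	// pods are disallowed at this level, so they can only run in namespaces that
	// explicitly allow privileged workloads.
	if namespace.Labels == nil {
		namespace.Labels = map[string]string{}
	}
	namespace.Labels["pod-security.kubernetes.io/enforce"] = "baseline"

	if _, err := client.CoreV1().Namespaces().Update(context.TODO(), namespace, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("baseline Pod Security Standard enforced on namespace", ns)
}
```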
I: So I just want to add to what Anand said; I think Anand's question is really around... So there is a statement in Microsoft docs that says we don't trust the kernel isolation boundary and you should use, what do you call it, you should use Hyper-V isolation, and we don't have Hyper-V isolation yet with Windows with Kubernetes, right? So I think that's the main concern there. Isn't that correct? It's Hyper-V isolation versus process isolation, and because we have customers who want to mix workloads, they want to have their Linux workloads also run on the same cluster where they're running Windows workloads, because we have been suggesting, hey...
M: You know, the proper boundary for isolation in Kubernetes is the cluster itself and not... Let me give you the exact wording, give me a second. Yeah, the note says, you know: in Kubernetes, or AKS, or elsewhere, these aren't suited for hostile multi-tenant usage. Additional security features like PSPs or RBAC make exploits more difficult; however, for true security when running hostile multi-tenant workloads, a hypervisor is the only level of security you should trust. The security domain for Kubernetes is the entire cluster, not the node; for hostile multi-tenant workloads you should consider using physically isolated workloads. Right?
D: Yes, so one thing I want to mention there is: we are very conservative in our approach towards security, and we only recommend to people the options for security which we can guarantee, and with Hyper-V we can guarantee security, because it's kernel isolated: it has its own kernel and it runs on the hypervisor, so we can guarantee a secure boundary. Windows Server containers, the process-isolated containers, they have, you know, in general a pretty high degree of security; there's very limited opportunity for what you can do within the container. Even the container administrator account is still a very limited privileged account, and in order to even access the host you have to, you know, enable the container to do that from the host itself. So there are a lot of restrictions in place for the container administrator, and even the container user account, which is incredibly restricted. But the thing is, because process-isolated containers are sharing the kernel with the host, and we have a very conservative approach to security, we do not want to take the risk of, you know, saying to customers that there's a secure boundary there. Because when you share the kernel, whether it's Linux or Windows containers, that is a potential attack vector, even though we're pretty sure that there's not... you know, because of the level of restrictions that we have in place for Windows Server containers it's a low risk, but we can't guarantee that there's zero risk. That's what I'm kind of saying.
D: No, so host process containers are the least secure, right, because we're giving you... you're a process on the host, right; you're a job object that's running with a just-mounted storage volume on the host, and the security there is basically just whatever privileges the account the host process container is running as has. So it's the same level of security as a normal user on the host. Process-isolated containers are more secure, because they have file system and namespace isolation and a bunch of checks in place that the kernel enforces. You know, from our conservative approach we're saying there's still potential for interaction with the host kernel, because the kernel is shared, just as, you know, a Linux container would be with a shared kernel; it's kind of the same principle. When you're sharing the kernel with the host, there's some potential for communication with the host, and if there are things that we haven't discovered yet, then that's a potential attack vector. Hyper-V is the same level of isolation as a Hyper-V VM, so you're running on the hypervisor.
B: And one other thing to note is that host process containers really are not intended to be used as workloads; they're mainly intended to be used for doing, you know, cluster administration and node maintenance type things: patching your nodes, getting things bootstrapped so that you can join the node to the cluster, and having all of that maintained via containers and, you know, DaemonSets, rather than needing to log into the Windows nodes themselves. We really don't anticipate, or don't recommend, running workloads as host process containers, because they kind of don't really have a good isolation model, by design, if that makes sense.
D: Sure, you patch, you know, whatever. Yeah, we do not want anyone running their workloads, you know, their regular production workloads, in host process containers, because it's the same as running directly on the host VM. The nice thing about them is that you can restrict your administrative workloads to a single, you know, container instance in a single pod, and then all of your regular Windows Server container workloads you can run with, you know, super restricted user accounts on your regular Windows Server containers, because you don't need any sort of privileged access or privileged, you know, capability there. Okay.
M: We're coming up on time here. One quick question, sorry: on the tech note you sent, Brandon, I don't see Windows Server 2022 on the list. Is that in the plan? Because it's running out of support in May, a lot of our customers are actually moving on to 2022, so I want to make sure that's in the plan as well. (Sorry, which tech note?) The CVE that you sent, for the Unit 42 article.
D: Oh yeah, no, the fix was released, you know, for all Windows Server versions in February; that includes Windows Server 2022.
B: Okay. Oh, and also, this may be a little bit of a distraction, but it might be good to highlight here since it is security-focused: the way the Windows servicing model usually works, and there are always going to be potential caveats, but in order for security fixes to be eligible to be backported into an older OS version like Windows Server 2019, that same fix must be present in all future code bases. So Windows Server 2022 would have any fixes that were backported to Windows Server 2019, kind of as a general rule.
A: Cool. So we're two minutes past; we had two more items. The gMSA one, Jay, I followed up on the thread; I think it was just missing some DNS settings, maybe, but I'm happy to follow up more on that thread there. And then the gMSA one, maybe we can talk a little bit more about it next week, Jamie, once you dig in; I'll bump it to next week. Claudiu would probably be able to help with the image promotion process, probably the best here.
G: You know, we run on sort of over-provisioned hardware downstream, and so that's why we did a five-minute sleep, and I'm wondering whether we could start with those, flag them as disruptive, and then do like a small iteration to make them smarter later. Or is that like a big red flag? We have some reboot tests (Muzz, I see you on Zoom) that reboot nodes and then check to see if the nodes are still... yeah, they reboot the host, yeah.
C: I think they should be marked as disruptive by default, because they are restarting the node itself.
G: But my other question is: can we... right now the only issue we have is we never finished making a smarter way to detect whether the node was online or not, and we didn't really think of a really good way to do that.
E: Yeah, that's what we do, and we actually mark those tests as Disruptive and Serial.
G: Disruptive, yeah, that's a good idea. I think they're flagged as Serial, but if that's the sort of final way to do it, I guess we could just put that in there; I'll update the comment on that.
E: Disruptive too, and Serial, and make sure that the node conditions, the node status conditions, are Ready; and in some cases we actually try to schedule, like, create a pod and make sure that it lands onto the newly rebooted node, yeah.
C: That's a good point actually, yeah. That's a very good idea, because it might flake in some scenarios in which the node reboots so fast that it doesn't really miss any health or liveness checks.
G: Yeah, that was a concern we had, which is why we did a long sleep, because that's kind of...
E: Sometimes what we try to do is, before the reboot happens, get the list of pods on the node, save it somewhere, and then, depending on how you have created those pods, they may come up or they may not come up on the same node.
E: No, it's not about that. So there can be a higher-level controller which, once it sees that the node is not available anymore, may evict those pods, and then those pods may come up on some other node.
G: So that's kind of the tricky thing that we ran into, which is why I kind of brought this up: it wasn't trivial to detect that the kube... you can skip a beat if you're not polling fast enough, where, like, if the reboot happens really fast, you might not ever even know that your kubelet went down. And that was my original thing; it's like...
G: Well, maybe we should just do the dumb thing that we're doing right now: just wait a ridiculous period of time and then check if the node's working, because that works, right? Like, it's not going to take, you know, 11 minutes to reboot a node, but it might take five minutes, you know. And that would allow us to have this test upstream, right, and then we could have like a separate thing about making the test smarter.
A: Oh, she was gonna... We asked about this on the PR. Yeah, yeah, the suggestion was you could extend the custom plugin monitor, and she said she was working on it.
G: Yeah, well, we just empirically were like, well, we ran it a bunch of times and we're like, yeah, it definitely reboots within 10 minutes, even on horrible hardware. But the problem is sometimes it reboots, like, fast, like if you're on a fast vSphere instance or a fast cloud instance. Like, we do this on... we run these tests on bare hardware and we also run them in this nested virtualization environment; we see both extremes.
E: Yeah, so we have been doing it in the case of OpenShift, and what we try to do is, when we try to bring down the node, we will have a workload, make sure that it gets drained, reboot, and then we will schedule a new workload onto the new node and wait for the workload to be available. That should give us an indication, with a timeout.
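A rough sketch of the kind of "wait for Ready, then schedule a pod" check being discussed, written directly against client-go rather than the upstream e2e framework; the node name, pause image, namespace, and timeouts are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodeName := "windows-node-1" // illustrative node under test

	// The reboot itself is triggered out of band (e.g. by the test harness).

	// Wait for the node to report Ready again instead of sleeping a fixed time.
	err = wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors while the node restarts
		}
		return nodeReady(node), nil
	})
	if err != nil {
		panic(fmt.Errorf("node %s did not become Ready after reboot: %w", nodeName, err))
	}

	// Schedule a fresh pod pinned to the rebooted node and wait for it to run,
	// which exercises the kubelet, CNI, and kube-proxy after the restart.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "post-reboot-check", Namespace: "default"},
		Spec: corev1.PodSpec{
			NodeName: nodeName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "mcr.microsoft.com/oss/kubernetes/pause:3.6", // assumed Windows-capable pause image
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	err = wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := client.CoreV1().Pods("default").Get(context.TODO(), pod.Name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		return p.Status.Phase == corev1.PodRunning, nil
	})
	if err != nil {
		panic(fmt.Errorf("pod did not start on rebooted node: %w", err))
	}
	fmt.Println("node rebooted and a new pod landed on it successfully")
}
```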
G: That's a lot of code to write, a drain. We could do that, but I still have the other question of: is this useful as opposed to just waiting 10 minutes? And if it's useful, if the consensus is that it's useful, we can do the drain; but if nobody actually cares, my, like, dumb brain would just wait 10 minutes for the reboot to happen and then go, or iterate on this solution later.
C: I mean, if you're going to have to sleep for such a long time, you'll also have to add the Slow tag. I think we mentioned that already, but if you actually manage to do it in a smarter way, as you want, then the Slow tag might not be necessary anymore.
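For reference, a rough sketch of how such tags are usually encoded in e2e spec names with Ginkgo; the real upstream tests go through the Kubernetes e2e framework wrappers, so this is only an illustration of the naming convention:

```go
package windows

import "github.com/onsi/ginkgo/v2"

// Behaviour tags such as [Serial], [Disruptive] and [Slow] are plain text in the
// spec names; the e2e runner selects or skips tests by matching these strings.
var _ = ginkgo.Describe("[sig-windows] Windows node reboot [Serial] [Disruptive] [Slow]", func() {
	ginkgo.It("should report the node Ready again and run new pods after a reboot", func() {
		// Reboot the node, wait for the Ready condition, then schedule a pod on it
		// (see the client-go sketch earlier in this discussion).
	})
})
```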
G: Okay, that... no, that's a good reason to do it the right way; I just wanted a good reason to do it the right way. So we're suggesting we want to do it the right way to reduce the test time. That's fair, yeah, cool. I'm gonna relay that, and then I like Ravi's pod drain idea, because you're right, even with CrashLoopBackOff you still might be able to get the logs, but if you drain then... And now my question is, I wonder if that would be useful for something else, like...
E: You don't even have to do it like that; to be clear, you can just evict the pod or delete the pod and then create another pod on the same node. Like, say, if you have a single pod running on that node, just delete it or evict it and then start the reboot.
A: My quick question here, though, is: what's the purpose of this restart test? What are we testing for? Just that the node restarts, or that if a node crashes and there are pods running on the node, those pods come back and work properly? Because, like, I've seen Windows nodes go offline, yeah, and when it starts back up, if the pod was still scheduled to that node, there are problems. So I'm wondering, if we just delete the pod and then deploy it again, we're not actually catching the issue that we...
G: So we actually have found, and it's kind of embarrassing, but now that you ask: with Antrea we found that rebooting a node actually completely destroys the entire network stack, because of the way it extends OVS. So for us we just want the stupidest reboot possible; we don't care about any fancy stuff. But there's this whole other conversation this opens up about... Well, there's a lot of other stuff you'd want to test, one of which is the thing you talked about, right? Like, do you have stale volumes? Do you have pods that are attached to storage where the storage can't reattach somewhere else? There are all sorts of things that can go wrong once you reboot a node, right? So yeah, you know, like, my answer here, even though your question wasn't a yes-or-no question, my answer is yes; we're thinking that same thing, right.
C: Yeah, I agree. I think we can test a lot of things to make sure that everything works properly. For example, should the pods still be alive or be restarted if the node restarts, you know, and do they become alive? Or, secondly, what we could check as well: if there are any leaked volumes, if there are any leaked CNI IPs, and so on and so forth, just to make sure we don't eventually crash because of stuff like this. Because I actually noticed one thing in Azure CNI: if you are restarting the node and you still have pods, those IPs that were already assigned are never cleaned up afterwards, apparently.
G: So let me update the pull request with the details from this conversation. So I'm going to update the pull request, and... as talked about with Ravi Santosh Gudimetla... Ravi, you have the longest nick of anybody in SIG Windows.
E: You can do both, Jay; like, depending on what your use case is, we can have some StatefulSets, we can have some stateless workloads running on the node. Like, say, if we have a StatefulSet, we should provision the disk, make sure that it is in sync on other nodes, and see the workload get scheduled onto another one. So, depending on the use case, you can do both.
G: ...need to have RBAC for the normal one, yeah. I mean, usually you would have those service accounts, but I like that better. But I think the stupid thing of curling it, you're not gonna go wrong with that, right? Like, and it's super easy to read in the code: you curl until it's not working, right? Like, you don't have to compare. But I like the timestamp thing better. Theoretically, though, scheduling a pod there... but this will do it: then you reboot, then, if you reboot that node...