From YouTube: Kubernetes SIG Windows 20200128
A: Hello everybody, and welcome to another SIG Windows meeting. It's the 28th of January and, as always, this is a recorded meeting, so please adhere to the CNCF code of conduct. It's actually funny when I say that; it's not like we all become, you know, idiots and start spitting profanities when it's not recorded. But you know, rules are rules, I have to say that. So today is the enhancements freeze date.
[Several short speaker turns, inaudible.]
A: ...8, 9. Okay, I'll put in the full name. That's why.
B: Maybe you know this one, Gab. So, briefly, I pinged a few people. We've got Jocelyn, Colin, and Dinesh on the kubeadm stuff, and so they looked at the KEP that Ben and Gavin updated. I think the one open question that we needed to talk about, that we didn't close on, was whether or not we were going to use wins as-is, or do something like pull it into a SIG repo and make modifications there.
B: And that's in there because I don't think we were going to use wins as-is for the CNI stuff specifically, because the process was going to run on the host, so I think we need to make some decisions on how that goes forward. And I think there was also an outstanding question on how we lock it down, whether or not we need a pod security policy or something like that to control access to that mount. So yeah, we've got a lot of those things detailed in that doc.
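For reference, the mount being discussed is a host named pipe exposed to the pod as a hostPath volume. Below is a minimal sketch of what such a pod spec might look like, built with the client-go corev1 types; the pipe path, container name, and image are illustrative placeholders, and a pod security policy would gate exactly this kind of mount via allowedHostPaths:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative pipe path; wins listens on a named pipe on the host.
	pipePath := `\\.\pipe\rancher_wins`

	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "wins-pipe",
				VolumeSource: corev1.VolumeSource{
					// On Windows, named pipes can be mounted as hostPath volumes.
					HostPath: &corev1.HostPathVolumeSource{Path: pipePath},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "cni-installer",                    // hypothetical workload
				Image: "example.com/cni-installer:latest", // placeholder image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "wins-pipe",
					MountPath: pipePath, // pipes mount at the same pipe path
				}},
			}},
		},
	}
	fmt.Printf("volumes: %+v\n", pod.Spec.Volumes)
}
```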
C: In the KEP we do call out that we'll provide, you know, a pod security policy that we expect they can use, so I think that one's an easy one. As for maintaining our own fork of wins, which we spoke briefly about, we have an action item from around January 14th to ping the wins maintainers to see if they're willing to limit or remove calls. I personally haven't done that, and I don't know of anyone else who has.
F: Can I just call out one security aspect? I'm not sure whether that review, or the security aspect of the review, has happened already, but given that with the CSI proxy we ran through some very similar security questions, the overall idea of a host process being spun up with a named pipe kind of drew some interesting responses, where people were a little uneasy with it.
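On the named pipe mechanics: a client inside the pod would typically dial the host process's pipe with go-winio. A minimal sketch, assuming an illustrative pipe name and a gRPC-style protocol layered on top (as wins uses):

```go
package main

import (
	"fmt"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// Illustrative pipe name for the host process.
	timeout := 5 * time.Second
	conn, err := winio.DialPipe(`\\.\pipe\rancher_wins`, &timeout)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// conn is a plain net.Conn; the client then speaks whatever
	// protocol the host process exposes (gRPC in wins's case).
	fmt.Println("connected to host process over named pipe")
}
```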
F: Pretty much. I think the main concern from storage was... I think initially we were exploring a very similar mechanism, which is: can we just expose an exec, you know, a system-like API which says, execute this PowerShell script on the host on behalf of the container. And that could be, you know, super generic. It's not doing anything storage-specific or CSI-specific, so it could potentially be used for any other purpose, including, you know, HNS configuration and the like.
F: So that's a heads-up. Secondarily, I think Patrick was exploring this idea of: could all these things be done over SSH? I think we did discuss this briefly, which is the container potentially getting some kind of SSH access to the host and essentially invoking the commands there as needed.
B: It might be different; I'm not super clear on whether or not it's better, but from a maintainability standpoint it might require less custom code than an alternative like wins. But as Gabe just mentioned, the keepalive might actually be pretty handy in terms of making sure things are shut down, whereas if you launch and then terminate your SSH session, whatever you started could still run forever.
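A minimal sketch of the keepalive idea with golang.org/x/crypto/ssh, assuming an already-established client; the function name and interval are illustrative:

```go
package sshutil

import (
	"log"
	"time"

	"golang.org/x/crypto/ssh"
)

// keepAlive pings the server periodically; if the peer stops answering,
// the connection is closed, so commands tied to the session can be
// cleaned up rather than lingering on the host.
func keepAlive(client *ssh.Client, interval time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			if _, _, err := client.SendRequest("keepalive@openssh.com", true, nil); err != nil {
				log.Println("keepalive failed, closing connection:", err)
				client.Close()
				return
			}
		case <-stop:
			return
		}
	}
}
```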
B: I mean, I think that's definitely feasible, I'm just not aware of any code base out there that's, you know, ready to use like that. It looked like wins was sort of going down that direction, based on the amount of HNS calls that were proxied, but I'm not super clear whether that completes the use case or if it was just something they did for a specific purpose.
A: I have one more item on this, a little bit related. I know that Gab and Ben, you guys worked a little bit on Cluster API and enabling that through the work you did with kubeadm, and I think most of that work was around enabling this for AWS. As Cluster API is maturing for other public cloud providers, very specifically I wanted to know who's on point to make sure that Windows with Cluster API works well for Azure. And I know Patrick...
E: I'll put it in the chat. So for Windows, whenever either the metrics endpoint or the stats summary endpoint gets hit, if you're running multiple containers it takes a while, to the point where calls from things like metrics-server or Prometheus start to time out with the default 10-second timeout. I tracked it down to a call to the dockerd API for getting individual container stats, and it looks like that's taking about 2 seconds per container. So if you have a lot of containers running, those calls can take a long time; the details are in the issue.
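A minimal sketch of how that per-container latency could be reproduced against the Docker API with the standard Docker Go client; this illustrates the measurement, not the actual kubelet code path:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{})
	if err != nil {
		panic(err)
	}

	// Time the one-shot (non-streaming) stats call per container; on
	// Windows this was reported to take roughly 2 seconds each.
	for _, c := range containers {
		start := time.Now()
		stats, err := cli.ContainerStats(ctx, c.ID, false)
		if err != nil {
			fmt.Println(c.ID[:12], "error:", err)
			continue
		}
		io.Copy(io.Discard, stats.Body)
		stats.Body.Close()
		fmt.Printf("%s took %v\n", c.ID[:12], time.Since(start))
	}
}
```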
E: I had Prometheus on a cluster, and I had a pretty decent sized node, and I slowly added more containers, just IIS containers that weren't serving anything. After I got to three containers, or pods, running, Prometheus started hitting its scrape timeout of 10 seconds trying to get metrics for those nodes, so they just went blank. And then when I bumped the scrape timeout up, I started getting stats again.
E: Yeah, another thing is, it looks like other code paths call into HCS to get container stats, and that is much quicker. I haven't looked at how much of a refactor it would be, but I think if we were to update this code path to call that instead, it could be a much bigger refactor of how these calls are performed. So that could be another approach, but I just haven't costed that out yet.
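That alternative path goes through hcsshim. A minimal sketch of querying container stats straight from the Host Compute Service with hcsshim's legacy API, assuming a known container ID; this illustrates the kind of call being referred to, not the actual kubelet change:

```go
package main

import (
	"fmt"

	"github.com/Microsoft/hcsshim"
)

func main() {
	// Placeholder ID; in the kubelet this comes from the runtime.
	const containerID = "<container-id>"

	c, err := hcsshim.OpenContainer(containerID)
	if err != nil {
		panic(err)
	}
	defer c.Close()

	// Query stats directly from HCS instead of the dockerd API.
	stats, err := c.Statistics()
	if err != nil {
		panic(err)
	}
	fmt.Printf("processor: %+v\nmemory: %+v\n", stats.Processor, stats.Memory)
}
```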
A: I mean, I think the first question to answer is whether there's a way you can make sure you don't lose fidelity in the data if we cache it and don't have it updated every two seconds. I'm looking at who the consumers are; if that's not a problem, if we're basically not losing much, then I think caching might be the easiest way to solve this, and I'm in favor of that over the refactoring, obviously.
B: We don't have a working prototype of that yet. Well, today that's actually how we get the network metrics, and those don't have this latency problem, but we don't know whether making the same call to get CPU or memory through HCS might not take just as long. We're still figuring out the next step after this; now that we've identified the API call that's taking too long, we're trying to see what we can do.