From YouTube: Kubernetes SIG Windows 20191126
A: Patrick is on as well. Nice meeting all of you. For those of you who were able to attend KubeCon last week, it was a great event. We had two sessions on Windows that were pretty much jam-packed. I actually went and ran the numbers later on, Patrick, and 7% of the conference signed up for our intro to Windows session. They didn't all show up, but 7% of the conference signed up; we had 600 sign-ups, which is awesome.
A: Then, you know, I don't know if you guys had talked about it last time, but in a previous meeting Patrick mentioned the tool that you guys open sourced. From a logging perspective it's phenomenal. At the very least we are going to take advantage of it, and we're recommending it to people who are looking into Windows as well. So it's a great tool for basically moving forward the logging capabilities on Windows, and I thank you and your team for doing that.
B: Sure. So there's a tool called Log Monitor that Microsoft released as open source, and it was actually initially demoed and shown at Microsoft Ignite a couple weeks ago, in concert with the Azure container monitoring work. What it lets you do is take a list of Windows event logs, structured ETW logs, and a list of files, and it will basically tail all of those out to standard out, so that tools that use the Kubernetes API to get the logs from all those pods can actually get some data.

Otherwise, you'd have to do something like kubectl exec to go tail an IIS service log, or do your own HTTP requests. The way it was designed, it was actually designed to be pluggable on both the input and the output side, so if you'd like more information I'd recommend just checking out the repo itself. Their hope is that once it's polished off, maybe they can make it into something more.
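The setup B describes is driven by a single config file that Log Monitor reads at container start. The shape below follows the microsoft/windows-container-tools README at the time of this meeting, so treat the exact field names and values as an assumption and check the repo for the current schema:

```json
{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          { "name": "system", "level": "Error" }
        ]
      },
      {
        "type": "File",
        "directory": "c:\\inetpub\\logs",
        "filter": "*.log",
        "includeSubdirectories": true
      },
      {
        "type": "ETW",
        "providers": [
          { "providerName": "IIS: WWW Server", "level": "Information" }
        ]
      }
    ]
  }
}
```

LogMonitor.exe then wraps the container's entrypoint (typically as the Dockerfile ENTRYPOINT) and relays everything it tails to stdout, which is what makes `kubectl logs` work for those pods.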
D: I'll just jump in there, because the talk of Log Monitor reminded me: this past week I helped out with building a Dockerfile that someone else has been working on. It's a CNCF project, so they've been working on porting it to Windows, and it's pretty close to functional. I think it's still at a little beta/alpha stage, but I tried it out. The tool is fluentd.
B: But I think where this one's important, more than Log Monitor, is if you wanted to get all the logs on a given machine from all the pods: fluentd would let you centralize all of those in one place without necessarily having to use the Kubernetes API. So for high-throughput systems I think it's a little bit of a better option, but you'd use it in collaboration with Log Monitor.
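A minimal sketch of the node-level collection B describes, assuming fluentd's standard in_tail and forward plugins; the paths, tag, and aggregator host are illustrative, not from the meeting:

```
<source>
  @type tail
  # Container logs as written on the node by the kubelet/runtime
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<match kubernetes.**>
  # Ship everything to a central aggregator instead of pulling
  # per-pod logs through the Kubernetes API
  @type forward
  <server>
    host logs.example.internal
    port 24224
  </server>
</match>
```

On a Windows node the paths would be the Windows equivalents; the point is that one agent per node forwards all pod logs, which is what makes it friendlier for high-throughput setups.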
E: So I posted a link to an issue I filed about building the test container images for the latest SAC version of Windows. I CC'ed a bunch of people and posted it to sig-windows, and no one's really chimed in yet. This is something we're interested in: getting our end-to-end tests running against 1909. We have a node image available, but we don't have the test container images. So I don't know if Claudio is on the call, or maybe somebody else knows.
F: So basically, the status of the images: there are two lines of work. There is the image code and the scripts in windows-testing that we currently use to build and push the images to Docker Hub, and there is some work that Claudio is doing to move basically all of this process into Kubernetes.
E: Okay, so yeah, I know there's been some work on that from Claudio, and I see this PR has been outstanding since April, so I'm glad there's stuff going on. If there's anything that I can do to help, let me know. And again, the end result I'd like to see is just that we have this written down somewhere, so that six months from now, once we get all those PRs merged, we know exactly what we need to do, and looking ahead to the next Semi-Annual Channel release is easy and repeatable.
B: We need to agree on what the operations side is: who's going to manage those build nodes, which is something that needs to be managed in addition to the prow cluster. And then the image promotion process needed some work before non-Googlers could push official images. So if there's any way you can help review or get some of those pushed through, that'll move things forward, because I think all the pieces are there in PRs; people just need to agree, finish reviewing, and merge them.
E: Okay, well, thank you for linking those PRs, I'll take another look at them. And stepping away from the process questions: would anyone else in SIG Windows be interested in sort of qualifying 1909 for Kubernetes 1.18? It seems like we'll need to qualify something soon, or officially support something soon, for the new Semi-Annual Channel version. Does 1.18 with Windows 1909 sound reasonable, or do we have too much other work going on for 1.18?
B: So 1903 already has test images there, so we just need to update a prow job for that, and I think we'll probably cover that one on Azure at a lesser frequency, just because we've got more people running 1809. Then we could probably add 1909 as well. We just need to basically reduce the job frequency of some other jobs to get the capacity, because we've got a quota of about 200 cores per region and we're currently testing Kubernetes 1.14 all the way through master.
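The capacity juggling B mentions mostly comes down to the interval on prow periodic jobs. A hypothetical fragment in the test-infra job config style (the job name and interval here are invented for illustration; real SIG Windows jobs live in kubernetes/test-infra) might look like:

```yaml
periodics:
# Hypothetical job name for illustration only
- name: ci-kubernetes-e2e-azure-windows-1909
  # Run every 12h instead of every few hours to stay inside the
  # roughly 200-core-per-region quota mentioned above
  interval: 12h
  decorate: true
  spec:
    containers:
    - image: gcr.io/k8s-testimages/kubekins-e2e:latest-master
      command:
      - runner.sh
```

Lengthening `interval` on lower-priority jobs frees quota for a new 1909 job without raising the regional core limit.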
D: Okay, so this is an approach to doing the kubeadm workflow, but using this privileged proxy that Rancher released, called wins. It works similarly to the CSI proxy, but it just launches host processes. We wanted to explore the feasibility of it, and it seems like it does work. The question is just whether we want to pursue an approach like this or not.
D: So the benefits are: we don't need a wrapper script, really; you just use kubeadm. We're not as tied to needing to rev the script along with versions of Kubernetes, and we're not tied to managing multiple CNI plugins. It's not as coupled; it's more like plain kubeadm, where you manage everything through DaemonSets, so those can be managed independently of the kubeadm parts.
C: We did add work into kubeadm, and there is still work pending that would be needed for GA, which is possibly, no, definitely, putting in the Windows version of these config maps. But the other thing is: we had already proposed this during the KEP and all of those discussions, and it was decided that we wouldn't go down that route, because it would just be another workaround and another thing to maintain while we wait for privileged containers. And we have a solution that does work, which also matches what Windows customers are using already. David, the PM, has seen that a lot of Windows customers prefer to run things as services, which is another, separate point. But I mean, we've had this discussion before, and SIG Cluster Lifecycle also wasn't too keen on it.
D: I mean, it has a feature to enable whitelisting which processes can be run, but yeah, restricting access to the named pipe is, I figure, the main thing. That and the whitelist are the ways I think you can secure it. But the broader point is, I guess I still believe that this is a simpler approach compared to the amount of other things involved.
C: I don't remember exactly what was in the KEP, but it's probably in the alternate solutions. I do remember bringing it up, and it was considered at the time and decided against. I mean, it's just that thing of maintaining it, and then if people start using it and don't move past it, we're going to have to support it forever.
C: I think regardless, even when we have DaemonSets, we're going to have to support this service model, because customers do use it, and it's already supported anyway. But if we add this third thing, I think the long-term maintenance story for it would be difficult.
A: So maybe a good next step, since we're also running out of time: Ben, do you want to have some time with Lubomir? He seems to be the only remaining shepherd of things like kubeadm from Cluster Lifecycle. Just run this idea by him and see what his thoughts are. In your earlier conversations, Lubomir was involved, right, as a steward of that in the community?
C: I think I had those with Timothy St. Clair and just the general community as well. I don't know, I don't feel like Lubomir was against it; he was open, right, and I was open. But again, I was convinced that having this other workaround, maintaining it, doing the work for it, and then rewriting the proxy to use it, I mean, these are things that we would have to maintain for a long time.