From YouTube: Kubernetes SIG Windows 20180619
A: Sounds good to me. Okay, let's start with the 1.11 items; I think that's going to be relatively quick. We're in code freeze, and I'm not aware of any PRs that we're still waiting on for merge. Does anybody else have any, or any other open issues for 1.11? All right, I'll take that as a no, so I guess let's move on to the fallout from last week on the DNS side. Dinesh and Madhan continued looking into that; we had two options for how to fix it proposed last week.
D: So besides asking the Go team when that will be out, basically so we can pick it up, what I asked about is whether it's going to be the default on Linux as well; basically, with the nameservers and search domains, it means you have to specify what to give them. Afterwards I looked over the documentation, and I'm posting the links here. It gets a bit worse, in the sense that, from what I saw in the documentation, we can basically override everything that we need to support: you can override, per pod, any DNS server and search domain, and I don't know whether somebody has actually tested that on the Windows side. You can also specify it from the kubelet command-line arguments and from the CNI config. The CNI config basically works, because that's the way we're using it now; the only problem is we probably need to cover the previous cases as well.
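The per-pod override described here corresponds to the pod-level `dnsPolicy`/`dnsConfig` fields in the Kubernetes API. A minimal sketch follows; all addresses, names, and images are illustrative assumptions, not values from the meeting:

```yaml
# Sketch of a per-pod DNS override (all values hypothetical).
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: test
    image: microsoft/nanoserver   # hypothetical Windows image
  dnsPolicy: "None"               # ignore the cluster DNS defaults entirely
  dnsConfig:
    nameservers:
    - 10.240.0.10                 # hypothetical DNS server address
    searches:
    - svc.cluster.local           # hypothetical search domain
```

With `dnsPolicy: "None"`, only what is listed under `dnsConfig` reaches the pod, which is the "override everything" case discussed above.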
C: The CNI spec calls out that you can do that, and then basically we can return it. The problem is, DNS is much more closely tied to networking in Windows, and it's part of the endpoint configuration, so we want to keep it that way. If per-pod DNS is the only gap, I would like to address that by implementing it so we can pass it through that endpoint call.
B: I agree with that; that's probably what's going to happen if we finish it fast enough. Usually most people wait these days, so it's going to be 1.11.1 or 1.11.2 that people will install. If we get the cherry-pick into that, then we have a good chance of actually going down the path of the right solution.
A: Okay, all right, so I think we know what we need to do there. So, moving on to the next item on the list (I'm just reordering these in the doc so it follows the actual order we used in the meeting): one of the other big gaps that we have to close to graduate from beta is making sure that we're prepared with everything we need in terms of monitoring and logging. For 1.11 we finally got a PR in that covers just some of the core metrics, except for networking.
And then the other thing I did was, I went and looked at the test cluster that I have set up on ACS Engine and looked at all the different log sources that I could find, so I've got a doc linked in the notes here. Basically, what I found was that we had logs from the kubelet and kube-proxy on disk, but either the file names were mislabeled or we had set the wrong log levels, because there were a bunch of files that were, you know, .error.log.
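The mislabeling concern can be illustrated with a short shell loop that sorts log files by suffix; the file names below are hypothetical examples, not the actual files found on that test cluster:

```shell
# Sketch: classify log files by suffix, the way the .error.log naming
# convention implies (file names here are hypothetical examples).
for f in kubelet.error.log kube-proxy.log; do
  case "$f" in
    *.error.log) echo "$f: error-level" ;;   # matched first: error log
    *.log)       echo "$f: info-level"  ;;   # everything else: info log
  esac
done
```

If the suffix and the actual verbosity written into the file disagree, that is exactly the mislabeling described above.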
Yes, I'd like to ask everyone to take a quick look at that list I had, and, you know, if there's anything you're looking for, or if you're trying to get it to work in a particular monitoring solution, I'd love some more help there, because this isn't something I've ever dug into on the Linux side.
B: The interesting thing is that people ask me about Prometheus monitoring against Windows nodes, and I personally never had the time to sit down and investigate what you'd need and how much of the data that you produce could be ingested by Prometheus. I don't know if I have the time now, but this would be a useful exercise; I'll definitely put it on my to-do list and see if I can look into that. But along the lines of what you're talking about, did the doc already cover some of the findings you had?
A: Okay, well, yeah. If anyone's got feedback on this, please go ahead and add things to the doc so that we can eventually turn this into a plan, because, you know, I've had people ask about things like Fluentd and Prometheus, but nobody's been able to describe to me what the exact use case is.
When I started looking at the Prometheus stuff in particular, I saw there's a node exporter on Linux, but it looks like it has a mix of things from different API sources. So I think what we need to do is figure out whether we need to make a Prometheus plug-in that makes Windows look like a Linux node, so that, you know, that single node exporter has all the data; some of it might come from the kubelet API, and some of it might come from the Windows performance counters. I'm basically trying to figure out:
If we need to create, you know, some sort of shim for consistency, or if we can just use the existing exporters that are there, but ask people to configure a different set of gathering rules for Prometheus on Windows. But, like I said, this isn't something that I'm an expert in.
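The "different set of gathering rules" option could look like separate scrape jobs per OS in the Prometheus configuration. The sketch below assumes that framing; the job names, hosts, and ports are hypothetical, not decisions from the meeting:

```yaml
# Sketch: per-OS scrape jobs (all names, hosts, and ports hypothetical).
scrape_configs:
- job_name: linux-nodes
  static_configs:
  - targets: ['linux-node-1:9100']   # node_exporter's conventional port
- job_name: windows-nodes
  static_configs:
  - targets: ['win-node-1:9182']     # a hypothetical Windows exporter port
```

The alternative discussed, a shim that makes a Windows node look like `node_exporter`, would instead let both OSes share the single `linux-nodes`-style job.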
B: It's not necessarily a Prometheus expert we need; we need people with real interest and needs around monitoring to try out the different solutions and figure out where our gaps are and how we can make it better. That's really what we're looking for: a yay or nay across a couple of different solutions, but maybe there are more.
F: Yeah, we mark them here, for all of them. Usually when we see something like that, it's fixable, so if it's not written there, and for something it's not, we just flag that to them. Basically, the workflow is that it gets the specific command, or whatever the test does, just from reading the test. So there's no wizardry or anything like that. Okay.
A: So I think the doc PRs were supposed to be in last week as part of the official release, but the quickstart guide that we have up is actually over on the website repo, so I think we can update that basically any time. I already submitted some release notes that called out the new feature additions for 1.11.