From YouTube: Kubernetes SIG Windows 20181204 Part1
A
I think of these as first producing a design, taking it from the initial design that Patrick has given, getting that validated with the security SIG as well and making sure that everybody agrees on it, and then going ahead and implementing it according to the roadmap for Kubernetes there.
Jeremy
Thoughts on this? Anyone on the meeting right now who would be... hi, Johnny, I will be willing to kind of shepherd this project from, you know... this is more of a PM...
B
This is Deep. I can volunteer a resource from Docker to look into it. Some background: as Patrick Lang might know, we have been working, a few folks and myself, on the Docker Swarm side to get this done. So we were thinking we can use some of the learnings there to leverage on the Kube side.
A
That's awesome, so I'll let you guys work offline, exchange details, and start on this. Our goal, given what happened with 1.13, is to be a little bit early on the train for everything Kubernetes and release related, so the faster we actually get a proposal out and start getting approvals before we start coding, the better. Some members of the community have raised concerns about how we expose some of these managed service accounts.
A
So we don't have anybody kind of taking advantage of, or getting, elevated permissions for anything. I don't know if you guys have seen it, but the first big vulnerability for Kubernetes went out yesterday, where, you know, a malicious user can gain widespread access into a cluster, so it's going to get a little bit more sensitive with things like these that provide access to Active Directory resources.
C
And just for reference, this is something that we need to go through what's called the KEP process, and they just moved those out of the kubernetes/community repo into, I think it's kubernetes/features, and they're actually doing a session talking about this process at KubeCon if you want to go there. One of the difficulties I had before was that... the whole point of KEPs is they're supposed to be merged early and iterated on frequently.
C
That way people can see the whole history as it's being developed, and so we just want to make sure that we get, you know, even partial drafts merged. That's the way the process is supposed to work, so that people can track it over the long term. And because they were moving stuff, I believe they closed my initial KEP; I don't know if they copied it to another one or not.
They did not, Patrick.
D
The KEP link for their project includes everything on how to get started.
A
Oh yeah, your original KEP was closed, but it was not moved over. The stable release for Windows one, basically, you pasted it in the notes?
Yeah, yeah. So they did migrate a few of our KEPs, like the stable release for 1.13 and 1.14 for Windows, but this one was one that was on me; I didn't get around to doing it myself. I said, you know, when we get started on this effort, we'd just go do it, yeah.
C
Makes sense. I saw you posted a link in there, and I'm happy to see somebody be able to take that forward, because, I mean, at this point the top priority for me is going to be, of course, getting this release stable, and I can't take on any new feature work and focus on that at the same time, so yeah.
C
I'll just go ahead and start with your issue, and I'm going to put a few more notes down there, and then we can talk about it after. Okay.
D
A bunch of context links are in there, but basically, when you create new pods on the Windows nodes, they don't inherit any of the properties of the management interface, the, you know, the primary NIC on the VM. So what that means in this particular case, for GCE, Google Compute Engine: you only support an MTU of 1460, and so all of our network interfaces are set to 1460 for the MTU, but then when we create pods, they default to 1500.
D
So we've been working around this for a long time, but now that we're running, basically, when we're running the e2e conformance tests, we're running pods that we don't control. So the workaround is, as soon as you bring up the pod, to run a command to adjust its MTU. That seems kind of infeasible to allow as a long-term solution for this. So basically we're running our tests now, but, you know, this is incompatible with the platform, and so basically I wanted to just ask the folks at Microsoft.
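As a rough illustration of that per-pod workaround: the adjustment itself is normally a one-line netsh call inside the Windows pod, and the minimal sketch below just wraps it so it can be shown runnably in Go. The interface alias and the target MTU of 1460 are assumptions, not something stated in the call.

```go
// Minimal sketch of the per-pod workaround described above: right after the pod starts,
// drop its interface MTU down to the node's value. Interface alias and MTU are assumptions.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Equivalent to running inside the pod:
	//   netsh interface ipv4 set subinterface "vEthernet (Ethernet)" mtu=1460 store=persistent
	cmd := exec.Command("netsh", "interface", "ipv4", "set", "subinterface",
		"vEthernet (Ethernet)", "mtu=1460", "store=persistent")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("setting MTU failed: %v\n%s", err, out)
	}
}
```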
F
That's a feature gap, and we don't have a fix for it, like an ability to assign an MTU for a container NIC. But what we did do is, basically, there are a couple of bugs where the MTU was not supposed to be 1500; it was supposed to be 1450. But I think I need to understand: your physical interface, or the virtual, the VM interface itself, is at 1460, and if we are adding another encap ourselves, does that mean we have to reduce it further?
F
That is true, but between the container and the management interface the packet is still flowing, and we do have, like... it doesn't leave the host, but within the host there is an encap and decap happening. So that's why we decrement the container interface MTU to be less than what the physical interface is. But I do see the ask: the ability to set the endpoint MTU and expect that endpoint MTU to be set in the container.
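A quick sketch of the arithmetic behind that explanation; the 50-byte figure is the reduction Microsoft mentions later in the call, so treat the exact overhead as an assumption.

```go
// Sketch of the MTU derivation discussed above: the container endpoint has to sit below
// the host NIC MTU by whatever the in-host encap/decap consumes.
package main

import "fmt"

func main() {
	hostMTU := 1460     // GCE VM NIC MTU, per the discussion
	encapOverhead := 50 // assumed overhead; the value the later fix subtracts
	podMTU := hostMTU - encapOverhead

	fmt.Println(podMTU) // 1410, the value expected on the container interface later in the call
	// A pod left at the 1500 default can emit frames the 1460 host NIC cannot carry,
	// which is the fragmentation failure described in this discussion.
}
```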
D
Sure, so I mean, basically my point is, if you look at those context links, they also include steps to reproduce the issue. There's evidence that the fragmentation doesn't work correctly when the container MTU is higher than what the platform supports, and so effectively you can't connect to certain websites, you know, certain web servers on the internet. So I don't know if the e2e conformance tests for Kubernetes try to test anything like that, but if they do, then it's gonna fail on GCE or any platform that doesn't have an MTU of 1500.
F
But let me give you, like: we have an MTU fix that's going in, in 1D or something like that, where we look at the physical interface and reduce it by 50, so that should work for GCE in my opinion, but let me confirm that. But that still doesn't give the control to the CNI to say, okay, I want this; irrespective of what you think, this is my MTU and I want it. That's the feature that Peter is asking for, and I think we should do that.
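For context, the knob Peter is asking for would amount to an MTU field in the Windows CNI network configuration that the plugin then applies to the HNS endpoint, similar to the mtu key the Linux bridge CNI plugin already accepts. The struct and field below are purely a hypothetical sketch, not an existing win-bridge or HNS API.

```go
// Hypothetical sketch only: a Windows CNI net config carrying an explicit MTU that the
// plugin would set on the HNS endpoint. WinNetConf and its mtu field are illustrative,
// not an API that existed at the time of this discussion.
package main

import (
	"encoding/json"
	"fmt"
)

type WinNetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
	MTU        int    `json:"mtu"` // hypothetical: set this on the endpoint, irrespective of the host default
}

func main() {
	conf := []byte(`{
	  "cniVersion": "0.3.1",
	  "name": "cbr0",
	  "type": "win-bridge",
	  "mtu": 1410
	}`)

	var c WinNetConf
	if err := json.Unmarshal(conf, &c); err != nil {
		panic(err)
	}
	fmt.Printf("CNI would request endpoint MTU %d\n", c.MTU)
}
```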
E
That needs to be updated too, right? And, I mean, I think there are docs that say that flag is supported when it can't be, in fact. So this is the HNS MTU feature that was a hard cut, unfortunately, for the current Windows release that we're working on, but certainly, you know, you're not the first person that's asking; there are numerous GitHub threads about this. So yeah, we will try it, we'll do our best to prioritize it against other requests and work on this, as was said, and give you updates. Yeah.
A
That'll be great. And as you guys explore that, if you can also see whether it can be backported; as you know, we had a discussion last week as well, Server 2019 is... we wanna, you know, everybody's kind of putting their eggs in the basket that that's gonna be the release they're gonna support across the board, so whether there's an option to backport it would be something we want to explore as well.
F
Yes, yeah, there are two; I would categorize them into two different things. There is one change we have made in 2019, and that's gonna be the patch that's going out, where we look at the physical interface and reduce it by 50, so that should work for GCE as well. GCE should see the container networking interface as 1410, or something like that. That is my expectation, but I would like to give the patch to Peter and try it out on 2019.
A
All right, next item on the agenda. Yeah, actually, well, all right, I'm seeing people pop up here. Unfortunately, the next item on the agenda is essentially a ticket; the owner, Bobby, placed it also on the meeting notes, but I'll paste it here in the chat right now. This issue outlines that in Server 2019 you can only create one container when you have a shared network stack on a single host. It seems that it was fixed at some point and then regressed in the latest 2019 build. Patrick has kind of asked folks if they can go ahead and create some test cases in the Kubernetes repo that can actually help us identify this issue and potentially catch other regressions, but, you know, whether they have the bandwidth or the expertise to go ahead and create that, I guess the bigger thing here is to see if someone from the networking team at Microsoft can take a look and try to reproduce this.
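Since the ask here is for test cases in the Kubernetes repo that would catch this class of regression, below is a rough sketch of the shape such a check could take, written against current client-go rather than the actual e2e framework; the node name, namespace, and image are placeholders.

```go
// Rough sketch (not an actual kubernetes/kubernetes e2e test) of a regression check for
// the issue above: put two Windows pods on the same node and verify both come up.
// Node name, namespace, and image are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	names := []string{"shared-netstack-a", "shared-netstack-b"}
	for _, name := range names {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{
				NodeName: "win-node-1", // pin both pods to one Windows node
				Containers: []corev1.Container{{
					Name:    "c",
					Image:   "mcr.microsoft.com/windows/servercore:ltsc2019",
					Command: []string{"ping", "-t", "localhost"}, // keep the container running
				}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// With the regression, the second pod's sandbox fails to get networking; both should reach Running.
	time.Sleep(60 * time.Second)
	for _, name := range names {
		p, err := client.CoreV1().Pods("default").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(name, p.Status.Phase)
	}
}
```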
G
Sorry, if you're talking about the issue we found, it wasn't related to a specific test at all.
A
Just to be clear, this is the issue where you can't have a shared endpoint, because it already tries to attach another endpoint... endpoints across multiple pods, right? Did I get it?
G
Yes. Okay, so, now, one, it's not necessarily related to a specific test; it happened all the time, yeah, in a run. So there's not one specific test that will recreate this.