From YouTube: Kubernetes SIG Windows, 2020-04-21
A: Actually, one of the intentions that the Kubernetes, let's call it, PM leads and some of the previous release leads had was: hey, we're going to delay things and not get started on 1.19 just yet. But from the moment 1.18 kind of shipped and the branches were unblocked and unfrozen, people just started throwing things in there, and then effectively the release just starts at that point. So it makes no sense for anybody to delay things, so essentially 1.19, I think, kind of got started.
A: So there are a lot of things that are up in the balance right now, mainly because of resourcing that we have to think about. But as you are starting to navigate that, make sure you are aware of Kubernetes 1.19 and some of its milestones; I'm going to mention them here in a second as well. I'll share this, and I'm going to put it on the notes too, but this is the 1.19 release leads. So if you have any questions on 1.19, you need to tag someone.
A: Well, that's for cherry-picking, and that's actually important. So if we want to cherry-pick something into 1.18 or 1.17, these are the folks it would need to go through. Okay, by the way: after the snafu last week, I had to put only the voice of the recording on YouTube; I couldn't put up the video, since I shared something incorrectly. Lesson learned, for Michael. Cool, and then that's it for 1.19. Any questions or any concerns?
D: So what I've been noticing is that, since the IPVS and iptables kube-proxy code bases share a lot of common code, whenever updates go into the Linux kube-proxies they don't make it into Windows, because we have our own code. So someone did a refactor for Linux so that they would share everything, but that didn't happen with Windows, or didn't happen with Windows in mind. So that's a project that, if someone is looking for something to contribute to in 1.19 or 1.20, anyone can take up.
D: What this would mean is that the basic kube-proxy code, in terms of how we handle endpoint updates and so on, would all follow the same common code, so that in the future things like EndpointSlices would just automatically have Windows included. So if someone wants to take that up, I'm more than willing to review PRs, work with Rob and with you, and help explain anything. So yeah, just let me know.
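The shape of the refactor being described can be sketched as follows. This is a minimal illustration only: the types and names are simplified stand-ins, not the actual `k8s.io/kubernetes/pkg/proxy` API. The idea is that the endpoint-change tracking lives in shared code, and each platform implements only the final sync step, so a feature like EndpointSlices lands once for all platforms.

```go
package main

import "fmt"

// Endpoint is a backend address for a service (illustrative, not the real type).
type Endpoint struct {
	IP   string
	Port int
}

// Proxier is the platform-specific half: each OS implements only Sync.
type Proxier interface {
	Sync(endpoints map[string][]Endpoint)
}

// ChangeTracker is the shared half: it accumulates endpoint updates the
// same way regardless of OS, so new endpoint sources plug in once.
type ChangeTracker struct {
	state map[string][]Endpoint
}

func NewChangeTracker() *ChangeTracker {
	return &ChangeTracker{state: map[string][]Endpoint{}}
}

// Update records the latest endpoints for a service.
func (t *ChangeTracker) Update(service string, eps []Endpoint) {
	t.state[service] = eps
}

// Apply pushes the accumulated state to the platform proxier.
func (t *ChangeTracker) Apply(p Proxier) {
	p.Sync(t.state)
}

// WindowsProxier would program HNS in reality; here it just records the call.
type WindowsProxier struct{ LastSynced int }

func (w *WindowsProxier) Sync(endpoints map[string][]Endpoint) {
	w.LastSynced = len(endpoints)
	fmt.Println("synced", len(endpoints), "services")
}

func main() {
	tracker := NewChangeTracker()
	tracker.Update("web", []Endpoint{{IP: "10.0.0.1", Port: 80}})
	tracker.Update("db", []Endpoint{{IP: "10.0.0.2", Port: 5432}})
	tracker.Apply(&WindowsProxier{})
}
```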
F: Okay, sorry, I'm reconnecting the dots now. So yes, we have someone, and I don't know the timeline; we've basically been asking him for status, so hopefully he should be able to get to it fairly soon. He's with Rob on the Slack channel, in the private section. I haven't seen any ETA from him on that, but we definitely have someone assigned to it. That works.
B: Yeah, I think this is an ask that Mose has been getting: to update the CRI APIs to add pagination support for some of the operations. I think this was brought up here because neither Mose nor I have much experience working with SIG Node, and we just wanted to sanity-check and validate this idea before we open an initial enhancement request against SIG Node to support it. Is that correct, Mose?
C: True. So yeah, some customers are running into this, especially the advanced customers: without paging, when they run the crictl ps command, the output isn't pageable and they are not able to see the results. So we need to go to SIG Node and make them, or convince them, to make the change. I wanted to bring it here because SIG Windows should be aware, and I also need to understand how we can convince the SIG Node community. Yeah.
A: They say they're overwhelmed with requests and capacity, and the capacity is just not what it should be, so they're having trouble across the board there. Can I ask you a question: what kind of operations are they looking to paginate? Is it, for example, statistics and metrics? Is it about pods? What are some of the request endpoints, or some of the requests, where they're looking for this pagination and where it becomes a problem at scale?
C
So
far
we
have
seen
the
the
list,
containers
and
list
pod
that
that
run
into
this
issue.
I
can
get
exactly
more
data
from
customer
on
what
they
were.
But
you
know
what
we
have
seen
so
far
is
people
are
trying
to
get
this
container
to
this
part
and
there
there's
the
densities
too
much
and
they're.
You
know
it
goes
out
of
the
page
and
it's
not
pages,
but.
A: Once we have the data, the best way that I've seen to work with SIG Node is to basically go and sit down with them: attend a meeting, put it on the agenda, and talk about it. So plumb it in there, make sure they accept it, and then basically attend the meeting. I'm actually trying to scour through the docs right now to figure out what the limit is; it looks to be like 500 pods per node, so...
A: This is the link to the max numbers Kubernetes supports today for large clusters: no more than 150,000 total pods, but that's across 5,000 nodes, right? On a single node it's a hundred pods. And I've seen threads where they're trying to increase that to 500 pods per node, but 100 is the limit. So when you actually bring this to the Kubernetes SIG Node team, make sure of that if you go and say, hey, I would like to support a thousand pods on that node.
A: I mean, here's how I view it, right? It depends on who your consumer is. If, for example, I build a user interface that will manage Kubernetes and Kubernetes nodes, and I'm asking for data all the time: yes, pagination is very important, because I want to get the data in a way that's consumable. But that's really necessary when you're looking at a lot, right, when it's at 18,000 results. If it's only a hundred, you can get the hundred in memory and just list them any way...
A
You
want
or
slice
and
dice
it
right.
So
so
that's
why
you
know
the
reason
why
we
might
want
this.
Pagination
can
be
very
important.
That's
why,
as
initially
for
metrics,
are
you
looking
at
metrics
because
you
can
get
hundreds
and
thousands
of
metrics
right
if
you're
looking
at
a
big
interval?
So
it's
important
to
understand
where
this
is
where
this
is
going
to
be
needed.
So
you
can
build
the
right
business
case.
Otherwise
signals
will
not
accept
it.
Yeah.
B: So, in order to support targeting different OS versions with Hyper-V isolation, we needed a way to pass the runtime to containerd during image pull time, to make sure that we can get the right pause, or infra, container started. There was an enhancement, or a document, that went out, which is linked to in that pull request, that describes it, and the PR implements it. I think that's one of the next steps needed for Hyper-V support, so if anybody is interested, please take a look.
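The core decision being plumbed through can be sketched as a tiny function: given the host's Windows build and the build an image targets, pick a runtime handler. This is a hypothetical illustration, assuming runhcs-style handler names; the real annotation contract between the kubelet and containerd is the one described in the linked document, not this.

```go
package main

import "fmt"

// runtimeHandlerFor picks a handler for an image built against a given
// Windows build: process isolation when the builds match the host,
// otherwise a Hyper-V isolated handler for that build.
// Handler names are assumptions for illustration.
func runtimeHandlerFor(hostBuild, imageBuild int) string {
	if hostBuild == imageBuild {
		return "runhcs-wcow-process"
	}
	return fmt.Sprintf("runhcs-wcow-hypervisor-%d", imageBuild)
}

func main() {
	// Host is build 17763 (Server 2019); one image targets 18363 (1909).
	fmt.Println(runtimeHandlerFor(17763, 18363))
	fmt.Println(runtimeHandlerFor(17763, 17763))
}
```

This is why the runtime has to be known at pull time: which pause/infra image is valid depends on which handler will run the sandbox.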
B: I'm not sure about that. One of the reasons why I was raising this early in the 1.19 release is that, after this, containerd needs to do a bunch of work in order to consume those annotations, so I wanted to unblock the folks at Microsoft who would be working on containerd. I think it largely depends on the scope of that; we may be able to get something testable in 1.19, but I don't think it's going to be stable per se. Cool.
A: By the way, for 1.19, in addition to some of the things I mentioned earlier, they're also trying to have an extended timeline, so the release is going to be 15 weeks rather than 12; there are three extra weeks. The PR is not approved yet; I added it to the notes. It's PR 1058 in sig-release. Okay.
A
And
my
comment
on
on
the
meeting
was
that
we
needed
to
support
two
si
si
releases
two
sacks
because
of
the
fact
that
kubernetes
could
release
right
after
the
latest
sac
release
or
like
a
couple
of
months
later
and
folks
may
not
have
had
into
the
capacity
or
when
I
say
folks,
I
mean
our
customers
may
know.
I
had
the
bandwidth
of
the
capacity
to
upgrade
just
yet.
So
it
is
possible
that
be
advantageous
for
us
to
support
the
last
two,
for
example,
I.
A: I agree with you, but here's one problem; we've kind of seen this in both situations. There are customers still on Kubernetes 1.14; they don't upgrade Kubernetes often. Maybe for Windows they upgrade a little bit more often, because we're moving at a faster pace, but they definitely don't do that for Linux, right? So we have two kinds of islands that are almost immovable: the people that stay on an LTSC don't move, and the people that get a solid Kubernetes production environment don't want to move off it either, right?
A: So both of those become a problem. Let's say Kubernetes 1.19 ships, and we say we're only going to support SAC release 1903, just to give an example, or 1909. Then some folks might still not update their Kubernetes version for a while, but they might decide to update their SAC release. So what do you do at that point?
A
So
it's
I,
don't
know
if
there's
a
if
that's
a
good
planner
or
a
good
release
or
a
good
mechanism
here,
but
for
us
other
than
the
fact
that
it
costs
us
a
little
bit
more
in
testing
to
test
with
one
more
sock
release
and
the
fact
that
you
have
a
lot
of
flexibility
in
terms
of
hey,
run
your
tests
one
day
with
one
release
another
day
with
another
release
in
my
last
day
with
another
release,
I
think
that
just
gives
a
few
weeks
relief.
We
want
to
support
free,
we
could
I
mean
they
they.
G
And
what
we
might
improve
with
that
in
terms
of
what?
So,
that's
why
you
are
saying
Michael
that
we
can
run
one
day
with
one
release
the
other
day
with
another
and
so
on.
That's
not
I
mean
not
a
circus
trivial.
The
way
the
sets
are
set
up
right
now
in
in
this
I
mean
I.
Guess
we
can
do
that
if
we
time
the
interval
properly
but
having
it
change,
I
mean
actually
changing.
A
So
I'm,
basically
talking
about
you,
know
what
I
said,
and
they
pick
an
image
port
for
the
node,
as
well
as
for
the
containers
right
and
have
that
be
kind
of
a
proxy,
so
that
the
actual
image
that
you
that
you
target
every
day
is
a
different
one
and-
and
we
can
change
it
in
automation.
That
way
like
that,
we
just
rotate
and
both
the
node
and
the
container
images
once
they
get
updated
and
they're.
All
the
test
cases
will
run
based
on
a
different
version
of
the
West.
A: Yeah, we don't need to design it here, Adelina. Maybe after you finish some of the work with the Nano Server, take a look at this and see if there's an optimization that you can make. I think, ultimately, there is a system somewhere that reads a value from something else, say a file, which basically dictates what kind of test to run. This is a way for us to interject into that process and basically go in a round-robin manner to select a different option.