From YouTube: Kubernetes SIG Node 20200618
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
A
The 1.19 dates still seem to be changing, so from my perspective, whether it's 1.19 or 1.20, we should still look to do the work. We can then work with SIG Release to see whether, as part of the fluctuation in dates, there's any issue in getting this into 1.19, but I don't know the answer off the top of my head right now. I'm sorry, hello.
B
A
A
B
C
Related to this PR question: there was a proposal at the end of last year about adding labels that would disclose the current policy of the Topology Manager as node labels. That can help with node selector restrictions when placing workloads, so I think it probably makes sense to bring that old proposal back into the discussion, because it's a helpful feature in my opinion. Yes, the list of labels.
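(As a rough sketch of the idea discussed above: if the node advertised its Topology Manager policy as a label, a workload that needs single-NUMA alignment could select on it. The label key and image below are hypothetical, since this proposal had not been implemented at the time of the meeting.)

```yaml
# Hypothetical sketch: a pod selecting nodes by an advertised
# Topology Manager policy label. The label key is illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: numa-sensitive-workload
spec:
  nodeSelector:
    example.kubernetes.io/topology-manager-policy: single-numa-node
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```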
A
So I guess that's the one caution, Alex, I would have on that, and it's similar to why, when the node feature discovery component was proposed, changes were requested to do the labeling centrally and not from each node: we could also look to see if that component could take on the labeling responsibility as well, yeah.
A
A
B
A
B
B
On the picture you can see the 5G deployment in the standalone version, which is the pure 5G deployment. If we had a deployment based on the 4G infrastructure, that would be the non-standalone deployment, and there is a difference: the non-standalone deployment would be much more complex, because it would be connected with the 4G infrastructure.
B
The important thing in 5G is that it uses microservices in its components, so the 5G core network will consist of different microservices, and thanks to that it allows a telco to build a scalable mesh network of the core system in the cloud.
5G also implements CUPS, which is the control and user plane separation, and thanks to that each cloud can be expanded efficiently: the core cloud can grow separately and the edge cloud can grow separately.
B
There has to be the best performance possible. The role of the UPF, the user plane function, is inspection of the packets, switching the packets, and also network slicing, so we can have different paths for packets based on the type of traffic. The UPF is very important, and it is distributed: it can be on the far edge of the deployment, in the edge, and in the core.
B
A
I guess the other thing I'm trying to keep in mind here as well is that I've seen use cases suggesting that environments processing or dedicated to vRAN may benefit from real time, and so I've also been trying to think ahead to see if there are any scenarios where the performance-related topics we raise in this broader community would also eventually advocate for the use of a real-time kernel.
D
Sorry, can I add something? I think Intel had a presentation that involved a real-time kernel to reduce the latency, so I think it was considered for the deployment. The main goal is to reduce the latency, and by reducing the latency we can also increase the throughput of the network. So, okay, I can provide you a link to this presentation afterwards.
A
D
B
C
C
B
C
B
B
So this is the specific use case for the pod scope of the Topology Manager. The background is basically a hardware example of how the user plane function can be deployed. First, we want to have servers that will serve specific services, so we want separate servers for the UPF, separate servers for the vRAN, and for the DB. In the example we have five servers: two are for the UPF, two are for the vRAN, and one is for the DB.
B
So in this scenario we can deploy two UPFs on that server, because the UPF has a high requirement for exclusive CPUs. In the third point there is an example: a UPF requires 12 exclusive CPUs, and a dual-socket server will have around 32, plus a few additional CPUs for the host.
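(A minimal sketch of what such a UPF pod could look like, assuming the kubelet's static CPU Manager policy: a Guaranteed QoS pod with an integer CPU request receives exclusive CPUs. The image name and memory size are illustrative.)

```yaml
# Illustrative only: Guaranteed QoS (requests == limits, integer CPU count)
# so the static CPU Manager policy can allocate 12 exclusive CPUs.
apiVersion: v1
kind: Pod
metadata:
  name: upf
spec:
  containers:
  - name: upf
    image: registry.example.com/upf:latest   # placeholder image
    resources:
      requests:
        cpu: "12"
        memory: 16Gi
      limits:
        cpu: "12"
        memory: 16Gi
```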
C
But my question still stands: the current generation of CPUs has 40 or more cores per socket, AMD has sixty-something cores per socket, and the number of PCI buses and PCI cards a two-socket server is capable of running is also, I don't know, eight or more. So my question is about your expectation: will it be like in your picture, just two high-priority workloads, or could it be, if resources are available, up to dozens of them?
B
Okay, so for the initial generation of the 5G deployment, the hardware will be determined by the software. We want to have a dual-socket server with an amount of CPUs that will serve, for example, two UPFs, so the hardware is dedicated to the UPFs here. It's not the other way around, where we take hardware with a high number of CPUs and place as many UPFs on it as possible. Alright, so, yes.
C
B
C
I was just curious how much over-provisioning of resources, or potential over-subscription, is happening for, I don't know, maintenance or kube-system kinds of processes that you cannot get around. Whatever Kubernetes brings in anyway, there are a few other processes that are not your workload, just things in the system that will still be running.
C
B
Right, and that's why there are those two or four additional CPUs, to have this extra margin of safety here. And for the vRAN part, the first generation of the vRAN will probably use a single-socket server, so there won't be any NUMA problem there; but future generations will probably use another type of hardware, and still the hardware will be determined by the application, similar to the UPF.
B
So, getting to the point: what is the use case for the pod scope of the Topology Manager? The expectation is that we will have one CNF, so one UPF or vRAN, per socket. The benefit of that is that you can count how many free NUMA nodes, how many free sockets, you have based on how many CNFs you have deployed and how much you can scale; and the second benefit is that you can scale those UPF pods based on the load.
B
So when the traffic is low you can scale down the UPFs, and when traffic congestion is expected you can scale them out. It can also be useful when network slicing is required, so you can have different UPFs for the network slices. This is the use case of the pod-scope Topology Manager, and actually the pod-scope use case is for the UPF, not for the whole deployment.
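(For context, a minimal sketch of the kubelet configuration this use case implies: static CPU management with the Topology Manager aligning all of a pod's containers to a single NUMA node. The pod-scope option is the feature under discussion and had not yet been released at the time of this meeting; the reserved CPU values are illustrative.)

```yaml
# Sketch of a KubeletConfiguration for pod-scope NUMA alignment.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static              # exclusive CPUs for Guaranteed pods
topologyManagerPolicy: single-numa-node
topologyManagerScope: pod             # align the whole pod, not each container
reservedSystemCPUs: "0,1"             # the "two or four additional CPUs" kept for the host
```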
D
C
C
But looking at different profiles of workloads, what I can say from my experience is that this assumption of one socket being one NUMA node is really bad in terms of optimizing the performance of workloads. For some scenarios, like DPDK-based ones, it might be a good assumption, but if we start to look at smaller workloads, or more aware workloads that know how to deal with resources, or scenarios where the memory controllers are running in a different mode, it might give an additional several percent of performance.
A
Mmm-hmm. Just to clarify my thinking when reviewing how we might contrast the work Alex had presented, which was more of a per-pod policy approach, versus a per-node policy approach: the reason I asked the real-time kernel question was that if the workload will inevitably desire a node configuration that breaks the container-host isolation boundary, which is kind of how I view the need to run on a real-time kernel, then it could be that those use cases are where a per-node policy will make total sense, because you have a specialized node configuration the moment you need a specialized variant of your OS. And so maybe we can build a bit of a decision matrix as we start to think through these things: for Kubernetes, on this class of workload where you desire this variant of an OS, you may desire a per-node configuration policy rather than a per-pod policy as a user. But anyway, that's where I was coming from with that mindset. Maybe that resonates with you, Alex, but yeah.
C
Regarding real time, we actually had a discussion, I think last year; some people in the community came to us in a private discussion, I think from the French telecom operator Orange. They had an idea about how to use real-time scheduling parameters for containers, so we discussed this. The OCI spec knows about the real-time quota parameters; they're just not exposed upwards.
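(For reference, the real-time quota parameters mentioned here are the realtimeRuntime and realtimePeriod fields under linux.resources.cpu in the OCI runtime spec. The actual bundle is a JSON config.json; the fragment is shown as YAML for consistency with the other sketches, and the values are illustrative.)

```yaml
# OCI runtime spec fragment (normally JSON in config.json); values illustrative.
linux:
  resources:
    cpu:
      realtimeRuntime: 950000    # maps to cgroup cpu.rt_runtime_us (microseconds)
      realtimePeriod: 1000000    # maps to cgroup cpu.rt_period_us (microseconds)
```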
A
C
A
Potentially, yes. Either way, I was just trying to call out a heuristic we can follow when thinking about approaches to these problems and how we choose to favor one approach over another, depending on the other things that might be configured on that node. So, unless there are other questions you want to cover on this topic, that runs through the list of agenda items. Were there items not captured on the agenda that people want to raise?
C
C
C
Well, I would definitely be a bit more interested in a classification of the control plane workloads, just to estimate, for example: are we talking about dozens of pods? Hundreds of pods? Do we have some specific requirement for performance, or reliability, or something else?
B
A
Excellent, and I know there are some folks at Red Hat that are exploring real-time-oriented use cases as well, so maybe we can look to connect a broader community of folks so we can better share knowledge here. So if there are no other topics that people want to raise today, we can adjourn. A big thank you to the team at Samsung for presenting, and I'm glad we're making some progress here.
C
A
Yes, sounds good. I will try; I don't know if I can get the right folks from the operating systems side here for our next meeting, but I do think that would be an interesting discussion for the community, so let's try to queue that up. All right, well, thank you everyone, have a great evening and rest of the week, and we'll see you next week. Bye guys.