From YouTube: Kubernetes IoT Edge WG 20201202
Description
December 2, 2020 meeting of the Kubernetes IoT Edge Working Group. General "birds of a feather" discussion of topics related to running applications at edge and supporting IoT using Kubernetes.
A: So this is the December 2nd meeting of the Kubernetes IoT Edge Working Group on the North America time cycle. I'm going to put a link to the agenda notes document in the chat; as usual, members are welcome to put any topic they'd like to discuss there. I looked last night and I didn't see anything queued up, but when that happens we'll just turn this into a freeform birds-of-a-feather discussion.
A: Let me take a look — anybody? Yeah, it looks like there are no links in the agenda now, so I'll just start with a few things.
A: Anyway, something that happened the week before last was the virtual KubeCon North America, so we had a session. I didn't look, but I think it's probably not up on YouTube yet; when it is, I'll put a link to it in our group's channel. Dion and I talked about using event-driven architecture at edge, and we had a demo of doing that where I hooked up a small temperature/humidity sensor in my house and went literally halfway around the world to Dion's installation of Grafana.
A
There
were
some
other
interesting
sessions
there
that
I
attended.
I
think
cube
edge
had
one
that
I
missed.
Red
hat
had
a
pre-event
of
their
openshift
commons
and
they
had
about
three
sessions
on
edge.
Some
of
them
were
fairly
centric
to
the
red
hat
stuff,
but
some
some
were
pretty
generic
and
I
thought
they
were
all
pretty
good.
I
don't
have
the
links
right
now,
but
I'll
I'll
put
put
them
in
the
agenda
notes.
B: Sure, I can start. So for KubeCon, Dan and I and a few folks attended the edge panel discussion with a bunch of media. There were some good questions asked. I believe the talk was recorded; feel free to go there and take a look.
B: Actually, I've attended three or four such panel discussions at past KubeCon conferences. To be honest, I've never seen any article talking about them — maybe I missed it, but I've been asked questions and then never seen articles about it. I think it's mainly what you're saying about the media press; I don't remember their names, but there was a list of people there.
A: Yeah, when Alex Williams of The New Stack does it, he usually writes a blog and then puts something out about it on Twitter. I'll see if I can find them; so yeah, sure, if I come across it I'll let you know. Occasionally, if they were impressed with your answer, you can even search for your own name.
B: Okay, and then the other two things I'd like to share with you guys: recently I saw an article where someone doing research is suggesting that for edge computing, this year or 2021 is going to be the inflection point. He's foreseeing some pickup, a growth of edge computing. I really look forward to it.
A: Yeah, I think people have been saying that for years, but maybe it's arguably already here to some extent too; it depends how you measure it. There have been a lot of these analyst things forecasting that X percent of compute moves from a central public cloud to the edge. And here's another thing I forgot: I've been lurking in Amazon re:Invent sessions. Some of those have been really good, and Amazon just announced an on-prem version of their EKS Kubernetes, and they have a number of other edge-related things that I don't think are Kubernetes. But so far what they did was announce these things in a keynote with very vague descriptions, so you don't know until they have a breakout session — and these are straggling out over the course of the next two weeks, I think — so you're not entirely sure what the deep-dive details are on these.
A
So
they
did
announce
some
interesting
stuff
related
to
machine
monitoring
and
edge
like
for
industrial
iot,
where
they
have
devices
that
apparently
will
collect
audio
temperature
and
vibration
from
machinery
and
they'll
watch
it
for
a
while
to
try
to
use
a
some
sort
of
machine
learning
analysis
to
detect
what's
normal
and
then
inform
you
if
their
machine
learning
thing
up
in
the
cloud
thinks
there
might
be
an
anomaly
detection
and
it
sounded
rather
interesting
and
what
I
just
told
you
is
the
full
extent
of
the
details
they've
announced
so
far.
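As an aside, the baseline-then-flag pattern described here (watch a signal for a while, learn what's "normal", then flag deviations) can be sketched in a few lines of plain Python. This is purely illustrative — the window size, z-score threshold, and sample data are assumptions, not anything Amazon has published:

```python
import math
from collections import deque

def make_anomaly_detector(window=50, z_threshold=3.0):
    """Learn a baseline from a sliding window of readings,
    then flag readings that deviate too far from that baseline."""
    history = deque(maxlen=window)

    def check(reading):
        # Only start flagging once we've watched the signal "for a while".
        if len(history) >= window:
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            is_anomaly = std > 0 and abs(reading - mean) / std > z_threshold
        else:
            is_anomaly = False
        history.append(reading)
        return is_anomaly

    return check

# Vibration amplitude hovering around 1.0, then a sudden spike.
detect = make_anomaly_detector(window=20, z_threshold=3.0)
readings = [1.0 + 0.01 * ((i * 7) % 5) for i in range(30)] + [5.0]
flags = [detect(r) for r in readings]
```

The detector refuses to flag anything until it has seen a full window of readings, which mirrors the "watch it for a while" behavior described above; a real product would of course use something far more sophisticated than a rolling z-score.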
A
You
know
to
software
running
in
one
of
the
aws
data
centers
and
does
this
analysis-
and
it
strikes
me
as
an
entry
by
amazon
into
this
space,
whether
it's
kubernetes
oriented
or
not.
I
don't
know
I
my
speculation
would
be
that
for
that
particular
one.
It
probably
isn't
kubernetes,
you
know
just
like
their
alexa
speaker
devices
are
running
kubernetes.
A
What's
on
the
back
end,
who
knows
and
they're
probably
never
going
to
tell
you,
but
they
do
have
an
on-prem
distribution
of
kubernetes.
That
is
described
to
be
like
the
eks
distro.
They
run
in
their
cloud
and
they
did
announce
that
it's
going
to
be
open
sourced
and
I
went
and
looked
and
indeed
on
the
aws
github.
A: They described it as on-prem, but if you would consider on-prem to be edge — and certainly in many on-prem deployments it is — then you could perhaps look at that as an Amazon Kubernetes edge distribution. Anyway, Cindy, you said you went to some of the re:Invent too. What did you find interesting?
A: So if you don't catch these things live — unlike pretty much every online conference I've been to lately — you show up a half hour late because you had a conflict and you missed it. And they don't really seem to be getting that much press coverage either; for a lot of these things that I missed, I went in thinking, oh gee, somebody must have gone there and they'll write an article about it, but I didn't find much.
D: Thanks, John. Did they talk much about — I haven't sat through any of the events they're doing right now. It's not surprising what they're doing on the machine-learning side in the industrial space. I was kind of curious if they got any more in depth on Sidewalk, because that's kind of really their... well.
A: They haven't given the details, and it isn't like — Sidewalk, I believe, is oriented towards the Ring devices and things like that, that maybe create a mesh. But they did show teasers of different hardware platforms. For what they called — I forget what they called it; I think they might have called it retail edge or something — they specifically called out a model of it being appropriate for restaurants, and they described hardware options of either one or two pizza boxes.
A
So
they
would
provide
some
sort
of
an
appliance
that
is
capable
of
temporary
severed
operation.
If
need
be,
and
connectivity
up
to
cloud
and,
like
cindy
said,
they
did
announce
that
their
their
prototyping
an
effort
to
go
to
a
selected
number
of
cities
with
regional
clouds.
If
you
will
so
los
angeles,
is
in
what
sounded
like
some
sort
of
beta
program-
and
they
didn't
call
this
out.
But
I
have
to
conjecture
it's
an
attempt
to
remove
some
of
the
latency
chasm.
A
You
know
if
you
had
to
go
back
to
amazon,
east
or
west
or
whatever
from
wherever
that
isn't
necessarily
going
to.
It
might
take
longer
than
parking
a
data
center
right
in
your
own
city
and
there
might
be
resiliency
issues
where
if
some
part
of
the
internet
went
out,
maybe
your
hopes
are
better
if
the
it's
regionally
located-
or
maybe
this
is
related
to
a
radio
frequency
mesh
that
they'd
go
to
later,
so
that.
A
D
D: Sidewalk is intriguing. I won't get into the weeds on it from a technical standpoint, but what they've done in essence is take all their BLE devices — that includes all your Alexa Dots, anything they've got out there that's got BLE in it — and they've basically been able to layer LoRa on top of it. So the LoRaWAN protocol, which is primarily slow-speed, long-range, allows you to transmit data at slow speeds; it's meant for data telemetry at slow speeds.
D: They've basically taken the LoRa protocol itself, I guess, and laid it on top of an existing BLE chip, which has limited range at that power. Because they're building a mesh network, they figure they've got, you know, millions of devices.
D: With that many devices in close proximity, they're going to be able to basically build an indoor LoRaWAN kind of capability inside residences all over the place, and that's got really intriguing possibilities, because we already have very-long-distance LoRa networks. Like, I have one here on my roof — I'm in downtown Long Beach, and I cover all of LA County with a single antenna. That's a long way: I can reach all the way above Malibu and all the way down into parts of Orange County.
D: With the setup I have running on the roof, I can send data all day long. What's interesting is that when you start getting inside the house, you can see how this is going to change a lot of stuff when you start talking about smart homes and a few other things. Amazon plans on using that as a mechanism to gather more data, do more machine learning, and offer up new services to homeowners all over the place. So anyways, it's just...
A: They're doing enough R&D spend that they're in a position to revolutionize the world by many measures, as they've arguably already done over the past 10 years, so yeah, this is worth taking a look at. Really, what I caught was the keynote, where the speaker just throws out ten different things and doesn't go deep on any of them.
A: And they're really scattered all over in time. I was kind of shocked on opening night: because I had just filled out my registration, I went online and they gave this teaser saying make sure you join the first night because there'll be a major announcement. And you're going, oh yeah, I've been to these conferences; it's usually, you know, like a warm beer wrapped in a napkin if it's a physical one, and a bunch of hype. But they actually did make a major announcement.
A
It
was
like
at
eight
p.m.
On
the
the
night,
what
you'd
think
would
be
the
night
before
the
big
stuff
and
I'm
sure
for
these
edge?
The
stuff
will
come
out
the
other
thing
they
had
those
pizza
boxes
kind
of
for
retail,
but
then
they
have
another
hardware
line
that
is
hardened
for
industrial,
and
I
think
I
might
have
seen
a
tiny
little
picture
of
like
a
cube
that
looked.
Maybe
I
don't
know
nook
like
or
pie
raspberry,
pi
sized
or
something
that
they.
A: It was environmentally hardened, and that would be more the device that would do data collection or cloud uplink for other use cases. But it clearly looks like they plan an entry into the space.
E: I have a question. I just today found out about the ETSI multi-access edge computing group, which is basically there to bring a standard for edge computing. Has anybody in here heard of that?
E: Because it sounds a lot like Kubernetes, but I can't find it anywhere, and I was just wondering if there's some kind of standardization going on and Kubernetes is not part of it.
B: I know some companies are forming a standard around ONAP. This is more about the network edge: the thing you're mentioning, multi-access edge computing, is for networks like 5G, because OpenStack and the networking industry are adopting ONAP, the Open Network Automation Platform.
B: It's not exactly related; it's more about enabling edge network solutions, yeah.
A: Yeah, I'm not even that familiar with this organization, ETSI. I see on their page that it's a European standards organization, so maybe it's just that I'm too North America-centric to have heard of them.
E: Yeah, a few of my colleagues all over Germany are joining in there, so I was wondering, because they also talk about video analytics, Internet of Things, augmented reality and stuff like that, which of course are very closely edge-related. So it didn't hit me as being about 5G.
C: There was some kind of event where essentially you could go and get your edge platform checked against the ETSI standards and guidelines, that kind of stuff. And so typically in their marketing they would say that their fog OS is ETSI-whatever compliant. So, okay.
F: Maurice, also to add there: coming from an academic background myself, we kind of work not directly with ETSI, but we follow their standardization, and it's only now that they are getting more into the edge computing spectrum. Before, there was this kind of confusion — edge computing versus fog computing — and there was a fog computing consortium, but now there's a slight variation with edge computing. So ETSI is the European standards body trying to make sure that there is some standardization towards that.
F: Given those efforts, it's mostly for 5G and telcos.
A: So I didn't mention it when I brought up re:Invent, but just in case people weren't aware: the registration, now that it's online, is completely free. That used to be kind of a pricey conference before, and I think it was known to sell out — yeah, it was.
B: The other one I'd like to share is that there's an IEEE network edge computing summit coming up. I'll share the link with you guys; it's going to be virtual, you can register, and it's free as well. I was invited for a talk there. With IoT, 5G, AI, everything connected there, including the telco side, there will be some good information to share and learn.
B: Yes, I'm preparing my deck; it's not done yet, okay.
A: Yeah, by the way, the Open Source Summit Japan, I think, is going on right now as we speak — potentially conflicting with our meeting — and they've always had a lot of edge topics. The specialty of Open Source Summit Japan has always been automotive, just because the leading carmakers are there. So of all the Linux Foundation conferences covering the space, if you've wanted to learn about automotive-Linux-related things, Open Source Summit Japan is where the good sessions are.
A
So
the
other
thing
cindy
I
wanted
to
mention
to
you
is:
I
went.
Maybe
it
was
four
weeks
ago,
but
rancher
had
an
online
event
and
they
had
a
number
of
edge
sessions
that
I
thought
were
quite
good
and
kind
of
my
favorite
ones.
Were
the
microsoft
speakers
there
and
I'm
wondering
if
you
have
any
clout
to
maybe
recruit
them
to
represent
some
of
those
or
updates
on
those
topics
at
our
group
meeting
at
some
point
in
the
future.
B
Sure
I
think
I
attended
that
conference
as
well,
so
for
the
microsoft
one
I
believe,
is
external
company
work
with
renter,
but
using
azure
iot
edge
solution,
yeah
so
really
he's
not
an
internal
microsoft
employee.
That's
my.
A
Whoever
it
was,
I
really
liked
the
talk,
so
I
thought
it
was
good
and
maybe
that's
recorded
too
I'll
have
to
look
and
if
I
can
find
a
link
I'll
put
that
in
the
agenda
notes,
so
other
people
can
find
that,
but
I
found
it
kind
of
interesting,
especially
it's.
A
It
strikes
me
that
the
whole
azure
effort
on
addressing
edge
and
iot
has
changed
over
the
years
and
that's
probably
a
good
thing,
because
the
technology
has
been
moving
so
rapidly
that
if
you
design
something
two
or
three
years
ago
and
stuck
with
it,
it's
probably
not
the
way
you
do
it
today.
A: We might as well — why don't we tentatively, if anybody thinks of it after today, put in a slot for the next meeting; then we can just tentatively suggest potential speakers in the agenda notes doc and go try to recruit them. It's easy enough to delete them later if we can't get a commitment from a speaker. The other thing I guess we should address here is the upcoming schedule, at least in North America.
A
The
attendance
at
meetings
in
the
holiday
season
between
you
know,
mid-december
and
new
year's
is
pretty
light
because
people
take
time
off
work
and
also
don't
show
up
for
meetings.
So
whether
we
want
to
have
the
european
aipac
cycle
meeting
in
two
weeks,
I
don't
know
I'll
leave
it
to
others
to
decide,
I
think,
by
the
kubernetes
governance
standards.
A
But
and
dion
I
see
you
joined
late,
I
already
told
them
about
the
session.
We
did
at
kubecon
and
told
them
we'd
send
a
link
out
when
that
gets
uploaded
to
youtube
which,
as
far
as
I
know,
hasn't
happened.
Yet
I
think
the
cncf
usually
waits
a
few
weeks
after
the
conference
so
that
they
encourage
people
to
pay
the
50
buck
registration
to
be
able
to
get
access
when
it's
more
current.
A: One of the things covered was a version of the RHEL OS that was specifically intended for edge, and then there were kind of a few others. But I thought the speakers were good, and if we could get a follow-up presentation on those subjects by those people, it'd be great.
G: Yeah, we discussed it before, and I tried but wasn't too persistent; I'll try again, so yeah, let's try to do something. I mean, there's a lot going on with the OpenShift edge: there's now a three-node solution, there's now a remote worker node, and, you know, soon there will be a single-node...
E: I just had a really interesting conversation today with some guys from Siemens, where we thought about — when you run, let's say, production systems on the edge, you have your controller that's actually connected to the machine, and of course you can imagine, okay, this machine doesn't go anywhere, so I can actually hook it up into a Kubernetes cluster and deploy my manufacturing code.
E
Whatever
is
running
on
there
for
the
plc
or
whatever
is
connected
there
to
to
kubernetes,
and
I
can
deploy
it
with
that.
E
Of
course,
the
problem
you
run
into
is
that
suddenly
you
can
change
that
stuff,
really
fast
and
really
really
easily,
which
of
course,
is
nice
to
have,
because
you
become
quite
agile,
but
on
the
other
hand,
it
raises
the
question
of
of
certification,
because
you
can
like
have
your
controller,
controlling
your
machine
and
then
now,
because
we
run
kubernetes
on
there.
Some
other
party
can
like
inject
some
kind
of
pot
on
there,
take
over
the
control
of
this
hardware
and
maybe
make
some
kind
of
living
out
of
that
like
like.
E
But,
on
the
other
hand,
since
it's
like
sharing
the
the
the
controller
or
the
the
hardware,
with
the
controller
you
kind
of
like
like
influence
the
system
you
are
like
running
on
and
like
we
didn't,
come
up
with
a
solution
of
course.
But
I
was
wondering
if
anybody
is
actually
working
on
that,
because,
like
making
sure
that
your
software
that
you
build
is
not
like
influencing
more
than
it
should.
A: Do you mean this as a security measure, or just as kind of a bug-detection measure, or both? I think, pretty clearly, at least in the circles I've been running in, when people are architecting overall control planes for edge, whether Kubernetes-based or not, the security issue has come up as kind of the top-priority concern. In other words, if you can't impose governance and consistency on this, it doesn't matter what your performance is.
A: You know, here in the US anyway, there have been widespread outbreaks of these hijackings and ransomware things that hit hospitals, and once somebody pays off one of those, it just becomes like the gold rush for these black-hat hackers. And then within the last week, I saw a story that the first school district went down: the city of Baltimore in the US apparently was teaching their kids over Zoom, and whammo.
E: So we have really fast communication, and what can happen, of course, is that due to bugs or due to wrong system setup — because not all the systems are sharing the same hardware, for example — the part that is actually doing some kind of analysis influences the hardware itself. And then, of course, security loops and stuff like that don't work anymore, and you can run into some kind of situation where, let's say, some kind of end stop is hit.
A
So,
in
terms
of
that
being
an
issue
of
kind
of
imposing
resource
governance,
so
there's
a
couple
things
with
resource
governance
that
are
out
there
and
this
by
the
way,
isn't
new.
It's
maybe
sort
of
newer
to
containers,
but
these
kind
of
things
hid
in
the
virtual,
the
vm
world
over
10
years
ago,
same
kind
of
thing
where
you'd
have
a
hypervisor.
A
So
they
came
up
with
ways
to
partition
that
and
really
the
trick
to
virtualization
wasn't
just
to
create
vms,
but
to
do
it
in
a
way
where
you
could
assign
quotas
and
what
the
moment
you
get
to
quotas,
you
alpha
you
will.
I
will
predict
you'll
often
also
soon
discover
that
you
want
guarantees,
so
the
two
go
hand
in
hand
where
this
governance
layer
says
that
look,
this
particular
vm
or
container
or
pod
is
critical
importance.
So
I
want
to
give
it
minimum
of
one
dedicated
cpu.
A
I
don't
even
care
if
it
appears
to
not
be
using
it.
I
don't
want
you
to
hand
that
capacity
over
to
the
other
person,
because
when
this
critical
the
important
app
needs
it,
it
needs
it
now.
So
we'll
set
it
aside
and
be
willing
to
tolerate
some
waste
just
to
guarantee
that,
should
this
important
workload
ever
needed,
it's
always
guaranteed
to
be
there
and
for
these
others,
you
put
them
into
prior.
You
know
one.
A
This
is
an
implementation
detail,
but
you
can
assign
them
kind
of
priority
levels,
a
b
c
or
d
or
numbers
from
1
to
10
and
kind
of
when
the
when
the
people
at
high
priority
wanted
their
vips,
they
get
what
they
want
and
everybody
it's
trickle
down,
where
the
lower
priority
ones
suffer
the
consequences,
and
there
is
a
way
to
do
this
in
kubernetes.
Now,
however,
none
of
this
is
applied
by
default
and
it
perhaps
isn't
as
ironclad
and
it's
going
through
multiple
levels.
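In current Kubernetes, the quota/guarantee and priority ideas described here map roughly onto resource requests and limits plus PriorityClasses. A minimal sketch — the names and values below are made up for illustration; the field mechanics are the real Kubernetes ones:

```yaml
# Hypothetical names; requests/limits and priorityClassName are real fields.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-edge-workload
value: 1000000
globalDefault: false
description: "VIP tier: schedule ahead of best-effort pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: controller
spec:
  priorityClassName: critical-edge-workload
  containers:
  - name: app
    image: example.com/controller:latest
    resources:
      requests:
        cpu: "1"        # set aside a full CPU even if it looks idle
        memory: 256Mi
      limits:
        cpu: "1"        # requests == limits puts the pod in the Guaranteed QoS class
        memory: 256Mi
```

Setting requests equal to limits is what gets the pod the "set it aside and tolerate some waste" behavior; pods with no requests at all land in the BestEffort class that suffers first under pressure.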
A: Kind of the easy path is to ignore all these — not even read the docs to know they're out there — and your stuff just works. And it works until you run into a resource constraint, and then you find that it's completely unpredictable. There are people who run into this scenario and are puzzled: gee, it works, almost.
A
But
the
takeaway
is
that
you
should
probably
always
fill
out
these
resource
things,
but
you
have
to
have
it's
kind
of
self-discipline.
That
makes
this
happen.
Then
the
mechanism
is
there
inside
kubernetes
to
do
it,
but
much
of
this
is
pushed
down
to
the
linux
os
itself.
So
kubernetes
really
is
just
a
pass-through
and
where
they're
enforced
is
inside
linux
and
linux
can
run
on
a
hypervisor.
A
So
you
could
have
yet
another
layer
where
the
hypervisor
has
force
now
take
this
with
a
grain
of
salt,
because
I
am
you
know,
my
salary
is
paid
for
by
a
hypervisor
vendor,
so
I
might
be
lying
to
you,
but
I
think
that
the
hypervisors
have
are
thought
to
have
more
rigorous
enforcement
of
these
resource
quotas
and
limits
than
what's
available
right
now
in
the
linux
os,
particularly
when
it
comes
to
very
large
systems,
and
this
might
not
apply
to
typical
edge
deployments.
A
But
when
you
get
to
large
servers,
you
will
have
things
like
blades
or
motherboards
that
have
what
they
call
the
pneuma
architecture,
where
there's
non-uniform
latencies,
going
to
certain
parts
of
your
memory
com
from
one
cpu
core
versus
from
a
different
one,
and
it
has
to
do
with
how
those
boards
or
blade
systems
are
architected.
Where
you
have
memory,
that's
atta
and
I
o
that's
attached
in
close
physical
proximity
to
this
socket
and
because
you
needed
to
put
in
four
sockets
or
200
sockets
in
a
blade
by
necessity,
those
are
going
over.
A
Buses
that
have
latencies
and
the
hypervisor
evolved
to
be
aware
of
those
chasms
and
do
its
scheduling
so
that
your
workloads
are
kind
of
matched
up
in
assigned
memory.
A
That
is
known
to
be
close
to
the
cpu
core
you're
running
on,
and
I
I
believe
that
linux
itself
isn't
neces,
doesn't
necessarily
have
that
kind
of
code
in
it
to
do
those
sorts
of
things.
They
have
options.
You
can
turn
on,
and
one
of
the
things
you'll
frequently
find
is
there's
a
way
to
get
the
linux
schedulers
to
work
in
random
mode
so
that,
instead
of
trying
to
assign
you
memory
close
to
your
core,
they
always
randomly
give
you
memory
all
over
the
place
and
the
advantage
is
at
least
it's
deterministic.
A
It
might
suck,
but
it
will
always
suck,
and
you
won't
have
these
hard
to
diagnose
situations
where
one
day
it
ran
good
and
the
next
day
it
ran
poorly
and
you
don't
know
why
and
kind
of
the
kubernetes
flow
has
to
go
potentially
through
all
of
that,
so
it
becomes
very
complex.
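For what it's worth, Kubernetes does expose some NUMA-aware knobs through the kubelet: the CPU Manager can give Guaranteed pods exclusive cores, and the Topology Manager can insist on NUMA-aligned placement. A kubelet configuration sketch — the field names are the real ones; the values are illustrative assumptions:

```yaml
# KubeletConfiguration fragment (passed via kubelet --config=...)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static                 # exclusive cores for Guaranteed pods
topologyManagerPolicy: single-numa-node  # reject placements that span NUMA nodes
reservedSystemCPUs: "0"                  # keep core 0 for the OS and kubelet
```

These policies only kick in for pods that request whole CPUs with requests equal to limits, so the resource-spec discipline discussed earlier is a prerequisite here too.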
D: Yeah, one other aspect that you didn't touch on, honestly, Steve — and I think Kubernetes got it right — is observability, at least from the get-go, because this is part and parcel with it. With no visibility — if the control plane, etc., stays opaque — you're screwed; I mean, you don't really know what's going on. So I think from that perspective Kubernetes has done a hell of a job, at least addressing and adopting, you know, Prometheus and everything else.
D
You
know
scraping
metrics,
left
and
right
and
in
regards
to
protection
I
mean
getting
back
to
the
linux
kernel
stuff.
I
mean
you're,
seeing
a
big
push
right
now.
At
least
I
have
on
and
there's
several
companies
doing
it
on
implementing
ebpf
right,
berkeley
packet
filter
at
the
kernel
level.
D
Well,
you
where
you
can
stop
pretty
much
anything
from
transpiring,
which
is
pretty
powerful
in
regards
to
having
pods
overrun
other
pods,
if
they're
using
network
mechanisms
et
cetera,
but
the
the
you
know
that
kernel
level,
control
and
trap
is,
is
pretty
pretty
important
in
maintaining
some
semblance
of
control
in
the
long
run.
But
I
and
I'm
amazed
at
how
I
mean
obviously
kubernetes
extremely
fluid
environment.
You
know
I
I'm
always
amazed
by
the
next
thing.
D
That's
you
know
it's
just
around
the
corner
on
both
the
you
know,
upstream
roadmap,
as
well
as
third
parties
and
what
they're
doing
so,
but
I
think
the
observer
that
I
think
the
observability.
If
you
don't
put
in
observability
right
away
on
your
cluster,
you
know
you're
doomed
anyways,
because
you
you
need
to
know
what's
happening
and
and
what
nodes
are
performing
well
and
what
ones
aren't
what
resources
are
running
where
etc.
A: There are CNIs that don't really deliver that observability. Some of them use kind of crude networking implementations; others can use, like, the packet filter or Open vSwitch and things like that, which are all other options. And only some of them are capable of mapping that observability back to your Kubernetes objects.
A
Actually
like
to
know
if
you're
running
short
on
bandwidth
or
whatever,
who
exactly
is
using
it-
and
I
don't
want
to
know
that
it's
this
cluster
node,
because
I
might
have
five
things
running
in
the
cluster
node.
I
want
to
know
specifically
what
pot
that
maps
to
or
what
workspace
that
maps
to,
and
it's
my
observation
that
when
you
look
at
the
30-some
cni's
out
there
that
you
can
choose
from
some
of
them
are
better
at
that
kind
of
mapping
of
observability
back
to
your
actual
identifiable
workloads
than
others
are
yeah.
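How far you can take that mapping depends on the CNI and the metrics pipeline, but as one hedged example: if the kubelet/cAdvisor metrics are being scraped by Prometheus, a per-pod bandwidth breakdown looks roughly like the query below. The metric name is the standard cAdvisor one; the exact label names can vary by setup.

```promql
# Top pods by transmit bandwidth over the last 5 minutes
topk(10,
  sum by (namespace, pod) (
    rate(container_network_transmit_bytes_total[5m])
  )
)
```

This only attributes traffic at the pod-interface level; per-flow or per-peer attribution is where the CNI-specific (and eBPF-based) tooling differs the most.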
A: And then, in terms of roadmap — once again, this might be too expensive to deploy in really low-resource edge — there is a movement to support SmartNICs all the way up through into the CNI space. If you look at the NVIDIA variants, I think some of them may have come out of the Mellanox purchase they did.
A
But
the
idea
is
like
virtual
mix
that
can
be
assigned
set-asides
for
bandwidth
and
can
be
have
their
own
identity
for
purposes
of
observability
and
prioritization,
and
really
when
you
get
up
once
again
at
edge.
Maybe
this
is
wishful
thinking,
but
certainly
in
a
big
data
center,
where
almost
they
they're
moving
well
beyond
10
gig
up
to
40
100.
A
This
dividing
it
up
when
you
have
massive
bandwidth
can
make
a
lot
of
sense,
whether
dividing
it
up
when
you're,
really
tiny
bandwidth
makes
sense
or
not
probably
it
does,
but
you
also
may
not
have
the
compute
capacity
associated
with
your
network
hardware
to
pull
this
off,
but
it's
something
potentially
to
look
at
and
even
if
you
have
it
get
an
edge,
that's
end
here
you
could
maybe
somehow
utilize
these
smartnics
to
your
advantage
at
your
gateway
in
higher
tiers.
A: I know, back in the virtualization world too, the best practice was always to keep a separate physical network for your management layer if you really wanted that thing never to go down. The fact is that if you only have one network and something bad happened, you as the administrator might just be SOL and have no means to figure it out — all you know is it won't talk anymore, you haven't the slightest clue why, and you don't have an ability to fix it.
A: I think, actually, you know, that would make a great subject for a talk at some conference, if you ever wanted to do that — security in your containers, basically, or even resource governance, like solving the noisy-neighbor problem. Really, some of the things that solve noisy neighbor are good tools for solving malicious actors too, when you get down to it. So if you're going to bother to do it, you could probably do it in a generic enough way that you can.
E: Of course, there's the really hardware-powered switch — how you actually implement that. Normally you have this lifeline that's connecting all the hardware parts, and then when you press a button you can be sure that some kind of thing is triggered that shuts down everything instantly. I'm still not sure how you can incorporate that into Kubernetes, because it's this one wire that then, of course, needs to connect everything.
E
And
then
the
the
pro
of
your
system,
like
the
the
whole.
E
I
don't
need
to
interconnect.
Stuff
anymore
is
kind
of
gone,
but,
of
course,
by
like
when
you,
when
you
think
about
it,
makes
a
lot
of
sense
to
have
like
this.
This
yeah.
A: And for production things in cloud scenarios, that would normally be put behind a load balancer, and then you'd have three nodes on the other side of the load balancer — well, N nodes, with three being a common choice. But when you get down to a scenario of deploying Kubernetes out at edge — and I'm talking here about a whole cluster at edge — a lot of these things become problematic resource-wise. You're right, because even in a big cloud the load balancer is a single point of failure.
A
Unless
it's
some
kind
of
redundant
load
balancer
and
who
is
going,
you
know
there
may
be
some
people
who
are
challenged
with
the
cost
of
even
a
load,
balancer
much
less
a
resilient
one
outed
edge
and
can't
run
three
control
plane
nodes
anyway,
in
which
case
did
the
load?
Balancer
even
buy
you
anything
for
resiliency
I
mean
if
you
just
put
a
load
balancer
in
front
of
one
point,
a
single
point
of
failure:
api
server,
the
load
balancer
just
made
it
worse,
not
better.
A
Arguably
so
you
know
you're
load,
balancing
with
one
thing
behind
it
to
load
balance
too.
That
doesn't
quite
make
sense,
yeah,
so
no
load.
A
But
yeah
that
would
be
an
interesting
topic.
I
think
I
did
a
really
old
one
at
kubecon
china,
with
this
guy
michael
gash,
on
kubernetes
resource,
but
it
at
this
point
is
years
old
and
at
the
speed
kubernetes
moves
it's
way
too
stale
and
that
could
certainly
be
revisited
as
a
topic
and
it
was
really
interesting
back.
Then
I
learned
a
lot
just
working
with
michael
gash,
putting
that
together.
E
Yeah,
we'll
probably
put
in
a
chapter
about
that
in
my
phd
season,
so
I
could
just
kind
of
rip
that
out
and
make
a
presentation
out
of
that.
That's
like
somewhere
next
year,
probably
so,
yeah.
A
I
think,
in
terms
of
a
talk
or
an
interesting
blog
on
this
writing
even
me
as
a
reader
when
you
write
up
the
war
stories
of
what
how
this
can
go
wrong,
it's
kind
of
more
interesting
to
read.
You
know
for
some
reason,
it's
kind
of
like
watching
the
disaster
movie
or
something
as
opposed
to
the
lecture
on
you
know
these.
A: So, time check: we're nearing the top of the hour, so we're almost done. Does anybody have any last-minute things they want to throw in there to discuss?
A
Okay,
I'll
take
silence
as
no
so
as
usual,
even
without
an
agenda,
some
fascinating
things,
and
I
think
I've
got
at
least
an
hour
of
google
searches
to
go.
Look
up
some
of
the
things
like
the
the
radio
things
that
jerry
brought
up
and
and
I'm
going
to
continue
to
try
to
watch
some
of
these
amazon
re
invents
later
sessions
over
the
next
couple
of
weeks.
But
thanks
everybody
for
contributing
to
this
discussion
and
we'll
see
you
at
the
next
meeting.