From YouTube: CNCF SIG Network 2021-03-18
C: Okay, hey! Welcome, everybody. I put a link to the meeting minutes in the chat.

C: Well, good afternoon, Kenny. It sounds like you're having internet issues in your area.

C: Very good. Okay! Well, hey, listen, we don't need a bunch of corny jokes from me. Like usual, there's a full agenda today, so we've got a number of different topics to go through, but let's dive in. Ken Owens is here with me, co-chairing the CNCF SIG Network, for those that might not be familiar.
C: This particular time slot is used for both SIG Network and the Service Mesh Working Group and its initiatives, of which there are about four. Just for those that are unaware, SIG Network sometimes has lots of topics (it seems to go in fits and spurts) and sometimes not, so instead of creating a separate time slot, we've been using this one for the Service Mesh Working Group.

C: Those topics are somewhat light today, so it works out well, because we've got a couple of presentations. So, without any further ado, given the time: an update on GetNighthawk. Abhishek, do you want to brief us on the transpirings from two weeks ago?

D: Oh, yes, sure.
D: Right, everybody, I will be briefing you on the Nighthawk update. Let me share my screen.

D: I hope my doc is visible.

D: Oh, great, okay. I'll just share my whole screen. All right, cool. So last time when we talked, I discussed publishing build artifacts for the individual binaries as well as Docker images, and setting up a CI action to do so.

D: Right now it publishes the Ubuntu binaries, which are the different binaries of the Nighthawk project itself, like the client, server, and test-server binaries. I've done it for just Ubuntu right now, for testing purposes, and it builds fine and publishes these binaries as part of the release artifacts.
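The manual release trigger described here (a CI action run by hand, with the version and architecture passed in as parameters) can be pictured as a `workflow_dispatch` GitHub Actions job. This is only an illustrative sketch: the workflow name, input names, Bazel targets, and artifact paths are assumptions, not the project's actual configuration.

```yaml
# Hypothetical sketch of a manually triggered release workflow.
name: publish-binaries
on:
  workflow_dispatch:
    inputs:
      version:
        description: "Release tag to publish (e.g. v0.4.0)"
        required: true
jobs:
  build:
    runs-on: ubuntu-latest        # Ubuntu-only for now, as described above
    steps:
      - uses: actions/checkout@v2
      - name: Build the client and test-server binaries
        run: bazel build //:nighthawk_client //:nighthawk_test_server
      - name: Attach the binaries to the release
        uses: softprops/action-gh-release@v1
        with:
          tag_name: ${{ github.event.inputs.version }}
          files: bazel-bin/nighthawk_*
```

Automating the process could then be a matter of switching the `on:` trigger from `workflow_dispatch` to, for example, pushed tags.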
D: But I do have a couple of questions for the Nighthawk maintainers, if anyone is here. The first thing is, I want to know if there is any Dockerfile which... or, before that: is there anyone on the call who is from the Nighthawk project?

C: Sunku from Intel said that he has a conflict today, so he's not here, and Otto was positive on the progress.

C: Otto of Red Hat was positive on the progress, but I don't see him on the call.

D: All right, in that case I'll just save it for a private conversation with them, probably. The next point of discussion I wanted to raise is how we're going to automate this process. Right now we trigger this action manually by passing in a couple of parameters, like which version to release, which architecture, and so on. So how we plan to automate this process is going to be the next discussion point for the project, I suppose.
C: So I'm pasting that into this week's meeting minutes. I haven't done an explicit tally, but by the looks of it, image (or draft logo) number eight is winning out. We're missing the other GetNighthawk representatives on today's call, but anyone who is on the call is welcome to vote on which logo they think is befitting of the project.
D: No, I think that's pretty much it. I just wanted some input, but since we're missing the Nighthawk maintainers, I will save my questions for later. So, yeah, no more updates.
C: Right. Two other items that we've been tracking in the Service Mesh Working Group. One is Service Mesh Performance; it's sort of congealing, and it seems to be an appropriate time to propose it for the Sandbox. So Ken and Sunku and myself (and all the rest of you are welcome as well) have drafted this Sandbox proposal. It's a fairly important suite; these are the questions that come from the Sandbox proposal form.

C: Good. What else? Along with that, Meshery, as the canonical implementation of SMP and as the SMI conformance tool, will probably be submitted alongside it. It's been a long time waiting for that to come, and it's probably most appropriate to submit it alongside Service Mesh Performance. There isn't a draft proposal for it yet, though, so that needs to come together pretty quickly.

C: Okay, so for Service Mesh Working Group topics, I think that's the end of it. Any comments or questions?
C: Nice. All right, SIG Network. Just a quick reminder that Emissary-ingress, the project formerly known as Ambassador, is still out for public review, last time I checked anyway, so certainly they'll appreciate your feedback and your support: give it a plus-one, or give it a minus-one if that's appropriate. For the remainder of the topics, we have two presentations. The Submariner team is up to present, and they're thinking in and around a donation.

C: And then the next presentation is from Linkerd, which is currently at the incubation stage, is headed toward graduation, and wants to get some eyeballs and review there. So, with that, over to the Submariner team. Welcome, folks. Who do we hand off to?
A: I can start it off, and then I'll hand off to some different team members as we go. Great, so we have our slides there. Miguel's going to be giving most of the presentation. Four of us are Red Hatters working on Submariner here, and Saki is a very active user from Hitachi who's helping us a lot to figure things out. We have five Submariner people on the call, so if you have any questions, we should be able to answer them.

A: I just wanted to tee it up by mentioning that we're planning on donating Submariner to the CNCF, so we would love feedback throughout the presentation, or asynchronously later if you think of something else at an interesting time. I'll go ahead and hand off to Miguel, and then to Stephen for a demo, and then we'll come back for questions.
B: Okay, sorry, I was trying to unmute; I'm not very used to this. Okay. So, Daniel, do you want to start with the Submariner donation slide?
A: Yeah, I mentioned it briefly, but I'll say a few more words, I guess. We've been working for quite a long time to prepare Submariner to be donated to the CNCF: preparing our developer infrastructure to scale, making our user experience nicer, and getting all our ducks in a row for intellectual-property matters, licensing, and all that stuff. So we think we're in quite good shape there.

A: We have a document linked both in the slide deck and in the agenda, and I put it in the chat. It's the same thing that was just being shown: it copies the questions from the Google form, with our answers, and you're welcome to comment there too. This is one of our last stops, we hope, before submitting the donation.
B: Okay, so let me explain what Submariner is for the people on the call. The idea of Submariner is enabling direct network connectivity (layer-3 IP packets) between the pods and services of Kubernetes clusters, and it works by exposing a set of custom resources in a Kubernetes datastore. I will explain a little bit more about that.

B: You have the link to the website if you want to see more details on the architecture, how it works, and how it can be installed. We have a few quick-starts for different clouds and different Kubernetes flavors.

B: And different network plugins. You can deploy Submariner in different ways: we have an operator, we have Helm charts, and we have a command-line tool that really helps you with onboarding clusters to a cluster set, and with looking into the details of how the connectivity is working, or troubleshooting if there are issues.
B: Data-residency guidelines, if your data needs to live in specific locations, and many other use cases. It is really similar to service meshes, but more simplified in terms of how packets are handled: the idea is that the packets of the pods, when they are talking to each other or to services in other clusters, are always handled in the Linux kernel.

B: So this is a simplified picture: if you have two clusters, your pods will be able to talk to each other, and they will be able to discover each other via standard APIs that have been defined in the Kubernetes Multi-Cluster SIG. And we will be working on network policies, because that becomes increasingly important.
B: I have covered most of what we have here. The idea is that we try to be as agnostic as we can to the flavor of Kubernetes, which can be hard, because we try to do everything in the kernel, and also agnostic to the network plugin that you are using. But again, for some network plugins we will need to develop specific integrations, as we have already had to do for some of them.

B: You can deploy services across multiple clusters, you can load-balance between them, and they can be discovered using the standard APIs that have been defined in the Kubernetes Multi-Cluster SIG.
B: We use what we call a broker to exchange information about the participating clusters, the services that have been exported to other clusters, the endpoints, and the information about how to reach a specific cluster. So, as you see, each cluster needs at least one gateway.

B: It's just one of your Kubernetes nodes that you mark with a Submariner gateway label, and that one will become your gateway. You can have multiple ones; currently we do active-passive failover, with between three and ten seconds of failover time. The gateways become the connectivity point for the other clusters.
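As a concrete illustration of the labeling Miguel describes, marking a node as a gateway is just a node label. The sketch below uses the label key Submariner documents; the node name is a placeholder.

```yaml
# Abbreviated Node manifest: the label below marks this node as a
# Submariner gateway ("worker-1" is a placeholder name).
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    submariner.io/gateway: "true"
```

In practice the `subctl` tool mentioned earlier can apply this label for you when a cluster joins the broker.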
C: A quick question, if I could. I probably missed it, I was busy chatting, but the broker: is it a masterless broker, or is it headless? This isn't a single point of failure, is it?
B: Yeah, currently the idea is that your broker is expected to be highly available in the current design.

B: In the future we want to enhance this design to allow setting up multiple brokers, so you can do failover. Even in the case where your broker is supposed to be highly available, if something goes wrong, all the clusters can still move to a different broker. Also, we have made the design in such a way that even if the broker goes down, the connectivity remains: everything from the broker is replicated on all the participating clusters.

B: So the clusters will not be able to get information on new services, but they will be able to work with what they had. So we have a certain level of resiliency here, but we want to improve it.
E: Also, on the slide, you say you label an individual node to be a gateway engine. Does that dedicate that node to only being a gateway, or can I run other workloads and other application pods on it?
B: Yeah, so you can have a dedicated node. It is a standard Kubernetes node, so it can be dedicated if you configure taints and tolerations to not allow anything other than the Submariner workloads, or you can use any regular node.
B: Zones, okay. I think we described this... oh, no, okay. So, yeah: there is no impact on intra-cluster traffic. Intra-cluster traffic is not handled by Submariner and will follow its normal path. Traffic destined for other clusters will go through the gateway, and the idea is that we always preserve the source IP.
B: We also provide... I mean, this gets complicated if your clusters have overlapping CIDRs, for example for pods or services. So we have a special mode, which we need to iterate on in a new version but which is already working, that we call Globalnet. The idea of Globalnet is that we run a sort of supercluster IPAM that assigns IPs from a supercluster IP address space, so pods can communicate with other clusters, be recognized, and have their own IP address.
B: We use the multi-cluster Services API, which is now in alpha, from the Kubernetes Multi-Cluster SIG, and we have the concept of a cluster set, which means the following.

A cluster set is a group of clusters that have a high degree of mutual trust, normally administered by the same people, and it is assumed that namespaces of the same name in different clusters belong to the same project. That's a base assumption of this multi-cluster Services API.
B
It
means
that
if
you
are
exporting
one
service
in
a
in
a
namespace
in
one
cluster
and
you
export
the
same
service
on
the
same
name
space
in
a
different
cluster,
it
means
that
it
is
the
same
service
and
you
can
read
either
cluster
a
or
cluster
b,
and
it
should
not
matter
so
yeah.
This
is
like
a
foundation
of
the
multi-cluster
service
api
and
in
this
api
we
have.
We
have
two
core
objects:
one
is
the
service
export
and
the
other
is
the
service
import.
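As a sketch of the first of those objects: a ServiceExport is just a named marker object in the same namespace as the Service it exports. The API group and version below reflect the alpha MCS API mentioned above; the service and namespace names are placeholders.

```yaml
# Exports the existing Service "nginx" in namespace "demo" to the cluster set.
# The ServiceExport's name must match the Service being exported.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx
  namespace: demo
```

Applying this in one cluster causes a matching ServiceImport to appear in the other clusters of the set, as described next.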
A ServiceExport is something that you create to declare, "Okay, I want to export my service," and when you do that, the service becomes available in the other clusters in this form. Then we have more formats for headless services and StatefulSets, because there you need to address individual pods, but this is the simplest one. The ServiceImport is something that you will find in your cluster when another cluster has exported a service; it means your cluster has discovered that service.
B: It basically coordinates with a plugin that uses those ServiceImports to resolve DNS requests. So you need to introduce a hook in your kube-dns or existing CoreDNS to send queries for the `clusterset.local` domain to our service.
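The DNS hook described here amounts to a forward rule in the cluster's DNS configuration. A minimal CoreDNS sketch, with a placeholder IP standing in for the Submariner (Lighthouse) DNS service:

```
# Corefile fragment: forward cluster-set queries to the Lighthouse DNS server.
clusterset.local:53 {
    forward . 10.96.50.10   # ClusterIP of the Lighthouse DNS service (placeholder)
}
```

All other domains continue to resolve through the cluster's normal DNS path.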
B: ...a file is generated which allows `subctl` to create credentials for the new cluster, and then connect it to the broker and deploy Submariner.

B: We have tested it with OpenShift, as the main one on the Red Hat side, and also with OVN-Kubernetes, and we know that some people are also using it with Calico. So far, those are the ones that we have under control; also, yeah, GKE.
B: So, yeah, that is one part. We have been working with the Kubernetes Multi-Cluster SIG to define those APIs; they are implementing them at Google.

B: And, yes, we started implementing them for Submariner, trying to make something super agnostic, and hopefully there will be more people implementing this API.
F: All right. Somebody was asking in the chat how the agent was set up, and yes, DaemonSets. So in the demo we've got, let me show you, three clusters set up: one running on GKE and two running OpenShift on AWS. Miguel mentioned that the broker uses a number of CRDs to do its work, and basically that's all it is.

F: There's no code running in the broker cluster for Submariner; it's just data storage. The first CRD is Clusters, which lists all the clusters that have joined the cluster set, and once OpenShift loads the information, we'll be able to see them.

F: There we have three: like I said, GKE, OCP A, and OCP B. On those clusters we've set up (well, Miguel set up) Rocket.Chat, so if you want to go and play, I'll paste the links.
F: Right, okay, go for it. While Miguel's pasting the links in, I can continue showing more of the CRDs, perhaps. That was the Clusters one, which gives the list of clusters; there's not all that much information in there. Then the different clusters connect using Endpoints.

F: Those are shown here, and they give details of the actual IP addresses used to connect to the other clusters, the backend that's being used, and the subnets that are managed.
F: So that's really how the connectivity appears from the administrator's point of view. Services use the MCS CRDs; well, you'll see in the list here.

F: We've got our own legacy ones as well, but we've migrated over to the multi-cluster ones, so you see service exports in two versions here, Lighthouse and `multicluster.x-k8s.io`. This isn't the exporting cluster, so we won't see anything in ServiceExports, but we will see in ServiceImports that we have a MongoDB service that's been imported here.
F: There we have it: Rocket.Chat in the default namespace. Like Miguel said, this means that from all of the clusters that have imported the service, which is all of the clusters in the cluster set, you can look it up.

F: You look up `rocketchat-mongo.default.svc.clusterset.local` and you'll get one of the services that's accessible across the cluster set. I can demonstrate that quickly.
F: The service itself, we'll get a different... well, we'll get a service in one of the...

F: But we do prefer the local cluster if we can. So if the service is running on multiple clusters in the cluster set and you query the service from... oh, it's...

F: Yeah, okay, so I'll set up another one just so that we can illustrate. I'll start nginx.
F: We prefer the local cluster: if you have a service that's set up on multiple clusters, you'll get the local one back. If you query from a cluster that doesn't have the service locally at all, then you'll get...

F: I know, I'm just running a disappointing nginx demo right now. So this should appear... let's go back to the CRDs.
F: Yeah, so if you have the service available across multiple clusters, then you'll get it back in round-robin fashion. This is a bit different versus, if you've played with the same MCS API on GKE, that implementation relies on cluster-set IPs, where the same IP address leads to different instances; we rely on DNS instead. That's perhaps the most significant difference between MCS API implementations.
F: There again, I should have tried that before deploying it, so that it was obvious I wasn't cheating. This one here is on a different cluster and has been exported, but I can also show it running on this same cluster, so let's get nginx set up here on cluster A as well.
F: So that's the deployment perspective. For administrator purposes, we also have a number of metrics that are published, and if you're running on OpenShift, these get set up automatically.

F: For example, we track the amount of traffic that goes over the various connections. Here I'm on OCP cluster A, connected to OCP cluster B and the GKE cluster, and because the MongoDB database is on GKE, I would expect most of the traffic to occur between OCP A and GKE, and a lot less traffic to B. And that's what we see here: the blue line goes to GKE.
F: You can see that in the labels here, remote cluster GKE, and then very little going to B at all. We have a number of other metrics, like the number of gateways.

F: So that's just one: this is the number of gateways that are set up on the local cluster, and the number of connections as well, with their status.
F: What else do we have? The latency, that's an interesting one. We track the latency to each of the clusters that a given cluster is connected to, and so, as you'd expect, the other AWS cluster has very low latency, and the GKE one, since it's further away, has higher latency, but they're both pretty stable.

F: We also track... so, if we had Globalnet, which isn't the case here, we have some Globalnet metrics, because, since a big pool of addresses is being used, you obviously need to pay attention to how much of it is actually consumed.
F: All right. In the next version we'll have service-discovery metrics as well, but not yet. So I think that's about it for what I had to demonstrate.
C: ...if we can't get to them all, you know, then my curiosity will wait, but yeah, one or two of them, if you guys care to pick one or two.
F: Right, yeah. I like the question about incorporating this into Kubernetes.

F: That's piqued my interest; I'm wondering what it would take, but that's probably a bigger discussion.
C: Very good. And not a suggestive question per se, but just, yeah.
A: I was about to start repeating them. There was one about brownfield deployments for Globalnet: can we step into a brownfield environment and set up Globalnet to resolve overlapping CIDRs?
F: If you know that from the start, then you can set up Submariner from the beginning with Globalnet and it will work fine. What we don't support yet is: if you set up Submariner without Globalnet, and then you try to join a cluster that has an overlapping CIDR, that won't work. We can't add Globalnet post facto, once Submariner has been set up, but it's easy enough to just redeploy Submariner when that happens.
C: Good, good, good. There's a bunch of... this is a fantastic presentation, guys. I've got questions that would last us the next hour. Hey, it's really fun sitting on this side of the table, you know, hammering you with questions, pelting you with questions.
F: Yeah, so the last question, which is quite an important one, I think: does all intra-cluster traffic transit the cable driver too? And the answer is no: intra-cluster traffic just uses the normal Kubernetes networking layer; it doesn't go through the gateway. Only inter-cluster traffic goes through the tunnels.

F: "What does a Route Agent upgrade look like in terms of disruption to active inter-cluster communication by pods on the node?" Sridhar, you might want to field that one, perhaps.
G: Yeah, so basically the Route Agent runs as a DaemonSet, and it programs some routing rules and creates some Submariner tunnel interfaces. When you're upgrading, depending on which version you're upgrading from, we generally don't modify that configuration unless it's required. So, ideally, we should not expect any disruption to the inter-cluster traffic, unless we are really modifying some configuration on the respective hosts.
B: Yeah, and everywhere, as much as we can, we try to leave the data plane configured and working. So if you bring the pods down, or you are updating them to a newer version, the data plane will keep working while that is happening. We don't expect disruption; we test for failovers and we test by hammering the Route Agents.

B: We have a pretty big set of end-to-end tests that we keep improving with new ideas. One thing we don't test for yet, for example, is whether there is a small window with packet drops; I think that would be interesting to add.
C: Thank you, guys, that was great. Oh, Mr. Morgan, and Linkerd is up for graduation.
H: Yeah, thanks for having me. So, yep, we're up for graduation. Linkerd has actually been... I think it was the fifth-ever project accepted into the CNCF, before there was even a sandbox phase, back when that stage was called inception.

H: So I have a couple of slides that I can run through, giving an overview of the project and its adoption, but honestly, I'm also here to just answer questions. I think part of the graduation process is having the CNCF SIG Network review the proposal, so if there's anything I can provide that would be helpful for those purposes, I am ready to provide it here, and obviously offline as well.

H: So, do you want me to do a quick overview, or is there anything you want to dive into specifically?
C: Half comment, half question. Again, it's really easy to sit on this side of the table and ask questions... or, let me start by saying, do you want to...
C: Let me just make a statement and say kudos on the establishment of a steering committee. As the project matures in functionality, matures in governance, matures in adoption, and is being used in ways that you didn't imagine, I suspect, what a self-directed, self-initiated, healthy step. Anyway.

C: And then I'll follow up with other comments, and boy, I've made it sound super ominous, and it's not. So, yeah, William, if you take us through a couple of slides, that'd be great.
H: Sure, let's see. Can you see a giant Linkerd logo somewhere? Yes? All right, that's good, because I cannot. Where is it...
H: Okay, there we go. All right, so I'll just give you a very brief rundown. There's a lot to say, but Linkerd is a service mesh. We have a very strong focus on being light and fast and security-centric, and at this point we've been in production for over four years at companies around the world, and we've gone through a bunch of iterations on the project.

H: We have a very healthy community, primarily in the Slack channel, a whole lot of GitHub stars and things like that, over 200 contributors (I just counted up to 200), and we're doing near-weekly edge releases, so we try to get code out in front of early adopters as rapidly as possible. And, of course, we have open governance and a neutral home in the CNCF.
H: These are some of the logos of companies currently using Linkerd. Some of them I know a lot about, because they've told us a lot; some of them I don't know anything about, because we only know through external evidence that they're using Linkerd, and they don't want to talk to us. It's always part of the fun of open source.
H: Okay, so what does Linkerd do? I think this is very similar to what every service mesh does. There are three big categories: a set of features around observability, a set of features around reliability, and a set of features around security. For Linkerd, our goal is to deliver those features to you in a way that minimizes the operational pain associated with them. We believe the service mesh doesn't have to be complicated; in fact, it can be pretty simple to operate.

H: It's not a trivial piece of technology, but the operational component can be simple, and we do a lot in our design to reduce that operational overhead. That's the primary driver, and I think it's also part of what makes Linkerd a little unique in the service mesh space.
I
won't
go
too
much
into
this,
but
you
know,
I
think
the
value
of
the
service
mesh
you
know
is
not
really
the
features
that
it
brings,
but
it's
in
the
fact
that
it
delivers
those
features
at
the
platform
level.
These
are
features
that
historically,
we've
had
to
get
in
the
application,
even
though
they
are
effectively
platform
features.
So
the
real
audience
of
linker
d
are
the
sres
the
platform
owners.
You
know
the
folks
who
are
operating
kubernetes.
H
The
developers
are
much
less
exposed,
and,
ideally
you
know
they're
often
not
exposed
at
all
to
the
service
mesh.
So
what
the
service
mesh
is
giving
you
what
linkedin
is
really
solving
for
you.
You
know
it's
not
really
giving
you
retries,
it's
giving
you
retries
in
a
way
where
you
can
get
that
at
the
platform
level,
and
you
don't
have
to
beg
the
developers
to
do
that
same
thing
with
mtls.
H: Okay, let's talk a little bit about our design philosophy. We're really trying to follow this idea of minimalism and do just the bare minimum to give you a secure and operationally simple service mesh. The goal is that if you have a functioning Kubernetes application and you add Linkerd to it, the application should continue functioning.

H: We can do that in almost every case, and it took quite a feat of engineering to get there, but that's a really strong belief for us. Ultralight, of course: bare-minimum resource cost, and latency as well. A service mesh works with lots of user-space proxies, so you're going to pay a cost; we try to minimize that cost as much as possible. Make it simple (I'll talk a little bit about how we do that), and then security first.
H: So, whenever possible, we want security to be the default setting, not a thing that you have to configure, not a thing that you have to enable later. The control plane is written in Go and sits around 200 megs of RSS.

H: It can optionally collect metrics data, in which case it can use a lot of memory, depending on how much data you're collecting. The data plane is these little Rust-based proxies. We call them micro-proxies because they really are very different from something like Envoy or NGINX. I've written a lot over the years about Linkerd; there's a historical article you can read on InfoQ. We actually started out with a JVM-based Linkerd written in Scala and went through a pretty big rewrite, starting in 2018, to get to this Go-and-Rust combo.
H: Okay, I've got maybe one or two more slides, and then I'll be done with the whirlwind tour.

H: Like most service meshes, there's a set of control-plane components that sit off to the side, and then the magic is in these little micro-proxies that we inject inside the pods, with transparent wiring. All TCP communication goes through those proxies, which means that whenever service A talks to service B, or instance A talks to instance B, it's going through not one but two proxies.
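As a concrete sketch of how pods get opted into that injection: Linkerd's proxy injector watches for an annotation on the pod template. The manifest below is abbreviated (the workload name is a placeholder, and the required selector and container fields are omitted), but the annotation key is the one Linkerd documents.

```yaml
# Abbreviated Deployment: the annotation asks Linkerd's admission webhook
# to inject the linkerd2-proxy sidecar into each pod of this workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # placeholder name
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
```

The same annotation can also be set on a namespace to mesh every workload created in it.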
H: That means those proxies have to be very, very fast, and it means you're going to have a lot of them, so they need to be very, very small. So Linkerd uses this micro-proxy, which is called, simply, linkerd2-proxy. It's not really a general-purpose thing; it's very tightly coupled to Linkerd itself.
H: It's built on top of this amazing Rust network-library ecosystem, which is general-purpose, and I believe this is probably one of the most technologically advanced projects in the entire CNCF landscape, because we are sitting right on top of this very fast-moving and very exciting Rust asynchronous networking ecosystem. The choice of Rust lets us avoid an entire class of memory vulnerabilities. I won't get too much into that, but that's really nice, since what's going through this data plane is things like customers' health information and PII and so on. We compile down to native code; we do regular third-party security audits, which we pass, thankfully; and, like I said, it's a very modern networking stack. linkerd2-proxy is part of the Linkerd project, so it's open source, it's audited, it's up on GitHub, but it's a pretty different approach from a general-purpose proxy. So, here's the goal for us.
H: You should not have to become an operational expert in linkerd2-proxy; you should become an operational expert in Linkerd, and the proxy should, as much as possible, be an implementation detail. There's lots more to say about that, and we have a security philosophy as well, but I'm going to stop here. I see there are a couple of questions and we're coming up on time, so I'll stop the presentation here and start working through some of these questions.
H
Okay,
plugable
ingress.
H
Yeah,
so
right
now
right,
it's
not!
We
don't
have
our
own
ingress.
We
work
with
every
other
ingress
that
we
can
possibly
work
with.
So
that's
part
of
our
philosophy
of
keeping
this
as
minimalist
as
possible.
There
are
many
many
good
ingress
controllers
out
there
that
have
a
huge
feature
set,
none
of
which
I
want
to
implement
and
none
of
which
are
like
service
mesh,
specific,
so
yeah
we
work
with
that.
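(For readers following along: the usual way Linkerd pairs with a third-party ingress is to mesh the ingress controller's own pods by injecting the proxy sidecar. A hedged sketch; the deployment name and namespace below are made up for the example, and the manifest is abbreviated to the relevant part.)

```yaml
# Illustrative Kubernetes manifest fragment: opt an existing ingress
# controller's pods into the mesh via Linkerd's inject annotation.
# "my-ingress" and "ingress-ns" are hypothetical names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingress
  namespace: ingress-ns
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled   # Linkerd adds its proxy sidecar
```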
C
Totally does, oh yeah. Actually, I mean, that's a great philosophy. I mean, it's a winning philosophy. You don't need my commentary, but I'll give it anyway: yeah, it's a winning philosophy. I mean, the vast majority of projects, individuals, I think, tend to make the other choice, so this is refreshing.
C
Yeah, yeah, yeah, it's a sign of strength, actually, I think. Actually, I misinterpreted it when I initially saw it, thinking, oh, that perhaps Linkerd's micro-proxy is pluggable itself, or has extension points, and that's not currently the case.
H
Yeah, you know, if you're talking about things like Wasm, that is on the tentative roadmap. I think there is value to that, but we have not tackled it yet. We did something like that in the 1.x days, when we were on the JVM: we had this idea of plugins. It was cool because people could extend Linkerd to do all sorts of application-specific stuff, but operationally it became very complicated very rapidly.
H
So
there's
a
little
there's
some
friction
to
that
idea
currently
on
the
team,
but
not
not
necessarily
forever
sure.
C
Well again, just some general comments: in accordance with the graduation criteria, and even if you're not familiar with the specifics of that criteria, I mean, in every which way, Linkerd v2 just hits those out of the park.
C
Yep
one
of
the,
but
in
the
other
way
yeah.
That
was
what
I
was
going
to
mention
the
other
in
the
other
ways,
the
fact
that
it
is
a
v2,
in
fact
you've
taken
learnings
from
your
v1.
It's
almost
the
same.
C
If
you
look
at
some
of
the
other
projects
in
the
space
and
not
just
the
space,
but
the
cloud
native
space,
it's
a
large
or
significant
sign
of
maturity,
of
the
set
of
knowledge
and
to
have
to
do
a
re-architecture
to
like
have
taken
all
those
learnings
from
all
the
like,
what
a
significant
benefit
that
is
to
any
of
the
users
of
the
project
and
the
fact
that
it's,
the
fact
that
you
you
the
project
itself
and
the
principles
by
which
it's
being
designed
are
are
in.
C
Like
that
strength,
that
you
were
just
talking
about
about,
avoiding
like
acknowledging
that
there's
other
ingresses
to
use
and
not
reinventing
that
particular
wheel.
Is
it
also,
you
know
or
identifying
general
purpose
things
versus
purpose-built
things
and-
and
you
know
identifying
just
a
few
ultra
fast
ultralight.
You
know
just
kubernetes
native
kubernetes,
first
like
hanging
on
to
a
few
of
those
design
principles
throughout.
C
You
know
throughout
v2s,
you
know
from
from
conduit
to
now.
Well,
you
makes
it
for
anyone
who
pays.
It
pays
attention
and
sort
of
reviews
those
different
projects.
It
becomes
quite
clear
that
just
how
those
manifests
and
in
terms
of
a
user
experience
in
terms
of
well
in
terms
of
a
lot
of
things
like
time
to
value
and
there's
some
amount
of
boringness
is
good
and
the
simplicity
facilitates
some
some
boringness.
If
you
will,
I
think
boring
is
the
wrong.
C
It has a negative connotation; rather, stable is more appropriate, unsurprising. Yeah, on that, this is my last question, I know we're five minutes over. So, reflections for you as you balance, like we were just talking about, Wasm being hot and interesting, and yet, you know, from prior experience, the friction, and also, you know, it expands the scope of the work to be done.
H
Yeah, we have a really strong opinion here, and it's something that took a while for us to develop, but we are extremely user-focused. We spend as much time with our users as we can. We look at the things that are causing them pain, and what you realize is that, like, 95-plus percent of the time, it's not "I don't have, you know, a data-plane plug-in with Wasm."
H
It's
like
I'm
running
out
of
you
know
space
in
prometheus
because
it
just
takes
all
these
metrics,
and
I
have
no
idea
how
to
control
that.
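(An aside for readers: one common way to get that Prometheus metrics growth under control is to drop high-cardinality series at scrape time with `metric_relabel_configs`. A hedged sketch; the job name and metric pattern below are purely illustrative, not Linkerd's actual metric names.)

```yaml
# Illustrative prometheus.yml fragment: discard a noisy histogram
# series before it is stored. "mesh-proxies" and the metric name
# are hypothetical examples.
scrape_configs:
  - job_name: mesh-proxies
    metric_relabel_configs:
      # Match on the metric name and drop matching series.
      - source_labels: [__name__]
        regex: "request_duration_seconds_bucket"
        action: drop
```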
It's like, that's the stuff that actually causes pain for people, and I think being hyper-focused on that, and, you know, trying to map that to a concrete thing you can do in as short a time as possible, is the one skill that we've, I mean, we're still working on it, but really trying hard, that we've relied on to guide our feature work.
C
My apologies on the, on the overshoot on time. Great presentations today; you know, for my part, I really appreciate the people building them. So, all right, very good, we'll see you in a couple of weeks. That's a, that's a wrap, all right. Thanks for.